
I have the feeling that, after the OpenAI fiasco, hard times are coming for the kumbaya type of AI researcher at BigCo who prefers to openly share all research, doesn't care about commercialisation, and quietly wants to slow down the AI train out of safety considerations.


The last 10 years of deep learning progress are all thanks to Open Source and Open Research. 95% of LLM progress in the past year is also thanks to open models, which allowed the field to work on things like QLoRA, long context lengths, RAG, and so on (a minimal sketch of that kind of open-model work follows this comment).

Big Cos benefit the most from this, since they have the resources and data to scale these methods up and a large user base to deploy them to.
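
A minimal sketch of what such open-model work can look like, assuming the Hugging Face transformers, peft, and bitsandbytes libraries; the model name and hyperparameters are illustrative placeholders, not anything specified in this thread:

    # Minimal QLoRA-style sketch: load an open model in 4-bit and attach low-rank adapters.
    # Model name and hyperparameters are illustrative placeholders.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer, BitsAndBytesConfig
    from peft import LoraConfig, get_peft_model, prepare_model_for_kbit_training

    model_name = "meta-llama/Llama-2-7b-hf"  # any open-weights causal LM

    # 4-bit NF4 quantization with double quantization, as described in the QLoRA paper
    bnb_config = BitsAndBytesConfig(
        load_in_4bit=True,
        bnb_4bit_quant_type="nf4",
        bnb_4bit_use_double_quant=True,
        bnb_4bit_compute_dtype=torch.bfloat16,
    )

    tokenizer = AutoTokenizer.from_pretrained(model_name)
    model = AutoModelForCausalLM.from_pretrained(
        model_name, quantization_config=bnb_config, device_map="auto"
    )
    model = prepare_model_for_kbit_training(model)

    # Only the small low-rank adapters on the attention projections are trained;
    # the quantized base weights stay frozen.
    lora_config = LoraConfig(
        r=16, lora_alpha=32, lora_dropout=0.05,
        target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM",
    )
    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # typically well under 1% of total parameters

The point being: none of this tooling has anything to work with unless open weights exist to load in the first place.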


It seems that startup money has poisoned this culture anyway. Every week I see papers come out that are not necessarily fraudulent, but that contort themselves to toot their own horn for yet another framework or GPT-4-beating finetune.

You can't trust the ML papers anymore.


It's a failure mode for arxiv, I feel.


How? I think it's doing what it's seeking to do: it's a place for pre-publications to be shared.

As far as I know, arXiv is not a journal. It doesn't reject papers and never intended to.

You can find many near-crackpot (meta)physics theories on it. You only need someone already on the platform to invite you in order to publish.

The only rejection criterion seems to be that your paper needs to be related to the scientific/academic world (e.g., not your grandma's cookie recipe).


These AI whitepapers are getting increasingly close to cookie recipes.

You're right, it's within the charter of what arXiv is meant to be, but I don't think that's how a lot of people think about it.


So not being an absolutely amoral, predatory megalomaniac is labeled as being a "kumbaya type" now?

As tech gets ever more potent, a tiny subset of psychos will be the downfall of humanity, and it's especially sad when most of the world just wants to live a happy life with their families, not take over the world like these ghouls.

I miss the days of actual counterculture among hacker types: doing stuff for the benefit of the human race or one's own community.


It's a strange characterization to say that AI researchers who openly share all their research want to "slow down the AI train", when it appears much more likely that the people developing in silos are the ones slowing overall progress.


... while pulling down $500K/yr.


What fiasco?


What kinds of hard times? Why?


This is a stupid straw man. At its most generous face value, you’ve said nothing. At worst you insulted a large swath of people.



