Hacker News | snek_case's comments

I feel like this describes most products in the AI space lately: more marketing fuzz than substance. It's hard to figure out what the thing even does.

You're definitely going to get people using LLMs running on 8x $50K GPUs in a datacenter to do the job of a bash script.

I already see people using an agent to write a git commit

What's wrong with that? The agent session has all the business context, knows what changed, and knows how we verified it. It takes 5s to turn that into a PR description vs 10-100x that by hand.

Because it's not perfect and it still fabricates things from time to time.

I have coworkers who do this and it sucks to be on the receiving end of. It means I now need to read every commit message with skepticism.

It's an example of using AI to save energy for yourself while simultaneously increasing the energy expenditure of your coworkers.


100 x 5s is nearly 10 minutes. If it takes 10 minutes to write a PR, there may be a "skill issue". The bottom end of this, 1-2 minutes, makes more sense.

How much productivity do we really need? Even at a senior dev payscale, 2 minutes is like a dollar. The tokens and calls involved in a 5s commit could close in on 10¢, depending on your contract, the model, etc., and that's at today's costs. Do remember that my salary is on top of the LLM's rates, so if the 5s response takes 10s for me to prompt, that's 15s (10 for me, 5 for the LLM) that the boss is paying for.
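The arithmetic above can be sketched out. All the rates here are assumptions pulled from the comment itself (a dollar per 2 minutes of dev time, ~10¢ per LLM call), not real pricing:

```python
# Back-of-the-envelope comparison of hand-written vs LLM-generated commit
# messages. Every rate below is an assumption, not a quoted price.
DEV_HOURLY = 30.0                          # assumed dev rate implied by "$1 per 2 minutes"
manual_cost = DEV_HOURLY * (2 / 60)        # ~2 minutes writing the message by hand
llm_call_cost = 0.10                       # assumed token cost of one 5s commit call
prompting_cost = DEV_HOURLY * (15 / 3600)  # 15s of paid dev time prompting and waiting
llm_total = llm_call_cost + prompting_cost

print(f"manual: ${manual_cost:.2f}  llm: ${llm_total:.2f}")
```

Under these assumptions the LLM route is cheaper per commit, but not by an order of magnitude, which is the comment's point: the savings are marginal while the downstream review cost is unpriced.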

This starts to feel like a billionaire eating ramen noodles just so he can reach his second billion dollars.

Where I work our contract limits API calls, so doing this could result in not being able to use the model when I need it later for something more sophisticated (planning, debugging etc.) than using tooling I'm paid to already know.


I'm not even talking about the description, but "commit this to git with the description x" type prompts.

Probably constrained by training resources. It's much easier to experiment with a smaller architecture. You may need many training runs to figure out hyperparameters, for example. If each run needs multiple GPUs for a week, the cost adds up quickly. I think it makes a lot of sense to start small.
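To put a rough number on "the cost adds up quickly": a minimal sketch, assuming hypothetical cloud GPU pricing and a modest sweep (none of these figures come from the comment):

```python
# Hypothetical cost of a hyperparameter sweep at small scale.
# All rates and counts below are assumptions for illustration.
GPU_HOURLY_RATE = 2.50   # assumed cloud price per GPU-hour, USD
GPUS_PER_RUN = 8         # assumed multi-GPU training run
HOURS_PER_RUN = 24 * 7   # one week per run, as in the comment above
NUM_RUNS = 20            # e.g. a modest random search over hyperparameters

cost_per_run = GPU_HOURLY_RATE * GPUS_PER_RUN * HOURS_PER_RUN
total = cost_per_run * NUM_RUNS
print(f"${cost_per_run:,.0f} per run, ${total:,.0f} for the sweep")
```

Even at these toy numbers a 20-run sweep lands in the tens of thousands of dollars, which is why prototyping on a smaller architecture first is the sane default.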

I've always thought of myself as more "centrist" (feel free to make fun of me), but seeing so many tech CEOs cheer for layoffs and the destruction of the job market has been a bit of a wake-up call. So has being confronted with the sheer idiocy of these people. They are making hundreds of millions of dollars a year, but they barely understand the tech they are cheering for. They act as though being broadly "bullish on AI" and overly enthusiastic about its short-term potential is some kind of visionary stance, when in fact they are just repeating the same ideas as every other idiot in the Silicon Valley VC bubble.

My personal bet is that in the medium term, there will be a reversal of the idiotic belief that you can immediately lay off developers because of LLMs. If your developers are more productive because of LLMs, you still have an advantage by having more developers than the competition. There's also a lot of institutional knowledge that's simply not documented. Fire key people and you can cripple your organization.

In the longer term, I think AI will eventually take jobs, and unfortunately, it will have major negative societal impact. I doubt that our governments will be proactive in trying to anticipate this. They will just play damage control. There's probably going to be an anti-AI social movement. You'll have the confluence of more and more disinformation and AI slop online along with more and more job loss. There are probably going to be riots. Some people think UBI is inevitable. I think the problem is that if the government puts UBI in place, they will only give you the minimum necessary so that you don't starve. Just enough to afford to rent a bedroom, eat processed food and stay online all day.


> imbue it with the intrinsic desire to keep humans around, doing human things.

It's not the AI you have to convince, it's your government and the people running tech companies. Dario Amodei was cheering for AI to take all programming jobs (along with the others). If that happened, it would be an unmitigated disaster for millions of people. Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.

> the political will to stop AI development

The reason that's not likely is that it's an arms race. You stop AI research here, but how can you trust that China and Russia will do the same? Unlike nuclear bombs, the potential harms are less tangible.


> Imagine a student who comes out of a CS major with tons of student debt. How much sympathy does Dario feel for this person? Getting him to STFU would be a good first step.

I don't need to imagine this student; I'm friends with some who are going through this right now. They graduated almost a year ago and haven't found work yet. One of them jokes about suicide often and I don't know how to help him.

The social contract between labour and capital has been fraying for a long time, but it is near breaking now. It's going to get worse, maybe a lot worse, before it gets better. If it ever does get better.


Surely a highly innovative product that will sell in high volumes /s

Mostly water, actually.

Typical. You know they pump the chickens at the grocery store too.

That's the direction the field is already going with "agents". People want autonomous AI agents that are capable of acting independently and that have more and more capabilities. For example, something like Claude Code, but acting as a sidekick that is constantly running and able to act without being prompted. That's what people are imagining when they talk about teams of agents: you act as a manager, while your coding agents are off working on various features and only check in periodically.

First, believe it or not, 3 years is not that long. It's also not a given that LeCun was given the resources he needed to work on this tech at Meta. Zuck wanted another Llama.

Second, AMI Labs just secured a billion in funding, and while that's a lot of money, it's a fraction of the yearly compensation they are paying Wang. Big tech companies are throwing tens of billions at doing the same thing, just on a bigger scale. Why not try something else once in a while?


He has coauthored dozens of papers on the same subject.

This means sponsorships, millions spent on training etc.


LLMs are super efficient at generating boilerplate for lots of APIs, which is a time-consuming and tedious part of programming.


> LLMs are super efficient at generating boilerplate for lots of APIs

Yes they are. This is true.

> which is a time consuming and tedious part of programming.

In my experience, this is a tedious part of programming which I do not spend very much time on.

In my experience LLM generated API boilerplate is acceptable, yet still sloppier than anything I would write by hand.

In my experience LLMs are quite bad at generating essentially every other type of code, especially if you are not generating JS/TS or HTML/CSS.
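For concreteness, a minimal sketch of the kind of API boilerplate this thread is talking about, assuming a hypothetical JSON `/users` endpoint (the endpoint, field names, and `User` type are all made up for illustration):

```python
# Hypothetical example of repetitive API-client boilerplate, the kind of
# code LLMs generate quickly: a thin wrapper around an imaginary endpoint.
import json
import urllib.request
from dataclasses import dataclass


@dataclass
class User:
    id: int
    name: str
    email: str


def fetch_user(base_url: str, user_id: int) -> User:
    """GET /users/{id} and map the JSON payload onto the dataclass."""
    with urllib.request.urlopen(f"{base_url}/users/{user_id}") as resp:
        payload = json.load(resp)
    return User(id=payload["id"], name=payload["name"], email=payload["email"])
```

Multiply this pattern across dozens of endpoints and you get the tedium the parent comment describes; whether that tedium dominates your week is, as the reply notes, another question.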


> They are aggressively manipulating social media with astroturfed accounts, in particular this site and Reddit.

