
It's incredible.

I've been experimenting with running local LLMs for nearly two years now, ever since the first LLaMA release back in March 2023.

About six months ago I had mostly lost interest in them. They were fun to play around with but the quality difference between the ones I could run on my MacBook and the ones I could access via an online API felt insurmountable.

This has completely changed in the second half of 2024. The models I can run locally took a leap in quality - they feel genuinely GPT-4 class now.

They're not as good as the best hosted models (GPT-4o, Gemini 1.5 Pro, Claude 3.5 Sonnet) but they're definitely good enough to be extremely useful.

This started with the Qwen 2 and 2.5 series, but I also rate Llama 3.3 70B and now Phi-4 as GPT-4 class models that run on my laptop.

I wrote more about this here: https://simonwillison.net/2024/Dec/31/llms-in-2024/#some-of-...
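
For anyone who wants to try this themselves, here's a minimal sketch using the Ollama Python client - the model tag and prompt are just illustrative, and it assumes you've already installed Ollama and pulled a model:

    # Minimal sketch: chat with a locally running model via Ollama.
    # Assumes the Ollama daemon is running and the model was pulled
    # beforehand, e.g. `ollama pull llama3.3`.
    import ollama  # pip install ollama

    response = ollama.chat(
        model="llama3.3",  # illustrative tag; swap in qwen2.5, phi4, etc.
        messages=[{"role": "user", "content": "Why run LLMs locally?"}],
    )
    print(response["message"]["content"])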



I'm in complete agreement with your more recent timeline piece (the negative one), and as a younger user (a 22-year-old student) I'm actively relocating this year to somewhere slightly more rural, with a focus on combined physical/knowledge work, to secure a good quality of life - almost entirely because of how fast our timelines are.

A 'word calculator' this effective is the best substitute we have for a logic calculator. And the fact that it's enough in 90% of situations is as terrifying as it is transformative, as is the fact that no one is awake to it.

Exponential power scaling in an unstable world feels like it only makes it exponentially more unstable though.


I should emphasize that I really don't think the dystopian version of this is likely to happen - the one where "AGI/ASI" puts every human out of work and society collapses.

Human beings have agency, and we are very good at rolling with the punches. We've survived waves of automation for hundreds of years. I'm much more confident that we will continue to find ways to use these things as tools that elevate us, not replace us.

I really hope the dystopian version doesn't come to pass!


> We've survived waves of automation for hundreds of years. I'm much more confident that we will continue to find ways to use these things as tools that elevate us, not replace us.

The difference with past technological breakthroughs is that they augmented what humans could do, but didn't have the potential to replace human labor altogether as AI does. They were disruptive, but humans were able to adapt to new career paths as they became available. Lamplighters were replaced by electrical lighting, but that created jobs for electrical engineers. Carriage drivers were replaced by car drivers; human computers by programmers, and so on.

The invention of AI is a tipping point for technology. The only jobs where human labor will be valued over machine labor are those that machines are not good at yet, and those where human creativity is a key component. And the doors are quickly closing on both of those as well.

Realistically, the only jobs that will have some form of longevity (~20 years?) are those of the humans who build and program AI machines. But eventually even those will be better accomplished by other AI machines.

So, I'm really curious why you see AI as the same kind of technology we've invented before, and why you're so confident that humanity will be able to overcome the key existential problems AI introduces, which we haven't even begun to address. I don't see myself as a pessimist, but can't help noticing that we're careening towards a future we're not prepared to handle.


As many forums say, other tech inventions replaced the horse, not the rider. With AI, it's the rider being replaced - that makes it a unique technology that doesn't compare to anything previously introduced. Other forms of technology typically enabled use cases that didn't seem possible (e.g. electricity, cooking food faster, flying), whereas this one, at present, is just about making existing use cases more efficient and removing the need for labor. As many non-techies put it: other than doing my assignment/email/etc., what benefit does it have for my daily life beyond threatening some jobs and generating some worthless online content?

The cost/benefit for the labor/middle/low classes is at best low right now. I define those classes as anyone who needs to keep trading time to survive as an ongoing concern, even if they have some wealth behind them.

I think a future where any form of meritocratic society gives way to old-fashioned, resource-acquisition-based societies is definitely one believable outcome. Warfare, land and resource ownership - the old will become the new again.


You truly believe we’re on a timeline that involves the replacement of anaesthesiologists, emergency medicine physicians, trauma surgeons, and so on, within a 20-year timeframe? AI progress in the last few years has been astounding, but the gap between where we are and a true all-human-labour-is-inferior scenario is almost unfathomable.


I could be wrong on the timeline. But are we not moving towards a future where even those professions are replaced by AI? The current wave of ML might not be the one to get us there, but there is an unprecedented level of interest and resources working to make that a reality. Regardless of whether they succeed, there is still a mountain of societal problems we need to address even with the current generation of this technology.

But my main argument is against the notion that this technology is the same as the ones that came before it, and that it will undoubtedly lead to a net better future. I think that is far from certain, and the way things are developing only leads me to believe that we're not ready for what we're building.


I agree people are way more agentic than we give them credit for in these situations. When analysing large situations like this top-down, we tend to 'petri-dish' ourselves and act like we're just products of our environments being swept along, when that really isn't the case.

That being said, I can't see any world where there isn't mass ontological shock/hysteria, mass unemployment, and unrest for at least a few years, and I feel like it is definitely the kind of event you take active measures and preparations for beforehand.

And so do I, but, as the golden rule of camping goes, you should prepare for the worst and hope for the best!


I’m on the opposite end of the spectrum. I’m almost certain that this is going to end extremely badly for the majority of humanity, and for programmers in particular.

I think there’s a less than 5% chance that this goes well, and that only if a whole series of things go extremely well. And frankly, we’re tracking along the extremely bad path so far.


We barely survived one nuclear arms race, and this could hand every nation state a new type of powerful weapon every five-ish years through the inevitable scaling of energy and weapons capability. I agree we're on one of the worst timelines for AI/AGI/ASI, with the world actively being run into the ground by short-sighted 'dementia-ocracies' and every security risk about to increase dramatically.



