thefaux's comments

Just because it is small doesn't mean that it isn't important.

The lead maintainer of sbt does it as a labor of love. I am very curious whether he will actually receive any money.


The examples that you and others provide are always fundamentally uninteresting to me. Many, if not most, are some variant of a CRUD application. I have yet to see a single AI-generated thing that I personally wanted to use or spend time with. I also can't help but wonder what we might have accomplished if we had devoted the same resources to developing better tools, languages, and frameworks for developers instead of automating the generation of boilerplate and selling developers' own skills back to them. Imagine if open source maintainers had instead been flooded with billions of dollars in capital. What might be possible?

Also, the capabilities of LLMs are almost beside the point. I don't use LLMs, but I have no doubt that for any problem that can be expressed textually and is computable in finite time, an LLM will, in the limit as time goes to infinity, be able to solve it. The more important and interesting questions are what _should_ we build with LLMs and what should we _not_ build with them. Arguments about capability distract from these more important questions.


Considering how much time developers spend building uninteresting CRUD applications, I would argue that if all LLMs can do is speed that process up, they're already worth their weight in bytes.

The impression I get from this comment is that no example would convince you that LLMs are worthwhile.


This feels like it conflates problem solving with the production of artifacts. It seems highly possible to me that the explosion of AI-generated code is creating more problems than it is solving, and that the friction of manual coding may ultimately prove to be a great virtue.


This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

How we work changes, and the extra complexity buys us productivity. The vast majority of software will be AI-generated, tools will exist to continuously test and refine it, and hand-written code will be for artists, hobbyists, and an ever-shrinking set of hard problems where a human still wins.


> This statement feels like a farmer making a case for using their hands to tend the land instead of a tractor because it produces too many crops. Modern farming requires you to have an ecosystem of supporting tools to handle the scale and you need to learn new skills like being a diesel mechanic.

This, to me, looks like an analogy that supports what the GP is saying. With modern farming practices you get problems like increased topsoil loss and decreased nutritional value of produce. It also leads to a loss of knowledge among those who adopt these short-term techniques of least resistance.

This is not me saying big farming is bad or anything like that; just that your analogy, to me, seems perfectly in sync with what the GP is saying.


And those trade-offs can only pay off if the extra food produced can be utilized. If the farm is producing more food than can be preserved and/or distributed, then the surplus is deadweight.


I’ll be honest with you, pal - this statement sounds like you’ve bought the hype. The truth is likely somewhere between the poles - at least that’s where it’s been for the last 35 years that I’ve been obsessed with this field.


They may be early but they’re not wrong.


Self-driving cars are only 5 years away, just like 10 years ago.


"Airplanes are only 5 years away, just like 10 years ago" --Some guy in 1891.

Never use that phrase to claim something is impossible. I mean, there are driverless Waymos on the street in my area, so your statement is already partially incorrect.


"Flying cars are only 5 years away, just like 10 years ago" --Some guy in 1985

Absolutely no one said that in 1891.


Nobody is saying it isn't possible. Just saying nobody wants to pay as much money as it's going to take to get there. At some point investors will say, meh, good 'nuff.


That could be said about hover cars too.


The Moller car is just weeks away, haven't you heard?


I feel like we are at the crescendo point with "AI". Happens with every tech pushed here. 3DTV? You have those people who will shout you down and say every movie from now on will be 3D. Oh yeah? Hmmm... Or the people who see Apple's goggles and yell that everyone will be wearing them and that's just going to be the new norm now. Oh yeah? Hmmm...

Truth is, for "AI" to get markedly better than it is now (0) will take vastly more money than anyone is willing to put into it.

(0) Markedly, meaning it will truly take over the majority of dev (and other "thought worker") roles.


This is a false equivalence. If the farmer had some processing step which had to be done by hand, having mountains of unprocessed crops instead of a small pile doesn’t improve their throughput.


This is the classic mistake all AI hypemen make: assuming code is an asset, like crops. Code is a liability, and you must produce as little of it as possible to solve your problem.


As an "AI hypeman" I 100% agree that code is a liability, which is exactly why I relish being able to increasingly treat code as disposable or even unnecessary for projects that'd before require a multiple developers a huge amount of time to produce a mountain of code.


I measure what I do by output.

Just about a week ago I launched a 100% AI-generated project that short-circuits a bunch of manual tasks. What previously took 3+ weeks of manual work to produce now takes us 1-2 days to verify. It generates revenue. It took a workflow that was barely profitable and cut its costs by more than 90%. Half the remaining time is ongoing process optimization; we hope to fully automate away the remaining 1-2 days.

This was a problem that wasn't even tractable without AI, and there's no "explosion of AI-generated code".

I fully agree that some places will drown in a deluge of poor-quality AI-generated code, but that is an operator fault. In fact, one of my current clients retained me specifically to clean up after someone who dove headfirst into "AI first" without an understanding of proper guardrails.


>This was a problem that wasn't even tractable without AI, and there's no "explosion of AI generated code".

People often say this when giving examples, but what specifically made the problem intractable?

Sometimes before beginning work on a problem, I dramatically overestimate how hard it will be (or underestimate how capable I am of solving it).


I do see this as a bad thing and an abdication of taking responsibility for one's own life. As was recently put to me after the sudden death of a friend's father (who lived an unusually rich life): everyone dies, but not everyone truly lives.


Ah... we found the person who thinks they can pass judgement on how people choose to live their lives. I didn't say that my friend doesn't love his job (he does) - I said that he'll probably die before retiring.

Stephen Hawking, Einstein, Marie Curie, and Linus Pauling never retired. Did they not "truly live"?


At the end of his life, Maslow became convinced that self-transcendence was the pinnacle of the hierarchy. Strong identification with work will not get one to that final step. I am not sure whether AI is a path to self-transcendence or self-annihilation, but it's interesting to ponder in the case of someone like Brin.


I truly believe that the cult of C performance optimization has done more harm than good. It is truly evil to try to infer programmer intent, or, even worse, to silently override it. Many, if not most, of the optimizations done by LLVM and GCC should be warnings, not optimizations (dead code elimination outside of LTO being a perfect example).

How much wasted work has been created by compiler authors deciding that they know better than the original software authors and silently breaking working code, but only in release mode? Even worse, -O0 performance is so bad that developers feel obligated to compile with -O2 or higher. I will bet dollars to donuts that in most real-world use cases the vast majority of the material wins from -O2 come from better register allocation and good selective inlining, not from all the crazy transformations and eliminations that rely on UB and subtly break your code. Yes, I'm sure there are microbenchmarks that justify those code-breaking "optimizations", but in practice I'll bet they rarely account for more than 5% of total runtime. Meanwhile, everyone pays the cost of horrifically slow build times and nearly unbounded developer time lost debugging code the compiler broke.
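
To make the complaint concrete, here is a minimal sketch of the kind of UB-driven transformation I mean (the textbook null-check-elimination case; the function itself is hypothetical):

    #include <stddef.h>

    /* The dereference lets the compiler infer that p cannot be NULL,
       so at -O2 both GCC and Clang are entitled to delete the NULL
       check below as dead code. The debug build keeps the check; the
       release build may not. */
    int read_value(int *p) {
        int v = *p;        /* undefined behavior if p == NULL */
        if (p == NULL)     /* silently eliminated at -O2 */
            return -1;
        return v;
    }

Compile at -O0 and the check is still emitted; compile at -O2 and the prior dereference lets the compiler assume p is non-NULL and drop it. Same source, different release-mode behavior: exactly the divergence I mean. (The well-known Linux kernel tun.c bug followed this pattern.)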

Of course, part of the problem is developers hating being told they're wrong and complaining about nanny compilers. In this sense, compiler authors have historically been somewhat like sycophantic LLMs. Rather than tell the programmer that their code is wrong, they do everything they can to coddle the programmer while executing their own agenda behind the scenes, likely getting things wrong, all because they were afraid to say honestly that there was a problem with the programmer's instructions.


Honestly, I found this piece depressing. Life is too short and precious to waste on crappy software.

So often the question AI-related pieces ask is "can AI do X?" when by far the more important question is "should AI do X?" As written, the piece reads as though the author has learned helplessness around C++, and their answer is to adopt a technology that leaves them even more helpless, which they indeed lament. I'd challenge the author to reflect on why they are so attached to this legacy software and why they cannot abandon it if it is causing this level of angst.


Sure, but I prefer to work on projects that are fundamentally sound and high impact. Indeed, I have noticed a pattern: very often AI enthusiasts exalt its capability to automate work that appears to be of questionable value in the first place, apart from the important second-order property of keeping the developer sheltered and fed.


Can you tell us about this work (it should be easy) that has questionable value yet pays well enough for rent and food?


> Try a GraalVM native image. Milliseconds. Gone.

Try building a GraalVM native image. Minutes gone.


I agree the build takes a bit longer, but it's not much for smaller CLIs. Making jbang native added 1-2 minutes, and it's all done in GitHub Actions runners, so in practice I don't see this as a problem, as it does not affect the end user.


More gigabytes of RAM than your machine has will be gone, too.


Luckily, that's only during AOT compilation, not at runtime.


Right, but the inspiration for this article is using Java as a terminal vibe-coding language, so the AOT step would be part of the critical path.

I’m not surprised this was not obvious to the LLM that “cleaned up my notes” for the “author”.

