arkensaw's comments | Hacker News

It never even occurred to me that sudo was something people had to maintain. It's always just been part of Linux.

And Linux is maintained by who?

Amstrad 2286 PC, 4MB RAM, 40MB hard drive, 3.5" floppy.

40 Megabytes. I have photos that size these days.


> Copilot Shoes

LOL. "Looks like you're trying to tie those laces - would you like me to order you velcro?"


So nice to see this. I really loved Windows Phone for the simple UI it had, which shared a lot of concepts with this. And I felt like Microsoft could have made something really great from the Win8 UI if they had iterated a few more times before dropping it.

I hope you take up that initiative and make the improvements they didn't.


> It's pretty clear to me where this is going. The only question is how long it takes to get there.

I don't think it's a guarantee. All of the things it can do from that list are greenfield; they just have increasing complexity. The problem comes because even in agentic mode, these models do not (and, I would argue, cannot) understand code or how it works; they just see patterns and generate a plausible-sounding explanation or solution. Agentic mode means they can try/fail/try/fail/try/fail until something works, but without understanding the code, especially in a large, complex, long-lived codebase, they can unwittingly break something without realising - just like an intern or newbie on the project, which is the most common analogy for LLMs, with good reason.


While I do agree with you, to play devil's advocate for a moment:

What if we get to the point where all software is basically created 'on the fly' as greenfield projects, as needed? And you never need to have a complex, large, long-lived codebase?

It is probably incredibly wasteful, but ignoring that, could it work?


That sounds like an insane way to do anything that matters.

Sure, create a one-off app to post things to your Facebook page. But a one-off app for the OS it's running on? Freshly generating the code for your bank transaction rules? Generating an authorization service that gates access to your email?

The only reason it's quick to create greenfield projects is because of all these complex, large, long-lived codebases that it's gluing together. There's ample training data out there for how to use the Firebase API, the Facebook API, OS calls, etc. Without those long-lived abstraction layers, you can't vibe out anything that matters.


In Japan, buildings (apartments) aren't built to last forever. They are built with a specific lifespan in mind. They acknowledge the fact that houses are depreciating assets whose value tends to 0.

The only reason we don't do that with code (or didn't use to do it) was because rewriting from scratch NEVER worked[0]. And large-scale refactors take massive amounts of time and resources, so much so that there are whole books written about how to do them.

But today, trivial-to-simple applications can be rewritten from spec or from scratch in an afternoon with an LLM. And even pretty complex parsers can be ported, provided that the tests are robust enough[1]. It's just a matter of time before someone rewrites a small-to-medium-sized application from one language to another using the previous app as the "spec".

[0] https://www.joelonsoftware.com/2000/04/06/things-you-should-...

[1] https://simonwillison.net/2025/Dec/15/porting-justhtml/
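
To make the "tests as the spec" idea above concrete, here's a minimal sketch (TypeScript, with hypothetical names; not taken from either linked post): fixtures captured from the original application become the contract the ported implementation has to satisfy.

    // Hypothetical port-verification harness: input/expected pairs
    // recorded from the original app act as the de facto spec.
    import { strictEqual } from "node:assert";

    type Fixture = { input: string; expected: string };

    function verifyPort(parse: (src: string) => string,
                        fixtures: Fixture[]): void {
      // The port is "done" when it reproduces the original's
      // behaviour on every recorded fixture.
      for (const f of fixtures) {
        strictEqual(parse(f.input), f.expected);
      }
    }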


> But today, trivial-to-simple applications can be rewritten from spec or from scratch in an afternoon with an LLM. And even pretty complex parsers can be ported, provided that the tests are robust enough[1]. It's just a matter of time before someone rewrites a small-to-medium-sized application from one language to another using the previous app as the "spec".

This seems like a sort of, I dunno, chicken-and-egg thing.

The _reason_ you don't rewrite code is because it's hard to know that you truly understand the spec. If you could perfectly understand the spec, then you could rewrite the code; but then what is the software? Is it the code, or the spec that produces the code? So if you built code A from a spec, rebuilding it from that spec doesn't qualify as a rewrite; it's just a recompile. If you're trying to fundamentally build a new application from spec when the old application was written by hand, you're going to run into the same problems you have in a normal rewrite.

We already have an example of this. TypeScript applications are basically rewritten every time you recompile TypeScript for Node. TypeScript isn't the executed code; it's a spec.
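
To make that concrete with a hypothetical snippet (names are illustrative): the type annotations below exist only in the TypeScript "spec"; the JavaScript that Node actually executes has them erased.

    // add.ts -- the "spec": the types constrain what the code means
    function add(a: number, b: number): number {
      return a + b;
    }

    // What tsc emits and what Node actually runs -- the types are
    // gone; only the behaviour survives:
    //
    //   function add(a, b) {
    //     return a + b;
    //   }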

edit: I think I missed that you said rewrite in a different language; then yeah, fine, you're probably right, but I don't think most people are architecture-agnostic when they talk about rewrites. The point of a rewrite is to keep the good stuff and lose a lot of the bad stuff. If you're using the original app as a spec to rewrite in a new language, then fine, yeah, LLMs may be able to do this relatively trivially.


I don't know about Japan - I vaguely recall reading that most buildings over there are built with wood (even the big ones) and that this historically has something to do with rebuilding after tsunamis and earthquakes.

Buildings in most other countries in the world ARE built to last forever, and are often renovated, changed, extended, and modified long after their incept date, because needs change, and destroying them to start over is complete overkill. (Although some people do these "large-scale refactors" - they're usually rich.)

> It's just a matter of time before someone rewrites a small-to-medium-sized application from one language to another using the previous app as the "spec".

I have no doubt of this. I'm sure it's happening already. But the whole point of long term stable applications is that they are tried and tested. A port done in an afternoon by an LLM might be great, but you can't know if it has problems until it has withstood the test of time.


Sure, and the buildings are built to a slowly-evolving code, using standard construction techniques, operating as a predictable building in a larger ecosystem.

The problem with "all software" being AI-generated is that, to use your analogy, the electrical standards, foundation, and building materials have all been recently vibe-coded into existence, and none of your construction workers are certified in any of it.


I don't think so. I don't think this is how human brains work, and you would have too many problems trying to balance things out. I'm thinking specifically of something like a complex distributed system. There are a lot of tweaks and iterations you need for things to work with each other.

But then maybe this raises the question of what a "codebase" is. If a codebase is just a structured set of specs that compile to code, a la TypeScript -> JavaScript, then sure - but it's still a long-lived <blank>.

But maybe you would have to elaborate on what "creating software on the fly" looks like, because I'm sure there's a definition where the answer is yes.


I have the same questions in my head lately.


Using the wood for heating also releases the CO2. I do think planting trees is a good idea, but it's worth pointing out that they can be a carbon source even after harvesting, depending on the usage.

On the other hand, if the wood is used for construction or furniture, the carbon stays stored rather than emitted.


What a fantastic presentation. I wish all news articles were like that.


> As AI edges humans out of the business of thinking, I think we need to be wary of losing something essential about being human

If AI edges humans out of the business of thinking, then we're all in deep shit, because it doesn't think; it just regurgitates previous human thinking. With no humans thinking, no advances in code will be possible. It will only be possible to write things which are derivatives of prior work.

(cue someone arguing with me that everything humans do is a derivative of prior work)


Agreed, conceptually.

BUT. For 99% of tasks, I'm totally certain there are people out there who are orders of magnitude better at them than me.

If the AI can regurgitate their thinking, my output is better.

Humans may need to think to advance the state of the art.

Humans may not need to think to just... do stuff.


> For 99% of tasks, I'm totally certain there are people out there who are orders of magnitude better at them than me.

And LLMs slurped some of those together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to be the good one every time.

> If the AI can regurgitate their thinking, my output is better.

But it can’t. Not definitively and consistently, so that hypothetical is about as meaningful as “if I had a magic wand to end world hunger, I’d use it”.

> Humans may not need to think to just... do stuff.

If you don’t think to do regular things, you won’t be able to think to do advanced things. It’s akin to a muscle: if you don’t use it, it atrophies.


> And LLMs slurped some of those together with the output of thousands of people who’d do the task worse, and you have no way of forcing it to be the good one every time.

That's solvable though, whether through changing training data or RL.


> And LLMs slurped some of those together with the output of thousands of people who’d do the task worse

Theoretically fixable, then.

> But it can’t. Not definitively and consistently

Again, it can't, yet, but with better training data I don't see a fundamental impossibility here. The comparison with any magic wand is, in my opinion, disingenuous.

> If you don’t think to do regular things, you won’t be able to think to do advanced things

Humans already don't think for a myriad of critical jobs. Once expertise is achieved on a particular task, it becomes mostly mechanical.

-

Again, I agree with the original comment I was answering to in essence. I do think AI will make us dumber overall, and I sort of wish it was never invented.

But it was. And, being realistic, I will try to extract as much positive value from it as possible instead of discounting it wholly.


Only if you're less intelligent than the average. The problem with LLMs is that they will always fall to the average/mean/median of information.

And if the average person is orders of magnitude better than you at thinking, you're right... you should let the AI do it lol


Your comment is nonsensical. Have you ever used any LLM?

Ask the LLM to... I don't know, to explain to you the chemistry of aluminium oxides.

Do you really think the average human will even get remotely close to the knowledge an LLM will return to such a simple question?

Ask an LLM to amend a commit. Ask it to initialize a rails project. Have it look at a piece of C code and figure out if there are any off-by-one errors.

Then try the same with a few random people on the street.

If you think the knowledge stored in the LLM weights for any of these questions is that of the average person I don't even know what to say. You must live in some secluded community of savant polymaths.


"they will always fall to the average/mean/median of *information."*


Do you think that the average person can get a gold medal at the IMO?


> Humans may not need to think to just... do stuff.

God forbid we should ever have to think lol


It is concerning how some people really don't want to think about some things, and just "do".


Very Zen of you to say


Imagine if everyone got the opportunity to work on SOTA. What a world that would be.

Unfortunately that’s not where we’re headed.


We've never been there.

With AI and robotics there may be the slim chance we get closer to that.

But we won't. Not because AI, but because humans, of course.


> “…regurgitates previous human thinking.”

I was thinking about this after watching short vertical YouTube videos for about 2 hours last night: ~2-minute clips from different TV series, movies, SNL skits, music-insider clips (Robert Trujillo auditions for Metallica, 2003. LOL). My friends and I often relate in regurgitated human sound bites. Which is fine when I’m sitting with friends driving to a concert. Just wasting time.

I’m thinking about this time suck, and my continual revisiting of my favorite hard topics in philosophy over and over. It’s certainly what we humans do. If I think deeply and critically about something, it’s from the perspective of a foundation I made for myself from reading and writing, or it was initialized by a professor and coursework.

Isn’t it all regurgitated thinking all the way down?


> a foundation I made for myself

Creative thinking requires an intent to be creative. Yes, it may be a delusion to imagine oneself as creative, one's thoughts to be original, but you have to begin with that idea if you're going to have any chance of actually advancing human knowledge. And the stronger, wider, and higher you build your foundation - your knowledge and familiarity with the works of humans before your time - the better your chance of successful creativity, true originality, immortality.

Einstein thinks nothing of import without first consuming Newton and Galileo. While standing on their shoulders, he could begin to imagine another perspective, a creative reimagining of our physical universe. I'm fairly sure that for him, like for so many others, it began as a playful, creative thought stream, a What If juxtaposition between what was known and the mystery of unexplored ideas.

Your intent to create will make you creative. Entertain yourself and your thoughts, and share when you dare, and maybe we'll know if you're regurgitating or creating. But remember that you're the first judge and gatekeeper, and the first question is always, are you creative?


> Isn’t it all regurgitated thinking all the way down?

there it is


> If AI edges humans out of the business of thinking, then we're all in deep shit

Also because we live under capitalism, and you need to do something people need in order to be allowed to live.

For a century+, "thinking" was the task that was supposed to be left to humans as physical labor was automated. If "AI edges humans out of the business of thinking", what's left for humans, especially those who still need to work for a living because they don't have massive piles of money?


    If AI edges humans out of the business of thinking
This will never happen because the business of thinking is enjoyable and the humans whose thinking matters most will continue to be intrinsically motivated to do it.


> This will never happen because the business of thinking is enjoyable and the humans whose thinking matters most will continue to be intrinsically motivated to do it.

What world do you live in, where you get paid doing the things that are enjoyable to you, because they're enjoyable?


Humans draw, but humans were edged out of the business of drawing long ago.


> With no humans thinking, no advances in code will be possible.

What? Coding is like the one thing RL can do without any further human input, because there is a testable, provable ground truth: run the code.
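
A toy sketch of that (TypeScript, hypothetical names): the reward signal for an RL coding loop can literally be "do the tests pass", with no human judgement required.

    // Hypothetical reward function for an RL coding loop: the
    // "ground truth" is whether the candidate code passes the tests.
    type TestCase = { input: number[]; expected: number };

    function reward(candidate: (xs: number[]) => number,
                    tests: TestCase[]): number {
      // Binary reward: 1 if every test passes, 0 otherwise.
      const allPass = tests.every(t => {
        try { return candidate(t.input) === t.expected; }
        catch { return false; }
      });
      return allPass ? 1 : 0;
    }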


> Running LLaMA-12 7B on a contact lens with WASM (arxiv.org)

Laughed out loud at this Onion-like headline.


Llama will be the only one that runs on a contact lens btw

All other tech companies are really shitty, but only Zuck would be OK with very intimate use of AI like this.


But not intimate use of their AR platform


Well, not the only one; there’s Musk and Neuralink. Such chips will inevitably run AI of some sort to effectively communicate with our brains.


Yeah well I don't know how to feel about EM

I gave him a chance. Twitter was unacceptably censoring any COVID dissent, and he freed some of it. Then you find out about the people killed in Tesla crashes. Or him calling the cave rescuer in Thailand a pedo.


He’s certainly a flawed character.


WTH - 61 upvotes and counting? Thank you but no, I don't deserve 61 upvotes for pointing out someone's funny thing was funny.


Well you better stop making subsequent non-additive comments otherwise you’ll end up with more of what you don’t deserve!


The real joke is that we'll ever get another Llama iteration.


I tried it, I like it a lot, but I did find an issue straight away.

I'm on macOS and I have remapped the fn and command keys so it can be more like Windows (I can't undo 20+ years of muscle memory, and also I just don't wanna).

Anyway, Fresh seems to ignore the remapping - it's back to the command key for copy/paste and the command palette.

Is there a way to access the dropdown menus by keyboard? I can see F underlined for File, but no modifier key seems to make it happen.


I'll need to look into this, not sure what remapping does to the incoming key events.

Also, I'm already working on a UI for customizing the key bindings so you could do whatever you wanted. (They're currently managed by undocumented JSON.)

Thanks for reporting!


This is probably not your responsibility. Modifier keys, and especially rebinding them, are really in the realm of the OS and the terminal emulator. The application really shouldn't have to do special things to accommodate macOS idiosyncrasies.


You have to turn on "Use Option as Meta Key" in your terminal app's keyboard settings. (Terminal.app has it under Profile/Keyboard)


Alt+F should open the File menu. I guess that's Option+F on macOS. Does that work?


no


