Hacker News | Byamarro's comments

I blame syntax. It's too unorthodox nowadays. Historical reasons don't matter all that much; everything mainstream is a C-family member.


In fact, automated regression tests done by AI with visual capabilities may have a bigger impact than formal verification has. You can have an army of testers now, painstakingly going through every corner of your software.


In practice it ends up being a bit like static analysis though, in that you get a ton of false positives.

All said, I’m now running all commits through Codex (which is the only thing it’s any good at), and it’s really pretty good at code reviews.


This will only work somewhat when customers expect features to work in a standard way. When customers spec things to work in non-standard ways, you'll just end up with a bunch of false positives.


This. When the bugs come streaming in you better have some other AI ready to triage them and more AI to work them, because no human will be able to keep up with it all.

Bug reporting is already about signal vs noise. Imagine how it will be when we hand the megaphone to bots.


A hybrid will likely emerge. I work on a chat application and it's pretty normal for the LLM to render custom UI as part of the chat. Things like sliders, dials, selects, and calendars are just better as a GUI in certain situations.

I once saw a demo of an AI photo editing app that displays sliders next to the light sources in a photo, so you can dim or brighten each light source individually. That feels to me like the next level of user interface.
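
For illustration, roughly how a chat message could carry widget specs next to its text. This is just a TypeScript sketch; none of these names come from any real framework.

    // Hypothetical shape of a chat message that can embed interactive widgets.
    type Widget =
      | { kind: "slider"; id: string; label: string; min: number; max: number; value: number }
      | { kind: "select"; id: string; label: string; options: string[]; value: string }
      | { kind: "calendar"; id: string; label: string; value: string }; // ISO date

    interface ChatMessage {
      role: "assistant" | "user";
      text: string;
      widgets?: Widget[]; // rendered by the client alongside the text
    }

    // The LLM answers in prose but also emits structured widgets the UI can render,
    // e.g. a brightness slider per detected light source in the photo-editing demo.
    const example: ChatMessage = {
      role: "assistant",
      text: "I found two light sources. Adjust them below:",
      widgets: [
        { kind: "slider", id: "lamp", label: "Desk lamp", min: 0, max: 100, value: 70 },
        { kind: "slider", id: "window", label: "Window light", min: 0, max: 100, value: 40 },
      ],
    };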


IMO the best fusion for this kind of thing is:

1. There's a "normal" interface or query-language for searching.

2. The LLM suggests a query, based on what you said you wanted in English, possibly in conjunction with results of a prior submit.

3. The true query is not hidden from the user, but is made available so that humans can notice errors, fix deficiencies, and naturally--if they use it enough--learn how it works so that the LLM is no longer required.


Yessss! This is what I want. If there is a natural set of filters that can be applied, let me speak it in natural language, then the LLM can translate that as well as possible and I can review it. E.g. searching photos between X and Y date, containing human Z, at location W. These are all filters that can be presented as separate UI elements so I can confirm the LLM interpreted correctly, and I can adjust the dates or what have you without having to repeat the whole sentence again.
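
Roughly what I mean, as a TypeScript sketch. The filter shape and the llm callback are invented for illustration, not any real API.

    // Structured filters the LLM fills in from natural language; the UI renders
    // each field as its own control so the user can review and tweak it.
    interface PhotoQuery {
      after?: string;      // ISO date
      before?: string;     // ISO date
      person?: string;     // "human Z"
      location?: string;   // "location W"
    }

    // Hypothetical translation step: ask the model to emit only the structured
    // filters, then show them as editable UI instead of hiding the interpretation.
    async function parseQuery(
      utterance: string,
      llm: (prompt: string) => Promise<string>,
    ): Promise<PhotoQuery> {
      const prompt =
        `Translate the request into JSON with optional keys after, before, person, location.\n` +
        `Request: "${utterance}"\nJSON:`;
      return JSON.parse(await llm(prompt)) as PhotoQuery;
    }

    // e.g. "photos of Z in W between X and Y" ->
    // { after: "2024-05-01", before: "2024-06-01", person: "Z", location: "W" },
    // which the user can correct field by field before searching.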


Also, any additional LLM magic would be a separate layer with its own context, safely abstracted beneath the filter/search language. Not a post-processing step by some kind of LLM-shell.

For example, "Find me all pictures since Tuesday with pets" might become:

    type:picture after:2025-10-08 fuzzy-content:"with pets"
Then the implementation of "fuzzy-content" would generate a text-description of the photo and some other LLM-thingy does the hidden document-building like:

   Description: "black dog catching a frisbee"
   Does that match "with pets"?
   Answer Yes or No.
   Yes.
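
A minimal sketch of how that fuzzy-content clause could be wired up, assuming a captioning step and a yes/no LLM call. The describe and ask helpers are hypothetical, not from any real library.

    // Hypothetical helpers: `describe` turns a photo into a caption (e.g. via a
    // vision model), `ask` sends a prompt to some LLM and returns its raw answer.
    declare function describe(photoId: string): Promise<string>;
    declare function ask(prompt: string): Promise<string>;

    // Implements fuzzy-content:"with pets" on top of the plain query language:
    // caption the photo, then ask the model for a strict Yes/No judgement.
    async function fuzzyContentMatch(photoId: string, criterion: string): Promise<boolean> {
      const description = await describe(photoId); // "black dog catching a frisbee"
      const answer = await ask(
        `Description: "${description}"\n` +
        `Does that match "${criterion}"?\n` +
        `Answer Yes or No.`
      );
      return /^yes/i.test(answer.trim());
    }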


There's actually research showing that LLMs are more accurate when questions are asked in Polish: https://arxiv.org/pdf/2503.01996


My first impulse is to say that some languages have better SNR on the internet (less autogenerated garbage or SEO content relative to useful information).


I think we can do better than this level of argumentation, regardless of whether the preceding comment had merit to it or not.


What he refers to is more specifically called phenomenological consciousness afaik (just skimmed through tho).


Math is about creating mental models.

Sometimes we want to model something in real life and try to use math for this - this is physics.

But even then, the model is not real, it's a model (not even a 1:1 one on top of that). It usually tries to capture some cherry-picked traits of reality, e.g. where a planet will be in 60 days, ignoring all its "atoms"[1]. That's because we want to have some predictive power and we can't simulate the whole of reality. Wolfram calls these selective traits that can be calculated without calculating everything else "pockets of reducibility". Do they exist? Imho no, planets don't fundamentally exist, they're mental constructs we've created for a group of particles so that our brains won't explode. If planets don't exist, neither do their positions etc.

The thing about models is that they're usually simplifications of the thing they model, keeping only the parts of it that interest us.

Modeling is so natural for us that we often fail to realize that we're projecting. We're projecting the content of our minds onto reality, and then out of confusion we start to ask questions such as "does my mind concept exist". Your mind concept is a neural pattern in your mind, that's it.

[1] atoms are mental concepts as well ofc


I believe this is called epistemic pragmatism in philosophy: https://en.wikipedia.org/wiki/Pragmatism


It is a bit clickbait since they used command blocks, not just redstone. But it's still impressive.


From the video description: "I built a small language model in Minecraft using no command blocks or datapacks!"


Can you explain the difference (for non-Minecrafters)?


Command blocks can only be obtained by cheating in normal gameplay; they are used to execute server commands automatically. Using them to build a computer in the game kind of defeats the purpose of the exercise, since instead of using the game's physics to build your device, you're mostly doing scripting with Minecraft commands. The author explicitly said they didn't use any in their build.


The confusion might come from the author using commands / external software to generate and assemble parts of the redstone machine. The final machine doesn't use any command blocks as part of its operation, but the description is a bit ambiguous here.


There was no clickbait there at all. No command blocks were used. If you were so certain, why don't you download the world and try it yourself?


I think you missed the 'no' in there. They did not use command blocks.


What I've found is that NuxtJS is miles ahead in DX. In NextJS it feels like the architecture stands in your way, while in NuxtJS everything just works.


Completely agree. Nuxt is intuitive - convention-over-configuration and auto-imports remove a ton of boilerplate. The key is treating it as an app framework, not a backend solution - within that scope, it handles modern SSR/SPA complexity.


It should be almost obligatory to state which definition of consciousness one is talking about whenever they talk about consciousness, because I, for example, don't see what language has to do with our ability to experience qualia.

Is it self-awareness? There are animals that can recognize themselves in a mirror, and I don't think all of them have a form of proto-language.

