In fact, automated regression tests done by AI with visual capabilities may have a bigger impact than formal verification ever has. You can have an army of testers now, painstakingly going through every corner of your software.
This will only work when customers expect features to behave in a standard way. When customers spec things to work in non-standard ways, you'll just end up with a bunch of false positives.
This. When the bugs come streaming in you better have some other AI ready to triage them and more AI to work them, because no human will be able to keep up with it all.
Bug reporting is already about signal vs noise. Imagine how it will be when we hand the megaphone to bots.
A hybrid will likely emerge. I work on a chat application, and it's pretty normal for the LLM to render custom UI as part of the chat. Things like sliders, dials, selects, and calendars are just better as a GUI in certain situations.
I once saw a demo of an AI photo-editing app that displays sliders next to light sources in a photo, letting you dim or brighten each light source individually. This feels to me like the next level of user interface.
1. There's a "normal" interface or query-language for searching.
2. The LLM suggests a query, based on what you said you wanted in English, possibly in conjunction with results of a prior submit.
3. The true query is not hidden from the user but is made available, so that humans can notice errors, fix deficiencies, and, if they use it enough, naturally learn how it works so that the LLM is no longer required.
Yessss! This is what I want. If there is a natural set of filters that can be applied, let me speak in natural language, have the LLM translate that as well as possible, and then let me review it. E.g. searching photos between X and Y date, containing human Z, at location W. These are all filters that can be presented as separate UI elements, so I can confirm the LLM interpreted me correctly and adjust the dates or what have you without having to repeat the whole sentence again.
Also, any additional LLM magic would be a separate layer with its own context, safely abstracted beneath the filter/search language, not a post-processing step by some kind of LLM shell.
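A minimal sketch of such a reviewable filter set, assuming the photo-search example above; the class and field names here are my invention, not anything from an actual product:

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

@dataclass
class PhotoFilter:
    # Each field maps to its own UI element, so the user can confirm
    # or correct the LLM's interpretation field by field.
    date_from: Optional[date] = None  # "between X and Y date"
    date_to: Optional[date] = None
    person: Optional[str] = None      # "containing human Z"
    location: Optional[str] = None    # "at location W"

# The LLM fills this in from the natural-language request; the user
# then tweaks individual fields instead of rephrasing the whole sentence.
f = PhotoFilter(date_from=date(2023, 6, 1), date_to=date(2023, 7, 1),
                person="Z", location="W")
```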
For example, "Find me all pictures since Tuesday with pets" might become:
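A hypothetical structured form that query could take (the field names and the resolved date are my assumptions for illustration):

```python
# Hypothetical structured query an LLM might emit for
# "Find me all pictures since Tuesday with pets".
query = {
    "taken_after": "2024-05-14",  # "since Tuesday", resolved to a concrete date
    "fuzzy_content": "pets",      # deferred to a separate LLM layer
}
```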
Then the implementation of "fuzzy-content" would generate a text description of the photo, and some other LLM component would do the hidden document-building, like:
Description: "black dog catching a frisbee"
Does that match "with pets"?
Answer Yes or No.
Yes.
My first impulse is to say that some languages have a better SNR on the internet (less auto-generated garbage or SEO content relative to useful information).
Sometimes we want to model something in real life and use math for it; this is physics.
But even then, the model is not the real thing; it's a model (and not even a 1:1 one at that). It usually tries to capture some cherry-picked traits of reality, e.g. where a planet will be in 60 days, ignoring all of its "atoms"[1]. That's because we want some predictive power, and we can't simulate the whole of reality. Wolfram calls these selective traits that can be calculated without calculating everything else "pockets of reducibility". Do they exist? Imho no: planets don't fundamentally exist; they're mental constructs we've created for a group of particles so that our brains won't explode. And if planets don't exist, neither do their positions, etc.
The thing about models is that they're usually simplifications of the thing they model, keeping only the parts of it that interest us.
Modeling is so natural for us that we often fail to realize we're projecting. We're projecting the contents of our minds onto reality, and then we start asking questions out of confusion, such as "does my mind's concept exist?". Your mind's concept is a neural pattern in your brain, that's it.
Command blocks can only be obtained through cheats, not in normal gameplay; they are used to execute server commands automatically. Using them to build a computer in the game kind of defeats the purpose of the exercise: instead of using the game's physics to build your device, you're now mostly doing scripting with Minecraft commands. The author explicitly said they didn't use any in their build.
The confusion might come from the author using commands and external software to generate and assemble parts of the redstone machine. The final machine doesn't use any command blocks as part of its operation, but the description is a bit ambiguous here.
There was no clickbait there at all. No command blocks were used at all. If you were so certain, why don't you download the world and try it yourself?
What I've found is that Nuxt is miles ahead in DX. With Next.js it feels like the architecture stands in your way, while in Nuxt everything just works.
Completely agree. Nuxt is intuitive - convention-over-configuration and auto-imports remove a ton of boilerplate. The key is treating it as an app framework, not a backend solution - within that scope, it handles modern SSR/SPA complexity.
It should be almost obligatory to state which definition of consciousness one is talking about whenever discussing consciousness, because, for example, I don't see what language has to do with our ability to experience qualia.
Is it self-awareness? There are animals that can recognize themselves in a mirror, and I don't think all of them have a form of proto-language.