I'm not sure that "it is clear that Claude does not have those things".
I AM sure that it is hard to conclusively show that Claude has experience and consciousness. Even Claude isn't sure about that.
But while it is absolutely true that "it is a word calculator" - unless you hold the position that human consciousness isn't neural[1] - I don't see how this is any different from saying that human beings are neural activation pattern calculators.
If you're sure that your consciousness isn't neural - then fine: Claude isn't made of the right stuff so couldn't possibly be. But state your assumption up-front.
If one opens up a person and looks at their nervous system the single neurons look complicated, but not especially mysterious.
Given how shockingly little we understand the brain/mind, it is hard to be sure we know enough about how we work - and given how little we know about how LLMs work at any of the many layers above the raw architecture - either position can be reasonably held, but not convincingly argued or demonstrated.
Feel free to think Claude isn't conscious - I can't prove to you that it is. And the amount of theory we still need to learn before we could is vast.
But don't expect me to be _certain_ that it isn't and couldn't be - you simply can't show that convincingly either.
[1]
Penrose thinks consciousness has a quantum nature - if so, sure, no classical computer could be conscious.
Some, like Rupert Sheldrake, think it's a field phenomenon - very woo, but maybe Claude has a morphic field as well?
Lots of people are sure we have a supernatural soul/spirit. One then needs to take up Claude's status with the Creator.
> It is said that the Duke Leto blinded himself to the perils of Arrakis, that he walked heedlessly into the pit.
> *Would it not be more likely to suggest he had lived so long in the presence of extreme danger he misjudged a change in its intensity?*
Be careful of letting your deep, keen insight into the fundamental limits of a thing blind you to its consequences...
Highly competent people have been dead wrong about what is possible (and why) before:
> The most famous, and perhaps the most instructive, failures of nerve have occurred in the fields of aero- and astronautics. At the beginning of the twentieth century, scientists were almost unanimous in declaring that heavier-than-air flight was impossible, and that anyone who attempted to build airplanes was a fool. The great American astronomer, Simon Newcomb, wrote a celebrated essay which concluded…
>> “The demonstration that no possible combination of known substances, known forms of machinery and known forms of force, can be united in a practical machine by which man shall fly long distances through the air, seems to the writer as complete as it is possible for the demonstration of any physical fact to be.”
> Oddly enough, Newcomb was sufficiently broad minded to admit that some wholly new discovery — he mentioned the neutralization of gravity — might make flight practical. One cannot, therefore, accuse him of lacking imagination; his error was in attempting to marshal the facts of aerodynamics when he did not understand that science. His failure of nerve lay in not realizing that the means of flight were already at hand.
It wouldn't be a solution for a personal existential dread of death. It would be a solution if you were trying to uphold long term goals like "ensure that my child is loved and cared for" or "complete this line of scientific research that I started." For those cases, a duplicate of you that has your appearance, thoughts, legal standing, and memories would be fine.
I cannot be 100% certain that sleep is not fatal. If I had some safe and reliable means of preventing sleep I would take it without hesitation. But it seems plausible that a person could survive sleep because it's a gradual process and one that everybody has a lot of practice doing. However, there are no such mitigating factors with general anesthetics. I will refuse general anesthetics if I am ever in a situation to do so. I believe a combination of muscle relaxants and opioids could serve the same medical purpose, and I do not believe that combination would kill the person.
> significant extra effort is required to make them reproducible.
Zero extra effort is required. It is reproducible. The same input produces the same output. The "my machine" in "Works on my machine" is an example of input.
> Engineering in the broader sense often deals with managing the outputs of variable systems to get known good outcomes to acceptable tolerances.
You can have unreliable AIs building a thing, with some guidance and self-course-correction. What you can't have is outcomes also verified by unreliable AIs who may be prompt-injected to say "looks good". You can't do unreliable _everything_: planning, execution, verification.
If an AI decided to code an AI-bound implementation, then even tolerance verification could be completely out of whack. Your system could pass today and fail tomorrow. It's layers and layers of moving ground. You have to put the stake down somewhere. For software, I say it has to be code. Otherwise, AI shouldn't build software, it should replace it.
That said, you can build seemingly working things on moving ground, that bring value. It's a brave new world. We're yet to see if we're heading for net gain or net loss.
If we want to get really narrow, I'd say real determinism is possible only in abstract systems. To which you'd reply that's just my ignorance of all the factors involved, and hence the incompleteness of the model. To which I'd point to the practical limitations of accounting for them. For that reason - even though it is incorrect and I don't use it this way - I understand why some people use the quantifiers more/less with the term "deterministic", probably for lack of a better construct.
I don't think I'm being pedantic or narrow. Cosmic rays, power spikes, and falling cows can change the course of deterministic software. I'm saying that your "compiler" either has intentionally designed randomness (or "creativity") in it, or it doesn't. Not sure why we're acting like these are more or less deterministic. They are either deterministic or not inside normal operation of a computer.
To be clear: I'm not engaging with your main point about whether LLMs are usable in software engineering or not.
I'm specifically addressing your use of the concept of determinism.
An LLM is a set of matrix multiplies and function applications. The only potentially non-deterministic step is selecting the next token from the final output and that can be done deterministically.
By your strict use of the definition they absolutely can be deterministic.
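To make the claim concrete, here is a minimal toy sketch (using NumPy, with made-up logits - not any real model's API) of why greedy "pick the highest-probability token" selection is deterministic: the only step that could introduce randomness is sampling, and argmax removes it.

```python
import numpy as np

def softmax(x):
    # Shift by the max for numerical stability; same input -> same output.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def greedy_next_token(logits):
    # Deterministic selection: always the argmax of the distribution.
    # (Sampling from the distribution instead is where randomness enters.)
    return int(np.argmax(softmax(logits)))

logits = np.array([1.2, 3.4, 0.5, 3.1])
# Repeated calls with identical logits always pick the same token.
picks = {greedy_next_token(logits) for _ in range(100)}
print(picks)  # a single-element set
```

The caveat, as noted elsewhere in the thread, is that production serving stacks reintroduce nondeterminism through floating-point reduction order and batching, even when the selection rule itself is deterministic.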
But that is not actually interesting for the point at hand. The real point has to do with reproducibility, understandability and tolerances.
3blue1brown has a really nice set of videos showing how the LLM machinery fits together.
They _can_ be deterministic, but they usually _aren't_.
That said, I just tried "make me a haiku" via Gemini 3 Flash with T=0 twice in different sessions, and both times it output the same haiku. It's possible that T=0 does enable a deterministic mode, and in that case perhaps we can treat it like a compiler.
Depends if you're using the botanical definition or the (more common) culinary definition[0].
I would argue fruit and fruit are two words, one created semasiologically and the other created onomasiologically. Had we chosen a different pronunciation for one of those words, there would be no confusion about what fruits are.
Yup. Though rather than say "fruit and fruit" are two words, or focusing on "definitions" (which tend to morph over time anyway), I think the more straightforward and typical approach is to just recognize that the same word can have different meanings in different contexts.
This is such a basic and universal part of language, it is a mystery to me why something as transparently clueless as "actually, tomato is a fruit" persists.
I mean, a jelly is just broadly any thickened sweet goop (doesn't even have to be fruit, and is often allowed to have some savoury/umami, e.g. mint jelly or red pepper jelly). Usually a jelly also is relatively clear and translucent, as it is made with puree / concentrate strained to remove large fibers, but this isn't really a strict requirement, and the amount of straining / translucency is generally just a matter of degree. There are opaque jellies out there, and jellies with bits and pieces.
Ketchup has essentially all the key defining features of a jelly, technically, just is more fibrous / opaque and savoury than most typical jellies.
But, of course, calling a ketchup "jelly", due to such technical arguments, is exactly as dumb as saying "ayktually, tomato is a fruit": both are utterly clueless to how these words are actually used in culinary contexts.
> Consider what happens when you build software professionally. You talk to stakeholders who do not know what they want and cannot articulate their requirements precisely. You decompose vague problem statements into testable specifications. You make tradeoffs between latency and consistency, between flexibility and simplicity, between building and buying. You model domains deeply enough to know which edge cases will actually occur and which are theoretical. You design verification strategies that cover the behaviour space. You maintain systems over years as requirements shift.
I'm not sure why he thinks current LLM technologies (with better training) won't be able to do more and more of this as time passes.
To genuinely "talk to stakeholders" requires being part of their social world. To be part of their social world you have to have had a social past - to have been a vulnerable child, to have experienced frustration and joy. Efforts to decouple human development from human cognition betray a fundamental misunderstanding.