
Despite what completely uninformed people may think, the field "computer science" is not about software development. It's a branch of mathematics. If you want an education in software development, those are offered by trade schools.

What I want is for universities to offer a degree in Software Engineering. That's a different field from Computer Science.

You say that belongs in a trade school? I might agree, if you think trade schools and not universities should teach electrical engineering, mechanical engineering, and chemical engineering.

But if chemical engineering belongs at a university, so does software engineering.


Plenty of schools offer software engineering degrees alongside computer science, including mine ~20 years ago.

The bigger problem when I was there was undergrads (me very much included) not understanding the difference at all when signing up.


Many do. Though, the one I'm familiar with is basically a CS-lite degree with software specific project design and management courses.

Glad I did CS, since SE looked like it consisted of mostly group projects writing 40 pages of UML charts before implementing a CRUD app.


Saying this as a software engineer that has a degree in electrical engineering - software "engineering" is definitely not the same as other engineering disciplines and definitely belongs in a trade school.

My university had Electrical Engineering, Computer Engineering, Software Engineering, and Computer Science degrees (in addition to all the other standard ones).

Last I checked ASU does, and I’m certain many other universities do too.

For reference, China installs about 1 GW of solar per day. By this time next week, they will have surpassed the output of this entire project.

China is the world's largest electricity producer and installs a lot of generating capacity of all types. For example, China has 29 nuclear reactors with 31 GW of capacity currently under construction:

https://pris.iaea.org/PRIS/CountryStatistics/CountryDetails....


Which leads to a shrinking nuclear share in their grid. It peaked at 4.6% in 2021, now down to 4.3%.

Compared to their renewable buildout, the nuclear scheme is a token gesture to keep a nuclear industry alive in case it somehow ends up delivering cheap electricity. And of course to enable their military ambitions.


I think that says more about their vast investment in other forms of power (particularly renewables) than it suggests a lack of investment in nuclear.

The nuclear share dropping is a very clear signal about a lack of investment. Shows that nuclear energy is no longer cost competitive, even in a "low regulation" environment.

It shows that strategic investment matters and people are looking at more than a single cost metric. Nuclear is behind today, but there's no promise it will stay behind, unless you stop investing now.

The multi-armed bandit says to explore as well as exploit. The delta you cited indicates the pendulum is currently more toward exploit than explore, but it's not a static equation.


Chinese nuclear is extremely cost competitive at $2.5bn/unit. They have other reasons too, one being the ban on inland expansion for fear of messing up the two major rivers that feed the country. Current Chinese units are basically borrowed and improved Western designs: the CAP1000 is basically Vogtle's AP1000, and the Hualong is a Frankenstein of several Western designs.

>borrowed and improved western designs

TBH this part seems key: even the PRC couldn't economically operate full Western designs reliant on Western industrial capacity; part of it was simple incompetence of Western supply chains (business closures / regulatory drama / sanctions). Nuclear seems viable once you strip out a lot of the politics that makes it uneconomical, hence the PRC had to indigenize the designs, since once Western supply chains enter the picture, the schedule goes out the window.


The US, in 2024, installed ~0.13 GW of solar per day.

https://seia.org/research-resources/solar-market-insight-rep...

6 GW of nuclear is either a tech company getting ahead of bad PR with a token gesture, or it's maybe the start of something real.


Two years ago we were installing 1/10th of what China installs per day today?

Where are we at today? Can we catch up under this administration?

How do we compare on nuclear? I know my state installed nuclear reactors recently, but I'm not aware of any other buildouts.

In a war game scenario, China is probably more concerned about losing access to oil and natural gas than we are. Not that we shouldn't be building this stuff quickly either.


> Can we catch up under this administration?

No. The future is Chinese, if the Chinese can maintain good governance.

A big "if"


1 GW in nameplate capacity of solar panels powers fewer data centres than 1 GW of nuclear. So it needs to be "this time next month".

1 GW at noon; maybe 20% of that on average.

China's building a bunch of nuclear too.


That isn't how solar capacity is measured. It's not simply its maximum instantaneous power potential.

There are multiple measures, as generating technologies are complex. "Nameplate capacity" (given above) is one, "capacity factor", which is (roughly) the time-averaged output is another, and for solar averages about 20%, though that can vary greatly by facility and location.

Nuclear has one of the highest capacity factors (90% or greater), whilst natural gas turbines amongst the lowest (<10% per the link below). This relates not only to the reliability of the technologies, but how they are employed. Nuclear power plants cannot be easily ramped up or down in output, and are best operated at continuous ("base load") output, whilst gas-turbine "peaking stations" can be spun up on a few minutes' notice to provide as-needed power. Wind and solar are dependent on available generating capability, though this tends to be fairly predictable over large areas and longer time periods. Storage capability and/or dispatchable load make managing these sources more viable, however.

<https://en.wikipedia.org/wiki/Nameplate_capacity>

<https://en.wikipedia.org/wiki/Capacity_factor>
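
To make the nameplate-vs-average distinction concrete, here's a rough back-of-the-envelope sketch (the ~20% and ~90% capacity factors are the approximate figures mentioned above; real numbers vary by site):

  # Rough average output: nameplate capacity x capacity factor
  hours_per_year = 8760
  for name, nameplate_gw, cf in [("solar", 1.0, 0.20), ("nuclear", 1.0, 0.90)]:
      avg_gw = nameplate_gw * cf
      twh_per_year = avg_gw * hours_per_year / 1000
      print(f"1 GW {name}: ~{avg_gw:.2f} GW average, ~{twh_per_year:.1f} TWh/year")
  # 1 GW solar: ~0.20 GW average, ~1.8 TWh/year
  # 1 GW nuclear: ~0.90 GW average, ~7.9 TWh/year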


It's close enough to how it's measured. China's terawatt of solar power capacity isn't producing 9000 terawatt hours in a year. Their total electricity use is 9000 terawatt hours.

It is how individual power generation projects are measured, though. If you install 1 GW of solar generation, it means you installed solar panels capable of generating 1 GW peak. If you install 1 GW of coal generation, then same thing. If you install 1 GW of peaker gas plants, etc.

The coal plant will have a capacity factor of 80% though. Solar will be 10 to 20%. And the gas plant could be very low due to usage intent.

Battery projects are the same (since they're reported as generators): whatever the nameplate capacity, it's only available for about 4 hours.
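
To illustrate the power-vs-energy distinction for batteries (a sketch using the 4-hour figure above; actual durations vary by project):

  # A battery's nameplate is a power rating (MW); duration determines energy (MWh).
  power_mw = 100    # nameplate power
  duration_h = 4    # the typical duration mentioned above
  energy_mwh = power_mw * duration_h
  print(f"{power_mw} MW / {energy_mwh} MWh battery")  # 100 MW / 400 MWh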


As mentioned numerous times in this thread, the percentage of their generation that is nuclear is falling.

And yet they're still making a bunch more.

So only installing 73 GW average capacity per year.

Yes. Meta's matching a whole month of China's solar growth, which I would call a lot.

A whole month of China's growth... to come fully online by 2036.

For a single company that's half a percent of US GDP.

Solar and Nuclear energy are different energy products. China is also bringing on an insane amount of nuclear energy.

In absolute terms, China installs about as much nuclear as the US does solar. So I can only assume you agree with the statement "the US is bringing on an insane amount of solar energy"? Because, once again in absolute terms, the US's solar buildout is trounced by China's. The US is losing the energy race, and nuclear isn't going to save it. The US will run out of fissile material before China runs out of sunlight.

There's really no risk of running out of fissile material. We can create it.

Depends on whether they start seeding clouds and doing geoengineering.

Nuclear and solar are different energy products that are complementary. This solar-vs-nuclear narrative is basic and anti-progress.

For example, China invested in solar so they could transition their energy system and get it paid for by selling globally via subsidized cell manufacturing.

I don't think they will be able to export their nuclear tech globally, since it is essentially repackaged US tech.

But yeah, I'm all for solar. The more solar the better, but it can't do firm power well.


China is building a tiny amount of nuclear in comparison to their wind, solar, storage, and HVDC builds. Only something like 50-100 GW over the coming decades. The quantity being built only makes sense as a strategic hedge, not as a primary strategy.

Renewables crash the moneymaking potential of nuclear power. Why should someone buy ~18-24 cents/kWh new-built nuclear power (excluding backup, transmission costs, taxes, final waste disposal, etc.) when cheap renewables deliver?

https://oilprice.com/Alternative-Energy/Renewable-Energy/Wha...

China is barely building nuclear power, relative to their grid size. It peaked at 4.6% in 2021, now down to 4.3%.

Compared to their renewable buildout, the nuclear scheme is a token gesture to keep a nuclear industry alive in case it somehow ends up delivering cheap electricity.


Again, they aren't the same product. Everyone always thinks power is only about $/kWh, especially on Hacker News. That is a strong component of the product, but most definitely not all of it. Solar just does not work for large-scale industrial use cases (99.99% uptime), even with massive energy storage to try and cover the edges. It's a great combo, but not comparable.

How does your "large scale industrial use case" deal with 50% of the French nuclear fleet being offline?

https://www.nytimes.com/2022/11/15/business/nuclear-power-fr...

Or 50% of the Swedish fleet being offline twice this year?


At the same time that happened, French solar was 92% offline and French wind generation was 81% offline.

Maybe we should get the opposite conclusion from this incident.


Based on yearly average capacity factor?

Since that incident storage has been scaling massively. How does a nuclear plant compete with zero marginal cost renewables?

https://oilprice.com/Energy/Energy-General/The-Quiet-Unravel...


Based on whether it would run at full capacity like a plant would, yes.

And no, storage hasn't scaled at all yet; it would need a 100x increase before being useful for such events.

The proof is in the pudding anyway: if that works so well, why does nobody do it?


Nobody does what? Solar installs are way way up.

Nobody does a storage-based solar grid at a country scale.

by providing firm power.

Nuclear provides it for about 4-5 ct/kWh if built cheap, everything included, looking at Swiss data. Chinese units are built for $2.5bn/unit, so probably even cheaper than that. But yes, China is far from what France or Sweden did with nuclear per capita.

Renewables and battery storage are unstoppable. Why take nuclear risk when you can get more than enough from solar, wind, and geothermal coupled with battery storage?

lol, talking about risky nuclear investment and mentioning geothermal as a ready-to-go alternative...

It’s not easy to go. It’s going, and has been for many decades.

Small scale where I am, when compared with hydro.


Geothermal is essentially negligible to the grid so it's weird to include it.

It's also so geographically constrained no one can choose to build it anyway.


China is a country with over a billion people, Meta is a private company with under 100k employees, it doesn't really make sense to compare the power output of their investments.

What it tells me is that humans are fallible, and that being a competent programmer has no correlation with having strong mental defenses against the brainrot that typifies the modern terminally-online internet user.

I leverage LLMs where it makes sense for me to do so, but let's dispense with this FOMO silliness. People who choose not to aren't missing out on anything, any more than people who choose to use stock Vim rather than VSCode are missing out on anything.


It's not Vim vs VSCode though - the analogy might be writing in assembler vs writing in your high level language of choice.

Using AI you're increasing the level of abstraction you can work at, and reducing the amount of detail you have to worry about. You tell the AI what you want to do, not how to do it, other than providing context that does tell it about the things that you actually care about (as much or little as you choose, but generally the more the better to achieve a specific outcome).


> the analogy might be writing in assembler vs writing in your high level language of choice.

If it were deterministic, yes, but it's not. When I write in a high level language, I never have to check the compiled code, so this comparison makes no sense.

If we see new kinds of languages, or compile targets, that would be different.


It's a new type of development for sure, but with an agentic system like Claude Code that is able to compile, run, and test the code it is generating, you can have it iterate until the code meets whatever test or other criteria you have set. No reason code reviews can't be automated too, customized to your own coding standards.

Effort that might go into feeling you need to manually review all generated code might be better put into things like automating quality checks (e.g. code review, adherence to guidelines), ensuring that testing is comprehensive, and overall management of the design and process into modular, testable parts, the same way as if you'd done it manually.

While AI is a tool, the process of AI-centric software development is better regarded as a pair-design and pair-coding process, treating the AI more like a person than a tool. A human teammate isn't deterministic either, yet if they produce working artifacts that meet interface requirements and pass unit tests, you probably aren't going to insist on reviewing all of their code.
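
As a minimal sketch of that iterate-until-green loop (the ask_model and apply_patch helpers here are hypothetical placeholders for whatever model client and patching mechanism you use):

  import subprocess

  def ask_model(prompt: str) -> str:
      """Hypothetical: send the prompt to your LLM of choice; return a proposed fix."""
      raise NotImplementedError("wire up your model client here")

  def apply_patch(patch: str) -> None:
      """Hypothetical: apply the model's proposed change to the working tree."""
      raise NotImplementedError("wire up your patch application here")

  def iterate_until_green(max_rounds: int = 5) -> bool:
      for _ in range(max_rounds):
          result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
          if result.returncode == 0:
              return True  # tests pass; stop iterating
          # Feed the failures back to the model and apply its proposed fix.
          apply_patch(ask_model(f"Tests failed:\n{result.stdout}\nPropose a fix."))
      return False

The same loop generalizes to any check you can script: linters, coverage thresholds, or an automated review pass.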


This is so stupid. You still have to review that code, you still have to know what the solution is; ergo, you still need to know how to do it, and you still have to deal with the cognitive load of reviewing someone else's code. I don't understand how you can write as if the implementation, fairly trivial and mechanical, is somehow more taxing than reading someone else's code.

This is not the supporting argument you think it is; it just further alludes to the fact that people raving about AI just generate slop and either don't review their code or just send it to their coworkers to review.

I guess AI bros are just the equivalent of script kiddies, running shit without knowing how it works and claiming credit for it.


I confess I don't really get this. Fish and Bash are different languages in the same way that Ruby and Perl are different languages. And if I want to run a Perl script, I don't try to run it in a Ruby interpreter, and I don't get grumpy at Ruby for not being source-compatible with Perl.

Which is to say, if you need to run a Bash script, run it via `bash foo.sh` rather than `./foo.sh`. There's no need to limit yourself to a worse shell just because there exist some scripts out there written in that shell's language.


There's nothing even preventing the second form from working either. Just put the right shebang at the top of the script and it'll run through that interpreter. I've been on fish for a decade, but still write all my shell scripts in Bash. It's never been an issue.
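
For example, give the script a bash shebang and ./foo.sh runs under bash regardless of your interactive shell:

  #!/usr/bin/env bash
  # ./foo.sh now runs under bash, even when invoked from a fish session
  echo "hello from bash"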

I'd recommend taking this even further. Start every meeting 30 minutes after the hour, and end it 30 minutes before the hour.

You don't understand. If I arrange everything into written words and send out an email with a link to the document, no one will read it.

Instead, I must invite 10 people to do other things while I talk on a zoom call! "Sorry, I was multitasking"


I mean you can't make people read an email but I feel like you would have a much higher success rate if the content was in the email itself. You're competing with the other work that people have to do and actually get graded on, why add a layer of indirection?

People don't read the email itself; they just want to "go over it together" because of laziness / no reading comprehension / whatever the reason is. So many meetings have 10+ people there who have no clue what the meeting is about, while the agenda, questions, possible answers, etc. are in the email.

So I usually start (if it's my responsibility to do so) with: how about you read the email for a few minutes before we start. Which is usually met with "why don't you go over it line by line with us, share screen and read it". Drives me bonkers. Granted, these are usually very big partner companies whose employees (including middle management) see this as some break in their day, so they don't really care about the time spent or the outcome.

We meet in person and have a culture where closing your laptop is followed pretty well.

TBH, if everyone involved is invested in what is being said, you don't need to be in person and you don't need to shut laptops.

If that is the majority of your meetings, you are in a good place.

The mistake is to think the rules are what makes the meeting useful. Having the right audience, an agenda, and appropriate expectations for the outcomes are the useful things.


That's so quaint, you all must be in the same place!

Except for the guy who built the thing who lives elsewhere and can’t join the offsites

Got it, 24 hour long meetings it is.

Except for the weekly release meetings, those can be 48.


I, for one, support this... and reducing the agenda to NO bullet points.

To extend this to entire game histories, here's the Lichess blog post, "Compressing Chess Moves Even Further, To 3.7 Bits Per Move": https://lichess.org/@/marcusbuffett/blog/compressing-chess-m...
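
The core trick (not the blog's exact scheme, just the underlying idea) is that a move can be stored as an index into the position's legal-move list, which needs only ceil(log2(n)) bits; ordering that list by likelihood gets the average down further. A sketch using the python-chess library:

  import math
  import chess  # pip install python-chess

  board = chess.Board()
  n = board.legal_moves.count()      # 20 legal moves in the starting position
  print(n, math.ceil(math.log2(n)))  # 20 -> 5 bits suffice for the first move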

They may be an expert in Go, but from their writing they appear to be misunderstanding (or at least misrepresenting) how things work in other languages. See the previous discussion here: https://lobste.rs/s/exv2eq/go_sum_is_not_lockfile

> They may be an expert in Go, but from their writing they appear to be misunderstanding (or at least misrepresenting) how things work in other languages

Thanks for that link.

Based on reading through that whole discussion there just now and my understanding of the different ecosystems, my conclusion is that certainly people there are telling Filippo Valsorda that he is misunderstanding how things work in other languages, but then AFAICT Filippo or others chime in to explain how he is in fact not misunderstanding.

This subthread to me was a seemingly prototypical exchange there:

https://lobste.rs/s/exv2eq/go_sum_is_not_lockfile#c_d26oq4

Someone in that subthread tells Filippo (FiloSottile) that he is misunderstanding cargo behavior, but Filippo then reiterates which behavior he is talking about (add vs. install), Filippo does a simple test to illustrate his point, and some others seem to agree that he is correct in what he originally said.

That said, YMMV, and that overall discussion does certainly seem to have some confusion and people seemingly talking past each other (e.g., some people mixing up "dependents" vs. "dependencies", etc.).


> but then AFAICT Filippo or others chime in to explain how he is in fact not misunderstanding.

I don't get this impression. Rather, as you say, I get the impression that people are talking past each other, a property which also extends to the author, and the overall failure to reach a mutual understanding of terms only contributes to muddying the waters all around. Here's a direct example that's still in the OP:

"The lockfile (e.g. uv.lock, package-lock.json, Cargo.lock) is a relatively recent innovation in some ecosystems, and it lists the actual versions used in the most recent build. It is not really human-readable, and is ignored by dependents, allowing the rapid spread of supply-chain attacks."

At the end there, what the author is talking about has nothing to do with lockfiles specifically, let alone when they are applied or ignored, but rather to do with the difference between minimum-version selection (which Go uses) and max-compatible-version selection.

Here's another one:

"In other ecosystems, package resolution time going down below 1s is celebrated"

This is repeating the mistaken claims that Russ Cox made years ago when he designed Go's current packaging system. Package resolution in e.g. Cargo is almost too fast to measure, even on large dependency trees.


> its stored forever in the proxy cache

This is mistaken. The Go module proxy doesn't make any guarantee that it will permanently store the checksum for any given module. From the outside, we would expect that their policy is to only ever delete checksums for modules that haven't been fetched in a long time. But in general, you should not base your security model on the notion that these checksums are stored permanently.


> The Go module proxy doesn't make any guarantee that it will permanently store the checksum for any given module

Incorrect. Checksums are stored forever, in a Merkle Tree, meaning if the proxy were to ever delete a checksum, it would be detected (and yes, people like me are checking - https://sourcespotter.com/sumdb).

Like any code host, the proxy does not guarantee that the code for a module will be available forever, since code may have to be removed for legal reasons.

But you absolutely can rely on the checksum being preserved and thus you can be sure you'll never be given different code for a particular version.


Here's another person auditing the checksum database: https://raphting.dev/posts/gosumdb-live-again/

Ah, my mistake. I had read in the FAQ that it does not guarantee that data is stored forever, but overlooked the part about preserving checksums specifically.

To be very pedantic, there are two separate services: The module proxy (proxy.golang.org) serves cached modules and makes no guarantees about how long cache entries are kept. The sum database (sum.golang.org) serves module checksums, which are kept forever in a Merkle tree/transparency log.
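
You can poke at the two services directly (using golang.org/x/text@v0.3.8 purely as an example module):

  # The proxy serves the module code (cached, not guaranteed forever):
  curl -sI https://proxy.golang.org/golang.org/x/text/@v/v0.3.8.zip
  # The sum database serves the checksum record (kept forever in the log):
  curl -s https://sum.golang.org/lookup/golang.org/x/text@v0.3.8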

Ok. So to answer the question whether the code for v1.0.0 that I downloaded today is the same as I downloaded yesterday (or whether the code that I get is the same as the one my coworker is getting) you basically have to trust Google.

The checksums are published in a transparency log, which uses a Merkle Tree[1] to make the attack you describe detectable. Source Spotter, which is unaffiliated with Google, continuously verifies that the log contains only one checksum per module version.

If Google were to present you with a different view of the Merkle Tree with different checksums in it, they'd have to forever show you, and only you, that view. If they accidentally show someone else that view, or show you the real view, the go command would detect it. This will eventually be strengthened further with witnessing[2], which will ensure that everyone's view of the log is the same. In the meantime, you / your coworker can upload your view of the log (in $GOPATH/pkg/sumdb/sum.golang.org/latest) to Source Spotter and it will tell you if it's consistent with its view:

  $ curl --data-binary "@$(go env GOPATH)/pkg/sumdb/sum.golang.org/latest" https://gossip.api.sourcespotter.com/sum.golang.org 
  consistent: this STH is consistent with other STHs that we've seen from sum.golang.org

[1] https://research.swtch.com/tlog

[2] https://github.com/C2SP/C2SP/blob/main/tlog-witness.md


Not really.

For the question “is the data in the checksum database immutable” you can trust people like the parent, who double-check what Google is doing.

For the question “is it the same data that can be downloaded directly from the repos” you can skip the proxy to download dependencies, then do it again with the proxy, and compare.

So I'd say you don't need to trust Google at all in this case.


ok, I guess I was wrong about the cache, but not the checksums. I was somewhat under the impression that it was cached forever due to Go getting rid of vendoring. Getting rid of vendoring (to me) only makes sense if it's cached forever (otherwise vendoring has significant value).

Go modules did not get rid of vendoring. You can do 'go mod vendor' and have been able to do so since Go modules were first introduced.
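
For reference, the vendoring workflow is just:

  go mod vendor   # copies all dependencies into ./vendor
  go build ./...  # recent Go versions use ./vendor automatically when present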

How long the Google-run module cache (aka the module proxy or module mirror) at https://proxy.golang.org caches the contents of modules is, I think, slightly nuanced.

That page includes:

> Whenever possible, the mirror aims to cache content in order to avoid breaking builds for people that depend on your package

But that page also discusses how modules might need to be removed for legal reasons or if a module does not have a known Open Source license:

> proxy.golang.org does not save all modules forever. There are a number of reasons for this, but one reason is if proxy.golang.org is not able to detect a suitable license. In this case, only a temporarily cached copy of the module will be made available, and may become unavailable if it is removed from the original source and becomes outdated.

If interested, there's a good overview of how it all works in one of the older official announcement blog posts (in particular, the "Module Index", "Module Authentication", "Module Mirrors" sections there):

https://go.dev/blog/modules2019#module-index


ok, 1) so would it be fair to modify my statement to say that it basically tries to cache forever unless it can't determine that it's legally allowed to cache forever?

2) you're right, glanced at Kubernetes (been a long time since I worked on it) and they still have a vendor directory that gets updated regularly.


> We'll probably see this start to come together in 2026.

Thus far I see no evidence that robot manipulation will come together by 2036, let alone 2026.


I think manipulation will come long before 2036, but the people doing high-level planning on LLMs (trained on forum discussions of Chucky movies and all kinds of worse stuff) and planning for home robot deployment soon are off by a lot. Things like random stuff playing on TV can rehydrate memories that were mostly wiped out in RLHF; it will need many extra safety layers.

And even if it isn't just doing crazy intentional-seeming horror stuff, we're still a good ways off from passing the "safely make a cup of coffee in a random house without burning it down or scalding the baby" test.


I dunno. The folks at Physical Intelligence are showing remarkable progress for being such a small team and relying on Gemma as their base model.

https://www.pi.website/blog/pistar06 has some reasonable footage of making espresso drinks, folding cardboard boxes, etc.


That's what I was thinking, but could not find the link. Here it is working on some standard tasks.[1] Grasping the padlock and inserting the key is impressive. I've seen key-in-lock before, done painfully slowly. Finally, it's working.

That system, coupled to one of the humanoids for mobility, could be quite useful. A near term use case might be in CNC machining centers. CNC machine tools now work well enough on their own that some shops run them all night. They use replaceable cutting tools which are held in standard tool holders. Someone has to regularly replace the cutting tools with fresh ones, which limits how long you can run unattended. So a robot able to change tool holders during the night would be useful in production plants.

See [2], which is a US-based company that makes molds for injection molding, something the US supposedly doesn't do any more. They have people on day shift, but the machines run all night and on weekends. To do that, they have to have refrigerator-sized units with tools on turntables, and conveyors and stackers for workpiece pallets. A humanoid robot might be simpler than all the support machinery required to feed the CNC machines for unattended operation.

[1] https://www.pi.website/blog/olympics

[2] https://www.youtube.com/watch?v=suVhnA1c7vE


> A humanoid robot might be simpler than all the support machinery required to feed the CNC machines for unattended operation.

A humanoid robot is significantly more complicated than any CNC. Even with multi-axis, tool change, and pallet feeding these CNC robots are simpler in both control and environment.

These robots don't produce a piece by thinking about how their tools will affect the piece; they produce it by cycling through fixed commands, with all of the intelligence of the design determined by the manufacturer before the operations.

These are also highly controlled environments. The kind of things they have to detect and respond to are tool breakage, over torque, etc. And they respond to those mainly by picking a new tool.

The gulf between humanoid robotics in uncontrolled environments and even advanced CNC machines like these (which are awesome) is vast. Uncontrolled robotics is a completely different domain, akin to solving computation in P by a rote algorithm vs. excellent approximations in NP by trained ML/heuristic methods. Like saying any sorting algorithm may be more complex than a SOTA LLM.


Most flexible manufacturing systems come with a central tool storage (1000+ tools) that can load each individual machine's magazine (usually less than 64 tools per machine). The solution to the problem you mention is adding one more non-humanoid machine. The only difference is that this new machine won't consume the tools and instead just swaps the inserts.

There is literally no point in having a humanoid here. The primary reason you'd want a human here is that hiring a human to swap tools is extremely cost effective since they don't actually need to have any knowledge of operating the machines and just need to be trained on that one particular task.


> It’s a tool. Let’s stop judging it like it’s supposed to be one of us.

Tech CEOs and their breathless AI hype have demonstrated to everyone how dreadfully effective it is to weaponize anthropomorphization and pareidolia.


> Tech CEOs and their breathless AI hype have demonstrated to everyone how dreadfully effective it is to weaponize anthropomorphization and pareidolia.

Read my paper. It's called "AI Consciousness: The Asymptotic Illusion of Artificial Agency—A Quantum-Topological Proof of Consciousness Non-Computability via Orchestrated Objective Reduction and Penrose-Lucas Arguments". You'll find it on Academia & Zenodo.

It pretty much shuts that crap down into fantasy land where it deserves to stay.

Here's the abstract:

This paper challenges the prevailing assumption that computational escalation can bridge the ontological gap between artificial intelligence and genuine consciousness, presenting the "Asymptotic Illusion Thesis": simulation, regardless of fidelity, cannot transmute into phenomenal reality due to a fundamental topological obstruction. We establish that neural networks operate within Class P (Polynomial) complexity—deterministic, syntactically defined systems—while conscious experience maps to Class NP (Nondeterministic Polynomial) complexity, possessing an interiority that resists algorithmic compression. This is not a technical limitation but a geometric mismatch: the linear structure of computation fundamentally cannot access the high-dimensional manifold of qualia. Our analysis demonstrates that organizational closure, not code, defines sentience. Biological systems exhibit autopoietic constraint-generation in continuous thermodynamic struggle (f(f)=f), creating intrinsic necessity absent in artificial systems governed by programmed teleology.

We invoke Penrose-Lucas arguments and Orchestrated Objective Reduction (Orch-OR) theory to establish that genuine free will and agency require non-computable quantum processes in neuronal microtubules—a computational capacity Turing Machines cannot replicate. Recent neuroscientific debunking of "Readiness Potential" signals vindicates top-down veto control as the locus of volition, supporting quantum indeterminacy models of free will. The hard problem of consciousness—the explanatory gap between physical processes and subjective experience—represents a topological barrier, not an engineering problem. Current large language models represent "Stochastic Parrots" proficient in syntactic manipulation but devoid of semantic comprehension. Our conclusion: the consciousness-computation boundary is physical and absolute, necessitating a reorientation of technological development toward preservation of biological consciousness rather than simulated existence.


I always upvote microtubules in the brain acting as an antenna for consciousness. Or something like that

> Read my paper

By the time someone does it’s too late.

Even the few who do read about anthropomorphization still have to override their subconscious reaction. We're all human, after all.

The market may yet settle on a different form factor that won’t trigger the ick.

Flat tabletop with robot legs??


Maybe an Octopus-platypus looking thing with no face? I honestly don't know.

I mean HAL is starting to look mighty fine right now. "Hello Dave"

