evanwolf's comments | Hacker News

Beyond demography, much of this depends on public policy and execution. Will more of us live in conditions that prevent avoidable death or injury? Or will it go the other way?


Wasn't there something about Jobs taking back employee stock just before he sold NeXT to Apple?


sometimes it seems folks are just making up words.


neuromorphic hardware is just hardware that has biologically inspired designs.

spiking neural networks are artificial neural networks that actually simulate the dynamics of spiking neurons. rather than sums, ramps and squashing, they simulate actual spike trains and the integration of energy that occurs in the dendrites.

neuromorphic hardware can range from specialized asics for doing these simulations efficiently to more experimental hybrid analog-digital systems that use analog elements to do more of the computation.
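for a concrete (if heavily simplified) picture, here's a toy leaky integrate-and-fire neuron in python. all parameters are made up for illustration; real snn simulators and neuromorphic chips are far more sophisticated:

```python
def lif_spikes(input_current, tau=20.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Toy leaky integrate-and-fire neuron: integrate input over time,
    emit a spike when the membrane potential crosses threshold, then reset.
    Parameters are illustrative, not from any particular chip or paper."""
    v = 0.0
    spikes = []
    for i in input_current:
        # leak toward rest and integrate the input current
        v += dt * (-v / tau + i)
        if v >= v_thresh:
            spikes.append(1)
            v = v_reset
        else:
            spikes.append(0)
    return spikes

# a constant drive yields a regular spike train
train = lif_spikes([0.15] * 50)
print(train)
```

the point is that the output is a train of discrete events in time, not a single squashed sum per forward pass, which is what makes both the hardware and the training story so different.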

it's all very cool stuff, but i tend to think of snns as similar to the bird-like wings on the avion 3, while today's simplified unit functions look more like a modern jet wing.

but who knows, maybe the neuromorphic route will open the door to far more efficient computations. personally, i'm very excited about potential wins that could come from novel computational substrates!


I wonder how far we will move the goalposts once we have a multimodal transformer type model running on neuromorphic hardware.


Lots of people involved in explaining away AI are labouring under the axiom that intelligence is mysterious. Therefore, if I can understand how a system works, it logically follows that it can't be intelligent.


I predict that many of those people will continue to believe that up until human cognition is mechanistically understood, at which point there will be some other reason that humans are "real" thinkers and machines are not. The problem is that theoretical opposition to the existence of AIs is incompatible with materialism, and so it just doesn't fit with our world, which is very much built on the scientific truths that materialism enables us to discover.


It is insane to me that views of consciousness and cognition other than physicalism still exist in mainstream scientific and philosophical discourse. As far as I can tell, no matter how much language you dress it up in, any alternative boils down to "it's magic, I ain't gotta explain shit".


We… can’t really understand how neural networks work, but we can definitely tell they’re not intelligent beyond making good-sounding word soup (as demonstrated by their minimal practical reasoning abilities).

I wouldn’t call PageRank intelligent, even though I can give it a text prompt and get relevant information back.

In my view, the only difference between that and an llm is the natural language interface.

I’m no expert on intelligence, but I’d expect being able to introspect and continually learn to be part of it.


You're engaging in explaining away intelligence.

One way to help you notice this is to try and estimate how many billions of people you've defined out of "being intelligent" with your latest goalpost movement.

Be honest, how many people do you think "introspect and continually learn" on a daily basis?


> Be honest, how many people do you think "introspect and continually learn" on a daily basis?

That's wild if you think that isn't quite literally one of the defining features of human consciousness (and many would say other animals as well).

If you think people thinking differently than you means they don't still indeed...think...then I don't know what to tell you.


Unfortunately for the intelligence denial crowd, introspection and learning capability is something we can measure, as opposed to the vibes-based discourse you prefer to engage in. If that's what you've picked for your threshold of "intelligence", you've reduced the majority of the bell curve's left side to soulless automatons. Again, your definition, not mine.


My definition (again, as a layman on the subject of intelligence philosophy), but your incorrect analysis.

I guess I just think more highly of my fellow humans.


> I guess I just think more highly of my fellow humans.

As an article of faith, yes. But I don't see what this adds to the discussion.


> If that's what you've picked for your threshold of "intelligence", you've reduced the majority of the bell curve's left side to soulless automatons

I disagree with this statement, which is the crux of their argument against my definition of intelligence.

I don’t think any credible survey of the intelligence, or lack thereof, of a large enough population exists (since there is no common binary measure of intelligence), so it’s an issue you kind of need to take on faith.


>so it’s an issue you kind of need to take on faith.

Thanks for playing.


It cuts both ways… you’d need to believe that most humans aren’t actually intelligent, and we don’t have any data suggesting that.


you have no data that suggests I can't fly either.


… but… we do have data that proves that humans can’t just fly?

I think… I’m done talking with you now.


you just like your assumptions to go unchallenged. goodbye!


It’s crazy to me that people would rather believe that we can create intelligence by feeding the text of the internet into a statistics machine, than believe that the people making that text are intelligent.


It's crazy to me people would rather deny the intelligence of a large segment of the human population than admit to the increasing overlap between it and AI.

That's what you're doing when you keep moving the goalposts of "real intelligence" further and further right on the bell curve. You're denying the intelligence and consciousness of billions of people (and counting) just so you don't have to admit there's nothing magical about intelligence.

Sometime in the next 10 years, you'll have to start thinking of yourself as a soulless automaton to keep up the delusion. Good luck with that.


> It's crazy to me people would rather deny the intelligence of a large segment of the human population than admit to the increasing overlap between it and AI.

You’re the one here denying. I think the vast majority (if not all) of humans are intelligent under my definition. You do not.

I don’t think LLMs or other statistical models are.


Under the latest definition you made up on the spot, yes. And definitely not all.

So what's your plan when the fraction keeps shrinking? When you're no longer in it?

This is simple interpolation. It is plainly obvious that at some point soon, you will be faced with the fact that there's nothing magical about intelligence. When that happens, will you concede that, or start thinking of yourself as a soulless automaton?

If you can't project that far forward, I question whether you meet any meaningful definition of "intelligent" right now.


> So what's your plan when the fraction keeps shrinking?

What fraction? How would it shrink?

I don’t think that humans, as a species, are becoming non intelligent en masse. In fact, I think that we are, by default, intelligent.

That’s where our opinions seem to irreconcilably differ.

> you will be faced with the fact that there's nothing magical about intelligence.

I don’t think there’s anything “magical” about anything. I just don’t think that a statistical model can achieve intelligence as we think of it with regard to humans.

You may see the recent trend of text generation models as new intelligent machines, but I’ve been studying and working with these kinds of statistical models for about a decade (since 2016) and have seen these opinions spouted, only to quiet down once the logarithmic improvement curve is reached. I don’t see any reason why these LLMs wouldn’t follow the same pattern.

> This is simple interpolation

Interpolation of what? You’re assuming that the goalpost will always shift, but in reality we just don’t have a generally agreed upon definition at all. Either way, any definition of intelligence that rules out the majority of humanity is incorrect off the bat, as pretty much all humans are intelligent.

There exists some accurate definition of intelligence such that almost any human satisfies it, but statistical models do not. I’m sure if I studied the philosophy of intelligence I could put one into words, but I’m ill equipped to do so.

> If you can't project that far forward, I question whether you meet any meaningful definition of "intelligent" right now.

Are you just trying to be mean, or do you actually believe that people who disagree with you are not intelligent?

We’ll see in 5 years that this intelligence hype will fade just like the last 2 AI booms.

This isn’t at all to say that we will never make a machine with intelligence that rivals humans, just that I don’t think the statistical model route will get us there… and it hasn’t.


One last pretty funny thing:

Here’s a short ChatGPT convo about how personification bias can cause people to believe that statistical models are intelligent. I think that’s what’s fooling so many people.

https://chatgpt.com/share/670549a3-2f9c-8001-81c1-d950c626ad...


> how many people do you think "introspect and continually learn" on a daily basis?

At the very least, every single person who plays sports, video games, tries finding a way around traffic, a faster route home, a way to do less work, take a longer break, or a way to save some extra money getting food.

Literally any optimization task requires observation, analysis (read: introspection), and adjustment. That’s why we model training loops as optimization problems.
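That observe/analyze/adjust loop is exactly the shape of a basic optimization routine. A toy sketch (the loss function and step size are made up for illustration, not any particular training setup):

```python
def optimize(steps=100, lr=0.1):
    """Minimize (x - 3)^2 with plain gradient descent: each iteration
    observes the loss, analyzes it (the gradient), and adjusts x."""
    x = 0.0
    for _ in range(steps):
        loss = (x - 3) ** 2   # observe: how far off are we?
        grad = 2 * (x - 3)    # analyze: which direction reduces the loss?
        x -= lr * grad        # adjust: take a step in that direction
    return x

print(optimize())  # converges toward 3
```

The key property is that each step actually depends on feedback from the previous one, which is the part I don't think token-by-token generation reproduces.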

We spoof that with ReAct prompts in LLMs, but it becomes clear after a few iterations that there’s no real optimization going on, just guessing at tokens (a gross oversimplification, as this guessing has real uses). It’s doing what it was trained to do: complete text. Not to mention that those steps all disappear when the prompt is changed.


> One way to help you notice this is to try and estimate how many billions of people you've defined out of "being intelligent" with your latest goalpost movement.

love this, I will use this in future rants.


That argument only works if your audience already thinks of humans as mostly automatons…


It's an operational definition: if you claim AI is not intelligent because it cannot do X, then you necessarily exclude a whole lot of humans who also can't do X.

There used to be a strident faction that would say "but AI can't produce original art/a symphony/novel/etc". My answer was usually (correctly), "neither can you."


> It's an operational definition: if you claim AI is not intelligent because it cannot do X, then you necessarily exclude a whole lot of humans who also can't do X.

Sure, but I think most people are intelligent according to my definition, but AI is not…

You’re already coming from the assumption that people are “soulless automatons,” which is probably why the idea of a machine being “intelligent” is so easy for you to accept.

> There used to be a strident faction that would say "but AI can't produce original art/a symphony/novel/etc". My answer was usually (correctly), "neither can you."

This is a dumb apples and oranges comparison. AI as a concept is different than a concrete person.

AI as a concept can do anything, it’s a conceptual placeholder for an everything machine.


I can’t reply to the other comment, but “soulless” was to quote the other commenter. Having a soul (whatever that might mean) holds no bearing on what I’m saying.


again, do they obey the laws of physics? can they decide to go against what the physical interactions in their brain guide them to do?


> You’re already coming from the assumption that people are “soulless automatons”

do your people obey the laws of physics? is the soul magical or physical?


I think there is a difference between people upset about overhyped LLMs and people arguing about intelligence in "A.I.". Most of the "intelligence" arguments I've seen are fighting against putting too much stock in ChatGPT and Sam's fever dreams.


the goalposts for what?


Literal goalposts in a game of football, of course. Or soccer if you are an American.


thanks. you seem to think that a spiking multimodal variant of transformers on neuromorphic hardware would demarcate a goal of some sort, which one?

as far as i can see, the achievement would just be a spiking multimodal variant of transformers on neuromorphic hardware.


I bet you are great at playing blackjack, but suck at Texas hold 'em.


To be fair, all words are made up.

Words are useful to the extent they effectively communicate with the intended audience.

This can be accomplished by a mix of familiarity (has this word already been used enough in the target audience with the intended meaning?) and the ability to evoke new meanings by intuitive derivation rules (word composition, affixes, ...).

In the case of this title, fwiw, it was perfectly clear to me what this was about because I'm already familiar with related topics and they were using the same terminology


And even with a willingness to make up words, it’s STILL hard to name tech projects uniquely: https://github.com/sorbet/sorbet


I’m flashing back to bapi


Yes, it looks like this project is starting by helping highly motivated adult learners go deep into hard-to-teach material. Contrast this with the Khan Academy approach at https://www.khanmigo.ai/ targeting young students and their teachers and parents with broad assistance across subjects. Maybe they converge?


Yes. The act of parsing code for yourself, gisting it, and thinking about it - that's what makes you think better and build skill. Discussing why, and different ways of doing it, with others - that builds team wisdom. AI might help with organizing a review, timeboxing, and clerical support, but the actual work of the review should be left to people.


How much do domain expertise, teamwork, management/leadership, and soft skills matter in senior software interviews?


there's legal accountability and there's moral choice. would you screw over contractors and investors to protect yourself from your decisions?

the contractors aren't the only stakeholders at risk in this choice. how do your investors feel about this situation? what's their moral compass?


Thanks for this, Nathan. Orphaned devices pose a suite of security problems. They outlast the companies that sell them, the companies that make them, the upstream suppliers of hardware and software, the companies that service and repair them. Smart building devices and power systems can run for decades. Implanted medical devices, home health devices, and hospital systems persist longer than five years and can outlast the corporations behind them.

Please address orphaned products so that security continues, with a duty by the maker to sustain safety and security beyond the life of a product or its manufacturer. This is like requiring a sale-time deposit into an independent fund to reclaim/recycle a product's waste.

Beyond the current proposal, you might require a device's IP to be put in escrow in the event of product or corporate end-of-life, allowing customers or third-parties to take up maintenance and security. (#RightToRepair #EoL)


Thanks for your response! This would be an excellent comment on the record, and implanted devices are a particularly compelling example considering cases such as Second Sight.


Nearly all business bank accounts are connected to standard industry codes and other descriptions.


This is insurance for their already huge investments (and future rounds) in AI businesses. It's a pittance relative to everything else they have at risk. It's prudent since failure of small projects like these can delay work on profitable corporate efforts.


This is a good point I hadn’t considered.

