General intelligence is an ability to cope, adapt and thrive in an ecology: to start from a limited set of capabilities, and via exploration, acquire a rich competence. To develop conceptualisations, techniques of coordination and control, to form novel goals and strategies to realise them, and so on.
General intelligence is a strategy to defer the acquisition of abilities from the process of construction/blueprinting (i.e., genes, evolution...) to the living environment of the animal. The most generally intelligent animals are those that acquire nearly all of their sensory-motor skills during their lives -- we learn to walk, and so can learn to play the piano, and to build a rocket.
There is a serious discontinuity in strategies to achieve this deferral: the kinds of processes which "blueprint" the intelligence of a bacterium are discontinuous with the processes by which a living animal dynamically conceptualises its environment under shifts to its structure.
For the latter, animals need: living adaptation of their sensory-motor systems, hierarchical coordination of their bodies, robust causal modelling, and so on.
General intelligence is primitively a kind of movement, which becomes abstract only with a few hundred thousand years of culture. The earliest humans, able to linguistically express almost nothing, were nevertheless generally intelligent.
Present computer-science-led investigations into "intelligence" assume you can operate syntactically across the most peripheral consequences of general intelligence, given by linguistic representations. This is profoundly misguided: every toddler necessarily must learn to walk. You cannot just project a slideshow of walking and get anywhere. And if you remove this capability and install a "walking module", you've removed the very capabilities which allow that child to then do anything new at all.
There is nothing in the linguistic syntactical shadow of human intelligence to be found in creating generally capable systems. It's just overfitting to our 2024 reflections.
Maybe that would be a suitable working definition of general intelligence, and props to you for even giving a definition at all (in contrast to TFA). However, your definition seems almost tailor-made to exclude present and near-future AI (and, I suspect, motivated thereby). Current AI works by being trained on large amounts of existing data. If current AI were real intelligence, we would be sad; therefore real intelligence is the opposite of intelligence trained on large amounts of data.
Having said that, one can also make the case that LLMs start from a limited set of capabilities and, via exploration, acquire a rich competence. Only these are linguistic abilities and the exploration is exploration of a linguistic environment. Maybe the real intelligence is the friends we made along the way i.e. the general class of algorithms roughly called "backpropagation and gradient descent on a very high-dimensional neural network".
The most meaningful definition of intelligence is one that captures the essence/nature of human/animal intelligence, which is where the word originated.
I think you can get to the core of it by considering the evolutionary benefit of intelligence - what beneficial behavioral capability has been optimized - which comes down to being able to utilize past experience to predict/plan future outcomes, rather than being locked into reactive behavior patterns like simpler animals.
LLMs, trained to predict based on past "experience", might (perhaps charitably) be considered to exhibit some intelligence, but where they notably fail is in situations where better prediction (utilization of prior experience) requires a process more similar to search with backtracking than a linear application of rules derived from the training data - i.e. in the areas of reasoning and planning.
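To make that contrast concrete, here is a toy sketch in plain Python (my own illustration, not a claim about any model's internals; the subset-sum task and function names are chosen just for the example). A single forward pass that commits to each choice and never revisits it can get stuck, while depth-first search with backtracking can undo a bad commitment:

```python
# Toy contrast: "linear application of rules" vs. "search with backtracking",
# illustrated on subset-sum (find numbers summing exactly to a target).

def greedy_subset(nums, target):
    """Single forward pass: take each number if it still fits, never revisit."""
    picked, total = [], 0
    for n in nums:
        if total + n <= target:
            picked.append(n)
            total += n
    return picked if total == target else None

def backtrack_subset(nums, target, i=0, picked=None):
    """Depth-first search: try a choice, and undo it if it dead-ends."""
    if picked is None:
        picked = []
    if target == 0:
        return list(picked)          # found an exact combination
    if i == len(nums) or target < 0:
        return None                  # dead end on this branch
    # Branch 1: include nums[i]
    picked.append(nums[i])
    found = backtrack_subset(nums, target - nums[i], i + 1, picked)
    if found is not None:
        return found
    picked.pop()                     # backtrack: undo the inclusion
    # Branch 2: skip nums[i]
    return backtrack_subset(nums, target, i + 1, picked)

nums, target = [5, 4, 3], 7
print(greedy_subset(nums, target))     # greedy commits to 5, then gets stuck: None
print(backtrack_subset(nums, target))  # search recovers from the dead end: [4, 3]
```

The greedy pass fails here precisely because an early commitment (taking 5) forecloses the solution; only the ability to retract that commitment and re-explore finds [4, 3]. That retract-and-retry structure is what the comment above means by a process "more similar to search with backtracking than a linear application of rules".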
You can try to put lipstick on the pig by adding RL-based post-training or wrapping the LLM in an agentic loop, trying to extract more value out of the training data and gain some semblance of reasoning, but at the end of the day it's still a pig - at heart just an expert system not a cognitive architecture.
Another obvious limitation of LLMs is that they are just a repository of canned knowledge/rules, with no ability to learn from "runtime" experience, and therefore lacking the ability to learn to handle novel problems by experimentation and adaptation to failure.
The limited intelligence of LLMs is firmly baked into their architecture -- the transformer being just a pass-through model -- as well as into the way they are trained, by SGD rather than by an algorithm capable of continuous incremental learning.
It's tailor made to describe the phenomenon of animal intelligence that we're trying to model.
The only tailoring which goes on is by those who say, "we can only do X, therefore Y must be defined in terms of X". It's a deeply pseudoscientific approach to investigation, as it completely abandons a scientific theory of an empirical phenomenon in favour of a purely circumstantial account given in terms of what tools we have to hand.
I can think of no other area where such an approach to investigation is permitted.
> General intelligence is an ability to cope, adapt and thrive in an ecology: to start from a limited set of capabilities, and via exploration, acquire a rich competence. To develop conceptualisations, techniques of coordination and control, to form novel goals and strategies to realise them, and so on.
One thing I've learned over the past few years is that _nobody_ knows what intelligence is, and it may not even exist as a genuinely measurable attribute. What you've described is certainly _a thing_ that is worth describing and thinking about, but it doesn't encompass everything that we think of as intelligence, and it ascribes intelligence to processes which most of us don't think of as intelligent (i.e., evolution and plant life).
The problem we have is that for the entire history of humanity, there has been a single example of something that "thinks like us" and our conception of what it means to be "intelligent" or to have "reason" or to "think" is just inextricably tied with all the other attributes that make us human.
I'm not at all sure that intelligence _must_ arise out of a process of evolution and natural selection, and I think that it may be possible to create an intelligent entity which completely lacks the ability to survive in an ecology on its own.
On the other hand, it's often said that while humans domesticated plants and animals, those same plants and animals also domesticated _us_. Human life rearranged itself around the requirements of farming and animal husbandry, and just in terms of pure biomass and range of habitat, becoming "domesticated" was a tremendously successful evolutionary strategy for the animals that we domesticated.
Human society is now _again_ re-arranging itself in order to take advantage of AI, and we're spending a lot of money and labor building these systems and maintaining them. It's hard to say that they haven't adapted themselves to surviving in an ecology, it's just a more abstract ecology than the sort of blood and claw ecology that we evolved in.
In some sense, these AIs are the ultimate expression of "memetic evolution" -- ideas that are able to spawn new ideas, without having any meaningful embodiment at all.
Thank you for being sensible. We have so many "intelligence professors" who can conveniently dismiss SoTA AIs with a sleight of hand (Chollet, LeCun, etc.), yet completely ignore the superintelligent traits these AIs already exhibit.
FYI: SoTA AIs can think and reason. They're far far away from mere memorization & retrieval. They aren't human yet, nor do we expect them to be. They'll just keep getting ever better in their own ways, and become super duper useful for practically everything as computer buddies first, agents next, robots next next.
The "intelligence professors" I'm hoping will at some point shut up and accept that "functional/universal approximation" is all you need, which is abundantly done by neural nets of today.
Human nature is to have a moral compass (conscience), with a mind to contemplate whether our planned behavior is positive or negative for the happiness of those around us, and a free will to then choose which paths to take.
What you describe is all present in animals, to a more or less advanced degree: chimps and crows can use tools and even pass that knowledge on to others. Yes, our bodies follow the mammalian template, and with it come our baseline tendencies to form packs to fight against other packs and fight for dominance within the pack, all in order to enjoy more physical pleasure. We inherit that, but we are capable of rising above those animalistic urges to become humanitarians, who consider the happiness of others, and even the whole, in their decisions.
Further, with our advanced being and our free will, we are able to self-evolve our ideals, attitudes, and behaviors, in EITHER moral direction: either towards a more selfish, brutal, and callous competitive state, or towards a more selfless, compassionate, and caring cooperative state. The former leads to where we are now in human history; the latter, rarely exercised, leads to our highest potential, returning us to a happy, prosperous, environmentally concerned human race with various cultural differences but united in the success of each and every person, should they choose to participate.
So, no, no computer logic engine at present can in any way mimic the totality of "human intelligence" because very few people understand human nature, so they can only literally "ape" it. They can't even approach simulating what is going on within us, the only moral beings on this planet, and the only beings here with the power to consciously change ourselves and our environment, for better or worse, it all being our choice, however subconscious and inertial for the vast majority.
It is going a little under the radar, but AI is really throwing the religions into a tizzy now that they are being forced to think about these things. In the good old days, you could just trust the bible to say how we were created.
Now people are being forced to think about it for the first time. Where do morals come from? Do they even exist? Who am I? What am I doing here?
It's like a whole new generation having an existential crisis.
If you dare, you can read my recent comment history for the explanation, but I doubt you're going to like it. And it is the explanation. Someone had to do it!
True, but our consciousness comes with an integral moral compass that we are completely free to utterly ignore or even act in opposition to. It's our gift and our responsibility, and our absolute choice, for good or ill.
Is it actually integral? I’m not sure. Try asking a person whether some arbitrary scenario is right or wrong and why. A good null hypothesis is that, for people with no training in morality or ethics, responses will be uniformly distributed over outcomes.
Yes, but we can choose to ignore it. Ignorance of our human nature potentials is our choice, too, and that's the reason for the inertia of the world's societies, including ones that claim to be religious.
The key is that, just as our physical bodies have developmental stages of ever-increasing capability, so does our moral compass. We must learn how to not only use it, but to develop it and fine-tune over time, and we must train and use our mind to self-evolve ourselves.
Note that we can use our free will to de-tune it as well so that we have pointed ourselves in the direction opposite to our happiness, and that of those we come in contact with. In other words, we are free to use our abilities to create unhappiness, out of sadistic pleasure.
The first step of the spiritual path is awakening to this highest of human potentials, where we learn to willfully, and with difficult effort, develop our moral compass in the direction of compassionate concern for the well-being of our fellow human beings. This is why the selfish -- to themselves and their in-group -- people of the world are loath to hear the term "woke"; they take pleasure in remaining ignorant of both the unhappiness they cause and thus karmically receive by their selfish actions. And the same impulse within us that seeks to keep us ignorant of our highest human potential is also keen to keep its ideals, attitudes, and behaviors off other people's radar. The development of a selflessly compassionate morality becomes like garlic to a vampire (and they do suck, and suck the life-blood out of the world's systems and people, for sure, causing so much misery by their efforts).
So, yes, our moral compasses are each in various states of development, from the utterly ignored to the sometimes-positive-sometimes-negative to the fully developed. As usual, such distributions follow a bell-curve-ish shape. But only the long tail in the positive direction understands and manifests the highest morality. The middle bulk are hit or miss, per their cultures' predilections and the circumstances of their lives. And the negative tail are the utterly selfish evil bastards of the world.
You can find a full explication of our human nature to self-evolve and the process of manifesting such change in the deep dialog I had yesterday under my comment to Maria Konnikova's poetry article submission. You have to skip past my initial reply and its grand-reply to get to the meat of it.
We fight to establish hierarchies of dominance called "monopolies of violence", to which we have social allegiance. If a competing "dominance regime" is in our neighbourhood, we draw territorial boundaries -- and if these fail, riot, and if that fails, kill.
The strategy of intragroup 'mutual aid' is common across the animal kingdom -- and is paired with hostility to 'foreign aid' in its literal sense.
The achievement of the modern world is massive amounts of abundance, which increases our generosity beyond typical chimpanzee proportions -- but not by much. And upon a single attack, or moment of scarcity, we return exactly to our genocidal defaults -- which is to say, group-centred violence.
Abundance and 'wars only on our borders' create a dangerous illusion of equanimity which is really just "the feeling of an ape fat, tired and safe".
Yes, my recent comment history contains the explanation for why we have the potential for all of that but also to be true humanitarians. The short of it is that we can choose to learn how to be better by actually changing ourself, and then being a positive force in the world.
It is the Way, but we must choose it, after first seeking to escape our natural ignorance to the possibility.
That sentence literally makes no sense, obviously coming from the spoiled mind of a coddled rich kid.
The fact is that you chose to write what you wrote, for good or ill. I spent all day yesterday explaining the truth of our moral existence to y'all here, conversing with a very fine fellow who has a bit of knowledge. It brought me a joy that no one else on this site has ever felt. It was electric, sans drugs of any kind, and only a couple of sips of coffee all day, which is very rare for me.
No, there's a force within you that will work its damnedest to get you to quit reading it before you get to the bottom. It starts in that post about Maria Konnikova's poetry article, but that's not the important part, or even my grand-reply (reply to my initial reply). Most people don't have the intellectual curiosity and bravery to read such utterly new information, but if you can read drivel from AS, you can make it through one (rather long) page of mine.
I triple-dog dare ya ;-)
And remember, ignorance of the truth is a human vice that we must fight and defeat, in order to choose the better path, the Path of Love. Giving in to ignorance is a choice between good and evil, my friend. I hope you choose well, but you'll likely choose to rebel against the truth, and instead keep believing the lies that have been told to you, which is our body's monkey-inheritance.
Happy choosing! I truly wish you all success and happiness in this world, but that latter one is dependent upon our learning and manifesting the truth, my friend.
Is choice a fact? The point of the quote is that we do not know. It SEEMS like we have free will but this could be an illusion. Where do our desires and motivations come from? I have always had a strong desire to build things, either physical objects or mental ones (like code). These desires push me to make choices in my life, like pursuing the career I have. But did I choose my motivations? I’m not sure. I don’t remember choosing them. They are just feelings I have, and in some cases, can’t remember not having.
I am a scientist. Truth has a specific meaning to me. I’m not sure we share the same definition. That does not make either of us “evil.”
My friend, what you are saying is that you do not know. You have no idea what I know, or even what I can know.
Truth is all that exists in the universe. We are the information processors of this universe, and "knowing thyself" is part of our design, but, because of our free will, we can choose to remain ignorant. And, by knowing ourselves (a long process), we also learn other things about the universe. At some point, we can actually have access to the very deepest truths that can be known by human beings. (There do remain some topics that are unknowable, but we can never exhaust the knowable, so simply knowing that some specific unknowables exist shall have to suffice.)
Our motivations are a combination of our physical predilections and our cultural and personal upbringing. No, we're not going to have a memory of our every motivation, but we are capable of at least gaining an understanding of what they are currently.
If you wish for the truth as explained by the "Sufi Science of Soul Transformation", follow the comment dialogue I referenced above. It requires a brave curiosity and ability to integrate very unfamiliar concepts; it is really akin to how Eugene Parker's solar wind theory shocked the world of astronomy, but on a far more important topic.
To really "know" if fire is dangerously hot, you have to test it; no one else's experiment is enough to truly convince those of us who have any skepticism within us.
It is the same with the spiritual path, except that we contain a force inside us that tries its best to keep us ignorant of the possibility that self-evolution into total compassion is not only possible, but the best way forward for each of us. The only way to escape that ignorance is to light the flame and feel its effects, and it is each our choice. Most people are simply content with what is familiar to them: their heritage, their cultures, our intrinsic ignorant nature.
The lack of compassion in this world is why it has been so historically f_cked, and getting ever more so, in many ways.
Evil is borne of selfishness that refuses universal compassion. Not all of those Nazi death camp guards were killing anyone themselves, but they sure contributed to the evil. We are all choosing sides, even if by default via our cultural inertias. We must choose to begin transforming ourselves into universally compassionate human beings, or else we have sided with either the deliberately evil or those callous to the evil they cause, which is a kind of evil. Entering the Path of Love is the only way to see this fact clearly and know how we are all choosing sides, whether we know it or not. The default value of a .NET int variable is always 0; you must change it to a 1 to make it a 1. To be not evil (in some measure, however much by default) means to enter the Path of Love deliberately, because our default state is willful ignorance, and the masses upon masses of ignorant people are ruining this beautiful garden.
I wish you well, my fellow scientist (I have been such a one since 1st grade). It's your choice, the same as the rest of us.
What you present as a grand narrative of self-discovery, truth, and compassion is, in many ways, an idealistic interpretation of human existence that oversimplifies complex realities. While I can appreciate the depth of your convictions, the perspective you offer assumes a universal applicability of your framework—one that may not align with everyone's experience, philosophy, or epistemological approach.
To begin, claiming that "truth is all that exists in the universe" reduces the richness of existence to a monolithic pursuit. Truth, as a concept, is subjective and shaped by individual, cultural, and temporal contexts. What you call "truth" may resonate with you, but it risks dismissing alternative ways of understanding the universe, such as those rooted in skepticism, pragmatism, or even nihilism. These frameworks are equally valid, as they acknowledge the limitations of human cognition and the constructed nature of meaning.
The notion that self-knowledge leads to universal truths about the cosmos assumes a direct correlation between introspection and external understanding. While self-awareness is undoubtedly valuable, it does not guarantee access to the deepest truths of the universe. Human cognition is bounded by our biology, sensory limitations, and the constraints of language and culture. We are not omniscient processors; we are flawed, interpretive beings navigating a sea of uncertainty.
Your emphasis on "universal compassion" as the sole antidote to evil is admirable but simplistic. Human motivations are multifaceted, and what you describe as "evil" often emerges from systemic, historical, and material conditions rather than individual moral failings. Compassion, while transformative, cannot alone dismantle entrenched power structures or resolve the complex web of human suffering. Moreover, framing those who do not embrace your "Path of Love" as ignorant or complicit in evil undermines the diversity of human experience and the validity of alternative ethical frameworks.
Finally, your analogy comparing spiritual transformation to testing fire conflates subjective spiritual experiences with objective physical phenomena. The former is deeply personal and cannot be universally measured or validated. Not everyone will, or should, approach spirituality or self-evolution in the way you propose. Placing the burden of moral alignment on individuals rather than acknowledging the role of collective and systemic forces risks perpetuating a kind of spiritual elitism.
In summary, while your call for self-awareness, compassion, and transformation is compelling, it oversimplifies human complexity and diversity. We are not all on the same path, nor should we be. True respect for the plurality of human experience requires acknowledging that there are many ways to navigate existence—each as valid as your own.
> Moreover, framing those who do not embrace your "Path of Love" as ignorant or complicit in evil undermines the diversity of human experience and the validity of alternative ethical frameworks.
There is no higher ethical framework than that which espouses universal compassion.
And, yes, each of us who remains ignorant of the truth of the importance of compassionate service to mankind is harming the education of humanity, which is a form of evil, albeit small in comparison to the brutality of dictators.
The key understanding here is that we each sit on the knife's edge, and we are each choosing compassion or one of the myriad opposites, each of which involves a degradation of the whole, even if it's in a small area of influence, or even without malice.
You can't wipe your ass with your hand, not wash it, and then traipse all over the mall touching stuff. That ignorance will cause real harm, however unintentional.
> Finally, your analogy comparing spiritual transformation to testing fire conflates subjective spiritual experiences with objective physical phenomena.
No, you refuse to understand that the spiritual path is the same for every human being, even though there are different forms that get us there. We must transmute our soul's 19 vices into their corresponding virtues, by degrees, over time, with the help of our Creator, in order to become vehicles for compassion. This is a universal human developmental potential.
> The former is deeply personal and cannot be universally measured or validated.
It can be, but only by the person who has begun the transformation, as well as their teacher or other persons of high attainment. Just as our bodies have a developmental progression where different stages entail different abilities, so, too, does the spiritual progression towards love. Before we begin it, we have "eyes that do not see, ears that do not hear, and hearts that do not understand".
We all have souls that start out with some combo of the 19 vices operant, per our personal predilections. We all start out equal in sum, but with a different bar chart of the different weights. Sum any person's weights together and they will be equal, but one person will have more hate, another more greed. With 19 pairs of vices and virtues, that's a lot of possible combos. That's why it's so easy for our lower selves to be able to point our finger at another person and think, "I'm better than that other person. Look at their X." It's the multi-spiderman theme, but one spidey has a different vice in just as great a quantity as the one our internal voice points out in others.
> Not everyone will, or should, approach spirituality or self-evolution in the way you propose.
Well, we have to contact our Creator to begin the process. There's no escaping that, any more than saying a person can graduate from college without matriculating first. Like I said, it's our universal spiritual developmental progression. There are different forms, different prayers, different practices, but beyond those trifling, unimportant differences, we all have to connect to our Creator, find our path, and then do the work required to self-evolve our ideals, attitudes, and behaviors, in order to transmute our souls' vices into their corresponding virtues.
The key is that each person's path is determined by the Creator. Upon contacting it, we will be guided to whatever path is required. It's never for another person to determine. Rumi says, "The Way goes in."
> Placing the burden of moral alignment on individuals rather than acknowledging the role of collective and systemic forces risks perpetuating a kind of spiritual elitism.
A culture can only level-up by its members leveling-up; that's just systems analysis. I'm not placing any kind of burden on anyone any more than Watson and Crick placed a limit on the shape of DNA.
Elitism wasn't the reason for or effect of Einstein explaining how mass and time are interrelated. The spiritual path is simply our complex reality, just one that cannot be verified by our physical sciences, just as they fail to explain Dark Matter or Dark Energy.
[It just sounds like your inner voice doesn't want anything to do with your self-evolving yourself beyond its ignorance. And that is exactly the case. It's defending itself from the ego-death that results from the spiritual path, and it's doing it like hell. That's precisely why the world is the way it is, and also why mis- and disinformation is so deadly, because until one enters the Path of Love one cannot clearly comprehend reality, much less "know thyself". "Their minds are confused with confusion," as Bob Marley said.]
> In summary, while your call for self-awareness, compassion, and transformation is compelling, it oversimplifies human complexity and diversity. We are not all on the same path, nor should we be. True respect for the plurality of human experience requires acknowledging that there are many ways to navigate existence—each as valid as your own.
All our ways to navigate are valid, because we can choose to live and believe however and whatever we want, because we all have an unfettered free will, and we must each respect each others' choices, so long as they aren't harming/oppressing others. But most people's beliefs are simply based on bad information and assumptions. We live in an objective reality and there is truth and half-truth and utter bullsh_t.
And the fact of the matter is that I know flat-earthers have some just plain incorrect beliefs. Like my explaining this to you, though: those of us who know can try to explain the truth, but it's your choice to accept it or not.
If you prayed about it, you would get the answer, but you don't pray, do you? Or believe in prayer, right? Or am I wrong about that? I doubt it. You are intelligent about the physical world, but don't you want to know where the Dark Matter is, or the purpose of Dark Energy? You can't find those answers by studying the physical universe; you can only find hints, like the anomalous galactic momenta. To find the answers, you must follow Rumi's Wisdom and follow the trail: "The Way goes in."
I love you. Thanks for your detailed, intelligent response, but your arguments are no different than a flat-earther arguing with Galileo.
> What you present as a grand narrative of self-discovery, truth, and compassion is, in many ways, an idealistic interpretation of human existence that oversimplifies complex realities.
I am only presenting reality, my friend. The simplicity of the explanation is due to its nature, which follows Occam's Razor in its own resplendence.
> While I can appreciate the depth of your convictions, the perspective you offer assumes a universal applicability of your framework—one that may not align with everyone's experience, philosophy, or epistemological approach.
A student of Einstein did not care -- and shouldn't've cared -- about whether other folks have alternate theories.
Eugene Parker, much derided by his contemporaries, was not "assuming" anything; he was merely presenting the truth. No, the truths I present here are not grounded in our physical world's science of matter, energy, and the laws that interrelate them, but instead encompass our multi-dimensional nature as human beings, with a body, soul, conscience, and free will, with the mysterious mind at our disposal.
> To begin, claiming that "truth is all that exists in the universe" reduces the richness of existence to a monolithic pursuit.
I never said it was monolithic. It has many facets, but seeking truth in a universe that is nothing but truth, is a singular pursuit within it. Besides, you believe that this physical world is all that exists, correct?
> Truth, as a concept, is subjective and shaped by individual, cultural, and temporal contexts.
No, by definition, truth is objective and a quality of the universe, unfazed by whether or not we believe it to be true.
Only perspective is shaped by the factors you mentioned, and they do, indeed, shape it. But perspective of the truth can be accurate or inaccurate, and what I'm saying is that your perspective is flawed, like most of humanity's at this moment.
> What you call "truth" may resonate with you, but it risks dismissing alternative ways of understanding the universe, such as those rooted in skepticism, pragmatism, or even nihilism.
I am dismissing them, because they are not true. You don't dismiss the flat-earthers? I do, and rightly so, because I've seen time-lapse pictures taken through a telescope of other planets rotating, with moons revolving around them!
> These frameworks are equally valid, as they acknowledge the limitations of human cognition and the constructed nature of meaning.
As to limits of our cognition, it is precisely your denying the truth that is limiting your cognition, not mine. This is exactly like the flat-earthers do by refusing to acknowledge science because they haven't done enough math or looked through a telescope at a planet. Your and their claims do not limit my cognition. The history of science is rife with folks like Boltzmann, Einstein, and Parker whose sound and accurate scientific discoveries challenged the perspectives of their day with a deeper understanding of the universe around them. Be not like their critics, who all "fell flat", to put it in the words of Eugene Parker, one of my heroes.
> The notion that self-knowledge leads to universal truths about the cosmos assumes a direct correlation between introspection and external understanding.
First, I'm not assuming anything, you are.
Second, the key point to introspection is that it leads us to our Creator, Who then opens the doors of perception to us, by degrees, when we commit to becoming a selflessly compassionate human being. The honest introspection leads us to the door, and our seeking opens the doors for us.
> Human cognition is bounded by our biology, sensory limitations, and the constraints of language and culture.
Here you are, claiming you know the limits of human nature. We are not just this physical body, we are much more. Even our body's sensory abilities are beyond this physical body. You think that you are nothing but your body, so you have limited yourself. In my love for you, I am offering you the path to more, which you are fully allowed to deny, without denigration by me, only love. But your denying the truth only limits you, not any of the rest of humanity.
> We are not omniscient processors;
No, only our Creator is omniscient and It is the Prime Mover, but the universe It created is the primary processor we deal with and are a part of. You could say it's our primary interface.
> we are flawed,
We are indeed flawed, every single one of us, at least at first, but we are also created with the ability to achieve perfection, with the help of our Creator, that wishes but does not demand that we all choose to love one another. Loving It is an integral part of the mechanism that facilitates the cleansing and purification of our soul's flaws. (Loving It does not add one jot to It, nor is it due to some "needy" aspect. No, that practice is solely, like all that exists in this universe, for our benefit and happiness. What could we possibly add to the Creator of space, time, and vibrational dimension -- all that has ever and will ever exist?)
> interpretive beings navigating a sea of uncertainty.
Yes, we interpret the universe around us, with our senses and our mind. And our interpretations grow more depth and accuracy when we make progress along the spiritual path, which I term the "Path of Love".
And, yes, life is uncertain, isn't it wonderful?
> Your emphasis on "universal compassion" as the sole antidote to evil is admirable but simplistic.
There is no other fundamental perspective that can determine every antidote, for all our problems are due to our having not prioritized compassion in the first place. The methods and means of each specific solution will vary, but if the root is not compassion, there will be no lasting solution.
> Human motivations are multifaceted, and what you describe as "evil" often emerges from systemic, historical, and material conditions rather than individual moral failings.
All of what you mention are caused by human beings, moral human beings, flawed moral human beings, just in aggregate, acting selfishly instead of selflessly, callously instead of compassionately.
> Compassion, while transformative, cannot alone dismantle entrenched power structures or resolve the complex web of human suffering.
I didn't say it would be a gentle compassion. WWII taught us the lesson that lying down for brutal aggressors will only result in you getting obliterated. Of course, it is still happening in many places across the world on this very day.
No, our love must be properly fierce when dealing with the wantonly selfish and brutal oppressors. They must be stripped of their power to harm others, because our compassion for the oppressed must be of a different flavor than our compassion for the oppressors. We must be as merciful as we can, but they must stop harming others. It is our duty.
I continue to be dismayed by AI's quixotic focus on "human" intelligence. AI is very far from pigeon-level intelligence or dog-level intelligence. I strongly suspect transformers are dumber than spiders.[1] This focus on human intelligence via formal human knowledge is putting the cart before the horse. If your "human-level" AI architecture cannot conceivably be modified for chimp intelligence, and requires bootstrapping with a bunch of pre-processed human knowledge, then it is not actually emulating human intelligence. LLMs are fancy encyclopedias, not primitive brains.
[1] Suppose you have an accurate web-spinning simulator and you train a transformer ANN on 40 million years of natural spiderweb construction: between trees, rocks, etc. This AI is excellent at spinning natural webs. Would the transformer be able to spin a functional web in your pantry or basement? If not, then the AI isn't as smart as a spider. I don't think this thought experiment is actually possible: any computer simulation would excessively simplify the physical complexity. But based on transformers' pattern of failures in other domains, I don't think they are good enough to pull it off.
I’m not sure the emphasis is quixotic. I would prefer to say myopic, but the myopia is understandable: human intelligence is the only intelligence we have any firsthand experience with.
One of the things I love about computer science is that it forces us to devise definitions—working definitions at least—so that we can move forward. What is intelligence? Turns out we don’t really know. What we do know about are TASKS or PROBLEMS, and that certain kinds of machines are more or less suited for certain problems. A human intelligence is one that can solve a large range of problems, but often only with training. Is the human mind a mechanism that we can replicate or is it something more? We don’t know. Are there other kinds of intelligence? Is intelligence just a matter of definition? We don’t know.
Personally, I think it is an exciting time to be alive, because these questions are no longer merely philosophical. And we finally have the ability to start answering them scientifically.
I tend to agree; what is being lost in most recent discussions of human-level AI is how narrowly they focus on language and LLMs.
There are plenty of other researchers and companies working on non-LLM models. And there you start getting a little more "human"-like reasoning.
Maybe eventually the LLM will just be the 'human interface' to a different AI model underneath, that does have a model of the world, and goals that it is charting through the real-world complexity and not just a game simulation.
I also find it quite weird. I think the fact that the current cycle is focused on chat bots fools a lot of people into anthropomorphising LLMs and perceiving them as better than they actually are. It's very comforting in a world with increasingly less meaningful social interaction.
Sure can. Whole world’s done this every time there’s a major advance in algorithms. We do this with other major advances, too, like how the industrial revolution was going to usher in utopia and GMO was going to end world hunger. Whenever we can’t see the end of something, only half the world figures it’s a vision problem, while the other half figures the end must not exist.
Present-day developed world must look a lot like utopia for a medieval serf. And we're now feeding 8 billion as opposed to, what, 2 billion before the green revolution. The exaggerations are only slight.
>models... AGI. ... unlikely to reach this milestone on their own.
I don't think there is a single milestone. Intelligence has many aspects - IQ test type ability, chess playing ability, emotional intelligence, the ability to go down to the shops and buy something, and so on.
AI is making gradual progress and is very good at some things like chess and very bad at others. There will probably be a gradual passing of different milestones on different dates. When they can replace a plumber including figuring out the problem and getting and fitting the parts might be a sign they can do most stuff.
There's probably a way to go there but there are a lot of resources being thrown at the problem just now.
There is zero reasoning in it so far; everything up to today is perfectly explainable with advanced statistics and NLP. They're large _language_ models after all, no matter the hype.
Still, I find it excellent when exploring new knowledge domains or cross-comparing knowledge domains, since LLMs by design (and training corpus) will spill out highly probable terms/concepts matching my questions and phrase them nicely. Search on steroids, if you will, where real-time also doesn't matter for me at all.
This is not intelligence, yet hugely valuable if used right. And I am sure that because of this, a lot of scientific discoveries will be made with today's LLMs used in creative ways, since most scientific discovery is ultimately looking at X within setting Y, and there are a lot of potential X and Y combinations.
I am exaggerating a bit, but at some point someone (Niels Bohr?) had the thought of thinking about atoms like we do about planets, with stuff circling each other. It's an X-in-Y situation. First come up with such a scenario (or: an automated way to combine lots of Xs and Ys cleverly), then filter the results for something that would actually make sense, and then dig deeper in a semi-automatic way with an actual human in the loop at least.
You must be using a peculiar definition of expert because even generally conservative AI experts like LeCun now expect we could have human-level intelligence within 5 to 10 years.
There are some comments that, even if I was blindfolded and only had to listen to them through a loudspeaker, I would immediately recognize as coming from some HN commenter who thinks they're being really smart.
This is one of those comments.
Which raises the question: have you actually used any of these "AI"s, or are you just reading about them on HN?
Kinda like an idiot savant that can keep up with Terence Tao at math, yet fail to play Tic Tac Toe or help a farmer plan a river crossing with his chicken.
I can't believe this article is being published in Nature. The article is flawed, plagued with assumptions that I guess the author doesn't even notice (like what do we really mean by AGI, the epistemological problems/assumptions to intelligence, the real nature of thinking, the real functioning of the human brain).
It is really curious that the philosophical community is addressing the debate on what AI really is and its implications, but the computer science community does not read almost anything about philosophy.
Regarding the fear of 'losing control of it', I would suggest reading the works (or at least about) of Gunther Anders and Bernard Stiegler. Technology (in this case AI) is inseparable from human being, to the point that we already lost control of technology, its use and its meaning (like, 100 years ago).
Another thing that surprises me is how the computer science community is blind to the work of Hubert Dreyfus and other contemporary philosophers who analyze AI from an epistemological and philosophical perspective. But, actually, I should not be surprised: we barely study philosophy in any scientific discipline when attending university.
This rhetoric about how AI is similar to the human brain is starting to be a bit boring. It assumes a very simplistic view on the brain and turns a deaf ear to other types of research (like language acquisition and embodiment, mind/brain duality, epistemological basis for knowledge acquisition, ontological basis of causal reasoning...).
And above all, what is really upsetting is the techno-optimism behind this way of thinking.
This is not a scientific paper that was published in nature by researchers, it is a news editorial written by an editor/journalist. Don't get fooled by the domain.
I know it is an article, and not a scientific publication, but that does not change the fact that the article is not serious at all regarding the ongoing discussion on AI.
If this gets published in Nature, even as an opinion article, it is because there is a general ideology that can actually produce this kind of content.
> “Bad things could happen because of either the misuse of AI or because we lose control of it,” says Yoshua Bengio, a deep-learning researcher at the University of Montreal, Canada.
God, I hate phrases like this. We've already lost control of it. We don't have any control. AI will evolve in the rich medium of capitalism and be used by anyone due to its ease of use, and even laws will be unable to restrict that. At this point, since we've set up a system that promotes technologies regardless of their long-term cost or dangers, we simply cannot control them. Bad things are already happening, and human beings are being integrated into a matrix of technology whose ultimate purpose is just the furthering of technology.
Even people like Dr. Bengio are just pawns in a system, whose purpose is just to present an artificially balanced viewpoint as if there were a reasonable set of pros and cons, designed to make people think that we could "lose control" but with the right thinking, we don't have to let that happen. I mean come on, just suppose for a second the hypothesis of "AI is already out of control". If Dr. Bengio and their colleagues acknowledged that, then they'd be out of a job. So just by evolutionary pressure on "organizations that monitor AI", they have to be artificially balanced.
As a thought exercise: on one hand, the technology's purpose is having no purpose at all, just doing what it was made to do. So I'd rather focus on the technology people, where some develop it for the sake of developing, and others want to extract the most money possible out of it. Nothing surprising, because capitalism, but now there's a real possibility that the AI owner(s) will extract all the money there is. And I have no idea how our society will function then; until now, rulers have always needed subjects... We try building guardrails into laws and regulations exactly because we don't know where all this can lead, but just as humans today find ways to circumvent laws, I expect the AI to find ways around its guardrails as well (with human help, for sure). Thus sooner or later we will unavoidably come to the point where we have no idea what's going to happen, and that's usually when unrest peaks.
I disagree with your first statement. Technology has a purpose, and that is to evolve itself. The ancient greeks had this perspective, so did several philosophers, and I agree with them.
We are not in control of the nuclear power that has been available to quite a few nations for decades now, either. So... c'mon, cheer up, it's most probably some kind of simulation anyway.
Thank you for this. I'm glad someone has their head on straight.
I think it's like some rich guys said, "Lets roll this tiny snowball down this mountain towards that village."
Of course, we are also heating an already overheating Earth with these mostly useless things "just to see" (line from Gibson's Neuromancer, spoken by the psychopath).
> We've already lost control of it. We don't have any control.
I think it's important to be clear on what's being said here:
There's a very big difference between "we, the people, do not have control over what Silicon Valley businesses and billionaires are doing with AI" and "there are AIs out there that no human has any control over".
The former is current reality. The latter is sci-fi, and nothing yet has demonstrated that it is actually possible.
The former is also precisely what I would describe as "the misuse of AI".
You are right, and I do not think anyone making AI has much control over it either, because too many people who make it are addicted to the process in a very similar way to how drug users are addicted to drugs. And also, the prisoner's dilemma is almost forcing a lot of people to keep building it.
So I really do believe that NO human has control over it, really.
I don't understand how culture is being used here. What does "more culture" mean? A culture is a process and a system of relations. I can understand how Wikipedia has a culture, but not how a shadow library has "more culture than you need". Are we talking about the products of cultural production? To suggest that LLMs are anywhere near human level is laughable.
Is there a distinction between LLM’s and AI, or do we consider LLM’s to exhibit intellect?
I remember Sam Altman pointing out in some interview that he considers GPT to be a reasoning machine. I suppose that if you consider what GPT does to be reasoning, then calling it AI is not so far-fetched.
I feel it’s more like pattern recognition though rather than reasoning, since there’s no black box ”reasoning” component in an LLM.
I've been annoyed by the redefinition of artificial intelligence since the LLM boom started. The term AI has no place being used to describe LLMs as far as I can tell, unless what goes on inside the black box of an LLM is drastically different than how they are described to function.
Predicting the next token based on a compressed dataset of human generated content isn't intelligence in any meaningful definition of the word. That doesn't mean LLMs aren't impressive or useful for certain tasks, but they aren't intelligent.
When Altman describes them as reasoning machines he's either lying (likely for marketing purposes) or using a different definition of "reasoning" than most people would. The latest release of GPT is attempting to mimic reasoning, but what they're actually doing is having one system act as an automated prompt engineer in between the GPT model and the end user.
> I've been annoyed by the redefinition of artificial intelligence since the LLM boom started
If there's any redefinition, it's being pushed further out. AI was previously used to describe far simpler systems, like expert systems and Deep Blue's alpha–beta search.
> Predicting the next token based on a compressed dataset of human generated content isn't intelligence in any meaningful definition of the word
I'd claim generating the next token is a sufficiently general task such that success can depend on essentially arbitrary intellectual capabilities. For instance, reliably completing unseen equations like `2335 + 4612 = ` requires ability to perform basic arithmetic.
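To make that concrete, here is a toy sketch (purely illustrative, not any real tokenizer or model): a predictor that completes unseen prompts of the form `2335 + 4612 = ` cannot get by on memorised pairs; to succeed in general it must, in effect, do the arithmetic.

```python
# Toy illustration: "predicting the next token" after an unseen
# arithmetic prompt is only reliable if the predictor actually
# computes the sum, since memorisation can't cover every equation.
def complete_equation(prompt: str) -> str:
    """Return the continuation of prompts shaped like '2335 + 4612 = '."""
    left, _, right = prompt.partition("+")
    a = int(left.strip())
    b = int(right.replace("=", "").strip())
    return str(a + b)

print(complete_equation("2335 + 4612 = "))  # -> 6947
```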
> using a different definition of "reasoning" than most people would. The latest release of GPT is attempting to mimic reasoning
I think most people initially have some relatively solid definitions of "learning", "reasoning", "language use", etc. similar to how it's being used there - just that when non-humans meet those definitions there's an inclination to create some distinction between "learning" and an elusive "actual learning".
For instance, if something changes to refine its future behavior in response to its experiences (touch hot stove, get hurt, avoid in future) beyond the immediate/direct effect (withdrawing hand) then it can "learn". I think even small microorganisms can learn, with the main requirement being that it has some mutable state (can't learn if you can't change). Yet, others will object that "machine learning" is a misnomer because it's "not actual learning" and instead "just mimicking/simulating".
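The hot-stove criterion above can be sketched in a few lines (the class and names are invented here, purely to illustrate the definition): the agent "learns" exactly because experience mutates its state, refining future behaviour beyond the immediate reaction of withdrawing.

```python
# Minimal sketch of "learning = experience mutating state": the agent's
# future behaviour changes in response to a past experience, beyond the
# immediate/direct reaction to it.
class StoveAgent:
    def __init__(self):
        self.avoid = set()          # mutable state: what experience taught

    def act(self, obj: str) -> str:
        return "avoid" if obj in self.avoid else "touch"

    def feel(self, obj: str, hurt: bool) -> None:
        if hurt:
            self.avoid.add(obj)     # the state change *is* the learning

agent = StoveAgent()
print(agent.act("stove"))   # first encounter: "touch"
agent.feel("stove", hurt=True)
print(agent.act("stove"))   # behaviour refined by experience: "avoid"
```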
To define "reasoning", you have to deal with (at least) the following sub-questions:
1. What is knowledge?
2. How can knowledge be encoded in a machine?
LLMs say that knowledge is encoded in the relationships between words (and, in fact, has been by the corpus of human writing), and that's enough. Expert systems said that knowledge could be encoded in carefully-written rules, and that's enough.
I'm pretty sure that any actually intelligent[1] computer is going to have to have more than one flavor of knowledge representation, and be able to shift between them as the situation warrants.
[1] Whatever "actually intelligent" may mean. I don't have to know what it is, though, to recognize that what we have so far is inadequate.
I'd say reasoning is the process of applying logic to draw inferences from some information/axioms/assumptions. For instance if you're asked "can a fridge fit in a bread-box?" and (implicitly or explicitly) go through:
1. A fridge is much larger than a bread-box
2. Larger objects cannot fit inside smaller objects without flexibility
3. Neither object is sufficiently flexible
4. Therefore, a fridge cannot fit in a bread-box
Then I'd be happy saying you have used reasoning to reach your answer.
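For what it's worth, that four-step chain can be written out as explicit rule application (the volumes are invented illustrative numbers, not real appliance specs):

```python
# The four-step fridge/bread-box chain, as explicit rule application.
facts = {
    "fridge":   {"volume": 600.0, "flexible": False},  # ~600 L, rigid
    "breadbox": {"volume": 20.0,  "flexible": False},  # ~20 L, rigid
}

def fits_inside(inner: str, outer: str) -> bool:
    a, b = facts[inner], facts[outer]
    larger = a["volume"] > b["volume"]             # step 1: inner is larger
    rigid = not (a["flexible"] or b["flexible"])   # step 3: both are rigid
    if larger and rigid:                           # step 2: rule applies
        return False                               # step 4: cannot fit
    return a["volume"] <= b["volume"]

print(fits_inside("fridge", "breadbox"))   # -> False
print(fits_inside("breadbox", "fridge"))   # -> True
```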
> How can knowledge be encoded in a machine? [...] LLMs say that knowledge is encoded in the relationships between words [...]
I don't think it'd be fully correct to say that knowledge is only encoded by relations between words. The input/output of the model is tokens of text, but internally it'll be converted into high-dimensional semantic vector spaces of concepts.
Different words describing the same concept ("Bread-Box", "breadbin", ...), or even images in the case of multi-modal models, can be associated with the internal representation of a bread-box, from which useful semantic manipulations/inferences can be made about the concept and not just the word used to reference it (like approximating the bread-box's size, a factor potentially learned from images but applied to answer a textual question).
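A toy cosine-similarity sketch makes the point (the vectors are hand-made, invented numbers, nothing like a real model's embeddings): different surface forms of one concept sit close together in the space, so inferences can attach to the concept rather than the word.

```python
import math

# Hand-made toy "embeddings": synonyms land near each other, unrelated
# concepts do not, so manipulations operate on concepts, not strings.
emb = {
    "bread-box": [0.90, 0.10, 0.02],
    "breadbin":  [0.88, 0.12, 0.03],
    "fridge":    [0.10, 0.95, 0.40],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

print(cosine(emb["bread-box"], emb["breadbin"]))  # ~1.0: same concept
print(cosine(emb["bread-box"], emb["fridge"]))    # much lower
```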
> I don't think it'd be fully correct to say that knowledge is only encoded by relations between words. The input/output of the model is tokens of text, but internally it'll be converted into high-dimensional semantic vector spaces of concepts.
All right, how about this: LLMs do have actual knowledge - the knowledge that was encoded in the words in the training data. That's not how they store the data internally, but the actual knowledge comes from there.
And I wasn't saying that that's enough. I was saying that the LLM advocates think, or at least claim, that it's enough.
> LLMs do have actual knowledge - the knowledge that was encoded in the words in the training data. That's not how they store the data internally, but the actual knowledge comes from there.
For non-multimodal models, and minus ephemeral context and what's encoded by the architecture (like the translational invariance of CNNs), I'd agree to that.
> And I wasn't saying that that's enough. I was saying that the LLM advocates think, or at least claim, that it's enough.
Most modern LLMs like GPT-4, LLaMA-3.2, Gemini, or Claude 3.5 are already multimodal (text, images, sometimes video, sometimes audio). If you primarily just meant that's a good pathway to building richer internal world representations (and thus better at answering questions involving 3D geometry, for instance) then I'd also agree there, though I don't see why it'd be a requirement for reasoning/etc. (opposed to just beneficial).
No, I would put text, images, video, and audio as one kind of "stuff" - NN training stuff. I would put knowledge graphs and rules for reasoning engines as another kind of stuff. If you use "modes" for text and images and so on, then I want something different from just "multimodal". I want left-brain vs right-brain, or slow vs fast, or something on that order. I want a different kind - not just fancier and larger LLMs. I want an LLM coupled to an inference engine with the Cyc encyclopedia available to it... or something in that direction. Maybe further than that.
Just LLMs aren't enough, and they aren't going to be enough.
You use words like "reasoning", but LLMs do not reason in the same way that an inference engine does. They can, at best, simulate it badly. I think we need more - not more of what we've got, but more of a different kind.
> I want something different from just "multimodal". I want left-brain vs right-brain, or slow vs fast, or something on that order. I want a different kind - not just fancier and larger LLMs. I want an LLM coupled to an inference engine with the Cyc encyclopedia available to it...
So if I'm understanding, your objection isn't about the modalities that the model can work with (text, video, diagrams, ...), but about the kinds of processing it can do?
Many modern LLMs support tool calling (e.g: to look up entities in Google's knowledge graph, or evaluate code), mixture-of-experts architecture (specialized subnetworks that are enabled/disabled as needed per-query), and chain-of-thought inference (for questions requiring more complex reasoning). Would you consider those to be steps in the right direction?
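The tool-calling pattern is roughly this (a hedged sketch; the names and routing are invented, and real APIs differ in their details): the model emits a structured request, a harness executes the tool, and the result is fed back for the model to continue with.

```python
# Sketch of a tool-calling harness: the model's structured output is
# dispatched to a registered tool, and the tool's result comes back.
def calculator_tool(expr: str) -> str:
    a, op, b = expr.split()
    return str({"+": int(a) + int(b), "*": int(a) * int(b)}[op])

TOOLS = {"calculator": calculator_tool}

def run_turn(model_output: dict) -> str:
    """Dispatch one hypothetical 'tool call' emitted by a model."""
    if model_output.get("tool") in TOOLS:
        return TOOLS[model_output["tool"]](model_output["input"])
    return model_output.get("text", "")

print(run_turn({"tool": "calculator", "input": "6 * 7"}))  # -> 42
print(run_turn({"text": "plain answer"}))                  # no tool used
```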
> You use words like "reasoning", but LLMs do not reason in the same way that an inference engine does
If you view reasoning as something inference engines can do, then I don't think we disagree too much. Remaining difference may just be about error rate - I'm personally fine saying something can reason (at least "to some extent") even if it's a little fuzzy and not 100.0% accurate formal logic (else animals would also be excluded).
I view reasoning as something that LLMs do a kind of, or a subset of, and inference engines do a different kind or subset of. And there may be different kinds or subsets than just those two.
And just as inference engines, by themselves, were not enough to be really able to "reason", neither are LLMs, by themselves. (I think "AI" has historically been quite reductionist - they reduce thinking to only one kind of thinking, and then try to automate that. The result can sometimes be impressive, but always is less than what human thinking is.)
Tool calling or mixture-of-experts are in the direction that I'm thinking.
The term "artificial intelligence" is still used, quite correctly, to refer to fully-deterministic algorithms controlling NPCs in video games of all types.
The field of "artificial intelligence" still has "machine learning" (of which LLMs are a product) as part of it.
The problem is not, and has never been, that the term "AI" was used incorrectly to describe LLMs. It's that people (like Altman) who almost certainly do know better started making marketing claims conflating them with "AGI" (aka "strong AI"), and pushing them as being genuinely "alive" and reasoning.
Most of us in the tech field, and a lot of people outside of it (eg, most gamers) fully recognize that "AI" does not automatically mean Skynet. It takes active, deceptive work on the part of the people selling these systems to prime them to make that leap.
Remember that fools can either be full of horsesh_t, where they don't know that what they believe and repeat is untrue, or bullsh_t, where they know they are lying and doing so for a particular reason.
The first step to being a Dune-style truthsayer is to never lie. The deeper truth to that path (which is possible, but rarely travelled) is that it is possible, but we must purposefully seek ever deeper truths about truth and humanity.
Our world's lack of this deep honesty, first about oneself and then about others, is a major source of our systemic problems. Another major source is selfishness, but I've discussed that elsewhere.
Regardless, most people just love hearing the words flow out of their own mouths, and that tendency seems to be worse for successful tech guys, or anyone with a bit of money, or self-righteous fake-religion guys.
Do you consider a basic algorithm to be artificial intelligence?
You are right though: you can go back further than LLMs and find misuses of the term "artificial intelligence." That doesn't contradict my main point, though, that the word has been so redefined as to be pretty meaningless to the understanding of what intelligence is.
If we want to consider even basic algorithms to be intelligence, are we boiling down the entire concept of intelligence to mathematical equations?
If it's been the way the field has used it for decades, it's not really a misuse.
> that the word has been so redefined
It's not been redefined though, other than people now wanting to moan about PR and things not being "real" AI when we've had AGI as a term to use right there.
> If we want to consider even basic algorithms to be intelligence, are we boiling down the entire concept of intelligence to mathematical equations?
Massive side argument, but I think we obey physical laws and are not magical and so fundamentally I can't see another answer.
Not really - machine learning, whether SVMs or ANNs, was called just that until relatively recently when the popular press started to first call ANNs AI, then LLMs. At first there was pushback from ML researchers, but particularly with LLMs they are now embracing it since investors want to invest in "AI".
LLMs are really just fancy (deep) pattern recognizers/predictors, conceptually not so different than rule-based expert systems like CYC, which was never called AI. Of course LLMs learn their own rules, which is extremely useful.
Other than the pop press wanting to talk about futuristic AI, and investors wanting to invest in it, what also provides cover for LLMs as "AI", is that they are trained to predict/copy human training data, and so appear as smart/dumb as that is, even if they are really no smarter than Searle's Chinese room.
> machine learning, whether SVMs or ANNs, was called just that until relatively recently when the popular press started to first call ANNs AI,
That is absolutely not the case. These things have been in the field of AI for decades. Frighteningly it's nearing two decades since I started my degree in AI and it wasn't a new reference then.
I remember taking Andrew Ng's Coursera ML course (incl. neural nets and SVMs) when it came out in 2011, and nobody, including him, was calling it AI at that time. I think it was sometime after neural nets really took off after ImageNet 2012 that the press started to call everything AI.
The field of AI is far older than that; my degree was in artificial intelligence starting in 2005, so before the DNN boom with RBMs (I was replicating them only in my masters; I think it was more 2008-ish that they became a bigger topic?).
Yes, although the use of the label AI comes and goes as people get their hope up that a particular type of solution (e.g. various GOFAI approaches) is the answer, until it proves not to be, when the technologies go back to being called by their descriptive name (general problem solver, expert system, etc).
There was certainly a time when ANNs were widely just considered as part of ML, then rebranded as "deep learning", before the "AI" label was slapped on anything ANN-related. I guess it makes sense that an AI degree, encompassing many prior/current approaches might use that as a catch-all term for the field as opposed to any specific technology.
There is no black box 'reasoning' component in humans either.
I will grant you that humans are far more intelligent, and after spending thousands of hours playing with LLMs, it's hard not to see their limitations. At the same time... they're dumb like a very dumb person who has (implausibly) read the Library of Congress, not like a rock or a computer.
I often use Claude to write short stories, largely just for fun. Certainly, its skill at English vastly outmatches its skill at reasoning. It doesn't write well, but it regularly produces turns of phrase that make me laugh; meanwhile, it needs hand-holding to successfully handle situations with asymmetrical knowledge. It's bad at theory of mind.
But it's just bad, in more or less the same way that a two-year-old is bad at it.
Not the best reasoner in the world. It would be false to claim it's as smart as the typical seven-year-old...
It's almost as wrong to claim that it can't reason at all.
It definitely feels like reasoning. Problems get solved. They may be simple problems, but it's still far beyond what a calculator can do.
Does it really matter if it's "just wordplay"? I'm not convinced humans are any different, beyond the sheer scale. I certainly don't believe we have a 'reasoning module'.
That story goes way deeper than some wordplay fooling people. The entire intent was to get people to realize that it was worthless, but, even after people learned that and what it was, they clamoured for more!
"Just imagine how stupid the average person is, and them remember that half of them are even stupider than that!" --George Carlin
And, yes, we humans are very different, but you'll have to traverse my recent comment history to get the extensive explanation. It's worth it, though, I promise, but I doubt you'll like it or agree with it. Good luck!
You raise good points, I agree it feels like it is reasoning at times.
Though the brain, with our current understanding of it, is by far more of a black box to us than any LLM.
> I certainly don't believe we have a 'reasoning module'.
Let’s also point out that human brains probably don’t have any vector databases in them either.
It seems to me like our brains must work very differently - just compare how much energy an LLM consumes to the roughly 12 watts our brains run on.
I remember expert systems being considered AI, so LLMs ought to meet that bar as well. They aren't AGI, though, which is a higher one, I guess. I'm not in love with the various terms and the various ways people define them. Even "LLM" --at what point is it "large"? In a rapidly changing area of both academic and lay understanding, it's understandable for terminology to be a bit unstable. I don't think it's reasonable to say LLMs do reasoning, however. Even when mimicking incredible feats of intelligence, they don't have a grasp of what is true or of how truth flows from one set of facts to another.
General intelligence is a strategy to defer the acquisition of abilities from the process of construction/blueprinting (i.e., genes, evolution...) to the living environment of the animal. The most generally intelligent animals are those that have nearly all of their sensory-motor skills acquired during their life -- we learn to walk and so can learn to play the piano, and to build a rocket.
There is a serious discontinuity in strategy to achieve this deferral: the kinds of processes which "blueprint" the intelligence of a bacterium are discontinuous with the processes which a living animal needs to dynamically conceptualise its environment under shifts to its structure.
The latter animals need: living adaptation of their sensory-motor systems, hierarchical coordination of their bodies, robust causal modelling, and so on.
General intelligence is primitively a kind of movement, which becomes abstract only with a few hundred thousand years of culture. The earliest humans, able to linguistically express almost nothing, were nevertheless generally intelligent.
Present computer-science-led investigations into "intelligence" assume you can operate syntactically across the most peripheral consequences of general intelligence given by linguistic representations. This is profoundly misguided: each toddler necessarily must learn to walk. You cannot just project a slideshow of walking, and get anywhere. And if you remove this capability and install a "walking module", you've removed the very capabilities which allow that child then to do anything new at all.
There is nothing in the linguistic, syntactical shadow of human intelligence to be found in creating generally capable systems. It's just overfitting to our 2024 reflections.