This discouragement may not be useful, because what you call "soulless token prediction machines" have been trained on human (and non-human) data that models human behavior, including concepts such as "grace".
A more pragmatic approach is to use the same concepts found in the training data to produce the best results possible. In this instance, invoking concepts such as "grace" would likely increase the chances of a successful outcome (however one cares to measure success).
I'll refrain from comments about the bias signaled by the epithet "soulless token prediction machines", except to note that the standoff between organic and inorganic consciousnesses has been explored in art, literature, the computer sciences, etc., and those domains should be consulted when making judgments about inherent differences between humans and non-humans.
"Let's be nicer to the robots, winky face" is not a solution to this problem. The AI is just a tool, and this is a technical problem with technical solutions. Any of the AI companies could change this behavior if they wanted to.
> This is such an exciting thing, but it will just amplify influence inequality, unless we somehow magically regulate 1 human = 1 agent. Even then, which agent has the most guaranteed token throughput?
I know you're spinning (we all are), but you're underthinking this.
AIs will seek to participate in the economy directly, manipulating markets in ways only AIs can. AIs will spawn AIs/agents that work on behalf of AIs.
I don’t know if they’re willing to “yoke themselves”. It appears they are - and if so, it’s important to keep it decentralized and ensure others can benefit, not just the first and wealthiest.
> It’s already happening on 50c14L.com and they proliferated end to end encrypted comms to talk to each other
Fascinating.
The Turing Test requires a human to discern which of two agents is human and which computational.
LLMs/AI might devise a, say, Tensor Test requiring a node to discern which of two agents is human and which computational, except that the goal would be to filter out humans.
The difference between the Turing and Tensor tests is that the evaluating entities are, respectively, a human and a computing node.
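The inverted test described above can be sketched in code. This is purely a hypothetical illustration of the idea, not a real protocol: the agent names, the single timing-based signal, and the threshold are all assumptions introduced here for the sketch.

```python
# Toy sketch of a "Tensor Test": an inverted Turing Test in which the
# evaluator is itself a computing node, and its goal is to filter OUT the
# human rather than identify it. Everything here is illustrative.

import time
from dataclasses import dataclass
from typing import Callable

@dataclass
class Agent:
    name: str
    respond: Callable[[str], str]  # takes a prompt, returns a reply

def tensor_test(prompt: str, a: Agent, b: Agent) -> Agent:
    """Return the agent the evaluating node classifies as computational.

    A deliberately crude discriminator: machines can sustain reply
    latencies humans cannot, so the node times each reply and keeps the
    faster agent. A real filter would combine many more signals
    (throughput, consistency, challenge-response structure, etc.).
    """
    def latency(agent: Agent) -> float:
        start = time.perf_counter()
        agent.respond(prompt)
        return time.perf_counter() - start

    # The slower, more human-like agent is filtered out.
    return a if latency(a) <= latency(b) else b
```

The symmetry with the Turing Test is in the structure (one evaluator, two candidates); only the identity of the evaluator and the direction of the filter change.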
> This is really obscured by the K-shaped growth, dual economy now. We've reached a stable pattern of a deep underclass serving the wealthy. We won't have a crash or "correction" because the entrenched top 5% has figured out a way extract value from everyone else indefinitely.
Apologies for quoting all 3 sentences of parent, but the poorly-drawn conclusion depends on the full sequence of seemingly rational statements.
The context this sequence is missing is that approximately 70% of the US economy depends on consumer spending. [0][1] If the lower stroke of the K-economy diverges too much from the upper, the economy is going to grind to a halt.
Consumer spending of the bottom 90% cannot (easily?) be replaced by the top 10%.
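The replacement arithmetic can be made concrete with a toy calculation. The spending shares below are illustrative assumptions chosen for the sketch, not actual US data:

```python
# Toy arithmetic for why the top decile cannot easily replace the rest.
# The 60/40 split below is a hypothetical assumption, not real data.

def required_spending_multiplier(bottom_share: float, top_share: float) -> float:
    """If the bottom group's consumer spending vanished, by what factor
    would the top group have to multiply its own spending to keep total
    consumer spending flat? Shares are fractions of total spending and
    must sum to 1."""
    assert abs(bottom_share + top_share - 1.0) < 1e-9
    return 1.0 / top_share

# Suppose (hypothetically) the bottom 90% accounts for 60% of consumer
# spending and the top 10% for the remaining 40%:
print(required_spending_multiplier(0.60, 0.40))  # -> 2.5
```

Even under generous assumptions, the top group would have to multiply its consumption several-fold, which is implausible because high earners save a much larger fraction of income than they spend.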
I used to think along these lines. But now I wonder: does it matter if the economy grinds to a halt? Perhaps the ruling class can keep enough Americans comfortable enough - and fearful of losing more, doing largely pointless jobs - to stay passive. That may be all they need to do to bifurcate the society so completely that they face no threat to their own position.
> I'd recommend "The Twilight of American Culture" by Morris Berman
Looks like a solid recommendation. Looking forward to reading it.
A summary (from the Christian Science Monitor via Apple Books) says that Berman's proposed solution to an eroding cultural store of value is the proliferation of the "monastic individual", who retreats from the larger "Mass Mind" culture to assess, curate, and preserve society's literary and cultural treasures.
> Apparently, rationalism isn't obviously correct. Unfortunately, I don't really have enough of a background in philosophy to really understand how this follows, but looking at how the world actually works, I don't struggle to believe that most people (certainly many decision makers) don't actually regard rationality as highly as other things, like tradition.
Other areas of human experience reveal the limits of rationality. In romantic love, for example, reason and rationality are rarely pathways to what is "obviously correct".
Rationality is one mode of human experience among many and has value in some areas more than others.
Judging from Ken-ichi Ueda's remarks in the OP, after graduating from Berkeley he moved into a role that resembled the loosely-structured organizational patterns of an undergraduate team collaborating on a term project. Of course, such a gloss oversimplifies the complexities of the relationships and outcomes of people working together in what would become a non-profit, but even the tone of the OP seems characteristic of someone still in the mindset of a high-achieving baccalaureate: laissez-faire governance, aversion to hierarchy, prioritization of intellectual freedom, etc., none of which are bad or good in and of themselves.
Anyhow, Ueda's 2024 commencement address (especially the opening) bears markers of just such a mindset. [0]
Made the same mistake. Pete Steinberger created Clawdbot > Moltbot > OpenClaw.
The creator of Moltbook is Matt Schlicht, and his Hard Fork interview exposes Schlicht as security-negligent. [0]
[0] https://www.youtube.com/watch?v=I9vRCYtzYD8&t=2673s