It is an attempt to predict a possible future in the context of AI. Basically a doomer fairy tale.
It is just phenomenally dumb.
Way worse than the worst bad scifi about the subject. It is presented as a cautionary tale and purports to be somewhat rationally thought out. But it is just so bad. It tries to delve into foreign policy and international politics but does so in such a naive way that it is painful to read.
It is not distasteful to participate in it -- it is embarrassing and, from my perspective, disqualifying for a commentator on AI.
I reject the premise that https://ai-2027.com/ needs "refutation". It is a story, nothing more. It does not purport to tell the future, but to lay out one specific "plausible" future. The "refutation" will, in a sense, be easy: none of its concrete predictions will come to pass. But that doesn't refute its value as a possible future or a cautionary tale.
That the story it tells is completely absurd is what makes it uninteresting and disqualifying for all participants in terms of their ability to comment on the future of AI.
Here is the prediction about "China Steals Agent-2".
> The changes come too late. CCP leadership recognizes the importance of Agent-2 and tells their spies and cyberforce to steal the weights. Early one morning, an Agent-1 traffic monitoring agent detects an anomalous transfer. It alerts company leaders, who tell the White House. The signs of a nation-state-level operation are unmistakable, and the theft heightens the sense of an ongoing arms race.
Ah, so CCP leadership tells their spies and cyberforce to steal the weights, and so they do. Makes sense. Totally reasonable thing to predict. This is predicting the actions of hypothetical people doing hypothetical things, with hypothetical capabilities, to engage in the theft of hypothetical weights.
Even the description of Agent-2 is stupid. Trying to make concrete predictions about what Agent-1 (an agent trained to make better agents) will do to produce Agent-2 is just absurd. As Yudkowsky (who is far from clear-headed on this topic but at least has not made a complete fool of himself) has often pointed out, if we could predict what a recursively self-improving system would do, why would we need the system?
All of these chains of events are incredibly fragile, and they all build on each other as linear consequences, which is a naive and foolish way to look at how events occur in the real world: things are overdetermined and multi-causal. Narratives help us understand things, but they aren't reality.
Sure, in the space of 100 ways the next few years in AI could unfold, this is their opinion of one of the 100 most likely, meant to paint a picture for the general population of approximately what is unfolding. The future will not go exactly like that. But their predictive power is better than almost anyone else's. Scott has been talking about these things for a decade, back when everyone on this forum thought OpenAI was a complete joke.
Do you have any precedent, from yourself or anyone else, of correctly predicting the present back in 2021? If not, maybe Scott and Daniel just might have a better world model than you or your preferred sources.
>The job market for junior software engineers is in turmoil: the AIs can do everything taught by a CS degree, but people who know how to manage and quality-control teams of AIs are making a killing.
AI doesn't look like competition for a junior engineer, and many of the people using (not "managing") AI are going to be juniors. In fact, increasing what a junior can do, and how quickly they can learn, looks like one of the biggest potentials, provided they don't use it entirely as a crutch.
Meanwhile, it suggests that leading-edge research into AI itself will proceed fully 50% faster than research done not without AI, but with AI that is six months behind the cutting edge. That appears hopelessly optimistic, as does the idea that AI will grow the US economy 30% in 2026, when a crash seems more likely.
It also assumes that more compute will continue to be wildly more effective in short order, and that it's even possible to spend the money for orders of magnitude more compute. Either or both could easily fail to work out to plan.
I'm not sure why it's so distasteful, but they basically fearmonger that AI will usurp control over all governments and kill us all in the next two years.
I'm not familiar with ai-2027 -- could you elaborate on why it would be distasteful to participate in it?