The only thing more popular on Twitter this week than the imminent demise of the dollar’s status as global reserve currency is AI doomerism.
For those not up to speed, this week many prominent technologists came out with what amounts to a 21st-century version of the pre-WWII Anglo-German naval agreement to limit warship tonnage.
Well, we know how that went.
The hysteria has reached a fever pitch, with eminent technologists calling for some version of a global authoritarian regulatory regime to limit the speed of AI development.
Some of these arguments are well founded (“look at how difficult it is to control these machines, we should probably slow down before we build things we cannot control”), some rather less so, essentially amounting to neo-Luddism (what about all the jobs that will be automated away?!?!), or just a lack of understanding of the difficulties of, you know, overthrowing every human institution in the world.
In classic fashion, the same Twitterati who are confidently prognosticating on the demise of the dollar are also making bold proclamations about the macro economy (do they even know that spending equals income?!).
As for the automation arguments, well, there aren’t many farmers anymore. Where did they go?
This week, things escalated further when prominent AI doomer Yudkowsky came out with a plan to UNILATERALLY BOMB DATA CENTERS ABOVE A CERTAIN SIZE.
Well folks, it looks like we have crossed the Rubicon of imagination.
Where not only is it inevitable that AI will achieve AGI, but that it will achieve superintelligence, and necessarily develop into some shadow of the devil.
AI will seize the memes of production, take over our institutions, and decide to destroy life on the planet in order to …make more paperclips.
And, critically, “WE HAVE NO MORE TIME” to debate this.
What’s ironic is not that this line of thinking jumps so many steps, but that this kind of thinking, unmoored from reality or pragmatism, logically necessitates a global authoritarian regulatory regime with the mandate and power to use violence to attempt to slow or stop the development of AI.
And in this intellectual vacuum of logic, a simple question remains.
Unanswered, though not for lack of me trying to ask it:
What about China.
See, as a (reformed) professional game theorist, I know a thing or two about taking things to their ridiculously logical conclusions.
You may have heard of the prisoner’s dilemma: the idea that cooperation often leads to better outcomes, but that without a way to coordinate or enforce cooperation, the natural (and most likely) state is for individuals to pursue their own self-interest, which in some cases leads to overall worse outcomes for everyone.
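The dynamic is easy to see in code. A minimal sketch, with illustrative payoff numbers chosen only to satisfy the classic ordering (temptation > reward > punishment > sucker): each player’s best response is to defect no matter what the other does, so mutual defection is the only equilibrium, even though mutual cooperation leaves both better off.

```python
# Illustrative prisoner's dilemma payoffs (hypothetical numbers).
# Each entry is (row player's payoff, column player's payoff).
payoffs = {
    ("cooperate", "cooperate"): (3, 3),  # reward for mutual cooperation
    ("cooperate", "defect"):    (0, 5),  # sucker / temptation
    ("defect",    "cooperate"): (5, 0),  # temptation / sucker
    ("defect",    "defect"):    (1, 1),  # punishment for mutual defection
}

actions = ["cooperate", "defect"]

def best_response(opponent_action, player):
    """The action maximizing this player's payoff,
    holding the opponent's action fixed."""
    if player == 0:  # row player
        return max(actions, key=lambda a: payoffs[(a, opponent_action)][0])
    else:            # column player
        return max(actions, key=lambda a: payoffs[(opponent_action, a)][1])

# A profile is a Nash equilibrium when each action is a best response
# to the other -- nobody gains by unilaterally deviating.
equilibria = [
    (a, b) for a in actions for b in actions
    if best_response(b, 0) == a and best_response(a, 1) == b
]

print(equilibria)  # [('defect', 'defect')]
print(payoffs[("defect", "defect")],
      "vs", payoffs[("cooperate", "cooperate")])  # (1, 1) vs (3, 3)
```

Swap “cooperate/defect” for “pause AI development / race ahead” and the structure of the argument below falls out of the same matrix.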
Some real-world examples:
The aforementioned naval treaties
The race to mobilize for WWI
Nuclear non-proliferation
Gain of function research
Militarization of space
Climate change
This last one being a pretty good example, not only for the overall dynamics, but as a clue to the important players on the field, and what we already know about how they play these games.
A chart here helps.
So here we are, the West once again paralyzed by anxiety over existential risks that, alone, it pretty much has zero ability to control, contain or regulate.
The recent rhetoric of AI safetyists / doomers is functionally equivalent to that of the climate doomers who maintain a worldview where it somehow makes sense to shut down nuclear plants while China burns mountains of coal.
How much impact does a shuttered nuclear plant have when China burns another 10 tons of coal?
None.
In fact, likely worse than nothing, as the replacement costs of that energy likely involve some combination of dirty fossil fuels, and commodity intensive energy infrastructure like solar and batteries.
So it goes.
Where does Mr. Yud come in?
Well, from a game theory perspective, given that both the public and private sectors in China seem to have unlimited appetite for using (and abusing) AI, we pretty clearly look to be in a prisoner’s dilemma here.
One where the only logical conclusion from the AI Risk crowd is to…
Start a nuclear war with China in order to control their investment in AI.
Well.
Reductio ad absurdum in the flesh.
Keep in mind, by the moral framework of the vast majority of humans, it’s not even clear that we ought not prioritize the welfare of intelligent machines.
Doesn’t mean it doesn’t make for good twitter content.
Until next time.