AI for Bad: A Saboteur’s Guide
A step-by-step manual for ensuring AI’s potential remains untapped while maximizing the likelihood of dystopia
In an increasingly information-rich future, AIs can be navigators for our children or wardens that devalue human autonomy. AIs can extend human reason or enroll us in digital kindergarten. AIs can empower billions of mini-captains or create one big boss at the top.
We don’t know in detail what a path towards good outcomes for human flourishing looks like, so we talk instead about what to avoid. Let’s go a step further: what would it look like if someone were trying to prevent good outcomes from AI? To answer that question, here is an imaginary saboteur’s guide.
1. Centralize control over AI. Aim for a single point of failure.
Machine learning turns data into practical computation, much like a steam engine turns chemical energy in fuel into movement. Both drive carriages, commerce, and culture. Pick someone to put in charge of all that.
2. Let pessimists dominate the discourse. Encourage polarization.
Define everyone else in opposition to those who believe AI will kill us all. Alienate, sideline, and repel optimists and the highly agentic. Encourage black-and-white thinking: AI must be good or bad, humanity’s savior or its doom. Make people pick sides, foster factionalism, and paralyze the public with vague fears and speculative futures while narrowing choice.
3. Favor probability over adaptability. Elevate the new oracles.
History shows that predicting the civilizational effects—both intended and unintended—of transformative technologies is a formidable challenge. Who foresaw the printing press sparking the scientific revolution or cars fueling sexual liberation? Yet insist AI is different. Crown superforecasters and their Bayesian techniques as prophetic. Champion confident ex ante regulation while scorning advocates of iterative learning. Make epistemic humility seem quaint.
4. Make theory complex. Keep it inaccessible.
Revolutions are about context, not depth. No amount of technical understanding of a steam engine can predict how rural workers will move to cities. To learn together, specialists must make their ideas legible and mutually adjust based on one another’s breakthroughs, so don't let them.
5. Define initiatives rigidly. Stifle spontaneous order.
Gatekeep resources by demanding concrete outcomes in advance. Survival in revolutionary times requires flexibility and openness to opportunity, so ensure bureaucratic calcification. Remember that serendipity is the enemy of control and, insofar as it is not quantifiable, it is worthless. Don’t waste time creating the conditions for decentralized, spontaneous orders to emerge.
6. Seal off sectors from one another. Isolate institutions.
Maintain mutually alien values and perspectives between groups by preventing bonds. Make academia seem useless, arcane, guildlike. Chain builders in industry to the blind logic of KPIs. Keep governments glacial and tutelary.
7. Entrust judgment to experts. Embrace dependence.
Exploit novelty to amplify authority. Equate current credentials with superior judgment about unknowable futures. Transform ordinary people into dependent children, with experts as their wise and benevolent caretakers.
8. Elevate societal priorities. Prioritize institutions over individuals.
Focus on what “society” or the “world” needs in the abstract and in the long term, not what might contribute to the flourishing of actual individuals in the here and now. Have schools ban students from using language models on homework in the name of educational standards, denying the next generation access to experimentation with the technology. Ask not what institutions can do for people but what people can do for institutions.
9. When in doubt, freeze development. Pause AI.
Weaponize fears about AGI to justify any preventative action in the present. Make it harder and more expensive to deploy sophisticated open-source AI models. Birth new regulatory schemes to aid in the slowdown, channeling the spirit of the Nuclear Regulatory Commission and its chilling effect on US reactor construction. If bureaucracy fails, consider more… explosive solutions. When critics cry that halting innovation before Darwin, Newton, or Einstein would have caused incalculably large opportunity costs, remind them we are fighting for the human race and cannot compromise. Collateral damage to science and its human beneficiaries is the price of safety.
10. Take it all very seriously. Stiffen the spine.
Wonder and awe are childish emotions. Exploring and playing are distractions from the rough road ahead. Since these are often the drivers of technological, scientific, and cultural advancement, we must be on guard against such embarrassing states of mind. It is more important to do things correctly than to do the right things. In the face of AI, a furrowed brow is the only rational expression. Laughter is the first step towards extinction.
If we foster effective global coordination and complete these ten steps, we can stifle innovation and undermine human flourishing at scale.
Thank you to Vincent Wang-Maścianica for this satirical guest post.