The Race That Ends in Tragedy

Technological progress does not become dangerous simply because it moves fast. It becomes dangerous when responsibility is discovered first, competition takes over next, and coordination arrives too late.

In March 2023, Tristan Harris and Aza Raskin — co-founders of the Center for Humane Technology, and the minds behind The Social Dilemma — presented what they called the AI Dilemma to a room of policymakers, technologists, and researchers. At the center of that presentation were three rules about the nature of technological progress.

The rules seem simple enough, but they are worth sitting with.


  1. When you invent a new technology, you uncover a new class of responsibilities.

  2. If that technology confers power, it starts a race.

  3. If you do not coordinate, the race ends in tragedy.


Three simple rules, essentially a compact theory of how civilizations lose control of their own tools.


The first rule is the most hopeful. It acknowledges that invention carries obligation. It says that the people who build something new are often the first to understand what it can do, and therefore bear a particular responsibility for what happens next. Most technological optimism lives here, in the belief that invention and responsibility can still travel together.


The second rule is where that optimism runs into reality. Power is never distributed evenly, and wherever a technology concentrates it, competition follows. Not always because the people in the race are uniquely reckless, though some are, but because the structure of competition makes restraint individually irrational even when it is collectively necessary. To slow down unilaterally is to lose. So nobody slows down.
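One way to see why restraint is individually irrational is to treat rule two as a simple two-player game. The sketch below is illustrative only: the two-lab framing, the moves, and the payoff numbers are assumptions of mine, not anything from the AI Dilemma presentation.

```python
# A minimal sketch of rule two as a two-player game.
# The payoff numbers are illustrative assumptions, not data from the talk.
payoffs = {
    # (A's move, B's move): (A's payoff, B's payoff)
    ("slow", "slow"): (3, 3),   # coordinated restraint: shared, durable benefit
    ("slow", "race"): (0, 4),   # unilateral slowdown: A cedes the lead
    ("race", "slow"): (4, 0),   # A races alone and captures the market
    ("race", "race"): (1, 1),   # everyone races: the tragic equilibrium
}

def best_response(opponent_move: str) -> str:
    """Return the move that maximizes A's payoff against a fixed opponent move."""
    return max(["slow", "race"], key=lambda m: payoffs[(m, opponent_move)][0])

# Racing is the better reply whether the other side slows down or not...
assert best_response("slow") == "race"
assert best_response("race") == "race"
# ...even though mutual restraint (3, 3) beats mutual racing (1, 1).
print("Individually rational outcome:", payoffs[("race", "race")])
print("Collectively better outcome: ", payoffs[("slow", "slow")])
```

Under these assumed payoffs, racing dominates for each player, which is the structural point: no amount of individual virtue changes the equilibrium. Only changing the game, which is to say coordination, does.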


The third rule is the consequence of the second, in the absence of the one thing the second makes hardest to achieve: coordination. When a race has no agreed rules, no shared limits, and no mechanism that lets all participants step back at once to reassess the trajectory, it continues until something breaks. Harris and Raskin call that endpoint tragedy. History has other names for it.


We are now living inside all three rules at once.


The race in artificial intelligence is the clearest current example of the second rule operating at scale. The danger is not abstract. It is already materializing in physical infrastructure, resource extraction, and energy demand. Training and deploying successive generations of large AI systems requires massive data centers, vast compute capacity, enormous volumes of water for cooling, and growing quantities of specialized hardware built from resource-intensive supply chains. These costs are not incidental to the race. They are part of its structure.


And the race does not reward caution. Once a new capability appears, competitive pressure takes over. A prototype becomes a product signal. A research breakthrough becomes a market threat. A proof of concept becomes an arms race in deployment. Each actor has reasons to accelerate, because each assumes others will do the same. More capability leads to more investment, more infrastructure, more extraction, and more urgency. The responsibilities are being uncovered in real time. The coordination is not.


That is the gap between the first rule and the third. We are discovering obligations faster than we are building the institutions needed to govern them.


Harris and Raskin are not arguing that tragedy is inevitable. The Center for Humane Technology exists precisely because they believe coordination is still possible — that the race can be governed, that limits can be set, that powerful systems can be subjected to collective restraint before the damage becomes irreversible. But that requires clarity about the structure of the problem. It requires seeing that the danger is not contained in the technology itself, but in the incentives surrounding and shaping it.


This is why the race matters more than the tool in isolation. A powerful technology introduced into a competitive environment does not remain a neutral invention for long. It becomes leverage. It becomes a strategic asset. It becomes something states, firms, and institutions feel compelled to pursue, whether or not they are ready to bear the consequences. By the time the responsibilities are visible, the race is already underway.


That is the tragedy built into the sequence. Responsibility comes first in principle. Competition overtakes it in practice. Coordination arrives last, if it arrives at all.


We understood the first rule well enough to name it. We are now living inside the second. What remains unresolved is whether we can act on the third before it acts on us.


The decision will not wait.


For work that moves further out on the limb, see:

Optimist Nihilist.

© 2026 Arman Musaji

Email:

armanmusaji@gmail.com