Who Bears the Cost: AI, Power, and the Parallels with Succession

There is a pattern emerging in artificial intelligence that feels familiar, and if you have ever watched Succession, you already understand it.

That show was never really about who would take over Waystar Royco. It was about power and what happens when a small group of people make decisions that impact millions while remaining insulated from the consequences. The cruise scandal storyline made that clear. Real harm was done, and while victims were dealing with the aftermath, the people at the top were managing optics, negotiating strategy, and protecting their positions. The victims were not in those rooms.

That same dynamic is showing up in how AI is being built and governed.

There are rooms where decisions are made about AI deployment, funding, and policy. Those rooms are filled with experienced and often well-intentioned people, including executives, academics, and consultants with access to platforms, capital, and influence. At the same time, many of the people directly affected by these systems are not present. Workers screened out by hiring algorithms, creators navigating algorithmic suppression of their work, and practitioners who see where systems fail in real-world environments are dealing with the outcomes without shaping the decisions.

That gap matters because systems tend to protect the people who design and control them.

This is not about individual intent. Most of the people involved are not trying to cause harm. They are operating within systems that reward proximity to power, institutional credibility, and access to funding. Those incentives shape outcomes. That is what makes the parallel to Succession so relevant. The Roy family did not set out to harm people. They prioritized the survival and growth of the company, and harm became a byproduct of those priorities.

The same pattern is visible in AI governance. When concerns are raised about bias, privacy, or misuse, the response often follows a familiar sequence. The issue is acknowledged, commitments are made, panels are convened, and frameworks are introduced. The conversation expands, but the structure remains largely the same. The same institutions control resources, the same voices are elevated, and the same definitions of expertise continue to apply.

This is not reform. It is management.

Data helps make this clearer. According to Pew Research Center, 52 percent of Americans report being more concerned than excited about the increased use of AI, and 70 percent say the government should do more to regulate how it is used. At the same time, trust remains uneven, with a majority expressing concern about how AI systems handle personal data and decision-making. These concerns are not abstract. They reflect lived experiences with systems that already shape hiring, lending, visibility, and access to information.

There is also a gap between who is affected and who has influence. Research across the sector shows that advanced degrees and institutional affiliations remain a primary pathway into AI governance roles, which narrows the range of perspectives shaping decisions. Meanwhile, reports of algorithmic bias and discriminatory outcomes continue to surface across industries, from hiring tools to facial recognition systems.

At the same time, investment in AI continues to accelerate. Global spending on AI systems is projected to reach hundreds of billions of dollars in the coming years, with a significant portion concentrated among a small number of companies. That concentration of capital reinforces concentration of influence.

The result is a system where the benefits and the risks are not distributed evenly.

This is where structure becomes more important than intention. If decision-making power remains concentrated, outcomes will continue to reflect that concentration. If access to influence is limited to a narrow set of institutions and credentials, then the perspectives shaping the future of these systems will remain limited.

Speed adds another layer to this challenge. AI systems are being developed and deployed faster than regulatory and social frameworks can adapt. What took social media years to reveal is unfolding much more quickly. Adoption is accelerating, integration is deepening, and the consequences are compounding in real time.

At the same time, the benefits are real. AI is improving medical diagnostics, increasing accessibility, and expanding productivity in ways that were not possible before. Both progress and risk are happening simultaneously. That duality makes the moment harder to interpret, but it does not change the underlying structure.

The most important question is not whether AI is good or bad. It is who has the authority to shape it.

Succession showed that power protects itself through process. It uses language that signals concern while maintaining control. It creates mechanisms that absorb pressure without redistributing authority. That same pattern is visible in how AI is being discussed and governed.

Statements about inclusion and accountability are common. Recognition of potential harm is widespread. But the structure of decision-making remains largely unchanged.

Until that changes, the outcomes will not.

The question that matters is simple. Who is in the room when decisions about AI are made, and who is left to absorb the consequences?

Because until those two groups become the same, we are not solving the problem. We are managing it.

And we have seen how that story ends.
