Ethical AI Is Also About What We Tell People

Ethical AI is often framed as a technical problem. Discussions tend to focus on bias mitigation, model accuracy, explainability, and safety protocols. These are important considerations, but they do not capture the full scope of what is happening.

Artificial intelligence is not only being developed. It is being narrated.

This perspective is informed in part by a recent discussion between Ed Zitron, author of Where’s Your Ed At and host of the Better Offline podcast, and Isaac Pound on The Tech Report. Zitron argues that “the AI industry is ignoring how much people hate it,” and critiques what he describes as dangerous rhetoric surrounding the current AI boom. That framing is important because it shifts the conversation away from technical capability and toward public perception, trust, and lived experience.

Over the past several years, the AI industry has paired rapid technological advancement with aggressive messaging about the future of work. Executives and researchers have publicly suggested that large portions of the workforce could be automated within relatively short timeframes. These statements are frequently amplified by media coverage, often without sufficient context or scrutiny.

This narrative is unfolding in a fragile economic environment. According to the Federal Reserve's 2024 Survey of Household Economics and Decisionmaking, 37% of Americans could not cover a $400 emergency expense without borrowing or liquidating assets. Economic insecurity is not an abstract condition; it is a daily reality for millions of people.

Against this backdrop, messaging about widespread job displacement carries weight. The World Economic Forum’s Future of Jobs Report 2025 projects that while AI and automation may create new opportunities, they are also expected to displace tens of millions of roles globally in the near term. These projections are complex and contingent, but they are often communicated in simplified, headline-driven formats that emphasize disruption over nuance.

The result is a widening gap between perception and reality.

Large language models, which underpin many current AI systems, are powerful but limited tools. They generate probabilistic outputs based on statistical patterns in their training data. They do not possess intent, understanding, or agency in the human sense. Yet they are frequently presented in ways that suggest autonomy or intelligence beyond their actual capabilities.
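To make that concrete, here is a minimal, illustrative sketch of the core operation behind a model's output: converting numeric scores into a probability distribution and sampling the next token. The vocabulary and scores below are invented for illustration; real models do this over vocabularies of tens of thousands of tokens, but the mechanism is the same weighted dice roll, with no intent behind it.

```python
import math
import random

def softmax(logits):
    """Turn raw scores into a probability distribution."""
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical vocabulary and made-up scores for the token after "The sky is".
vocabulary = ["blue", "falling", "clear", "green"]
logits = [4.0, 1.0, 2.5, 0.5]

probabilities = softmax(logits)
next_token = random.choices(vocabulary, weights=probabilities, k=1)[0]

print(list(zip(vocabulary, [round(p, 3) for p in probabilities])))
print("sampled next token:", next_token)
```

Nothing in that procedure understands the sky or believes anything about it; the output is a weighted random draw. The gap between that mechanism and how these systems are presented is exactly where the narrative problem begins.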

Research published in Nature Machine Intelligence highlights the risks associated with anthropomorphizing AI systems. When users perceive these systems as human-like, they are more likely to trust their outputs and rely on them in ways that may not be warranted. This dynamic is not accidental. Design choices, including conversational tone and personality, shape user behavior.

This raises an important ethical question: what responsibility do companies have in how they present their technology?

If the public is led to believe that AI systems are more capable, more autonomous, or more imminent in their impact than they actually are, then messaging itself becomes a form of influence. It affects how people make decisions about their careers, their education, and their sense of security.

Zitron’s critique is particularly relevant here. If the public response to AI is increasingly negative, and if that sentiment is being dismissed or ignored by those building and funding these systems, then the ethical gap is not just technical. It is relational. It reflects a disconnect between builders and the broader society expected to absorb the impact of their decisions.

Ethical AI, in this context, cannot be reduced to model performance metrics. It must also address communication practices.

There is also the role of media to consider. Journalism serves as an intermediary between technical development and public understanding. When claims from AI companies are repeated without critical evaluation, they contribute to a feedback loop in which perception is shaped by repetition rather than verification.

This does not imply malicious intent, but it does point to a structural issue. The incentives that drive attention, speed, and engagement can conflict with the need for careful, contextual reporting on complex technologies.

The ethical implications extend beyond misinformation. Distorted narratives shape how society adapts to technological change.

If individuals believe that their roles are imminently obsolete, they may alter their behavior in ways that are not aligned with actual labor market trends. If organizations overestimate the capabilities of AI systems, they may implement them prematurely, leading to inefficiencies or unintended consequences.

Ethical AI requires alignment between what systems can do and what is communicated about them.

This includes clear explanations of limitations, realistic timelines for impact, and a commitment to avoiding exaggerated claims that prioritize attention over accuracy. It also requires acknowledging uncertainty. The future of work is not predetermined, and projections about automation are subject to economic, regulatory, and social factors.

Ultimately, ethical AI is not solely about preventing harm at the level of code. It is about preventing distortion at the level of narrative.

Technology does not exist in isolation. It exists within systems of communication, power, and perception. The way it is introduced to society shapes how it is understood and how it is used.

If ethical AI is to be meaningful, it must account for that full context.
