Ethical AI and the Divide Between Access and Surveillance

Ethical artificial intelligence is often framed as a question of bias, transparency, and technical accountability. These are important considerations, but they do not fully capture how AI is reshaping society. A more revealing lens is access.

Who gets to use these systems, and who is subjected to them?

Two parallel developments highlight this divide. On one side, facial recognition technologies are expanding into both public and private spaces. On the other, incarcerated populations are largely excluded from meaningful access to AI tools, even as those tools become foundational to modern life.

The ethical tension between these two realities is difficult to ignore.

Facial recognition has become one of the most visible and contested applications of artificial intelligence. According to the Electronic Frontier Foundation, facial recognition technology is now used by law enforcement agencies in over half of U.S. states, often without clear standards for accuracy, consent, or accountability. At the same time, private entities—from retail environments to entertainment venues—have adopted similar systems with even fewer regulatory constraints.

This raises fundamental questions about governance. While some states have introduced restrictions, the regulatory landscape remains fragmented. Illinois’ Biometric Information Privacy Act (BIPA) is often cited as one of the strongest protections, requiring informed consent before biometric data is collected. In contrast, many states lack comprehensive frameworks altogether, leaving individuals with limited recourse when these systems are misused.

New York, California, and Massachusetts have all introduced or passed legislation addressing aspects of facial recognition, but enforcement and scope vary widely. In practice, this creates a patchwork system where the same technology may be tightly regulated in one jurisdiction and largely unchecked in another.

The expansion of surveillance technologies is occurring alongside another, less visible trend: restricted access to AI among incarcerated populations.

The United States has one of the largest prison populations in the world, with more than 1.2 million individuals currently incarcerated, according to the Bureau of Justice Statistics. Within these facilities, access to technology is tightly controlled. Internet usage is typically prohibited, and even basic digital tools are limited to monitored communication platforms.

At the same time, artificial intelligence is becoming integrated into education, legal research, job applications, and business development. Individuals outside prison walls are increasingly expected to understand and use these tools as part of everyday life.

This creates a significant disparity.

Reporting has shown that incarcerated individuals are attempting to bridge this gap through indirect means. Some rely on family members or friends to input prompts into AI systems and return the outputs via mail or messaging platforms. These outputs are used to draft legal arguments, develop business plans, and support educational efforts.

The implications are significant. Reentry into society already presents substantial challenges, including employment barriers, housing instability, and limited access to resources. A lack of familiarity with emerging technologies adds another layer of disadvantage.

From an ethical standpoint, this raises a critical question: if AI is becoming essential to participation in modern society, what does it mean to systematically exclude certain populations from accessing it?

This question does not have a simple answer. Concerns about security and misuse within correctional facilities are valid. However, exclusion is not a neutral policy choice. It shapes outcomes.

The contrast between these two developments—expanding surveillance and restricted access—reveals a broader pattern in how AI is being deployed. Technologies that enable monitoring, classification, and control are often adopted quickly, while technologies that enable education, empowerment, and participation are distributed more selectively.

This pattern is not unique to AI, but AI amplifies its effects.

Facial recognition systems, for example, have been shown to exhibit higher error rates for certain demographic groups. A 2019 study by the National Institute of Standards and Technology (NIST) found that many facial recognition algorithms had higher false positive rates for Black and Asian individuals than for white individuals. While newer systems have improved in some areas, concerns about bias and misidentification persist.

When such systems are deployed in environments with limited oversight, the risks extend beyond technical error. They become questions of civil rights.

At the same time, limited access to AI tools can restrict an individual’s ability to advocate for themselves, pursue education, or prepare for economic participation. The gap between those who can leverage AI and those who cannot is likely to widen over time.

Ethical AI, therefore, cannot be reduced to system performance alone. It must also address distribution.

Who has access to these tools? Under what conditions? With what safeguards?

And equally important, who is being subjected to these systems without meaningful consent or recourse?

These questions point to a broader understanding of ethics as a structural issue rather than a purely technical one.

As AI continues to evolve, the choices made about access and governance will shape not only how the technology functions, but whom it serves.

If ethical AI is to be meaningful, it must engage with both sides of this equation. It must consider not only how to prevent harm, but how to ensure that the benefits of these systems are distributed in ways that expand, rather than restrict, human possibility.
