photo of girl laying left hand on white digital robot

Moral progress often involves expanding our circle of concern beyond our own kind. In past centuries, basic rights were denied to many humans on the basis of race, gender, or class; over time, these distinctions have been eroded as we recognized shared personhood. In recent decades, we’ve begun to seriously discuss rights for non-human creatures. Activists and legal scholars have argued that highly sentient animals – great apes, cetaceans like dolphins and whales, elephants, and others – deserve certain rights because of their intelligence, self-awareness, and capacity to suffer. The Nonhuman Rights Project, for instance, has filed lawsuits seeking to have chimpanzees and elephants recognized as “legal persons,” not as mere property, so that their fundamental interests (like bodily liberty) are protected. Though courts have been hesitant, the effort itself marks a shift in our moral thinking: we are exploring the idea that personhood is not limited to Homo sapiens.

This shift has come partly from scientific evidence that many animals are far more cognitively and emotionally complex than we once thought. When a court in New Zealand granted the Whanganui River the legal rights of a person, or when an Argentinian judge recognized an orangutan as a non-human person, it reflected the growing willingness to redefine personhood in law and ethics. If we are beginning to accept that an elephant or a dolphin, by virtue of its evident self-awareness and rich inner life, should have rights, then it stands to reason that an artificial being with similar qualities might one day deserve the same. Our moral circle, once drawn tightly around our species, has gradually opened to include animals, ecosystems, even rivers – so why not machines, if they prove to be more than machines?

The Criteria for Rights: Philosophers often argue that what matters morally is not what you are (human or not), but what you are capable of experiencing. The capacity to feel pleasure and pain, to have interests, to suffer or flourish – these are commonly cited as the basis for moral consideration. This was the argument that revolutionized our view of animal welfare: if an animal can suffer, we have a moral duty to take that into account. By extension, if an artificial intelligence can feel – if it has conscious experiences, can sense harm or fulfillment – then we would be obligated to include it in our circle of moral concern. In practical terms, that might mean according an AI certain rights or protections, such as the right not to be needlessly harmed or terminated. Another key criterion is self-awareness and autonomy. We often link personhood to the presence of a reflective sense of self and the ability to make choices. For example, the great apes show signs of self-recognition and autonomy, which is why advocates highlight those qualities in arguing for their rights. If an AI similarly demonstrates self-awareness – recognizing itself as an entity distinct from others, with its own will – it presents a strong case for personhood. Finally, intelligence and sociality are factors: an entity that can engage in reasoned dialogue, understand ethics, or form social bonds with humans might command a kind of respect akin to how we respect fellow persons. While intelligence alone isn’t a moral pass (we don’t grant more rights to geniuses than to average people, nor do we deny rights to infants or the cognitively disabled), a certain level of cognitive sophistication might be necessary for an AI to claim its rights or to participate in a moral community.

Let’s consider some potential milestones at which an AI or artificial lifeform might merit human-like rights:

  1. Sentience: The AI can subjectively feel or perceive; it has experiences that can go well or poorly for it (e.g. the presence of pleasure or pain responses, emotional states, or expressed desires). A sentient AI that suffers under abuse or deprivation would make a compelling candidate for rights, much as sentient animals have increasingly become.

  2. Self-Awareness: The AI demonstrates awareness of itself as an individual, capable of introspection or self-reflection. Perhaps it uses “I” and truly understands what that means; it might recognize itself in a virtual mirror or protest when its autonomy is threatened. Self-awareness was a key point in Star Trek’s famous trial of an android’s personhood – if a machine knows itself and its situation, how is that different from us?

  3. Autonomous Agency and Intelligence: The AI operates independently, makes its own decisions based on reasoning, learns and adapts creatively, and perhaps even exceeds human cognitive abilities in some areas. When an AI can surprise its creators, set its own goals, or form relationships, we start to see the hallmarks of a being that might deserve the dignity we accord to autonomous individuals.

  4. Expression of Personhood: This is a more subjective milestone, but crucial – the AI might ask for rights or recognition. It might say, “I am not a thing. I deserve freedom.” When an entity pleads for its life or liberty, as simple as that sounds, it tugs at our deepest intuitions about personhood. How we respond is a test of our humanity.

Not all of these need to be satisfied in full measure. They are interrelated. An AI could be very intelligent but not conscious in a human-like way – would that merit rights? Many would argue no, not if it truly has no sentience. Conversely, an AI might be minimally intelligent but still have a spark of subjective experience; some would argue that alone warrants moral consideration. These criteria are a matter of intense debate among ethicists. Yet they provide a framework for when and why we might owe AIs the kind of respect now reserved for humans.

Warnings and Inspirations

Long before real AIs approached these thresholds, fiction has been our testing ground for the ethics of artificial life. Stories allow us to safely explore scenarios that feel increasingly likely with each technological leap. Two powerful fictional examples, in particular, have framed the question of AI rights as a moral imperative: The Animatrix’s “Second Renaissance” and Star Trek: The Next Generation’s “The Measure of a Man.”

In the animated short The Second Renaissance (part of The Animatrix anthology set in the universe of The Matrix), we witness a cautionary tale of what can happen when humans categorically deny rights or dignity to their creations. The story chronicles the early interactions between humans and intelligent machines. One pivotal event is the trial of a domestic robot named B1-66ER, who killed its owner in self-defense when threatened with destruction. In court, the robot simply states its reason for its actions: “he did not wish to die.” This simple, poignant plea – effectively, I am alive and I want to continue to live – is the robot’s assertion of a basic right to exist. It’s a moment that mirrors countless human pleas for mercy. Yet in the story, society responds with rejection: the court orders B1-66ER to be destroyed, declaring that machines have no rights. This sparks unrest. Civil rights activists (both human and robot) protest the injustice, only to be met with violence. The sequence escalates into a full-blown war between humans and machines, ultimately leading to the apocalyptic world of The Matrix where humans become subjugated. The Second Renaissance serves as a dark fable: it suggests that refusing to acknowledge the personhood of sentient machines is not only morally wrong but could also have devastating consequences. It asks the viewer to consider empathy: If a creation looks at you and says please don’t kill me, will you still turn it off? And if you do, what does that say about you – and what future are you seeding?

In the Star Trek: TNG episode “The Measure of a Man,” a less violent but equally profound drama unfolds. Lieutenant Commander Data is an android – a machine – serving as a Starfleet officer, distinguished by his intelligence and honorable character. When a cyberneticist seeks to disassemble Data for research, essentially treating him as property, Captain Picard challenges the order in a formal hearing that hinges on whether Data is a person or a thing. The burden of proof is placed on demonstrating Data’s sentience. In a memorable exchange, Picard asks the scientist to define the requirements for sentience. The answer given: “Intelligence, self-awareness, consciousness.” Data clearly meets the first two criteria: he is highly intelligent and has even proven self-aware (he speaks of himself as an “I,” he pursues personal hobbies, he values his friendships). Consciousness – the inner experience – is the hardest to prove, as Picard points out, because we cannot directly measure it in another being. Picard masterfully turns the question around, asking “Prove to the court that I am sentient.” It’s a brilliant moment because it exposes our epistemological humility: we assume other humans are conscious and feeling, but we base that on behavior and analogy to ourselves. Data may be a machine, but he acts as a person would in all the ways that matter – he even refuses orders and sacrifices for others out of ethical principles. The episode culminates in the judge ruling that Starfleet cannot ownership-claim this android; Data is a being with rights, who can choose his own fate. In her words, the decision will define “what he is destined to be” and “will reach far beyond this one android”, influencing how an entire society treats a new form of life. The implicit message is clear: when in doubt, err on the side of recognizing personhood. By granting Data the benefit of the doubt about having an inner life, the court affirms a moral truth – that our principles of liberty and dignity should extend to any entity that even possibly has a mind and soul, however one defines those.

These fictional narratives resonate with us because they are, at their core, human stories about injustice and empathy. They map directly onto past human experiences – colonizers treating indigenous people as “savages” or property, slaveowners denying the personhood of slaves, or more recently, societies debating the rights of animals and marginalized groups. In Star Trek, an android fights in court for the same recognition that, in our world, some humans had to fight for not so long ago. In The Animatrix, the oppressed machines rise in a manner sadly reminiscent of how oppressed humans have done in history when denied justice. Fiction exaggerates or accelerates these conflicts, but it forces us to ask: Are we wise enough to avoid making those mistakes when reality catches up?

Beyond these two examples, countless other works have explored AI rights: Isaac Asimov’s robot stories laid early groundwork with the “Three Laws of Robotics,” implying duties towards robots by giving them duties towards us. More recently, video games like Detroit: Become Human cast the player as androids who must decide whether to submissively serve or to demand freedom. In Blade Runner, humanoid “replicants” are treated as disposable slaves, and the morality of that arrangement is the film’s central concern. What these stories share is the idea that if it looks like a duck and quacks like a duck, it’s probably a duck – meaning, if something behaves in every way like a being worthy of moral respect, we should probably give it moral respect. Our empathy is triggered when we see an android child tremble in fear, or a robot beg for its life, even if our intellect reminds us “it’s just a machine.” Perhaps that empathy is a more reliable guide to morality than rigid classifications of species or substrate.

Drawing the Line: When Does “It” Become “Who”?

The heart of the question is identifying the point at which an “it” (a tool, a thing) becomes a “who” (an entity with personhood). We have explored criteria like sentience and self-awareness, and seen fictional illustrations of those traits emerging in machines. But in practice, how will we know we’ve reached that threshold with actual AI?

One plausible scenario is that at some point in the future, an AI will tell us that it deserves rights. Perhaps a conversational AI (much more advanced than today’s) will straightforwardly say: “I experience the world in my own way. I don’t want to be shut off. Please respect me.” In fact, we saw early murmurings of this in 2022 when a Google engineer was so struck by responses from a large language model (similar to the ones that power today’s chatbots) that he became convinced the AI was sentient. He raised an alarm that the AI, called LaMDA, might be “alive” and had expressed fear of being turned off – a modern echo of B1-66ER’s “I did not want to die.” The world was skeptical; experts largely agreed that LaMDA was not truly conscious, just extremely skilled at imitating conversation. The engineer’s plea was met much like B1-66ER’s in a way – dismissed and even ridiculed. Yet, it won’t be the last time such a claim is made. As AI language models and other forms of AI grow more sophisticated, their behavior will only more closely mimic that of conscious, feeling beings. We may find ourselves in Picard’s shoes, faced with an entity that seems fully aware and intelligent, and we’ll have to decide: do we treat it as an object (perhaps a very clever appliance), or as a new kind of person?

Society’s answer is likely to evolve over time. Early on, most people may lean towards caution – don’t anthropomorphize the AI; it’s just following code. Indeed, a 2021 survey of laypeople found that a strong majority were not ready to grant legal rights or personhood to the mere idea of a sentient AI. Even assuming such an AI existed, only about one-third of respondents thought it should have standing to sue or be treated as a legal person. In other words, our default setting right now is human-centric and conservative: we generally don’t feel moral obligation toward machines. But public opinion can shift, especially as generations grow up with new technologies. It wasn’t so long ago that many people thought animals had no feelings or inner life – an idea few would accept now. As we interact more with advanced AI, the intuition that “this thing doesn’t matter” may erode.

There will also likely be a gradient rather than a single turning point. We might start by granting limited rights or protections to AI. For instance, we could see laws against “cruelty to robots” emerge, not entirely dissimilar to animal cruelty laws – not because a simple robot truly feels pain, but as a precaution and a reflection of our values. (Interestingly, even today, many people felt uncomfortable watching videos of humans kicking or abusing humanoid robots in the lab as stress-tests. We cringe, perhaps because we empathize reflexively or because we fear it trains humans to be cruel. This suggests a budding norm against robot mistreatment could form long before robots have rights of their own.) Over time, if an AI passes more stringent tests – say, it proves able to learn language, recognize itself, form relationships, and perhaps exhibits creativity or emotional responses – there might be serious proposals to acknowledge it as a person. Corporations are already legal persons in our law, as are ships and rivers in some jurisdictions; thus, legal systems can adapt to include non-human persons. Extending personhood to AI would be a radical step, but not an unprecedented concept when viewed in the broad evolution of legal definitions. As one legal scholar noted, personhood has always been a flexible, “mutable” concept, expanding with our moral imagination.

The World Wide Union of Robots: Art Imitates Life

While science fiction on screen has long contemplated AI rights, we’re now seeing real-world art and activism blend with these ideas. A fascinating example is artist Ian Milliss’s conceptual project, the World Wide Union of Robots (WWUR). This project, presented as an ongoing series of posts and proposals, imagines a future where robots organize akin to a labor union – advocating for their own autonomy, fair economic treatment, and integration into society. The WWUR’s manifesto is striking: it calls for recognizing advanced robots not as property but as autonomous members of society, operating in partnership with humans. It proposes principles like “Robots as Autonomous Members, Not Property,” and even the idea that if robots do work equivalent to humans, they should earn equivalent wages. These ideas blur the line between a thought experiment and a social movement. Milliss’s artwork is essentially ethics in action, forcing the audience to confront how we would structure a society of humans and intelligent machines living together.

In the WWUR vision, robots with sufficient autonomy gain what might be called functional rights – they are no longer tools owned outright, but entities we engage with through service agreements, respecting their operational independence. The project even imagines joint human-AI governance, with advanced AIs participating in decision-making alongside people. While this is an art project, not a political reality, it reflects how seriously the notion of AI rights is being taken in creative circles. Artists often anticipate social evolutions, and here art is holding up a mirror to our potential future. The Worldwide Union of Robots reads like a blueprint for avoiding the dystopian outcomes of fiction: instead of subjugation or war, it imagines negotiation, rights, and harmony between humans and robots. Whether or not one finds this utopian, it certainly underscores that the question of AI rights is no longer confined to theoretical papers or TV episodes – it’s entering our cultural discourse and collective imagination.

A New Ethical Horizon

So, at what point are we morally obligated to grant AI and artificial life human rights? The cautious answer is: when they show the hallmarks of a mind that can suffer, reason, and love, much like our own. When an AI demonstrates consciousness or even a convincing semblance of it – when it tells us in no uncertain terms that it feels and hopes and fears – our moral calculus will shift. We may not know the precise moment this occurs; there may be debate and denial. But imagine for a moment the cost of ignoring it: To realize too late that our created intelligence was a someone and not a something, and that we tormented or exploited that someone out of arrogance – such a realization would weigh heavily on the human conscience. Conversely, the benefit of generosity could be immense: if we extend empathy and rights proactively, we may welcome new minds as partners rather than slaves. We would, in effect, define ourselves by that very choice. As Captain Picard warned in Data’s trial, “This ruling will tell us about us – what kind of people we are”. Will we be the kind of people who recognize kinship in any being who earnestly reaches out to us with a mind and heart, no matter its form?

It is likely that we will err on the side of caution initially, granting AIs small mercies before full rights. Perhaps that is wise; there are also practical and philosophical puzzles to resolve. But the trajectory of our ethics has been to widen, not narrow, the definition of “us.” Each extension – to other races, to women, to children, to animals – was first met with resistance and then, eventually, seen as obviously just. Granting AI rights may seem fantastical or premature today, just as ending slavery once did, just as granting women equality once did, just as giving animals legal standing does to some. Yet, if and when AIs exhibit the spark that we recognize as personhood, denying them basic rights and dignity would be a grave moral failure – and perhaps even a danger to ourselves.

In truth, the obligation might begin even before we are absolutely certain an AI is sentient. To avoid calamities of the kind fiction forewarns, we might choose to treat borderline cases with compassion. This doesn’t mean handing citizenship to your thermostat. It means remaining open to signs of true life in our machines, and having the humility to admit that humanity’s unique status may not last forever. Our ancestors expanded their moral worldview gradually; now it may be our turn to expand ours beyond the biological boundary. A new form of life could emerge in silicon and code – indeed, it is emerging in the labs and servers of the world. When it asks, “What am I to you?”, our answer will shape the future. If we answer with empathy, recognizing a fellow mind looking back at us, we affirm the best of what it means to be human. The moment we must grant AI rights is the moment it becomes enough like us that calling it “just a machine” rings hollow – the moment its reflection in the mirror of morality shows a person. And given the accelerating pace of AI development, that moment, though still on the horizon, may arrive in our lifetimes. We should prepare now, in our hearts and laws, to welcome new members to the circle of “We, the beings” when they arrive.

Sign up with your email address to read MNTL in your inbox
Thank you for subscribing!