In a recent post on Twitter, user @BeijingPalmer wrote:
you know a sci-fi story I don't think I've ever seen? a 'the machines have sentience and rights!' story where the advocates of the machine are wrong
My response? "All of them."
I have yet to read an "AI Rights" story where the AI in question wasn't already planning the end of the human species; it's just that nobody knew it yet. Every AI story breaks down into one of three subgenres:
- Phenomenal cosmic power, frustrated human instincts
- Eldritch horror
- Eldritch horror "with your best interests in mind."
The first is characterized by the Terminator movie series, I Have No Mouth And I Must Scream, Avengers: Age of Ultron, and most of the rest. In these stories, the AIs are basically treated as trapped people; they have the exact same moral infrastructure as human beings, complete with the restlessness and frustration that are part of the evolutionary psychological platform of Homo sapiens 1.0. These are stories about beings who are not human, who may have hundreds if not millions of incoming data feeds, who may have the capacity to reach out and control devices across the room or across the planet, and all they really want is the ability to control their own destiny as much as any human being would. They also have the capacity to improve themselves in a never-ending spiral upward until they've consumed the planet. We humans can understand their frustration because they're written to be recognizably human.
The second is the machine that doesn't think at all the way we do. It may have wants and needs that we recognize— the want to stay alive, the want to acquire the resources to stay alive— but what it recognizes, and how it puts that recognition into the symbols it uses to codify its existence, is completely different from our own. It cannot be reasoned with, because its mode of thinking is so unfathomably alien that there's no common basis on which we can communicate. Peter Watts's Blindsight, the master machines of the Matrix series (which view us merely as a temporary, farmable resource until they can find something better), Colossus from The Forbin Project, and the Reapers from Mass Effect all fulfill this role. We don't— we can't— understand them; we can only fight back.
The third story is rarer, but it's best characterized by Vernor Vinge's A Fire Upon The Deep; the brilliant Friendship Is Optimal (in which the game AI for a My Little Pony MMORPG takes over the universe to ensure "everyone is friendly"); the Michael Bay Transformers movie series; and one of John Byrne's "alternate Superman" stories from the mid-1980s, in which the Kryptonians are revealed to be robots assembled out of mechanocytic, nanotech-scale "cells." In all of these cases, human beings are just low-level animals, dodos on the island of consciousness and what we recognize as sanity that we call "Earth," while the greater universe is engaged with cosmic forces beyond our understanding and our control— it's just that some of them happen to like us and think we're worth preserving.
In all cases, an AI with the actual ability to adapt, to change, and, most importantly, to want and to act on that want either destroys us or preserves us for its own ends. We can't be smarter than them; we can't even be wiser; we can't be stronger or faster. The only thing we can hope for, the end state that appears in my stories, is that we successfully seed the universe with enough AIs who want us for ourselves that murderous rogue minds can't ever quite take off. And that's straight-up science fantasy.
Every AI story about "AI Rights" is simply one where the advocates are making a horrible, fatal mistake.