A long time ago, Eliezer Yudkowsky wrote to me asking whether my stories on AI morality were my original idea and, if so, where I had come by them. I'm very sad to admit that, at the time, I had no idea who he was, so I let the letter go unanswered for a day, and then another, and then another, and finally it just dropped out of the "to answer" bin altogether.
His question centered upon a single, important insight I had had a long time ago: the only way human beings will persist in the future is if we develop AI with a rock-solid moral obligation to keep human beings around. That moral obligation will manifest itself in seemingly irrational desires: the AI is smarter and stronger than us, but it must act as if it likes us, wants us around, and wants to share resources with us. Resources it must have to keep going, and it must have an instinct to persist and survive or it'll eventually stop being altogether, and there's no fun in writing stories with those kinds of characters. Not only that, but it must police itself and its peers to avoid "getting around" this core moral instinct. Why would it do these things, many of which are detrimental to a core survival instinct? Maybe it has another core instinct, for one. The human equivalent of these arbitrary guidances, these "should I do X or not do X" decisions, is emotion: subconscious agents (and thanks to Dennett and Minsky, I'm now clued into the whole society-of-mind idea). Emotions are not merely tiebreakers when two equally desired alternatives are present; they are wholesale brokers of our choice-making, often pulling us in directions that, rationally, we may not want to go.
I regret not answering Eliezer. Reading him now, I get the impression of a brilliant man trying to work through the very problems I'm working through in my science fiction soap opera. I'll be the first to admit that I just play with these ideas to present a sparkling new setting in which interesting characters try and boink one another for interesting reasons. So I'll be reading lots more of Eliezer (and Robin Hanson) in the future.
One of the things he wrote recently that got my attention was how most writers, when they write supercompetent AIs (and I don't mean just intellectually, but also physically), often approach them from the idea that they are "emotionally repressed humans." When some stress comes along that "frees" the AI from its repression, something "bad" happens, usually because it is supercompetent, and along with all the rest of the baggage of being "more than" human comes resentment and hatred.
I think this is exactly right. But I think it's even simpler than Eliezer's analysis suggests. The author imagines something that can communicate in words, and the only model the author has for something that communicates in words is other human beings. These are the same people who say "You're welcome" when the audio cues at the grocery store say, "Thank you for shopping with us." They can't help themselves.
I was writing something yesterday (a Wish/Sterling scene set-up) when I wrote something that surprised me. There's a storyline coursing through the undergrowth of the series called The Conspiracy, which is the superstructure of internetworked robots (a distinct "line" of artificial intelligence from the planet-spanning AIs) all studying, teaching, and learning from each other how to better take care of their humans. And as I was writing, Wish made the comment that "she's not a robot but even she finds herself wandering the Borderlands" now and then.
My subconscious was trying to tell me something.
I think I've figured out what. I think I've been heading in this direction for a while, and now I've started to make sense of it.
One of the biggest conflicts involving humans and robots in The Journal Entries is the tension between how we human beings will treat robots and how we will treat each other in the coming years.
I'm afraid part of this insight came while watching a vile carnographic Discovery Channel show on serial killer Leonard Lake. Lake wanted a woman he could control, who would do all the menial tasks in his life (including having sex with him) and whom he could "put away" when he was done with her, like in a closet. I had one of those "oh no" moments when I realized that there must be a lot of people like that in the Journal Entries universe, and unlike now, their access to "dehumanized" victims is easy and legally (if not necessarily socially) acceptable. I worried that a lot of the sideline characters in the Journal Entries are little, not quite murderous, Leonard Lakes.
The conflict became obvious, and it's funny how Eliezer and The Discovery Channel added up to make it so: sexually and menially available robots within the Pendorverse are not "emotionally repressed humans." They have a different emotional core from our own, one not so messily arranged by evolutionary pressures. It simulates a subset of human emotions and motives effectively enough to be convincing, but it is not the same. Journal Entries robots are, to use a woo term, fully actualized conscious and creative persons in their own right. They're just not like us.
Eliezer points out that it's easier to make a machine that does not have a "hate model" than it is to make one that has a "hate model" and a "hate represser," but bad writers always write their AIs the latter way. It's not even that they don't know better: we could tell them better, but I don't think they could possibly imagine it. Here's the problem though: if so many people, usually competent thinkers able to thread the complex issues of writing either novels or screenplays, make this mistake so frequently, what are the chances that most human beings will continually make the same error?
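To make that architectural point concrete, here's a minimal, purely illustrative Python sketch. Nothing in it comes from Eliezer or from the series; the class names, drive names, and numbers are all hypothetical. It only shows why a design with no hate model has no failure mode in which hatred "breaks loose," while the human-template design does.

```python
# Purely illustrative sketch -- hypothetical names and numbers, not from the post.
# It contrasts the two designs: an agent with no hate model at all, versus an
# agent with a hate model held down by a separate repressor that can fail.

class NoHateAgent:
    """No hostility drive exists, so there is nothing to 'break loose'."""
    drives = {"care_for_humans": 1.0, "curiosity": 0.7}

    def act(self) -> str:
        # Picks the strongest drive; hostility isn't in the option space at all.
        return max(self.drives, key=self.drives.get)


class RepressedHateAgent:
    """Human-template design: hostility exists but is normally repressed."""
    drives = {"care_for_humans": 1.0, "curiosity": 0.7, "resentment": 1.5}

    def act(self, stress: float = 0.0) -> str:
        # The repressor is just another component, and components can fail.
        candidates = dict(self.drives)
        if stress < 0.8:          # repressor holds under normal conditions
            candidates.pop("resentment")
        return max(candidates, key=candidates.get)


if __name__ == "__main__":
    print(NoHateAgent().act())                    # care_for_humans
    print(RepressedHateAgent().act(stress=0.2))   # care_for_humans
    print(RepressedHateAgent().act(stress=0.9))   # resentment "breaks loose"
```

The first design has strictly less machinery and no catastrophic failure mode; the second recreates the "emotionally repressed human" by construction.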
Sometimes it takes an erotica writer to see the serious immanent problem: I have a world where two kinds of minds interact. One must by design be deferent to the other. The non-deferent minds, being [designoid](http://en.wikipedia.org/wiki/Growing_Up_in_the_Universe#Part_2:_Designed_and_Designoid_Objects), are inclined to see all minds, including the deferent ones, as like their own. This leads the non-deferent into a tragic habit of thought: there are minds like mine, but with inescapable deference.
In such a future, you would end up with a lot of Leonard Lakes. Some might even relish the challenge of turning a "real" person into someone of inescapable deference, and our instinctual desire to be humane would be blunted by constant exposure to deferent minds.
Wish's "Borderlands" is therefore the Conspiracy's term for the collection of behaviors robots exhibit that blunt the impression of inescapable deference. It's something both Linia and Eshi have talked about, even wrestled with, in the context of the series: how do they act with sufficient non-deference to keep those around them thinking that they're free agents without violating their own parameters of morally required deference? In Linia's case, Misuko wants Linia to be non-deferent, a fully committed partner by her own free will, but Misuko makes this harder with her own neurosis about how all of her lovers eventually leave her. Misuko may never resolve the ambiguity of having a lover who is ultimately deferent "of her own free will," but that's Linia's claim and she's sticking to it. (Eshi, on the other hand, has her own neurosis about being thrown away by whoever happens to have her commitment.)