The Theory of Fun: A valuable resource for science fiction writers.

There are few things more humbling than watching someone else work his way through the issues I've been touching on in my stories, issues where I was only vaguely aware that something was wrong with my approach, and seeing a dogged, rigorous treatment come out of his effort.

Eliezer Yudkowsky once expressed his admiration that I'd managed to work out the problems with having superintelligences around and had come up with the concept of Friendly AI almost independently.  One of the things that's long bothered me about the Journal Entries, though, is that since I write erotica in a superintelligent-AI setting, there are only so many crises I can write about before I start to wonder if I'm just repeating myself.

Eliezer has supplied the answer: The Theory of Fun.  If you're a science fiction writer working with transhumanist themes, the Theory of Fun is absolutely critical reading: "How much Fun is there in the universe?"  "Can we escape the human tendency to return to our baseline persona in an environment of accelerating abundance?"  "Fun involves engaging the senses, not abstracting them away."  "If we're free to optimize, do we provide an environment where one bad choice dooms you to an existence of misery?"

Probably the most telling chapter for me is Amputation of Destiny.  Without massive handwaving, the presence of superintelligence reduces the ordinary characters I write about to little more than bystanders living within the flow of history.  Part of the reason I've had trouble writing this past year, and have considered falling back to fantasy and contemporary fiction, is that I haven't figured out how to write convincingly about characters who can actually change their futures while living with AIs that pretty much see all and know all.

Eliezer has a heavy dislike of catboys, or any apparently conscious creature that has a more limited capacity for choice than human beings, calling them a "step in the wrong direction."  That may be, given his goals, but I don't think it adequately addresses the instantiation problem: what are our moral obligations to beings that don't exist?  As long as they're only a potential, even an imminent potential, we have no moral obligation to them: they don't have agency.  Instantiating a limited potential, even a conscious one with a more limited capacity than our own, is a morally neutral activity, so I don't have a problem with the whole sex-robot (or even sex meat machine) aspect.

Other chapters worth reading include Eutopia is Scary, in which he challenges creative people to imagine not a world where their ideas worked out and the future is just the present but moreso, but one genuinely as different from today as today is from the past; and Higher Purpose, which reads like a direct challenge both to the "Purpose culture" of the Journal Entries and to all posthuman writers: after we've solved all of the major challenges facing humanity, what "purpose" would we adopt next?

I don't recall whether Eliezer ever addresses the simple observation that our "mechanical purpose" is to survive and reproduce: without that, we cease to exist, and both controlling and supplanting that purpose are issues we need to deal with if we want to avoid poisoning our own pond before we escape it.  The "Purpose culture" of the Journal Entries proposes that this "mechanical purpose" can be arbitrarily replaced with any other mechanical purpose without a change in one's individual moral status (although it's very possible to wind up trapped in a "devil's offer"), and, again, instantiating a being with a different purpose from the default is a morally neutral act (for the individual; it might be a very different story for those around the individual).  Eliezer's take on this declaration is very different (see "Can't Unbirth a Child"), but he has an equally arbitrary idea of "a life worth leading."  (It reads remarkably like Eliezer's own life.)

Still, this is an incredibly well-thought-out set of ideas about living next door to the godlike AIs of love, and it's well worth reading and mining for insights about the future.
