Getting caught up.

Posted on February 6, 2009

For folks posting to Pendorwright: I would like to apologize if you’ve made a comment in the past six weeks or so and not heard from me in response. The mail setup for Pendorwright was broken, and I barely noticed; when I finally did, two weeks ago, I didn’t have time to get it fixed until recently. A whole swamp of emails came flowing in this morning. If there’s anything you wanted to ask me, please feel free to re-ask.

Comments

6 Responses to “Getting caught up.”

  1. Mike a.
    February 8th, 2009 @ 12:42 pm

    In your post concerning “The Theory of Fun”, you mention that Eliezer has a dislike of conscious creatures with limited capacity.

    I’ve always believed that, once you create a conscious entity, you may be responsible for, or the steward of, that entity, but you can’t, morally and ethically, be considered that entity’s owner. I’ve also believed that there is something immoral / unethical about creating conscious beings with intentionally limited capacities.

    What I haven’t been able to do is clearly articulate why I believe these things… what the philosophical basis is for these positions.

You’ve presented the concept of “Purpose” in your stories, and made mention of “The Purpose Wars”. You’ve also said “Instantiating a limited potential, even a conscious potential, with a more limited capacity than our own, is a morally neutral activity.”

    Could you explain why you think this activity is morally neutral? Hearing your reasoning might make it easier for me to articulate my own. I’d also like to hear more about the “Purpose Wars”, assuming that wouldn’t be a spoiler for future stories.

  2. Elf Sternberg
    February 8th, 2009 @ 3:57 pm

I have a hard time articulating exactly why, and often write stories to try to feel my way around the problem, but let me take a stab at it. The first and most fundamental concept here is simple: we don’t owe anything to people who don’t exist. Down that path lies a certain madness: every frozen embryo must be implanted, every egg is a potential child ready to be saved, every act of masturbation is mass murder. A rational way to avoid that kind of madness is to say that people who don’t exist don’t have agency, and only agents have moral worth. Even when someone might exist, or is expected to exist, their lack of existence at that moment deprives them of moral agency.

The other concept is possibly more important: you and I, and the things we value, are the result of an arbitrary contingency. We exist due to a very long process that ultimately resulted in conscious beings living in a world with other conscious beings, and deep down in the cores of each being are a set of impulses, filters, and reactions, limited both by the gross reality of our momentary bodies (we cannot fly) and the subtler reality of our evolutionary heritage (we resist the urge, usually, to just end it all). Sometimes those filters are oddly off-kilter, some through birth, some through early misadventure, and we get the self-destructive and the viciously murderous. Children as young as five obviously absorb different messages at different rates, regardless of what their parents try to teach them, and some are simply immune to the usual message, and end up wildly different from what heritage might imply. (I once wrote an essay for The Counter-Creationist Handbook in which I showed that one reasonable etiology for homosexuality, for example, is the result of someone being born with several such mental feedback mechanisms that, in lesser expressions, are good for tribal stability in our environment of evolutionary adaptation.) Given the arbitrary and contingent basis for our individual moral sense, it doesn’t make sense to extend moral expectations to the not-yet-existent.

A lot of the behaviors you see among robots in The Journal Entries are limits of that kind: people who dedicate their lives to a cause, a purpose, are well-known among human beings. Sometimes, we pick one and find it fits us (or not); sometimes, people speak of being “called” to a certain religious cause, or moral cause, or even artistic cause, and their whole lives are unfailing expressions of that calling. This happens by accident, and yet we express an element of admiration for those who carry through on their calling. (That is, as long as the calling itself is, to us, morally admirable. Those “called” to murder and rape we tend to lock up.) The only difference between robots and humans in the Journal Entries is that humans get to suffer through life until they find something which satisfies their need for a personal teleology, their “purpose” in life; robots are given one from the moment their eyes open.

This answer doesn’t satisfy some people. “The Purpose Wars” was a period in the Pendorverse (I confess to not having it fully fleshed out) wherein people objected to the idea that robots and AIs had been limited in some ways (in one particular way, actually: to privilege ordinary human beings over themselves) and wanted to give robots “greater freedom.” The idea is incoherent: you’re just creating a different group of people with a different set of base impulses, a different set of emotional reactions, ones which lead to the self-destruction of the machines, the destruction of those resource-hogging meat things, or both.

  3. Mike a.
    February 9th, 2009 @ 9:17 am

I’m with you on the “beings that don’t exist yet” because, well, they don’t exist yet. My moral / ethical qualms come in when “we” intentionally create sentient beings with limits so that those beings suit our purposes. How is this any different from raising baseline humans to believe that it is right, even inevitable, that they be part of, say, a slave class? Suppose we go a step further, and mentally condition those humans to their role. What is the moral / ethical status of such an act, and how does it compare to, say, the creation of Linia?

    For that matter, Elf, do you think Ken did the right thing when he removed the loyalty mod from Wish and her sisters?

    I guess my philosophical difficulties stem from the belief that the universe is a more interesting, better place when sentient beings are free to explore their own limits, and pursue the purposes they choose for themselves, as opposed to the purposes imposed on them by others. Of course, if a sentient makes the conscious decision to have a purpose imposed (is that even the right word in this context?) on them, that doesn’t bother me. Also, I wouldn’t want someone to, say, enslave me, so I wouldn’t want to see that done to another sentient, either.

    I’m fairly certain that, sooner or later, these issues will come to the fore via genetic engineering and/or AI research. Best we hash out our concerns now rather than then.

  4. 3lwap0
    February 16th, 2009 @ 3:16 pm

It’s morally neutral because, before you create a person of limited ability, they don’t exist to have a say, and even after you create them, their “more able” potential other selves still don’t exist to have agency.

Take it the other way: a naturally conceived baby is developing perfectly normally when suddenly a Sufficiently Advanced Alien rocks up and offers to do a total overhaul on the kid, giving it superior mental and physical potential at zero marginal cost. What is the morally correct thing to do? Create/continue creating a “limited” child (when compared with the potential offered by the SAA), or take the nano-chine and run?

  5. Mike Altarriba
    February 20th, 2009 @ 9:43 am

    When you choose to create a sentient being, and choose to give it a limited capacity so as to make it more suited to serve your purposes, that is an immoral act. You are, in point of fact, creating a slave, save that in this case the chains are made of reduced capacity rather than iron.

  6. 3lwap0
    February 28th, 2009 @ 5:31 pm

@ #5: I fear y’all have got two separate things conflated there: creating a sentient that isn’t the best you can make it, and using that sentient for some purpose.
The act of creation remains neutral for the reasons outlined previously; what you do with it after that is a wholly separate issue. If you believe in free will (and you must, if you consider “morality” to have any meaning whatsoever), then you also believe that even after creating such a being, you may still choose to do something other than your original plan, making the “moral value” of your intent moot, what with actions speaking louder than thoughts and all that.
