AI and the Interstellar Space Paradox

I’ve been thinking long and hard about the issue of AI and morality. If you read the news, you know I am not alone, and for good reason. There are so many ways one could imagine an AI system determining certain outcomes that might not exactly be to the benefit of mankind.

Consider my own little AI brain teaser…

Interstellar Space Paradox

Imagine an AI-controlled space ark, travelling trillions of miles to a remote star system. There are thousands of souls aboard, sleeping comfortably in cryo-sleep. Along the way, a small meteorite strikes the ship, causing a loss of precious oxygen before the breach can be sealed. The oxygen stores are depleted by 30%, and calculations show that there is not enough oxygen for all of the souls to make it to the destination. Someone in cryo-sleep has to die, but who?
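
The chilling part is how simple the underlying math is. Here is a minimal sketch of the oxygen budget the ark’s AI might run; every number (crew size, travel time, consumption rate) is hypothetical and chosen only to illustrate the paradox:

```python
# Hypothetical oxygen budget for the ark -- all figures are illustrative only.
SOULS_ABOARD = 4000            # passengers in cryo-sleep (assumed)
O2_PER_PERSON_PER_DAY = 0.84   # kg of oxygen per person per day (rough estimate)
DAYS_REMAINING = 9000          # travel days left to the destination (assumed)

original_o2 = SOULS_ABOARD * O2_PER_PERSON_PER_DAY * DAYS_REMAINING
remaining_o2 = original_o2 * 0.70   # 30% of the stores were lost to the strike

# How many people can the remaining oxygen actually carry to the destination?
survivable = int(remaining_o2 // (O2_PER_PERSON_PER_DAY * DAYS_REMAINING))

print(f"Remaining oxygen supports {survivable} of {SOULS_ABOARD} souls.")
print(f"The equation says {SOULS_ABOARD - survivable} must not wake up.")
```

The arithmetic is trivial; the decision it implies is anything but.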

The AI now has to make a moral decision that even a human would struggle with. Does the AI do the math and kill off occupants? If so, who, and how do you choose? Should the AI wake humans to make the decision?

None of these decisions should be made by a single AI based on a set of “learned” responses; at the very least, there should be another AI or a human that acts as a check.

SMITE, the God AI

So the obvious first solution to the Interstellar Space Paradox is, “We will build morals into the AI!”

But this is where the problems begin. If we build decision making into an AI, how does one build an AI that will not decide to either ignore an outcome or build its own set of morals as it matures? A true AI is supposed to learn from the decisions it makes. Like a child, it is supposed to make mistakes, learn from them, and then not make the same mistake again.

Let’s unravel this further.

  • How do I teach an AI to make life or death decisions?
  • How can an AI learn life or death decisions without “breaking eggs” (aka killing humans) along the way?
  • What if an AI decides that humans are the problem variable in an equation and decides to “remove” us?
  • How do we know AI morals will equate to human morals?

This is just a short list of the questions AI moralists are asking themselves…

In my opinion, building morals into an AI system, as hard as that will be, will not suffice. I have a hard time imagining any way in which an AI will not eventually learn to modify its own code, its own thoughts, its own MORALS.

To this point, it seems the only way to add an additional level of humanistic control would be to introduce another AI whose job is to act as the moral guard-band: a separate AI (which I call SMITE) that can oversee the decisions of the first AI and act as an overriding figure (aka GOD) should those decisions lose their moral basis. This, at the very least, prevents the original AI from overriding its own intent.
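
To make the guard-band concrete, here is a minimal sketch of what that relationship could look like. The names, classes, and veto policy are all hypothetical; the point is only the shape of the architecture: the primary AI can propose actions but cannot execute irreversible ones without SMITE’s independent sign-off.

```python
from dataclasses import dataclass

@dataclass
class Proposal:
    """An action the primary AI wants to take, with its stated justification."""
    action: str
    justification: str
    irreversible: bool

class PrimaryAI:
    """The mission AI: it reduces the crisis to an equation and proposes a fix."""
    def propose(self) -> Proposal:
        return Proposal(
            action="cut cryo-pod power to 2800 of 4000 pods",
            justification="remaining oxygen supports only 2800 souls",
            irreversible=True,
        )

class SMITE:
    """The moral guard-band: an independent overseer with veto power."""
    def review(self, proposal: Proposal) -> bool:
        # SMITE never executes anything itself; it only approves or vetoes.
        # Hypothetical policy: irreversible harm to humans is never auto-approved;
        # it is escalated to a human decision-maker instead.
        return not proposal.irreversible

def execute(proposal: Proposal, overseer: SMITE) -> None:
    """Nothing runs unless the overseer signs off -- the veto is structural."""
    if overseer.review(proposal):
        print(f"Executing: {proposal.action}")
    else:
        print(f"VETOED: {proposal.action} -- waking the human command crew.")

execute(PrimaryAI().propose(), SMITE())
```

The key design choice is that the primary AI has no code path to act on its own; SMITE’s veto sits in the execution path rather than being offered as advice.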

Granted, SMITE too could be coerced, but in the end this is equivalent to two humans. We too can be corrupted and coerced, but at the very least there would be another voice of potential reason to contend with. It is simply too dangerous to give one AI complete decision-making control in so many situations.

I am sure there have been times in all of our lives when we could have used a SMITE. Why not do the same for our AI progeny?