I want to pose a question: How can we develop Artificial Intelligence when our own intelligence is questionable; when we can’t agree on standards of behavior or ethical goals; when we know little about how our brains work; when we can’t define intelligence and have no idea what consciousness is? Consider, too, that the only rules we have regarding ethical standards for robots are those that science fiction writer Isaac Asimov proposed some 70 years ago:
- A robot may not injure a human being or, through inaction, allow a human being to come to harm.
- A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
- A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
I have outlined the limits of these rules countless times. In a robot’s mind, keeping a human from harm might well mean imprisoning that person to eliminate all risk.
Of major concern to Christof Koch, president and chief scientific officer of the Allen Institute for Brain Science in Seattle, is that “people can’t seem to agree on the best rules to live by” (“When Computers Surpass Us,” by Christof Koch, Scientific American Mind, Sept/Oct 2015, p. 29). Given that discrepancy, can we, as confused and morally flawed creatures, really create a superintelligence that will do no harm?
Worse, given a set of flawed ethical imperatives, how will those imperatives evolve as robots begin to program themselves? Will they one day master space travel and, like the Borg, destroy or assimilate all other life forms?
Unfortunately, these conundrums aren’t confined to the realms of science fiction. Artificial Intelligence is upon us. If the robots we create are based on what we currently know about ourselves, we’re in trouble.