What I know best about operating my computer is how to type. Still, I dare write about technology because I suspect others will want a heads-up about coming changes in their lives. What would you say, for example, if I announced that homo sapiens is approaching a transhuman condition? I'm not talking about Borgs. I'm relaying John Markoff's prediction that "artificial intelligence is poised to have an impact on society that may be greater than the effect of personal computing and the Internet." ("The Transhuman Condition," John Markoff's Machines of Loving Grace, excerpted from Harper's, August 2015, pg. 12.)
Let's start with the basics. Do you know the difference between AI and IA? Like me, most of you understand AI because you've seen the movie. AI means artificial intelligence: algorithms that allow robots to live, act, and think independently of humans and perhaps, one day, to surpass us in all aspects of living. Reverse the letters to IA and you get intelligence augmentation, or "machine symbiosis." (Ibid pg. 12.) We're talking about technology that enhances the human experience rather than competes with it. Think driverless cars, or headsets that allow you to walk around in a two-dimensional world as if it were three-dimensional. Think enhanced computer games and mechanical body parts.
The difference between AI and IA is huge because each implies "differing stances toward the relationship of man to machine." (Ibid pg. 12.) Most of us probably haven't noticed that AI and IA represent a split in the technological community, and that the two branches are accelerating away from one another, posing ethical questions as they go. AI means machines will develop self-awareness and ever-increasing intelligence, leading to the not-so-silly question: Is it okay to kick a robot? (Blog 9/1/15) Or consider the unthinkable. What if we make robots so intelligent and so superior to ourselves that we become the equivalent of their pets? (Ibid pg. 14.) What if they begin to make decisions for our society? How would our cities, our buildings, and our modes of locomotion look? Robots don't breathe. Will they worry about climate change, for example?
IA serves human development. Great! But have we given any thought to how we want to evolve? Do we want better weapons of war? Enhanced space travel? Medical advances? We can't afford to move in every direction, yet our curiosity offers limitless possibilities. What should our priorities be?
At the moment, we have no answers to these questions and no ethical standards in either technological branch. As Markoff points out in his book, today's decisions are made "largely on the basis of profitability and efficiency." (Ibid pg. 15.) We need to promulgate ethical standards soon, or we may find ourselves in a world of our own making that is hostile to us.
(Originally posted 9/15/15)