In an earlier blog (10/27/16), I referred to an article by Clive Thompson about artificial intelligence. He warned that we know so little about what goes on inside those wired machines that we should consider what this lack of transparency means. Are we being foolhardy when we put too much faith in computer programs, particularly those that touch personal aspects of our lives: whom to date, our creditworthiness, and the medications we take?
In spite of cautionary tales, like the one in the film Minority Report, we continue to develop technology that we hope will one day enable us to predict human behavior. In some places, it is already being attempted. When inmates enter our prisons, we feed their data into machines designed to make decisions about mental competence and the degree of supervision necessary, and to “predict what kind of help a person needs before they actually need it.” (sic) (“The Justice Machine,” by Issie Lapowsky, Wired, November 2016, pg. 70.)
Some justification exists for wanting to foresee human behavior. “More than half of all prisoners nation-wide face some degree of mental illness,” and of that population, 15% suffer serious mental illnesses. (Ibid., pg. 70.) But software that forecasts future behavior, as in the case of granting paroles, poses a different set of hurdles. Too much depends upon the nature of the information fed into the machines and the weight the algorithms give to that information. Based on a study from Broward County, Florida, the ACLU has called these predictive computer programs into question. The research disclosed a racial bias against African-Americans. Too much emphasis, for example, was placed on levels of education. (Ibid., pg. 70.)
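To see why the weighting matters so much, consider a purely hypothetical sketch (not the code of any real system, and the weights below are invented for illustration): a simple score that adds up a few inputs. If schooling is weighted heavily, two people with identical records can receive very different scores, and any group with less access to education will look “riskier” on average.

```python
# Hypothetical toy example: a linear "risk score" where hand-chosen weights
# decide how much each input matters. All numbers here are invented.
def risk_score(prior_offenses: int, years_of_education: int, age: int) -> float:
    score = 0.6 * prior_offenses                     # prior record pushes the score up
    score += 0.5 * max(0, 12 - years_of_education)   # less schooling pushes it up
    score -= 0.02 * age                              # older people score slightly lower
    return score

# Identical records, different schooling, different scores:
print(risk_score(prior_offenses=1, years_of_education=16, age=30))  # 0.0
print(risk_score(prior_offenses=1, years_of_education=10, age=30))  # 1.0
```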
Smart machines are likely to grow smarter over time. They will save us money and may improve the care of prison inmates. But I suspect compassionate decisions will require the flawed and sometimes emotional judgement of humans.