This post originally appeared on the Louisiana Technology Park blog.
The use of artificial intelligence in the criminal justice system has moved past the realm of fanciful science fiction and into everyday use, raising a host of thorny ethical issues that society is only beginning to work through.
Franz Borghardt, a Baton Rouge criminal defense lawyer and LSU Law School instructor, led a discussion on the implications of using AI for decision-making in the criminal justice system during a recent Tech Park Academy workshop at the Louisiana Technology Park.
The discussion comes in the wake of a media report that New Orleans police secretly used a predictive policing tool from Silicon Valley data-mining company Palantir to help identify gang members involved in drug trafficking and violent crimes. The software reportedly predicted the likelihood that people would commit a violent crime or become the victim of one by tracking their ties to gang members, analyzing their criminal histories and examining their social media habits.
Beyond crime forecasting, proponents say AI offers the possibility of overcoming problems of balance, fairness and bias by removing the emotion and political pressure humans often face when making important decisions. Borghardt, though, says it’s far from clear whether this is a good thing. “I don’t know how I feel about certain decisions being taken away from human beings and given to software programs,” he says.