Modern phrenology
Researchers still foolishly think AI can predict criminality by looking at photos
This ethically compromised project would have made the notorious Cesare Lombroso proud.
Researchers at Harrisburg University recently published a controversial press release — which is no longer available online but can still be viewed via archived pages — about how they (erroneously) believe artificial intelligence can predict the likelihood of someone being a criminal from a picture of their face alone. Which sounds disturbingly like 21st-century phrenology.
One of the study's authors is an NYPD veteran named Jonathan W. Korn, who worked alongside professors Nathaniel Ashby and Roozbeh Sadeghian. At one point in the now-retracted press release for the bizarre study, the researchers claimed that their project achieved "80 percent accuracy" and "no racial bias." Since then, Korn, Ashby, and Sadeghian's faulty and ethically dubious research — titled "Deep Neural Network Model to Predict Criminality Using Image Processing" — has been slammed all over the internet.
The problem with predictive policing — Although the press release has been scrubbed from the university's main website, the underlying paper is still expected to be published in the "Springer Nature Research Book Series: Transactions on Computational Science and Computational Intelligence." It will be interesting to see how other technology scholars react to it. Or anyone who's seen Minority Report.
The researchers sounded extremely confident about their undertaking, which has been repeatedly criticized as fraught with legal risks and ethical quandaries, and as carrying unmistakable, dangerous implications for civil liberties. Curiously enough, though, none of these concerns come up in the press release.
"We already know machine learning techniques can outperform humans on a variety of tasks related to facial recognition and emotion detection," Sadeghian wrote. "This research indicates just how powerful these tools are by showing they can extract minute features in an image that are highly predictive of criminality."
Cesare Lombroso tried this ages ago — The Italian criminologist, a perennially debunked thinker, made the same bet more than a century ago and was ultimately proven wrong. According to Lombroso, one could predict the likelihood of criminality in an individual simply by assessing their physical appearance and the presence of any physical defects.
It sounds like a scam (much like psychic detectives or COVID-19-beating necklaces) because it is one. Harrisburg University's claim that artificial intelligence can skim through people's photos and predict whether "someone is likely going to be a criminal" falls into the same compromised category. It also carries the profound risk of inaccurately classifying innocent people — often minorities — as miscreants. We've seen this happen over and over and over again.
It is stunning that well-established researchers continue to fail to see how unreliable the premise of their study is. They have yet to explain their methodology, but it is plausible that they would pull photos from massive prisoner databases and train a classifier on them with off-the-shelf tools like TensorFlow. That is far too simple an approach for something as complex as crime.
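To make that simplicity concrete, here is a minimal sketch of what such a pipeline might look like in TensorFlow. This is pure speculation, not the researchers' actual code: the directory layout, image size, and every hyperparameter are invented for illustration. The point is that a few dozen lines can produce a model that emits confident-looking "criminality" scores while learning nothing except the patterns baked into how the photos were labeled.

```python
# Hypothetical sketch of a naive face-photo "criminality" classifier.
# Illustrative only — NOT the Harrisburg researchers' code. Folder names
# and hyperparameters are invented. A model like this learns whatever
# correlations (including biases) exist in its training labels.
import tensorflow as tf

# Assumes face photos sorted into two labeled subdirectories,
# e.g. photos/criminal/ and photos/non_criminal/ (hypothetical layout).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "photos/", image_size=(128, 128), batch_size=32)

# A small off-the-shelf convolutional network: a handful of layers,
# with no domain insight about crime whatsoever.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 255),
    tf.keras.layers.Conv2D(32, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation="relu"),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # a "criminality" score
])

model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Whatever "accuracy" a script like this reports only measures how well the model reproduces its own training labels. If those labels reflect skewed arrest patterns, the model will confidently reproduce that skew.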
At the end of the day, artificial intelligence doesn't predict risk so much as it analyzes it. And it uses data that is curated by bias-riddled humans. The notion that it could accurately prophesy a person's potential to commit crime is a terribly naive idea with potentially horrific repercussions.