
Calculated justice

Throughout human history, people have had to rely on their own brainpower to solve problems. Now, technology is bringing us powerful digital minds that could help untangle our toughest dilemmas.

One area that shows promise is the legal system. Justice is complicated, and no two crimes are exactly the same. This means that officials must apply situationally relevant judgment to catch and prosecute criminals fairly. Machine learning is already helping with this, but the outcomes are far from perfect.

A machine’s opinion

What happens when machine learning gets involved in the criminal sentencing process? According to Eric L. Loomis, a violation of civil rights.

<strong>Right to know.</strong> A defendant whose six-year prison sentence was influenced by a machine learning program believes he was denied his right to due process because he wasn’t allowed to inspect the proprietary algorithm that made the recommendation.

Loomis was accused of eluding police in Wisconsin and was sentenced to six years in prison for the crime. However, the length of the sentence was influenced by a machine learning program called COMPAS, which assesses a person’s likelihood of falling back into criminal activity.

Loomis states that he wasn’t allowed to inspect the COMPAS algorithm, and believes he was denied his right to due process as a result. The Wisconsin Supreme Court ruled against him, citing his criminal history and the specifics of the crime to conclude that Loomis’ sentence was fair. What’s more, COMPAS only played a part in his sentencing; a human judge gave the final verdict.

Though it was apparently not a factor in Loomis’ sentencing, multiple validation studies of COMPAS have found that it incorrectly labels black defendants as likely to reoffend at a higher rate than it does white defendants. Because its algorithm remains secret, it’s impossible to tell just how much bias is at work.
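Although COMPAS itself is proprietary, the kind of check those validation studies perform can be sketched. The toy audit below uses made-up records, not COMPAS output; it compares false positive rates between two groups, meaning the share of people labeled high-risk who did not in fact reoffend:

```python
def false_positive_rate(predictions, outcomes):
    """Share of people who did NOT reoffend but were labeled high-risk."""
    fp = sum(p and not o for p, o in zip(predictions, outcomes))
    negatives = sum(not o for o in outcomes)
    return fp / negatives

# Toy records: (labeled high-risk, actually reoffended)
group_a = [(True, False), (True, True), (False, False), (True, False)]
group_b = [(False, False), (True, True), (False, False), (False, False)]

for name, group in (("group A", group_a), ("group B", group_b)):
    labels, outcomes = zip(*group)
    print(name, false_positive_rate(labels, outcomes))
```

In this invented data, group A’s false positive rate is far higher than group B’s even though both contain one actual reoffender; that is the kind of disparity the COMPAS studies reported.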

Real life Minority Report

It seems the warning of Philip K. Dick’s “Minority Report” has fallen on deaf ears. Police in Kansas, Washington, South Carolina, Georgia, Michigan, Utah, and California have all used predictive policing software that many academics argue is too simplistic.

<strong>Predictive policing software</strong> may be prone to bias due to the ‘broken windows’ theory of policing, which calls for arrests even for petty crime in neighborhoods with visible signs of disorder such as graffiti and property damage.

Called PredPol, this software uses statistical modeling methods adapted from earthquake prediction to anticipate crime. The original model assumes that earthquakes tend to recur in similar places under similar circumstances, and PredPol applies the same assumption to criminal activity.
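PredPol’s published approach is based on self-exciting point process models of the kind used for earthquake aftershocks. As a rough illustration (with invented parameters, not PredPol’s actual ones), the sketch below shows the core idea: each recent event temporarily raises the predicted rate of new events, and the boost decays over time.

```python
import math

def intensity(t, past_events, mu=0.5, alpha=0.3, beta=1.0):
    """Self-exciting rate: a baseline mu plus a boost alpha*exp(-beta*dt)
    that decays as each past event recedes in time."""
    return mu + sum(alpha * math.exp(-beta * (t - ti))
                    for ti in past_events if ti < t)

events = [1.0, 1.2, 1.3]            # a cluster of recent reported incidents
print(intensity(1.4, events))       # elevated: the cluster boosts the rate
print(intensity(10.0, events))      # long after the cluster: near baseline
```

Places with recent reported crime get a higher predicted rate, just as a fault that recently slipped gets a higher predicted aftershock rate.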

One of the major problems with this is that crimes aren’t reported like earthquakes. While just about any tremor in the world is captured by seismographs, police are only aware of the crimes that are brought to them. 

This means crime data is affected by factors like the local population’s willingness to call the police in the first place, racial biases, and the controversial “broken windows” view of crime, which pushes officers to over-police petty offenses in certain areas. PredPol can even influence its own data by continually sending officers to specific neighborhoods. To avoid a feedback loop, police will need to better understand the limitations of this technology.
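That feedback loop can be demonstrated with a toy simulation (invented numbers, not PredPol’s model): two districts have identical true crime levels, but incidents are only recorded where officers patrol, and patrols follow the recorded data, so an initial bias never corrects itself.

```python
# Both districts have IDENTICAL true crime levels, but incidents only
# enter the data where officers are patrolling, and next period's
# patrols are allocated in proportion to the recorded history.
true_rate = 100                  # actual incidents per district per period
patrol_share = [0.6, 0.4]        # historical bias in patrol allocation
recorded = [0.0, 0.0]            # cumulative recorded incidents

for period in range(10):
    for d in (0, 1):
        recorded[d] += true_rate * patrol_share[d]   # you only see where you look
    total = sum(recorded)
    patrol_share = [recorded[d] / total for d in (0, 1)]  # chase the "hot spot"

print(patrol_share)  # still [0.6, 0.4]: the biased data confirms itself
```

Even with equal true crime rates, the district that started with more patrols keeps producing more recorded incidents, which justifies sending more patrols there. The data never contains the evidence needed to undo the imbalance.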

Automated thought police

Technology in and of itself can’t be evil. It is the user’s intentions that turn a tool into an abomination. That means scientists must consider how their creations could be used.

Big Brother knows when you are happy. Affectiva software can recognize a person’s emotional state based on their expression. Company founder Rana el Kaliouby refuses to sell to security firms over concerns that it could be used to monitor crowds for political nonconformists. Courtesy Affectiva.

For an example of how to handle a potentially dangerous technology, we look to Rana el Kaliouby, CEO of the face-scanning company Affectiva. This company has created a tool that can recognize a person’s emotional state based on their expression.

But el Kaliouby is fully aware of the danger this product holds. She believes that an authoritarian figure could use this technology to monitor the faces of an audience during a political speech to find nonconformists. The company has also refused to sell this software to security firms and lie detection experts. For now, Affectiva is using the technology to keep tired drivers awake by detecting their sleepy faces. 

Protecting children 

Despite the scarier aspects of the technology, machine learning is also making positive changes in the world. For instance, it's helping rid the web of child sexual abuse imagery.

Facebook is now using machine learning to find and remove images that contain both a child and nudity. The machine learning program was built by analyzing a collection of nude adult photos along with pictures of clothed children.

The social media giant claimed in October 2018 that it had taken down 8.7 million child abuse images in the previous three months. Although Facebook acknowledges that at least some of these were innocent photos, the company says it would rather overregulate than miss pictures of abuse.
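The two-signal approach described above can be sketched in the abstract. Everything here is a placeholder, not Facebook’s actual system; the point is the policy of flagging an image only when both learned signals are present, with a deliberately low bar that accepts false positives:

```python
def flag_image(p_child, p_nudity, threshold=0.3):
    """Flag only when BOTH signals clear the threshold. The threshold is
    deliberately low: the policy accepts false positives (innocent photos)
    rather than risk missing abusive content."""
    return p_child >= threshold and p_nudity >= threshold

print(flag_image(0.9, 0.8))   # both signals strong: flagged
print(flag_image(0.9, 0.1))   # child present, no nudity signal: not flagged
print(flag_image(0.35, 0.4))  # borderline scores still trip the low bar
```

Requiring both signals is what lets separate training sets (adult nudity, clothed children) combine into a detector for a category with little direct training data.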

It’s easy to look at a technology like machine learning and fear it. However, it’s important for us to discern the difference between fear and respect. We can understand the awesome power this technology holds without avoiding it altogether, but doing so will require us to think long and hard about the human rights we value and deserve.  


Copyright © 2023 Science Node ™

