Only AI can judge me


Brock Turner was found guilty of three felony counts of sexual assault. Despite this, he was sentenced to only six months in county jail (of which he served three) and three years of probation. An incompetent judge did not hand down an appropriate sentence, and justice was not served. Perhaps if AI had been involved in the sentencing to analyze the data and evidence against him, instead of a judge who showed a shameless display of favoritism, Turner would have been locked away for a long time.

AI and data science are now being relied upon to judge humans in ways that can have a significant impact on our lives. Using technology to make such monumental decisions could benefit us, but it raises ethical dilemmas, and in many cases the algorithms do not disclose how they reach their conclusions. This raises the question: should we allow AI to judge us?

Below is an examination of how we are implementing AI to assist with, and decide, some major factors of human life, and whether we may be ignoring some warning signs.

AI in the courtroom

AI is now being used to aid the sentencing of individuals in court cases. Algorithms assess defendants when determining bail, sentence length, and parole. In an ideal scenario, this might not seem like a problem. However, these algorithms have already been shown to exhibit bias in their risk assessments, and how they arrive at their conclusions is shrouded in secrecy.

In the case of State v. Loomis, Eric Loomis was sentenced to six years in prison. Loomis was suspected of driving a stolen car that had previously been used in a shooting, and was charged with five counts in connection with the drive-by. He denied being involved in the shooting, but pleaded guilty to two lesser charges, including attempting to flee a traffic officer. The COMPAS algorithm deemed him a high risk for recidivism (committing another crime), and that was pretty much enough to lock him away. It also didn't help that Loomis was a registered sex offender.

Should he have been sentenced? Probably. Should he have been sentenced under the heavy influence of artificial intelligence? That is up for debate. Loomis did appeal, arguing that no one knows the inner workings of how the AI reached its conclusion. However, the appeal was rejected because the algorithms behind COMPAS, a risk assessment tool developed by a privately held company, are proprietary, and revealing how the algorithm works would expose trade secrets.

Black box algorithms determining jail time raise serious problems, chief among them algorithmic bias. These algorithms can be thoroughly biased and, under current law and regulation, are not required to reveal their reasoning or mechanisms, even though they are capable of sealing the fate of a potentially innocent person.
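To make the "black box" concern concrete, here is a minimal, purely hypothetical sketch of how a risk-assessment tool might turn a defendant's answers into a 1-10 risk score. None of these features, weights, or cutoffs come from COMPAS (its internals are trade secrets); the point is that the output looks authoritative while the reasoning stays hidden.

```python
# Hypothetical sketch of a risk-score pipeline. The features, weights, and
# cutoffs are invented for illustration; COMPAS's real internals are proprietary.

def risk_score(answers: dict) -> int:
    """Map questionnaire answers to a 1-10 'recidivism risk' score."""
    # Invented weights. A real tool would tune these on historical data,
    # which is exactly where hidden bias can creep in (e.g. arrest history
    # acting as a proxy for over-policed neighborhoods).
    weights = {
        "prior_arrests": 0.8,
        "age_under_25": 1.5,
        "unstable_housing": 1.2,
        "unemployed": 1.0,
    }
    raw = sum(weights[k] * answers.get(k, 0) for k in weights)
    # Squash the raw total into the 1-10 band the court ultimately sees.
    return max(1, min(10, round(raw)))

defendant = {"prior_arrests": 4, "age_under_25": 1, "unemployed": 1}
score = risk_score(defendant)
print(score, "HIGH RISK" if score >= 7 else "low risk")
```

The court only ever sees the final number, not the weights or the data they were learned from, which is the crux of the transparency problem.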

[Image: Vernon Prater and Brisha Borden. Credit: propublica.org]

In a highly publicized 2016 study, ProPublica investigated the risk scores assigned to more than 7,000 people arrested in Broward County, Florida, in 2013 and 2014 and checked to see how many were charged with new crimes over the next two years. The score proved remarkably unreliable in forecasting violent crime: Only 20 percent of the people predicted to commit violent crimes actually went on to do so.
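For clarity, that 20 percent figure describes how often a "likely to commit a violent crime" label was actually followed by a violent charge, not the tool's overall accuracy. The toy calculation below uses invented counts only to show how such a figure is derived from predictions and two-year outcomes.

```python
# Toy illustration of the forecast-reliability figure ProPublica reported.
# The counts are invented; only the arithmetic mirrors the idea: of everyone
# flagged as likely to commit a violent crime, what fraction actually did?

flagged_violent = 1000       # hypothetical count labeled "high risk of violence"
reoffended_violently = 200   # hypothetical count charged with a violent crime within two years

hit_rate = reoffended_violently / flagged_violent
print(f"Share of 'predicted violent' who went on to violent crime: {hit_rate:.0%}")  # -> 20%
```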

In one harsh case, Brisha Borden, a black woman with four juvenile misdemeanors on her record, was rated a high risk, with a score of eight. Vernon Prater, a white man with two armed robberies and one attempted armed robbery in his past, was deemed a low risk by the same system, with a score of three.

Borden was never charged with another crime, while Prater went on to commit grand theft.

The judicial process depends on sound, unbiased decision making. Judges have already faced backlash for bias, and it makes no sense to "justify" those prejudices with AI. It seems the only way we should bring AI into the judicial system is if we can guarantee it has no vulnerabilities: no bias, no hacking, no other miscalculations.

Robotic clergy

In Germany and Japan, robots are being given certain divine powers: people are allowing them to deliver blessings, and even perform funeral rites. But should robots have the same powers as clergy? Furthermore, are we going to take religious guidance from a robot and live our lives according to what it says?

Imagine confessing to a robot, or listening to a robot give a sermon; it would seem, at the risk of making a pun, robotic. It feels unnatural for any religion to hand the power of God to a non-biological entity. Then again, if the science were perfected and unbiased, robots would not contradict themselves in theological matters, live by double standards, or interpret the word of God to suit their lifestyle. In this way, robot pastors might resolve some of the issues skeptics have long raised against religion.

This is not to say there are no challenges to the original religious texts that would be fed to these robots as data, but it may highlight the dangers of some liberties previously taken by men of the cloth. As it stands, only basic religious services are being performed by robots, but something still feels artificial about being blessed by a machine instead of an ordained minister.


Social credit systems

In 2020, China will implement a social credit system in which it will rank its citizens. Through an undisclosed method of extensive data analysis, your social rank can shift up or down depending on your actions. Regulating social status through data science is supposed to uphold the integrity of Chinese citizens, keeping them honest and humane. Doing good things raises your social status, and you are rewarded. But what about the other side of the coin?

You can get reprimanded for something as small as smoking in a non-smoking zone. Obviously, larger infractions and crimes will drop you further down the social ladder. In a perfect world, and on paper, this may not seem like a terrible idea — keeping people honest, lowering crime, etc. But carried out in a large society, this can pose some significant problems.

Travel restrictions, throttled internet speeds, and bans on you and your children attending good schools or getting good jobs are all proposed punishments if your social credit drops low enough. This looks like a restriction of human rights, and if any or all of the above happens to you, it will be extremely hard to climb back to a decent social status, creating a vicious cycle.
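As a rough sketch of the mechanism being described (not the actual Chinese system, whose scoring rules have not been published), the logic amounts to a running score that specific behaviors nudge up or down, with punishments triggered once it falls below certain cutoffs. Every rule and number below is hypothetical.

```python
# Hypothetical sketch of a points-based social credit ledger. All events,
# point values, and penalty thresholds are invented for illustration; the
# real system's rules are not public.

SCORE_CHANGES = {
    "volunteer_work": +10,
    "paid_taxes_on_time": +5,
    "smoking_in_nonsmoking_zone": -5,   # the "small infraction" example above
    "fraud_conviction": -100,
}

PENALTY_THRESHOLDS = [
    (600, "restricted from air and rail travel"),
    (700, "internet speed throttled"),
    (800, "excluded from preferred schools and jobs"),
]

def apply_events(score: int, events: list[str]) -> int:
    """Return the new score after applying each recorded behavior."""
    for event in events:
        score += SCORE_CHANGES.get(event, 0)
    return score

def active_penalties(score: int) -> list[str]:
    """List every punishment whose threshold the score has fallen below."""
    return [penalty for cutoff, penalty in PENALTY_THRESHOLDS if score < cutoff]

citizen_score = apply_events(750, ["smoking_in_nonsmoking_zone", "fraud_conviction"])
print(citizen_score, active_penalties(citizen_score))
# 645 -> internet throttled, excluded from preferred schools and jobs
```

The vicious-cycle worry is visible even in this toy: once the score crosses a threshold, the penalties themselves (lost jobs, lost schooling) make it harder to earn the points needed to recover.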

Additionally, this system and its big data analysis are not impervious to tampering or miscalculation, and could ruin an innocent person's life. By the same logic, a corrupt politician could boost his own social status. This supposed utopia may turn into a "big brother is watching" dystopia with fabricated class systems. In fact, Rutgers University lists income disparity and the digital divide as two of five socioeconomic issues that can contribute to a country's income imbalance. If China isn't careful about restricting jobs, education, and internet access, it could end up exacerbating its economic challenges.

In all of the cases above, it seems we are putting the cart before the horse. Entrusting data science and AI to decide significant aspects of human life is extremely dangerous, seeing as these systems have already been shown to be biased. If the technologies were unbiased, incorruptible, and otherwise proven sound, maybe we could start thinking about implementing them. But as things stand, we may need to take a step back before we allow AI to judge us.
