• Giovanna Collu

Should legal decisions be automated using algorithms? (OPINION)

Updated: Jun 29

Image credit: Ezequiel Octaviano, Pixabay

From simple things, such as social platform recommendation algorithms, to an increasingly vital role in juridical practice, algorithms have become a substantial part of our lives. The practice of law has clearly progressed over time, from the oral advocates of antiquity to artificial intelligence being incorporated into decision-making. One reason more and more industries adopt AI is the promise of eliminating bias, whether racial, religious, or class-based; later on, I will examine how well that promise holds up. The question is complex, and a simple “yes” or “no” would not be sufficient to settle the matter. We should not underestimate the improvements that automation brings, but neither should we miscalculate its potency and the biased errors it could cause. The purpose of this blog is to discuss whether legal decisions should be automated using algorithms, or whether humanity should abide only by the traditional approach to law.

The human race never fails to amaze us with its grand achievements, and the introduction of AI in 1956 by John McCarthy, the father of the field, is just one of them. While algorithms are a remarkable invention with a powerful ability to analyze data, the opacity they bring is a serious concern, especially in the legal field, where lives are on the line. The biggest argument against using algorithms in juridical practice is their so-called “black box” nature: we humans cannot access the reasoning behind the outcome an algorithm reaches. This is a severe problem for legal decisions, because having no way of knowing the machine’s reasoning could easily lead to harsh and unfair consequences.

Besides being unable to provide reasoning, algorithms are essentially built to perform a single given task, such as predicting and calculating the risk of a person re-offending. Anyone involved in juridical matters knows the complexity of the job: every case differs from the next and is highly individual, depending on many factors. That, fundamentally, is why algorithms must not take part in legal decisions; they are not advanced enough to take a case’s individual circumstances into account in their calculations.

Additionally, algorithms often fail at their sole motive: to be unbiased. An investigation by ProPublica found evidence of systematic bias against African Americans in a widely used risk-assessment tool. Instead of fulfilling its duty to be fair, the algorithm falsely flagged Black defendants as likely to re-offend at nearly twice the rate of white defendants. This is ultimately unacceptable and completely immoral. As Mahatma Gandhi himself said, “An unjust law is itself a species of violence.”
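To make the idea concrete, here is a minimal sketch of the kind of disparity ProPublica measured: the false positive rate, i.e. the share of people who did *not* re-offend but were still flagged as high risk. The data below is entirely invented for illustration; it is not ProPublica's dataset or any real scoring model. It simply shows how a single score threshold, applied identically to everyone, can still wrong one group far more often than another.

```python
# Toy illustration (invented data): one threshold, two very different
# false positive rates. Each record is (group, risk_score, reoffended).
records = [
    ("A", 8, False), ("A", 7, False), ("A", 9, True),  ("A", 6, False),
    ("B", 3, False), ("B", 8, True),  ("B", 2, False), ("B", 4, False),
]

THRESHOLD = 5  # scores >= 5 are flagged "high risk"

def false_positive_rate(group):
    """Share of non-reoffenders in `group` wrongly flagged as high risk."""
    non_reoffenders = [r for r in records if r[0] == group and not r[2]]
    flagged = [r for r in non_reoffenders if r[1] >= THRESHOLD]
    return len(flagged) / len(non_reoffenders)

print(false_positive_rate("A"))  # all 3 of group A's non-reoffenders flagged -> 1.0
print(false_positive_rate("B"))  # none of group B's non-reoffenders flagged -> 0.0
```

The threshold itself contains no mention of race or group, which is exactly why such disparities can hide inside a system that looks neutral on its face.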

To summarize, as the human race grows, new inventions destined to help us will continue to appear, and that is in no way a bad thing. It is truly extraordinary to watch human development and the spread of technology into almost every industry. What we, as individuals entitled to fairness, must do is put regulations in place until these miscalculations are eliminated. Law must be fair; maintaining fairness and valuing justice is essential if we want to keep progressing in today’s society, striving for a better, fairer, and unprejudiced world!

-Giovanna Collu

References:

[1] History of the legal profession. Wikipedia, the free encyclopedia. https://en.wikipedia.org/wiki/History_of_the_legal_profession#Ancient_Greece,_Rome_and_Byzantine_Empire

[2] Rajaraman, V. “John McCarthy - Father of Artificial Intelligence.” Resonance - Journal of Science Education (2014): 201.

[3] Deeks, A. “The Judicial Demand for Explainable Artificial Intelligence.” Columbia Law Review 119 (2019): 1829.

[4] Agrawal, A., Gans, J. S., & Goldfarb, A. “Artificial Intelligence: The Ambiguous Labor Market Impact of Automating Prediction.” Journal of Economic Perspectives 33(2): 31–50. https://doi.org/10.1257/jep.33.2.31

[5] Angwin, J., Larson, J., Mattu, S., & Kirchner, L. “Machine Bias.” ProPublica, May 23, 2016. https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing
