So what is a robot judge?
If I asked you whether a robot has ever convicted you, the answer would probably be no. But what if I asked whether a speed camera has ever flashed you? The answer would probably be yes. In that case, a robot did convict you: in the process of detecting your speeding and fining you, no human was involved.
The robot judge is powered by machine learning, a method of artificial intelligence (AI). AI refers to machines equipped with human-like intelligence so they can perform tasks as we humans do. The essence of machine learning is algorithms that give computers the ability to learn from data and then make predictions and decisions. Predictive algorithms and data are, therefore, central to the robot judge.
A robot judge may seem like part of a tech utopia, but just last year the Estonian Minister of Justice introduced a robot judge for small civil claims of up to 7,000 euros. The concept works as follows: two opposing parties upload the documents supporting their claims, the submissions are analyzed by a machine learning system, and the system then issues a decision. Now, what if we thought about justice in a broader sense and a robot judge took on criminal cases?
Let us pause.
Because the idea that something could make crucial decisions about a human’s life and that this decider itself is not alive can be an extremely uncomfortable thought. So, what are the benefits of a robot judge? And what is crucial to take into account when implementing such technology?
The promise of the robot judge
As history has shown, humans are not always the perfect arbiters of justice and are prone to making mistakes. A study in Israel demonstrated that when and what a human judge eats influences their decisions: before lunch, a suspect will probably go to jail; after lunch, a suspect will likely be released.
Personal values, unconscious presumptions, and decision fatigue can also affect the judgment of a human judge. That would not be present with a robot judge.
The robot judge promises to correct the biases of human judges through the algorithm. As a result, defendants would get the same decision when presenting the same evidence. No exception. This will promote more consistency and fairness in the justice system. Furthermore, by deploying automation, the justice system can be made more accessible to people who cannot afford a trial.
A robot judge could also spot things we might not have spotted ourselves. AI is also more time- and cost-efficient and can work faster without stopping for a break or some sleep. After all, justice delayed is justice denied.
AI: I swear to tell the truth, the whole truth and nothing but the truth
Using a predictive algorithm to determine a prison sentence is not quite the same as Netflix using a predictive algorithm to suggest which movie you should watch next.
Predictive algorithms may sound like a logical fit for the justice system, but they have a deeper problem: they rely on the type and quality of the data they are supplied with.
As noted above, a robot judge is a machine learning system that uses data and a predictive model to make predictions and decisions in the following steps:
- Historical data is collected.
- These historical data sets are fed into the machine learning algorithm.
- The resulting model makes predictions and decisions based on the patterns in that historical data.
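The three steps above can be sketched in a few lines of code. This is a deliberately toy illustration, not any real court system: the case features, outcomes, and the nearest-neighbour "model" are all invented to show how historical decisions become the sole basis for new ones.

```python
# Step 1: collect historical data.
# Each hypothetical case: (prior_offences, claim_amount_eur) -> outcome.
historical_cases = [
    ((0, 500), "claim granted"),
    ((2, 6500), "claim denied"),
    ((0, 1200), "claim granted"),
    ((3, 6900), "claim denied"),
]

# Step 2: "train" a 1-nearest-neighbour model.
# For this simple algorithm, training just means storing the data.
def train(cases):
    return list(cases)

# Step 3: predict a new case from its most similar historical case.
def predict(model, features):
    def distance(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    _, outcome = min(model, key=lambda case: distance(case[0], features))
    return outcome

model = train(historical_cases)
print(predict(model, (1, 6000)))  # a case resembling the denied claims
```

Note that the prediction is determined entirely by the stored historical cases: whatever patterns (or prejudices) those cases contain, the model will faithfully repeat.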
In summary, the robot judge is a case prediction machine. The key component is data; everything is based on data. An algorithm, then, is only as good as the data that goes into it. The fact is that we humans are biased; we have our known and unknown prejudices. A predictive algorithm can be biased for the simple reason that humans created it. As a result, a biased or flawed algorithm could amplify injustice and inequality. And as we know, legal decisions can have irreversible consequences, especially in areas like immigration law or criminal law. To what extent, then, should we use AI in the justice system?
So, if you are like Y = X(i) − M(i), you are going to jail
An example of the above is ProPublica's investigation of a 'risk assessment tool' used in the US criminal legal system. The predictive algorithm COMPAS is used in courtrooms to predict the risk that defendants will commit future crimes. Such risk assessment tools can help a human judge make crucial decisions about who can be released and what the bail amount should be. ProPublica found that the algorithm falsely flagged black defendants as future criminals at almost twice the rate of white defendants.
This example shows how an algorithm can be racially biased. The person writing the code could unintentionally replicate existing bias, which can result in, for example, an (unintentionally) unfair trial.
Is this an algorithmic problem? No; the cause is the systemic biases that already exist in the justice system. And even if it were illegal to include race in the algorithm, many other characteristics correlate with race.
The algorithm uses existing data to replicate what judges did before, and this can reproduce biases in future decisions.
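A toy illustration of this point: in the invented data below, race is never given to the model at all, yet a proxy feature (here a hypothetical postcode that correlates with ethnicity) is enough to carry historical bias into future predictions. All names and numbers are made up for the example.

```python
# Hypothetical historical decisions: (postcode, prior_offences, detained).
# In this invented data, judges detained defendants from postcode "1010"
# far more often, for reasons unrelated to their actual prior offences.
history = [
    ("1010", 1, True),  ("1010", 0, True),  ("1010", 1, True),
    ("2020", 1, False), ("2020", 0, False), ("2020", 1, False),
]

def detention_rate(cases, postcode):
    relevant = [detained for pc, _, detained in cases if pc == postcode]
    return sum(relevant) / len(relevant)

# The "model": predict detention if the historical detention rate for
# that postcode exceeds 50%. Past bias becomes future policy.
def predict(postcode):
    return detention_rate(history, postcode) > 0.5

# Two defendants with identical records but different postcodes
# receive opposite predictions.
print(predict("1010"), predict("2020"))
```

Even though the protected attribute was excluded, the model's decisions still track it, because the training data encodes the earlier biased decisions.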
Who controls the algorithm, and where does the data come from? These are the questions the public sector should ask itself before introducing a robot judge into the justice system.
A predictive algorithm that converts ethnic background into unequal opportunities for a fair trial is unacceptable. Therefore, if AI and predictive algorithms are being introduced into the justice system, they should comply with the ethical codes for responsible AI: Fairness, Accuracy, Confidentiality, and Transparency (FACT).
This is especially true in the public sector, where transparency about how decisions are made is vital. It is also essential for a democratic constitutional state and for the trust we need to place in AI and the robot judge. We do not, then, need to feel threatened by the robot judge; we just need to use it in the correct manner.
The replacement of judges?
Every single day we take an Uber, an electric scooter, or a flight, and every time we do, we put our lives and our trust in the hands of technology: something that is not alive. In non-legal domains, humans appear to be relatively comfortable with AI and robots taking over some human tasks. A shift towards a robot judge, however, raises many issues.
The reason is that any technological intervention in a system like this, where a person's freedom is at stake, is something we as humans want to handle cautiously. And bearing in mind that each iteration of this technology will become more advanced, refined, and comprehensive, even more difficult questions will be raised.
A robot judge has the potential to positively influence and change the current justice system. However, these changes will limit the extent to which humans are involved in the justice system. The dangers of AI and predictive algorithms have shown that human intervention is necessary.
Thus, when implementing this new technology in the legal system, it is crucial to consider ethical questions so we can know who develops and controls the algorithms and to what extent discretion and supervision are maintained within the justice system.
Rather than the total replacement of human judges, the aim of a robot judge in the justice system should be to complement the current legal work. This offers a human judge the opportunity to be more creative and to focus on more complex and important cases. The role of robot judges will be to help human judges detect their own biases and reduce them. This way, the move can be made towards the concept of a human judge and a robot judge working together to make better decisions in the justice system.
No, this is not an episode of the Netflix series Black Mirror. This is the future, and the future is happening right now. So, will robot judges replace human judges (and reach the point of singularity)?…
No. Well…not yet at least.
Gabriella Obispa is a guest writer for Profound. She is a master’s student majoring in International Technology and Law at the Vrije Universiteit Amsterdam. As a feminist and ‘woman in tech’, she is committed to the empowerment of women and diversity within the tech scene.