A Colombian judge used ChatGPT to help decide what ruling to hand down
A judge in Colombia has made headlines, and sparked controversy, after admitting in official documents that he asked the artificial intelligence program ChatGPT what decision he should make in one of the cases before him.
The fact that a judge admitted to using ChatGPT, and said it helped him, set off a heated debate, with some arguing that it was reckless to rely on such a program for something as serious as a court judgment.
Others countered that the justice system would move faster and more efficiently if it, too, adopted the latest technologies.
Judge Juan Manuel Padilla, in the city of Cartagena, had to rule on the medical insurance coverage of a child with autism. The child's parents could not afford the costs of treatment and transportation, and argued that the insurer should cover them.
In the reasoning of his decision, the judge noted that he had asked ChatGPT whether, under Colombian law, a minor with autism is exempt from paying for treatment. ChatGPT answered that yes, minors diagnosed with autism are exempt from paying for their treatment under the laws of the South American country.
The judge also said that Colombia's judicial system could become more efficient if it made use of such technology. He explained that he based his final decision on precedent from similar past cases, rather than leaving the ruling to ChatGPT alone.
Padilla added that ChatGPT can simplify certain tasks, but that there is no question of the program replacing people or making legal decisions for them.
The big problem with ChatGPT is that its answers read fluently but often contain incorrect information. The chatbot gives the impression that it "knows" everything and speaks with confidence, yet it can deliver a great deal of erroneous information in a very polished package.