Chief Justice of the United States John Roberts cited the significant rise of artificial intelligence in 2023, warning in his year-end report that the technology could “dehumanize” the law.
“Proponents of AI tout its potential to increase access to justice, particularly for litigants with limited resources,” he wrote. “AI obviously has great potential to dramatically increase access to key information for lawyers and non-lawyers alike.
“But just as obviously, it risks invading privacy interests and dehumanizing the law,” he continued.
Roberts highlighted the growing use of AI in smartphones, voice recognition software, and smart televisions, and noted that law professors regard the emerging technology with a mix of awe and wariness.
Much of his analysis focused on AI’s potential to make the legal system more accessible.
“For those who cannot afford a lawyer, AI can help,” Roberts said. “It drives new, highly accessible tools that provide answers to basic questions, including where to find templates and court forms, how to fill them out, and where to bring them for presentation to the judge—all without leaving home.”
The Chief Justice’s words come at a time when artificial intelligence, particularly generative artificial intelligence, has entered the mainstream across many industries and use cases, including education, defense, healthcare, and the legal system.
“These tools have the welcome potential to smooth out any mismatch between available resources and urgent needs in our court system,” Roberts continued. “But any use of AI requires caution and humility.”
Roberts also noted AI’s limitations, including “hallucinations,” which he pointed out have led lawyers to cite non-existent cases.
In October, new legal counsel for former Fugees member Pras Michel requested a new trial, arguing that the previous legal team’s reliance on generative AI, along with the technology’s habit of making up facts, cost his client the case.
In December, a federal judge ordered the legal team for Michael Cohen, the former attorney to former President Donald Trump, to provide printed copies of the legal cases cited in court documents after the court said it was unable to verify their existence.
Generative AI developers have invested heavily in combating AI hallucinations. In May, OpenAI said it was improving ChatGPT’s mathematical problem-solving skills to reduce hallucinations. In December, Fetch AI and SingularityNET announced a partnership to curb AI hallucinations using decentralized technology.
“SingularityNET has been working on a number of methods to address hallucinations in LLMs,” said SingularityNET Chief AGI Officer Alexey Potapov. “We have been focused on this since SingularityNET was founded in 2017.”
“Our view is that LLMs can only go so far in their current form and are not sufficient to take us towards artificial general intelligence but are a potential distraction from the end goal,” Potapov added.
For his part, Roberts also highlighted the potential for bias baked into AI models, which could lead to unfair decisions in court cases.
“In criminal cases, the use of AI in assessing flight risk, recidivism, and other largely discretionary decisions that involve predictions has generated concerns about due process, reliability, and potential bias,” Roberts said. “At least at present, studies show a persistent public perception of a ‘human-AI fairness gap,’ reflecting the view that human adjudications, for all of their flaws, are fairer than whatever the machine spits out.”
Despite the warning, Roberts expressed confidence that artificial intelligence will not replace human judges anytime soon.
“But with equal confidence, I predict that judicial work—particularly at the trial level—will be significantly affected by AI,” Roberts said. “Those changes will involve not only how judges go about doing their job, but also how they understand the role that AI plays in the cases that come before them.”
Edited by Ryan Ozawa.
Source: https://decrypt.co/211696/supreme-court-justice-warns-of-ais-impact-on-legal-system