© Reuters. FILE PHOTO: U.S. Chief Justice John Roberts speaks during the funeral service for retired U.S. Supreme Court Justice Sandra Day O'Connor at the Washington National Cathedral in Washington, U.S., December 19, 2023. REUTERS/Evelyn Hockstein/File Photo
By John Kruzel
WASHINGTON (Reuters) – Artificial intelligence represents a mixed blessing for the legal field, U.S. Supreme Court Chief Justice John Roberts said in a year-end report published on Sunday, urging "caution and humility" as the evolving technology transforms how judges and lawyers go about their work.
Roberts struck an ambivalent tone in his 13-page report. He said AI had the potential to increase access to justice for indigent litigants, revolutionize legal research and assist courts in resolving cases more quickly and cheaply, while also pointing to privacy concerns and the current technology's inability to replicate human discretion.
“I predict that human judges will be around for a while,” Roberts wrote. “But with equal confidence I predict that judicial work – particularly at the trial level – will be significantly affected by AI.”
The chief justice's commentary is his most significant discussion to date of the influence of AI on the law, and coincides with a number of lower courts contending with how best to adapt to a new technology capable of passing the bar exam but also prone to generating fictitious content, known as "hallucinations."
Roberts emphasized that "any use of AI requires caution and humility." He mentioned an instance in which AI hallucinations had led lawyers to cite non-existent cases in court papers, which the chief justice said is "always a bad idea." Roberts did not elaborate beyond saying the phenomenon "made headlines this year."
For instance, Michael Cohen, Donald Trump's former fixer and lawyer, said in court papers unsealed last week that he had mistakenly given his attorney fake case citations generated by an AI program, which made their way into an official court filing. Other instances of lawyers including AI-hallucinated cases in legal briefs have also been documented.
A federal appeals court in New Orleans last month drew headlines by unveiling what appeared to be the first proposed rule by any of the 13 U.S. appeals courts aimed at regulating the use of generative AI tools like OpenAI's ChatGPT by lawyers appearing before it.
The proposed rule by the 5th U.S. Circuit Court of Appeals would require lawyers to certify either that they did not rely on artificial intelligence programs to draft briefs, or that humans reviewed the accuracy of any AI-generated text in their court filings.