Misc

UK High Court judge sounds alarm on AI misuse in the legal system after barrage of fake citations

The UK legal profession is grappling with growing concern over the misuse of artificial intelligence: the High Court has warned senior lawyers to urgently address the proliferation of fabricated case-law citations appearing in court.

The directive follows two high-profile cases this year that were marred by actual or suspected AI-generated fabrications, raising questions about the integrity of justice and public trust in the courts.

The ruling, delivered on Friday by Dame Victoria Sharp, president of the King’s Bench Division, highlighted the “serious implications for the administration of justice and public confidence in the justice system if artificial intelligence is misused.”

She said that lawyers found to be misusing AI could face severe sanctions, ranging from public admonishment to contempt of court proceedings and even referral to the police.

Lawyers increasingly use AI tools to help construct legal arguments. Recent incidents, however, reveal a troubling downside to this technological shift.

In a high-stakes £89 million damages case against Qatar National Bank, the claimants presented 45 case-law citations, 18 of which were later confirmed to be entirely fictitious.

Furthermore, many of the remaining citations contained bogus quotes. The claimant openly admitted to using publicly available AI tools, and their solicitor acknowledged citing the sham authorities.

In a separate incident, the Haringey Law Centre cited phantom case law five times in a challenge against the London Borough of Haringey over its alleged failure to provide temporary accommodation.

Suspicions were aroused when the solicitor defending the council repeatedly failed to locate any trace of the supposed authorities.

This led to an application for wasted costs, with the court finding the law centre and its lawyer, a pupil barrister, negligent.

The barrister denied intentionally using AI in that case, but suggested she may have done so inadvertently while preparing for a separate case in which she also cited phantom authorities, possibly relying on AI-generated summaries surfaced by Google or Safari without realising it.

Dame Victoria said that AI tools, while seemingly capable of producing “coherent and plausible responses,” often generate information that is “entirely incorrect.”

In response to these developments, Dame Victoria has urged the Bar Council and the Law Society to consider steps to curb the problem.

She also instructed heads of barristers’ chambers and managing partners of solicitors’ firms to ensure that all legal professionals understand their professional and ethical duties when using AI.