Legal Professionals Using AI Should Be under Duty to Act Responsibly

Anyone who uses artificial intelligence (AI) in the justice system should be under a duty to act responsibly, a campaign group has claimed.

Campaign group JUSTICE recently published a report on the use of AI across the justice system, arguing that the proposed duty should obligate AI users to “pause, rethink, redesign or even stop development or deployment if significant risks to the rule of law or human rights are identified.”

The report, titled AI in our Justice System, set out to propose a “framework to achieve [a trustworthy justice system] in the context of innovating with AI”, doing so through two main requirements:

  1. AI development should be goal-led, ensuring that it aims to improve one or more of the justice system’s core goals (access to justice, fair and lawful decision-making, and transparency).
  2. Users should be under a duty of responsibility, with all involved in creating and using AI taking responsibility for “ensuring the rule of law and human rights are embedded at each stage of its design, development, and deployment”.

According to the research, AI has the potential to be “of great service to the strengthening of our justice system” if it is deployed well.

Conversely, the ground-breaking technology could also “lack transparency, embed or exacerbate societal biases, and can produce inaccurate outputs which are nevertheless convincing to the people around it”.

The research also identified other possible benefits, including AI’s role in aiding legal research and drafting, investigating sexual abuse cases involving online images, identifying bias in written material, and improving engagement in law-making and policy development.

Significant gaps in the data collected by the justice system were highlighted as a major area of concern, as incomplete data can lead to bias being unintentionally replicated or amplified.

The report went on to outline further possible risks, stating:

“Many AI models – including those used for risk assessments, sentencing recommendations, or fraud detection – rely on probabilistic methods.

“Instead of offering guaranteed correctness, they provide predictions with varying degrees of confidence, which means there is always a margin of error.”
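To make the quoted point concrete, the sketch below (illustrative only, not drawn from the report) shows how a probabilistic risk model of the kind described returns a confidence score rather than a guaranteed answer; the weights, features, and threshold are all made up.

```python
import math

def risk_score(features, weights, bias=0.0):
    """Logistic scorer: returns a probability in (0, 1), never a certainty."""
    z = sum(f * w for f, w in zip(features, weights)) + bias
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical learned weights and case features, purely for illustration.
weights = [0.8, -0.5, 1.2]
case = [1.0, 0.3, 0.7]

p = risk_score(case, weights)
print(f"Predicted risk: {p:.2f}")  # ~0.82: confident, but not guaranteed

# A decision threshold converts the probability into a yes/no outcome,
# so a margin of error is built in by construction.
threshold = 0.5
print("Flagged" if p >= threshold else "Not flagged")
```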

Fabricated or misleading content, known as “hallucinations”, and the worry that many AI models operate as “black boxes”, and therefore lack transparency, rounded off the concerns raised.

Report co-author and chair of JUSTICE’s AI programme, Sophia Adams Bhatti, stated: 

“Given the desperate need to improve the lives of ordinary people and strengthen public services, AI has the potential to drive hugely positive outcomes.

“Equally, human rights and the rule of law drive prosperity, enhance social cohesion, and strengthen democracy. We have set out a framework which will allow for the positive potential of both to be aligned.”
