The Justice and Home Affairs Committee has published a report about the use of new technologies in the justice system. It raises a number of concerns and concludes that proper governance is needed to manage the risks.
AI in the Justice System
Do you need a criminal defence lawyer? Contact us at Ashmans Solicitors. We have a team of experienced defence lawyers ready to help you. We are available to take your call 24 hours a day, 7 days a week. We offer free police station representation and are members of the Legal Aid scheme.
Artificial intelligence in the justice system
Artificial intelligence, or AI, is becoming increasingly common across all sectors – and the justice system is no different. The police force uses bots to run procedural checks on vetting enquiries. The Serious Fraud Office uses a machine to pre-screen documents. The Home Office uses an algorithm to review applications for marriage licences, helping to flag potential sham marriages.
These are just a few examples. The fact is that AI and algorithms are deployed across the justice system. But there is no central register of technologies, meaning it is hard to know where they are being used and how. This makes it impossible to scrutinise their use, which could in turn impact a person’s human rights.
These issues were the focus of a recent report published by the Justice and Home Affairs Committee. The report, titled ‘Technology Rules? The advent of new technologies in the justice system’, investigated how new technologies are used by UK authorities. It found that while technology does have many benefits, some tools are being used without the proper oversight.
In particular, the report highlighted that the security and testing of systems is often unknown, as suppliers frequently insist on commercial confidentiality, even where data is harvested from the general public. Individual police forces are free to purchase or commission technology as they wish, but there are no minimum ethical or scientific standards that must be met before it can be used in the justice system. This means technology is not being properly evaluated, so unsuitable products could well be deployed. Furthermore, AI used for predictive policing relies on historical data. The risk is that historical human bias becomes embedded in the decisions made by algorithms.
Ultimately, the use of such technologies could impact a person’s human rights. The major worry is that someone could be imprisoned on the basis of technology whose workings cannot be fully explained. The report made seven recommendations to protect civil liberties and increase transparency. They are:
- A mandatory register of the algorithms used in relevant tools.
- The introduction of a duty of candour on the police to ensure there is transparency in their use of AI.
- The establishment of a national body to set appropriate standards and to certify new technology against those standards. Although police forces should have the ability to address the particular issues in their area, any tool should achieve the requisite kitemark certification before use.
- The establishment of a proper governance structure to carry out regular inspections.
- The system should be streamlined; more than thirty public bodies and programmes currently play a role in the governance of new technologies. Roles overlap or are unclear, there is no coordination, and it is not clear where ultimate responsibility lies. As part of the streamlining, there is a need for a robust legal framework and regulation.
- Local specialist ethics committees to be established and empowered.
- Proper individual training in the limitations of the technology being used, with mandatory training on the use of the tools themselves and general training on the legislative context. The possibility of bias and the need for cautious interpretation of outputs should also be addressed.