AI and the law

How the University of Surrey is at the forefront of research into artificial intelligence bias in predictive policing.

Photo by ev on Unsplash

It’s happened already. The future, that is. The artificial intelligence revolution is upon us. Some are even calling it the fourth industrial age.

But it is an errant, if genius, child, yet to fully understand the difference between right and wrong or to exercise judgement on any ethical basis.

Yet it is being employed, ungoverned, by law enforcement around the world to predict criminal behaviour.

Research at the University of Surrey into AI points to police scrutiny expanding into a guessed-at and institutionally biased future. The research is already informing the US Congress, the UK Government and the judicial system in South Korea.

There's no knowing yet whether predictive policing is doing more harm than good.

"If machines are trained on biased data, they too will become biased"
Professor Melissa Hamilton
“Communities with a history of being heavily policed will be disproportionately affected by predictive policing.”
Professor Melissa Hamilton, University of Surrey

The 'Wild West'

A newly published (March 2023) House of Lords report describes the unregulated use of AI by the 43 police forces of England and Wales to predict criminal behaviour as ‘the Wild West’.

Public bodies and all 43 police forces in England and Wales are free to individually commission whatever tools they like, or to buy them from companies eager to get in on the burgeoning AI market.

The report calls, in fact shouts, for regulation in the UK. 

“Algorithms are being used to improve crime detection, aid the security categorisation of prisoners, streamline entry clearance processes at our borders and generate new insights that feed into the entire criminal justice pipeline," says the report.

“When deployed within the justice system, AI technologies have serious implications for a person’s human rights and civil liberties. At what point could someone be imprisoned on the basis of technology that cannot be explained?

“Informed scrutiny is therefore essential to ensure that any new tools deployed in this sphere are safe, necessary, proportionate, and effective. This scrutiny is not happening. 

“Instead, we uncovered a landscape, a new Wild West, in which new technologies are developing at a pace that public awareness, government and legislation have not kept up with.”

At the University of Surrey, Professor Hamilton’s research tries to increase transparency by providing an independent review of how algorithmic risk is operating in practice.

It explores and explains how biases can become embedded in the algorithms and attempts to translate between the science and law to educate practitioners, policymakers (e.g. the House of Lords), and stakeholders.

Professor Hamilton’s research also chimes with the House of Lords and the US Congress in calling urgently for a regulatory framework.

But she warns that the technologies will advance more quickly than any regulatory regime can be put in place.

“There needs to be some middle ground,” she says.

“My research highlights that science is not as objective and unbiased as assumed, and that the scientists who create these technologies can still be biased toward them. They are only human and they want their babies to shine, so they may unconsciously be burying bad information.

“We need more public education as citizens have a right to know something about how the government uses technologies that might impact their lives.”

A study by the organisation Upturn found that 20 of the largest US police forces have already engaged in predictive policing.

US Congress

Professor Hamilton, who has twice testified as an expert before the US Congress on risk tools and these issues, says people (policymakers, criminal justice officials, the public) tend to either love or hate the use of risk assessment tools to identify dangerous offenders.

“Few of them adequately understand what the risk-based algorithms can achieve from a practical and scientific perspective, as well as their limitations. This research is meant to help translate these issues for these audiences.”

In fact, a hefty Congressional Research Service report is also asking some serious questions about the ethics of AI in the justice system.

It says that, along with interest in technical advances, researchers, companies and policymakers are expressing growing concern and interest in what has been called the ethical evolution of AI, including questions about bias, fairness and algorithmic transparency.

“What constitutes an ethical decision may vary by individual, culture, economics, and geography,” it says.

“As some analysts have asserted, ‘AI is only as good as the information and values of the programmers who design it, and their biases can ultimately lead to both flaws in the technology and amplified biases in the real world.’ Just as there are many ways of considering what is ethical in AI, ‘researchers studying bias in algorithms say there are many ways of defining fairness, which are sometimes contradictory,’ with inherent trade-offs.”

Professor Hamilton was further commissioned to write a report for the National Association of Criminal Defense Lawyers in the US.  

In 2016 a report claimed that a computer program used by a US court for risk assessment was biased against black prisoners. 

The program, Correctional Offender Management Profiling for Alternative Sanctions (Compas), was much more prone to mistakenly label black defendants as likely to reoffend – wrongly flagging them at almost twice the rate as white people (45% to 24%), according to the investigative journalism organisation ProPublica.
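To make that comparison concrete, here is a minimal sketch, using entirely hypothetical records rather than ProPublica's data or methodology, of how group-wise false positive rates of this kind are calculated:

```python
# Illustrative sketch only: hypothetical records, not ProPublica's data or code.
# The false positive rate per group is the share of people who did NOT reoffend
# but were nevertheless flagged as high risk.
from collections import defaultdict

# Each record: (group, flagged_high_risk, reoffended). Invented for illustration.
records = [
    ("black", True, False), ("black", True, False), ("black", False, False), ("black", True, True),
    ("white", True, False), ("white", False, False), ("white", False, False), ("white", False, True),
]

def false_positive_rates(rows):
    flagged_non_reoffenders = defaultdict(int)
    non_reoffenders = defaultdict(int)
    for group, flagged, reoffended in rows:
        if not reoffended:
            non_reoffenders[group] += 1
            if flagged:
                flagged_non_reoffenders[group] += 1
    return {g: flagged_non_reoffenders[g] / non_reoffenders[g] for g in non_reoffenders}

print(false_positive_rates(records))
# ProPublica's reported figures correspond to rates of roughly 0.45 for black
# defendants versus 0.24 for white defendants.
```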

Compas and programs similar to it were in use in hundreds of courts across the US, potentially informing the decisions of judges and other officials. The private company that supplies the software, Northpointe, disputed the conclusions of the report, but declined to reveal the inner workings of the program, which it considered commercially sensitive. 

prison backyard

Photo by Larry Farr on Unsplash

Photo by Larry Farr on Unsplash

International impact

Further afield, Professor Hamilton’s University of Surrey research is also feeding into similar thinking in Asia. She has been invited to present on risk assessment to the Sentencing Commission of Korea and the South Korean Government, suggesting that concerns about bias, lack of governance and accountability are becoming, or already are, global.

The House of Lords and Congress reports both say public bodies often do not know much about the systems they are buying and will be implementing, due to the seller’s insistence on commercial confidentiality – despite the fact that many of these systems will be harvesting, and relying on, data from the general public.

The UK report says there is no central register of AI technologies, making it virtually impossible to find out where and how they are being used, or for Parliament, the media, academia, and importantly, those subject to their use, to scrutinise and challenge them.

Deloitte’s recently published report, Generative AI: Navigating Risks and Ethics, says future research should focus on addressing the unique challenges of generative AI, including better defining and measuring uncertainty in predictions, improving explainability mechanisms, detecting undesired biases, understanding the potential environmental impact, and putting in place effective safeguards against misuse.

Unfortunately, the algorithms used are not made widely available for analysis of whether they suffer from flaws or biases in their input data, as releasing them would apparently violate 'trade secrets'.

Photo by h heyerlein on Unsplash

The concerns here are how we prevent these programs from amplifying the inequalities of our past and affecting the most vulnerable members of our society. 

When the data we feed the machines reflects the history of our own unequal society, we are, in effect, asking the program to learn our own biases.

According to Andrew G. Ferguson, a professor at the University of the District of Columbia law school and author of the book The Rise of Big Data Policing, police departments are drawn to AI’s potential as a quick fix that is seen as objective and trustworthy.

"Police agencies are adopting this technology because the lure of black box policing is so attractive," he said.

"Every chief has to answer the unanswerable question, 'What are you doing about crime rates?’"

Risk Predictions

Current laws ‘largely fail to address discrimination’ when it comes to big data.

“Risk predictions (e.g. high risk) can mean that one remains jailed pending trial, receives a sentence requiring incarceration, is sentenced to longer periods, is denied parole, is civilly committed indefinitely as a dangerous sexual predator, is sentenced to the death penalty, or their term of supervision is revoked,” says Professor Hamilton.

“If the science behind the risk tools is insufficient or if these tools discriminate by sociodemographic characteristics, then they can undermine the aims of justice and violate individual rights,” she added.

The mayor of an Australian town, Hepburn Shire, near Melbourne, has begun legal action against the creators of ChatGPT, arguing that the AI-powered chatbot falsely accused him of being a criminal. He has sent a legal notice to its developer, OpenAI, after the app wrongly implicated him in a corruption scandal he in fact blew the whistle on. The chatbot, which draws on a large amount of data from books, articles and websites, carries a disclaimer warning it “may produce inaccurate information”.

Research from the Royal Statistical Society says predictive policing systems are used increasingly by law enforcement to try to prevent crime before it occurs.

It draws a direct link from sci-fi fantasy to present day reality.

"We are treating algorithms like they are infallible"

Steven Spielberg’s film Minority Report, starring Tom Cruise, was released more than two decades ago.

The concept, based on a short story by Philip K. Dick, is that Washington D.C.’s new ‘Precrime’ division has eliminated murder in the city by tapping the brains of three psychic ‘precogs’ whose dreams of death are used to prevent killings before they happen.

“We are arresting individuals who have broken no law”
Quote from Minority Report

The Precrime division's precogs conclude that the film's main character, played by Tom Cruise, will commit a murder.

While trying to prove his innocence, Cruise's character learns that the precogs lack consistent judgment.

But that lesson isn't widely understood with today's crime prediction systems.

"Minority Report has become less of a cautionary tale and more of a blueprint, especially in predictive policing," said Suresh Venkatasubramanian, a computer science professor at the University of Utah.

"We're treating algorithms as if they're infallible and true arbiters of how people are going to behave, when of course they're nothing of the sort.”

Photo by Mark kassinos on Unsplash

A Chicago Police Department program has used artificial intelligence to identify people at high risk of gun violence, but a 2016 RAND Corporation study found the tactic was ineffective.

“If machines are trained on biased data, they too will become biased. Communities with a history of being heavily policed will be disproportionately affected by predictive policing,” warns Professor Hamilton.

“Identifying when it is fair and just to base risk predictions on sociodemographic characteristics such as race, gender, age, mental health condition, immutable traits, family dynamics and educational status raises some big philosophical and ethical issues,” she says. 

“For example, some tools have separate algorithms for males versus females. 

“This can mean that a male who receives the same score as a female may be judged as higher risk. That may appear to be gender discrimination. 

“But if the science supports different predictions (because studies consistently show women are far less likely to reoffend given the same predictors) is this a proper justification? The argument is politically harder to make regarding race, though questions still arise.”
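The mechanism she describes can be sketched in a few lines. The cut-points below are entirely hypothetical, not taken from any real tool; they simply show how sex-specific norms can map the same raw score to different risk bands:

```python
# Hypothetical sketch: sex-specific cut-points mean the same raw score
# can fall into different risk categories. All values are invented.
CUT_POINTS = {
    # (upper bound of 'low', upper bound of 'medium') on a 0-10 raw score
    "male": (3, 6),
    "female": (5, 8),
}

def risk_category(raw_score: int, sex: str) -> str:
    low_max, medium_max = CUT_POINTS[sex]
    if raw_score <= low_max:
        return "low"
    if raw_score <= medium_max:
        return "medium"
    return "high"

print(risk_category(7, "male"))    # 'high'
print(risk_category(7, "female"))  # 'medium'
```

Whether differential norms of this kind are scientifically justified or amount to discrimination is precisely the question Professor Hamilton raises.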

The biggest questions around bias and its risks therefore sit uncomfortably with a fundamental principle of the justice system: that one has the ability to defend oneself.

Professor Hamilton notes in her research that judges are ‘risk averse’ resulting in a huge uptick in pretrial detentions in the US.  

“Widespread pretrial detention appears to operate as a presumption of ‘guilty until proven guilty’ and thus ‘incarcerated until proven guilty’,” she writes in the American University Law Review.

She says that in recent decades pretrial detention has markedly shifted from incapacitating a few select individuals deemed likely to abscond or be dangerous to detaining a far higher rate of individuals accused of various crimes, serious or not.

Pretrial detention and AI

Over 81 per cent of jail inmates in the US have not been convicted of any crime.

That’s around 550,000 people awaiting trial - in jail.

Professor Hamilton’s deep dive into the pretrial statistics in Cook County, Illinois, turned up some reductions in bias and in pretrial detentions due to the use of predictive AI risk assessment tools. But while this points to a positive intervention, the study also highlighted the need to expand the racial categories used.

The study provides an empirical exploration of how the outcome of pretrial detention may be associated with racial and gender disparities and whether any such disparities are ameliorated when considering a host of legal factors that are predictive of pretrial detention.

Policy implications of Professor Hamilton’s results have informed debates concerning pretrial reforms in terms of whether risk assessment tools offer the ability to reduce racial/ethnic and gender disparities and to decrease the detention rate.

Minority Report opened more than two decades ago, but its foundations remain contemporary.

When Chief of Police John Anderton (played by Tom Cruise) sees himself committing a crime in the future, he runs, only to discover that he’s being framed.

This idea of prediction ties into contemporary machine learning, which allows organisations to mine existing data to generate predictions about particular individuals.

“The Internet is watching us now. If they want to, they can see what sites you visit,” said Spielberg in an interview at the time Minority Report opened. 

“In the future, television will be watching us, and customising itself to what it knows about us.”

As we have said, the future is already here with this technology.

Society may still be quite far from 2054, when the movie is set, but it serves as a ‘watch out’ for unethical practices that may arise from an ever-evolving technological world.

"Minority Report has become less of a cautionary tale and more of a blueprint, especially in predictive policing," said Suresh Venkatasubramanian, a computer science professor at the University of Utah.

"We're treating algorithms as if they're infallible and true arbiters of how people are going to behave, when of course they're nothing of the sort.

Since 2011, the FBI has been involved in developing what has been termed NGI, or Next Generation Identification, which integrates palm prints, iris scans and facial recognition to help computers search criminal history records.

The facial recognition database is currently believed to consist of around 411.9 million images, the bulk of which are connected to people with no history of criminal activity.

The FBI, part of the US Department of Justice, operates the Next Generation Identification-Interstate Photo System (NGI-IPS), a face recognition service that allows law enforcement agencies to search a database of over 30 million photos to support criminal investigations.

Photo by Orkun Azap on Unsplash

Human-level AI

Many AI experts believe there is a real chance that human-level artificial intelligence will be developed within the next decades, and some believe that it will exist much sooner.

A human-level AI would be a machine, or a network of machines, capable of carrying out the same range of tasks that we humans are capable of. 

It would be able to choose actions that allow the machine to achieve its goals and then carry out those actions. 

In recent years, several research teams have contacted AI experts and asked them about their expectations for the future of machine intelligence. But is such intelligence capable of ethical decision-making?

Such expert surveys are one of the pieces of information that we can rely on to form an idea of what the future of AI might look like.

One such survey, led by Katja Grace, lead author of ‘When Will AI Exceed Human Performance?’, gathered the answers of 352 experts; its most recent round was conducted in the summer of 2022.

Experts were asked when they believe there is a 50 per cent chance that human-level AI exists.

A few believe that this level of technology will never be developed. Some think that it’s possible, but it will take a long time. And many believe that it will be developed within the next few decades.

The term 'artificial intelligence' can be applied to computer systems which are intended to replicate human cognitive functions.

In particular, it includes 'machine learning'.

Machine Learning

The promise of machine learning and other programs that work with big data (often under the umbrella term ‘artificial intelligence’) was that the more information we feed these sophisticated computer algorithms, the better they perform. 

According to global management consultant McKinsey, tech companies spent somewhere between $20bn and $30bn on AI, mostly in research and development. 

Investors are making a big bet that AI will sift through the vast amounts of information produced by our society and find patterns that will help us be more efficient, wealthier and happier.

“If you’re not careful, you risk automating the exact same biases these programs are supposed to eliminate,” says Kristian Lum, the lead statistician at the San Francisco-based, non-profit Human Rights Data Analysis Group (HRDAG). 

Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, could potentially get stuck in a feedback loop of over-policing majority black and brown neighbourhoods.

The program was ‘learning’ from previous crime reports and demonstrating how programs could replicate the sort of large-scale systemic biases that people have spent decades campaigning to educate or legislate away.
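The feedback loop Lum identifies can be illustrated with a toy simulation. This is not PredPol's actual algorithm; the two areas, their crime rates and the detection probabilities are invented purely to show the dynamic:

```python
# Toy simulation of the over-policing feedback loop (not PredPol's algorithm).
# Patrols go to the area with the most *recorded* crime, and patrolled areas
# generate more records, so an initial skew in the data keeps amplifying itself.
import random

random.seed(0)

TRUE_RATE = {"area_a": 0.10, "area_b": 0.10}   # identical underlying crime rates
recorded = {"area_a": 12, "area_b": 10}        # slightly skewed historical records

for day in range(200):
    patrolled = max(recorded, key=recorded.get)  # "predict" the hotspot from past records
    for area, rate in TRUE_RATE.items():
        # Incidents in the patrolled area are far more likely to be observed and logged.
        detection = 0.9 if area == patrolled else 0.2
        if random.random() < rate * detection:
            recorded[area] += 1

print(recorded)  # area_a ends up with far more recorded crime despite equal true rates
```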

“Histories of discrimination can live on in digital platforms,” Kate Crawford, a Microsoft researcher, wrote in The New York Times in 2016. “And if they go unquestioned, they become part of the logic of everyday algorithmic systems.”

There’s plenty of evidence of that.

A Google photo application made headlines in 2015 when it mistakenly identified black people as gorillas.

Tay, a chatbot Microsoft designed to engage in mindless banter on Twitter, was taken offline after other internet users persuaded the software to repeat racist and sexist slurs.

PredPol mapping

Fourth industrial revolution

People who work in machine intelligence say one of the challenges in constructing bias-free algorithms is a workforce in the field that skews heavily white and male.

“The fourth industrial revolution is the first one in which the pace of technological change is outstripping business change and what society is able to cope with. In previous industrial revolutions society had decades to adjust to the pace of change,” says Tony Scott, CEO of NeuralRays AI.

NeuralRays AI was founded in late 2018 with the vision of using ethical AI to create solutions for a better world.

“The biggest challenges in this are the human, behavioural and cultural ones, and the need to bring people along on the journey. It’s about painting a view of what a changed world might look like to inspire them to participate. You cannot make people transform, they have to want it and understand the reasons why.” 

According to Tony, AI was increasingly being spoken of in negative terms back then. 

Stories focused on automation and robotics replacing jobs, instead of celebrating people being freed from mundane and repetitive tasks to excel in new roles using human traits such as empathy, creativity and leadership. 

At its most basic, AI is software that mimics and generates human behaviours.

“With normal software, we program rules and formulas or use conditional logic to automate the software. AI doesn’t need human-attributed rules. It generates them itself,” said Sulabh Soral, the chief AI officer at Deloitte Consulting.

"To understand this significance, we should look back to the Industrial Revolution: humans created machines that mimicked human muscles and replicated the work of humans’ hands and legs. Machines scaled hard human labour. It transformed the world. 

“Likewise, AI will scale our cognitive abilities, enhancing healthcare, transport, education, customer-centric experiences – it will have a profound impact on human efforts.”
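Soral's distinction between programmed rules and rules a system derives for itself can be made concrete with a minimal, purely illustrative sketch; the flagging task, data and toy threshold rule below are invented and are not drawn from any Deloitte work:

```python
# Conventional software: a human writes the rule explicitly.
def flag_transaction_rule_based(amount: float) -> bool:
    return amount > 1000  # threshold chosen by a programmer

# Machine learning in miniature: the "rule" (here a single threshold)
# is derived from labelled examples rather than written by hand.
def learn_threshold(examples: list[tuple[float, bool]]) -> float:
    flagged = [amount for amount, label in examples if label]
    unflagged = [amount for amount, label in examples if not label]
    return (sum(flagged) / len(flagged) + sum(unflagged) / len(unflagged)) / 2

history = [(50.0, False), (200.0, False), (1500.0, True), (4000.0, True)]
threshold = learn_threshold(history)

print(flag_transaction_rule_based(3000.0))  # True, via the hand-coded rule
print(3000.0 > threshold)                   # True, via the rule learned from data
```

The learned threshold is only as good as the history it is trained on, which is the crux of the bias concern running through this article.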

He says there are two ways AI can go bad: when it only benefits the people who own the intellectual property or, a little dramatically, when it becomes an existential threat to the human race.

"We need the global community to decide, together, how we regulate, fund, invest in and research AI because the success of AI depends totally on humans.”

 

Tony Scott

Both academics, like Professor Hamilton, and industry leaders like Tony Scott agree that to train and evaluate complex AI systems, researchers and developers may need large datasets that are not widely accessible. 

Further, stakeholders have questioned the adequacy of public and private sector workforces to develop and work with AI, as well as the adequacy of current laws and regulations in dealing with societal and ethical issues that may arise.

What it all points to is the necessity for further research. 

“Ethics is inherently a philosophical question while AI technology depends on, and is limited by, engineering,” says Professor Hamilton.

The big task now is figuring out acceptable ethical reference frameworks that can guide AI systems' reasoning and decision-making, so that their conclusions and actions can be explained and justified.

As the Congressional Report notes, to achieve these goals, there is a need for multidisciplinary, fundamental research in designing architectures for AI systems to incorporate ethical reasoning.

While such fundamental research is being conducted, and while various groups work on developing standards and benchmarks for evaluating algorithms, key stakeholders such as the UK and US Governments are now calling for a risk-based, sector-specific approach to considering uses and potential regulations for AI algorithms. 

Background biography

Melissa Hamilton works with governmental agencies and attorney groups, and her expertise is called upon internationally in criminal and civil cases involving challenges to these technologies.

She is a former police officer, former corrections officer and former judicial clerk.

She is a Professor of Law & Criminal Justice, University of Surrey School of Law; J.D., The University of Texas School of Law; Ph.D. (criminology), The University of Texas at Austin; Fellow, Royal Statistical Society; Fellow, Surrey Institute for People-Centred AI.

Surrey Institute for People-Centred AI

The Surrey Institute for People-Centred AI brings together practitioner domain knowledge with AI expertise. The Institute addresses the national importance of AI for the benefit of society and the economy as recognised in the government’s new AI ten-year strategy. This strategy launched in September 2021 and aims to make the UK a 'global AI superpower’.

The University’s vision also addresses the government’s industrial strategy ‘AI and the Data Economy’ for critical sectors of the UK economy such as creative industries, health care and security.