What are the legal implications?
The sophistication of artificial intelligence (AI), along with the range of its applications, is increasing almost daily. The UK has announced the world's biggest trial of breast cancer screening and diagnosis using AI in place of the second radiologist (mammograms are normally assessed by two human doctors). If successful, this advancement could speed up testing, reduce radiologists' workloads, and shorten the time patients wait for results. In law enforcement, AI offers similar opportunities. However, it also presents challenges, particularly in terms of bias and racial profiling.
In South Africa, the adoption of AI has sparked debates about the potential benefits and risks. Predictive policing, which is the use of AI to forecast crimes or identify high-risk areas, has provoked legal and ethical debates. Understanding these implications is critical to balancing technological progress with adherence to the rule of law.
Predictive policing
Predictive policing employs algorithms to analyse large datasets and predict potential future crimes. This method, particularly the place-based approach, uses historical crime data to determine areas and times with a high likelihood of criminal activity. Information sources include historical crime records, social media, geolocation data and socioeconomic indicators.
Initially introduced in cities like Los Angeles, predictive policing has been implemented through models such as PredPol and LASER, which identify crime hotspots and predict incidents involving violence or property crimes. While these tools offer innovative crime-fighting approaches, their use raises ethical concerns.
Legal ramifications of predictive policing
In a country with stubbornly high levels of crime, it would be easy to welcome any tools or tricks that can aid the police in their duties. However, South Africa is also a country with a history of human rights abuses, particularly in police conduct. Therefore, we have a duty to scrutinise any development with the potential to infringe or violate the rights we have fought so hard to protect. Predictive policing must be approached with caution; concerns include privacy and data protection, bias and discrimination, transparency, and due process.
Data protection
Predictive policing relies heavily on collecting and processing personal data, including surveillance footage, geolocation, and biometric information. The Protection of Personal Information Act (POPIA) is the comprehensive data protection law in South Africa, designed to give effect to the constitutional right to privacy by safeguarding personal information and regulating the way it may be processed. Mismanagement of personal data not only violates privacy rights but also exposes agencies to legal actions and loss of public trust.
Bias and discrimination
AI systems are subject to bias contained in the data they analyse. If historical data reflect existing societal inequalities, predictive policing tools will reinforce systemic discrimination, disproportionately penalising certain racial or socioeconomic groups. In South Africa, where inequality remains deeply entrenched, fairness in AI-aided law enforcement is crucial. Careful evaluation of algorithms, combined with bias mitigation strategies, is essential to prevent discriminatory outcomes.
Transparency
Predictive policing leads to actions and decisions influenced by algorithms. Who is accountable for these decisions? Many AI models are opaque in their rationale, and this lack of transparency can lead to inadequate accountability in decision-making. Particularly in criminal matters, AI systems must not be a substitute for human judgement, as errors or misinterpretations could have severe, even life-changing, consequences. South African law enforcement agencies must establish clear guidelines for accountability and ensure transparency to earn public trust.
Due process
The predictive potential of AI tools raises concerns about the erosion of legal principles such as due process and the presumption of innocence. Targeting individuals or communities based solely on algorithmic predictions risks infringing their constitutional rights. Predictive tools must complement rather than replace traditional investigative methods to uphold justice and equity.
International basis for law enforcement conduct
Policing has unique roles, responsibilities and far-reaching consequences for individuals, communities and society. It is governed by rules derived from the Constitution and national legislation, as well as international obligations such as the International Rules and Standards for Policing (ICRC), the Code of Conduct for Law Enforcement Officials (UN General Assembly Resolution 34/169), International Human Rights Standards for Law Enforcement (UN Centre for Human Rights) and INTERPOL legal frameworks. These mandates require that policing innovations balance efficiency with human rights, fairness and equity.
Balancing risk and reward
Technology has undeniable potential to help the police fulfil their duties. However, its use must be balanced against the intrinsic risks. Any technological support deployed by the police must strike a balance between the function of policing and the various competing human rights and public interests, including the freedom and security of the person, privacy and equity.
Efficiency vs. effectiveness
Efficiency is frequently mistaken for effectiveness. AI tools in many spheres undoubtedly increase efficiency, but they do not necessarily lead to greater effectiveness. Before committing to the use of predictive analytics or unproven surveillance systems, law enforcement agencies should proceed with due diligence and strengthen internal capabilities to ensure the proposed technologies are properly understood. Without this human oversight, the challenges described above represent a critical barrier to adoption.
SD Law can help
At SD Law, we welcome tools that enhance our service to our clients and the advancement of law enforcement and the legal profession in general. We embrace cutting-edge technology as part of our vision to be a modern, client-driven law firm, and we do so responsibly and ethically. If you think you have been a victim of bias or discrimination due to the use of AI tools in law enforcement, or if your rights have been infringed in another way by AI, we want to hear from you. We can help. Contact Simon on 086 099 5146 or email sdippenaar@sdlaw.co.za for a confidential discussion.
The information on this website is provided to assist the reader with a general understanding of the law. While we believe the information to be factually accurate, and have taken care in our preparation of these pages, these articles cannot and do not take individual circumstances into account and are not a substitute for personal legal advice. If you have a legal matter that concerns you, please consult a qualified attorney. Simon Dippenaar & Associates takes no responsibility for any action you may take as a result of reading the information contained herein (or the consequences thereof), in the absence of professional legal advice.