If you use AI in your business, you need to carry out a risk assessment
Artificial intelligence (AI) is increasingly embedded in day-to-day business operations. Even small organisations use chatbots for customer service (including this website). Facial recognition software is used not only by large tech companies (to unlock your cell phone or laptop) but also by businesses for access control and by schools for attendance monitoring. Algorithms are widely used for recruitment and selection, and AI is increasingly employed as a financial decision-making tool, for example in loan applications. As adoption grows, so do the risks, many of which are not immediately obvious but can have long-term consequences for legal compliance, reputation and operational integrity.

Here in South Africa, we do not have an explicit legal framework governing the use of AI. However, the absence of regulation is no excuse for failing to consider proactively the risks AI exposes you to. If you simply use a generative AI tool like ChatGPT, your level of risk is low; but if you have implemented AI as part of your core operations, or have developed a bespoke AI system, you need to carry out a thorough AI risk assessment. Businesses have a duty to self-regulate and adopt responsible AI measures until our regulatory structure catches up with Europe and other jurisdictions that already have specific laws governing AI.
What is AI risk?
AI systems present a unique combination of traditional technology risks and new, AI-specific challenges. AI models are based on probability and are notoriously opaque. They are often referred to as “black boxes”, because they are so complicated that even their developers do not fully understand how they work. It is often difficult to explain the outcome of an AI system, so the appropriateness of an AI-based decision may be in question. Organisations that rely on AI in areas such as customer interaction, recruitment or surveillance are exposed to various risks, which can range from “unacceptable” (e.g., an AI-based scoring system used to screen rental applicants based on race) to “minimal” (e.g., a chatbot that provides instructions on how to use a website). However, a system’s risk level is not fixed: the same tool can move from low risk to higher risk depending on the context in which it is deployed. We outline below some of the more common risks, but the list is not exhaustive.
Bias and discrimination
AI models trained on historical or skewed data can perpetuate or amplify existing biases. For example, a recruitment algorithm may unfairly filter out candidates based on gender, race or age due to patterns embedded in training data. This exposes organisations to reputational harm, legal claims and missed opportunities to hire the best talent.
Data privacy and sensitivity
AI consumes and processes large volumes of data, which may include personal or sensitive information. Customer-facing chatbots may inadvertently collect personal data without consent. Facial recognition software uses biometric data, which is considered high-risk under global data protection principles. Without proper controls, organisations risk breaching POPIA or other privacy protection laws.
Data leakage and security
AI systems that interface with cloud-based services may be vulnerable to data leakage. A chatbot connected to a customer service database might accidentally reveal personal information or, worse, cyber attackers might access confidential data.
Lack of transparency
Because AI can make decisions that are not fully understood, problems may emerge when explaining outcomes to customers or defending actions in a legal or regulatory setting.
Automation risks and loss of human oversight
However efficient the AI system, human oversight is necessary. AI systems are not (yet) sophisticated enough to understand the nuances of human language, nor sensitive enough to consider all the qualitative factors in a decision. Relying blindly on AI can lead to errors in judgment or questionable ethics. For example, an AI customer service tool may mishandle a complaint, or a recruitment filter may reject a strong candidate without a clear rationale.
Regulatory and legal exposure
Although South Africa currently lacks AI-specific regulation, standards are being set and laws enacted elsewhere. The EU’s AI Act and Canada’s proposed Artificial Intelligence and Data Act (AIDA) are among the first; others will follow. A South African company that wants to operate globally should be prepared to comply with emerging legislation. Conducting a risk assessment is the first step.
What an AI risk assessment should cover
An effective AI risk assessment is a rigorous process in which all AI models, systems and capabilities used in the organisation are evaluated to identify and rank potential risks in areas such as security, privacy, fairness and accountability. It should be sensitive to the organisation’s context, the AI application in question, and the potential harm to individuals or society, and it should include mitigation measures. Done thoroughly, the assessment involves multiple steps.
System mapping
First, itemise the AI systems in use and note where and how they are deployed. You may be surprised to find more automated tools in use across the organisation than expected. What data do they use, and how is it collected? Which stakeholders are affected? This step ensures all systems, whether third-party or in-house, are captured. A simple register, as sketched below, keeps the inventory consistent and auditable.
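For organisations that want to keep this inventory in a structured form, a register can be maintained in a spreadsheet or in code. The sketch below is purely illustrative: the system names, suppliers and fields are hypothetical examples of the kind of information worth capturing, not a prescribed schema.

```python
from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    """One entry in the AI system inventory."""
    name: str                # product or internal project name
    vendor: str              # "in-house" or the third-party supplier
    purpose: str             # what the system is used for
    data_used: list = field(default_factory=list)      # categories of data consumed
    data_sources: list = field(default_factory=list)   # where that data is collected
    stakeholders: list = field(default_factory=list)   # who is affected by its output

# Hypothetical entries -- replace with your organisation's actual systems.
inventory = [
    AISystemRecord(
        name="Website chatbot",
        vendor="Third-party SaaS",
        purpose="First-line customer service",
        data_used=["chat transcripts", "contact details"],
        data_sources=["website visitors"],
        stakeholders=["customers", "support staff"],
    ),
    AISystemRecord(
        name="CV screening tool",
        vendor="In-house",
        purpose="Shortlisting job applicants",
        data_used=["CVs", "application forms"],
        data_sources=["job applicants"],
        stakeholders=["applicants", "HR", "hiring managers"],
    ),
]

for record in inventory:
    print(f"{record.name}: affects {', '.join(record.stakeholders)}")
```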
Risk identification
Next, identify the risks specific to each AI system based on its function, scope and context. Consider whether the system uses sensitive personal data, whether it could cause unfair discrimination or exclusion, and whether its output could significantly affect people’s rights or restrict their access to opportunities.
Bias and fairness testing
Then, test the system’s output for biased results. There are several ways to do this: conduct statistical audits, run test cases, or engage an external reviewer. For example, a recruitment tool could be tested with dummy CVs representing various demographic groups.
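One widely used statistical check is the “four-fifths” (disparate impact) rule: compare each group’s selection rate to that of the most-favoured group and flag any ratio below 0.8. The sketch below assumes you have already run your dummy CVs through the tool and recorded the outcomes; the figures shown are invented for illustration.

```python
# Hypothetical outcomes from running dummy CVs through a screening tool:
# (candidates shortlisted, candidates submitted) per demographic group.
outcomes = {
    "group_a": (45, 100),
    "group_b": (28, 100),
    "group_c": (40, 100),
}

# Selection rate per group.
rates = {group: shortlisted / submitted
         for group, (shortlisted, submitted) in outcomes.items()}
best_rate = max(rates.values())

# Four-fifths rule: flag any group whose rate is under 80% of the best group's rate.
for group, rate in rates.items():
    ratio = rate / best_rate
    status = "FLAG: possible adverse impact" if ratio < 0.8 else "ok"
    print(f"{group}: selection rate {rate:.0%}, ratio {ratio:.2f} -> {status}")
```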
Data governance and security
Make sure any data you collect is anonymised or pseudonymised where possible. Review access controls and revise if necessary. The default network access in place before the introduction of AI-assisted tools may no longer be appropriate. Look at data storage methods and consider how data is transmitted and processed. Make sure data is used according to the consent obtained. If you collect or process personal data, with or without AI, there should already be robust processes in place to comply with POPIA.
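Pseudonymisation can be as simple as replacing direct identifiers with a keyed hash before data reaches the AI tool, so records remain linkable internally but are not directly identifying if leaked. A minimal sketch using Python’s standard library follows; the field names are hypothetical, and a production system would also need proper key management.

```python
import hmac
import hashlib

# The secret key must be stored securely (e.g. in a secrets manager), never in source code.
SECRET_KEY = b"replace-with-a-securely-stored-key"

def pseudonymise(identifier: str) -> str:
    """Replace a direct identifier with a keyed hash (HMAC-SHA256).

    The same input always yields the same token, so records stay linkable,
    but the original value cannot be recovered without the key.
    """
    return hmac.new(SECRET_KEY, identifier.encode("utf-8"), hashlib.sha256).hexdigest()

# Hypothetical record being prepared for an AI tool.
record = {"customer_id": "ID-8301235678", "complaint_text": "My order arrived damaged."}
safe_record = {**record, "customer_id": pseudonymise(record["customer_id"])}
print(safe_record)
```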
Explainability and accountability
This component of the assessment addresses the lack of transparency described above. Can decisions reached by AI, or with AI assistance, be explained to customers or auditors? If the rationale behind decisions is vague, you may rely on inaccurate or inappropriate information for major business decisions, or you may face consumer backlash. Who carries ultimate responsibility for these decisions? There should be human involvement in any high-stakes decision made by AI, and an accountable senior manager. You should have policies that provide for review of AI-based decisions and facilitate human override, and employees who use AI systems should be trained to recognise when to step in.
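One practical way to implement human override is a routing rule: automated decisions stand only when the system is confident and the outcome is not adverse to the person concerned; everything else goes to a named human reviewer. The sketch below illustrates the pattern; the threshold and field names are assumptions to be set by your own policy.

```python
CONFIDENCE_THRESHOLD = 0.90  # policy choice: below this, a human must review

def route_decision(ai_decision: str, confidence: float, adverse_to_subject: bool) -> str:
    """Decide whether an AI output can stand or must go to a human reviewer.

    Adverse or low-confidence outcomes are never finalised automatically.
    """
    if adverse_to_subject or confidence < CONFIDENCE_THRESHOLD:
        return "ESCALATE: route to accountable human reviewer"
    return f"ACCEPT (logged for periodic audit): {ai_decision}"

# Hypothetical examples.
print(route_decision("approve loan", confidence=0.97, adverse_to_subject=False))
print(route_decision("decline loan", confidence=0.97, adverse_to_subject=True))
print(route_decision("approve loan", confidence=0.62, adverse_to_subject=False))
```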
Stakeholder impact analysis
An effective stakeholder engagement strategy considers the needs and expectations of each stakeholder group and the business response to those needs. An AI risk assessment takes this one step further and looks at how each AI system affects each stakeholder group, including customers, employees, partners and the general public. If the business is in a sector such as health, employment or financial services, this step is particularly critical.
Risk scoring and mitigation planning
Once you have thoroughly identified the risks relevant to your business, assign a score to each risk based on the likelihood and severity of harm. This is similar to other risk analyses you have probably carried out for your business, except that instead of looking at the impact on the organisation, it considers the probability and severity of harm to other people. Place each risk in the appropriate quadrant of a matrix like the one below. Once you have this overview, you can design your mitigation strategy.
[Risk matrix: likelihood vs. severity of harm. Source: www.bloomberglaw.com]
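If you prefer to keep the scoring reproducible, the same matrix logic can be expressed in a few lines of code. The sketch below scores each risk as likelihood × severity on 1-to-5 scales and assigns it to a quadrant; the risks listed and the band thresholds are illustrative assumptions, not a standard.

```python
# Hypothetical risks scored on 1-5 scales for likelihood and severity of harm.
risks = [
    ("Chatbot reveals personal data", 2, 4),
    ("Recruitment tool shows demographic bias", 3, 5),
    ("Chatbot gives wrong product instructions", 4, 2),
]

def band(likelihood: int, severity: int) -> str:
    """Map a likelihood/severity pair to a quadrant of a simple 2x2 matrix."""
    high_l = likelihood >= 3
    high_s = severity >= 3
    if high_l and high_s:
        return "high likelihood / high severity -- mitigate first"
    if high_s:
        return "low likelihood / high severity -- plan contingencies"
    if high_l:
        return "high likelihood / low severity -- monitor and reduce"
    return "low likelihood / low severity -- accept and review"

# Rank risks by combined score, highest first.
for name, likelihood, severity in sorted(risks, key=lambda r: r[1] * r[2], reverse=True):
    print(f"{name}: score {likelihood * severity} ({band(likelihood, severity)})")
```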
Ongoing monitoring and review
Lastly, plan to review the matrix and your risk mitigation measures at regular intervals, as AI tools are evolving at a dizzying rate. Ensure you have suitable feedback loops in place, including technical audits, test cases and user feedback, to identify unexpected outcomes or deteriorating performance.
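A feedback loop can be as simple as re-running the same fairness or accuracy checks on each month’s live data and alerting when results drift from the baseline established at assessment time. The sketch below illustrates the idea; the metric, baseline and tolerance are placeholders for whatever your own assessment established.

```python
# Baseline established during the initial risk assessment (hypothetical figures).
BASELINE_SELECTION_RATE = 0.42
TOLERANCE = 0.05  # how far the live metric may drift before someone investigates

def check_drift(month: str, live_rate: float) -> None:
    """Compare a live metric against its assessment-time baseline."""
    drift = abs(live_rate - BASELINE_SELECTION_RATE)
    if drift > TOLERANCE:
        print(f"{month}: rate {live_rate:.0%} drifted {drift:.0%} from baseline -- review required")
    else:
        print(f"{month}: rate {live_rate:.0%} within tolerance")

# Hypothetical monthly figures fed in from audit logs or user feedback.
for month, rate in [("Jan", 0.43), ("Feb", 0.40), ("Mar", 0.33)]:
    check_drift(month, rate)
```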
Who can carry out an AI risk assessment?
As with everything else in the field of AI, the skills base is developing rapidly. IT security providers, auditors and risk management professionals may offer AI risk assessment as part of their service suite, but as yet South Africa has few AI risk specialists. A multidisciplinary team is therefore best suited to carry out a comprehensive AI risk assessment. The team might include:
- Data scientists, for technical evaluation of the model
- Legal and compliance experts, to ensure alignment with data protection, employment equity and human rights legislation
- Ethics experts, to evaluate societal or reputational risks
- Business unit leaders, to review operational suitability of the model and ensure it is fit for purpose in the context
- HR or customer service representatives as appropriate, according to how the tool is used
SD Law can help
As AI systems become integral to business operations, managing their risks must become a core organisational capability. Conducting an AI risk assessment will position your organisation as a responsible, forward-looking business.
At SD Law, we embrace cutting-edge technology like AI as part of our vision to be a modern, client-driven law firm. We do not claim to be experts in AI risk assessment, but we can form part of a multidisciplinary team to help you protect your reputation, build customer trust, and prepare for the regulation that is on the horizon. Contact Simon on 086 099 5146 or email sdippenaar@sdlaw.co.za for a confidential discussion.
The information on this website is provided to assist the reader with a general understanding of the law. While we believe the information to be factually accurate, and have taken care in our preparation of these pages, these articles cannot and do not take individual circumstances into account and are not a substitute for personal legal advice. If you have a legal matter that concerns you, please consult a qualified attorney. Simon Dippenaar & Associates takes no responsibility for any action you may take as a result of reading the information contained herein (or the consequences thereof), in the absence of professional legal advice.