More points to ponder about the use of AI at work
In a recent article we looked at the legal implications of generative AI in the workplace. The topic is too extensive to cover adequately in a single article, so we promised a second instalment. Here we continue our investigation into the implications of using ChatGPT and other forms of generative AI in an occupational context.
In part 1, we looked at issues of intellectual property, i.e., who owns work created by AI; privacy and data protection, i.e., how to ensure you comply with POPIA in your use of data either generated or used by AI systems; and employment law, i.e., what factors must be considered with regard to the Basic Conditions of Employment Act, Labour Relations Act, etc., when using AI. In part 2 we examine discrimination and bias; accountability and liability; confidentiality; consumer protection; and regulatory compliance.
Discrimination and bias
In AI and ChatGPT – what are the ethical considerations? we looked at the potential for racial profiling and discrimination when AI systems are used for facial recognition, sometimes leading to wrongful arrest. In the workplace, AI algorithms may be used in recruitment to narrow down a wide field of applicants. But if the AI is trained on historical data made up mostly of male applicants, or applicants of a certain race, it will favour current applicants who fit those profiles. This actually happened at Amazon.com. The company created software to read CVs, in an attempt to reduce the time and cost of manually reviewing the tens of thousands of applications it receives every day. Amazon trained the system on the applications that had led to interviews over the previous 10 years, to teach it to recognise an “interview-worthy” application. Unfortunately, the historical data was massively skewed towards men, so the system in effect concluded “we don’t hire women”. Amazon recognised the problem and tried to rectify the bias, but couldn’t, so the system was never put into action. It remains a very good example of the risks involved.
Beyond recruitment, what problems might emerge from using AI in the actual work of the organisation? You may elect to use ChatGPT to generate internal memos or newsletters, or customer-facing content such as web pages. This could be an effective use of AI that frees up a junior member of staff for other work, such as research or continuous professional development, and AI can improve productivity through more efficient use of resources. However, if the data used to train an AI system is not representative of the population it is intended to serve, its output will be biased. Bias also derives from the values and assumptions of the developers. At this stage of its development, AI is still driven by humans; developers are largely male, and the main actors in the AI arena have a US bias and little understanding of the South African context. So the content that appears in your staff newsletter, company website, or customer chatbot may not resonate with – and may even offend – your staff or customers. Furthermore, if you are using AI for forecasting or decision-making, you need to validate your predictive models and make sure your decisions do not contain inadvertent unfairness toward, or preference for, particular populations.
Accountability and liability
If you discover that your use of AI has exposed you to a claim of discrimination, who is accountable? The answer is not simple. As a business owner, you are responsible for the tools you use in your business. But if a machine or software program you rely on delivers a faulty product or output, you would have recourse against the manufacturer or supplier. An AI system, by contrast, involves multiple actors: developers, data providers, and system operators. Each “actor” plays a role in how the AI is created, rolled out, and used. So who is responsible in the event of a problem?
The nature of AI means that it is being “trained” continuously, as new data is added. Therefore, outputs to the same query will change over time. If you rely on a solution generated by AI, which subsequently changes as the system evolves, where does the accountability lie for the actions performed by the AI system? As yet, the answers to this and the above question are not clear. Some deep learning AI systems are not fully controlled by humans. If an outcome is negative or even dangerous, liability may be difficult to attribute.
When you chat with ChatGPT, it may feel like you are chatting with a human. This is because the AI is trained on the way humans speak and interact. You may have noticed that when you ask ChatGPT to explain something, it sometimes replies with “Certainly!” Don’t mistake this for enthusiasm. AI has no emotional fabric or moral compass, which means it can act without malicious intent and still cause harm. If this happens, it may be extremely difficult to prove that the harmful consequences were never intended.
The issue of accountability and liability has not been settled in law in most jurisdictions yet, meaning there is uncertainty and ambiguity about legal responsibility for AI-generated content and actions. Proceed with caution.
Confidentiality and trade secrets
The issue of trade secrets and confidentiality is a potential hornet’s nest when it comes to AI. Lengthy articles abound on this topic, so we highlight just a few of the risks here. Firstly, if employees use generative AI such as ChatGPT in their work and input confidential or sensitive information as part of a prompt, there is a risk that the chatbot could reuse that information in responses to queries from outside the organisation. Remember, generative AI is continuously trained on the data it receives. Your company’s trade secrets could wind up in the public domain. Secondly, if you are creating an AI model, such as Amazon’s CV-reading software discussed above, you need to train the model on large datasets. That data may relate to employees or customers. In the event of unauthorised access, i.e., a data breach, you risk violating the privacy of the real people behind the data, and proprietary information or “trade secrets” could be exposed. Lastly, sabotage by unhappy employees or disgruntled ex-employees is not impossible. Access control to AI models and training data is critical to safeguard confidentiality and prevent unauthorised disclosure.
Clear policies and procedures need to be in place governing the use of AI in the workplace, whether that involves the use of existing tools like ChatGPT or the development of proprietary systems. Access, data sharing, data handling, data protection, and encryption should all be covered in standard operating procedures, and relevant contracts and non-disclosure agreements should be in place to protect your company’s sensitive information.
Consumer protection
If you use or plan to use AI-generated content in your marketing or customer interactions, there are a number of factors to bear in mind with regard to consumer protection. Currently, the regulatory regime in South Africa does not encompass AI. The Protection of Personal Information Act (POPIA) regulates data processing and protects our privacy, and the data discussed in the section on confidentiality is covered by POPIA. But the Consumer Protection Act (CPA) has not yet been amended to cater for ChatGPT and other AI systems. In Europe, by contrast, the European Commission proposed a regulation over two years ago that would lay down harmonised rules on artificial intelligence; it is possible this regulation will become applicable by the end of 2024. Regardless of legislation, responsible businesses want to treat their customers fairly. The absence of AI-specific laws actually makes this harder, as existing laws must be interpreted in an entirely new environment.
For example, consumers have the right to know when they are interacting with a “bot” rather than a human. You should disclose any use of AI to avoid misleading your customers. We’ve discussed the risk of bias and discrimination in AI algorithms. If you are using AI to inform your marketing strategy or campaign, be very sensitive to how it treats your customer base and look out for AI-generated segmentation based on demographics or customer behaviour. Offers to different segments may be perfectly legitimate, but they may also reinforce stereotypes or existing social biases.
AI can be a boon to the marketing department. It can help you create personalised marketing messages that would have been impossible a generation ago. But there is a risk of exploiting or manipulating customers through hyper-emotive messaging. Doing so may not be illegal, but it is unethical. Furthermore, if you are using existing customer data to create personalised offers, you should obtain explicit consent from your customers for this use of their data. They have the right to know how their data is being used and are entitled to opt out if they choose.
Finally, make sure there is recourse to a human being for customer support. Chatbots cannot answer every question and older customers in particular may become frustrated dealing with a “bot”. Provide a human means of redress for consumer concerns.
Regulatory compliance is not optional
There may not yet be targeted legislation governing the use of AI, but many industry-specific regulations – including financial, healthcare, and advertising standards – apply to AI-generated content in the same way they apply to content generated by your employees. AI is not above the law. Until regulatory and legislative frameworks evolve to accommodate AI specifically, you should ensure compliance with the existing frameworks that apply to you.
Seek professional legal advice
We are all coming to terms with the impact of AI in our lives. AI has huge potential for good, but its equally great potential for misuse must be managed. At SD Law we are monitoring the development of AI in South Africa closely and stand ready to advise you on your use of AI in the workplace. Contact Simon today on 086 099 5146 or email sdippenaar@sdlaw.co.za.
Further reading:
- The legal implications of generative AI in the workplace – part 1
- AI and ChatGPT – what are the ethical considerations?
- AI and women – friend or foe
The information on this website is provided to assist the reader with a general understanding of the law. While we believe the information to be factually accurate, and have taken care in our preparation of these pages, these articles cannot and do not take individual circumstances into account and are not a substitute for personal legal advice. If you have a legal matter that concerns you, please consult a qualified attorney. Simon Dippenaar & Associates takes no responsibility for any action you may take as a result of reading the information contained herein (or the consequences thereof), in the absence of professional legal advice.