The legal implications of generative AI in the workplace – part 1


Chatbots are here to stay – understand their complexities

Do you think ChatGPT is just a glorified search engine? Or do you think AI is the future of work, for better or for worse? If you are worried that generative AI will put you out of a job, you can (probably) relax – at least in the short term. As for the long term…no one really knows how artificial intelligence will evolve and what that world might look like. But ChatGPT, Bard, and the many other generative AI systems are here to stay and are becoming an everyday part of the workplace. What are the legal implications of generative AI in the workplace? What do you need to be aware of, whatever field you work in?

This is a complex topic, and it’s not practical to cover all the implications in a single article. This is the first of a two-part exploration of generative AI in the workplace.

Why ChatGPT is more than just a search engine

In one sense, ChatGPT is a sophisticated search engine. You ask it a question, and it gives you an answer, drawn from a vast online dataset. Ask Google a question, and you get a list of results containing the keywords in your query, also drawn from online data. The order in which results appear is heavily influenced by paid-for placements and by effective search engine optimisation (SEO), a set of techniques website managers use to push results up the ranking. You then have to investigate those links, synthesise the information you have gathered, and interpret it to answer your question. ChatGPT does all that for you, and it delivers the output in a coherent narrative. It’s a search engine with superpowers.

But there is a key difference between the two. Google search results are transparent. You visit the websites in your search results, and the sources of information you ultimately use are traceable. You can validate your information by visiting multiple sites on the same topic. You can tell the difference between an industry authority and a blogger. For example, if you ask Google about a health issue, there will be tens of thousands of results, but you can easily distinguish the South African Department of Health, or the National Institutes of Health in the US, from a sponsored ad by a pharmaceutical company.

When you ask ChatGPT a question, the source of its information is not disclosed. ChatGPT is trained on a vast amount of data. Some of it is authoritative; some is not. ChatGPT does not know the difference. Furthermore, you cannot interrogate ChatGPT about its sources. Here’s an excerpt from an “interview” a writer carried out with ChatGPT:

Question: I’ve heard that when pressed for citations, you sometimes just make them up. And I have experienced that myself when I work with you. Why is that?

ChatGPT: As an AI language model developed by OpenAI, I don’t have the capability to perform independent research or verify the accuracy of information.

I provide answers based on the text I was trained on and do not have access to external sources.

Additionally, I was not designed to provide citations or perform fact-checking.

My main purpose is to generate human-like text based on the prompts given to me. If you need information with sources, I would recommend consulting reliable sources such as academic journals or credible news outlets.

What does this mean for your use of generative AI in your work, whatever line of work you are in? Let’s look at a few considerations.

Intellectual property

Is the content generated by AI systems your intellectual property? If AI is used to create original works, such as articles, designs or code, questions about copyright ownership and licensing may arise. This has already begun to happen in the US. The US Copyright Office has started to receive applications for copyright registration of works generated by AI. In one application for a visual work created entirely by AI, the Office refused registration because “the work contained no human authorship”. In another recent case, “the Office concluded that a graphic novel comprised of human-authored text combined with images generated by the AI service Midjourney constituted a copyrightable work, but that the individual images themselves could not be protected by copyright.” (emphasis ours)

To our knowledge, this has not yet been tested with the Companies and Intellectual Property Commission of South Africa, but if you want to claim work as your own, it’s best to make sure it is written, drawn, or designed by a human being!

Privacy and data protection

The Protection of Personal Information Act (POPIA) came into full effect on 30 June 2021. (It officially commenced on 1 July 2020 but there was a one-year grace period to allow organisations to prepare for compliance.) POPIA has four key purposes: 

  1. To give effect to the constitutional right to privacy, by safeguarding personal information.
  2. To regulate the manner in which personal information may be processed.
  3. To provide persons with rights and remedies to protect their personal information from processing that is not in accordance with the Act.
  4. To establish voluntary and compulsory measures to promote, enforce and fulfil the rights protected by the Act.

If AI systems process personal data in the workplace, there are privacy and data protection issues to be aware of. If you use generative AI systems to process personal data of employees, such as emails, chat logs, or documents, you need to ensure compliance with POPIA. This involves informing employees about the processing of their personal data, providing transparency on the purposes of the AI system, and implementing appropriate data protection safeguards.

POPIA requires that personal information generated via AI be processed lawfully and for a lawful purpose. There must be a legitimate basis for processing the data, such as consent, contractual necessity, or legal obligation. Furthermore, you must have appropriate security measures in place to safeguard personal information against unauthorised access, loss, or destruction, which holds true for all personal information you hold, not only that generated by AI.

If any personal information generated by AI is transferred outside of South Africa, you must ensure protection of the data during the transfer, as you would with human-generated data. POPIA provisions apply equally to AI and other data, but it may take a bit more effort to ensure your AI data is properly protected and you are in full compliance with the Act.

Employment law

Employment law in South Africa consists of multiple pieces of legislation, all of which employers must comply with. These include the Basic Conditions of Employment Act, Labour Relations Act, Employment Equity Act, and more. Inevitably, the use of AI in the workplace raises complex compliance questions, as each Act has its own requirements. Here are some of the factors to consider when using AI:

  1. Training and skills development: If you intend to embrace generative AI and embed it in your way of working, you need to provide training and upskilling opportunities to employees so they can adapt to the use of AI in their work. It is a good idea to carry out an impact assessment on employee skills and identify areas for development. Relevant training can then be designed and rolled out.
  2. Employment contracts and policies: It’s a massive undertaking, but there is no escaping the need to review employment contracts, policies, and procedures to address the introduction of generative AI systems. You may need to update confidentiality provisions, intellectual property clauses, and acceptable use policies as appropriate for the AI technology you use.
  3. Anti-discrimination and fairness: The use of generative AI systems must not result in discriminatory practices or biased outcomes. As we saw in AI and ChatGPT – what are the ethical considerations?, AI algorithms have been shown to have potential for racial and gender profiling and other types of discrimination. This is equally relevant to the use of AI internally and in your recruitment processes.
  4. Retrenchment: If the implementation of generative AI leads to workforce restructuring or changes in job requirements, be aware of your legal obligations, including the requirements for fair retrenchment processes, compensation packages, and potential redeployment opportunities.
  5. Workplace monitoring and privacy: In addition to the privacy requirements of POPIA, you must comply with the provisions of the Regulation of Interception of Communications and Provision of Communication-Related Information Act (RICA) if generative AI systems involve the monitoring of employee activities, such as analysing communications or tracking work progress. You must make employees aware of the extent and purposes of this monitoring.

No specific AI laws yet but common sense must prevail

At the time of writing, South Africa, unlike some other countries, does not have comprehensive legislation governing the use of AI and generative language tools like ChatGPT. The UK has published a government White Paper on AI – “A pro-innovation approach to AI regulation” (March 2023) – and Italy banned ChatGPT on 31 March 2023 over concerns about data privacy and misinformation. However, in South Africa we do have a robust Constitution and legislation that is modern and progressive in areas such as labour, gender equality, and data protection. In the absence of a specific legislative framework, you must ensure your use of generative AI does not breach existing laws. Although areas such as copyright and intellectual property have not yet been tested, in many areas it is not difficult to apply current legislation to the use of AI. To use POPIA as an example, personal information and privacy must be protected, whether the data is created and/or used by humans or machines. In terms of employment law, if AI replaces an employee’s entire role, the employer must follow the retrenchment procedure set out in the Labour Relations Act.


The landscape around AI is changing all the time. If you are unsure as to whether your business is using generative AI lawfully and ethically, you should consult a legal professional. SD Law is monitoring the use of AI in South Africa closely. Contact Simon today on 086 099 5146 or email


The information on this website is provided to assist the reader with a general understanding of the law. While we believe the information to be factually accurate, and have taken care in our preparation of these pages, these articles cannot and do not take individual circumstances into account and are not a substitute for personal legal advice. If you have a legal matter that concerns you, please consult a qualified attorney. Simon Dippenaar & Associates takes no responsibility for any action you may take as a result of reading the information contained herein (or the consequences thereof), in the absence of professional legal advice.
