Is AI a blessing or a curse?
According to Reuters, ChatGPT is estimated to have reached 100 million monthly active users in January, two months after launch, making it the fastest-growing consumer application in history. Analytics firm Similarweb estimates that ChatGPT had an average of 13 million unique visitors per day in January, more than double the December figure. Compare that with other popular platforms: it took TikTok nine months to reach 100 million users, and Instagram two and a half years. Growth has since slowed somewhat: ChatGPT recorded about 266 million visits in December and ended April with c. 1.76 billion visits, but monthly growth fell to 12.6% in April, compared with 131.6% in January, 62.5% in February, and 55.8% in March.
Unless you are a resolute Luddite, you’ve probably dabbled with ChatGPT, if only to see what all the fuss is about. Is it a search engine on steroids, a passing fad, or a resource that will revolutionise education and business? Is it a force for good, a dangerous threat…or both? Whatever your view, it has rapidly become an integral part of 21st-century life. What are the ethical considerations surrounding the use of ChatGPT and artificial intelligence (AI)?
Defining terms
ChatGPT makes use of AI, but the two are not the same. AI encompasses much more than chatbots. Although AI has become widespread in the last few years, many people are not fully aware of the role it plays in their lives. Because of the visibility of ChatGPT, it is often conflated or confused with AI. But there are many types of AI, of which ChatGPT uses just one. Here’s an overview:
- Narrow AI: AI systems that are designed to perform specific tasks or solve specific problems, such as image recognition, voice assistants, or recommendation systems. Apple’s Siri and Amazon’s Alexa voice assistants are examples of narrow AI.
- Machine Learning: a subset of AI that focuses on algorithms and models that can learn from data and improve performance without explicit programming. Machine learning can automatically detect patterns, make predictions, and adapt its behaviour based on the training data it receives. The “personalised” recommendations you see on Netflix or Spotify are examples of machine learning (see the short sketch after this list for the difference between explicit programming and learning from data).
- Deep Learning: a subfield of machine learning that uses artificial neural networks with multiple layers (deep neural networks) to learn and represent complex patterns and relationships in data. Examples include sophisticated image recognition and classification, natural language processing, and speech recognition. ChatGPT and Google Translate are both built on deep learning’s natural language processing capabilities.
- Expert Systems: AI programs designed to mimic the problem-solving and decision-making abilities of human experts in specific fields. These systems use a knowledge base of rules to provide expert-level advice or solutions to specific problems, and are commonly used in medical diagnosis.
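To make the distinction between explicit programming and machine learning concrete, here is a minimal, purely illustrative Python sketch. The spam example, the data, and the numbers are invented for illustration and do not come from any real system.

```python
# Purely illustrative sketch: a hand-written rule versus a rule learned from data.
# The "spam" example and all numbers are invented.

# Explicit programming (conventional software): the programmer writes the rule.
def is_spam_rule(message: str) -> bool:
    return "win a prize" in message.lower()

# Machine learning, in its simplest possible form: the rule (here, a threshold on
# the number of exclamation marks) is derived from labelled examples instead.
training_data = [
    (3, True), (4, True), (5, True),     # many exclamation marks -> labelled spam
    (0, False), (1, False), (0, False),  # few exclamation marks -> labelled not spam
]

def learn_threshold(examples):
    # Choose the threshold that classifies the most training examples correctly.
    best_threshold, best_correct = 0, -1
    for threshold in range(0, 7):
        correct = sum((count >= threshold) == label for count, label in examples)
        if correct > best_correct:
            best_threshold, best_correct = threshold, correct
    return best_threshold

THRESHOLD = learn_threshold(training_data)

def is_spam_learned(message: str) -> bool:
    return message.count("!") >= THRESHOLD

print(is_spam_rule("Win a prize today!"))     # True: matches the hand-written rule
print(is_spam_learned("Act now!!! Free!!!"))  # True: exceeds the learned threshold
```

Real machine learning systems work with millions of examples and far more sophisticated models, but the principle is the same: the behaviour is derived from data rather than written by hand.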
Benefits and uses of AI
There is no doubt that AI can be put to good use. Reading or viewing recommendations based on your history and the choices of other like-minded consumers are harmless and can lead to discoveries you might not have otherwise made. Navigation systems like Google Maps and Waze make finding your way around a new city easier. Facial recognition on your cell phone gives you a reassuring level of security in a country where petty theft is widespread, and makes daily life convenient. ChatGPT can take the hassle out of drafting a letter of resignation or other minor writing tasks.
Used in health care, AI can help diagnose obscure or unusual conditions. A patient’s symptoms and other information are input into an expert system, which analyses the data and compares the symptoms against its stored knowledge. It factors in the frequency, severity, and duration of the symptoms to arrive at potential diagnoses, and can also suggest treatment options and further actions the clinician could take. The system indirectly gives the health care worker access to the expertise of specialists and the experience of thousands of patients – far more examples of the particular condition in front of them than they would ever see in their own practice. The expert system does not replace the clinician’s expertise or judgment but helps improve the accuracy of their diagnosis.
AI is also used to help radiologists read and interpret medical imaging. Given that delays in treatment are often due to a shortage of radiology resources, an AI-based “radiologist assistant” that supports routine reading and measurement tasks can help relieve some of the pressure on a hospital’s radiology department. By speeding up diagnosis and treatment, AI used in health care can shorten waiting times and enable more efficient patient care.
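For readers curious about the mechanics, here is a highly simplified Python sketch of the expert-system logic described above. The conditions, symptoms, and weightings are entirely invented and have no clinical meaning; a real system’s knowledge base is compiled and validated by specialists, and its output is a ranked list of possibilities for the clinician to weigh, not a verdict.

```python
# Purely illustrative sketch of how a diagnostic expert system reasons.
# The conditions, symptoms, and weights are invented and have no clinical validity.

KNOWLEDGE_BASE = {
    # condition: {symptom: weight reflecting how strongly it suggests that condition}
    "condition_a": {"fever": 2, "rash": 3, "joint_pain": 1},
    "condition_b": {"fever": 1, "cough": 3, "fatigue": 2},
    "condition_c": {"rash": 2, "fatigue": 1, "joint_pain": 3},
}

def rank_diagnoses(patient_symptoms):
    """Score every condition in the knowledge base against the patient's symptoms
    (each reported with a severity from 1 to 3) and return a ranked list."""
    scores = {}
    for condition, profile in KNOWLEDGE_BASE.items():
        scores[condition] = sum(
            profile[symptom] * severity
            for symptom, severity in patient_symptoms.items()
            if symptom in profile
        )
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Example: a patient with a severe rash, mild fever, and moderate joint pain.
patient = {"rash": 3, "fever": 1, "joint_pain": 2}
for condition, score in rank_diagnoses(patient):
    print(condition, score)
```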
Risks
Health care
But AI must be used with caution; there are limitations and risks. Staying with health care, research indicates that AI systems being developed to diagnose skin cancer risk being less accurate for people with dark skin. Few of the image databases that could be used to train such systems contain information on ethnicity or skin type, and those that do include very few images of people with dark skin. Charlotte Proby, a professor of dermatology, has said, “Failure to train AI tools using images from darker skin types may impact on their reliability for assessment of skin lesions in skin of colour.” This has huge implications in Africa, where the majority of people have darker skin and there is a wide range of skin tones. A study conducted during COVID-19 also showed that pulse oximetry overestimated blood oxygen saturation in Asian, Black and Hispanic patients, meaning essential treatment was delayed for these patients. Pulse oximeters themselves are medical devices, not AI, but the data they produce is collected and aggregated to inform deep learning. The light signals interact differently with different skin tones, affecting accuracy, and so the data collection – and the insight it provides into patient care – is flawed.
These biases are unintentional and can be corrected. But there is another risk with biomedical AI: the equipment and tools powered by AI are expensive. While they may increase efficiency in a health system under pressure, they are unlikely to be affordable or realistic in resource-constrained settings, such as South Africa or other African countries. Biomedical AI favours wealthier countries or private health care systems, where the costs are simply passed on to the patient. In South Africa, less than 20% of the population has private medical insurance. Our public health system is fragile and resource-poor. It is unlikely to be able to afford tools like the AI-Rad Companion, developed by Siemens Healthineers to augment the review of medical images. The end result will be greater inequality in global health.
US bias
ChatGPT was developed by a research organisation based in California. It claims to be “a digital entity that exists purely in the virtual space and does not have a physical presence or nationality”. However, because it has been trained on text data from the internet, and a disproportionate amount of that data emanates from the US, it is inevitable that ChatGPT has a US-centric bias. This is most obvious in its use of language, but there are more subtle ways that US culture and values may influence the responses ChatGPT gives to certain queries. It also means it is biased towards English.
We asked it, in French, how many languages it speaks. It replied that the model has a command of multiple languages, notably English, French, Spanish, German, Italian, Dutch, Portuguese, Chinese, Japanese, Korean, and others – a decidedly Euro-centric list with a smattering of languages representing the home of most modern manufacturing. When we asked it (in Italian) whether it speaks Italian, it said that it does, but advised us to verify the accuracy of any information supplied, because the model might contain errors or generate responses that are not entirely faithful! Then we asked it if it speaks Afrikaans. Here’s what it told us: “Yes…While its proficiency in Afrikaans may not be as extensive as its proficiency in widely spoken languages, it should still be able to understand and generate responses in Afrikaans to some extent. However, it’s worth noting that ChatGPT’s language abilities can vary, and it may perform better in languages for which it has been extensively trained or for which there is a larger amount of available training data.” So it’s pretty useless for Zulu or Swahili, serving only to alienate speakers of less widely spoken languages and deepening rather than closing the digital divide.
Misinformation, false sources
Both academics and journalists have discovered that ChatGPT cannot be relied upon for accuracy when it comes to its sources. A recent article in The Guardian tells the story of a journalist whose work was cited by a researcher. The reporter had no recollection of writing the article in question but agreed it was something he might have written. He went back through all his files and could find no record of it. It turned out the researcher had used ChatGPT: when asked for articles on the subject, it simply made them up. There is no Wizard of Oz behind a curtain churning out results for ChatGPT; AI is not human. This was not a deliberate attempt to mislead; it was simply the result of a huge bank of data joining dots that should not be joined. We’ve heard similar tales of academics being cited in journal articles that were never written, but could have been – in other words, the sources look credible until they are checked. In both of these cases, the “victims” were astute enough to unravel the trail of information. But how many people using ChatGPT stop to verify the source of the information?
A recent article in the New York Times (yes, we checked!) reports that, “Supplied with questions loaded with disinformation, it [ChatGPT] can produce convincing, clean variations of the content en masse within seconds, without disclosing its sources.” Researchers at NewsGuard, a company that tracks online misinformation, tested how it would respond to questions containing conspiracy theories and false narratives. The results, presented as news articles, essays and television scripts, were extremely troubling. NewsGuard’s co-CEO, Gordon Crovitz, said, “This tool is going to be the most powerful tool for spreading misinformation that has ever been on the internet. Crafting a new false narrative can now be done at dramatic scale, and much more frequently…”
Racial profiling and discrimination
Probably the biggest concern with AI is its potential for racial profiling and discrimination. The most visible example is facial recognition, but the discrimination is far more widespread than that. Dr. Monika Zalnieriute, of the Faculty of Law and Justice at UNSW Sydney, Australia, submitted a paper to the United Nations Human Rights Council spelling out how AI reinforces systemic racism. She said, “The emerging consensus is that facial recognition technologies are not ‘neutral’, but instead reinforce historical inequalities. For example, studies have shown that facial recognition technology performs poorly in relation to women, children, and people of colour.” In practice, this means people have been wrongly arrested because facial recognition software misidentified them. This happened to Robert Williams, a Black man in Detroit, USA, who was wrongfully arrested for theft from a store after facial recognition software identified him as the thief. Facial recognition, increasingly used in public surveillance, has the potential to improve public safety, but most facial recognition algorithms perform poorly at identifying anyone other than white men. Forbes Africa reported on a 2022 study in which AI-trained robots were programmed to scan blocks bearing people’s faces of different races and to designate which blocks showed criminals. The robots consistently labelled the blocks with Black faces as criminals.
AI is increasingly being used in the workplace. Algorithms can simplify a recruiter’s job, especially when there are hundreds of applicants for a single post. But researchers have discovered racial bias in these systems too. In a recent experiment, scientists asked specially programmed robots to scan blocks with people’s faces on them. The robots repeatedly responded to words like “homemaker” and “janitor” by choosing blocks showing women and people of colour.
AI’s potential to wipe out the human race has hit the headlines lately, though most experts dismiss the idea as preposterous. The European Union’s competition commissioner, Margrethe Vestager, believes that discrimination is a far bigger threat than the extinction of the human race, and far more likely. It isn’t just crime and employment that are affected: access to financial or social services can also be subject to AI profiling, and Vestager says “guardrails” are needed.
Proceed with caution, especially in the global south
Industry professionals, including the “father” of ChatGPT, Sam Altman, have voiced concerns about AI and warned about the dangers it poses. In March, more than 50,000 signatories, including Elon Musk, signed a letter urging an immediate pause in the development of “giant” AIs and calling for the creation of “robust AI governance systems”. AI has huge potential to improve quality of life, but there are many moral and ethical questions that must be answered before it gets out of hand. Most of the concerns are universal: accountability, bias, transparency (or the lack thereof), autonomy, socio-economic risks, and maleficence (the potential to use AI for harm).
However, here in South Africa, and in the global south generally, we face additional risks. The data and models are foreign and don’t necessarily reflect our culture or our concerns. There is a shortage of data sets that reflect local conditions. Socio-economic inequalities can be entrenched rather than lessened (some examples have been discussed in this article). Our policymakers and stakeholders are less informed than those in the global north and lack a deep understanding of AI, which puts us at risk of inadequate or absent policies to regulate and manage it. That is currently the case: there are no specific government positions or legal requirements governing AI in South Africa. Furthermore, our population has a low level of awareness of AI, so organisations using it are under little pressure to act ethically.
An institutional framework for AI in South Africa
With load shedding and the economic climate occupying most of the government’s – and the nation’s – bandwidth, regulating AI may seem like a minor issue. But if we don’t tame the beast now, we may find it becomes something we can’t control in the not-too-distant future. The Presidential Commission on the Fourth Industrial Revolution (4IR) was established in 2019, in acknowledgement of the 4IR’s disruptive potential. More recently, the Department of Communications and Digital Technologies, the University of Johannesburg, and the Tshwane University of Technology established the Artificial Intelligence Institute of South Africa. The institute will focus on research and development, as well as building implementation capabilities in AI. Government has asked it to develop solutions to South African and African challenges, and ethical questions will be at the heart of its role.
For more information
We at SD Law are extremely encouraged by this. We will be monitoring the upsurge in the use of AI in South Africa and following the development of a legal framework carefully. Meanwhile, if you have any questions on how AI might impact your constitutional rights, give Simon a call on 086 099 5146 or email sdippenaar@sdlaw.co.za.
This article was written by a human being. All sources cited have been checked.
The information on this website is provided to assist the reader with a general understanding of the law. While we believe the information to be factually accurate, and have taken care in our preparation of these pages, these articles cannot and do not take individual circumstances into account and are not a substitute for personal legal advice. If you have a legal matter that concerns you, please consult a qualified attorney. Simon Dippenaar & Associates takes no responsibility for any action you may take as a result of reading the information contained herein (or the consequences thereof), in the absence of professional legal advice.