Bletchley Declaration – saving the world or serving no purpose?

What the AI Safety Summit and the Bletchley Declaration mean for the future of AI

Key Points:

  • Generative AI and large language models (LLMs):
    • Development of generative AI and the use of LLMs like ChatGPT have brought AI into the public spotlight
  • AI risks include:
    • Factual error (e.g., “hallucinations”, or made-up sources)
    • Disinformation and manipulation (use of fake news to influence public opinion)
    • Discrimination and bias (unfair racial or gender profiling, for example)
    • Existential risks (i.e., in the wrong hands, AI could threaten our very existence through the deployment of nuclear weapons or the creation of biological agents)
  • Current state of regulation:
    • National-level regulations exist: the EU is working on the AI Act, the US has published an Executive Order on Safe AI, and the UK takes a light-touch approach
    • South Africa lacks specific AI legislation but frameworks are under discussion
  • Global nature of AI threats and AI Safety Summit:
    • AI threats transcend national borders, necessitating international cooperation
    • The AI Safety Summit convened global leaders, industry players, and academic experts
  • Bletchley Declaration:
    • The Bletchley Declaration, which emerged from the summit and was signed by 28 countries, emphasises global efforts for safe and responsible AI development
    • It calls for inclusive international cooperation to manage AI risks collectively
    • The Bletchley Declaration recognises the collective responsibility of developers to ensure the safety of powerful AI systems

“Never trust anything that can think for itself if you can’t see where it keeps its brain.” Harry Potter fans might recognise this advice, given by Mr. Weasley to his daughter Ginny at the end of “Harry Potter and the Chamber of Secrets”. JK Rowling had magic rather than AI in mind when she wrote the Harry Potter series in the 1990s, but artificial intelligence has been around since the 1950s. Alan Turing, the celebrated mathematician and computer scientist, wrote “Computing Machinery and Intelligence” in 1950, in which he proposed a test of machine intelligence. But it is the recent development of generative AI and the use of large language models (LLMs) like ChatGPT and Bard that has brought AI into the public spotlight.

Generative AI differs from “traditional” AI, which is predominantly analytical. Traditional AI analyses existing data to make predictions and automate certain processes. Generative AI, on the other hand, learns from data and generates new content. When you ask ChatGPT a question, it predicts the next word, sentence or paragraph that best matches your query, based on its training data. Because it was trained on human-written text, its answers sound like human conversation.
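To make the idea of next-word prediction concrete, here is a minimal, illustrative sketch in Python. It is not how ChatGPT actually works (real LLMs use neural networks with billions of parameters); it simply shows the underlying principle of predicting the next word from patterns in training data. The toy corpus and the function name predict_next are our own invention for illustration.

    from collections import Counter, defaultdict

    # Toy training corpus (hypothetical, for illustration only).
    corpus = (
        "the court ruled in favour of the plaintiff . "
        "the court ruled against the defendant . "
        "the court adjourned for the day ."
    ).split()

    # Count how often each word follows each preceding word.
    follow_counts = defaultdict(Counter)
    for prev_word, next_word in zip(corpus, corpus[1:]):
        follow_counts[prev_word][next_word] += 1

    def predict_next(word):
        """Return the word most often seen after 'word' in the training data."""
        candidates = follow_counts.get(word)
        if not candidates:
            return "<unknown>"
        return candidates.most_common(1)[0][0]

    print(predict_next("court"))  # prints "ruled": seen twice, vs "adjourned" once

A real LLM does the same thing at vastly greater scale and sophistication, weighing the entire preceding conversation rather than a single word. Crucially, in both the toy version and the real one, the output reflects patterns in the training data, not verified facts, which is why the risks below matter.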

AI has massive potential to improve quality of life through better education, healthcare, even climate science. But, like any technology, it can also be used for harm. Technology is agnostic; it is the use humans make of it that determines the outcome.

AI risks

When ChatGPT launched a year ago, there were soon calls from tech leaders like Elon Musk to pause the development of generative AI systems to give the experts time to assess the risks. However, by July Musk said a pause was no longer realistic. (He recently launched his own competitor to ChatGPT called Grok.) Despite this, Musk has called AI “one of the biggest threats to humanity”. Others, including Kamala Harris, the Vice-President of the US, have echoed those sentiments. What are the risks?

Factual error

“The greatest enemy of knowledge is not ignorance, it is the illusion of knowledge.” Stephen Hawking

ChatGPT does not verify its sources. There are many examples of it citing publications that don’t exist but could plausibly have been written by the alleged author, because the subject is one the author has published on. ChatGPT doesn’t know the difference. The term “hallucination” is used to describe this phenomenon. If you use generative AI for research or personal growth, confirm the legitimacy of the content yourself and check the references. AI should be used extremely cautiously for learning new things. Otherwise, you may have the illusion of knowledge, but not knowledge itself.

Disinformation and manipulation

First, a quick explainer about misinformation vs. disinformation: misinformation is simply false or inaccurate information. By contrast, disinformation is false or misleading information put out deliberately to deceive, often in pursuit of a political objective. It is used to influence public opinion. According to research from Stanford University, “AI systems are being used in the service of disinformation on the internet, giving them the potential to become a threat to democracy…” The disinformation may not be exclusively verbal. Deepfakes – heavily manipulated, AI-generated images such as the one of Pope Francis wearing a white puffer coat – are also used to spread fake news and gain political advantage. With a presidential election looming in the US and Donald Trump vying for the Republican nomination – and leading, despite facing criminal charges – this is a frightening prospect. Nearer to home, South Africans will also be going to the polls next year in a general election. Our political parties are no strangers to corruption and dishonesty. Make sure you get your news from trustworthy sources and don’t rely on social media as your authority. Be on the lookout for disinformation.

Discrimination and bias

AI systems can inadvertently perpetuate racial and/or social bias if the training data contains bias, i.e., the system is not trained on sufficiently diverse data. We’ve written before about the potential for racial profiling and discrimination with AI systems used for facial recognition, sometimes leading to wrongful arrest. Gender and racial bias has also been seen in AI systems used for employee recruitment. More sinisterly, deepfake photos have been used to abuse women.

Existential risks

If this sounds extreme, it’s because it is. Existential risks refer to the potential for activity that threatens human existence. In the wrong hands, AI could – theoretically – be used to deploy nuclear weapons or create biological agents.

These doomsday scenarios may not have a high degree of probability, but they are sufficiently worrying to have motivated the UK prime minister, Rishi Sunak, to convene an AI Safety Summit at Bletchley Park in England, the site where Alan Turing famously broke the Enigma code during World War II. The summit, held on 1-2 November, brought together global political, technology and civil society leaders and researchers to discuss the riskiest uses of AI and to develop a strategy for managing the risks presented by recent advances in AI, including how they can be mitigated through internationally coordinated action.

Current state of regulation – at home and abroad

Current regulation, what little there is, exists at national levels. The EU, which is an economic and political union of 27 countries with the capacity to make laws governing all member states, has a track record of implementing stricter rules on the tech industry compared to other regions. It has been working towards passing the AI Act, which would be the first AI law in the West. This act would classify AI systems according to risk and implement compliance standards aligned to the various levels of risk. The US has allowed the industry to regulate itself until now, but has just published the US Executive Order on Safe, Secure and Trustworthy Artificial Intelligence. The UK has taken a light-touch approach to regulation, concerned about stifling innovation. China is developing policies that will seek to balance state control of the domestic AI sector with its desire to be competitive globally.

Meanwhile, here in South Africa, and in most of Africa, we have no legislation or regulation specifically governing AI, though certain aspects of AI, such as privacy, are governed by existing laws (e.g., POPIA). The Presidential Commission on the Fourth Industrial Revolution has recommended the development of policies that will empower stakeholders to use technology responsibly, with a focus on data privacy, data protection laws and digital taxation. Principles for the ethical development of AI have emerged, but these have not yet been enshrined in legislation. AI researchers argue that our lack of a regulatory framework is indicative of a wider problem with technology and policy formulation. Africa also has sociotechnical concerns that Europe and the US do not have to contend with (or at least not on the same scale): our creaking infrastructure and the digital divide. There is a call for South Africa and other African countries to engage with diverse stakeholders to craft a comprehensive legal framework, with local ethical considerations at its heart.

Global nature of AI threats and need for international cooperation

However, regulation at national level is inadequate on its own for a technology that is oblivious to juristic borders. And so the AI Safety Summit was convened. Both industry and governments recognise that an international consensus is needed. The statement that emerged from the summit, the Bletchley Declaration, declares: “The risks arising from AI are inherently international in nature, and so are best addressed through international cooperation. We resolve to work together in an inclusive manner to ensure human-centric, trustworthy, and responsible AI.”

The summit was attended by 28 countries, major multilateral organisations such as the United Nations and the Organisation for Economic Co-operation and Development (OECD), representatives of academia and civil society, and a long list of industry players, such as OpenAI (creators of ChatGPT), Microsoft, Google, Amazon and others. South Africa was not represented. From the African continent only Kenya, Nigeria and Rwanda were in attendance, though the African Commission on Human and Peoples’ Rights was there. China was represented, though there were some concerns over the trustworthiness of China’s involvement.

The Bletchley Declaration

The focus of the summit was on the most dangerous risks posed by AI, defined as those at the “frontier” of general-purpose AI’s capabilities, such as cutting-edge large language models, as well as some narrow AI with specific dangerous capabilities, such as bioengineering. The two categories of risk within the scope of the summit were misuse risks (e.g., biological or cyber-attacks, critical system interference) and loss of control risks (e.g., advanced systems drifting out of alignment with human values and intentions).

The outcome of the summit was the Bletchley Declaration, signed by 28 countries (though the Declaration was actually published at the outset of the summit, not at its conclusion). The signatories have agreed to a joint global effort to ensure AI is developed and deployed in a safe, responsible way for the benefit of the global community, in response to the urgent need to understand and collectively manage potential risks. The Declaration also acknowledges that “those developing these unusually powerful and potentially dangerous frontier AI capabilities have a particular responsibility for ensuring the safety of these systems, including by implementing systems to test them and other appropriate measures.” The collective responsibility undertaken by the signatories is in recognition of the fact that no single country can address the risks posed by AI on its own. Global cooperation is required to ensure safe development and build public trust.

Next steps

The Bletchley Declaration is a statement of mission and purpose. It does not contain specifics on what this global cooperation will look like. However, it is not a token effort. A further mini-summit is scheduled for South Korea six months from now, followed by a second full summit in France in a year’s time.

Does an international declaration do away with the need for domestic legislation? In our view, it does not. Every jurisdiction has the right and the responsibility to protect its own citizens, and every country has its own contextual matters to address, as we discussed above. However, a global statement of intent provides structure and core values on which a legislature, such as our Parliament, can build its own regulatory framework, knowing that we are in lockstep with the global community on a matter that impacts all humanity, regardless of nationality.

For more information

For any questions on AI in the workplace or data privacy, contact attorney Simon Dippenaar on 086 099 5146 or email sdippenaar@sdlaw.co.za.

Disclaimer

The information on this website is provided to assist the reader with a general understanding of the law. While we believe the information to be factually accurate, and have taken care in our preparation of these pages, these articles cannot and do not take individual circumstances into account and are not a substitute for personal legal advice. If you have a legal matter that concerns you, please consult a qualified attorney. Simon Dippenaar & Associates takes no responsibility for any action you may take as a result of reading the information contained herein (or the consequences thereof), in the absence of professional legal advice.
