The UK has unveiled the world’s first global guidelines for making artificial intelligence (AI) tools ‘secure by design’.
The US, Singapore, Australia, Chile, Germany, Israel and Japan were among the 18 nations to sign the non-binding public agreement, which aims to secure AI systems against cyber attacks.
The document was developed by the UK’s National Cyber Security Centre (NCSC) and the US Cybersecurity and Infrastructure Security Agency (CISA), and endorsed by all G7 members. It was launched at an event in London attended by more than 100 industry, government and international partners, including Microsoft, the Alan Turing Institute, and cyber agencies from the US, Canada, Germany and the UK.
The guidelines set a ‘secure by design approach’, advising developers and organisations on the best practices for incorporating cyber security at every stage of the development process of AI algorithms. They focus on four key areas: secure design, secure development, secure deployment, and secure operation and maintenance.
“Security is not a postscript to development but a core requirement throughout,” NCSC CEO Lindy Cameron said at the launch.
The guidelines include advice such as the need to model threats to a company’s system, accounting for the risks that increase as a system scales and for future technological developments such as automated attacks. In addition, the document covers supply chain security, maintaining robust documentation, protecting assets and managing technical debt.
When it comes to secure development, the plan advises that developers ensure the protection of the infrastructure used to support an AI system, including access controls for APIs, models and data. Moreover, models should be released “responsibly” – only after they have been subjected to thorough security assessments, and with the most secure configuration set as the default for all users.
The final section covers how to secure AI systems after they have been deployed, advising that companies continuously monitor their systems, and the data entered into them, for signs of misuse.
However, the guidelines do not address other concerns regarding the use of AI-generated content or the data chosen for its training.
US secretary of homeland security Alejandro Mayorkas said: “We are at an inflection point in the development of AI, which may well be the most consequential technology of our time. Cyber security is key to building AI systems that are safe, secure, and trustworthy.
“The guidelines jointly issued today by CISA, NCSC and our other international partners provide a common-sense path to designing, developing, deploying and operating AI with cyber security at its core.”
The agreement is the latest in a series of initiatives – most of which provide general advice – pushed by governments around the world to ensure the ethical development of AI technologies. It follows US President Joe Biden’s signing of an executive order that would require AI developers to share the results of safety tests with the US government before they are released to the public. The order also directs agencies to set standards for that testing and address related chemical, biological, radiological, nuclear and cyber-security risks.
The Biden administration has been very vocal about its concerns over the rapid development of generative AI, and has even unveiled a Blueprint for an AI Bill of Rights, which outlines five protections internet users should have in the AI age.
The US and the UK were among the signatories of the Bletchley Declaration, described as a “landmark achievement” that signals a starting point in the conversations around the risks of AI technologies. The document has also been signed by representatives from the European Union and 28 countries, including China.
UK science and technology secretary Michelle Donelan positioned the new guidelines as cementing the UK’s role as “an international standard bearer on the safe use of AI”.
“Just weeks after we brought world leaders together at Bletchley Park to reach the first international agreement on safe and responsible AI, we are once again uniting nations and companies in this truly global effort,” she added.
Last month, Prime Minister Rishi Sunak revealed the UK will launch an AI safety institute, following a similar announcement from the US.