New guidance for security in AI deployment and use
Are you aware of the newly published guidance from the NCSC and CISA relating to AI?
In this November 2023 article, our Chair Dave Cartwright discusses this interesting topic and provides his insight into the published guidelines.
27 November this year saw the publication of “Guidelines for secure AI [Artificial Intelligence] system development”, a collaboration between the UK’s National Cyber Security Centre (NCSC), the US Cybersecurity and Infrastructure Security Agency (CISA) and over 20 other national cyber agencies across the globe.
In many ways, the guidelines tell us a lot of stuff we already knew. Concepts like “acquire and maintain well-secured and well-documented hardware and software components”, “manage your technical debt”, “apply a holistic approach to assess the threats to your system” and “design your system for security as well as functionality and performance” are hardly rocket science – in fact they should be core to our whole approach to doing IT within our organisations.
But if we look past the obvious stuff, the new guidelines do make some valid points – as one would hope given the vast range of organisations that contributed to them. (Along with the national cyber agencies mentioned already, a lot of commercial entities were also part of the production process, not least Amazon, Google, IBM, Microsoft and OpenAI.)
One of my favourite points is that the authors put AI and ML into the same bucket. “We use AI to refer specifically to machine learning (ML) applications”, they say, and go on to define what they mean by ML. Always nice to start by telling people what you mean!
One of the most important things the guidelines remind us is that AI isn’t always the answer. A key requirement is that “you are confident that the task at hand is most appropriately addressed using AI”. How many people reading this have had the edict from on high – we need to do more with AI next year – with an undertone of “we actually don’t really understand what it is, but we keep reading that everyone’s doing it”? AI is like any technology – use it where it’s suitable, but don’t try to shoe-horn it in.
Getting back more to the security side of things, while many of the concepts that apply to AI systems are (as we’ve said) general security approaches, it can’t be denied that AI needs specific consideration in each case. Take incident management, for example: just as a generalist incident responder needs training and practice to make a decent fist of IT-specific incidents, so an IT incident responder should have some awareness and training around AI-specific incidents. The key thing to remember about AI is that what comes out of an AI system is less predictable (it’s been through a self-training mechanism that develops and modifies its behaviour as more data flows through) than what comes out of a traditional IT algorithm (which is largely deterministic and does what the developer told it to). So if an incident is going on, it’s harder to work back from what’s happening to what caused it to happen. The same concept applies across the other areas of IT: testing an AI system is harder than testing a “normal” IT system because outputs are harder to predict; AI is still relatively new, so developer skills are thin on the ground; and it’s trickier for the security team to get to grips with AI systems because, for them as for the rest of the IT team, both the concepts and the tech are new.
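To make that testing point a little more concrete, here’s a minimal sketch of my own (it isn’t taken from the guidelines, and all the function names and numbers are hypothetical): an exact-match assertion is fine for deterministic code, but for a model-like component whose output varies from run to run you end up testing statistical properties within an agreed tolerance instead.

```python
# Illustrative sketch only (not from the guidelines): why exact-match tests
# that suit deterministic code don't suit model-style components.
# All names and thresholds here are hypothetical.

import random
import statistics

def vat_inclusive(price: float) -> float:
    # Traditional, deterministic logic: same input, same output,
    # so an exact-match assertion is a perfectly good test.
    return round(price * 1.2, 2)

assert vat_inclusive(100.0) == 120.0

def risk_score(transaction_value: float) -> float:
    # Stand-in for an ML component: the output varies between calls,
    # so an exact-match assertion would fail intermittently.
    noise = random.gauss(0, 0.05)
    return max(0.0, min(1.0, transaction_value / 1000.0 + noise))

# Instead, run the component many times and assert that its behaviour
# stays within agreed bounds and tolerances.
scores = [risk_score(500.0) for _ in range(1000)]
assert all(0.0 <= s <= 1.0 for s in scores)
assert abs(statistics.mean(scores) - 0.5) < 0.02
```

The same shift in mindset carries over to incident response: when the “expected” output is a range of plausible answers rather than a fixed value, working back from a bad output to its cause is inherently harder.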
So, then: yes, the new guidelines for secure AI system development will tell you a lot of what you already knew. But they’re well worth a read, because they’ll definitely provoke a few thoughts and make you consider how AI might fit in your world (and how you’ll try to make sure what you make and/or deploy is secure). And if they don’t tell you everything you want to know, there’s a raft of links in there that will definitely help address that!