In this blog post, you will hear directly from Corporate Vice President and Deputy Chief Information Security Officer (CISO) for AI, Yonatan Zunger, about how to build a plan to deploy AI safely. This post is part of a new ongoing series in which our Deputy CISOs share their thoughts on what is most important in their respective domains. In this series you will get practical advice, forward-looking commentary on where the industry is going, things you should stop doing, and more.
As Microsoft’s Deputy CISO for AI, my day job is to think about all the things that can go wrong with AI. This, as you can imagine, is a pretty long list. But despite that, we’ve been able to successfully develop, deploy, and use a wide range of generative AI products in the past few years, and see significant real value from them. If you’re reading this, you’ve likely been asked to do something similar in your own organization—to develop AI systems for your own use, or deploy ones that you’ve acquired from others. I’d like to share some of the most important ways that we think about prospective deployments, ensure we understand the risks, and have confidence that we have the right management plan in place.
This is way more than can fit into a single blog post, so this post is just the introduction to a much wider set of resources. In this post, I’ll articulate the basic principles we use in our thinking. These principles are meant to be applicable far beyond Microsoft, and indeed most of them scope far beyond AI; they’re really methods for safely adopting any new technology. Because principles on their own can be abstract, I’m releasing this with a companion video in which I work through a detailed example, taking a hypothetical new AI app (a tool to help loan officers do their jobs at a bank) through this entire analysis process to see how it works.
We have even deeper resources coming soon that build on this content, intended to help teams and decision makers innovate safely. Meanwhile, if you want to learn about how Microsoft applies these ideas to safe AI deployment in more detail, you can learn about the various policies, processes, frameworks, and toolboxes we built for our own use on our Responsible AI site.
What does “deploying safely” mean? It doesn’t mean that nothing can go wrong; things can always go wrong. In a safe deployment, you understand as many of the things that can go wrong as possible and have a plan for them that gives you confidence that a failure won’t turn into a major incident. And you know that if a completely unexpected problem arises, you’re ready to respond to that as well.
It also means that you haven’t limited yourself to very specific kinds of problems, like security breaches or network failures, but are just as prepared for privacy failures, or people using the system in an unexpected way, or organizational impacts. After all, there’s no surer guarantee of disaster than a security team saying “that sounds like a privacy problem” while the privacy team says “that sounds like a security problem” and neither team dealing with it. As builders of systems, we need to think about the ways in which our systems might fail, and plan for all of those, where “the systems” include not just the individual bits of software, but the entire integrated system that they’re a part of, including the people who use them and how they’re used.
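To make that concrete, here is a minimal sketch of what a written record of those failure modes might look like. The structure and field names are illustrative assumptions, not a Microsoft template; the point is simply that every foreseeable failure gets a domain, a mitigation, a way to detect it, and an owner, and that the plan says what happens when something nobody foresaw occurs.

```python
from dataclasses import dataclass, field

@dataclass
class FailureMode:
    """One way the integrated system (software, people, process) could fail."""
    description: str   # e.g. "a user pastes confidential data into the prompt"
    domain: str        # "security", "privacy", "misuse", "organizational", ...
    mitigation: str    # what keeps this failure from becoming a major incident
    detection: str     # how we would notice it happening in production
    owner: str         # one accountable team, so it cannot fall between security and privacy

@dataclass
class SafetyPlan:
    """The written plan that a reviewer can evaluate before deployment."""
    system_name: str
    failure_modes: list[FailureMode] = field(default_factory=list)
    # The plan for the failures you did not foresee:
    incident_response: str = "on-call rotation, escalation path, and a way to disable the feature"
```

A register like this is deliberately boring; its value is that it forces every "that sounds like someone else's problem" conversation to end with a named owner and a written mitigation.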
These ideas probably sound familiar, because they’re the basics we learned at the start of our careers, and are the same concepts that underlie everything from the National Institute of Standards and Technology (NIST) Risk Management Framework to Site Reliability Engineering. If I had to state them as briefly as possible, the basic rules would be:

1. Understand everything that could go wrong, across every domain that matters, not just the one your team owns.
2. Treat the whole system, including the people who build, operate, and use it, as the thing you are making safe, not just the individual components.
3. Have a plan for every failure you can foresee, and be ready to respond to the ones you can’t.

You implement these three principles through a fourth one:

4. Write it all down as a safety plan that someone else can read, challenge, and approve.
If your role is to review systems and make sure they’re safe to deploy, that safety plan is the thing you should look at, and the question you need to ask is whether that plan covers all the things that might go wrong (including “how we’ll handle surprises”) and if the proposed solutions make sense. If you need to review many systems, as CISOs so often do, you’ll want your team to create standard forms, tools, and processes for these plans—that is, a governance standard, like Microsoft does for Responsible AI.
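As a sketch of what such a standard process can automate, the hypothetical helper below (building on the SafetyPlan structure sketched earlier; the required domain list is illustrative, not a Microsoft standard) turns the reviewer’s question into a checklist: which domains have no failure modes considered, which failure modes have no owner, and whether there is any plan for surprises.

```python
# Hypothetical reviewer-side check over the SafetyPlan sketch above.
REQUIRED_DOMAINS = {"security", "privacy", "misuse", "reliability", "organizational"}

def review(plan: "SafetyPlan") -> list[str]:
    """Return the gaps a reviewer should push back on; an empty list means the plan is complete."""
    findings = []
    covered = {fm.domain for fm in plan.failure_modes}
    for domain in sorted(REQUIRED_DOMAINS - covered):
        findings.append(f"No failure modes considered for the {domain} domain.")
    for fm in plan.failure_modes:
        if not fm.owner:
            findings.append(f"'{fm.description}' has no owner and will fall between teams.")
    if not plan.incident_response.strip():
        findings.append("No plan for handling problems nobody foresaw.")
    return findings
```

None of this replaces judgment about whether the proposed mitigations actually make sense, but it keeps the routine completeness questions from consuming review time.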
These first four rules aren’t specific to AI at all; these are general principles of safety engineering, and you could apply them to anything from ordinary cloud software deployments to planning a school field trip. The hard part that we’ll cover in later materials is how best to identify the ways things could go wrong (including when radically new technologies are involved) and build mitigation plans for them. The second rule will repeatedly prove to be the most important, as problems in one component are very often solved by changes in another component, and that includes the people.
When it comes to building AI systems, we’ve uncovered a few rules that are exceptionally useful. The most important thing we’ve learned is that error is an intrinsic part of how AI works; problems like hallucination or prompt injection are inherent, and if you need a system that deterministically gives the right answer all the time, AI is probably not the right tool for the job. However, you already know how to build reliable systems out of components that routinely err: they’re called “people” and we’ve been building systems out of them for millennia.
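As one illustration of that idea, the sketch below treats a generative model the way you would treat a capable but fallible colleague: its draft is checked by an independent, deterministic step, and anything that fails the check is escalated to a person. Everything here is hypothetical; draft_recommendation stands in for whatever model call your product makes, and the field names echo the loan-officer example from the companion video rather than any real system.

```python
def draft_recommendation(application: dict) -> dict:
    """Stand-in for a generative AI step; it may hallucinate or be manipulated."""
    raise NotImplementedError  # call your model of choice here

def passes_deterministic_checks(application: dict, rec: dict) -> bool:
    """Independent checks that do not trust the model's own output."""
    amount = rec.get("amount")
    return (
        rec.get("applicant_id") == application["applicant_id"]  # no mixed-up case files
        and isinstance(amount, (int, float))                    # a number, not prose
        and 0 < amount <= application["requested_amount"]       # never exceed what was asked for
        and bool(str(rec.get("justification", "")).strip())     # a human can audit the reasoning
    )

def recommend(application: dict) -> tuple[dict, str]:
    """Build a reliable step out of a fallible one: verify the work, escalate on doubt."""
    rec = draft_recommendation(application)
    if passes_deterministic_checks(application, rec):
        return rec, "ready for loan officer sign-off"
    return rec, "failed cross-checks; route to a human for full manual review"
```

The pattern is the same one organizations already use for human work: separation of duties, independent review, and a clear path for escalation.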
The possible errors that can happen in any analysis, recommendation, or decision-making step (human or AI) are:
We can combine these to add some AI-specific rules:
And some more rules that apply to any analysis, recommendation, or decision-making stage, whether it’s human or AI. (This similarity goes to the heart of Rule 5; it shouldn’t be surprising that humans and AI require similar mitigations!)
The deepest insight is: novel technologies like AI don’t fundamentally change the way we design for safety; they simply call on us to go back to basic principles and execute on them well.
You can find a detailed worked example of how to apply these ideas in this video. You can learn more about our responsible AI practices at our Responsible AI site, and about our best practices for avoiding overreliance in our Overreliance Framework.
To learn more about Microsoft Security solutions, visit our website. Bookmark the Security blog to keep up with our expert coverage on security matters. Also, follow us on LinkedIn (Microsoft Security) and X (@MSFTSecurity) for the latest news and updates on cybersecurity.