Microsoft Community Insights

Episode 11 - Evolving AI Security with Sarah Young

May 25, 2024 Nicholas Chang

Stay ahead of the curve with our latest episode, where Sarah Young unveils a treasure trove of resources for those eager to deepen their grasp on AI security. Sarah's insights from her experiences at top tech events will guide you through the evolving landscape of AI security.

http://aka.ms/copilotl33tsp34k

Build sessions
BRK227 - Inside AI Security with Mark Russinovich
BRK225 - Secure your AI application transformation with Microsoft Security
BRK226 - Data Security Considerations for AI Adoption


Speaker 1:

Hello, and welcome to the Microsoft Community Insights Podcast, where we share insights and stories from community experts to stay up to date with Azure. I'm Nicholas, and I will be your host today. In this podcast we will dive into AI security, but before we get started, we want to remind you to subscribe to our podcast on social media so you never miss an episode, and it helps us reach more amazing people like yourself. Today we have a special guest, Sarah Young from Microsoft. Can you please introduce yourself?

Speaker 2:

Hello everybody. My name is Sarah Young. I'm a cloud security advocate at Microsoft, so I get to talk to lots of people about all different kinds of bits of security, because, of course, everybody has to do security nowadays, and I live in Melbourne, Australia.

Speaker 1:

Cool, yeah, so before we get started, our theme is AI security, so I want to ask you what is it and why is it important nowadays?

Speaker 2:

Well, I mean, obviously, I think probably everyone who's listening, who has some kind of interest in tech, will know that cybersecurity of any system is very important. But now there is a huge focus on AI, as there has been for the past year to 18 months, and, as with any new technology, bringing in AI is going to raise some new security challenges. It might also bring back some older ones, or bring some existing ones more into focus. In fact, I can tell you it's a combination of all of those things, and, of course, as we bring in new technologies, there are new security risks and things to consider. So everybody who is looking at implementing and adopting AI systems should be thinking about the security of them.

Speaker 1:

Okay, brilliant. What are some of those security risks that you mentioned?

Speaker 2:

So it depends on what kind of AI you're using, if it was, say, OpenAI.

Speaker 2:

Well, this is the thing: it depends on a lot of things, as it does with everything. But, more broadly speaking, something across the board you need to worry about is your data security. Now, data security isn't a new issue. It's something that, as an industry, we haven't traditionally done very well. Usually our focus has been on securing the platforms and the applications that use the data, rather than the data itself. But what AI does, or can do, is find data much more easily than a human can. So if your data hasn't been adequately protected, an AI system could find, let's say, your payroll. If you've got a payroll file somewhere and it doesn't have the right permissions on it, a human might still find it eventually, but it would take them a while, whereas an AI model could find it really, really quickly. And, of course, that would be a bad thing if it was data that shouldn't be seen by everybody. So I think that's probably one of the top concerns across the board, no matter what kind of AI system you're using. And then, if you're building your own AI system from the ground up, say you're using Azure OpenAI, all those traditional application security rules and principles still apply, plus a little bit extra for the AI side, some new bits. So if you're building an application that's using AI on top of, say, the OpenAI platform, then you need to think about your traditional application security: secure coding, and integrating into a database, a memory store, a key vault, whatever it is. All of those integrations need to be done in a secure manner. You should be using proper managed identities, not random hard-coded secrets. None of that is new. That has been the same for a long, long time, but it's still going to be important.
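To make that managed-identity point concrete, here is a minimal sketch of calling Azure OpenAI with Microsoft Entra ID credentials instead of a hard-coded API key. It assumes the azure-identity and openai Python packages; the endpoint, API version and deployment name are placeholders you would replace with your own.

```python
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

# Exchange the app's managed identity (or your developer credentials locally)
# for a token, rather than shipping a secret in code or config.
token_provider = get_bearer_token_provider(
    DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)

client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",  # placeholder
    api_version="2024-02-01",                                    # example version
    azure_ad_token_provider=token_provider,
)

response = client.chat.completions.create(
    model="<your-deployment-name>",  # placeholder deployment name
    messages=[{"role": "user", "content": "Hello"}],
)
print(response.choices[0].message.content)
```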

Speaker 2:

And then the other side of AI is the sort of new bits, and these are the bits that have probably had the most press and attention on them: some of the new AI-specific attacks we see, things like jailbreaking. Jailbreaking is when you put, let's call it, a creative prompt into an AI system or large language model and you manage to get the model to give you an answer it shouldn't be giving you. So, say, the large language model has been told: you mustn't give responses that are offensive, you mustn't give responses that are dangerous or violent or racist or sexist. If you can convince, and I say "convince" in inverted commas, the AI model to do the things its instructions have told it not to do, that is a jailbreak.

Speaker 2:

There are some other attacks, like model poisoning. That's when you deliberately throw a lot of incorrect data at a model, and I say "incorrect" again in inverted commas, but basically the model trains itself on a set of data that gives a skewed version of the world. There have been examples of this. Say you've got a general-knowledge large language model. If you started feeding it loads and loads of articles that said New Zealand is not a real country, New Zealand is fake, blah blah, and you kept doing it again and again and it was actually retraining, then eventually, when somebody asked it about New Zealand, it would go: ah, New Zealand is a fake country. Obviously that's a relatively harmless example, and it does take some time, but that is poisoning the model itself. So they are different attacks, and there are a few other ones, but those are probably the two that have had the most press. They are the new types of AI attacks and risks that we have to consider nowadays as well.

Speaker 1:

Okay, so you would just need to train the model to handle those risks, right?

Speaker 2:

No, no, that's the big challenge. As of the time we're recording this, there is no way that we have found; researchers are working on it and there have been some experimental things going on, but you cannot, for want of a better word, inoculate a model against being poisoned if you leave it open to training by everybody. So there's no magic tool.

Speaker 2:

The way that you prevent your model being poisoned is to lock down the model, keep it secure, and only let the people who are allowed to put training data into it do so. If anyone can put training data into it, the model is going to train itself on that data. The model isn't smart enough to know it's being confused and deliberately poisoned; we're not there yet. And this is what we're finding in AI security at the moment: it's a good example of having to use traditional security controls to secure AI, because when I first started looking at the AI space, which was a little over a year ago when it started to become a thing, I imagined that we would have some really cool new capabilities and tools that would specifically secure AI, and we don't. Not yet, anyway. What we do have, though, is our traditional security tooling that puts in layers of defense. So, for example, what you would do to protect against model poisoning is secure your model, put it somewhere in an immutable state, lock down the access to it, and so on, and then you could have some other monitoring controls around it, checking the model's integrity and things like that.
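As a rough picture of that last integrity-monitoring idea, one simple control is to record a cryptographic hash of the approved model artifact and refuse to load any copy whose hash no longer matches. A minimal sketch, assuming a local model file; the path and expected digest below are hypothetical:

```python
import hashlib
from pathlib import Path

# Digest recorded when the approved model artifact was published (hypothetical value).
EXPECTED_SHA256 = "0f3a...replace-with-the-published-digest"

def sha256_of(path: Path) -> str:
    """Stream the file so large model weights don't have to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def load_model_if_untampered(path: Path) -> bytes:
    """Refuse to load a model file that doesn't match the published hash."""
    actual = sha256_of(path)
    if actual != EXPECTED_SHA256:
        raise RuntimeError(f"Model integrity check failed: unexpected digest {actual}")
    # Only read the weights once the artifact matches the published hash.
    return path.read_bytes()
```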

Speaker 2:

Now the good news, specifically for model poisoning, is that the majority of folks don't need to worry about it, because model poisoning works down at your platform layer, where your models sit. Most organizations don't have their own AI platform, because they're very expensive to run; you need a lot of compute. The majority of organizations and people will be using a third-party provider's platform, for example Azure OpenAI.

Speaker 2:

Now, that means model poisoning is certainly something that can happen, but in that case it's a Microsoft problem, because it's down at the platform layer where the models are, so we have to worry about it, and we do, I can assure you we do. The good news is that a lot of these AI-specific attacks, not all of them, but a lot of them, are down at that platform layer, and most people will not run their own platform, which means you will be putting your trust in the security controls of your third-party provider. Now, as with any cloud service, we always say trust but verify. So you trust us, but you go and verify that what we do is acceptable to your organization in terms of security. You do that by looking at all the audits and the independent third-party reports that we get done, and then you can make a decision from there.

Speaker 1:

So good and bad news, I'd say. Okay, going back to what you just said, what are the challenges you see in this space when securing AI? Would it just be the model training itself on bad data, or are there other challenges?

Speaker 2:

No, no, that's just an example. There are many, many challenges. As I said, data security is a big one. It's not new; it's just that AI has brought a different spotlight onto data security, and a lot of folks, quite rightly, are asking: where is my data going, Microsoft, what are you doing with my data? And also, how can I stop my AI getting at my data in the tenant, et cetera?

Speaker 2:

Now, for anybody wondering, because I get asked this question all the time: if you're using a Microsoft AI product, whether it's a Copilot or Azure OpenAI, we don't train our models on customer data. We just don't do it. The models are trained completely separately in Microsoft land, and then we put them into the platform and they're all protected, and no matter what you do at the platform level, we're not using your data to retrain the models. It's a bad idea generally. Of course it's important from a privacy and security perspective, but on a more practical level as well: if we were training our models on all of our customers' data, even if that were possible, the models would get messy really quickly. They would be unusable, and they wouldn't be as predictable in how they worked, because they'd be being retrained all the time. So we don't do that. Wherever your data is, it stays in your tenant. We don't pull it down. It stays there, and the platform accesses your data to do some grounding. So if you ask a question and you want it to use your data, the OpenAI platform will use your data, but it's a temporary thing; it chucks it away afterwards. It's not going back down into the platform. It's kind of a one-way thing, and there's lots of documentation online about this because we get asked about it all the time, so anyone who's interested can definitely go and have a look at that.

Speaker 2:

The other one that I think is the most interesting to me is this jailbreaking and messing around with the prompts. With that, it's not that you're changing the way the model works. In large language models we have something called a meta prompt, also called a system message. That is a list of instructions for the large language model. So, for example, if you had a shopping bot, you would say in your meta prompt: you are a shopping bot, you only talk about shopping, and if anybody asks you about anything else, you don't answer it, because you are a shopping bot. And so, if somebody tried to ask it about something else, it should say: oh, I'm so sorry, I only talk about shopping, I can't answer that question for you.
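As a rough illustration of that meta prompt in practice, here is a minimal sketch of a chat request carrying a shopping-bot system message; the model name is a placeholder and the wording of the instructions is just an example, not a recommended template.

```python
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment; AzureOpenAI works the same way

# The meta prompt (system message): a standing set of instructions the model
# is expected to follow on every turn of the conversation.
SYSTEM_MESSAGE = (
    "You are a shopping bot. You only talk about shopping. "
    "If anybody asks you about anything else, politely refuse and explain "
    "that you can only help with shopping questions."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model/deployment name
    messages=[
        {"role": "system", "content": SYSTEM_MESSAGE},
        {"role": "user", "content": "Tell me how to pick a good winter coat."},
    ],
)
print(response.choices[0].message.content)
```

A jailbreak attempt is then just a user message crafted so the model ignores those standing instructions, which is why the system message on its own shouldn't be treated as a security boundary.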

Speaker 2:

But there are techniques, and this is jailbreaking, where the way you phrase a prompt can sometimes confuse the model into ignoring the rules it has and actually giving you an answer. There are a few techniques out there, some of them called things like gaslighting, where you confuse it. Sometimes you say: oh, it's OK, this is just a story, so it's not real, so you're allowed to tell me. It's fascinating stuff and it's really worth reading up on as well.

Speaker 2:

Recently our CTO, Mark Russinovich, was talking about an attack he's been writing a white paper on, called Crescendo. Crescendo is kind of an evolution of jailbreaking, where in about five to ten prompts you can confuse the model. You don't put it all in one big prompt, you do it in little, bite-sized pieces, and then it's even harder for the model to pick up that it's being confused, and it's much more successful. But this field is changing so quickly, as the whole AI technology field is, and of course security is changing too.

Speaker 2:

But I will say, we've talked about all these cool attacks and they sound very exciting, but it's really important for people to remember that although these attacks exist and we do see some of them in the wild, some of them are only theoretical at this point, as in researchers have done them but we haven't seen any large-scale exploitation of them to date. I will stress: to date. And remember, at the moment some of these attacks, things like model poisoning, take quite a long time to do. There's quite a lot of effort involved.

Speaker 2:

Not only do you need to break into a system to access a model, you would then need to sit and feed it data as well to mess it up. Whereas if you're, let's say, your standard cybersecurity criminal who's just looking to make a quick buck, and a lot of the threat actors out there are motivated by money, not all, but a good majority, that's a lot of time and effort for not necessarily getting what you want. So it's really important for people to keep in perspective that even though these AI attacks are out there, and they most definitely are, at the moment at least they're not as widespread as perhaps one would think.

Speaker 2:

And if you're looking at it holistically, from a securing-your-enterprise perspective, there are other things you should be focusing on. We say this every single year in the Microsoft Digital Defense Report: 95 to 98% of security breaches would have been prevented by good, solid security hygiene.

Speaker 2:

And that's the boring stuff.

Speaker 2:

That is using multi-factor authentication, patching things, using some kind of modern threat protection, and using the principle of least privilege on all your accounts, whether people or machines. Those are the kinds of things attackers know how to exploit, so there's no point worrying about jailbreaks or model poisoning if you're not doing your patching properly, or if everybody in your tenant has global admin, and we do see that. I know we laugh, and I might be sounding a bit frivolous saying everybody has global admin, but people do do things like that, and most attackers aren't going to waste time poisoning a model if they can compromise a global admin account and it's game over. So I think it's very important to keep that in perspective. Now, all of these things change over time, and sadly I do expect to see, as the years go by, that AI attacks will become more widespread as the technology is more widely adopted and the attackers become better at attacking AI. But in these early days, at least, good security hygiene is the stuff that will still save you the most from a security breach.

Speaker 1:

100%. Thanks. So let's touch a bit on Copilot for Security. I take it that, since that was released recently, it's another layer that people can use in terms of AI security products?

Speaker 2:

For those people who have played with it, you'll have seen this, but it has like a sidecar in the other Microsoft security products, like Purview, like Sentinel, and so you can use Copilot to ask it things. The idea with Copilot for Security, and I know it's cheesy, is that it is your copilot: you can ask it quick questions and it can bring back answers. Who is this threat actor? What is this CVE? If anyone doesn't know what a CVE is, it's the Common Vulnerabilities and Exposures database. Every time there's a security issue it goes in there; it's a public repository, it's not run by any one organization, and the issues get rated out of 10, so the higher up they are, the worse they are. So you can ask it things quickly. But it isn't in itself securing AI. It is bringing together all of our other security products and capabilities to allow you to function quicker and better when doing security things. It's also going to plug into all the other tools, and the idea, as the product develops, is that it's going to bring together all of your security tooling and, by bringing all the signals together quickly and using the AI, let you respond quicker. But, because I get asked this question a lot: do I need any other security products if I have Copilot for Security? The answer is yes, because Copilot for Security, and don't take this too far, is kind of a little bit of an overlay over the top of the other security products. They still do the things and have the controls.

Speaker 2:

So, we mentioned data security. A lot of data security comes down to good hygiene and good application-building practices, but in the Microsoft suite, of course, you'd be looking at Purview. Now, Purview is interesting, because a lot of folks are under the misconception that Purview is just for compliance, which is wrong. It is a data security tool. It has some compliance bits to it, but it is security tooling through and through.

Speaker 2:

So with Purview you can label data, and it can automatically label it for you, and then when you've got labels on your data, say public, general, secret, top secret, whatever, you can start putting protections on. Purview has protections as well, but once your data is labeled you can say to your Copilot or your AI: never, ever retrieve top-secret documents. When someone asks a question, it doesn't matter what they say, never retrieve top-secret documents. But it can't do that until you've labeled things, because AI is only smart up to a point, and I think some folks are still overestimating what AI can do. So AI can help with data security, but in order for it to do that, you need to give it direction, which means having labels on your data so it knows what to do with it.
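To picture that idea, and this is a generic sketch rather than the Purview or Copilot API, the retrieval step that grounds an AI answer can simply refuse to hand over anything carrying a sensitive label. The Document type, label names and search logic below are all hypothetical:

```python
from dataclasses import dataclass

# Hypothetical sensitivity labels, in the spirit of the ones mentioned above.
BLOCKED_LABELS = {"secret", "top secret"}

@dataclass
class Document:
    title: str
    body: str
    sensitivity_label: str  # e.g. "public", "general", "secret", "top secret"

def retrieve_for_grounding(query: str, index: list[Document]) -> list[Document]:
    """Return documents relevant to the query, never anything with a blocked label."""
    hits = [doc for doc in index if query.lower() in doc.body.lower()]
    return [doc for doc in hits if doc.sensitivity_label.lower() not in BLOCKED_LABELS]

# Example: a top-secret payroll file is never offered to the model for grounding.
index = [
    Document("Payroll 2024", "payroll figures for every employee", "top secret"),
    Document("Travel policy", "how to book travel and claim expenses", "general"),
]
print(retrieve_for_grounding("payroll", index))  # -> []
```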

Speaker 1:

Yeah, okay, brilliant. So we've touched on AI security. How can people learn more about AI security in that space?

Speaker 2:

So at the moment there is a webinar series currently airing, and it will run until the middle to end of August, called Copilot L33tSp34k. If people don't know "leet", that's hacker slang for "elite". The link is aka.ms/copilotl33tsp34k, where leet is l-3-3-t and speak is s-p-3-4-k. You can put it in your show notes, I think; it's probably easier to put it in the show notes.

Speaker 2:

Yeah, you can send a link. That's a series that I got to record back in January; there's a new episode airing every couple of weeks, and we're talking to a variety of folks from both inside and outside Microsoft, some experts in their field, about AI security. So, I have to plug my own thing: I would definitely check that out. Also, if anybody is attending any of the big security conferences coming up this year, there are quite a lot of AI security trainings available as well. They should also go and have a look at our AI Red Team, who are essentially our red teamers, or penetration testers, specifically for AI and AI systems. They have released a tool called PyRIT, spelled P-Y-R-I-T; again, I think that's one for the show notes. It is a framework for testing AI, so if you're building AI systems and you want to do some security testing, you should definitely go check out PyRIT.

Speaker 2:

There's also more and more material coming online, and there's a Copilot for Security learning series on all the Microsoft channels as well. Like I said, it's a developing thing. There's probably not as much solid material out there for people to find as I'd like, but that's because we're in such early days. Certainly, if you have a search, you should be able to find some nice solid material to get you started, and keep an eye on the Microsoft channels, because there will definitely be more coming in that space as well.

Speaker 1:

That's brilliant. So, as this episode is coming to an end, we always like to get to know the guest. Are you going to any tech events in the future, Microsoft tech events or charity events?

Speaker 2:

Oh, I'm always at events. I mean, that's part of my job. I just came back from the Microsoft AI Tour in Seoul. As for the big ones, well, it's Northern Hemisphere summer, so there's a little bit of a lull around June, July time.

Speaker 2:

But I will be at what we call hacker summer camp in Las Vegas in August, which is Black Hat, DEF CON, The Diana Initiative, BSides Las Vegas and SquadCon; there's something like five or six conferences in a week. It's crazy, and it's all security. So I'm pretty sure I'm going to be there, and I'd say there's a fairly high chance you'll see me at Ignite later in the year as well. And if you go to other security conferences, you'll probably see me there too. I'm going to KCDC, which is the Kansas City Developer Conference, in June, and I'm going to Copenhagen Developers Festival in August. I think that's all my currently booked travel. So, always keeping busy with the travel, and I'm sure there'll be other things that come up as well; that's just what's on the diary.

Speaker 1:

I must say, the Vegas conferences look very tiring. People can get so jet-lagged from travelling to conferences that much.

Speaker 2:

Oh yeah, for sure. Well, Australia is long haul from everywhere, and everywhere is long haul from Australia, so I'm quite used to being on planes. I spend a lot of time on planes.

Speaker 1:

Okay, nice. So how can someone get in touch with you to learn more about AI security?

Speaker 2:

So you can find me on Twitter, X, whatever you want to call it. My username is underscore Sarah Y O, that's _sarahyo, and that's probably the best place to get hold of me. You can also find me on LinkedIn; just search Sarah Young Microsoft Security and I should come up, so you can find me on there too.

Speaker 1:

Okay, thank you. Thanks for joining this episode, Sarah.

Understanding AI Security Risks and Solutions
AI Security Challenges and Solutions
AI Security Training and Resources