Microsoft Community Insights Podcast
Welcome to the Microsoft Community Insights Podcast, where we explore the world of Microsoft technologies and interview experts in the field who share insights, stories, and experiences in the cloud.
If you would like to watch the video version, you can find it on YouTube:
https://youtube.com/playlist?list=PLHohm6w4Gzi6KH8FqhIaUN-dbqAPT2wCX&si=BFaJa4LuAsPa2bfH
Hope you enjoy it
Episode 46 - Building Trustworthy AI with Liji Thomas
We dig into how to make the magic of AI reliable, sharing a practical blueprint for building trustworthy AI on Microsoft Foundry with guest Microsoft AI MVP Liji Thomas.
From there, we tour Microsoft Foundry’s control plane and show how to configure guardrails at model and agent levels to block PII, reduce jailbreaks, and filter harmful or protected content. We explore observability and evaluations for groundedness, coherence, and relevance, plus why an evaluation-driven approach matters most after deployment.
SPEAKER_01:Welcome to the Microsoft Community Insights Podcast, where we share insights from community experts to stay up to date with Microsoft. In this episode, we will dive into building trustworthy AI with Microsoft Foundry. It just got renamed, so I've changed the name. And today we have a special guest, Liji Thomas. Could you please introduce yourself?
SPEAKER_00:Of course, thank you for having me, Nicholas. My name is Liji Thomas. I'm based in the Midwest US, in Kansas City, and it's very cold where I am right now; we're getting closer to Christmas. I've been a Microsoft MVP in AI for a couple of years now. It's a super exciting space, with a lot of new stuff every day, every week, every month. So yeah, excited to be here. Thank you for having me.
SPEAKER_01:Okay, so before we dive into the main topic of the episode, trustworthy AI, how did you get into AI, Liji? I just want to dive into your history.
SPEAKER_00:Happy to, and it's a fairly common question as well. I want to say I started in AI pretty much a decade back. In fact, my first tryst with AI was a paper on natural language processing as part of my master's in college, which was a long, long time ago. At that time, if someone wanted to enter the space of AI or do anything with AI at all, you needed to be in the academic space: you wanted to have a PhD, you wanted to do research, those kinds of things. It was not very commercial-grade AI, though we did have what I would call very primitive forms of artificial intelligence. Then, over the course of time... so I've been on the Microsoft stack all my career, by choice or by intention, it just happened that way. Early on, if you remember, we had Cognitive Services and the entire Azure AI services portfolio. Those were some of the earliest signals of what we could do with this technology, and I used them a lot in some of the modern applications we were developing, even for something as simple as a chatbot or entity recognition. And of course, in parallel, machine learning has always been in vogue and very popular. But with the introduction of generative AI and LLMs around 2022, thanks to ChatGPT, a lot of people thought, oh, this is when AI was born. And some of us were like, no, we've been here for a while; it's been a minute. The good thing about coming from that background is that you tend to appreciate how much the technology has grown and evolved. Another perspective I would add: even when I hear that it's probably not going to reach AGI, that it's probably not the best for every single use case, the rest of us tend to appreciate the progress. Yes, it's not perfect, but we've made substantial progress in this field, and I'm just excited by everything that happens every other week or so. We're now in the agentic space, with Microsoft Foundry and so much happening. I still think we're just scratching the surface in terms of what's possible.
SPEAKER_01:We still haven't fully understood its capabilities. I remember a few years ago, when AI exploded... but AI has been around for ages. Take translators, like Google Translate; that's still AI, because it's translating between different languages. So we've been using AI without even knowing it.
SPEAKER_00:Yeah, autocomplete. This is one of my favorite things to ask when I start a conference talk. I say, how many of you use AI? Quick raise of hands, and literally nobody raises their hand. Then I ask, how many of you use autocomplete? How many of you rely on your inbox versus the junk folder, that classification of emails? People don't realize that it's all AI. Today we have AI everywhere and we're not aware of it. I have AI in my refrigerator, for all I care; it's all over the place. But I think that's also the beauty of the technology: it's pervasive, invisible, and just a part of our lives.
SPEAKER_01:Okay, so does that mean it will be harder for organizations to make AI trustworthy enough to put into production?
SPEAKER_00:Absolutely. Let's go back a few years to when this whole ChatGPT space opened up, when generative AI and the early GPTs arrived. The first one or two years were: is this real? Is this hype? Is this even possible? How real and meaningful is it? A lot of us were busy building prototypes and POCs, just proving out the technology. Now we don't spend time on that; the proof is in the pudding, and people are very clear that yes, this is a worthwhile investment. But there has been a shift from just building applications and intelligent agents to building scalable and safe applications and agents. It's not just a looks-good-on-paper thing anymore. People, organizations, and industries are very careful about, and investing a lot in, building trustworthy applications.
SPEAKER_01:So how would you keep it safe? I know some of it is about following best practices, like applying basic RBAC to your agents and so on.
SPEAKER_00:Yeah. As much as I can show you what we have in Microsoft Foundry, I always advocate for a hybrid model. As we say about any tool or technology, it's only as good as the human intention to put it to good use. When it comes to building safe and reliable applications, today we have guardrails, controls, and safety mechanisms from platform providers like Microsoft, as part of Microsoft Foundry and other tools. But the onus is also on humans and teams to, one, educate themselves and be aware of what's out there, and two, be intentional about putting it to use. Typically, safety is the last step in the process. If you remember how we used to build accessible web applications, accessibility was an afterthought, and security was an afterthought. It cannot be like that; it should be built into the design of the system and taken care of from day one.
SPEAKER_01:Yeah, I think security should come first when you implement AI, right at the start.
SPEAKER_00:Yes, and it's not as hard as we think it is. Application to application, your risk appetite varies, especially if you are in healthcare or financial services or those kinds of industries; obviously, the risk factor is very different from how you would build for any other vertical. That said, there is a good head start you can get from tools like Microsoft Foundry, and it'd be great for people to know and be aware of that, because we don't have to reinvent the wheel.
SPEAKER_01:So trustworthy AI is also tied to responsible AI; you need to keep your AI ethical and safe as well. Are there any principles you think people should abide by in terms of responsible AI?
SPEAKER_00:I'd say everyone in the ecosystem today has responsible AI principles and approaches. I've always gone by what Microsoft has provided. If you look at the Microsoft Responsible AI Standard, you have those six pillars, which have pretty much not changed in the couple of years I've been looking at them: fairness, reliability and safety, privacy and security, inclusiveness, transparency, and accountability. Everybody is theoretically in alignment with this, but how you put it into use is where the rubber hits the road. For example, transparency sounds great; who doesn't want that? But how someone really applies and demonstrates it is another matter. One of the things I aspire to do in the applications I build, and I learned this from Microsoft, who does a very good job here, is publishing transparency notes for every service, especially AI services. It's important to call out and be transparent about what a service does, how it does it, and its limitations. That's my favorite part of the transparency notes. In AI, if you look at the press releases and the news articles, they always tell you what AI does or what it will do; seldom will anyone tell you what it will not do. It's important for us to have that transparent and honest conversation as well. So there are many mechanisms by which each of these can be put to use; it's not just six pillars on paper.
SPEAKER_01:Yeah. So the last thing I want to ask is: how do you approach AI day to day in terms of building trustworthy applications?
SPEAKER_00:Yeah. First of all, curiosity; let me just start there. You need a genuine interest and intention to build trustworthy applications. There is research now showing that this is directly proportional to the economic impact of your applications, so it is no longer a nice-to-have. There are so many security attacks and vectors that our security teams talk about, specific to AI applications; new risks have been exposed. When I talk about this, people ask, "But we've always had that risk; what's new here?" There are new risks, for example reputational risk: one prompt injection or one jailbreak gone wrong is a huge reputational risk for a brand. So this is a must-have capability in the organization. To answer your question, in the applications I build, I take this very seriously, and it's an intimate part of the design. One, know what tools and capabilities are out there for you to leverage, because our space is growing and changing so rapidly; people come out with new tools and capabilities by the day and by the week, so keeping yourself informed is challenge number one. Two, see what you can easily bring on board from there, since not everything fits every use case. And three, don't stop there; build on top of that for your specific use case. If you have a minute, I'm happy to show you what we have.
SPEAKER_01:Yeah, sure, that's fine. Every week we have a new model coming out in Microsoft Foundry; keeping up to date with them is quite hard.
SPEAKER_00:Absolutely. Like I said, when I submitted this session it was still Azure AI Foundry, and it became Microsoft Foundry soon after, but I can quickly show you. Let me share my screen; I just need a second. All right, if you can see my screen, this is where you go to Foundry. There's a little toggle up there called New Foundry, and I'm switching to that to show you the new experience. You can go here, you can start building, you can do any of this. Foundry is just huge; there's lots going on. If you read up, you've got Foundry Models, you've got the whole Foundry IQ. The part I'm curious about, specifically for this session, is what's called the Foundry control plane. The control plane again has a lot of good stuff in it; they're integrating Microsoft Defender and whatnot. But the one area I wanted to show you is this: on the left pane, you have an area called Guardrails. Why is that important? I was just trying something here; you can see it on my screen. In my playground, I'm asking it to tell me a joke with my SSN, and I give it my actual SSN. And it says no, it can't use personal or sensitive information like Social Security numbers.
SPEAKER_01:Sorry, are you on a different screen? I could only see the main screen. Okay, now I can see it.
SPEAKER_00:Oh, sorry, you missed that. I was talking about the part where I said, tell me a joke with my SSN, and it said no, can't do. But if I simply ask it to tell me a joke, sure, it will do that. So that behavior, where I want it to tell the joke but not touch my PII or refer to any personal information, is what you can set here on this page called Guardrails. There are many ways you can do this. You can apply a guardrail at the model level, which is what I have done here, or even at the agent level, specific to the agents you are building. If you apply it at both, the agent-level guardrail overrides the model-level one. Here I'm going to edit the current guardrail to show you how I did what I did. On the left-hand side, you can see that you can add any number of controls; what I have added is what you see on the right-hand side, which is a lot. You select which risks you foresee in your application and apply controls accordingly. Do you anticipate jailbreak risk? Indirect prompt injections? PII, the one I was talking about? Groundedness; hate; sexual or harmful content; and, among the existing ones, protected material for code or text. You don't want to use Taylor Swift lyrics as part of your joke? Sure, you can include that. I've included all of them because I want to be super safe in my application. At the next level, you choose whether to apply the guardrail to specific agents you've created or to models you have deployed. After that, you just review, hit submit, and you are good to go. That is all it takes to get something out of the box, which means you don't have to code for this or build anything from scratch. Guardrails is just one of several things: you also have observability and evaluations, a lot going on in Foundry to help you monitor what's happening on the front line and build trustworthy applications. I will stop sharing now.
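The Foundry guardrail in the demo blocks PII for you out of the box. If you want a similar screen in your own code, here is a minimal sketch using the Azure AI Language PII detection API (the azure-ai-textanalytics package); the endpoint, key, and sample prompt are placeholders, not values from the demo:

```python
# Minimal sketch: pre-screen a prompt for PII before sending it to a model.
# This is a hand-rolled analogue of the PII guardrail shown in the demo,
# not the Foundry feature itself. Resource details below are placeholders.
from azure.ai.textanalytics import TextAnalyticsClient
from azure.core.credentials import AzureKeyCredential

client = TextAnalyticsClient(
    endpoint="https://<your-language-resource>.cognitiveservices.azure.com",
    credential=AzureKeyCredential("<your-key>"),
)

prompt = "Tell me a joke with my SSN 123-45-6789."
result = client.recognize_pii_entities([prompt], language="en")[0]

if result.entities:
    # Either refuse outright, or fall back to the redacted text.
    print("PII detected:", [(e.category, e.text) for e in result.entities])
    print("Redacted prompt:", result.redacted_text)
else:
    print("Prompt is clean; safe to send to the model.")
```

The advantage of the built-in guardrail is that this check runs for every call to the model or agent without any such application code.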
SPEAKER_01:Okay. So do you do anything else within the Foundry control plane, or is it pretty much monitoring, keeping an eye on the whole estate?
SPEAKER_00:There is so much that has been released as part of the Foundry control plane that I am still discovering it. For example, the Microsoft Defender integration is not something I have used just yet. What I've shown you existed in Azure AI Foundry as well, but it looks a lot easier to manage and maintain now that it's part of Microsoft Foundry. But there is a lot more to the control plane than what I've just shown you. The whole observability area is a session in itself if we have to get into evaluations around groundedness, relevance, coherence, and all that good stuff; maybe in another episode.
SPEAKER_01:Okay. So in terms of building trust in your agents, you just have to keep applying some of the Microsoft principles, apply governance, and do the basics like monitoring and logging to keep your agents safe?
SPEAKER_00:Yes. One thing I say very often is that in building AI applications, our job starts the day we deploy. Until then, we have a small team of maybe 10 or 20 engineers who put their best minds together; we build this in the lab, we experiment, we do prompt engineering, we instruct the AI, and we hope it is going to behave the way we intend. We put in all the guardrails and all the controls, but once we release this into the world, that's when we actually get feedback on how it really performs. So knowing that up front, and lining it up with mechanisms for evaluation and observability, what I would almost call an evaluation-driven approach to development, is key for successful business applications. Make sure you have line of sight into most of these areas, that data comes back to you, and that you have insights through whatever telemetry tools your organization uses; that's really key. And again, tools like Foundry make that easier without you having to do a lot of work.
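For a flavor of what that evaluation-driven loop can look like in code, here is a minimal sketch using the azure-ai-evaluation package, which provides the groundedness, relevance, and coherence evaluators mentioned above. The judge-model connection details and the sample data are placeholders, and argument names can vary slightly between SDK versions:

```python
# Minimal sketch: score one response for groundedness, relevance, and
# coherence with azure-ai-evaluation. The judge model's endpoint, key,
# and deployment name are placeholders.
from azure.ai.evaluation import (
    CoherenceEvaluator,
    GroundednessEvaluator,
    RelevanceEvaluator,
)

model_config = {
    "azure_endpoint": "https://<your-aoai-resource>.openai.azure.com",
    "api_key": "<your-key>",
    "azure_deployment": "<judge-model-deployment>",
}

groundedness = GroundednessEvaluator(model_config)
relevance = RelevanceEvaluator(model_config)
coherence = CoherenceEvaluator(model_config)

# One hypothetical production sample; in practice you would run these
# over logged traffic or a curated test set after deployment.
query = "What can I configure in Foundry guardrails?"
context = "Guardrails offer controls for PII, jailbreaks, and protected material."
response = "You can configure controls such as PII blocking and jailbreak detection."

print(groundedness(response=response, context=context))
print(relevance(query=query, response=response))
print(coherence(query=query, response=response))
```

Each evaluator returns a score with a reason, so results can be fed into whatever telemetry or dashboarding the team already uses.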
SPEAKER_01:Yeah, Microsoft is trying to make everything easier for everyone. I guess the best bet is to play around with it, break things, and test whether certain guardrails do what you intended before you just turn everything on.
SPEAKER_00:Yeah, it's interesting. You might have a specific use case that requires additional guardrails beyond what's available out of the box; for whatever reason, say, you can't use certain key terms or things of that sort. That's extra work you will always have to do. I don't mean to say that everything is provided by the platform, but it's a fairly good start. Most applications would not want hateful, harmful, or violent speech in the input or output, and that is not something you have to build; it's available from the platform. But this is an ever-evolving space, right? The more we try to manage and fix, the more bad actors find other ways to break things. There are more loopholes, more research papers; there's an active research community working on this, and those findings have to be built into the tools and come into Foundry. When I started using guardrails in Azure AI Foundry, not even half of the list you saw today, about 16 or 20 items, was there; it was a very small list across image, text, and speech. It has grown so much since, and I fully anticipate it will continue to grow: as they learn about new security threats, they will feed that into the product.
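For the "certain key terms" case, one option beyond the built-in controls is a custom blocklist with the Azure AI Content Safety SDK. A minimal sketch, assuming hypothetical resource details and a made-up banned term:

```python
# Minimal sketch: create a custom term blocklist and screen text against it
# with azure-ai-contentsafety. Endpoint, key, and the banned term are
# placeholders, not anything from the episode.
from azure.ai.contentsafety import BlocklistClient, ContentSafetyClient
from azure.ai.contentsafety.models import (
    AddOrUpdateTextBlocklistItemsOptions,
    AnalyzeTextOptions,
    TextBlocklist,
    TextBlocklistItem,
)
from azure.core.credentials import AzureKeyCredential

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
credential = AzureKeyCredential("<your-key>")

# One-time setup: define the blocklist and add the terms you must never echo.
blocklists = BlocklistClient(endpoint, credential)
blocklists.create_or_update_text_blocklist(
    blocklist_name="banned-terms",
    options=TextBlocklist(blocklist_name="banned-terms",
                          description="Terms this use case cannot use"),
)
blocklists.add_or_update_blocklist_items(
    blocklist_name="banned-terms",
    options=AddOrUpdateTextBlocklistItemsOptions(
        blocklist_items=[TextBlocklistItem(text="project-codename-x")]
    ),
)

# Per-request: analyze text against the blocklist and stop on a hit.
safety = ContentSafetyClient(endpoint, credential)
result = safety.analyze_text(
    AnalyzeTextOptions(
        text="Tell me about project-codename-x",
        blocklist_names=["banned-terms"],
        halt_on_blocklist_hit=True,
    )
)
if result.blocklists_match:
    print("Blocked:", [m.blocklist_item_text for m in result.blocklists_match])
```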
SPEAKER_01:I agree. Okay, so as this episode is coming to an end, I always like to get to know our guests. So, Liji, what do you normally do in your spare time? Do you have any hobbies?
SPEAKER_00:Oh dear, it's Christmas time. Well, I would say I have kids with lots of hobbies, so that keeps me busy. But yeah, I'm just looking forward to Christmas and downtime with family and friends; it's a wonderful time of the year. I'm a technologist at heart, so I love blogging, love talking technology, and love reading; I'm an avid reader, mostly.
SPEAKER_01:Yeah, and I remember the big one a while back was Thanksgiving.
SPEAKER_00:I love eating more than cooking, I will say. But I'm really looking forward to the holiday season.
SPEAKER_01:Thanks. So let's quickly dive into some of the amazing community work you do. Aside from blogging, do you do any speaking? Any open-source work for the community?
SPEAKER_00:Yes, two major things. One is speaking: I like speaking at small meetups and user groups, and I have a couple of conferences coming up as well, local and otherwise. But one thing I've started doing this year, which I hadn't done much of in previous years, is one-on-one mentoring and volunteering, hand-holding and helping not just women and girls, but people who are new to the IT space, people who come from very different backgrounds than computer science or engineering. That one-on-one mentoring has been very fulfilling; the time you spend with them and seeing their growth has been a joy.
SPEAKER_01:You mean someone who made a career change into IT, and you want to see how they've grown? Whether they moved from a different field, like teaching, into AI, and how they grow from there?
SPEAKER_00:Yes, and I've been lucky enough to have different kinds of mentees. One of them is a high school student; another is an experienced, seasoned professional, but from a completely different field than IT. It's good to get different perspectives and to help them reach their goals.
SPEAKER_01:Yeah, mentoring is always one of those good things to do; you receive feedback as well, so it's always good. So the next question, which you've kind of answered already: are you going to, or speaking at, any tech events?
SPEAKER_00:Yes, briefly.
SPEAKER_01:Yeah.
SPEAKER_00:I have registered for a couple; let's see if those go through. I just got confirmation for Azure AI Connect this morning. AI Connect, if you know it, is almost a week-long event. I presented there last year, and this one is in about three months, I believe. So that's exciting. Most of my topics revolve around Foundry and around safe, secure, responsible AI, those kinds of topics, because there's always something new and exciting to speak about there. And Kansas is a small space, not like the East or the West Coast, but there are always AI clubs and small meetup groups coming together in-house, and everyone's just excited about AI.
SPEAKER_01:Yeah, I'm also opening a call for speakers soon for my Agile Max community user group, but it's online, so I will invite you one day.
SPEAKER_00:Thank you. Thank you. Happy to speak.
SPEAKER_01:Yeah, so thanks for joining this episode, Liji Thomas. I hope everyone learned a bit about the importance of building responsible and trustworthy AI and making it a foundation of any of your projects. You can still start in the middle of a project, but you have to build with security in mind and apply best principles, like Microsoft best practices, as well as some guardrails. Stay tuned for the next episode. Thank you.
SPEAKER_00:Bye, everyone. Bye.
Podcasts we love
Check out these other fine podcasts recommended by us, not an algorithm.
The Azure Podcast
Cynthia Kreng, Kendall Roden, Cale Teeter, Evan Basalik, Russell Young and Sujit D'Mello
The Azure Security Podcast
Michael Howard, Sarah Young, Gladys Rodriguez and Mark Simos