Microsoft Community Insights

Episode 13 - Azure AI Insights by Veronika Kolesnikova

June 30, 2024 · Nicholas Chang · Episode 13

One of the key takeaways from this episode is the concept of responsible AI. Veronika explains that as AI becomes increasingly integrated into various applications, it's crucial to adhere to principles like transparency and security.

You can contact Veronika Kolesnikova through social media or check out her Boston Azure user group on Meetup:

https://www.meetup.com/bostonazure/



Speaker 1:

Hello, hello everyone. Welcome to the Microsoft Community Insights Podcast, where we share insights from community experts to stay up to date and enjoy. My name is Nicholas and I'll be your host today. In this podcast we'll dive into Applied AI. I'm not sure if that's still the name, but we'll get to that in a minute. Before we get started, a reminder to follow us on social media so you never miss an episode and help us reach more amazing people like yourself. Today we have a special guest, Veronika Kolesnikova, and I probably pronounced that wrong. Can you please start by introducing yourself?

Speaker 2:

Yeah, I have a really hard last name. My name is Veronika Kolesnikova. I live here in Boston, Massachusetts. I am a senior software engineer working for an insurance company here in Boston, and I'm also a Microsoft MVP in AI; I've been a Microsoft MVP for five years. Hopefully I'll get another one this year, fingers crossed, July is coming pretty soon. I'm also a co-organizer of the Boston Azure user group. I do mentorship, I do community work as much as I can. So yeah, trying different things here and there.

Speaker 1:

Okay, so before we get started with Applied AI, we'll just dig into some of the amazing community work you do. Recently I saw that you did a post on responsible AI on a Microsoft blog. What is that about? Can you explain it?

Speaker 2:

Yeah. So responsible AI is very important, and I'm trying to educate people about it. I'm going to KCDC, that's the Kansas City Developer Conference, which is happening in a couple of weeks, and I'm going to talk about responsible AI there as well. I think it's very important. A lot of people use AI now; it's really democratized with generative AI and large language models. It's everywhere and everyone is trying to use it one way or another, but not a lot of people think about it in a responsible way. They don't realize that it's also, so to speak, a piece of software. So just as you care about the security of your software, you need to care about security and actually implement all those responsible AI principles with your AI and machine learning tools.

Speaker 1:

So what are those responsible AI principles that you mentioned?

Speaker 2:

Yeah, there are a couple of main principles, like... oh, sorry, I got lost a little bit.

Speaker 1:

No worries.

Speaker 2:

Sorry. Yeah, so there's openness, meaning transparency. For example, you need to understand how your model is working behind the scenes. That is really hard to do with generative AI and large language models, just because those models are so big that it's hard to understand each bit and piece that goes into them, all the data that comes in and all the data that comes out. I think the main opportunity for transparency is in what those teams actually do when they work on those large language models, and how they test them. For transparency, they provide different inputs, get different outputs, and then compare what parameters they need to change in the input in order to get a different output. That gives them a perspective on how it works behind the scenes and what the logic is there.

Speaker 1:

Yeah, cool. So I saw that you're quite popular in the tech community, going to conferences and meetups and so on. Could you explain how that has impacted your career and your personal development, before we get into Applied AI?

Speaker 2:

Yes, thank you. Yeah, so it helped me a lot. I started public speaking maybe in 2018. I was ready for the next step in my career; I was looking for another job at that point, and I just met really nice people in the community here in Boston. One of them is James. I can't thank him enough, because he basically changed my life. He became my mentor and pushed me into public speaking. He works for Microsoft, and that's how he landed his job at Microsoft, so he said, okay, you want to find a new job? Just start being more active in the community, maybe start public speaking.

Speaker 2:

And I was very, very afraid to do any kind of public presentation or just speak in front of people. But yeah, I'm glad that he, and later Bill Wilder and Jason Haley, pushed me and stood by me. Based on that experience and my community work, I got noticed, and I got a different job at that time. And then, thanks to my community work, and now that I know a lot of people in the community, I found my current job as well. So it definitely works really well, and I can travel to all those cool places and talk about stuff that I really love, AI and machine learning, and people listen to me there. So that is really, really nice.

Speaker 1:

So, sorry, did you get started in AI as a software engineer, or just when you became an MVP?

Speaker 2:

When I became an MVP, yeah. So that goes back to the story of how I became an MVP. I was speaking about two streams of work at the same time. I was really into mobile cross-platform solutions, so I was talking about Xamarin, and at the same time I fell in love with Azure Cognitive Services.

Speaker 2:

So I also started talking about that, and I was kind of in between those two fields, and then Microsoft actually made the choice for me. They gave me the MVP award in AI, and I thought, okay, that's a sign, I need to focus a little more on machine learning. That actually started my learning journey, and then my speaking journey, and that's how I fell in love with machine learning and artificial intelligence.

Speaker 1:

Yeah, and I take it you do the majority of your AI work at your current job?

Speaker 2:

No, I'm mostly doing backend development, so I'm running services. We're working on login and registration for that insurance company's website. So that's my day job, and during my free time I learn a lot about machine learning and AI and then come back to the community and share my knowledge.

Speaker 1:

Yeah, that's good, because not every software engineer knows AI, so that's a good start. Okay, that's brilliant. So let's dive into Applied AI. I'm not sure what it is, because before we started you said it's not called Applied AI anymore, but I saw some Microsoft docs that say Applied AI. Do you know what it is, and is that still the name?

Speaker 2:

There's a story behind it. There was Azure Cognitive Services, which I mentioned before, and then Microsoft split it into two parts, Azure Applied AI Services and Azure Cognitive Services. They still called them Applied AI services, or applied services, for a while; those names change so fast, yeah.

Speaker 1:

Now it's called AI services, or Azure AI, I think.

Speaker 2:

Yeah, Azure AI. That's the main thing that people are using right now and that Microsoft is promoting. With the popularity of OpenAI and the partnership between Microsoft and OpenAI, Microsoft definitely pushes everything towards generative AI, and obviously those new models can do a lot of stuff. So in a lot of cases you may not need the additional services that Cognitive Services or Applied AI Services used to provide.

Speaker 1:

Okay, so it was an old name, not a new one. So, since you're really active in AI now, what are some of the real-world use cases or practical experiences that impacted your career as an MVP in AI? It could be any labs you found interesting, any real-world scenario you think is very interesting to share.

Speaker 2:

Yeah, definitely. With the launch of that partnership between Microsoft and OpenAI and Microsoft releasing the Azure OpenAI Service, I think that changed a lot in everyone's lives, especially developers'. Now we developers are getting even closer. You basically don't need to know anything about AI and how to build those models; now you can just plug in the API and use all the benefits of those generative AI solutions. I do like the content safety tools. I use them a lot when I'm playing with tools or trying to build something on my own. I think they're very important, and I'm trying to include them in my talk about responsible AI, since the old responsible AI perspective is maybe not as relevant anymore, because people are not building those machine learning models from scratch. I mean, they still can, but not a lot of people are required to do that anymore.
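For a sense of what "just plug in the API" can look like, here is a minimal sketch in Python, assuming the official openai package and an Azure OpenAI deployment; the endpoint, key, API version, and deployment name are placeholders, not anything from the episode:

```python
# Minimal Azure OpenAI call; endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # the name you gave your model deployment
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize responsible AI in one sentence."},
    ],
)
print(response.choices[0].message.content)
```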

Speaker 2:

So now there is a new focus on content safety: actually monitoring all the inputs and outputs that go into generative AI solutions, how those models react, and what guardrails you can put around those models. Some guardrails are pre-built, but you still need to verify that the model is not producing some kind of garbage or some kind of harmful content.
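As an illustration of that kind of guardrail, here is a small sketch using the Azure AI Content Safety Python SDK (the azure-ai-contentsafety package) to screen text before showing it to users; the endpoint and key are placeholders, and the severity threshold is an arbitrary choice, not a recommendation:

```python
# Screen model output for harmful content before returning it to the user.
# Endpoint and key are placeholders; the severity threshold is arbitrary.
from azure.ai.contentsafety import ContentSafetyClient
from azure.ai.contentsafety.models import AnalyzeTextOptions
from azure.core.credentials import AzureKeyCredential

client = ContentSafetyClient(
    endpoint="https://YOUR-RESOURCE.cognitiveservices.azure.com",  # placeholder
    credential=AzureKeyCredential("YOUR-API-KEY"),                 # placeholder
)

def is_safe(text: str, max_severity: int = 2) -> bool:
    """Return False if any harm category exceeds the chosen severity level."""
    result = client.analyze_text(AnalyzeTextOptions(text=text))
    return all((item.severity or 0) <= max_severity
               for item in result.categories_analysis)

model_output = "Some generated answer..."
if is_safe(model_output):
    print(model_output)
else:
    print("Response withheld by the content safety guardrail.")
```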

Speaker 1:

I remember at Microsoft Build they announced Prompt Shields. Is that the next level of content safety? I think so, because you can integrate Prompt Shields with content safety. Have you tried that before?

Speaker 2:

Yeah, so I haven't checked Prompt Shields specifically, but they also announced a lot of different things related to content safety, for example the groundedness analysis. Now, if you build a solution using, for example, the RAG approach, retrieval-augmented generation, you can actually test your solution before you ship it to production. You can verify that the groundedness is at an acceptable level, meaning that your solution is not making up information and actually relies on the information that you shared with it, whether that's documentation or specific information related to your company or your clients. You can verify it and reduce the amount of hallucination, because hallucination is still a big problem, and I'm glad that developers and machine learning specialists can actually do this before shipping the solution; they don't have to just rely on the end users to figure it out.
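For readers who want to try this, the groundedness detection she refers to is, at the time of writing, a preview REST API in Azure AI Content Safety. The sketch below shows the general shape of a call; the route, API version, and field names are taken from the preview documentation as I understand it and may change:

```python
# Rough sketch of the groundedness detection preview API in Azure AI Content
# Safety. Route, API version, and field names are assumptions from preview docs.
import requests

endpoint = "https://YOUR-RESOURCE.cognitiveservices.azure.com"  # placeholder
key = "YOUR-API-KEY"                                            # placeholder

body = {
    "domain": "Generic",
    "task": "QnA",
    "qna": {"query": "What is our refund window?"},
    "text": "Refunds are accepted within 30 days of purchase.",  # the model's answer
    "groundingSources": [
        "Policy: customers may request a refund within 30 days of purchase."
    ],
}

resp = requests.post(
    f"{endpoint}/contentsafety/text:detectGroundedness",
    params={"api-version": "2024-02-15-preview"},  # preview version, an assumption
    headers={"Ocp-Apim-Subscription-Key": key},
    json=body,
)
resp.raise_for_status()
print("Ungrounded content detected:", resp.json().get("ungroundedDetected"))
```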

Speaker 1:

I'm not sure what they mean by groundedness. How can you test the groundedness of the AI?

Speaker 2:

Yeah. So groundedness is how close the answers are to actual, factual information. I mentioned retrieval-augmented generation; that's when you're using your own information on top of, maybe, GPT-4 or something like that, so you're combining the benefits of a knowledge set you have about something specific to your company with all the knowledge that GPT-4 has. When you combine those two sources, you need to verify that your solution is not producing just random answers. It should produce answers based on the documentation that you shared.
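As a rough illustration of the RAG pattern she describes, the sketch below retrieves the most relevant snippet from a small document set and passes it to the model as grounding context. The keyword-overlap retrieval is a deliberately naive stand-in for a real embedding and vector search, and the Azure OpenAI details are placeholders:

```python
# Naive RAG sketch: retrieve a relevant snippet, then answer only from it.
# Keyword overlap stands in for a real vector search; credentials are placeholders.
from openai import AzureOpenAI

documents = [
    "Refund policy: customers may request a refund within 30 days of purchase.",
    "Support hours: the help desk is open Monday through Friday, 9am to 5pm.",
]

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Rank documents by word overlap with the query (stand-in for vector search)."""
    words = set(query.lower().split())
    return sorted(docs, key=lambda d: -len(words & set(d.lower().split())))[:k]

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

question = "How long do I have to get a refund?"
context = "\n".join(retrieve(question, documents))
response = client.chat.completions.create(
    model="YOUR-DEPLOYMENT-NAME",  # placeholder deployment
    messages=[
        {"role": "system",
         "content": "Answer only from this context:\n" + context
                    + "\nIf the answer is not there, say you don't know."},
        {"role": "user", "content": question},
    ],
)
print(response.choices[0].message.content)
```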

Speaker 1:

I remember when I tried to do it at work, a colleague who knows about AI was talking about hallucination. I think what he meant is when the AI makes it up, makes up an answer. Is that correct?

Speaker 2:

Yeah, and it's a big problem. That's why now you can test that groundedness and verify that it's actually producing factual information and not just random stuff.

Speaker 1:

Okay, so by testing the groundedness you can keep it close to the real world and reduce the hallucination in AI models. So, in your opinion, what are some of the best practices for organizations when deploying AI models?

Speaker 2:

So the best practices are obviously responsible AI and security practices.

Speaker 2:

I'm trying to really emphasize this topic today. You can actually use all those tools, some of which I just mentioned, but there are lots of others, and some of them are not even Microsoft-specific, so whatever people decide to use for their solutions is totally up to them. You can verify something like groundedness, for example, and verify that your model is transparent before you deploy your solution. Then you can keep monitoring it once you release it, and if something goes wrong you can update it or add additional information, maybe change your prompts, maybe change your content safety restrictions. And you have alerts at every step: before you deploy and while people actually use the solution.

Speaker 1:

Yeah, so at every stage of the deployment, just keep testing it.

Speaker 2:

Yeah, keep using the tools; you don't have to just sit there and stare at it. There are tools that can identify all those issues, maybe with groundedness, or responses being completely inadequate, or your system repeatedly saying "I don't know." Users don't want to see that; they want real answers. So maybe at some point you can add more information. If you're using a RAG approach, maybe you need to add the information there. Or, if it's, for example, some kind of support agent, there might be an opportunity to pass the conversation to a real person, and the real person can take it from there.
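Here is a sketch of that escalation pattern. The "I don't know" check is reduced to a simple phrase match for illustration, and answer_from_rag and handoff_to_agent are hypothetical stand-ins for a real RAG pipeline and a ticketing or live-chat integration:

```python
# Fallback pattern: escalate to a human when the model can't answer.
# answer_from_rag() and handoff_to_agent() are hypothetical stand-ins; the
# phrase match is a deliberately crude "I don't know" detector.
FALLBACK_PHRASES = ("i don't know", "i am not sure", "i'm not sure")

def answer_from_rag(question: str) -> str:
    """Hypothetical call into your RAG pipeline."""
    return "I don't know."

def handoff_to_agent(question: str) -> str:
    """Hypothetical hook into a ticketing or live-chat system."""
    return f"Connecting you with a support agent about: {question}"

def respond(question: str) -> str:
    answer = answer_from_rag(question)
    if any(phrase in answer.lower() for phrase in FALLBACK_PHRASES):
        return handoff_to_agent(question)  # escalate instead of dead-ending
    return answer

print(respond("How do I dispute a claim from 2012?"))
```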

Speaker 1:

Okay, this might be a weird question, but from your experience using AI so far, what challenges do you think organizations, or anyone, face when using AI? Is it learning tools like Python to use OpenAI, or is it being aware of the security concerns, like the responsible AI you mentioned?

Speaker 2:

Yeah, it's everything; it depends on the company. I'm working for an insurance company, so we have a lot of personal information, and that's maybe why we're not the first to jump on the bandwagon with generative AI, because we do need to make sure that we're not just sharing personal information with any kind of tool or outside resource.

Speaker 2:

Also, challenges might be as simple as prompt engineering. We do have an internal GPT solution for internal usage, and a lot of people, even developers who are very technical and know a lot about coding but don't know a lot about AI and generative AI, have a lot of problems with prompting. They don't know what a system prompt is, or how come someone asks the same question but gets different results, and it's definitely harder for people who are less technical. So there's a bit of a learning curve for them as well. Python, I don't think is a big problem; I guess it also depends on the company. I try to learn Python as much as I can, but since I don't use it on a daily basis, I keep forgetting it a little, over and over.
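To make the system prompt and "same question, different results" points concrete, here is a small sketch: the system prompt steers tone and scope, and the temperature parameter controls how much repeated runs vary. The Azure OpenAI details are placeholders, as in the earlier sketches:

```python
# The system prompt steers the answer; temperature controls run-to-run variation.
# Endpoint, key, and deployment name are placeholders.
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint="https://YOUR-RESOURCE.openai.azure.com",  # placeholder
    api_key="YOUR-API-KEY",                                   # placeholder
    api_version="2024-02-01",
)

question = "Explain what a deductible is."
for system_prompt in (
    "You are a formal insurance glossary. Answer in one sentence.",
    "You are a friendly agent explaining terms to a brand-new customer.",
):
    response = client.chat.completions.create(
        model="YOUR-DEPLOYMENT-NAME",  # placeholder deployment
        temperature=1.0,               # higher values make repeated runs differ more
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": question},
        ],
    )
    print(system_prompt, "->", response.choices[0].message.content)
```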

Speaker 1:

Python is really useful with OpenAI, but you can use JavaScript as well. I think Python is the default with OpenAI, because you can have the source code for your whole deployment, everything, even though you can also use infrastructure as code, like Bicep and Terraform.

Speaker 2:

Yes, there are lots of options you can use with other languages. If you use the APIs, obviously you can connect them to whatever solution you want; it doesn't have to be in Python. If you want to do something like LangChain, for example, there are two options: a JavaScript option, which I've used with TypeScript, and a Python option. And if you don't know those two but still want to do something similar to LangChain, there is Semantic Kernel, which can be used with C#.
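For flavor, here is a minimal LangChain chain in Python. It assumes the langchain-openai package and the usual AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY environment variables; the deployment name is a placeholder, and the package layout reflects recent LangChain releases, so it may shift:

```python
# Minimal LangChain pipeline: a prompt template piped into an Azure OpenAI model.
# Assumes AZURE_OPENAI_ENDPOINT and AZURE_OPENAI_API_KEY are set in the environment.
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import AzureChatOpenAI

llm = AzureChatOpenAI(
    azure_deployment="YOUR-DEPLOYMENT-NAME",  # placeholder
    api_version="2024-02-01",
)

prompt = ChatPromptTemplate.from_messages([
    ("system", "You answer questions about {topic} in two sentences."),
    ("human", "{question}"),
])

chain = prompt | llm  # pieces compose and can be swapped independently
result = chain.invoke({"topic": "responsible AI", "question": "Why monitor outputs?"})
print(result.content)
```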

Speaker 1:

Okay, I've not heard about Semantic Kernel. Can you explain a little bit about that?

Speaker 2:

Yeah, that's a good question. I haven't worked with it myself; I've attended a couple of sessions. I think it's kind of similar to LangChain, and I have worked with LangChain. It's basically a structure that connects different pieces together: you connect to different data sources, then to your own pieces of code, and then to different large language models. It creates the structure of your solution, where you can just swap different pieces easily.
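Stripped of any particular SDK, the orchestration idea she describes comes down to something like the hypothetical sketch below, where the data source and the model sit behind small interfaces and can be swapped without touching the pipeline:

```python
# The orchestration idea in plain Python: data source and model are swappable
# parts behind small interfaces. All names here are hypothetical illustrations.
from typing import Protocol

class DataSource(Protocol):
    def fetch(self, query: str) -> str: ...

class Model(Protocol):
    def complete(self, prompt: str) -> str: ...

class FaqSource:
    def fetch(self, query: str) -> str:
        return "Refunds are accepted within 30 days."

class EchoModel:
    def complete(self, prompt: str) -> str:
        return f"(model answer based on: {prompt})"

def pipeline(source: DataSource, model: Model, question: str) -> str:
    context = source.fetch(question)                      # step 1: pull grounding data
    return model.complete(f"{context}\n\nQ: {question}")  # step 2: call the model

# Swapping FaqSource for a database, or EchoModel for GPT-4, changes no pipeline code.
print(pipeline(FaqSource(), EchoModel(), "What is the refund window?"))
```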

Speaker 1:

Okay, thanks. So if someone wanted to learn AI, would you recommend that they be a software engineer like yourself first, in order to become an AI engineer or AI scientist?

Speaker 2:

Ideally, yes, it's good to know everything, but it's not a requirement, especially now that natural language is basically becoming the language of interaction with AI. With all the prompting, there are a lot of opportunities for people who don't know how to code at all, with things like Power Platform and low-code, no-code solutions. So I don't think it's a requirement. It's nice to understand machine learning and how generative AI was built, the background of it, and to have software engineering skills, but I don't think it's a requirement.

Speaker 1:

Yeah, because anyone can just learn the AI stuff, and you can pick up Python as you read the source code of, say, Microsoft backends and reverse engineer it to learn it. So, before we close off this episode, I want to find out more about you. Are you going to any tech events in the future?

Speaker 2:

Yeah, I just mentioned at the beginning of the show, I'm planning to be at KCDC, the Kansas...

Speaker 1:

City.

Speaker 2:

Developer Conference. It's in Kansas City, Missouri. It's going to be exactly two weeks from today, I think.

Speaker 1:

Are you speaking, or just attending as an attendee?

Speaker 2:

I'm speaking there.

Speaker 1:

What's the session?

Speaker 2:

I'm going to talk about responsible AI and content safety.

Speaker 1:

Okay, nice. So will it be recorded, or people have to just go in person to see it?

Speaker 2:

Yeah, I think they have to go in person. Also, since I'm a co-organizer of Boston Azure, we run both in-person and virtual events, so people can check those out. We have our Meetup page, we have our website with all the links, and we have a YouTube channel where we post all our recordings from virtual meetups. So it's a good opportunity to learn both in person and virtually.

Speaker 1:

Yeah, I'll put the links in the show notes later. So, is there anything you'd like to add? Any recommended resources for people to start learning AI or Applied AI, anything in general if they want to get started?

Speaker 2:

Yeah, there are so many good resources. I definitely recommend checking Microsoft Learn. They have really good articles, and Microsoft documentation in general is very good. They have links to GitHub where you can see how things were built and see some examples there, and you can use the Learn modules. I think the Build challenge is still running, so people can still join and learn about AI and maybe get discounts on certification exams, which is really cool. And even if you don't want to get certified, it's just a cool way to learn about AI altogether. It's a nice package, so it covers all the bases, all the main things that people need to know.

Speaker 1:

Yeah, plus it starts with the fundamentals. So if you're new to AI, it gives you, like, AI Fundamentals; that's a certification you can get as well. So how can someone get in touch with you if they want to ask any questions about AI or anything we discussed?

Speaker 2:

Yeah, sure. I'm on LinkedIn; people can find me there, but please, please, please write a message saying why you want to connect. There are so many different people there. And I'm also on X, formerly Twitter, so people can reach out.

Speaker 1:

I'd say the best way to connect with her is to go to her user group, the Boston Azure user group, turn on your camera and say hi, and then when you add her on LinkedIn she will know who you are.

Speaker 2:

Of course, yeah, that's a good opportunity. That's one of the best ways. Yeah.

Speaker 1:

Okay. Thanks for coming to this episode, Veronika. Let me just close it off.

Chapter Markers:
Exploring Applied AI in Microsoft
Exploring AI Skills and Resources