Microsoft Community Insights

Episode 6 - Responsible AI with Laurent Bugnion

February 28, 2024

Discover the delicate balance between technology and ethics as we sit down with Laurent Bugnion, a cloud advocate at Microsoft, to unravel the complexities of responsible AI. This episode promises a deeper understanding of how AI should complement, not replace, human workers, and why it is paramount for AI service providers to implement safeguards against misuse and errors. Laurent brings his insights on content creation and the importance of educating the public on AI's role in boosting job efficiency. Expect to walk away with a fresh perspective on AI's place in the workforce.

References
https://learn.microsoft.com/en-us/azure/machine-learning/concept-responsible-ai?view=azureml-api-2
https://www.microsoft.com/en-us/ai/responsible-ai

Speaker 1:

Hello, and welcome to the Microsoft Community Insights podcast, where we share insights and stories from community experts to stay up to date with Azure. My name is Nicholas and I will be your host today. In this podcast we will dive into responsible AI. But before we get started, I want to remind you to subscribe to our podcast on social media so you never miss an episode, and help us reach more amazing people like yourself. In today's episode we have a special guest, Laurent Bugnion. Can you start and introduce yourself, please?

Speaker 2:

Hey, Nicholas, thanks. My name is Laurent Bugnion and I am a cloud advocate for Microsoft. I've been working for Microsoft for six and a half years now. Being a cloud advocate means that I work principally on Microsoft Azure, but of course, these days everybody at Microsoft is doing a lot of AI, so I'm also very interested in that topic and planning some upcoming talks about AI. So that's what I'm doing.

Speaker 1:

Okay, can you explain more about what you do in your current role?

Speaker 2:

Yeah, absolutely. Advocacy is basically a developer relations role, meaning that I create content to teach people to use Microsoft services. I guess that's the easiest way to put it. In my current role, it means that I am the executive producer for a show we have called Learn Live, where we do multiple shows a week about everything we do at Microsoft. If people are interested, we can put the link in the show notes: aka.ms/learnlive. Next to that, I also create content for a range of technologies, mainly around Visual Studio, but also things like .NET, Azure App Services, Azure Functions, Azure Container Apps, Azure Static Web Apps, et cetera. I do that on different platforms, my blog for example, but also by speaking at conferences. You know that conferences are starting to be in person again, so right now we are preparing for our next big conference, which is Microsoft Build in May. I also work on Microsoft Ignite in November and other events throughout the year.

Speaker 1:

Great, thanks. As today's theme is responsible AI: in your opinion and experience, why do you think it's crucial to be responsible with AI in the cloud?

Speaker 2:

Well, the thing with AI is that everybody is interested in AI. I think it's the first time that we see a technology which is so widely mentioned. I've been an engineer for 30 years now, and it's really the first time that I see so many regular people, my mom, my hairdresser, and by regular I mean non-tech people, asking questions about AI. Like, hey, have you heard about this chat GTP? It's actually ChatGPT, but they will mispronounce it as chat GTP because they heard something on the radio. On one hand, it's great, because we see that the technology is wide reaching. On the other hand, it also means that there are some responsibilities, and of course Microsoft is at the forefront of AI. I think we are really a pioneer in that role, with all the partnerships we have with OpenAI and all the services we create and build, with Azure OpenAI, with Azure AI Studio, et cetera.

Speaker 2:

It's very important that you have safeguards in place. We have seen a lot of cases and examples where AI was creating mistakes, we talk about hallucinations in the world of AI, where the model just creates nonsense, and we have some examples where people are misusing AI to create something which is worrying for society, like deepfakes. For example, there was a case just a few weeks ago where apparently a firm was robbed of, I think, 25 million US dollars, because an employee was on a video call and believed he was talking to his boss, when actually he was talking to a deepfake. These kinds of things are dangerous. There are also a lot of ethical concerns about the training that those models receive, and some concerns about how you actually use this technology.

Speaker 2:

AI is a tool, right? You can use a tool for good or for bad, and I think that our responsibility as Microsoft, and as engineers in general, is to make sure we protect the public as much as we can. Obviously, there are always going to be cases where we cannot protect everybody. So this is a critical role at the moment, and a very difficult situation, because the technology is so new that we don't totally understand exactly how those things are happening. And everything is happening very fast, right? The evolution of AI is super fast, probably faster than anything we ever saw before. So these are very complicated questions.

Speaker 1:

I would say so. In your opinion, what are the key principles that guide responsible AI?

Speaker 2:

So I think one thing that is super important to remember is that AI is here to assist people, not replace people, right? This is very important, because we keep hearing it. I think the major fear right now is "the AI is going to take my job", right? And of course, for us as engineers, as coders, we are also at the forefront of that fear, because who knows if the AI is not going to take our job tomorrow, right? It's this kind of fear that we have.

Speaker 1:

Yeah, that's how people see AI.

Speaker 2:

Yeah, those fears are very old, right? When they introduced the steam train, people who were raising horses got afraid, of course, that they were going to lose their jobs, and many of them did, and many of them probably started driving trains, right? So there is a reconversion, and I think this is where we are right now with AI. We need to really teach people how to use AI as a tool, but also that AI is not going to replace them. It's going to assist them and make them more efficient in their role. I think we already see quite a lot of examples of that right now.

Speaker 2:

So this is one thing. The other thing is really to teach people, to explain what's going on right now and how they need to be careful, because if you get a video call these days, you cannot believe anything anymore. Which is sad, but it's really the way that tech is going, and in this case really educating people is super important, making sure that they understand the risks.

Speaker 2:

There is also a mission to educate people who use AI in their jobs, in their roles, because when you use AI, it needs to be a real shift in your enterprise strategy. It cannot be just "oh, we are going to use AI, make a few quick bucks and then be done". You really need to have a strategy which makes sure that you have safeguards in place. This is actually our role at Microsoft in developer relations, to try to explain that to people, to teach them, and this is why we are going on the road with a lot of shows and events right now, and also online, of course.

Speaker 1:

Yeah, so as AI becomes more popular, in your opinion, what's the best way to address bias and fairness when using AI?

Speaker 2:

Yeah, so this one is really a major concern. Bias basically means that, since models are trained on data generated by our society, the biases of our society are also present in the model, and sometimes even stronger, right? And this is really tricky. It can be solved only in part, by really curating the data that you get: making sure, for example, that you get a fair distribution of data, that you not only get data from privileged parts of society but also from less privileged parts, et cetera. But there is only so much you can do. I think a certain amount of bias is going to be in the model anyway, right, and this is dangerous.
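To make "checking for bias" concrete, here is a minimal sketch using the open-source Fairlearn library (pip install fairlearn) to compare a model's accuracy across groups. The toy labels, predictions, and group memberships below are invented purely for illustration; this is not part of any Microsoft service mentioned in the episode.

    # Compare model accuracy across groups with Fairlearn (toy data).
    from fairlearn.metrics import MetricFrame
    from sklearn.metrics import accuracy_score

    y_true = [1, 0, 1, 1, 0, 1, 0, 0]   # ground-truth labels (made up)
    y_pred = [1, 0, 1, 0, 0, 1, 1, 0]   # model predictions (made up)
    group  = ["a", "a", "a", "a", "b", "b", "b", "b"]  # sensitive attribute

    mf = MetricFrame(
        metrics=accuracy_score,
        y_true=y_true,
        y_pred=y_pred,
        sensitive_features=group,
    )
    print(mf.by_group)      # accuracy per group
    print(mf.difference())  # gap between best- and worst-served group

A large gap between groups is one signal that the kind of bias Laurent describes has made it into the model, and that the data curation or the model itself needs another look.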

Speaker 2:

I think the main thing you need to do is, again, use AI to assist whatever you're doing in your job, but not to replace your job. You need to have people checking the output of the AI, making sure the output is fair, and if the output is biased, that this is detected and corrected as soon as possible. As we said, the model itself is going to be biased, but there are some mechanisms you can put in place, such as making sure that the content is safe for use in terms of violence, in terms of hate, in terms of sexual content, et cetera. There are also what we call mitigation layers, where you can watch what's going on.

Speaker 2:

But in the end, the human factor is still super important, because humans are going to be the ones able to detect whether your model is performing correctly or not. So it's really very important to keep humans in the pipeline, and to make sure that those people are experts, that they understand what's going on.

Speaker 2:

Not necessarily that they understand the inside of the AI, because that is very complex and not many people really understand those things, but that when they check the output of the AI, they verify that this is actually what they want and that it is not leading them in the wrong direction. Now, this is of course a very complex problem to solve, and I'm not claiming to have solutions. I'm just saying that we are at a point in time, a point in history, where we have been transitioning a lot of enterprises to AI in the past two years, right? This is really a time where we need to be super careful, to verify everything, to validate what the AI is doing, and to use AI to assist humanity, not replace humanity.
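As an illustration of keeping humans in the pipeline, here is a minimal sketch of a review gate. Every name in it (Draft, risk_score, the review threshold) is hypothetical, invented for this example, and not a Microsoft API.

    # Hypothetical human-in-the-loop gate: low-risk output is released,
    # everything else is routed to a human expert for review.
    from dataclasses import dataclass

    @dataclass
    class Draft:
        prompt: str
        output: str
        risk_score: float  # 0.0 (safe) to 1.0 (risky), from a mitigation layer

    REVIEW_THRESHOLD = 0.3  # tune per application; lower means more review

    def send_to_human_reviewer(draft: Draft) -> str:
        # Placeholder: a real system would queue the draft for an expert
        # who can approve, correct, or reject it.
        print(f"Queued for review (risk={draft.risk_score}): {draft.output[:40]}")
        return "<pending human review>"

    def release(draft: Draft) -> str:
        """Auto-release only low-risk output; humans check the rest."""
        if draft.risk_score >= REVIEW_THRESHOLD:
            return send_to_human_reviewer(draft)
        return draft.output

    print(release(Draft("summarize report", "The report says...", 0.1)))

The design choice is the one Laurent argues for: the AI proposes, a human expert disposes, and only output that clears both the automated layer and, where needed, the human check reaches the user.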

Speaker 1:

So you kind of touched on this next question, but in your experience, how do we embed responsible AI when using Azure OpenAI?

Speaker 2:

Yeah, Azure OpenAI is already adding lots of layers of protection, if you compare it to ChatGPT, let's say, which is really using the model directly without many safeguards in place. In the end, what you need to remember is that, as an engineer, as a coder, you're using AI to build an app. AI is going to be maybe the engine of the app, but around it you need a whole lot of things to make sure that your application is safeguarded. When you use Azure OpenAI, and I'm not here to do any marketing, we already put in place some of those safeguards. For example, the Azure Content Safety service that I mentioned before is going to make sure that people cannot input violent content or hateful content or sexual content, et cetera, but also that the model is not going to output these kinds of content. So there are some things in place. You can already see that if you go into Copilot or into Bing Chat, or whatever Microsoft product is built on top of the model, and you try to input, for example, some violent content, it's going to cut you off immediately and the session will be terminated. We don't really let that go through. But what Microsoft does is one thing; you also have tons of people who are using ChatGPT to build their own applications, and not everybody is putting those safeguards in place. That's why it's important to educate people and make sure that they understand the risks.
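For developers building such an app, the Azure AI Content Safety service can also be called directly. Here is a minimal sketch in Python (pip install azure-ai-contentsafety); the endpoint and key are placeholders, and the response fields follow the 1.x SDK, so details may vary by version.

    # Screen text with Azure AI Content Safety before it reaches the model.
    from azure.ai.contentsafety import ContentSafetyClient
    from azure.ai.contentsafety.models import AnalyzeTextOptions
    from azure.core.credentials import AzureKeyCredential

    client = ContentSafetyClient(
        endpoint="https://<your-resource>.cognitiveservices.azure.com",  # placeholder
        credential=AzureKeyCredential("<your-key>"),                     # placeholder
    )

    def is_safe(text: str, max_severity: int = 2) -> bool:
        """Reject text whose hate/violence/sexual/self-harm severity is too high."""
        result = client.analyze_text(AnalyzeTextOptions(text=text))
        return all(
            (item.severity or 0) <= max_severity
            for item in result.categories_analysis
        )

    user_input = "example user prompt"
    if not is_safe(user_input):
        print("Input blocked by content safety; session terminated.")

The same check can be applied to the model's output before it is shown to the user, mirroring the input-side and output-side filtering Laurent describes.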

Speaker 2:

I keep seeing ads, for example, for virtual girlfriends. It's kind of a big trend at the moment, and I think it is deeply unethical to do these kinds of things. People are taking advantage of AI to make a few quick bucks, but they are not at all considering the feelings of the user. We are trying to make a better world for our users, and I feel that these kinds of applications, which use AI in what is, in my opinion, an irresponsible and unethical way, are definitely not helping humanity. This topic is honestly way too complex to discuss in just half an hour, but I think a key guideline is that, as a provider of an AI service, you have a very strong responsibility, probably more than ever before.

Speaker 2:

If you want to compare: when an engineer builds a bridge, there are all kinds of laws and regulations in place, and controls and safety checks, to ensure that the bridge is going to hold. Of course, this has been developed over decades, even centuries, of building bridges and seeing what works and what doesn't. Now we have a technology which is moving so fast that those safety checks are not available yet. There are no laws yet, and as for guidelines and ethical considerations, well, there are ethical considerations, but they are not widespread in the population right now. People who cross a bridge these days are fairly confident that the bridge is going to hold, but people who use AI don't have this kind of confidence.

Speaker 2:

So I think that education is hugely important, and we really need people who provide AI services to understand that they have this kind of ethical responsibility. Laws are being put in place, but of course laws take a long time, and the technology is changing so fast; this is all accelerating very, very quickly at the moment. So we as a society, and I'm using big words here, have a really strong responsibility to be careful with that. I also don't think it means that we shouldn't use AI. People have been talking about AI since the 1950s, right? So it's 70-plus years that people have been talking about it, but now we have the means to actually create meaningful models, train them, and create applications using them, which is the first time in history that we can do that.

Speaker 1:

Yeah, because AI has been around for quite a long time, like Siri and Google Translate; those are AI as well.

Speaker 2:

Yeah, you could say that. I guess the definition of AI varies based on who you ask, and some people will tell you there is no AI, it's just a big if-then-else, right? I think that now we're at an inflection point. There is the famous Turing test, right, which basically aims at finding out whether a machine can fool you into thinking it is human. And now we are closer to that limit than ever before.

Speaker 2:

I'm not talking about AGI, artificial general intelligence, which I think we are probably safe from for a very long time. I don't think I will see that in my lifetime, to be honest. But generally speaking, we are at the point now where AI really can fool people into thinking that it's sentient, just through certain mechanisms. So even though we have been talking about this for a long time, it's the first time that we actually have examples of it happening. It's a challenging time, but it's also an interesting time, and being at the forefront of it is definitely an interesting time to be alive, I think.

Speaker 1:

Yeah, thanks. So, as this episode is coming to an end, I always want to get to know the people I interview. You touched on this question before, but are you going to any other events in the future?

Speaker 2:

Yeah, so I'm going to be in a few places in March. First of all, in Switzerland we have Ignite Switzerland, which is coming up March 8. This is a one-day event in Zurich, and I definitely encourage everybody to come. We are going to talk about Azure, a lot of things about AI, and different topics. After that I'll be going to Paris for the AI Tour. The AI Tour is a big organization; I think we have something like 12 different dates in different places around the world.

Speaker 2:

I know that after Paris we are going to Berlin. I'm not going to be in Berlin myself, I'm just going to Paris. Then there are also additional events, like Sao Paulo, et cetera. Right after that, I'm going to fly to South Africa. I'm very happy and lucky to go there; I'm going to speak at the AI day in Cape Town and in Johannesburg. That's going to be March 15 in Cape Town and March 18 in Johannesburg, so I hope to see a lot of people there. After that, the next big event is going to be Build in May, in Seattle. A lot of exciting news is going to come out soon about that. But yeah, that is where I'm hoping to meet most people.

Speaker 1:

Okay, thanks. So how can people learn about responsible AI and get in touch?

Speaker 2:

Yeah, so there are really a lot of places where you can go, and I think you'll put the links in the description. For people who want to know more, one place to go is learn.microsoft.com. If you go to learn.microsoft.com and search for responsible AI, we have some training modules available: responsible AI, responsible generative AI, et cetera. Another thing you can look at is a project at Microsoft called the Responsible AI Dashboard. This is a project which is being developed and which is going to give you a lot of guidelines, help you understand the challenges, and also put some safeguards in place. Also, at Microsoft we have a group called SAFE, and this group talks a lot about ethical usage of technology in general and AI in particular; they have a lot of webinars and other resources about responsible AI. So there are definitely a lot of places where you can look and find information.
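For readers who want to try the Responsible AI Dashboard mentioned here, the open-source responsibleai and raiwidgets packages (pip install responsibleai raiwidgets) expose it from Python. Below is a minimal sketch on a toy scikit-learn classifier; exact APIs may vary by package version.

    # Build a Responsible AI Dashboard for a toy classifier.
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import train_test_split
    from responsibleai import RAIInsights
    from raiwidgets import ResponsibleAIDashboard

    df = load_breast_cancer(as_frame=True).frame  # features plus 'target' column
    train_df, test_df = train_test_split(df, test_size=0.2, random_state=0)

    model = RandomForestClassifier(random_state=0)
    model.fit(train_df.drop(columns=["target"]), train_df["target"])

    rai = RAIInsights(model, train_df, test_df,
                      target_column="target", task_type="classification")
    rai.explainer.add()       # model explanations
    rai.error_analysis.add()  # find cohorts where the model fails
    rai.compute()

    ResponsibleAIDashboard(rai)  # launches the interactive dashboard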

Speaker 1:

Okay, brilliant. Thank you for joining this episode, Laurent.

Speaker 2:

My pleasure. Thank you, Nick. Bye.

Speaker 1:

Thank you, bye bye.

Chapter Markers

Responsible AI
Responsible AI and Ethical Considerations
Guest Interview