Microsoft Community Insights

Episode 2 - Navigating AKS with Richard Hooper

January 24, 2024 Richard Hooper Episode 2

Discover the power of Azure Kubernetes Service (AKS) as we're joined by Richard Hooper, the Azure MVP behind the insightful 'Azure Containers Explained.' Richard, known online as Pixel Robots, lends us his seasoned expertise, shedding light on the advantages and hurdles one may encounter when navigating the container technology seas within Microsoft Azure. From deploying straightforward apps to orchestrating complex, multi-tenant systems, Richard's guidance is a beacon for those seeking to leverage AKS for applications that not only survive but thrive during the most demanding seasons like Black Friday.

This episode is a deep dive into AKS with Richard Hooper. He unravels the cluster autoscaler's functionality and introduces Karpenter, the open-source tool that revolutionizes autoscaling by intuitively selecting the optimal virtual machine sizes. If you're wrestling with the challenges of scaling container-based applications or you're an enthusiast eager to understand the inner workings of cloud infrastructure, Richard's insights offer a treasure trove of knowledge. Be ready to be enlightened on how to efficiently meet customer demands and stay afloat in the ever-evolving cloud-based ecosystem.


Speaker 1:

Hello, welcome to the Microsoft Community Insights podcast, where we share insights from community experts to stay up to date with Azure. My name is Nick and I will be your host today. In this episode we will dive into Azure Kubernetes Service, but before we get started, I want to remind you to follow us on social media so you never miss an episode, and it helps us reach more amazing people like yourself. Today we have a special guest, Richard Hooper. Can you start by introducing yourself, please?

Speaker 2:

Yeah, hi, I'm Richard Hooper, also known as Pixel Robots online, so you can follow me on all socials and on the blog at Pixel Robots. I've been an Azure MVP for five years now, I'm a Microsoft Certified Trainer, author of a book called Azure Containers Explained, and so much more. We probably haven't got enough time, so, yeah, we'll leave it at that.

Speaker 1:

Okay. So, speaking about your book, Azure Containers Explained, do you want to give a brief summary of it before we get started? Yeah, of course.

Speaker 2:

Yeah, so the book, which is available on Amazon and Packt, is basically all about the different technologies you can run containers on inside Microsoft Azure. We go into the use case of why you would do it, the pitfalls, the pros, the cons, and then it builds up your application from a simple app all the way up to a multi-tenant, multi-container app. So you've got your front-end, your back-end APIs and stuff like that, and, yeah, you go all the way from running it on VMs all the way up to AKS and further. Okay.

Speaker 1:

And I take it this book is available from Amazon, right?

Speaker 2:

Yes, yeah, go on Amazon, and it's online at Packt as well.

Speaker 1:

Okay, so today's theme is Azure Kubernetes Service, so let's get straight into the questions. What are the most common use cases for using AKS?

Speaker 2:

So, yeah, good question. The most common use case we see is where software companies have got an application that's running on web servers and then other apps, and they've already slightly decoupled them, and they want to be able to scale this quicker without having to add a new virtual machine, setting up IIS or Apache, setting up all that web server config and then adding this on, adding that on. They want to be able to scale their app based on demand.

Speaker 2:

So, like when you've got Black Friday or Christmas sales, they want their app to expand. But when you've got it on virtual machines you need to add more compute or you need to add a new one into the cluster, so it gets a bit complicated and a bit more manual. Whereas with AKS you've got your containers running and you can just say, once I've hit this certain threshold, be it CPU, memory, maybe even HTTP requests or other metrics on Service Bus or RabbitMQ, it will just scale up automatically and then use the virtual machines in the background to automatically scale. And you've got that sort of, I wouldn't say unlimited scale, but near unlimited, based on your quotas in Azure.
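The threshold-based scaling Richard describes maps onto a standard Kubernetes HorizontalPodAutoscaler. A minimal sketch (the deployment name and numbers here are illustrative, not from the episode):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # hypothetical deployment name
  minReplicas: 2
  maxReplicas: 20          # headroom for peaks like Black Friday
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # scale out once average CPU passes 70%
```

With this in place, when the pods fill up the nodes, the cluster autoscaler (or node auto provisioning) adds VMs behind the scenes, up to your Azure quotas.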

Speaker 1:

Okay, so, speaking of scaling up, how does Azure Kubernetes Service differ from Kubernetes services from other providers?

Speaker 2:

Yeah, so it's a good one, because AKS used to just have the cluster autoscaler, which is a Kubernetes thing. All of the cloud providers use the cluster autoscaler, and all of the on-prem Kubernetes distributions use the cluster autoscaler too. And then there's a company called AWS, Amazon, a massive big company, and they created this cool tool called Karpenter, which is basically a new and improved autoscaler in a way. But it's now been open sourced, donated to the CNCF as part of the autoscaler SIG, and it can now work in Azure and with other cloud providers.

Speaker 2:

But what this does differently to the cluster autoscaler is you can tell it, right, I would like to use any VM size in this family, and then it will go off and find the best resources available to you, the best VM size, automatically, based on the workload you're giving it. Whereas with the cluster autoscaler you need to have your node pools already created, and it can scale them up and down, but it can't change the SKU. So Karpenter is new, they call it node auto provisioning inside AKS, and it's definitely worth a look if anyone's wanting to check it out. It's in preview so it can be a bit buggy, but it's good.
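The "any VM size in this family" idea Richard mentions is expressed as requirements on a Karpenter NodePool. A rough sketch of what that looks like with AKS node auto provisioning; since the feature is in preview, the exact API version and label keys may differ from what's shown here:

```yaml
apiVersion: karpenter.sh/v1beta1
kind: NodePool
metadata:
  name: default
spec:
  template:
    spec:
      requirements:
        # Let Karpenter pick any SKU in the D family that fits the pending pods
        - key: karpenter.azure.com/sku-family
          operator: In
          values: ["D"]
        - key: kubernetes.io/arch
          operator: In
          values: ["amd64"]
```

Instead of pre-creating fixed node pools, Karpenter provisions right-sized nodes on demand from whatever matches these constraints.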

Speaker 1:

So, speaking of Kubernetes services from different cloud providers, what are the major benefits of using AKS over other Kubernetes services?

Speaker 2:

For me, I would say the ease of use, the ease of onboarding, the developer experience, and the integration with things like Azure AD, Entra ID. You'll find most enterprises will have on-premises Active Directory in their own tenants or they'll use Entra ID, so AKS can easily integrate into that. But, yeah, the AKS development team and the product management team have put a lot of effort in to make updates a lot easier, so it does pre-validation checks to make sure your APIs aren't deprecated. They've invested a lot of time giving back to the open source community, so they work on the actual Kubernetes source code and all these third-party add-ons, give it back to the CNCF, and then other people and other cloud providers can use it. Whereas other companies like Amazon aren't always as good at giving back to the open source community, Azure really is. So I love that about them.

Speaker 1:

So, speaking again of scaling with Azure Kubernetes Service, how does AKS handle scaling and load balancing?

Speaker 2:

Yeah, so good question again. Obviously you've got the node scaling, so if I have too much workload for the existing nodes, it will add a new node for me, either using the cluster autoscaler or by using node auto provisioning. But when your workloads need to scale, by default it uses the Kubernetes methods, which is CPU and memory and the horizontal pod autoscaler. But you have a nice add-on feature which is called KEDA, or Keda, depending on where you're from.

Speaker 2:

I always call it KEDA, but this is an open source tool which was actually created by Microsoft and Red Hat to start off with, and now it's all open to the community so anyone can contribute. And it can integrate with many, many different resources, so RabbitMQ, file services, Amazon S3, Azure, GKE, Prometheus, you name it, it's probably got a scaler. Depending on what metrics you set, it will add extra pods. It all works on top of the horizontal pod autoscaler as well, so it's using that Kubernetes stuff. It's really good, so definitely check it out. It's keda.sh if anyone wants to have a look.
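A KEDA ScaledObject ties one of those event sources to a workload. A minimal sketch using the RabbitMQ scaler Richard mentions (the deployment name, queue name, and numbers are made up for illustration):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: consumer-scaler
spec:
  scaleTargetRef:
    name: consumer         # hypothetical deployment to scale
  minReplicaCount: 2
  maxReplicaCount: 30
  triggers:
    - type: rabbitmq
      metadata:
        queueName: orders
        mode: QueueLength
        value: "20"            # aim for ~20 messages per replica
        hostFromEnv: RABBITMQ_HOST   # connection string from the pod's env
```

Under the hood KEDA creates and drives a horizontal pod autoscaler from the external metric, which is why it composes cleanly with the built-in Kubernetes scaling.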

Speaker 1:

So you mentioned KEDA and scaling with things like RabbitMQ. What are the best practices for scaling different applications on AKS?

Speaker 2:

Another good question. So, best practice is to always make sure you set your requests and limits for your CPU and memory. Always do that. Even if you're not sure, set it and then you can always go back and check it. Always make sure you have at least two replicas of your application running, and set your preStop hooks and stuff like that, so if it dies it can finish off what it's doing and die gracefully on the SIGTERM. Yeah, and make sure you try and do topology spread.

Speaker 2:

So in Azure you've got availability zones one, two and three. Make sure your workload is spread over all three zones if needed. And another best practice, which is sort of changing for me now with the changes coming from Azure, is don't use persistent storage if you can get away with it. If you need storage, do API calls to Blob storage or something; don't mount Azure Files or Blob into your pod, because it can slow things down, and obviously you want it to be super quick because it's in containers. But they are fixing that. They've got this new service called Azure Container Storage, and we've now got the ability to use zone-redundant storage with it, so the story is getting better. I'm getting closer to being happy to suggest it, but yeah, it's definitely something to keep an eye on.
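Those best practices, requests and limits, two replicas, a preStop hook, and zone spread, can all be sketched in one Deployment. The image, names, and numbers below are illustrative placeholders, not from the episode:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2                     # at least two replicas
  selector:
    matchLabels: { app: web }
  template:
    metadata:
      labels: { app: web }
    spec:
      topologySpreadConstraints:
        - maxSkew: 1
          topologyKey: topology.kubernetes.io/zone   # spread across zones 1-3
          whenUnsatisfiable: ScheduleAnyway
          labelSelector:
            matchLabels: { app: web }
      containers:
        - name: web
          image: myregistry.azurecr.io/web:1.0       # hypothetical image
          resources:
            requests: { cpu: 250m, memory: 256Mi }   # always set requests...
            limits:   { cpu: 500m, memory: 512Mi }   # ...and limits
          lifecycle:
            preStop:
              exec:
                command: ["sh", "-c", "sleep 5"]     # allow in-flight work to drain
```

The preStop sleep is a common pattern to let load balancers deregister the pod before the container receives SIGTERM and shuts down gracefully.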

Speaker 1:

Okay, so the viewers want to find out what inspired you, Richard, to get involved with AKS. Can you tell us a bit more? Yeah, what inspired you to get started?

Speaker 2:

Yeah, again, another good question. You've got some blinders here. So I've got a sysadmin background. I started off on the help desk, worked my way up working on physical servers and then virtual machines, so, like, Hyper-V, VMware. And then I was working at a company where we had a monolithic application and it was hard to get it to scale. Basically, customers would send us lots of information and we needed to scale the app quickly, and it was extremely difficult. So we started looking into containers and that technology and Kubernetes, and it just sort of made sense to me.

Speaker 2:

Like, I've got this infrastructure background, and Kubernetes is just a sort of operating system, in a way, on top of servers, like Hyper-V is, like VMware is. Kubernetes orchestrates my workloads; my workloads are now in a container instead of a virtual machine. You've got Helm charts and values files and all that, which is just like when I had MSIs to install my application with my MST transformation files. So to me it all clicked in my head and made sense. I thought, oh, this is where the world's going, this makes sense to me, let's just go all in on it and see where it takes me, and so far it's been good.

Speaker 1:

Okay, brilliant. So before we wrap up the episode, we want to ask: are you going to any upcoming events?

Speaker 2:

Yes, yes. So we've got a few coming up. We've got KubeCon in Paris, which is coming in March. I'm super, super excited about that. It's where thousands, literally thousands of people all come together, Microsoft, AWS, all these open source companies, to talk about Kubernetes and the cloud native world. So I'm going to be there, and hopefully I'll see quite a few other people there. And then we've got Experts Live in Budapest this year, where hopefully I'll see you again, Nick, because it was awesome to see you there last year in Prague. And then, obviously, we've got KubeCon North America, which is going to be in Salt Lake City around November time. Then, hopefully, there will be Microsoft Ignite, but we're still waiting for them to announce where that's going to be. It'd be good to get back to a full Microsoft conference. That's my background, so I'd like to be back there with the people that I know.

Speaker 1:

That's brilliant. Before we close the episode, how can the audience learn about AKS and connect with you to stay up to date?

Speaker 2:

Yeah, good question again. So to learn about AKS, I'm going to promote myself here: I've got an AKS course on LinkedIn, so go check that out. The Microsoft Learn documentation is awesome. They've also got Learn paths all about AKS and the cloud native world, so check those out.

Speaker 2:

But the best way is just to deploy it and start playing with it. It really makes life easier, I think. And don't think of it as too complicated, because, yes, it's a massive beast, but it's just like VMware or Hyper-V. We had to deal with networking, storage, OS patching back in the day; it's just the same, just a different name for the software. And then, yeah, get on Twitter as well, or X as it's called, and start following people like myself and some people at White Duck, Nico and Phil, and also Wesley and Carl, all on Twitter. And, yeah, just get involved with the community and talk to people. That's another good way to learn. And if you want to reach out to me, you can find me on Twitter, or X as it's called, at Pixel Robots. You can find me on my blog, Pixel Robots. Find me on LinkedIn, Richard Hooper. It's got a weird thing because I've got a cloud and stuff in the name, so just search for me and you'll find me.

Speaker 1:

Okay, brilliant. Is there any last few words that you want to say to the viewers, or anything?

Speaker 2:

Yeah. So, yeah, if you're looking into Kubernetes and you're a bit afraid, don't be afraid, just embrace it. It's not that difficult if you've got a sysadmin background. If you've got a developer background it's a bit harder, but if you just follow the learning paths on Microsoft Learn, you should be fine.

Speaker 1:

Okay, brilliant. Thanks for coming on this episode, Richard, since you're quite busy. The episode is going to be on Spotify in a few days or a few weeks, so stay tuned. Bye-bye.