Robin, Woking discussed on Software Engineering Daily

Automatic Transcript

When picking which machine gets what work, you can do round robin, where you just say, I'm going to put the machines in a loop and give each piece of work to the next one. But quite often you find that a particular machine ends up a bit overloaded if you do that; the workload isn't necessarily evenly balanced. So instead you could look at how much work a machine is doing by counting the number of connections it's currently holding. That's least-connections scheduling, where you try to pick the machine that appears to have the fewest connections. And then there's weighting, so you can say some machines might be faster than others and should get more weight in the algorithm. Weighted least connections is a common scheduling algorithm that you'd find. These things get relatively sophisticated. There are a number of interesting pathological things that can happen here. Particularly in the presence of failures, you can end up sending all your traffic to a machine which seems to be processing everything really fast, because it's just crashing every time you send it a request. That's called black hole routing: basically, all of your traffic goes into this black hole and everything fails. So there are a number of cases where you want to actually have health checks in there to make sure that everything seems to be working okay. And quite often you put limits in place to make sure you don't suddenly do something strange while the system is going through transients.

So coming back to the idea of goals: the goal of the scheduling at the load balancer level that you encountered at eBay was fairness, fairness among different machines, and fairness was an abstraction of many different aspects of allocation among those machines.

Yeah, you're really looking at trying to spread the workload evenly. You've got all these machines and you want to keep them roughly evenly busy. You don't want hot spots where one machine gets overloaded and is slow. You want to quickly identify any machine that's failed, or is struggling, or is in the middle of a garbage collection and has stopped responding, or something like that, and stop sending it traffic. Then you want to be able to roll out new code in ways which are typically controlled by the load balancer. So for example, canary testing: you have an extra machine that has some new piece of code on it, and you want to send a small amount of traffic to it, so you give it a small weight, put it in the load balancer, and see if it looks okay. Give it a bit more traffic, and if it looks good, replace all of the other machines with the new code. That kind of way of walking software into production gradually can be automated.
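As a rough sketch of the ideas described above, and not a description of any real load balancer's implementation, the following Python example combines weighted least-connections scheduling, a health check that keeps a fast-failing "black hole" machine out of rotation, and a canary backend added with a small weight. All names here (Backend, LoadBalancer, pick_backend, the probe callback) are hypothetical, invented for this illustration.

```python
from dataclasses import dataclass


@dataclass
class Backend:
    name: str
    weight: float = 1.0          # higher weight = can take a larger share of traffic
    active_connections: int = 0  # connections this machine is currently holding
    healthy: bool = True         # flipped by periodic health checks


class LoadBalancer:
    def __init__(self, backends):
        self.backends = list(backends)

    def health_check(self, probe):
        # Mark unhealthy backends so a machine that "processes" requests
        # really fast by crashing does not attract all the traffic.
        for b in self.backends:
            b.healthy = probe(b)

    def pick_backend(self):
        # Weighted least connections: choose the healthy backend with the
        # smallest connections-to-weight ratio.
        candidates = [b for b in self.backends if b.healthy]
        if not candidates:
            raise RuntimeError("no healthy backends available")
        return min(candidates, key=lambda b: b.active_connections / b.weight)


# Canary testing: a machine running the new code joins the pool with a
# small weight, so it only receives a trickle of traffic at first.
pool = LoadBalancer([
    Backend("app-1", weight=1.0),
    Backend("app-2", weight=1.0),
    Backend("canary-new-code", weight=0.05),
])

# In a real system the probe would be an HTTP or TCP check; a stub is used here.
pool.health_check(lambda backend: True)

target = pool.pick_backend()
target.active_connections += 1   # decrement again when the request completes
```

Because the selection divides connection count by weight, the canary at weight 0.05 sees only a small fraction of requests until its weight is raised, which is one way the gradual rollout described above could be automated.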
