Blizzard, Wednesday, Ten Hour discussed on .NET Rocks!

.NET Rocks!

Automatic Transcript

They'd scale up to very large instances a day or two before the big event, host the big event, and yeah, they'd be spending a lot of money for those few days, but they've also got a lot of income coming in because of those events. And then once those events are done, they would scale back down to minimal instances to keep the background traffic going. And there's a number of businesses, gaming companies, that do this. Blizzard has a big launch and all of a sudden you're going to have a lot of traffic hammering you. Now you can scale up, and then once interest dies down, you scale back down. That's the flexibility: you're not locked into physical hardware and having to pay for it, because basically with physical hardware you're paying for your peak. You have to provision for the peak, and then you've got that peak-provisioned hardware for the other three hundred and sixty-four days that you don't need it.

Yeah, that's absolutely true, and certainly this is the new era, right? Utility computing — we can buy what we need when we need it, and in a reasonably short amount of time. I mean, how long does it take to move to a higher instance on Atlas?

Minutes. Yeah, we do a lot of interesting things in the back end. Most of the cloud providers will allow you to tweak their hardware once every six hours, right? So if you just need a very dynamic database — we live and die by IOPS. So if you want to bump up from, like, a thousand to ten thousand IOPS, that'll take a few minutes.

Yeah, I still think if I were doing ops coming into Black Friday, it's like: on Wednesday I turn up the knob, right? And then sort of poke at things — everybody happy with the big instance? Are we all good? We don't want to wait until noon on Friday.

Another thing that we were experimenting with — "experimenting," though it's actually in production — is auto-scale. You can turn on auto-scaling on your Atlas cluster.
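The cost argument here — with physical hardware you pay for your peak all 365 days, with elastic scaling you pay peak rates only around the event — can be sketched with some back-of-the-envelope arithmetic. All the prices and day counts below are hypothetical, for illustration only, not real cloud or Atlas rates.

```python
# Back-of-the-envelope comparison: peak-provisioned hardware vs. elastic
# scaling. Illustrates "paying for your peak the other 364 days".
# All rates and day counts are hypothetical.

def peak_provisioned_cost(peak_rate_per_day, days_in_year=365):
    """Hardware sized for the peak costs the peak rate all year."""
    return peak_rate_per_day * days_in_year

def elastic_cost(baseline_rate_per_day, peak_rate_per_day,
                 peak_days, days_in_year=365):
    """Cloud model: pay the peak rate only on event days, baseline the rest."""
    return (peak_rate_per_day * peak_days
            + baseline_rate_per_day * (days_in_year - peak_days))

# Hypothetical numbers: $50/day baseline, $500/day peak-sized, 5 event days.
fixed = peak_provisioned_cost(500.0)              # 500 * 365 = 182,500
elastic = elastic_cost(50.0, 500.0, peak_days=5)  # 2,500 + 18,000 = 20,500
print(f"peak-provisioned: ${fixed:,.0f}, elastic: ${elastic:,.0f}")
```

Even with made-up rates, the shape of the result is the point: the fixed model pays the big-instance price every day of the year.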
The way it works is, if it sees a certain peak load for an extended period of time, it will auto-scale up to the next instance size.

What's the thing you're measuring? Is it number of transactions, or is it IOPS?

CPU, mostly. If you're bombarding the instance right now and the CPU is pinned — yeah, your processor is pinned pretty hard — that's the bottleneck for MongoDB. I think disk latency is also a bottleneck, remember. I think you can also do it on IOPS; it depends on your workload. If you're doing a very heavy write workload, obviously disk IOPS is going to be your primary driving factor. If you're doing heavy-duty aggregations, where you're doing grouping expressions, that's where you can run into CPU resource limits — CPU runs hot. We almost never see network run hot. It depends on your workload: it can be query-heavy with complex queries. If you're just searching by, essentially, primary key — the _id field — that takes virtually no CPU resources. But if you're doing a lot of sorting and aggregation in server memory, that can take up CPU.

You can actually have Atlas auto-scale so that it will bump up to the next tier, up to a maximum that you set. And then if you remain below a certain value for a period — previously days, though I think we're getting a bit more aggressive, it can be on the order of hours — if it sees your CPUs are really running low and you're not using a lot of I/O, then it'll drop your tier down again, down to a certain minimum set point. We're very much experimenting with this for customers to optimize their costs on Atlas.

If you could get into daily rhythms with it — like, if you're a retail outlet that's streaming data from transactions, but there's a ten-hour window where every store is closed — being able to turn that knob all the way down for what is more than a third of the day would add up.

Oh yeah, for sure. What about the long-term storage side of things, like archiving? I find that —
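Atlas's actual auto-scaling internals aren't spelled out here, but the policy described — scale up quickly after a sustained high-CPU window, scale down only after a much longer quiet window, bounded by a user-set minimum and maximum tier — can be sketched as a toy decision function. The tier names, thresholds, and window lengths below are all hypothetical, not Atlas's real values.

```python
# Toy sketch of the tier auto-scaling policy described above: scale up to the
# next tier after a sustained high-CPU run, scale down only after a much
# longer low-utilization run, bounded by user-set min and max tiers.
# Tier names, thresholds, and window sizes are hypothetical.

TIERS = ["M10", "M20", "M30", "M40"]  # ordered instance sizes, small to large

def autoscale_decision(current, cpu_samples,
                       up_window=6,     # consecutive samples that must be hot
                       down_window=24,  # consecutive samples that must be quiet
                       hi=0.90, lo=0.30,
                       min_tier="M10", max_tier="M40"):
    """Return the tier to run next, given recent CPU utilization (0..1)."""
    i = TIERS.index(current)
    recent_up = cpu_samples[-up_window:]
    recent_down = cpu_samples[-down_window:]
    # Scale up: the last `up_window` samples are all above the high-water mark.
    if (len(recent_up) == up_window and all(s >= hi for s in recent_up)
            and i < TIERS.index(max_tier)):
        return TIERS[i + 1]
    # Scale down: a much longer run entirely below the low-water mark.
    if (len(recent_down) == down_window and all(s <= lo for s in recent_down)
            and i > TIERS.index(min_tier)):
        return TIERS[i - 1]
    return current  # otherwise, stay put

# Hypothetical usage:
print(autoscale_decision("M20", [0.95] * 6))   # sustained hot -> M30
print(autoscale_decision("M20", [0.10] * 24))  # long quiet run -> M10
print(autoscale_decision("M20", [0.50] * 24))  # in between -> M20
```

The asymmetry (short hot window, long quiet window) mirrors the trade-off in the conversation: react fast to load spikes, but be conservative about giving capacity back.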
That's an excellent point. SQL Azure and these kinds of products, as datasets get big, they actually get really expensive, and you kind of want to carve off old data and put it away, right? So what is the cheapest storage out there in the world today? Blob block store. Yeah, whether it be S3 or Azure blob store, that is by far the cheapest per gigabyte that you can get. So one of the things that we enable is Atlas Data Lake, where you can actually pull off your old archive data into an S3 blob store or an Azure blob store. And the nice thing about that is, although your performance isn't great, it's still queryable, right? It's still there.

Is it going to be slower, because you don't have the full indexes?

It's slower, but you can actually issue MongoDB queries against this blob store, which is quite cool. Yeah, it's still there and available for reporting purposes, but you're not incurring the cost of having it hot and available immediately.

I should mention that Azure SQL does make backups, and so if you screw something up, you can just go online into the portal, find the last backup, and restore it to another SQL database, and you're off to the races. That is a very nice feature. But you know, you pay for it.

Yep, we have automatic backups as well in Atlas, so you can establish a backup schedule. The backups are snapshotted into Azure blob storage or S3, and then you can pick a snapshot and restore it either to your own cluster or to a new cluster. So you have similar functionality. Because companies live and die by their data.

Yep. And you talk about keeping CTOs calm — lose data, like, that's bad. CTOs get upset when you lose data. So having visibility into data that's old is great, but so is reducing the cost of storing old data, because the traditional solution is to just delete data, right? Or you build archive systems.
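The archiving pattern described — documents past a certain age move to cheap blob storage (still queryable, just slower), everything newer stays on the hot cluster — can be sketched as a simple age-based partition. The field name `created_at` and the one-year cutoff are hypothetical choices for illustration, not a documented Atlas policy.

```python
# Sketch of an age-based archive policy like the one described: documents
# older than a cutoff are routed to cheap blob storage, newer ones stay hot.
# The `created_at` field name and cutoff are hypothetical.
from datetime import datetime, timedelta, timezone

def partition_for_archive(docs, max_age_days=365, now=None):
    """Split documents into (keep_hot, archive) lists by a created_at field."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=max_age_days)
    keep_hot, archive = [], []
    for doc in docs:
        (archive if doc["created_at"] < cutoff else keep_hot).append(doc)
    return keep_hot, archive

# Hypothetical usage: one recent document, one two-year-old document.
now = datetime(2021, 6, 1, tzinfo=timezone.utc)
docs = [
    {"_id": 1, "created_at": datetime(2021, 5, 1, tzinfo=timezone.utc)},
    {"_id": 2, "created_at": datetime(2019, 1, 1, tzinfo=timezone.utc)},
]
hot, cold = partition_for_archive(docs, max_age_days=365, now=now)
print([d["_id"] for d in hot], [d["_id"] for d in cold])  # [1] [2]
```

The appeal over the "just delete it" approach from the conversation is that the cold partition remains queryable for reporting, at blob-storage prices.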
