Making Spark Cloud Native at Data Mechanics

Automatic TRANSCRIPT

Your host is Tobias Macey, and today I'm interviewing Jean-Yves Stephan about Data Mechanics, a cloud-native Spark platform for data engineers. So Jean-Yves, can you start by introducing yourself?

So yeah, I'm Jean-Yves. I'm the co-founder of Data Mechanics. Prior to Data Mechanics, I was a software engineer at Databricks, where I led their Spark infrastructure team. So I've been working with Spark as an infrastructure provider for quite a few years now, and I'm pretty passionate about it, so I hope I have some interesting stories to share with your audience.

And do you remember how you first got involved in the area of data management?

Yes. So I studied engineering in France and then went to the US, to Stanford. At the time, machine learning was everyone's obsession. I remember the pretty popular machine learning class by Andrew Ng had one hundred thousand students registered, but it was actually a separate class that interested me, Mining Massive Data Sets, which was my introduction to distributed computing. I found that this area was a great mix of software engineering problems, algorithms, and architecture problems. And then I had the opportunity to join Databricks as a pretty early software engineer, just out of college, and that was an amazing experience. That's how I went all in on that area.

And so you mentioned that you had that experience of running Spark at Databricks, and now you're running it for other people at your company, Data Mechanics. I'm wondering if you can start by giving a bit more of an overview of what it is that you're building at Data Mechanics, and some of the story behind what made you decide to set out on your own and run your own business to help provide this service to more people.

Yeah, of course. So Data Mechanics is a cloud-native Spark platform for data engineers. Our platform is deployed on a Kubernetes cluster that we create and manage for our customers inside their cloud accounts. So the contract with our users is: they develop Spark code, they submit it, and then we take care of scaling the infrastructure, tuning the configurations, collecting the logs, and making them available in a friendly user interface.
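To make the workflow Jean-Yves describes a bit more concrete, here is a rough sketch of what submitting a Spark application to a Kubernetes cluster looks like with the standard spark-submit tool. This is generic Spark-on-Kubernetes usage, not Data Mechanics' own submission interface (which isn't detailed in this conversation), and the API server address, container image, job name, class, and jar path are placeholder values.

    # Point spark-submit at the Kubernetes API server and run the driver inside the cluster.
    # The executor count, container image, and application jar below are placeholders.
    spark-submit \
      --master k8s://https://<kubernetes-api-server>:443 \
      --deploy-mode cluster \
      --name example-etl-job \
      --class org.example.ExampleEtlJob \
      --conf spark.executor.instances=4 \
      --conf spark.kubernetes.container.image=<your-spark-image> \
      local:///opt/spark/jars/example-etl-job.jar

In a managed setup like the one described here, the platform would fill in and tune most of these settings (executor counts, memory, and so on) on the user's behalf rather than leaving them to be specified by hand.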
