Interview With Gideon Mendels, CEO Of Comet

Automatic Transcript

Interviewer: We're so excited to have with us today Gideon Mendels, who's the CEO and co-founder of Comet. Hi Gideon, and thank you so much for joining us today.

Gideon: Hey, thank you so much for having me. I'm super excited to be here today.

Interviewer: We'd like to start by having you introduce yourself to our listeners and tell them a little bit about your background and your current role at Comet.

Gideon: Definitely. So, as you mentioned, I'm the CEO and co-founder of Comet. For the listeners who don't know, Comet provides a self-hosted and cloud-based machine learning platform that essentially allows data science teams to track, compare, explain, and optimize experiments and models. Comet supports some of the biggest and best enterprise machine learning teams in healthcare, tech, media, financial services, and other industries. I actually started my career as a software engineer about sixteen years ago, and I shifted to working on applied machine learning about seven years ago, when I was a grad student working on speech processing and natural language processing. After that I had my own startup, again in the NLP space, and after that I was at Google, where I was working on deep learning research. Specifically, we were working on detecting hate speech in YouTube comments using deep learning models.

Interviewer: Yeah, that's really a great application, and a good fit for automated systems in general: it's very hard for humans alone to manage the mountain of tasks needed for moderation, so that's a great applied use of AI. It's cool that you bring that background to this. So let's bring us to now. I know that a lot of what you're doing with Comet is helping people build better models, iterate on them, and manage them. So maybe you could tell us and our listeners: what are some of the challenges that organizations face today when they're trying to get machine learning models into production?

Gideon: That's a great question, and I liked that you
used the word "build" rather than "deploy," because from our view at Comet, working with these very business-focused machine learning teams, the biggest challenge in getting models into production isn't the actual deployment or DevOps problem behind it. It's really building a model that's good enough to justify deployment in the first place. When we think about machine learning, it's actually very different from software engineering, both from a process perspective and in the tools. Machine learning is an iterative process, and there are many pitfalls along the way: whether you're optimizing for the wrong metric, or you're leaking your target, or you're just working on a dataset that doesn't have enough signal. So eventually it really comes down to building a model that meets the business KPIs, and most of the teams out there are really struggling with that. Like I mentioned, there are a lot of things that can contribute to it, but a big part is the lack of processes and tools for doing these things in a safe and predictable way.

Interviewer: You know, it's great that you gave that explanation. I know that a lot of companies are now starting to build models and think about how they can incorporate machine learning into their company. So why is it important to have a tool for data scientists and teams to track, explain, and optimize experiments and models?

Gideon: That's an excellent question, and I think a lot of companies learned it the hard way, but it's really impossible to run a team successfully without a system of record for your work. That's true for most job functions, not just machine learning, whether it's GitHub for software teams, Salesforce for salespeople, HubSpot for marketing, and so on. You really need a central system of record to manage these processes. And, like other systems of record for other job functions, once you have that,
in our case an experiment and model management platform, it provides value to everyone the machine learning engineer works with. So whether it's the data scientist who's looking to track their experiments, compare them, and understand why one model is performing better than another, or whether there is bias or an issue with a model; or the software engineer who needs the actual binary for deployment; all the way to the manager who wants to track and have visibility into the team's progress. And eventually it means maintaining all of that institutional knowledge about research, experimentation, metrics, and models within the organization, and not in people's personal notes, for example.
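[Editor's note: to make the "system of record" idea concrete, here is a minimal illustrative sketch. This is a toy, not Comet's actual API; all names here (ExperimentTracker, ExperimentRecord, log_parameter, log_metric, best) are hypothetical. It shows the kind of bookkeeping Gideon describes: logging each run's parameters and metrics to one central place so runs can be compared later instead of living in personal notes.]

```python
# Illustrative toy "system of record" for ML experiments (not Comet's API).
from dataclasses import dataclass, field


@dataclass
class ExperimentRecord:
    """One training run: its name, hyperparameters, and result metrics."""
    name: str
    params: dict = field(default_factory=dict)
    metrics: dict = field(default_factory=dict)

    def log_parameter(self, key, value):
        self.params[key] = value

    def log_metric(self, key, value):
        self.metrics[key] = value


class ExperimentTracker:
    """Central store so every run is recorded in one shared place."""

    def __init__(self):
        self.experiments = []

    def start(self, name, **params):
        record = ExperimentRecord(name, params=dict(params))
        self.experiments.append(record)
        return record

    def best(self, metric):
        # Compare all runs on a shared metric, e.g. validation accuracy.
        return max(self.experiments,
                   key=lambda e: e.metrics.get(metric, float("-inf")))


tracker = ExperimentTracker()

run_a = tracker.start("baseline", lr=0.1)
run_a.log_metric("val_accuracy", 0.81)

run_b = tracker.start("tuned", lr=0.01)
run_b.log_metric("val_accuracy", 0.87)

print(tracker.best("val_accuracy").name)  # tuned
```

A real platform adds persistence, code and dataset versioning, and a UI on top, but the core value is the same: comparisons like `tracker.best(...)` only work when every run is logged consistently in one place.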
