17 Burst results for "Michael Berger"

"michael berger" Discussed on AI in Business

AI in Business

07:48 min | 2 weeks ago

"michael berger" Discussed on AI in Business

"And then we have also the retraining. So in case there's a performance shortfall. How fast can I ultimately come back to a robust performance? So how fast can I retrain? What procedures and methods to a use for that? So this was something what we are looking at. And in each step, we want to make shreds statistically sound. And then it derived really data from a statistically sound testing machine to then derive this distribution that we would use to set the guarantee levels and then other price for this guarantee. Got it. So two things to touch in on that. Number one, clearly you have to have a lot of pretty technical folks on your team to even know the right questions to ask, right? To ask the question of, okay, when this does go catastrophically wrong, what are our protocols to getting this back on track? And then figuring out how risky those are, how not risky those are. And being able to take that seriously, you need a lot of subject matter expertise, and you need a lot of data science expertise, my guess is there's a lot of techies talk at a techies to price these particular products out. Is that right? Yeah, that's right. So we have a team of research scientists that we build up here in Palo Alto as well as in Munich at the headquarters. And they are working with the data science teams and the research scientists at our clients at the AI providers. So those colleagues pay have involved, but also then you have different to my experts as you have said. So in case we would, for example, back a company offering machine learning model to classify malware. So in the cybersecurity space, we would involve our cybersecurity experts in this one of the biggest cyber reinsurers in the world. So we have a lot of domain experts there. Another case might be that we are working with an AI company in the agricultural space. Then we go to tie in our agricultural insurance colleagues in their also looking at risk, more from actual perspective, but they can also bring in interesting interesting subject matter expertise there. So as a global reinsurance company active many different fields of insurance and different areas, we have a lot of domain experts which we can bring in. Yeah, yeah, but it really has got to be combined with the subject matter experts, right? And I imagine it's a little challenging when if we have a bunch of agriculture folks who literally don't even know what data science is, and then we have a lot of hardcore data scientists out of the really good schools in California, have never been on a tractor in their life. That's a lot of gaps to bridge, but it would seem somewhat evident that like many other kinds of AI projects we need a lot of voices to make this work and the tech to tech feels like a big part of the team that you're building up out there. The other question I've got on this briefly is around how nimble and frequently these kinds of systems are updating and what your process is to update that. So there's some areas of your insurance products. I'll just give you a random example. Maybe it's like tripping and falling in a shopping mall, right? That kind of insurance until people invent some new way to walk, that kind of insurance will be somewhat predictable year upon year. We know the foot traffic, we know the number of people that fall down. It's probably somewhat similar the way we price that as maybe we would 30 years ago. With AI, 6 months from now, what is CPUs might be GPUs? 
6 months from now, 20% video traffic being what we're analyzing goes to 80% video traffic that we're analyzing. Or we have new kinds of mobile devices that are hooking into the thing or now we want to use Kubernetes somewhere. This stuff is the opposite of a predictable place. And of course, those are all going to affect risk, and they're all going to affect how you assess risk. What is your process for checking in on these products and I assume it would have to be much more nimble in hands on than tripping and falling at a shopping mall. Yeah, sure. So of course, machine learning models are updated frequently in order to maintain performance. And that's something would be what we want our clients to do because we ultimately want to maintain this performance levels. So what we do is really defining more process with our AI clients to say, okay, how do they do the updating? And what kind of testing regime do they do to check that the new model that they want to push into production? It's really robust enough and can really replace the existing model. And how do they need to measure the performance of that? So this is more really a description of the process in a definition of the process that they need to follow in order to update the models. And then as long as they follow this process, as long as the performance metric matrices are there, achieved, then declines can update to smalls. And this is also in our interest to do so. Got it. Okay, so you have to have some kind of fail safes and agreements in place to say, okay, as we're updating, we'll have to be notified. We'll have to bring in our techies again to talk to your techies and see how this new GPU stack might affect the way that we process these kind of things. And then you guys take that into account. So it's very living and breathing product in that respect. Yes, yeah. Yeah, very, very hands on stuff. I imagine cyber is similarly the case. Well, you know, it's only once every now and again that we get to touch on a topic that in, you know, 8, 900 episodes, we haven't touched on before, but Michael, this was a lot of fun. And I think hopefully our audience has seen the beginnings of a much bigger trend for where insurance companies are going around emerging tech. So thanks so much for joining us today. Thank you for having me then. So that's all for this episode of the AI and business podcast, a big thank you to Michael Berger for being able to join us and a big thank you to you as our listener. It is always a pleasure to be able to talk with folks who are heading up AI at giant multinational corporations we love talking to startups too, but it's important to get the enterprise take and Michael did a great job here today. Two quick notes as we wrap up. Number one, if you've enjoyed these episodes on the AI and business podcast where we cover financial services, we have a whole podcast dedicated just to financial services, so insurance, banking, and wealth management. It's called the AI and financial services podcast. I appreciate the heck out of you as a listener of this show. And if you like use cases and you like trends and you want to just get a feed of cutting edge use cases and trends in financial services, you can go into Google type in the AI and financial services podcast or go into Apple podcasts, Spotify, wherever you listen to podcasts, you'll find it, and make sure that you're a subscriber, there as well. We have episodes coming up around applying AI for financial services, compliance and regulation. 
We have episodes coming up on lending on predicting pricing. There's a lot to cover in front serve. In fact, it is an endless stream unto itself, and if you're interested in more of that, then be sure to subscribe to the AI in financial services podcast, as I mentioned, you can find it anywhere. And if you're interested in specifically more in insurance, we do have a free PDF brief called the AI and insurance executive cheat sheet. This is a highlight of key insurance AI applications and use cases in addition to some key terminology. So if you want a quick up to speed breakdown of what you need to know in terms of terminology and in terms of basic use case breakdowns for the entirety of insurance, you can go to EME RJ dot com slash INS one that's INS like insurance and then the number one and you can download that free PDF brief called the AI and insurance cheat sheet. Again, it's EME RJ dot com slash INS one. So you can sign up there for that particular PDF and otherwise appreciate you being here as a listener. Next Monday, as usual, we're gonna be kicking off with our AI success factors series our short ten to 15 minute episodes covering one particular AI use case and one particular factor that allowed that use case to achieve an ROI. So I hope you start your week off right with us next week. I.
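As a rough illustration of the pricing logic described above, deriving a distribution for a performance metric from a statistically sound testing regime, setting a guarantee level from that distribution, and then pricing the guarantee, here is a minimal Python sketch. The bootstrap procedure, the linear payout on any shortfall, the exposure figure, and the loading factor are all assumptions for illustration, not Munich Re's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical test-set outcomes: 1 = model error, 0 = correct prediction.
test_errors = rng.binomial(1, 0.04, size=5_000)

# Bootstrap the error rate to approximate its sampling distribution,
# standing in for "a statistically sound testing regime".
boot_rates = np.array([
    rng.choice(test_errors, size=test_errors.size, replace=True).mean()
    for _ in range(2_000)
])

# Guarantee the error rate stays below, say, the 99th percentile of that distribution.
threshold = np.quantile(boot_rates, 0.99)

# Illustrative payout: linear in the shortfall beyond the guaranteed level.
exposure = 1_000_000          # assumed insured economic value
payouts = np.maximum(boot_rates - threshold, 0) * exposure

# Premium = expected payout plus an arbitrary loading for risk and expenses.
premium = payouts.mean() * 1.5

print(f"guaranteed error rate <= {threshold:.4f}, indicative premium ~ ${premium:,.0f}")
```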

Palo Alto Munich Michael Berger California Michael Apple Google
"michael berger" Discussed on AI in Business

AI in Business

08:23 min | 2 weeks ago

"michael berger" Discussed on AI in Business

"Feels inherently fuzzy, but how do you solve for that? Yes, sure. I think what we was enforced is that there's a clear agreement between the client, so the end user of the AI model and the provider of the AI model. So that's really count for the end user. So how does the end user really look at the performance of an AI mode? Meaning the toxic content moderation space was ultimately basic opposition. We call rates on a monthly basis that we used. In other cases, it might be, for example, in a case where we backed machine learning model doing four detection in the credit card space was really ultimately adjust the false negative rate. And so it very much depends on what the end user really cares for. So how does the end user really look at the performance? And then we take this statistical mesh and say, okay, based on the statistical measure based on our risk assessment of the data science process based on our assessment of a statistically sound testing procedure for the machine learning model, what is the probability distribution for this metric for the machine learning model? So how does it vary? Because at the end of the day, a machine learning model will always exhibit some error probability. And some probability of falling short. So for us, it's in the task to find a good representation of this probability distribution. This is then what we used to set a guarantee threshold, then to set also our premium, or it would charge for guaranteeing this robustness level of the AI. So in this respect, we bring in also some kind of honesty in terms of yet this is the claim in terms of the robustness of the AI that we can back with this kind of costs. And if this is good enough for the user, then the user can have crossed in that. And if it's not good enough for the user, then might not the machinery model might simply not be fit for purpose for what the user wants to do. Okay, I'm sure there's a chance that there's a dumb element to this question, but I want to ask it because it feels warranted. So in any the insurance world, it's on ecosystem, and there's, of course, potential incentives, you know, fraud is a thing, right? Fraud is real. Post COVID, you know, a lot of companies kind of saw an uptick in this. And in terms of AI solutions in insurance, underwriting and fraud prevention, as far as our research goes, something like 40% of the investment of the big guys in the United States. I mean, mostly U.S. firms, I'm sure it's actually somewhat similar in Europe. Big dollars there and big issues there. When it comes to this kind of a solution, I'm almost thinking, okay, let's say I'm this large social media firm, I want to detect this kind of negative content, and if my model is an effective at a certain level, I'd like to be, you know, make sure that I'm kind of compensated for that. If there's some catastrophic risk, what are the thresholds to not get a number of different insurance programs for a number of statistically significant things. And then be like, man, you know, we sure could use some more dollars, you know, next next quarter, you know, is there some way to kind of put a small wrench in the gear so it drops a little bit below that threshold and we can kind of cash in on this thing because these systems feel so open ended to small influences that could just change that risk number. Like you said, you want to hold it down, but you're talking about detecting content. There's some new video format that Instagram just invents, right? They invent overnight. 
Okay, how does that factor into the model? Do we say, oh no, no, keep out that kind of data because it's novel since before we wrote the policy. Can we do that? I mean, this feels endlessly complex in terms of loopholes. What are the incentives to not say, hey, it went below threshold. We'd like to be paid out on this. I mean, it feels complicated that you let me know. Yeah, sure. I mean, yeah, the payout functions really, let's say, a function in terms of the shortfall. So in case the shortfall is just a little, and also the payout would be just a minor amount. Case they pay at the shortfall is big. Then also the payout is bigger. So it's really taking into account how well does the AI perform and how far does it fall below expectations? And below the guaranteed levels. So this is one way how to ultimately address that. Another is simply a matter of how do you set up the testing in production. So how can ultimately a client say, okay, my AI model that I purchased there or the predictions that I want to be getting from this from the service, they don't really work well. So in most cases, our client, this is the AI provider, will also have ultimately locked the predictions, and then we'll also get access to the ground truth. So there, at this point in time, the AI provides our client can already do a check and see okay. How is the performance evolving? And of course, also, when it was in case there are drifts in my data changes in the correlation structures and then taking appropriate measures. So we don't need to wait until the month is over to see okay how it's how it's working, but we can constantly ultimately see how the system is working out and then taking measures to that. It feels like there's so many ways to stream in an additional source of this kind of data or that kind of data. And maybe you guys are hooked into some of the core data in your arrangement, but can we see everything that's flowing in at exactly what source and its verification? The black box element of this feels wild. And it's a new wild west domain, but it's really, really interesting. I guess the other question that's relevant here just being mindful of our time, but that I know I want to explore is how you come up with these kinds of insurance products. So if I'm hearing you correctly, it's not just loss of life. It's anything that's a big economic consequence or a big consequence generally if things underperform below a certain threshold. So if it's a big enough economic investment, we might want to think about insurance for it. First time I've ever said that on the show, but maybe that is the future here. You guys are certainly driving that forward out there in Palo Alto. How do you take a look at such an application? Maybe it's the social media one. Maybe it's one for autonomous vehicles. If they, you know, hit a grandma at some point. And come up with that realistic risk threshold and then the agreement around payment and terms. I mean, what's the basic process here? Yeah, sure. And I think this also goes into the question of what you asked before in terms of the, let's say, the scope of balloon or bears out in my AI model really valid. Because it might be truly that, yeah, there is data or changes there, which the AI was not really trained to or tested on, and you can not really take care of. Of course, in the guarantee there would be then a limitation to say, okay, you can use the AI for those use cases within those specifications. 
And in case you would use the machine learning model outside those specifications, we can not guarantee the performance. So this is something we'll be taking care of. But the information where ultimately the machine learning model can be used and where really delivers a robust predictive performance. And that's something that we figure out together with the AI provider. So we are looking into the ultimately every step of the data science process. So from the data generation and the annotation of the data and making sure that the annotation process really remains stable throughout the lifetime of the model to the model architecture that was chosen the training procedure and then also the especially the testing procedure that was procedures statistically sound to the monitoring and the retraining of the machine learning model. So it's the monitoring really appropriate entry statistically sound for the respective use case at hand might be that an might need to use more complex and better statistical methods to monitor my AI model besides just looking at how the performance fluctuate for specific use case because I must expect a higher degree of change in my data there or might not be the case. And then it might be sufficient to ultimately just monitor the performance of the machine learning model. So this is very much use.
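A minimal sketch of the shortfall-based payout described above: the further the observed monthly metric falls beyond the guaranteed level, the larger the payout, up to a cap. The metric, the guaranteed level, the payout rate, and the cap are hypothetical numbers chosen only to make the shape of the function concrete.

```python
def guarantee_payout(observed_metric: float,
                     guaranteed_metric: float,
                     payout_per_point: float,
                     cap: float) -> float:
    """Payout that scales with how far an error metric (e.g. a monthly
    false negative rate) exceeds the guaranteed level, capped at a limit."""
    shortfall = max(observed_metric - guaranteed_metric, 0.0)
    return min(shortfall * payout_per_point, cap)

# Guaranteed false negative rate of 2%, observed 3.5% this month.
print(guarantee_payout(0.035, 0.02, payout_per_point=2_000_000, cap=250_000))  # 30000.0
```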

U.S. Europe Palo Alto
"michael berger" Discussed on AI in Business

AI in Business

06:56 min | 2 weeks ago

"michael berger" Discussed on AI in Business

"What kinds of projects are big enough to make this relevant? Talk to us a little bit about how this even gets on the road map. Yes, sure. So ultimately, a company or if a company is pulling an AI model and then selling the predictions of the AI model, then there's always the question okay, how well is the outperformance performing in how robust is it? And again, in case the AI really matters for the end user in the predictions really matters. So in case the predictions of the AI being wrong to really lead to some strong financial consequences, some financial downside for the user, then predictive performance matters. In this respect also an insurance of the predictions in the predictive performance of the AI can matter. And here ultimately on what we do and what we try to achieve is to really establish a trust worthy nest in terms of how robust the AI. So what we want to do is to work with AI providers and to structure guarantees to guarantees should benefit the end users, and they ultimately should tell the end user that yet the error rate of the AI will be below a certain threshold. And in case they are then produces errors above the threshold, then the end user would receive a financial compensation. And for the end user, the value is that either the AI will work as expected and then the end user will receive the operational benefits of a better working AI and also the economic benefit from it or in case the I force short relative to the promised performance than the usable receiver financial compensation. Neither way the end user will have a guaranteed return on the investment of the AI project. Got it. Okay, so this, we're going to get a little bit into the insurance business today. It sounds like. And so I'm really excited to unpack that. But let me share some of the ideas that immediately come to mind about where this would be relevant. And I'm sure you're going to break that mold and tell me about other things. But when I think about, okay, the insuring of AI immediately my mind goes to autonomous vehicles or transportation or things where the risks are lives. You know, maybe it's medical to potentially diagnostic tech of some kind. Maybe there's some guarantees there. I'm also maybe thinking about financial kind of applications where if we make the wrong trade or something along those lines, we can have gigantic negative financial consequence based on the decisions of an algorithm. I immediately go to things of that level of consequence, but maybe it's much more mundane. Maybe it's some chat bot application for fashion retailers that want to sell more makeup. I mean, talk to me about, you know, I would think grandiosity of impact when I think I want to pay for actual insurance, tell me if that's a wrong assumption and what some examples are. Yeah, sure. I mean, it's clearly relevant. But also for your applications, which accompanies just using to, let's say, reduce costs for optimize our business processes. But in case the I wouldn't work well there, that ultimately I destroy some economic value. I'm ultimately not work with the AI might not work as well as expected, then I might need to ultimately shut down the process. I might need to bring in workers to take up the tasks of the AI, all of this course is costs. And even those kind of scenarios, which are purely operational and might also be just purely to optimize processes within the company, they are also guaranteed at value. 
Simply by saying, okay, I'm really securing your investment in this project in ultimately adopting an AI solution. So an example, there could be if we are looking at a company, which is building an AI model in order to toxic content moderation. So it's a copy we worked with, which is building machine learning models to classify toxic content on social media platforms. So for example, whether host might be weapons sale or trucks sale post and that's of course quite important for the users, but it's also quite important for the social media platforms in order to moderate the content on the platform effectively. So they would like to have a solution which puts the critical posts, a high on the radar on the moderation teams. And on the company we worked with, put an AI solution in place, charting with the promises that. And what we are doing here is that we work with the company to put a guarantee place guaranteeing certain efficiency of the AI. In case the efficiency levels are not achieved, then the economic benefit in terms of taking work off the moderation teams for the social media platforms is also not realized. They might still need to have more moderators might be involved and are working more hours. And so the benefit of the AI project is not realized. And then this situation are the guarantee payout. And cover the economic shortfall. Got it. Okay, so this is yeah, this is nuance. We might get into some more examples, but let me just poke into that one. So that's a great one to start off with. I appreciate you listening out. I'm thinking to myself, okay, so the numbers around how efficient is this model, I think what vendors learn very quickly in the AI space is that there's 5000 ways to do that. Like, we help with call center efficiencies. Okay, which of the 7 million ways of possibly measuring that do you do? And then what are the hand selected set of features that you decide to use as the main signals when you you could do all kinds of stuff. We could base it almost entirely on customer service scores. You could base it almost entirely on other kinds of time to resolve different issues. There's so many ways to proxy these things like risk. And there are also ways to make the numbers kind of fluffy, right? Decide to not really include the weekend numbers because we don't like how that looks or whatever the case may be, right? That this all kinds of ways to flub that. I'm thinking to myself if I'm in your shoes and the insurance business, the reason they're purchasing this product is they want to make sure that if that algorithm goes rogue and they end up spending an astronomically greater amount of time or maybe new hires or whatever on filtering out these drug posts or these nudity posts or whatever. They'll be compensated for that. It almost feels like how do you know that they can't just say, well, you know, it was less efficient than we thought we will need some of that money now. It just feels like there's so many features and factors and there's a lot of black box elements to detecting images, right? And how we tweak and adjust that, how do we realistically have one lockstep as to this is what our efficiency threshold is because it.
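To make the "which of the 7 million ways of measuring it" concern concrete, a guarantee contract has to pin down one agreed metric and one agreed measurement window. The sketch below shows what that might look like for a content-moderation recall metric; the metric choice, the guaranteed level, and the monthly counts are invented for illustration and are not terms from the engagement described.

```python
from dataclasses import dataclass

@dataclass
class Guarantee:
    metric_name: str         # the single metric named in the contract
    guaranteed_level: float  # the level the metric must stay at or above

def monthly_recall(true_positives: int, false_negatives: int) -> float:
    """Share of genuinely critical posts that the model surfaced to moderators."""
    return true_positives / (true_positives + false_negatives)

guarantee = Guarantee("critical-post recall", 0.92)                    # hypothetical terms
observed = monthly_recall(true_positives=4_450, false_negatives=550)  # 0.89

breached = observed < guarantee.guaranteed_level
print(f"{guarantee.metric_name}: {observed:.2f} -> guarantee breached: {breached}")
```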

"michael berger" Discussed on AI in Business

AI in Business

02:58 min | 2 weeks ago

"michael berger" Discussed on AI in Business

"This is Daniel fragile and.

"michael berger" Discussed on AI in Business

AI in Business

01:45 min | 2 months ago

"michael berger" Discussed on AI in Business

"Also <Speech_Male> have another report called <Speech_Male> the AI ROI <Speech_Male> cheat sheet, which <Speech_Male> has retailed on <Speech_Male> our reports <Speech_Music_Male> section on emerged dot <Speech_Male> com for some $300. <Speech_Male> <Speech_Male> We have hundreds of folks who <Speech_Music_Male> <Advertisement> have access to <Speech_Music_Male> this report as <Speech_Music_Male> <Advertisement> it contains some step <Speech_Music_Male> <Advertisement> by step processes <Speech_Music_Male> <Advertisement> and simple tables <Speech_Male> <Advertisement> to be able <Speech_Music_Male> <Advertisement> to predict and <Speech_Music_Male> <Advertisement> understand the return on <Speech_Music_Male> <Advertisement> investment of an AI <Speech_Music_Male> <Advertisement> solution. And this is <Speech_Male> <Advertisement> normally retailed on its own on emerged <Speech_Male> <Advertisement> dot com on the reports <Speech_Male> <Advertisement> section of the page. <Speech_Music_Male> <Advertisement> But we are making it <Speech_Music_Male> <Advertisement> available for free for <Speech_Male> <Advertisement> anybody who joins <Speech_Male> <Advertisement> emerge plus for those <Speech_Music_Male> <Advertisement> of you who are <Speech_Male> <Advertisement> emerged plus members, <Speech_Male> <Advertisement> thank you already for being <Speech_Male> <Advertisement> in the community here. <Speech_Male> <Advertisement> For those of you who are not, <Speech_Male> <Advertisement> emerge plus is our <Speech_Music_Male> <Advertisement> private members <Speech_Music_Male> <Advertisement> only platform and emerge <Speech_Male> <Advertisement> dot com <Speech_Male> <Advertisement> where we make available <Speech_Music_Male> <Advertisement> all of our best practice <Speech_Music_Male> <Advertisement> frameworks and <Speech_Male> <Advertisement> infographics for AI <Speech_Male> <Advertisement> ROI, <Speech_Music_Male> <Advertisement> AI strategy, <Speech_Male> <Advertisement> AI adoption, and <Speech_Male> <Advertisement> more, as well <Speech_Male> <Advertisement> as all of our <Speech_Male> <Advertisement> AI use case <Speech_Music_Male> <Advertisement> library and white <Speech_Music_Male> <Advertisement> paper library. So if <Speech_Music_Male> <Advertisement> you're interested in <Speech_Music_Male> <Advertisement> finding <Speech_Music_Male> <Advertisement> new AI applications <Speech_Music_Male> <Advertisement> that might suit your <Speech_Music_Male> <Advertisement> business or your clients <Speech_Music_Male> <Advertisement> or you're <Speech_Music_Male> <Advertisement> interested in having direct <Speech_Music_Male> <Advertisement> frameworks to make decision <Speech_Music_Male> <Advertisement> making more <Speech_Music_Male> <Advertisement> simple in the C suite, <Speech_Music_Male> <Advertisement> then you've come <Speech_Male> <Advertisement> to the right place. You can go to <Speech_Male> EMR J dot <Speech_Male> com slash <Speech_Male> R 7. So <Speech_Male> in this case, our like ROI, <Speech_Male> and then the number <Speech_Male> 7, and you can learn <Speech_Male> <Advertisement> more about emerge <Speech_Male> <Advertisement> plus and about this <Speech_Music_Male> <Advertisement> $300 report that <Speech_Male> <Advertisement> we're giving away <Speech_Male> <Advertisement> during this special <Speech_Male> <Advertisement> launch week. <Speech_Male> <Advertisement> So I hope you're enjoying this <Speech_Male> <Advertisement> series. 
Hopefully <Speech_Male> <Advertisement> some of you folks join us in <Speech_Male> the emerge plus community <Speech_Male> as well. Be emerge <Speech_Male> <Advertisement> plus and about this <Speech_Music_Male> <Advertisement> $300 report that <Speech_Male> <Advertisement> we're giving away <Speech_Male> <Advertisement> during this special <Speech_Male> <Advertisement> launch week. <Speech_Male> <Advertisement> So I hope you're enjoying this <Speech_Male> <Advertisement> series. Hopefully <Speech_Male> <Advertisement> some of you folks join us in <Speech_Male> the emerge plus community <Speech_Male> as well. Be sure <Speech_Male> to stay tuned in <Speech_Male> for tomorrow as we <Speech_Male> <Advertisement> dive into more <Speech_Male> <Advertisement> ROI insights. We've <Speech_Male> <Advertisement> got a leader from Oracle <Speech_Male> <Advertisement> who has a <Speech_Male> <Advertisement> storied history with companies <Speech_Music_Male> <Advertisement> like Amazon <Speech_Music_Male> <Advertisement> and in the startup world <Speech_Music_Male> <Advertisement> and we've got a lot <Speech_Male> <Advertisement> more to sink our teeth <Speech_Male> <Advertisement> into. So thanks <Speech_Music_Male> <Advertisement> again for tuning <SpeakerChange> in. I look <Speech_Music_Male> <Advertisement> forward to catching you in the next <Music>

"michael berger" Discussed on AI in Business

AI in Business

03:58 min | 2 months ago

"michael berger" Discussed on AI in Business

"So we'll ultimately say, okay, I should contain some information about the uncertainty in the prediction. Ultimately, every application I look at, and especially in those kind of sensitive areas. So what you mentioned on the healthcare side, it really making, say, at the end of the day, yeah, live death kind of decisions or very important health decisions. I should embed this kind of reverse of this kind of adjustments on my model to really provide me with the uncertainty information there. And if I would have a lung, your city image, and the model will give me back an interval where yes, and no is contained with a 90% probability. I mean, this tells me, okay, I can't delegate the decision to my AI. There I might need to have a radiologist or two radiologist taking a look at that, really making sure, okay, what is visible there? Or take another scan. So I think in this respect, this kind of prediction intervals can help quite a lot. And really methods which teach my machine learning models to be to be robust and provide me with high confidence, this kind of uncertainty information. Okay, cool. And are there any particular maybe in closing here? Any kinds of either a specific workflow or use case for AI, where you don't really see this kind of confidence gradient as a norm, but you hope that it is adopted as enormous. Are there any overt areas that you hope this paradigm shift reaches? Yeah, I mean, we don't see many many especially to this kind of new methods coming out from research. Don't really see them applied yet most applications. But we hope they will be applied and yeah, I think this will then simply make also the adoption of AI with confidence, I think this will also contribute to that. And I think companies or individuals can then also simply have more trust in those AI applications..

"michael berger" Discussed on AI in Business

AI in Business

04:17 min | 2 months ago

"michael berger" Discussed on AI in Business

"My cash flow? Then I could ultimately work with the data science teams building the AI models to see what might be good probabilities. I could assign to those scenarios. And then ultimately use that to come up with the more disputed and better assessment of my cash flow distribution and then also on the distribution of the value of what this AI project might generate for me as a company. So layering those on top of each other and saying, okay, what's the right pathway forward? So we've got to be able to measure risk and potentially we want to be able to reduce it. And I know you've given a lot of thought to that as well again. You guys are in the risk business here. When it comes to AI, you've got a bit of a technical background too. What are some of the considerations for companies who are looking to chop down as many of those risks as they can. There's probably some, by use case to use case, there's different kinds of low hanging fruit, but how do you like thinking about it? I mean, we are thinking about risks in the AI and mere just taking the first risk case in terms of predictive performance of the AI, then of course it comes down to finding a good estimate for my error probabilities and I could estimate for the error of my AI. This really comes down to doing a good technical shop in terms of finding a testing regime as statistics sound testing regime and then really seeing, okay, how well does my AI perform on that? And it's not just a deriving just one point estimate. For example, just looking at the accuracy of my classification system. It's really about K how does it fluctuate? So my testing regime should really take into account that I should be able to find some distribution of my performance metric I care about. So I think that's very important. And I think there's also some exciting research happening in this area, especially the main of mathematical statistics. To look at how can we adjust machine learning model or any kind of AI model in order to give us some information about the uncertainty in its predictions. And I think this requires us a paradigm shift. So so far, many AI models just give us some point prediction back. So, for example, they are just saying, okay, in this picture, I see a dog or this is the house price for this real estate is 1 million U.S. dollar. However, this doesn't really provide me with any information about the uncertainty in the model. What this research does is to say, okay, can I teach my model to give me back an interval, a prediction interval? So for example, the house price of this real estate might be 800,000 to 1.2 million. And I can have 90% trust in that..

U.S.
"michael berger" Discussed on AI in Business

AI in Business

05:59 min | 2 months ago

"michael berger" Discussed on AI in Business

"Do I want to keep my existing system in place? Might I want to go with that less modern statistical techniques? Well, I want to get to go with something more established to more traditional. And this might be fine for me, because this much which used my risks for that. So it's quite important to think about the impact of risks and also how what this means really for my investment project. Got it. And to your point, plenty of times that will be simpler solutions where the upside of getting that extra 5% of performance on why is just not worth the maintenance cost and the potential downside of an algorithm steering somewhere versus maybe the rule based system that works for now. So it's always going to be a balance for many applications. Some have to be ML, some don't. So when it comes to quantifying what that downside would be, I think as a business, let's say I'm a big manufacturing firm and I'm about to spend untold millions to start predicting maintenance failures of my machines. I've got a bunch of drill machines press machines, whatever I got in my manufacturing warehouses and factories. I'm going to put on sensors. I'm going to start training algorithms. So I'm going to make a big investment here. And it's going to affect my business if it fails and I start producing flawed products that's horrendous. So big consequences this might be an insurable thing. How do you go about quantifying where the failure points are? Because part of it is, you know, a business person doesn't know, oh, here's all the ways AI could fail, right? They didn't go to school for this. So how do you walk leaders through maybe the considerations to think about for where the failure points are and how risky they are? Well, what's your method? Yes, sure. I mean, the first question is really does the AI work as expected? This really means to the predictions the AI makes. Are they really living up to my expectations in terms of being correct or being not too far off from the ground truth? I think that's really the first question in the first risk that's the risk of predictive performance. So once I know my AI solution works, there might be other risks I need to watch out for. For example, if I'm more in a consumer sensitive area, then discrimination fairness related questions might become a topic. If I'm just using machine learning model to do predictive maintenance on my machines, of course, fairness and discrimination is not really an issue. But if I ultimately use machine learning to do great, assessments and ultimately decisions about our.

"michael berger" Discussed on AI in Business

AI in Business

04:10 min | 2 months ago

"michael berger" Discussed on AI in Business

"The AI and business podcast. This is the place where non technical professionals stay ahead of the ML curve in advance their careers and businesses you're listening to the second episode in our 5 part series on achieving ROI with early AI projects. We wanted to put together some great perspectives from varied leaders around advice that would help us bypass some of the mishaps to achieving AI ROI and get there more safely and soundly. And today's episode focuses on a critical element of measuring ROI and that is risk. So there's AI companies that need to think about risk there are big venture firms that need to think about the risks of their various investments, but who's really in the science of risk more so than the insurance companies. Munich re is a $60 billion insurance giant that's heavily invested in cyber insurance and now in AI insurance. And we speak with their head of AI insurance, Michael Berger, who is based out in the Bay Area, Michael is a PhD from the university of Munich in finance, as well as a master's data science from Berkeley. And while in our last episode with the head of Intel's AI center of excellence, this episode focuses again more specifically on risk, particularly what are the questions we can ask upfront to screen for risk and then measure that potentially against the upside. What are bits of enterprise leader advice that don't require you to be an insurance firm to be able to make smarter decisions. That is indeed the focus and Michael delivers some real gold in this episode. During this special podcast series week we are giving away a few of our AI ROI reports here at emerge and I'll mention a little bit more about that in the outro of this episode. But I wanted to be able to fly directly into the meat and potatoes here. I learned a lot here. I don't think we've ever covered AI ROI from this particular angle. And I hope you as our listener will be able to draw a lot from this one. So without further ado, this is Michael Berger, the head of AI insurance.

Michael Berger university of Munich AI center of excellence Munich Michael Bay Area Berkeley Intel AI insurance
"michael berger" Discussed on KQED Radio

KQED Radio

01:56 min | 2 years ago

"michael berger" Discussed on KQED Radio

"It here let's begin this week here's a great odd man out with the help of listeners Philip late them from Greenville Kentucky doctor Tom Schwartz of Palm Beach gardens Michael Berger of New York New York and Daniel Coleman who for some reason steadfastly refuses to give us his whereabouts was a traffic accident wasn't which which one of them is the odd man out Caroline nine which is the odd man out Nashville Carolina Baltimore and Pennsylvania station okay well it's not Penn station because that looks like it's the one that would be the odd man out okay therefore it can't be because that's too obvious Nashville Carolina Baltimore and Pennsylvania station rather oh gosh Nashville home of warranty I know this is one of my favorite I've been so out of all sons and I think maybe sounds song Carolina on my mind you're just about there also there are no Baltimore actually is a song in Baltimore I read Newman yeah yeah and Caroline is a song Astros not are they saying six five thousand no that's when the station's nothing if you're dancing all around if you got five points because Nashville is indeed the odd man out but do you know why not for the reasons we were coming any help from the other that was not a song Nashville is not a good song this the movie is a movie the other three are I'm sorry I I thought you'd get this win Nashville isn't on the route of the Chattanooga Choo Choo this.

Philip Tom Schwartz Michael Berger New York New York Daniel Coleman Baltimore Pennsylvania Penn station Carolina Caroline Astros Nashville Greenville Kentucky Palm Beach Newman
"michael berger" Discussed on KQED Radio

KQED Radio

01:51 min | 2 years ago

"michael berger" Discussed on KQED Radio

"General Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it. meeting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society we go foods use lands full all transport and followed economies all of that with the brakes on greenhouse gas emissions and beyond debating a planned summer also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seek the answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs this even center for climate change law at Columbia University all these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the cases filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the cases allege oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Baltimore boulder San Francisco Exxon B. P. Columbia University Colorado San Miguel Antonio Guterres Oakland Rhode Island Michael Berger US Rebecca Herschel NPR
"michael berger" Discussed on KCRW

KCRW

01:51 min | 2 years ago

"michael berger" Discussed on KCRW

"Secretary general Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it limiting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society are we go foods use lands full all transport and followed economies all of that to put the brakes on greenhouse gas emissions and beyond debating a planned summer also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seek to answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs the Sabin center for climate change law at Columbia University all these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the cases filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the cases allege oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Baltimore boulder San Francisco Exxon B. P. Columbia University Colorado San Miguel Antonio Guterres Oakland Rhode Island Sabin center Michael Berger US Rebecca Herschel NPR
"michael berger" Discussed on 90.3 KAZU

90.3 KAZU

01:52 min | 2 years ago

"michael berger" Discussed on 90.3 KAZU

"Secretary general Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it limiting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society we go foods use lands full all transport and followed economies all of that to put the brakes on greenhouse gas emissions and beyond debating a planned summer also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seek to answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs the Sabin center for climate change law at Columbia University all of these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the case is filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the cases allege oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Oakland boulder San Francisco Exxon B. P. Colorado San Miguel Baltimore Antonio Guterres Rhode Island Columbia University Sabin center Michael Berger US Rebecca Herschel NPR
"michael berger" Discussed on WNYC 93.9 FM

WNYC 93.9 FM

01:51 min | 2 years ago

"michael berger" Discussed on WNYC 93.9 FM

"General Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it. meeting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society all we go foods use lands full all transport and followed economies all of that to put the brakes on greenhouse gas emissions and beyond debating a plan some are also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seek the answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs this even center for climate change law at Columbia University all of these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the cases filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the cases allege oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Oakland boulder San Francisco Exxon B. P. Colorado San Miguel Baltimore Antonio Guterres Rhode Island Columbia University Michael Berger US Rebecca Herschel NPR
"michael berger" Discussed on KCRW

KCRW

01:52 min | 2 years ago

"michael berger" Discussed on KCRW

"General Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it limiting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society how we go foods use lands full all transports and followed economies all of that to put the brakes on greenhouse gas emissions and beyond debating a planned summer also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seek to answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs the C. been center for climate change law at Columbia University all these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the cases filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the cases allege oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Baltimore boulder San Francisco Exxon B. P. Columbia University Colorado San Miguel Antonio Guterres Oakland Rhode Island Michael Berger US Rebecca Herschel NPR
"michael berger" Discussed on 90.3 KAZU

90.3 KAZU

01:51 min | 2 years ago

"michael berger" Discussed on 90.3 KAZU

"Secretary general Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it limiting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society we go foods use lands full all transport and followed economies all of that to put the brakes on greenhouse gas emissions and beyond debating a planned summer also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seeking answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs the Sabin center for climate change law at Columbia University all of these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the cases filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the cases allege oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Oakland boulder San Francisco Exxon B. P. Colorado San Miguel Baltimore Antonio Guterres Rhode Island Columbia University Sabin center Michael Berger US Rebecca Herschel NPR
"michael berger" Discussed on WNYC 93.9 FM

WNYC 93.9 FM

01:51 min | 2 years ago

"michael berger" Discussed on WNYC 93.9 FM

"Secretary general Antonio Guterres opened the meeting by challenging countries to make bold plans to reduce greenhouse gas emissions immediately we can do it. limiting warming to one point five degrees is still possible. but if you require fundamental transformations in all aspects of society are we go foods use lands full all transport and followed economies all of that to put the brakes on greenhouse gas emissions and beyond debating a plan some are also beginning to wrestle with the big question who is responsible for the effects of global warming NPR's Rebecca Herschel reports that a growing number of lawsuits seek the answer that question there are about a dozen significant lawsuits against oil companies in the US right now Michael Berger runs this even center for climate change law at Columbia University all of these lawsuits have been filed in the last couple of years there is the one filed by the state of Rhode Island against twenty one companies including Exxon B. P. shell and chevron there the case is filed by the cities of San Francisco Oakland and Baltimore and by San Miguel and boulder counties in Colorado and in each case the city or state or county is suing one or more fossil fuel companies over the impacts of climate change it's a wide range of impacts many of the lawsuits to focus on sea level rise and coastal storms but it also includes drought wildfire flooding in Colorado the lawsuits allege that the oil companies should help pay for the cost of dealing with all of that now and in the future because the case is a ledge oil companies have known for a long time that burning fossil fuels causes global warming in an email to NPR a spokesperson for the main oil industry trade group wrote in part that the industry is quote actively addressing the complex global challenge of climate change through robust investment in technology innovation.

Oakland boulder San Francisco Exxon B. P. Colorado San Miguel Baltimore Antonio Guterres Rhode Island Columbia University Michael Berger US Rebecca Herschel NPR