A new story from AI in Business
So ultimately we say: okay, the prediction should contain some information about its uncertainty. Every application I look at, and especially the sensitive areas like what you mentioned on the healthcare side, is at the end of the day making life-or-death decisions, or at least very important health decisions. There I should embed these kinds of adjustments into my model so that it really provides me with that uncertainty information. And if I have a lung CT image, and the model gives me back a prediction set where both yes and no are contained with 90% probability, that tells me: okay, I can't delegate this decision to my AI. There I might need one or two radiologists to take a look, really make sure what is visible there, or take another scan. So in this respect, these kinds of prediction intervals can help quite a lot, along with methods which teach my machine learning models to be robust and to deliver this uncertainty information with high confidence.

Okay, cool. Maybe in closing here: are there any particular workflows or use cases for AI where you don't yet see this kind of confidence reporting as the norm, but you hope it gets adopted as the norm? Are there any other areas you hope this paradigm shift reaches?

Yeah, I mean, many of these new methods coming out of research we don't really see applied yet in most applications. But we hope they will be, and I think this will simply make the adoption of AI happen with more confidence. Companies and individuals can then also have more trust in those AI applications.
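The speakers don't name a specific method, but the behaviour described, a prediction set at 90% coverage that can contain both "yes" and "no" and thereby trigger a hand-off to a radiologist, matches split conformal prediction. A minimal sketch of that idea, with made-up calibration numbers standing in for a real model's held-out probabilities:

```python
import numpy as np

def conformal_prediction_set(cal_probs, cal_labels, test_probs, alpha=0.1):
    """Split conformal prediction sets for binary classification.

    cal_probs:  (n, 2) predicted class probabilities on held-out calibration data
    cal_labels: (n,)   true labels for the calibration data
    Returns one set of labels per test example; each set contains the true
    label with probability >= 1 - alpha (marginal coverage).
    """
    n = len(cal_labels)
    # Nonconformity score: 1 - probability assigned to the true label.
    scores = 1.0 - cal_probs[np.arange(n), cal_labels]
    # Finite-sample-corrected quantile rank, clipped to n for small samples.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q = np.sort(scores)[k - 1]
    # A label enters the set when its own score is within the threshold.
    return [{lbl for lbl in (0, 1) if 1.0 - p[lbl] <= q} for p in test_probs]

# Hypothetical calibration probabilities and labels (illustration only).
cal_probs = np.array([[0.90, 0.10], [0.80, 0.20], [0.20, 0.80], [0.10, 0.90],
                      [0.45, 0.55], [0.55, 0.45], [0.85, 0.15], [0.15, 0.85],
                      [0.70, 0.30], [0.30, 0.70]])
cal_labels = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 1])

# One confident scan, one ambiguous scan.
test_probs = np.array([[0.95, 0.05], [0.50, 0.50]])
sets = conformal_prediction_set(cal_probs, cal_labels, test_probs, alpha=0.1)
for s in sets:
    if len(s) != 1:
        print("both labels plausible -> defer to a radiologist")
```

With these numbers, the confident scan yields the singleton set {0}, while the ambiguous 50/50 scan yields {0, 1}: exactly the case where, as described above, the decision should not be delegated to the AI.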