A highlight from Making 'AI Ethics' Productive - with Beena Ammanath of Deloitte

AI in Business | Automatic Transcript

You're listening to the AI in Business podcast, and this is not going to be a holier-than-thou episode. Many times, the topic of AI ethics is little more than a holier-than-thou conversation. The way that I define unproductive AI ethics is essentially the exercise of shooting down AI ideas as being detrimental: conjuring up some potential risk, potentially something that's very politically prickly, and saying, oh, that might cause this, or that might cause that. There are certainly many risks with AI, but when ethics, quote unquote, steps in without being able to solve those problems (in other words, to integrate values, integrate law, and also get the job done for the customers or the company), I consider it unproductive, and I consider it a sort of holier-than-thou game that I don't consider worth covering on the podcast. We had a good episode about AI ethics with Seth Dobrin, at the time the global head of AI at IBM, about a year ago, and that was an awfully good episode talking about the productive side of AI ethics. Today we double down on that theme with a guest who is not only the author of a book called Trustworthy AI, but is also the executive director of the global Deloitte AI Institute. Beena Ammanath has also held leadership positions in AI and data at Hewlett Packard Enterprise, Bank of America, and General Electric, kind of a who's who of global enterprise firms, and now she's with Deloitte. She speaks with us this week about putting AI ethics into action in ways that are conducive to innovation, in ways that genuinely serve to solve business goals and customer problems. And there are two really important points I think are worth noting down for those of you tuned in who are leading AI projects, or maybe you're consultants who are helping your clients lead AI projects.
First, there is a process here for screening out potential downsides and thinking through them upfront, which I think can be a real benefit of applying AI ethics properly, and Beena has some excellent ideas there. And secondly, she talks about who needs to be in the room to have a realistic AI ethics conversation. This is a team sport, as any of you who've been here long enough are well aware, and Beena talks about the different kinds of expertise that have to come together to understand squarely the ethical and legal concerns of AI applications, but also how these folks need to level up their own knowledge and bounce that knowledge off of each other to genuinely screen applications and determine the best place to put company resources for the sake of our customers. Some of these ideas, hopefully, many of you will be able to turn around and apply in your own business, and that's certainly what we're shooting for in this episode. So I'm grateful to Beena for being able to be with us. And without further ado, here's this episode with Beena Ammanath of Deloitte, on the AI in Business podcast.

So Beena, I know you have a lot of these conversations with leadership around AI ethics, and there's a lot to get into with the meat and potatoes today, but I think we should define the term first, since we've certainly heard a lot of different definitions of AI ethics. When you're explaining this to the C-suite or to the boardroom, how do you put it in a nutshell?

So there is a notion that AI ethics is all about transparency and removing bias and making it more fair. Those are catchy headlines, but in my experience working across different industries, fairness, bias, and transparency are all crucial, but there are other factors. If you have an algorithm predicting a manufacturing machine failure, for example, fairness doesn't really come into play, but security and safety are both key issues.
So let me take a step back and tell you why I like to think about it as trust and ethics in AI, because for me, trust includes ethics, but it also includes policy and compliance, which is what leaders need to be aware of in the context of ethics. So trustworthy AI encapsulates everything you can think of related to the potential negative consequences of AI. That's how I think about ethics.

Yeah, so not putting it simply in the bounding box of transparency and bias as buzzwords. Yeah, got it. And in terms of where it fits in, I'm sure for some folks that you talk to, and I know for our listeners this is often the case, when they hear about AI ethics, it's often sort of just, well, you know, you want to be careful, your algorithms could make for a really bad PR event. And sometimes it's physical danger, right? But as you and I both know, certainly, if you're running a manufacturing plant with heavy equipment, or you're making self-driving cars, or you're diagnosing cancer, we've got real, real issues here.
