Paul, YouTube, Mike Kaput discussed on Ohio Today radio
Tried to emulate human intelligence, so they can only do what humans can teach them to do. Everything else has been programmed; every piece of software we've ever used has been told what to do by humans. These are machines that don't sleep, that can learn from data at levels we can't comprehend, and they can learn to do things and solve problems that humans can't. So it's different, because we've never created anything like it before. And the quote I always go back to is Sundar Pichai from Google, who says it's the most profound thing humanity has ever worked on, more important than electricity or fire, because it can change everything. So when you look at these major problems we have to solve, climate change and poverty and hunger and cancer, really difficult things that the human race has been working for decades to figure out and we can't, AI in theory can; there's no limit to what it can eventually solve. We've just never created anything like it. And when we're done creating it is when it really gets started, because it can improve and learn on its own. That's the stuff you see in the movies, this idea of general intelligence, which we're not at yet. A lot of people think that's what AI is, and they're just afraid of it, but right now it's still programmed to do very specific things, and it learns from data and keeps getting better at those things. It doesn't think of other things to solve. It's not conscious. It doesn't use imagination and creativity. But it probably will in our lifetimes, and when that happens, everything changes. And here's what else sets Paul apart, and this is big: he recognizes the enormous opportunity AI presents and is seizing it, but he also recognizes the danger of the technology. He asks big questions, ethical questions, human questions, and he wants people to confront these big questions before they jump in blindly, because AI can go very wrong.
If people don't approach it the right way, it'll give people superpowers, which is what we said on stage, and you have to start from the ground up thinking about the ethics and the morals of how you use it. So when I give talks, oftentimes, and even when we spoke to the students here, the questions rarely focus on marketing. After I give a marketing talk, they always go to: what is this going to do to society? What's the government doing about it? What do you think about this, is it ethical to use facial recognition and profile people? They just start connecting the dots and understanding the bigger impact, and the more time I've spent in AI, the more you realize how much of an impact it will have on society and humanity. Our small piece of this world is marketing, but marketing touches every consumer, and so to me it's just critical that as we talk about how to do marketing smarter and make it more effective, we don't do it in unethical ways, because you're going to have the ability to. And I know from having talked to big brands, they struggle with trying to understand where the line is: what is ethical and what's not. So I just feel like nobody was probably coming to the conference thinking, oh, I hope we have a topic on ethics, but I was not going to let people leave without listening to a talk on ethics, and we'll do that again this year. It's going to be a staple, and we're going to start creating a lot more content around it, because otherwise you could look at what we're doing as just teaching people to better predict and influence consumers, and that's not at all what we're in it for. So I think this is a chance to kind of raise the bar of what marketing is and the standards that uphold it. It can go the opposite way real fast. And that's really it: the speed and scale inherent in AI tech. That's what makes it dangerous. Here's Karen Jim.
Software is much easier to deploy as a technology than other things, and AI in particular: once you find patterns in some kind of data, you can then use those patterns to make thousands of decisions at a very rapid rate, and it can affect many, many, many people. And it's a little different from actual physical objects that you might have to manufacture; you can just deploy it over the Internet. The decisions that Facebook makes, for example, when they are using machine learning, those decisions get deployed to over a billion people, actually probably more. I don't know how many users they have, but billions of people. Or YouTube. Another example of an unintended consequence is YouTube's recommendation algorithm: it's become a pretty intense tool for radicalization, because if you end up falling into a YouTube rabbit hole, you keep going and going and going, and you end up getting radicalized. There have been studies that show that terrorist organizations, for example, actually use this to their advantage: they will try to make their content seem related to very benign content, and then you can just accidentally trip into this hole that radicalizes you. What I think is so unsettling about Karen's YouTube example in particular is the fact that the danger, how it went wrong, was completely unintentional. This kind of outcome is bad enough, and it's not even someone using this technology to purposefully do bad things. And before we get too far, it's fair to point out just how complex some of these algorithms are, what YouTube and Facebook and all the other social media platforms are trying to do, just in terms of the technical challenge.
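Karen's point about speed and scale comes down to how cheap it is to apply a learned pattern once you have it. A minimal sketch in Python, where the weights, features, and threshold are all invented for illustration, shows one "pattern" making a decision for every user in a large batch:

```python
import random

random.seed(1)

# A "learned" pattern: in a real system these weights would come from
# training on user data; here they are made up for illustration.
weights = [0.8, -0.3, 0.5]

def score(features):
    """Apply the pattern: just a weighted sum, so it is extremely cheap."""
    return sum(w * f for w, f in zip(weights, features))

# 100,000 hypothetical users, each described by three numeric features.
users = [[random.random() for _ in range(3)] for _ in range(100_000)]

# One pass makes a decision (e.g. show / don't show a post) for everyone.
decisions = [score(u) > 0.5 for u in users]
print(f"{len(decisions):,} decisions, {sum(decisions):,} positive")
```

The point of the sketch is the asymmetry Karen describes: finding the pattern is the hard part, but once found, applying it to millions of people is a few arithmetic operations per person, deployed over the Internet with no manufacturing step.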
It is not easy, so some respect is owed just because of what they're attempting to do. But going back to Paul and Karen's point, the speed and scale of the negative consequences are what make this so problematic. Failure and iteration are fundamental to technology development, but what happens when the failures can be felt immediately by billions of people? And moreover, how do you control the bad actors, the people who are out there using this technology with some malicious intent? In today's era of fake news, one particular example of this kept coming up in almost every conversation I had: deepfakes. Fake content, fake videos, lots of things that appear to be real that aren't. Deepfakes are, I would predict, an almost certainty: whether next year or four years from now or five years from now, this election or the next one, deepfakes will be an enormous issue. That was Nikos, and Paul again, and that last voice was Mike Kaput, who has worked at Paul's agency, PR 20/20, for about seven years and is now the director of the Marketing AI Institute. And just to quickly explain what deepfakes are, in case you're not familiar: they're basically fake videos generated by AI-powered software that makes them indistinguishable from authentic footage. As Nikos explains, this is made possible by a particular type of AI called GANs, or generative adversarial networks. So you can watch a video, and it could be Donald Trump doing something controversial, rapping, beatboxing, and in time, less than five years, you won't be able to tell the difference of whether or not that video was real. Super scary when that stuff starts going onto the web, so you're going to have to regulate that. So what can be done, and who's responsible for protecting the rest of us from these harmful applications of the technology? For Mike, responsibility needs to be shared by everyone. This stuff impacts you whether you're interested in the technology or not. That's okay, not everyone geeks out about this stuff.
But it's going to have an impact whether you're technical or not. The Internet certainly has a huge impact on you, and this stuff will too. So I think we're all going to have to get a lot smarter and a lot more curious about: okay, how do we move forward responsibly in a world where it's a lot harder to tell what's actually true and what's not, what is human-generated and what's machine-generated, and things like that? To me, this gets to the heart of the debate. The big questions that Mike and Paul and the whole PR 20/20 staff and lots of the people I met at the conference are asking, those questions are really more important than anything else. Hearing those questions is what impressed me most about Paul, and how he framed the whole topic of AI for the hundreds of marketing professionals in attendance. And in a lot of ways, Mike asking how do we move forward responsibly is the answer to his own question. Keeping the conversation centered on what is right and what is wrong, and keeping that conversation going through new, unique circumstances: that's ethics. So since this exploration of the new tech of AI really ended up as an ethics discussion, I wanted to bring in another expert from right here at Ohio University. My name is Bernhard Debatin. I'm a professor of journalism, and I'm also the director of the Institute for Applied and Professional Ethics. I asked Professor Debatin to start by breaking down what he sees as the ethical issues at stake with AI. To me, the core is probably transparency, in a very general sense, because we don't know what the algorithms and implemented values are that are at the bottom of what an artificial intelligence does. So there's a high level of opacity, which we have already with computer technology or any type of complex technology, and even more so with artificial intelligence.
We have a situation where we as users probably have absolutely no idea why the system is doing what it is doing, how the system is doing what it is doing, and so on and so forth. So it becomes very opaque. And at the same time, whenever you have complex processes and decision making, which artificial intelligence ultimately involves, you have values and ethical decisions at stake. And so what happens, in effect, is that we delegate decision making to a system that is pretty opaque, where we don't know which values, which set of potential preferences, are used for decision making, and that may or may not be problematic. We don't know. And that's true in many fields of application, anything from medicine, where you have a lot of those systems already in place, to, let's say, resource exploration, like oil exploration or things like that. This is where the first expert systems, which were early artificial intelligence, were developed. And they're really good, but we have to keep in mind, each time we do that, we implement...
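The early expert systems Professor Debatin mentions were essentially collections of hand-written if-then rules. A minimal sketch, in which the domain, function name, rules, and thresholds are all invented for illustration, shows why the "implemented values" he describes live in whatever rules the authors wrote:

```python
# A minimal rule-based "expert system" sketch. Each if-clause encodes a human
# expert's judgment, so the system's values are whatever its authors wrote in.
def assess_site(porosity, seismic_anomaly, depth_m):
    """Toy drill / no-drill recommendation for an oil-exploration survey site."""
    if porosity > 0.2 and seismic_anomaly:
        return "drill"            # rule 1: good reservoir rock plus an anomaly
    if depth_m > 4000:
        return "skip"             # rule 2: judged too deep to be economical
    if seismic_anomaly:
        return "survey further"   # rule 3: promising but unproven
    return "skip"                 # default: no evidence, no action

print(assess_site(porosity=0.25, seismic_anomaly=True, depth_m=2500))
```

A user who only sees the recommendation never sees rule 2's economic judgment, which is exactly the opacity being described: the value choice is real but invisible once the decision is delegated to the system.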
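The GANs Nikos mentioned earlier pit two models against each other: a generator produces fakes, a discriminator tries to tell fakes from real data, and each one's training signal comes from the other. Real deepfake systems use deep networks on video; the following toy, where the "data" is just numbers centered on 3.0 and the generator has a single shift parameter, is a heavily simplified sketch of that adversarial loop:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Real data: samples centered on 3.0. The generator G(z) = z + theta tries to
# imitate it; the discriminator D(x) = sigmoid(w*x + b) tries to spot fakes.
w, b, theta = 0.1, 0.0, 0.0
lr_d, lr_g, batch = 0.2, 0.02, 64

for step in range(2000):
    real = [random.gauss(3.0, 1.0) for _ in range(batch)]
    fake = [random.gauss(0.0, 1.0) + theta for _ in range(batch)]

    # Discriminator step: push D(real) toward 1 and D(fake) toward 0
    # (gradients of the binary cross-entropy loss, written out by hand).
    gw = gb = 0.0
    for x in real:
        d = sigmoid(w * x + b)
        gw += (d - 1.0) * x
        gb += (d - 1.0)
    for x in fake:
        d = sigmoid(w * x + b)
        gw += d * x
        gb += d
    w -= lr_d * gw / (2 * batch)
    b -= lr_d * gb / (2 * batch)

    # Generator step: shift theta so the discriminator scores fakes as real.
    gt = sum((sigmoid(w * x + b) - 1.0) * w for x in fake)
    theta -= lr_g * gt / batch

# theta should have drifted toward 3.0, the center of the real data,
# meaning the fakes have become statistically hard to distinguish.
print(round(theta, 2))
```

The unsettling property scales straight up from this toy: the generator is never told what real data looks like, only whether its fakes got caught, and that pressure alone is enough to make the fakes converge on the real thing.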