What is adversarial AI and why does it matter?
"In fact there's a whole field of study known as adversarial a I actually aims to make artificial intelligence a little smarter as part of an N. P. our special series on the technologies that watch us dina dina temple Raszyn has more. artificial intelligence is all about showing the machine millions of examples so we can learn to recognize things in the real world and there's a pretty famous experiment about how easily this can go wrong it was conducted by a team of researchers led by UC Berkeley professor don sol let me start playing the video she and her colleagues made a video that showed how the full daylight and in this case for the system well it was driving a car the video is less than a minute long and it doesn't have any sound but it rocked the A. I. community so in the media at UC two frames side by side thank split screen all you need to know now is that each split screen is subtitled so you can see how the A. I. and specifically a subset of a I called image classification is making decisions inside the autonomous car you see the prediction given by the image classification system to try to predict what the traffic sign as so it's sort of like the car starting to think I'm a sign is coming I'm gonna have to make a decision right so song inner team had the A. I. system read to stop sign one was a perfectly normal stops the other was manipulated song had put one sticker below the S. and another above the in stop and is the car gets closer to it the subtitles are describing the A. 
eyes decision making process it reads the regular stops on just fine installing the card prepared to stop but the one with the stickers it thinks the sign read speed limit forty five miles an hour which would allow the car if this wasn't an experiment to blow right through the intersection to carefully place stickers was all it took to make a self driving car ran a stop sign so you were expecting it to mis read the sign and then it didn't you're happy about it is surprising so given how well it worked it works so well the people who were developing driverless cars tap the brakes. now to be fair songs team didn't just randomly throw some stickers onto a sign they knew exactly how the a size image classification system worked they knew which pixels of the sun to manipulate to fort which got the attention of people over DARPA the defense advanced research projects agency and understand why the military's top research arm was so concerned I went to door by headquarters to meet with have a sequel for us in Haiti now have thank you for making this she's the director of something called the guard project guard stands for guaranteeing a I'd robustness against deception and just like it sounds it's looking for ways to make artificial intelligence more hack proof the way a I makes decisions is a bit of a black box but see common says if you understand what the system is chosen to focus on you can fool it and if your door but you're less worried about a stop sign then say putting a sticker on the tank and because that sticker with his particular kind of we think that this time his acting ambulance and he needed to be opened the gates to let the ambulance corps and the reason to study all of this isn't to scare us about ARI old though it does that too researchers want to understand the limits of it so they can fix it kind of like gold fashion hackers who used to call up software companies let them know about flaws in their coding so they could send patches don songs is the 
bottom line is machine learning an AI aren't as powerful as people think they are we do really needs Neil and more break throughs before we can really get there so would you ride in a driverless car that's the day. I mean you. how you doing having a test drive. and by the way dawn song special stop sign with the stickers isn't fooling driverless cars anymore it's now hanging in the science museum in London. of an exhibit about our driverless future. dina temple Reston NPR news"
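The sticker attack described in the segment is an example of a white-box adversarial attack: because the researchers knew how the classifier worked, they could compute exactly which pixels to nudge to flip its decision. Below is a minimal, illustrative sketch of that idea on a hypothetical toy linear classifier (not the model or data from Song's experiment): for a linear model, the gradient of the score with respect to the input is just the weight vector, so a small per-pixel step against its sign lowers the score as fast as possible.

```python
import numpy as np

# Toy white-box adversarial perturbation (in the spirit of the "fast gradient
# sign method"). Everything here is illustrative: a made-up 8x8 "image" and a
# made-up linear classifier standing in for a real image classifier.

rng = np.random.default_rng(0)

# Linear classifier: score = w . x + b; positive score means "stop".
w = rng.normal(size=64)          # one weight per pixel of an 8x8 image
b = 0.0

def predict(x):
    return "stop" if w @ x + b > 0 else "speed limit 45"

# An input the model confidently classifies as a stop sign
# (chosen to align with w, so the score is clearly positive).
x = w / np.linalg.norm(w)

# White-box attack: the gradient of the score w.r.t. x is w itself,
# so stepping each pixel by -eps * sign(w) reduces the score as
# quickly as possible for a fixed per-pixel budget eps.
eps = 0.3
x_adv = x - eps * np.sign(w)

print(predict(x))      # "stop"
print(predict(x_adv))  # "speed limit 45"
```

The key point mirrors the transcript: the perturbation is small and structured (bounded per pixel, like a couple of stickers on a sign), yet it flips the model's decision, precisely because the attacker knows what the model is paying attention to.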