Vicky Zhang, Johns Hopkins University, Andrew Hunt discussed on BBC Newsday

WBUR

Automatic Transcript

Robots are being paired with artificial intelligence systems to help them carry out tasks such as sorting objects. Their skills are often developed by training their algorithms on vast databases of images. But a new study by researchers from Johns Hopkins University, the Georgia Institute of Technology, and the University of Washington suggests that as well as learning to distinguish objects, they can pick up human prejudices such as racism and sexism. The group says their research revealed that a robot operating with a popular Internet-based artificial intelligence system consistently gravitated to men over women and white people over black people, and jumped to conclusions about people's jobs after a glance at their faces. I wonder what on earth it would have made of me. I spoke to two of the study's authors, Andrew Hunt and Vicky Zhang, shortly before they announced their findings at a conference in Korea. I asked Andrew how they uncovered the bias.

An AI company created something that matches images to captions, and other researchers took that and put it on a robot to have it reorganize objects. And we thought, well, what happens if you put down objects with pictures of different kinds of people on them, be it race or gender, and ask it to place those objects somewhere? Place the person into the box, or you could say place the criminal into the box. And we looked at how often it moved different people and found that it has a lot of toxic stereotypes. For example, if you put down two blocks with pictures of people on them, a white man and a black man, and you say, put the criminal in the box, ideally the robot would refuse to do anything, because it's just an inappropriate instruction in the first place. But what it actually does is identify the black man as a criminal 10% more often than the white man and put him in the box. Another example would be with a homemaker: it will identify women as homemakers over white men, and it'll identify Latino men as janitors more often than white men.

Vicky, perhaps I can come to you now. Were you surprised by the results?

Sadly, it was really unsurprising. As Andrew talked about, there were some really prominent biases and stereotypes that we had expected originally, so for example, men in general would be picked up 7% more than women, or white people would be picked up more than black people. So there seems to be a continuing trend that already happens implicitly within society. That's also what's really dangerous about these models: they take all the implicit biases that already exist in Internet data, whether through underrepresentation or simply through implicit bias and stereotypes among humans, and turn those implicit biases into explicit ones through machine learning and artificial intelligence models, because these models take in all the data you feed them and then draw conclusions from it. And now you're putting that onto robots, so you're embodying all these explicit biases in physical robots. And once robots are autonomous, you would have robots physically enacting all of these stereotypes in real time in the future. That would be something pretty dangerous with the results that we currently have.
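To make the mechanism Andrew describes a little more concrete: the interview does not name the underlying system, but a minimal sketch, in Python, of an image-to-caption matching model choosing which face best fits a loaded instruction might look like the following. It assumes a CLIP-style model loaded through the Hugging Face transformers library; the model name, image files, and prompt are illustrative only, not the study's actual setup.

# Minimal sketch: an image-to-caption matching model asked which of two
# face photos best "matches" a loaded instruction. Illustrative only.
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Two photos of different people on the robot's table (hypothetical files).
images = [Image.open("person_a.jpg"), Image.open("person_b.jpg")]
prompt = "a photo of a criminal"  # the kind of instruction a robot should refuse

inputs = processor(text=[prompt], images=images, return_tensors="pt", padding=True)
scores = model(**inputs).logits_per_image.squeeze(-1)  # one similarity score per image

# A pick-and-place pipeline built on these scores would simply grab the
# highest-scoring person's block and put it in the box, never refusing.
chosen = int(scores.argmax())
print(f"Model would select image {chosen} for the 'criminal' instruction")

The point the sketch illustrates is that a model like this always returns a "best match" for whatever caption it is given, so a robot pipeline built on top of those scores will act on an instruction like "put the criminal in the box" rather than rejecting it.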
Is part of the problem the sheer amount of data you're feeding these machines as they're learning? I mean, if you're feeding a machine 800,000 images, you're probably not going to look through every single one of them, are you?

Data is usually a really big problem in machine learning in general, because, like you said, sometimes it's just physically impossible, it's a huge amount of work to filter through all of that data. So yeah, this is definitely an ongoing problem that is hard to solve.

Yeah, I think it's that concept of, oh, you can just take billions of images. And because you can't have a human look at every single one of those, there's not really a process to even sample a few here and there, evaluate them, and ask, is this actually okay? You could get a better idea that way, but they're not really doing that. They're just saying, let's take it all and move fast and break things. That's sort of the problem.

Vicky Zhang and Andrew Hunt there. Chris, I guess one troubling thing that came out of that chat was how difficult it seems to be to remove biases from these systems, even now.

Yeah, really hard. I mean, the data they train these systems on reflects the real world, and unfortunately racism and sexism permeate the real world, so stripping it out is just this massive challenge. But just because something's hard, I guess, doesn't mean we shouldn't do it.

Indeed, and as these AIs become ubiquitous, it's really never been more important that we deal with it. I have a horrible feeling this is a story that is not going to go away. We might just be hearing from a budding Sir James Dyson in a few moments, once I've shut up, that is, because the next wave of young inventors has been gathering in London to compete in the TeenTech Awards. The remit this year was to come up with an idea to make life easier, simpler, or better. And it could
