
US-China research highlights risk of 'dual use' AI


Hello from the newsroom of the Financial Times in London. I'm Josh Noble. US tech giant Microsoft has worked with a Chinese military-run university on research that could be used for surveillance and censorship. Malcolm Moore talks to tech reporter Madhumita Murgia about the research collaboration and why it's causing disquiet.

Can you tell us a little bit about this story? How did you come across it, and what exactly has happened here?

So, I've been looking through a bunch of research papers around the topic of facial analysis and AI, and looking at connections between US research and Chinese research. And I came across a paper on facial analysis which was jointly authored by Microsoft Research in Beijing and academics at the National University of Defense Technology in China. When I looked into it a bit further, I realised that the research was essentially trying to recreate detailed environments in photos that couldn't be seen with the naked eye, which seemed to have quite obvious military applications. I worked with a colleague in China, Yuan, who figured out that the university was funded by the Chinese military, and then managed to dig out two other papers. So essentially we're seeing collaboration of Microsoft researchers with academics from this Chinese university, run by the Chinese military, with clear applications in military and intelligence communities.

Talk a little bit about this research: what problems was it trying to solve, and what might the potential applications be?

So, there were three separate papers. One of them was on facial analysis, which is a way of analysing photographs of people. Purportedly they were trying to do augmented reality, which is, you know, putting a virtual mask on a person's face for gaming, et cetera. But to break down what the research really was: they were using machine learning to pick up cues about the environment in which a person was, to then recreate that environment. So say you have a dark photograph: you can see a person's face.
But you can't really see where they are or what they're doing. You can use that face as a light source, and then the algorithms were able to recreate the windows in the house where they were, where the camera was, where the door was, and other details about the environment which wouldn't have been visible to the naked eye.

So, potentially a way of surveilling people in a dark environment.

Exactly. The other two papers were looking at the area of machine reading, which is a bit easier to understand. It basically means: how can we use AI to read and understand large blocks of text quickly? And according to people I spoke to, there is a very clear application in censorship, because it's much easier to scale up censorship, or to speed it up, if machines can read text and quickly identify it and pull it out.

Microsoft has a long history in China. What do we know about how much of this kind of collaboration goes on?

The big problem here is that academic research exists in a bubble of its own, and often these types of openly published papers fly under the radar, because the fact that they're open paradoxically means that there's less scrutiny of them, and research is usually considered to be harmless: it's an exchange of ideas. But because of the current climate, with the relationship between the US and China being redrawn and changing, this is now slowly starting to come to the attention of US regulators, and also of civil rights activists, who say that US companies, or any foreign companies, should really look at the parameters of how their research could be used or misused by actors in China or elsewhere.

But Microsoft's Beijing lab has quite a long history of doing research in China, and a lot of people connected to it have gone on to be major figures in Chinese AI and other fields. Is that right?

Yeah.
So, Microsoft Research in Beijing has a big department that does natural language processing, which is teaching computers to understand text, and also works on facial recognition and other areas. At least two former employees of Microsoft have gone on to found Chinese AI companies such as SenseTime and Megvii (Face++), which actually supply surveillance technologies or work with law enforcement in China for the surveillance of people. So yes, those links exist. Also, one of the authors on the papers that I found was at NUDT, the university, but was also an intern at Microsoft Research Beijing.

So, just turning back to this collaboration: is this the only collaboration that we're aware of between Microsoft researchers and researchers at the defence university, or do you think there have been others as well?

So, there were three different papers, three different projects. Even though it's only with this one university, they've been published at different times, so we should keep in mind that it's a bit more expansive than just one paper, one collaboration, and there may be others out there. But the other signal that shows there are more links is that Microsoft has been running these technology clubs, which are student clubs on university campuses to promote an exchange of ideas, research projects, et cetera, between Microsoft and students. They've advertised these at many universities, including at least two others that have military links: Beihang University and Harbin Institute of Technology.

So, I guess one of the problems here is that it's very easy, or has historically been very easy, to tell when something is dual use: when it can be used for civilian purposes and for military purposes. You can see very clearly whether a tube is able to be used as a tank barrel, for example.
But with this sort of technology and this kind of research, how do you tell when something might have potential military applications, or even no military applications but, in the hands of, you know, a repressive state, applications that could be used for mass surveillance or other things that a western company might consider to be unethical?

Yeah, this is a big debate that's actually raging right now in the US, where they're trying to decide the new export control laws. They had a new law enacted last year, but they're still discussing specifically whether emerging technologies and foundational technologies, which is basically research of this kind, openly published research, should fall under those export controls. The issue is that artificial intelligence is a big umbrella term that includes things like machine vision, facial analysis, facial recognition, even augmented reality, which we could see as just being games. All of those have a dual purpose, because they could be used to find people in a crowd, they could be used to identify large blocks of text that could then later be blocked, or they could be used to analyse people's expressions. So it's hard to figure out how something could be used, but the potential for misuse is there. And this is true of AI more generally, which is why this debate is raging about whether it should be restricted in terms of how it's exported.

In terms of academic cooperation, do you think the potential answer is to clamp down and really tightly police this sort of exchange of ideas, or what other approach could you see?

So, I don't think that there should be a clampdown on free academic research, because one of the big hallmarks of this type of work is exchanging ideas between academics and researchers and coming up with new ideas.

Right.
But I do think that if you are working with a university that's funded by the military in a state known to conduct activities such as surveillance or the detainment of minorities, for example, then there should be awareness: the company should know what's going on. And also there should be parameters put into place to make sure they know what's happening with their research, because at the moment you just put your research out there and you have no idea who takes it up, what they do with it and where it ends up. So I guess the question is: even if you're not developing repressive technologies, do you take responsibility if the technologies you create end up there? And if the answer is yes, then you have to find a way to be accountable for it.

In terms of these new export controls in the US, which way do you think that's going to go? How do you think the politics is going to play out?

It's hard to say for me, sitting here in London, but they're currently asking for submissions to discuss whether these should be included at all. It's becoming clear to everybody that AI in particular can be misused and can have some quite nefarious applications if not controlled properly, and this is not just true in China: this is true everywhere, and companies and governments all over the world, from Europe to the US, are thinking about how to put ethics into place and how to control what might happen with AI technologies. So I think it will go the way of trying to be more careful rather than less.

This also plays into the broader sort of arms race between the US and China.

Well, exactly, because ultimately whoever wins that is going to have a lot of power and influence and, you know, be able to develop technologies that don't exist today. We all know the potential of AI. So it's going to be interesting to see whether they continue to let the exchange of ideas happen.

That was Malcolm Moore talking to tech reporter Madhumita Murgia. Thanks for listening.
Remember, if you're not already a subscriber and would like to discover more FT content, you can find our latest subscription offers at FT.com/offers.
