Giving Autonomous Aircraft a Moral Compass

Automatic TRANSCRIPT

You're listening to the Check 6 podcast, brought to you by editors across the Aviation Week Network. Listeners now have access to special subscription offers, including a preferential rate for Aviation Week & Space Technology. Go to podcast.aviationweek.com to learn more.

Hello, and welcome to this week's Check 6 podcast. I'm Graham Warwick, Aviation Week's executive editor for technology. Today we're going to ask: will we ever be able to trust an autonomous aircraft? NASA is working to ensure that we can, and joining us to find out how are my colleague Guy Norris, Aviation Week's western U.S. bureau chief, and our special guest Mark Skoog, the principal investigator for autonomy at NASA's Armstrong Flight Research Center in California.

We talk a lot about autonomy and artificial intelligence in aviation, particularly in the context of urban air taxis and unmanned cargo aircraft. Autonomy and AI are not the same thing, but they are potentially very complementary. Using machine learning to train algorithms to take off and land autonomously, plan optimum flightpaths, recognize obstacles and avoid collisions, and identify safe landing sites along a route has tremendous potential to make aviation safer. But there is a problem, and here I will grossly oversimplify, in part to avoid showing my own lack of real knowledge. The software used in today's avionics, such as autopilots and digital flight controls, is deterministic. That means the same input always produces the same output, and through rigorous analysis and testing we can prove to regulators like the FAA that our system will always be safe. Machine learning algorithms are non-deterministic: the same input doesn't always produce the same output. Because of some change in the environment in or around the aircraft, it might decide to turn left, not right. And because we don't fully understand what goes on inside a machine learning algorithm, no amount of testing can guarantee to the regulator that the system will always behave safely. So how do we safely unlock all those great capabilities that machine learning and AI promise? How do we let an autonomous aircraft not only handle all the functions a pilot handles today, but also safely make all the decisions a pilot has to make? That's what NASA is working on.

Now I'm going to hand over to Guy, who will give a bit of background on where this work came out of, and he'll introduce our special guest, Mark. Guy, thanks.

Yeah, thanks, Graham. Well, we're looking at this subject today because of research that goes back more than twenty years, really, to a program that's now called the Automatic Ground Collision Avoidance System, or Auto GCAS. And of course we're very familiar with this; any of our readers who've been following it for years will know that Aviation Week has been lucky enough, at the invitation of NASA and the Air Force, to follow the development of Auto GCAS, and the subsequent Auto ACAS airborne collision avoidance system, through the decades. Now, the work we're talking with Mark about today really stems out of all of that early work. These are safety upgrades that were developed principally with Lockheed Martin and the Air Force Research Laboratory for the F-16, and subsequently the F-35, and soon other aircraft too. In service, Auto GCAS has already saved ten aircraft and eleven pilots for sure. That's a remarkable success story, and its success prompted NASA to begin looking at wider potential applications, to general aviation, unmanned systems and potentially even more applications beyond that.
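A toy illustration of the distinction Graham is drawing, with every name invented for this sketch: a deterministic control law can be pinned down by testing, while a learned policy's output hinges on a high-dimensional sensed environment that no finite test campaign can cover.

```python
# Toy contrast between deterministic avionics logic and a learned policy.
# Illustrative only; none of this comes from any real avionics software.

def deterministic_flap_command(airspeed_kts: float) -> int:
    """Same input -> same output, always; test evidence generalizes,
    which is what certification arguments rely on."""
    return 20 if airspeed_kts < 140.0 else 0

class LearnedAvoidancePolicy:
    """Stand-in for a machine-learned model: its output depends on a large
    vector of sensed environment features, so two nominally identical
    encounters can steer differently (left versus right)."""
    def __init__(self, weights):
        self.weights = weights

    def turn_command(self, environment):
        score = sum(w * x for w, x in zip(self.weights, environment))
        # No finite test campaign covers every reachable 'environment',
        # so safety cannot be proven by testing alone.
        return "left" if score < 0.0 else "right"
```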
Now, this work really dovetailed into something called a Joint Capability Technology Demonstration, a program called Resilient Autonomy. That was a joint effort between NASA, the Office of the Secretary of Defense and the FAA, and it specifically targeted an architecture and a method of potential certification, as you mentioned, Graham, for these highly autonomous aircraft. So I know that compresses twenty-plus years into just a minute or two, but Mark, perhaps you could describe a little of where this has come from and where you're going now, because this is really where the rubber meets the road, as it were, with flight tests just around the corner.

Yes, well, thanks, and first off, thanks for having me today. I really appreciate the opportunity to share the work that we're doing, and hopefully it will help others achieve similar results. The way we're approaching this is a discipline called runtime assurance. It's a technique that began quite a few years ago, I think primarily within core flight control systems, and we've expanded that concept. What runtime assurance does, in the portion of it that we're leveraging here, is monitor a boundary, a safety boundary. If you think of GCAS, you're talking about hitting the ground: where is the point at which safety needs to take priority over the conduct of the mission? If you can define that boundary in software and monitor it, you can allow the mission to take place until the point where safety needs to take priority. We then take control from the mission execution, whether that's the pilot or an autonomous aircraft, and make sure we avoid penetrating that boundary. As soon as we've avoided the boundary, and we're sure we're no longer going to imminently breach it, we give control back to whatever was controlling the aircraft before.

So if you think about GCAS, we let the pilot fly and conduct the mission, and we make sure the system is nuisance-free. If they get too close to the ground and they're about to hit it, we take over briefly, lock the pilot out, perform the avoidance maneuver and immediately give control back to the pilot. That concept is, at a top level, the way runtime assurance architectures are developed.

Now, as far as complexity goes, there can be an issue if you say, okay, we'll use that technique and we'll protect against all the possible hazards. If you lump all of that logic into one single boundary-monitoring algorithm, it becomes so complex it is just as difficult to certify as an artificial intelligence system. So what's unique to our approach is deciding not to combine all of those boundary monitors into one single algorithm. We functionally separate them into individual tasks, not unlike the way a pilot flies. If you're concerned about the ground, you take a moment to observe your surroundings and take in your situational awareness: either I'm okay or I'm not. Then you take a look at the traffic around you. You do a scan pattern; all pilots are taught this sort of thing, and it's a natural human thing to do. You look at each task individually and assess it, taking in the broader context of things, as part of your strategic mission planning.
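As a rough sketch of the runtime-assurance pattern Skoog describes, the mission controls the aircraft until a monitored safety boundary is about to be breached, a simple verified recovery briefly takes over, and control is then handed back. The boundary value, predictor and function names below are invented for illustration, not taken from NASA's software.

```python
# Minimal runtime-assurance (RTA) loop in the pattern described above.
# The boundary value, predictor and commands are hypothetical placeholders.

HARD_DECK_FT = 150.0  # monitored safety boundary (invented value)

def predicted_min_altitude(state: dict) -> float:
    """Crude look-ahead: altitude five seconds ahead at the current rate."""
    return state["altitude_ft"] + 5.0 * state["climb_rate_fps"]

def rta_step(state: dict, mission_cmd, recovery_cmd):
    """One control cycle: the mission flies unless the boundary is imminent."""
    if predicted_min_altitude(state) < HARD_DECK_FT:
        # About to penetrate the boundary: lock out the mission and
        # fly the simple, verified recovery maneuver.
        return recovery_cmd(state)
    # Nuisance-free otherwise: pilot or autonomy keeps control.
    return mission_cmd(state)
```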
That deliberate, task-by-task assessment works for strategic mission planning, but it's not tenable when you're reacting to a very time-sensitive, safety-critical task.

Now, the remarkable thing about this, because you mentioned the idea of having multiple sensors and being able to put this into an architecture, and I think one of the intriguing aspects, is this: okay, if you get warnings from these different sensors, which one is the most important, and which warning wins, you're going to hit the ground or you're going to hit another aircraft? Could you describe how you decide? What's really unique about the system is the way you were able to architect it, and this idea of, almost, the moral compass, as you call it. But anyway, go ahead.

Sure. It's just basic flying: if you have multiple sets of controls, a two-seat fighter or a normal aircraft where you have a pilot and co-pilot, there are very well-established rules for who is controlling the aircraft at any time. You've got to do it that way. In any organization you have to have a very well-defined chain of command, if you will. Similarly, we need to embed that within our logic, because, like you say, you don't want a force fight between these different monitors, one saying, we're about to hit that airplane, so I'm going to go over here, and ground collision avoidance saying, wait, with the rocks over there I want to go the opposite direction, toward that airplane. You have to make a decision. When humans are faced with that, think of driving down a road and a tumbleweed rolls out in front of you. Or let's say a cat jumps out in front of you, a little less Antelope Valley-specific analogy. You have to decide: what are my surroundings, and is it safe enough for me to avoid hitting that animal without jeopardizing my life or others'? You make this decision and act accordingly. Now, we may not all have the same morals, and I'm using air quotes on 'morals': the decision logic and how we weight the various consequences. But we all go through that process, and what's important to understand is that we're trying to develop a structure that does the same.

In a runtime assurance, or RTA, network, everything comes down to a switch that decides who has control of the aircraft. In the simple case, if you think of GCAS, the pilot controls the aircraft until it's about to reach the boundary; the monitor then says, nope, you should no longer be controlling the aircraft, I'm going to do a recovery maneuver, and it gives control over to the flight controls. That's what the switch does. We have multiple monitors going into that switch, and because we have a multi-monitor architecture, we have to govern which switch position we go to, that is, which monitor has control of the aircraft. So each monitor has to make an assessment of what the consequences will be if it is not allowed to control the aircraft. In the case of the cat versus, say, a ditch or a cliff you're going to plunge off, you would apply some weighting factor to running over the cat versus plunging off the cliff, and in your judgment you would most likely pick the one that has the least consequence. It's an unfortunate outcome, but there it is. This is where we start to get into a different area than the deterministic systems you mentioned earlier, Graham; you have to have a very specific architecture for how you do it.
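One way to picture that multi-monitor switch, each monitor scoring the consequence of not being given control and the switch siding with the worst score, is a minimal sketch like the following. The monitor structure and scoring are hypothetical, written only to mirror the description above.

```python
# Sketch of the consequence-weighted switch ("moral compass"): every
# monitor reports how bad the outcome would be if it is NOT given
# control, and the switch hands the aircraft to the worst-consequence
# monitor. All names and values are illustrative.

from dataclasses import dataclass
from typing import Callable

@dataclass
class SafetyMonitor:
    name: str
    consequence: Callable[[dict], float]   # 0.0 means no threat seen
    recovery_cmd: Callable[[dict], dict]   # the monitor's avoidance maneuver

def select_controller(state: dict, monitors: list, mission_cmd):
    """Decide the switch position for this control cycle."""
    worst = max(monitors, key=lambda m: m.consequence(state))
    if worst.consequence(state) > 0.0:
        return worst.recovery_cmd(state)   # worst hazard outranks the rest
    return mission_cmd(state)              # no threats: mission keeps control
```

Keeping each monitor a separate, simple function is the point of the functional separation Skoog describes: each one stays small enough to reason about, rather than one certification-defying super-algorithm.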
In this case, you can now bring in the rules of behavior that provide the weighting of consequences. Our architecture does not dictate what that weighting should be; it provides a framework that allows you to plug in the rules of behavior that provide the weighting for making these decisions. Each monitor goes to those rules of behavior to calculate its consequence, then shares that with the moral compass, and the moral compass simply selects the highest consequence, and that's who's going to control the vehicle.

And Mark, I should mention, by the way, that this system is called the Expandable Variable Autonomy Architecture, or EVAA. When we talked about this, you mentioned to me a really great analogy about how EVAA is trying to teach the rules of behavior to the system as it goes along, rather the way a parent does with a child. And you took the analogy to the ultimate point, where you hand over the car keys when the child turns eighteen. Why don't you talk through that a little bit?

Yeah, exactly. When I talk to the public, it's good to give an analogy, and a lot of us have gone through parenting and raising children. As a parent, when you have a little toddler and you've got a front yard, you may let the child play in the front yard, but there's very likely a street right next to that yard, and you've got to watch very carefully because you don't want the child to run out into the street. If you're in a city and crossing a street, what do we do? We take the child's hand in ours and we walk across the street with that child's hand in ours. We're holding that child, constraining its behaviors, so it can't run off. Why do we do that? Because the child doesn't know what can hurt it. And that's exactly the way we fly unmanned aircraft today. The hand-holding goes through a command-and-control link between the vehicle and the pilot, and the pilot basically has his hand on the UAV's controls, making sure it doesn't run off into the street, into something bad.

But a day comes in every parent's life, ideally, when they toss the car keys to the kid and say, be back by ten. What transpired between having to hold the child's hand crossing the street and now letting them drive a vehicle through streets with similar hazards running around? What happened is that we as parents, ideally, taught that child the rules of behavior: the consequences of actions, what can hurt them, and how they could hurt others using that vehicle. And the child, through their actions, has proved to us that they can observe those rules on their own. Therefore we have the trust to hand them the keys to the family car and let them go out for the evening.

So what we've done in the EVAA architecture is create this ability to embed rules of behavior as defined by a regulator, or whoever has authority over where these vehicles operate. Then, through our flight tests and our simulation tests, we send the vehicle through many hazardous situations so that it can show it will observe those rules of behavior.
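The division of labor described here, the architecture supplying the framework while an authority plugs in the rules of behavior that weight consequences, might look something like this minimal sketch. The hazard names and weights are invented examples, not NASA's actual rules.

```python
# Pluggable "rules of behavior": the framework does not hard-code these
# weights; an authority supplies them, and each monitor consults them to
# calculate the consequence it reports to the moral compass.

RULES_OF_BEHAVIOR = {        # invented example weights, not NASA's
    "ground_collision": 1.00,
    "midair_collision": 1.00,
    "airspace_breach":  0.60,
    "animal_strike":    0.20,
}

def monitor_consequence(hazard: str, probability: float) -> float:
    """Consequence = plugged-in weight for the hazard x its likelihood."""
    return RULES_OF_BEHAVIOR.get(hazard, 0.0) * probability
```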
With sufficient data, when we get to that point, we should have a body of evidence we can bring forward to say: we can trust this as much as a human to observe these rules of behavior. And then the question is whether that is sufficient, whether society is willing to accept that vehicle going out. There's going to be a burden of proof on anybody bringing a vehicle forward to fly it out there in the air, close to the public or anywhere else, to show it can be sufficiently trusted to do that. And that proof will come with data, just like the data we typically gather during flight tests.

I would say the day you throw the car keys to the kid is the day they finally wear you down with begging. But I've got to jump in here and say that one thing that's fascinating, when I read Guy's story about EVAA and your description of the system, is its flexibility in setting the rules. These are not just a standard set of rules that you might give a pilot; you can actually use situational rules. You make the point that you can have one set of rules while this vehicle is flying out in the desert, or somewhere out in the wildlands of Oklahoma, doing its mission, say delivering something, and then it flies into the airspace over Tulsa and brings in the rules the local authority sets for privacy and noise and all these things. The flexibility that system has is extraordinary. And I think that's the key to trusting autonomy. Guy actually makes a really good point in the story: in your own flight testing there was a constituency that was worried about disturbance to some wild animals, and you said, we should care about that, we should build that into the vehicle's rules of behavior. To me, that is exactly the type of approach we need to build trust, because then people will see these vehicles obeying a set of rules they can understand. Somebody on the ground watching a vehicle fly over making its deliveries can see it's behaving in a way that minimizes disturbance, that it's staying away from schools and all the rest. I think that's really key, ultimately, to gaining acceptance of autonomous vehicles.

If I could carry that thread a little further: the specifics of what we're trying to embed within the EVAA architecture are meant to enable exactly that, to make it usable within a certification system and to give a developer the ability to implement this kind of thing. You might think that bringing in each rule like this could become a complex system, with unmanageable logic and data. Well, fortunately our group, and our work going back to Auto GCAS, stems from a core group that was deeply involved with bringing digital computing into aviation back in the very early eighties, on the very first high-performance digital aircraft. We've seen the struggles with software, with architectures that effectively become complex systems as we develop them. So what we chose to do, on the NASA side especially, was really emphasize to folks that we have to be very careful about how we develop our software architecture if it's going to remain understandable, so that folks will be able to know the outcome is deterministic, given the situation.
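The situational flexibility Graham highlights above, one rule set over the Oklahoma wildlands and another over Tulsa, could amount to swapping the rules-of-behavior table by region. A toy sketch, with invented region names and weights:

```python
# Swapping rules of behavior by airspace: a vehicle could load the local
# authority's weightings as it crosses a boundary. Region names and
# weights are made up for illustration.

REGIONAL_RULES = {
    "oklahoma_wildlands": {"wildlife_disturbance": 0.4, "noise_over_school": 0.1},
    "tulsa":              {"privacy_overflight": 0.5, "noise_over_school": 0.8},
}

def rules_for(region: str) -> dict:
    """Return the plugged-in rules of behavior for the current airspace."""
    return REGIONAL_RULES.get(region, {})
```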
So we've created a software structure and architecture that strongly emphasizes minimal software code and common structures for how you develop algorithms. In fact, our latest version, which we've been running just in the last month, uses a common collision avoidance architecture. We were going to call it the generic collision avoidance system, but that acronym's already taken, so we're still searching for a name. Stay tuned. We use the very same code to avoid aircraft as we do terrain, as we do obstacles, as we do airspace, and, when we can bring it in, weather. In fact, just yesterday I was on a call where somebody was presenting some information on weather, and I said to myself, gosh, what you've got there we could probably plug in. If we had an aircraft flying this right now, we could plug it into the simulation and probably have it running and checked out in less than four weeks. We don't have weather or winds in our system now, but just knowing what you've got, if you shared your interface with us, we could probably have it up and going in four weeks, I believe. That's a very bold statement in the software world, but in truth we've had folks come in asking, what about this, how can you add that? And truly, we've found that with our architecture it literally takes only a few lines of code to bring in whole new capabilities. We can easily layer on more and more systems, so the architecture, the framework behind all of this we're talking about, is very flexible.

Now, this is a challenging thing to get accepted in the aviation world, because our industry is extremely stovepiped in its development, especially in software. The attitude is: who wants to take ownership of somebody else's code? Good grief, that's just crazy; what's the business case for that? So we're really trying to break that barrier, and we do have folks that are boldly considering it. I've been surprised that we have some relatively major players considering it and evaluating it for that purpose right now.

Good, okay. Well, we have to wrap up here. Guy, any very brief closing thoughts before I do?

No, just that I'm looking forward to seeing some of the flight testing, which I believe is going to be later this year. So we will bring our audience back up to date later in the year, when we'll get back to talk about how it all works.

So let me wrap up this week's Check 6, which is now available for download on iTunes, Stitcher, Google Play and Spotify. Special thanks to our producer in Washington, D.C., Donna Thomas. Thank you all for your time, and join us again next week for another Check 6.
