A highlight from Episode 377 - Artificial Intelligence and Operational Resiliency
This is Jane Lo, and I'm at the Global Resilience Federation office here in Singapore. And with me today, I'm very pleased and very privileged to have Mark Orsi, who is the CEO of GRF, or Global Resilience Federation, all the way from the United States of America. So thank you, Mark, for your time today. Thank you for having me. And so Mark will be sharing with us the latest in terms of artificial intelligence, which is causing a lot of excitement nowadays, as well as the operational resilience framework, which has been developed by GRF over the last year or so. So Mark, give us a brief introduction about yourself and also GRF and what the organisation does. And I also understand that you're very passionate about AI. So tell us about the history of your career as well. Sure. So I started as an aerospace engineer many, many years ago. And after aerospace, I was in computer science, working on computer vision. So it's really been interesting to see the journey until today. For the last 15 years or so, I've been in the financial services sector, primarily in technology risk and cybersecurity. I worked at Goldman Sachs for about eight years, KPMG for a few years, JP Morgan for a few years. And for the past four years, I've been at Global Resilience Federation. We're a non-profit; we manage and support 17 different sharing communities: ISACs, which are information sharing and analysis centres. They're collective defence communities where organisations join together to help protect themselves against the various threats that are out there. And of course, you have your conference later in October, later this year in Texas. Yeah, Austin, Texas, October 11th through 12th. Anybody who's local or who wants to make the journey, please come. We also have an OT-ISAC conference coming right up on September 6th, which is more local. It's the sixth year running, and it covers security and third-party risk.
So we have practitioners, CISOs, third-party risk practitioners, business resilience practitioners. And we have a whole track on AI security. So we've worked for the last six months with 20 organisations on two papers: one is a CISO guide to AI security and one is a practitioner's guide. So let's start with AI, which is what gets people excited nowadays. So tell us, you've got a great vantage point from America, which is a leader in many ways when it comes to technology and innovation. So what is the conversation like in terms of the business use cases that you see in America? Sure, we're coming from a cybersecurity and resilience perspective. So I was on a call about a week and a half after ChatGPT was released in November of last year. A hundred different chief information security officers were on the call, all really concerned about the business forging ahead without taking any security considerations into account, but also about some of the major strengths: how can we use this for good as well, right? How can we use it to find vulnerabilities? How can we use it to secure our code? An example: one of the organisations had been using a tool like it to rewrite their code base, translating it into a different language, which added memory management to their code, and then translating it back to the original language. They were also using it to multiply their developers' output tenfold, because the developers didn't have to write the test cases and the additional code around their changes. So there are plenty of benefits to it and there are plenty of risks, right? We need to think about the whole pipeline: whether we have in-house AI models or whether we're using third parties, there are different kinds of risks we need to consider. There's also been a lot of talk of using AI large language models to do predictive diagnostics in healthcare, right? And GRF, of course, you have more than, what, 20 member organisations?
It's 17 different ones, yes, 17. And one of them is Health-ISAC, right? So talking to your member communities, do you see a difference in the pace of adoption of AI? Yeah, absolutely. So we worked with 20 different organisations, including some in healthcare, some in manufacturing, some in energy and others, to put together a guide on AI security, both the practitioner guide and the CISO guide. And yes, there's a different pace of adoption. There are organisations that have been using machine learning and AI for many, many years. But with the advent of this generative AI, there's just a tremendous amount of concern, and the pace of change is much more rapid. It used to be that every year you'd have change, and now it's every week there are new things happening. So of course, artificial intelligence is not new in cybersecurity. How is this latest innovation of using large language models going to be different in terms of adoption in cybersecurity? I think you mentioned a few sort of like... I think some of the power of it is that ultimately, if you think about the resource limits that we have, there are always constraints on the number of resources available that are cyber focused and cyber educated. And so if we can take the power of some of those large language model generative AIs and multiply the efforts of the staff that we have, then we can also meet some of the needs that we have from a resource perspective. Also, I think ultimately we're going to get into very targeted threat intelligence, where it'll be based upon our own assets. So if you're an enterprise and you have specific assets and specific threats in your sector, then the intelligence that's delivered to you would be very targeted to your organization specifically. So it's going to get much more powerful over time at giving you tailored threat intelligence.
Do you think that the rate of adoption on the cyber defense side is possibly faster than how the threat actors are adopting... Yeah, I mean, that's a big concern, right? I think probably we'll be behind the curve. All right, okay. I think there was even talk early on about just pausing the pace of development, making sure that we have the regulatory framework so that we know how to do this ethically and responsibly. So I think from a machine learning perspective, we could be doing very well, but from a generative AI perspective, we may be behind the curve a little bit. I think with the complexity of attacks, we'll essentially be putting nation-state tools into every threat actor's hands. So I think it's a very concerning few years as we work to try and match the pace of change. You think that is something that is quite realistic, that it will happen, or is it just kind of hype? Because there's some argument that human developers or human threat actors are possibly a lot more sophisticated when it comes to developing malware code, and you can kind of tell the difference between code that's generated by generative AI and code that's written by human developers. I'll give you an example of just a very personal use case. I was working with my son just a couple of weeks ago, and we found an old Nintendo DS. He wanted to run videos on his old Nintendo DS, so we used ChatGPT to learn how to hack into the Nintendo DS to make it display videos. He never had any programming experience, but we were able to do this. So this is exactly what I mean: we can put these tools into everybody's hands. So we need to be extra vigilant as this change happens. So what do you think is the immediate step that cyber defenders have to take in the face of this threat? Well, I think there are a few things. Number one, we need to be moving forward to be using it in the right ways, to be using it from a defender perspective.
So it can help us find vulnerabilities quicker, and it can help us develop threat intelligence that's better and more tailored towards each individual organization. But also, from a security and ethics perspective, there are all sorts of different attacks that can happen to these systems, whether it's on the input data or in the model itself; you can embed undetectable backdoors in these models. So if you're using a third party to develop your models, you need to be very concerned, and maybe even have multiple models so you can compare the answers. Now, some people also say, right, let's just get the basics right, right? So for example, we'll get more sophisticated phishing emails, and that just means more awareness in terms of how to spot a fake email versus a genuine email. So that's kind of the basics that we need to sort out. Yeah, but it's also addressing all the different aspects of that. You know, I mentioned the models themselves. So protecting the models, protecting the data. You don't want data poisoning. You want to detect and monitor these things because they may evolve over time. And you need to be really concerned about your third parties, because every third party is going to be introducing AI. So we talk about an AI bill of materials. The same as you have a software bill of materials, we want to think about how we can develop an AI bill of materials. So how can you ensure the provenance of the training data and the model that's being used? How do we know which models we're using and which training data is being used? So if we find an ethical bias, or we find, let's say, that it was trained on a set of code that had malware embedded in it, or a set of code that had logic bombs in it, you don't want to embed logic bombs in the new code that you're writing by using these tools. So we need to make sure that the training data is clean. For example, let's just take the example of data poisoning, right?
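To make the AI bill of materials idea above concrete, here is a minimal sketch by analogy with a software bill of materials. There is no settled AI-BOM standard yet, so the record fields, class name, and dataset names below are illustrative assumptions, not an established schema.

```python
# Illustrative sketch of an "AI bill of materials" record: pin which model
# and which training data were used, so tampering or substitution can be
# detected later. Field names are assumptions, not a formal AI-BOM standard.
from dataclasses import dataclass, field
import hashlib

@dataclass
class AIBillOfMaterials:
    model_name: str
    model_version: str
    supplier: str                # third party that built or trained the model
    training_datasets: list      # names of training data sources
    dataset_digests: dict = field(default_factory=dict)  # name -> SHA-256

    def record_dataset(self, name: str, raw_bytes: bytes) -> None:
        """Pin a training-data snapshot by hash at training time."""
        self.training_datasets.append(name)
        self.dataset_digests[name] = hashlib.sha256(raw_bytes).hexdigest()

    def verify_dataset(self, name: str, raw_bytes: bytes) -> bool:
        """Audit check: does the data still match the recorded digest?"""
        return self.dataset_digests.get(name) == hashlib.sha256(raw_bytes).hexdigest()

# Usage with a hypothetical third-party model:
bom = AIBillOfMaterials("triage-assistant", "1.2.0", "ExampleVendor", [])
bom.record_dataset("phishing-corpus-2023", b"...training data bytes...")
```

The point of the hash pinning is the audit trail Mark describes: if a dataset later turns out to contain malware samples or logic bombs, the digests tell you exactly which deployed models were trained on it.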
So that is perhaps, you know, looking at how you provide access levels to your data set. So it's not any different from sort of the basic cybersecurity measures, right? Right. It's using some of the same constructs that you have across others. But one of the things that you need to be concerned about, too, is that some of these are dynamic models. Right now, it's a very static world. We have these models that were trained on, you know, 2021 data, right? But in the near future, these things will be much more dynamic, actually responding to the inputs to change their behavior. So you'll need to be monitoring them. Yeah, that's very different. Right, okay. So I think one final question on the topic of AI before we move to the operational resilience framework. A lot of people say, right, AI is going to mean, you know, perhaps job losses, right? How do you see that playing out in the cybersecurity field? So I'm, you know, concerned in general. I studied AI 30 years ago. I was concerned about it then. You know, I thought the first sort of impact would be with self-driving cars and in our transportation industry. It turns out that these models advanced very quickly, maybe quicker than people were expecting. But it's going to take a very long time for us to digest that through all of the business models that we have right now. I think it's going to multiply our efforts. Cybersecurity is an industry where we're very resource constrained: there are way more cyber roles required than we have people to fill them. So it'll just multiply our capabilities and maybe meet the needs that we have. So I think that's a very positive thing. Ultimately, I think our economy will be changing in the next decade or two in different ways, and I think we can only imagine what those changes will be. Right. Okay. So talking about overcoming some of these challenges, it means, like, resiliency, right?
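Before moving on: the monitoring Mark calls for on dynamic models can be sketched in a simple form. One common approach (an illustration here, not something the speakers specify) is to re-run a fixed "canary" set of inputs on a schedule and flag when the model's answers drift too far from a recorded baseline; the threshold value is an arbitrary assumption.

```python
# Sketch of behaviour monitoring for a model that may change over time:
# probe it with a fixed canary set and compare against a baseline snapshot.
# "model" is any callable; the 10% threshold is an illustrative choice.

def snapshot(model, canary_inputs):
    """Record the model's current answers on a fixed probe set."""
    return [model(x) for x in canary_inputs]

def drift_fraction(baseline, current):
    """Fraction of canary answers that changed since the baseline."""
    changed = sum(1 for b, c in zip(baseline, current) if b != c)
    return changed / len(baseline)

def check_model(model, canary_inputs, baseline, threshold=0.1):
    """Return True while behaviour stays within tolerance."""
    return drift_fraction(baseline, snapshot(model, canary_inputs)) <= threshold

# Usage with toy callables standing in for two behaviours of a model:
canaries = list(range(10))
baseline = snapshot(lambda x: x % 3, canaries)
unchanged_ok = check_model(lambda x: x % 3, canaries, baseline)   # still matches
drifted_ok = check_model(lambda x: x % 2, canaries, baseline)     # has drifted
```

Real deployments would compare richer outputs (scores, text similarity) rather than exact equality, but the shape is the same: a pinned baseline plus scheduled re-checks, because a model that responds to its inputs cannot be validated once and forgotten.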
So that plays into the next topic, the operational resilience framework. So tell us what resiliency means in the context of this framework and perhaps cybersecurity. So back in 2018, there was a paper from the Bank of England, regulatory guidance on operational resilience and impact tolerance. It was really thinking about the potential systemic impacts of bank failures on customers and partners. And so the question was, well, how do we respond to that? What are the things that we need to do to ensure that we can continue to operate our critical services through a crisis, even in an impaired state? So, Trey Maust is the CEO of Sheltered Harbor, which was an initiative from FS-ISAC to help protect consumer data, so that if there was a bank failure or a bank disruption, you could still access your bank account information. It would prevent sort of a run on the bank, or that systemic impact. That data was stored in a distributed and immutable way, in a standard format that the different banks could access. So we took that concept, and Trey was always thinking, hey, we probably need to do more than just protect this little piece of data; we need to also prevent the bank from failing its critical services. So we were working with him, and with Bill Nelson, who was the CEO of FS-ISAC for 12 years and is on our board of directors. Trey and I met for about a year to say, well, what should we be doing beyond just protecting this little piece of information? What are those critical services that we need to protect? And we need to make sure that through a crisis they would operate, even in an impaired state. So we developed a path to operational resilience. We worked with 100 organizations and financial services regulators to develop a very simple path that was meant for every industry, not just for financial services. And so it's a path of seven steps and 37 rules. We tried to make it very simple.
It's aligned to NIST and ISO standards and extends existing business continuity and disaster recovery standards and frameworks. It takes a holistic approach, really looking outward instead of inward. The internal business services that we need to keep running, those we call business critical services. Operations critical services are the things that your customers and your partners depend upon, and we make sure that those continue to run through a crisis. If you have a wiperware attack, a ransomware attack, a data center fire, you want to make sure that your customers' critical services continue to function through that crisis. So take an example of, say, a ransomware attack hitting one of these industrial organizations, right? How would this resiliency framework help, you know, plug some of the gaps? So what's interesting is we've done this very much from an IT focus. We want to extend it to the OT realm as well. So we'll be working with OT-ISAC and the Manufacturing ISAC late this year or early next year, and we'll set up a working group to do that. But actually, one of our first scenarios that we put out there (it's freely downloadable from our website, grf.org) is a scenario that we call ACME Pipeline. It was essentially a replication of the Colonial Pipeline incident to highlight the benefits of an operational resilience framework approach. And so we looked at, you know, what are the critical services for a pipeline? And it was really just delivering petroleum. There are a bunch of regulatory responses they have to make, there's payroll, there are all these different systems. But when it comes to what you actually deliver to your customers and your business partners, it was just delivery of petroleum.
So making sure that they could deliver petroleum through that crisis: if they had a ransomware attack or a wiperware attack, what are the things they needed to do to ensure, even in an impaired state, that they could deliver to their high priority customers and their low priority customers? And designing for that: let's say I can only operate at 80% capacity. Can I still provide service to my low priority customers, or do I need to provide service only to my high priority customers? So it's understanding at what point you cut off service, or whether you're going to disappoint some people because there's no longer a service for them. Designing that into your system and pre-planning it is part of this framework. Right, yes, yeah. So it's kind of like looking at it from a consequence and mission perspective, rather than starting from the asset inventory, that traditional approach. It was interesting, I was hearing some of the same language that we've been developing over the last two years coming from the OT experts on the panels as well, about exactly that: about operating through a crisis, about the mission critical functions. Right, okay. So we just talked about one scenario, which is ransomware, and you are looking to, I guess, expand to different types of scenarios to try to help organizations assess where they are in terms of their maturity when it comes to resiliency, yes? Yeah, so it really doesn't matter what the type of attack is, right? And also, I think one of the concerns is we've been very IT focused; we've talked very much about the data and making sure that it's distributed and immutable, but you also need to look at it from a service perspective. You want to make sure that you can deliver those services. That's right. Whether that includes manpower or whether it includes just technology. So that's very important. So what are the next steps then? You say that the efforts started in 2019, yes? There are two active working groups right now.
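The degraded-mode decision described in the pipeline scenario above (at 80% capacity, who still gets served?) is exactly the kind of rule the framework says to design before the crisis. A minimal sketch of such a pre-planned allocation, with hypothetical customer names, priority tiers, and demand units chosen purely for illustration:

```python
# Sketch of a pre-planned degraded-mode service policy: with impaired
# capacity, serve higher-priority customers first and defer lower tiers
# only when capacity runs out. All names and numbers are illustrative.

def plan_service(customers, available_capacity):
    """customers: list of (name, priority, demand), where a lower priority
    number means more critical. Returns (served, deferred) for the
    capacity that remains after an incident."""
    served, deferred = [], []
    remaining = available_capacity
    for name, priority, demand in sorted(customers, key=lambda c: c[1]):
        if demand <= remaining:
            served.append(name)
            remaining -= demand
        else:
            deferred.append(name)
    return served, deferred

# Hypothetical pipeline customers: 100 units of demand at full capacity.
customers = [("refinery-A", 1, 40), ("distributor-B", 2, 30), ("retail-C", 3, 30)]

# At 80% capacity the lowest-priority customer is deferred, a decision far
# better made and communicated before the crisis than during it.
served, deferred = plan_service(customers, 80)
```

The value is not in the code, which is trivial, but in having the cut-off rule agreed and written down in advance, so that operating in an impaired state is an execution of a plan rather than an improvisation.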
So one is, we're developing a maturity model. We're going to release the next iteration in October of this year at our conference in Austin, Texas. So, not local. The next iteration will come with a maturity model and incorporate some of the comments that we've received from multiple industries, and we're still actively seeking feedback; we want to make sure it's a cross-industry approach. We also have another working group focused on a scenario in the financial sector: an ACH payments network disruption. ACH is the network through which, you know, domestic payments are made, and it carries $76 trillion a year. So it's a very significant system. What would a disruption like that mean, and how would it impact banks? And how should we be thinking about operational resilience in that scenario? Working through that, we'll probably do an exercise in November of this year, which would be open to many banks, to have that discussion. So we'll be looking at the next steps. Like I mentioned, we'll be looking to extend the framework to OT and ICS concerns, and we'll be looking to develop the third iteration and additional scenarios. So what is the first thing that organizations have to do if they want to adopt your framework? They can go to our website now and download it freely. It's available. They can review it and give feedback, but also think about how they can use it in their organizations, right? Some major banks are using it just to develop training materials. Organizations have different business units across the globe, regionally, and those different business units can consider operational resilience and how they work. So I think it's a really good learning tool. And ultimately, as they implement it, the first steps are: number one, we build it upon the baseline of NIST and ISO standards.
ITIL, change management, making sure they have core standards, core practices, and core controls in place; and then naming an operational resilience executive. So, really getting somebody who has visibility across business and technology. Yeah, a champion of it, who can sustain it through organizational change, right? Who really has some power and authority to implement it. That's really important. And then you can start walking through the framework and doing the things that are necessary. It'll take investment, and it'll take some work, to really become more resilient. And so we're working on the maturity model as well, so people can evaluate where they are and where they think they might have gaps. Can they participate in one of your working groups, so that they can assess how they can practically use it? Yeah, they can contact me. Happy to have that, happy to have people reach out to me and contact us. Again, our website is grf.org. And yeah, we're continuing to develop new working groups and new sector focal points. Our goal is to make the whole ecosystem more resilient, to figure out how organizations can do that, and to contribute to security and resilience in any way that we can. So this is one way to do that. Possibly there's a way to incorporate an AI element, the latest generative AI. I would love it, right? I love it. I mean, that's a real passion of mine from many years ago. So it's great to see it finally come into play. And we just have to address it in the right way, with the right security concerns. So, well, Mark, thank you so much for your time today, to talk to us about generative AI as well as the operational resilience framework that GRF is developing. So thank you very much for your time. Thank you. Thank you, Jane.