The Future of AI and Defense Analyst Workflows - with Michael Segala of SFL Scientific

Automatic TRANSCRIPT

This is Daniel Faggella and you're listening to the AI in Business podcast. We cover a lot of industries here in our use case episodes every single Tuesday on the AI in Business podcast, from banking to life sciences and beyond, and we occasionally like to touch on defense; that's indeed our focus today. Our guest is Dr. Michael Segala, the CEO of SFL Scientific, a fast-growing AI consultancy here in the Boston area. They've gone from zero to something like forty or fifty folks on their team in the last five years, they've worked with some rather large customers in addition to the military, and they've been awarded NVIDIA services partner of the year the last two years running. Michael speaks to us today about the workflow of a defense analyst: someone who's poring over data, aiming to find anomalies that might help inform defense objectives. So the military is looking to figure out where terrorists are going, or maybe looking for clues as to the behavior of some ruler in some faraway country. There are a lot of ways to proxy that with data streaming from various sources in the world, and defense analysts are burdened with what is often rather monotonous work to put together insights to bring to bear for military leadership. Michael talks to us about what it looks like to embed AI in that existing workflow, and where it can actually add value. This is useful for essentially any industry working with oodles and oodles of data, aiming to make sense of it in terms of reports and interpretation, but I think defense just gives us some pretty cool color; I always like covering defense use cases. So without further ado, we're going to fly into this episode. This is Michael Segala of SFL Scientific, here on the AI in Business podcast.

So, Mike, you folks have been doing a lot of work in federal. We last caught up maybe a year and a half ago; a lot of growth for you guys, a lot of accolades with NVIDIA. The government space has been a big place of growth. Talk to us about the workflow in social media and, kind of, the job of a defense analyst, and how that's being done today; then we'll pivot into where AI fits.

Yes, sure. Thanks, Dan. Of course, we're seeing tons of money being thrown into the federal space from an AI perspective, for lots of good reasons, right? Most of the opportunity here is that, traditionally, if you look across any of these large programs, from the Army to the Navy to the Air Force to the NSA, you have hundreds to thousands of skilled individuals manually looking through images or social media to basically find adversaries, right? That's what they do. They sit there, they're trained, they're very, very good at it, but they're very programmatic, right? They take what they're given as gospel and then they move it to the next level, where somebody is supposed to take action on it. That action might be to monitor more closely, or eventually: hey, there should be some kind of military action required there. So the use case that we've been working on for about the past year or so is with some of the departments within the Army that sit, obviously, outside of the US. What they're trying to accomplish is to basically be global listeners of all the information that's being distilled to them.
So you can imagine on a given day you have millions of tweets and LinkedIn posts and newspaper articles and things like that coming out, positive, negative, all sorts of different things across all sorts of different languages, relevant to a given region. And as a traditional analyst who sits, for instance, in the Army, your job is to basically assess all of these, look for risks, look for patterns, and then basically pass it on to the next person who would take action, right? So that is the very traditional way the problem has been solved, but it's hard, right? Because people are subjective, meaning what I think is a risk the other person doesn't, and it's not scalable, right? We're now exploding in terms of the content that's out there. So now we have a problem with subjectivity and an explosion of growth, and that's where traditionally we've been in terms of analyzing this kind of information.

Got it. So it harkens to a mental image that I have. I spoke with Mike Brown, who heads up what used to be called DIUx; I think it's the DIU now. He talks about the visor men back in the early days of Project Maven or something, where these folks with little green visors were looking at screens, labeling stuff manually. It's just tremendously repetitive, can be very draining, obviously, and it sounds like in the social media space it's much the same: we're looking for stuff, we're using judgment, it's super repetitive, and in order to scale it you just need more human beings sitting in seats. In this particular case, and just for the audience, Mike, we can't get into incredible detail here, but I'm trying to clarify the image for the listener: we might be looking for things that seem like rustles in the breeze for terrorist activity. We might be listening for things that seem like rustles in the breeze as hints to what a government is up to. I imagine maybe there are categories of risks we're looking for here, red, orange, green, or something like that, and these folks have those strata in front of them as they're looking at the social media. Is it as big as that?

It is, and it can't be, right? And it's not just risk in terms of whether something malicious is happening. It could be risk in terms of: do we see spikes in COVID in certain populations, or do we see propaganda where the government is saying, hey, we have no spikes in COVID, but the people are saying we're all getting sick, right? So it's really looking across the spectrum of language to say, hey, something just doesn't seem right, and that could be a lot of different things.

Yeah, that's incredible. It seems almost overwhelming, Mike, because when you talk about, you know, training a system, a bounded reality is what we like, my good man; that's what we like, my brother. But you're talking about, well, risk: it could mean these things, it could mean references to these things. We're talking about an infinite spectrum. The folks who are trained, I imagine they're trained for maybe a core set of main risks, but it sounds like they also have to be flagging and aware of all these tertiary could-bes, tertiary anomalies, at the same time?

It's the latter in most cases. Ideally we want to say, hey, just have your eyes open for this, but things happen too quickly, right? Things unfold. As a great example: one day you heard corona and you thought of beer; the next day you hear corona and you're thinking this is an actual medical problem.
This is actually a problem now, kind of globally, with some of the pharma companies. But terms change, technology changes, the way that we think about vocabulary changes. So you can't just have a rigid definition like you do in traditional imagery, where a building is a building is a building. You need to start distilling different languages, English, Spanish, even Spanglish, in this phenomenally complex, anomalous space that these analysts have to deal with. Which is the whole goal of helping them with an AI tool, right, which obviously we can talk about now.

Yeah, we'll segue into that. Just for clarity's sake, the way we frame up use cases, and then we'll pivot right into where AI fits into the workflow, is to talk about what the business value at hand is. So when we talk about what these people are doing, I imagine the goal is: we're potentially creating charts, we're potentially updating some colonels or generals with reports on topics of their interest, or maybe even just notifying somebody when something really spooky seems like it's going to happen. So these analysts, their output is what, might you sum it up? While they're doing this, are they just entering stuff into some big database that pumps out a report, or are they often doing the writing themselves?

Yeah. So let me give you one more small, intermediate piece of knowledge. What we're trying to create for them is basically a Google-search-type functionality. Just imagine you go to Google right now, and you can type into a window and ask a question: what are my risks today? That's an absurd example, but you could say something like that. The goal is to enable them to get back information and then write reports about what they're finding in that information.

Okay, right. And then serve that to their leaders, whoever they're serving, to actually let them make that informed decision as well.

Cool, okay. So these are the people that are looking for the info as well as creating the reports that are going to get sent up to the top. Okay, great. So yeah, let's talk about where AI could fit into that workflow. Already my mind is dancing with all the places NLP could fit into the mix, and other things like that. But you folks have figured out maybe what their problems are and where AI could do its work. What's that integration looking like? Where were those junctures where AI was able to make its way in?

So the first area is the obvious one, right? On a given thirty-day window, because we're basically looking across thirty days of legacy data, and that data is literally hundreds of millions of records, can we automatically ingest these records, right? We're using these modern-day tools called BERT models or ELMo models, all these fancy little names for deep learning models that make them sound simple. But it's basically saying: can we automatically ingest all this information and start understanding the patterns in it, such that those patterns can just be shown to a user who is looking to get some better level of understanding? So the first obvious place is saying: if I'm a user and I want to search "where is my risk," I want a computer to be able to understand what risk means and the context around where risk materializes itself, and then basically give me, like a Google search, the top ten areas where you need to investigate further. Maybe it's this tweet, maybe it's this document, maybe it's this user. Give you that almost search-based functionality.
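To make that concrete, here is a minimal sketch of that kind of embedding-based risk search. The episode doesn't specify SFL's actual models or stack, so this assumes an off-the-shelf sentence-embedding model (the sentence-transformers library and the all-MiniLM-L6-v2 checkpoint are illustrative stand-ins), with a few made-up records in place of the real feed:

```python
# A sketch of "Google search over risk": embed the analyst's question and
# every record into one vector space, then rank records by similarity.
# NOTE: the model choice and records are illustrative assumptions, not SFL's.
import numpy as np
from sentence_transformers import SentenceTransformer

records = [
    "Local clinic reports an unusual spike in respiratory cases this week.",
    "New bakery opens downtown to long lines and good reviews.",
    "Residents say official statements downplay how many people are sick.",
    "City council debates next year's road repair budget.",
]

model = SentenceTransformer("all-MiniLM-L6-v2")

query = "Where is my public health risk today?"
# normalize_embeddings=True makes the dot product below a cosine similarity.
query_vec = model.encode([query], normalize_embeddings=True)[0]
record_vecs = model.encode(records, normalize_embeddings=True)

scores = record_vecs @ query_vec    # similarity of each record to the query
top = np.argsort(scores)[::-1][:3]  # indices of the top-3 matches

for rank, idx in enumerate(top, start=1):
    print(f"{rank}. ({scores[idx]:.2f}) {records[idx]}")
```

The same nearest-neighbor ranking is what would back a "top ten areas to investigate" view; at hundreds of millions of records you would serve it from a precomputed approximate vector index rather than re-encoding everything per query.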
Yeah, does that make sense? It does, it does. So I'm trying to imagine an example. Again, I'm using extrapolated examples, of course, because, A, I'm not on this project, and B, you know, it's pretty sensitive stuff. But I'm imagining, okay, let's just say these patterns that you're referring to: maybe we have some that relate to the spread of a disease, could be COVID, could be something else; maybe there are entities we're sort of tracking here, their sentiment we're sort of tracking here. Would the display simply be: hey, here are terms, phrases, topics that seem to be exceedingly repetitive over the last, let's say, trailing thirty days, trailing twenty-four hours, et cetera? Is it something akin to that?

Yes, because from an analyst's perspective, you have to realize you can make it as complex as you want, but at the end of the day you have, not an unsophisticated, but an untrained-in-AI individual consuming those results. And most importantly, you have somebody who's not going to sit around for twenty hours waiting for your model to return a result. So you need to build something that inferences at an SLA they care about and produces a simple visualization around it: here are the top keywords, here are some topics that are meaningful to them, so that they don't need to think about the math behind it, such that they can almost take that probability score as their net-new gospel and move on to the next level, right? So it has to be simple in visualization but complex in build, right? (A sketch of that kind of trailing-window keyword view follows at the end of this exchange.)

Yeah, and that's the challenge with AI in general, writ large; that's the issue we're going to run into. Just thinking out loud here, taking that as new gospel, I mean, that's a lot of weight on your shoulders, Mr. Mike; that's a big deal, right? Because these folks obviously are going to use other tools, of course, but it sounds like this is going to be another tool, a layer that maybe they'll use to filter where attention goes. Hey, it's eight a.m., I'm sitting in front of my computer again: do I just start reading stuff, or do I maybe poke into the things that are beet red and have changed in the last twenty-four hours? Well, why don't I start there? It sounds like it's more of an efficient-use-of-scanning-time tool, maybe, more so than a definitive defining-of-what-the-risks-are tool.

It has to be, right? And there is an adoption curve, just familiarity with the tool and the output. And it's not just in this use case; this is literally everywhere, every day: I've done a process the same way for twenty years, and you're going to tell me this computer's going to give me the answer? You don't do that overnight; you have to gain their confidence. So you do that by maybe spending the first several weeks running side by side, in parallel, and showing them: hey, I'm giving you, every time, the best results or almost the best results, and getting them more and more confidence such that they rely more broadly on the system. It's always a decision support tool.

Yeah. So this brings us into topics that we really like to drum home here at Emerj.
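As promised above, here is a minimal sketch of one way to surface terms that are exceedingly repetitive over a trailing window: compare each term's rate in a recent window against a longer baseline. The episode doesn't describe the actual method, so the frequency-ratio test and the toy documents below are illustrative assumptions:

```python
# A sketch of trailing-window keyword spiking: flag terms whose recent rate
# far exceeds their baseline rate. Method and data are assumptions.
from collections import Counter
import re

def term_counts(docs):
    """Lowercase, tokenize, and count terms across a list of documents."""
    counts = Counter()
    for doc in docs:
        counts.update(re.findall(r"[a-z']+", doc.lower()))
    return counts

def spiking_terms(recent_docs, baseline_docs, min_ratio=3.0, top_n=10):
    """Return (term, ratio) pairs where the recent rate is at least
    min_ratio times the (smoothed) baseline rate."""
    recent, baseline = term_counts(recent_docs), term_counts(baseline_docs)
    r_total, b_total = sum(recent.values()), sum(baseline.values())
    spikes = []
    for term, count in recent.items():
        recent_rate = count / r_total
        # Add-one smoothing so terms unseen in the baseline still score.
        baseline_rate = (baseline[term] + 1) / (b_total + 1)
        ratio = recent_rate / baseline_rate
        if ratio >= min_ratio:
            spikes.append((term, ratio))
    return sorted(spikes, key=lambda pair: pair[1], reverse=True)[:top_n]

# "Trailing twenty-four hours" vs. "trailing thirty days", per the exchange.
last_24h = ["everyone in our town is getting sick", "the hospital is full again"]
last_30d = ["market prices stable today", "a new school opened", "roads are busy"]
print(spiking_terms(last_24h, last_30d))
```

A production version would strip stop words, count entities or phrases rather than raw tokens, and precompute the baseline, so that a refresh meets the kind of inference SLA Michael mentions.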
One factor that I'm seeing a lot of in this kind of COVID era is that the tools that are going to just truck along in the next two years (I think UiPath, for one, is just going to; Lord knows how much they're going to be worth in a year or two) are the efficiency tools; everybody wants efficiencies. AI for some weird reason often gets couched as efficiencies only, which is so limiting and a terrible way to frame it. But deep, really hard integrations that either involve a lot of data sources or overhauling workflows, I think, have an even lower chance of getting adopted when budgets are down, we're sketchy about the economy, and everybody's already fearful. I think what we have to be able to do, and it sounds like this is what you're talking about (we've addressed it on another podcast), is find a place in the workflow to fit it in where they're not doing data science; they're not really getting too crazy. In the beginning, Mike, you probably need to bring on some of these guys to help with engineering the features, coaxing out what you want to pay attention to; you probably need to partner with a cluster of them for a while. But day to day on the dashboard, they're just able to look at it; it's not really changing what they're doing. Do you put this into a dashboard they're familiar with, or a new screen they have to have open? How does this fit into the flow of their attention?

So, it absolutely depends on how they're going to adopt it. I think most people do it the wrong way, and I'm going to talk about the way that we do it. You can say it's right or wrong; in my view, it's the right way. Most people, when they do real integration of data science, AI, ML, whatever you want to call it, start from the fun side, the algorithm side, the training: I'm going to get the greatest, biggest NLP model in the world, I'm going to train it with a billion parameters, and it's going to be 99.9 percent accurate. But in reality, in a production environment, there's no way anybody will ever consume that: it's too slow, you don't have the hardware, you don't have the data to support it, an analyst isn't going to wait for it to predict, or you don't have a visualization. So to your point, you always have to start with those fundamental business requirements, from an inferencing point of view, from a visualization point of view, from the business side: how are you going to actually get ROI out of AI? Solve that problem first, with the assumption that the model works; you don't need to prove it, just assume the model works. So start there and ask: does this need to be in the same visualization dashboard, or is it something different? Because that can have profound differences on what you do or don't build; maybe you can have a different technology stack, or you can't and you're limited, right? So starting from those end-user requirements, you infer what kind of data science modeling you can do, work your way to that side last, and then actually do the modeling. Everybody is uniquely different depending on what they need to accomplish.

Yeah. But it sounds like maybe it's embedded in existing dashboard X, maybe it has to be its own interface in some way, shape, or form; it just depends on how it's going to fit in. Obviously, you guys deal with custom stuff per client. Okay, that's useful context. I think here we'll talk about the nitty-gritty of getting past those adoption barriers. I know you folks also work in healthcare, and I've got to tell you, and I say this with all due respect: if there's any sector I was not going to sell AI into if I were doing the technical work (luckily, we just do market research here),
it would be healthcare, because of just how many hurdles there are. Like, the CEO loves it and it's going to benefit the patient, but the doctor has to learn it and the nurse's workflow changes a lot; the stakeholder mix there is just, I mean, it's scary. In defense, of course, it's also complex. So if we talk about what it takes to, you know, get folks to start to use this: you went in with great intentions, the folks who had signed off on this knew that you were good at this, they believed in the vision, they thought this would be really, really helpful, and now you want to get people to kind of get some traction with it. What is that convincing process, for lack of a better term, that internal traction process? Talk about a bit of that, Mike.

The beauty of healthcare is that if you're a researcher or a doctor, you're inherently a scientist who is open to collaboration and ideas, right? Okay, that makes it easy; that's a good starting point. When we get involved in healthcare, and this could be in research hospitals or big pharma companies or something like that, we have to embed ourselves with their SMEs. So, for instance, when we're working with a hospital system and they want to do, say, medical imaging, radiological imaging, something like that, you don't just start building. What we've done in the past to help them bridge this barrier is literally go to the rounds in the morning, sit with them while they're discussing patient cases, and literally talk through and understand: what are they doing when they're looking at these images? How much time does it really take them, and how much of that is time they could spend talking to their patients? And helping them build that story of saying: if this were solvable, if we could predict conditions and sepsis and relapses and give you a better patient and doctor experience, would that be of interest? Oh yeah, that would be great. And then you start teaching them: okay, so we've broken down how you think the process works from a process perspective, and I understand what you're doing, by literally sitting in on your rounds with you, which has to be quite the experience.

Terrible. Well, it's not terrible, but it's a reality check. Then you start needing to educate them, right? Like, what does it mean to build an algorithm? What does a probabilistic score mean? How is this going to help you in your day-to-day? And really treating them as, you know, they're brilliant in their discipline; you have to go in there and show that you're brilliant in yours, and come together in a common understanding, and that really breaks down the barriers. If you're just going to go in there and say, hey, listen, I can predict cancer better than you can, they'll laugh you out; you'll never have a chance. You really have to develop a peer-to-peer relationship.

It sounds as though, yep, so this is, even in suggesting the project itself, and we'll wrap this as our last little bundled question here as we close out this first interview: sitting down, figuring out what their day-to-day problems are, maybe suggesting, having it be their idea a little bit, a little inception action? Yeah, it has to be, you know. I think about philanthropic efforts, right? It's the same thing: you go into Africa, "I can get you guys water," and it's like, oh, do be careful; you have to sort of find a way in. And so in your case it's the same ball game. You go there, figure out what they care about, where they're bumping up against things, and come
from that place. They're not going to listen until they know they're understood, right? So you make sure that they're understood. Then you can say: hey, we could have this, in this way, so that it would make this easier; wouldn't that be useful? It's almost a "what if you could do it" kind of thing, and that's how you have to suggest it. Then you have the issue you brought up with defense, and we can talk about whatever industry's dynamics are fascinating, maybe defense specifically, around what it takes for them to start to use it once you've been in and built it and now it's available. You mentioned running things fifty-fifty. Does that tend to be a framework of thinking for you as the vendor, to say: okay, we always know, when we build something, we're going to run a fifty-fifty test, we're going to talk to the economic buyer, we're going to tell them that's what we're going to do when it's done being built, because we know this is not just going to be a rolled-out thing; there's always going to be a little bit of a wrestling match. Is that part of what you're planning for?

Oh my God, yes, especially in medicine. The biggest issue with medicine, even though they're brilliant, is they don't like to think they're ever wrong. But statistically, doctors are wrong some portion of the time, right? It's just a fact. So part of this journey has to be helping them see, and designing a system that allows them to understand, what we're predicting and what they're actually predicting, and showcasing that these things can live cohesively: you're not replacing them, you're augmenting them and making their jobs more efficient, right? So you can't just say: you're 93 percent accurate historically, because that's what radiologists already are; our machine is 95 percent accurate; we're better than you. You can't do that; it doesn't work that way. So you have to build this workflow the same way, right? A UI that's friendly, that shows them and explains: it's a heat map, it's local, it's something that gets them comfortable with why you're making your decisions, and then that's how you get through adoption. (A sketch of that kind of local heat map follows at the end of this exchange.)

Yeah. So again, a lot of human factors here. You guys are in the services industry, Mike, and, man, you've got to learn your soft skills, my good sir; clearly you've learned a lot of them. So interesting. I think maybe this is a good take-home message as we close out, for the folks that are working on these kinds of solutions: think about how we really collaborate on the origin of the idea and the solution and the way it's going to be framed, and then how we do the same thing with very soft, nice framing around getting them to try it, getting them to adopt it, making sure it's not an automation risk and an insult to their intelligence. People want to pretend it's about the algorithm, Mike, but I guess it's not.

I would say, I mean, at this point we've worked on almost every use case in every industry, right? Realistically, they're all solvable: given enough data and technology and hardware, we can solve every problem in the world; that's easy. The people are the hard part.

The hard part, and maybe a little bit of bloviating with that claim there. But I will say you're driving home a point that everybody listening does need to tune into. If you've listened to the show for long enough, you're well aware that that's the case. Mike, I'm really glad we got to dive into that aspect of use cases as well today. I know that's all we had for this first interview, but thanks so much for joining us.
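As flagged above, here is a minimal sketch of one way to produce that kind of local heat map: occlusion sensitivity, which masks one region of the image at a time and measures how much the model's score drops. The episode doesn't say which explanation method SFL uses, and the `predict` function below is a hypothetical stand-in for a trained classifier:

```python
# A sketch of occlusion sensitivity: where masking a region makes the score
# drop most, that region mattered most to the prediction. The "model" here
# is a dummy stand-in, not SFL's actual radiology classifier.
import numpy as np

def predict(image):
    """Hypothetical classifier: here, it just scores brightness in one corner."""
    return float(image[20:40, 20:40].mean())

def occlusion_heatmap(image, patch=8, stride=4):
    """Slide a gray patch over the image and record the score drop per position."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros(((h - patch) // stride + 1, (w - patch) // stride + 1))
    for i, y in enumerate(range(0, h - patch + 1, stride)):
        for j, x in enumerate(range(0, w - patch + 1, stride)):
            masked = image.copy()
            masked[y:y + patch, x:x + patch] = image.mean()  # occlude the region
            heat[i, j] = base - predict(masked)  # importance = score drop
    return heat

image = np.random.rand(64, 64)
heat = occlusion_heatmap(image)
print("most influential patch (row, col):",
      np.unravel_index(heat.argmax(), heat.shape))
```

Regions whose occlusion causes the biggest score drop are the ones the model leaned on, which gives a radiologist something concrete to sanity-check against their own read.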
Awesome. Thank you, Dan.

So, that's all for this episode of the AI in Business podcast. If you like what you're hearing here, if you enjoy these use case episodes, if you enjoy our making-the-business-case episodes on Thursdays, where we talk about AI deployment and return on investment, then drop a review on iTunes. It's now called Apple Podcasts, but it's very easy to find us: just search "AI in Business" on Apple Podcasts. Your feedback is not only tremendously valuable to me and my team, it helps inform who we want on the show and what kinds of topics we want to cover, so we can make things better and better for you. In fact, this twice-a-week format that we're doing is actually based on reviews, feedback, and LinkedIn notes from those of you who are loyal listeners, so I want to say a big thanks for that. That's going to keep us informed moving forward as to what you'd like to know. Also, it helps get the word out about the podcast itself. Every now and again I'll share one of the nice five-star reviews on Apple Podcasts to let other folks know what people really see value in with the program, and that helps us a bunch as well. So help me help you: if you enjoy the program, drop us a five-star review; it's the AI in Business podcast on Apple Podcasts. And if you haven't already checked out our other show, the AI in Financial Services podcast, check that out on Apple Podcasts; we're on Spotify, SoundCloud, and your favorite platform as well. Be sure to get all of our latest coverage on financial services: banking, wealth management, insurance, et cetera. We have a whole show dedicated to that as well. So that's all for this episode, and I'll catch you here for our Thursday making-the-business-case episode on the AI in Business podcast.
