Artificial Intelligence

Listen to the latest news, expert opinions and analyses on the ever-expanding world of artificial intelligence, data science and machine learning, broadcast on leading talk radio shows and premium podcasts.

A highlight from Chris Parkinson Co-founder and CTO of RealWear on Voice Controlled Applications for Industrial Workers - Voicebot Podcast Ep 287

The Voicebot Podcast

01:29 min | 12 hrs ago

"This is episode two 87 of The Voice by podcast. My guest today is Chris Parkinson cofounder and CTO of real ware. We talk AR and voice control for industrial applications. Welcome back voice by nation. This is Brett Keller. You're hosted The Voice by podcast for the past 5 years. We have brought you a weekly interview with a true innovator in the voice and conversational AI space. Today we have a first time guest and a voice UI approach that we have not previously spoken about. So I think that will intrigue you. I've known Chris Parkinson for nearly 20 years and he's always at the front end of providing practical applications of new technology. Real wear has more than 80,000 units in the field, and they help make workers more productive in the automotive manufacturing or gas, healthcare, variety of other industries. Even more interesting for our community, the product was hands free and voice first from the outset. There are a lot of companies offering smart glasses and headsets for industrial and manufacturing use cases. But many are stuck in this cycle of test and learn. It's like the perpetual proof of concept. Real ware is being used in production. That's what impressed me, and that's why I wanted to talk to Chris. In addition, you may think you know what AR is. It's an acronym, right? However, you might be surprised at how it is defined at real wear. Let me know if you think about that. And it's their new take on an old acronym.

A highlight from AI Today Podcast: Applying CPMAI in the real world, Interview with Andre Barcaui, CPMAI

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

02:35 min | 20 hrs ago

"The AI today podcast produced by cognitive cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence. Learn about emerging AI trends, technologies, and use cases from cognitive analysts and guest experts. Hello, and welcome to the AI today podcast. I'm your host, Kathleen Walsh. And I'm your host Donald smells are and we hope you've been enjoying our educational series. We've been recording all of these podcasts on glossary terms. Some of them may seem really basic. You're probably wondering, wait a second. AI today, podcast has been on for 5 years. Why are you now just defining artificial intelligence? Well, the answer is, well, if you don't know the answer, you should listen to that podcast because I'm not going to define it for you here. I will not. But there's many other terms. That are related. They get more and more complicated. So you should definitely stay subscribed. It's when some of our most popular stuff that we do here on AI today, and as I mentioned for the last 5 years and 300 plus episodes, but one of the other things that we have been doing that has been really popular is having some fantastic interviews with people who are putting AI into practice today. And also, our community of people who have been certified on the cognitive project management for AI methodology, CPA, which is the leading methodology and practice for implementing AI and advanced data analytics projects. There's nothing like it out there, things have not been evolving. On the other side. So if you're familiar with crisp DM, it's a great place to start, but it hasn't been evolved for 20 years. So we have something for you, but I'm not going to dive too much into that because that's actually another whole bunch of podcasts we have recorded on those subjects as well. So I'll let you do that. But yeah, we are thrilled to have with us here on this podcast. We're going to be interviewing someone to continue on that. Great series that we talked about. Exactly. We have been, you know, it's one thing for us to talk about CPM AI, but we really wanted our community to share it with our podcast listeners as well. So you may have noticed we've had some previous interviews with folks that are CPM AI certified, and so today we are continuing with that in interviewing somebody else at CPM AI certified so he can share his experiences. I'm really excited to have with us Andre barkawi, who is a project management independent consultant, and he's also CPM AI certified. So welcome, Andre, and thanks so much for joining us today. Thank you. Thank you, Kelly. Thank you, Ron. Thanks for having me here. Perfect. So we'd like to start by having you introduce yourself to our listeners and tell them a little bit about your background and your current role.

A highlight from Copilot lawsuits & Galactica "science"

Practical AI: Machine Learning & Data Science

05:03 min | 1 d ago

"I'm Daniel whitenack. I'm a data scientist with SIL international. I'm joined as always by my co host, Chris Benson, who is a tech strategist with Lockheed Martin. How are you doing, Chris? I'm doing fine as we are recording this episode of the day before Thanksgiving. Yes, U.S. Thanksgiving is tomorrow. That's right. And I know that we both have our day jobs and we just have nothing to do today, do we? We just, there's not much going on right if only if only we were talking beforehand and both of us are like, oh gosh, it's quite a busy day for the day before Thanksgiving. But you know what? We have a few minutes to talk about some fun stuff here. Yeah, exactly. I hope you got your tofurky or whatever you've got ready for tomorrow. I don't know what we'll have, but absolutely got myself some vegan bird here. Nice, nice. I like it. I like it. So I'm going to maybe start with a story Chris because this is kind of what prompted some of my thoughts around this episode is so I live downtown in the town where we live here and there's a barber, a couple blocks away. I go and get my hair cut from this barber. And he's big into crypto, like when NFTs was really hot, he was like porn like thousands of thousands of dollars into NFTs and he's got like all this stuff he's doing. Anyway, he lost a bunch of money with NFTs. But then the last time I went to go get my hair cut, we were talking about this recent controversy around FTX. And just sort of disclaimer, we're not going to be talking about crypto or Bitcoin. This episode or blockchain. But it sort of prompted my thinking because basically for those that aren't aware recently, there is this crypto exchange FTX, the founder, owner, Sam bankman fried, basically he was a kind of industry later well respected, but he's kind of turned into industry villain lost most of his fortune and bankrupted a bunch of things like $32 billion plunge in value of this FTX exchange. And I was talking to a couple of people interested in this and like my barber, who maybe I don't know how much he is an expert, but thinking about how this is a major setback to those that are kind of promoting blockchain technology, crypto, currencies, crypto, whatever. And it got me thinking what sort of controversy or event could prove to be a major setback to the AI industry. Or as such, or is such a setback possible. So that's my first question to discuss on our day before Thanksgiving. I guess we came first give thanks that such an event maybe hasn't happened, although maybe smaller controversies have happened or yeah, although before we kind of move fully over to the AI side from the crypto side, I happen to be staring at Sam bankman freed's Wikipedia page and I'm looking at his hair and as you mentioned, the barber and stuff, there's got to be a joke there. Yeah, there's got to be a joke there. So moving back over to AI, well, I kind of feel like you've set me up because, you know, you're like, what could possibly go wrong with AI? And, you know, that would be a major setback to the industry. So not just like a bad thing. So there certainly, I think we can both say there's been bad things happen with AI, no doubt, right? Absolutely. I think it would be the degree of badness potentially on a scale of bad things. What's the scale of badness zero to ten? What's at the ten? Well, ten is that you have significant loss of life that's caused by AI inference and that would in specifically because I work in the industry I work in, I'm going to say unintentional loss of life by that. 
I'm not saying that there's I should be careful. We don't have AI that I'm just saying in the future sometime as things develop. I'm having to put in all the careful things that, yes, if there was AI in some industry and it resulted somehow an unintentional loss of life, then that would be a very bad thing. Right. So like if all the airlines started flying autonomously and there was an airliner that was flying autonomously and had significant loss of life or something like that, right? Indeed. And when you really think about it, that is something that people are already talking about for the future is AI running various types of vehicles. Some of which are on the ground, some of which are in the air. And there may be there may be instances of that out there in the world. So yes, an airliner would be a big thing. I have to say, as we're talking about this kind of scenario though, you know, like, totally recognizing the tragedy of that.

A highlight from Using Data to Untangle the Sticky Problems of Manufacturing Procurement - with David Schultz of Westfall

AI in Business

05:05 min | 2 d ago

"Inventory management has come up in manufacturing, so many other use cases, procurement specifically, not exactly the hottest topic we've covered, but it's definitely an area where there's a lot of room for improvement. There's a lot of clunky guessing games and procurement and they are extremely costly if we get them wrong, whether we're ordering too much or too little of something or overpaying or taking too long to get something, all of these have downstream consequences in the manufacturing domain. Our guest this week is an expert in this space. David Schultz is the VP and chief supply chain executive at westfall, westfall is a manufacturing firm based in Las Vegas, Nevada. David studied chemical engineering before getting his MBA at Bentley. Westfall is a contract manufacturer. They do a lot of different things, but they work a lot in plastics and resins. David himself has studied chemical engineering before getting his MBA at Bentley university and then serving a number of leadership roles in the supply chain. Today, we break up this interview into two sections. The first of which is articulating what the specific challenges are in procurement in manufacturing. Why is this as consequential as it is? And what kind of rules of thumb guessing games do we have to play today and manufacturing to make business decisions? We have to guess how much our customers are going to do business with us. We have to guess which of them are being overly optimistic about the orders that they say they're going to do this year, which of them we think are being a little bit more truthful or have a better understanding of reality. We have to factor all of that in to how much we're going to spend for parts and materials for our manufacturing operations. The second part of the interview, we focus on where data and artificial intelligence fit into the mix, westfall is a client of orchestral, orchestral is the sponsor of this series. So we previously had an episode with Edmund Zachary, who's the CEO of orchestral. his perspective on the kind of data that is becoming increasingly important in manufacturing when it comes to decision making, and also where AI is fitting into the mix to be able to help make smarter, faster decisions. There's a little bit of talk at the end about the future. You can stick around to the end of the episode for that. Again, this episode is brought to you by our kestrel. Without further ado, let's fly right in. This is David Schultz with westfall. Here on the AI and business podcast. So David, welcome to the program. Yes, thank you very much, Dan. Thanks for having me. Glad to have you here. We're diving in on manufacturing and David over the years we've covered so many use cases in manufacturing from inventory prediction to predictive maintenance, et cetera haven't focused that much on procurement, but that's the topic of our interview today. Before we get into where AI and data come to life, I want to get an insider's look at some of these big challenges of manufacturing procurement, ordering parts, dealing with inventory, et cetera, and kind of tee up for the folks at home. What makes this such a hard problem? Could you help us out with that? Sure, yeah, be happy to do so. You know, the whole supply chain environment has really risen to a different level. Obviously, through the pandemic, people hear the word supply chain and they understand maybe what it means now or at least they're exposed to that. It starts and ends really really, the customer, right? 
So really what it comes down to is, you know, what kind of forecast, what kind of demand predictability can you get on that end? And then really cascading that all the way back through the operation that goes all the way back through to your suppliers so that you can take that demand and satisfy that with the parts and the operation that you bring in. Historically, that's talked about as S and OP in the industry, sales and operations planning. So it truly does encompass all the way from your customer, your commercial side of the business, all the way through to the manufacturing side. Got it. And I can imagine this has been a clunky and complicated process for as long as it's been around because if I know anything about customers, you can't necessarily predict everything they're going to do, everything they're going to want all the time. There's likely some best practices that you folks have to operate with today or that the industry has to operate with today about looking at historical kind of forecasting kind of quarter over quarter a month over month, looking at maybe the activity of different customers and estimating, okay, based on what they ordered last year, what do we think they're going to order this year? What are some of the factors that go into these, you know, guesstimates hate to say it? What are the factors that go into these guesstimates today that allow manufacturing to operate? Well, I think Dan, you said that perfectly. They are guesstimates. And the day that you issue a forecast, it's wrong. But I think what you have to make sure that you do, as you mentioned, is, is it 90 plus 95% of the way there that's going to get you to your end goal. And basically what you're looking at is historical, as you mentioned. But I think what's made it difficult, you know, in the last 18 months or so, is people talk about in many ways, you know, what are we going to get back to normal? There is no back to normal ever in my opinion. It's the next normal. And I think what you have to realize is when you look at historicals, there's a lot of noise in the data over the last 18 months, let's say. And what I mean by noise is, for instance, we're in the contract manufacturing business, which means that we
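
To make the "forecast from historicals, but discount the noisy recent window" idea concrete, here is a minimal sketch in Python. It is not Westfall's or Arkestro's method; the exponential weighting, the `noisy_tail` window, and the numbers are all illustrative assumptions.

```python
# Toy demand "guesstimate" from historicals: an exponentially weighted average
# that can down-weight an anomalous recent window (e.g., pandemic-era noise).
# Illustrative only -- not any vendor's actual forecasting method.

def forecast_next_period(history, alpha=0.4, noisy_tail=0, noise_discount=0.5):
    """history: past demand values, oldest first; returns a one-step forecast."""
    n = len(history)
    weights = []
    for i in range(n):
        w = alpha * (1 - alpha) ** (n - 1 - i)  # heavier weight on recent periods
        if noisy_tail and i >= n - noisy_tail:
            w *= noise_discount                  # trust the noisy recent window less
        weights.append(w)
    return sum(w * x for w, x in zip(weights, history)) / sum(weights)

quarterly_orders = [120, 130, 125, 140, 90, 210]   # last two quarters are "noise"
print(forecast_next_period(quarterly_orders, noisy_tail=2))
```

The point of the sketch is the trade-off Schultz describes: recent data matters most, but when the recent past is unrepresentative, a naive recency-weighted forecast amplifies the noise unless it is explicitly discounted.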

A highlight from BI 154 Anne Collins: Learning with Working Memory

Brain Inspired

05:59 min | 2 d ago

"Learning has been one of the greatest success stories tying together brains behavior and artificial intelligence. Long ago now, reinforcement learning algorithms that were developed in computer science were imported into neuroscience to account for the brain activity associated with how we learn. Since then, a wide variety of algorithms and computations underlying various forms of reinforcement learning have been explored, along with the neural substrates possibly implementing those algorithms. However, our brains are highly complex entities, and as we've discovered more about learning, the story has become more complicated. It isn't clear how and when various brain activities map onto various particular equations used to describe how we learn. And people like on Collins, my guests today are showing that reinforcement learning isn't the only game in town in terms of how our brains learn. On as a professor at the University of California, Berkeley, where she runs her computational cognitive neuroscience lab. One of the things that she's been working on for years now is how our working memory plays a role in learning as well. And specifically, how working memory and reinforcement learning interact. To affect how we learn, depending on the nature of what we're trying to learn. So in this episode, we talk about that interaction specifically. We also discuss more broadly how segregated and or how overlapping and interacting many of our cognitive functions are, and what that implies about our natural tendency to think in dichotomies, like model free versus model based reinforcement learning, system one versus system two, and so on. And we dive into plenty other subjects, like how to possibly incorporate these ideas into artificial systems. You can learn more about on and the show notes that brain inspired dot co slash podcast slash 154. Thanks to the brain inspired supporters, you people are the best, and it's just so generous of you to take the trouble to send a few bucks my way each month to help me make this podcast. And I always look forward to our live discussions and our interactions. Thank you. All right, here's on. And I know that you're not at SFN right now, the annual neuroscience meeting. And in fact, this our discussion here is, I think, over a year in the making, because I'd asked you so long ago, but you had decided to go and procreate. Apparently for the third time, and you were telling me that that's why you're not at this annual neuroscience meeting. So, but I thought maybe that was your first child, so I was going to ask you, you know, how motherhood was treating your career and otherwise. But you have three. Yeah. Yeah, I have three. There are 5 and a half and a half and 6 month old now. I'm not going to lie, motherhood is rough with a career, especially if your partner has a carrier too. Actually, with my first child, my husband wasn't quite working full time. Yeah. And so we were able to travel and go to lots of conferences and stuff like that. Which makes for some really interesting memories of being an SFN with a baby in the pouch and stuff like that. But yeah, I think the combination of having the other tool having full-time career and just having lost the habit of traveling with COVID too has really made it much harder this year. Are you done? Are you going to keep going? I stopped it too. And I have a surgery to prove it. That's a bit too much detail. Okay. I'm one of 6 children. So people feel like they can ask me these questions. I'm actually a 5th of 6 children. 
But no, I don't think so. I think it's already pretty, pretty hard enough. At this point, and you know, I have three girls, they're very lovely, but they're also a handful. Yeah. All right, well, I'm glad that we're finally doing this. So I appreciate you. Finally coming on to the podcast. It was a lot of emails back and forth in the making. So thanks for the persistent solution. Yeah, I am persistent. So we're going to talk a lot about today about your work relating, reinforcement learning in the brain to working memory. And hopefully we'll talk a little about a little bit about attention as well. But I wanted to start by asking you since you have worked a lot on the interactions between working memory and reinforcement learning. I wanted to start by asking you just how you feel how you would describe your outlook or your conception of learning and reinforcement learning has changed or been shaped throughout your career. Can you describe that sort of projection? Yeah, so, you know, I thought about it since you kindly sent me a question to prepare a little bit. And that's the question I had the hardest time way of actually, because I got into this field not traditional way, not that I think many people. There's no traditional connection. I think in France, it's maybe at least when I was there. It's maybe even less traditional. You know, there was no undergrad anything close to cognitive science. I discovered cognitive science as part of my breath requirements in engineering school. Alongside with painting and music and stuff like that, so it was really, I was in a very stem oriented undergrad and.
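
For listeners who want a more concrete picture of the interaction discussed here, below is a heavily simplified toy sketch of an RL-plus-working-memory mixture: a slow, incremental Q-learning update combined with a fast, one-shot but decaying memory store, where reliance on working memory shrinks as the number of items to learn exceeds its capacity. This is only loosely inspired by the kind of models Collins works with; every parameter name and value here is an illustrative assumption, not her published model.

```python
import random

# Toy mixture of slow RL and fast-but-decaying working memory (WM).
# Loosely inspired by RL+WM-style accounts; all parameters are illustrative.

n_stimuli, n_actions = 6, 3
capacity, decay, alpha = 3, 0.1, 0.15        # WM capacity, WM decay, RL learning rate
w = min(1.0, capacity / n_stimuli)           # rely on WM less when set size > capacity

Q  = {(s, a): 1 / n_actions for s in range(n_stimuli) for a in range(n_actions)}
WM = {(s, a): 1 / n_actions for s in range(n_stimuli) for a in range(n_actions)}

def choose(stimulus):
    def greedy(table):
        return max(range(n_actions), key=lambda a: table[(stimulus, a)])
    # With probability w act from working memory, otherwise from learned Q-values.
    return greedy(WM) if random.random() < w else greedy(Q)

def update(stimulus, action, reward):
    Q[(stimulus, action)] += alpha * (reward - Q[(stimulus, action)])  # slow RL
    for key in WM:                            # WM decays toward uniform...
        WM[key] += decay * (1 / n_actions - WM[key])
    WM[(stimulus, action)] = reward           # ...but stores the last outcome one-shot

s = random.randrange(n_stimuli)
update(s, choose(s), reward=1.0)
```

The qualitative behavior to notice: with few stimuli, the one-shot WM store dominates and learning looks fast; with many stimuli, capacity is exceeded, the mixture leans on incremental RL, and learning looks slower, which is the kind of set-size signature discussed in the episode.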

A highlight from Your Mouse Reveals Your Gender and Age

Data Skeptic

05:56 min | 2 d ago

"Interview. I'm Louis leiber and I'm currently assistant Professor of computer science in the university of Luxembourg, in Europe. So I basically work at the intersection of machine learning and human computer interaction. Local quickly, I feel which is called computational interaction where I basically build computational models to try to explain or predict user behavior. So typically we are either adapt or create new machine learning models to account for any behavioral traits about the user. Let's say how they pay attention to these plays or how they move the mouse or how they use the eye movements to fixate on something kind of things. So if you build a good model of a user that means it's sort of representative of and predictive of the user's behavior, you're not trying to replace the user. What's the use of the model? So the main application of user modeling is simulation. Imagine that you need to recruit representative user sample for measuring whatever. Then you have to spend time designing this experiment, recruiting people reaching out to them, allocated some budget for paying them and so on. So instead of recruiting, let's say, 1000 people for that and you can regret maybe ten, run some statistical tests, do some machine learning on data that you can collect from them. And then you can create a user model that you can try to then to infer. Some trades or behavioral trades that could be extended to a larger user group. Of course, I mean, it's just an example, typically you would need fairly large representative dataset if you run. If you want to really do some production level machine learning code. But for research purposes, sometimes even less than 100 user is really more like enough to start drawing compressions about your user population. Yeah, it just depends on the effect size you're trying to measure. With that in mind, can you expand on what some of the behaviors are? It depends on the study or the kind of analysis that you want to do. For instance, something that I have been doing for more than a decade is to try to infer how people allocate attention on different screens based on how they move them out. And why this isn't interesting because then you don't need to really install any webcam any eye tracker. So you can use the mouse as a proxy of you should behavior and see depending on how they move how fast they move, how reach out to targets or how they click on something. So it's not only about the click itself, but how the process or what happened before the click what is in interesting and this kind of user models can tell us a lot about, for instance, how the layout or the user interface is designed and how can we change things or what happened if we move, I don't know what button from the top left corner to the bottom right corner, but kind of behavior this will enable or if this will facilitate people find an information quicker, these are things that you can measure for free basically by Lara and running larger scale studies. If you really want to pay attention to how people behave on the wave for instance, then the mouse has been shown to be a good proxy of the eye gaze. I mean, not for every single thing, but for most of the tasks that we do online, the mouse is a very reasonable proxy of user behavior. Well, I'm glad you made the comparison to eye tracking or gaze measurements. 
Obviously, we're going to talk mostly about the mouse today, but in terms of making that compare and contrast for listeners who don't really know much about the scholarship of eye tracking, is that a big thing in how highly is it regarded? I guess I asked because I think of my own eyes and it's not really a precision instrument, right? Just because I look at an ad doesn't mean I'm interested, sometimes I was distracted or it seems like a very noisy dataset. How reliable is eye tracking? Well, I track it is actually one of the earliest measuring instruments to analyze and investigate you should behavior. Typically I'm in New York computer interaction. We are interested in understanding how people pay attention to things or how things are arranged on the screen. And the eye tracking is an essential device in most ACI labs today. And it's not really noisy. Actually, the mouse is way more noisy than the eye. So yeah, I'm for sure we can go through that later. But I can tell you that by looking at how people experience or how people look at the content and how this is a range of the screen is something that people in marketing and in neuroscience has been using for decades. So I would say that it's pretty thunder. And while the use of this in human building interaction context, so I guess that's not all your audience will be familiar with hci when you can compute interaction literature by just to let you know that the eye tracking is a measuring device is really, really popular in NCI. So based on your research and I guess also just things you've read and looked into. Do you think that the mouse approach can be a full proxy if for some reason eye tracking or was too expensive or just not available for my project, but I do have mouse data. Can I get just the same rich amount of information as data center? Is there like a fidelity loss by looking at only the mouse? Yeah, well, it depends on the task. Of course, I mean, I can not give you a 100% accurate answer about that. But for instance, on web search, how people search engines are displaying information, these typical search engine page, this page with time purple snippets that you can click on. This is a highly structured information.
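
As a concrete illustration of "not only the click itself, but what happened before the click," here is a small Python sketch of the kind of trajectory features a mouse-based user model might start from. The feature set and thresholds are my own illustrative choices, not the guest's published pipeline; real studies use far richer descriptors.

```python
import math

# Toy mouse-trajectory features computed from (timestamp_sec, x, y) samples.
# Illustrative only -- thresholds and feature choices are assumptions.

def mouse_features(samples):
    dists, speeds, pauses = [], [], 0
    for (t0, x0, y0), (t1, x1, y1) in zip(samples, samples[1:]):
        d, dt = math.hypot(x1 - x0, y1 - y0), t1 - t0
        dists.append(d)
        if dt > 0:
            speeds.append(d / dt)
        if d < 2 and dt > 0.3:   # barely moving for >300 ms counts as a hover/pause
            pauses += 1
    path = sum(dists)
    direct = math.hypot(samples[-1][1] - samples[0][1], samples[-1][2] - samples[0][2])
    return {
        "path_length": path,
        "mean_speed": sum(speeds) / len(speeds) if speeds else 0.0,
        "straightness": direct / path if path else 1.0,  # 1.0 = perfectly direct
        "pauses": pauses,
    }

track = [(0.00, 10, 10), (0.05, 14, 12), (0.40, 14, 12), (0.45, 30, 25)]
print(mouse_features(track))
```

Features like these would then feed an ordinary classifier predicting traits such as age group or attention allocation, which is the sense in which the mouse acts as a cheap proxy for gaze.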

A highlight from AI Today Podcast: AI Glossary Series: Augmented Intelligence

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

05:04 min | 6 d ago

"Hello and welcome to the AI today podcast. I'm your host, Kathleen mulch. And I'm your host Ronald schmeisser. And thanks again for joining us on the AI today podcast, as you know, we have well over hundreds of episodes and 5 plus years we've been doing this for a long time. And we've never run out of things to say. On AI today. And part of it is because we keep hearing from a lot of you our listeners who are telling us about the need to not only put AI into practice. But even to understand terms, we're still surprised maybe we shouldn't be that people are asking us about some terms that have been around and for artificial intelligence. Even data terms that have been around for decades. So we decided, hey, let's put together a glossary, just a glossary it's visible on our site. We'll link to it in the show notes, but a glossary that goes over all of the major terms and concepts you need to know about artificial intelligence. And even putting that together, there's like hundreds of terms that you really need to know. And that could be a little overwhelming. So part of what we decided to do is put together not just the glossary, but a little podcast series where we can explain each term, but sometimes a couple terms that are related in one podcast so that you could say, oh, I understand what this term means. And that can have a conversation with someone, and I can really, really know what to do. Of course, putting it into practice is a whole other thing, understanding the terminology is one thing knowing what to do with that it's another, that's what our CPA certification and training is all about, but we'll get to that later on in this podcast. Exactly. In our podcast series, we really wanted to just go over some key AI machine learning and big data terms at a high level because as we mentioned, some people just overly complicate these terms for no good reason, and I think that that doesn't help to make people less confused about some of these terms. So we wanted in our glossary and then in our companion glossary podcast series to just go over it at a more high level. Again, these podcasts will be at that high level. If you're interested in digging deeper than we encourage you to take the CPM AI training and certification and CPM AI for our listeners who have been listening to us for a while, know that that's the cognitive project management for AI. Methodology, we are big advocates of doing AI right, including following best practices, which is CPM AI methodology. But again, on today's podcast, we really wanted to just talk about some of those terms. And we will be focusing on the term augmented intelligence. Yeah, so one of the things you might be thinking about with intelligent machines is that we're really thinking about the machine. And what we want the machine to do. But machines are limited. This is, of course, they're not we don't have general intelligence. We talked about that in other podcasts and other things. So we don't have machines that can really truly do everything that humans can. At least even at the ability, even for basic tasks that humans can do. We're just really good at what we do. So the idea with augmented intelligence is instead of thinking about this machine that's going to do things perhaps on its own is, what can we do with an intelligence system if we can work together with the human to make the human better? So in many ways, it's really the machine augmenting the human intelligence. So we're trying to make the human more capable. 
And that's really what we mean by augmented intelligence. If you've heard this term. And the reason I want to do that is because humans are really, really good at some things. Machines are really, really, really good at some things. Let's do the chocolate and peanut butter thing and put these two things together and just make people better at what they do. And that's the whole idea of augmented intelligence. So what are humans really good at not good at machines good at not good at that together? Everything is better at. Yeah. Yeah, because that is important, right? We want to make sure we're taking the best of both. So humans are really great. We have great intuition. We have emotional IQ. We have common sense, and we have creativity. We're really creative beings. We're able to draw pictures, write poetry, sing songs, but we're not good at probabilistic thinking. Also, we're not good at dealing with very large volumes of data. If you've ever looked at spreadsheets, once they start getting past a page, I'm like, oh my goodness, my eyes are glassing over. And also humans just inherently have bias. So those are what we're not good at. But then now let's think about, well, what are computers really good at? Well, they're really good at probabilistic thinking. They're really good at dealing with very large volumes of data and information in a very quick amount of time. And they're also really good at being trained. But machines and computers are not good at intuition. They lack emotional IQ. They lack common sense. They also lack creativity. As we mentioned, they're good at being trained, so they may produce something that seems that it's creative, but they are not creative like humans. And then also machines do have bias. So if you take what's humans are great at, what machines are great at, move it together, then that's the idea of augmented intelligence.

A highlight from Leveling Up Commercial Operations in Pharma - with David Ehrlich of Aktana

AI in Business

04:03 min | Last week

"In this episode, we're focusing on the pharma industry and we're focusing on a unique element of pharma that is the commercial side. So we have all the activities we need to do to develop drugs and push them through clinical trials, and then of course we need to actually get them to patients when you go sell them to our customers, and there's a lot of activity there as well, but it's less talked about when it comes to the intersection with artificial intelligence. Our guest this week is David elrond is the CEO of octane, auton is focused on helping sales and marketing leaders go to market with their products, and there's three topics that we cover in this episode. The first is the business challenges of taking a drug to market. What do sales marketing and product folks have to deal with? What are the complexities that they're buried in? Many of these are going to mesh with some of you in other industries. If you're listening in and you're in banking or you're in retail, I'm sure some of these communication challenges and branding challenges are going to be things you'll resonate with as well. But David goes deep on exactly what this looks like in pharma. Secondly, we talk about where data and AI can fit into that mix to add value. Again, this is a unique use case, very different from a lot of the backend drug development topics that we've covered over the years here on the AI and business podcast. So David explains where data and AI fit into drive the sales and marketing metrics up for pharma firms. And lastly, he shares some insights on AI adoption. I asked David directly what it takes for folks on the commercial side of a life sciences business to prepare for and adopt AI in a way that gives them the highest likelihood of success and he shares some of his insights and common pitfalls and things that they've done. Well, so I hope that you find those insights to be transferable to your sector as well. This interview is brought to you by for more information about reaching emerges global audience stay tuned to the outro of this episode, but without further ado, let's fly right in. This is David elric with akana. They're in the AI and business podcast. So David, welcome to the show. Thanks, Dan. It's great to be here. I'm glad to have you with us. We talk a lot about AI in the domain of pharma, but not so much in commercial. And this is sort of where you guys play. Some of our listeners are very much in your industry, some aren't. Maybe we could kind of define commercial and what workflows exist under there. And then head right into the particular workflows that you guys operate in that AI might help with. But if we could start with the definition, I think that would be helpful. Sure. So I mean, the way to think about most of life science companies is there is a research and development side of the company that figures out what kinds of products the market would want that's consistent with the kind of impact that they want to have in the world. They go and attempt to build those products and develop them and release them to market. So the way to think about commercial gaming is every big life science company is going to have two sides to them. The first side is around research, development, manufacturing, it's around figuring out what drugs or what product the market needs that's consistent with the kind of impact they want to have in the world. They go, they build that product. They invent the product, they invent the medicine, they do all the development. 
They get it approved by the FDA, and then they hand it over to the commercial side of the business. The commercial side of the business is really around how do we market this new brand this new treatment for a certain disease? How do we explain to the world that we have this now? What it does, how it operates, then there's distribution and their sales, and there's fulfillment. So it's everything around that side of the business. Got to include it. Just fulfillment get bundled under commercial in theory here. Okay, okay, got it. I was unaware. So I know the sales and marketing side, but wasn't quite certain fulfillment sort of had its own bucket. Yeah, you can basically think of it as building and getting a product approved and everything else.

A highlight from AI Today Podcast: Interview with Galen Low, host of The Digital Project Manager podcast

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

02:57 min | Last week

"The AI today podcast produced by cognitive cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence. Learn about emerging AI trends, technologies, and use cases from cognitive analysts and guest experts. Hello and welcome to the AI today podcast. I'm your host, Kathleen mulch. And I'm your host Donald mills, and thanks again for joining us on AI today, been going strong here for 5 plus years. Hitting our 300th episode pretty soon. Pretty soon. I keep talking about 300 and actually technically not there yet, but we record a lot of podcasts. So in my mind, we're already at past 300 episodes. But we really have really enjoyed much of your feedback. Many of you really enjoyed our educational focus, podcasts, needless to say. There will be many, many more. You might have also noticed that the frequency of our podcasts have been going up. For a long first 5 years, we were regular on Wednesday every week, a podcast. Now we're Wednesday and Friday. Well, I don't want to commit to the days of the week, but basically we're twice a week now. Because we just have so much. You might think after all these years, we didn't have much more to say on AI. It's the opposite. We just have too much to say. And we just can't cram it all in and bust your heads. Of course, this is the first time listening to the AI today podcast. Then you should know, we got lots of stuff, including interviews with some amazing people who are involved in either making AI work today or people you should be listening to because they will help you make AI work for you today. Exactly. And so we always love the opportunity to have interviews, especially with fellow podcasters so that they can share their insights. And one thing that we've noticed and you may be noticing as well with our interviews is that we're really starting to see a cross section between project management and AI. And so we said, let's get some project managers and some project manager podcasts on here to help share with our audience their insights and maybe some of the challenges that they face and the opportunities that they see when it comes to project management and also AI as well. So we're really excited to have with us today Galen Lowe, who's the host of the digital project manager podcast and cofounder of the digital project manager. So welcome and thanks so much for joining us. Oh, thanks for having me here. This is such an honor. I'm really excited to get into it. AI and project management, BFFs, completely. Well, perfect. Well, we'd love to start by having you introduce yourself to our listeners. Tell them a little bit about your background and why you started the podcast as well as the digital project manager. Fantastic. Yeah, so again, I'm Galen Lowe. I'm one of the cofounders of a little professional community called the digital project manager. Myself, I've been working in client services

A highlight from Making "AI Ethics" Productive - with Beena Ammanath of Deloitte

AI in Business

04:41 min | Last week

"You're listening to the AI and business podcast. And this is not going to be an episode of holier than thou. Many times the topic of AI ethics is little more than a conversation of holier than thou. The way that I define unproductive AI ethics is essentially simply the exercise of shooting down AI ideas as being detrimental. Conjuring up some potential risk, potentially something that's very politically prickly and saying, oh, that might cause that or that might cause that. There's certainly many risks with AI, but when ethics, quote unquote, steps in without being able to solve those problems. In other words, integrate values integrate law and also get the job done for the customers or the company, I consider it unproductive, and I consider it a sort of holier than the game that I don't consider worth covering on the podcast. So I don't we had a good episode about AI ethics with the, at the time, global head of AI at IBM Seth dobrin, about a year ago, and that was an awfully good episode talking about the productive side of AI ethics. Today we double down on that theme with a guest who is not only author of a book called trustworthy AI, but is also the executive director for the global Deloitte AI institute. Bina am enough has also held leadership positions in AI and data at Hewlett Packard Enterprise, Bank of America, General Electric, kind of a who's who of global enterprise firms, and now she's with Deloitte. She speaks with us this week about putting AI ethics in action in ways that are conducive to innovation. In ways that genuinely will serve to solve business goals and customer problems. And there's two really important points I think are worth noting down for those of you who are tuned in who are leading AI projects or maybe your consultants who are helping your clients lead AI projects. There is a process here for sort of being able to screen out potential downsides and thinking through those upfront, which I think can be a potential benefit of applying AI ethics properly, being it has some excellent ideas there. And then secondly, she talks about who needs to be in the room to have a realistic AI ethics conversation. This is a team sport. As any of you who've been here for long enough are well aware and being a talks about the different kinds of expertise that have to come together to understand squarely the ethical and legal concerns of AI applications, but also how they can interact, how these folks need to level up their own knowledge and how they need to bounce that knowledge off of each other to genuinely screen applications and determine the best place to put our company resources for the sake of our customers and Some of these ideas hopefully many of you will be able to turn around and apply in your own business and that's certainly what we're shooting for in this episode. So I'm grateful to bena for being able to be with us. And without further ado, it's fine. This episode. Phenomenon of Deloitte, you're in the AI in business podcast. So Bina, I know you have a lot of these conversations with leadership around AI ethics, and there's a lot to get into with the meat and potatoes today, but I think we should define the terms that we've certainly heard a lot of different definitions of what comes to mind for AI ethics. When you're explaining this to the C suite to the boardroom, how do you put it in a nutshell? So there is a notion that AI ethics is all about transparency and removing bias and making it more fair. 
Those are catchy headlines, but in my experience working across different industries, fairness, bias transparency, are all crucial, but there are other factors. If you have an algorithm predicting a manufacturing machine failure, for example, fairness doesn't really come into play, but security and safety are both key issues. So let me take a step back and tell you why I like to think about it as trust and ethics in AI because for me, trust include the ethics, but it also includes policy and compliance, which is what leaders need to be aware of in the context of ethics. So trustworthy and capacitors, everything you can think of related to the potential negative consequences of AI. That's how I think about ethics. Yeah, kind of not putting it simply in the bounding box of transparency and bias as like buzzwords. Yeah. Yeah, got it. And in terms of where it fits in, I'm sure some folks that you talk to, I know for our listeners, this is often the case. When they hear about AI ethics, it's often sort of just well. You know, you want to be careful. Your algorithms could make for a really bad PR event. You know, sometimes it's physical danger, right? But as you and I both know, certainly, if you're running a manufacturing plant with heavy equipment or you're making self-driving cars or you're diagnosing cancer, we got real real issues here.

A highlight from Natalie Monbiot from Hour One on New Virtual Human Use Cases - Voicebot Podcast Ep 285

The Voicebot Podcast

01:33 min | Last week

"Hey guys, I can't believe it's been nearly 5 years since I quit posting regular videos to YouTube. And I feel like an entirely different person. The ironic part is, I'm actually not a person at all. At least not a real person anyway. I am a creation of artificial intelligence. That's right. I don't exist in the three B world. I exist in pixels. 1280 by 7 20 to be specific. With the permission from the real tear in southern AI Taryn can speak different languages. And have a different faces. Ages, genders, AI Taryn can sing. Type two wake up brush my teeth go to work, time to come home and count likes as my worth. She can even relaunch her YouTube channel by creating new videos without the real terran having to shower or leave her bed. And perhaps most importantly, the real terran can now focus her time and energy on solving more existential problems. Problems like, what does it mean to have an AI twin? Am I creepier intriguing? Do AI humans like marshmallows? I believe we do. Anyway, it's really nice to meet you guys. I will see you all in the matrix. Or somewhere. Good stuff.

A highlight from Measuring Web Search Behavior

Data Skeptic

08:05 min | Last week

"Whenever I look over somebody's shoulder, always with permission and watch what they're doing on the Internet. Invariably, their behaviors a little different than mine. I'm more likely to open information in a new tab rather than the current tab. Someone close to me is constantly selecting things, but not to copy and paste just to highlight, which I find odd. We all use our machines slightly differently and we definitely use our browsers and search engines in slightly different ways each of us. I consider myself an above average Internet user in terms of my technical merits and how much I use the Internet, but you know, more than 50% of people also think they're above average. So who knows? Well, my two guests today probably know. They had access to a large dataset we'll talk about how they got it. A combination of web tracking and later survey data. When they blend these two data sources, a number of insights are available about the different ways different demographic groups use search engines. We'll get into those details and more on today's interview. So my name is Alexandra orman or Sasha, is also the kind of nickname I go by. And I'm a postdoctoral researcher at the university of Zürich in Switzerland. I work with social computing group that's department of informatics, though my background generally is in social sciences. So I'm kind of working my work generally in between social science and computer science and my primary research areas. I would say currently is research on web search on the HCI aspects of that. So this is like algorithmic bias in web search, but also how users interact with it. And another stream of my research is political communication on social media platforms. Broadly defined, so to say. Yes, my name is Michael Omaha. I am an Alfred lander lecturer at the university of bay, specifically working at the institute of media and communication status environment. One of my central projects that they are working great now deals with the impact of the algorithmic systems such as web search engines or recommendation systems on the Holocaust membranes. But I also definitely having a bunch of interests. So which deal order a broadly with information table systems, as well as their potential bias, as well as the implications for the public sphere, especially information behavioral in relation to the politics, but also a historical information. Well, how did your collaboration come together? Well, I was actually doing PhD works now at the institute, but we didn't meet there. We met before, Michael joined, I think, a year or so, right before when I was still doing my PhD in mikula, was already a postdoc. And Amsterdam at that time, if I recall correctly and we met basically at a conference. And we talked about the paper that we could potentially collaborate on based on, I think, what we both were presenting and then we just kind of started collaborating remotely via email. And then Michael joined the institute because there was a position open that was pretty fitting. I would say the main paper I invited you guys on to discuss is the you are how and where you search comparative analysis of web search behavior using web tracking data. So caught my attention right away, but neither of you work at Google who has most of the web tracking data. How do you get started on a project like this? So essentially, we're both at the time working. 
I was still in Bern, and we were working in that web tracking project more generally, where essentially it's a project joint between Germany and Switzerland. The goal of which was to collect browsing data overall, not focused on web search specifically just browsing data from the users who agreed to participate in it so they basically installed a plugin that would record all of their browsing accept a dedicated block list, which was a list of sensitive websites like we didn't record anything on their visits to banking or to insurances, to adult websites and things like that. So everything else was recorded unless they would press a button and say, don't record me for the next 15 minutes, and they could press it as much as they wanted to. So we had this data collected for different projects that just deals more generally with people's information consumption online and news consumption online, and since we're both more interested in web search, as we do a lot of work on web search bias, so not focusing on the users, but focusing on the search engines themselves. This we saw is like an opportunity to just use this data to look at the other side that we didn't explore before. The user side about how users actually search because previous studies were mostly based on either eye tracking data, so there's a small lab studies where people come, there is a night tracker, and the researchers look at what people look at on web search. Pages which is cool to see in more detail what people do, but also the kind of ecological validity of these studies is a little bit lower kind of not too high. And another stream of research was historically log based studies. For example, when Google or studies that were done in the early 2000s, there were some other search engines, not Google who would researchers from this companies or they would give access to some academic researchers just based on all the transaction logs from one search engine, basically check what people click on, what do they do? And we have this drove of data that allowed us to look at scale and of in real life at multiple search engines at the same time. So bringing together the benefits, so to say of these two previous methodologies. You'd mentioned being able to collect some of that data from the chrome plugin, could you expand a bit on how you got people involved in that study? It was basically the paper is one of the outputs of a lightroom project, which was done by the two universities, the university of Berlin and the team led by silke atom and the university of koblenz Landau in the team led by Michelle Meyer, and the idea was to basically representative sample of German and Swiss citizens, and then also invite them to share their data using the plugin system. So pretty much we collaborated with a market research company, which has samples of online panels coming from the two countries, so Germany and the Swiss arm and pretty much we indeed ask them to recruit a sample of participants. And then each participant was basically asked to express Isaac his consent or the lack of concern to be tracked and naturally quite a number of people didn't agree to participate, which was expected because it's quite first of all quite noble way of researching information behavior. But second, it's also still, I would say white gravis a sensitive way to inclusive way of actually starting people's behavior. 
But in the end, we actually received quite a number of four participants who are agreed, participate both in the tracking component and then also in the survey component. And basically this group of people was I get groups that we actually worked with since the project and based on whose data we actually load this paper. And what does that dataset turn into at the end of the day? Do you have a list of URLs or something richer? This dataset is richer, so essentially the plugin was developed within this research project, and it records also the snapshots of HTML pages essentially as a user sees them. It's not just the set of URLs you also have all the HTML, even though they're naturally quite messy. If you've worked with a lot of data, it's quite difficult to extract stuff, but with web search specifically, it's easier in a way because we have a limited number of
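
Because the plugin saved full HTML snapshots rather than bare URLs, extracting the search results becomes an HTML-parsing exercise. Here is a minimal sketch of that step using BeautifulSoup; the CSS selector and the filename are placeholders of my own, since every search engine (and every redesign) needs its own engine-specific rules, and this is not the project's actual extraction code.

```python
from bs4 import BeautifulSoup

# Minimal sketch: pull organic results out of a saved search-result snapshot.
# The selector below is a placeholder -- real SERPs need engine-specific rules.

def extract_results(html_snapshot: str):
    soup = BeautifulSoup(html_snapshot, "html.parser")
    results = []
    for link in soup.select("div.result a"):   # placeholder selector
        href = link.get("href")
        title = link.get_text(strip=True)
        if href and title:
            results.append({"title": title, "url": href})
    return results

# Hypothetical snapshot file name, for illustration.
with open("serp_snapshot.html", encoding="utf-8") as f:
    for r in extract_results(f.read()):
        print(r["url"])
```

The "limited number of" page templates mentioned above is what makes this tractable: a handful of per-engine parsers covers the bulk of the snapshots, which is much easier than parsing arbitrary web pages.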

A highlight from Maaike Coppens on Conversation Design Themes in 2022 - Voicebot Podcast Ep 284

The Voicebot Podcast

00:31 sec | Last week

"Hello there to all my Friends have voice bought nation. This is Brett kinsella, your host of The Voice pop podcast. Every week for 5 years, I've sat down with an engineer, upside down with researchers, entrepreneurs, conversational AI industry leaders, and sometimes those leaders are designers, and that is what I have for you today. It is a long overdue talk with Micah coppins, about voice user experiences, but the timing was ideal. We had the chance to sit down while we were at voice summit 2022 just outside of Washington, D.C., and Micah has a new book on conversation design that was released shortly after we conducted the

A highlight from BI 153 Carolyn Jennings: Attention and the Self

Brain Inspired

07:11 min | Last week

"Popular view of philosophy to seek consciousness as the thing and self as nonexistent in an illusion and what I'm doing is I'm saying the self is a thing and then consciousness is just the way that the self is related to its world. I think that one of the reasons that people will be uncomfortable with it who have been in consciousness research for a long time is just because it's become so popular to think about consciousness as a thing. So popular that people say things like. This is brain inspired. This is brain inspired. Everyone, I'm Paul. William James, the super influential psychologist and philosopher, famously in 1890, wrote, everyone knows what attention is. That turned out not to be true, instead, like other cognitive functions we give names to, like memory, or consciousness, the more that we study attention, the more subdivided the concept becomes, leading to a taxonomy to describe the varieties of what we collectively call attention, like top down versus bottom up attention, feature based versus spatial, attention, overt versus covert, attention, and so on, and some people even argue that the word attention isn't even useful anymore. And we should abandon it. Carolyn dicey Jennings is a philosopher and a cognitive scientist at the University of California, Merced, and in her book the attending mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the prioritization of some stuff over other stuff. Based on our collective interests. And one of our main claims is that attention is evidence of a real emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting that slow, longer range oscillations in our brains can alter or entrain the more local neural activity. And this is a candidate for mental causation. So we unpack that more in our discussion and how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception. I link to her book and some other relevant articles, and you can learn more about Carolyn in the show notes at Brandon's dot co slash podcast slash 153. On the website you can also sign up to support brain inspired via Patreon for various bells and whistles, like full episodes and joining our Discord community. Thanks as always to my Patreon supporters and thank you for listening, or watching. All right, here's Carolyn. Carolyn, the book is the attending mind, and a right before we were talking here, I was frantically looking it up because of course it has a subtitle, but it has no subtitle. Why no subtitle? Yeah, no, sometimes. I like things to be short and sweet, I guess. They didn't ask you for all books, all science books or philosophy books have subtitles, right? This is important hard hitting interview questions. Yeah. I guess they do often have long subtitles, I'm really inspired by philosophers like Susan Wolfe, who tried to connect more with the public or federici who try to be really clear with their writing and that's a goal of mine and I feel sometimes like the really long subtitles are at odds with that. Okay, well, so the title is very short. And by the way, I like that it has no subtitle, by the way. It's not a criticism. But the book and the book is not long either, but it is dense and thick and has lots of goes down lots of paths, lots of details and stuff. Maybe I'll just start off with a very easy quote here from the book. And then we can unpack it, right? 
Consciousness, because consciousness is the interface between a subject and its world; action is the subject's contribution to that interface; and attention is but one way to get there. So we have a lot to unpack here, perhaps. Like I was saying, the book covers a lot of ground in philosophy and neuroscience and psychology, and there's no way that we're going to get to all the topics discussed in the book. And the book is two years old now, so it's probably old hat for you, and it was based on over a decade of your previous work and thinking. Yeah. Maybe we can start with, you know, we're going to have to unpack many of the ideas in the book. But what I want to start with is just asking you what you feel most sure about in your work, and, I don't know, how your mind has developed and changed since publishing this work. In terms of the ideas that we'll get to, what do you feel most sure about in the book? I feel the most sure about the existence of a self, which I would say is also the strongest claim of the book. So yeah, that's probably it: I bet there is something responsible for attention, and that could be seen as one possible solution to the problem of free will, for example, or agency. I feel confident about that. And in keeping with that, I feel pretty confident about the rejection of a reductionist perspective of the universe: the idea that all causation occurs at one level, that all science occurs at one level, or that we should think of science as ultimately coming back to one level, whatever that is. I feel confident that it's actually really useful to think in terms of multiple levels, and that agency is one of the cases where you can really see that. So that's where I feel confident. A part of the book that I feel less confident about, and haven't continued to work on, would be the stuff about legal theory all the way at the end, which kind of makes sense, because the book sort of starts with the stuff that I'm the most excited about and kind of ends with the stuff that I feel the least confident about, going out in a new direction that I may continue later. But there are also things that I just didn't complete in the book, and so in a way I feel less confident about those things too, though I'm hoping to complete them eventually. Those are things like: where is the boundary between self and world?

Carolyn Carolyn Dicey Jennings William James Susan Wolf Federici Merced University Of California Paul
A highlight from AI Today Podcast: AI Glossary Series: Cognitive Technology

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

04:17 min | Last week

A highlight from AI Today Podcast: AI Glossary Series: Cognitive Technology

"And welcome to the AI today podcast. I'm your host, Kathleen walch, and I'm your host Ronald schmeiser and thanks again for joining us on the AI today podcast as you know. Well over 300 episodes or so, 5 years plus, you know, we've really been going strong and a lot of the reasons why AI today pockets is so popular is because we focus on giving our listeners an understanding of AI, machine learning and big data, and of course how to put those things into practice today, which is what we're all about. Not looking at the research not looking at yesterday, the history or talking about some of the tangential topics. We're really about making AI practical thing, really want you to be successful with it. So as part of it, we've really spent a lot of time going over various aspects of education and our failure series, and most recently our glossary series, where we just highlight the terms so that there's understanding of what these various different things are. So if you hear this term, you hear this word, you may or may not be familiar with, you know what it means. Now, of course, we can only go into it in so much detail when we're doing a glossary, but we've never really did this before and we found this was really very helpful. Give people an understanding and sort of they have a common lexicon, a common terminology by which they can have successful conversations with their colleagues. Exactly. So if you have not done so already, we encourage you to describe, subscribe to the AI today podcast. So you can get notified of all of our future episodes. We will have many in this AI glossary series. But in today's AI glossary series, you know, we wanted to make sure that we were covering key terms related to AI, machine learning, and big data. At a high level, because we've heard from many of our listeners and our audience that some of these terms can get a little confusing, other people's definitions, the way that they describe things just sometimes as overly complex for no good reason. So we wanted to put this in terms that everybody can understand really make this approachable so that people have a better baseline understanding of what these are. If you'd like it more in depth understanding of all of this, then we encourage you to take our CPM AI training and certification, where we go into much greater detail and CPM AI is the cognitive project management for AI. If you've listened to any of our podcasts, you know we are big advocates of doing AI right with best practices, and we are big advocates of CPM AI. But on today's podcast, we really wanted to go over, you know, some of those key terms from our glossary. So today we'll be defining cognitive computing and cognitive technology. Yeah, so you might have heard these terms, maybe you didn't even have heard these terms, but you have obviously heard of artificial intelligence. And I think the challenges is that when you have a conversation with others who may not have the same common understanding of artificial intelligence and as you may have heard from our podcast on artificial intelligence, that's because there is no well established standard definition of artificial intelligence. And we get into this tricky problem of not coming to an agreement or maybe if people don't even like AI. 
So an alternate term that people are using describes aspects of narrow AI, which is something that we have talked about in another podcast and have also defined in the glossary. When you're only using AI for a specific thing, such as some sort of text processing, image recognition, some sort of prediction, or just sentiment analysis, it's hard to really think about that as AI. When you're just trying to say, "are these tweets happy or sad?", is it really AI? Kind of, but not really. So the term that people use to describe the use of machine learning and other AI techniques for things that you would not necessarily think of as artificial intelligence is cognitive computing, or cognitive technology, which is really the range of technologies that we are using as we pursue this goal of artificial intelligence, however we're using them, even if we're not really using them for something AI. If someone's like, "look, I'm just building some classification system," and you ask, "are you doing AI?", they can say, "no, I'm not really doing AI; I'm doing cognitive computing, or cognitive tech." That's okay. It's just terminology that people use so that they don't get stuck in some mindset or mind frame when it comes to artificial intelligence.
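As a concrete illustration of the narrow, single-purpose systems described above, here is a minimal sketch of the "are these tweets happy or sad?" example. It assumes scikit-learn and uses made-up toy data; it answers exactly one question and nothing else, which is why "cognitive technology" can feel like a more honest label than "AI" for it.

```python
# A minimal "narrow AI" sentiment classifier: happy vs. sad tweets.
# Illustrative sketch only; the tweets and labels below are made up.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

tweets = [
    "What a beautiful morning, I love this!",
    "Best concert of my life",
    "Stuck in traffic again, this day is ruined",
    "I can't believe I lost my wallet",
]
labels = ["happy", "happy", "sad", "sad"]  # toy training data

# TF-IDF features plus logistic regression: one task, one output.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(tweets, labels)

print(clf.predict(["I just got a promotion!"]))  # e.g. ['happy']
```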

Kathleen Walch Ronald Schmeiser
A highlight from StrategyQA and Big Bench

Data Skeptic

01:58 min | Last week

A highlight from StrategyQA and Big Bench

"It's been a while since we ran our season on artificial intelligence. And if I were to pat myself on the back I would say it was serendipitous that I planned it at the same time large language models were starting to change the NLP landscape. If I missed the ball on something it was probably running a season on computer vision right now. We've seen a mirroring set of advancements most notably the recent diffusion models, and its astounding to think how far we might go with this more or less identical underlying architecture used for both language and vision. If there's one thing I am confident about, it's that the true test of whether or not something is artificially intelligent. Can only be performed with Alan Turing's imitation game, or some of you know what the Turing test. And despite recent advancements, it seems pretty confident we're still a ways off from an AGI. But between now and then we're going to need bigger and badder challenges to press our machine learning algorithms up against. I mean, people still publish on mnist and ImageNet a bit, but if there's one lesson we've learned, it's more data and more distributed are the two paths to push forward on. So that's why I wanted to take a quick respite from our ad tech season and bring you a story about a collaborative benchmark known as big bench or the beyond the imitation game benchmark. This is a large collection of many different independent tasks. In natural language. He knew it was a major feature of Elmo and Bert and all the other models that have followed since that they're useful in a wide assortment of seemingly independent tasks. Or that the bird embeddings used as features in an MM model will allow you to train something with hundreds of examples where you previously needed maybe hundreds of thousands. Big benches one of the best benchmarks out there as we try and build algorithms that can be as general purpose as possible. So today I speak with returning guest more geva, we talk about strategy QA, her specific contribution to the overall project, as

Alan Turing Elmo Bert
A highlight from Applying Computer Vision and Computer Listening in Manufacturing - with Remi Duquette of Maya HTT

AI in Business

01:55 min | 2 weeks ago

A highlight from Applying Computer Vision and Computer Listening in Manufacturing - with Remi Duquette of Maya HTT

"Business podcast where non technical professionals stay ahead of the AI curve. If you do not want to learn to write python, but you do want to identify high ROI projects and help to steer an AI strategy, you found yourself in the right place. In this episode, we're going to be talking about computer vision, often when people think about computer vision, they imagine what a human being's eyeballs would look at, and they say, okay, what could we use for AI to look at the same things a human would look for? Maybe we're looking for someone moving in surveillance footage. In today's case, we're looking at manufacturing. So maybe we want to examine possible defects and some manufactured product the same way a human being would. Well, as it turns out, machines can look at things in ways that human beings can't. I'm talking about infrared and other kinds of cameras that do not model the human eye, but might be able to pick up on things the human eye can not. In addition, we talk about machine listening that is to say, how can we use audio data to determine what might be going wrong in the manufacturing process. And again, this goes beyond what the human ear can listen to. This episode gives us a lot of different jumping off points into where computer vision and audio might make the difference, including going beyond how humans are currently diagnosing our machines. Our guests this week for this in depth and interesting topic is none other than Remy Duquette. Remy has been with us in the past, Remy leads artificial intelligence at Maya HTT, Maya is an AI services firm. And Remy has previously joined us on episodes about improving throughput and improving quality with AI in manufacturing today. We are focusing on the tools of the job, vision, and listening, and be able to go beyond the human senses, to be able to drive results in predictive maintenance and improving quality in the manufacturing process. This episode is brought to you by Maya HTT to learn more about reaching emerge as global audience for your AI products or services, stay tuned to the end of this episode. Without further ado, let's fly in. So he's fun to talk to Remy

Remy Remy Duquette
Interview With Daniel Kornev Chief Product Officer at DeepPavlov

The Voicebot Podcast

02:07 min | 1 year ago

Interview With Daniel Kornev Chief Product Officer at DeepPavlov

"Daniel gornja. Welcome to the voice. Podcast much brackets and big for me to turn today today. It's my pleasure to have you. This is a long time in the making. We've been i guess chatting on slack for maybe year and a half something. Yeah i think so. I started to read your westport. Insider was fascinated by opportunity to look into your think to on hand Why not took. Yeah that that's that's how it happened. Well the is really perfect. Because we're going to talk about a few things today. Obviously d. Pavlov is a project i've been interested in for at least a year. I don't remember when i first came across it but it might have been might have been. You introduced it to me. Or maybe shortly before that i found out about it but definitely answered that project and then obviously you've been involved recently with the elec surprise social competition. We've had another conversation about that about this. What a perfect time to go a little deeper on that because it is a different way to build bots and so really looking forward to this conversation today. But i'll let you get started. So why don't you tee it up for the The audience right now first and let them know what d- pavlov is before we get deep sure depot is like lab at moscow's physics and technology. That is focused on conversational And neural efforts Officially cool to neural networks in Terrain but Wednesday were standard like full. Five years. ago it's also got to down moniker Because follow fossil famous russian scientists who discover it reflects us in all those things that encouraged scientists researchers to understand how human brace books and we still have a lot of things that we have to uncover. But that's was formed as the name.

Daniel Kornev DeepPavlov Alexa Moscow
Google Develop AI for Detecting Abnormal Chest X-Rays Using Deep Learning

Daily Tech Headlines

02:09 min | 1 year ago

Google Develop AI for Detecting Abnormal Chest X-Rays Using Deep Learning

"On friday we talked about a nature publication by google. Ai scientists that showed how a deep learning system could detect abnormal chest xrays rays with an accuracy. Rivaling that of professional radiologists. The system only detects whether a chess scan is normal or not and is not trained to detect specific conditions. The goal here is to increase productivity and efficiency of radiologists clinical process. Let's examine some a i x ray. Science first of all how to rays work xrays are a type of radiation energy. Wave that can go through. Relatively thick objects without being absorbed or scattered very much. X rays have shorter wavelengths than visible light which makes them invisible to the human eye for medical applications of vacuum x. Ray tube accelerates electrons to collide with a metal and owed and creates rays these rays are then directed towards the intended target like a broken arm for example and then picked up by digital detectors called image plates on the other side differ body tissues absorb x rays differently so the high amount of calcium in bones for example makes them especially efficient at x ray. Absorption and this highly visible on the image detector soft tissues like lungs are slightly lighter but also visible making x ray and efficient method to diagnose pneumonia or pleural a fusion Which is fluid in the lungs. For example according to this latest nature publication approximately eight hundred and thirty seven million chest. Xrays are obtained yearly worldwide. That is a lot of pictures for radiologists to look at and can lead to longer wait times and diagnosis delays. And of course. This is why there's interest in developing ai. Tools to streamline the process many algorithms have already been developed but are rather aimed at detecting specific problems on an x ray. The google ai. Scientists however developed a deep learning system capable of sorting chest xrays into either normal or abnormal data intending. To lighten the case load on radiologists

Google Pneumonia
Generating SQL [Database Queries] From Natural Language With Yanshuai Cao

The TWIML AI Podcast

01:58 min | 1 year ago

Generating SQL [Database Queries] From Natural Language With Yanshuai Cao

"So tell us a little. Bit about touring and the motivation for it. How did the project get started right. So is this natural. Language database interface is a demo of anguish database interface built. And it's really just putting a lot of our word on some parsing space together. In this academic demo so netra language database interface the from application perspective the pin uses to a law a nontechnical users to interact with structured data. Set is there's lots of inside endure and You know who want to give out change for nontechnical users to to get those insights and from a research perspective. It's a very challenging natural english Problem because the underlying problem is you have to parse pasta in english or had our next languish than convert to see cole. And we all know. Natural language is ambiguous machine languages on bigger after resolve all amputate. He yard a too harsh correctly. Furthermore was different from compared to on other program. Language is the mapping. From adams. To see cole is under specified. If you don't know the schema really depend on what is the structure of schema and so he still model has to really learn how to reason using it. And in order to resolve all that may retail and correctly predicted the sequel and lastly this printer model some. You don't want to just work on this domain one. To work on demand is on databases. You're never seen before. So without st cross domain across database part of it and dodgers very challenging. Guess it's completely different. Distribution wants moved to different dimensions even

Yanshuai Cao
Seth Dobrin Talks About Trustworthy AI

Eye On A.I.

01:41 min | 1 year ago

Seth Dobrin Talks About Trustworthy AI

"We're gonna talk about trustworthy a i. It's something that is increasingly in the news and concerns a lot of people. Ibm has a product called fact sheets. Three sixty that i understand is going to be integrated into products. Can you tell us what fact sheets three sixty is. And then we'll get into the science behind. Yes so let me start by laying out what we see is the critical components Trustworthy a at a high level Three things there's a ethics there's govern dated ai and then there's an open and diverse ecosystem an ai ethics is fully aligned with with our ethical principles that we've published with arbin dr ceo co leading the initiative out of the world economic forum. And i'm adviser for essentially open sourcing our perspective on a ethics from a govern data in ai perspective. It falls into five buckets. So i is. Transparency second is explain ability third is robustness. Fourth is privacy and fifth is fairness and so the goal of fact sheets is to span multiple of these components and to provide a level of explain ability. That is needed to drive adoption and ultimately for regulatory compliance. And you think of it as a nutritional label for ai where nutritional labels are designed to help us as consumers of prepackaged foods to understand what are the nutritional components of him. What's healthy for us. What's not healthy for us. Factually is designed to provide a similar level capability for a.

Arvind IBM
Everyone Will Be Able to Clone Their Voice in the Future

The Vergecast

01:49 min | 1 year ago

Everyone Will Be Able to Clone Their Voice in the Future

"World today often feels like it's full of digital voices with a assistant siri amazon alexa and google reading your messages announcing the weather in answering trivia. Here's what i found on the web but if you think things are chatting now just you wait. The voices of these a assistant used to be based unreal recordings. Voice actor spent hours talking in a studio and these clips would-be cut up and rearranged to create synthetic speech but increasingly. These voices are being created using artificial intelligence. This means we can not only create more realistic computer. Voices clone the voices of real people much more quickly creating endless artificial speech at the touch of a button for example it was surprisingly easy to make a synthetic version of my own voice. In case you missed that. That was not me talking. That was all made digitally by typing into a computer. So why would some want to do this. Besides the obvious novelty of it. You might have guessed a reason to make some money. I listen to this was going on. Kevin hart here. I wanna talk to you about why. We have to have mac and cheese every night. Think about it. That's why. I recommend thousands of new shows and this is a promo from baritone one accompany. That's working on an ai product to create synthetic voices and make them something. The media industry wants to us. So we've created a platform. Ai which at the end of the day turns unstructured data into structured data. That's shaun king executive vice president. Ed veritas one. So if you're thinking about audio thinking about video things that are typically unstructured and we make that searchable discoverable author a host of different a cognitive engines that are there from transcription beaker detection speaker separation. And then we provide those tools to you know many different industries that are eating

Amazon Kevin Hart Google Sean King Veritone
Interview With Patrick Bangert of Samsung SDS

AI in Business

02:01 min | 1 year ago

Interview With Patrick Bangert of Samsung SDS

"So patrick i'm glad to be able to have you with us on the program here today and we're gonna be talking. Ai at the edge particularly in the world of medical devices. Which is i know where a lot of your focus is here. We're gonna get into some of the unique challenges of leveraging data and ai at the edge in the medical space. But i want to talk first. About what kinds of products. We're talking about people think medical devices. Okay well medtronic is tracking my blood sugar on the side of my arm and you know. Then i've got a big cat scan machine kicking around over here. What kind of devices does your work involve with. And and his edge relevant From your experience. Thank you for having me on the show pleasure to be here. We are dealing with medical imaging devices. So if you have a smart watch on your wrist. That's not what we deal with. Even though those are very useful of course to measure your exercise and sleep patterns we're dealing with technologies like an ultrasound and mri is not an x ray. And what's called digital pathology which is where a biopsy is removed and put on a microscopic slide. Those kinds of technologies produce images that are relevant to telling you whether you're sick at all hopefully not or if you are what kind of disease it is. And so the job of computer vision in this case is to detect whether is a disease diagnose what it is to find out where it is to find out how big it is advanced in if cancer stage one. Three how advanced it is. And all of these outputs can of course be created. Virtually instantaneously by executing artificial intelligence models at the edge and the edge in this case is the device itself. Yeah okay so. Some devices are huge. Mri scanners take up a whole room. As some devices are quite small ultrasound. Machines view could transport it in your suitcase and so there's obviously also price difference here but nonetheless. All of these technologies do produce an image that that is then analyzed by

Medtronic Patrick Cancer