Listen to the latest news, expert opinions and analyses on the ever-expanding world of artificial intelligence, data science and machine learning, broadcast on leading talk radio shows and premium podcasts.
A highlight from Using Data to Untangle the Sticky Problems of Manufacturing Procurement - with David Schultz of Westfall
"Inventory management has come up in manufacturing, as have so many other use cases. Procurement specifically is not exactly the hottest topic we've covered, but it's definitely an area where there's a lot of room for improvement. There are a lot of clunky guessing games in procurement, and they are extremely costly if we get them wrong, whether we're ordering too much or too little of something, or overpaying, or taking too long to get something; all of these have downstream consequences in the manufacturing domain. Our guest this week is an expert in this space. David Schultz is the VP and chief supply chain executive at Westfall, a contract manufacturing firm based in Las Vegas, Nevada. They do a lot of different things, but they work a lot in plastics and resins. David studied chemical engineering before getting his MBA at Bentley University and then serving in a number of leadership roles in the supply chain. Today, we break this interview into two sections. The first is articulating what the specific challenges are in procurement in manufacturing. Why is this as consequential as it is? And what kinds of rules-of-thumb guessing games do we have to play today in manufacturing to make business decisions? We have to guess how much our customers are going to do business with us. We have to guess which of them are being overly optimistic about the orders that they say they're going to do this year, and which of them we think are being a little bit more truthful or have a better understanding of reality. We have to factor all of that into how much we're going to spend on parts and materials for our manufacturing operations. In the second part of the interview, we focus on where data and artificial intelligence fit into the mix. Westfall is a client of Arkestro, and Arkestro is the sponsor of this series. 
So we previously had an episode with Edmund Zagorin, who's the CEO of Arkestro. In this episode, David shares his perspective on the kind of data that is becoming increasingly important in manufacturing when it comes to decision making, and also where AI is fitting into the mix to help make smarter, faster decisions. There's a little bit of talk at the end about the future; you can stick around to the end of the episode for that. Again, this episode is brought to you by Arkestro. Without further ado, let's fly right in. This is David Schultz with Westfall, here on the AI in Business podcast. So David, welcome to the program. Yes, thank you very much, Dan. Thanks for having me. Glad to have you here. We're diving in on manufacturing, and David, over the years we've covered so many use cases in manufacturing, from inventory prediction to predictive maintenance, et cetera, but we haven't focused that much on procurement, and that's the topic of our interview today. Before we get into where AI and data come to life, I want to get an insider's look at some of these big challenges of manufacturing procurement: ordering parts, dealing with inventory, et cetera, and kind of tee it up for the folks at home. What makes this such a hard problem? Could you help us out with that? Sure, yeah, I'd be happy to do so. You know, the whole supply chain environment has really risen to a different level, obviously, through the pandemic. People hear the word supply chain and they understand maybe what it means now, or at least they're exposed to it. It starts and ends, really, with the customer, right? So really what it comes down to is, you know, what kind of forecast, what kind of demand predictability can you get on that end? And then really cascading that all the way back through the operation, all the way back through to your suppliers, so that you can take that demand and satisfy it with the parts and the operation that you bring in. 
Historically, that's talked about as S&OP in the industry: sales and operations planning. So it truly does encompass everything from your customer, your commercial side of the business, all the way through to the manufacturing side. Got it. And I can imagine this has been a clunky and complicated process for as long as it's been around, because if I know anything about customers, you can't necessarily predict everything they're going to do, everything they're going to want, all the time. There are likely some best practices that you folks have to operate with today, or that the industry has to operate with today, about looking at historical forecasting, kind of quarter over quarter, month over month, looking at maybe the activity of different customers and estimating: okay, based on what they ordered last year, what do we think they're going to order this year? What are some of the factors that go into these, you know, guesstimates, hate to say it? What are the factors that go into these guesstimates today that allow manufacturing to operate? Well, I think, Dan, you said that perfectly. They are guesstimates. And the day that you issue a forecast, it's wrong. But I think what you have to make sure of, as you mentioned, is: is it 90-plus, 95% of the way there? That's going to get you to your end goal. And basically what you're looking at is historicals, as you mentioned. But I think what's made it difficult, you know, in the last 18 months or so, is that people talk in many ways about, you know, when are we going to get back to normal? There is no back to normal, ever, in my opinion. It's the next normal. And I think what you have to realize is, when you look at historicals, there's a lot of noise in the data over the last 18 months, let's say. And what I mean by noise is, for instance, we're in the contract manufacturing business, which means that we
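To make the "guesstimate" concrete: the kind of historical forecast discussed here can be sketched as simple exponential smoothing, which weights recent orders against older ones. This is a generic textbook illustration with hypothetical numbers, not Westfall's actual forecasting method.

```python
# Illustrative sketch only: a simple exponential-smoothing demand forecast
# of the kind of historical "guesstimate" discussed above. The order
# history and smoothing factor here are hypothetical, not Westfall's data.

def exponential_smoothing_forecast(history, alpha=0.3):
    """Forecast the next period's demand from past order quantities.

    alpha near 1 weights recent periods heavily; alpha near 0 smooths
    out noise (like the pandemic-era swings mentioned above).
    """
    if not history:
        raise ValueError("need at least one period of history")
    forecast = history[0]
    for actual in history[1:]:
        # Blend each observed actual into the running forecast.
        forecast = alpha * actual + (1 - alpha) * forecast
    return forecast

# Hypothetical monthly order quantities from one customer:
orders = [100, 120, 90, 150, 110, 130]
print(round(exponential_smoothing_forecast(orders), 1))  # → 118.9
```

The point of the sketch is the tradeoff David alludes to: a higher alpha chases noisy recent history, while a lower alpha lags behind real shifts in demand.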
A highlight from BI 154 Anne Collins: Learning with Working Memory
"Learning has been one of the greatest success stories tying together brains, behavior, and artificial intelligence. Long ago now, reinforcement learning algorithms that were developed in computer science were imported into neuroscience to account for the brain activity associated with how we learn. Since then, a wide variety of algorithms and computations underlying various forms of reinforcement learning have been explored, along with the neural substrates possibly implementing those algorithms. However, our brains are highly complex entities, and as we've discovered more about learning, the story has become more complicated. It isn't clear how and when various brain activities map onto the various particular equations used to describe how we learn. And people like Anne Collins, my guest today, are showing that reinforcement learning isn't the only game in town in terms of how our brains learn. Anne is a professor at the University of California, Berkeley, where she runs her computational cognitive neuroscience lab. One of the things that she's been working on for years now is how our working memory plays a role in learning as well, and specifically, how working memory and reinforcement learning interact to affect how we learn, depending on the nature of what we're trying to learn. So in this episode, we talk about that interaction specifically. We also discuss more broadly how segregated and/or how overlapping and interacting many of our cognitive functions are, and what that implies about our natural tendency to think in dichotomies, like model-free versus model-based reinforcement learning, system one versus system two, and so on. And we dive into plenty of other subjects, like how to possibly incorporate these ideas into artificial systems. You can learn more about Anne in the show notes at braininspired.co/podcast/154. 
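For listeners new to the field, the reinforcement learning algorithms imported into neuroscience are, at their core, built on a simple delta-rule value update driven by a reward prediction error. This is a generic textbook sketch, not Anne Collins's actual reinforcement-learning-plus-working-memory model:

```python
# Minimal illustrative sketch of the delta-rule value update at the heart
# of the reinforcement learning algorithms discussed above. This is a
# generic textbook example, not Anne Collins's RLWM model.

def update_value(value, reward, learning_rate=0.1):
    """One trial of incremental learning: nudge the value estimate
    toward the received reward by a fraction of the prediction error."""
    prediction_error = reward - value  # the "dopamine-like" teaching signal
    return value + learning_rate * prediction_error

v = 0.0
for _ in range(100):               # a stimulus rewarded on every trial
    v = update_value(v, reward=1.0)
print(round(v, 3))                 # → 1.0 (the estimate converges to the true reward)
```

The slow, incremental convergence is exactly the contrast with working memory that the episode explores: working memory can store an answer after one trial but is capacity-limited, while this kind of update is robust but gradual.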
Thanks to the Brain Inspired supporters; you people are the best, and it's just so generous of you to take the trouble to send a few bucks my way each month to help me make this podcast. And I always look forward to our live discussions and our interactions. Thank you. All right, here's Anne. And I know that you're not at SfN right now, the annual neuroscience meeting. And in fact, this discussion here is, I think, over a year in the making, because I'd asked you so long ago, but you had decided to go and procreate, apparently for the third time, and you were telling me that that's why you're not at this annual neuroscience meeting. So, but I thought maybe that was your first child, so I was going to ask you, you know, how motherhood was treating your career and otherwise. But you have three. Yeah. Yeah, I have three. They are 5 and a half, and a half, and 6 months old now. I'm not going to lie, motherhood is rough with a career, especially if your partner has a career too. Actually, with my first child, my husband wasn't quite working full time. Yeah. And so we were able to travel and go to lots of conferences and stuff like that. Which makes for some really interesting memories of being at SfN with a baby in the pouch and stuff like that. But yeah, I think the combination of the two of us having full-time careers, and just having lost the habit of traveling with COVID too, has really made it much harder this year. Are you done? Are you going to keep going? I stopped, too. And I have a surgery to prove it. That's a bit too much detail. Okay. I'm one of 6 children, so people feel like they can ask me these questions. I'm actually the 5th of 6 children. But no, I don't think so. I think it's already pretty, pretty hard enough at this point, and, you know, I have three girls; they're very lovely, but they're also a handful. Yeah. All right, well, I'm glad that we're finally doing this. So I appreciate you finally coming on to the podcast. 
It was a lot of emails back and forth in the making. So thanks for the persistence. Yeah, I am persistent. So we're going to talk a lot today about your work relating reinforcement learning in the brain to working memory, and hopefully we'll talk a little bit about attention as well. But I wanted to start by asking you, since you have worked a lot on the interactions between working memory and reinforcement learning: how would you describe how your outlook or your conception of learning and reinforcement learning has changed or been shaped throughout your career? Can you describe that sort of trajectory? Yeah, so, you know, I thought about it, since you kindly sent me the question to prepare a little bit, and that's the question I had the hardest time with, actually, because I got into this field not in the traditional way. Not that I think many people do; there's no traditional trajectory. I think in France, it's maybe, at least when I was there, even less traditional. You know, there was no undergrad anything close to cognitive science. I discovered cognitive science as part of my breadth requirements in engineering school, alongside painting and music and stuff like that. So it was really, I was in a very STEM-oriented undergrad and.
A highlight from Your Mouse Reveals Your Gender and Age
"Interview. I'm Luis Leiva, and I'm currently an assistant professor of computer science at the University of Luxembourg, in Europe. So I basically work at the intersection of machine learning and human-computer interaction, more concretely in a field which is called computational interaction, where I basically build computational models to try to explain or predict user behavior. So typically we either adapt or create new machine learning models to account for behavioral traits of the user; let's say how they pay attention to displays, or how they move the mouse, or how they use their eye movements to fixate on something, those kinds of things. So if you build a good model of a user, that means it's sort of representative of and predictive of the user's behavior; you're not trying to replace the user. What's the use of the model? So the main application of user modeling is simulation. Imagine that you need to recruit a representative user sample for measuring whatever. Then you have to spend time designing the experiment, recruiting people, reaching out to them, allocating some budget for paying them, and so on. So instead of recruiting, let's say, 1,000 people for that, you can recruit maybe ten, run some statistical tests, do some machine learning on the data that you can collect from them, and then you can create a user model that you can use to infer some traits or behavioral traits that could be extended to a larger user group. Of course, I mean, it's just an example; typically you would need a fairly large, representative dataset if you want to really do some production-level machine learning. But for research purposes, sometimes even fewer than 100 users is really more than enough to start drawing conclusions about your user population. Yeah, it just depends on the effect size you're trying to measure. With that in mind, can you expand on what some of the behaviors are? It depends on the study or the kind of analysis that you want to do. 
For instance, something that I have been doing for more than a decade is to try to infer how people allocate attention on different screens based on how they move the mouse. And why this is interesting: because then you don't need to install any webcam or any eye tracker. So you can use the mouse as a proxy of user behavior and see, depending on how they move, how fast they move, how they reach out to targets, or how they click on something. So it's not only about the click itself; the process, or what happened before the click, is what is interesting. And these kinds of user models can tell us a lot about, for instance, how the layout or the user interface is designed, and how we can change things, or what happens if we move, I don't know, a button from the top left corner to the bottom right corner: what kind of behavior this will enable, or whether this will help people find information quicker. These are things that you can measure basically for free by running larger-scale studies. If you really want to pay attention to how people behave on the web, for instance, then the mouse has been shown to be a good proxy of the eye gaze. I mean, not for every single thing, but for most of the tasks that we do online, the mouse is a very reasonable proxy of user behavior. Well, I'm glad you made the comparison to eye tracking or gaze measurements. Obviously, we're going to talk mostly about the mouse today, but in terms of making that compare and contrast for listeners who don't really know much about the scholarship of eye tracking: is that a big thing, and how highly is it regarded? I guess I ask because I think of my own eyes, and they're not really a precision instrument, right? Just because I look at an ad doesn't mean I'm interested; sometimes I was distracted. It seems like a very noisy dataset. How reliable is eye tracking? Well, eye tracking is actually one of the earliest measuring instruments to analyze and investigate user behavior. 
Typically, in human-computer interaction, we are interested in understanding how people pay attention to things, or how things are arranged on the screen. And the eye tracker is an essential device in most HCI labs today. And it's not really noisy; actually, the mouse is way more noisy than the eye. So yeah, for sure we can go through that later. But I can tell you that looking at how people experience or look at content, and how it is arranged on the screen, is something that people in marketing and in neuroscience have been using for decades. So I would say that it's pretty standard. And regarding the use of this in a human-computer interaction context: I guess that not all of your audience will be familiar with the HCI (human-computer interaction) literature, but just to let you know, eye tracking as a measuring device is really, really popular in HCI. So based on your research, and I guess also just things you've read and looked into: do you think that the mouse approach can be a full proxy? If for some reason eye tracking was too expensive or just not available for my project, but I do have mouse data, can I get just the same rich amount of information as from an eye tracker? Is there a fidelity loss by looking at only the mouse? Yeah, well, it depends on the task. Of course, I mean, I cannot give you a 100% accurate answer on that. But for instance, on web search, where search engines display information on the typical search engine results page, this page with the ten purple snippets that you can click on: this is highly structured information.
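As a concrete illustration of using the mouse as a behavioral proxy: a user model like the ones Luis describes typically starts from simple features of the mouse trajectory, such as path length, straightness, and speed. The feature choices and sample data below are hypothetical, for illustration only, not taken from Luis Leiva's actual pipeline.

```python
# Illustrative sketch of the kind of mouse-trajectory features a
# computational-interaction model might start from, as discussed above.
# Feature choices and data are hypothetical, not Luis Leiva's pipeline.
import math

def mouse_features(points):
    """points: list of (x, y, t) samples of one mouse movement.
    Returns path length, straightness, and mean speed."""
    path = 0.0
    for (x0, y0, _), (x1, y1, _) in zip(points, points[1:]):
        path += math.hypot(x1 - x0, y1 - y0)   # sum of segment lengths
    (xs, ys, ts), (xe, ye, te) = points[0], points[-1]
    straight = math.hypot(xe - xs, ye - ys)    # start-to-end distance
    duration = te - ts
    return {
        "path_length": path,
        "straightness": straight / path if path else 1.0,  # 1.0 = perfectly direct
        "mean_speed": path / duration if duration else 0.0,
    }

# Hypothetical samples: (x pixels, y pixels, t seconds)
trace = [(0, 0, 0.0), (30, 40, 0.1), (60, 80, 0.2)]
print(mouse_features(trace))
```

Features like these, aggregated over many movements, are what a classifier would consume to predict traits or attention, which is the sense in which the mouse serves as a cheap proxy for an eye tracker.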
A highlight from AI Today Podcast: AI Glossary Series: Augmented Intelligence
"Hello and welcome to the AI Today podcast. I'm your host, Kathleen Walch. And I'm your host, Ronald Schmelzer. And thanks again for joining us on the AI Today podcast. As you know, we have well over hundreds of episodes and 5-plus years; we've been doing this for a long time, and we've never run out of things to say on AI Today. And part of it is because we keep hearing from a lot of you, our listeners, who are telling us about the need not only to put AI into practice, but even to understand terms. We're still surprised, maybe we shouldn't be, that people are asking us about some terms that have been around in artificial intelligence, even data terms that have been around for decades. So we decided, hey, let's put together a glossary. It's visible on our site; we'll link to it in the show notes. A glossary that goes over all of the major terms and concepts you need to know about artificial intelligence. And even putting that together, there are like hundreds of terms that you really need to know, and that could be a little overwhelming. So part of what we decided to do is put together not just the glossary, but a little podcast series where we can explain each term, and sometimes a couple of terms that are related, in one podcast, so that you could say, oh, I understand what this term means, and I can have a conversation with someone, and I can really know what to do. Of course, putting it into practice is a whole other thing; understanding the terminology is one thing, knowing what to do with it is another. That's what our CPMAI certification and training is all about, but we'll get to that later on in this podcast. Exactly. In our podcast series, we really wanted to just go over some key AI, machine learning, and big data terms at a high level, because, as we mentioned, some people just overly complicate these terms for no good reason, and I think that doesn't help to make people any less confused about some of these terms. 
So we wanted, in our glossary and then in our companion glossary podcast series, to just go over it at a more high level. Again, these podcasts will be at that high level. If you're interested in digging deeper, then we encourage you to take the CPMAI training and certification. CPMAI, for our listeners who have been listening to us for a while, you know that that's the Cognitive Project Management for AI methodology. We are big advocates of doing AI right, including following best practices, which is the CPMAI methodology. But again, on today's podcast, we really wanted to just talk about some of those terms, and we will be focusing on the term augmented intelligence. Yeah, so one of the things you might be thinking about with intelligent machines is that we're really thinking about the machine, and what we want the machine to do. But machines are limited. Of course, we don't have general intelligence; we talked about that in other podcasts and other places. So we don't have machines that can really, truly do everything that humans can, even for basic tasks that humans can do. We're just really good at what we do. So the idea with augmented intelligence is, instead of thinking about a machine that's going to do things perhaps on its own: what can we do with an intelligent system if it can work together with the human to make the human better? So in many ways, it's really the machine augmenting the human intelligence. So we're trying to make the human more capable, and that's really what we mean by augmented intelligence, if you've heard this term. And the reason we want to do that is because humans are really, really good at some things, and machines are really, really good at some things. Let's do the chocolate and peanut butter thing: put these two things together and just make people better at what they do. And that's the whole idea of augmented intelligence. 
So what are humans really good at and not good at, and what are machines good at and not good at, so that together everything is better? Yeah. Yeah, because that is important, right? We want to make sure we're taking the best of both. So humans are really great: we have great intuition, we have emotional IQ, we have common sense, and we have creativity. We're really creative beings; we're able to draw pictures, write poetry, sing songs. But we're not good at probabilistic thinking. Also, we're not good at dealing with very large volumes of data. If you've ever looked at spreadsheets, once they start getting past a page, I'm like, oh my goodness, my eyes are glazing over. And also, humans just inherently have bias. So those are the things we're not good at. But now let's think about, well, what are computers really good at? Well, they're really good at probabilistic thinking. They're really good at dealing with very large volumes of data and information in a very quick amount of time. And they're also really good at being trained. But machines and computers are not good at intuition. They lack emotional IQ. They lack common sense. They also lack creativity. As we mentioned, they're good at being trained, so they may produce something that seems creative, but they are not creative like humans. And then also, machines do have bias. So if you take what humans are great at and what machines are great at and merge them together, then that's the idea of augmented intelligence.
A highlight from Leveling Up Commercial Operations in Pharma - with David Ehrlich of Aktana
"In this episode, we're focusing on the pharma industry, and we're focusing on a unique element of pharma: the commercial side. So we have all the activities we need to do to develop drugs and push them through clinical trials, and then of course we need to actually get them to patients, to go sell them to our customers, and there's a lot of activity there as well, but it's less talked about when it comes to the intersection with artificial intelligence. Our guest this week is David Ehrlich, the CEO of Aktana. Aktana is focused on helping sales and marketing leaders go to market with their products, and there are three topics that we cover in this episode. The first is the business challenges of taking a drug to market. What do sales, marketing, and product folks have to deal with? What are the complexities that they're buried in? Many of these are going to mesh with some of you in other industries. If you're listening in and you're in banking or you're in retail, I'm sure some of these communication challenges and branding challenges are going to be things you'll resonate with as well. But David goes deep on exactly what this looks like in pharma. Secondly, we talk about where data and AI can fit into that mix to add value. Again, this is a unique use case, very different from a lot of the backend drug development topics that we've covered over the years here on the AI in Business podcast. So David explains where data and AI fit in to drive the sales and marketing metrics up for pharma firms. And lastly, he shares some insights on AI adoption. I asked David directly what it takes for folks on the commercial side of a life sciences business to prepare for and adopt AI in a way that gives them the highest likelihood of success, and he shares some of his insights on common pitfalls and things that they've done well. So I hope that you find those insights to be transferable to your sector as well. 
This interview is brought to you by Emerj; for more information about reaching Emerj's global audience, stay tuned to the outro of this episode. But without further ado, let's fly right in. This is David Ehrlich with Aktana, here on the AI in Business podcast. So David, welcome to the show. Thanks, Dan. It's great to be here. I'm glad to have you with us. We talk a lot about AI in the domain of pharma, but not so much in commercial, and this is sort of where you guys play. Some of our listeners are very much in your industry; some aren't. Maybe we could kind of define commercial and what workflows exist under there, and then head right into the particular workflows that you guys operate in that AI might help with. But if we could start with the definition, I think that would be helpful. Sure. So the way to think about most life science companies is that every big life science company is going to have two sides to it. The first side is around research, development, and manufacturing; it's around figuring out what drugs or what product the market needs that's consistent with the kind of impact they want to have in the world. They go and build that product; they invent the product, they invent the medicine, they do all the development, they get it approved by the FDA, and then they hand it over to the commercial side of the business. The commercial side of the business is really around: how do we market this new brand, this new treatment for a certain disease? How do we explain to the world that we have this now, what it does, how it operates? Then there's distribution, and there's sales, and there's fulfillment. 
So it's everything around that side of the business. Got it. So fulfillment gets bundled under commercial, in theory, here? Okay, okay, got it. I was unaware. So I knew the sales and marketing side, but wasn't quite certain whether fulfillment sort of had its own bucket. Yeah, you can basically think of it as building and getting a product approved, and then everything else.
A highlight from AI Today Podcast: Interview with Galen Low, host of The Digital Project Manager podcast
"The AI Today podcast, produced by Cognilytica, cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence. Learn about emerging AI trends, technologies, and use cases from Cognilytica analysts and guest experts. Hello and welcome to the AI Today podcast. I'm your host, Kathleen Walch. And I'm your host, Ronald Schmelzer. And thanks again for joining us on AI Today; we've been going strong here for 5-plus years, hitting our 300th episode pretty soon. Pretty soon. I keep talking about 300, and actually, technically, we're not there yet, but we record a lot of podcasts, so in my mind, we're already past 300 episodes. But we have really enjoyed much of your feedback. Many of you really enjoy our educationally focused podcasts; needless to say, there will be many, many more. You might have also noticed that the frequency of our podcasts has been going up. For the first 5 years, we were regular: every Wednesday, a podcast. Now we're Wednesday and Friday. Well, I don't want to commit to the days of the week, but basically we're twice a week now, because we just have so much. You might think, after all these years, we didn't have much more to say on AI. It's the opposite: we just have too much to say, and we just can't cram it all in. Of course, if this is the first time you're listening to the AI Today podcast, then you should know we've got lots of stuff, including interviews with some amazing people who are involved in either making AI work today, or people you should be listening to because they will help you make AI work for you today. Exactly. And so we always love the opportunity to have interviews, especially with fellow podcasters, so that they can share their insights. And one thing that we've noticed, and you may be noticing as well with our interviews, is that we're really starting to see a cross section between project management and AI. 
And so we said, let's get some project managers and some project manager podcasts on here to help share with our audience their insights, and maybe some of the challenges that they face and the opportunities that they see when it comes to project management and also AI as well. So we're really excited to have with us today Galen Low, who's the host of The Digital Project Manager podcast and cofounder of The Digital Project Manager. So welcome, and thanks so much for joining us. Oh, thanks for having me here. This is such an honor. I'm really excited to get into it. AI and project management: BFFs, completely. Well, perfect. Well, we'd love to start by having you introduce yourself to our listeners. Tell them a little bit about your background and why you started the podcast as well as The Digital Project Manager. Fantastic. Yeah, so again, I'm Galen Low. I'm one of the cofounders of a little professional community called The Digital Project Manager. Myself, I've been working in client services
A highlight from Making "AI Ethics" Productive - with Beena Ammanath of Deloitte
"You're listening to the AI in Business podcast. And this is not going to be an episode of holier-than-thou. Many times, the topic of AI ethics is little more than a conversation of holier-than-thou. The way that I define unproductive AI ethics is essentially simply the exercise of shooting down AI ideas as being detrimental: conjuring up some potential risk, potentially something that's very politically prickly, and saying, oh, that might cause this or that might cause that. There are certainly many risks with AI, but when ethics, quote unquote, steps in without being able to solve those problems, in other words, integrate values, integrate law, and also get the job done for the customers or the company, I consider it unproductive, and I consider it a sort of holier-than-thou game that I don't consider worth covering on the podcast. So I don't. We had a good episode about AI ethics with the, at the time, global head of AI at IBM, Seth Dobrin, about a year ago, and that was an awfully good episode talking about the productive side of AI ethics. Today we double down on that theme with a guest who is not only the author of a book called Trustworthy AI, but is also the executive director of the global Deloitte AI Institute. Beena Ammanath has also held leadership positions in AI and data at Hewlett Packard Enterprise, Bank of America, and General Electric, kind of a who's who of global enterprise firms, and now she's with Deloitte. She speaks with us this week about putting AI ethics into action in ways that are conducive to innovation, in ways that genuinely will serve to solve business goals and customer problems. And there are two really important points I think are worth noting down for those of you who are tuned in who are leading AI projects, or maybe you're consultants who are helping your clients lead AI projects. 
There is a process here for sort of being able to screen out potential downsides and think through those upfront, which I think can be a potential benefit of applying AI ethics properly; Beena has some excellent ideas there. And then secondly, she talks about who needs to be in the room to have a realistic AI ethics conversation. This is a team sport, as any of you who've been here for long enough are well aware, and Beena talks about the different kinds of expertise that have to come together to understand squarely the ethical and legal concerns of AI applications, but also how they can interact, how these folks need to level up their own knowledge, and how they need to bounce that knowledge off of each other to genuinely screen applications and determine the best place to put our company resources for the sake of our customers. Some of these ideas, hopefully, many of you will be able to turn around and apply in your own business, and that's certainly what we're shooting for in this episode. So I'm grateful to Beena for being able to be with us. And without further ado, let's fly right in. This is Beena Ammanath of Deloitte, here on the AI in Business podcast. So Beena, I know you have a lot of these conversations with leadership around AI ethics, and there's a lot to get into with the meat and potatoes today, but I think we should define the terms; we've certainly heard a lot of different definitions of what comes to mind for AI ethics. When you're explaining this to the C-suite, to the boardroom, how do you put it in a nutshell? So there is a notion that AI ethics is all about transparency and removing bias and making it more fair. Those are catchy headlines, but in my experience working across different industries, fairness, bias, and transparency are all crucial, but there are other factors. If you have an algorithm predicting a manufacturing machine failure, for example, fairness doesn't really come into play, but security and safety are both key issues. 
So let me take a step back and tell you why I like to think about it as trust and ethics in AI, because for me, trust includes the ethics, but it also includes policy and compliance, which is what leaders need to be aware of in the context of ethics. So trustworthy AI encapsulates everything you can think of related to the potential negative consequences of AI. That's how I think about ethics. Yeah, so not putting it simply in the bounding box of transparency and bias as buzzwords. Yeah. Yeah, got it. And in terms of where it fits in, I'm sure for some folks that you talk to, and I know for our listeners this is often the case: when they hear about AI ethics, it's often sort of just, well, you know, you want to be careful, your algorithms could make for a really bad PR event. Sometimes it's physical danger, right? But as you and I both know, certainly, if you're running a manufacturing plant with heavy equipment, or you're making self-driving cars, or you're diagnosing cancer, we've got real issues here.
A highlight from Natalie Monbiot from Hour One on New Virtual Human Use Cases - Voicebot Podcast Ep 285
"Hey guys, I can't believe it's been nearly 5 years since I quit posting regular videos to YouTube. And I feel like an entirely different person. The ironic part is, I'm actually not a person at all. At least not a real person, anyway. I am a creation of artificial intelligence. That's right. I don't exist in the 3D world. I exist in pixels, 1280 by 720, to be specific. With permission from the real Taryn Southern, AI Taryn can speak different languages and have different faces, ages, genders. AI Taryn can sing: "Wake up, brush my teeth, go to work, time to come home and count likes as my worth." She can even relaunch her YouTube channel by creating new videos without the real Taryn having to shower or leave her bed. And perhaps most importantly, the real Taryn can now focus her time and energy on solving more existential problems. Problems like: what does it mean to have an AI twin? Am I creepy or intriguing? Do AI humans like marshmallows? I believe we do. Anyway, it's really nice to meet you guys. I will see you all in the matrix. Or somewhere. Good stuff.
A highlight from Measuring Web Search Behavior
"Whenever I look over somebody's shoulder, always with permission, and watch what they're doing on the Internet, invariably their behavior is a little different than mine. I'm more likely to open information in a new tab rather than the current tab. Someone close to me is constantly selecting things, not to copy and paste, just to highlight, which I find odd. We all use our machines slightly differently, and we definitely each use our browsers and search engines in slightly different ways. I consider myself an above-average Internet user in terms of my technical merits and how much I use the Internet, but you know, more than 50% of people also think they're above average. So who knows? Well, my two guests today probably know. They had access to a large dataset (we'll talk about how they got it): a combination of web tracking data and later survey data. When they blend these two data sources, a number of insights emerge about the different ways different demographic groups use search engines. We'll get into those details and more in today's interview. So my name is Aleksandra Urman; Sasha is the nickname I also go by. I'm a postdoctoral researcher at the University of Zurich in Switzerland. I work with the Social Computing Group at the Department of Informatics, though my background generally is in the social sciences. So my work generally sits in between social science and computer science, and my primary research area currently is research on web search, on the HCI aspects of it. So this is things like algorithmic bias in web search, but also how users interact with it. Another stream of my research is political communication on social media platforms, broadly defined, so to say. Yes, my name is Mykola Makhortykh. I am an Alfred Landecker Lecturer at the University of Bern, working at the Institute of Communication and Media Studies.
One of my central projects right now deals with the impact of algorithmic systems, such as web search engines or recommender systems, on Holocaust memory. But I also have a bunch of other interests which deal more broadly with information retrieval systems, their potential biases, and their implications for the public sphere, especially information behavior in relation to politics, but also to historical information. Well, how did your collaboration come together? Well, I was actually doing my PhD work at the institute, but we didn't meet there. We met before Mykola joined, I think a year or so earlier, when I was still doing my PhD and Mykola was already a postdoc, in Amsterdam at that time, if I recall correctly. We met basically at a conference, and we talked about a paper that we could potentially collaborate on, based on, I think, what we both were presenting, and then we just started collaborating remotely via email. And then Mykola joined the institute because there was a position open that was pretty fitting. I would say the main paper I invited you both on to discuss is "You are how (and where) you search: comparative analysis of web search behavior using web tracking data." It caught my attention right away, but neither of you works at Google, which has most of the web tracking data. How do you get started on a project like this? So essentially, we were both working at the time (I was still in Bern) on that web tracking project more generally, which is essentially a joint project between Germany and Switzerland.
The goal of the project was to collect browsing data overall, not focused on web search specifically, just browsing data from users who agreed to participate. They basically installed a plugin that would record all of their browsing except a dedicated block list, which was a list of sensitive websites: we didn't record anything on their visits to banking or insurance sites, to adult websites, and things like that. So everything else was recorded, unless they pressed a button and said, don't record me for the next 15 minutes, and they could press it as much as they wanted to. So we had this data collected for different projects that deal more generally with people's information consumption and news consumption online, and since we're both more interested in web search, as we do a lot of work on web search bias (so not focusing on the users, but focusing on the search engines themselves), we saw this as an opportunity to use this data to look at the other side that we hadn't explored before: the user side, how users actually search. Because previous studies were mostly based on either eye-tracking data, small lab studies where people come in, there is an eye tracker, and the researchers look at what people look at on web search pages (which is cool for seeing in more detail what people do, but the ecological validity of these studies is not too high), or, historically, log-based studies. For example, in the early 2000s there were studies where researchers from search engine companies (some other search engines, not Google), or academic researchers who were given access, worked with all the transaction logs from one search engine, to basically check what people click on and what they do. And we had this trove of data that allowed us to look, at scale and in real life, at multiple search engines at the same time.
So bringing together the benefits, so to say, of these two previous methodologies. You'd mentioned being able to collect some of that data from the Chrome plugin; could you expand a bit on how you got people involved in that study? The paper is basically one of the outputs of a larger project, which was done by two universities: the University of Bern, with a team led by Silke Adam, and the University of Koblenz-Landau, with a team led by Michaela Maier. The idea was to recruit a basically representative sample of German and Swiss citizens, and then invite them to share their data using the plugin system. So we collaborated with a market research company, which has samples of online panels from the two countries, Germany and Switzerland, and we asked them to recruit a sample of participants. Each participant was asked to explicitly express consent, or the lack of consent, to be tracked, and naturally quite a number of people didn't agree to participate, which was expected, because first of all it's quite a novel way of researching information behavior, and second, it's also still, I would say, quite an intrusive, sensitive way of studying people's behavior. But in the end, we got quite a number of participants who agreed to take part both in the tracking component and in the survey component. And this group of people is the group we have actually worked with throughout the project, and based on whose data we wrote this paper. And what does that dataset turn into at the end of the day? Do you have a list of URLs, or something richer? The dataset is richer. Essentially, the plugin was developed within this research project, and it also records snapshots of the HTML pages, essentially as a user sees them. It's not just the set of URLs; you also have all the HTML, even though it's naturally quite messy.
If you've worked with raw HTML data, you know it's quite difficult to extract things, but with web search specifically, it's easier in a way, because we have a limited number of
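The workflow the guests describe, parsing saved HTML snapshots of search result pages to recover what a participant actually saw, can be sketched with Python's standard library. This is an illustrative reconstruction, not the project's actual code: the `result` class name and the page markup are invented placeholders, since real search engines use their own (frequently changing) markup.

```python
from html.parser import HTMLParser

class ResultLinkExtractor(HTMLParser):
    """Collect links that appear inside containers marked as organic results."""

    def __init__(self):
        super().__init__()
        self.in_result = False   # are we inside a <div class="result"> block?
        self.links = []

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if tag == "div" and "result" in attrs.get("class", "").split():
            self.in_result = True
        elif tag == "a" and self.in_result and "href" in attrs:
            self.links.append(attrs["href"])

    def handle_endtag(self, tag):
        # Flat example: leaving any div closes the current result container.
        if tag == "div":
            self.in_result = False

# A toy stand-in for one stored snapshot of a search results page.
snapshot = """
<html><body>
  <div class="result"><a href="https://example.org/a">A</a></div>
  <div class="ad"><a href="https://ads.example/x">sponsored</a></div>
  <div class="result"><a href="https://example.org/b">B</a></div>
</body></html>
"""

parser = ResultLinkExtractor()
parser.feed(snapshot)
print(parser.links)  # organic result links only, ads skipped
```

In practice one such extractor would be written per search engine, which is why a "limited number" of engines makes the parsing problem tractable.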
A highlight from Maaike Coppens on Conversation Design Themes in 2022 - Voicebot Podcast Ep 284
"Hello there to all my friends of Voicebot Nation. This is Bret Kinsella, your host of the Voicebot Podcast. Every week for 5 years, I've sat down with engineers, researchers, entrepreneurs, and conversational AI industry leaders, and sometimes those leaders are designers, and that is what I have for you today. It is a long-overdue talk with Maaike Coppens about voice user experiences, and the timing was ideal. We had the chance to sit down while we were at VOICE Summit 2022, just outside of Washington, D.C., and Maaike has a new book on conversation design that was released shortly after we conducted the
A highlight from BI 153 Carolyn Jennings: Attention and the Self
"A popular view in philosophy is to see consciousness as the thing, and the self as nonexistent, an illusion. What I'm doing is saying the self is a thing, and consciousness is just the way that the self is related to its world. I think one of the reasons that people who have been in consciousness research for a long time will be uncomfortable with it is just that it's become so popular to think about consciousness as a thing. This is Brain Inspired. Everyone, I'm Paul. William James, the super-influential psychologist and philosopher, famously wrote in 1890 that everyone knows what attention is. That turned out not to be true. Instead, like other cognitive functions we give names to, like memory or consciousness, the more we study attention, the more subdivided the concept becomes, leading to a taxonomy to describe the varieties of what we collectively call attention: top-down versus bottom-up attention, feature-based versus spatial attention, overt versus covert attention, and so on. Some people even argue that the word attention isn't useful anymore and we should abandon it. Carolyn Dicey Jennings is a philosopher and cognitive scientist at the University of California, Merced, and in her book The Attending Mind, she lays out an attempt to unify the concept of attention. Carolyn defines attention roughly as the prioritization of some stuff over other stuff, based on our collective interests. One of her main claims is that attention is evidence of a real, emergent self or subject that can't be reduced to microscopic brain activity. She does connect attention to more macroscopic brain activity, suggesting that slow, longer-range oscillations in our brains can alter or entrain more local neural activity, and this is a candidate for mental causation.
So we unpack that more in our discussion, along with how Carolyn situates attention among other cognitive functions, like consciousness, action, and perception. I link to her book and some other relevant articles, and you can learn more about Carolyn, in the show notes at braininspired.co/podcast/153. On the website you can also sign up to support Brain Inspired via Patreon for various bells and whistles, like full episodes and joining our Discord community. Thanks as always to my Patreon supporters, and thank you for listening, or watching. All right, here's Carolyn. Carolyn, the book is The Attending Mind, and right before we started talking here, I was frantically looking it up because of course it has a subtitle... but it has no subtitle. Why no subtitle? Yeah, sometimes I like things to be short and sweet, I guess. But don't all books, all science books or philosophy books, have subtitles, right? These are the important, hard-hitting interview questions. Yeah. I guess they do often have long subtitles. I'm really inspired by philosophers like Susan Wolf, who try to connect more with the public, or Federici, who try to be really clear with their writing, and that's a goal of mine, and I feel sometimes like the really long subtitles are at odds with that. Okay, well, so the title is very short. And by the way, I like that it has no subtitle; it's not a criticism. And the book is not long either, but it is dense and thick and goes down lots of paths, with lots of details and stuff. Maybe I'll just start off with a quote from the book, and then we can unpack it, right? "Consciousness is the interface between a subject and its world, action is the subject's contribution to that interface, and attention is but one way to get there." So we have a lot to unpack here, perhaps. So like I was saying, the book covers a lot of ground in philosophy and neuroscience and psychology.
And there's no way that we're going to get to all the topics discussed in the book. The book is two years old now, so it's probably old hat for you, and it was based on over a decade of your previous work and thinking. Yeah. Maybe we can start with, you know, we're going to have to unpack many of the ideas in the book, but what I want to start with is just asking what you feel most sure about in your work, and I don't know how your mind has developed and changed since publishing this work, in terms of the ideas that we'll get to, but what do you feel most sure about in the book? I feel most sure about the existence of a self, which I would say is also the strongest claim of the book. So yeah, that's probably where I'd bet: there is something responsible for attention, which could be seen as one possible solution to the problem of free will, for example, with agency. I feel confident about that. And in keeping with that, I feel pretty confident about the rejection of a reductionist perspective of the universe: that all causation occurs at one level, that all science occurs at one level, or that we should think of science as ultimately coming back to one level, whatever that is. I feel confident that it's actually really useful to think in terms of multiple levels, and that agency is one of the cases where you can really see that. So that's where I feel confident. A part of the book that I feel less confident about, and haven't continued to work on, would be the material about legal theory all the way at the end, which kind of makes sense: the book sort of starts with the stuff that I'm most excited about and ends with the stuff that I feel least confident about, heading out in a new direction that I may continue later. But there are also things that I just didn't complete in the book.
And so in a way, I feel less confident about those things too, but I'm hoping to complete them eventually, and those are things like: where is the boundary between self and world?
A highlight from AI Today Podcast: AI Glossary Series: Cognitive Technology
"And welcome to the AI Today podcast. I'm your host, Kathleen Walch. And I'm your host, Ron Schmelzer. Thanks again for joining us on the AI Today podcast. As you know, well over 300 episodes or so, 5 years plus, we've really been going strong, and a lot of the reason why AI Today is so popular is because we focus on giving our listeners an understanding of AI, machine learning, and big data, and of course how to put those things into practice today, which is what we're all about. Not looking at the research, not looking at yesterday, the history, or talking about tangential topics. We're really about making AI a practical thing; we really want you to be successful with it. So as part of that, we've spent a lot of time on various aspects of education, in our failure series and, most recently, our glossary series, where we just highlight terms so that there's an understanding of what these various things are. So if you hear a term you may or may not be familiar with, you know what it means. Now, of course, we can only go into so much detail when we're doing a glossary, but we never really did this before, and we found it was really very helpful: it gives people an understanding, and sort of a common lexicon, a common terminology, by which they can have successful conversations with their colleagues. Exactly. So if you have not done so already, we encourage you to subscribe to the AI Today podcast, so you can get notified of all of our future episodes. We will have many in this AI glossary series. But in today's AI glossary episode, we wanted to make sure that we were covering key terms related to AI, machine learning, and big data.
At a high level, because we've heard from many of our listeners and our audience that some of these terms can get a little confusing; other people's definitions, the way that they describe things, are sometimes overly complex for no good reason. So we wanted to put this in terms that everybody can understand, and really make it approachable, so that people have a better baseline understanding of what these are. If you'd like a more in-depth understanding of all of this, then we encourage you to take our CPMAI training and certification, where we go into much greater detail; CPMAI is Cognitive Project Management for AI. If you've listened to any of our podcasts, you know we are big advocates of doing AI right with best practices, and we are big advocates of CPMAI. But on today's podcast, we really wanted to go over some of those key terms from our glossary. So today we'll be defining cognitive computing and cognitive technology. Yeah, so you might have heard these terms, or maybe you haven't, but you have obviously heard of artificial intelligence. And I think the challenge is that when you have a conversation with others, they may not have the same common understanding of artificial intelligence, and as you may have heard from our podcast on artificial intelligence, that's because there is no well-established standard definition of artificial intelligence. And we get into this tricky problem of not coming to an agreement, or maybe people don't even like the term AI. So an alternate term that people are using describes aspects of narrow AI, which is something that we have talked about on another podcast as well, and we've defined it in the glossary too: when you're only using AI for a specific thing, such as some sort of text processing, or image processing, or some sort of prediction, or just sentiment analysis, it's hard to really think about that as AI. When you're just trying to say, are these tweets happy or sad, is it really AI?
It's like, kind of, but not really. So the term that people use to describe the use of machine learning and other AI techniques for things that you would not necessarily think of as artificial intelligence is cognitive computing, or cognitive technology, which is really the range of technologies that we are using as we pursue this goal of artificial intelligence, even if we're not really using them for something you'd call AI. If someone says, look, I'm just building some classification system, and you ask, are you doing AI? No, I'm not really doing AI; I'm doing cognitive computing, or cognitive tech. That's okay. It's just terminology that people use so that they don't get stuck in some mindset or frame of mind when it comes to artificial intelligence.
A highlight from StrategyQA and Big Bench
"It's been a while since we ran our season on artificial intelligence. And if I were to pat myself on the back, I would say it was serendipitous that I planned it at the same time large language models were starting to change the NLP landscape. If I dropped the ball on something, it was probably not running a season on computer vision right now. We've seen a mirroring set of advancements, most notably the recent diffusion models, and it's astounding to think how far we might go with this more or less identical underlying architecture used for both language and vision. If there's one thing I am confident about, it's that the true test of whether or not something is artificially intelligent can only be performed with Alan Turing's imitation game, or, as most of you know it, the Turing test. And despite recent advancements, it seems pretty clear we're still a ways off from an AGI. But between now and then, we're going to need bigger and badder challenges to press our machine learning algorithms up against. I mean, people still publish on MNIST and ImageNet a bit, but if there's one lesson we've learned, it's that more data and more distributed are the two paths to push forward on. So that's why I wanted to take a quick respite from our ad tech season and bring you a story about a collaborative benchmark known as BIG-bench, the Beyond the Imitation Game benchmark. This is a large collection of many different independent tasks in natural language. You know, it was a major feature of ELMo and BERT and all the other models that have followed since that they're useful in a wide assortment of seemingly independent tasks, or that BERT embeddings used as features in an ML model will allow you to train something with hundreds of examples where you previously needed maybe hundreds of thousands. BIG-bench is one of the best benchmarks out there as we try to build algorithms that can be as general-purpose as possible.
So today I speak with returning guest Mor Geva. We talk about StrategyQA, her specific contribution to the overall project, as
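The few-shot idea mentioned above, reusing pretrained embeddings as fixed features so a downstream model needs far fewer labeled examples, can be sketched without any deep-learning library. The four-dimensional "embeddings" below are fabricated stand-ins for vectors an encoder such as BERT would produce, and the classifier is a deliberately minimal nearest-centroid rule rather than anything from the episode itself.

```python
# Sketch: pretrained embeddings as fixed features for a downstream task.
# With good features, even a nearest-centroid rule can work from a
# handful of labeled examples per class.

def centroid(vectors):
    """Element-wise mean of a list of equal-length vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def distance_sq(a, b):
    """Squared Euclidean distance between two vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

# A few labeled "embeddings" per class stand in for a tiny training set.
train = {
    "positive": [[0.9, 0.8, 0.1, 0.0], [0.8, 0.9, 0.2, 0.1]],
    "negative": [[0.1, 0.0, 0.9, 0.8], [0.2, 0.1, 0.8, 0.9]],
}
centroids = {label: centroid(vecs) for label, vecs in train.items()}

def classify(embedding):
    # Assign the label whose class centroid is nearest in embedding space.
    return min(centroids, key=lambda label: distance_sq(embedding, centroids[label]))

print(classify([0.85, 0.75, 0.15, 0.05]))  # lands near the positive centroid
```

The point is the division of labor: the expensive generality lives in the encoder that produced the embeddings, so the task-specific model on top can stay tiny.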
A highlight from Applying Computer Vision and Computer Listening in Manufacturing - with Remi Duquette of Maya HTT
"Business podcast, where non-technical professionals stay ahead of the AI curve. If you do not want to learn to write Python, but you do want to identify high-ROI projects and help to steer an AI strategy, you've found yourself in the right place. In this episode, we're going to be talking about computer vision. Often when people think about computer vision, they imagine what a human being's eyeballs would look at, and they say, okay, what could we use AI to look at, the same things a human would look for? Maybe we're looking for someone moving in surveillance footage. In today's case, we're looking at manufacturing. So maybe we want to examine possible defects in some manufactured product the same way a human being would. Well, as it turns out, machines can look at things in ways that human beings can't. I'm talking about infrared and other kinds of cameras that do not model the human eye, but might be able to pick up on things the human eye cannot. In addition, we talk about machine listening, that is to say, how can we use audio data to determine what might be going wrong in the manufacturing process? And again, this goes beyond what the human ear can listen to. This episode gives us a lot of different jumping-off points into where computer vision and audio might make the difference, including going beyond how humans currently diagnose our machines. Our guest this week for this in-depth and interesting topic is none other than Remi Duquette. Remi has been with us in the past; he leads artificial intelligence at Maya HTT, an AI services firm, and has previously joined us on episodes about improving throughput and improving quality with AI in manufacturing. Today, we are focusing on the tools of the job, vision and listening, and being able to go beyond the human senses to drive results in predictive maintenance and improving quality in the manufacturing process.
This episode is brought to you by Maya HTT. To learn more about reaching Emerj's global audience for your AI products or services, stay tuned to the end of this episode. Without further ado, let's fly in. It's always fun to talk to Remi
A highlight from Protecting us with the Database of Evil
"Episode of Practical AI. This is Daniel Whitenack. I'm a data scientist with SIL International, and I'm joined as always by my co-host, Chris Benson, who's a tech strategist with Lockheed Martin. How are you doing, Chris? Doing very well today, Daniel. How's it going? It's going great. So yesterday was voting day here in the U.S., and I did go to the voting place. And it was interesting, because in line I could hear people talking about cyber threats to the voting machines and other things like that. And so my mind was already thinking about these things, because we have a really interesting topic to talk about today that's in that same vein. We're privileged today to have with us Matar Haller, who is VP of Data at ActiveFence. Welcome, Matar. Hi. Thanks for having me. Yeah. And ActiveFence, I've read a bit about it, and the website talks about this barrage of threats that online platforms are susceptible to now, which ActiveFence is addressing in various interesting ways, which we'll get into. But I'm wondering if you could give us a picture: if I'm going to run an online platform of some type, maybe I'm likely not going to start and run the next Facebook, but I might very well start and run some type of software company that provides an online platform to do something. What should be on my mind, and what's the reality of the kind of online threats that I might need to be aware of if I'm getting into that space? Yeah, so first of all, one thing to think about is that any time you have a platform with any type of user-generated content, whether users are uploading photos, or chatting, or leaving comments, or anything like that, you're going to have tons of data very, very fast, and it's prime for people to post wonderful things, but also some really, really dark things, which we've all seen and been exposed to.
And so one thing to keep in mind is that trust and safety, basically safety online, is not really a nice-to-have anymore. At this point, it's a competitive advantage; it's kind of a basic expectation, right? So users are expecting it, advertisers are expecting it, parents are expecting it, the public expects it. So if you're going to spin up a platform, first of all, best of luck. And second of all, you need to keep this in mind from the get-go, before you find yourself down this rabbit hole. One thing that I think is really important to keep in mind is that although trust and safety isn't a new industry, it's really only now becoming something that people are aware of. Like I said, it's this basic expectation now, not only from users, but also from regulators and legislators. There's new legislation coming in that's bringing it even more to the forefront, and the basic sort of content moderation that is out there today doesn't really make the cut. To follow up on the second part of your question, about what kinds of harms are out there: online harm is really multi-dimensional. We see it in different media types, in games and merchandise sites, chats, text, video, audio, things like that, across many, many different languages, and also across different types of violations. So you have white supremacists and terrorists and human trafficking, these really painful sorts of things, but it also goes into misinformation, disinformation, fraud, spam, cyberbullying, and so forth. And so it's this really, really complex space that you need a deep understanding of in order to know how to address it. And up until this point, you were talking about content moderation and how it has evolved over time, but is still kind of lacking in the traditional sense. What does that look like? I mean, content moderation, people might have in their mind, oh, I have a blog, right?
A highlight from AI Today Podcast: AI in Project Management, Interview with Ann Campea, host of The Everyday PM Podcast
"The AI Today podcast, produced by Cognilytica, cuts through the hype and noise to identify what is really happening now in the world of artificial intelligence. Learn about emerging AI trends, technologies, and use cases from Cognilytica analysts and guest experts. Hello, and welcome to the AI Today podcast. I'm your host, Kathleen Walch. And I'm your host, Ron Schmelzer. Thank you again for joining us. You know, we've had so much feedback from our listeners on some of our education series. As you may know, we are doing all sorts of things, from our failure series, to our use case series, to a glossary series that we are just getting going here on the key terms of AI, machine learning, and big data. It's a bit surprising: we've been doing the AI Today podcast for 5 years, 300-plus episodes, and still, still, terminology is kind of where we are. And the AI industry is almost 70 years old now, it's getting to be that old. So, you know, that's always important. So I would say, if you aren't subscribed, get subscribed, listen to all of our podcasts, and be part of it. But we are also thrilled to have other folks join us. We've had some great interviews with folks who are not only practitioners in the AI space, but are in other parts of the technology and ecosystem landscape, where they're very important. So, well, we have a great guest with us here today. Right, so we're so excited to have with us Ann Campea, who's the host of The Everyday PM podcast. So welcome, and thanks so much for joining us. Yeah, thank you for having me. I'm so excited to make the connection. We'd like to start by having you introduce yourself to our listeners and
Interview With Daniel Kornev Chief Product Officer at DeepPavlov
"Daniel Kornev, welcome to the Voicebot Podcast. Thanks, Bret, it's a big day for me to be on today. It's my pleasure to have you. This is a long time in the making; we've been chatting on Slack for maybe a year and a half, something like that. Yeah, I think so. I started to read your Voice Insider newsletter and was fascinated by the opportunity to look into your thinking firsthand, so why not? Yeah, that's how it happened. Well, the timing is really perfect, because we're going to talk about a few things today. Obviously, DeepPavlov is a project I've been interested in for at least a year. I don't remember when I first came across it; you might have introduced it to me, or maybe I found out about it shortly before that, but I was definitely interested in that project. And then obviously you've been involved recently with the Alexa Prize socialbot competition, and we've had another conversation about that. So what a perfect time to go a little deeper on that, because it is a different way to build bots, and I'm really looking forward to this conversation today. But I'll let you get started. So why don't you tee it up for the audience first and let them know what DeepPavlov is before we go deep? Sure. DeepPavlov is a lab at the Moscow Institute of Physics and Technology that is focused on conversational AI and neural networks. It was started about five years ago, and it got the Pavlov moniker because Pavlov was the famous Russian scientist who discovered reflexes and all those things that encouraged scientists and researchers to understand how the human brain works, and we still have a lot of things that we have to uncover. That's how the name was formed.
Google Develops AI for Detecting Abnormal Chest X-Rays Using Deep Learning
"On Friday we talked about a Nature publication by Google AI scientists that showed how a deep learning system could detect abnormal chest X-rays with an accuracy rivaling that of professional radiologists. The system only detects whether a chest scan is normal or not; it is not trained to detect specific conditions. The goal here is to increase the productivity and efficiency of radiologists' clinical process. Let's examine some AI X-ray science. First of all, how do X-rays work? X-rays are a type of radiation, an energy wave that can go through relatively thick objects without being absorbed or scattered very much. X-rays have shorter wavelengths than visible light, which makes them invisible to the human eye. For medical applications, a vacuum X-ray tube accelerates electrons to collide with a metal anode, creating X-rays. These rays are then directed toward the intended target, like a broken arm for example, and picked up by digital detectors called image plates on the other side. Different body tissues absorb X-rays differently: the high amount of calcium in bones, for example, makes them especially efficient at X-ray absorption, and thus highly visible on the image detector. Soft tissues like the lungs are slightly lighter but also visible, making X-rays an efficient method to diagnose pneumonia or pleural effusion, which is fluid around the lungs, for example. According to this latest Nature publication, approximately 837 million chest X-rays are obtained yearly worldwide. That is a lot of pictures for radiologists to look at, and it can lead to longer wait times and diagnosis delays. And of course, this is why there's interest in developing AI tools to streamline the process. Many algorithms have already been developed, but they are aimed at detecting specific problems on an X-ray. The Google AI scientists, however, developed a deep learning system capable of sorting chest X-rays into either normal or abnormal, intending to lighten the caseload on radiologists.
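The normal/abnormal framing described above is just binary classification. The following is a minimal toy sketch of that framing, using synthetic 8x8 "scans" and a plain logistic classifier; it is not Google's published deep learning system, and every detail (image size, the bright-patch abnormality, the learning rate) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scans(n, abnormal):
    # Synthetic 8x8 "X-rays": abnormal scans get a bright central patch.
    imgs = rng.normal(0.0, 1.0, size=(n, 8, 8))
    if abnormal:
        imgs[:, 2:5, 2:5] += 3.0
    return imgs.reshape(n, -1)

# 50 normal (label 0) and 50 abnormal (label 1) training scans.
X = np.vstack([make_scans(50, False), make_scans(50, True)])
y = np.array([0] * 50 + [1] * 50)

# Logistic regression trained with plain gradient descent.
w, b = np.zeros(X.shape[1]), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))  # predicted P(abnormal)
    grad = p - y
    w -= 0.1 * (X.T @ grad) / len(y)
    b -= 0.1 * grad.mean()

preds = (1.0 / (1.0 + np.exp(-(X @ w + b))) > 0.5).astype(int)
accuracy = (preds == y).mean()
```

A real triage system would use a deep convolutional network and clinically labeled data, but the output contract is the same: one score per scan, thresholded into normal or abnormal.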
Generating SQL [Database Queries] From Natural Language With Yanshuai Cao
"So tell us a little bit about Turing and the motivation for it. How did the project get started? Right, so this natural language database interface is a demo built by putting a lot of our work on semantic parsing together in an academic demo. From an application perspective, a natural language database interface allows nontechnical users to interact with structured data. There are lots of insights in there, and you want to give nontechnical users a chance to get those insights. And from a research perspective, it's a very challenging natural language problem, because the underlying problem is that you have to parse the English, or whatever natural language, and convert it to SQL. We all know natural language is ambiguous, while machine languages are unambiguous, so you have to resolve all the ambiguity to parse it correctly. Furthermore, what's different compared to other programming languages is that the mapping from utterances to SQL is underspecified if you don't know the schema; it really depends on the structure of the schema. So the model has to really learn how to reason over it in order to resolve all that ambiguity and correctly predict the SQL. And lastly, you don't want the trained model to just work on one domain; you want it to work on databases it has never seen before. So there's the cross-domain, cross-database part of it, and that's very challenging, because it's a completely different distribution once you move to different domains.
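The schema-dependence point above can be made concrete: the same question maps to different SQL under different schemas. This sketch (using Python's built-in sqlite3) is not the Turing system; the schemas, table names, and question are all invented for illustration, and a real text-to-SQL model would predict the query rather than have it hard-coded.

```python
import sqlite3

question = "How many customers are in Canada?"

# Schema A: country stored directly on the customers table.
schema_a = "CREATE TABLE customers (id INTEGER, country TEXT)"
sql_a = "SELECT COUNT(*) FROM customers WHERE country = 'Canada'"

# Schema B: same question now requires a join through a lookup table.
schema_b = """
CREATE TABLE customers (id INTEGER, country_id INTEGER);
CREATE TABLE countries (id INTEGER, name TEXT);
"""
sql_b = (
    "SELECT COUNT(*) FROM customers c "
    "JOIN countries co ON c.country_id = co.id "
    "WHERE co.name = 'Canada'"
)

def run(schema, seed_rows, sql):
    # Build an in-memory database, seed it, and answer the question.
    con = sqlite3.connect(":memory:")
    con.executescript(schema)
    for stmt in seed_rows:
        con.execute(stmt)
    return con.execute(sql).fetchone()[0]

count_a = run(
    schema_a,
    ["INSERT INTO customers VALUES (1, 'Canada')",
     "INSERT INTO customers VALUES (2, 'France')"],
    sql_a,
)
count_b = run(
    schema_b,
    ["INSERT INTO countries VALUES (10, 'Canada')",
     "INSERT INTO customers VALUES (1, 10)"],
    sql_b,
)
```

Both queries answer the same utterance, which is exactly why a parser that ignores the schema cannot generalize to unseen databases.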
Seth Dobrin Talks About Trustworthy AI
"We're going to talk about trustworthy AI. It's something that is increasingly in the news and concerns a lot of people. IBM has a product called FactSheets 360 that I understand is going to be integrated into products. Can you tell us what FactSheets 360 is? And then we'll get into the science behind it. Yes, so let me start by laying out what we see as the critical components of trustworthy AI. At a high level, there are three things: there's AI ethics, there's governed data and AI, and then there's an open and diverse ecosystem. AI ethics is fully aligned with the ethical principles that we've published, with our CEO co-leading the initiative out of the World Economic Forum, and I'm an adviser for essentially open-sourcing our perspective on AI ethics. From a governed data and AI perspective, it falls into five buckets: first is transparency, second is explainability, third is robustness, fourth is privacy, and fifth is fairness. The goal of FactSheets is to span multiple of these components and to provide the level of explainability that is needed to drive adoption and, ultimately, regulatory compliance. You can think of it as a nutritional label for AI: where nutritional labels are designed to help us, as consumers of prepackaged foods, understand their nutritional components, what's healthy for us and what's not, FactSheets is designed to provide a similar capability for AI.
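One way to picture the "nutritional label" idea is as structured metadata that travels with a model. The field names below are invented for this sketch; IBM's AI FactSheets 360 defines its own, richer schema, so treat this only as an illustration of the concept.

```python
from dataclasses import dataclass, field, asdict

@dataclass
class ModelFactSheet:
    # Hypothetical fields; a real factsheet standard defines these.
    name: str
    intended_use: str
    training_data: str
    metrics: dict = field(default_factory=dict)        # e.g. accuracy, AUC
    fairness_checks: list = field(default_factory=list)
    known_limitations: list = field(default_factory=list)

sheet = ModelFactSheet(
    name="triage-model-v1",
    intended_use="Flag records for human review, not automated decisions",
    training_data="De-identified records from partner organizations",
    metrics={"auc": 0.94},
    fairness_checks=["performance parity across age groups"],
    known_limitations=["not validated outside the training population"],
)

# A plain-dict record that governance or compliance tooling could store.
record = asdict(sheet)
```

The point of the analogy is that the label is machine-readable: regulators, auditors, and downstream consumers can inspect the same standardized record instead of reading ad hoc documentation.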
Everyone Will Be Able to Clone Their Voice in the Future
"The world today often feels like it's full of digital voices, with AI assistants Siri, Amazon Alexa, and Google reading your messages, announcing the weather, and answering trivia. 'Here's what I found on the web.' But if you think things are chatty now, just you wait. The voices of these AI assistants used to be based on real recordings: voice actors spent hours talking in a studio, and these clips would be cut up and rearranged to create synthetic speech. But increasingly, these voices are being created using artificial intelligence. This means we can not only create more realistic computer voices, but also clone the voices of real people much more quickly, creating endless artificial speech at the touch of a button. For example, it was surprisingly easy to make a synthetic version of my own voice. In case you missed that, that was not me talking; that was all made digitally by typing into a computer. So why would someone want to do this? Besides the obvious novelty of it, you might have guessed one reason: to make some money. 'Listen to this. What's going on, Kevin Hart here. I want to talk to you about why we have to have mac and cheese every night. Think about it.' That's a promo from Veritone, a company that's working on an AI product to create synthetic voices and make them something the media industry wants to use. 'So we've created a platform, aiWARE, which at the end of the day turns unstructured data into structured data.' That's Shaun King, executive vice president at Veritone. 'So if you're thinking about audio, thinking about video, things that are typically unstructured, we make that searchable and discoverable through a host of different cognitive engines, from transcription to speaker detection and speaker separation. And then we provide those tools to many different industries.'
Interview With Patrick Bangert of Samsung SDS
"So Patrick, I'm glad to be able to have you with us on the program here today. We're going to be talking about AI at the edge, particularly in the world of medical devices, which I know is where a lot of your focus is. We're going to get into some of the unique challenges of leveraging data and AI at the edge in the medical space, but I want to talk first about what kinds of products we're talking about. People think medical devices: okay, well, Medtronic is tracking my blood sugar on the side of my arm, and then I've got a big CAT scan machine kicking around over here. What kind of devices does your work involve, and is edge relevant, from your experience? Thank you for having me on the show; pleasure to be here. We are dealing with medical imaging devices. So if you have a smartwatch on your wrist, that's not what we deal with, even though those are very useful, of course, to measure your exercise and sleep patterns. We're dealing with technologies like ultrasound, MRI, and X-ray, and what's called digital pathology, which is where a biopsy is removed and put on a microscope slide. Those kinds of technologies produce images that are relevant to telling you whether you're sick at all, hopefully not, or if you are, what kind of disease it is. And so the job of computer vision in this case is to detect whether there is a disease, diagnose what it is, find out where it is, find out how big it is, and, if it's cancer, how advanced it is, what stage it is. And all of these outputs can of course be created virtually instantaneously by executing artificial intelligence models at the edge, and the edge in this case is the device itself. Yeah, okay. So some devices are huge: MRI scanners take up a whole room. Some devices are quite small: ultrasound machines you could transport in your suitcase. So there's obviously also a price difference here, but nonetheless, all of these technologies produce an image that is then analyzed.
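The episode describes one on-device pass producing several outputs at once: detection, diagnosis, location, and size. As a shape-of-the-API sketch only, here is a stub where a trivial pixel heuristic stands in for the trained vision model; the function name, thresholds, and output fields are all invented, not Samsung SDS's actual interface.

```python
import numpy as np

def analyze_scan(image: np.ndarray) -> dict:
    # Stand-in for a model forward pass: mean intensity as an
    # "abnormality score". A real edge deployment runs a trained
    # network here, on the device itself.
    score = float(image.mean())
    abnormal = score > 0.5
    peak = int(np.argmax(image))  # brightest pixel as the "finding"
    return {
        "abnormal": abnormal,
        "diagnosis": "lesion" if abnormal else None,
        "location": (peak // image.shape[1], peak % image.shape[1]),
        "size_px": int((image > 0.5).sum()),
    }

# A tiny synthetic "scan" with one bright pixel at row 1, column 2.
scan = np.zeros((4, 4))
scan[1, 2] = 16.0
result = analyze_scan(scan)
```

The design point is that everything is returned from a single local call: no image leaves the device, which is what makes the near-instantaneous turnaround (and the privacy story) possible.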
Social Commonsense Reasoning With Yejin Choi
"All right everyone, I am on the line with Yejin Choi. Yejin is a professor at the University of Washington. Yejin, welcome to the podcast. Excited to be here, thanks for having me. I'm really looking forward to digging into our conversation. I'd love to have you start by sharing a little bit about your background and how you came to work in the field of AI. Right, so I primarily work in the area of natural language processing, but like any other field of AI now, the boundaries have become looser, and I'm excited to work on the boundaries between language and vision, language and perception, and also to think a lot about the connection between AI and human intelligence, and what the fundamental differences are in terms of knowledge and reasoning. So let's go a little bit deeper into that. Talk us through some of the ways that you take on those topics in your research portfolio. What are some of the main projects you're working on, the things that you're exploring? Right, so currently I'm most excited about the notion of commonsense knowledge and reasoning. This was in fact the original dream of the field; back in the seventies and eighties, people loved to think about it and tried to develop formalisms for it. It turns out it's really trivial for humans but really difficult, even for the smartest people, to work out how to define it formally so that machines can execute it as a program. So for a long time, scientists assumed that it was a doomed direction because it's just too hard, and people didn't really think about commonsense for a long time. Then, only in recent years, some of us got excited to think about it again, which is in part powered by the recent advancements in neural models that are able to learn from large amounts of data.