Artificial Intelligence

Listen to the latest news, expert opinions and analyses on the ever-expanding world of artificial intelligence, data science and machine learning, broadcast on leading talk radio shows and premium podcasts.

A highlight from AI Today Podcast: AI Glossary Series: Machine Learning, Algorithm, Model

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

04:13 min | 2 d ago


"On today's podcast, we're going to continue with our glossary series, where we're helping people understand terms that they don't know, or terms that they think they know and maybe don't know how to use correctly. And it's kind of interesting, because even after all these years of doing AI Today, over five years and 300-plus podcasts, we still see people either not understanding terms or misusing terms, even basic terms. And that's going to be especially the case in today's podcast, where the terms are ones you may think you know, but the funny thing is we see people misusing them all the time, people who probably should know better. So we're going to get into that. But I just want to mention, again, if this is your first time with the AI Today podcast, know that we do have this fantastic glossary series, which is based on this big glossary of AI, machine learning, and big data terms that we have on Cognilytica, but also that we have other podcasts as well. So you can hear from folks who have been doing AI for years, from other interviewees, people who are influential in AI today, as well as a lot of educational podcasts: our glossary series, our failure series, our use case series, all sorts of stuff. So definitely, if you haven't yet, subscribe and you can get all of that. And we're already a good way through our glossary series, so you can start from the beginning and see some of the other definitions. Exactly. And as Ron mentioned, we have a very comprehensive AI glossary that we have put together, and I'll make sure to link to that in the show notes. It goes over, at a high level, key AI, machine learning, and big data terms. 
And so since we put it together, we said we should share this with our podcast listeners, because I'm sure there's a lot to learn in the glossary series, and also maybe there are some terms that you yourself are not familiar with or confused by, and we wanted to really break that down into language everyone can understand. So on today's episode, we'll be going over the terms machine learning, algorithm, and model. You might think, oh my God, this is basic. How is it that we are 300 episodes in and defining machine learning? We should all know what that is. The interesting thing is, maybe some of you do know what it means and some of you don't. There actually is a technical definition for machine learning. The interesting thing is that artificial intelligence, as we said in one of our first episodes in the glossary series, does not have a well-defined definition, mainly because we don't have a good definition of intelligence. So artificial intelligence is generally a collection of different ideas and definitions; there's no established one. But machine learning does have an established definition, and there are two parts: there's the description, and then there's the more computer-sciencey version. The description is: machine learning is the ability for a machine to learn from data, improve with experience over time, and apply that learning to new data that the system has not seen before to provide predictions. In a previous glossary podcast we talked about what a prediction is, and we talked about what these other ideas are. But specifically, machine learning, and this is the technical definition, is: a computer program is said to learn from experience. And of course, every technical definition has variables. 
And so it is said to learn from experience E with respect to some class of tasks T and performance measure P, if its performance at the tasks in T, as measured by P, improves with experience E. Now, there's a lot of E and T and P, so what is that? Basically what we're saying is very simple: we're teaching a machine to learn a very specific set of tasks. That's the important part. It's not learn everything, it's not learn anything, it's not learn generally. It's learn a very specific set of tasks T, where someone has defined the task. Maybe it's classifying something, maybe it's clustering something. Also, we are measuring how well it's performing. This is the generalization performance that we talked about in another glossary podcast. We're asking how well it is actually learning; that's the measure P. And we're seeing that its performance has to be improving with experience E. So that means, in this case, we want the number of errors, the generalization errors, to be going down. We want it to be improving. That's how we know it's learning.
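The E/T/P definition discussed above (due to Tom Mitchell) can be made concrete with a toy sketch. In the illustrative code below, which is invented for this purpose and not from the episode, the task T is classifying one-dimensional points into two groups, the performance measure P is accuracy on held-out data, and the experience E is the number of training examples given to a simple nearest-centroid learner:

```python
import random

random.seed(0)

def sample(n):
    """Two 1-D classes: class 0 centered at 0.0, class 1 centered at 3.0."""
    return [(random.gauss(0.0, 1.0), 0) for _ in range(n)] + \
           [(random.gauss(3.0, 1.0), 1) for _ in range(n)]

def train(examples):
    """Nearest-centroid 'learner': just one mean per class."""
    means = {}
    for label in (0, 1):
        vals = [x for x, y in examples if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def accuracy(means, examples):
    """The performance measure P: fraction classified correctly."""
    correct = sum(1 for x, y in examples
                  if min(means, key=lambda c: abs(x - means[c])) == y)
    return correct / len(examples)

test_set = sample(500)          # the task T: classify points never seen in training
for n in (1, 10, 100):          # growing experience E
    acc = accuracy(train(sample(n)), test_set)
    print(f"E = {2*n:3d} examples -> P = {acc:.3f}")
```

With more experience E, the centroid estimates stabilize and the measured performance P on unseen data tends to rise toward its ceiling, which is exactly the "improves with experience" clause of the definition.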

A highlight from Future AI Products Might Be Habit-Forming, In a Good Way - with Nir Eyal [AI Futures / Human Reward Systems - Episode 2 of 5]

AI in Business

05:09 min | 3 d ago


"You can see our full article there, including multiple infographics around some of our research and perspective on where these virtual technologies might take us, and on the pitfalls and opportunities of immersive generative AI experiences. We're grateful to have Nir's perspective in that article at emerj.com. Without further ado, let's dive into this episode with Nir as to why sticky future digital products might even make us happier and more productive. Some interesting perspectives, and it's all worth unpacking, so let's dive right in. So Nir, welcome to the program. Thanks so much, Dan, great to be here. I've been following your work for quite some time, and you've clearly been following the cutting-edge tech space for quite some time, in all of its fascinating permutations and addictive qualities over the years. When you look forward five to ten years, even since the time you wrote your first book, TikTok has emerged, generative AI has started to emerge, VR is coming onto the road map. When you think about how the human experience is going to be different in a technologically immersed way, what do you think is compelling and likely in that? So I really look at how we spend our time and attention. Both my books are about that: Hooked was about how to build habit-forming products to help people build healthy habits in their lives, and Indistractable, my second book, was about how to break unhealthy habits. Now, they're not a negation, right? Some people look at it and say, oh, it's Hooked and Unhooked. And no, no, no, no. I think we can have our cake and eat it too, that we can actually learn how to use technology to improve people's lives through healthy habits facilitated through technology. But we as consumers of this technology also need to make sure that we use the technology and the technology doesn't use us. And as far as I can see, it's going to be largely up to the user. 
I think there are some situations where companies have special responsibilities and governments need to step in, but by and large, those are limited. I would carve out two situations: children, who I think need special protection, and people who have a pathological addiction, who also need special protections. But I think if you're an adult, it's going to be up to you; you're going to have to learn how to control your attention. There's going to be a real bifurcation, I think, between people who let their time and attention, their lives, be controlled by others, and people who say, no, I decide how I will control my time and attention because I am indistractable. Because the world is becoming an increasingly distracting place. If you think the world is distracting now, just wait; it's only going to become more potentially distracting. But that's not necessarily a bad thing. We hear a lot of tech critics talking about how social media is melting our brains and there are too many frivolous video games or whatever, and I kind of take issue with that line of thinking, because the fact that we have so many amazing ways to spend our time is not necessarily a problem. That's progress. Are we really going to complain to Netflix and say, hey, stop making so many interesting shows, I want to watch them all? Or, Apple, stop making your devices so user-friendly, I find that I want to use them? I mean, that's ridiculous. The point of these products is to be engaging. That's why we use them. And it's such a luxury that we live, for the first time in history, with so much leisure time. You know, the average American still spends five hours a day watching television. Wow. Television. According to Nielsen. Watching the boob tube, right? Watching more Fox News, or whatever else you might be watching. Why is that somehow morally superior to going on social media or playing video games? It's no different. 
Anything you want to do with your time and attention is fine, as long as you're the one deciding that you want to do it. So I really want to empower people. I think the name of the game here is agency: how can we make sure that we use these tools mindfully? Got it. So I'm going to try to loop back to the original question here: where's the future taking us? Maybe I'll get a little bit more specific with it. I'm certainly not going to argue with you on the personal responsibility front. I think we're going to have people from probably every possible political permutation, and I tend to lean in your direction. I tend to think at the end of the day there's going to be a jungle of tech, and you ought to buckle up and make the world what you want to make it for yourself. That tends to be my philosophy. And for your kids. Totally. I should emphasize, make sure we teach our kids how to do this, because it's going to be even more important for their generation. Man, you're telling me; kids are this far from an iPad from when they're six months old all the way up until they're four. It's going to be a wild world for them. So let me dive into some particulars, and we'll start to untangle some of the threads you've started to move around. One of them is, you mentioned that people think the world is full of distractions now, and on some level, to your point, it's part of progress. If you and I were smelting iron and cutting wheat all day, we wouldn't exactly have the time to be distracted the way many people do today. Yeah, exactly, exactly. Oh my gosh, I'm so distracted. A hundred percent. It is a pinnacle signal that we are no longer really dealing with those core issues of survival.

A highlight from AI Today Podcast: AI Glossary: DeepMind, AlphaGo, and AlphaZero

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

05:08 min | 4 d ago


"AI methodology, which is helping companies and organizations really be successful with their AI projects. And we talk a little bit about it there, as well as different aspects of use cases and making AI work. But what we've really been focusing on lately is giving people an understanding of some of the key terms, because even after five years, we still run into folks who are not familiar with, or have a misunderstanding of, what we feel are necessary terms that really should be common understanding. Some of those terms relate to the technical aspects of AI, some to the more conceptual aspects of AI or some of the applications of AI, but sometimes we even talk about implementations of AI that are noteworthy, so that when someone refers to one, you can say, okay, I know what you're talking about. And we're going to do a little bit of that on today's glossary podcast. Exactly. So as Ron mentioned, we have put together a very comprehensive AI glossary, and I'll make sure to link to it in the show notes so that you can check out all of the terms that we have defined. But we wanted to spend some time sharing it with our AI Today podcast listeners as well, since some of you have reached out, and despite the many years that we've been doing this podcast, some terms can still seem a little confusing, or you might not be a hundred percent sure how they connect with each other. And sometimes there are just definitions out there that are confusing for no good reason. So we wanted to really simplify it and make sure that our listeners understand all of these basic concepts and terms related to AI, machine learning, and big data. So in today's AI Today podcast, we are going to be defining, at a high level, the terms DeepMind, AlphaGo, and AlphaZero. Yeah, and the reason why we talk about these particular implementations of AI, and that's what these are. 
These are, in one case, an organization that has a mission to really accomplish something in AI, and they put together some implementations. And we understand: as part of our research business, we track over 20,000 companies in the AI space. So we understand that there are tons of companies doing lots of implementations of AI. And trust me, we hear from you guys all the time; we don't need to hear from more of you. So we are aware of it. But there are some notable applications and implementations of AI that have really shifted the way the industry has thought, or validated some of it. And one of the big ones is this organization called DeepMind, which is a British AI research company that was acquired by Google in 2014. So they had existed before that. Their purpose is to achieve so-called strong AI, which is artificial general intelligence: trying to build an AI machine learning system that can really handle a very large range of tasks. Eventually, anything that our brain can do, they want this intelligence to do too, from understanding the world, to speaking, to navigating, to thinking about things, to playing games. Your brain is a pretty remarkable thing. It doesn't take up a lot of space, it doesn't have a lot of big data, there's no Internet connectivity. So there's something about the brain, right? And so DeepMind is really, really focused on that. And they started by looking at some of the games that people have been playing that were traditionally very difficult for the older style, the previous incarnations, of AI. And we'll talk about the implementations and kind of where things are going. But just in general, the idea is that if you think about a game: to be able to understand a game, you need to look at it, understand the pieces, the board; so there's a visual component. There's an understanding-where-things-are component. 
And then you have to have a strategy, and that strategy needs to be not just for that move or the next move; you need to think about winning the game. You need to respond to the other person's moves. You even need to be able to have a little bit of creativity and think outside the box, because maybe the other player is already expecting certain moves, and you can't just do the things that people are expecting. So there is something to be said about game playing and intelligence. Exactly. And as Ron mentioned, some games are fairly simple. Back in the early days of artificial intelligence, we were able to build systems that could play checkers, for example. But as we move up to some of these more complex human games, it becomes quite difficult. So DeepMind created AlphaGo. And in case you're not familiar with AlphaGo, it's an AI application that was created to play the game Go, which is considered a very complicated game for humans. AlphaGo was based on advanced concepts of reinforcement learning and other aspects of deep learning. Notably, in 2016, AlphaGo beat the best human player in the world, Lee Sedol, at Go. So this was a big deal. This made a lot of news back in 2016, and people said, oh my goodness, I can't believe that this AI application beat the best Go player in the world. Yeah, and I want to clarify.
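AlphaGo itself combines deep neural networks with Monte Carlo tree search, which is far beyond a short example, but the core reinforcement-learning idea the discussion above rests on, improving a strategy from reward earned through experience, can be sketched with tabular Q-learning on a made-up toy game (a five-square board where reaching the rightmost square pays a reward; every detail here is invented for illustration and is not DeepMind's method):

```python
import random

random.seed(1)

N_STATES = 5            # squares 0..4; reaching square 4 pays a reward of 1
ACTIONS = (+1, -1)      # move right or left
q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

alpha, gamma, epsilon = 0.5, 0.9, 0.2   # learning rate, discount, exploration

for _ in range(500):                    # episodes of experience
    s = 0
    while s != N_STATES - 1:
        # epsilon-greedy: mostly exploit the current strategy, sometimes explore
        if random.random() < epsilon:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: q[(s, act)])
        s2 = min(max(s + a, 0), N_STATES - 1)
        reward = 1.0 if s2 == N_STATES - 1 else 0.0
        # Q-learning update: nudge the value toward reward + discounted future value
        best_next = max(q[(s2, act)] for act in ACTIONS)
        q[(s, a)] += alpha * (reward + gamma * best_next - q[(s, a)])
        s = s2

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES - 1)}
print(policy)   # the learned policy should prefer moving right from every square
```

The learner is never told the rules of winning; it discovers that moving right is the better strategy purely from the reward signal, which is the same principle, scaled up enormously, behind training a system to win at Go.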

A highlight from 3D assets & simulation at NVIDIA

Practical AI: Machine Learning & Data Science

05:46 min | 5 d ago


"Scientist with SIL International, and I'm joined as always by my co-host, Chris Benson, who is a tech strategist with Lockheed Martin. How are you doing, Chris? Doing good, Daniel, how are you today? I'm doing great. I had a breakfast conversation on Monday this week with a company from the UK doing autonomous drones, and I felt very prepared for that, because you've talked to me so many times about aeronautics and drones and all that. So thanks for your prep. No problem, happy to do it, you know? Yeah, it was a good breakfast. Just think of the universe of possibilities out there, you know? So many things. Exactly. Yeah, well, speaking of the universe, or I guess rather the omniverse, or even better, the metaverse, or whatever you want to think of, we're going to get into all the verses today. We're going to be well versed in those verses. Yes, we're going to be well versed. Good stuff. We've got with us Beau Pershall, who is the director of Omniverse sim data ops at NVIDIA, which I have to say is a really exciting title, one of the better ones we've had on the show. So welcome, Beau. Thank you very much, pleased to be here. Yeah, I imagine that my title doesn't make a whole lot of sense to just about anybody. It's a lot of words. I bet it'll make more sense after this conversation. Hopefully so. I was going to say, you have a whole episode to explain it to us, so we're good. Fair enough. You know, spinning off of how Chris and I were starting: it would be awesome to hear what omniverse means, and also maybe a little bit about your background and how you came to be working on Omniverse, this intersection of, as I understand it, 3D, AI, and simulation. What was that journey like, and how can we understand generally what Omniverse is? Sure. So Omniverse is NVIDIA software. It is our computing platform for building and operating metaverse applications. 
And again, it's not necessarily so theoretical. These are industrial metaverses: whether you're designing and manufacturing goods, or simulating your factory of the future, or building a digital twin of the planet, which NVIDIA is doing to accelerate climate research, Omniverse is a development platform to help with that kind of simulation work. And it's doing it in 3D. Yeah, so it's not just those people without the legs kind of hopping around in a place. No, this is very practical, as a matter of fact. We have big and small customers using it, with over 200,000 downloads of Omniverse, a platform that you can get from the NVIDIA site. You've got companies like BMW that are using it to plan their factory of the future, and part of that is worker safety. So the avatars have to have legs: otherwise you can't simulate the ergonomics of whether a repetitive task is going to hurt somebody, or whether they're in danger of getting hit by something in a work cell or on the assembly line. So there's all sorts of simulation around that kind of information as part of Omniverse. But it's a really broad platform. It's designed to be extendable, so that customers can come in and write their own tools and connectors. It's not supposed to be just its own endpoint. In other words, we have connectors, which are basically bridges to other applications, whether you're coming from the manufacturing side, like Siemens, or from architectural software like Revit, or from animation software like Blender or Houdini or Maya, or Unreal for that matter. All of that data can be aggregated through USD. Universal Scene Description is the file format that Omniverse is based upon, which was a Pixar open file format. It is very robust. And basically, we figure we're kind of the connective glue between all of these platforms, so that simulations can be run inside of Omniverse, but all the data can move in and out. 
It's not captive data. Hopefully that gives you a little bit of background on Omniverse in and of itself. It is a visual platform. That sounds fascinating. And as you know from our pre-chat, I knew a little bit about Omniverse before coming into the conversation. But I know that there is a lot of confusion about how this fits in with all the others; we were joking in the beginning about the various verses that people are hearing. There's a lot of lingo out there. As recently as yesterday, a friend of mine named Kevin texted me, and I haven't replied to him yet, but I will have by the time this is aired. He texted me saying, I don't understand this verse thing, and I know that you're involved in this, can you explain it? And I think Kevin represents a lot of people in that way. So could you... we've heard multiverse, we've heard metaverse, and we've now definitely heard omniverse. Can you give us some context on how this whole industry fits together, so that as we dive back into Omniverse in just a moment, we have a sense of where it fits in with some of the other companies?

A highlight from Detecting DeepFake Videos - with Ilke Demir of Intel

AI in Business

07:49 min | 5 d ago


"Ilke Demir, designer of FakeCatcher and senior staff research scientist at Intel. FakeCatcher is an AI-powered deepfake video detection tool developed by the company. Ilke joins us to explain the science behind the new technology and its many applications, both in the hands of individual journalists and as leveraged by entire media enterprises. Without further ado, here's our conversation. Thank you so much, Ilke, for joining us on today's show. Thank you, Matthew, for hosting me. So to start off: those who are familiar with AI technologies, particularly machine learning use cases like the fraud detection we see across industries, might have some idea of how this verification process can take milliseconds. But for those who know nothing about that, who look at this headline and say, you know, why does it take days to fact-check a political debate, say, but milliseconds to determine whether a deepfake video is real? Why does that make sense from a technological standpoint, especially with these new developments in deepfake identification technology? So I think these sound like similar problems, but fact-checking versus checking whether something is a deepfake are completely different problems. Deepfakes are visual or audio signals that are manipulated, that are edited, and we can actually find out by signal processing or by other means whether they are real or not. Fact-checking is a little more complex a process: you need to check whether the context is correct, whether the motivation is correct, who says it and why, et cetera. So that contextual information makes fact-checking a more complex problem. But for deepfakes, we can actually just do some visual analysis of the videos and then understand whether they are real or fake. 
Indeed. One of the metrics that you use is, and I practiced saying this before we got started, photoplethysmography, or PPG for short, and we're going to keep calling it PPG before I have another chance to butcher that name. Essentially, that's taking visual cues to see the very natural flows of blood in the face, or whatever is being depicted on screen. Tell us how that visual data is being collected, what other factors are being used, and how they work with AI capabilities to verify that what's in front of the camera is a real human being. Right. As you just mentioned, I think the hardest part of the project is saying "photoplethysmography." Yes, very hard. So yes, those PPG signals are what we use, essentially, for catching deepfakes. Normally, deepfake detectors are trying to find artifacts of fakery: what is fake in the fake videos, right? They look at boundary artifacts, symmetric artifacts, histogram artifacts, et cetera. We flipped that question, and we asked: what makes us human? What priors in humans can we trust to tell us what is real? And heart rate is one of them; heart rate is the most natural answer, maybe. When your heart pumps blood, it goes to your veins, and the oxygen content of the veins changes. That oxygen content change actually causes color changes on your face, or wherever you have veins; they are changing color. That color change is of course not visible to our eyes; we cannot see it. But computationally it is visible. So in videos of real people, you can see that the signal is periodic; it has a structure. It's like the heart-rate waveform we see on monitors in hospitals. PPG signals are like that. 
For real humans, that is. For fake ones, the signals are all over the place: your heart rate is 75, and then 120, and then 16, and you cannot have that heart rate. It's not periodic, it doesn't have uniformity. So those signals are everywhere. So we looked at these signals and conducted a very simple experiment. Given many pairs of real and fake videos, where the real version was used as the source for the fake one, can we use these PPG signals to detect which one is fake and which one is real? And with no deep learning, with no AI, just by simple processing of the signals, we found that we can answer that question correctly with 99.39% accuracy, which is very high; it's over 99%. So that shows us that PPG signals are very powerful in showing us what is real and what is fake. Then, of course, came the natural question: can we generalize this? Given any video, can we use these PPG signals to detect whether it is real or fake? In that case, the problem is a little bit harder, because we want to process any video. So we employed a deep learning approach, using these PPG maps for classification into real and fake videos. And to fortify our PPG signals, we use not only the temporal version of them but also the spectral version, where the temporal version is the waveform that you see, and the spectral version is the frequencies represented by those PPG signals. So with the spatial, temporal, and spectral versions of the PPG signals, we train a neural network, and it says whether the video is real or fake. Very, very interesting stuff, especially seeing where the crossover is between depending on this PPG variable versus integrating other capabilities to reinforce it. On that note, I'm wondering how much, in terms of data collection, it relies on the assumption that the latest camera and video technology is involved. 
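To make the temporal-versus-spectral distinction above concrete, here is a toy sketch, emphatically not Intel's FakeCatcher pipeline: the frame data, the skin-region averaging, and the naive DFT are all invented for illustration. It shows how a periodic PPG-like signal could be pulled out of per-frame skin-tone averages and turned into an estimated heart rate:

```python
import math

FPS = 30          # assumed video frame rate
N = 256           # number of frames analyzed (about 8.5 seconds)

# Temporal version: a synthetic per-frame value standing in for the mean
# green-channel intensity of a skin region, pulsing at 1.25 Hz (75 bpm).
signal = [120.0 + 0.5 * math.sin(2 * math.pi * 1.25 * t / FPS)
          for t in range(N)]

def dominant_frequency_hz(x, fps):
    """Spectral version: naive DFT, returning the strongest non-DC frequency."""
    n = len(x)
    mean = sum(x) / n
    centered = [v - mean for v in x]          # remove the constant skin tone
    best_k, best_power = 1, 0.0
    for k in range(1, n // 2):                # skip the DC bin
        re = sum(centered[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(centered[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        power = re * re + im * im
        if power > best_power:
            best_k, best_power = k, power
    return best_k * fps / n

bpm = dominant_frequency_hz(signal, FPS) * 60
print(f"estimated heart rate: {bpm:.0f} bpm")   # close to 75 for this periodic signal
```

A real human's signal concentrates its power in one clean, physiologically plausible peak like this; a deepfake's erratic skin-tone signal spreads its power across many frequencies, which is the intuition behind adding the spectral view on top of the temporal one.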
And understand, when I ask this, I'm kind of trying to look into the future to see how hackers might try to game the system, because everything in fraud detection, whether it's video data or fraud detection in banking where AI capabilities are looking at transactions, is an arms race: the police side of the table develops a technology, and then the criminal side of the table tries to figure out how to game it. Knowing this is a new technology, kind of the latest and greatest tool for the authority side of the table, the first way I would think to game it, from seeing it in action, might be to use a lower-grade camera. It might make sense that, say, someone depicted in the video, let's say Barack Obama, was filmed analog, or the footage was very grainy. How much does video quality factor into being able to read PPG and the other variables that you're depending on to get that 99% accuracy?

A highlight from Gil Perry CEO of D-ID on Lifelike Digital People, Generative AI, and the Rise of Synthetic Media - Voicebot Podcast Ep 296

The Voicebot Podcast

03:28 min | 6 d ago


"This is episode 296 of The Voicebot Podcast. My guest today is Gil Perry, CEO and cofounder of D-ID. We talk virtual humans, generative AI, and how you create full-motion digital people from photographs, video, and other digital files. Welcome back, Voicebot Nation, and all of our friends from the Synthedia community. Every week since 2017, we have brought you an innovator, engineer, designer, entrepreneur, or other industry leader who is shaping the future of conversational AI and synthetic media. More recently we've delved into a lot more depth around generative AI, and we have more of that on tap for you today. D-ID originally stood for de-identification and was focused on protecting people from the inappropriate use of facial recognition technologies. The founders already had experience with generative adversarial networks, known as GANs, and how those AI models could be used to generate images. That background, combined with the work on facial recognition, turned out to be a recipe for some very specific insights about how to create and animate new images of people using AI models. Now, my guest is cofounder and CEO of D-ID, Gil Perry. We talk about how the company's history logically evolved into tools for creating talking digital people. The company is really well known for powering MyHeritage's Deep Nostalgia product. That was kind of a viral hit: D-ID helped MyHeritage animate photographs for consumers, so you could upload a picture of a family member, or maybe an ancestor, and it would actually make their face move. It was really an innovative solution, and they followed that up with some other products in that space; over 100 million photographs have been transformed that way using D-ID technology. Now, the technology behind D-ID was also instrumental in helping Jean-Baptiste Martinoli win two film festival awards for his AI-generated short film. 
Some of you may have heard my interview with Jean-Baptiste at the Synthedia event this past fall; we also had him on the podcast not too long ago. More recently, D-ID has introduced Creative Reality Studio. Creative Reality Studio is designed so anyone can upload someone's picture, add some text, and create a quick scripted video with an avatar based on the likeness in the photo. In December, D-ID added the ability to create the script using a prompt to GPT-3 and to upload images created by Stable Diffusion. So this is a great example of how synthetic media is often enhanced by layering several generative AI solutions together, and that's why synthetic media and generative AI are very similar; generative AI is really just a subset of what we define as synthetic media here at Voicebot. Now, these new use cases are also why these markets are the hottest stories in tech right now. If you'd like to keep track of the latest news, technology, and market data around synthetic media and generative AI, you should subscribe to Voicebot's Synthedia newsletter. The stories often go deeper than our coverage on voicebot.ai, and it is delivered to your email inbox daily. What's better than that? It's become really popular, and we've been among the first to spotlight innovative companies like D-ID. It's also completely free, so you should definitely sign up. Just go to voicebot.ai and click Synthedia in the nav bar; that'll take you over there where you can sign up, or you can go to synthedia.substack.com.

Gil Perry Jean-Baptiste Martinoli D-ID GANs Synthedia
A highlight from Causal Affective Triggers

Data Skeptic

07:36 min | 6 d ago

A highlight from Causal Affective Triggers

"I mean, seeing the results, that's interesting, but take a few more. They are always short and somewhat informative, and I'm going to share a few insights with you here. The two newest surveys I've launched, which the fewest of you have taken just given their age, are the programming survey and the cars survey. Let's start with cars, because I find it a little bit more boring, mainly because I probably didn't build the most interesting survey in the world, but Linhda and I have been having some car trouble and we're shortly going to be in the market for something new. Most of you have had your vehicle for, on average, 5.6 years, the maximum being 12 years. And I'll go ahead and de-anonymize myself: that was me. I have made the 2008 Prius work all this time. I guess there are no classic car owners in the listening audience. In fact, yeah, somebody beat me on year of manufacture; we go from 2004 to 2023 in the years, the average being 2014. Mostly Toyotas, with a pretty good selection within that, and when I asked some details about when you use the vehicle, overwhelmingly people use their vehicles on weekdays and weekends. Unlike me; I use it about one to two days per week. Now, if you're looking at the results like I am at survey.dataskeptic.com after having taken this survey, you will see my embarrassment at the terrible way of displaying this integer response I've built so far. We've shown the mean, min, and max, but in this question where I asked how many miles do you drive per year, put minus one if you have no idea (because I honestly have no idea, and I can imagine most of you don't either, maybe unless you're doing the tax deduction), of course minus one is the minimum, and who knows how that destroys the mean. I've got to get in there and recode this. I think instead, you know, integers should really show us the distribution, so maybe some sort of histogram or something like that. Hopefully if you're listening to this in the future, I've already fixed it. 
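The sentinel problem described above (a -1 "no idea" answer dragging down the mean of an integer field) and the histogram fix can be sketched in a few lines of Python. The numbers here are invented for illustration; this is just the recoding idea, not the show's actual survey code.

```python
# Sketch: why a -1 "no idea" sentinel wrecks the mean, and how recoding fixes it.
# The mileage numbers below are made up for illustration.

miles_per_year = [12000, 8000, -1, 15000, -1, 10000, 5000]

# Naive summary: the -1 sentinels drag the mean down and make the min meaningless.
naive_mean = sum(miles_per_year) / len(miles_per_year)

# Recoded summary: treat -1 as "missing" and drop it before aggregating.
valid = [m for m in miles_per_year if m >= 0]
mean = sum(valid) / len(valid)

# A crude histogram shows the distribution instead of a single summary number.
buckets = {}
for m in valid:
    bucket = (m // 5000) * 5000          # 5,000-mile-wide bins
    buckets[bucket] = buckets.get(bucket, 0) + 1

print(naive_mean, mean, sorted(buckets.items()))
```

Dropping the sentinel before aggregating is the standard move; the histogram then makes the shape of the responses visible, which is exactly what a mean/min/max table hides.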
Then I asked how many miles per gallon do you achieve? To which someone, I don't know if they're fooling with me or trying to prove a point, put 1 million. That's the max. Actually, that's probably someone who has a fully electric car. And what the heck do you put? Because I know I don't let you type infinity in that blank. So that's a survey design issue I need to think more about. Overall, most people are satisfied or very satisfied with their vehicle, and somewhat or very likely to purchase the same make and model in the future, as the most popular answers there. Although that's a fickle question too: you could like your car but not want to buy it again because there's a newer version of it, a newer replacement line, if you will. So my question doesn't account for that, or at least we can't get insight about that the way I've worded it. All right, let's move on to the programming survey. You were almost three to one Python over R users, with "other" being the third most likely category; that surprises me a little bit. But Python and R at number one and number two, I could have guessed that. How many hours per week do you spend programming? Mean average 20.9. How many years have you been programming? Mean average of 15. Although despite that, for the most popular answer to "which best describes you: expert, intermediate, or novice programmer," intermediate is the largest choice. That seems a bit humble if the average years programming is 15. I asked, do you use serverless design patterns, which I'm a big fan of, but they don't work in all situations; in fact, often for someone doing machine learning, they're not exactly the right choice. Yet despite that, the answers were always, sometimes, or never: sometimes most popular, never close behind, and always trailing in third. Lastly, I asked where do you write your code, for example Visual Studio, IntelliJ, vim. I see a lot of VS Code and Jupyter notebooks, no surprises there, some root, some RStudio, PyCharm. 
I made this free text because I didn't want to restrict anybody, but now I can't make a nice bar chart, because it's whatever you guys typed in, unless I manually code these, or maybe I should use BERT to code these. Let me put some thought into that and get back to you. In the meantime, let's move on to our interview for today. How many of you have participated in a hackathon of some form? There weren't a lot of these when I was an undergrad, but I think it's becoming more and more popular. I've served as a judge at DataFest a few times over at UCLA, and the use case we discuss today is a survey that went out to some hackathon participants (not DataFest, but other hackathons), but the hackathon is really secondary. This is about an experiment around response rates. Ideally everyone would respond to your survey, but we know that won't happen. So in what case can we entice them? Offering them cash is not really an option if there's no budget for that; it also creates some perverse incentives, although that's a topic for another episode. Now, today we investigate: can something about the email alone make people more likely to respond? And if so, could those responses be biased in some way? We'll get to the bottom of both questions and a few other good ones, right after this. I am Alexander Nolte. I'm an associate professor at the University of Tartu in Estonia, and an adjunct associate professor at Carnegie Mellon University in Pittsburgh in the U.S. Could you tell us a little bit about the types of research you work on? Most of the work that I'm doing is really around collaboration in very different forms. So I work with companies to study work teams; I work with people that run hackathons and similar sorts of event-based collaborations. So anything where people work together, that basically gets me interested and excited. And what sorts of methodologies do you use? 
I mean, graphs come to mind because they describe social networks, but I don't want to presume that's the right data structure. What kind of analytical tools do you apply? Actually, I'm more from a sort of qualitative tradition, so a lot of the things that I do focus on qualitative research. And that includes interviews, observations, surveys, stuff like that. But then in addition, of course, as you said, if we have data traces available, that's always great, so we can sort of trace what's going on during collaboration, or what happens afterwards even. And we've done a lot of work around that as well. But in general, I would say I'm more of a qualitative guy than I am a quantitative guy. You mentioned the use of surveys. Could you expand on how they can be a good measurement tool in your work? First of all, I should say that most of the work that I do is really mixed-method stuff. So it's very rare that I just employ one survey or that I just employ interviews; very often these instruments are combined with each other. That being said, for me, surveys oftentimes help to, number one, get a grasp of a bigger population. Because let's say I want to study a hackathon: what happens often is you want to study the teams that participate, because it's inevitably a team-based event, and you can't really observe a ton of different teams at the same time, just because of manpower limitations, basically. And also, interviews afterwards you can only run with a sort of limited amount of people. So if you want a better grasp of, I don't know, let's say, what were the overall motivations for people to participate in an event? Then you can do that with a survey. That's how I really employ these kinds of instruments. Well, there's a couple of metrics to look at if you're going to share a survey like that. 
I guess, how do you get in touch with the people who participated in the hackathon, and what are some good benchmarks for measuring whether you're getting in touch with them? First of all, I should differentiate between in-person and online events.

Linhda Alexander Nolte BERT University of Tartu UCLA Carnegie Mellon University Estonia Pittsburgh U.S.
A highlight from AI Today Podcast: AI Glossary Series  Prediction, Inference, and Generalization

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

00:46 sec | Last week

A highlight from AI Today Podcast: AI Glossary Series Prediction, Inference, and Generalization

"CPM AI, the Cognitive Project Management for AI methodology. So for our listeners, we have come up with a free intro to CPM AI course so that you can take it for yourself and see what it's all about. You can go to AI today dot live slash CPM AI to sign up for the free intro course. And if you're interested in actually getting CPM AI certified, then you can go to Cognilytica dot com slash CPM AI, and you can learn more in depth about what CPM AI is on that page and also sign up directly for the certification there. I know that many of our podcast listeners are CPM AI certified. So we'd love to have additional ones certified, and you can join the quickly growing list of thousands of CPM AI certified folks from around the globe.

A highlight from BI 159 Chris Summerfield: Natural General Intelligence

Brain Inspired

05:52 min | Last week

A highlight from BI 159 Chris Summerfield: Natural General Intelligence

"An anomaly arises because what we have is, we're still in the mindset where, like, okay, the goal is to build, to recreate a human. But suddenly we're in the natural world, and then it's like, okay, so we want to recreate a human in the natural world, right? And then this suddenly starts to be a bit weird. People in machine learning and AI research, particularly people who've entered the field more recently, say things like, it's not clear what we have ever learned from the brain, from the study of it. So this is kind of, in my view... This is Brain Inspired. Good day to you, I am Paul. My guest today is Christopher Summerfield. Chris runs the Human Information Processing lab at the University of Oxford, and he's also a research scientist at DeepMind. You may remember him from episode 95 with Sam Gershman, when we discussed ideas around the usefulness of neuroscience and psychology for artificial intelligence. Since then, he has released his book, Natural General Intelligence: How Understanding the Brain Can Help Us Build AI, and in the book Chris makes the case that inspiration and communication between the cognitive sciences and AI is hindered by the different languages each field speaks. But in reality, there has always been and still is a lot of overlap and convergence around ideas of computation and intelligence, and he illustrates this using tons of historical and modern examples. So I was happy to invite him back on to talk about a handful of the topics in the book, although the book itself contains way more than we discuss. You can find a link to the book in the show notes at braininspired.co slash podcast slash 159. Thanks for listening. Here's Chris. I was just looking it up, and it was actually almost a year ago today that we spoke last. You were on with Sam Gershman and it was mid-January. So I was just looking it up because I was thinking, what episode was that? When was that? 
And since then, you have published this bright, shiny new book, Natural General Intelligence. And in the preface to the book, you write that it took you ten months to write it. And there's so much in the book that that feels like a blistering speed at which to write a book, but you also mentioned, hey, AI is advancing at a super rapid pace. And so, one, how did you write the book that fast, and two, how much worry did you have that it would be immediately outdated, essentially, the AI facts in it? So neuroscience, not advancing so fast, right? Psychology, not advancing so fast. AI advancing very fast. So you didn't have to worry so much that you'd be outdated on your side, in your neuroscience, maybe. Thanks for those questions. Yeah. Well, it didn't feel very fast when I was writing it, I can tell you. Maybe I don't know how long it takes people to write. But yeah, I think one of the reasons I found it relatively easy to write, at least once I got going, was because some of the material is part of a course which I teach. So the structure of the book and the kind of the conversations and the arguments and the things that I felt I needed to explain were sort of present in my mind. So yeah, that's one reason. And also, purely incidentally, I just found that actually I really love writing. I think I'm probably much better at writing than I am actually at doing other aspects of science. I really enjoyed it, and the synthetic process of getting those ideas down and trying to make sense of them was personally hugely enriching. I'm sure many other people have that experience too. In terms of the book being out of date, yeah, so obviously I would love people to read it. It's not out of date, by the way. 
I mean, I would love people to read it, but the neuroscience, of course, is not out of date. And the neuroscience, I still feel like, you know, ideas come out in machine learning and AI research and it takes a while for them to percolate through, first typically to the computational community and then out into the wider communities. And in that process, some of the models and ideas which are described in the book will probably be news to a lot of people working in mainstream neuroscience, and certainly to the audience that I had in mind while I was writing, which is sort of like undergraduates who are interested in cognition and computation in the brain. So that's definitely true. But in terms of AI research and what's new, sorry guys, but you know, it was out of date. It was out of date three months after I submitted it. Dramatically. OUP are actually a little bit faster to turn around books than some other publishers, but even so, by the time it was due to go to production, which was I think in July '22, like three or four months after I submitted it, they said, well, hey, do you want to update it? Because, has anything changed? That's what I was going to ask. The whole thing, sorry. No, but did you? Were you, at least in your mind, were you thinking, do I need to go and see if a new model has been released?

Sam Gershman Christopher Summerfield Chris University of Oxford DeepMind Paul
A highlight from Generative AI is a Waypoint to Brain-Computer Interface - with Lambert Hogenhout of the United Nations [AI Futures / Human Reward Systems - Episode 1 of 5]

AI in Business

02:06 min | Last week

A highlight from Generative AI is a Waypoint to Brain-Computer Interface - with Lambert Hogenhout of the United Nations [AI Futures / Human Reward Systems - Episode 1 of 5]

"This is the first of a five-part Thursday series on AI Futures. And in this series, we're focused on generative AI and human reward systems. Many of us are aware that our reward systems are already pretty well hijacked by different kinds of digital media, whether it's scrolling YouTube or TikTok or online gaming or something else. While more immersive generative AI experiences have tremendous potential for educational value and new modes of creativity, we also believe that there's a serious risk that these technologies will have an addictive appeal and pull people away from productive work, and maybe even away from collaborating with each other. We are honored to have as our guest in the first episode of this series Lambert Hogenhout, who is the chief of data, analytics and emerging technologies for the United Nations. He's been with that organization for some two decades and has some far-reaching perspective on where technology's momentum is taking us, and he goes right into brain-computer interface and the real, farther future of the human experience. The purpose of these AI Futures series is to stretch our imagination into: where is this technology taking us, and where do we really want to go? And Lambert does a great job of talking about where tech momentum is leading, and also how we can prevent a potentially arms-race-oriented dynamic around AI and brain-computer interface. Obviously, as the premier intergovernmental organization, the UN would have some strong ideas there as well. So I really do appreciate some of Lambert's takes. Again, this is the first of a five-part series airing every Thursday, so in the outro of this episode, I will be talking a little bit more about some of the other guests you're going to hear from. And I would encourage all of you, if you want to see some of Lambert's quotes in context with some of our broader research on generative AI, including our interviews with OpenAI, Microsoft and others, go to emerj dot com slash reward. 
This is our big quarterly article about generative AI and reward systems. And we'd love your thoughts and ideas on it, because it's going to help us mold some of our future editorial coverage. Again, that's E-M-E-R-J dot com slash reward. You can see some of Lambert's quotes and see our broader research on this topic.

TikTok Lambert Hogenhout YouTube United Nations Microsoft
A highlight from AI Today Podcast: AI Glossary Series  Heuristic & Brute-force Search

AI Today Podcast: Artificial Intelligence Insights, Experts, and Opinion

04:00 min | Last week

A highlight from AI Today Podcast: AI Glossary Series Heuristic & Brute-force Search

"And I'm your host, Ron Schmelzer, and thanks again for tuning into our AI Today podcast. We are in the midst of our glossary series, and we're embarking on a bunch of terms that are related to machine learning. So stay tuned, stay subscribed as we dig deeper into all the main terms to understand around machine learning. If you're listening to this out of order, you should know we have podcasts that are focused on glossary entries on machine learning, so you should listen to them all. But the idea of this is that there are many people who either don't understand or misunderstand various really important terms around artificial intelligence, machine learning, and big data. We put together this big glossary that's on our site that defines these terms. But sometimes it's helpful to hear about it, mainly because sometimes reading things is hard to comprehend, especially when you see so many terms. So we decided, let's record a podcast series that goes over each of these terms, sometimes a few related terms together in a single podcast, and really explains what they are. Of course, understanding what the terms are and how to actually put them into practice, especially doing so the right way, are completely different things. We understand that. That's why we have our CPM AI, which is our Cognitive Project Management for AI methodology and training, which not only provides fundamental understanding of a lot of these concepts, but really tells you how to put them into practice in a successful way, following a methodology that has worked successfully at many large organizations. So stay tuned. We have a free course that you can participate in to learn more about CPM AI, and hopefully be one of our thousands of CPM AI certified folks and the great community we're building there. Exactly. So as Ron mentioned, we put together a very comprehensive AI glossary. 
And we will link to that in the show notes so that you can use it as a reference point in case you ever want to look up different terms in our glossary. But we wanted to spend some time on a podcast series going over these terms just at a high level, so that you can understand some of these basic and key AI, machine learning, and big data terms. And so on today's AI glossary series podcast, we're going to be going over the terms heuristic and brute-force search. And I think this is a good place to start, really, as a foundation for machine learning. Because these are two terms that really aren't machine learning specific, but help you understand why this idea of machine learning is so important, and also get us into thinking about, well, thinking. Because that's what machine learning is: we need to think about how we think, right? So the term heuristic is one of those terms that seems overly complicated, but really refers to a very simple idea: a general strategy or tool or technique that you use to solve a problem. It especially applies when we need to make a quick decision, when you only have limited information and you don't want to literally try everything. So you can think of it like this: let's say you're in a diner. The diner has a huge menu, right? You need to pick some food out of it. Well, one heuristic could be, maybe I'll just get what I got last time. That's a quick approach that could work, especially if you liked it. But let's just say you want to try something new. Well, the next approach would be, well, is it breakfast? Maybe I won't look at the whole menu. Maybe you will. That's the great thing about diners, because they will serve the whole menu the whole day. So they don't make your life easier by making that menu smaller. No, no, no. They will sell you pancakes at 11 p.m. and they will sell you hamburgers at 8 a.m. 
And it's up to you to decide what you want to eat. So anyway, think of the heuristic as: what approach do you use? Now, you might think, okay, a menu problem, that's easy. But what about much bigger ones? What if you have problems like trying to figure out what classes to recommend, figuring out what movies to recommend, figuring out what price to set, or whether to approve or deny a loan, right? What we have done is we've built these heuristics for ourselves.
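The diner example maps directly onto the two search styles named in this episode's title. Here is a minimal sketch (the menu items and taste scores are invented for illustration): brute-force search evaluates every option, while a heuristic prunes the search space first, trading completeness for speed.

```python
# Sketch: brute-force vs. heuristic search over a (made-up) diner menu.
# Each item is (name, section, taste_score).

menu = [
    ("pancakes", "breakfast", 7.0),
    ("omelette", "breakfast", 8.5),
    ("hamburger", "lunch", 9.0),
    ("club sandwich", "lunch", 6.5),
    ("meatloaf", "dinner", 7.5),
]

def brute_force(menu):
    """Examine every item; guaranteed to find the best, but reads the whole menu."""
    return max(menu, key=lambda item: item[2])

def heuristic(menu, meal):
    """Rule of thumb: only consider the current meal's section of the menu."""
    section = [item for item in menu if item[1] == meal]
    return max(section, key=lambda item: item[2])

print(brute_force(menu)[0])             # best overall, after checking everything
print(heuristic(menu, "breakfast")[0])  # best within the pruned section only
```

The heuristic may miss the global best (the hamburger) but inspects far fewer options, which is exactly the trade-off that makes heuristics matter once the "menu" becomes millions of movies, classes, or loan applications.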

RON
A highlight from GPU dev environments that just work

Practical AI: Machine Learning & Data Science

05:28 min | Last week

A highlight from GPU dev environments that just work

"Welcome to another episode of Practical AI. This is Daniel Whitenack. I'm a data scientist at SIL International, and I'm joined as always by my cohost, Chris Benson, who is a tech strategist at Lockheed Martin. How are you doing, Chris? Doing good, having a good 2023, and this is going to be the best year for artificial intelligence ever. Yeah, well, I mean, it must be. Yeah, we finally did our ChatGPT episode, and that was really cool because, I don't know if you saw, Chris, it's the first episode where we had, I think, over 10,000 downloads in the first week. So thank you to our listeners. That's awesome to see. We're glad that was useful. And we're going to keep the good content rolling right along, because this week we've got something super practical, which I think everyone deals with, and we're privileged today to have with us Nader Khalil, who's the cofounder and CEO at brev.dev. Welcome. Hey, thank you. Thanks for having me. Yeah. So I alluded to a problem that we all face, which is environment management: I am developing on this environment and I need to have these dependencies, or I use this environment and now I need a GPU, or Chris is on my team and he needs to replicate my environment. All of these sorts of things, or whatever category you put those in. So, I guess, in terms of your digging into this problem now, how did you get there? What started you along this path of really thinking deeply about dev environments? Man, we've had quite a twist and turn of a journey to get here. And yeah, I mean, the ultimate goal is just to stop monotonous machine problems getting in the way of creative development. And it's funny, when I went to UC Santa Barbara, I studied electrical engineering and computer science. And when I moved to SF to work, I was actually building cloud dev environments at Workday. And I did that for two years, and in December 2018, actually just before that. 
I was getting a beer with a bar owner, and he was telling me how he had a thousand clicks on his Google ads, but his bar was empty other than me. And he shows me his metrics on his Google ads and he goes, make it make sense. And I realized he had a really good point. Digital ads work really well for digital businesses, because if someone clicks on an Amazon ad, you've entered Amazon's storefront; there's nothing like that for physical businesses like his. So he was just using a really bad medium. So my cofounder and I (pretty much the same cofounders as with Brev) realized there was a way for us to backdoor the Uber app. And so we put tablets in Ubers and Lyfts and we let local businesses advertise on them, and if you tapped our screen, we would reroute your Uber to that location. Yeah, that's legit. Yeah. You go out with friends for drinks, you see "buy one, get one free margaritas," you tap the screen and we take you there. You get a free drink, the bar owner knows ads work, the driver got a tip. Everyone won. Perfect. And so that was really exciting; that's what I quit my job to go do. We did that for like two years, completely bootstrapped. We ran out of money; I poured my 401(k) into it. We got into YC for that. We got to like a quarter mil ARR, and essentially demo day was March 2020, which was right when the shelter in place happened in SF. And so we got to see our 300 to 400 cars go to 7 overnight, actually the week of demo day. So we didn't raise a dime, obviously. But I feel bad for laughing, but I can't help it, yeah. Have you seen that GIF on the Internet of the raccoon with cotton candy, and it's just like, where did it go? Oh, things looked very much like that in March 2020 for us. But it was funny, because with a physical business, you have a physical fleet, right? We have physical operations. You'd imagine physical hurdles being the hardest part of that. And in January 2020, we're starting YC. 
We were at like 15K MRR; things were working, and we needed to three or four X the fleet. And that was really hard for us. We found out from one of our drivers that Uber and Lyft have these parking lots half a mile from SFO airport, where drivers go wait for these really valuable airport rides. So I go to the parking lot and Uber security kicks me out right away, because I'm not a driver. So I'm like, okay, well, I'm at least there. So I went to a gas station and bought cigarettes. I light one up and just walk back onto the lot, because now I look like a driver taking a smoke break. And I got right past Uber security. I'm on this lot until like 4 a.m. talking to every driver. We 4x'ed our fleet that night. So there was never a physical hurdle that got in our way, but once we got those drivers live, everything else went south. Our advertiser dashboard was really slow. All these random problems; one of them was that the ads, when they flipped on our tablets, would just disappear and flash white, and if that happened at night, it's jarring, so riders would turn off the screen and you'd lose revenue for the night. And so it was really funny having really weird physical problems we could solve (we can sneak past Uber security), but when we have to sit at our computers and fix something, it's our dev environment slowing us down. And so it was almost instant: when the pandemic essentially killed that business, my cofounder and I look at each other, and in those 20 days of January where we were trying to deal with our dev environment issues, we couldn't replicate these issues locally, just so many weird, bizarre issues; we were just shooting in the dark. That was the only time with that business I had like a pit feeling in my stomach, like we forgot an assignment or something. And so immediately: how do we solve our previous problems? 
And so we spent like a year and a half in pivot land with a good north star, we built a very heavy abstraction, I guess.

Daniel Whitenack SIL International Chris Benson Nader Khalil Brev.dev Lockheed Martin Amazon Google UC Santa Barbara SFO Airport
A highlight from Dustin Coates from Algolia Breaks Down Keyword, Concept, and Conversational Search Models - Voicebot Podcast Ep 295

The Voicebot Podcast

02:57 min | Last week

A highlight from Dustin Coates from Algolia Breaks Down Keyword, Concept, and Conversational Search Models - Voicebot Podcast Ep 295

"This is episode 295 of the Voicebot Podcast. My guest today is Dustin Coates from Algolia. We go deep on search technologies and architectures, including how Algolia differs from Google and how both differ from ChatGPT. Welcome back, Voicebot Nation. My name is Bret Kinsella and I'm the host of the Voicebot Podcast. Each week I bring you an innovator, engineer, designer, or some other industry leader that is shaping the trends around conversational AI and synthetic media. Today we are talking about what happens after natural language inputs for the search use case, and how different technology approaches shape the type of response you receive. Dustin Coates has spent the past 7 years in search technology, and during that time he led the rollout of voice search at Algolia and worked with GPT-3 when Algolia was one of OpenAI's launch partners in early 2021. Despite all the talk around ChatGPT and search, GPT-3 doesn't offer improved search results for most Algolia customers in most use cases. Instead, concept-based search is typically more successful, and we break all that down. Coates also walks us through several types of search, including keyword search, concept search, semantic search, and conversational search. We talk about what large language models are good for and how retrieval models will be required to improve their performance. It's a great conversation. I really enjoyed it, and these topics are highly relevant; they're discussed daily across our industries, and I suspect a lot of listeners will enjoy getting some quick education on what search is and is not. We talk about the models; I ask him some very specific questions about how to frame this and how to understand these different types of search models, which might not be obvious if most of your interaction with search is through Google. By the way, I've talked recently about search in the Synthedia newsletter. 
We use that to publish daily on topics in the synthetic media and generative AI markets. It's a free newsletter. It turned out to be really popular. If you would like to check that out, along with articles about ChatGPT being added to Microsoft's Bing search engine, what Google must wrestle with based on the ChatGPT threat, and how InstructGPT paved the way for ChatGPT, head over to voicebot.ai to sign up. Just click on the Synthedia button in the nav bar, or you can go to synthedia.substack.com. We have a lot of fun over there talking about topics that are in the news every day. Now, Dustin Coates is the principal product manager at Algolia and works with the company's AI-enabled features. He joined the company in 2015; earlier he was an instructor at General Assembly teaching coding to students, and he held several analytics-heavy roles across industry. Next up: search, how search works, what works, and what people get wrong about ChatGPT's capabilities. Let's get started.
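As a rough illustration of the keyword-versus-concept distinction discussed in this episode, here's a toy sketch. The documents and the two-dimensional "embedding" vectors are invented; real concept search (at Algolia or elsewhere) uses learned embeddings with hundreds of dimensions, but the mechanics — literal token overlap versus vector similarity — are the same.

```python
# Toy contrast: keyword search (literal token overlap) vs. "concept" search
# (vector similarity). Vectors below are hand-made for illustration only.
import math

docs = {
    "doc1": "laptop sleeve for 13 inch notebooks",
    "doc2": "padded computer bag",
}

def keyword_search(query, docs):
    """Return only documents sharing at least one literal token with the query."""
    q = set(query.lower().split())
    return [d for d, text in docs.items() if q & set(text.lower().split())]

# Hypothetical 2-d concept vectors: (computer-ness, carrying-case-ness).
embeddings = {"doc1": (0.9, 0.7), "doc2": (0.8, 0.9)}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def concept_search(query_vec, embeddings):
    """Rank every document by vector similarity; no shared words required."""
    return sorted(embeddings, key=lambda d: cosine(query_vec, embeddings[d]),
                  reverse=True)

# "laptop case": keyword search misses doc2 (no shared word), while the
# concept ranking still surfaces it because the vectors are nearby.
print(keyword_search("laptop case", docs))
print(concept_search((0.85, 0.85), embeddings))
```

This is why a query like "laptop case" can miss a "padded computer bag" under keyword matching but find it under concept search: relevance comes from proximity in the embedding space, not from shared strings.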

Dustin Coates Bret Kinsella OpenAI Google Algolia Microsoft General Assembly
A highlight from Lowering the Barriers to Entry in Data - with David Pollington of Bloc Ventures

AI in Business

07:22 min | Last week

A highlight from Lowering the Barriers to Entry in Data - with David Pollington of Bloc Ventures

"AI in Business podcast. I'm your host, for the most part, Matthew DeMello, senior editor here at Emerj. You'll actually be hearing Daniel Faggella, Emerj CEO and head of research, conducting today's podcast, but Daniel is moving away from hosting these programs and you'll be hearing from me regularly on this program, week to week, in short order. On that note, today's guest is the head of research at Bloc Ventures, David Pollington. Bloc Ventures is an investment firm known for its early stage entrances and lucrative exits in a space known as deep tech. In conversation with Emerj CEO Daniel Faggella, David talks about how the barriers to entry for serious data analytics use cases are lowering into a healthy startup market, and how an interest from industrial sectors in use cases surrounding limited data problems is a driving force behind that trend. Without further ado, here's their conversation. So David, thank you for being with us. It would have been fun to be able to catch up with you in person, but quarantined as I am, we are catching up over a Google Meet call today, but we still get to cover the fun topics I wanted to cover. And we're touching on a theme that you guys will have a great perspective on as a venture firm, which is how you're seeing AI companies adjust in terms of their go-to-market and their strategy post-COVID. There were firms whose whole sales process was based on events, or they did a lot of in-person meetings with clients, or maybe they accessed certain data sources that are tougher to access now. At a high level, what have you seen alter for your portfolio folks? Well, I think there's a general trend, and I don't know to what extent this has been influenced by COVID, but obviously folks are just capturing a lot more data these days, and they're looking to AI to really help them sift through that data and generate insights. And this is where I think there's a lot of people coming on strong. 
So we see a lot of companies coming through our deal flow process that are looking to bring tools to allow, if you like, the democratization of AI. People talk about low code, no code. It's all about being able to drive insights from that data to be able to then drive the business forward. So I think that's one of the things we're seeing: a more general trend toward sort of continuous improvement, more agility in the operating structure of the companies, and making better use of their data, and maybe COVID has really accelerated that realization. Yeah, I guess it sort of forced digitization for certain processes, and maybe it's ousted certain kinds of redundancies for in-person processes that we don't need to do anymore, or something along those lines. And at least up until now, and I don't know how long it will last, it hasn't led to some gigantic financial crash to the point where we've been crunched on how much we can spend on tech. So I think actually AI startups have really benefited from the last couple of years, which, at the time of the crisis when the economy first got shut off, wouldn't have been my first guess. So yeah, so data is waking up. You're seeing a lot more tools kind of come into play. Very competitive space, obviously, right? We've got Amazon in that mix, DataRobot prepping itself to go public, Dataiku, Domino Data. I mean, all the unicorns in that space have basically been on the show in the last two years. When it comes to democratized AI and data tools, what are these folks doing to kind of stand out? Yes, data is waking up. Yes, there's big opportunity, but by golly, there are so many other players there. Yeah, there is. It's kind of interesting, because as you say, there are some very big companies that are already starting to dominate in these spaces. But it's amazing the number of smaller companies coming through with innovations that really then sort of focus in on particular problems. 
So whether that's kind of time series data, whether that's being able to unlock data in the data lakes and sort of build it into a knowledge graph to allow you a better analysis of the data, as I mentioned before, it's also about exposing that data to the whole company, which I think is increasingly important as people are starting to work remotely and have these sort of distributed organizations. You need everyone to be able to have access to the data and be able to query the data and get insights back from it. So there's a lot of low-code, no-code sorts of aspects to that as well, AutoML, et cetera. So yeah, I think this is very much still a growing space, and people are really just starting to get to grips with how to use their data, and I think that's where the real innovation opportunity is. Sure, yeah. Second that. Certainly I think there are a lot of niche opportunities, as you said, maybe time series, right? Maybe if you've got a very niche, super specific solution for time series data that doesn't really come out of the box with Amazon or Domino or whoever, maybe there's just a way to use that as your first way to win business. I think the idea of kind of becoming the AI platform for a whole company does feel like where you really do actually start to butt heads, you know, not just in the edge case, but now you're really starting to butt heads with the folks that are trying to own that space. But there are many foot-in-the-door strategies, and it sounds like you've got a couple of companies that are leveraging a few. Anything else that in the last couple of years you're noticing? Could be around the kinds of companies that you're backing or you're seeing emerge, or maybe the way you're seeing those companies approach the market. Maybe enterprises are buying a little bit differently these days. What else have you seen that's been interesting in the last couple of years? 
Yeah, I mean, obviously we talked about sort of big data at the enterprise level and what you can do in terms of analytics around that data. But the flip side to that is there's also a lot of interest, perhaps in some of the industrial sectors, around applying AI to use cases where actually you don't have a lot of data. You have these kinds of limited-data problems, things like defect detection on production lines and anomaly detection when it comes to things like cybersecurity, and this is where you need to have a different approach. And we're seeing lots of interesting techniques coming out of academia that have been simmering around for a while. And people are now starting to try to bring those through and commercialize them to allow you to essentially apply AI to these kinds of few-shot learning type problems. And that's where we're seeing quite a lot of innovation in particular. Okay, so this few-shot idea, and this is kind of a new area for you folks in the last two years. Like, maybe startups just weren't really as excited about approaching these areas because the fact that we didn't have big volumes made it just less attractive to go there, and now all of a sudden startups are attacking it. Is this kind of what you're picking up on? Yeah, I mean, I think it's because it's more difficult. When you look at the big data problems, if you have this data in the data lake, you know, maybe it's unstructured. You've got that whole problem of labeling and annotation, and folks have been putting their minds to how you actually improve that. But when you don't have a lot of data, people don't automatically think, well, we could use some AI on that, because they just don't have enough to actually do anything meaningful with it. But that's where people are starting to realize the value: if you could bring AI to some of those problems, it could have huge value for the organization. So that's where we're seeing people starting to tackle those harder problems. 
And especially when it comes to things like computer vision, you know, I mentioned defects on production lines, but when you talk about sort of autonomous vehicles, autonomous mobile robots, that whole aspect
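The few-shot idea discussed above is often illustrated with classifiers that represent each class by just a handful of labeled examples. Here is a minimal sketch of that approach, a nearest-centroid classifier (the non-learned core of prototypical networks); the feature vectors and class names are invented for illustration, not anything from Bloc Ventures' portfolio:

```python
import numpy as np

def nearest_centroid_classifier(support_x, support_y, query_x):
    """Few-shot 'prototype' classification: each class is represented by
    the mean (centroid) of its few labeled support examples, and each
    query is assigned to the class of the nearest centroid.

    support_x: (N, D) array of labeled feature vectors
    support_y: length-N list of class labels
    query_x:   (Q, D) array of unlabeled feature vectors
    """
    labels = np.asarray(support_y)
    classes = sorted(set(support_y))
    # One centroid per class: mean over that class's support examples.
    centroids = np.stack([support_x[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from every query to every centroid.
    dists = np.linalg.norm(query_x[:, None, :] - centroids[None, :, :], axis=-1)
    return [classes[i] for i in dists.argmin(axis=1)]

# Toy 'defect detection' setup: 5 examples per class, 4-dim features.
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 0.1, size=(5, 4))
defect = rng.normal(1.0, 0.1, size=(5, 4))
support_x = np.vstack([normal, defect])
support_y = ["normal"] * 5 + ["defect"] * 5
```

With only five examples per class, the centroids already separate the toy classes, which is the appeal of this family of methods in limited-data settings; real systems add a learned embedding in front of the centroid step.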

A highlight from Conversational Surveys

Data Skeptic

01:12 min | Last week

"Com. Something incredibly serendipitous has happened and I'm excited to share it with you guys here. I got a letter in the mail the other day from the United States Census Bureau with a little form we're going to fill out together. A few days later I got a second letter informing me that the first letter would be arriving, so that part didn't work out exactly right, but we're still on track. All right, dear Los Angeles household. I am pleased to inform you that your household has been selected to participate in an important national survey on the education of people in the United States. The 2023 national household education survey. The U.S. Census Bureau conducts the survey of households on behalf of the U.S. Department of Education every few years, we conduct it together information about learning activities that happen outside of schools. You have come to the right place. In the next few days you will receive an invitation in the mail, that's the one that came out of order from us to complete the survey. The survey can be completed at home by an adult in the household. That's me. Depending on your answers, it can take about three to 30 minutes to complete, so start the clock, and this is pretty cool. We included a $5 bill as a token of our appreciation for your participation, and they did indeed. Also if

Interview With Daniel Kornev Chief Product Officer at DeepPavlov

The Voicebot Podcast

02:07 min | 1 year ago

"Daniel gornja. Welcome to the voice. Podcast much brackets and big for me to turn today today. It's my pleasure to have you. This is a long time in the making. We've been i guess chatting on slack for maybe year and a half something. Yeah i think so. I started to read your westport. Insider was fascinated by opportunity to look into your think to on hand Why not took. Yeah that that's that's how it happened. Well the is really perfect. Because we're going to talk about a few things today. Obviously d. Pavlov is a project i've been interested in for at least a year. I don't remember when i first came across it but it might have been might have been. You introduced it to me. Or maybe shortly before that i found out about it but definitely answered that project and then obviously you've been involved recently with the elec surprise social competition. We've had another conversation about that about this. What a perfect time to go a little deeper on that because it is a different way to build bots and so really looking forward to this conversation today. But i'll let you get started. So why don't you tee it up for the The audience right now first and let them know what d- pavlov is before we get deep sure depot is like lab at moscow's physics and technology. That is focused on conversational And neural efforts Officially cool to neural networks in Terrain but Wednesday were standard like full. Five years. ago it's also got to down moniker Because follow fossil famous russian scientists who discover it reflects us in all those things that encouraged scientists researchers to understand how human brace books and we still have a lot of things that we have to uncover. But that's was formed as the name.

Google Develop AI for Detecting Abnormal Chest X-Rays Using Deep Learning

Daily Tech Headlines

02:09 min | 1 year ago

"On friday we talked about a nature publication by google. Ai scientists that showed how a deep learning system could detect abnormal chest xrays rays with an accuracy. Rivaling that of professional radiologists. The system only detects whether a chess scan is normal or not and is not trained to detect specific conditions. The goal here is to increase productivity and efficiency of radiologists clinical process. Let's examine some a i x ray. Science first of all how to rays work xrays are a type of radiation energy. Wave that can go through. Relatively thick objects without being absorbed or scattered very much. X rays have shorter wavelengths than visible light which makes them invisible to the human eye for medical applications of vacuum x. Ray tube accelerates electrons to collide with a metal and owed and creates rays these rays are then directed towards the intended target like a broken arm for example and then picked up by digital detectors called image plates on the other side differ body tissues absorb x rays differently so the high amount of calcium in bones for example makes them especially efficient at x ray. Absorption and this highly visible on the image detector soft tissues like lungs are slightly lighter but also visible making x ray and efficient method to diagnose pneumonia or pleural a fusion Which is fluid in the lungs. For example according to this latest nature publication approximately eight hundred and thirty seven million chest. Xrays are obtained yearly worldwide. That is a lot of pictures for radiologists to look at and can lead to longer wait times and diagnosis delays. And of course. This is why there's interest in developing ai. Tools to streamline the process many algorithms have already been developed but are rather aimed at detecting specific problems on an x ray. The google ai. Scientists however developed a deep learning system capable of sorting chest xrays into either normal or abnormal data intending. 
To lighten the case load on radiologists
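At its core, a normal/abnormal triage system like the one described is a binary image classifier. The following is a deliberately tiny sketch of that idea, using logistic regression on synthetic 8×8 "images" as a stand-in for the deep network and real chest X-rays the paper actually uses:

```python
import numpy as np

def sigmoid(z):
    # Clip to avoid overflow in exp for extreme logits.
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def train_triage_model(images, labels, lr=0.5, steps=300):
    """Fit a logistic-regression 'normal vs. abnormal' classifier.

    images: (N, H, W) array; labels: 0 = normal, 1 = abnormal.
    Returns a predict function mapping images to P(abnormal).
    """
    X = images.reshape(len(images), -1).astype(float)  # flatten pixels
    y = np.asarray(labels, dtype=float)
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(steps):
        p = sigmoid(X @ w + b)           # predicted P(abnormal)
        err = p - y                      # gradient of the log loss
        w -= lr * (X.T @ err) / len(y)
        b -= lr * err.mean()

    def predict(imgs):
        return sigmoid(imgs.reshape(len(imgs), -1).astype(float) @ w + b)

    return predict

# Synthetic data: 'abnormal' scans carry a bright patch the model can find.
rng = np.random.default_rng(1)
normal = rng.uniform(0.0, 0.2, size=(20, 8, 8))
abnormal = rng.uniform(0.0, 0.2, size=(20, 8, 8))
abnormal[:, 2:5, 2:5] += 0.8
predict = train_triage_model(np.concatenate([normal, abnormal]),
                             [0] * 20 + [1] * 20)
```

A real system replaces the flattened-pixel linear model with a convolutional network trained on hundreds of thousands of labeled studies, but the triage framing, one probability of "abnormal" per scan, is the same.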

Generating SQL [Database Queries] From Natural Language With Yanshuai Cao

The TWIML AI Podcast

01:58 min | 1 year ago

"So tell us a little. Bit about touring and the motivation for it. How did the project get started right. So is this natural. Language database interface is a demo of anguish database interface built. And it's really just putting a lot of our word on some parsing space together. In this academic demo so netra language database interface the from application perspective the pin uses to a law a nontechnical users to interact with structured data. Set is there's lots of inside endure and You know who want to give out change for nontechnical users to to get those insights and from a research perspective. It's a very challenging natural english Problem because the underlying problem is you have to parse pasta in english or had our next languish than convert to see cole. And we all know. Natural language is ambiguous machine languages on bigger after resolve all amputate. He yard a too harsh correctly. Furthermore was different from compared to on other program. Language is the mapping. From adams. To see cole is under specified. If you don't know the schema really depend on what is the structure of schema and so he still model has to really learn how to reason using it. And in order to resolve all that may retail and correctly predicted the sequel and lastly this printer model some. You don't want to just work on this domain one. To work on demand is on databases. You're never seen before. So without st cross domain across database part of it and dodgers very challenging. Guess it's completely different. Distribution wants moved to different dimensions even

Seth Dobrin Talks About Trustworthy AI

Eye On A.I.

01:41 min | 1 year ago

"We're gonna talk about trustworthy a i. It's something that is increasingly in the news and concerns a lot of people. Ibm has a product called fact sheets. Three sixty that i understand is going to be integrated into products. Can you tell us what fact sheets three sixty is. And then we'll get into the science behind. Yes so let me start by laying out what we see is the critical components Trustworthy a at a high level Three things there's a ethics there's govern dated ai and then there's an open and diverse ecosystem an ai ethics is fully aligned with with our ethical principles that we've published with arbin dr ceo co leading the initiative out of the world economic forum. And i'm adviser for essentially open sourcing our perspective on a ethics from a govern data in ai perspective. It falls into five buckets. So i is. Transparency second is explain ability third is robustness. Fourth is privacy and fifth is fairness and so the goal of fact sheets is to span multiple of these components and to provide a level of explain ability. That is needed to drive adoption and ultimately for regulatory compliance. And you think of it as a nutritional label for ai where nutritional labels are designed to help us as consumers of prepackaged foods to understand what are the nutritional components of him. What's healthy for us. What's not healthy for us. Factually is designed to provide a similar level capability for a.

Everyone Will Be Able to Clone Their Voice in the Future

The Vergecast

01:49 min | 1 year ago

"World today often feels like it's full of digital voices with a assistant siri amazon alexa and google reading your messages announcing the weather in answering trivia. Here's what i found on the web but if you think things are chatting now just you wait. The voices of these a assistant used to be based unreal recordings. Voice actor spent hours talking in a studio and these clips would-be cut up and rearranged to create synthetic speech but increasingly. These voices are being created using artificial intelligence. This means we can not only create more realistic computer. Voices clone the voices of real people much more quickly creating endless artificial speech at the touch of a button for example it was surprisingly easy to make a synthetic version of my own voice. In case you missed that. That was not me talking. That was all made digitally by typing into a computer. So why would some want to do this. Besides the obvious novelty of it. You might have guessed a reason to make some money. I listen to this was going on. Kevin hart here. I wanna talk to you about why. We have to have mac and cheese every night. Think about it. That's why. I recommend thousands of new shows and this is a promo from baritone one accompany. That's working on an ai product to create synthetic voices and make them something. The media industry wants to us. So we've created a platform. Ai which at the end of the day turns unstructured data into structured data. That's shaun king executive vice president. Ed veritas one. So if you're thinking about audio thinking about video things that are typically unstructured and we make that searchable discoverable author a host of different a cognitive engines that are there from transcription beaker detection speaker separation. And then we provide those tools to you know many different industries that are eating

Interview With Patrick Bangert of Samsung SDS

AI in Business

02:01 min | 1 year ago

"So patrick i'm glad to be able to have you with us on the program here today and we're gonna be talking. Ai at the edge particularly in the world of medical devices. Which is i know where a lot of your focus is here. We're gonna get into some of the unique challenges of leveraging data and ai at the edge in the medical space. But i want to talk first. About what kinds of products. We're talking about people think medical devices. Okay well medtronic is tracking my blood sugar on the side of my arm and you know. Then i've got a big cat scan machine kicking around over here. What kind of devices does your work involve with. And and his edge relevant From your experience. Thank you for having me on the show pleasure to be here. We are dealing with medical imaging devices. So if you have a smart watch on your wrist. That's not what we deal with. Even though those are very useful of course to measure your exercise and sleep patterns we're dealing with technologies like an ultrasound and mri is not an x ray. And what's called digital pathology which is where a biopsy is removed and put on a microscopic slide. Those kinds of technologies produce images that are relevant to telling you whether you're sick at all hopefully not or if you are what kind of disease it is. And so the job of computer vision in this case is to detect whether is a disease diagnose what it is to find out where it is to find out how big it is advanced in if cancer stage one. Three how advanced it is. And all of these outputs can of course be created. Virtually instantaneously by executing artificial intelligence models at the edge and the edge in this case is the device itself. Yeah okay so. Some devices are huge. Mri scanners take up a whole room. As some devices are quite small ultrasound. Machines view could transport it in your suitcase and so there's obviously also price difference here but nonetheless. All of these technologies do produce an image that that is then analyzed by

Social Commonsense Reasoning With Yejin Choi

The TWIML AI Podcast

02:07 min | 1 year ago

"All right everyone. I am on the line with jin. Choi eugen is a professor at the university of washington. Yajun welcome to the air podcast and excited to be here. Thanks for having me. I'm really looking forward to digging into our conversation. I'd love to have you start by sharing a little bit about your background and how you came to work in the field of ai. Right so i primarily work in the area of natural language processing but like any other feels of ai. now the boundaries become looser losers and. I'm excited to work on the boundaries between language and vision language and perception and also thinking a lot about the connection between a i and human intelligence and what are the fundamental differences in that in terms of knowledge and reasoning And so let's go a little bit deeper into that. Talk us through like some of the ways that you take on those topics in your research portfolio. What are some of the main projects. You're working on the things that you're exploring right so currently i'm the most excited about the notion of commonsense knowledge and reasoning. This was in fact the only dream of a field. The in seventy eight as people love to think about it and tried to develop formalism for it. It turns out it's really trivial for humans but really difficult even for the smartest people to really think about how to define it formally so that machines can execute it as a program so for a long time. Scientists assumed that it's Doomed the direction. Because it's just too hard so i didn't really thought about commonsense for for a long time and then it's only in recent years. Some of us got excited to think about it again. Which is in part powered by the recent advancements of neural modell's that is able to understand large amount of data.

Ultra Long Time Series

Data Skeptic

02:21 min | 1 year ago

"My name is a foley counter. I work with essentially neurosis. They'll finance and economics in beijing. China background statistic computing and nowadays we focus on forecasting ways a lot of skill of data on distributed systems. So i haven't yet had the chance to interview anyone specifically about distributed time-series. It seems like that would be some extra challenges because the data sequential what happened before relates to what happens next. How can you spread that across. Many machines disputed hampshire is is just time is that alcohol. it can't be billions of observations. Historically we build up statistical models based on assumptions the narrative and other assumptions those assumptions do not work on distributed system and the industry like apache spark actually defacto standard for data processing and the the street people star a huge amount of data on distributed systems. We how to make a model that really works on sack disputed system and we have to work on their language to make our forecast more robust bus on his outer series. Yes spark is naturally a good choice to us because it's such a good reputation and a lot of reasons to look at it for big data solutions. But it's not obvious to me that it's necessarily the right choice for time series because it's not really baked in right. They've moved more and more towards like a sequel style and data sets. Are there any technical challenges to implementing time series via spark. And if you consider all single time that's fine but if you think about what we are doing we are streaming tate. Data is like times commun- out like water like re-re coming up now. You're really need nonstop system to process in the whole system. They simultaneously and without much delay that demand for temps is forecast in and out to claim that i think a lot of people agree with me nowadays arteta pam because we collect data. We always have the time stamp. So that's a windy. Temperatures for distributed systems. 
And there's a new challenge. I think emmanuel areas like atmosphere electricity and adi and other domains
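The split-by-series pattern that Spark enables can be sketched without Spark at all: partition records by series key, sort each partition by time, then forecast per partition. The moving-average "model" and the record layout below are invented stand-ins for whatever real forecaster and schema a production pipeline would use:

```python
from collections import defaultdict

def forecast_per_series(records, window=3):
    """Group (series_id, timestamp, value) records by series, sort each
    partition by time, and forecast the next value with a simple moving
    average. This mimics Spark's groupBy-then-apply pattern (e.g.
    applyInPandas) in plain Python: records may arrive in any order, so
    each partition must restore time order before modeling.
    """
    partitions = defaultdict(list)
    for series_id, ts, value in records:
        partitions[series_id].append((ts, value))

    forecasts = {}
    for series_id, points in partitions.items():
        points.sort()                          # restore time order per partition
        recent = [v for _, v in points][-window:]
        forecasts[series_id] = sum(recent) / len(recent)
    return forecasts
```

The key point the guest raises survives even in this toy: the data is only sequential *within* a series, so sharding by series key is what makes the problem distributable, while anything that crosses series (or streams in continuously) needs more machinery.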
