Listen to the latest news, expert opinions and analyses on the ever-expanding world of artificial intelligence, data science and machine learning, broadcast on leading talk radio shows and premium podcasts.
Rethinking Model Size: Train Large, Then Compress with Joseph Gonzalez
"Art. Everyone I am on the line. With Joey. Gonzalez. Joey is an assistant professor at UC Berkeley in the e S. Department. Joey welcome to the PODCAST. Thank you for having me. I'm really looking forward to diving into this. This conversation and in particular talking about M. L. Systems and your recent paper on train large then compress. But before we do that. Please share a little bit about your background and how you came to work in. Yeah excellent so my stories of it. Funny I started my PhD at Carnegie Mellon with an interest in actually flipping helicopters. Because that was the thing to do back in two thousand six awhile back. Lipping HELICOPTER LIVING HELICOPTER. Flying THEM FIXING THEM UP. Sell or fly them and then flip them. Actually a colleague of Mine Peter Beale now at Berkeley When he was finishing up his his thesis work is looking at how to do interesting. Control for helicopters cool and I knew I was. I went to my thesis advising the you've worked on control as well. I'm kind of interested in flipping helicopter. I think that's that's really neat research and you know that was And it actually was some of the pioneering work that we see today in reinforcement learning. But what's kind of cool about? The story is my adviser at that time being a real machining researcher. I was like you know what flipping helicopters. That's that's that's exciting but there's something more important like we can actually help the world with sensors we can build sensor networks to to monitor fires. We can use principle machine learning techniques. I should add that when I was looking at the flipping helicopters we should flip them with neural networks. And the other thing. My advisors said which was good advice at the time was a neural networks. Really serious research. 
We used more statistical methods, graphical models, things that have formal foundations we can reason about, that allow detailed analysis so we understand what our models are doing. And that was good advice, so I went down this path of building Bayesian nonparametric methods to reason about link quality in sensor networks, and in the process of doing that I kind of stumbled into a problem. I was writing a lot of MATLAB code to compute big matrix inverses, and then approximations to make it run faster, and one of the things I enjoyed doing while exploring these efficient MATLAB programs was trying to make them more parallel. I think my adviser clued in and said, you know what, maybe you enjoy that more, so maybe instead of focusing on the nonparametrics and the sensor networks, let's start to think about how to make machine learning more efficient. In particular, at that point in time Hadoop was taking off, and MapReduce was going to change machine learning. And we were thinking, well, we're working on graphs, and they just don't fit the MapReduce pattern; the kinds of computation we were doing just didn't fit the technology that people were building. So we started to explore a different design of system, a design for computation on graphs, which took us down the path of graph processing systems. The system that I ended up writing toward the end of my thesis was GraphLab, for doing very large-scale analysis of graphs, and so by the time I finished my PhD I was actually writing systems papers, not machine learning papers. The field was changing very rapidly too; this was around 2012, and if anyone's been following the history of machine learning, around 2012 everyone started to realize that maybe neural nets actually were a good idea. These deep learning ideas really dated back to the 1980s, but they were finally starting to work, and they were changing
the field of machine learning. Graphs were also taking off, so we actually built a company around the systems that I was developing as a graduate student, which was GraphLab. That evolved into a company building tools for data scientists to do interesting machine learning at scale, which was ultimately acquired by Apple. Around that time I also joined UC Berkeley as a postdoc. It was a chance to come out to California and a really exciting opportunity to do research on a different system, a system called Spark, which eventually became Apache Spark, and there we started to develop the graph processing foundation for the Apache Spark system. As I explored more and more of the field, I learned more about research in data systems and transaction processing and how those connect back to machine learning. And so after finishing my postdoc I came to Berkeley as faculty; in fact I chose not to follow the much more lucrative path of industry. I was going to ask about that. I made a terrible financial decision, but I'm happy, because I have a chance to work with students. I'm a little less happy because I'm not as wealthy as one could have been. But now I am teaching students and doing research at the intersection of machine learning and systems, and we have a pretty broad agenda: how to build better technologies for delivering models and managing the machine learning lifecycle, not just training but prediction; how to prioritize training experiments on the cloud; and how to use serverless computing to make machine learning more cost-effective and easier to deploy.
We have a big agenda around autonomous driving, building the actual platform that supports autonomous driving, not necessarily the models but how they are connected together to make a reliable car. And we have work in natural language processing and computer vision, and one of those papers, which I'm hoping to talk a bit about today, is our work on making BERT models easier to train. It has a kind of funny story about how we came to the realization that what we were thinking was entirely wrong, and that's what that paper talks a bit about.
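The paper's title, "Train Large, Then Compress," refers to training an oversized model and then shrinking it for deployment. As a purely illustrative sketch, not taken from the paper itself, here is what one common compression technique, magnitude pruning, looks like; the function name and the sparsity figure below are my own choices for the example:

```python
def magnitude_prune(weights, sparsity):
    """Zero out the `sparsity` fraction of weights with the smallest magnitude.

    A toy illustration of one compression technique (magnitude pruning);
    real pipelines operate on whole tensors and usually fine-tune afterward.
    """
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold at the k-th smallest absolute value
    # (ties at the threshold may prune a few extra weights).
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]
```

For instance, `magnitude_prune([0.1, -2.0, 0.05, 3.0], 0.5)` zeroes the two smallest-magnitude weights and keeps the rest.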
"Katie hello you know that they're me off in a recent episode. I remember we were talking about Bays Ball I had to 'cause in that episode. I didn't have a dad jokes so I had to make this We're GONNA explore that in just a second. You are listening to the New Year aggressions as that was a pretty good one. Okay where where have I to to wear? Have I fallen if I'm so proud of that? Indeed indeed biggest accomplishment all day. Okay so This example let's let's recap the example. Because why not? Yeah okay so the idea was. Imagine that you are Some kind of baseball scout. And you're trying to figure out what the What the batting average of baseball player is and the batting average? There's there's this thing called like a true batting average which is kind of an abstract quantity which is. How often would this baseball player hit? You hit the baseball in an infinite number of times the at Bat and get on base or whatever whatever a batting average I think yes. I think it's getting on base. I sent home runs. Come to right. Oh yeah sure. Yeah but anyway. The point is you don't have an infinite number of times that the that the batter has been up to bat instead. What you have is some number of times. They've been up to bat. And from that. You can calculate sort of this empirical batting average but the question is once you have that number. What does that tell you about their sort of true underlying unseen batting average? Because that's what you're actually interested and so we were talking about this. Is there like a way that we can formulate this and lake a sort of Beijing way and how one of the very reasonable guesses that you can make is. Well my best. Guess about what they're batting average is willing to be in. The future is less. Just take what it's been in the past right. You say okay. They've done so well. They're going to continue doing exactly as well. Yeah Yeah So. 
So they batted .250 in the past, so let's guess .250 in the future. And that's pretty solid for the major leagues, I think; .250 is, you know, okay. Anyway, your best guess is .250, and maybe there's a standard deviation that you can assign to that, which would probably also be taken from data, so .250 plus or minus .020 or something like this. Most of the time you expect them to be between .230 and .270. So then the question is: let's suppose that instead of just having this one baseball player, you have an entire baseball team. So you have this person's RBI... no, that's not it... this person's batting average, and you have the shortstop's batting average, the third baseman's batting average, the catcher's batting average; you have a whole bunch of other information now available to you. And the question is: is there any way that you can use this other information to help you make a more informed guess, for any one individual, about what you would predict their batting average to be in the future? Right. So let me propose to you two different reasonable guesses that you can make. The first reasonable guess is: let's just treat our baseball player of interest independently of the rest of the team. Let's ignore the rest of the team, and my best guess as to what this player is going to do in the future is just to say: we're halfway through the season, so we have stats on the first half of the season, and for the second half of the season my best guess is they're going to have a batting average equal to the first half. So we'll call this the individual mean, if you like; that's my best guess, or my estimator. In fact, "estimator" is usually what you'd hear this called if you were reading a statistics paper. You've got to have a fancier term.
Yeah, I think maybe estimators can be a little bit more complicated, but stay with me: my estimator is the mean, the individual mean. So that's strategy number one. Strategy number two is, instead, let's say that my estimator is the mean of all the means: the average of the pitcher and the catcher and the third baseman and the shortstop and the left fielder and all the rest. Right. Now, this one actually kind of confused me, not knowing where you're going with this. My intuition says your player of interest's batting average is roughly independent, generally independent, of the batting averages of the rest of the team. I mean, there are probably some exceptions, like if you have especially really, really bad morale, that might impact it, but I just don't personally see how the other players' batting averages would have any impact on the player of interest. And furthermore, it seems strange to make that connection if you only have past data for all of your players. For example, I could imagine if you had half of the season for your player of interest, but you had the entire season's worth of data, including the future, for all of your other players; I don't know, that feels a little bit more tractable. But I also think, based on the fact that you brought this up, that my intuition is completely wrong. So let me give you maybe an intuitive case which might make it more apparent why the other players could be relevant. Let's suppose that your player of interest was just recently traded to this team, or was injured or something, and for some reason, in the first half of the season, this batter was only up to bat two times, and as it happened, got extremely lucky and hit home runs on both of those two at-bats. So now our empirical guess,
if we're taking just the mean from the first half of the season, is that the player is going to bat a thousand for the remainder of the season. Whereas, let's say, if you were to look at the team as a whole, you'd find that by and large they're batting, you know, .250 plus or minus a bit. So in that case, I would argue that the context of the rest of the team is giving you some information, which is basically that the player that you're interested in seems to be a little bit of an outlier. And so in this case you probably would want to come up with some kind of compromise between what the group as a whole seems to be doing and, you know, the fact that maybe there is some information in this player having done really, really well in the first half, small sample though it is. So you'd say, maybe this guy actually is pretty good, he's going to be above average, but I don't think anyone would expect him to bat a thousand for the rest of the season.
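The compromise the hosts describe is what statisticians call shrinkage: pulling a noisy individual estimate toward the group mean, with the pull weakening as more individual data arrives. Here is a minimal sketch of one common form, a beta-binomial-style estimator; the function name and the `prior_strength` default are illustrative choices of mine, not something stated in the episode:

```python
def shrunk_batting_average(hits, at_bats, team_avg, prior_strength=100.0):
    """Blend a player's empirical average toward the team average.

    `prior_strength` acts like a number of imaginary at-bats at the team
    average: with few real at-bats the estimate stays near `team_avg`,
    and with many at-bats it approaches the player's own hits / at_bats.
    """
    return (hits + prior_strength * team_avg) / (at_bats + prior_strength)
```

For the lucky batter who went 2-for-2 on a .250 team, `shrunk_batting_average(2, 2, 0.250)` gives roughly .265, far from the naive estimate of 1.000, which is exactly the "compromise" described above.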
The European Commission's approach to blockchain
"Hello and welcome to inch blocks urine. Decadent podcast to blockchain ans- mark contracts. I'm will eat. Scuff your host for this week's podcast. We be discussing the European Commission approach to blockchain. And I'm very pleased to have Peters Zilkha this head of unit digitalization Blockchain Digital Single Market Directorate at the European Commission Peteris. You've got a lot of titles. Many thank you many. Thanks for joining us today. Could you please give us a brief introduction on yourself? I'm glad to I mean I'm a lawyer by background. I have the JD degree from University of Southern California before they had a political science degree. And though I've never really practiced their California state bar for almost thirty years now and since Two Thousand and five Florence. My Country Latvia joined the European Union. I've been ahead of unit in the European Commission and digital innovation. Blockchain is what I've been working on and you could say to US sometimes perhaps over used term but It's a little bit my passion. I've been interested in walk chain and tech since about two thousand and twelve so perhaps not at the very beginning but at least relatively early for the public service this is also why I'm The original co chair of the Fintech Task Force. I have my second Co-chair coming from the financial services side. And then I'm from digital single market and I mean in both these areas I'm working in legislation and policy in funding infrastructure and research and managing it as well as working with with stakeholders and international cooperation. So it's an interesting bunch of things to work on. I'm glad to be doing it well. As you sitting the key term as passion because you're effectively getting the job of three other men so very impressive So Peter is As it is customary here at Inter blocks. Could you please explain our listeners? What is blockchain? And how does it work? Well glad to try. This is one of these things. 
It's a little bit of a communications challenge and exercise. But I would say that it's simply a growing list of records, or blocks, in a ledger that are linked using cryptography, generally managed by a peer-to-peer network adhering to a protocol for communication between the nodes and for validation of the new blocks. That perhaps already gets a little technical for some listeners, but I would say it's a way of validating transactions and data in an immutable, permanent way, so that you can be sure they haven't been tampered with, that you don't have double spending of a value, and that you can transfer data along with that value. That's the way that we see it. And I think it's also important, because some people are sometimes negative: they say blockchain is something bad because, let's say, it uses a lot of energy, if they take the original proof of work, and that everything that doesn't do that is not blockchain. We take a very wide view. I mean, distributed ledger technologies, hashgraphs, tangles, these types of blockchain-inspired technologies are blockchain for us. We're not trying to freeze history in 2009, when the Bitcoin paper was published, or at some other point; it's a developing technology. And I would say what is really important is the element of decentralization, which is not black and white. It is a gradient, going from something which may not exist, being completely centralized, to something which also may not exist, being completely decentralized, but actually allowing a degree of decentralization that a single database, or even some federated databases, don't allow. So this is where I think it gets exciting, and where it makes it possible for a diverse group of actors to work together while preserving their autonomy. Excellent, I really loved that element of your definition:
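To make the "blocks linked using cryptography" part of that definition concrete, here is a toy sketch of a hash-linked ledger. It is purely illustrative (the function names are my own, and real blockchains add consensus, signatures, and much more), but it shows why tampering is detectable:

```python
import hashlib
import json

def make_block(data, prev_hash):
    """Create a block whose hash commits to its data and its predecessor."""
    body = {"data": data, "prev_hash": prev_hash}
    block = dict(body)
    block["hash"] = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return block

def chain_is_valid(chain):
    """Check every block's own hash and every link to the previous block."""
    for i, block in enumerate(chain):
        body = {"data": block["data"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()
        ).hexdigest()
        if block["hash"] != expected:
            return False  # the block's contents were tampered with
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False  # the link to the predecessor is broken
    return True
```

Changing the data in any earlier block changes its hash and breaks every later link, which is what makes the ledger effectively immutable without rewriting the whole chain.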
it's a gradient of decentralisation; that's spot on. Now, could you introduce to us the different bodies within the European Commission? You have the Digital Innovation and Blockchain unit, but there are other bodies within the Commission that are there to research, enable, and further the development of blockchain in the EU. Perhaps give us an overview, and we'll do a deep dive into some of them. Sure. Starting with my unit: we're kind of the policy leaders on blockchain as a technology, but we're not the programmers. As I said, I'm a lawyer and a political scientist; some of my colleagues are engineers, but we're more economists, lawyers, people looking at digital policy. In my unit we have the EU Blockchain Observatory and Forum, which is a think tank working for us that has a whole set of reports and videos and regular workshops, which used to be physical and, at least right now, are virtual. We also have the European Blockchain Partnership that my unit runs. This is twenty-nine different countries, all twenty-seven EU member states plus Norway and Liechtenstein, who are building a European Blockchain Services Infrastructure together. I mean actually building an infrastructure; this is piloting, not testing. We're putting public services on the blockchain where justified, and we had quite a filtering process to see which use cases were justified in utilizing the blockchain. This is also something you could call a regulatory sandbox, because while the countries and we are working together, we obviously have to look at European Union legislation and at national legislation. Most likely you won't find anything where blockchain is prohibited, but you certainly won't find many cases where it is specifically allowed, though you're getting some legislation in France, in Malta, and in other countries
that specifically provides a legal route for blockchain transactions. And then we also collaborate with the International Association of Trusted Blockchain Applications (INATBA) as a stakeholder organization. I myself am on the OECD's policy expert advisory board on blockchain, so we collaborate with the OECD, with the United Nations, and with others: in INATBA there is a global governmental advisory board, and there are also the OECD activities. Among other activities, we would probably have participated in the spring meetings of the World Bank and the International Monetary Fund this year; I spoke myself at the IMF FinTech roundtable last year. You also have the collaboration in the FinTech Task Force: as I said, from our side, the digital single market, I gave a basic description, and on the other side you have the financial markets colleagues. With the financial markets colleagues we are collaborating on possible legislation on digital assets; we just closed the public consultation on digital assets, hearing what the stakeholders and the community have to say. And in another context, that of the Digital Services Act, which is an updating of the e-Commerce Directive along with addressing the platforms, we are seeing whether, for smart contracts, we have to do something to ensure that there is not any fragmentation of requirements for smart contracts across the digital single market and the twenty-seven member states; that is something that we want to avoid.
Interview with Igor Perisic
"Hello and welcome to the today podcast. I'm your host Kathleen Mulch and I'm your host Ronald Schmeltzer. Our guest today. It's ego or parasitic. Who is the chief data officer and VP of Engineering at Lincoln? High your thank you so much for joining us today. Yeah thank you eager for joining us. We'd like to start by having you introduce yourself to our listeners and tell them a little bit about your background and your current role at Lincoln certainly I guess from an education perspective. I grew up in Switzerland. And I have an Undergrad degree in applied math from the which are then follow up with graduate studies in statistics in the US somebody A statistician and they not only my career just pivoted to the industry and since then was essentially involve which was originally called it say applied statistics than data mining machine learning and data signs. And now they I. I guess it's GonNa depending on what it was at that time to liberal. Just change the funny thank you though is at the beginning. When I studied into domain. It didn't have the labels and people didn't understand what it was interested in doing or what I was doing so they would always people back to statistics until you're doing stuff like the Census Bureau or FDA drug pools etc etc. Any took awhile to sorta registered in the sense from the beginning in my career. I was always interested in to leveraging data provide to enhance individuals ability to do the job and to achieve more today Lincoln. I'm the chief data officer and the VP of engineering part of the engineering orgnization. From the beginning. Because I'd like to build things and the my responsibility than on the engineering side. It's everything that covers the data spectrum from online to offline distributed systems that at all as to store data about storage systems with semantics so the data flow from one service to another one thinks like Cobb we invented on open source. 
And also other big systems at scale: think Hadoop, Spark, etc. Then, on top of that infrastructure, my teams are responsible for all the AI that enables us to personalize the experience for members. On the chief data officer side, I'm responsible for making sure that we use data in a responsible fashion, according to our terms of service and to regulations. I sit kind of in the middle between product, legal, and engineering, making sure that we're on the same page and that information and communication flow across, so that everybody is consulted and knows what's happening. Excellent. Well, you know, it's interesting: you mentioned data science, and data science both as a profession and as a role has really emerged quite a bit, most of it very recently, right? It's interesting, when you look at people trying to understand what data science is, you'd think there'd be a well-established definition, but there really isn't. Some people look at it primarily from the statistics and probability perspective, or a data analytics perspective; some people look at it from the data management perspective and say it's mostly data and a little bit of science; and some people see it yet other ways. So it's kind of interesting. But the chief data officer role is also a fairly new thing, even though, of course, we've been dependent on data for decades, ever since we've had data. That also changes through time. Data science at the beginning was doing a lot of things which today we would already separate out. For example, data engineering was part of data science at the beginning, around 2007 or 2008, just because all these data systems did not exist, so you had to build them. There was the need, and the innovation to drive them, and who felt that need? Well, the data scientists. So data engineering was part of it.
It's the same thing for the chief data officer: the demand for making sure that the way individuals manipulate or use data within an organization is much more controlled than it was before kind of elevated that role. And people flock to it from different perspectives. Some people come from the legal side and then gravitate toward engineering; some people come from the data science side, moving toward the legal as well as the engineering aspects; and some people come from the engineering side and understand the legal component and the science components. So it always changes through time: the label kind of stays, but the definition of what you're really doing changes and adapts. Yeah, I think that's important, and we actually do like the idea of data-centricity, understanding data and the role of data at the C level, which means that it has visibility at the highest, most strategic level of the organization. Traditionally, maybe, people have thought of it as a component of the chief information officer's role or the chief technology officer's role, so that's really good to hear your perspective on, especially at LinkedIn. So, one of the things people may not be aware of: obviously, one of the things we're trying to do with this data is to get more insights and use the data to train systems, with machine learning, to apply to a wide range of applications. And LinkedIn in particular uses AI in many ways that perhaps users may not be aware of on a daily basis. So can you give us some insight into, and an outline of, the many different ways that LinkedIn is applying AI and how it's enhancing the user experience? So, we leverage AI at LinkedIn always in the context of our vision of using it to create economic opportunity for every member of the global workforce, and within that space we describe AI as being the oxygen for LinkedIn: it's embedded just about everywhere. Think about creating economic opportunity at the scale that we want to create it.
We have roughly 675 million members on the platform, and there are, nowadays, I'm not too sure how many jobs anymore. But to do that match, between that big a set of individuals and that big a set of jobs, you need to have tools; you need AI to allow it to be personalized, to be good at that level. So a lot of what we do is making sure that all of that information is personalized, to make individuals smarter and enable them to do their work better, without flooding them with noise. That's on the visible side. There is also a very big part which is behind the scenes: for example, our A/B testing platform and the way that we figure out how to route traffic to all these endpoints and access points to enable speedy delivery of the experience to members, which are simple, uncomplicated things; and anomaly detection services for when something is different from what we expected. It's just about everywhere.
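As a purely illustrative aside (this is not LinkedIn's actual implementation; the function name and threshold are my own choices), the simplest form of the anomaly detection mentioned here is flagging metric values that sit far from the mean of the series:

```python
from statistics import mean, stdev

def zscore_anomalies(values, threshold=3.0):
    """Return indices of points more than `threshold` standard deviations
    from the mean of the series.

    A toy baseline only; production systems typically use streaming,
    seasonality-aware methods rather than a single global z-score.
    """
    if len(values) < 2:
        return []
    mu = mean(values)
    sigma = stdev(values)
    if sigma == 0:
        return []  # a perfectly flat series has no outliers by this rule
    return [i for i, v in enumerate(values) if abs(v - mu) / sigma > threshold]
```

A flat request-rate series with one large spike would flag only the spike's index, which is the behavior an alerting service wants from such a baseline.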
Improving the Restaurant Industry with Voice Technology with Derrick Johnson, CEO of Encounter AI
"Today my guess is Derek Johnson the founder of encounter a. I welcome Derek. Thanks for being here thanks Kurt. Thanks for having me now. You've been working in the tech space for a while now. Where did your interest in voice technology coming? Yes so I've always been technologies being exposed to computers barely early. You know. And so in brochure from uncle sweat computers on various electronics. And then you know Bacne Day. We had the service call America Online and so like every time you log onto the platform. You regretted with this verbiage. You have male riding so that forced me to understand the voice possible with computers our from a technical standpoint. It wasn't conversational. Aix Time but it did for shadow will be right and so you know today. We reached a tipping point. Where it's actually possible to deliver those faces. And so I wanted to explore a lot of ideas about adult time and now was the top. So and your company encounter. Ai Is voiced technology for the restaurant industry. And I WANNA I start off by saying congratulations to you on choosing niche and really trying to dominate in that area. Can you tell us about the idea for encounter a and how it came about and what it does definitely so from career perspective. I had stints. Companies like Disney Spent some time at censure innovation labs plus I had some experience as an owner investor for the restaurant brand freshly right in numerous other regional restaurants and so I saw first hand the needs of the industry scale ability in operational perspective and I wanted to blend my data science and artificial intelligence background kind of with my love and passion for the restaurant and right in a second. It's a personal journey. No unfortunately I lost both of my parents at an early age to chronic disease right and so I decided the best way to of Transform. Health was to start at the source and one of the facts. 
most people don't know is that, you know, anywhere between thirty-five and forty percent of the US population consumes drive-thru food. Right, and so that was our mark: how do we transform health at one of the highest-impact points? From there we developed a solution that enables contactless ordering across restaurant drive-thrus, in-store kiosks where you don't have to touch the screen, as well as tabletop solutions like the ones you see at the largest sit-down brands like Chili's, etc. Right. So, you know, at the core, our platform collaboratively helps ordering associates, meaning that it augments them, helping them greet the customer, take the order, and even accept payment. Plus we have what we call the consumer magic: if you're a repeat customer, you know, we can do various things around offering you different preferences, whether that's, you know, seasonality, intentionality, or allergies that you always want taken into consideration. And so the goal is just to give the customer the ubiquitous, digitally enhanced customer experience that they're used to in their homes, on their phones, etc., brought into real life, right? And so that was why I started Encounter AI, and that's what we're doing today. And your company is a venture-backed startup. Why did you choose to go that route, and what was the process like for you in getting funding? Yes, so for most businesses, you know, oftentimes they're tied to brick and mortar, they're tied to physical space, right. And so for a SaaS business such as ours, you know, funding has traditionally been the domain of friends and family, or comes from those types of networks, or venture capital. And so for us,
we took the venture capital route because we wanted to move fast, and we also wanted to meet customer expectations, right. And so that meant that, one, from a capital perspective we needed it, and two, we wanted networks of individuals who had built world-class, game-changing products, with the intent that they could help us. And so the process was that we applied for venture capital via a concept called an accelerator, right, where essentially investors provide capital plus wraparound services to prepare startup founders on kind of the best practices of being, one, a startup company, but then, two, to complement their teams where appropriate, such that they have the scale and velocity necessary to be successful. And I'd love it if you could share with us the experience so far, from a user perspective and a business perspective, of using Encounter AI; if you can share any stats or stories, that'd be great. Yeah, so when you look at the restaurant industry, you know, you have your large companies that are billion-dollar corporations, and you have your smaller mom-and-pop diners and cafes, right. We wanted to ensure that we had a solution that worked for both. In our early implementations, you know, from a technical perspective, we saw a thirty percent improvement in speed and near-human-level accuracy across regional accents, dialects, and slang, right. And what that means, as an example: at Burger King, for instance, their salads have a proper menu name, right, and I don't think anyone in the history of ordering has ever said that name, right. So it's about being able to take the technology, apply it to different sectors, and deliver an experience customers really want. You know, one restaurant owner was in a college town; his restaurant was right off the expressway, right. So what that meant was that for breakfast
You know, for breakfast there was a peak in demand, for lunch a peak in demand, for dinner a peak in demand, but throughout the day there largely wasn't enough volume for him to have more than a skeleton crew. So with our technology, we could bring in and augment that ordering associate, such that one person could now seamlessly handle two restaurant drive-thrus. And so that gave the owner a fighting chance to keep his doors open when he was looking at shutting down, and furthermore it allowed the customer associate in the store to feel like he or she also had help. It transformed how our technology has been applied and also changed a lot of minds.

And you know, when you were sending me some of the videos and some of the information on your company, you had talked about how, when you're using this kind of AI for restaurants, not only is it obviously saving time and saving money for businesses, since they don't need to hire as many people to help with this sort of thing, but you also said that more people feel comfortable talking to an AI to order something than they do to a person, and sometimes they're ordering more than they would otherwise, so you're actually having the restaurants see a higher level of revenue coming in.

Correct. And oftentimes, when you're talking to a person and you want a lot of food, sometimes you have that emotional lens, right, where you don't want to appear a glutton, or you don't want it to appear that you're ordering more than you should be. Talking to a virtual assistant or a conversational assistant, you don't have that, so we'll see higher ticket sales, but also the upselling of items that you potentially like but were potentially not aware of. So, for example, for me:
I love chocolate shakes. But what that means might vary from brand to brand. With our technology, even though the ordering associate doesn't know I love chocolate shakes, potentially our solution does, right? And so we can offer that upsell to me, brand-agnostically, across multiple brands, and that often leads to higher ticket values.

That's amazing. So you're saying, if there were multiple restaurants using your AI, let's say you went to a local mom-and-pop shop, you went to Chili's, you went to a Burger King, if they were all using your AI, they would know that you as the user love chocolate shakes and be able to offer, "Hey, did you know we have this here, if you wanted it?"

Absolutely.

That's amazing. I love that. That is exactly how signal and AI should work, so I think that's really great.
Protecting Individual-Level Census Data with Differential Privacy
"Hey Katie. Hi Ben. So I got a postcard in the mail from the census survey. Did you fill it out? I filled it out, yeah, but it got me thinking: how do I know what is happening with the data that's being collected? Like, it's always great to collect good data, but in this case I'm in that data set. So are there any protections that are being put in place for census data?

Really interesting question, and I'm glad you asked. Total coincidence, because this is what I wanted to talk about anyway. Yeah, so we're going to talk about differential privacy today, and we'll use the census as an example. But it's a topic that's generally interesting to anyone who works with data about people, which, if you're listening to Linear Digressions, is probably the kind of data you work with. So we'll focus on the example of the census here today, because this is one of the biggest, most expensive, and most famous data sets on people that exists, but the topic of differential privacy is general to really any data that you have. The problem they have in the census is collecting this very granular, detailed data on everyone in America, ostensibly, although I think we all know that that's probably not totally realistic, but on as many people as possible. And a lot of people are understandably a little bit nervous about: how is my data going to be used, and what protections are there that my individual-level, personal data won't be disclosed through the downstream uses of this data set?

And I guess there are kind of two things in that. One is: how do we know that the government won't use this data to, say, target people who are undocumented? But then the other piece of it is: if the data set is out there for researchers to use, how do I know that the researchers won't be able to kind of pull it apart and find me in that data set? And the second one is what we're talking about today.
Yeah, we're going to focus on the second use case, and it's an important one. And the meaning of the word "research" in this context is actually pretty broad. It can refer to folks like academic researchers who are getting versions of this data set to write sociology or political science papers, but it also refers to the way that many parts of the government just run and operate. So when things like congressional districts are being drawn, they're being drawn on the basis of census results. When state and local governments are asking for resources from the federal government, they're doing so on the basis of the number of people who live in their jurisdiction. In some cases, if you have a certain makeup in terms of socioeconomic status or race or something like that, that can sometimes be the basis of additional funds that you can request, these kinds of things. So it's not just like, oh, we're kind of interested in this in an academic sense; it's actually pretty important for the functioning of many of these pieces. And so there are different levels of data disclosure that are allowed for different types of research. Obviously, if you're making something externally available to an academic researcher who's going to publish on the basis of that data, there might be a different set of expectations versus internal usage for bookkeeping and accounting for the operations of the government. But in general, the question that you might have, especially for the case where pieces of the data set with your individual information in them are being made publicly available: you, as a person in that data set, might be wondering what someone who is smart and motivated and has access to that data set could discover about you as an individual. That's where differential privacy comes in. And that might seem a little far-fetched.
But we've had a number of episodes where we've gone into the details of how what seem like fairly anonymous data sets, even intentionally anonymized data sets, can be picked apart: you can kind of back out individual details, especially when you combine them with another source. So although it may seem far-fetched, it actually is feasible, and it does happen. There's a fun story about this. It's called a record linkage attack. And so that's where the data set that you release does not itself have any personally identifying information, but there's a way that you could link it with perhaps some other data set, maybe a data set that doesn't even exist yet, but that, when you join them together, lets you identify individuals. A fun story about this that I read while doing some research: there is a woman who is, if I'm not mistaken, a professor at Harvard now. She's very distinguished in this sort of special field of topics in data science, like ethics, privacy, and the inequality of algorithms when they're being applied to, like, minority groups, these kinds of things. Her name is Latanya Sweeney. When she was a graduate student at MIT, if I recall correctly, so this was in Massachusetts, there was a data set that was released of medical records of a whole bunch of people who were, I think, state employees, or there was some kind of public database of medical records, and the governor said, look, we have taken off everybody's names, we've taken off any publicly identifying information, so if you are an individual who's in this data set, don't worry, nobody's going to be able to know that it's you. So, as a graduate student, she figured out how to do a record linkage operation: she figured out how to join this publicly released, de-identified medical records database to another database that had PII, personally identifiable information, and was able to use that information to find the governor in the medical records database. Oh, that's funny.
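The join Sweeney performed can be sketched in a few lines of Python. Everything below is a toy reconstruction, not the actual Massachusetts data: the records, names, and values are hypothetical stand-ins, though the real attack famously linked on the same quasi-identifiers (ZIP code, birth date, and sex).

```python
# Hypothetical "de-identified" medical records: names removed, but
# quasi-identifiers (ZIP code, birth date, sex) remain in each row.
medical = [
    {"zip": "02138", "birth_date": "1945-07-31", "sex": "M", "diagnosis": "hypertension"},
    {"zip": "02139", "birth_date": "1962-01-15", "sex": "F", "diagnosis": "asthma"},
]

# Hypothetical public voter roll: names PLUS the same quasi-identifiers.
voters = [
    {"name": "William Weld", "zip": "02138", "birth_date": "1945-07-31", "sex": "M"},
    {"name": "Jane Doe", "zip": "02139", "birth_date": "1962-01-15", "sex": "F"},
]

def link(records, roll):
    """The linkage attack is just an inner join on the shared quasi-identifiers."""
    key = lambda r: (r["zip"], r["birth_date"], r["sex"])
    by_key = {key(v): v["name"] for v in roll}
    return [
        {"name": by_key[key(r)], "diagnosis": r["diagnosis"]}
        for r in records
        if key(r) in by_key
    ]

# Each "anonymous" medical record now carries a name again.
reidentified = link(medical, voters)
```

The point is that neither table is identifying on its own; the privacy failure only appears when the two are joined, which is why removing names alone is not enough.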
I know, so it was really devastating, in a very direct way that sticks with you. This was in, I think, some of the earlier days of differential privacy, when people were just starting to think about how having your data out in public view, even if you thought it was de-identified, was not entirely secure. So yeah, it's pretty cool. So anyway, the place where we are now as a society, or as an academic community, let's say, is that folks have been thinking about this problem for a while, because obviously it's a problem that's really important to a lot of folks who have data being collected about them, which is basically everyone. And tech companies, and the Census Bureau, which we're going to focus on today, have different ways of dealing with it, and one of the methods is differential privacy. That's going to be the main focus here today. So differential privacy is kind of interesting. The rough idea is that you add noise to the data set in such a way that, if you do a particular calculation on that data set, you'll get roughly the same answer regardless of whether a given individual is in the data set or not. So here's a simple example. Let's suppose that you have a data set and you want to run a query on it, like: what is the average salary of people in this data set? And you have individual-level salary information for all of them. Or net worth, let's use net worth, the net worth of all the people in this data set. And there are one hundred people in the data set. And let's suppose that there are two different versions of the data set, and I give them both to you, and one of those versions has Bill Gates in it and one of them doesn't. So the average net worth in the one with Bill Gates, say it has one hundred people in it, is going to be something like a billion dollars. Yeah. And then the other one, without Bill Gates in it, will have an average net worth of,
I dunno, whatever the net worth is for literally any other group of, say, a hundred people, like the hundred people who live in my building. It would be in the tens of thousands of dollars, the hundreds of thousands of dollars perhaps, depending on the group that you have. Anyway, you get the idea. Yeah. So the idea of differential privacy is that we then add a certain amount of noise to that output, so that what you'll get instead is a range: the answer plus or minus some uncertainty. And that uncertainty can change when you query the data set multiple times, and things like this. You never quite know if you're getting the precisely correct answer, and you're probably usually not, but it does mean that, well, maybe Bill Gates would be pretty hard to hide in a data set like that, but in many cases a little bit of fuzzing can obscure whether any particular person is in there or not.
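The "add noise to the output" idea described above is, in its simplest form, the Laplace mechanism. Here's a minimal sketch, not the Census Bureau's actual implementation: it assumes values are clipped to a known range and that the number of people is public, and the function name, the epsilon value, and the population below are all illustrative.

```python
import random

def private_mean(values, epsilon, value_range):
    """Differentially private mean via the Laplace mechanism.

    With n fixed and each value clipped to [0, value_range], changing one
    person's record moves the mean by at most value_range / n (the query's
    sensitivity), so Laplace noise with scale sensitivity / epsilon hides
    any single individual's contribution.
    """
    n = len(values)
    sensitivity = value_range / n
    true_mean = sum(values) / n
    # The difference of two independent Exponential draws with mean b
    # is a Laplace(0, b) sample; random.expovariate(1/b) has mean b.
    b = sensitivity / epsilon
    noise = random.expovariate(1 / b) - random.expovariate(1 / b)
    return true_mean + noise

# One hundred hypothetical net worths, clipped to a $0-$1M range.
population = [random.uniform(20_000, 500_000) for _ in range(100)]
answer = private_mean(population, epsilon=1.0, value_range=1_000_000)
```

Note the trade-off the hosts describe: a smaller epsilon means more noise and more privacy, and repeated queries each leak a little, which is why real deployments track a total "privacy budget."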
Interpretable AI in Healthcare
"One of the important areas in which model interpretability should have an impact, and I'd argue perhaps the most important area, is healthcare. After all, what good is some black box? It's worth something if it's reliable, but better than a black box would be a transparent box, and while that makes a great pull quote, it's easier said than done. There's also an interesting aside here about interpretable for whom, patient or doctor, and my concern is the doctor, because the last thing I want is a patient self-diagnosing, getting access to AI tools to essentially make the sort of inferences that you need a guided, trained eye to make. So in my mind, successfully applied machine learning in healthcare is really about helping the doctors, the clinicians, the radiologists, the people that are operating the machines, interpreting the evidence, and doing that sort of thing. So surely there's some interesting work specifically on model interpretability in the context of healthcare, right? This week on the show I speak with Jayaraman Thiagarajan about his recent paper, "Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models."

I'm a computer scientist at Lawrence Livermore National Laboratory, in the lab's scientific computing division, doing research in machine learning and its applications.

And I asked you on today most notably to discuss your recent paper, "Calibrating Healthcare AI: Towards Reliable and Interpretable Deep Predictive Models." This is right at the intersection of the things we like to get into here: interpretability and ML. I'm not an expert in healthcare, and I know it does take some expertise. Can you tell me a little bit about your background, how you've learned these two fields, and whatever their overlap is?

So my background is in machine learning and signal processing, applied to problems in computer vision. However, when I started working on applying these to real-world problems,
we typically focus on a variety of scientific applications in which data-driven modeling could potentially be used, and that's where I started working, at Lawrence Livermore, in the biomedicine space. I've been fortunate enough to work with domain experts, well beyond my basic biology classes from school, so I started learning the domain alongside these projects, and my collaborators, including the researchers at other places that I work with, have been helping me get on top of some of the challenges that exist and how we could potentially approach them with some of the tools we are building for general machine vision problems.

I see a lot of headlines that brag, for certain problems, maybe it's radiology and things like that, that algorithms are now achieving human-level performance, although when you look a little deeper, maybe that's perhaps a stretch. What's your perspective on the current state of the art in computer vision? How close are we to that human-level goal we're all seemingly after?

I think that's a philosophical question on some level. If you actually talk to a doctor, they'll definitely question what we mean by human performance. If you dig down and go to the doctors and ask them about the problems that you really want to help them with, many times the kinds of problems that the actual doctors, the people who are treating really difficult conditions, are looking to solve are really, really challenging problems, and those are often not captured by the benchmarking statistics the vision community typically builds. So this is what happens: even though we are producing a lot of promising, fancy model-building diagnostic tools, they might not be directly impacting healthcare the way humanity expected. However, this conversation is changing. A lot more practitioners are getting involved and discussing these issues with us. It's not anymore like taking an automated tool, publishing it at a computer vision or machine learning venue, and no doctor ever looks at it.

So that has changed. And what that means is that what has to happen is to define realistic problems that are actually going to have an impact, which is not very easy to do. You have these awesome algorithms and you go looking for problems for them, but really this needs a more fundamental rethinking as to what healthcare problems need to be solved, which is the actual big challenge. For example, take something like a COVID infection, where we don't really know what kind of data could potentially even detect it well. We need to find the right data and the right problems in order to detect things, and more importantly, these models can be used to gain new insights that the doctors do not already have. So in some sense it is going to be more than merely automating, and the future is promising. It's not just about making difficult jobs faster; it is also about doing tasks that are typically hard even for a human to solve, and enabling them to do better. Sometimes it's not even replacing them; many times it's assisting them, helping them navigate to the right answer.
Sony Unveils the World's First Camera Sensors with Built-in AI
"Sony announced that it has developed what it is calling the world's first image sensors with built-in artificial intelligence, which would give intelligent vision to cameras. What's this now? Quoting Bloomberg, which called it the first of its kind: Sony said the technology would give intelligent vision to cameras for retail and industrial applications. The new sensors are akin to tiny self-contained computers, incorporating a logic processor and memory. They're capable of image recognition without generating any images, allowing them to do tasks like identifying, analyzing, or counting objects without offloading any information to a separate chip. Sony said the method provides increased privacy while also making it possible to do near-instant analysis and object tracking. The new AI-augmented sensors are capable of capturing a regular twelve-megapixel image, or 4K video at up to sixty frames per second, or, alternatively, of providing only metadata about what the sensor has seen.
ICLR: accessible, inclusive, virtual
"I'm Katherine Gorman. And I'm Neil Lawrence. And today we are bringing you, breaking our format, our usual format, our new usual format, a little bit, Neil: we have the opportunity to sit down with the general chair and senior program chair of this year's iteration of the International Conference on Learning Representations, ICLR, which was entirely virtual this year: Alexander Rush of Cornell and Shakir Mohamed of DeepMind. Thank you so much for joining us today, I really appreciate it.

Excellent, Katherine, great to be here.

I'd love to do a little bit of background on ICLR, mainly to give us a sort of context for the conference in the ecosystem of other conferences. I want to know: how did the conference get to where it is today?

Yes, thank you, Neil. That sounds like a very good question: how did the conference get to where it is today? This year was already breaking ground; it was going to be taking place in Ethiopia, which I think is the first time any of the large conferences would have been held on the African continent. But then all of a sudden we had this massive global change, and it was decided that the conference would take place entirely virtually. So, Sasha, I'd love to hear from you a little bit more about how you see ICLR fitting into the larger system of conferences, what your experience with it has been, and how this change took place.

Sure. So this was the eighth International Conference on Learning Representations, but it was run as a workshop for several years before that; it started in two thousand thirteen. And I think what's remarkable about the conference is that it's been experiencing exponential growth for basically its entire history, and so it's a conference where everyone is kind of a newcomer each year; most people are experiencing it for their first time. I personally didn't really attend
ICLR till about three or four years ago, and I was coming to it from the natural language processing community, from conferences like ACL, which makes up a relatively large part of this multidisciplinary area. It's a conference that welcomes a large group of people doing different forms of representation learning and deep learning and things of that form. I think it differs from some of the other machine learning conferences in that it's a bit more experimental. I think a lot of people know it for its experimental reviewing format and for the structure of how it's laid out, and I think one of the reasons it was so interesting to work on is because it's a conference that allows for more experimentation in its format and its structure, and we took that to heart both in the venue this year and also in the change to the virtual conference format.

So there's the experimental stuff it does on its own, it was the first to do open reviewing, and then there's the experimental stuff that's forced upon it. So I find it so amazing what you had to do this year: first of all, you were taking a major conference to the African continent for the first time, which was a major undertaking in itself, and then you had to cancel the first major conference on the African continent. Tell us how, and I don't know who's best to speak to that. Shakir, do you want to tell us how that came about, and how you reacted?

Yes. I think we were actually quite far along in our work planning the Ethiopian conference. It was going to be in a great venue, the Millennium Hall, very close to the airport in Ethiopia. Lots of things had been set up, even down to the whole schedule of the conference itself. We were going to experiment with the conference in that way, with three parallel tracks. All the keynotes, all the speaking,
the setup of the posters, all of these kinds of things were done. And then it was a difficult time, the end of February, the beginning of March, when it was very clear that the long run of COVID would come into effect, and there was a lot of consideration and debate with many, many different kinds of people around actually cancelling the conference. But I think in the end, you know, it was obviously a good decision. It forced us to experiment in a new way, so I was pretty happy with the end result, to actually get to do it.

So what was the thing that Shakir was perhaps happiest about, about the way the conference went, the thing that surprised you the most? Because, I mean, it must have been so much stress: just organizing a major conference like this is major stress in the base case, and then organizing one where you have to reorganize the entire conference within the space of a few weeks, I just can't imagine it. But let's start with the positive things and the pats on the back. Let's say, Sasha, what was the thing that you were most pleasantly surprised about, about the conference?

Yeah, so I think there were a lot of things that were unexpected, kind of emergent behavior that came up during the conference itself. The part that I think I spent the most time on, and was most excited about, was the social interactions, particularly the chat and the socials that emerged. I think that was the part we were most worried about. It's the part I get the most out of at conferences: talking to experts in the field, having conversations that you didn't expect, or learning about papers that you didn't know were coming. We really built that around a Slack-like chat experience, and seeing the different topic rooms emerge, there was a very interesting creative-AI room that came out of nowhere. The community had several very interesting events
that were just so neat to attend. And then we also ran several mentoring sessions, just kind of out of nowhere in the middle of the conference, that were super interesting and almost better than what I would have imagined could have occurred, say, getting a drink at a bar at a normal conference.
AI Changing the Workforce, Interview with Valerio De Stefano
"Hello and welcome to the AI Today podcast. I'm your host, Kathleen Walch. And I'm your host, Ronald Schmelzer. Our guest today is Valerio De Stefano, who is professor of labor law at KU Leuven. Hi Valerio, thank you so much for joining us on today's podcast.

Thanks so much for the invitation, thrilled to be here.

Welcome, Valerio, and thanks for joining us. We'd like to start by having you introduce yourself to our listeners and tell us a little bit about your background and your current role, especially as it relates to artificial intelligence.

So, I'm professor of labor law at the University of Leuven in Belgium, and before that I was an official at the International Labour Office, which is a UN specialized agency that deals with everything that concerns labor and work. There I started working on technology's impacts on labor. I had been working for a number of years on platform work in the gig economy, and that was the first time I started to reflect on automated management. Platforms vastly use technology to manage and discipline workers: for instance, they allocate tasks and shifts using GPS, and they use software to constantly monitor work, by collecting, for instance, customer reviews and cutting off of the platform workers that don't meet the very high standards that are required to be met, or by taking screenshots of people that work online to show their clients that they are actually working. So platform work has been used as a pilot for management by algorithms, but management by algorithms is much broader than that. So my interest comes from the labor side, and when we talk about management and automation and how AI affects the workplace, well, we see that a lot of the debate is about the quantity of jobs: how many jobs are we going to lose, will a machine replace all or part of my job? But not so much on the quality side of it: how is technology going to impact my everyday working life, how is it used to rate my performance and discipline my work? So I think, as a researcher, I am trying to fill these gaps.
And one of the ways I've been doing this is by editing a special issue of a journal, the Comparative Labor Law and Policy Journal, that's precisely about automation, artificial intelligence, and labor protection, in which we gathered the contributions of many labor experts, sociologists, economists, lawyers, and industrial relations specialists, to investigate how technology is changing the workplace and affecting our work life beyond the question of losing jobs.

Yeah, that was a really interesting insight, thank you so much for your input on that. You know, some employers are having their employees use wearable tools that track emotions and stress by collecting data on heartbeats and the tone of voice, for example. So you say that most of these practices should be urgently restricted, because losing one's privacy, especially their internal privacy, mental privacy, arguably threatens one of the core elements of being human. Can you explain to us why you say this, and what data you may have to show how it can be misused?

So, having a system that tracks your emotions and reads your facial expressions or the tone of your voice is per se a great invasion of our private sphere and personality. So basically, when these systems operate, I'm giving away data that concerns the most private aspects of my life: my thoughts, my emotions. And in some cases I might not even be much aware of these emotions and thoughts myself, so basically managers may come to know more about me than I know about myself, and this data about emotions and stress levels in general generates a huge information asymmetry. Now, information asymmetry has always been there at the workplace, but this really tilts the scale to the employer's side. This is why I think this should be restricted, in a way, because we are experiencing something that was completely unheard of in the past: the fact that employers and managers can read my mind now. So the question is not about how the data could be misused; it is the very collection of the data
that should be called into question. A system that can read my mind, better and faster than I am willing to share this information with other people, is per se abusive. So we do not need data on specific misuses; the practice already raises issues per se, in my view.

You know, it's interesting, because data is at the heart of AI, and people have been freely giving away their data for many years in exchange for free services. And I think for a while we weren't even thinking about it; we were just signing up, giving away information, letting different companies and systems and apps that we were using track us, and we didn't think twice about it. But what happens when workplaces start requiring employees to use these tools that collect data on them, such as facial recognition technology or wearable tools that are able to track a variety of different things? This is where it starts to get maybe a little bit more gray, where we're not necessarily giving it out so freely, but almost being more forced to. So what have you seen discussed around this subject with regards to laws and regulations?

So, first of all, as a lawyer, I find the assumption that people, even customers, have truly consented to give away all this data per se questionable. When we started to give away that data, when we became members of Facebook or Twitter or whatever, we could not imagine how far-reaching the implications of that data could be, and how it could be used; it was not imaginable at that point. And even if this was described in the fine print, in thousands of pages of terms of reference, nobody ever reads those terms of service. Now, if you want to be a very technical, legalistic lawyer, you can say: yes, you gave your consent, you should have read the fine print, so too bad for you. But as a society, and in terms of policy, we also need to question how much consent is really available and valid when we start to give all this away. Now, aside from this, when it comes to employment,
things get even more complicated, because employment is per se unbalanced. The employer, in any legal system, has some amount of authority over workers: they can discipline the workers, they can monitor their work, they can direct their work. So it is already an unbalanced situation. And when it comes to employment, because of this imbalance, consent is really never to be taken for granted, in the sense that most people don't have a choice whether to work or not, whether to apply for a job or not, whether to be subject to certain devices, tools, and monitoring techniques or not. So there is not a real choice, and therefore there is not a free
AI Opportunity in Insurance, from Process Automation to Decision Support - with Gary Hagmueller
"This week's episode is focused squarely on insurance. There's a lot to keep track of in the space, from claims to underwriting to back-end process automation to customer service. Every six months, the landscape of AI vendors and known use cases in enterprise companies changes and alters, and part of our work involves staying on top of that. That means speaking to heads of AI and innovation leaders at companies you might know, like Geico, Allstate, or AXA, some of the biggest insurance players in the world, as well as staying on top of the startup ecosystem. This week we speak with one of the players in that startup ecosystem. Gary Hagmueller is the CEO and president of Clara Analytics. Clara Analytics is based in the Bay Area, and they are focused squarely on insurance artificial intelligence applications. Gary previously was the chief operating officer at Ayasdi, one of the rare companies in Silicon Valley to raise a hundred million dollars plus for an artificial intelligence company, and before that he was the CFO at Zuora, which is an incredibly successful subscription-management payments firm, again in the Bay Area. So Gary has a pretty storied past in the startup world. Clara has raised about twelve million, and they're certainly on the way up. Insurance is ripe for disruption, and there's plenty to cover, so Gary gives us his perspective on where AI is making its way into insurance and where he thinks it's going to make the biggest impact in the relative near term. Without further ado, we're going to hop right in. This is Gary Hagmueller with Clara Analytics, here on the AI in Business podcast.

So, Gary, I wanted to start us off with just your idea today as to where AI is making a difference in insurance: what functions is it being adopted into, where is the traction today, if we look at AI in the insurance enterprise?

Great question, Dan. Yeah, so there's definitely a whole bunch of different places where we're starting to see AI proliferate.
I will say it's probably very early days for AI, really, big time. So, you know, obviously we are very focused on the claims operations space, and we're seeing a variety of different places where this is getting applied. At least we're seeing it generally in two flavors. First, things that can kind of be automated away: think simple tasks that today you've got a human doing that maybe doesn't need to be done. The second place where we're seeing it, generally, is in places where there are very complex, weak signals that have a pretty large bearing on the outcome of whatever the person or the group is working on, and AI is really being used as an augmentation of human capability. So think about the ability to kind of see around the corner and figure out where the things are that could affect what they're working on, positively or negatively, and giving them something actionable. So, like I said, our focus is on claims ops, but a guy on my board works in underwriting, and we've seen a bunch of different places where this is starting to apply, even in the actuarial space. It really feels like there's a groundswell of interest and activity coming. I like your breakout here when you look at AI impact in insurance; maybe we could do this with any sector. You're talking about two categories. One is what can be automated away. I like the term; a lot of vendors are afraid to even use that phrase, because it comes across, you know, as immoral: you're one of those automation people stealing jobs. I hear a lot of vendors being far too tender with being able to say that phrase. Second, informing decisions. So it sounds like a shorthand for breaking things up. If we look at insurance, what might be an example of each, just to give people a nice representative lens into the space: some automated stuff and then some decision-informing? 
So I'm going to give you some thoughts on both of those, but I do want to touch on the point about automating away. I feel like that's a topic that comes up a lot in this whole AI discussion, and I don't think it's as sinister as you portrayed it. I think it's really a situation where there are a lot of tasks being done today that I guarantee you people do not want to do, and they're part of their regular job. So if you free them from doing tasks that they don't want to do and focus them on the things that they would rather be doing, that they are probably better at doing, that actually ends up making everybody better off. So to give you kind of an example: there are a lot of places where you have straight-through processing, where you can get a claim, the machine can analyze the claim, the machine can make a determination that this is a routine claim, and let's just go ahead and, you know, issue payment or issue settlement or whatever on this particular thing. That's maybe an example of things that kind of flow through: the machine can take care of it and just close it out without necessarily having to kick it up to somebody who is just going to look at it, roll their eyes, "it's just another one of these again," do a couple of things, close it out, and move on, right? So that's kind of an example of the automation flavor. The other side of it, think of it as kind of decision support, or kind of human enhancement, is how I would basically think about it. That's where, and this, by the way, is common across all areas of machine learning, if you're tapping into an appropriately large amount of data, you're going to begin to pick up weak signals in things that are actually determinative, signals that most humans aren't going to be able to go off and find. And that's for two reasons. Number one, 
they may have been doing this job for ten, fifteen, whatever years; they have a certain way of doing the job, and they're just never going to look at those other sources of data. The second part is that some of those sources live in places that people generally don't even look at. So if you've figured out how to tap into all these different data sources, you can then get a much more complete picture. In the case of the sorts of things that we do, we can get a much more complete sense of what's going on with an individual claim and give evidence on exactly how to attack this problem right now to mitigate loss, or do something that's going to make the claimant happier or settle faster, that sort of thing. Maybe there's an interesting sort of exercise that we could do. So I guess one quick thing: I certainly wouldn't call automation sinister per se. I think there are going to be some cases where someone gets freed up to do something more cognitively interesting and it's a thank-goodness kind of experience. There will be other times where, you know, a bunch of folks in India aren't needed for filing TPS reports anymore, you know what I mean; they'll have to find somebody else to work for, point blank, period. There's going to be that, and I think everybody needs to be pretty honest about it, but there will be plenty of experiences where we'll be able to move people up into more interesting work. When you look at a business, and obviously insurance is your space, and you aim to help business leaders think through where AI can find a fit, you look at a business and say, here are some identifiable candidates for our potentially automatable bucket, and here are some we can also identify for the decision bucket. How do we put on a pair of goggles to see those opportunities in insurance? What might be helpful?
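The straight-through-processing flow Gary describes can be sketched as a toy triage rule. This is purely illustrative: the fields, thresholds, and hand-coded rules below are invented for the example, not Clara Analytics' actual system, which would learn such signals from data rather than hard-code them.

```python
# Toy sketch of straight-through claim processing: routine claims are
# auto-settled, while claims showing complex signals are routed to a human
# adjuster for decision support. All fields and thresholds are hypothetical.

def triage_claim(claim: dict) -> str:
    """Return 'auto_settle' for routine claims, else 'human_review'."""
    routine = (
        claim["amount"] < 5_000             # small, predictable payout
        and not claim["attorney_involved"]  # no litigation signal
        and claim["injury_severity"] <= 2   # minor injury on a 1-5 scale
    )
    return "auto_settle" if routine else "human_review"

claims = [
    {"amount": 1_200, "attorney_involved": False, "injury_severity": 1},
    {"amount": 48_000, "attorney_involved": True, "injury_severity": 4},
]
print([triage_claim(c) for c in claims])  # ['auto_settle', 'human_review']
```

In a real deployment the routine/complex decision would come from a trained model rather than fixed thresholds, but the routing structure, auto-settle versus human augmentation, is the same two-flavor split discussed above.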
Understanding the COVID-19 Data Quality Problem with Sherri Rose
"Welcome to the podcast. Thank you for having me. It is great to have a chance to chat with you. I'm looking forward to digging into your background and your research and the things you're doing related to COVID to help out there. You know, let's start at the beginning. How did you become interested in machine learning, and in the intersection of that and healthcare? I always was very interested in science and mathematics and physics, and I didn't really have a good sense of how you could use that to solve problems when I was going to college. It was during college that I was exposed to this summer program called the Summer Institute for Training in Biostatistics, and it really sounded like what I was interested in, which was bringing quantitative reasoning and thinking to problems in health and public health. I realized very quickly that I needed more than my bachelor's degree in statistics in order to really solve a lot of those problems. I didn't actually get any training in machine learning in my bachelor's degree; I graduated in two thousand five and the curriculum definitely did not include it at that point. And so when I went to graduate school at UC Berkeley in biostatistics, that's where I saw the benefit of having really general frameworks in which to solve problems. That's when I started working on non-parametric machine learning and having these kind of big-picture ways to attack big problems in population health. For me, that's been both machine learning and non-parametric models for prediction, but also causal inference, and the driver for me was really the ability to use these flexible tools to solve problems in healthcare and medicine. It must have been helpful having that undergrad in stats. It's been very helpful. Actually, I started as a mechanical and aerospace engineering major. 
And I did not feel very invigorated by the coursework there, and I also was a little frustrated that I was often the only woman in the classes. There were a lot of reasons why it didn't feel like the right fit for me. I ended up taking, my second semester in college, a statistics course, and I immediately saw how statistics could be used for solving lots of different problems, in engineering as well, but for me statistics was really how I saw bringing all my interests together. You mentioned non-parametric machine learning. What is that, and how does that relate to both the broader field as well as the healthcare field? When I talk about non-parametric, I mean it in the very broad statistical sense: a non-parametric model is a larger model space where we're making many fewer assumptions, whereas with more standard parametric models we might be making strict assumptions about the functional form, the underlying unknown functional form, of the data. With a non-parametric approach, where I really have a large model space, I have a much better opportunity to uncover the truth with my machine learning estimator. So it's like you're not assuming a normal distribution, which has a couple of parameters, a mean and a standard deviation; it could be anything? Definitely not, definitely not; that would be a limiting assumption in your work. Yeah, absolutely, and most of the data that I work with does not conform to those types of strict assumptions. Talk a little bit more about the scope of your research interests and where you apply machine learning. It sounds like you are interested 
both in the systemic issues of the healthcare system, the relationships between the providers and the payers, as well as clinical issues. Absolutely. In health services research we're really interested in the whole broad scope of the healthcare system. That includes cost, quality, access to providers and services, and also health outcomes following care, so that clinical piece comes into the health outcomes following care. Some of the major areas I've worked in intersect with the health spending aspects, the financing aspects, like mental health and telemedicine and cardiovascular treatments. All of these things intersect within a system that relies on, you know, the cost, the quality, the access to providers. So having a research program that encompasses both pieces of that can allow you to ask and answer questions in more integrated ways. It's difficult, but I find that if you understand those underlying systems and try to bring them into your work when you're looking at clinical questions, it can help you inform better answers. And when you are looking at those kinds of questions, are you primarily trying to understand or influence? Great question. A lot of the work that I do is trying to understand some kind of phenomenon in the system, but influence, yes, in the sense that we're trying to inform policy. So, understanding the comparative effectiveness of multiple different types of treatments: I would like to understand which treatments have better health outcomes, but if we find that a particular treatment has very bad outcomes, we want to inform policy, to the FDA or to the relevant stakeholder, in order to potentially have that treatment removed from the market. And we're talking towards the end of April; many of us have been in some form or another of lockdown due to COVID. You mentioned that your dog may start barking. He may, he may. My neighbor, I think my neighbor has finished cutting the grass now. You know, this 
is, you know, the times. But it sounds like your work intersects with COVID as well. Can you talk about that intersection a little bit? Absolutely. A large focus of my work, because I'm so focused on starting with the substantive problem and bringing either existing machine learning tools or developing new machine learning tools to answer those questions, is that there really has to be a strong grounding in data, and the virus pandemic has really illuminated for a lot of people how much we need to care about data. I mean, we have misclassification, we have missingness in the types of data that we're collecting for the virus, both for cases and mortality counts. These are things that are very, very common in most of the electronic health data that we use in the healthcare system, where a lot of my work has focused on dealing with some of these types of issues. We use billing claims, we use clinical records, registry data, on and on, and these data types were not designed for research. So we need to be really aware of the issues in these types of data, and in some of the newer forms of data, like the wearable and implantable technology that people have been very excited about for measuring physical activity. We're now, in the current virus pandemic, using smartphone location data to try to understand how people are social distancing, potentially with contact tracing, and then digital types of data like Google search trends and Twitter data, which have been used for different types of research questions in the past. Now Google has developed and released this location history website where they're showing, you know, how we can understand social distancing. So a lot of the data-related work that I've been focused on is very relevant to the pandemic: understanding our data sources and trying to bring rigorous, flexible methods to them. Specifically, I had been working for the last two years with my now former postdoctoral fellow, an infectious disease expert, Maia Majumder, 
who's now faculty at Boston Children's Hospital and Harvard Medical School. We had been looking at news media data, CDC data, and electronic health data to understand the generalizability of these data sources for both infectious disease and chronic disease, and now this has become very relevant to the virus pandemic. One of the conditions we'd been studying was flu-like illnesses: understanding what we can really learn from electronic health data sources like billing claims and electronic health records. We've seen many people now start modeling, making projections about cases and death counts. What we're going to start seeing next, once people start having access to different types of electronic health resources, is people trying to use this data to, you know, predict outcomes, maybe predict clinical courses, or trying to do causal inference, which is even more difficult. It's very important that people understand the limitations of these data sources, and that's one of the things we're working on; hopefully we'll be able to release the first paper from that work in the coming weeks. This is something that's relevant for the virus pandemic but has been a problem going back decades: using data that people don't understand. That's been at the forefront of my work, especially given one of the themes of this podcast: a lot of people get very excited about machine learning and they throw a tool at data without understanding the data. And we're now in the midst of something where it's really crucial that people do not do that.
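One concrete way test misclassification distorts case counts is captured by the classical Rogan-Gladen correction, which recovers true prevalence from an observed positive rate given test sensitivity and specificity. The numbers below are hypothetical, chosen only to show how large the distortion can be:

```python
def rogan_gladen(observed_prevalence, sensitivity, specificity):
    """Correct an apparent prevalence for imperfect test sensitivity and
    specificity (Rogan & Gladen, 1978):
    true = (observed + specificity - 1) / (sensitivity + specificity - 1)."""
    return (observed_prevalence + specificity - 1) / (sensitivity + specificity - 1)

# Hypothetical test: 80% sensitivity, 98% specificity. If it flags 5% of a
# population, the implied true prevalence is only about 3.8%.
p = rogan_gladen(0.05, sensitivity=0.80, specificity=0.98)
print(round(p, 3))  # 0.038
```

This is the simplest possible misclassification model (it ignores missingness and sampling bias entirely), but it already shows why raw case counts cannot be read at face value.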
End to end solutions against COVID-19 insights from Blok BioScience
"For this week's podcast we'll be discussing end-to-end solutions against COVID-19, with special insights from Blok BioScience. And I'm very pleased to have returning to Insureblocks Areiel Wolanow, CTO of Blok BioScience and managing director of Finserv Experts. Areiel, thank you for joining us today. For those listeners who haven't heard you in our first podcast, could you please give them a quick introduction on yourself and on Blok BioScience? Thanks, and it's great to be back. My name's Areiel. I've been working in the blockchain space for a while, is it six years now? It's a long time as far as blockchain goes, because we are all still figuring out a lot as we go, but I've had the good fortune to work on a number of really interesting solutions, from trade finance to insurance, and now I'm working in the biotech industry on something that's really needed. Excellent, excellent. So, as you'll recall, our first question is: could you please explain to our listeners what is blockchain and how does it work? But here I'm curious to see if your definition has changed since the last time you were on the show, due to the work you're doing at Blok BioScience. If memory serves, the definition I gave you hasn't changed much, which is that blockchain is technology that allows multiple companies or people to share a single version of the truth without having to spend any time, effort, or money on reconciliation, messaging, synchronization, or other such overhead, and that the benefit gained from that sharing often allows completely new business models or new ways of solving problems to be possible. In terms of how it's changed, I think the main thing that's changed is a very welcome maturation of thought about blockchain. I think it's fair to say it's no longer a thing on its own; it is a toolkit for solving problems, and for accomplishing what previously would take large numbers of people or expensive software solutions. 
That can now be taken for granted. When two people are talking and one is sharing, "I found this amazing new app on my phone," what I wouldn't say to you is, "I found this amazing new app on my phone and it runs the IP stack." That's a given. So if we're now talking about a solution, and that solution includes our two companies having a common record of a piece of data, the presumed approach is that you would use blockchain to share that single record, rather than create a business function, hire people, or build software to keep our version of that data and yours in sync. Great, great. And I have very fond memories of our podcast where you took us all the way back to the Sumerians to talk about the bulla, but I'll let our listeners check it out if they haven't had the chance; actually, on your webpage, Keith Bear commented in a similar context. So, in our last podcast I introduced you as managing director of Finserv Experts. Now you've added to that the role of CTO of Blok BioScience. What is Blok BioScience? What is your mission? Blok BioScience is a team of thought leaders, thought leaders in the medical industry, thought leaders in technology, thought leaders in supply chain, who grouped together because we think that a response is needed fast, that it needs to reflect how quickly things are changing, and that it needs to deliver the best possible medical, diagnostic, and supply chain capability to the fight against the unprecedented impact that COVID-19 is having on the world. Finserv Experts has entered into a business partnership with Blok, and we are providing the technology delivery capability. But that's only one piece of Blok. Blok is also about the medical expertise, the supply chain, and the network of relationships to get things where they need to go, and together we are creating the solution that we're talking about today. 
Excellent. So just to confirm: was Blok BioScience created because of the present pandemic, or was it created prior to that? Blok Solutions, which is this group of thought leaders, was already created; what we did was mobilize specifically to solve this problem. We've been working on it pretty much since COVID first started, you know, emerging from China in late January.
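The "single shared version of the truth" Areiel describes rests on hash chaining: each record cryptographically commits to the one before it, so no party can quietly rewrite history without invalidating everything that follows. A minimal sketch of that core mechanism (this is the general idea, not Blok's implementation, and it omits consensus and networking entirely):

```python
import hashlib
import json

def add_block(chain, record):
    """Append a record whose hash commits to the previous block, so any
    retroactive edit breaks every later hash."""
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    body = json.dumps({"record": record, "prev": prev_hash}, sort_keys=True)
    chain.append({"record": record, "prev": prev_hash,
                  "hash": hashlib.sha256(body.encode()).hexdigest()})
    return chain

def is_valid(chain):
    """Recompute every hash; a single tampered record invalidates the chain."""
    prev = "0" * 64
    for block in chain:
        body = json.dumps({"record": block["record"], "prev": prev}, sort_keys=True)
        if block["prev"] != prev or block["hash"] != hashlib.sha256(body.encode()).hexdigest():
            return False
        prev = block["hash"]
    return True

chain = []
add_block(chain, "test kit #1 shipped")
add_block(chain, "test kit #1 delivered")
print(is_valid(chain))            # True
chain[0]["record"] = "tampered"
print(is_valid(chain))            # False
```

Because every participant can run `is_valid` themselves, the reconciliation work Areiel mentions (keeping two companies' copies of a record in sync) is replaced by simply checking the shared chain.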
Understanding Neural Networks
"My name's Tim Lillicrap. I have affiliations with DeepMind, which is a part of Google, as well as UCL, University College London. Could you tell us a little bit about your career and how you got into machine learning and AI and topics like that? When I was an undergraduate I took a cognitive science course, which I think was really the turning point for me. It got me interested in philosophy of mind and figuring out how we think. I was at the University of Toronto when I was in undergrad, and was fortunate enough to take some neural networks courses from Geoff Hinton, who was a professor teaching undergraduate courses, and that got me hooked on thinking about neural networks, deep neural networks. From there I kind of went off and did neuroscience during my PhD and postdoc, but have slowly come back into machine learning. Philosophy of mind doesn't show up on a lot of traditional computer science curricula. How were you able to integrate that and also understand the more mathematical sides of these topics? Most of the everyday computer science we do is working with data structures, trying to transform numbers and so on, but on the other hand I would say that even fairly early on there was some connection to these philosophy-of-mind kinds of ideas. Turing famously proposed the Turing test very early on in the development of computer science theory, and I think there has been sort of a bridging interest the whole time. Why is that? I guess because we have this question all the time about what it means to think, and in a certain sense computer science has understood that as: what does it mean to compute? And there's been a bridge built, I guess, at each step along the way as we've gone into that. Depending on how lazy I want to get as an interviewer: the paper that I invited you on to talk about poses a series of questions, so I'll just start with the title. 
What does it mean to understand a neural network? Yes, yeah, that's right. The title is a bit funny; there are actually probably a bunch of ways to interpret it, and I should say the paper was aimed maybe most at a neuroscience audience. It's really trying to speak to neuroscientists who are in the process of trying to understand brains, biological brains in particular, and how they work and how they compute. It is a paper written from the perspective of where we find ourselves right now in machine learning and deep network theory, but then trying to take some of the recent results and ideas and reflect them back into neuroscience. In terms of these two fields, I'm wondering if you can describe the relationship. I do bump into people that kind of share both worlds, but the Venn diagram does not overlap as much as you'd think between machine learning and neuroscience. What are some of the successes or inhibitors that can help or hinder the ways in which these two fields share information? Depending on who you talk to, there have either been massive amounts of transfer, and it's sort of an easy thing that happens all the time, or almost no crosstalk, and I don't get too bothered about that. I think it's just a case-by-case basis whether there might be interesting ideas flowing one way or the other. Sitting in between them is certainly where I've spent a good deal of my time and thought, but there are very successful practitioners who just totally ignore the other thing that's going on. I guess, maybe to connect this question back to the paper, though: there's this huge recent set of successes in machine learning employing deep neural networks to solve all kinds of problems that we couldn't solve before, and I do think that there's at least one particular story coming out of that progress which we try to take seriously over on the neuroscience side. 
That's what this paper is kind of about. One of the areas the paper delves into is the notion of intermediate languages. Can you talk a little bit about what those are and why they're necessary in helping to understand neural network computation from the perspective of neuroscientists? For a long time, people doing neuroscience have wanted to, in some sense, understand how the brain computes, and sometimes the functions the brain is computing are incredibly complex, complex enough that we really do not understand how it computes them. And so there's a sense in which you'd like to be able to describe that in a scientific language that we could talk to each other with: let's say, "this is how this brain tissue is computing this complex function," and make that ground all of the discussion going forward. I'll pick a very particular example, one I think has almost become common currency, which is categorizing an object in an image. This is sort of the canonical example in machine learning, and you can imagine lots of animals doing this kind of computation; certainly humans do tons of this kind of computation all the time. It's very sensible to ask, and people have asked for ages, you know, how are our brains performing that kind of a task, and could we have a language that would let us get a hold on that, describe what's going on as these computations unfold? That, I think, is really the aim people have had in mind. And I think that the recent results that have come out of the deep learning and machine learning community cast a bit of light on this, a funny light, which is that maybe that is not the best question to ask. Certainly maybe it's not the best kind of question to start asking right now. What is the best question to be asking right now? If we look at all the progress that's happened in deep learning, we have this picture where we can now build, say, large networks that compute a function like that quite easily. 
So in fact we can specify learning algorithms and network architectures in a couple of hundred lines of computer code that will train a network to perform that kind of a task quite easily. And we as human computer science practitioners can look at that code and pretty much have a good understanding of each line of it, a good idea of how the lines string together and fold together to produce the outputs at the end, to produce a functional piece of in silico brain tissue. And even though we can do all that, we have, I would say, almost no true understanding of the computations that have been put into those networks after training. Now, I want to distinguish for a moment what I mean by understanding. I think understanding is a very loaded philosophical word that gets us into all sorts of trouble, so I just want to distinguish for a moment. For these networks that we train, these deep neural networks, on our computer we have in some sense complete transparency, in that we can look at the parameters, the weights in the network; we can look at how it performs computations on inputs, how it transforms them from hidden layer to hidden layer and then finally to the output. So we understand all of the mathematical computations that happen in between, in a sort of totally white-box way. But when we step back from that, if someone asks you how that network knows that this image is a giraffe or that particular image is of an elephant, we have, I think, no good intermediate language we can use to talk to other scientists that lets us feel like we really, tangibly understand the computations that have been put into that network.
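Tim's white-box point is easy to make concrete: even a toy network exposes every weight and every intermediate activation, yet the raw numbers provide no intermediate-language account of what the function means. A minimal NumPy sketch with arbitrary random weights standing in for a trained network:

```python
import numpy as np

rng = np.random.default_rng(0)
# A tiny two-layer network: every parameter is fully inspectable.
W1, b1 = rng.standard_normal((4, 3)), np.zeros(4)   # hidden layer
W2, b2 = rng.standard_normal((2, 4)), np.zeros(2)   # output layer

def forward(x):
    """Fully white-box forward pass: every intermediate value is visible."""
    h = np.maximum(0, W1 @ x + b1)   # ReLU hidden activations
    return W2 @ h + b2

x = np.array([1.0, -0.5, 2.0])
print(forward(x).shape)   # (2,)
print(W1)                 # we can read every weight, but the numbers alone
                          # don't say what the network "knows"
```

The gap the paper discusses is exactly the distance between this complete numerical transparency and a description a scientist could state and reason about.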
The Whys and Hows of Managing Machine Learning Artifacts
"Hey everyone, I am here with Lukas Biewald. Lukas is the CEO and co-founder of Weights & Biases. If Lukas's name sounds familiar, that's because we last spoke on the podcast in August of last year, 2019, episode 295, on managing deep learning experiments. Welcome back to the TWIML AI podcast. Thanks, it's great to be back. Yeah, it's awesome to have you back. Folks that are listening may not be able to tell, probably they can't tell, but we're experimenting with doing video more for these interviews. So Lukas and I, who always have a fun time when we get together in person, are now having an even more fun time because we can see one another. We're giggly, you know, that's what it is. But Lukas, if folks want to catch your background, they can check out that episode, and we'll link to it in the show notes. What have you been up to since last August? I know one big thing has changed for you. Had a baby, and has that changed my life and priorities. Maybe more work-relevant: we've been working on improving the product, and, you know, one of the reasons I wanted to come back on your show is that we just recently put out a new product called Weights & Biases Artifacts that we're super excited about. Yup, and that's what we'll spend our time talking about. Tell us a little bit about the problem you're trying to solve with Artifacts. Totally. I mean, you have such good, what did you call them, diagrams, sort of laying out all the different pieces in the industry. I'd love to know how exactly it fits into your diagram; it's always insightful to see that. You're referring to the e-book, The Definitive Guide to Machine Learning Platforms, where I broke out, at least in my view, the ML 
tooling and platform landscape into data acquisition and management; experimentation and model development; and then model deployment and management. So would you say Artifacts is in the middle of those, the last of those, or does it straddle? Well, I think it might kind of straddle, so let me just tell you the problem that we were trying to solve, and you tell me where it fits. Okay. So basically what happened was, our experiment tracking platform, we feel super proud of it; it's been really popular and has seen faster growth than anything that I've been a part of. And so, you know, we're constantly asking our users, what else do you want? What does it feel like it's missing? And the biggest request was, well, look, this is tracking all the experiments I do, and I can compare all the models I build, but in reality there are other things that are really important for me to track: I want to track my data sets, and I want to track the models, and I also want to, in some cases, connect steps together, kind of like a pipeline. And so we built Artifacts as, like, an adjacent product that's separate, because we think we want to keep our experiment tracking a really tight point solution that's really good, but we also want to make it really easy to track more things that you might care about. And the main things like that are probably data sets, models, and pipelines. It's important to note, I think, that some people kind of compare it, they look at my overview and they say, well, how is it different from something like Airflow that manages pipelines? And I think it's importantly really different, right? It just tracks it. 
So what it would do is, say you have some set of data and you do some transform on it, and then maybe you train a model, and then maybe you test that model on a couple of different data sets, and maybe you do some quantization and then you deploy it. What W&B Artifacts would do is basically save all the steps and save the fact that they're all connected together. So it sounds like tracking, rather than management, is the right word; it's model lineage. You're not the state machine or the graph; you're not managing that. You're not standing up, they'd still need, an Airflow or Luigi or something else that is actually tracking the state of all the objects in this pipeline. And you are kind of doing something similar to what you did with the experiment management product, where you're capturing metrics on things as they move through this process. Yeah, saving it, right. So, you know, if you want to save something like your data set or your model, we make it really easy for you to save it and version it, but then what a lot of our users asked for is: hey, we just want to track; we don't necessarily want to, you know, upload these gigantic files to your service. We have a bucket somewhere, or we have them on-prem. And so, you know, we let you save a pointer to those things if you want to. So, yeah, I think model management and data management is maybe the best way of thinking about it. I mean, what I know for sure is that, you know, everybody's asking us for this. If it's not a category, I think it's going to become a category, and there are certainly lots of tools that do pieces of this, and in some cases are more ambitious than this. It's really important to us that we work nicely with all these things. I mean, you have such a great taxonomy of, you know, end-to-end platforms and point solutions, and, you know, one of our real core values at Weights & Biases is to be, you know, 
a set of interoperable point solutions that each do one thing really well and play nicely with all the things around them, and then you try to put that together. The core problem that you're trying to solve here, you come at it from the perspective of, again, experiment management, where you had folks using your tools to track different experiments that they were running as part of their model development process. But you found that that component was used as part of a pipeline: they were getting data from somewhere, they would run it through some series of transformations, and then ultimately run an experiment, and folks wanted to track more of that process. Yeah. I often talk to folks that are trying to solve this data provenance problem, or even decision provenance, where you've got a model that makes an inference and you want to go all the way back from that inference, that decision, to the model that was deployed, to the experiment that said that model was the best model, to the data that was in the training set that allowed you to train up that model, down to the data points that ultimately influenced this decision. Does this solve more of that problem than you were solving before? Yeah, I think there are lots of really interesting pain points here that people talk about all the time, like model explainability and model reproducibility, and those are obviously huge problems. But I think the core thing that we see a lot of our users struggling with is just literally knowing what dataset the model got trained on, and what model actually got put into production. So that's the core focus we have. You know, we work with a couple of companies that do retail, that are trying to build systems that can automatically detect what you're buying
as you walk out, like the Amazon store, and one of the things those companies have in common is that they're constantly getting new labeled data, because they have cashiers in their stores who are labeling the data live, and they always have new products coming in. And so what happens is that they really never train the model on the same dataset twice, right? Every single time it trains, it's sort of a different basket of stuff that the model gets trained on. So it's not just incremental growth of their training dataset, it's different data? Well, actually, in that case it is incremental growth, but we actually have the opposite thing too, where it's not incremental growth, it's actually shrinking. So some of our customers get privacy take-downs, right? One company we talked with about this is iRobot, where people will say, take my data out of the iRobot dataset, and of course they do that. But then what happens is your dataset has subtly changed, so it's not exactly apples to apples anymore. And you know, it sounds simple, maybe, to track all that, but the important thing is that you really, really have to do it right: you have to have a system where you're always tracking it the same way, so that when you do a comparison it's really easy to say which is which. And look, another issue is, with our vehicle companies, they actually have so much data, typically, that they never train on all of their data, right? So every time they build a model they're selecting pieces of it, and they're often selecting different pieces for different purposes. So I guess in some cases the dataset is growing, in some cases it's shrinking, and in some cases
you're picking and choosing from different datasets. But in all of these cases, we think the really core need here, the pain that we want to solve, is just: we will keep track of which datasets your model was trained on.
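The lineage need described above, knowing exactly which data a model was trained on even as rows are added, removed, or sampled differently, can be illustrated with content hashing. This is a minimal sketch in plain Python, not W&B's implementation: the `fingerprint_dataset` function and `LineageLog` class are hypothetical names invented here.

```python
import hashlib
import json


def fingerprint_dataset(rows):
    """Hash a dataset's contents so a training run can record exactly
    what it saw, even as rows are added or removed over time."""
    digest = hashlib.sha256()
    for row in rows:
        # Canonical JSON so the same content always hashes the same way.
        digest.update(json.dumps(row, sort_keys=True).encode())
    return digest.hexdigest()


class LineageLog:
    """Append-only log linking model training runs to dataset fingerprints."""

    def __init__(self):
        self.entries = []

    def record_training(self, model_name, dataset_rows):
        entry = {
            "model": model_name,
            "dataset_fingerprint": fingerprint_dataset(dataset_rows),
        }
        self.entries.append(entry)
        return entry

    def datasets_changed(self, model_name):
        """True if the model's training data differed between runs."""
        prints = [e["dataset_fingerprint"] for e in self.entries
                  if e["model"] == model_name]
        return len(set(prints)) > 1
```

A privacy take-down that removes a row changes the fingerprint, so `datasets_changed` immediately flags that two runs of the "same" model were trained on different data, which is exactly the apples-to-apples comparison problem discussed above.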
Detecting Emotion Through Gait with Aniket Bera
"Hello and welcome to the AI Today podcast. I'm your host Kathleen Walch. And I'm your host Ron Schmeltzer. Our guest today is Aniket Bera, a research professor in computer science at the University of Maryland in the GAMMA Lab. Thank you so much for joining us today. Thank you so much, thank you for inviting me. Yeah, thanks so much for joining us today. We'd like to start by having you introduce yourself to our listeners: tell them a little bit about your background and your current role at the University of Maryland. Sure. I've been with UMD for a little under a year; before this I was at UNC Chapel Hill, where I did my master's and PhD in robotics. My primary research over the last few years has been trying to work on the social aspect of robotics, including the computer vision side, where we look at how we as humans perceive different objects when we look at them. So my research has always been about perception for robots, so a robot can understand the world around it the way we do as humans. That's what my research has been over the last six or seven years. My role at UMD is as research faculty; I advise about seven students now, on everything from vision applications to robotic applications to psychology-driven AI applications. My main field of research is something called affective computing. What that means is that we're trying to gauge emotions: are you in distress, are you aggressive, are you shy? We figure out different cues from your visual appearance, like your facial expressions, the way you speak, the way you walk, and from all of that, can we figure out your emotions and then do something accordingly? Interesting. That was something we found really interesting, and part of the reason why we reached out to you and had you join us on AI Today is that we wrote an article about it. Our AI Today podcast listeners
may or may not know that Kathleen and I are also contributing writers for Forbes and TechTarget, and one of the articles we wrote in Forbes was about how AI systems might be able to detect your emotion just by taking a look at how you walk, and other non-facial, non-verbal cues. And that's part of what being socially intelligent is, I guess: we as humans can read things like body language, but there's a lot more to it. So maybe you can explain some of the concepts of socially intelligent robots, and why this idea of social intelligence is important. Yes, so the concept of socially intelligent robots is essentially making robots understand humans better. We as humans are not objective: we tend to evaluate and perceive things based on our upbringing, our culture, all these different traits, and we use all of those things in everyday life. So in this research which you mentioned, the paper we did, we wanted to figure out, from how people walk, whether we could recognize, say, that a person is sad, just by looking at his or her posture, his or her body language. And then maybe the robot can walk up to that person and ask questions: you look sad today, can I help you with something? If somebody is excessively angry, I might not want to talk to that person, and maybe even avoid that person altogether. If somebody looks confused, the robot could go to that person and say, you look lost here, do you need help getting to some place? These are all things we inherently do as humans, or tend to do. The goal of robotics has always been about solving problems accurately and objectively: let's say the goal for a robot is to go from point A to point B, and the robot will try to figure out the shortest or most efficient way to go from point A to point B. What we are bringing in is also being socially aware. If somebody's walking,
I want that other person to have his or her personal space; I do not want to intrude on somebody's space. So taking all these social norms, these social cues, and bringing them back into robotics, is the concept of a socially intelligent robot. And why is this idea important? I think as robots become more prime-time and more available among us, they should try to understand humans, but also go beyond that and be part of human society. It's interesting, because we talk about common sense and emotional IQ, and that's incredibly hard for robots and artificial intelligence to actually have, a lot harder than I think maybe some people realize, although there has been some discussion around it. At Cognilytica, for the past two years we've actually done a voice assistant benchmark, and common sense and emotional IQ were two of the categories of questions that we asked, because unsurprisingly the systems were not very good at those. But this idea of AI systems that can detect emotion based on gait is a really unique idea. So where did this concept come about? So we started this, actually, I think about eight years ago. I mean, eight years ago AI wasn't what we know now; things were different back then. But we started with the concept of: can we figure out somebody's personality just by looking at how they walk? Back then we started by representing every human being, every pedestrian, as a single entity, a single dot on the screen. So we used to look at videos: how this person is trying to avoid somebody, or cut across people, to figure out, oh, this person is aggressive, or this person is shy because he walks around all these other people, and learn from that. And from that dot representation we have now moved to figuring out the entire body: right now we have around twenty-two points for the full skeleton, from your legs to your hand gestures, your shoulders, your torso, your head. All these different cues which we observe,
that wasn't really being studied before. There's a lot of research on emotion, especially from faces: you know, somebody's happy, somebody's sad. There's a lot of research in this field from speech too, in the way I say something. Let's say I'm happy: "I'm very happy today!" versus "I'm okay, I'm happy." The content of the sentence is one thing, but there's also the way you say it. So all these different cues were being studied in different fields, but we realized that body language is something people weren't really studying. We look at people a lot, when they walk, when they're talking, when they're driving, but we haven't really understood how gait, how walking and body language, relates to emotion. So our research on this gait-to-emotion problem is kind of additive: it could be combined with all these facial cues, with speech, with the other verbal cues from the human. So my research
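The pipeline Bera describes, going from pose keypoints to emotion cues, can be suggested with a toy feature extractor. This is only an illustration, not the GAMMA Lab's actual model, which uses a richer skeleton and learned classifiers; the joint names and the two features chosen here (stride width and head droop, both of which appear in the gait-affect literature) are assumptions of this sketch.

```python
def gait_features(frames):
    """Toy gait descriptor from per-frame 2D pose keypoints.

    `frames` is a list of dicts mapping joint name -> (x, y) in image
    coordinates (y grows downward). Returns the mean ankle separation
    (a crude stride proxy) and the mean head position relative to the
    neck, two of the kinds of postural cues that correlate with affect.
    """
    strides, droops = [], []
    for kp in frames:
        lx, _ = kp["left_ankle"]
        rx, _ = kp["right_ankle"]
        strides.append(abs(lx - rx))
        _, head_y = kp["head"]
        _, neck_y = kp["neck"]
        # Positive means the head sits below the neck line in image
        # coordinates, i.e. a slumped posture.
        droops.append(head_y - neck_y)
    n = len(frames)
    return {
        "mean_stride": sum(strides) / n,
        "mean_head_droop": sum(droops) / n,
    }
```

A real system would feed features like these, over the full twenty-two-joint skeleton and across time, into a trained classifier rather than reading them off directly.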
Apple, Google ban use of location tracking in contact tracing apps
"Google and Apple have banned the use of location tracking in apps built on the COVID-19 contact tracing system they are working on together. More from CBS News technology consultant Larry Magid: Apple and Google are trying to avoid any accusations that they're either collecting, or allowing public health authorities to collect, information that could invade people's privacy or locate them. So they're asking the agencies to sign agreements that they will abide by certain privacy standards and make sure that they get consent from
Language Modeling and Protein Generation at Salesforce with Richard Socher
"Hey everyone, I am on the line with Richard Socher. Richard is the chief scientist and EVP at Salesforce. Richard, welcome to the podcast. Aloha, great to be here to chat. You said "Aloha," and I was surprised that you're actually in the Bay Area. I always see these wonderful photos of you all over the place, and I only ever get to see you in person at NeurIPS nowadays, and the Black in AI events and things like that, so it's great to get a chance to connect with you mid-year. How is everything? Life is pretty good. I'm very grateful our research can continue; we're working on some research during this crisis now that is specific to COVID-19. And, by and large, I sometimes joke that the PhD prepared me for this: several years of staying at home, eating pasta every day, working on a computer all day. So I'm in pretty good spirits, trying to have a little bit of positive impact, and still go about my work and make sure my team is doing well throughout this crisis. It's a tough time, but I'm very grateful that for our line of research we can work remotely quite well. Awesome, awesome, glad to hear that. I don't usually do this, but it is April tenth that we're recording this. What is it, week four of lockdown, shelter in place, for you, thereabouts, right? It's crazy. Time weirdly is slow and fast at the same time; days morph into each other. The home office is just the home and the office and everything, so time definitely does not seem linear going through this. It's very strange. But before we jump into some of the main topics that we want to cover, in particular language models and some of the recent work you've been doing applying them to the bio space, can you share with us a little bit about your background and how you came to work in AI?
It almost starts in high school, when I really liked math and languages. When you think about those two fields: one, you would hope, is true even if you go light-years in some other direction, and language is this constantly morphing system, where any teenager can just say "YOLO" and boom, you have a new word, and now the science has to deal with that. And the two marry when you try to use computers, use math, to try to understand language. So I studied linguistics and computer science back in the early two thousands, and at the time that seemed kind of like a quaint, cute, niche topic to my parents. I thought, man, if we can get computers to understand language, that would be just incredible, all the things they could do, especially if you're lazy and want repetitive tasks to be done by a computer. And so that kind of morphed into a couple of other interests, trying initially to use just typical machine learning, machine learning by itself, more broadly applied to computer vision problems. But I really do think language is the most interesting manifestation of human intelligence. There are some quite incredible visual systems and apparatus in the animal kingdom, like the mantis shrimp with all kinds of polyfocal vision and so on, and many animals have quite sophisticated visual systems, but language, connected thought, culture, society, and information, those are human. So I got excited about language. Then in two thousand ten I saw a handful of people apply neural network techniques and extend them to computer vision, and at the time I had also just become a little bit disillusioned myself with how much time natural language processing folks spend on feature engineering. So I thought, couldn't we use some of these ideas from computer vision and neural networks for natural language processing?
It was not easy in the beginning. In the early days I had a lot of rejected papers, reviewers just ignoring reasonably good experimental results, saying, why are you submitting neural network stuff to this conference, this is not the nineties anymore, this stuff doesn't work, and so on. But eventually more and more people joined; the small core initially was really just Yoshua Bengio's and Geoff Hinton's labs, and Andrew Ng's lab at Stanford, and it expanded more and more, and now it's kind of the default way of doing things, to use neural networks. Of course we've developed more and more novel architectures too, and it's just been super exciting. So now I work not just on the research side anymore but also on a lot of applied problems. In the end, I often think about impact, and when you do research you hope that people will pick up the research, extend it, and apply it to some real-world problems; but if you have the opportunity to both do the research and apply it to real problems, you kind of reduce the variance of the impact that you have. And so I work on a lot of problems: chatbots in service, sales, and marketing applications, trying, for instance, to automatically reply to emails or phone conversations, or having chat conversations, which is a really great one; also a lot of computer vision, trying to identify different objects on supermarket shelves; doing complex OCR for forms; and a lot of other interesting things, recommendation engines, voice, machine translation. The group is pretty large now, and so we get to work on a lot of different things. It's a research organization, but it's part of Salesforce, which we know still thinks of itself as a CRM company, though the definition of the term "customer" has kind of expanded, and now includes everything that you might do with a customer. So we're an e-commerce platform, because customers buy stuff online.
We have, obviously, the largest sales, service, and marketing organizations, but we also help companies integrate all their different data. Now with Tableau we help people understand their customer data and do a lot of analytics on it. And then we look at where the customers are: we help governments, with citizens as their customers, and help them, especially now in this crisis, build software really quickly, build chatbots so they can answer questions in their time of need. If you go to the DMV, the Department of Motor Vehicles, you have a question, and chatbots can give the answers; you are a customer there, of the DMV. We work with healthcare providers, where the patients are the customers. So the definition of what a customer is, is getting broader and broader. We're in all
Google and Apple to release exposure notification API
"Apple and Google are delivering the first version of their exposure notification API to selected developers working on apps for public health organizations. After this test round, the API is expected to be released broadly in mid-May. The updates come in the betas of Apple's Xcode 11.5 and iOS 13.5, and in Google Play Services as well as Android developer studio
Humans in the Loop and Outside of the Classroom
"You are listening to Talking Machines. I'm Katherine Gorman. And I'm Neil Lawrence. Our guest this week on Talking Machines has been with us before, and I'm very glad to welcome him back: Michael Littman. Thank you so much for taking the time to talk with us. It is a great pleasure. I enjoyed my previous outing, and after that point you and I actually became close, and we've been working together, and so I feel very honored to be asked back, because you know now what you're getting, and I guess that means you think it's okay. Yes, it is, yes, absolutely. Full disclosure, too: what we have today is a cabal of communications chairs. Neil and I were communications chairs for NeurIPS for a while, and Michael and I were communications chairs last year, probably helping out again this year. So yes, we can talk about all of those finer points of how we talk about talking about scientific communication this week on Talking Machines: NeurIPS communication chairs. This is actually just one of those NeurIPS meetings where we're going to talk about how the press conference will be for next year, is that it? That's economical. I like it when you're doing something for two reasons at once, two birds with one stone: we can get the communications done for this year's NeurIPS at the same time. It's perfect, I love it. So, Michael, I know we've had you on, but it's been quite a while, and I think a lot has changed. Give us an update as to where you've been, and a little bit of a run-through of how you got where you are. You are at Brown now, and you're the co-head of the center for human-focused robotics, is that right? It's called the Humanity Centered Robotics Initiative at Brown, and yeah, it's a
It's an organization that we tried to put together shortly after I got to Brown, with the goal of thinking about robots, and in particular trying to create robots that work with people for the benefit of people. And so it's not necessarily a machine learning thing, but of course everything now is a machine learning thing, so it pretty naturally segues into that. And before you were at Brown you were at Rutgers; tell us a little bit about how you got to where you are and what you've been doing, besides being in charge of communications and trying to help Neil and everybody else in the org talk about science and things like that. So I had a great time at Rutgers; I had a terrific group there. We focused quite a bit on reinforcement learning, and in particular issues of efficient exploration in reinforcement learning: trying to build systems that could not only, over time, learn to get better and better, but could get better and better fast, right, with a minimum amount of data, a minimum amount of experience. That kind of wrapped up when I transitioned to Brown, and my focus since I've been at Brown, a lot of my papers, has been human-in-the-loop reinforcement learning. And this comes at least in part because of the Humanity Centered Robotics Initiative: the idea is that we want our systems not just to learn to get better at things quickly, but to do so in response to what people want them to do. And one way of telling machines what you want them to do is by writing a reward function; that's the machine way, the reinforcement learning way. But I think normal people don't sit around thinking about how to write reward functions. It's easier to use the kinds of tricks that people use when they're training animals, for example: "good robot!", or "oh, bad robot, no, no, no, bad robot." So how can you use that kind of feedback?
Sort of positive and negative evaluative feedback, like what a reward function would give, but have it come from an actual person, live. Does that mean inverse reinforcement learning, where you have to infer what reward function the human is implying, or are there other approaches to that? Right, so I would say they're related, but the main difference lies in the way that inverse reinforcement learning is typically studied: as the inverse of the reinforcement learning problem. The reinforcement learning problem says, here is a reward function, use it to generate behavior. The inverse reinforcement learning problem is: here's behavior, what's the reward function that would have generated that behavior, if you were thinking in reinforcement learning terms? So the input to IRL, inverse reinforcement learning, is behavior, examples of: here, I was in this situation, and here's the sort of thing you should do; if you're in this other situation, here's the sort of thing you should do there. And the inverse reinforcement learning agent infers from that: oh, okay, if that's the thing I should have done, then probably what's going on is we're trying to minimize time on the beach and maximize time in front of ice cream, or whatever fits with the behavior that's observed. Human-in-the-loop reinforcement learning is humans giving reward signals, giving feedback, saying good job, bad job. Now, you could from that infer a reward function, and actually it's not a terrible idea, but it's not the dominant one in the literature. The main thing that people do is try to use those rewards as if they were rewards, and then some version of a reinforcement learning algorithm turns those rewards into behavior. Okay, so for these related approaches, would you classify that as almost equivalent to, say, model-free and model-based? I know that's not normally how the terms are used, but where the model is now generating your value function, or have I misunderstood?
Well, so one of the first papers to work on this problem of human-in-the-loop reinforcement learning was a system out of the University of Texas, from Peter Stone and Brad Knox, and what they did is actually kind of a model-based approach. They said: from watching the human give rewards, let's actually estimate the reward function, and then plan using that reward function. So they directly estimate the reward function from the human inputs, and in that case it is sort of model-based. But is it still inverse reinforcement learning? No, definitely not, because, you know, as a computer scientist I see things very much from the perspective of: what are the inputs, what are the outputs? In inverse reinforcement learning the inputs are behavior; in human-in-the-loop reinforcement learning the inputs are positive and negative signals. So, as you said, in some sense it's good-and-bad signals versus a whole behavior profile; that would be the difference from inverse reinforcement learning. And with behavior you get a lot of information, because you're getting to see what the right action was, right? It's almost more of a supervised learning problem in a sense, because you are seeing, here's what the right answer is. Whereas in evaluative feedback systems, like reinforcement learning systems, you're just told: hey, that thing you just did, that's a six. And, like, is six good? Well, it's less good than seven, but it's more good than five, right? So you're missing a tremendous amount of richness in the information in the feedback. So, why "humanity centered" rather than "human centered"? Yeah, so that was a long debate
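The distinction Littman draws, using good/bad signals directly as rewards rather than inferring a reward function from behavior, can be sketched in a few lines. This is a minimal bandit-style illustration, not the Knox and Stone system (TAMER) itself: the `trainer` callback standing in for a human, and the incremental-mean value update, are simplifying assumptions.

```python
import random


def train_with_feedback(trainer, actions, episodes=500, epsilon=0.1, seed=0):
    """Learn action values by treating a human's +1/-1 evaluative
    signals as rewards, the human-in-the-loop setup described above.

    `trainer(action)` plays the role of the person saying "good robot"
    (+1) or "bad robot" (-1) after each action.
    """
    rng = random.Random(seed)
    q = {a: 0.0 for a in actions}      # estimated value of each action
    counts = {a: 0 for a in actions}
    for _ in range(episodes):
        # Epsilon-greedy: mostly exploit the best-looking action,
        # occasionally explore a random one.
        if rng.random() < epsilon:
            a = rng.choice(actions)
        else:
            a = max(q, key=q.get)
        r = trainer(a)                  # evaluative feedback, not a demo
        counts[a] += 1
        q[a] += (r - q[a]) / counts[a]  # incremental mean of feedback
    return q
```

Note what the agent never sees: a demonstration of the right action (the IRL input). It only sees scalar praise or scolding, and must explore to discover which action earns the "good robot."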
that we had when we were trying to get the thing off the ground. The idea was that "human centered" is already a thing, and it typically means trying to design a system so that it interacts really well with the person that it's connected to, and that's obviously a really important thing. But we wanted to go beyond that, to say that not only do we want to act nicely with respect to the person we're directly interacting with, but we want the repercussions of the system as a whole to benefit society as a whole. So, for example, we're going to be skeptical of a system where, yes, a person and a robot work together really well, to put ten thousand people out of work, right? That's not the kind of thing that we would take to be our goal in the center.
AI Research at JPMorgan Chase with Manuela Veloso
"Hey everyone, I am on the line with Manuela Veloso. Manuela is the head of AI Research at J.P.
Streaming Storage Reimagined
"This is Cory Minton, and we are back with another season of the Big Data Beard podcast, and we're going to kick it off in style this time with a conversation around streaming storage, reimagined. To have that conversation today, I'm joined by two folks from Dell Technologies: Amy Nannies is the product marketing manager at Dell Technologies, and Flavio Junqueira is a senior distinguished engineer at Dell Technologies. Amy, Flavio, welcome to the show. Amy, how are you surviving this crazy coronavirus work-from-home migration? Doing surprisingly well; I think I was made for this kind of living. That's funny, I had a conversation yesterday where somebody said it's the worst nightmare for an extrovert, because we don't get to get out and socialize, but it's also the worst nightmare for an introvert, because you really don't get a lot of downtime: there are so many people in the house, potentially, for those of us with kids and spouses and families and all this stuff. So everybody's struggling a little bit. Flavio, how are you doing in this time? I'm pretty good, pretty good. It has been challenging and nice at the same time: nice from the perspective that we spend a lot of time with family together, like I believe we have never done before, so that's nice. The challenging part is not being able to step outside; where I am staying we have a full lockdown now, and we can only go out for groceries and that sort of thing, so from that perspective it's challenging. But you know, we are coping very well. Well, good. I hope everybody else is staying safe out there, hope our audience is staying safe, and hopefully this conversation, this episode, will give you something to enjoy in the lockdown that's happening in so many places around the world. Now, business hasn't stopped; people are still out there working, trying to derive value from data, and one of the conversations, kind of macro themes, that has been really popular over the last two years,
if you will, is this concept of analytics on streams. So I want to set the table: Amy, would you do us a favor and help us understand, what exactly do people mean when they talk about streams? Sure, yes. So a stream is just a continuous data feed that's in constant motion: there's no beginning, there's no end, and typically we have a timestamp on the data. So this is different because it's always flowing. Today a lot of our data naturally comes in this form: organizations are beginning to utilize drones and security cameras, so we're seeing this information produced all the time. Interesting. Now, this constant stream of data, I'm guessing, is kind of important; you just mentioned a few interesting areas, security and surveillance and those kinds of things. Why is streaming getting so much press these days, becoming really critical for modern analytics? Yeah, so it's important for us to be able to consume it, store it, and analyze it in real time as it's coming in, because we get the most value from this data as it's coming in. A good example is when we're shopping online: we get to the cart and we have suggested purchases. If the computer behind that were to look at that data historically, we'd be getting the suggestion a week from now, and that wouldn't be as valuable. Or something like traffic lights: we can look at how busy they are and change the timing between them if we can get that information as it's coming in. So the ability to analyze information as it's coming in is hugely valuable in almost every industry. Yeah, so getting to that real-time capability is so challenging, I imagine. You know, there are a lot of organizations, and a lot of technology being built and developed, to handle exactly that problem. So, Flavio, from your perspective,
what are the challenges that this stream-type data brings to those traditional analytics platforms that organizations have spent the last five to ten years deploying? Right, so following up on what Amy said: you're continuously generating data, and you can imagine applications where you have a large number of these data sources. She used an online shopping example, but you can also think of servers, sensors, edge applications in general. You can have many of those, all of them producing these flows of data continuously. So first it is necessary to ingest this data and make it available downstream. If you're talking about applications that want to deal with that stream at rate, that want to process data as soon as possible, then ingesting it and making it available for use is a challenge by itself. Now, think about the characteristics of these stream flows: they are unbounded, right? As Amy mentioned, they have a beginning, they begin at some point, but there isn't necessarily an end. And not only that: you can have fluctuations in the workload, so the flow you're getting might change; you might have fewer sensors at some point, or more sensors, or more servers producing results. All of this can fluctuate, and your platform must accommodate those changes. In addition to that, you don't want to have duplicates, or miss events, or have problems with the stream in a way that doesn't reflect what the application expects, so consistency is another important property. And all of that comes with the application wanting to deliver results with low latency: taking that data, processing it, and delivering results as fast as possible. Finally, there's the aspect of reacting fast to changes. If you are in the situation where you are taking the data live, processing it live, and delivering results as fast as possible,
The system must also be able to accommodate changes too: changes to the workload, as I mentioned; faults in the system that it needs to react to, maybe by replicating; or the need to increase the amount of resources dedicated to a critical application. All of those things make building a platform like this very challenging.
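To make those properties concrete, here is a minimal Python sketch of consuming an unbounded, timestamped stream: it deduplicates events and emits windowed counts as data arrives, rather than waiting for an end that never comes. The event format and the window size are assumptions for illustration, not any particular streaming platform's API.

```python
from collections import defaultdict

def process_stream(events, window_seconds=60):
    """Consume an unbounded, timestamped event stream incrementally.

    Deduplicates by event id and yields a running count per time window,
    so results are available with low latency instead of only after the
    (nonexistent) end of the stream.
    """
    seen_ids = set()
    counts = defaultdict(int)
    for event in events:                 # never assumes the stream ends
        if event["id"] in seen_ids:      # consistency: drop duplicate events
            continue
        seen_ids.add(event["id"])
        window = event["timestamp"] // window_seconds
        counts[window] += 1
        yield window, counts[window]     # deliver results as data arrives

# A small finite stand-in for a continuous feed, with one duplicate event.
events = [
    {"id": 1, "timestamp": 5},
    {"id": 2, "timestamp": 30},
    {"id": 2, "timestamp": 30},   # duplicate, must not be double-counted
    {"id": 3, "timestamp": 65},   # falls into the next 60-second window
]
print(list(process_stream(events)))   # [(0, 1), (0, 2), (1, 1)]
```

A production platform would also handle the fluctuations and fault tolerance discussed above (replication, scaling out workers), which this single-process sketch deliberately leaves out.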
Conversational AI with Israel Krush of Hyro.ai
"So, I'm Israel Krush, co-founder and CEO of Hyro. I'm originally from Israel, and my career started in Unit 8200 of the Israeli Defense Forces, where I was in charge of extracting and analyzing massive amounts of data for operational needs. Then I studied computer science, so I'm coming from a machine learning background, started working as a software engineer, then at various startup companies, from cybersecurity, and then moved to product management. I was head of product at a couple of companies, and then, most relevant to this story, I did my MBA at Cornell University, specifically at the new campus on Roosevelt Island, Cornell Tech. And that's actually where I met one of my co-founders, Rom Cohen, who studied at Cornell for his master's in computer science. We actually met in a machine learning class. Wow. And while we were on that brand-new campus, we got exposed to the voice assistants, the Alexa and Google Home devices, and we were very excited about that, since we didn't have this in Israel yet. So first of all we were very excited, and then we got a bit disappointed by some of the use cases that these assistants weren't able to tackle. As we started exploring, first the voice assistant market and then the broader natural language understanding market, including chat, we understood that there is a lot to do there, and I called a friend of mine from Unit 8200, who became our third co-founder. He also has a master's in computer science, but he actually studied linguistics as well, so he has a unique expertise in what the industry calls computational linguistics. He worked at Google for five years, first on search, and then on Google Duplex, which, as I'm sure many listeners remember, is the voice assistant the group at Google created that schedules appointments for you at restaurants and hair salons. It was amazing. That's right, I remember that, I remember that very well; that was big news at the time. Wild.
That's amazing. So you guys really have a lot of expertise, coming from different backgrounds but all very relevant to what you do. Very interesting. So how long ago was this that you all met? How did this become a company? Yeah, so we met at Cornell Tech, and I'd known my other co-founder for fifteen years, but the company was founded immediately after graduation, so we incorporated in June 2018, less than two years ago. We can talk about what we do, but basically we started from that concept, got accepted to a leading accelerator in New York City, and went from there: developing the MVP, getting our first big pilot, then something that we could actually convert to an annual contract. We raised our round of four million dollars last July, and today we're a team of seventeen people across New York City, California, and Delaware. Wow, that's great. Congratulations on that so far, that's amazing. So tell us more about the company. What is it? What do you guys do? Absolutely. So in one sentence, Hyro is a plug-and-play conversational AI platform for healthcare providers. Let me break that down. "Conversational AI": as I mentioned, we're trying to focus on voice but also text, so as long as it's natural language we don't care about the medium; it's actually understanding natural language. "Healthcare providers" is the vertical we wanted to start with: enterprises and organizations with massive amounts of data, where this data is hard to navigate, and patients or general users find it hard to find whatever they're looking for, or to complete the tasks and transactions they want to complete. And finally, and this is the most important aspect of our solution, there's the plug-and-play. When we researched the assistant market and the chatbot market,
We learned that a lot of the existing solutions are based on a creation platform: a vendor gives the organization a platform where they can define their intents and build workflows, or conversation flows: if a user says X, reply with Y; if they say something else, maybe branch to a different flow. Users found this to be cumbersome; there's friction at both deployment and maintenance for the organization. So we said, let's look for a completely plug-and-play approach. What we do is we actually tap into the existing data sources of the organization, scrape them, and basically translate the data into a different data structure, a knowledge graph, which is composed of the main entities, their attributes, and the connections between those entities and attributes. This is our own representation of the data, which we can query with natural language questions. Then we give the organization an embeddable piece of code; they just copy and paste it into their website, call center, Alexa, or Google assistant, and they have their voice or chat assistant, based on their own content. Yes. So, I'm not a computer scientist, so maybe you can explain that a little bit. When you're working with different organizations, which presumably have different ways of organizing their data, how are you able to take one piece of code and yet make it applicable to all these different organizations? Yeah, absolutely. I'll explain one of the main use cases we've found valuable for healthcare providers: helping their patients find a physician based on various attributes of that physician. We talked about constructing this knowledge graph: in this case the entity is a physician, and the attributes might be cardiology, locations, the insurance plans they accept, the languages they speak, and so on. So what we do is we actually go over the provider's website, where all of their physicians have pages, and we scrape them.
And we don't care whether it's hundreds of physicians or tens of thousands of physicians: we scrape them, build this knowledge graph, and then for every attribute that exists in the data we can retrieve the relevant answer. I'll give you an interesting example from when we first deployed. We thought about the things people might ask, like looking for a cardiologist who speaks Spanish, accepts a given insurance, and practices on the Upper East Side, right? Location, language, specialty, insurance, and so on. So we tested the bot, and we gave it to our first female test user, and she asked for a female physician. That's a use case we just didn't think of, why a person would filter by gender; maybe when you think about it, it makes sense, but we didn't even consider it, and the system was able to reply with the relevant results just because the data was there: there was a gender, female or male, for each physician. And I think that's the power of actually being driven by the data, versus trying to imagine and predefine every intent. That's very interesting. So it was essentially intelligent enough to just use the data that was there, and bring that back. Wow, very
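The idea of querying any attribute present in the scraped data can be sketched very roughly in Python. The entities, attribute names, and query helper below are hypothetical illustrations of an attribute-based lookup over graph entities, not Hyro's actual schema or code:

```python
# Hypothetical physician entities, as they might be scraped from a
# provider's website. Attribute names here are invented for illustration.
physicians = [
    {"name": "Dr. A", "specialty": "cardiology", "language": "Spanish",
     "gender": "female", "location": "Upper East Side", "insurance": {"Aetna"}},
    {"name": "Dr. B", "specialty": "cardiology", "language": "English",
     "gender": "male", "location": "Midtown", "insurance": {"Cigna"}},
]

def find_physicians(graph, **filters):
    """Return names of entities whose attributes match every filter."""
    def matches(entity, key, value):
        attr = entity.get(key)
        if isinstance(attr, set):      # multi-valued attribute (e.g. insurance)
            return value in attr
        return attr == value           # single-valued attribute
    return [e["name"] for e in graph
            if all(matches(e, k, v) for k, v in filters.items())]

# Any attribute present in the data is queryable, including ones the
# designers never anticipated, such as gender in the anecdote above.
print(find_physicians(physicians, specialty="cardiology", gender="female"))
# → ['Dr. A']
```

The point of the sketch is the design choice from the interview: because queries are resolved against whatever attributes the scraped data contains, no one has to enumerate intents like "filter by gender" in advance.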
"Hey Katie. Hi Ben. What are we talking about today? We're gonna talk about Gaussian processes as a way to solve pretty gnarly regression problems. All right, let's dig into the gnarl. You are listening to Linear Digressions. So why use the word gnarly? Yeah, so let's talk about regression for a second and kind of build up some complexity in our mental model here, so that we arrive at gnarly in a minute or two. So, just quizzing myself: a linear regression would be, you've got a bunch of points and then you try to draw a line through them that's representative of those points, right? Yeah, that's pretty classic; that's usually the first thing that I think of when somebody says regression: exactly that, you have a bunch of points and you're trying to draw a line through them. The best-fit line is the one that minimizes the difference between the line and the points, for all of the points in your data set. And a linear regression is an example of what we call a parametric model. Sometimes we talk about the parameters of the model that people are fitting for, and in this case the parameters of a linear regression will be things like the slope of the line, if it's one-dimensional; it can be multidimensional, or multivariate, so you can have the coefficients that multiply several different terms. And then there's the intercept term, which is another parameter of the model. So when you're fitting the model, there's kind of a rule that says it has to have this functional form, y = mx + b, and then you're finding the m and b that best fit the data that you have, which is your x's and your y's. So there's a set mathematical functional form for the answer that you're trying to find, and then there are a couple of parameters that you're allowed to tune in finding the solution, subject to that mathematical form. Okay, got it, cool. But as I think
probably a lot of our listeners know, a linear regression is not the only type of regression that's out there. For the purposes of this conversation, let's talk about some other functional forms that that mathematical equation could take. You could imagine seeing a particular pattern in the data points; let's suppose your data points were hourly temperature measurements taken over the course of a year. So you wouldn't want to fit that with just a line, right? No, no. Especially... did you say hourly? Yeah. Then that's probably gonna be, I mean, it's going to be curvy, and it's probably going to be sinusoidal, because every day the temperature goes up and then it goes back down. Yes, so there's a periodicity to the daily measurements that you're gonna take, where on average the temperature is higher in the day and lower at night. You expect that in general; there's maybe a roughly twenty-four-hour period, or pattern, that you see. Of course there can be days when you have a cold front that comes through first thing in the morning, or whatever, so it's not guaranteed, but in general that periodic structure is something that you expect to see. And moreover, there's going to be an annual trend, where the temperatures are going to be higher in the summer and lower in the winter. So if you had ten years' worth of data, there might be sinusoidal functions with a couple of different periods, and when you were fitting the data you'd put together a couple of different terms to account for that structure in the model, and boom, you have another example of a parametric model with a totally different functional form; but again we can use it to fit a regression model for temperature as a function of date and time. Okay, so a parametric model can be one of many different shapes, and it can also be a combination of those shapes. Is that right? Because you were saying,
if you had ten years' worth of data, for example, you'd probably have, roughly speaking, two sine waves. Yeah. I would say instead of calling it shapes, I'm gonna use a slightly more precise term, which would be a mathematical form. By mathematical form, I mean: do you have a function that's sinusoidal, in the case of the temperature measurements, or linear, in the case of trying to fit a line? And let me toss in a few more here. You could have something that is polynomial, so if you have a bunch of data that looks like it's distributed in kind of a parabolic shape, then maybe you're gonna fit it with a quadratic function or a quartic function, like x squared... Squared, cubed, yeah, or sums of terms like that. Yeah, so those are all gonna have different characteristic shapes. And, coming back to the point you made, yes, they have different shapes, but when we're thinking about them as mathematical objects, they have different functional forms.
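The parametric forms discussed in this exchange (linear, polynomial, sinusoidal) can be sketched with NumPy; the synthetic data below is made up purely to illustrate fitting each fixed functional form to noisy points:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear form y = m*x + b: the parameters being fit are m (slope) and b (intercept).
x = np.linspace(0, 10, 50)
y_line = 2.0 * x + 1.0 + rng.normal(0, 0.1, x.size)
m, b = np.polyfit(x, y_line, deg=1)

# Polynomial form y = a*x**2 + b*x + c: same idea, a different fixed functional form.
y_quad = 3.0 * x**2 - x + 0.5 + rng.normal(0, 0.1, x.size)
a2, b2, c2 = np.polyfit(x, y_quad, deg=2)

# Sinusoidal form y = A*sin(w*t) + c, like a daily temperature cycle: with the
# 24-hour period fixed, fitting A and c is linear least squares on two terms.
t = np.linspace(0, 72, 200)                       # hours
temp = 10 * np.sin(2 * np.pi * t / 24) + 15 + rng.normal(0, 0.5, t.size)
design = np.column_stack([np.sin(2 * np.pi * t / 24), np.ones_like(t)])
(amplitude, offset), *_ = np.linalg.lstsq(design, temp, rcond=None)

print(round(m, 1), round(a2, 1), round(amplitude, 1))  # approximately 2.0 3.0 10.0
```

In each case the model is a "rule" fixing the mathematical form, and fitting only tunes the small set of parameters allowed by that form, which is exactly what makes these models parametric.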
Education and AI
"Today we chat with David Guralnick, President and CEO of Kaleidoscope Learning. I've had a longtime interest in both education and technology, going way, way back, as I was lucky enough to go to an elementary school outside of Washington, DC, called Green Acres School, in Rockville, Maryland, which was very project-based, so it was non-traditional education. You worked on projects, you worked collaboratively with people, and a teacher was almost as much an advisor and mentor as a traditional teacher, not a person in front of the room talking at you. And you learned how to learn, how to think creatively, how to pursue your own interests, and how to learn by doing, and all of that stayed with me as I got older. I developed an interest in technology from a really young age; I had my first computer at thirteen, which was at a time when people did not have a computer at thirteen, and through that I was interested in how computers could learn, and in what artificial intelligence meant; it was a field that was a bit of a mystery. As I was finishing college, I got into the work of an artificial intelligence professor, Roger Schank, who was at Yale. Roger was just at that time leaving Yale with some faculty to start an institute at Northwestern University that brought together cognitive psychology, computer science, AI, and education, to apply artificial intelligence techniques to education. So I did my PhD there, and ended up being asked to focus particularly on business problems in the corporate world, and worked with some corporate clients through Andersen Consulting. And that's kind of the work that continues to this day. Yeah, that's great. What years were you doing your PhD? For me it was starting in '89 and wrapping up in '94. Got it, okay. So that was before the wave hit everything, right? You guys were working on this stuff on the cutting edge, it sounds like. Yeah, absolutely, it was.
We were considered on the cutting edge, a cutting-edge lab. We were written up in the early days of Wired magazine and all that kind of stuff, and it was a really interesting place to be, with a tremendous group of people, some of whom I still work with to this day. We had people who were excellent writers, people who were really cutting-edge thinkers in AI, in education, and in cognitive psychology, which sometimes... the cognitive side sometimes gets left out, right? How do you think and learn? How do you understand what you're experiencing? All of that goes into designing an experience. It was a really fascinating place to be, built on a lot of the principles that I had come to believe in during my formative years, and it couldn't have worked any better. Yeah, that's awesome. Now, you've seen this whole progression of AI and machine learning. What's your perspective on that, since you've lived this entire cycle? Yeah, I've lived a few cycles. When I first started, it was almost the dying days of AI, at one point, right? We were doing really interesting things, I think, in applying it to education, but AI as a field was considered a failure. The years since my PhD were mostly what's considered an AI winter. People just didn't have high hopes; we expected to be in a Jetsons-like world and we were not, and people wondered what happened. And now I've seen the renaissance, and the renaissance has certainly been interesting to see. There's a lot more computing power now, which has helped. There's a lot more public interest in, and understanding of, what AI can be, and some of that's probably more good than bad; sometimes it's a little scary. We're also in danger of AI being over-hyped once again, and I think that's the thing that we look at. I'll talk to people sometimes about what's possible.
When it comes to what kinds of conversations online systems can have with people, there's usually an overstatement of what the reality is. So I think that's something to be cautious of as we move forward: to keep thinking about where AI techniques and machine learning, which to me is a subset of AI, can fit in, and not overstate things, and not necessarily feel like the goal has to be a fully functional human replacement. I don't know that that's a societal goal, for a lot of reasons, but even in terms of technology, it's not clear that that's what we need. And particularly in the world of education, it's not clear that that's what we would want.