19 Burst results for "Geoffrey Hinton"
"geoffrey hinton" Discussed on TechStuff
"Early in the history of neural networks, computer scientists were hitting some pretty hard stops due to the limitations of computing power at the time. Early networks were only a couple of layers deep, which really meant they weren't terribly powerful and they only tackle rudimentary tasks like figuring out whether or not a square is drawn on a piece of paper that isn't terribly sophisticated in one, thousand, nine, hundred, eighty, six, David Rumelhardt Geoffrey Hinton. And Ronald Williams published a lecture titled Learning Representations by Back Propagating errors. This was a big breakthrough with deep learning. This all has to do with a deep learning system improving its ability to complete a specific task and basically the algorithms job is to go from the output layer. You know where the system has made a decision and then work backward through the neural network adjusting the weights that lead to an incorrect decision. So. Let's say it's a system that is looking to figure out whether or not a cat is in a photograph and it says there's a cat in this picture and you look at the picture and there is no cat there. Then you look at the inputs one level back just before the system said, here's a picture of a cat and he'd say, all right, which of these inputs led the system to believe this was a picture of a cat and you would adjust those. Then you'd go back one layer up. So you're working your way up the model and say which inputs here led to. It giving the outputs that led to the mistake and you do this all the way up until you get up to the input level at the top of the computer model, you are back propagating, and then you run the test. Again to see if you've got improvement, it's exhaustive but it also drastically improved neural network performance much faster than just throwing more brute force to it the Algorithm essentially as checking to see if small change in each input value received by a layer of nodes would have led to a more accurate result. 
So it's all about going from that output, working your way backward. In 2012, Alex Krizhevsky published a paper that gave us the next big breakthrough. He argued that a really deep neural network with a lot of layers could give really great results if you paired it with enough data to train the system, so you need to throw lots of data at these models. And it needed to be an enormous amount of data; however, once trained, the system would produce lower error rates. So yeah, it would take a long time, but you get better results. Now, at the time, a good error rate for such a system was 25 percent. That means one out of four conclusions the system came to would be wrong; if you ran it across a long enough number of decisions, you would find that one out of every four wasn't right. The system that Alex's team worked on produced results that had an error rate of 16 percent, so much lower, and then in just five years, with more improvements to this process, the classification error rate had dropped down to 2.3 percent for deep learning systems. So, 25 percent to 2.3 percent. It was really powerful stuff. Okay, so you've got your artificial neural network, you've got your layers and layers of nodes, you've adjusted the weights of the inputs into each node to see if your system can identify pictures of cats, and you start feeding images to this system, lots of them. This is the domain that you are feeding to your system. The more images you can feed to it the better, and you want a wide variety of images of all sorts of stuff, not just different types of.
"geoffrey hinton" Discussed on Ideas
"Educate a computer and obtain the adult brain. Well. These these algorithms, these deep neural networks are called that for a reason they're essentially motivated or. designed. Based on principles that we see in human and animal brains. Okay. So the idea that there's pathways and that pathways of sensory input and Strengthens them and which leads through some sort of thought process and so on. This is really motivated by by the way we process data ourselves. gave hence the name neural networks and so on. So. The workhorse of most of the machine learning breakthroughs that have happened recently is this object, a artificial neural network. It's an artificial neural network is a small piece of that human brain encoded in computer to learn based on experience and how it works. As you can imagine on the right hand side like the pixels of an image maybe it's maybe it's retina maybe it's your. And you know as you go further back in the network, there's neurons and pathways and the pathways take signals and the neurons let them pass or maybe it stops them or maybe amplifies them. Such that you can train this thing or you can educate this thing. The way it's done in Amazon and most other places you take a whole bunch of images, billions of images a day, and you might have somebody tell that network for each one of those images whether or not the person in the images happy or sad. So you yourself don't have to program that computer you know what the neural networks doing is strengthening pathways that take you from that pixellated image space to the L. put that you want. The last neurons will fire happier said based on that pathway. So, then you can take an image that it hasn't seen before like happy trump and you could put it in the neural network, and if everything goes well in the training or the education, the neural network will fire that. Neuron. That tells you that that's a happy face and that's what's going on Amazon. 
It has many of these neural networks trained on tons of images. If you take that same neural network and feed it a sad picture of Trump, then it will hopefully fire the pathway that tells you that there's a sad picture coming into it. Right. So the idea of neural networks is sort of the same philosophy as learning. But what I just showed you is that image space, these pixelated images, are very similar in many ways to the state space of quantum mechanics. The field of image interpretation seems to be exploding right now. Absolutely, the field of computer vision has very much exploded, I believe, since about 2012, 2013. And Canada, and Toronto, play a very large role in that. Geoffrey Hinton and his group at the University of Toronto, now at the Vector Institute for Artificial Intelligence, pioneered this technology, which really demonstrated to other mathematicians and computer scientists that computer vision, or the classification of images, say, in a standard data set, was very possible. In the field of artificial intelligence, there's something called standard data sets: pictures of cats, pictures of dogs, pictures of people doing things, pictures of animals. And these data sets have been static for years, so when computer scientists and mathematicians and scientists come up with new algorithms for image classification (feed in all the pixels: is it a tree, is it a cat?), they can compare their algorithms. And in 2012, 2013, there was a huge breakthrough here in Canada with the benchmarking associated with these data sets, and everyone in the field recognized that there was a breakthrough in computer vision. Can you explain that breakthrough, just briefly? It's fairly simple, in the sense that you have images, maybe thousands of images, that a human has labeled cat, dog, tree. You train a neural network algorithm on those images and the associated labels.
And then you feed in new images that the computer hasn't seen before and ask what percentage is misclassified. And we saw a jump in the success of these algorithms essentially in the last decade. How do you teach a computer to recognize complex images, like a dog, or whether a person is happy or sad? You do it with data. It's data driven, and that's where the learning comes in. So when I try to motivate this straw-man argument in my lecture, that, you know, we cannot teach a computer to recognize a tree, I haven't yet talked about data. I want the audience to think: if I was programming a computer to just take black and white pixels and arrange them, maybe spatially, in the image, would I be able to write a program that says definitively that this, and every other image of a tree I give it, is what we want it to be, a tree? The data-driven approach is feeding examples, thousands, millions, billions of examples, to a computer program that learns from that data, and that's the paradigm that most of the breakthroughs we're witnessing in real time now are based on: data driven. So can you, are you able to talk about how you teach a computer to actually learn? What is the process of learning when you're presenting this data? What does that mean for the computer? So the most successful strategy in the last decade has been to program what are called these neural networks, fashioned after the human, and maybe mammal, brains. And the neural network is really a kind of signal-processing program, where signals propagate through the machine in pathways similar to the way signals in the human brain propagate between neurons. To teach a computer to recognize a photo, you strengthen the pathways that will lead to the output of that neural network successfully classifying that photo. So learning is strengthening these pathways; it's strengthening the signal propagation that goes from pixelated image to label cat, label dog, label tree.
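The evaluation described here, holding out images the model has never seen and asking what percentage is misclassified, can be sketched in a few lines. The labels and "predictions" below are made-up stand-ins, not output from a real model:

```python
# Held-out ground-truth labels vs. a hypothetical model's predictions.
true_labels = ["cat", "dog", "tree", "cat", "dog", "tree", "cat", "dog"]
predicted   = ["cat", "dog", "cat",  "cat", "dog", "tree", "dog", "dog"]

# Count disagreements and report the fraction misclassified.
errors = sum(t != p for t, p in zip(true_labels, predicted))
error_rate = errors / len(true_labels)
print(f"classification error rate: {error_rate:.1%}")  # prints "classification error rate: 25.0%"
```

This is exactly the number that fell from roughly 25 percent to 2.3 percent over the period discussed earlier.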
has been making headlines also for the last few years. What do you make of computer programs being able to teach themselves, like how to play chess, for example? Right. So I think even the programs that do game play, like chess, or Go as the famous one, do have some element of data being fed into them, or data being propagated through them, to strengthen pathways. We can't, I believe, divorce the...
"geoffrey hinton" Discussed on The Bitcoin Podcast
"And they basically deviated from Geoffrey Hinton's backrub approach to signal out the weights of near on that. and so that was a completely different approach It doesn't require GPA's, it doesn't do a lot of matrix multiplication. Instead, that's a lot of Hash table, look ups and you can imagine that with the innovation in hardware and in your network architectures. That you know things are going to get. You know we'll be in five years time. We'll be looking back at some of these models and go remember when we used to use a mentioned smits. Have Primitive. Those were. This is what we're doing today. So yeah, I think there's definitely going to be new use cases that that come out. That's I could probably talk about this for a while but without diving going onto many tangents deserter than questions that. You. Wished I would have asked that I didn't. Think you guys did great I mean I'm really excited with with the space I. Think it is transformational technology. It's. Andrew Ing. said, it's like electricity and I believe that it's going to be transformational. It's going to be in everything we do in one shape or form. it's going to be very powerful, which means you could be used for good and bad. So you'RE GONNA have to keep tabs on it just like you do with any technology. and I think it's GonNa, it's GONNA. Help us some solve some challenging problems that you know if you. If you look at the pace at which I can learn. This is the pace at which we've learned. I actually wrote an Arctic at blockbuster on this but. If you look at how evolution? Allows you to learn and how you could only lance or lineage, and now we then we started writing and so you could share stuff between focused and other people and learn have memories of Oh don't stick your hand on alliance mouth. 'CAUSE it'll. Will it that becomes something that you've learned from filing era, but with deep learning, we can continuously learn. and. 
So that, in and of itself, is amazing, what we'll be able to do by having continuous learning, observing, recognizing new patterns. So I'm really excited about AI, which is why I work in this field, and I'm excited about what role Core will play in enabling data scientists to solve these problems. And so yeah, I mean, I think you guys did a good job. Is there anything else you'd want to ask, or want me to dig into more, for your audience? I mean, there are so many stupid questions I could ask about AI, because I'm genuinely interested in the field and don't know much about it. I'll be the stupid audience member and ask those questions. Let's go with the recent one, which is GPT-3, right? What's the big deal? So, when you get to these language models, it's really, it's called embedding, word embedding. It's trying to learn the relationships between things in an almost multidimensional space. And typically, whether it's GPT-2 or ELMo, the issue there is just how many parameters you've trained. If you look at the brain, I think it's a hundred billion synapses; think of those as parameters. GPT-3 is just a very, very large model that was trained, and the conjecture is: the more parameters, the more complex the model, the more advanced it could be at detecting, in this case, language. Yes. So that's basically the deal around these natural language processing models: bigger is better. Is GPT-3 the most advanced one right now? I'm not actually sure; there's one that Microsoft recently did, I can't remember the acronym, but they did a transformer-based model. And I'm not sure if GPT-3 is actually larger than that; you could just look at the number of parameters. Like I said, the reason this is limited to these big companies is really about who has access to that much computational power, to
be able to train these models. But yeah, it's a toss-up between GPT-3 and Microsoft's model, whose name I can't remember. I'm surprised that a model like GPT-3 came out of OpenAI, right? Which is, like, the Elon Musk company, right? Yes. I'm kind of surprised at stuff like that. Is Google doing open-source stuff like this? Because you'd think they have both the computational power and the data sets required to get something equally good, if not better. Now, I think GPT-2 was done by Google, if I'm not mistaken. Okay? Yeah. The language models that we have, with the odd-sounding names: ELMo, BERT, GPT-2, GPT-3, and then Microsoft's, which for the life of me I can't remember. They did come out of either academia or nonprofits. OpenAI, I believe, trained their model on Microsoft's Azure infrastructure, because Microsoft invested a good portion of infrastructure into it. But yeah, these are great for natural language processing, and it's funny: we kind of stopped innovating much on computer vision. I think people feel that's at a reasonable place now, and everybody is innovating on natural language processing models, and I think what will happen is that we will learn about new approaches, and we might circle back and go play on computer vision again. And so, yeah, I think this is going to continue. We're going to continue with new architectures, and with finding ways to shrink them down to run on edge devices. To that end, it seems really important to have people like you at infrastructure companies trying to figure out what computational power, devices, and architectures we have available to train these various things, and where they're most efficient. Like you said, people can't do these things unless they have access to a specific amount of resources.
And if those resources aren't even available broadly to people who would like to use them, then without folks like you in your position, it's really hard to bridge that gap. Are you seeing your competitors, people who provide these resources, hire people who are interested in machine learning research, or is it just part of the deal, like you can't run a company like that without having some people like you involved? To the first part of your statement: absolutely true. You know, the example that comes to mind: we did an initiative with MIT in which one of the researchers there trained what's called BigGAN, the generative adversarial network, on our infrastructure, and that was the first time somebody trained that network outside of Google. Absolutely, our goal is to help bring this capability to all data scientists. And, yeah, we want this to be democratized, to make sure that everybody can have access to this infrastructure in some shape or form.
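The word-embedding idea raised above, learning the relationships between words as points in a multidimensional space, can be sketched with toy vectors. These 3-dimensional vectors are made up for illustration; real models like GPT-2, ELMo, or BERT learn vectors with hundreds or thousands of dimensions:

```python
import numpy as np

# Hypothetical word embeddings: each word is a point in a shared space,
# and related words end up near each other.
emb = {
    "happy": np.array([0.9, 0.1, 0.0]),
    "glad":  np.array([0.8, 0.2, 0.1]),
    "sad":   np.array([-0.7, 0.2, 0.1]),
}

def cosine(a, b):
    # Cosine similarity: close to 1 for similar directions, negative for opposed.
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

sim_glad = cosine(emb["happy"], emb["glad"])
sim_sad = cosine(emb["happy"], emb["sad"])
print(sim_glad > sim_sad)  # "happy" sits closer to "glad" than to "sad"
```

The parameter counts discussed for GPT-2 and GPT-3 are, in large part, exactly these kinds of learned coordinates plus the weights that operate on them.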
"geoffrey hinton" Discussed on This Week in Machine Learning & AI
"I am on the line with Balk at the Shami Shorty while Bach is a research scientist at qualcomm research, but walk welcome to the. PODCAST, thank you for having me It's great to have an opportunity to chat with you, and I'm looking forward to learning a bit more about your research and what you're up to their at qualcomm. But to get US started. Why don't you share a little bit about your background and how you came to work in a? Sure so I, did my masters in electrical engineering at Chalmers University of Technology in Sweden, and in that program we had a bunch of courses like machine, learning and pattern recognition as well as some image analysis courses, which kind of drew me to failed I decided to do. My master desists on cervical cancer diagnosis, using Amal so cutting the medical imaging domain and I absolutely enjoyed it, and to do my PhD in dissimilar field so I came to the Netherlands in bowed university and I for my. Developing machine learning models for breast cancer diagnosis in the logical. Images is the logical. Images are microscopic images of t shirt. and kind of Indie early mid part of my. Deep Deep Learning Revolution Happen and a me, and a lot of my colleagues quickly s to use the learnings and get a gift familiar with retraining it ourselves. And kind of a move to that using that in mind our project and during my PhD also. The Chameleon Challenge It was a challenge on a finding cancer metastasis on breast to two more patients, and it turned out to be very successful challenge and was one of the first examples. In which using a I, the top leading algorithms were actually outperforming human experts be become part the top two months in the challenge with a panel of eleven pathologists, and all all all. They were beating all eleven pathology without exception. 
I also did a visiting research stint at Harvard, at the Beck Lab, and towards the end of my PhD I decided to join Qualcomm, where I've been for a bit more than two years now, and I'm mainly working on conditional computation. What is conditional computation? Tell us a little bit more about that. So conditional computation, in the context of neural networks, refers to a class of algorithms that can selectively activate units conditioned on the input they receive, such that on average we have lower computation costs. And when I'm speaking of units, it could be layers or individual filters, for example. And the thing is, in our feed-forward neural networks, we usually have this fixed prior that no matter what input the network is receiving, we always run all the layers, all the filters, no matter what, while in reality some examples might be simple and some harder, and maybe for simple examples we could exit earlier from the network and finish the classification. Or, in a classification task, sometimes you're classifying the image of a cat and sometimes the image of a vehicle, for example, but even when, in the middle of the network, we're kind of certain that we are dealing with a picture of a cat, we're still applying all those vehicle-detection features or filters to our feature maps, which is superfluous, and also from a generalization perspective that is bad for our network. So conditional computation aims to selectively activate parts of a network. It could be different layers, it could be channels, it could actually be a whole sub-network in your main network that you completely deactivate. In fact, if we go back, the first examples were maybe by Geoffrey Hinton and Robert Jacobs, from 1991.
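One flavor of the conditional computation described here, exiting early from the network when a simple example is already classified with confidence, can be sketched as follows. The weights, the two-block structure, and the confidence threshold are all invented for illustration; this is not Qualcomm's implementation:

```python
import numpy as np

# Hypothetical two-stage classifier: a cheap early block with an exit
# head, and an expensive later block that can be skipped entirely.
rng = np.random.default_rng(0)
W_early = rng.normal(size=(16, 4))   # cheap early block + early-exit head
W_late = rng.normal(size=(16, 4))    # expensive later block

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def classify(x, threshold=0.9):
    p = softmax(x @ W_early)
    if p.max() >= threshold:
        # Simple example: the network is already confident, so we exit
        # early and save the cost of the later layers.
        return int(p.argmax()), "early"
    # Harder example: run the full network.
    p = softmax(x @ W_late)
    return int(p.argmax()), "full"

label, path = classify(rng.normal(size=16))
print(label, path)
```

On average, the more inputs that take the "early" path, the lower the computation cost, which is the point of the technique.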
"geoffrey hinton" Discussed on Progressive Talk 1350 AM
"Red ball from a green one the both of those programs share a common problem they only know how to do one thing the goal of a I has never been to just build machines that can beat humans at chess FXS is always been used as a way to test new models of machine learning and while there's definitely use for a machine that can sort one thing from another the ultimate goal of A. I. is to build a machine with general intelligence like a human hands to me gonna chess and only chess is to be a machine to be good at chess good at doing taxes good speaking Spanish good at picking out apple pie recipes this begins to approach the ball park of being human so this is why early A. I. research ran up against what you're taught the A. I. how to play chess you still have to teach you what constitutes a good apple pie recipe and then tax loss in Spanish and he still have the rest of the world to teach all the objects rules and concepts that make up the fabric of our reality and for each of those you have to break it down to its logical lessons and then translate that essence in the code and then work through all the kings and then once you've done this once you've taught it absolutely everything there is in the universe you have to teach the AI all the ways these things interconnect just the thought of this is overwhelming current researchers in the field of A. I. refer to the work their predecessors did is go fi good old fashioned A. I. it's meant to evoke images of malfunctioning robots their heads spinning wildly as smoke pours from them it's meant to establish a line between the A. I. research of yesterday and the A. I. research of today but yesterday wasn't so long ago probably the brightest line dividing old and new in the field of AI comes around two thousand six for about a decade prior to that Geoffrey Hinton one of the skeleton crew of researchers working through the A. I. winter had been tinkering with artificial neural networks in old A. I. 
concept first developed in the 1940s. The neural nets didn't work back then, and they didn't work terribly much better in the nineties, but by the mid-2000s the internet had become a substantial force in developing this type of AI. All those images uploaded to Google, all that video uploaded to YouTube: the internet became a vast repository of data that could be used to train artificial neural networks. In very broad strokes, neural nets are algorithms made up of individual units that behave somewhat like the neurons in the human brain. These units are interconnected, and they make up layers. As information passes from lower layers to higher ones, whatever input is passed through the neural net is analyzed in increasing complexity. Take, for example, the picture of a cat. At the lowest layer, the individual units each specialize in recognizing some very abstract part of the cat picture, so one will specialize in noticing shadows or shading, and another will specialize in recognizing angles, and these individual units give a confidence interval that what they're seeing is the thing they specialize in. So that lower layer is stimulated to transmit to the next higher layer, which specializes in recognizing more sophisticated parts. The units in the second layer scan the shadows and the angles that the lower layer found, and recognize them as lines and curves. The second layer transmits to the third layer, which recognizes those lines and curves as whiskers, eyes, and ears, and it transmits to the next layer, which recognizes those features as a cat. Neural nets don't hit a hundred percent accuracy, but they work pretty well. The problem is, we don't really understand how they work. The thing about neural nets is that they learn on their own. Humans don't act as a creator god to code the rules of the universe for them, like in the old days; instead, we act more as trainers. To train a neural net, it's exposed to tons of data on whatever it is you want it to learn.
You can train them to recognize pictures of cats by showing them millions of pictures of cats; you can train them on natural languages by.
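The layer-by-layer processing described above, pixels entering at the bottom and each successive layer computing a more abstract representation, can be sketched as a simple stack of matrix multiplications. The layer sizes, random weights, and comment labels are illustrative only; a trained network would have learned weights:

```python
import numpy as np

# A stack of layers of decreasing size; each comment echoes the kind of
# feature the transcript attributes to that depth (purely illustrative).
rng = np.random.default_rng(1)
layers = [rng.normal(size=(64, 32)),   # "shadows and shading, angles"
          rng.normal(size=(32, 16)),   # "lines and curves"
          rng.normal(size=(16, 8)),    # "whiskers, eyes, and ears"
          rng.normal(size=(8, 2))]     # final call: "cat" vs "not cat"

x = rng.normal(size=64)                # a pixelated input image
for W in layers:
    x = np.maximum(0, x @ W)           # each layer transmits to the next

# Normalize the final activations into a rough per-class "confidence".
confidence = x / (x.sum() + 1e-9)
print(confidence.shape)
```

Training, in this picture, is the process of adjusting each `W` so that the final activations line up with the labels in the data.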
"geoffrey hinton" Discussed on Pulse of AI
"But overarching list of many projects in the future. What if all of your internal systems want to be. I enabled in one fashion or another. So you have the right infrastructure if you have the right data governance in place. Do you have the rights Skill set to identify the use cases. Early on I would say. Don't try to boil the ocean. You get started but you know small projects to validate Implemented a few solutions What would you often call the low hanging fruits and really instrument and agile process almost like an agile software engineering process? You want to be able to break it into smaller components terrifically solve certain solutions and then be able to experiment along the way Because you may not know you know the the ultimate Solution Front That you end up with the at the end of the day an AI audit. I would say really starts through kind of the same Elements that we talked earlier. The must have. Do you have the right data in place? Do you have a the infrastructure backbone for Say a data lake already in place That's that's the starting point them. Are you able to capture the right use cases and run the projects In an agile fashion the second one we talked about and then talent do. I asked the right either internal talent. The right partners identify that. I actually do this repeatedly and end up with what we talked earlier about a successful deployment now. You talked about low hanging fruit on which actually leads into my next question until in wet areas can companies deployed A. I today to get real returns now. I think that's the the classical use like supply chain optimization Maybe fraud detection than the Classical predictive maintenance time series analysis but would I see really emerging more more rapidly than I've ever seen anything Emerged in the last ten. Twenty years is Computer Vision use cases. 
Really, what computer vision and image recognition can do has changed so dramatically in the last few years that you can now tackle almost any use case where you have a human inspecting certain parts, like predictive quality control, and supplement, support, and optimize it with computer vision algorithms. If you were following the computer science literature: the ACM's Turing Award, which is most often compared to the Nobel Prize of computer science, was given to Geoffrey Hinton, Yann LeCun, and Yoshua Bengio last year. It was awarded for the deep learning algorithms that they worked on, and for the success that they had, in computer vision primarily, and that, really, I think, is a hallmark example of AI today, where you can implement projects and get to returns fairly quickly. It's not the most trivial one, I would say, but if you know what to do in computer vision, you are able to leverage a lot of open-source tools and published deep learning models that are freely available, so that you can get started, and there are companies, of course, that can help you along the way. And maybe to just mention one of the projects that we at DAIN have worked on: it is on infrastructure inspection, and the reason I like to bring that project up is that it was done in conjunction with a partner, a partner called GPA, a classic engineering firm, and it's an example of how a partnership can rapidly develop a new solution where both partners bring in their specific domain expertise. The specific use case here is inspecting a bridge, detecting bridge delaminations on the surface and under the surface of the bridge, by flying a drone across the bridge, taking infrared images and RGB images, and then having the AI algorithm really determine where there are going to be issues that are currently hiding under the surface of the bridge. And this is a classic example
where you have to combine the engineering expertise, which in this case comes from our partner GPA, and the machine learning back-end expertise, which comes from DAIN, to really solve a particular task. So a partnership looking at image recognition, I think, is a great way to capture new solutions very rapidly and drive to a return on investment, because that's ultimately what the business is looking for. And you can imagine that across all industrial use cases, right? You're in Southern California; I'm up here in the Bay Area, in Silicon Valley, and we have a problem with forest fires. One of our problems is our power lines: poles that, you know, need maintenance, lines that need maintenance, trees that are too close. And the way they do it now is they send around people in trucks, or they fly over in helicopters, and at some point, you can imagine, all of us up here are begging them to put up drones with computer vision. And you can just see that across everything, right? Yeah, exactly, and it can be drone images, cameras everywhere. Now everybody has a cell phone; cameras are cheap and easy. While there are of course privacy issues and sensitivities, with facial recognition for example, setting those sensitive topics aside for the moment, computer vision has tremendous capability that can be used across almost any vertical. We talked about infrastructure; we talked about looking at bridges, looking at power poles, looking at forest fires. Even relating it back to the COVID-19 crisis that we're in the middle of: companies are now using it to look at X-ray images, chest X-ray images and scans, and trying to determine, is COVID-19 present, and how extensively has it damaged the lungs? So again, the application varies, and the back end is an image recognition algorithm. So, you know, of course I'm oversimplifying; it's kind of the same algorithm behind all of those image recognition solutions.
But it is something that can be applied in any industry today. I was just thinking, as you were talking there about cell phones: to solve our forest fire problem and our line maintenance problem, maybe we do a crowdsourced business where people take a picture, geo-tagging the picture of, you know, the electrical pole in front of their home or on their property, and upload it to a system for PG&E to analyze. Really, yes. Yeah, I mean, if you do that without AI, they would probably be overwhelmed, because they'd suddenly get ten thousand images a day. Because I think the public is willing to help, crowdsourcing is a great opportunity. I mean, just think about potholes: people could take pothole images all day long. I would do that and send them to the city. Now, the city doesn't have a means to even review all of those, but if there's an algorithm that can filter that and make sure that only the ones that are really relevant get surfaced and prioritized correctly for the city, or any other company, to take care of, then suddenly you have a solution in place, and the missing link was just, you know, the AI algorithm helping us as humans to focus on the right images, on the right things, the right information at the right time. It comes back to this old dilemma with big data: collecting more data is great, but unless you do something with it, it's kind of useless, and more data and more sensors, millions of sensors, cannot just lead to more reports and more dashboards. It needs to be filtered with AI, it needs to be pre-processed with AI, so that we can focus on the right things at the right time. I mean, if we come back to the saying that data is the new oil, then I would say AI is the refinery that makes that oil extremely valuable and makes it actionable for us in any kind of business context. I love that analogy right there. So how do companies work with DAIN? What does that process look like when you get a new customer?
Yeah, so we have a clear engagement approach that starts from the initial use case evaluation, to a proof of concept, to the ultimate minimal viable product that gets deployed. And once it's deployed, you continue to evolve it; it's an iterative process, end to end, in fairly well-defined steps. And along the way you have decision points, where sometimes you have to be able to experiment and go down a path that may or may not work out. And I'd say to anyone who is starting in AI: don't be afraid to invest in multiple projects. Even if, you know, some of them fail, and many of them will fail, the ones that succeed can be a differentiator; they might be a new, disruptive business opportunity. We actually like to partner closely with the clients, because it is such a close engagement; you're disclosing a lot of the intricate data of your organization, and that can be a sensitive topic for companies. So bringing in a trusted partner, I think, is important.
The Evolution of ML and Furry Little Animals
"You are listening to Talking Machines. I'm Katherine Gorman, with Neil Lawrence. We are again taping an episode in front of a live audience, digitally recorded, though, on Talking Machines. And if you want to be part of our live studio audience, big quotes, you can follow us on Twitter at TlkngMchns, or hit us up at thetalkingmachines@gmail.com. And our guest today for this interview on Talking Machines is Dr. Terrence Sejnowski. Dr. Sejnowski, thank you so much for taking the time to join us today, I really appreciate it. Great to be here. So we ask all of our guests the same question: how did you get where you are? What's been your academic and industrial journey? You're also very involved in the NIPS conference. Tell us everything. Well, a wise man once told me that careers are only made retrospectively, and I have no idea how I got here. There was no plan. It went through a sequence of stages, starting with graduate school at Princeton in theoretical physics. When I finished that, for reasons that have to do with the field of physics at the time, which was a little bit moribund, I went into neuroscience; that was a postdoc. And then from there, that's when I met Geoffrey Hinton, and that changed my life, because we met at a small seminar here in San Diego, and that was 1979. We hit it off, and from that, over the next few years, you know, blossomed the Boltzmann machine and backprop, and the rest was history. Terry, who were you postdocing with? Were you postdocing in San Diego? No, no. This was a postdoc at Harvard Medical School, in the Department of Neurobiology, with Stephen Kuffler, who was widely considered to be the founder of modern neurobiology, and it was an experimental postdoc; I actually recorded from neurons. So back to '79: you mentioned physics was a little bit moribund. In connectionist modeling, that was also a very quiet period; there wasn't a lot going on.
Was this the sort of age of classical AI? Right, you're absolutely right. This was, in fact, the neural network winter of the seventies, and it was primarily because of the failure of the perceptron. That's neat, because you say failure of the perceptron, and I read about that a lot. Did it really fail, or was it the Minsky paper, or did the Minsky and Papert book kill it, and was it a fair representation? Well, you know, it's interesting. I think that's the myth, that that book killed it, but I actually think there were other things going on, and Rosenblatt had died as well, which seems pretty significant. Yes, well, he was a pioneer, but you have to understand that digital computers were really primitive back then. Even the most expensive, you know, the biggest computers you could buy, didn't have the power of your wristwatch today. Rosenblatt actually had to build an analog device, costing a million dollars in today's dollars, that had potentiometers driven by motors for the weight sums, for the learning. Why was it potentiometers? Because, you know, digital computers were good at logic, but they were terrible at doing floating point. That's amazing. So he built that at Cornell, right? That's right, yeah, funded by the ONR. In any case, by the time that we were getting started with computers, it was the VAX era. It was becoming possible to do simulations. You know, they were small-scale by today's standards, but it really meant we could explore in a way that Frank Rosenblatt couldn't. So what you're saying about the perceptron, and just for a bit of context, the perceptron was '61, is that right? It was '59.
I think it was the book, but, you know, it was in that era of the early sixties. And so then there's this period where the digital computer actually wasn't powerful enough to do much, and then digital kind of overtook it eventually, but these analog machines were by then impractical from a point of view of expense. So you're saying it's less the book and more of a shift to the digital machine, which in those early days wasn't powerful enough to simulate the perceptron? Yes. So I have, you know, I have a feeling that history will show that AI was like the blind man looking under the lamppost for his keys, and someone came along and said, where did you lose your keys? And he said, well, somewhere else, but this is the only place I can see. I was reading a Donald MacKay quote recently, at the beginning of his book about the eye, which is just a fascinating area, and I guess he spent a lot of his career on it. He did work in the war on radar, and he was talking about the Ratio Club, which was these early cyberneticists, and the potential of the analog or digital computer to be what represented the brain. And his perspective was, he was sure it wasn't a digital computer, and he wasn't sure it was an analog computer either, and he thought it was kind of somewhere in between. But it feels like that in-between is what you're saying was the difficult bit to look at, and perhaps a place we're able to look now. That's right. You know, I think this is being driven, and this is true of all science, that what you can and cannot understand is really determined by the tools that you have for making measurements, for doing simulations, and it's really only this modern era that has given us enough tools, both to make progress with understanding how the brain works, and also with AI, because of the fact that we have a tremendous amount of computing power now. But just to go back to that early era: I once asked Allen Newell, you know, who was at Carnegie Mellon, and it was a time when Geoff Hinton was an assistant professor and I was at Johns Hopkins, and he was at the 1956 meeting at Dartmouth where AI was born, and I said, well, why was it that you didn't look at the brain for inspiration? And he said, well, we did, but there wasn't very much known about the brain at the time to help us out, so we just had to make do on our own. And he's right; that was the era. You know, the fifties was kind of the beginning of what we now understand about the signals in the brain, action potentials, synaptic potentials. So, you know, in a sense, what he was saying was that they basically used the tools they had available at the time, which was basically computers. But what were they good at? They were good at logic, at rules, at binary programming. So, you know, in a sense they were forced to do that. That's really interesting. I want to come back to 1979 in a moment, but this is an interesting context for that, because of course Wiener initially was someone who spread across both these areas. Norbert Wiener, who was at MIT and founded cybernetics, spread across both these areas of the analog and digital. He did his PhD thesis on Russell and Whitehead's book. But one thing I was reading about recently is that there was a big falling out between Wiener and McCulloch and Pitts. And it's sort of interesting that Wiener wasn't there at the meeting in '56, and I sometimes wonder, was that more about personalities and wanting this sort of old guard to stay away? Because you always feel Wiener was someone who bridged these worlds. You know, that's a fascinating story. I actually wrote a review of a book about Warren McCulloch that came up. They were friends, they had actually been friends, yet it had something to do with their wives.
Yeah, I think the lifestyle of McCulloch was not in line with his; it's a side story. But I guess the point you're making, which I think I'd like to use to take us back to '79 and the meeting with Jeff, is, and I think that this is true: despite the story between the humans, the real factor that drove things then was the sudden availability of increasingly cheap digital computers, and no longer the need to do this work that Rosenblatt and others had done, having to wire together a bunch of analog circuits that you couldn't reprogram to build a system. Yeah, I think that was a dead end, for the very reason you gave, which is that, you know, it's a special-purpose device that isn't good for anything else. And really, if you're trying to explore, you need the flexibility of being able to try many ideas, and that's what a digital simulation allows you to
"geoffrey hinton" Discussed on 106.1 FM WTKK
"Researchers in the field of AI refer to the work their predecessors did as GOFAI, good old-fashioned AI. It's meant to evoke images of malfunctioning robots, their heads spinning wildly as smoke pours from the basement, to establish a line between the AI research of yesterday and the AI research of today. But yesterday wasn't so long ago. Probably the brightest line dividing old and new in the field of AI comes around 2006. For about a decade prior to that, Geoffrey Hinton, one of the skeleton crew of researchers working through the AI winter, had been tinkering with artificial neural networks, an old AI concept first developed in the 1940s. The neural nets didn't work back then, and they didn't work terribly much better in the nineties, but by the mid-2000s the internet had become a substantial force in developing this type of AI. The internet became a vast repository of data that could be used to train artificial neural networks. In very broad strokes, neural nets are algorithms that are made up of individual units that behave somewhat like the neurons in the human brain. These units are interconnected, and they make up layers. As information passes from lower layers to higher ones, whatever input is passed through the neural net is analyzed in increasing complexity. Take, for example, the picture of a cat. At the lowest layer, the individual units each specialize in recognizing some very abstract part of the cat picture, so one will specialize in noticing shadows or shading, and another will specialize in recognizing angles, and these individual units give a confidence interval that what they're seeing.
"geoffrey hinton" Discussed on NewsRadio WIOD
"Yesterday wasn't so long ago. Probably the brightest line dividing old and new in the field of AI comes around 2006. For about a decade prior to that, Geoffrey Hinton, one of the skeleton crew of researchers working through the AI winter, had been tinkering with artificial neural networks, an old AI concept first developed in the 1940s. The neural nets didn't work back then, and they didn't work terribly much better in the nineties, but by the mid-2000s the internet had become a substantial force in developing this type of AI. The internet became a vast repository of data that could be used to train artificial neural networks. In very broad strokes, neural nets are algorithms that are made up of individual units that behave somewhat like the neurons in the human brain. These units are interconnected, and they make up layers. As information passes from lower layers to higher ones, whatever input is passed through the neural net is analyzed in increasing complexity. Take, for example, the picture of a cat. At the lowest layer, the individual units each specialize in recognizing some very abstract part of the cat picture, so one will specialize in noticing shadows or shading, and another will specialize in recognizing angles, and these individual units give a confidence interval that what they're seeing is the thing they specialize in. That lower layer is stimulated to transmit to the next higher layer, which specializes in recognizing more sophisticated parts. The units in the second layer scan the shadows and angles that the lower layer found, and recognize them as lines and curves. The second layer transmits to the third layer, which recognizes those lines and curves as whiskers, eyes, and ears, and it transmits to the next layer, which recognizes those features as a cat. Neural nets don't have a hundred percent accuracy, but they work pretty well. The problem is we don't really understand how they work. The thing about neural nets is that
they learn on their own. Humans don't act as a creator god to code the rules of the universe for them, like in the old days. Instead, we act more as trainers. In training, a neural net is exposed to tons of data of whatever it is you want it to learn. You can train them to recognize pictures of cats by showing them millions of pictures of cats. You can train them on natural languages by exposing them to thousands of hours of people talking. You can train them to do just about anything, so long as you have a robust enough data set. Neural nets find patterns in all of this data, and within those patterns they decide for themselves what about English makes English English, or what makes a cat picture a picture of a cat. We don't have to teach them anything.
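The train-by-example idea in this passage can be sketched in a few lines of Python. This is an illustrative toy, not the networks described in the episode: a single perceptron unit learns to separate made-up "cat" feature vectors from "not-cat" ones purely from labeled examples, with no hand-coded rules.

```python
# A minimal sketch of training by example: a one-unit perceptron.
# The feature names and data below are invented for illustration.

def train(examples, labels, epochs=20, lr=0.1):
    """Adjust weights only when the unit's prediction is wrong."""
    w, b = [0.0, 0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # nonzero only on a mistake
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

def predict(w, b, x):
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0

# made-up feature vectors: [whisker-ness, ear-ness, wheel-ness]
cats     = [[0.9, 0.8, 0.1], [0.8, 0.9, 0.0]]
not_cats = [[0.1, 0.2, 0.9], [0.0, 0.1, 0.8]]
w, b = train(cats + not_cats, [1, 1, 0, 0])

print(predict(w, b, [0.85, 0.7, 0.05]))  # a new cat-like input -> 1
```

Nothing here encodes what a cat is; the weights end up separating the two groups only because the labeled examples pushed them there, which is the "trainer, not creator god" point in the transcript.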
"geoffrey hinton" Discussed on Spark from CBC Radio
"A cat in the picture. The way computers see is pretty much the same as the way people see. I'm Geoffrey Hinton. I'm a professor at the University of Toronto; I also work for Google. The idea is that when you look at an image, you have some neurons that recognize little combinations of pixels. They might recognize that these three pixels over here, a light-colored pixel here, a dark one there, make a little piece of edge, and so you'll have a layer of neurons recognizing little pieces of edge. Then, at the next level up, you might have a layer of neurons that look at little pieces of edge and say, we've got two little pieces of edge that join at a fine angle here, so we've got a little point here, and if you want to recognize a bird, for example, having two bits of edge joining at an angle, it might be a beak. And then at the next level up, you might recognize that, okay, those two bits of edge that join at a fine angle are connected to a blob, and so that makes it more likely to be the head of a bird. And what's interesting about these neural nets is that as you make them deep, they work better. What do you mean by deeper? More layers of features. So at the front end you have features that look directly at the pixels, and then above those you have features looking at the features you already extracted, and so you have layers and layers of features getting more and more complicated, and that's pretty much how it is.
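Hinton's pixels-to-edges-to-parts progression can be loosely sketched as a stack of layers, each computing confidences from the layer below. All of the weights here are invented for illustration; a real network would learn them.

```python
import math

def layer(inputs, weights, biases):
    """One layer of units: each unit takes a weighted sum of the
    features below it and squashes it to a 0-1 confidence (sigmoid)."""
    return [1.0 / (1.0 + math.exp(-(sum(w * x for w, x in zip(ws, inputs)) + b)))
            for ws, b in zip(weights, biases)]

def forward(pixels, net):
    """Run the input through every layer: features of features of features."""
    activation = pixels
    for weights, biases in net:
        activation = layer(activation, weights, biases)
    return activation

# Hypothetical 3-layer net: 4 "pixels" -> 3 edge units ->
# 2 part units ("point", "blob") -> 1 "bird head" unit.
net = [
    ([[1.0, -1.0, 0.5, 0.0],
      [0.0, 1.0, -1.0, 0.5],
      [0.5, 0.0, 1.0, -1.0]], [0.0, 0.0, 0.0]),
    ([[2.0, -1.0, 0.5],
      [-1.0, 2.0, 0.5]], [0.0, 0.0]),
    ([[3.0, 3.0]], [-4.0]),
]

head_confidence = forward([0.9, 0.1, 0.8, 0.2], net)[0]
print(f"bird-head confidence: {head_confidence:.2f}")
```

The "deeper works better" remark corresponds to adding more `(weights, biases)` pairs to `net`: each extra layer gets to combine the previous layer's detectors into something more complicated.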
"geoffrey hinton" Discussed on Spark from CBC Radio
"That's Yoshua Bengio, a computer science professor at the Université de Montréal. Back in the spring, when this interview first aired, he and Geoffrey Hinton were two of the three people to win the Turing Award, sometimes called the Nobel Prize of computer science. I got to sit down with Yoshua while he was in Toronto for a conference on AI ethics. What's made Canada such a hub for AI research is exactly what Yoshua mentioned: that interaction between neuroscientists and computer scientists. Neural networks, right, that's inspired by what we know about the brain, since the fifties. Neural network AI is the technology that powers AI systems like AlphaGo, which beat a human master at the game of Go, and the machine learning programs that power things like image and speech recognition. But what is a neural network approach to AI anyway? Well, it's mostly about taking what we know about how our brains work and applying that to a machine. In its most basic terms, it's about recognizing patterns and then making predictions based on those patterns. So if you totally oversimplified it, it would sound something like this: My favorite color is red. I'm Canadian. I like maple syrup. My favorite color is blue. I'm Italian. I like pizza. My favorite color is red. I'm from Russia. I like maple syrup too. My favorite color is blue. I'm from Scotland. I like haggis. My favorite color is red. What do I like? From the established pattern, I would predict you like maple syrup. Wow, you're so smart. We have pancakes. Before these neural networks came to dominate, there was a debate about which AI technique was better: giving computers specific rules to follow, or letting them learn for themselves. While most of the rest of the world focused on the former, researchers like Geoffrey Hinton and Yoshua Bengio worked for years to develop these neural networks.
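The host's favorite-color example can be written out literally as a tiny pattern predictor. The `examples` list and the majority-vote rule are just a stand-in for what a real learned model does; this is a sketch of "recognize patterns, then predict", not of an actual neural network.

```python
from collections import Counter

# The toy "established pattern" from the transcript: (favorite color, food).
examples = [
    ("red",  "maple syrup"),
    ("blue", "pizza"),
    ("red",  "maple syrup"),
    ("blue", "haggis"),
]

def predict(color):
    """Predict the food most often seen with this color so far."""
    likes = [food for c, food in examples if c == color]
    return Counter(likes).most_common(1)[0][0]

print(predict("red"))  # -> maple syrup
```

A trained network does the same job with many more features and learned weights instead of an explicit vote, but the input-pattern-to-prediction shape is the same.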
Here's Yoshua again. So the progress has been pretty amazing, and even for people like me and my colleagues who have been in the middle of it, we did not expect, say, two years ago or five years ago or ten years ago, the progress that has happened in the last few years. We did expect progress; it's just that the speed at which it has been entering society in particular has been something we did not expect. And I think a lot of the progress has been due to more people working on this, so it's not just a few headliners like myself doing things. It's because there are hundreds of thousands of people who've been working on deep learning and AI that progress has been accelerating, and based on that observation I would guess it's going to continue, because more and more students are entering this field, and more and more engineers are trying to understand it so they can build products. Do you have concerns about the rate at which it's being adopted in society, as opposed to, sort of, the more controlled conditions of the lab or academics? I do. I think there's a bigger issue of the progress in science and technology going faster than the rate at which societies can become wiser in how to use technology, and become wiser in general. Some people call this the wisdom race: we have to make sure that our collective intelligence, our collective wisdom, grows fast enough to make sure that those technologies are not used in bad ways. One way that I like to think of it is, we're building these more and more powerful tools, and of course we're making those tools more and more easily used by individuals, companies, governments. But as those tools become more powerful, it means that those individuals or organizations can also misuse them, and the only way around that is to have a more just society, where people care more about the group and not as much about their individual interests. Machine learning has made some incredible progress, particularly in some areas that have long been
resistant to AI. So from where we are now, what do you see as the primary challenges ahead? What are the major hurdles that remain in order to increase the sophistication? So, current systems are very stupid, and we can see that when we
"geoffrey hinton" Discussed on The Business of Fashion Podcast
"You know, it's like each person brings this beautiful web of information that you're going to think about, right? And I think that's a really interesting thing. And I love it when people have seen lots of products and design work; when you see them get excited by things, that makes me feel very happy about what I do. Dave, how significant do you think it is that you and Virgil, who were both considered to be sort of crown princes of streetwear, have kind of moved away from that idea? You know, I think that shift is a natural progression, as is how you feel about things. And, you know, Mike, who was always super supportive, one thing when we spoke as I was leaving, it was like, you know, he said, it's a very different thing, it is a very different thing. And he was very sort of sweet and gave me an interesting perspective on the brand. And I think that, you know, people saying things is a very different way of working, a very different feeling, two places. When you're given a couture atelier, when you're given the options available in a couture atelier, do you find that you, I mean, I guess I'm thinking there's probably nothing you can't do, really. You know, for the next collection we have a lot of interesting things coming, and, you know, the teams, they show us things and ways of making things that, somewhere along the line, in working with each house, or different couture houses, they will look at a sketch, and actually, this is really interesting, they'll make a toile of it and show it to you, and there's a real conversation going on. And it makes you think in a different way, and I think, you know, as a designer it's nice to have different challenges and to think in a different way, to think about myself when I do my work.
I think about the people that buy the clothes, I think about the brand. So did you ever imagine, in your wildest dreams, that something like this would happen to you when you graduated from Saint Martin's? No. I mean, it's funny, because I did my graduate collection, and then I met Lee shortly after, and he was one of the guiding lights, always giving me the confidence to just go. I never think small, and I always felt that I would do what I do. That's how I am. Because I'm obviously very organized and planned when working, but I didn't have a game plan; just, you know, things happen. It felt like the right time to do something else, there were opportunities, and it just happened. I didn't know what it was, I just do it. Alchemy. Yeah, exactly. What did you learn from Lee? We didn't talk fashion very much; we talked about other stuff. He would always give these soundbites that were really good. I remember him talking to someone about a certain situation, and he just gave her the most amazing, sound advice that just changed the way she talked about it, in an instant going from being a bit sad about something to then just really being spoken to. It's kind of an incredible thing when people can do that, and I think that's what I learned from him, really. And also just the beauty that he brought into the world, you know, the way that he worked. And it's funny, because I've seen Sarah recently, I've seen Trino recently, and it's really nice to be back in touch with them as well, and to see what Sam's doing. These are the people that are around, and just, like, how things move on, you know. It's like, you know, you were the person that told me Lee had killed himself, and it was like, in a cab in New York.
And you called me when I was getting to breakfast, when I was at Dunhill, and I immediately drove to Katie's, because obviously they were very close, especially Katie and him, and it was like, there was that person that you could grieve with, you know. But were you surprised by what happened subsequently with him, that he became the sort of godfather, this don of fashion? Did that surprise you? Not really, because I think he deserves that, doesn't he? You know, it's interesting to think how things would be if certain people were still around. Yeah. People ask me this question always, you know, this fascination with eighties couture, and these people like Leigh Bowery, and how that would have been imitated over time, and how it changed. It's like all these people who went too soon, because of things like AIDS, and you wonder what the world would be like if they'd been around. So, I mean, you've set yourself up as a pretty significant keeper of the flame. You have a collection of clothing, of British fashion from the eighties, that is, like, a museum-worthy collection. And I think of all the DJs, all the music, and your very careful approach to pop culture; it is really interesting when you look at your work in fashion as well. I have to, and I feel the two match. I mean, my passion for these designers from that time, I think, is, I wasn't there at the time, and it just seems, looking at the imagery, seeing the clips that Geoffrey Hinton, you know, would put in the shows, and the fact he filmed everything, it just seemed like, he gave me something transferred to DVD, and it was this, I just wanted to be there. It was kind of like the energy that came off it was so powerful. So I think that's really.
"geoffrey hinton" Discussed on Trailblazers with Walter Isaacson
"It took a long time for the potential of machine learning to be realized. There wasn't enough meaningful data to feed into the computers' neural networks, and those computers weren't powerful enough to process the data that was available. Most researchers abandoned the field, but not Geoffrey Hinton. As a graduate student at the University of Edinburgh in the 1970s, and later as a professor at Carnegie Mellon and the University of Toronto, he remained convinced that neural networks were the key to unlocking the mystery of artificial intelligence, and over the past ten years he's been proven right. Computers now have the processing power to crunch the enormous amounts of high-quality data that is being generated, mostly by our digital selves, and researchers have taken advantage of breakthroughs in neuroscience to build increasingly sophisticated artificial neural networks that aim to mimic how the brain processes data. Now a research fellow at Google, Hinton is considered the godfather of a subset of machine learning called deep learning, which has been the foundation for most of the breakthroughs in AI over the past few years. If you have a smartphone, it's recognizing speech using neural networks, for sure.
And it's working really well; it's working much better than it used to before they used neural networks. If you have a photo collection on a computer and you want to know if you've got a photo of a dog, or if you've got a photo of people hugging, you can use a neural net to find that photo. If you want to translate Chinese to English, the best system on the web is Google Translate, which uses neural nets. And it's very clear that for things like reading medical images, right now, in a few domains, neural nets are as good as people, and over the next few years those will get better than people. The easiest way of thinking about deep learning is that it can recognize patterns in large sets of data and determine probabilities based on those patterns. The more data the neural nets can process, the more patterns they see, and the more accurate they can be. Take, as a very basic example, how a computer determines whether an email belongs in your inbox or your spam folder. Hilary Mason is a general manager of machine learning at Cloudera, a software platform company based in Silicon Valley. So you get examples of emails that are spam, and you get examples of emails that are not spam. And then you think about the significant features of those emails that indicate they might fall in one category or another. So maybe if it uses certain words, you know, like "Nigerian prince", perhaps it's more likely to be spam. And then what you do is train the system to learn from that historical data, which is what a model is, and the model will learn things like: if the email is too long, over two thousand characters, say, it's, you know, ninety percent more likely to be spam than not. And then for every new email, where you don't know if it's spam or not, it's just popped up in your inbox, the system makes the calculation, based on that learned set of features and those probabilities, as to the probability that this new message is spam or not. And then there's some threshold above which we put it in your spam folder.
So we say, if it's more than eighty-five percent likely to be spam, based on this calculation, it goes in your spam folder. This is how the system works, and these probabilities become incredibly powerful at predicting what is likely to be true. According to Geoffrey Hinton, one of the reasons deep learning can be so effective at making predictions is that it can simulate the way humans often think: illogically, intuitively, and unpredictably. The key was getting neural nets to have intuition; it wasn't logical reasoning at all. If you want to look at an image and produce a caption for the image, we're able to do that, but we don't know how we do that. We can't write a whole bunch of rules for doing it, so there's no way to program a computer directly to do it. What we can do is show the computer examples and have it just kind of get it, and that's a new way of getting things done: having a machine that
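Hilary Mason's spam example, with its 85 percent threshold, can be sketched as follows. The per-feature probabilities are hand-picked assumptions here; in the system she describes they would be learned from historical labeled mail.

```python
# Toy spam scorer: two features from the transcript ("Nigerian prince"
# wording, length over 2000 characters) and an 85% routing threshold.
# The numeric weights are assumptions for illustration, not learned values.

def spam_probability(email):
    score = 0.5  # prior: no evidence either way
    if "nigerian prince" in email.lower():
        score += 0.4  # suspicious wording feature
    if len(email) > 2000:
        score += 0.1  # the "too long" feature
    return min(score, 1.0)

def route(email, threshold=0.85):
    """Put the message in spam only above the probability threshold."""
    return "spam" if spam_probability(email) >= threshold else "inbox"

print(route("Greetings, I am a Nigerian prince..." + "x" * 2000))  # spam
print(route("Lunch at noon?"))                                     # inbox
```

A real filter would combine many learned features the same way; the final step, comparing a combined probability to a threshold before filing the message, is exactly the mechanism described in the interview.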
"geoffrey hinton" Discussed on Talking Machines
"Got rid of it. And yet now, fast-forward ten years, and deep learning is all the rage. It's one of the most popular topics of research in academia and industry, deep learning technologies are being acquired for millions of dollars, and in the media and press it's often reported as the new AI, much like in this piece in Scientific American. So what happened? Well, what I thought I'd do today is give you my perspective on the last ten years in deep learning, that is, from its emergence, and how it evolved and progressed through the years. I'll talk not just about the different technology breakthroughs, but also focus a bit on how the community itself evolved and progressed. So for me, things really started in 2006. The thing that really influenced my research was this paper by Geoffrey Hinton, who you see here, from the University of Toronto, with Simon Osindero and Yee-Whye Teh. In that paper, Geoffrey Hinton was proposing a new approach to artificial neural networks, and what was really exciting about this work is that it achieved deep artificial neural networks that would rival some of the more standard, more popular machine learning methods of the time. So this really sparked a new hope that the approach of using artificial neural networks might actually be successful for achieving AI. This was a new hope, so it was new, so people essentially came up with a new name to refer to that type of research, and they called it deep learning. So, for instance, the next year I coordinated with some of my colleagues the first deep learning workshop. We tried to organize it as part of the Neural Information Processing Systems conference, which is one of the largest machine learning conferences, and so we submitted a proposal for the workshop, but it was rejected. However, Geoffrey Hinton just wouldn't have it, so he put together the resources necessary for us to actually organize it as a parallel event, and it was a huge success.
We attracted about ten times as many people as other official workshops that happened during the conference. So it was clear there was a lot of excitement in academia for the potential of deep artificial neural networks, and in the next three years you started seeing an emergence of more and more papers on artificial neural networks, referred to instead under the name of deep learning. Now, there were a lot of papers published, but the progress was relatively slow. It turns out that executing artificial neural networks on regular computers is slow, and so in about two thousand and ten, several different labs figured out a way of executing artificial neural networks not on standard computers, but on graphics cards, on GPUs, the same graphics cards that we use to generate crisp graphics for computer games. So this marks, for me, the first major way in which the deep learning community has been changing: it has become way better at exploiting computational resources. What this meant is that a deep learning research lab could essentially build its own mini supercomputer for just a few thousand dollars. And in fact, it's that year that Geoffrey Hinton and his lab produced the first results suggesting that deep learning might revolutionize speech recognition research. This came as a big surprise, and in fact the speech research community had some difficulty believing some of these results, or at least they were harder to publish initially. But now deep learning is present in a big way in speech recognition research, and it's also part of the technology behind Siri and Alexa. Then in two thousand eleven, we start seeing the emergence of a lot of really good, high-quality software libraries for supporting deep learning research, like Theano and Torch and a few others. And to me, this marks the second way in which the deep learning community has been changing over the years.
It now has a new dedication towards creating high-quality, robust, easy-to-use, open and free code libraries to support deep learning research. So it used to be that artificial neural networks were somewhat difficult to use and implement, but now it's actually quite easy to get started by leveraging the work of other people through these open source libraries. The deep learning community has made performing deep learning research much less like carpentry and much more like playing with Legos. In two thousand twelve, Geoffrey Hinton prepares the next revolution with deep learning, this time in computer vision. So he and his lab participate in a computer vision competition; the challenge here is to design a system that can read a photograph and identify what are the objects and animals in this photograph. And so the results come in, and it turns out that their system totally crushes the competition and reaches accuracies that were never seen before. Now, this time the breakthrough was undeniable, and in fact computer vision is now also a field that's in large part dominated by deep learning methods. So in two thousand thirteen, there starts being a lot of excitement around deep learning methods, and that excitement is about to transition to industry in a big way. So for instance, that year, with my colleagues from two thousand seven, we decide to organize another edition of the deep learning workshop, and this time our proposal is accepted. And in fact, not just that, but we get folks from Facebook that reach out and say that their CEO, Mark Zuckerberg himself, actually wants to be present and participate. So let me try to convey how unusual this is: organizing an academic workshop and having Mark Zuckerberg show up is kind of like organizing a party with your personal friends, and then, well, look at that, Mark Zuckerberg is here. This is a total surprise.
And not just that: for someone like me, who did research initially in my PhD where I could barely get my colleagues in other topics of machine learning interested in artificial neural networks, this is almost beyond comprehension. In fact, the interest from industry is as high as ever, and also at that workshop we see the first demonstration, by a little-known startup called DeepMind Technologies, of their system that is able to play Atari games at the level of humans, and in fact, less than a year later, DeepMind was acquired by Google. Also in two thousand thirteen, the International Conference on Learning Representations is created. I've had the honor of co-chairing that conference in the past two years, and I mention it for two reasons. The first is that this conference is now mostly known as the deep learning conference, and so that means that in two thousand thirteen the community is big enough and vibrant enough that it can sustain its own conference. The other, most important reason is that this conference has a very unique reviewing model for scientific work. Authors are asked to submit their work publicly right away on a website known as arxiv.org. So now the work is accessible to everyone, and then the whole deep learning community is invited to review and criticize this work right away, for everyone to see. So to me, this marks the third way in which the deep learning community has been changing and evolving over time: it aggressively promotes discussion and the open criticism of deep learning results. And now, in fact, this approach of, as soon as you have results that can be presented, putting them on arXiv and then discussing them openly, on social media for instance, is vastly adopted by deep learning researchers, instead of waiting for the seal of approval from conferences and journals. So this is great for science: we get to iterate over ideas much more rapidly.
This is not so great for scientists, because any day can be a day where you discover that some other lab has executed the research idea you wanted to work on. Then in two thousand fourteen, we start seeing that deep learning systems are very good with text. So for instance, we see the first examples of deep learning systems successfully performing machine translation, so taking in a sentence in a foreign language and producing an English translation. We also see systems that instead take in an image and produce an English description of what that image is. And this is a really interesting example, because that year, within a few months, four different labs proposed more or less the same idea at about exactly the same time, independently. So this really illustrates how rapid innovation becomes at this time. Thanks to GPUs, graphics cards, and thanks to really good open source software, we get to iterate and produce results very rapidly, and then those are communicated almost immediately for everyone to digest and dissect, laying the groundwork for the next innovation. In two thousand fifteen, we start seeing deep learning systems that, instead of perceiving, taking as input some data and making some predictions, actually can generate or synthesize visual content. So I have an example here of the neural style transfer algorithm, based on deep learning, which can read a photograph and also a painting, and then produce a painting of that photograph using the style of the painting that was provided. But also, we're now seeing a lot of work on generating entirely new visual content, much like in this work from OpenAI, reaching levels of realism we haven't seen before. And this goes even beyond visual content: we're seeing, for instance, recent work by Google DeepMind on generating audio, generating speech and generating music. And also, we've seen in two thousand sixteen, perhaps you've heard about this, the DeepDrumpf Twitter bot.
That's powered by deep learning, where a deep learning system was trained on Donald Trump's tweets and was able to generate new tweets that might as well have come from him. Now, this might make it sound easier than it actually was to achieve, but this is actually an impressive feat. But two thousand sixteen will almost certainly be remembered as the year that Google presented their AlphaGo system, which competed against one of the world's best Go players, Lee Sedol, and won. And in fact, this came as a big surprise for many in the community; many expected it to take many more years to actually achieve this. But today AlphaGo, amongst its human peers, is recognized as the second best Go player in the world. So we went from deep learning systems that can take as input an image and detect simple symbols in it, to deep learning systems that can both perceive and synthesize very complex content, much like photographs, speech, text or game strategies. So we've come a long way, but there's still a long way to go before we reach real AI, and I'm quite optimistic that deep learning will play an important role in that quest. Not just because deep learning technology is powerful, but also, and I want to leave you with this, because the deep learning community has really structured itself to facilitate innovation very quickly. It has done this by first becoming much better at exploiting computational resources, using graphics cards. It has become better at producing tools for performing deep learning research, with very high-quality open source code libraries. And it has become really good at discussing and sharing information about how to do deep learning, and also what is the current state of the art and the recent breakthroughs, opening up the discussion to everyone.
We've come a long way in these three aspects since I did my PhD, and I think we can go even further. We're starting to see on social media people even sharing preliminary results, or early implementations of ideas, or just ideas for other people perhaps to implement. And so this hints at a future where different research labs might actually much more openly and collectively work to make progress towards AI. And then hopefully we'll reach a day where it actually doesn't matter which college you decide to go or not to go to. Thank you. Well, that is it for this episode of Talking Machines. Hope to see you all at ICLR, and tune in next episode.
"geoffrey hinton" Discussed on Daily Tech News Show
"Twist dot org. If you don't know already, go check it out. And folks in Portland, you're going to get a chance to see us live coming up here in a while; we're going to talk about that a little later too. What day is that? Next Wednesday, April third, one week away. All right. So let's start today's show. We actually have prepared artificial intelligence three ways for you today. We'll start with a few other tech things you should know. Apple has acknowledged problems with its third-gen butterfly keyboard. In a statement to the Wall Street Journal's Joanna Stern, Apple wrote, quote, we are aware that a small number of users are having issues with their third-generation butterfly keyboard, and for that we are sorry, unquote. We are sorry. I love these 'small number' statements; a small number out of millions of people still means millions of people. Fix this thing; it's holding up the butterfly tech in the new desktop version of the wireless keyboard. Goldman Sachs International CEO Richard Gnodde told CNBC the company is exploring the idea of offering the Apple Card credit card outside the US. Gnodde says, quote, absolutely we'll be thinking of international opportunities for it, unquote. There is some confusion around wrong-button pressing regarding the European Copyright Directive. We talked about it yesterday; it passed Tuesday by a margin of seventy votes. That seventy is important to keep in mind. Now, prior to the vote on the entire directive, there was a motion to make amendments; that motion to consider amendments was defeated by five votes. Now, amendments could have, but wouldn't necessarily have, resulted in proposals to delete articles eleven and seventeen (seventeen is the new article thirteen). At least two and possibly three Swedish members of the European Parliament pressed the wrong button when voting on whether to allow amendments or not; two other Swedish MEPs that might have voted for amendments were absent.
So it's not certain whether that would have been enough to allow amendments, and even if there had been amendments, it's not certain that there would have been enough votes to delete articles eleven and thirteen, and certainly it wasn't enough to sway the seventy votes that allowed the Copyright Directive to pass. But wow, make sure you're pressing your buttons right. That's all. We've got a lot of stuff to talk about on the show, but let's start with some congratulations. Geoffrey Hinton, Yann LeCun and Yoshua Bengio have won the Turing Award, which comes with one million dollars, for their work on neural networks, from the Association for Computing Machinery, the world's largest society of computing.
"geoffrey hinton" Discussed on KCBS All News
"Center for high-tech jobs, and we're still adding them faster than anywhere else in America. A report out this morning from the Computing Technology Industry Association says there are more than seven hundred fifty thousand people working in tech-related jobs in the Bay Area. Now, the next biggest concentration of tech jobs would be in the New York City area, which has a much bigger overall population but about fifteen percent fewer tech jobs. The report found no signs of a slowdown. The San Francisco-Oakland metro area and the San Jose-Sunnyvale-Santa Clara metro area ranked first and second in the country in the number of tech jobs added last year. One more factoid: adding it all up, one out of every eleven jobs in the state of California is in the tech industry. And speaking of tech, Allison: people call it the Nobel Prize of computing, and the Turing Award is out this morning. The winners, there are actually three of them this time, all of them born in Europe: London-born Geoffrey Hinton, and then two men from Paris, Yann LeCun and Yoshua Bengio. In fact, LeCun was a student of Geoffrey Hinton. All of this is around neural networks and artificial intelligence. The Turing Award was introduced back in nineteen sixty-six and includes a million-dollar prize, and those three scientists will share it. Hinton was out there, often quite alone, on the whole idea of neural networks. He told the New York Times in an interview that he and his PhD adviser would sometimes fight over it, and he said, we met once a week; sometimes it ended in a shouting match, sometimes not. But the idea of neural networks is that you can take a mathematical system and it can learn how to do specific things by looking at lots of data. One classic example is going through zillions of phone calls, and that way the system can learn to recognize the spoken word. So the Turing Award, out today: Geoffrey Hinton, Yann LeCun and Yoshua Bengio.
By the way, these days Hinton is at Google, and LeCun works at Facebook. KCBS news time, eight forty-three.
Three Pioneers in Artificial Intelligence Win Turing Award
"geoffrey hinton" Discussed on Grumpy Old Geeks
"And is this kind of like the Tide Pod challenge? When you watch Bird Box, you'll get it. Okay, I'll check it out. Yeah, just avoid reading anything about it before you see it; it is worth seeing. It was extremely well done. The acting was phenomenal, and the concept was decent; it just really needed to go darker. Yeah. No. We've got a lot of media candy for the next show, because I watched a ton of stuff, and thank God The Orville is back, because it just makes me happy. It just makes me happy that The Orville is back. I do have one other bit of follow-up here, just talking about AI. We had Kai-Fu Lee on the Jordan Harbinger Show last week. He was the head of Google China, and he's an AI expert, right? A very smart guy. I mean, supremely fucking smart when it comes to AI. I thought it was one of our better shows, because he just talks about what it really is. And he wrote the first Othello AI engine; this guy is that smart. I mean, he's no joke. And he talks about general AI, you know, artificial general intelligence, which we always say is what AI should be the definition of, goddammit; everything else is machine learning. But he also talks about all of the breakthroughs in the history of AI, and how the machine learning breakthroughs came about. He's like, there's only been a couple of breakthroughs in the history of this whole field, and machine learning is the newest breakthrough, and that's been around for a while. And he's just like, AGI is just not coming anytime soon. And also, there's another article on VentureBeat where Geoffrey Hinton and Demis Hassabis say AGI is nowhere close to becoming a reality. And it seems like nobody in the industry knows that. And we've been saying it for years. So yeah, we're smarter than all of the experts in AI. So take that, people; take it, put it in your pipe and smoke it. And final follow-up.
I know we got a lot of follow-up this week, but FU, M1 Finance, dammit. What did they do? I got an email that's like, you got ten dollars because somebody signed up with your link. The next email I got, ten minutes later, was: hey, it's so cool that you can send people to M1 Finance; we're going to raise the rate to twenty-five dollars. I want my fifteen bucks, dammit. So yeah, if you go to any of the show notes where we talk about M1 Finance and sign up with them, then now you get twenty-five dollars and we get twenty-five dollars. But it was just like boom, boom, right back to back, and I'm just like, damn it, why didn't you give me twenty-five dollars? I want my fifteen dollars. I want my fifteen dollars. The news. Tesla is back in the news. Okay, they have figured out how to fill that board seat that Elon Musk is not allowed to have any more, right? They hired Larry Ellison, the head of Oracle, right? I don't know if you know much about Larry Ellison. He's batshit crazy. I mean, he's certified batshit crazy. His entire house is like a Japanese, you know, shogun warrior garden, because he thinks that that's what he needs to prepare himself for the battle against the other tech moguls. Go back to the history of Larry Ellison and just check out his house. He's a nutjob. He's a total nutjob. Sometimes you feel like a nut, Jason. Look, they're just replacing one with another. That's fine. Look, Larry is a good, stable choice, regardless of what you think about his home furnishings. I guess they're still there. So yeah. You know, he seems to know what he's doing in terms of running a business, regardless. Again, like I said, he's a businessman, so we'll see how it goes. I mean, I'm sure we'll see a stock bump based on that, because everybody kind of wants Elon to just take a step back. And they also got this woman from Walgreens, which is very strange.
So yeah, that's not even like a tech business; this is business business. It's literally, where's our P&L? Let me look at the spreadsheet.
"geoffrey hinton" Discussed on DataFramed
"Segment. Now it's time for a segment called Data Science Buzzwords, with DataCamp curriculum lead Spencer Boucher. What have you got for me today, Spencer? So today we're going deep on deep learning, one of everybody's favorite data science buzzwords. Everybody's doing it, or at least they claim to be doing it. What exactly is deep learning, Spencer? Well, in a certain sense, Hugo, the answer to this one's pretty easy: deep learning is neural networks. That begs the question, though: what's a neural network? So a neural network is just one type of machine learning model that maps a set of inputs to a set of outputs, just like any other. They get their cool name because they're loosely inspired by the way that human brains work, as a network of neurons arranged into connected layers. A particular input from one layer will either activate or deactivate the neurons in the next layer. If you're familiar with logistic regression, you can imagine a neural network as just a bunch of logistic regressions stacked on top of one another. So where does the 'deep' part come in? So one of the most well-known papers referencing deep learning is Geoffrey Hinton's "Learning Multiple Layers of Representation," which was published in two thousand seven. Hinton was literally just referring to the geometric depth of a particular class of neural network. A lot of the cooler advancements in machine learning since then, like convolutional neural networks and recurrent neural networks, essentially boil down to strategies for setting up the geometry of the neurons in a neural network, and because they all tend to involve many, many layers, the phrase 'deep learning' has stuck. These etymological waters have been muddied a little bit in recent years, though, because popular implementations of neural networks, like word2vec for example, are often referred to as deep learning even though they actually only consist of two layers of neurons.
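That "stacked logistic regressions" picture can be made concrete with a minimal NumPy sketch. Everything here is illustrative only, not from the episode: the layer sizes, the random weights, and the sigmoid activation are just assumptions chosen to show the idea that each layer is a batch of logistic regressions feeding the next.

```python
import numpy as np

def sigmoid(z):
    # The logistic function: squashes any real number into (0, 1).
    return 1.0 / (1.0 + np.exp(-z))

def layer(x, weights, bias):
    # One "layer" is just a batch of logistic regressions:
    # each output unit computes sigmoid(w . x + b) with its own w and b.
    return sigmoid(x @ weights + bias)

rng = np.random.default_rng(0)

# A tiny 2-layer network: 4 inputs -> 3 hidden units -> 1 output.
w1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
w2, b2 = rng.normal(size=(3, 1)), np.zeros(1)

x = rng.normal(size=(5, 4))        # a batch of 5 input examples
hidden = layer(x, w1, b1)          # first stack of logistic regressions
output = layer(hidden, w2, b2)     # second stack, giving one score per example

print(output.shape)                # (5, 1): a probability-like score each
```

A "deeper" network in the sense discussed above is just more of these `layer` calls chained together; training (adjusting the weights by backpropagation) is what the sketch leaves out.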
It's honestly up to you whether you want to be a stickler for the term or play it a little more loosey-goosey. So why are people so excited? Mostly just because you can do so many impressively cool things with them. One of the first big waves deep learning made in popular culture came as far back as two thousand twelve, when a classifier was trained to recognize faces and cats from YouTube videos, even without having any labeled data on either faces or cats. Right now, neural networks are still at the bleeding edge of our coolest technological breakthroughs. They can generate descriptive captions for images from scratch; they can understand human languages, both written and spoken, and even translate them back and forth; and these days, they can even drive your car. Actually, some modern versions of deep learning are Turing complete, which means that they're really starting to look more like a programming language than any simple mathematical model. So, all sunshine and roses, then? Well, even a tool as powerful as deep learning comes with a set of downsides, of course. For one thing, they generally require a lot of data, which isn't always available or very cheap to come by. And as you know, Hugo, the incredible complexity that powers a deep neural network also makes it really difficult to understand. It can be hard to determine exactly why a neural network is working at all, or the reasons that it may have made a particular decision, although progress is being made in the interpretation department, slowly but surely. For applications that require transparency and understanding, though, you're often still better off with a more interpretable algorithm, like a tree-based method. Thanks for helping to demystify deep learning for us, Spencer.
Listeners, if you want to find out more about deep learning, make sure to check out next week's episode with Michelle Gill, a deep learning expert at NVIDIA, an artificial intelligence company that builds GPUs, the processors that everybody uses for deep learning. After that interlude, it's time to jump back into our chat. It'd be nice to know what type of role data science actually plays in these challenges for you. Yes. Oh, yeah. Good