Tunzi, Kobe, Edward discussed on This Week in Machine Learning & AI

Automatic transcript

Because when the horse is born, that network has not experienced the world, yet it already carries information, the genetic code that wires it, and that gives it a head start. Then experience fine-tunes it and improves on it, but there is a very strong inductive bias to begin with. If we understand how to read out this inductive bias, then we will be able to build smarter machines. Now, to add another complexity to this: the learning algorithm may be tightly entangled with the architecture. You may have the perfect architecture, but if you use a different objective function or a different learning algorithm, then you will not get the right performance or the right model. So these things are related.

How are we doing on this task of learning from the brain and applying it to deep learning? I don't get the sense that the most important things we've learned about making deep learning work, like dropout and learning-rate tricks and things like that, came from biological inspiration.

Yeah, this is a very interesting question, also at the larger scale, if you want to elaborate on it. Say you're trying to understand the brain, say visual perception. In some ways, a good test would be whether we can build a system, based on what we think we have understood, that does vision. Because if I say I understand how vision works in the brain, from experiments in humans, animals, whatever, and I distill principles, then my long-term goal, or the field's, should be: can we now assemble those principles into a system that mimics the behavior of the system we're trying to understand? That is essentially what the goal is.

You're saying, from a neuroscience perspective: we've got these models of cones and layers and all these things; forget about this deep learning stuff, we should just take our models and implement them, and they should be better, in theory.

In theory, if we really understood it. If we fully understood how vision works, we should be able to reverse-engineer it. We should be able to take these principles, put them together, and the result should perform the tasks that the human visual system does. That's a very stringent test of our hypotheses; you have to test them like that. Now, I'm not saying it's going to be achieved in our lifetime; who knows, it could take many, many years. But that's the goal of neuroscience.

Maybe not necessarily, but we're far away from that goal, right?

I mean, there are only toy examples where we've taken these principles and shown that we can achieve robust vision.

Is that based on gaps in our understanding, or on our ability to implement what we understand?

I think it's based on two things, and this is the more pessimistic view: it's based on our understanding of the principles. This may just be a very difficult problem. There's a lot of technology, as I said earlier, that gives us incredible capabilities to study the brain, but fundamental principles are much harder to understand, and we get lost in details. And we don't want to just copy, because there is this difference between understanding and copying. I mean, if we could take the brain and copy it piece by piece, maybe we would achieve...
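A minimal sketch of the two ideas discussed above, a strong innate inductive bias fixed before any experience with learning only fine-tuning a readout on top, and the point that the objective and optimizer still matter, might look like the following. The architecture, the random frozen filters standing in for the "genetic wiring", and all sizes are illustrative assumptions, not anything described in the episode.

```python
import torch
import torch.nn as nn

class InnatelyWiredNet(nn.Module):
    def __init__(self, num_classes: int = 10):
        super().__init__()
        # "Genetic wiring": a convolutional architecture fixed before any experience.
        # Random, frozen filters stand in for structure specified by the genome.
        self.innate = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        for p in self.innate.parameters():
            p.requires_grad = False  # not shaped by experience in this sketch
        # "Experience": only this readout is adjusted by learning.
        self.readout = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.readout(self.innate(x).flatten(1))

model = InnatelyWiredNet()
# The choice of objective and optimizer still matters, echoing the point above
# that architecture and learning algorithm are entangled.
optimizer = torch.optim.SGD(model.readout.parameters(), lr=0.1)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(8, 1, 28, 28)   # stand-in batch of "experience"
y = torch.randint(0, 10, (8,))
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
```

With the same frozen wiring, swapping the loss or the optimizer would still change what the readout learns, which is the sense in which architecture and learning algorithm cannot be evaluated in isolation.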
