Robert Yang, Daniel Dennett, and John discussed on Brain Inspired
Automatic transcript
But when we've moved to in silico neurons and in silico connections, with those kinds of precision and so forth, then it's, for me, an open question whether you can make progress. I would still make the empirical bet, as John did, that you need AI to shift as well. It just doesn't make sense — it's hard to see how that kind of approach could yield success. So I would make that empirical bet for AI as well.

What is the shift that needs to happen? Because you still have to build a network — you have to implement it in some architecture, right?

Right. So I think — and actually, I think there are some analyses of this; I can think of Robert Yang's work, for example, which is really great — that analyze neural networks from a dynamical systems perspective, a state-space perspective, seeing how different regions of the state space have different structures that can support different types of cognitive function. And I think that's the shift you're going to need. What you're going to see is potentially a shift away from — I don't work in AI, so I'm going to be very speculative right now — a shift away from propagating error back through the different nodes as a result of, say, performance on a task. It depends on what you're optimizing, what your error function is, but you could imagine that instead of doing that, we're going to try to propagate error on the basis of — I don't know — trying to replicate certain kinds of state space, let's say, or something like that. So that would be a real, concrete difference: the error function that defines the error that gets propagated back is going to differ. It's going to change from, like, a task optimization to, let's say, a dynamical optimization, or something like that. But I don't work in this field, so I don't know whether that's happened — I don't know who's worrying about this now. But you know, you had Uri Hasson on your program, right, where he talked about —
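The contrast being gestured at — a task-optimization loss versus an objective defined over the geometry of the network's state space — can be sketched minimally. Everything here is a hypothetical illustration, not anything proposed in the episode: the shapes, the choice of covariance as the "state-space structure," and the names are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for a recurrent network's hidden-state trajectory
# (time steps x units) and a linear readout; shapes are illustrative only.
hidden = rng.standard_normal((200, 50))
targets = rng.integers(0, 2, size=200)
logits = hidden @ rng.standard_normal((50, 2))

def task_loss(logits, targets):
    """Standard task optimization: cross-entropy on the network's outputs."""
    z = logits - logits.max(axis=1, keepdims=True)
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(targets)), targets].mean()

def state_space_loss(hidden, target_cov):
    """The alternative kind of objective sketched in the discussion:
    penalize mismatch between the geometry of the state space (here,
    crudely, its covariance) and a desired structure, rather than
    penalizing task error directly."""
    cov = np.cov(hidden, rowvar=False)
    return np.linalg.norm(cov - target_cov) ** 2 / cov.size

target_cov = np.eye(50)  # e.g. demand decorrelated state-space dimensions
print(task_loss(logits, targets))
print(state_space_loss(hidden, target_cov))
```

The point of the sketch is only that the quantity being differentiated and propagated back would change: in the first case the gradient flows from output errors, in the second from the shape of the hidden-state cloud itself.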
— you know, this direct-fit approach. And he said you could do a lot of intelligence — as Daniel Dennett calls it, competence without comprehension. You can show intelligent behavior by basically just training these networks: you have such a big data sample that you can get by in life just through interpolation, right? But he admitted, in his paper and on your show, that when it comes to cognition — this flexibility, this extrapolation — he had no idea. He just admitted that this approach would not get you there, right? So in other words, we're at the same aporia whether we're in neuroscience or in AI. I think it's delicious that there's exactly the same gap in both professions: we don't know how to get this rich representation, either to build it in AI or to understand it in neuro. And I don't think that's a coincidence — I think it's because we're up against the same problem. And I don't know who's going to win the race. Is it going to be some sort of evolutionary algorithm at OpenAI or DeepMind, where suddenly, "Oh my god, this thing is thinking"? Or is it going to be a more principled approach, using the Hopfieldian approach, where one begins to see what's required to get thinking? I don't know. One thing I can tell you: whether it's the Hopfieldian approach in big brain areas or in AI, I think they're much more likely to succeed than going down into an insect, doing Sherringtonian cognition, finding some kind of circuit motif that you can call microcognitive, and then somehow imagining you can extrapolate from that. So of the three projects, I think we would argue for doing principled Hopfieldian neuroscience on primates, for example, or doing work in AI worrying about System 2, like —
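The "getting by on interpolation" point can be made concrete with a toy sketch: a pure memorize-and-interpolate fit does fine inside the range of its (large) training sample and fails badly outside it. The function, the sample size, and the nearest-neighbor "direct fit" here are all hypothetical stand-ins chosen for illustration, not Hasson's actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

# A dense sample of a simple underlying function, standing in for a
# big lifetime of experience over a limited range.
x_train = rng.uniform(-1, 1, 2000)
y_train = x_train ** 2

def direct_fit(x):
    """Pure interpolation: answer with the nearest memorized example."""
    idx = np.abs(x_train[None, :] - np.asarray(x)[:, None]).argmin(axis=1)
    return y_train[idx]

x_in = np.array([0.05, -0.4, 0.73])   # inside the training range
x_out = np.array([2.0, 3.0])          # outside it: extrapolation

err_in = np.abs(direct_fit(x_in) - x_in ** 2).max()
err_out = np.abs(direct_fit(x_out) - x_out ** 2).max()
print(err_in, err_out)  # small in-range error, large out-of-range error
```

With a big enough sample, in-range behavior looks competent without any comprehension of the underlying rule; the gap only shows up when a query falls outside what was sampled.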