Google, Adobe, Seven Nanometers discussed on This Week In Google
That's what I want you to do. What I'm going to— well, okay, I apologize, because we just got all this information. They did announce a new Exynos, which, according to AnandTech — and of course this would be for the international version of the Note — is not a big difference from the existing one.

No, it is. It's seven nanometers. The only— I was going to say, the only difference is— it's actually a really big difference, and it could be bad, I don't know. It's a seven-nanometer chip built using EUV, which is extreme ultraviolet lithography, a way to deal with leakage on the chips. But it's a brand-new manufacturing process, and that means it could be bad. But yeah, it might be easy. So I was going to say, it is just a smaller version — they moved down from the eight nanometer.

It's also a different— yeah, it's the same chip, made at seven nanometers. So I don't— I mean, who knows? No one knows; it's new. They kept the cores, like the frequencies, mostly the same, so it's not like you're going to see a sudden power boost here. I think— let's see, what did they change? Yeah, it's basically the same, except the middle cores are going to go from 2.31 GHz to 2.4 GHz, for those of you who are counting all your hertz. They're going to put in a dedicated AI DSP, kind of like Google does with their Pixel phones for photography. Right now they have the Mali, the ARM Mali core — they haven't built their own GPU — which is probably good enough. I mean, even though Google's got their dedicated processor, I don't think we know the workloads yet on the phones that are going to need this. Like, some of the things they're showing might benefit from a slightly different chip design, but nothing that new, really. Remember, Adobe was able to do this with an iPhone — the 3D mapping — four years ago. Yeah.
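For those counting hertz: a quick back-of-the-envelope check on the quoted middle-core clock bump shows why it's "not like you're gonna see a sudden power boost." This sketch just does the arithmetic on the two figures mentioned above; nothing here is from a spec sheet beyond those numbers:

```python
# Percent uplift from the quoted middle-core clocks (2.31 GHz -> 2.4 GHz).
old_ghz, new_ghz = 2.31, 2.4
pct_uplift = (new_ghz - old_ghz) / old_ghz * 100
print(f"{pct_uplift:.1f}% faster")  # roughly a 4% clock bump, all else equal
```

Under 4% on the clocks alone, so any real-world gains would have to come from the process change rather than the frequencies.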
I'm thinking about things like, you know, handling these— zoom in on the mic— I'm trying to think about, like— and you've got the fingerprint recognition. I don't know: how do you train it? Does that model just get executed, or are you training it?

One thing that is new is there's a time-of-flight sensor on the back of the phone. Does the iPhone have a time-of-flight sensor for tracking your face, or do they use something different? They paint your face with a dot projection. So both have kind of the same purpose, which is to get depth information. Time of flight measures the time it takes to bounce stuff off of you, as opposed to— yeah, so it's what was in your Lighthouse camera.
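To make the time-of-flight idea concrete, here's a minimal sketch of the math behind "measures the time it takes to bounce stuff off of you": the sensor times a light pulse's round trip and halves the distance light travels in that time. The nanosecond figure is a made-up example for illustration, not a spec of any phone's sensor:

```python
# Minimal sketch of time-of-flight depth sensing (illustrative, not a real sensor API).
SPEED_OF_LIGHT_M_PER_S = 299_792_458

def tof_depth_m(round_trip_seconds: float) -> float:
    """Depth = (speed of light x round-trip time) / 2, since the pulse travels out and back."""
    return SPEED_OF_LIGHT_M_PER_S * round_trip_seconds / 2

# A ~6.67-nanosecond round trip corresponds to roughly 1 meter of depth.
print(round(tof_depth_m(6.67e-9), 2))
```

A structured-light system like the iPhone's dot projection infers depth from how the dot pattern deforms instead of from timing, but both end up producing a per-pixel depth map.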