Coders, VAEs, and latents discussed on The TWIML AI Podcast
If you just train a plain autoencoder, it will simply learn the identity function for the encoder and decoder. It won't learn anything interesting, and you won't be able to use it for compression. So what a VAE does is put constraints on the latent variables. The way it does that is basically that you introduce a prior distribution on the latent variables, and you require the encoder and the prior to output similar distributions.

Now, the compression interpretation of such a system is that the encoder network will produce an encoding of the data that may lose some information. So for example, if the image contains a patch of grass, the encoding may only remember the fact that there is grass here, but not necessarily all the low-level details of how each piece of grass is oriented. The decoder then tries to reconstruct it. So it's doing lossy compression: it reconstructs the input as best as it can. And then, in addition, the latent variables, which are the thing you're actually going to transmit, may have some redundancy. For example, there may be a correlation: if you see a patch of grass in one part of the image, maybe that makes it more likely that there's also a patch of grass in another region of the image. As soon as that happens, you should be able to losslessly compress those latents, provided you have a probability model, and in the VAE that role is played by the prior. So there is really a very beautiful, perfect fit between the VAE, which was invented without necessarily thinking about this application, and lossy compression: the autoencoder does the lossy part, and the prior does the lossless encoding of the latents.

Now, that is still a very high-level description, and when you actually want to make this work in a practical way, there are still tons of things to do. You have to think about how to quantize those latent variables, and how to design encoders, decoders, and priors that are computationally efficient. We're also looking at hybrids between VAEs and GANs, where the decoder
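The rate-distortion reading of the VAE objective described above can be sketched numerically. Everything in this snippet (the latent size, the random "posterior" parameters, the stand-in decoder output) is invented for illustration; only the KL formula for diagonal Gaussians and its bits-for-the-latents interpretation come from the discussion:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy encoder output for one input: a diagonal Gaussian q(z|x).
mu = rng.normal(size=8)              # posterior means (made up)
log_var = 0.1 * rng.normal(size=8)   # posterior log-variances (made up)

# Rate term: KL( q(z|x) || p(z) ) against a standard-normal prior p(z).
# In the compression view this is (after converting nats to bits) the
# expected cost of transmitting the latents under the prior's entropy model.
kl = 0.5 * np.sum(np.exp(log_var) + mu**2 - 1.0 - log_var)
rate_bits = kl / np.log(2.0)

# Distortion term: how well the decoder reconstructs the input.
x = rng.normal(size=16)
x_hat = x + 0.05 * rng.normal(size=16)   # stand-in for a decoder output
distortion = np.mean((x - x_hat) ** 2)

# Training minimizes distortion plus a weighted rate, i.e. the negative ELBO.
beta = 1.0
loss = distortion + beta * rate_bits
```

The `beta` weight trades off how many bits are spent on the latents against reconstruction quality, which is where the "lossy" part of the fit comes from.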
doesn't just try to do the best job at reconstructing the input, maybe ending up with some blurry patch of grass that is pretty close to the original but, because of the information loss, hasn't come up with exactly the same thing as the input. By adding GAN losses, we can have the decoder essentially imagine what the grass might have looked like: it tries to output something that is not just close to the original image being compressed, but also indistinguishable from real data by a discriminator network. So it's essentially confabulation, making up details that are realistic and pleasant to look at, without us having to transmit them. That's, I think, one of the very exciting possibilities of neural compression algorithms, something the previous generations of classical, non-learning-based codecs couldn't implement.

In traditional compression, you're basically trying to identify redundancies in the image through some encoding process and get rid of them, whereas here you're instead trying to predict the distribution of the image. I'm not sure if that's the best way to describe it, but the thing that you're transmitting, or the compressed video that you're storing, is your latents, as opposed to some encoding. Is that the idea? And then you're using a generative model to reconstruct from those latents, as opposed to a decoder reconstructing your video.

I think that's accurate, yeah. It in fact uses some of the same principles that are used in classical codecs, only there the encoder and decoder typically aren't nonlinear but linear maps: for example, a wavelet transform or a discrete cosine transform, which is typically hand-designed to produce sparse coefficients. So you take a patch of an image, apply one of those transforms, and you get a whole bunch of numbers that are almost zero and a bunch of coefficients that are large. And then implicitly there is also a prior there, which says: I expect most of these coefficients to be close to zero.
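The classical transform-coding picture can be made concrete with a small sketch. This builds the orthonormal 8-point DCT-II basis by hand (the 2-D version of this transform is the one JPEG uses) and applies it to a smooth signal; the basis construction is standard, but the "patch" itself is invented for illustration:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis; row k is the k-th cosine basis function.
    k = np.arange(n)
    basis = np.cos(np.pi * (2 * k[None, :] + 1) * k[:, None] / (2 * n))
    basis[0] *= 1.0 / np.sqrt(2.0)
    return basis * np.sqrt(2.0 / n)

n = 8
D = dct_matrix(n)

# A smooth "patch": a slowly varying ramp, standing in for low-detail
# image content such as a blurry strip of grass.
patch = np.linspace(0.0, 1.0, n)
coeffs = D @ patch

# Energy concentrates in the first coefficients; several of the rest are
# (near) zero, which is exactly what the implicit sparsity prior expects.
small = int(np.sum(np.abs(coeffs) < 0.05))
print(coeffs.round(3))
print(small, "of", n, "coefficients are near zero")
```

Because the basis is orthonormal, `D.T @ coeffs` recovers the patch exactly; the lossy step in a real codec comes only from quantizing the coefficients afterwards.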
So for those that are exactly zero, there will be a scheme to efficiently code them.
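One simple such scheme is run-length coding of the zeros, which is roughly what JPEG does with its zig-zag-ordered quantized coefficients. The function below is a simplified illustration of the idea, not any codec's actual bitstream format, and the input sequence is made up:

```python
def rle_zeros(coeffs):
    # Encode quantized coefficients as (zero_run, value) pairs: each pair
    # says "skip this many zeros, then emit this nonzero value". Long runs
    # of exact zeros thus cost almost nothing to transmit.
    out = []
    run = 0
    for c in coeffs:
        if c == 0:
            run += 1
        else:
            out.append((run, c))
            run = 0
    if run:
        out.append((run, 0))  # marker for trailing zeros
    return out

quantized = [12, 0, 0, -3, 0, 0, 0, 1, 0, 0]
print(rle_zeros(quantized))  # -> [(0, 12), (2, -3), (3, 1), (2, 0)]
```

Ten coefficients collapse to four pairs here; an entropy coder would then assign short codes to the common small runs and values.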