Music & AI with Pablo Samuel Castro of Google Brain

Automatic Transcript

I'm here at NeurIPS, continuing my coverage and conversations from the thirty-third NeurIPS conference, and I am seated with Pablo Samuel Castro, who is a staff research software developer at Google. Pablo, welcome to the podcast.

Thank you, thank you very much for having me. This is a real pleasure to be here.

Awesome, thanks so much. I'm really excited to jump into this conversation. You are someone that I follow on Twitter, and we've had this kind of occasional back-and-forth over time, and it's great to finally meet you in person. You've got some pretty varied interests: your research focus is on reinforcement learning, but you also tweet a lot about music and the arts. Looking at your background, you've done applied ML stuff at Google on ads, Chrome, and other things. Tell us the story: how do all these threads come together?

So, originally I'm from Ecuador, and I moved to Canada after high school to come study at McGill. I did my undergrad there, and eventually did my masters and PhD at McGill as well, with Doina Precup and Prakash Panangaden. Part of the reason why I stayed in Montreal and at McGill was for personal reasons: I was dating someone who's now my wife, and I also had a band, so I've always been heavily involved with music. I grew up with music, learning music, playing music, so that was very important to me and I didn't want to leave that, so I decided to make that choice. I know it's not the typical thing that's suggested, doing all your degrees at the same university, but for me it was more important to play music.

I finished my PhD around two thousand eleven, and then I moved to Paris for a postdoc. This was at a time when AI wasn't what we see here, with twelve thousand people at this conference; NeurIPS wasn't like that back then, it was called NIPS, maybe four thousand people. I was working at a very theoretical intersection between Markov decision processes and formal verification, so I was finding it really hard to find a job, because I wasn't formal-verification enough for the formal verification community, and I wasn't reinforcement-learning enough for the reinforcement learning community.

So after my postdoc, I already had two young kids, and I feared that I would keep going postdoc after postdoc for too long. Luckily I got an offer from Google doing applied machine learning in ads, and I essentially said goodbye to academia; at that point I stopped reading papers. Then I did a quick stint in Chrome, building machine learning infrastructure, backend infrastructure. And then Brain opened up in Montreal, and Marc Bellemare, who I had done my masters with, had kept on in research; he was at DeepMind for a while, and he was one of the first people to join Brain in Montreal. He put in a good word for me, so they offered me to join them, and I jumped at that possibility.

I hadn't been following the research, so it was a huge shock to come back. When I was doing my research, we were all working on grid worlds and pretty simple environments, because a lot of it was theoretical; we didn't really use deep networks at all for reinforcement learning or anything. So it was a lot of catch-up, trying to familiarize myself with the literature and how the whole landscape had changed. And throughout all this time I always kept up with music; I had a few different bands.
I've always been performing live and writing music. The other thing is, when I started my PhD, I was actually considering doing a PhD with Douglas Eck, as well as with Doina Precup, in something with machine learning and music. But at the time, what was available for music generation didn't really excite me very much, because it was still in the early days, and I feared that it would taint my love of music; I just wanted to keep my music side separate. But when I rejoined the research world and I saw what the Magenta team was doing, I was kind of blown away by the quality of things, and I decided to also start going down that pathway.

Pretty much, I think, the day after I joined Brain, this artist from Canada, he's called David Usher, he's pretty well known in Canada, he approached us. He actually had a band in the nineties called Moist that was really popular, and he approached us because he wanted to do an album using AI techniques. So we just met and kind of brainstormed, and the thing he gravitated towards the most was lyrics. Hugo, who was my manager at the time, was very generous, because I had just joined Brain, and he's like, do you want to take this project, since you like music? And I said sure, that sounds fun. I had never trained a language model; we were still trying to figure out all these deep networks, because I hadn't looked at that. But yeah, Hugo gave me that opportunity, and I learned a ton, and it's still an ongoing project.

Relative to the first model I trained with David, which we actually made a video out of, like he wrote one of his songs with the first prototype and it worked okay, the model we have now is so much better, and I understand all of this language modeling so much better than I did before. That experience kind of showed me to not be afraid of stepping out of my comfort zone, even with reinforcement learning, which is my background: to step out of that comfort zone and go into other areas that I'm not as familiar with, because they're all interesting problems, and to really try to dig into the details. For me, the way I learn the most is actually trying to implement some of these models and architectures and play around with them, because you read about them in papers and you kind of get it, fine, but until you're actually trying to get it to work for yourself, that's a whole different experience. I've learned so much just from doing this, jumping from one to the next in a completely different field and learning about those architectures, while still maintaining my research in reinforcement learning.

Well, it sounds like you've landed in an incredible place to do that, not just kind of the resources of Google and the people that you're surrounded with and have an opportunity to interact with, but your role seems to be defined as, like, advancing research through the implementation.

Absolutely. Yeah, so I'm a software developer; that's my official title. There are also research scientists at Google, and until recently, most people that are in research wanted to be research scientists, because then you're officially doing science. So if I had graduated, say, four years after when I did, I likely would have been applying for a research scientist role. Back when I joined Google, that wasn't really a thing; maybe Samy Bengio was a research scientist, but that was probably about it.
And so I entered Google as a software engineer and sort of advanced my career in that track. When I joined Google it was as a software engineer, well, developer, because in Canada, as an engineer, you get an iron ring, and I don't have that. Initially I was a little skeptical, because the official description is that you're there more supporting research scientists, and so I was worried that I wouldn't have the flexibility to pursue my own research interests. But it's been not at all like that: I lead my own research projects, and I still support a lot of people with the engineering aspects of it, because I've been working on this a lot, so I'm more familiar with Google infrastructure and just coding in general. A lot of the major advances that we see in machine learning and AI nowadays, a lot of it is engineering. Of course there's still math, and there's still a lot of theory behind it, but there's a lot of engineering, and I think more and more there is, but a few years ago I don't feel like it got the credit it really deserved. So living at this sort of intersection of pure engineering and pure research is, for me, super exciting, because I kind of get a playground in both worlds and learn from both.

I've got a long list of things that I want to talk to you about, but you mentioned something that's got me really curious: what it means to evolve a language model. You started this project with David, came out with this early, crappy language model, and have evolved it over some number of...

Yeah, no, it's been like a year and a half, or actually almost two years, I think, since we started it. Two years calendar-wise, but it's not one of my main projects, so I work on it when I get a chance. So, as I said, when I started this project I had never trained a language model; like, I knew what LSTMs were, I'd studied them in school. So the first thing I did was, Andrej Karpathy has this famous blog post, "The Unreasonable Effectiveness of Recurrent Neural Networks," something like that. Anyway, I took that blog post, grabbed his code, and played around with it, and that was the V0 model, trained just over characters. Then I started tweaking that a bit and finding new datasets for lyrics, and that initial model, which was basically a variant of Karpathy's model, was the initial model that I had. That was just a milestone: okay, I was able to train this and actually get it to do what I wanted it to do. But obviously it had all the shortcomings that these types of models do. The "Attention Is All You Need" paper had come out not too long before then, so I started looking into these attention models, and it seemed like the right thing to switch over to the transformer model. So I started playing around with that, and the V2 model was an attention model, and there have been various versions of V2.
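To make that V0 starting point concrete, here is a minimal sketch of a character-level LSTM language model in the spirit of Karpathy's char-rnn post. The toy corpus, layer sizes, and training loop are illustrative assumptions for this sketch, not the actual code or data from the lyrics project.

```python
import torch
import torch.nn as nn

# Toy stand-in corpus; the real project used a large lyrics dataset.
text = "you know that I'm the one\n" * 200
chars = sorted(set(text))
stoi = {c: i for i, c in enumerate(chars)}

class CharRNN(nn.Module):
    """Predicts the next character from the characters seen so far."""
    def __init__(self, vocab_size, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.head = nn.Linear(hidden, vocab_size)

    def forward(self, x, state=None):
        h, state = self.lstm(self.embed(x), state)
        return self.head(h), state  # logits over next character

model = CharRNN(len(chars))
opt = torch.optim.Adam(model.parameters(), lr=3e-3)
data = torch.tensor([stoi[c] for c in text])
seq_len = 64

for step in range(200):
    # Sample a random window; targets are the inputs shifted by one char.
    i = torch.randint(0, len(data) - seq_len - 1, (1,)).item()
    x = data[i : i + seq_len].unsqueeze(0)
    y = data[i + 1 : i + seq_len + 1].unsqueeze(0)
    logits, _ = model(x)
    loss = nn.functional.cross_entropy(
        logits.reshape(-1, len(chars)), y.reshape(-1)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

Sampling from a model like this one character at a time is what produces the rough early outputs he describes, which is why the jump to a transformer (and better data) was the natural next step.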
Part of the difficulty I had with training these language models on a lyrics dataset is that the lyrics data is not the best.

In what sense?

So the tricky thing about these language models, and for lyrics in particular, is that you're trying to get this model to learn English, kind of: how to structure English phrases together, but in a quote-unquote poetic way, and to not be boring, right? Because you're trying to use it for creative purposes, and you don't want it to be boring. So we trained this model, and if you look at perplexity scores and things like that, it was doing pretty well on this lyrics dataset, but then when you actually look at the output, it was extremely boring. Because in pop songs you have lines that repeat often, I mean, that's just how songs are written, the model would tend to just repeat the same thing over and over and over. It also had certain phrases that it would keep coming back to, that just had very high likelihood. I don't know if I've talked about this before; I say it half-jokingly, but the average pop line over the last six decades is "you know that I'm the one." That one came up a lot, and you can also get "you know that I'm the one, baby." So that's the average pop line. It was boring.

The interesting thing about working with David is that I build variants of these models and try them out with him, and one of the things he remarked on is that it was very nonspecific, in the sense that the nouns it was using, it wouldn't use proper nouns. It would just be me, you, he, she, they, so it's very kind of ambiguous. If you think of the Beatles: Mr. Mustard, Polythene Pam, Jude. There are all these, I mean, fictional characters, but they're very concrete, and so you can sort of ground the song in something kind of real, whereas if you're just talking about "him," like "hey, you," it doesn't, even though Pink Floyd has a "Hey You."
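Pablo's point about good perplexity but boring, repetitive output is a well-known failure mode of likelihood-trained language models. One standard mitigation, sketched below, is to sample with a temperature and apply a CTRL-style repetition penalty to recently generated tokens; this is a generic technique shown here as an assumption, not necessarily what the lyrics project actually did. The `model` is assumed to return logits like the CharRNN sketch above.

```python
import torch

def sample(model, prompt_ids, max_new=100, temperature=0.9, rep_penalty=1.3):
    """Autoregressive sampling with temperature scaling and a CTRL-style
    repetition penalty on tokens from the recent context window."""
    ids = list(prompt_ids)
    for _ in range(max_new):
        with torch.no_grad():
            logits, _ = model(torch.tensor(ids).unsqueeze(0))
        logits = logits[0, -1]
        # Discourage tokens the model has used recently: shrink positive
        # logits, push negative logits further down.
        for t in set(ids[-32:]):
            logits[t] = (logits[t] / rep_penalty if logits[t] > 0
                         else logits[t] * rep_penalty)
        # Temperature < 1 sharpens the distribution; > 1 flattens it,
        # trading likelihood for variety.
        probs = torch.softmax(logits / temperature, dim=-1)
        ids.append(torch.multinomial(probs, 1).item())
    return ids

# Example usage with the CharRNN sketch: seed with a few characters.
# out = sample(model, [stoi[c] for c in "you "])
```

The design trade-off is exactly the one he describes: pure maximum-likelihood sampling keeps landing on the highest-probability phrases ("you know that I'm the one"), so creative applications typically pay a small perplexity cost to buy diversity.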
