Interview with Joel Grus
Hey, everybody. So this week, instead of having yours truly and Ben here to explain some kind of data science this or that, I am here instead with a special guest. This is Joel Grus, and he is a world-renowned author, data scientist, software engineer, and point-of-view holder. And we're gonna talk about all of those things in this episode. Joel, thanks for coming on.

Thanks a lot for having me. It's good to be here.

You are listening to Linear Digressions. So Joel, I gave you a rather expansive introduction, because you are a man of many data sciences. So let me ask you to introduce yourself. Tell us a little bit about your background and how you maybe ended up in the spot where you are right now.

Sure. So originally I'm a math person. I studied math as an undergraduate, and then I got some really bad career advice, because I mostly got it from math professors. And so after I graduated I went to math graduate school. After a couple of years of that I realized I did not want to be a mathematician, and I did not want to be a math PhD, so I dropped out and sort of went down a very, very windy career path that probably is not worth going into in every detail. Currently my job title is research engineer. I work at the Allen Institute for Artificial Intelligence in Seattle, which is a research nonprofit. We do basically AI research in areas like NLP and commonsense reasoning and vision and extracting information from, for example, scientific papers. I'm on a team called AllenNLP, and in addition to having researchers on the team, we make a library, also called AllenNLP, a deep learning library that enables researchers to do research. And so most of my job is to work on that library. I've been at AI2 a little over three years. Before that I was at Google as a software engineer for a couple of years, and before that I did data science at a bunch of startups. I wrote a book called Data Science from Scratch that came out
in 2015, and the second edition just came out last month. So if you're in the market for an introductory data science book, Data Science from Scratch is the one I recommend, because I wrote it. I have various other sort of minor claims to fame. I'm the "I don't like Jupyter notebooks" guy; last year I gave a talk at JupyterCon called "I Don't Like Notebooks," and it went somewhat viral. I wrote a blog post once about solving FizzBuzz in an interview situation using TensorFlow, and that also went somewhat viral.

That's a favorite in our department. We like to pass that one around.

Yeah. I sometimes make live coding videos; I used to do this stunt where I would live code a neural network library from scratch. I'm sure there will be more stupid stunts like that in the future; I just don't know what they are yet.

So, yeah, in prepping for this I was kind of struggling with whether to describe you as a software engineer who knows a lot about data science, or whether you think of yourself as a data scientist or researcher who has strong opinions about software engineering. I mean, to talk about your book a little bit, Data Science from Scratch: I think one thing that it really emphasizes, in a way that I think is super valuable, is a way of thinking about data science that has a strong emphasis on software engineering, and what's the right way to be thinking about these problems, not just from the methods but from kind of a coding standpoint. So how do you think about that? Do you think of yourself as a person who is thinking about how to solve data science problems, and the software engineering is a means to an end? Or is there something, maybe calling back to your software engineering past, that's a little more deeply held than that?

So I was a data scientist before I was a software engineer.
I was working at a startup called VoloMetrix, and we were doing analytics on enterprise collaboration data, so going into a big company and looking at who's emailing whom, who's meeting with whom, how often, what topics, things like that. And I was a data scientist, I was not at all a software engineer, but because it was an analytics company, the data science sort of was the product. And so I wrote a lot of production code, and a disturbingly large fraction of the product were things that I built in D3, kind of for fun, and showed to the CEO, and he said, "Let's ship that." So what happened was this product had a lot of code that was written by me, a data scientist, not a software engineer, and it was not good code, and supporting it was a real pain in the ass. So that was one thing. The second thing is I discovered that I actually really liked writing that production code, and I wanted to be good at it, and I wasn't getting good at it by being a data scientist writing production code. So I made a somewhat deliberate choice to say, I really want to get into software engineering and build out that part of my skill set, so that I can be more valuable and do more of that kind of work. So I kind of took a hard pivot and said, I'm going to do less data science and more software engineering. I went to Google, and my job at Google had nothing to do with data science in the slightest. It was building backend systems in C++ and building benchmarking tools to help ad salespeople sell more ads, things like that. And so that kind of accomplished my goals of learning more about how do I build good software, and what do software engineering best practices look like. But at the same time, after a while, I thought, I wanna be kind of closer to the data, and so I tried to sort of bring myself back, and that's how I ended up at AI2 in this sort of hybrid research engineering kind of role, where I work with researchers.
I'm expected to understand deep learning and write deep learning code, but at the end of the day, I'm really kind of a software engineer who understands research.

So that's really cool. So tell me a little bit more. One thing I'm wondering about is maybe contrasting the team that you were on at VoloMetrix years ago, because that sounds a lot like what a lot of data scientists at various companies are working with right now, like they're kind of a team of one, or they're out there trying to figure out what good code looks like, versus the team that you're building out and the work that you're doing now at AI2. Like, what's some hypothetical future state that you think is a little bit cooler and more advanced, that you see yourself building around you right now?

So I was very early at VoloMetrix; I was the second employee. So the first year I was there, it was the CEO, the lead developer, and me, and the lead developer was very opinionated. He was really a geek about software engineering; he would read all sorts of books about software engineering. And when we would disagree about the way to build things he would really try and, like, bulldoze me: "Well, Kent Beck says this, you know, and Uncle Bob says this." And I really struggled keeping up my end of those conversations. And this is, I don't recommend doing this, but I sorta recommend doing this: eventually I memorized the names of all the people he would cite as gospel, and sometimes in arguments I would say, "Oh, well, Kent Beck says that, no, we should do things my way." "Really? Kent Beck said that?" "No." So that was kind of fun, to mess with him. So anyway, it was such a small team that the standard sort of division between data science and software engineering just sort of didn't exist. We were three people in a room, and as time went on,
we basically tried to bring in, you know, a CTO who could come in and say, we need to have more discipline around writing unit tests, around code reviews, around various standards around how we do our source control. And they ended up bringing in someone that I did not get along with, and I would say that's sort of why I left; it certainly didn't help. And so the difference is, I feel very fortunate in that my team at AI2, the team of people building the AllenNLP library, we really share a deep commitment to strong software engineering discipline. So if you were to look at AllenNLP, you know, I'm tooting my own horn a little bit here, but it's really one of the, like, highest quality deep learning codebases you will find. It uses Python type annotations everywhere, and we use mypy to check them as a pre-commit check. We have a pylint pre-commit check. We have extreme test coverage. We have, you know, automated documentation building, and we have a pre-commit check to make sure that the documentation actually builds. And we're very hardcore about this, even when people come in with, you know, external pull requests. Right, okay, you can contribute, but you gotta write more tests, and you gotta type-annotate your code, and you gotta do all this stuff. So I feel very fortunate to be on a team that is of a single mind on this code quality issue right now, and it's very hard for me to imagine ever again working on a team where that is not the case.

Cool, cool. And I think, you know, one thing I like about the talks that you give, like you mentioned a couple of them at the top, but ones I've seen recently are the one about "I don't like notebooks," very provocative title, and one more recently about reproducibility, and how the idea of reproducible science has a lot of very strong analogs with software engineering best practices, principles, what have you. And the slides themselves, like, not all talks are like this,
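To make the type-annotation practice Joel describes concrete, here is a minimal sketch of what type-hinted Python looks like; the function and its names are hypothetical for illustration, not taken from AllenNLP:

```python
from typing import List

def batch_accuracy(predictions: List[int], labels: List[int]) -> float:
    """Fraction of predictions that match their labels.

    The annotations let a checker like mypy flag call sites
    that pass the wrong types before the code ever runs.
    """
    if len(predictions) != len(labels):
        raise ValueError("predictions and labels must be the same length")
    correct = sum(p == y for p, y in zip(predictions, labels))
    return correct / len(labels)
```

Running `mypy` over a file like this as a pre-commit check is one way to enforce the "type-annotate your code" rule he mentions for pull requests.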
but you can actually sort of follow the narrative of the talk through the slides; you didn't have to be there in person. So some folks listening to this might have actually seen those talks, but for those who haven't, we'll post links on lineardigressions.com, and you can actually go and get most of it out of the slides. But the reason I mention this is it seems that you're going out and giving talks publicly about, you know, generally what you think people should be paying attention to. It's not just a local thing that you've set up on your team, where you're using these software best practices; you're trying to bring that message to the broader data science community, for folks who either don't know that this is a way to think about building things, or know it but don't really know how to start unpacking it. That seems to be, you know, a pretty sweet spot for some of this stuff that you're talking about.

That's right. So with the reproducibility talk, there are sort of two mirrored aspects to it. One is that if you care about reproducibility, and a lot of researchers do, or at least say they do, then, you know, adopting software engineering best practices, anything from, like, unit testing to Dockerizing things to making a real clean separation between your library code and your experimental code, those things will help you accomplish that goal of reproducibility. And then there's this flip side, which is an angle I've been taking a little bit more recently, which is that if you're an engineer who thinks that researchers need to adopt software engineering best practices, then, you know, offering up this carrot of reproducibility, "if you do all these things, you'll get reproducibility," is a way to, well, I used the Trojan horse metaphor, which some people didn't like, but, you know, it's a Trojan horse for sneaking these practices into their workflows.
And so this is something that I know you've been talking about for a while now, at least maybe a year, probably longer, I'm not sure. Have you felt the reception to that message changing? Have you gotten the sense that as time goes on, more and more data scientists are starting to nod their heads to the stuff that you're talking about?

I think that's right. You know, I'm always lecturing people about Python 3 type hints, 'cause I love them. I'm like the world's biggest fan of Python 3 type hints, and I feel like a year ago when I would tell people that, they'd say, "I don't like it, I don't like it, that looks weird, that makes Python look like Java," whatever. And now people are starting to be a little bit more receptive. It'll be interesting; the second edition of the book hasn't been out long enough to, you know, get a ton of feedback on it. But one of the biggest changes from the first edition to the second edition was updating from Python 2.7 to 3.6, and my number one reason for doing a second edition was that I felt guilty that there was a book out there with my name on it that was telling people to use Python 2.7, like, in the current year. I felt bad about it. But because I was revising anyway, I said, I'm going to take the opportunity to, one, rewrite all the code so that it's more readable. There were a number of places in the first edition where I'd sort of tried to be clever, and maybe I was clever, but as time goes on I find I always prefer readability to cleverness. The second thing I did, you know, which is what you alluded to, is that I weaved throughout the book this sort of narrative-slash-emphasis on testing code as you write it. And this is not elaborate, you know, unit tests that use pytest or nose or whatever; this is really, we've written a function, now let's write a couple of assert test cases to make sure it works.
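The write-a-function-then-assert rhythm Joel describes might look like this in practice; the puzzle helper below is my own made-up example, not code from the book:

```python
def priority(item: str) -> int:
    """Score a-z as 1-26 and A-Z as 27-52 (an Advent-of-Code-style rule)."""
    if item.islower():
        return ord(item) - ord("a") + 1
    return ord(item) - ord("A") + 27

# a couple of tiny assert test cases before moving on to the next piece
assert priority("a") == 1
assert priority("z") == 26
assert priority("A") == 27
assert priority("Z") == 52
```

The asserts double as documentation: they show a reader exactly how the function is meant to be called and what it returns.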
And so it fits into the flow of the book; it actually illustrates, here's an example of using this function, and it makes it so that the code has somewhat of a test with it. And I do the same kind of thing when I'm doing live coding. I did this live coding of Advent of Code last year, where I'm trying to solve a problem I've never seen before, and so I'll break it down into pieces, I'll write a function for one of the pieces, and then, before I proceed, I'll write a couple of tiny unit tests for it, to check that what I've written works before I go on to the next part. And I think a lot of people have said to me, you know, "Before I watched those videos I didn't use assert statements, I didn't use tests, but since seeing how they help you make fewer mistakes and solve your problem more quickly and more easily, I've adopted that." And I think that's one place where the videos are actually quite nice.

Now, one follow-up question that I wanted to ask, and this might be a little bit unfair, because I know that you must think about, like, what are the software engineering best practices: one of the challenges that my team and I run into the most in thinking about writing well-tested software is that a lot of the bugs that we find actually come from the data that we're putting in, not from the code that we're writing itself, right? Or it's assumptions that the code is making about the data that's coming in that end up not being true about the data as it actually arrives. So is that something that you're starting to think about too? Or is that something that, you know, just in practice doesn't come up enough in your day-to-day work to be a huge component of how you think about software testing? Like, where does data testing come in, or not come into the picture, as you see it?

So I'm building tools for researchers who are mostly working with a number of standard datasets, especially in NLP.
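The data-assumption bugs raised in that question can be guarded against with explicit invariant checks in the pipeline; a minimal sketch, with entirely made-up field names:

```python
def check_record(record: dict) -> None:
    """Invariant checks of the kind that belong in a data pipeline.

    These field names and rules are illustrative only:
    a bounded score, and exactly one of two mutually exclusive fields set.
    """
    assert 0 <= record["score"] <= 10, f"score out of range: {record['score']}"
    assert (record.get("answer") is None) != (record.get("no_answer") is None), \
        "exactly one of answer/no_answer must be set"

# run the checks over incoming records before any downstream code sees them
records = [
    {"score": 7, "answer": "42", "no_answer": None},
    {"score": 3, "answer": None, "no_answer": True},
]
for r in records:
    check_record(r)
```

Checking every record may be too slow in practice, as Joel notes below; sampling, or checking only at ingestion boundaries, is a common compromise.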
You know, there's the SQuAD dataset, the Stanford Question Answering Dataset, and for a while everyone was working with the exact same dataset. So the issues were less "is the data bad" and more "have I read in the data incorrectly somehow." And that's the sort of thing that's easy to write a test for. You say, here's my code that reads in the dataset, and here's a fake test fixture, which is a file on disk that looks like, you know, maybe two records from the actual dataset, and now read them in and assert that they have the right fields. So for things that are really mechanical like that, I think it's relatively straightforward. For things like, you know, "I expect this value to be between zero and ten," or "I expect this to be positive," or "I expect that if field A is populated then field B must not be," or vice versa, to me those kinds of tests feel like they belong naturally in your data pipeline. Right? So in an ideal world, if you have these invariants about your data, you wanna check every piece of data for them. In practice that might be too expensive or too slow, and you do the best you can. But insofar as you have invariants where violating them will break your code, and you expect they might get violated sometimes, proactively checking them will probably save you time, you know, a lot of time in the long run.

Totally. Yeah. And so one thing I just wanna make sure that we cover is, going back to the book here for a second: I know for folks who have the first edition and are upgrading to the second, stuff like the new Python version, like more readable code, I assume they'll know what you're talking about and can kind of fill in the gaps for themselves. For folks who haven't read the first edition, or who are wondering if they should pick it up, can you just say a little bit about, maybe you can speak to what's in the book, but...
I think a more interesting question is, like, who is it written for? What are you trying to accomplish with it? So that if someone is listening to this and they're like, "Oh, yeah, I'm that person," they know to go pick it up.

Yeah, that's a great question. So the full title is Data Science from Scratch: First Principles with Python. Because I am trained as a math person, and math people do things from first principles. You come in on the first day of class and you start proving the theorems you're going to need in order to, you know, prove the theorems that you wanna prove at the end of the class. I actually had a class in graduate school where we came in on the first day of the second semester, and it was a different professor, and he said, "I'm so glad that that other professor proved this one theorem, because that means I can use it in this class, and I wanna use it in this class." It's sort of a way of looking at the world where unless you've actually built something and implemented it yourself, you don't get to use it. And so I kind of took that approach to data science. What I mean by that approach is, if you want to understand how something works, well, we're going to implement it in a hopefully clear but probably non-performant way. So everything from, you know, statistical inference: okay, we're going to implement the statistical inference code from the basics so that we can hopefully understand how it works. Things like web scraping: we'll build our own web scraper. Things like using the Twitter API: okay, we'll use a library for that, but we'll go through the details of what we do to get authenticated and call that library. Things like cleaning data, munging data, all the way up to machine learning models. You know, we're going to implement our own linear regression, we're going to implement our own decision tree algorithm, we're going to implement our own neural network, things like that.
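In that from-scratch spirit, a clear-but-non-performant simple linear regression can be fit in a few lines from the least-squares formulas; this sketch is mine, not the book's actual code:

```python
def mean(xs):
    return sum(xs) / len(xs)

def fit_simple_linear(xs, ys):
    """Least-squares fit of y = alpha + beta * x, from first principles.

    beta is the covariance of x and y divided by the variance of x;
    alpha then makes the line pass through the point of means.
    """
    x_bar, y_bar = mean(xs), mean(ys)
    beta = (sum((x - x_bar) * (y - y_bar) for x, y in zip(xs, ys))
            / sum((x - x_bar) ** 2 for x in xs))
    alpha = y_bar - beta * x_bar
    return alpha, beta

# perfectly linear data (y = 1 + 2x) recovers the true intercept and slope
alpha, beta = fit_simple_linear([1, 2, 3, 4], [3, 5, 7, 9])
```

Nothing here would survive contact with a large dataset, which is exactly the point: every term in the formula is visible, so the reader can see how the method works before reaching for a polished library.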
And so that's kind of the angle of the book. When I was writing it, I had in mind: a long time ago, back in 2012 or so, I got permission from my boss at VoloMetrix to hire a couple of junior data scientists to help out, and at the time, 2012, data science was not that popular. So I literally just put ads out there that said, you know, "data scientist needed, no experience necessary, must know math and some programming." That was literally all I asked for. And actually, it's funny, I had a really hard time finding people to apply for that job, whereas now if you put out a job listing, yep, you'll probably get hundreds of applicants. But the target reader for the book when I wrote it was one of those people who I hired, who, you know, understood math and could write simple Python code, but didn't really know anything about working with data, didn't really know much about machine learning, didn't know much about coding beyond writing simple scripts. And so that's kind of the target for the book.

And so for the folks who are listening to this who are now thinking about that thing that you mentioned, like, there's data science jobs out there that are attracting lots and lots of applicants, and they're saying, "Okay, from what I'm hearing from Joel, it sounds like, depending on the shop, having some of these software engineering skills can be really valuable. If this is my first data science gig, I should maybe pick up this book, and a few others potentially, and cut my teeth." What are some of the other things, as you're thinking about how you wanna grow your team, like, what you're hiring for, who's out there? How do you think about that part of your job now, especially keeping in mind that probably a bunch of people who are listening to this are wondering, like, "Ooh, that sounds really cool, like, how can I..."
"What are the things that folks like Joel are looking for? What are the most valuable ways that I can be spending my time to, like, differentiate myself in that huge pile of resumes that you see?"

Well, I don't know that trying to appeal to me is the best career strategy. I don't know that I'm representative of most hiring managers. I have some very strong opinions about interviewing that are slightly, well, I can't say that people necessarily disagree with them, but I don't think I'm on the same page as a lot of people. Like, when people tell me how they're going to conduct an interview, I have a very strong gut reaction: "That's a great question," or, "That's an awful question, don't ask that."

Okay, interesting. Well, let me try again. I think the gist of the question that I'm wondering about, maybe I'll phrase it this way: this is a question that I get a lot, folks saying, you know, "I'm trying to get my first job in data science, or maybe I'm coming out of one of these boot camps or something," knowing that it's a field that's got a lot of folks in it, and especially that first job can be fairly challenging to get. Let's imagine they're maybe not applying for your specific team or your specific role, but you're just trying to give them maybe slightly more general advice. What are some of the pieces of advice you find yourself giving when people ask you this question?

So this is the number one piece of advice that I give, and it's kind of outside of what you're asking, but it is that personal connections and networking matter a ton, and that knowing someone and having them put your resume into the system gives you a huge leg up over, you know, sending a resume off into the void or pressing upload on some site. And my experience has been that those personal connections get you much further along in the process than just applying blindly.
So, you know, I hesitate to recommend this, because now I'm gonna get tons of LinkedIn messages, but if you see somewhere that you want to work, instead of just putting your resume in their site, go on LinkedIn, find out who works there. Maybe you know them, maybe they're a second-degree connection; reach out to that person. And a lot of places offer referral bonuses, so if you seem decent they'll be happy to refer you in, and you can skip to the front of the queue. So that's sort of one kind of meta piece of advice. In terms of the rest, it's hard, because so many different people care about so many different things. I mean, part of me saying that I have strong opinions about interviews is that people will tell me their favorite interview questions, and I'm like, "That's a terrible question," but to them it's a great question, and it's what they wanna ask. And so that makes it hard to give kind of a one-size-fits-all answer. I mean, for me, obviously, having better software engineering skills helps. I'm strongly in the camp of "data scientists should have some software engineering skills," and as a data scientist, the more software engineering skills you have, the better. And that doesn't mean be a computer scientist, but it means, you know, use source control, use code reviews, write tests, things like that. So for me, if I'm hiring and I see someone who has that, that's a leg up. And if someone sends a coding sample, like I said, I care much more about readability than cleverness, so if someone is submitting code or a repo and I see really clean, well-structured code, that also counts a lot for me. But I don't know that there's a secret sauce; if there was, then the boot camps would be selling it, right?

That was the thing that I was about to point out. As I was listening to you, one of the notable things that I didn't hear was any kind of hard filter on background. Like, you're a math PhD dropout.
I myself finished a PhD in the hard sciences. I work with a bunch of people who did, but I also work with a bunch of people who didn't come from that background. Some of them came from boot camps, some of them learned software engineering kind of, like, on the metaphorical streets. And I think that is just something, maybe it was not totally explicit on your part, because, you know, there's a million things that you could leave out in listing what you look for when you're hiring, but I think it is worth actually just making explicit: whether you can execute on certain types of things, have certain skills or knowledge or a certain type of disposition, is much more likely to have a big impact on whether someone is successful than whether they come from any type of fancy program, or any particular school, or any of that kind of stuff.

I used to care much more about that stuff, and a couple of times I've been burned by over-indexing on "this person went to such-and-such school, so they must be pretty good," or, "Wow, this person has this particular work experience, so they must be pretty good." And over time, I guess I haven't completely discounted it. I mean, if someone comes to me and says, "Oh, you know, I spent three years at Google and two years at Facebook," I say, well, those places are pretty strict about who they hire; the fact that you got hired there and didn't get fired immediately says something about you, and I'm not gonna really discount that. At the same time, if you say to me, you're hiring data scientists, who do you hire? I'm not looking for specific degrees, I'm not looking for specific colleges, or even any college at all, I'm not looking for specific companies. I just don't like relying on that stuff very much, to be honest.

Cool.
So I have a few more questions that I wanted to ask, and then we'll wrap up. One is, you mentioned a little bit as an aside earlier, you've been working in data science for a relatively long time, relative to the age of the field; overall, it's a young field. So for folks who are really recent immigrants to the field of data science, people who have less than six or seven years of experience since their first data science jobs: what are some of the biggest changes you have seen, even in that relatively short period of time? It seems like a field that has changed a lot.

So, a few. One is obviously the rise of deep learning. You know, 2011 was the first time I had an actual data science job, and no one was doing deep learning then. There was no TensorFlow, there was no PyTorch, nothing. Whereas now, it's not like all data scientists are doing deep learning, but some data scientists are doing deep learning, and there's so much almost turnkey deep learning stuff that, if I had a data scientist in a role that involved some kind of machine learning, I would be kind of skeptical of them if they didn't know some of that, and weren't familiar with it, and weren't interested in trying it, because that's kind of the state of the art in solving a lot of problems that are actually relevant to data-science-type business problems. So that's one sort of huge change. Another is that the tooling has just become more and more mature over time. I mean, in 2011 there was scikit-learn, we had NumPy, we had R, we had all those things, but things have become a lot more polished, which is nice. A third is sort of the point that I've been harping on a little bit, which is that data
scientists tend to be thinking a little bit more about, you know, engineering best practices. Back in 2011, data scientists certainly, and even software engineers to some degree, were much less thinking about those kinds of things. And, you know, the other big change is that now everyone knows what data science is. In 2011, when I started doing it, we had a data science meetup; it didn't even have "data science" in the title, but it was a data science meetup, and it was, like, twenty people, and we all knew each other, and we all had data science jobs, and that was kind of it. Now that same sort of meetup died out, came back, and now it's, like, hundreds of people, a lot of whom want to get into the field. So that's also a really big change.

And my last question for you, a question that's always a little bit dangerous to ask: I'm wondering if you want to go out on a limb anywhere and make predictions about where you think data science is going, maybe in the next couple of years. Another way to think of that, if it's helpful, is: what are some of the pain points that you think data scientists still have, where when you pull your head up and think about it for a second, you're like, yeah, this is a problem we've got, there's got to be a solution to this one, and we're probably going to figure it out in the next few years? Like, where is data science still suboptimal, in your view, in a way where you think the tools are out there to fix it in the next few years, but it's just an issue of people kind of putting it together and the field gelling around those changes that you see potentially on the horizon?

So when I made the switch to kind of software engineering, and this was in 2014, like I said, a lot of that was because of my interests.
I said, you know, I really like writing production code and I want to be better at it. But part of it was also a skepticism towards data science as a standalone discipline. And I think part of what I saw coming was this distinction that we have now between the more machine-learning-engineer kind of data scientist, or maybe even that's a role itself, and then the more, I don't want to call it down-market, but the more dashboard-y data analyst kind of data scientist. And I think the field is getting crowded, not just in terms of people, but in terms of breadth as well. You have some people who are data scientists who are writing deep learning code, and you have some people who are data scientists who are, you know, maybe writing SQL queries and building dashboards. And in the worst case, you get people who are hired to do one thing and end up having to do the other, and they feel misled. So it's not clear to me if it will all kind of hold together as one thing, or if it splits, where machine learning engineer really becomes a much more important role, and a lot of the data scientists who are more into writing code end up as machine learning engineers, and a lot of the ones who have been data scientists, I think you could go back to calling them data analysts, but, you know, it's possible that "data scientist" itself becomes almost a more down-market job, if you will. It's not clear to me how this will all shake out. So that's one answer. The second answer is, and, you know, this is me tooting my own horn a bit, but I think some of the issues that I brought up with notebooks are real. I didn't go into them here, but if you're interested, Google "I don't like notebooks" and you'll find my slides, and they're pretty self-explanatory. And so I think people are really thinking hard about, you know, how we can address some of these things.
How can we give people the things they like about notebooks, but with more of an eye toward doing reproducible science and making things testable and making things modular and things like that? And so I suspect that's another area where you'll see a lot of iteration and growth over the next few years. And then the third is, I'd say, there's probably going to be some continued, what I'd call, democratization of deep learning, where, you know, four years ago, if you wanted to do deep learning, you were building your own tools or hacking really low-level TensorFlow. And then there's Keras, and then there's PyTorch, and things are just going to become higher and higher level. We're already partly there, but basically you can train deep learning models without really knowing much about deep learning, for better or for worse. But I think that's also a direction that we'll see.

Well, I will hold you to that, and I'm going to get back in touch in two years, and we'll see how your scorecard is.

All right.

Okay. So one more time with the book, for the folks in the back: if people are interested in picking that up, where should they go for it?

So the book's called Data Science from Scratch: First Principles with Python. You can find it on Amazon, but beware: there are several other books called Data Science from Scratch, by jerks who decided that, hey, that book's doing well, I'll make a shoddy book with the same title and try to confuse people. That actually happened.

Oh, that sucks, man.

Okay, so for a while there was one that was actually plagiarized; they basically took the text and changed some words. I got that one taken down. But the other ones, you can't copyright a title, I guess. So, yeah, only get the one that has my name on the cover. That's the real one. And so you can buy it at Amazon. If you want a PDF of it, O'Reilly doesn't sell PDFs anymore, but you can get it from ebooks.com, or, if you have a Safari subscription, you can read the book on O'Reilly that way as well. Some people have told me that their public libraries have Safari subscriptions, so it's possible that your public library has a Safari subscription and you can read the book that way as well. Can I plug all the rest of my stuff?

Go ahead.

So I have a Twitter account; I'm on Twitter all the time. That's @joelgrus, J-G-R-U-S. I have a website slash blog that I very infrequently update, but when I do, it's usually pretty good; it's joelgrus.com. And I make some live coding videos, like I said, at youtube.com/joelgrus. And it so happens that I do have my own podcast, with Andrew Musselman, called Adversarial Learning, at adversariallearning.com. I think those are all my things.

So it turns out I lied a minute ago when I said I was asking my last question, because I don't think I've ever had anyone on here before who had a podcast in their own right. So it's Adversarial Learning. How long have you been doing that now? It's been a while.

Yeah, it's been probably two years, maybe more than that.

Cool.

But we don't put out episodes very frequently. It's been two years, but there are probably only about twenty-five episodes.

How do you figure out what you want to talk about on there?

Well, so I like having a podcast; I like talking on podcasts. I hate editing podcasts; that's like my least favorite thing. How do I figure out what I want to talk about? Usually Andrew and I chat online, and we try to figure out who we should get as a guest, and then either we brainstorm together or brainstorm separately. It totally depends. We like to have fun with it. So actually, what we did on the most recent one, because I wanted to promote my book, we did an episode where we pretended that Andrew was the only host and I was his guest, as if I was not the host of the podcast.
And so we made a lot of inside jokes about previous episodes. Right? He'd ask me, oh, do you know so-and-so? And I'd be like, yes, we had him on; I've had him on the podcast before. So that was a good one. One time we had Vicki Boykis on, and I think, it must have been her episode, the topic was data science myths, and I think she chose that. And so in preparation, I made up a bunch of data science myths that I was going to ask her about. You know, some of them were real, and some of them were like, if you look in the mirror and say "data science" three times, Drew Conway will appear behind you and bonk you over the head with a Venn diagram, and things like that.

I mean, in fairness, have you ever done that? How do you know that's not going to happen?

No, I haven't. That's why I included it.

So when I was going through my list of myths to ask her about, there were a lot of really goofy ones in there, and that helped drive the conversation as well. You know, we did an episode with Tim Hopper where we were just telling stories about the worst interviews we've been on; that was probably the most popular episode. But Andrew and I are both kind of loudmouths, so we tend not to have too much trouble keeping the conversation going even if we don't prepare that much.

Yeah. Cool. Well, if anybody listening to this is not already familiar with it, go check it out, because I know y'all like podcasts out there.

Cool. Well, we've covered a lot of ground here today. Joel, thank you for coming on again. This was really fun for me. I hope it was fun for you too.

Katie, thanks so much for having me.

My pleasure. Linear Digressions is a Creative Commons endeavor, which means you can share or use it any way you like; just tell them we sent you. To find out more about this, or any other episode of Linear Digressions, go to lineardigressions.com.
And if you like this podcast, go and leave us a review on iTunes so other people can find and listen to the content. You can always get in touch with either of us; our emails are ben@lineardigressions.com and katie@lineardigressions.com, in case you have comments or suggestions for future shows. You can tweet us at @LinDigressions. Thank you for joining us, and we'll see you next time.