Best Practices for Discovering Valuable AI Opportunities - with Adam Oliner of Slack

Automatic Transcript

This is Daniel Faggella, and you're listening to the AI in Business Podcast. Part of our mandate here at Emerj is to constantly build our library at Emerj Plus — that is to say, construct best practices, frameworks, and infographics for what AI strategists and AI catalysts really need to have on deck: ways of building AI strategy, ways of streamlining adoption, ways of building AI roadmaps, and also ways to find AI opportunities within their business. These are the non-technical business skills that bring AI to life, and that's ultimately what Emerj Plus is about. We have a huge panoply of these frameworks that we've built over time, and people often ask, well, how do you do it? The good news is we don't have to do all the hard thinking. Some of the smartest folks in the world when it comes to applying AI in the real world are our guests here on the AI in Business Podcast, or in our network, in our Rolodex at Emerj Artificial Intelligence Research, and it is from those brilliant minds that many of our best ideas and frameworks have actually come. Our guest this week is absolutely no exception to that brilliant-minds theme. Adam Oliner, at the time of this interview, was the head of AI at Slack. Slack, obviously, was acquired not all that long ago for many billions of dollars — a very well known Silicon Valley unicorn. Adam is now the founder of a stealth firm in the Bay Area, so he's no longer with Slack. I think after a company gets bought, sometimes very talented people spin out, and that's quite a natural transition. Adam had been with us last year, and I decided to pull him back in to talk about a topic that he hinted at in his previous interview but that we didn't get to go into in depth, and that is: how do we find AI opportunities? What is the lens of thinking, what is the set of steps and phases, to uncover the AI fit — to uncover where AI value can be unlocked in our business? How do we actually look through a pair of goggles that will show us: here's where the business needs and the data assets could come together and actually deliver value in the business? Adam obviously has a very robust technical background — as head of AI at a Silicon Valley unicorn, you certainly need to be well rounded there — but he does a great job of conveying these phases and steps, and the way he thinks about the process, in a way that essentially anybody listening can use and apply. So I appreciate Adam's way of explaining things and being able to break things down, and I hope that it makes it easy for you to apply some of these ideas in your own business. If you're interested in using the wider library of best practices for finding AI opportunities, building an AI strategy, building a roadmap, and even conveying the ROI when it comes to making the business case to leadership, you can learn more about those best practices, as well as our full AI use-case library, at Emerj Plus — that's E-M-E-R-J dot com slash P and then the number one: emerj.com/p1. There you can learn more about Emerj Plus. Without further ado, let's fly into this episode with Adam Oliner. You're on the AI in Business Podcast.

So Adam, glad to have you back with us here on the program. I really enjoyed our last chat about the strategic advantage of data. We've got a couple of good topics to start with today. I wanted to kick us off on the theme of building executive
AI fluency. Folks listen to the show, including your last episode, to try to get smarter and be able to enable this stuff in their business. When you think about, you know, executive teams, leaders who might not be technical, building AI fluency — what does that involve for you? What do people have to learn to make this stuff work?

Yeah, thanks for having me back, Dan. It's a good question. I think about it in terms of sort of a constellation of potential business problems that you would want to attack with AI on the one hand, and then a bunch of data on the other hand, and the process of assessing those business problems, assessing the data, and then assessing the potential bridges between the two. That's the image I have in my head when I think about identifying AI opportunities. We can dig in on each of those. So if you think about the business needs — like, what are the right problems to target with machine learning — mostly you're looking for well-formed problems with measurable impact. A sort of negative example would be: "I just want to understand my business better." Well, okay — what are the units of understanding here, and how do I actually measure whether or not I've accomplished it? You can certainly slice up that problem and find good AI opportunities within it, but as a general project it's not really well formed and doesn't have measurable impact. A good example would be something like: I want to reduce the average time it takes a user to perform some specific task. And in fact, that's a great example from a class of problems that are often really good targets for machine learning inside of a product, which is to look for user friction, or look for dead ends — basically any place where a user is asked to make a decision or perform some repetitive task. Those are good opportunities for automation. And I think this is maybe a sort of unsexy application of AI, but it's a really good one. A lot of people want to chase after the shiny new product or the shiny new feature; those are usually more expensive and harder to get traction for. But if you look at all of the places in your product where repetitive workflows are really just unpleasant or difficult for users, those are great targets.

Yeah, yeah — so, all right, so many things to tap into here. I've got this mental image, hopefully listeners do as well, of business problems on one side and data on the other side, and we're going to fill in the blanks on both sides together with you. Obviously, working at Slack, you guys have oodles and oodles of users, and thinking about user friction is a nice angle for you guys — probably a rather common place where ML would be deployed and used and leveraged. When I think about fleshing out that grocery list of conquerable, bounded business problems that have a measurable impact — I'm imagining this on the mental left-hand side here — if I'm building that grocery list because I want to be able to pick the ones that are going to be the best fit for my business, what are other questions I can ask myself to flesh that out? Because I fear that a lot of folks are just looking at what their competitors are putting out in press releases and thinking that that's what the landscape is. What are smarter ways of thinking through it?

So one way to think about it is in terms of: what are the kinds of ML capabilities, and are there
ML capabilities that are common across different business problems? To give an example of this: we built a recommendations API at Slack, internally, that can drive recommendations throughout the product. So we built that one ML capability, and now we can drive recipient recommendations in the composer, or channel recommendations when you join a new channel, or from Slackbot, and so on — and we don't have to do much new ML on the back end; it's mostly just wiring up the front end. So if you have a class of business problems that can all be served by the same back-end ML capability, that might be a bridge worth building.

Cool. And I can imagine that being — once we have the grocery list on the left side, we can say: do these cluster? The word "recommendation" comes up seven times, guys — is there a way for us to build something that can kind of tackle all of those? Is that sort of what you're getting at there?

Exactly, yeah.

Yeah, and that might help us pick the business problems that are important. You also brought up something — and maybe there's some color to sprinkle onto it — around: are there repeated workflows, be they for employees or for users? Is that maybe another useful lens? It's almost a little bit of a limited lens, because I think the idea of AI equals automation is a pretty limited way of looking at AI; however, for low-hanging fruit it's not a bad one — I mean, it should be in our bag of tricks. Do you think that can be a useful way to start building that list on the left side?

I think it's certainly a useful way to get started, partly because when you have those repetitive workflows, you're getting pretty structured, labeled data, often from users. They were faced with a question, they performed the sort of cognitive task of answering the question in some way — that is the label, that's the answer, right? And you get a lot of these examples, and now you have kind of a nice data set for doing machine learning. And this is specifically on the product side. There are, of course, lots of potential business problems that you could tackle internally, for example. So if you're not making a new product or changing an existing one, you can think about, on the back end: how do I make my services more performant or reliable? Or how do I keep things from breaking in the same ways they've broken before? And things like that. To give an example of this: we recently built a spam filter at Slack. People were using the invite-to-Slack feature as a way to spam people with email, and so we built an internal tool that filters out spam. That was a really nice ML project — it was very well contained, and often when you're shipping internal tools, the burden is a little bit less; you don't have to jump through as many hoops to ship a non-customer-facing feature. So looking internally is another good place to identify AI opportunities.

In order to do this spotting of opportunities — and we're talking about executive fluency — is it useful for leadership, or the people who are in the room during the brainstorming here, to have some familiarity with the use-case range in their industry or adjacent industries, or some conceptual understanding of how data and algorithms come together to solve problems? It feels like a lot of the time, Adam, those conversations happen with — and I don't mean ignorance in an insulting way — just not really good context on those two things. How important are those, or are there other kinds of knowledge
you think the people in the room have to have to spot problems that are worthwhile?

Yes. So let me maybe finish fleshing out these two constellations, because I think that gets to the answer to that question.

Awesome.

So on the other side from the business needs, you have your proprietary data — this is your strategic asset; we talked about this. The questions you need to ask about the data are things like: is it clean, is it reliable, is it timely? If you have a single, clean, current, correct tabular data set that exactly includes the information that's relevant for your problem, then you're in great shape — but you almost never do. So the question becomes: what will it take to get you closer to that? Modern ML doesn't really require a table like I described, exactly, but the further you are from that, the harder it is. So if you have data scattered across a dozen different systems, inconsistencies like mismatched IDs that make it impossible to join across those data sources, or missing or messy values, then, if nothing else, you know where to start on your readiness journey. You might want to Google a concept called a feature store, and just start getting your data into a state where it's at least tractable to talk about: do you have the data necessary to tackle one of your problems?

And then, of course, now you have your problems, you have your data — how do you bridge the two? And I think even at the executive level, you don't need to know the algorithms and how they work, but it's useful to understand what kinds of things ML can do. If you imagine it as just a magic box that thinks like a person does, then you don't have any ability to assess the distance between the data and a solution. Honestly, the list of things that ML does is kind of small, in a sense. It does things like prediction: you have some number of inputs, and it's going to predict a number or a category or something. Or clustering is another example, where you're just grouping things. And honestly, those two things that I've just described really do cover the vast majority of machine learning, because you can think about forecasting as just being prediction where the value is for some future time; you can think about ranking as predicting numbers for a bunch of things and then sorting them based on the predicted numbers; and you can think about recommendation in the same way. So in a sense, once you understand that — that this is either grouping things, or taking some set of inputs and predicting a number or a category or whatever — those capabilities are serving as your bridge. And if you can kind of understand those, which is not hard, I think, then you can start to think about the problems you can tackle. So going back to thinking about workflows and user friction: if you're presenting a user with an empty drop-down and they have to select something from that drop-down — well, they have a bunch of information about the task they're trying to perform, the context of this drop-down, what they're trying to accomplish, and so on, and they're just trying to pick something from a list. So that is a great example of a prediction problem. If you can give a model the same information that the user has when faced with that blank drop-down, you can just try to predict which of the things they're going to select, and you could make that the default instead of just showing them a blank drop-down. It's a very small example.
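To make that blank drop-down example concrete, here is a minimal sketch of treating it as a prediction problem: train on the context a user had when the drop-down appeared and the option they ultimately picked, then pre-fill the most likely option as the default. The feature names, options, and tiny data set are invented for illustration, and scikit-learn is just one convenient way to sketch it — this is not a description of Slack's actual implementation.

```python
# Hypothetical sketch: predict a sensible default for a drop-down from context.
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_extraction import DictVectorizer
from sklearn.pipeline import make_pipeline

# Historical interactions: the context at the moment the drop-down appeared,
# plus the option the user actually selected (the "label" the workflow gives us).
history = [
    ({"task_type": "share_file", "team_size": 12, "hour": 9},  "channel:design"),
    ({"task_type": "share_file", "team_size": 12, "hour": 14}, "channel:design"),
    ({"task_type": "report_bug", "team_size": 40, "hour": 11}, "channel:eng-triage"),
    ({"task_type": "report_bug", "team_size": 40, "hour": 16}, "channel:eng-triage"),
]
contexts, selections = zip(*history)

model = make_pipeline(DictVectorizer(), RandomForestClassifier(n_estimators=50))
model.fit(list(contexts), list(selections))

# At render time, suggest a default instead of showing an empty drop-down.
default = model.predict([{"task_type": "share_file", "team_size": 12, "hour": 10}])[0]
print(default)  # likely "channel:design"
```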
But that's the kind of thing ML can do, and really everything else is just kind of building up on top of that in various ways.

Yeah — one of the things I really like about what you're saying here, Adam, is that it's not remotely technical at the end of the day. Having gone through Andrew Ng's course, painfully and slowly, and sort of understanding that ultimately you have your clustering and you have, as you mentioned, kind of prediction — taking wherever ML is being applied and saying, well, it's one of those two — applying that kind of lens, I think, is helpful. So for you, thinking through, hey, ML can do these kinds of things, and then having some examples of each — that's what will allow people to make problems click and say, oh yeah, that's prediction, or, hmm, that might be clustering, or something like that. It sounds like that concept is important.

I think if you read press releases, or take Coursera courses or something, you can be led to believe that it has to be really complicated and difficult, but in a sense there's a small number of fundamental capabilities, and if you can understand those, you're off to a great start. And then there are other ways to think about it, which is that some people have hooked together forecasting, clustering, and prediction in really complicated ways, where now you have, you know, self-driving cars or something like that — but a lot of the time those solutions are reusable. As an example, you can just go and download an object recognition model that's been pretrained. You can show it images of sort of everyday types of things, and it'll tell you what's in the image. You don't have to go and rebuild that. And so if you have a list of those sorts of capabilities that you know are available to you, then you can start to think about ways of plugging them together. To give an example: if I have an object recognition model, and I have a product catalog, someone could take a picture of a thing that they have, and it could say "this is a chair" — so let me now look up on my website, you know, other chairs, and give them a list of those things. They've just taken a picture of it, but you know that there exists this capability of going from an image to a list of the things in that image, in text. And sort of building up that list of capabilities in your head is sufficient, I would say, to start to map out the bridge between the data and the business needs.

Right. So I like that — hopefully that's the take-home lesson for those of you listening in. The last sub-question on this first topic together, Adam, is around the other side of the table, which is data. You had mentioned you want to look at things and see: is it clean, reliable, timely? It's normally not — so how far are we from getting there? One bit of color to throw on that: do we look data-first and then figure out business problems? Do we do business problems first and then go say, well, what data would we need for that? Do we do both concurrently? Do you have any kind of order you prefer here?

If we're going to think about finding opportunities, I usually start with a business need. I think having a sense of what kind of data is available to you when you think about the business needs is useful — if you construct a list of problems you want to tackle and find out that you have data for none of them, that feels like a bit of a waste of time. On the other hand,
if you have a problem that's sufficiently valuable to you, even if you don't have the data, or the data is a mess, you can go about addressing that — add logging to collect the data that you need, or do something even more heavy-handed, like using Mechanical Turk or something to get the right labels. If it's a sufficiently important problem for the business, then you can sometimes find the data. But if you start with the data and then say, you know, where should I go from here, I think you run the risk of targeting low-hanging fruit that isn't necessarily the most valuable to the business. Being seen as sort of churning out an endless list of low-value features is a good way to not change the culture at a company or get buy-in; unless somebody can tie it to something meaningful, the enthusiasm for toys is limited at best.

Okay, so you pick the business problem and say, okay, what data would this require — and then that's when we go look at the data. Because, I guess, the way I see it is there are so many pockets where data is being stored, in so many ways, that it's impossible to just audit everything in one fell swoop. But if we say, look, here's the major cluster of problems we have, here are the kinds of data we're going to need for them, then we can do our investigation and auditing only where it makes sense, and we don't take twelve months to say, here's the state of every drop of data in the business. It seems like that might help hone our auditing process to some degree.

Yeah, I think starting at the business problem, and then thinking about what the possible bridges are that could get you there, and what data those bridges need, makes sense.

And, for those of you listening, it's useful to have a data-science-savvy technical person like Adam in the room to validate those hypotheses, because what a pure business person might say — "we would need this kind of data to solve this problem" — sometimes isn't exactly it. So, multiple kinds of expertise in the room.

I will say, you know, someone like me is not sufficient either.

Oh, for sure.

But I mean, even someone who is an absolute expert on all of this will not necessarily be able to tell you exactly what data is required for one of these problems. So you might pick a good business problem and say, okay, well, let's start logging X, Y, and Z — that should be enough. You might do that, then go and build a model, and find out the predictions are okay, but maybe not sufficiently good for the business problem that you're trying to tackle. There's not a great way to assess that until you look at the data and actually try it.

Big time. Yeah, we can't get around the fact that this is iterative; we can't get around the fact that this is somewhat probabilistic. I mean, who knows how many problems at Slack — even you guys have been like, "this should be solid," and then it's like, you know what, the data just doesn't shake out as it is, and we've got to take a different approach. I imagine that happens every now and again.

Absolutely, yeah. And there is not a great solution. There are a lot of questions that ML engineers will hear, and they'll sort of cringe, because they know there's never a satisfying answer — like, "how many examples do you need to train this model?" Really, I have no idea. I'm never going to be able to give you a satisfactory answer to that question, I'm sorry. And this is a sort of form of that, but it's even worse, because you may not even have the data yet. And so — what are the features that matter? You know,
I can make an educated guess as an expert, but I honestly won't be able to tell you until we try it.

Yeah. I guess I would say this — and let me know if you'd give this a thumbs up or thumbs down: if you've got smart subject-matter-expert folks in the room, business people who know what matters to the business and the bottom line, and data scientists who will at least have a damn clue as to how data has been used historically, we've got our best chance — not a guarantee, but our best chance — of not running down a rabbit hole that turns out to be empty. Like you said, it might still be empty, but at least we have a better shot if we've got a bit of a thinking mix going in. Is that correct?

That's certainly true.

Okay, so the second theme here, which we chatted about off mic and which I'm excited to dive into, is really around assessing AI readiness. Companies are listening in to you right now — and we're going to be turning this into all kinds of additional content — really wondering: where are we starting from, and what do we need to know about ourselves to know where to begin with our AI journey? Because so many enterprises are in exactly that position. I know you've done a bit of thinking about this beforehand. Where did you want to get kicked off on this topic?

Yes. So if you've gone through the assessments that I just talked about — you have a reasonable data story, you have an understanding of the kinds of things you can do with that data, and a candidate set of business problems — then the last step is to evaluate some particular solution; you've selected a bridge that you might want to build. Now, this will seem like a strange thing for me to say out loud, but there's nothing you can do with AI that you can't do without it — just possibly much, much worse. And on the flip side, the AI solution is usually more expensive from an implementation and maintenance perspective. And that's the question that's before you: let's say the best possible ML solution is basically up against, let's call it, a heuristic. Sometimes the ML solution will be much better, but other times your data just doesn't cut it and the heuristic will win. And the heuristic almost always wins on implementation cost. So if there's an easy heuristic to try, you should almost always try it first — if nothing else, it sets a baseline that an ML solution would need to beat. If it turns out that the heuristic isn't good enough, then this is when the company needs to ask a few questions to decide whether or not they want to build the ML solution. There are basically three of them, I would say. The first is: is this problem important enough to the business that I'm willing to invest in a better solution than this heuristic? The second is: how much new infrastructure do I need to put the ML solution into production? The more ML infra you already have, the less new stuff you'll need — a mature company will already have most of this infra and will instead be thinking about something like additional cloud spend. And the third question is: do I have good reason to believe that the ML solution will outperform this heuristic by enough to make it worth it? It might be better, but if it's only better by one percent, is that worth the investment it would require? Any good ML engineer will have run some kind of experiment to investigate this improvement, but sometimes you don't know until you try.

Yeah. I like the idea of going through this lens. So what you're saying is, when we're picking a project, use AI when it's the right tool for the job, right?
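As a concrete illustration of the heuristic-first point Adam makes here — and of the invite-spam example he describes next — this is a minimal sketch of a hand-curated rule list plus the quick baseline measurement an ML model would have to beat. The regex patterns and labeled invites are invented for illustration and are not Slack's actual rules.

```python
# Hypothetical sketch: a regex heuristic for invite spam, measured as a baseline.
import re

SPAM_PATTERNS = [re.compile(p, re.I) for p in [r"\bcasino\b", r"\bfree money\b", r"\bxxx\b"]]

def heuristic_is_spam(invite_text: str) -> bool:
    return any(p.search(invite_text) for p in SPAM_PATTERNS)

# A handful of labeled invites (text, is_spam) stands in for a real evaluation set.
labeled = [
    ("Join our workspace to plan the offsite", False),
    ("WIN FREE MONEY at our casino tonight!!!", True),
    ("Casino Royale fan club - weekly trivia night", False),  # legitimate use of "casino"
    ("xxx hot deals, click now", True),
]

tp = sum(heuristic_is_spam(t) and y for t, y in labeled)
fp = sum(heuristic_is_spam(t) and not y for t, y in labeled)
fn = sum(not heuristic_is_spam(t) and y for t, y in labeled)
print(f"precision={tp / (tp + fp):.2f}  recall={tp / (tp + fn):.2f}")
# An ML model is only worth shipping if it clearly beats numbers like these.
```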
If it's like, hey, we've got a rule set here that really cuts the mustard, and, like you just mentioned, the expense to actually eke out another two percent is going to be pretty big from where we're starting — maybe this isn't the right move. So for you, those three questions might allow us to apply a further filter: we've got the grocery list, we've got the data, we've got the bridges that might be potential projects — okay, let's run those through this wringer. That's kind of what you're advocating here?

Yeah. And I can give you an example from Slack — we recently published a blog post about the spam filter that we built. The state of practice when we launched that project was that a bunch of people were manually curating regexes around what constituted invite spam and what didn't — you know, the word "casino," this word, that word, and so on — and every time some new pattern of spammy behavior would emerge, they would have to manually go in and adjust those regexes. So there's a baseline: we know how well that's performing, we know roughly what's getting through and what's not. In particular, one of the things we identified as a problem was how many valid invitations were being filtered out by a regex — it turns out some people run legitimate casino businesses, and maybe they want to use Slack. So the target we needed to beat was the performance of this heuristic, and the question was: how much is it going to take to get an ML solution into production? Do we have the data we need, do we have the labels, do we have the infrastructure to train a model and put it into production, and so on? We went through that evaluation, and it turned out to be really successful — we had the data and the labels that we needed, everything sort of lined up — and so that was an example of a successful project. But it did start with a heuristic that had sort of done its duty for a while.

Yeah, yeah — but like you said, then you have the baseline, right? If you just went in with the ML model, you could always be asking yourselves: honestly, guys, if we just had a list of a thousand or two rules about "casino" and whatever else, would we be saving money and doing better than we are with this thing? But now you know — you can figure it out, you can say how much we think we can improve it, and then move forward from there.

And this touches on maturity itself. One of the factors that you brought to bear was: hey, based on where we're at, what's the additional investment we're going to need to actually enable this? A company that's done dozens of AI projects in different corners of the business, a firm like Slack — you guys have Lord knows how many algorithms in deployment — but enterprises just starting off maybe have less of that, or are somewhere in between. What does it look like to get a sense of where we stand? I can imagine maybe I'm a COO, or a head of compliance, or a VP of fraud somewhere at a bank, and I'm saying, well, okay, I've got some problems, but what is my ML maturity? Where are we starting from around here? What are a couple of ways you would want to define ML maturity, to know what we're standing upon and how much more investment we need to make? I imagine a lot of non-technical folks would need the conceptual understanding there.

So there are three parts. The first is the data, which we've already talked about. The second is infrastructure,
which can be basic infrastructure like the ability to train a model and to serve predictions in production at scale, things like that — but it can also include ML capabilities that you've built for other purposes. So a recommendations API, for example, might have some models that you've already trained on your data and that are already in production, and a company that has an API like that might be further along on the maturity spectrum. And then the third part is the ability to put these things into production — actually ship features, or ship capabilities, that deliver some sort of value to the customer or to the business. And some of that is cultural. I mentioned that sometimes you don't know whether an initiative is going to be successful until you at least try it on some smaller scale, and that is sometimes a cultural leap for a company — the idea that you would say, all right, we're going to try to ship feature X in Q4, and then halfway through Q4 you say, we did a test model and the performance wasn't really good, so we're going to abandon that and do something else. For some companies, that might generate embarrassment or frustration, but a mature company would say: yup, okay, it was the right thing to try, we went in, it was a good experiment to run, and now we know we can either go and collect more data, or try this again in a year, or just say, okay, it wasn't a good target, the data's not good enough. So those different elements — the third of which is maybe more organizational or cultural — along with data and infrastructure, I think are the key pieces to look at when you evaluate maturity.

Cool — so a good conceptual understanding there for the listeners. The last little sub-question on this second topic, as we wrap up, Adam, is around using initial projects to help build some of that AI maturity. Like you said, if you have a certain amount already — you've got some talent, you've got a culture that can embrace iteration (some of our listeners are familiar with our model for AI maturity as well) — we can do a little bit more. But with projects, the ROI of the project is not just the immediate financial return; it's also: hey, we now stand on this new, higher level where we can enable other things, where we can more nimbly adapt and move with AI broadly. How do you think about picking projects that are both a good fit for the data and the business need, but also maybe a good fit for leveling us up — what we might call a capability ROI, if you will? How do you like to think about that?

Yeah. Certainly the initiatives that people like to talk about are the high-value ones, where they say, this moonshot, if it's successful, will totally be worth it. In practice, the companies that do that have a money printer, right? They have some part of the business that's printing money, and they're fine with a massive upfront investment, because, again, it's not necessarily guaranteed that this will be successful. And I think if you talk to people who work at, for example, self-driving car companies, they will tell you that they've poured billions and billions of dollars into this problem, and you still do not have the ability to go and buy a self-driving car. The fact is, it just turns out to be a much harder problem than a lot of people were expecting or hoping or thought it would be. It's fifteen years since Stanley won the DARPA Grand Challenge, and still, you know, I don't have my self-driving car. So how much longer is it going to take?
I don't think you can get a really confident answer from anybody. So again, unless you have somebody who's pouring billions of dollars into your initiative, you probably don't want to go for the moonshots. Instead, the type of project that you want to go after is something that is sufficiently valuable that it's worth doing, but maybe has a lower cost for some reason. And this cost could be lower because it doesn't require all of the infrastructure, or a massive amount of data — maybe, like our spam filter, you really only need the invitations and some simple labels, you can train the model on your laptop, and the traffic that goes to the prediction serving is relatively low, so it doesn't require some massive infrastructure serving those predictions billions of times a second or anything like that. So that's a great target, where it's really valuable — to protect our brand and protect our users from spam — and it doesn't require all of that investment. And the good news is that anything we build in service of that project can now be used for other things. So if you, for example, don't need to train a model but you do need to serve predictions — great, now you have prediction-serving infrastructure, and the next time you go after a project, the cost of that is much lower.

Cool. If I try to nutshell this: going for the "wow, this would be just a rock-star project, we'd double our revenue with this amazing model" — that's maybe what looks cool in a magazine. But what you're getting at is, in practice, we're iterating, we're experimenting, some things are winning more than others, some things are flopping, but we're okay — we're being prudent about our experimentation. We broadly build up this floor of ability to be adding value in enough places, and to take advantage of enough new opportunities, that that's really the advantage here — more so than "thank goodness that home run worked out for us."

The big projects are high risk and high reward. The smaller ones often have a relatively low cost and are sufficiently valuable that they're worth doing, and they often reduce the cost not only of putting project number two into production, but even the cost of evaluating the feasibility of project number two. So if you have the ability to really quickly train a model and test it out — like running an experiment in production to test, with one percent of your traffic, whether these predictions are serving the purpose — and you can do that relatively cheaply, then it's cheaper not just to build project number two, but to ask the question of whether it's worth doing at all.

Yeah — so there's an ROI in just the learning, like you said, and if you do enough small projects, you get a feel for what's viable and what's not. Unlike if you take big swings first, where a lot of your guesses you're going to learn the hard way for the first time — and maybe that's not good for all companies.

And if, on that maturity spectrum, you're relatively low on the organizational and cultural side, then shipping those quick wins — the things that are likely to succeed and don't have a huge cost — can start to change the culture at an organization, because they say: oh, this is pretty cool, this works really well, that was a win. And so they'll be more likely to say yes to project number two, even if maybe the risk is a little bit higher.
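Picking up the "one percent of your traffic" experiment Adam mentions above, here is a rough sketch of one common way to run that kind of cheap production test: deterministically bucket users so a small, stable slice sees the candidate model's predictions while everyone else keeps the existing heuristic. The names, the stand-in classifier functions, and the bucketing scheme are illustrative assumptions, not Slack's tooling.

```python
# Hypothetical sketch: route ~1% of users to a candidate model, rest to the heuristic.
import hashlib

EXPERIMENT = "ml_spam_filter_v1"
ROLLOUT_PERCENT = 1.0  # percent of users who get the candidate model

def in_experiment(user_id: str, percent: float = ROLLOUT_PERCENT) -> bool:
    # Stable per-user bucketing: the same user always lands in the same bucket.
    digest = hashlib.sha256(f"{EXPERIMENT}:{user_id}".encode()).hexdigest()
    return int(digest, 16) % 10_000 < percent * 100

def heuristic_is_spam(text: str) -> bool:   # stand-in for the existing rule set
    return "casino" in text.lower()

def model_is_spam(text: str) -> bool:       # stand-in for the trained model
    return "casino" in text.lower() and "win" in text.lower()

def classify_invite(text: str, user_id: str) -> bool:
    return model_is_spam(text) if in_experiment(user_id) else heuristic_is_spam(text)

if __name__ == "__main__":
    share = sum(in_experiment(f"U{i}") for i in range(100_000)) / 100_000
    print(f"share of users in the experiment: {share:.3%}")  # roughly 1%
```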
Yeah, that's going to be the reality, gradually. Frog in the frying pan — it's the wrong analogy, but for some reason it's the only one coming to mind. The C-suite isn't going to want to listen to this whole interview, Adam, and take it all to heart — they're doing other stuff; they're not bad people, they're just doing other stuff. But if they can see enough chipped-away value, then we may get some investment for some bigger projects with the track record of success.

That's the right way to think about it, yeah — and it's probably better than the frog in the frying pan, Dan.

Anyway, Adam, this has been an excellent second interview. I really appreciate you jumping back on with us, and thanks again for sharing your insights.

Thank you so much for having me again.

So that's all for this episode of the AI in Business Podcast. I hope you enjoyed it. We do our best and work hard to find a good mix of talent here for the show. We like to bring on startups you might not have heard of; we like to bring on big blue-chip companies — we've had the head of AI at Raytheon, really high-level folks at Comcast, HSBC, et cetera. We also like to pull in the folks that are moving the fastest with AI, and that is to say Silicon Valley unicorns, so Adam's perspective is important to us, and I hope it's important and useful to you. If you want to support the show, and you've learned some things you've been able to apply from the AI in Business Podcast, it would mean the world if you could support us by leaving a five-star review on iTunes — what is now called Apple Podcasts. You can search for the AI in Business Podcast, drop us a five-star review, and type up what you like about the show, what you've learned, and how it's been useful for you, because it is your feedback that I bring back to my team when we think about our editorial calendar and our interview calendar. It's really your ideas that feed the show and help it evolve over time, and it's your ideas that helped us recently spin out the AI Consulting Podcast. For those of you who aren't aware, we now have a show called the AI Consulting Podcast — you can find it on iTunes, you can find it on Spotify, et cetera — and that was your idea as well. So your reviews help us generate great ideas, and they also really do support the show. If you want to support the show, consider leaving us a review on iTunes, what is now called Apple Podcasts. Otherwise, stay tuned for the next episode, next Tuesday, here on the AI in Business Podcast.
