Bill discussed on Inside Intercom Podcast


...down, and there's a flood of different content and information, so creating content and information that can actually stand out and rise above it is going to be at an even greater premium over the next few years.

It's been a few weeks since Intercom launched its AI features. What's the early feedback that you have seen? And someone else asked, how do you measure the success of incorporating this technology?

Yeah, so I'll be very transparent about that: I don't have a fully satisfying answer to that question yet. What I can tell you is that we're now live, and we have thousands of customers who are using this regularly. So we've had a lot of adoption. We likely will try and measure how this has actually made people more productive, because for our own CS team, say, we can gather telemetry on whether you're faster if you use these features, and probably put together some form of controlled experiment around that. We'd always like to get some form of quantification on this at some point. We're not at that point yet; we'll probably have some numbers, or at least internally more of an understanding, in a month or two, I would guess.

What I can tell you at the moment is that we're seeing a lot of adoption, a lot of excitement, and a lot of usage. Some features, like summarization, customers tell us save them substantial time. We've had customers tell us things like, hey, for some customer accounts, some conversations, it can take as long to write the summary for a handover as it does to actually resolve the end user's issue. So we definitely feel good about that. Some of our other features work where you write shorthand.
A little bit like GitHub Copilot. We were inspired by Copilot: if you're a programmer, you can write a comment or some shorthand and it will fill out the code. One of the features we shipped is Expand, where you write shorthand and it turns it into a longer support message. Sometimes that works and saves people time; we don't have data on that yet. What we have live at the moment is really just a generation-one version of that. We have prototypes of a generation-two version: at the moment, you just write the shorthand and the large language model expands it out, but what we're trying to do instead is say, hey, let's pull in the last time you answered a question like that, let's pull in macros that are relevant to this. We have some internal prototypes there that are working pretty well. So we think there's still innovation here, things that are really going to move the needle for that sort of Copilot-style expansion user interface as well. But we don't have metrics yet, although we will soon.

And to follow up on that, how do you measure the cost of it? As I understand it, you probably send queries to OpenAI and they charge, I guess, two cents per thousand characters, something like that. And I guess as your adoption rises, that bill also piles up. So do you have any learnings or observations to share for other startups maybe also thinking about incorporating this technology?

Yeah, I have a chart in Tableau of our daily spend with OpenAI that we keep a nervous watch on. Look, it's definitely a consideration. So I mentioned this summarization feature: we've built it in a very human-in-the-loop way, where you've got to ask for the summary before you hand over the conversation.
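The cost trade-off being described, paying per API call for an on-demand summary versus refreshing a sidebar summary on every new message, can be made concrete with a back-of-the-envelope sketch. All numbers below (per-call price, conversation volume, message counts) are illustrative assumptions for the sake of the arithmetic, not Intercom's actual figures:

```python
# Rough cost comparison: human-in-the-loop summarization (one summary per
# handover) vs. an always-on sidebar summary (refreshed every message).
# All constants are illustrative assumptions, not real Intercom numbers.

COST_PER_CALL = 0.02            # assumed price of one summarization request, in USD
MESSAGES_PER_CONVERSATION = 40  # assumed average messages per conversation
CONVERSATIONS_PER_DAY = 10_000  # assumed daily conversation volume

def on_demand_cost(conversations: int) -> float:
    """Human-in-the-loop: a single summary call per conversation, at handover."""
    return conversations * 1 * COST_PER_CALL

def always_on_cost(conversations: int, messages_each: int) -> float:
    """Always-on sidebar: one summarization call after every new message."""
    return conversations * messages_each * COST_PER_CALL

daily_on_demand = on_demand_cost(CONVERSATIONS_PER_DAY)
daily_always_on = always_on_cost(CONVERSATIONS_PER_DAY, MESSAGES_PER_CONVERSATION)
```

Under these assumed numbers, the always-on design costs a multiple of the on-demand one equal to the average conversation length, which is why per-call pricing pushes the product toward a human-in-the-loop trigger.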
One thing a lot of our customers say to us is, hey, Intercom, why do I have to ask for this summary? Please just maintain a summary at all times in the sidebar, so I never have to ask for it. That would get really expensive: if we had to pay something like two cents every time someone said something new in the conversation and the summary changed, that would start to get extremely expensive. So we absolutely have to take cost into consideration here in a way that we don't with more traditional machine learning models. That said, OpenAI just announced their ChatGPT API, and I think it surprised a lot of people, because it was ten times cheaper than the previous similar models in that series. So it's possible that the cost drops pretty fast here and these features just become widely adopted. So on your question about other startups or other companies building in this area: the advice I think we would give at Intercom is, hey, try and get in market fast here, because there's real
