How Alan AI Gives Enterprise Apps a Voice

Inside VOICE


Jerry: Today my guest is James Shelburne, the senior product manager for Alan, the platform that makes it easy to add context-based conversational voice assistants to your application. Welcome, James. Thanks for being here.

James: Thank you for having me, Jerry. Happy to be here.

Jerry: So let's start off with the history of this company. The founders and the company name, Alan, come from a very interesting place. Can you tell us a little bit more about it?

James: Yes, so our company is named after Alan Turing, the famous computer scientist who deciphered the Enigma machine in World War Two and helped win victory for the Allies. He asked the question, "Can machines think?", and we're rephrasing that into "Can your application think?" That's the whole premise of Alan: adding intelligence to existing applications with voice. Our founders are Ramu Sunkara and Andrey Ryabov. Ramu is an Oracle veteran; he worked on Oracle Parallel Server there, and he founded the company Qik, which built the first version of video streaming on mobile devices, before the iPhone and FaceTime. That company ended up being acquired by Skype. Andrey Ryabov, the other co-founder, worked at Qik as well; he's the one who built a lot of its video streaming components. He also worked at a Russian social network, and he's the real genius behind our technology.

Jerry: Wonderful. Now, can you tell us a little bit about why you think voice can improve the experience of existing applications that are out there?

James: Yes. Voice in existing applications means taking all of the functionality they have and making it available at any time, from any screen. Not all mobile applications are as easy to use as Instagram, especially in the enterprise. They're packed with functionality and complexity: a lot of screens, a lot of tapping, touching, and typing on the mobile phone.
It's just, in general, hard to do. So voice is about unlocking the power of those applications by letting anyone just talk to them to get work done, to access functions while they're driving. Rather than having to touch the screen, they can just talk to the app, and the screen provides visual feedback so they can easily continue the conversation.

What we've seen in the existing voice ecosystem, with Google Home devices and Amazon Echo devices, is that at least the version-one devices didn't have a screen. Later came the Echo Show and Google's hub device, but with the initial devices there was no screen: the user has to memorize what to say and what to do, and it's hard to keep up with. And the device doesn't necessarily remember or understand the context of the user.

With voice in existing applications, the user already has some idea of what the application does. They can see it on the screen, so there's visual feedback. That's why a voice experience on mobile, when it's supported by those existing functions, is just a much better way to interact. And in the enterprise, which is where we're primarily focused, users have very defined workflows that they have to complete. So at every step, they're talking to the app and it's talking back. It's a conversation, and they're able to see the results. Alan is able to understand the visual context of the user, and we also maintain the dialog context, so it's a much richer experience.
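To make the idea concrete, here is a minimal, purely illustrative sketch in plain Python (not Alan's actual SDK; all class and method names are invented for this example) of how a voice layer might use the app's reported visual state to resolve an utterance, and keep dialog context so a follow-up phrase can refer back to the current screen:

```python
class VoiceAssistant:
    """Hypothetical context-aware voice layer for an existing app."""

    def __init__(self):
        # The app reports what the user currently sees.
        self.visual_state = {"screen": "home"}
        # Dialog context carries state between conversational turns.
        self.dialog_context = {}

    def set_visual_state(self, state):
        """Called by the app whenever the visible screen changes."""
        self.visual_state.update(state)

    def handle(self, utterance):
        """Resolve an utterance using visual state plus dialog context."""
        utterance = utterance.lower()
        screen = self.visual_state["screen"]

        if "open orders" in utterance:
            # Navigating updates the shared visual state.
            self.visual_state["screen"] = "orders"
            return "Showing your orders."

        if screen == "orders" and "first one" in utterance:
            # The follow-up only makes sense because we know the
            # user is looking at the orders screen.
            self.dialog_context["selected_order"] = 1
            return "Opening order 1."

        return "Sorry, I didn't catch that."
```

The point of the sketch is the two inputs to `handle`: the screen the user can see, and the running dialog context. Together they let a short follow-up like "the first one" resolve unambiguously, which is the advantage over screenless, memorize-the-commands assistants described above.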