Podcast EP 105 - The Present and Future of Conversational AI - Joe Bradley, Chief Scientist, LivePerson


Oct 15, 2021

In this episode, we discuss the most common conversational AI use cases today and the technical and business architecture underpinnings of a successful conversational system. We also explore the potential for broad networks of interlinked AI applications in the future and the challenges in realizing this vision.

Our guest today is Joe Bradley, Chief Scientist at LivePerson. LivePerson makes life easier for people and brands everywhere through trusted conversational AI that empowers consumers to communicate with brands directly.

IoT ONE is an IoT focused research and advisory firm. We provide research to enable you to grow in the digital age. Our services include market research, competitor information, customer research, market entry, partner scouting, and innovation programs. For more information, please visit iotone.com


Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.

Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE, the consultancy that specializes in supporting digital transformation of operations and businesses. Our guest today is Joe Bradley, Chief Scientist at LivePerson. LivePerson makes life easier for people and brands everywhere through trusted conversational AI that empowers consumers to communicate with brands directly. In this talk, we discuss the most common conversational AI use cases today and technical and business architecture underpinnings of a successful conversational system. We also explored the potential for broad networks of interlinked AI applications in the future, and the challenges in realizing this vision.

If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Finally, if you have an IoT research, strategy, or training initiative that you'd like to discuss, you can email me directly at erik.walenza@IoTone.com. Thank you. Joe, thank you for joining us today.

Joe: Hey, Erik, thanks for having me, really happy to be here.

Erik: I think this is one of the areas where technology is really being used today, and probably in a lot of ways that people are not so familiar with. But before we get into the details of conversational AI, I'm actually really interested in understanding Joe Bradley in a little more detail. You have a fascinating background, moving from liberal arts through mathematics, then to data science applied to hard science, and then into Amazon, etc. Can you give us a little background on how you took this path? It makes sense in context, but probably didn't necessarily make sense as you went from step 1, 2, 3, 4, 5, 6, 7. How did it add up to your position today?

Joe: No, it's definitely been a journey, and it certainly ended up making sense. I wish I could claim that I had a master plan all the way through. I definitely made choices along the way that seemed to open up options in my career and also allowed me to follow interests and passions I had as my life went along. And there's more in there too, around opera singing and classroom teaching as well. So, that road's been pretty varied.

But I'm thankful for it in the end, for a few reasons. For one, I think the liberal arts college process, like being an English major, really taught me to write. And that's a skill we sometimes forget about. We think, oh, you learn how to write in high school, the five-paragraph essay or whatever. But we forget that it's a skill that goes on forever, and you can get infinitely good at it. There is no end to how good you can get at doing that. And that puts you in a position to be influential in many contexts.

That's one of the things I love about the business environment today that Amazon really champions: writing in business, as opposed to PowerPoint decks and things like that, even though that's an interesting form of storytelling as well that we used at Nike. I think it's very powerful learning to express yourself clearly, learning to be convincing. This is maybe the best set of lessons you can give someone between the ages of 18 and 21, and an English major was not a half bad way to do it.

I've always had an interest in mentoring and teaching; that was true while I was in college as well. I did that in the San Francisco Unified School District for a while, and then in other contexts in-district, not just as a classroom teacher but as an after-school teacher and things like that. That taught me a lot about human behavior and people: you see a lot of the things adults do, maybe in a more base form, in children, the choices people make and the ways they're influenced by their emotions and their biases.

I think if you can learn to deal with a class of 12-year-olds, you can probably deal with grownups. There are probably some lessons there for dealing with grownups, and I think it also makes you more compassionate. So there's definitely some benefit from that. And of course, the technical learning as a mathematician, as a physicist: I spent a number of years in the national labs doing experiments, literally turning bolts and also doing statistical analysis. For me, a lot of the lesson there is around nervousness about the quality of your results and your process, and a deep belief in empiricism.

That's one thing I see: there are a lot of people coming out of computer science backgrounds, these data science and machine learning degrees, who are extremely smart, extremely talented, and do amazing work. But they haven't always had the experience of forgetting to set up a detector somewhere in the right way, having all their results turn out to be garbage, and therefore developing this paranoia about writing everything down and really nailing their process so that they can repeat everything.

Empirical scientists have all learned some of that stuff the hard way, and I think a lot of times computer science folks come in and learn it the hard way in a business context, when they're trying to solve a scientific problem with data. So hopefully, some of the value I've brought to my teams, coming from a different background, is some of those standards for quality of process.

And then just this kind of atmosphere of: I think our models are going to break in production all the time. What can we do so we can be confident they won't? What kind of systems do we need to build to improve them and maintain them? That all comes from that empirical background.

And that led to Amazon and Nike. I see myself as a generalist data scientist, someone who's come to conversational AI slowly over the last 10 years. I've become more and more fascinated by it as time has gone by, and I sort of can't resist the pull. Subsequent to that, I got business chops through working at Amazon and learning how that company culture worked. I feel very grateful to have worked for that company and for Nike, which are both extremely innovative companies, but also extremely different companies in how they pull it off.

At Amazon, your ownership is well defined; you're meant to be single-threaded, you're given free rein to go, you run after your problem, and the whole organization is trying to get out of your way. At Nike, it's almost the opposite. You have multiple matrix bosses, everybody's got a number of different interests, and you have to build a lot of consensus and do a lot of personality massaging to get stuff done there.

But the net of both those systems is they free up the individual to do innovative things. Nike almost does it by having you report to so many people that you can create the space, because there's room in that; not everyone knows what you're doing all the time. People talk about it. I'm not just saying this as some callous observer. People at Nike talk about the management structure in this way and recognize that there's this weird side benefit.

And then at Amazon, of course, you have the PR/FAQ process, and anyone can go in and write down what they want to do, try to get the appropriate level of executive excited about it, and then go get it funded. So ultimately, you're trying to create the space; you're just doing it in these really, really different ways, in these really different cultures.

So, I'm not sure that quite sums everything up in the way you were hoping, but that's a little bit about the journey along the way. Maybe the last thing I'd say about it is I feel fortunate, most of all, to have been in different careers, in different discourse communities. One thing it instilled in me, and that I believe is very important, is to see the value in the community, in the group, that you may not belong to and that you might not yet understand very well.

A lot of times, a pitfall I see in scientists in a business context is they can get a little judgy. This works both ways too; it's not just the scientists. But they get judgy, like, oh, those salespeople don't really know anything about what they're doing, and I've got the data over here. I think the best scientists are the people who go and talk to those salespeople, or talk to the product lead, or go have lunch with their engineering counterparts, and are fascinated by the way those people work, the way their minds work, the way they solve problems, and the language they use to do it. The more you can touch that, the better job you do as someone doing scientific work in a business context, making it meaningful and relevant.

So, I spend a lot of time trying to get the scientists out of the box, and get them working closely with product, closely with engineering, closely even with sales, though sometimes institutionally there can be challenges there. Science work in a business context is hard, and you can't expect it to be solved by hurling requirements over a fence, any more than you could expect engineering to be solved that way. So that perspective, that depth, and that human communion is a big piece of the puzzle, a big piece of the recipe for success.

Erik: Just listening to you, I can tell that you really have a deep appreciation for human dynamics, how humans communicate, whether it's a 10-year-old or a scientist, and how they coordinate and so forth. I imagine that, together with your hard science background, this provides a lot of value in the business you're doing today, understanding how to use AI to facilitate communication with real humans. Where did you first touch this topic? You were head of data analytics for Amazon Search; was that where you first started applying ML to language processing? Can you walk us through this?

Joe: Yeah, I think you can draw it back there; that probably makes the most sense. I worked on the search experience. We had a very interesting, general problem statement: how do you organize the experience of searching for a product, the front end, the way it looks and feels and behaves and responds to you, based on your assessment of what problem the user is trying to solve? And you have really imperfect information to help you do it in that context.

We used to talk about the search query itself as this jewel. We called it at Amazon a jewel that the customer is giving to you, because it's their verbal expression, their textual expression, of what they need. And we spent a long time there having to interpret that. One of our classic examples is Harry Potter: if somebody puts that in, what the heck are they asking about? Is it the book? Is it the movie? Is it a costume? There are all these different ways that can go. And how do you create an experience that allows them to solve their problem?

A lot of what we did there is we took this combination of text data that they were giving us in the search queries, and then impression information: what they interacted with, what products they looked at, and what they did. We tried to use that to classify shopping behavior into different kinds of missions, and we found consistent missions in different places. You shop for socks the same way you shop for USB drives, and you shop for iPhone cases the same way people shop for curtains. There are these different modes of thinking and evaluating that you typically get into.

But it is very much this literal-plus-extra-literal communication that we'd assemble into this longitudinal record that each customer had. And we used natural language techniques not only on the language itself, but also on that sequence of events, which is the interesting thing; a lot of the stuff about embedded representations that you can derive from that really does borrow very directly from natural language processing.
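The idea Joe describes, treating a customer's sequence of browse events like a sentence and borrowing representation techniques from NLP, can be sketched very simply. The session data and product names below are invented for illustration, and a real system would learn dense embeddings (word2vec-style) rather than raw co-occurrence counts, but the principle is the same: events that appear in similar contexts end up with similar vectors.

```python
from collections import defaultdict
import math

# Hypothetical browse sessions: each is a sequence of product-page "tokens",
# treated exactly like words in a sentence.
sessions = [
    ["socks_a", "socks_b", "socks_a", "checkout"],
    ["usb_16gb", "usb_32gb", "usb_16gb", "checkout"],
    ["socks_b", "socks_a", "socks_c"],
    ["usb_32gb", "usb_64gb", "usb_16gb"],
]

def cooccurrence(sessions, window=2):
    """Count how often two events appear within `window` steps of each other."""
    counts = defaultdict(lambda: defaultdict(int))
    for s in sessions:
        for i, a in enumerate(s):
            for b in s[i + 1 : i + 1 + window]:
                if a != b:
                    counts[a][b] += 1
                    counts[b][a] += 1
    return counts

def cosine(u, v):
    """Cosine similarity between two sparse count vectors."""
    keys = set(u) | set(v)
    dot = sum(u.get(k, 0) * v.get(k, 0) for k in keys)
    nu = math.sqrt(sum(x * x for x in u.values()))
    nv = math.sqrt(sum(x * x for x in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

vecs = cooccurrence(sessions)
# Products browsed in similar "missions" get similar context vectors:
print(cosine(vecs["socks_a"], vecs["socks_b"]) >
      cosine(vecs["socks_a"], vecs["usb_16gb"]))   # True
```

Swapping the count vectors for learned embeddings is what lets the same machinery group whole shopping missions, not just individual products.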

If anything, to your point from before, what holds a lot of these roles together for me is just trying to understand how people express what they want, and how you can interact with them successfully to help them on this little piece of their journey, whether that's buying a cell phone case or just really wanting to make a credit card work in a foreign country without it being a big pain in the neck.

Erik: Interesting. And then you joined Nike, and I imagine you were touching on similar topics, although it looks like the scope was a bit different there. Was this also directly related to computer interaction with humans, or was this a broader scope?

Joe: Yeah, very much the same thing at Nike. What was interesting about Nike for me is we really worked internally, not only with the web properties and the digital properties themselves, the web and, I guess, four or five apps, trying to build algorithms there that would help people solve problems. We did work in training as well, to help them train better, things like that. But I also got to work closely with the marketing function at Nike, which is a first-class marketing engine, and really see that firsthand. So that was very interesting.

And I think we had a good concept in bringing that all together: the algorithms, the journeys the customers were on, and the way they discussed and talked about it and interacted with Nike. It was kind of independent of whether it was owned media that we at Nike ran, or media that was bought and paid for. We were trying to help people on the same journey; we just found them in different places at different times.

And that was a fascinating place to work, a fascinating set of problems. It's a brand that people are really passionate about. Amazon is also a brand; some people are very passionate about Amazon and feel great about the company, and some people very passionately dislike it. There's a range of feeling, and of course that's true of any brand. There are people who passionately dislike Nike too, sure.

But I think, by and large, the consumers who interact with Nike have this deep passion for the company, which was pretty cool. And it gave me a perspective as I entered LivePerson and started to think about how brand-to-customer communication should change going forward. What should the medium look like? What's the value of that for the customer? What's the value of that for the brand? I feel grateful for the perspective Nike gave me on that. It's not until you see it firsthand that you realize how meaningful a company that's selling shoes can be in its customers' lives. And that can be a good thing.

I went to college in the late 90s, and it was a time of a little bit of lefty cynicism. I grew up on, hey, these companies are probably out here to take advantage of everyone, which was a bit of a reaction to the modernism of the 70s and 80s that our culture, at my age, took on. So to see the reverse of that from Nike's point of view was pretty cool. It gave me a little more optimism, and I think it set me up for the role at LivePerson pretty well.

We're trying to help brands create those connections, and we're trying to drive the positive version of how a brand helps a customer, not how a brand gets what it wants from a customer. I personally feel very passionate about that mission. I think there is a right side of this; like anything, technology is a tool, so you can misuse it. But that's what we're trying to do at LP.

Erik: It's an interesting perspective, because you could imagine somebody, especially from a data science background, looking at this problem and saying, okay, basically somebody wants to accomplish a goal, and we need to understand their words and help them accomplish that goal. But then there's this whole emotional context around it: the person also has a relationship with the brand, and they probably have some frustration going on; that's why they're on the call right now. So there's an emotional context you also have to play within.

Let's get into LivePerson now. On the one hand, it's a tech company in a domain that's really on the cutting edge right now, conversational AI, but at the same time, it's also a 26-year-old company. And it's really hard for me to imagine conversational AI 26 years ago. Did it start with this as the mission, or did that evolve over time?

Joe: No, it didn't start with this as the mission. Though I think Rob, who's our founder and continues to be our CEO after all this time, always knew that the mission it started with was not the mission it was going to end with, or that it was going to evolve quite a bit. I don't want to speak for Rob and say whether he would have written it as it is today 26 years ago, or whether it would have been different. But he's been very clear; I've talked to him a lot.

We knew that we had to take some steps to get where we wanted to go, and we had to take the right steps. The way the company started was to essentially, more or less, invent online chat. It's kind of a funny thing now, because even at LivePerson we'll look at online chat and think, yeah, it's a little embarrassing; that's maybe not the right answer to the problem in the end state. We still have customers that do that, and we still support them, and we still think there's a lot of potential in the medium. I'm not trying to be too glib.

But we don't see that as the ultimate expression of what we're doing at all. It was the foundation for the company, and it was very successful at that. Then about six years ago, just before I joined, Rob and the company began to make a pivot into messaging. That was one of the big changes, and I think it'd be easy to overlook how meaningfully different that is. In the online chat context, you go to a website and you're talking to a company through that website. It's this limited interaction that only exists for a while: you pick up the phone and you hang up, essentially, the phone in this case just being a keyboard and a website.

The messaging context is very, very different for brands and customers as far as their relationship. It's a persistent relationship, those messages stick around, and it's an asynchronous relationship: you can solve problems or do things on your own timetable as a customer. The power of that, the interest in it from customers, and what it unlocks as far as relationship building is pretty awesome.

I look with brand partners at data between customers and brands, and the kinds of conversations they're having. And it's really fascinating to me, especially going back to the Amazon days. One of my favorite conversations is a woman talking to a sporting apparel company about how she's not motivated to run, but she's doing it; she's going to run her first 5k and she needs some shoes, but she's only doing it because she's trying to get in shape for her wedding.

And if you're someone who's into marketing and consumer understanding, your brain is exploding as you read this conversation, because you're seeing this woman tell you about the problems she needs to solve today, the shoes and the motivational challenges she's having, which these companies have ways to solve. They have apps for running. They have run clubs, all these things. And then she's telling you about this impending event in her life that's really important to her and really matters to her.

Tons of gift stuff too, like a woman asking about late Christmas shopping, or holiday shopping, I guess; for her it was Christmas for 10 grandkids and 2 great-grandkids. All these things are missions I used to be on the lookout for. At Amazon, you'd have to decode these complex digital signatures to go and find them. Now you have a medium in which people can really just ask the brands to help them with these problems.

And there's actually more complex machine learning under all this when you do it at scale, because you've got to build systems that can understand those problems. That means a couple of things. One, you've got to build systems that have the technical capacity to go look for a particular customer problem. And two, you've got to build systems so that brands can specify the kinds of problems they want to be on the lookout for, at the level of specificity that's most helpful to them.

So there's a modeling problem, and then there's a platform problem underneath it: how do I make it easy for brands to make all these models for themselves? But ultimately, the business problem is that this is one of those rare times in history where what's good for the brands, this persistent connection where they can understand their customers better and help them solve real problems, is also good for the customer, for pretty much the very same reasons.
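The two layers Joe separates, brands declaring the problems they care about (the platform problem) and a model scoring incoming messages against them (the modeling problem), can be illustrated with a deliberately toy sketch. The intent names and example phrases below are invented, and a production NLU engine would use trained classifiers rather than token overlap, but the division of labor is the same.

```python
# Brands declare intents via example phrases (platform side);
# a scorer matches messages against them (modeling side).
brand_intents = {
    "return_item": ["I want to return my order", "how do I send this back"],
    "buy_shoes": ["looking for running shoes", "need new sneakers for a 5k"],
    "gift_help": ["help me find a gift", "christmas shopping for my grandkids"],
}

def tokens(text):
    """Lowercased bag-of-words tokenization; a stand-in for real NLU features."""
    return set(text.lower().split())

def classify(message, intents):
    """Return the intent whose example phrases best overlap with the message."""
    def score(examples):
        return max(len(tokens(message) & tokens(e)) for e in examples)
    return max(intents, key=lambda name: score(intents[name]))

print(classify("I need running shoes for my first 5k", brand_intents))  # buy_shoes
```

Because the intent definitions are just data, each brand can maintain its own catalog at whatever level of specificity suits its business, which is exactly the platform property described above.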

And if you look at times like this, these are times where you're going to see exponential elbows in sales, and big winners and big losers. Amazon ushered in one of these times: hey, we can make shopping really convenient for you on the internet. That solves a real customer problem and creates an economy of scale for Amazon, so everybody benefits and, ooh, there it goes exponential. We're at the beginning of that curve for conversational right now. And the challenges boil down to the AI, machine learning, and modeling challenges, and then the platform and integration challenges associated with them.

Erik: There's a tremendous amount of complexity here, just as you've explained. There's this example of emotional context and a life event, and then somebody having a conversation to explain a specific need. But ideally, I guess, you don't want this to be a one-time interaction where the data is then deleted; you want to understand, to an extent, this person's motivation, so the next time they call, you are one step ahead in understanding how to support them.

If I'm just looking at your website, you have different roles: sales and marketing versus customer service. You have different company sizes. You have different partner types: BPOs, solution providers, etc. You have many different industries that could be using it. You have different service platforms, and then you have different products that all of these different users could be engaging with.

It's often easier to think about when, say, you're selling machines, and you sell these ones to agriculture and so forth. But here, I feel like these are all words, and under the blanket there's some machine going on. So what does that machine look like? Is it quite similar across these different solutions, or is it designed uniquely for different user types in different situations?

Joe: It's a good question. You're right, there is a lot of complexity for us. That's a challenge for us as a business, because we obviously want to do business effectively in these different contexts and help our customers, the brands we work with, with their particular problems.

We have a really big foothold in one of our biggest segments by dollar amount, by how big a piece of business they are for us: some of these really big companies. Historically that was a lot of customer care use cases, and now, increasingly, it's more and more sales and marketing use cases as well.

So we have a lot of momentum there. And I think we have a really special account and sales team that knows how to work with companies like this really, really well and help them succeed.

But let me answer the spirit of your question. The base platform under LivePerson is the same. There are different expressions of it, and there are different versions where the safety mechanisms are on more so than others, a little more guided versus a little more bespoke, depending on the needs of the particular brand. But in some ways, building for these big enterprises is good for platform building, because they have very specific needs, wants, and desires, as far as what problems they want to solve, but also how they want to integrate with you and what other technology they want to be able to integrate with. So you end up having to build a very open, very flexible platform to solve those problems. I think that's an advantage: you can take that platform and weld pieces of it together for smaller enterprises, whereas it might be harder to go the other way.

On the conversational AI underpinnings, we have the same systems you find in conversational AI platforms anywhere, although I think we have some unique features and capabilities to help you use them that really differentiate us. We've got natural language understanding engines. We've got a platform that allows you to build models to do natural language understanding for whatever purposes you want. That, of course, can be for creating a dialogue and an automated system to talk to a customer; that's one of the main use cases. Obviously, this goes under the name of chatbot these days, and that's become a bit of a dirty word.

Maybe correctly; I think there was a lot of hype around chatbots that was ill-formed and misleading. Obviously, I think we've found a way to do this that is meaningful and helpful and lets brands solve real problems. But there are lots of examples of small startup companies that sold snake oil in 2018, 2019, 2020, and then eventually paid the price for it, or got bought up, depending on the situation.

And so, we have this [inaudible 27:50] platform, we have the dialogue management system you can engage with, and we have the integration capabilities for these conversational AI systems, these automated conversations, to talk to your systems and work through APIs. But what we also have, and I think this is very, very special, is the integration between the human operators of your system and that conversational AI tooling; many of our brands have many human agents, sometimes thousands, using our system every day.

So, one of the levers we're pulling that I think is a differentiator for us as a company, and that allows machine learning to operate differently in this context, is this feedback. On the LivePerson platform, for instance, if your NLU, the system that's understanding natural language, isn't working as well as you'd like, you can have annotation tasks sent out to your working agents.

While they're in their downtime, and all these agents have some degree of downtime or other, they can give feedback to the system and say: no, this isn't really a customer who's trying to shop for a pencil, that's the wrong intent, you've misunderstood; in fact, it's a customer who is trying to return a pencil, whatever it is. I think we've begun to make those connections, and that's one way in which we differentiate ourselves.
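The agent-in-the-loop annotation flow described above can be sketched as a small feedback queue. The class, field names, and confidence threshold here are all invented for illustration; the real platform routes these tasks through its agent workspace and retraining pipelines.

```python
from collections import defaultdict

class FeedbackLoop:
    """Toy sketch: low-confidence NLU predictions are queued for agent
    review, and corrected labels become new training data."""

    def __init__(self, threshold=0.7):
        self.threshold = threshold
        self.pending = []                  # (utterance, predicted_intent) awaiting review
        self.labeled = defaultdict(list)   # corrected_intent -> utterances

    def flag(self, utterance, predicted_intent, confidence):
        """Queue uncertain predictions for review during agent downtime."""
        if confidence < self.threshold:
            self.pending.append((utterance, predicted_intent))

    def annotate(self, utterance, corrected_intent):
        """An agent confirms or corrects the label; it becomes training data."""
        self.pending = [(u, p) for u, p in self.pending if u != utterance]
        self.labeled[corrected_intent].append(utterance)

loop = FeedbackLoop()
# The model guessed "shop_pencil" with low confidence, so the agent is asked:
loop.flag("I want to send this pencil back", "shop_pencil", confidence=0.41)
loop.annotate("I want to send this pencil back", "return_pencil")
print(len(loop.pending), list(loop.labeled))   # 0 ['return_pencil']
```

The corrected examples in `labeled` would then feed the next NLU retraining cycle, closing the loop between agents and models.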

The other piece that's so important for us is a strong emphasis on quality. There has been a lot of snake oil in this marketplace, and there has been this story of: simply show up, spend a few weeks or a couple of months developing, and we will help you build this chatbot that's going to talk to all your customers forever and solve all your problems. That's just not how the system works. That's not how the science works. That's not how conversation development works.

You should think about it more like a product you're creating. These conversations are a product you're building, and you're going to have to grow, maintain, and evolve them, in much the same way that if you want to build a great web presence or a great app, you need to imagine it as a process of investment and learning that you're participating in as a company.

And so for us, we place a really high premium on giving you tools to do that work with very high quality. Part of the reason is that even answering the question of what a high-quality conversation is isn't easy; that's an open research topic. We've got tools to help you answer that. We've got models that look at your conversations and try to tell you whether they are high quality, and what might be going wrong if they're not.

We're tracking human emotion in the platform. And we're helping you build and maintain these NLU models that are going to detect and understand what you want to understand about your customers. We have first-class tooling, better than anywhere else in my opinion, to help you visualize these models, find these models, use your data to bootstrap these models, and build a system that really behaves well. Because every good business person knows this: if you give your customer a bad experience, they're going to leave you. And that's never been more true than now.

There are a few industries where you can get away with lousy experiences because there's such a big moat and you don't really have any competitors; cable companies are a good example. But in most industries, the customer is fickle, and they know they can move on. So that's, I think, where we've placed a lot of our focus, and a lot of the business reason for it.

Erik: So it sounds like this is primarily a SaaS solution, though I imagine you still have teams that help key accounts with particular problems on top of the SaaS solution. And the customer is then using this to do a number of things. They're using it to build models and then train models continuously, so they have to be able to put in data. They have, as you said, this live interaction of human agents, humans helping to train and improve the model. They have to be able to analyze results or act on output from the model. And they have to manage the UI/UX of where and how people are interacting with it. So it sounds like those are the major components here.

How deeply into the customer's systems does this model reach? So can the output of a conversation that I might have with a customer service app around maintenance for my refrigerator, can that go all the way to SAP and issue an order for a spare part? Where does this data flow?

Joe: So you're right, we have a SaaS platform. That's our fundamental business. With enterprises, it's always a challenge not to get pulled into doing 90% solutions and services work and 10% SaaS; especially for small companies, that can be really hard. But I think we've done a good job helping the enterprises out by having a satellite network of solution providers and an internal group of solution providers as well. And we've been able to separate that from the main platform building.

Erik: As a user, what I see is the front end: okay, I'm communicating with this chatbot, I communicate something, and it seems to understand what I'm saying. And then where does this go? Does this go into an Excel file that somebody downloads and says, okay, I've got to operationalize this, or does this go to SAP?

Joe: So because we're a SaaS platform, we try to be very open. Over the last few years, we've hired a number of people from Amazon, Microsoft, Google, a lot of great, talented tech professionals who have allowed us to build a really great platform that is API-driven and can connect and integrate with a wide range of technologies for CRM, sales, commercial websites, all sorts of things.

So the short answer to your question is: you connect to us through APIs, or, in cases where we have very common connectors, through off-the-shelf integration products that come bundled with the platform. But yeah, we try very hard to make that a non-hacky connection.

So there won't be a lot of Excel files in your future. You'll either have a tool to help you connect the systems that you want, or, if you're a power user, you'll have the ability to take our APIs and directly connect to them with whatever you want. And so the information coming out of conversations, like, hey, I want that refrigerator, going into an order form and then issuing the order, or the return, or whatever the use case might be: the platform is designed to make that easy.
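As a rough sketch of the data flow Joe describes here (all names below are invented for illustration, not LivePerson's actual API), a detected conversational intent might be mapped to a structured order payload that a downstream system such as SAP could accept:

```python
# Hypothetical sketch: turning a parsed conversational intent into an
# order-form payload for a downstream order-management system.

def intent_to_order(intent: dict) -> dict:
    """Map a detected 'order_part' intent to an order payload."""
    if intent.get("name") != "order_part":
        raise ValueError("unsupported intent: %s" % intent.get("name"))
    entities = intent.get("entities", {})
    return {
        "action": "create_order",
        "item": entities.get("product"),          # e.g. a spare part name
        "quantity": entities.get("quantity", 1),  # default to one unit
        "customer_id": intent.get("customer_id"),
    }

# Example: the NLU layer has already extracted the intent and entities
# from "hey, I need two water filters for my refrigerator".
parsed = {
    "name": "order_part",
    "customer_id": "C-1001",
    "entities": {"product": "refrigerator water filter", "quantity": 2},
}
order = intent_to_order(parsed)
```

In practice this mapping would sit behind the platform's API or connector layer rather than in customer code, but the shape of the translation, free text in, structured transaction out, is the same.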

Erik: And then the customer can decide where they want a human in the loop to approve some particular action?

Joe: Absolutely, yeah. So you can build a chatbot on the platform. You can have that chatbot solve for the use cases you want it to, and have it escalate to humans when you want it to. You can build interrupt conditions based on scores that the platform is assessing about the conversation: maybe the customer's frustration level, maybe certain key problems that you want to be listening for really closely. You can set those up as interrupt mechanisms to pull the conversation out to a human. If you want to, you can build a human into the process of finalizing the order, whatever you want.

I think one of the things we do best is making it really possible to weave the human and the automation together, and to manage all that in a way that is efficient, that's commensurate with what you'd expect operating some of these big contact centers, and that feels like you would expect it to feel. You have both the metrics that you know well in that environment to connect to, and some new metrics and new understanding that are innovative and can help you understand your operational efficiency as well.

Erik: So this technology is relatively new, and it's certainly going to evolve very quickly for many years. Can I throw a situation at you to understand how you see this evolving in the future? Let's say I'm driving in my hypothetical Porsche and I want a coffee. My Porsche has a conversational bot, so I say, hey, Porsche, I want a coffee. And there might be a Starbucks nearby that also has a conversational AI that can help deliver that coffee, but that's a different AI from the Porsche one.

So, hypothetically, in the future you have millions of these bots deployed by different companies that can solve different problems. And ideally, from a customer perspective, I just want to tell the Porsche bot what I need, and the Porsche bot is going to figure out how to communicate with these other bots and deliver what I need, as if it were my secretary. So, first of all, do you see that working in the relatively near future? And then, how would we connect these into something that's more like a personal secretary that can solve a wide range of problems, as opposed to an individual process management tool?

Joe: I think it's a really interesting question. There's a couple of ways to answer it. One is, how would we like it to work? And another is, what do we think is likely to happen? Because throughout human history there have certainly been all sorts of cases of technology that doesn't necessarily adapt to talk to itself, or to other versions of itself, particularly well. We're not always good at that. Wireless carrier technology is a good example: Sprint's network behaves nothing like T-Mobile's network, and they're working to integrate those, and there are all sorts of technical challenges there that I'm sure they're solving very well.

So I think for me, the next step in this journey is actually what you said at the end. The way I like to talk about it is: how do we build conversational systems that can actually begin to take away some of the cognitive load for us, and help us solve for a little bit bigger goals, not just one-off transactions?

And there aren't systems like this in the world today. If you go and look at the systems in your home, whether made by Amazon or Google or whoever, or your phone assistant, all these systems are very transactional. You look at one and say, hey, when was the Declaration of Independence written? And hopefully it gives you a good answer. Or maybe it'll play a game with you; those are maybe actually some of the most interesting use cases. Maybe it'll turn your lights on. Maybe it'll play some music.

But if you go to one of these systems and say, hey, I want to learn to knit, or I want to learn to surf, or I want to get in better shape, something that's going to be a persistent mission you could use some help with, where you think, oh, I wish I had an administrative assistant to help me with X, none of those use cases are particularly well solved by dialogue systems today. And that's in part a technology problem around integration. But fundamentally, right now, it's a science problem.

We don't have dialogue paradigms that are good enough to be both safe, so that they learn what you want to do and have the guardrails we would want them to have in an industrial or commercial context, and flexible enough that they can talk more like real assistants: keep a persistent goal, maintain it, help you solve it, help you organize it, help you collate your articles on knitting on YouTube, or whatever it is.

So for me, step one is: let's begin building some of these systems that can solve a little bit bigger problems. The better we get at that and the more real that becomes, I think there's a moment that's going to happen for some company or other that starts to cross that boundary. And as that happens, it'll create a lot of momentum. And then there are really interesting marketplace dynamics around, okay, as more and more coalesces around that technology, is the purveyor of that technology willing to integrate with other like technologies? We haven't seen that a lot yet. We see it in a few places; I think there are some integrations between Alexa and Cortana.

The short answer is, I don't think anybody knows the answer to that. But I do think we know where to look next, and that is in resolving and pushing beyond what I would call the limits of these really transactional systems today: I can do this one transaction with it, or these ten transactions, and that's it. I think we all want more.

Erik: So maybe we're a couple of technical breakthroughs, a lot of data, and probably a lot of hours of lawyers talking about how this is going to work away from solutions here?

Joe: Yeah. And then, who owns it? And how do they see their role in the marketplace strategically? Is it a Linux-like object that is hungry for things to connect with, and doesn't have limitations in an ideological sense based on who built it? Or is it something that's more commercial and more closed off, that wants to own and control its domain? And that's not only a question of whether it's a public or a private entity; different private entities see things differently. But there'll be a race to some of those first applications. And, yeah, we'll have to see what the lawyers come out with.

Erik: Yeah, there's a protocol in the industrial world called OPC UA. You have this problem where everybody puts machines into a factory and the machines don't talk to each other, so the effort of any work you do is multiplied by the complexity of getting these machines to talk to each other. OPC UA is a protocol that makes it easy to communicate across devices from different manufacturers.

So it'll be interesting to see whether there could be something like an open source protocol here that would provide that framework for things to talk to each other, without one company being in the middle making the decision of, do we or don't we want to play with this other company? But there have been lots of situations in the past when that did not occur, when business considerations dictated what's possible rather than the technical ideal.

Joe: Yeah, Apple and Flash is a good one: these choices by companies not to support certain types of media. I think what you're saying is really interesting. Another way to put the question is, how close to natural language can we make APIs? Or can we put a natural-language-like interface between APIs? Because the systems that are doing the conversations are likely going to have some notion of dialogue state with some structured semantic understanding, and that's itself a kind of API; but they'll certainly talk to APIs to do things like order the coffee at Starbucks, or whatever it is.

It's actually a question that is beyond conversational AI, or bigger than it. Can I glue together some of these APIs and these stateful systems, these systems with semantics of state? Is human language a good glue for that? The other side of the question would be: if I have these two chatbots, would they even talk to each other, or would they just have some other way of passing the semantics around through an existing API set?

I mean, I'm sort of terrible at future prediction because I never want to be too sure. But there's something fundamental to that question that we're going to see ourselves cope with over the next 20 years, for sure.

Erik: Let me ask maybe a more practical question here. Right now we've talked a lot about customer service and about sales. Are these solutions being used internally today? Let's say I'm on the maintenance team and I'm over at a job site, and I say, okay, I've got to order a part. Maybe my hands are full, I've got tools in my hands, I'm up on a wind turbine somewhere. Are there solutions widely deployed now that would solve these internal process challenges?

Joe: There are. You can use our platform to do it. One of the big examples is insurance claims adjusters: as part of the insurance operation, they'll use our platform to communicate back to the mothership. There are other examples too; I'm always surprised and enthusiastic about new ways that that happens. But yeah, definitely, within internal business applications, the help desk is another one we see sometimes. And it's not just B2C, it's also B2B. In fact, we use our own platform ourselves to sell and to create our own leads. When you communicate with LivePerson because you're interested in our product, you're doing it on our site: we're on our platform talking back to you, or you're talking to a chatbot that we built with our own technology. So there's a lot of flexibility across all those use cases.

Erik: And I suppose there would be a few criteria to determine what is a good use case. You might look at the complexity or simplicity of the use case in terms of standardized communication and so forth, and I guess the volume of instances is going to be important, both from the business side and for having enough data to work with. How would you evaluate whether something is or is not likely to be a use case where a model can be effectively trained?

Joe: You can use the platform in different ways. With the base platform, you can have humans talk to humans, and your use case, fundamentally, is just: do I want this to be conversational or not? Because that's what we do: we allow people to have conversations with a system, and that system on the other side could be a person or a chatbot.

So one version of this question is, what's the criteria for using LivePerson at all? And the answer there is: do I want to have a conversational experience? As that gets better and easier, the answer is going to be yes more and more often. Because almost anything you can imagine solving with a web form, I can also solve just as well with a sentence, or a couple of commands that I would say or type, which in the end would be easier, if the system can understand me well enough and if it's efficient enough.

I think maybe more the root of your question is, when do I know if something is a good use case for conversational AI or a chatbot? There's quite a degree of complexity you can get into now. But if you're a company, you want to start with use cases where you can start simple and build. If you have basic use cases like I want to check my bank balance, I want to change my password, I want to check my order status, these are use cases that we have lots of templates for, and you can quickly get up and running. So it'd be a combination of fully automatable, through an API or some programmatic connection, and simple enough that you can get it off the ground, start to see some success, start to measure your progress, and have something to build on.

Those, if I'm a company, would be the use cases I'd go after first: things where I don't need a lot of human oversight, where I just want to do it and won't need a person to make sure it's okay. And then you can graduate from there to use cases that are more complicated. Another really good one is information gathering, where it's like, hey, I could have you talking to a person for five minutes who's going to get your email address, your account ID, all this stuff from you to get you going.

Or I could have an automated system gather that information from you and then serve it up, along with your question or your need, to a human agent who's going to do something where you want a human in the loop: evaluate whether this is a fraudulent transaction, or help you understand some arcane detail about your bill that is maybe new this month and that you haven't automated your way out of discussing yet.

So yeah, I guess I'd kind of start small and build. But we do have brands that are continually pushing those boundaries as well.

Erik: Joe, I always like to wrap up the podcast with one or two very tangible examples. Maybe we can walk through a situation: how did somebody get started, and how do they build functionality on top of that? Is there a particular customer that comes to mind that we could use as an example here?

Joe: One of my favorite examples isn't about building a whole conversational AI system. It's actually about building a product on our platform that we did with a close partner company of ours. But I think it's a good example, and it illustrates some of what you can do. This partner is a major telco. We collectively realized with them that this telco could see network outages within a region just as fast, or faster, from the conversations coming in to their contact center as they could in their own network operations center. And that's not to say their network operations center was flawed or bad; it's just that when there are technical problems, people get on the phone, or start texting, and want to have conversations about it very quickly. That signal is really, really good.

We've also looked at other use cases, of course. There are lots of real-time use cases like this where somebody's saying, hey, I think the bus is late; a theme park is a good example of a use case like that as well. You'll get this information via natural language from your customers just as fast as, or faster than, you can see it any other way. And in fact, your customers are willing to tell you all sorts of things that you might not have APIs for, or have built out a network operations center to detect.

That was one of the key examples where we realized we needed to develop natural language understanding as a platform unto itself, independent of whether I'm automating a conversation or building a chatbot. And so we worked very closely with this company to create a system that allows you to bootstrap your way into NLU very, very quickly.

So I spent a lot of time going around to a bunch of companies and asking them, okay, so you're trying to understand your customers' intents? And they'd say, yeah, we need to do that because it tells us what kind of products they want to buy, it tells us their preferences on products, what they like, what they don't like, what's satisfying them, when they want to leave us: all these good customer analytics use cases. So we need to do all this stuff.

And the question I would ask is, where are you in the process? How are you doing it? We got a lot of the same answers, which was: well, we're six months in, and we've managed to get our data into a data lake, and that's really good because that's hard. And we've managed to investigate it a little bit. I think in the next three or four months we'll probably have our first set of intents, and we'll be able to build these detectors and start to see this.

And so for us, what we realized is we can help do this way, way faster. If you're having conversations on our platform, we can build a standardized intent model for your industry, we can pass over your data, and we can give you a whole bunch of training data for your own natural language model to help you get going quickly. Our goal was: we want you to recognize more than half of the conversations coming in to your platform every day, with greater than 70%, or ideally 80%, accuracy on that recognition. And we want you to be able to do that in the first couple of days after turning on the product, instead of the nine-month-to-two-year process that you'd have to go through otherwise.

And so we called that product Intent Manager. We built and released it about two and a half years ago. Our goal in the first year was to get 10 brands on it, because we were selling it to these enterprises, and that takes a lot of time; it's a heavyweight thing. And I think we had 80 or so brands in that first year, and it's just been growing since.

So this product, I think, not only fulfills use cases like the telecom that wants to see network outages or the theme park that wants to hear when the bus is late; it also becomes a way you can start to build yourself toward conversational AI: learn what problems you should solve for your customers, and then teach your chatbots to solve them really well, or at least do the NLU piece of that puzzle.
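The two numbers Joe cites for Intent Manager (recognize more than half of incoming conversations, with roughly 70-80% accuracy on what is recognized) correspond to two simple metrics over labeled data. A minimal sketch, with invented data and a hypothetical confidence threshold:

```python
# Coverage = share of conversations where the model commits to an intent
# (confidence above a threshold); accuracy = share of those commitments
# that match the true label.

def coverage_and_accuracy(predictions, labels, threshold=0.5):
    """predictions: list of (intent, confidence); labels: true intents."""
    recognized = [(p, l) for p, l in zip(predictions, labels)
                  if p[1] >= threshold]
    coverage = len(recognized) / len(predictions)
    correct = sum(1 for p, l in recognized if p[0] == l)
    accuracy = correct / len(recognized) if recognized else 0.0
    return coverage, accuracy

# Toy example: four conversations, three confident predictions.
preds = [("check_balance", 0.9), ("order_status", 0.8),
         ("unknown", 0.2), ("reset_password", 0.7)]
truth = ["check_balance", "order_status", "cancel", "change_address"]
cov, acc = coverage_and_accuracy(preds, truth)  # cov = 0.75, acc = 2/3
```

The split matters in practice: a model can hit high accuracy by refusing to commit on hard conversations, so tracking coverage alongside accuracy keeps the two targets honest.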

Erik: So in those situations, the person could still be talking to a human agent, but you would be recording that conversation, analyzing it, determining the context, and then feeding that into other systems to understand what people are actually talking about in aggregate?

Joe: That's right. And, if you want, the ability to store and remember information about your customers (obviously you'd do this in a way that respects customer rights): the idea that you're a gift shopper, that you're a holiday shopper, and that you might want some help next year getting ahead of holiday shopping for your 10 grandkids and your two great-grandkids. That's something brands can now remember and use to help you later, and they can do it at scale; that's the idea.

Erik: So there are many, many other potential use cases here beyond automating a conversation or using AI to conduct a conversation. Joe, I also want to be respectful of your time. I really appreciate you taking the last hour to walk us through this. Is there anything we haven't touched on yet that you think is really important for the audience to understand?

Joe: I mean, I think we were pretty wide-ranging, so I think we got a lot of good stuff out today. And Erik, I appreciate the opportunity to come on and talk with you. I think it's an interesting podcast, and thanks for having me on.

Erik: Well, really appreciate the time. And I guess, if somebody wants to learn more about LivePerson, the best option is probably to go to your website, where you probably have some bots they can talk to. Or what's the best way?

Joe: We sure do. Yeah, they can go to Liveperson.com and learn about our products that way, and there are conversations there they can have with people and automations to get a feel for us.

Erik: Cool. Thanks, Joe.

Joe: Thank you very much. Nice talking with you.

Erik: Thanks for tuning in to another edition of the IoT spotlight podcast. If you find these conversations valuable, please leave us a comment and a five-star review. And if you'd like to share your company's story or recommend a speaker, please email us at team@IoTone.com. Finally, if you have an IoT research, strategy, or training initiative that you'd like to discuss, you can email me directly at erik.walenza@IoTone.com. Thank you.