Erik: Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.
Welcome back to the Industrial IoT Spotlight podcast. I'm your host, Erik Walenza, CEO of IoT ONE. And our guest today will be Satyam Vaghani. Satyam is the Vice President and General Manager of IoT and AI at Nutanix. And Nutanix is a company that elevates IT to focus on the applications and the services that power businesses, from manufacturing and oil and gas to retail. Satyam was previously the CTO and cofounder of PernixData, which was acquired by Nutanix. And he has a depth of expertise in virtualization, the edge computing stack, and the development of code and movement of data across the edge and cloud.
Together, we discuss the adoption of hyper-converged infrastructure to enable scale and simplicity in industrial IoT systems. We explored how decisions are migrating from the IT department to project managers and also up to the C-suite. And we discussed the success factors behind an IIoT deployment, which in Satyam's perspective are human as often as they are technological.
I hope you found our conversation valuable. And I look forward to your thoughts and comments. Satyam, thank you so much for taking the time to speak with us today.
Satyam: Thanks for inviting me. Glad to be here.
Erik: So Satyam, I want to dive in deep into Nutanix, the technology you're building there, and also the trends that are driving the need for this technology. But before we really get deep into the company and the solution, talk to me a bit about your background, because you have a very interesting background. Before you joined Nutanix, you cut your teeth at VMware. And then you started up your own company, and you actually joined Nutanix through an acquisition, as I understand. Can you just walk us, in your own words, quickly through that path and how you ended up in your current role as VP and GM of IoT and AI at Nutanix?
Satyam: For sure, Erik. I feel a little blessed in terms of the type of roles and the type of companies I've been able to work in, in the past and now in the present. I started off at VMware, and this was the early days of virtualization. And quite frankly, even we didn't quite know the true potential of virtualization at the time we started. This was roughly at the beginning of the 21st century, the 2001-2002 timeframe. Server virtualization was the big bet that we made.
I remember, we were recruiting once at MIT, and somebody comes up to me and says, what do you do? And so I told the person, we do virtual machines. And she corrected me. She said, well, you mean washing machines, right? That was my aha moment, partly a little rude awakening, saying, well, not enough people in the world appreciate virtual machines. But since then, we came a long way. Obviously, we grew up to be a public company, 15,000 employees. I spent 10 years there, going all the way from the startup phase to the public phase. It was a great journey.
I focused mostly on the operating system side of it. I specialized in storage and [inaudible 03:44] systems. I built their file system. And then flash memory came along, and that became a big phenomenon in storage. So we did a startup called PernixData around server-side flash, using server-side flash in a scale-out manner to really accelerate workload performance when it comes to storage.
More than the technology, it was a great transition point in my career, because I was mostly focused on technology first at VMware. And then I got to see a lot of the business world as a cofounder at PernixData, which then led me to Nutanix when we were acquired, and to my current role, which is a very good mix of technology and business. And it is a role that requires me to apply all of that stuff in an extremely new context, a place where the right business model is not known, the right product is not quite known.
Obviously, there are a lot of companies experimenting, if you will. We are all trying to create IoT products for the enterprise, some for consumers. So it's a great challenge, I mean, just to figure out everything, all the way from the product creation and ideation process to taking it to market, figuring out the exact use cases where it resonates, the business models that are able to scale, and so on.
Erik: And why do you think that Nutanix purchased PernixData? Was there a specific niche technology that you were best in class at that would complement their technology? Or was this an adjacency that would allow them to offer a bigger portfolio or talent acquisition? What was the background logic from your understanding?
Satyam: Back then, it was a very complementary technology. Nutanix, just for people who don't know, is the leader in what's called hyper-converged infrastructure. So the founding principle is that, instead of treating different functions of computing as separate silos, instead of treating storage as a separate function from networking, which is a separate function from compute, maybe we could use servers as the core building block of a data center, and then just overlay all these functions, storage, compute, networking, on top of a massively distributed farm of servers, just through software. We figured that could be a much more efficient way to build data centers.
And in fact, that's how data centers are built in the public cloud. So we wanted to bring that kind of core architectural principle, we wanted to make it mass market. That was the foundational principle of Nutanix. Since then, of course, we've graduated to addressing a much wider scope of problems. But that's where PernixData came in: our foundational principle was also the fact that server-side software is the key software layer. It can provide many functions, including scale-out storage. And so just from a technology DNA point of view, there was a lot of overlap. So we figured by joining forces, we could do something much bigger.
Quite ironically, after I joined Nutanix, I focus mostly on things that are not necessarily in the HCI package. It's an extension of HCI. If one figures that HCI is a foundational concept in building data centers, whether that data center is in the private cloud, the public cloud, or the edge context, then at least in the edge context, by layering more technology on top, which is where I come in, we can customize that HCI stack, which is an infrastructure-level concept, to a specific application context, which is IoT, for example.
Erik: I do want to go into the technology in as much detail as I can comprehend. But let's cover business first. Before that, let's just give a quick 101 on what virtualization is. You've already explained hyper-converged infrastructure a bit, but a lot of the folks that are listening really come in from an OT, an Operational Tech background, and on the one hand, this move towards edge computing, this move towards connecting IT to the OT environment, is critical for them. On the other hand, everything is very new. Can you give us just a quick understanding of what this is and why it's important in the context of the industrial IoT?
Satyam: Yeah, absolutely. And maybe I can explain it with some examples that we've already come across. One thing I would admit just starting out is that virtualization is a very IT-specific concept, at least so far. And one of our mission parameters is, in fact, to make it desirable in an OT context. And I'll be the first one to admit that we have a long way to go. Because typically, when it comes to the business of selling technology, you bring in your shiny toy saying, look, this is our shiny toy, in our case, it's virtualization, and you ought to use it. But very often we forget to empathize with the customer.
And so, a lot of this journey, in my opinion, is going to be about showing why this traditionally IT-specific concept, which is virtualization, should matter to OT people at all. So let me give you some examples. We came across the example of wireline trucks. These are trucks that are essentially "mechanics" for an oil rig. So this is a truck that goes out to a working oil field, for example, and tries to discover what's going on in the well. They deploy sensors in the oil well, figure out what's going on. And if there's something to fix, based on the sensor data, they can recommend a fix. So that's wireline trucks.
And typically, if you look at a wireline truck scenario at an oil and gas services company, they would have many different PCs inside that truck, literally riding inside the truck. So three or four different PCs doing different parts of the OT stack, handling different sensors, for example, or running different applications. And just because those applications are traditionally not compatible with each other, they would run them on different computers. And so suddenly there are all these computers, and they don't have application lifecycle management hygiene, because they are physical machines. The patching, the security lifecycle management for those machines is not quite well defined, etc., etc.
So a lot of things that IT folks care about, security for the application or for the operating system, or application lifecycle management for the specific application running in the IoT context, all those things tend to be less defined. And that's for convenience reasons: in OT, a lot of things that IT people care about are less convenient to do. And so virtualization, in our opinion, is a great unifying concept: we can keep all the OT concepts almost the same, so the exact operating systems they use and the exact application stacks they use remain the same.
But by replatforming the OT infrastructure, fundamentally, at the infrastructure level, suddenly you can, for example, consolidate all those three or four different computers in our wireline truck, or at an oil rig, or at a factory, into a single computer. And the way to manage that set of computers is much more contemporary. So you can roll out newer versions of the applications much more easily, without having to go to every specific computer with a USB stick in hand. So there are all these benefits that one can provide to the OT world without necessarily changing everything that the OT world actually cares about. So that's step one.
But the next step is, of course, if you can do that core replatforming, then that also enables newer stacks to run in the same OT contexts. For example, it enables an AI stack to run, so that now you can not just run your core SCADA applications or HMI applications, etc., but you can then also leverage all that data that you're bringing in from machines and feed it to AI models, for example. So now you can layer in new concepts that your core OT infrastructure would otherwise have made very difficult to layer in in the first place.
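To make the consolidation idea concrete, here is a toy sketch (the workload names, core counts, and capacities are purely illustrative, not Nutanix's actual sizing logic): the three or four physical boxes in the wireline truck become VM specs, and a simple capacity check confirms they, plus a new AI workload, fit on one edge server.

```python
def fits_on_host(vms, host_cpus=16, host_ram_gb=64):
    """Check whether a set of VM specs (cpu cores, RAM in GB) can be
    consolidated onto a single edge host with the given capacity."""
    need_cpu = sum(cpu for cpu, _ in vms)
    need_ram = sum(ram for _, ram in vms)
    return need_cpu <= host_cpus and need_ram <= host_ram_gb

# Three former physical boxes, now VM specs: (cores, RAM in GB)
ot_workloads = [(4, 8), (2, 4), (4, 16)]  # e.g. SCADA, HMI, historian

# The replatforming headroom also leaves room for a new AI stack
with_ai = ot_workloads + [(4, 16)]
```

In this sketch, the original OT workloads need 10 cores and 28 GB, and even with the added AI VM (14 cores, 44 GB) everything still fits on one 16-core, 64 GB host, which is the "layer in new concepts" step Satyam describes.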
Erik: I think this complexity of dealing with a ton of segregated hardware is at least a challenge that everybody in OT can understand, to an extent, as a problem needing to be solved. So let's go into the technology behind this in a bit more depth later. But now, Nutanix, you are probably one of the companies where, on the one hand, your technology has touched everybody in the developed world, let's say. I think you have around 12,000 customers. So somehow we're all being impacted by your solution.
But I think realistically, a lot of people don't know the brand because it's infrastructure. We could call it an infrastructure company, or a company augmenting the infrastructure. So it's something that's often back end to the environment. Talk to us a little bit about what Nutanix is today. And I think you've already mentioned that it's already much more than it was maybe when you first joined, so the scope of problems that you're addressing is expanding. But it would also be interesting to see where you see this expanding in the future. Where do you see the solutions that Nutanix is bringing to market having an impact, and how are people, for example, adopting AI at the edge, which is really kind of new to the world, but also scaling very quickly?
Satyam: So, just as past context, the early success for Nutanix came from specific workloads, as is the case for most startups in the early days. And that specific workload for Nutanix was VDI, Virtual Desktop Infrastructure. This is the concept of, instead of having physical machines and having to manage applications on physical machines, especially in enterprise contexts, think about a bank, for example, they might have 45,000 employees in brick-and-mortar bank branches.
And it's very hard to secure 45,000 different desktops. And so the idea was, if we can host all their desktops in the data center, and then the employees remote into those desktops, so that's really VDI. It's a much more scalable, secure way of provisioning enterprise-grade desktops to enterprise users. And since then, Nutanix expanded its use cases. Now, of course, HCI as a substrate supports any and all enterprise workloads, ranging from high-end databases and SAP workloads, all the way to VDI, or even some newer workloads, like, as you were saying, AI workloads, or containers, for example, or NoSQL databases, newer concepts around storing data.
So that's how Nutanix achieved most of its fame. Like you said, around 12,000, 13,000 customers as of this quarter. A lot of the number one brands in the world that you might think about in insurance, in food and beverage, in retail, in hospitality, etc., they all use Nutanix. Because these are enterprise companies, you don't quite hear about it, but it's almost everywhere.
But now the next step is, we think that computing is not necessarily going to be limited to a specific data center. There were many companies who had a specific opinion about where computing is going to exist. So there were companies which provided computing in the context of a private data center, and then, of course, there are many public cloud providers. And now more and more people in the industry are realizing that computing is going to be pervasive. It's going to happen in the public cloud. It's going to happen in the private cloud. And it might also happen on the edge, like at an oil rig, or a factory, or a hospital, or an airport, etc.
And so, right now, the mission we are on is to provide a software layer that provides a very uniform set of computing services across all these contexts. And of course, doing it in a control-plane sort of manner is slightly easier. Essentially creating a control plane that enables you to deploy applications, whether that application needs to be deployed at the edge, or in the public cloud, or in the private cloud, many people attempt that. But to create that unified layer of computing which also enables data to move between these clouds is much harder. And a unified plane of computing that enables security concepts to stretch across clouds is much harder. And so that's the mission we're on.
That's why we started looking into IoT, for example, as a workload, because it is an inherently massively distributed workload. If you think about a specific enterprise, they might have hundreds of factories, and all those factories need to be served. And, of course, those factories also need to talk to the core. So the computing fabric, or the security fabric, or the application lifecycle management fabric that you would think about in this context needs to, by definition, stretch across all the factories and the core. So it's, by definition, a hybrid workload. And that's central to our vision.
Erik: How do you demonstrate value? This is a challenge that I think a lot of companies in the industrial IoT space are experiencing. I guess your typical proposition is total cost of ownership, and if that's the core value proposition, I think building a business case is probably fairly straightforward. But if your value proposition is related to the simplicity or form factor of the compute environment in an operational situation, which maybe enables a new type of use case, or enables compute at the edge in a way that was just not possible previously, then you get into this messier, subjective discussion where maybe there's a lot of value there. But quantifying the value and proving it out over time can be difficult because there are uncontrolled variables. How do you address this?
Satyam: So there are two layers. In fact, we make the total cost of ownership argument at the infrastructure layer. So when it comes to saying, hey, what's the total cost of ownership of a solution that runs the infrastructure at your factory, or at an oil rig? There is a TCO argument there. But it only applies, or at least it is very effective, at the infrastructure layer. And in fact, we can prove it out through hyper-converged infrastructure, which is Nutanix's core product.
But then, in the business that I mostly do for Nutanix, which is IoT, the conversation is actually very different. Most of our conversations are at the application layer, because the infrastructure is just a means to an end at that point, in the context of a specific IoT scenario. So out there, we try to guide the customer to think about their IoT investment and their journey as a multi-phase journey.
Many a time you read a blog about some guy doing something with IoT, and it's a very aspirational use case. But the problem is there's a long amount of both technology and people-and-process mileage to cover before you go from zero to that kind of aspirational use case. So, we typically tell our end users to divide the journey into two phases. In phase one, they could potentially try to do IoT applications that are about quantifying existing industrial processes.
And so essentially, you just get data from stuff that is already running, except that stuff is currently invisible; those industrial processes are impossible to quantify today. So use the power of IoT to just measure what you already have. In other words, digitize what you have. If you're successful with step one, then in step two, you can think about brand new experiences, things that were previously not even possible.
So a good example for step one, digitizing what you already have, and I'm sure you've seen this in industrial IoT, is predictive maintenance. Telemetry data from machines already exists, and doing analytics on top of that data to predict whether a machine is going to fail or not is reasonably tractable.
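The phase-one analytics Satyam describes can be as simple as baselining a machine's telemetry and flagging departures from the baseline. Here is a minimal sketch (the window, threshold, and data are illustrative assumptions, not any vendor's actual model): a rolling mean over recent vibration readings, with a flag when a new reading jumps well above it.

```python
from collections import deque

def rolling_anomaly_flags(readings, window=5, threshold=1.5):
    """Flag each reading that exceeds `threshold` times the rolling
    mean of the previous `window` readings; earlier readings (with
    insufficient history) are never flagged."""
    history = deque(maxlen=window)
    flags = []
    for value in readings:
        if len(history) == window:
            baseline = sum(history) / window
            flags.append(value > threshold * baseline)
        else:
            flags.append(False)  # not enough history yet
        history.append(value)
    return flags

# Steady vibration around 1.0, then a spike to 3.0 gets flagged
flags = rolling_anomaly_flags([1.0] * 5 + [1.2, 3.0])
```

Real predictive maintenance would use learned models rather than a fixed threshold, but the shape of the pipeline, baseline the existing telemetry, then alert on deviation, is the same.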
But to invent a brand new experience, for example, we work with a Fortune 500 packaging company, where they want to go to a model where they inspect every package that comes out of their assembly line. And that's impossible for humans to do. So that's a brand new experience they wanted to invent. And that's a very advanced IoT use case. Obviously, it involves machine vision, very custom AI models to figure out what the machine is seeing, and so on.
But that's a phase two thing. And so they get confidence in IoT through phase one. And obviously, those use cases have good ROI. And then using that confidence, they can build much more advanced use cases as phase two.
Erik: Maybe to summarize, if I'm right, your approach is to first showcase some value through a phase one deployment, which is more readily quantified, and then work with your customers, and I guess the engagement then becomes a bit more strategic, to define and build out the phase two, the newer use cases.
Satyam: Exactly. Another key thing that we work on with our customers is, if you think about IoT projects, they're not like running a new web server, or deploying a new Exchange email server, or deploying a new database. If you think about how easy it is to deploy a database in the year 2019, contrast that with how difficult it is to deploy a new IoT use case. I mean, you don't even know where to start, because there are so many moving parts. There's nothing on a platter waiting for you saying, install this little software, and you will have IoT. There is no such thing.
And so, in our experience, we have seen that it is incredibly challenging, and people don't even know where to start. It's overwhelming. And then, of course, you have to go into a design phase which lasts six months or maybe a year. You don't even know what's going to come out at the other end of the tunnel. So, the other thing we have invested very heavily in is this concept of an IoT application library. And the idea is that across many different verticals, ranging from retail to smart cities to oil and energy to manufacturing, we have these "liquefied" applications that are very relevant to a specific vertical.
So for example, in the smart city vertical, we have liquefied many traffic management applications. So if you're just using a basic camera, or even a smartphone, at the click of a button you can actually deploy a pretty complicated application that uses AI, that sources data from your traffic cameras to do something very interesting: get insights out of the traffic that your city is already seeing.
And so that automatically gives people a lot of visual confidence: wow, I can see it now. Instead of talking about concepts like AI and containers and edge computing and so on, I can see it. I don't know all the 100,000 moving parts that are actually powering this application, but because I can see it, it is much more relatable. So we invested a lot in that application library concept. And we invested in partnerships, because this is not just a Nutanix problem. There are many vendors who have interesting IoT applications but don't have an interesting delivery mechanism that can make those applications available to a massive set of potential end users. So we are on our way to solving that problem.
Erik: And I guess this new class of problems that you're addressing involves a very different group of stakeholders. I imagine in a lot of these cases, the IT department becomes maybe an advisory function, but they're not necessarily making the decisions around what to prioritize. So you're probably engaging a lot more with operations. Who would be a typical decision maker, a typical project team, that you would be engaging with on one of these AI or IoT use cases?
Satyam: I will admit to you first we are discovering this as we go along as well. And so, number one, it looks like there is a different way to have this conversation depending on verticals and also depending on the personality of the end user. So for example, there are many end users who are actually service providers. So if we look at the oil and gas industry, that technology stack for oil and gas is actually done by a handful of companies. I don't want to name names here, just to be kind of sensitive to them. Practically, that conversation is very different, because you are selling to a service provider, who is then selling to hundreds or thousands of oil and gas companies. They are the technology provider.
Same thing in telcos, for example. There are many 5G IoT use cases that are obviously very, very new. But typically, you sell to the service provider, and the service provider is creating IoT applications that they sell to the end users. So there, it's literally selling to a product manager. There's a line of business. It's a product that this person is creating for their end users. And you just happen to be one component of that product.
But then if you flip over to a true end user, we work with a Fortune 500 food services company; they operate cafeterias around the world. And they are on their way to essentially make cashier-less checkouts, so essentially, they're [inaudible 29:13] but for restaurants. So there we worked with the Chief Digital Officer, for example, because this was a vision thing. It was a core digital transformation initiative at the CEO or CTO level, and they needed some technology providers to fulfill that vision. So we sold from the top down.
And in manufacturing, we've seen a mix of everything. Obviously, there is top-down stuff for new use cases. For example, the packaging company example that I mentioned, they were doing this AI-based product inspection. That was a very top-down thing. And so we sold from the CIO downward; it was a business transformation that they equated to savings in terms of the number of product failures they have in the field, or the money it takes to inspect products at scale. So it was a top-down, strategic thing.
But at the same time, there was a set of other use cases around telemetry data from their machines, which was a very bottom-up thing. The OT people needed that use case to be solved, just to make sure that their machines and factories are operating efficiently. And so, that was done from the bottom up. So it was a combination of two things: the very aspirational thing, or, to use my terminology, the phase two use case, and getting more out of existing business processes, the phase one use case. The phase one use case was solved bottom-up, and the phase two use case was actually done top-down.
Erik: I was chatting with somebody from a pharmaceutical company a few weeks ago, and they made the point that, for a pharmaceutical manufacturer, the challenge is not hiring 100 or a couple hundred data scientists to build solutions. The challenge is how to train their 50,000 employees globally on what is actually possible today, so that when they have a project, whether it's in production, or it's a new solution development, or it's R&D, people have in mind the set of solutions that they might engage with, and the set of partners that they might engage with to make this happen.
And I think, the way that I'm hearing you describe it, these are basically the people that have been running projects for the past 100 years at organizations. And now the IT domain is in their toolbox, and they have to be able to understand what tools are available and how they fit into the particular project that they're rolling out, whether it's CTO-down, or it's an engineer that just says, hey, I've spotted a problem here, and I think there might be a new way to do this more efficiently.
Satyam: So true. It's a very interesting negotiation problem. Many times, there are, for example, top-down initiatives. But the question is, well, for the person who actually spends their life five days a week, or seven days a week, at the factory, what's in it for them? And so it's a very interesting negotiation problem: an IoT use case had better have something for all the stakeholders, in which case it will be successful. We were talking about this smart city use case, for example, with a high-level decision maker for that city. And of course, they want to do this IoT use case, which is around safety. But they might then run into some privacy concerns: should we, for example, look at people's faces?
And if you think about it from the citizens' point of view, it is all net negative. I mean, if you just think about it from that point of view, it's, well, yeah, but why should I be bought in? And so we had this conversation about, look, you want to achieve safety. But then if we can, for example, give citizens something in return, then they will be bought in, because they are now seeing the exact same IoT system but for a very different use case. And so they get something out of it, and then, of course, the city gets something out of it, and now it's a great compromise.
So, OT is the same thing: for factory operators, there has to be something in it for them. And in fact, again, going back to that product inspection thing, the product inspection was a top-down initiative. But when we paired it with the telemetry data, the predictive maintenance use case, which was a bottom-up requirement, it was a great compromise. It was a great negotiated middle ground: now both parties benefit from whatever solution is going to be put in place. In fact, it's partly a social engineering problem. It's partly a negotiation problem.
Erik: Let's go a bit more into the technology. And I have in mind the traditional IoT stack, let's say edge, cloud, enterprise, this traditional stack. Maybe you can explain how you view the technology stack, and then where Nutanix would fit in at the different levels of the stack, so we can understand where it would interface with other technologies.
Satyam: Number one, clearly, this is a very crowded market. So I "bucketize" IoT players into three big categories. There are players who provide devices and connectivity: the connectivity, in whatever form, and all the devices, all the sensors that you need for your IoT infrastructure. We don't do that.
There are other vendors who provide tightly integrated IoT solutions. So, if you think about, for example, the baggage handling system at an airport, it does exactly one thing, and it does it very well. But it only does one thing. And that system is closed; it does only that one thing, and it doesn't care about anything else.
And then there is a third category, and it's an emerging category, admittedly: IoT systems that focus mostly on providing a platform where you can run many different IoT applications. And most of these IoT applications are around real-time data processing, so essentially getting insights out of raw data. And that's the part we play in: we think the enduring value in any IoT project is if you can actually get insights out of your data. Everything else is just a means to that end. Everything else is infrastructure that you had to set up to get that data in the first place.
But until you can get the insights, there is no ROI. So we focus on that. And to get those insights, a lot of these applications are AI applications. The runtime for these applications is either containers or functions, and so we provide those technologies as part of our stack. These applications also use things like GPUs to do AI, so you need to simplify what it takes, from an infrastructure point of view, to serve these AI applications.
And last but not least, we think that a lot of these applications, at least in the enterprise context, are going to be deployed at the edge, not in the cloud. Think of a factory, or an airport, or an oil and gas rig, or even a grocery store, and so on. And there are many reasons. I think Gartner has talked about it, and many other analysts have talked about it. There's the volume-of-data reason: the amount of data that is going to be produced at an oil rig, at a hospital, or at an airport is going to be overwhelming. It cannot be moved to the cloud in real time.
It could be for autonomy reasons: for example, an airport cannot stop functioning if its connection to the cloud is down. Or it could be for compliance reasons: there are various pieces of data that cannot be moved out of a hospital, or out of a forward operating base in a defense use case, and so on and so forth.
So for all these reasons, we think edge computing is going to be the core technology blueprint for enterprise IoT. And so we are heavily focused on that. We provide an edge computing stack that enables this new generation of AI-based applications. And then, because these edges are literally spread across the planet, you need a centralized way to manage this massively dispersed infrastructure, and so we provide that.
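The volume-of-data argument above can be illustrated with a toy edge pipeline (the numbers and field names are made up for illustration): raw sensor readings are processed locally, and only a compact summary plus any out-of-range events leave the site for the central cloud.

```python
def summarize_at_edge(readings, limit=100.0):
    """Reduce a batch of raw sensor readings to a compact summary
    plus any out-of-range events, so only insights leave the edge."""
    events = [(i, v) for i, v in enumerate(readings) if v > limit]
    return {
        "count": len(readings),
        "mean": sum(readings) / len(readings),
        "max": max(readings),
        "events": events,  # only the anomalies travel upstream
    }

# 10,000 raw readings collapse into one small payload for the cloud
raw = [20.0] * 9999 + [150.0]
payload = summarize_at_edge(raw)
```

The same autonomy argument also holds here: the function runs entirely on the edge node, so insight extraction keeps working even when the uplink is down and the payloads are queued.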
Erik: And this is then a mix of hardware, software, and services, is that right? And the software, could it run on your hardware or on third-party hardware?
Satyam: The core of what we bring to the table is software, and, to a small extent, services, although there we mostly like to partner with SIs and other [inaudible 39:05]. But it's software that we bring to the table, the edge computing software. Of course, we also help the customer with a good choice of hardware. So indeed, we bring a hardware-and-software solution, but the hardware is not provided by Nutanix; it is provided by one of our partners. It could be people like Supermicro, HPE, or Neousys, or Advantech, and so on.
Erik: Because I see this on your website, you're offering it basically as one package for simplicity from the customer perspective, but what you're providing there is the software and the service. On the software side, what are the main variables that are driving progress? So if you ask what differs between the solution today and the solution three years ago, or the solution today and the solution that you expect to see three years in the future, what are the big variables that you're orienting your R&D efforts around?
Satyam: So one of them is scale. I can't even tell you how many times I've seen a Raspberry Pi-based IoT project. But then the question is, that's a great prototype. Now, how do you roll out that particular prototype across 1,000 factories or 1,000 airports? And that's where people get stuck, because it's not just a "let me write an application" kind of problem. It's an operationalization problem.
So, a large part of our investment is in that planet-scale operations problem. For example, let's say you've got a new version of your IoT application. How do you roll it out, literally in a span of, say, 2 or 3 or 24 hours, across 100 edges? And so the more we can automate it, the more we can solve it at the platform level, the less the end user has to worry about it, and the less code needs to be written that has nothing to do with the business logic. It has to do with operations logic. So we want to hide all the operations complexity. So that's a heavy area of investment.
The second heavy area of investment for us is AI. It's easy to get enamored when you read a blog post about some AI model that recognizes cats and dogs in a picture. But if you think about enterprise IoT, every use case is a custom AI use case. You've got to create a custom AI model. For example, the food services end user that I talked about created a custom AI model to recognize each and every food item that they sell in all their cafeterias in France. And of course, overall, they have to do that across all their locations, and they have 55,000 different cafeterias across the world. And that AI model for the food items in France recognizes roughly 20,000 different food items. It's a custom model.
And so if you think about the state of AI today, it takes a lot of effort, both in terms of human expertise and in terms of technology, to do all of that. And so the more we can make it accessible to less expert people, the more successful we are going to be. People can create IoT applications much faster, it will be less costly, and so on. People will be more confident about creating those applications. So we are investing a lot in all the things we could potentially do to reduce the barrier to entry to creating good AI applications.
And last but not least, we are investing a lot in converging the edge with the cloud. We think all interesting enterprise IoT applications are going to have a little bit of a footprint on the edge and a little bit of a footprint in the cloud.
To illustrate my point, take a self-driving car. Obviously, there is AI logic on the car. But then the actual dataset used to train that AI model is in the cloud, and of course, the car is transmitting anomalies to the cloud so that it can use that data to train the model and make it more accurate.
So clearly, this is an application that spans the edge and the cloud. And a lot of the mechanics of what it takes to actually make the application span the edge and the cloud are today literally done by developers. And so we want to provide all those mechanics at the system level, so that people have to write less code to create these massively complicated AI applications or massively complicated IoT applications that span multiple sites. So that's a third area of investment for us.
Erik: I think this is really what we see right now: a lot of companies have deployed POCs that they find interesting, and then they hit a bottleneck figuring out how to scale them up. And a lot of companies are, again, in these POCs finding things that look interesting, but then confronting the complexity of actually developing them to an enterprise grade of quality and reliability and so forth, and they simply don't have the resources.
One of the ways I think about this, to an extent, is that large multinational companies are traditionally comfortable looking at environments where there's a handful of opportunities and they can make bets, say, put $5 million into each of these with a reasonable probability of having some result, and then scale up three of those. That's something they're comfortable with. But now we're looking at environments where you have 100 potential use cases that could all have some value, and maybe many of them are not going to make sense.
You can't simply make a small number of bets. You need to make a large number of bets, test them out, see what works. But that means there's a lot of failure along the way. There's a lot of cost, and companies aren't comfortable with that. So we certainly need this generation of solutions that allows you to make these bets in a very quick, cost-effective way before companies are really comfortable taking that leap.
Satyam: In fact, sometimes when I talk to bigger audiences, I talk about these two concepts. In IT and OT, we have gotten used to systems for success. So any application we deploy, we deploy it with an expectation to be successful. That's true even for IT applications. If you're going to deploy an email server in your data center, you're doing that project with an expectation to be successful.
In this new era, we have to allocate enough budget for fast failures. I mean, unless you are actually failing, you're not doing enough, you're not experimenting enough. And I think that is not necessarily how organizations work. They don't budget for failures. It's seen as a bad thing. But I think now, in this new world, this is going to be the stepping stone to success. So we've got to change that way of thinking. You can't only budget for systems for success. You've got to budget for fast failures.
Erik: Satyam, I want to be conscious of your time. But would you have a few more minutes to walk through one or two use cases, or case studies?
Satyam: For the manufacturing case study, I'll talk about just the product requirement and then the outcomes. As I was loosely alluding to earlier, this is a Fortune 500 packaging company. One of their most lucrative businesses is meat packing. They actually sell it as a service. And so they provide the equipment to pack the meat, they provide the packaging expertise, and they handle all the logistics. It's very lucrative.
But the downside is that if there is an error, and if there's a recall, it really, really eats into margins. And so now, if you work back from that problem: how can you prevent meat recalls as much as possible? Today, it is very hard to guarantee that there won't be any errors, because those meat packets are inspected by humans. And they can only inspect so many packets, sampling from a batch of meat packets, so they will do one in 1,000, etc.
The ideal thing would be to inspect each and every package. And so that was the use case we addressed with this end user; obviously, it was not possible for humans to inspect every package. And so we deployed an edge computing stack at their factories, and a high-speed industrial camera would capture the image of every package that comes out of the machine. And they would use a custom AI model to figure out, for example, if there were any air bubbles in the package. Or they would use special dyes to figure out if there's any contamination; they would use the same picture to figure out if the dye is showing, and so on.
We helped them operationalize that use case. So, problem number one was to actually have a modern computing stack at the factory, something that could run their application, which was containerized. And that application required GPUs, etc. So we provided that modern computing stack. And number two, we gave them a very good way to manage all this stuff. And so they could log into the central portal that we provide, they could see all their factories and their applications spread out across all those factories, and they could lifecycle-manage their applications all from a central place.
In terms of ROI, it was a problem that was humanly impossible to solve. And now, just by investing a few thousand or tens of thousands of dollars per factory, they can inspect each and every meat package. And so they compared the cost of setting up that equipment to the cost of servicing a recall, for example. And the economics were very clear for them, the benefit was very clear for them.
Retail is a similarly very interesting use case: this is a food services company. They operate 55,000 cafeterias across the world. They wanted to provide a very modern checkout experience to their end users. A lot of enterprises use them as their cafeteria provider. And in those enterprise cafeterias, they obviously had the traditional checkout experience, where you take your food tray and you stop by a cashier, who tells you how much you've got to pay. And so they wanted to provide an Amazon Go-like experience.
And so when we first talked to them, the interesting thing from my point of view was that I thought they were trying to save costs, trying to save on the number of cashiers that they've got to deploy. But it turns out that it had nothing to do with saving costs. It had to do with providing this fantastic experience that their users will actually buy. And so, in fact, they created a brand new product, an automated kiosk, and they sell it at a premium. People aspire to that product because people want that kind of experience at their cafeterias; they want something to write home about. And that was very eye-opening for me.
Many times, we get stuck on saving money. But many times there is real revenue to be had by creating these new IoT experiences. Again, the problem was twofold. One was to give them a platform that they can use to operate this new IoT experience across 55,000 potential cafeterias across the world, so that operationalization problem was key for them. They had worked with a different partner to actually create the AI model to recognize all the food items, etc. So they had that part under control, but they didn't quite know how to scale it out across thousands and thousands of cafeterias. So that's one problem we solved for them.
And the other problem we solved for them is that, with time, they have some other use cases in mind in the same cafeteria context. So they were not just looking for a point solution that does the automated checkout counter for them; they were also looking for a computing platform that they can then leverage to do more and more use cases. They didn't want to invest from scratch again to do use case number two, and three, and four. So we were able to play out the platform thesis.
Erik: And in both of these, I'm sure you had other partners, maybe system integrators or others, that were involved. But in both cases, were you involved more or less from the beginning, the ideation stage, or where did you get involved in these two?
Satyam: So in the manufacturing situation, we were involved from the beginning. In fact, it was a strategic initiative that the CIO wanted to drive. And as part of this, we've become good friends. In the retail space, the CDO, the Chief Digital Officer, clearly had this vision. He knew exactly what he wanted to do, and he just wanted somebody to do it and get it done for him.
And again, I find this question to be interesting. Because from a business point of view, I love for my customers to come from both these areas. We don't want to over-index on doing very, very early stage use cases, because honestly speaking, we also need to see some revenue to sustain our business. And so a good mix of well-formed use cases and very early stage use cases is the mix we want to see. One sustains us as a [inaudible 54:35] business, and the other one keeps it interesting for us. We get to work on these aha things that are ill-defined, and we get to create those things from scratch. So, both of them are important.
Erik: Satyam, three quick wrap-up questions. Feel free to answer these with one word, or if you want to expound on them a bit, that's fine as well. But let's say a couple of quick-fire questions here. One is, what is one technology that's not yet widely adopted that you think has the potential to disrupt the existing industrial IoT tech stacks within a five-year time period?
Satyam: AI. I know it sounds like a broken record. Everybody sees the potential and the effort, but people haven't quite figured out how to make it very accessible. So I guess I would qualify my answer by saying accessible AI. That's really what we're looking for, or what I'm looking for.
Erik: What is one under-the-radar industrial IoT company, let's say maybe a seed or A round company, that most people haven't heard of, but that you're keeping track of and you think might have the potential to really develop into something interesting?
Satyam: I'm keeping my eyes on this company called Wireless Glue. They are a protocol gateway; I'm obviously highly simplifying their offering. The reason it's very interesting to me is that if you think about protocol gateways, which are a core, fundamental building block in manufacturing, in the OT context, protocol gateways haven't changed for ages. And so I think that space is ripe for disruption. There is a modernization that needs to happen at that layer so that we can much better integrate with the IT part of the IoT wave that is going to hit manufacturing. That modernization is only going to happen if we have newer options to the traditional offerings in that space. So, that's why I'm interested and excited about it.
Erik: And then the last one: in the consumer internet, we've seen several companies — Facebook, Baidu, Alibaba, Tencent, Google, Amazon — emerge over the past three decades and turn into $100 billion-plus valuation companies. Do you see a similar phenomenon happening in the industrial space? Are we going to see some of the companies that have emerged over the past several years develop into these $100 billion-plus giants? Or do you think it's going to be the incumbents, the Siemens, GE, or even the Amazons and Alibabas, who will figure this space out first, and that the new generation will maybe mature into nice $10 billion companies, but not necessarily into the new market leaders?
Satyam: I'm optimistic, especially since I come from a startup background myself. I think, indeed, we will see a new name emerge. And there's a reason I think so, a reason I'm optimistic about that. If you look at the landscape, people need to have respect for this space. And with some of the incumbents who have good technology stacks, I don't see enough respect for what an end user actually has, for all the problems they have that come in the way of adopting a new technology stack that somebody wants to sell to them.
So I think there is potential for disruption by a company that not only is very good at technology, but also has the respect to see that, look, it's not just about throwing that technology over the wall and expecting the end user to adopt it. It is about figuring out all the design problems along the way that make it very hard for that end user to adopt it in the first place, and solving all those design problems.
There is going to be a company that will see that, that's going to have the empathy to see that and solve it, and that's going to be the winner. I hope, obviously, in my very biased opinion, Nutanix is [inaudible 59:10]. But I don't see a company yet that has it all, that has the software DNA that is needed and the respect that is needed for what exists, and so on.
Erik: So Satyam, I really appreciate you taking the time to talk to us today. If our listeners want to reach out to you or they want to learn more about Nutanix, what is generally the best way for them to do that?
Satyam: The best way to do it is to email me at email@example.com. I promise you, if they mention your name, I will respond to them personally. And then of course, I'm on Twitter as well; it's @Satyamvagnani, my first name and last name. I'd love to have smaller conversations on Twitter.
Erik: Well, we'll get those in the show notes. Satyam, really thank you so much for taking the time today. Great insight, I really enjoyed the conversation.
Satyam: Erik, likewise thanks a lot again for inviting me, great chatting with you.
Erik: Thanks for tuning in to another edition of the Industrial IoT Spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.