Welcome to the Industrial IoT Spotlight, your number one spot for insight from industrial IoT thought leaders who are transforming businesses today with your host, Erik Walenza.
Erik: Drew is the founder and CEO of Alluvium. He's also a very active guy that's involved in a number of other companies. We'll focus first on going a bit into Drew's background, some of the other projects he's involved in and then have a deeper look at Alluvium from the business perspective.
For the second part of the podcast, we will be diving more into the technology, both Alluvium technology but also technology more generally, so looking at how they differentiate from other companies in the market. And then we'll end in the third part with some deep dives into specific case studies. Drew, thanks so much for taking the time to talk with us today.
Drew: Erik, it's great to be here. Thanks for having me.
Erik: And if we can start maybe at the beginning: how did you actually first connect with this company? What were the discussions before you really got into the deployment, with them feeling you out as a young company? And then, if you did a pilot with them, what did that phase look like in terms of planning it out and gaining trust? I think it's interesting for a lot of companies, both large and small, to understand how this actually unfolds and how the deployment actually unfolds.
Drew: So the best example I can give is from one of our very first pilot customers, a large multinational oil and gas company. They fit well into the mold that I was speaking about before, where we got connected with them through their office of innovation. We actually ended up getting connected to them through one of our investors. And so, we got connected with the woman who was heading up their innovation team on the refinery R&D side, thinking about new approaches to doing analysis on the downstream oil and gas side.
But at the time, it was a process really of relationship and trust building at the beginning. We actually were able to go visit their facility and take a tour of the refineries. And so we actually got to directly interact with the folks who might be using the software. We got to see literally firsthand where the data gets generated, what system they were using at the time, and learn about what their acute problems were.
And so this project started before we even had Primer as a product. But it helped us really start to formulate some of the thinking we had in the early days about how to build Primer. And so from there, we kind of zeroed in on a specific problem around predictive maintenance, which, unsurprisingly, is a typical use case that we hear a lot about. And so the refiner handed over some data from a particular failure that had happened in one of their real-life refining processes. And this was a really important lesson learned for us, and one of the things that we still do.
What's so important in the early part of an interaction with a new customer for us, and I suspect this is true for a lot of analytics companies working in the industrial space, is that the reality is no one's going to really think that your tool is useful until you've shown them what it can do with their data. Understandably, there's not a long history of success and trust for analytics tools inside the industrial space. We can think of the early-90s work on these big decision tree and decision support systems that were being built but ultimately were unsuccessful. And then there was this AI winter that happened. And these organizations are old and they have long histories.
And so you walk in the door thinking that you have this brand new hot technology that's going to solve all their problems. That's a recipe for disaster. These consumers are way too sophisticated, they see right through that stuff. And so what we found is we said, listen, here's what we do. We do this stability analysis. We build products that allow you to traverse this information quickly.
So with this particular customer, they said, okay, well, here's data from a particularly bad event, in fact, an event that was a fire at one of our refineries that really has high salience with the entire organization; from the CEO on down, everybody knows about this. So if you can show us how your tool would have been able to help us better understand or even prevent this from happening, then we think we have something. And so we did exactly that.
In fact, again, this was before Primer was even a standalone tool. We built a custom tool, which ultimately helped us build Primer as a general tool. And we had a lot of success that way. We were able to come back to the customer and say, listen, here's this data, here's how it flows through our system, here's what our system is able to identify. And in fact, here are the specific sensors and the specific parts of the asset that we think were the leading indicators of this particular failure.
And when we did that briefing to the senior leaders at this organization, I'll never forget it, because we were sitting in this conference room going over the results, and the folks around the table had their laptops open and were sort of feverishly typing. What they were doing was looking up the internal analysis and audit that had happened at this company for this event, and trying to cross-reference it with what we were showing them.
And two great things happened. One, we were actually able to show some of the same findings that they had. So we were actually able to identify the root cause in the same way that they did. But we were also able to show them some things that they hadn't seen. We were actually able to show them some relationships in certain sensors, and how they were correlated, that were not present in their own analysis. And in fact, we were able to do that with only a handful of clicks. So that was really the first time we said, okay, we think we really have something here; this kind of modality of interaction with the product is really powerful.
Erik: What was the timeline, and maybe we can break this down into the timeline to actually build the trust and kick off? And then also, once they gave you the data, how long did it take to process that and come back with a result?
Drew: So, the timeline to building trust, that was the much longer period. Just getting on an organization's schedule, getting through security to actually visit a refinery operation, having the tour, sitting down, meeting, discussing: that whole part took several weeks, probably months, just to get from initial introduction to the meeting to identifying a particular use case, and then building a proposal around that.
Once we had the data, that was sometimes the easy part. We already had the technology in place. Of course, we had to do some additional development because we were building this custom tool for them to be able to do the exploration, but even that only took a few weeks to get done. So all told, that interaction was, generously, about a six-month engagement where we were working with them on a proof of concept really from start to finish. But as you rightly point out, the largest part of it was really, okay, how do we get these folks to actually trust us with their data and be able to show them something interesting?
Erik: And if we looked at the product today, and somebody who was already an existing client had a similar issue and they fed the same data into the system as it exists today, how long would it take them now to get back a result that they can act on?
Drew: Now you're talking on the order of minutes, really, and that's the idea for us. Practically, that same process from initial introduction to trial and using the product can take a few weeks to get settled. But once a customer has access to the system, they have their accounts and they can start loading data into it. Again, the whole idea is that it's supposed to be a smart tool that takes your data, does that kind of 80% work of identifying what's novel in it and where the important things are, and then allows you to explore it.
And so, part of the answer sort of depends on the size of the data set. If we have a really, really big, rich data set, well, then the system will process that for slightly longer. But typically, a dataset of a few gigs will only take a couple of minutes to process, and then a customer can go right through and do it. And sometimes that analytical process can take a while. If it's a really big data set and you're looking at a long time series, well, then you might want to be a little bit more deliberate in how you go through what the system is telling you.
But for that more repeated process, like the plant manager who's putting data in every day, those data files are typically much smaller. And you may only have a handful of alerts to look at and go through. That can take you five minutes.
Erik: How does the pricing work? We don't have to look at that particular client. But I guess you have one flagship product; is it usage-based pricing, or is it a flat monthly or subscription-based price?
Drew: Yeah, we do a flat monthly price. Again, it's a SaaS-based product, so we do a subscription to the product, and we typically like to do annual subscriptions. And we do it based on teams. We found that that was a useful grouping for us to work with. Because the organizations that we work with are typically large, what's great for us is if we can have an initial pilot that converts to a paid subscription, and then hopefully one team will tell another team how great the system is. And then they'll want to do a pilot, they'll want to do a subscription, and we can grow within that.
So, we say teams of a maximum of 12 users per team; again, that's a number that has emerged from our work with different companies in terms of sizing, no hard and fast rules there. And the one thing that we never want to do is price against the data. We never want to discourage a customer from putting data in the system. That obviously hurts them. But more importantly, it hurts us, because the system gets smarter with every turn of the crank, so to speak. And so we never want to throttle folks based on data. We never want to charge folks based on how much data they're putting in. For us, we say, if a customer is putting data in the system every day, that means that they're having a great experience, and we should really nurture that, figure out a way to make that customer's experience even better, and allow them to put more data in.
Erik: So there's not a significant operating cost for managing that data? I guess you're paying the hosting fees for that?
Drew: In some sense, that would be a great problem to have: a customer using the system so much that our pricing structure ended up not being economically viable for us because of the hosting cost. Then, obviously, we would figure out a way to share that cost with the customer.
Erik: Can you give us kind of a ballpark for what this would look like, if you do a paid pilot, but then also for a deployment?
Drew: When we first meet a new customer, again, we focus a lot on that trust-building exercise. So the first thing that we do is a 30-day free-of-charge trial. That includes an enterprise license for Primer, exactly the same product that you would have if you were paying for the subscription service: a team of up to 12 people with unlimited data and access to our data science team. And we'll do a trial where we like to start by saying, okay, do you have an incident or a dataset that you find to be particularly challenging, or an incident in your operation that was particularly costly or salient? Let's start there and let's see what the product can do for you.
And so that trial will get kicked off. We actually have a really rigorous trial process where we do a kickoff, we get everybody on the system, and we do some light training with users so they know how to use the system, how to get data into it, and a little bit about how they might interpret the initial findings that we're seeing with that data set. And then we do regular check-ins. We say we want customers to do a minimum of four different analyses over the course of that trial, really to get them to at least a little bit of competency with how the product works.
And then once they're in, we do a halfway check to make sure that the product is doing what they want it to do. We always want to hear from customers about which features were best for them and which features they wish the product could do. And we really use that information to help prioritize our own development. And then we do a wrap-up of the trial at the end of those 30 days. The wrap-up really focuses on three things. One is, here's a summary of all the things that you did and the results. We actually use our data science team to compile that. They'll go through the data analyses, both internally with our own team and working directly with the customer's team, to show exactly what the tool was able to do and where it was able to provide value.
We also do specific metrics. So we look at things like how much data they put in, how many alerts were generated, and how many of those alerts were marked as being important; we view that as a great metric for us. If the majority of the alerts that we're showing are high value to the customer, that means that we're doing our job well.
And then the third part is a user survey. We obviously want to do something similar to a net promoter score for the product: were the folks who were using it finding value in it? It's really more of a qualitative analysis. And then we present that back to the team, with the understanding that now they get to make a purchasing decision. And so immediately after that meeting, the expectation is that a customer will make a purchasing decision. And we've been fortunate that we've had some success there and customers have been really happy with the results.
Erik: Are you able to give us a ballpark for what it would look like for an annual subscription?
Drew: Sure. We try to do 10k a month for the subscription, and as I said, we like to do an annual subscription. And it's simple. It's just a subscription product, so we love to work with customers. It's enterprise sales, and every customer is a little different. But that's the ballpark where we like to start customers.
Erik: You said that there was one other case that you wanted to walk through. What was that?
Drew: Sure. So this is one that we'd like to talk about from more recent history, working with Primer specifically. We were working with a power authority up in Anchorage, Alaska. Power generation, so power plants, is one of the smaller verticals that we work in. It's a nice story. We were working with the SCADA analysts up there. So again, this is that use case where it's not the plant manager, but the individual on the team who's charged with analyzing all the plant data.
And so he was putting data into Primer. And the question he was wondering about was why these particular breakers were having issues and why the power on the grid was fluctuating in this particularly odd way. In power grid markets, one of the things that's unique about a rural system like they have in Alaska is that it's very much a shared resource among many different providers. And so if there are gaps in power on the grid, then other providers can supplement that, and then they sort of have this economic clearing that happens at the end so everybody's made whole as a function of that, which would be different from, say, a major metropolitan area where there's much less sharing of infrastructure.
What was happening is that they were noticing some fluctuations in power on the grid, and these breakers were tripping, and so they were doing this analysis. They were seeing these instabilities in the data in Primer and then drilling down and specifically identifying where those fluctuations were coming from. And lo and behold, what they found is that not this particular provider, but one of those other providers on the grid, was actually not supplementing back to the grid in the way that they were supposed to. And so they were having fluctuations due to these breaker issues.
These other providers were supposed to provide energy back to the grid, but they weren't doing it. And so they were quickly able to identify who that provider was and then go and reconcile that issue with them directly. And they were able to do all of that, really, with a few clicks of a mouse, where the process that was in place prior would have been an extremely arduous Excel-file-by-Excel-file process of trying to identify it. Instead, they were able to just drop that data in and see it almost immediately.
Erik: So how are you finding traction so far now that you have a standard product on the market?
Drew: So, really, the beginning of 2018 is when we established the product in the market in earnest. And the traction has been great. The process of enterprise sales is obviously a long one. But as we've moved through the year and really built a lot of rigor into how we work with our customers, as I was explaining, that has helped us a lot. I think if we can find customers that have this specific problem of wanting to understand their complex systems at a high level, so that they can more quickly get through that data and identify problems, and a customer is attracted to that and sees the value in it, well, getting them to a trial is, as you said, sort of a no-brainer. If they have access to the data and that data can be put into our system, well, then we can get them up and running and support them on that.
And as we're moving through now, we're actually seeing a good number, I'd say most, of our customers moving toward making that purchasing choice. And so for us, the challenge, of course, as always with technology companies, is how do you support that as it scales. We've tried to be really thoughtful about that. And one of the things that we've really tried to avoid is having a system that requires a lot of customization per customer.
And so Primer is a general tool: you put data in it, and if it works for your use case, it works. And if it doesn't, it doesn't. I think the process for us going forward this year will be to zero in on those particular customers in particular verticals that have a specific use case, and then work to scale those out and not get distracted by a bunch of other potentially interesting but orthogonal industries and use cases.
Erik: So Drew, if somebody wanted to get in touch with you, as a customer, a partner, or an investor, what would be the best way to reach out to you?
Drew: So the easiest thing to do is go to our website, alluvium.io. If you're interested as a customer, there's a request-demo button right there on the page; you can look around, and that will actually go right to our business development team and then right to me as well. If you want to reach out to me directly, I'm easy to get to by email. My email is just firstname.lastname@example.org and I'd love to talk with you.
And then also, if you're listening to this and it sounds like a project that you might want to contribute to as a team member, we're always looking to hire smart folks as software engineers, or folks who have deep industry experience. You can just go to alluvium.io/careers and look at what the open jobs are. And if something looks like a fit, we'd really love to hear from you.
Erik: Well, Drew, for me, this was super interesting. I really appreciate you walking through this in such a structured manner; it really helped me understand better what you're doing. So, thanks for taking the time, very much appreciated.
Drew: Well, you're quite welcome. And I appreciate you making the time and having a great conversation, Erik.
Erik: Thanks for tuning in to another edition of the industrial IoT spotlight. Don't forget to follow us on Twitter at IotoneHQ, and to check out our database of case studies on IoTONE.com. If you have unique insight or a project deployment story to share, we'd love to feature you on a future edition. Write us at erik.walenza@IoTone.com.