In this episode, we talked with Rob Hirschfeld, CEO and founder of RackN. RackN connects the tools people use to manage infrastructure to workflow pipelines through seamless automation across IT systems, platforms, and applications.
In this episode, we discussed the challenges of operating on-premises data centers and the need to automate processes as companies adopt edge computing. We also explored 25 years of data center evolution: what has improved, and where we still need progress.
- What are the challenges nowadays for cloud infrastructure?
- How have IoT and data centers evolved over the past 25 years?
- How do companies adapt to edge computing?
Erik: Rob. Welcome to the podcast today.
Rob: Erik, thank you for having me. I'm excited about the conversation.
Erik: Yeah, I'm looking forward to it, too. Interesting to know that you're also a podcaster. So I suppose this will be a very smooth one. Maybe before we jump into the conversation, I'd love to hear a couple of thoughts. What does your podcast cover? So it's called Cloud 2023. Oh, 2030.
Rob: Cloud2030. It was designed as a 10-year-out, forward-looking roundtable discussion, and we record them. We started it during COVID, because we were so desperate for what we'd lost with conferencing. I think of it as a hallway track, an ongoing hallway track. So we sit down. We pick a topic. We have an agenda, and then we talk about where things are going. We have a strategy session where we'll talk about AI, big data, how government intersects with the cloud environment, big topics like that, and then bring it back to how things are going. It's non-commercial, so it's not a vendor thing. It's just us thinking about how things are going to go. Then we do a separate one for DevOps, where we talk about big topics in DevOps and how those are going to get broken down.
It's a lot of fun. It's an open format, so it's not very driven by a guest pitching a product or a guest with a specific expertise. We do that sometimes. It's much more around a topic that we all want to talk about. It's been amazing. We've been running it for three years. We have the luxury of going back and doing sequential topics. When we run out of time, we just go back and we dig deeper and deeper, and deeper. I've never seen anything else like it, and it's a lot of fun to run.
Erik: Yeah, well, that's the luxury of this topic, right? It's constantly evolving. So you can always look back and say, okay, we talked about this 12 months ago. What's different today? No end in sight here on the evolution.
Rob: No end in sight. Definitely not.
Erik: Great. Well, I think that would be maybe a good frame for our conversation today, which is, basically, what should we be thinking about in this context? But then also, obviously, focusing a bit more on IoT. I'm sure, typically, you might be covering a broader scope. So we can narrow down a little bit, which is always interesting to have a bit of a definition there. But before we go there, Rob, I would love to understand a bit more about yourself. I think you have a fascinating background. I mean, you're — let's see. When did you start your first company? Was it back in 1999? Is that right?
Rob: Yeah, we literally built one of the very first cloud infrastructure companies. In the process of doing that, we were the first people anywhere to install the VMware ESX beta outside of the halls of VMware, and we built the first cloud and started building cloud infrastructure back in 2000. So we've been doing cloud and cloud infrastructure for a long time from an operations perspective. It's remarkable how hard it is. In some ways, it's not getting easier. We keep making this stuff harder, not easier.
Erik: Yeah, that's right. So it's kind of a race. There's been a lot of things that you used to do. They have been very heavily automated by now. But then, of course, there's always new complexity also coming to the tech stack.
Rob: Yeah, I think the challenge that we have for the cloud infrastructure work is that people are used to the cloud. Somebody is running the infrastructure for you. It's very API-driven. Not necessarily simpler, but it's API-driven at least. We haven't put that same effort into things that people have to run themselves. The audience here are people who are going to have to run their own infrastructure. If you're running IoT, or IIoT even, you might still connect to something in the cloud, but you've got a device. You're running that device. You actually have concerns about the networking that is attached to all the pieces and parts. That infrastructure work is still your concern. You can't get rid of it. That, I don't think we've done a lot in the industry to make easier.
As a matter of fact, cloud is a response to our lack of progress there. And so that was frustrating. Before founding RackN, I did a stint at Dell for many years. That was part of the impetus for starting RackN: just how hard it was to have successful data center operations at customer sites. RackN was formed to try to make that process better. It's just really, really painful.
Erik: Yeah, and you'll know this better than me. I've seen numbers floating around that the system integration cost is something like 30%, 40% of the total cost. I mean, basically, you buy your hardware. You buy your technology, and then you have this very heavy cost associated with basically figuring out how to operate it and so forth. I don't know if that's budged much in the past decade or so. It still seems pretty heavy.
Rob: It hasn't budged. I don't think it's budged at all. No, actually, the origin story for me starting my first company was the fact that I was writing software for people and then doing the installs and things like that. I was very young at the time, and so I would budget how much time it took to write the software. I was pretty accurate about that. But when I went to install it, I found I was spending exactly the same amount of time as I spent writing the software, actually getting it running in the environments where the software was. That's what led me to look at cloud in the first place. It's like, wait a second. If we can use this with a browser, I don't have to run around and install software on everybody's desktops which is taking all this time. It's incredibly hard to do it.
For us, we see this rise of infrastructure as code as the first sort of break in the wall for how we create repeatable, reusable automation and process that we can use from customer site to customer site. Because what you're describing is, it's not that it's that difficult to do this work. Part of the problem is that we don't have a lot of repetition, where we learn from doing it once and then improve it for the next time. We might write stuff down and have a cheat sheet to make things easier. But the cycle that we're trying to break at RackN is this idea that every time I go to a new customer, a new site, I'm reinventing all of the stuff that I had before. Then if I fix something at the next customer, that previous customer never gets the benefit of it. This is the challenge. We have to stop doing IT and operations, IoT and operations, one at a time. One-offs. We have to find the patterns. We have to get better reusability out of this. Otherwise, it is a downward spiral.
Erik: Yeah, I know. That's interesting. It also seems to happen quite often that these are layered on top of each other as CIO after CIO makes investments. But then, you have the legacy technology. You're never going to replace that entirely, right? So you're just building on top of it, hacking these integrations together and so forth. And so each company ends up having its own monster that's been built up over 10, 20 years. Yeah, it's much harder to standardize that than it is for Azure to standardize cloud services, right?
Rob: Correct, yeah. The reality is that we talk about complexity a lot in the industry, but not in very functional ways. I see and I hear people say they're all afraid of complexity. They're worried about complexity. They assume that the only antidote to complexity is standardization or simplifying. I think your point is very valid. One person's standardization is the next person's legacy infrastructure. You're just going to get layers of standardization. That's what you end up with. My standard isn't the same as the next person's standard or the next person's standard. So when we look at complexity, what we've done is not see it as a problem with a specific antidote. It's not like, "Oh, I'm just going to simplify things and remove complexity." I'm saying you have to manage and design for it, too. That, to me, is the first step here, the first sort of aha moment I had as we were building.
RackN specializes in physical infrastructure. We automate all types of infrastructure. But we've gotten very good because we embrace this complexity as a normal problem of being able to do a good job managing things that have a lot of exceptions and rules. You have to think about how very different environments impact the performance, all that stuff. It's complex, but it's needful complexity. So instead of walking in and saying, "We're going to wipe this board clean and pretend like we don't have any of these other generations or any of these other requirements," you go the opposite direction and say, "Okay. A lot of these things are probably required based on the way the things were built. Let's see if we can accept that." Then automate around it, or automate in a defensive way. It takes more time, but it creates much more repeatable automation, much more reusable automation. Then it also, in my mind, is respectful of what was done before.
It really frustrates me when we have IT professionals who basically treat the last generation, the people who just left the job, with contempt. They show up, and all of a sudden they're thinking, "Oh, everything the person before me did was wrong and bad. How could they have been so short-sighted?" We do that. It's a normal human tendency. But we need to acknowledge that that stuff is there, and it's working. Let's see if we can keep it working.
Erik: Yeah, absolutely. That seems like the right philosophy. You also have to acknowledge that the guy that comes after you is going to have the same response. You want to have an architecture in place where they're going to be able to value the work that you're doing and fit it into the greater whole as it continues to evolve.
Rob, let me take just one step back and ask you a bit about the Digital Rebar project, because it's always interesting. You set that up in 2011. Then you founded RackN in 2014. So it's always interesting to me when somebody basically sets up a, I guess this is like a nonprofit or an organization oriented around the topic before they set their company up. What was the logic there?
Rob: The history of that traces back to something we were doing inside of Dell. Digital Rebar today, boy, it's another two generations past what we founded the company on, which was, in itself, another generation past what we did at Dell. So when I was at Dell — actually, most of the founding team at RackN was at Dell together. This was in the early OpenStack days, the 2009, 2010 era. We built an installer for OpenStack.
Inside of Dell, we were helping hyperscalers build the next generations of their data centers. We were trying to use things like Hadoop and OpenStack. If your audience isn't familiar with those projects, Hadoop is a big data analytics platform, and OpenStack is a virtualization manager. Both are designed for big data centers: multi-system automation, multi-layer compute. Some of that stuff has been morphing into smaller footprints, but it's always operationally challenging. What we found was, Dell could sell them servers. We could take software from the community. But actually running that software in a reliable way on that equipment was really harsh and, worse, inconsistent. This comes back to my career theme. If you think about it, I have software and I have hardware. But if every time I set that up I create a new, unique system, that's really bad to me. I've worked really hard to fight that. So we built a way to install that software on the systems consistently.
The original version of that was called Crowbar, named after the first tool in Half-Life, the one you start the game with. It's the only thing you have: the crowbar. That's how you start the whole system up. Iterating through that process, which started as an open-source project at Dell and then became pretty widely known in the OpenStack communities as we worked to improve it and hit our reusability targets, led to Digital Rebar, which is very different. The product we have for Digital Rebar today has some API overlap with it, but it's almost completely different from that perspective.
One of the things that's worth noting about this is that the open-source piece is really important, in that we want people to reuse their automation. The purpose here is, how do you make it so that customer to customer, person to person, operations site to operations site, people are actually able to share and reuse automation? This is a theme we've been talking about so far. If I automate something and create processes, how do I make sure that those processes are durable over time, so I can keep adding to them instead of having to rewrite them? But also site to site, if I have multiple sites, or customer to customer, or community member to community member. If we're constantly rewriting stuff, we're not sharing. We're not working together. We're toiling away at the same work. That, to me, has always been one of the biggest challenges in IT, going back to the first days of my career.
Erik: Okay. Fascinating. Help me understand this. If I'm a customer of RackN, I deploy your platform to help run my on-premises data center. Then through the platform, I have access to tools that you've developed, but also workflows that maybe previous customers or previous community members have developed, that I can then immediately begin leveraging so that I don't have to start building things from scratch. Is that right?
Rob: That's exactly the primary thing that we do. The open-source pieces of Digital Rebar have evolved to actually be all of that automation and community content, because that's what we want people to reuse. The parts that RackN sells are the platform pieces that give you a place to run all of that. What we've seen from this perspective is that customers really don't need to understand how data is stored or how the engine works. But in order to do this sharing and reuse, they really, really need to see how the automation lays down all the other pieces. Because if they can't see that, they can't change it. They can't inform it. They can't contribute things back. But when a new customer shows up, they will install all of this open content, what we sometimes call universal workflow. That's driving what we call an infrastructure pipeline.
The idea here is that when you start up, you're actually building on well-proven, highly exercised workflows that will install operating systems and do work beyond installing: qualifying systems, discovery, classification, configuration. All that work is actually built into how the system operates right from the start. Then inside of those pipelines, you can add in whatever custom pieces you want. This is really a critical piece. So we have to step back to the objective. The objective is that you can start with a well-proven, tested body of work that, out of the gate, provides 90%, 95% of the function you need. Then, without removing that layer, you can add to it to do just the work you need.
A lot of times, we like to draw IT diagrams like layer cakes. We have this beautiful vision in our head of a networking layer, a hardware layer, an OS layer, a platform layer, and an app layer, all neatly stacked together, with this idea that we can take pieces in and out, like switching out a layer in the cake. I like to describe it instead as a fruitcake. Because everything that we do has implications for all the other pieces and parts. So you can't, in an IT system, swap out the networking and pretend like it's just the networking. Changing an IP address changes your DNS, which changes your certificates. All these pieces bump together. And so we don't have the luxury of thinking that we can stratify our way out of the complexity. We have to manage the complexity in a different way. The way we do that is we actually build these pipelines together that pass information back and forth across the pipeline, and then make it very easy to inject new operations into the pipeline.
So I can say, oh, you know what? On my systems, I need to run this script after I've done the bootstrap configuration but before I've installed my application. What we've enabled you to do is add in that one action into the system without breaking the pipeline. So you can use a standard pipeline and take advantage of its evolution, improvements, and fixes for bugs. You can stay on the pipeline, but now you've added that one piece without breaking it.
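The injection pattern Rob describes can be sketched in a few lines. This is an illustrative Python sketch of the idea only; the names here (`Pipeline`, `inject_after`) are hypothetical and are not Digital Rebar's actual API:

```python
# Sketch: a standard pipeline whose shared stages stay intact while a
# site-specific action is hooked in after a named stage. Upstream fixes
# to the standard stages keep flowing because we never copy or edit them.

from dataclasses import dataclass, field
from typing import Callable, Dict, List


@dataclass
class Pipeline:
    stages: List[str]  # ordered standard stages, maintained upstream
    hooks: Dict[str, List[Callable[[dict], None]]] = field(default_factory=dict)

    def inject_after(self, stage: str, action: Callable[[dict], None]) -> None:
        # Register a custom action without forking the standard pipeline.
        self.hooks.setdefault(stage, []).append(action)

    def run(self, context: dict) -> List[str]:
        executed = []
        for stage in self.stages:
            executed.append(stage)  # the shared, well-exercised step
            for action in self.hooks.get(stage, []):
                action(context)  # the site-specific addition
                executed.append(f"custom:{stage}")
        return executed


standard = Pipeline(["discover", "bootstrap", "configure", "install-app"])
# "Run this script after bootstrap but before the application install":
standard.inject_after("bootstrap", lambda ctx: ctx.setdefault("tuned", True))
order = standard.run({})
# order == ["discover", "bootstrap", "custom:bootstrap", "configure", "install-app"]
```

The key design point is that the custom step lives alongside the standard pipeline rather than in a private copy of it, which is what preserves the upgrade path Rob describes next.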
Historically, what happens is, people take all that automation. They make a copy of it. This is what we used to have to do. Then you add in your one piece, and now I've got a copy, the 2023 version of my automation. If RackN continues to improve it, or fix things, or evolve it, I don't get the benefit of any of that work. Most people take that piece of automation, and then they're like, "Oh, I don't need any of these extra things that RackN stuck in to support security. I'm going to take that out." Then you start unpacking the box. Now we're back to complexity versus simple. You're like, "Okay, I'm going to keep this simpler by taking out all this stuff that I don't understand, because it's complex." You end up with something that works for you in the moment. But what you've lost is all of that learning, all those battle scars, that defensive stuff, the things that you might need next week. You don't realize you just tore that out.
We saw this pattern happen over and over again, which is why we built the platform in this way. Because it really, really is important for you to have all of those pieces in place, even if you don't need them yet. They all serve a purpose. This is actually the punchline I didn't finish earlier about the antidote for complexity. It's not simplicity. It's exercise. And so if you're seeing something and you're thinking, "Wow, that's a really complex piece of automation, or a complex system, or a complex machine," the way that you deal with the complexity of that work is by doing it more often. In development speak, they talk about releasing more: if releases are hard, release more often. The more you exercise the system, the more defended you are against this complexity.
And so what RackN has done is, when we build these pipelines, instead of having thousands of pipelines, we actually work really hard to have as few pipelines as possible that serve a broad range of uses. Because we know that even if you, as an individual customer, aren't exercising that pipeline, we have customers exercising those pipelines like crazy all over the place. And so we find bugs. We add features. We add better defenses. We add new capabilities into those pipelines based on the community exercising that automation. It's radically transformative from that perspective. Because that means that everybody's automation is being tested on an ongoing basis, even if they're not using the automation that much. That is the thing — going all the way back to Dell in our early OpenStack days — and this is, I think, a critical insight for anybody doing IT work: individually, you might not do a task very often. But if you can participate in a community where that task is being done a lot, then you get the benefit of that exercise. That's one of the ways we cope with complexity.
Erik: Yeah, it's a fascinating approach in the context of IoT, kind of the connected enterprise. Because at least a lot of the companies that I'm working with have had IT systems, obviously, for 30, 40-plus years. But for a significant portion of that time, they were relatively stable. It's like, okay, we deploy. We have our ERP. We have our MES, and we have our general back-office systems and so forth. Beyond that, things are fairly stable. We make updates every once in a while. Now we're moving into an environment where companies have this backlog of 50 use cases, of different SaaS, different PaaS. They have customers that are asking to integrate into different data streams. They have decisions around enterprise 5G. They have salespeople coming and saying, "Hey, you should deploy a campus 5G network, and then you can rethink your architecture and radically enable new use cases and so forth."
Of course, they're looking at this and they say, okay, we know how to manage our MES and our ERP and so forth. How do we make sure that we have an architecture that's able to scale to meet all of these new requirements? Obviously, that means that they have to be rethinking how they evolve their IT backbone to be in a much more flexible architecture. It sounds like that's the mentality that you're taking. It's enabling people to do this. I'm curious then, on the open-source side, is it by default that anytime code is developed on this platform that it's then available to the community? Or do people have to basically opt in and say, this workflow that I've developed, I'm going to make available? And if so, what is the incentive model to make sure that people are not basically free riding on the community?
Rob: That's an excellent observation and question. One of the reasons why we made the content open and the platform closed is the free-riding problem. Free-riding in open source can be a bit of a challenge. The way we designed it evolved over time. We've been doing this, if you go back to the Dell days, for over 15 years. Where it's open, and where there's community and collaboration, is really important. Because one of the things we support is highly secure air-gapped environments, or even just sensitive environments like banks, where they're building automation that they don't want public. It can't be public. And so part of the design has to be a very intentional balance between these are things that are internal to me that I keep, and these are things that are open and in the community that I share.
A lot of open-source communities act as if everybody should be collaborating and contributing in the open, and they create pressure to do so. That makes sense for their model. It doesn't make sense for our model. So instead of creating pressure for people to commit in the open in order to earn credentials and build standing in the community, what we'll do is, if customers have pieces that they believe can be shared, we can help them get those into the shared content even if it doesn't show up as coming from them.
So what we're trying to enable with this is common reuse. Instead of insisting, hey, you have to contribute back to open source, if people have things that they want shared, they can contribute back into the community directly or through us. Both are fine. Because the goal is reuse, not having everybody earn community credentials and credibility here. These are operators. Developers are really good about sharing things like this. Operators, not as much. I can actually explain why this was a big aha moment for me. When you're an operator and you take new code from somebody else, it's not tested for your environment and your scenario. This is one of the things that made it so hard for us to get to a point where RackN was solving this problem. Operators tend not to take outside code. And by operators, I'm going to include IIoT and IT people building edge infrastructures. They should be very nervous, realistically nervous, about taking code from outside of their environment, because it could break their operational systems.
Nobody in ops wants to spend the time and do the work to test other people's stuff in their own environment. They just don't have the overhead to do it. And so you have this challenge where they don't want to write a lot of custom software, because that's a negative: it means they have to maintain it. But they're also very nervous about bringing in software automation from outside their scope. We have this conversation with a lot of operators, and they're always nervous. They're like, "Wait a second. I'm an HP shop. I don't want to take anything that's Dell related. I don't need it. That's complexity. It's bad. I'm going to kick it to the curb or not accept it." This is the classic balance we keep coming back to.
This is why it's very important to have a curation process for what's going on, and to actually be able to call somebody and say, "Alright, you've made these changes to my automation. Can you help me get it working?" Because you're not going to call up one of the other operators, or go to a community meeting if you're working at another bank or in telco, and say, "Hey, I need help fixing this thing that you broke." This is where it fell down. This is where we were having trouble in the OpenStack days: operators don't have time to help somebody solve a problem that's different from the problem they're solving. If it's identical, you can do it. But if it's even slightly different, and every environment is slightly different, it's much harder to spend the time solving somebody else's operational problem when you've got a lot of burdens on your end. But this exercise capability is really important to understand. And so you have to think through what that means.
For us, what that means is, to be more successful with our customers, we try to get them to over 90% shared automation. When you start hitting a threshold like that, it comes back to your first point: how much do you get working out of the box? How much do you keep working out of the box? It's a totally transformative perspective. But you asked about open source. Some of our customers will share things because they don't want to maintain them, where they recognize that they don't want to own it. For us, that often means that they're transferring ownership of that piece of code to us, to RackN — the community in general, but to RackN specifically. Then we will fix it if something breaks. That becomes a really important part of open source.
If you're accepting code into an open-source community, the community has to be willing to take over ownership of how that's operated. Typically, the author of the code can't do that in an operational environment. Operators really aren't as likely to be the owners of something over a long period of time. They get it done. They keep it running. They don't worry about what the community does with it. In order for the model that we're describing to work, you have to have a curator who will maintain and support how that stuff is going. It's essential to creating that reuse.
Erik: Yeah, clear. Okay, well, that sounds like a good win-win. So they can transfer longer term ownership to you. By doing so, they get then to benefit from everybody, both RackN and anybody else who improves upon that workflow in the future. Then in return, you get to share that with the broader community.
Rob: Correct, yeah. It's a very virtuous cycle. Because that means that as things improve, they keep getting the benefit. I haven't met a single customer who felt like they were adding business value by being able to set BIOS. We have customers who have very specific BIOS settings they need for their high-frequency trading environment. That, I get. But the actual process of setting them doesn't add any business value to them. And so they're very happy to have all of that become standardized. Same with installing operating systems, or installing applications, or doing security audits. They're usually unique for each customer, but they are only unique because people haven't had ways to share and reuse that automation. That requires that you hit this threshold of reusability and make it worthwhile.
Erik: Clear. Well, help me understand what it looks like for a company to adopt this, the RackN solution. Because, of course, they all have their existing legacy, and so they're going to be somehow integrating that into what they've already been working with. Maybe we can start with just a quick refresh on who you're actually working with. I guess this is a very horizontal technology, so it could be anybody in the world. But, of course, companies still tend to focus on specific markets even if they could hypothetically be serving anybody. Maybe first, who are you working with? Then what does it look like if you just want to maybe choose one example of how somebody would actually deploy RackN and begin to utilize the solution?
Rob: I'm happy to. You're right. This is very horizontal. We go across a lot of different industries. We do have a lot of banking. It was one of the initial use cases, because banks have a requirement to be able to build whole new data centers from scratch as an emergency response. They can't lose a data center. They have to be able to build one from nothing in a week. And so we get called in, because that process usually takes months for people, and they need it to take hours. Interestingly enough, edge and telco have very similar requirements in building up a new cell phone tower site or a new remote site. I need to be able to automate that. All those vendors have multiple hardware vendors in the mix, so they can't be single-streamed either. The ability to abstract hardware and then completely automate the process: both are key requirements for us.
Let me be very specific and concrete about what it looks like. Because RackN is unusual in that we are not a SaaS. We are a software company. One of the things we sell is not just software but control: self-control and self-management. So our customers download and install Digital Rebar in their data center. It's a very small, simple-to-run executable. You can run it on a footprint as small as a Raspberry Pi. From there, it will handle out-of-band management, boot provisioning, a whole lifecycle control process. Everything that's included in that initial install can be version-controlled and put together as a package. So you can show up with a USB stick that has an exact version of what your data center or site needs to run, down to your custom pieces, the standard library pieces, the binaries themselves, the ISOs. All of that stuff actually gets put together as what we call a version set, and then run into the system. Then as machines boot or get connected, they will get identified, discovered, validated, provisioned. All of that is automated workflows that run straight out of the system. But it all starts from basically just running a single service in the data center, and then installing that content.
The versioning of the content is an important piece. Infrastructure as code is incredibly deep in how we do this and what we do. And so the idea is that not only am I saying here is a Digital Rebar service, but here's all the content that I need to run my site. It's version-controlled and immutable. Meaning, it can't be changed. It comes in as a read-only copy. Dropping that in to start a site, that's how Digital Rebar works. No external networking is required. No VPNs. No phoning home back to RackN. It's all basically a self-contained bundle from that perspective.
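The "version set" idea can be sketched as a pinned, content-addressed manifest. The following Python sketch is purely illustrative; the field names and hashing scheme are hypothetical, not RackN's actual version-set format:

```python
# Sketch: freeze a site's content (workflows, ISOs, custom pieces) into a
# single versioned bundle whose identity is a hash of its pinned contents.
# Any change to any pin produces a different digest, so "what exactly is
# running at this site?" has a precise, checkable answer.

import hashlib
import json


def build_version_set(name: str, components: dict) -> dict:
    """Produce an immutable manifest for a site from pinned component versions."""
    body = {"name": name, "components": dict(sorted(components.items()))}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    body["digest"] = digest
    return body


site = build_version_set("branch-site-a", {
    "drp-server": "4.12.1",        # hypothetical component names and versions
    "universal-workflow": "2.3.0",
    "ubuntu-iso": "22.04.3",
    "site-custom": "1.0.0",
})
```

Two sites built from identical pins yield identical digests, which is what makes the USB-stick, no-phone-home deployment Rob describes auditable and reproducible.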
One of the things that lets us do, and this is really notable, is that usually the site bring-up is not the first time you've run that software. The way we do this, and what all of our most effective customers do, is they build a dev environment that matches exactly what they have. They'll typically run the versions of the hardware and software they need and build everything together. They'll take that exact automation, because it's now liftable as a version, and move it to a test environment. They'll have another person run through that whole process, test it, bring the site up and down, rehearse, rehearse, rehearse.
The more times you exercise the code, the more confident you are. Then by the time they actually lift that exact version of everything they need and move it into their production site, they're really confident that it's going to work, because they've been able to test it. If they have a problem, they can now replicate the problem, fix it, take the version of the automation they have working, and move it into the new site. Part of that means we've also worked out a lot of day-two processes. So it's not just the first time you bring it up. Can you reset a site? Can you add or change? Can you morph things? All of that is factored into how these systems work. It all comes back to this idea that I can say exactly what automation is running at my site, copy it, clone it, and look at exactly what all the pieces and versions are. That type of clarity is transformative to actually beginning that journey.
Erik: Got you. What does the timeline look like, though? I mean, to some extent, you're talking about the ability to deploy within hours. But I guess that means the company has already pre-configured things, so they can copy and paste and get something up and running very quickly. Maybe that's my assumption. But there would still be some configuration when you're onboarding a new customer for the first time. What does the timeline look like to get something up and running there?
Rob: Yeah, I mean, usually, just getting the basics, what I would call the out-of-the-box workflows, going is a day or less. We've done this so much that most things just work out of the box. There's usually some work around people's networking; they have to understand their networking enough to fit things together. Then from there, depending on what customers want to do, or add, or change, it usually takes a couple of weeks to learn how to build the automation, or do the extensions, or wire in against our APIs, or call back to their own APIs. That's usually where people are spending the time. It's taking the standard pipeline and then extending it to run Ansible configurations, or bring in scripts that they have, or phone home to different systems they need to coordinate with, or call into our systems. We've done a lot to make that easier. But that stitching together is actually a really important part of these processes. It's not so much Digital Rebar work; it's really the environmental work that exists in your own systems.
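The "take the standard pipeline and extend it" pattern Rob outlines can be sketched as composing stages: the vendor ships standard steps, and a customer appends their own (an Ansible run, a callback, a local script). This is a conceptual illustration only; the stage names and data shape are hypothetical, not Digital Rebar's actual model.

```python
from typing import Callable, Dict, List

# A stage takes the machine's state and returns the updated state.
Stage = Callable[[Dict], Dict]

def run_pipeline(stages: List[Stage], machine: Dict) -> Dict:
    """Run each stage in order, threading the machine state through."""
    for stage in stages:
        machine = stage(machine)
    return machine

# "Standard" stages a vendor might ship out of the box...
def discover(m: Dict) -> Dict:
    return {**m, "discovered": True}

def provision(m: Dict) -> Dict:
    return {**m, "os": "installed"}

# ...and a customer-written extension, e.g. kicking off an Ansible run.
def run_ansible(m: Dict) -> Dict:
    return {**m, "configured": True}

standard = [discover, provision]
extended = standard + [run_ansible]  # stitch your own stage onto the end
```

The point of the design is that the customer's extension is just another versionable piece of content, which is how it later becomes a reusable internal content pack.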
The fun part to me is watching customers who learn how to do this infrastructure-as-code piece and then build that stitching together as their own content packs. They don't have to share them with us. That's internal work for them, but they add it into the mix. Then it becomes a standardized piece that they can use repeatedly across their own infrastructure. That's usually the part where there's some learning curve, and then they'll build and test that work. We see that happen in our community channel all the time. It's people understanding how to make that work.
The other thing that takes a little bit of time for people to get used to is our multi-site capability. Because of the way we built the software as site-independent, it's one of the only pieces of software I've ever seen where you can actually run a control plane per site. Then you can attach another Digital Rebar server to each site. It's not centralized control; it's actually a distributed federation. You can have Digital Rebar servers that attach to multiple edge sites and then create an aggregated view, so that you can see all of your infrastructure even if it's managed by different sites. If you talk to the API at the management site, it will forward the requests to the edges. So you can act as if you have an aggregated infrastructure even though every site is autonomous at the end of the day. That's really powerful software. It takes people a little getting used to, because they then have to understand how they're layering their automation and controls. It's incredibly powerful from a consistency and multi-site automation perspective, and unlike anything else I've seen in the industry. Especially because it's self-managed software, not a SaaS, so we're not in the mix either.
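The federation model described here, one autonomous control plane per site plus a management endpoint that forwards requests and aggregates the answers, can be sketched in a few lines. Everything below is illustrative with hypothetical names; it is not the actual Digital Rebar API.

```python
from typing import Dict, List

class SiteEndpoint:
    """Stands in for one autonomous per-site control plane."""
    def __init__(self, name: str, machines: List[str]):
        self.name = name
        self._machines = machines

    def list_machines(self) -> List[str]:
        # Each site answers only for the machines it manages itself.
        return list(self._machines)

class ManagementEndpoint:
    """Attaches to many sites and presents one aggregated view."""
    def __init__(self) -> None:
        self.sites: Dict[str, SiteEndpoint] = {}

    def attach(self, site: SiteEndpoint) -> None:
        self.sites[site.name] = site

    def list_machines(self) -> Dict[str, List[str]]:
        # Forward the request to every edge site and merge the results;
        # the sites stay autonomous underneath the aggregated view.
        return {name: site.list_machines() for name, site in self.sites.items()}
```

The design choice worth noting is that the aggregation layer holds no state of its own: if the management endpoint disappears, each site keeps operating, which is the "site autonomous" property Rob emphasizes.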
Erik: Exactly, yeah. I think it's a different world from SaaS. I was going to ask about pricing, because you said that the SaaS pricing model tends to be quite different. But you have this fairly transparently on your website: there's physical infrastructure, virtual infrastructure, support tiers, and feature tiers. So it sounds quite flexible in terms of the requirements of the companies that you're working with.
Rob: Yes, exactly. We work hard to make it simple. Pricing that you have to guess about or wonder about is, to us, really hard. It's very easy to know how many systems you're going to be connecting. For virtual, we think of that as a high watermark. Our customers doing virtual work are encouraged to have very dynamic virtual environments. One of the things we've made easy is creating and destroying virtual machines with a very cloud-like dynamism: build, destroy, create, literally going through that process as fast as you can. We don't want to disincent that. We spend a lot of time thinking about incentives and disincentives, and we want people to have a very dynamic environment.
The other thing we incent is that we don't charge anything additional for this multi-site capability. Even though there's huge value there, we don't charge a premium if you have hundreds of Digital Rebar endpoints; we call each of those a site. That way, if you want to have dev, test, and prod, we don't charge people extra to have those multiple sites. Or if you have four teams that are all autonomous and then feeding into a centralized site, we want people to use that capability. It's transformative in how you manage infrastructure. So we made the decision to just keep it simple: it's the total number of machines that you're managing, or the high watermark for virtual machines. That's it. If you do that with hundreds of sites, that's okay with us.
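"High watermark" metering, as Rob uses the term, means you pay for the peak number of VMs alive at once, not for every create/destroy. A minimal sketch of the arithmetic (purely illustrative, not RackN's actual billing code): record each lifecycle event as +1 for create and -1 for destroy, and track the running peak.

```python
def high_watermark(events):
    """Peak number of VMs alive at once.

    `events` is a list of (timestamp, delta) pairs where delta is
    +1 for a VM create and -1 for a VM destroy. Illustrative sketch
    of high-watermark metering; not RackN's actual billing logic.
    """
    count = peak = 0
    for _, delta in sorted(events):  # replay events in time order
        count += delta
        peak = max(peak, count)
    return peak
```

Under this scheme, churning through many short-lived VMs costs no more than holding the same peak steady, which is exactly the dynamism the pricing is meant not to penalize.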
Erik: Yeah, nice. Very clear. Well, let's see. Maybe a good place to wrap up our conversation would be to look at the future. I'm sure you've been quite busy the past few years. One of the silver linings to a very unfortunate event like COVID has been, for the IT community, a lot of acceleration in technology adoption. But now you probably have a little bit of breathing room and are also thinking about developments for the next one or two years. What's exciting? What's coming out, either at RackN or maybe just broader developments in the ecosystem that you're excited about?
Rob: Well, for me, those are well aligned. We're very excited about this edge transition towards a more IT-oriented conversation. RackN has been building around highly distributed edge sites since our inception, that idea of having a lot of sites and having to manage them consistently. Whether they're different customers or within a customer, that's really important to us. But all of the edge, IoT, and IIoT conversations that we've been part of are typically dominated by the operations or OT side of the business, not the IT side. We've been watching as we slowly get more and more IT conversations in that space. That's really where we get excited. We're not building an IT gateway network for people. We would enable the IT gateway to be run in Kubernetes, or K3s, or VMs, bringing in IT management to make those much more repeatable.
But the industry, in our opinion, hasn't been quite ready, hasn't been asking for IT solutions in those spaces yet. So we're very excited to start seeing and having those conversations: wait a second, I've got 100 sites. Or, I've got a factory with PLCs that aren't secure and need to be managed, and PCs on the floor, and IoT devices that actually have RESTful APIs and standard protocols. How do I start thinking of that as an IT environment where I have to have consistent, automated management? That conversation gets me really excited. We're in a very good position to help in those places.
One of the things I've learned in my career is that having a conversation somebody is not ready for isn't of much use. So if people aren't yet asking, can I automate all of this stuff? Can I standardize the process? Can I create reusable workflows? When people start asking those questions, then I get excited to have a very meaningful dialogue about increasing the number and types of devices that you're helping somebody manage. To me, the next two years are about the emergence of IT conversations in more traditional IoT.
Erik: Fascinating. Well, hopefully, we can — we're a bit of a drop in the bucket, but we can do our bit to push the market there. A lot of the companies that we're working with are exactly in that situation where IT and OT are beginning to work very closely together, and the environment is getting sufficiently complex where they really need to start looking heavily at automation and technology to help manage it. Also, they're building systems that are evolving much more quickly than they were in the past.
Rob: The thing that I keep telling professionals they need to stop doing, and this has been a theme for us in this conversation, is approaching the existing tech that's in place as outdated or legacy. It's going to be there. For IT to be successful in IoT and OT discussions, you have to drop the "if it doesn't match exactly the way I want it to be, then it's not going to work" mindset. At the same time, I do believe that cloud technologies are going to start filtering down and being required. This is something that your audience should be thinking through. We are going to start having more and more control planes, API integrations, and tooling that are deployed in containers only, and therefore managed in container-based systems. Those control planes are coming from the cloud back into on-premises infrastructure and automation systems and factories and telco POPs, all these places. That's how the software is going to get deployed. And that is going to drive a lot of IT requirements in places that traditionally have not had to do that work. It's going to be a huge convergence point for those different technologies. People have to be ready for it. That will drive the IoT conversation.
Erik: Yeah, well, that's it. That's a good challenge to leave our audience with. Let me quickly share your website. So for the folks listening, this is rackn.com. Rob, what is the best way for folks to reach out to the company if they're interested in continuing the conversation?
Rob: Yeah, the simplest is through our website. You can find a couple different ways to contact us. We are @rackngo. On Twitter, I am @zehicle. It goes back to my electric car days. On Twitter, Hachyderm, LinkedIn, we are in all of the places. Pretty easy to find us, RackN.
Erik: Awesome. Rob, thanks for your time today.
Rob: Thank you. Thank you, Erik.