How can industrial organizations realize AI’s potential to build powerful operational capabilities? Leveraging their existing domain expertise is the key. Aitomatic helps companies sharpen their AI competitive edge by combining human and artificial intelligence.
In this episode, we sat down with Christopher Nguyen, CEO and Co-Founder of Aitomatic, a knowledge-first app engine for industrial IoT that assists industrial companies in using their domain expertise to build more effective algorithms. We discussed why only 9% of manufacturers currently use AI in their business processes and how they can overcome data availability challenges. We also explored which use cases are best suited for a human-first AI development platform, and which work well with a traditional black box training approach.
- Why are we not seeing AI takeoff in the industrial sector as often as we see it in consumer markets?
- What are the limits of modern AI solutions in enabling predictive and prescriptive analytics use cases?
- How can you manage the fundamental challenge of small data sets?
- How can your engineers and operators embed their know-how in an AI algorithm?
Erik: Welcome back to the Industrial IoT Spotlight Podcast. I’m your host, Erik Walenza, CEO of IoT ONE, the consultancy that helps companies create value from data to accelerate growth. Our guest today is Christopher Nguyen, CEO and co-founder of Aitomatic. Aitomatic is a knowledge-first app engine for industrial IoT that helps industrial companies use their domain expertise to build more effective algorithms. In this talk, we discuss why only 9 percent of manufacturers currently use AI in their business processes and how they can overcome data availability challenges. We also explore which use cases a human-first AI development platform is suited to and which use cases a traditional black-box training approach works well in.
If you find these conversations valuable, please leave us a comment and a five-star review, and if you’d like to share your company’s story or to recommend a speaker, please email us at firstname.lastname@example.org. Finally, if you have an IoT research strategy or a training initiative that you’d like to discuss, you can email me directly at email@example.com. Thank you.
Erik: Christopher, thank you so much for joining us on the podcast today.
Christopher: Thanks for having me.
Erik: So, yeah, Christopher, you have a really fascinating background, not just on the technology side but also, I think, on the personal side. We won’t cover that too much, but I would say you are very much an American melting pot success story, coming over from Vietnam and then crafting a very successful career in Silicon Valley. We maybe don’t need to go back all the way to your birth, but can you just give us a little bit of a walkthrough of how you ended up in the technology domain?
Christopher: Well, briefly, I came to the US as a refugee child from Vietnam back in 1979 and ended up in Silicon Valley basically at the beginning of the PC revolution. So, it was hard to fall out of that. I started hacking then, when I was 13, and I have not stopped since, literally. Whatever has happened in between has just been one experience after another, hacking one system after another, but I’ve accumulated a lot of experience across the entire technology stack, all the way down to device physics and all the way up to software systems.
Erik: Actually, I’m curious there, how old were you when you arrived in the States?
Christopher: I was 13.
Erik: You were 13 when you arrived? And I imagine, I mean, I have to imagine that your English was certainly not fluent.
Christopher: No, no. My only foreign language at that time was French. That was my generation. You got to choose your foreign language track, and I think roughly 80 percent chose French; English was sort of the weird one. But I learned it here, and that is, I guess, what I’d call the strength of the American system, the American culture: you assimilate.
Erik: And when you say hacking, were you already as a teenager learning how to program computers and…?
Christopher: Yeah. So, of course, it had to be in the US, it was not in Vietnam. My first computer, which my sister went to work and scraped up enough money for, I think it was about $400 or $500 at the time, which was a huge amount, was a TI-99, a Texas Instruments machine that had no storage at all, so you had to write the same BASIC program over and over again. I remember going to the nearby strip mall, where there was an early Atari computer store, and I would write programs there that did some cute graphics, and people would crowd around and watch, and they would buy computers for their children, so the store manager said, “Hey, come in anytime you want right after school. You can just play with the system.”
Erik: Oh, that’s so cool. It’s really cool how programming is a universal language. Even before you really become fluent in English, you can start programming.
Christopher: Yeah. I don’t want to sound too profound but I think it’s what — how shall I put it? Got me out of poverty. Yeah. I was able to do consulting jobs and so on.
Erik: You got into Berkeley, you got into Stanford. When did you first — so were you already involved in computer programming in your degrees or when did you first start doing this more professionally?
Christopher: Such was the culture at the time; 1984 to ’88 was my undergrad at Berkeley. My computer science training was, of course, a combination of coursework, but even before that, I was already doing a lot of coding and programming. I ran the educational systems at Berkeley, the VAX and PDP-11 machines and so on. At the time, there were maybe five machines across the entire campus, and I was the sysadmin for them. That was my job in college. Going to computer science classes was more of a revelation: oh, so there are systems and patterns and principles that people have thought about. My computer science education was more like an apprenticeship you do before you realize that there are formulas and principles. In many ways, I think it’s actually the best way to learn: do first and then refactor out to principles later, rather than the other way around.
Erik: Yeah, so, I’ve just seen that you did your master’s and also your doctorate in electrical engineering, so I guess that’s —
Christopher: Yeah, and that was Stanford, yeah.
Erik: Best of both worlds —
Christopher: I was in device physics, because I — at that young age, I would naively look at the whole stack and I said I wanna work at the bottom of the stack so that everybody has to depend on me and I have to depend on no one. So I went all the way down to transistors and electrons and holes.
Erik: Okay. Well, IoT and AIoT has finally caught up with you so we’re bringing you back to the bottom of the stack now.
Christopher: Yeah. We’re back to atoms. Silicon Valley, for the last 10, 20 years — my friend Marc Andreessen likes to say software is eating the world, but we’re realizing, with geopolitical risks, manufacturing offshoring, and so on, that we do need to start actually making things again. So, atoms are getting their day back versus just bits. The Googles of the world are digital in, digital out, but the Panasonics, the Samsungs, the TSMCs, UMC, SMIC, and so on. You’re absolutely right. It’s their time to come back.
Erik: So I think we don’t have time to go through your entire bio, you’ve worked with half of the tech companies in California, but maybe we can cover Arimo before we get into Aitomatic because I think this is probably a very important part of the founding story. So, what was the vision when you set up Arimo?
Christopher: So, in many ways, Arimo to Aitomatic is really one straight line, one project that I started in 2012. Arimo was acquired by Panasonic. After we got initial product-market fit and worked with various customers, one of the largest customers turned out to be Panasonic. We also worked with Nasdaq and other entities, even the intelligence agencies. At the time, the company was really, I would say, a bunch of geeks with algorithms. And Panasonic wanted to acquire us because 2018 was the 100th anniversary of the company founded by Matsushita-san, and the idea was to transform Panasonic, we say roughly from hardware to software, but it’s really moving up the vertical stack. You may know that Panasonic makes a lot of industrial equipment: cold chain, avionics, automotive, and so on. AI was one of the major missing pieces needed to position the company much closer to the customer, so that was the rationale for the acquisition. And we learned very quickly after the acquisition that a lot of our systems and techniques that were optimized for the digital world didn’t work for the physical world, and I can talk a lot more about why that’s the case. Then, for the better part of the last 5 years, we basically built up a lot of talent and experience, ran into a lot of walls, and found solutions for them, which we’ve now launched out of Panasonic. Aitomatic is about refactoring all of that knowledge into a product to help industrial companies with AI.
Erik: Understood. So Arimo was acquired, you then ran the business under Panasonic for a number of years, and then, I think just last year, set up Aitomatic. Let’s get into the problem here. We now have fairly mature AI frameworks, good sensor infrastructure, good infrastructure for managing large datasets, and a lot of platforms on the market. Why in the industrial space are we not seeing AI take off as we are seeing it in some of the consumer markets, at least in backend processing? What are the challenges that still need to be solved here?
Christopher: Yeah. Well, you actually hit the nail on the head in our earlier chat, which I’ll come back to. Let me answer your question by starting out with the general principle, and then I’ll give a specific example of it. This is something that took me the better part of a year to come to terms with. The general principle is that there’s something fundamentally different between the digital and the physical domain. I like to refer to that as bits versus atoms. For a company like Google, when I was running Gmail infrastructure at Google, the input was data, the output was data, and the processing was all data, and any experiment that we wanted to conduct was done digitally, in cyberspace. We could launch it at scale: you launch in the morning, and by afternoon you’ve got millions and millions of, let’s use the word examples, or data points, to work with. In the physical world, things cannot move as quickly. And you mentioned, for example, if you’re gonna predict errors and failures but you don’t have that many errors and failures, it’s certainly not happening at a rate high enough for machine learning to consume. That’s the fundamental difference between an industrial company trying to take advantage of AI and a digital-first company, meaning companies that were created in the last generation, I would say from 2000 onwards. As a particular example, let’s take predictive maintenance. This is something anybody in manufacturing, anybody in industry, will say is the holy grail, that’s what we want. So let’s consider what predictive maintenance really is. In the old days, or even now, we have what’s called reactive maintenance: when something breaks, you go and you fix it. But that’s actually very expensive. It’s not just the price, the cost of the equipment and the cost of the repairs, but the cost of downtime. And, in the case of automotive, maybe safety and human life are at stake.
And so the next stage is preventive maintenance. That’s what aircraft MRO is: every 6 months, every period of time, you go and you inspect and you replace everything, whether it’s broken or not. The holy grail is predictive maintenance, meaning you only replace things that are likely to fail. But that requires prediction, the ability to answer a very precise question: for example, over the next 2 months, give me the probability that this particular compressor in this refrigeration system is likely to fail, and if it exceeds 80 percent, I’ll replace that particular one. Machine learning, in order to make that prediction, requires a lot of examples of that past failure. And it’s not just the failure, it’s also stratified by: what is the model of that equipment? Where is it operated? What is the workload? What’s the climate like? For example, we have applied use cases in supermarket refrigeration systems. By the time you’ve stratified by these physical parameters, you end up not having enough of what are called labels. In machine learning, you need a lot of labels. You don’t have enough of that in the physical domain. It’s what I call the small data problem. And people trying to apply AI to this domain are slowly starting to realize that. Of course, we were, in a way, fortunate and unfortunate to run into it as early as 5 years ago, because we were part of this industrial giant, Panasonic. But it is now becoming a well-known problem, and so the question is how you solve it.
Erik: So, let’s discuss what this actually looks like in practice. If I have a production line with a motor on it, and this production line needs to have 99.X percent uptime, this motor breaks down once every 2 or 3 years, so, basically, I have no data to train on, nothing indicative of what conditions look like when the motor breaks down, because it never breaks down. But if it does break down, it’s a multimillion-dollar problem. So, I’m willing to invest in a solution to predict breakdown even if it only happens every couple of years, because it’s such a big problem, but I can’t do it because the data never accumulates. So you say tagging is the solution, but there we still lack — I guess we lack data to tag. How do you address that fundamental challenge of dealing with very small datasets around issues like a fault in equipment?
Christopher: Yeah, I would say that tagging, or labeling, is not the solution. That’s what we machine learning people think about: labeling. And “we” includes me; I’ve been a professor at HKUST, and it’s what we teach our students. But you don’t have enough labels. So the solution turns out to be a blindingly simple and straightforward insight, though we can talk about the actual implementation of it. Let’s follow the use case of refrigeration predictive maintenance. The initial solution we built after the acquisition, after the integration into Panasonic, was what’s called anomaly detection, meaning I cannot tell you what’s likely to fail over the next 2 months, but I can tell you when something looks different, when the sensors are not sending the same signals as they used to. That can be done because you don’t need failures to tell you that something looks different. The limitation is that anything could be the cause of something looking different, including a different workload today, or somebody decided to turn off one system, and so the sensor signals look different. So our first solution was to use anomaly detection to shortlist things for an engineer, an expert, a domain expert, to look at. We watched domain experts do this work, and they’re surprisingly good at identifying potential issues down the road, including being able to say, “This is nothing, don’t worry about it.” But that gets tedious for them after a while, and the blinding insight is: what if we just codify the heuristics that the expert is going through and then automate that, capture their knowledge? Because if you think about it, what they’re using is their 30 years of line experience, which is not in the data coming from the sensors.
So, the key idea here is to be able to combine human expertise, domain expertise, that these industrial companies have a lot of, combine it with the data and then come up with an overall AI solution. And so that’s what we do. We actually have a branch of AI, if you will, that we’re advocating called knowledge-first AI as opposed to data-first.
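To make the idea concrete, here is a minimal sketch of what codifying one such heuristic might look like. Everything here is hypothetical: the rule, the thresholds, and the sensor names are invented for illustration, not Aitomatic’s actual product or a real refrigeration rule.

```python
from dataclasses import dataclass

@dataclass
class Reading:
    """One snapshot of sensor values from a refrigeration unit."""
    discharge_temp_c: float
    suction_pressure_kpa: float
    ambient_temp_c: float

def expert_rule(r: Reading) -> str:
    """Hypothetical codified heuristic: high discharge temperature while
    suction pressure stays in the normal band suggests a compressor
    issue; high temperature on a very hot day alone is expected load."""
    if r.discharge_temp_c > 95 and 250 <= r.suction_pressure_kpa <= 400:
        return "inspect_compressor"
    if r.discharge_temp_c > 95 and r.ambient_temp_c > 35:
        return "likely_weather"  # high ambient load, not a fault
    return "normal"

print(expert_rule(Reading(98.0, 300.0, 22.0)))  # -> inspect_compressor
```

The point is that the expert’s rule of thumb becomes an executable function that can triage sensor readings automatically, even with zero historical failure labels.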
Erik: Yeah, I think that’s a really interesting way of framing this. Think about a lot of the pure data environments; look at finance, for example. You can have quants going into Wall Street as fresh graduates who might not understand anything about finance, but they understand how to manipulate and monitor data, and they can be very effective in some cases. Of course, over time, they learn the finance, but, basically, they can be effective just as quants. In the industrial space, that’s not really the case. You need domain expertise on the messy mechanics of the physical environment, which means that not only do you have a very challenging problem to solve from the technical standpoint of having limited, messy data, but you also have a more challenging human environment. You have a 50-year-old engineer who has been working at the plant for 25 years; he can listen to the machine and make sense of what’s happening, and he has a tremendous amount of intuitive know-how. How do you embed that in an algorithm? That’s the challenge at hand here. So, how do you think through this, not just the technical issue, but also the human issue of getting these very different teams to collaborate effectively on building a solution?
Christopher: You mentioned quants. Believe it or not, I happened to be CEO of a quant shop in Hong Kong between 2000 and 2005, quite successfully, so when I say it’s different, it is different. You’re absolutely right, because of this physical domain. So the question is, how do we do this? It’s really interesting in that I don’t have to convince a lot of these customers of ours of this. They know it intuitively. They know that they have a lot of knowledge, and they know the data that they have, because they have data scientists, they’ve hired these people, and somehow there’s this barrier between what they know and the data that they have. They already know the setup. So the key is: how do you put all of this together? That’s the product that we’re building, to automate what I’m about to share. In principle, there’s a way for a domain expert to say, in the case of, say, time series data: if this temperature is rising and yet the pressure is remaining constant, and they can give the ranges, then take a look at this other issue, and once you look at that, try to combine. In other words, it’s a heuristic that they go through. In principle, you can simply take Boolean logic and encode it, and that is one of the early approaches that we take. But can you take it all the way to a situation where you take what I just said as natural language and then generate that code automatically? From the verbal description, or from the written description, can we intelligently generate the code, which may be in Boolean logic or fuzzy logic; you and others will have heard of some of these systems. It’s not that the codification doesn’t exist; it does. But how do you bridge between human natural language, somebody’s experience, and the encoding? That’s what the product we’re building is.
And once you have this, what I call the knowledge model, then, instead of using labels, which you do not have, you can use that knowledge model to train machine learning models, and then you can combine the two models in what’s called an ensemble. The ensemble is like a very smart combiner: it knows when to listen to the knowledge model more than the machine learning model, and it makes the best decision possible.
Erik: Okay, fascinating. So, let me see if I’m thinking about this correctly. The traditional way of programming that we might have used 10 years ago, and in many cases still use today, is to try to translate someone’s experience into a calculation, if X goes up and Y goes down, then Z happens, and we come up with these very rigid rules that can work and are understandable by humans but are often fairly simple. Then we have machine learning, where you put a bunch of data in a box, the algorithm is auto-generated to some extent, and it spits out another equation which is much more difficult for us to understand and sometimes can extract a lot more insight, but requires a lot more data. And you’re basically combining these two together. You’re saying, let’s build the knowledge map and then work with it. Are you using that to train the AI algorithm, or are you generating a new algorithm that switches from one path to another depending on which seems more likely to identify the right solution? You can tell I’m probably not sufficiently technical here, but help me understand how to think about working with these two together.
Christopher: Your intuition is exactly right. I like to say that the human rules are very good in the center, meaning near the normal operating regions. They tend to fail catastrophically at the edges, in the exceptions. Whereas the machine learning model, if you train it using that knowledge model, tends to be smoother at the edges but may not be as good, without labels, near the center. So it is the combination of both, the human knowledge model and the machine learning model trained by it. That’s why the ensemble works better than either individually.
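The teacher/student/ensemble pattern Christopher describes can be sketched roughly as follows. This is an illustrative toy, assuming a made-up one-dimensional rule and a nearest-centroid “student”; the real K Oracle architecture is certainly more sophisticated.

```python
import numpy as np

rng = np.random.default_rng(0)

def knowledge_model(x):
    # Teacher: a codified expert rule (hypothetical), e.g.
    # "normalized temperature above 0.8 means fault".
    # Reliable near the center of its experience, brittle at the edges.
    return (x[:, 0] > 0.8).astype(int)

# Unlabeled sensor data; real failure labels are too scarce to use.
X = rng.uniform(0, 1, size=(500, 2))
pseudo_y = knowledge_model(X)  # the teacher pseudo-labels the data

# A simple ML "student" (nearest-centroid classifier) is trained on
# the teacher's pseudo-labels instead of real failure labels.
centroids = np.array([X[pseudo_y == c].mean(axis=0) for c in (0, 1)])

def student_model(x):
    d = np.linalg.norm(x[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

def ensemble(x, edge_width=0.1):
    # Combiner: trust the teacher near the center of its rule, defer
    # to the student near the rule's brittle decision boundary.
    near_edge = np.abs(x[:, 0] - 0.8) < edge_width
    return np.where(near_edge, student_model(x), knowledge_model(x))

print(ensemble(X[:5]))
```

The design choice mirrors the point above: the rule is authoritative in its normal operating region, while the trained student smooths over the edges where the hard threshold would fail catastrophically.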
Erik: Okay, got you. Maybe we can get a bit more into the Aitomatic product now, since we’re talking about how this works. Would it be a platform where engineers, maybe mechanical engineers or different teams, would have access to embed their knowledge? What would this actually look like from a user perspective?
Christopher: Yeah, from a delivery point of view, you can think of it as any other SaaS software platform. You probably use Gmail, which I used to run at Google; you don’t worry about where the infrastructure is, you just open up your web page and you start using it. So the idea here is that with Aitomatic, there’s a tool; again, you point your browser at a secure web page somewhere, and then you give it instructions, human instructions, and then, in our software, it generates models. Basically, natural language in, models out. And for the models, there are a number of sophisticated architectures that we have created, like the one that I’ve described, where we have a teacher, which is the human knowledge model, and a student, which is the machine learning model, and they get ensembled. We call that the K Oracle architecture; the K stands for knowledge. We have a host of others that are useful for different use cases. But a customer of ours, their team, we call them AI engineers, would interact with this system and generate these models, and these models go into a system application, which we also host, and they can launch and manage it directly from their cloud or our cloud. Deployment is very flexible, but access to this software is through that SaaS model.
Erik: Okay, and so for the users, I’m imagining you’re gonna have some data scientist, or somebody who has data science expertise, as a primary user, but you’re also gonna have very critical domain expert users. Sitting here in Shanghai, if I reflect on a large manufacturer in northern China, they might be relying on a lot of people in their 40s and 50s who maybe in some ways are quite savvy in terms of using mobile phones to manage their lives but might spend very little time actually on a computer and not really have much expertise there. So, first, what would the end user ecosystem look like, and then how do you engage the non-technical people to make sure that they’re able to efficiently embed their knowledge into this framework?
Christopher: Right. And to be sure, this is not consumer software. This is enterprise software. So when we say domain experts, they are experts, they are engineers, they just don’t happen to be AI engineers. Our customers typically have large teams with different roles. There’s invariably somebody who knows the domain very well. They may or may not be able to code at all. In fact, I’ll give you one example of a use case in China, where there is a large supermarket chain that is opening over a thousand stores per year, over a hundred per month, and they’re using the system to build what’s called forecasting for decision making: for example, forecasting what the inventory should be, or what the revenue might be, to determine exactly where to open the new store. Doing that by sheer human estimation alone is overwhelming in terms of the amount of input they have to consume, and also very error prone. And yet, the local managers are very good at sharing their knowledge, like: there’s a subway station on the right-hand side, and if it’s more than 100 meters away, then the sales, the foot traffic, may drop off in a certain way. Some of that knowledge is unique, or certainly applicable only to that locality. So those local managers would work either with our solution team or, more likely, their own AI engineering team, and they would actually say these things, and these things are then input into our system, and that natural language input is translated into code automatically. Of course, somebody has to review that code, review the output of that system.
But, for the most part, you can think of this system as going directly from that human knowledge description to a bunch of models that do the prediction, and you compare its performance against something that has no such human knowledge input, where you do what data scientists typically do today: give me, in this case, POS, or point-of-sale, data from the past, and then give me information about the geography, the map, and so on. And, invariably, when you add that human knowledge model, the prediction becomes better, and we can measure that. We can measure it in terms of what’s called the mean error of the prediction.
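As a rough illustration of that comparison, here is a toy forecasting experiment on synthetic data. The numbers and the “distance to subway” heuristic are invented for the sketch; the only point it makes is that encoding a manager’s rule of thumb as an extra feature measurably lowers the mean error versus a data-only baseline.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Synthetic store data: revenue depends on foot traffic plus a
# "distance to subway" effect that local managers know about but
# that is not in the POS history. (All numbers are invented.)
foot_traffic = rng.uniform(100, 1000, n)
subway_dist_m = rng.uniform(0, 300, n)
dropoff = np.maximum(subway_dist_m - 100, 0)  # sales drop beyond ~100 m
revenue = 2.0 * foot_traffic - 1.5 * dropoff + rng.normal(0, 20, n)

def fit_predict(features, target):
    """Ordinary least squares fit, returning in-sample predictions."""
    A = np.column_stack([features, np.ones(len(target))])
    coef, *_ = np.linalg.lstsq(A, target, rcond=None)
    return A @ coef

# Baseline: data-only model using POS-style history alone.
pred_data_only = fit_predict(foot_traffic[:, None], revenue)

# Knowledge-augmented model: the manager's heuristic, codified as a
# feature ("foot traffic drops off beyond ~100 m from the subway").
pred_knowledge = fit_predict(
    np.column_stack([foot_traffic, dropoff]), revenue)

def mean_abs_error(pred):
    return float(np.mean(np.abs(pred - revenue)))

print("MAE, data only:      ", round(mean_abs_error(pred_data_only), 1))
print("MAE, with knowledge: ", round(mean_abs_error(pred_knowledge), 1))
```

On this synthetic data, the knowledge-augmented model comes out well ahead, which is exactly the kind of measurable improvement described above.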
Erik: Yeah, I think this forecasting use case is quite interesting. Every company faces it, but with different dynamics. Maybe I can throw a problem by you, and I’d be interested to hear your thoughts on whether your solution could actually address it. This was something we were working on a couple years ago with a large chemical company. Their situation is that they have to manage supply and demand, because there are periods in the market when there’s more supply than demand and periods when there’s more demand than supply, and supply cannot — you can’t shift it very quickly. You have a certain number of facilities that are running, and adding new ones takes a lot of CapEx, and their forecasting was always terrible. It was like — 60 percent would be considered a good target. And this makes it very difficult for them to set prices, because they have to forecast to set prices. So they wanted to look at machine learning to address this. But then you have some things machine learning is probably good at, like correlating things like housing starts in Japan and car sales in the US, etc., which all work their way up the supply chain eventually to the demand for their materials. But then you have other things, like a fire at one of their competitors’ factories that shuts it down for 3 months, and all of a sudden — I mean, there’s no way the algorithm can manage that, because it happens every 2 years, but when it happens, it completely disrupts the industry. We really struggled with a pure ML approach, because there were occasional events like that which completely confused the algorithm. So, how would you look at addressing a challenge like that? Because I think that’s actually a fairly common challenge that industrial companies have.
Christopher: Exactly. I’m not here to say that this approach solves every possible problem out there, or is even superior in every case. If you’re doing Google advertising and trying to predict the likelihood of a click, you have lots and lots of predictive data out there; you don’t even need to know why. I remember when we were testing different colors for the background of the ad, and it turned out the best background color was a little yellowish, because it was predicted by data — measured, I would say, not even predicted — and Sergey Brin used to say, “That looks like piss.” But it is what it is. But you’re right. An industrial fire: give me time series data of all the fires that have happened and their impact. You just don’t have the data set. So this is a case that we have actually done, or are actually doing, in a very similar way. There’s a use case where we’re trying to forecast for industrial devices that are being manufactured by a large conglomerate, and they’re always trying to get ahead of the competition. If you’re making capacitors, how much volume should I make of the 10 µF versus the 25 µF? Decisions like that, but in a predictive manner, getting ahead of the market. You can’t do that by forecasting from time series data alone, precisely because of things like that. And yet, very often in these cases, when you ask a manager who has lived through enough, now that there has been such a fire, how they think demand will be affected, they can say, “I think it’s gonna be down by 20 percent.” They could be wrong, but it’s gonna be a heck of a lot better than a machine learning model that is completely unable to account for that. So, intuitively, you see that this can be better; then the next challenge becomes how you incorporate that knowledge into the system in such a way that it’s not a different system. The tool that we make provides our customers with a way to do that.
Erik: So if we think about the use cases, as you said, there are a lot of things that traditional ML does really well. If we think about the use cases that require more human input in the training process, what would those be? We have demand forecasting as one; anything related to predictive maintenance or, I guess, predictive analytics in situations where there are uncommon events could be another. What are the other things that have come up on your radar as key use cases to explore?
Christopher: Right, so, again, I’ll give a principle first and then give you more examples. The key principle that we have learned is just what we discussed. If your intuition is that there’s a human expert who can shed more light on this problem due to their life experience and so on, and it can make a predictive difference, then that’s a very good candidate for this kind of thing. And when you survey such cases, you find that they tend to be physical problems rather than, for example, the Google Ads example that I shared with you. I’ve given you a number of examples ranging from predictive maintenance in refrigeration to predictive forecasting and so on. There’s a completely different use case: a global marine navigation company that we work with. I live in the Bay Area, and if you go to Half Moon Bay, at the marina, you look at all the boats, and 70 percent of the masts are their brand. They also make what are called fish finders, which shoot a sonar beam straight down into the ocean, and then the echo comes back, very much like submarine technology. What comes back is an echogram, and guess who is really good at reading these echograms and able to tell you that’s a school of mackerel right there? Somebody might say, “Are you sure it’s mackerel and not sardines?” “Yes, it’s mackerel, trust me.” It’s not the engineers; it’s the fishermen, the fishermen who have used this equipment for 5, 10 years. And they’re good, by the way, only in their area. The case I’m thinking about in my head right now is in Hokkaido, with mackerel. So this company knows intuitively that we should work with these fishermen and somehow codify whatever they’re seeing in these images, because there are not a lot of these images, so you cannot label millions of them; you have to somehow capture what the fishermen are seeing.
They might say, "If you see this golf ball-shaped thing right next to this stick-like thing on that echogram," which, to you and me, looks like a mess. But when they say golf ball, I know what a golf ball looks like, and we can codify that experience. Then what happens is that this fish finder, whose market today is only, say, 10,000 expert fishermen in Japan, suddenly has an addressable market of 10 million, because even though you and I don't know how to interpret these echograms directly, by the benefit of this algorithm that we encoded, it just labels the mackerel and then you basically throw your net down there.
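To make the idea concrete, here is a minimal, hypothetical sketch of codifying an expert heuristic as a labeling rule, in the spirit of the fish-finder story. The feature names, the distance threshold, and the "golf ball near a stick" rule are all invented for illustration; a real system would extract these features from echogram pixels and combine many such rules with learned models.

```python
def near(a, b, max_dist=5.0):
    """Euclidean distance check between two detected features (hypothetical units)."""
    return ((a["x"] - b["x"]) ** 2 + (a["y"] - b["y"]) ** 2) ** 0.5 <= max_dist

def label_echogram(features):
    """Apply the expert's rule: a round 'golf ball' blob next to a
    'stick'-like feature suggests a school of mackerel."""
    balls = [f for f in features if f["shape"] == "golf_ball"]
    sticks = [f for f in features if f["shape"] == "stick"]
    for ball in balls:
        if any(near(ball, stick) for stick in sticks):
            return "mackerel"
    return "unknown"

# Example: two detected features close together on one echogram
detections = [
    {"shape": "golf_ball", "x": 10.0, "y": 20.0},
    {"shape": "stick", "x": 12.0, "y": 22.0},
]
print(label_echogram(detections))  # prints "mackerel"
```

The point of a rule like this is that it generates labels from expert knowledge rather than from millions of hand-labeled examples, which is exactly the small-data constraint Christopher describes.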
Erik: Okay, interesting. That actually raises another question about the ecosystem of AI algorithms. One other way of addressing this dearth of data is to look at how we can share data between different companies, obviously highly anonymized. Maybe one factory has a certain number of motors and those motors only break down so often, but those same motors are also deployed in a lot of other factories, so, in aggregate, there's actually a lot of data; it's just that industrial data tends to be highly firewalled from other systems. Is there any mechanism that you use, or are considering using, to enable the exchange of this insight or expertise, so that customer 10 can use the insight that was generated from customer 3? Or do you have too many privacy and legal issues there?
Christopher: Yeah, I think the opportunity is one level higher. There's a saying that I don't like and that I think is completely wrong: "Data is the new oil." It implies that data is inherently valuable and fungible and so on, and, as this discussion shows, one megabyte of data X is not at all the same as one megabyte of something else. I think the future, the opportunity, is in sharing models, not in sharing data, and sharing could mean selling. If I have a model that can predict or classify mackerel versus sardines in Hokkaido, I can sell that as a model or as a prediction service. This is starting to become a business model for some companies. There are startups in the US doing this with the easier, more readily available model sets, for example, image classification and so on. I think, increasingly, companies will find opportunities to essentially encode things, knowledge and data, into models, and then they will sell those models. In fact, in terms of the progression of machine learning, 10 years from now we'll look back and realize that this data-driven, data-only machine learning was actually the very basic step. It's almost like semiconductors at the transistor level. Then we go to LSI, large-scale integration, then very large scale, ultra large scale, and so on, and we deal with things as blocks rather than individual transistors. I really believe that knowledge-based training of models is the future. These models will get smarter and smarter. They will understand what a golf ball means. They will also understand English, Chinese, and so on. That's where the future is, and what we're working on is the beginning of that next stage of artificial intelligence.
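The "share models, not data" idea can be sketched in a few lines. Everything here is hypothetical, including the class name, the threshold rule, and the feature key; the point is only the shape of the arrangement: the provider exposes a prediction interface while the training data stays private.

```python
# Hypothetical sketch: selling predictions from a model without sharing
# the underlying training data.

class PredictionService:
    def __init__(self, model_fn, name):
        self._model_fn = model_fn  # the only artifact shared; raw data stays private
        self.name = name

    def predict(self, features):
        return self._model_fn(features)

# Provider side: a model trained (hypothetically) on private Hokkaido data.
def mackerel_vs_sardine(features):
    # stand-in decision rule; a real model would be learned from data
    return "mackerel" if features.get("echo_density", 0.0) > 0.7 else "sardine"

service = PredictionService(mackerel_vs_sardine, "hokkaido-mackerel-v1")

# Consumer side: uses predictions with no access to the training data.
print(service.predict({"echo_density": 0.9}))  # prints "mackerel"
```

In practice the interface would sit behind an API rather than a local function call, but the economic point is the same: the model, not the megabytes of data, is the product.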
Erik: Okay, great. I know Aitomatic is still a relatively young company, I guess in your second year, although, as you said, it's really an extension of work that you've been doing personally for the past decade, so there's a lot of know-how behind it. But from a development standpoint, what is in the future for you? What do you see over the next, let's say, 24 months in terms of a roadmap for new functionality, new features, new use cases?
Christopher: Yeah, as a company, we're at an interesting point. There's a phrase called product-market fit. We're trying to make sure all these ideas, which I'm very sure are correct, are correct and needed over the next 6 months rather than over the next 6 years. You can be right and completely off on the timing. We're fortunate in that, after we launched out of Panasonic, we have a lot of existing customers; we actually call them design partners today. We also have one other contract and so on, so we're working with, for an early-stage startup, something like over a dozen different customers and use cases. So, really, the next 12 months is iterating with these design partners toward a product set where they say, "Yes, this is what our domain experts can work with, together with our AI engineers." And while you can imagine all kinds of use cases, only three or four are going to be profitable and make sense over the next 12 months for the industry. People like to say the future has arrived, it's just unevenly distributed, so there will be leading use cases, and the use case we're seeing resonate the most is predictive maintenance for industrial equipment, that's one example. So, over the course of the next 24 months, what I want to get the company to is, of course, expanded revenues, but also, quite importantly, a well-defined functional set, no more, no less, don't overbuild, that satisfies our customers and that we can make generally available to the market.
Erik: Yeah, I'm curious, as a serial entrepreneur, somebody who's been involved with a lot of different technology projects, how do you think about the tension between focusing on a specific market, building a product for, say, one use case like predictive maintenance for industrial equipment, versus building more horizontal technologies that apply basic principles to multiple, potentially very different use cases that share similar features? I guess there are forces pulling you in both directions. With the second, you have a much larger potential market. With the first, it's easier to define a product that fits a particular set of customers. As an entrepreneur, how do you view this challenge?
Christopher: Definitely focus. Definitely focus. There's no such thing as saying, "Let's try to address everything and capture a large market," because there's always somebody else who's going to do it better than you. Now, focus doesn't mean betting on one thing. Like I say, product-market fit means being humble about how much you really can't predict. I'm very convinced, I'm very sure about this vision, I'm very sure about the technology that we have, but do we really have a product that people need and will pay for immediately? In computer science, we call that the exploration-exploitation trade-off. The idea is, of course, to try to understand the industry as much as possible; we have the advantage of coming in somewhat as insiders, so that's an unfair advantage. And then you have to experiment over a set of three or four use cases, not 30, but also not one. That's why I'm able to tell you now, for example, that some use cases already have traction. We have tried some other use cases for which the market is not that interested yet, or, for one reason or another, our customers are not profitable on those things, so what's the point of saying we're going to do those things? In a way, the structure of the answer is very simple. The challenge is the judgment. It comes down to that judgment, and I think experience and conversations help. Talking to your customers helps tremendously. This is not an exercise where you've got 10 smart guys inside the office talking to each other.
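For readers unfamiliar with the computer science term Christopher borrows, here is a minimal epsilon-greedy sketch of the exploration-exploitation trade-off: mostly "exploit" the best-known option, but occasionally "explore" the others. The use-case names and payoff numbers are invented for illustration only.

```python
import random

def epsilon_greedy_choice(avg_payoffs, epsilon=0.1, rng=random):
    """Pick an option: explore at random with probability epsilon, else exploit
    the option with the highest average payoff observed so far."""
    if rng.random() < epsilon:
        return rng.choice(list(avg_payoffs))      # explore
    return max(avg_payoffs, key=avg_payoffs.get)  # exploit

# Hypothetical observed payoffs per use case
payoffs = {"predictive_maintenance": 0.8, "demand_forecasting": 0.5, "fish_finding": 0.3}
print(epsilon_greedy_choice(payoffs, epsilon=0.0))  # prints "predictive_maintenance"
```

This is, of course, only a metaphor for the business judgment he describes: experimenting over three or four use cases while doubling down on the ones with traction.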
Erik: Yeah, yeah, that's right. This is something I've personally identified as an issue of mine. It's easy to brainstorm ideas, but, in the end, having some degree of success means really focusing on a few, and it's a challenging thing to choose bets because it does mean devoting a lot of time and…
Christopher: I like to say respect the challenge. Don’t think you’re so smart.
Erik: That’s right. Great. Well, Christopher, I think we’ve covered a lot of ground here. Anything else that folks should know about Aitomatic or about the problem here?
Christopher: I think we've certainly covered a lot, other than the fact that you can probably hear the excitement in my voice. I think, over the next 5 years, we're standing at the threshold of, I'm going to go on the record and say, a shift back to atoms from bits. Even Marc Andreessen has recently blogged about how we need to be making things; it's time to make things again. This is somewhat of an American-centric message as well, but events of recent years have highlighted the fact that we've offshored too much; we've given away or sent away a lot of the muscle while focusing only on the brain in the US. So, semiconductor tooling, making things, I think it's an exciting time to be in the physical space, certainly in the US, and even around the world.
Erik: Yeah, yeah, absolutely, I agree 100 percent. Two very quick last questions from me. The first: as a relatively early-stage company, do you have any funding rounds coming up? Because we actually do have a number of investors listening, in case you would be open to external participation.
Christopher: Yeah. We're very fortunate that, number one, we started out with customers already; we have revenues. I'm also reasonably well connected in the Valley, so we've quietly done some rounds of funding. But never say never, so we're always opportunistically talking to folks and trying to time things. I think we have good metrics as a startup, so if anybody is interested, feel free to reach out to me and we'll talk.
Erik: Okay, wonderful. And whether it's around investment, partnership, or becoming a customer, what's the best way for folks to get in touch with you or your team?
Christopher: Absolutely. The most valuable thing right now is design partnerships. That's more important than the next dollar of investment. I think aitomatic.com is a good way to get in touch with me; that's our website. Our company name is just like "automatic" except it starts with "AI," so aitomatic.com.
Erik: Great, Christopher. Thank you.
Christopher: Thank you.