Ep. 203
Scaling AI: Revolutionizing Manufacturing Operations
Nikunj Mehta, Founder & CEO, Falkonry
Thursday, April 11, 2024

In today's episode, we're joined by Nikunj Mehta, Founder and CEO of Falkonry, a company revolutionizing plant operations with AI-driven insights. We explore the evolution of AI from project-based work to scalable software solutions, discussing the role of generative AI in automating data processing and simplifying output delivery. Nikunj shares insights on building trust in AI systems and leveraging wearables for real-time insights on the shop floor.

Key Discussion Points:

Falkonry's platform analyzes industrial equipment data to identify issues and trends.

Software-based AI reduces implementation time and resources for companies.

Challenges in AI adoption include building trust in AI systems and integrating diverse data sources.

To learn more about our guest, you can find him at:

Website: https://falkonry.com/

LinkedIn: https://www.linkedin.com/in/nrmehta/ 

Transcript

Erik: Nikunj, thank you for joining us on the podcast today.

Nikunj: Yeah, thanks. It's my pleasure to be here.

Erik: Yeah, great. For me, this is a really important topic that we're going to be discussing: at a high level, how do you turn AI from a project into software? Because of GenAI and some other recent developments, this has become quite an interesting area.

Nikunj: Likewise. I've spent 15 years doing that and have taken some surprising twists and turns along the way. But it is also a topic that's very close to my heart.

Erik: So before we get into the heart of the topic, I would like to understand a little bit more about yourself and also about your company, Falkonry. You set up the business, if I'm correct, in 2012. So it's been about a decade now. The AI landscape was very different back then. What was it? What was the spark that motivated you to set up this particular business at that particular time?

Nikunj: Yeah, I was at C3 at the time. I was a software architect, and we were developing a platform for energy efficiency and greenhouse gas monitoring and mitigation. As we were doing that, we came into conversations with some power systems manufacturers. I learned from them that there are a lot of unanswered questions in time series data. Especially, these were time series produced from the operation of systems. In the course of understanding that customer's needs, I realized that the landscape of time series systems was very impoverished. There was not much research. There were no software providers. People did not have a language to speak about their problems. Then I realized that as we were seeking to adopt more climate-friendly solutions, we would have to do something about this time series data problem. Not so much that there is too much data, but that we don't really understand how to work with it. Over the course of time, we've been perfecting the way of both harnessing the data and making it usable by people. So for me, the spark was that this is a completely new type of data that people don't know much about. And from my experience at Oracle — I was there for about five years — I had learned that a whole ecosystem of software companies, not just project companies, would emerge when you approach these problems correctly.

Erik: It's interesting that you mentioned carbon management as the initial project that motivated you to work on this. I'm curious about your perspective on, let's say, horizontal versus vertical applications for AI. Because I think Falkonry and a lot of the AI platforms are relatively horizontal, right? I guess, in the end, you're working with data, and it doesn't really matter what the subject matter of that data is. In a lot of other software domains, you see companies building SaaS applications that are very vertical, where it's really about the UX and UI and understanding the specific customer problem at great depth so that you can build the right solution for them. How do you see that evolving for AI? Do you see AI moving more towards vertical applications, or is it really about building the strong horizontal toolkit and then allowing the customer or system integrators to figure out how to use it for specific applications?

Nikunj: Yeah, I'd make two points. One, project versus software. Generally, software tends to be horizontal. Projects tend to be vertical. It doesn't mean they are one and the same, but that is typically the case. The second is, when it comes to AI, what is usually hardest for AI is understanding the data type. So speech — meaning, the spoken word — versus text are very different data types, even though they may be related. Likewise, time series is a very different data type from text. So at the fundamental level, one has to figure out how to deal with that data type. You could still consider it horizontal, because time series is not peculiar to, say, pharmaceuticals versus, for example, radar. They're still both time series data. Maybe the rate at which the data is sampled can be very different, but they are still time series data. I do think that AI fundamentally has to be good at time series data in order for people to really exploit such techniques.

So overall, at a high level, horizontal approaches for AI are here to stay, because that's where the economies of scale are. And I believe that where it specializes, it's going to be about how we use the same computing architecture for a different type of data. And that data type has to be handled, if you will, at the database level, not so much at the application level.

Erik: Okay. I want to get into this topic of the evolution of AI adoption from project-based towards software-based. But before we go deep into that, let's cover a bit more of Falkonry's portfolio, so people also know where you're coming from as a business. What I'm seeing, just at a high level, is that you have a time series AI cloud. You plug in data.

Nikunj: Time series.

Erik: Yeah, plug in time series. And so time series is really the theme here. What is the set of products that you've built around this AI cloud?

Nikunj: What we learned from our work was that people really had very poor technology to manipulate and manage time series data. So the product we call the Falkonry AI Cloud provides two core capabilities. One is the processing of this data for analytical needs. That involves different techniques depending on the situation. One technique Falkonry offers is anomaly detection. Anomaly detection identifies unusual behavior, rare behavior, novel behavior. People want to know that because they have a long tail of problems, problems that don't happen often even though the equipment is generally functioning in a normal manner. And when there is abnormal behavior, it's usually indicative of some kind of damage or health condition. So that's an example of the kind of processing Falkonry performs.
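
To make the anomaly detection idea concrete, here is a minimal, illustrative sketch: it flags points that deviate strongly from recent behavior using a rolling z-score. The sensor name, sampling rate, window size, and threshold are all assumptions made for the example; this is not Falkonry's implementation.

```python
import numpy as np
import pandas as pd

# Synthetic vibration signal sampled at 1 Hz: mostly normal behavior,
# with a short burst of unusual amplitude injected near the end.
rng = np.random.default_rng(seed=0)
t = pd.date_range("2024-01-01", periods=3600, freq="s")
signal = rng.normal(loc=0.5, scale=0.05, size=len(t))
signal[3000:3060] += 0.4  # "novel" behavior no one has ever labeled

series = pd.Series(signal, index=t, name="vibration")

# Rolling z-score: how far each sample sits from recent behavior.
window = 300  # 5 minutes of history
rolling_mean = series.rolling(window).mean()
rolling_std = series.rolling(window).std()
z_score = (series - rolling_mean) / rolling_std

anomalies = series[z_score.abs() > 4.0]
print(f"Flagged {len(anomalies)} anomalous samples, "
      f"first at {anomalies.index.min()}")
```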

We also perform pattern processing, which means distinguishing between patterns and recognizing patterns. The third kind that Falkonry performs is rule processing. That's where you can apply either precise or fuzzy predicates to data to translate whatever you are measuring into actions that you want to take. These are three different ways in which data gets processed. In addition to those processing methods, Falkonry provides means of ingesting, organizing, and storing data in a way that's consistent with the unified namespace, as an example, and a way to visualize that data so that people can understand what their data looks like and what happens in their plant. These are at the heart of what Falkonry provides in its AI Cloud.
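
As a rough illustration of the difference between precise and fuzzy predicates in rule processing, here is a small sketch. The temperature limits, the membership function, and the resulting action are hypothetical choices for the example, not anything from Falkonry's product.

```python
def precise_rule(temperature_c: float) -> bool:
    """Precise predicate: fire exactly when the hard limit is crossed."""
    return temperature_c > 80.0

def fuzzy_rule(temperature_c: float) -> float:
    """Fuzzy predicate: degree of membership in 'too hot', ramping from 75 to 85 C."""
    return min(1.0, max(0.0, (temperature_c - 75.0) / 10.0))

for reading in [70.0, 78.0, 82.0, 90.0]:
    act = precise_rule(reading) or fuzzy_rule(reading) > 0.6
    action = "open work order" if act else "no action"
    print(f"{reading:5.1f} C  precise={precise_rule(reading)}  "
          f"fuzzy={fuzzy_rule(reading):.2f}  -> {action}")
```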

Erik: Okay. And with your users on the front end, would they typically be engaging directly with Falkonry's front end, or would you be then plugging these outputs from your analysis into their existing software stack?

Nikunj: Generally, in the ordinary course of operation, the results of Falkonry would get plugged into a workflow system, and that's where people would interact with them. We're also seeing that when people are trying to develop trust in the analytical process, they want to engage directly with the Falkonry interface, because that's where explainability is the highest. Secondly, we've also seen that a lot of people need to have means of understanding what happens in their data. For that as well, they engage primarily with Falkonry through its own user interface. So in general, the large number of users who work with Falkonry results will do so through a workflow system. But there will be a small group of people who are the experts at that organization's data, and they will engage directly with Falkonry's interface.

Erik: Okay. Got you. Then the industries you're covering: several industries, but I guess they have certain themes here. Oil and gas, chemicals, semicon, pharma, which are kind of process-oriented, and then automotive and metals, which are discrete but also very high volume.

Nikunj: Yeah, I would say that metals here does not mean subtractive manufacturing but metal production itself. Here, we're talking about steelmaking, which is actually a process industry. On the flip side, semiconductors, even though it feels like a process industry, the work that is done inside of a chamber is very discrete in nature. So when you start working at the level of time series, the distinction between discrete and process becomes immaterial, actually, and you need to constantly go back and forth between the two worlds. So yes, we do work with the industries that you've named, and we also work with equipment operators of very complex systems, such as the US Navy, who are also seeking to get the most out of all the different types of equipment used inside of a single warship, as an example. It is pretty diverse in that sense, because they are all seeking to understand what happens in their world from data.

Erik: Oh, it's interesting. So basically, you're working with a set of manufacturing industries that have very high cost of downtime or of disruptions of some kind, have a high premium on OEE. Then you're working with defense. And there, you're actually working with the assets themselves rather than the production facilities. Is that right?

Nikunj: That is correct. These assets are not producing something. They are used to perform the function of the organization, which is defense, as an example.

Erik: I see. And your platform for analyzing those time series for defense, does that have to be substantially different from the manufacturing use cases, or is it fundamentally the same?

Nikunj: No, it's no different. It's actually identical. It's exactly the same. The way to think about it is, any complex physical system, especially one that is used on a continuous basis — meaning it is used at production grade — is going to have recurring behaviors. People want to know when those behaviors start to diverge from the recurrence so that they can take preventive steps. And because that is the common theme across all of the customers I mentioned to you, their needs can be fulfilled from a time series AI cloud. We need to be able to store all of that data. We need to be able to visualize it and retrieve it through the way it is organized. And we need to be able to find any type of behavior, whether it is unusual behavior or behavior of a particular known kind, and whether it is expressed precisely or by an example. That is what people use Falkonry for.

I'd go back to one of the questions we discussed earlier, project versus software. The basic challenge in this field has been that customers want solutions, and they constitute projects to find the solutions. And because the problems are so complex, solving them is itself very time consuming. At the end, we get a solution, but the problem is not necessarily solved. In the context of analytics, developing such software has always been hard; it's not new. In the previous generation, with BI solutions and, say, the star schema and OLAP database technology, it was a very hard solution to come to. There were plenty of analytics that would look at a general ledger, for example. But it was not until the star schema was born and OLAP databases became specialized that it was possible to get a business intelligence solution without doing any new software development. At that point, it became just software that you implement.

So the last maybe five or seven years with machine learning have been mostly projects trying to figure out what the commonality is, what the software behind it is. Even for ourselves, we were engaged in a lot of machine learning with a very explicit goal: we need to find the software that is common across these machine learning activities. So we first focused on pattern identification as software. But we realized in the process that most of the needs in the industrial world don't involve a known pattern, because these are new problems happening for the first time. If you cannot solve the first-time problem, then there is much less value to customers. And customers don't even have precise records of when certain specific issues have happened. That led us to understand that we need anomaly detection in order to differentiate behaviors that matter from the ones that don't, without any examples. That's how you go towards software. You have to be able to take a versatile set of needs, without a precise specification of what people are looking for, and yet span a very big distance from the raw starting point to where people understand what is happening in their world, without having to set things up.

Erik: Okay. Yeah, that's fascinating. The state of your work today — and maybe you can also comment on the state of the industry a bit more broadly — in terms of this path from project-based towards software-based AI, where are we today? And where is Falkonry, from a customer perspective, in terms of being able to deploy a system, and the amount of resources, time, and people from the client side required to work on the project in order to get it to a functional state?

Nikunj: A very good question. I think this has dramatically changed in just the last one year. I will attribute a lot of that change to generative AI technologies. Not generative AI itself but generative AI technologies. Broadly speaking, there are three major phases to the adoption of AI. The first is connecting data sources to AI. The second is developing trust in the AI's function. Then the third is delivering the AI results into the hands of people who are supposed to take action. These are, generally speaking, the three major phases. Each one of them was error prone and was very challenging, especially the second one.

In the last year or so, with the convergence of MQTT and various data modeling standards around MQTT, I think that problem has been made much less critical. These days, for example, we work with lots of data sources like the IBA historian, the PI system and its historians, but also with Litmus Edge, which is capable of connecting the plant directly to the cloud without going through a historian. So that problem has largely been solved. Our customers are generally connecting their plant to Falkonry within a matter of hours. They are doing it with their existing talent. You don't need to bring in a system integrator. You just need to give them instructions in the software, and they will do it themselves. That is a major advancement. People would lose months in the past trying to do that. Secondly, on the AI itself. Previously, you had to set up the machine learning. And to do that, you would have to bring together the domain expert, the data scientist, and the software or data engineer, who would iteratively find out what the domain expert knows, and therefore how to process the data and how to prepare the data for the software.
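
For readers who want a picture of what connecting a plant over MQTT can look like, here is a minimal subscription sketch using a unified-namespace-style topic. The broker address, topic path, and JSON payload shape are assumptions for illustration; they are not specific to Falkonry, PI, IBA, or Litmus Edge.

```python
import json
import paho.mqtt.client as mqtt  # pip install "paho-mqtt<2" for this 1.x-style API

# Unified-namespace-style topic: enterprise/site/area/line/equipment/signal
TOPIC = "acme/plant1/caster/segment12/motor_current"  # hypothetical path
BROKER = "broker.example.local"                       # hypothetical broker

def on_message(client, userdata, message):
    # Payloads are assumed to be JSON objects with a timestamp and a value.
    sample = json.loads(message.payload)
    print(f"{message.topic}: t={sample['timestamp']} value={sample['value']}")
    # In a real deployment, this is where samples would be forwarded
    # to the time series store or analytics service.

client = mqtt.Client()          # paho-mqtt 2.x additionally takes a CallbackAPIVersion
client.on_message = on_message
client.connect(BROKER, port=1883)
client.subscribe(TOPIC)
client.loop_forever()
```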

There are many challenges in the industrial world, but the ones that I would highlight are: one, data preparation, which means aligning data or removing certain artifacts that are present in the data. Second is data quality. That usually refers to the fact that people don't know how the data being collected relates to the events of interest. The third is the problem of modeling and model validation. That is mostly a function of data science: finding what the model's outputs look like and comparing them with the real world. The fourth is MLOps. This is where people are trying to figure out: how do I run all of these models over time? How do I do maintenance, et cetera? These problems, and I'm not listing all of them, have mostly limited our ability to succeed. They are all very complex. They take a lot of time. People used to spend weeks, and in many other companies' cases months, on a per-use-case basis.
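
As a small example of the data preparation step being described, the sketch below aligns two sensors sampled at different rates onto a common time grid and masks an obvious sensor spike before analysis. The signal names, rates, and thresholds are invented for the illustration.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=1)

# Two sensors sampled at different rates with slightly different clocks.
pressure = pd.Series(
    rng.normal(100.0, 2.0, size=600),
    index=pd.date_range("2024-01-01", periods=600, freq="s"),
    name="pressure",
)
temperature = pd.Series(
    rng.normal(40.0, 1.0, size=60),
    index=pd.date_range("2024-01-01 00:00:03", periods=60, freq="10s"),
    name="temperature",
)
pressure.iloc[100] = 10_000.0  # an obvious sensor spike (artifact)

# Mask physically implausible readings before alignment.
pressure[pressure > 1_000.0] = np.nan

# Align both signals onto a common 10-second grid and fill small gaps.
frame = pd.concat(
    [pressure.resample("10s").mean(), temperature.resample("10s").mean()],
    axis=1,
).interpolate(limit=2)

print(frame.head())
```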

For example, in a paper that was published in 2021, one of our customers reported that they were able to complete one use case every two weeks. They did 50 such use cases. They could do these in parallel, but each one would take two weeks. Now, the same customer with our generative AI technology can do 100 use cases within five weeks, where they are not doing anything themselves. It's the computer that's learning whatever it needs to learn, and it is iterating over all of these use cases. So you go from 50 weeks in which they could do 50 use cases, to five weeks in which they can do all of those 50 or, actually, say 150 use cases. And in that process, they are not doing any prep. They are not doing any labeling or data quality improvement. They are not trying to figure out how to deploy it. All of that is automated. So that's how much of a shift has happened from, say, 2021 and '22 to today in 2024.

Erik: Yeah, wow. Okay. So in some parts of the workflow, we have basically a 10x or even 100x improvement.

Nikunj: Yes, right.

Erik: Where do you see the bottlenecks today? I imagine that if you look across the entire workflow, people would not say that we have a 50x improvement, because there are still some bottlenecks preventing the practical use of these systems.

Nikunj: Yeah, very much. So I think the major bottleneck right now is trust. It is an essential bottleneck. I think we, as a society, will have to find ways to develop trust. It is a part of the introduction of any new technology. We are seeing how trust and safety are major issues with generative AI in general. All of us are still questioning whatever it is telling us. That is the major bottleneck from a technology perspective. The second one tends to be the integration of data sources into an AI system. That's an area where the divergence of methods used across different technologies or different vendors is also introducing delays, because people have to make choices, and there are interoperability issues with their choices. So that one is less of a social problem than an economic one: we have to agree on common mechanisms to transfer data. We are working with certain companies like Litmus who are addressing this with a unified namespace and unified architectures for the data. That way, no matter where the data originates and what the source of the behavior recorded in the data is, it can still be processed the same way by analytics. So we are seeing the same basis for a solution with any vendor. It doesn't really matter, because these are not vendor-specific standards. I would say those are two of the biggest challenges to adoption. Naturally, the human issue is the larger one. I think the software and standards issues can actually get addressed. Whatever succeeds is going to become more prevalent.

What we're seeing in the context of trust is that, in the past, it took people time to get the outputs from such analytics, and so it would take a long time to develop trust. But now, because the initial outputs from AI are available within just a couple of weeks, people are able to develop trust. And once they have developed trust, they want scale. They want to get outputs directly in the form of work orders in the hands of maintenance teams. That's a problem that can be more easily solved, but it is currently one of the bottlenecks: how do you get it into the hands of the right people at the right time, with the right level of detail, so that they will take the right action?

Erik: Yeah, I mean, I can imagine, especially for anomalies which are a first-time occurrence, if the AI is saying you need to shut down this production line because there's some anomaly, you have a question. Right? Do we shut down the production line and lose productivity because we trust the system, or do we continue operations and risk a fault, because who knows if the system is correct? As a tech company, I mean, a lot of technology companies are designed to build technology. They're not necessarily designed to change human behavior. How do you approach this topic of helping to build trust in your systems? It's probably not for the people that you're selling the software to. It might be for the frontline people that are using your software.

Nikunj: The end users.

Erik: Yeah, so how do you approach that as a technology company?

Nikunj: I think there are two components to it. First is developing technical confidence in the findings. That means: is the AI actually intelligent? Is it finding information that I do not have and that is helpful for me to do my job? Of course, the less effort it takes to be helpful, the better. The second is: is it being made available to me in a form I can consume? Now, because problems on these lines tend to produce effects in the millions of dollars, there is a high level of motivation. So to the extent that intelligence can be proven, there is a propensity to adopt. And to prove it, we conduct basically longitudinal studies, usually over a month but sometimes longer, in which people can compare business as usual to the findings of the AI. Was the AI able to see the incipient damage that eventually led to a stoppage or a delay, and the correction that took place after that? Did the AI identify behaviors that were spurious and therefore did not require any action? And, on balance, how much useful information is it finding versus the nuisance it might be creating? That is one of the key assessments people want to make, and it becomes a basis for trust.
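
A rough sketch of that kind of longitudinal assessment: compare a month of AI alerts against the stoppages that actually occurred, and tally useful detections, nuisance alerts, and missed events. The event timestamps and the 48-hour lead-time window below are invented for the example.

```python
from datetime import datetime, timedelta

# Hypothetical study data: when the AI raised alerts, and when stoppages occurred.
ai_alerts = [datetime(2024, 3, 2, 14), datetime(2024, 3, 9, 6),
             datetime(2024, 3, 17, 22), datetime(2024, 3, 25, 10)]
stoppages = [datetime(2024, 3, 3, 8), datetime(2024, 3, 18, 15)]

LEAD_WINDOW = timedelta(hours=48)  # an alert counts if it precedes a stoppage by <= 48 h

useful = [a for a in ai_alerts
          if any(timedelta(0) <= s - a <= LEAD_WINDOW for s in stoppages)]
nuisance = [a for a in ai_alerts if a not in useful]
missed = [s for s in stoppages
          if not any(timedelta(0) <= s - a <= LEAD_WINDOW for a in ai_alerts)]

print(f"useful alerts:    {len(useful)}")
print(f"nuisance alerts:  {len(nuisance)}")
print(f"missed stoppages: {len(missed)}")
```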

As I said earlier, though, the second element of trust is how the AI conveys what it is finding, and whether it is in a form that I can consume. Not so much whether I believe it, but whether I understand it. The belief part is usually handled in the study that I described, because I can see, longitudinally across a large number of components and subsystems in my plant, how the AI is acting. Therefore, it's not a fluke that it found something, because it would have to find that in many places.

Trust has actually often come down to: is it simple enough that I can understand what I should be doing with it? And there, we are working with organizations to identify subsystems that have a likelihood of failing, in a way that's not very different from an FMECA, except they are not doing the criticality analysis to figure out all the causes, how exactly those causes play out, or whether they are instrumented to be measured. That would be an extensive analysis if done exhaustively. Here, it's simply understanding that when this component breaks, these are the likely things it will affect, and this is all of the sensing we are doing in the vicinity of such a thing. So we are taking that level of information to increase the confidence of our users: when the AI finds an anomaly, it finds it in multiple such sources of data. They can see how it evolves over a limited amount of time, maybe a few hours, so they can see how it was working when it was normal and how it worked at the time that damage started to happen.
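
That "when this component breaks, these are the likely things it will affect" reasoning can be pictured as a small directed graph of components and a walk over it. The plant components and their relationships below are entirely hypothetical.

```python
from collections import deque

# Hypothetical downstream-effect graph: component -> things it directly affects.
affects = {
    "hydraulic_pump": ["segment_drive", "cooling_loop"],
    "segment_drive": ["strand_tension"],
    "cooling_loop": ["mold_temperature"],
    "strand_tension": ["slab_quality"],
    "mold_temperature": ["slab_quality"],
}

def likely_effects(component: str) -> list[str]:
    """Breadth-first walk over the effect graph from a failed component."""
    seen, queue, order = {component}, deque([component]), []
    while queue:
        for nxt in affects.get(queue.popleft(), []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
                order.append(nxt)
    return order

print(likely_effects("hydraulic_pump"))
# ['segment_drive', 'cooling_loop', 'strand_tension', 'mold_temperature', 'slab_quality']
```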

Erik: On this challenge of communicating the output of the analysis to the user, what do you see as the role for generative AI there? Because I guess that's a technology uniquely well suited for communicating in a relatively human way. Right? Which is very different from the traditional dashboard of analytics that somebody might be working with. Do you see a future where there's going to be a lot more language-based communication of here's the problem, and here are maybe the potential root causes, and so on? Or do you think the traditional kind of dashboard showing different views of data will still be the predominant way of engaging with the dataset?

Nikunj: Erik, I think one important thing at this point is to recognize that manufacturing is perhaps a million different processes that were never intended to work identically. Okay? The example of cleanroom manufacturing in semiconductor fabrication is a rarity. Even two lines that were commissioned at the same time probably don't work the same over their lives. They start diverging pretty soon after. Generative AI approaches to explain what the AI has found are likely to be most useful in predictive maintenance, specifically where you are developing part- or component-specific predictions that can be applied at very large scale. Think pumps. Think motors, where there are tens of thousands and maybe millions of these deployed, and where the same vendor is making a very large number of them and has all the failure records needed to develop a language-based means of communicating to the end user.

What we have learned in our experience, in manufacturing more broadly, is that everything that is considered either critical or balance of plant, and is not in the very high cardinality of systems, needs attention. And there is almost nobody who can solve that problem. That's where Falkonry generally operates. In that space, less is more. So we just need to be able to say that we see a problem. Take, say, the segment 12 Lazo, which is part of the continuous casting process, where a molten slab of steel flows through and where it is generally idle, but when a bar starts to flow, it moves up. Just knowing that there is a tension problem in the Lazo is sufficient. Nobody is going to immediately stop the line, because a continuous caster does not shut down for a whole week. Then every week, there is probably a four-hour window in which any maintenance has to be performed. But knowing that there was a malfunction in the Lazo means that somebody will check it out when that maintenance window opens up.

Usually, people don't need to know what was in the data, because that validation took place at the time that trust was being created. We expect that there will be an operations service center that the maintenance team can call up and ask: can you explain to me what happened? What was that AI finding? That would be only in rare occurrences where they need that type of assistance. So we need to think about this from the mindset of the maintenance technician: they don't need to be instructed at that level of specificity about what their job is or what they need to do. They just need to know where and when, and have some way of finding out why, when they need to know that and cannot determine it on their own. This is probably a better medium for adoption of AI in the manufacturing space. Otherwise, it's going to take a long time. There are too many different systems, and there is no public data. So generative AI will also create a lot of burden on people to understand what this AI is trying to tell them.

Erik: Kind of shifting into a different topic. A lot of organizations, certainly the ones that you're working with, are sitting on huge amounts of archived data. In a lot of the scenarios we've been looking at, with first-time anomaly detection, for example, you're dealing with real-time data, or real-time plus the recent past. What is your view of the potential value in archived industrial data? Do you think it is worth storing that data and processing it to try to extract potential insights? Or is it generally only worth the effort and the cost in rare circumstances?

Nikunj: Yeah, very good question. Recall the issue we talked about earlier about data quality and data preparation. A lot of times when people have archived industrial data, they've done some prep or other on that data. It is quite possible that some critical detail was cleaned up because it was introducing noise. As a very simple example, historians are designed to store data in a SQL database, and SQL database technology was not as scalable as modern data storage technology is. As a result, they were always trying to remove data that was only slightly different from previous values. So archived industrial data, in the instance where it was put into historian technology, is unlikely to be of much value, because it would have eliminated critical signal that was present in the data. On the other hand, storing full-fidelity data into historical archives may be useful, but only if there is a precise catalogue of that information.
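
The downsampling being described is often configured as deadband (exception) compression. The sketch below shows how it silently drops readings that differ only slightly from the last stored value, which is exactly the kind of subtle signal an anomaly detector may need; the deadband width is an arbitrary example value.

```python
def deadband_compress(samples, deadband=0.5):
    """Keep a sample only if it differs from the last stored value by more than the deadband."""
    stored = [samples[0]]
    for value in samples[1:]:
        if abs(value - stored[-1]) > deadband:
            stored.append(value)
    return stored

# A slow drift of 0.1 per reading: each step is below the deadband,
# so a historian configured this way silently discards most of the trend.
drift = [20.0 + 0.1 * i for i in range(30)]
print(len(drift), "raw readings ->", len(deadband_compress(drift)), "stored readings")
```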

We've seen, for example, in the Department of Defense that they collect very high-fidelity data during testing. But soon after the testing is completed, they don't have any means of locating the data that they want. So even though they put it into a historical archive, it's very hard for them to find that data later. For these reasons, we believe that the value to be created is going to be much higher from near-past and real-time data, where you can actually make sense of it and confirm that whatever conclusion you reached is indeed correct and actionable. Historical data could be used to speed up the learning process. But that is entirely a function of how stable the operation is and how well cataloged the data is. And in the absence of knowledge of these two, we generally ascribe very low potential value to historical data archives.

Erik: Okay. So with your AI engine, what is the duration that you recommend your customers or your users store the data that's been processed there? I mean, I'm sure this differs quite a bit situation by situation. But what tends to make sense in the more common situations?

Nikunj: Yeah, actually, for the AI to learn and become good, it needs to have 10,000 data points at the minimum. And if you're sampling data 10 times a second, then you could get there within just a few hours. So you don't need a lot of history. However, we still tell customers that for the first couple of weeks, we are going to observe the plant and look at the data it is producing, so we can develop a reasonably good baseline. Usually, it takes about a month or so for that baseline to work equally well on 80% of all of the data coming from the plant. Over a three-month period, 90% of that data will be well baselined. So that's the amount of history, depending on whether you want to get to 80% or 90%, that is going to be needed. Some 5% of your plant is always going to be changing, so you should not try to get to 100%, because it's actually imaginary. It doesn't exist. And getting to 80% means that you can operationalize. So basically, within a month you can operationalize. Having much more than a month's history is not going to make a big difference. So what we advise our customers is: if you have the last one month of history, that would be great. Then we can skip ahead and save ourselves a few weeks of clock time. But beyond that, your plant keeps changing, and you don't have precise records of what changed when. So it would be very hard to learn from that old data.
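
A quick back-of-the-envelope sketch of that 10,000-sample guideline: how long it takes to accumulate that much data at a few example sampling rates (the rates are just examples).

```python
MIN_SAMPLES = 10_000

for rate_hz in (10.0, 1.0, 1.0 / 60.0):  # 10 Hz, 1 Hz, one sample per minute
    hours = MIN_SAMPLES / rate_hz / 3600
    print(f"{rate_hz:8.4f} Hz -> {hours:7.1f} hours of history for {MIN_SAMPLES} samples")
```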

Erik: Okay. That's a great insight. Yeah, the plant is certainly evolving, right? So having two years of historical data is not very useful if the production line has shifted over that time.

Nikunj: Yeah, there is no record of what changed when.

Erik: So you've shared already in some parts of the process the very significant improvements that were made over the past few years. If you look at what you're bringing to market today and then project over the next, let's say, three to five years, what do you think are going to be — what are the technologies that you're most excited about? And what do you think are going to be the changes in terms of how you're working with customers in the coming several years?

Nikunj: I mean, Falkonry is known for innovation. We've been recognized as an AI 50 company by Forbes and CB Insights. So innovation is in our DNA. We have 29 patents all over the world, all the way from how we write data into storage so that it can be easily panned and zoomed, all the way through to how deep learning is done on time series data. So we continue to innovate, and we have done so for the last 10 years.

Now, as I look ahead to the next few years, actually, the bulk of the innovation is in making the core innovations easily usable by people. In that regard, there's going to be a lot of integration with data sources and with the standards that people expect to use in their enterprise deployments. Likewise, there will be a lot of integration with work management systems, so that scheduling can be done consistently with their practices. I would say that integration is actually a major area of work. I know that it is not exactly innovation. But then, when you start thinking about whether it is possible for AI to conclude very quickly that recently completed maintenance was effective, well, people need to know that. Because they're going to do maintenance regardless of whether AI told them to do it. And knowing whether that maintenance was performed correctly is likely to be very helpful. We know that a lot of failures are the result of performing maintenance. So that's an area of innovation that we are expecting to bring to market.

The second one is, when people are trying to look at the state of a system without identifying failure modes in advance, how can we take information about the design of the plant so that we can automatically model the spreading of failures from one part of the system to another, and do so from its data, so that people don't have to hold a static view of the theory of failures and then maintain that static view over time? That's another major area of future work. The third one is going to be about how we can make AI easy for people to override and say: I know why the AI does what it does, but I don't want it. I want the AI to not do it. These are sort of the safety hatches that generative AI is also adding. No hate speech, as an example, right? Don't reveal people's private information. Safety hatches on top of our generative AI so that people will feel more comfortable with the AI and will be more efficient working with it. These are three broad areas where we see innovation in our future.

Erik: Okay. Last question. We've been talking a lot about your business. We've been talking about some of the core technology that you're working with. If you look a little bit on the periphery of your business, what else in the ecosystem do you see today that you maybe are just personally excited about? That might not have traditionally been a core part of Falkonry's business, but that could be impactful. Is there anything else that you think might be a bit under the radar that people should be paying more attention to today?

Nikunj: I think one area that is challenging is that people are not used to working with real-time data in the plant in general. Everybody looks at a slightly old, static view of the plant. What is it going to take for people to change their work so that they can work smarter and are able to make decisions more nimbly? It's a big human factors question, and I don't think most of us have a good sense of how that can be achieved. So I'd be very curious to see how human factors play a role in the evolution of the worker experience. A lot of this is seen as connected worker, but that only plays a part in it. I think there is more to it than that. Especially, if you will, contextual and situational visibility into what's going on, where you are able to combine your eyesight as well as data to make in-time decisions. So that's one area I'm excited about. The second one is understanding the relationship between the design and the operation of physical systems. To a large extent, we understand that in the simulation world. But how can simulation and physical design play a role in operations? This is still an unsolved question. And so that is an area where, computationally, I'm very excited about what lies ahead, because we now have better computational means and because we are able to better learn the noise that exists in operations but is not present in simulation.

Erik: Okay. Two really fascinating areas. I mean, on the first topic, I guess we'd be talking largely about wearables, say, some kind of goggle-type wearable. I guess they could also be screens. Is it just the usability? Obviously, when you're in an industrial environment, walking around with a screen in your hand or a screen over your eyes is still a bit challenging. Given the state of the technology today, do you see it really just as a hardware limitation, or are there other limitations that you see?

Nikunj: This is a form factor question, whether it's a wearable or an augmented environment. But there's also the question of what information can be consumed without overwhelming the user. That's where my personal feeling is that it's less likely to be augmented reality, and more likely to be mobile information that people can carry with them. It could be on their watch, and it could be more aware of their location, and therefore of what information is available as they are passing by, or whether they are able to go over schematics and review all of the current information as they make decisions for that period of time. So perhaps not as high tech. But still, it's a human factors consideration. How does work evolve, especially in maintenance, operations, and quality teams, so that they are able to act on information that is current? Right now, the only people who do that are the people in the pulpit, but not anybody else.

Erik: Okay. That's interesting. So the challenge is: you want to avoid distracting people. That means you have to, in a discreet way, present only the relevant information, out of all the potential information you could present to them, and on a very small screen.

Nikunj: Yeah.

Erik: Nikunj, I think we've covered a fair bit today. Is there anything that we haven't touched on that's important for folks to understand?

Nikunj: Yeah, so we've talked a little bit about project versus software. There's one thing I wanted to highlight. The last 10 years or so have been mostly about solutions and projects. And the one thing that we all have to recognize is that it has created a fairly unsustainable situation for the industry overall. One of the effects is that startups are going to start to disappear, if you will, from our view. That's an indication that, one, we have completed a major cycle in the evolution of our space, our field. And two, that projects and what might appear to be perfect-fit solutions are more of a mirage than reality. We now have to be much more aware of what makes something software that we can apply repeatedly in lots of places, even though the process might involve some effort on the part of the people managing it. That's probably part and parcel of the field evolving from solutions to a specific problem, which are almost invisible, to software that can be applied in lots of places.

And so I think we should look for industrial data platforms. Falkonry is working on an industrial data platform that's kind of like Snowflake and OpenAI put together into one single solution. That is the future. Time series data is not really amenable to anything else but AI. It's too voluminous, too complex and, more importantly, too opaque to people, because human beings don't look at graphs all day long. We do look at text, and we do listen to speech, but we don't look at plots of time series data all the time. So it's always going to remain hard. We need AI to be good at it. That's only possible if the AI is built into the data platform. So that's what I would like to suggest. When you're looking at software as opposed to a project, look for such data platforms. That's where the future lies.

Erik: Perfect. Nikunj, thanks so much for your time today.

Nikunj: Yes, it was a pleasure, and I hope your audience took a few interesting points away from this.

 
