Data annotation is the hidden champion of machine learning. It is the process of tagging image, video, text, and other data in order to prepare it for training a model. The quality of your data annotation makes the difference between insight and noise.
In this week’s episode, we interview Tigran Petrosyan, co-founder & CEO of SuperAnnotate. We discuss how to manage and scale your annotation workflow, quickly spot quality issues in your data, and seamlessly integrate new data sets into your existing pipeline. We also explore how specialized agencies and AI are collaborating to accurately tag the high volume of data that AI training requires.
- How do you manage the key steps of the annotation process: annotate, manage, automate, curate, and integrate?
- How can you deliver ML projects faster without compromising on quality?
- How should you balance the efforts of internal teams, freelancers, and automated tagging to achieve the right cost structure and performance?
Erik: Tigran, thanks for joining us today.
Tigran: Excited to be here.
Erik: A great company that you've built. This is scratching the itch of a lot of companies that I work with, so it's really a timely product. But before we get into that, I'd love to learn how you ended up here as the CEO of the company. I mean, you have an interesting background. It looks like you studied physics, you were doing a PhD, and it looks like something related to biomedical imaging. Then I guess during that process, somehow, you realized this was a problem. You jumped out of your PhD, and you immediately jumped into this company. So, share with us a bit of the backstory here. How did you find your way four years into a PhD program and then say there's maybe a better way to be spending your time?
Tigran: Yeah, for sure. I'm originally from Armenia. I moved to Switzerland to do my Master's in Physics, then a PhD on the biomedical imaging side of physics. I started to see a lot of issues with computer vision, especially on the biomedical imaging side. I was even invited to a TEDx event. It's funny that I was talking about how algorithms would make the radiologist's work much easier and faster, with better diagnoses, but I had absolutely no clue that I'd later be working on something that would directly contribute to that. The main idea came from my brother's PhD thesis. My brother was working on his PhD in Stockholm. He applied his algorithm for image segmentation on the annotation side and realized that his technology was much faster and more accurate than any other in the space. At one conference, he'd even seen a lot of interest in buying the tech from some of the biggest companies in our space. So, that triggered us to think: what if we could do it ourselves? We forced ourselves into one startup event, just faked an application as if we were a company. I'm the CEO, he's the CTO. We built a story around it and went to the event. It was in Armenia, our home country, at a big regional startup event. We won that competition. It triggered us to actually start a company and get some funding. I hired some local folks, then eventually got some more funding and expanded to the US, Europe, and other places in the world. So, that's a quick backstory of how we got started.
Erik: It's a great story. It's a luxury to be running a business with your brother and starting it by winning a competition with a fake company. It gives you some confidence that you're onto an interesting idea. I'm curious. Why make the switch from Europe, move over to the States?
Tigran: We're in Eastern Europe, in Armenia. It's an awesome place to build products, but I think it's always good to be closer to your clients. Most of our clients are in the US; some in Europe, but mostly in the US. Also, when you build business development infrastructure, I think the United States is the best place to do that. That's my personal opinion. So, that triggered us to build that here. I think another reason was the accelerator we went to quite quickly after we started, the Berkeley accelerator called SkyDeck. I think that was a big trigger for our future growth. We got excited about the area around Silicon Valley, seeing all that hype around technology and so many talented, smart people around. There's so much there. It's crazy.
Erik: Yeah, the Valley is a great place to start. I see you've moved over to Miami. It's maybe a good place to migrate after you build your relationships in the Valley. I bet you travel a lot now.
Tigran: Yeah, that's true.
Erik: You mentioned your customers are in the States. Are you working mostly with tech companies that are building algorithms, or are you working with what might be called the end customers in terms of industrial companies that are building these solutions for themselves? What does your customer base look like?
Tigran: I think it's a combination of both. If you split it into two parts, there are, let's say, large enterprises who have some ideas about how to build ML applications to optimize processes of any kind, whether it's automating some document text recognition or putting up cameras that can identify something in a warehouse. They think through ideas, and they build internal data science teams. They need training data management infrastructure, and they come to us. Then there are so many other companies, like startups worth anywhere from a few million to hundreds of millions to billions, that are focused on a specific application, whether it's eye tracking, warehouse automation, or retail automation like autonomous checkout systems or robotics. They're focused on a specific application, but they're just a startup. They're not a big enterprise group, but they still have a data science team. They have their own problems to solve, and then they can sell those solutions to other enterprises. So, we work on both sides. It doesn't matter whether the data science team building those products is standalone or part of a big enterprise.
Erik: Okay. Got it. So, you're building an annotation technology and service platform. Let's cover for folks what that actually means. I mean, it's a very horizontal need, right? Anybody that's dealing with large data sets, basically, needs to figure out how to annotate their data. So, can you just walk us through? What is the process? What are the steps that need to get done? I think our audience is relatively savvy. A lot of people are probably somewhat familiar with this, but I think most people are not going to have any deep expertise here.
Tigran: Yeah, for sure. To give a bit broader perspective, let's say we want to build any machine learning application: your camera wants to identify objects in its field of view, or any text recognition, optical character recognition, or voice recognition. In order to build that, the general process is: first, you collect the raw data. Let's say some audio files, images, or videos. That's raw data collection. The second part is building the training data, or annotations. It's called training data because you train your models on this data. So, you have to build annotations. For autonomous driving, for example, you put boxes around the cars, or trees, or lanes, and then you want your camera to identify the objects around it. This is probably the most eye-catching application out there. Then once you build this data, you train a machine learning model. That's the model building side. These models are just large matrices that hold some information about your object detection. Let's say you labeled 10,000 images of cars, and then you have 80% accuracy detecting cars. Once you have that model, you have to understand how it performs in different cases, what the accuracy is, and decide what to label next. Because the more you label in the right way, the better your object detection gets. So, iteratively, you build annotations, train your model, understand what to label next, label again, and build your model again. Usually, it goes in an infinite loop until it's eventually deployed on your device. I mean, you can also deploy it along the way to test. But generally, if you really want very high accuracy and detection, you need to constantly label data in a very smart way to make sure you have the best object detection accuracy across all the edge cases relevant to your business.
This is where we come in, on the annotation and training data infrastructure side. We're not building or deploying models; we're making sure that all those companies have the right infrastructure to build and manage the training data, understand how the model works, what to label next, how to version those training data sets or annotations, and eventually get analytics and everything else that comes with annotation. There's also a huge part of this area that is really about finding the right people who can label this data. Because labeling can be done in many different ways. It can be done manually: someone is just putting a square around a car, say. It can be automatic: you can have pre-built models that suggest, okay, is this a car or is this a tree? Annotators then have to approve or disapprove, or it can be automatic, say, if it's over 90% confidence, just accept and approve. It can also be done in a predictive manner: you build a model, you make predictions, you confirm the annotations, do some iterative improvements, and ultimately ship your model to the device. No matter how you do that, you still need to manage this data, understand how annotation is going, how the model is performing, and how you version this data to make sure the model constantly improves. This is where we close the whole loop into an end-to-end training data infrastructure, with a workforce that can label the data through our marketplace.
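The iterative loop described here (label a batch, retrain, evaluate, decide what to label next) can be sketched in Python. This is a toy illustration of the control flow only, not SuperAnnotate's actual pipeline; the model's accuracy is simulated as a function of how much data has been labeled:

```python
# Toy sketch of the label -> train -> evaluate loop described above.
# The "model" is a stand-in: its accuracy is simulated as improving
# with diminishing returns as more data gets labeled.

def simulated_accuracy(n_labeled):
    # Each extra batch helps less than the last (diminishing returns).
    return 1.0 - 1.0 / (1.0 + n_labeled / 2000.0)

def training_data_loop(pool_size, target=0.9, batch=1000):
    """Label in batches until the (simulated) model hits the target
    accuracy or the raw data pool runs out."""
    labeled, rounds, acc = 0, 0, 0.0
    while labeled < pool_size:
        labeled += min(batch, pool_size - labeled)  # "annotate" one batch
        rounds += 1
        acc = simulated_accuracy(labeled)           # "train" + "evaluate"
        if acc >= target:
            break
    return rounds, labeled, acc
```

With a pool of 100,000 images and a 90% target, this toy loop stops after 18 batches of 1,000 labels, which mirrors the point made later in the interview: accuracy gains flatten out, so each improvement needs disproportionately more labeled data.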
Erik: Okay. Got you. So, data is the raw material in this process. You, basically, have a technology and a marketplace for making that data usable, for turning it into something that can be acted on. The examples you were giving were primarily visual. I guess that's maybe the majority of the use cases here. But are you also covering other types of data? Are you covering machine data? Are you covering audio data? Are there any scalable use cases outside of visual that need significant annotation?
Tigran: Yeah, we started with visual data, images, and then we added video. We added text data, which is growing quite fast. Other areas we'll be expanding into are data types like audio and LiDAR, the 3D point cloud data. We want to make sure that we cover every type of data when it comes to building a training data infrastructure.
Erik: So, that's the playground. What are the challenges here? What are the problems that companies face when they're trying to make sense of their data?
Tigran: If you can imagine, in order to build really high-quality models, you need a lot of data. You can get started with, let's say, 1,000 labeled images. That can be simple. You have some open-source tools. You just label one by one, manually. You get some model performance, which is, let's say, 85% accurate. But this is never enough if you're serious about your machine learning pipeline or your object detection accuracy. The problem comes when you really start scaling into tens of thousands, hundreds of thousands, and in most cases, millions of data points to be labeled. The first big question is what I need to label and how. Because there are so many ways you can label data. You can put a box. You can put a polygon around the edges, or a segmentation mask. For other applications, like pose detection and motion detection, you have to put down key points. How many key points do you have to put on each face, or on a human pose? How do you do that? The second big challenge is what data I actually need to label. Take lighting: in autonomous driving, for example, I need to label different lighting conditions in different cities and different locations. There are so many cases you need to consider when deciding what to label. The third challenge is how I make sure that the quality of the labeled data is high. This is probably the biggest challenge in our industry. Let's say you check 1,000 images to see whether people have put good labels. By quality, I mean, let's say, you put a box around a car; maybe the edges people draw are not tight enough, because there's a lot of noise around. Maybe someone mistakenly labeled a tree instead of a car. If you do this at large scale, there's a high chance of mistakes. People get fatigued doing this repetitively, a lot. So, there's a high chance of error there.
Another big part of it is instructions. Let's say you need to label 20 different things in a thousand images, or tens of thousands of images. The way you need to label against instructions can get very complicated. This is where a lot of the quality challenges come from: having the right infrastructure where there's an iterative process of collaboration between data scientists, annotation teams, their managers, and the pre-labeling automatic prediction algorithms, making sure that no bad data gets into the pipeline. This infrastructure gets really, really difficult. Then, last but not least, finding the right team that is highly skilled at the type of work you're doing. Annotation can be simple, but it gets complicated if you have a large sheet of instructions. Different teams are skilled at different types of data labeling because of their experience. So, how do I find the right team that can label the data I need at high quality? This is where we come in as well, with our marketplace of service teams, where we find the right team and attach it to the right client. We manage those teams to ensure high quality. It's a long answer, but hopefully it encapsulates a lot of the challenges that come with our industry, although it may look pretty simple at first sight.
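One common way to automate the "edges are not tight enough" check mentioned above is to compare annotator boxes against trusted reference labels using intersection-over-union (IoU). A minimal sketch; the 0.8 threshold is an illustrative choice, not a value from the interview:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Width and height of the overlap rectangle (zero if the boxes miss).
    iw = max(0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union else 0.0

def flag_loose_boxes(annotated, reference, threshold=0.8):
    """Indices of annotator boxes that overlap their trusted reference
    box too little, i.e. candidates for a quality review."""
    return [i for i, (a, r) in enumerate(zip(annotated, reference))
            if iou(a, r) < threshold]
```

A perfectly matching box scores 1.0; a box shifted halfway off its reference scores about 0.33 and would be flagged for review.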
Erik: It's interesting. It's one of those things that looks simple at human scale or when you start playing around with things. Then the complexity comes with the volume, as you said, right? Doing one thing a million times all of a sudden becomes very complex if you want to do that thing to a really high level of accuracy repeatedly. Let's walk through, then, how you addressed this. I don't know if this is the right way to think through it, but on your website, you have your technology broken down into annotate, manage, automate, curate, and integrate. Is that a good way to think about the jobs that need to be done here, or is there a different way that you think through what needs to get done?
Tigran: Yeah, that's a great way to look at it. Annotate is just the first, simple side. You have the place where people can check in with their account and their role. They're given their assigned data sets, and they start labeling based on instructions they've learned. Basic stuff. Manage is the part where you have, let's say, tens or hundreds of people in different roles doing this labeling together. You need to manage those teams, understand their work and their performance, find quality issues, see how they communicate with each other when they find mistakes, how you propagate certain issues to the whole team, and how you actually manage those teams. This is where the whole team and data management infrastructure comes in, which is very important if you want to do this at large scale. Then automation is where you really don't want to label every single piece of data. Once you start building a certain data set early on, you can test how your model performs and then set up certain predictions. So auto labeling, or sometimes semi-automatic labeling, comes in, where you want to accelerate the process further and further to save time for yourself and, of course, cost for the customers as well. What we do works in a few different ways. The first, which is more prevalent in our industry: the customers usually are the ones who build the best models for themselves. This is what they constantly do. So, we set up a pipeline through our system where they create their own model outputs or predictions, and those come into our platform automatically. It's just annotation created by their models. Then our people or our system can find the mistakes or bring them to the right people to label. So, based on the customers' models, we make predictions, and then we do the labeling and corrections. The second way is that we use our own models internally to do some auto labeling, basically creating predictions.
Our people do that. Sometimes customers don't want to bother with it, and then we do it ourselves. The third way is an iterative training and prediction loop, which is quite fun. You do, let's say, one batch of 100 images. You run your model, and then you do prediction. You see how it works. Then you do the next 1,000 images and run a prediction again. The more you do that, the better your model hopefully gets, and you get faster and faster.
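The semi-automatic flow described above, where high-confidence predictions are auto-accepted and the rest go to human annotators, can be sketched as a simple router. The dictionary fields and the 0.9 cutoff are illustrative, echoing the "over 90%, just accept" example from earlier in the conversation:

```python
def route_predictions(predictions, auto_accept=0.9):
    """Split model predictions into auto-accepted labels and ones
    sent to a human annotator for review, based on confidence."""
    accepted, review = [], []
    for pred in predictions:
        if pred["confidence"] >= auto_accept:
            accepted.append(pred)   # trusted as-is, no human needed
        else:
            review.append(pred)     # queued for an annotator to confirm
    return accepted, review
```

In practice, lowering `auto_accept` trades annotator time for a higher risk of bad labels slipping into the pipeline, which is exactly the quality/cost balance being discussed.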
Erik: Got you. So, on automate, where are we today? It feels to me like, right now, for high-sensitivity solutions, you need a human in the loop for a significant amount of data. Five years, I don't know, 10 years in the future, maybe automation works for the vast majority of scenarios and you just need a human to do the initial work. Where are we moving in terms of the ability to automate this process?
Tigran: With current use cases, if you just focus on one specific use case, definitely, things get more and more automated. Human intervention will be less and less. The problem is that you constantly want to improve your model. Sometimes even a 1% improvement can require the same amount of data as everything you labeled to get to, let's say, 97%. This is where the problem comes in. It's not linear. The amount of data you need to improve model performance grows in a very logarithmic way. Human intervention becomes especially important in these edge cases. Of course, there are a lot of ways you can automate. But for at least the next five years, this industry doesn't seem to be slowing down. That's in addition to the fact that the use cases keep multiplying. You can think of every company building some ML or AI application and some infrastructure around it. For every application, you need proper training data management infrastructure. Companies need AWS for their general infrastructure; for AI applications, you need the training data infrastructure. The more applications and the more types of data you bring in, the more you need to label. The sheer scale of it gets extremely high. There's much more data out there than can be labeled. So, it doesn't seem like human intervention is slowing down. Of course, auto labeling is helping a lot, but the human in the loop still seems to be quite an important part of the process. Whatever I can see in the future, whether it's synthetic data or auto-labeled data (synthetic data is, of course, another big, interesting problem solver in the space), whatever data comes in, you still need the whole infrastructure. You need the data to be managed, versioned, and understood: what works, what doesn't work.
This is where I think the space is going on the platform side: toward training data infrastructure. Human contribution may potentially shrink in 10 years. So far, I think we're just at the beginning of a huge development of AI and machine learning applications.
Erik: Got you. Okay. Then you have curate broken out from manage. What's the difference there? What do you do when you're curating the data sets?
Tigran: If you're a data scientist, what you do, basically, is review this data and understand what works and what doesn't work for your model. Let's say you've labeled 1 million street images for autonomous driving. Curate comes in when you say, okay, filter all the data that has some specific characteristic. Show me all the, I don't know, yellow cars in the left lane of the street. Then you look at how the model performs on that specific subset of the data set, looking at some parameters of the model and how it performed. You basically curate the data and understand how it works. First, for quality purposes, just to make sure the data is labeled correctly. Second, once you have models based on the data, you can compare models with each other and understand certain model characteristics. It's curating, or reviewing the health of the data. This is a huge part of what data scientists spend a lot of time on. It's a holistic view of the data where you look at a specific subset, look at some analytics, and compare model performance. That helps you understand the health of your data, the health of your model, and what to label next. This is what we also have in our system.
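The slice-and-inspect query described here ("show me all the yellow cars in the left lane") amounts to filtering labeled items by their metadata. A minimal sketch with hypothetical field names, not a real curation API:

```python
def curate(dataset, **filters):
    """Return the items whose metadata matches every given filter,
    e.g. curate(data, color="yellow", lane="left")."""
    return [item for item in dataset
            if all(item.get(key) == value for key, value in filters.items())]
```

A real curation tool would add analytics and model-performance comparisons on top of each slice; the filtering step itself is this simple.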
Erik: Then the last element you have here is integrate. So, is that integrating with the customer's algorithms? What are you doing here?
Tigran: Another big challenge is this: as a customer, you have, let's say, your annotation training data infrastructure, and you have your own machine learning pipeline. These two can sit in separate places. What happened before was that customers would send data somewhere. Once it was labeled, they'd download the data, put it into their system somewhere, and run some models. Then they'd create some more data and send it back, all manually. What can happen now with our system is that all the data flows (inflows, outflows, triggers for when you send the data, what needs to be labeled, who needs to label it) can be automated. This happens through Python SDK functions, where with just a few simple lines of code you can connect the whole training data infrastructure and machine learning pipeline into a seamless flow, so you don't have to do a lot of manual work. This becomes a really, really important part of it. One interesting case: let's say I compare three different annotators working on the same image, and I only want to use annotations where all three have done exactly the same thing, because I want to reduce error or bias in the data. This can also be automated: you only use the annotations that pass this consensus. That's just one case, but there are a thousand ways you might want to set up your pipeline between the training data side and your ML side. This needs to be properly set up through the functions we've built. This is where integration comes in. Another part of integration is that data sets usually live in a private cloud, or on AWS, Google Cloud, Azure, or some other system. How do I make sure that infrastructure is flawlessly connected with ours, so you see the whole folder structure and all the processes properly set up?
It has to be securely set up, because, especially on the customer side, you don't want your data to be exposed to other systems. This is what we do. We make sure the data is very securely connected to our system while being labeled, and is never stored in our system.
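The consensus case mentioned above, keeping only images where all three annotators produced identical labels, can be sketched as below. This is a generic illustration, not SuperAnnotate's actual Python SDK:

```python
def consensus_labels(labels_by_item):
    """Keep only items on which every annotator agrees exactly.

    `labels_by_item` maps an item id to the list of labels produced by
    the different annotators (e.g. class names, or serialized box
    coordinates if geometric agreement is also required)."""
    return {item: labels[0]
            for item, labels in labels_by_item.items()
            if labels and len(set(labels)) == 1}
```

Items with any disagreement are dropped (or, in a real pipeline, routed back for re-labeling), which is how consensus filtering trades volume for lower label noise.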
Erik: Got you. Okay. Then it sounds, just from your description here, like we're going to have a few different groups of users on the annotate and manage side. Maybe we're working with large teams that are tagging the data. Then automate and curate, it sounds more like those are tools for data scientists, and then integrate maybe for the product owner or the team that's managing this. Who would be the different key users across this, and how are they interacting?
Tigran: Very, very good point. You're right on the first level. It's the people, the team that needs to build the training data: annotators, quality assurance specialists, and their managers. Then, once you have the data, the data scientists and their leaders, who are actually looking at the data, tracking performance, and understanding what works and what doesn't. Then we've had cases where even C-level people and product leaders would interact with the platform just to see what's going on and get better analytics. It's interesting that you can also look at how each labeling team works, compare their performance, and understand which team works best for you by checking the quality. So, we've seen it across the board. But the main users are the data scientists, machine learning engineers, and the annotation teams working together to build this infrastructure.
Erik: Then you have this marketplace that you mentioned earlier. It's interesting. Just looking at your website, you have teams in different countries, quite large teams ranging from a few hundred to multiple thousands of people, and they focus on different topics like NLP, image recognition, video, et cetera. So, are these basically agencies, often set up in lower-cost countries, with full-time or maybe contract employees who are trained to annotate? It looks like that's the structure. How does that industry work? Is it a corporate organization with full-time employees, or is it a loose network of trained people, primarily freelance, maybe working with different agencies? What does the marketplace look like today?
Tigran: It's more the latter. They're basically very established, very well-trained companies. There are two models you see in this market. There are crowd-sourced types of companies where people can register from anywhere in the world. Usually, these companies don't even know them. They just get a task and send the labels back, and for each task, they get paid. That's not what we do, because it probably works well for very simple tasks, but once the instructions get a little complicated, it's really hard to get high-quality data out of crowd-sourced platforms. What we've done instead is vet over 300 professionally managed teams across the world. The majority of those teams happen to be in Southeast Asia, but we've found teams in Europe, Eastern Europe, the US, South America, Africa, everywhere. We train them on our system. Another key part is vetting those companies: understanding what their working conditions are and whether they're violating any local laws. Certain clients, for example, need a heavy security infrastructure around those teams. To make sure their data doesn't leak anywhere, they need specific cameras in facilities and certain certification standards. So, we have vetted all of that for all these teams to make sure we find the right team for the right client. Another big factor is knowing what they're trained on and what their skills are. Certain teams are good at one thing and not another. So, how do we bring the right skilled people to the right customer? Another thing we do differently in this space: as a client, you don't want to deal with two different entities, where one gives you a platform and then you have to go vet and find another team to deal with for the annotation.
So, in this marketplace, we find the right teams, and our service operations team actually manages them, making sure we deliver on time and on deadline. We assure the quality and, basically, everything that comes with it.
Erik: Okay. Cool. So, technology and service: basically, one point of contact, one client manager. Why don't you walk us through an example? You have an initial conversation with a company to understand what their needs are. What kind of questions do you ask? How do you scope it out, make sure you properly understand the need, and then walk through matching them to technology and annotation partners? And if you can, share some results with us, just one or two examples that come to mind.
Tigran: We usually jump in when companies are ready to scale. For example, they've already done some early work, let's say about 100,000 images. They've built some initial models. They understand there's something unique they're building, and there's a need. They're scaling when they need to go from, let's say, tens of thousands to millions of data points. This is where, ideally, we come in. So, we vet early in our conversations to understand how AI-mature the team is. If the team is very early, then ideally they need to do some tests and early building themselves before they scale. We'd rather jump in when they're really ready to scale their data infrastructure and also have some understanding of machine learning. There are companies who say, okay, we'll build all your data, models, everything together. That's more like consulting service work. We're not doing that. We come in when they're already ready to scale their training data infrastructure and they know they need to constantly improve their model. Usually, it can start with a platform offering. They get the platform, and they can do the labeling internally, or we can bring in our teams. Eventually, they get a better understanding of how curation works. Funny enough, a lot of companies don't even know. It's such a new space. They don't even know there's a system they can get where they don't have to think about or build all this curation infrastructure themselves. It's already there. Sometimes we see such fascinated faces when they see what they can do with this system, because they haven't thought of it before. With CRM tools, for example, you know what to expect and what you'd need to do, because it's such an old, established market. But our space, machine learning data platform infrastructure, is so new that people don't even know what to expect.
So, there needs to be quite some education at the early stages and clear onboarding at the beginning to make sure they understand it right.
Erik: It's funny. That comes up quite often in conversations I'm having on the podcast, where I'm talking to a lot of companies that are less than five years old and building relatively sophisticated technologies. So, this scenario where you're basically explaining to customers how you can do something 5x better than them, and that they don't need to build the infrastructure themselves, is a conversation that happens. Not just to promote your results, but what results might somebody expect if they're moving from a home-built solution toward a more standardized platform with a scalable workforce, and so forth? What are the economics you're trying to achieve for your customers?
Tigran: So, the number one thing, we make sure that the quality of the data output is much better with us than with any other company, because of the way the system is built, the way we vet the service teams, and the way we add our own quality assurance specialists before the data gets to the pipeline. So, this is the first thing we ensure. The second thing, which is very important, is that the customer gets full flexibility and transparency about what's going on before the data gets to the pipeline. So, they have full visibility into curation through their admin access. In this way, usually, you get from data to model at least two times faster, because you are part of the system and can give feedback. Annotators can work manually at the beginning, depending on the use case. But also, if we automate certain tasks, the data can be ready two, three, sometimes five times faster than what we've seen with other platforms. So, they can save a lot of time and money depending on the use case. It's hard to say how much for any given use case, because every case is so unique, but we have seen improvements of 5x just within the first month of working with us. It's all about how quickly you get from data to model deployment. What we're really showing to our clients is, you can get there 2, 3, 5x faster just within the first couple of months.
Erik: So, the big levers that you're trying to move here are quality of the data and time through the pipeline. We don't need to get into the cost details, but can you share a little bit about what the cost structure looks like? Because I guess you're dealing with dramatically different sizes of projects. Is it around volume? I guess you have different technology offerings, so there are probably different modules there. But what would it look like to issue an RFQ? How would you structure a quote?
Tigran: It depends on whether it's just a platform offering or whether it also includes people from the marketplace to label the data. If it's the platform, it's basically a combination of how many users they need and how much data they have. It's weighted much more toward the number of users than the amount of data, because we have an integration set up so that we don't store their data, so we don't create any additional cost there. That has made it pretty attractive for a lot of companies, no matter the scale. They just need a certain number of users and they can really work and do a lot of complicated stuff. So, when it comes to this, we call it a platform approach, where we offer end-to-end software plus an integrated marketplace of services. Depending on how much they commit early on, they can get the platform for free, or it can be a combination of a software charge plus a platform charge in one package on a yearly commitment basis. But what we also sometimes do is a trial or pilot stage where, let's say, some portion of the data gets labeled, just to make sure that the client understands the value before there's a large-scale commitment. So, we do both. In that case, it's more about how much data is being labeled and how much each item costs. We always do some benchmarking to understand how much time it takes. Then we price either per annotation or sometimes per hour of annotator work.
Erik: Got you. I saw a free plan for early-stage startups. So, that's nice support for a young company that's trying to validate a data set that has value. I know that can be heavy lifting for a small team. Cool. So, we've covered a lot here. I've, at least, learned a lot, Tigran. So, thanks. Anything we haven't touched on yet that would be important for folks to know?
Tigran: Nothing comes to mind at this point. I think we've really touched on a lot. Maybe I can just mention, if someone is interested in exploring what opportunities they can get with our system and with us, the easiest way is to go to our website and click the "Request Demo" or "Get Started" button. Our team will contact you, understand your needs, and really make sure that you have a proper setup. It's a low-touch approach at the beginning, so you don't have to pay hundreds of thousands right away to get started. We always provide value first before we sign a contract or try to get money from people. So, don't be afraid. Just come on over and request a demo, and we'll take care of it.
Erik: Awesome. So, that's superannotate.com. I'll put that in the show notes as well. Tigran, thanks for taking the time to talk to us today.
Tigran: Great talking to you, Erik. Pleasure.