In 2016, McKinsey forecast that the self-driving vehicle market could be worth USD 1.5 trillion by 2030.
A study, prepared by Strategy Analytics, predicts autonomous vehicles will create a massive economic opportunity that will scale from USD 800 billion in 2035 (the base year of the study) to USD 7 trillion by 2050.
Source: Strategy Analytics
The global autonomous vehicles market revenue is expected to grow at a CAGR of 39.6% during the forecast period 2017-2027, reaching USD 126.8 billion by 2027.
Source: PR Newswire
What is the business value of autonomous vehicles?
1. Improved safety: Research indicates that up to 90% of road traffic accidents are caused by driver error. Advocates for driverless vehicles argue that autonomous systems make better and faster decisions than humans. Self-driving vehicles can monitor and adapt to varying traffic and weather conditions with more diligence, speed, and safety than human drivers.
2. Lower environmental impact: Autonomous systems can be programmed to minimize their environmental impact or to achieve specific regulatory targets. This is done by, for example, optimizing acceleration and minimizing idling time.
3. Traffic efficiency: Congestion can be reduced by coordinating the flow of vehicles. Using vehicle-to-vehicle communication, autonomous vehicles can drive safely in close proximity at high speeds. They can also reroute to avoid congestion points in road networks. The reduction in accidents will also reduce unanticipated congestion.
4. Greater comfort: Commuters will be able to enjoy their morning coffee, read the news, participate in meetings, or catch up on sleep during their daily commute. Vehicles will be redesigned to optimize for comfort and convenience when equipment such as steering wheels and brakes is no longer required. And the erratic driving behavior of tired or stressed drivers will not impact the smooth flow of traffic.
What sensors are typically used to enable autonomous control of vehicles?
Sensors are critical to the functioning of an autonomous vehicle. They provide dynamic environmental data that supplements system data, such as maps, to enable safe operations. There are two domains where sensors can be deployed: on the vehicle and in the environment. Over time, more sensors, such as cameras deployed on traffic lights, will be placed in the environment and provide data feeds to all vehicles in the vicinity. However, in the absence of highly aware and connected environments, autonomous vehicles rely primarily on the sensors installed in the vehicle itself.
The data from these sensors must be processed at the edge (i.e. on the vehicle) rather than in the cloud, because control decisions demand ultra-low latency and the raw sensor streams have relatively high throughput. In a distributed architecture, sensor data can be fused both in a central computing system and within individual subsystems, giving those subsystems a higher degree of autonomy. To accomplish this, the in-vehicle communication system must support the flow of data between subsystems.
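A rough back-of-envelope calculation shows why cloud offload is impractical for raw sensor data. The per-sensor data rates and the uplink budget below are illustrative assumptions, not vendor specifications:

```python
# Rough throughput estimate for a hypothetical sensor suite (assumed figures),
# compared against an optimistic sustained cellular uplink.
sensors_mbit_s = {
    "cameras (x8)": 8 * 120,  # ~120 Mbit/s per compressed HD camera stream
    "lidar": 70,              # ~70 Mbit/s point-cloud stream
    "radar (x5)": 5 * 1,      # radar object lists are comparatively small
}
total = sum(sensors_mbit_s.values())
uplink = 50  # assumed sustained cellular uplink in Mbit/s

print(f"Total sensor output: {total} Mbit/s; uplink budget: {uplink} Mbit/s")
print("Cloud offload feasible" if total <= uplink else "Must process on the edge")
```

Even with generous compression assumptions, the sensor suite produces an order of magnitude more data than a cellular link can carry, before latency is even considered.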
There are three common sensor categories:
RADAR sensors acquire information about nearby objects, such as distance, size, and velocity (if the object is moving), and warn the driver if an imminent collision is detected. Should the driver fail to intervene within the stipulated time after the warning, the radar's input may even engage advanced steering and braking controls to prevent the crash. The high precision and weather-agnostic capabilities of radars make them a permanent fixture of virtually any autonomous vehicle prototype, regardless of ambient conditions.
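The velocity measurement mentioned above comes from the Doppler shift of the radar return. A minimal sketch of that relationship, using an assumed 77 GHz automotive radar band:

```python
C = 3.0e8  # speed of light, m/s


def radar_radial_velocity(doppler_shift_hz: float, carrier_hz: float) -> float:
    """Radial velocity of a target from the Doppler shift of the radar echo.

    v = f_d * c / (2 * f_c) -- the factor of 2 accounts for the two-way path
    (transmitted pulse out to the target, reflection back to the sensor).
    """
    return doppler_shift_hz * C / (2.0 * carrier_hz)


# A 77 GHz automotive radar observing a ~5.13 kHz Doppler shift:
v = radar_radial_velocity(5.13e3, 77e9)
print(f"approach speed ~ {v:.1f} m/s")  # ~ 10.0 m/s (about 36 km/h)
```

Distance is measured separately, from the time delay or frequency ramp of the return; the Doppler term gives velocity directly, without differentiating position over time.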
LiDAR sensors are "light-based radars" that emit invisible laser pulses and measure their return time to create a 3D profile of the car's surroundings. Unlike cameras and radars, LiDAR does not technically detect nearby objects; rather, it "profiles" them by illuminating them and analyzing the path of the reflected light. Repeated over a million times per second, this yields a high-resolution image. Because a LiDAR sensor uses its own emitted light, its operation is unaffected by ambient light: it performs the same at night or in daylight, under clouds or sun, in shadow or direct sunlight. The result is greater accuracy of perception and high resilience to interference.
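The 3D profile is built one return at a time: each pulse's round-trip time gives a range, and the beam's pointing angles place that range in space. A minimal sketch of this conversion (the angles and timing value below are illustrative):

```python
import math

C = 3.0e8  # speed of light, m/s


def lidar_point(round_trip_s: float, azimuth_deg: float, elevation_deg: float):
    """Convert one LiDAR return into a 3D point in the sensor frame.

    Range follows from the round-trip time of the pulse: r = c * t / 2.
    The beam's azimuth/elevation then place the point in Cartesian space.
    """
    r = C * round_trip_s / 2.0
    az, el = math.radians(azimuth_deg), math.radians(elevation_deg)
    x = r * math.cos(el) * math.cos(az)
    y = r * math.cos(el) * math.sin(az)
    z = r * math.sin(el)
    return x, y, z


# A pulse returning after 200 ns corresponds to a target ~30 m ahead:
x, y, z = lidar_point(200e-9, azimuth_deg=0.0, elevation_deg=0.0)
print(f"range ~ {x:.1f} m")
```

Accumulating millions of such points per second across a sweeping beam is what produces the dense point cloud described above.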
Camera-based systems are either mono-vision (a single source of vision) or stereo-vision (a set of multiple, normally two, mono-vision cameras, much like human eyesight). Depending on the need, they may be mounted on the front grille, side mirrors, rear doors, or rear windshield. They closely monitor nearby vehicles, lane markings, speed signs, high beams, and so on, and warn the driver when the car is in danger of an imminent collision with a pedestrian or an approaching vehicle. The most advanced camera systems, however, not only detect obstacles but also identify them and predict their immediate trajectories using advanced algorithms.
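The advantage of stereo vision over mono vision is depth: for a rectified stereo pair, the distance to a feature follows directly from how far it shifts between the two images (its disparity). A minimal sketch, with an assumed focal length and camera baseline:

```python
def stereo_depth_m(focal_px: float, baseline_m: float, disparity_px: float) -> float:
    """Depth of a feature from a rectified stereo pair: Z = f * B / d.

    focal_px:     camera focal length in pixels
    baseline_m:   distance between the two camera centers in meters
    disparity_px: horizontal shift of the feature between the two images
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px


# Hypothetical rig: 700 px focal length, 12 cm baseline, 14 px disparity:
z = stereo_depth_m(700.0, 0.12, 14.0)
print(f"depth ~ {z:.1f} m")
```

Note the inverse relationship: distant objects produce tiny disparities, so stereo depth accuracy degrades with range, which is one reason radar and LiDAR remain necessary complements.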
What types of analysis are typically used to transform data into actionable information?
Autonomous vehicles require subsystems that perform data fusion to provide localization, situational awareness, route planning, vehicle control, and other functions. This data is typically stored and searched to provide results in real time. For example, a system could include the DDS interoperability protocol, a data bus to move data in real time, an administrative console, web-integrated services, a code generator, and other components.
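At its simplest, data fusion combines overlapping estimates from different sensors, weighting each by how much it is trusted. The sketch below shows inverse-variance weighting, the static special case of a Kalman filter update; the sensor values and variances are hypothetical:

```python
def fuse(measurements):
    """Fuse independent estimates of one quantity by inverse-variance weighting.

    measurements: list of (value, variance) pairs from different sensors.
    Returns (fused_value, fused_variance); the fused variance is always
    smaller than any single sensor's variance.
    """
    weights = [1.0 / var for _, var in measurements]
    total = sum(weights)
    value = sum(w * v for (v, _), w in zip(measurements, weights)) / total
    return value, 1.0 / total


# Hypothetical range to a lead vehicle from three sensors (value, variance):
fused, var = fuse([
    (24.8, 0.04),  # radar: precise range measurement
    (25.1, 0.09),  # lidar
    (26.0, 1.00),  # mono camera: coarse depth estimate, weighted least
])
print(f"fused range ~ {fused:.2f} m (variance {var:.3f})")
```

Production systems extend this idea with motion models and time (Kalman or particle filters), but the principle is the same: combine sensors so the fused estimate is better than any one of them alone.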
What business challenges could impact autonomous vehicle deployment?
Safety and cybersecurity (with its potential safety implications) are the most significant impediments to wide-scale deployment of autonomous vehicles. They are particularly challenging given the current lack of clarity regarding legal liability for accidents involving an autonomous vehicle.