In an earlier article, we saw what autonomous or driverless vehicles are, what the different levels of automation are, and how driverless cars will benefit humankind. In this article, we take a look under the hood.
Autonomous vehicles (AVs), a class of autonomous systems, are no longer a distant goal. Also called driverless cars, they are being developed today by companies like Google, Uber and Tesla. The automobile industry has been assimilating different technologies over the past few years, and driverless cars may be the culmination of these technologies.
But first things first – what exactly do we mean by an ‘autonomous system’?
An autonomous system generally refers to a system that can automatically sense and adapt to a dynamically varying environment. Apart from artificial intelligence and machine learning, such autonomous systems use a broad spectrum of technologies to make a driverless car really work. While it is not possible to cover every technology involved in the making of autonomous cars, we will cover the most common ones here. Please note that there is still no consensus on how these technologies are used; each company implements them in its own way.
Human Driven Cars
Before we dig any deeper, it is important to understand why driverless cars need all these technologies in the first place.
Human beings are amazing creatures. Other species may excel at speed, agility, stamina or physical tasks, but no other species can perform complex, real-time calculations about its physical environment and generate specific, actionable predictions quite like humans can. Driving a car takes an incredible amount of real-time data processing: assessing rapidly changing driving and environmental conditions, and making intelligent decisions based on that information. A trained human driver knows when to change gears, when to brake, when to accelerate and when to slow down. And the more they drive, the more naturally driving comes to them. Changing gears, accelerating, decelerating, braking, changing lanes, and gauging traffic conditions and driving accordingly become second nature. If someone honks, or an ambulance turns on its siren, they know just what to do. If drivers are familiar with a road, they know exactly where the blind spots and speed breakers are, and they are prepared for them. Of the five senses they possess, humans use sight, hearing and even touch while driving a vehicle. They know how to drive in foggy or windy conditions, and how to react if someone suddenly crosses their path. To perform equally well, AVs need to incorporate different technologies that together replicate all these human capabilities.
Technologies used in Autonomous Cars
In general, humans possess excellent cognitive abilities, and driverless cars need to emulate them. This is why so many technologies are involved: no single technology exists today that can replicate human cognitive skills. With this background in mind, it becomes easier to see what role each of the technologies below plays.
Artificial Intelligence (AI)
In driverless cars, AI-assisted software enables flexible, reliable driving, allowing the AV to navigate even the most complex traffic situations. Machine learning and deep learning, subsets of the AI ecosystem, are ways to create or train AI. Machine learning is an application of AI that enables systems to learn and improve from experience. Deep learning is a subset of machine learning that leverages complex neural networks, which extract progressively more detailed features as they learn from and evaluate input data.
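To make "learning from experience" concrete, here is a deliberately tiny sketch: a perceptron (the simplest possible learning model, far simpler than the deep networks real AVs use) that learns a brake/no-brake rule from labelled examples. The features, data points and thresholds are all made up for illustration.

```python
# A toy illustration of learning from experience: a perceptron that learns
# a brake/no-brake rule from labelled driving examples. All numbers below
# are invented for illustration; real AV models are vastly more complex.

def train_perceptron(samples, epochs=50, lr=0.1):
    """samples: list of ((distance_m, closing_speed_mps), label) pairs,
    where label 1 = brake, 0 = continue."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), label in samples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred          # 0 when correct; +/-1 when wrong
            w[0] += lr * err * x1       # nudge the weights toward the
            w[1] += lr * err * x2       # correct answer for this sample
            b += lr * err
    return w, b

def should_brake(w, b, distance_m, closing_speed_mps):
    return w[0] * distance_m + w[1] * closing_speed_mps + b > 0

# Hypothetical "experience": brake when obstacles are near and closing fast.
data = [((5, 10), 1), ((8, 12), 1), ((50, 2), 0),
        ((80, 1), 0), ((10, 8), 1), ((60, 3), 0)]
w, b = train_perceptron(data)
```

After training, `should_brake(w, b, 6, 11)` returns True while a distant, slow-closing obstacle does not trigger braking: the rule was induced from examples rather than hand-coded, which is the essence of machine learning.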
Autonomous vehicles rely on advanced artificial intelligence and machine learning systems to “understand” their environment and react to commands.
Sensors
If AI and machine learning are the brains of an autonomous vehicle, sensors are the nervous system that feeds the brain data. Sensors are fundamental to how an automated driving system perceives its surroundings, and the use and performance of multiple integrated sensors directly determine the safety and feasibility of automated driving. Sensors used in AVs can be classified as active or passive. Active sensors transmit a signal, emitting energy in the form of electromagnetic waves; that is, they actively send signals in order to receive feedback. Passive sensors, on the other hand, detect existing energy, like light or radiation, reflecting from objects in the environment. Sonar, radar and LIDAR (Light Detection and Ranging) are examples of active sensors, while radiometers and infrared cameras are examples of passive sensors.
The main types of electronic sensing devices used in driverless cars include cameras, radars and LIDARs.
Autonomous cars often have video cameras that see and interpret objects on the road, just as human drivers do with their eyes. Radars send out radio waves that detect objects and gauge their distance and speed relative to the AV in real time. An important advantage of radar is that it can also determine the relative speed of a detected obstacle, which is crucial for decisions about acceleration and braking. LIDARs work on the same principle as radar, but use laser light instead of radio waves to determine the distance and type of object.
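The physics behind radar ranging and speed measurement can be shown in a few lines: distance comes from the echo's round-trip time, and relative speed from the Doppler shift of the returned wave. The sketch below uses the standard two-way radar formulas; the 77 GHz carrier is a common automotive radar band, and the specific numbers are illustrative only.

```python
# Back-of-the-envelope radar maths: range from the echo's round-trip time,
# and relative speed from the Doppler shift. All numbers are illustrative.

C = 3.0e8  # speed of light in m/s (approximate)

def range_from_echo(round_trip_s):
    """Target distance: the pulse travels out and back, hence the division by 2."""
    return C * round_trip_s / 2

def speed_from_doppler(f_shift_hz, f_carrier_hz):
    """Two-way Doppler: v = shift * c / (2 * carrier). Positive means closing."""
    return f_shift_hz * C / (2 * f_carrier_hz)

# An echo arriving after 1 microsecond puts the target about 150 m away,
# and a 4 kHz Doppler shift on a 77 GHz automotive radar corresponds to
# a closing speed of roughly 7.8 m/s.
distance_m = range_from_echo(1e-6)
closing_mps = speed_from_doppler(4000, 77e9)
```

LIDAR ranging uses the same time-of-flight idea with light pulses, which is why the two sensors report the same kind of quantity at different resolutions and ranges.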
The choice of sensors used in an AV depends on many factors, including their range and their price. Any moving vehicle needs to avoid close objects while also sensing objects far away from it. The sensors mounted on AVs also need to operate across different environmental and road conditions, including challenging light and weather. Since no single sensor can satisfy all these requirements, an AV usually employs a mixture of sensors. In combination, a suite of sensors can complement one another and make up for the weaknesses of any one kind of sensor.
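One common way to combine complementary sensors is inverse-variance weighting: each sensor's estimate is weighted by how precise that sensor is, so the more trustworthy reading dominates. This is a minimal sketch of that idea, not any particular manufacturer's fusion pipeline, and the noise figures for "radar" and "LIDAR" are hypothetical.

```python
# A minimal sensor-fusion sketch: combine independent range estimates by
# weighting each with the inverse of its variance, so the more precise
# sensor dominates. Sensor noise figures below are hypothetical.

def fuse(estimates):
    """estimates: list of (measurement, variance) pairs.
    Returns (inverse-variance-weighted mean, fused variance)."""
    weights = [1.0 / var for _, var in estimates]
    total = sum(weights)
    fused = sum(w * m for w, (m, _) in zip(weights, estimates)) / total
    # Fusing independent estimates always shrinks the variance, i.e. the
    # combined reading is more certain than any single sensor's.
    return fused, 1.0 / total

# Radar reports 40.0 m (variance 4.0); LIDAR reports 41.0 m (variance 0.25).
dist, var = fuse([(40.0, 4.0), (41.0, 0.25)])
```

The fused distance lands close to the LIDAR reading (about 40.94 m here) because LIDAR is the more precise sensor, and the fused variance is smaller than either sensor's alone: a concrete sense in which sensors "make up for one another's weaknesses".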
Geo-localization and Mapping
Like human drivers, AVs need to answer questions like ‘Where am I?’, ‘What objects are around me?’ and ‘How do I get from here to my destination?’ Geo-localization and mapping help provide these answers. The default geo-localization method is satellite navigation, which provides a general reference frame for where the vehicle is located on the planet. In addition, AVs need to know how to proceed from location A to location B, and maps are needed for this. HD maps are one of the best ways ahead, as they contain far more detail than traditional maps: road location, the number of lanes (and whether any are reserved for a specific kind of vehicle), the current speed limit, traffic signs, parking spots, traffic rules at intersections, the severity of any curves on the road, and much more. HD maps are usually built in layers, and it takes special software and hardware to produce an HD map that AVs can rely on.
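A first step in answering ‘Where am I?’ is turning a satellite latitude/longitude fix into a distance from a known map point. The standard tool for that is the haversine great-circle formula, sketched below with the usual mean Earth radius; the coordinates are arbitrary example values, not real map data.

```python
# From a satellite fix to a distance: haversine great-circle distance
# between two latitude/longitude points. Coordinates are arbitrary examples.
import math

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, metres

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two points given in degrees."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# Two fixes about one arc-second of latitude apart come out near 31 metres,
# which also hints at why raw satellite navigation alone (accurate to a few
# metres at best) is not precise enough for lane-level driving.
d = haversine_m(48.00000, 11.00000, 48.00028, 11.00000)
```

This is why AVs layer HD maps and other sensors on top of satellite positioning: the map supplies lane-level structure that a metres-accurate global fix cannot.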
Putting it all Together
As mentioned earlier, each company integrates and mixes these technologies in its own way. The overarching objective of driverless or autonomous vehicles is to provide an extremely safe driving experience, and there are many challenges to overcome. Among the key technologies that will help AVs achieve this level of safety are augmented reality and virtual reality, which we will discuss in an upcoming article.
This content was first posted on the DesignTechsystems website.