Computer vision in self-driving cars. Part 1
People had been dreaming about autonomous vehicles long before the car was even invented. And only a few decades after Henry Ford started the mass production of low-cost cars, people wanted more once again.
Unmanned taxis made their cinema debut in the late 80s in the movie “Who Framed Roger Rabbit”. Two years later, a robotic car appeared in “Total Recall” featuring Arnold Schwarzenegger, and more than a decade after that in “I, Robot”, which is set in 2035. Nowadays, you can find self-driving cars even in your own city.
Unfortunately, autonomous cars are not yet good enough to completely remove the need for a human driver.
In theory, the operating principle of a self-driving system looks deceptively simple. The user sets the destination, and the onboard computer calculates the optimal route to reach it, taking road markings, signs, and traffic into account, before the car starts moving. During the trip, the sensors, cameras, and radars installed in the car gather information about surrounding objects on the road, while the AI processes that information and makes the appropriate decisions.
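In code, that sense-plan-act cycle can be sketched roughly as follows. Everything in the snippet is a toy illustration of the loop described above; the names, numbers, and logic are our own assumptions, not any real autopilot stack.

```python
from dataclasses import dataclass

@dataclass
class Observation:
    distance_to_obstacle: float  # meters to the nearest object ahead
    lane_offset: float           # meters from the lane center (signed)

def sense(step: int) -> Observation:
    """Stand-in for fused camera/radar/lidar input."""
    return Observation(distance_to_obstacle=50.0 - 10.0 * step, lane_offset=0.3)

def plan(obs: Observation) -> tuple[float, float]:
    """Choose steering and throttle from the current observation."""
    steering = -0.1 * obs.lane_offset                           # steer back toward the lane center
    throttle = 0.5 if obs.distance_to_obstacle > 20.0 else 0.0  # ease off when an obstacle is near
    return steering, throttle

for step in range(5):  # in a real car, this loop runs continuously during the trip
    steering, throttle = plan(sense(step))
    print(f"step {step}: steering={steering:+.2f}, throttle={throttle}")
```

The key point the sketch illustrates is the separation of duties: the route is computed once before the car moves, while perception and decision-making run in a tight loop for the whole trip.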
Each of the car's sensors is used in a different way: radars determine the positions of objects in front of, behind, and to the sides of the car; lidars measure exact distances to those objects; cameras detect road signs and markings; a stereo vision system determines the shape and location of surrounding objects; and a gyrostabilizer determines the car's orientation in space.
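To make that division of labor concrete, here is an illustrative sketch of a single fused “snapshot” of the surroundings, with one field per sensor role listed above. The field names, types, and units are assumptions made for illustration, not any real autopilot format.

```python
from dataclasses import dataclass, field

@dataclass
class WorldSnapshot:
    """One fused view of the surroundings; each field maps to a sensor role."""
    radar_tracks: list = field(default_factory=list)   # (x, y) positions of objects around the car, meters
    lidar_ranges: list = field(default_factory=list)   # exact distances to those objects, meters
    camera_signs: list = field(default_factory=list)   # road signs and markings recognized in camera images
    stereo_shapes: list = field(default_factory=list)  # shapes of surrounding objects from stereo vision
    orientation: tuple = (0.0, 0.0, 0.0)               # roll, pitch, yaw from the gyrostabilizer, radians

snapshot = WorldSnapshot(
    radar_tracks=[(12.0, -1.5)],      # a car slightly to the right, 12 m ahead
    lidar_ranges=[12.1],
    camera_signs=["speed limit 60"],
    stereo_shapes=["sedan"],
    orientation=(0.0, 0.01, 1.57),
)
print(snapshot)
```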
This is the simplest technical classification of the devices that let a self-driving car see. Most systems in use today follow a similar scheme because it is practical.
Each of the elements that let the car “see” has its pros and cons. For example, LIDAR (Light Detection and Ranging) gathers data on nearby objects by sending out laser beams and analyzing the pulses of reflected light. It can measure the distance to surrounding objects very precisely but cannot work properly in snow or rain. Image recognition systems can identify objects using cameras in almost any weather but are sometimes unable to classify them correctly.
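The lidar's distance measurement itself is simple time-of-flight arithmetic: the laser pulse travels to the object and back, so the distance is half the round-trip time multiplied by the speed of light. A minimal sketch (our own illustration, with calibration and noise handling omitted):

```python
SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_distance(round_trip_seconds: float) -> float:
    """The pulse travels out and back, so the one-way distance is half the trip."""
    return SPEED_OF_LIGHT * round_trip_seconds / 2.0

# A reflection arriving 200 nanoseconds after the pulse left
# means the object is about 30 meters away.
print(f"{lidar_distance(200e-9):.1f} m")  # -> 30.0 m
```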
The camera misclassification problem is clearly shown in a video example posted online by hackers about a year ago: a recording of one of the Tesla subsystems responsible for interpreting camera images at work.
In the next part of this article, we will look at how machine vision systems are used in self-driving cars and what the pros and cons of that approach are.