Computer vision in self-driving cars. Part 3. Better than people

At the moment, self-driving cars can assist drivers, but they reach only the third level of autonomy at best. That is why the Tesla manual states that a person has to keep their hands on the wheel while driving, so that they can take control of the car if necessary. Autonomous driving technology is not yet reliable enough, and a failure in one of the car's subsystems may lead to an accident. But even at this stage, the autopilot is able to predict and handle a number of dangerous situations much better than a person would.

In his video, Snazzy Labs told the story of how Tesla helped him avoid two accidents. The first time, the Tesla's radar spotted an obstacle that was two cars ahead of him. The second time, the car swerved into a free lane when a truck driver who didn't notice it tried to change lanes. If a human had been driving the car at that moment, a crash would have been inevitable.

The autopilot handles around 80% of situations correctly. However, the remaining 20% ruin it all. Many situations that commonly occur on the road are impossible for the autopilot to predict. For example, a self-driving car cannot manoeuvre in heavy traffic or squeeze between two closely spaced obstacles.

Another obstacle on the way to mass adoption of self-driving cars is the imperfection of computer vision. Until a solution to this problem is found, the autonomy level of self-driving cars will not rise above the 3+ mark.

And lastly, there is the social aspect. At the moment, self-driving cars remain luxury items and are out of reach for mass adoption because of their expensive components. Until self-driving cars prevail on the roads, the safety concept behind unmanned vehicles cannot work. To make the roads safe, people must trust cars completely, because, as practice shows, robots are much better at driving than humans.

There are also a number of ethical issues in driverless car development. For example: if a crash is unavoidable, which decision is more ethical, to run over a child who runs across the road, or to crash a car carrying a family with two children inside? And so on.

The shape of the future depends on how much we can improve computer vision systems and whether we can solve the ethical, social and technical problems of autopilot development. We would like to hope that by 2035, just like in the movie "I, Robot", people will no longer need a driver's license or a private car, and we will be able to get where we need to go in an unmanned taxi.