The concept of self-driving cars, if implemented well, could help save millions of lives lost to traffic accidents each year. But because autonomous car technology is still in its early stages, there is much controversy over whether we should replace human drivers with self-driving cars.
Today's article provides general information about driverless cars, their recent technological advances, and, in particular, self-driving car safety.
How Do Self-Driving Cars Work?
The way an autonomous car collects and processes information is similar to how a human does it. The car is equipped with cameras, sensors, and LIDARs (light detection and ranging devices). These act as the car's eyes and ears, allowing it to "see" the objects around it.
Once the data is collected, it is transferred to a processor for analysis. Based on the result, the car generates multiple possible scenarios in its "brain." This is called the prediction and planning process.
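The sense, predict, and plan steps described above can be sketched as a simple loop. This is a minimal one-dimensional illustration; all function names, thresholds, and data here are illustrative assumptions, not any manufacturer's actual code.

```python
# A minimal sketch of the sense -> predict -> plan loop, reduced to
# one dimension. Objects are (position, velocity) pairs in meters.

def sense(camera_detections, lidar_detections):
    """Fuse camera and LIDAR readings into one list of (position, velocity)."""
    return camera_detections + lidar_detections  # placeholder fusion

def predict(objects):
    """Project each object's position one time step ahead."""
    return [pos + vel for pos, vel in objects]

def plan(predicted_positions, ego_position, safe_distance=5.0):
    """Brake if any predicted object comes within the safety margin."""
    for pos in predicted_positions:
        if abs(pos - ego_position) < safe_distance:
            return "brake"
    return "cruise"

# One cycle of the loop: an object 6 m ahead, closing at 2 m per step.
objects = sense([(6.0, -2.0)], [])
action = plan(predict(objects), ego_position=0.0)
print(action)  # predicted position is 4.0 m, inside the 5 m margin -> "brake"
```

A real system would replace each placeholder with heavy machinery (sensor fusion, motion models, trajectory optimization), but the overall structure of the loop stays the same.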
The Different Levels of Automation
There is no single official definition of a self-driving car because the technology is still evolving. Moreover, different manufacturers market their so-called self-driving cars with widely varying claims, making it difficult for buyers to understand what a vehicle can actually do.
Below are the six levels of driving automation used by the National Highway Traffic Safety Administration (NHTSA) and much of the industry.
- Level 0: No automation – These cars are fully operated by humans, even when equipped with a warning or intervention system.
- Level 1: Driver assistance – A driver assistance system handles either steering or acceleration/deceleration using data collected from the driving environment. The driver performs all remaining driving tasks.
- Level 2: Partial automation – One or more driver assistance systems control both steering and speed, again using data from the driving environment. The driver is still expected to perform all other driving tasks.
- Level 3: Conditional automation – All dynamic driving tasks are performed by the automated system, but the human driver must remain ready to intervene, as the system cannot handle every situation.
- Level 4: High automation – The automated system can perform all dynamic driving tasks in most conditions, even if the driver does not respond appropriately to a request to intervene.
- Level 5: Full automation – This is what the industry is striving for: a fully automated car that can handle all dynamic driving tasks under all conditions.
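The taxonomy above fits naturally into a small lookup table. Below is a sketch; the level names follow the list above, and the helper function is a hypothetical convenience, not part of any standard API.

```python
# The six driving-automation levels listed above, as a lookup table.
AUTOMATION_LEVELS = {
    0: "No automation",
    1: "Driver assistance",
    2: "Partial automation",
    3: "Conditional automation",
    4: "High automation",
    5: "Full automation",
}

def human_fallback_required(level):
    """Through Level 3, a human is still expected to take over on request."""
    if level not in AUTOMATION_LEVELS:
        raise ValueError("automation level must be between 0 and 5")
    return level <= 3

print(AUTOMATION_LEVELS[2], human_fallback_required(2))  # Partial automation True
```

The key boundary is between Levels 3 and 4: at Level 4 and above, the system no longer depends on a human responding to a takeover request.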
Recent Accidents Caused By Self-Driving Cars
Although data suggests that self-driving cars can be safer than human drivers, the public still questions their safety, especially after several recent crashes involving autonomous vehicles.
In March 2018, a driver named Walter Huang was killed in California when his Tesla Model X crashed into a concrete lane divider and careened into oncoming lanes. This wasn't the first fatal accident with Autopilot engaged: in Florida in 2016, Tesla Model S driver Joshua Brown was killed when his car crashed into a tractor-trailer truck crossing the highway.
Both cases happened while the cars were in Autopilot mode, which requires the driver to be ready to intervene at any time because the system cannot handle every situation. It is not clear whether Huang or Brown took any action before their crashes, but this detail is still worth considering.
Moreover, early this year, an Uber self-driving test vehicle traveling at about 30 miles per hour struck a pedestrian who was crossing the road with her bike. She was severely injured and died shortly afterward at the hospital. In this case, neither the car's sensors nor the human safety driver reacted in time to prevent the collision. Footage released afterward showed that the "safety" driver behind the wheel was looking down at a screen instead of at the road.
The incident led Arizona to suspend Uber's self-driving tests in the state, and it has set back the company's plan to launch its driverless system by year-end.
Self-Driving Cars vs. Human Drivers
A computer is undoubtedly much faster than a human at computation, which is why, in certain situations, it can run calculations on a far larger scale than the human brain. On top of that, computers follow rational rules, so if everything is programmed properly, they are unlikely to make emotion-driven mistakes.
An NHTSA study found that human error is the critical factor in about 94% of traffic accidents, often involving alcohol, drugs, or distraction. With the rising popularity of high-tech devices like smartphones, that number is expected to climb even higher.
Computer drivers, on the other hand, don't drink, take drugs, or get distracted by smartphones. In that respect, they could be better drivers, and they could also improve over time by accumulating driving experience.
Imagine a world of fully automated vehicles, all connected to a shared network that provides essential information on routes, traffic, and road conditions. Severe car crashes could be far more easily prevented if every car knew what the others were "thinking."
So if your question is "Are self-driving cars safer than humans?", our answer is yes, but it will take time. Even though we've made huge progress with self-driving cars, we still cannot guarantee they are 100% safe.
So, Why Are Self-Driving Cars Not Safe at the Moment?
First of all, let's set aside pure technical failure, because both the Tesla Model X and Model S in the accidents above were Level 2 vehicles: they offer an assisted driving mode capable of handling highway driving, but it still requires the driver's full attention.
According to self-driving car accident statistics, these were the first fatal accidents in over 200 million kilometers driven with Tesla's Autopilot engaged, while the average for conventional driving is one fatality per roughly 150 million kilometers. The comparison isn't entirely fair, since Teslas are well-built cars with advanced safety features, but it does suggest that computer-assisted driving is doing a reasonably good job.
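As a sanity check on those figures, the implied fatality rates can be compared directly. This is back-of-the-envelope arithmetic using only the numbers quoted above, not an official safety analysis.

```python
# Kilometers driven per fatal accident, from the figures quoted above.
autopilot_km_per_fatality = 200e6   # ~200 million km with Autopilot engaged
average_km_per_fatality = 150e6     # ~150 million km for ordinary driving

# Normalize to fatalities per billion kilometers for easy comparison.
autopilot_rate = 1e9 / autopilot_km_per_fatality  # 5.0
average_rate = 1e9 / average_km_per_fatality      # ~6.67

print(f"Autopilot: {autopilot_rate:.2f} fatalities per billion km")
print(f"Average:   {average_rate:.2f} fatalities per billion km")
```

On these numbers alone, Autopilot's rate is about 25% lower per kilometer, though a single fatality is far too small a sample to draw firm conclusions from.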
On the other hand, Tesla itself has stated that Autopilot is still an experimental beta and that users are helping to catch bugs. Drivers are required to keep their hands on the wheel at all times so they can handle critical situations if needed. If they don't, the car issues multiple warnings and eventually comes to a stop.
Although the tractor-trailer accident was clearly a computer error (the system could not distinguish the white truck from the bright sky), Brown made a mistake as well, since he did not attempt to stop the car.
This raises the question of whether psychological factors should be weighed as seriously as technical ones. A DVD was later found in Brown's car; it is unknown whether he was watching it, but it makes us wonder whether autonomous cars will encourage this kind of behavior in the future.
While we wait for self-driving technology to mature to the point where we can comfortably take a nap and leave the driving to the car, human drivers should stay aware of the risks that careless behavior behind the wheel poses to their own lives.
In short, autonomous safety technology can help reduce the number of accidents, but keeping drivers safe will still partly depend on the drivers themselves. Given our current progress, that will likely remain true for at least the next decade.
The Moral Dilemma of Self-Driving Cars
Even when the technology improves enough to prevent most accidents, there will still be tricky situations the cars must handle. To illustrate, let's run two small thought experiments.
- Imagine you're on a crowded road, boxed in by surrounding vehicles. Suddenly, heavy objects fall off a truck in front of you. Your car must choose one of three options: swerve right and hit an SUV, swerve left and hit a motorcycle, or stay straight and hit the brakes to minimize the damage.
- Now imagine the same setup with a motorcycle on each side: one rider wearing a helmet, the other not. Should the car hit the first because the rider is more likely to survive, or the second because he is riding irresponsibly?
What do you think the car should do in these situations?
If you are driving in manual mode, whatever happens next is considered an accident; the human mind cannot deliberate in such a split second, so we react instinctively. But when a self-driving car has all the sensors and computing power needed to weigh multiple options, should the outcome be considered the programmer's fault?
Moreover, if you had to choose between a car that would save your life at any cost and one that would minimize total casualties even at the risk of your own life, which would you pick?
These are the moral dilemmas we will have to confront in the future, not only with self-driving cars but with other autonomous systems as well. It is important to identify them early so we can develop suitable solutions and regulations.
Our suggestion for the two scenarios above is to keep improving vehicle design, safety features, and so on. Ideally, with every vehicle self-driven, we could connect them all to a system that analyzes, predicts, and coordinates decisions so that all vehicles work together to minimize damage. Of course, that system's security would have to be strong enough that no hacker could take control of it or break it.
What do you think? Are you ready for self-driving cars? (Check out our earlier post, Self-driving cars: all you need to know today.) Do you think current self-driving cars are safe enough? Tell us more in the comment section. Thank you for reading, and we'll see you in our future posts.