Deaths on U.S. roads are up from 2015, which was already a record-breaking year. More than 35,000 people were killed in motor vehicle accidents in 2015, and according to the Department of Transportation (DOT), that number had already risen 8 percent in the first half of 2016. New estimates from the National Highway Traffic Safety Administration (NHTSA), a division of the DOT, project that 2016 will have seen the biggest increase in motor vehicle fatalities in 50 years.
Why are our roads becoming more dangerous? Experts have identified several factors. For one, people are driving more (3 percent more, to be exact, or 70 billion more miles), most likely because of lower gas prices and the fact that more people are now employed and driving to work. Some also point to distraction from smartphone use, as well as warmer winters, which put more people on the road in unexpectedly dangerous conditions.
Still, traffic safety researchers don’t know what role other factors, such as drug use, play. Government officials from the DOT gave reporters a briefing on the situation, and called for more data in order to get to the bottom of the recent spike in fatal accidents.
In the meantime, they want to take action to combat the crisis. In Oct. 2016, the NHTSA made it a goal to have zero traffic deaths in the U.S. by 2046. In December, they announced a stronger push for connected vehicles, or vehicle-to-vehicle technology (V2V), which they believe will decrease the number and severity of road accidents.
The Obama Administration, as well as the auto industry and tech companies such as Google, Uber and Lyft, have said that connected car tech, and down the road, self-driving cars, are the solution to eliminating danger on the roads. Fully autonomous cars, they argue, would do away with the kinds of human error responsible for most fatal accidents: alcohol impairment, speeding, and distraction.
Others, however, point to the many unknowns surrounding autonomous cars: regulation, privacy, and liability among them. Some consumers have even expressed an unwillingness to give up their own autonomy.
Yet driverless cars are expected to be a $750 billion business by 2030, and the industry, along with the government, shows no signs of hitting the brakes. In preparation for the driverless revolution, we take a look at how exactly driverless cars work and what their pros and cons are.
What Exactly Are Self-Driving Cars?
There is a lot of jargon surrounding tech-enhanced vehicles: is there a difference between driverless cars, self-driving cars and autonomous vehicles? Not really, but there are different degrees of automation, as designated by SAE International (formerly the Society of Automotive Engineers).
Level 0: No automation at all; driving the car is completely up to the person behind the wheel.
Level 1: Level 1 is called “driver assistance,” which basically means the car has cruise control. Most cars already have this.
Level 2: A Level 2 car helps you drive, but you have to keep a careful eye on it. These cars have more advanced automated systems that can maintain their speed, slow down to avoid other cars and stay in their lanes. But if any issues arise, the human has to take over. Tesla’s Autopilot is an example of a Level 2 system.
Level 3: Level 3 cars can actually make decisions on behalf of their driver, using information from the environment to decide whether to change lanes or pass. They may still need emergency human intervention in potential crash situations. Audi already has a working prototype of a Level 3 car.
Level 4: A Level 4 car can handle any and all situations on its own within a very controlled and perfectly mapped area that is safe from unexpected happenings such as severe weather (but can such a place exist?). It is up to the driver to enable the automated mode when it’s safe to do so, but once it’s on, the driver is free to focus on anything besides the road.
Level 5: Not only is no human intervention required for a Level 5, it won’t be possible for a person to take over one of these cars; they won’t have steering wheels or pedals. The idea is that the human gets in, tells the car where to go and can sit back and watch movies, play games or do some work. Commuting in these cars will be more like riding a subway or a train.
Full, Level 5 automation has become the new goal for companies, with Google taking the lead once it realized the task of getting humans to pay attention and react in time to a problem in a Level 3 or 4 was too difficult to solve, according to a report by WIRED. Plus, there was not much incentive to upgrade automation incrementally, since, according to Delphi, if every car on the road had Level 2 capabilities, collisions would already be reduced by 80 percent.
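For readers who think in code, the six levels boil down to one practical question: at which levels must the human still watch the road? The sketch below is our own illustration (SAE defines the levels, not this code, and the enum and helper names are invented):

```python
from enum import IntEnum

class SAELevel(IntEnum):
    """The six SAE driving-automation levels, 0 (none) through 5 (full).
    Names here are our own shorthand for the descriptions in the article."""
    NO_AUTOMATION = 0
    DRIVER_ASSISTANCE = 1      # e.g. cruise control
    PARTIAL_AUTOMATION = 2     # car helps drive; human must supervise
    CONDITIONAL_AUTOMATION = 3 # car decides; human is the emergency backup
    HIGH_AUTOMATION = 4        # fully self-driving within a mapped area
    FULL_AUTOMATION = 5        # no wheel, no pedals

def human_must_monitor(level: SAELevel) -> bool:
    """At Levels 0-2 the human must watch the road at all times.
    At Level 3 the car drives but may demand an emergency takeover,
    and at Levels 4-5 no monitoring is needed in the car's domain."""
    return level <= SAELevel.PARTIAL_AUTOMATION

print(human_must_monitor(SAELevel.PARTIAL_AUTOMATION))  # True: Autopilot-style systems
print(human_must_monitor(SAELevel.FULL_AUTOMATION))     # False: sit back and relax
```

The cutoff at Level 2 is the line WIRED's report describes Google worrying about: below it, the human is the safety system; above it, the car is.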
How Do Self-Driving Cars Work?
How exactly does a semi-autonomous or fully autonomous car function? Self-driving cars are outfitted with lasers, cameras, and radars that give them a 360-degree view. These sensors, along with GPS and mapping technology that pinpoint the car’s exact position, let the car recreate the physical world in a virtual 3D space that its computer system can process: essentially, a video game version of physical space. The idea is that these cars, with greater awareness than any human driver, can make better decisions about how to proceed in any situation.
The information the sensors gather is processed by the car’s computer system, which acts as the car’s brain and makes decisions thanks to a type of artificial intelligence (AI) called machine learning. Machine learning is a way of teaching algorithms by example and experience; it’s what companies like Amazon and Netflix use to give you recommendations based on what you’ve watched or bought. In practice, engineers have sensor-equipped cars drive hundreds of miles collecting data, then painstakingly go through that data in a data center, identifying every object in every frame and feeding that labeled information back into the computer, along with directions on how to behave when it encounters each object.
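The “teaching by example” idea can be shown with a deliberately tiny sketch: the hand-labeled detections play the role of the engineers’ labeled frames, and a new detection is classified by its nearest labeled example. The feature values and labels below are invented for illustration; real systems learn from millions of frames with deep neural networks, not four hand-picked points.

```python
import math

# Hypothetical labeled examples, as if engineers had tagged objects in
# recorded sensor frames. Features: (width in m, height in m, speed in m/s).
labeled_examples = [
    ((1.8, 1.5, 15.0), "car"),
    ((0.5, 1.7, 1.4),  "pedestrian"),
    ((0.6, 1.1, 5.0),  "cyclist"),
    ((0.3, 2.0, 0.0),  "sign"),
]

def classify(detection):
    """Label a new detection with its nearest labeled example (1-NN),
    the simplest possible form of learning by example."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return min(labeled_examples, key=lambda ex: dist(ex[0], detection))[1]

# A small, slow-moving, person-sized object is matched to "pedestrian".
print(classify((0.5, 1.6, 1.2)))  # → pedestrian
```

The real pipeline differs in scale and sophistication, but the principle is the same: the system’s behavior comes from labeled experience, not hand-written rules for every object.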
Will Driverless Cars Really Be Safer?
Both companies and the government are pitching autonomous vehicles as safer than cars driven by humans. In September, Google said 94 percent of car accidents in the U.S. are caused by human error. Certainly the scope of the sensors in autonomous cars gives us good reason to believe they’ll be safer. Yet the first recorded death in a Tesla operating in automatic driving mode was due to an error in the operating system’s calculations, not human error (after investigating, the DOT declined to recall Tesla’s Autopilot system).
As it stands, companies are already getting semi-autonomous cars on the road without enough evidence of their safety, and little to no regulation.
The NHTSA set initial federal regulations in place for self-driving cars in September, but these, including the 15-point safety assessment for companies to complete before putting cars on the street, are preliminary and not mandatory. Only nine states currently have legislation regarding self-driving cars, and issues have already arisen at the state level, with companies such as Uber deploying autonomous technology without local government permission. Meanwhile, Waymo, Google’s autonomous vehicle program, showed off a fleet of self-driving minivans in Jan. 2017 and plans to release them onto public roads by the end of the month.
Perhaps the biggest risk posed by driverless cars is their vulnerability to cyberattacks. Cars controlled by software can be hacked and run off the road. Last year, hackers showed a WIRED reporter that they could remotely hijack and crash a Jeep, forcing Chrysler to recall 1.4 million vehicles. And physical safety isn’t the only thing at stake: passengers could be held hostage by new forms of ransomware, and their privacy is at risk because their location can be tracked. Law enforcement can also use, and has used, the tech to listen in on vehicles. The NHTSA has not issued rules about the cybersecurity of driverless cars; although the agency has noted the importance of the issue, for now it’s up to companies to police themselves.
It isn’t just security experts who are worried about self-driving cars’ cybersecurity; the public is, too. According to The Christian Science Monitor, a survey by the consulting firm Altman Vilandrie & Company found that 64 percent of consumers would not purchase an automated vehicle and 57 percent wouldn’t even consider riding in one. Some people simply don’t see the added convenience as worth the loss of free will.
Driverless cars also bring up difficult ethical questions. After a crash, where does liability lie? What if programmers don’t think of everything that may cross a car’s path? Most importantly, how do you pre-program a solution when an unavoidable crash involves a fatality? How do you weigh property damage against hitting, say, a dog? If the alternatives are the death of the passenger and the death of a pedestrian, which one will the car choose? And why should tech companies and automobile manufacturers be left with the power to program these decisions?
Although autonomous cars are bound to reduce the number of times cars are in a position to cause harm, these questions need answers. And even with all the privacy, security, liability and regulation issues squared away, there is still the question of who will have access to self-driving cars, and when. They are currently prohibitively expensive for most consumers, and will most likely remain so for the poorest Americans even once they are mass-produced.
Before Self-Driving Cars, Talking Cars
Before we’ll be able to ride around in advanced autonomous vehicles, we will most likely be driving ‘connected’ vehicles. The DOT proposed a rule in Dec. 2016 that prioritizes the deployment of connected vehicle technologies in an attempt to stem the rising number of fatalities on U.S. roads.
This vehicle-to-vehicle (V2V) technology lets cars talk to each other and avoid crashes. Implementing V2V in all “light-duty vehicles” (that is, passenger cars) could prevent hundreds of thousands of crashes every year, according to the NHTSA. The agency estimates that safety applications enabled by V2V could eliminate or mitigate the severity of up to 80 percent of non-impaired crashes, such as intersection or lane-change crashes.
The NHTSA rule proposes required standard messaging so V2V devices can speak the same language. Cars would communicate location, direction and speed data to nearby vehicles. The short range communications would be updated and broadcast up to 10 times per second to nearby vehicles, identifying risks and warning drivers to avoid imminent crashes. The Department’s Federal Highway Administration plans to issue guidance for these communications shortly, so that transportation planners can integrate the technologies in roadway infrastructure like traffic lights, stop signs and work zones. The department hopes this will “improve mobility, reduce congestion and improve safety.”
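The standardized message the rule describes, with location, direction and speed rebroadcast up to 10 times per second, can be sketched in a few lines. To be clear, the field names and JSON encoding below are our own simplification for illustration, not the actual standardized V2V wire format:

```python
import json
import time

def make_bsm(vehicle_id, lat, lon, heading_deg, speed_mps):
    """Build a simplified V2V safety message carrying the data the rule
    describes: position, direction and speed, plus a timestamp."""
    return json.dumps({
        "id": vehicle_id,
        "lat": lat,
        "lon": lon,
        "heading": heading_deg,
        "speed": speed_mps,
        "timestamp": time.time(),
    })

def broadcast(radio_send, state, hz=10, duration_s=1.0):
    """Rebroadcast the vehicle's current state up to `hz` times per second
    (the rule's 10 Hz cadence) via the supplied radio_send callback.
    Returns how many messages were sent."""
    interval = 1.0 / hz
    end = time.time() + duration_s
    count = 0
    while time.time() < end:
        radio_send(make_bsm(**state))
        count += 1
        time.sleep(interval)
    return count
```

Because every car speaks the same format, any nearby receiver can parse the message and decide, for example, whether the sender is converging on the same intersection; that shared language is exactly what the NHTSA’s standardized-messaging requirement is meant to guarantee.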
V2V would also help vehicles with automated driving functions, like automatic emergency braking and adaptive cruise control, by providing more data for those systems to use in making decisions.
In the meantime, V2V communication can let drivers know whether it is safe to pass on a two-lane road or make a left turn across the path of oncoming traffic, and can even warn of a dangerous car approaching an intersection from “hundreds of yards away.”
According to the NHTSA press release, “the rule would require extensive privacy and security controls in any V2V devices.” It would not involve the exchange of information that can be linked to an individual “as a practical matter.”
Waiting on the World to Change
While we wait for technology to deliver on its promises of safer commutes, we still have to drive on the most dangerous roads this country has seen since 1966. If you or someone you love was involved in a motor vehicle accident, fill out our free, no-risk case evaluation to see if our attorneys can help.