There’s a big difference between the so-called self-driving cars with which Google is experimenting and the computer-assisted driving of Tesla’s Autopilot vehicle. The former is designed to be a wholly autonomous system, while Tesla’s is supposed to have some user input.
Tesla’s system took a hit when it was revealed recently that in May an Ohio man was killed in his Tesla Model S with the driver-assist technology switched on. The man’s death marks the first time a person has died while being driven by their car, calling into question just how safe Tesla’s semi-autonomous driving technology really is.
Joshua David Brown, 40, was killed in Florida in May when his car struck the side of a tractor trailer that was turning across his path. Instead of slowing down, the car kept its pace and passed under the 18-wheeler.
In a statement on his death, Tesla Motors blamed the crash on the Autopilot feature not being able to distinguish between the “white side of the tractor trailer against a brightly lit sky.” Tesla was quick to point out that this was the first known fatality in 130 million miles of driving where the Autopilot was activated.
Tesla also noted that its Autopilot feature “is an assist feature that requires you to keep your hands on the steering wheel at all times.” Additionally, whenever the feature is engaged, the driver is reminded by the car to “always keep your hands on the wheel,” and to “be prepared to take over at any time.”
Are Drivers Relying Too Much on a Feature Not Intended for Full Autonomy?
The circumstances surrounding the crash are still under investigation, but the driver of the tractor trailer claimed that the car was still playing a Harry Potter film when it finally came to a halt. This suggests that the driver was not fully engaged behind the wheel, something Tesla explicitly says not to do while its Autopilot feature is being used.
More recently, a man driving from Seattle to Yellowstone National Park crashed on the side of a winding two-lane road in Montana. The man had Autopilot switched on and did not have his hands on the wheel. He claimed that the car never told him to take control of the wheel, but a statement released by Tesla claimed otherwise.
In a related instance, a man was caught on video sleeping in the driver’s seat of his Tesla as the car slowed and accelerated in heavy traffic. This man’s irresponsible use of the Autopilot feature did not lead to an accident, but it does show just how trusting people are of technology.
How Should Drivers Use Autopilot?
If Tesla drivers think they are getting a fully self-driving car, they are mistaken. Tesla insists that its Autopilot feature is just an advanced version of cruise control that requires the driver’s full attention. People who read, watch movies, take a nap, or even just text while using it are misusing the car and are at risk of getting into an accident.
If a person is looking for a fully autonomous car, they may want to wait until Google brings the self-driving car it has in development to market.
Google’s take on fully autonomous car technology is much more comprehensive, and cars with it could even be sold without pedals or a steering wheel. Unlike Tesla’s system, Google’s version may completely eliminate the need for humans to drive.
However, a widely used, fully autonomous car is still a ways off.
Until then, a person must remain fully engaged behind the wheel of their vehicle, regardless of its technology, or risk getting into an accident. Once cars without steering wheels and pedals are available to the public, people will finally be able to safely focus on other things while they are in their car.