Google’s self-driving car has been in the news before for its involvement in car accidents, but up until now, it has never caused one.
Monday afternoon, it was reported that a Google self-driving car caused an accident with a bus on Valentine’s Day.
How Did The Google Self-Driving Car Crash?
The robotic SUV (a Lexus RX450h) collided with a bus in Mountain View, California. The car had moved into the center lane in order to make a right turn around some sandbags.
The Google car incorrectly assumed that a bus, which had been approaching from behind, would slow down or stop to let the car through. The bus driver did not.
The Google car smacked into the side of the bus at low speed, damaging its own front fender, wheel, and sensor.
Fortunately, there were no injuries.
Google’s Statement Regarding The Crash
While many are taking the incident as another reason to lose trust in self-driving vehicles, Google’s statement, released earlier this week, characterizes the crash as more of a “misunderstanding.”
“Our test driver, who had been watching the bus in the mirror, also expected the bus to slow or stop,” Google explained in the statement. “And we can imagine the bus driver assumed we were going to stay put. Unfortunately, all these assumptions led us to the same spot in the lane at the same time.”
According to Google’s spokesperson, many human drivers in Mountain View hug the right-hand curb when making a right turn. Since the lanes are wide enough for traffic to continue around them on the left, the local courtesy is to move out of the way.
Before the Valentine’s Day crash, the Google car began to do just that, until it detected sandbags by the storm drain blocking its path. The Google car came to a stop, waited for cars to pass, and then attempted to maneuver into the center lane to make a wide right turn around the sandbags.
Injuries & Fatalities From Self-Driving Cars
As we mentioned above, this is the first accident caused by one of Google’s self-driving cars. In 1.4 million miles of test driving, the self-driving vehicles have not caused any injuries or deaths due to a crash.
The cars have been involved in fewer than two dozen minor crashes—none of which were the fault of the autonomous vehicles.
The Valentine’s Day crash does bring up a worry that has crossed everyone’s mind since Google first announced its self-driving car: the first fatal accident caused by a self-driving vehicle.
In the early 1900s, when fatal car accidents were first reported, many newspapers questioned whether or not automobiles were inherently evil.
When a serious self-driving car accident does happen, we expect a similar uproar. Given the immediacy of social media these days, the controversy will be big—and it’s difficult to say whether Google’s goal of having self-driving cars replace all manual vehicles will be thwarted.
All statistics point to the safety of self-driving cars. Many researchers believe that autonomous vehicles will shrink traffic fatalities by up to 90 percent this century—saving as many lives as anti-smoking efforts already have.
Ethics Of A Self-Driving Car
Humans are naturally hesitant to relinquish control, and it’s hard to get past the “what-ifs.”
The biggest ethical dilemma is whether a self-driving vehicle should make a “logical” decision when faced with an unavoidable harm. In other words, should a self-driving car sacrifice its own driver’s life in order to save ten other lives?
A recent poll found that the public generally agrees the car should sacrifice the driver to save the ten other lives. However, when asked whether they would use a car programmed to do so, the majority said no. (This questionnaire is virtually identical to the “Trolley Problem” in philosophy, and we definitely hope to blog more about it as self-driving cars continue to hit the streets.)