
There is no reason whatsoever we need self-driving cars. This is one of the dumbest initiatives I’ve ever seen, and now it’s a deadly one too

Uber’s self-driving car killed Arizona woman because auto-brake system had been disabled



By Dan Calabrese, May 24, 2018


Obvious question: If the auto-brake can be disabled, how can you be sure it’s enabled when you need it to be? Also, if you’d disable it to “prevent erratic driving,” then how can it be said the car can operate viably when the auto-brake is engaged? Any way you look at it, an Arizona woman is dead because – contrary to what Uber and the auto companies want you to think – cars cannot think for themselves, nor can they make split-second, life-and-death decisions on a reliable, consistent basis.
I’ve been saying all along that the whole concept of the “self-driving car” is a death trap – guaranteed to kill people and for no reason whatsoever, as there is absolutely no reason on God’s green Earth we need to make cars that don’t require drivers. This is the most pointless technology quest I’ve ever seen. But no, they insist! The technology is more reliable than human operators! Yeah. No:
The NTSB said the Uber vehicle’s sensors detected the pedestrian walking across the road with a bicycle six seconds before impact. At first, the self-driving system’s software classified the pedestrian as an unknown object, then as a vehicle and finally as a bicycle, with varying expectations of where the bike was headed. It was only 1.3 seconds before impact that the system decided emergency braking was needed, the NTSB said.

According to Uber, the NTSB said, Volvo’s built-in automatic braking system had been disabled during testing to “reduce the potential for erratic vehicle behavior.” “The vehicle operator is relied on to intervene and take action,” the report said.

A video released around the time of the crash showed the safety operator glancing down toward the center console of the vehicle several times before impact. In an interview with NTSB investigators, the operator said she had been monitoring the self-driving system interface.
The pedestrian was dressed in dark clothing and was walking a bicycle across the road, not at a crosswalk, according to the report. The NTSB also said the pedestrian tested positive for methamphetamine and marijuana. Volvo, in a statement, said it was helping with the investigation, noting its driver-assistance system was disengaged.

An Uber spokeswoman Thursday said the company has worked with the NTSB and started its own review, bringing on former NTSB head Christopher Hart to advise on its safety culture. She said the company in the coming weeks will detail changes it plans to make.
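Just how little margin does a braking decision made 1.3 seconds before impact leave? Here’s a rough back-of-the-envelope sketch. The speed, deceleration, and actuation-delay figures are my own assumptions for illustration (news reports put the vehicle’s speed near 40 mph); they are not numbers from the NTSB report:

# Back-of-the-envelope stopping math. All figures below are assumptions
# for illustration, not values taken from the NTSB report.
speed_mph = 40.0          # assumed speed; reports put the Uber vehicle near 40 mph
decel_g = 0.7             # assumed hard-braking deceleration for a passenger car
actuation_delay_s = 0.25  # assumed lag between the braking decision and full braking

speed_ms = speed_mph * 0.44704       # convert mph to meters per second
decel = decel_g * 9.81               # deceleration in m/s^2

distance_to_impact = speed_ms * 1.3  # ground covered in the final 1.3 seconds
delay_distance = speed_ms * actuation_delay_s
braking_distance = speed_ms ** 2 / (2 * decel)

print(f"Distance to the pedestrian at decision time: {distance_to_impact:.1f} m")
print(f"Distance needed to stop: {delay_distance + braking_distance:.1f} m")
# Prints roughly 23 m available vs. nearly 28 m needed: the car physically
# cannot stop in time once the decision comes that late.

Even under these fairly generous assumptions, the decision came too late for the car to stop itself, which means everything depended on a human operator who wasn’t watching the road.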
The only change it needs to make is a very simple one: Give up on the idea of self-driving cars. Forever.

But let’s consider the issue of the auto-brake system having been disabled. Someone who is determined to defend the self-driving car concept come hell or high water could claim this points to human error. Why, they should have enabled it before they went out on the road! Sure, but the reason they didn’t is that the auto-brake was causing the car to drive erratically, probably by making all kinds of wrong determinations about what was in front of the car. About 10 years ago, Angie and I had a car with technology called “SmartTrak,” but we quickly took to calling it DumbTrak, because it caused the car to jerk around all the time, until we finally realized we could disable it and just drive the car normally.


Someone could say we didn’t use the technology correctly, but the fact is the technology wasn’t intuitive to the way we drove the car. That wasn’t on us. We were much happier with the car when we could just control it ourselves. Whoever disabled that auto-brake apparently felt the same way about the self-driving car. In theory it’s supposed to be highly accurate and responsive, but in practice it’s full of glitches, and it makes the experience of riding in the car uncomfortable for the driver. The easiest way to deal with it is to disable the technology that makes it act that way. But without it, people get killed.

But wait, you say, when the car is set normally, the auto-brake will be on! OK, but are you sure it can’t be disabled except by the operator? Have you ever had a function on your laptop – maybe your network adapter, or your speakers – suddenly turn up disabled? It’s a technology glitch, surely not the way it was designed to work, but it happens, and you have to go through a diagnostic exercise to reverse it and keep it from happening again. How do we know this function, or other functions of self-driving cars, wouldn’t do the same thing?

I know humans make mistakes too, but when humans make mistakes we can understand why they did, and we know what to do about them. When you rely on technology to perform a function like spotting a pedestrian, knowing what’s there and stopping the car – all in a matter of a second and a half – you’re putting an awful lot of faith in something that historically hasn’t been asked to do things like that. And the technology is already failing the test.

There is no reason whatsoever we need self-driving cars. This is one of the dumbest initiatives I’ve ever seen, and now it’s a deadly one too. Just stop this. Now. No self-driving cars. Enough people have been killed already.

Dan Calabrese

Dan Calabrese’s column is distributed by HermanCain.com.

Follow all of Dan’s work, including his series of Christian spiritual warfare novels, by liking his page on Facebook.

