By Robert Rector
Hub correspondent

Let’s face it, we can’t help being hayseeds from time to time. Being beguiled and bamboozled is a fine old American tradition.

That Rolex watch we bought from a guy on the street corner turned out to be plastic.

That “ranch property” we bought in Nevada was a former nuclear weapons testing site.

We gave our bank account information to a Nigerian princess we met on the Internet, but the $2 million check she promised us never arrived.

We elected Donald Trump.

And now, those self-driving cars that promised us a Great Big Beautiful Tomorrow have turned out to be a dubious, if not deadly, proposition.

Car manufacturers throughout the world have spent millions on developing autonomous automobiles and at least that much on a PR campaign selling the concept to an adoring public.

Here’s the pitch: You have errands to run, kids to deposit at school, a friend to visit, a concert or a baseball game to attend. You hop into your car and punch in the destination. It is the last driving decision you will make during the trip.

Your car will take you to your destination, leave on its own to find a parking space, then return to pick you up when you summon it. In fact, you may not even have to own a car. Perhaps you can simply call Acme Driverless Cars and the company will send you a vehicle that will pick you up, drive you to your destination, return when you’re ready and take you home again. Think of it as your own personal Uber.

The New York Times gushingly envisions “a city devoid of stoplights. Indeed, devoid of all major street signs: no huge billboards across highways naming the exits, no complex merge instructions. Those signs are expensive to build and maintain. They’re designed for humans, and GPS-brained robots don’t need them to know where they’re going.”

Sounds great, where do we sign up?

But this fairy tale was dashed on the jagged rocks of reality recently when a self-driving Uber car struck and killed a pedestrian in Arizona. Worse, a human safety driver was on board at the time, and the crash happened anyway.

Tesla just revealed that its Autopilot feature was turned on when a Model X SUV slammed into a concrete highway lane divider and burst into flames on the morning of Friday, March 23. The driver died shortly afterwards at the hospital.

Mike Ramsey, an analyst who focuses on self-driving technology, takes a dim view of the state of things. “The system as it is now tricks you into thinking it has more capability than it does. It’s not an autonomous system. It’s not a hands-free system. But that’s how people are using it, and it works fine, until it suddenly doesn’t.”

Less serious but just as daunting was an incident in which a Google self-driving car struck a bus in Northern California. It, too, had a driver on board.

Google engineers dryly concluded that “From now on, our cars will more deeply understand that buses (and other large vehicles) are less likely to yield to us than other types of vehicles, and we hope to handle situations like this more gracefully in the future.”

Cars that “deeply understand”? In other words, self-driving cars must not only be capable of accelerating and braking, turning and parking, but of making life-or-death decisions as well. We’re talking about Artificial Intelligence. Robots hit the road.

Ethical questions

Meet Stanford engineering professor Chris Gerdes, who is raising questions about ethical choices that must inevitably be programmed into the robotic minds that will be serving as our chauffeurs.

He provided a demonstration as reported by Bloomberg News:

Using a dune buggy on a cordoned-off street, he put the self-driving vehicle into harm’s way. A jumble of sawhorses and traffic cones simulating a road crew working over a manhole forced the car to decide: obey the law against crossing a double-yellow line and plow into the workers, or break the law and spare the crew. It split the difference, veering at the last moment and nearly colliding with the cones.

That demonstration raises the following issues, according to Gerdes. When an accident is unavoidable, should a driverless car be programmed to aim for the smallest object to protect its occupant? What if that object turns out to be a baby stroller?

If a car must choose between hitting a group of pedestrians and risking the life of its occupant, what is the moral choice? Does it owe its occupant more than it owes others? Gerdes asks.
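To see what “programming ethics” actually means in practice, here is a minimal, purely hypothetical sketch of the kind of cost-weighing a car’s decision software would have to perform. The obstacle labels, weights and scoring below are invented for illustration only; they are not drawn from any real system, Gerdes’s demonstration included.

# Hypothetical sketch of an "unavoidable crash" decision.
# All labels and weights are invented for illustration.

OBSTACLE_WEIGHTS = {
    "traffic_cone": 1,      # property damage only
    "parked_car": 10,
    "road_worker": 1000,    # a human life
    "baby_stroller": 1000,  # also a human life, despite being the "smallest object"
}

def choose_path(paths):
    """Pick the maneuver whose obstacles carry the lowest total weight.

    `paths` maps a maneuver name to the list of obstacles it would hit.
    """
    def cost(obstacles):
        return sum(OBSTACLE_WEIGHTS.get(o, 100) for o in obstacles)

    return min(paths, key=lambda name: cost(paths[name]))

# The dune-buggy dilemma, reduced to toy form:
paths = {
    "stay_in_lane": ["road_worker", "road_worker"],   # plow into the crew
    "cross_double_yellow": ["traffic_cone"],          # break the law, clip a cone
}
print(choose_path(paths))  # -> "cross_double_yellow"

Even this toy version makes the point: the “right” answer depends entirely on the weights somebody chose to type in, which is exactly the question Gerdes is raising.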

To carry it to an absurd extreme, imagine you program your car to take you to a fast food restaurant, but the car refuses because it knows fast food is bad for you.

It’s no wonder some fast food establishments are working on delivering orders to your doorstep by drones.

The whole business of Artificial Intelligence is the stuff of an entire library of scary science fiction. And the reality is that we don’t know what will happen when we let that genie out of the bottle.

Or as one expert explained: “We cannot quite know what will happen if a machine exceeds our own intelligence, so we can’t know if we’ll be infinitely helped by it, or ignored by it and sidelined, or conceivably destroyed by it.”

Self-driving cars, when they ultimately do arrive, will make our world a safer place. But the road to that future appears to be a long one. And for now, we’re mere crash test dummies.

As for me, I plan to keep my eyes on the road, hands on the wheel and a foot near the brake pedal. I’m not going to entirely entrust my safety or that of my family to an industry that recalled 53 million cars in 2016.

Robert Rector is a veteran of 50 years in print journalism. He has worked at the San Francisco Examiner, Los Angeles Herald Examiner, Valley News, Los Angeles Times and Pasadena Star-News. His columns can be found at Robert-Rector.blogspot.com. Follow him on Twitter at @robertrector1.
