Drive.ai vehicle now deployed in pilot program covering Frisco business park



Driverless Vehicles Hit North Texas Streets But Broader Self-Driving Hype Has Crashed Into Reality


September 2018


Driverless vehicles are being deployed in pilot programs in Frisco and Arlington in a demonstration of what may be possible with the right technology and a carefully limited operating space. Both are being fielded by a company called Drive.ai.


The first is operating within Hall Park, a large campus of office buildings and retail in Frisco.  The second will operate on city streets within the confines of the Arlington entertainment district surrounding the stadiums where the Dallas Cowboys and the Texas Rangers play.


Those limited steps forward run counter to the larger trend reported by Christopher Mims in a recent Wall Street Journal piece. He concludes that the more self-driving vehicles hit the road, the more pressing the safety concerns and legal and regulatory issues will become.

Here are some excerpts from the piece:


Hardly a week goes by without fresh signposts that our self-driving future is just around the corner. Only it’s probably not. It will likely take decades to come to fruition. And many of the companies that built their paper fortunes on the idea we’d get there soon are already adjusting their strategies to fit this reality.


Uber, for example, recently closed its self-driving truck project and suspended road testing of self-driving cars after one of its vehicles killed a pedestrian. Uber’s chief executive even announced he would be open to partnering with its biggest competitor in self-driving tech, Alphabet Inc. subsidiary Waymo. Meanwhile, Waymo CEO John Krafcik recently said it will be “longer than you think” before self-driving vehicles are everywhere.


“Self-driving technology has the potential to make our roads safer and cities more livable, but it will take a lot of hard work, and time, to get there,” says an Uber spokeswoman. In the past two years, Tesla CEO Elon Musk planned, then scrapped a coast-to-coast autonomous road trip. And Lyft CEO John Zimmer’s 2016 prediction that self-driving cars would “all but end” car ownership by 2025 now seems borderline ridiculous.


There are many reasons the self-driving tech industry has suddenly found itself in this “trough of disillusionment,” and chief among them is the technology. We don’t yet know how to pull off a computer driver that can perform as well or better than a human under all conditions.


It turns out that the human ability to build mental models isn’t something that current AI can just learn, no matter how much data it’s fed. And even once we have the technology, we’ll still have to deal with all those unpredictable humans in cars, on bikes and scooters, and on foot.


We’ll have to adjust to the reality that autonomous driving could be headed for narrower—but still transformative—applications. And if our desire for driverless taxis and delivery vans is strong enough, we might need to create dedicated roads for them.


Over a lifetime of driving, humans become expert at countless subtasks, from noticing distracted pedestrians to questioning the judgment of construction workers waving them through a work site. While much has been made of the total number of miles that various self-driving systems have racked up, conquering these little annoyances actually requires an enormous amount of intellectual labor by many teams of engineers.


LEGAL AND BEHAVIORAL ISSUES


Even when (or if) we get a working, go-anywhere self-driving system, we would face myriad legal and behavioral challenges, says Meredith Broussard, author of “Artificial Unintelligence: How Computers Misunderstand the World.”


When a Tesla slammed into the back of a stopped firetruck at 60 miles an hour, the driver sued the auto maker, claiming the company misrepresented the capability of its Autopilot software. Who is liable when a self-driving car gets into an accident? We have yet to resolve the issue, which could lead to a sea change in how we insure vehicles.


It’s also not true that we must transition to self-driving cars because human-piloted ones are so lethal, Prof. Broussard says. Countless innovations have made cars radically safer since the 1950s and continue to do so. Meanwhile, new distractions, such as smartphones, can be addressed more cheaply, without resorting to full autonomy.


Our love affair with self-driving cars is a form of “techno-chauvinism,” Prof. Broussard says. “It’s the idea that technology is always the highest and best solution, and is superior to the people-based solution.”


While we work all of that out, we’ll also need to start spending big money refashioning our cities, bike lanes and sidewalks so that they are friendlier to self-driving vehicles. This would have to coincide with the rollout of widespread and robust 5G wireless internet that could power a massive vehicle-to-vehicle communications infrastructure. If every car driven by every human was tracked, along with every autonomous vehicle, then perhaps humans and machines could share the road.


After all, it’s not keeping to a lane that’s hard. It’s predicting what all those capricious and distracted human drivers around you might do next.