Last week, the New Yorker published an article on the rise of autonomous, or self-driving, vehicles written by Burkhard Bilger. My friends, who know I'm writing a book on auto history, filled my inbox with links to it. It's a great piece. It's well-written and snappy, and it avoids many of the cliches that haunt most writing on self-driving cars. I first learned about Bilger's article on Twitter from a Tweet that said something like "The Inevitable Rise of the Self-Driving Car," which got my hackles up in a severe way. Nothing is inevitable, except death. But Bilger doesn't make an inevitability argument (damn you, Twitter!). Indeed, he spends a good bit of time exploring the many things that could go wrong that would keep autonomous cars from being legal or from being widely adopted. (More on the role risk plays in technology adoption in a post on the Tesla battery fires coming here soon.)
Still, Bilger's article was narrow in a few unfortunate ways. First, the only issue it considered was safety, both the possibility that autonomous vehicles will decrease auto accidents and the hazards the vehicles may pose themselves. There are good reasons to focus on safety. More than 30,000 people still die in auto accidents every year in the USA, and we treat that as normal and not especially troubling. Yet we could also use autonomous cars to curb other problems, including emissions and fuel consumption. I remember seeing a staff member of the National Highway Traffic Safety Administration on TV explaining that the most efficient way to drive is to drive as if there were an open coffee cup on the dashboard and try not to spill it. How far that is from how most of us actually use our cars!! We can imagine federal regulations that would require autonomous vehicles to drive as efficiently as possible, decreasing emissions and fuel use. Moreover, even if we stick with safety, self-driving cars raise a perplexing question about accident liability. If a car in self-driving mode crashes, who will be responsible? The person in the car? The automaker? The firm, like Google, that supplied the software? Mightn't we want to legislate this liability issue ahead of time rather than leave it to be settled in the less efficient space of the courts? Self-driving cars open not one can of worms but many.
But if we go further and think about autonomous vehicles as part of a larger sociotechnical system that includes both human actors and other technologies, other issues emerge. The one I would like to focus on here is how powerful economic actors, like corporations, might use self-driving vehicles to remake the labor system. Like many accounts of autonomous cars, Bilger's article describes the technology as a high-end consumer good, a geek's wet dream. It seems obvious, however, that the most profound impacts of this technology will come not after geeks buy Google cars but after companies use these systems to shed workers. Hundreds of thousands, if not millions, of people—truckers, cabbies, delivery people—drive for a living in the USA. Why wouldn't corporations replace these workers with autonomous vehicles and thereby cut labor costs, benefits, workers' compensation claims, and so on? This possibility seems not like an inevitability but like a logical conclusion of the way our economy has worked for a long time, doesn't it? Companies have been using machines and algorithms to replace workers for hundreds of years. Why has this likelihood not entered most discussions of self-driving cars?
We shouldn't blame Bilger for so narrowly defining the topic of the self-driving car and its consequences, however. This narrow treatment is endemic to how our media cover technological systems. A paper I'm developing called "What 'Technology' Means at the New York Times" shows that even the USA's paper of record does a dependably superficial and reliably unreliable job of thinking through technological systems, thereby missing, for instance, how our oldest and most mundane technologies, like our power grid and combined sewer systems, pose many of our greatest threats. Fittingly, the NYT's technology section is a subsection of the business section—a dishonor not shared by the Gray Lady's coverage of "science"—and the paper's primary technology "journalist" is David Pogue, a glorified gadget reviewer. Journalists' inability to think through systems is (pardon the pun) systemic, in the sense that it is pervasive and predictable. It is fostered by social and market structures that push editors towards one end point: novelty trumps knowledge. So, bring on the gadgets.
Then, last night, Amazon.com CEO Jeff Bezos announced in an interview with Charlie Rose on 60 Minutes that the company plans to use drones to deliver packages in as little as thirty minutes after a purchase is made. Bezos had Tweeted that he was going to unveil something on the program, and he got the people who care about such things all excited. Twitter was alight.
Bezos forgot to say in the interview that his favorite thing about drones is that they don't have human rights. The 60 Minutes interview comes on the heels of news reports about the nasty working conditions in Amazon warehouses, including an exposé on the issue posted by the Guardian on Saturday, November 30. Someone more prone to conspiracy theory than I am might even suggest that the 60 Minutes bit was meant as cover for these negative stories. So, did Charlie Rose and the staff of 60 Minutes ask Bezos about the morally fraught topic of labor? Nah!! They decided to make a puff piece, complete with Bezos spinning out a bit of near-future science fiction involving drones, which now has the Internet all a-Twitter. (I could go on a long diatribe here about the state of journalism today and how so much of it, like the 60 Minutes Amazon piece, is just glorified advertising, but really, who cares?)
The one place Rose did push Bezos was on the old issue of Amazon putting locally owned bookstores out of business. It wasn't surprising that Bezos tried to dodge the issue or pretend it didn't matter. What was surprising was that Rose didn't push back against the bit of tripe Bezos handed him: "The Internet is disrupting every media industry, Charlie, you know, people can complain about that, but complaining is not a strategy. And Amazon is not happening to book selling, the future is happening to book selling." Bezos's statement is a beautiful example of someone using the notion of technological determinism—the idea that technological change drives social change—to absolve himself of responsibility. Remember, all you Amazon.com workers: when you work all day in warehouses without heat and air conditioning and go home exhausted and underpaid, that's just the future happening to you. Many of my friends have outright hatred for Walmart. They should save some of their spleen for Mr. Bezos.
At the heart of the supposedly new "digital economy" lies the plain old industrial economy, in which profits are often won by giving workers short shrift. We are so used to this fact that we no longer blink when we hear about Apple's further adventures with Foxconn, or about how our electronics are filled with conflict minerals, or about how companies fail to conduct the due diligence that would keep gold mined by children out of the phones they produce, or, yes, about the crappy treatment workers receive at Amazon's warehouses. Our present-day reality more often resembles a Charles Dickens novel than the science fiction tales so dear to tech geeks.
But most companies prefer replacing workers to abusing them. In the last few years, people have begun talking about "dark factories," factories that run without workers beyond the few people needed to keep the machines humming. Can we also imagine a mostly "dark transportation system"? Self-navigating container ships would bring material across the Pacific to California, where robots would unload the containers onto autonomous trains, which would haul the stuff to intermodal transport centers in the Midwest or wherever, where more robots would place the containers on the backs of self-driving tractor trailers, which would take the materials to Amazon's warehouses. We could also imagine delivery trucks that drive themselves. A human schlepper could move boxes around in the back of the truck and get ready for the next stop instead of splitting time between driving and managing the boxes, decreasing the overall number of delivery workers needed. Perhaps consumers would be offered an incentive if they were willing to come out to the curb and sign for a package handed to them by a robot from the side of an autonomous FedEx vehicle.
Apologists for the status quo say that technological change is inevitable. They say that we should just retrain displaced workers (as if previous retraining programs had a strong track record). Some even claim that the answer is for everyone to learn to code (as if we are all going to join the technical elite). Yet we live in an increasingly unequal society, and our use of technology plays a central role in this troubling trend. Silicon Valley famously has an inverted bell curve of education. It's a society made up of, on one side, hyper-educated, wealthy elites and, on the other, uneducated people who work for the wealthy, care for their children, and keep their lawns looking beautiful. Our use of labor-saving technology, like autonomous cars, only exacerbates this divide. American society has been remarkably tolerant of replacing people with machines and of the technological unemployment such adoption entails. Perhaps we will grow less tolerant as data-mining systems drastically reduce the need for lawyers and medical robots dramatically decrease the number of surgeons necessary. What are we going to do all day?
In the end, what our society needs is a new and greater political (and journalistic) imagination, one that can see the way forward to a society that mindfully uses technologies to replace (perhaps unrewarding) labor and that distributes the profits won by these technologies more equitably. We have to think through the systemic effects that adopting a technology might have and make intentional choices about how we adopt it. But of course we will find hardly any trace of such transformative imagination amongst our current crop of politicians, nor, it so happens, will we find it in Mr. Bezos and his self-serving visions of the future.