On Rafaela Vasquez, the “Safety Driver” of Uber’s Autonomous Vehicle
Imagine further that a new technology company (Uber) is willing to hire you. It offers you the job of “safety driver” for its new, cutting-edge autonomous technology. Your job is simple: you are instructed to “ghost drive” for six to eight hours per day. Although we do not know how much Uber pays its safety drivers, we do know that other companies pay their safety drivers relatively well: General Motors advertises its safety driver positions at $23 per hour, while Waymo offers similar positions at $20 per hour.
You take the job and you’re excited about it. But what you don’t realize is that you have been placed in an untenable position, for two reasons. First, your employer likely knows (or should know) that human factors studies show you will likely fail at your job. Second, you will be blamed, or have your character attacked, when you do exactly what those studies predict: you fail at your job. In other words, you become an easy scapegoat, a distraction of sorts, a red herring.
As I discussed here, here, and here, and at conferences in recent months, humans are, as a practical matter, incapable of being safety drivers. Empirical studies, along with basic intuition and common sense, tell us that a safety driver will not pay attention to the road at all times. Just think of how boring a six-to-eight-hour shift of watching an autonomous vehicle drive would be. At some point you will check your phone, stare at the beautiful landscape, or do something else that takes your eyes off the road.
Or your employer could place an iPad in the car’s center console for you to monitor the drive, alert engineers to problems, and annotate autonomous driving data. Your employer allows you to use the iPad while the vehicle is operating autonomously; yet, at the same time, it expects you to pay attention to the road. (Note that my own car prevents me from changing certain settings while I’m driving.)
But even if your eyes are on the road when an accident is about to occur, what are the chances that you actually comprehend your surroundings? Staring forward is not enough; you must be actively engaged in the driving process.
A person needs situational awareness to intervene safely. As I have discussed here, it takes time for a driver to regain situational awareness: if a “safety driver” has been distracted, it could take up to one minute before he or she regains the situational awareness of a non-distracted driver. A person retaking control without situational awareness may also make dangerous decisions.
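To make those delays concrete, here is a minimal sketch in Python of how far a vehicle travels while a distracted driver is still regaining situational awareness. The speeds and recovery times are assumptions chosen for illustration, not figures taken from the studies discussed above.

```python
# Illustrative only: how far does the car travel while the driver is
# still regaining situational awareness? Speeds and recovery times are
# assumed for the example, not drawn from any particular study.

METERS_PER_MILE = 1609.34

def distance_traveled_m(speed_mph: float, recovery_time_s: float) -> float:
    """Distance covered (in meters) during the recovery window."""
    speed_mps = speed_mph * METERS_PER_MILE / 3600.0
    return speed_mps * recovery_time_s

if __name__ == "__main__":
    for speed_mph in (25, 40, 65):
        for recovery_time_s in (5, 15, 60):  # up to the one-minute figure above
            d = distance_traveled_m(speed_mph, recovery_time_s)
            print(f"{speed_mph} mph, {recovery_time_s:>2} s to recover: ~{d:,.0f} m traveled")
```

Even at modest city speeds, a few seconds without full awareness corresponds to tens of meters of travel, which is precisely why a late takeover is so dangerous.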
In summary, your employer hired you for a job at which you were likely to fail. And that failure can have tragic consequences if it happens at exactly the wrong time (e.g., when the technology is failing and a pedestrian is crossing the street).
This is likely the story of Rafaela Vasquez. She was placed in an untenable position with no real chance of preventing the accident.
I do not mean to suggest that it is impossible for a safety driver to do exactly what he or she is hired to do: pay attention and remain situationally aware at all times. That would be like suggesting that it is impossible for someone to run 100 meters in under 9.6 seconds. It is possible: Usain Bolt ran the 100-meter dash in 9.58 seconds in 2009.
The fact is that safety drivers are, for the most part, unreliable as a means of preventing accidents like Elaine’s. But that does not mean they are useless (at least for now) for other purposes, especially in Uber’s autonomous vehicles. According to documents obtained by the New York Times, Uber’s autonomous vehicles were struggling to drive even 13 miles without human intervention. By comparison, Waymo’s autonomous vehicles averaged 5,600 miles per human intervention, and Cruise, General Motors’ autonomous vehicle company, averaged 1,200 miles per human intervention. We simply should not view humans as the ultimate fail-safe, but rather as a means of helping the technology through difficult parts of its drive (with adequate warning to the safety driver). The roles of safety drivers need to be clearly delineated. But one responsibility that safety drivers should not bear, and that society should not expect them to fulfill, is preventing accidents like the tragedy that occurred on Sunday night in Arizona.
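To put those reported figures in perspective, here is a small back-of-the-envelope calculation, using only the numbers quoted above, that converts miles per intervention into expected interventions over 1,000 miles of testing.

```python
# Convert the miles-per-intervention figures quoted above into
# expected interventions per 1,000 miles of testing.
reported_miles_per_intervention = {
    "Uber": 13,       # per the documents obtained by the New York Times
    "Cruise": 1_200,
    "Waymo": 5_600,
}

for company, miles in reported_miles_per_intervention.items():
    per_1000 = 1000 / miles
    print(f"{company}: roughly {per_1000:,.1f} interventions per 1,000 miles")
```

On these numbers, an Uber safety driver would be expected to intervene dozens of times over a 1,000-mile stretch, while a Waymo safety driver might not intervene even once.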
If Uber and other manufacturers want their safety drivers to have a chance to prevent accidents like the one that killed Elaine, they could use technology to monitor the driver or incentivize their drivers to pay attention to the road. General Motors, for example, uses facial recognition technology to monitor drivers in its vehicles equipped with “Super Cruise.” It is interesting that Uber did not use such technology in its vehicles to ensure that its safety drivers were paying attention to the road. One suggestion is that a company could pair facial recognition technology with an incentive: offer employees a bonus based on the percentage of time they pay attention to the road. And, of course, Uber could disable the iPad while the vehicle is moving, or simply not provide one to the driver. But until a manufacturer seriously addresses the human factors issues of autonomous driving, we should not expect safety drivers to prevent these accidents when the technology fails at the wrong time.
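As a rough illustration of the bonus idea, here is a minimal sketch in Python. The sampling scheme, the attention threshold, and the dollar amounts are entirely hypothetical; a real driver-monitoring system would produce and score its data differently.

```python
# Hypothetical sketch: compute an attention-based bonus from per-second
# samples produced by a driver-monitoring camera. All names, thresholds,
# and dollar amounts are invented for illustration.
from typing import Sequence

def attention_percentage(samples: Sequence[bool]) -> float:
    """Fraction of samples in which the driver's eyes were on the road."""
    return sum(samples) / len(samples) if samples else 0.0

def shift_bonus(samples: Sequence[bool],
                threshold: float = 0.95,
                bonus_per_shift: float = 40.0) -> float:
    """Pay the full bonus only if attention stayed above the threshold."""
    return bonus_per_shift if attention_percentage(samples) >= threshold else 0.0

# Example: a driver who looked away for 2 of 100 sampled seconds.
samples = [True] * 98 + [False] * 2
print(f"attention: {attention_percentage(samples):.0%}, bonus: ${shift_bonus(samples):.2f}")
```

The point is not the particular formula but the alignment of incentives: the driver is paid, in part, for the very behavior the safety-driver role assumes.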