AV Policy 3.0: Analysis of Comments from the National Society of Professional Engineers
- Manufacturers should not be permitted to self-certify their vehicles. Manufacturers should be required to submit their vehicles to third-party verifiers for pre-market approval based on pre-determined technical and safety guidelines.
- The public does not trust automated vehicles, in large part, due to safety concerns. The DOT can foster public trust by implementing adequate and verifiable testing for automated vehicles.
- Automated vehicles need to be able to operate safely in existing infrastructure because it takes time to build connected infrastructure.
- The DOT needs to consider the ethical implications for automated vehicles. NSPE gave as an example that the DOT did not “propose methods for addressing life-and-death decisions.” NSPE asserts that these ethical questions should not be determined solely by the manufacturers and that a third-party perspective is needed.
Each of these points will be considered further below.
Self-Certification and Third-Party Verification
NSPE asserts that automated vehicle manufacturers should not be permitted to self-certify their vehicles. Self-certification is not new for the motor vehicle industry; it is the rule. By way of background, the National Highway Traffic Safety Administration (NHTSA) is responsible for creating and maintaining the Federal Motor Vehicle Safety Standards (FMVSS). The FMVSS, located in 49 C.F.R. Part 571, set minimum safety requirements for all vehicles allowed for sale in the United States. Vehicle and equipment manufacturers self-certify that their products meet the applicable FMVSS. NHTSA does not pre-approve motor vehicles by determining that they comply with the FMVSS, but it does test a number of vehicles and technologies each year. NHTSA will also answer inquiries from a manufacturer about the application of the FMVSS to certain technologies; an example is Google’s inquiry about its self-driving system.
The NHTSA model is in stark contrast to the model used by the Federal Aviation Administration (FAA). The FAA uses a pre-market approval process for commercial aircraft and software-driven products like autopilot. In Federal Automated Vehicle Policy 1.0, the DOT discussed the FAA model of pre-market approval. The DOT noted that the FAA’s pre-market approval has five phases: (1) conceptual design; (2) requirements definition; (3) compliance planning; (4) implementation; and (5) post-certification. The DOT said that the FAA certification process typically takes about three to five years, although the certification process for the Boeing 787 Dreamliner lasted considerably longer (eight years). The DOT also said that the FAA delegates oversight to the manufacturers, which allows for a shorter certification process and resembles self-certification. The DOT noted that a change to pre-market approval would “be a wholesale structure change in the way NHTSA regulates motor vehicle safety and would require both fundamental statutory changes and a large increase in Agency resources.”
NSPE did not assert that NHTSA itself should conduct the pre-market approval process. Instead, NSPE suggested that “vehicle manufacturers should be required to submit to third-party verification of pre-determined technical and safety guidelines before further expansion and rollout.” A system of this sort is used in some European countries for the automotive industry. As other scholars have noted, this system has two potential benefits: (1) “independent verification that the manufacturer is following their stated policies”; and (2) a private certification system could allow for confidential information sharing between the manufacturer and the certifier.
This third-party verification system would require NHTSA to develop objective standards that third-party verifiers would use to judge automated vehicles. If the objective standards were the FMVSS themselves, there does not appear to be a fundamental need for a third-party verification system: manufacturers are capable of self-certifying their vehicles, and NHTSA is capable of investigating and reviewing technologies and vehicles that do not appear to be in compliance with the FMVSS. If, however, the purpose is to determine whether a highly automated vehicle is safe, then numerous questions about the standard and the process remain. Do we compare an automated vehicle to the average human driver? Do we compare automated technologies to one another? Who gets to be a third-party certifier? What is the timeline for certification? Will the process be confidential? If so, how do NHTSA and the public ensure that the certifier is complying with its guidance? Must every software update be verified, or only the initial software? What about emergency updates? NSPE’s proposal does not answer these questions and many others about a third-party certification system. Without these details, it is hard to determine whether a third-party, pre-market certification system should be used.
What we do know is that a third-party certification requirement for automated vehicles would put automated vehicles at a disadvantage in comparison to non-automated vehicles. It would make it easier for non-automated vehicles to reach the market and would keep the price of non-automated vehicles artificially lower than the cost of automated vehicles. These are judgment calls about which technologies to prioritize. If an automated vehicle could be made even safer but is already safer than the average human-driven vehicle, is it really best to prioritize the human-driven vehicle simply because its crashes are caused by human drivers rather than by technology?
Building Public Trust
Manufacturers will need to build public trust before the public is willing to purchase and use automated vehicles. NSPE believes that the way to build this trust is through a third-party verification system, but that is only one way. Another could be to give automated vehicles “human-like” qualities. Trust could also be built through the manner in which automated technologies are deployed. Some people may begin trusting highly automated vehicles by using lower levels of automation; their experiences with those systems may build confidence in the capabilities of higher levels of automation. Others may begin trusting highly automated vehicles simply by using them, catching a ride in an automated taxi and learning to trust high levels of vehicle automation that way.
The problem of building trust is not unique to the automated vehicle industry. Building consumer trust is a problem for manufacturers of all types of goods. Admittedly, here, the trust relates to the safety of the person in the vehicle, but that does not mean that the manufacturers of automated vehicles are ill-equipped to find ways to build trust.
Maybe the solution to the trust problem is a third-party certification system. Maybe it isn’t. The DOT should allow manufacturers to determine the best method to build public trust in their vehicles.
The Ability to Operate within Existing Infrastructure
NSPE asserted that the DOT needs to ensure that automated vehicles operate safely without regard to whether a locality has implemented smart infrastructure. NSPE is right. Automated technologies that are intended to be deployed into our current infrastructure need to be able to operate safely within that infrastructure.
Ethical Implications
NSPE asserts that the DOT needs to start considering the ethical questions confronting automated vehicles. NSPE references crash-optimization algorithms as one such ethical issue on which the DOT needs to provide guidance to industry and the public. Crash-optimization algorithms determine what or who an automated vehicle hits in a must-crash scenario. I discussed crash-optimization algorithms at length in an article entitled “Crashing Into the Unknown: An Examination of Crash-Optimization Algorithms through the Two Lanes of Ethics and Law,” in which I explored ethical theories for programming crash-optimization algorithms and the legal risks that manufacturers may face when they program these algorithms. I ultimately concluded that programming crash-optimization algorithms according to ethical theory would be difficult for two principal reasons: (1) people do not share a common ethical belief system; and (2) the law does not always incentivize ethical programming of automated vehicles.
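To make the concept concrete, a crash-optimization algorithm can be thought of as a cost-minimization routine. The sketch below is purely illustrative: the option names, risk estimates, and harm weights are all invented for this example, and the weights themselves embody exactly the contested ethical judgments described above. Real systems would be far more complex, but the sketch shows why the choice of weights is an ethical choice, not merely an engineering one.

```python
# Hypothetical sketch of a crash-optimization algorithm as cost minimization.
# Every number and option name here is invented for illustration.
from dataclasses import dataclass

@dataclass
class CrashOption:
    description: str
    injury_risk_occupants: float  # estimated 0.0-1.0 risk to vehicle occupants
    injury_risk_others: float     # estimated 0.0-1.0 risk to people outside the vehicle
    property_damage: float        # normalized 0.0-1.0 property damage

def expected_harm(option: CrashOption,
                  w_occupants: float = 1.0,
                  w_others: float = 1.0,
                  w_property: float = 0.1) -> float:
    """Weighted sum of harms; the weights ARE the ethical judgment."""
    return (w_occupants * option.injury_risk_occupants
            + w_others * option.injury_risk_others
            + w_property * option.property_damage)

def choose_option(options, **weights) -> CrashOption:
    """Pick the crash outcome with the lowest expected harm."""
    return min(options, key=lambda o: expected_harm(o, **weights))

options = [
    CrashOption("swerve into barrier", 0.6, 0.0, 0.9),
    CrashOption("brake straight, hit vehicle ahead", 0.3, 0.4, 0.5),
    CrashOption("swerve toward shoulder, risk pedestrian", 0.1, 0.8, 0.2),
]

# Weighing occupants and others equally favors hitting the barrier here,
# accepting occupant risk to spare others.
print(choose_option(options).description)                    # swerve into barrier
# Heavily weighting occupant safety shifts the risk onto the pedestrian.
print(choose_option(options, w_occupants=5.0).description)   # swerve toward shoulder, risk pedestrian
```

The same scenario produces different "optimal" crashes depending only on the weights, which is the core of the problem the article identifies: someone must choose those weights, and NSPE's point is that manufacturers should not choose them alone.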
Nonetheless, manufacturers will need to make these types of judgment calls when programming their automated vehicles. Automated vehicles will inevitably malfunction and crash, and when they do, the vehicle’s code will determine who or what to hit. At this point, the DOT should probably stay uninvolved and focus on the greater ethical questions that manufacturers will face, such as when it is safe to test and use automated technologies on our roads. Uber made an ethical decision to test its automated technology on roadways without adequate safety precautions; Elaine Herzberg is dead because of that decision. As automakers come closer to deploying their technologies for use by humans, they too will face the decision of how and when to deploy. These are more pressing ethical questions than the trolley problem.