Disengagement Reporting in Autonomous Vehicle Testing: Implications for Innovation and AI Regulatory Oversight
Dr. Wenjie Xue
Associate (Research Economist)
Cornerstone Research
The commercialization of autonomous vehicles hinges on their ability to operate safely in diverse and unpredictable environments. Public road testing plays a crucial role in improving future safety by exposing vehicles to rare and unforeseen driving scenarios. Yet these “edge cases” often cause failures that impose significant negative consequences on other road users and on the industry, underscoring the importance of balancing innovation against its potential harm to third parties. Motivated by California’s current regulation, which requires developers to publicly disclose instances in which the autonomous mode is disengaged during testing, we examine how this transparency-based regulatory oversight affects developers’ testing strategies and social welfare. Using a game-theoretic model, we show that disengagement reporting can distort developers’ testing strategies, inducing some to prioritize testing in familiar areas and thereby impeding improvement. Under some circumstances, disengagement reporting, which is intended to expose laggards, unexpectedly ends up uplifting them by distorting the leading developers’ testing strategies. Contrary to common intuition, increasing the transparency of testing strategies within reports does not necessarily reduce these distortive effects; in fact, it can sometimes exacerbate them. Nevertheless, despite wide criticism of disengagement reporting among developers, our model helps rationalize this regulatory oversight when externality costs are high and transparency in testing outcomes can indeed better balance innovation against its negative externalities. Our results help explain differences in regulatory practices across AI applications.