Very good article. You touched on something I’ve been curious about for a long time: “Autopilot-enabled vehicle turned over operation to the driver, because road conditions were worse. If that hypothesis were true, these types of crashes should be included as an Autopilot crash, as it pertains to the road coverage of Autopilot and the hand-off between autonomous and manual control, which is related to Tesla’s design choices.”

This handoff between manual and Autopilot is arguably the most important part of the entire equation. There are at least three important cases here:

(1) Cases where the car intervenes to save a careless driver’s butt.

(2) Cases where the car appears to be doing something unsafe and the driver intervenes (which is part of Tesla’s AI training regime).

(3) Cases where the car finds itself in a dangerous (or even untenable) circumstance and hands control back to the driver, in some cases perhaps just before a crash.

Understanding these numbers is probably the most important way to get a true picture of the AI’s improving performance. Tesla will certainly have them internally, as they’re critical to improving AI performance over time (they even run A/B testing where they evaluate the AI’s decisions live using passive monitoring before rolling a change out). But these numbers are strategic/competitive too, and in some cases may tell an unkind story (especially if we obtained older stats from when the AI wasn’t quite as smart). Given those two factors, I seriously doubt these handoff numbers will ever see the light of day.

Not only that, coming to an objective definition of an “unsafe situation” or a “near miss” is pretty dang hard (if we knew a good way to define that objectively, we’d already have a pretty darn good algorithm for operating an autonomous vehicle).
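To make the passive-monitoring idea concrete, here’s a rough sketch of what “shadow mode” could look like: the AI’s proposed controls are logged but never executed, and frames where it diverges from the human driver get flagged for review. Everything here (the field names, the tolerance thresholds) is invented for illustration and is not Tesla’s actual pipeline.

```python
from dataclasses import dataclass

@dataclass
class Frame:
    # One moment of driving: what the human actually did vs. what the
    # shadow-mode AI *would* have done (its output is logged, not executed).
    human_steering: float   # hypothetical units: degrees of wheel angle
    ai_steering: float
    human_braking: float    # hypothetical units: 0.0-1.0 pedal fraction
    ai_braking: float

def disagreements(frames, steer_tol=5.0, brake_tol=0.2):
    """Flag frames where the shadow AI diverged from the human driver
    beyond some tolerance -- the interesting events to pull back for
    training data or review. Thresholds are made up for this example."""
    return [
        f for f in frames
        if abs(f.human_steering - f.ai_steering) > steer_tol
        or abs(f.human_braking - f.ai_braking) > brake_tol
    ]

# Toy log: the AI roughly agrees on the first frame, diverges on the second.
log = [
    Frame(human_steering=1.0, ai_steering=2.0,
          human_braking=0.0, ai_braking=0.0),
    Frame(human_steering=0.0, ai_steering=20.0,
          human_braking=0.0, ai_braking=0.9),
]
flagged = disagreements(log)
print(len(flagged))  # 1 -- only the second frame is a disagreement
```

The point of the sketch is just that the car can compare decisions without ever acting on them, which is exactly why these disagreement counts would exist internally even if they never get published.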