Although a Tesla with Autopilot is not a true self-driving car, the company's technology has become a bellwether for Silicon Valley's ambition to replace human drivers with software.
By mid-July, when a second Tesla Model S crashed with Autopilot engaged on an undivided highway in Montana, Tesla had become the subject of three federal investigations.
Even Consumer Reports, which has championed the Tesla Model S as one of the greatest cars ever made, called for Tesla to disable Autopilot until the technology became more reliable.
Then it introduced Autopilot inertly, via software update, into the vehicles of existing Tesla drivers for a testing phase it called "silent external validation." In this mode, the Autopilot software logged and analyzed every move it would have made if active, but could not actually control the vehicle.
Short of taking Tesla's word for it, it is difficult to empirically validate Musk's contention that Autopilot is already saving a significant number of lives.
The math required to demonstrate conclusively that Autopilot is safer than human drivers is more nuanced than a simple fatality comparison: it would have to examine injury accidents as well as fatalities, and control for biases such as Autopilot's recommended use predominantly on highways in favorable driving conditions.
What we know at this point is that Autopilot can hurt or kill people if used improperly, and that it also has the potential to save lives.