Tesla launches Full Self-Driving on US urban roads


Tesla – and in particular its CEO Elon Musk – has never had a problem overstating the self-driving capabilities of its cars. Case in point: the Palo Alto-based company recently released the beta of its ‘Full Self-Driving’ software update, which, despite the name, is not full self-driving.

In 2016, the SAE (Society of Automotive Engineers) defined five levels of autonomous driving in its J3016 standard (six, if we count ‘Level 0’ for completely non-autonomous driving). Briefly: Level 1 is a simple driving assistant, such as cruise control or automatically maintaining a fixed distance from the car in front. Level 2 is an advanced assistant that can control the steering wheel, accelerator and brakes, but the driver must monitor everything and be able to take over at any time.

Level 3 is a watershed, because from this point on the human driver no longer needs to monitor every aspect of driving constantly. Cars at this level can monitor their environment and make autonomous decisions, such as overtaking a slower car on their own initiative. The human driver must still be ready to take control if necessary. From Level 4 we can start talking about self-driving cars like the ones in science fiction films: they do everything themselves, even handling faults, and the driver never has to take control (though he or she may choose to do so). From this level onward, in other words, you can fall asleep in the car and be driven home safely. The only drawback is that cars at this level cannot yet drive in all conditions: particularly complex urban areas, for example, or regions with frequently adverse weather are still off-limits.

Level 5 is the car without a steering wheel or pedals that can drive anywhere – in rain, snow and wind – and deal with any problem that may occur on the road. As you can imagine, such cars are still a long way from our streets.

But back to Tesla and its Full Self-Driving (FSD). Despite its name, it is still Level 2: it requires a human driver who is active and attentive at all times, with hands on the wheel. The company’s decision to release the FSD beta to its customers, albeit in limited numbers, has therefore alarmed more than one observer. Where previously Teslas could only be seen driving autonomously on motorways or expressways, the FSD update now lets a number of them drive autonomously in urban areas – see this video, in which a vlogger tries out the new feature in his own neighbourhood.

The company has warned users that the software is still in beta and – in its own words – ‘could do the wrong thing at the worst time’. It does not take much imagination to picture a self-driving car doing ‘the wrong thing at the worst time’: accelerating into pedestrians, driving the wrong way, or mounting a pavement. Extremely unlikely events, of course, but with the software update due to be rolled out to over 1 million vehicles by the end of the year, such an occurrence may become anything but remote. Or to put it another way: if a very rare failure occurs with a probability of one in a million per car, then activating FSD in a million cars makes it more likely than not to happen at least once. For the record, Tesla’s figures for Q3 2020 report one accident every 4.59 million miles with Autopilot (but not FSD) engaged.
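To make that arithmetic explicit, here is a minimal sketch – assuming, purely for illustration, a one-in-a-million failure probability per car and independence between cars; neither number is a Tesla figure:

```python
# Back-of-the-envelope check of the "once in a million" argument.
# p and n are illustrative assumptions, not Tesla data.

p = 1e-6        # assumed probability of the rare failure, per car
n = 1_000_000   # number of cars running the FSD beta

# Probability that the failure happens at least once across the fleet:
# 1 - (1 - p)^n, which tends towards 1 - e^(-n*p) for small p.
at_least_once = 1 - (1 - p) ** n
print(f"P(at least one occurrence) = {at_least_once:.3f}")  # ~0.632
```

With these assumed numbers, the chance of at least one occurrence is about 63%: an event that is vanishingly rare for a single car becomes more likely than not at fleet scale.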

But if self-driving cars are already safer than human-driven ones, why so much caution?

The treatment of facial recognition in recent years teaches us that too much arrogance can lead to the social rejection of an artificial intelligence technology. The average person, besides having an innate distrust of change, tends to judge a technological innovation by how much benefit or harm it will bring to his or her life. Facial recognition software could be very useful in the fight against crime, but deploying it everywhere and badly, handing it to organisations that did not know how to manage it, was enough to provoke outright rejection by citizens and, later, by many politicians.

The same thing could happen to autonomous driving if it is imposed with the same presumption and hubris. As the New York Times pointed out in a recent article, trials of autonomous driving take place in clearly defined geographical areas, and the companies working on them proceed in very small steps, precisely because when human lives are at stake there is little room for carelessness. It would take very little – a couple of serious and ‘journalistically interesting’ accidents, say – to trigger neo-Luddite reactions that could slow the rise of autonomous driving, a technology that is clearly beneficial to us humans, who suffer 1.35 million deaths on the world’s roads every year.

Too many errors in the release of these technologies would be detrimental to the entire industry. From Tesla, which is no stranger to fatal accidents, I would expect fewer ‘beta tests’ on the lives of citizens.

I hold an Artificial Intelligence Professional certificate from IBM and one in machine learning from Google Cloud. I am a member of several AI industry associations: AAAI, ACM (SIGAI), AIxIA. I participate in the European AI Alliance of the European Commission and I work with the European Defence Agency and the Joint Research Centre.