Since Tesla rolled out Smart Summon, a feature that enables Model 3 owners to “summon” their car remotely, the technology has been put through much testing. The limited self-driving feature has seen hundreds of thousands of tests, and many users have published their experiences. The results have been mixed, with the mistakes receiving more attention than the successful performances.
Smart Summon is one of several technologies that aim to eventually transform Tesla cars into full self-driving vehicles. Tesla activated it at the end of September for Model 3 owners who had paid a $6,000 upfront fee. The feature uses Tesla’s artificial intelligence technology to enable its vehicles to navigate parking lots on their own. Owners can use a mobile app to summon their car or send it to a target location.
The company also has a driver-assistance feature, Autopilot, which requires drivers to keep their hands on the steering wheel while it works. Autopilot has been subject to several mishaps and much criticism.
The performance of Smart Summon and the general reaction to its rollout tell us much about the current state of autonomous driving and the challenges of creating self-driving cars.
Driverless cars are a work in progress
Consumer Reports tested Smart Summon and confirmed that it performs basic driverless functions, such as exiting a parking space, navigating around obstacles, avoiding pedestrians, and slowing down when it detects traffic.
But Consumer Reports also found that Smart Summon only works well in ideal situations and makes mistakes in crowded spaces. It also shows confusing behavior, such as working in one part of a parking lot and shutting itself down in another.
The driverless summoning feature also drove cautiously, which meant it could be slow and frustrating. In other cases, the Tesla drove like a distracted or drunken driver.
“This is a work in progress,” Jake Fisher, Consumer Reports’ senior director of auto testing, said about Tesla’s Smart Summon technology. Fisher pointed out that Tesla is, in fact, using its customers to train and fine-tune the artificial intelligence algorithms that power the driverless features of its vehicles.
Ars Technica’s Timothy Lee summarized more than 100 YouTube videos as well as discussions on social media documenting Tesla’s Smart Summon feature. In most cases, Smart Summon works well, but it also makes confusing mistakes at times, Lee said.
“Using Smart Summon will rarely get you to your car much faster than an old-fashioned walk across the parking lot, and walking is much less likely to irritate or inconvenience other drivers,” Lee observed while adding that Tesla will gradually increase the capabilities of Smart Summon as the technology gets better.
Lee’s account also portrays the limits of Tesla’s technology, such as cars getting too close to pedestrians and failing to detect objects at their sides.
The problem with current self-driving technology
Smart Summon pretty much sums up the current state of self-driving technology: It’s good enough to make you want to trust it, but also makes stupid mistakes that make you doubt it.
Tesla is one of a handful of self-driving car companies exclusively focused on computer vision technology, the science that allows computers to make sense of the content of images. Current computer vision technologies use deep learning algorithms to interpret images and video.
Thanks to advances in artificial neural networks, deep learning has become very good at detecting objects in real time. But neural networks also have some distinct limits that manifest themselves in the failures we see in self-driving cars.
Human drivers don’t need explicit instructions for every single object or every possible lighting, weather, and road condition. We know that ice makes roads slippery and that we need to drive cautiously. We know that heavy snow and mist limit our vision, and that we must drive slower to be able to react if something unexpected suddenly pops into our field of view. And we can adjust to new roads with little experience.
That’s not how artificial intelligence algorithms work. AI models based on neural networks and deep learning don’t have the vast background knowledge and common sense that human drivers have. They know nothing about the rules of gravity, the relations between different objects, the common behaviors of people, and other things that directly affect the decisions humans make.
The AI algorithms that power self-driving cars need to be explicitly instructed on every single scenario they will face on the road. Even slight changes to the setting, such as lighting or weather conditions, can cause deep learning algorithms to break. A neural network might be able to detect an object in the morning light but miss it in the afternoon shade. Deep learning models are also prone to adversarial examples: modified objects that look normal to humans but cause AI algorithms to break and make bizarre decisions.
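To see how fragile this can be, here is a deliberately tiny, self-contained sketch in pure Python (an illustration of the general mechanism, not anything resembling Tesla’s actual models): an adversarial perturbation against a simple linear classifier. Each input feature is nudged by a small amount against the sign of its weight, and the model’s decision flips even though no single feature changed much. The weights, inputs, and class labels are all made up for the example.

```python
# Toy adversarial example against a linear classifier (illustrative only;
# real attacks target deep networks, but the mechanism is the same).

def score(weights, x, bias):
    """Positive score -> "obstacle", negative score -> "clear road"."""
    return sum(w * xi for w, xi in zip(weights, x)) + bias

weights = [0.9, -0.4, 0.7, -0.2]   # a hypothetical trained model
bias = 0.1
x = [0.5, 0.3, 0.2, 0.6]           # a clean input classified correctly

clean = score(weights, x, bias)    # 0.45 -> "obstacle"

# FGSM-style perturbation: nudge every feature slightly *against* the
# sign of its weight. No single feature changes by more than 0.25.
eps = 0.25
x_adv = [xi - eps * (1 if w > 0 else -1) for w, xi in zip(weights, x)]

adv = score(weights, x_adv, bias)  # -0.10 -> "clear road": decision flips

print(f"clean: {clean:+.2f}, adversarial: {adv:+.2f}")
```

The same logic, applied to the millions of weights in a deep network, is what lets a few stickers on a stop sign confuse a vision model that works fine on unmodified signs.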
That’s why, for the most part, companies throw more data at the problem and train AI models on more and more data, hoping that larger datasets will cover all possible scenarios and corner cases. Tesla clearly has an advantage in this regard, because it has already collected hundreds of millions of hours’ worth of road video and images from its human-driven cars.
But roads are open environments, which means you can’t always predict everything that will happen on them. There are limitless scenarios that can happen (such as someone jumping in the middle of the road with a pogo stick), and there’s never enough data to cover every possible scenario. Every parking lot can be a new challenge for the self-driving car. Or better put, every parking lot at any given time of any given day is a new challenge.
We’re seeing the same kind of problems in all other areas where AI algorithms are struggling, such as natural language understanding. Deep learning can solve simple problems such as finding direct answers to questions or autocompleting words and sentences, but struggles to understand the meaning and nuances of written and spoken language.
So you can’t blame Tesla for releasing a premature technology. The problem lies in the missing components of its underlying technology. Without integrating reasoning and common sense into AI algorithms, self-driving cars will always be a “work in progress,” as CR’s Jake Fisher said.
Deploying AI in limited environments
There’s something “smart” about Tesla’s Smart Summon feature, namely its deployment in a limited environment. Tesla’s manual explicitly warns about the limits of Smart Summon. It can only be used within 200 feet (60 meters) of the owner, and it moves at a slow speed of 6 mph. Users must also keep their finger on a button in the app while the car is driving on its own; the self-driving feature shuts down as soon as the user releases the button. Smart Summon is also exclusively meant for parking spaces and will not work on public roads.
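Those constraints amount to a thin safety envelope wrapped around the driving logic. The sketch below is purely hypothetical (the function name, loop structure, and planner interface are my assumptions, not Tesla’s software), but it shows how a dead-man switch, a range limit, and a speed cap can combine so that the car only moves while every condition holds.

```python
# Hypothetical safety envelope for a summon-style feature.
# Only the two numeric limits come from Tesla's documentation;
# everything else is an assumed, illustrative design.

MAX_RANGE_FT = 200   # documented operating radius around the owner
MAX_SPEED_MPH = 6    # documented speed cap

def summon_step(distance_to_owner_ft, requested_speed_mph, button_held):
    """Return the speed command for one control tick (0.0 means stop)."""
    if not button_held:                      # dead-man switch released
        return 0.0
    if distance_to_owner_ft > MAX_RANGE_FT:  # owner out of range
        return 0.0
    # clamp whatever speed the planner requests to the hard cap
    return min(requested_speed_mph, MAX_SPEED_MPH)

print(summon_step(150, 9, True))    # capped at 6 mph
print(summon_step(150, 4, False))   # button released -> 0.0
```

Because the check runs on every tick, releasing the button or drifting out of range immediately forces the commanded speed to zero, no matter what the planner wants, which is what makes the operating environment semi-closed rather than open.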
By integrating these requirements, Tesla has effectively created a semi-closed environment for the self-driving car, which makes that environment much more predictable for the AI algorithms that navigate the Tesla. Some might call this moving the goalposts. But parking spaces might prove to be a challenge that Tesla’s self-driving cars can solve with current AI technologies. Time will tell.
I don’t think Smart Summon will be an important step toward fully autonomous Teslas; that will require fundamental fixes to the underlying AI technology. But it is nonetheless a practical use of what we have already achieved in AI and self-driving car research.
Current AI technologies can’t solve the open-environment problem of self-driving cars, but they can solve smaller problems, such as navigating forklifts in warehouses, summoning cars from parking lots, lane keeping, cruise control, and other tasks that can prevent accidents and make lives easier.
As I’ve argued before, we might never achieve the ambitious goal of creating self-driving cars that can drive on any road. But the effort will be rewarding nonetheless.