Why deep learning won’t give us level 5 self-driving cars

Tesla CEO Elon Musk believes the basic functionality of level 5 self-driving cars will be completed by the end of 2020. (Image credit: Depositphotos)

This article is part of Demystifying AI, a series of posts that (try to) disambiguate the jargon and myths surrounding AI.

“I’m extremely confident that level 5 [self-driving cars] or essentially complete autonomy will happen, and I think it will happen very quickly,” Tesla CEO Elon Musk said in a video message to the World Artificial Intelligence Conference in Shanghai earlier this month. “I remain confident that we will have the basic functionality for level 5 autonomy complete this year.”

Musk’s remarks triggered much discussion in the media about whether we are close to having full self-driving cars on our roads. Like many other software engineers, I don’t think we’ll be seeing driverless cars (I mean cars that don’t have human drivers) any time soon, let alone by the end of this year.

I wrote a column about this on PCMag, and received a lot of feedback (both positive and negative). So I decided to write a more technical and detailed version of my views about the state of self-driving cars. I will explain why, in its current state, deep learning, the technology used in Tesla’s Autopilot, won’t be able to solve the challenges of level 5 autonomous driving. I will also discuss the pathways that I think will lead to the deployment of driverless cars on roads.

Level 5 self-driving cars

This is how the U.S. National Highway Traffic Safety Administration defines level 5 self-driving cars: “The vehicle can do all the driving in all circumstances, [and] the human occupants are just passengers and need never be involved in driving.”

Basically, a fully autonomous car doesn’t even need a steering wheel and a driver’s seat. The passengers should be able to spend their time in the car doing more productive work.

Level 5 autonomy: Full self-driving cars don’t need a driver’s seat. Everyone is a passenger. (Image credit: Depositphotos)

Current self-driving technology stands at level 2, or partial automation. Tesla’s Autopilot can perform some functions such as acceleration, steering, and braking under specific conditions. And drivers must always maintain control of the car and keep their hands on the steering wheel when Autopilot is on.

Other companies that are testing self-driving technology still have drivers behind the wheel to jump in when the AI makes mistakes (as well as for legal reasons).

The hardware and software of self-driving cars

Another important point Musk raised in his remarks is that he believes Tesla cars will achieve level 5 autonomy “simply by making software improvements.”

Other self-driving car companies, including Waymo and Uber, use lidar, hardware that projects laser beams to create three-dimensional maps of the car’s surroundings. Tesla, on the other hand, relies mainly on cameras powered by computer vision software to navigate roads and streets. Tesla uses deep neural networks to detect roads, cars, objects, and people in video feeds from eight cameras installed around the vehicle. (Tesla also has a front-facing radar and ultrasonic object detectors, but those play mostly minor roles.)
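
To make the camera-based approach concrete, here is a minimal sketch of what one perception step might look like, using a generic pretrained detector from torchvision rather than Tesla’s proprietary networks; the frame format and the confidence threshold are illustrative assumptions, not details of Autopilot.

```python
# A minimal sketch (not Tesla's actual stack): run a generic pretrained object
# detector over one frame from each camera and keep confident detections.
# The frame format and the 0.5 score threshold are illustrative assumptions.
import torch
import torchvision

detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()

def detect_objects(frames, score_threshold=0.5):
    """frames: list of HxWx3 uint8 numpy arrays, one per camera."""
    tensors = [torch.from_numpy(f).permute(2, 0, 1).float() / 255.0 for f in frames]
    with torch.no_grad():
        outputs = detector(tensors)
    detections = []
    for camera_id, out in enumerate(outputs):
        keep = out["scores"] > score_threshold
        detections.append({
            "camera": camera_id,
            "boxes": out["boxes"][keep],    # bounding boxes in pixel coordinates
            "labels": out["labels"][keep],  # COCO class indices (car, person, etc.)
        })
    return detections
```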

There’s a logic to Tesla’s computer vision–only approach: We humans, too, mostly rely on our vision system to drive. We don’t have 3D mapping hardware wired to our brains to detect objects and avoid collisions.

But here’s where things fall apart. Current neural networks can at best produce a rough imitation of the human vision system. Deep learning has distinct limits that prevent it from making sense of the world in the way humans do. Neural networks require huge amounts of training data to work reliably, and they don’t have the flexibility of humans when facing a novel situation not included in their training data.

This is something Musk tacitly acknowledged in his remarks: “[Tesla Autopilot] does not work quite as well in China as it does in the U.S. because most of our engineering is in the U.S.” That is where most of the training data for Tesla’s computer vision algorithms comes from.

Deep learning’s long-tail problem


Human drivers also need to adapt to new settings and environments, such as a new city or town, or weather conditions they haven’t experienced before (snow- or ice-covered roads, dirt tracks, heavy mist). However, we use intuitive physics, common sense, and our knowledge of how the world works to make rational decisions when we deal with new situations.

We understand causality and can determine which events cause others. We also understand the goals and intents of other rational actors in our environment and can reliably predict what their next move might be. For instance, the first time you see an unattended toddler on the sidewalk, you automatically know that you have to pay extra attention and be careful. And what if you meet a stray elephant in the street for the first time? Do you need previous training examples to know that you should probably make a detour?

But for the time being, deep learning algorithms don’t have such capabilities; therefore, they need to be trained in advance for every possible situation they might encounter.

There’s already a body of evidence showing that Tesla’s deep learning algorithms are not very good at dealing with unexpected scenery, even in the environments they are adapted to. In 2016, a Tesla crashed into a tractor-trailer truck because its AI algorithm failed to detect the vehicle against the brightly lit sky. In another incident, a Tesla on Autopilot drove into a concrete barrier, killing the driver. And there have been several incidents of Tesla vehicles on Autopilot crashing into parked fire trucks and overturned vehicles. In all cases, the neural network was seeing a scene that was not included in its training data or was too different from what it had been trained on.

Tesla is constantly updating its deep learning models to deal with “edge cases,” as these new situations are called. But the problem is, we don’t know how many of these edge cases exist. They’re virtually limitless, which is why this is often referred to as the “long tail” of problems deep learning must solve.

Musk also pointed this out in his remarks to the Shanghai AI conference: “I think there are no fundamental challenges remaining for level 5 autonomy. There are many small problems, and then there’s the challenge of solving all those small problems and then putting the whole system together, and just keep addressing the long tail of problems.”

I think the key here is Musk’s belief that “there are no fundamental challenges.” This implies that the current AI technology just needs to be trained on more and more examples and perhaps receive minor architectural updates. He also said that it’s not a problem that can be simulated in virtual environments.

“You need a kind of real world situation. Nothing is more complex and weird than the real world,” Musk said. “Any simulation we create is necessarily a subset of the complexity of the real world.”

If there’s one company that can solve the self-driving problem through data from the real world, it’s probably Tesla. The company has a very comprehensive data collection program—better than any other car manufacturer building self-driving software or any software company working on self-driving cars. It is constantly gathering fresh data from the hundreds of thousands of cars it has sold across the world and using it to fine-tune its algorithms.
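
As a rough illustration of what such a fleet “data engine” could look like in principle, the sketch below mines frames on which a deployed model is least confident so they can be labeled and used for retraining. Every function and field name here is a hypothetical stand-in; nothing about Tesla’s actual pipeline is implied.

```python
# A hypothetical "data engine" step: keep the frames the deployed model is least
# confident about, since those are the candidates for labeling and retraining.
# The frame dictionaries and the 0.6 threshold are made up for illustration.
from typing import Dict, List

def select_hard_frames(frames: List[Dict], threshold: float = 0.6) -> List[Dict]:
    """Return frames where the deployed model's confidence falls below threshold."""
    return [f for f in frames if f.get("confidence", 1.0) < threshold]

def data_engine_step(fleet_frames: List[Dict], training_set: List[Dict]) -> List[Dict]:
    """One iteration: mine hard examples from the fleet and grow the training set.
    In a real pipeline these frames would be labeled and the model retrained."""
    return training_set + select_hard_frames(fleet_frames)

fleet_frames = [{"id": 1, "confidence": 0.95}, {"id": 2, "confidence": 0.40}]
training_set = data_engine_step(fleet_frames, training_set=[])
print([f["id"] for f in training_set])  # [2] -- only the uncertain frame is kept
```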

But will more data solve the problem?

Interpolation vs extrapolation

The AI community is divided on how to solve the “long tail” problem. One view, mostly endorsed by deep learning researchers, is that bigger and more complex neural networks trained on larger data sets will eventually achieve human-level performance on cognitive tasks. The main argument here is that the history of artificial intelligence has shown that solutions that can scale with advances in computing hardware and availability of more data are better positioned to solve the problems of the future.

This is a view that supports Musk’s approach to solving self-driving cars through incremental improvements to Tesla’s deep learning algorithms. Another argument that supports the big data approach is the “direct-fit” perspective. Some neuroscientists believe that the human brain is a direct-fit machine, which means it fills the space between the data points it has previously seen. The key here is to find the right distribution of data that can cover a vast area of the problem space.

If these premises are correct, Tesla will eventually achieve full autonomy simply by collecting more and more data from its cars. But it must still figure out how to use its vast store of data efficiently.

Extrapolation (left) tries to extract rules from big data and apply them to the entire problem space. Interpolation (right) relies on rich sampling of the problem space to calculate the spaces between samples.

On the opposite side are those who believe that deep learning is fundamentally flawed because it can only interpolate. Deep neural networks extract patterns from data, but they don’t develop causal models of their environment. This is why they need to be precisely trained on the different nuances of the problem they want to solve. No matter how much data you train a deep learning algorithm on, you won’t be able to trust it, because there will always be many novel situations where it will fail dangerously.
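
A toy experiment makes the distinction concrete. In the sketch below (a minimal illustration assuming scikit-learn is available, not a claim about any driving model), a small neural network is fit on a limited range of a known function; it tracks the function well inside that range (interpolation) but fails badly outside it (extrapolation).

```python
# Toy illustration of interpolation vs. extrapolation: fit a small neural network
# on x in [-2, 2] and compare its error inside and outside the training range.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
x_train = rng.uniform(-2.0, 2.0, size=(2000, 1))
y_train = np.sin(x_train).ravel()  # the "true" function the network must learn

model = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
model.fit(x_train, y_train)

x_inside = np.linspace(-2.0, 2.0, 200).reshape(-1, 1)   # interpolation regime
x_outside = np.linspace(3.0, 5.0, 200).reshape(-1, 1)   # extrapolation regime

mse_inside = np.mean((model.predict(x_inside) - np.sin(x_inside).ravel()) ** 2)
mse_outside = np.mean((model.predict(x_outside) - np.sin(x_outside).ravel()) ** 2)
print(f"MSE inside training range:  {mse_inside:.4f}")   # typically small
print(f"MSE outside training range: {mse_outside:.4f}")  # typically much larger
```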

The human mind, on the other hand, extracts high-level rules, symbols, and abstractions from each environment, and uses them to extrapolate to new settings and scenarios without the need for explicit training.

I personally stand with the latter view. I think without some sort of abstraction and symbol manipulation, deep learning algorithms won’t be able to reach human-level driving capabilities.

There are many efforts to improve deep learning systems. One example is hybrid artificial intelligence, which combines neural networks and symbolic AI to give deep learning the capability to deal with abstractions.

Another notable area of research is “system 2 deep learning.” This approach, endorsed by deep learning pioneer Yoshua Bengio, uses a pure neural network–based approach to give symbol-manipulation capabilities to deep learning. Yann LeCun, a longtime colleague of Bengio, is working on “self-supervised learning,” deep learning systems that, like children, can learn by exploring the world by themselves and without requiring a lot of help and instructions from humans. And Geoffrey Hinton, a mentor to both Bengio and LeCun, is working on “capsule networks,” another neural network architecture that can create a quasi-three-dimensional representation of the world by observing pixels.

These are all promising directions that will hopefully integrate much-needed commonsense, causality, and intuitive physics into deep learning algorithms. But they are still in the early research phase and are not nearly ready to be deployed in self-driving cars and other AI applications. So I suppose they will be ruled out for Musk’s “end of 2020” timeframe.

Comparing human and AI drivers


One of the arguments I hear a lot is that human drivers make a lot of mistakes too. Humans get tired, distracted, reckless, and drunk, and they cause more accidents than self-driving cars. The first part, about human error, is true. But I’m not so sure that comparing accident frequency between human drivers and AI is valid yet. I believe the sample size and data distribution do not paint an accurate picture.

But more importantly, I think comparing numbers is misleading at this point. What is more important is the fundamental difference between how humans and AI perceive the world.

Our eyes receive a lot of information, but our visual cortex is sensitive to specific things, such as movement, shapes, and specific colors and textures. Through billions of years of evolution, our vision has been honed to fulfill different goals that are crucial to our survival, such as spotting food and avoiding danger.

But perhaps more importantly, our cars, roads, sidewalks, road signs, and buildings have evolved to accommodate our own visual preferences. Think about the color and shape of stop signs, lane dividers, flashers, etc. We have made all these choices—consciously or not—based on the general preferences and sensibilities of the human vision system.

Therefore, while we make a lot of mistakes, our mistakes are less weird and more predictable than those of the AI algorithms that power self-driving cars. Case in point: no human driver in their right mind would drive straight into an overturned car or a parked firetruck.

In his remarks, Musk said, “The thing to appreciate about level five autonomy is what level of safety is acceptable for public streets relative to human safety? So is it enough to be twice as safe as humans. I do not think regulators will accept equivalent safety to humans. So the question is will it be twice as safe, five times as safe, 10 times as safe?”

But I think it’s not enough for a deep learning algorithm to produce results that are on par with or even better than the average human. It is also important that the process it goes through to reach those results reflects that of the human mind, especially if it is being used on roads that have been made for human drivers.

Other problems that need to be solved

Given the differences between human vision and computer vision, we either have to wait for AI algorithms that exactly replicate the human vision system (which I think is unlikely any time soon), or we can take other pathways to make sure current AI algorithms and hardware can work reliably.

One such pathway is to change roads and infrastructure to accommodate the hardware and software present in cars. For instance, we can embed smart sensors in roads, lane dividers, cars, road signs, bridges, buildings, and objects. This will allow all these objects to identify each other and communicate through radio signals. Computer vision will still play an important role in autonomous driving, but it will be complementary to all the other smart technology that is present in the car and its environment. This is a scenario that is becoming increasingly possible as 5G networks are slowly becoming a reality and the price of smart sensors and internet connectivity decreases.
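
As a rough illustration of the idea, a connected road sign could periodically broadcast a small structured message that nearby vehicles fuse with their camera-based perception. The field names and JSON serialization below are made up for the sketch; real deployments would rely on standardized V2X message formats rather than anything shown here.

```python
# A hypothetical beacon a "smart" road sign might broadcast to nearby vehicles.
# Field names and the JSON encoding are illustrative, not a real V2X standard.
import json
import time
from dataclasses import asdict, dataclass
from typing import Optional

@dataclass
class RoadSignBeacon:
    sign_id: str
    sign_type: str            # e.g. "stop" or "speed_limit"
    value: Optional[int]      # speed limit in km/h, or None for a stop sign
    latitude: float
    longitude: float
    timestamp: float

def encode_beacon(beacon: RoadSignBeacon) -> bytes:
    """Serialize the beacon for broadcast over a radio link (e.g. cellular/5G)."""
    return json.dumps(asdict(beacon)).encode("utf-8")

beacon = RoadSignBeacon("sign-0042", "speed_limit", 50, 52.5200, 13.4050, time.time())
print(encode_beacon(beacon))
```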

Just as our roads evolved with the transition from horses and carts to automobiles, they will probably go through more technological changes with the coming of software-powered and self-driving cars. But such changes require time and huge investments from governments and vehicle manufacturers, as well as the manufacturers of all those other objects that will be sharing roads with self-driving cars. And we’re still exploring the privacy and security threats of putting an internet-connected chip in everything.


An intermediate scenario is the “geofenced” approach. Self-driving technology will only be allowed to operate in areas where its functionality has been fully tested and approved, where there’s smart infrastructure, and where regulations have been tailored for autonomous vehicles (e.g., pedestrians are not allowed on roads, human drivers are limited, etc.). Some experts describe these approaches as “moving the goalposts” or redefining the problem, which is partly correct. But given the current state of deep learning, the prospect of an overnight rollout of self-driving technology is not very promising. Such measures could help ensure a smooth and gradual transition to autonomous vehicles as the technology improves, the infrastructure evolves, and regulations adapt.
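
At its simplest, the geofenced approach boils down to checking the vehicle’s position against an approved operating area before autonomy is allowed to engage. The sketch below is a basic ray-casting point-in-polygon test with a made-up zone; a production operational design domain would be far richer, but the idea is the same.

```python
# A minimal geofence check: ray-casting point-in-polygon test deciding whether
# the self-driving mode may stay engaged. The polygon is a made-up example zone.
from typing import List, Tuple

def inside_geofence(lat: float, lon: float,
                    polygon: List[Tuple[float, float]]) -> bool:
    """Return True if (lat, lon) falls inside the polygon of (lat, lon) corners."""
    inside = False
    n = len(polygon)
    for i in range(n):
        (y1, x1), (y2, x2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > lat) != (y2 > lat):  # edge crosses the point's latitude
            x_cross = x1 + (lat - y1) * (x2 - x1) / (y2 - y1)
            if lon < x_cross:
                inside = not inside
    return inside

# Hypothetical approved operating area (latitude, longitude corners).
approved_zone = [(37.78, -122.42), (37.78, -122.39), (37.75, -122.39), (37.75, -122.42)]
print(inside_geofence(37.77, -122.41, approved_zone))  # True: autonomy may engage
print(inside_geofence(37.70, -122.41, approved_zone))  # False: hand control back
```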

There are also legal hurdles. We have clear rules and regulations that determine who is responsible when human-driven cars cause accidents. But self-driving cars are still in a gray area. For now, drivers are responsible for their Tesla’s actions, even when it is in Autopilot mode. But in a level 5 autonomous vehicle, there’s no driver to blame for accidents. And I don’t think any car manufacturer would be willing to roll out fully autonomous vehicles if they were to be held accountable for every accident caused by their cars.

Many loopholes for the 2020 deadline

All this said, I believe Musk’s comments contain many loopholes in case Teslas don’t become fully autonomous by the end of 2020.

First, he said, “We’re very close to level five autonomy,” which is true. In many engineering problems, especially in the field of artificial intelligence, it’s the last mile that takes a long time to solve. So we are very close to reaching full self-driving cars, but it’s not clear when we’ll finally close the gap.

Musk also said Tesla will have the basic functionality for level 5 autonomy completed this year. It’s not clear whether “basic” means “complete and ready to deploy.”

And he didn’t promise that if Teslas become fully autonomous by the end of the year, governments and regulators will allow them on their roads.

Musk is a genius and an accomplished entrepreneur. But the self-driving car problem is much bigger than one person or one company. It stands at the intersection of many scientific, regulatory, social, and philosophical domains.

For my part, I don’t think we’ll see driverless Teslas on our roads at the end of the year, or anytime soon.

46 COMMENTS

  1. You make some fair, supported points.
    But where you lose me is your claim that it’s irrelevant how much safer autonomous cars are compared to human-driven cars. To me, that is THE metric. I don’t follow your argument why we should ignore this metric.

    • I also wouldn’t ignore it; even more, I think a closer look gets us to the key point of differentiation between level 4 and level 5 autonomy, as the metric is the average human driver. This includes less mindful people who drive drunk or under the influence of drugs. Hell yeah, autonomous vehicles will soon be better than them. The next step is less-trained drivers, like in the US, where you can get behind the steering wheel starting somewhere between 14 and 16 years old. Further still, you have to compare autonomous vehicles to driver training standards in Austria and Germany, then to more experienced drivers, and I think we should absolutely not avoid thinking about racing drivers like Sebastien Loeb or Sebastien Ogier.

      So, let me derive a key argument from that: my understanding of automotive safety is to have systems for the worst drivers that are as good as, and preferably even better than, the best drivers. Think of stability control, emergency brake assist, etc.

      To get back to your comment, I absolutely agree with you that we have to use such a metric; however, in defense of Ben Dickson, I think it would be a big mistake to pin level 5 autonomy to such a poor statistic.

      The following doesn’t fit your point, but let me bring in my thoughts on the initially stated differentiation between level 4 and 5: I think that it is comparably easy to get to level 4 autonomy, meaning full autonomy (level 5) in situations such as freeways (autobahn). But the things I have seen in my short driver’s life on highways, smaller streets, country roads, and even small villages, and the stupid forms of traffic accidents produced by Tesla, light big red warning lights when speaking of level 5 autonomy. To further stress the topic, I concur with many scientists and automotive engineers when they say that level 5 autonomous cars might be a romantic dream of our generation, and depending on the focus on this topic with respect to our world economy, it might take around 50 years until we can say that vehicles are level 5 to the high standards I elaborated above.

      • The only relevant metric is not some imaginary and marketing-ish levels, but who will take the financial and criminal responsibility for accidents and deaths. Note I make a distinction between financial and criminal responsibility, because one can make a case that some deaths from autonomous driving systems will be judged as criminal negligence and at least involuntary manslaughter. In such cases somebody will have to go to prison, not only pay the big bucks.

  2. Flawed logic. As a data scientist, as you claim, you use a 2016 example of a Tesla crash. Four years ago. Software and hardware have moved on. The conclusion doesn’t fit the data. Accidents per million km are 1/5 with level 3 technology. So if Tesla drivers are typical of drivers (not Volvo drivers) and 5 times safer, the tipping point has already passed.

    • I’m a new Tesla driver using the latest software update on my Model 3. Last week, I was driving on Autopilot on a city street when an all white semi pulled out of a parking lot in front of me. My car didn’t “see” it. That is, it didn’t show up on my car’s video display, and I had to do the braking myself in order to avoid a collision. So this situation of a white truck perpendicular to the travel lane is still not in the learning curve of the Tesla AI despite previous accidents and at least one driver intervention. I suspect that I’m not the only Tesla driver who has had to brake to avoid crashing into a perpendicular white truck.

      • Same here.
        How would the system allow crossing the centre line in a British village with oncoming traffic which is part of daily life?
        I have an M3SR+ with basic Autopilot, and in the Victorian countryside false speed limits abound, causing sudden strong braking, which is worrying if someone of size is following.
        Vehicles almost 100m ahead have almost completely cleared your path, but then there is delayed strong braking, with similar concerns.
        It literally ‘shaves’ parked vehicles and even oncoming over-dimension heavy vehicles, such that I simply won’t use AP under such circumstances.
        Clumsy cornering and surging on TACC (done better in our Suzuki Vitara).
        I’m starting to wonder if the talk is more to do with harming the ‘shorts’ by talking up the share price than actual reality

  3. You are assuming/wanting a 100% complete system. Like Elon mentioned, he is going for a system that is 5x or 10x better than human drivers right now, if you look at accident rates as a metric. You can see that does not necessarily mean 100% complete. Yes, the long tail will continuously be improved over time, bringing it closer to 100% complete, but it doesn’t have to reach there for the system to be sanctioned and operational.

    • Deep learning systems may not be as safe as a fully attentive driver, but what if the combination of the probability of an accident and the probability of serious injury in case of an accident can be brought down to such a low level that it is acceptable? Gone are the days when driving was a pleasure. Most now see it as a chore that they are more than willing to give up. While there may be a few cases of good drivers getting hurt because of deep learning systems, there will be many more cases of inexperienced and intoxicated drivers being saved by them. Demand would drive this forward more than the system being as good as an attentive driver.

      • I like your idea. Deep learning autopilot systems should be able to bring down the probability of accidents and serious injury too.
        What bothers me is that non-tech people will never trust hard data, such as “autopilot reduces accident probability to x accidents per million miles”, but rather they will look at the ugly accidents caused by it, and blame it as a flawed system.
        People will not see the avoided accidents, because that will never make the news. No one can see an accident that didn’t happen.

      • It is very simple – if the AI driver producer claims that the probability for event X is Y, then they have to offer insurance of 1/Y for the event X. If the average Joe insures his car paying 1000 dollars, he has to receive 1000/Y dollars. If the calculation makes ridiculous claims for a very low Y and this is wrong, the insurer will go bankrupt very fast.

      • So basically you admit that the benchmark level has to be lowered for the AI. This by itself would be in some sense an admission of defeat.

        And you reason that maybe society will gain even from a less performant AI driver. But if we start to set such a global goal, maybe there are alternative solutions instead – for example, good public transport is nearly nonexistent in the US but abundant in many other places. I have lived in South Korea for more than 10 years and never had a driving license, so I could intoxicate myself without risking anybody. Nearly the same level of public transport is available in Europe. Why would a consumer choose to invest in a less-than-perfect AI driving car and risk killing somebody unintentionally if he can simply use public transport? What would such societies with good public transport gain from a handicapped AI driver?

  4. Interesting article… although fundamentally flawed: we already have full self-driving cars on the road, even though they are not private vehicles. Waymo removed the safety driver in some of its cabs back in December of last year. What we have already witnessed is a fully driverless service, albeit geofenced. We aren’t far at all from the full deployment of TaaS, or Transport as a Service.

  5. You also say that we’re at Level 2.
    But Cadillac Super Cruise is Level 3 and Waymo has Level 5 (though both are geofenced).

    And I’d even argue Tesla is also Level 3+, just paralyzed from releasing it because of the political/public perception implications of any accident caused by it. This fear would be much less if people, including articles like this, drove home the single metric that matters – safety relative to human drivers. Tesla would release features they’ve developed as soon as they’re sure that they satisfy this relevant metric, thereby saving lives.

  6. Sane article during insane times. Until AI/deep learning can incorporate causal models, which humans are good at, autonomous cars are a far cry.

  7. I think people are trying to run before crawling. Driving is too difficult to try to solve with AI right now. Researchers should be focusing on being able to do the things simple organisms can do first.

    A jellyfish is a very simple organism that has about 10,000 neurons. As far as I know, AI cannot even fully achieve level 5 jellyfish. How can you possibly expect to achieve level 5 driving?

    • We have machines that can detect cancer, read lips, and play chess and Go way better than any human. It’s irrelevant whether we can duplicate a jellyfish. Machines that can only do one specific thing really well exist. Self-driving requires many things at the same time, but still just a limited number of independent things. The issue is the unforeseeable and the lack of causality. Recognizing an elephant is probably not important, but identifying a broken stop sign is. It’s at least a few more years before the long tail is addressed. We also need to consider security, such as a malicious person holding a fake 1000 mph sign, or a fake green light.

      • Interesting you mentioned recognizing stop signs. I don’t think Teslas recognize stop signs. They just know where stop signs are. The reason I say this is that on a recent drive on Autopilot in my Model 3, I had to brake for a flag man displaying a regulation stop sign at a spot where a repair crew was working. Conversely, the car tells me that there’s a stop sign 500 feet ahead all the time, even when trees or a curve in the road makes the actual stop sign invisible to the car’s cameras.

  8. Agree with most of your points in the article. But I am more optimistic about a breakthrough in the near future, simply because deep learning is so fundamentally flawed for this particular use case (autonomous driving) that a paradigm shift toward a more human-like approach that addresses the main flaw of deep learning would eclipse current progress almost overnight with a fraction of the training data.

  9. Lost me at the elephant example. I doubt there’s a single major self-driving implementation that would fail to handle that situation. AI does not have to be trained on an elephant specifically – it just needs to know there’s an unknown object on the road.

    And the China example? Take any random American and plop them in a car in China and I guarantee their driving performance is going to suffer significantly, and for basically the same reason as a Tesla AI.

    Look, I get the underlying point – AI is not going to be completely the same as a human driver anytime soon, and probably not ever (IMO). It has its own set of pros/cons, but it already shows potential for statistically better-than-human performance in metrics that matter (e.g. safety), and that’s what matters.

    It’s like comparing humans to calculators in the 1950s. Even now computers are not better than mathematicians at every task, but they have long since surpassed our ability to do arithmetic.

    • Thanks for writing out in a very thought-through way what I am thinking about autonomous driving. One combination that you did not touch on, but I think could be viable, is remote driving, where a human being operates several level 4 vehicles; this solves the issue of legal responsibility and, to some degree, the reaction to new situations. And especially in transportation it would reduce the cost of human labour.

  10. Even in the case of interpolation there are huge challenges for neural networks. NNs are basically fitting functions, also known as universal approximators. They are approximating an unknown function mapping from n- to m-dimensional spaces, where n and m are very big and unknown. In addition, real-life data are noisy in a very complex way, via cross-correlations, etc. This is much, much, much more complex than deterministic games like chess and even Go.
    NNs have a huge number of parameters to tune, which creates the well-known problem of overfitting: assuming you have approximated a function when in fact you are locally approximating the noise (errors). As I said, this is a hugely dimensional stochastic space, and exploring it requires a huge amount of data, which is completely out of the question for real-life data, but also very much in doubt for simulation-based data, the so-called reinforcement learning. Any old-school computer scientist will explain the curse of dimensionality in such problems.

    In short – people who believe self-driving is within reach are misled by the growing computing power. It looks to them that we are within range of the human brain’s power. However, the brain is an incredibly sophisticated device and has much more than speed and storage.

    I am not even going close to the legal and insurance problems… They alone appear very big to me.

    Musk is a great innovator and a blessing for humanity, but he is wrong about self-driving.

  11. I do mostly agree with your points, including Musk being exceedingly optimistic about the autonomy timeline. Part of that may simply be to sell more cars, of course, but part of it is probably also the typical developer Dunning-Kruger effect, if you will, where you think you’ll be done before you will actually be done, and your lifelong experience to the contrary is constantly being ignored.

    That said, many think that Level 5 is not really necessary and that good solid L4 is all that’s needed, and honestly I think that’s what Musk really means by what he calls L5, at least from everything I’ve heard him say. If the car can behave safely within the current context–react to surrounding traffic and stay on a recognized roadway, plus adapt to unexpected obstacles appearing in the road–and stay within a known infrastructure via geofencing, that would cover a massive majority of scenarios. I think Tesla is more right than, say, Waymo about their geofencing approach though: while Waymo rely on fully LIDAR-mapped environments as their playground, Tesla think that a looser map like Google Maps plus solid situational awareness is all that’s needed. Waymo still have to implement the same situational awareness despite their LIDAR; coping with sudden obstacles in the path is something their full 3D mapping doesn’t help with.

    The cases you cited as examples for why neural networks aren’t the answer are, I think, poor, because they all merely demonstrate flaws in recognizing the environment, not inherent AI issues. Not seeing the white truck against the low sun could be addressed with additional sensors–the radar that’s there already, or perhaps non-visual-spectrum cameras, or yes, LIDAR–and being able to classify the elephant as such is also not important in order to successfully avoid crashing into it. If that elephant were to move at the speed and in the direction of traffic, should the AI care that it’s an elephant? Alternatively, if a bedsheet were to be lowered into traffic from a cable above the street, would you as a human not stop anyway despite recognizing that your car would probably be OK driving through it? Why should the AI be more aggressive than that? As soon as you recognize an exception in the traffic flow, you just react to it in the most conservative and prudent way possible, and that should be OK for L4. There will still be tons of edge cases, but I still think that the vast majority of them can be handled with higher-level generic classification. I think better-than-human driving safety can still be achieved that way.

    The current Autopilot is still at the baby stage. One of the biggest flaws in my view is its very poor to nonexistent handling of lateral approaches, vehicles veering into your lane from next to you. The side cameras seem to have huge blind spots at the B pillar on both sides, as can easily be seen on the sentry videos. You can also observe that in real life, where the car simply doesn’t react at all to vehicles right next to you coming dangerously close. I’m wondering to what extent it’s even using the ultrasonic sensors for Autopilot. Are there any at the B pillar pointing sideways?

  12. No argument about autonomous drivers can ignore comparisons to real-world drivers. Comparing autonomous drivers against a zero-accident ideal is balderdash. Autonomous vehicles are already safer than human-driven vehicles, even if they make mistakes. The mistakes they make are far less common and far less dangerous than the everyday accidents caused by texting, distracted driving, and bad driving practices that abound on our roads.

    • Good, then who will take this risk – who will be ready to sell insurance to the self driving level 5 vehicles? Who will be responsible for the accidents and the eventual fatalities?

      • If they have to rewrite the code now, this is a very bad indication for the quality of the software development process.

      • I am not sure about the US, but in most of the rest of the developed world there is a special process and set of requirements for insurance companies. I assume the US is the same. Currently in the EU, Japan, Korea… Tesla would not legally be able to sell insurance. I don’t see any indications Tesla is making steps to get into the approval process in any of these markets. Moreover, in many markets you cannot just put anything on the road. There are basic legal requirements for car safety, and again Tesla is not starting the process – and it will be a difficult process.

    • “The mistakes they make are far less common and far less dangerous than the everyday accidents caused by texting, distracted driving, and bad driving practices that abound on our roads.”

      Killed your own argument there. Someone who is texting or distracted is already not paying attention. You can’t compare a computer system that pays full attention with a person who isn’t. A human driver with his/her attention on the road beats every computer system in existence, and so it will be forever. You cannot come anywhere near the computing power of the brain with artificial contraptions. It’s just not possible. Time to drop this dumb self-driving idea and instead focus on things that actually matter.

  13. I’d suggest two points are missing. I teach high-performance driving. The average driver is not very good. My Model S demonstrates significantly better car control than the average driver. Introduce an average driver to a skid pad (a simulation of ice and snow) and watch what happens. Not pretty. Yes, you can train, but you have to train each one, one at a time. Which is the second point: once one Tesla learns how to handle a situation, all Teslas know. Less than 1% of drivers have taken true skills courses. I guess there is a third point. Most unique situations (accidents, dumb behavior) are human initiated. As fewer humans drive, fewer unique situations.

    • How come Tesla still doesn’t know not to crash into a sideways tractor-trailer years after a Tesla fanboy’s life was sacrificed by Autopilot? It’s not as simple as you think it is. Taking myself as an example, I have very poor sports/reflexes. Yet I have driven my car for nearly 40 years on the East Coast and West Coast under all kinds of road conditions without any accident at all. Can a Model S top my performance despite having “significantly better car control”? Without strong AI, autonomous cars will never approach the safety level of a good human driver.

  14. Wow. That’s amazing. You sound just like Boeing did 18 years ago. Look what happened to Boeing – all the head engineers are extremely pissed that they lost to a pot head.

    • You completely missed the point of the article. Everyone already knows about the rewrite, and it offers nothing as a contribution to the discussion.

  15. Elon said full functionality by the end of the year, not level 5 autonomy. Meaning, in addition to everything the cars can do now, they will be able to navigate city streets, turns, etc. That’s pretty exciting and a major step forward. It very well may take years to work out all the corner cases and get legislative approval (and take the steering wheel away), but it will be miles safer than a human driver.

  16. All kinds of arguments can be made for and against Tesla achieving level 5 autonomy soon. But we can always look at the past few years and measure what Tesla has produced in terms of level 5 full self-driving versus Musk’s claims made during that time. In all cases, Musk fell way, way short of what he was claiming – that level 5 full self-driving/robo-taxi was just around the corner. Based on Musk’s endless penchant for hyperbole and stretching the truth, we can expect more of the same. Musk will claim robo-taxi is just around the corner every year until who knows when?

  17. A better way to evaluate FSD capability is to compare it with human performance alone: how many accidents does a human have in one million miles of driving? When FSD achieves less than one accident per million miles travelled, the statistical argument for its acceptance will be profoundly stronger, on the basis of the probable number of lives saved through accidents avoided.

  18. Every day there are a huge number of accidents in the world. Autonomous driving will solve a third of these accidents.

  19. Somehow, more and more often I see articles not only on one topic, but with the same words… Of course, there is something to discuss here. Undoubtedly, China will come to the autonomous car before anyone else. Using the example of China, we will get the solution to one of the biggest problems: skepticism. Of course, I don’t know how quickly China will come to this, but I know for sure that the engineers have problems, since the car needs to be adapted for each region. Moreover, those words are only promises, for example, security, which will be guaranteed in the next five years (read in a technology article https://blog.andersenlab.com/de/can-self-driving-cars-drive-better-than-we-do), so it’s ridiculous to talk about autonomous driving this year. Now there are no less wonderful achievements in the form of semi-autonomy, although I don’t really understand its point, because the driver is still constantly watching the road…

  20. First, Musk’s job is to sell cars. Second, before you can teach a robot to drive, you need to teach it to be human. This goes way beyond deep learning. When people drive, they do more than just respond to visual cues. They anticipate a lot more of what is happening. This comes with being human and not a software program. Deep learning cannot do this. From all the engineering conferences I have attended on deep learning: being human is not happening. Only the simulation of human traits.

  21. I wonder if this is just another out-of-control tech guy who is just pushing boundaries, knowing full well that a human-like driver can never happen with a binary device. He must know the background: computable numbers, etc. Or maybe not! I notice the Senate was keen to point out that none of the tech billionaires had a PhD, not a single one.

  22. Good attempt to explain in a very superficial way. I too wonder if level five autonomy will be achieved with this technology. You claim to know about AI but are still talking in terms of human intuition. Whether or not computation can solve this problem remains to be seen. That said, computation has solved a hell of a lot to date.

    • Yes, lots of things have been done with “computation”, but there are also lots of things which are proven to be unsolvable by brute computation – the so-called NP-complete problems… Just an example.
      And there is another classical type of problem which is very difficult to solve with computation – the so-called “curse of dimensionality” problems. And self-driving appears to be one of them.
