Tesla’s Autopilot feature has been a huge success since it launched, and autonomous vehicles are becoming an ever closer reality. Arguably the industry leader in autonomous tech, Tesla made waves when it chose Vision over LIDAR to power its Autopilot feature.
But what is the difference between the two, and which is the better choice? It’s hard to say that Tesla made the wrong call by implementing Vision in its cars, especially given the foothold Tesla currently has in autonomous tech. Analysts predict that Tesla will have moved over 100K units before any other manufacturer even reaches the production stage on its own autonomous vehicles.
The difference between the two technologies is, when you really think about it, quite staggering. LIDAR uses beams of emitted light to “see” its surroundings and creates a 3D map of the environment. Vision functions more like a camera, in that it needs existing ambient light to capture its surroundings. A Vision system captures images of the environment, which then need to be processed to create a 3D image.
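To make the LIDAR side of that difference concrete, here is a minimal sketch of how a single emitted pulse becomes a 3D point: the sensor times the pulse’s round trip, halves it to get distance, then uses the beam’s angles to place the point in space. The scan angles and timing values below are illustrative assumptions, not any particular sensor’s specs.

```python
import math

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def lidar_point(round_trip_s, azimuth_deg, elevation_deg):
    """Convert one pulse's time-of-flight and beam angles to an (x, y, z) point."""
    distance = SPEED_OF_LIGHT * round_trip_s / 2.0  # pulse travels out and back
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    x = distance * math.cos(el) * math.cos(az)
    y = distance * math.cos(el) * math.sin(az)
    z = distance * math.sin(el)
    return (x, y, z)

# A pulse returning after ~333 nanoseconds corresponds to a target ~50 m away.
point = lidar_point(333e-9, azimuth_deg=10.0, elevation_deg=-2.0)
```

Repeating this for millions of pulses per second is what builds the 3D map directly, with no inference step; the geometry comes straight from physics rather than from image processing.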
At the moment, it seems that Vision is the superior tech of choice, especially given its lower cost and suitability for mass production. A Vision system also does not require as clear a view as LIDAR does, so its placement on a vehicle can be more aesthetically pleasing. A LIDAR system, while much cheaper today than it was in the past, is still not as cheap as the cameras Vision uses, so for the time being, it would seem almost silly for a business like Tesla to go with anything other than Vision, right?
For proponents of Vision, the argument is that if humans can drive with two eyes, then cars should be able to as well with two cameras. While this makes sense on the surface, the analogy is actually flawed. For starters, the human head can rotate, and with the help of mirrors, can actually gain close to a 360-degree field of view.
More cameras can obviously solve the 360-degree problem, but with Tesla’s current implementation of Vision, there are holes in visibility, especially in lateral views.
Another problem with Vision is that because its performance depends on ambient light, shadows and even direct sunlight can cause problems, including trouble mapping out the environment. Vision is subject to an extremely high rate of false positives and false negatives, and while the sensor hardware is cheaper than a LIDAR unit, the computer hardware needed to convert 2D images into 3D may end up being more expensive.
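The core of that 2D-to-3D conversion can be sketched with the standard pinhole stereo model: once a feature is matched between two cameras, its depth follows from the pixel disparity. The focal length and camera baseline below are illustrative assumptions, not Tesla’s actual values.

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Pinhole stereo model: depth Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive (feature seen by both cameras)")
    return focal_px * baseline_m / disparity_px

# A feature shifted 20 px between two cameras 0.3 m apart, with an
# 800 px focal length, sits 12 m away.
z = depth_from_disparity(focal_px=800.0, baseline_m=0.3, disparity_px=20.0)
```

The formula itself is trivial; the expensive part is finding the matching disparity for every pixel of every frame, which is exactly the processing burden (and source of false matches) that a LIDAR unit sidesteps by measuring distance directly.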
So what you have is a situation where LIDAR is actually a more proven method with less chance of problems, but it comes with a more expensive sensor set that requires more precise placement (usually up high). An autonomous system using LIDAR has a higher chance of making it to production in a reasonable time due to fewer “issues.”
A Vision system has a longer road to market due to the higher level of efficacy required. The sensor set is initially cheaper, but the computer hardware is more expensive and the software more complex.
So which one is “better?” Perhaps a better question would be “What’s more important to Tesla: winning the battle or the war?” I guess we’ll see when AP 3.0 is announced down the line.
SOURCE | Seeking Alpha