Tesla FSD Beta V9.0 with Pure Vision Without Radar Is Almost Ready

Featured image: Shutterstock | Tesmanian

The question of which approach is better for achieving Full Self-Driving (FSD): Tesla's suite of cameras and radar, or the LIDAR sensors used by virtually every other manufacturer, has long been on the agenda. Now Tesla has added more spice to the matter: the company aims to remove radar from its cars so that FSD relies on pure vision alone, which, according to Tesla CEO Elon Musk, may already be possible in the new version, FSD Beta V9.0.

For some time now, Tesla has been working to move its neural networks (NNs) to surround video, which will allow its cars to reach the next level of Full Self-Driving. The company appears to have made progress here, and the next version, FSD Beta V9.0, may be able to work by receiving information from cameras alone.

Musk has repeatedly said that the company wants its cars to perceive the world in 4D (four dimensions). We see the world in three spatial dimensions, and the fourth dimension is time; once cars can "see" in all four dimensions, Tesla's goal will be achieved. In late January, Musk said the company was moving all NNs to eight-camera surround video.

On April 9, he confirmed that FSD Beta V9.0 is almost ready. Musk stressed that the updated version has massive improvements, especially in terms of corner cases and bad weather. He also pointed out that V9.0 will work with pure vision and without using radar.


Guided by the principle that fewer parts mean fewer problems, Tesla wants to get rid of radar in its vehicles entirely. To head off doubts, Musk explained that radar actually makes the whole process more difficult, so it is wise to remove it. He pointed out that in some situations the data from the radar and the cameras may conflict, and then the question arises: which one should the car believe?

Musk explained that vision is much more accurate, which is why it is better to double down on vision than to attempt sensor fusion. "Sensors are a bitstream and cameras have several orders of magnitude more bits/sec than radar (or lidar). Radar must meaningfully increase signal/noise of bitstream to be worth complexity of integrating it. As vision processing gets better, it just leaves radar far behind."


Thus, as it improves FSD, Tesla also improves its cars by removing unnecessary equipment. In this way, the company makes progress toward full autonomy while simultaneously reducing the cost of car production.

© 2021, Eva Fox. All rights reserved.

_____________________________

Article edited by @SmokeyShorts; you can follow him on Twitter.


About the Author

Eva Fox

Eva Fox joined Tesmanian in 2019 to cover breaking news as an automotive journalist. The main topics she covers are clean energy and electric vehicles. As a journalist, Eva specializes in Tesla and topics related to the company's work and development.
