Featured image: Baranozdemir/Getty Images
Behind the impressive capabilities of Tesla's FSD are huge computers that process billions of miles of driving data using machine learning on neural networks. While Tesla's computers are already among the industry's best, the company's upcoming supercomputer, dubbed "Dojo," will be the best driver-training computer ever created and possibly the fastest ever built, according to the YouTube channel Dr. Know-it-all Knows it all.
Tesla cars process data on a massive scale using the company's own chips. Its vehicles need a powerful onboard computer, so Tesla developed its own chip capable of handling hundreds of neural-network tasks and thousands of images per second. In short, the chips must work in real time on extremely complex information. After collecting the data, Tesla processes it and releases the next over-the-air (OTA) update to improve the fleet.
And now Tesla is developing a new supercomputer, Dojo. Its goal is to increase the speed and accuracy of training by at least 10 times over the current computer. Dojo is a neural-network (NN) training system being developed by Tesla's hardware team to accelerate neural-network learning on the server side. Tesla CEO Elon Musk said that Dojo V1.0 will be released in about a year.
In mid-August, Musk hinted at Dojo's computing power, writing, "A truly useful exaflop at de facto FP32." An exaFLOP is one quintillion floating-point operations per second, or 1,000 petaFLOPS. At the moment, the world's most powerful supercomputer is Fugaku, developed in Japan by Fujitsu and RIKEN, at about 415 petaFLOPS. This means Dojo would be more than twice as fast as the most powerful supercomputer in existence.
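The back-of-the-envelope comparison works out as follows (a quick sketch; the 1 exaFLOP figure is Musk's claim, and 415 petaFLOPS is Fugaku's reported benchmark speed at the time):

```python
# Compare the claimed Dojo throughput with Fugaku's reported speed.
DOJO_FLOPS = 1e18      # 1 exaFLOP = 1,000 petaFLOPS (Musk's claim, FP32)
FUGAKU_FLOPS = 415e15  # ~415 petaFLOPS (Fugaku, mid-2020)

ratio = DOJO_FLOPS / FUGAKU_FLOPS
print(f"Dojo would be about {ratio:.1f}x Fugaku")  # about 2.4x
```

That ratio of roughly 2.4 is where the "more than twice the speed" claim comes from.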
"A truly useful exaflop at de facto FP32" — Elon Musk (@elonmusk), August 16, 2020
Tesla's custom in-vehicle inference computer delivers 144 TOPS (trillion operations per second), where almost every TOP is usable and optimized for NN workloads; given the hardware needed to run sophisticated nets, it far exceeds anything else in volume production. Dojo will be able to process vast amounts of video training data and efficiently run hyperspace arrays with an extreme number of parameters, plenty of memory, and ultra-high bandwidth between cores.
For this, Tesla has developed (or is developing) a special NPU chip. A neural processing unit (NPU) is a specialized circuit that implements the control and arithmetic logic needed to execute machine-learning algorithms, typically by operating on predictive models such as artificial neural networks (ANNs) or random forests (RFs).
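To make the definition concrete, the workhorse operation an NPU accelerates in hardware is the multiply-accumulate at the heart of every neural-network layer. This is a generic illustrative sketch (plain Python, not Tesla's design), showing the computation a single dense layer performs:

```python
# Minimal sketch of the core operation an NPU accelerates:
# multiply-accumulate across a dense (fully connected) layer, y = W.x + b.
def dense_layer(inputs, weights, biases):
    """Each output is a weighted sum of all inputs plus a bias."""
    return [
        sum(w * x for w, x in zip(row, inputs)) + b
        for row, b in zip(weights, biases)
    ]

x = [1.0, 2.0]                    # example input vector
W = [[0.5, -1.0], [2.0, 0.0]]     # example weight matrix
b = [0.1, -0.1]                   # example biases
print(dense_layer(x, W, b))
```

An NPU performs millions of these multiply-accumulates in parallel in fixed-function silicon, which is why it vastly outpaces a general-purpose CPU on neural-network workloads.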
Dr. Know-it-all Knows it all estimates that it currently takes Tesla about three days to complete one training run, which is a very long time. A tenfold increase in power, thanks to Dojo, would cut that to about 7.2 hours (72 hours divided by 10). Several training runs could then be completed in a single day, significantly accelerating the trajectory toward Level 5 autonomy.
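The estimate above is simple arithmetic (a sketch, assuming the three-day baseline and the 10x speedup target from the source):

```python
# Training-time estimate under Dojo's targeted 10x speedup.
HOURS_PER_RUN_TODAY = 3 * 24  # ~3 days per training run (assumed baseline)
SPEEDUP = 10                  # Dojo's targeted improvement

hours_with_dojo = HOURS_PER_RUN_TODAY / SPEEDUP
runs_per_day = 24 / hours_with_dojo
print(hours_with_dojo, runs_per_day)  # 7.2 hours -> ~3.3 runs per day
```

So instead of one training run every three days, Tesla could complete roughly three runs in a single day.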
© 2020, Eva Fox. All rights reserved.
About the Author
Eva Fox joined Tesmanian in 2019 to cover breaking news as an automotive journalist. She focuses on clean energy and electric vehicles, and specializes in Tesla and topics related to the company's work and development.