Featured Image Credit: @megangale/Twitter
A Tesla Model 3 Autopilot teardown by System Plus Consulting analyzed the company's cost savings after the Hardware 3.0 (HW3.0) retrofit. System Plus’ findings showed just how mindful Tesla is about cost efficiency.
The EV automaker seems to have carefully decided which components it could cut costs on to put its capital to better use. The company’s knack for vertical integration was also evident in the Model 3 Autopilot teardown. While System Plus did an extensive analysis of all the tech Tesla incorporated into its Model 3 for Autopilot and FSD, this article will mainly focus on HW3.0 and its related computer components.
According to EE Times, System Plus’ CEO Romain Fraux noted that Tesla designed a custom “liquid-cooled dual computing platform” that houses its Autopilot and infotainment computers. Tesla’s infotainment electronic control unit (ECU, or MCU) was found on one board and its Autopilot ECU on another. Both were in the same module.
For the GPU, Tesla used modules from different manufacturers associated with Nvidia’s high-performance integrated circuits.
With HW3.0, Tesla designed two new SoCs, two GPUs, two neural network processors, and one lock-step CPU. In comparison, Hardware 2.5 (HW2.5) used two Nvidia Parker SoCs, one Nvidia Pascal GPU, and one Infineon TriCore CPU.
There were 4,746 components in HW3 compared to 4,681 found in HW2.5. Fraux noted that the same-sized board was used for both HW3 and HW2.5. Tesla also used fewer processors in HW3: System Plus noted that Tesla went from four chips (Nvidia, Infineon) to two SoCs.
Credit: System Plus Consulting
Tesla used a 14nm technology node in HW3, while a 16nm node was used in HW2.5’s Nvidia processors. “This was the first time when 14nm FinFET process was used in a car,” noted Fraux when HW3 first rolled out.
Fraux compared Tesla’s use of two SoCs to the Audi A8’s zFAS central driver assistance controller, which he said “comes with no redundancy, and is really expensive.”
Elon Musk explained the need for hardware redundancy in autonomous vehicles during Tesla's Autonomy Day in 2019. “In order to have a self-driving car or Robotaxi, you really need redundancy throughout the vehicle at the hardware level. So starting in—I believe it was—October 2016, all cars made by Tesla have redundant power steering…So if the motor fails, the car can still steer. All the power and data lines have redundancy. So you can sever any given power line or any data line, and the car will keep driving," he said.
“The auxiliary power system, even if the main pack—you lose complete power in the main pack—the car is capable of steering and braking using the auxiliary power system. So you can completely lose the main pack, and the car is safe.”
Credit: System Plus Consulting
Fraux noted that Tesla designing its own automotive ASIC was “a big risk unless you have a talented design team [and] hardware competency internally.” However, OEMs may need to consider following in Tesla’s footsteps as more electronic components are integrated into vehicles. “If you want to keep a good margin and go for volume production, it could make sense,” said Fraux.
System Plus estimated that Tesla’s HW2.5 cost the company US$280, based on its three Nvidia chips and an Infineon MCU, whereas Tesla’s HW3.0 costs US$190, based on Tesla’s two SoCs. “Our quick estimate shows that [Tesla] can recover [its] investment in four years,” concluded Fraux, assuming that the EV automaker spends US$150 million on its custom processor, component prices do not change, and Tesla’s annual production is 400,000 units.
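The four-year figure follows directly from the numbers above. A quick back-of-the-envelope check (all inputs are System Plus’ published estimates from this article; Python is used here only to show the arithmetic, not anything Tesla or System Plus actually ran):

```python
# Payback estimate for Tesla's custom HW3 SoC, reproducing
# System Plus' back-of-the-envelope math from the article.
hw25_cost = 280                # US$ per vehicle (three Nvidia chips + Infineon MCU)
hw30_cost = 190                # US$ per vehicle (two Tesla SoCs)
nre_investment = 150_000_000   # assumed custom-processor development spend, US$
annual_units = 400_000         # assumed annual production, units/year

saving_per_unit = hw25_cost - hw30_cost          # US$90 saved per vehicle
annual_saving = saving_per_unit * annual_units   # US$36M saved per year
payback_years = nre_investment / annual_saving

print(f"Saving per vehicle: ${saving_per_unit}")
print(f"Payback period: {payback_years:.1f} years")  # ≈ 4.2 years
```

That result lines up with Fraux’s “four years” conclusion, and it assumes component prices and production volume stay flat over the whole period.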