Tesla revealed details of a custom AI chip called D1 for training the machine-learning algorithm behind its Autopilot self-driving system at a promotional event last month. The event centered on Tesla’s AI research and featured a dancing human dressed as a humanoid robot that the company plans to build.
Tesla is the most recent nontraditional chipmaker to create its own silicon. As AI becomes more important and expensive to deploy, other companies heavily invested in the technology, such as Google, Amazon, and Microsoft, are designing their own chips.
Tesla CEO Elon Musk stated at the event that getting more performance out of the computer system used to train the company’s neural network will be critical to progress in autonomous driving. “It’s a big deal if a model takes a couple of days to train versus a couple of hours,” he said.
Tesla already designs chips that interpret sensor input in its vehicles, having moved away from Nvidia hardware in 2019. However, developing a powerful and complex chip required to train AI algorithms is far more expensive and difficult.
“If you believe that training a large neural network is the solution to autonomous driving, then what followed was exactly the kind of vertically integrated strategy you’d need,” says Chris Gerdes, director of Stanford’s Center for Automotive Research, who attended the Tesla event.
Many car companies use neural networks to identify objects on the road, but Tesla relies on the approach more heavily, with a single massive neural network known as a “transformer” receiving input from eight cameras at once.
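The core idea of fusing several camera feeds in one transformer can be sketched with a toy single-head attention step. This is a simplified illustration, not Tesla's architecture: the camera count matches the article, but the token counts, dimensions, and the `fuse_cameras` function are hypothetical.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def fuse_cameras(camera_feats, wq, wk, wv):
    # camera_feats: list of (tokens, dim) arrays, one per camera.
    # Pooling tokens from every camera into one sequence lets attention
    # relate a feature seen by one camera to features seen by the others.
    tokens = np.concatenate(camera_feats, axis=0)
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))
    return attn @ v  # each token now attends across all views

rng = np.random.default_rng(0)
dim = 16
feats = [rng.normal(size=(4, dim)) for _ in range(8)]  # 8 cameras, 4 tokens each
wq, wk, wv = (rng.normal(size=(dim, dim)) * 0.1 for _ in range(3))
fused = fuse_cameras(feats, wq, wk, wv)
print(fused.shape)  # (32, 16): 8 cameras x 4 tokens, jointly attended
```

The point of the sketch is the concatenation step: a single attention pass over all eight feeds lets the model reason about the whole scene at once, rather than per camera.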
During the August event, Tesla’s AI chief, Andrej Karpathy, said, “We are effectively building a synthetic animal from the ground up. The car can be compared to an animal: it moves around on its own, senses its surroundings, and acts on its own.”
In recent years, transformer models have made significant advances in areas such as language understanding; the gains have come from making the models larger and more data-hungry. Training the most powerful AI programs can require millions of dollars’ worth of cloud computing.
According to David Kanter, a chip analyst with Real World Technologies, Musk believes that by speeding up the training, “then I can make this whole machine—the self-driving program—accelerate ahead of the Cruises and Waymos of the world,” referring to two of Tesla’s competitors in autonomous driving.
According to Stanford’s Gerdes, Tesla’s strategy revolves around its neural network. Tesla, unlike many other self-driving car companies, does not use lidar, a more expensive type of sensor that can see the world in 3D. It instead interprets scenes by parsing input from its cameras and radar using a neural network algorithm.
This is more computationally demanding because the algorithm must reconstruct a map of its surroundings from camera feeds rather than relying on sensors that measure the 3D world directly.
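A small sketch shows why camera-only perception needs more inference than lidar. A lidar sensor returns a 3D point directly; a camera returns only a pixel, so depth must be estimated by the network before the point can be placed in space. The pinhole-model `unproject` function and its parameters here are illustrative assumptions, not Tesla's pipeline.

```python
import numpy as np

def unproject(u, v, depth, fx, fy, cx, cy):
    # Pinhole camera model: recover a 3D point from a pixel (u, v) plus an
    # ESTIMATED depth. Without the depth estimate, the pixel alone only
    # constrains the point to a ray -- the problem is ill-posed.
    x = (u - cx) * depth / fx
    y = (v - cy) * depth / fy
    return np.array([x, y, depth])

# A pixel at the optical center with a predicted depth of 12 m maps to a
# point 12 m straight ahead.
point = unproject(u=640, v=360, depth=12.0, fx=1000.0, fy=1000.0, cx=640.0, cy=360.0)
print(point)  # [ 0.  0. 12.]
```

The extra compute goes into producing that depth estimate (and doing so consistently across eight views), which is exactly the work a lidar-based stack gets from its sensor for free.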
However, Tesla collects more training data than other automakers. Each of the more than 1 million Teslas on the road sends video feeds from its eight cameras back to the company. Tesla claims that 1,000 people are employed to label those images, which include cars, trucks, traffic signs, lane markings, and other features, in order to help train the large transformer.
Tesla also said at the August event that it can automatically select which images to prioritize for labeling, making the process more efficient.
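Prioritizing which images to label is commonly done with uncertainty sampling: send human labelers the frames the current model is least sure about. The article doesn't say how Tesla ranks images, so this entropy-based `select_for_labeling` function is a generic, hypothetical sketch of the idea.

```python
import numpy as np

def entropy(probs):
    # Shannon entropy of each image's predicted class distribution;
    # high entropy means the model is uncertain about that image.
    p = np.clip(probs, 1e-12, 1.0)
    return -(p * np.log(p)).sum(axis=-1)

def select_for_labeling(pred_probs, budget):
    # Rank unlabeled images by uncertainty and keep the top `budget`.
    scores = entropy(pred_probs)
    return np.argsort(scores)[::-1][:budget]

preds = np.array([
    [0.98, 0.01, 0.01],  # confident prediction -> low labeling priority
    [0.34, 0.33, 0.33],  # near-uniform -> most valuable to label
    [0.70, 0.20, 0.10],  # somewhat uncertain
])
print(select_for_labeling(preds, budget=2))  # [1 2]
```

Spending the labeling budget on uncertain frames tends to improve the model faster than labeling frames it already handles well, which is the efficiency gain the company described.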
According to Gerdes, one risk of Tesla’s approach is that, at some point, adding more data may not improve the system. “Is it just a matter of gathering more data?” he wonders. “Or do the capabilities of neural networks plateau at a lower level than you hope?”
Answering that question will almost certainly be costly in either case.
The rise of large, expensive AI models has inspired not only some large companies to develop their own chips, but also dozens of well-funded startups working on specialized silicon.
Nvidia, which began by making gaming chips, currently dominates the market for AI training chips. When it became clear that its graphics processing units (GPUs) were better suited to running large neural networks than the central processing units (CPUs) at the heart of general-purpose computers, the company shifted its focus to supplying AI chips.
AI is also driving the diversification of chip designs in a neat bit of recursion. Chip design normally necessitates extensive technical knowledge and judgment, but machine learning has proven effective in automating portions of the process.
AI-designed chips are being manufactured by Google, Samsung, and others.
According to Kanter, some technical questions about specialized chips like Tesla’s D1 remain, such as how well they can be connected together and how well an algorithm can be split up and spread across different chips. “In some ways, you are writing a big check for your software team to cash,” he says.
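Kanter's point about splitting an algorithm across chips can be illustrated with a toy tensor-parallel matrix multiply. Each "chip" holds a column slice of the weights, computes its shard locally, and the partial outputs are then gathered; that gather is the step that stresses the chip-to-chip interconnect he mentions. The `sharded_matmul` function is a generic sketch, not the D1's actual programming model.

```python
import numpy as np

def sharded_matmul(x, weight, num_chips):
    # Split the weight matrix column-wise across chips; each shard's product
    # runs independently (in parallel on real hardware), then the partial
    # outputs are concatenated -- an all-gather over the interconnect.
    shards = np.array_split(weight, num_chips, axis=1)
    partials = [x @ w for w in shards]
    return np.concatenate(partials, axis=1)

rng = np.random.default_rng(1)
x = rng.normal(size=(2, 8))
w = rng.normal(size=(8, 12))
out = sharded_matmul(x, w, num_chips=4)
print(np.allclose(out, x @ w))  # True: sharding does not change the math
```

The math is unchanged by sharding; the engineering difficulty Kanter alludes to is making the communication between shards fast enough that the extra chips actually pay off, which is the "big check" the software team has to cash.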
According to Huei Peng, an autonomous driving professor at the University of Michigan, if the D1 is successful, Musk could sell it to other carmakers, who would have to follow its technological lead.
Peng says he doesn’t know whether Tesla’s approach will be financially or technically viable, but he’s learned not to bet against Musk. “They’ve done a lot of things that everyone says aren’t going to work,” he says. “But it works in the end.”