
Elon Musk Touts Nvidia Dominance and Predicts a Giant Leap in AI Power

Elon Musk doesn’t make a habit of shouting out other entrepreneurs, but even he can admit when someone, like Nvidia cofounder and CEO Jensen Huang, out-predicts him.

Speaking via FaceTime at the Abundance360 Summit, a gathering in Rancho Palos Verdes, California, organized by futurist Peter Diamandis, founder of the XPrize Foundation and Singularity University, Musk said that “you have to give credit to Jensen and the Nvidia team for kind of seeing this coming,” referring to Huang’s decades-long effort to position Nvidia at the forefront of artificial intelligence. “They’re making what is currently the best AI hardware out there,” he added.

During his conversation with Diamandis, Musk noted that Nvidia’s chips are largely responsible for the massive increases in AI computing power over the past year or so, and asserted that the amount of computational power dedicated solely to AI is increasing by a factor of 10 every six months. Musk also predicted that over the next few years, AI compute will keep growing by a factor of 100 per year, as data centers that have traditionally handled more conventional workloads shift to AI.

An AI model’s “intelligence” is closely correlated with the amount of compute used to train it, which is why companies like OpenAI and Musk’s xAI are so bullish on Nvidia’s tech: they need it to build significantly smarter models and get closer to achieving AGI, a hypothetical form of AI that’s as smart as or smarter than the average person. “It’s certainly a good time to be Nvidia,” Musk said, “obviously.”
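
Those two figures line up with each other: a tenfold jump every six months compounds to a hundredfold jump per year. A quick back-of-the-envelope check in Python (the growth figures are Musk’s; the arithmetic is ours):

    # A minimal sketch (our arithmetic, not from the article) checking that
    # Musk's two growth figures are mutually consistent: 10x every six months
    # compounds to 100x per year, since a year holds two six-month periods.

    six_month_factor = 10        # Musk: AI compute grows 10x every six months
    periods_per_year = 12 / 6    # two six-month periods per year

    annual_factor = six_month_factor ** periods_per_year
    print(f"Implied annual growth factor: {annual_factor:.0f}x")  # -> 100x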

That increase in compute power is good news: Musk said the biggest limiting factor on AI’s growth in 2023 was the scarcity of the AI chips, like the ones Nvidia has made a fortune selling, used to train and run advanced AI models. Now that chip production has ramped up enough to meet the outsized demand, the biggest limiting factor going forward, according to Musk, is finding enough electricity to power the data centers where those chips are put to work.

Training large language models like GPT-4, OpenAI’s marquee offering, is an incredibly energy-intensive process. Training GPT-3, the large language model that powered the initial release of ChatGPT, is estimated to have consumed 1,287 megawatt-hours of electricity, reportedly roughly the amount of energy 130 US homes use in a year. OpenAI cofounder and CEO Sam Altman has personally invested $375 million in Helion Energy, a startup that aims to use nuclear fusion to provide a cheaper, more eco-friendly way of powering AI data centers.
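
That household comparison is easy to sanity-check. A quick sketch, assuming a round 10 megawatt-hours of annual electricity use per US home (our assumption, not the article’s, though it is close to commonly cited US averages):

    # A rough sanity check (our assumptions, not the article's) on the
    # household comparison. Assumes an average US home uses about 10 MWh
    # of electricity per year.

    gpt3_training_mwh = 1_287    # estimated energy used to train GPT-3
    home_annual_mwh = 10         # assumed annual consumption of one US home

    homes_equivalent = gpt3_training_mwh / home_annual_mwh
    print(f"About {homes_equivalent:.0f} homes' annual usage")  # ~129 homes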

Musk said that AI companies will soon be competing for step-down transformers, the large and expensive machines that convert the high voltages supplied by the grid into the far lower voltages computing hardware can use. “Getting the power from a utility, something like 300 kilovolts, down to below one volt is a massive step down,” said Musk, joking that the industry currently needs “transformers for transformers.”
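
For a sense of why Musk frames this as a bottleneck: the drop from transmission voltage to chip voltage spans more than five orders of magnitude and, in practice, happens across several pieces of hardware, not one. A rough illustrative sketch with assumed, typical voltages (only the 300-kilovolt figure comes from Musk):

    # An illustrative sketch (assumed, typical voltages; not from the article)
    # of the chain of step-downs between a utility feed and an AI chip. Real
    # facilities vary; the point is the sheer ratio Musk is describing.

    stages = [
        ("utility transmission line", 300_000),  # ~300 kV, per Musk's example
        ("substation / distribution", 13_800),   # a common medium voltage
        ("facility low voltage", 480),           # typical data-center feed
        ("server power supply", 12),             # standard server rail
        ("on-chip voltage regulator", 0.9),      # core voltage of a modern chip
    ]

    for (upper, v_in), (lower, v_out) in zip(stages, stages[1:]):
        print(f"{upper} -> {lower}: {v_in / v_out:,.0f}x step down")

    print(f"overall: about {stages[0][1] / stages[-1][1]:,.0f}x")  # ~333,333x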

Source: INC
