
Elon Musk Asserts That He Has Just Turned On the Most Potent AI Supercomputer in the World


Behold Colossus: Elon Musk's latest supercomputer, which is said to be powered by 100,000 Nvidia AI chips, reportedly more than any other single AI system on the planet.

Built in Tennessee for Musk's artificial intelligence company xAI, the massive data center officially came online over Labor Day weekend, Musk announced on Monday. According to Nvidia, the system was assembled in a record-breaking 122 days.

As Musk put it in a tweet, “Colossus is the most powerful AI training system in the world.”

The supercomputer is built with Nvidia H100 graphics processing units, the most sought-after hardware in the industry for developing and running generative AI applications such as chatbots and image generators.

And that's only the beginning of xAI's chip count. In a few months, according to Musk, Colossus will "double" in size to 200,000 AI chips. The expansion will include 50,000 H200 GPUs, a more recent generation with roughly twice the memory capacity and 40 percent more bandwidth than its predecessor, according to Nvidia.
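
As a quick sanity check of those spec claims, here is a minimal sketch using Nvidia's published SXM datasheet figures (H100: 80 GB HBM3 at 3.35 TB/s; H200: 141 GB HBM3e at 4.8 TB/s). These numbers are assumptions drawn from Nvidia's spec sheets, not from the article itself.

```python
# Sanity check of the "about twice the memory, ~40% more bandwidth" claim,
# using Nvidia's published SXM datasheet figures (assumed, not given in the article).
h100_memory_gb, h100_bandwidth_tbs = 80, 3.35   # H100 SXM: 80 GB HBM3, 3.35 TB/s
h200_memory_gb, h200_bandwidth_tbs = 141, 4.8   # H200 SXM: 141 GB HBM3e, 4.8 TB/s

memory_ratio = h200_memory_gb / h100_memory_gb                 # 141 / 80 = 1.76x
bandwidth_gain = h200_bandwidth_tbs / h100_bandwidth_tbs - 1   # 4.8 / 3.35 - 1 = 0.43

print(f"H200 memory: {memory_ratio:.2f}x the H100")            # -> 1.76x
print(f"H200 bandwidth: +{bandwidth_gain:.0%} over the H100")  # -> +43%
```

The ratios come out to about 1.76x the memory and 43 percent more bandwidth, consistent with the rounded figures Nvidia cites.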

Quick Learners

Musk only launched xAI this past summer. Its flagship product, Grok, is a foul-mouthed AI chatbot integrated into X. If Musk's tendency toward hyperbole isn't at play again, it's remarkable that his business can already compete with adversaries as tough as Microsoft and OpenAI, two tech giants with years of hardware development experience.

As Fortune notes, Nvidia regards Musk as one of its most valued customers: he had already spent between $3 billion and $4 billion on tens of thousands of GPUs for Tesla before pursuing xAI.

Some of the chips initially used to train Tesla's Full Self-Driving system would reportedly be repurposed to train Grok.

Musk almost certainly had to pay billions more to acquire this latest trove of 100,000 H100 GPUs, since each AI processor sells for about $40,000. Fortunately for him, xAI secured about $6 billion in a May funding round backed by well-known tech VC firms such as Andreessen Horowitz.
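
To see where the "billions more" figure comes from, a rough back-of-the-envelope estimate multiplies the reported unit price by the chip count. This is only an illustration based on the figures quoted above, not a confirmed purchase price.

```python
# Back-of-the-envelope hardware cost, using only the figures quoted in the article
# (a rough estimate, not a confirmed purchase price).
gpu_count = 100_000         # H100 GPUs reportedly powering Colossus
price_per_gpu_usd = 40_000  # approximate market price of a single H100

estimated_cost_usd = gpu_count * price_per_gpu_usd
print(f"Estimated GPU cost: ${estimated_cost_usd / 1e9:.0f} billion")  # -> $4 billion
```

At roughly $4 billion in GPUs alone, the $6 billion raised in May would have been largely spoken for before counting the data center itself.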

But the enormous supercomputer's debut was surrounded by controversy from the start. Last week, locals in Memphis who live near the Tennessee data center complained about "untenable levels of smog" produced by the supercomputer, which may portend more conflict at the xAI facility in the future.

And Colossus's troubles won't stop there. Its status as the most powerful AI training system will surely be challenged, too. Other AI industry titans such as OpenAI, Microsoft, Google, and Meta, some of which already possess hundreds of thousands of GPUs, are unlikely to sit back and take it easy.

Microsoft, for example, reportedly expects to have 1.8 million AI processors by the end of the year (though that number sounds highly optimistic, if not infeasible). In January, Mark Zuckerberg hinted that Meta intended to buy an additional 350,000 Nvidia H100s by the same date.

However, for the time being at least, Colossus remains a singular example of raw computing power. According to Fortune, Grok-3, which Musk intends to launch in December, will be trained on it.