

Elon Musk asserts that he has just turned on the most potent AI supercomputer in the world.


“Colossus is the most powerful AI training system in the world.”

Meet Colossus: Elon Musk’s latest supercomputer, which is said to be powered by 100,000 Nvidia AI chips, more than any other single AI system on the planet.

Musk said on Monday that the massive data center, constructed in Memphis, Tennessee, for his artificial intelligence business xAI, took just 122 days to assemble and was finally brought online over Labor Day weekend. That build time is a record, according to Nvidia.
In a tweet, Musk declared, “Colossus is the most powerful AI training system in the world.”

The supercomputer is built with Nvidia H100 graphics processing units, which are the industry’s most coveted pieces of hardware for training and running generative AI systems, such as AI chatbots and image generators.
And xAI’s current tally of them is just the beginning. Musk claimed that, in a few months, Colossus will “double” in size to 200,000 AI chips, including 50,000 H200 GPUs, a newer version that Nvidia says will have nearly twice the memory capacity of its predecessor and 40 percent more bandwidth.

Musk only established xAI last summer, and Grok, a foul-mouthed AI chatbot incorporated into X, is the company’s flagship offering. The fact that Musk’s business has surpassed the hardware prowess of tech titans with years more experience, such as bitter rival OpenAI and its backer Microsoft, is impressive, assuming that Musk’s penchant for exaggeration isn’t at work once again.

According to Fortune, Nvidia views Musk as one of its most loyal customers. Before xAI entered the market, Musk had already paid between $3 billion and $4 billion for tens of thousands of Nvidia GPUs for Tesla.


A portion of the chips originally used to train Tesla’s Full Self-Driving system would reportedly be repurposed to train Grok.

Musk almost certainly had to pay billions more to acquire this latest trove of 100,000 H100 GPUs, since each AI processor sells for about $40,000. Fortunately for him, xAI secured about $6 billion in a May funding round with the backing of well-known tech VC firms like Andreessen Horowitz.
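A rough back-of-the-envelope check of that "billions more" figure, assuming the roughly $40,000-per-H100 list price cited above (actual bulk pricing negotiated with Nvidia is not public):

```python
# Back-of-the-envelope estimate of Colossus' GPU bill.
# Assumes the ~$40,000 per-H100 price mentioned above; real bulk pricing is unknown.
gpu_count = 100_000
price_per_gpu = 40_000  # USD, approximate list price

total_cost = gpu_count * price_per_gpu
print(f"~${total_cost / 1e9:.0f} billion")  # prints "~$4 billion"
```

At those assumptions, the chips alone would account for roughly two-thirds of the $6 billion raised in May, before counting the data center itself, power, and networking.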

The monstrous supercomputer’s launch, however, was preceded by controversy. Last week, Memphis locals who live near the Tennessee data center complained about “untenable levels of smog” created by the supercomputer, which could augur further disputes down the line at the xAI facility.

And that’ll just be the beginning of Colossus’ troubles. Its title as the most powerful AI training system will surely come under threat, too: other AI leaders, like OpenAI, Microsoft, Google, and Meta, some of which already possess hundreds of thousands of GPUs of their own, are unlikely to rest on their laurels.


Microsoft, for example, reportedly aims to amass 1.8 million AI chips by the end of the year (though this number sounds highly optimistic, if not infeasible). In January, Mark Zuckerberg signaled that Meta intends to buy an additional 350,000 Nvidia H100s by the same deadline.
For now, though, Colossus remains a singular statement of raw computing power. According to Fortune, it’ll be put to use to train Grok-3, which Musk aims to release in December.
