The Hype Cycle of AI
Telling Reality from Fantasy
Understanding the Peak of AI Hype
When the AI hype cycle reaches its peak, executives must sift through the noise to determine the true value of the technology for their business.
The excitement around generative AI appears to be cooling, nearly two years after OpenAI introduced ChatGPT. According to Lucidworks’ 2024 survey, only one in four of the more than 1,000 executives surveyed successfully launched generative AI initiatives in the past year. Moreover, just 63% plan to increase investment in generative AI, a sharp drop from the 93% reported in 2023.
Governance over Hype: The Shift in Priorities
Companies finding success with generative AI now prioritize governance over cost savings or revenue growth. The generative AI gold rush has made large language models (LLMs) expensive to build, yet many businesses are discovering the technology doesn’t always meet expectations.
A significant barrier to AI adoption remains trust. To address this, major tech companies have begun supporting transparency in model reporting. Regulating AI remains a challenge, as there is no universally accepted definition of generative AI. This lack of clarity could lead to prolonged legal disputes.
The resulting phenomenon, known as “AI washing,” mirrors greenwashing: companies exaggerate the capabilities of AI tools, LLMs, and chatbots to appear more innovative than they are.
The Accuracy Problem with Generative AI
Some of the most damaging myths since 2022 suggest LLMs outperform humans or are close to achieving artificial general intelligence (AGI). Since its debut, generative AI has shown a recurring issue: trust. Users have become familiar with “hallucinations,” a term describing the confidently false answers LLMs often generate. Efforts to fix this have only achieved limited success.
Despite this, some tech leaders continue to praise LLMs. Nvidia’s CEO, Jensen Huang, famously claimed that future generations might not need to code due to AI advancements. AWS CEO Matt Garman echoed a similar belief, predicting AI could reduce developers’ roles over the next two years.
Still, reality tells a different story. A May survey of 1,700 Stack Overflow users revealed that 76% used or planned to use AI coding assistants. However, many noted that these tools struggled with complex code and context. While 43% trusted AI-generated code to some extent, 31% distrusted it, and only 2.7% had high trust.
Rather than replacing developers, AI coding assistants seem to support skilled programmers with routine tasks. As Łukasz Mądrzak-Wecke puts it, “AI might boost senior developer productivity but create challenges for juniors.” Fully autonomous coding may arrive eventually, but likely not before 2026.
Buying vs. Building an LLM: The Hidden Costs
In the rush to adopt generative AI, businesses often overlook the actual costs. Using public cloud LLMs on a pay-as-you-go model provides flexibility and access to tools like Amazon Bedrock, Azure AI, and Google Vertex AI.
However, building a custom LLM is a different story. While the upfront costs are higher, the long-term value may outweigh them. Custom models are tailored to specific needs, while off-the-shelf models may become outdated or unsuitable.
“Training is the most resource-intensive phase,” says Mądrzak-Wecke. “But once trained, fine-tuning and inference become more affordable.” He adds that training should be treated as an ongoing cost in the age of competitive AI.
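The buy-versus-build trade-off above comes down to simple break-even arithmetic: a pay-as-you-go API has no upfront cost but a higher per-token price, while a custom model front-loads the training bill. The sketch below illustrates that logic in Python; every dollar figure in it is a placeholder assumption, not a real vendor price.

```python
# Illustrative cost model: pay-as-you-go cloud API vs. a self-hosted custom
# model. All prices are made-up assumptions for the sketch.

def api_cost(tokens: int, price_per_1k: float) -> float:
    """Total pay-as-you-go cost for a given token volume."""
    return tokens / 1000 * price_per_1k

def self_hosted_cost(tokens: int, upfront_training: float,
                     infra_per_1k: float) -> float:
    """One-off training cost plus per-token inference infrastructure cost."""
    return upfront_training + tokens / 1000 * infra_per_1k

def break_even_tokens(price_per_1k: float, upfront_training: float,
                      infra_per_1k: float) -> float:
    """Token volume at which self-hosting becomes cheaper than the API."""
    if price_per_1k <= infra_per_1k:
        return float("inf")  # self-hosting never catches up
    return upfront_training / (price_per_1k - infra_per_1k) * 1000

# Hypothetical numbers: $0.03 per 1k API tokens, $250k to train a custom
# model, $0.005 per 1k tokens of self-hosted inference.
volume = break_even_tokens(0.03, 250_000, 0.005)
print(f"Break-even at roughly {volume / 1e9:.1f} billion tokens")
```

Note that this static picture is exactly what Mądrzak-Wecke’s caveat undermines: if training must be repeated to stay competitive, the “one-off” term becomes a recurring cost and the break-even point keeps moving.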
In the future, Retrieval-Augmented Generation (RAG) may ease some of these concerns. RAG integrates external data to enhance model outputs, reducing the need for repeated training. However, building the required data infrastructure brings additional costs.
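The RAG pattern described above is simple at its core: fetch the documents most relevant to a query, then feed them to the model as context alongside the question. Here is a minimal, self-contained sketch; the retrieval step uses naive word overlap purely for illustration, whereas production systems use vector embeddings and a dedicated search index.

```python
# Minimal RAG sketch: retrieve relevant documents, then build a prompt that
# combines them with the user's question. Word-overlap scoring is a stand-in
# for real embedding-based retrieval.

def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    """Rank documents by how many words they share with the query."""
    q_words = set(query.lower().split())
    ranked = sorted(documents,
                    key=lambda d: len(q_words & set(d.lower().split())),
                    reverse=True)
    return ranked[:k]

def build_prompt(query: str, documents: list[str]) -> str:
    """Prepend retrieved context to the question before calling an LLM."""
    context = retrieve(query, documents)
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

# Hypothetical knowledge base standing in for a company's document store.
docs = [
    "Our refund policy allows returns within 30 days.",
    "Shipping takes 3 to 5 business days.",
    "Gift cards cannot be returned.",
]
prompt = build_prompt("What is the refund policy?", docs)
```

Because the model answers from retrieved text rather than parametric memory alone, updating the document store refreshes its knowledge without retraining; the trade-off, as noted above, is the cost of building and maintaining that data infrastructure.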
Energy Demands of Generative AI
Generative AI has earned a reputation for its enormous energy consumption. A study from the Allen Institute for AI and Hugging Face found that generative systems use up to 33 times more power than traditional software.
Microsoft’s emissions increased by 29% in 2023 due to AI data center expansions. Google has expressed concerns about achieving net-zero emissions by 2030 due to AI’s rising demands.
Currently, training consumes about 80% of AI-related energy, while inference takes up 20%. However, this split is expected to reverse as deployed models serve ever more inference requests. Energy use per request also varies widely: image generation, for example, consumes far more energy than basic text classification.
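The reversal of that 80/20 split is a matter of volume: training energy is roughly fixed, while inference energy scales with every request served. The short calculation below makes that concrete; both energy figures are assumptions chosen for the sketch, not measurements of any real model.

```python
# Illustrative arithmetic for the training-vs-inference energy split.
# Both constants are assumed values, not measured figures.

TRAIN_ENERGY_MWH = 1_000.0   # assumed one-off training energy (MWh)
ENERGY_PER_QUERY_WH = 3.0    # assumed energy per inference request (Wh)

def inference_share(total_queries: int) -> float:
    """Fraction of lifetime energy consumed by inference."""
    inference_mwh = total_queries * ENERGY_PER_QUERY_WH / 1_000_000
    return inference_mwh / (inference_mwh + TRAIN_ENERGY_MWH)

# At low volume training dominates; at high volume the split reverses.
for queries in (10_000_000, 1_000_000_000):
    share = inference_share(queries)
    print(f"{queries:>13,} queries -> inference share {share:.0%}")
```

Under these assumptions, ten million lifetime queries leave inference at only a few percent of total energy, while a billion queries push it to three quarters, which is the reversal the paragraph above anticipates.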
One solution involves using green data centers that rely on renewable energy and emit less carbon. Sarah Burnett, a tech evangelist, notes that if an LLM helps workers reduce device usage by 30 minutes daily, it could offset its energy footprint.
Still, the environmental cost of AI extends beyond electricity. Water consumption from cooling data centers is emerging as another concern. As AI evolves, sustainability must remain central to its development.