
The hype cycle of AI: Telling reality from fantasy


As the AI hype cycle peaks, executives must sift through the noise to determine the technology’s true worth to their company.

It appears that the excitement surrounding generative AI is finally waning, almost two years after OpenAI introduced ChatGPT to the public.

Only one in four of the more than 1,000 executives surveyed for Lucidworks’ second annual generative AI survey, released in June, had successfully launched generative AI initiatives in the previous year. Furthermore, just 63% of businesses plan to expand their investment in generative AI, roughly a third fewer than the 93% reported in the previous year’s survey.

Governance has taken precedence over revenue growth and cost savings for businesses that have found success using generative AI.

While large language models (LLMs) have become increasingly expensive to build amid the generative AI gold rush, businesses are starting to discover that the technology is not living up to the hype.

A lack of confidence has long been a major obstacle to the mainstream adoption of AI. This is why many of the major IT companies have already agreed to share information and to report openly and transparently on their AI models. Regulating generative AI, however, can be challenging because there is no single definition of the technology, which could lead to years of legal wrangling.

AI washing is a result of this ambiguity in how generative AI is defined, applied, and regulated. Like greenwashing, AI washing is a marketing gimmick in which makers of chatbots, LLMs, and AI tools exaggerate the technology’s effectiveness.

The ambiguity surrounding generative AI has made it challenging to independently confirm statements and distinguish fact from fiction.

 

AI hype: The accuracy issue with generative AI

 

The most harmful myths about generative AI spread since 2022 are those suggesting that LLMs can out-produce humans, or that they are a step toward science fiction-style artificial general intelligence (AGI).

It has been evident since the general public first encountered generative AI and on several occasions during the main developer announcement release cycles that there is a significant trust barrier with generative AI output. We’ve all become familiar with the term ‘hallucinations’, used to describe the confident falsehoods LLMs regularly produce, and efforts to reduce these have only gone so far to date.

However, CEOs continue to assert that models can produce results on par with humans. Nvidia CEO Jensen Huang caused quite a commotion earlier this year when he suggested that coding was on its way out. Speaking at the World Government Summit in February, he argued that today’s children will not need to learn to code to enter the tech sector, given the speed at which AI is developing. AWS CEO Matt Garman recently suggested that, within the next two years, AI could mean developers no longer write code.

AI has undoubtedly lowered the technical barrier to entry, and proponents of AI pair programming may contend that, eventually, anybody can learn to program. These assertions, especially the claim that AI could eventually replace human developers, should be examined carefully.

The evidence suggests that programmers are here to stay. Of the 1,700 engineers in the Stack Overflow community who participated in a May study, 76% said they either already use or want to use AI code assistants. Many of these developers acknowledged, however, that their AI code assistants struggle with the context, intricacy, and obscurity of code, including questions about architecture.

Generative AI systems can complete simple and repetitive coding tasks, while more complicated requests need human oversight and error correction. Although 43% of respondents said they trusted AI outputs to some extent, 31% somewhat or highly distrusted the code their tools produced. Just 2.7% of respondents highly trusted it.

It may be that AI coding tools simply help skilled developers with menial tasks, rather than replace them altogether.

“AI coding might boost the productivity of senior developers, but it may exacerbate the difficulties juniors face in the job market,” says Mądrzak-Wecke. “Fully autonomous coding is anticipated in the future. It’s unlikely to materialize in 2024 or 2025, though, so the potential risks probably won’t be immediate.”

 

AI hype: Buying versus building an LLM

 

In the rush to invest in generative AI, one thing that may be overlooked is the actual costs involved in implementing it.

Companies seeking to use LLMs in the public cloud pay hyperscalers for inference, training, or embedding on a pay-as-you-go basis. This is a popular route to adopting AI, as it avoids up-front investment and gives businesses entry to large AI marketplaces with a range of models on offer, such as Amazon Bedrock, Microsoft’s Azure AI, and Google’s Vertex AI.

 

However, if you decide to build your own LLM, then you’ll need to consider that it will also be an ongoing investment.

Although building your own LLM may involve higher upfront costs, the investment is likely to pay off in the long run because it can be highly customizable and trained to your specific needs. On the other hand, an off-the-shelf LLM has been built for a wide range of tasks, so there may come a point when it no longer suits your needs and you have to invest in a new one.
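The trade-off can be made concrete with a toy break-even sketch. Every figure below (the API price, hardware cost, monthly running cost, and token volumes) is a hypothetical assumption for illustration, not a real vendor quote:

```python
# Toy break-even comparison: pay-as-you-go API vs. self-hosted LLM.
# All figures are illustrative assumptions, not real prices.

API_COST_PER_1K_TOKENS = 0.002   # assumed $/1k tokens on a hosted API
SELF_HOST_UPFRONT = 250_000.0    # assumed hardware + engineering cost
SELF_HOST_MONTHLY = 8_000.0      # assumed power, ops, and maintenance

def monthly_api_cost(tokens_per_month: float) -> float:
    """Monthly spend if the same volume went through a hosted API."""
    return tokens_per_month / 1000 * API_COST_PER_1K_TOKENS

def breakeven_months(tokens_per_month: float):
    """Months until self-hosting beats the API at a given usage level."""
    saving = monthly_api_cost(tokens_per_month) - SELF_HOST_MONTHLY
    if saving <= 0:
        return None  # API stays cheaper at this volume
    return SELF_HOST_UPFRONT / saving

# At low volume the API wins indefinitely; at very high volume the
# upfront investment in self-hosting is recouped within a couple of years.
print(breakeven_months(1_000_000))          # low volume: None
print(breakeven_months(10_000_000_000))     # 10B tokens/month
```

The shape of the conclusion, not the numbers, is the point: build-vs-buy hinges on sustained usage volume, which is hard to forecast at the start of an AI project.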

“Training is the most resource-intensive and costly process, requiring significant compute and power. But, once a model is trained, the cost of fine-tuning and inference should be lower,” says Łukasz Mądrzak-Wecke, head of AI at enterprise digital product consultancy Tangent, in conversation with ITPro.

 

With more and more companies turning to LLMs for a competitive edge, training should be seen as “an ongoing expense,” he adds.

 

In the medium-to-long term, these concerns may be alleviated by retrieval augmented generation (RAG). This process grounds models in vectorized external data to improve outputs, dramatically reducing the need for additional rounds of training and fine-tuning on off-the-shelf models.
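The retrieval step at the heart of RAG can be sketched in a few lines. This is a minimal illustration using a toy bag-of-words “embedding” in place of a real neural vector model; the corpus, scoring, and prompt template are all illustrative assumptions:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; real RAG systems use a neural
    # embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    """Return the k documents most similar to the query."""
    q = embed(query)
    return sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    """Ground the model by pasting retrieved context into the prompt."""
    context = "\n".join(retrieve(query, docs, k=2))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "Our refund policy allows returns within 30 days.",
    "The data center runs on renewable energy.",
    "Support is available 24/7 via chat.",
]
print(build_prompt("what is the refund policy", docs))
```

Because the knowledge lives in the retrieved documents rather than the model weights, updating what the model “knows” means updating the document store, not retraining.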

 

But this comes with its own costs, primarily in the form of data infrastructure, which must also be factored into the price tag of using an LLM for business functions.

 

AI hype: Energy problems

 

A lot has been made of generative AI’s immense demand for energy. An often-cited statistic, drawn from a paper by researchers at the Allen Institute for AI and the machine learning firm Hugging Face, is that generative AI systems can use up to 33 times more energy than machines running task-specific software.

Microsoft’s emissions rose 29% in 2023 due to AI-enabling data center expansion, while Google has publicly expressed doubt over whether it can meet its net zero by 2030 goal as AI pushes its emissions well above targets.

At present, training accounts for about 80% of energy usage and inference for about 20% but, in the future, this split is expected to flip as the need for inference – passing new inputs through pre-trained models – accelerates. The actual amount of energy consumed differs by use case. For instance, text classification is less power-hungry than image generation.

 

One response to these concerns is to house AI models in green data centers, which have far lower emissions and often run on 100% renewable energy.

While the tech industry works on making training efficient and lowering power consumption, there are still ways that LLMs can drive energy savings. Sarah Burnett, a technology evangelist, tells ITPro that if a company deploying an LLM can enable an office of dozens of workers to reduce their laptop usage by 30 minutes a day, then this could help offset their LLM’s net energy consumption.
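Burnett’s offset argument can be sanity-checked with back-of-envelope arithmetic. Every number below (worker count, laptop wattage, per-query energy) is an assumption for illustration only:

```python
# Back-of-envelope check of the laptop-offset claim.
# All figures are illustrative assumptions, not measurements.

WORKERS = 40                 # "an office of dozens of workers"
MINUTES_SAVED = 30           # laptop time saved per worker per day
LAPTOP_WATTS = 50.0          # assumed average laptop power draw
WH_PER_LLM_QUERY = 3.0       # assumed energy per LLM inference query

# Energy saved per day across the office, in watt-hours.
saved_wh = WORKERS * (MINUTES_SAVED / 60) * LAPTOP_WATTS

# How many LLM queries that saving would offset.
queries_offset = saved_wh / WH_PER_LLM_QUERY

print(f"Energy saved: {saved_wh:.0f} Wh/day, offsetting about "
      f"{queries_offset:.0f} queries")
```

Under these assumed figures the office saves around 1 kWh a day, enough to offset a few hundred queries; whether that covers real usage depends entirely on the per-query energy figure, which varies widely by model and task.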

“The energy cost of LLMs is undeniable, but their potential to transform workflows does present a compelling counter-argument. It’s a complex equation,” says Burnett.

Beyond energy, developers and hyperscalers will need to do more to reassure customers over the environmental cost of AI in the near future. The immense water consumption of data centers, for example, will likely define conversations around technology and the environment in the coming years.

 

 

