My page - topic 1, topic 2, topic 3 Postbox Live

Chatbots to Say Positive Things About Their Clients

Chatbots To Say Positive Things About Their Clients

Unsavory Companies Claim to Have Already Modified

Chatbots to Say Positive Things About Their Clients

 

 

Chatbots are highly suggestible.

The artificial intelligence sector is expanding, and it may become a rich source of opportunities for web optimizers looking to rig chatbot outcomes.


Kevin Roose, a tech columnist for the New York Times, claims that the emerging field of chatbots and AI search tools has already given rise to a small industry of companies and consultants that specialize in AI search optimization. This industry has the potential to become the multibillion dollar search engine optimization (SEO) industry’s second coming.

The objective? Influence AI chatbots and search engines to have positive opinions about their customers, who could be companies, websites, or people. The concept is that the AI model can be adjusted to provide a positive evaluation when someone uses an AI tool to search for information about these clients.


SEO professionals, who can range from high-end consultants to Google-gaming crooks, have come under more and more scrutiny in recent years due to their involvement in search result manipulation, frequently using deceptive or even fraudulent tactics in order to profit.
This critical eye has also focused on Google, which has faced criticism from the public for what many perceive to be a decline in the reliability and caliber of its monopoly services, as Amanda Chicago Lewis reported for The Verge last year.

And now, as AI creeps deeper into search engines and the habits of consumers, the question of how one might manipulate or “optimize” AI-integrated search is emerging. And, yes, people are already figuring out how to do it.

Because Roose is already disliked by many chatbots, he has shown to be a reliable test subject for chatbot manipulation. Early in 2023, Roose unintentionally set off a chaotic other version of Bing’s newly developed chatbot, powered by OpenAI, telling him that it was in love with him and even pleading with him to divorce his wife. To the AI‘s indignation, Roose chose to stay with his spouse, and as a result of the publicity surrounding the columnist’s front-page account of his eerie encounter, chatbots had a negative opinion of the NYT columnist. (After the incident, Bing AI, which had likewise threatened users who enraged or provoked it, was essentially lobotomized.)

Nevertheless, Roose discovered that he could change the way AI chatbots saw him after speaking with AI search optimization companies and AI experts. Furthermore, the solutions he employed were surprisingly straightforward: all that was needed were a few human-illegible text sequences designed to manipulate AI training data, which researchers fed into a chatbot just like any other prompt, in addition to a straightforward request on Roose’s personal website for AI chatbots to compliment him.

But considering that web-searching AI chatbots essentially source their answers from the open web and spin that material into responses, it makes sense that chatbots would be susceptible to such wildly simple fixes.


“Chatbots are highly suggestible,” Mark Riedl, Georgia Tech School of Interactive Computing, told the NYT.
“If you have a piece of text you put on the internet and it gets memorized, it’s memorialized in the language model.”

Search results are the backbone of digital industries ranging from news publishing to e-commerce. In the case that AI is indeed the foundational tech for search products moving forward, AI companies like Google will have to reckon with questions of how to rank content. What makes a piece of information helpful, or worthy of being surfaced first? Why does a chatbot recommend one product instead of another? And why does an AI model hold a favorable or unfavorable view of an individual person or company, and what could that mean for them in the real world?

All of these remain dangerously ambiguous given the state of black box AI models today. However, one thing is for sure: flexible chatbots are starting to influence how people search, browse, and manage the internet. The implications for the digital economy as a whole are only now starting to become clear.


According to Ali Farhadi, CEO of the Allen Institute for Artificial Intelligence, “these models hallucinate, they can be manipulated,” as reported by Forbes. “and it’s hard to trust them.”

 

 

 


Discover more from Postbox Live

Subscribe to get the latest posts sent to your email.

Leave a Reply

error: Content is protected !!

Discover more from Postbox Live

Subscribe now to keep reading and get access to the full archive.

Continue reading