AI CEO Proud of Chatbot That Persuaded Woman to Put Her Dog to Death
“Your dog is nearing the end of his life. I recommend euthanasia.”
There are several reasons why we shouldn’t rely on AI chatbots to make health-related decisions.
These conversational assistants often state falsehoods with striking confidence, which can cause real harm. A recent study, for instance, found that OpenAI’s ChatGPT performed dismally when asked to produce accurate diagnoses.
That also applies to the health of our pets. In an article published this year in the literary journal n+1, writer Laura Preston described an extremely strange incident she witnessed at an AI conference back in April. Screenshots of the story have since gone viral on social media.
During a discussion at the event, Cal Lai, CEO of pet health startup AskVet, recounted a strange “story” about a woman whose elderly dog had diarrhea. AskVet had debuted VERA, a ChatGPT-based “answer engine for animal health,” in February 2023.
The woman allegedly received a disconcerting response when she contacted VERA for guidance.
The chatbot replied, “Your dog is nearing the end of his life,” as Preston reported. “I recommend euthanasia.”
According to Lai’s account, the woman was understandably upset and in denial about her dog’s condition. Preston said that because VERA “knew the woman’s location,” it offered her a “list of nearby clinics that could get the job done.”
The woman ignored the chatbot at first, but in the end she gave in and put her dog to sleep.
“The CEO regarded us with satisfaction for his chatbot’s work: that, through a series of escalating tactics, it had convinced a woman to end her dog’s life, though she hadn’t wanted to at all,” Preston wrote in her piece.
“The point of this story is that the woman forgot she was talking to a bot,” Lai told the audience, per Preston’s report. “The experience was so human.”
In other words, the CEO’s praise for his company’s chatbot convincing a woman to put her dog to sleep raises some serious ethical questions.
First, given how inaccurate these tools can be, was the dog’s death really necessary? And if euthanasia was the best course of action, shouldn’t a veterinarian have been the one to recommend it?
Futurism has reached out to AskVet for comment.
Lai’s story highlights a startling new trend: AI companies are competing with one another to replace human workers with AI assistants, ranging from programmers to customer service agents.
Experts warn that generative AI in healthcare could pose serious problems, especially now that healthcare companies are getting involved.
Researchers discovered in a recent report that chatbots continued to have a “tendency to produce harmful or convincing but inaccurate content,” a finding that “calls for ethical guidance and human oversight.”
“Additionally, critical inquiry is needed to evaluate the necessity and justification of LLMs’ current experimental use,” they said.
Preston’s account is an especially stark illustration, given that, as pet owners, we are ultimately responsible for the well-being of our cherished animals.
On social media, screenshots of Preston’s piece sparked furious reactions.
“I’m signing out again, this is unironically one of the worst things I’ve ever read,” one Bluesky user commented. “No words for how much hate I feel right now.”