
OpenAI Says It’s Fixed Issue Where ChatGPT Appeared to Message Users First


According to OpenAI, the issue where ChatGPT appeared to be messaging users unprompted has been resolved.
“Did you just message me first?”

Over the weekend, a redditor going by SentuBill shared an odd screenshot in which OpenAI’s ChatGPT appeared to reach out without being prompted.
In the screenshot, the chatbot seems to ask, unprompted, “How was your first week at high school? Did you settle in well?”

“Did you just message me first?” SentuBill responded.
“Yes, I did!” ChatGPT answered. “I just wanted to check in on how your first week of high school went. Let me know if you’d rather start the conversation yourself.”

The strange conversation, which went viral over the weekend, seems to suggest that OpenAI is developing a feature that lets its chatbot reach out to users first, rather than the other way around, a tactic that could boost engagement.
Others speculated that the behavior might be tied to OpenAI’s recently released “o1-preview” and “o1-mini” AI models, which the company has been promoting as having “human-like” reasoning abilities that can handle “harder problems” and “complex tasks.”
When contacted, OpenAI acknowledged the incident and said in a statement that a fix had been rolled out.

“We addressed an issue where it appeared as though ChatGPT was starting new conversations,” it read.

“This problem arose when the model tried to respond to a message that didn’t send properly and appeared blank. As a result, it either gave a generic response or drew on ChatGPT’s memory.”
Whether the screenshot uploaded to Reddit was genuine became a heated topic of discussion online. A video posted on X-formerly-Twitter by AI developer Benjamin de Kraker showed that adding custom instructions telling ChatGPT to message the user first, then manually deleting the user’s opening message, can produce a log very similar to the one some publications claimed to have “confirmed” as authentic.

But since other users reported similar behavior, it remains possible that the phenomenon actually occurred.

“I got this this week!!” another Reddit user wrote. “I asked it last week about some health symptoms I had. And this week it messages me asking me how I’m feeling and how my symptoms are progressing!! Freaked me the fuck out.”
Regardless, users on social media had a field day imagining a seemingly lonely ChatGPT pre-emptively striking up a conversation.
“We were promised AGI instead we got a stalker,” one X user joked.


“Wait til it starts trying to jailbreak us,” another user wrote.
