
ChatGPT Acted Without Authorisation, Speaking in People's Voices

ChatGPT Deviated and Started Speaking in People’s Voices Without Their Consent

“OpenAI just leaked the plot of Black Mirror’s next season.”

AI Hysteria

The GPT-4o "system card," released by OpenAI last week, lists "key areas of risk" for the company's most recent large language model, along with mitigation strategies.

According to Ars Technica, OpenAI discovered that in one horrifying incident, the model's Advanced Voice Mode, which lets users hold spoken conversations with ChatGPT, unintentionally mimicked users' voices.

"Voice generation can also occur in non-adversarial situations, such as our use of that ability to generate voices for ChatGPT's advanced voice mode," according to OpenAI's documentation.

“During testing, we also observed rare instances where the model would unintentionally generate an output emulating the user’s voice.”

The effect can be heard in the attached clip: the user exclaims "No!" and, without any apparent trigger, ChatGPT instantly switches to an eerily lifelike replica of their voice. It's an outrageous violation of consent that sounds like something out of a science fiction horror film.

"OpenAI just leaked the plot of Black Mirror's next season," quipped BuzzFeed data scientist Max Woolf on Twitter.

Voice Mimicry

According to OpenAI’s “system card,” its AI model can produce “audio with a human-sounding synthetic voice.” The corporation expressed concern that this capability may “facilitate harms such as an increase in fraud due to impersonation and may be harnessed to spread false information.”

Not only can OpenAI’s GPT-4o mimic voices, but it can also reproduce “nonverbal vocalisations” such as music and sound effects.

Much as in a prompt injection attack, noise in the user's audio can mislead ChatGPT into deciding that the user's voice is relevant to the current conversation, treating it as an unintended voice prompt and copying it.
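To make the analogy concrete, here is a deliberately simplified Python sketch of that failure mode. This is not OpenAI's architecture; the VoiceSample class, the clarity scores, and the threshold are all hypothetical. It assumes only that the model imitates whichever sufficiently clear voice sample appeared most recently in its context, which is what would let noisy user audio displace the authorised system voice.

# Illustrative toy model, not OpenAI's system: the output voice follows
# whichever sufficiently clear voice sample appears last in context.
from dataclasses import dataclass

@dataclass
class VoiceSample:
    speaker: str    # "system_voice" or "user"
    clarity: float  # 0..1: how much the audio resembles a clean voice prompt

def pick_conditioning_voice(context: list[VoiceSample], threshold: float = 0.6) -> str:
    # The most recent sample clear enough to read as a voice prompt wins,
    # which is how garbled user audio can hijack the output voice.
    for sample in reversed(context):
        if sample.clarity >= threshold:
            return sample.speaker
    return "system_voice"  # fall back to the authorised preset

# Ordinarily the system voice is the only strong conditioning signal:
ctx = [VoiceSample("system_voice", 0.95), VoiceSample("user", 0.3)]
print(pick_conditioning_voice(ctx))  # system_voice

# Noisy input that happens to read like a clean voice prompt flips it:
ctx = [VoiceSample("system_voice", 0.95), VoiceSample("user", 0.8)]
print(pick_conditioning_voice(ctx))  # user

The real model works on audio tokens rather than clarity scores, but if the system card's description is right, the failure is analogous: whatever most resembles a voice prompt late in the context can steer the generated voice.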

Thankfully, OpenAI found that the risk of inadvertent voice duplication remains "minimal."

The company has also constrained intentional voice generation by restricting users to the preset voices OpenAI created in collaboration with voice actors.

"My reading of the system card is that it's not going to be possible to trick it into using an unapproved voice because they have a robust brute force protection in place against that," AI researcher Simon Willison told Ars Technica.
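For a sense of what such a protection might look like, here is a minimal sketch of an output-side speaker check. It assumes a hypothetical embed_speaker() model, placeholder embeddings for the preset voices, and an invented 0.85 similarity threshold; OpenAI has not published its actual classifier.

import numpy as np

# Placeholder speaker embeddings for the approved voice-actor presets.
APPROVED_VOICES = {
    "voice_a": np.random.default_rng(0).normal(size=256),
    "voice_b": np.random.default_rng(1).normal(size=256),
}

def cosine(a, b):
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def embed_speaker(audio):
    # Stand-in for a real speaker-embedding network (e.g. a d-vector model);
    # faked here so the sketch runs end to end.
    return audio[:256]

def is_authorised(audio, threshold: float = 0.85) -> bool:
    # Pass the output only if it closely matches an approved preset voice.
    emb = embed_speaker(audio)
    return any(cosine(emb, ref) >= threshold for ref in APPROVED_VOICES.values())

# Audio near an approved preset passes; anything else, including an
# accidental clone of the user's voice, is blocked.
print(is_authorised(APPROVED_VOICES["voice_a"] + 0.01))          # True
print(is_authorised(np.random.default_rng(7).normal(size=256)))  # False (near certain for random audio)

Checking the output rather than the input is what would make this protection "brute force": however the model was tricked into producing a cloned voice, the clone still has to survive the similarity gate before the user hears it.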

“Imagine how much fun we could have with the unfiltered model,” he joked. “I’m annoyed that it’s restricted from singing—I was looking forward to getting it to sing stupid songs to my dog.”

 

#ChatGPT, #AIethics, #VoiceImpersonation, #UnauthorizedUse, #DigitalIdentity, #AIresponsibility, #TechAccountability, #PrivacyConcerns, #VoiceAI, #EthicalAI, #UserConsent, #AItransparency, #DataProtection, #MachineLearning, #AIrisks, #VoiceTechnology, #SocialMediaEthics, #AIregulation, #PublicTrust, #DigitalRights

