

Scientists warn that as AI becomes more sophisticated, CAPTCHAs will become less useful and less human.


Because AI technology is developing so quickly, researchers contend in a recent article that there should be a more reliable method for determining whether an internet user is human and not an AI bot.

In a work that has not yet undergone peer review, the researchers, who hail from Ivy League schools and businesses like Microsoft and OpenAI, propose a “personhood credential” (PHC) system for human verification as a replacement for current procedures like CAPTCHAs.

However, to anyone worried about mass surveillance and privacy, that is an incredibly flawed solution, one that shifts accountability onto end users, a regular practice in Silicon Valley.

“A lot of these schemes are based on the idea that society and individuals will have to change their behaviors based on the problems introduced by companies stuffing chatbots and large language models into everything, rather than the companies doing more to release products that are safe,” said Chris Gilliard, a researcher who studies surveillance, in comments to The Washington Post.

The PHC system was suggested by the study’s researchers out of concern that “malicious actors” would exploit AI’s mass scalability and its ability to accurately mimic human behavior to flood the internet with non-human content.

Concerns stem from three main areas: digital avatars that resemble real people in appearance, movement, and voice; AI bots that are becoming increasingly adept at mimicking “human-like actions across the Internet,” such as “solving CAPTCHAs when challenged”; and the ability of AI to create “human-like content that expresses human-like experiences or points of view.”

This, according to the researchers, is why PHCs are such a compelling idea. For instance, a government offering digital services could grant a single, distinct personhood credential to every end user who is a human. Once verified as human, the user can rely on zero-knowledge proofs, a method taken from cryptography, to prove they hold a valid credential, or to reveal a particular attribute, without disclosing the underlying data.
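The zero-knowledge idea can be illustrated with a toy Schnorr-style proof of knowledge. This is a minimal sketch under assumptions of my own, not the construction from the paper: the prime, generator, and variable names are demo choices, and the point is only that a verifier can check the proof without ever seeing the secret.

```python
import hashlib
import secrets

# Toy Schnorr-style zero-knowledge proof of knowledge (illustrative
# only, not the paper's construction). The prover shows they know a
# secret x behind a public value y = g^x mod p without revealing x.
p = 2**127 - 1  # a Mersenne prime; demo-sized, not a production group
g = 3

secret_x = secrets.randbelow(p - 2) + 1  # private credential key
public_y = pow(g, secret_x, p)           # published at issuance time

# --- Prover: commit, derive a challenge by hashing, then respond ---
r = secrets.randbelow(p - 2) + 1
commitment = pow(g, r, p)
challenge = int.from_bytes(
    hashlib.sha256(str(commitment).encode()).digest(), "big") % (p - 1)
response = (r + challenge * secret_x) % (p - 1)

# --- Verifier: checks g^response == commitment * y^challenge mod p,
# which holds only if the prover knew x; x never crossed the wire ---
lhs = pow(g, response, p)
rhs = (commitment * pow(public_y, challenge, p)) % p
print(lhs == rhs)
```

In a credential setting, the verified statement would be "this user holds a valid, government-issued key" rather than knowledge of a raw exponent, but the privacy property is the same: the check passes without the secret, or the user's identity, being transmitted.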

The researchers propose that end users store their credentials digitally on their own devices, which helps preserve online anonymity.

In online human-verification processes, the PHC system could supplement or replace existing methods such as the previously mentioned CAPTCHAs and biometrics like fingerprints.

Although a PHC system appears to be a perfect answer on paper, the researchers acknowledge that it still has drawbacks.

For starters, it would seem certain that a large number of people would sell their PHC to AI spammers, undermining the project’s objectives and lending legitimacy to automated content.

According to the report, any institution that issues these kinds of credentials runs the risk of growing too strong, and the system as a whole may still be open to hacker attacks.

“One significant challenge for a PHC ecosystem is how it may concentrate power in a small number of institutions, especially PHC issuers, but also large service providers whose decisions around PHC use will have large repercussions for the ecosystem,” the article states.

Credentialing systems can also cause problems for older individuals and other less tech-savvy users, who are frequently the victims of online scams.

For this reason, the researchers contend, governments ought to explore PHC adoption through pilot programs.

However, the PHC proposal sidesteps a critical problem: end users, already overloaded with spam and other nonsense in their digital lives, would be saddled with yet another burden by this kind of system. Since tech corporations are the ones that created this issue, they ought to be the ones to find a solution.

One action they can take is to watermark the content generated by their AI models, or to create a mechanism that can identify the telltale indicators of AI-derived data. Although neither approach is infallible, both place the onus of accountability back on the companies that created the AI bot issue.
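One published flavor of such watermarking is a statistical "green list" bias. The sketch below is a hypothetical toy, not any vendor's actual mechanism: a generator that favors a pseudorandomly chosen half of its vocabulary leaves a measurable skew that a detector can test for later, with no record of the text kept anywhere.

```python
import hashlib
import random

# Toy sketch of statistical text watermarking (a "green list" bias;
# illustrative only). VOCAB and the 8-candidate sampling are demo
# choices, not parameters of any real system.
VOCAB = [f"w{i}" for i in range(1000)]  # stand-in vocabulary

def is_green(prev_word: str, word: str) -> bool:
    # Pseudorandomly split the vocabulary, reseeded by the previous word,
    # so the green set is secret-free but reproducible by a detector.
    h = hashlib.sha256(f"{prev_word}|{word}".encode()).digest()
    return h[0] % 2 == 0  # roughly half the vocab is "green" each step

def generate(n: int, watermark: bool, rng: random.Random) -> list[str]:
    words = ["<s>"]
    for _ in range(n):
        if watermark:
            # Prefer green words: sample candidates, keep a green one if any.
            candidates = [rng.choice(VOCAB) for _ in range(8)]
            green = [w for w in candidates if is_green(words[-1], w)]
            words.append(green[0] if green else candidates[0])
        else:
            words.append(rng.choice(VOCAB))
    return words[1:]

def green_fraction(words: list[str]) -> float:
    # Detector: recompute the green test for each adjacent pair.
    hits = sum(is_green(prev, w) for prev, w in zip(["<s>"] + words, words))
    return hits / len(words)

rng = random.Random(0)
marked = generate(500, watermark=True, rng=rng)
plain = generate(500, watermark=False, rng=rng)
# Watermarked text overshoots the ~50% baseline by a wide margin.
print(green_fraction(marked), green_fraction(plain))
```

The detector needs no access to the generator, only the hashing rule, which is why schemes in this family shift the verification cost to the AI vendor rather than to every human reader.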

Furthermore, if internet corporations completely free themselves of this obligation, it will reflect poorly on Silicon Valley, which has a history of causing issues that no one asked for and then profiting from their effects.

Consider how tech corporations have commandeered water and electricity to run AI data centers, harming people, particularly those in drought-stricken areas, in the process.

And the PHC, glossy and pretty on paper, just throws more blame around.
