Postbox Live


Prominent AI Researchers Alert:

AI May Escape Human Control at Any Time


“Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes.”

Some of the world's top specialists in artificial intelligence convened at an international conference to draft a definitive statement on the perils of the technology.

The International Dialogues on AI Safety (IDAIS) is a cross-cultural consortium of scientists tasked with reducing AI hazards. “Rapid advances in artificial intelligence systems’ capabilities are pushing humanity closer to a world where AI meets and surpasses human intelligence,” reads the opening statement.

With luminaries like Turing Award-winning computer scientist Geoffrey Hinton rubbing shoulders with the likes of Zhang Ya-Qin, the former president of the Chinese tech conglomerate Baidu, the letter's assorted signatories represent top AI thinkers globally.
“Experts agree these AI systems are likely to be developed in the coming decades, with many of them believing they will arrive imminently,” the IDAIS statement continues. “Loss of human control or malicious use of these AI systems could lead to catastrophic outcomes for all of humanity.”

Written during the consortium's third gathering, held in Venice, this "consensus statement" aims to define AI risk and to coordinate its governance for the "global public good." Experts at these gatherings have repeatedly voiced concerns that AI is advancing too quickly.

The IDAIS signatories contend that AI and its risks must be considered on a global scale, as the technology is borderless. They argued that although the international community has taken "promising initial steps" toward AI safety collaboration at intergovernmental summits, these efforts must continue in order to create "a global contingency plan" in case these hazards worsen.

Such backup plans might include mutually assured agreements on "red lines" and what to do when they're crossed, as well as the establishment of international organizations to promote emergency preparedness (though it's unclear whether this would happen inside or outside of an established institution like the United Nations).


The statement, which has the signatures of former Irish president Mary Robinson, Turing Award winner Andrew Yao, and a number of academics and administrators from Beijing and Quebec, is clear about what needs to be done to reduce risks but is ambiguous about what those risks are and how they might arise.

All the same, IDAIS' recommendations are probably sound ones, and fostering international dialogue on such an important topic is paramount in the face of a coming militarized AI race between the United States and China.

