
Q & A: Risks Associated With AI-Powered Autonomous Weapons

 

The military has used self-governing weaponry such as heat-guided missiles, torpedoes, and mines for many years. These weapons are controlled by simple reactive feedback and do not require human intervention. Recently, however, artificial intelligence (AI) has begun to reshape the field of weapon design.

Kanaka Rajan, an associate professor of neuroscience at Harvard Medical School’s Blavatnik Institute, and her colleagues contend that AI-powered autonomous weapons usher in a new age in conflict and directly jeopardize fundamental research and scientific advancement.


AI-powered weaponry is now being developed and deployed, according to Rajan.
These weapons frequently take the form of drones or robots. Because this kind of technology spreads quickly, she anticipates that such weapons will only become more powerful, more sophisticated, and more widely used over time.


She is concerned about how the development of AI-powered weapons may impact nonmilitary AI research in academia and industry, as well as how it can cause geopolitical instability.


Rajan, together with HMS research fellows in neurobiology Riley Simmons-Edler and Ryan Badman and MIT Ph.D. student Shayne Longpre, highlights the team's main concerns, along with a path forward, in a position paper published and presented at the 2024 International Conference on Machine Learning.


In an interview with Harvard Medicine News, Rajan discussed why she and her team decided to examine AI-powered military technology, the main risks they see, and what they believe should happen next.


As a computational neuroscientist, you investigate AI in relation to the brains of humans and other animals.
How did you come to consider autonomous weapons driven by AI?


We began thinking about this subject in response to several doomsday scenarios about artificial general intelligence that were circulating in the spring of 2023. We asked ourselves: if those projections are indeed blown out of proportion, what are the actual risks to human civilization?
Looking at how the military uses AI, we found that major research and development efforts are under way to build AI-driven autonomous weapons systems with global ramifications.


We realized that the widespread development of these weapons will have ramifications for the academic AI research community as well.
Armed forces frequently lack the in-house expertise to develop and deploy AI technology on their own, so they must rely on the guidance of academic and industry AI specialists. As with any large corporation sponsoring university research, this raises significant ethical and practical problems for academic administrators and researchers.

What do you think the biggest risks are from integrating AI and machine learning into weaponry?

The development of AI-powered weapons carries a number of risks, but the three most significant ones are as follows: first, these weapons could make it simpler for nations to enter conflicts; second, nonmilitary scientific AI research could be suppressed or appropriated to benefit the development of these weapons; and third, militaries could use AI-powered autonomous technology to minimize or divert human responsibility from decision-making.

First, the loss of their own soldiers' lives deters nations from initiating conflicts and can have internal political repercussions for their leaders. Much of the current research on AI-powered weapons aims to remove human soldiers from harm's way, which is a humane goal in and of itself. However, if an offensive war causes few casualties on the attacking side, the link between acts of war and their human cost is weakened, making it politically easier to declare war, which could ultimately lead to greater death and devastation. As a result, as AI-powered arms races intensify and this technology spreads further, major geopolitical problems could arise quickly.

Regarding the second issue, we can look to the history of academic fields such as rocketry and nuclear physics. During the Cold War, these fields became increasingly important for defense, which meant researchers faced travel restrictions, publication censorship, and the need for security clearance even for routine work. Similar limitations on nonmilitary AI research could be imposed as AI-powered autonomous technology becomes a major component of national defense planning around the world. This would seriously hinder basic AI research, valuable civilian applications in scientific research and health care, and international collaboration.

Given the rate at which AI research is expanding and the popularity of research and development on AI-powered weapons, we view this as an urgent concern.

Finally, there may be significant attempts to co-opt AI researchers’ efforts in academia and industry to work on these weapons or to establish more “dual-use” projects if AI-powered weapons become essential to national defense.

If our field's expertise becomes increasingly restricted to those with security clearances, the result will be intellectual stagnation. Some computer scientists are already calling for such stringent restrictions, but their reasoning overlooks the reality that new weapons technologies, once developed, tend to spread quickly.

Why, in your opinion, has the threat posed by artificial intelligence (AI) been largely ignored when it comes to weapons design?

One explanation is that this is a new and rapidly changing area: major powers have only begun to openly and rapidly adopt AI-powered weaponry since 2023. Furthermore, because AI-powered weaponry spans a broad set of systems and capabilities rather than a single technology, individual systems can seem less menacing and their problems easier to overlook.

Another difficulty is the lack of transparency from tech companies about the degree of autonomy and human oversight in their weapons systems. For some, human oversight might mean little more than pressing the “go kill” button after an AI weapons unit has made a long chain of decisions in a way the human cannot fully comprehend or recognize as flawed. For others, it might mean a human exercising more direct control and checking the machine’s judgment.

Unfortunately, the black-box outcome is more likely to become the norm as these technologies become more powerful and sophisticated and as reaction times in warfare shrink. Additionally, having a “human-in-the-loop” in AI-powered autonomous weaponry could mislead researchers into believing the system meets military ethical standards when, in reality, humans are not meaningfully involved in decision-making.

Which research questions need to be addressed the most immediately?

Most of the fundamental algorithms behind AI-powered weapons have either already been proposed or are the subject of major academic and industry research projects driven by nonmilitary goals, such as self-driving cars, even if more work remains to be done. In light of this, we as scientists and researchers have a responsibility to guide the ethical application of these technologies and to manage the impact of military interest on our work.

Military forces worldwide will need the assistance of academic and industry specialists if they hope to replace a significant share of support and combat duties with AI-powered troops. This raises questions about what role universities should play in the military’s AI revolution, what lines should not be crossed, and what kinds of watchdog groups and centralized oversight should be established to monitor the use of AI in weaponry.

To protect nonmilitary research, we may need to consider how to set up usage agreements, whether AI discoveries can be classified as closed source versus open source, and how the growing militarization of computer science will affect international cooperation.

How can we move forward in a way that guards against AI being weaponized while still allowing for innovative AI research?

Scholars have had, and will continue to have, significant and fruitful partnerships with government, with large information, medical, and technology companies, and with the armed forces. But academics have also historically engaged in damaging and embarrassing partnerships with the tobacco, fossil fuel, and sugar industries. To keep researchers from producing scientifically questionable work and to help them understand the ethical hazards and biases of commercial support, modern institutions have established training, oversight, and transparency requirements.

We are aware of no such oversight or training programs for military funding at this time. We believe a good place to start is for universities to establish discussion seminars, internal regulations, and oversight processes for projects funded by military, defense, and national security agencies, comparable to those already in place for privately funded projects. The issues we raise are complex and cannot be resolved by a single policy.

What is a reasonable conclusion, in your opinion?

A complete prohibition on military AI has been demanded by some community members. Although we acknowledge that this would be morally ideal, we also understand that it is not feasible given the usefulness of AI for military applications, which makes it difficult to get the support of other nations to enact or uphold such a prohibition.

Rather than attempting to replace human soldiers with AI-powered weapons, we believe that nations should concentrate their efforts on creating weapons that complement them. We may hopefully avert the worst threats by giving human oversight of these weapons top priority.

We also want to stress that AI weapons are not a single category and should be evaluated by their capabilities. We must ban and regulate the most egregious classes of AI weaponry as quickly as possible, and our institutions and societies must set clear lines that should never be crossed.

