Human supervision of AI systems
Human supervision of AI systems might not be the safeguard we think it is, particularly in warfare.
As artificial intelligence (AI) grows more powerful, and is even used in warfare, there is an urgent need for governments, technology companies and international bodies to ensure that it is safe. And a recurring theme in most agreements on AI safety is the need for human oversight of the technology.
In theory, humans can act as safeguards against misuse and potential hallucinations, where AI generates incorrect information. This could involve, for example, a human reviewing the content (the outputs) that the technology generates.
However, as a growing body of research and several real-world examples of the military use of AI show, there are inherent challenges to the idea of humans acting as an effective check on computer systems.
Many of the regulations drafted for AI so far already contain language encouraging human oversight and involvement. The EU's AI Act, for instance, requires that high-risk AI systems (for example, those already in use that automatically identify people using biometric technology, such as a retina scanner) be separately verified and confirmed by at least two humans with the necessary competence, training and authority.
In February 2024, the UK government acknowledged the value of human oversight in a military context in its response to a parliamentary report on AI in weapon systems. The report emphasizes "meaningful human control" through the provision of appropriate training for the people involved. It also stresses human accountability, saying that decision-making on actions by, for example, armed aerial drones cannot be shifted to machines.
So far, this expectation has largely held. At present, military drones are controlled by human operators and their chain of command, who are responsible for the actions taken by an armed aircraft. However, AI has the potential to make drones and the computer systems they rely on more capable and more autonomous.
This includes their target acquisition systems, in which AI-driven software would select and lock on to enemy combatants, allowing a human to authorize a weapons strike against them.
While such technology is not yet thought to be in widespread use, the war in Gaza appears to have shown how it is already being deployed. The Israeli-Palestinian publication +972 Magazine reported on a system called Lavender being used by Israel. This is said to be an AI-based target recommendation system, coupled with other automated systems, that tracks the geographical location of identified targets.
Obtaining the desired outcome
In 2017, the US military conceived Project Maven with the goal of integrating AI into weapons systems. Over the years, it has evolved into a target acquisition system, and it is reported to have greatly increased the efficiency of the target recommendation process for weapons platforms.
In line with recommendations from academic research on AI ethics, a human is in place to oversee the outcomes of the target acquisition process, as a critical part of the decision-making loop.
Nonetheless, research on the psychology of how humans work with computers raises important issues to consider.
In a peer-reviewed paper published in 2006, the US academic Mary Cummings described the problem now known as automation bias: the tendency of people to place undue trust in computer systems and their conclusions.
This could undermine the human role as a check on automated decision-making, because operators become less inclined to question a machine's conclusions.
In another study, published in 1992, researchers Batya Friedman and Peter Kahn argued that using computer systems can diminish people's sense of moral agency, to the point where they feel less accountable for the consequences. Indeed, the study describes how people may even begin to attribute a sense of agency to the computer systems themselves.
Given these tendencies, it would be prudent to consider how target acquisition systems might be affected by over-reliance on computers, as well as by any erosion of people's moral agency. After all, margins of error that look statistically negligible on paper take on terrifying proportions when we consider the potential consequences for real lives.
The many resolutions, agreements and laws on AI help provide assurances that humans will act as an important check on the technology. But it is worth asking whether, after long stints in the role, human operators might come to perceive real people as mere objects on a screen.