AI and Drone Warfare: Navigating the Ethical Battlefield

Know: Gaining Knowledge

What If?

Imagine that two fictional countries, Eastlandia and Westlandia, are engaged in a military conflict. Westlandia is wealthier, has a larger military, and receives more financial backing from allied countries than Eastlandia. Westlandia plans to take out a military target on Eastlandia's side, but it does not want to put too many soldiers on the front line when it has technological alternatives. While Eastlandia relies on ground forces and conventional missiles, Westlandia decides to employ drones to take out its target. Once someone on Westlandia's side presses the button that launches the drones, they complete their mission but cause extensive damage to the surrounding area. The operation achieves its military goals, but it also tragically results in civilian casualties: innocent people, including women and children, are injured and possibly killed by the strike. The incident sparks international outrage and leads leaders and citizens alike to question whether it is ethical to use AI during war.

Explanation

A world where robots fight wars may sound like science fiction, but it's closer to reality than you might think. Today, AI is involved in nearly every aspect of war, with powerful computers equipped with artificial intelligence making wartime decisions that, in some cases, could determine who lives and who dies.

Definition

Artificial intelligence is a rapidly developing technology that allows machines, such as drones, to learn and make decisions on their own. Militaries around the world are exploring the capabilities of this autonomous technology for surveillance, reconnaissance, and combat.

How It Works

Proponents of using AI in warfare argue that it offers several advantages. Machines can operate with a precision and speed that even the most skilled human soldiers can't match. They can also process vast amounts of data and identify targets with greater accuracy, potentially reducing civilian casualties.

One way AI drones are being used is in drone swarms: coordinated groups of drones that operate on their own, without the need for human pilots. Individual drones share critical information about targets with the rest of the swarm, allowing the group to adapt to battlefield conditions. A swarm can also overwhelm enemy defenses by sheer volume, flooding an area with so many drones that traditional defense systems cannot intercept them all. The U.S. military is a notable leader in developing autonomous drones for surveillance and reconnaissance; since 2016 it has been rigorously testing Perdix, a micro-drone swarm system originally developed at MIT. One of the first documented uses of autonomous drone swarms in warfare was by the Israeli military in 2021 against military targets in Gaza. And in its ongoing war against Russia, Ukraine's use of AI-enhanced drones has given its military a serious advantage even though it is outnumbered.
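To make the idea of decentralized swarm coordination concrete, here is a minimal Python sketch of the concept, not of any real military system: every name and number below is invented for illustration. Each simulated drone follows simple local rules and reacts to target sightings broadcast by the rest of the swarm, so the group converges without any central pilot.

```python
import math
import random

# Toy model: each drone knows only its own position plus whatever
# sightings the swarm has broadcast. There is no central controller.
class Drone:
    def __init__(self, name, x, y):
        self.name, self.x, self.y = name, x, y

    def step(self, shared_sightings, speed=1.0):
        """Move toward the nearest broadcast sighting, if any exists."""
        if not shared_sightings:
            # No known targets: wander randomly (a stand-in for a search pattern).
            self.x += random.uniform(-speed, speed)
            self.y += random.uniform(-speed, speed)
            return
        tx, ty = min(shared_sightings,
                     key=lambda t: math.dist((self.x, self.y), t))
        dist = math.dist((self.x, self.y), (tx, ty))
        if dist > 0:
            self.x += speed * (tx - self.x) / dist
            self.y += speed * (ty - self.y) / dist

# One drone "spots" a target and broadcasts it; every drone then reacts.
swarm = [Drone(f"d{i}", random.uniform(0, 50), random.uniform(0, 50))
         for i in range(5)]
shared_sightings = [(25.0, 25.0)]  # position reported by a single drone

for _ in range(30):  # simulation ticks
    for drone in swarm:
        drone.step(shared_sightings)

for drone in swarm:
    print(f"{drone.name}: ({drone.x:.1f}, {drone.y:.1f})")
```

Real swarm algorithms add collision avoidance, formation keeping, and robust communication, but the core idea is the same: shared information plus simple local rules produces coordinated group behavior.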

Beyond swarms, AI also enhances target recognition and threat monitoring in drone operations. AI-powered systems can take in far more data than humans can handle, including reports, documents, and real-time sensor feeds, to identify and track targets accurately. Drones equipped with AI can autonomously monitor border areas, detect suspicious activities, and alert human operators to potential threats. This capability improves situational awareness while reducing the cognitive load on human analysts.
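As a rough illustration of how such a monitoring pipeline might triage detections for human operators, here is a short Python sketch; the Detection fields, thresholds, and sample data are all hypothetical, and a real system would use an actual computer-vision model rather than hand-written records.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    label: str         # e.g. "vehicle", "person"
    confidence: float  # model confidence, 0.0 to 1.0
    location: tuple    # (latitude, longitude)

ALERT_THRESHOLD = 0.80   # confident enough to flag for a human analyst
IGNORE_THRESHOLD = 0.30  # below this, treat as probable sensor noise

def triage(detections):
    """Split raw detections into alerts for humans and items logged for review."""
    alerts, logged = [], []
    for d in detections:
        if d.confidence >= ALERT_THRESHOLD:
            alerts.append(d)   # a person, not the software, decides what happens next
        elif d.confidence >= IGNORE_THRESHOLD:
            logged.append(d)   # uncertain: keep for later human review
    return alerts, logged

feed = [
    Detection("vehicle", 0.92, (31.5, 34.5)),
    Detection("person", 0.55, (31.6, 34.4)),
    Detection("animal", 0.12, (31.7, 34.3)),
]
alerts, logged = triage(feed)
print(f"{len(alerts)} alert(s) for human operators, {len(logged)} logged for review")
```

The point of the sketch is the division of labor: the software filters a flood of sensor data, while humans retain judgment over anything the system flags.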

However, the use of AI in warfare also raises significant concerns. For example, delegating life-and-death decisions to machines creates questions about accountability. If an AI-powered drone makes a mistake or causes unintended harm, it can be difficult to determine who is responsible. Is it the programmer, the military commander, or the AI itself?

Many also fear that using AI in combat could lead to a depersonalization of conflict, making it easier to forget that real human lives are at stake. Leaders in technology and science such as Stephen Hawking, Stuart Russell, and Toby Walsh (Identity Review, 2023) have been outspoken critics of the use of AI in warfare. The Campaign to Stop Killer Robots, launched in 2013, is a coalition of organizations, including Human Rights Watch and Amnesty International, that works with the UN to raise public awareness of AI in warfare and to push for its regulation. Critics fear that AI could make the use of force easier and lead to more frequent conflicts. For this reason, they argue that a human should always be involved in making final targeting decisions, rather than relying on the AI's judgment alone.
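The "human in the loop" principle that critics call for can be pictured as a simple authorization gate: software may recommend an action, but nothing happens without an explicit, logged human decision. The following Python sketch is illustrative only; the identifiers and workflow are invented.

```python
import datetime

def request_authorization(target_id: str, operator_id: str) -> bool:
    """Ask a human operator to approve or reject a recommended action."""
    answer = input(f"[{operator_id}] Authorize action on {target_id}? (yes/no): ")
    approved = answer.strip().lower() == "yes"
    # A timestamped audit record supports accountability after the fact.
    print(f"{datetime.datetime.now(datetime.timezone.utc).isoformat()} "
          f"operator={operator_id} target={target_id} approved={approved}")
    return approved

if request_authorization("target-042", "operator-7"):
    print("A human authorized the action; proceeding.")
else:
    print("No human authorization given; the system stands down.")
```

Note that the audit trail also speaks to the accountability question above: a logged human decision gives investigators somewhere to look when something goes wrong.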

Why Care?

While AI has made significant advances, there are still concerns about its ability to reliably distinguish between enemy combatants and innocent civilians, especially in complex environments like cities. Mistakes in these situations could lead to increased civilian casualties. Because this is still a new and emerging technology, there are debates about whether current international laws and human rights frameworks are ready to govern the use of AI-powered drones and to hold actors accountable for how these systems are used.

Potential power imbalances are another concern. Poorer countries already tend to suffer more damage and casualties during conflict than wealthier ones, as illustrated in our hypothetical scenario between Westlandia and Eastlandia. AI could worsen this problem if only the most powerful militaries have access to the most advanced weapons.

Balancing the potential benefits of AI in warfare with these concerns is a complex challenge that governments and societies need to contend with. We need to find ways to ensure accountability, transparency, and safety while also meeting the needs of national security.

The future of AI in warfare is uncertain. While the technology has the potential to revolutionize combat, it also presents significant ethical challenges. We must engage in open dialogue and critical thinking to ensure that AI is used responsibly and ethically in the pursuit of a safer, more peaceful world.

Check Comprehension

  1. How is artificial intelligence used by militaries? 
  2. What is one potential benefit of using AI drones during war? What is one potential risk? 
  3. Name one ethical concern about the use of AI-powered drones during conflict.

Learn More

Care: Developing Connections

Think Further

  1. Do you think the potential benefits of using AI drones in warfare outweigh the possible ethical issues?
  2. If you had the opportunity to create the most important rule for AI in warfare that you can imagine, what would it be?
  3. Who should be held responsible if AI makes a mistake in warfare? The programmers? The military or political leaders who authorized the use of AI?

See Applications

Distribute or read the Case Study handout. Summary: A fictional country uses AI drones to attack an enemy town, achieving a military victory but also causing unexpected civilian casualties. The incident ignites international debate about the ethical implications of AI in warfare, raising concerns about accountability, civilian harm, and the potential dehumanization of conflict. The case highlights the need for international collaboration in regulating AI weapons and ensuring their ethical and responsible use.

Act: Building Skills

Practice Leadership

Note: This simulation should be conducted in a respectful and sensitive manner to avoid glorifying or making light of war, or encouraging violent talk. The focus should be on fostering critical thinking and ethical decision-making in the context of emerging technologies.

Ethical AI in Wartime Scenario

A fictional country, struggling for survival against a relentless aggressor, wants to employ advanced AI technology for autonomous defense drones. These drones are capable of identifying and eliminating enemy targets with minimal human intervention. However, concerns arise regarding potential civilian casualties and the ethical implications of delegating life-and-death decisions to AI.

Roles (see specifics on Simulation Handout)

  • AI Developers
  • Military Leaders
  • Ethical Advisors

Tasks
  1. Explain the scenario and create three small groups of students who will take on the roles shown above. (In larger classes, there may be multiple groups of each type.)
  2. Give students the Simulation Handout and go over the roles.
  3. Group Planning – In role groups, students discuss their task and write down their plans.
    1. Each role has its own goals in discussion. These are listed on the Simulation Handout.
    2. Provide time for groups to discuss and create their desired list of rules, plans, and/or solutions.
  4. Each group should present their proposals to the other groups, explaining their logic and rationale.
    1. Have groups identify the similarities among groups – for example, write topics on the board as each group speaks, and then compare the lists.
    2. Direct questions back to the groups for discussion, such as discussing the potential consequences of different approaches. If one group suggests a solution or rule that is different from another group, have them discuss how to deal with that difference.
    3. Encourage the groups to use material from other groups to propose alternative solutions and compromises.
    4. Optionally, have one student from each role get together in a group of three to discuss differences of opinion and attempt to come to some sort of agreement on rules and plans.
  5. Bring the entire class back together for group discussion.