In discussions of an AI-driven future, “autonomous weaponry” is a loaded term that brings to mind visions of robots making life-or-death decisions without human input. In a real-world context, however, there are ways to embrace this technology while maintaining a focus on ethics.
The most obvious application for autonomous weaponry is missile strikes, where AI can identify and then attack specific targets. Where earlier missiles could only be used against fixed targets, newer ones can course-correct and defeat countermeasures to strike moving targets on their own. As time goes on, these weapons systems will become faster, more efficient, and more deadly.
While lethal autonomous weapons systems offer clear benefits when wielded responsibly (for instance, deploying a fast-moving weapon that acts quickly to mitigate adversarial threats), equally clear risks arise when irresponsible, unethical, or malevolent actors gain access to this technology. The risks of such actors deploying the technology are self-evident, but a less obvious, and potentially just as dangerous, risk exists if they can infiltrate the system itself. A single introduced error, corruption, or adversarial attack can cause an AI-reliant system to make an incorrect decision. And much like humans, AI can be overwhelmed when the data is of poor quality or the system lacks the robustness these applications require.
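To make the adversarial-attack risk concrete, here is a minimal, hypothetical sketch (not drawn from any real weapons system): a toy logistic-regression classifier whose “threat” decision flips under a small, deliberate perturbation of its input. The weights, inputs, and step size are all invented for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and bias for a binary threat / no-threat score.
w = np.array([2.0, -1.5, 1.0])
b = -0.5

# A clean input the model classifies as a threat (score > 0.5).
x = np.array([0.8, 0.3, 0.5])
clean_score = sigmoid(w @ x + b)

# FGSM-style perturbation: nudge each feature by epsilon in the direction
# that most decreases the score. For this linear model, the gradient of the
# score with respect to x has the same sign as w.
epsilon = 0.3
x_adv = x - epsilon * np.sign(w)
adv_score = sigmoid(w @ x_adv + b)

print(f"clean score:     {clean_score:.2f} -> {'threat' if clean_score > 0.5 else 'no threat'}")
print(f"perturbed score: {adv_score:.2f} -> {'threat' if adv_score > 0.5 else 'no threat'}")
```

The same principle extends to far larger models: an attacker who can touch the input pipeline, even slightly, can steer the output without any obvious sign of tampering.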
The future is upon us, and autonomous weapons will continue to be a critical part of the emerging warfighting paradigm. It’s up to decision-makers and stakeholders around the world to create systems that are both effective and ethical, to ensure those systems are rigorously tested and battle-hardened before deployment, and to protect them from external threats afterward.
Read more about Autonomous Weapons in our 2023 AI Hot Button Report.