Enabling Machines to Make Life and Death Decisions Is Morally Unjustifiable

Jun 27 (IPS) – CIVICUS discusses autonomous weapons systems and the campaign for regulation with Nicole van Rooijen, Executive Director of Stop Killer Robots, a global civil society coalition of over 270 organisations that campaigns for a new international treaty on autonomous weapons systems.
In May, United Nations (UN) member states convened in New York for the first time to confront the challenge of regulating autonomous weapons systems, which can select and engage targets without human intervention. These ‘killer robots’ pose unprecedented ethical, humanitarian and legal risks, and civil society warns they could trigger a global arms race while undermining international law. With weapons that have some autonomy already deployed in conflicts from Gaza to Ukraine, UN Secretary-General António Guterres has set a 2026 deadline for a legally binding treaty.
What are autonomous weapons systems and why do they pose unprecedented challenges?
Autonomous weapons systems, or ‘killer robots’, are weapons that, once activated by a human, can select and engage targets without further human intervention. These systems independently decide when, how, where and against whom to use force, processing sensor data or following pre-programmed ‘target profiles’. Rather than using the term ‘lethal autonomous weapons systems’, our campaign refers to ‘autonomous weapons systems’ to emphasise that any such system, lethal or not, can inflict serious harm.
The implications are staggering. These weapons could operate across all domains – air, land, sea and space – during armed conflicts and law enforcement or border control operations. They raise numerous ethical, humanitarian, legal and security concerns.
The most troubling variant involves anti-personnel systems triggered by human presence, or by individuals or groups matching pre-programmed target profiles. By reducing people to data points for algorithmic targeting, these weapons are dehumanising. They strip away our inherent rights and dignity, dramatically increasing the risk of unjust harm or death. No machine, computer or algorithm can recognise a human as a human being, nor respect humans as inherent bearers of rights and dignity. Autonomous weapons cannot comprehend what it means to be in a state of war, much less what it means to have – or to end – a human life. Enabling machines to make life and death decisions is morally unjustifiable.
The International Committee of the Red Cross (ICRC) has noted it is ‘difficult to envisage’ scenarios where autonomous weapons wouldn’t pose significant risks of violating international humanitarian law, given the inevitable presence of civilians and non-combatants in conflict zones.
Currently, no international law governs these weapons’ development or use. As the technology advances rapidly, this legal vacuum creates a dangerous environment where autonomous weapons could be deployed in ways that violate existing international law while escalating conflicts, enabling unaccountable violence and harming civilians. This is what prompted the UN Secretary-General and the ICRC president to jointly call for urgent negotiations on a legally binding international instrument on autonomous weapons systems by 2026.
How have recent consultations advanced the regulatory agenda?
The informal consultations held in New York in May, mandated by UN General Assembly (UNGA) Resolution 79/62, focused on issues raised in the UN Secretary-General’s 2024 report on autonomous weapons systems. They sought to broaden awareness among the diplomatic community and complement the work around the Convention on Certain Conventional Weapons (CCW), emphasising risks that extend far beyond international humanitarian law.
The UNGA offers a crucial advantage: universal participation. Unlike the CCW process in Geneva, it includes all states. This is particularly important for global south states, many of which are not party to the CCW.
Over two days, states and civil society explored human rights implications, humanitarian consequences, ethical dilemmas, technological risks and security threats. Rich discussions emerged around regional dynamics and practical scenarios, examining how these weapons might be used in policing, border control and by non-state actors or criminal groups. While time constraints prevented exhaustive exploration of all issues, the breadth of engagement was unprecedented.
The Stop Killer Robots campaign found these consultations energising and strategically valuable. They demonstrated how UN processes in Geneva and New York can reinforce each other: while one forum provides detailed technical groundwork, particularly in developing treaty language, the other fosters inclusive political leadership and momentum. Both forums should work in tandem to maximise global efforts to achieve an international legally binding instrument on autonomous weapons systems.
What explains the global divide on regulation?
The vast majority of states support a legally binding treaty on autonomous weapons systems, favouring a two-tier approach that combines prohibitions with positive obligations.
However, roughly a dozen states oppose any form of regulation. Among them are some of the world’s most heavily militarised states and the primary developers, producers and likely users of autonomous weapons systems. Their resistance likely stems from a desire to preserve military superiority and protect economic interests, and from belief in the inflated claims about these weapons’ supposed benefits promoted by big tech and arms industries. Or perhaps they simply favour force over diplomacy.
Whatever their motivations, this opposition underscores the urgent need for the international community to reinforce a rules-based global order that prioritises dialogue, multilateralism and responsible governance over unchecked technological ambition.
How do geopolitical tensions and corporate influence complicate international regulation efforts?
It is undeniable that geopolitical tensions and corporate influence are challenging the development of regulations for emerging technologies.
A handful of powerful states are prioritising narrow military and economic advantages over collective security, undermining the multilateral cooperation that has traditionally governed arms control. Equally troubling is the expanding influence of the private sector, particularly large tech companies that operate largely outside established accountability frameworks while wielding significant sway over political leaders.
This dual pressure is undermining the international rules-based order precisely when we most need stronger multilateral governance. Without robust regulatory frameworks that can withstand these pressures, development of autonomous weapons risks accelerating unchecked, with profound implications for global security and human rights.
How is civil society shaping this debate and advocating for regulation?
Anticipating the challenges autonomous weapons systems would pose, leading human rights organisations and humanitarian disarmament experts founded the Stop Killer Robots campaign in 2012. Today, our coalition spans over 270 organisations across more than 70 countries, working at national, regional and global levels to build political support for legally binding regulation.
We’ve played a leading role in shaping global discourse by highlighting the wide-ranging risks these technologies pose and producing timely research on the evolution of weapons systems and shifting state positions.
Our multi-level strategy targets all decision-makers who can influence this agenda, at local, regional and global levels. It’s crucial that political leaders understand how autonomous weapons might be used in warfare and other contexts, enabling them to advocate effectively within their spheres of influence for the treaty we urgently need.
Public pressure is key to our approach. Recent years have seen growing autonomy in weapons systems and their military applications, particularly in the ongoing conflicts in Gaza and Ukraine, alongside rising use of technologies such as facial recognition in civilian contexts. Public concern about the dehumanising nature of these technologies and the lack of regulation has grown online and offline. We frame these concerns along the whole spectrum of automated harm, with autonomous weapons representing the extreme, and highlight the critical need to close the gap between innovation and regulation.
We also collaborate with experts from arms, military and technology sectors to bring real-world knowledge and credibility to our treaty advocacy. It is crucial to involve those who develop and deploy autonomous weapons to demonstrate the gravity of current circumstances and the urgent need for regulation.
We encourage people to take action by signing our petition, asking their local political representatives to sign our Parliamentary Pledge or just spreading the word about our campaign on social media. This ultimately puts pressure on diplomats and other decision-makers to advance the legal safeguards we desperately need.
© Inter Press Service (2025) — All Rights Reserved. Original source: Inter Press Service