Based on my quick reading, the article fails to establish that autonomous drones are actually a clearly defined category around which enforceable treaties could plausibly be written. The WMD category is not just about indiscriminate destruction; it's also crucially about being able to cleanly distinguish WMDs from non-WMDs. Biological, chemical, and nuclear weapons are, with only a few exceptions, fairly precise categories. (Part of the criticism of bunker-buster nuclear bombs is that they risked blurring the distinction between nuclear and conventional weapons.)
How often does a remote operator need to check in with a drone for it to no longer be "fully autonomous"? How many such drones could the operator be monitoring at once? How many drones constitute a swarm?
A sovereign power can plausibly regulate blurry categories by just picking arbitrary but clearly defined cut-offs, e.g., the difference between a moped and a motorcycle is defined as an engine with a certain number of cc's. But this is a lot harder when you need to get dozens of different countries to all agree on a rule. International law is more dependent than domestic law on appealing to clear moral boundaries, since there is no higher earthly power to appeal to.
Reading this [0] Wikipedia article, it sounds like "Weapon of Mass Destruction" is not a super well-defined term to begin with:
"The most widely used definition of "weapons of mass destruction" is that of nuclear, biological, or chemical weapons (NBC) although there is no treaty or customary international law that contains an authoritative definition. Instead, international law has been used with respect to the specific categories of weapons within WMD, and not to WMD as a whole."
However, there is one definition given in the Wikipedia article that seems to be sufficiently precise:
"""
(1) Any explosive, incendiary, poison gas, bomb, grenade, or rocket having a propellant charge of more than four ounces [113 g], missile having an explosive or incendiary charge of more than one-quarter ounce [7 g], or mine or device similar to the above.
(2) Poison gas.
(3) Any weapon involving a disease organism.
(4) Any weapon that is designed to release radiation at a level dangerous to human life.
"""
I definitely agree with your overall point that defining AFADS as a WMD will require careful navigation of the blurriness surrounding the category. On the other hand, it seems like that second definition is already in the territory of choosing arbitrary cutoffs (i.e., 113 grams).
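As a toy illustration of how mechanical that kind of cutoff is, the statutory rule above boils down to a couple of threshold checks. This is just a sketch; the names and structure are mine, not anything from the statute:

    # Hypothetical sketch only: the statutory mass cutoffs as a rule.
    PROPELLANT_LIMIT_G = 113  # "propellant charge of more than four ounces"
    CHARGE_LIMIT_G = 7        # "explosive or incendiary charge of more than one-quarter ounce"

    def is_destructive_device(propellant_g: float, charge_g: float) -> bool:
        """True if either statutory mass cutoff is exceeded."""
        return propellant_g > PROPELLANT_LIMIT_G or charge_g > CHARGE_LIMIT_G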
An alternative perhaps is to focus on handling the "autonomous" part, rather than the "swarm" part, as the USMA article suggests. The definition here can, I think, be more precise and less arbitrary: a weapon that can at any time be overridden by an operator is not fully autonomous. However, if its regular operation includes a period where there is no operator in a position to override it (e.g., the drone(s) keep doing their thing while the operator is away from the controls), it would then count as fully autonomous. Having some authentication that the operator is at the controls at all times, available to override and legally responsible for the behavior of the drone(s), is, I think, the key to this approach.
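A rough sketch of what that operator-presence requirement could look like as a protocol; everything here (the names, the five-second window) is invented just to show the shape of the rule:

    import time

    # Hypothetical dead-man's-switch sketch: the operator must re-authenticate
    # within a fixed window, or the system counts as fully autonomous.
    PRESENCE_WINDOW_S = 5.0  # invented value, not from any treaty or spec

    class OperatorConsole:
        def __init__(self) -> None:
            self.last_authenticated = float("-inf")

        def authenticate(self) -> None:
            """Operator proves presence at the controls (token, input, etc.)."""
            self.last_authenticated = time.monotonic()

        def operator_present(self) -> bool:
            """True only if the operator checked in within the window."""
            return time.monotonic() - self.last_authenticated < PRESENCE_WINDOW_S

    def weapon_may_stay_armed(console: OperatorConsole) -> bool:
        # No authenticated operator available to override => under the proposed
        # rule, the system is "fully autonomous" and must stand down.
        return console.operator_present()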
> However, if its regular operation includes a period where there is no operator in a position to override it (e.g., the drone(s) keep doing their thing while the operator is away from the controls), it would then count as fully autonomous.
Whelp, I guess artillery and grenades are fully autonomous.
For the purposes of evaluating the morality of actions, yes: artillery shells with variable fuzing are a primitive version of an autonomous weapon.
It is much more useful to think about the reverse situation: how does viewing future autonomous weapons as a sort of artillery shell with a high CEP (circular error probable), or whatever feature you imagine the future weapon has, clarify the morality of their use?
In my opinion, no. Killing people with dumb artillery shells vs. smart microdrone swarms seems equally horrific morally. The primary question is whether one is more dangerous on a global scale than the other. Although I think the smarter drones may in fact be more dangerous globally, I haven't seen anyone define a clean criterion for distinguishing them. Without such a criterion, it looks infeasible to enforce through international law.
Yes, killing is morally reprehensible. But given that we'll not end the practice of war anytime soon, how can you regulate all the senseless killing people are determined to carry out so that at least some pretence of morality is upheld? That is the sort of problem I had in mind.
I think it's abundantly obvious to anyone looking at this in good faith that the autonomy refers to the targeting of the weapon and the decision to fire the weapon.
If you're arguing about whether some weapon is a WMD after the fact, good faith isn't really something we can count on. The whole point of precise definitions is eliminating subjectivity. Anyway, that definition includes cruise missiles, which can't be WMDs because they're in widespread use. This line of inquiry is dead in the water.
That's a broadening of the term WMD in US criminal legislation, basically defining any "destructive device" as a WMD so that anyone setting off hand grenades or pipe bombs can be charged under its extremely severe terror legislation.
The US military, like every other organisation outside the US, doesn't include conventional explosives under WMD.
The first definition comes from the US federal code and relates specifically to criminal terrorism charges. In this case it is basically setting the lower bound for what could be classified as a 'bomb'. Nothing more. The WMD definition you are probably thinking of (and the one more relevant to the original article) is the military definition outlined in various conventions and treaties.
I think that you have to focus on whether you have a human in the loop and whether that human can reliably interrupt an automated kill chain before there is a release of ordnance (much less fuzing).
If there's any point at which a human cannot 'abort', then it needs to be classified as a WMD.
Once out of human control, I view swarm UCAVs as an unrecallable remote shotgun, for the same reason as the article author: it's "inherently indiscriminate".
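To make the "abort at any point" idea concrete, here's a purely illustrative state machine (no real system or doctrine behind it) where every step toward release passes a human gate and abort wins until release:

    from enum import Enum, auto

    # Illustrative only: a human-abortable kill chain. The point is that
    # ABORT is reachable from every pre-release state, and each transition
    # toward RELEASED requires explicit human approval.
    class Stage(Enum):
        TARGET_PROPOSED = auto()   # autonomy may propose a target
        HUMAN_AUTHORIZED = auto()  # a human has authorized engagement
        RELEASED = auto()          # ordnance released; no longer abortable
        ABORTED = auto()

    def advance(stage: Stage, human_approves: bool, human_aborts: bool) -> Stage:
        if human_aborts and stage is not Stage.RELEASED:
            return Stage.ABORTED                      # abort wins pre-release
        if stage is Stage.TARGET_PROPOSED and human_approves:
            return Stage.HUMAN_AUTHORIZED
        if stage is Stage.HUMAN_AUTHORIZED and human_approves:
            return Stage.RELEASED
        return stage                                  # otherwise hold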
How long can the delay be between the fire button and the actual violent act? People shoot unrecallable missiles and artillery that don't land for minutes but do sophisticated tracking and maneuvering in the meantime. Is that a drone?
Stop with the stupid nitpicking. You should already know that the reason we have humans in our war machines is that they are the ones with the authority to designate targets and deploy the weapon against the designated target.
It's not stupid nitpicking at all. If I designate John Doe as a target and have a swarm of drones do facial recognition, the laws have to be clear enough to distinguish this from me putting a tracking device in John Doe's pocket and having a missile home in on that signal.
The counter-argument: it wouldn't be hard to design a system where a human remote operator in Arizona is paid minimum wage to push a button whenever a screen in front of them turns red and says "FIRE".
Any regulation of lethal autonomous drones will require some nuance in the wording.
Also, what number of drones? At what point does a swarm become a WMD?
To me, it's more about intelligence than numbers. You could have a single rocket, but if it's smart enough to evade enemy defenses, that rocket could level a city.
Only if that rocket had a nuclear bomb on it.
You'd be surprised how little destructive power individual munitions have. Sure, a 2000-pound bomb sounds scary. But bombs work really well against point targets, and cities are not point targets.
The US is that strong, provided they don't have to fight in or near China or Russia. No one besides those three even has a seat at the table. China and Russia could probably defend their own territories, but projecting power is an entirely different can of worms.
> But they would fight in or near China or Russia; I don't see any Chinese or Russian bases in Canada or Mexico.
Sure. Because they know their limits.
But in the context of the thread, the point is that the US can institute whatever "international law" it sees fit (as long as it has the strength of will to do so) outside of the spheres of influence of China and Russia.
(I think we're heading for the classic Oceania/Eurasia/Eastasia split, but I could have read too many entirely fictional books in my formative years. Personally, I'd have preferred to be in the timeline with centrifugal bumble-puppy and feelies.)