TechLetters Insights - The US DoD approach to fielding autonomous weapons systems: how to ensure that AI on the battlefield remains stable?
The US recently released an updated policy on the employment of autonomous weapon systems. It covers the “design, development, acquisition, testing, fielding, and employment of autonomous and semi-autonomous weapon systems, including guided munitions that are capable of automated target selection” (it does not, however, apply to cyber operations, even though those may also be automatic or semi-automatic).
The timing is notable: the war in Ukraine has become a proving ground for new weapon systems, including drones, loitering munitions, and various unmanned vehicles.
Who knows what level of automation or semi-automation is being employed or tested there?
The US DoD policy retains the requirement that systems allow commanders to exercise “appropriate levels of human judgment over the use of force”. In other words, at least policy-wise, the current fast-paced evolution does not imply a transition from human-in-the-loop to human-on-the-loop, that is, to reduced human involvement.
That is the most important element so far. Commanders should also ensure that the use of “AI capabilities in autonomous and semi-autonomous weapon systems” is “consistent with the DoD AI Ethical Principles and the DoD Responsible Artificial Intelligence Strategy and Implementation Pathway”. That is to say, the use of lethal force should be ethical and responsible.
Lastly, autonomous and semi-autonomous weapon systems will have to undergo testing and verification, audited against the relevant principles of international humanitarian law.