The Weapon's Edge
By Patrick Tucker | October 23, 2016
U.S. military leaders are spending billions of dollars to develop the robotic autonomy they say will drive technological dominance in the next decade or two. Here’s a question for the next five years: how do you test artificially intelligent weapons in a way that is safe yet credibly represents a battle environment?
Maj. Gen. Robert McMurry Jr., who leads the Air Force Research Lab, or AFRL, at Wright-Patterson Air Force Base in Ohio, is accustomed to testing dangerous and futuristic weapons, such as lasers.
“It’s either never lethal enough, or the most dangerous thing ever, if you’re a safety guy,” McMurry joked at a National Defense Industrial Association breakfast on Wednesday. “But it challenges the test ranges…Autonomy is going to take that to a whole new level.”
Currently, McMurry and his team are trying to figure out how to test the autonomous drones that might one day help fighter pilots in a real fight. Under an AFRL program called Loyal Wingman, researchers from the University of Cincinnati have developed ALPHA, a remarkable system that uses fuzzy logic and rapid processing to consistently beat human pilots in simulated environments. How rapid? In a paper describing the system, ALPHA’s creators explain that “every 6.5 milliseconds ALPHA can take in the entirety of sensor data, organize the data and create a complete mapping of the scenario, analyze its current [course of action] and make changes, or create an entirely new CoA for four aircraft.”
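To give a sense of what a fuzzy-logic decision step looks like, here is a toy sketch in Python. Everything in it is invented for illustration: the inputs, membership functions, rules, and thresholds are assumptions of this example, and ALPHA’s real rule base (a genetic fuzzy tree, per its creators) is far larger and tuned by evolutionary search.

```python
# Toy fuzzy inference: map two sensor readings to a threat score.
# All names, membership shapes, and rule weights are illustrative only.

def ramp_up(x, lo, hi):
    """Membership rising linearly from 0 at lo to 1 at hi."""
    return min(1.0, max(0.0, (x - lo) / (hi - lo)))

def ramp_down(x, lo, hi):
    """Membership falling linearly from 1 at lo to 0 at hi."""
    return 1.0 - ramp_up(x, lo, hi)

def threat(distance_km, closing_mps):
    """Combine two fuzzy rules into a threat score in [0, 1]."""
    near = ramp_down(distance_km, 5, 30)   # how "near" is the contact?
    fast = ramp_up(closing_mps, 50, 300)   # how "fast" is it closing?
    # Rule 1: near AND fast -> high threat (output 1.0)
    # Rule 2: far  OR  slow -> low threat  (output 0.2)
    w1 = min(near, fast)
    w2 = max(1.0 - near, 1.0 - fast)
    # Weighted-average defuzzification of the two rule outputs.
    return (w1 * 1.0 + w2 * 0.2) / (w1 + w2)

threat(10, 400)   # near, fast-closing contact -> high score
threat(100, 10)   # distant, slow contact -> low score (0.2)
```

Part of the appeal of fuzzy rules is that each one stays human-readable while evaluation reduces to a handful of min/max and arithmetic operations, which is what makes millisecond-scale decision cycles plausible.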
Writer M. B. Reilly broke down the math for a University of Cincinnati magazine. “Basically, the AI is so fast that it could consider and coordinate the best tactical plan and precise responses, within a dynamic environment, over 250 times faster than ALPHA’s human opponents could blink.”
How do you judge the safety and reliability of software that revises its strategy orders of magnitude faster than you can blink? How do you surprise it and how do you test its reaction to that surprise? Indeed, “How do I test that?” is a quandary that has haunted computer scientists since Alan Turing. Whereas Turing’s measure of artificial intelligence was the ability to fool a human over text, McMurry is looking for something less humanistic, more practical: the level of trust between machines and their operators.
“If you want an autonomous system to partner with your manned system, how does the man trust the autonomous system?” he said. “The cornerstone of trust is not integrity and truth. The cornerstone of trust is competence. The system has to do what you expect it to do in a way that supports the mission every time. When it does, you start to trust it.”
McMurry says the road to trust lies through much more virtual and simulated evaluation, which will happen through the Air Force’s Strategic Development Planning and Execution Office.
“There is an argument that we’re still having in basic system engineering that says: ‘I start with system-level requirements and decompose them all the way down to the smallest subsystems.’ I don’t know how to do that in an autonomous system,” he said. “It’s a game-changer for us. We’re going to have to figure that out. We’re already stepping that way. You can’t get the environment you need to test it in the real world. So we’re going to figure out how to do a lot of modeling and sim.”
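A toy sketch of what testing-by-simulation can mean in practice: rather than decomposing requirements subsystem by subsystem, run the autonomous controller through thousands of randomized scenarios and measure how often it does what the operator expects, the competence McMurry describes. The controller policy, noise model, and thresholds below are stand-ins invented for this example, not AFRL tooling.

```python
# Monte Carlo test campaign for a trivial stand-in controller.
# The scenario generator and "evade within 15 km" policy are illustrative only.
import random

def run_campaign(trials=10_000, seed=42):
    rng = random.Random(seed)
    failures = 0
    for _ in range(trials):
        true_km = rng.uniform(0.0, 50.0)            # randomized threat range
        measured = true_km + rng.gauss(0.0, 2.0)    # imperfect sensing
        action = "evade" if measured < 15.0 else "hold"    # controller's choice
        expected = "evade" if true_km < 15.0 else "hold"   # operator expectation
        failures += action != expected
    return 1.0 - failures / trials                  # observed competence rate

run_campaign()   # high but not perfect -- the gap is what the campaign surfaces
```

Even this trivial setup shows why simulation scales where range testing cannot: ten thousand scenario draws take milliseconds, and the output is exactly the kind of every-time competence statistic that trust between operator and machine would rest on.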
In other words, you need to leave testing of next-generation machines to the machines.
McMurry acknowledged, almost apologetically, “I don’t know how that’s going to square with the operational guys.”