MARCH 18, 2019 – Only months old, the Army Artificial Intelligence Task Force has already started pilot projects to find ways to speed up security clearances and analyze imagery for military activity.
Split between the National Capital Region and Carnegie Mellon University’s National Robotics Engineering Center in Pittsburgh, the task force stood up in October as part of a recent push to increase AI efforts across the Defense Department.
One of the first efforts is a partnership with the Army Analytics Group that uses personnel data to quickly identify risks in security clearances for Army personnel.
The algorithms use “machine learning tools to speed up the risk management process that security clearance adjudicators are using,” said Brig. Gen. Matthew Easley, the task force director. “It gives them evidence to allow them to make a decision much faster.”
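The article does not describe the Army's actual model, but the kind of risk triage Easley describes can be sketched in a few lines. Everything below — the flag names, weights, bias, and cutoff — is a hypothetical illustration of how a scored queue could surface the riskiest cases for adjudicators first:

```python
import math

# Hedged sketch of ML-assisted risk triage for clearance adjudication.
# Feature names, weights, and bias are hypothetical, not the Army's model.
WEIGHTS = {
    "foreign_contacts": 1.5,
    "financial_delinquency": 2.0,
    "prior_incident": 1.0,
}
BIAS = -3.0  # baseline assumption: most records are low risk

def risk_score(record):
    """Logistic score in [0, 1] computed from binary risk flags."""
    z = BIAS + sum(w for flag, w in WEIGHTS.items() if record.get(flag))
    return 1.0 / (1.0 + math.exp(-z))

def triage(records, cutoff=0.5):
    """Order records so adjudicators see the riskiest cases first."""
    scored = sorted(records, key=risk_score, reverse=True)
    return [(r, round(risk_score(r), 3)) for r in scored if risk_score(r) >= cutoff]
```

The point of the sketch is the workflow, not the math: the model does not decide anything, it only ranks cases so a human adjudicator can reach a decision faster, which matches Easley's description of the tool as evidence rather than an answer.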
Part of the newly formed Army Futures Command, the task force plans to further develop other AI technology with academia and industry partners as it grows. It is now staffed at half its projected size of about 16 personnel, the general said.
The task force will also work closely with the new Joint Artificial Intelligence Center, the Army Combat Capabilities Development Command’s Army Research Laboratory, and other DOD organizations.
Established in June under the DOD chief information officer, the JAIC is the focal point of DOD’s AI strategy and charged with providing a common vision to drive department-wide AI capability delivery.
On Monday, the Army lab announced a $72 million five-year AI research cooperative agreement with Carnegie Mellon, which will lead a consortium of other universities that want to team up with the lab to accelerate research and development of AI, advanced algorithms and autonomous systems for national defense.
Easley said that next year the task force will likely look for ways to improve the Army's rapid-prototyping process for future equipment, such as helping to quickly develop the software used in prototypes.
In the second ongoing pilot project, the task force is testing an algorithm that can scour imagery for military objects of interest belonging to near-peer competitors, such as tanks, he said.
The algorithm is similar to facial recognition technology found in a smartphone camera, which can pick out the heads of people in a photo.
“Your camera does a really good job at this,” he said Tuesday after his panel discussion at an Army signal conference. “You use it every day.”
As near-peer competitors adapt, more funding will need to be invested internally in AI technology, said Col. Mark Orwat, who is assigned to the JAIC.
Today, there are over 500 AI projects across the DOD, Orwat said. Some of these investments are not AI systems themselves, but the foundations necessary to ensure DOD systems can manage massive amounts of data.
Newer equipment will carry more sensors than ever before, and when Soldiers wear helmet cameras, for instance, those high-definition video feeds could overload systems as well.
“The Soldier is going to [generate] massive data every time he or she swings their head around,” Orwat said during the panel discussion. “We know data is going up on the battlefield. All these things are going to just pound the tactical network with data.”
The human brain has powerful learning abilities that machines cannot yet replicate.
For example, Orwat said, someone could view an image and easily recognize it again later. A machine would need to see thousands or millions of example images before it could learn to do the same.
“You have to show a bunch of code 3.2 million pictures to train it to do what you do after one picture,” Orwat said. “That’s a lot of data.”
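The training loop behind that "3.2 million pictures" figure can be shown in miniature. In this toy sketch, a single number (mean brightness) stands in for an entire image, and the "object vs. background" labels are purely illustrative; the mechanism is the real one, though — every labeled example nudges the model's weights slightly, which is why training takes so much data:

```python
# Toy supervised-training loop: a one-feature perceptron.
# One number (mean brightness) stands in for a whole image;
# labels and samples are illustrative only.

def train(samples, epochs=20, lr=0.1):
    """Train on (brightness, label) pairs; label 1 = object, 0 = background."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for brightness, label in samples:
            pred = 1 if w * brightness + b > 0 else 0
            error = label - pred          # correct prediction -> no change
            w += lr * error * brightness  # each example nudges the weights
            b += lr * error
    return w, b

def predict(w, b, brightness):
    return 1 if w * brightness + b > 0 else 0

# Four labeled "images" -- real imagery systems need thousands to millions.
samples = [(0.9, 1), (0.8, 1), (0.2, 0), (0.1, 0)]
w, b = train(samples)
```

A real object detector learns millions of weights from millions of pixels per image, but the update rule is the same shape: small corrections, accumulated over an enormous number of labeled examples.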
Common applications on smartphones — such as social media, ride sharing or mapping systems — already continually learn from users by observing their behaviors.
“If you [multiply] that by a couple of million users every day, they get [to] big data very quickly and that’s how they do it,” Easley said. “We have to think hard about how we want to do that in the U.S. military across our service members, because we’re just not collecting data from them in the same method.”
AI systems, though, may not always offer clear-cut answers to aid decision making. Working with systems that can offer only probabilities may require a different mindset, Easley said.
“A lot of people want a clean answer — it’s Object X,” he said, adding that an AI system may be only 80 percent confident that it is.
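That "80 percent confident" idea maps directly onto how classifiers actually report results: the model emits raw scores, a softmax turns them into probabilities, and a policy threshold decides when the answer is clean enough to act on. The labels and the 0.8 threshold below are illustrative assumptions, not values from any fielded system:

```python
import math

def softmax(logits):
    """Turn raw classifier scores into probabilities that sum to 1."""
    m = max(logits)                        # subtract max for numeric stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def classify(logits, labels, threshold=0.8):
    """Return the top label only if its probability clears the threshold."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    if probs[best] >= threshold:
        return labels[best], probs[best]
    return "uncertain", probs[best]        # defer to a human operator

labels = ["tank", "truck", "clutter"]
```

The second return path is the mindset shift Easley describes: instead of forcing "it's Object X," a below-threshold answer is explicitly handed back to the operator as uncertain.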
As with smartphone apps, there are caveats to using these systems. Sometimes an app offers good recommendations based on a user’s past behavior; sometimes it does not.
Enemies could also “spoof” Soldiers by planting fake information in AI systems.
“As we get answers in, we have to train our operators how to understand the results that are coming out of them,” Easley said, “and understand what the enemy is trying to do so we’re making better decisions.”
One possible solution is to frequently update algorithms while in combat and push those updates out to AI systems in near real time.
“We have to find a way to automate the process much faster,” he said.
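A minimal sketch of that update loop might look like the following. The registry dictionary, version numbering, and hot-swap mechanics here are assumptions standing in for whatever distribution service would actually carry retrained models forward; the article describes only the goal of near-real-time updates, not a mechanism:

```python
# Hedged sketch of near-real-time model distribution: an edge node polls a
# shared model registry and hot-swaps to the newest weights. The registry
# dict and version scheme are illustrative assumptions only.

class EdgeClassifier:
    def __init__(self, registry):
        self.registry = registry   # version -> weights, shared with the rear
        self.version = 0
        self.weights = None
        self.refresh()

    def refresh(self):
        """Pull the newest model if one has been pushed since the last check."""
        latest = max(self.registry)
        if latest > self.version:
            self.version = latest
            self.weights = self.registry[latest]

registry = {1: {"w": 0.10}}        # initial model fielded with the unit
node = EdgeClassifier(registry)
registry[2] = {"w": 0.13}          # model retrained against new spoofing, pushed forward
node.refresh()                     # node now runs the updated model
```

Automating that refresh cycle — retrain, publish, pull — rather than fielding models on a years-long acquisition timeline is the acceleration Easley is pointing at.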
By Sean Kimmons, Army News Service