In a nutshell: While some view AI as a threat, the military sees it as a tool for detecting threats. It has been testing an AI surveillance system that can identify threats such as armed individuals from more than a mile away, while generating fewer false alarms than trained security professionals.
Security startup Scylla offers “proactive,” AI-based security systems that safeguard the perimeters of facilities and depots. Its technology is apparently good enough to protect US nuclear sites: the Department of Defense (DoD) began testing it eight months ago at the Blue Grass Army Depot (BGAD) in Richmond, Kentucky.
Currently, BGAD is the only military base testing AI-powered surveillance algorithms to detect potential threats. The systems help personnel find and identify intruders, weapons, or “abnormal behavior” in real time. Scylla works with existing surveillance cameras and drones to monitor facilities, allowing human personnel to respond to threats far more efficiently.
According to Drew Walter, deputy assistant secretary of defense for nuclear matters, Scylla AI learns in real time, reducing false alarms. The system addresses one of the DoD’s long-standing challenges in physical security: improving security personnel’s reaction times while quickly and reliably filtering out non-security issues.
In BGAD tests performed by the Physical Security Enterprise and Analysis Group (PSEAG), Scylla has detected threats with accuracy rates above 96 percent. Depot Electronic Security Systems Manager Chris Willoughby said the system significantly reduces false alarms caused by “environmental” phenomena. Humans are still required to decide whether to respond to a threat.
The AI demonstrated its surveillance reach by identifying an armed individual climbing a water tower a mile away. In another test of the system’s responsiveness, it alerted security personnel “within seconds” of detecting two apparently armed intruders breaching a fence. The intruders were in fact BGAD staff, whom Scylla immediately identified via facial recognition.
While PSEAG is heavily involved in testing, evaluating, and even training Scylla’s deep learning algorithms for BGAD, the Army has, for obvious reasons, provided no specifics on how the trained system differs from the commercial software. Deputy Assistant Secretary Walter is a fan of the AI, saying it could be “transformative” for PSEAG’s core mission: safeguarding the US strategic nuclear arsenal.