CISA: AI Tools Give Feds 'Negligible' Security Improvements

Federal AI Security Tools Require Substantial Training, Offer Minimal Improvements

New artificial intelligence tools offer "negligible" improvements to federal cyber operations while requiring substantial time for training analysts, the U.S. cybersecurity agency concluded.

The Cybersecurity and Infrastructure Security Agency conducted an operational pilot to assess whether AI-powered federal vulnerability detection software is more effective than traditional technologies at identifying vulnerabilities in government systems and networks. The agency evaluated products that became federally available starting in 2023, focusing on the latest AI technologies, including software using large language models.

The agency found that AI tools "can be unpredictable in ways that are difficult to troubleshoot" and in some cases require substantial time to train analysts in new capabilities.

"The incremental improvement gained may be negligible," the agency said in a Monday report, adding that the best use of AI in federal vulnerability detection efforts "currently lies in supplementing and enhancing as opposed to replacing existing tools."

CISA has positioned itself as a leader in the federal government's adoption of AI systems, spearheading collaborative, cross-agency efforts to develop policy for the U.S. government's national AI strategy. The agency published a road map in 2023 to ensure the secure development and deployment of responsible AI tools across the federal government (see: New CISA AI Road Map Charts Course for Responsible Adoption).

President Joe Biden issued an executive order on AI in 2023 that directed CISA to launch a pilot using AI capabilities to aid in mitigating vulnerabilities in critical U.S. government software, systems and networks. The order tasks the Department of Homeland Security with applying standards set by the National Institute of Standards and Technology to the nation's critical infrastructure sectors, in addition to establishing an AI Safety and Security Board (see: White House Issues Sweeping Executive Order to Secure AI).

The White House has since said federal agencies are moving faster than anticipated in implementing many of the directives laid out in the executive order, including completing risk assessments on AI's use within critical infrastructure and launching efforts to accelerate the hiring of AI professionals across the federal government. The National Science Foundation has made headway in its own pilot project that aims to establish a national infrastructure to make AI computing capabilities and research more easily accessible.

The report acknowledged AI tools "are improving constantly" and said CISA "will continue to monitor the market" and test federally available software products to ensure "vulnerability detection capabilities remain state-of-the-art." The agency said it carried out its initial evaluations of AI tools for the pilot by conducting security assessments of federal partner networks and tests within a controlled environment.


About the Author

Chris Riotta
Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.
