Standards, Regulations & Compliance

DOJ to Launch Emerging Tech Board, Ensure Ethical Use of AI

Board to Set Ethical Framework for DOJ Use of Facial Recognition, Other AI Tools

The Justice Department plans to help define the ethical and legal implications of using AI tools in law enforcement and national security investigations. A top DOJ official on Wednesday announced plans to create an emerging technology board, just eight days after President Joe Biden signed an executive order on responsible AI.


Deputy Attorney General Lisa Monaco told the IBM Security Summit that the new board will be tasked with advising Justice leaders "on the ethical, lawful use of AI" within the agency "to ensure that we are coordinating the understanding and use of emerging technologies across the department."

Monaco added that the board will aim to share information about best practices for the use of AI, while developing a set of principles around the deployment of emerging technologies. The department has already employed AI systems and emerging technologies in a number of high-profile scenarios, Monaco said, including the use of facial recognition throughout its ongoing investigations of the Jan. 6 insurrection.

The department also uses AI technologies to support its national security initiatives, such as helping to identify anomalies in drug samples, Monaco said. The new board will seek to standardize the use of AI and ensure that new technologies are deployed in a manner that aligns with the agency's mission.

The announcement followed an October executive order in which the White House began setting new standards for the use of AI systems across federal agencies and required developers of advanced AI models to share safety test results with the government.

Government watchdog reports have long called on agencies - including Justice - to assess privacy risks and concerns associated with the federal use of facial recognition and emerging technologies for law enforcement purposes.


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



