G7 Unveils Rules for AI Code of Conduct - Will They Stick?

Experts Are Raising Concerns About the Voluntary Nature of Recent AI Guidance
The G7 agreement is meant to serve as one of the first global frameworks for responsible AI development. (Image: Shutterstock)

Members of the Group of Seven industrialized democracies agreed to a voluntary code of conduct that aims to establish a global set of guidelines and expectations for developers of advanced artificial intelligence.

Leaders of the G7 economies (Canada, France, Germany, Italy, Japan, Britain and the United States) reached the agreement this week. The guidance urges developers to publicly report on the capabilities, risks and limitations of their AI systems, and it seeks to harmonize emerging global AI regulations with international technical standards for cross-border AI deployments.

The code of conduct is entirely voluntary: organizations in G7 countries are not legally required to comply with its terms. Security researchers and technologists told Information Security Media Group that the agreement's voluntary nature raises questions about its effectiveness and how widely its provisions will be adopted within the private sector.

Organizations that adopt the guidance are expected to employ red-teaming and testing measures and to implement robust security controls, such as insider threat safeguards, across the AI life cycle. AI developers will also be expected to publicly report on evaluations conducted for safety, security and societal risks.

John Harmon, regional vice president of cyber solutions for the security firm Elastic, said that the code of conduct is "great in concept" but "may require fine-tuning to truly fulfill its purpose of safeguarding citizens from the negative implications of future AI use, while simultaneously fostering innovation in the field."

"For the public sector, one of the biggest problems with AI right now is the fact that there is no centralized model for internal government use," said Harmon, who leads Elastic’s federal cyber solutions business. "Instead of being reactionary, we must get in front of potential issues that lie ahead with the use of AI, employing legislation to detail the technology’s use."

The G7 agreement was announced the same day U.S. President Joe Biden invoked Cold War-era executive powers and signed an order requiring developers of AI models to share safety test results with the federal government. Security analysts and policymakers said the executive order currently lacks the accompanying legislation required to enforce its key measures.

The president himself encouraged lawmakers to pass bipartisan legislation on AI, saying during the signing ceremony, "This executive order represents bold action, but we still need Congress to act."

The executive order and G7 code of conduct recommendations have "significant overlap" in terms of themes and specific recommendations, said Graham Gilmer, senior vice president of AI at the government and military intelligence contractor Booz Allen Hamilton.

"They both represent excellent first steps in governments around the world taking a leadership position in realizing the promise of AI, while doing so responsibly," Gilmer told ISMG. He described the new guidelines as a "starting point" for encouraging safe and secure AI technology development and innovation.

It also remains to be seen how the G7 code of conduct aligns with other AI legislative packages in countries that are developing their own standards and regulations for AI systems. The potential for varying frameworks could introduce new complexities in the global AI landscape and pose significant challenges for international cooperation on AI development and governance, experts said.

"Both the G7 code of conduct and the [executive order] are aligned in their objective to preemptively put a marker down that governments and private sector organizations can build upon and refine," said Tom Miller, CEO of the risk management company ClearForce. "This is certainly a great opportunity for the United States to lead by example and ensure democratic market principals - elimination of bias or discrimination, among others - are at the forefront of these early initiatives."


About the Author

Chris Riotta

Managing Editor, GovInfoSecurity

Riotta is a journalist based in Washington, D.C. He earned his master's degree from the Columbia University Graduate School of Journalism, where he served as 2021 class president. His reporting has appeared in NBC News, Nextgov/FCW, Newsweek Magazine, The Independent and more.



