
Apple Commits to US Initiative for Trustworthy AI

White House Touts Agency Achievements for Development and Safe Use of Technology
Apple signed onto a White House-developed set of commitments for trustworthy AI. (Image: Shutterstock)

Apple is the latest tech giant to sign onto a list of voluntary commitments for artificial intelligence development pushed by the Biden administration. Fifteen technology heavyweights have already pledged they will follow the guidance.


The White House is extracting promises of secure and trustworthy development from Silicon Valley in a strategy it adopted after an AI regulatory push in Congress looked unlikely to succeed. The commitments include investing in AI model cybersecurity, red-teaming against misuse or national security concerns and accepting vulnerability reports from third parties. Companies also say they will watermark AI-developed audio and visual material (see: IBM, Nvidia, Others Commit to Develop 'Trustworthy' AI).

The strategy predates an October executive order that requires foundation model developers to report the results of red-team safety tests to the government.

Apple's decision to enroll in the White House commitments comes as the company's approach to AI development and use remains conservative compared to its peers. The smartphone giant's strategy has so far been to acquire early-stage startups to establish a foothold in the space. It had bought 32 such firms by the end of 2023 and is focused on enhancing its existing products and services with AI, in contrast to its peers' strategy of rolling out new AI features and applications.

Today's White House announcement about Apple is timed to the nine-month anniversary of the AI executive order - giving the administration a chance to tout the steps federal agencies have taken since the order took effect.

Among the achievements highlighted by the White House:

  • The AI Safety Institute released for public comment proposed guidance on evaluating the misuse of dual-use foundation models.
  • The National Institute of Standards and Technology published a final framework on managing generative AI risks and securely developing dual-use foundation models.
  • The Department of Energy stood up testbeds and tools to evaluate harms AI models may pose to critical infrastructure.
  • The United Nations General Assembly unanimously adopted a U.S.-led resolution addressing global AI challenges, and the United States expanded support for its political declaration on the responsible military use of AI and autonomy, which 55 nations have endorsed.

The administration also celebrated the hiring of more than 200 people to work on AI issues across the government.


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



