
US FTC Keeping 'Close Watch' on Artificial Intelligence

Consumers Complain of Bias, Fraud, Privacy, Copyright, Data Use Concerns

The U.S. Federal Trade Commission says it is keeping a "close watch" on artificial intelligence, writing Tuesday that it has received a raft of complaints about bias, the collection of biometric data such as voiceprints, and the limited options consumers have to appeal an algorithmic decision that doesn't satisfy them.


Tech giants raced to incorporate generative AI into their products following the splashy late 2022 public debut of OpenAI's ChatGPT large language model - although AI in forms such as facial recognition and automated customer service had already percolated through the marketplace before ChatGPT became available.

A slew of tech luminaries earlier this year called for a pause in AI development in order to establish safety protocols, an appeal the industry did not heed. One of the figures behind the call recently told The Guardian that corporations appear trapped in a "race to the bottom against each other."

An FTC blog post written by the agency's Office of Technology says AI is "fundamentally shifting the way we operate; it’s lurking behind the scenes and changing the mechanics by which we go about our daily lives."

The office queried its consumer complaint portal "using search terms we thought could best capture AI-related interactions in the marketplace" to examine "thousands of submissions from the past 12 months alone."

AI models are trained on massive amounts of data, leading consumers to complain that the AI industry is scraping their data from across the internet. Many creators told the FTC that the companies behind AI models may be using their content to train those models, undermining their ability to make a living from their work while contributing to market domination by large corporations.

Another area of complaint, the blog post says, lies in the collection of biometric and personal data. Some consumers told the agency they had reservations about customer support calls that are recorded, "expressing a fear that the recording could then be used to train an AI using their voice."

AI models are susceptible to bias, inaccuracies, hallucinations and poor performance. Some consumers told the agency they've been unable to verify their identity through automated tools because the underlying algorithm hasn't been trained on a demographically representative sample.

Data from fraudsters has also penetrated some large language models, the FTC said, citing a complaint from a consumer who asked an AI chatbot for a bank's customer service number but instead received the number of a scammer.

AI has the potential to turbocharge romance scams and financial fraud, the FTC warned. "Many reports described being tricked by such scams and expressing a belief the messages originated from an AI model."


About the Author

Rashmi Ramesh

Assistant Editor, Global News Desk, ISMG

Ramesh has seven years of experience writing and editing stories on finance, enterprise and consumer technology, and diversity and inclusion. She has previously worked at formerly News Corp-owned TechCircle, business daily The Economic Times and The New Indian Express.



