The Federal Trade Commission (FTC) has warned US lawmakers that AI technology, including ChatGPT, could be used to fuel fraud. During a recent Congressional hearing, FTC Chair Lina Khan and other commissioners raised concerns about the potential risks of AI and emphasized the need to manage these risks.
Khan acknowledged the benefits of AI but highlighted the risks of AI tools being used to deceive people, which can lead to FTC action against parties involved in such practices. She stated that the ability of AI to “turbocharge” fraud should be a “serious concern” for everyone.
To address these risks, the FTC is embedding technologists across the agency to identify and manage any issues related to AI. This approach will enable the FTC to stay on top of technological advancements and ensure that any fraudulent activities are quickly identified and addressed.
While Khan highlighted the risks posed by AI, FTC commissioner Rebecca Slaughter expressed confidence in the agency's ability to respond. She noted that the agency has adapted to new technologies before and has the expertise to do so again, emphasizing that the agency's obligation is to apply its existing tools to changing technologies, maintain the expertise to do that effectively, and protect people from fraudulent activities.
The Commission's testimony, delivered by Khan, Slaughter, and Commissioner Alvaro Bedoya, addressed a wide range of topics beyond AI. The FTC detailed its work to reduce spam calls, combat deceptive claims in the crypto sector, protect consumers' private health data, curb deceptive practices in the gig economy, and more.
The agency also noted that it launched a new Office of Technology (OT) in February to keep pace with technological change and support its law enforcement and policy work. The office focuses on areas such as security, privacy, digital markets, augmented and virtual reality, the gig economy, ad-tracking technologies, and automated decision-making, including AI.
“The creation of the Office of Technology builds on the FTC’s efforts over the years to expand its in-house technological expertise, and it brings the agency in line with other leading antitrust and consumer protection enforcers around the world,” the FTC said.
This is not the first time the FTC has raised concerns about the use of AI. In the past, the agency has highlighted the risks of using AI in credit scoring, hiring, and lending decisions, among other areas. Its warning underscores the importance of ensuring that these technologies are used ethically and responsibly.
There have been numerous instances of fraudulent activities fueled by AI. For example, deepfake technology has been used to create fake videos and audio recordings of individuals, which have then been used to spread misinformation or defraud people. Similarly, chatbots powered by AI have been used to scam people online.
To prevent such activities, the FTC has been working to raise awareness of the risks posed by AI and to hold companies accountable for fraudulent uses of the technology. As part of these efforts, the agency has issued guidance to businesses on the responsible use of AI and has investigated companies that use AI to deceive or defraud consumers.
The FTC's warning about the potential risks posed by AI technology like ChatGPT is a reminder of the importance of responsible use of these tools. While AI has the potential to transform many industries, it also presents risks that must be managed. By embedding technologists across the agency and launching the Office of Technology, the FTC is taking proactive steps to ensure that it has the expertise needed to identify and address issues related to AI.