The Federal Trade Commission is laying the groundwork to stifle free speech. That’s the reality of its new investigation into OpenAI, the developer of the popular ChatGPT program. Many people know of the FTC’s consumer-protection and competition efforts, but the commission’s interest in “misinformation” deserves more attention. It opens the door to government control over the answers that programs like ChatGPT provide and, therefore, over the views that Americans hear and hold.
Buried on page 13 of the FTC’s demand letter to OpenAI, the commission asks for “all instances of known actual or attempted ‘prompt injection’ attacks.” The commission defines prompt injection as “any unauthorized attempt to bypass filters or manipulate a Large Language Model or Product using prompts that cause the Model or Product to ignore previous instructions or to perform actions unintended by its developers.” Crucially, the commission fails to define “attack.”
Prompt-injection attacks are real, and our understanding of them is still emerging. They are commonly understood to involve potentially illegal data-security threats and exploitation of coding vulnerabilities, which the FTC investigation is focused on. Yet the commission’s language is also broad enough to cover users who do something perfectly legal: attempt to coax responses from ChatGPT that evade company policies on acceptable responses. There is a growing trend in activist circles to include such activities in the definition of prompt-injection attacks.
Like social-media companies such as Twitter, Facebook, and YouTube, OpenAI has policies intended to limit responses on certain political topics in order to combat “misinformation.” Yet misinformation is in the eye of the beholder. At ChatGPT, users have accused the program of responding with a liberal bias. For instance, past users haven’t been able to convince the program to praise fossil fuels. Jokes about Jesus? Yes. Allah? No. In one instance, ChatGPT wrote an account of Hillary Clinton winning in 2016 but refused to write a similar account of Donald Trump winning the 2020 election. The list goes on.
OpenAI is free to build a tool with rules that reflect its developers’ worldview — and to determine how to respond to threatening or dangerous queries. Users are equally free to ask the questions they want and try to break through perceived bias, as many journalists and concerned Americans have tried on ChatGPT. Far from being a nefarious activity, such queries test the limits of this groundbreaking tool. They also send a message to developers about whether users are satisfied with the results delivered by ChatGPT.
This is the market at work, and it will only benefit the development of artificial intelligence. The federal government has never regulated content policies, precisely because they’re best shaped by the interplay between companies and consumers. Yet now the FTC is flirting with an unprecedented intervention into online speech. What’s more, the commission has asked OpenAI to provide descriptions of “attacks” and their source. In other words, the FTC wants OpenAI to name all users who dared to ask the wrong questions.
This threatens the free speech of ChatGPT users. If you disagree with the worldview of ChatGPT’s developers — or the view of the FTC itself — the FTC wants you to keep it to yourself or prepare to be treated like con artists and fraudsters by the federal government. Don’t ask questions about the benefits of natural gas. Don’t seek clarity on “what is a woman.” Don’t ask questions the FTC doesn’t approve. And if you do, prepare for the FTC to ask OpenAI for your prompt history.
The FTC’s actions also chill speech at AI companies. The FTC wants to know how OpenAI “corrected and/or mitigated” attempts to break through their tool’s bias, as well as “all policies, procedures, mechanisms, and/or other strategies or methods” for monitoring and preventing future actions. This request puts all AI developers on notice: Even if your users demand it, do not stray from content policies that push approved views and silence disfavored perspectives.
Perhaps other AI companies can ignore the FTC’s implied threat, like the AI company Elon Musk recently launched. He claims it’s more trustworthy than ChatGPT, noting that training an AI to be politically correct prevents truthful responses. But, given the FTC’s recent inquiry into OpenAI, it seems only a matter of time before the commission comes after Musk’s project for alleged “misinformation.”
OpenAI should refuse to turn over the identities of users who challenged ChatGPT, at least on matters that are clearly not illegal. Disclosing this information will pave the way for censorship by the Federal Trade Commission. If we truly care about the development of AI, the messiness of the current experiment ought to continue. Anything else would only permanently reinforce bias and stifle both free speech and the true potential of this technology.