Yesterday, OpenAI said that it has seen several foreign influence campaigns tap the power of its AI models to help generate and translate content, but has yet to see novel attacks enabled through its tools.
But most analysts differed, noting that in a massive global election year (more voters than ever in history will head to the polls as at least 64 countries plus the European Union, representing a combined population of about 49% of the world, hold national elections) OpenAI's tools are the weapons of choice when it comes to disinformation.
Last night Axios published a report on OpenAI's discovery, and this is their summary with a few notes from me:
Supercharging misinformation efforts has been seen as a key risk associated with generative AI, though it has been an open question just how the tools would be used and by whom. OpenAI said in its report that it has seen its tools used by several existing foreign influence operations, including efforts based in Russia, China, Iran and Israel.
For example, the Chinese network known as "Spamouflage" used OpenAI's tools to debug code, research media and generate posts in Chinese, English, Japanese and Korean.
The Russian "Doppelganger" effort, meanwhile, tapped OpenAI models to generate social media content in several languages as well as to translate articles, generate headlines and convert news articles into Facebook posts.
Meanwhile, an Iranian operation known as the "International Union of Virtual Media" used OpenAI tools to both generate and translate long-form articles, headlines and website tags, while an Israeli commercial company called STOIC ran multiple covert influence campaigns around the world, using OpenAI models to generate articles and comments that were then posted to Instagram, Facebook, X, and other websites.
OpenAI said it also detected and disrupted a previously unknown Russian campaign dubbed "Bad Grammar" that operated on Telegram. The effort targeted Ukraine, Moldova, the Baltic States and the United States and used OpenAI models both to debug code for a Telegram bot and to create short, political comments in Russian and English.
What OpenAI said in a press briefing was this:
"While all of these operations used AI to some degree, none of them use it exclusively. Instead, AI generated material was one of many types of content they posted alongside more traditional formats like manually written texts or memes copied from across the internet".
But the bigger picture, said analysts, was this: OpenAI's report comes ahead of a wave of global elections, including the U.S. presidential election. More than a billion people around the world are headed to the polls just as generative AI chatbots continue to become more widely available and easier to use.
And while AI helps create text faster and with fewer language errors, the toughest part of foreign influence campaigns remains getting their content to spread into the mainstream. According to OpenAI, all of the operations it detected were rated low in severity because their content showed no signs of spreading organically on its own.
But, analysts say, that assessment is not definitive, because OpenAI cannot see all the ways its tools are being used to aid such operations.
Bad actors can use generative AI to quickly spin up fake news sites, whether to generate the misinformation itself or the legitimate-looking news stories that serve as cover. For example, there is a huge Russian fake news operation, run by an American working out of Moscow, that uses OpenAI's tools; The New York Times published a detailed review of it.
Plus, AI analysts say, attackers can easily rely on other companies' generative AI, especially open-source tools that have fewer guardrails and that may be harder for outside groups to detect.
It is an extremely fluid and complex area to cover and analyze.