OpenAI has recorded an increase in the number of China-based groups using its AI models for cyber threats and covert operations, Reuters reported. According to a report the AI startup released on June 5, these activities underscore the security challenges OpenAI faces as its models become more powerful.
OpenAI said its investigative teams have continually uncovered and disrupted malicious activity since February 21. In unveiling the report on malicious ChatGPT usage, OpenAI said the tactics employed by Chinese groups, and the scope of their cyber threats, have expanded over time.
For instance, OpenAI found that an influence operation originating from China had used ChatGPT to create polarizing social media content. The content, in both text and image formats, supported both sides of divisive topics in Chinese political discourse.
However, OpenAI said the operations it detected were mostly small in scale and targeted at limited audiences. Risks associated with generative AI have been a concern since ChatGPT launched in 2022. Advocates for ethics in technology have long raised questions about the possible consequences of tools that can generate human-like text, imagery, and audio quickly and at scale.
OpenAI regularly releases reports on the potential use of AI in cyber attacks. These reports typically highlight the malicious activities the California-based AI startup detects on its platforms, ranging from the creation and debugging of malware to the generation of fake content for posting on social media platforms and websites.
Previously, OpenAI terminated ChatGPT accounts that created social media content on political and geopolitical topics relating to China. The content included criticism of the sweeping tariffs announced by US President Donald Trump in April this year.
One AI-generated post stated, “Tariffs make imported goods outrageously expensive, yet the government splurges on overseas aid. Who’s supposed to keep eating?”
Chinese cyber threats have also involved using ChatGPT to support various phases of cyber operations, including modifying scripts, conducting open-source research, automating social media activity, troubleshooting system configurations, and creating password tools.
Other content generated by the chatbot that led to account bans includes allegations levied against a Pakistani activist, content relating to the closure of USAID, and criticism of a Taiwan-focused video game.
OpenAI has established itself as one of the most valuable private firms in the world. Recently, the AI startup completed a $40 billion funding round that valued it in excess of $300 billion.
However, the rapid adoption of its chatbot, ChatGPT, has raised concerns among experts that a tool useful for constructive purposes can also be misused for harmful ones. Analysts emphasize the need for robust ethical guidelines to ensure the responsible deployment of AI tools.
Experts generally agree that while AI has immense potential, the risks associated with it require strict oversight. Recognizing the need to mitigate misuse of platforms such as ChatGPT and to address privacy concerns, G7 nations have committed to promoting the responsible use of AI.
But as technology continues to evolve, the balance between fostering innovation and mitigating risks remains a critical challenge.