
OpenAI Hacked by ‘Private Individual’, Could China Be Next?

Over a year ago, a hacker breached OpenAI's internal staff discussion forum, where employees discussed the company's latest AI technologies, and stole details about them. The New York Times reported the breach last week, citing two people familiar with the incident. OpenAI never went public about it.

Incident Kept Secret

According to OpenAI, the hacker didn't penetrate its internal systems, models, or secret roadmaps. Executives informed employees and the board about the breach during an all-hands meeting held in April 2023.

But the executives chose not to go public with the news, since no partner or customer information had been stolen and they didn't view the incident as a threat to national security.

According to the company, the hacker was a private individual with no known links to foreign governments, and federal law enforcement agencies were not informed about the breach. However, not everyone was satisfied with the way the company handled the security incident.

News of the breach has raised fears of potential foreign attacks, particularly from China. OpenAI maintains that its AI technologies pose no threat to national security, but details leaked to Chinese specialists could help them advance their own technologies faster.

A Major Incident

In a recent podcast, former OpenAI employee Leopold Aschenbrenner called the hacking a major security incident. Although the unauthorized access didn't reach OpenAI's core systems, he argued the breach shouldn't be trivialized.

Aschenbrenner criticized OpenAI's security measures, suggesting they are inadequate to keep foreign adversaries at bay and leave sensitive information vulnerable. Aschenbrenner was later fired for leaking information about the security compromise.

OpenAI has however maintained that Aschenbrenner’s dismissal was not related to the cybersecurity issue.

“We appreciate the concerns Leopold raised while at OpenAI, and this did not lead to his separation,” said OpenAI spokeswoman Liz Bourgeois.

The company acknowledged his contribution to building artificial general intelligence. However, it disagreed with his stand on its internal security practices.

“While we share his commitment to building safe A.G.I., we disagree with many of the claims he has since made about our work. This includes his characterizations of our security, notably this incident, which we addressed and shared with our board before he joined the company,” Bourgeois added.

Boosting Security

Since the hack, OpenAI has been strengthening its internal security. The company has added guardrails to deter misuse of its AI applications and set up a Safety and Security Committee to address future risks. Members of this committee include Paul Nakasone, the former NSA head.

Other tech giants are also weighing security against openness. Meta is making its AI designs open to all to foster industry improvements, a move that has made the technologies available to US adversaries such as China. But this may pose little threat: studies suggest that current AI systems are not much more dangerous than search engines.

State and federal regulations are being considered to control the release of AI technologies. Such regulations would impose penalties on companies whose technologies produce harmful outcomes.

James Hughes