Anthropic has restricted AI access in China with a sweeping update to its terms of service that prohibits entities majority-owned by Chinese companies from using its Claude AI platform. The decision, reported by The Economic Times, takes immediate effect and applies worldwide, closing previous gaps that allowed offshore subsidiaries to retain access.
The revised terms extend the restrictions beyond China’s domestic market. Any organization with more than 50% Chinese ownership is now barred from accessing Anthropic’s AI services, regardless of where it is headquartered. This global scope ensures that subsidiaries incorporated in jurisdictions such as Singapore or Hong Kong cannot bypass the restrictions. The update forms part of Anthropic’s policy on Chinese AI use and represents one of the most explicit and enforceable rules issued by a U.S. artificial intelligence developer.
Anthropic explained that the update is driven by risks of AI misuse by authoritarian governments, with specific emphasis on military and intelligence applications. The company has aligned its approach with broader U.S. national security measures, including export controls on high-end semiconductors.
“This is the first time a major US AI company has imposed a formal, public prohibition of this kind,” Nicholas Cook, an AI-industry lawyer with expertise in China, told AFP. He described the decision as both legally defensible and strategically symbolic.
Executives at Anthropic have acknowledged that the restrictions could cost the company in the “low hundreds of millions” of dollars in potential revenue. However, the firm views this as an acceptable trade-off to preserve compliance and safeguard long-term credibility. By becoming the first major U.S. AI company to formally block majority-Chinese-owned entities, Anthropic sets a precedent. Industry observers believe these access restrictions could encourage competitors such as OpenAI, Microsoft, and Google to evaluate whether their own terms require similar changes.
The move also resonates across Asia, particularly for firms with partial Chinese ownership that operate globally. Bans targeting Chinese AI companies are not entirely new, but Anthropic’s explicit wording raises compliance challenges for multinational enterprises with complex shareholder structures. Business leaders in Southeast Asia and Europe may now need to reassess supplier relationships and cloud contracts to ensure continued access to advanced AI platforms.
Reports by Reuters highlight how Chinese entities have historically used foreign cloud providers such as AWS and Azure to gain indirect access to restricted U.S. technologies. By updating its terms, Anthropic closes this loophole proactively rather than waiting for regulatory enforcement, signaling a more cautious approach by U.S. firms operating at the intersection of innovation and geopolitics. In June 2025, Anthropic launched new AI tools called “Claude Gov,” designed specifically for the United States national security sector.
By instituting access restrictions against Chinese-owned entities, Anthropic has prioritized security and regulatory compliance over short-term revenue. This shift not only reinforces U.S. government policy but also introduces new standards for AI governance in the private sector.