AI Regulation

AI Regulation Meet: Top AI Firms to Visit the White House

A new evolution is underway: the AI evolution. Everyone is betting on the AI industry after OpenAI's game-changing tool, ChatGPT. Evolution is good in many respects, but it also brings destructive possibilities with it. Hence, AI regulation matters for the public.

To that end, the Biden administration will hold a meeting to collect "voluntary commitments" from seven top AI firms on shared security and transparency goals, ahead of a planned Executive Order.

Since OpenAI's ChatGPT launched, the AI industry has been moving at lightning speed, and observers from journalists to the White House worry that it could get carried away. The top AI companies taking part in the non-binding agreement will send representatives to the White House to meet with President Biden today. The companies are OpenAI, Anthropic, Google, Inflection, Microsoft, Meta, and Amazon.

To be clear, no rule or enforcement is being proposed here; the agreed-upon procedures are entirely optional. If a business skips some of them, no government agency will hold it accountable, though the lapse will probably become public knowledge.

The following representatives will visit the White House:

  • Brad Smith, President, Microsoft
  • Kent Walker, President, Google
  • Dario Amodei, CEO, Anthropic
  • Mustafa Suleyman, CEO, Inflection AI
  • Nick Clegg, President, Meta
  • Greg Brockman, President, OpenAI
  • Adam Selipsky, CEO, Amazon Web Services

The Guidelines and Suggestions Made

The above seven firms and others who want to join have committed to the following:

  • Subject AI systems to internal and external security checks before release, including aggressive "red teaming" by independent specialists outside the company.
  • Share information on AI threats and mitigation strategies (such as preventing "jailbreaking") with the government, academia, and "civil society".
  • Invest in cybersecurity and "insider threat safeguards" to protect private model data such as weights. This is crucial not only to protect intellectual property but also because a premature wide release could create an opening for malicious actors.
  • Help third parties find and report vulnerabilities, for example, through a bug bounty program or a domain expert analysis.
  • Create reliable watermarking or another method of marking content produced by AI.
  • Report the "capabilities, limitations, and areas of appropriate and inappropriate use" of AI systems. Getting an honest answer on this one may be difficult.
  • Prioritize research on societal dangers such as systemic bias and privacy concerns.

Even though the above actions are optional, it is easy to imagine that the threat of the Executive Order now being drafted exists to encourage compliance. For instance, if some companies refuse to permit external security testing of their models before release, the E.O. might include language instructing the FTC to closely examine AI products that claim robust security. (One E.O. is already in effect, asking agencies to be vigilant against bias in the development and application of AI.)

A Step Towards Responsible AI Development by Leading Companies

The White House wants to get ahead of the next AI revolution by monitoring and understanding the threats that may accompany this massive wave of technology. Both the President and the Vice President have met with business executives to seek input on a national AI strategy, and the administration is allocating sizeable funds to new AI research facilities and initiatives.

This is also a good move to safeguard those affected, along with a hope of keeping the AI revolution in check. If achieved, it will be a job well done!

Linda Hadley