OpenAI is Developing a New Team to Control Superintelligent AI


OpenAI, the company that kicked off the current AI race with its highly capable chatbot ChatGPT, has established a dedicated team to steer and control ‘superintelligent’ AI systems. Ilya Sutskever, one of the company’s co-founders and its chief scientist, has been appointed to lead it.

According to an OpenAI blog post, AI capable of outperforming the human mind could arrive within a decade. There is no guarantee that such a ‘superintelligence’ will be beneficial to humans, so methods of controlling and constraining systems of this kind must be developed in advance.

OpenAI acknowledges that there is currently no way to steer or control a superintelligent AI and prevent it from acting against human interests. Existing alignment techniques rely on reinforcement learning from human feedback, and such methods depend on the assumption that humans can meaningfully supervise the AI, an assumption that is doubtful for systems potentially more intelligent than humans. The new team will have access to 20% of the company’s computing resources to study this problem, and will draw scientists and engineers from the company’s alignment division and other departments. The team aims to solve the core technical challenges of steering superintelligent AI within the next four years.
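The human-feedback technique mentioned above trains a reward model from human preference judgments between pairs of model outputs. The following is a minimal, self-contained sketch of that idea; the features, data, and linear reward model are hypothetical simplifications, not OpenAI’s actual implementation.

```python
import math

# Toy sketch of reinforcement learning from human feedback (RLHF):
# a reward model is fit to human pairwise preferences and can then
# score candidate responses. All features and data are hypothetical.

def fit_reward_model(preferences, n_features, lr=0.1, steps=500):
    """Fit linear reward weights from (preferred, rejected) feature
    pairs using the Bradley-Terry pairwise preference loss."""
    w = [0.0] * n_features
    for _ in range(steps):
        for better, worse in preferences:
            # Probability the preferred response wins under current weights
            diff = sum(wi * (b - c) for wi, b, c in zip(w, better, worse))
            p = 1.0 / (1.0 + math.exp(-diff))
            # Gradient ascent on the log-likelihood of the human choice
            for i in range(n_features):
                w[i] += lr * (1.0 - p) * (better[i] - worse[i])
    return w

def reward(w, features):
    """Score a response by its learned reward."""
    return sum(wi * f for wi, f in zip(w, features))

# Hypothetical features: [helpfulness, verbosity]; the human raters
# preferred helpful, concise answers over vague, verbose ones.
prefs = [([1.0, 0.2], [0.3, 0.9]),
         ([0.9, 0.1], [0.2, 0.8])]
w = fit_reward_model(prefs, n_features=2)
assert reward(w, [1.0, 0.2]) > reward(w, [0.3, 0.9])
```

In a real system the linear model would be a large neural network and the features would be full text responses, but the training signal, maximizing the likelihood of observed human preferences, is the same.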

The plan centers on using AI itself in the training loop: a dedicated AI evaluates other AI systems, checks their outputs against the desired outcomes, and verifies their reliability. OpenAI believes an AI can do this faster and at greater scale than human evaluators. As models become more capable, they are expected to take over an increasing share of the alignment work, developing better alignment techniques than those available today, with humans acting as supervisors rather than performing the research directly.
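The “AI evaluating AI” idea can be illustrated with a minimal sketch in which a judge model, rather than a human rater, selects the best of several candidate answers. Both models below are hypothetical stand-in functions; in practice each would be a large language model, and the judge would be trained rather than rule-based.

```python
# Minimal sketch of AI-assisted evaluation ("AI judging AI").
# candidate_model and judge_model are hypothetical stand-ins
# for trained language models.

def candidate_model(prompt):
    # Hypothetical model proposing several candidate answers.
    return ["2 + 2 = 4", "2 + 2 = 5", "the answer is four (4)"]

def judge_model(prompt, answer):
    # Hypothetical evaluator scoring an answer for correctness.
    # A real system would use a trained model, not a hand-written rule.
    return 1.0 if "4" in answer else 0.0

def best_answer(prompt):
    """Let the judge model rank candidates instead of a human rater."""
    answers = candidate_model(prompt)
    return max(answers, key=lambda a: judge_model(prompt, a))

print(best_answer("What is 2 + 2?"))  # selects a correct candidate
```

The design point is scalability: once the judge is reliable, it can supervise far more outputs than human raters could, which is why OpenAI expects AI evaluators to carry most of the workload as systems grow.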

Future of Superintelligent AI  

As OpenAI notes, no approach offers absolute protection against errors. Using one AI to evaluate others could even amplify faults and vulnerabilities in the models being developed, and the hardest parts of alignment may lie in the most unpredictable aspects of how the AI behaves. Still, the company argues that aligning superintelligence is primarily a machine-learning problem, so experts in that field will be critical to solving it. OpenAI intends to share the results of this research broadly, so that other projects, including those unconnected to OpenAI, can be aligned and made safe.

Critics of the effort argue that AI could surpass human intelligence before anyone works out how to keep such systems safe, Reuters reports. In an open letter published in April, experts and industry leaders issued a strong warning against developing AI more advanced than GPT-4.
