OpenAI, the developer of ChatGPT, has issued a warning that the emergence of superintelligent AI systems could bring catastrophic risks if left uncontrolled.
The company emphasized that the industry is approaching the milestone of recursive self-improvement, the ability of AI systems to learn and upgrade themselves without human intervention, which could lead to out-of-control consequences.
OpenAI believes that no organization should deploy superintelligent systems without first demonstrating the ability to control and align them safely.
This is the company's strongest warning since the global AI race began, a race in which new models are becoming more powerful and less predictable.
According to OpenAI, the potential risks are not limited to misinformation or technology abuse; they also extend to bioterrorism, privacy violations, and large-scale cyberattacks.
The company recommends that AI laboratories share information, safety standards, and lessons learned from new risks in order to build a common defensive foundation for the entire industry.
OpenAI also proposed a global monitoring system, coordinated among governments, research organizations, and the private sector, to create an AI resilience ecosystem similar to today's cybersecurity model.
This system would include security software, safety standards, emergency response teams, and mechanisms to prevent misuse.
The company also called on international regulators to avoid patchwork laws and disjointed regulations, and instead to create a unified legal framework for AI that both promotes innovation and protects users from risk.
OpenAI's warning comes as public figures ranging from Prince Harry and Meghan Markle to US scientists and politicians have called for a moratorium on the development of superintelligent AI, arguing that the technology could spiral out of control and threaten humanity's survival.
However, AI scientist Andrej Karpathy believes that AGI (artificial general intelligence) is still a decade away. He noted that current systems lack the ability to learn continuously, a key requirement for achieving human-like intelligence.
With a cautious but optimistic outlook, OpenAI predicts that by 2026 AI could make small scientific discoveries, and that by 2028 it could achieve more significant ones.
However, the company admits that the socio-economic transformation brought about by AI will not be easy, and that humanity must be prepared for the possibility that current socio-economic structures may need to change fundamentally.
"We need to act today to ensure the future of AI is safe and beneficial to all of humanity," OpenAI concluded.