OpenAI warns of risks from super-smart AI, calls for global monitoring

Cát Tiên

OpenAI warns that super-smart AI systems could cause disaster if not strictly controlled.

OpenAI, the developer of ChatGPT, has issued a warning that the emergence of superintelligent AI systems could bring the risk of disaster if left uncontrolled.

The company emphasized that the industry is approaching the milestone of AI self-improvement, the ability of a system to learn and upgrade itself without human intervention, which could lead to uncontrollable consequences.

OpenAI believes that no organization should deploy superintelligent systems without demonstrating the ability to control and align them safely.

This is the company's strongest warning since the global AI race began, in which new models are becoming ever more powerful and unpredictable.

According to OpenAI, the potential risks go beyond misinformation and technology abuse; they also include bioterrorism, privacy violations and large-scale cyberattacks.

The company recommends that AI laboratories share information, safety standards and lessons about emerging risks to build a common defensive foundation for the entire industry.

OpenAI also proposed a global monitoring system, coordinated with governments, research organizations and the private sector, to create an AI resilience ecosystem similar to today's cybersecurity model.

This system would include encryption software, safety standards, emergency response teams and mechanisms to prevent misuse.

The company also called on international regulators to avoid patchwork laws and disjointed regulations, and instead create a unified legal framework for AI that both promotes innovation and protects users from risk.

OpenAI's warning comes as many public figures, from Prince Harry and Meghan Markle to US scientists and politicians, have also called for a moratorium on the development of artificial superintelligence, arguing that the technology could spin out of control and threaten humanity's survival.

However, AI scientist Andrej Karpathy believes that AGI (artificial general intelligence) is still a decade away. He noted that current systems lack the capacity for continual learning, a key requirement for achieving human-like intelligence.

With a cautious but optimistic outlook, OpenAI predicts that by 2026 AI could make small scientific discoveries, and by 2028 more significant ones.

However, the company admits that the socio-economic transformation brought by AI will not be easy, and that humanity must be prepared for the possibility that today's socio-economic structures may have to change fundamentally.

"We need to act today to ensure that the future of AI is safe and beneficial to all of humanity," OpenAI concluded.

Cát Tiên
RELATED NEWS

OpenAI announces IndQA benchmark to capture Indian cultural identity

OpenAI introduces the IndQA benchmark to assess the cultural and linguistic capabilities of AI models, aiming to narrow the gap between Indic-language LLMs and the rest of the world.

Zico Kolter and the mission of keeping users safe from the power of AI at OpenAI

Professor Zico Kolter of Carnegie Mellon University currently leads OpenAI's safety panel, with the authority to block the release of unsafe AI models.

More information about OpenAI's AI music creation project leaked

More details about OpenAI's artificial intelligence (AI) music creation project have emerged.

