The latest edition of the AI Safety Index, recently published by the Future of Life Institute, shows that the safety measures of many leading AI companies, including Anthropic, OpenAI, xAI and Meta, still fall far short of the global standards gradually taking shape to ensure AI is developed safely.
The assessment was carried out by a panel of independent experts who analyzed the companies' policies, strategies and public reports on AI.
According to the research team, although the race to build superintelligent systems is intensifying, no company yet has a complete strategy for controlling AI models capable of surpassing humans in reasoning and logical thinking.
This has heightened public concern, especially after several cases in which users died by suicide or harmed themselves following interactions with chatbots.
Despite the controversy surrounding AI's risks, US AI companies remain more laxly regulated than restaurants, emphasized Professor Max Tegmark, president of the Future of Life Institute and a professor at MIT.
The warnings come as the AI race shows no signs of slowing down: major technology corporations continue to commit hundreds of billions of dollars to expanding infrastructure and machine-learning capacity.
However, experts say the pace of technological development is far outstripping companies' efforts to control the risks.
The Future of Life Institute, founded in 2014 and once backed by Elon Musk, has long been a leading voice warning about AI risks.
In October, many prominent scientists, including Geoffrey Hinton and Yoshua Bengio, called for a moratorium on the development of superintelligent artificial intelligence until society reaches consensus and science finds a safe path forward.