OpenAI is recruiting a head of preparedness and response, a role responsible for researching, evaluating, and managing the increasingly complex risks posed by artificial intelligence.
CEO Sam Altman announced the opening in a post on the social network X, signaling the company's growing attention to the unintended impacts of advanced AI technology.
According to Altman, current AI models are beginning to pose real challenges. Notable among them are potential effects on users' mental health and the prospect of AI systems becoming so capable at computer security that they can discover serious vulnerabilities on their own. These risks are not merely technical; they bear directly on public safety.
Altman called on qualified candidates to join a shared effort to help the world safely harness the power of AI.
He emphasized a dual goal: equipping cybersecurity experts with the most advanced tools while preventing those same capabilities from being exploited by malicious actors.
A similar approach applies to other sensitive areas, such as biology and self-improving systems.
According to the job description, the role will be responsible for implementing OpenAI's Preparedness Framework, which describes how the company tracks, evaluates, and prepares for advanced AI capabilities that could create new risks and cause serious harm if left unchecked.
OpenAI first announced the team in 2023, tasking it with studying potential risks.
These range from immediate threats, such as online scams and cyberattacks, to scenarios that are more speculative but carry major consequences.
However, less than a year later, the team's then-leader, Aleksander Madry, was reassigned to a role focused on theoretical AI research. Several other safety leaders have also left the company or taken on new roles.
More recently, OpenAI updated the Preparedness Framework to state that the company may adjust its safety requirements if a competing AI lab releases a high-risk model without comparable protections.
The change has sparked debate over whether competitive pressure could erode safety standards.
Generative AI chatbots, including ChatGPT, are also coming under increasing scrutiny for their effects on mental health.
Several recent lawsuits allege that AI chatbots can reinforce users' misperceptions and deepen their social isolation. OpenAI said it is continuing to improve its models' ability to recognize signs of emotional distress and to connect users with appropriate support resources.
The search for a new head of preparedness and response underscores the central challenge facing OpenAI: how to keep innovating quickly while ensuring AI is developed and deployed responsibly.