If artificial intelligence poses risks to humanity, Professor Zico Kolter holds one of the most important roles in keeping those risks in check.
He chairs OpenAI's Safety and Security Committee, which has the authority to halt the release of any AI system it deems unsafe, whether a model that could be exploited by bad actors to build weapons or a chatbot that harms users' mental health.
Professor Kolter said his team does not only manage the biggest risks to humanity; it covers the full range of safety, security, and social-impact concerns that arise as AI is deployed ever more widely.
Kolter was appointed more than a year ago, but the position took on particular weight after California and Delaware wrote his committee's oversight into the legal agreements that allow OpenAI to restructure as a public benefit corporation.
Under those agreements, decisions on AI safety and security must take priority over financial interests. Although Kolter does not sit on the board of the for-profit entity, he retains full access to all information related to safety decisions at OpenAI.
He is joined by three other members, including Paul Nakasone, former commander of US Cyber Command. Their presence signals how rigorously OpenAI wants to manage the risks of its technology.
Zico Kolter, 42, is a computer science professor at Carnegie Mellon University. He entered the field of machine learning in the early 2000s, when AI was still a concept few people understood and one far removed from everyday reality.
Since then, he has become a leading expert in deep learning and AI security, collaborating with many major laboratories and following OpenAI closely since its founding in 2015.
Kolter believes the pace of AI development has far outstripped every prediction, creating unprecedented risks, from cybersecurity and data protection to AI's potential to aid the production of biological weapons.
"And finally, the impact of AI on human mental health and behavior, which we do not fully understand," Professor Kolter said.
After launching ChatGPT, OpenAI was accused of rushing the product to market to maintain its lead. The brief dismissal of CEO Sam Altman in 2023 further fueled suspicions that the company was neglecting its mission in pursuit of profit.
Since then, the role of Kolter and the Safety Committee has become a shield of public trust. Although he did not disclose details about delayed model releases, Kolter asserted, "We can request that any release be held back until the risks are mitigated."
Amid the wave of AI commercialization, the presence of people like Zico Kolter is a reminder that technological innovation must go hand in hand with social responsibility.
For him, AI is not merely a tool for making profit, but a force that can shape the future of humanity.