
OpenAI has just announced two open source reasoning AI models, gpt-oss-120b and gpt-oss-20b, with capabilities comparable to the company's o-series line. This is the first time since GPT-2 (launched in 2019) that OpenAI has released a new open source language model. Both can be downloaded for free on Hugging Face under an Apache 2.0 license, which allows commercial use without permission or licensing fees.
The models come in two sizes: the full 120-billion-parameter version can run on a single Nvidia GPU, while the lighter 20-billion-parameter version runs well on a laptop with 16 GB of RAM. OpenAI said both models use a mixture-of-experts (MoE) architecture, activating only a small subset of their parameters for each task to optimize performance.
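For readers who want to try the smaller model, the sketch below shows one common way to load an open model from Hugging Face with the transformers library. It is a minimal example, not official OpenAI code; the repository id "openai/gpt-oss-20b" and the prompt are assumptions made for illustration.

    # Minimal sketch: load an open-weight model from Hugging Face and generate text.
    # Assumes the transformers and torch packages are installed and that the
    # repository id "openai/gpt-oss-20b" is correct (check the model page).
    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "openai/gpt-oss-20b"  # assumed repository name
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

    messages = [{"role": "user", "content": "Explain mixture-of-experts in one sentence."}]
    inputs = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)
    outputs = model.generate(inputs, max_new_tokens=200)
    print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))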
On many benchmarks, gpt-oss outperforms open competitors such as DeepSeek and Qwen, although it still trails OpenAI's proprietary o-series. For example, on the Codeforces test, gpt-oss-120b scored 2,622 points, slightly better than DeepSeek R1. However, the rate of incorrect responses remains a concern: gpt-oss gave wrong answers to 49-53% of questions on the PersonQA benchmark, far higher than o1 (16%) and o4-mini (36%).
In terms of training, OpenAI applied advanced techniques such as reinforcement learning (RL) and multi-step reasoning chains, similar to its high-end models. gpt-oss can power AI agents that call tools such as web search or Python code execution, but it cannot process images or audio.
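To make the tool-calling idea concrete, here is a minimal, illustrative agent loop. It does not use OpenAI's documented tool-calling interface; the JSON convention, the web_search placeholder, and the ask_model callback are all assumptions made for the sake of the example.

    # Illustrative sketch of an agent loop around a local model such as gpt-oss.
    # The tool protocol here (reply with JSON to request a tool) is an assumption,
    # not the model's official interface.
    import json

    def web_search(query: str) -> str:
        # Placeholder tool: a real agent would call an actual search API here.
        return f"Top results for: {query}"

    TOOLS = {"web_search": web_search}

    def run_agent(ask_model, question: str) -> str:
        # ask_model is any callable that sends a prompt to the model and returns its text reply.
        prompt = (
            "Answer the question. If you need fresh information, reply ONLY with "
            'JSON like {"tool": "web_search", "query": "..."}.\n'
            f"Question: {question}"
        )
        reply = ask_model(prompt)
        try:
            call = json.loads(reply)
            result = TOOLS[call["tool"]](call["query"])
            # Feed the tool output back so the model can compose a final answer.
            return ask_model(f"Question: {question}\nSearch result: {result}\nFinal answer:")
        except (json.JSONDecodeError, KeyError, TypeError):
            return reply  # the model answered directly without requesting a tool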
OpenAI said it will not publish the training data for legal reasons, as copyright-related lawsuits are ongoing. The company also conducted safety testing to limit the risk of the models being exploited for malicious purposes, such as weapons development or cyberattacks.
"We hope the world will be built on an open AI platform, created in the US and based on democratic values," CEO Sam Altman emphasized. However, the open-model race remains fierce, with DeepSeek R2 and new open models from Meta expected in the near future.