While technology companies race to build ever larger and more complex artificial intelligence models, Alibaba has taken a different direction by introducing a series of compact AI models under the new Qwen 3.5 line.
The line comprises four models, Qwen 3.5-0.8B, 2B, 4B and 9B, spanning 0.8 to 9 billion parameters. According to Alibaba, they are designed to deliver strong reasoning capabilities while keeping their size small, serving developers who need efficient and flexible AI solutions.
Notably, all Qwen 3.5 models are built on the same architecture and offer multimodal support, allowing them to process both text and images.
Each model comes in two versions: a base variant for developers who want to fine-tune it themselves, and an instruction-tuned variant that can be deployed immediately.
Among them, Qwen 3.5-9B, the largest model in the line, is attracting the most attention. According to Alibaba, it achieves performance comparable to much larger models, including gpt-oss-120b.
Despite the large gap in size, Qwen 3.5-9B demonstrated competitive reasoning and knowledge-processing performance in several tests.
The company said that on tasks such as logical reasoning, solving math problems and document analysis, Qwen 3.5-9B can match the results of large AI chatbots such as OpenAI's ChatGPT and Google's Gemini.
Meanwhile, the two smallest models, Qwen 3.5-0.8B and 2B, are optimized for computationally constrained devices such as laptops and smartphones.
Although their reasoning capabilities are not as strong as those of the larger versions, they can still process both text and images.
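To make the size trade-off concrete, here is a minimal, illustrative sketch (not an official Alibaba tool) of how a developer might pick the largest variant that fits a device's memory budget. It assumes the rough rule of thumb that fp16 weights take about 2 bytes per parameter, plus some runtime overhead; the model names mirror those in the article, and the overhead factor is an assumption for illustration.

```python
# Hypothetical helper: choose the largest Qwen 3.5 variant whose fp16
# weights (plus a runtime overhead factor) fit a given memory budget.
# Rule of thumb: fp16 uses ~2 bytes per parameter.

QWEN_35_VARIANTS = {
    "Qwen3.5-0.8B": 0.8e9,
    "Qwen3.5-2B": 2e9,
    "Qwen3.5-4B": 4e9,
    "Qwen3.5-9B": 9e9,
}

def largest_fitting_variant(memory_gb: float, overhead: float = 1.3):
    """Return the biggest variant whose estimated footprint fits memory_gb."""
    bytes_budget = memory_gb * 1e9
    best = None
    # Walk variants from smallest to largest, keeping the last one that fits.
    for name, params in sorted(QWEN_35_VARIANTS.items(), key=lambda kv: kv[1]):
        needed = params * 2 * overhead  # 2 bytes/param in fp16, plus overhead
        if needed <= bytes_budget:
            best = name
    return best

print(largest_fitting_variant(8))   # a typical 8 GB laptop
print(largest_fitting_variant(32))  # workstation-class memory
```

Under these assumptions, an 8 GB laptop lands on the 2B model, while the 9B model's roughly 23 GB estimated footprint calls for a machine with more memory, which matches the article's framing of the 0.8B and 2B variants as the on-device options.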
The Qwen 3.5 models have now been released as open-weight models, allowing developers to download and run them locally via popular platforms such as Hugging Face and ModelScope.
The launch has also drawn attention across the technology world. On the social network X, xAI CEO Elon Musk commented that the Qwen 3.5 models possess "impressive intelligence density", that is, strong reasoning and task-handling ability relative to their small parameter counts.
The success of Qwen 3.5 points to a new trend in AI development: instead of focusing solely on scaling up, companies are finding ways to optimize performance in smaller models, making AI easier to deploy and less demanding on computing resources.