Xiaomi has officially announced MiMo-V2-Flash, its latest open-source artificial intelligence (AI) model, marking an important step forward in its ambition to become a "platform player" in the AI field.
The model is designed to handle complex reasoning, programming and agentic AI tasks, and can also act as a general-purpose assistant in daily life.
According to Xiaomi, MiMo-V2-Flash reaches inference speeds of up to 150 tokens per second at low operating cost: roughly 0.1 USD per million input tokens and 0.3 USD per million output tokens.
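Those per-million-token prices make back-of-the-envelope cost estimates straightforward. The small helper below is purely illustrative (the function name and example token counts are assumptions, not part of any official calculator), applying the quoted rates to a hypothetical request:

```python
def estimate_cost_usd(input_tokens: int, output_tokens: int,
                      input_price: float = 0.1, output_price: float = 0.3) -> float:
    """Approximate cost in USD for one request, using the quoted
    per-million-token rates (0.1 USD input, 0.3 USD output)."""
    return (input_tokens / 1_000_000) * input_price + \
           (output_tokens / 1_000_000) * output_price

# Example: a 10,000-token prompt producing a 2,000-token response
cost = estimate_cost_usd(10_000, 2_000)
print(f"{cost:.4f} USD")  # 0.0010 + 0.0006 = 0.0016 USD
```

At these rates, even a fairly long prompt costs a fraction of a cent, which is the scale Xiaomi is emphasising against competitors.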
The model has 309 billion parameters, reflecting its large scale, as parameter count is often treated as a rough measure of an AI model's capability.
MiMo-V2-Flash is now available for download via MiMo Studio, Xiaomi's developer portal, as well as on Hugging Face and through the company's API platform.
This is the latest version in the MiMo model line, and it clearly demonstrates Xiaomi's strategy of expanding from hardware into platform-level AI models, competing directly with big names such as DeepSeek, Anthropic and OpenAI.
The launch of MiMo-V2-Flash also coincides with the period when Xiaomi is pushing to integrate AI into smartphones, tablets and electric vehicles.
According to Ms. Luo Fuli, a former DeepSeek researcher who recently joined the MiMo team, this is only the second step in Xiaomi's roadmap towards AGI (artificial general intelligence), but it already reflects distinct technical choices.
Xiaomi Chairman Lu Weibing also affirmed that the company's progress in large-scale AI models and applications has exceeded expectations.
He believes that the deep combination of AI and the physical world, where hardware devices interact directly with humans, could become the next breakthrough in technology.
Technically, MiMo-V2-Flash uses the Mixture-of-Experts (MoE) architecture, which divides a large neural network into many "experts" and activates only a subset of them for each token, balancing performance against computing efficiency.
This approach also helps reduce the cost of handling long prompts, by limiting how much prior context the model has to re-evaluate.
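The routing idea behind MoE can be sketched in a few lines. The toy example below is a generic top-k gating scheme with made-up dimensions and expert counts; it is not MiMo-V2-Flash's actual implementation, only an illustration of how a gate selects a small subset of experts per token:

```python
import numpy as np

rng = np.random.default_rng(0)
d_model, n_experts, top_k = 8, 4, 2  # illustrative sizes, far smaller than any real model

# Each "expert" is just a small linear layer here
experts = [rng.standard_normal((d_model, d_model)) for _ in range(n_experts)]
gate_w = rng.standard_normal((d_model, n_experts))

def moe_forward(x: np.ndarray) -> np.ndarray:
    """Route a single token vector x to its top_k experts and mix their outputs."""
    logits = x @ gate_w
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                           # softmax gate scores
    chosen = np.argsort(probs)[-top_k:]            # only top_k experts actually run
    weights = probs[chosen] / probs[chosen].sum()  # renormalise over chosen experts
    return sum(w * (x @ experts[i]) for w, i in zip(weights, chosen))

y = moe_forward(rng.standard_normal(d_model))
print(y.shape)  # (8,)
```

Because only `top_k` of the `n_experts` weight matrices are multiplied for each token, compute per token scales with the active experts rather than the full parameter count, which is how a 309-billion-parameter model can still be cheap to run.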
In benchmark tests, Xiaomi said MiMo-V2-Flash achieved results on par with Moonshot AI's Kimi K2 Thinking and DeepSeek V3.2 on most reasoning benchmarks, while surpassing Kimi K2 on long-horizon evaluations.
Notably, the model scored 73.4% on SWE-Bench Verified, surpassing all competing open-source AI models.
Xiaomi also claimed that MiMo-V2-Flash's programming capabilities match Anthropic's Claude Sonnet 4.5, but at a significantly lower cost.
This suggests Xiaomi is pursuing not just raw performance, but also the ability to deploy AI at scale and at a reasonable cost.