Artificial intelligence company Anthropic has confirmed a serious incident involving the leak of source code for Claude Code, its widely used AI programming tool.
Although it is not considered a cyberattack, the incident raises serious concerns, especially amid increasingly fierce competition in the AI field.
According to Anthropic, the cause was human error during the packaging of a software release.
Specifically, version 2.1.88 of Claude Code unintentionally shipped with an internal file, exposing nearly 20,000 files containing more than 512,000 lines of source code.
The data was quickly discovered and spread across platforms such as GitHub and the social network X, attracting tens of millions of views in a short time.
Anthropic representatives affirmed that no sensitive customer data or login credentials were leaked.
However, the leaked source code reportedly contains detailed information on how Claude Code works, including how it uses tools, its system constraints, and how it handles programming tasks, all of which is valuable to competitors.
The incident immediately drew attention from the programming community and security experts. Many argue that the leaked source code could help rival AI companies better understand how Anthropic builds its products, shortening their own development time or revealing weaknesses to exploit.
With giants such as OpenAI, Google, and xAI racing to build AI programming tools, this is seen as a significant setback for Anthropic.
Anthropic introduced Claude Code in May 2025 as a command-line tool for AI-assisted programming, allowing users to write and edit code, fix bugs, and automate many tasks.
The product was quickly embraced, helping push the company's annual revenue past 2.5 billion USD by early 2026.
The leak also puts a spotlight on Anthropic's "closed-door" development strategy, under which source code and core technology are kept far less public than in open-source models.
When internal information leaks, the competitive risk becomes even clearer, as rivals can exploit it to better understand the product's architecture and design philosophy.
Notably, this is the second leak of internal Anthropic information in less than a week.
Earlier, a draft blog post describing an unreleased AI model called Claude Mythos was illegally accessed from an unsecured data store.
According to the leaked material, Mythos is expected to outperform previous models, while also posing significant cybersecurity risks.
Anthropic says it is implementing measures to prevent similar incidents in the future. Still, the episode sounds yet another alarm about internal controls and data security in the fast-moving AI industry.