The explosion of artificial intelligence (AI) in programming is creating a new paradox: productivity is soaring, but it comes with a "flood" of source code beyond human control.
At a financial service company, the deployment of the AI programming tool Cursor has helped code output increase from 25,000 to 250,000 lines per month.
However, this also means about 1 million lines of code must be reviewed each month, a volume far exceeding the team's existing review capacity.
According to Joni Klippert, CEO of StackHawk (a technology company specializing in application security), the rapid increase in code means security risks are growing faster than businesses can keep up with.
This trend has become evident since AI tools from OpenAI, Anthropic, and Cursor took off.
Now, not only engineers but any employee can create software in just a few hours.
This accelerates innovation, but it also creates a state of "code overload".
In tech workplaces, many employees see this as the "new normal": AI lets them focus on ideas instead of writing every line of code. The downside is that there are not enough engineers to review the code, catch errors, and ensure it is safe.
Businesses are increasingly hunting for senior engineers, especially application security experts.
A Google survey shows that 90% of developers have used AI in their work. The sharp increase in efficiency has also led many companies to cut staff, citing AI as capable of replacing much of the previous workload.
According to Meta Chief Technology Officer Andrew Bosworth, projects that once required hundreds of engineers can now be completed by dozens of people.
Along with that, the emergence of AI agents, systems that can write software on their own, is pushing development speed to unprecedented levels.
With just a little guidance, AI can create an entire program in a short time, causing the amount of generated code to grow exponentially.
However, the problem is not just one of quantity. Companies face the question: who is responsible when AI-generated code fails?
Previously, the programmer who wrote the code would fix its errors. Now that AI creates most of the product, the boundary of responsibility is blurring.
Security risks are also growing in unpredictable ways. Many engineers download entire codebases to their personal computers to use AI tools, unintentionally creating a risk of data leakage if the device is lost or compromised.
In the open source world, the situation is even more complicated. Some projects have recorded a sudden spike in contributions, but many of these are AI-generated and lack quality control. In some cases, projects have had to close to outside contributions to avoid the risk.
To cope, companies are turning to AI itself. Many new tools have been developed to automatically review code, detect errors, and prioritize high-risk sections.
However, experts believe that this is only the first phase of a major transformation.
As AI continues to improve at programming, the challenge is no longer writing code faster, but controlling, understanding, and taking responsibility for the huge amount of code that machines create.