Anthropic CEO Dario Amodei has publicly criticized OpenAI after the ChatGPT app recorded a sharp spike in uninstalls. The backlash is said to be tied to an artificial intelligence agreement between OpenAI and the US military.
Immediately after the US government reached an agreement with OpenAI on the use of its AI technology, Sensor Tower data showed that ChatGPT uninstalls surged by about 295% on Saturday, February 28, compared with an average daily increase of just 9% over the previous 30 days.
The figures suggest a negative reaction from some users to the prospect of AI being deployed in the military domain.
Facing mounting criticism, OpenAI CEO Sam Altman said the company would not rush to implement the agreement.
He affirmed that OpenAI is reviewing the terms of the deal to make clearer which uses of its AI technology are prohibited.
In a post on X, Altman said the agreement is being amended to add a clause affirming that OpenAI's AI models will not be used for domestic surveillance.
The company also said it has added provisions to prevent the government from using large-scale US commercial data for surveillance activities.
The reaction from competitors, however, has been harsh. In an internal memo obtained by the media, Anthropic CEO Dario Amodei called OpenAI's agreement with the US military nothing more than a "safe play".
According to Amodei, OpenAI accepted the deal mainly to appease its employees and public opinion, while Anthropic says its own focus is on preventing the misuse of artificial intelligence.
He also accused Altman of sending misleading messages by presenting himself as "a mediator and settler".
Amodei said many members of the public and the media view OpenAI's deal with the Pentagon as shady or suspicious.
The controversy also revolves around a clause in the US Department of Defense contract that allows the AI technology to be used for all legitimate purposes.
Experts believe this wording could open the door to using AI in many sensitive areas, depending on how the law is applied in the future.
OpenAI maintains that large-scale domestic surveillance is illegal under current regulations.
However, some argue that the law could change over time, loosening today's limits.
Meanwhile, after earlier negotiations fell through, Anthropic is reportedly continuing to approach the US government about supplying AI technology to the military.
The debate between the two leading AI companies highlights the complex issues surrounding the use of artificial intelligence in the military, especially as the technology plays an increasingly important role in national security strategy.