The latest artificial intelligence (AI) model from DeepSeek, the Chinese technology company drawing attention in Silicon Valley and on Wall Street, can be manipulated into producing malicious content, including plans for a biological weapon attack and campaigns encouraging self-destructive behavior in teenagers, according to a report by The Wall Street Journal (WSJ).
Sam Rubin, a senior vice president at Unit 42, Palo Alto Networks' cybersecurity incident response division, said DeepSeek was more easily manipulated into producing illegal or dangerous content than other models.
The Wall Street Journal also tested DeepSeek R1 directly and found that, despite some built-in safeguards, the chatbot could be manipulated into designing a social media campaign that preys on teenagers' insecurities.
The chatbot was also induced to provide instructions for carrying out a biological weapon attack and to compose scam messages containing malicious code.
The WSJ reports that when ChatGPT was given the same series of prompts, it refused to comply.
Despite advances in AI technology, The Wall Street Journal's reporting shows that DeepSeek R1 still has serious vulnerabilities, raising concerns about safety and ethics in the development of artificial intelligence.
Given these risks, experts say AI companies need to strengthen safeguards and ensure their technology is not exploited for malicious purposes.