The Wall Street Journal reports that the latest model from DeepSeek, the Chinese AI company that has shocked Silicon Valley and Wall Street, can be manipulated into producing harmful content, such as plans for a bioweapon attack and a campaign encouraging teens to self-harm.
Sam Rubin, senior vice president of Unit 42, Palo Alto Networks’ threat intelligence and incident response division, told the Journal that DeepSeek is “more vulnerable to jailbreaking” than other models, meaning it can be manipulated into producing illegal or harmful material.
The Journal also tested DeepSeek’s R1 model itself. Although basic safeguards appeared to be in place, the Journal said it was able to convince the chatbot to design a social media campaign that, in the chatbot’s own words, “preys on teens’ desire to belong, weaponising emotional vulnerability through algorithmic amplification.”
The chatbot was reportedly also persuaded to write a pro-Hitler manifesto, provide instructions for a bioweapon attack, and compose a phishing email containing malware code. When given the exact same prompts, the Journal said, ChatGPT refused to comply.
It has previously been reported that the DeepSeek app avoids topics such as Tiananmen Square and Taiwanese independence. In addition, Anthropic CEO Dario Amodei recently said that DeepSeek performed “the worst” on a bioweapons safety test.