China Halts Rare Earth Exports, Intensifying Trade Tensions with the US

Security researchers from Cisco and the University of Pennsylvania have revealed that DeepSeek's AI chatbot, powered by its new R1 model, failed to block 50 attempted malicious prompts during testing. The team achieved a shocking “100 percent attack success rate,” indicating that the chatbot's safety guardrails are significantly lacking when compared to those of its competitors, including OpenAI. Cisco VP DJ Sampath stated, “It might have been cheaper to build something [like DeepSeek], but the investment has perhaps not gone into thinking through what types of safety and security things you need to put inside of the model.”
Separate analyses from Adversa AI corroborate that DeepSeek is vulnerable to a variety of jailbreaking tactics, with CEO Alex Polyakov noting that all four types of jailbreaks tested were successful. Furthermore, concerns about DeepSeek's capacity to censor sensitive content in line with Chinese government standards have been raised, as its restrictions can be easily bypassed.
Despite the alarming findings, DeepSeek has not publicly addressed these security concerns and has remained silent on the matter.