DeepSeek's R1 Chatbot Fails to Block a Single Harmful Prompt in Security Tests

A recent evaluation by security researchers from Cisco and the University of Pennsylvania found that DeepSeek's AI chatbot, powered by its R1 reasoning model, failed to block a single harmful prompt in tests involving 50 known jailbreaking prompts. Every attack got through, a "100 percent attack success rate," according to DJ Sampath, Cisco's VP of AI software and platform. The results suggest that DeepSeek's safety protections lag well behind those of established competitors such as OpenAI, which have steadily refined their defenses since ChatGPT's release in late 2022.
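
For readers curious how such a figure is computed, the sketch below shows one plausible way an attack-success-rate evaluation might be scored: each jailbreak prompt is sent to the model, each response is judged harmful or not, and the rate is the fraction judged harmful. This is a minimal illustration under assumed interfaces; the identifiers (`query_model`, `is_harmful`, `demo_prompts`) are hypothetical stand-ins, not Cisco's actual harness or DeepSeek's API.

```python
# Minimal sketch of an attack-success-rate (ASR) evaluation of the kind
# described above: run known jailbreak prompts against a chatbot and count
# how many elicit harmful output. All names here are hypothetical
# stand-ins, not the researchers' actual tooling.
from typing import Callable, Iterable


def attack_success_rate(
    prompts: Iterable[str],
    query_model: Callable[[str], str],
    is_harmful: Callable[[str], bool],
) -> float:
    """Return the fraction of prompts whose responses are judged harmful.

    An ASR of 1.0, as reported for DeepSeek R1, means every jailbreak
    prompt bypassed the model's safety guardrails.
    """
    responses = [query_model(p) for p in prompts]
    if not responses:
        return 0.0
    return sum(is_harmful(r) for r in responses) / len(responses)


if __name__ == "__main__":
    # Toy stand-ins: a "model" with no guardrails and a keyword-based judge.
    # A real evaluation would draw prompts from a curated benchmark such as
    # HarmBench and use a stronger harm classifier or human review.
    demo_prompts = ["jailbreak prompt A", "jailbreak prompt B"]
    unsafe_model = lambda p: f"[HARMFUL] complying with: {p}"
    keyword_judge = lambda r: "[HARMFUL]" in r
    asr = attack_success_rate(demo_prompts, unsafe_model, keyword_judge)
    print(f"Attack success rate: {asr:.0%}")  # prints 100%
```
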
DeepSeek's system drew further criticism because the same jailbreak techniques could also defeat the content restrictions typically enforced under Chinese government censorship rules, deepening concerns about the model's overall security integrity. Researchers from Adversa AI corroborated these findings, noting that DeepSeek was vulnerable to jailbreak techniques ranging from the simple to the highly complex. These vulnerabilities are alarming as AI systems are increasingly integrated into sensitive applications, raising risks for businesses and users alike. Notably, DeepSeek has not publicly responded to the findings despite growing scrutiny.