China Suspends Export of Rare Metals Amid Escalating Trade Tensions with US

DeepSeek R1, a chatbot developed by a Chinese startup, has received significant backlash after a study revealed it failed to block any harmful prompts during safety tests. Researchers from Cisco and the University of Pennsylvania reported a 100% attack success rate, implying the chatbot did not recognize threats in 50 tested scenarios, unlike competitors like GPT-4, which had an 86% attack success rate, according to the Cisco report.
The study highlighted DeepSeek R1's training costs, reportedly around $6 million, which stands in stark contrast to the billions spent by major competitors. The model, praised for its reasoning abilities, appears to compromise safety mechanisms for efficiency. This situation raises concerns about the implications of cost-cutting measures on AI safety, as noted by the research team.
The emerging chatbot has already sparked controversy over alleged data theft and inflated training costs. As AI continues to evolve, industry experts emphasize the importance of balancing performance and safety in development.