IWF Reports 380% Surge in AI-Generated Child Sexual Abuse Imagery in 2024

The Internet Watch Foundation (IWF) has reported a staggering 380% increase in AI-generated child sexual abuse imagery in 2024, highlighting serious concerns about online safety. The annual report indicates that the organization received 245 reports of such material, comprising 7,644 images and a small number of videos. Notably, 39% of these images fell under "category A," which represents the most extreme forms of child exploitation.
According to the IWF, the advancements in artificial intelligence have led to increasingly realistic depictions of abuse that can be indistinguishable from real content, even for trained analysts. The UK government plans to introduce legislation that will criminalize the possession and creation of AI tools designed to generate such abuse imagery, effectively closing a legal loophole.
Derek Ray-Hill, interim chief executive of the IWF, announced the deployment of a safety tool called Image Intercept, which will help smaller websites prevent the spread of illegal content online. The tool was launched in response to the new Online Safety Act, which aims to protect children from evolving online threats, including sextortion, in which minors are blackmailed with intimate images.