DeepMind Paper Warns AGI Could Arrive by 2030, Calls for Global Oversight

A new paper from Google DeepMind argues that Artificial General Intelligence (AGI) could emerge as early as 2030 and could pose significant existential risks to humanity. The paper stresses the need for societal oversight of how AGI is developed and deployed in order to avert catastrophic outcomes. Co-authored by DeepMind co-founder Shane Legg, the study sorts the risks of advanced AI into four main areas: misuse, misalignment, mistakes, and structural risks.
DeepMind CEO Demis Hassabis has called for the establishment of an international governing body, akin to CERN and the International Atomic Energy Agency (IAEA), to regulate AGI development. He advocates a collaborative global approach to ensure that AGI development proceeds safely and responsibly, with multiple countries jointly deciding how AGI systems should be used.
The study notes that how society comes to understand risk and harm from AGI will shape its governance. "Given the massive potential impact of AGI," the paper warns, "the threat of severe harm must be taken seriously," underscoring the urgency of comprehensive risk-mitigation strategies.