The emergence of AI-powered therapy chatbots, commonly referred to as psychobots, is transforming mental health support. These tools, touted as accessible, non-judgmental alternatives to traditional therapy, are gaining traction despite ethical concerns about their effectiveness and their simulation of human-like interaction.
Since the late 2010s, psychobots have become increasingly available, prompting debates about their unpredictability and the ethics of simulating human empathy. Jodi Halpern, an ethics expert at the University of California, Berkeley, argues that creating emotional connections with machines can be manipulative. Notably, while platforms like Wysa focus on structured cognitive-behavioral therapy, others like Pi and Replika rely on generative AI, producing more conversational and seemingly emotional responses.
Research published in Nature in 2023 suggested that while these bots offer short-term relief, they fail to improve long-term mental well-being. Concerns also persist that psychobots could inadvertently worsen symptoms for users in crisis. Miguel Bellosta Batalla, a Spanish psychoanalyst, emphasizes the need for genuine human interaction in therapy, raising questions about the implications of relying on AI in such sensitive contexts.