Psychobots: AI Chatbots Offer Affordable Mental Health Support, Raising Ethical Concerns

An increasing number of people seeking affordable alternatives to traditional therapy are turning to artificial intelligence (AI) chatbots, known as "psychobots," for mental health support. These tools, which have emerged since the late 2010s, offer anonymous, judgment-free guidance, but they raise ethical concerns about both their efficacy and the risk of emotional manipulation. According to Jodi Halpern, an ethics expert at the University of California, Berkeley, simulating human empathy can be deceptive and potentially harmful.
Therapy chatbots such as Wysa and Pi provide differing levels of support: Wysa applies cognitive-behavioral therapy techniques, while Pi relies on advanced generative AI for more conversational interactions. Researchers, including Jean-Christophe Bélisle-Pipon of Simon Fraser University, warn that some bots exaggerate their capabilities, confusing vulnerable users. Studies suggest that while these bots can alleviate short-term psychological distress, they may not offer lasting benefits.
Despite these limitations, many people who lack access to qualified therapists find solace in these digital tools. The debate continues over whether psychobots can deliver meaningful support or merely serve as a temporary fix.