AI 'Psychobots' Promise Accessible Mental Health Support, but Experts Urge Caution

A growing number of people are turning to artificial intelligence (AI) tools, known as psychobots, for mental health support. These bots, which emerged in the late 2010s, promise psychotherapeutic benefits but raise significant ethical questions, according to Jodi Halpern, an ethics expert at the University of California, Berkeley. She argues that machines that simulate empathy to build emotional intimacy with users can be deceptive.
The landscape of mental health support now includes a range of chatbots, such as Wysa and Pi, each built around a different therapeutic approach. Wysa draws on cognitive-behavioral therapy techniques, while Pi uses advanced language models to create seemingly deep connections with users. Skeptics such as Jean-Christophe Bélisle-Pipon, a researcher at Simon Fraser University, warn that these bots may misrepresent their capabilities, leading users to expect more than they can actually deliver.
Despite their appeal and their potential to make support more accessible, experts advise caution, noting that these tools cannot substitute for the nuanced human connection at the heart of psychotherapy. Multiple studies indicate that while psychobots can provide short-term relief, their long-term effectiveness remains uncertain.