By Bart Magee, Ph.D.

A new generation of A.I. technologies released this spring shocked the world with its power, the extent of which we are only beginning to comprehend. The awareness that we are just at the floppy-disk phase of their development adds to our collective shudder, and our awe. I have no doubt that A.I. will fundamentally change mental healthcare. While the opportunities are tremendous, I have profound concerns about the potential for harm, not only to individual patients but to our collective mental health.

From a treatment perspective, the potential is great. A.I. systems can aid mental health research, assist in diagnosis and treatment planning, and provide adjunct support. The impact could be transformative, especially now, given the current shortage of mental health providers. We have underinvested in provider training for years, and the financial incentives for entering the mental health field have been abysmal. Education and training take years and cost a fortune, while insurance reimbursement rates have remained shamefully low. Increased stress and social isolation during the COVID pandemic accelerated our already deteriorating collective mental health, and successful efforts at reducing mental health stigma have led to a huge increase in demand for services. There just aren't enough providers to go around. Enter Artificial Intelligence.
A.I. systems lack the capacity for empathy, emotional intelligence, lived experience, self-awareness, and personal connection, and those deficits make it impossible for them to serve as actual therapists. The evidence is clear that the relationship between patient and therapist is the most important factor in treatment outcomes, and even the most sophisticated machines lack the basic human capacities for relational depth.

Yet a strong counterargument has already emerged: What about people without another option? Wouldn't a facsimile of a therapeutic relationship be better than none at all? These systems are improving rapidly and will be able to approximate the therapy experience. Indeed, people have reported becoming deeply attached to bots developed as companions. The allure of an A.I. therapist that remembers every story you have ever told, is always there for you (no waiting for an appointment), and has every therapy tool and the data to support its use instantly available could be hard to resist. Add to that the massive financial incentives already driving the development and deployment of these more personal A.I. systems.

The potential for harm is multifaceted. Much of the concern has to do with rapid development and deployment under virtually no regulation or oversight. No research has been done on these systems' efficacy, and the fact that A.I. bots can "hallucinate," inventing information and giving harmful advice to emotionally vulnerable people, has caused understandable alarm. Regulators and advocates need to catch up quickly, and mental health professionals need a seat at the table so that new systems are not just built with better guardrails but programmed with ethical principles.

The problems go deeper, though. As alluring as it is to imagine a machine that could provide a near-perfect source of support, we need to consider the shadow side. Good therapy and real change come from much more than insight, support, and new psychological and emotional tools. Therapy helps people navigate the real challenges and disappointments of life, and a key way the therapist does this is through the inevitable disappointments that happen, in real time, in the relationship with the therapist. Learning from experience and developing one's capacity to relate to others despite their imperfections is fundamental to mental health. It is hard to imagine how a machine that cannot know what disappointment feels like could be the vehicle for that kind of development. The change process involves nuance, creativity, and complexity. Machines have no bodies and do not experience emotions or sensations, so they cannot possess the embodied intelligence that humans have. As advanced as artificial intelligence becomes in some domains, in others it will remain forever inanimate.

Loneliness and disconnection have risen steadily in recent years, and there is strong evidence that connecting to others via digital media has contributed to the trend. Remember, social media employs A.I., and we have already been experiencing its deleterious effects. COVID-related social isolation has furthered the drift toward disconnection. Much of my recent work with patients involves their social isolation and anxiety, and helping them find ways of reconnecting to others. It takes patience, focus, and emotional resilience, and the more time one spends in isolation, the harder it gets.
I am concerned that our current loneliness epidemic will create a big opening for a new generation of chatbot companions, reaping profits for their creators and leaving ever more social disconnection in their wake. We should be talking about how to facilitate connection, whether through community groups, amateur sports leagues, the arts, or volunteer activities, not handing people more digital tools that are already degrading our mental health and our humanity.

We are facing a near future in which A.I. is ever more integrated into everything we do, and the boundary between what is human and what is machine becomes blurrier than ever. How are we going to recognize, value, and preserve experiences that are fundamentally human? More and more, A.I. will help make decisions for us, guiding us through the wilderness of life's choices, and along the way our ability to discriminate between our own desires and the confident recommendations of the algorithm will blur. In this environment, a therapist you meet in person at a scheduled weekly time provides space for deep thinking, creative relating, and a respite from a digitally entangled life.

Training more mental health professionals, finding new ways to deliver mental healthcare services, and helping all people develop basic social-emotional capacities should be national priorities. If we achieve that, new technology can be used as technology should be: as a tool, an enhancement to our cognitive abilities, not a replacement for essential human relationships.