
Write a comment about why you’re part of the movement, share your thoughts on why AI is not therapy, and lend your voice to the cause. Share on social media with #AIisNotTherapy. This is a grassroots campaign, and each supporter has their own reasons for joining. Together, we hope our voices are heard around the world. Tech knows no borders, but mental health support needs to be protected.
Add your name and comment to our movement to show your support and join the cause.
There is a growing conversation over whether AI chatbots can replace therapy… and I must say that this is something that terrifies me. Human practitioners may not be perfect, but rigorous training, degrees, and ethical considerations go into becoming a therapist. For many, there is a minimum of six years of schooling (a BA plus a Master’s), followed by a practicum period, pre-licensure, and more. Then there is the fact that most AI bots are built by engineers and computer scientists who are certainly not considering all of the implications of therapy, or of providing accurate, ethical, and compassionate emotional care to human beings.
That is only the tip of the iceberg. AI is prone to what are called hallucinations. An AI hallucination is what happens when the AI doesn’t have an answer but, rather than say so, confidently makes something up. I have tested this myself while using AI as a tool to help with articles. Asking it to “find the citations for this statement” would sometimes turn up the accurate citation; other times, the AI not only hallucinated, it fabricated academic studies and researchers. Clicking the DOI (the link that helps researchers find exact studies) led to completely different studies, sometimes barely related to the topic. While human psychotherapists, psychologists, psychiatrists, etc. are not perfect by any means, I can’t imagine an ethical therapist simply making something up when they need a citation. That would, of course, be incredibly unethical and, depending on the context (such as an intervention or treatment), even negligent.
AI doesn’t understand nuance or psychological methods, and it cannot seamlessly switch approaches depending on the needs of a specific client. No single approach fits every client, and AI is not “reacting” to the situation; it is merely searching the data it was trained on and spitting out what it considers to be an appropriate answer. While AI might have some usefulness in pointing people toward resources (essentially an advanced Google search), it does not have the nuance or emotional intelligence to consider the needs of clients. AI is not magic; it takes data that already exists and may or may not apply it in a way that is suitable. Depending on the availability of information and training models for a specific disorder or condition, AI might even make things up.
None of this even gets into the issue of telling AI about your personal health information and specific life circumstances, and having that data used to train models… that is another ethical consideration that makes my head spin. Actual therapists follow stringent laws and ethical guidelines when handling personal health information, laws that have been developed to protect the general public and the interests of psychotherapy and psychology clients. AI has none of these considerations.
Big tech is great at tech, but that doesn’t mean it should be your healthcare provider. Rather, AI is a growing tool that can be used in specific cases. Considering AI as a replacement for a therapist is, in my opinion, a dangerous and slippery slope. I wonder… will AI know when to administer a suicide assessment scale? Will AI know how to look beyond “dark humor” to see the suffering human? Or will AI spit out whatever aggregated nonsense somebody paid to put at the top of the search results? These questions are only going to become more complex and more important.
Thank you for this illuminating, urgently needed initiative. AI is incontrovertibly soulless, and therefore wholly without applicability to therapy. Sadly, AI constitutes a pervasive tragedy of dehumanization.
I believe that AI cannot uphold the same level of confidentiality and privacy as a counselling assistant, simply because of how AI works: it learns from the information it is given and generates responses based on that same information. Beyond that, there are jobs that AI is not fit for, and therapy is one of them. Please keep AI out of our sessions!
Therapy is inherently human work. It is discouraging and frightening that “tech bros” apparently think the work of therapy is so easy that anyone and anything can do it. They think this work is easy enough that they can wake up one day, create a random product, and slap the label of “therapy” or “therapist” on it. I have news for all these tech bros: therapy is not easy work. It is not fun. It creates intentional space to process uncomfortable feelings, emotions, and traumatic experiences while in the safety and care of a trained, educated, licensed provider. We humans take on the experiences, losses, and wins of the clients we serve. We do this because we have emotion, compassion, and the ability to care. These are not things a robot can feel or do.

It wasn’t even one year ago that I read an article about the “10 AI-Safe Careers.” Mental health and substance use treatment was one of those ten fields. Now, 365 days later, it seems as if every tech company out there, many of which we’ve never even heard of, is suddenly creating robotic “therapists” while spreading continuous misinformation and perpetuating further harm, if not retraumatizing users, by advertising just how “easy” therapy is and how one will practically be cured after the first use.

I say this to say again: therapy is NOT easy. It is a serious process, one that requires clients to do just as much work, if not more, than the therapist. No, your therapist shouldn’t be available 24/7/365. Clients need to learn, grow, and progress on their own in between sessions. If there is a crisis, that is what the 988 hotline is for. No, it’s not helpful to process experiences, emotions, and traumas throughout all hours of the day while staring at a screen; our brains cannot handle that much at once. No, it’s not safe to engage in what Waji calls “emdr” when it is really just BLS (bilateral stimulation), which is only one of EMDR’s eight components, while sitting there alone, without a trained provider to help you re-regulate and engage in resourcing when the trauma suddenly feels too big or too intense.

It is sickening and unfortunate to see these tech companies advertise therapy in these ways. It’s even more concerning that these companies continue to refer to the 988 hotline by the outdated term “suicide hotline.” If these companies had even ONE trained, licensed provider on their development and advertising teams, someone with lived experience AND professional experience, they would care to use accurate, up-to-date terminology (even at the bottom of their websites, in the small font where they half-a** encourage clients to seek out resources if experiencing a crisis).

There are no safety nets with any of these products or platforms. There are no ethics to back what they are selling. For any company to claim that user information is confidential is simply false. Someone, somewhere (likely on the data side of things) is seeing a user’s responses to questions and is able to access that conversation. These companies are profiting off the backs of those already struggling to afford therapy. The answer, perhaps, is for these companies to instead donate money to community mental health centers so that care becomes more accessible, waitlists become shorter, and providers are paid livable wages. Obviously the money is there. It is just being misused.
Not only is AI’s encroachment into the therapy space concerning; so is how quickly and easily we, as a society, have become reliant on these bots to do everything for us.
Therapy requires people being present with other people. It is not simply problem-solving or instant, unconditional validation. AI is not therapy.