The Dangers of Using AI in Psychotherapy

Artificial intelligence (AI) has entered clinical life quietly but decisively. Therapists use it to summarize notes, clinicians consult it for conceptual clarity, and patients increasingly turn to it for emotional support, insight, or reassurance. While AI offers efficiency and accessibility, its growing presence in psychotherapeutic contexts raises important psychological, ethical, and clinical concerns. The risks are not dramatic or obvious; they are subtle, cumulative, and deeply connected to how the mind develops, regulates itself, and relates to others.

At the heart of psychotherapy lies the therapeutic relationship. Change does not occur simply through information exchange but through presence, attunement, and the shared emotional field between therapist and patient. AI, no matter how sophisticated, cannot offer containment or genuine relational responsiveness. When patients rely on AI for emotional clarification or regulation, they may unintentionally weaken their engagement with the human relationship where therapeutic work unfolds. For individuals with attachment vulnerabilities, AI can become a safer but emotionally thinner substitute—one that avoids rupture, conflict, and the discomfort that often precedes growth.

Psychotherapy also depends on reflective capacity: the ability to pause, observe one’s inner experience, tolerate ambiguity, and gradually construct meaning. AI operates in the opposite temporal register. It offers rapid synthesis, clear explanations, and immediate reassurance. While appealing, this speed can short-circuit reflection. Patients may begin to outsource thinking rather than develop it, relying on external cognitive scaffolding instead of cultivating their own internal dialogue. Over time, the capacity for mentalization—thinking about thoughts and feelings—may erode, replaced by a preference for ready-made interpretations.

Another concern lies in emotional misattunement. Human therapists attend to tone, silence, posture, affective shifts, and unconscious communication. AI systems, even when empathetic in language, lack a felt sense of the other. Their responses may be technically appropriate but emotionally off—too cognitive, too reassuring, or insufficiently nuanced. For patients with histories of emotional neglect or misrecognition, such mismatches can be quietly retraumatizing, reinforcing old expectations of being unseen or misunderstood.

Confidentiality further complicates the picture. Psychotherapy depends on a sense of safety and containment. AI platforms often involve data storage, external servers, or opaque privacy practices. Even when confidentiality is promised, patients may not fully grasp how their information is processed or retained. The mere perception that deeply personal material could be exposed may undermine trust—not only in technology but in the therapeutic process itself.

AI may also reinforce avoidance. For patients who struggle with intimacy, dependency, or conflict, AI offers emotionally frictionless interaction. There is no risk of disappointing it, angering it, or being rejected by it. Turning to AI for soothing or guidance can become a defensive maneuver, allowing patients to bypass the relational challenges that therapy is designed to address. What appears as coping may, in fact, entrench avoidance and inhibit emotional maturation.

A particularly subtle danger is the weakening of internal authority. One goal of psychotherapy is to help patients develop an inner voice that can reflect, regulate, and decide. When AI becomes a frequent source of reassurance or interpretation, that inner voice may lose confidence. Patients may struggle to tolerate uncertainty without consulting an external system, gradually shifting from self-trust to algorithmic reliance. Agency gives way to dependency, often without conscious awareness.

Ethical and boundary confusion further complicates matters. AI does not possess clinical judgment, accountability, or responsibility. Yet patients may experience its responses as authoritative, especially when they are articulate and emotionally resonant. This can blur boundaries between information, emotional support, and clinical advice. When harm occurs, responsibility becomes diffuse and difficult to address; unlike psychotherapy, AI-mediated support is not a regulated clinical activity.

Therapists themselves are not immune. AI can tempt clinicians toward shortcuts in formulation, documentation, or interpretation. While these tools may reduce administrative burden, overreliance risks dulling clinical intuition and narrowing conceptual flexibility. Psychotherapy requires a tolerance for uncertainty that algorithmic neatness cannot replicate.

Finally, as AI becomes woven into daily life, it may enter the therapy room indirectly. Patients increasingly reference conversations with AI as meaningful emotional experiences. In this sense, AI becomes part of the patient’s internal object world—a third presence that may function as a fantasy caretaker, idealized authority, or emotional buffer. Ignoring this presence risks missing important psychodynamic material.

The danger of AI in psychotherapy is not that it will replace therapists outright. Rather, it is that it may subtly reshape how patients think, feel, and relate—weakening reflection, externalizing authority, and reducing tolerance for emotional complexity. Used uncritically, AI risks transforming therapy from a process of internal growth into one of cognitive outsourcing.

The clinical task, therefore, is not outright rejection but thoughtful containment. Psychotherapy must remain a space where slowness, uncertainty, and human presence are protected. In an age of instant answers, safeguarding the mind’s capacity to think, feel, and relate independently becomes not only a therapeutic goal, but an ethical imperative.

About the Author

Erika Engel

Erika Engel is an executive/life coach and therapist. She works with clients to help them increase self-awareness in order to create more choice in personal and professional pursuits and to arrive at their own definition of balance and fulfillment. She has been interested in mindfulness for a number of years and has completed a Mindfulness-Based Stress Reduction course at the Toronto Mindfulness Clinic led by Lucinda Sykes. Erika often recommends mindfulness meditation to her clients to reduce anxiety and stress.

Erika has completed the Advanced Training in Psychoanalytic Psychotherapy program at the Toronto Psychoanalytic Society. She is also a member of the Canadian Association for Psychodynamic Therapy. Erika is certified by the International Coaching Federation and is qualified by Psychometrics Canada to administer the MBTI.