
As mental health professionals, we navigate a delicate balance every day. We create safe spaces where clients can share their deepest vulnerabilities, while safeguarding their trust through confidentiality. It's a sacred covenant that forms the foundation of therapeutic healing.

Yet today, many of us find ourselves at a crossroads. The promise of AI-powered tools to reduce administrative burden and return our focus to client care is undeniably attractive. These tools offer a path away from documentation fatigue and back to the heart of why we became therapists in the first place: to be fully present with our clients.

But this promise comes with legitimate concerns. When we invite technology into our practice, are we inadvertently compromising the very confidentiality we've sworn to uphold? Are we trading convenience for privacy?

Many therapists feel a profound unease at this possibility, and unfortunately, those concerns are often well founded. Several of the leading AI platforms for therapists have written their terms of service and privacy policies in ways that give them carte blanche to use your practice's data as they wish, potentially even selling it to data brokers. Even Sam Altman, CEO of OpenAI, the company behind ChatGPT, has warned that conversations with AI do not, by default, carry legal privilege.

The good news is that you don't have to choose between efficiency and ethics. By understanding the landscape of data privacy in therapy technology, you can make informed choices that honor both your need for support and your commitment to client confidentiality.

This guide — fact-checked by a data privacy expert — will walk you through the critical privacy considerations when evaluating any AI-powered clinical tool. We hope it helps you ask the right questions before entrusting your practice, and your clients' most personal moments, to a technology partner.

Note: In this guide, we'll compare the terms of nine AI platforms for therapists. These comparisons are based on publicly available facts, drawn directly from each platform's privacy policy and terms of service as of August 2025. We will continue to update this guide quarterly as platforms change their policies.

How to tell if a tool is private

No two privacy policies are the same. We brought together experts in AI, clinical ethics, and privacy law to figure out what to look for in an AI-powered medical scribe. That led us to identify six key criteria against which a privacy policy can be judged:

[Image: privacy comparison table of the nine platforms across the six criteria]

Read on for a more detailed description of each criterion, as well as a current round-up of leading tools.

1. Explicit consent for model training

When you document a session using an AI notetaker, that deeply personal narrative doesn't simply disappear after your note is generated. Many companies reserve the right to use that data — your client's most vulnerable disclosures — to train their AI models. The question of who controls how these stories are used touches the very heart of our ethical practice.