Your Therapist’s Notes Could Become Fodder For AI

Tech companies are marketing AI-based note-taking software to therapists as a new time-saving tool. But by signing up, providers may be unknowingly offering patients’ sensitive health information as data fodder to the multibillion-dollar AI therapy industry.

Providers outsourcing progress notes to automated software may be offering patients’ health information as data fodder for other AI applications. (Shaul Schwarz, Verbatim / Getty Images for Be Vocal)

Technology firms behind artificial intelligence–based note-taking software — marketed to therapists as a time-saving administrative tool — have quietly included provisions in their terms and conditions that allow patients’ therapy records to be sold and manipulated to train other AI applications.

Providers outsourcing standard progress notes to automated software that summarizes session recordings and transcripts may be unknowingly offering patients’ sensitive health information as data fodder to the multibillion-dollar AI therapy industry.

“Most therapists don’t exactly love writing progress notes. What if TheraPro wrote them for you?” asks an AI software company, which charges mental health providers $60 a month for unlimited AI note-taking, including automated diagnostic assistance and treatment planning.

According to TheraPro’s terms and conditions, once providers sign up, they give TheraPro a “non-exclusive, transferable, assignable, perpetual, royalty-free, worldwide license” to patients’ anonymized therapy sessions.

That includes “without limitation” the right to train “any artificial intelligence program” the firm develops — including in collaboration with third-party contractors. TheraPro reserves the right to “store, access, and manipulate” de-identified patient data with third parties while disclaiming liability for how those contractors behave.

Research shows that “de-identifying” health data — the anonymization that allows tech firms to circumvent privacy laws — offers little real protection thanks to advances in machine learning: AI models have successfully reidentified individuals from depersonalized data with up to 85 percent accuracy.

“Our goal is to empower therapists with AI, not replace them. . . . Within strict safeguards, we may use de-identified data to enhance our services, always in service of our mission, helping therapists better help their patients heal,” TheraPro wrote in a statement to the Lever.

Freed’s founder says he built his AI medical scribe tool after years of watching his wife, a therapist, write up patient charts late at night. The software’s terms and conditions grant the company a wide-ranging license to use session data for its software as well as the “right to grant sublicenses, to reproduce, execute, use, store, archive, modify, perform, display, distribute . . . disseminate, sell, transfer, and otherwise exploit” patients’ anonymized therapy records.

Other scribe software companies have hedged their broad terms and conditions with promises to protect consumers’ data from being used to train other AI.

Blueprint.AI tells therapists to “focus on [their] clients” and “leave the documentation to us.” Its terms and conditions grant Blueprint unilateral access to “process, modify, reproduce, create derivative works of, display and disclose” therapy data and “use and share it for any purposes permitted under applicable law.”

Blueprint told the Lever that its therapists always retain the right to delete recordings and notes, and that session data is never used to train AI. “We are working on better outlining this in our terms of service,” founder Danny Freed wrote. “Blueprint exists to support therapists, not replace them.”

Meanwhile, SimplePractice — a software firm that says it’s “committed to a future of human-centered behavioral health” — notes that it currently does not store patient session data . . . but plans to, once an opt-out option is created. The firm promises that it “contractually restricts . . . third parties from using [patient] data to train their own AI.”

All of this comes as one of the most deep-pocketed medical institutions in the country has poured major investments into developing AI-powered “robot” therapy. The Los Angeles–based hospital system Cedars-Sinai Medical Center recently created a fully automated “therapist” called Xaia that “draws from hundreds of therapy transcripts, both from real sessions and mock sessions.”

VRx Health, the for-profit “digital therapeutics” firm that holds the exclusive commercial license to Xaia, requires users to waive their right to a jury trial.

AI startups, including ChatGPT creator OpenAI, are pushing automated chatbot technology for therapeutic applications, claiming their software is well suited for counseling because it shows “humanlike sensitivity.” Health care startups using AI raised $3.9 billion in funding last year, including $1.4 billion for mental health, the Los Angeles Times reports.

The Federal Trade Commission last week launched a formal inquiry into the safety of companion chatbot technology in light of a fourteen-year-old Florida boy’s death by suicide after forming an allegedly abusive relationship with an AI bot. These relationships are increasingly common for teenagers; a July survey found that nearly three in four children aged thirteen to eighteen have used a chatbot for companionship.

Dr. Vaile Wright, the American Psychological Association’s senior director of health care innovation, testified before Congress earlier this month on the rise of so-called “digital therapeutics.” She said that while therapy chatbots “can deliver care to those who might otherwise receive none . . . these tools are most effective and safest when used to augment, not replace, the care provided by a qualified professional, ensuring a human remains in the loop.”

Freed’s CEO did not respond to a request for comment — but the Lever received a response from the firm’s “Support Bot” confirming the company’s policies on de-identifying data.