

The innovation (and risk) with an AI chatbot therefore does not lie with its AI engine but with its chat functionality. The conversational chat interface allows clinicians to directly enter disparate medical details and produce in seconds medical notes that can become an encounter summary for the patient’s medical record, a letter of medical necessity for an insurer, or a patient after-visit summary. For example, a transcript of a patient-physician encounter, created in real time with dictation software and input into an AI chat tool, can instantly produce a cogent medical note in the required subjective, objective, assessment, and plan (SOAP) format.4 Clinicians can copy and paste notes from multiple encounters, appended with test results, into the chat window to generate otherwise time-consuming prior authorization and appeal letters.
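
As a concrete illustration of that chat workflow, the sketch below sends an already de-identified encounter summary to a chat-completion endpoint and asks for a SOAP-format draft. It assumes the OpenAI Python SDK (v1.x); the model name is only an example, the summary is fabricated, and, as discussed below, no real transcript should reach the tool until identifiers have been removed.

```python
# Illustrative sketch only: draft a SOAP note from a de-identified summary.
# Assumes the OpenAI Python SDK (>=1.0) and an OPENAI_API_KEY in the environment.
from openai import OpenAI

client = OpenAI()

deidentified_summary = (
    "Patient reports two days of right knee pain after a fall while hiking. "
    "Exam shows mild effusion, full range of motion, and no ligamentous laxity."
)

response = client.chat.completions.create(
    model="gpt-4o",  # example model name, not a recommendation
    messages=[
        {
            "role": "system",
            "content": "Draft a clinical note in SOAP format "
                       "(Subjective, Objective, Assessment, Plan).",
        },
        {"role": "user", "content": deidentified_summary},
    ],
)

print(response.choices[0].message.content)
```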

Although the ease of pasting medical information into a chat interface is tempting, clinicians and health systems could unintentionally violate HIPAA and incur substantial fines and penalties with these chatbot queries. HIPAA prohibits the unauthorized disclosure of PHI, which includes “individually identifiable health information” held or transmitted by a covered entity.3 This information includes anything that could be used to identify an individual and that relates to their physical or mental health, health care services provided, or payment for these services. Many common identifiers, such as first or last name, medical record number, date of birth, and city of residence, constitute PHI.

Clinicians may not realize that by using ChatGPT, they are submitting information to another organization, OpenAI, the company that owns and supports the technology. In other words, the clinical details, once submitted through the chat window, have now left the confines of the covered entity and reside on servers owned and operated by the company. Given that OpenAI has likely not signed a business associate agreement with any health care provider, the input of PHI into the chatbot is an unauthorized disclosure under HIPAA. Note that this issue is separate from the concern that the medical information entered into a chatbot can become incorporated into its training data set and disclosed to other users of the tool.

This potential dissemination of PHI is alarming, but OpenAI allows users to opt out and not have their content used to refine chatbot performance.5 Opting out, however, does not close the door on a HIPAA violation. Once PHI has been entered into ChatGPT, the unauthorized disclosure has occurred, and the medical content is on company servers regardless of whether the user permits use of that content for training.

Guidance for Clinicians and Covered Entities
The obvious guidance, then, is for clinicians to avoid entering any information containing PHI into a chatbot. Transcripts of encounters can be sprinkled with benign comments such as “Not to worry, Mr. …, it’s just a flesh wound, and we’ll fix you right up” that are considered PHI. Casual references to a person’s residence in this context (“How’s your new place in Springfield?”) are also PHI. Patient names, including nicknames, references to geographic information smaller than a state, and admission and discharge dates, to cite a few examples, must be scrubbed before transcripts can be fed into the chat tool. Similarly, chatbot curbside consultation queries with information about patient presentation and test results must have all identifiers removed. Medical notes and records often include, in their headers or along the margins, medical record numbers, dates of birth, or Social Security numbers. These identifiers must also be systematically deleted before any chatbot is used. And even if letters to insurers ultimately include patient name and health plan beneficiary number, all identifiers must be excised before AI generation of the letter, and reinserted afterward. In all of these cases, US Department of Health and Human Services (HHS) regulations can provide guidance. These regulations specify at least 18 identifiers that must be removed for health information to be considered deidentified and consequently not within the definition of PHI.
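
A minimal sketch of that excise-and-reinsert workflow appears below. The regular expressions are illustrative stand-ins for a real de-identification step; they cover only a few obvious identifiers, not the full set of 18, and any production pipeline would pair proper de-identification tooling with human review.

```python
import re

# Illustrative patterns only; these do NOT cover the 18 HIPAA identifiers
# (names, geographic details, most dates, and many others are not handled here).
PATTERNS = {
    "MRN": re.compile(r"\bMRN[:\s]*\d{6,10}\b"),
    "SSN": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "DOB": re.compile(r"\bDOB[:\s]*\d{2}/\d{2}/\d{4}\b"),
}

def redact(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with placeholder tokens and keep a map for later reinsertion."""
    mapping: dict[str, str] = {}
    for label, pattern in PATTERNS.items():
        def substitute(match: re.Match, label: str = label) -> str:
            token = f"[{label}_{len(mapping) + 1}]"
            mapping[token] = match.group(0)
            return token
        text = pattern.sub(substitute, text)
    return text, mapping

def reinsert(text: str, mapping: dict[str, str]) -> str:
    """Restore the original identifiers after the AI-generated draft comes back."""
    for token, original in mapping.items():
        text = text.replace(token, original)
    return text

note = "MRN: 00123456, DOB: 01/02/1960. Right knee effusion after a fall."
scrubbed, id_map = redact(note)          # `scrubbed`, never `note`, goes to the chatbot
draft = f"Appeal letter regarding the patient with {scrubbed}"  # stand-in for AI output
print(reinsert(draft, id_map))           # identifiers restored only after generation
```

Keeping the placeholder map on the clinician’s side means the identifiers themselves never leave the covered entity, which is the point of the guidance above.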
