Using ChatGPT or other AI-enhanced tools can make your life easier. But until the technology becomes more regulated, the risks may not be worth it.
Posted in Articles on Tuesday, September 10, 2024
If you ask AI or ChatGPT whether it’s compliant with HIPAA, you’ll get a long, convoluted response, one so confusing you’re not sure it even knows what it’s trying to convey.
The use of Artificial Intelligence and ChatGPT engines has become more and more common in many aspects of our lives, including the medical community. While talking with patients, some doctors simply put their phones in their pockets, press record, and then have ChatGPT write up the notes.
Sounds simple, right? Is this the lightbulb moment where you’re thinking, “Why haven’t I done that?” Before you Google “How do I use AI,” know that the technology carries risks, ones that could prove detrimental in the long run.
Caring, Not Sharing
Although the risks are high and the safety concerns are numerous, the appeal of using ChatGPT for notes is easy to see: it takes just a few steps to record the conversation, upload it, and have an automated summary generated.
Unfortunately, that summary isn’t exclusive to you as the doctor. Instead, the ChatGPT engine now has access to protected health information, as well as personal information from your browser, access you granted by accepting its terms and conditions.
What you import into ChatGPT is recorded and stored, which is a major privacy risk and does not adhere to HIPAA privacy regulations. By feeding ChatGPT what it needs to generate a desirable response, you open the books on your patient’s health information.
Faster is Not Better
Over the years, your practice has evolved to remain HIPAA compliant. You have most likely made painstaking efforts to ensure that a patient’s information is always kept confidential. By using ChatGPT, you swiftly undercut those efforts and open yourself up to violations and lawsuits. Not only are you knowingly placing a patient’s PHI into the cybersphere, but if the patient discovers that you released their information, it could destroy their trust in you as a doctor.
Is Your Information Secure?
ChatGPT has also been subject to breaches and hacks, which can leave your patients’ confidential information exposed for anyone to view. If a patient discovers their information has been leaked, will they hold you or the AI accountable?
In addition to breaches and hacks of the more “popular” ChatGPT engines, a simple online search will return hundreds of links to other AI sites. Which of these can be trusted? Which are risky? There’s no clear-cut way to tell until it’s too late.
Do Not Cut & Paste in Haste
When you read through summaries you generated yourself, you can feel confident that what you’ve recorded is correct. After all, you have first-hand knowledge of your patient. Can you trust ChatGPT to give you a correct summary? ChatGPT is highly prone to error. In effect, it is eavesdropping on a confidential conversation and then processing it against other conversations of the same nature.
The response you receive from ChatGPT might contain a complete misdiagnosis and cause harm to the patient. If you’re not reviewing the generated notes in a timely manner, your record of a patient’s diagnosis may not be as accurate as you initially thought.
If a doctor or staff member is simply cutting and pasting into the EHR, the result can be cookie-cutter records that are meaningless for patient treatment.
Trust your notes; trust yourself.
Knowing Your Chain of Command
You hold yourself accountable for everything in your practice, but with AI there is no transparency and certainly no accountability. If there’s a breach or discrepancy in your office, you know whom to ask; when dealing with ChatGPT, there is usually nothing more than a “Contact Us” link. You are at the whim of the engine’s creator as to whether they consider your issue, however important it is to you, worth addressing.
What Does the Future Hold?
A healthcare-specific ChatGPT could prove to be an incredibly beneficial tool, but one that would need to be regulated and secure. It could give doctors quick access to documents and summaries, but until such a tool exists, entering a patient’s PHI outside of a trusted, secure record-keeping platform is a risky undertaking.