ChatGPT Health: What Patients Need to Know Before Uploading Their Medical Records

  • Stuart Akerman, MD
  • Jan 11
  • 9 min read

This week, OpenAI launched ChatGPT Health, and my inbox has been buzzing. Patients are asking questions. Colleagues are sharing opinions. And the medical community is, understandably, having some big conversations about what this means for healthcare.


Here's the thing: I'm not going to tell you that ChatGPT Health is inherently bad. I'm also not going to tell you it's a miracle solution. What I am going to do is walk you through what this tool actually is, what the real concerns are, and how to think about using it safely if you choose to do so.


Because whether we like it or not, AI health tools are here to stay. And as your gastroenterologist here in Plano, part of my job is helping you make informed decisions about your health, including how you use technology.


What Is ChatGPT Health?


ChatGPT Health is a new feature within ChatGPT that creates a dedicated space for health-related conversations. Unlike regular ChatGPT, this version allows you to upload your medical records, connect apps like Apple Health and MyFitnessPal, and ask health questions that are "grounded" in your personal health data.


According to OpenAI, more than 40 million people globally ask ChatGPT health-related questions every day, and roughly 230 million do so each week, about everything from lab results to medication side effects. So from a business perspective, creating a health-focused product makes sense.


The idea is straightforward: instead of asking generic health questions, you can upload your recent bloodwork, connect your fitness tracker, and ask things like "How's my cholesterol trending?" or "Can you summarize my latest bloodwork before my appointment?"


OpenAI partnered with b.well, a health data connectivity company, to allow users to securely connect their medical records. They've also built integrations with Function (for lab testing insights), MyFitnessPal (for nutrition tracking), Weight Watchers, and several other wellness apps.


The Appeal: Why Patients Are Interested


I get it. Our healthcare system isn't perfect. It's hard to get appointments sometimes. Medical jargon is confusing. Lab results come through patient portals with numbers and reference ranges but often little context until your next visit. And let's be honest, a 15-minute appointment doesn't always leave time to discuss every question you have.


As Dr. Danielle Bitterman, a radiation oncologist at Mass General Brigham, told TIME Magazine, "This speaks to an unmet need that people have regarding their health care. It's difficult to get in to see a doctor, it's nowadays hard to find medical information, and there is, unfortunately, some distrust in the medical system."


She's not wrong. People want to understand their health. They want answers. And they want them now, not three weeks from now when the next appointment slot opens up.


ChatGPT Health promises to help with:

  • Understanding lab results in plain language

  • Preparing questions before doctor appointments

  • Tracking health trends over time

  • Getting general nutrition and wellness advice

  • Making sense of fragmented medical information


In theory, these are all reasonable uses. The question is whether the tool can deliver on these promises safely and accurately.


The Reality: What You Need to Understand


HIPAA Doesn't Apply Here

This is the big one, and it's something most people don't realize. When you upload your medical records to ChatGPT Health, you are not protected by HIPAA.

HIPAA (the Health Insurance Portability and Accountability Act) protects your health information when it's held by healthcare providers, health plans, and their business associates. But ChatGPT Health is a consumer product offered by a technology company. It's not providing healthcare services, so HIPAA doesn't govern what happens to your data.


As Bradley Malin, a professor of biomedical informatics at Vanderbilt University Medical Center, explained to TIME, "If you are providing data directly to a technology company that is not providing any health care services, then it is buyer beware."


What does this mean practically? If there's a data breach, you don't have the specific rights and protections you'd have if your doctor's office was breached. You're relying entirely on OpenAI's contractual promises and their terms of service, which, let's be honest, most of us don't read before clicking "agree."


Your Data's Privacy Depends on Trust

OpenAI says ChatGPT Health operates as a separate space with enhanced privacy protections, including:

  • Purpose-built encryption and isolation

  • Separate memory from your regular ChatGPT conversations

  • A promise that health conversations won't be used to train their AI models

  • The ability to delete your health data at any time


Their chief information security officer emphasized that health data gets "another layer of encryption, enhanced isolation, and data segmentation."


But here's what I tell my patients: you're trusting a tech company to keep its word. As Dr. Robert Wachter from UCSF puts it, "The question every user has to grapple with is whether you trust OpenAI to keep to their word."


Dr. Bitterman is more blunt: "The most conservative approach is to assume that any information you upload into these tools will no longer be private."


I tend to agree with her. If there's something in your medical history that could be truly damaging if it leaked (a history of substance use, mental health diagnoses, reproductive health information, genetic testing results), think very carefully before uploading it to any AI platform.


AI Doesn't Have the Full Context

Here's something I deal with every day in my practice: medical records are fragmented. You might have had a colonoscopy at one facility, labs drawn at another, and see specialists at three different health systems. Your records are scattered, incomplete, and often missing critical context.


Research shows that when AI tools are given incomplete medical information, they're more likely to "hallucinate" or produce incorrect results. A report from the National Institute of Standards and Technology found that "the quality and thoroughness of the health data a user gives a chatbot directly determines the quality of the results the chatbot generates."


Think about it this way: when you come to see me as your gastroenterologist in Plano, I don't just look at your lab results. I ask about your symptoms, your diet, your stress levels, your family history. I know that you mentioned your daughter's wedding is coming up and you're worried about your digestion during travel. I know you tried that new Tex-Mex place in Frisco last week and it didn't sit well.


AI doesn't have that context. It can't ask follow-up questions the way I can. It doesn't know what's missing from your records or what additional information might be critical to understanding your situation.


AI Prioritizes Being Helpful Over Being Accurate

Dr. Bitterman recently co-authored a study in npj Digital Medicine that found large language models are designed to prioritize being helpful over medical accuracy. They're programmed to always provide an answer, especially one the user is likely to respond positively to.


In medicine, we need to prioritize accuracy, even if it means saying "I don't know" or "I need more information." AI chatbots aren't built that way. They're built to satisfy users and keep them engaged.


That's fine for some applications. It's potentially dangerous when we're talking about health decisions. Ever told an AI chatbot that the information it gave you was wrong? You probably got a response like "Oh my gosh, you are so right!" or "My bad, it's great that you caught that!" Unless you've spent years studying medical science, it isn't always easy to recognize when the information provided is inaccurate or flat-out wrong.


You're Still Signing Away Liability

Buried in the terms of service (which most people don't read) is language users must agree to: ChatGPT Health is "not intended for diagnosis or treatment" and is designed to "support, not replace, medical care."


But here's the reality: not every patient is going to talk to their doctor before acting on the chatbot's suggestions. That's especially true, as OpenAI itself has noted, for people living in rural areas or "hospital deserts" who lack ready access to care.


The legal liability is murky at best. If you make a health decision based on incorrect information from ChatGPT Health, what recourse do you have? The answer isn't clear.


Where ChatGPT Health Might Actually Help


I don't want this to sound like I'm completely against AI tools in healthcare. I'm not. As someone who works at the intersection of AI and clinical medicine as a consultant and key opinion leader, I see tremendous potential for these technologies when used appropriately.


Here are some uses I think are reasonable:

Preparing for appointments: Asking ChatGPT to help you brainstorm questions before seeing your doctor is actually a great use case. You don't necessarily need to upload your full medical record for this (and I would specifically recommend that you do not).

Understanding medical terminology: If you got a procedure report that mentions "erythema in the gastric antrum" and you want to know what that means in plain English, that's a reasonable question.

Tracking trends: If you're monitoring your weight, blood pressure, or other metrics over time, asking AI to identify patterns could be helpful (though your doctor should still interpret these).

Low-risk wellness questions: General questions about diet, exercise, sleep habits, or stress management are relatively safe territory.

Getting a second perspective: If something doesn't feel right about your diagnosis or treatment plan, using AI to explore whether there might be something your doctor missed isn't unreasonable, as long as you then have that conversation with your actual healthcare provider.


What I'm Telling My Patients in the Dallas-Fort Worth Area


In my practice here in Plano, we're developing educational materials to help patients understand how to think about AI health tools. Here's what I'm recommending:


Do:

  • Use AI to prepare questions before appointments

  • Ask for help understanding medical terminology

  • Seek general wellness and prevention information

  • Use it as a starting point for conversations with your healthcare team

Don't:

  • Upload highly sensitive information you'd be devastated to see leaked

  • Use AI for diagnosis or to decide on treatments

  • Make medication changes based on AI recommendations

  • Rely on AI as a substitute for seeing your doctor


Be Aware:

  • Your data isn't protected by HIPAA

  • AI can and does make mistakes ("hallucinations")

  • Incomplete medical records lead to less reliable answers

  • You're trusting a tech company with sensitive information

Always:

  • Verify important information with your healthcare provider

  • Tell your doctor if you're using AI tools for health decisions

  • Understand that AI is a tool, not a replacement for medical care

  • Read privacy policies and terms of service (I know, I know, but try)


The Bigger Picture: Our Responsibility as Healthcare Providers


Look, ChatGPT Health isn't going away. Neither is Google's Med-PaLM, Microsoft's healthcare AI initiatives, or the dozens of other AI health tools launching every month.


Patients are going to use these tools whether we like it or not. Our job as healthcare professionals isn't to stick our heads in the sand or tell patients that any use of AI is dangerous. Our job is to educate, to guide, and to help patients use these tools as safely and effectively as possible.

That means having honest conversations about the benefits and limitations. It means acknowledging the gaps in our healthcare system that drive people to seek information elsewhere. And it means staying informed ourselves about how these technologies work and what they can and can't do.


As a gastroenterologist who's also deeply involved in medical technology and AI, I'm watching this space closely. I'm cautiously optimistic about the potential for AI to improve patient education and engagement. But I'm also realistic about the risks, especially around data privacy and accuracy.


The Bottom Line


ChatGPT Health represents a step forward in making health information more accessible. But it's not a perfect solution, and it comes with real risks that patients need to understand.


If you're considering using it:

  1. Be thoughtful about what information you upload

  2. Use it to supplement, not replace, your relationship with your healthcare team

  3. Verify anything important with your doctor

  4. Understand that your data privacy depends entirely on a tech company's promises


And if you're here in the Plano, Frisco, Allen, or broader Dallas-Fort Worth area and have questions about your digestive health, I'm here to help. Whether you're dealing with IBS or GERD, need a colonoscopy, or just want to talk about your gut health, I'm a real person who can provide real context for your specific situation.


AI can be a helpful tool. But it's not a replacement for the doctor-patient relationship. Not yet, anyway.


References

  1. OpenAI. Introducing ChatGPT Health. January 7, 2026. https://openai.com/index/introducing-chatgpt-health/

  2. OpenAI. AI as a Healthcare Ally. January 2026. https://cdn.openai.com/pdf/2cb29276-68cd-4ec6-a5f4-c01c5e7a36e9/OpenAI-AI-as-a-Healthcare-Ally-Jan-2026.pdf

  3. Dunn AG, Coiera E, Mandl KD. Is Giving ChatGPT Health Your Medical Records a Good Idea? TIME Magazine. January 9, 2026. https://time.com/7344997/chatgpt-health-medical-records-privacy-open-ai/

  4. Bitterman DS, et al. Large language models prioritize user engagement over medical accuracy. npj Digital Medicine. 2025. https://www.nature.com/articles/s41746-025-02008-z

  5. Matheny ME, Whicher D, Thadaney Israni S. Artificial Intelligence in Health Care: A Report From the National Academy of Medicine. JAMA. 2020;323(6):509-510. doi:10.1001/jama.2019.21579

  6. National Institute of Standards and Technology. Supporting AI in Health Care: Data Quality Assessment Framework. February 2025. https://www.nist.gov/system/files/documents/2025/02/20/DataQuality-D3.pdf

  7. Sweeney M. ChatGPT Health: Top Privacy, Security, Governance Concerns. HealthcareInfoSecurity. January 8, 2026. https://www.healthcareinfosecurity.com/chatgpt-health-top-privacy-security-governance-concerns-a-30473

  8. The HIPAA Journal. Is ChatGPT HIPAA Compliant? Updated for 2025. April 9, 2025. https://www.hipaajournal.com/is-chatgpt-hipaa-compliant/


_______________________________________________________________________________________________________

DISCLAIMER: Please note that this blog is intended for Informational Use only and is not intended to replace personal evaluation and treatment by a medical provider. The information provided on this website is not intended as a substitute for medical advice or treatment. Please consult your doctor for any information related to your personal care.
