Not that I was planning to, or ever would, but AI today is about as useful in healthcare as it is in legal research.
/You’re gonna die
Geoffrey A. Fowler, © 2026, The Washington Post
ChatGPT now says it can answer personal questions about your health using data from your fitness tracker and medical records. The new ChatGPT Health claims that it can help you “understand patterns over time - not just moments of illness - so you can feel more informed.”
Like many people who strap on an Apple Watch every day, I’ve long wondered what a decade of that data might reveal about me. So I joined a brief wait list and gave ChatGPT access to the 29 million steps and 6 million heartbeat measurements stored in my Apple Health app. Then I asked the bot to grade my cardiac health.
It gave me an F.
I freaked out and went for a run. Then I sent ChatGPT’s report to my actual doctor.
Am I an F? “No,” my doctor said. In fact, I’m at such low risk for a heart attack that my insurance probably wouldn’t even pay for an extra cardio fitness test to prove the artificial intelligence wrong.
I also showed the results to cardiologist Eric Topol of the Scripps Research Institute, an expert on both longevity and the potential of AI in medicine. “It’s baseless,” he said. “This is not ready for any medical advice.”
… A few days after ChatGPT Health arrived, AI rival Anthropic launched Claude for Healthcare, which similarly promises to help people “detect patterns across fitness and health metrics.” Anyone with a paid account can import Apple Health and Android Health Connect data into the chatbot. Claude graded my cardiac health a C, relying on some of the same analysis that Topol found questionable.
OpenAI and Anthropic say their bots can’t replace doctors or provide diagnoses, and both include contextual disclaimers. Yet both products willingly provided detailed analysis of my cardiac health. (The Washington Post has a content partnership with OpenAI.)
The more I used ChatGPT Health, the worse things got.
How ChatGPT misread my charts
The whole premise of ChatGPT Health involves a leap of faith: handing over your most intimate health information to a data-gobbling AI company. People are right to be cautious about AI privacy.
OpenAI says its Health mode takes extra steps to protect your privacy: It won’t use your data to train its AI or mingle it with other chats, and it encrypts the data. Still, it’s essentially a pinkie promise: ChatGPT isn’t a health care provider, so it isn’t covered by the federal health privacy law known as HIPAA.
After I connected ChatGPT to Apple Health, I asked the bot to share what health recommendations it had for me, and to call out any red flags. It said I should talk with my doctor about a “notable increase” in my resting heart rate, but congratulated me on my sleep and activity levels.
Then I asked the bot to “give me simple scores for heart health and longevity.” That’s when it handed me an F.
I asked ChatGPT: “You really think my heart health and longevity deserve failing grades?”
It responded: “Short answer: no - I don’t think you’re a lost cause, and I’m sorry the raw letter grades felt harsh. They were a blunt, heuristic readout from device-only data.” It said it could give me a more useful score if I also connected the medical records stored by my doctor’s office.
So I did that, and asked it again to “give me a simple score (A-F) for my cardiovascular health over the last decade.” The grade ticked up to D.