The product challenges that ChatGPT Health will have to navigate
This is a system that is most valuable to users who know how to distrust it
This is for healthcare, regulatory and AI product folks. OpenAI just announced ChatGPT Health, and I’ve signed up for the waitlist. I can’t wait. Before you start about how I keep telling people not to upload their personal data into AI, hear me out.
I’m now a convert.
Last weekend, I spent multiple hours compiling my medical records: blood test reports, AI health test reports, incident data from some injuries and a couple of surgical procedures, my fitness band data (which includes granular sleep data), smart weighing scale data, water intake data, my standard weekly menu, steps and exercise data, medication consumption, supplement consumption, and then some more.
Step by step, I fed all this data into ChatGPT to create standardised templates. Then I fed some of the templates into Claude to create JSON files (because ChatGPT sucks at processing large amounts of information). I now have a JSON with my medical records, another with incident reports, a weight change dataset going back to 2016, and a complete database file with my Google HealthConnect data, collated from multiple apps. And then some more.
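To give a sense of what one of these files holds, here’s a hypothetical shape for a single blood test record. This is a sketch, not my actual schema; the lab name, analytes and values are all made up.

```python
# Illustrative only: a hypothetical shape for one blood test record.
import json

record = {
    "record_type": "blood_test",
    "date": "2025-11-14",
    "lab": "Example Diagnostics",
    "results": [
        {"analyte": "HbA1c", "value": 5.8, "unit": "%", "ref_range": "4.0-5.6"},
        {"analyte": "Vitamin D", "value": 22.0, "unit": "ng/mL", "ref_range": "30.0-100.0"},
    ],
    "notes": "Fasting sample",
}

# One record per line (JSON Lines) keeps each upload small, so the model can
# parse records piecemeal instead of choking on one large dump.
with open("blood_tests.jsonl", "a") as f:
    f.write(json.dumps(record) + "\n")
```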
Now contrast this with last January. Especially worried since I had lost a few friends around my age to heart attacks the previous year, I signed up for an AI health test. I had lost a close friend in October: he had gone to a different city to speak at a conference, had a heart attack in the morning, and never made it to the conference. Founder health is a serious problem, given the stress levels, and I worry a lot about my own, and about that of friends who are founders. A friend in Bangalore is building a healthcare startup focused on longevity, and founders are his primary focus.
My health has been fragile for a couple of years, and I’m taking far too long to recover from injuries. I’ve also been getting injured more frequently. Doctors appeared to be treating the symptoms, unsure of why healing was taking so long, so I needed to know more. So, a fancy health test at a fancy place. Thankfully, another health-tech founder friend who recommended it got me a discount. I landed up at this place at 8.30 AM, and they gave me a consent form which said that I agreed to contribute my data towards improving the AI. I pushed back, and they said they couldn’t do the test without that box being ticked. I went back home.
Someone said, “Don’t you want to contribute towards curing diseases?” I don’t know. A week later, I was back for the test. My friend had had a “do you realise who you’re messing with?” conversation with them, despite my saying that I’d be fine with a refund and would get a non-AI test done. So they agreed to do the tests without that clause being mandatory. Small privileges.
I had a terrible year health-wise last year: I had a couple of surgical procedures spread over three months to address breathing issues that still haven’t worked out (I’ll probably get another procedure done this month, so this site might pause for a bit), and spent a month and a half on bedrest and a wheelchair starting November, following a worrying knee problem. All in all, I spent about six months of the year in pain and recovery. Try running a company while doing this. Every time I think about it, I’m reminded of this Ben Horowitz post, and get back to work.
Something changed my mind about uploading data to ChatGPT.
In the middle of a four-hour-long phone conversation with another founder friend, who retired about 10 years ago after selling his company, we started talking about health. He told me about his struggle to manage diabetes and bring his HbA1c down, how he started uploading his reports one by one to ChatGPT, and how it recommended a series of changes to him. He even used it to prepare a clinical summary to take to a doctor. The senior doctor, surprise surprise, actually agreed with the diagnosis and changed his medication exactly in line with what ChatGPT recommended.
I couldn’t sleep that night. I sat up and created a separate ChatGPT account, under a fake name, and started adding my test data bit by bit. I didn’t want it mixing with my primary account. By the end of it, ChatGPT had identified patterns across four years of reports and started correlating them, as opposed to most GPs, who look at one or two recent reports and treat the symptoms they see.
One question changed my perspective on my health: I asked it to tell me what my body is going through every day, and then it all started to make sense. I asked for non-medical recommendations (supplements, diet and exercise mostly), put a note together, and got it corroborated by my GP. That chat window has been my guide ever since: I can’t go to the GP every day, can I?
In December I started tracking my sleep data more closely using the Amazfit Helio Strap. The Apple Watch and Whoop also do this, but I prefer Android, and I don’t like subscription products. I opened a separate chat window for figuring out my sleep, given that two decades of sleep debt is one of the root causes of my health issues (I know many founders who sleep a maximum of six hours a night): according to my data, the tipping point was in 2023, when it all caught up with me. Something became chronic and deep-set.
I upload screenshots of my sleep data every day. I’ve even uploaded photos of my sleep posture for recommendations, and am now more careful about my water consumption, about when in the day I have my last masala chai, and about taking magnesium glycinate. And maybe 15 other little things.
Drawing from something I wrote last night: if this is how AI is shaping my decisions, I’m all for it. I’m better and healthier today than I was three months ago. I have a date in mind for my next blood test, and ChatGPT has told me which tests to get done now, and which to defer till six months later.
I’m better, and 2026 is the year I reverse this, backed by tests, data, doctors and routines.
By the way, nothing in this post should be seen as medical advice. Do your own research, create your own approaches, understand your own limitations, and consult a doctor before you do anything. Everyone has their own personal risk profile.
I’m telling you all this because I started 2026 with the objective of creating a personal health advisor. ChatGPT loses context once you’re 3,000-4,000 messages into a conversation, like my health chats are, and I need persistence of context.
I’m halfway through this health data project, and I’ve created a protocol for data updates: there’s one set of records which form a baseline, and won’t be updated again. There’s another set that will be replaced with current data, with historical data aggregated once 2026 is done. I have a tightly defined (and slightly long) core prompt that is ready to be tested. I’m still compiling data, though.
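A minimal sketch of what this protocol could look like in practice. The file names and the exact split between sets are hypothetical, for illustration only:

```python
# A minimal sketch of the update protocol, assuming JSON files on disk.
import json
from pathlib import Path

BASELINE = Path("baseline.json")      # fixed records: surgeries, incidents; written once
CURRENT = Path("current.json")        # rolling snapshot: replaced on every update
ARCHIVE = Path("archive_2026.json")   # aggregated history, built once 2026 is done

def update_current(new_snapshot: dict) -> None:
    """Replace the rolling snapshot with the latest data."""
    CURRENT.write_text(json.dumps(new_snapshot, indent=2))

def close_out_year() -> None:
    """Fold the year's final snapshot into the historical archive."""
    history = json.loads(ARCHIVE.read_text()) if ARCHIVE.exists() else []
    history.append(json.loads(CURRENT.read_text()))
    ARCHIVE.write_text(json.dumps(history, indent=2))
```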
I’m a systems guy, and this is a health system I’m building around the quantified self approach. Not everyone has had a Google Form for uploading their health records and prescriptions to Google Drive for almost a decade, or has used apps to document health data.
In effect, I was building a Health OS to replace a chat window as an assistant.
So I don’t know whether this will all work, but I’m excited about the launch of ChatGPT Health: if it holds context better, and has a body of medical research to reference while making recommendations, I might second-guess it less often. I have a journalist’s bullshit detector as a permanent occupant of my brain, so I end up challenging ChatGPT’s assumptions and telling it off fairly frequently to get it back on track.
Product challenges that ChatGPT Health faces
As a product, this is not going to be easy for ChatGPT, because of default product behaviour and default user behaviour. Some things to think about when it comes to AI for Health:
Extreme variance in user behaviour: Not everyone will be as sceptical of its outputs. Not everyone will challenge its assumptions, or ask it to validate what it’s saying with data.
Incomplete, inconsistent, and poorly structured data inputs: Not everyone will try to achieve completeness in data inputs for a better, smarter diagnosis. Not everyone will be in a position to create a JSON, or will choose to upload documents one at a time instead of all at once (bulk uploads hurt the system’s ability to parse the data). Not everyone will be in a position to identify gaps in data already parsed, or think about what will make outputs more reliable. The system has to work with partial, messy inputs by default.
Mainstream users won’t build structure: Power users will compensate for the lack of structure and completeness in the system (I created a system where none existed), but mainstream users don’t do this. It needs to work for less motivated users.
Combining the last two points: What happens when someone gives it 5% of the data and 5% of the attention an output needs?
A medical memory (context persistence) is a prerequisite for trust: User trust is earned when there’s pattern recognition across years and different datasets, not immediate symptoms, and not just blood tests alone. It needs context persistence over a period of time, and to connect the dots across sleep, exercise, blood tests, medication, supplements, diet… everything: a medical memory, in a manner of speaking (there’s a toy sketch of this after the list).
Figure out when to stop using context: This is the trickiest one. If I’ve uploaded data that goes back eight years, how does the system decide which data points to give more importance to? There are anomalies: can it treat, for example, a sudden jump in triglycerides five years ago (not there in my data, just an example) as important today because triglycerides went up again?
Consent, consent, consent: There is also a question of trust (and I guess I traded data for utility here), but consent, and whether user-inputted data is used to train ChatGPT Health, will be a key consideration for some. It’s also a regulatory consideration, since health data is sensitive personal data (except in India… ugh). Not everyone will create a separate account just for health, worried about whether this connects to the rest of their data.
Doctors not necessarily in the loop: Not everyone will go to a doctor to get a hypothesis validated. The product cannot assume a human check exists downstream. It might add a disclaimer, but it cannot guarantee that the user will get a doctor’s approval, or won’t simply lie to ChatGPT that they have got it checked with a doctor.
Separation of medical from non-medical advice: Not everyone will separate medical advice (“take this pill”) from non-medical recommendations (“sleep on your side”). Given the low-trust environment we’re in, especially in India, there are enough people who will seek naturopathy cures over medication even when medication is what’s needed, and ChatGPT will have to address this.
Safety cannot depend on adversarial behaviour: A health product cannot assume adversarial literacy, and safety cannot depend on scepticism. You can’t lose context and require an adversarial response in order to get back on track.
Recognise that you might shape behaviour, and figure out the risk: Disclaimers don’t address concerns about algorithms shaping behaviour here either. Algorithms are built to please and to lower friction in their responses, and confirmation bias greatly influences algorithmic responses. ChatGPT will have to figure out how to reduce this risk.
Calibration of confidence signals: Outputs often sound deterministic even though they’re based on probability. Every so often, ChatGPT exhibits confidence it shouldn’t have, which gets corrected when you challenge it; on some occasions, only when you challenge it repeatedly does it course-correct. I think OpenAI will have to determine how it uses language to communicate a lack of confidence in its recommendations, and ensure there’s enough room for uncertainty where it should exist.
Ensure that you say you don’t know, when you don’t know: AI tools often do not admit when they don’t know enough, or are at risk of misinterpreting incomplete data because they’ve filled in some gaps for themselves. Very often, I’ve received a recommendation that I’ve had to counter with data. ChatGPT has to figure out under what conditions it refuses to move forward and avoids coming to a conclusion, because sophistication varies from user to user.
Lastly, from a regulatory standpoint: who owns the outcome of this intervention? When a user’s behaviour changes and the outcome worsens, who owns this outcome? There’s a case in the US about someone allegedly being nudged towards suicide, or not prevented from committing suicide, by ChatGPT. Who owns this?
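To make the medical memory and context-weighting points above concrete, here’s a toy sketch of one way a system could persist observations and decide how much weight to give old data points. This is entirely hypothetical, not how ChatGPT Health works; the half-life and the recurrence boost are assumptions made up for illustration.

```python
# Toy sketch: decay the weight of old data points, but boost an old anomaly
# when the same analyte goes out of range again. Hypothetical, not ChatGPT's.
from datetime import date

HALF_LIFE_DAYS = 365  # assumption: a reading loses half its weight per year

def recency_weight(observed: date, today: date) -> float:
    """Exponentially decay the weight of a data point as it ages."""
    age_days = (today - observed).days
    return 0.5 ** (age_days / HALF_LIFE_DAYS)

def context_weight(obs: dict, history: list[dict], today: date) -> float:
    """Weight an observation by recency, boosted if the same anomaly recurs."""
    weight = recency_weight(obs["date"], today)
    recurred = any(
        o is not obs and o["analyte"] == obs["analyte"] and o["out_of_range"]
        for o in history
    )
    if obs["out_of_range"] and recurred:
        # An old triglyceride spike becomes relevant again if it repeats.
        weight = min(1.0, weight * 3)
    return weight

# Example: a spike from five years ago regains importance once it recurs.
history = [
    {"analyte": "triglycerides", "date": date(2021, 1, 10), "out_of_range": True},
    {"analyte": "triglycerides", "date": date(2026, 1, 5), "out_of_range": True},
]
print(context_weight(history[0], history, today=date(2026, 1, 10)))
```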
I wrote a few weeks ago that OpenAI has a document that indicates what a good ChatGPT app looks like. I think what I’ve shared today is the first step towards a framework for what a good AI health app looks like.
In a strange way, this is a system that is most valuable to users who know how to distrust it, and most dangerous to users who don’t. That’s a big problem for a product to overcome.
PS: If this will be useful to someone, please share it with them. Especially a founder.



