What the launch of ChatGPT Health tells us about the future of care

Written by Pascale Day | 05.09.2026

OpenAI has announced its intention to launch ChatGPT Health, positioning it as a hub for personal health data designed to help users “feel more informed, prepared, and confident navigating [their] health.”

With millions of people already using generative AI platforms for health-related questions, it’s a logical next step. But it’s also a significant one. This announcement brings into sharper focus a shift that’s already well underway: patients are increasingly turning to AI alongside, and sometimes ahead of, traditional healthcare channels.

In our latest report, ‘Beyond the search bar: From AI curiosity to connected care’, we surveyed 2,000 patients and found that one in four are already turning to platforms like ChatGPT and Gemini for healthcare guidance, while one in three would be willing to consult AI rather than wait to see a clinician.

So what does this launch mean for healthcare, and how can professionals adapt to a world where the first place patients might turn isn’t a clinic, but an algorithm?

So what is ChatGPT Health, exactly?

ChatGPT Health is a new arm of the AI platform, designed to sit separately from a user’s standard ChatGPT conversations to create a dedicated space for health-related interactions. If a user asks a health question in standard ChatGPT, they’ll be nudged to move the conversation into this new health-specific environment.

The product was developed with input from more than 260 physicians across 60 countries and dozens of specialties. But the intention is not to diagnose or treat – OpenAI has been explicit about this. In its press release, the company states: “Health is designed to support, not replace, medical care. It is not intended for diagnosis or treatment.”

Instead, ChatGPT Health positions itself as an assistant for tasks like explaining lab results, helping users prepare questions ahead of a doctor’s visit or tracking wellness patterns. Users can connect their medical records and wellness or fitness apps to provide more context around their health. And, OpenAI has stressed, they can view or delete health memories and disconnect apps at will.

Why are patients turning to AI for health support?

People are consulting ChatGPT in sickness and in health, and it’s not hard to see why.

“AI gives immediate access to knowledge drawn from enormous datasets, in this case, about health,” Semble CEO Christoph Lippuner told Metro this week. “It’s convenient, non-judgmental and free, and the combination of accessibility and privacy is drawing patients in.”

We reach for the convenience of AI in almost every other aspect of our lives; healthcare, it seems, is no different. OpenAI CEO Sam Altman said late last year that the platform had over 800 million users each week. This week, he added that 230 million of those users are seeking answers for medical issues.

For patients facing long waits or rushed appointments, AI can feel like a way to fill in the gaps. It can help them make sense of information, spot patterns or simply feel more prepared before they walk into a consultation. (But, as OpenAI emphasises, it shouldn’t be used for diagnosis.)

What ChatGPT Health could mean for clinicians

From a health tech perspective, the launch of ChatGPT Health shifts consumer expectations overnight. Conversational access to health information will quickly become the norm, raising the bar for everyone else.

But it also highlights an important distinction: the gap between ‘helpful health context’ and clinically safe, accountable healthcare advice.

During a press preview, OpenAI’s CEO of Applications, Fidji Simo, shared a personal story about being hospitalised with a kidney stone. After a resident prescribed an antibiotic, she used ChatGPT to cross-check it against her medical history. The tool flagged that the medication could reactivate a serious infection she’d had years earlier.

“The resident was relieved I spoke up,” said Simo. “She told me she only had a few minutes per patient during rounds, and health records aren’t organised in a way that makes it easy to see. I’ve heard many stories like this from people who are using AI to help connect the dots in their healthcare system.”

It’s an important example, but also a specific one. Here, AI was used to sense-check advice already given, not to replace it. And crucially, the insight only mattered because it was brought back into the clinical conversation.

That’s where the balance sits. AI can surface patterns or potential issues, but without clinical context, accountability and professional oversight, it can just as easily mislead. The risk isn’t that patients use AI; it’s that those AI-driven insights stay outside the consultation. In that sense, it’s down to clinicians to ensure patients feel comfortable raising the information they’ve found online, much as Simo did with her resident.

Making AI part of the consultation, not a replacement for it

So, the question is no longer whether patients will use AI, but how to make that use responsible and effective. Right now, many patients are already doing this work quietly, sometimes without telling their clinician. That silence is where risk creeps in.

“Healthcare professionals must help patients navigate their digital curiosity safely, to turn early questions into informed conversations,” says Christoph.

For Semble Product Manager and former GP, Dr Jenny Williams, this starts with communication in the room:

“Good consultations start with good communication. Clinicians should enquire openly from the start, setting a tone of ease and openness. As we were taught in medical school, simple questions about a patient’s ideas, concerns and expectations can make all the difference to the doctor–patient dynamic.

“If a patient arrives willing to lead with AI-driven ideas, this information shouldn’t derail a consultation, but supercharge it.”

Jenny suggests shifting towards more collaborative questions, such as:

  • “What did the tool suggest?”
  • “What worried you most?”
  • “What do you hope to take away today?”

Where caution is essential: safety, bias and trust

Despite its potential, ChatGPT Health will still need to be approached with scrutiny.

When health information is generated at scale, small inaccuracies, missing context or skewed data can have outsized consequences, particularly when users may not have the clinical knowledge to spot what’s missing. The platform has faced criticism in the past for its handling of health-related content, particularly around mental health.

Then there’s the risk of over-reliance, adds Semble’s Clinical Safety Officer and former GP, Dr Karim Sandid.

“High-quality tools like this can lead to better patient activation and health literacy, meaning patients arrive at their consultations better informed and with educated questions, allowing them to advocate for themselves,” Karim explains.

“But a reliance on AI in this way comes with its risks, too. Patients sometimes begin to attribute higher authority to AI than human doctors, believing AI possesses ‘all human knowledge’. This simply isn’t true, but it increases the risk of blind trust in the tool and mistrust of clinicians.”

Data governance is another key concern. Health data is deeply personal, and once it’s aggregated and analysed, questions around ownership, access and secondary use become unavoidable. As Semble CTO Mikael Landau puts it:

“Users need clarity on who sees their data, how long it lives and what decisions it influences. Trust breaks down quickly if people feel data is being reused in ways they didn’t expect. In health, transparency and reversibility matter as much as functionality.”

Potential bias is also impossible to ignore.

“As a former NHS clinician, I’ve seen how language, health literacy and access barriers create real inequality,” says Dr Williams. “AI is built on data and that data has bias. Left unchecked, AI tools risk amplifying those inequalities and providing users with inaccurate information.”

As Jenny notes, healthcare data has historically been far from neutral. In the US, the inclusion of women in clinical research was only mandated by federal law in 1993; before then, women were widely excluded from studies, largely due to FDA guidance barring ‘women of childbearing potential’ from early-stage trials. Where there is historical undertreatment or underrepresentation in the data, large language models can reproduce that framing.

So, is ChatGPT Health good, bad or just inevitable?

ChatGPT Health reflects a reality that’s already here: patients are curious, proactive, and using AI whether healthcare systems are ready or not.

Handled well, tools like this can help patients feel more informed and engaged, and give clinicians a clearer starting point for those meaningful conversations. Without that clinical guidance, however, they risk confusion, mistrust and widening inequality.

AI can’t be used alone. It must work in conjunction with clinicians, accountable systems and an open dialogue. The opportunity now is to make conversations about AI easy to bring to the table and not something patients feel they need to hide.

There’s also something worth noting about how ChatGPT Health is being positioned. At the time of writing, access sits behind a waitlist, a familiar tactic for building demand but perhaps an interesting one in healthcare.

This tool is clearly edging into the preventative care space, an area already crowded with services built around tracking, optimisation and early intervention. The difference is scale. ChatGPT doesn’t just add another product to the mix; it stretches an existing platform into yet another domain.

That doesn’t make it inherently good or bad. But it does reinforce the need for clarity about where AI fits, and where it doesn’t.

As AI in healthcare continues to expand at breakneck speed, we need to remember that the future of healthcare isn’t AI versus clinicians. At its core, it’s about how well the two can work together.