I've got all my medical records uploaded into NotebookLM. Years of them. Blood tests, scan reports, consultant letters, everything. So I used the audio podcast summary feature to get a nice overview of my medical history while driving.
About 10 minutes in, one of the AI hosts confidently stated: "Russ has had his gallbladder removed."
I've never had my gallbladder removed.
This is a problem.
Not because it's funny (it is), but because it demonstrates the most important thing about using AI for medical stuff: it will confidently tell you things that are completely wrong.
NotebookLM did that thing that AI does really well: it sounded authoritative. It wasn't hedging. It wasn't saying "it appears that..." or "the documents suggest..." It was just stating it as fact.
I checked. I looked through all the documents. There's no mention of a gallbladder removal anywhere. NotebookLM made it up. Or more accurately, it hallucinated a connection between something in my medical records and a non-existent procedure. Maybe it saw "bile" somewhere and made an assumption. Maybe it's just making shit up. I genuinely don't know.
The Useful Bit
Here's the thing though: the sheer volume of medical data in my files is genuinely impossible to manage manually. I've got 100+ pages per year. Six years in. That's 600+ pages. I can't read that every morning and keep it in my head. My chemo brain especially can't.
So AI solving that problem actually matters. It's just that I need to verify what it tells me, because its confidence is not correlated with its accuracy. At all.
The Strategy
When using AI for medical stuff, the approach has to be:
- Use it for speed. Get the AI to summarise, search, and present information quickly.
- Verify everything. If it says something important, check the source document. Always.
- Use it for pattern spotting, not diagnosis. "Have my A1C numbers been going up?" is a good question. "Do I have diabetes?" is not. (There's a sketch of what I mean just after this list.)
- Don't act on AI conclusions alone. If NotebookLM finds a pattern or inconsistency, that's worth asking your doctor about. It's not a diagnosis.
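To show what I mean by "pattern spotting, not diagnosis", here's a rough Python sketch of the workflow. The readings, file names and page numbers are placeholders I've invented for illustration, not my actual results. The shape is what matters: every number carries a pointer back to its source document, and the code only reports a direction of travel, never a conclusion.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Reading:
    """One lab value, plus a pointer back to the source document."""
    when: date
    value: float   # HbA1c in %
    source: str    # file and page, so the claim can be verified

# Placeholder data -- invent nothing, verify everything.
a1c = [
    Reading(date(2021, 3, 1), 5.4, "bloods_2021.pdf, p. 12"),
    Reading(date(2022, 3, 5), 5.6, "bloods_2022.pdf, p. 9"),
    Reading(date(2023, 2, 28), 5.9, "bloods_2023.pdf, p. 14"),
]

def trend(readings: list[Reading]) -> str:
    """Report the direction of change between first and last reading.

    Deliberately answers "have the numbers been going up?" and
    nothing more -- interpretation belongs to a doctor.
    """
    ordered = sorted(readings, key=lambda r: r.when)
    delta = ordered[-1].value - ordered[0].value
    direction = "rising" if delta > 0 else "falling" if delta < 0 else "flat"
    sources = "; ".join(r.source for r in ordered)
    return f"A1C {direction} ({delta:+.1f}) -- verify against: {sources}"

print(trend(a1c))
# A1C rising (+0.5) -- verify against: bloods_2021.pdf, p. 12; ...
```

If a claim can't be traced back to a file and a page like this, treat it the way I now treat the phantom gallbladder: interesting, but not true until proven.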
Why I'm Still Using It
Because the problem it solves is real, even if the solution is imperfect.
Without AI, I've got 600+ pages of medical information that I can't remember, can't search, and can't easily understand. With AI, even with the hallucinations, I can ask questions and get pointed in the right direction. Then I verify with the actual documents.
It's like having a very smart but slightly drunk medical assistant. They're enthusiastic, they'll help you find things, they can spot patterns. But you wouldn't let them make actual decisions.
My gallbladder remains firmly where it's always been, thanks to my own fact-checking.
The Takeaway
AI is genuinely useful for managing medical information. But the confidence with which it presents information is not a measure of its accuracy. Train yourself to verify. Always ask to see the source. And maybe, just maybe, don't believe the audio summary when it tells you something you're pretty sure didn't happen.
Because I'd quite like to know where this phantom gallbladder surgery came from.