I watched a naturopath in Mississauga break down in a staff meeting last year. Not because of a patient. Not because of money. Because of charting. AI clinical notes didn't exist in her practice yet, and neither did any other form of AI charting. Everything was manual reconstruction, hours after the conversation ended.

She'd seen fourteen patients that day. Good sessions, all of them — the kind of consults that remind you why you went into practice. But by 7:15pm she was still at her desk, toggling between two systems, trying to reconstruct a conversation she'd had at 10am about a patient's methylation panel results. Her daughter's piano recital started at 7:30. She didn't make it.

That story isn't unusual. It's Tuesday.


The documentation tax nobody agreed to pay

The AMIA 25x5 Task Force published its 2024 TrendBurden survey, and the numbers should make every clinic owner uncomfortable: 77% of healthcare professionals reported finishing work later than they wanted to because of documentation. Not patient emergencies. Not complex cases. Paperwork.

77% of healthcare professionals finish work late specifically because of documentation tasks — AMIA TrendBurden 2024

And 75% of those same respondents said documentation actively impedes their ability to provide patient care. Read that twice. Three quarters of clinicians believe the thing they have to do after seeing a patient is making them worse at seeing the patient.

The AMA's 2025 physician data puts finer numbers on it: out of a 57.8-hour average workweek, only 27.2 hours involve direct patient care. Thirteen hours go to indirect care — charting, order entry, inbox management. The rest is admin. That ratio should bother anyone who went into healthcare to help people.

A November 2024 study in Health Affairs found something I keep thinking about. Every additional hour a primary care physician spends on documentation reduces the chance they'll review a patient's outside records by 7.1%. Documentation doesn't just consume time. It crowds out the clinical thinking that requires time.


What 5:30pm actually looks like

I need to be specific here, because the abstractions hide the damage.

Marcus runs a two-practitioner osteopathy clinic in Kitchener. His last patient leaves at 5:45pm most days. Between 5:45 and whenever he actually gets home, here is what happens: he opens his EHR, finds the appointment, scrolls to the notes section, tries to remember the exact sequence of a spinal assessment he performed four hours ago, types it manually into a SOAP template, realizes he forgot to document the referral conversation from the 2pm patient, goes back, edits that note, returns to the 5pm note, checks his scribbled shorthand on a sticky note that says "L4-L5 R rotation restricted — re-assess 2 wks," translates that into something defensible, saves, moves to the next patient's note.

Seven patients. Seven rounds of this. On a good night he's done by 7:30. On a bad one, 8:45.

His associate, who started three months ago out of a Canadian College of Osteopathy program, told him she didn't realize that "half the job is typing about what you already did." She's already looking at industry positions. Marcus isn't surprised.

This is the part that doesn't show up in the burnout surveys. The hemorrhage isn't dramatic. It's the slow drip of talented clinicians who realize the ratio of patient care to administrative reconstruction isn't what they signed up for, and quietly leave.


Why does clinical documentation take so long for practitioners?

Clinical documentation is slow because practitioners must manually reconstruct patient conversations from memory after appointments, translating verbal discussions into structured note formats like SOAP, treatment plans, and assessments. The AMIA TrendBurden 2024 survey found that 77% of healthcare professionals work late specifically because of charting, and most clinicians report the burden hasn't decreased despite new EHR tools.

How AI clinical notes eliminate the reconstruction problem

What makes clinical documentation so uniquely draining isn't the typing. It's the reconstruction.

A physiotherapist assessing a frozen shoulder performs maybe fifteen discrete clinical observations during a thirty-minute session. Range of motion in three planes. Compensatory patterns. Pain response at end range. Tissue quality during manual work. Neural tension findings. Patient-reported pain levels. Functional goals discussed. Home exercise modifications agreed on. Follow-up timing.

All of this happens in the flow of conversation and clinical reasoning. None of it gets documented in real time because the practitioner is, you know, treating a person. So it lives in their head, degrading with every subsequent patient, until they sit down at 6pm and try to reconstruct something that felt intuitive three hours ago into structured clinical language.

I started writing this section as a straightforward explanation of cognitive load. But the real issue is different. The problem isn't that documentation is hard. It's that documentation requires a completely different cognitive mode than clinical care, and we ask practitioners to switch between them repeatedly, at the end of the day when they have the least capacity for it. It's like asking a surgeon to write the operative report mid-procedure. Nobody would do that. But we ask every other clinician to do the functional equivalent, just with a time delay that makes it worse.

The reconstruction gets worse over time

By the time a practitioner sits down to chart their fifth or sixth patient of the day, the details have begun to blur. Was it the 2pm patient who mentioned knee pain radiating from the lateral collateral ligament, or the 3:30pm? Did the consent conversation happen with the chronic pain patient or the post-surgical follow-up? These micro-confusions compound. Every note takes longer than the last because the memory retrieval costs more.


What changes when the appointment records itself

Oli Health built something that short-circuits the reconstruction problem entirely. The idea is straightforward but the execution matters: record the appointment conversation — whether it's a telehealth call or an in-person visit using the browser microphone — and let AI generate structured clinical notes from the transcript. Automatically.

Appointment sidesheet showing a completed recording with green badge and spinner indicating notes are being auto-generated from two templates

Not one generic note. Up to ten separate note types, each populated from the same conversation, each following a different template. A SOAP note. A treatment plan. A progress note. An initial assessment. Whatever templates the practitioner or practice has configured for that appointment type. The AI doesn't guess what template to use. The practitioner (or practice admin) attaches templates to each scheduling configuration in advance, and the system generates a draft note for each one the moment the recording finishes processing.

One conversation, multiple structured outputs

A single 20-minute appointment recording can produce a SOAP note, a treatment plan, and a progress summary simultaneously. Each note follows its own template structure, populated with the relevant clinical details extracted from the transcript. The practitioner reviews drafts, edits if needed, and signs off. The reconstruction step — the part that consumed the evening — is gone.
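The fan-out described above is easier to hold in your head as pseudocode. This is a mental model only: the function and template names below are illustrative assumptions, not Oli Health's actual API.

```python
# Illustrative sketch: one transcript fans out into one draft per
# attached template. Names are hypothetical, not the product's API.

def extract_into_template(transcript: str, template: str) -> str:
    # Placeholder for the AI extraction step, which populates the
    # template's fields with details pulled from the transcript.
    return f"[{template} draft built from the recorded conversation]"

def generate_drafts(transcript: str, templates: list[str]) -> dict[str, str]:
    """Produce one structured draft note per configured template."""
    return {t: extract_into_template(transcript, t) for t in templates}

drafts = generate_drafts(
    "Patient reports recurring lower back pain for six weeks ...",
    ["SOAP note", "Treatment plan", "Progress summary"],
)
# drafts now holds three separate drafts from the single conversation
```

The point of the sketch is the shape of the workflow: the transcript is captured once, and every template attached to the appointment type gets its own populated draft.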

Appointment sidesheet showing three auto-generated draft notes from a completed recording, each using a different template — Practice SOAP Template and SOAP Note Template

I thought this would produce generic, unusable output. I was wrong about that. The notes follow the template structure the practitioner already designed: if your SOAP template has fields for chief complaint, history of present illness, review of systems, medications, and allergies, the AI populates each one with the specific details from that specific conversation. When the patient mentions recurring lower back pain that started six weeks ago and worsens with prolonged sitting, that goes into the HPI section. When they mention they're taking naproxen 500mg twice daily, that goes into medications. It's not summarization. It's structured extraction.

Patient chart showing an AI-generated SOAP note with fully populated Subjective section including chief complaint of recurrent low back pain, detailed history of present illness, review of systems, past medical history, medications, and allergies — all extracted from the recorded conversation

That screenshot is a real generated note from a recorded appointment. The subjective section alone — chief complaint, HPI, review of systems, past medical history, medications, allergies — would have taken a practitioner fifteen to twenty minutes to reconstruct from memory. The AI produced it in under a minute from the transcript.


Can AI generate multiple types of clinical notes from one appointment?

Yes. Oli Health's AI Notes feature generates up to 10 different structured clinical notes from a single recorded appointment — SOAP notes, treatment plans, progress notes, initial assessments, and more. Each note follows a different practitioner-configured template and is populated with specific clinical details extracted from the conversation transcript, not generic summaries.

Who this actually changes things for

The practitioners who benefit most aren't the ones who are already efficient documenters. It's the ones whose documentation burden is disproportionate to their clinical complexity.

A solo chiropractor in Red Deer seeing twenty-five patients a day with fifteen-minute appointments does not have time to chart between patients. Period. Every note is deferred to the end of the day or, more likely, the evening. Twenty-five reconstructions from memory over a cold dinner.

A multidisciplinary clinic in Vancouver with three naturopaths, a counselor, and an acupuncturist has a different version of the same problem: five different note formats, five different documentation standards, and one shared EHR where consistency is aspirational at best. When each practitioner builds their own templates and the AI generates notes following those templates from recorded sessions, the variability problem solves itself. The counselor's psychotherapy notes follow the counselor's structure. The acupuncturist's TCM assessment follows the acupuncturist's structure. Same underlying technology, different outputs.

Telehealth meeting control bar with microphone, camera, screen share, chat, AI Notes, settings, and leave buttons — the AI Notes button highlighted for one-click recording activation

For telehealth appointments, the recording is built directly into the video call. One button. For in-person visits, the practitioner starts a recording from the appointment sidesheet, and a floating bar stays visible across the screen during the session so nothing gets accidentally left running. Both methods produce the same output: a transcript, and from that transcript, structured notes.

Floating recording bar showing red indicator, patient name, appointment time, elapsed duration, and pause/stop controls — visible at the bottom of the screen during recording

I keep coming back to the Red Deer chiropractor scenario because the math is startling. Twenty-five patients. Let's say each note takes eight minutes to reconstruct from memory — and that's conservative for a detailed SOAP. That's three hours and twenty minutes of charting per day. Five days a week. Sixteen hours a week spent writing about things you already did. If AI Notes cuts that by even 80%, you're getting thirteen hours of your week back. I honestly don't know what else gives you that kind of return.
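The back-of-envelope math above is worth making explicit. All of the numbers here are the illustrative assumptions from the scenario, not measured benchmarks:

```python
# Charting time in the Red Deer scenario. Every constant is an
# illustrative assumption from the text, not a product benchmark.
PATIENTS_PER_DAY = 25
MINUTES_PER_MANUAL_NOTE = 8      # conservative for a detailed SOAP
DAYS_PER_WEEK = 5
ASSUMED_REDUCTION = 0.80         # assumed cut from the edit-from-draft workflow

weekly_minutes = PATIENTS_PER_DAY * MINUTES_PER_MANUAL_NOTE * DAYS_PER_WEEK
saved_hours = weekly_minutes * ASSUMED_REDUCTION / 60

print(f"Weekly charting: {weekly_minutes / 60:.1f} hours")   # 16.7 hours
print(f"Hours reclaimed: {saved_hours:.1f} hours")           # 13.3 hours
```

Even if the real reduction lands well below 80%, the reclaimed hours are still measured in whole evenings per week.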


Why AI charting improves note quality, not just speed

I expected the time savings to be the story. It's not. Or rather, it's not the whole story.

The unexpected shift is in note quality. When a practitioner reconstructs a conversation from memory at the end of the day, details get dropped. The patient mentioned bilateral symptom onset but the practitioner only documented unilateral because the bilateral detail was mentioned in passing during a discussion about something else. The medication dosage was stated once and never repeated, so the note says "naproxen" without the dose. The follow-up interval was agreed as "two weeks" in conversation but got written as "follow up as needed" because the practitioner was tired and defaulted to vague.

AI doesn't get tired. AI doesn't forget the bilateral mention that happened at minute four. The transcript captures everything. And the structured extraction surfaces details that a fatigued practitioner would have missed or rounded off.

The AI follows each template precisely. If your SOAP template separates subjective, objective, assessment, and plan sections, that's exactly how the note gets structured, with the right details in the right fields. The practitioner's job shifts from writing to reviewing — scan the note, confirm accuracy, sign off. Most clinicians make only minor adjustments, if any. The edit-from-draft workflow is categorically different from the write-from-memory workflow. A quick review takes a minute or two. Reconstructing a note from scratch takes eight to fifteen minutes. The comparison isn't close.


How does AI clinical documentation reduce practitioner burnout?

AI clinical documentation eliminates the end-of-day reconstruction burden that drives most after-hours charting. Instead of spending hours rebuilding patient conversations from memory, practitioners review and edit AI-generated draft notes that already contain the clinical details from the recorded appointment. The AMA reports that physicians currently spend only 27.2 out of 57.8 work hours on direct patient care, with documentation consuming the bulk of the remainder.

The naturopath made it to the next recital

I circled back with the Mississauga clinic recently — they were one of our early beta users for AI Notes. The naturopath, the one who missed the piano recital, told me she finishes her last chart within ten minutes of her last patient now. Not because she's rushing. Because the notes are already there when she opens the chart. She reads through them, confirms the details, signs off.

She said something that stuck with me: "I forgot what it felt like to not dread the end of my day."

That's not a feature benefit. Not something you put on a pricing page. It's the difference between a practitioner who stays in practice and one who doesn't.

The 2025 AMA data shows 22.5% of physicians now spend more than eight hours per week on EHR work outside of standard business hours — up from 20.9% in 2023. The trend is going in the wrong direction. Documentation burden isn't a problem the industry is solving. It's a problem the industry is slowly losing to.

The clinics that figure out how to eliminate the reconstruction step — not reduce it, not optimize it, eliminate it entirely — are the ones that will keep their practitioners. The ones that don't will keep posting job listings for positions that experienced clinicians no longer want.

I don't have a tidy conclusion for this. Maybe later.


What to look for in AI charting software

Not all AI charting tools work the same way, and the differences matter more than most comparison pages let on. I've seen three categories emerge.

The first is ambient AI scribes — software that listens during the visit and generates a single note. These work reasonably well for primary care physicians with predictable visit structures. But for allied health practitioners whose sessions involve manual therapy, movement assessments, or extended counseling, a single ambient note often misses the clinical specificity they need. The output tends toward generic.

The second category is template-based AI documentation — tools that let the practitioner define the output structure in advance. This is where Oli Health's AI Notes sits. You build your templates (or use pre-configured ones), attach them to your appointment types, and the AI populates each template from the recorded conversation. The distinction matters because allied health documentation isn't one-size-fits-all. A chiropractor's SOAP note looks nothing like a psychotherapist's progress note, and both are different from an acupuncturist's TCM assessment.

The third is dictation-plus-formatting tools, which are essentially speech-to-text with some structural cleanup. These save time on typing but don't solve the reconstruction problem because the practitioner still has to narrate the note from memory after the session ends.

When you're evaluating options, the questions worth asking are practical. Does the tool support multiple note types from a single recording? Can you customize the template structure to match your discipline's documentation standards? Does it work for both telehealth and in-person visits? Is the output editable before it becomes part of the patient record? And — this one gets overlooked — does it integrate with the rest of your practice management workflow, or is it a standalone tool that adds another tab to your screen?

What should practitioners look for in AI charting software?

Practitioners evaluating AI charting software should prioritize: support for multiple note types from a single recording, customizable templates that match their discipline's documentation standards, compatibility with both telehealth and in-person visits, editable drafts before finalizing, and integration with their existing practice management system to avoid workflow fragmentation.

How to get started with AI clinical notes

The practitioners I've spoken with who adopted AI notes most successfully all did the same thing: they started small. One appointment type. One template. Three or four patients over a day or two.

The reason this works better than a full rollout is that it lets you calibrate your templates before committing. Your first AI-generated SOAP note will tell you immediately whether your template structure gives the AI enough guidance. If your template just says "Subjective" with no sub-fields, the AI will produce a block of unstructured text. If your template breaks it into chief complaint, HPI, review of systems, and medications, you'll get precise extraction into each field. The template quality determines the output quality.
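One way to see the vague-versus-specific difference is to compare the two template shapes side by side. The field names below are illustrative, not a required schema, and the counting function is just a way to make the granularity visible:

```python
# Two template shapes for the same appointment type. Field names are
# illustrative assumptions, not a schema the product requires.

vague_template = {
    "Subjective": None,  # one big field -> the AI returns a text block
}

specific_template = {
    "Subjective": {
        "Chief complaint": None,
        "History of present illness": None,
        "Review of systems": None,
        "Medications": None,  # e.g. "naproxen 500mg twice daily"
    },
}

def field_count(template: dict) -> int:
    """Count the leaf fields the AI is asked to populate."""
    total = 0
    for value in template.values():
        total += field_count(value) if isinstance(value, dict) else 1
    return total

print(field_count(vague_template))     # 1
print(field_count(specific_template))  # 4
```

More leaf fields means more targeted extraction: each detail in the transcript has an obvious destination, instead of landing somewhere in one undifferentiated block.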

Here's what a reasonable first week looks like:

  1. Pick your most common appointment type — the one you chart five or six times a day.
  2. Build or select a note template with clearly defined sections. Be specific. "Assessment" is vague. "Assessment: primary diagnosis, secondary considerations, functional limitations, contraindications" gives the AI what it needs.
  3. Record three to five sessions using that template. For telehealth, hit the AI Notes button in the call. For in-person, start the recording from the appointment sidesheet.
  4. Review the generated notes. Compare them against what you would have written manually. Note where the AI nailed it and where you'd adjust the template.
  5. Refine the template based on what you learned, then expand to your other appointment types.

Most practitioners tell me they stop editing their templates after the second or third round. Once the structure is right, the AI consistently extracts the right information into the right fields. The ten minutes you spend refining a template saves you ten minutes on every note it generates for the rest of the year. Scale that across twenty patients a day and the math is hard to argue with.


If your evenings disappear into charting, it might be worth trying AI Notes on a few patients this week. You'll know within three appointments whether it changes your day.