9 changes: 9 additions & 0 deletions .github/workflows/ci.yml
@@ -41,3 +41,12 @@ jobs:

- name: Run tests
run: uv run pytest

- name: Test documentation code snippets
if: matrix.python-version == '3.12'
run: |
uv run pytest --codeblocks \
docs/quickstart.md \
docs/cookbook/ \
docs/tutorials/
continue-on-error: true # Don't block CI while snippets are being updated
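
For reviewers unfamiliar with the `pytest-codeblocks` plugin this step relies on: it collects fenced code blocks from Markdown files and runs each one as a test, and an HTML comment placed immediately before a fence applies a pytest marker to that block. That is the convention used throughout this PR, sketched here on a hypothetical snippet:

````markdown
<!--pytest.mark.skip-->
```bash
# This block is collected by `pytest --codeblocks` but skipped
pip install healthchain
```
````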
11 changes: 11 additions & 0 deletions docs/cookbook/clinical_coding.md
@@ -18,6 +18,7 @@ A clinical note arrives from NoteReader as CDA XML → gets parsed and processed

We'll use [scispacy](https://allenai.github.io/scispacy/) for medical entity extraction. Install the required dependencies:

<!--pytest.mark.skip-->
```bash
pip install healthchain scispacy python-dotenv
pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.4/en_core_sci_sm-0.5.4.tar.gz
@@ -27,6 +28,7 @@ pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.4/e

Download the sample CDA file `notereader_cda.xml` into a `data/` folder in your project root using `wget`:

<!--pytest.mark.skip-->
```bash
mkdir -p data
cd data
@@ -39,6 +41,7 @@ Set up a Medplum account and obtain client credentials. See the [FHIR Sandbox Se

Once you have your Medplum credentials, configure them in a `.env` file:

<!--pytest.mark.skip-->
```bash
# .env file
MEDPLUM_BASE_URL=https://api.medplum.com/fhir/R4
@@ -53,6 +56,7 @@ MEDPLUM_SCOPE=openid
First we'll need to convert the incoming CDA XML to FHIR. The [CdaAdapter](../reference/io/adapters/cdaadapter.md) enables round-trip conversion between CDA and FHIR using the [InteropEngine](../reference/interop/engine.md) for seamless legacy-to-modern data integration.


<!--pytest.mark.skip-->
```python
from healthchain.io import CdaAdapter

@@ -83,6 +87,7 @@ Next we'll build our NLP processing pipeline. We'll use a [MedicalCodingPipeline

For this demo, we'll use a simple dictionary for the SNOMED CT mapping.

<!--pytest.mark.skip-->
```python
from healthchain.pipeline.medicalcodingpipeline import MedicalCodingPipeline
from healthchain.io import Document
@@ -128,6 +133,7 @@ def link_entities(doc: Document) -> Document:

This is equivalent to constructing a pipeline with the following components manually:

<!--pytest.mark.skip-->
```python
from healthchain.pipeline import Pipeline
from healthchain.pipeline.components import SpacyNLP, FHIRProblemListExtractor
@@ -146,6 +152,7 @@

Use `.add_source` to register a FHIR endpoint you want to connect to with its connection string; the gateway will automatically manage the authentication and routing.

<!--pytest.mark.skip-->
```python
from healthchain.gateway import FHIRGateway
from healthchain.gateway.clients import FHIRAuthConfig
@@ -170,6 +177,7 @@ fhir_gateway.add_source("medplum", MEDPLUM_URL)

Now let's set up the handler for [NoteReaderService](../reference/gateway/soap_cda.md) method `ProcessDocument`, which will be called by Epic NoteReader when it is triggered in the CDI workflow. This is where we will combine all our components: adapter, pipeline, and writing to our configured FHIR endpoint:

<!--pytest.mark.skip-->
```python
from healthchain.gateway import NoteReaderService

@@ -203,6 +211,7 @@ def ai_coding_workflow(request: CdaRequest):

Time to put it all together! Using [HealthChainAPI](../reference/gateway/api.md), we can create a service with *both* the FHIR and NoteReader endpoints:

<!--pytest.mark.skip-->
```python
from healthchain.gateway import HealthChainAPI

@@ -217,6 +226,7 @@ app.register_service(note_service, path="/notereader")

HealthChain provides a [sandbox client utility](../reference/utilities/sandbox.md) which simulates the NoteReader workflow end-to-end. It loads your sample CDA document, sends it to your service via the configured endpoint, and saves the request/response exchange in an `output/` directory. This lets you test the complete integration locally before connecting to Epic.

<!--pytest.mark.skip-->
```python
from healthchain.sandbox import SandboxClient

@@ -239,6 +249,7 @@ client.load_from_path("./data/notereader_cda.xml")

Now for the moment of truth! Start your service and run the sandbox to see the complete workflow in action.
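
The (collapsed) block below runs the service in a background thread so the sandbox client can call it from the same script. A stripped-down sketch of that pattern, with `serve_app` standing in for the real app startup (e.g. `uvicorn.run(app)`):

```python
import threading
import time

def serve_app():
    # Stand-in for starting the HealthChainAPI app; a real server
    # would block here serving requests.
    time.sleep(0.2)

# daemon=True lets the script exit even if the server loop never returns
server = threading.Thread(target=serve_app, daemon=True)
server.start()

# ... the sandbox client would send its requests here ...
server.join(timeout=1.0)
print("server thread alive:", server.is_alive())
```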

<!--pytest.mark.skip-->
```python
import threading

10 changes: 10 additions & 0 deletions docs/cookbook/discharge_summarizer.md
@@ -10,6 +10,7 @@ Check out the full working example [here](https://github.com/dotimplement/Health

### Install Dependencies

<!--pytest.mark.skip-->
```bash
pip install healthchain python-dotenv
```
@@ -19,6 +20,7 @@ This example uses a Hugging Face model for the summarization task, so make sure

If you are using a chat model, make sure you have the necessary `langchain` packages installed.

<!--pytest.mark.skip-->
```bash
pip install langchain langchain-huggingface
```
@@ -27,6 +29,7 @@

Download the sample data `discharge_notes.csv` into a `data/` folder in your project root using `wget`:

<!--pytest.mark.skip-->
```bash
mkdir -p data
cd data
@@ -43,6 +46,7 @@
For LLM approaches, we'll use [LangChain](https://python.langchain.com/docs/integrations/chat/huggingface/) for better prompting.

=== "Non-chat model"
<!--pytest.mark.skip-->
```python
from healthchain.pipeline import SummarizationPipeline

@@ -53,6 +57,7 @@


=== "Chat model"
<!--pytest.mark.skip-->
```python
from healthchain.pipeline import SummarizationPipeline

@@ -94,6 +99,7 @@ The `SummarizationPipeline` automatically:

The [CdsFhirAdapter](../reference/io/adapters/cdsfhiradapter.md) converts between CDS Hooks requests and HealthChain's [Document](../reference/io/containers/document.md) format. This makes it easy to work with FHIR data in CDS workflows.

<!--pytest.mark.skip-->
```python
from healthchain.io import CdsFhirAdapter

@@ -116,6 +122,7 @@ cds_adapter.format(doc)

Create the [CDS Hooks handler](../reference/gateway/cdshooks.md) to receive discharge note requests, run the AI summarization pipeline, and return results as CDS cards.

<!--pytest.mark.skip-->
```python
from healthchain.gateway import CDSHooksService
from healthchain.models import CDSRequest, CDSResponse
@@ -142,6 +149,7 @@ def handle_discharge_summary(request: CDSRequest) -> CDSResponse:

Register the CDS service with [HealthChainAPI](../reference/gateway/api.md) to create REST endpoints:

<!--pytest.mark.skip-->
```python
from healthchain.gateway import HealthChainAPI

@@ -154,6 +162,7 @@ app.register_service(cds_service)
HealthChain provides a [sandbox client utility](../reference/utilities/sandbox.md) which simulates the CDS hooks workflow end-to-end. It loads your sample free text data and formats it into CDS requests, sends it to your service, and saves the request/response exchange in an `output/` directory. This lets you test the complete integration locally and inspect the inputs and outputs before connecting to a real EHR instance.


<!--pytest.mark.skip-->
```python
from healthchain.sandbox import SandboxClient

@@ -182,6 +191,7 @@ client.load_free_text(

Put it all together and run both the service and sandbox client:

<!--pytest.mark.skip-->
```python
import threading

7 changes: 7 additions & 0 deletions docs/cookbook/format_conversion.md
@@ -8,12 +8,14 @@ The [InteropEngine](../reference/interop/engine.md) provides a unified interface

Install HealthChain:

<!--pytest.mark.skip-->
```bash
pip install healthchain
```

Create an interoperability engine:

<!--pytest.mark.skip-->
```python
from healthchain.interop import create_interop, FormatType
from pathlib import Path
@@ -26,6 +28,7 @@ engine = create_interop()

Parse a CDA document and extract FHIR resources:

<!--pytest.mark.skip-->
```python
cda_xml = """
<ClinicalDocument xmlns="urn:hl7-org:v3">
@@ -93,6 +96,7 @@ for resource in fhir_resources:

Generate a CDA document from FHIR resources:

<!--pytest.mark.skip-->
```python
from healthchain.fhir.r4b import Condition, Patient

@@ -149,6 +153,7 @@ print(cda_document)

Parse an HL7v2 message and extract FHIR resources:

<!--pytest.mark.skip-->
```python
hl7v2_message = """
MSH|^~\&|EPIC|EPICADT|SMS|SMSADT|199912271408|CHARRIS|ADT^A01|1817457|D|2.5|
@@ -169,6 +174,7 @@ for resource in fhir_resources:

Generate an HL7v2 message from FHIR resources:

<!--pytest.mark.skip-->
```python
from healthchain.fhir.r4b import Patient, Encounter

@@ -226,6 +232,7 @@ print(hl7v2_message)

Save converted data to files:

<!--pytest.mark.skip-->
```python
output_dir = Path("./output")
output_dir.mkdir(exist_ok=True)
3 changes: 3 additions & 0 deletions docs/cookbook/index.md
@@ -109,6 +109,7 @@ Hands-on, production-ready examples for building healthcare AI applications with

Cookbooks are standalone scripts — run them directly to explore and experiment. When you're ready to build a proper service, scaffold a project and move your logic in:

<!--pytest.mark.skip-->
```bash
# 1. Run a cookbook locally
python cookbook/sepsis_cds_hooks.py
@@ -125,6 +126,7 @@ healthchain serve

**What moves from your script into `healthchain.yaml`:**

<!--pytest.mark.skip-->
```python
# cookbook — everything hardcoded in Python
gateway = FHIRGateway()
@@ -151,6 +153,7 @@ llm:
max_tokens: 512
```

<!--pytest.mark.skip-->
```python
# app.py — load from config instead
from healthchain.config.appconfig import AppConfig