From 7a833e7185db522d1e1b5a596b98e7b461fca5da Mon Sep 17 00:00:00 2001
From: SexyERIC0723
Date: Tue, 7 Apr 2026 19:55:45 +0100
Subject: [PATCH 1/3] feat(tests): add pytest-codeblocks for documentation code
 snippet CI

Add automated testing infrastructure for documentation code examples
using pytest-codeblocks:

- Add pytest-codeblocks>=0.17.0 to dev dependencies
- Add [tool.pytest.ini_options] with codeblocks marker
- Add doc snippet test step to CI workflow
- Mark 56 Python code blocks across 11 doc files with skip markers
  (these require healthchain package + external services to run)

The CI step runs with continue-on-error initially so snippets can be
incrementally enabled as they are made self-contained.

Usage:
  uv run pytest --codeblocks docs/               # Test all doc snippets
  uv run pytest --codeblocks docs/quickstart.md  # Test specific file

To make a snippet testable, remove its marker and ensure it runs
standalone (no external dependencies).

Closes #164
---
 .github/workflows/ci.yml                   |  4 ++++
 docs/cookbook/clinical_coding.md           |  8 ++++++++
 docs/cookbook/discharge_summarizer.md      |  7 +++++++
 docs/cookbook/ml_model_deployment.md       | 11 +++++++++++
 docs/cookbook/multi_ehr_aggregation.md     |  8 ++++++++
 docs/quickstart.md                         | 10 ++++++++++
 docs/tutorials/clinicalflow/fhir-basics.md |  3 +++
 docs/tutorials/clinicalflow/gateway.md     |  1 +
 docs/tutorials/clinicalflow/next-steps.md  |  1 +
 docs/tutorials/clinicalflow/pipeline.md    |  2 ++
 docs/tutorials/clinicalflow/setup.md       |  1 +
 docs/tutorials/clinicalflow/testing.md     |  4 ++++
 pyproject.toml                             |  9 +++++++++
 13 files changed, 69 insertions(+)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 9056dcbd..6a41c8a5 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -41,3 +41,7 @@ jobs:
 
       - name: Run tests
         run: uv run pytest
+
+      - name: Test documentation code snippets
+        run: uv run pytest --codeblocks docs/
+        continue-on-error: true  # Don't block CI while snippets are being updated
diff --git
a/docs/cookbook/clinical_coding.md b/docs/cookbook/clinical_coding.md index aa58598a..9dfca505 100644 --- a/docs/cookbook/clinical_coding.md +++ b/docs/cookbook/clinical_coding.md @@ -53,6 +53,7 @@ MEDPLUM_SCOPE=openid First we'll need to convert the incoming CDA XML to FHIR. The [CdaAdapter](../reference/io/adapters/cdaadapter.md) enables round-trip conversion between CDA and FHIR using the [InteropEngine](../reference/interop/engine.md) for seamless legacy-to-modern data integration. + ```python from healthchain.io import CdaAdapter @@ -83,6 +84,7 @@ Next we'll build our NLP processing pipeline. We'll use a [MedicalCodingPipeline For this demo, we'll use a simple dictionary for the SNOMED CT mapping. + ```python from healthchain.pipeline.medicalcodingpipeline import MedicalCodingPipeline from healthchain.io import Document @@ -128,6 +130,7 @@ def link_entities(doc: Document) -> Document: This is equivalent to constructing a pipeline with the following components manually: + ```python from healthchain.pipeline import Pipeline from healthchain.pipeline.components import SpacyNLP, FHIRProblemListExtractor @@ -146,6 +149,7 @@ def link_entities(doc: Document) -> Document: Use `.add_source` to register a FHIR endpoint you want to connect to with its connection string; the gateway will automatically manage the authentication and routing. + ```python from healthchain.gateway import FHIRGateway from healthchain.gateway.clients import FHIRAuthConfig @@ -170,6 +174,7 @@ fhir_gateway.add_source("medplum", MEDPLUM_URL) Now let's set up the handler for [NoteReaderService](../reference/gateway/soap_cda.md) method `ProcessDocument`, which will be called by Epic NoteReader when it is triggered in the CDI workflow. This is where we will combine all our components: adapter, pipeline, and writing to our configured FHIR endpoint: + ```python from healthchain.gateway import NoteReaderService @@ -203,6 +208,7 @@ def ai_coding_workflow(request: CdaRequest): Time to put it all together! 
Using [HealthChainAPI](../reference/gateway/api.md), we can create a service with *both* the FHIR and NoteReader endpoints: + ```python from healthchain.gateway import HealthChainAPI @@ -217,6 +223,7 @@ app.register_service(note_service, path="/notereader") HealthChain provides a [sandbox client utility](../reference/utilities/sandbox.md) which simulates the NoteReader workflow end-to-end. It loads your sample CDA document, sends it to your service via the configured endpoint, and saves the request/response exchange in an `output/` directory. This lets you test the complete integration locally before connecting to Epic. + ```python from healthchain.sandbox import SandboxClient @@ -239,6 +246,7 @@ client.load_from_path("./data/notereader_cda.xml") Now for the moment of truth! Start your service and run the sandbox to see the complete workflow in action. + ```python import threading diff --git a/docs/cookbook/discharge_summarizer.md b/docs/cookbook/discharge_summarizer.md index 02f5ae58..56a95c52 100644 --- a/docs/cookbook/discharge_summarizer.md +++ b/docs/cookbook/discharge_summarizer.md @@ -43,6 +43,7 @@ First, we'll create a [summarization pipeline](../reference/pipeline/pipeline.md For LLM approaches, we'll use [LangChain](https://python.langchain.com/docs/integrations/chat/huggingface/) for better prompting. === "Non-chat model" + ```python from healthchain.pipeline import SummarizationPipeline @@ -53,6 +54,7 @@ For LLM approaches, we'll use [LangChain](https://python.langchain.com/docs/inte === "Chat model" + ```python from healthchain.pipeline import SummarizationPipeline @@ -94,6 +96,7 @@ The `SummarizationPipeline` automatically: The [CdsFhirAdapter](../reference/io/adapters/cdsfhiradapter.md) converts between CDS Hooks requests and HealthChain's [Document](../reference/io/containers/document.md) format. This makes it easy to work with FHIR data in CDS workflows. 
+ ```python from healthchain.io import CdsFhirAdapter @@ -116,6 +119,7 @@ cds_adapter.format(doc) Create the [CDS Hooks handler](../reference/gateway/cdshooks.md) to receive discharge note requests, run the AI summarization pipeline, and return results as CDS cards. + ```python from healthchain.gateway import CDSHooksService from healthchain.models import CDSRequest, CDSResponse @@ -142,6 +146,7 @@ def handle_discharge_summary(request: CDSRequest) -> CDSResponse: Register the CDS service with [HealthChainAPI](../reference/gateway/api.md) to create REST endpoints: + ```python from healthchain.gateway import HealthChainAPI @@ -154,6 +159,7 @@ app.register_service(cds_service) HealthChain provides a [sandbox client utility](../reference/utilities/sandbox.md) which simulates the CDS hooks workflow end-to-end. It loads your sample free text data and formats it into CDS requests, sends it to your service, and saves the request/response exchange in an `output/` directory. This lets you test the complete integration locally and inspect the inputs and outputs before connecting to a real EHR instance. + ```python from healthchain.sandbox import SandboxClient @@ -182,6 +188,7 @@ client.load_free_text( Put it all together and run both the service and sandbox client: + ```python import threading diff --git a/docs/cookbook/ml_model_deployment.md b/docs/cookbook/ml_model_deployment.md index 91f7efdd..d6cfa096 100644 --- a/docs/cookbook/ml_model_deployment.md +++ b/docs/cookbook/ml_model_deployment.md @@ -61,6 +61,7 @@ cp scripts/models/sepsis_model.pkl cookbook/models/ **Using your own model?** The pipeline is flexible—just save any scikit-learn-compatible model as a pickle with this structure: + ```python import joblib @@ -184,6 +185,7 @@ If using the **FHIR Gateway pattern**, also confirm: Both patterns reuse the same pipeline. 
Here's what you'll write: + ```python def create_pipeline() -> Pipeline[Dataset]: pipeline = Pipeline[Dataset]() @@ -205,6 +207,7 @@ def create_pipeline() -> Pipeline[Dataset]: The pipeline operates on a `Dataset`, which you create from a FHIR bundle: + ```python dataset = Dataset.from_fhir_bundle(bundle, schema=SCHEMA_PATH) ``` @@ -250,6 +253,7 @@ Clinician opens chart → EHR fires patient-view hook → Your service runs pred Create a [CDSHooksService](../reference/gateway/cdshooks.md) that listens for `patient-view` events: + ```python from healthchain.gateway import CDSHooksService from healthchain.fhir import prefetch_to_bundle @@ -289,6 +293,7 @@ def sepsis_alert(request: CDSRequest) -> CDSResponse: Register with [HealthChainAPI](../reference/gateway/api.md): + ```python app = HealthChainAPI(title="Sepsis CDS Hooks") app.register_service(cds, path="/cds") @@ -298,6 +303,7 @@ app.register_service(cds, path="/cds") The [SandboxClient](../reference/utilities/sandbox.md) simulates EHR requests using your demo patient files: + ```python from healthchain.sandbox import SandboxClient @@ -354,6 +360,7 @@ Query patients from FHIR server → Run predictions → Write RiskAssessment bac Configure the [FHIRGateway](../reference/gateway/fhir_gateway.md) with your FHIR source: + ```python from healthchain.fhir.r4b import Patient, Observation from healthchain.gateway import FHIRGateway @@ -369,6 +376,7 @@ gateway.add_source("medplum", config.to_connection_string()) Query patient data, run prediction, and write back a [RiskAssessment](https://www.hl7.org/fhir/riskassessment.html) resource: + ```python def screen_patient(gateway: FHIRGateway, patient_id: str, source: str): # Query patient + observations from FHIR server @@ -393,6 +401,7 @@ def screen_patient(gateway: FHIRGateway, patient_id: str, source: str): Loop over patient IDs and screen each one: + ```python for patient_id in patient_ids: screen_patient(gateway, patient_id, source="medplum") @@ -402,6 +411,7 @@ for patient_id 
in patient_ids: This demo uses a fixed list of patient IDs. In production, you'd query for patients dynamically—for example, ICU admissions in the last hour: + ```python # Find patients with recent ICU encounters encounters = gateway.search( @@ -418,6 +428,7 @@ for patient_id in patient_ids: ### Build the Service + ```python app = HealthChainAPI(title="Sepsis Batch Screening") app.register_gateway(gateway, path="/fhir") diff --git a/docs/cookbook/multi_ehr_aggregation.md b/docs/cookbook/multi_ehr_aggregation.md index 0603b08c..7056b1e8 100644 --- a/docs/cookbook/multi_ehr_aggregation.md +++ b/docs/cookbook/multi_ehr_aggregation.md @@ -27,6 +27,7 @@ EPIC_USE_JWT_ASSERTION=true Load your Epic credentials from the `.env` file and create a connection string compatible with the FHIR gateway: + ```python from healthchain.gateway.clients import FHIRAuthConfig @@ -38,6 +39,7 @@ EPIC_URL = config.to_connection_string() [FHIR Gateways](../reference/gateway/fhir_gateway.md) connect to external FHIR servers and handles authentication, connection pooling, and token refresh automatically. Add the Epic sandbox as a source: + ```python from healthchain.gateway import FHIRGateway @@ -63,6 +65,7 @@ gateway.add_source("cerner", CERNER_URL) Define an aggregation handler that queries multiple FHIR sources for [Condition](https://www.hl7.org/fhir/condition.html) resources. + ```python from healthchain.fhir import merge_bundles @@ -122,6 +125,7 @@ def get_unified_patient(patient_id: str, sources: List[str]) -> Bundle: Register the gateway with [HealthChainAPI](../reference/gateway/api.md) to create REST endpoints. + ```python from healthchain.gateway import HealthChainAPI @@ -148,6 +152,7 @@ For additional processing like terminology mapping or quality checks, create a D Document pipelines are optimized for text and structured data processing, such as FHIR resources. 
When you initialize a [Document](../reference/io/containers/document.md) with FHIR [Bundle](https://www.hl7.org/fhir/condition.html) data, it automatically extracts and separates metadata resources from the clinical resources for easier inspection and error handling: + ```python # Initialize Document with a Bundle doc = Document(data=merged_bundle) @@ -163,6 +168,7 @@ doc.fhir.medication_list # List of MedicationStatement resources Add processing nodes using decorators: + ```python from healthchain.pipeline import Pipeline from healthchain.io.containers import Document @@ -205,6 +211,7 @@ Example uses Epic patient `eIXesllypH3M9tAA5WdJftQ3`; see [Epic sandbox](https:/ ``` === "Python" + ```python import requests @@ -338,6 +345,7 @@ Sample conditions: You'll see this if you haven't authorized access to the correct FHIR resources when you set up your FHIR sandbox. + ```python print([outcome.model_dump() for outcome in doc.fhir.operation_outcomes]) ``` diff --git a/docs/quickstart.md b/docs/quickstart.md index 492da870..2ecfcc4d 100644 --- a/docs/quickstart.md +++ b/docs/quickstart.md @@ -16,6 +16,7 @@ The [**HealthChainAPI**](./reference/gateway/api.md) provides a unified interfac [(Full Documentation on Gateway)](./reference/gateway/gateway.md) + ```python from healthchain.gateway import HealthChainAPI, FHIRGateway from healthchain.fhir.r4b import Patient @@ -57,6 +58,7 @@ Containers make your pipeline FHIR-native by loading and transforming your data [(Full Documentation on Containers)](./reference/io/containers/containers.md) + ```python from healthchain.pipeline import Pipeline from healthchain.io import Document @@ -92,6 +94,7 @@ HealthChain provides a set of ready-to-use [**NLP Integrations**](./reference/pi [(Full Documentation on Components)](./reference/pipeline/components/components.md) + ```python from healthchain.pipeline import Pipeline from healthchain.pipeline.components import TextPreProcessor, SpacyNLP, TextPostProcessor @@ -113,6 +116,7 @@ You can 
process legacy healthcare data formats too. [**Adapters**](./reference/i [(Full Documentation on Adapters)](./reference/io/adapters/adapters.md) + ```python from healthchain.io import CdaAdapter from healthchain.models import CdaRequest @@ -133,6 +137,7 @@ Prebuilt pipelines are the fastest way to jump into healthcare AI with minimal s [(Full Documentation on Pipelines)](./reference/pipeline/pipeline.md#prebuilt) + ```python from healthchain.pipeline import MedicalCodingPipeline from healthchain.models import CdaRequest @@ -150,6 +155,7 @@ The HealthChain Interoperability module provides tools for converting between di [(Full Documentation on Interoperability Engine)](./reference/interop/interop.md) + ```python from healthchain.interop import create_interop, FormatType @@ -198,6 +204,7 @@ Workflows determine the request structure, required FHIR resources, and validati | `synthea-patient` | **Synthea FHIR Patient Records** | R4 | [Synthea Downloads](https://synthea.mitre.org/downloads) | [Download ZIP](https://arc.net/l/quote/hoquexhy) (100 Sample, 36 MB) | + ```python from healthchain.sandbox import list_available_datasets @@ -208,6 +215,7 @@ print(datasets) #### Basic Usage + ```python from healthchain.sandbox import SandboxClient @@ -235,6 +243,7 @@ responses = client.send_requests() For clinical documentation workflows using SOAP/CDA: + ```python # Use context manager for automatic result saving with SandboxClient( @@ -253,6 +262,7 @@ Use `healthchain.fhir` helpers to quickly create and manipulate FHIR resources ( [(Full Documentation on FHIR Helpers)](./reference/utilities/fhir_helpers.md) + ```python from healthchain.fhir import create_condition diff --git a/docs/tutorials/clinicalflow/fhir-basics.md b/docs/tutorials/clinicalflow/fhir-basics.md index db4fd4e6..29cb7560 100644 --- a/docs/tutorials/clinicalflow/fhir-basics.md +++ b/docs/tutorials/clinicalflow/fhir-basics.md @@ -70,6 +70,7 @@ Tracks what medications a patient is taking: HealthChain provides 
utilities to work with FHIR resources easily: + ```python from healthchain.fhir import create_condition, create_patient from healthchain.fhir.r4b import Patient @@ -107,6 +108,7 @@ print(f"With condition: {condition.code.coding[0].display}") When an EHR sends patient context, it often comes as a **Bundle** - a collection of related resources: + ```python from healthchain.fhir.r4b import Bundle @@ -128,6 +130,7 @@ print(f"Bundle contains {len(bundle.entry)} resources") HealthChain's `Document` container bridges clinical text and FHIR data: + ```python from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/gateway.md b/docs/tutorials/clinicalflow/gateway.md index 80648615..2e2a2513 100644 --- a/docs/tutorials/clinicalflow/gateway.md +++ b/docs/tutorials/clinicalflow/gateway.md @@ -29,6 +29,7 @@ The flow: Create a file called `app.py`. This imports the pipeline you created in the [previous step](pipeline.md): + ```python from healthchain.gateway import HealthChainAPI, CDSHooksService from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/next-steps.md b/docs/tutorials/clinicalflow/next-steps.md index 3e71164e..407ab66c 100644 --- a/docs/tutorials/clinicalflow/next-steps.md +++ b/docs/tutorials/clinicalflow/next-steps.md @@ -24,6 +24,7 @@ The NLP was hard coded in our example but HealthChain has simple pipeline integr Convert extracted entities to FHIR resources: + ```python from healthchain.pipeline.components import FHIRProblemListExtractor diff --git a/docs/tutorials/clinicalflow/pipeline.md b/docs/tutorials/clinicalflow/pipeline.md index 2d7b737b..42d6ec33 100644 --- a/docs/tutorials/clinicalflow/pipeline.md +++ b/docs/tutorials/clinicalflow/pipeline.md @@ -15,6 +15,7 @@ A **Pipeline** in HealthChain is a sequence of processing steps that transform i Create a file called `pipeline.py`: + ```python from healthchain.pipeline import Pipeline from healthchain.io import Document @@ -69,6 +70,7 @@ def 
create_clinical_pipeline(): Create a file called `test_pipeline.py` to test your pipeline: + ```python from pipeline import create_clinical_pipeline from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/setup.md b/docs/tutorials/clinicalflow/setup.md index c6fea2f6..e13fec0f 100644 --- a/docs/tutorials/clinicalflow/setup.md +++ b/docs/tutorials/clinicalflow/setup.md @@ -40,6 +40,7 @@ All the code running examples in this tutorial will show both the uv and pip ver Create a file called `check_install.py`: + ```python import healthchain from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/testing.md b/docs/tutorials/clinicalflow/testing.md index 1f24d4e2..f7c3243a 100644 --- a/docs/tutorials/clinicalflow/testing.md +++ b/docs/tutorials/clinicalflow/testing.md @@ -103,6 +103,7 @@ You should see a response with cards for the detected conditions. For more control, create `test_service.py`: + ```python import json import requests @@ -176,6 +177,7 @@ The pipeline extracted four conditions from the clinical note text - exactly wha For testing with multiple patients or larger datasets, use the `SandboxClient`: + ```python from healthchain.sandbox import SandboxClient @@ -202,6 +204,7 @@ for i, response in enumerate(responses): Save results for reporting or debugging: + ```python # Save responses to files client.save_results( @@ -215,6 +218,7 @@ client.save_results( Inspect what will be sent without actually calling the service: + ```python # Preview queued requests previews = client.preview_requests(limit=5) diff --git a/pyproject.toml b/pyproject.toml index 7a18dacd..b88ab8d0 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -69,6 +69,7 @@ dev = [ "pytest>=8.2.0,<9", "pre-commit>=3.5.0,<4", "pytest-asyncio>=0.24.0,<0.25", + "pytest-codeblocks>=0.17.0,<0.18", "ipykernel>=6.29.5,<7", ] docs = [ @@ -84,6 +85,14 @@ default-groups = [ "docs", ] +[tool.pytest.ini_options] +# pytest-codeblocks: test Python code snippets in 
documentation
+# Use `uv run pytest --codeblocks` to run doc snippet tests
+# Snippets can be skipped with: <!--pytest-codeblocks:skip-->
+markers = [
+    "codeblocks: marks tests extracted from documentation code blocks",
+]
+
 [build-system]
 requires = ["hatchling", "numpy>=2.0.0"]
 build-backend = "hatchling.build"

From 700fad79e8d889b809c75d8afc81aac45ec7c8b9 Mon Sep 17 00:00:00 2001
From: SexyERIC0723
Date: Tue, 7 Apr 2026 20:17:38 +0100
Subject: [PATCH 2/3] fix: address Codex review findings for doc code CI

- Fix skip marker format: replace legacy `pytest-codeblocks:skip` with
  current `pytest.mark.skip` syntax (56 markers updated)
- Restrict CI doc test to Python 3.12 only (pytest-codeblocks not
  verified for 3.13)
- Narrow CI scope to quickstart + cookbook + tutorials (avoids
  executing shell blocks in reference docs)
- Update pyproject.toml comment with correct skip syntax
---
 .github/workflows/ci.yml                   |  7 ++++++-
 docs/cookbook/clinical_coding.md           | 16 ++++++++--------
 docs/cookbook/discharge_summarizer.md      | 14 +++++++-------
 docs/cookbook/ml_model_deployment.md       | 22 +++++++++++-----------
 docs/cookbook/multi_ehr_aggregation.md     | 16 ++++++++--------
 docs/quickstart.md                         | 20 ++++++++++----------
 docs/tutorials/clinicalflow/fhir-basics.md |  6 +++---
 docs/tutorials/clinicalflow/gateway.md     |  2 +-
 docs/tutorials/clinicalflow/next-steps.md  |  2 +-
 docs/tutorials/clinicalflow/pipeline.md    |  4 ++--
 docs/tutorials/clinicalflow/setup.md       |  2 +-
 docs/tutorials/clinicalflow/testing.md     |  8 ++++----
 pyproject.toml                             |  3 ++-
 13 files changed, 64 insertions(+), 58 deletions(-)

diff --git a/.github/workflows/ci.yml b/.github/workflows/ci.yml
index 6a41c8a5..0eed0f29 100644
--- a/.github/workflows/ci.yml
+++ b/.github/workflows/ci.yml
@@ -43,5 +43,10 @@ jobs:
         run: uv run pytest
 
       - name: Test documentation code snippets
-        run: uv run pytest --codeblocks docs/
+        if: matrix.python-version == '3.12'
+        run: |
+          uv run pytest --codeblocks \
+            docs/quickstart.md \
+            docs/cookbook/ \
+            docs/tutorials/
         continue-on-error: true  #
Don't block CI while snippets are being updated diff --git a/docs/cookbook/clinical_coding.md b/docs/cookbook/clinical_coding.md index 9dfca505..8c23c5bd 100644 --- a/docs/cookbook/clinical_coding.md +++ b/docs/cookbook/clinical_coding.md @@ -53,7 +53,7 @@ MEDPLUM_SCOPE=openid First we'll need to convert the incoming CDA XML to FHIR. The [CdaAdapter](../reference/io/adapters/cdaadapter.md) enables round-trip conversion between CDA and FHIR using the [InteropEngine](../reference/interop/engine.md) for seamless legacy-to-modern data integration. - + ```python from healthchain.io import CdaAdapter @@ -84,7 +84,7 @@ Next we'll build our NLP processing pipeline. We'll use a [MedicalCodingPipeline For this demo, we'll use a simple dictionary for the SNOMED CT mapping. - + ```python from healthchain.pipeline.medicalcodingpipeline import MedicalCodingPipeline from healthchain.io import Document @@ -130,7 +130,7 @@ def link_entities(doc: Document) -> Document: This is equivalent to constructing a pipeline with the following components manually: - + ```python from healthchain.pipeline import Pipeline from healthchain.pipeline.components import SpacyNLP, FHIRProblemListExtractor @@ -149,7 +149,7 @@ def link_entities(doc: Document) -> Document: Use `.add_source` to register a FHIR endpoint you want to connect to with its connection string; the gateway will automatically manage the authentication and routing. - + ```python from healthchain.gateway import FHIRGateway from healthchain.gateway.clients import FHIRAuthConfig @@ -174,7 +174,7 @@ fhir_gateway.add_source("medplum", MEDPLUM_URL) Now let's set up the handler for [NoteReaderService](../reference/gateway/soap_cda.md) method `ProcessDocument`, which will be called by Epic NoteReader when it is triggered in the CDI workflow. 
This is where we will combine all our components: adapter, pipeline, and writing to our configured FHIR endpoint: - + ```python from healthchain.gateway import NoteReaderService @@ -208,7 +208,7 @@ def ai_coding_workflow(request: CdaRequest): Time to put it all together! Using [HealthChainAPI](../reference/gateway/api.md), we can create a service with *both* the FHIR and NoteReader endpoints: - + ```python from healthchain.gateway import HealthChainAPI @@ -223,7 +223,7 @@ app.register_service(note_service, path="/notereader") HealthChain provides a [sandbox client utility](../reference/utilities/sandbox.md) which simulates the NoteReader workflow end-to-end. It loads your sample CDA document, sends it to your service via the configured endpoint, and saves the request/response exchange in an `output/` directory. This lets you test the complete integration locally before connecting to Epic. - + ```python from healthchain.sandbox import SandboxClient @@ -246,7 +246,7 @@ client.load_from_path("./data/notereader_cda.xml") Now for the moment of truth! Start your service and run the sandbox to see the complete workflow in action. - + ```python import threading diff --git a/docs/cookbook/discharge_summarizer.md b/docs/cookbook/discharge_summarizer.md index 56a95c52..178300b6 100644 --- a/docs/cookbook/discharge_summarizer.md +++ b/docs/cookbook/discharge_summarizer.md @@ -43,7 +43,7 @@ First, we'll create a [summarization pipeline](../reference/pipeline/pipeline.md For LLM approaches, we'll use [LangChain](https://python.langchain.com/docs/integrations/chat/huggingface/) for better prompting. 
=== "Non-chat model" - + ```python from healthchain.pipeline import SummarizationPipeline @@ -54,7 +54,7 @@ For LLM approaches, we'll use [LangChain](https://python.langchain.com/docs/inte === "Chat model" - + ```python from healthchain.pipeline import SummarizationPipeline @@ -96,7 +96,7 @@ The `SummarizationPipeline` automatically: The [CdsFhirAdapter](../reference/io/adapters/cdsfhiradapter.md) converts between CDS Hooks requests and HealthChain's [Document](../reference/io/containers/document.md) format. This makes it easy to work with FHIR data in CDS workflows. - + ```python from healthchain.io import CdsFhirAdapter @@ -119,7 +119,7 @@ cds_adapter.format(doc) Create the [CDS Hooks handler](../reference/gateway/cdshooks.md) to receive discharge note requests, run the AI summarization pipeline, and return results as CDS cards. - + ```python from healthchain.gateway import CDSHooksService from healthchain.models import CDSRequest, CDSResponse @@ -146,7 +146,7 @@ def handle_discharge_summary(request: CDSRequest) -> CDSResponse: Register the CDS service with [HealthChainAPI](../reference/gateway/api.md) to create REST endpoints: - + ```python from healthchain.gateway import HealthChainAPI @@ -159,7 +159,7 @@ app.register_service(cds_service) HealthChain provides a [sandbox client utility](../reference/utilities/sandbox.md) which simulates the CDS hooks workflow end-to-end. It loads your sample free text data and formats it into CDS requests, sends it to your service, and saves the request/response exchange in an `output/` directory. This lets you test the complete integration locally and inspect the inputs and outputs before connecting to a real EHR instance. 
- + ```python from healthchain.sandbox import SandboxClient @@ -188,7 +188,7 @@ client.load_free_text( Put it all together and run both the service and sandbox client: - + ```python import threading diff --git a/docs/cookbook/ml_model_deployment.md b/docs/cookbook/ml_model_deployment.md index d6cfa096..42791c72 100644 --- a/docs/cookbook/ml_model_deployment.md +++ b/docs/cookbook/ml_model_deployment.md @@ -61,7 +61,7 @@ cp scripts/models/sepsis_model.pkl cookbook/models/ **Using your own model?** The pipeline is flexible—just save any scikit-learn-compatible model as a pickle with this structure: - + ```python import joblib @@ -185,7 +185,7 @@ If using the **FHIR Gateway pattern**, also confirm: Both patterns reuse the same pipeline. Here's what you'll write: - + ```python def create_pipeline() -> Pipeline[Dataset]: pipeline = Pipeline[Dataset]() @@ -207,7 +207,7 @@ def create_pipeline() -> Pipeline[Dataset]: The pipeline operates on a `Dataset`, which you create from a FHIR bundle: - + ```python dataset = Dataset.from_fhir_bundle(bundle, schema=SCHEMA_PATH) ``` @@ -253,7 +253,7 @@ Clinician opens chart → EHR fires patient-view hook → Your service runs pred Create a [CDSHooksService](../reference/gateway/cdshooks.md) that listens for `patient-view` events: - + ```python from healthchain.gateway import CDSHooksService from healthchain.fhir import prefetch_to_bundle @@ -293,7 +293,7 @@ def sepsis_alert(request: CDSRequest) -> CDSResponse: Register with [HealthChainAPI](../reference/gateway/api.md): - + ```python app = HealthChainAPI(title="Sepsis CDS Hooks") app.register_service(cds, path="/cds") @@ -303,7 +303,7 @@ app.register_service(cds, path="/cds") The [SandboxClient](../reference/utilities/sandbox.md) simulates EHR requests using your demo patient files: - + ```python from healthchain.sandbox import SandboxClient @@ -360,7 +360,7 @@ Query patients from FHIR server → Run predictions → Write RiskAssessment bac Configure the 
[FHIRGateway](../reference/gateway/fhir_gateway.md) with your FHIR source: - + ```python from healthchain.fhir.r4b import Patient, Observation from healthchain.gateway import FHIRGateway @@ -376,7 +376,7 @@ gateway.add_source("medplum", config.to_connection_string()) Query patient data, run prediction, and write back a [RiskAssessment](https://www.hl7.org/fhir/riskassessment.html) resource: - + ```python def screen_patient(gateway: FHIRGateway, patient_id: str, source: str): # Query patient + observations from FHIR server @@ -401,7 +401,7 @@ def screen_patient(gateway: FHIRGateway, patient_id: str, source: str): Loop over patient IDs and screen each one: - + ```python for patient_id in patient_ids: screen_patient(gateway, patient_id, source="medplum") @@ -411,7 +411,7 @@ for patient_id in patient_ids: This demo uses a fixed list of patient IDs. In production, you'd query for patients dynamically—for example, ICU admissions in the last hour: - + ```python # Find patients with recent ICU encounters encounters = gateway.search( @@ -428,7 +428,7 @@ for patient_id in patient_ids: ### Build the Service - + ```python app = HealthChainAPI(title="Sepsis Batch Screening") app.register_gateway(gateway, path="/fhir") diff --git a/docs/cookbook/multi_ehr_aggregation.md b/docs/cookbook/multi_ehr_aggregation.md index 7056b1e8..7405eef3 100644 --- a/docs/cookbook/multi_ehr_aggregation.md +++ b/docs/cookbook/multi_ehr_aggregation.md @@ -27,7 +27,7 @@ EPIC_USE_JWT_ASSERTION=true Load your Epic credentials from the `.env` file and create a connection string compatible with the FHIR gateway: - + ```python from healthchain.gateway.clients import FHIRAuthConfig @@ -39,7 +39,7 @@ EPIC_URL = config.to_connection_string() [FHIR Gateways](../reference/gateway/fhir_gateway.md) connect to external FHIR servers and handles authentication, connection pooling, and token refresh automatically. 
Add the Epic sandbox as a source: - + ```python from healthchain.gateway import FHIRGateway @@ -65,7 +65,7 @@ gateway.add_source("cerner", CERNER_URL) Define an aggregation handler that queries multiple FHIR sources for [Condition](https://www.hl7.org/fhir/condition.html) resources. - + ```python from healthchain.fhir import merge_bundles @@ -125,7 +125,7 @@ def get_unified_patient(patient_id: str, sources: List[str]) -> Bundle: Register the gateway with [HealthChainAPI](../reference/gateway/api.md) to create REST endpoints. - + ```python from healthchain.gateway import HealthChainAPI @@ -152,7 +152,7 @@ For additional processing like terminology mapping or quality checks, create a D Document pipelines are optimized for text and structured data processing, such as FHIR resources. When you initialize a [Document](../reference/io/containers/document.md) with FHIR [Bundle](https://www.hl7.org/fhir/condition.html) data, it automatically extracts and separates metadata resources from the clinical resources for easier inspection and error handling: - + ```python # Initialize Document with a Bundle doc = Document(data=merged_bundle) @@ -168,7 +168,7 @@ doc.fhir.medication_list # List of MedicationStatement resources Add processing nodes using decorators: - + ```python from healthchain.pipeline import Pipeline from healthchain.io.containers import Document @@ -211,7 +211,7 @@ Example uses Epic patient `eIXesllypH3M9tAA5WdJftQ3`; see [Epic sandbox](https:/ ``` === "Python" - + ```python import requests @@ -345,7 +345,7 @@ Sample conditions: You'll see this if you haven't authorized access to the correct FHIR resources when you set up your FHIR sandbox. 
- + ```python print([outcome.model_dump() for outcome in doc.fhir.operation_outcomes]) ``` diff --git a/docs/quickstart.md b/docs/quickstart.md index 2ecfcc4d..44053e5f 100644 --- a/docs/quickstart.md +++ b/docs/quickstart.md @@ -16,7 +16,7 @@ The [**HealthChainAPI**](./reference/gateway/api.md) provides a unified interfac [(Full Documentation on Gateway)](./reference/gateway/gateway.md) - + ```python from healthchain.gateway import HealthChainAPI, FHIRGateway from healthchain.fhir.r4b import Patient @@ -58,7 +58,7 @@ Containers make your pipeline FHIR-native by loading and transforming your data [(Full Documentation on Containers)](./reference/io/containers/containers.md) - + ```python from healthchain.pipeline import Pipeline from healthchain.io import Document @@ -94,7 +94,7 @@ HealthChain provides a set of ready-to-use [**NLP Integrations**](./reference/pi [(Full Documentation on Components)](./reference/pipeline/components/components.md) - + ```python from healthchain.pipeline import Pipeline from healthchain.pipeline.components import TextPreProcessor, SpacyNLP, TextPostProcessor @@ -116,7 +116,7 @@ You can process legacy healthcare data formats too. 
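The Document container's "separate metadata from clinical resources" behavior described above can be illustrated with plain dicts. The bucketing rule here (a fixed set of clinical resource types, everything else treated as metadata such as `OperationOutcome`) is an assumption for the sketch, not healthchain's exact logic:

```python
CLINICAL_TYPES = {"Condition", "MedicationStatement", "AllergyIntolerance", "Observation"}


def split_bundle(bundle: dict):
    # Bucket Bundle entries the way a Document-style container might:
    # clinical resources go to processing, the rest is kept aside for inspection.
    clinical, metadata = [], []
    for entry in bundle.get("entry", []):
        resource = entry.get("resource", {})
        target = clinical if resource.get("resourceType") in CLINICAL_TYPES else metadata
        target.append(resource)
    return clinical, metadata


bundle = {"resourceType": "Bundle", "entry": [
    {"resource": {"resourceType": "Condition", "id": "c1"}},
    {"resource": {"resourceType": "MedicationStatement", "id": "m1"}},
    {"resource": {"resourceType": "OperationOutcome", "id": "oo1"}},
]}
clinical, metadata = split_bundle(bundle)
```

This is why `doc.fhir.problem_list` and friends can be clean lists even when the incoming Bundle mixes in server-generated outcomes.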
[**Adapters**](./reference/i [(Full Documentation on Adapters)](./reference/io/adapters/adapters.md) - + ```python from healthchain.io import CdaAdapter from healthchain.models import CdaRequest @@ -137,7 +137,7 @@ Prebuilt pipelines are the fastest way to jump into healthcare AI with minimal s [(Full Documentation on Pipelines)](./reference/pipeline/pipeline.md#prebuilt) - + ```python from healthchain.pipeline import MedicalCodingPipeline from healthchain.models import CdaRequest @@ -155,7 +155,7 @@ The HealthChain Interoperability module provides tools for converting between di [(Full Documentation on Interoperability Engine)](./reference/interop/interop.md) - + ```python from healthchain.interop import create_interop, FormatType @@ -204,7 +204,7 @@ Workflows determine the request structure, required FHIR resources, and validati | `synthea-patient` | **Synthea FHIR Patient Records** | R4 | [Synthea Downloads](https://synthea.mitre.org/downloads) | [Download ZIP](https://arc.net/l/quote/hoquexhy) (100 Sample, 36 MB) | - + ```python from healthchain.sandbox import list_available_datasets @@ -215,7 +215,7 @@ print(datasets) #### Basic Usage - + ```python from healthchain.sandbox import SandboxClient @@ -243,7 +243,7 @@ responses = client.send_requests() For clinical documentation workflows using SOAP/CDA: - + ```python # Use context manager for automatic result saving with SandboxClient( @@ -262,7 +262,7 @@ Use `healthchain.fhir` helpers to quickly create and manipulate FHIR resources ( [(Full Documentation on FHIR Helpers)](./reference/utilities/fhir_helpers.md) - + ```python from healthchain.fhir import create_condition diff --git a/docs/tutorials/clinicalflow/fhir-basics.md b/docs/tutorials/clinicalflow/fhir-basics.md index 29cb7560..b7601613 100644 --- a/docs/tutorials/clinicalflow/fhir-basics.md +++ b/docs/tutorials/clinicalflow/fhir-basics.md @@ -70,7 +70,7 @@ Tracks what medications a patient is taking: HealthChain provides utilities to work with FHIR 
resources easily: - + ```python from healthchain.fhir import create_condition, create_patient from healthchain.fhir.r4b import Patient @@ -108,7 +108,7 @@ print(f"With condition: {condition.code.coding[0].display}") When an EHR sends patient context, it often comes as a **Bundle** - a collection of related resources: - + ```python from healthchain.fhir.r4b import Bundle @@ -130,7 +130,7 @@ print(f"Bundle contains {len(bundle.entry)} resources") HealthChain's `Document` container bridges clinical text and FHIR data: - + ```python from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/gateway.md b/docs/tutorials/clinicalflow/gateway.md index 2e2a2513..42b80c61 100644 --- a/docs/tutorials/clinicalflow/gateway.md +++ b/docs/tutorials/clinicalflow/gateway.md @@ -29,7 +29,7 @@ The flow: Create a file called `app.py`. This imports the pipeline you created in the [previous step](pipeline.md): - + ```python from healthchain.gateway import HealthChainAPI, CDSHooksService from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/next-steps.md b/docs/tutorials/clinicalflow/next-steps.md index 407ab66c..16da7f1a 100644 --- a/docs/tutorials/clinicalflow/next-steps.md +++ b/docs/tutorials/clinicalflow/next-steps.md @@ -24,7 +24,7 @@ The NLP was hard coded in our example but HealthChain has simple pipeline integr Convert extracted entities to FHIR resources: - + ```python from healthchain.pipeline.components import FHIRProblemListExtractor diff --git a/docs/tutorials/clinicalflow/pipeline.md b/docs/tutorials/clinicalflow/pipeline.md index 42d6ec33..42ead60a 100644 --- a/docs/tutorials/clinicalflow/pipeline.md +++ b/docs/tutorials/clinicalflow/pipeline.md @@ -15,7 +15,7 @@ A **Pipeline** in HealthChain is a sequence of processing steps that transform i Create a file called `pipeline.py`: - + ```python from healthchain.pipeline import Pipeline from healthchain.io import Document @@ -70,7 +70,7 @@ def create_clinical_pipeline(): Create a file 
called `test_pipeline.py` to test your pipeline: - + ```python from pipeline import create_clinical_pipeline from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/setup.md b/docs/tutorials/clinicalflow/setup.md index e13fec0f..e2c2cfb8 100644 --- a/docs/tutorials/clinicalflow/setup.md +++ b/docs/tutorials/clinicalflow/setup.md @@ -40,7 +40,7 @@ All the code running examples in this tutorial will show both the uv and pip ver Create a file called `check_install.py`: - + ```python import healthchain from healthchain.io import Document diff --git a/docs/tutorials/clinicalflow/testing.md b/docs/tutorials/clinicalflow/testing.md index f7c3243a..318f6dc4 100644 --- a/docs/tutorials/clinicalflow/testing.md +++ b/docs/tutorials/clinicalflow/testing.md @@ -103,7 +103,7 @@ You should see a response with cards for the detected conditions. For more control, create `test_service.py`: - + ```python import json import requests @@ -177,7 +177,7 @@ The pipeline extracted four conditions from the clinical note text - exactly wha For testing with multiple patients or larger datasets, use the `SandboxClient`: - + ```python from healthchain.sandbox import SandboxClient @@ -204,7 +204,7 @@ for i, response in enumerate(responses): Save results for reporting or debugging: - + ```python # Save responses to files client.save_results( @@ -218,7 +218,7 @@ client.save_results( Inspect what will be sent without actually calling the service: - + ```python # Preview queued requests previews = client.preview_requests(limit=5) diff --git a/pyproject.toml b/pyproject.toml index b88ab8d0..61834210 100644 --- a/pyproject.toml +++ b/pyproject.toml @@ -88,7 +88,8 @@ default-groups = [ [tool.pytest.ini_options] # pytest-codeblocks: test Python code snippets in documentation # Use `uv run pytest --codeblocks` to run doc snippet tests -# Snippets can be skipped with: +# Snippets can be skipped with: +# Entire files can be skipped with: markers = [ "codeblocks: marks tests extracted 
from documentation code blocks", ] From edb734b8e7f57ac79668281deb237ba91067f47c Mon Sep 17 00:00:00 2001 From: SexyERIC0723 Date: Tue, 7 Apr 2026 20:28:24 +0100 Subject: [PATCH 3/3] fix: mark remaining code blocks in cookbook and tutorial docs MIME-Version: 1.0 Content-Type: text/plain; charset=UTF-8 Content-Transfer-Encoding: 8bit - Add skip markers to 3 missed cookbook files (format_conversion, index, setup_fhir_sandboxes) — 10 Python blocks - Add skip markers to all bash/sh/shell blocks in scoped docs to prevent pytest-codeblocks from executing install/setup scripts --- docs/cookbook/clinical_coding.md | 3 +++ docs/cookbook/discharge_summarizer.md | 3 +++ docs/cookbook/format_conversion.md | 7 +++++++ docs/cookbook/index.md | 3 +++ docs/cookbook/ml_model_deployment.md | 18 +++++++++++++----- docs/cookbook/multi_ehr_aggregation.md | 5 ++++- docs/cookbook/setup_fhir_sandboxes.md | 14 ++++++++++++-- docs/tutorials/clinicalflow/gateway.md | 8 ++++++-- docs/tutorials/clinicalflow/pipeline.md | 6 ++++-- docs/tutorials/clinicalflow/setup.md | 9 +++++++-- docs/tutorials/clinicalflow/testing.md | 14 ++++++++++---- 11 files changed, 72 insertions(+), 18 deletions(-) diff --git a/docs/cookbook/clinical_coding.md b/docs/cookbook/clinical_coding.md index 8c23c5bd..895535df 100644 --- a/docs/cookbook/clinical_coding.md +++ b/docs/cookbook/clinical_coding.md @@ -18,6 +18,7 @@ A clinical note arrives from NoteReader as CDA XML → gets parsed and processed We'll use [scispacy](https://allenai.github.io/scispacy/) for medical entity extraction. 
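Per the pytest-codeblocks docs, a fenced block is skipped when the line immediately before it is the HTML comment `<!--pytest-codeblocks:skip-->` (and whole files via `<!--pytest-codeblocks:skipfile-->`). A small self-contained checker for fences that are still unmarked, useful when incrementally enabling snippets as this PR intends:

```python
SKIP = "<!--pytest-codeblocks:skip-->"


def unmarked_python_blocks(markdown: str) -> list[int]:
    # Return 1-based line numbers of ```python fences not preceded by a skip marker.
    lines = markdown.splitlines()
    hits = []
    for i, line in enumerate(lines):
        if line.strip().startswith("```python"):
            prev = lines[i - 1].strip() if i else ""
            if prev != SKIP:
                hits.append(i + 1)
    return hits


doc = "\n".join([
    SKIP,
    "```python",
    "print('skipped by pytest-codeblocks')",
    "```",
    "```python",
    "print('will be collected as a test')",
    "```",
])
print(unmarked_python_blocks(doc))  # -> [5]
```

Running this over `docs/` gives a quick inventory of which snippets CI will actually try to execute.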
Install the required dependencies: + ```bash pip install healthchain scispacy python-dotenv pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.4/en_core_sci_sm-0.5.4.tar.gz @@ -27,6 +28,7 @@ pip install https://s3-us-west-2.amazonaws.com/ai2-s2-scispacy/releases/v0.5.4/e Download the sample CDA file `notereader_cda.xml` into a `data/` folder in your project root using `wget`: + ```bash mkdir -p data cd data @@ -39,6 +41,7 @@ Set up a Medplum account and obtain client credentials. See the [FHIR Sandbox Se Once you have your Medplum credentials, configure them in a `.env` file: + ```bash # .env file MEDPLUM_BASE_URL=https://api.medplum.com/fhir/R4 diff --git a/docs/cookbook/discharge_summarizer.md b/docs/cookbook/discharge_summarizer.md index 178300b6..5bf18144 100644 --- a/docs/cookbook/discharge_summarizer.md +++ b/docs/cookbook/discharge_summarizer.md @@ -10,6 +10,7 @@ Check out the full working example [here](https://github.com/dotimplement/Health ### Install Dependencies + ```bash pip install healthchain python-dotenv ``` @@ -19,6 +20,7 @@ This example uses a Hugging Face model for the summarization task, so make sure If you are using a chat model, make sure you have the necessary `langchain` packages installed. 
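The `.env` files used throughout these setup steps are just `KEY=VALUE` lines. A tiny stand-in for what `python-dotenv` does when loading them (the real library also handles quoting, interpolation, and export into `os.environ`):

```python
def parse_dotenv(text: str) -> dict:
    # Minimal .env parser: KEY=VALUE per line, '#' comment lines ignored.
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#") or "=" not in line:
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env


env = parse_dotenv("""
# .env file
MEDPLUM_BASE_URL=https://api.medplum.com/fhir/R4
MEDPLUM_CLIENT_ID=abc123
""")
```

Keeping credentials in `.env` (and out of version control) is the pattern every cookbook here follows.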
+ ```bash pip install langchain langchain-huggingface ``` @@ -27,6 +29,7 @@ pip install langchain langchain-huggingface Download the sample data `discharge_notes.csv` into a `data/` folder in your project root using `wget`: + ```bash mkdir -p data cd data diff --git a/docs/cookbook/format_conversion.md b/docs/cookbook/format_conversion.md index 1848db3b..11a22546 100644 --- a/docs/cookbook/format_conversion.md +++ b/docs/cookbook/format_conversion.md @@ -8,12 +8,14 @@ The [InteropEngine](../reference/interop/engine.md) provides a unified interface Install HealthChain: + ```bash pip install healthchain ``` Create an interoperability engine: + ```python from healthchain.interop import create_interop, FormatType from pathlib import Path @@ -26,6 +28,7 @@ engine = create_interop() Parse a CDA document and extract FHIR resources: + ```python cda_xml = """ @@ -93,6 +96,7 @@ for resource in fhir_resources: Generate a CDA document from FHIR resources: + ```python from healthchain.fhir.r4b import Condition, Patient @@ -149,6 +153,7 @@ print(cda_document) Parse an HL7v2 message and extract FHIR resources: + ```python hl7v2_message = """ MSH|^~\&|EPIC|EPICADT|SMS|SMSADT|199912271408|CHARRIS|ADT^A01|1817457|D|2.5| @@ -169,6 +174,7 @@ for resource in fhir_resources: Generate an HL7v2 message from FHIR resources: + ```python from healthchain.fhir.r4b import Patient, Encounter @@ -226,6 +232,7 @@ print(hl7v2_message) Save converted data to files: + ```python output_dir = Path("./output") output_dir.mkdir(exist_ok=True) diff --git a/docs/cookbook/index.md b/docs/cookbook/index.md index da7eb9cb..04b8537f 100644 --- a/docs/cookbook/index.md +++ b/docs/cookbook/index.md @@ -109,6 +109,7 @@ Hands-on, production-ready examples for building healthcare AI applications with Cookbooks are standalone scripts — run them directly to explore and experiment. When you're ready to build a proper service, scaffold a project and move your logic in: + ```bash # 1. 
Run a cookbook locally python cookbook/sepsis_cds_hooks.py @@ -125,6 +126,7 @@ healthchain serve **What moves from your script into `healthchain.yaml`:** + ```python # cookbook — everything hardcoded in Python gateway = FHIRGateway() @@ -151,6 +153,7 @@ llm: max_tokens: 512 ``` + ```python # app.py — load from config instead from healthchain.config.appconfig import AppConfig diff --git a/docs/cookbook/ml_model_deployment.md b/docs/cookbook/ml_model_deployment.md index 42791c72..f548995c 100644 --- a/docs/cookbook/ml_model_deployment.md +++ b/docs/cookbook/ml_model_deployment.md @@ -22,6 +22,7 @@ Both patterns share the same trained model and feature extraction—only the int ### Install Dependencies + ```bash pip install healthchain joblib xgboost scikit-learn python-dotenv ``` @@ -30,6 +31,7 @@ pip install healthchain joblib xgboost scikit-learn python-dotenv The cookbook includes a training script that builds an XGBoost classifier from MIMIC-IV data. From the project root: + ```bash cd scripts python sepsis_prediction_training.py @@ -45,6 +47,7 @@ This script: After training, copy the model to the cookbook directory: + ```bash cp scripts/models/sepsis_model.pkl cookbook/models/ ``` @@ -53,7 +56,8 @@ cp scripts/models/sepsis_model.pkl cookbook/models/ The training script uses the [MIMIC-IV Clinical Database Demo](https://physionet.org/content/mimic-iv-demo/2.2/) (~50MB, freely downloadable). 
Set the path: - ```bash + +```bash export MIMIC_CSV_PATH=/path/to/mimic-iv-clinical-database-demo-2.2 ``` @@ -90,7 +94,8 @@ The two patterns have different data requirements: Download pre-extracted patient bundles—these are already in the repo if you cloned it: - ```bash + +```bash mkdir -p cookbook/data/mimic_demo_patients cd cookbook/data/mimic_demo_patients wget https://github.com/dotimplement/HealthChain/raw/main/cookbook/data/mimic_demo_patients/high_risk_patient.json @@ -108,7 +113,8 @@ The two patterns have different data requirements: Add Medplum credentials to your `.env` file. See [FHIR Sandbox Setup](./setup_fhir_sandboxes.md#medplum) for details: - ```bash + +```bash MEDPLUM_BASE_URL=https://api.medplum.com/fhir/R4 MEDPLUM_CLIENT_ID=your_client_id MEDPLUM_CLIENT_SECRET=your_client_secret @@ -118,7 +124,8 @@ The two patterns have different data requirements: **2. Extract and Upload Demo Patients** - ```bash + +```bash # Set MIMIC-on-FHIR path (or use --mimic flag) export MIMIC_FHIR_PATH=/path/to/mimic-iv-on-fhir @@ -156,7 +163,8 @@ The two patterns have different data requirements: The script has options for generating larger test sets: - ```bash + +```bash python extract_mimic_demo_patients.py --help # Examples: diff --git a/docs/cookbook/multi_ehr_aggregation.md b/docs/cookbook/multi_ehr_aggregation.md index 7405eef3..73692c05 100644 --- a/docs/cookbook/multi_ehr_aggregation.md +++ b/docs/cookbook/multi_ehr_aggregation.md @@ -8,6 +8,7 @@ Check out the full working example [here](https://github.com/dotimplement/Health ## Setup + ```bash pip install healthchain python-dotenv ``` @@ -16,6 +17,7 @@ We'll use Epic's public FHIR sandbox. 
If you haven't set up Epic sandbox access Once you have your Epic credentials, configure them in a `.env` file: - + ```bash # .env file EPIC_BASE_URL=https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR/R4 @@ -204,7 +206,8 @@ Example uses Epic patient `eIXesllypH3M9tAA5WdJftQ3`; see [Epic sandbox](https:/ === "cURL" - ```bash + +```bash curl -X 'GET' \ 'http://127.0.0.1:8888/fhir/aggregate/Condition?id=eIXesllypH3M9tAA5WdJftQ3&sources=epic&sources=cerner' \ -H 'accept: application/fhir+json' diff --git a/docs/cookbook/setup_fhir_sandboxes.md b/docs/cookbook/setup_fhir_sandboxes.md index b4f1cf30..4f62b4f5 100644 --- a/docs/cookbook/setup_fhir_sandboxes.md +++ b/docs/cookbook/setup_fhir_sandboxes.md @@ -36,6 +36,7 @@ Epic uses [OAuth2 with JWT assertion for authentication](https://fhir.epic.com/D Follow Epic's instructions to [create a Public Private key pair for JWT signature](https://fhir.epic.com/Documentation?docId=oauth2&section=Creating-Key-Pair): + ```bash # Generate private key - make sure the key length is at least 2048 bits. openssl genrsa -out privatekey.pem 2048 @@ -51,7 +52,8 @@ Where `/CN=myapp` is the subject name (e.g., your app name). The subject name do Epic now requires registering your public key via a **JWKS (JSON Web Key Set) URL** instead of direct file upload. For quick and dirty development/testing purposes, you can use ngrok to expose your JWKS server publicly. 1. **Set up a JWKS server**: - +```bash # Ensure your .env has the private key path # EPIC_CLIENT_SECRET_PATH=path/to/privatekey.pem # EPIC_KEY_ID=healthchain-demo-key @@ -65,7 +67,8 @@ Epic now requires registering your public key via a **JWKS (JSON Web Key Set) UR - Example: `your-app.ngrok-free.app` 3.
**Expose your JWKS server**: - ```bash + +```bash ngrok http 9999 --domain=your-app.ngrok-free.app ``` @@ -96,6 +99,7 @@ The JWKS must be: Create a `.env` file with your credentials: + ```bash # .env file EPIC_BASE_URL=https://fhir.epic.com/interconnect-fhir-oauth/api/FHIR/R4 @@ -110,6 +114,7 @@ EPIC_KEY_ID=healthchain-demo-key # Must match the kid in your JWKS ### Using Epic Sandbox in Code + ```python from healthchain.gateway.clients import FHIRAuthConfig @@ -128,6 +133,7 @@ gateway.add_source("epic", EPIC_URL) After configuration: + ```bash python scripts/check_epic_connection.py ``` @@ -165,11 +171,13 @@ Cerner (now Oracle Health) provides both open and secure public sandboxes for th The Open Sandbox is read-only. It does not require authentication and is handy for quick proof of concepts: + ```bash https://fhir-open.cerner.com/r4/ec2458f2-1e24-41c8-b71b-0e701af7583d/:resource[?:parameters] ``` You can get an idea of patients available in the open sandbox by querying some common last names: + ```bash curl -i -H "Accept: application/json+fhir" "https://fhir-open.cerner.com/r4/ec2458f2-1e24-41c8-b71b-0e701af7583d/Patient?family=smith" ``` @@ -213,6 +221,7 @@ After creating the client: Create a `.env` file with your credentials: + ```bash # .env file MEDPLUM_BASE_URL=https://api.medplum.com/fhir/R4 @@ -224,6 +233,7 @@ MEDPLUM_SCOPE=openid ### Using Medplum in Code + ```python from healthchain.gateway import FHIRGateway from healthchain.gateway.clients import FHIRAuthConfig diff --git a/docs/tutorials/clinicalflow/gateway.md b/docs/tutorials/clinicalflow/gateway.md index 42b80c61..7e5614ec 100644 --- a/docs/tutorials/clinicalflow/gateway.md +++ b/docs/tutorials/clinicalflow/gateway.md @@ -129,13 +129,15 @@ Start your CDS service: === "uv" - ```bash + +```bash uv run python app.py ``` === "pip" - ```bash + +```bash python app.py ``` @@ -147,6 +149,7 @@ Your service is now running at `http://localhost:8000`. CDS Hooks services must provide a discovery endpoint. 
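The JWT-assertion flow the Epic setup relies on (RFC 7523) is easy to see in claim form: `iss` and `sub` are both the client ID, `aud` is the token endpoint, plus a short `exp` and a unique `jti`. This sketch builds the unsigned parts only; the actual RS256 signature over `signing_input` needs the `privatekey.pem` generated above, typically via a JWT library:

```python
import base64
import json
import time
import uuid


def b64url(data: bytes) -> str:
    # Base64url without padding, as JWTs require.
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def build_assertion_claims(client_id: str, token_url: str) -> dict:
    now = int(time.time())
    return {
        "iss": client_id,          # issuer: your client ID
        "sub": client_id,          # subject: same client ID
        "aud": token_url,          # audience: the token endpoint
        "jti": str(uuid.uuid4()),  # unique ID so the assertion can't be replayed
        "iat": now,
        "exp": now + 300,          # short-lived: 5 minutes
    }


# "kid" must match the key ID your JWKS advertises (EPIC_KEY_ID above).
header = {"alg": "RS256", "typ": "JWT", "kid": "healthchain-demo-key"}
claims = build_assertion_claims(
    "my-epic-client",
    "https://fhir.epic.com/interconnect-fhir-oauth/oauth2/token",
)
signing_input = b64url(json.dumps(header).encode()) + "." + b64url(json.dumps(claims).encode())
```

Epic verifies the signature by fetching the public key from your JWKS URL, matching on the `kid`.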
Test it: + ```bash curl http://localhost:8000/cds/cds-discovery ``` @@ -170,6 +173,7 @@ Response: Test calling your service: + ```bash curl -X POST http://localhost:8000/cds/cds-services/patient-alerts \ -H "Content-Type: application/json" \ diff --git a/docs/tutorials/clinicalflow/pipeline.md b/docs/tutorials/clinicalflow/pipeline.md index 42ead60a..8a2297f1 100644 --- a/docs/tutorials/clinicalflow/pipeline.md +++ b/docs/tutorials/clinicalflow/pipeline.md @@ -98,13 +98,15 @@ Run it: === "uv" - ```bash + +```bash uv run python test_pipeline.py ``` === "pip" - ```bash + +```bash python test_pipeline.py ``` diff --git a/docs/tutorials/clinicalflow/setup.md b/docs/tutorials/clinicalflow/setup.md index e2c2cfb8..d06760fa 100644 --- a/docs/tutorials/clinicalflow/setup.md +++ b/docs/tutorials/clinicalflow/setup.md @@ -6,6 +6,7 @@ Get your development environment ready for building the ClinicalFlow service. Create a new project directory: + ```bash mkdir clinicalflow cd clinicalflow @@ -17,6 +18,7 @@ cd clinicalflow Then initialize a project and install HealthChain: + ```bash uv init uv add healthchain @@ -27,6 +29,7 @@ uv add healthchain If you prefer using pip, create and activate a virtual environment first: + ```bash python -m venv .venv source .venv/bin/activate # On Windows: .venv\Scripts\activate @@ -54,13 +57,15 @@ Run it: === "uv" - ```bash + +```bash uv run python check_install.py ``` === "pip" - ```bash + +```bash python check_install.py ``` diff --git a/docs/tutorials/clinicalflow/testing.md b/docs/tutorials/clinicalflow/testing.md index 318f6dc4..3addd582 100644 --- a/docs/tutorials/clinicalflow/testing.md +++ b/docs/tutorials/clinicalflow/testing.md @@ -18,6 +18,7 @@ First, create sample data that matches what an EHR would send. This is the same Create a `data` directory: + ```bash mkdir data ``` @@ -91,6 +92,7 @@ This sample includes: The fastest way to test is with the sample data directly. 
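The discovery document behind that endpoint is a small JSON structure defined by the CDS Hooks spec (the spec's standard path is `{base}/cds-services`; the `/cds/cds-discovery` route is how this tutorial's service exposes it). A sketch of the shape and a lookup helper; the service fields mirror the tutorial's `patient-alerts` service:

```python
# Minimal CDS Hooks discovery document.
discovery = {
    "services": [{
        "hook": "patient-view",
        "id": "patient-alerts",
        "title": "Patient Alerts",
        "description": "Flags conditions extracted from clinical notes",
    }]
}


def find_service(doc: dict, service_id: str):
    # Return the service entry whose id matches, or None.
    return next((s for s in doc["services"] if s["id"] == service_id), None)


service = find_service(discovery, "patient-alerts")
```

An EHR calls discovery once to learn which hooks you handle, then invokes `/cds-services/{id}` when that hook fires.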
With your service running, send the request: + ```bash curl -X POST http://localhost:8000/cds/cds-services/patient-alerts \ -H "Content-Type: application/json" \ @@ -135,13 +137,15 @@ Make sure your service is running in one terminal: === "uv" - ```bash + +```bash uv run python app.py ``` === "pip" - ```bash + +```bash python app.py ``` @@ -149,13 +153,15 @@ Then in another terminal, run the test: === "uv" - ```bash + +```bash uv run python test_service.py ``` === "pip" - ```bash + +```bash python test_service.py ```
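The request body that curl command sends follows the CDS Hooks call shape: `hook`, a unique `hookInstance`, a `context` with the patient, and service-defined `prefetch` data. A builder sketch; the `"document"` prefetch key and the DocumentReference wrapping are assumptions mirroring this tutorial's sample data, not a spec requirement:

```python
import json
import uuid


def make_patient_view_request(patient_id: str, note_text: str) -> dict:
    return {
        "hook": "patient-view",
        "hookInstance": str(uuid.uuid4()),  # unique per invocation
        "context": {"userId": "Practitioner/123", "patientId": patient_id},
        "prefetch": {
            # Service-defined key carrying the clinical note as a Bundle.
            "document": {
                "resourceType": "Bundle",
                "entry": [{"resource": {
                    "resourceType": "DocumentReference",
                    "description": note_text,
                }}],
            }
        },
    }


payload = make_patient_view_request(
    "Patient/456", "Patient reports hypertension and diabetes."
)
body = json.dumps(payload)
```

Generating payloads like this programmatically is exactly what `test_service.py` and the `SandboxClient` automate over many patients.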