Submission-facing technical reference, grounded in the open-source Damaros product tree (connectors/epic_fhir.py, connectors/fhir_bulk.py, ingest/fhir_normalize.py, ingest/cohort_sync.py). SMART on FHIR Backend Services (server-to-server) for cohort ingest and deterministic screening; JWKS, scope parity, bulk posture, and audit hooks as implemented, not aspirational marketing copy.
Damaros is a protocol-driven clinical trial screening system that evaluates eligibility across site-approved patient populations using site-approved FHIR access patterns. This is the same mental model Epic already uses for registry, population health, and research cohort tooling (scheduled, system-level, auditable access), not discretionary per-user chart mining or external SaaS analytics.
Population-level evaluation is required because inclusion and exclusion logic must be applied consistently across every potentially eligible patient attached to an approved cohort, not triggered ad hoc one patient at a time. That is why Bulk FHIR $export sits in the architecture as an optional, site-gated complement to iterative REST reads: it aligns with Epic's population-level access model (read-only, explicitly enabled, bounded jobs), not continuous streaming or open-ended replication.
System identity (deterministic core): a deterministic eligibility evaluation engine operating on FHIR-native inputs with full audit traceability. Protocol definitions supply versioned logic; REST and optional Bulk supply normalized facts; the engine executes the same rules on the same inputs for the same outcome class (PASS / REVIEW / FAIL); outputs are criterion-level decisions plus evidence and sync provenance, with no generative verdict on the screening path.
Reproducibility (non-stochastic on PHI): eligibility evaluation is deterministic and reproducible. Given the same protocol version and the same normalized inputs, the system produces identical criterion-level outcomes, and evaluations can be replayed for audit with full provenance. There is no stochastic "AI" behavior on patient-identifiable clinical data in the screening execution path.
Evaluation boundary: All eligibility evaluation occurs on normalized, protocol-scoped data within the execution environment.
fhir_bundle_to_normalized (ingest/fhir_normalize.py) and run_sync_into_db (ingest/cohort_sync.py) serve as the protocol-scoped normalization and eligibility artifacts. Persistent normalized data lives only in trial-scoped, minimal schemas required for eligibility evaluation and audit traceability. The product does not replicate or expose a general-purpose longitudinal patient record outside those execution structures. Epic remains the system of record for longitudinal care.

Operationally, the product converts versioned trial protocols into structured evaluation rules, applies them to cohorts synchronized from FHIR (REST bundle pulls and, when enabled, Bulk export), and records PASS, REVIEW, or FAIL at criterion granularity with evidence references for coordinators, QA, and audit.
The production Epic connector is SMART on FHIR Backend Services: the API/worker exchanges a signed JWT for an access token (client_credentials + private_key_jwt, RFC 7523). Access is read-only for screening ingest. Epic tokens live server-side in process memory only, not in browsers or end-user SMART sessions for this path. End-user SMART on FHIR EHR launch is a separate integration pattern and is outside this Backend Services submission unless explicitly scoped elsewhere.
| APPLICATION_AUDIENCE | Backend Systems (Epic-registered Backend Services app) |
| PRIMARY_OAUTH_MODEL | client_credentials + JWT client assertion to Epic token URL; PKCE does not apply |
| USER_SMART_LAUNCH | Not used for cohort sync or screening ingest; out of scope for the Backend Services submission described here (repository contains a non-production reference stub only) |
| FHIR_VERSION | R4 |
| USE_CASE | Population-scale clinical trial screening: deterministic protocol eligibility on FHIR-native cohorts; evidence and audit replay; aligns with registry, cohort, and research execution patterns |
| AUTH_STANDARD | SMART on FHIR Backend Services (RFC 7523) |
| AUTH_METHOD | JWT client assertion (urn:ietf:params:oauth:client-assertion-type:jwt-bearer) |
| JWT_ALGORITHM | RS384 (override via DAMAROS_EPIC_JWT_ALGORITHM if Epic requires) |
| KEY_SIZE | RSA-2048 (or per Epic registration) |
| TOKEN_TTL | Assertion exp within 300 seconds of iat; access token TTL from Epic response; in-memory cache only |
| DEFAULT_ACCESS | Read-only FHIR; no EHR write-back unless explicit env gates below |
| TOKEN_ENDPOINT | DAMAROS_EPIC_TOKEN_URL (alias: DAMAROS_EPIC_OAUTH_TOKEN_URL) |
| FHIR_BASE | DAMAROS_EPIC_FHIR_BASE (alias: DAMAROS_EPIC_FHIR_BASE_URL) |
| NON_PRODUCTION | Separate Epic sandbox / non-prod client ID, signing keypair, and JWKS URL |
| PRODUCTION | Independent Epic production client ID, signing keypair, and JWKS URL |
| JWKS_HOSTING | Public JWKS documents are served over HTTPS from www.damaros.ai (Damaros-controlled DNS and TLS). Epic registers these URLs; private signing material never appears in JWKS JSON. |
| JWKS_STATIC_NONPROD | https://www.damaros.ai/.well-known/jwks-nonprod.json (public keys only; non-production Epic client kid alignment) |
| JWKS_STATIC_PROD | https://www.damaros.ai/.well-known/jwks-prod.json (public keys only; production Epic client kid alignment) |
| JWKS_KID_PARITY | The kid in the hosted JWKS must match DAMAROS_EPIC_JWT_KID for the environment that signs client assertions. Epic rejects assertions when header kid does not resolve to a published JWK. Non-prod and prod each use their own keypair plus kid plus JWKS URL. |
| JWKS_KEY_ROTATION | Publish the next public key in JWKS alongside the incumbent so Epic always has overlapping validity during rotation, with no authentication gap. Signing switches to the new private key only after Epic app registration confirms the new kid; then update DAMAROS_EPIC_JWT_PRIVATE_KEY_PATH + DAMAROS_EPIC_JWT_KID and retire the old public key when safe. Cadence follows change control, not an in-app timer. |
| JWKS_KEY_ROTATION_OWNERSHIP | Customer-operated deploy (on-prem / customer cloud): customer PKI or secret-manager owners execute rotation with Damaros runbook; Damaros does not hold production private keys unless contracted as operator. Damaros-operated managed service: Damaros platform SRE performs rotation in agreed windows, updates customer-visible JWKS, and notifies security contacts. Customer retains Epic app registration approval authority. |
| JWKS_VIA_API | Optional: set DAMAROS_EPIC_JWKS_SERVE_ENABLED=1 on the API; Epic may register {DAMAROS_PUBLIC_BASE_URL}/v1/meta/epic_jwks.json (same PEM + kid as token signing). Use static URLs or API URL, never both for the same key without coordination. |
| SECRET_STORAGE | Private signing PEM from secret manager or secure mount; never in git or client bundles |
| WRITE_ENV_NONPROD | DAMAROS_FHIR_WRITE_ENABLED=1 required for any FHIR POST/PUT helper path |
| WRITE_ENV_PRODUCTION | Requires DAMAROS_FHIR_WRITE_ENABLED=1 and DAMAROS_FHIR_PRODUCTION_WRITE_ATTESTED=1 when DAMAROS_ENV is production or prod (two-step gate for hospital review) |
The API builds a signed JWT (iss/sub = Epic client id, aud = token URL, jti = UUID, bounded exp), POSTs to the Epic token endpoint with grant_type=client_credentials, and caches the access token in process memory for FHIR HTTP clients. Failures surface to operators; there is no silent fallback to unauthenticated reads.
| GRANT_TYPE | client_credentials |
| ASSERTION_TYPE | urn:ietf:params:oauth:client-assertion-type:jwt-bearer |
| TOKEN_ENDPOINT | Epic-issued URL (DAMAROS_EPIC_TOKEN_URL) |
| JWT_ISS / JWT_SUB | Epic client ID for the active environment |
| JWT_AUD | Token endpoint URL (per Epic / RFC 7523) |
| JWT_JTI | UUID per assertion |
| TOKEN_STORAGE | In-process only; not written to disk, DB, or browser |
OAuth system/*.read floor (per-patient hot path): when DAMAROS_EPIC_FETCH_MEDICATION_STATEMENT is not enabled (default), token scope must cover exactly what EpicFhirClient.fetch_patient_bundle (see connectors/epic_fhir.py, ~345 to 398) plus cohort GET Group/{id} via search_patients_from_group need: eight resource families below, with no write scopes. Epic's literal strings may use .r vs .read; DAMAROS_EPIC_SCOPE carries the agreed list at token time.
Never request FHIR Binary: product Python has zero HTTP calls to Binary (no GET Binary/, no Binary search). The only "binary" string hits in the repo are unrelated psycopg[binary] dependency markers in persistence modules. Do not register Binary.Read / Binary.Search (any Epic line-item variant): it is not derivable from code and would contradict DocumentReference handling in ingest/fhir_normalize.py (metadata only, see §02D).
Not on the per-patient OAuth floor (separate tracks): system/MedicationStatement.read only with §03 flag plus Epic amendment; Bulk Data $export scopes only when §03B gates and Epic line-items for kick-off, status, and file are approved (not "Bulk Data Delete Request": there is no delete implementation in FhirBulkExporter); all write scopes (§03C).
Scope and call parity: keep DAMAROS_EPIC_SCOPE aligned with traffic. With MedStatement flag off, fetch_patient_bundle does GET Patient/{id}; search_observations_lab_capped_variants for lab-tuned Observation searches; paginated search_type_all_pages for Condition, MedicationRequest, Procedure, AllergyIntolerance, DocumentReference; optional MedicationStatement when flag on. Cohort IDs use GET Group/{id} only, not Group search.
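The parity rule above can be checked mechanically. A minimal sketch, assuming hypothetical helper names (`expected_scope`, `scope_parity_ok`) and canonical `system/{Resource}.read` strings; as noted in §02C, Epic's literal scope vocabulary may use `.r` vs `.read`, so a production check would normalize those variants first.

```python
def expected_scope(medstatement_enabled: bool) -> set[str]:
    """Derive the system/*.read floor from the traffic described above."""
    families = [
        "Patient", "Group", "Observation", "Condition",
        "MedicationRequest", "Procedure", "AllergyIntolerance",
        "DocumentReference",
    ]
    if medstatement_enabled:
        # §03 flag path only; not part of the initial eight-scope package
        families.append("MedicationStatement")
    return {f"system/{f}.read" for f in families}

def scope_parity_ok(damaros_epic_scope: str, medstatement_enabled: bool) -> bool:
    """True when the configured scope string matches traffic exactly:
    no write scopes, no Binary, nothing missing."""
    return set(damaros_epic_scope.split()) == expected_scope(medstatement_enabled)
```

A drifted registration (extra Binary scope, missing Group, leftover write scope) fails the check, which is the operational meaning of "keep DAMAROS_EPIC_SCOPE aligned with traffic."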
system/Patient.read | Patient resource for cohort binding and demographics used in deterministic rules |
system/Group.read | Resolve cohort membership (patient ids) for approved trial lists |
system/Observation.read | Laboratory observations per Epic-tuned searches (not full-chart extract) |
system/Condition.read | Active/problem list context for inclusion or exclusion criteria |
system/MedicationRequest.read | Outpatient medication orders for protocol-relevant criteria |
system/Procedure.read | Historical procedures referenced by criteria |
system/AllergyIntolerance.read | Access is via search endpoints only. search_type_all_pages("AllergyIntolerance", …) in fetch_patient_bundle (connectors/epic_fhir.py); GET AllergyIntolerance/{id} is not used. OAuth scopes reflect Epic's authorization vocabulary and may include read-level scope strings even when traffic is exercised via search; this is Epic's model, not an undocumented HTTP bypass. USCDI: register AllergyIntolerance.Search. |
system/DocumentReference.read | DocumentReference search plus normalize to metadata only (id, type label, date). No narrative content, no Binary follow-up; eligibility remains deterministic on structured fields. |
| system/MedicationStatement.read | Only when DAMAROS_EPIC_FETCH_MEDICATION_STATEMENT=1 and clinical informatics sign-off. Still read-only; not part of the initial eight-scope package. |

Epic's App Orchard / USCDI UI lists granular Read vs Search line items. OAuth system/*.read strings are a second contract, and both layers must map to the same HTTP calls. Register only line items with a traceable path in this codebase.
Bulk FHIR is how Epic already frames population work: read-only, site-controlled, scheduled, bounded cohort refresh. Not defensive experimentation, but correct use of Epic's population-level access model. Architectural shape in code: tenant isolation per deployment and org-scoped sync (run_sync_into_db(…, org_id=…) in ingest/cohort_sync.py); no default activation (bulk is an explicit integration path, not the REST per-patient default); read-only FHIR with resource sets bounded by Epic grants plus normalizer contract.
Bounded job lifecycle (FhirBulkExporter, connectors/fhir_bulk.py): each export is a finite job: kick-off (POST Group/{id}/$export), async status polling with explicit polling limits (max_poll_attempts × poll_interval_sec), then file download and NDJSON parse. That design prevents unbounded or continuous extraction; it is not a streaming tap on the EHR.
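The bounded lifecycle above (kick-off, capped polling, then download) can be sketched as a finite loop. This is a shape illustration, not the FhirBulkExporter implementation: `poll_until_complete`, `check_status`, and `BulkExportTimeout` are hypothetical names; the `max_poll_attempts × poll_interval_sec` bound is the property being shown.

```python
import time

class BulkExportTimeout(Exception):
    """Raised when the export job does not finish within the poll budget."""

def poll_until_complete(check_status, max_poll_attempts: int = 30,
                        poll_interval_sec: float = 2.0, sleep=time.sleep):
    """Bounded async-status polling.

    `check_status()` returns a manifest dict once the server reports the
    job complete, or None while it is still in progress.  The loop never
    polls more than max_poll_attempts times, so the job lifecycle is
    finite by construction -- not a streaming tap on the EHR.
    """
    for _ in range(max_poll_attempts):
        manifest = check_status()
        if manifest is not None:
            return manifest  # proceed to file download / NDJSON parse
        sleep(poll_interval_sec)
    raise BulkExportTimeout(
        f"export incomplete after {max_poll_attempts} polls "
        f"({max_poll_attempts * poll_interval_sec:.0f}s budget)")
```

A timeout surfaces to operators as a failed job rather than silently extending the extraction window.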
Operational summary: Bulk export supports scheduled, bounded cohort refresh in a site-controlled, read-only manner. Export bytes are processed through fhir_bundle_to_normalized into protocol-scoped structured rows and auditable eligibility outputs inside the hospital-approved deployment boundary, not exfiltrated as an unmanaged secondary clinical record store.
Deterministic handling of extra NDJSON types: bulk exports may include additional resource types depending on site configuration and optional _type on FhirBulkExporter.kick_off. The normalization layer (fhir_bundle_to_normalized in fhir_normalize.py) deterministically maps only resources covered by explicit type branches into normalized cohort fields tied to the normalized contract. Non-mapped resources are not persisted in normalized storage and are not included in downstream evaluation outputs. Eligibility execution consumes the normalized facts under versioned protocol logic, not open-ended ML interpretation of raw PHI on the hot path.
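The explicit-type-branch behavior above can be sketched as a dispatch table over NDJSON lines. The branch functions and field names here are illustrative placeholders; the real branch set and normalized contract live in ingest/fhir_normalize.py. The property shown is the one that matters for review: anything without an explicit branch is dropped, never persisted, never evaluated.

```python
import json

def _norm_patient(r: dict) -> dict:
    return {"kind": "patient", "id": r.get("id")}

def _norm_observation(r: dict) -> dict:
    coding = r.get("code", {}).get("coding") or [{}]
    return {"kind": "lab", "id": r.get("id"), "code": coding[0].get("code")}

# Explicit type branches; resource types absent from this map are dropped.
BRANCHES = {
    "Patient": _norm_patient,
    "Observation": _norm_observation,
}

def normalize_ndjson(lines):
    """Map only explicitly branched resource types into normalized rows."""
    rows = []
    for line in lines:
        resource = json.loads(line)
        branch = BRANCHES.get(resource.get("resourceType"))
        if branch is None:
            continue  # non-mapped resource: not persisted, not evaluated
        rows.append(branch(resource))
    return rows
```

Because the mapping is a static dict of pure functions, the same NDJSON input always yields the same normalized rows, which is what keeps bulk ingest on the deterministic path.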
- fetch_patient_bundle: Patient read; lab Observation search; Condition / MedicationRequest / Procedure / AllergyIntolerance / DocumentReference searches; optional MedicationStatement search when the env flag is on. No Binary anywhere.
- search_patients_from_group resolves GET Group/{id} only. Do not select Group.Search in Epic if the connector never issues a Group-type search.
- fhir_normalize.py: for resourceType == DocumentReference, the normalizer records metadata only (explicit comment: no narrative content (PHI); no LLM path from that module). It does not resolve or fetch linked Binary payloads, another reason Binary scopes must stay off the registration.
- connectors/fhir_bulk.py: implements kick-off (POST Group/{id}/$export), status polling, and file download, not Bulk Data Delete. Kick-off accepts optional _type; when omitted, NDJSON may include many resource types; fhir_normalize.py includes explicit branches for Encounter, DiagnosticReport, FamilyMemberHistory, and Immunization when those resources appear. Register those Epic line-items only when bulk export is tenant-enabled and institutionally approved, not as part of the minimal per-patient OAuth eight in §02C.
- extract_imaging_facts_from_radiology_report in imaging_extract.py is a deterministic stub (plain str in, fixed null fields out); it performs no FHIR I/O and does not pull Binary or radiology attachments.
- EpicFhirClient.patient_match (connectors/epic_fhir.py) exists for optional site-driven POST Patient/$match (e.g. scripts/fhir_live_readonly_validation.py). It is not called from fetch_patient_bundle or cohort bulk hot paths. Omit the Epic Patient.$match line item from default App Orchard selection unless the hospital explicitly contracts for external patient list ingestion or cross-system identity reconciliation that invokes this client. Otherwise it only raises identity-resolution review friction without a hot-path justification.

The live patient bundle pull for eligibility ingest is built in EpicFhirClient.fetch_patient_bundle (connectors/epic_fhir.py, ~345 to 398): GET Patient/{pid}; search_observations_lab_capped_variants for capped lab Observation searches; then paginated type searches for Condition, MedicationRequest, Procedure, AllergyIntolerance, and DocumentReference (with MedicationStatement inserted in that loop when DAMAROS_EPIC_FETCH_MEDICATION_STATEMENT is on). There are no Binary calls. This list must stay in lockstep with §02C OAuth scopes and flags.
| Patient | GET Patient/{id} |
| Group | search_patients_from_group resolves GET Group/{id} for member patient IDs (cohort path; not a Group-type search) |
| Observation | search_observations_lab_capped_variants: lab-focused Epic parameter variants; not a full-chart extract |
| Condition | patient=Patient/{id}, paginated |
| MedicationRequest | patient=Patient/{id}, paginated |
| MedicationStatement | patient=Patient/{id}, paginated; DAMAROS_EPIC_FETCH_MEDICATION_STATEMENT=1 opt-in |
| Procedure | patient=Patient/{id}, paginated |
| AllergyIntolerance | patient=Patient/{id}, paginated |
| DocumentReference | patient=Patient/{id}, paginated; normalizer keeps metadata only (§02D) |

fetch_patient_bundle never follows DocumentReference content to Binary and never calls Binary. Epic registrations must omit Binary web services and any system/Binary.read scope.

Bulk Data $export is disabled by default and runs only when a deployment explicitly executes scheduled, bounded cohort refresh (or operator-triggered bulk refresh) with Epic-granted bulk line-items and institutional approval (see §00 and §02D). This is how Bulk is intended to be used for population-level evaluation, not opportunistic per-request chart access.
FhirBulkExporter (connectors/fhir_bulk.py) implements kick-off, async status polling with explicit job lifecycle bounds (max_poll_attempts × poll_interval_sec), and gzip-aware file download, not Bulk Data Delete. Optional _type narrows exported types when supplied; when omitted, NDJSON may include resource types beyond §03. Ingest passes through fhir_bundle_to_normalized per §02D. Non-mapped resources are not persisted in normalized storage and are not included in downstream evaluation outputs.
Patient.$match (EpicFhirClient.patient_match): out of default submission scope. Omit from Epic line-item selection unless the site explicitly enables external-list or cross-system identity workflows that call this operation; it is not part of fetch_patient_bundle cohort screening.
FhirBulkExporter: tenant-gated; read-only; bounded poll; no Delete API in codebase.

Initial Epic registration and production go-live do not request FHIR write scopes. Eligibility screening, evidence display, and audit replay operate entirely on reads plus application database state. The product codebase retains optional POST helpers (e.g. Observation create, Subscription create) for rare site-specific workflows; those paths are hard-disabled unless multiple environment attestations are set, and are out of scope for the first hospital deployment package.
DAMAROS_FHIR_WRITE_ENABLED plus production attestation; not activated in initial submission.

The following may appear on an Epic scope worksheet for anticipated workflows. They are not part of the default patient bundle fetch today. Enable retrieval only when the protocol engine and site authorization require it.
Not fetched in fetch_patient_bundle; fhir_normalize.py handles them when they appear in bulk NDJSON (§02D / §03B). Register these only when a tenant-enabled bulk _type list requires them.

Eligibility execution is the deterministic engine on normalized FHIR facts already ingested into the trial boundary, consistent with §00. External assistance and protocol text never sit on that hot path (see below).
Egress control: NetworkPolicy with explicit CIDR allowlists for Epic FHIR, OIDC, in-cluster Postgres/Redis, and DNS (see §08 sample). Customer CNI must enforce; IT validates with pod-level denial checks per deployment runbook.

Evaluation timing: protocol logic is evaluated at the pinned logical timestamp evaluation_as_of, not "now."

Manual sync trigger (POST /v1/integrations/ehr/sync): the request is authorized with RBAC (workflow_update on the org) after principal_from_request resolves the caller's subject and org. Scheduled automation may instead use an org-scoped DAMAROS_EHR_SYNC_TRIGGER_TOKEN bearer (no human session). That path is explicitly service-to-service, not a hidden user identity. The token value lives only in a secret manager (or equivalent), is rotatable on demand, and each enqueue / trigger emits an auditable log_event line with org and job outcome. It grants no broader API privileges: it is accepted only as Bearer on POST /v1/integrations/ehr/sync and cannot enqueue unrelated jobs or call other routes.

Ingest provenance: cohort_sync_run_id is carried into screening and review rows; connectivity or no-op checks are logged without creating downstream cohort mutation records.

Principal provenance: subject, role, organization scope, and trial scope; administrative and export actions remain RBAC-gated; web sessions are TTL-bound.

The two provenance layers join on org and cohort_sync_run_id. Together they answer "which institution, which ingest, which human action" without conflating the two layers.

Write gates: DAMAROS_FHIR_WRITE_ENABLED=1 only after explicit site policy may enable write helpers for sandbox testing. Production requires DAMAROS_FHIR_WRITE_ENABLED=1 and DAMAROS_FHIR_PRODUCTION_WRITE_ATTESTED=1 when DAMAROS_ENV is production-class: two independent signals so a single typo cannot enable EHR mutation during audit.

Bulk posture: $export, when enabled, supports scheduled, bounded cohort refresh under Epic's population-level model: read-only access, tenant isolation, site-controlled execution (§00, §02D, §03B).

Representative examples for security / App Orchard review only. Dummy or truncated values; production YAML, keys, CIDRs, and log schemas are environment-specific.
Not a substitute for your tenant's live config. Canonical live JWKS: jwks-prod.json / jwks-nonprod.json.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: damaros-api-egress
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: api
  policyTypes: [Egress]
  egress:
    - ports: [{port: 53, protocol: UDP}, {port: 53, protocol: TCP}]
      to:
        - namespaceSelector:
            matchLabels: {kubernetes.io/metadata.name: kube-system}
    - ports: [{port: 443, protocol: TCP}]
      to:
        - ipBlock: {cidr: 203.0.113.0/24} # resolve & replace: Epic FHIR + token host CIDRs
    - ports: [{port: 5432, protocol: TCP}]
      to:
        - podSelector:
            matchLabels: {app.kubernetes.io/component: postgres}
This shape reflects the enforced default when the chart's networkPolicy.enabled (and related egress blocks) are left on. IT should treat it as the shipped baseline, not decorative documentation. Full template: deploy/helm/damaros/templates/networkpolicy.yaml in the product repo, including Redis, OIDC, and optional extra HTTPS CIDRs.
{
  "keys": [{
    "kty": "RSA",
    "use": "sig",
    "alg": "RS384",
    "kid": "example-hospital-prod-2026",
    "n": "w9zY...BASE64URL_TRUNCATED...q8Q",
    "e": "AQAB"
  }]
}
# header (base64url-decoded for readability)
{ "alg": "RS384", "typ": "JWT", "kid": "example-hospital-prod-2026" }
# claims
{
  "iss": "epic-backend-services-client-id-example",
  "sub": "epic-backend-services-client-id-example",
  "aud": "https://fhir.epic.com/interconnect-nonprod-oauth/oauth2/token",
  "jti": "2f5b0c1a-9e3d-4b7a-8f1c-0d2e4a6b8c0e",
  "exp": 1710000300,
  "iat": 1710000000
}
{"ts":"2026-04-01T12:00:00.000Z","event":"integrations.ehr_sync.triggered","org_id":"org_***","principal_sub":"oidc|coordinator_***","damaros_role":"coordinator","epic_oauth_client_id":"epic-backend-services-client-id-***","cohort_sync_run_id":"csr_7f2a9b1c***","ok":true}
Each such record corresponds to one discrete EHR sync execution (coordinator UI action or automation token), mappable to "who kicked off this pull" in plain language. Production lines follow your SIEM schema; Epic records the same Backend Services client_id on token issuance. The JSON above is a review packet composite (fields may span log_event plus domain event), not one verbatim syslog line.
Why the Damaros eligibility engine produces the same answer to the same question, every time. A short technical note for hospital IT, AMC trial offices, sponsors, and reviewers. Pinned to engine semantics, not marketing copy; citable per §07.
Two questions decide whether a clinical-trial eligibility decision survives audit. (1) Run the same screen again — do you get the same answer? (2) Show the specific evidence behind PASS / REVIEW / FAIL.
Stacks that put a generative model on the patient-screening hot path do not answer either question by construction. Stochastic decoding produces output variance even with identical inputs; a free-text rationale is not the same as a pointer to a FHIR Observation with a normalized fact. Pinning a model to greedy decoding does not make the rationale traceable; it just makes the variance smaller.
That gap is not cosmetic. It is the difference between a screening decision a coordinator can sign off on, and one a sponsor or IRB pulls at audit because it cannot be reconstructed.
The Damaros eligibility evaluation engine is non-stochastic on patient-identifiable data in the screening execution path (see Documentation §00). For any patient × criterion pair, the outcome class and its rationale are fully determined by four pinned inputs:
1. The versioned protocol logic in force for the screen: a protocol amendment is a new version, hence a new run (see Documentation §00).
2. The cohort frame, keyed by cohort_sync_run_id, persisted on every successful FHIR ingest (see Documentation §05A). One ingest, one run id, one cohort frame.
3. The normalized evidence produced by fhir_bundle_to_normalized (see Documentation §02D). Raw FHIR response bodies are not part of the evaluation surface; only the normalized contract is.
4. evaluation_as_of: the explicit logical timestamp at which protocol logic is evaluated against evidence. Not "now."

Same four inputs ⇒ same criterion outcome class (PASS / REVIEW / FAIL) ⇒ same structured rationale ⇒ same evidence references. This is the invariant the engine is built to enforce, not a property of any particular run.
A screening run identifier is not an opaque random UUID minted at execution start. It is a content-addressable identifier derived from the canonical fingerprint of the four pinned inputs above. Two evaluations with identical inputs collapse to the same run.
The practical consequence: if nothing has moved, "screen this cohort again" is a record lookup, not a re-execution. The hospital does not pay an Epic API tax to reproduce a decision the coordinator already saw, and the audit trail does not accumulate spurious near-duplicate runs that have to be diffed later.
If any pinned input moves — a protocol amendment lands, a cohort refresh promotes new patients, a lab result normalizes, the evaluation timestamp advances — the run identifier changes. Old runs remain immutable; the new run is a separately addressable record. There is no in-place mutation of a prior decision, even when the underlying facts have evolved.
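The content-addressable run identifier described above can be sketched as a canonical fingerprint. The function name, prefix, and input shapes here are illustrative assumptions; the property shown is the one the text claims: identical pinned inputs collapse to the same identifier, and any moved input yields a new, separately addressable run.

```python
import hashlib
import json

def run_id(protocol_version: str, cohort_sync_run_id: str,
           normalized_evidence: dict, evaluation_as_of: str) -> str:
    """Derive a run identifier from the canonical fingerprint of the four
    pinned inputs.  Not a random UUID minted at execution start: the id is
    a deterministic function of what was evaluated."""
    canonical = json.dumps(
        {
            "protocol_version": protocol_version,
            "cohort_sync_run_id": cohort_sync_run_id,
            "evidence": normalized_evidence,
            "evaluation_as_of": evaluation_as_of,
        },
        sort_keys=True, separators=(",", ":"),  # canonical serialization
    )
    return "run_" + hashlib.sha256(canonical.encode()).hexdigest()[:16]
```

With this shape, "screen this cohort again" on unchanged inputs is a key lookup on an existing run record rather than a re-execution.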
Replay against an existing run is idempotent: it returns the same outputs as the original run, with the same criterion-level rationale and the same evidence references. Replay does not re-fetch FHIR; it re-executes the versioned protocol logic against the pinned, already-normalized evidence. Epic is not in the replay path.
Replay against a different reference run is a structured comparison. The engine classifies agreement, divergence, or new criterion outcomes, and the run lineage shows which of the four pinned inputs moved (see Documentation §05: "Replay classifies agreement or divergence; no opaque 'model decided' eligibility."). Divergence is always attributable to a named delta on a named input.
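The structured comparison above can be sketched as a per-criterion classifier. This is a shape illustration under stated assumptions (runs modeled as criterion-id → outcome-class maps, hypothetical function name), not the engine's comparison semantics.

```python
def compare_runs(reference: dict, candidate: dict) -> dict:
    """Classify per-criterion outcomes between two runs.

    reference / candidate map criterion id -> outcome class
    ("PASS" / "REVIEW" / "FAIL").  Divergence is always attributed to a
    named criterion with both outcome classes, never an opaque
    re-decision.
    """
    report = {"agreement": [], "divergence": [], "new": [], "removed": []}
    for crit, outcome in candidate.items():
        if crit not in reference:
            report["new"].append(crit)          # criterion absent in reference
        elif reference[crit] == outcome:
            report["agreement"].append(crit)
        else:
            report["divergence"].append((crit, reference[crit], outcome))
    report["removed"] = [c for c in reference if c not in candidate]
    return report
```

Paired with the run lineage, every entry in `divergence` can be traced to the specific pinned input that moved between the two runs.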
Each (patient × criterion) pair is recorded as its own audit unit (see Documentation §05). The engine never collapses to a single eligibility flag with a generative explanation appended.
This is the unit a sponsor or IRB reviewer asks for: "Show me criterion 4.2.1 for this patient on this run." The answer is a deterministic record with sources, not a regenerated narrative.
"Replayability" is the operational form of the long-standing FDA expectation that decisions affecting clinical-trial conduct be reconstructable. Academic medical centers translate this to a concrete question at audit: "With the protocol that was in force on the screen date, against the cohort and evidence as they stood, do we get the same set of eligible patients?"
A deterministic engine answers this in O(1) on stored runs. A generative engine cannot, because identical inputs are not guaranteed to map to identical outputs — that is a property of how generative decoding works, not a tunable.
For hospital IT the question is narrower and harder: "Can we produce, at audit, a bit-identical reconstruction of the decision the coordinator acted on?" Damaros answers yes by construction. The engine semantics in §01–§04 are the property; the rest of the integration documentation is the implementation.
Determinism is an architectural property. It has to be enforced from the protocol-compile step through normalization to evaluation, and it has to hold for every patient × criterion pair, not just the ones a reviewer happens to spot-check.
A generative model on the screening hot path has two doors. (a) Tolerate output variance — replay is best-effort, the audit gap above stands, and the model is one upstream change away from re-deciding screens that have already been signed. (b) Pin the model to deterministic decoding plus a templated formatter — at which point the "rationale" is no longer the model's reasoning; it is a deterministic transform of structured inputs. Damaros built the deterministic transform natively, without the model on the hot path.
Damaros uses generative assistance only outside the screening execution path, on non-PHI protocol or synopsis text, behind deployment flags and pattern guards (see Documentation §04). The screening verdict is deterministic; the optional protocol-comprehension layer is clearly separated and never carries patient identifiers.
| TITLE | Deterministic Replay |
| SERIES | Damaros Engineering Note 01 |
| AUTHOR | Damaros, LLC |
| VERSION | 1.0 |
| PUBLISHED | 2026-05-06 |
| PERMALINK | https://www.damaros.ai/#engineering |
| SUGGESTED_CITATION | Damaros, LLC (2026). Deterministic Replay. Damaros Engineering Note 01, v1.0. Retrieved from https://www.damaros.ai/#engineering |
| RELATED | Production Integration Documentation (Epic FHIR / SMART Backend Services, audit, write-back controls) |