Trust-Model Governance
How trust levels are configured, monitored, and escalated.
Each trust level is governed by explicit policy, not implicit defaults. Administrators configure who
gets which level, for which actions, and what happens when the AI's confidence falls below, or its
dollar impact rises above, a configured threshold.
Per-role configuration
Trust levels are assigned by role (investigator, reviewer, supervisor, program director, QA
analyst), action type (route to queue, suggest determination, draft correspondence, summarize
document, recommend sample), and tenant. A senior investigator may operate at "Confirm" for
routing while staying at "Suggest" for clinical determinations — all governed by tenant
policy.
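As a minimal sketch of how such a role-by-action policy table could be resolved (the tenant, role, and action names below are illustrative, not AEGIS's actual schema):

```python
# Hypothetical per-tenant trust policy table. Keys are (role, action_type);
# the values are the trust levels described in this section.
tenant_policy = {
    "tenant-a": {
        ("senior_investigator", "route_to_queue"): "Confirm",
        ("senior_investigator", "suggest_determination"): "Suggest",
        ("reviewer", "draft_correspondence"): "Suggest",
    }
}

def trust_level(tenant: str, role: str, action: str) -> str:
    """Resolve the trust level for a role/action pair, defaulting to
    Shadow so an unconfigured combination never surfaces AI output."""
    return tenant_policy.get(tenant, {}).get((role, action), "Shadow")
```

Defaulting unconfigured combinations to the strictest level keeps the policy fail-safe: a missing row can only hide output, never auto-execute it.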
Per-action approval requirements
At "Confirm", the AI's recommendation is presented as a pre-filled action; the user must
explicitly accept before anything happens. At "Suggest", the AI surfaces a recommendation in a
side panel without pre-filling. At "Shadow", the recommendation is recorded but never shown.
"Autonomous" is reserved for low-risk, narrowly defined action types pre-approved by the tenant
administrator.
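The four presentation modes above can be sketched as a single dispatch; the return-value field names are assumptions for illustration:

```python
def present_recommendation(level: str, rec: dict) -> dict:
    """Map a trust level to how (or whether) a recommendation reaches the user."""
    if level == "Autonomous":
        # Pre-approved low-risk action: executed and logged, no user gate.
        return {"execute": True, "log": rec}
    if level == "Confirm":
        # Pre-filled action; nothing happens until the user explicitly accepts.
        return {"prefill": rec, "requires_explicit_accept": True}
    if level == "Suggest":
        # Surfaced in a side panel, never pre-filled.
        return {"side_panel": rec}
    # Shadow (and any unknown level): recorded only, never shown.
    return {"log_only": rec}
```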
Audit logging of every AI decision
Each AI invocation is logged with: prompt context, model version, generated output, evidence
cited, trust level applied, action taken or declined, and identity of the human who reviewed
the output. Logs are retained alongside other case audit records under
HIPAA-aligned safeguards and surfaced for tenant administrator
review.
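One way to model such a log row is an immutable record whose fields mirror the list above; the field and type names here are illustrative, not the actual log schema:

```python
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass(frozen=True)
class AIAuditRecord:
    """One row per AI invocation."""
    prompt_context: str
    model_version: str
    generated_output: str
    evidence_cited: tuple          # identifiers of the evidence the AI cited
    trust_level: str               # level in effect at invocation time
    action: str                    # e.g. "accepted", "declined", "auto_executed"
    reviewed_by: Optional[str]     # human reviewer identity, if any

def to_log_row(record: AIAuditRecord) -> dict:
    """Flatten a record for retention alongside other case audit records."""
    return asdict(record)
```

Freezing the dataclass makes each row write-once, which matches the audit intent: a logged decision is never mutated after the fact.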
Escalation triggers
Each trust level defines escalation triggers that drop the AI to a stricter level for a specific
case or action: low confidence score, conflict with prior reviewer determinations, dollar impact
above threshold, presence of provider on the OIG / SAM exclusion list, missing required
evidence, or absence of an applicable rule. Escalation flows from Autonomous → Confirm
→ Suggest, with the underlying rationale logged.
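A rough sketch of trigger evaluation and the one-step drop, assuming illustrative field names and threshold defaults (real thresholds come from tenant policy, and a tenant could equally map specific triggers to specific levels):

```python
ESCALATION_PATH = ["Autonomous", "Confirm", "Suggest"]  # increasingly strict

def fired_triggers(case: dict, conf_floor: float = 0.85,
                   dollar_cap: float = 10_000.0) -> list:
    """Return the escalation triggers that fire for one case/action."""
    fired = []
    if case.get("confidence", 0.0) < conf_floor:
        fired.append("low_confidence")
    if case.get("conflicts_with_prior_determination"):
        fired.append("reviewer_conflict")
    if case.get("dollar_impact", 0.0) > dollar_cap:
        fired.append("dollar_threshold")
    if case.get("provider_on_exclusion_list"):
        fired.append("oig_sam_exclusion")
    if not case.get("required_evidence_present", True):
        fired.append("missing_evidence")
    if not case.get("applicable_rule"):
        fired.append("no_applicable_rule")
    return fired

def escalated_level(current: str, fired: list) -> str:
    """Drop one step toward Suggest when any trigger fires; the rationale
    (the fired trigger list) would be logged alongside the change."""
    if not fired:
        return current
    idx = ESCALATION_PATH.index(current)
    return ESCALATION_PATH[min(idx + 1, len(ESCALATION_PATH) - 1)]
```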
Tenant-administrator controls
A dedicated AI Governance settings panel lets tenant administrators raise or lower trust by role
and action, define low-risk action types eligible for Autonomous operation, set confidence and
dollar thresholds for escalation, and review the AI audit log. All changes are versioned and
require an authorized administrator role.
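The versioning and authorization requirements could be enforced at the write path, roughly as follows; the role name and change-record fields are assumptions for the sketch:

```python
from datetime import datetime, timezone

def apply_trust_change(history: list, change: dict,
                       actor: str, actor_role: str) -> dict:
    """Append a versioned trust-policy change; only an authorized
    administrator role may write."""
    if actor_role != "tenant_admin":  # assumed role name
        raise PermissionError("trust-model changes require an administrator role")
    entry = {
        **change,
        "version": len(history) + 1,              # monotonic version number
        "changed_by": actor,
        "changed_at": datetime.now(timezone.utc).isoformat(),
    }
    history.append(entry)
    return entry
```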
Periodic review
AEGIS surfaces trust-model performance metrics — recommendation acceptance rate, rejection
rate, escalation frequency, dollar impact — so administrators can periodically review and
tune trust assignments based on observed accuracy in their own data and workflows.
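A minimal sketch of how those metrics could be derived from the audit log, assuming the illustrative record fields used earlier in this section:

```python
def trust_metrics(records: list) -> dict:
    """Summarize acceptance, rejection, escalation, and dollar impact
    over a set of audit-log records."""
    total = len(records) or 1  # avoid division by zero on an empty window
    accepted = sum(r.get("action") == "accepted" for r in records)
    rejected = sum(r.get("action") == "rejected" for r in records)
    escalated = sum(bool(r.get("escalated")) for r in records)
    dollars = sum(r.get("dollar_impact", 0.0) for r in records)
    return {
        "acceptance_rate": accepted / total,
        "rejection_rate": rejected / total,
        "escalation_rate": escalated / total,
        "dollar_impact": dollars,
    }
```

Computed over a rolling window, these rates give administrators a concrete basis for raising trust where acceptance is consistently high and lowering it where rejections or escalations cluster.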