Audit Logs & Explainability
FraudShield AI Engine maintains a complete, immutable audit log for every transaction scored. The audit log, combined with the model explainability output, supports regulatory examination, adverse action response, model risk management reviews, and internal investigations.
Audit log structure
One audit log record is written for every scoring request, whether or not an alert is generated. Records are written as JSON objects to the configured audit destination (Kafka topic, SIEM, or flat file). Each record contains the following sections.
Audit log field reference
| Field | Type | Description |
|---|---|---|
| audit_id | string | Globally unique audit record identifier. Format: AUD-{YYYYMMDD}-{sequence}. |
| scored_at | ISO 8601 datetime | Timestamp when scoring completed, in UTC. Used for temporal audit queries. |
| model_version | string | Version of the ML model that produced the score. Essential for back-testing and regulatory review, because it allows the exact model state to be reconstructed. |
| suppression_applied | boolean | Whether an alert suppression rule prevented case creation. If true, the matching rule ID is logged in suppression_rule_id. |
| whitelist_applied | boolean | Whether a whitelist entry reduced the risk score. If true, the whitelist entry type and ID are logged. |
| weighted_contribution | integer | The Risk Indicator's (RI's) contribution to the composite score: sub_score × weight. Sort by this field descending to identify the top contributors to a specific score. |
Model explainability
FraudShield AI produces an explainability output with every scoring decision. The score_contributors array identifies the Risk Indicators that most influenced the final score, their raw values, and a human-readable display string. This output supports:
- Analyst review: Analysts see the top contributing RIs in Case Manager, enabling faster and better-informed disposition decisions.
- Customer communication: Provides the factual basis for adverse action notices where a transaction block must be communicated to the customer.
- Regulatory response: Supports responses to data subject access requests (DSARs) and model examination inquiries with documented decision rationale.
- Model debugging: Helps the data science team identify RIs that are producing unexpected sub-scores.
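As an illustration of how the explainability output can be consumed, the sketch below sorts a hypothetical score_contributors array by weighted_contribution descending to surface the top drivers of a score. The RI names, values, and display strings are assumptions for illustration only:

```python
# Hypothetical score_contributors payload; RI names and values are illustrative.
contributors = [
    {"risk_indicator": "RI_GEO_MISMATCH", "sub_score": 55, "weight": 0.2,
     "weighted_contribution": 11,
     "display": "Transaction country differs from home country"},
    {"risk_indicator": "RI_VELOCITY_24H", "sub_score": 80, "weight": 0.4,
     "weighted_contribution": 32,
     "display": "High transaction velocity in last 24 hours"},
    {"risk_indicator": "RI_NEW_PAYEE", "sub_score": 60, "weight": 0.1,
     "weighted_contribution": 6,
     "display": "First payment to this payee"},
]

# Sort descending by weighted_contribution to identify the top contributors.
top = sorted(contributors, key=lambda c: c["weighted_contribution"], reverse=True)
for c in top:
    print(f'{c["risk_indicator"]}: {c["weighted_contribution"]}')
```

This is the same ordering an analyst sees in Case Manager when reviewing the top contributing RIs for a case.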
Explainability report (on-demand)
For regulatory examinations or customer dispute resolution, you can generate a full Explainability Report for any transaction in the last 7 years. The report includes the complete RI profile at the time of scoring, the model version, and the exact configuration parameters applied.
To generate a report: go to Operations Console > Reports > Explainability Report and enter the transaction_id or audit_id.
Model governance
FraudShield AI maintains a Model Change Log that records every configuration change, model version update, tuning decision, and suppression rule modification. The Model Change Log is the primary artifact for model governance audits.
| Event type | What's logged | Governance artifact |
|---|---|---|
| Model deployment | Model ID, version, champion/challenger status, deployment approver, effective date | Model Deployment Record |
| Threshold change | Previous and new threshold values per BTA, change author, simulation results, approval evidence | Threshold Change Record |
| Suppression rule creation / expiry | Rule parameters, business justification, creating analyst, expiry date, review owner | Suppression Rule Registry |
| Whitelist modification | Whitelist entry, account or payee identifier, justification, approver | Whitelist Change Record |
| Model rollback | Rollback trigger, impacted transactions, incident ticket reference | Rollback Incident Record |
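A Model Change Log entry can be pictured as a structured record capturing the fields in the table above. The sketch below shows a hypothetical Threshold Change Record; every field name and value here is an illustrative assumption, not the product's actual schema:

```python
from datetime import datetime, timezone

# Hypothetical Threshold Change Record; field names and values are
# illustrative assumptions, not the product's governance schema.
threshold_change = {
    "event_type": "threshold_change",
    "bta": "WIRE_OUTBOUND",                 # business transaction activity affected
    "previous_threshold": 720,
    "new_threshold": 680,
    "change_author": "j.smith",
    "simulation_results": {"projected_alert_volume_delta_pct": 4.2},
    "approval_evidence": "CHG-10482",       # ticket or approval reference
    "recorded_at": datetime.now(timezone.utc).isoformat(),
}

print(threshold_change["event_type"], threshold_change["approval_evidence"])
```

Keeping both the previous and new values in the record is what lets a governance audit reconstruct the threshold in force at any point in time.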
Regulatory compliance
SR 11-7 — Model Risk Management
The Federal Reserve's SR 11-7 guidance requires that all models used in material decision-making have documented development, validation, and ongoing monitoring. FraudShield AI supports SR 11-7 compliance through:
- The Model Change Log (complete development and deployment history)
- Champion/challenger validation reports (independent model validation)
- Quarterly performance reports (ongoing monitoring)
- The Explainability Report (model documentation and conceptual soundness evidence)
CFPB adverse action requirements
When a transaction is blocked or a customer is denied a service based on the FraudShield AI decision, the Regulation B adverse action requirements may apply. The explainability output provides the specific factors (Risk Indicators) that contributed to the decision, which can be used to draft legally compliant adverse action notices.
GDPR — Article 22 (automated decision-making)
For institutions operating in the EU or processing EU customer data, GDPR Article 22 grants data subjects the right to:
- Be informed when a solely automated decision with significant effects has been made.
- Obtain meaningful information about the logic involved.
- Request human review of the automated decision.
The Explainability Report satisfies the "meaningful information about the logic" requirement. Make sure your data subject access request (DSAR) process includes a procedure for generating and delivering Explainability Reports within the one-month GDPR response window.
BSA/AML record-keeping
Transactions that trigger a Suspicious Activity Report (SAR) filing must have their complete audit log record preserved as supporting documentation. FraudShield AI automatically tags audit log records associated with SAR-filed transactions when the Case Manager integration is configured. These records are flagged as "sar_related": true and excluded from standard retention purge jobs.
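The purge exclusion can be pictured with a minimal sketch, assuming records carry the scored_at and sar_related fields described above. The helper function and retention window here are hypothetical, not the product's actual purge job:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=7 * 365)  # illustrative standard retention window

def purge_candidates(records, now):
    """Return records eligible for purge: past retention and not SAR-related."""
    eligible = []
    for rec in records:
        # SAR supporting documentation is excluded from standard purge jobs.
        if rec.get("sar_related", False):
            continue
        scored_at = datetime.fromisoformat(rec["scored_at"].replace("Z", "+00:00"))
        if now - scored_at > RETENTION:
            eligible.append(rec)
    return eligible

now = datetime(2024, 1, 15, tzinfo=timezone.utc)
records = [
    {"audit_id": "AUD-20150101-000001",
     "scored_at": "2015-01-01T00:00:00Z", "sar_related": True},   # kept: SAR-related
    {"audit_id": "AUD-20150101-000002",
     "scored_at": "2015-01-01T00:00:00Z"},                        # purged: past retention
    {"audit_id": "AUD-20230601-000003",
     "scored_at": "2023-06-01T00:00:00Z"},                        # kept: within retention
]
print([r["audit_id"] for r in purge_candidates(records, now)])
```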