Every Decision. On Record.
Every AI action that matters deserves a permanent, reviewable record. Use this structured audit log template to capture what went in, what logic ran, who reviewed it, and what came out. Available in four formats so your team can start logging from wherever they already work.
What Is an Audit Log?
An audit log is a structured, permanent record of every significant AI action. For each workflow run it captures the input, the logic applied, the model version, the output generated, any human review, and the final result that reached your systems or customers. Without one, AI decisions are invisible — you cannot debug anomalies, demonstrate compliance, or hold any part of the process accountable.
This template works at any scale. A spreadsheet is fine for one or two workflows. As volume grows, the same fields map directly onto a database table via the included SQL schema.
Every Entry Records
- Input summary and source system
- Logic applied and model version
- AI output and confidence score
- Human review status and outcome
- Final output, error flags, and retention period
Download the Templates
Choose the format that fits your current stack. All four use the same 20-field schema.
Headers and seven prefilled example rows across four workflow types. Opens directly in Excel, Google Sheets, or Numbers.
audit-log-template.csv
Excel, Google Sheets, Numbers
CREATE TABLE statement with column types, constraints, and indexes. PostgreSQL 14+, with notes for MySQL 8 adjustments.
audit-log-schema.sql
PostgreSQL 14+, MySQL 8+ compatible
Field definitions array plus three complete example entries. Drop into your integration layer or use to validate log payloads.
audit-log-template.json
schema + example records
Tab-separated values formatted for Notion databases and Airtable imports. Same headers and example rows as the CSV.
audit-log-template.tsv
Notion, Airtable, spreadsheet
How to Use This Template
Choose Your Format
Spreadsheet users: start with the CSV. Database or API integration: use the SQL schema or JSON template. Notion and Airtable users: use the TSV. All four share the same 20-field schema.
Map Fields to Your Workflow
Not every field applies to every workflow. Leave optional fields blank rather than inventing values — confidence_score and reviewer fields are only relevant when your process uses them.
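One way to enforce the required/optional split is a small validator. The function and its conditional rules below are a sketch based on the field reference, not part of the template itself:

```python
REQUIRED_FIELDS = {
    "log_id", "created_at", "workflow_name", "workflow_version",
    "human_review_required", "error_flag", "archived",
}

def validate_entry(entry: dict) -> list[str]:
    """Return a list of problems; an empty list means the entry is valid."""
    problems = [f"missing required field: {f}"
                for f in sorted(REQUIRED_FIELDS) if entry.get(f) is None]
    # Conditional requirements described in the field reference
    if entry.get("human_review_required") and not entry.get("reviewer_name"):
        problems.append("reviewer_name required when human_review_required is true")
    if entry.get("error_flag") and not entry.get("error_description"):
        problems.append("error_description required when error_flag is true")
    return problems
```

Optional fields that are simply absent produce no errors, which matches the guidance to leave them blank rather than invent values.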
Log at the Point of Action
Write each record at the moment the AI action occurs, not retroactively. If you are automating a workflow, the log write should be part of the same step as the AI call.
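In an automated workflow, this can look like wrapping the model call and the log write in one function. Everything here is a sketch: `call_model` and `write_log` are placeholders for your own client and storage layer, and the workflow name is hypothetical.

```python
import uuid
from datetime import datetime, timezone

def run_workflow_step(payload, call_model, write_log):
    """Call the model and write the audit record in the same step."""
    entry = {
        "log_id": str(uuid.uuid4()),
        "created_at": datetime.now(timezone.utc).isoformat(),
        "workflow_name": "invoice-triage",  # hypothetical workflow name
        "error_flag": False,
    }
    try:
        result = call_model(payload)
        entry["ai_output"] = str(result)[:500]  # store an excerpt, not the full payload
    except Exception as exc:
        entry["error_flag"] = True
        entry["error_description"] = repr(exc)
        raise
    finally:
        write_log(entry)  # the log write happens whether the call succeeded or failed
    return result
```

Putting the write in `finally` guarantees a record even when the call fails, which is exactly when you will want one.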
Set Retention Periods First
The retention_period field lets you apply consistent rules by workflow type. Define this before volume accumulates — 12 months for lead scoring, 7 years for financial processing.
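A simple lookup keyed by workflow type keeps retention consistent. The policy values below are illustrative, echoing the examples above; define your own before logging begins.

```python
# Hypothetical retention policy, defined up front by workflow type
RETENTION_BY_WORKFLOW = {
    "lead-scoring": "12 months",
    "financial-processing": "7 years",
}
DEFAULT_RETENTION = "6 months"

def retention_for(workflow_name: str) -> str:
    """Return the retention_period value for a given workflow."""
    return RETENTION_BY_WORKFLOW.get(workflow_name, DEFAULT_RETENTION)
```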
Review on a Recurring Schedule
A log that is never read only helps after something has already gone wrong. Monthly review of error flags and review outcomes catches drift before it compounds into a larger problem.
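A monthly review can start from a few aggregate counts. The summary function below is a sketch over entries shaped like the schema:

```python
from collections import Counter

def monthly_summary(entries: list[dict]) -> dict:
    """Counts worth reviewing each month: error rate and review outcomes."""
    return {
        "total": len(entries),
        "errors": sum(1 for e in entries if e.get("error_flag")),
        "review_outcomes": Counter(
            e["review_outcome"] for e in entries if e.get("review_outcome")
        ),
    }
```

A rising error count, or a growing share of Modified and Rejected outcomes, is the drift signal this step is meant to catch.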
Archive & Scale
Plan for growth from day one. Keep a "warm archive" of historical logs separate from active workflows to enable trend analysis without slowing current operations. When volume grows, migrate CSVs to SQL databases.
Field Reference
Every field in the schema, what it stores, and whether it is required.
| Field | Type | Required | Description |
|---|---|---|---|
| log_id | UUID / INT | Yes | Unique identifier for this log entry. Use UUID v4 or an auto-incrementing integer. Never reuse an ID, even after archiving. |
| created_at | Timestamp | Yes | ISO 8601 timestamp when the entry was written. Always store in UTC. This is when the log was created, not when the workflow ran. |
| workflow_name | Text | Yes | A consistent name for the AI workflow that generated this entry. Standardize names across your team so logs can be filtered and compared reliably. |
| workflow_version | Text | Yes | The version of the workflow definition in use at the time of this entry. Increment the version whenever you update rules, prompts, or model configuration. |
| input_summary | Text | No | A brief description or reference ID for the input data. Store a summary or an external reference rather than raw data. Do not log PII here unless your access controls and retention policies are confirmed. |
| input_source | Text | No | The system or user that provided the input. Examples: HubSpot CRM, Zendesk, Accounts Payable Portal, Manual entry. |
| logic_applied | Text | No | The name of the rule set, prompt template, or model configuration that was applied. Be specific enough that a reviewer can locate and inspect the logic used. |
| model_version | Text | No | The specific AI model version used. Example: gpt-4o-2024-11. Record the full version string, not just the model family name, so results can be reproduced. |
| ai_output | Text | No | A summary of what the AI returned. For large outputs, store a reference or excerpt rather than the full payload. The goal is for a reviewer to understand what the AI decided, not to replay the full output. |
| confidence_score | Decimal 0.00–1.00 | No | The model confidence score if your API or model returns one. Leave blank if not available. A low confidence score is a useful signal for triggering human review. |
| human_review_required | Boolean | Yes | Whether human review was required before this output could be used. Define your review triggers in your workflow documentation before you start logging. |
| reviewer_name | Text | No | The full name of the human reviewer. Required when human_review_required is true. Leave blank for automated approvals. |
| reviewed_at | Timestamp | No | When the review was completed. The gap between created_at and reviewed_at is your review latency, a useful operational metric. |
| review_outcome | Approved / Rejected / Modified / Escalated | No | The result of the human review. Use a controlled vocabulary so outcomes can be filtered and counted. A high rate of Modified or Rejected outcomes signals a workflow that needs attention. |
| review_notes | Text | No | Free-text notes from the reviewer explaining the review decision. Especially important when the outcome is Rejected or Modified. |
| final_output | Text | No | What was actually used or delivered downstream. May differ from ai_output when a reviewer modified the result. This is the field to check when tracing a downstream issue back to its source. |
| error_flag | Boolean | Yes | Whether an error was detected during this workflow run. Set to true any time the workflow produced an unexpected result, failed to complete, or triggered a downstream issue. |
| error_description | Text | No | A description of the error. Required when error_flag is true. Include enough context for the error to be reproduced and diagnosed by someone who was not present. |
| retention_period | Text | No | How long this record must be kept. Define retention periods by workflow type before logging begins. Examples: 6 months, 12 months, 7 years, Indefinite. |
| archived | Boolean | Yes | Whether this record has been moved to archival storage. Archived records should remain queryable for compliance purposes but can be separated from active operational data. |
Logging Is Just the Start
An audit log is one component of a complete AI governance system. Once logging is in place, the next step is defining review thresholds, setting escalation rules, and building the monitoring layer that catches drift before it becomes a problem. That is what we build with clients every day.
Talk to Us About Your Governance Setup →