Knowledge Base

📝 Context Summary

This playbook provides the operational procedure for conducting a freshness audit of the SIE Knowledge Core. It covers generating a freshness report from document metadata, classifying content by decay risk, prioritizing the update queue, executing tiered reviews, and verifying that updated content passes pipeline validation. The playbook is designed to be run on a recurring schedule to prevent knowledge decay from degrading agent performance.

Playbook: Running a Knowledge Base Freshness Audit

Knowledge decay is inevitable. The facts, strategies, tools, and market conditions documented in the Knowledge Core change over time. Content that was accurate six months ago may now be misleading. An agent retrieving stale content produces stale outputs — and the Fleet Commander pays the Human Correction Tax to catch and correct them.

The freshness audit is the systematic process for identifying and resolving knowledge decay before it degrades agent performance. It should be run on a recurring schedule, not as an ad-hoc reaction to a visible failure.

  • Full audit: Quarterly. Review every document in the Knowledge Core.
  • Domain audit: Monthly. Review one knowledge domain per month on a rotating basis (e.g., AI in January, SEO in February, E-Commerce in March).
  • Triggered audit: Immediately after a Steady Presence incident identifies content staleness as a root cause.

Phase 1: Generate the Freshness Report

Step 1.1 — Extract document metadata. Scan all markdown files in the Knowledge Core and extract the updated timestamp, status, steward, tags, and file path from each file’s frontmatter. Generate a tabular report sorted by updated date, oldest first.
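
A minimal extraction sketch, assuming YAML frontmatter delimited by --- and the field names used throughout this playbook (the production pipeline's extractor may differ):

```python
# Sketch only: assumes YAML frontmatter delimited by "---" and the
# field names used in this playbook (updated, status, steward, tags).
from pathlib import Path

import yaml  # PyYAML


def extract_metadata(root: str) -> list[dict]:
    rows = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        if not text.startswith("---"):
            continue  # no frontmatter; worth flagging separately
        parts = text.split("---", 2)
        if len(parts) < 3:
            continue  # malformed frontmatter block
        meta = yaml.safe_load(parts[1]) or {}
        rows.append({
            "path": str(path),
            "updated": str(meta.get("updated", "")),
            "status": meta.get("status"),
            "steward": meta.get("steward"),
            "tags": meta.get("tags", []),
        })
    # ISO dates sort lexicographically; documents with a missing
    # timestamp sort to the top of the report for manual review.
    rows.sort(key=lambda r: r["updated"])
    return rows
```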

Step 1.2 — Calculate staleness scores. Assign each document a staleness classification based on the time elapsed since its last update:

Classification | Age Since Last Update | Risk Level
Current        | 0-90 days             | Low
Aging          | 91-180 days           | Medium
Stale          | 181-365 days          | High
Critical       | Over 365 days         | Critical
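
The thresholds are mechanical, so this classification can be scripted. A minimal sketch mirroring the table above:

```python
from datetime import date


def classify(updated: date, today: date | None = None) -> str:
    """Map days since last update to the staleness table above."""
    age = ((today or date.today()) - updated).days
    if age <= 90:
        return "Current"
    if age <= 180:
        return "Aging"
    if age <= 365:
        return "Stale"
    return "Critical"
```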

Step 1.3 — Cross-reference with retrieval data. If available, pull retrieval frequency data from the vector database logs. Documents that are both stale and frequently retrieved are the highest priority — they are actively feeding outdated information to agents. Documents that are stale but never retrieved are lower priority — they may be candidates for archival.
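
As a sketch of this cross-reference, assuming retrieval counts keyed by file path have already been exported from the vector-database logs (the export format is deployment-specific) and that "high frequency" means at least 50 retrievals per audit window, an illustrative cutoff only:

```python
def split_stale_docs(rows, retrieval_counts, high_threshold=50):
    """Partition stale documents into hot (update first) and cold
    (archival candidates) by how often agents actually retrieve them.
    Assumes each row carries the "classification" from Step 1.2."""
    hot, cold = [], []
    for row in rows:
        row["retrievals"] = retrieval_counts.get(row["path"], 0)
        if row["classification"] in ("Stale", "Critical"):
            bucket = hot if row["retrievals"] >= high_threshold else cold
            bucket.append(row)
    return hot, cold
```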

Step 1.4 — Cross-reference with incident data. Check the Steady Presence incident log for any documents cited as root causes of agent failures. These documents are flagged regardless of their staleness score.

Phase 2: Classify and Prioritize

Step 2.1 — Classify by decay type. For each stale document, determine the type of decay present:

  • Factual obsolescence: Statistics, tool versions, pricing, API details, or market data that has changed. Most common in the TOOLS and E-COMMERCE domains.
  • Strategic drift: Recommendations or strategies that no longer align with current business priorities or market conditions. Most common in GROWTH and CORE strategy documents.
  • Contextual degradation: Content that is technically accurate but has lost relevance because the surrounding landscape has shifted (e.g., a competitor analysis where the competitor has pivoted).
  • Relationship obsolescence: Internal links that point to archived or restructured documents, breaking the knowledge graph.

Step 2.2 — Build the priority queue. Rank documents for review using this priority matrix:

Priority        | Criteria
P0 — Immediate  | Cited in a Steady Presence incident, OR Critical staleness + high retrieval frequency
P1 — This cycle | Stale + high retrieval frequency, OR any governance/protocol document older than 180 days
P2 — Next cycle | Stale + low retrieval frequency, OR Aging + high retrieval frequency
P3 — Backlog    | Aging + low retrieval frequency; review if capacity allows, otherwise defer
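
The matrix reduces to a short function. A sketch assuming the flags were computed in Phase 1; the matrix does not spell out Critical + low retrieval without an incident citation, so that case defaults to P2 here:

```python
def priority(classification: str, high_retrieval: bool,
             incident_cited: bool, governance_doc: bool) -> str:
    over_180_days = classification in ("Stale", "Critical")
    if incident_cited or (classification == "Critical" and high_retrieval):
        return "P0"
    if (classification == "Stale" and high_retrieval) or \
            (governance_doc and over_180_days):
        return "P1"
    if over_180_days or (classification == "Aging" and high_retrieval):
        return "P2"  # includes Critical + low retrieval, an assumed default
    return "P3"  # Aging + low retrieval, or anything Current
```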

Phase 3: Execute Reviews

Step 3.1 — Assign to stewards. Route each document to its designated steward (from the steward frontmatter field). If the steward is unavailable or the field is empty, the Fleet Commander assigns a reviewer.

Step 3.2 — Execute tiered reviews. Apply the appropriate review tier based on the scope of changes needed:

  • Tier 1 (Minor update): Update statistics, refresh links, correct outdated tool names or versions. The steward makes changes directly, commits to Git, and the document re-enters the pipeline. No additional approval required.
  • Tier 2 (Structural revision): Rewrite sections, add new subsections to cover gaps, or restructure the document’s argument. The steward makes changes and the Fleet Commander reviews before reactivation.
  • Tier 3 (Fundamental rewrite or retirement): The document’s core thesis is no longer valid. The Fleet Commander decides whether to commission a full rewrite or archive the document and create a replacement.

Step 3.3 — Update frontmatter. For every reviewed document, update the updated timestamp to today’s date. If the document’s scope or focus has shifted, update the semantic_summary, synthetic_questions, and key_concepts fields accordingly. These metadata updates are critical — they affect how the document is embedded and retrieved.
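
One way to script the metadata bump, using the python-frontmatter package as an assumed convenience (any frontmatter-aware tooling works):

```python
from datetime import date

import frontmatter  # pip install python-frontmatter


def touch_document(path: str, **field_updates) -> None:
    """Set the updated field to today and apply any semantic-field
    overrides, e.g. semantic_summary=..., key_concepts=[...]."""
    post = frontmatter.load(path)
    post["updated"] = date.today().isoformat()
    for field, value in field_updates.items():
        post[field] = value
    with open(path, "w", encoding="utf-8") as fh:
        fh.write(frontmatter.dumps(post))
```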

Step 3.4 — Re-sync. After changes are committed, trigger a Knowledge Pipeline sync for the affected files. Verify that the updated embeddings are indexed and the WordPress posts reflect the changes.

Phase 4: Verify and Report

Step 4.1 — Verify pipeline integrity. Confirm that all updated documents pass Gate 2 validation (schema, taxonomy mapping, duplicate detection, link integrity). Address any validation failures before closing the audit.
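
Gate 2's full validation is pipeline-specific, but the link-integrity portion can be spot-checked independently. A narrow sketch that only catches relative markdown links pointing at missing files:

```python
import re
from pathlib import Path


def broken_internal_links(root: str) -> list[tuple[str, str]]:
    """Return (source file, broken target) pairs for relative .md links."""
    existing = {p.resolve() for p in Path(root).rglob("*.md")}
    broken = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8")
        for target in re.findall(r"\]\(([^)#\s]+\.md)", text):
            if (path.parent / target).resolve() not in existing:
                broken.append((str(path), target))
    return broken
```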

Step 4.2 — Update the audit log. Record the audit results: total documents reviewed, documents updated by tier, documents archived, documents deferred, and any new documents created to fill identified gaps.
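
The log format is whatever the audit log already uses; purely as an illustration, a single entry covering these fields might look like the following (all counts invented):

```python
audit_entry = {
    "audit_date": "2025-04-01",   # illustrative
    "scope": "domain: SEO",       # full | domain | triggered
    "documents_reviewed": 48,
    "updated_by_tier": {"tier_1": 11, "tier_2": 4, "tier_3": 1},
    "archived": 2,
    "deferred": 6,
    "new_docs_created_for_gaps": 1,
}
```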

Step 4.3 — Identify systemic patterns. Look for patterns in the audit results that indicate structural issues:
  • If an entire domain is consistently stale, the domain may need a dedicated steward or a more frequent audit cadence.
  • If the same types of facts are repeatedly obsolete (e.g., tool pricing, model benchmarks), those facts may be better served by a dynamic data source rather than static documentation.
  • If many documents have broken internal links, the knowledge graph structure may need reorganization.

Step 4.4 — Report to Fleet Commander. Summarize the audit in a brief report: documents audited, distribution by staleness classification, actions taken, systemic patterns identified, and recommendations for the next cycle.

Key Concepts: Freshness Audit · Content Decay · Staleness Classification · Update Prioritization · Audit Cadence

About the Author: Adam Bernard

Adam Bernard is a digital marketing strategist and SEO specialist building AI-powered business intelligence systems. He's the creator of the Strategic Intelligence Engine (SIE), a multi-agent framework that transforms business knowledge into autonomous, AI-driven competitive advantages.


Ready to Build Your Own Intelligence Engine?

If you’re ready to move from theory to implementation and build a Knowledge Core for your own business, I can help you design the engine to power it. Let’s discuss how these principles can be applied to your unique challenges and goals.