
📝 Context Summary

This document defines the content review workflows that govern the Knowledge Core lifecycle. It establishes three review tracks — human-authored content, agent-generated content, and periodic re-review — each with defined roles, approval criteria, and escalation paths. The workflows are designed to enforce the Bill Bernard Standard's integrity requirements while keeping the Fleet Commander model operationally efficient.

Content Review Workflows

The Knowledge Core is only as trustworthy as the process that governs what enters it. The SIE’s content review workflows define the exact path content must travel from initial creation to activation in the vector database and WordPress — and the criteria that determine whether it advances or is sent back for revision.

These workflows exist to operationalize the Bill Bernard Standard. The Standard defines the principles — Iron Word reliability, Steady Presence resilience — and the review workflows translate those principles into repeatable, auditable processes.

Three Review Tracks

Not all content enters the Knowledge Core the same way. The SIE defines three review tracks, each with its own workflow, because the risks and failure modes differ for each content origin.

Track 1: Human-Authored Content

This track governs content written directly by human contributors in Obsidian. It is the most common path for foundational articles, strategy documents, and reference material.

Workflow:

  1. Draft. The author creates the markdown file with complete frontmatter and body content conforming to Dual-Readability standards. The file’s status field is set to Draft.
  2. Self-review. The author reviews the document against the Data Quality Standards checklist: frontmatter completeness, Dual-Readability compliance, epistemic marking, structural formatting, and link integrity.
  3. Commit and push. The author commits the file to Git with a descriptive commit message. This creates the audit trail entry.
  4. Validation. The Knowledge Pipeline runs automated Gate 2 checks: schema validation, taxonomy mapping, duplicate detection, and link integrity. Failures are logged and the author is notified.
  5. Activation. If validation passes, the author updates the status field to Active. On the next sync cycle, the content is embedded in the vector database and published to WordPress.
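The Gate 2 checks in step 4 can be sketched in code. This is an illustrative sketch only — the field names (`status`, `steward`, `taxonomy`) and valid status values are assumptions, not the SIE's actual schema:

```python
# Hypothetical sketch of the Gate 2 frontmatter checks described above.
# REQUIRED_FIELDS and VALID_STATUSES are assumed values for illustration.

REQUIRED_FIELDS = {"title", "status", "steward", "taxonomy"}
VALID_STATUSES = {"Draft", "Active", "Under Review"}

def validate_frontmatter(frontmatter: dict) -> list[str]:
    """Return a list of validation failures (an empty list means pass)."""
    errors = []
    # Schema validation: every required field must be present.
    missing = REQUIRED_FIELDS - frontmatter.keys()
    if missing:
        errors.append(f"missing fields: {sorted(missing)}")
    # The status field must hold a recognized lifecycle value.
    status = frontmatter.get("status")
    if status is not None and status not in VALID_STATUSES:
        errors.append(f"unknown status: {status!r}")
    return errors
```

Returning a list of failures rather than raising on the first one matches the workflow above: all failures are logged and the author is notified in one pass.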

For high-sensitivity content — governance documents, protocol definitions, content that directly instructs agent behavior — the Fleet Commander reviews and approves the activation. The author cannot self-approve these documents.

Track 2: Agent-Generated Content

This track governs content produced by SIE agents (e.g., the Editor Agent generating knowledge base drafts, the Analyst Agent producing research summaries). Agent-generated content carries higher risk because it may contain hallucinations or misattributions that are not immediately obvious.

Workflow:

  1. Agent drafts. The agent creates the content and attaches the Iron Word Verification Ledger: confidence score, reasoning, and sources used. The content is saved with status: Draft and human_review_required: true.
  2. Fleet Commander triage. The Fleet Commander reviews the Verification Ledger first — not the content. If the confidence score is above threshold, sources are valid, and reasoning is sound, the Commander proceeds to content review. If the ledger itself is suspect, the content is rejected without further review.
  3. Content review. The Fleet Commander reviews the body content for factual accuracy, brand alignment, and Dual-Readability compliance. The Commander checks specific claims against the cited sources.
  4. Disposition. The Commander either approves (sets status: Active), requests revision (returns to agent with specific feedback), or rejects (logs the rejection reason for the Steady Presence Incident Loop).
  5. Activation. Approved content enters the Knowledge Pipeline on the next sync cycle.
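The triage order in step 2 — ledger first, content second — can be expressed as a small decision function. The confidence threshold and ledger field names here are assumptions for illustration, not the SIE's actual values:

```python
from dataclasses import dataclass

# Illustrative sketch of Track 2 triage: the Verification Ledger is
# checked before any content review. The 0.8 threshold is an assumption.

CONFIDENCE_THRESHOLD = 0.8

@dataclass
class VerificationLedger:
    confidence: float
    sources: list[str]
    reasoning: str

def triage(ledger: VerificationLedger) -> str:
    """Return 'content_review' if the ledger passes, else 'reject'."""
    if ledger.confidence < CONFIDENCE_THRESHOLD:
        return "reject"          # low confidence: rejected without content review
    if not ledger.sources:
        return "reject"          # no sources attached
    if not ledger.reasoning.strip():
        return "reject"          # empty reasoning makes the ledger itself suspect
    return "content_review"      # ledger is sound; proceed to content review
```

The key design point is that a suspect ledger short-circuits the process: the Commander never spends review time on content whose provenance already failed.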

Axiomatic rule: No agent-generated content may be activated without human review. This is the fundamental boundary between the Fleet Commander model and fully autonomous operation. Agents draft; humans approve.

Track 3: Periodic Re-Review

This track governs content that is already active in the Knowledge Core but has been flagged for re-review by the freshness monitoring system, retrieval quality monitoring, or the Steady Presence Incident Loop.

Workflow:

  1. Flag. The content is flagged and its status changes from Active to Under Review. The flag includes the trigger reason: freshness expiration, low retrieval quality pattern, or post-mortem root cause.
  2. Triage. The Fleet Commander or designated steward (identified by the steward frontmatter field) reviews the flag reason and determines the scope of review needed:
     • Tier 1 (Minor update): Factual corrections, updated statistics, refreshed links. Can be handled by the content steward directly.
     • Tier 2 (Structural revision): Sections need rewriting, new subsections are needed, or the document’s scope has shifted. Requires steward review and Fleet Commander approval.
     • Tier 3 (Fundamental rewrite or retirement): The document’s core thesis is outdated, or the topic has changed so significantly that a new article is warranted. Requires a Fleet Commander decision on whether to rewrite or archive.
  3. Revision. The appropriate revision is made, committed to Git, and the document re-enters Gate 2 validation.
  4. Reactivation. Once validation passes and any required approvals are obtained, the status returns to Active.
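The tier triage above maps each tier to who acts and who approves. A minimal sketch, assuming role labels drawn directly from the workflow text:

```python
# Sketch of the three-tier re-review routing. The dict shape and role
# strings are illustrative, not an SIE API.

def review_scope(tier: int) -> dict:
    """Map a triage tier to who handles the revision and who must approve."""
    routing = {
        1: {"handled_by": "content steward", "approval": None},
        2: {"handled_by": "content steward", "approval": "Fleet Commander"},
        3: {"handled_by": "Fleet Commander", "approval": "Fleet Commander"},
    }
    if tier not in routing:
        raise ValueError(f"unknown tier: {tier}")
    return routing[tier]
```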

Escalation Paths

When the standard workflow cannot resolve an issue, defined escalation paths prevent content from languishing in limbo.

  • Validation failure that cannot be resolved: If a document repeatedly fails Gate 2 validation and the author cannot determine why, the issue is escalated to the Fleet Commander for architectural review. The document may need restructuring or the validation rules may need updating.
  • Disagreement on content accuracy: If the reviewer and author disagree on factual claims, the dispute is resolved by consulting the primary sources cited in the document. If primary sources conflict, the content is updated to acknowledge the conflict with appropriate epistemic markers (Heuristic or Speculative rather than Axiomatic).
  • Stale re-review queue: If flagged documents remain in Under Review status for more than 14 days without action, the Fleet Commander is alerted. Stale reviews indicate either insufficient capacity or unclear ownership — both are operational risks that need resolution.
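The stale-queue rule lends itself to a simple automated check. This sketch assumes a hypothetical document record with `path`, `status`, and `flagged_on` fields; only the 14-day threshold comes from the text above:

```python
from datetime import date, timedelta

# Sketch of the 14-day stale-review alert. The record shape is assumed.

STALE_AFTER = timedelta(days=14)

def stale_reviews(docs: list[dict], today: date) -> list[str]:
    """Return paths of documents stuck in Under Review past the deadline."""
    return [
        d["path"]
        for d in docs
        if d["status"] == "Under Review"
        and today - d["flagged_on"] > STALE_AFTER
    ]
```

A scheduled job could run this daily and alert the Fleet Commander whenever the returned list is non-empty.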

Review Metrics

The health of the review process is tracked through three metrics:

  • Time to activation: The elapsed time from initial commit to Active status. Long activation times indicate bottlenecks in the review process.
  • Rejection rate: The percentage of agent-generated content rejected at Track 2. A rising rejection rate signals degradation in agent performance or Knowledge Core context quality.
  • Re-review backlog: The count of documents currently in Under Review status. A growing backlog indicates that freshness decay is outpacing review capacity.
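The three metrics above can be computed directly from lifecycle records. A minimal sketch, assuming hypothetical record fields (`committed_at`, `activated_at`, `status`) and outcome labels:

```python
from datetime import datetime

# Illustrative computation of the three review-health metrics.
# Record shapes and the outcome strings are assumptions.

def time_to_activation(doc: dict) -> float:
    """Elapsed hours from initial commit to Active status."""
    delta = doc["activated_at"] - doc["committed_at"]
    return delta.total_seconds() / 3600

def rejection_rate(track2_outcomes: list[str]) -> float:
    """Fraction of Track 2 (agent-generated) submissions rejected."""
    if not track2_outcomes:
        return 0.0
    return track2_outcomes.count("rejected") / len(track2_outcomes)

def re_review_backlog(docs: list[dict]) -> int:
    """Count of documents currently in Under Review status."""
    return sum(1 for d in docs if d["status"] == "Under Review")
```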

Key Concepts: Content Review Tracks, Agent Content Review, Fleet Commander Approval, Re-Review Cycle, Escalation Paths

About the Author: Adam Bernard

Adam Bernard is a digital marketing strategist and SEO specialist building AI-powered business intelligence systems. He's the creator of the Strategic Intelligence Engine (SIE), a multi-agent framework that transforms business knowledge into autonomous, AI-driven competitive advantages.

Ready to Build Your Own Intelligence Engine?

If you’re ready to move from theory to implementation and build a Knowledge Core for your own business, I can help you design the engine to power it. Let’s discuss how these principles can be applied to your unique challenges and goals.