AI Advisory Board — Autonomous Governance for a 20+ Project Portfolio
Sennable is an AI technology consultancy that designs, builds, and operates custom AI systems for businesses.
The Challenge
Managing a portfolio of 20+ AI projects across multiple brands meant making strategic decisions that were inconsistent, undocumented, and impossible to scale. Every new project needed a full context review: reading dozens of notes, recalling past decisions, and manually checking for conflicts. There was no institutional memory; insights from one project never informed another, and strategic drift went undetected until it caused real problems. The business needed a governance system that could review projects with the rigor of a senior leadership team, maintain memory across sessions, and enforce quality standards automatically.
Our Approach
We designed a multi-agent governance framework in which five specialized AI roles — a Chief Growth Officer, Revenue Architect, Content Architect, Production Supervisor, and Agent Architect — each evaluate every project from the perspective of their domain expertise. We built a 4-tier cognitive memory architecture: Working Context for live session state, Episodic Memory in Obsidian for historical experience, Semantic Memory in NotebookLM for queryable institutional knowledge, and Archival Memory in Google Drive for raw documents. The system enforces a "Pentad Rule": no asset ships until all five perspectives are recorded. We automated the review pipeline with Python modules that watch for new tasks, run council reviews, log decisions to Obsidian, and push updates to NotebookLM for long-term retrieval.
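The Pentad Rule can be sketched as a simple shipping gate. This is a minimal illustration, not the production code: the `Asset` class, the `record_review`/`may_ship` names, and the verdict strings are hypothetical; only the five role names come from the framework described above.

```python
from dataclasses import dataclass, field

# The five council roles named in the governance framework.
COUNCIL_ROLES = {
    "Chief Growth Officer",
    "Revenue Architect",
    "Content Architect",
    "Production Supervisor",
    "Agent Architect",
}

@dataclass
class Asset:
    name: str
    reviews: dict = field(default_factory=dict)  # role -> recorded verdict

    def record_review(self, role: str, verdict: str) -> None:
        if role not in COUNCIL_ROLES:
            raise ValueError(f"Unknown council role: {role}")
        self.reviews[role] = verdict

    def may_ship(self) -> bool:
        # Pentad Rule: ship only when every role's review is recorded.
        return COUNCIL_ROLES.issubset(self.reviews)

asset = Asset("landing-page-v2")
for role in ("Chief Growth Officer", "Revenue Architect", "Content Architect"):
    asset.record_review(role, "approved")
print(asset.may_ship())  # False: two perspectives still missing
asset.record_review("Production Supervisor", "approved")
asset.record_review("Agent Architect", "approved")
print(asset.may_ship())  # True: all five perspectives recorded
```

The gate deliberately checks for presence, not approval: a recorded objection still counts as a perspective, which matches the rule as stated ("no asset ships without all five perspectives recorded").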
The Results
13 Python modules built, all compiling cleanly
13/15 infrastructure tests passing
68 tasks tracked through the pipeline
6 NotebookLM notebooks with 63+ sources
89.5% portfolio completion rate within 3 days
The 4-tier memory retrieval was validated at sub-second latency, with Tier 1 memory correctly overriding stale Tier 4 data. Direction drift detection was tested and confirmed. The governance framework guided the portfolio from initial architecture to 89.5% task completion (51/57 tasks) within 3 days of deployment, organized by council-prioritized execution waves.
Timeline
From initial architecture design to 13/15 tests passing: 14 days (2026-03-09 to 2026-03-23). The governance framework was deployed incrementally — core council review logic in Week 1, automation modules in Week 2, memory tier integration and infrastructure validation in Week 3.
Key Insight
The biggest cost in AI operations is not compute — it is decision latency. A governance system that can review a project in minutes instead of days, while maintaining institutional memory across hundreds of decisions, is the difference between a portfolio that compounds and one that drifts.