CocoPlus has nineteen features. They are not nineteen independent capabilities you can pick from a menu. They are nineteen expressions of the same underlying design — a system where each piece reinforces the others.

1 — CocoBrew Lifecycle Engine
/spec /plan /build /test /review /ship /rewind /fork

CocoBrew is the spine of CocoPlus. Every other feature exists in relation to it. Six phases — each with a specific purpose, each producing a specific artifact, each gated on the completion of the one before it.

Spec captures requirements before any code is written. Plan decomposes the spec and ends with a human approval gate — nothing in Build begins without it. Build executes the approved plan in isolated, checkpoint-validated stages. Test validates against the spec's success criteria, not the developer's mental model. Review runs the Code Quality Advisor and CocoCupper findings, categorized as must-fix, should-fix, and consider. Ship is gated on Review — open must-fix items block it unconditionally.

/rewind [step-id] walks back to a specific stage commit after confirmation. /fork [branch-name] creates an isolated exploration branch without touching the main thread of work.
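
The gating rule described above can be sketched as a small state check — phase names come from the text, but the function shape and the artifact-tracking details are illustrative, not CocoPlus internals:

```python
# Hypothetical sketch of CocoBrew's phase gating: each phase is reachable
# only when every earlier phase has completed, and Ship is additionally
# blocked by any open must-fix findings from Review.

PHASES = ["spec", "plan", "build", "test", "review", "ship"]

def can_enter(phase: str, completed: set[str], must_fix_open: int = 0) -> bool:
    """A phase opens only after all prior phases have produced their artifacts."""
    idx = PHASES.index(phase)
    if any(p not in completed for p in PHASES[:idx]):
        return False
    # Ship is gated unconditionally on Review's must-fix findings.
    if phase == "ship" and must_fix_open > 0:
        return False
    return True

assert can_enter("build", {"spec", "plan"})                 # plan approved
assert not can_enter("build", {"spec"})                     # no approval gate passed
assert not can_enter("ship", set(PHASES[:5]), must_fix_open=2)  # must-fix blocks Ship
```

The point of the sketch is the asymmetry: every gate is a hard precondition, so skipping ahead is structurally impossible rather than merely discouraged.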

When to Use

For any non-trivial Snowflake development task that spans more than a single session, involves multiple schemas or personas, or needs to reach production. If you're tempted to start coding before writing a spec, CocoBrew is exactly where you should start — the spec phase alone regularly surfaces assumptions that would cost hours to untangle mid-build. Use the full lifecycle for any task where "it worked in dev but broke in prod" would be a costly outcome.

2 — CocoSpark Optional Brainstorm Mode
/spark [topic] /spark-off

A divergent thinking mode. When active, it generates at least three distinct approaches, challenges your assumptions, and raises questions you haven't thought to ask. Explicitly optional — brainstorming is a developer choice, not a lifecycle gate.

CocoSpark output is explicitly marked as exploration. It never flows automatically into spec.md. The developer decides what, if anything, carries forward. /spark-off exits brainstorm mode with an optional capture offer.

When to Use

When the problem space is genuinely ambiguous and you're not sure which architectural direction to take. If the path is already clear, skip it — CocoSpark adds value at the edges of certainty, not at the center. It's most useful immediately before writing a spec: run CocoSpark first, discover the angles you hadn't considered, then write a spec that reflects the real complexity. Not every task needs it. The ones that do usually feel uncomfortably open-ended before you start.

3 — CocoHarvest Orchestration Engine
Automatic at Plan phase · $<persona> for direct invocation

The intelligence behind parallelism. CocoHarvest reads an approved plan, decomposes it into workstreams, assigns each to the appropriate specialist persona, and encodes dependency relationships in flow.json. Parallel stages run in isolated git worktrees — one agent's incomplete work cannot contaminate another's context.

A stage marked as dependent on another does not start until the prerequisite's checkpoint files are validated. For simple plans — single workstream, single persona — CocoHarvest delegates directly without creating a formal pipeline.
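
A plausible shape for the dependency encoding — field names here are assumptions, not CocoPlus's actual flow.json schema — along with the readiness rule the text describes:

```python
# Hypothetical flow.json structure: stages declare prerequisites, and a stage
# may start only once every prerequisite's checkpoint has been validated.
flow = {
    "stages": [
        {"id": "s1", "persona": "de", "depends_on": []},
        {"id": "s2", "persona": "ae", "depends_on": ["s1"]},
        {"id": "s3", "persona": "bi", "depends_on": ["s1"]},  # parallel to s2
    ]
}

def ready(stage_id: str, validated_checkpoints: set[str]) -> bool:
    """A stage is ready when all of its prerequisites' checkpoints validate."""
    stage = next(s for s in flow["stages"] if s["id"] == stage_id)
    return all(dep in validated_checkpoints for dep in stage["depends_on"])

assert ready("s2", {"s1"})     # s1's checkpoint validated: s2 may start
assert ready("s3", {"s1"})     # s2 and s3 can then run in parallel worktrees
assert not ready("s2", set())  # s1 incomplete: s2 must wait
```

Because s2 and s3 share only the dependency on s1, they can execute in separate worktrees simultaneously — the isolation the text describes falls directly out of the dependency graph.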

When to Use

CocoHarvest is automatically invoked at the Plan phase whenever the task has multiple distinct workstream types — you don't choose it, it chooses itself. Use direct persona invocation ($de, $ae, etc.) when you want to consult a specific specialist without running the full lifecycle — a quick performance review, a targeted schema question, or a governance check mid-session. The --continue flag is particularly useful: $de --continue "Fix the issue you identified" hands the specialist the current context without restarting from scratch.

4 — Personas 8 Specialist Agents
$de $ae $ds $da $bi $dpm $dst $cdo · /personas

Eight specialists with locked domains, locked tool sets, and locked invocation modes. Specialists beat generalists in structured development work — not sometimes, always. The Data Engineer writes SQL with performance context. The Data Steward reviews governance with authority. The CDO reasons strategically with Opus-level depth.

Invocation modes are architectural decisions: $dpm, $dst, and $cdo always run in plan mode — they advise, they never execute autonomously. Add --model to override compute for a specific call. Add --continue to pass current session context for continuation work.

When to Use

Whenever the work has a clear domain owner. Don't default to the general Coco session for tasks that belong to a specialist — the specialist has a locked tool set and domain instructions that produce qualitatively better output. Use $cdo when reasoning about the full data estate, not just a single pipeline. Use $dst before any change that touches access policies, sensitive data, or column-level security. Use $de when SQL performance or schema correctness is the primary concern — this is not a task for a generalist.

5 — CocoPod Project Bundle
/pod init /pod status /pod resume

CocoPod is initialization. Before any CocoPlus feature runs, the project must have a CocoPod. /pod init creates the complete .cocoplus/ directory structure, sets default modes, and creates the initial git commit.

/pod status is the dashboard: current phase, active modes, pipeline status, last CocoMeter summary, and the three most recent CocoCupper findings. /pod resume reads the state and produces a focused brief — where you were and what the immediate next action is. Because it reads from files, not AI memory, the brief is deterministic.
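
The deterministic-resume idea can be sketched in a few lines — the file path, key names, and brief format below are illustrative, not the real .cocoplus/ layout:

```python
# Sketch of deterministic resume: the brief is assembled purely from files on
# disk, so the same recorded state always yields the same brief.
import json
import pathlib
import tempfile

def resume_brief(root: pathlib.Path) -> str:
    """Read recorded state from disk and render a focused brief."""
    state = json.loads((root / "state.json").read_text())
    return f"Phase: {state['phase']} | Next: {state['next_action']}"

# Demo against a throwaway state file.
root = pathlib.Path(tempfile.mkdtemp())
(root / "state.json").write_text(json.dumps(
    {"phase": "build", "next_action": "run stage s2"}))
print(resume_brief(root))  # Phase: build | Next: run stage s2
```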

When to Use

Every time you start a new Snowflake data project with CocoPlus — run /pod init before anything else. Run /pod status at the start of any session to orient yourself before diving into work. Run /pod resume when returning to a project after more than a day away — it compresses the "where was I?" problem into a focused brief that is faster and more reliable than trying to reconstruct context from memory.

6 — Project Execution Engine Pipeline Executor
/flow run /flow status /flow pause /flow resume [stage-id]

Reads and executes CocoFlow JSON pipelines. Where CocoHarvest generates the plan, the Execution Engine runs it. Stage status is updated in real time. /flow pause halts after the current stage completes — running stages are never killed mid-execution. /flow resume [stage-id] validates that all prior checkpoints are intact before restarting.

Runtime --model override applies across all stages without modifying flow.json on disk.
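
The no-mutation semantics of that override can be sketched directly — the field names are assumptions, but the invariant is the one the text states: the stored plan is never modified.

```python
# Sketch of a runtime override that never touches flow.json on disk: the
# stored plan is deep-copied, the override is applied to the copy, and the
# copy is what executes.
import copy

def with_model_override(flow: dict, model: str) -> dict:
    runtime = copy.deepcopy(flow)      # flow.json on disk stays untouched
    for stage in runtime["stages"]:
        stage["model"] = model
    return runtime

stored = {"stages": [{"id": "s1", "model": "sonnet"}]}
running = with_model_override(stored, "haiku")
assert stored["stages"][0]["model"] == "sonnet"   # unchanged on disk
assert running["stages"][0]["model"] == "haiku"   # applied at runtime only
```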

When to Use

When a plan has been generated and needs to be executed as a formal pipeline. For simple, single-persona tasks, the Build phase handles execution directly — you won't need /flow run explicitly. Use it when you want precise control: running only from a specific stage after a failure, pausing mid-pipeline to inspect outputs before proceeding, or applying a runtime model override without changing the stored plan. /flow status is useful any time you want a clear picture of where a running pipeline stands.

7 — Memory Engine Cross-Session Persistence
/memory on /memory off

Three layers. Hot (AGENTS.md, 200 lines, always loaded), warm (decisions.md, patterns.md, errors.md), cold (CocoGrove). Each layer serves a different time horizon. The 200-line limit on AGENTS.md is enforced — when new entries would exceed it, older entries compress to the warm layer automatically.

When memory is on, the PostToolUse hook captures significant events: schema changes, decisions stated explicitly, errors and their resolutions. Captures are brief — the fact and the reason, not the full content.
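
The hot-layer cap can be sketched as a simple overflow rule — file names come from the text, but the compression step is reduced here to a plain move from hot to warm:

```python
# Sketch of the enforced 200-line limit on AGENTS.md: new entries push the
# oldest ones out of the hot layer and into the warm layer.

HOT_LIMIT = 200

def capture(hot: list[str], warm: list[str], entry: str) -> None:
    """Record a significant event, compressing overflow to the warm layer."""
    hot.append(entry)
    while len(hot) > HOT_LIMIT:      # enforce the 200-line limit
        warm.append(hot.pop(0))      # oldest entries move to warm

hot = [f"entry {i}" for i in range(200)]
warm: list[str] = []
capture(hot, warm, "schema change: ORDERS gained column REGION")
assert len(hot) == 200              # the cap holds
assert warm == ["entry 0"]          # oldest entry compressed out
```

The real compression presumably summarizes rather than moves entries verbatim, but the invariant is the same: the hot layer never grows past its budget.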

When to Use

Leave it on by default — this is the right posture for any real project work. The overhead is minimal; the compound benefit across sessions is significant. Turn it off only for explicitly throwaway sessions: prototype experiments, hypothetical walkthroughs, or one-off queries against an environment you don't intend to revisit. If you're unsure, leave it on. A session with memory that you didn't need leaves behind a few captured decisions. A session without memory that you did need leaves behind nothing.

8 — Environment Inspector Pre-Action Context
/inspect /inspect --schema <name> /inspect --full /inspector on

Before you build, look at what's there. The inspector scans the connected Snowflake environment — schemas, tables, views, stored procedures, Cortex endpoints, semantic models, access grants — and produces a structured snapshot. Compares against the last snapshot to surface what's new or changed.

When auto-mode is active, the inspector runs as a background subagent at every session start without blocking the session. Results are cached in .cocoplus/snapshots/.
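
The snapshot comparison reduces to a set diff over the object inventory — the keys and structure below are illustrative, not the actual .cocoplus/snapshots/ format:

```python
# Sketch of snapshot comparison: diff two inventories of Snowflake objects
# to surface what is new, dropped, or changed since the last scan.

def diff_snapshots(prev: dict, curr: dict) -> dict:
    return {
        "new":     sorted(curr.keys() - prev.keys()),
        "dropped": sorted(prev.keys() - curr.keys()),
        "changed": sorted(k for k in prev.keys() & curr.keys()
                          if prev[k] != curr[k]),
    }

prev = {"SALES.ORDERS": ["ID", "AMOUNT"], "SALES.CUSTOMERS": ["ID", "NAME"]}
curr = {"SALES.ORDERS": ["ID", "AMOUNT", "REGION"], "SALES.REFUNDS": ["ID"]}
print(diff_snapshots(prev, curr))
# {'new': ['SALES.REFUNDS'], 'dropped': ['SALES.CUSTOMERS'], 'changed': ['SALES.ORDERS']}
```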

When to Use

Before any Build phase that writes to or reads from Snowflake schemas. Especially valuable when returning to a project after a break — schemas may have changed in the interim. Essential when joining a shared environment for the first time, when you can't assume objects you expect actually exist. Run /inspect --full when you need column-level statistics or access policy details before writing sensitive SQL. If auto-mode is on, you get this automatically at session start without thinking about it.

9 — Safety Gate Execution Protection
/safety strict /safety normal /safety off

Two layers. The hard gate is a PreToolUse hook interceptor — it fires before every SnowflakeSqlExecute call. In strict mode, SQL containing DROP TABLE, DROP SCHEMA, TRUNCATE, DELETE without WHERE, or ALTER on production objects is blocked entirely. The tool call does not execute. This cannot be prompted around.

The soft gate fires before any batch destructive operation and requires explicit developer confirmation. It cannot be silent — the developer always sees a summary before anything runs. Configuration in safety-config.json defines which schema name patterns are protected. Default: schemas containing "PROD", "PRODUCTION", or "LIVE".
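
The intent of the hard gate can be sketched as a pre-execution check — the real mechanism is a PreToolUse hook interceptor, and these regexes are a simplified stand-in for the shipped rule set, not a copy of it:

```python
# Sketch of strict-mode blocking: destructive SQL patterns are refused before
# the tool call ever executes. Patterns are illustrative approximations.
import re

BLOCKED = [
    r"\bDROP\s+TABLE\b",
    r"\bDROP\s+SCHEMA\b",
    r"\bTRUNCATE\b",
    r"\bDELETE\b(?!.*\bWHERE\b)",            # DELETE with no WHERE clause
    r"\bALTER\b.*\b(PROD|PRODUCTION|LIVE)",  # ALTER on protected schema names
]

def allowed(sql: str, strict: bool = True) -> bool:
    """Return False when strict mode would block this statement outright."""
    if not strict:
        return True
    return not any(re.search(p, sql, re.IGNORECASE | re.DOTALL)
                   for p in BLOCKED)

assert not allowed("DELETE FROM sales.orders")            # unscoped delete: blocked
assert allowed("DELETE FROM sales.orders WHERE id = 42")  # scoped delete: passes
assert not allowed("drop table tmp_stage")                # case-insensitive
assert not allowed("ALTER TABLE PROD_DB.SALES.ORDERS RENAME TO X")
```

Because the check runs before the SnowflakeSqlExecute call rather than as advice inside the prompt, there is nothing for a model to reason its way around — which is exactly the property the text claims.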

When to Use

Always on — this is the default posture and there is almost never a reason to deviate from it. Use /safety strict when working in or near production schemas where a single dropped object would affect live data or real users. Use /safety normal for active development environments where you need occasional destructive operations but still want warnings. Reserve /safety off only for isolated test environments where you're deliberately exercising destructive patterns and fully understand the consequences. When in doubt, default to strict.

10 — Code Quality Advisor Anti-Pattern Review
/quality on /quality off /quality run

Reviews generated SQL and code against a library of Snowflake-specific anti-patterns. Not a generic linter — it understands Snowflake execution context: query patterns that cause full table scans, Cortex function misuse, semantic model design mistakes that produce incorrect results. Findings are categorized as performance, correctness, governance, and cost.

Runs automatically during the Review phase. When quality mode is active, it runs as a non-blocking background monitor on every CocoFlow stage.

When to Use

During the Review phase it runs automatically — you don't need to think about it. Turn /quality on proactively when you're in an active Build phase generating complex SQL or Cortex AI functions, so findings surface continuously rather than only at review time. Run /quality run on-demand after generating stored procedures or batch SQL that will touch large tables. Catching must-fix items before the Ship gate is far cheaper than catching them after a production incident.

11 — Prompt Studio Prompt Engineering
/prompt new /prompt compare [a] [b]

A structured workflow for designing Cortex AI prompts. /prompt new guides you through goal definition, model selection, few-shot examples, initial draft, and anti-pattern review. Output is a versioned prompt file.

/prompt compare runs two prompt versions against the same test inputs and surfaces differences in accuracy, format compliance, verbosity, and token consumption. Prompt iteration becomes empirical rather than impressionistic. Prompts designed here feed directly into CocoFlow stage definitions.
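
The empirical-comparison idea can be sketched as a scoring harness — the scorer, metric, and prompt strings below are illustrative (the real command also tracks format compliance, verbosity, and token consumption):

```python
# Sketch of /prompt compare: run two prompt versions over the same labeled
# test inputs and score them side by side.

def compare(run_prompt, prompt_a: str, prompt_b: str,
            cases: list[tuple[str, str]]) -> dict:
    """Return accuracy per prompt version over identical test cases."""
    scores = {}
    for name, prompt in (("a", prompt_a), ("b", prompt_b)):
        hits = sum(run_prompt(prompt, text) == expected
                   for text, expected in cases)
        scores[name] = hits / len(cases)
    return scores

# Stand-in "model": a keyword matcher, so the example runs offline.
def fake_model(prompt: str, text: str) -> str:
    return "refund" if "strict" in prompt and "money back" in text else "other"

cases = [("I want my money back", "refund"), ("love it", "other")]
print(compare(fake_model, "strict classifier", "loose classifier", cases))
# {'a': 1.0, 'b': 0.5}
```

Holding the test inputs fixed is what makes the comparison evidence rather than impression — the only variable between the two runs is the prompt itself.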

When to Use

Whenever you're building or refining Snowflake Cortex AI functions where output quality depends on how the prompt is structured — AI_COMPLETE classifiers, AI_EXTRACT schemas, AI_CLASSIFY category definitions. Use /prompt compare when you have two candidate approaches and need evidence of which performs better on your actual data before committing to production. If you're iterating on a prompt more than twice, Prompt Studio will save you time — ad-hoc iteration has no memory and no record.

12 — CocoGrove Pattern Library
/patterns view [tag] /patterns promote [finding-id]

The institutional memory of your project, expressed as a library of reusable patterns. Patterns enter CocoGrove through deliberate promotion — a developer reviews CocoCupper findings and promotes the ones worth keeping. This human filter is intentional. Not everything CocoCupper identifies is worth preserving forever.

Once promoted, a pattern is structured: name, description, when to apply, when not to apply, example. By session twenty, CocoGrove contains your project's hard-won knowledge. The Data Engineer doesn't reinvent your schema naming convention — it reads CocoGrove first.

When to Use

After completing significant work — review what CocoCupper surfaced and promote anything durable before closing the session. Before starting new work in a domain you've built in before — run /patterns view to check if relevant patterns already exist. Think of CocoGrove as your project's living style guide: consult it before generating code, update it after learning something worth keeping. A CocoGrove that grows with the project is one of the highest-leverage investments CocoPlus enables.

13 — Doc Engine Auto Documentation
/doc run

Generates documentation from code artifacts: SQL files, Snowpark notebooks, stored procedures, schema definitions. Produces column-level descriptions for tables and views, function docstrings, schema lineage notes, and data dictionary entries.

Documentation is proposed, not auto-applied. Missing memory entries produce missing documentation sections — this is the correct behavior. Gaps in documentation reflect gaps in recorded decisions, and that honesty is useful.

When to Use

Before shipping — run /doc run as part of Ship phase preparation to ensure documentation reflects the current state of the build. When onboarding a new team member to an existing data product, to give them a navigable entry point into the project's decisions and structure. After a major schema change where existing documentation is stale. The Doc Engine is most valuable when the Memory Engine has been running throughout the project — the richer the recorded decisions, the richer the generated documentation.

14 — Context Mode Transparency Layer
/context on /context off

Activates narration. When on, the system surfaces its reasoning before acting: what it is about to do, why it is choosing this approach, what alternatives it considered. It does not change what the system does — only what you see about what the system is doing.

When to Use

When you're new to CocoPlus and want to understand what the system is doing and why before trusting it. When debugging a pipeline that is behaving unexpectedly — narration gives you visibility into reasoning before actions execute. When walking a colleague through a session to explain the workflow. Turn it off once you're fluent with the system; the narration becomes overhead rather than signal when the behavior is familiar.

15 — CocoMeter Token Tracker
/meter /meter estimate /meter history [n] /meter on /meter off

Makes token usage visible and predictable. Tracks consumption per session, per stage, and per persona. /meter estimate provides pre-flight cost estimation before an operation runs. Token visibility is not just about cost — it is about system health. A pipeline whose token usage grows session-over-session is accumulating unnecessary context.
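
The health signal described above — session-over-session growth indicating accumulating context — could be detected with a rule like this; the threshold and history shape are illustrative, not CocoMeter's actual heuristic:

```python
# Sketch of a context-bloat detector: flag a project when token usage has
# grown meaningfully in each of the last three sessions.

def growing(token_history: list[int], factor: float = 1.15) -> bool:
    """True when each of the last three sessions grew >15% over the prior one."""
    recent = token_history[-4:]
    return (len(recent) == 4 and
            all(b > a * factor for a, b in zip(recent, recent[1:])))

assert growing([10_000, 12_000, 15_000, 19_000])      # steady growth: flag it
assert not growing([10_000, 12_000, 11_500, 12_100])  # noisy but flat: fine
```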

When to Use

Before running any large or complex pipeline — use /meter estimate to get a cost envelope before committing. After any session that felt unexpectedly expensive — use /meter history to identify which features and stages drove the cost. In enterprise environments with token budgets, keep /meter on continuously. The estimate feature is most valuable immediately before the Build phase on plans with many parallel stages or Opus-level persona assignments.

16 — CocoCupper Post-Execution Intelligence
/cup /cup history [n]

The session debrief, automated. After every session — triggered by Stop and SubagentStop hooks — CocoCupper reads what just happened and identifies patterns: what worked, what failed consistently, what was reinvented unnecessarily. Findings go to .cocoplus/grove/cupper-findings.md.

CocoCupper runs on Haiku. It is read-only and cannot modify any artifact outside its designated output path. The name comes from coffee cupping — a professional evaluator scores and documents a brew's qualities after it is made, never during.

When to Use

It runs automatically — you don't need to invoke it for the background behavior. Run /cup manually after any session where you resolved a difficult problem, discovered a performance issue, or established a pattern you expect to reuse. Run /cup history periodically — every five to ten sessions — to look for cross-session patterns that wouldn't be visible session by session. Particularly valuable before a long break from a project: capture what was learned before the context fades.

17 — Assist Mode Master Toggle
/cocoplus on /cocoplus off

A single command that activates all mode-based features simultaneously: memory, inspector, safety (normal), quality, context, and meter. The recommended starting state for a new CocoPlus project. The command is intercepted by the UserPromptSubmit hook and takes effect immediately. AGENTS.md is updated to reflect the new state.

When to Use

At the start of every new project and at the start of any session where you want the full CocoPlus system active. It replaces six individual feature toggles with a single command. Use /cocoplus off when you're doing something lightweight and don't want CocoPlus overhead — a quick exploratory query, a throwaway experiment, a one-off question about the environment. If you're doing real project work, /cocoplus on is the right starting point.

18 — CocoFleet Multi-Process Orchestration
/fleet init /fleet run /fleet status /fleet stop /fleet logs

For when the work is genuinely too large for a single Coco session. CocoFleet spawns independent Coco CLI processes at the operating system level — separate processes, separate context windows, coordinated through shared file state and PID tracking. This is distinct from CocoHarvest, which operates within a single session.

Fleet coordination is file-based: no message queues, no service buses. Each process writes its state to shared files; the coordinator reads those files. Simple, debuggable, and consistent with CocoPlus's commitment to legible file-based state.
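
File-based coordination can be sketched in a few lines — the directory layout and field names are assumptions about the on-disk format, and the liveness probe is POSIX-style:

```python
# Sketch of fleet coordination through shared files: each worker process
# writes its state to a JSON file; the coordinator reads those files and
# checks PID liveness with a signal-0 probe (POSIX).
import json
import os
import pathlib
import tempfile

def write_state(root: pathlib.Path, name: str, status: str) -> None:
    (root / f"{name}.json").write_text(
        json.dumps({"pid": os.getpid(), "status": status}))

def fleet_status(root: pathlib.Path) -> dict:
    out = {}
    for f in root.glob("*.json"):
        state = json.loads(f.read_text())
        try:
            os.kill(state["pid"], 0)   # signal 0: existence check, no effect
            alive = True
        except OSError:
            alive = False
        out[f.stem] = {"status": state["status"], "alive": alive}
    return out

root = pathlib.Path(tempfile.mkdtemp())
write_state(root, "migrate-sales", "running")
print(fleet_status(root))
# {'migrate-sales': {'status': 'running', 'alive': True}}
```

Everything the coordinator knows, it learned by reading a file — which is what makes a stuck fleet debuggable with nothing more than `cat` and `ls`.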

When to Use

When you've genuinely hit the practical context budget of a single Coco session and still have work to coordinate across multiple independent workstreams. Large-scale data migrations. Multi-schema rebuilds spanning days of parallel work. Development streams that must eventually converge on a shared result. If CocoHarvest and CocoFlow can handle your parallelism within a single session, prefer them — CocoFleet adds OS-level coordination complexity that is only justified by genuine scale requirements that can't be met any other way.

19 — SecondEye Multi-Model Plan Critic
/secondeye /secondeye --artifact <target> /secondeye --model <model> /secondeye acknowledge

A plan reviewed by the same model that wrote it is a plan reviewed by a single perspective. SecondEye fixes that by running three Claude model tiers in parallel — each with a genuinely different evaluative mandate — and aggregating their findings into a single structured critique.

The underlying insight is architectural: within Claude's model family, each tier reasons differently. Haiku naturally challenges over-engineering. Sonnet challenges logical completeness and missing edge cases. Opus challenges architectural risk and unconsidered alternatives. These are not the same critique at different quality levels — they are different critiques. Aggregating them produces a breadth no single model can match.

Haiku
Efficiency Lens

Is this plan over-specified? Are there steps that add complexity without proportionate value? Would a simpler approach achieve the same outcome?

Sonnet
Completeness Lens

What assumptions does this plan make that are not validated? Which edge cases are unaddressed? Are all spec success criteria traceable to a plan stage?

Opus
Risk Lens

What are the highest-consequence failure modes? Are there architectural decisions that constrain future flexibility? What would have to be true for this plan to fail despite correct execution?

How It Works

Three SecondEye Critic subagents spawn simultaneously — one per model tier. All three read the same target artifact (default: plan.md, or any artifact via --artifact). Each writes findings to a temporary staging directory. When all three complete, the skill aggregates and deduplicates findings — those agreed upon by two or more critics are marked [Consensus]. Everything is classified as Critical, Advisory, or Observation, and a single report is produced. If any Critical findings exist, a soft gate activates on the Build phase — the developer must either revise the plan or run /secondeye acknowledge to accept the risk. SecondEye Critic agents are read-only: they cannot write to any lifecycle artifact or take any action beyond their staging directory output.
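
The aggregation step can be sketched as a merge over the three critics' staged findings — the matching rule here (exact text equality) is a simplification, and the tuple shape is an assumption, but the [Consensus] tagging and severity labels come from the text:

```python
# Sketch of SecondEye aggregation: merge findings from the three critics,
# deduplicate, and tag anything reported by two or more as [Consensus].
from collections import Counter

def aggregate(findings: list[tuple[str, str, str]]) -> list[str]:
    """findings: (critic, severity, text) triples read from staging."""
    counts = Counter(text for _, _, text in findings)
    seen: set[str] = set()
    report = []
    for _critic, severity, text in findings:
        if text in seen:
            continue
        seen.add(text)
        tag = "[Consensus] " if counts[text] >= 2 else ""
        report.append(f"{tag}{severity}: {text}")
    return report

findings = [
    ("haiku",  "Advisory", "Stage 4 repeats stage 2 validation"),
    ("sonnet", "Critical", "No rollback path for the schema migration"),
    ("opus",   "Critical", "No rollback path for the schema migration"),
]
print(aggregate(findings))
# ['Advisory: Stage 4 repeats stage 2 validation',
#  '[Consensus] Critical: No rollback path for the schema migration']
```

Any Critical item surviving this merge is what arms the soft gate on Build — the developer then either revises the plan or explicitly acknowledges the risk.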

When to Use

Before moving from Plan to Build on any work that involves irreversible operations, significant compute cost, or architectural decisions with long-term implications. Invoke it when you want a critical perspective that goes beyond what the model that generated the plan can offer — particularly for plans that are complex, have unclear trade-offs, or where the consequences of a wrong turn are expensive to undo. It is not necessary for every build — simple, well-understood tasks don't need it. But for plans where "what could go wrong?" is a question worth asking carefully, SecondEye is where you ask it rigorously.

Nineteen features, one idea: give structure a lower cost than chaos.