These are not tutorials. Tutorials teach you the tool. Workflows show you how to use it. Each scenario is self-contained — read the ones that match what you're trying to do.

Scenario 1: Building a New Data Pipeline from Scratch

The situation: You have a requirements brief and a blank Snowflake schema. This is the full CocoBrew lifecycle applied to a real project.

Assumptions: CocoPlus initialized, Assist Mode active.

Step 1: Specify what you're building

/spec

The Spec phase asks you specific questions: goal, success criteria, what's out of scope, which Snowflake objects are involved, timeline or constraints. Answer with specificity. "Build a customer churn prediction pipeline" is a goal. "Deliver a daily-refreshed table in ANALYTICS.ML_OUTPUTS with churn probability scores per customer ID, consumed by the BI team's Cortex Analyst semantic model" is a spec.

The Spec phase will push back if your answers are vague — it's preventing you from arriving at Plan with half the decisions undone. Output: .cocoplus/lifecycle/spec.md + git commit.

Step 2: Plan the work

/plan

The Plan phase reads your spec.md, decomposes the work into stages, assigns each stage to the appropriate specialist persona, and generates flow.json. Then it enters Coco's native plan mode — you review every proposed stage, every persona assignment, every checkpoint, and approve before any execution begins.

If a stage is wrong — wrong persona, wrong sequencing, wrong scope — correct it here. Editing flow.json before approval is allowed and expected.
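The document doesn't show flow.json's schema, but a stage entry might look something like this (field names here are illustrative, not a guaranteed format):

```json
{
  "stages": [
    {
      "id": "stage-001",
      "name": "Build schema layer",
      "persona": "$de",
      "depends_on": [],
      "checkpoint": "schema objects created and committed"
    },
    {
      "id": "stage-002",
      "name": "Build semantic model",
      "persona": "$ae",
      "depends_on": ["stage-001"],
      "checkpoint": "semantic model validated against schema"
    }
  ]
}
```

Whatever the actual schema, the edits you'd make before approval are the same kind: reassigning a persona, reordering dependencies, or narrowing a stage's scope.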

Step 3: Optionally challenge the plan

/secondeye

Optional, but recommended for complex or high-stakes plans. Three critics fire in parallel:

Haiku (Efficiency Lens): Does every step in this plan earn its cost?

Sonnet (Completeness Lens): What's missing, unvalidated, or not addressed?

Opus (Risk Lens): What could go wrong despite correct execution?

Critical findings create a soft gate — you must revise the plan or run /secondeye acknowledge to proceed intentionally. The time SecondEye saves is the time you won't spend mid-build discovering a foundational assumption was wrong.

Step 4: Build

/build

The Build phase reads flow.json and begins execution. For multi-stage parallel plans, CocoHarvest spawns persona subagents in isolated git worktrees — each running independently, each committing its own stage artifacts. Monitor with /flow status. If a stage fails, resume from it with /flow resume [stage-id] after fixing the issue — prior stage checkpoints are always validated first.

Step 5: Test

/test

Validates against your spec's success criteria — not against a developer's mental model of what success looks like. Results are recorded in .cocoplus/lifecycle/test.md, with each result linked back to the spec criterion it validates.

Step 6: Review

/review

Aggregates Code Quality Advisor findings, CocoCupper session intelligence, and a spec compliance check. Each finding is classified as must-fix (blocks Ship), should-fix (recommended), or consider (informational).

Step 7: Ship

/ship

Gated on review completion with no open must-fix items. Creates the final commit with a full lifecycle summary, applies a semantic version tag, writes deployment.md. The git history at this point is a complete, legible record: spec commit, plan commit, per-stage build commits, test commit, review commit, ship commit.

Going back

/rewind [step-id]   # Roll back to a previous phase commit
/fork [branch-name] # Explore a different approach without touching your main work

Scenario 2: Quick Expert Consultation with a Persona

The situation: You're mid-session and need specific expertise — not a full build, just a targeted review or a pointed question.

$de Review this stored procedure for performance issues
$de --continue Fix the issues you identified

The --continue flag passes the full conversation context to the persona so it can proceed without re-reading the problem.

$dst Review the proposed schema changes for governance compliance

The Data Steward operates in plan mode — it will analyze and recommend but will not execute SQL. Governance advice and execution are different responsibilities.

$cdo What are the architectural trade-offs of building this as a Cortex Analyst model versus a stored procedure?

CDO uses Opus. The answer will be deeper and more strategic. This costs more tokens — it's the right tool for a decision that will constrain your project for months.

$da --model haiku Give me the top 10 customers by revenue in the last 30 days

Data Analyst on Haiku for a quick exploratory query. Fast, cheap, exactly proportionate to the task.


Scenario 3: Parallel Build with CocoHarvest

The situation: Your plan has multiple independent workstreams — schema layer, semantic model, notebook pipeline, and governance documentation — that can run simultaneously.

After plan approval, /build detects that CocoHarvest should handle this. It spawns parallel subagents in isolated git worktrees:

$de working on the schema layer in agent/stage-001
$ae working on the semantic model in agent/stage-002
$ds working on the notebook in agent/stage-003
$dpm preparing governance documentation in agent/stage-004

All four run simultaneously. None can see the others' work-in-progress — they're in isolated worktrees. Their only shared reference is the spec and plan files you approved.
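The isolation here is ordinary git worktree isolation. As a minimal sketch of what that looks like at the git level (the throwaway repo and branch names below are illustrative, not CocoHarvest's actual setup commands):

```shell
set -e
# Create a throwaway repo standing in for your project
tmp=$(mktemp -d)
cd "$tmp"
git init -q main
cd main
git -c user.email=dev@example.com -c user.name=dev \
    commit -q --allow-empty -m "spec commit"

# Each stage gets its own working directory on its own branch.
# Files changed in one worktree are invisible to the others
# until they are committed and merged.
git worktree add ../stage-001 -b agent/stage-001
git worktree add ../stage-002 -b agent/stage-002
git worktree list
```

Each worktree is a full checkout with an independent index, which is why parallel subagents can commit freely without stepping on each other.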

/flow status  # Live status for all four stages

/build --model opus  # Override model for the entire pipeline (Tier 2 default)

If one stage fails:

/flow status         # Identifies stage-003 as failed with reason
/flow resume stage-003  # Validates prior checkpoints, then resumes

Scenario 4: SecondEye Before a High-Stakes Plan

The situation: You've planned a schema migration reorganizing three production-adjacent schemas. The plan looks right, but you don't want to discover a dependency was missing halfway through.

/secondeye

Three critics fire in parallel on your plan.md. A realistic Critical finding might read:

[Consensus] Plan assumes all downstream views can be recreated from DDL in spec.md, but spec.md does not include view definitions. If any views have non-standard column aliases or UNION constructs, recreation will fail silently. Recommend validating view DDL extraction in a pre-build step.

If you agree with the Critical findings and want to address them, update plan.md and re-run /secondeye. If you've reviewed the risks and are accepting them intentionally:

/secondeye acknowledge

The acknowledgment is recorded in the report metadata. SecondEye can critique any lifecycle artifact, not just plans:

/secondeye --artifact spec    # Catch assumptions before planning
/secondeye --artifact review  # Validate review completeness before shipping

Scenario 5: Returning to a Project After Time Away

The situation: You've been away from this project for a week. You need to pick up exactly where you left off.

/pod resume

Reads all state from .cocoplus/ and produces a focused narrative: the project and its current phase, what was completed last session, what blocking items remain, the three most recent key decisions, and the recommended immediate next action. You'll know in two minutes where you are and what to do next.

/inspect        # Re-scan if your Snowflake snapshot is stale
/cup history 3  # See what CocoCupper learned across the last three sessions

Scenario 6: Handling a Safety Gate Trigger

The situation: During Build, a pipeline stage tries to drop a table. The Safety Gate fires.

In strict mode — the SQL does not execute:

[Safety Gate — BLOCKED]
DROP TABLE ANALYTICS.CUSTOMER_FEATURES matched a production schema pattern.
Operation blocked in strict mode. To proceed:
  - Verify this is an intentional and appropriate operation
  - Switch to normal mode: /safety normal
  - Then re-run the operation

In normal mode — the SQL runs with a logged warning:

[Safety Gate — WARNING]
TRUNCATE TABLE PROD_STAGING.TEMP_LOAD matches a production schema pattern.
This operation was executed. Review if this was intentional.

To configure which schemas are protected, edit .cocoplus/safety-config.json and add your patterns to the production_schema_patterns array. Changes take effect immediately.
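A sketch of what that entry might look like, assuming glob-style patterns (only the production_schema_patterns key is named in this guide; the surrounding structure is illustrative):

```json
{
  "mode": "strict",
  "production_schema_patterns": [
    "PROD_*",
    "ANALYTICS.*",
    "*_PROD"
  ]
}
```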

Before any CocoFlow stage that runs multiple destructive operations, the soft gate presents a batch summary for confirmation:

[Safety Gate — Soft Gate]
Stage "schema-rebuild" will execute:
  - DROP TABLE STAGING.OLD_CUSTOMER (3.2M rows)
  - DROP TABLE STAGING.OLD_ORDERS (8.7M rows)
  - CREATE TABLE ... (x2)

Confirm to proceed (yes/no):

Scenario 7: Exploring Before Committing — CocoSpark

The situation: You have a requirements brief but aren't sure of the best architectural approach. You want to think out loud before writing a spec.

/spark schema design

CocoSpark generates at least three distinct approaches, articulates trade-offs, identifies hidden assumptions, and raises questions you may not have considered. Output is saved to .cocoplus/spark-[timestamp].md — explicitly marked as exploration.

/spark-off  # Exit brainstorm mode, optionally carry insights into spec.md

CocoSpark never modifies your lifecycle artifacts automatically. Multiple spark sessions on different topics each get their own timestamped file:

/spark data modeling approach
/spark cost optimization strategies

Scenario 8: Building Institutional Memory — CocoGrove

The situation: You've completed a build and CocoCupper has flagged a pattern worth preserving permanently.

/cup history 1

Among the findings:

FINDING-047: Pattern "cursor-based pagination via ROWNUM + dense_rank()" used in stages 3, 5, and 7
with consistent success. Zero downstream errors. Potential candidate for promotion to CocoGrove.

/patterns promote FINDING-047

You'll provide a name, description, context (when to apply), anti-context (when not to), and tags. The pattern is saved as a structured markdown file in .cocoplus/grove/patterns/ — in git, editable, versionable.
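The resulting pattern file might be structured along these lines (the exact layout is illustrative; only the fields listed above are specified):

```markdown
# Pattern: cursor-based-pagination

**Description:** Paginate large result sets using ROWNUM with dense_rank()
for stable, cursor-style page boundaries.

**Context:** Large tables queried in fixed-size pages where offset-based
pagination degrades or returns unstable results.

**Anti-context:** Small result sets, or queries where a simple LIMIT/OFFSET
is cheap and stable.

**Tags:** pagination, snowflake, performance
```

Because it's a plain markdown file in git, refining a pattern later is an ordinary edit and commit.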

/patterns view pagination  # Browse patterns by tag in future sessions

Any agent invoked in a future session that reads from CocoGrove has access to this knowledge. It is no longer locked in your memory — it's in the system's.


Scenario 9: Large-Scale Builds with CocoFleet

The situation: Your platform build has twelve independent components — too large for a single CocoHarvest session where individual workstreams themselves need a full context window.

/fleet init data-platform

Creates .cocoplus/fleet/data-platform-manifest.json as a template. Edit it to define instances — each with a task file path, a persona assignment, an output directory, and a dependency list.
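An edited manifest might look something like this (field names are illustrative; the real template will show the expected schema):

```json
{
  "fleet_id": "fleet-001",
  "instances": [
    {
      "id": "instance-001",
      "task_file": ".cocoplus/fleet/tasks/schema-layer.md",
      "persona": "$de",
      "output_dir": "output/schema-layer",
      "depends_on": []
    },
    {
      "id": "instance-002",
      "task_file": ".cocoplus/fleet/tasks/semantic-model.md",
      "persona": "$ae",
      "output_dir": "output/semantic-model",
      "depends_on": ["instance-001"]
    }
  ]
}
```

Instances with an empty depends_on list start immediately; the rest wait for their dependencies to finish.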

/fleet run .cocoplus/fleet/data-platform-manifest.json

CocoFleet resolves the dependency graph and spawns processes in dependency order. Monitor, inspect logs, and stop:

/fleet status fleet-001    # Live view of all instances
/fleet logs instance-003   # Stream a specific instance's log
/fleet stop fleet-001      # Graceful shutdown with SIGTERM → SIGKILL fallback

CocoFleet vs CocoHarvest

Use CocoHarvest when parallel workstreams fit in a single Coco session's context budget. It's simpler and benefits from Coco's native hook system.

Use CocoFleet when individual workstreams themselves need a full context window. Not for everyday use — for genuinely large projects where scale demands it.

A workflow is only a workflow when it outlasts the first deviation. The ones worth keeping are the ones that bend without breaking.