diff --git a/.cursor/README.md b/.cursor/README.md index 205a40f..0e4ac35 100644 --- a/.cursor/README.md +++ b/.cursor/README.md @@ -1,42 +1,149 @@ +## How to Use + +Type `/autopilot` to start or continue the full workflow. The orchestrator detects where your project is and picks up from there. + +``` +/autopilot — start a new project or continue where you left off +``` + +If you want to run a specific skill directly (without the orchestrator), use the individual commands: + +``` +/problem — interactive problem gathering → _docs/00_problem/ +/research — solution drafts → _docs/01_solution/ +/plan — architecture, components, tests → _docs/02_plans/ +/decompose — atomic task specs → _docs/02_tasks/ +/implement — batched parallel implementation → _docs/03_implementation/ +/deploy — containerization, CI/CD, observability → _docs/04_deploy/ +``` + +## How It Works + +The autopilot is a state machine that persists its state to `_docs/_autopilot_state.md`. On every invocation it reads the state file, cross-checks against the `_docs/` folder structure, shows a status summary with context from prior sessions, and continues execution. + +``` +/autopilot invoked + │ + ▼ +Read _docs/_autopilot_state.md → cross-check _docs/ folders + │ + ▼ +Show status summary (progress, key decisions, last session context) + │ + ▼ +Execute current skill (read its SKILL.md, follow its workflow) + │ + ▼ +Update state file → auto-chain to next skill → loop +``` + +The state file tracks completed steps, key decisions, blockers, and session context. This makes re-entry across conversations seamless — the autopilot knows not just where you are, but what decisions were made and why. + +Skills auto-chain without pausing between them. 
The only pauses are: +- **BLOCKING gates** inside each skill (user must confirm before proceeding) +- **Session boundary** after decompose (suggests new conversation before implement) + +A typical project runs in 2-4 conversations: +- Session 1: Problem → Research → Research decision +- Session 2: Plan → Decompose +- Session 3: Implement (may span multiple sessions) +- Session 4: Deploy + +Re-entry is seamless: type `/autopilot` in a new conversation and the orchestrator reads the state file to pick up exactly where you left off. + +## Skill Descriptions + +### autopilot (meta-orchestrator) + +Auto-chaining engine that sequences the full BUILD → SHIP workflow. Persists state to `_docs/_autopilot_state.md`, tracks key decisions and session context, and flows through problem → research → plan → decompose → implement → deploy without manual skill invocation. Maximizes work per conversation with seamless cross-session re-entry. + +### problem + +Interactive interview that builds `_docs/00_problem/`. Asks probing questions across 8 dimensions (problem, scope, hardware, software, acceptance criteria, input data, security, operations) until all required files can be written with concrete, measurable content. + +### research + +8-step deep research methodology. Mode A produces initial solution drafts. Mode B assesses and revises existing drafts. Includes AC assessment, source tiering, fact extraction, comparison frameworks, and validation. Run multiple rounds until the solution is solid. + +### plan + +6-step planning workflow. Produces integration test specs, architecture, system flows, data model, deployment plan, component specs with interfaces, risk assessment, test specifications, and Jira epics. Heavy interaction at BLOCKING gates. + +### decompose + +4-step task decomposition. Produces a bootstrap structure plan, atomic task specs per component, integration test tasks, and a cross-task dependency table. 
Each task gets a Jira ticket and is capped at 5 complexity points. + +### implement + +Orchestrator that reads task specs, computes dependency-aware execution batches, launches up to 4 parallel implementer subagents, runs code review after each batch, and commits per batch. Does not write code itself. + +### deploy + +7-step deployment planning. Status check, containerization, CI/CD pipeline, environment strategy, observability, deployment procedures, and deployment scripts. Produces documents for steps 1-6 and executable scripts in step 7. + +### code-review + +Multi-phase code review against task specs. Produces structured findings with verdict: PASS, FAIL, or PASS_WITH_WARNINGS. + +### refactor + +6-phase structured refactoring: baseline, discovery, analysis, safety net, execution, hardening. + +### security + +OWASP-based security testing and audit. + +### retrospective + +Collects metrics from implementation batch reports, analyzes trends, produces improvement reports. + +### rollback + +Reverts implementation to a specific batch checkpoint using git revert, verifies integrity. + ## Developer TODO (Project Mode) ### BUILD ``` -1. Create _docs/00_problem/ — describe what you're building +0. /problem — interactive interview → _docs/00_problem/ - problem.md (required) - restrictions.md (required) - acceptance_criteria.md (required) - input_data/ (required) - security_approach.md (optional) -2. /research — solution drafts → _docs/01_solution/ +1. /research — solution drafts → _docs/01_solution/ Run multiple times: Mode A → draft, Mode B → assess & revise -3. /plan — architecture, data model, deployment, components, risks, tests, Jira epics → _docs/02_plans/ +2. /plan — architecture, data model, deployment, components, risks, tests, Jira epics → _docs/02_plans/ -4. /decompose — atomic task specs + dependency table → _docs/02_tasks/ +3. /decompose — atomic task specs + dependency table → _docs/02_tasks/ -5. 
/implement — batched parallel agents, code review, commit per batch → _docs/03_implementation/ +4. /implement — batched parallel agents, code review, commit per batch → _docs/03_implementation/ ``` ### SHIP ``` -6. /deploy — containerization, CI/CD, environments, observability, procedures → _docs/02_plans/deployment/ +5. /deploy — containerization, CI/CD, environments, observability, procedures → _docs/04_deploy/ ``` ### EVOLVE ``` -7. /refactor — structured refactoring → _docs/04_refactoring/ -8. /retrospective — metrics, trends, improvement actions → _docs/05_metrics/ +6. /refactor — structured refactoring → _docs/04_refactoring/ +7. /retrospective — metrics, trends, improvement actions → _docs/05_metrics/ ``` +Or just use `/autopilot` to run steps 0-5 automatically. + ## Available Skills | Skill | Triggers | Output | |-------|----------|--------| +| **autopilot** | "autopilot", "auto", "start", "continue", "what's next" | Orchestrates full workflow | +| **problem** | "problem", "define problem", "new project" | `_docs/00_problem/` | | **research** | "research", "investigate" | `_docs/01_solution/` | | **plan** | "plan", "decompose solution" | `_docs/02_plans/` | | **decompose** | "decompose", "task decomposition" | `_docs/02_tasks/` | @@ -44,7 +151,7 @@ | **code-review** | "code review", "review code" | Verdict: PASS / FAIL / PASS_WITH_WARNINGS | | **refactor** | "refactor", "improve code" | `_docs/04_refactoring/` | | **security** | "security audit", "OWASP" | Security findings report | -| **deploy** | "deploy", "CI/CD", "observability" | `_docs/02_plans/deployment/` | +| **deploy** | "deploy", "CI/CD", "observability" | `_docs/04_deploy/` | | **retrospective** | "retrospective", "retro" | `_docs/05_metrics/` | | **rollback** | "rollback", "revert batch" | `_docs/03_implementation/rollback_report.md` | @@ -58,6 +165,7 @@ ``` _docs/ +├── _autopilot_state.md — autopilot orchestrator state (progress, decisions, session context) ├── 00_problem/ — problem 
definition, restrictions, AC, input data ├── 00_research/ — intermediate research artifacts ├── 01_solution/ — solution drafts, tech stack, security analysis @@ -74,6 +182,7 @@ _docs/ │ └── FINAL_report.md ├── 02_tasks/ — [JIRA-ID]_[name].md + _dependencies_table.md ├── 03_implementation/ — batch reports, rollback report, FINAL report +├── 04_deploy/ — containerization, CI/CD, environments, observability, procedures, scripts ├── 04_refactoring/ — baseline, discovery, analysis, execution, hardening └── 05_metrics/ — retro_[YYYY-MM-DD].md ``` diff --git a/.cursor/skills/autopilot/SKILL.md b/.cursor/skills/autopilot/SKILL.md new file mode 100644 index 0000000..db02045 --- /dev/null +++ b/.cursor/skills/autopilot/SKILL.md @@ -0,0 +1,321 @@ +--- +name: autopilot +description: | + Auto-chaining orchestrator that drives the full BUILD-SHIP workflow from problem gathering through deployment. + Detects current project state from _docs/ folder, resumes from where it left off, and flows through + problem → research → plan → decompose → implement → deploy without manual skill invocation. + Maximizes work per conversation by auto-transitioning between skills. + Trigger phrases: + - "autopilot", "auto", "start", "continue" + - "what's next", "where am I", "project status" +category: meta +tags: [orchestrator, workflow, auto-chain, state-machine, meta-skill] +disable-model-invocation: true +--- + +# Autopilot Orchestrator + +Auto-chaining execution engine that drives the full BUILD → SHIP workflow. Detects project state from `_docs/`, resumes from where work stopped, and flows through skills automatically. The user invokes `/autopilot` once — the engine handles sequencing, transitions, and re-entry. 
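As a rough sketch (not part of the skill files — `run_skill`, `SKILLS`, and the state-dict shape are illustrative assumptions, since the real engine persists state to `_docs/_autopilot_state.md` in markdown), the auto-chain loop amounts to:

```python
SKILLS = ["problem", "research", "plan", "decompose", "implement", "deploy"]

def autopilot(state: dict, run_skill) -> dict:
    """One invocation: resume at the persisted step, then auto-chain.

    Stops early only at the hard session boundary after decompose."""
    step = state.get("step", 0)
    while step < len(SKILLS):
        run_skill(SKILLS[step])  # delegate: read and execute that skill's SKILL.md
        state.setdefault("completed", []).append(SKILLS[step])
        step += 1
        state["step"] = step  # persist progress after every skill completes
        if SKILLS[step - 1] == "decompose" and step < len(SKILLS):
            state["note"] = "session boundary: start a new conversation, then /autopilot"
            break
    return state
```

A re-invocation simply calls `autopilot` again with the saved state, which is why re-entry across conversations needs no extra bookkeeping.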
+ +## Core Principles + +- **Auto-chain**: when a skill completes, immediately start the next one — no pause between skills +- **Only pause at decision points**: BLOCKING gates inside sub-skills are the natural pause points; do not add artificial stops between steps +- **State from disk**: all progress is persisted to `_docs/_autopilot_state.md` and cross-checked against `_docs/` folder structure +- **Rich re-entry**: on every invocation, read the state file for full context before continuing +- **Delegate, don't duplicate**: read and execute each sub-skill's SKILL.md; never inline their logic here + +## State File: `_docs/_autopilot_state.md` + +The autopilot persists its state to `_docs/_autopilot_state.md`. This file is the primary source of truth for re-entry. Folder scanning is the fallback when the state file doesn't exist. + +### Format + +```markdown +# Autopilot State + +## Current Step +step: [0-5 or "done"] +name: [Problem / Research / Plan / Decompose / Implement / Deploy / Done] +status: [not_started / in_progress / completed] +sub_step: [optional — sub-skill phase if interrupted mid-step, e.g. "Plan Step 3: Component Decomposition"] + +## Completed Steps + +| Step | Name | Completed | Key Outcome | +|------|------|-----------|-------------| +| 0 | Problem | [date] | [one-line summary] | +| 1 | Research | [date] | [N drafts, final approach summary] | +| 2 | Plan | [date] | [N components, architecture summary] | +| 3 | Decompose | [date] | [N tasks, total complexity points] | +| 4 | Implement | [date] | [N batches, pass/fail summary] | +| 5 | Deploy | [date] | [artifacts produced] | + +## Key Decisions +- [decision 1: e.g. "Tech stack: Python + Rust for perf-critical, Postgres DB"] +- [decision 2: e.g. "6 research rounds, final draft: solution_draft06.md"] +- [decision N] + +## Last Session +date: [date] +ended_at: [step name and phase] +reason: [completed step / session boundary / user paused / context limit] +notes: [any context for next session, e.g. 
"User asked to revisit risk assessment"] + +## Blockers +- [blocker 1, if any] +- [none] +``` + +### State File Rules + +1. **Create** the state file on the very first autopilot invocation (after state detection determines Step 0) +2. **Update** the state file after every step completion, every session boundary, and every BLOCKING gate confirmation +3. **Read** the state file as the first action on every invocation — before folder scanning +4. **Cross-check**: after reading the state file, verify against actual `_docs/` folder contents. If they disagree (e.g., state file says Step 2 but `_docs/02_plans/architecture.md` already exists), trust the folder structure and update the state file to match +5. **Never delete** the state file. It accumulates history across the entire project lifecycle + +## Execution Entry Point + +Every invocation of this skill follows the same sequence: + +``` +1. Read _docs/_autopilot_state.md (if exists) +2. Cross-check state file against _docs/ folder structure +3. Resolve current step (state file + folder scan) +4. Present Status Summary (from state file context) +5. Enter Execution Loop: + a. Read and execute the current skill's SKILL.md + b. When skill completes → update state file + c. Re-detect next step + d. If next skill is ready → auto-chain (go to 5a with next skill) + e. If session boundary reached → update state file with session notes → suggest new conversation + f. If all steps done → update state file → report completion +``` + +## State Detection + +Read `_docs/_autopilot_state.md` first. If it exists and is consistent with the folder structure, use the `Current Step` from the state file. If the state file doesn't exist or is inconsistent, fall back to folder scanning. + +### Folder Scan Rules (fallback) + +Scan `_docs/` to determine the current workflow position. Check rules in order — first match wins. 
+ +### Detection Rules + +**Step 0 — Problem Gathering** +Condition: `_docs/00_problem/` does not exist, OR any of these are missing/empty: +- `problem.md` +- `restrictions.md` +- `acceptance_criteria.md` +- `input_data/` (must contain at least one file) + +Action: Read and execute `.cursor/skills/problem/SKILL.md` + +--- + +**Step 1 — Research (Initial)** +Condition: `_docs/00_problem/` is complete AND `_docs/01_solution/` has no `solution_draft*.md` files + +Action: Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode A) + +--- + +**Step 1b — Research Decision** +Condition: `_docs/01_solution/` contains `solution_draft*.md` files AND `_docs/01_solution/solution.md` does not exist AND `_docs/02_plans/architecture.md` does not exist + +Action: Present the current research state to the user: +- How many solution drafts exist +- Whether tech_stack.md and security_analysis.md exist +- One-line summary from the latest draft + +Then ask: **"Run another research round (Mode B assessment), or proceed to planning?"** +- If user wants another round → Read and execute `.cursor/skills/research/SKILL.md` (will auto-detect Mode B) +- If user wants to proceed → auto-chain to Step 2 (Plan) + +--- + +**Step 2 — Plan** +Condition: `_docs/01_solution/` has `solution_draft*.md` files AND `_docs/02_plans/architecture.md` does not exist + +Action: +1. The plan skill's Prereq 2 will rename the latest draft to `solution.md` — this is handled by the plan skill itself +2. Read and execute `.cursor/skills/plan/SKILL.md` + +If `_docs/02_plans/` exists but is incomplete (has some artifacts but no `FINAL_report.md`), the plan skill's built-in resumability handles it. 
+ +--- + +**Step 3 — Decompose** +Condition: `_docs/02_plans/` contains `architecture.md` AND `_docs/02_plans/components/` has at least one component AND `_docs/02_tasks/` does not exist or has no task files (excluding `_dependencies_table.md`) + +Action: Read and execute `.cursor/skills/decompose/SKILL.md` + +If `_docs/02_tasks/` has some task files already, the decompose skill's resumability handles it. + +--- + +**Step 4 — Implement** +Condition: `_docs/02_tasks/` contains task files AND `_dependencies_table.md` exists AND `_docs/03_implementation/FINAL_implementation_report.md` does not exist + +Action: Read and execute `.cursor/skills/implement/SKILL.md` + +If `_docs/03_implementation/` has batch reports, the implement skill detects completed tasks and continues. + +--- + +**Step 5 — Deploy** +Condition: `_docs/03_implementation/FINAL_implementation_report.md` exists AND `_docs/04_deploy/` does not exist or is incomplete + +Action: Read and execute `.cursor/skills/deploy/SKILL.md` + +--- + +**Done** +Condition: `_docs/04_deploy/` contains all expected artifacts (containerization.md, ci_cd_pipeline.md, environment_strategy.md, observability.md, deployment_procedures.md, deploy_scripts.md) + +Action: Report project completion with summary. + +## Status Summary + +On every invocation, before executing any skill, present a status summary built from the state file (with folder scan fallback). 
+ +Format: + +``` +═══════════════════════════════════════════════════ + AUTOPILOT STATUS +═══════════════════════════════════════════════════ + Step 0 Problem [DONE / IN PROGRESS / NOT STARTED] + Step 1 Research [DONE (N drafts) / IN PROGRESS / NOT STARTED] + Step 2 Plan [DONE / IN PROGRESS / NOT STARTED] + Step 3 Decompose [DONE (N tasks) / IN PROGRESS / NOT STARTED] + Step 4 Implement [DONE / IN PROGRESS (batch M of ~N) / NOT STARTED] + Step 5 Deploy [DONE / IN PROGRESS / NOT STARTED] +═══════════════════════════════════════════════════ + Current step: [Step N — Name] + Action: [what will happen next] +═══════════════════════════════════════════════════ +``` + +For re-entry (state file exists), also include: +- Key decisions from the state file's `Key Decisions` section +- Last session context from the `Last Session` section +- Any blockers from the `Blockers` section + +## Auto-Chain Rules + +After a skill completes, apply these rules: + +| Completed Step | Next Action | +|---------------|-------------| +| Problem Gathering | Auto-chain → Research (Mode A) | +| Research (any round) | Auto-chain → Research Decision (ask user: another round or proceed?) | +| Research Decision → proceed | Auto-chain → Plan | +| Plan | Auto-chain → Decompose | +| Decompose | **Session boundary** — suggest new conversation before Implement | +| Implement | Auto-chain → Deploy | +| Deploy | Report completion | + +### Session Boundary: Decompose → Implement + +After decompose completes, **do not auto-chain to implement**. Instead: + +1. Update state file: mark Decompose as completed, set current step to 4 (Implement) with status `not_started` +2. Write `Last Session` section: `reason: session boundary`, `notes: Decompose complete, implementation ready` +3. Present a summary: number of tasks, estimated batches, total complexity points +4. Suggest: "Implementation is the longest phase and benefits from a fresh conversation context. 
Start a new conversation and type `/autopilot` to begin implementation." +5. If the user insists on continuing in the same conversation, proceed. + +This is the only hard session boundary. All other transitions auto-chain. + +## Skill Delegation + +For each step, the delegation pattern is: + +1. Update state file: set current step to `in_progress`, record `sub_step` if applicable +2. Announce: "Starting [Skill Name]..." +3. Read the skill file: `.cursor/skills/[name]/SKILL.md` +4. Execute the skill's workflow exactly as written, including: + - All BLOCKING gates (present to user, wait for confirmation) + - All self-verification checklists + - All save actions + - All escalation rules +5. When the skill's workflow is fully complete: + - Update state file: mark step as `completed`, record date, write one-line key outcome + - Add any key decisions made during this step to the `Key Decisions` section + - Return to the auto-chain rules + +Do NOT modify, skip, or abbreviate any part of the sub-skill's workflow. The autopilot is a sequencer, not an optimizer. + +## Re-Entry Protocol + +When the user invokes `/autopilot` and work already exists: + +1. Read `_docs/_autopilot_state.md` +2. Cross-check against `_docs/` folder structure +3. Present Status Summary with context from state file (key decisions, last session, blockers) +4. If the detected step has a sub-skill with built-in resumability (plan, decompose, implement, deploy all do), the sub-skill handles mid-step recovery +5. 
Continue execution from detected state + +## Error Handling + +| Situation | Action | +|-----------|--------| +| State detection is ambiguous (artifacts suggest two different steps) | Present findings to user, ask which step to execute | +| Sub-skill fails or hits an unrecoverable blocker | Report the error, suggest the user fix it manually, then re-invoke `/autopilot` | +| User wants to skip a step | Warn about downstream dependencies, proceed if user confirms | +| User wants to go back to a previous step | Warn that re-running may overwrite artifacts, proceed if user confirms | +| User asks "where am I?" without wanting to continue | Show Status Summary only, do not start execution | + +## Trigger Conditions + +This skill activates when the user wants to: +- Start a new project from scratch +- Continue an in-progress project +- Check project status +- Let the AI guide them through the full workflow + +**Keywords**: "autopilot", "auto", "start", "continue", "what's next", "where am I", "project status" + +**Differentiation**: +- User wants only research → use `/research` directly +- User wants only planning → use `/plan` directly +- User wants the full guided workflow → use `/autopilot` + +## Methodology Quick Reference + +``` +┌────────────────────────────────────────────────────────────────┐ +│ Autopilot (Auto-Chain Orchestrator) │ +├────────────────────────────────────────────────────────────────┤ +│ EVERY INVOCATION: │ +│ 1. State Detection (scan _docs/) │ +│ 2. Status Summary (show progress) │ +│ 3. Execute current skill │ +│ 4. Auto-chain to next skill (loop) │ +│ │ +│ WORKFLOW: │ +│ Step 0 Problem → .cursor/skills/problem/SKILL.md │ +│ ↓ auto-chain │ +│ Step 1 Research → .cursor/skills/research/SKILL.md │ +│ ↓ auto-chain (ask: another round?) 
│ +│ Step 2 Plan → .cursor/skills/plan/SKILL.md │ +│ ↓ auto-chain │ +│ Step 3 Decompose → .cursor/skills/decompose/SKILL.md │ +│ ↓ SESSION BOUNDARY (suggest new conversation) │ +│ Step 4 Implement → .cursor/skills/implement/SKILL.md │ +│ ↓ auto-chain │ +│ Step 5 Deploy → .cursor/skills/deploy/SKILL.md │ +│ ↓ │ +│ DONE │ +│ │ +│ STATE FILE: _docs/_autopilot_state.md │ +│ FALLBACK: _docs/ folder structure scan │ +│ PAUSE POINTS: sub-skill BLOCKING gates only │ +│ SESSION BREAK: after Decompose (before Implement) │ +├────────────────────────────────────────────────────────────────┤ +│ Principles: Auto-chain · State to file · Rich re-entry │ +│ Delegate don't duplicate · Pause at decisions only │ +└────────────────────────────────────────────────────────────────┘ +``` diff --git a/.cursor/skills/deploy/SKILL.md b/.cursor/skills/deploy/SKILL.md index 6f496ef..8767761 100644 --- a/.cursor/skills/deploy/SKILL.md +++ b/.cursor/skills/deploy/SKILL.md @@ -1,22 +1,22 @@ --- name: deploy description: | - Comprehensive deployment skill covering containerization, CI/CD pipeline, environment strategy, observability, and deployment procedures. - 5-step workflow: Docker containerization, CI/CD pipeline definition, environment strategy, observability planning, deployment procedures. - Uses _docs/02_plans/deployment/ structure. + Comprehensive deployment skill covering status check, env setup, containerization, CI/CD pipeline, environment strategy, observability, deployment procedures, and deployment scripts. + 7-step workflow: Status & env check, Docker containerization, CI/CD pipeline definition, environment strategy, observability planning, deployment procedures, deployment scripts. + Uses _docs/04_deploy/ structure. 
Trigger phrases: - "deploy", "deployment", "deployment strategy" - "CI/CD", "pipeline", "containerize" - "observability", "monitoring", "logging" - "dockerize", "docker compose" category: ship -tags: [deployment, docker, ci-cd, observability, monitoring, containerization] +tags: [deployment, docker, ci-cd, observability, monitoring, containerization, scripts] disable-model-invocation: true --- # Deployment Planning -Plan and document the full deployment lifecycle: containerize the application, define CI/CD pipelines, configure environments, set up observability, and document deployment procedures. +Plan and document the full deployment lifecycle: check deployment status and environment requirements, containerize the application, define CI/CD pipelines, configure environments, set up observability, document deployment procedures, and generate deployment scripts. ## Core Principles @@ -26,14 +26,16 @@ Plan and document the full deployment lifecycle: containerize the application, d - **Environment parity**: dev, staging, and production environments mirror each other as closely as possible - **Save immediately**: write artifacts to disk after each step; never accumulate unsaved work - **Ask, don't assume**: when infrastructure constraints or preferences are unclear, ask the user -- **Plan, don't code**: this workflow produces deployment documents and specifications, not implementation code +- **Plan, don't code**: this workflow produces deployment documents and specifications, not implementation code (except deployment scripts in Step 7) ## Context Resolution Fixed paths: - PLANS_DIR: `_docs/02_plans/` -- DEPLOY_DIR: `_docs/02_plans/deployment/` +- DEPLOY_DIR: `_docs/04_deploy/` +- REPORTS_DIR: `_docs/04_deploy/reports/` +- SCRIPTS_DIR: `scripts/` - ARCHITECTURE: `_docs/02_plans/architecture.md` - COMPONENTS_DIR: `_docs/02_plans/components/` @@ -55,7 +57,7 @@ Announce the resolved paths to the user before proceeding. 1. 
`architecture.md` exists — **STOP if missing**, run `/plan` first 2. At least one component spec exists in `PLANS_DIR/components/` — **STOP if missing** -3. Create DEPLOY_DIR if it does not exist +3. Create DEPLOY_DIR, REPORTS_DIR, and SCRIPTS_DIR if they do not exist 4. If DEPLOY_DIR already contains artifacts, ask user: **resume from last checkpoint or start fresh?** ## Artifact Management @@ -68,18 +70,33 @@ DEPLOY_DIR/ ├── ci_cd_pipeline.md ├── environment_strategy.md ├── observability.md -└── deployment_procedures.md +├── deployment_procedures.md +├── deploy_scripts.md +└── reports/ + └── deploy_status_report.md + +SCRIPTS_DIR/ (project root) +├── deploy.sh +├── pull-images.sh +├── start-services.sh +├── stop-services.sh +└── health-check.sh + +.env (project root, git-ignored) +.env.example (project root, committed) ``` ### Save Timing | Step | Save immediately after | Filename | |------|------------------------|----------| -| Step 1 | Containerization plan complete | `containerization.md` | -| Step 2 | CI/CD pipeline defined | `ci_cd_pipeline.md` | -| Step 3 | Environment strategy documented | `environment_strategy.md` | -| Step 4 | Observability plan complete | `observability.md` | -| Step 5 | Deployment procedures documented | `deployment_procedures.md` | +| Step 1 | Status check & env setup complete | `reports/deploy_status_report.md` + `.env` + `.env.example` | +| Step 2 | Containerization plan complete | `containerization.md` | +| Step 3 | CI/CD pipeline defined | `ci_cd_pipeline.md` | +| Step 4 | Environment strategy documented | `environment_strategy.md` | +| Step 5 | Observability plan complete | `observability.md` | +| Step 6 | Deployment procedures documented | `deployment_procedures.md` | +| Step 7 | Deployment scripts created | `deploy_scripts.md` + scripts in `SCRIPTS_DIR/` | ### Resumability @@ -92,11 +109,52 @@ If DEPLOY_DIR already contains artifacts: ## Progress Tracking -At the start of execution, create a TodoWrite with all steps (1 through 
5). Update status as each step completes. +At the start of execution, create a TodoWrite with all steps (1 through 7). Update status as each step completes. ## Workflow -### Step 1: Containerization +### Step 1: Deployment Status & Environment Setup + +**Role**: DevOps / Platform engineer +**Goal**: Assess current deployment readiness, identify all required environment variables, and create `.env` files +**Constraints**: Must complete before any other step + +1. Read architecture.md, all component specs, and restrictions.md +2. Assess deployment readiness: + - List all components and their current state (planned / implemented / tested) + - Identify external dependencies (databases, APIs, message queues, cloud services) + - Identify infrastructure prerequisites (container registry, cloud accounts, DNS, SSL certificates) + - Check if any deployment blockers exist +3. Identify all required environment variables by scanning: + - Component specs for configuration needs + - Database connection requirements + - External API endpoints and credentials + - Feature flags and runtime configuration + - Container registry credentials + - Cloud provider credentials + - Monitoring/logging service endpoints +4. Generate `.env.example` in project root with all variables and placeholder values (committed to VCS) +5. Generate `.env` in project root with development defaults filled in where safe (git-ignored) +6. Ensure `.gitignore` includes `.env` (but NOT `.env.example`) +7. 
Produce a deployment status report summarizing readiness, blockers, and required setup
+
+**Self-verification**:
+- [ ] All components assessed for deployment readiness
+- [ ] External dependencies catalogued
+- [ ] Infrastructure prerequisites identified
+- [ ] All required environment variables discovered
+- [ ] `.env.example` created with placeholder values
+- [ ] `.env` created with safe development defaults
+- [ ] `.gitignore` updated to exclude `.env`
+- [ ] Status report written to `reports/deploy_status_report.md`
+
+**Save action**: Write `reports/deploy_status_report.md` using `templates/deploy_status_report.md`, create `.env` and `.env.example` in project root
+
+**BLOCKING**: Present status report and environment variables to user. Do NOT proceed until confirmed.
+
+---
+
+### Step 2: Containerization
 
 **Role**: DevOps / Platform engineer
 **Goal**: Define Docker configuration for every component, local development, and integration test environments
@@ -117,7 +175,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
    - Database (Postgres) with named volume
    - Any message queues, caches, or external service mocks
    - Shared network
-   - Environment variable files (`.env.dev`)
+   - Environment variable files (`.env`)
 6. Define `docker-compose.test.yml` for integration tests:
    - Application components under test
    - Test runner container (black-box, no internal imports)
@@ -140,7 +198,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 
 ---
 
-### Step 2: CI/CD Pipeline
+### Step 3: CI/CD Pipeline
 
 **Role**: DevOps engineer
 **Goal**: Define the CI/CD pipeline with quality gates, security scanning, and multi-environment deployment
@@ -179,7 +237,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 
 ---
 
-### Step 3: Environment Strategy
+### Step 4: Environment Strategy
 
 **Role**: Platform engineer
 **Goal**: Define environment configuration, secrets management, and environment parity
@@ -194,7 +252,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 | **Production** | Live system | Full infrastructure | Real data |
 2. Define environment variable management:
-   - `.env.example` with all required variables (no real values)
+   - Reference `.env.example` created in Step 1
    - Per-environment variable sources (`.env` for dev, secret manager for staging/prod)
    - Validation: fail fast on missing required variables at startup
 3. Define secrets management:
@@ -209,7 +267,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 
 **Self-verification**:
 - [ ] All three environments defined with clear purpose
-- [ ] Environment variable documentation complete (`.env.example`)
+- [ ] Environment variable documentation complete (references `.env.example` from Step 1)
 - [ ] No secrets in any output document
 - [ ] Secret manager specified for staging/production
 - [ ] Database strategy per environment
@@ -218,7 +276,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 
 ---
 
-### Step 4: Observability
+### Step 5: Observability
 
 **Role**: Site Reliability Engineer (SRE)
 **Goal**: Define logging, metrics, tracing, and alerting strategy
@@ -272,7 +330,7 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 
 ---
 
-### Step 5: Deployment Procedures
+### Step 6: Deployment Procedures
 
 **Role**: DevOps / Platform engineer
 **Goal**: Define deployment strategy, rollback procedures, health checks, and deployment checklist
@@ -321,6 +379,69 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 ---
 
+### Step 7: Deployment Scripts
+
+**Role**: DevOps / Platform engineer
+**Goal**: Create executable deployment scripts for pulling Docker images and running services on the remote target machine
+**Constraints**: Produce real, executable shell scripts. This is the ONLY step that creates implementation artifacts.
+
+1. Read containerization.md and deployment_procedures.md from previous steps
+2. Read `.env.example` for required variables
+3. Create the following scripts in `SCRIPTS_DIR/`:
+
+**`deploy.sh`** — Main deployment orchestrator:
+   - Validates that required environment variables are set (sources `.env` if present)
+   - Calls `pull-images.sh`, then `stop-services.sh`, then `start-services.sh`, then `health-check.sh`
+   - Exits with a non-zero code on any failure
+   - Supports a `--rollback` flag to redeploy previous image tags
+
+**`pull-images.sh`** — Pull Docker images to the target machine:
+   - Reads the image list and tags from environment or config
+   - Authenticates with the container registry
+   - Pulls all required images
+   - Verifies image integrity (digest check)
+
+**`start-services.sh`** — Start services on the target machine:
+   - Runs `docker compose up -d` or individual `docker run` commands
+   - Applies environment variables from `.env`
+   - Configures networks and volumes
+   - Waits for containers to reach a healthy state
+
+**`stop-services.sh`** — Graceful shutdown:
+   - Stops services with a graceful shutdown period
+   - Saves current image tags for rollback reference
+   - Cleans up orphaned containers/networks
+
+**`health-check.sh`** — Verify deployment health:
+   - Checks all health endpoints
+   - Reports status per service
+   - Returns non-zero if any service is unhealthy
+
+4. All scripts must:
+   - Use bash (`#!/usr/bin/env bash`) with `set -euo pipefail`
+   - Source `.env` from the project root or accept variables from the environment
+   - Include usage/help output (`--help` flag)
+   - Be idempotent where possible
+   - Handle SSH connection to the remote target (configurable via the `DEPLOY_HOST` env var)
+
+5. Document all scripts in `deploy_scripts.md`
+
+**Self-verification**:
+- [ ] All five scripts created and executable
+- [ ] Scripts source environment variables correctly
+- [ ] `deploy.sh` orchestrates the full flow
+- [ ] `pull-images.sh` handles registry auth and image pull
+- [ ] `start-services.sh` starts containers with correct config
+- [ ] `stop-services.sh` handles graceful shutdown
+- [ ] `health-check.sh` validates all endpoints
+- [ ] Rollback supported via `deploy.sh --rollback`
+- [ ] Scripts work for remote deployment via SSH (`DEPLOY_HOST`)
+- [ ] `deploy_scripts.md` documents all scripts
+
+**Save action**: Write scripts to `SCRIPTS_DIR/`, write `deploy_scripts.md` using `templates/deploy_scripts.md`
+
+---
+
 ## Escalation Rules
 
 | Situation | Action |
@@ -331,33 +452,40 @@ At the start of execution, create a TodoWrite with all steps (1 through 5). Upda
 | Secret manager not chosen | **ASK user** |
 | Deployment pattern trade-offs | **ASK user** with recommendation |
 | Missing architecture.md | **STOP** — run `/plan` first |
+| Remote target machine details unknown | **ASK user** for SSH access, OS, and specs |
 
 ## Common Mistakes
 
-- **Implementing during planning**: this workflow produces documents, not Dockerfiles or pipeline YAML
-- **Hardcoding secrets**: never include real credentials in deployment documents
+- **Implementing during planning**: Steps 1–6 produce documents, not code (Step 7 is the exception — it creates scripts)
+- **Hardcoding secrets**: never include real credentials in deployment documents or scripts
 - **Ignoring integration test containerization**: the test environment must be containerized alongside the app
 - **Skipping BLOCKING gates**: never proceed past a BLOCKING marker without user confirmation
 - **Using `:latest` tags**: always pin base image versions
 - **Forgetting observability**: logging, metrics, and tracing are deployment concerns, not post-deployment additions
+- **Committing `.env`**: only `.env.example` goes to version control; `.env` must be in `.gitignore`
+- **Non-portable scripts**: deployment scripts must work across environments; avoid hardcoded paths
 
 ## Methodology Quick Reference
 
 ```
 ┌────────────────────────────────────────────────────────────────┐
-│ Deployment Planning (5-Step Method) │
+│ Deployment Planning (7-Step Method)                            │
 ├────────────────────────────────────────────────────────────────┤
-│ PREREQ: architecture.md + component specs exist │
+│ PREREQ: architecture.md + component specs exist                │
 │                                                                │
-│ 1. Containerization → containerization.md │
-│    [BLOCKING: user confirms Docker plan] │
-│ 2. CI/CD Pipeline → ci_cd_pipeline.md │
-│ 3. Environment → environment_strategy.md │
-│ 4. Observability → observability.md │
-│ 5. Procedures → deployment_procedures.md │
-│    [BLOCKING: user confirms deployment plan] │
+│ 1. Status & Env      → reports/deploy_status_report.md         │
+│                        + .env + .env.example                   │
+│    [BLOCKING: user confirms status & env vars]                 │
+│ 2. Containerization  → containerization.md                     │
+│    [BLOCKING: user confirms Docker plan]                       │
+│ 3. CI/CD Pipeline    → ci_cd_pipeline.md                       │
+│ 4. Environment       → environment_strategy.md                 │
+│ 5. Observability     → observability.md                        │
+│ 6. Procedures        → deployment_procedures.md                │
+│    [BLOCKING: user confirms deployment plan]                   │
+│ 7. Scripts           → deploy_scripts.md + scripts/            │
 ├────────────────────────────────────────────────────────────────┤
-│ Principles: Docker-first · IaC · Observability built-in │
-│             Environment parity · Save immediately │
+│ Principles: Docker-first · IaC · Observability built-in        │
+│             Environment parity · Save immediately              │
 └────────────────────────────────────────────────────────────────┘
 ```
diff --git a/.cursor/skills/deploy/templates/ci_cd_pipeline.md b/.cursor/skills/deploy/templates/ci_cd_pipeline.md
index d21c1f4..57b8b41 100644
--- a/.cursor/skills/deploy/templates/ci_cd_pipeline.md
+++ b/.cursor/skills/deploy/templates/ci_cd_pipeline.md
@@ -1,6 +1,6 @@
 # CI/CD Pipeline Template
 
-Save as `_docs/02_plans/deployment/ci_cd_pipeline.md`.
+Save as `_docs/04_deploy/ci_cd_pipeline.md`.
 
 ---
diff --git a/.cursor/skills/deploy/templates/containerization.md b/.cursor/skills/deploy/templates/containerization.md
index db982a7..d1025be 100644
--- a/.cursor/skills/deploy/templates/containerization.md
+++ b/.cursor/skills/deploy/templates/containerization.md
@@ -1,6 +1,6 @@
 # Containerization Plan Template
 
-Save as `_docs/02_plans/deployment/containerization.md`.
+Save as `_docs/04_deploy/containerization.md`.
 
 ---
diff --git a/.cursor/skills/deploy/templates/deploy_status_report.md b/.cursor/skills/deploy/templates/deploy_status_report.md
new file mode 100644
index 0000000..9482ad7
--- /dev/null
+++ b/.cursor/skills/deploy/templates/deploy_status_report.md
@@ -0,0 +1,73 @@
+# Deployment Status Report Template
+
+Save as `_docs/04_deploy/reports/deploy_status_report.md`.
+
+---
+
+```markdown
+# [System Name] — Deployment Status Report
+
+## Deployment Readiness Summary
+
+| Aspect | Status | Notes |
+|--------|--------|-------|
+| Architecture defined | ✅ / ❌ | |
+| Component specs complete | ✅ / ❌ | |
+| Infrastructure prerequisites met | ✅ / ❌ | |
+| External dependencies identified | ✅ / ❌ | |
+| Blockers | [count] | [summary] |
+
+## Component Status
+
+| Component | State | Docker-ready | Notes |
+|-----------|-------|--------------|-------|
+| [Component 1] | planned / implemented / tested | yes / no | |
+| [Component 2] | planned / implemented / tested | yes / no | |
+
+## External Dependencies
+
+| Dependency | Type | Required For | Status |
+|------------|------|--------------|--------|
+| [e.g., PostgreSQL] | Database | Data persistence | [available / needs setup] |
+| [e.g., Redis] | Cache | Session management | [available / needs setup] |
+| [e.g., External API] | API | [purpose] | [available / needs setup] |
+
+## Infrastructure Prerequisites
+
+| Prerequisite | Status | Action Needed |
+|--------------|--------|---------------|
+| Container registry | [ready / not set up] | [action] |
+| Cloud account | [ready / not set up] | [action] |
+| DNS configuration | [ready / not set up] | [action] |
+| SSL certificates | [ready / not set up] | [action] |
+| CI/CD platform | [ready / not set up] | [action] |
+| Secret manager | [ready / not set up] | [action] |
+
+## Deployment Blockers
+
+| Blocker | Severity | Resolution |
+|---------|----------|------------|
+| [blocker description] | critical / high / medium | [resolution steps] |
+
+## Required Environment Variables
+
+| Variable | Purpose | Required In | Default (Dev) | Source (Staging/Prod) |
+|----------|---------|-------------|---------------|-----------------------|
+| `DATABASE_URL` | Postgres connection string | All components | `postgres://dev:dev@db:5432/app` | Secret manager |
+| `DEPLOY_HOST` | Remote target machine | Deployment scripts | `localhost` | Environment |
+| `REGISTRY_URL` | Container registry URL | CI/CD, deploy scripts | `localhost:5000` | Environment |
+| `REGISTRY_USER` | Registry username | CI/CD, deploy scripts | — | Secret manager |
+| `REGISTRY_PASS` | Registry password | CI/CD, deploy scripts | — | Secret manager |
+| [add all required variables] | | | | |
+
+## .env Files Created
+
+- `.env.example` — committed to VCS, contains all variable names with placeholder values
+- `.env` — git-ignored, contains development defaults
+
+## Next Steps
+
+1. [Resolve any blockers listed above]
+2. [Set up missing infrastructure prerequisites]
+3. [Proceed to containerization planning]
+```
diff --git a/.cursor/skills/deploy/templates/deployment_procedures.md b/.cursor/skills/deploy/templates/deployment_procedures.md
index f9da36c..8bb5f0e 100644
--- a/.cursor/skills/deploy/templates/deployment_procedures.md
+++ b/.cursor/skills/deploy/templates/deployment_procedures.md
@@ -1,6 +1,6 @@
 # Deployment Procedures Template
 
-Save as `_docs/02_plans/deployment/deployment_procedures.md`.
+Save as `_docs/04_deploy/deployment_procedures.md`.
 
 ---
diff --git a/.cursor/skills/deploy/templates/environment_strategy.md b/.cursor/skills/deploy/templates/environment_strategy.md
index 6c3632b..a257698 100644
--- a/.cursor/skills/deploy/templates/environment_strategy.md
+++ b/.cursor/skills/deploy/templates/environment_strategy.md
@@ -1,6 +1,6 @@
 # Environment Strategy Template
 
-Save as `_docs/02_plans/deployment/environment_strategy.md`.
+Save as `_docs/04_deploy/environment_strategy.md`.
 
 ---
diff --git a/.cursor/skills/deploy/templates/observability.md b/.cursor/skills/deploy/templates/observability.md
index b656b29..d34a517 100644
--- a/.cursor/skills/deploy/templates/observability.md
+++ b/.cursor/skills/deploy/templates/observability.md
@@ -1,6 +1,6 @@
 # Observability Template
 
-Save as `_docs/02_plans/deployment/observability.md`.
+Save as `_docs/04_deploy/observability.md`.
 
 ---
diff --git a/.cursor/skills/problem/SKILL.md b/.cursor/skills/problem/SKILL.md
new file mode 100644
index 0000000..030a2a1
--- /dev/null
+++ b/.cursor/skills/problem/SKILL.md
@@ -0,0 +1,240 @@
+---
+name: problem
+description: |
+  Interactive problem-gathering skill that builds _docs/00_problem/ through a structured interview.
+  Iteratively asks probing questions until the problem, restrictions, acceptance criteria, and input data
+  are fully understood. Produces all required files for downstream skills (research, plan, etc.).
+  Trigger phrases:
+  - "problem", "define problem", "problem gathering"
+  - "what am I building", "describe problem"
+  - "start project", "new project"
+category: build
+tags: [problem, gathering, interview, requirements, acceptance-criteria]
+disable-model-invocation: true
+---
+
+# Problem Gathering
+
+Build a complete problem definition through a structured, interactive interview with the user. Produces all required files in `_docs/00_problem/` that downstream skills (research, plan, decompose, implement, deploy) depend on.
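+The required outputs can also be checked mechanically before a downstream skill starts. As a rough sketch (this helper is hypothetical and not part of the skill; the file names follow the Completeness Criteria table below):

```shell
#!/usr/bin/env bash
# Hypothetical helper: report which required problem-definition files
# are still missing from a _docs/00_problem/ directory.
check_problem_dir() {
  local dir="${1:-_docs/00_problem}"
  local missing=""
  for f in problem.md restrictions.md acceptance_criteria.md; do
    [ -f "$dir/$f" ] || missing="$missing $f"
  done
  # input_data/ must contain at least one file of any kind
  [ -n "$(ls -A "$dir/input_data" 2>/dev/null)" ] || missing="$missing input_data/"
  if [ -n "$missing" ]; then
    echo "INCOMPLETE:$missing"
  else
    echo "READY"
  fi
}
```

`security_approach.md` is deliberately left out of the check because it is optional.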
+
+## Core Principles
+
+- **Ask, don't assume**: never infer requirements the user hasn't stated
+- **Exhaust before writing**: keep asking until all dimensions are covered; do not write files prematurely
+- **Concrete over vague**: push for measurable values, specific constraints, real numbers
+- **Save immediately**: once the user confirms, write all files at once
+- **User is the authority**: the AI suggests, the user decides
+
+## Context Resolution
+
+Fixed paths:
+
+- OUTPUT_DIR: `_docs/00_problem/`
+- INPUT_DATA_DIR: `_docs/00_problem/input_data/`
+
+## Prerequisite Checks
+
+1. If OUTPUT_DIR already exists and contains files, present what exists and ask user: **resume and fill gaps, overwrite, or skip?**
+2. If overwrite or fresh start, create OUTPUT_DIR and INPUT_DATA_DIR
+
+## Completeness Criteria
+
+The interview is complete when the AI can write ALL of these:
+
+| File | Complete when |
+|------|---------------|
+| `problem.md` | Clear problem statement: what is being built, why, for whom, what it does |
+| `restrictions.md` | All constraints identified: hardware, software, environment, operational, regulatory, budget, timeline |
+| `acceptance_criteria.md` | Measurable success criteria with specific numeric targets grouped by category |
+| `input_data/` | At least one reference data file or detailed data description document |
+| `security_approach.md` | (optional) Security requirements identified, or explicitly marked as not applicable |
+
+## Interview Protocol
+
+### Phase 1: Open Discovery
+
+Start with broad, open questions. Let the user describe the problem in their own words.
+
+**Opening**: Ask the user to describe what they are building and what problem it solves. Do not interrupt or narrow down yet.
+
+After the user responds, summarize what you understood and ask: "Did I get this right? What did I miss?"
+
+### Phase 2: Structured Probing
+
+Work through each dimension systematically.
+For each dimension, ask only what the user hasn't already covered. Skip dimensions that were fully answered in Phase 1.
+
+**Dimension checklist:**
+
+1. **Problem & Goals**
+   - What exactly does the system do?
+   - What problem does it solve? Why does it need to exist?
+   - Who are the users / operators / stakeholders?
+   - What is the expected usage pattern (frequency, load, environment)?
+
+2. **Scope & Boundaries**
+   - What is explicitly IN scope?
+   - What is explicitly OUT of scope?
+   - Are there related systems this integrates with?
+   - What does the system NOT do (common misconceptions)?
+
+3. **Hardware & Environment**
+   - What hardware does it run on? (CPU, GPU, memory, storage)
+   - What operating system / platform?
+   - What is the deployment environment? (cloud, edge, embedded, on-prem)
+   - Any physical constraints? (power, thermal, size, connectivity)
+
+4. **Software & Tech Constraints**
+   - Required programming languages or frameworks?
+   - Required protocols or interfaces?
+   - Existing systems it must integrate with?
+   - Libraries or tools that must or must not be used?
+
+5. **Acceptance Criteria**
+   - What does "done" look like?
+   - Performance targets: latency, throughput, accuracy, error rates?
+   - Quality bars: reliability, availability, recovery time?
+   - Push for specific numbers: "less than X ms", "above Y%", "within Z meters"
+   - Edge cases: what happens when things go wrong?
+   - Startup and shutdown behavior?
+
+6. **Input Data**
+   - What data does the system consume?
+   - Formats, schemas, volumes, update frequency?
+   - Does the user have sample/reference data to provide?
+   - If no data exists yet, what would representative data look like?
+
+7. **Security** (optional, probe gently)
+   - Authentication / authorization requirements?
+   - Data sensitivity (PII, classified, proprietary)?
+   - Communication security (encryption, TLS)?
+   - If the user says "not a concern", mark as N/A and move on
+
+8. **Operational Constraints**
+   - Budget constraints?
+   - Timeline constraints?
+   - Team size / expertise constraints?
+   - Regulatory or compliance requirements?
+   - Geographic restrictions?
+
+### Phase 3: Gap Analysis
+
+After all dimensions are covered:
+
+1. Internally assess completeness against the Completeness Criteria table
+2. Present a completeness summary to the user:
+
+```
+Completeness Check:
+- problem.md: READY / GAPS: [list missing aspects]
+- restrictions.md: READY / GAPS: [list missing aspects]
+- acceptance_criteria.md: READY / GAPS: [list missing aspects]
+- input_data/: READY / GAPS: [list missing aspects]
+- security_approach.md: READY / N/A / GAPS: [list missing aspects]
+```
+
+3. If gaps exist, ask targeted follow-up questions for each gap
+4. Repeat until all required files show READY
+
+### Phase 4: Draft & Confirm
+
+1. Draft all files in the conversation (show the user what will be written)
+2. Present each file's content for review
+3. Ask: "Should I save these files? Any changes needed?"
+4. Apply any requested changes
+5. Save all files to OUTPUT_DIR
+
+## Output File Formats
+
+### problem.md
+
+Free-form text. Clear, concise description of:
+- What is being built
+- What problem it solves
+- How it works at a high level
+- Key context the reader needs to understand the problem
+
+No headers required. Paragraph format. Should be readable by someone unfamiliar with the project.
+
+### restrictions.md
+
+Categorized constraints with markdown headers and bullet points:
+
+```markdown
+# [Category Name]
+
+- Constraint description with specific values where applicable
+- Another constraint
+```
+
+Categories are derived from the interview (hardware, software, environment, operational, etc.). Each restriction should be specific and testable.
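+To illustrate the expected level of specificity (the values below are invented for illustration, not taken from any real project), a filled-in `restrictions.md` might look like:
+
+```markdown
+# Hardware
+
+- Must run on a single VM with 4 vCPUs and 8 GB RAM (no GPU available)
+- Total disk footprint under 10 GB, including container images
+
+# Software
+
+- Backend in Python 3.11+ (existing team expertise)
+- No GPL-licensed dependencies (legal constraint)
+```
+
+Each line states a constraint that can be verified against the delivered system.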
+
+### acceptance_criteria.md
+
+Categorized measurable criteria with markdown headers and bullet points:
+
+```markdown
+# [Category Name]
+
+- Criterion with specific numeric target
+- Another criterion with measurable threshold
+```
+
+Every criterion must have a measurable value. Vague criteria like "should be fast" are not acceptable — push for "less than 400ms end-to-end".
+
+### input_data/
+
+At least one file. Options:
+- User provides actual data files (CSV, JSON, images, etc.) — save as-is
+- User describes data parameters — save as `data_parameters.md`
+- User provides URLs to data — save as `data_sources.md` with links and descriptions
+
+### security_approach.md (optional)
+
+If security requirements exist, document them. If the user says security is not a concern for this project, skip this file entirely.
+
+## Progress Tracking
+
+Create a TodoWrite with phases 1-4. Update it as each phase completes.
+
+## Escalation Rules
+
+| Situation | Action |
+|-----------|--------|
+| User cannot provide acceptance criteria numbers | Suggest industry benchmarks, ASK user to confirm or adjust |
+| User has no input data at all | ASK what representative data would look like, create a `data_parameters.md` describing expected data |
+| User says "I don't know" to a critical dimension | Research the domain briefly, suggest reasonable defaults, ASK user to confirm |
+| Conflicting requirements discovered | Present the conflict, ASK user which takes priority |
+| User wants to skip a required file | Explain why downstream skills need it, ASK if they want a minimal placeholder |
+
+## Common Mistakes
+
+- **Writing files before the interview is complete**: gather everything first, then write
+- **Accepting vague criteria**: "fast", "accurate", "reliable" are not acceptance criteria without numbers
+- **Assuming technical choices**: do not suggest specific technologies unless the user constrains them
+- **Over-engineering the problem statement**: problem.md should be concise, not a dissertation
+- **Inventing restrictions**: only document what the user actually states as a constraint
+- **Skipping input data**: downstream skills (especially research and plan) need concrete data context
+
+## Methodology Quick Reference
+
+```
+┌────────────────────────────────────────────────────────────────┐
+│ Problem Gathering (4-Phase Interview)                          │
+├────────────────────────────────────────────────────────────────┤
+│ PREREQ: Check if _docs/00_problem/ exists (resume/overwrite?)  │
+│                                                                │
+│ Phase 1: Open Discovery                                        │
+│   → "What are you building?" → summarize → confirm             │
+│ Phase 2: Structured Probing                                    │
+│   → 8 dimensions: problem, scope, hardware, software,          │
+│     acceptance criteria, input data, security, operations      │
+│   → skip what Phase 1 already covered                          │
+│ Phase 3: Gap Analysis                                          │
+│   → assess completeness per file → fill gaps iteratively       │
+│ Phase 4: Draft & Confirm                                       │
+│   → show all files → user confirms → save to _docs/00_problem/ │
+├────────────────────────────────────────────────────────────────┤
+│ Principles: Ask don't assume · Concrete over vague             │
+│             Exhaust before writing · User is authority         │
+└────────────────────────────────────────────────────────────────┘
+```