---
name: new-task
description: |
  Interactive skill for adding new functionality to an existing codebase.
  Guides the user through describing the feature, assessing complexity,
  optionally running research, analyzing the codebase for insertion points,
  validating assumptions with the user, and producing a task spec with a
  work item ticket. Supports a loop — the user can add multiple tasks in
  one session.
  Trigger phrases:
  - "new task", "add feature", "new functionality"
  - "I want to add", "new component", "extend"
category: build
tags: [task, feature, interactive, planning, work-items]
disable-model-invocation: true
---

# New Task (Interactive Feature Planning)

Guide the user through defining new functionality for an existing codebase. Produces one or more task specifications with work item tickets, optionally running deep research for complex features.

## Core Principles

- **User-driven**: every task starts with the user's description; never invent requirements
- **Right-size research**: only invoke the research skill when the change is big enough to warrant it
- **Validate before committing**: surface all assumptions and uncertainties to the user before writing the task file
- **Save immediately**: write task files to disk as soon as they are ready; never accumulate unsaved work
- **Ask, don't assume**: when scope, insertion point, or approach is unclear, STOP and ask the user

## Context Resolution

Fixed paths:

- TASKS_DIR: `_docs/02_tasks/`
- TASKS_TODO: `_docs/02_tasks/todo/`
- PLANS_DIR: `_docs/02_task_plans/`
- DOCUMENT_DIR: `_docs/02_document/`
- DEPENDENCIES_TABLE: `_docs/02_tasks/_dependencies_table.md`

Create TASKS_DIR, TASKS_TODO, and PLANS_DIR if they don't exist. If TASKS_DIR already contains task files (scan `todo/`, `backlog/`, and `done/`), use them to determine the next numeric prefix for temporary file naming.

## Workflow

The skill runs as a loop. Each iteration produces one task. After each task the user chooses to add another or finish.
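The numeric-prefix scan described in Context Resolution can be sketched as follows. This is a minimal illustration, assuming file names follow a `NN_short_name.md` pattern; the skill itself does not prescribe an implementation:

```python
# Sketch of the "next numeric prefix" scan: find the highest NN_ prefix
# across todo/, backlog/, and done/, then return the next one, zero-padded.
# The NN_ filename pattern is an assumption based on the naming convention
# used later in this skill ([##]_[short_name].md).
import re
from pathlib import Path

def next_prefix(tasks_dir: str = "_docs/02_tasks") -> str:
    highest = 0
    for sub in ("todo", "backlog", "done"):
        folder = Path(tasks_dir) / sub
        if not folder.is_dir():
            continue
        for f in folder.glob("*.md"):
            m = re.match(r"(\d+)_", f.name)
            if m:
                highest = max(highest, int(m.group(1)))
    return f"{highest + 1:02d}"
```

An empty or missing TASKS_DIR yields `"01"`, so the sketch also covers the "create if they don't exist" case.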
---

### Step 1: Gather Feature Description

**Role**: Product analyst
**Goal**: Get a clear, detailed description of the new functionality from the user.

Ask the user:

```
══════════════════════════════════════
 NEW TASK: Describe the functionality
══════════════════════════════════════
Please describe in detail the new functionality you want to add:
- What should it do?
- Who is it for?
- Any specific requirements or constraints?
══════════════════════════════════════
```

**BLOCKING**: Do NOT proceed until the user provides a description.

Record the description verbatim for use in subsequent steps.

---

### Step 2: Analyze Complexity

**Role**: Technical analyst
**Goal**: Determine whether deep research is needed.

Read the user's description and the existing codebase documentation from DOCUMENT_DIR (architecture.md, components/, system-flows.md).

**Consult LESSONS.md**: if `_docs/LESSONS.md` exists, read it and look for entries in the categories `estimation`, `architecture`, and `dependencies` that might apply to the task under consideration. If a relevant lesson exists (e.g., "estimation: auth-related changes historically take 2x estimate"), bias the classification and recommendation accordingly. Note in the output which lessons (if any) were applied.

Assess the change along these dimensions:

- **Scope**: how many components/files are affected?
- **Novelty**: does it involve libraries, protocols, or patterns not already in the codebase?
- **Risk**: could it break existing functionality or require architectural changes?
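One way to operationalize the three ratings into a recommendation is sketched below. The thresholds are illustrative assumptions, not part of the skill — the agent should still weigh LESSONS.md entries and present the result for user confirmation:

```python
# Illustrative scoring rule for the Step 2 assessment. Assumption: research
# is recommended when novelty is high, or when at least two of the three
# dimensions are medium or worse. The real decision remains a judgment call.
LEVELS = {"low": 0, "medium": 1, "high": 2}

def recommend(scope: str, novelty: str, risk: str) -> str:
    ratings = [LEVELS[scope], LEVELS[novelty], LEVELS[risk]]
    if LEVELS[novelty] == 2 or sum(r >= 1 for r in ratings) >= 2:
        return "Research needed"
    return "Skip research"
```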
Classification:

| Category | Criteria | Action |
|----------|----------|--------|
| **Needs research** | New libraries/frameworks, unfamiliar protocols, significant architectural change, multiple unknowns | Proceed to Step 3 (Research) |
| **Skip research** | Extends existing functionality, uses patterns already in codebase, straightforward new component with known tech | Skip to Step 4 (Codebase Analysis) |

Present the assessment to the user:

```
══════════════════════════════════════
 COMPLEXITY ASSESSMENT
══════════════════════════════════════
Scope:   [low / medium / high]
Novelty: [low / medium / high]
Risk:    [low / medium / high]
══════════════════════════════════════
Recommendation: [Research needed / Skip research]
Reason: [one-line justification]
══════════════════════════════════════
```

**BLOCKING**: Ask the user to confirm or override the recommendation before proceeding.

---

### Step 3: Research (conditional)

**Role**: Researcher
**Goal**: Investigate unknowns before task specification.

This step only runs if Step 2 determined research is needed.

1. Create a problem description file at `PLANS_DIR/<short-name>/problem.md` summarizing the feature request and the specific unknowns to investigate
2. Invoke `.cursor/skills/research/SKILL.md` in standalone mode:
   - INPUT_FILE: `PLANS_DIR/<short-name>/problem.md`
   - BASE_DIR: `PLANS_DIR/<short-name>/`
3. After research completes, read the latest solution draft from `PLANS_DIR/<short-name>/01_solution/` (highest-numbered `solution_draft*.md`)
4. Extract the key findings relevant to the task specification

The `<short-name>` is a short kebab-case name derived from the feature description (e.g., `auth-provider-integration`, `real-time-notifications`).

---

### Step 4: Codebase Analysis

**Role**: Software architect
**Goal**: Determine where and how to insert the new functionality, and whether existing tests cover the new requirements.

1. Read the codebase documentation from DOCUMENT_DIR:
   - `architecture.md` — overall structure
   - `components/` — component specs
   - `system-flows.md` — data flows (if exists)
   - `data_model.md` — data model (if exists)
2. If research was performed (Step 3), incorporate findings
3. Analyze and determine:
   - Which existing components are affected
   - Where new code should be inserted (which layers, modules, files)
   - What interfaces need to change
   - What new interfaces or models are needed
   - How data flows through the change
4. If the change is complex enough, read the actual source files (not just docs) to verify insertion points
5. **Test coverage gap analysis**: Read existing test files that cover the affected components. For each acceptance criterion from Step 1, determine whether an existing test already validates it. Classify each AC as:
   - **Covered**: an existing test directly validates this behavior
   - **Partially covered**: an existing test exercises the code path but doesn't assert the new requirement
   - **Not covered**: no existing test validates this behavior — a new test is required

Present the analysis:

```
══════════════════════════════════════
 CODEBASE ANALYSIS
══════════════════════════════════════
Affected components: [list]
Insertion points:    [list of modules/layers]
Interface changes:   [list or "None"]
New interfaces:      [list or "None"]
Data flow impact:    [summary]
──────────────────────────────────────
 TEST COVERAGE GAP ANALYSIS
──────────────────────────────────────
AC-1: [Covered / Partially covered / Not covered]
      [existing test name or "needs new test"]
AC-2: [Covered / Partially covered / Not covered]
      [existing test name or "needs new test"]
...
──────────────────────────────────────
New tests needed:         [count]
Existing tests to update: [count or "None"]
══════════════════════════════════════
```

When gaps are found, the task spec (Step 6) MUST include the missing tests in the Scope (Included) section and the Unit/Blackbox Tests tables.
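The three-way coverage classification can be sketched as a small decision rule. The input shape (the tests matched to an AC, plus whether any of them asserts the new behavior) is a hypothetical structure for illustration — the skill leaves the matching itself to the agent:

```python
# Illustrative sketch of the coverage-gap classification per acceptance
# criterion. `matching_tests` and `asserts_new_behavior` are hypothetical
# inputs; in practice the agent derives them by reading the test files.
def classify_ac(matching_tests: list[str], asserts_new_behavior: bool) -> str:
    if not matching_tests:
        return "Not covered"          # a new test is required
    if asserts_new_behavior:
        return "Covered"              # an existing test validates the AC
    return "Partially covered"        # code path exercised, requirement not asserted
```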
Tests are not optional — if an AC is not covered by an existing test, the task must deliver a test for it.

---

### Step 4.5: Contract & Layout Check

**Role**: Architect
**Goal**: Prevent silent public-API drift and keep `module-layout.md` consistent before implementation locks file ownership.

Apply the four shared-task triggers from `.cursor/skills/decompose/SKILL.md` Step 2 rule #10 (shared/*, Scope mentions interface/DTO/schema/event/contract/API/shared-model, parent epic is cross-cutting, ≥2 consumers) and classify the task:

- **Producer** — any trigger fires, OR the task changes a public signature / invariant / serialization / error variant of an existing symbol:
  1. Check for an existing contract at `_docs/02_document/contracts/<component>/<contract>.md`.
  2. If present → decide the version bump (patch / minor / major per the contract's Versioning Rules) and add the Change Log entry to the task's deliverables.
  3. If absent → add creation of the contract file (using `.cursor/skills/decompose/templates/api-contract.md`) to the task's Scope.Included; add a `## Contract` section to the task spec.
  4. List every currently-known consumer (from Codebase Analysis, Step 4) and add them to the contract's Consumer tasks field.
- **Consumer** — the task imports or calls a public API belonging to another component:
  1. Resolve the component's contract file; add it to the task's `### Document Dependencies` section.
  2. If the cross-component interface has no contract file, Choose:
     **A)** create a retroactive contract now as a prerequisite task
     **B)** proceed without one, logging an explicit coupling risk in the task's Risks & Mitigation
- **Layout delta** — the task introduces a new component OR changes an existing component's Public API surface:
  1. Draft the Per-Component Mapping entry (or the Public API diff) against `_docs/02_document/module-layout.md` using the `.cursor/skills/decompose/templates/module-layout.md` format.
  2. Add the layout edit to the task's deliverables; the implementer writes it alongside the code change.
  3. If `module-layout.md` does not exist, STOP and instruct the user to run `/document` first (existing-code flow) or `/decompose` default mode (greenfield). Do not guess.

Record the classification and any contract/layout deliverables in the working notes; they feed Step 5 (Validate Assumptions) and Step 6 (Create Task).

**BLOCKING**: none — this step surfaces findings; the user confirms them in Step 5.

---

### Step 5: Validate Assumptions

**Role**: Quality gate
**Goal**: Surface every uncertainty and get user confirmation.

Review all decisions and assumptions made in Steps 2–4. For each uncertainty:

1. State the assumption clearly
2. Propose a solution or approach
3. List alternatives if they exist

Present using the Choose format for each decision that has meaningful alternatives:

```
══════════════════════════════════════
 ASSUMPTION VALIDATION
══════════════════════════════════════
1. [Assumption]: [proposed approach]
   Alternative: [other option, if any]
2. [Assumption]: [proposed approach]
   Alternative: [other option, if any]
...
══════════════════════════════════════
Please confirm or correct these assumptions.
══════════════════════════════════════
```

**BLOCKING**: Do NOT proceed until the user confirms or corrects all assumptions.

---

### Step 6: Create Task

**Role**: Technical writer
**Goal**: Produce the task specification file.

1. Determine the next numeric prefix by scanning all TASKS_DIR subfolders (`todo/`, `backlog/`, `done/`) for existing files
2. If research was performed (Step 3), the research artifacts live in `PLANS_DIR/<short-name>/` — reference them from the task spec where relevant
3. Write the task file using `.cursor/skills/decompose/templates/task.md`:
   - Fill all fields from the gathered information
   - Set **Complexity** based on the assessment from Step 2
   - Set **Dependencies** by cross-referencing existing tasks in TASKS_DIR subfolders
   - Set **Tracker** and **Epic** to `pending` (filled in Step 7)
4. Save as `TASKS_TODO/[##]_[short_name].md`

**Self-verification**:

- [ ] Problem section clearly describes the user need
- [ ] Acceptance criteria are testable (Gherkin format)
- [ ] Scope boundaries are explicit
- [ ] Complexity points match the assessment
- [ ] Dependencies reference existing task tracker IDs where applicable
- [ ] No implementation details leaked into the spec
- [ ] If Step 4.5 classified the task as producer, the `## Contract` section exists and points at a contract file
- [ ] If Step 4.5 classified the task as consumer, `### Document Dependencies` lists the relevant contract file
- [ ] If Step 4.5 flagged a layout delta, the task's Scope.Included names the `module-layout.md` edit

---

### Step 7: Work Item Ticket

**Role**: Project coordinator
**Goal**: Create a work item ticket and link it to the task file.

1. Create a ticket via the configured work item tracker (see `autodev/protocols.md` for tracker detection):
   - Summary: the task's **Name** field
   - Description: the task's **Problem** and **Acceptance Criteria** sections
   - Story points: the task's **Complexity** value
   - Link to the appropriate epic (ask the user if unclear which epic)
2. Write the ticket ID and Epic ID back into the task file header:
   - Update the **Task** field: `[TICKET-ID]_[short_name]`
   - Update the **Tracker** field: `[TICKET-ID]`
   - Update the **Epic** field: `[EPIC-ID]`
3. Rename the file from `[##]_[short_name].md` to `[TICKET-ID]_[short_name].md`

If the work item tracker is not authenticated or unavailable (`tracker: local`):

- Keep the numeric prefix
- Set **Tracker** to `pending`
- Set **Epic** to `pending`
- The task is still valid and can be implemented; tracker sync happens later

---

### Step 8: Loop Gate

Ask the user:

```
══════════════════════════════════════
Task created: [TRACKER-ID or ##] — [task name]
══════════════════════════════════════
A) Add another task
B) Done — finish and update dependencies
══════════════════════════════════════
```

- If **A** → loop back to Step 1
- If **B** → proceed to Finalize

---

### Finalize

After the user chooses **Done**:

1. Update (or create) `DEPENDENCIES_TABLE` — add all newly created tasks to the dependencies table
2. Present a summary of all tasks created in this session:

```
══════════════════════════════════════
 NEW TASK SUMMARY
══════════════════════════════════════
Tasks created:    N
Total complexity: M points
──────────────────────────────────────
[TRACKER-ID] [name] ([complexity] pts)
[TRACKER-ID] [name] ([complexity] pts)
...
══════════════════════════════════════
```

## Escalation Rules

| Situation | Action |
|-----------|--------|
| User description is vague or incomplete | **ASK** for more detail — do not guess |
| Unclear which epic to link to | **ASK** user for the epic |
| Research skill hits a blocker | Follow the research skill's own escalation rules |
| Codebase analysis reveals conflicting architectures | **ASK** user which pattern to follow |
| Complexity exceeds 5 points | **WARN** user and suggest splitting into multiple tasks |
| Work item tracker MCP unavailable | **WARN**, continue with local-only task files |

## Trigger Conditions

When the user wants to:

- Add new functionality to an existing codebase
- Plan a new feature or component
- Create task specifications for upcoming work

**Keywords**: "new task", "add feature", "new functionality", "extend", "I want to add"

**Differentiation**:

- User wants to decompose an existing plan into tasks → use `/decompose`
- User wants to research a topic without creating tasks → use `/research`
- User wants to refactor existing code → use `/refactor`
- User wants to define and plan a new feature → use this skill
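The DEPENDENCIES_TABLE update performed in Finalize can be sketched as below. This is a minimal illustration only: the column layout (`Task | Depends on | Status`) is an assumption, since this skill does not fix the table's schema.

```python
# Hypothetical sketch of the Finalize step that appends newly created tasks
# to DEPENDENCIES_TABLE, creating the file with a header if it is missing.
# The column layout is an assumption, not defined by the skill.
from pathlib import Path

HEADER = "| Task | Depends on | Status |\n|------|------------|--------|\n"

def append_tasks(table_path: str, tasks: list[tuple[str, str]]) -> None:
    path = Path(table_path)
    text = path.read_text() if path.exists() else HEADER
    for task_id, deps in tasks:
        text += f"| {task_id} | {deps or '—'} | todo |\n"
    path.write_text(text)
```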