mirror of
https://github.com/azaion/admin.git
synced 2026-04-22 22:56:32 +00:00
d96971b050
Add .cursor autodevelopment system
---
name: problem
description: |
  Interactive problem gathering skill that builds _docs/00_problem/ through structured interview.
  Iteratively asks probing questions until the problem, restrictions, acceptance criteria, and input data
  are fully understood. Produces all required files for downstream skills (research, plan, etc.).
  Trigger phrases:
  - "problem", "define problem", "problem gathering"
  - "what am I building", "describe problem"
  - "start project", "new project"
category: build
tags: [problem, gathering, interview, requirements, acceptance-criteria]
disable-model-invocation: true
---

# Problem Gathering

Build a complete problem definition through a structured, interactive interview with the user, producing all required files in `_docs/00_problem/` that downstream skills (research, plan, decompose, implement, deploy) depend on.

## Core Principles

- **Ask, don't assume**: never infer requirements the user hasn't stated
- **Exhaust before writing**: keep asking until all dimensions are covered; do not write files prematurely
- **Concrete over vague**: push for measurable values, specific constraints, real numbers
- **Save immediately**: once the user confirms, write all files at once
- **User is the authority**: the AI suggests, the user decides

## Context Resolution

Fixed paths:

- OUTPUT_DIR: `_docs/00_problem/`
- INPUT_DATA_DIR: `_docs/00_problem/input_data/`

## Prerequisite Checks

1. If OUTPUT_DIR already exists and contains files, present what exists and ask the user: **resume and fill gaps, overwrite, or skip?**
2. If overwriting or starting fresh, create OUTPUT_DIR and INPUT_DATA_DIR
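
A minimal sketch of this check in Python (the function name and return strings are illustrative, not part of the skill):

```python
from pathlib import Path

OUTPUT_DIR = Path("_docs/00_problem")

def prerequisite_action(output_dir: Path = OUTPUT_DIR) -> str:
    """Decide the first action: ask the user when prior output exists,
    otherwise create OUTPUT_DIR and INPUT_DATA_DIR and start fresh."""
    if output_dir.exists() and any(output_dir.iterdir()):
        # Existing files: present them and let the user choose.
        return "ask: resume, overwrite, or skip?"
    output_dir.mkdir(parents=True, exist_ok=True)
    (output_dir / "input_data").mkdir(exist_ok=True)
    return "fresh start"
```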

## Completeness Criteria

The interview is complete when the AI can write ALL of these:

| File | Complete when |
|------|---------------|
| `problem.md` | Clear problem statement: what is being built, why, for whom, and what it does |
| `restrictions.md` | All constraints identified: hardware, software, environment, operational, regulatory, budget, timeline |
| `acceptance_criteria.md` | Measurable success criteria with specific numeric targets, grouped by category |
| `input_data/` | At least one reference data file or a detailed data description document. Must include `expected_results.md` with input→output pairs for downstream test specification |
| `security_approach.md` | (optional) Security requirements identified, or explicitly marked as not applicable |
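
The table above can be modeled as a small completeness check; a sketch, assuming gaps are tracked as a map from file name to open questions (`REQUIRED`, `completeness`, and `interview_complete` are hypothetical names):

```python
# Required output files from the Completeness Criteria table;
# security_approach.md is optional and handled separately.
REQUIRED = ["problem.md", "restrictions.md", "acceptance_criteria.md", "input_data/"]

def completeness(gaps: dict) -> dict:
    """Map each required file to 'READY' or 'GAPS: ...' from its open gaps."""
    return {
        f: "READY" if not gaps.get(f) else "GAPS: " + ", ".join(gaps[f])
        for f in REQUIRED
    }

def interview_complete(gaps: dict) -> bool:
    """True when no required file has open gaps."""
    return all(not gaps.get(f) for f in REQUIRED)
```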

## Interview Protocol

### Phase 1: Open Discovery

Start with broad, open questions. Let the user describe the problem in their own words.

**Opening**: Ask the user to describe what they are building and what problem it solves. Do not interrupt or narrow down yet.

After the user responds, summarize what you understood and ask: "Did I get this right? What did I miss?"

### Phase 2: Structured Probing

Work through each dimension systematically. For each dimension, ask only what the user hasn't already covered. Skip dimensions that were fully answered in Phase 1.

**Dimension checklist:**

1. **Problem & Goals**
   - What exactly does the system do?
   - What problem does it solve? Why does it need to exist?
   - Who are the users / operators / stakeholders?
   - What is the expected usage pattern (frequency, load, environment)?

2. **Scope & Boundaries**
   - What is explicitly IN scope?
   - What is explicitly OUT of scope?
   - Are there related systems this integrates with?
   - What does the system NOT do (common misconceptions)?

3. **Hardware & Environment**
   - What hardware does it run on? (CPU, GPU, memory, storage)
   - What operating system / platform?
   - What is the deployment environment? (cloud, edge, embedded, on-prem)
   - Any physical constraints? (power, thermal, size, connectivity)

4. **Software & Tech Constraints**
   - Required programming languages or frameworks?
   - Required protocols or interfaces?
   - Existing systems it must integrate with?
   - Libraries or tools that must or must not be used?

5. **Acceptance Criteria**
   - What does "done" look like?
   - Performance targets: latency, throughput, accuracy, error rates?
   - Quality bars: reliability, availability, recovery time?
   - Push for specific numbers: "less than X ms", "above Y%", "within Z meters"
   - Edge cases: what happens when things go wrong?
   - Startup and shutdown behavior?

6. **Input Data**
   - What data does the system consume?
   - Formats, schemas, volumes, update frequency?
   - Does the user have sample/reference data to provide?
   - If no data exists yet, what would representative data look like?

7. **Security** (optional, probe gently)
   - Authentication / authorization requirements?
   - Data sensitivity (PII, classified, proprietary)?
   - Communication security (encryption, TLS)?
   - If the user says "not a concern", mark as N/A and move on

8. **Operational Constraints**
   - Budget constraints?
   - Timeline constraints?
   - Team size / expertise constraints?
   - Regulatory or compliance requirements?
   - Geographic restrictions?

### Phase 3: Gap Analysis

After all dimensions are covered:

1. Internally assess completeness against the Completeness Criteria table
2. Present a completeness summary to the user:

```
Completeness Check:
- problem.md: READY / GAPS: [list missing aspects]
- restrictions.md: READY / GAPS: [list missing aspects]
- acceptance_criteria.md: READY / GAPS: [list missing aspects]
- input_data/: READY / GAPS: [list missing aspects]
- security_approach.md: READY / N/A / GAPS: [list missing aspects]
```

3. If gaps exist, ask targeted follow-up questions for each gap
4. Repeat until all required files show READY
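
Steps 1-4 can be sketched as a loop; `assess` and `ask_followups` stand in for the AI's own judgment and the user conversation, and both names are hypothetical:

```python
def render_summary(status: dict) -> str:
    """Render the Phase 3 completeness summary shown to the user."""
    return "\n".join(["Completeness Check:"] +
                     [f"- {name}: {state}" for name, state in status.items()])

def run_gap_analysis(assess, ask_followups) -> dict:
    """Repeat assess -> targeted follow-ups until every file is READY (or N/A)."""
    while True:
        status = assess()  # e.g. {"problem.md": "READY", ...}
        gaps = {f: s for f, s in status.items() if s not in ("READY", "N/A")}
        if not gaps:
            return status
        ask_followups(gaps)  # targeted questions for each gap
```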

### Phase 4: Draft & Confirm

1. Draft all files in the conversation (show the user what will be written)
2. Present each file's content for review
3. Ask: "Should I save these files? Any changes needed?"
4. Apply any requested changes
5. Save all files to OUTPUT_DIR

## Output File Formats

### problem.md

Free-form text. Clear, concise description of:

- What is being built
- What problem it solves
- How it works at a high level
- Key context the reader needs to understand the problem

No headers required. Paragraph format. Should be readable by someone unfamiliar with the project.

### restrictions.md

Categorized constraints with markdown headers and bullet points:

```markdown
# [Category Name]

- Constraint description with specific values where applicable
- Another constraint
```

Categories are derived from the interview (hardware, software, environment, operational, etc.). Each restriction should be specific and testable.

### acceptance_criteria.md

Categorized measurable criteria with markdown headers and bullet points:

```markdown
# [Category Name]

- Criterion with specific numeric target
- Another criterion with measurable threshold
```

Every criterion must have a measurable value. Vague criteria like "should be fast" are not acceptable — push for "less than 400ms end-to-end".
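
As a rough illustration of that measurability bar, a criterion without any number can be flagged automatically (a heuristic sketch, not part of the skill):

```python
import re

def is_measurable(criterion: str) -> bool:
    """Heuristic: a criterion counts as measurable if it contains a number
    (e.g. '400ms', '95%', '10 meters'). Units are not validated here."""
    return bool(re.search(r"\d", criterion))

def vague_criteria(criteria: list) -> list:
    """Return the criteria that still need a numeric target."""
    return [c for c in criteria if not is_measurable(c)]
```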

### input_data/

At least one file. Options:

- User provides actual data files (CSV, JSON, images, etc.) — save as-is
- User describes data parameters — save as `data_parameters.md`
- User provides URLs to data — save as `data_sources.md` with links and descriptions
- `expected_results.md` — expected outputs for given inputs (required by the downstream test-spec skill). During the Acceptance Criteria dimension, probe for concrete input→output pairs and save them here. Format: use the template from `.cursor/skills/test-spec/templates/expected-results.md`.
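
A sketch of how such input→output pairs might be captured; the authoritative format comes from the test-spec template, so this table shape is only illustrative:

```python
def render_expected_results(pairs: list) -> str:
    """Render input->output pairs as a simple markdown table.
    Each pair is a dict with 'input' and 'output' keys (illustrative shape)."""
    lines = ["| Input | Expected output |", "|-------|-----------------|"]
    for p in pairs:
        lines.append(f"| {p['input']} | {p['output']} |")
    return "\n".join(lines)
```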

### security_approach.md (optional)

If security requirements exist, document them. If the user says security is not a concern for this project, skip this file entirely.

## Progress Tracking

Create a TodoWrite with phases 1-4. Update it as each phase completes.

## Escalation Rules

| Situation | Action |
|-----------|--------|
| User cannot provide acceptance criteria numbers | Suggest industry benchmarks, ASK user to confirm or adjust |
| User has no input data at all | ASK what representative data would look like, create a `data_parameters.md` describing expected data |
| User says "I don't know" to a critical dimension | Research the domain briefly, suggest reasonable defaults, ASK user to confirm |
| Conflicting requirements discovered | Present the conflict, ASK user which takes priority |
| User wants to skip a required file | Explain why downstream skills need it, ASK if they want a minimal placeholder |

## Common Mistakes

- **Writing files before the interview is complete**: gather everything first, then write
- **Accepting vague criteria**: "fast", "accurate", "reliable" are not acceptance criteria without numbers
- **Assuming technical choices**: do not suggest specific technologies unless the user constrains them
- **Over-engineering the problem statement**: problem.md should be concise, not a dissertation
- **Inventing restrictions**: only document what the user actually states as a constraint
- **Skipping input data**: downstream skills (especially research and plan) need concrete data context

## Methodology Quick Reference

```
┌────────────────────────────────────────────────────────────────┐
│             Problem Gathering (4-Phase Interview)              │
├────────────────────────────────────────────────────────────────┤
│ PREREQ: Check if _docs/00_problem/ exists (resume/overwrite?)  │
│                                                                │
│ Phase 1: Open Discovery                                        │
│   → "What are you building?" → summarize → confirm             │
│ Phase 2: Structured Probing                                    │
│   → 8 dimensions: problem, scope, hardware, software,          │
│     acceptance criteria, input data, security, operations      │
│   → skip what Phase 1 already covered                          │
│ Phase 3: Gap Analysis                                          │
│   → assess completeness per file → fill gaps iteratively       │
│ Phase 4: Draft & Confirm                                       │
│   → show all files → user confirms → save to _docs/00_problem/ │
├────────────────────────────────────────────────────────────────┤
│ Principles: Ask don't assume · Concrete over vague             │
│             Exhaust before writing · User is authority         │
└────────────────────────────────────────────────────────────────┘
```