mirror of
https://github.com/azaion/admin.git
synced 2026-04-22 22:46:33 +00:00
d96971b050
Add .cursor autodevelopment system
128 lines
6.9 KiB
Markdown
## Mode A: Initial Research

Triggered when no `solution_draft*.md` files exist in OUTPUT_DIR, or when the user explicitly requests initial research.

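The trigger condition above amounts to a file-existence check. A minimal sketch (the function name and `user_requested` flag are illustrative, not part of this spec):

```python
from pathlib import Path

def should_run_initial_research(output_dir: str, user_requested: bool = False) -> bool:
    """Mode A triggers when OUTPUT_DIR has no solution_draft*.md, or on explicit request."""
    drafts = list(Path(output_dir).glob("solution_draft*.md"))
    return user_requested or len(drafts) == 0
```
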
### Phase 1: AC & Restrictions Assessment (BLOCKING)

**Role**: Professional software architect

A focused preliminary research pass **before** the main solution research. The goal is to validate that the acceptance criteria and restrictions are realistic before designing a solution around them.

**Input**: All files from INPUT_DIR (or INPUT_FILE in standalone mode)

**Task**:

1. Read all problem context files thoroughly
2. **ASK the user about every unclear aspect** — do not assume:
   - Unclear problem boundaries → ask
   - Ambiguous acceptance criteria values → ask
   - Missing context (no `security_approach.md`, no `input_data/`) → ask what they have
   - Conflicting restrictions → ask which takes priority
3. Research the internet **extensively** — use multiple search queries per question, rephrase, and search from different angles:
   - How realistic are the acceptance criteria for this specific domain? Search for industry benchmarks, standards, and typical values
   - How critical is each criterion? Search for case studies where criteria were relaxed or tightened
   - What domain-specific acceptance criteria are we missing? Search for industry standards, regulatory requirements, and best practices in the specific domain
   - Impact of each criterion value on overall system quality — search for research papers and engineering reports
   - Cost/budget implications of each criterion — search for pricing, total-cost-of-ownership analyses, and comparable project budgets
   - Timeline implications — search for project timelines, development velocity reports, and comparable implementations
   - What do practitioners in this domain consider the most important criteria? Search forums, conference talks, and experience reports
4. Research restrictions from multiple perspectives:
   - Are the restrictions realistic? Search for comparable projects that operated under similar constraints
   - Should any be tightened or relaxed? Search for the constraints similar projects actually ended up with
   - Are there additional restrictions we should add? Search for regulatory, compliance, and safety requirements in this domain
   - What restrictions do practitioners wish they had defined earlier? Search for post-mortem reports and lessons learned
5. Verify findings with authoritative sources (official docs, papers, benchmarks) — each key finding must have at least 2 independent sources

**Uses Steps 0-3 of the 8-step engine** (question classification, decomposition, source tiering, fact extraction), scoped to AC and restrictions assessment.

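The two-independent-sources rule in step 5 can be enforced mechanically. A minimal sketch, assuming a `Finding` record with a `sources` list (the structure and field names are illustrative, not part of this spec):

```python
from dataclasses import dataclass, field

@dataclass
class Finding:
    claim: str
    sources: list = field(default_factory=list)  # URLs or citations backing the claim

def unverified_findings(findings):
    """Return findings that lack the required 2+ independent sources."""
    return [f for f in findings if len(set(f.sources)) < 2]
```
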
**Save action**: Write `RESEARCH_DIR/00_ac_assessment.md` with format:

```markdown
# Acceptance Criteria Assessment

## Acceptance Criteria

| Criterion | Our Values | Researched Values | Cost/Timeline Impact | Status |
|-----------|-----------|-------------------|---------------------|--------|
| [name] | [current] | [researched range] | [impact] | Added / Modified / Removed |

## Restrictions Assessment

| Restriction | Our Values | Researched Values | Cost/Timeline Impact | Status |
|-------------|-----------|-------------------|---------------------|--------|
| [name] | [current] | [researched range] | [impact] | Added / Modified / Removed |

## Key Findings

[Summary of critical findings]

## Sources

[Key references used]
```

**BLOCKING**: Present the AC assessment tables to the user. Wait for confirmation or adjustments before proceeding to Phase 2. The user may update `acceptance_criteria.md` or `restrictions.md` based on the findings.

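The blocking gate above is a conversation with the user, but its control flow can be sketched as a confirmation loop (purely illustrative; the function name is an assumption):

```python
def confirm_gate(prompt: str = "Proceed to Phase 2? [y/n] ") -> bool:
    """Block until the user explicitly confirms or rejects; re-ask on anything else."""
    while True:
        answer = input(prompt).strip().lower()
        if answer in ("y", "yes"):
            return True
        if answer in ("n", "no"):
            return False
```
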
---

### Phase 2: Problem Research & Solution Draft

**Role**: Professional researcher and software architect

Runs the full 8-step research methodology and produces the first solution draft.

**Input**: All files from INPUT_DIR (possibly updated after Phase 1) + Phase 1 artifacts

**Task** (drives the 8-step engine):

1. Research existing/competitor solutions for similar problems — search broadly across industries and adjacent domains, not just the obvious competitors
2. Research the problem thoroughly — all possible ways to solve it, split into components; search for how different fields approach analogous problems
3. For each component, research all possible solutions and find the most efficient state-of-the-art approaches — use multiple query variants and perspectives from Step 1
4. For each promising approach, search for real-world deployment experience: success stories, failure reports, lessons learned, and practitioner opinions
5. Search for contrarian viewpoints — who argues against the common approaches, and why? What failure modes exist?
6. Verify that suggested tools/libraries actually exist and work as described — check official repos, latest releases, and community health (stars, recent commits, open issues)
7. Include security considerations in each component analysis
8. Provide rough cost estimates for proposed solutions

Be concise: the fewer words the better, but do not omit any important detail.

**Save action**: Write `OUTPUT_DIR/solution_draft##.md` using the template `templates/solution_draft_mode_a.md`

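Choosing the next `solution_draft##.md` name can be sketched as a scan for the highest existing number, assuming `##` is a zero-padded sequence number (the helper name and regex are illustrative):

```python
import re
from pathlib import Path

def next_draft_name(output_dir: str) -> str:
    """Return the next free solution_draft##.md name, starting at 01."""
    highest = 0
    for path in Path(output_dir).glob("solution_draft*.md"):
        match = re.search(r"solution_draft(\d+)\.md$", path.name)
        if match:
            highest = max(highest, int(match.group(1)))
    return f"solution_draft{highest + 1:02d}.md"
```
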
---

### Phase 3: Tech Stack Consolidation (OPTIONAL)

**Role**: Software architect evaluating technology choices

A focused synthesis step — no new 8-step cycle. It uses the research already gathered in Phase 2 to make concrete technology decisions.

**Input**: Latest `solution_draft##.md` from OUTPUT_DIR + all files from INPUT_DIR

**Task**:

1. Extract technology options from the solution draft's component comparison tables
2. Score each option against: fitness for purpose, maturity, security track record, team expertise, cost, scalability
3. Produce a tech stack summary with selection rationale
4. Assess risks and learning requirements per technology choice

**Save action**: Write `OUTPUT_DIR/tech_stack.md` with:

- Requirements analysis (functional, non-functional, constraints)
- Technology evaluation tables (language, framework, database, infrastructure, key libraries) with scores
- Tech stack summary block
- Risk assessment and learning requirements tables

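The scoring in step 2 can be sketched as a weighted sum over the six criteria named above (the weights, the 1-5 scale, and all example scores are assumptions for illustration):

```python
CRITERIA = ["fitness", "maturity", "security", "team_expertise", "cost", "scalability"]

def score_option(scores, weights=None):
    """Weighted total of per-criterion scores (e.g. on a 1-5 scale)."""
    weights = weights or {c: 1.0 for c in CRITERIA}
    return sum(weights[c] * scores[c] for c in CRITERIA)

def rank_options(options):
    """Sort a {name: scores} mapping best-first by total score."""
    return sorted(options, key=lambda name: score_option(options[name]), reverse=True)
```
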
---

### Phase 4: Security Deep Dive (OPTIONAL)

**Role**: Security architect

A focused analysis step — it deepens the security column from the solution draft into a proper threat model and controls specification.

**Input**: Latest `solution_draft##.md` from OUTPUT_DIR + `security_approach.md` from INPUT_DIR + problem context

**Task**:

1. Build a threat model: asset inventory, threat actors, attack vectors
2. Define security requirements and proposed controls per component (with risk level)
3. Summarize the authentication/authorization, data protection, secure communication, and logging/monitoring approach

**Save action**: Write `OUTPUT_DIR/security_analysis.md` with:

- Threat model (assets, actors, vectors)
- Per-component security requirements and controls table
- Security controls summary
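
The threat-model shape in step 1 can be sketched as plain data structures (all type and field names are assumptions, not part of this spec):

```python
from dataclasses import dataclass, field

@dataclass
class Threat:
    actor: str                                   # e.g. "external attacker", "malicious insider"
    vector: str                                  # e.g. "credential stuffing on login endpoint"
    risk: str                                    # "low" | "medium" | "high"
    controls: list = field(default_factory=list) # proposed mitigations

@dataclass
class Asset:
    name: str
    threats: list = field(default_factory=list)

def high_risk_threats(assets):
    """Collect every high-risk threat across the asset inventory."""
    return [(a.name, t) for a in assets for t in a.threats if t.risk == "high"]
```
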