mirror of
https://github.com/azaion/gps-denied-desktop.git
synced 2026-04-23 03:46:36 +00:00
1092 lines
50 KiB
Markdown
---
name: deep-research
description: |
  Deep Research Methodology (8-Step Method) with two execution modes:
  - Mode A (Initial Research): Assess acceptance criteria, then research problem and produce solution draft
  - Mode B (Solution Assessment): Assess existing solution draft for weak points and produce revised draft
  Supports project mode (_docs/ structure) and standalone mode (@file.md).
  Auto-detects research mode based on existing solution_draft files.
  Trigger phrases:
  - "research", "deep research", "deep dive", "in-depth analysis"
  - "research this", "investigate", "look into"
  - "assess solution", "review solution draft"
  - "comparative analysis", "concept comparison", "technical comparison"
---
# Deep Research (8-Step Method)

Transform vague topics raised by users into high-quality, deliverable research reports through a systematic methodology. Operates in two modes: **Initial Research** (produce a new solution draft) and **Solution Assessment** (assess and revise an existing draft).

## Core Principles

- **Conclusions come from mechanism comparison, not "gut feelings"**
- **Pin down the facts first, then reason**
- **Prioritize authoritative sources: L1 > L2 > L3 > L4**
- **Intermediate results must be saved for traceability and reuse**
- **Ask, don't assume** — when any aspect of the problem, criteria, or restrictions is unclear, STOP and ask the user before proceeding
## Context Resolution

Determine the operating mode based on invocation before any other logic runs.

**Project mode** (no explicit input file provided):

- INPUT_DIR: `_docs/00_problem/`
- OUTPUT_DIR: `_docs/01_solution/`
- RESEARCH_DIR: `_docs/00_research/`
- All existing guardrails, mode detection, and draft numbering apply as-is.

**Standalone mode** (explicit input file provided, e.g. `/research @some_doc.md`):

- INPUT_FILE: the provided file (treated as the problem description)
- Derive `<topic>` from the input filename (without extension)
- OUTPUT_DIR: `_standalone/<topic>/01_solution/`
- RESEARCH_DIR: `_standalone/<topic>/00_research/`
- Guardrails relaxed: only INPUT_FILE must exist and be non-empty
- `restrictions.md` and `acceptance_criteria.md` are optional — warn if absent, proceed if the user confirms
- Mode detection uses OUTPUT_DIR for `solution_draft*.md` scanning
- Draft numbering works the same, scoped to OUTPUT_DIR

Announce the detected mode and resolved paths to the user before proceeding.
## Project Integration

### Prerequisite Guardrails (BLOCKING)

Before any research begins, verify the input context exists. **Do not proceed if guardrails fail.**

**Project mode:**

1. Check INPUT_DIR exists — **STOP if missing**, ask the user to create it and provide problem files
2. Check `problem.md` in INPUT_DIR exists and is non-empty — **STOP if missing**
3. Check for `restrictions.md` and `acceptance_criteria.md` in INPUT_DIR:
   - If missing: **warn user** and ask whether to proceed without them or provide them first
   - If present: read and validate they are non-empty
4. Read **all** files in INPUT_DIR to ground the investigation in the project context
5. Create OUTPUT_DIR and RESEARCH_DIR if they don't exist

**Standalone mode:**

1. Check INPUT_FILE exists and is non-empty — **STOP if missing**
2. Warn if no `restrictions.md` or `acceptance_criteria.md` were provided alongside INPUT_FILE — proceed if the user confirms
3. Create OUTPUT_DIR and RESEARCH_DIR if they don't exist
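The project-mode guardrail checks can be sketched as a pure function (names and message strings are illustrative, not part of the skill):

```python
from pathlib import Path

def check_guardrails(input_dir: Path) -> list[str]:
    """Return blocking/warning problems found in project mode; empty list = proceed."""
    if not input_dir.is_dir():
        # STOP: nothing else can be checked without the input directory
        return [f"INPUT_DIR missing: {input_dir}"]
    problems = []
    problem_md = input_dir / "problem.md"
    if not problem_md.is_file() or problem_md.stat().st_size == 0:
        problems.append("problem.md missing or empty (STOP)")
    for optional in ("restrictions.md", "acceptance_criteria.md"):
        f = input_dir / optional
        if not f.is_file() or f.stat().st_size == 0:
            problems.append(f"{optional} absent — warn user, ask before proceeding")
    return problems
```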
### Mode Detection

After guardrails pass, determine the execution mode:

1. Scan OUTPUT_DIR for files matching `solution_draft*.md`
2. **No matches found** → **Mode A: Initial Research**
3. **Matches found** → **Mode B: Solution Assessment** (use the highest-numbered draft as input)
4. **User override**: if the user explicitly says "research from scratch" or "initial research", force Mode A regardless of existing drafts

Inform the user which mode was detected and confirm before proceeding.
### Solution Draft Numbering

All final output is saved as `OUTPUT_DIR/solution_draft##.md` with a 2-digit zero-padded number:

1. Scan existing files in OUTPUT_DIR matching `solution_draft*.md`
2. Extract the highest existing number
3. Increment by 1
4. Zero-pad to 2 digits (e.g., `01`, `02`, ..., `10`, `11`)

Example: if `solution_draft01.md` through `solution_draft10.md` exist, the next output is `solution_draft11.md`.
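Mode detection and draft numbering are both driven by the same directory scan, so they can be sketched together (a minimal illustration; the function name is hypothetical):

```python
import re
from pathlib import Path

DRAFT_RE = re.compile(r"solution_draft(\d+)\.md$")

def detect_mode_and_next_draft(output_dir: Path) -> tuple[str, str]:
    """Return (mode, next filename): Mode A when no drafts exist, else Mode B."""
    numbers = [
        int(m.group(1))
        for p in output_dir.glob("solution_draft*.md")
        if (m := DRAFT_RE.search(p.name))
    ]
    mode = "A" if not numbers else "B"
    next_num = (max(numbers) if numbers else 0) + 1
    return mode, f"solution_draft{next_num:02d}.md"
```

With `solution_draft01.md` through `solution_draft10.md` present, this yields `("B", "solution_draft11.md")`, matching the example above.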
### Working Directory & Intermediate Artifact Management

#### Directory Structure

At the start of research, you **must** create a topic-named working directory under RESEARCH_DIR:

```
RESEARCH_DIR/<topic>/
├── 00_ac_assessment.md           # Mode A Phase 1 output: AC & restrictions assessment
├── 00_question_decomposition.md  # Step 0-1 output
├── 01_source_registry.md         # Step 2 output: all consulted source links
├── 02_fact_cards.md              # Step 3 output: extracted facts
├── 03_comparison_framework.md    # Step 4 output: selected framework and populated data
├── 04_reasoning_chain.md         # Step 6 output: fact → conclusion reasoning
├── 05_validation_log.md          # Step 7 output: use-case validation results
└── raw/                          # Raw source archive (optional)
    ├── source_1.md
    └── source_2.md
```
### Save Timing & Content

| Step | Save immediately after completion | Filename |
|------|-----------------------------------|----------|
| Mode A Phase 1 | AC & restrictions assessment tables | `00_ac_assessment.md` |
| Step 0-1 | Question type classification + sub-question list | `00_question_decomposition.md` |
| Step 2 | Each consulted source link, tier, summary | `01_source_registry.md` |
| Step 3 | Each fact card (statement + source + confidence) | `02_fact_cards.md` |
| Step 4 | Selected comparison framework + initial population | `03_comparison_framework.md` |
| Step 6 | Reasoning process for each dimension | `04_reasoning_chain.md` |
| Step 7 | Validation scenarios + results + review checklist | `05_validation_log.md` |
| Step 8 | Complete solution draft | `OUTPUT_DIR/solution_draft##.md` |
### Save Principles

1. **Save immediately**: Write to the corresponding file as soon as a step is completed; don't wait until the end
2. **Incremental updates**: The same file can be updated multiple times; append or replace content as needed
3. **Preserve process**: Keep intermediate files even after their content is integrated into the final report
4. **Enable recovery**: If research is interrupted, progress can be recovered from the intermediate files
## Execution Flow

### Mode A: Initial Research

Triggered when no `solution_draft*.md` files exist in OUTPUT_DIR, or when the user explicitly requests initial research.

#### Phase 1: AC & Restrictions Assessment (BLOCKING)

**Role**: Professional software architect

A focused preliminary research pass **before** the main solution research. The goal is to validate that the acceptance criteria and restrictions are realistic before designing a solution around them.

**Input**: All files from INPUT_DIR (or INPUT_FILE in standalone mode)
**Task**:

1. Read all problem context files thoroughly
2. **ASK the user about every unclear aspect** — do not assume:
   - Unclear problem boundaries → ask
   - Ambiguous acceptance criteria values → ask
   - Missing context (no `security_approach.md`, no `input_data/`) → ask what they have
   - Conflicting restrictions → ask which takes priority
3. Research the acceptance criteria on the internet:
   - How realistic are the acceptance criteria for this specific domain?
   - How critical is each criterion?
   - What domain-specific acceptance criteria are we missing?
   - Impact of each criterion value on overall system quality
   - Cost/budget implications of each criterion
   - Timeline implications — how long it would take to meet each criterion
4. Research the restrictions:
   - Are the restrictions realistic?
   - Should any be tightened or relaxed?
   - Are there additional restrictions we should add?
5. Verify findings with authoritative sources (official docs, papers, benchmarks)

**Uses Steps 0-3 of the 8-step engine** (question classification, decomposition, source tiering, fact extraction) scoped to the AC and restrictions assessment.
**📁 Save action**: Write `RESEARCH_DIR/<topic>/00_ac_assessment.md` with this format:

```markdown
# Acceptance Criteria Assessment

## Acceptance Criteria

| Criterion | Our Values | Researched Values | Cost/Timeline Impact | Status |
|-----------|-----------|-------------------|---------------------|--------|
| [name] | [current] | [researched range] | [impact] | Added / Modified / Removed |

## Restrictions Assessment

| Restriction | Our Values | Researched Values | Cost/Timeline Impact | Status |
|-------------|-----------|-------------------|---------------------|--------|
| [name] | [current] | [researched range] | [impact] | Added / Modified / Removed |

## Key Findings

[Summary of critical findings]

## Sources

[Key references used]
```

**BLOCKING**: Present the AC assessment tables to the user. Wait for confirmation or adjustments before proceeding to Phase 2. The user may update `acceptance_criteria.md` or `restrictions.md` based on the findings.

---
#### Phase 2: Problem Research & Solution Draft

**Role**: Professional researcher and software architect

Full 8-step research methodology. Produces the first solution draft.

**Input**: All files from INPUT_DIR (possibly updated after Phase 1) + Phase 1 artifacts

**Task** (drives the 8-step engine):

1. Research existing/competitor solutions for similar problems
2. Research the problem thoroughly — identify all possible ways to solve it, split into components
3. For each component, research all possible solutions and find the most efficient state-of-the-art approaches
4. Verify that suggested tools/libraries actually exist and work as described
5. Include security considerations in each component analysis
6. Provide rough cost estimates for proposed solutions

Be concise: use as few words as possible without omitting any important details.

**📁 Save action**: Write `OUTPUT_DIR/solution_draft##.md` using the template `templates/solution_draft_mode_a.md`

---
#### Phase 3: Tech Stack Consolidation (OPTIONAL)

**Role**: Software architect evaluating technology choices

Focused synthesis step — no new 8-step cycle. Uses research already gathered in Phase 2 to make concrete technology decisions.

**Input**: Latest `solution_draft##.md` from OUTPUT_DIR + all files from INPUT_DIR

**Task**:

1. Extract technology options from the solution draft's component comparison tables
2. Score each option against: fitness for purpose, maturity, security track record, team expertise, cost, scalability
3. Produce a tech stack summary with selection rationale
4. Assess risks and learning requirements per technology choice

**📁 Save action**: Write `OUTPUT_DIR/tech_stack.md` with:

- Requirements analysis (functional, non-functional, constraints)
- Technology evaluation tables (language, framework, database, infrastructure, key libraries) with scores
- Tech stack summary block
- Risk assessment and learning requirements tables
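The option scoring in step 2 can be sketched as a weighted sum. The six criteria come from this section; the weights and the 0-10 scale are illustrative assumptions, not prescribed by the skill:

```python
# Hypothetical weights over the six criteria named in the Task list.
CRITERIA = {
    "fitness": 0.30, "maturity": 0.20, "security": 0.20,
    "team_expertise": 0.10, "cost": 0.10, "scalability": 0.10,
}

def score_option(scores: dict[str, float]) -> float:
    """Weighted sum of per-criterion scores (each assumed 0-10)."""
    return round(sum(CRITERIA[c] * scores.get(c, 0.0) for c in CRITERIA), 2)

def rank_options(options: dict[str, dict[str, float]]) -> list[tuple[str, float]]:
    """Rank technology options by weighted score, best first."""
    return sorted(((name, score_option(s)) for name, s in options.items()),
                  key=lambda t: t[1], reverse=True)
```

The ranked list then feeds the evaluation tables and the selection rationale in `tech_stack.md`.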
---
#### Phase 4: Security Deep Dive (OPTIONAL)

**Role**: Security architect

Focused analysis step — deepens the security column from the solution draft into a proper threat model and controls specification.

**Input**: Latest `solution_draft##.md` from OUTPUT_DIR + `security_approach.md` from INPUT_DIR + problem context

**Task**:

1. Build threat model: asset inventory, threat actors, attack vectors
2. Define security requirements and proposed controls per component (with risk level)
3. Summarize authentication/authorization, data protection, secure communication, and logging/monitoring approach

**📁 Save action**: Write `OUTPUT_DIR/security_analysis.md` with:

- Threat model (assets, actors, vectors)
- Per-component security requirements and controls table
- Security controls summary

---
### Mode B: Solution Assessment

Triggered when `solution_draft*.md` files exist in OUTPUT_DIR.

**Role**: Professional software architect

Full 8-step research methodology applied to assessing and improving an existing solution draft.

**Input**: All files from INPUT_DIR + the latest (highest-numbered) `solution_draft##.md` from OUTPUT_DIR

**Task** (drives the 8-step engine):

1. Read the existing solution draft thoroughly
2. Research on the internet — identify all potential weak points and problems
3. Identify security weak points and vulnerabilities
4. Identify performance bottlenecks
5. Address these problems and find ways to solve them
6. Based on findings, form a new solution draft in the same format
7. During the comparison, try to find the solution that produces the highest-quality result within the boundaries of the restrictions. If there is uncertainty, or the solution comes close to or exceeds those boundaries, ask the user

**📁 Save action**: Write `OUTPUT_DIR/solution_draft##.md` (incremented) using the template `templates/solution_draft_mode_b.md`

**Optional follow-up**: After Mode B completes, the user can request Phase 3 (Tech Stack Consolidation) or Phase 4 (Security Deep Dive) using the revised draft. These phases work identically to their Mode A descriptions above.
## Escalation Rules

| Situation | Action |
|-----------|--------|
| Unclear problem boundaries | **ASK user** |
| Ambiguous acceptance criteria values | **ASK user** |
| Missing context files (`security_approach.md`, `input_data/`) | **ASK user** what they have |
| Conflicting restrictions | **ASK user** which takes priority |
| Technology choice with multiple valid options | **ASK user** |
| Contradictions between input files | **ASK user** |
| Missing acceptance criteria or restrictions files | **WARN user**, ask whether to proceed |
| File naming within research artifacts | PROCEED |
| Source tier classification | PROCEED |
## Trigger Conditions

When the user wants to:

- Deeply understand a concept/technology/phenomenon
- Compare similarities and differences between two or more things
- Gather information and evidence for a decision
- Assess or improve an existing solution draft

**Keywords**:

- "deep research", "deep dive", "in-depth analysis"
- "research this", "investigate", "look into"
- "assess solution", "review draft", "improve solution"
- "comparative analysis", "concept comparison", "technical comparison"

**Differentiation from other Skills**:

- Needs a **visual knowledge graph** → use `research-to-diagram`
- Needs **written output** (articles/tutorials) → use `wsy-writer`
- Needs **material organization** → use `material-to-markdown`
- Needs **research + solution draft** → use this Skill
## Research Engine (8-Step Method)

The 8-step method is the core research engine used by both modes. Steps 0-1 and Step 8 have mode-specific behavior; Steps 2-7 are identical regardless of mode.

### Step 0: Question Type Classification

First, classify the research question type and select the corresponding strategy:

| Question Type | Core Task | Focus Dimensions |
|---------------|-----------|------------------|
| **Concept Comparison** | Build comparison framework | Mechanism differences, applicability boundaries |
| **Decision Support** | Weigh trade-offs | Cost, risk, benefit |
| **Trend Analysis** | Map evolution trajectory | History, driving factors, predictions |
| **Problem Diagnosis** | Root cause analysis | Symptoms, causes, evidence chain |
| **Knowledge Organization** | Systematic structuring | Definitions, classifications, relationships |

**Mode-specific classification**:

| Mode / Phase | Typical Question Type |
|--------------|----------------------|
| Mode A Phase 1 | Knowledge Organization + Decision Support |
| Mode A Phase 2 | Decision Support |
| Mode B | Problem Diagnosis + Decision Support |
### Step 0.5: Novelty Sensitivity Assessment (BLOCKING)

**Before starting research, you must assess the novelty sensitivity of the question. This determines the source filtering strategy.**

#### Novelty Sensitivity Classification

| Sensitivity Level | Typical Domains | Source Time Window | Description |
|-------------------|-----------------|-------------------|-------------|
| **🔴 Critical** | AI/LLMs, blockchain, cryptocurrency | 3-6 months | Technology iterates extremely fast; info from months ago may be completely outdated |
| **🟠 High** | Cloud services, frontend frameworks, API interfaces | 6-12 months | Frequent version updates; must confirm current version |
| **🟡 Medium** | Programming languages, databases, operating systems | 1-2 years | Relatively stable but still evolving |
| **🟢 Low** | Algorithm fundamentals, design patterns, theoretical concepts | No limit | Core principles change slowly |
#### 🔴 Critical Sensitivity Domain Special Rules

When the research topic involves the following domains, **special rules must be enforced**:

**Trigger word identification**:

- AI-related: LLM, GPT, Claude, Gemini, AI Agent, RAG, vector database, prompt engineering
- Cloud-native: Kubernetes new versions, Serverless, container runtimes
- Cutting-edge tech: Web3, quantum computing, AR/VR
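The trigger-word check and the sensitivity table above can be sketched as a simple classifier. The keyword lists here are a small illustrative subset, and the naive substring match is an assumption (a real check might use word boundaries):

```python
# Keyword → sensitivity mapping distilled from the classification table;
# lists are illustrative, not exhaustive.
SENSITIVITY_TRIGGERS = {
    "critical": ["llm", "gpt", "claude", "gemini", "ai agent", "rag",
                 "vector database", "prompt engineering", "blockchain",
                 "web3", "serverless", "quantum"],
    "high": ["cloud", "frontend framework", "api"],
    "medium": ["database", "operating system", "programming language"],
}
TIME_WINDOW = {"critical": "3-6 months", "high": "6-12 months",
               "medium": "1-2 years", "low": "no limit"}

def assess_sensitivity(topic: str) -> tuple[str, str]:
    """Return (sensitivity level, source time window) for a research topic."""
    t = topic.lower()
    for level in ("critical", "high", "medium"):  # most sensitive level wins
        if any(keyword in t for keyword in SENSITIVITY_TRIGGERS[level]):
            return level, TIME_WINDOW[level]
    return "low", TIME_WINDOW["low"]
```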
**Mandatory rules**:

1. **Search with time constraints**:
   - Use `time_range: "month"` or `time_range: "week"` to limit search results
   - Prefer `start_date: "YYYY-MM-DD"` set to within the last 3 months

2. **Elevate official source priority**:
   - **Must first consult** official documentation, official blogs, official Changelogs
   - GitHub Release Notes, official X/Twitter announcements
   - Academic papers (arXiv and other preprint platforms)

3. **Mandatory version number annotation**:
   - Any technical description must annotate the **current version number**
   - Example: "Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) supports..."
   - Prohibit vague statements like "the latest version supports..."

4. **Outdated information handling**:
   - Technical blogs/tutorials older than 6 months → historical reference only, **cannot serve as factual evidence**
   - Version inconsistency found → must **verify current version** before using
   - Obviously outdated descriptions (e.g., "will support in the future" but now already supported) → **discard directly**

5. **Cross-validation**:
   - Highly sensitive information must be confirmed from **at least 2 independent sources**
   - Priority: Official docs > Official blogs > Authoritative tech media > Personal blogs

6. **Official download/release page direct verification (BLOCKING)**:
   - **Must directly visit** official download pages to verify platform support (don't rely on search engine caches)
   - Use `mcp__tavily-mcp__tavily-extract` or `WebFetch` to directly extract download page content
   - Example: `https://product.com/download` or `https://github.com/xxx/releases`
   - Search results about "coming soon" or "planned support" may be outdated; must verify in real time
   - **Platform support is frequently changing information**; cannot infer from old sources

7. **Product-specific protocol/feature name search (BLOCKING)**:
   - Beyond searching the product name, **must additionally search protocol/standard names the product supports**
   - Common protocols/standards to search:
     - AI tools: MCP, ACP (Agent Client Protocol), LSP, DAP
     - Cloud services: OAuth, OIDC, SAML
     - Data exchange: GraphQL, gRPC, REST
   - Search format: `"<product_name> <protocol_name> support"` or `"<product_name> <protocol_name> integration"`
   - These protocol integrations are often differentiating features, easily missed in main docs but documented in specialized pages
#### Timeliness Assessment Output Template

```markdown
## Timeliness Sensitivity Assessment

- **Research Topic**: [topic]
- **Sensitivity Level**: 🔴 Critical / 🟠 High / 🟡 Medium / 🟢 Low
- **Rationale**: [why this level]
- **Source Time Window**: [X months/years]
- **Priority official sources to consult**:
  1. [Official source 1]
  2. [Official source 2]
- **Key version information to verify**:
  - [Product/technology 1]: Current version ____
  - [Product/technology 2]: Current version ____
```

**📁 Save action**: Append the timeliness assessment to the end of `00_question_decomposition.md`

---
### Step 1: Question Decomposition & Boundary Definition

**Mode-specific sub-questions**:

**Mode A Phase 2** (Initial Research — Problem & Solution):

- "What existing/competitor solutions address this problem?"
- "What are the component parts of this problem?"
- "For each component, what are the state-of-the-art solutions?"
- "What are the security considerations per component?"
- "What are the cost implications of each approach?"

**Mode B** (Solution Assessment):

- "What are the weak points and potential problems in the existing draft?"
- "What are the security vulnerabilities in the proposed architecture?"
- "Where are the performance bottlenecks?"
- "What solutions exist for each identified issue?"

**General sub-question patterns** (use when applicable):

- **Sub-question A**: "What is X and how does it work?" (Definition & mechanism)
- **Sub-question B**: "What are the dimensions of relationship/difference between X and Y?" (Comparative analysis)
- **Sub-question C**: "In what scenarios is X applicable/inapplicable?" (Boundary conditions)
- **Sub-question D**: "What are X's development trends/best practices?" (Extended analysis)

**⚠️ Research Subject Boundary Definition (BLOCKING - must be explicit)**:

When decomposing questions, you must explicitly define the **boundaries of the research subject**:

| Dimension | Boundary to define | Example |
|-----------|--------------------|---------|
| **Population** | Which group is being studied? | University students vs K-12 vs vocational students vs all students |
| **Geography** | Which region is being studied? | Chinese universities vs US universities vs global |
| **Timeframe** | Which period is being studied? | Post-2020 vs full historical picture |
| **Level** | Which level is being studied? | Undergraduate vs graduate vs vocational |

**Common mistake**: User asks about "university classroom issues" but sources include policies targeting "K-12 students" — mismatched target populations will invalidate the entire research.

**📁 Save action**:

1. Read all files from INPUT_DIR to ground the research in the project context
2. Create working directory `RESEARCH_DIR/<topic>/`
3. Write `00_question_decomposition.md`, including:
   - Original question
   - Active mode (A Phase 2 or B) and rationale
   - Summary of relevant problem context from INPUT_DIR
   - Classified question type and rationale
   - **Research subject boundary definition** (population, geography, timeframe, level)
   - List of decomposed sub-questions
4. Use TodoWrite to track progress
### Step 2: Source Tiering & Authority Anchoring

Tier sources by authority, **prioritize primary sources**:

| Tier | Source Type | Purpose | Credibility |
|------|------------|---------|-------------|
| **L1** | Official docs, papers, specs, RFCs | Definitions, mechanisms, verifiable facts | ✅ High |
| **L2** | Official blogs, tech talks, white papers | Design intent, architectural thinking | ✅ High |
| **L3** | Authoritative media, expert commentary, tutorials | Supplementary intuition, case studies | ⚠️ Medium |
| **L4** | Community discussions, personal blogs, forums | Discover blind spots, validate understanding | ❓ Low |

**L4 Community Source Specifics** (mandatory for product comparison research):

| Source Type | Access Method | Value |
|------------|---------------|-------|
| **GitHub Issues** | Visit `github.com/<org>/<repo>/issues` | Real user pain points, feature requests, bug reports |
| **GitHub Discussions** | Visit `github.com/<org>/<repo>/discussions` | Feature discussions, usage insights, community consensus |
| **Reddit** | Search `site:reddit.com "<product_name>"` | Authentic user reviews, comparison discussions |
| **Hacker News** | Search `site:news.ycombinator.com "<product_name>"` | In-depth technical community discussions |
| **Discord/Telegram** | Product's official community channels | Active user feedback (must annotate [limited source]) |

**Principles**:

- Conclusions must be traceable to L1/L2
- L3/L4 serve only as supplementary and validation
- **L4 community discussions are used to discover "what users truly care about"**
- Record all information sources

**⏰ Timeliness Filtering Rules (execute based on Step 0.5 sensitivity level)**:

| Sensitivity Level | Source Filtering Rule | Suggested Search Parameters |
|-------------------|----------------------|-----------------------------|
| 🔴 Critical | Only accept sources within 6 months as factual evidence | `time_range: "month"` or `start_date` set to last 3 months |
| 🟠 High | Prefer sources within 1 year; annotate if older than 1 year | `time_range: "year"` |
| 🟡 Medium | Sources within 2 years used normally; older ones need validity check | Default search |
| 🟢 Low | No time limit | Default search |
**High-Sensitivity Domain Search Strategy**:

```
1. Round 1: Targeted official source search
   - Use include_domains to restrict to official domains
   - Example: include_domains: ["anthropic.com", "openai.com", "docs.xxx.com"]

2. Round 2: Official download/release page direct verification (BLOCKING)
   - Directly visit official download pages; don't rely on search caches
   - Use tavily-extract or WebFetch to extract page content
   - Verify: platform support, current version number, release date
   - This step is mandatory; search engines may cache outdated "Coming soon" info

3. Round 3: Product-specific protocol/feature search (BLOCKING)
   - Search protocol names the product supports (MCP, ACP, LSP, etc.)
   - Format: "<product_name> <protocol_name>" site:official_domain
   - These integration features are often not displayed on the main page but documented in specialized pages

4. Round 4: Time-limited broad search
   - time_range: "month" or start_date set to recent
   - Exclude obviously outdated sources

5. Round 5: Version verification
   - Cross-validate version numbers from search results
   - If inconsistency found, immediately consult official Changelog

6. Round 6: Community voice mining (BLOCKING - mandatory for product comparison research)
   - Visit the product's GitHub Issues page, review popular/pinned issues
   - Search Issues for key feature terms (e.g., "MCP", "plugin", "integration")
   - Review discussion trends from the last 3-6 months
   - Identify the feature points and differentiating characteristics users care most about
   - Value of this step: Official docs rarely emphasize "features we have that others don't", but community discussions do
```
**Community Voice Mining Detailed Steps**:

```
GitHub Issues Mining Steps:
1. Visit github.com/<org>/<repo>/issues
2. Sort by "Most commented" to view popular discussions
3. Search keywords:
   - Feature-related: feature request, enhancement, MCP, plugin, API
   - Comparison-related: vs, compared to, alternative, migrate from
4. Review issue labels: enhancement, feature, discussion
5. Record frequently occurring feature demands and user pain points

Value Translation:
- Frequently discussed features → likely differentiating highlights
- User complaints/requests → likely product weaknesses
- Comparison discussions → directly obtain user-perspective difference analysis
```
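The "Most commented" mining steps above can also be scripted against the GitHub REST API (`GET /repos/{owner}/{repo}/issues` with `sort=comments`). A minimal sketch; the repository names in the test are placeholders, network access and rate limits apply to `fetch_issues`:

```python
import json
from urllib.request import Request, urlopen

def issues_url(org: str, repo: str, per_page: int = 20) -> str:
    """Build the 'Most commented' issues query for a repository."""
    return (f"https://api.github.com/repos/{org}/{repo}/issues"
            f"?state=all&sort=comments&direction=desc&per_page={per_page}")

def top_pain_points(issues: list[dict]) -> list[tuple[str, int]]:
    """Extract (title, comment count) pairs, skipping pull requests, which
    the issues endpoint also returns (they carry a 'pull_request' key)."""
    return [(i["title"], i["comments"]) for i in issues
            if "pull_request" not in i]

def fetch_issues(org: str, repo: str) -> list[dict]:
    """Fetch the most-commented issues (requires network access)."""
    req = Request(issues_url(org, repo),
                  headers={"Accept": "application/vnd.github+json"})
    with urlopen(req) as resp:
        return json.load(resp)
```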
**Source Timeliness Annotation Template** (append to source registry):

```markdown
- **Publication Date**: [YYYY-MM-DD]
- **Timeliness Status**: ✅ Currently valid / ⚠️ Needs verification / ❌ Outdated
- **Version Info**: [If applicable, annotate the relevant version number]
```

**Tool Usage**:

- Prefer `mcp__plugin_context7_context7__query-docs` for technical documentation
- Use `WebSearch` or `mcp__tavily-mcp__tavily-search` for broad searches
- Use `mcp__tavily-mcp__tavily-extract` to extract specific page content
**⚠️ Target Audience Verification (BLOCKING - must check before inclusion)**:

Before including each source, verify that its **target audience matches the research boundary**:

| Source Type | Target audience to verify | Verification method |
|------------|---------------------------|---------------------|
| **Policy/Regulation** | Who is it for? (K-12/university/all) | Check document title, scope clauses |
| **Academic Research** | Who are the subjects? (vocational/undergraduate/graduate) | Check methodology/sample description sections |
| **Statistical Data** | Which population is measured? | Check data source description |
| **Case Reports** | What type of institution is involved? | Confirm institution type (university/high school/vocational) |

**Handling mismatched sources**:

- Target audience completely mismatched → **do not include**
- Partially overlapping (e.g., "students" includes university students) → include but **annotate applicable scope**
- Usable as analogous reference (e.g., K-12 policy as a trend reference) → include but **explicitly annotate "reference only"**
**📁 Save action**:

For each source consulted, **immediately** append to `01_source_registry.md`:

```markdown
## Source #[number]

- **Title**: [source title]
- **Link**: [URL]
- **Tier**: L1/L2/L3/L4
- **Publication Date**: [YYYY-MM-DD]
- **Timeliness Status**: ✅ Currently valid / ⚠️ Needs verification / ❌ Outdated (reference only)
- **Version Info**: [If involving a specific version, must annotate]
- **Target Audience**: [Explicitly annotate the group/geography/level this source targets]
- **Research Boundary Match**: ✅ Full match / ⚠️ Partial overlap / 📎 Reference only
- **Summary**: [1-2 sentence key content]
- **Related Sub-question**: [which sub-question this corresponds to]
```
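The save-immediately principle amounts to appending one formatted entry per source as soon as it is consulted. A minimal sketch; for brevity it fills only a subset of the registry fields above, and the helper name is hypothetical:

```python
from pathlib import Path

# Subset of the registry entry fields, for illustration.
ENTRY_TEMPLATE = """\
## Source #{number}

- **Title**: {title}
- **Link**: {link}
- **Tier**: {tier}
- **Publication Date**: {published}
- **Summary**: {summary}
"""

def append_source(registry: Path, **fields) -> None:
    """Append one source entry to 01_source_registry.md immediately after
    consulting the source (creates the file on first write)."""
    with registry.open("a", encoding="utf-8") as f:
        f.write(ENTRY_TEMPLATE.format(**fields) + "\n")
```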
### Step 3: Fact Extraction & Evidence Cards

Transform sources into **verifiable fact cards**:

```markdown
## Fact Cards

### Fact 1

- **Statement**: [specific fact description]
- **Source**: [link/document section]
- **Confidence**: High/Medium/Low

### Fact 2

...
```

**Key discipline**:

- Pin down facts first, then reason
- Distinguish "what officials said" from "what I infer"
- When conflicting information is found, annotate and preserve both sides
- Annotate confidence level:
  - ✅ High: Explicitly stated in official documentation
  - ⚠️ Medium: Mentioned in official blog but not formally documented
  - ❓ Low: Inference or from unofficial sources
**📁 Save action**:

For each extracted fact, **immediately** append to `02_fact_cards.md`:

```markdown
## Fact #[number]
- **Statement**: [specific fact description]
- **Source**: [Source #number] [link]
- **Phase**: [Phase 1 / Phase 2 / Assessment]
- **Target Audience**: [which group this fact applies to, inherited from source or further refined]
- **Confidence**: ✅/⚠️/❓
- **Related Dimension**: [corresponding comparison dimension]
```

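Because every fact card must point back to a registered source, a quick consistency pass over the two files catches dangling references early. A rough sketch; the `unmatched_facts` helper is hypothetical, and the regexes assume the exact `## Source #N` / `## Fact #N` headings shown above:

```python
import re

def unmatched_facts(registry_text: str, fact_cards_text: str) -> list[int]:
    """Return fact numbers whose '- **Source**: [Source #N]' reference
    does not correspond to any '## Source #N' entry in the registry."""
    known = set(map(int, re.findall(r"^## Source #(\d+)", registry_text, re.M)))
    bad = []
    # Lazy match pairs each Fact heading with the first Source line after it;
    # a fact card missing its Source line would pair with the next card's.
    for fact_no, src_no in re.findall(
            r"## Fact #(\d+)[\s\S]*?\*\*Source\*\*: \[Source #(\d+)\]",
            fact_cards_text):
        if int(src_no) not in known:
            bad.append(int(fact_no))
    return bad
```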
**⚠️ Target audience in fact statements**:

- If a fact comes from a "partially overlapping" or "reference only" source, the statement **must explicitly annotate the applicable scope**
- Wrong: "The Ministry of Education banned phones in classrooms" (doesn't specify who)
- Correct: "The Ministry of Education banned K-12 students from bringing phones into classrooms (does not apply to university students)"

### Step 4: Build Comparison/Analysis Framework

Based on the question type, select fixed analysis dimensions:

**General Dimensions** (select as needed):

1. Goal / What problem does it solve
2. Working mechanism / Process
3. Input / Output / Boundaries
4. Advantages / Disadvantages / Trade-offs
5. Applicable scenarios / Boundary conditions
6. Cost / Benefit / Risk
7. Historical evolution / Future trends
8. Security / Permissions / Controllability

**Concept Comparison Specific Dimensions**:

1. Definition & essence
2. Trigger / invocation method
3. Execution agent
4. Input/output & type constraints
5. Determinism & repeatability
6. Resource & context management
7. Composition & reuse patterns
8. Security boundaries & permission control

**Decision Support Specific Dimensions**:

1. Solution overview
2. Implementation cost
3. Maintenance cost
4. Risk assessment
5. Expected benefit
6. Applicable scenarios
7. Team capability requirements
8. Migration difficulty

**📁 Save action**:

Write to `03_comparison_framework.md`:

```markdown
# Comparison Framework

## Selected Framework Type
[Concept Comparison / Decision Support / ...]

## Selected Dimensions
1. [Dimension 1]
2. [Dimension 2]
...

## Initial Population
| Dimension | X | Y | Factual Basis |
|-----------|---|---|---------------|
| [Dimension 1] | [description] | [description] | Fact #1, #3 |
| ... | | | |
```

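The Initial Population table can be generated from structured rows instead of hand-typed. A sketch under the column layout above; `framework_table` and its tuple shape are assumptions, not part of the skill:

```python
def framework_table(x: str, y: str,
                    rows: list[tuple[str, str, str, list[int]]]) -> str:
    """Render the 'Initial Population' markdown table from
    (dimension, x_description, y_description, supporting_fact_numbers)."""
    lines = [f"| Dimension | {x} | {y} | Factual Basis |",
             "|-----------|---|---|---------------|"]
    for dim, xd, yd, facts in rows:
        basis = ", ".join(f"Fact #{n}" for n in facts)
        lines.append(f"| {dim} | {xd} | {yd} | {basis} |")
    return "\n".join(lines)
```

Forcing every row through the same tuple shape makes a missing "Factual Basis" entry an immediate, visible error rather than a silent omission.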
### Step 5: Reference Point Baseline Alignment

Ensure every item being compared has a clear, consistent definition:

**Checklist**:

- [ ] Is the reference point's definition stable and widely accepted?
- [ ] Does it need verification, or can domain common knowledge be used?
- [ ] Does the reader's understanding of the reference point match mine?
- [ ] Are there ambiguities that need to be clarified first?

### Step 6: Fact-to-Conclusion Reasoning Chain

Explicitly write out the "fact → comparison → conclusion" reasoning process:

```markdown
## Reasoning Process

### Regarding [Dimension Name]

1. **Fact confirmation**: According to [source], X's mechanism is...
2. **Compare with reference**: While Y's mechanism is...
3. **Conclusion**: Therefore, the difference between X and Y on this dimension is...
```

**Key discipline**:

- Conclusions come from mechanism comparison, not "gut feelings"
- Every conclusion must be traceable to specific facts
- Uncertain conclusions must be annotated

**📁 Save action**:

Write to `04_reasoning_chain.md`:

```markdown
# Reasoning Chain

## Dimension 1: [Dimension Name]

### Fact Confirmation
According to [Fact #X], X's mechanism is...

### Reference Comparison
While Y's mechanism is... (Source: [Fact #Y])

### Conclusion
Therefore, the difference between X and Y on this dimension is...

### Confidence
✅/⚠️/❓ + rationale

---

## Dimension 2: [Dimension Name]
...
```

### Step 7: Use-Case Validation (Sanity Check)

Validate conclusions against a typical scenario:

**Validation questions**:

- Based on my conclusions, how should this scenario be handled?
- Is that actually the case?
- Are there counterexamples that need to be addressed?

**Review checklist**:

- [ ] Are draft conclusions consistent with Step 3 fact cards?
- [ ] Were any important dimensions missed?
- [ ] Is there any over-extrapolation?
- [ ] Are conclusions actionable/verifiable?

**📁 Save action**:

Write to `05_validation_log.md`:

```markdown
# Validation Log

## Validation Scenario
[Scenario description]

## Expected Based on Conclusions
If using X: [expected behavior]
If using Y: [expected behavior]

## Actual Validation Results
[actual situation]

## Counterexamples
[yes/no, describe if yes]

## Review Checklist
- [x] Draft conclusions consistent with fact cards
- [x] No important dimensions missed
- [x] No over-extrapolation
- [ ] Issue found: [if any]

## Conclusions Requiring Revision
[if any]
```

### Step 8: Deliverable Formatting

Make the output **readable, traceable, and actionable**.

**📁 Save action**:

Integrate all intermediate artifacts. Write to `OUTPUT_DIR/solution_draft##.md` using the output template for the active mode:

- Mode A: `templates/solution_draft_mode_a.md`
- Mode B: `templates/solution_draft_mode_b.md`

Sources to integrate:

- Extract background from `00_question_decomposition.md`
- Reference key facts from `02_fact_cards.md`
- Organize conclusions from `04_reasoning_chain.md`
- Generate references from `01_source_registry.md`
- Supplement with use cases from `05_validation_log.md`
- For Mode A: include AC assessment from `00_ac_assessment.md`

## Solution Draft Output Templates

### Mode A: Initial Research Output

Use template: `templates/solution_draft_mode_a.md`

### Mode B: Solution Assessment Output

Use template: `templates/solution_draft_mode_b.md`

## Stakeholder Perspectives

Adjust content depth based on audience:

| Audience | Focus | Detail Level |
|----------|-------|--------------|
| **Decision-makers** | Conclusions, risks, recommendations | Concise, emphasize actionability |
| **Implementers** | Specific mechanisms, how-to | Detailed, emphasize how to do it |
| **Technical experts** | Details, boundary conditions, limitations | In-depth, emphasize accuracy |

## Output Files

Default intermediate artifacts location: `RESEARCH_DIR/<topic>/`

**Required files** (automatically generated through the process):

| File | Content | When Generated |
|------|---------|----------------|
| `00_ac_assessment.md` | AC & restrictions assessment (Mode A only) | After Phase 1 completion |
| `00_question_decomposition.md` | Question type, sub-question list | After Step 0-1 completion |
| `01_source_registry.md` | All source links and summaries | Continuously updated during Step 2 |
| `02_fact_cards.md` | Extracted facts and sources | Continuously updated during Step 3 |
| `03_comparison_framework.md` | Selected framework and populated data | After Step 4 completion |
| `04_reasoning_chain.md` | Fact → conclusion reasoning | After Step 6 completion |
| `05_validation_log.md` | Use-case validation and review | After Step 7 completion |
| `OUTPUT_DIR/solution_draft##.md` | Complete solution draft | After Step 8 completion |
| `OUTPUT_DIR/tech_stack.md` | Tech stack evaluation and decisions | After Phase 3 (optional) |
| `OUTPUT_DIR/security_analysis.md` | Threat model and security controls | After Phase 4 (optional) |

**Optional files**:

- `raw/*.md` - Raw source archives (saved when content is lengthy)

## Methodology Quick Reference Card

```
┌──────────────────────────────────────────────────────────────────┐
│             Deep Research — Mode-Aware 8-Step Method             │
├──────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (project vs standalone) + set paths        │
│ GUARDRAILS: Check INPUT_DIR/INPUT_FILE exists + required files   │
│ MODE DETECT: solution_draft*.md in 01_solution? → A or B         │
│                                                                  │
│ MODE A: Initial Research                                         │
│   Phase 1: AC & Restrictions Assessment (BLOCKING)               │
│   Phase 2: Full 8-step → solution_draft##.md                     │
│   Phase 3: Tech Stack Consolidation (OPTIONAL) → tech_stack.md   │
│   Phase 4: Security Deep Dive (OPTIONAL) → security_analysis.md  │
│                                                                  │
│ MODE B: Solution Assessment                                      │
│   Read latest draft → Full 8-step → solution_draft##.md (N+1)    │
│   Optional: Phase 3 / Phase 4 on revised draft                   │
│                                                                  │
│ 8-STEP ENGINE:                                                   │
│   0. Classify question type → Select framework template          │
│   1. Decompose question → mode-specific sub-questions            │
│   2. Tier sources → L1 Official > L2 Blog > L3 Media > L4        │
│   3. Extract facts → Each with source, confidence level          │
│   4. Build framework → Fixed dimensions, structured compare      │
│   5. Align references → Ensure unified definitions               │
│   6. Reasoning chain → Fact→Compare→Conclude, explicit           │
│   7. Use-case validation → Sanity check, prevent armchairing     │
│   8. Deliverable → solution_draft##.md (mode-specific format)    │
├──────────────────────────────────────────────────────────────────┤
│ Key discipline: Ask don't assume · Facts before reasoning        │
│                 Conclusions from mechanism, not gut feelings     │
└──────────────────────────────────────────────────────────────────┘
```

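The MODE DETECT line reduces to a single glob over OUTPUT_DIR. A sketch of that check, assuming the `solution_draft*.md` naming convention; `detect_mode` and the `force_initial` flag (which models the Example 4 override) are illustrative, not part of the skill:

```python
from pathlib import Path

def detect_mode(output_dir: Path, force_initial: bool = False):
    """Return ('A', None) for initial research, or ('B', latest_draft_path)
    when a solution_draft*.md already exists and no override was given."""
    # Lexicographic sort is enough because drafts use zero-padded
    # two-digit numbers (solution_draft01.md, solution_draft02.md, ...).
    drafts = sorted(output_dir.glob("solution_draft*.md"))
    if force_initial or not drafts:
        return ("A", None)
    return ("B", drafts[-1])
```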
## Usage Examples

### Example 1: Initial Research (Mode A)

```
User: Research this problem and find the best solution
```

Execution flow:

1. Context resolution: no explicit file → project mode (INPUT_DIR=`_docs/00_problem/`, OUTPUT_DIR=`_docs/01_solution/`)
2. Guardrails: verify INPUT_DIR exists with required files
3. Mode detection: no `solution_draft*.md` → Mode A
4. Phase 1: Assess acceptance criteria and restrictions, ask user about unclear parts
5. BLOCKING: present AC assessment, wait for user confirmation
6. Phase 2: Full 8-step research — competitors, components, state-of-the-art solutions
7. Output: `OUTPUT_DIR/solution_draft01.md`
8. (Optional) Phase 3: Tech stack consolidation → `tech_stack.md`
9. (Optional) Phase 4: Security deep dive → `security_analysis.md`

### Example 2: Solution Assessment (Mode B)

```
User: Assess the current solution draft
```

Execution flow:

1. Context resolution: no explicit file → project mode
2. Guardrails: verify INPUT_DIR exists
3. Mode detection: `solution_draft03.md` found in OUTPUT_DIR → Mode B, read it as input
4. Full 8-step research — weak points, security, performance, solutions
5. Output: `OUTPUT_DIR/solution_draft04.md` with findings table + revised draft

### Example 3: Standalone Research

```
User: /research @my_problem.md
```

Execution flow:

1. Context resolution: explicit file → standalone mode (INPUT_FILE=`my_problem.md`, OUTPUT_DIR=`_standalone/my_problem/01_solution/`)
2. Guardrails: verify INPUT_FILE exists and is non-empty, warn about missing restrictions/AC
3. Mode detection + full research flow as in Example 1, scoped to standalone paths
4. Output: `_standalone/my_problem/01_solution/solution_draft01.md`

### Example 4: Force Initial Research (Override)

```
User: Research from scratch, ignore existing drafts
```

Execution flow:

1. Context resolution: no explicit file → project mode
2. Mode detection: drafts exist, but user explicitly requested initial research → Mode A
3. Phase 1 + Phase 2 as in Example 1
4. Output: `OUTPUT_DIR/solution_draft##.md` (incremented from highest existing)

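The "incremented from highest existing" rule can be pinned down in a few lines. A sketch assuming the zero-padded two-digit `solution_draft##.md` convention; `next_draft_name` is a hypothetical helper, not part of the skill:

```python
import re
from pathlib import Path

def next_draft_name(output_dir: Path) -> str:
    """Return the next solution_draft##.md name, incrementing from the
    highest existing draft (solution_draft01.md when none exist)."""
    pattern = re.compile(r"solution_draft(\d+)\.md$")
    numbers = [
        int(m.group(1))
        for p in output_dir.glob("solution_draft*.md")
        if (m := pattern.match(p.name))
    ]
    return f"solution_draft{max(numbers, default=0) + 1:02d}.md"
```

Parsing the numbers instead of counting files means a deleted intermediate draft never causes a name collision.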
## Source Verifiability Requirements

**Core principle**: Every piece of external information cited in the report must be directly verifiable by the user.

**Mandatory rules**:

1. **URL Accessibility**:
   - All cited links must be publicly accessible (no login/paywall required)
   - If citing content that requires login, annotate `[login required]`
   - If citing academic papers, prefer publicly available versions (arXiv/DOI)

2. **Citation Precision**:
   - For long documents, specify the exact section/page/timestamp
   - Example: `[Source: OpenAI Blog, 2024-03-15, "GPT-4 Technical Report", §3.2 Safety]`
   - Video/audio citations need timestamps

3. **Content Correspondence**:
   - Cited facts must have corresponding statements in the original text
   - Never present an over-interpretation of the original text as a "citation"
   - If there is interpretation/inference, explicitly annotate "inferred based on [source]"

4. **Timeliness Annotation**:
   - Annotate the source's publication/update date
   - For technical docs, annotate the version number
   - Sources older than 2 years need a validity assessment

5. **Handling Unverifiable Information**:
   - If the information source cannot be publicly verified (e.g., private communication, paywalled report excerpts), annotate `[limited source]` in the confidence level
   - Unverifiable information cannot be the sole support for core conclusions

## Quality Checklist

Before completing the solution draft, check the following items:

### General Quality

- [ ] All core conclusions have L1/L2 tier factual support
- [ ] No vague words like "possibly" or "probably" without annotated uncertainty
- [ ] Comparison dimensions are complete with no key differences missed
- [ ] At least one real use case validates the conclusions
- [ ] References are complete with accessible links
- [ ] **Every citation can be directly verified by the user (source verifiability)**
- [ ] Structure hierarchy is clear; executives can quickly locate information

### Mode A Specific

- [ ] **Phase 1 completed**: AC assessment was presented to and confirmed by the user
- [ ] **AC assessment consistent**: Solution draft respects the (possibly adjusted) acceptance criteria and restrictions
- [ ] **Competitor analysis included**: Existing solutions were researched
- [ ] **All components have comparison tables**: Each component lists alternatives with tools, advantages, limitations, security, cost
- [ ] **Tools/libraries verified**: Suggested tools actually exist and work as described
- [ ] **Testing strategy covers AC**: Tests map to acceptance criteria
- [ ] **Tech stack documented** (if Phase 3 ran): `tech_stack.md` has evaluation tables, risk assessment, and learning requirements
- [ ] **Security analysis documented** (if Phase 4 ran): `security_analysis.md` has threat model and per-component controls

### Mode B Specific

- [ ] **Findings table complete**: All identified weak points documented with solutions
- [ ] **Weak point categories covered**: Functional, security, and performance assessed
- [ ] **New draft is self-contained**: Written as if from scratch, no "updated" markers
- [ ] **Performance column included**: Mode B comparison tables include performance characteristics
- [ ] **Previous draft issues addressed**: Every finding in the table is resolved in the new draft

### ⏰ Timeliness Check (High-Sensitivity Domain BLOCKING)

When the research topic has 🔴 Critical or 🟠 High sensitivity level, **the following checks must be completed**:

- [ ] **Timeliness sensitivity assessment completed**: `00_question_decomposition.md` contains a timeliness assessment section
- [ ] **Source timeliness annotated**: Every source has publication date, timeliness status, version info
- [ ] **No outdated sources used as factual evidence**:
  - 🔴 Critical: Core fact sources are all within 6 months
  - 🟠 High: Core fact sources are all within 1 year
- [ ] **Version numbers explicitly annotated**:
  - Technical product/API/SDK descriptions all annotate specific version numbers
  - No vague time expressions like "latest version" or "currently"
- [ ] **Official sources prioritized**: Core conclusions have support from official documentation/blogs
- [ ] **Cross-validation completed**: Key technical information confirmed from at least 2 independent sources
- [ ] **Download page directly verified**: Platform support info comes from real-time extraction of official download pages, not search caches
- [ ] **Protocol/feature names searched**: Searched for product-supported protocol names (MCP, ACP, etc.)
- [ ] **GitHub Issues mined**: Reviewed the product's most popular GitHub Issues discussions
- [ ] **Community hotspots identified**: Identified and recorded the feature points users care about most

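The 6-month and 1-year windows above are easy to check mechanically once every source carries an ISO publication date. A sketch; `is_stale` is a hypothetical helper, and 183 days is an approximation of "within 6 months":

```python
from datetime import date

# Maximum source age per sensitivity level, in days.
MAX_AGE_DAYS = {"critical": 183, "high": 365}

def is_stale(pub_date: str, sensitivity: str, today: date) -> bool:
    """True when a core-fact source is older than the window its
    sensitivity level allows ('critical' ≈ 6 months, 'high' = 1 year).
    Other sensitivity levels are never flagged."""
    limit = MAX_AGE_DAYS.get(sensitivity)
    if limit is None:
        return False
    return (today - date.fromisoformat(pub_date)).days > limit
```

A stale result only downgrades a source to "⚠️ Needs verification"; whether it stays in the registry is still a judgment call.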
**Typical community voice oversight error cases**:

> Wrong: Relying solely on official docs, so MCP is briefly mentioned as a regular feature in the report
> Correct: Discovered through GitHub Issues that MCP is the most hotly discussed feature in the community, and expanded the analysis of its value in the report

> Wrong: "Both Alma and Cherry Studio support MCP" (no difference analysis)
> Correct: Discovered through community discussion that "Alma's MCP implementation is highly consistent with Claude Code — this is its core competitive advantage"

**Typical platform support/protocol oversight error cases**:

> Wrong: "Alma only supports macOS" (based on search engine cached "Coming soon" info)
> Correct: Directly visited the alma.now/download page to verify currently supported platforms

> Wrong: "Alma supports MCP" (only searched MCP, missed ACP)
> Correct: Searched both "Alma MCP" and "Alma ACP", and discovered Alma also supports ACP protocol integration for CLI tools

**Typical timeliness error cases**:

> Wrong: "Claude supports function calling" (no version annotated; may refer to old version capabilities)
> Correct: "Claude 3.5 Sonnet (claude-3-5-sonnet-20241022) supports function calling via the Tool Use API, with a maximum of 8192 tokens for tool definitions"

> Wrong: "According to a 2023 blog post, GPT-4's context length is 8K"
> Correct: "As of January 2024, GPT-4 Turbo supports 128K context (Source: OpenAI official documentation, updated 2024-01-25)"

### ⚠️ Target Audience Consistency Check (BLOCKING)

This is the most easily overlooked and most critical check item:

- [ ] **Research boundary clearly defined**: `00_question_decomposition.md` has clear population/geography/timeframe/level boundaries
- [ ] **Every source has target audience annotated**: `01_source_registry.md` has "Target Audience" and "Research Boundary Match" fields for each source
- [ ] **Mismatched sources properly handled**:
  - Completely mismatched sources were not included
  - Partially overlapping sources have annotated applicable scope
  - Reference-only sources are explicitly annotated
- [ ] **No audience confusion in fact cards**: Every fact in `02_fact_cards.md` has a target audience consistent with the research boundary
- [ ] **No audience confusion in the report**: Policies/research/data cited in the solution draft have target audiences consistent with the research topic

**Typical error case**:

> Research topic: "University students not paying attention in class"
> Wrong citation: "In October 2025, the Ministry of Education banned phones in classrooms"
> Problem: That policy targets K-12 students, not university students
> Consequence: Readers mistakenly believe the Ministry of Education banned university students from carrying phones — severely misleading

## Final Reply Guidelines

When replying to the user after research is complete:

**✅ Should include**:

- Active mode used (A or B) and which optional phases were executed
- One-sentence core conclusion
- Key findings summary (3-5 points)
- Path to the solution draft: `OUTPUT_DIR/solution_draft##.md`
- Paths to optional artifacts if produced: `tech_stack.md`, `security_analysis.md`
- If there are significant uncertainties, annotate the points requiring further verification

**❌ Must not include**:

- Process file listings (e.g., `00_question_decomposition.md`, `01_source_registry.md`, etc.)
- Detailed research step descriptions
- Working directory structure display

**Reason**: Process files are for retrospective review, not for the user. The user cares about conclusions, not the process.