---
name: deep-research
description: |
  Deep Research Methodology (8-Step Method) with two execution modes:
  - Mode A (Initial Research): Assess acceptance criteria, then research the problem and produce a solution draft
  - Mode B (Solution Assessment): Assess an existing solution draft for weak points and produce a revised draft
  Supports project mode (_docs/ structure) and standalone mode (@file.md).
  Auto-detects research mode based on existing solution_draft files.
  Trigger phrases:
  - "research", "deep research", "deep dive", "in-depth analysis"
  - "research this", "investigate", "look into"
  - "assess solution", "review solution draft"
  - "comparative analysis", "concept comparison", "technical comparison"
category: build
tags: [research, analysis, solution-design, comparison, decision-support]
---
# Deep Research (8-Step Method)

Transform vague topics raised by users into high-quality, deliverable research reports through a systematic methodology. Operates in two modes: **Initial Research** (produce a new solution draft) and **Solution Assessment** (assess and revise an existing draft).

## Core Principles

- **Conclusions come from mechanism comparison, not "gut feelings"**
- **Pin down the facts first, then reason**
- **Prioritize authoritative sources: L1 > L2 > L3 > L4**
- **Intermediate results must be saved for traceability and reuse**
- **Ask, don't assume** — when any aspect of the problem, criteria, or restrictions is unclear, STOP and ask the user before proceeding

## Context Resolution

Determine the operating mode based on invocation before any other logic runs.

**Project mode** (no explicit input file provided):

- INPUT_DIR: `_docs/00_problem/`
- OUTPUT_DIR: `_docs/01_solution/`
- RESEARCH_DIR: `_docs/00_research/`
- All existing guardrails, mode detection, and draft numbering apply as-is.

**Standalone mode** (explicit input file provided, e.g. `/research @some_doc.md`):

- INPUT_FILE: the provided file (treated as the problem description)
- OUTPUT_DIR: `_standalone/01_solution/`
- RESEARCH_DIR: `_standalone/00_research/`
- Guardrails relaxed: only INPUT_FILE must exist and be non-empty
- `restrictions.md` and `acceptance_criteria.md` are optional — warn if absent, proceed if the user confirms
- Mode detection uses OUTPUT_DIR for `solution_draft*.md` scanning
- Draft numbering works the same, scoped to OUTPUT_DIR
- **Final step**: after all research is complete, move INPUT_FILE into `_standalone/`

Announce the detected mode and resolved paths to the user before proceeding.
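To make this resolution concrete, here is a minimal Python sketch (illustrative only: the `ResearchContext` and `resolve_context` names are assumptions, not part of any existing tooling; the paths are the ones listed above):

```python
from dataclasses import dataclass
from pathlib import Path

@dataclass
class ResearchContext:
    mode: str                # "project" or "standalone"
    input_dir: Path | None   # problem files (project mode only)
    input_file: Path | None  # problem description (standalone mode only)
    output_dir: Path         # where solution_draft##.md files go
    research_dir: Path       # where intermediate artifacts go

def resolve_context(explicit_input: str | None = None) -> ResearchContext:
    """Resolve operating mode and paths from the invocation."""
    if explicit_input is None:
        # Project mode: fixed _docs/ layout.
        return ResearchContext(
            mode="project",
            input_dir=Path("_docs/00_problem"),
            input_file=None,
            output_dir=Path("_docs/01_solution"),
            research_dir=Path("_docs/00_research"),
        )
    # Standalone mode: the provided file is the problem description.
    return ResearchContext(
        mode="standalone",
        input_dir=None,
        input_file=Path(explicit_input),
        output_dir=Path("_standalone/01_solution"),
        research_dir=Path("_standalone/00_research"),
    )
```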
## Project Integration

### Prerequisite Guardrails (BLOCKING)

Before any research begins, verify the input context exists. **Do not proceed if guardrails fail.** (A sketch of these checks follows the lists below.)

**Project mode:**

1. Check INPUT_DIR exists — **STOP if missing**, ask the user to create it and provide problem files
2. Check `problem.md` in INPUT_DIR exists and is non-empty — **STOP if missing**
3. Check `restrictions.md` in INPUT_DIR exists and is non-empty — **STOP if missing**
4. Check `acceptance_criteria.md` in INPUT_DIR exists and is non-empty — **STOP if missing**
5. Check `input_data/` in INPUT_DIR exists and contains at least one file — **STOP if missing**
6. Read **all** files in INPUT_DIR to ground the investigation in the project context
7. Create OUTPUT_DIR and RESEARCH_DIR if they don't exist

**Standalone mode:**

1. Check INPUT_FILE exists and is non-empty — **STOP if missing**
2. Warn if no `restrictions.md` or `acceptance_criteria.md` were provided alongside INPUT_FILE — proceed if the user confirms
3. Create OUTPUT_DIR and RESEARCH_DIR if they don't exist
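A minimal sketch of the project-mode checks, assuming plain filesystem access (`check_project_guardrails` is a hypothetical helper name):

```python
from pathlib import Path

REQUIRED_PROJECT_FILES = ["problem.md", "restrictions.md", "acceptance_criteria.md"]

def check_project_guardrails(input_dir: Path) -> list[str]:
    """Return a list of blocking failures; an empty list means guardrails pass."""
    if not input_dir.is_dir():
        return [f"{input_dir} does not exist"]  # nothing else is checkable
    failures = []
    for name in REQUIRED_PROJECT_FILES:
        f = input_dir / name
        if not f.is_file() or f.stat().st_size == 0:
            failures.append(f"{f} missing or empty")
    input_data = input_dir / "input_data"
    if not input_data.is_dir() or not any(input_data.iterdir()):
        failures.append(f"{input_data} missing or contains no files")
    return failures
```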
### Mode Detection

After guardrails pass, determine the execution mode:

1. Scan OUTPUT_DIR for files matching `solution_draft*.md`
2. **No matches found** → **Mode A: Initial Research**
3. **Matches found** → **Mode B: Solution Assessment** (use the highest-numbered draft as input)
4. **User override**: if the user explicitly says "research from scratch" or "initial research", force Mode A regardless of existing drafts

Inform the user which mode was detected and confirm before proceeding.
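The detection rule is deterministic, so it can be sketched directly (hypothetical `detect_mode` helper; OUTPUT_DIR is assumed to exist, since the guardrails create it):

```python
from pathlib import Path

def detect_mode(output_dir: Path, force_initial: bool = False) -> tuple[str, Path | None]:
    """Return ("A", None) for initial research, or ("B", latest_draft) for assessment."""
    drafts = sorted(output_dir.glob("solution_draft*.md"))
    if force_initial or not drafts:
        return "A", None   # no drafts, or explicit user override: initial research
    # Lexicographic sort picks the highest draft because numbers are zero-padded
    # to 2 digits (see Solution Draft Numbering below).
    return "B", drafts[-1]
```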
### Solution Draft Numbering

All final output is saved as `OUTPUT_DIR/solution_draft##.md` with a 2-digit zero-padded number:

1. Scan existing files in OUTPUT_DIR matching `solution_draft*.md`
2. Extract the highest existing number
3. Increment by 1
4. Zero-pad to 2 digits (e.g., `01`, `02`, ..., `10`, `11`)

Example: if `solution_draft01.md` through `solution_draft10.md` exist, the next output is `solution_draft11.md`.
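A minimal numbering sketch under the same assumptions (hypothetical `next_draft_path` helper):

```python
import re
from pathlib import Path

def next_draft_path(output_dir: Path) -> Path:
    """Scan existing drafts, take the highest number, increment, zero-pad to 2 digits."""
    highest = 0
    for f in output_dir.glob("solution_draft*.md"):
        m = re.search(r"solution_draft(\d+)\.md$", f.name)
        if m:
            highest = max(highest, int(m.group(1)))
    return output_dir / f"solution_draft{highest + 1:02d}.md"
```

With `solution_draft01.md` through `solution_draft10.md` present, this returns `solution_draft11.md`, matching the example above.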
### Working Directory & Intermediate Artifact Management

#### Directory Structure

At the start of research, you **must** create a working directory under RESEARCH_DIR:

```
RESEARCH_DIR/
├── 00_ac_assessment.md           # Mode A Phase 1 output: AC & restrictions assessment
├── 00_question_decomposition.md  # Step 0-1 output
├── 01_source_registry.md         # Step 2 output: all consulted source links
├── 02_fact_cards.md              # Step 3 output: extracted facts
├── 03_comparison_framework.md    # Step 4 output: selected framework and populated data
├── 04_reasoning_chain.md         # Step 6 output: fact → conclusion reasoning
├── 05_validation_log.md          # Step 7 output: use-case validation results
└── raw/                          # Raw source archive (optional)
    ├── source_1.md
    └── source_2.md
```

#### Save Timing & Content

| Step | Save immediately after completion | Filename |
|------|-----------------------------------|----------|
| Mode A Phase 1 | AC & restrictions assessment tables | `00_ac_assessment.md` |
| Step 0-1 | Question type classification + sub-question list | `00_question_decomposition.md` |
| Step 2 | Each consulted source link, tier, summary | `01_source_registry.md` |
| Step 3 | Each fact card (statement + source + confidence) | `02_fact_cards.md` |
| Step 4 | Selected comparison framework + initial population | `03_comparison_framework.md` |
| Step 6 | Reasoning process for each dimension | `04_reasoning_chain.md` |
| Step 7 | Validation scenarios + results + review checklist | `05_validation_log.md` |
| Step 8 | Complete solution draft | `OUTPUT_DIR/solution_draft##.md` |
#### Save Principles

1. **Save immediately**: Write to the corresponding file as soon as a step completes; don't wait until the end
2. **Incremental updates**: The same file can be updated multiple times; append or replace content as needed
3. **Preserve process**: Keep intermediate files even after their content is integrated into the final report
4. **Enable recovery**: If research is interrupted, progress can be recovered from the intermediate files (see the sketch below)
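One way recovery could work, assuming the step-to-file mapping from the Save Timing table (the resume logic itself is an assumption, not prescribed by this skill):

```python
from pathlib import Path

# Ordered (artifact, step-it-belongs-to) pairs: the first missing or empty
# artifact marks where an interrupted research run should resume.
ARTIFACT_ORDER = [
    ("00_question_decomposition.md", "Step 0-1"),
    ("01_source_registry.md", "Step 2"),
    ("02_fact_cards.md", "Step 3"),
    ("03_comparison_framework.md", "Step 4"),
    ("04_reasoning_chain.md", "Step 6"),
    ("05_validation_log.md", "Step 7"),
]

def resume_point(research_dir: Path) -> str:
    """Return the first step whose intermediate artifact is missing or empty."""
    for filename, step in ARTIFACT_ORDER:
        f = research_dir / filename
        if not f.is_file() or f.stat().st_size == 0:
            return step
    return "Step 8"  # all intermediates present: only the deliverable remains
```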
## Execution Flow

### Mode A: Initial Research

Triggered when no `solution_draft*.md` files exist in OUTPUT_DIR, or when the user explicitly requests initial research.

#### Phase 1: AC & Restrictions Assessment (BLOCKING)

**Role**: Professional software architect

A focused preliminary research pass **before** the main solution research. The goal is to validate that the acceptance criteria and restrictions are realistic before designing a solution around them.

**Input**: All files from INPUT_DIR (or INPUT_FILE in standalone mode)

**Task**:

1. Read all problem context files thoroughly
2. **ASK the user about every unclear aspect** — do not assume:
   - Unclear problem boundaries → ask
   - Ambiguous acceptance criteria values → ask
   - Missing context (no `security_approach.md`, no `input_data/`) → ask what they have
   - Conflicting restrictions → ask which takes priority
3. Research the acceptance criteria on the internet:
   - How realistic are the acceptance criteria for this specific domain?
   - How critical is each criterion?
   - What domain-specific acceptance criteria are we missing?
   - Impact of each criterion value on overall system quality
   - Cost/budget implications of each criterion
   - Timeline implications — how long it would take to meet each criterion
4. Research the restrictions:
   - Are the restrictions realistic?
   - Should any be tightened or relaxed?
   - Are there additional restrictions we should add?
5. Verify findings with authoritative sources (official docs, papers, benchmarks)
**Uses Steps 0-3 of the 8-step engine** (question classification, decomposition, source tiering, fact extraction) scoped to AC and restrictions assessment.

**📁 Save action**: Write `RESEARCH_DIR/00_ac_assessment.md` with format:

```markdown
# Acceptance Criteria Assessment

## Acceptance Criteria

| Criterion | Our Values | Researched Values | Cost/Timeline Impact | Status |
|-----------|-----------|-------------------|---------------------|--------|
| [name] | [current] | [researched range] | [impact] | Added / Modified / Removed |

## Restrictions Assessment

| Restriction | Our Values | Researched Values | Cost/Timeline Impact | Status |
|-------------|-----------|-------------------|---------------------|--------|
| [name] | [current] | [researched range] | [impact] | Added / Modified / Removed |

## Key Findings
[Summary of critical findings]

## Sources
[Key references used]
```

**BLOCKING**: Present the AC assessment tables to the user. Wait for confirmation or adjustments before proceeding to Phase 2. The user may update `acceptance_criteria.md` or `restrictions.md` based on findings.
---

#### Phase 2: Problem Research & Solution Draft

**Role**: Professional researcher and software architect

Full 8-step research methodology. Produces the first solution draft.

**Input**: All files from INPUT_DIR (possibly updated after Phase 1) + Phase 1 artifacts

**Task** (drives the 8-step engine):

1. Research existing/competitor solutions for similar problems
2. Research the problem thoroughly — all possible ways to solve it, split into components
3. For each component, research all possible solutions and find the most efficient state-of-the-art approaches
4. Verify that suggested tools/libraries actually exist and work as described
5. Include security considerations in each component analysis
6. Provide rough cost estimates for proposed solutions

Formulate concisely: the fewer words the better, but do not omit any important details.

**📁 Save action**: Write `OUTPUT_DIR/solution_draft##.md` using template: `templates/solution_draft_mode_a.md`
---

#### Phase 3: Tech Stack Consolidation (OPTIONAL)

**Role**: Software architect evaluating technology choices

Focused synthesis step — no new 8-step cycle. Uses research already gathered in Phase 2 to make concrete technology decisions.

**Input**: Latest `solution_draft##.md` from OUTPUT_DIR + all files from INPUT_DIR

**Task**:

1. Extract technology options from the solution draft's component comparison tables
2. Score each option against: fitness for purpose, maturity, security track record, team expertise, cost, scalability (a scoring sketch follows below)
3. Produce a tech stack summary with selection rationale
4. Assess risks and learning requirements per technology choice

**📁 Save action**: Write `OUTPUT_DIR/tech_stack.md` with:

- Requirements analysis (functional, non-functional, constraints)
- Technology evaluation tables (language, framework, database, infrastructure, key libraries) with scores
- Tech stack summary block
- Risk assessment and learning requirements tables
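A sketch of how the scoring could be aggregated. The criteria come from the task list above; the weights, the 0-10 scale, and the example options are illustrative placeholders, not prescribed values:

```python
# Criteria from the Phase 3 task list; weights are illustrative assumptions.
CRITERIA_WEIGHTS = {
    "fitness": 0.30, "maturity": 0.20, "security": 0.20,
    "expertise": 0.10, "cost": 0.10, "scalability": 0.10,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Combine per-criterion scores (0-10) into one weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * scores.get(c, 0.0) for c in CRITERIA_WEIGHTS)

# Example: two hypothetical database options scored 0-10 per criterion.
options = {
    "postgres": {"fitness": 9, "maturity": 10, "security": 9,
                 "expertise": 7, "cost": 8, "scalability": 7},
    "dynamodb": {"fitness": 7, "maturity": 9, "security": 9,
                 "expertise": 4, "cost": 6, "scalability": 10},
}
ranked = sorted(options, key=lambda o: weighted_score(options[o]), reverse=True)
```

The ranked list feeds the evaluation tables in `tech_stack.md`; the rationale prose still has to explain why the weights fit this particular project.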
---

#### Phase 4: Security Deep Dive (OPTIONAL)

**Role**: Security architect

Focused analysis step — deepens the security column from the solution draft into a proper threat model and controls specification.

**Input**: Latest `solution_draft##.md` from OUTPUT_DIR + `security_approach.md` from INPUT_DIR + problem context

**Task**:

1. Build a threat model: asset inventory, threat actors, attack vectors
2. Define security requirements and proposed controls per component (with risk level)
3. Summarize the authentication/authorization, data protection, secure communication, and logging/monitoring approach

**📁 Save action**: Write `OUTPUT_DIR/security_analysis.md` with:

- Threat model (assets, actors, vectors)
- Per-component security requirements and controls table
- Security controls summary

---
### Mode B: Solution Assessment

Triggered when `solution_draft*.md` files exist in OUTPUT_DIR.

**Role**: Professional software architect

Full 8-step research methodology applied to assessing and improving an existing solution draft.

**Input**: All files from INPUT_DIR + the latest (highest-numbered) `solution_draft##.md` from OUTPUT_DIR

**Task** (drives the 8-step engine):

1. Read the existing solution draft thoroughly
2. Research on the internet — identify all potential weak points and problems
3. Identify security weak points and vulnerabilities
4. Identify performance bottlenecks
5. Address these problems and find ways to solve them
6. Based on the findings, form a new solution draft in the same format

**📁 Save action**: Write `OUTPUT_DIR/solution_draft##.md` (incremented) using template: `templates/solution_draft_mode_b.md`

**Optional follow-up**: After Mode B completes, the user can request Phase 3 (Tech Stack Consolidation) or Phase 4 (Security Deep Dive) on the revised draft. These phases work identically to their Mode A descriptions above.
## Escalation Rules

| Situation | Action |
|-----------|--------|
| Unclear problem boundaries | **ASK user** |
| Ambiguous acceptance criteria values | **ASK user** |
| Missing context files (`security_approach.md`, `input_data/`) | **ASK user** what they have |
| Conflicting restrictions | **ASK user** which takes priority |
| Technology choice with multiple valid options | **ASK user** |
| Contradictions between input files | **ASK user** |
| Missing acceptance criteria or restrictions files | **WARN user**, ask whether to proceed |
| File naming within research artifacts | PROCEED |
| Source tier classification | PROCEED |
## Trigger Conditions

When the user wants to:

- Deeply understand a concept/technology/phenomenon
- Compare similarities and differences between two or more things
- Gather information and evidence for a decision
- Assess or improve an existing solution draft

**Keywords**:

- "deep research", "deep dive", "in-depth analysis"
- "research this", "investigate", "look into"
- "assess solution", "review draft", "improve solution"
- "comparative analysis", "concept comparison", "technical comparison"

**Differentiation from other Skills**:

- Needs a **visual knowledge graph** → use `research-to-diagram`
- Needs **written output** (articles/tutorials) → use `wsy-writer`
- Needs **material organization** → use `material-to-markdown`
- Needs **research + solution draft** → use this Skill
## Research Engine (8-Step Method)

The 8-step method is the core research engine used by both modes. Steps 0-1 and Step 8 have mode-specific behavior; Steps 2-7 are identical regardless of mode.

### Step 0: Question Type Classification

First, classify the research question type and select the corresponding strategy:

| Question Type | Core Task | Focus Dimensions |
|---------------|-----------|------------------|
| **Concept Comparison** | Build comparison framework | Mechanism differences, applicability boundaries |
| **Decision Support** | Weigh trade-offs | Cost, risk, benefit |
| **Trend Analysis** | Map evolution trajectory | History, driving factors, predictions |
| **Problem Diagnosis** | Root cause analysis | Symptoms, causes, evidence chain |
| **Knowledge Organization** | Systematic structuring | Definitions, classifications, relationships |

**Mode-specific classification**:

| Mode / Phase | Typical Question Type |
|--------------|----------------------|
| Mode A Phase 1 | Knowledge Organization + Decision Support |
| Mode A Phase 2 | Decision Support |
| Mode B | Problem Diagnosis + Decision Support |
### Step 0.5: Novelty Sensitivity Assessment (BLOCKING)

Before starting research, assess the novelty sensitivity of the question (Critical/High/Medium/Low). This determines source time windows and filtering strategy.

**For the full classification table, critical-domain rules, trigger words, and assessment template**: Read `references/novelty-sensitivity.md`

Key principle: Critical-sensitivity topics (AI/LLMs, blockchain) require sources within 6 months, mandatory version annotations, cross-validation from 2+ sources, and direct verification of official download pages (the time-window check is sketched below).

**📁 Save action**: Append the timeliness assessment to the end of `00_question_decomposition.md`
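A sketch of the resulting time-window check. Only the 6-month window for Critical is stated here; the other windows are assumptions for illustration (see `references/novelty-sensitivity.md` for the authoritative table):

```python
from datetime import date, timedelta

# Maximum acceptable source age per sensitivity level. The 6-month window
# for "critical" comes from this document; the rest are assumed defaults.
MAX_AGE_DAYS = {"critical": 182, "high": 365, "medium": 730, "low": 1825}

def is_source_fresh(published: date, sensitivity: str, today: date | None = None) -> bool:
    """Check a source's publication date against the sensitivity time window."""
    today = today or date.today()
    return (today - published) <= timedelta(days=MAX_AGE_DAYS[sensitivity])
```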
---

### Step 1: Question Decomposition & Boundary Definition

**Mode-specific sub-questions**:

**Mode A Phase 2** (Initial Research — Problem & Solution):

- "What existing/competitor solutions address this problem?"
- "What are the component parts of this problem?"
- "For each component, what are the state-of-the-art solutions?"
- "What are the security considerations per component?"
- "What are the cost implications of each approach?"

**Mode B** (Solution Assessment):

- "What are the weak points and potential problems in the existing draft?"
- "What are the security vulnerabilities in the proposed architecture?"
- "Where are the performance bottlenecks?"
- "What solutions exist for each identified issue?"

**General sub-question patterns** (use when applicable):

- **Sub-question A**: "What is X and how does it work?" (Definition & mechanism)
- **Sub-question B**: "What are the dimensions of relationship/difference between X and Y?" (Comparative analysis)
- **Sub-question C**: "In what scenarios is X applicable/inapplicable?" (Boundary conditions)
- **Sub-question D**: "What are X's development trends/best practices?" (Extended analysis)
**⚠️ Research Subject Boundary Definition (BLOCKING - must be explicit)**:

When decomposing questions, you must explicitly define the **boundaries of the research subject**:

| Dimension | Boundary to define | Example |
|-----------|--------------------|---------|
| **Population** | Which group is being studied? | University students vs K-12 vs vocational students vs all students |
| **Geography** | Which region is being studied? | Chinese universities vs US universities vs global |
| **Timeframe** | Which period is being studied? | Post-2020 vs full historical picture |
| **Level** | Which level is being studied? | Undergraduate vs graduate vs vocational |

**Common mistake**: The user asks about "university classroom issues" but sources include policies targeting "K-12 students" — mismatched target populations will invalidate the entire research.

**📁 Save action**:

1. Read all files from INPUT_DIR to ground the research in the project context
2. Create the working directory `RESEARCH_DIR/`
3. Write `00_question_decomposition.md`, including:
   - Original question
   - Active mode (A Phase 2 or B) and rationale
   - Summary of relevant problem context from INPUT_DIR
   - Classified question type and rationale
   - **Research subject boundary definition** (population, geography, timeframe, level)
   - List of decomposed sub-questions
4. Use TodoWrite to track progress
### Step 2: Source Tiering & Authority Anchoring

Tier sources by authority and **prioritize primary sources** (L1 > L2 > L3 > L4). Conclusions must be traceable to L1/L2; L3/L4 serve as supplements and validation (a small prioritization sketch closes this step).

**For full tier definitions, search strategies, community mining steps, and source registry templates**: Read `references/source-tiering.md`

**Tool Usage**:

- Use `WebSearch` for broad searches; `WebFetch` to read specific pages
- Use the `context7` MCP server (`resolve-library-id` then `get-library-docs`) for up-to-date library/framework documentation
- Always cross-verify training-data claims against live sources for facts that may have changed (versions, APIs, deprecations, security advisories)
- When citing web sources, include the URL and date accessed

**📁 Save action**:

For each source consulted, **immediately** append to `01_source_registry.md` using the entry template from `references/source-tiering.md`.
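A small sketch of the tier priority as code (the `Source` record and helper names are illustrative, not part of the registry template):

```python
from dataclasses import dataclass

@dataclass
class Source:
    url: str
    tier: str     # "L1" (official) through "L4"
    summary: str

def prioritize(sources: list[Source]) -> list[Source]:
    """Order sources so L1/L2 evidence is consulted and cited first."""
    return sorted(sources, key=lambda s: s.tier)  # "L1" < "L2" < ... lexicographically

def has_primary_support(cited: list[Source]) -> bool:
    """A conclusion must be traceable to at least one L1/L2 source."""
    return any(s.tier in ("L1", "L2") for s in cited)
```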
### Step 3: Fact Extraction & Evidence Cards

Transform sources into **verifiable fact cards**:

```markdown
## Fact Cards

### Fact 1
- **Statement**: [specific fact description]
- **Source**: [link/document section]
- **Confidence**: High/Medium/Low

### Fact 2
...
```

**Key discipline**:

- Pin down facts first, then reason
- Distinguish "what officials said" from "what I infer"
- When conflicting information is found, annotate and preserve both sides
- Annotate confidence level:
  - ✅ High: Explicitly stated in official documentation
  - ⚠️ Medium: Mentioned in an official blog but not formally documented
  - ❓ Low: Inference or from unofficial sources

**📁 Save action**:

For each extracted fact, **immediately** append to `02_fact_cards.md` (see the append sketch at the end of this step):

```markdown
## Fact #[number]
- **Statement**: [specific fact description]
- **Source**: [Source #number] [link]
- **Phase**: [Phase 1 / Phase 2 / Assessment]
- **Target Audience**: [which group this fact applies to, inherited from the source or further refined]
- **Confidence**: ✅/⚠️/❓
- **Related Dimension**: [corresponding comparison dimension]
```

**⚠️ Target audience in fact statements**:

- If a fact comes from a "partially overlapping" or "reference only" source, the statement **must explicitly annotate the applicable scope**
- Wrong: "The Ministry of Education banned phones in classrooms" (doesn't specify who)
- Correct: "The Ministry of Education banned K-12 students from bringing phones into classrooms (does not apply to university students)"
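A minimal append sketch matching the card format above (`append_fact_card` is a hypothetical helper; the agent performs this save itself, and the code only illustrates the append-immediately discipline):

```python
from pathlib import Path

def append_fact_card(path: Path, number: int, statement: str, source_ref: str,
                     phase: str, audience: str, confidence: str, dimension: str) -> None:
    """Append one fact card to 02_fact_cards.md in the format shown above."""
    card = (
        f"\n## Fact #{number}\n"
        f"- **Statement**: {statement}\n"
        f"- **Source**: {source_ref}\n"
        f"- **Phase**: {phase}\n"
        f"- **Target Audience**: {audience}\n"
        f"- **Confidence**: {confidence}\n"
        f"- **Related Dimension**: {dimension}\n"
    )
    with path.open("a", encoding="utf-8") as f:
        f.write(card)
```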
### Step 4: Build Comparison/Analysis Framework

Based on the question type, select fixed analysis dimensions. **For dimension lists** (General, Concept Comparison, Decision Support): Read `references/comparison-frameworks.md`

**📁 Save action**:

Write to `03_comparison_framework.md`:

```markdown
# Comparison Framework

## Selected Framework Type
[Concept Comparison / Decision Support / ...]

## Selected Dimensions
1. [Dimension 1]
2. [Dimension 2]
...

## Initial Population
| Dimension | X | Y | Factual Basis |
|-----------|---|---|---------------|
| [Dimension 1] | [description] | [description] | Fact #1, #3 |
| ... | | | |
```
### Step 5: Reference Point Baseline Alignment

Ensure all compared parties have clear, consistent definitions:

**Checklist**:

- [ ] Is the reference point's definition stable/widely accepted?
- [ ] Does it need verification, or can domain common knowledge be used?
- [ ] Does the reader's understanding of the reference point match mine?
- [ ] Are there ambiguities that need to be clarified first?

### Step 6: Fact-to-Conclusion Reasoning Chain

Explicitly write out the "fact → comparison → conclusion" reasoning process:

```markdown
## Reasoning Process

### Regarding [Dimension Name]

1. **Fact confirmation**: According to [source], X's mechanism is...
2. **Compare with reference**: While Y's mechanism is...
3. **Conclusion**: Therefore, the difference between X and Y on this dimension is...
```

**Key discipline**:

- Conclusions come from mechanism comparison, not "gut feelings"
- Every conclusion must be traceable to specific facts
- Uncertain conclusions must be annotated
**📁 Save action**:

Write to `04_reasoning_chain.md`:

```markdown
# Reasoning Chain

## Dimension 1: [Dimension Name]

### Fact Confirmation
According to [Fact #X], X's mechanism is...

### Reference Comparison
While Y's mechanism is... (Source: [Fact #Y])

### Conclusion
Therefore, the difference between X and Y on this dimension is...

### Confidence
✅/⚠️/❓ + rationale

---

## Dimension 2: [Dimension Name]
...
```
### Step 7: Use-Case Validation (Sanity Check)

Validate conclusions against a typical scenario:

**Validation questions**:

- Based on my conclusions, how should this scenario be handled?
- Is that actually the case?
- Are there counterexamples that need to be addressed?

**Review checklist**:

- [ ] Are draft conclusions consistent with Step 3 fact cards?
- [ ] Are there any important dimensions missed?
- [ ] Is there any over-extrapolation?
- [ ] Are conclusions actionable/verifiable?

**📁 Save action**:

Write to `05_validation_log.md`:

```markdown
# Validation Log

## Validation Scenario
[Scenario description]

## Expected Based on Conclusions
If using X: [expected behavior]
If using Y: [expected behavior]

## Actual Validation Results
[actual situation]

## Counterexamples
[yes/no, describe if yes]

## Review Checklist
- [x] Draft conclusions consistent with fact cards
- [x] No important dimensions missed
- [x] No over-extrapolation
- [ ] Issue found: [if any]

## Conclusions Requiring Revision
[if any]
```
### Step 8: Deliverable Formatting

Make the output **readable, traceable, and actionable**.

**📁 Save action**:

Integrate all intermediate artifacts and write to `OUTPUT_DIR/solution_draft##.md` using the output template for the active mode:

- Mode A: `templates/solution_draft_mode_a.md`
- Mode B: `templates/solution_draft_mode_b.md`

Sources to integrate (a gathering sketch follows this list):

- Extract background from `00_question_decomposition.md`
- Reference key facts from `02_fact_cards.md`
- Organize conclusions from `04_reasoning_chain.md`
- Generate references from `01_source_registry.md`
- Supplement with use cases from `05_validation_log.md`
- For Mode A: include the AC assessment from `00_ac_assessment.md`
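A gathering sketch, assuming the artifact-to-section mapping above (the actual integration is editorial and follows the mode template; this only collects the inputs):

```python
from pathlib import Path

# Which intermediate artifact feeds which part of the draft, per the list above.
SECTIONS = [
    ("Background", "00_question_decomposition.md"),
    ("Key Facts", "02_fact_cards.md"),
    ("Conclusions", "04_reasoning_chain.md"),
    ("Use-Case Validation", "05_validation_log.md"),
    ("References", "01_source_registry.md"),
]

def gather_inputs(research_dir: Path) -> dict[str, str]:
    """Read every available intermediate artifact for integration into the draft."""
    out = {}
    for section, filename in SECTIONS:
        f = research_dir / filename
        if f.is_file():
            out[section] = f.read_text(encoding="utf-8")
    return out
```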
## Solution Draft Output Templates

### Mode A: Initial Research Output

Use template: `templates/solution_draft_mode_a.md`

### Mode B: Solution Assessment Output

Use template: `templates/solution_draft_mode_b.md`

## Stakeholder Perspectives

Adjust content depth based on audience:

| Audience | Focus | Detail Level |
|----------|-------|--------------|
| **Decision-makers** | Conclusions, risks, recommendations | Concise, emphasize actionability |
| **Implementers** | Specific mechanisms, how-to | Detailed, emphasize how to do it |
| **Technical experts** | Details, boundary conditions, limitations | In-depth, emphasize accuracy |
## Output Files

Default intermediate artifacts location: `RESEARCH_DIR/`

**Required files** (automatically generated through the process):

| File | Content | When Generated |
|------|---------|----------------|
| `00_ac_assessment.md` | AC & restrictions assessment (Mode A only) | After Phase 1 completion |
| `00_question_decomposition.md` | Question type, sub-question list | After Step 0-1 completion |
| `01_source_registry.md` | All source links and summaries | Continuously updated during Step 2 |
| `02_fact_cards.md` | Extracted facts and sources | Continuously updated during Step 3 |
| `03_comparison_framework.md` | Selected framework and populated data | After Step 4 completion |
| `04_reasoning_chain.md` | Fact → conclusion reasoning | After Step 6 completion |
| `05_validation_log.md` | Use-case validation and review | After Step 7 completion |
| `OUTPUT_DIR/solution_draft##.md` | Complete solution draft | After Step 8 completion |
| `OUTPUT_DIR/tech_stack.md` | Tech stack evaluation and decisions | After Phase 3 (optional) |
| `OUTPUT_DIR/security_analysis.md` | Threat model and security controls | After Phase 4 (optional) |

**Optional files**:

- `raw/*.md` - Raw source archives (saved when content is lengthy)
## Methodology Quick Reference Card

```
┌──────────────────────────────────────────────────────────────────┐
│ Deep Research — Mode-Aware 8-Step Method                         │
├──────────────────────────────────────────────────────────────────┤
│ CONTEXT: Resolve mode (project vs standalone) + set paths        │
│ GUARDRAILS: Check INPUT_DIR/INPUT_FILE exists + required files   │
│ MODE DETECT: solution_draft*.md in 01_solution? → A or B         │
│                                                                  │
│ MODE A: Initial Research                                         │
│   Phase 1: AC & Restrictions Assessment (BLOCKING)               │
│   Phase 2: Full 8-step → solution_draft##.md                     │
│   Phase 3: Tech Stack Consolidation (OPTIONAL) → tech_stack.md   │
│   Phase 4: Security Deep Dive (OPTIONAL) → security_analysis.md  │
│                                                                  │
│ MODE B: Solution Assessment                                      │
│   Read latest draft → Full 8-step → solution_draft##.md (N+1)    │
│   Optional: Phase 3 / Phase 4 on revised draft                   │
│                                                                  │
│ 8-STEP ENGINE:                                                   │
│   0. Classify question type → Select framework template          │
│   1. Decompose question → mode-specific sub-questions            │
│   2. Tier sources → L1 Official > L2 Blog > L3 Media > L4        │
│   3. Extract facts → Each with source, confidence level          │
│   4. Build framework → Fixed dimensions, structured compare      │
│   5. Align references → Ensure unified definitions               │
│   6. Reasoning chain → Fact→Compare→Conclude, explicit           │
│   7. Use-case validation → Sanity check, prevent armchairing     │
│   8. Deliverable → solution_draft##.md (mode-specific format)    │
├──────────────────────────────────────────────────────────────────┤
│ Key discipline: Ask don't assume · Facts before reasoning        │
│                 Conclusions from mechanism, not gut feelings     │
└──────────────────────────────────────────────────────────────────┘
```
## Usage Examples

For detailed execution flow examples (Mode A initial, Mode B assessment, standalone, force override): Read `references/usage-examples.md`

## Source Verifiability Requirements

Every cited piece of external information must be directly verifiable by the user. All links must be publicly accessible (annotate `[login required]` if not), citations must include the exact section/page/timestamp, and unverifiable information must be annotated `[limited source]`. Full checklist in `references/quality-checklists.md`.
## Quality Checklist

Before completing the solution draft, run through the checklists in `references/quality-checklists.md`. They cover:

- General quality (L1/L2 support, verifiability, actionability)
- Mode A specific (AC assessment, competitor analysis, component tables, tech stack)
- Mode B specific (findings table, self-contained draft, performance column)
- Timeliness check for high-sensitivity domains (version annotations, cross-validation, community mining)
- Target audience consistency (boundary definition, source matching, fact card audience)

## Final Reply Guidelines

When replying to the user after research is complete:

**✅ Should include**:

- Active mode used (A or B) and which optional phases were executed
- One-sentence core conclusion
- Key findings summary (3-5 points)
- Path to the solution draft: `OUTPUT_DIR/solution_draft##.md`
- Paths to optional artifacts if produced: `tech_stack.md`, `security_analysis.md`
- If there are significant uncertainties, annotate the points requiring further verification

**❌ Must not include**:

- Process file listings (e.g., `00_question_decomposition.md`, `01_source_registry.md`, etc.)
- Detailed research step descriptions
- Working directory structure display

**Reason**: Process files are for retrospective review, not for the user. The user cares about conclusions, not the process.
|