ai-training/.cursor/skills/research/steps/04_engine-analysis.md
Oleksandr Bezdieniezhnykh 8db19cc60a Add .cursor AI autodevelopment harness (agents, skills, rules)
Made-with: Cursor
2026-03-26 01:06:54 +02:00


# Research Engine — Analysis Phase (Steps 4–8)

## Step 4: Build Comparison/Analysis Framework

Based on the question type, select a fixed set of analysis dimensions. For the dimension lists (General, Concept Comparison, Decision Support), read `references/comparison-frameworks.md`.

Save action: write to `03_comparison_framework.md`:

```markdown
# Comparison Framework

## Selected Framework Type
[Concept Comparison / Decision Support / ...]

## Selected Dimensions
1. [Dimension 1]
2. [Dimension 2]
...

## Initial Population
| Dimension | X | Y | Factual Basis |
|-----------|---|---|---------------|
| [Dimension 1] | [description] | [description] | Fact #1, #3 |
| ... | | | |
```
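The Initial Population table can be generated from structured dimension data instead of being written by hand. A minimal Python sketch; the dimensions and descriptions below are illustrative placeholders, not research findings:

```python
# Hypothetical sketch: render the Step 4 "Initial Population" table
# from structured dimension data. Rows follow the template columns:
# Dimension, X, Y, Factual Basis.

rows = [
    # (dimension, description of X, description of Y, factual basis)
    ("Concurrency model", "event loop", "thread pool", "Fact #1, #3"),
    ("Memory footprint", "low", "moderate", "Fact #2"),
]

header = "| Dimension | X | Y | Factual Basis |\n|-----------|---|---|---------------|"
table = "\n".join([header] + [f"| {d} | {x} | {y} | {basis} |" for d, x, y, basis in rows])
print(table)
```

Keeping the factual basis as a separate column in the data makes the "traceable to specific facts" rule of Step 6 easy to audit later.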

## Step 5: Reference Point Baseline Alignment

Ensure all compared parties have clear, consistent definitions:

Checklist:

- Is the reference point's definition stable/widely accepted?
- Does it need verification, or can domain common knowledge be used?
- Does the reader's understanding of the reference point match mine?
- Are there ambiguities that need to be clarified first?

## Step 6: Fact-to-Conclusion Reasoning Chain

Explicitly write out the "fact → comparison → conclusion" reasoning process:

```markdown
## Reasoning Process

### Regarding [Dimension Name]

1. **Fact confirmation**: According to [source], X's mechanism is...
2. **Compare with reference**: While Y's mechanism is...
3. **Conclusion**: Therefore, the difference between X and Y on this dimension is...
```

Key discipline:

- Conclusions come from mechanism comparison, not "gut feelings"
- Every conclusion must be traceable to specific facts
- Uncertain conclusions must be annotated

Save action: write to `04_reasoning_chain.md`:

```markdown
# Reasoning Chain

## Dimension 1: [Dimension Name]

### Fact Confirmation
According to [Fact #X], X's mechanism is...

### Reference Comparison
While Y's mechanism is... (Source: [Fact #Y])

### Conclusion
Therefore, the difference between X and Y on this dimension is...

### Confidence
✅/⚠️/❓ + rationale

---
## Dimension 2: [Dimension Name]
...
```
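The "every conclusion must be traceable to specific facts" rule can be checked mechanically. A hypothetical Python sketch that flags dimension sections citing no fact card; the section format follows the reasoning-chain template, and the sample text is invented for illustration:

```python
import re

# Hypothetical sketch: scan a reasoning-chain document for
# "## Dimension" sections that contain no "Fact #N" citation,
# enforcing the Step 6 traceability discipline.

def untraceable_dimensions(reasoning_md: str) -> list[str]:
    """Return headings of sections that cite no Fact #N."""
    sections = re.split(r"(?m)^## ", reasoning_md)[1:]
    missing = []
    for section in sections:
        heading = section.splitlines()[0]
        if not re.search(r"Fact #\d+", section):
            missing.append(heading)
    return missing

sample = (
    "## Dimension 1: Latency\n"
    "According to [Fact #2], X's mechanism is event-driven...\n"
    "\n"
    "## Dimension 2: Cost\n"
    "Y is cheaper.\n"
)
print(untraceable_dimensions(sample))  # → ['Dimension 2: Cost']
```

Running such a check before Step 7 turns the discipline list into a gate rather than a reminder.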

## Step 7: Use-Case Validation (Sanity Check)

Validate conclusions against a typical scenario:

Validation questions:

- Based on my conclusions, how should this scenario be handled?
- Does the actual behavior match that prediction?
- Are there counterexamples that need to be addressed?

Review checklist:

- Are the draft conclusions consistent with the Step 3 fact cards?
- Were any important dimensions missed?
- Is there any over-extrapolation?
- Are the conclusions actionable and verifiable?

Save action: write to `05_validation_log.md`:

```markdown
# Validation Log

## Validation Scenario
[Scenario description]

## Expected Based on Conclusions
If using X: [expected behavior]
If using Y: [expected behavior]

## Actual Validation Results
[actual situation]

## Counterexamples
[yes/no, describe if yes]

## Review Checklist
- [x] Draft conclusions consistent with fact cards
- [x] No important dimensions missed
- [x] No over-extrapolation
- [ ] Issue found: [if any]

## Conclusions Requiring Revision
[if any]
```

## Step 8: Deliverable Formatting

Make the output readable, traceable, and actionable.

Save action: integrate all intermediate artifacts and write to `OUTPUT_DIR/solution_draft##.md`, using the output template that matches the active mode:

- Mode A: `templates/solution_draft_mode_a.md`
- Mode B: `templates/solution_draft_mode_b.md`

Sources to integrate:

- Extract background from `00_question_decomposition.md`
- Reference key facts from `02_fact_cards.md`
- Organize conclusions from `04_reasoning_chain.md`
- Generate references from `01_source_registry.md`
- Supplement with use cases from `05_validation_log.md`
- For Mode A: include AC assessment from `00_ac_assessment.md`
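The integration step can be sketched as a small script that concatenates whatever artifacts exist into the draft. `OUTPUT_DIR`, the separator, and the two-digit draft numbering are assumptions for illustration; the real layout comes from the mode templates:

```python
from pathlib import Path

# Hypothetical sketch of the Step 8 integration: gather the
# intermediate artifacts, in reading order, into one draft file.
# Missing artifacts are skipped rather than treated as errors.

ARTIFACTS = [
    "00_question_decomposition.md",  # background
    "02_fact_cards.md",              # key facts
    "04_reasoning_chain.md",         # conclusions
    "05_validation_log.md",          # use cases
    "01_source_registry.md",         # references
]

def assemble_draft(output_dir: Path, draft_no: int = 1) -> Path:
    parts = []
    for name in ARTIFACTS:
        path = output_dir / name
        if path.exists():  # skip artifacts a run did not produce
            parts.append(path.read_text(encoding="utf-8"))
    draft = output_dir / f"solution_draft{draft_no:02d}.md"
    draft.write_text("\n\n---\n\n".join(parts), encoding="utf-8")
    return draft
```

A plain concatenation like this only produces the raw material; the mode A/B templates still govern the final headings, AC assessment placement, and reference formatting.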