
Comparison & Analysis Frameworks — Reference

General Dimensions (select as needed)

  1. Goal / What problem does it solve
  2. Working mechanism / Process
  3. Input / Output / Boundaries
  4. Advantages / Disadvantages / Trade-offs
  5. Applicable scenarios / Boundary conditions
  6. Cost / Benefit / Risk
  7. Historical evolution / Future trends
  8. Security / Permissions / Controllability

Concept Comparison Specific Dimensions

  1. Definition & essence
  2. Trigger / invocation method
  3. Execution agent
  4. Input/output & type constraints
  5. Determinism & repeatability
  6. Resource & context management
  7. Composition & reuse patterns
  8. Security boundaries & permission control

Decision Support Specific Dimensions

  1. Solution overview
  2. Implementation cost
  3. Maintenance cost
  4. Risk assessment
  5. Expected benefit
  6. Applicable scenarios
  7. Team capability requirements
  8. Migration difficulty
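
The decision-support dimensions above can be operationalized as a simple weighted scoring matrix. A minimal Python sketch, where the dimension names, weights, and candidate scores are all illustrative assumptions rather than values prescribed by this reference:

```python
# Illustrative weighted scoring over (a subset of) the decision-support
# dimensions. Weights and scores are made-up examples, not recommendations.
DIMENSIONS = {
    "implementation_cost": -1.0,   # cost dimensions count against a solution
    "maintenance_cost": -1.0,
    "risk": -1.5,
    "expected_benefit": 2.0,
    "team_capability_fit": 1.0,
    "migration_difficulty": -0.5,
}

def score(candidate: dict[str, float]) -> float:
    """Weighted sum over the dimensions; higher is better."""
    return sum(weight * candidate.get(dim, 0.0)
               for dim, weight in DIMENSIONS.items())

# Two hypothetical solutions rated 0-5 on each dimension.
solution_a = {"implementation_cost": 3, "maintenance_cost": 2, "risk": 1,
              "expected_benefit": 4, "team_capability_fit": 3,
              "migration_difficulty": 2}
solution_b = {"implementation_cost": 1, "maintenance_cost": 1, "risk": 2,
              "expected_benefit": 3, "team_capability_fit": 4,
              "migration_difficulty": 1}

best = max([("A", solution_a), ("B", solution_b)], key=lambda p: score(p[1]))
```

A numeric matrix like this is a tie-breaker, not a decision: the qualitative dimensions (solution overview, applicable scenarios) still need prose treatment.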

Decomposition Completeness Probes (Completeness Audit Reference)

Used during Step 1's Decomposition Completeness Audit. After generating sub-questions, check each probe against the current decomposition. If a probe reveals an uncovered area, add a sub-question for it.

| Probe | What it catches |
| --- | --- |
| What does this cost — in money, time, resources, or trade-offs? | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| What are the hard constraints — physical, legal, regulatory, environmental? | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| What are the dependencies and assumptions that could break? | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| What does the operating environment actually look like? | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| What failure modes exist and what happens when they trigger? | Degraded operation, fallback, safety margins, blast radius, recovery time |
| What do practitioners who solved similar problems say matters most? | Field-tested priorities that don't appear in specs or papers |
| What changes over time — and what looks stable now but isn't? | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |
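
The audit loop above can be sketched as a short procedure. A minimal Python sketch, assuming a keyword-overlap check as a stand-in for the human (or model) judgment that decides whether a probe's area is already covered:

```python
# Completeness-audit sketch: run each probe against the current
# decomposition and append a sub-question for any uncovered area.
# Probe wording is paraphrased; the topic-keyword coverage check is
# an illustrative assumption, not part of the reference itself.
PROBES = [
    ("cost", "What does this cost in money, time, resources, or trade-offs?"),
    ("constraints", "What are the hard constraints: physical, legal, regulatory?"),
    ("dependencies", "What dependencies and assumptions could break?"),
    ("environment", "What does the operating environment actually look like?"),
    ("failure", "What failure modes exist and what happens when they trigger?"),
    ("practitioners", "What do practitioners with similar problems say matters?"),
    ("change", "What changes over time, and what looks stable now but isn't?"),
]

def audit(sub_questions: list[str]) -> list[str]:
    """Return the decomposition extended with one sub-question per uncovered probe."""
    extended = list(sub_questions)
    for topic, probe in PROBES:
        covered = any(topic in q.lower() for q in extended)
        if not covered:
            extended.append(probe)
    return extended
```

For example, `audit(["What is the cost model?"])` leaves the cost probe alone (already covered) and appends sub-questions for the six uncovered areas.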