mirror of
https://github.com/azaion/flights.git
synced 2026-04-22 21:56:32 +00:00
# Comparison & Analysis Frameworks — Reference
## General Dimensions (select as needed)
- Goal / What problem does it solve
- Working mechanism / Process
- Input / Output / Boundaries
- Advantages / Disadvantages / Trade-offs
- Applicable scenarios / Boundary conditions
- Cost / Benefit / Risk
- Historical evolution / Future trends
- Security / Permissions / Controllability
## Concept Comparison: Specific Dimensions
- Definition & essence
- Trigger / invocation method
- Execution agent
- Input/output & type constraints
- Determinism & repeatability
- Resource & context management
- Composition & reuse patterns
- Security boundaries & permission control
## Decision Support: Specific Dimensions
- Solution overview
- Implementation cost
- Maintenance cost
- Risk assessment
- Expected benefit
- Applicable scenarios
- Team capability requirements
- Migration difficulty
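The decision-support dimensions above lend themselves to a weighted scoring matrix. A minimal sketch in Python, where the dimension subset, weights, option names, and 0-10 ratings are all illustrative assumptions, not values prescribed by this reference:

```python
# Hypothetical weights: negative means "more of this dimension is worse".
# These are illustrative assumptions, not part of the reference.
DIMENSIONS = {
    "implementation_cost": -0.2,
    "maintenance_cost": -0.2,
    "risk": -0.2,
    "expected_benefit": 0.3,
    "migration_difficulty": -0.1,
}

def score(ratings: dict[str, float]) -> float:
    """Weighted sum of per-dimension ratings (each rated 0-10)."""
    return sum(weight * ratings.get(dim, 0.0)
               for dim, weight in DIMENSIONS.items())

# Two hypothetical options, rated per dimension.
options = {
    "rewrite": {"implementation_cost": 8, "maintenance_cost": 3,
                "risk": 7, "expected_benefit": 9, "migration_difficulty": 8},
    "incremental": {"implementation_cost": 4, "maintenance_cost": 5,
                    "risk": 3, "expected_benefit": 6, "migration_difficulty": 2},
}

# Rank options by score, best first.
ranked = sorted(options, key=lambda name: score(options[name]), reverse=True)
```

A matrix like this makes the trade-offs explicit and auditable, but the weights themselves remain a judgment call the team must defend.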
## Decomposition Completeness Probes (Completeness Audit Reference)
Use these probes during Step 1's Decomposition Completeness Audit. After generating sub-questions, check each probe against the current decomposition; whenever a probe reveals an uncovered area, add a sub-question that covers it.
| Probe | What it catches |
|---|---|
| What does this cost — in money, time, resources, or trade-offs? | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| What are the hard constraints — physical, legal, regulatory, environmental? | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| What are the dependencies and assumptions that could break? | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| What does the operating environment actually look like? | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| What failure modes exist and what happens when they trigger? | Degraded operation, fallback, safety margins, blast radius, recovery time |
| What do practitioners who solved similar problems say matters most? | Field-tested priorities that don't appear in specs or papers |
| What changes over time — and what looks stable now but isn't? | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |
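One lightweight way to mechanize the audit loop is to treat each probe as a set of coverage keywords and flag any probe that no sub-question touches. A minimal sketch, assuming hypothetical probe names, keyword lists, and a naive substring match (a real audit relies on judgment, not keyword matching):

```python
# Illustrative probe -> keyword mapping; both sides are assumptions.
PROBES = {
    "cost": ["budget", "pricing", "licensing", "maintenance"],
    "hard constraints": ["regulation", "certification", "legal"],
    "dependencies": ["supply chain", "vendor", "api stability"],
    "environment": ["terrain", "connectivity", "power"],
    "failure modes": ["fallback", "recovery", "degraded"],
}

def audit(sub_questions: list[str]) -> list[str]:
    """Return the probes not covered by any sub-question."""
    text = " ".join(q.lower() for q in sub_questions)
    return [probe for probe, keywords in PROBES.items()
            if not any(k in text for k in keywords)]

# Each probe the audit flags should become a new sub-question.
gaps = audit([
    "What is the licensing budget?",
    "Which regulations apply?",
])
```

Here `gaps` names the probes still uncovered, which is exactly the signal Step 1 needs before declaring the decomposition complete.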