# Comparison & Analysis Frameworks — Reference
## General Dimensions (select as needed)

1. Goal / What problem does it solve
2. Working mechanism / Process
3. Input / Output / Boundaries
4. Advantages / Disadvantages / Trade-offs
5. Applicable scenarios / Boundary conditions
6. Cost / Benefit / Risk
7. Historical evolution / Future trends
8. Security / Permissions / Controllability

## Concept Comparison Specific Dimensions

1. Definition & essence
2. Trigger / invocation method
3. Execution agent
4. Input/output & type constraints
5. Determinism & repeatability
6. Resource & context management
7. Composition & reuse patterns
8. Security boundaries & permission control

## Decision Support Specific Dimensions

1. Solution overview
2. Implementation cost
3. Maintenance cost
4. Risk assessment
5. Expected benefit
6. Applicable scenarios
7. Team capability requirements
8. Migration difficulty

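One common way to put the decision-support dimensions above to work is a weighted scoring matrix. The sketch below is illustrative only: the weights, the 1-5 favorability scores, and the choice to treat "solution overview" as descriptive (and therefore unscored) are all assumptions, not part of this reference.

```python
# Hypothetical weighted-scoring sketch over the decision-support dimensions.
# Each option is scored 1-5 per dimension, where 5 is always the most
# favorable (so a cheap option scores HIGH on "implementation cost").
# Weights are illustrative and must sum to 1.0.
DIMENSION_WEIGHTS = {
    "implementation cost": 0.15,
    "maintenance cost": 0.15,
    "risk": 0.20,
    "expected benefit": 0.25,
    "scenario fit": 0.10,
    "team capability fit": 0.10,
    "migration ease": 0.05,
}

def weighted_score(scores: dict[str, float]) -> float:
    """Weighted sum of favorability scores; higher is better."""
    return sum(DIMENSION_WEIGHTS[d] * scores[d] for d in DIMENSION_WEIGHTS)

# Two made-up options for illustration.
option_a = {"implementation cost": 4, "maintenance cost": 3, "risk": 4,
            "expected benefit": 3, "scenario fit": 5,
            "team capability fit": 4, "migration ease": 5}
option_b = {"implementation cost": 2, "maintenance cost": 4, "risk": 3,
            "expected benefit": 5, "scenario fit": 4,
            "team capability fit": 3, "migration ease": 2}

print("A:", weighted_score(option_a), "B:", weighted_score(option_b))
```

The numeric output is only as good as the scores and weights feeding it; the matrix is best used to make disagreements explicit, not to replace judgment.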
## Decomposition Completeness Probes (Completeness Audit Reference)

Used during Step 1's Decomposition Completeness Audit. After generating sub-questions, ask each probe against the current decomposition. If a probe reveals an uncovered area, add a sub-question for it.

| Probe | What it catches |
|-------|-----------------|
| **What does this cost — in money, time, resources, or trade-offs?** | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| **What are the hard constraints — physical, legal, regulatory, environmental?** | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| **What are the dependencies and assumptions that could break?** | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| **What does the operating environment actually look like?** | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| **What failure modes exist and what happens when they trigger?** | Degraded operation, fallback, safety margins, blast radius, recovery time |
| **What do practitioners who solved similar problems say matters most?** | Field-tested priorities that don't appear in specs or papers |
| **What changes over time — and what looks stable now but isn't?** | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |
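The audit loop above (generate sub-questions, run each probe, add a sub-question for any uncovered area) can be sketched in code. Everything in this sketch is an illustrative assumption: the `Decomposition` structure, the keyword cues standing in for human judgment about coverage, and the placeholder text for new sub-questions.

```python
from dataclasses import dataclass, field

@dataclass
class Decomposition:
    """A question broken down into sub-questions (hypothetical structure)."""
    question: str
    sub_questions: list[str] = field(default_factory=list)

# Probe names and keyword cues, condensed from the table above.
# Real use would rely on a human (or model) judging coverage, not keywords.
PROBES = {
    "cost": ["cost", "price", "budget", "licensing", "maintenance"],
    "constraints": ["legal", "regulat", "physical", "certif", "restrict"],
    "dependencies": ["depend", "assumption", "vendor", "supply"],
    "environment": ["environment", "terrain", "connectivity", "infrastructure"],
    "failure": ["fail", "fallback", "recovery", "degrad"],
    "practitioners": ["practitioner", "field-tested", "experience"],
    "change": ["roadmap", "deprecat", "trend", "shift", "over time"],
}

def audit(decomp: Decomposition) -> list[str]:
    """Return the probes that no current sub-question appears to cover."""
    text = " ".join(decomp.sub_questions).lower()
    return [name for name, cues in PROBES.items()
            if not any(cue in text for cue in [name] + cues)]

def apply_audit(decomp: Decomposition) -> None:
    """Add a placeholder sub-question for each uncovered probe."""
    for probe in audit(decomp):
        decomp.sub_questions.append(
            f"[{probe}] TODO: draft a sub-question for this probe")
```

For example, a decomposition whose only sub-question is "What does X cost per seat?" would pass the cost probe but flag the other six, and `apply_audit` would append one placeholder per gap for a human to refine.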