Mirror of https://github.com/azaion/ai-training.git (synced 2026-04-22 22:56:34 +00:00)
Comparison & Analysis Frameworks — Reference
General Dimensions (select as needed)
- Goal / What problem does it solve
- Working mechanism / Process
- Input / Output / Boundaries
- Advantages / Disadvantages / Trade-offs
- Applicable scenarios / Boundary conditions
- Cost / Benefit / Risk
- Historical evolution / Future trends
- Security / Permissions / Controllability
Concept Comparison Specific Dimensions
- Definition & essence
- Trigger / invocation method
- Execution agent
- Input/output & type constraints
- Determinism & repeatability
- Resource & context management
- Composition & reuse patterns
- Security boundaries & permission control
Decision Support Specific Dimensions
- Solution overview
- Implementation cost
- Maintenance cost
- Risk assessment
- Expected benefit
- Applicable scenarios
- Team capability requirements
- Migration difficulty
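The document lists the decision-support dimensions but does not prescribe how to combine them. One common way to operationalize such a list is a weighted scoring matrix; the sketch below is illustrative only — the weights, the 1–5 ratings, and the option names are invented for the example, not taken from this reference.

```python
# Illustrative weighted scoring over the decision-support dimensions above.
# Weights are assumed values that sum to 1.0; ratings are on a 1-5 scale
# where higher is better (so "cost" dimensions are rated as cost-efficiency).
DIMENSIONS = [
    ("implementation cost", 0.20),
    ("maintenance cost",    0.15),
    ("risk",                0.15),
    ("expected benefit",    0.25),
    ("scenario fit",        0.10),
    ("team capability fit", 0.10),
    ("migration ease",      0.05),
]

def weighted_score(ratings):
    """ratings: dimension name -> 1..5. Returns the weighted total (max 5.0)."""
    return sum(weight * ratings[name] for name, weight in DIMENSIONS)

# Two hypothetical options rated against every dimension, in list order.
option_a = {name: r for (name, _), r in zip(DIMENSIONS, [4, 4, 3, 3, 4, 5, 4])}
option_b = {name: r for (name, _), r in zip(DIMENSIONS, [2, 3, 4, 5, 4, 3, 2])}
```

The point of the matrix is not the arithmetic but the forcing function: every dimension must receive an explicit rating, so an option cannot win on "expected benefit" while its migration difficulty goes unexamined.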
Decomposition Completeness Probes (Completeness Audit Reference)
Used during Step 1's Decomposition Completeness Audit. After generating sub-questions, run each probe against the current decomposition; if a probe reveals an uncovered area, add a sub-question to cover it.
| Probe | What it catches |
|---|---|
| What does this cost — in money, time, resources, or trade-offs? | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| What are the hard constraints — physical, legal, regulatory, environmental? | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| What are the dependencies and assumptions that could break? | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| What does the operating environment actually look like? | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| What failure modes exist and what happens when they trigger? | Degraded operation, fallback, safety margins, blast radius, recovery time |
| What do practitioners who solved similar problems say matters most? | Field-tested priorities that don't appear in specs or papers |
| What changes over time — and what looks stable now but isn't? | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |
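The audit procedure above is a simple loop: for each probe, check whether the current sub-questions already cover it, and append a new sub-question if not. A minimal sketch, assuming the coverage judgment itself is supplied by the caller (a human reviewer or an LLM) — the `is_covered` callback and the `audit_decomposition` name are illustrative, not part of this reference:

```python
# Probe texts copied from the table above, keyed by a short label.
PROBES = {
    "cost": "What does this cost -- in money, time, resources, or trade-offs?",
    "constraints": "What are the hard constraints -- physical, legal, regulatory, environmental?",
    "dependencies": "What are the dependencies and assumptions that could break?",
    "environment": "What does the operating environment actually look like?",
    "failure_modes": "What failure modes exist and what happens when they trigger?",
    "practitioners": "What do practitioners who solved similar problems say matters most?",
    "change": "What changes over time -- and what looks stable now but isn't?",
}

def audit_decomposition(sub_questions, is_covered):
    """Return sub_questions plus one new sub-question per uncovered probe.

    is_covered(probe_text, sub_questions) -> bool is the caller's judgment
    of whether the decomposition already addresses the probed area.
    """
    result = list(sub_questions)
    for label, probe in PROBES.items():
        if not is_covered(probe, result):
            result.append(f"[{label}] {probe}")
    return result
```

For example, if the coverage check rejects everything, all seven probes are appended; if it accepts everything, the decomposition is returned unchanged.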