mirror of
https://github.com/azaion/ai-training.git
synced 2026-04-23 03:26:36 +00:00
Refine coding standards and testing guidelines
- Updated the coding rule descriptions to emphasize readability, meaningful comments, and test verification.
- Revised guidelines to clarify the importance of avoiding boilerplate while maintaining readability.
- Enhanced the testing rules to set a minimum coverage threshold of 75% for business logic and specified criteria for test scenarios.
- Introduced a mechanism for handling skipped tests, categorizing them as legitimate or illegitimate, and outlined resolution steps.

These changes aim to improve code quality, maintainability, and testing effectiveness.
@@ -32,3 +32,17 @@
6. Applicable scenarios
7. Team capability requirements
8. Migration difficulty

## Decomposition Completeness Probes (Completeness Audit Reference)

Used during Step 1's Decomposition Completeness Audit. After generating sub-questions, ask each probe against the current decomposition. If a probe reveals an uncovered area, add a sub-question for it.

| Probe | What it catches |
|-------|-----------------|
| **What does this cost — in money, time, resources, or trade-offs?** | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| **What are the hard constraints — physical, legal, regulatory, environmental?** | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| **What are the dependencies and assumptions that could break?** | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| **What does the operating environment actually look like?** | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| **What failure modes exist and what happens when they trigger?** | Degraded operation, fallback, safety margins, blast radius, recovery time |
| **What do practitioners who solved similar problems say matters most?** | Field-tested priorities that don't appear in specs or papers |
| **What changes over time — and what looks stable now but isn't?** | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |
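The audit loop described above can be sketched as a short procedure. This is a minimal illustration, not part of the documented workflow: the abbreviated probe strings, the `audit` function, and the `covers` judgment callback are all hypothetical stand-ins (in practice, coverage would be judged by a person or a model, not string matching).

```python
# Hypothetical sketch of the Decomposition Completeness Audit loop.
# Probe texts are abbreviated from the table above; `covers` is a
# stand-in for whatever judgment decides that a sub-question
# already addresses a probe.

PROBES = [
    "What does this cost?",
    "What are the hard constraints?",
    "What dependencies and assumptions could break?",
    "What does the operating environment look like?",
    "What failure modes exist?",
    "What do practitioners say matters most?",
    "What changes over time?",
]

def audit(sub_questions, covers):
    """Return sub-questions extended with one entry per uncovered probe."""
    audited = list(sub_questions)
    for probe in PROBES:
        # If no existing sub-question covers this probe, the probe has
        # revealed an uncovered area: add a sub-question for it.
        if not any(covers(q, probe) for q in audited):
            audited.append(f"Sub-question derived from probe: {probe}")
    return audited
```

The key property is that the audit only ever adds sub-questions; it never removes or rewrites the ones generated in Step 1.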
@@ -10,6 +10,12 @@
- [ ] Every citation can be directly verified by the user (source verifiability)
- [ ] Structure hierarchy is clear; executives can quickly locate information

## Decomposition Completeness

- [ ] Domain discovery search executed: searched "key factors when [problem domain]" before starting research
- [ ] Completeness probes applied: every probe from `references/comparison-frameworks.md` checked against sub-questions
- [ ] No uncovered areas remain: all gaps filled with sub-questions or justified as not applicable

## Internet Search Depth

- [ ] Every sub-question was searched with at least 3-5 different query variants
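One way to satisfy the 3-5 query-variant floor is to expand each sub-question mechanically before searching. The function and templates below are hypothetical, a sketch of the idea only; real variants would be phrased per sub-question, not generated from fixed suffixes.

```python
# Hypothetical sketch: expand one sub-question into several search
# query variants so it is searched from more than one angle.
# The templates are illustrative, not part of the checklist itself.

def query_variants(sub_question: str, min_variants: int = 3) -> list[str]:
    templates = [
        "{q}",
        "{q} best practices",
        "{q} common pitfalls",
        "{q} case study",
        "{q} comparison",
    ]
    variants = [t.format(q=sub_question) for t in templates]
    # Enforce the checklist floor of 3-5 variants per sub-question.
    assert len(variants) >= min_variants
    return variants
```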