Mirror of https://github.com/azaion/ai-training.git (synced 2026-04-23 01:46:36 +00:00)
Quality Checklists — Reference
General Quality
- All core conclusions have L1/L2 tier factual support
- Vague words like "possibly" or "probably" are not used without an explicit uncertainty annotation
- Comparison dimensions are complete with no key differences missed
- At least one real use case validates conclusions
- References are complete with accessible links
- Every citation can be directly verified by the user (source verifiability)
- Structure hierarchy is clear; executives can quickly locate information
Decomposition Completeness
- Domain discovery search executed: searched "key factors when [problem domain]" before starting research
- Completeness probes applied: every probe from `references/comparison-frameworks.md` checked against sub-questions
- No uncovered areas remain: all gaps filled with sub-questions or justified as not applicable
Internet Search Depth
- Every sub-question was searched with at least 3 (ideally 5) different query variants
- At least 3 perspectives from the Perspective Rotation were applied and searched
- Search saturation reached: last searches stopped producing new substantive information
- Adjacent fields and analogous problems were searched, not just direct matches
- Contrarian viewpoints were actively sought ("why not X", "X criticism", "X failure")
- Practitioner experience was searched (production use, real-world results, lessons learned)
- Iterative deepening completed: follow-up questions from initial findings were searched
- No sub-question relies solely on training data without web verification
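The search-depth checks above can be audited mechanically. A minimal sketch, assuming a hypothetical search log that maps each sub-question to the query variants actually issued (the log format and function name are illustrative, not part of the checklist):

```python
# Minimum distinct query variants per sub-question (per the checklist above).
MIN_VARIANTS = 3

def unmet_search_depth(search_log: dict[str, list[str]]) -> list[str]:
    """Return sub-questions whose distinct logged query variants fall short.

    Duplicate queries are collapsed with set(), so re-running the same
    query does not count toward the minimum.
    """
    return [
        question
        for question, variants in search_log.items()
        if len(set(variants)) < MIN_VARIANTS
    ]

log = {
    "latency targets": ["p99 latency SLO", "latency SLO criticism",
                        "latency SLO production lessons"],
    "cost model": ["cloud egress pricing"],       # only one variant so far
}
print(unmet_search_depth(log))  # ['cost model']
```

A check like this only verifies variant counts; saturation and perspective coverage still need human judgment.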
Mode A Specific
- Phase 1 completed: AC assessment was presented to and confirmed by the user
- AC assessment consistent: Solution draft respects the (possibly adjusted) acceptance criteria and restrictions
- Competitor analysis included: Existing solutions were researched
- All components have comparison tables: Each component lists alternatives with tools, advantages, limitations, security, cost
- Tools/libraries verified: Suggested tools actually exist and work as described
- Testing strategy covers AC: Tests map to acceptance criteria
- Tech stack documented (if Phase 3 ran): `tech_stack.md` has evaluation tables, risk assessment, and learning requirements
- Security analysis documented (if Phase 4 ran): `security_analysis.md` has threat model and per-component controls
Mode B Specific
- Findings table complete: All identified weak points documented with solutions
- Weak point categories covered: Functional, security, and performance assessed
- New draft is self-contained: Written as if from scratch, no "updated" markers
- Performance column included: Mode B comparison tables include performance characteristics
- Previous draft issues addressed: Every finding in the table is resolved in the new draft
Timeliness Check (High-Sensitivity Domain BLOCKING)
When the research topic has a Critical or High sensitivity level:
- Timeliness sensitivity assessment completed: `00_question_decomposition.md` contains a timeliness assessment section
- Source timeliness annotated: every source has publication date, timeliness status, and version info
- No outdated sources used as factual evidence (Critical: within 6 months; High: within 1 year)
- Version numbers explicitly annotated for all technical products/APIs/SDKs
- Official sources prioritized: Core conclusions have support from official documentation/blogs
- Cross-validation completed: Key technical information confirmed from at least 2 independent sources
- Download page directly verified: Platform support info comes from real-time extraction of official download pages
- Protocol/feature names searched: Searched for product-supported protocol names (MCP, ACP, etc.)
- GitHub Issues mined: Reviewed the most popular discussions in the product's GitHub Issues
- Community hotspots identified: Identified and recorded feature points users care most about
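The freshness cutoffs above (Critical: within 6 months; High: within 1 year) can be sketched as a small helper. The function and dictionary names are illustrative assumptions, not part of the checklist's own format:

```python
from datetime import date, timedelta

# Freshness windows per sensitivity level, per the checklist above.
FRESHNESS_WINDOWS = {
    "critical": timedelta(days=183),   # ~6 months
    "high": timedelta(days=365),       # 1 year
}

def source_is_fresh(published: date, sensitivity: str, today: date) -> bool:
    """True if a source may still be used as factual evidence."""
    window = FRESHNESS_WINDOWS.get(sensitivity.lower())
    if window is None:                 # lower sensitivity: no hard cutoff here
        return True
    return today - published <= window

today = date(2026, 4, 23)
print(source_is_fresh(date(2026, 1, 1), "critical", today))  # True
print(source_is_fresh(date(2025, 6, 1), "critical", today))  # False
print(source_is_fresh(date(2025, 6, 1), "high", today))      # True
```

A date-based check is a floor, not a ceiling: a recent source can still describe an outdated version, which is why version numbers must be annotated separately.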
Target Audience Consistency Check (BLOCKING)
- Research boundary clearly defined: `00_question_decomposition.md` has clear population/geography/timeframe/level boundaries
- Every source has its target audience annotated in `01_source_registry.md`
- Mismatched sources properly handled (excluded, annotated, or marked reference-only)
- No audience confusion in fact cards: Every fact has target audience consistent with research boundary
- No audience confusion in the report: Policies/research/data cited have consistent target audiences
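An audience-consistency audit over fact cards can be sketched as follows. The boundary set and the fact-card shape are assumptions for illustration; the checklist does not prescribe a storage format:

```python
# Assumed research boundary: the audiences the research question covers.
RESEARCH_BOUNDARY = {"US adults"}

def audience_mismatches(fact_cards: list[dict]) -> list[str]:
    """IDs of fact cards whose target audience falls outside the boundary."""
    return [
        card["id"]
        for card in fact_cards
        if card["audience"] not in RESEARCH_BOUNDARY
    ]

cards = [
    {"id": "F1", "audience": "US adults"},
    {"id": "F2", "audience": "EU teenagers"},  # mismatch: exclude or flag
]
print(audience_mismatches(cards))  # ['F2']
```

Flagged cards should then be excluded, annotated, or downgraded to reference-only, mirroring the source-handling rule above.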
Source Verifiability
- All cited links are publicly accessible (annotate `[login required]` if not)
- Citations include the exact section/page/timestamp for long documents
- Cited facts have corresponding statements in the original text (no over-interpretation)
- Source publication/update dates annotated; technical docs include version numbers
- Unverifiable information is annotated `[limited source]` and is not the sole support for core conclusions