Mirror of https://github.com/azaion/gps-denied-onboard.git (synced 2026-04-23 11:16:37 +00:00)
# Quality Checklists — Reference
- [ ] All core conclusions have L1/L2-tier factual support
- [ ] Vague words such as "possibly" or "probably" are never used without an uncertainty annotation
- [ ] Comparison dimensions are complete; no key differences are missed
- [ ] At least one real use case validates the conclusions
- [ ] References are complete, with accessible links
- [ ] Every citation can be verified directly by the user (source verifiability)
- [ ] The structure hierarchy is clear; executives can locate information quickly
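The hedge-word item above can be spot-checked mechanically. Below is a minimal sketch: it flags lines that use hedge words without a nearby uncertainty annotation. The bracketed `[confidence: ...]` tag is an assumed convention for illustration, not one prescribed by this checklist; substitute whatever annotation format the report actually uses.

```python
import re

# Hedge words that should only appear alongside an uncertainty annotation.
HEDGES = re.compile(r"\b(possibly|probably|likely|maybe)\b", re.IGNORECASE)
# Assumed annotation convention; adapt to the report's actual format.
ANNOTATION = re.compile(r"\[confidence:[^\]]+\]")

def unannotated_hedges(report_text: str) -> list[int]:
    """Return 1-based line numbers that hedge without annotating uncertainty."""
    flagged = []
    for lineno, line in enumerate(report_text.splitlines(), start=1):
        if HEDGES.search(line) and not ANNOTATION.search(line):
            flagged.append(lineno)
    return flagged
```

A scan like this only narrows the manual review; a human still has to judge whether each flagged hedge is genuinely unsupported.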
## Internet Search Depth
- [ ] Every sub-question was searched with at least 3-5 different query variants
- [ ] At least 3 perspectives from the Perspective Rotation were applied and searched
- [ ] Search saturation reached: last searches stopped producing new substantive information
- [ ] Adjacent fields and analogous problems were searched, not just direct matches
- [ ] Contrarian viewpoints were actively sought ("why not X", "X criticism", "X failure")
- [ ] Practitioner experience was searched (production use, real-world results, lessons learned)
- [ ] Iterative deepening completed: follow-up questions from initial findings were searched
- [ ] No sub-question relies solely on training data without web verification
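The saturation criterion can be made concrete by tracking how many previously unseen sources each search round contributes, and stopping once recent rounds add essentially nothing. This is a hypothetical operationalization, not a procedure defined by this checklist; the round/threshold structure is an assumption.

```python
def saturation_reached(rounds: list[set[str]], window: int = 2, min_new: int = 1) -> bool:
    """True when each of the last `window` search rounds contributed fewer
    than `min_new` sources not already seen in any earlier round.

    `rounds` holds, per search round, the set of source identifiers
    (e.g. domains or URLs) that round returned.
    """
    if len(rounds) < window + 1:
        return False  # too few rounds to judge saturation
    seen: set[str] = set()
    new_counts = []
    for results in rounds:
        new_counts.append(len(results - seen))  # sources first seen this round
        seen |= results
    return all(n < min_new for n in new_counts[-window:])
```

With defaults, two consecutive rounds yielding zero new sources count as saturation; tighten `window` or `min_new` for more demanding research.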
## Mode A Specific
- [ ] Phase 1 completed: AC assessment was presented to and confirmed by the user
- [ ] AC assessment consistent: The solution draft respects the (possibly adjusted) acceptance criteria and restrictions
- [ ] Competitor analysis included: Existing solutions were researched
- [ ] All components have comparison tables: Each component lists alternatives with tools, advantages, limitations, security, and cost
- [ ] Tools/libraries verified: Suggested tools actually exist and work as described
- [ ] Testing strategy covers AC: Tests map to acceptance criteria
- [ ] Tech stack documented (if Phase 3 ran): `tech_stack.md` has evaluation tables, a risk assessment, and learning requirements
- [ ] Security analysis documented (if Phase 4 ran): `security_analysis.md` has a threat model and per-component controls
## Mode B Specific
- [ ] Findings table complete: All identified weak points documented with solutions
- [ ] Weak point categories covered: Functional, security, and performance assessed
- [ ] New draft is self-contained: Written as if from scratch, no "updated" markers
- [ ] Performance column included: Mode B comparison tables include performance characteristics
- [ ] Previous draft issues addressed: Every finding in the table is resolved in the new draft
## Timeliness Check (High-Sensitivity Domain BLOCKING)
When the research topic has a Critical or High sensitivity level:

- [ ] Timeliness sensitivity assessment completed: `00_question_decomposition.md` contains a timeliness assessment section
- [ ] Source timeliness annotated: Every source has a publication date, timeliness status, and version info
- [ ] No outdated sources used as factual evidence (Critical: within 6 months; High: within 1 year)
- [ ] Version numbers explicitly annotated for all technical products/APIs/SDKs
- [ ] Official sources prioritized: Core conclusions have support from official documentation/blogs
- [ ] Cross-validation completed: Key technical information confirmed by at least 2 independent sources
- [ ] Download page directly verified: Platform support info comes from real-time extraction of official download pages
- [ ] Protocol/feature names searched: Searched for the protocol names the product supports (MCP, ACP, etc.)
- [ ] GitHub Issues mined: Reviewed the most-discussed issues in the product's GitHub Issues
- [ ] Community hotspots identified: Identified and recorded the feature points users care about most
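The age limits stated above (Critical: 6 months; High: 1 year) are simple enough to check programmatically when sources carry publication dates. A minimal sketch, assuming dates are available as `datetime.date` values and approximating 6 months as 183 days:

```python
from datetime import date

# Maximum source age for factual evidence, per the checklist.
# Day counts approximate the stated windows.
MAX_AGE_DAYS = {"critical": 183, "high": 365}

def source_is_current(published: date, sensitivity: str, today: date) -> bool:
    """True if a source's publication date falls inside the allowed window.

    Sensitivity levels outside the table (e.g. medium/low) are not
    age-gated by this checklist, so they always pass.
    """
    limit = MAX_AGE_DAYS.get(sensitivity.lower())
    if limit is None:
        return True
    return (today - published).days <= limit
```

A source failing this check can still be cited as background, just not as sole factual evidence for a high-sensitivity conclusion.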
## Target Audience Consistency Check (BLOCKING)
- [ ] Research boundary clearly defined: `00_question_decomposition.md` has clear population/geography/timeframe/level boundaries
- [ ] Every source has its target audience annotated in `01_source_registry.md`
- [ ] Mismatched sources properly handled (excluded, annotated, or marked reference-only)
- [ ] No audience confusion in fact cards: Every fact has a target audience consistent with the research boundary
- [ ] No audience confusion in the report: Cited policies/research/data have consistent target audiences
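Since the boundary is defined along fixed dimensions (population/geography/timeframe/level), fact cards can be screened against it mechanically. The card schema below (an `id` plus optional audience fields matching the boundary keys) is an assumption for illustration, not the actual format of the fact-card files.

```python
BOUNDARY_KEYS = ("population", "geography", "timeframe", "level")

def audience_mismatches(boundary: dict, fact_cards: list[dict]) -> list[str]:
    """Return IDs of fact cards whose annotated audience disagrees with the
    research boundary on any dimension. Cards missing a dimension are not
    flagged for it (absence is handled by the annotation checks above)."""
    return [
        card["id"]
        for card in fact_cards
        if any(
            card.get(key) is not None and card[key] != boundary[key]
            for key in BOUNDARY_KEYS
        )
    ]
```

Flagged cards then go through the handling step above: exclude, annotate, or mark reference-only.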
## Source Verifiability
- [ ] All cited links are publicly accessible (annotate `[login required]` if not)
- [ ] Citations include the exact section/page/timestamp for long documents
- [ ] Cited facts have corresponding statements in the original text (no over-interpretation)
- [ ] Source publication/update dates are annotated; technical docs include version numbers
- [ ] Unverifiable information is annotated `[limited source]` and is not the sole support for core conclusions
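Several of these items reduce to "is this field present on the citation record". A minimal sketch of such a check, assuming a citation is a dict with `url`/`published`/`locator` fields plus optional `public` and `annotations` entries; the field names are illustrative, not a prescribed schema:

```python
# locator = section, page, or timestamp within a long document.
REQUIRED_FIELDS = ("url", "published", "locator")

def citation_gaps(citation: dict) -> list[str]:
    """Return the verifiability problems found in one citation record.

    The `[login required]` annotation rule mirrors the checklist above;
    the record schema itself is an assumption.
    """
    problems = [f"missing {field}" for field in REQUIRED_FIELDS if not citation.get(field)]
    if not citation.get("public", True) and "[login required]" not in citation.get("annotations", []):
        problems.append("non-public link lacks [login required] annotation")
    return problems
```

An empty result means the record is structurally complete; whether the cited fact matches the original text still requires a human check.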