mirror of
https://github.com/azaion/gps-denied-desktop.git
synced 2026-04-23 03:46:36 +00:00
Update coding standards and testing rules; enhance clarity on coverage thresholds and test data requirements. Adjust commit message guidelines for better readability and enforce sound notifications for user interactions.
@@ -32,3 +32,17 @@
6. Applicable scenarios
7. Team capability requirements
8. Migration difficulty

## Decomposition Completeness Probes (Completeness Audit Reference)

Used during Step 1's Decomposition Completeness Audit. After generating sub-questions, ask each probe against the current decomposition. If a probe reveals an uncovered area, add a sub-question for it.

| Probe | What it catches |
|-------|-----------------|
| **What does this cost — in money, time, resources, or trade-offs?** | Budget, pricing, licensing, tax, opportunity cost, maintenance burden |
| **What are the hard constraints — physical, legal, regulatory, environmental?** | Regulations, certifications, spectrum/frequency rules, export controls, physics limits, IP restrictions |
| **What are the dependencies and assumptions that could break?** | Supply chain, vendor lock-in, API stability, single points of failure, standards evolution |
| **What does the operating environment actually look like?** | Terrain, weather, connectivity, infrastructure, power, latency, user skill level |
| **What failure modes exist and what happens when they trigger?** | Degraded operation, fallback, safety margins, blast radius, recovery time |
| **What do practitioners who solved similar problems say matters most?** | Field-tested priorities that don't appear in specs or papers |
| **What changes over time — and what looks stable now but isn't?** | Technology roadmaps, regulatory shifts, deprecation risk, scaling effects |

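If the probes were applied programmatically rather than read off the table, they could be held as data with a helper that reports what is still uncovered. This is an illustrative sketch, not part of the methodology; the key names and the `uncovered_probes` helper are hypothetical.

```python
# Hypothetical sketch: the probe table as data, plus a coverage report.
# Keys are shorthand labels for the probes in the table above.
PROBES = {
    "cost": "What does this cost -- in money, time, resources, or trade-offs?",
    "constraints": "What are the hard constraints -- physical, legal, regulatory, environmental?",
    "dependencies": "What are the dependencies and assumptions that could break?",
    "environment": "What does the operating environment actually look like?",
    "failure_modes": "What failure modes exist and what happens when they trigger?",
    "practitioners": "What do practitioners who solved similar problems say matters most?",
    "change_over_time": "What changes over time -- and what looks stable now but isn't?",
}

def uncovered_probes(coverage: dict[str, str]) -> list[str]:
    """Return probe keys with neither a 'covered' nor a 'not applicable' verdict."""
    return [key for key in PROBES
            if coverage.get(key) not in ("covered", "not applicable")]
```

Each key the helper returns would then need a new sub-question (or an explicit "not applicable" justification) before the audit passes.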
@@ -10,6 +10,12 @@
- [ ] Every citation can be directly verified by the user (source verifiability)
- [ ] Structure hierarchy is clear; executives can quickly locate information

## Decomposition Completeness

- [ ] Domain discovery search executed: searched "key factors when [problem domain]" before starting research
- [ ] Completeness probes applied: every probe from `references/comparison-frameworks.md` checked against sub-questions
- [ ] No uncovered areas remain: all gaps filled with sub-questions or justified as not applicable

## Internet Search Depth

- [ ] Every sub-question was searched with at least 3-5 different query variants

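As one illustrative way to enforce the variant rule in this checklist, assuming sub-questions are tracked as a mapping from question to its list of query variants (the function name and data shape are assumptions, not part of the checklist):

```python
# Hypothetical check: flag sub-questions that carry fewer query variants
# than the checklist's minimum of 3.
def questions_missing_variants(plan: dict[str, list[str]], minimum: int = 3) -> list[str]:
    """Return sub-questions whose search-query variant list is too short."""
    return [question for question, variants in plan.items()
            if len(variants) < minimum]
```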
@@ -97,6 +97,16 @@ When decomposing questions, you must explicitly define the **boundaries of the r

**Common mistake**: User asks about "university classroom issues" but sources include policies targeting "K-12 students" — mismatched target populations will invalidate the entire research.

#### Decomposition Completeness Audit (MANDATORY)

After generating sub-questions, verify the decomposition covers all major dimensions of the problem — not just the ones that came to mind first.

1. **Domain discovery search**: Search the web for "key factors when [problem domain]" / "what to consider when [problem domain]" (e.g., "key factors GPS-denied navigation", "what to consider when choosing an edge deployment strategy"). Extract dimensions that practitioners and domain experts consider important but are absent from the current sub-questions.
2. **Run completeness probes**: Walk through each probe in `references/comparison-frameworks.md` → "Decomposition Completeness Probes" against the current sub-question list. For each probe, note whether it is covered, not applicable (state why), or missing.
3. **Fill gaps**: Add sub-questions (with search query variants) for any uncovered area. Do this before proceeding to Step 2.

Record the audit result in `00_question_decomposition.md` as a "Completeness Audit" section.

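The recording step could be sketched as a small renderer that turns per-probe verdicts into the markdown section the file is expected to contain. This is a sketch under assumptions: the verdict labels and the function name are illustrative, and only the "Completeness Audit" heading and the target filename come from the text above.

```python
# Hypothetical sketch: render per-probe verdicts into the "Completeness Audit"
# section that 00_question_decomposition.md is expected to contain.
def render_audit_section(verdicts: dict[str, str]) -> str:
    """One bullet per probe: '- <probe>: <verdict>'."""
    lines = ["## Completeness Audit", ""]
    for probe, verdict in verdicts.items():
        lines.append(f"- {probe}: {verdict}")
    return "\n".join(lines)
```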
**Save action**:
1. Read all files from INPUT_DIR to ground the research in the project context
2. Create working directory `RESEARCH_DIR/`
@@ -109,6 +119,7 @@ When decomposing questions, you must explicitly define the **boundaries of the r
- List of decomposed sub-questions
- **Chosen perspectives** (at least 3 from the Perspective Rotation table) with rationale
- **Search query variants** for each sub-question (at least 3-5 per sub-question)
- **Completeness audit** (taxonomy cross-reference + domain discovery results)
4. Write TodoWrite to track progress

---
