mirror of
https://github.com/azaion/gps-denied-desktop.git
synced 2026-04-23 07:06:36 +00:00
Update decomposition skill documentation and templates; remove obsolete feature specification and initial structure templates. Enhance task decomposition descriptions and adjust directory paths for project documentation.
This commit is contained in:
@@ -1,6 +1,6 @@
# Architecture Document Template

-Use this template for the architecture document. Save as `_docs/02_plans/<topic>/architecture.md`.
+Use this template for the architecture document. Save as `_docs/02_document/architecture.md`.

---
@@ -0,0 +1,78 @@
# Blackbox Tests Template

Save as `DOCUMENT_DIR/tests/blackbox-tests.md`.

---

```markdown
# Blackbox Tests

## Positive Scenarios

### FT-P-01: [Scenario Name]

**Summary**: [One sentence: what black-box use case this validates]
**Traces to**: AC-[ID], AC-[ID]
**Category**: [which AC category — e.g., Position Accuracy, Image Processing, etc.]

**Preconditions**:
- [System state required before test]

**Input data**: [reference to specific data set or file from test-data.md]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|----------------|------------------------|
| 1 | [call / send / provide input] | [response / event / output] |
| 2 | [call / send / provide input] | [response / event / output] |

**Expected outcome**: [specific, measurable result]
**Max execution time**: [e.g., 10s]

---

### FT-P-02: [Scenario Name]

(repeat structure)

---

## Negative Scenarios

### FT-N-01: [Scenario Name]

**Summary**: [One sentence: what invalid/edge input this tests]
**Traces to**: AC-[ID] (negative case), RESTRICT-[ID]
**Category**: [which AC/restriction category]

**Preconditions**:
- [System state required before test]

**Input data**: [reference to specific invalid data or edge case]

**Steps**:

| Step | Consumer Action | Expected System Response |
|------|----------------|------------------------|
| 1 | [provide invalid input / trigger edge case] | [error response / graceful degradation / fallback behavior] |

**Expected outcome**: [system rejects gracefully / falls back to X / returns error Y]
**Max execution time**: [e.g., 5s]

---

### FT-N-02: [Scenario Name]

(repeat structure)
```

---

## Guidance Notes

- Blackbox tests should typically trace to at least one acceptance criterion or restriction. Tests without a trace are allowed but should have a clear justification.
- Positive scenarios validate the system does what it should.
- Negative scenarios validate the system rejects or handles gracefully what it shouldn't accept.
- Expected outcomes must be specific and measurable — not "works correctly" but "returns position within 50m of ground truth."
- Input data references should point to specific entries in test-data.md.
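The "specific and measurable" rule above can be made concrete as an executable check. A minimal Python sketch (the helper name, the (x, y)-in-metres convention, and the 50 m threshold are illustrative assumptions, not project API):

```python
import math

def assert_position_within(actual, truth, max_error_m=50.0):
    """Fail unless the estimated position is within max_error_m of ground truth.

    Positions are (x, y) tuples in metres in a shared local frame
    (an assumed convention for this sketch, not a project API).
    """
    error = math.dist(actual, truth)
    assert error <= max_error_m, (
        f"position error {error:.1f} m exceeds {max_error_m} m limit"
    )
    return error

# Measurable outcome: pass/fail is unambiguous and the metric is reportable.
error_m = assert_position_within((103.0, 44.0), (100.0, 40.0))  # 5.0 m
```

An outcome phrased this way can go straight into the **Expected outcome** field and be asserted by the test runner.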
@@ -1,6 +1,6 @@
-# Jira Epic Template
+# Epic Template

-Use this template for each Jira epic. Create epics via Jira MCP.
+Use this template for each epic. Create epics via the configured work item tracker (Jira MCP or Azure DevOps MCP).

---
@@ -73,14 +73,14 @@ Link to architecture.md and relevant component spec.]

### Design & Architecture

-- Architecture doc: `_docs/02_plans/<topic>/architecture.md`
-- Component spec: `_docs/02_plans/<topic>/components/[##]_[name]/description.md`
-- System flows: `_docs/02_plans/<topic>/system-flows.md`
+- Architecture doc: `_docs/02_document/architecture.md`
+- Component spec: `_docs/02_document/components/[##]_[name]/description.md`
+- System flows: `_docs/02_document/system-flows.md`

### Definition of Done

- [ ] All in-scope capabilities implemented
-- [ ] Automated tests pass (unit + integration + e2e)
+- [ ] Automated tests pass (unit + blackbox)
- [ ] Minimum coverage threshold met (75%)
- [ ] Runbooks written (if applicable)
- [ ] Documentation updated
@@ -1,6 +1,6 @@
# Final Planning Report Template

-Use this template after completing all 5 steps and the quality checklist. Save as `_docs/02_plans/<topic>/FINAL_report.md`.
+Use this template after completing all 6 steps and the quality checklist. Save as `_docs/02_document/FINAL_report.md`.

---
@@ -0,0 +1,35 @@
# Performance Tests Template

Save as `DOCUMENT_DIR/tests/performance-tests.md`.

---

```markdown
# Performance Tests

### NFT-PERF-01: [Test Name]

**Summary**: [What performance characteristic this validates]
**Traces to**: AC-[ID]
**Metric**: [what is measured — latency, throughput, frame rate, etc.]

**Preconditions**:
- [System state, load profile, data volume]

**Steps**:

| Step | Consumer Action | Measurement |
|------|----------------|-------------|
| 1 | [action] | [what to measure and how] |

**Pass criteria**: [specific threshold — e.g., p95 latency < 400ms]
**Duration**: [how long the test runs]
```

---

## Guidance Notes

- Performance tests should run long enough to capture steady-state behavior, not just cold-start.
- Define clear pass/fail thresholds with specific metrics (p50, p95, p99 latency, throughput, etc.).
- Include warm-up preconditions to separate initialization cost from steady-state performance.
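The p50/p95/p99 thresholds mentioned above can be checked without a stats library using a nearest-rank percentile. A minimal sketch (the latency numbers are made up; the 400 ms threshold echoes the example in the template):

```python
import math

def percentile(samples, p):
    """Nearest-rank percentile: the smallest sample >= p% of the ordered data."""
    ordered = sorted(samples)
    rank = max(1, math.ceil(p / 100 * len(ordered)))  # 1-based nearest rank
    return ordered[rank - 1]

# Latencies in ms collected during the steady-state window (synthetic data):
latencies_ms = [120, 180, 150, 390, 140, 160, 170, 130, 155, 165]
p50 = percentile(latencies_ms, 50)  # 155
p95 = percentile(latencies_ms, 95)  # 390
assert p95 < 400, f"p95 latency {p95}ms breaches the 400ms threshold"
```

Stating the pass criterion as an assertion on a named percentile keeps it programmatically checkable, as the guidance requires.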
@@ -0,0 +1,37 @@
# Resilience Tests Template

Save as `DOCUMENT_DIR/tests/resilience-tests.md`.

---

```markdown
# Resilience Tests

### NFT-RES-01: [Test Name]

**Summary**: [What failure/recovery scenario this validates]
**Traces to**: AC-[ID]

**Preconditions**:
- [System state before fault injection]

**Fault injection**:
- [What fault is introduced — process kill, network partition, invalid input sequence, etc.]

**Steps**:

| Step | Action | Expected Behavior |
|------|--------|------------------|
| 1 | [inject fault] | [system behavior during fault] |
| 2 | [observe recovery] | [system behavior after recovery] |

**Pass criteria**: [recovery time, data integrity, continued operation]
```

---

## Guidance Notes

- Resilience tests must define both the fault and the expected recovery — not just "system should recover."
- Include specific recovery time expectations and data integrity checks.
- Test both graceful degradation (partial failure) and full recovery scenarios.
@@ -0,0 +1,31 @@
# Resource Limit Tests Template

Save as `DOCUMENT_DIR/tests/resource-limit-tests.md`.

---

```markdown
# Resource Limit Tests

### NFT-RES-LIM-01: [Test Name]

**Summary**: [What resource constraint this validates]
**Traces to**: AC-[ID], RESTRICT-[ID]

**Preconditions**:
- [System running under specified constraints]

**Monitoring**:
- [What resources to monitor — memory, CPU, GPU, disk, temperature]

**Duration**: [how long to run]
**Pass criteria**: [resource stays within limit — e.g., memory < 8GB throughout]
```

---

## Guidance Notes

- Resource limit tests must specify monitoring duration — short bursts don't prove sustained compliance.
- Define specific numeric limits that can be programmatically checked.
- Include both the monitoring method and the threshold in the pass criteria.
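The "sustained compliance, not short bursts" note can be sketched as a sampling loop that fails fast on any breach and reports the observed peak. Everything here is illustrative; a real harness would sample the system under test externally (e.g. its RSS or GPU memory) rather than a synthetic metric:

```python
import time

def monitor_limit(sample_fn, limit, duration_s, interval_s=0.01):
    """Sample a resource metric for duration_s seconds; fail on any breach.

    sample_fn: zero-argument callable returning the current metric value.
    Returns the peak value observed, as evidence of sustained compliance.
    """
    peak = float("-inf")
    deadline = time.monotonic() + duration_s
    while time.monotonic() < deadline:
        value = sample_fn()
        peak = max(peak, value)
        assert value <= limit, f"limit exceeded: {value} > {limit}"
        time.sleep(interval_s)
    return peak

# Synthetic metric standing in for e.g. memory in GB; limit mirrors "< 8GB".
readings = iter([1.0, 2.5, 2.0] * 200)
peak = monitor_limit(lambda: next(readings), limit=8.0, duration_s=0.05)
```

The returned peak, together with the duration, is what the pass criteria row would record.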
@@ -1,6 +1,6 @@
# Risk Register Template

-Use this template for risk assessment. Save as `_docs/02_plans/<topic>/risk_mitigations.md`.
+Use this template for risk assessment. Save as `_docs/02_document/risk_mitigations.md`.
Subsequent iterations: `risk_mitigations_02.md`, `risk_mitigations_03.md`, etc.

---
@@ -0,0 +1,30 @@
# Security Tests Template

Save as `DOCUMENT_DIR/tests/security-tests.md`.

---

```markdown
# Security Tests

### NFT-SEC-01: [Test Name]

**Summary**: [What security property this validates]
**Traces to**: AC-[ID], RESTRICT-[ID]

**Steps**:

| Step | Consumer Action | Expected Response |
|------|----------------|------------------|
| 1 | [attempt unauthorized access / injection / etc.] | [rejection / no data leak / etc.] |

**Pass criteria**: [specific security outcome]
```

---

## Guidance Notes

- Security tests at blackbox level focus on black-box attacks (unauthorized API calls, malformed input), not code-level vulnerabilities.
- Verify the system remains operational after security-related edge cases (no crash, no hang).
- Test authentication/authorization boundaries from the consumer's perspective.
@@ -1,7 +1,7 @@
# System Flows Template

-Use this template for the system flows document. Save as `_docs/02_plans/<topic>/system-flows.md`.
-Individual flow diagrams go in `_docs/02_plans/<topic>/diagrams/flows/flow_[name].md`.
+Use this template for the system flows document. Save as `_docs/02_document/system-flows.md`.
+Individual flow diagrams go in `_docs/02_document/diagrams/flows/flow_[name].md`.

---
@@ -0,0 +1,55 @@
# Test Data Template

Save as `DOCUMENT_DIR/tests/test-data.md`.

---

```markdown
# Test Data Management

## Seed Data Sets

| Data Set | Description | Used by Tests | How Loaded | Cleanup |
|----------|-------------|---------------|-----------|---------|
| [name] | [what it contains] | [test IDs] | [SQL script / API call / fixture file / volume mount] | [how removed after test] |

## Data Isolation Strategy

[e.g., each test run gets a fresh container restart, or transactions are rolled back, or namespaced data, or separate DB per test group]

## Input Data Mapping

| Input Data File | Source Location | Description | Covers Scenarios |
|-----------------|----------------|-------------|-----------------|
| [filename] | `_docs/00_problem/input_data/[filename]` | [what it contains] | [test IDs that use this data] |

## Expected Results Mapping

| Test Scenario ID | Input Data | Expected Result | Comparison Method | Tolerance | Expected Result Source |
|-----------------|------------|-----------------|-------------------|-----------|----------------------|
| [test ID] | `input_data/[filename]` | [quantifiable expected output] | [exact / tolerance / pattern / threshold / file-diff] | [± value or N/A] | `input_data/expected_results/[filename]` or inline |

## External Dependency Mocks

| External Service | Mock/Stub | How Provided | Behavior |
|-----------------|-----------|-------------|----------|
| [service name] | [mock type] | [Docker service / in-process stub / recorded responses] | [what it returns / simulates] |

## Data Validation Rules

| Data Type | Validation | Invalid Examples | Expected System Behavior |
|-----------|-----------|-----------------|------------------------|
| [type] | [rules] | [invalid input examples] | [how system should respond] |
```

---

## Guidance Notes

- Every seed data set should be traceable to specific test scenarios.
- Input data from `_docs/00_problem/input_data/` should be mapped to test scenarios that use it.
- Every input data item MUST have a corresponding expected result in the Expected Results Mapping table.
- Expected results MUST be quantifiable: exact values, numeric tolerances, pattern matches, thresholds, or reference files. "Works correctly" is never acceptable.
- For complex expected outputs, provide machine-readable reference files (JSON, CSV) in `_docs/00_problem/input_data/expected_results/` and reference them in the mapping.
- External mocks must be deterministic — same input always produces same output.
- Data isolation must guarantee no test can affect another test's outcome.
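The determinism requirement for external mocks can be sketched as a canned-response stub. The service, class, and key names here are hypothetical:

```python
class DeterministicStub:
    """In-process stub for an external service: same input, same output, always.

    Responses come from a fixed table, so test outcomes cannot drift with
    network state or upstream data changes.
    """

    def __init__(self, canned):
        self._canned = dict(canned)  # request -> recorded response

    def lookup(self, request):
        if request not in self._canned:
            # Unknown inputs fail loudly instead of improvising an answer.
            raise KeyError(f"no recorded response for {request!r}")
        return self._canned[request]

# Hypothetical elevation-tile service, keyed by request path:
stub = DeterministicStub({"tile/12/654/1583": {"status": "ok", "elev_m": 211}})
assert stub.lookup("tile/12/654/1583") == stub.lookup("tile/12/654/1583")
```

Failing loudly on unrecorded inputs is the design choice that keeps the mock deterministic: the stub never invents behavior mid-test.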
@@ -1,17 +1,16 @@
-# E2E Black-Box Test Infrastructure Template
+# Test Environment Template

Describes a separate consumer application that tests the main system as a black box.
-Save as `PLANS_DIR/<topic>/e2e_test_infrastructure.md`.
+Save as `DOCUMENT_DIR/tests/environment.md`.

---

```markdown
-# E2E Test Infrastructure
+# Test Environment

## Overview

-**System under test**: [main system name and entry points — API URLs, message queues, etc.]
-**Consumer app purpose**: Standalone application that exercises the main system through its public interfaces, validating end-to-end use cases without access to internals.
+**System under test**: [main system name and entry points — API URLs, message queues, serial ports, etc.]
+**Consumer app purpose**: Standalone application that exercises the main system through its public interfaces, validating black-box use cases without access to internals.

## Docker Environment

@@ -22,7 +21,7 @@ Save as `PLANS_DIR/<topic>/e2e_test_infrastructure.md`.
| system-under-test | [main app image or build context] | The main system being tested | [ports] |
| test-db | [postgres/mysql/etc.] | Database for the main system | [ports] |
| e2e-consumer | [build context for consumer app] | Black-box test runner | — |
-| [dependency] | [image] | [purpose — cache, queue, etc.] | [ports] |
+| [dependency] | [image] | [purpose — cache, queue, mock, etc.] | [ports] |

### Networks
@@ -68,54 +67,6 @@ services:
- No internal module imports
- No shared memory or file system with the main system

-## E2E Test Scenarios
-
-### Acceptance Criteria Traceability
-
-| AC ID | Acceptance Criterion | E2E Test IDs | Coverage |
-|-------|---------------------|-------------|----------|
-| AC-01 | [criterion] | E2E-01 | Covered |
-| AC-02 | [criterion] | E2E-02, E2E-03 | Covered |
-| AC-03 | [criterion] | — | NOT COVERED — [reason] |
-
-### E2E-01: [Scenario Name]
-
-**Summary**: [One sentence: what end-to-end use case this validates]
-
-**Traces to**: AC-01
-
-**Preconditions**:
-- [System state required before test]
-
-**Steps**:
-
-| Step | Consumer Action | Expected System Response |
-|------|----------------|------------------------|
-| 1 | [call / send] | [response / event] |
-| 2 | [call / send] | [response / event] |
-
-**Max execution time**: [e.g., 10s]
-
----
-
-### E2E-02: [Scenario Name]
-
-(repeat structure)
-
----
-
-## Test Data Management
-
-**Seed data**:
-
-| Data Set | Description | How Loaded | Cleanup |
-|----------|-------------|-----------|---------|
-| [name] | [what it contains] | [SQL script / API call / fixture file] | [how removed after test] |
-
-**Isolation strategy**: [e.g., each test run gets a fresh DB via container restart, or transactions are rolled back, or namespaced data]
-
-**External dependencies**: [any external APIs that need mocking or sandbox environments]

## CI/CD Integration

**When to run**: [e.g., on PR merge to dev, nightly, before production deploy]
@@ -134,8 +85,6 @@ services:

## Guidance Notes

- Every E2E test MUST trace to at least one acceptance criterion. If it doesn't, question whether it's needed.
- The consumer app must treat the main system as a true black box — no internal imports, no direct DB queries against the main system's database.
- Keep the number of E2E tests focused on critical use cases. Exhaustive testing belongs in per-component tests (Step 4).
- Docker environment should be self-contained — `docker compose up` must be sufficient to run the full suite.
- If the main system requires external services (payment gateways, third-party APIs), define mock/stub services in the Docker environment.
@@ -17,7 +17,7 @@ Use this template for each component's test spec. Save as `components/[##]_[name

---

-## Integration Tests
+## Blackbox Tests

### IT-01: [Test Name]
@@ -169,4 +169,4 @@ Use this template for each component's test spec. Save as `components/[##]_[name
- If an acceptance criterion has no test covering it, mark it as NOT COVERED and explain why (e.g., "requires manual verification", "deferred to phase 2").
- Performance test targets should come from the NFR section in `architecture.md`.
- Security tests should cover at minimum: authentication bypass, authorization escalation, injection attacks relevant to this component.
-- Not every component needs all 4 test types. A stateless utility component may only need integration tests.
+- Not every component needs all 4 test types. A stateless utility component may only need blackbox tests.
@@ -0,0 +1,47 @@
# Traceability Matrix Template

Save as `DOCUMENT_DIR/tests/traceability-matrix.md`.

---

```markdown
# Traceability Matrix

## Acceptance Criteria Coverage

| AC ID | Acceptance Criterion | Test IDs | Coverage |
|-------|---------------------|----------|----------|
| AC-01 | [criterion text] | FT-P-01, NFT-PERF-01 | Covered |
| AC-02 | [criterion text] | FT-P-02, FT-N-01 | Covered |
| AC-03 | [criterion text] | — | NOT COVERED — [reason and mitigation] |

## Restrictions Coverage

| Restriction ID | Restriction | Test IDs | Coverage |
|---------------|-------------|----------|----------|
| RESTRICT-01 | [restriction text] | FT-N-02, NFT-RES-LIM-01 | Covered |
| RESTRICT-02 | [restriction text] | — | NOT COVERED — [reason and mitigation] |

## Coverage Summary

| Category | Total Items | Covered | Not Covered | Coverage % |
|----------|-----------|---------|-------------|-----------|
| Acceptance Criteria | [N] | [N] | [N] | [%] |
| Restrictions | [N] | [N] | [N] | [%] |
| **Total** | [N] | [N] | [N] | [%] |

## Uncovered Items Analysis

| Item | Reason Not Covered | Risk | Mitigation |
|------|-------------------|------|-----------|
| [AC/Restriction ID] | [why it cannot be tested at blackbox level] | [what could go wrong] | [how risk is addressed — e.g., covered by component tests in Step 5] |
```

---

## Guidance Notes

- Every acceptance criterion must appear in the matrix — either covered or explicitly marked as not covered with a reason.
- Every restriction must appear in the matrix.
- NOT COVERED items must have a reason and a mitigation strategy (e.g., "covered at component test level" or "requires real hardware").
- Coverage percentage should be at least 75% for acceptance criteria at the blackbox test level.