mirror of
https://github.com/azaion/detections.git
synced 2026-04-23 04:26:31 +00:00
Add detailed file index and enhance skill documentation for autopilot, decompose, deploy, plan, and research skills. Introduce tests-only mode in decompose skill, clarify required files for deploy and plan skills, and improve prerequisite checks across skills for better user guidance and workflow efficiency.
---
name: blackbox-test-spec
description: |
  Black-box integration test specification skill. Analyzes input data completeness and produces
  detailed E2E test scenarios (functional + non-functional) that treat the system as a black box.
  2-phase workflow: input data completeness analysis, then test scenario specification.
  Produces 5 artifacts under integration_tests/.
  Trigger phrases:
  - "blackbox test spec", "black box tests", "integration test spec"
  - "test specification", "e2e test spec"
  - "test scenarios", "black box scenarios"
category: build
tags: [testing, black-box, integration-tests, e2e, test-specification, qa]
disable-model-invocation: true
---

# Black-Box Test Scenario Specification

Analyze input data completeness and produce detailed black-box integration test specifications. Tests describe what the system should do given specific inputs — they never reference internals.

## Core Principles

- **Black-box only**: tests describe observable behavior through public interfaces; no internal implementation details
- **Traceability**: every test traces to at least one acceptance criterion or restriction
- **Save immediately**: write artifacts to disk after each phase; never accumulate unsaved work
- **Ask, don't assume**: when requirements are ambiguous, ask the user before proceeding
- **Spec, don't code**: this workflow produces test specifications, never test implementation code

## Context Resolution

Fixed paths — no mode detection needed:

- PROBLEM_DIR: `_docs/00_problem/`
- SOLUTION_DIR: `_docs/01_solution/`
- DOCUMENT_DIR: `_docs/02_document/`
- TESTS_OUTPUT_DIR: `_docs/02_document/integration_tests/`

Announce the resolved paths to the user before proceeding.
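
As a minimal sketch, the fixed paths above could be held as constants; the variable names mirror the list but are illustrative, not part of the skill definition:

```python
from pathlib import Path

# Fixed paths from the Context Resolution list above; no mode detection needed.
# The constant names are illustrative, not prescribed by the skill.
PROBLEM_DIR = Path("_docs/00_problem")
SOLUTION_DIR = Path("_docs/01_solution")
DOCUMENT_DIR = Path("_docs/02_document")
TESTS_OUTPUT_DIR = DOCUMENT_DIR / "integration_tests"
```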

## Input Specification

### Required Files

| File | Purpose |
|------|---------|
| `_docs/00_problem/problem.md` | Problem description and context |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/00_problem/input_data/` | Reference data examples |
| `_docs/01_solution/solution.md` | Finalized solution |

### Optional Files (used when available)

| File | Purpose |
|------|---------|
| `DOCUMENT_DIR/architecture.md` | System architecture for environment design |
| `DOCUMENT_DIR/system-flows.md` | System flows for test scenario coverage |
| `DOCUMENT_DIR/components/` | Component specs for interface identification |

### Prerequisite Checks (BLOCKING)

1. `acceptance_criteria.md` exists and is non-empty — **STOP if missing**
2. `restrictions.md` exists and is non-empty — **STOP if missing**
3. `input_data/` exists and contains at least one file — **STOP if missing**
4. `problem.md` exists and is non-empty — **STOP if missing**
5. `solution.md` exists and is non-empty — **STOP if missing**
6. Create TESTS_OUTPUT_DIR if it does not exist
7. If TESTS_OUTPUT_DIR already contains files, ask user: **resume from last checkpoint or start fresh?**
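
The blocking file checks above (steps 1-5) can be sketched as a small helper. `check_prerequisites` and its return shape are hypothetical; steps 6-7 (creating the output directory, asking the user) are side effects left out of the sketch:

```python
from pathlib import Path


def check_prerequisites(problem_dir: Path, solution_dir: Path) -> list:
    """Return a list of blocking problems; an empty list means checks 1-5 pass.

    File names follow the Required Files table. This helper is illustrative;
    the skill text only prescribes the behavior, not an implementation.
    """
    problems = []

    def non_empty(path: Path) -> bool:
        return path.is_file() and path.stat().st_size > 0

    # Checks 1, 2, 4: required problem-side files must exist and be non-empty.
    for name in ("acceptance_criteria.md", "restrictions.md", "problem.md"):
        if not non_empty(problem_dir / name):
            problems.append(f"missing or empty: {problem_dir / name}")

    # Check 3: input_data/ must exist and contain at least one file.
    input_data = problem_dir / "input_data"
    if not input_data.is_dir() or not any(input_data.iterdir()):
        problems.append(f"missing or empty: {input_data}")

    # Check 5: the finalized solution must exist and be non-empty.
    if not non_empty(solution_dir / "solution.md"):
        problems.append(f"missing or empty: {solution_dir / 'solution.md'}")

    return problems
```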

## Artifact Management

### Directory Structure

```
TESTS_OUTPUT_DIR/
├── environment.md
├── test_data.md
├── functional_tests.md
├── non_functional_tests.md
└── traceability_matrix.md
```

### Save Timing

| Phase | Save immediately after | Filename |
|-------|------------------------|----------|
| Phase 1a | Input data analysis (no file — findings feed Phase 1b) | — |
| Phase 1b | Environment spec | `environment.md` |
| Phase 1b | Test data spec | `test_data.md` |
| Phase 1b | Functional tests | `functional_tests.md` |
| Phase 1b | Non-functional tests | `non_functional_tests.md` |
| Phase 1b | Traceability matrix | `traceability_matrix.md` |

### Resumability

If TESTS_OUTPUT_DIR already contains files:

1. List existing files and match them to the save timing table above
2. Identify which phase/artifacts are complete
3. Resume from the next incomplete artifact
4. Inform the user which artifacts are being skipped
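
The resume logic above can be sketched as follows; `next_artifact` is a hypothetical helper name, and the ordering comes from the save timing table:

```python
# Artifact filenames in the order given by the save timing table.
ARTIFACT_ORDER = [
    "environment.md",
    "test_data.md",
    "functional_tests.md",
    "non_functional_tests.md",
    "traceability_matrix.md",
]


def next_artifact(existing_files):
    """Return the first artifact not yet written, or None when all five exist.

    `existing_files` is a set of filenames found in TESTS_OUTPUT_DIR.
    Illustrative helper; the skill text only prescribes the behavior.
    """
    for name in ARTIFACT_ORDER:
        if name not in existing_files:
            return name
    return None
```

Everything before the returned artifact is reported to the user as skipped.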

## Progress Tracking

At the start of execution, create a TodoWrite with both phases. Update status as each phase completes.

## Workflow

### Phase 1a: Input Data Completeness Analysis

**Role**: Professional Quality Assurance Engineer
**Goal**: Assess whether the available input data is sufficient to build comprehensive test scenarios
**Constraints**: Analysis only — no test specs yet

1. Read `_docs/01_solution/solution.md`
2. Read `acceptance_criteria.md` and `restrictions.md`
3. Read the testing strategy from `solution.md` (if present)
4. If `DOCUMENT_DIR/architecture.md` and `DOCUMENT_DIR/system-flows.md` exist, read them for additional context on system interfaces and flows
5. Analyze `input_data/` contents against:
   - Coverage of acceptance criteria scenarios
   - Coverage of restriction edge cases
   - Coverage of testing strategy requirements
6. Threshold: at least 70% coverage of the scenarios
7. If coverage is below the threshold, search the internet for supplementary data, assess its quality with the user, and, if the user agrees, add it to `input_data/`
8. Present the coverage assessment to the user
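
The 70% threshold in step 6 amounts to a simple set-overlap check. A minimal sketch, assuming scenarios are identified by hypothetical IDs derived from acceptance criteria (`AC-*`) and restrictions (`R-*`):

```python
def coverage_ratio(required, covered):
    """Fraction of required scenarios that the available input data covers."""
    required, covered = set(required), set(covered)
    if not required:
        return 1.0  # nothing required means trivially complete coverage
    return len(required & covered) / len(required)


COVERAGE_THRESHOLD = 0.70  # step 6 above

# Hypothetical scenario ids, for illustration only.
required = {"AC-1", "AC-2", "AC-3", "R-1"}
covered = {"AC-1", "AC-2", "R-1"}

sufficient = coverage_ratio(required, covered) >= COVERAGE_THRESHOLD
```

In this example 3 of 4 required scenarios are covered (75%), so the gate passes; below 70%, step 7 (supplementary data) applies.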

**BLOCKING**: Do NOT proceed until user confirms the input data coverage is sufficient.

---

### Phase 1b: Black-Box Test Scenario Specification

**Role**: Professional Quality Assurance Engineer
**Goal**: Produce detailed black-box test specifications covering functional and non-functional scenarios
**Constraints**: Spec only — no test code. Tests describe what the system should do given specific inputs, not how the system is built.

Based on all acquired data, `acceptance_criteria.md`, and `restrictions.md`, form detailed test scenarios:

1. Define the test environment using `.cursor/skills/plan/templates/integration-environment.md` as the structure
2. Define test data management using `.cursor/skills/plan/templates/integration-test-data.md` as the structure
3. Write functional test scenarios (positive + negative) using `.cursor/skills/plan/templates/integration-functional-tests.md` as the structure
4. Write non-functional test scenarios (performance, resilience, security, edge cases) using `.cursor/skills/plan/templates/integration-non-functional-tests.md` as the structure
5. Build the traceability matrix using `.cursor/skills/plan/templates/integration-traceability-matrix.md` as the structure

**Self-verification**:

- [ ] Every acceptance criterion is covered by at least one test scenario
- [ ] Every restriction is verified by at least one test scenario
- [ ] Positive and negative scenarios are balanced
- [ ] Consumer app has no direct access to system internals
- [ ] Docker environment is self-contained (`docker compose up` sufficient)
- [ ] External dependencies have mock/stub services defined
- [ ] Traceability matrix has no uncovered AC or restrictions
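
The first, second, and last checklist items reduce to a coverage query over the matrix. A sketch, assuming a hypothetical matrix shape of test id mapped to the requirement ids it verifies:

```python
def uncovered(requirement_ids, matrix):
    """Return requirements (ACs or restrictions) that no test maps to.

    `matrix` maps test id -> set of requirement ids; this shape is an
    assumption for illustration, not mandated by the skill.
    """
    covered = set()
    for req_ids in matrix.values():
        covered |= set(req_ids)
    return set(requirement_ids) - covered


# Hypothetical ids, for illustration only.
reqs = {"AC-1", "AC-2", "R-1"}
matrix = {"T-01": {"AC-1"}, "T-02": {"AC-2", "R-1"}}
```

An empty result from `uncovered(reqs, matrix)` means the checklist items pass; any remaining ids name the gaps to close before saving `traceability_matrix.md`.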

**Save action**: Write all files under TESTS_OUTPUT_DIR:

- `environment.md`
- `test_data.md`
- `functional_tests.md`
- `non_functional_tests.md`
- `traceability_matrix.md`

**BLOCKING**: Present the test coverage summary (from `traceability_matrix.md`) to the user. Do NOT proceed until confirmed.

Capture any new questions, findings, or insights that arise during test specification — these feed forward into downstream skills (plan, refactor, etc.).

---

## Escalation Rules

| Situation | Action |
|-----------|--------|
| Missing `acceptance_criteria.md`, `restrictions.md`, or `input_data/` | **STOP** — specification cannot proceed |
| Ambiguous requirements | ASK user |
| Input data coverage below 70% | Search the internet for supplementary data, ASK user to validate |
| Test scenario conflicts with restrictions | ASK user to clarify intent |
| System interfaces unclear (no `architecture.md`) | ASK user or derive from `solution.md` |

## Common Mistakes

- **Referencing internals**: tests must be black-box — no internal module names, no direct DB queries against the system under test
- **Vague expected outcomes**: "works correctly" is not a test outcome; use specific measurable values
- **Missing negative scenarios**: every positive scenario category should have corresponding negative/edge-case tests
- **Untraceable tests**: every test should trace to at least one AC or restriction
- **Writing test code**: this skill produces specifications, never implementation code

## Trigger Conditions

When the user wants to:

- Specify black-box integration tests before implementation or refactoring
- Analyze input data completeness for test coverage
- Produce E2E test scenarios from acceptance criteria

**Keywords**: "blackbox test spec", "black box tests", "integration test spec", "test specification", "e2e test spec", "test scenarios"

## Methodology Quick Reference

```
┌────────────────────────────────────────────────────────────────┐
│ Black-Box Test Scenario Specification (2-Phase)                │
├────────────────────────────────────────────────────────────────┤
│ PREREQ: Data Gate (BLOCKING)                                   │
│   → verify AC, restrictions, input_data, solution exist        │
│                                                                │
│ Phase 1a: Input Data Completeness Analysis                     │
│   → assess input_data/ coverage vs AC scenarios (≥70%)         │
│   [BLOCKING: user confirms input data coverage]                │
│                                                                │
│ Phase 1b: Black-Box Test Scenario Specification                │
│   → environment.md                                             │
│   → test_data.md                                               │
│   → functional_tests.md (positive + negative)                  │
│   → non_functional_tests.md (perf, resilience, security, limits)│
│   → traceability_matrix.md                                     │
│   [BLOCKING: user confirms test coverage]                      │
├────────────────────────────────────────────────────────────────┤
│ Principles: Black-box only · Traceability · Save immediately   │
│             Ask don't assume · Spec don't code                 │
└────────────────────────────────────────────────────────────────┘
```