---
name: blackbox-test-spec
description: >-
  Black-box integration test specification skill. Analyzes input data
  completeness and produces detailed E2E test scenarios (functional +
  non-functional) that treat the system as a black box. 3-phase workflow:
  input data completeness analysis, test scenario specification, test data
  validation gate. Produces 5 artifacts under integration_tests/.
  Trigger phrases: "blackbox test spec", "black box tests",
  "integration test spec", "test specification", "e2e test spec",
  "test scenarios", "black box scenarios"
category: build
tags:
  - testing
  - black-box
  - integration-tests
  - e2e
  - test-specification
  - qa
disable-model-invocation: true
---

# Black-Box Test Scenario Specification

Analyze input data completeness and produce detailed black-box integration test specifications. Tests describe what the system should do given specific inputs — they never reference internals.

## Core Principles

  • Black-box only: tests describe observable behavior through public interfaces; no internal implementation details
  • Traceability: every test traces to at least one acceptance criterion or restriction
  • Save immediately: write artifacts to disk after each phase; never accumulate unsaved work
  • Ask, don't assume: when requirements are ambiguous, ask the user before proceeding
  • Spec, don't code: this workflow produces test specifications, never test implementation code
  • No test without data: every test scenario MUST have concrete test data; tests without data are removed

## Context Resolution

Fixed paths — no mode detection needed:

  • PROBLEM_DIR: _docs/00_problem/
  • SOLUTION_DIR: _docs/01_solution/
  • DOCUMENT_DIR: _docs/02_document/
  • TESTS_OUTPUT_DIR: _docs/02_document/integration_tests/

Announce the resolved paths to the user before proceeding.

## Input Specification

### Required Files

| File | Purpose |
|------|---------|
| `_docs/00_problem/problem.md` | Problem description and context |
| `_docs/00_problem/acceptance_criteria.md` | Measurable acceptance criteria |
| `_docs/00_problem/restrictions.md` | Constraints and limitations |
| `_docs/00_problem/input_data/` | Reference data examples |
| `_docs/01_solution/solution.md` | Finalized solution |

### Optional Files (used when available)

| File | Purpose |
|------|---------|
| `DOCUMENT_DIR/architecture.md` | System architecture for environment design |
| `DOCUMENT_DIR/system-flows.md` | System flows for test scenario coverage |
| `DOCUMENT_DIR/components/` | Component specs for interface identification |

### Prerequisite Checks (BLOCKING)

  1. acceptance_criteria.md exists and is non-empty — STOP if missing
  2. restrictions.md exists and is non-empty — STOP if missing
  3. input_data/ exists and contains at least one file — STOP if missing
  4. problem.md exists and is non-empty — STOP if missing
  5. solution.md exists and is non-empty — STOP if missing
  6. Create TESTS_OUTPUT_DIR if it does not exist
  7. If TESTS_OUTPUT_DIR already contains files, ask user: resume from last checkpoint or start fresh?
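Checks 1–6 can be mechanized before asking the user anything. A minimal sketch in Python; the path layout comes from this document, while the function name and the repo-root parameter are illustrative:

```python
from pathlib import Path

# Required non-empty files, relative to the repository root, per this
# skill's Input Specification. input_data/ is checked separately because
# it is a directory that must contain at least one file.
REQUIRED_FILES = [
    "_docs/00_problem/problem.md",
    "_docs/00_problem/acceptance_criteria.md",
    "_docs/00_problem/restrictions.md",
    "_docs/01_solution/solution.md",
]

def missing_prerequisites(root: Path) -> list[str]:
    """Return the prerequisite items that are absent or empty (blocking)."""
    missing = [
        rel for rel in REQUIRED_FILES
        if not (root / rel).is_file() or (root / rel).stat().st_size == 0
    ]
    data_dir = root / "_docs/00_problem/input_data"
    if not data_dir.is_dir() or not any(data_dir.iterdir()):
        missing.append("_docs/00_problem/input_data/ (at least one file)")
    return missing
```

A non-empty return value means STOP; an empty list means all five blocking checks pass.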

## Artifact Management

### Directory Structure

```
TESTS_OUTPUT_DIR/
├── environment.md
├── test_data.md
├── functional_tests.md
├── non_functional_tests.md
└── traceability_matrix.md
```

### Save Timing

| Phase | Save immediately after | Filename |
|-------|------------------------|----------|
| Phase 1 | Input data analysis | (no file — findings feed Phase 2) |
| Phase 2 | Environment spec | `environment.md` |
| Phase 2 | Test data spec | `test_data.md` |
| Phase 2 | Functional tests | `functional_tests.md` |
| Phase 2 | Non-functional tests | `non_functional_tests.md` |
| Phase 2 | Traceability matrix | `traceability_matrix.md` |
| Phase 3 | Updated test data spec (if data added) | `test_data.md` |
| Phase 3 | Updated functional tests (if tests removed) | `functional_tests.md` |
| Phase 3 | Updated non-functional tests (if tests removed) | `non_functional_tests.md` |
| Phase 3 | Updated traceability matrix (if tests removed) | `traceability_matrix.md` |

### Resumability

If TESTS_OUTPUT_DIR already contains files:

  1. List existing files and match them to the save timing table above
  2. Identify which phase/artifacts are complete
  3. Resume from the next incomplete artifact
  4. Inform the user which artifacts are being skipped

### Progress Tracking

At the start of execution, create a TodoWrite with all three phases. Update status as each phase completes.

## Workflow

### Phase 1: Input Data Completeness Analysis

**Role:** Professional Quality Assurance Engineer
**Goal:** Assess whether the available input data is sufficient to build comprehensive test scenarios
**Constraints:** Analysis only — no test specs yet

  1. Read _docs/01_solution/solution.md
  2. Read acceptance_criteria.md, restrictions.md
  3. Read testing strategy from solution.md (if present)
  4. If DOCUMENT_DIR/architecture.md and DOCUMENT_DIR/system-flows.md exist, read them for additional context on system interfaces and flows
  5. Analyze input_data/ contents against:
    • Coverage of acceptance criteria scenarios
    • Coverage of restriction edge cases
    • Coverage of testing strategy requirements
  6. Threshold: the input data must cover at least 70% of these scenarios
  7. If coverage is below the threshold, search the internet for supplementary data, assess its quality with the user, and, if the user agrees, add it to input_data/
  8. Present coverage assessment to user

BLOCKING: Do NOT proceed until user confirms the input data coverage is sufficient.


### Phase 2: Black-Box Test Scenario Specification

**Role:** Professional Quality Assurance Engineer
**Goal:** Produce detailed black-box test specifications covering functional and non-functional scenarios
**Constraints:** Spec only — no test code. Tests describe what the system should do given specific inputs, not how the system is built.

Based on all acquired data, acceptance_criteria, and restrictions, form detailed test scenarios:

  1. Define test environment using .cursor/skills/plan/templates/integration-environment.md as structure
  2. Define test data management using .cursor/skills/plan/templates/integration-test-data.md as structure
  3. Write functional test scenarios (positive + negative) using .cursor/skills/plan/templates/integration-functional-tests.md as structure
  4. Write non-functional test scenarios (performance, resilience, security, edge cases) using .cursor/skills/plan/templates/integration-non-functional-tests.md as structure
  5. Build traceability matrix using .cursor/skills/plan/templates/integration-traceability-matrix.md as structure

Self-verification:

  • Every acceptance criterion is covered by at least one test scenario
  • Every restriction is verified by at least one test scenario
  • Positive and negative scenarios are balanced
  • Consumer app has no direct access to system internals
  • Docker environment is self-contained (`docker compose up` is sufficient)
  • External dependencies have mock/stub services defined
  • Traceability matrix has no uncovered AC or restrictions
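The last check, no uncovered AC or restrictions, reduces to a set difference over the traceability matrix. A minimal sketch; the data shapes (item IDs mapped per test) are illustrative:

```python
def uncovered_items(all_items: set[str], matrix: dict[str, set[str]]) -> set[str]:
    """ACs/restrictions not referenced by any test in the traceability matrix.

    all_items: every acceptance-criterion and restriction ID.
    matrix: test scenario ID -> set of AC/restriction IDs it verifies.
    """
    covered: set[str] = set()
    for refs in matrix.values():
        covered |= refs
    return all_items - covered
```

An empty result means the matrix is complete; anything returned is a coverage gap that must be closed before presenting the summary to the user.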

Save action: Write all files under TESTS_OUTPUT_DIR:

  • environment.md
  • test_data.md
  • functional_tests.md
  • non_functional_tests.md
  • traceability_matrix.md

BLOCKING: Present test coverage summary (from traceability_matrix.md) to user. Do NOT proceed until confirmed.

Capture any new questions, findings, or insights that arise during test specification — these feed forward into downstream skills (plan, refactor, etc.).


### Phase 3: Test Data Validation Gate (HARD GATE)

**Role:** Professional Quality Assurance Engineer
**Goal:** Ensure every test scenario produced in Phase 2 has concrete, sufficient test data. Remove tests that lack data. Verify final coverage stays above 70%.
**Constraints:** This phase is MANDATORY and cannot be skipped.

#### Step 1 — Build the test-data requirements checklist

Scan functional_tests.md and non_functional_tests.md. For every test scenario, extract:

| # | Test Scenario ID | Test Name | Required Data Description | Required Data Quality | Required Data Quantity | Data Provided? |
|---|------------------|-----------|---------------------------|-----------------------|------------------------|----------------|

Present this table to the user.
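The scan itself is mechanical once the spec files use a consistent heading convention for scenarios. A sketch, assuming IDs like `TC-F-01` in headings; the ID format and the regex are assumptions, not something this skill mandates:

```python
import re

# Matches headings such as "### TC-F-01: Upload succeeds" (assumed convention).
SCENARIO_RE = re.compile(r"^#+\s*(TC-[A-Z]+-\d+)[:\s]+(.+)$", re.MULTILINE)

def extract_scenarios(markdown: str) -> list[tuple[str, str]]:
    """Return (scenario_id, test_name) pairs found in a test spec file."""
    return [(sid, name.strip()) for sid, name in SCENARIO_RE.findall(markdown)]
```

Run it over both `functional_tests.md` and `non_functional_tests.md`, then fill the data columns of the checklist manually, since required quality and quantity come from each scenario's description.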

#### Step 2 — Ask user to provide test data

For each row where Data Provided? is No, ask the user:

Option A — Provide the data: Supply the necessary test data files (with required quality and quantity as described in the table). Place them in _docs/00_problem/input_data/ or indicate the location.

Option B — Skip this test: If you cannot provide the data, this test scenario will be removed from the specification.

BLOCKING: Wait for the user's response for every missing data item.

#### Step 3 — Validate provided data

For each item where the user chose Option A:

  1. Verify the data file(s) exist at the indicated location
  2. Verify quality: data matches the format, schema, and constraints described in the test scenario (e.g., correct image resolution, valid JSON structure, expected value ranges)
  3. Verify quantity: enough data samples to cover the scenario (e.g., at least N images for a batch test, multiple edge-case variants)
  4. If validation fails, report the specific issue and loop back to Step 2 for that item
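For machine-checkable formats, checks 2 and 3 can be partly automated. A sketch for JSON samples; the helper is illustrative, and image-resolution or schema checks would need their own validators:

```python
import json

def validate_json_samples(samples: dict[str, str], min_count: int) -> list[str]:
    """Return human-readable issues; an empty list means quality and quantity pass.

    samples: file name -> raw file content.
    min_count: minimum number of samples the scenario requires.
    """
    issues: list[str] = []
    if len(samples) < min_count:  # quantity check
        issues.append(f"quantity: need >= {min_count} samples, found {len(samples)}")
    for name, text in samples.items():  # quality check: must parse as JSON
        try:
            json.loads(text)
        except ValueError:
            issues.append(f"quality: {name} is not valid JSON")
    return issues
```

Each issue maps directly to the "report the specific issue and loop back to Step 2" rule above.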

#### Step 4 — Remove tests without data

For each item where the user chose Option B:

  1. Warn the user: ⚠️ Test scenario [ID] "[Name]" will be REMOVED from the specification due to missing test data.
  2. Remove the test scenario from functional_tests.md or non_functional_tests.md
  3. Remove corresponding rows from traceability_matrix.md
  4. Update test_data.md to reflect the removal

Save action: Write updated files under TESTS_OUTPUT_DIR:

  • test_data.md
  • functional_tests.md (if tests removed)
  • non_functional_tests.md (if tests removed)
  • traceability_matrix.md (if tests removed)

#### Step 5 — Final coverage check

After all removals, recalculate coverage:

  1. Count remaining test scenarios that trace to acceptance criteria
  2. Count total acceptance criteria + restrictions
  3. Calculate coverage percentage: `covered_items / total_items * 100`

| Metric | Value |
|--------|-------|
| Total AC + Restrictions | ? |
| Covered by remaining tests | ? |
| Coverage % | ?% |
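The calculation and the pass/fail decision, in one place (function names illustrative, threshold from this skill):

```python
MIN_COVERAGE = 70.0  # hard-gate threshold defined by this skill

def coverage_percent(covered: int, total: int) -> float:
    """covered_items / total_items * 100, guarding against an empty spec."""
    return 100.0 * covered / total if total else 0.0

def phase3_passed(covered: int, total: int) -> bool:
    """True when remaining-test coverage meets the hard gate."""
    return coverage_percent(covered, total) >= MIN_COVERAGE
```

Note the gate is inclusive: exactly 70% passes.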

Decision:

  • Coverage ≥ 70% → Phase 3 PASSED. Present final summary to user.

  • Coverage < 70% → Phase 3 FAILED. Report:

    Test coverage dropped to X% (minimum 70% required). The removed test scenarios left gaps in the following acceptance criteria / restrictions:

    | Uncovered Item | Type (AC/Restriction) | Missing Test Data Needed |
    |----------------|-----------------------|--------------------------|

    Action required: Provide the missing test data for the items above, or add alternative test scenarios that cover these items with data you can supply.

    BLOCKING: Loop back to Step 2 with the uncovered items. Do NOT finalize until coverage ≥ 70%.

### Phase 3 Completion

When coverage ≥ 70% and all remaining tests have validated data:

  1. Present the final coverage report
  2. List all removed tests (if any) with reasons
  3. Confirm all artifacts are saved and consistent

## Escalation Rules

| Situation | Action |
|-----------|--------|
| Missing `acceptance_criteria.md`, `restrictions.md`, or `input_data/` | STOP — specification cannot proceed |
| Ambiguous requirements | ASK user |
| Input data coverage below 70% (Phase 1) | Search internet for supplementary data, ASK user to validate |
| Test scenario conflicts with restrictions | ASK user to clarify intent |
| System interfaces unclear (no `architecture.md`) | ASK user or derive from `solution.md` |
| Test data not provided for a test scenario (Phase 3) | WARN user and REMOVE the test |
| Final coverage below 70% after removals (Phase 3) | BLOCK — require user to supply data or accept reduced spec |

## Common Mistakes

  • Referencing internals: tests must be black-box — no internal module names, no direct DB queries against the system under test
  • Vague expected outcomes: "works correctly" is not a test outcome; use specific measurable values
  • Missing negative scenarios: every positive scenario category should have corresponding negative/edge-case tests
  • Untraceable tests: every test should trace to at least one AC or restriction
  • Writing test code: this skill produces specifications, never implementation code
  • Tests without data: every test scenario MUST have concrete test data; a test spec without data is not executable and must be removed

## Trigger Conditions

When the user wants to:

  • Specify black-box integration tests before implementation or refactoring
  • Analyze input data completeness for test coverage
  • Produce E2E test scenarios from acceptance criteria

Keywords: "blackbox test spec", "black box tests", "integration test spec", "test specification", "e2e test spec", "test scenarios"

## Methodology Quick Reference

```
┌─────────────────────────────────────────────────────────────────┐
│       Black-Box Test Scenario Specification (3-Phase)           │
├─────────────────────────────────────────────────────────────────┤
│ PREREQ: Data Gate (BLOCKING)                                    │
│   → verify AC, restrictions, input_data, solution exist         │
│                                                                 │
│ Phase 1: Input Data Completeness Analysis                       │
│   → assess input_data/ coverage vs AC scenarios (≥70%)          │
│   [BLOCKING: user confirms input data coverage]                 │
│                                                                 │
│ Phase 2: Black-Box Test Scenario Specification                  │
│   → environment.md                                              │
│   → test_data.md                                                │
│   → functional_tests.md (positive + negative)                   │
│   → non_functional_tests.md (perf, resilience, security, limits)│
│   → traceability_matrix.md                                      │
│   [BLOCKING: user confirms test coverage]                       │
│                                                                 │
│ Phase 3: Test Data Validation Gate (HARD GATE)                  │
│   → build test-data requirements checklist                      │
│   → ask user: provide data (Option A) or remove test (Option B) │
│   → validate provided data (quality + quantity)                 │
│   → remove tests without data, warn user                        │
│   → final coverage check (≥70% or FAIL + loop back)             │
│   [BLOCKING: coverage ≥ 70% required to pass]                   │
├─────────────────────────────────────────────────────────────────┤
│ Principles: Black-box only · Traceability · Save immediately    │
│             Ask don't assume · Spec don't code                  │
│             No test without data                                │
└─────────────────────────────────────────────────────────────────┘
```