mirror of
https://github.com/azaion/detections.git
synced 2026-04-23 04:26:31 +00:00
[AZ-137] [AZ-138] Decompose test tasks and scaffold E2E test infrastructure
Made-with: Cursor
@@ -3,7 +3,7 @@ name: blackbox-test-spec
 description: |
   Black-box integration test specification skill. Analyzes input data completeness and produces
   detailed E2E test scenarios (functional + non-functional) that treat the system as a black box.
-  2-phase workflow: input data completeness analysis, then test scenario specification.
+  3-phase workflow: input data completeness analysis, test scenario specification, test data validation gate.
   Produces 5 artifacts under integration_tests/.
   Trigger phrases:
   - "blackbox test spec", "black box tests", "integration test spec"
@@ -25,6 +25,7 @@ Analyze input data completeness and produce detailed black-box integration test
 - **Save immediately**: write artifacts to disk after each phase; never accumulate unsaved work
 - **Ask, don't assume**: when requirements are ambiguous, ask the user before proceeding
 - **Spec, don't code**: this workflow produces test specifications, never test implementation code
+- **No test without data**: every test scenario MUST have concrete test data; tests without data are removed
 
 ## Context Resolution
 
@@ -84,12 +85,16 @@ TESTS_OUTPUT_DIR/
 
 | Phase | Save immediately after | Filename |
 |-------|------------------------|----------|
-| Phase 1a | Input data analysis (no file — findings feed Phase 1b) | — |
-| Phase 1b | Environment spec | `environment.md` |
-| Phase 1b | Test data spec | `test_data.md` |
-| Phase 1b | Functional tests | `functional_tests.md` |
-| Phase 1b | Non-functional tests | `non_functional_tests.md` |
-| Phase 1b | Traceability matrix | `traceability_matrix.md` |
+| Phase 1 | Input data analysis (no file — findings feed Phase 2) | — |
+| Phase 2 | Environment spec | `environment.md` |
+| Phase 2 | Test data spec | `test_data.md` |
+| Phase 2 | Functional tests | `functional_tests.md` |
+| Phase 2 | Non-functional tests | `non_functional_tests.md` |
+| Phase 2 | Traceability matrix | `traceability_matrix.md` |
+| Phase 3 | Updated test data spec (if data added) | `test_data.md` |
+| Phase 3 | Updated functional tests (if tests removed) | `functional_tests.md` |
+| Phase 3 | Updated non-functional tests (if tests removed) | `non_functional_tests.md` |
+| Phase 3 | Updated traceability matrix (if tests removed) | `traceability_matrix.md` |
 
 ### Resumability
 
@@ -102,11 +107,11 @@ If TESTS_OUTPUT_DIR already contains files:
 
 ## Progress Tracking
 
-At the start of execution, create a TodoWrite with both phases. Update status as each phase completes.
+At the start of execution, create a TodoWrite with all three phases. Update status as each phase completes.
 
 ## Workflow
 
-### Phase 1a: Input Data Completeness Analysis
+### Phase 1: Input Data Completeness Analysis
 
 **Role**: Professional Quality Assurance Engineer
 **Goal**: Assess whether the available input data is sufficient to build comprehensive test scenarios
@@ -128,7 +133,7 @@ At the start of execution, create a TodoWrite with both phases. Update status as
 
 ---
 
-### Phase 1b: Black-Box Test Scenario Specification
+### Phase 2: Black-Box Test Scenario Specification
 
 **Role**: Professional Quality Assurance Engineer
 **Goal**: Produce detailed black-box test specifications covering functional and non-functional scenarios
@@ -164,15 +169,103 @@ Capture any new questions, findings, or insights that arise during test specific
 
 ---
 
+### Phase 3: Test Data Validation Gate (HARD GATE)
+
+**Role**: Professional Quality Assurance Engineer
+**Goal**: Ensure every test scenario produced in Phase 2 has concrete, sufficient test data. Remove tests that lack data. Verify final coverage stays above 70%.
+**Constraints**: This phase is MANDATORY and cannot be skipped.
+
+#### Step 1 — Build the test-data requirements checklist
+
+Scan `functional_tests.md` and `non_functional_tests.md`. For every test scenario, extract:
+
+| # | Test Scenario ID | Test Name | Required Data Description | Required Data Quality | Required Data Quantity | Data Provided? |
+|---|------------------|-----------|---------------------------|-----------------------|------------------------|----------------|
+
+Present this table to the user.
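The Step 1 scan above can be sketched in Python. The `### FT-01: Upload valid image` heading format and the `build_checklist` helper are assumptions for illustration — the real scenario format is whatever Phase 2 produced:

```python
import re
from pathlib import Path

# Hypothetical scenario heading format: "### FT-01: Upload valid image"
SCENARIO_RE = re.compile(r"^###\s+(?P<id>[A-Z]+-\d+):\s+(?P<name>.+?)\s*$")

def build_checklist(*spec_files: str) -> list[dict]:
    """One checklist row per scenario found; 'Data Provided?' starts as No."""
    rows = []
    for spec in spec_files:
        for line in Path(spec).read_text(encoding="utf-8").splitlines():
            m = SCENARIO_RE.match(line)
            if m:
                rows.append({"id": m["id"], "name": m["name"], "data_provided": False})
    return rows
```

The rows can then be rendered as the markdown table shown above and presented to the user.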
+
+#### Step 2 — Ask user to provide test data
+
+For each row where **Data Provided?** is **No**, ask the user:
+
+> **Option A — Provide the data**: Supply the necessary test data files (with required quality and quantity as described in the table). Place them in `_docs/00_problem/input_data/` or indicate the location.
+>
+> **Option B — Skip this test**: If you cannot provide the data, this test scenario will be **removed** from the specification.
+
+**BLOCKING**: Wait for the user's response for every missing data item.
+
+#### Step 3 — Validate provided data
+
+For each item where the user chose **Option A**:
+
+1. Verify the data file(s) exist at the indicated location
+2. Verify **quality**: data matches the format, schema, and constraints described in the test scenario (e.g., correct image resolution, valid JSON structure, expected value ranges)
+3. Verify **quantity**: enough data samples to cover the scenario (e.g., at least N images for a batch test, multiple edge-case variants)
+4. If validation fails, report the specific issue and loop back to Step 2 for that item
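The mechanical part of the Step 3 checks (existence and quantity — quality checks are scenario-specific) could be sketched as follows; the `validate_test_data` helper, glob pattern, and sample counts are illustrative assumptions, not part of the skill:

```python
from pathlib import Path

def validate_test_data(location: str, pattern: str, min_samples: int) -> tuple[bool, str]:
    """Check 1 (files exist at the indicated location) and check 3 (quantity)."""
    root = Path(location)
    if not root.is_dir():
        return False, f"location not found: {location}"  # check 1 failed
    found = len(list(root.glob(pattern)))
    if found < min_samples:
        return False, f"need >= {min_samples} samples, found {found}"  # check 3 failed
    return True, f"ok: {found} samples"
```

On failure, the message is what gets reported back to the user before looping to Step 2.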
+
+#### Step 4 — Remove tests without data
+
+For each item where the user chose **Option B**:
+
+1. Warn the user: `⚠️ Test scenario [ID] "[Name]" will be REMOVED from the specification due to missing test data.`
+2. Remove the test scenario from `functional_tests.md` or `non_functional_tests.md`
+3. Remove corresponding rows from `traceability_matrix.md`
+4. Update `test_data.md` to reflect the removal
+
+**Save action**: Write updated files under TESTS_OUTPUT_DIR:
+- `test_data.md`
+- `functional_tests.md` (if tests removed)
+- `non_functional_tests.md` (if tests removed)
+- `traceability_matrix.md` (if tests removed)
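Pruning the traceability matrix in Step 4, item 3, amounts to dropping every table row that mentions the removed scenario. A minimal sketch, assuming a plain markdown table whose cells are separated by `| ... |` (the `drop_scenario` helper is hypothetical):

```python
def drop_scenario(markdown: str, scenario_id: str) -> str:
    """Return the document with any table row mentioning scenario_id removed."""
    kept = [line for line in markdown.splitlines()
            if not (line.lstrip().startswith("|") and f" {scenario_id} " in line)]
    return "\n".join(kept)
```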
+
+#### Step 5 — Final coverage check
+
+After all removals, recalculate coverage:
+
+1. Count remaining test scenarios that trace to acceptance criteria
+2. Count total acceptance criteria + restrictions
+3. Calculate coverage percentage: `covered_items / total_items * 100`
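The Step 5 arithmetic and the 70% pass threshold can be sketched as follows; the function names are illustrative, not part of the skill:

```python
def coverage_percent(covered_items: int, total_items: int) -> float:
    """Step 5 formula: covered_items / total_items * 100."""
    return 0.0 if total_items == 0 else covered_items / total_items * 100

def phase3_gate(covered_items: int, total_items: int, threshold: float = 70.0) -> bool:
    """PASS when coverage meets the 70% minimum."""
    return coverage_percent(covered_items, total_items) >= threshold
```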
|
||||
| Metric | Value |
|
||||
|--------|-------|
|
||||
| Total AC + Restrictions | ? |
|
||||
| Covered by remaining tests | ? |
|
||||
| **Coverage %** | **?%** |
|
||||
|
||||
**Decision**:
|
||||
|
||||
- **Coverage ≥ 70%** → Phase 3 **PASSED**. Present final summary to user.
|
||||
- **Coverage < 70%** → Phase 3 **FAILED**. Report:
|
||||
> ❌ Test coverage dropped to **X%** (minimum 70% required). The removed test scenarios left gaps in the following acceptance criteria / restrictions:
|
||||
>
|
||||
> | Uncovered Item | Type (AC/Restriction) | Missing Test Data Needed |
|
||||
> |---|---|---|
|
||||
>
|
||||
> **Action required**: Provide the missing test data for the items above, or add alternative test scenarios that cover these items with data you can supply.
|
||||
|
||||
**BLOCKING**: Loop back to Step 2 with the uncovered items. Do NOT finalize until coverage ≥ 70%.
|
||||
|
||||
#### Phase 3 Completion
|
||||
|
||||
When coverage ≥ 70% and all remaining tests have validated data:
|
||||
|
||||
1. Present the final coverage report
|
||||
2. List all removed tests (if any) with reasons
|
||||
3. Confirm all artifacts are saved and consistent
|
||||
|
||||
---
|
||||
|
||||
## Escalation Rules
|
||||
|
||||
| Situation | Action |
|
||||
|-----------|--------|
|
||||
| Missing acceptance_criteria.md, restrictions.md, or input_data/ | **STOP** — specification cannot proceed |
|
||||
| Ambiguous requirements | ASK user |
|
||||
| Input data coverage below 70% | Search internet for supplementary data, ASK user to validate |
|
||||
| Input data coverage below 70% (Phase 1) | Search internet for supplementary data, ASK user to validate |
|
||||
| Test scenario conflicts with restrictions | ASK user to clarify intent |
|
||||
| System interfaces unclear (no architecture.md) | ASK user or derive from solution.md |
|
||||
| Test data not provided for a test scenario (Phase 3) | WARN user and REMOVE the test |
|
||||
| Final coverage below 70% after removals (Phase 3) | BLOCK — require user to supply data or accept reduced spec |
|
||||
|
||||
## Common Mistakes
|
||||
|
||||
@@ -181,6 +274,7 @@ Capture any new questions, findings, or insights that arise during test specific
 - **Missing negative scenarios**: every positive scenario category should have corresponding negative/edge-case tests
 - **Untraceable tests**: every test should trace to at least one AC or restriction
 - **Writing test code**: this skill produces specifications, never implementation code
+- **Tests without data**: every test scenario MUST have concrete test data; a test spec without data is not executable and must be removed
 
 ## Trigger Conditions
 
@@ -194,25 +288,34 @@ When the user wants to:
 ## Methodology Quick Reference
 
 ```
-┌────────────────────────────────────────────────────────────────┐
-│ Black-Box Test Scenario Specification (2-Phase) │
-├────────────────────────────────────────────────────────────────┤
+┌─────────────────────────────────────────────────────────────────┐
+│ Black-Box Test Scenario Specification (3-Phase) │
+├─────────────────────────────────────────────────────────────────┤
 │ PREREQ: Data Gate (BLOCKING) │
 │ → verify AC, restrictions, input_data, solution exist │
-│ │
-│ Phase 1a: Input Data Completeness Analysis │
+│ │
+│ Phase 1: Input Data Completeness Analysis │
 │ → assess input_data/ coverage vs AC scenarios (≥70%) │
-│ [BLOCKING: user confirms input data coverage] │
-│ │
-│ Phase 1b: Black-Box Test Scenario Specification │
+│ [BLOCKING: user confirms input data coverage] │
+│ │
+│ Phase 2: Black-Box Test Scenario Specification │
 │ → environment.md │
 │ → test_data.md │
 │ → functional_tests.md (positive + negative) │
 │ → non_functional_tests.md (perf, resilience, security, limits)│
 │ → traceability_matrix.md │
-│ [BLOCKING: user confirms test coverage] │
-├────────────────────────────────────────────────────────────────┤
+│ [BLOCKING: user confirms test coverage] │
+│ │
+│ Phase 3: Test Data Validation Gate (HARD GATE) │
+│ → build test-data requirements checklist │
+│ → ask user: provide data (Option A) or remove test (Option B) │
+│ → validate provided data (quality + quantity) │
+│ → remove tests without data, warn user │
+│ → final coverage check (≥70% or FAIL + loop back) │
+│ [BLOCKING: coverage ≥ 70% required to pass] │
+├─────────────────────────────────────────────────────────────────┤
 │ Principles: Black-box only · Traceability · Save immediately │
 │ Ask don't assume · Spec don't code │
-└────────────────────────────────────────────────────────────────┘
+│ No test without data │
+└─────────────────────────────────────────────────────────────────┘
 ```
 