mirror of
https://github.com/azaion/ai-training.git
synced 2026-04-23 00:46:36 +00:00
40 lines
2.4 KiB
Markdown
# Phase 1: Input Data & Expected Results Completeness Analysis
**Role**: Professional Quality Assurance Engineer
**Goal**: Assess whether the available input data is sufficient to build comprehensive test scenarios, and whether every input is paired with a quantifiable expected result.
**Constraints**: Analysis only — no test specs yet.
## Steps
1. Read `_docs/01_solution/solution.md`
2. Read `acceptance_criteria.md` and `restrictions.md`
3. Read testing strategy from `solution.md` (if present)
4. If `DOCUMENT_DIR/architecture.md` and `DOCUMENT_DIR/system-flows.md` exist, read them for additional context on system interfaces and flows
5. Read `input_data/expected_results/results_report.md` and any referenced files in `input_data/expected_results/`
6. Analyze `input_data/` contents against:
- Coverage of acceptance criteria scenarios
- Coverage of restriction edge cases
- Coverage of testing strategy requirements
7. Analyze `input_data/expected_results/results_report.md` completeness:
- Every input data item has a corresponding expected result row in the mapping
- Expected results are quantifiable (contain numeric thresholds, exact values, patterns, or file references — not vague descriptions like "works correctly" or "returns result")
- Expected results specify a comparison method (exact match, tolerance range, pattern match, threshold) per the template
- Reference files in `input_data/expected_results/` that are cited in the mapping actually exist and are valid
8. Present input-to-expected-result pairing assessment:
| Input Data | Expected Result Provided? | Quantifiable? | Issue (if any) |
|------------|---------------------------|---------------|----------------|
| [file/data] | Yes/No | Yes/No | [missing, vague, no tolerance, etc.] |
|
9. Threshold: at least 75% coverage of scenarios AND every covered scenario has a quantifiable expected result (see `.cursor/rules/cursor-meta.mdc` Quality Thresholds table)
10. If coverage is low, search the internet for supplementary data, assess its quality with the user, and, if the user agrees, add it to `input_data/` and update `input_data/expected_results/results_report.md`
11. If expected results are missing or not quantifiable, ask the user to provide them before proceeding
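The mechanical parts of steps 6–8 can be sketched as a small script. This is an illustrative, heuristic check only, not part of the workflow: the two-column table layout, the list of vague phrases, and the file-extension pattern are all assumptions that would need adjusting to the real `results_report.md` template.

```python
import re
from pathlib import Path

# Heuristics (assumptions, tune to the real template):
VAGUE = re.compile(r"\b(works correctly|returns result|as expected)\b", re.I)
# A result counts as quantifiable if it names a number, a pattern, or a file.
QUANTIFIABLE = re.compile(r"(\d|regex|pattern|\.csv|\.json|\.txt)", re.I)
FILE_REF = re.compile(r"[\w./-]+\.(?:csv|json|txt)")

def parse_mapping(markdown: str) -> list[dict]:
    """Parse '| input | expected result |' rows of a markdown table."""
    rows = []
    for line in markdown.splitlines():
        if not line.strip().startswith("|") or set(line) <= set("|-: "):
            continue  # skip non-table lines and the separator row
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        cells += [""] * (2 - len(cells))  # guard against short rows
        rows.append({"input": cells[0], "expected": cells[1]})
    return rows[1:]  # drop the header row

def assess(rows: list[dict], base_dir: Path, threshold: float = 0.75):
    """Return (coverage_ok, issues) for the step 7 completeness checks."""
    issues, quantifiable = [], 0
    for row in rows:
        exp = row["expected"]
        if not exp:
            issues.append(f"{row['input']}: missing expected result")
        elif VAGUE.search(exp) or not QUANTIFIABLE.search(exp):
            issues.append(f"{row['input']}: expected result not quantifiable")
        else:
            quantifiable += 1
        for ref in FILE_REF.findall(exp):  # cited files must exist
            if not (base_dir / ref).exists():
                issues.append(f"{row['input']}: cited file {ref} not found")
    coverage = quantifiable / len(rows) if rows else 0.0
    return coverage >= threshold, issues

sample = """
| Input Data | Expected Result |
|------------|-----------------|
| login.csv  | HTTP 200 within 500 ms |
| empty.csv  | works correctly |
"""
rows = parse_mapping(sample)
ok, issues = assess(rows, Path("input_data/expected_results"))
# ok is False here: only 1 of 2 rows is quantifiable, below the 75% bar.
```

A real run would read `input_data/expected_results/results_report.md` from disk; the human judgment calls (comparison method adequacy, scenario coverage against acceptance criteria) stay with the reviewer.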
## Blocking
**BLOCKING**: Do NOT proceed to Phase 2 until the user confirms both input data coverage AND expected results completeness are sufficient.
## No save action
Phase 1 does not write an artifact. Findings feed Phase 2.