mirror of
https://github.com/azaion/detections.git
synced 2026-04-22 09:16:33 +00:00
Enhance autopilot documentation and workflows: Add assumptions regarding single project per workspace, update notification sound references, and introduce context budget heuristics for managing session limits. Revise various skill documents to reflect changes in task management, including ticketing and testing processes, ensuring clarity and consistency across the system.
@@ -132,8 +132,8 @@ Condition: `_docs/03_implementation/FINAL_implementation_report.md` exists AND t
 Action: Run the full test suite to verify the implementation before deployment.
 
-1. **Unit tests**: detect the project's test runner (e.g., `pytest`, `dotnet test`, `cargo test`, `npm test`) and run all unit tests
-2. **Blackbox tests**: if `docker-compose.test.yml` or an equivalent test environment exists, spin it up and run the blackbox test suite
+1. If `scripts/run-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
+2. Otherwise, detect the project's test runner manually (e.g., `pytest`, `dotnet test`, `cargo test`, `npm test`) and run all unit tests; if `docker-compose.test.yml` or an equivalent test environment exists, spin it up and run the blackbox test suite
 3. **Report results**: present a summary of passed/failed/skipped tests
 
 If all tests pass → auto-chain to Step 5b (Security Audit).
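The new step order (prefer a generated `scripts/run-tests.sh`, else detect the runner manually) can be sketched as a small shell helper. This is a hypothetical illustration only, not the script the test-spec skill actually generates; the file markers used for detection (`pyproject.toml`, `*.sln`, `Cargo.toml`, `package.json`) are assumptions:

```shell
#!/usr/bin/env sh
# Hypothetical sketch of the fallback order described in the diff above.
# Detection markers are assumptions, not the skill's actual logic.

# Print which test runner this project appears to use.
detect_runner() {
  dir="${1:-.}"
  if [ -f "$dir/pytest.ini" ] || [ -f "$dir/pyproject.toml" ]; then
    echo "pytest"
  elif [ -n "$(find "$dir" -maxdepth 1 \( -name '*.sln' -o -name '*.csproj' \) 2>/dev/null)" ]; then
    echo "dotnet test"
  elif [ -f "$dir/Cargo.toml" ]; then
    echo "cargo test"
  elif [ -f "$dir/package.json" ]; then
    echo "npm test"
  else
    echo "unknown"
  fi
}

run_unit_tests() {
  dir="${1:-.}"
  # Step 1: prefer the generated wrapper when it exists.
  if [ -x "$dir/scripts/run-tests.sh" ]; then
    "$dir/scripts/run-tests.sh"
    return
  fi
  # Step 2: fall back to manual detection.
  runner="$(detect_runner "$dir")"
  if [ "$runner" = "unknown" ]; then
    echo "no test runner detected" >&2
    return 1
  fi
  ( cd "$dir" && $runner )  # word splitting intended: runner may be "cargo test"
}
```

A blackbox phase would follow the same pattern: run `docker compose -f docker-compose.test.yml up -d` only when that file exists, then tear it down afterwards.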
@@ -193,12 +193,11 @@ Action: Present using Choose format:
 ```
 
 - If user picks A → Run performance tests:
-  1. Check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios
-  2. Detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks)
-  3. Execute performance test scenarios against the running system
-  4. Present results vs acceptance criteria thresholds
-  5. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
-  6. After completion, auto-chain to Step 6 (Deploy)
+  1. If `scripts/run-performance-tests.sh` exists (generated by the test-spec skill Phase 4), execute it
+  2. Otherwise, check if `_docs/02_document/tests/performance-tests.md` exists for test scenarios, detect appropriate load testing tool (k6, locust, artillery, wrk, or built-in benchmarks), and execute performance test scenarios against the running system
+  3. Present results vs acceptance criteria thresholds
+  4. If thresholds fail → present Choose format: A) Fix and re-run, B) Proceed anyway, C) Abort
+  5. After completion, auto-chain to Step 6 (Deploy)
 - If user picks B → Mark Step 5c as `skipped` in the state file, auto-chain to Step 6 (Deploy).
 
 ---
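The load-tool detection in the consolidated performance step can be sketched as a PATH lookup in the order the doc lists (k6, locust, artillery, wrk), with built-in benchmarks as the fallback. The preference order and the `builtin` sentinel are assumptions for illustration, not the skill's actual behavior:

```shell
#!/usr/bin/env sh
# Hypothetical sketch: pick the first available load-testing tool on PATH,
# in an assumed preference order; fall back to built-in benchmarks.
detect_load_tool() {
  for tool in k6 locust artillery wrk; do
    if command -v "$tool" >/dev/null 2>&1; then
      echo "$tool"
      return 0
    fi
  done
  echo "builtin"  # no dedicated tool installed: use built-in benchmarks
}
```

A caller would then dispatch on the result, e.g. `case "$(detect_load_tool)" in k6) k6 run scenario.js ;; ... esac`, before comparing results against the acceptance-criteria thresholds.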