---
name: code-review
description: >-
  Multi-phase code review against task specs with structured findings output.
  6-phase workflow: context loading, spec compliance, code quality, security
  quick-scan, performance scan, cross-task consistency. Produces a structured
  report with severity-ranked findings and a PASS/FAIL/PASS_WITH_WARNINGS
  verdict. Invoked by the /implement skill after each batch, or manually.
  Trigger phrases: "code review", "review code", "review implementation",
  "check code quality", "review against specs".
category: review
tags:
  - code-review
  - quality
  - security-scan
  - performance
  - SOLID
disable-model-invocation: true
---

# Code Review

Multi-phase code review that verifies implementation against task specs, checks code quality, and produces structured findings.

## Core Principles

- Understand intent first: read the task specs before reviewing code — know what it should do before judging how
- Structured output: every finding has severity, category, location, description, and suggestion
- Deduplicate: the same issue at the same location is reported once, using `{file}:{line}:{title}` as the key
- Severity-ranked: findings are sorted Critical > High > Medium > Low
- Verdict-driven: a clear PASS/FAIL/PASS_WITH_WARNINGS verdict drives automation decisions
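
The dedup-and-rank behavior described above can be sketched as follows (the finding dict shape is illustrative; the skill only mandates the fields listed):

```python
# Deduplicate findings by {file}:{line}:{title}, then sort by severity.
# The dict shape here is illustrative, not a prescribed schema.
SEVERITY_ORDER = {"Critical": 0, "High": 1, "Medium": 2, "Low": 3}

def dedupe_and_rank(findings: list[dict]) -> list[dict]:
    seen: dict[str, dict] = {}
    for f in findings:
        key = f"{f['file']}:{f['line']}:{f['title']}"
        seen.setdefault(key, f)  # first report of an issue wins
    return sorted(seen.values(), key=lambda f: SEVERITY_ORDER[f["severity"]])
```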

## Input

- List of task spec files that were just implemented (paths to `[TRACKER-ID]_[short_name].md`)
- Changed files (detected via `git diff` or provided by the /implement skill)
- Project context: `_docs/00_problem/restrictions.md`, `_docs/01_solution/solution.md`

## Phase 1: Context Loading

Before reviewing code, build understanding of intent:

  1. Read each task spec — acceptance criteria, scope, constraints, dependencies
  2. Read project restrictions and solution overview
  3. Map which changed files correspond to which task specs
  4. Understand what the code is supposed to do before judging how it does it

## Phase 2: Spec Compliance Review

For each task, verify implementation satisfies every acceptance criterion:

- Walk through each AC (Given/When/Then) and trace it in the code
- Check that unit tests cover each AC
- Check that blackbox tests exist where specified in the task spec
- Flag any AC that is not demonstrably satisfied as a Spec-Gap finding (severity: High)
- Flag any scope creep (implementation beyond what the spec asked for) as a Scope finding (severity: Low)
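
To make the AC-tracing step concrete, here is a hypothetical Given/When/Then criterion and the kind of unit test a reviewer looks for (the class and method names are invented for illustration):

```python
# Hypothetical AC-3: "Given an order with items, when an item is removed,
# then the order total is recalculated." The reviewer checks that a test
# like this exists and asserts the behavior, not just "no error thrown".
class Order:
    def __init__(self):
        self.items: dict[str, float] = {}

    def add_item(self, name: str, price: float) -> None:
        self.items[name] = price

    def remove_item(self, name: str) -> None:
        self.items.pop(name, None)  # total below recalculates automatically

    @property
    def total(self) -> float:
        return sum(self.items.values())

def test_total_recalculated_on_item_removal():
    order = Order()
    order.add_item("book", 10.0)
    order.add_item("pen", 2.0)
    order.remove_item("pen")
    assert order.total == 10.0  # AC-3 traced: removal triggers recalculation
```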

## Phase 3: Code Quality Review

Check implemented code against quality standards:

- SOLID principles — single responsibility, open/closed, Liskov substitution, interface segregation, dependency inversion
- Error handling — consistent strategy, no bare catch/except, meaningful error messages
- Naming — clear intent, follows project conventions
- Complexity — flag functions longer than 50 lines or with cyclomatic complexity > 10
- DRY — no duplicated logic across files
- Test quality — tests assert meaningful behavior, not just "no error thrown"
- Dead code — unused imports, unreachable branches
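
The complexity threshold can be approximated mechanically; this rough sketch counts branch points with Python's `ast` module (a real review would lean on a dedicated tool, so treat this as illustrative only):

```python
import ast

# Rough cyclomatic-complexity estimate: 1 + number of branch points.
# BoolOp counts `and`/`or` short-circuits as branches.
BRANCH_NODES = (ast.If, ast.For, ast.While, ast.ExceptHandler, ast.BoolOp)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    return 1 + sum(isinstance(node, BRANCH_NODES) for node in ast.walk(tree))

def flag_if_complex(source: str, threshold: int = 10) -> bool:
    return cyclomatic_complexity(source) > threshold
```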

## Phase 4: Security Quick-Scan

Lightweight security checks (defer deep analysis to the /security skill):

- SQL injection via string interpolation
- Command injection (`subprocess` with `shell=True`, `exec`, `eval`)
- Hardcoded secrets, API keys, passwords
- Missing input validation on external inputs
- Sensitive data in logs or error messages
- Insecure deserialization
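
For example, the first check targets patterns like the one below; this `sqlite3` sketch shows both the flagged code and the suggested parameterized fix (table and data are illustrative):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

def find_user_unsafe(name: str):
    # FLAGGED (Critical / Security): user input interpolated into SQL
    return conn.execute(f"SELECT role FROM users WHERE name = '{name}'").fetchone()

def find_user_safe(name: str):
    # Suggested fix: bind parameters; the driver handles quoting
    return conn.execute("SELECT role FROM users WHERE name = ?", (name,)).fetchone()
```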

## Phase 5: Performance Scan

Check for common performance anti-patterns:

- O(n^2) or worse algorithms where O(n) is possible
- N+1 query patterns
- Unbounded data fetching (missing pagination/limits)
- Blocking I/O in async contexts
- Unnecessary memory copies or allocations in hot paths
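
The first anti-pattern often hides in plain membership checks; a minimal illustration:

```python
# O(n^2): each `x in b` scans the whole list
def common_items_slow(a: list[str], b: list[str]) -> list[str]:
    return [x for x in a if x in b]

# O(n): a set gives O(1) average-case membership checks
def common_items_fast(a: list[str], b: list[str]) -> list[str]:
    b_set = set(b)
    return [x for x in a if x in b_set]
```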

## Phase 6: Cross-Task Consistency

When multiple tasks were implemented in the same batch, verify that:

- Interfaces between tasks are compatible (method signatures and DTOs match)
- Patterns do not conflict (e.g., one task uses the repository pattern while another does raw SQL)
- Shared code is not duplicated across task implementations
- Dependencies declared in task specs are properly wired
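
The DTO-compatibility check can be sketched with dataclasses (the task numbers and event names are hypothetical):

```python
from dataclasses import dataclass, fields

# Hypothetical DTOs from two tasks in the same batch; the consistency check
# verifies the consumer expects exactly the fields the producer emits.
@dataclass
class OrderCreatedEvent:        # emitted by task 07
    order_id: str
    total: float

@dataclass
class OrderCreatedHandlerInput: # expected by task 09
    order_id: str
    total: float

def dto_fields(cls) -> set[tuple]:
    return {(f.name, f.type) for f in fields(cls)}
```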

## Output Format

Produce a structured report with findings deduplicated and sorted by severity:

```markdown
# Code Review Report

**Batch**: [task list]
**Date**: [YYYY-MM-DD]
**Verdict**: PASS | PASS_WITH_WARNINGS | FAIL

## Findings

| # | Severity | Category | File:Line | Title |
|---|----------|----------|-----------|-------|
| 1 | Critical | Security | src/api/auth.py:42 | SQL injection via f-string |
| 2 | High | Spec-Gap | src/service/orders.py | AC-3 not satisfied |

### Finding Details

**F1: SQL injection via f-string** (Critical / Security)
- Location: `src/api/auth.py:42`
- Description: User input interpolated directly into SQL query
- Suggestion: Use parameterized query via bind parameters
- Task: 04_auth_service

**F2: AC-3 not satisfied** (High / Spec-Gap)
- Location: `src/service/orders.py`
- Description: AC-3 requires order total recalculation on item removal, but no such logic exists
- Suggestion: Add recalculation in remove_item() method
- Task: 07_order_processing
```

## Severity Definitions

| Severity | Meaning | Blocks? |
|----------|---------|---------|
| Critical | Security vulnerability, data loss, crash | Yes — verdict FAIL |
| High | Spec gap, logic bug, broken test | Yes — verdict FAIL |
| Medium | Performance issue, maintainability concern, missing validation | No — verdict PASS_WITH_WARNINGS |
| Low | Style, minor improvement, scope creep | No — verdict PASS_WITH_WARNINGS |

## Category Values

Bug, Spec-Gap, Security, Performance, Maintainability, Style, Scope

## Verdict Logic

- FAIL: any Critical or High finding exists
- PASS_WITH_WARNINGS: only Medium or Low findings
- PASS: no findings
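
This logic is mechanical enough to sketch directly (the finding dicts are illustrative):

```python
# Verdict computation exactly as defined above.
def compute_verdict(findings: list[dict]) -> str:
    severities = {f["severity"] for f in findings}
    if severities & {"Critical", "High"}:
        return "FAIL"
    if severities:
        return "PASS_WITH_WARNINGS"
    return "PASS"
```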

## Integration with /implement

The /implement skill invokes this skill after each batch completes:

  1. Collects changed files from all implementer agents in the batch
  2. Passes task spec paths + changed files to this skill
  3. If verdict is FAIL — presents findings to user (BLOCKING), user fixes or confirms
  4. If verdict is PASS or PASS_WITH_WARNINGS — proceeds automatically (findings shown as info)

## Integration Contract

### Inputs (provided by the implement skill)

| Input | Type | Source | Required |
|-------|------|--------|----------|
| task_specs | list of file paths | Task .md files from `_docs/02_tasks/todo/` for the current batch | Yes |
| changed_files | list of file paths | Files modified by implementer agents (from `git diff` or agent reports) | Yes |
| batch_number | integer | Current batch number (for report naming) | Yes |
| project_restrictions | file path | `_docs/00_problem/restrictions.md` | If exists |
| solution_overview | file path | `_docs/01_solution/solution.md` | If exists |

### Invocation Pattern

The implement skill invokes code-review by:

  1. Reading `.cursor/skills/code-review/SKILL.md`
  2. Providing the inputs above as context (read the files, pass content to the review phases)
  3. Executing all 6 phases sequentially
  4. Consuming the verdict from the output

### Outputs (returned to the implement skill)

| Output | Type | Description |
|--------|------|-------------|
| verdict | PASS / PASS_WITH_WARNINGS / FAIL | Drives the implement skill's auto-fix gate |
| findings | structured list | Each finding has: severity, category, file:line, title, description, suggestion, task reference |
| critical_count | integer | Number of Critical findings |
| high_count | integer | Number of High findings |
| report_path | file path | `_docs/03_implementation/reviews/batch_[NN]_review.md` |
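
One illustrative shape for this contract (a sketch, not a prescribed schema):

```python
from dataclasses import dataclass, field

# Illustrative container for the outputs table above.
@dataclass
class ReviewOutputs:
    verdict: str                      # PASS / PASS_WITH_WARNINGS / FAIL
    findings: list[dict] = field(default_factory=list)
    critical_count: int = 0
    high_count: int = 0
    report_path: str = ""

def summarize(findings: list[dict], report_path: str) -> ReviewOutputs:
    crit = sum(f["severity"] == "Critical" for f in findings)
    high = sum(f["severity"] == "High" for f in findings)
    verdict = "FAIL" if crit or high else ("PASS_WITH_WARNINGS" if findings else "PASS")
    return ReviewOutputs(verdict, findings, crit, high, report_path)
```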

### Report Persistence

Save the review report to `_docs/03_implementation/reviews/batch_[NN]_review.md` (create the `reviews/` directory if it does not exist). The report uses the Output Format defined above.

The implement skill uses the verdict to decide:

- PASS / PASS_WITH_WARNINGS → proceed to commit
- FAIL → enter the auto-fix loop (up to 2 attempts), then escalate to the user
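
The gate above can be sketched as follows (`run_review`, `apply_fixes`, and `escalate` stand in for the actual skill steps):

```python
# Auto-fix gate: re-review after each fix attempt, escalate after 2 failures.
MAX_FIX_ATTEMPTS = 2

def review_gate(run_review, apply_fixes, escalate) -> str:
    verdict = run_review()
    attempts = 0
    while verdict == "FAIL" and attempts < MAX_FIX_ATTEMPTS:
        apply_fixes()
        attempts += 1
        verdict = run_review()
    if verdict == "FAIL":
        return escalate()  # still failing after 2 attempts: hand off to the user
    return "commit"
```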