detections/_docs/02_document/integration_tests/environment.md
# E2E Test Environment

## Overview

**System under test:** `Azaion.Detections` — a FastAPI HTTP service exposing `POST /detect`, `POST /detect/{media_id}`, `GET /detect/stream`, and `GET /health`.

**Consumer app purpose:** a standalone test runner that exercises the detection service through its public HTTP/SSE interfaces, validating end-to-end use cases without access to internals.
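Before exercising any use case, a black-box runner typically waits for `GET /health` to come up. A minimal sketch (the function name, retry policy, and injectable `fetch` hook are assumptions for illustration, not part of the actual suite):

```python
import time
import urllib.request


def wait_for_healthy(url="http://detections:8000/health",
                     timeout=30.0, interval=0.5, fetch=None):
    """Poll the health endpoint until it answers 200 or the timeout expires."""
    if fetch is None:
        # Default fetch: status code of a plain GET against the service.
        fetch = lambda u: urllib.request.urlopen(u, timeout=5).status
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if fetch(url) == 200:
                return True
        except OSError:
            pass  # service not listening yet; retry after a short pause
        time.sleep(interval)
    return False
```

The `fetch` hook keeps the helper testable without a live service; in the real suite a `requests` session would play that role.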

## Docker Environment

### Services

| Service | Image / Build | Purpose | Ports |
|---|---|---|---|
| `detections` | Build from repo root (setup.py + Cython compile, uvicorn entrypoint) | System under test — the detection microservice | 8000:8000 |
| `mock-loader` | Custom lightweight HTTP stub (Python/Node) | Mock of the Loader service — serves ONNX model files, accepts TensorRT uploads | 8080:8080 |
| `mock-annotations` | Custom lightweight HTTP stub (Python/Node) | Mock of the Annotations service — accepts detection results, provides token refresh | 8081:8081 |
| `e2e-consumer` | Build from `e2e/` directory | Black-box test runner (pytest) | (none) |
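A stub like `mock-loader` can be as small as a stdlib HTTP server. The sketch below is illustrative only: the routes, payloads, and handler names are assumptions, not the repo's actual mock implementation.

```python
import json
import threading
from http.server import BaseHTTPRequestHandler, HTTPServer


class MockLoaderHandler(BaseHTTPRequestHandler):
    """Hypothetical mock-loader routes: GET a model file, POST an engine upload."""

    def do_GET(self):
        if self.path.startswith("/models"):
            body = b"fake-onnx-bytes"  # stand-in for a pre-built ONNX file
            self.send_response(200)
            self.send_header("Content-Type", "application/octet-stream")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_response(404)
            self.end_headers()

    def do_POST(self):
        # Accept a TensorRT engine upload and acknowledge it.
        length = int(self.headers.get("Content-Length", 0))
        self.rfile.read(length)
        payload = json.dumps({"status": "accepted"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)

    def log_message(self, *args):
        pass  # keep test output quiet


def serve(port=8080):
    """Start the stub on a background thread and return the server handle."""
    server = HTTPServer(("127.0.0.1", port), MockLoaderHandler)
    threading.Thread(target=server.serve_forever, daemon=True).start()
    return server
```

Passing `port=0` binds an ephemeral port, which is handy for running the stub inside a unit test.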

### GPU Configuration

For tests requiring TensorRT (GPU path):

- Deploy `detections` with `runtime: nvidia` and `NVIDIA_VISIBLE_DEVICES=all`
- The test suite has two profiles: `gpu` (TensorRT tests) and `cpu` (ONNX fallback tests)
- CPU-only tests run without the GPU runtime, verifying ONNX fallback behavior
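One way to realize the GPU profile is a Compose override file. This is a sketch assuming standard Compose override semantics; the file name `docker-compose.gpu.yml` is hypothetical:

```yaml
# docker-compose.gpu.yml (hypothetical override enabling the TensorRT path)
services:
  detections:
    runtime: nvidia
    environment:
      - NVIDIA_VISIBLE_DEVICES=all
```

It would be applied with something like `docker compose -f docker-compose.yml -f docker-compose.gpu.yml up`; the CPU profile simply omits the override.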

### Networks

| Network | Services | Purpose |
|---|---|---|
| `e2e-net` | all | Isolated test network — all service-to-service communication via hostnames |

### Volumes

| Volume | Mounted to | Purpose |
|---|---|---|
| `test-models` | `mock-loader:/models` | Pre-built ONNX model file for test inference |
| `test-media` | `e2e-consumer:/media` | Sample images and video files for detection requests |
| `test-classes` | `detections:/app/classes.json` | `classes.json` with 19 detection classes |
| `test-results` | `e2e-consumer:/results` | CSV test report output |

### docker-compose structure

```yaml
services:
  mock-loader:
    build: ./e2e/mocks/loader
    ports: ["8080:8080"]
    volumes:
      - test-models:/models
    networks: [e2e-net]

  mock-annotations:
    build: ./e2e/mocks/annotations
    ports: ["8081:8081"]
    networks: [e2e-net]

  detections:
    build:
      context: .
      dockerfile: Dockerfile
    ports: ["8000:8000"]
    environment:
      - LOADER_URL=http://mock-loader:8080
      - ANNOTATIONS_URL=http://mock-annotations:8081
    volumes:
      - test-classes:/app/classes.json
    depends_on:
      - mock-loader
      - mock-annotations
    networks: [e2e-net]
    # GPU profile adds: runtime: nvidia

  e2e-consumer:
    build: ./e2e
    volumes:
      - test-media:/media
      - test-results:/results
    depends_on:
      - detections
    networks: [e2e-net]
    command: pytest --csv=/results/report.csv

volumes:
  test-models:
  test-media:
  test-classes:
  test-results:

networks:
  e2e-net:
```

## Consumer Application

- **Tech stack:** Python 3, pytest, requests, sseclient-py
- **Entry point:** `pytest --csv=/results/report.csv`

### Communication with system under test

| Interface | Protocol | Endpoint | Authentication |
|---|---|---|---|
| Health check | HTTP GET | `http://detections:8000/health` | None |
| Single image detect | HTTP POST (multipart) | `http://detections:8000/detect` | None |
| Media detect | HTTP POST (JSON) | `http://detections:8000/detect/{media_id}` | Bearer JWT + `x-refresh-token` headers |
| SSE stream | HTTP GET (SSE) | `http://detections:8000/detect/stream` | None |
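Of these interfaces, only the media detect call is authenticated. A sketch of building that request with both headers; the function name, token values, and the empty JSON body are placeholders (the actual request schema is not documented here):

```python
from urllib.request import Request


def detect_media_request(media_id, jwt, refresh_token,
                         base="http://detections:8000"):
    """Build the POST /detect/{media_id} request with both auth headers."""
    return Request(
        f"{base}/detect/{media_id}",
        data=b"{}",  # JSON body; the exact schema is an assumption
        method="POST",
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {jwt}",
            "x-refresh-token": refresh_token,
        },
    )
```

The real suite uses `requests`, but the header layout is the same: a standard `Authorization: Bearer` header plus the custom `x-refresh-token` header used for token refresh.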

### What the consumer does NOT have access to

- No direct import of Cython modules (inference, annotation, engines)
- No direct access to the detections service filesystem or `Logs/` directory
- No shared memory with the detections process
- No direct calls to `mock-loader` or `mock-annotations` (except for test setup/teardown verification)
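Because the consumer is strictly black-box, the SSE stream is consumed as plain `text/event-stream` frames over HTTP. The real suite uses sseclient-py; as a sketch of what that parsing amounts to (event names here are illustrative assumptions):

```python
def parse_sse(raw):
    """Parse a raw text/event-stream payload into (event, data) tuples."""
    events = []
    event, data = "message", []  # "message" is the SSE default event type
    for line in raw.splitlines():
        if not line:  # a blank line terminates the current event
            if data:
                events.append((event, "\n".join(data)))
            event, data = "message", []
        elif line.startswith("event:"):
            event = line[len("event:"):].strip()
        elif line.startswith("data:"):
            data.append(line[len("data:"):].strip())
    return events
```

This keeps the consumer decoupled from the service internals: everything it asserts on arrives through the wire format.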

## CI/CD Integration

- **When to run:** on PR merge to `dev`, plus a nightly scheduled run
- **Pipeline stage:** after unit tests, before deployment
- **Gate behavior:** block merge if any functional test fails; non-functional failures are warnings
- **Timeout:** 15 minutes for the CPU profile, 30 minutes for the GPU profile

### Reporting

- **Format:** CSV
- **Columns:** Test ID, Test Name, Execution Time (ms), Result (PASS/FAIL/SKIP), Error Message (if FAIL)
- **Output path:** `/results/report.csv` (mounted volume → `./e2e-results/report.csv` on host)
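The column layout above maps directly onto the stdlib `csv` module. A sketch of producing a report in that shape (the helper name and row values are illustrative; the actual file is written by the pytest CSV plugin):

```python
import csv
import io

# Column names taken from the report description above.
COLUMNS = ["Test ID", "Test Name", "Execution Time (ms)", "Result", "Error Message"]


def write_report(rows, fh):
    """Write test results to fh using the report's CSV column layout."""
    writer = csv.DictWriter(fh, fieldnames=COLUMNS)
    writer.writeheader()
    writer.writerows(rows)


buf = io.StringIO()
write_report(
    [{"Test ID": "T-001", "Test Name": "health_check",
      "Execution Time (ms)": 12, "Result": "PASS", "Error Message": ""}],
    buf,
)
report_csv = buf.getvalue()
```

In the real environment the same shape lands in `/results/report.csv`, which the mounted volume surfaces on the host.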