[AZ-187] Docker & hardening

Made-with: Cursor
This commit is contained in:
Oleksandr Bezdieniezhnykh
2026-04-17 18:48:55 +03:00
parent 7d690e1fb4
commit cfed26ff8c
6 changed files with 784 additions and 56 deletions
# Jetson device provisioning runbook
This runbook describes the end-to-end flow to fuse, flash, and provision device identities so the Azaion Loader can authenticate against the admin/resource APIs. It supports Jetson Orin Nano, Orin NX 8GB, and Orin NX 16GB devices. Board configuration is auto-detected from the USB product ID.
The `scripts/provision_devices.sh` script automates the entire flow: detecting connected Jetsons, auto-installing L4T if needed, setting up Docker with the Loader container, optionally hardening the OS, registering device identities via the admin API, writing credentials, fusing, and flashing.
After provisioning, each Jetson boots into a production-ready state with Docker Compose running the Loader container.
## Prerequisites
- Ubuntu amd64 provisioning workstation with bash, curl, jq, wget, lsusb.
- Admin API reachable from the workstation (base URL configured in `scripts/.env`).
- An ApiAdmin account on the admin API (email and password in `scripts/.env`).
- `sudo` access on the workstation.
- USB-C cables that support both power and data transfer.
- Physical label/sticker materials for serial numbers.
- Internet access on first run (to download L4T BSP if not already installed).
- Loader Docker image tar file (see [Preparing the Loader image](#preparing-the-loader-image)).
The NVIDIA L4T BSP and sample rootfs are downloaded and installed automatically to `/opt/nvidia/Linux_for_Tegra` if not already present. No manual L4T setup is required.
## Configuration
Copy `scripts/.env.example` to `scripts/.env` and fill in values:
```
ADMIN_EMAIL=admin@azaion.com
ADMIN_PASSWORD=<your ApiAdmin password>
API_URL=https://admin.azaion.com
LOADER_IMAGE_TAR=/path/to/loader-image.tar
```
Optional overrides (auto-detected/defaulted if omitted):
```
L4T_VERSION=r36.4.4
L4T_DIR=/opt/nvidia/Linux_for_Tegra
ROOTFS_DIR=/opt/nvidia/Linux_for_Tegra/rootfs
RESOURCE_API_URL=https://admin.azaion.com
LOADER_DEV_STAGE=main
LOADER_IMAGE=localhost:5000/loader:arm
FLASH_TARGET=nvme0n1p1
HARDEN=true
```
The `.env` file is git-ignored and must not be committed.
## Preparing the Loader image
The provisioning script requires a Loader Docker image tar to pre-load onto each device. Options:
**From CI (recommended):** Download the `loader-image.tar` artifact from the Woodpecker CI pipeline for the target branch.
**Local build (requires arm64 builder or BuildKit cross-compilation):**
```bash
docker build -f Dockerfile -t localhost:5000/loader:arm .
docker save localhost:5000/loader:arm -o loader-image.tar
```
Set `LOADER_IMAGE_TAR` in `.env` to the absolute path of the resulting tar file.
## Supported devices
| USB Product ID | Model | Board Config (auto-detected) |
| --- | --- | --- |
| 0955:7523 | Jetson Orin Nano | jetson-orin-nano-devkit |
| 0955:7323 | Jetson Orin NX 16GB | jetson-orin-nx-devkit |
| 0955:7423 | Jetson Orin NX 8GB | jetson-orin-nx-devkit |
The script scans for all NVIDIA USB devices (`lsusb -d 0955:`), matches them against the table above, and displays the model name next to each detected device.
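The detection logic can be sketched as a small bash function (`board_config_for` is an illustrative name, not necessarily the one used in the script):

```shell
# Map a recovery-mode USB product ID to its flash board config,
# mirroring the table above. board_config_for is illustrative only.
board_config_for() {
  case "$1" in
    0955:7523) echo "jetson-orin-nano-devkit" ;;           # Jetson Orin Nano
    0955:7323|0955:7423) echo "jetson-orin-nx-devkit" ;;   # Orin NX 16GB / 8GB
    *) echo "unknown" ;;
  esac
}

# Example scan: extract each detected product ID from lsusb output.
lsusb_ids() { lsusb -d 0955: | grep -o '0955:[0-9a-f]\{4\}'; }
```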
## Admin API contract (device registration)
The script calls:
1. **POST** `{API_URL}/login` with `{"email":"<admin>","password":"<password>"}` to obtain a JWT.
2. **POST** `{API_URL}/devices` with `Authorization: Bearer <token>` and no request body.
- **200** or **201**: returns `{"serial":"azj-NNNN","email":"azj-NNNN@azaion.com","password":"<32-hex-chars>"}`.
- The server auto-assigns the next sequential serial number.
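The two calls can be sketched as below. The JWT field name (`.token`) and the parsing helpers are assumptions for illustration; check your deployment's login response shape:

```shell
# Illustrative helpers for the registration response fields shown above.
parse_serial()   { jq -r '.serial'   <<<"$1"; }
parse_email()    { jq -r '.email'    <<<"$1"; }
parse_password() { jq -r '.password' <<<"$1"; }

# The calls themselves (not executed here) would look roughly like:
# TOKEN=$(curl -fsS -X POST "$API_URL/login" \
#   -H 'Content-Type: application/json' \
#   -d "{\"email\":\"$ADMIN_EMAIL\",\"password\":\"$ADMIN_PASSWORD\"}" | jq -r '.token')
# RESPONSE=$(curl -fsS -X POST "$API_URL/devices" -H "Authorization: Bearer $TOKEN")
```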
## Device identity and `device.conf`
For each registered device, the script writes:
`{ROOTFS_DIR}/etc/azaion/device.conf`
On the flashed device this becomes `/etc/azaion/device.conf` with:
- `AZAION_DEVICE_EMAIL=azj-NNNN@azaion.com`
- `AZAION_DEVICE_PASSWORD=<32-hex-chars>`
File permissions are set to **600**.
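The credential write can be sketched as follows (`write_device_conf` is an illustrative helper name; the real script may be structured differently):

```shell
# Write per-device credentials into the rootfs staging tree with mode 600,
# matching the layout described above.
write_device_conf() {
  local rootfs="$1" email="$2" password="$3"
  install -d -m 755 "$rootfs/etc/azaion"
  printf 'AZAION_DEVICE_EMAIL=%s\nAZAION_DEVICE_PASSWORD=%s\n' \
    "$email" "$password" > "$rootfs/etc/azaion/device.conf"
  chmod 600 "$rootfs/etc/azaion/device.conf"
}
```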
## Docker and application setup
The `scripts/setup_rootfs_docker.sh` script prepares the rootfs before flashing. It runs automatically as part of `provision_devices.sh`. What it installs:
| Component | Details |
| --- | --- |
| Docker Engine + Compose plugin | Installed via apt in chroot from Docker's official repository |
| NVIDIA Container Toolkit | GPU passthrough for containers; nvidia set as default runtime |
| Production compose file | `/opt/azaion/docker-compose.yml` — defines the `loader` service |
| Loader image | Pre-loaded from `LOADER_IMAGE_TAR` at `/opt/azaion/loader-image.tar` |
| Boot service | `azaion-loader.service` — loads the image tar on first boot, starts compose |
### Device filesystem layout after flash
```
/etc/azaion/device.conf Per-device credentials
/etc/docker/daemon.json Docker config (NVIDIA default runtime)
/opt/azaion/docker-compose.yml Production compose file
/opt/azaion/boot.sh Boot startup script
/opt/azaion/loader-image.tar Initial Loader image (deleted after first boot)
/opt/azaion/models/ Model storage
/opt/azaion/state/ Update manager state
/etc/systemd/system/azaion-loader.service Systemd unit
```
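The generated compose file is not reproduced in this runbook; a minimal sketch of what `/opt/azaion/docker-compose.yml` might contain, assuming the image tag from `LOADER_IMAGE`, the health port from the smoke-verification section, and illustrative mounts:

```yaml
services:
  loader:
    image: localhost:5000/loader:arm
    restart: unless-stopped
    runtime: nvidia            # NVIDIA Container Toolkit is the default runtime
    ports:
      - "8080:8080"            # Loader HTTP API (health/login)
    volumes:
      - /etc/azaion/device.conf:/etc/azaion/device.conf:ro
      - /opt/azaion/models:/opt/azaion/models
      - /opt/azaion/state:/opt/azaion/state
```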
### First boot sequence
1. systemd starts `docker.service`
2. `azaion-loader.service` runs `/opt/azaion/boot.sh`
3. `boot.sh` runs `docker load -i /opt/azaion/loader-image.tar` (first boot only), then deletes the tar
4. `boot.sh` runs `docker compose -f /opt/azaion/docker-compose.yml up -d`
5. The Loader container starts, reads `/etc/azaion/device.conf`, authenticates with the API
6. The update manager begins polling for updates
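Steps 3-4 above can be sketched as a shell function. The real `boot.sh` is generated by `setup_rootfs_docker.sh` and invokes docker directly; the `DOCKER` indirection here exists only so the sketch can be exercised without a daemon:

```shell
DOCKER="${DOCKER:-docker}"

# first_boot: load the pre-staged image on first boot, then bring up compose.
first_boot() {
  local tar="$1" compose="$2"
  if [ -f "$tar" ]; then
    "$DOCKER" load -i "$tar"   # first boot only: import the pre-staged image
    rm -f "$tar"               # delete the tar once loaded to free disk space
  fi
  "$DOCKER" compose -f "$compose" up -d
}

# On the device this would be:
# first_boot /opt/azaion/loader-image.tar /opt/azaion/docker-compose.yml
```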
## Security hardening
The `scripts/harden_rootfs.sh` script applies production security hardening to the rootfs. It runs automatically unless `--no-harden` is passed.
| Measure | Details |
| --- | --- |
| SSH disabled | `sshd.service` and `ssh.service` masked; `sshd_config` removed |
| Getty masked | `getty@.service` and `serial-getty@.service` masked — no login prompt |
| Serial console disabled | `console=ttyTCU0` / `console=ttyS0` removed from `extlinux.conf` |
| Sysctl hardening | ptrace blocked, core dumps disabled, kernel pointers hidden, ICMP redirects off |
| Root locked | Root account password-locked in `/etc/shadow` |
To provision without hardening (e.g. for development devices):
```bash
./scripts/provision_devices.sh --no-harden
```
Or set `HARDEN=false` in `.env`.
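An illustrative subset of the measures in the table, expressed as a function over a rootfs staging tree. The real `harden_rootfs.sh` covers more (serial console removal, root lock), and the sysctl keys below are plausible choices for the listed measures, not verbatim values from the script:

```shell
# harden_rootfs_sketch: mask login services and drop in sysctl hardening.
harden_rootfs_sketch() {
  local rootfs="$1" unit
  mkdir -p "$rootfs/etc/systemd/system" "$rootfs/etc/sysctl.d"
  for unit in ssh.service sshd.service getty@.service serial-getty@.service; do
    ln -sf /dev/null "$rootfs/etc/systemd/system/$unit"  # masking = symlink to /dev/null
  done
  cat > "$rootfs/etc/sysctl.d/99-hardening.conf" <<'EOF'
kernel.yama.ptrace_scope = 3
kernel.kptr_restrict = 2
net.ipv4.conf.all.accept_redirects = 0
net.ipv4.conf.all.send_redirects = 0
EOF
}
```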
## Step-by-step flow
### 1. Connect Jetsons in recovery mode
Connect one or more Jetson devices via USB-C. Put each device into recovery mode: hold Force Recovery button, press Power, release Power, then release Force Recovery after 2 seconds.
Verify with `lsusb -d 0955:` -- each recovery-mode Jetson appears as `NVIDIA Corp. APX`.
### 2. Run the provisioning script
From the loader repository root:
```bash
./scripts/provision_devices.sh
```
The script will:
1. **Install dependencies** -- installs lsusb, curl, jq, wget via apt; adds `qemu-user-static` and `binfmt-support` on x86 hosts for cross-arch chroot.
2. **Install L4T** -- if L4T BSP is not present at `L4T_DIR`, downloads the BSP and sample rootfs, extracts them, and runs `apply_binaries.sh`. This only happens on first run.
3. **Set up Docker** -- installs Docker Engine, NVIDIA Container Toolkit, compose file, and Loader image into the rootfs via chroot (`setup_rootfs_docker.sh`).
4. **Harden OS** (unless `--no-harden`) -- disables SSH, getty, serial console, applies sysctl hardening (`harden_rootfs.sh`).
5. **Authenticate** -- logs in to the admin API to get a JWT.
6. **Scan USB** -- detects all supported Jetson devices in recovery mode, displays model names.
7. **Display selection UI** -- lists detected devices with numbers and model type.
8. **Prompt for selection** -- enter device numbers (e.g. `1 3 4`), or `0` for all.
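The selection parsing in step 8 can be sketched as below (`select_devices` is a hypothetical name for illustration): given the number of detected devices and the admin's input, it prints the chosen device indices one per line, with `0` selecting all.

```shell
# select_devices <count> <tokens...>: validate and expand the admin's choice.
select_devices() {
  local count="$1"; shift
  if [ "$*" = "0" ]; then
    seq 1 "$count"          # 0 means "all detected devices"
    return
  fi
  local n
  for n in "$@"; do
    # keep only in-range numeric tokens; [ errors on non-numbers, treated as false
    if [ "$n" -ge 1 ] 2>/dev/null && [ "$n" -le "$count" ]; then
      echo "$n"
    fi
  done
}
```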
### 3. Per-device provisioning (automatic)
For each selected device, the script runs sequentially:
1. **Register** -- calls `POST /devices` to get server-assigned serial, email, and password.
2. **Write device.conf** -- embeds credentials in the rootfs staging directory.
3. **Fuse** -- runs `odmfuse.sh` targeting the specific USB device instance. Board config is auto-detected from the USB product ID.
4. **Power-cycle prompt** -- asks the admin to power-cycle the device and re-enter recovery mode.
5. **Flash** -- runs `flash.sh` with the auto-detected board config to write the rootfs (including `device.conf`, Docker, and application files) to the device. Default target is `nvme0n1p1` (NVMe SSD); override with `FLASH_TARGET` in `.env` (e.g. `mmcblk0p1` for eMMC).
6. **Sticker prompt** -- displays the assigned serial and asks the admin to apply a physical label.
### 4. Apply serial labels
After each device is flashed, the script prints the assigned serial (e.g. `azj-0042`). Apply a label/sticker with this serial to the device enclosure for physical identification.
### 5. First boot
Power the Jetson. Docker starts automatically, loads the Loader image, and starts Docker Compose. The Loader service reads `AZAION_DEVICE_EMAIL` and `AZAION_DEVICE_PASSWORD` from `/etc/azaion/device.conf` and uses them to authenticate with the admin API via `POST /login`. The update manager begins checking for updates.
### 6. Smoke verification
- From another host: Loader `GET /health` on port 8080 returns healthy.
- `docker ps` on the device (if unhardened) shows the loader container running.
- Optional: trigger a resource or unlock smoke test against a staging API.
## Troubleshooting
| Symptom | Check |
| --- | --- |
| No devices found by script | USB cables, recovery mode entry sequence, `lsusb -d 0955:` |
| Unknown product ID warning | Device is an NVIDIA USB device but not in the supported models table. Check SKU. |
| L4T download fails | Internet access, availability of NVIDIA download servers, `L4T_VERSION` value |
| Login fails (HTTP 401) | `ADMIN_EMAIL` and `ADMIN_PASSWORD` in `.env`; account must have ApiAdmin role |
| POST /devices fails | Admin API logs; ensure AZ-196 endpoint is deployed |
| Fuse fails | L4T version compatibility, USB connection stability, sudo access |
| Flash fails | Rootfs contents, USB device still in recovery mode after power-cycle, verify `FLASH_TARGET` matches your storage (NVMe vs eMMC) |
| Docker setup fails in chroot | Verify `qemu-user-static` was installed (auto-installed on x86 hosts); check internet in chroot |
| Loader container not starting | Check `docker logs` on device; verify `/etc/azaion/device.conf` exists and has correct permissions |
| Loader cannot log in after boot | `device.conf` path and permissions; password must match the account created by POST /devices |
| Cannot SSH to hardened device | Expected behavior. Use `--no-harden` for dev devices, or reflash with USB recovery mode |
## Security notes
- Treat `device.conf` as a secret at rest; restrict file permissions and disk encryption per your product policy.
- The `.env` file contains ApiAdmin credentials -- do not commit it. It is listed in `.gitignore`.
- Prefer short-lived credentials or key rotation if the admin API supports it; this runbook describes the baseline manufacturing flow.
- Hardened devices have no SSH, no serial console, and no interactive login. Field debug requires USB recovery mode reflash with `--no-harden`.