
# Daytona Sandbox Fingerprinting Results

**Date:** 2026-02-17

## Executive Summary

Daytona sandboxes run as Docker containers with the Sysbox runtime on bare-metal Hetzner dedicated servers (AMD EPYC 9254). Each sandbox gets 1 vCPU and 1 GiB RAM via cgroup limits. The container image is Debian 13 (Trixie) running on an Ubuntu 24.04 host kernel.


## Infrastructure Layer

| Component | Detail |
| --- | --- |
| Hosting provider | Hetzner (likely AX-series dedicated) |
| CPU | AMD EPYC 9254, 24 cores / 48 threads — bare metal, standard retail SKU (not cloud-custom) |
| RAM | 384 GiB (377 GiB visible to OS) |
| Storage | Software RAID — `/dev/md0` (439 GB ext4) + `/dev/md1` (3.5 TB XFS) |
| Host OS | Ubuntu 24.04 LTS, kernel 6.8.0-94-generic |

### Evidence for Hetzner (not AWS/GCE/Azure)

- AMD EPYC 9254 is a standard datacenter SKU; AWS uses custom R-suffix chips (9R14, 7R13).
- `/dev/md0` and `/dev/md1` are Linux software RAID. Cloud providers present NVMe (`/dev/nvme*`) or virtual block devices (`/dev/xvd*`), never md arrays; software RAID is Hetzner's default provisioning.
- 384 GiB RAM matches Hetzner AX-series configurations.
- 100.66.0.1 among the DNS resolvers suggests an internal overlay network (Tailscale/WireGuard mesh between hosts).
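The block-device evidence above can be captured as a small heuristic. This is an illustrative sketch (not Daytona or Hetzner tooling); the category names are mine:

```python
# Classify a root block device name the way the fingerprinting above does.
# Linux software RAID appears as /dev/mdN; cloud providers expose NVMe
# (AWS Nitro, GCE) or Xen virtual disks (older AWS) instead.
def classify_block_device(dev: str) -> str:
    name = dev.removeprefix("/dev/")
    if name.startswith("md"):
        return "software-raid"   # mdadm array — typical of bare-metal Hetzner
    if name.startswith("nvme"):
        return "nvme"            # local or cloud NVMe
    if name.startswith("xvd"):
        return "xen-virtual"     # Xen virtual block device
    return "other"

print(classify_block_device("/dev/md0"))   # software-raid
```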

## Container Runtime

| Component | Detail |
| --- | --- |
| Runtime | Docker Engine + Sysbox (Nestybox) |
| Virtualization detected | `systemd-detect-virt` returns `docker` |
| `/.dockerenv` | Present — confirms Docker container |
| cgroup version | v2 unified hierarchy (`0::/init.scope`) |
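A minimal sketch of the two checks cited above, written as a pure-logic helper so it runs anywhere; on a live sandbox you would feed in observed values rather than probing the host:

```python
# Combine the two Docker signals from the table: /.dockerenv presence
# plus a cgroup-v2 unified-hierarchy line ("0::...") in /proc/1/cgroup.
def looks_like_docker(dockerenv_present: bool, cgroup_line: str) -> bool:
    cgroup_v2 = cgroup_line.startswith("0::")  # v2 unified hierarchy marker
    return dockerenv_present and cgroup_v2

# Live usage would be:
#   looks_like_docker(os.path.exists("/.dockerenv"),
#                     open("/proc/1/cgroup").readline().strip())
print(looks_like_docker(True, "0::/init.scope"))  # True
```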

### Sysbox Evidence

The mount info reveals sysboxfs FUSE mounts — the signature of the Sysbox container runtime:

```
sysboxfs on /proc/sys
sysboxfs on /proc/swaps
sysboxfs on /proc/uptime
sysboxfs on /sys/kernel
sysboxfs on /sys/devices/virtual
sysboxfs on /sys/module/nf_conntrack/parameters
```

Sysbox makes Docker containers behave more like lightweight VMs (systemd support, docker-in-docker, etc.) without actual VM overhead. This is how Daytona achieves sub-90ms cold starts while still providing root access inside sandboxes.

Notable: `/proc/meminfo` is NOT intercepted by Sysbox, so `free -h` reports the host's 377 GiB rather than the container's 1 GiB cgroup limit. This is a well-known container gotcha: Daytona does not use LXCFS to virtualize meminfo.
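The practical fix for that gotcha is to trust cgroup v2's `memory.max` over `/proc/meminfo`. A self-contained sketch, with sample strings standing in for the live files:

```python
# Return the effective memory budget: prefer the cgroup v2 hard cap in
# memory.max; only fall back to MemTotal (reported in kB) when the
# cgroup file says "max" (i.e. no limit set).
def effective_memory_bytes(memory_max: str, meminfo: str) -> int:
    limit = memory_max.strip()
    if limit != "max":                      # cgroup enforces a hard cap
        return int(limit)
    for line in meminfo.splitlines():
        if line.startswith("MemTotal:"):
            return int(line.split()[1]) * 1024
    raise ValueError("MemTotal not found")

meminfo_sample = "MemTotal:       395313152 kB\n"   # the host's ~377 GiB
print(effective_memory_bytes("1073741824\n", meminfo_sample))  # 1073741824 (1 GiB)
```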


## Sandbox Limits (cgroup enforced)

| Resource | Limit |
| --- | --- |
| Memory | 1 GiB (`memory.max = 1073741824`) |
| CPU | 1 core (`cpu.max = 100000 100000` — 100 ms quota per 100 ms period) |
| Root filesystem | 3 GiB overlay |
| Network | 172.20.0.0/16 Docker bridge, 1 veth per container |
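The CPU figure falls out of the `cpu.max` format directly: the file holds `<quota> <period>` in microseconds, and cores = quota / period. A small sketch:

```python
# Parse a cgroup v2 cpu.max value into an effective core count.
# "100000 100000" means a 100 ms quota every 100 ms period: exactly 1 core.
def cpu_cores_from_cpu_max(cpu_max: str) -> float:
    quota, period = cpu_max.split()
    if quota == "max":
        return float("inf")          # no CPU limit set
    return int(quota) / int(period)

print(cpu_cores_from_cpu_max("100000 100000"))  # 1.0
```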

## Container Image

| Property | Value |
| --- | --- |
| Base OS | Debian 13 (Trixie) |
| Image layers | ~30 overlay layers |
| Snapshot registry | `cr.app.daytona.io` (private) |
| Snapshot tag | `cr.app.daytona.io/sbox/daytona-<hash>:daytona` |
| Python | 3.14.2 |
| Node.js | via NVM (`/usr/local/share/nvm`) |
| Package tools | pipx, pip |
| Jupyter | Pre-installed (`/usr/local/jupyter`) |

## Processes Running at Idle

| PID | Process | RSS (approx) | Purpose |
| --- | --- | --- | --- |
| 1 | `/usr/local/bin/daytona sleep infinity` | ~60 MB | Main daemon/agent — handles SDK commands |
| 80 | `sleep infinity` | ~2 MB | Keepalive subprocess |
| 82 | `/usr/local/lib/daytona-computer-use` | ~18 MB | GUI/browser automation tooling |
| 83 | `python3 /tmp/daytona_repl_worker.py` | ~13 MB | Stateful Python REPL for code interpreter |
| 109 | `dbus-daemon --session` | ~1.5 MB | D-Bus session bus |

Total idle footprint: ~95 MB per sandbox (plus ~25 MB kernel overhead for namespaces, cgroups, veth, FUSE).

The Daytona daemon binary is mounted read-only from the host (/opt/daytona-runner/.tmp/binaries/daemon-amd64 -> /usr/local/bin/daytona), not baked into the image. Same for daytona-computer-use.


## Network Configuration

| Property | Value |
| --- | --- |
| Subnet | 172.20.0.0/16 (Docker bridge) |
| Container IP | 172.20.0.63 |
| Interface | `eth0@if39570` (veth pair) |
| DNS | 1.1.1.1, 1.0.0.1 (Cloudflare), 100.66.0.1 (internal), 8.8.8.8 (Google) |
| Hostname | Sandbox ID (`5dde310c-9372-4c1f-a176-a1e4d523547a`) |

The DNS config is generated by Docker Engine from the host's `/run/systemd/resolve/resolv.conf`.


## Environment Variables

| Variable | Value | Purpose |
| --- | --- | --- |
| `DAYTONA_SANDBOX_ID` | UUID | Unique sandbox identifier |
| `DAYTONA_SANDBOX_USER` | `daytona` | Non-root user |
| `DAYTONA_SANDBOX_SNAPSHOT` | `cr.app.daytona.io/sbox/...` | Image snapshot reference |
| `DAYTONA_OTEL_ENDPOINT` | `https://otel-collector.app.daytona.io` | OpenTelemetry telemetry |
| `DAYTONA_USER_HOME_AS_WORKDIR` | `true` | CWD configuration |

## Packing Density Estimate

Per host (384 GiB RAM, 48 vCPUs):

| Strategy | Sandboxes per host |
| --- | --- |
| No overcommit (memory-bound) | ~370 |
| 2:1 overcommit | ~700 |
| 3:1 overcommit | ~1,100 |
| Theoretical idle packing (120 MB each) | ~3,000 |

Likely operating point: 500-800 sandboxes per host with moderate memory overcommit, leaving headroom for burst usage.

At ~$300/month per Hetzner box, that works out to roughly $0.40-0.60 per sandbox per month in raw infrastructure cost.
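The packing and cost figures above are back-of-envelope arithmetic; this sketch reproduces them under stated assumptions (384 GiB host RAM, 1 GiB sandboxes, ~$300/month per host; the 14 GiB host reserve is my assumption, not from the fingerprinting):

```python
GIB = 1024**3
HOST_RAM = 384 * GIB       # per the infrastructure table
SANDBOX_RAM = 1 * GIB      # cgroup memory.max per sandbox
HOST_COST = 300            # assumed $/month per Hetzner box

def sandboxes_per_host(overcommit: float, reserve_gib: int = 14) -> int:
    usable = HOST_RAM - reserve_gib * GIB  # leave headroom for host/kernel
    return int(usable * overcommit // SANDBOX_RAM)

for ratio in (1.0, 2.0, 3.0):
    n = sandboxes_per_host(ratio)
    print(f"{ratio}:1 overcommit -> {n} sandboxes, ${HOST_COST / n:.2f}/sandbox/mo")
```

At 2:1 this lands near the report's ~700-per-host figure and ~$0.41/sandbox/month, consistent with the $0.40-0.60 range quoted above.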


## Architecture Diagram

```
Hetzner AX-series Bare Metal
├── AMD EPYC 9254 (48 vCPUs) / 384 GiB RAM
├── /dev/md0 (ext4, 439 GB) — OS + Daytona binaries
├── /dev/md1 (XFS, 3.5 TB)  — Docker storage + container data
│
├── Ubuntu 24.04 LTS (kernel 6.8.0-94-generic)
│   ├── Docker Engine + Sysbox Runtime
│   │   ├── Sandbox Container (Debian 13 Trixie, 1 vCPU / 1 GiB)
│   │   │   ├── /usr/local/bin/daytona (PID 1, mounted RO from host)
│   │   │   ├── daytona-computer-use
│   │   │   ├── daytona_repl_worker.py
│   │   │   └── user code runs here
│   │   ├── Sandbox Container ...
│   │   ├── Sandbox Container ...
│   │   └── (500-800 per host)
│   │
│   └── Internal network mesh (100.66.0.1)
│
└── cr.app.daytona.io (private container registry)
    otel-collector.app.daytona.io (telemetry)
```

## Comparison to OpenSandbox

| Aspect | Daytona | OpenSandbox (target) |
| --- | --- | --- |
| Isolation | Docker + Sysbox | Podman (dev) / configurable |
| Cold start | <90 ms | TBD |
| Host kernel | Shared (container) | Shared (container) |
| State | Stateful snapshots via private registry | Per-sandbox SQLite + NATS sync |
| Control plane | Centralized (app.daytona.io) | Durable Objects model (workers hold state) |
| Telemetry | OpenTelemetry | Prometheus (planned) |