Date: 2026-03-24
Cloudflare Sandbox runs Firecracker microVMs on Cloudflare's own network (AS13335) with a custom kernel (6.12.71-cloudflare-firecracker-2026.2.25). Each sandbox is a minimal microVM with 1 vCPU, 466 MiB RAM, and a multi-disk layout (4 virtio block devices). The architecture is unique: a Firecracker VM hosts a container-like environment with overlay mounts on /etc/* and a /container/bundle directory, suggesting Cloudflare runs OCI container images inside Firecracker microVMs. The hostname cloudchamber confirms the internal codename for their container runtime.
Important placement finding: Despite Cloudflare marketing containers as "Region: Earth", sandboxes are NOT placed at the nearest edge PoP. Our Worker ran in SJC (San Jose) per cf-ray headers, but all sandbox containers consistently landed in Mumbai (bom08, APAC) on the same physical node (578m192). Multiple sessions with different Durable Object IDs all routed to the same node. This suggests container placement is determined by Durable Object namespace hashing, not client proximity — containers only run in a subset of Cloudflare datacenters equipped with KVM/Firecracker infrastructure.
| Component | Detail |
|---|---|
| Hypervisor | Firecracker (confirmed by kernel name + MPTABLE OEM ID FC) |
| KVM backend | Yes — Hypervisor detected: KVM |
| Host CPU | AMD EPYC (has AVX-512, SVM, nested virt enabled) |
| Hosting | Cloudflare own network (AS13335) |
| Public IP | 104.28.157.141 |
| ASN | AS13335 Cloudflare, Inc. |
| Location | Mumbai, India (bom08 — Cloudflare BOM datacenter) |
| Region | APAC |
The kernel version string explicitly identifies Firecracker:
6.12.71-cloudflare-firecracker-2026.2.25
Additional Firecracker signals:
- `MPTABLE: OEM ID: FC` — Firecracker's MP table signature
- `ACPI BIOS Error: A valid RSDP was not found` — ACPI disabled (like Sprites, unlike E2B/Freestyle)
- `pci=off` + 6 `virtio_mmio.device` entries in the kernel cmdline
- DMI not present or invalid; `lspci` not available
- `reboot=k panic=1` — standard FC args
- `nomodule` — kernel modules disabled entirely
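These cmdline signals can be checked programmatically. A minimal sketch, using an illustrative cmdline string assembled from the flags above (the real sandbox cmdline may differ in ordering and carry extra parameters):

```python
# Illustrative cmdline reconstructed from the signals listed above; the
# virtio_mmio.device values are placeholders, not the real addresses.
CMDLINE = (
    "reboot=k panic=1 pci=off nomodule root=/dev/vda "
    + " ".join(f"virtio_mmio.device=4K@0xd000{i}000:{5 + i}" for i in range(6))
)

def firecracker_signals(cmdline: str) -> dict:
    """Count the Firecracker-style markers present in a kernel cmdline."""
    tokens = cmdline.split()
    return {
        "pci_off": "pci=off" in tokens,
        "no_modules": "nomodule" in tokens,
        "fc_reboot_args": "reboot=k" in tokens and "panic=1" in tokens,
        "virtio_mmio_devices": sum(t.startswith("virtio_mmio.device=") for t in tokens),
    }

print(firecracker_signals(CMDLINE))
# {'pci_off': True, 'no_modules': True, 'fc_reboot_args': True, 'virtio_mmio_devices': 6}
```

On a live system the input would come from `/proc/cmdline` rather than a hardcoded string.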
| Signal | E2B | Sprites | Freestyle | Cloudflare |
|---|---|---|---|---|
| ACPI | Enabled (FIRECK) | Disabled | Enabled (FIRECK) | Disabled |
| MPTABLE OEM ID | Not shown | FC | Not shown | FC |
| Kernel name | 6.1.158 | 6.12.47-fly | 6.1.155 | 6.12.71-cloudflare-firecracker |
| Kernel builder | CI runner | Custom | jacob@jazlinux | builder@k8s-ams-a-highcpu-bj4v4 |
| virtio-mmio devices | 3 | 10 | 2 | 6 |
| Host CPU | Intel Xeon (GCP) | AMD EPYC (Fly.io) | AMD EPYC | AMD EPYC (Cloudflare) |
| Module loading | Yes | Yes | Yes | Disabled (nomodule) |
| Build timestamp | Real | Real | Real | Fake (Mon Sep 27 00:00:00 UTC 2010) |
Cloudflare uses a deliberately falsified build timestamp (2010), likely for reproducible builds or to avoid leaking build-timing information. The kernel was built on k8s-ams-a-highcpu-bj4v4 — a Kubernetes node whose name suggests Amsterdam (ams) — indicating Cloudflare builds kernels on CI/CD infrastructure in its AMS datacenter.
| Resource | Value |
|---|---|
| vCPUs | 1 |
| RAM | 466 MiB (477,716 kB) |
| Block devices | 4 (vda, vdb, vdc, vdd) |
| Root disk | /dev/vdc — 2 GiB ext4 |
| Cgroup memory | No limit (real VM) |
| Cgroup CPU | No limit (real VM) |
| Swap | None |
Cloudflare has the most constrained resources of all tested sandboxes:
- 1 vCPU — tied with Daytona and Modal for fewest
- 466 MiB RAM — smallest of all (even less than E2B's 482 MiB)
- 2 GiB root disk — smallest usable disk
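The RAM figure is worth a sanity check: the 477,716 kB reported by /proc/meminfo converts to about 466 MiB, and the gap from a round 512 MiB would be consistent with kernel-reserved memory (the actual Firecracker `mem_size_mib` setting is an assumption here):

```python
memtotal_kb = 477_716            # MemTotal as reported by /proc/meminfo
mib = memtotal_kb / 1024
print(f"{mib:.1f} MiB")          # 466.5 MiB
# If the microVM were configured with a round 512 MiB (an assumption),
# the remainder would be memory the kernel reserves before userspace sees it:
print(f"{512 - mib:.1f} MiB unaccounted for at a 512 MiB configuration")
```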
Cloudflare uses 4 separate virtio block devices — unique among all tested sandboxes:
| Device | Size | Type | Mount | Purpose |
|---|---|---|---|---|
| `vda` | 2 GiB | ext4 | (not mounted) | Unknown — possibly snapshot/template storage |
| `vdb` | 128 MiB | ext4 | (not mounted) | Label: config — VM configuration |
| `vdc` | 2 GiB | ext4 | `/` | Root filesystem |
| `vdd` | 150 MiB | ext4 | (not mounted) | Label: bind — bind mount data |
The kernel cmdline specifies root=/dev/vda but the actual root mount is /dev/vdc. This suggests a boot-time remapping — possibly the init process pivots root from vda to vdc, or the container runtime overlays vdc on top.
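The mismatch is straightforward to detect: compare `root=` from the kernel cmdline against the device actually mounted at `/`. The input strings below are hardcoded from the observations above; on a live system you would read `/proc/cmdline` and `/proc/mounts` instead:

```python
# Hardcoded from the observed values; replace with reads of /proc/cmdline
# and /proc/mounts on a real system.
cmdline = "reboot=k panic=1 root=/dev/vda pci=off nomodule"
mounts = "/dev/vdc / ext4 rw,relatime 0 0"

# Device the kernel was told to boot from:
claimed = next(t.split("=", 1)[1] for t in cmdline.split() if t.startswith("root="))
# Device actually mounted at /:
actual = next(line.split()[0] for line in mounts.splitlines()
              if line.split()[1] == "/")

print(claimed, actual, claimed != actual)  # /dev/vda /dev/vdc True
```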
| Property | Value |
|---|---|
| Version | 6.12.71 (newest of all tested sandboxes) |
| Suffix | cloudflare-firecracker-2026.2.25 |
| Built by | builder@k8s-ams-a-highcpu-bj4v4 |
| Compiler | GCC 14.2.0 |
| Config | SMP PREEMPT_DYNAMIC |
| Modules | Disabled (nomodule cmdline) |
| Build date | Fake: Mon Sep 27 00:00:00 UTC 2010 |
The 6.12.x kernel is the newest among all tested sandboxes, and the version suffix 2026.2.25 likely encodes a build date of February 25, 2026. Kernel module loading is entirely disabled — a strong security hardening measure that no other tested sandbox implements.
Cloudflare's most distinctive feature is running OCI containers inside Firecracker microVMs. Evidence:
- Overlay mounts on `/etc/hostname`, `/etc/hosts`, `/etc/resolv.conf`:
  `overlay on /etc/hosts type overlay (ro,relatime,lowerdir=/container/bundle/etc:/etc,upperdir=/container/etc,workdir=/container/work)`
- `/container/bundle` directory — the standard OCI bundle path
- Docker image — the sandbox runs from `docker.io/cloudflare/sandbox:0.7.19` (confirmed by the Dockerfile and the `SANDBOX_VERSION=0.7.19` env var)
- Hostname `cloudchamber` — Cloudflare's internal codename for their container runtime (now publicly known as "Cloudflare Containers")
- Double mount layering — the VM has both the raw block device mounts AND container-style overlay mounts on top, creating a two-layer filesystem
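The overlay mount line quoted above can be parsed to recover the container's layer stack. A small sketch that handles only this simple case (it assumes no commas inside option values):

```python
# Overlay mount line as observed in the sandbox's mount table.
line = ("overlay on /etc/hosts type overlay (ro,relatime,"
        "lowerdir=/container/bundle/etc:/etc,"
        "upperdir=/container/etc,workdir=/container/work)")

# Pull out the option string between the parentheses and split key=value pairs.
opts = line[line.index("(") + 1 : line.rindex(")")].split(",")
layers = {k: v for k, v in (o.split("=", 1) for o in opts if "=" in o)}

print(layers["lowerdir"].split(":"))       # ['/container/bundle/etc', '/etc']
print(layers["upperdir"], layers["workdir"])  # /container/etc /container/work
```

The lowerdir ordering shows the OCI bundle's `/etc` shadowing the VM's own `/etc`, which is the two-layer filesystem described above.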
This is architecturally similar to Kata Containers or AWS's firecracker-containerd: full VM isolation with container UX.
```
vdc  2.0G  ext4  /  (root, rw)
vda  2.0G  ext4  (unmounted)
vdb  128M  ext4  (config)
vdd  150M  ext4  (bind)
```
| Test | Cloudflare | E2B | exe.dev | Sprites | Freestyle |
|---|---|---|---|---|---|
| Sequential write (1 GB) | 21.5 MB/s | 1.1 GB/s | 476 MB/s | 568 MB/s | 60 MB/s |
| 4K sync write | 2.1 MB/s (~525 IOPS) | N/A | N/A | N/A | N/A |
Cloudflare has the slowest storage of all tested sandboxes — 21.5 MB/s sequential write, 50x slower than E2B. The 4K sync write at 2.1 MB/s (~525 IOPS) is extremely slow. This likely reflects:
- Lite instance type (minimal I/O allocation)
- Network-attached storage (not local NVMe)
- Aggressive I/O throttling for multi-tenant density
- 2 GiB disk — likely thin-provisioned
Disk scheduler is none (passthrough), same as exe.dev and Sprites.
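The two storage numbers are internally consistent: 525 IOPS of 4 KiB sync writes works out to roughly 2.15 MB/s, matching the measured 2.1 MB/s, and the sequential figures give the roughly 50x gap versus E2B:

```python
# 4K sync write: IOPS x block size = throughput.
iops, block = 525, 4096
print(f"{iops * block / 1e6:.2f} MB/s")  # 2.15 MB/s, close to the measured 2.1 MB/s

# Sequential write gap versus E2B: 1.1 GB/s vs 21.5 MB/s.
print(round(1100 / 21.5), "x")           # 51 x
```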
| Property | Value |
|---|---|
| OS | Ubuntu 22.04.5 LTS (Jammy) |
| Init | systemd (PID 1 = /sbin/init) |
| User | root |
| Shell | /bin/bash |
| Working dir | /workspace |
| Property | Value |
|---|---|
| Interface | cfeth0 (Cloudflare custom naming) |
| MAC | 00:11:22:33:44:66 (static, assigned via kernel cmdline) |
| IPv6 | fe80::211:22ff:fe33:4466 (link-local) + fd00::11 (ULA) |
| DNS | 2606:4700:4700::1111 (Cloudflare 1.1.1.1 IPv6), 2620:fe::fe (Quad9) |
| Hostname | cloudchamber |
| Public IP | 104.28.157.141 (Cloudflare anycast) |
| Metadata | No 169.254.169.254 endpoint (metadata via Durable Objects, not MMDS) |
Key observations:
- IPv6-first networking — DNS resolvers are IPv6, and the interface has the ULA address `fd00::11`
- Cloudflare DNS — uses 1.1.1.1's IPv6 address
- No cloud metadata endpoint — unlike E2B/Freestyle, which use Firecracker MMDS, Cloudflare passes config via environment variables and the `/dev/vdb` config disk
- Custom interface naming via the `ifname=cfeth0:MAC` kernel parameter
- Anycast public IP — `104.28.157.141` is in Cloudflare's anycast range, meaning traffic exits through the nearest Cloudflare PoP
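The link-local address is not independent information: it is derived from the MAC via modified EUI-64 (flip the universal/local bit of the first octet and insert ff:fe in the middle), which a quick check confirms:

```python
import ipaddress

def mac_to_link_local(mac: str) -> str:
    """Derive the IPv6 link-local address from a MAC via modified EUI-64."""
    b = bytearray(int(x, 16) for x in mac.split(":"))
    b[0] ^= 0x02                                   # flip universal/local bit
    eui64 = bytes(b[:3]) + b"\xff\xfe" + bytes(b[3:])  # insert ff:fe mid-MAC
    return str(ipaddress.IPv6Address(b"\xfe\x80" + b"\x00" * 6 + eui64))

print(mac_to_link_local("00:11:22:33:44:66"))  # fe80::211:22ff:fe33:4466
```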
| Variable | Value | Purpose |
|---|---|---|
| `CLOUDFLARE_REGION` | `APAC` | Geographic region |
| `CLOUDFLARE_LOCATION` | `bom08` | Specific datacenter (Mumbai) |
| `CLOUDFLARE_COUNTRY_A2` | `IN` | Country code |
| `CLOUDFLARE_NODE_ID` | `578m192` | Physical node identifier |
| `CLOUDFLARE_APPLICATION_ID` | `a038a6dc-...` | Worker/app identifier |
| `CLOUDFLARE_DURABLE_OBJECT_ID` | `c61d3c4c...` | DO instance ID |
| `CLOUDFLARE_PLACEMENT_ID` | `8ed9203b-...` | Placement/scheduling ID |
| `CLOUDFLARE_DEPLOYMENT_ID` | `d038ed7d-...` | Deployment version |
| `SANDBOX_VERSION` | `0.7.19` | Container image version |
| `JAVASCRIPT_POOL_MIN_SIZE` | `3` | Pre-warmed JS interpreters |
| `TYPESCRIPT_POOL_MIN_SIZE` | `3` | Pre-warmed TS interpreters |
| `PYTHON_POOL_MIN_SIZE` | `0` | Python pool (not pre-warmed) |
The *_POOL_MIN_SIZE variables reveal that Cloudflare pre-warms 3 JavaScript and 3 TypeScript interpreter processes inside each sandbox for fast code execution. Python is not pre-warmed (pool size 0).
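A sketch of how an agent inside the sandbox might read these knobs; the env dict is simulated from the values above, and the zero-default behavior is an assumption, not confirmed Cloudflare behavior:

```python
# Simulated environment, using the observed values; a real agent would
# read os.environ instead.
env = {
    "JAVASCRIPT_POOL_MIN_SIZE": "3",
    "TYPESCRIPT_POOL_MIN_SIZE": "3",
    "PYTHON_POOL_MIN_SIZE": "0",
}

pools = {lang: int(env.get(f"{lang}_POOL_MIN_SIZE", "0"))
         for lang in ("JAVASCRIPT", "TYPESCRIPT", "PYTHON")}
prewarmed = [lang for lang, n in pools.items() if n > 0]

print(prewarmed)  # ['JAVASCRIPT', 'TYPESCRIPT']
```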
Cloudflare markets containers as "Region: Earth" with automatic geo-placement. Our testing disproves this for the Sandbox SDK:
| Test | Worker PoP (cf-ray) | Container Location | Container Node |
|---|---|---|---|
| Original fingerprint | SJC (San Jose) | bom08 (Mumbai) | 578m192 |
| Session `test1` | SJC | bom08 (Mumbai) | 578m192 |
All sandbox sessions landed on the exact same physical node in Mumbai, despite the Worker running in San Jose. The CLOUDFLARE_DEPLOYMENT_ID was identical across sessions while CLOUDFLARE_DURABLE_OBJECT_ID differed — confirming these are separate sandbox instances pinned to the same location.
This means container placement is determined by Durable Object namespace hashing or pre-initialization, not request proximity. Containers likely only run in a subset of Cloudflare datacenters with KVM-capable hardware, and the DO hash determines which datacenter handles a given Worker's containers.
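A hedged sketch of the placement hypothesis: if a hash of the Durable Object namespace selects the datacenter from a fixed list of KVM-capable PoPs, every sandbox in that namespace lands in the same place regardless of where the request originated. The PoP list and hashing scheme below are illustrative assumptions, not Cloudflare's actual algorithm:

```python
import hashlib

# Hypothetical subset of KVM-capable datacenters; illustrative names only.
KVM_POPS = ["bom08", "ams03", "iad02", "sin05"]

def place(namespace: str) -> str:
    """Deterministically map a DO namespace to one datacenter."""
    h = int.from_bytes(hashlib.sha256(namespace.encode()).digest()[:8], "big")
    return KVM_POPS[h % len(KVM_POPS)]

# Placement depends only on the namespace, so repeated sessions for the same
# Worker always collocate, matching the observed behavior:
pop = place("my-sandbox-namespace")
print(pop == place("my-sandbox-namespace"))  # True
```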
The ~150 ms latency penalty of an SJC→BOM round trip on every sandbox.exec() call explains why the fingerprint took so long to complete.
```
Cloudflare Edge Network (AS13335)
├── Mumbai PoP (bom08)
│   ├── AMD EPYC server (node 578m192)
│   │   ├── Host Linux + KVM
│   │   │   ├── Firecracker VMM
│   │   │   │   ├── MicroVM: cloudchamber (1 vCPU, 466 MiB)
│   │   │   │   │   ├── Kernel 6.12.71-cloudflare-firecracker
│   │   │   │   │   │   └── nomodule, ACPI disabled
│   │   │   │   │   ├── /dev/vdc (2G ext4) — root
│   │   │   │   │   ├── /dev/vda (2G ext4) — template
│   │   │   │   │   ├── /dev/vdb (128M ext4) — config
│   │   │   │   │   ├── /dev/vdd (150M ext4) — bind data
│   │   │   │   │   ├── OCI container overlay
│   │   │   │   │   │   └── cloudflare/sandbox:0.7.19
│   │   │   │   │   ├── systemd (PID 1)
│   │   │   │   │   ├── Ubuntu 22.04 LTS
│   │   │   │   │   ├── cfeth0: fd00::11 (IPv6 ULA)
│   │   │   │   │   ├── JS/TS pool (3+3 pre-warmed)
│   │   │   │   │   └── user: root, /workspace
│   │   │   │   ├── MicroVM: [another-sandbox] ...
│   │   │   │   └── (high density — 466 MiB per VM)
│   │   │   │
│   │   │   └── No MMDS (config via env vars + vdb disk)
│   │   │
│   │   └── Cloudflare anycast → AS13335 → internet
│   │
│   └── Durable Objects (sandbox state management)
│
├── Amsterdam PoP (kernel CI/CD)
└── [250+ cities worldwide]
```
| Aspect | E2B | Sprites | Freestyle | Cloudflare |
|---|---|---|---|---|
| Isolation | Firecracker | Firecracker | Firecracker | Firecracker |
| Host provider | GCP | Fly.io | Comcast | Cloudflare own |
| Host CPU | Intel Xeon | AMD EPYC | AMD EPYC | AMD EPYC |
| Location | The Dalles, OR | Los Angeles | Napa, CA | Mumbai (geo-distributed) |
| Kernel | 6.1.158 | 6.12.47-fly | 6.1.155 | 6.12.71-cf-fc |
| Kernel modules | Yes | Yes | Yes | Disabled |
| ACPI | Enabled | Disabled | Enabled | Disabled |
| Init | systemd | tini | systemd | systemd |
| RAM | 482 MiB | 7.8 GiB | 7.8 GiB | 466 MiB |
| CPU | 2 vCPUs | 8 vCPUs | 4 vCPUs | 1 vCPU |
| Root disk | 22.9 GiB ext4 | 20 GiB + layers | 15.6 GiB ext4 | 2 GiB ext4 |
| Block devices | 1 | 7+ | 1 | 4 |
| Write speed | 1.1 GB/s | 568 MB/s | 60 MB/s | 21.5 MB/s |
| Container layer | No | Yes (overlay) | No | Yes (OCI overlay) |
| Network | GCP public IP | Fly.io IPv4+6 | Comcast NAT | CF anycast IPv6 |
| DNS | Google (8.8.8.8) | — | Google+CF | Cloudflare IPv6 |
| Metadata | MMDS | None | MMDS | Env vars + disk |
| Geo-distributed | No (Oregon) | No (LA) | No (Napa) | Limited (DO hash, not edge) |
| OS | Debian 12 | Ubuntu 25.04 | Debian 13 | Ubuntu 22.04 |
| Root access | No | No | Yes | Yes |
| Pre-warmed pools | No | No | No | Yes (JS/TS) |
| Design goal | Ephemeral exec | Stateful + isolated | Dev environment | Edge code exec |
Cloudflare Sandbox is Firecracker microVMs on Cloudflare's own network. It is the most minimal in resources of all tested sandboxes (1 vCPU, 466 MiB RAM, 2 GiB disk), and despite the "Region: Earth" marketing, placement is pinned by Durable Object hashing to a subset of KVM-capable datacenters rather than the nearest edge PoP. The architecture uniquely combines Firecracker VM isolation with OCI container images, and pre-warms JS/TS interpreter pools for fast code execution. Storage is the slowest of all tested (21.5 MB/s sequential write), likely the price of multi-tenant density. The custom kernel disables module loading entirely — the strongest kernel hardening of any sandbox tested.
The updated spectrum:
- Cloudflare → Edge-distributed Firecracker + OCI containers, minimal resources, global reach
- Daytona → Max density, containers, fast cold start, tiny sandboxes
- E2B → Ephemeral microVMs, minimal resources, GCP scale
- Freestyle → Full dev-environment Firecracker VMs, generous resources
- exe.dev → "Real dev machine" VMs, Cloud Hypervisor, persistent
- Sprites → Most sophisticated: Firecracker + container layers + JuiceFS + checkpoints