
Kameleo on Linux via Docker: What We Built, What Broke, What's Next

Written by Barnabas Szenasi (Founder)
Updated on April 23, 2026

The real reason this took longer than it should have is that we were thinking about it wrong.

Kameleo has always been built around Windows fingerprints. That's where the browser market share is, that's where the most realistic device profiles come from, and for a long time we quietly assumed Linux fingerprints couldn't reach the same masking quality bar. So we kept Windows first and treated Linux as something to figure out later.

We shipped the Linux Docker image last week. It supports both Chroma (our Chrome-based kernel) and Junglefox (our Firefox-based kernel), works on headless servers with no GPU, and connects to any CDP client out of the box. If that's what you came for, start here. If you want to know what the development actually looked like, including the wrong assumptions, the unexpected bugs, and what's still not working, keep reading.

Why developers kept asking for this

Most web scraping infrastructure runs on Linux. Cloud servers are Linux. Kubernetes pods are Linux. VPSes are Linux. There's nothing exotic about this; it's just how backend infrastructure works in 2026.

When customers asked for Linux support, we initially heard a fingerprinting request: "Linux fingerprints." What they actually meant was simpler and harder: they needed Kameleo to run as a container on the servers where their automation already lived. The phrasing was about the OS; the need was about infrastructure. They weren't asking for Linux fingerprints. They were asking for Docker.

Once we understood the actual request, the picture got clearer. People were running Windows VMs in AWS just to run Kameleo, then connecting their Linux-based scrapers to that Windows box across a network boundary. Others were forwarding X11 displays through SSH tunnels to get a windowed browser on a remote machine. Nobody builds an X11-over-SSH pipeline because they love it. It's the kind of setup you make work once, and then document in a README under the heading "please don't ask". The workarounds were real and painful.

One customer who had been on an enterprise plan for over a year hit this wall hard. Their scraping volume was large enough that infrastructure strain during peak-load periods was a real problem, and their stack was fully Linux-native. They couldn't reconcile that stack with our Windows-only deployment. So they built their own solution: a rough-edged, self-maintained Linux Docker wrapper around a browser that was definitely not optimized for stealth. It worked well enough for certain target sites. That was the moment it became impossible to ignore: someone had spent their own engineering time building a worse version of what we should have shipped.

Their experience also taught us something about masking requirements. Their solution worked on a subset of sites because those sites don't require top-tier fingerprint fidelity; they just need a real browser rather than vanilla headless Chromium. Not every target runs DataDome or Cloudflare Enterprise. A significant portion of the web can be scraped with a browser that's merely not obviously a bot, even if its fingerprint isn't perfectly crafted. That insight mattered later.

Why it took this long

Kameleo has always held a quality bar: when we ship support for a new platform, the masking quality should match what we already ship on Windows and macOS. Not "functional enough," not "good for a first pass" — the same level.

That standard slowed us down on macOS too. We were later than some competitors to ship macOS support. But when the masking audit results came back, macOS and Windows were at parity. We think that was worth the wait. We brought the same expectation to Linux.

The problem is that Linux fingerprints are genuinely harder to get right, for reasons that compound. Our fingerprint database is built from real device traffic, and most of the Linux signal in that data came from mobile and embedded devices: phones and IoT hardware running Linux under the hood, not server hardware. For someone running on a cloud VPS and requesting a Linux fingerprint, we were handing them a profile that looked nothing like their actual machine. Fixing that properly (expanding the right kind of Linux collection, reworking how we rank profiles) took time.

It also required confronting our own assumption. We had spent years building and validating Windows profiles. Our testing pipelines, our masking audits, our internal benchmarks: all calibrated around Windows. Adjusting them to give Linux a fair evaluation took a real shift in how we worked. The enterprise customer's experience was part of what made that shift concrete: Linux fingerprints were already good enough for real workloads. The question was whether we could make them good enough for all of them. That work is ongoing. The current release is solid; full parity with Windows is the target for Q2.

The GPU wall, and how we got past it

Browsers are among the most GPU-dependent applications running on the average developer's machine, and most people have no idea. Canvas rendering, WebGL, hardware-accelerated compositing. Even if your scraper never touches a 3D scene, the browser's internal rendering pipeline leans on GPU hardware constantly.

On a headless Linux server with no GPU, Chromium doesn't gracefully degrade. It partially falls back, partially fails, and partially produces output that looks wrong to fingerprinting systems. The GPU-related signals (WebGL renderer, canvas noise, hardware concurrency behavior) suddenly look like a headless bot, not a real device.

The fix we landed on is SwiftShader: Chromium's software rasterizer, a CPU-based implementation of OpenGL ES that kicks in when no hardware GPU is present. Google uses it internally; it's been in Chromium for years. Once we enabled it, most of our WebGL tests came back clean, and acceptance tests improved across the board.

The tricky part wasn't enabling SwiftShader — it was enabling it consistently, and then validating that the resulting fingerprint still looked like a real device rather than a server running software rendering. We're still finishing that work: SwiftShader is currently opt-in in the released image, and we're finalizing the rollout to make it the default. (If you're running a GPU-less server today and hitting WebGL issues, the docs explain how to enable it manually in the meantime.)
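One way to see what the GPU-related fingerprint surface reports on a GPU-less box is to read the WebGL renderer string directly. A minimal sketch, assuming a Playwright page already connected to the container (the helper name `webgl_renderer` is ours, not part of any Kameleo API); under SwiftShader the renderer string typically contains "SwiftShader", which is exactly the kind of signal that has to be actively managed:

```python
# JavaScript snippet that reads the unmasked WebGL renderer string,
# the same value an anti-bot script can query on your session.
WEBGL_RENDERER_JS = """() => {
  const gl = document.createElement('canvas').getContext('webgl');
  if (!gl) return null;  // WebGL entirely unavailable: itself a strong signal
  const ext = gl.getExtension('WEBGL_debug_renderer_info');
  return ext ? gl.getParameter(ext.UNMASKED_RENDERER_WEBGL)
             : gl.getParameter(gl.RENDERER);
}"""

def webgl_renderer(page):
    # `page` is a Playwright Page attached to the containerized browser.
    return page.evaluate(WEBGL_RENDERER_JS)
```

Running this against a session on a GPU-less server tells you whether you're getting hardware-like output or an obvious software-rendering string.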

The bugs we didn't expect

Shared memory. Browsers are memory-hungry. Docker's default /dev/shm allocation is 64 MB, which is enough for a lot of containerized workloads but nowhere near enough for a browser with any real rendering load. The symptom is random crashes that look unrelated to memory, which made this one annoying to pin down. The fix is one flag: --shm-size=2g. It's now prominently documented, but it caused more than one head-scratching debugging session before we understood what was happening.

Fonts. Out of the box, the container was missing the font coverage you'd expect on a real desktop OS. That's a fingerprinting problem: font enumeration is a well-established detection vector, and a browser that reports an unusually sparse font set stands out. We ended up bundling a significant set of fonts directly into the Docker image to bring it in line with what real desktop Linux environments look like. It added image size, but the improvement to font-related masking quality was worth it. (If you've ever wondered why the Docker image is larger than you'd expect: fonts. You're welcome, or sorry, depending on your connection.)

Testing across environments surfaced different failures. We ran test builds on WSL, macOS, and native Linux machines, and each environment had its own quirks. Something that looked clean on macOS would behave differently under WSL; something that passed on native Linux would surface an edge case on macOS Docker. The containerization layer abstracts a lot, but not everything, and testing across all three was the only way to find the full set of issues before shipping.

Speech synthesis and Widevine are broken. I'll be upfront about this: there are two browser APIs that currently don't work correctly in the Linux Docker image. Speech synthesis (the browser API for text-to-speech) errors out in the containerized environment. Widevine, Google's DRM plugin used for media playback, doesn't load. We're investigating both and estimating the scope of the fixes. If your scraping workflow touches either of these, the current version won't work for you on Linux. Track the changelog for when the fixes land.

What the current version actually does

Here's what works, reliably, today:

Two browsers. Chroma (our Chrome-based kernel) and Junglefox (our Firefox-based kernel). This matters more than it might seem. Chrome and Firefox have meaningfully different fingerprint surfaces, and some sites are more suspicious of one than the other. Being able to switch without changing infrastructure is something no other Linux Docker anti-detect image currently offers.

Headless, GPU-free operation. You can run this on the cheapest cloud VPS that doesn't advertise a GPU. No Xvfb. No display server. No faking out the X socket. The container handles all of it internally.

Any CDP client, zero changes to your code. If you're using Playwright, Puppeteer, Selenium, or a custom automation script that speaks Chrome DevTools Protocol, point it at the container's WebSocket endpoint and it works. We've tested the common automation libraries; there's nothing to reconfigure.
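As a sketch of that zero-change claim, here is roughly what attaching Playwright looks like. Port 5050 comes from the container's published port; the `/playwright/<profile-id>` endpoint path and both helper names are illustrative assumptions, so check the Docker integration docs for the exact URL your version exposes:

```python
def cdp_endpoint(profile_id: str, host: str = "localhost", port: int = 5050) -> str:
    # Build the WebSocket URL a CDP client attaches to. The path format is
    # an assumption for illustration; consult developer.kameleo.io for the
    # exact endpoint format.
    return f"ws://{host}:{port}/playwright/{profile_id}"

def page_title(profile_id: str, url: str) -> str:
    # Lazy import so this sketch stays importable without Playwright installed.
    from playwright.sync_api import sync_playwright
    with sync_playwright() as pw:
        # Attach to the browser the container already launched: no local
        # browser binary, no changes to existing automation code.
        browser = pw.chromium.connect_over_cdp(cdp_endpoint(profile_id))
        page = browser.new_page()
        page.goto(url)
        return page.title()
```

The same endpoint works from Puppeteer or any other client that speaks CDP; only the connection URL changes, not your automation logic.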

Real fingerprints. The profiles loaded into each browser session come from our live fingerprint collection: real device signatures captured from real browser traffic. They're not static templates from 2023 that anti-bot systems have already catalogued. They update with each release.

Getting started:

docker run --platform linux/amd64 \
  --shm-size=2g \
  -p 5050:5050 \
  -e EMAIL="email" \
  -e PASSWORD="pw" \
  -v kameleo-data:/data \
  kameleo/kameleo-app:latest

Full documentation, including Playwright and Selenium examples, is at developer.kameleo.io/integrations/docker.

What's coming in the next version

A few things are either done and in final review, or actively being worked on:

SwiftShader as default. As mentioned above, this makes WebGL work reliably on GPU-less servers without requiring any configuration. We're validating that it doesn't introduce new fingerprint signals before shipping it as default. And this is just the start of a broader push: graphics-related fingerprinting (WebGL renderer strings, canvas noise, hardware acceleration signals) is an area we're investing in more heavily going forward. The Docker context makes it more important, not less, because software rendering introduces signals you have to actively manage.

A web UI for Docker. The current image is API and CLI only. There's no management UI. We're building one designed specifically for Docker: browser-based, no Electron dependency, focused on the controls that are actually relevant in a containerized environment.

One more thing worth saying

The enterprise customer story above is the one I keep thinking about. We moved slowly because we were waiting to meet our own quality bar. That's a defensible reason, and I still think it was right. But while we were waiting, a customer spent their own engineering time building a worse version of what we should have already shipped. That's the kind of signal you don't sit on.

It's what finally made us stop treating Linux as "something to figure out later" and start treating it as a platform we owe our customers.

What we shipped has capabilities the competition currently lacks: two browser engines, a live fingerprint collection, a commercially maintained update cycle, and integration with Kameleo's full API surface. We're going to keep building on that. The Linux image is a platform, not a checkbox.

If you've been waiting for this, try it. If you hit something broken, reach us on Discord or email support. We're actively working through early feedback.


Say Goodbye to Anti-Bot Blocks for Good.

No Credit Card Required!


Proven Against Anti-Bot Shields

See real proof on our live masking audit page, and discover which anti-bot shields Kameleo has already bypassed.