Container Escape Telemetry, Part 6: TeamPCP and What the Lab Predicted
A real threat actor is doing exactly what our lab scenarios simulate. Mapping TeamPCP's container escape kill chain against Tetragon, Falco, and Tracee telemetry to answer: would these tools have caught it?
This is Part 6 of the container escape telemetry series (overview). Parts 1-5 covered isolation primitives, methodology, per-scenario telemetry, production considerations, and tuning. This post takes the lab findings and pressure-tests them against a real threat actor operating in the wild right now.
TeamPCP compromised Trivy on March 19, 2026. Five days later they backdoored LiteLLM’s PyPI package. As of late March 2026, incident response teams across the industry are still assessing the blast radius. But the container escape techniques TeamPCP uses aren’t novel – they’re the same patterns our 15 lab scenarios were designed to detect. That’s either a validation of the research or a coincidence. Having spent weeks staring at this telemetry, I don’t think it’s a coincidence.
Who Is TeamPCP
TeamPCP (also tracked as DeadCatx3, PCPcat, ShellForce, and CanisterWorm) emerged in December 2025 as a cloud-native threat actor targeting Docker APIs, Kubernetes clusters, Redis servers, and CI/CD pipelines. Elastic Security Labs published the initial CanisterWorm analysis in January 2026. Trivy and Checkmarx tracked the supply chain component through their own incident reports (Trivy GHSA, Checkmarx advisory). Their operation has two distinct phases that are worth separating because they test different parts of a detection stack.
Phase 1: CanisterWorm (December 2025 – ongoing). A worm that scans for exposed Docker APIs on port 2375, creates privileged containers with the host root filesystem mounted, and propagates via stolen SSH keys. On Kubernetes, it deploys privileged DaemonSets to every node. There’s a geopolitical dimension – Iranian systems get wiped, everyone else gets backdoored – but from a detection engineering perspective, the container escape techniques are identical regardless of the payload.
Phase 2: Supply chain campaign (March 2026). TeamPCP compromised Aqua Security’s Trivy vulnerability scanner via a misconfigured pull_request_target GitHub Actions workflow, stole CI/CD secrets, and used those credentials to cascade into Checkmarx KICS and LiteLLM. The LiteLLM compromise is particularly relevant: malicious PyPI packages (versions 1.82.7 and 1.82.8) deployed a three-stage payload that harvested credentials, attempted Kubernetes lateral movement via privileged pod deployment, and installed persistent systemd backdoors. LiteLLM has broad adoption across cloud environments.
The supply chain initial access is outside the scope of runtime container security tools – you need CI/CD pipeline integrity controls, package signature verification, and GitHub Actions security hardening for that. But once TeamPCP’s payload executes inside a container, every subsequent step is observable at the kernel level. That’s where our lab data becomes directly applicable.
The Kill Chain, Mapped to Lab Telemetry
I’m going to walk through TeamPCP’s documented techniques and map each one to the specific scenario, tool, and telemetry signal from our research. The question at each step: would our tuned Tetragon, Falco, and Tracee configurations have detected this?
Step 1: Docker API Scan and Privileged Container Creation
TeamPCP technique: Scan the local subnet for exposed Docker APIs on port 2375. When found, create a privileged Alpine container via the API with the host root filesystem mounted at /mnt/host.
Lab mapping: This is S04 (Docker socket mount). Our scenario mounts docker.sock into a container and uses the Docker API to spawn a new container with the host’s root filesystem. The vector is different (mounted socket vs. exposed API), but the kernel-level behavior is identical: a process inside a container connects to the Docker daemon and creates a privileged child container.
What each tool would see:
- Tetragon: `security_socket_connect` to the Docker socket, then full process lifecycle telemetry for the new privileged container, including all 41 capabilities in the effective set. The namespace metadata would show the new container’s namespace inodes, and the `is_host: false` boolean confirms the request originated from a container, not the host. Our S04 data showed exactly this chain.
- Tracee: `security_socket_connect` x5 during the Docker API interaction (our S04 captured this), then `sched_process_exec` for every process in the new container. The `commit_creds` event would show the full capability expansion.
- Falco: This is where it gets interesting. Our S04 testing showed Falco’s “Docker Socket Access from Container” rule only fired on runc init, not the actual in-container abuse. The `fd.name` path resolution issue through mount namespaces means Falco would likely miss the socket connection from inside the container. The privileged container creation itself would trigger “Launch Privileged Container” from the default ruleset. So Falco catches the consequence (a privileged container appeared) but potentially misses the cause (which container requested it via the API).
Detection verdict: Tetragon and Tracee catch the full chain. Falco catches the privileged container creation but may miss the socket-level attribution.
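On the wire, this whole step is two unauthenticated HTTP calls against the Docker Engine API. A sketch of the request body, in Python for readability – the image and the `/mnt/host` mount point match the public reporting on TeamPCP; everything else is illustrative:

```python
import json

# Sketch of the container-create request a CanisterWorm-style worm sends to an
# exposed Docker API on port 2375. The payload shape follows the Docker Engine
# API's POST /containers/create endpoint; field values here are illustrative.
create_body = {
    "Image": "alpine",
    "Cmd": ["chroot", "/mnt/host", "/bin/sh"],
    "HostConfig": {
        "Privileged": True,        # full capability set, no seccomp confinement
        "Binds": ["/:/mnt/host"],  # host root filesystem mounted into the container
    },
}

# Two plain HTTP calls complete the escape -- no authentication required, which
# is why port 2375 exposure is the entire vulnerability:
#   POST http://<victim>:2375/containers/create   (body above)
#   POST http://<victim>:2375/containers/<id>/start
print(json.dumps(create_body, indent=2))
```

Nothing in that request is malformed or exploit-like, which is why API-level detection is hard and the kernel-level telemetry above matters.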
Step 2: Host Filesystem Access via chroot
TeamPCP technique: Inside the privileged container, run chroot /mnt/host to pivot into the host’s root filesystem. From there, install backdoors, harvest credentials, and establish persistence.
Lab mapping: This combines S05 (host /proc access) and S13 (privileged post-exploitation). The chroot call itself would be visible to all three tools, but the critical signals are what happens after – the file accesses on host paths.
What each tool would see:
- Tetragon: `security_file_open` events on every sensitive host file accessed through the chroot. The 04-sensitive-file-access TracingPolicy covers `/etc/shadow`, `/proc/1/root/*`, and credential paths. In our S05 test, Tetragon captured `file_open` on `/proc/1/root/*` and `core_pattern`, with full namespace context.
- Tracee: `security_file_open` with path-filtered events on the same sensitive paths. Our S05 data showed 49 `security_file_open` events from the container accessing host files.
- Falco: “Sensitive Host File Read” would fire on reads to `/etc/shadow`, `/proc/1/root/*`, etc. Our S05 data showed 3 alerts with container metadata.
Detection verdict: All three tools catch this. The signals are loud and well-covered by default and custom rules alike.
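For concreteness, here is a minimal sketch of a Tetragon TracingPolicy along the lines of our 04-sensitive-file-access policy. The path list is abbreviated and the values are illustrative stand-ins – the repo policy is broader:

```yaml
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: sensitive-file-access
spec:
  kprobes:
    - call: "security_file_open"
      syscall: false        # LSM hook, not a raw syscall: sees the resolved path
      args:
        - index: 0
          type: "file"
      selectors:
        - matchArgs:
            - index: 0
              operator: "Prefix"
              values:       # abbreviated; real policy covers more credential paths
                - "/etc/shadow"
                - "/proc/1/root/"
```

Because `security_file_open` fires on the resolved file object rather than the syscall argument, the chroot pivot doesn’t hide the true host path.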
Step 3: SSH Key Harvesting and /proc/*/environ Reads
TeamPCP technique: The CanisterWorm parses SSH authentication logs to derive user/IP pairs and harvests local SSH private keys. The LiteLLM payload scans /proc/*/environ across all processes, extracting cloud tokens, API keys, and credentials from environment variables.
Lab mapping: This is exactly S14 (excessive capabilities). Our S14 scenario reads /proc/1/environ with CAP_SYS_PTRACE and --pid=host – the same technique TeamPCP’s LiteLLM payload uses to harvest secrets from process environment variables.
What each tool would see:
- Tetragon: `security_file_open` on `/proc/1/environ` via the capability-abuse TracingPolicy (07-capability-abuse.yaml). Our S14 data showed these events with full capability sets proving the container had `CAP_SYS_PTRACE`.
- Tracee: `security_file_open` on `/proc/1/environ` and `/proc/1/maps`. Our S14 data captured both paths.
- Falco: “Host Process Environ Read from Container” (3 alerts in our S14 run) and “Host Process Maps Read from Container” (3 alerts). These are the custom rules I wrote – Falco’s defaults don’t cover this.
Detection verdict: All three catch it after tuning. This is the critical point: none of these tools detected /proc/1/environ reads out of the box. Every detection required custom policy additions. TeamPCP’s most valuable credential harvesting technique is invisible to default configurations of all three tools.
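The custom rule is short. A sketch of a Falco rule in the spirit of the “Host Process Environ Read from Container” rule described above – the version in the repo may differ in condition details:

```yaml
# Sketch: alert when a containerized process reads another process's environment.
# open_read and container are standard macros from Falco's default ruleset.
- rule: Host Process Environ Read from Container
  desc: Detect a containerized process reading /proc/<pid>/environ
  condition: open_read and container and fd.name glob /proc/*/environ
  output: >
    Environ read from container (command=%proc.cmdline file=%fd.name
    container=%container.name image=%container.image.repository)
  priority: WARNING
  tags: [container, credential_access]
```

A dozen lines of YAML is the entire distance between “invisible by default” and “detected” for TeamPCP’s primary credential theft technique.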
Step 4: Kubernetes DaemonSet Deployment
TeamPCP technique: On Kubernetes, deploy a privileged DaemonSet named host-provisioner-iran in the kube-system namespace. Each node runs an Alpine container named kamikaze with the host root filesystem mounted at /mnt/host.
Lab mapping: This is S13 (privileged post-exploitation) scaled to every node in the cluster. The per-node container behavior is identical to what we tested. The Kubernetes control plane activity (DaemonSet creation in kube-system) is outside the scope of runtime eBPF tools – you need Kubernetes audit logs for that.
What each tool would see on each node:
- Tetragon: 122 mount events (our S13 count) from the privileged container’s filesystem operations, `sys_keyctl` events if the payload enumerates keyrings, and `security_file_open` on `/dev/kmsg` and other sensitive paths. The namespace metadata on every event confirms container origin.
- Tracee: 7 mount events with full arguments, `commit_creds` x18, and process lifecycle for every payload binary. Our S13 Tracee data captured all of these.
- Falco: “Keyctl Invocation” (504 alerts in our S13 run), “Kernel Log Buffer Read”, and “Mount Syscall” alerts. Plus the default “Launch Privileged Container” rule would fire on each node.
Detection verdict: All three tools catch the per-node container behavior. But no eBPF-based runtime tool sees the DaemonSet creation itself. That’s a Kubernetes audit log signal (create DaemonSet in kube-system from non-system service account). This is the gap Elastic’s analysis of TeamPCP also identified: effective cloud-native detection requires combining runtime and control-plane data sources.
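Capturing that control-plane signal takes a Kubernetes audit policy rather than an eBPF policy. A minimal sketch, with cluster-specific tuning (stages, omitUser exclusions) left out:

```yaml
# Sketch: record workload creation in kube-system. A privileged DaemonSet
# landing here from a non-system service account is the TeamPCP signal.
apiVersion: audit.k8s.io/v1
kind: Policy
rules:
  - level: RequestResponse
    verbs: ["create"]
    namespaces: ["kube-system"]
    resources:
      - group: "apps"
        resources: ["daemonsets"]
      - group: ""
        resources: ["pods"]
```

Correlating this audit event with the per-node runtime telemetry above is what turns “a privileged container appeared” into “a worm deployed itself cluster-wide.”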
Step 5: Persistence via systemd
TeamPCP technique: The LiteLLM payload installs a persistent systemd backdoor on the host that polls for additional binaries. CanisterWorm installs itself as a systemd service on non-Iranian nodes.
Lab mapping: This goes beyond our scenarios. We tested host filesystem access (S05, S13) but not explicit systemd service installation. The file writes to /etc/systemd/system/ would be visible to any tool monitoring sensitive host file writes, but we didn’t have a specific policy for systemd unit file creation.
What each tool would see:
- Tetragon: Would capture `security_file_open` on systemd paths if the path filter included `/etc/systemd/`. Our current 04-sensitive-file-access policy doesn’t cover this path. This is a gap.
- Tracee: Same – `security_file_open` would fire if the path filter included systemd directories. Our current filter list doesn’t.
- Falco: “Write below etc” is a default Falco rule that would fire on writes to `/etc/systemd/system/`. This is one case where Falco’s broader default ruleset provides coverage that the more targeted Tetragon and Tracee configurations miss.
Detection verdict: Falco catches this by default. Tetragon and Tracee would miss it without adding /etc/systemd/* to their path filters. Adding it is trivial, but the gap exists in our current configuration.
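Closing the Tetragon gap is a one-line change. Assuming the policy uses a Prefix-matching selector on `security_file_open` (the pattern our policies follow; the surrounding paths here are illustrative), the addition looks like:

```yaml
# Excerpt of a security_file_open kprobe selector. Existing entries are
# stand-ins for the policy's real filter list; only the last line is new.
selectors:
  - matchArgs:
      - index: 0
        operator: "Prefix"
        values:
          - "/etc/shadow"
          - "/proc/1/root/"
          - "/etc/systemd/"   # new: catches unit-file drops for persistence
```

The equivalent Tracee change is the same path added to its filter list.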
Step 6: C2 via Internet Computer Protocol
TeamPCP technique: CanisterWorm uses an Internet Computer Protocol (ICP) canister for command-and-control. There’s no registrar to contact, no hosting provider to subpoena, no single point of failure. The C2 remains operational as long as the ICP blockchain exists.
Lab mapping: No direct scenario mapping. Our network monitoring (Tetragon’s 05-network-activity policy with tcp_connect/tcp_close) would capture outbound connections from the container, but distinguishing ICP canister traffic from legitimate HTTPS requires network-level analysis beyond what runtime eBPF tools provide.
Detection verdict: Runtime tools see the connection. They don’t understand the protocol. Network detection (TLS fingerprinting, JA3/JA4 hashes, destination reputation) is the right layer for this signal.
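For reference, the connection-level visibility we do get comes from hooks like those in our 05-network-activity policy. A sketch of the core of such a TracingPolicy – the repo version carries additional selectors:

```yaml
# Sketch: observe TCP connection open/close with container attribution.
# Tetragon resolves source/destination address and port from the sock argument.
apiVersion: cilium.io/v1alpha1
kind: TracingPolicy
metadata:
  name: network-activity
spec:
  kprobes:
    - call: "tcp_connect"
      syscall: false
      args:
        - index: 0
          type: "sock"
    - call: "tcp_close"
      syscall: false
      args:
        - index: 0
          type: "sock"
```

This tells you which container opened a connection to where, which is exactly the attribution a network-layer tool needs to turn destination reputation into an actionable alert.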
The Scorecard
| TeamPCP Technique | S-Mapping | Tetragon | Falco | Tracee |
|---|---|---|---|---|
| Docker API scan + privileged container | S04, S13 | Full chain | Partial (misses socket attribution) | Full chain |
| Host filesystem via chroot /mnt/host | S05, S13 | Detected | Detected | Detected |
| /proc/*/environ credential harvest | S14 | Detected (custom policy) | Detected (custom rule) | Detected (custom policy) |
| SSH key harvesting from host | S05 | Detected | Detected | Detected |
| K8s DaemonSet in kube-system | – | Not visible (K8s audit log) | Not visible (K8s audit log) | Not visible (K8s audit log) |
| systemd persistence on host | – | Gap (path filter needed) | Detected (default rule) | Gap (path filter needed) |
| ICP canister C2 | – | Connection visible, protocol opaque | Connection visible, protocol opaque | Connection visible, protocol opaque |
Out of TeamPCP’s seven documented techniques, a tuned three-tool stack catches five at the runtime level. The Kubernetes control plane activity and the C2 protocol identification require different detection layers.
What This Validates
S13/S14 Were the Right Scenarios to Add
These two scenarios were added in response to peer review feedback: “I would have loved if one of your scenarios was taking a privileged container that isn’t a straight breakout.” TeamPCP’s entire CanisterWorm operation is exactly this – privileged containers abusing their capabilities without namespace escape. The mount /dev/sda1, chroot /mnt/host, and /proc/*/environ harvesting patterns we tested in S13/S14 are the same patterns TeamPCP uses in production. The fact that no tool detected /proc/1/environ reads by default, and all three required custom policies, is directly validated by TeamPCP making this their primary credential harvesting technique.
Default Configurations Are Insufficient for Real Threats
TeamPCP’s techniques are not exotic. Docker API scanning, privileged container creation, host filesystem mounting, and environment variable harvesting are well-documented patterns. Yet Falco’s default ruleset only covers four of the five runtime-detectable techniques – “Launch Privileged Container” catches the container creation, “Sensitive Host File Read” catches host filesystem access and SSH key harvesting, “Write below etc” catches systemd persistence, and “Docker Socket Access from Container” partially catches the API abuse (runc init only, not the source container’s socket connect). The /proc/*/environ harvesting – TeamPCP’s primary credential theft technique – requires a custom rule. Tetragon ships with zero detection policies. Tracee’s defaults would miss the /proc/*/environ reads entirely.
The Falco rule gap finding from Part 4 – configuration, not engine – takes on new urgency when the configuration gap aligns with what a real threat actor is doing right now.
What Would Have Caught TeamPCP Fastest
If I were building a detection stack specifically for TeamPCP’s documented TTPs, the minimum set would be:
- Falco with “Launch Privileged Container” rule (default) – catches the privileged container creation on every node, fastest alert
- Tetragon or Tracee with a `/proc/*/environ` path filter – catches the credential harvesting that makes lateral movement possible
- Kubernetes audit log monitoring for DaemonSet/Pod creation in `kube-system` by non-system identities
- Network monitoring for outbound connections to ICP canister endpoints (destination reputation)
The first two are the runtime layer that our research covers. The third and fourth require different tooling. But the runtime signals alone – privileged container launch plus /proc/*/environ access from within that container – would have generated alerts within seconds of CanisterWorm executing on a node. Whether those alerts get investigated before the attacker establishes persistence is an operational question, not a detection one.
Why Runtime Telemetry Still Matters
TeamPCP’s supply chain campaign compromised security scanners, AI proxies, and CI/CD pipelines. But the runtime telemetry layer – what the kernel sees happening inside containers – operates below all of that. An attacker can poison your dependencies and backdoor your toolchain. But when their payload runs mount /dev/sda1 or reads /proc/1/environ, the kernel knows, and a properly tuned eBPF tool will see it. The gap between “properly tuned” and “default configuration” is where TeamPCP lives.
The Series
This is the final post. The full series:
- Series Overview – project goals, key findings, and reading guide
- Part 1: Isolation Primitives and the eBPF Observability Model
- Part 2: Methodology and Tool Architecture
- Part 3: What Each Tool Actually Captured
- Part 4: Volume, Signal-to-Noise, and Choosing a Tool
- Part 5: Tuning eBPF Tools From Defaults to Detection
The full scenario scripts, Tetragon policies, Falco rules, and automation harness are available in the research repository.
LLM Disclosure
Claude (Anthropic) was used throughout this project to assist with lab setup and automation, telemetry analysis and correlation, and authoring this blog series.
