Isolating software at scale is more cumbersome than ever; software teams deploy frequently to infrastructure they share with peer teams (a pain I know intimately from building dev platforms at a tech-driven hedge fund). The jumble of PodSecurityPolicies, cgroups, seccomp, ARNs, and other buzzwords is difficult for engineers to navigate and implement, leaving software less isolated and less trustworthy in practice. This talk focuses on system call sandboxing via seccomp and covers how people apply syscall policies today, why they struggle, and a new mechanism for generating policies automatically.

We’ll take a brief detour into program analysis land to show how we can pick programs apart and make assertions about them. With the power of program analysis, we can form a far more accurate and precise picture of which syscalls a specific program could issue than any reasonable human could. I’ll introduce Callander, my new open source system that lets fellow software engineers do exactly this. Using Callander's precise understanding of a program's behavior, we can generate a similarly precise syscall policy and apply it to the program, binding it to only the behaviors that are part of the program itself. None of this is to be believed without proof, so we end with a demonstration: a vulnerable application is run unsandboxed and hijacked by an attacker, then run again with Callander applied to show the attack defeated in the spirit of “secure by design.”
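To ground what a "syscall policy" looks like in practice, here is a minimal, hand-written seccomp allowlist sketch using libseccomp. It is illustrative only (not Callander's output or code from the talk), and even this toy program's syscall list is an assumption; a real binary needs a far longer and harder-to-guess list, which is exactly why hand-maintained policies are painful and automatic generation is attractive.

    /* allowlist.c - build with: gcc allowlist.c -lseccomp */
    #include <seccomp.h>
    #include <unistd.h>

    int main(void) {
        /* Default-deny filter: any syscall not explicitly allowed kills the process. */
        scmp_filter_ctx ctx = seccomp_init(SCMP_ACT_KILL_PROCESS);
        if (ctx == NULL)
            return 1;

        /* The only syscalls this toy program is expected to issue. */
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(write), 0);
        seccomp_rule_add(ctx, SCMP_ACT_ALLOW, SCMP_SYS(exit_group), 0);

        /* Hand the filter to the kernel; the allowlist is enforced from here on. */
        if (seccomp_load(ctx) < 0) {
            seccomp_release(ctx);
            return 1;
        }
        seccomp_release(ctx);

        static const char msg[] = "hello from inside the sandbox\n";
        write(STDOUT_FILENO, msg, sizeof msg - 1);
        return 0; /* libc exits via exit_group, which is on the allowlist */
    }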
Ryan Petrich is an SVP at a technology-driven hedge fund and was previously CTO at Capsule8, a Linux security monitoring startup. Their current research focuses on using subsystems in unexpected ways for optimum performance and subterfuge. Their work spans developing popular and foundational jailbreak tweaks, architecting resilient distributed systems, and experimenting with compilers, eventual consistency, and frustrating instruction sets.