How Theo de Raadt and OpenBSD shaped “secure by default” thinking through auditing, conservative design, and practical mitigations used across modern systems.

“Secure by default” means a system starts in its safest reasonable state without you having to hunt through menus, read a long checklist, or already know what can go wrong. The first install should minimize exposed services, limit permissions, and choose safer options automatically. You can still open things up—but you do so deliberately, with your eyes open.
A default is the path most people will take. That makes it a security control: it shapes real-world outcomes more than any optional hardening guide. If the default configuration quietly enables extra network services, permissive file access, or risky features, many deployments will inherit that risk for a long time.
OpenBSD is frequently cited in security discussions because it treated this idea as a core engineering goal for decades: ship conservative defaults, reduce attack surface, and make risky behavior opt-in. That focus influenced how many engineers think about operating systems, network services, and application design.
We’ll look at the practices that supported the “secure by default” mindset, including conservative defaults and a minimal attack surface, continuous code auditing, least privilege and privilege separation, exploit mitigations, careful cryptography and API design, and disciplined release engineering.
Theo de Raadt’s role matters historically, but the goal here isn’t hero worship. The more useful takeaway is how a project can turn security from an afterthought into a set of repeatable choices—choices that show up in the defaults, the code review habits, and the willingness to say “no” to convenience when it creates unnecessary risk.
Theo de Raadt is a Canadian developer best known for his long-running focus on careful systems engineering in the BSD family. Before OpenBSD, he was a central figure in early BSD-on-PC efforts and became one of the co-founders of NetBSD in the early 1990s. That background matters: the BSDs weren’t “apps,” they were operating systems meant to be trusted foundations.
OpenBSD began in 1995 after de Raadt left the NetBSD project. The new project wasn’t started to chase novelty or to build a “BSD with everything.” It was started to build a system where correctness and security were explicit priorities, even when that meant saying “no” to convenience.
From the start, OpenBSD put energy into things many projects treat as unglamorous: ongoing code review, clear documentation, conservative defaults, and careful release engineering.
Many operating systems and distributions compete on breadth: more drivers, more bundled services, more configuration options, faster feature delivery. Those are legitimate goals, and they help users.
OpenBSD’s origin story reflects a different bet: that a smaller, more comprehensible base system—shipped with conservative defaults—can reduce the chance of security-critical mistakes.
That doesn’t make other approaches “wrong.” It does mean trade-offs show up in everyday decisions: whether to enable a service by default, whether to accept a complex new subsystem, or whether to redesign an interface so it’s harder to misuse.
OpenBSD’s founding emphasis was a security goal: treat security as a design constraint, not an add-on. But goals aren’t the same as outcomes. Real security is measured over years—through vulnerabilities found, how quickly they’re fixed, how clear the communication is, and how well the project learns from mistakes.
OpenBSD’s culture grew from that premise: assume software can fail, then engineer the defaults and the process to fail less often.
OpenBSD treats the “default install” as a security promise: a fresh system should be reasonably safe before you’ve read a tuning guide, added a firewall rule, or hunted through obscure config files. That’s not convenience—it’s a security control.
If most machines stay close to their defaults (as many do in real life), then the defaults are where risk is either prevented or quietly multiplied.
A secure-by-default approach assumes new administrators will make mistakes, be busy, or follow outdated advice. So the system aims to start from a defensible baseline: minimal exposure, predictable behavior, and configurations that don’t surprise you.
When you do change something, you should be doing it deliberately—because you need a service—not because the base system “helpfully” enabled it.
One practical expression of this mindset is conservative feature selection and a bias toward fewer network-facing services enabled by default. Every listening daemon is a new place for bugs, misconfigurations, and forgotten credentials to hide.
OpenBSD’s defaults aim to keep the initial attack surface small, so the first security win comes from not running things you didn’t ask for.
This conservatism also reduces the number of “foot-guns”—features that are powerful, but easy to misuse when you’re learning.
Defaults only help if people can understand and maintain them. OpenBSD’s culture emphasizes clear documentation and straightforward configuration files so administrators can answer basic questions quickly: What is running? What is listening on the network? What has changed from the defaults?
That clarity matters because security failures are often operational: a service left on unintentionally, a copied config with unsafe options, or an assumption that “someone else already hardened it.”
OpenBSD tries to make the secure path the easy, obvious path—starting from the very first boot.
OpenBSD’s security reputation isn’t only about clever mitigations or strict defaults—it’s also about a habit: assuming security improves when people repeatedly, deliberately read and question the code.
“Read the code” is less a slogan than a workflow: review what you ship, keep reviewing it, and treat ambiguity as a bug.
Systematic review is not just scanning for obvious mistakes. It typically includes reading code that handles untrusted input line by line, questioning assumptions and ambiguous interfaces, and hunting for recurring bug patterns across the entire tree.
A key idea is that audits often aim to prevent whole classes of bugs, not just fix one reported issue.
Audits focus on components that parse untrusted input or handle high-risk operations. Common targets include network daemons, file-format and protocol parsers, privileged (setuid) programs, and cryptographic code.
These areas tend to combine complexity with exposure—exactly where subtle vulnerabilities thrive.
Continuous code review takes time and concentrated expertise. It can slow feature work, and it’s not a guarantee: reviewers miss things, and new code can reintroduce old problems.
OpenBSD’s lesson is more practical than magical: disciplined auditing meaningfully reduces risk when it’s treated as ongoing engineering work, not a one-time “security pass.”
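To make “whole classes of bugs” concrete, here is a small, illustrative sketch (not taken from OpenBSD’s tree) of the kind of pattern an audit hunts down everywhere at once: unchecked multiplication inside an allocation. OpenBSD’s reallocarray(3) rejects overflowing sizes and originated as exactly this sort of class-wide fix; on non-BSD systems it may require libbsd or a recent glibc.

```c
#include <stdio.h>
#include <stdlib.h>

/*
 * Illustrative audit-style fix. The risky pattern is:
 *
 *     buf = malloc(nrecords * sizeof(struct record));
 *
 * where the multiplication can overflow and silently allocate too
 * little memory. reallocarray(3) fails cleanly on overflow instead.
 * (Declared in <stdlib.h> on OpenBSD; on glibc you may need
 * _DEFAULT_SOURCE, or libbsd on older systems.)
 */
struct record { char name[64]; unsigned int id; };

static struct record *
alloc_records(size_t nrecords)
{
    struct record *buf = reallocarray(NULL, nrecords, sizeof(*buf));
    if (buf == NULL) {
        /* Covers both out-of-memory and size overflow. */
        perror("reallocarray");
        return NULL;
    }
    return buf;
}

int
main(void)
{
    struct record *r = alloc_records(1000);
    if (r == NULL)
        return 1;
    free(r);
    return 0;
}
```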
Security isn’t only about adding protections after something goes wrong. OpenBSD pushed a different instinct: start by assuming software will have bugs, then design the system so bugs have limited power.
“Least privilege” means a program (or user) should have only the permissions it needs to do its job—and nothing extra. If a web server only needs to read its own config and serve files from one directory, it shouldn’t also have permission to read everyone’s home folders, change system settings, or access raw devices.
This matters because when something breaks (or gets exploited), the damage is capped by what the compromised component is allowed to do.
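As a minimal sketch of what “only the permissions it needs” can look like in code, the program below uses OpenBSD’s pledge(2) and unveil(2) system calls (OpenBSD-specific; the directory path is a made-up example) to declare, up front, that it only reads files from one directory and writes to standard output:

```c
#include <err.h>
#include <stdio.h>
#include <unistd.h>

/*
 * Minimal least-privilege sketch (OpenBSD-specific: pledge(2) and
 * unveil(2) exist only there). The hypothetical content directory
 * is the only part of the filesystem this process can ever see.
 */
int
main(void)
{
    /* Make only /var/www/htdocs visible, read-only. */
    if (unveil("/var/www/htdocs", "r") == -1)
        err(1, "unveil");
    if (unveil(NULL, NULL) == -1)          /* lock the unveil list */
        err(1, "unveil lock");

    /* From here on, only stdio and read-only path access are allowed. */
    if (pledge("stdio rpath", NULL) == -1)
        err(1, "pledge");

    FILE *f = fopen("/var/www/htdocs/index.html", "r");
    if (f == NULL)
        err(1, "fopen");

    char buf[256];
    if (fgets(buf, sizeof(buf), f) != NULL)
        fputs(buf, stdout);
    fclose(f);

    /* Trying to write elsewhere or open a socket now fails (or, for a
       pledge violation, terminates the process) instead of quietly
       succeeding. */
    return 0;
}
```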
Network-facing programs are exposed to untrusted input all day long: web requests, SSH login attempts, malformed packets.
Privilege separation splits a program into smaller parts: a small privileged process that performs only the operations that genuinely need extra rights, and an unprivileged process that handles untrusted input.
So even if an attacker finds a bug in the internet-facing portion, they don’t automatically gain full system control. They land in a process with few rights and fewer ways to escalate.
OpenBSD reinforced this split with additional isolation tools (like chroot jails and other OS-level restrictions). Think of it as running a risky component in a locked room: it can do its narrow task, but it can’t wander around the house.
Before: one big daemon runs with broad privileges → compromise one piece, compromise the whole system.
After: small, separated components with minimal privileges → compromise one piece, get a limited foothold, and hit barriers at every step.
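A simplified sketch of the pattern, assuming a Unix-like system, a process that starts with root privileges, and placeholder values for the unprivileged account and empty directory (this is not any real daemon’s code): the parent keeps its privileges, while a forked child confines itself to an empty directory, drops to an unprivileged user, and handles the untrusted input, talking to the parent over a socket pair.

```c
#define _GNU_SOURCE   /* for setresuid/setresgid on glibc; harmless elsewhere */
#include <sys/types.h>
#include <sys/socket.h>
#include <sys/wait.h>
#include <err.h>
#include <grp.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define UNPRIV_UID 1001     /* placeholder: a dedicated unprivileged account */
#define UNPRIV_GID 1001
#define EMPTY_DIR  "/var/empty"

/* Confine the current process: empty chroot, no groups, unprivileged ids. */
static void
drop_privileges(void)
{
    if (chroot(EMPTY_DIR) == -1 || chdir("/") == -1)
        err(1, "chroot");
    if (setgroups(0, NULL) == -1 ||
        setresgid(UNPRIV_GID, UNPRIV_GID, UNPRIV_GID) == -1 ||
        setresuid(UNPRIV_UID, UNPRIV_UID, UNPRIV_UID) == -1)
        err(1, "drop privileges");
}

int
main(void)
{
    int sp[2];
    if (socketpair(AF_UNIX, SOCK_STREAM, 0, sp) == -1)
        err(1, "socketpair");

    pid_t pid = fork();
    if (pid == -1)
        err(1, "fork");

    if (pid == 0) {                         /* unprivileged child */
        close(sp[0]);
        drop_privileges();

        char req[256] = "GET /index.html";  /* stand-in for untrusted input */
        /* ... parse req here; a bug in this code is now contained ... */
        if (write(sp[1], req, strlen(req)) == -1)
            err(1, "write");
        _exit(0);
    }

    /* Privileged parent: acts only on vetted messages from the child. */
    close(sp[1]);
    char buf[256];
    ssize_t n = read(sp[0], buf, sizeof(buf) - 1);
    if (n > 0) {
        buf[n] = '\0';
        printf("parent received: %s\n", buf);
    }
    waitpid(pid, NULL, 0);
    return 0;
}
```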
For years, a huge share of real-world compromises started with a simple class of defect: memory safety bugs. Buffer overflows, use-after-free, and similar mistakes can let an attacker overwrite control data and run arbitrary code.
OpenBSD treated that reality as a practical engineering problem: assume some bugs will slip through, then design the system so exploiting them is harder, noisier, and less reliable.
OpenBSD helped normalize mitigations that many people now take for granted: W^X (memory is writable or executable, never both), address space layout randomization (ASLR), and stack protections such as canaries.
These mechanisms aren’t “magic shields.” They’re speed bumps—often very effective ones—that force attackers to chain more steps, require better information leaks, or accept lower reliability.
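One way to see a mitigation in action: the small probe below asks the kernel for memory that is simultaneously writable and executable, the property that classic “write shellcode, then jump to it” exploits depend on. On a system enforcing W^X (OpenBSD refuses such mappings by default; other platforms vary), the request fails cleanly.

```c
#include <sys/mman.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>

/*
 * W^X probe: request an anonymous mapping that is readable, writable,
 * and executable at the same time, and report whether the kernel
 * allows it. The code handles both outcomes, so it runs anywhere.
 */
int
main(void)
{
    void *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE | PROT_EXEC,
                   MAP_ANON | MAP_PRIVATE, -1, 0);
    if (p == MAP_FAILED) {
        printf("writable+executable mapping refused: %s\n", strerror(errno));
        return 0;
    }
    printf("writable+executable mapping allowed at %p\n", p);
    munmap(p, 4096);
    return 0;
}
```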
The deeper lesson is defense-in-depth: mitigations buy time, reduce blast radius, and turn some vulnerabilities into crashes instead of takeovers. That matters operationally because it can shrink the window between discovery and patching, and it can prevent one mistake from becoming a full-system incident.
But mitigations are not a substitute for fixing vulnerabilities. OpenBSD’s philosophy paired exploit resistance with relentless bug fixing and review: make exploitation harder today, and keep removing the underlying bugs tomorrow.
OpenBSD’s security reputation isn’t built on “more crypto everywhere.” It’s built on correctness first: fewer surprises, clearer APIs, and behavior you can reason about under pressure.
That mindset affects how cryptography is integrated, how randomness is generated, and how interfaces are designed so that unsafe choices are harder to make by accident.
A recurring OpenBSD theme is that security failures often start as ordinary bugs: parsing edge cases, ambiguous flags, silent truncation, or “helpful” defaults that mask errors.
The project tends to prefer smaller, auditable interfaces with explicit failure modes, even if that means removing or redesigning long-standing behaviors.
Clear APIs also reduce “configuration foot-guns.” If a secure option requires a maze of toggles, many deployments will end up insecure despite good intentions.
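A small illustration of an explicit failure mode, using strlcpy(3), the bounded string-copy function that originated on OpenBSD (on other systems it may require libbsd or a recent libc); the hostname limit and input string are made-up values.

```c
#include <stdio.h>
#include <string.h>

/*
 * strlcpy(3) always NUL-terminates and returns the length it tried to
 * copy, so truncation becomes a visible, checkable event rather than a
 * silent one.
 */
int
main(void)
{
    char host[16];
    const char *input = "an-unreasonably-long-hostname.example"; /* made-up */

    size_t needed = strlcpy(host, input, sizeof(host));
    if (needed >= sizeof(host)) {
        /* Explicit failure path: reject instead of quietly truncating. */
        fprintf(stderr, "hostname too long (%zu bytes, limit %zu)\n",
                needed, sizeof(host) - 1);
        return 1;
    }
    printf("hostname: %s\n", host);
    return 0;
}
```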
OpenBSD’s approach to cryptography is conservative: use well-understood primitives, integrate them carefully, and avoid enabling legacy behaviors that exist mainly for backward compatibility.
This shows up in defaults that favor strong algorithms and in the willingness to deprecate older, weaker options rather than keep them around “just in case.”
The goal is not to offer every possible cipher suite—it’s to make the safe path the normal path.
Many real-world breakages trace back to weak randomness, unsafe parsing, or hidden complexity in configuration layers.
Weak randomness can undermine otherwise strong cryptography, so secure-by-default systems treat entropy and random APIs as critical infrastructure, not an afterthought.
Unsafe parsing (of keys, certificates, config files, or network inputs) is another repeat offender; predictable formats, strict validation, and safer string handling reduce the attack surface.
Finally, “hidden” configuration complexity is itself a risk: when security depends on subtle ordering rules or undocumented interactions, mistakes become inevitable.
OpenBSD’s preference is to simplify the interface and choose defaults that don’t quietly inherit insecure legacy behavior.
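On the randomness point specifically, the sketch below shows the style of API OpenBSD popularized: arc4random(3) and its relatives need no seeding, have no failure path to mishandle, and draw from the kernel’s CSPRNG. (They are native to the BSDs and macOS; elsewhere they may come from libbsd or a recent glibc.)

```c
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

/*
 * Randomness as infrastructure: no srand(), no seed files, no error
 * handling to forget. arc4random_buf() fills a buffer with strong
 * random bytes; arc4random_uniform() avoids modulo bias.
 */
int
main(void)
{
    uint8_t token[16];
    arc4random_buf(token, sizeof(token));       /* e.g. a session token */

    printf("session token: ");
    for (size_t i = 0; i < sizeof(token); i++)
        printf("%02x", token[i]);
    printf("\n");

    uint32_t jitter_ms = arc4random_uniform(1000);  /* unbiased 0..999 */
    printf("retry jitter: %u ms\n", jitter_ms);
    return 0;
}
```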
OpenSSH is one of the clearest examples of how OpenBSD’s security philosophy escaped the project and became a default expectation elsewhere.
When SSH became the standard way to administer Unix and Linux systems remotely, the question wasn’t “Should we encrypt remote logins?”—it was “Which implementation can we trust to run everywhere, all the time?”
OpenSSH emerged when the original free SSH implementation (SSH 1.x) faced licensing changes and the ecosystem needed a permissively available, actively maintained alternative.
OpenBSD didn’t just provide a replacement; it delivered a version shaped by its culture: conservative changes, code clarity, and a bias toward safe behavior without requiring every admin to be an expert.
That mattered broadly because SSH sits on the most sensitive path in many environments: privileged access, fleet-wide automation, and emergency recovery. A weakness in SSH isn’t “one more bug”—it can become a universal key.
OpenBSD treated remote administration as a high-stakes workflow.
OpenSSH’s configuration and supported features nudged administrators toward better patterns: strong cryptography, sane authentication options, and guardrails that reduce accidental exposure.
This is what “secure by default” looks like in practice: reducing the number of foot-guns available to an operator under pressure. When you’re SSH’ing into a production box at 2 a.m., defaults matter more than policy docs.
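As an illustration only (this is not OpenSSH’s shipped configuration file, and the account names are hypothetical), a hardened sshd_config fragment shows how those guardrails read in practice: each safer choice is explicit, short, and easy to audit.

```
# Illustrative fragment in the style of a hardened sshd_config
# (not the project's shipped defaults verbatim).

# Administer via an ordinary account plus doas/sudo, not direct root logins.
PermitRootLogin no

# Key-based authentication only; removes password guessing as a vector.
PasswordAuthentication no
KbdInteractiveAuthentication no
PubkeyAuthentication yes

# Features stay off until someone actually needs them.
X11Forwarding no
AllowAgentForwarding no

# Hypothetical account names: restrict who may log in at all.
AllowUsers deploy admin
```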
OpenSSH was designed to travel. Ports to Linux, *BSDs, macOS, and commercial Unix meant OpenBSD’s security decisions—APIs, configuration conventions, and hardening attitudes—moved with the code.
Even organizations that never ran OpenBSD directly still adopted its remote-access assumptions because OpenSSH became the common denominator.
The biggest impact wasn’t theoretical: it showed up in day-to-day admin access patterns. Teams standardized on encrypted remote management, improved key-based workflows, and gained a well-audited tool they could deploy almost everywhere.
Over time, this raised the baseline for what “normal” secure administration looks like—and made insecure remote access harder to justify.
“Secure by default” isn’t only a design goal—it’s a promise you keep every time you ship.
OpenBSD’s reputation rests heavily on disciplined release engineering: predictable releases, careful changes, and a bias toward clarity over cleverness.
Defaults can be secure on day one, but users experience security over months and years through updates, advisories, and how confidently they can apply fixes.
Trust grows when updates are regular and communication is concrete. A good security advisory answers four questions without drama: What’s affected? What’s the impact? How do I remediate? How can I verify?
OpenBSD-style communication tends to avoid vague severity talk and focuses on actionable detail—version ranges, patch references, and minimal workarounds.
Responsible disclosure norms matter here too. Coordinating with reporters, setting clear timelines, and crediting researchers helps keep fixes orderly without turning every issue into a headline.
Release engineering is also risk management. The more complex the build and release chain, the more opportunities for mis-signing, wrong artifacts, or compromised dependencies.
A simpler, well-understood pipeline—repeatable builds, minimal moving parts, strong signing practices, and straightforward provenance—lowers the odds of shipping the wrong thing.
Avoid fear-based messaging. Use plain language, define what “remote,” “local,” and “privilege escalation” mean, and be honest about uncertainty. When you must speculate, label it.
Provide a calm “do this now” path (upgrade or patch) and a “do this next” path (configuration review, monitoring).
When release processes, patching, and communication are consistent, users learn to update quickly—and that’s where secure defaults become durable trust.
OpenBSD’s security reputation isn’t just about clever mitigations—it’s also about how people work.
The project normalized the idea that security is a shared responsibility, and that “good enough” defaults (or sloppy patches) aren’t acceptable simply because they function.
A few habits show up repeatedly in secure engineering teams, and OpenBSD made them explicit: relentless code review, a willingness to say “no” to risky convenience, fixing whole classes of bugs rather than single instances, and plain, concrete communication about defects.
Strong opinions can improve security by preventing gradual quality drift: risky shortcuts get challenged early, and vague reasoning (“it should be fine”) is treated as a bug.
But that same intensity can also reduce contributions if people feel unsafe asking questions or proposing changes. Security benefits from scrutiny; scrutiny requires participation.
You can borrow the mechanics of a demanding culture without replicating the friction.
Practical rituals that work in most organizations: mandatory review for any change that increases exposure, a written secure baseline for new services, tracked exceptions with owners and expiration dates, and periodic audits of what is actually running versus what the baseline says.
The takeaway: security isn’t a feature you add later. It’s a standard you enforce—repeatedly, visibly, and with processes that make the right choice the easiest one.
OpenBSD’s biggest transferable idea isn’t a specific tool—it’s the habit of treating defaults as part of your security posture.
You can apply that mindset anywhere by turning “secure by default” into concrete decisions your org makes repeatedly, not heroics after an incident.
Start by writing two short policies that are easy to audit: a default-exposure policy (what a new service may listen on, run as, and access out of the box) and an exception policy (who can approve a deviation, for how long, and how it gets reviewed).
This is how you replace “remember to harden it” with “it ships hardened unless someone signs for risk.”
Use this as a starting point for endpoints and services: nothing listening by default, least-privilege accounts for anything that must run, conservative configuration templates, and risky or legacy features disabled until someone explicitly needs them.
Pick a few numbers that are hard to game: time from advisory to deployed patch, the share of systems still matching the hardened baseline, and the count of exceptions open past their expiration date.
The common thread is simple: make the safe choice the easiest choice, and make risky choices visible, reviewed, and reversible.
Fast build cycles can either improve security (because fixes ship quickly) or accidentally amplify risk (because insecure defaults get replicated at speed). If you’re using an LLM-assisted workflow, treat “secure by default” as a product requirement, not an afterthought.
For example, when building apps on Koder.ai (a vibe-coding platform that generates web, backend, and mobile apps from chat), you can apply the OpenBSD lesson by making your baseline explicit early: least-privilege roles, private-by-default networking, and conservative configuration templates. Koder.ai’s Planning Mode is a good place to force that discipline up front—define threat boundaries and default exposure before implementation.
Operationally, features like snapshots and rollback help reinforce “defense in depth” at the deployment level: when a change accidentally widens exposure (a misconfigured endpoint, overly permissive policy, or debug flag), you can revert quickly, then ship a corrected default. And because Koder.ai supports source code export, you can still run the same “read the code” auditing habits—treat generated code like any other production code: review, test, and harden.
“Secure by default” is often repeated, but it’s easy to misunderstand what OpenBSD (and Theo de Raadt’s broader philosophy) actually demonstrated.
“Secure by default” doesn’t mean unhackable. No general-purpose operating system can promise “can’t be hacked.” The real claim is more practical: a fresh install should start from a defensive posture—fewer risky services exposed, safer defaults, and features that reduce the blast radius when something goes wrong.
That mindset shifts work earlier in the lifecycle. Instead of asking users to discover and fix insecure settings, the system tries to make the safer choice the path of least resistance.
Security defaults can cost something: convenience, compatibility, or performance. Disabling a legacy feature, tightening permissions, or enforcing safer cryptographic choices may frustrate someone who relied on the old behavior.
OpenBSD’s approach implicitly argues that some friction is acceptable if it prevents silent, widespread exposure. The tradeoff isn’t “security vs. usability,” but “who carries the burden”: every user by default, or the minority who truly need the less-safe option.
Cargo-cult security—lifting config snippets without understanding threat models, deployment context, and operational constraints—often creates brittle systems. A hardening flag that helps on one platform can break updates, monitoring, or recovery procedures elsewhere.
The deeper lesson is the method: careful defaults, continuous review, and a willingness to remove risky behavior even when it’s popular.
OpenBSD’s influence is real: modern hardening, auditing habits, and “safer by default” expectations owe it a lot.
But its biggest contribution may be cultural—treating security as an engineering discipline with standards, maintenance, and accountability, not a checklist of knobs to turn.
“Secure by default” means the initial, out-of-the-box configuration starts from a defensible baseline: minimal exposed services, conservative permissions, and safer protocol/crypto choices.
You can still relax restrictions, but you do it intentionally—so risk is explicit rather than inherited accidentally.
Because defaults are the path most deployments stay on. If a service is enabled by default, many systems will run it for years—often without anyone remembering it’s there.
Treat the default config like a high-impact security control: it determines the real-world attack surface for the majority of installs.
Start with basic exposure checks: what is listening on the network, what runs with elevated privileges, which services start at boot, and which settings are still defaults nobody consciously chose.
The goal is to ensure nothing is reachable or privileged “just because it came that way.”
Auditing is systematic review aimed at reducing whole classes of bugs, not just fixing a single reported issue. Common audit activities include reading high-exposure code paths (parsers, network daemons, privileged programs), hunting for recurring bug patterns across the codebase, and tightening interfaces so they are harder to misuse.
It’s ongoing engineering work, not a one-time “security pass.”
Least privilege means each service (and each component within it) gets only the permissions it needs.
Practical steps: run each service under its own unprivileged account, restrict its filesystem and network access to what it actually uses, and drop privileges as early in the program’s lifetime as possible.
Privilege separation splits a risky, internet-facing program into parts: a small privileged helper and an unprivileged worker that handles the untrusted input.
If the exposed part is compromised, the attacker lands in a process with limited rights, reducing blast radius and making escalation harder.
Mitigations like W^X, ASLR, and stack protections aim to make memory-corruption bugs harder to exploit reliably.
In practice, they turn some memory-corruption bugs into crashes instead of takeovers, force attackers to chain more steps or obtain better information leaks, and lower the reliability of exploits.
They’re defense-in-depth, not a substitute for fixing the underlying bug.
OpenSSH became a widely deployed default for remote administration, so its security posture affects a huge portion of the internet.
Operationally, this matters because SSH often sits on the most sensitive path (admin access, automation, recovery). Safer defaults and conservative changes reduce the chance that “normal usage” becomes an organization-wide weak point.
Trust is built by making updates and advisories easy to act on.
A practical advisory/update process should state what is affected, what the impact is, how to remediate, and how to verify the fix, and it should ship patches on a predictable cadence.
Consistent patching plus clear communication is how “secure by default” stays true over time.
Make the safe path the default path, and require review for anything that increases exposure.
Examples: opening a new listening port, widening file or network permissions, exposing an administrative interface, or re-enabling a legacy protocol for compatibility.
Track exceptions with owners and expiration dates so risk doesn’t become permanent.