Learn how Solomon Hykes and Docker popularized containers, making images, Dockerfiles, and registries the standard way to package and deploy modern apps.

Solomon Hykes is the engineer who helped turn a long-standing idea—isolating software so it runs the same everywhere—into something teams could actually use day to day. In 2013, the project he introduced to the world became Docker, and it quickly changed how companies ship applications.
At the time, the pain was simple and familiar: an app worked on a developer’s laptop, then behaved differently on a teammate’s machine, then broke again in staging or production. These “inconsistent environments” weren’t just annoying—they slowed releases, made bugs hard to reproduce, and created endless handoffs between development and operations.
Docker gave teams a repeatable way to package an application together with the dependencies it expects—so the app can run the same way on a laptop, a test server, or in the cloud.
That’s why people say containers became the “default packaging and deployment unit.” Put simply:
Instead of deploying “a zip file plus a wiki page of setup steps,” many teams deploy an image that already includes what the app needs. The result is fewer surprises and faster, more predictable releases.
This article mixes history with practical concepts. You’ll learn who Solomon Hykes is in this context, what Docker introduced at the right moment, and the basic mechanics—without assuming deep infrastructure knowledge.
You’ll also see where containers fit today: how they connect to CI/CD and DevOps workflows, why orchestration tools like Kubernetes became important later, and what containers do not automatically fix (especially around security and trust).
By the end, you should be able to explain—clearly and confidently—why “ship it as a container” became a default assumption for modern application deployment.
Before containers became mainstream, getting an application from a developer’s laptop to a server was often more painful than writing the app itself. Teams didn’t lack talent—they lacked a reliable way to move “the thing that works” between environments.
A developer might run the app perfectly on their computer, then watch it fail in staging or production. Not because the code changed, but because the environment did. Different operating system versions, missing libraries, slightly different config files, or a database running with different defaults could all break the same build.
Many projects relied on long, fragile setup instructions, often a wiki page of install steps that every new machine had to follow by hand.
Even when written carefully, these guides aged quickly. One teammate upgrading a dependency could accidentally break onboarding for everyone else.
Worse, two apps on the same server might require incompatible versions of the same runtime or library, forcing teams into awkward workarounds or separate machines.
“Packaging” often meant producing a ZIP file, a tarball, or an installer. “Deployment” meant a different set of scripts and server steps: provision a machine, configure it, copy files, restart services, and hope nothing else on the server was impacted.
Those two concerns rarely matched cleanly. The package didn’t fully describe the environment it needed, and the deployment process depended heavily on the target server being prepared “just right.”
What teams needed was a single, portable unit that could travel with its dependencies and run consistently across laptops, test servers, and production. That pressure—repeatable setup, fewer conflicts, and predictable deployment—set the stage for containers to become the default way to ship applications.
Docker didn’t start as a grand plan to “change software forever.” It grew out of practical engineering work led by Solomon Hykes while building a platform-as-a-service product. The team needed a repeatable way to package and run applications across different machines without the usual “it works on my laptop” surprises.
Before Docker was a household name, the underlying need was straightforward: ship an app with its dependencies, run it reliably, and do it again and again for many customers.
The project that became Docker emerged as an internal solution—something that made deployments predictable and environments consistent. Once the team realized the packaging-and-running mechanism was broadly useful beyond their own product, they released it publicly.
That release mattered because it turned a private deployment technique into a shared toolchain the whole industry could adopt, improve, and standardize around.
It’s easy to blur containers and Docker together, but they’re different things.
Containers existed in various forms before Docker. What changed is that Docker packaged the workflow into a developer-friendly set of commands and conventions—build an image, run a container, share it with someone else.
A few widely known steps pushed Docker from “interesting” to “default”: the public open-source release, a growing ecosystem of shared images and tooling, and broad industry adoption of its conventions.
The practical result: developers stopped debating how to replicate environments and started shipping the same runnable unit everywhere.
Containers are a way to package and run an application so it behaves the same on your laptop, on a coworker’s machine, and in production. The key idea is isolation without a full new computer.
A virtual machine (VM) is like renting an entire apartment: you get your own front door, your own utilities, and your own copy of the operating system. That’s why VMs can run different OS types side by side, but they’re heavier and usually take longer to boot.
A container is more like renting a locked room inside a shared building: you bring your furniture (app code + libraries), but the building’s utilities (the host operating system kernel) are shared. You still get separation from other rooms, but you’re not starting a whole new OS each time.
On Linux, containers rely on built-in isolation features (namespaces and cgroups) that limit what each process can see and how much CPU and memory it can use.
You don’t need to know the kernel details to use containers, but it helps to know they’re leveraging OS features—not magic.
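As a rough illustration of those OS features in action (a hedged sketch; the image name and limits are placeholders, not from this article), the resource flags below are enforced by cgroups on the host, while namespaces give the process its own view of the filesystem, processes, and network:
# Run a container with a CPU and memory ceiling (enforced via cgroups):
docker run --rm --cpus 1 --memory 256m myapp:1.0
# Inside, the process sees its own filesystem, process list, and network
# (namespaces), but it still shares the host's Linux kernel.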
Containers became popular because they’re lightweight, fast to start, and portable: the same packaged app behaves the same way across machines.
Containers are not a security boundary by default. Because containers share the host kernel, a kernel-level vulnerability can potentially affect multiple containers. It also means you can’t run a Windows container on a Linux kernel (and vice versa) without extra virtualization.
So: containers improve packaging and consistency—but you still need smart security, patching, and configuration practices.
Docker succeeded partly because it gave teams a simple mental model with clear “parts”: a Dockerfile (instructions), an image (the built artifact), and a container (the running instance). Once you understand that chain, the rest of the Docker ecosystem starts to make sense.
A Dockerfile is a plain-text file that describes how to build your application environment step by step. Think of it like a cooking recipe: it doesn’t feed anyone by itself, but it tells you exactly how to produce the same dish every time.
Typical Dockerfile steps include: choosing a base (like a language runtime), copying your app code in, installing dependencies, and declaring the command to run.
An image is the built result of a Dockerfile. It’s a packaged snapshot of everything needed to run: your code, dependencies, and configuration defaults. It’s not “alive”—it’s more like a sealed box you can ship around.
A container is what you get when you run an image. It’s a live process with its own isolated filesystem and settings. You can start it, stop it, restart it, and create multiple containers from the same image.
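For example (a minimal sketch; names and ports are illustrative), you can start two independent containers from the same image and map them to different host ports:
# Two running instances of the same image, isolated from each other:
docker run -d --name web-1 -p 3001:3000 myapp:1.0
docker run -d --name web-2 -p 3002:3000 myapp:1.0
# Stop and remove one without affecting the other:
docker stop web-1 && docker rm web-1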
Images are built in layers. Each instruction in a Dockerfile usually creates a new layer, and Docker tries to reuse (“cache”) layers that haven’t changed.
In plain terms: if you only change your application code, Docker can often reuse the layers that installed the operating system packages and dependencies, making rebuilds much faster. This also encourages reuse across projects—many images share common base layers.
Here’s what the “recipe → artifact → running instance” flow looks like:
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
CMD ["node", "server.js"]
Then build the image and run a container from it:
docker build -t myapp:1.0 .
docker run --rm -p 3000:3000 myapp:1.0
This is the core promise Docker popularized: if you can build the image, you can run the same thing reliably—on your laptop, in CI, or on a server—without rewriting installation steps each time.
Running a container on your own laptop is useful—but it’s not the breakthrough. The real shift happened when teams could share the exact same build and run it anywhere, without “it works on my machine” arguments.
Docker made that sharing feel as normal as sharing code.
A container registry is a store for container images. If an image is the packaged app, a registry is the place you keep packaged versions so other people and systems can fetch them.
Registries support a simple workflow: build an image, push it to the registry, and pull it on any machine or environment that needs to run it.
Public registries (like Docker Hub) made it easy to start. But most teams quickly needed a registry that matched their access rules and compliance requirements.
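As a sketch of that push/pull loop (the registry hostname and repository path here are placeholders, not real endpoints):
# Tag the local image for your registry, then publish it:
docker tag myapp:1.4.2 registry.example.com/team/myapp:1.4.2
docker push registry.example.com/team/myapp:1.4.2
# Any machine with access can now pull the exact same build:
docker pull registry.example.com/team/myapp:1.4.2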
Images are usually identified as name:tag—for example myapp:1.4.2. That tag is more than a label: it’s how humans and automation agree on which build to run.
A common mistake is relying on latest. It sounds convenient, but it’s ambiguous: “latest” can change without warning, causing environments to drift. One deploy might pull a newer build than the previous deploy—even if nobody intended to upgrade.
Better habits: use explicit version tags (like 1.4.2) for releases, or commit-based tags when you need exact traceability.
As soon as you’re sharing internal services, paid dependencies, or company code, you typically want a private registry. It lets you control who can pull or push images, integrate with single sign-on, and keep proprietary software out of public indexes.
This is the “laptop to team” leap: once images live in a registry, your CI system, your coworkers, and your production servers can all pull the same artifact—and deployment becomes repeatable, not improvisational.
CI/CD works best when it can treat your application like a single, repeatable “thing” that moves forward through stages. Containers provide exactly that: one packaged artifact (the image) that you can build once and run many times, with far fewer “it worked on my machine” surprises.
Before containers, teams often tried to match environments with long setup docs and shared scripts. Docker changed the default workflow: pull the repo, build an image, run the app. The same commands tend to work across macOS, Windows, and Linux because the application runs inside the container.
That standardization speeds up onboarding. New teammates spend less time installing dependencies and more time understanding the product.
A strong CI/CD setup aims for a single pipeline output. With containers, the output is an image tagged with a version (often tied to a commit SHA). That same image is promoted from dev → test → staging → production.
Instead of rebuilding the app differently per environment, you change configuration (like environment variables) while keeping the artifact identical. This reduces environment drift and makes releases easier to debug.
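A minimal sketch of that idea, assuming the app reads its settings from environment variables (the variable names and URLs below are examples, not from this article):
# Same image, different configuration per environment:
docker run --rm -e APP_ENV=staging -e DATABASE_URL=postgres://staging-db/app myapp:1.4.2
docker run --rm -e APP_ENV=production -e DATABASE_URL=postgres://prod-db/app myapp:1.4.2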
Containers map cleanly to pipeline steps: build the image, run tests against it, push it to a registry, and deploy that exact image to each environment.
Because each step runs against the same packaged app, failures are more meaningful: a test that passed in CI is more likely to behave the same after deployment.
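A rough CI sketch of those steps (commands and the registry path are illustrative; your pipeline tool will wrap them differently):
# Build once, test the built image, then publish it for later stages:
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t registry.example.com/team/myapp:sha-$GIT_SHA .
docker run --rm registry.example.com/team/myapp:sha-$GIT_SHA npm test
docker push registry.example.com/team/myapp:sha-$GIT_SHA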
If you’re refining your process, it’s also worth setting simple rules (tagging conventions, image signing, basic scanning) so the pipeline stays predictable. You can expand from there as your team grows (see /blog/common-mistakes-and-how-to-avoid-them).
Where this connects to modern “vibe-coding” workflows: platforms like Koder.ai can generate and iterate on full-stack apps (React on the web, Go + PostgreSQL on the backend, Flutter for mobile) through a chat interface—but you still need a reliable packaging unit to move from “it runs” to “it ships.” Treating every build as a versioned container image keeps even AI-accelerated development aligned with the same CI/CD expectations: reproducible builds, predictable deploys, and rollback-ready releases.
Docker made it practical to package an app once and run it anywhere. The next challenge showed up quickly: teams didn’t run one container on one laptop—they ran dozens (then hundreds) of containers across many machines, with versions changing constantly.
At that point, “starting a container” stops being the hard part. The hard part becomes managing a fleet: deciding where each container should run, keeping the right number of copies online, and recovering automatically when things fail.
When you have many containers across many servers, you need a system that can coordinate them. That’s what container orchestrators do: they treat your infrastructure like a pool of resources and continually work to keep your applications in the desired state.
Kubernetes became the most common answer for this need (though not the only one). It provides a shared set of concepts and APIs that many teams and platforms have standardized on.
It helps to separate responsibilities: Docker (and other container runtimes) focus on building images and running individual containers, while an orchestrator decides where containers run across a fleet, keeps the right number of copies online, and recovers when something fails.
Kubernetes introduced (and popularized) a few practical capabilities that teams needed once containers moved beyond a single host: scheduling containers across machines, keeping the desired number of replicas running, restarting failed workloads automatically, and rolling out new versions gradually.
In short, Docker made the unit portable; Kubernetes helped make it operable—predictably and continuously—when there are many units in motion.
Containers didn’t just change how we deploy software—they also nudged teams to design software differently.
Before containers, splitting an app into many small services often meant multiplying operational pain: different runtimes, conflicting dependencies, and complicated deployment scripts. Containers reduced that friction. If every service ships as an image and runs the same way, creating a new service feels less risky.
That said, containers also work well for monoliths. A monolith in a container can be simpler than a half-finished microservices migration: one deployable unit, one set of logs, one scaling lever. Containers don’t force a style—they make multiple styles more manageable.
Container platforms encouraged apps to behave like well-behaved “black boxes” with predictable inputs and outputs. Common conventions include reading configuration from environment variables, writing logs to standard output, exposing a health-check endpoint, and shutting down cleanly when asked to stop.
These interfaces made it easier to swap versions, roll back, and run the same app across laptops, CI, and production.
Containers popularized repeatable building blocks such as sidecars (a helper container alongside the main app for logging, proxies, or certificates). They also reinforced the guideline of one process per container—not a hard rule, but a helpful default for clarity, scaling, and troubleshooting.
The main trap is over-splitting. Just because you can turn everything into a service doesn’t mean you should. If a “microservice” adds more coordination, latency, and deployment overhead than it removes, keep it together until there’s a clear boundary—like different scaling needs, ownership, or failure isolation.
Containers make software easier to ship, but they don’t magically make it safer. A container is still just code plus dependencies, and it can be misconfigured, outdated, or outright malicious—especially when images are pulled from the internet with minimal scrutiny.
If you can’t answer “Where did this image come from?” you’re already taking a risk. Teams often move toward a clear chain of custody: build images in controlled CI, sign or attest what was built, and keep a record of what went into the image (dependencies, base image version, build steps).
That’s also where SBOMs (Software Bills of Materials) help: they make your container’s contents visible and auditable.
Scanning is the next practical step. Regularly scan images for known vulnerabilities, but treat results as inputs to decisions—not as a guarantee of safety.
A frequent mistake is running containers with overly broad permissions—root user by default, extra Linux capabilities, host networking, or privileged mode “because it works.” Each of these widens the blast radius if something goes wrong.
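A hedged example of tightening those defaults at run time (these are standard docker run options; whether your app tolerates them is something to verify):
# Run as a non-root user, drop capabilities, and keep the filesystem read-only:
docker run --rm \
  --user 10001:10001 \
  --cap-drop ALL \
  --read-only \
  --security-opt no-new-privileges:true \
  myapp:1.4.2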
Secrets are another trap. Environment variables, baked-in config files, or committed .env files can leak credentials. Prefer secret stores or orchestrator-managed secrets and rotate them as if exposure is inevitable.
Even “clean” images can be dangerous at runtime. Watch for exposed Docker sockets, overly permissive volume mounts, and containers that can reach internal services they don’t need.
Also remember: patching your host and kernel still matters—containers share the kernel.
Think in four phases: build (controlled CI, known base images), verify (signatures, SBOMs, vulnerability scans), deploy (least privilege, managed secrets), and run (runtime monitoring plus host and kernel patching).
Containers reduce friction—but trust still has to be earned, verified, and continuously maintained.
Docker makes packaging predictable, but only if you use it with a bit of discipline. Many teams hit the same potholes—then blame “containers” for what are really workflow issues.
A classic mistake is building huge images: using full OS base images, installing build tools you don’t need at runtime, and copying the entire repo (including tests, docs, and node_modules). The result is slow downloads, slow CI, and more surface area for security issues.
Another common issue is slow, cache-busting builds. If you copy your whole source tree before installing dependencies, every small code change forces a full dependency reinstall.
Finally, teams often use unclear or floating tags like latest or prod. That makes rollbacks painful and turns deployments into guesswork.
When a container runs fine locally but misbehaves in production, it usually comes down to differences in configuration (missing env vars or secrets), networking (different hostnames, ports, proxies, DNS), or storage (data written to the container filesystem instead of a volume, or file permissions that differ between environments).
Use slim base images when possible (or distroless if your team is ready). Pin versions for base images and key dependencies so builds are repeatable.
Adopt multi-stage builds to keep compilers and build tools out of the final image:
# Build stage: install dependencies and compile the app
FROM node:20 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# Runtime stage: a slim image containing only the built output
FROM node:20-slim
WORKDIR /app
COPY --from=build /app/dist ./dist
CMD ["node","dist/server.js"]
Also, tag images with something traceable, like a git SHA (and optionally a human-friendly release tag).
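For instance (a small shell sketch; tag names are examples), you can apply both a commit-based tag and a human-friendly release tag to the same build:
# Tag the build with the commit it came from, plus a release tag:
GIT_SHA=$(git rev-parse --short HEAD)
docker build -t myapp:sha-$GIT_SHA -t myapp:1.4.2 .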
If an app is truly simple (a single static binary, run rarely, no scaling needs), containers may add overhead. Legacy systems with tight OS coupling or specialized hardware drivers can also be poor fits—sometimes a VM or managed service is the cleaner choice.
Containers became the default unit because they solved a very specific, repeatable pain: getting the same app to run the same way across laptops, test servers, and production. Packaging the app and its dependencies together made deployments faster, rollbacks safer, and handoffs between teams less fragile.
Just as important, containers standardized the workflow: build once, ship, run.
“Default” doesn’t mean everything runs in Docker everywhere. It means most modern delivery pipelines treat a container image as the primary artifact—more than a zip file, VM snapshot, or set of manual setup steps.
That default usually includes three pieces working together: an image as the build artifact, a registry to store and distribute it, and a pipeline or orchestrator that runs the same image in every environment.
Start small and focus on repeatability.
Add a .dockerignore early.
Use consistent, traceable tags (for example 1.4.2, main, sha-…) and define who can push vs. pull.
If you’re experimenting with faster ways to build software (including AI-assisted approaches), keep the same discipline: version the image, store it in a registry, and make deployments promote that single artifact forward. That’s one reason teams using Koder.ai still benefit from container-first delivery—rapid iteration is great, but reproducibility and rollback are what make it safe.
Containers reduce “works on my machine” problems, but they don’t replace good operational habits. You still need monitoring, incident response, secrets management, patching, access control, and clear ownership.
Treat containers as a powerful packaging standard—not a shortcut around engineering discipline.
Solomon Hykes is an engineer who led the work that turned OS-level isolation (containers) into a developer-friendly workflow. In 2013, that work was released publicly as Docker, which made it practical for everyday teams to package an app with its dependencies and run it consistently across environments.
Containers are the underlying concept: isolated processes using OS features (like namespaces and cgroups on Linux). Docker is the tooling and conventions that made containers easy to build, run, and share (e.g., Dockerfile → image → container). In practice, you can use containers without Docker today, but Docker popularized the workflow.
It solved “works on my machine” by bundling application code and its expected dependencies into a repeatable, portable unit. Instead of deploying a ZIP plus setup instructions, teams deploy a container image that can run the same way on laptops, CI, staging, and production.
A Dockerfile is the build recipe.
An image is the built artifact (immutable snapshot you can store and share).
A container is a running instance of that image (a live process with isolated filesystem/settings).
Avoid latest because it’s ambiguous and can change without warning, causing drift between environments.
Better options: explicit version tags (like 1.4.2) or commit-based tags (like sha-<hash>).
A registry is where you store container images so other machines and systems can pull the exact same build.
Typical workflow: build an image in CI, tag it, push it to the registry, then pull that exact tag wherever the app needs to run.
For most teams, a private registry matters for access control, compliance, and keeping internal code out of public indexes.
Containers share the host OS kernel, so they’re typically lighter and start faster than VMs.
A simple mental model: a VM is like a separate apartment with its own operating system, while a container is like a locked room in a shared building that uses the host’s kernel.
One practical limit: you generally can’t run Windows containers on a Linux kernel (and vice versa) without additional virtualization.
They let you produce a single pipeline output: the image.
A common CI/CD pattern: build the image once, tag it with the commit, run tests against it, push it to a registry, then promote that same image from dev through test, staging, and production.
You change configuration (env vars/secrets) per environment, not the artifact itself, which reduces drift and makes rollbacks easier.
Docker made “run this container” easy on one machine. At scale, you also need to schedule containers across many machines, keep the right number of copies running, recover automatically from failures, and roll out new versions without downtime.
Kubernetes provides those capabilities so fleets of containers can be operated predictably across many machines.
Containers improve packaging consistency, but they don’t automatically make software safe.
Practical basics: know where your images come from, scan them for known vulnerabilities, keep secrets out of the image, and run with least privilege (avoid privileged, minimize capabilities, don’t run as root when possible).
For common workflow pitfalls (huge images, cache-busting builds, unclear tags), see also: /blog/common-mistakes-and-how-to-avoid-them