OS Lab 10 - Containers and Docker
Objectives
Upon completion of this lab, you will be able to:
- Explain how containers use Linux kernel namespaces (PID, mount, network, UTS, IPC, user, cgroup) to provide process isolation without requiring separate operating systems or hypervisors.
- Differentiate between container images (immutable filesystem templates) and running containers (ephemeral process instances in isolated namespaces).
- Understand how OverlayFS provides efficient layered filesystems that allow multiple containers to share common base layers while maintaining separate writable layers.
- Apply cgroup resource limits to constrain container CPU, memory, and I/O usage, preventing resource monopolization.
- Configure Docker networking using custom bridges, port mapping (NAT), and container-to-container communication with automatic DNS resolution.
- Use volumes and bind mounts to persist data beyond container lifecycles and share data between containers and the host.
- Build multi-container applications with coordinated networking and resource management, demonstrating modern microservices architecture patterns.
- Connect container concepts to previous labs: relate network namespaces to Lab 7, mount namespaces to Lab 2, PID namespaces to Lab 3, and Docker networking to Labs 7-9.
Introduction
From Manual Isolation to Automated Containers
In Labs 7-9, you manually constructed network isolation using Linux kernel primitives. You created virtual network interfaces with ip link add type veth, configured bridges with ip link add type bridge, established isolated network stacks with ip netns add, and set up routing and NAT rules. Through this hands-on work, you gained deep insight into how the Linux kernel provides network isolation at the namespace level.
Every time you executed sudo ip netns exec red ping 10.0.0.3, you were demonstrating a fundamental concept: the kernel can create completely isolated environments where processes see only a subset of system resources. The red namespace had its own network interfaces, its own routing table, and its own firewall rules—completely invisible to processes running in other namespaces or on the host.
This isolation is powerful, but network namespaces are just one of seven namespace types that the Linux kernel provides. To fully isolate an application and create what we call a "container," you need:
- Network namespace (net): Isolated network stack—you've mastered this in Labs 7-9
- PID namespace (pid): Isolated process tree—each namespace has its own PID 1
- Mount namespace (mnt): Isolated filesystem view—different root directory from the host
- UTS namespace (uts): Isolated hostname—each container can have its own hostname
- IPC namespace (ipc): Isolated inter-process communication—shared memory, semaphores, message queues
- User namespace (user): Isolated UIDs/GIDs—security boundary for privilege separation
- Cgroup namespace (cgroup): Isolated view of the control group hierarchy
Additionally, you need:
- Control groups (cgroups) to limit CPU, memory, and I/O resources
- Copy-on-write filesystems (OverlayFS) for efficient storage
- Image management for distributing application packages
- Orchestration for managing the lifecycle of multiple isolated environments
Manually setting up all of these components for every application would require hundreds of commands and deep kernel knowledge. This is the problem that containerization technology solves.
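To get a feel for what "manual" means, here is a minimal sketch using the unshare tool from util-linux (an illustration of the isolation step only, not what Docker literally runs; the hostname is a placeholder):
# Start a shell in new UTS, PID, mount, IPC, and network namespaces.
# --fork makes the shell PID 1 of the new PID namespace;
# --mount-proc remounts /proc so tools like ps see only this namespace.
sudo unshare --uts --pid --mount --ipc --net --fork --mount-proc bash
# Inside the new namespaces:
hostname demo-container   # changes the hostname only in this UTS namespace
ps aux                    # shows only the processes in the new PID namespace
ip link show              # shows only a lone (down) loopback interface
exit
Even this covers only the isolation step—you would still need to prepare a root filesystem and pivot_root into it, wire up veth pairs and a bridge, and configure cgroup limits, which is exactly the work Docker automates.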
What are Containers?
A container is an isolated process (or process tree) that uses Linux kernel features—namespaces, cgroups, and layered filesystems—to provide the illusion of running in a separate system. Containers package an application with its dependencies, libraries, and configuration into a single unit that can run consistently across different environments.
Containers are not a specific product or tool—they are a pattern for using kernel isolation features. Multiple container runtimes exist, each implementing this pattern:
Container Runtimes:
- Docker: The most widely adopted container platform, providing a complete ecosystem (daemon, CLI, image format, registry)
- Podman: Daemonless container engine, compatible with Docker images and commands, can run rootless
- containerd: Industry-standard container runtime, used by Kubernetes and Docker (as of Docker 1.11+)
- CRI-O: Lightweight container runtime built for Kubernetes, implementing the Container Runtime Interface (CRI)
- LXC/LXD: System containers that more closely resemble traditional VMs, providing full init systems
What Container Runtimes Provide:
- Namespace Management: Automatically create and configure all necessary namespace types
- Filesystem Layers: Use OverlayFS or similar copy-on-write filesystems for efficient storage
- Network Configuration: Set up bridges, veth pairs, and NAT rules automatically
- Resource Control: Configure cgroups to enforce CPU and memory limits
- Image Distribution: Provide standard formats (OCI) for packaging and distributing applications
- Lifecycle Management: Offer commands to create, start, stop, and remove containers
Docker as a Container Runtime
In this lab, we use Docker because it is the most widely deployed and well-documented container runtime. Docker's architecture consists of:
- Docker Engine (dockerd): Daemon that manages containers
- Docker CLI (docker): Command-line interface for interacting with the daemon
- containerd: Lower-level runtime that Docker uses internally
- runc: OCI-compliant runtime that actually creates and runs containers
When you execute docker run nginx, Docker performs a long sequence of kernel operations—creating namespaces, mounting filesystem layers, setting up networking, and starting the process—all operations you could do manually, but that would require significant time and expertise.
The concepts you learn with Docker apply to all container runtimes, as they all use the same underlying kernel features. The commands may differ (e.g., podman run instead of docker run), but the fundamental mechanisms remain the same.
The Shift in Software Deployment
Containers represent a fundamental paradigm shift in how we think about software deployment and infrastructure management.
Traditional Deployment Model (Pre-Container Era):
1. Provision a server (physical or virtual machine)
2. Install operating system (Ubuntu, RHEL, etc.)
3. Install runtime dependencies (Python 3.9, Node.js 16, specific library versions)
4. Configure environment variables, users, permissions
5. Deploy application code
6. Configure monitoring, logging, security
7. Hope everything works the same as on your development machine
8. Troubleshoot when it doesn't ("works on my machine" problem)
This model suffers from several critical issues:
- Dependency Hell: Different applications require different, potentially conflicting library versions
- Configuration Drift: Development, staging, and production environments gradually diverge
- Snowflake Servers: Each server becomes unique and unreproducible
- Slow Deployment: Setting up a new environment can take hours or days
- Poor Resource Utilization: Applications can't share servers due to dependency conflicts
Container-Based Deployment Model:
1. Developer creates Dockerfile specifying exact environment
2. Build process creates immutable container image with all dependencies
3. Image tested in CI/CD pipeline (identical to production)
4. Image deployed to any server with Docker installed
5. Container starts in seconds with guaranteed-identical environment
6. Multiple isolated applications run on same host without conflicts
This model provides:
- Immutable Infrastructure: Images never change after building; deploy new versions rather than modifying running systems
- Reproducibility: Development environment = testing environment = production environment
- Portability: "Build once, run anywhere" (laptop, data center, cloud)
- Efficiency: Run 10-100 containers on a single host (vs. 5-10 VMs)
- Rapid Deployment: Start containers in milliseconds vs. minutes for VMs
- Microservices Architecture: Enables decomposing monoliths into independently deployable services
This shift has revolutionized software engineering, enabling modern DevOps practices, continuous deployment, and cloud-native architectures. Companies like Netflix, Uber, and Airbnb run millions of containers to serve billions of requests daily.
Containers vs Virtual Machines
Understanding the architectural difference between containers and virtual machines is crucial for appreciating why containers have become the dominant deployment model.
Virtual Machine Architecture:
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ App A │ │ App B │ │ App C │
├─────────────┤ ├─────────────┤ ├─────────────┤
│ Libraries │ │ Libraries │ │ Libraries │
├─────────────┤ ├─────────────┤ ├─────────────┤
│ Guest OS │ │ Guest OS │ │ Guest OS │
│ (Kernel) │ │ (Kernel) │ │ (Kernel) │
└─────────────┘ └─────────────┘ └─────────────┘
─────────────────────────────────────────────────
Hypervisor (VMware, KVM, Xen)
─────────────────────────────────────────────────
Host Operating System & Kernel
─────────────────────────────────────────────────
Hardware
Characteristics:
- Each VM runs a complete guest operating system with its own kernel
- Hypervisor emulates hardware, providing virtual CPUs, RAM, disks, NICs
- Strong isolation (separate kernels mean vulnerabilities in one VM don't affect others)
- Heavy resource consumption (each OS kernel needs 1-2GB RAM)
- Slow startup (boot entire OS: 30-60 seconds)
- Large disk footprint (each VM stores complete OS: 1-10GB)
Container Architecture:
┌─────────────┐ ┌─────────────┐ ┌─────────────┐
│ App A │ │ App B │ │ App C │
├─────────────┤ ├─────────────┤ ├─────────────┤
│ Libraries │ │ Libraries │ │ Libraries │
└─────────────┘ └─────────────┘ └─────────────┘
─────────────────────────────────────────────────
Container Runtime (Docker Engine)
─────────────────────────────────────────────────
Host Operating System & Shared Kernel
─────────────────────────────────────────────────
Hardware
Characteristics:
- All containers share the host's kernel (no guest OS needed)
- Container runtime manages namespace and cgroup isolation
- Lightweight isolation (namespaces separate processes, but they're still just processes)
- Minimal resource overhead (containers use only incremental memory beyond their application)
- Fast startup (start process in isolated namespace: milliseconds)
- Small disk footprint (layered filesystem shares common base images: 10-100MB incremental)
The Trade-off:
Virtual machines provide stronger security isolation at the cost of resource efficiency. If your threat model requires complete kernel isolation (e.g., multi-tenant cloud providers hosting untrusted code), VMs are appropriate.
Containers provide lighter-weight isolation with much better resource efficiency. If you're running your own applications on your own infrastructure, containers are usually the right choice. You can run 10-100 containers on hardware that would support only 5-10 VMs.
An Important Insight: When you run ps aux on the host, you see container processes. They're not hidden inside separate kernels like VM processes would be. This demonstrates that containers are just isolated processes on the host—the kernel makes them appear isolated through namespaces, but they're fundamentally just processes.
This is both a feature (efficiency, observability) and a constraint (shared kernel means a kernel vulnerability could affect all containers). Modern container security practices use defense-in-depth: namespaces + cgroups + seccomp + AppArmor/SELinux + user namespaces to create multiple security layers.
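Once Docker is installed (Exercise A), you can see this for yourself; a quick sketch, where the container name is arbitrary:
# Start a container, then look for its process from the host
docker run -d --name peek nginx
ps aux | grep "nginx: master"
# The nginx master process appears in the host's process list—it is an
# ordinary host process, merely wrapped in namespaces.
docker rm -f peek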
Prerequisites
System Requirements
- Operating System: Linux-based system (Ubuntu 20.04+ recommended, but Debian, Fedora, CentOS also supported)
- RAM: Minimum 2GB, 4GB recommended for comfortable operation
- Disk Space: At least 20GB free (Docker images and container layers consume significant space)
- CPU: Any modern x86_64 or ARM64 processor
- Privileges: Root access via sudo (required for Docker installation and initial setup)
- Kernel: Minimum Linux kernel 3.10 (kernel 4.0+ recommended for full feature support)
Check your kernel version:
uname -r
If below 3.10, you'll need to update your kernel before proceeding.
Check available disk space:
df -h /var/lib/docker
Docker stores images and containers in /var/lib/docker by default. Ensure you have sufficient space.
Required Packages
Before beginning, ensure the following packages are installed:
sudo apt update
sudo apt install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release \
bridge-utils \
net-tools
Package descriptions:
- apt-transport-https: Enables apt to retrieve packages over HTTPS
- ca-certificates: Common CA certificates for SSL verification
- curl: Command-line tool for transferring data (used to download Docker's GPG key)
- gnupg: GNU Privacy Guard for verifying package signatures
- lsb-release: Provides Linux Standard Base version information (used to detect Ubuntu version)
- bridge-utils: Tools for managing bridge devices (brctl command)
- net-tools: Legacy networking tools (ifconfig, netstat) for compatibility
Knowledge Prerequisites
This lab builds directly on concepts from previous labs. You should be comfortable with:
From Lab 2 (Filesystems):
- Filesystem hierarchy and directory structure
- Mount points and the mount command
- Understanding of the purposes of /, /etc, /var, /usr
- File permissions and ownership (chmod, chown)
- Symbolic links and filesystem navigation
From Lab 3 (Processes and Jobs):
- Process IDs (PIDs) and process hierarchy
- Parent-child process relationships
- ps aux command and process listing
- Foreground vs. background processes
- Process signals (SIGTERM, SIGKILL)
From Lab 4 (Users, Groups, and Permissions):
- User IDs (UIDs) and group IDs (GIDs)
- Root vs. unprivileged users
- sudo for privilege escalation
- File and directory permissions (read, write, execute)
- Security implications of running processes as root
From Lab 7 (Network Fundamentals):
- Network interfaces and IP addresses
- Bridges and virtual ethernet (veth) pairs
- Subnets and CIDR notation
- Routing tables and default gateways
- Network Address Translation (NAT)
- Network namespaces (ip netns add, ip netns exec)
This is crucial—Docker networking uses the exact same mechanisms you built manually.
From Lab 8 (Transport and Security):
- TCP and UDP protocols
- Port numbers and socket addresses (IP:PORT)
- Client-server communication model
- The concept of listening vs. connecting
- TLS/SSL (Docker can use HTTPS for secure image distribution)
From Lab 9 (Application Protocols):
- HTTP/HTTPS protocols
- Reverse proxies and path-based routing
- DNS and hostname resolution
- Caddy web server configuration
- Multi-tier application architecture
You should also be comfortable with:
- Command-line text manipulation (grep, awk, cut, sed)
- Basic bash scripting (variables, loops, conditionals)
- Using multiple terminal windows simultaneously
Theoretical Background
Linux Namespaces: The Foundation of Containers
In Labs 7-9, you worked extensively with network namespaces—one of seven namespace types provided by the Linux kernel. Every time you executed:
sudo ip netns add red
sudo ip netns exec red ip addr show
You were creating an isolated network environment and executing commands within that isolated environment. The processes running in the red namespace could not see network interfaces, routes, or connections in other namespaces. This isolation is the fundamental mechanism that makes containers possible.
Docker extends this concept to six additional namespace types, providing complete process isolation. Understanding namespaces is essential to understanding containers—they are not optional background knowledge, but rather the core technology that defines what a container is.
The Seven Namespace Types
1. Network Namespace (net)
You know this namespace intimately from Labs 7-9. When you created the red and blue namespaces, you were creating isolated network stacks.
What it isolates:
- Network interfaces (lo, eth0, wlan0, etc.)
- IP addresses (each namespace has its own IPs)
- Routing tables (ip route show output differs per namespace)
- Firewall rules (iptables/nftables rules are namespace-specific)
- Network sockets and ports (multiple processes in different namespaces can bind to the same port number)
Connection to Lab 7:
In Lab 7, you manually created network namespaces:
sudo ip netns add red # Create isolated network stack
sudo ip netns exec red ip link show # View interfaces in that namespace
Docker does exactly this when you run a container, but also creates six other namespace types simultaneously.
Example implications:
- Container A can listen on port 80, Container B can listen on port 80—no conflict
- Container A cannot see Container B's network connections (ss -tuna shows only its own)
- Each container has its own localhost (127.0.0.1) that's separate from the host
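The first implication is easy to verify once Docker is installed (a sketch; the container names and IP addresses shown are examples):
# Two containers, both listening on port 80 inside their own network namespaces
docker run -d --name web1 nginx
docker run -d --name web2 nginx
docker inspect web1 --format '{{.NetworkSettings.IPAddress}}'   # e.g. 172.17.0.2
docker inspect web2 --format '{{.NetworkSettings.IPAddress}}'   # e.g. 172.17.0.3
# From the host, each is reachable on its own port 80—no conflict
curl -s http://172.17.0.2 | head -n 4
curl -s http://172.17.0.3 | head -n 4
docker rm -f web1 web2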
2. PID Namespace (pid)
The PID namespace isolates the process ID number space. This is related to what you learned in Lab 3 about process management.
What it isolates:
- Process IDs—processes in different PID namespaces can have the same PID
- Process visibility—processes can only see other processes in the same PID namespace
- PID 1 (init process)—each PID namespace has its own PID 1
How it works:
The kernel maintains a separate process tree for each PID namespace. When you create a PID namespace and start a process in it:
- Inside the namespace, that process is PID 1 (like an init system)
- Outside the namespace, that same process has a different PID (e.g., 12345)
- Processes inside cannot see processes outside their namespace tree
Example:
# On the host
ps aux | wc -l
# Output: 237 processes
# Inside a container
docker exec container ps aux | wc -l
# Output: 5 processes
The container's processes think they're the only processes on the system. They cannot see the host's other 232 processes.
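A quick way to see both views of the same process (a sketch; the container name is arbitrary):
docker run -d --name pidtest nginx
# Inside the container's PID namespace, the nginx master process is PID 1:
docker exec pidtest cat /proc/1/comm
# Output: nginx
# On the host, that same process has an ordinary, much larger PID:
docker inspect pidtest --format '{{.State.Pid}}'
# Output: e.g. 12345 — one process, two PID numbers, one per namespace
docker rm -f pidtest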
3. Mount Namespace (mnt)
The mount namespace isolates the filesystem mount table. This relates directly to Lab 2 where you learned about filesystems and mounting.
What it isolates:
- Mount points—what is mounted at /, /tmp, /var, etc.
- Root filesystem—each namespace can have a completely different root directory
- Mount propagation—mounts in one namespace don't affect others (by default)
How it works:
When you create a mount namespace, the new namespace inherits a copy of the parent's mount table. But subsequent mounts/unmounts in the child don't affect the parent.
Containers use this to provide a completely different filesystem:
# On host (Ubuntu)
ls /
bin boot dev etc home lib ...
# In container (Fedora)
docker exec fedora ls /
bin boot dev etc home lib ... # Different files!
Both see /etc, but they're seeing different directories. The container's /etc/os-release shows Fedora, while the host's shows Ubuntu.
Technical implementation:
Docker uses OverlayFS (covered later) to construct the container's root filesystem from image layers, then uses pivot_root or chroot to make that directory appear as / to the container's processes.
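You can reproduce the mount-isolation part without Docker (a minimal sketch with unshare; the mount point /mnt is just an example):
# Start a shell in a new mount namespace
sudo unshare --mount bash
# Inside: mount a tmpfs; this changes only this namespace's mount table
mount -t tmpfs tmpfs /mnt
findmnt /mnt    # visible here
exit
findmnt /mnt    # back on the host: nothing—the host's mount table was never touched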
4. UTS Namespace (uts)
UTS stands for "Unix Timesharing System"—a historical name. The UTS namespace isolates hostname and domain name.
What it isolates:
- System hostname (hostname command output)
- Domain name (NIS domain name)
How it works:
Each UTS namespace can set its own hostname independently of other namespaces.
# On host
hostname
# Output: myserver.example.com
# In container
docker exec container hostname
# Output: a1b2c3d4e5f6 (container ID)
Containers typically use the container ID as hostname by default, but you can override with --hostname:
docker run --hostname=webserver nginx
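A one-line verification of UTS isolation (a sketch; alpine is used only because the image is small):
docker run --rm --hostname=webserver alpine hostname
# Output: webserver
hostname
# Output: your host's real hostname—unchanged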
5. IPC Namespace (ipc)
The IPC namespace isolates System V Inter-Process Communication resources. This relates to Lab 6 where you learned about IPC mechanisms.
What it isolates:
- System V message queues
- System V semaphore sets
- System V shared memory segments
- POSIX message queues (in /dev/mqueue)
Connection to Lab 6:
In Lab 6, you learned that processes can communicate via shared memory, message queues, and semaphores. The IPC namespace ensures that processes in different namespaces cannot access each other's IPC objects, even if they use the same IPC keys.
Example:
# Container A creates a shared memory segment
docker exec containerA ipcmk -M 1024
# Container B lists shared memory segments
docker exec containerB ipcs -m
# Output: no segments listed (different IPC namespace)
6. User Namespace (user)
The user namespace isolates user IDs (UIDs) and group IDs (GIDs). This is the most complex namespace and provides significant security benefits.
What it isolates:
- User IDs—UID 0 inside namespace can map to UID 100000 outside
- Group IDs—similar mapping for GIDs
- Capabilities—process can have capabilities inside namespace but not outside
- Security attributes—AppArmor/SELinux contexts
How it works:
User namespaces allow UID/GID mapping. A process can be root (UID 0) inside the namespace but unprivileged (e.g., UID 100000) outside.
Inside Container: UID 0 (root)
↓ mapping
Outside Container: UID 100000 (unprivileged)
Security benefit:
Even if an attacker compromises a container and gains root privileges inside the container, they're still unprivileged on the host. If they escape the container, they cannot access files owned by actual root.
Docker's approach:
By default, many Docker configurations share the host's user namespace (for simplicity and compatibility). Rootless Docker and user-remapped Docker use separate user namespaces for enhanced security.
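You can try the UID-mapping idea directly with unshare, independent of Docker (a sketch):
# Start a shell in a new user namespace, mapping your UID to root inside it
unshare --user --map-root-user bash
id -u                   # Output: 0 — you appear to be root inside the namespace
touch /etc/test-file    # Permission denied — on the host you are still unprivileged
exit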
7. Cgroup Namespace (cgroup)
The cgroup namespace isolates the view of the cgroup hierarchy. Note this is different from cgroups themselves (which we'll cover separately).
What it isolates:
- View of /proc/self/cgroup
- View of the /sys/fs/cgroup hierarchy
- Ability to see other containers' resource constraints
How it works:
Without cgroup namespaces, a process can read /proc/self/cgroup and see the full path to its cgroup, revealing information about the container orchestration system.
With cgroup namespaces, the process sees itself at the root of the cgroup tree, hiding the real hierarchy.
Note: This namespace isolates the view of cgroups, not the enforcement of resource limits. Resource limits are enforced by cgroups themselves (covered in section 4.5).
Namespace Identifiers: Understanding /proc/PID/ns/
Every process has a directory at /proc/PID/ns/ containing symbolic links to namespace identifiers. These links reveal which namespaces the process belongs to.
Examining namespace identifiers:
sudo ls -la /proc/$$/ns/
Example output:
lrwxrwxrwx 1 root root 0 Dec 12 10:30 cgroup -> 'cgroup:[4026531835]'
lrwxrwxrwx 1 root root 0 Dec 12 10:30 ipc -> 'ipc:[4026531839]'
lrwxrwxrwx 1 root root 0 Dec 12 10:30 mnt -> 'mnt:[4026531840]'
lrwxrwxrwx 1 root root 0 Dec 12 10:30 net -> 'net:[4026531992]'
lrwxrwxrwx 1 root root 0 Dec 12 10:30 pid -> 'pid:[4026531836]'
lrwxrwxrwx 1 root root 0 Dec 12 10:30 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Dec 12 10:30 uts -> 'uts:[4026531838]'
Understanding the format:
Each symlink follows the pattern: namespace_type:[inode_number]
The inode number is the critical piece of information. It uniquely identifies that specific namespace instance. Think of it as a "namespace ID."
Key principle: Same inode = Shared namespace, Different inode = Isolated namespace
Example from Lab 7:
When you created network namespaces in Lab 7, each had a unique network namespace inode:
# Host process
sudo ls -la /proc/$$/ns/net
net -> 'net:[4026531992]' # Host's network namespace
# Process in red namespace
sudo ip netns exec red ls -la /proc/$$/ns/net
net -> 'net:[4026532145]' # Different inode = isolated!
# Process in blue namespace
sudo ip netns exec blue ls -la /proc/$$/ns/net
net -> 'net:[4026532147]' # Also different = also isolated!
Comparing container to host:
# Get container's PID on host
docker inspect container --format '{{.State.Pid}}'
# Example output: 12345
# View container's namespaces
sudo ls -la /proc/12345/ns/net
# Output: net -> 'net:[4026533672]'
# View host's namespace
sudo ls -la /proc/$$/ns/net
# Output: net -> 'net:[4026531992]'
# Different inodes! Container is isolated.
Namespace sharing:
Sometimes containers intentionally share namespaces. For example:
docker run --network=host nginx
This container shares the host's network namespace:
sudo ls -la /proc/CONTAINER_PID/ns/net
# Output: net -> 'net:[4026531992]' # Same as host!
Using namespace identifiers:
These symlinks aren't just informational—you can actually use them to enter namespaces:
sudo nsenter --net=/proc/12345/ns/net ip addr show
This command enters the network namespace of PID 12345 and runs ip addr show inside that namespace. This is how docker exec works under the hood—it uses nsenter to join the container's namespaces.
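docker exec effectively joins several namespaces at once; you can approximate it yourself with nsenter (a sketch; 12345 is the example PID from above, and bash must exist inside the container's filesystem):
# Join the container's mount, UTS, IPC, network, and PID namespaces and start a shell
sudo nsenter --target 12345 --mount --uts --ipc --net --pid bash
hostname    # the container's hostname
ps aux      # only the container's processes (its /proc is now your /proc)
exit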
Summary table:
| Namespace Type | What It Isolates | Lab Connection |
|---|---|---|
| net | Network stack (interfaces, IPs, routes, ports) | Lab 7: You built this manually! |
| pid | Process IDs and process tree | Lab 3: Process management |
| mnt | Filesystem mounts and root directory | Lab 2: Mounting and filesystems |
| uts | Hostname and domain name | - |
| ipc | Shared memory, message queues, semaphores | Lab 6: IPC mechanisms |
| user | User and group IDs, capabilities | Lab 4: Users and permissions |
| cgroup | View of cgroup hierarchy | - |
Container Images vs Running Containers
This distinction is fundamental to understanding Docker and is often a source of confusion.
Container Image:
A container image is a read-only template consisting of:
- A root filesystem: All files and directories that will appear in the container (applications, libraries, configuration files)
- Metadata: Information about how to run the container (default command, environment variables, exposed ports, volumes)
- Layers: The filesystem is composed of multiple read-only layers (explained in section 4.4)
Characteristics:
- Immutable: Once built, an image never changes
- Shareable: Multiple containers can use the same image
- Versionable: Images can have tags (e.g., nginx:1.21, nginx:latest)
- Distributable: Images can be pushed to/pulled from registries (Docker Hub, private registries)
- Stored on disk: Images consume storage even when not running
Running Container:
A running container is an instance of an image—a process (or process tree) running in isolated namespaces with its own writable filesystem layer.
Characteristics:
- Ephemeral: State is lost when the container is removed (unless using volumes)
- Mutable: Can make changes inside the container (install packages, create files)
- Process-based: A container is fundamentally just a Linux process in isolated namespaces
- Short-lived: Containers are typically created, used, and destroyed frequently
- Stateful during runtime: Maintains state while running, but that state disappears on removal
Example to illustrate:
# Pull an image (download the template)
docker pull nginx
# The image now exists on disk
docker images
# Output shows: nginx latest a1b2c3d4 100MB
# Start first container from this image
docker run -d --name web1 nginx
# Start second container from the same image
docker run -d --name web2 nginx
# Both containers share the same base image filesystem
# But each has its own writable layer and separate namespaces
Now you have:
- One image (nginx:latest) on disk
- Two containers (web1 and web2) running as separate processes
- Each container has its own PID namespace, network namespace, etc.
- Changes in web1 don't affect web2 (isolated writable layers)
Verification:
# Modify web1
docker exec web1 bash -c "echo 'Hello from web1' > /usr/share/nginx/html/test.txt"
# Check web1
docker exec web1 cat /usr/share/nginx/html/test.txt
# Output: Hello from web1
# Check web2
docker exec web2 cat /usr/share/nginx/html/test.txt
# Output: cat: /usr/share/nginx/html/test.txt: No such file or directory
The file exists in web1 but not in web2, even though they're from the same image. Each container has its own writable layer.
Image to Container Relationship Diagram:
[nginx Image]
(Read-only)
|
┌─────────┴─────────┐
↓ ↓
[Container 1] [Container 2]
(Writable layer) (Writable layer)
(PID namespace) (PID namespace)
(Net namespace) (Net namespace)
(Isolated) (Isolated)
Filesystem perspective:
Image Layers (read-only, shared):
├─ Layer 3: nginx files
├─ Layer 2: nginx dependencies
└─ Layer 1: Base OS (Debian)

Container 1 (writable, unique):
└─ Writable layer: Changes made in container 1

Container 2 (writable, unique):
└─ Writable layer: Changes made in container 2
Why this design?
- Efficiency: 100 containers from the same image share one copy of the base filesystem
- Speed: Starting a container doesn't require copying files—just create a new writable layer
- Consistency: All containers from an image start with identical state
- Immutability: Encourages treating containers as disposable—don't modify running containers, rebuild images instead
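You can observe the layering from the CLI (a sketch; the digests and sizes on your system will differ):
# The read-only layers that make up the nginx image
docker image inspect nginx --format '{{json .RootFS.Layers}}'
# How the image was assembled, layer by layer
docker history nginx
# Overall disk usage—note that extra containers add only thin writable layers
docker system df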
Container Lifecycle and Ephemeral Nature
Understanding the container lifecycle is essential for proper container usage. Containers are designed to be ephemeral—short-lived and replaceable.
Container States:
┌─────────┐
│ Created │ (Container exists but not running)
└────┬────┘
│ docker start
↓
┌─────────┐
│ Running │ (Processes executing in isolated namespaces)
└────┬────┘
│ docker stop (SIGTERM, then SIGKILL after timeout)
↓
┌─────────┐
│ Stopped │ (Processes terminated, filesystem layer persists)
└────┬────┘
│ docker start (restart with same writable layer)
↓
┌─────────┐
│ Running │
└────┬────┘
│ docker rm (delete container)
↓
┌─────────┐
│ Removed │ (Container and writable layer deleted forever)
└─────────┘
Key lifecycle commands:
# Create and start in one step (most common)
docker run nginx
# Create without starting
docker create --name test nginx
# Start existing stopped container
docker start test
# Stop running container (SIGTERM to main process)
docker stop test
# Force stop (SIGKILL)
docker kill test
# Remove stopped container
docker rm test
# Remove running container (force)
docker rm -f test
The Ephemeral Nature:
By default, all changes made inside a container are lost when the container is removed:
# Start container
docker run -d --name demo nginx
# Make changes inside
docker exec demo bash -c "echo 'My data' > /tmp/important.txt"
docker exec demo cat /tmp/important.txt
# Output: My data
# Stop and remove container
docker stop demo
docker rm demo
# Try to access the data
docker run --name demo2 nginx
docker exec demo2 cat /tmp/important.txt
# Output: cat: /tmp/important.txt: No such file or directory
# THE DATA IS GONE!
Why ephemeral?
This might seem like a limitation, but it's actually a feature that enables important practices:
- Immutable Infrastructure: Don't patch running systems; deploy new versions
- Reproducibility: Every deployment starts from a known state
- Testing: Test environments are identical to production
- Rollback: Easy to roll back to previous image version
- Scaling: Identical containers can be created/destroyed dynamically
When you need persistence:
For data that must survive container restarts, use volumes (covered in section 4.7):
# Create named volume
docker volume create mydata
# Use volume in container
docker run -v mydata:/data nginx
# Data in /data survives container removal
Container lifecycle best practices:
- Treat containers as cattle, not pets: Don't name them, don't SSH into them to debug, don't manually configure them
- Logs go to stdout/stderr: Not to files inside the container (so docker logs can capture them)
- Configuration via environment variables: Not by editing files inside the container
- Data goes in volumes: Not in the container's writable layer
- Short-lived processes: Containers should start quickly and shut down gracefully
Connection to Lab 3:
In Lab 3, you learned about process lifecycle (start, run, terminate). Containers follow a similar lifecycle, but operate at a higher level of abstraction—each container lifecycle event actually involves creating/destroying multiple processes in isolated namespaces.
Control Groups (cgroups): Resource Limiting
Control groups (cgroups) are a Linux kernel feature that limits, accounts for, and isolates resource usage (CPU, memory, disk I/O, network) of process groups. Without cgroups, a runaway container could consume all CPU or memory, starving other containers and crashing the host.
The Problem:
Without resource limits:
# Malicious or buggy container
docker run -d evil-container
# This container's process could:
# - Consume 100% CPU (slow down everything else)
# - Allocate all available RAM (trigger OOM killer on host)
# - Fill up disk space (crash other containers)
# - Monopolize network bandwidth
This is unacceptable in multi-tenant environments. You need resource isolation.
The Solution: cgroups
Cgroups organize processes into hierarchical groups with configurable resource limits. The kernel enforces these limits, preventing processes from exceeding their allocation.
Cgroup Controllers:
The Linux kernel provides several cgroup controllers, each managing a different resource type:
- cpu: Limits CPU time
- CPU shares (relative priority)
- CPU quotas (hard limits)
- CPU affinity (pin to specific cores)
- memory: Limits RAM usage
- Hard limits (container killed if exceeded)
- Soft limits (reclaim memory under pressure)
- Swap limits
- blkio: Limits disk I/O
- Read/write bandwidth limits
- I/O operation rate limits
- net_cls/net_prio: Network bandwidth control
- Traffic classification
- Priority settings
- pids: Limits number of processes
- Prevents fork bombs
- cpuset: Assigns specific CPUs and memory nodes
- NUMA awareness
Docker's cgroup integration:
When you start a container with resource limits, Docker configures the appropriate cgroups:
docker run --memory=512m --cpus=1.5 nginx
Docker creates a cgroup hierarchy at /sys/fs/cgroup/ and configures:
- memory.limit_in_bytes = 536870912 (512MB)
- cpu.cfs_quota_us and cpu.cfs_period_us to enforce 1.5 CPUs
Viewing cgroup settings:
# Get the container's full ID (Docker names the cgroup directory after it, not the PID)
docker inspect container --format '{{.Id}}'
# Example: 3f2b9c1a...
# View memory limit (cgroup v1 layout)
cat /sys/fs/cgroup/memory/docker/CONTAINER_ID/memory.limit_in_bytes
# Output: 536870912
# View CPU quota (cgroup v1 layout)
cat /sys/fs/cgroup/cpu/docker/CONTAINER_ID/cpu.cfs_quota_us
# Output: 150000 (with cpu.cfs_period_us = 100000, this means 1.5 CPUs)
# On cgroup v2 hosts the equivalent files are memory.max and cpu.max, typically
# under /sys/fs/cgroup/system.slice/docker-CONTAINER_ID.scope/
Common resource limit flags:
# Memory limits
--memory=512m # Hard limit: 512MB RAM
--memory-reservation=256m # Soft limit: try to stay under 256MB
--memory-swap=512m # Total memory+swap limit
# CPU limits
--cpus=1.5 # Use at most 1.5 CPU cores
--cpu-shares=512 # Relative CPU priority (default 1024)
--cpuset-cpus=0,1 # Pin to CPU cores 0 and 1
# I/O limits
--device-read-bps=/dev/sda:10mb # Limit read bandwidth
--device-write-bps=/dev/sda:10mb # Limit write bandwidth
# Process limits
--pids-limit=100 # Max 100 processes in container
Testing memory limits:
# Run container with 256MB memory limit
docker run -it --memory=256m ubuntu bash
# Inside container, try to allocate 512MB
# (requires 'stress' tool)
apt-get update && apt-get install -y stress
stress --vm 1 --vm-bytes 512M
# Container is killed when exceeding limit
# Output: stress: FAIL: [1] (415) <-- worker 7 got signal 9
# Signal 9 = SIGKILL (OOM killer)
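From a second terminal on the host, you can watch usage against the configured limit while the stress test runs (a sketch):
docker stats --no-stream
# Columns include: NAME, CPU %, MEM USAGE / LIMIT, MEM %, PIDS
# Just before the OOM kill, MEM USAGE / LIMIT approaches e.g. 255MiB / 256MiB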
Why cgroups are essential:
- Multi-tenancy: Run untrusted workloads safely
- Quality of Service: Guarantee resources for critical applications
- Fair sharing: Prevent one container from monopolizing resources
- Predictability: Know exactly how much resources each container can use
- Cost control: In cloud environments, map cgroups to billing
cgroup vs namespace distinction:
- Namespaces: Provide isolation
- cgroups: Provide resource limits
Both are necessary for containers. Namespaces prevent containers from seeing each other; cgroups prevent containers from starving each other.
Docker Networking: Bridges, veth Pairs, and NAT
Docker networking should feel familiar—it uses the exact same mechanisms you built manually in Lab 7. The primary difference is automation: Docker sets up bridges, veth pairs, routes, and NAT rules automatically.
Default Docker Network Architecture:
When you install Docker, it creates a default bridge network:
┌─────────────────┐ ┌─────────────────┐ ┌─────────────────┐
│ Container A │ │ Container B │ │ Container C │
│ 172.17.0.2 │ │ 172.17.0.3 │ │ 172.17.0.4 │
└────────┬────────┘ └────────┬────────┘ └────────┬────────┘
│eth0 │eth0 │eth0
│ │ │
(veth pair) (veth pair) (veth pair)
│ │ │
┌────┴───────────────────┴───────────────────┴────┐
│ docker0 Bridge │
│ 172.17.0.1/16 │
└────────────────────┬─────────────────────────────┘
│
[Host eth0]
│
[Internet]
(via NAT/MASQUERADE)
Component breakdown (all from Lab 7!):
1. Bridge Interface (docker0)
Docker automatically creates a bridge interface when installed:
ip addr show docker0
Expected output:
3: docker0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
link/ether 02:42:8f:a3:f1:2a brd ff:ff:ff:ff:ff:ff
inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
This is exactly like br-lab from Lab 7:
- Bridge interface acting as virtual switch
- Assigned IP address 172.17.0.1
- Subnet 172.17.0.0/16 (65,536 addresses available)
2. veth Pairs
For each container, Docker creates a veth pair:
ip link show | grep veth
Expected output:
8: veth7a3f2b1@if7: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0
10: veth9d4e8c2@if9: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master docker0
Decoding this:
- veth7a3f2b1@if7: One end of the veth pair, connected to the bridge (master docker0)
- @if7: Paired with the interface at index 7 (the end inside the container)
This is exactly what you did in Lab 7:
sudo ip link add v-host type veth peer name v-client
Docker does the same thing, automatically.
3. Container Network Namespace
Each container has its own network namespace (you built these manually in Lab 7!):
# View container's network interfaces
docker exec container ip addr show
Expected output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
inet 127.0.0.1/8 scope host lo
7: eth0@if8: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
The container sees:
- lo: Its own loopback interface
- eth0@if8: Its end of the veth pair (paired with the host's interface at index 8)
- An IP address from the docker0 subnet
4. IP Address Assignment
Docker acts as a simple DHCP-like service, assigning IPs sequentially:
- First container: 172.17.0.2
- Second container: 172.17.0.3
- Third container: 172.17.0.4
- etc.
5. Routing in Container
Check the container's routing table:
docker exec container ip route show
Expected output:
default via 172.17.0.1 dev eth0
172.17.0.0/16 dev eth0 proto kernel scope link src 172.17.0.2
Translation:
- Default route: Send all traffic to 172.17.0.1 (the bridge) for routing to internet
- Local route: 172.17.0.0/16 is directly reachable via eth0
This is exactly the routing you configured in Lab 7:
sudo ip netns exec red ip route add default via 10.0.0.1
6. NAT for Internet Access
Docker automatically configures iptables/nftables NAT rules (MASQUERADE) so containers can reach the internet:
sudo iptables -t nat -L -n | grep MASQUERADE
Expected output:
MASQUERADE all -- 172.17.0.0/16 0.0.0.0/0
This is exactly the NAT you configured in Lab 7:
sudo iptables -t nat -A POSTROUTING -s 10.0.0.0/24 -j MASQUERADE
7. Port Mapping (-p flag)
When you use -p 8080:80, Docker sets up Destination NAT (DNAT):
docker run -d -p 8080:80 nginx
Docker adds an iptables DNAT rule:
sudo iptables -t nat -L -n | grep DNAT
Expected output:
DNAT tcp -- 0.0.0.0/0 0.0.0.0/0 tcp dpt:8080 to:172.17.0.2:80
Translation: Traffic arriving at host port 8080 is redirected to 172.17.0.2:80
This is also NAT, specifically DNAT (Destination NAT). You encountered source NAT (SNAT/MASQUERADE) in Lab 7.
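A quick end-to-end check of the DNAT rule (a sketch; the container name is arbitrary):
docker run -d --name webmap -p 8080:80 nginx
# Traffic to host port 8080 is rewritten to the container's port 80
curl -s http://localhost:8080 | grep -i "welcome to nginx"
docker rm -f webmap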
Container-to-Container Communication
Containers on the same bridge can communicate directly:
# Container A pings Container B
docker exec containerA ping 172.17.0.3
The packet flow:
- Container A sends a packet to 172.17.0.3 (Container B's IP)
- Packet goes out Container A's eth0 (through veth pair)
- Arrives on docker0 bridge
- Bridge forwards to veth pair for Container B
- Packet arrives at Container B's eth0
No routing through host networking needed—the bridge forwards directly (Layer 2 switching).
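You can watch the bridge doing this Layer 2 forwarding by listing its attached ports (a sketch; your veth interface names will differ):
# Show which veth interfaces are attached to the docker0 bridge
brctl show docker0              # brctl comes from bridge-utils (see Prerequisites)
# Or, with iproute2:
bridge link show | grep docker0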
Custom Networks
You can create custom bridge networks:
docker network create --subnet=10.20.0.0/24 mynet
Docker creates a new bridge interface (e.g., br-abc123def456) with subnet 10.20.0.0/24.
Containers on different networks are isolated:
docker run -d --network=mynet --name=isolated nginx
This container is on mynet, not docker0, so it cannot communicate with containers on the default bridge (different Layer 2 segment).
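A sketch demonstrating that isolation (container names, network name, and IPs are examples; alpine is used so ping is available):
docker network create --subnet=10.30.0.0/24 isonet
docker run -d --network=isonet --name=target nginx
docker run -d --name=probe alpine sleep 300          # lands on the default bridge
# Find the target's IP on isonet
docker inspect target --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}'
# Output: e.g. 10.30.0.2
# Ping it from the default-bridge container—Docker's isolation rules drop the traffic
docker exec probe ping -c 2 10.30.0.2
# Output: 100% packet loss
docker rm -f target probe && docker network rm isonet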
DNS Resolution Between Containers
Docker provides automatic DNS resolution for container names within custom networks:
docker network create mynet
docker run -d --network=mynet --name=web nginx
docker run -d --network=mynet --name=app alpine sleep 3600
# From app container
docker exec app ping web
# Resolves 'web' to web container's IP address!
Docker runs an embedded DNS server (listening on 127.0.0.11 inside containers) that resolves container names to IPs.
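You can confirm the embedded DNS server from inside a container on the custom network (a sketch, reusing the app container above; busybox's nslookup is assumed):
docker exec app cat /etc/resolv.conf
# Output includes: nameserver 127.0.0.11   (Docker's embedded DNS server)
docker exec app nslookup web
# Resolves 'web' to the web container's IP on mynet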
Network Modes:
Docker supports several network modes:
- bridge (default): Container on docker0 bridge (or custom bridge)
- host: Container shares host's network namespace (no isolation)
- none: No networking (container has only loopback)
- container:name: Share another container's network namespace
Example: host mode
docker run --network=host nginx
The container's network namespace inode is the same as the host's:
sudo ls -la /proc/CONTAINER_PID/ns/net
# Output: net -> 'net:[4026531992]' # Same as host!
Container sees all host's network interfaces and can bind to any port on any interface.
Summary:
Docker networking uses:
- Bridges (like
br-labfrom Lab 7) - veth pairs (like
v-host↔v-clientfrom Lab 7) - Network namespaces (like
redandbluefrom Lab 7) - Routing tables and default routes
- NAT/MASQUERADE for internet access
- DNAT for port mapping
Volumes and Persistent Storage
By default, container filesystems are ephemeral—all changes are lost when the container is removed. For data that must persist (databases, user uploads, logs), Docker provides volumes.
The Problem:
# Start database container
docker run -d --name db postgres
# Database writes data
docker exec db psql -c "CREATE DATABASE myapp;"
# Stop and remove container
docker rm -f db
# Data is GONE FOREVER!
This is unacceptable for stateful applications.
The Solution: Volumes
Volumes are directories on the host that are mounted into containers. Data written to volumes persists beyond container lifecycle.
Two Volume Types:
1. Named Volumes (Docker-managed):
Docker manages the volume storage location.
# Create volume
docker volume create mydata
# Use in container
docker run -v mydata:/data nginx
# Data written to /data inside container is stored in:
# /var/lib/docker/volumes/mydata/_data (on host)
Advantages:
- Docker manages storage location
- Portable across hosts (can be backed up, restored)
- Works with volume plugins (NFS, cloud storage)
Best practice: Use named volumes for production databases, critical data.
2. Bind Mounts (Host directory):
Mount a host directory directly into the container.
# Mount host directory into container
docker run -v /home/user/html:/usr/share/nginx/html nginx
Any changes in /home/user/html on the host are immediately visible in /usr/share/nginx/html inside the container, and vice versa.
Advantages:
- Direct access to files from host
- Useful for development (edit code on host, see changes in container immediately)
- No Docker management needed
Disadvantages:
- Tied to specific host filesystem paths
- Less portable
- Permissions can be tricky (host UID vs. container UID)
# Under the hood, both volume types use Linux bind mounts.
# For a named volume, Docker essentially does:
mount --bind /var/lib/docker/volumes/mydata/_data /data
Volume Sharing Between Containers:
Multiple containers can share the same volume:
# Create volume
docker volume create shared
# Container 1 writes
docker run -v shared:/data --name writer alpine sh -c "echo 'Hello' > /data/file.txt"
# Container 2 reads
docker run -v shared:/data --name reader alpine cat /data/file.txt
# Output: Hello
Use cases:
- Shared configuration between containers
- Log aggregation (multiple containers write logs to shared volume)
- Data processing pipelines (one container produces, another consumes)
Volume Lifecycle:
# Create volume
docker volume create mydata
# List volumes
docker volume ls
# Inspect volume
docker volume inspect mydata
# Remove volume (only if no containers using it)
docker volume rm mydata
# Remove all unused volumes
docker volume prune
Inspecting Volume Mounts:
docker inspect container --format='{{json .Mounts}}' | python3 -m json.tool
Example output:
[
{
"Type": "volume",
"Source": "/var/lib/docker/volumes/mydata/_data",
"Destination": "/data",
"Mode": "z",
"RW": true
}
]
Fields:
Type: "volume" or "bind"Source: Host-side pathDestination: Container-side pathRW: Read-write (true) or read-only (false)
Read-Only Volumes:
For security, you can mount volumes read-only:
docker run -v mydata:/data:ro nginx
Container can read /data but cannot write to it.
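A sketch verifying the read-only mount (alpine used only for brevity):
docker volume create mydata
docker run --rm -v mydata:/data:ro alpine sh -c 'echo hello > /data/file.txt'
# Output: sh: can't create /data/file.txt: Read-only file system
docker run --rm -v mydata:/data alpine sh -c 'echo hello > /data/file.txt && cat /data/file.txt'
# Output: hello — the same volume mounted read-write works as expected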
tmpfs Mounts (In-Memory Temporary Storage):
For sensitive data that should never touch disk:
docker run --tmpfs /tmp:size=100m nginx
The /tmp directory is stored in RAM and disappears when the container stops.
Best Practices:
- Named volumes for databases: Postgres, MySQL, MongoDB
- Bind mounts for development: Code that you're actively editing
- tmpfs for secrets: Temporary credentials, keys
- Volume plugins for cloud: AWS EBS, Azure Disk, GCP Persistent Disk
Laboratory Exercises
The following exercises build progressively, demonstrating how Docker automates the kernel-level primitives you mastered in previous labs. You will install Docker, inspect namespace isolation, explore interactive containers, configure persistent storage, and build a multi-container application with networking and resource limits.
Exercise A: Installing Docker
Objective: Install Docker Engine from the official Docker repository and configure it for non-root access.
Why the official repository? Ubuntu's default repositories often contain outdated Docker versions. The official Docker repository provides the latest stable releases with security updates and new features.
Step 1: Remove old Docker versions (if any)
If you previously installed Docker from Ubuntu's repositories or older Docker installations exist, remove them to avoid conflicts:
sudo apt remove docker docker-engine docker.io containerd runc
It's safe to run this even if these packages aren't installed—apt will simply report they're not present.
Step 2: Install prerequisites
Install packages needed for adding Docker's repository:
sudo apt update
sudo apt install -y \
apt-transport-https \
ca-certificates \
curl \
gnupg \
lsb-release
Step 3: Add Docker's GPG key
Docker signs its packages with a GPG key to ensure authenticity. Add this key to your system:
sudo install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | \
sudo gpg --dearmor -o /etc/apt/keyrings/docker.gpg
sudo chmod a+r /etc/apt/keyrings/docker.gpg
What this does:
- Creates the /etc/apt/keyrings/ directory for storing repository keys
- Downloads Docker's GPG public key
- Converts it to binary format (the .gpg file)
- Makes it readable by all users
Step 4: Add Docker repository to apt sources
sudo tee /etc/apt/sources.list.d/docker.sources <<EOF
Types: deb
URIs: https://download.docker.com/linux/ubuntu
Suites: $(. /etc/os-release && echo "${UBUNTU_CODENAME:-$VERSION_CODENAME}")
Components: stable
Architectures: $(dpkg --print-architecture)
Signed-By: /etc/apt/keyrings/docker.gpg
EOF
What this does:
- Detects your CPU architecture (amd64, arm64, etc.)
- Detects your Ubuntu version codename (focal, jammy, etc.)
- Adds Docker's repository to apt's source list
Step 5: Install Docker Engine
Update package index and install Docker:
sudo apt update
sudo apt install -y \
docker-ce \
docker-ce-cli \
containerd.io \
docker-buildx-plugin \
docker-compose-plugin
Packages installed:
- docker-ce: Docker Community Edition engine (the main Docker daemon)
- docker-ce-cli: Docker command-line interface
- containerd.io: Container runtime that Docker uses under the hood
- docker-buildx-plugin: Extended build capabilities (multi-platform images)
- docker-compose-plugin: Docker Compose for multi-container applications
Step 6: Verify Docker installation
Check Docker version:
sudo docker --version
Expected output:
Docker version 24.0.7, build afdd53b
Your version number may be different (newer), which is fine.
Step 7: Start and enable Docker service
Ensure Docker daemon starts on boot:
sudo systemctl start docker
sudo systemctl enable docker
Check service status:
sudo systemctl status docker
Expected output:
● docker.service - Docker Application Container Engine
Loaded: loaded (/lib/systemd/system/docker.service; enabled)
Active: active (running) since Thu 2024-12-12 10:30:00 UTC; 5min ago
Look for Active: active (running).
Step 8: Configure non-root Docker access
By default, only root can run Docker commands. To run Docker without sudo, add your user to the docker group:
sudo usermod -aG docker $USER
Important: Log out and log back in for this change to take effect. Alternatively, you can run:
newgrp docker
This starts a new shell with updated group membership.
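You can confirm that the group change took effect (a sketch):
id -nG | grep -w docker
# If 'docker' appears, this shell has the new group membership;
# if not, log out and back in (or run newgrp docker) and check again.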
Step 9: Verify non-root access
Test that you can run Docker without sudo:
docker run hello-world
If this works without errors, you've successfully installed Docker!
Expected output:
Unable to find image 'hello-world:latest' locally
latest: Pulling from library/hello-world
c1ec31eb5944: Pull complete
Digest: sha256:4bd78111b6914a99dbc560e6a20eab57ff6655aea4a80c50b0c5491968cbc2e6
Status: Downloaded newer image for hello-world:latest
Hello from Docker!
This message shows that your installation appears to be working correctly.
To generate this message, Docker took the following steps:
1. The Docker client contacted the Docker daemon.
2. The Docker daemon pulled the "hello-world" image from the Docker Hub.
3. The Docker daemon created a new container from that image which runs the
executable that produces the output you are currently reading.
4. The Docker daemon streamed that output to the Docker client, which sent it
to your terminal.
To try something more ambitious, you can run an Ubuntu container with:
$ docker run -it ubuntu bash
Share images, automate workflows, and more with a free Docker ID:
https://hub.docker.com/
For more examples and ideas, visit:
https://docs.docker.com/get-started/
What happened:
- Docker client (the docker command) connected to the Docker daemon (dockerd)
- Daemon checked local images—didn't find hello-world
- Daemon pulled the hello-world image from Docker Hub (public registry)
- Daemon created a container from the image
- Container executed its program (printed the message)
- Container exited
- Output sent back to your terminal
Step 10: Clean up test container
List all containers (including stopped):
docker ps -a
You should see the hello-world container with status Exited (0).
Remove it:
docker rm $(docker ps -aq --filter "ancestor=hello-world")
Verify it's gone:
docker ps -a
Deliverable A:
Provide screenshots showing:
- Output of docker --version
- Output of sudo systemctl status docker (showing Active: active (running))
- Output of docker run hello-world (the entire message)
Exercise B: Hello World and Namespace Inspection
Objective: Run your first container, understand the container lifecycle, and inspect the Linux kernel namespaces that provide container isolation—connecting directly to your work in Lab 7.
Part 1: Hello World
Step 1: Run the hello-world container (if not done in Exercise A)
docker run hello-world
We covered this in Exercise A, but let's examine what actually happened in more detail.
Step 2: List all containers
docker ps
Expected output: Nothing (empty list)
Why? The hello-world container ran, printed its message, and exited immediately. docker ps only shows running containers by default.
To see all containers (including stopped):
docker ps -a
Expected output:
CONTAINER ID   IMAGE         COMMAND    CREATED          STATUS                     PORTS   NAMES
a1b2c3d4e5f6   hello-world   "/hello"   10 seconds ago   Exited (0) 8 seconds ago           eager_tesla
Understanding the fields:
- CONTAINER ID: Short hex identifier (first 12 chars of full 64-char ID)
- IMAGE: Which image this container was created from
- COMMAND: The process that ran inside the container (the /hello executable)
- CREATED: When the container was created
- STATUS: Current state—Exited (0) means the process exited with code 0 (success)
- PORTS: Port mappings (none for hello-world)
- NAMES: Random name if you don't specify one (Docker generates names like "eager_tesla", "hopeful_darwin")
Step 3: Inspect the container
Get detailed information about the container:
docker inspect eager_tesla # Use your actual container name
This outputs a large JSON document with all container metadata. Let's extract specific fields:
# Just the State section
docker inspect eager_tesla --format='{{json .State}}' | python3 -m json.tool
Expected output (formatted):
{
"Status": "exited",
"Running": false,
"Paused": false,
"Restarting": false,
"OOMKilled": false,
"Dead": false,
"Pid": 0,
"ExitCode": 0,
"StartedAt": "2024-12-12T10:35:00Z",
"FinishedAt": "2024-12-12T10:35:01Z"
}
Analysis:
- Container ran for about 1 second (started at :00, finished at :01)
- Exit code 0 (successful completion)
- PID is 0 (process has terminated; while running it had a real PID)
Step 4: View container logs
Even though the container exited, Docker saved its output:
docker logs eager_tesla
Shows the hello-world message again. This demonstrates that Docker captures stdout/stderr from containers.
Step 5: Remove the container
Stopped containers still consume disk space (their writable layer persists). Remove it:
docker rm eager_tesla
Verify removal:
docker ps -a
The container should be gone.
Part 2: Inspect Namespaces (Connect to Lab 7!)
Now let's run a longer-lived container and examine its namespace isolation—this directly connects to your hands-on work with network namespaces in Lab 7.
Step 1: Run a persistent container
docker run -d --name inspector nginx
Flags explained:
- -d: Detached mode—run in background
- --name inspector: Give it a memorable name instead of a random one
Expected output:
Unable to find image 'nginx:latest' locally
latest: Pulling from library/nginx
...
Status: Downloaded newer image for nginx:latest
b7f9a8e6c4d3a1b2c5e8f9d2a7c4b6e8f3d9a2c1b4e7f8d3a5c2b1
The long hex string is the full 64-character container ID. Docker returns this after creating the container.
Step 2: Verify the container is running
docker ps
Expected output:
CONTAINER ID   IMAGE   COMMAND                  CREATED          STATUS         PORTS    NAMES
b7f9a8e6c4d3   nginx   "/docker-entrypoint.…"   10 seconds ago   Up 8 seconds   80/tcp   inspector
The container is running nginx web server.
Step 3: Get the container's PID on the host
Remember: containers are just processes. Let's find the PID:
docker inspect inspector --format '{{.State.Pid}}'
Expected output: A number like 12345
This is the process ID on the host system. Let's verify:
ps aux | grep 12345
You should see nginx processes! The container is just a process with a fancy namespace wrapper.
Step 4: Examine the container's namespaces
This is the crucial step connecting to Lab 7. Replace 12345 with your actual PID:
sudo ls -la /proc/12345/ns/
Expected output:
total 0
dr-x--x--x 2 root root 0 Dec 12 10:40 .
dr-xr-xr-x 9 root root 0 Dec 12 10:40 ..
lrwxrwxrwx 1 root root 0 Dec 12 10:40 cgroup -> 'cgroup:[4026533671]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 ipc -> 'ipc:[4026533669]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 mnt -> 'mnt:[4026533667]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 net -> 'net:[4026533672]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 pid -> 'pid:[4026533670]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 pid_for_children -> 'pid:[4026533670]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 user -> 'user:[4026531837]'
lrwxrwxrwx 1 root root 0 Dec 12 10:40 uts -> 'uts:[4026533668]'
Analysis of namespace inodes:
Note the inode numbers (the numbers in brackets). Each represents a namespace instance.
Step 5: Compare with host's namespaces
sudo ls -la /proc/$$/ns/net
Expected output:
lrwxrwxrwx 1 youruser youruser 0 Dec 12 10:40 net -> 'net:[4026531992]'
Critical observation: The container's network namespace inode (4026533672) is different from the host's (4026531992).
Step 6: Compare two containers
Start a second container:
docker run -d --name inspector2 nginx
Get its PID:
docker inspect inspector2 --format '{{.State.Pid}}'
Check its network namespace:
sudo ls -la /proc/NEW_PID/ns/net
Expected: A different inode number from both the host and inspector container!
Conclusion: Each container has its own isolated network namespace, just like the red and blue namespaces you created manually in Lab 7.
Step 7: Examine other namespaces
Let's check PID namespace isolation:
# Host PID namespace
sudo ls -la /proc/$$/ns/pid
# Container PID namespace
sudo ls -la /proc/CONTAINER_PID/ns/pid
Different inodes = isolated process trees!
Step 8: User namespace (often shared)
# Host user namespace
sudo ls -la /proc/$$/ns/user
# Container user namespace
sudo ls -la /proc/CONTAINER_PID/ns/user
Expected: Often the same inode number.
Many Docker configurations share the host's user namespace for simplicity. This means UID 0 in the container is UID 0 on the host (less secure, but more compatible).
For enhanced security, Docker can be configured to use separate user namespaces (rootless Docker), but that's beyond this lab's scope.
Step 9: Enter the container's namespace with nsenter
You can actually enter a container's namespaces using the nsenter command (this is how docker exec works!):
sudo nsenter --target CONTAINER_PID --net ip addr show
This executes ip addr show inside the container's network namespace, showing the container's view of network interfaces.
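nsenter can also join several namespaces at once. As an optional sketch (using the same placeholder PID as above), the following drops you into a shell that sees the container's filesystem, hostname, process tree, and network, which is essentially what docker exec arranges for you:
sudo nsenter --target CONTAINER_PID --mount --uts --ipc --net --pid sh
# Inside this shell: 'uname -n' prints the container's hostname and 'ls /' shows the container's root filesystem
# Type 'exit' to return to the host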
Step 10: Clean up
docker stop inspector inspector2
docker rm inspector inspector2
Deliverable B
Provide screenshots showing:
- Output of docker run hello-world (the full hello message)
- Output of docker ps -a showing the exited hello-world container with its random name
- Output of docker inspect inspector --format '{{.State.Pid}}' (showing the PID)
- Output of sudo ls -la /proc/PID/ns/ for the inspector container (showing all namespaces)
- Side-by-side comparison:
  - sudo ls -la /proc/$$/ns/net (host's network namespace inode)
  - sudo ls -la /proc/CONTAINER_PID/ns/net (container's network namespace inode)
  - Highlight that the inode numbers are different
Exercise C: Interactive Exploration with Fedora
Objective: Run an interactive container with a different Linux distribution (Fedora instead of Ubuntu), demonstrating mount namespace isolation (different root filesystems), PID namespace isolation (isolated process tree), and UTS namespace isolation (different hostname).
Important context: Your host might be running Ubuntu, but the container will run Fedora. Both will be using the same Linux kernel, but they'll have completely different filesystems and will appear as different "machines" from inside.
Step 1: Run Fedora container interactively
docker run -it --name fedora-explore fedora bash
Flags explained:
- -i: Interactive—keep STDIN open
- -t: Allocate a pseudo-TTY (terminal)
- fedora: Pull the Fedora base image from Docker Hub
- bash: Command to run inside the container (start a bash shell)
What happens:
- Docker downloads Fedora base image (if not cached)
- Creates container from image
- Starts bash inside the container
- Attaches your terminal to that bash session
Expected output:
Unable to find image 'fedora:latest' locally
latest: Pulling from library/fedora
...
Status: Downloaded newer image for fedora:latest
[root@a1b2c3d4e5f6 /]#
Observe the prompt change:
- Before: user@hostname:~$ (your normal shell)
- After: [root@a1b2c3d4e5f6 /]# (inside the container)
You're now inside the Fedora container!
Step 2: Explore the filesystem (Mount Namespace)
Check the operating system:
cat /etc/os-release
Expected output:
NAME="Fedora Linux" VERSION="39 (Container Image)" ID=fedora VERSION_ID=39 ...
Open a new terminal on your host (don't close the container terminal) and run:
cat /etc/os-release
Expected output on host:
NAME="Ubuntu" VERSION="22.04.3 LTS (Jammy Jellyfish)" ID=ubuntu ...
Two different operating systems on the same machine! This is mount namespace isolation—the container has its own root filesystem.
Back in the container, explore the filesystem:
ls /
Expected output:
bin boot dev etc home lib lib64 media mnt opt proc root run sbin srv sys tmp usr var
These are Fedora's files, not your host's Ubuntu files. They're completely different directory trees.
Try to use Ubuntu's package manager:
apt update
Expected output:
bash: apt: command not found
apt doesn't exist in Fedora! Fedora uses a different package manager.
Try Fedora's package manager:
dnf --version
Expected output:
4.18.2
Installed: dnf-0:4.18.2-1.fc39.noarch
...
dnf exists because we're in a Fedora environment.
Install a package:
dnf install -y nano
This works! We can install packages just like on a real Fedora system.
Connection to Lab 2:
In Lab 2, you learned about the filesystem hierarchy (/etc, /var, /usr, etc.). Mount namespaces let the container have completely different contents at these paths. The container's /etc is different from the host's /etc.
Step 3: Examine process isolation (PID Namespace)
Inside the container, check running processes:
ps aux
Expected output:
USER   PID %CPU %MEM    VSZ   RSS TTY    STAT START   TIME COMMAND
root     1  0.0  0.0  12345  2345 pts/0  Ss   10:45   0:00 bash
root    67  0.0  0.0  44567  3456 pts/0  R+   10:47   0:00 ps aux
Only two processes visible!
- PID 1: bash (the container's init process)
- PID 67: ps command we just ran
On the host (in your other terminal):
ps aux | wc -l
Expected output: 200+ processes
The container cannot see the host's processes! This is PID namespace isolation.
From the container's perspective, bash is PID 1 (like systemd is PID 1 on a normal Linux system).
From the host's perspective, that same bash process has a different PID (e.g., 12345).
Connection to Lab 3:
In Lab 3, you learned about PIDs and the process hierarchy. PID namespaces create separate process hierarchies—the container has its own process tree starting from PID 1.
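From the host you can actually see both PIDs of the same process at once: the NSpid field in /proc/<pid>/status lists the process's PID in each nested PID namespace (available on reasonably recent kernels). Optional sketch, using the placeholder host PID from above:
grep -E '^(Name|NSpid)' /proc/12345/status
# Name:   bash
# NSpid:  12345   1     <- host-side PID first, then the PID inside the container's namespace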
Step 4: Check hostname (UTS Namespace)
Inside the container:
hostname
Expected output:
a1b2c3d4e5f6
This is the container ID (first 12 characters of the full container ID).
On the host:
hostname
Expected output:
your-hostname.example.com
Different hostnames! This is UTS namespace isolation.
Step 5: Examine network configuration (Network Namespace)
Inside the container (the Fedora base image is minimal; if the ip or ping commands are missing, install them first with dnf install -y iproute iputils):
ip addr show
Expected output:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
inet 127.0.0.1/8 scope host lo
valid_lft forever preferred_lft forever
17: eth0@if18: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default
link/ether 02:42:ac:11:00:02 brd ff:ff:ff:ff:ff:ff link-netnsid 0
inet 172.17.0.2/16 brd 172.17.255.255 scope global eth0
valid_lft forever preferred_lft forever
Analysis:
- lo: Container's loopback interface (127.0.0.1)
- eth0@if18: Container's network interface (part of a veth pair)
- @if18: Indicates this interface is paired with interface index 18 on the host
- IP address: 172.17.0.2 (from the docker0 bridge subnet)
On the host:
ip addr show | grep "172.17"
You should see the docker0 bridge has IP 172.17.0.1, and you might see veth interfaces for containers.
Connection to Lab 7:
This is exactly what you built manually!
- docker0 bridge ≈ br-lab from Lab 7
- Container's eth0 ≈ v-client from Lab 7
- veth pair connects container to bridge ≈ Lab 7's veth pair architecture
Step 6: Test internet connectivity from container
ping -c 3 8.8.8.8
Expected: Ping succeeds!
This works because:
- Container has default route to 172.17.0.1 (docker0 bridge)
- Host has IP forwarding enabled
- Host has NAT rule (MASQUERADE) for 172.17.0.0/16
This is identical to what you configured in Lab 7!
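You can confirm the NAT piece from the host. Docker installs a MASQUERADE rule for the docker0 subnet that is equivalent to the rule you wrote by hand in Lab 7. Optional check (output abridged; exact rules vary slightly between Docker versions):
sudo iptables -t nat -S POSTROUTING | grep 172.17
# -A POSTROUTING -s 172.17.0.0/16 ! -o docker0 -j MASQUERADE
sysctl net.ipv4.ip_forward
# net.ipv4.ip_forward = 1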
Step 7: Try to see host's processes (will fail)
ps aux | grep systemd
Expected: No systemd processes visible.
You cannot see the host's processes from inside the container (PID namespace isolation).
Step 8: Exit the container
exit
When you exit bash (PID 1 in the container), the container stops automatically.
Verify the container stopped:
docker ps -a
Expected output:
CONTAINER ID   IMAGE    COMMAND   CREATED         STATUS                      NAMES
a1b2c3d4e5f6   fedora   "bash"    5 minutes ago   Exited (0) 10 seconds ago   fedora-explore
Note: The container still exists (STATUS: Exited), but it's not running. You can restart it with docker start fedora-explore if needed.
Step 9: Clean up
docker rm fedora-explore
Deliverable C
Provide screenshots showing:
- Inside container: Output of cat /etc/os-release (showing Fedora)
- On host: Output of cat /etc/os-release (showing Ubuntu or your host OS)
- Inside container: Output of ps aux (showing minimal processes, bash as PID 1)
- On host: Output of ps aux | wc -l (showing many more processes)
- Inside container: Output of hostname (showing the container ID)
- On host: Output of hostname (showing the host's hostname)
- Inside container: Output of ip addr show (showing eth0 with a 172.17.0.x address)
- Brief explanation (4-5 sentences): What do these differences demonstrate about namespace isolation? How does this relate to what you learned in Labs 2, 3, and 7?
Exercise D: Persistent Storage with Caddy
Objective: Run Caddy web server with persistent configuration and content using bind mounts, demonstrating that data can survive container removal and be shared between host and container.
Caddy is the web server you've been using throughout Lab 9. We'll run it in a container and configure it using files from the host.
Part 1: Basic Caddy Container
Step 1: Create directory structure on host
mkdir -p ~/lab10-caddy/{site,data,config}
Directory purposes:
- site: Website content (HTML files)
- data: Caddy's data directory (certificates, storage)
- config: Caddy configuration (Caddyfile)
Step 2: Create a simple website
cat > ~/lab10-caddy/site/index.html << 'EOF'
<!DOCTYPE html>
<html>
<head>
<title>Docker Caddy Demo</title>
</head>
<body>
<h1>Hello from Dockerized Caddy!</h1>
<div>
<p><strong>This is running in a Docker container.</strong></p>
<p>The file you're viewing is mounted from the host filesystem.</p>
<p>Changes made on the host appear instantly in the container!</p>
</div>
</body>
</html>
EOF
Step 3: Create Caddyfile configuration
cat > ~/lab10-caddy/config/Caddyfile << 'EOF'
:80 {
root * /usr/share/caddy
file_server
log {
output stdout
format console
}
}
EOF
Caddyfile explanation:
- :80: Listen on port 80 (inside the container)
- root * /usr/share/caddy: Serve files from this directory
- file_server: Enable static file serving
- log: Send access logs to stdout (so docker logs can capture them)
Step 4: Run Caddy container with volume mounts
docker run -d \
--name caddy-persistent \
-p 8080:80 \
-v ~/lab10-caddy/site:/usr/share/caddy \
-v ~/lab10-caddy/data:/data \
-v ~/lab10-caddy/config:/etc/caddy \
caddy
Breaking down the command:
- -d: Detached mode (run in background)
- --name caddy-persistent: Give the container a memorable name
- -p 8080:80: Port mapping
  - Host port 8080 → container port 80
  - This is NAT (DNAT specifically) from Lab 7!
- -v ~/lab10-caddy/site:/usr/share/caddy: Bind mount
  - Host directory ~/lab10-caddy/site appears at /usr/share/caddy inside the container
  - Bidirectional: changes on either side are visible on both sides
- -v ~/lab10-caddy/data:/data: Caddy's data storage
- -v ~/lab10-caddy/config:/etc/caddy: Caddy's configuration
- caddy: Image to use
Expected output:
Unable to find image 'caddy:latest' locally
latest: Pulling from library/caddy
...
Status: Downloaded newer image for caddy:latest
f8e9c7b6d5a4e3b2c1f7d8a9e6b5c4d3a2f1e8d7c6b5a4e3d2c1b9f8
Step 5: Verify container is running
docker ps
Expected output:
CONTAINER ID   IMAGE   COMMAND          CREATED          STATUS         PORTS                  NAMES
f8e9c7b6d5a4   caddy   "caddy run..."   10 seconds ago   Up 8 seconds   0.0.0.0:8080->80/tcp   caddy-persistent
Note the PORTS column: 0.0.0.0:8080->80/tcp
This means:
- Listen on all host interfaces (0.0.0.0)
- Host port 8080 maps to container port 80
- Protocol: TCP
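Under the hood, this port mapping is an iptables DNAT rule in Docker's DOCKER chain, the same mechanism you configured manually in Lab 7. Optional check from the host (exact output varies by Docker version and by the container's IP):
sudo iptables -t nat -L DOCKER -n | grep 8080
# DNAT   tcp  --  0.0.0.0/0   0.0.0.0/0   tcp dpt:8080 to:172.17.0.x:80   (the container's docker0 address)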
Step 6: Test the web server
curl http://localhost:8080
Expected output: Your HTML page!
<!DOCTYPE html>
<html>
<head>
<title>Docker Caddy Demo</title>
...
<h1>Hello from Dockerized Caddy!</h1>
...
</html>
You can also open http://localhost:8080 in a web browser on your host.
Step 7: View Caddy logs
docker logs caddy-persistent
Expected output:
{"level":"info","ts":1702389123.456,"msg":"using provided configuration",...}
{"level":"info","ts":1702389123.789,"msg":"serving initial configuration"}
...
These are Caddy's startup logs. When you access the website, you'll see access logs here too.
Part 2: Inspect Mounts
Step 8: Inspect the container's mounts
docker inspect caddy-persistent --format='{{json .Mounts}}' | python3 -m json.tool
Expected output:
[
{
"Type": "bind",
"Source": "/home/youruser/lab10-caddy/site",
"Destination": "/usr/share/caddy",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/youruser/lab10-caddy/data",
"Destination": "/data",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
},
{
"Type": "bind",
"Source": "/home/youruser/lab10-caddy/config",
"Destination": "/etc/caddy",
"Mode": "",
"RW": true,
"Propagation": "rprivate"
}
]
Field explanations:
Type: "bind" (bind mount, not Docker-managed volume)Source: Host filesystem pathDestination: Path inside containerRW: true (read-write), false would be read-onlyPropagation: How mount events propagate (rprivate = don't propagate to other mounts)
Step 9: View from inside the container
docker exec caddy-persistent df -h | grep caddy
Shows mounted filesystems inside the container containing "caddy" in the path.
You can also list the files:
docker exec caddy-persistent ls -la /usr/share/caddy
Expected output:
total 12
drwxr-xr-x 2 root root 4096 Dec 12 10:50 .
drwxr-xr-x 1 root root 4096 Dec 12 10:50 ..
-rw-r--r-- 1 root root  687 Dec 12 10:50 index.html
This is the same index.html you created on the host!
Part 3: Test Persistence
Step 10: Modify the file on the host
Without stopping the container, append a new section to the HTML file (a quick-and-dirty append: it adds markup after the existing closing tags, but browsers and curl will still display the new text):
cat >> ~/lab10-caddy/site/index.html << 'EOF'
<div class="info">
<p><strong>🎉 This was added from the host!</strong></p>
<p>The container is still running, and changes appear instantly.</p>
</div>
</body>
</html>
EOF
Step 11: Verify the change appears in the container immediately
curl http://localhost:8080
Expected: The new section appears!
You didn't restart the container, yet the content changed. This demonstrates bidirectional bind mount synchronization.
Step 12: Create a new file from inside the container
The caddy image is Alpine-based, so use sh (it does not ship bash):
docker exec caddy-persistent sh -c "echo '<h2>Created from container</h2>' > /usr/share/caddy/test.html"
Step 13: Verify the file appears on the host
ls -la ~/lab10-caddy/site/
Expected output:
total 16
drwxr-xr-x 2 youruser youruser 4096 Dec 12 11:00 .
drwxr-xr-x 5 youruser youruser 4096 Dec 12 10:45 ..
-rw-r--r-- 1 youruser youruser  687 Dec 12 10:50 index.html
-rw-r--r-- 1 root     root       34 Dec 12 11:00 test.html   ← New file!
The file exists on the host! Changes flow both directions.
Note: The file is owned by root because the process inside the container runs as root. This is one of the complexities of bind mounts—UID/GID mapping between host and container.
Step 14: Stop and remove the container
docker stop caddy-persistent
docker rm caddy-persistent
Step 15: Verify files still exist on host
ls -la ~/lab10-caddy/site/
Expected: All files still there!
The files survive because they're stored on the host filesystem, not in the container's ephemeral writable layer.
Step 16: Start a NEW container with the same bind mounts
docker run -d \
--name caddy-new \
-p 8080:80 \
-v ~/lab10-caddy/site:/usr/share/caddy \
-v ~/lab10-caddy/data:/data \
-v ~/lab10-caddy/config:/etc/caddy \
caddy
Step 17: Verify persistence
curl http://localhost:8080
Expected: All your previous content is still there!
The website works immediately because all the content and configuration was stored on the host, not inside the old container.
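For contrast, you can also see where a container's own ephemeral writable layer lives. Optional sketch; it assumes the default overlay2 storage driver, and the path will contain a long hash:
UPPER=$(docker inspect caddy-new --format '{{.GraphDriver.Data.UpperDir}}')
echo "$UPPER"        # something like /var/lib/docker/overlay2/<hash>/diff
sudo ls "$UPPER"     # files written inside the container (outside any mount) land here
# This directory is deleted together with the container, unlike your bind-mounted host directories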
Step 18: Compare to ephemeral storage
Let's demonstrate what happens without volumes:
# Start container without volumes
docker run -d --name caddy-temp caddy
# Create file inside container
docker exec caddy-temp sh -c "echo 'temporary' > /tmp/temp.txt"
# Verify file exists
docker exec caddy-temp cat /tmp/temp.txt
# Output: temporary
# Stop and remove container
docker stop caddy-temp
docker rm caddy-temp
# Try to access the file in a new container
docker run -d --name caddy-temp2 caddy
docker exec caddy-temp2 cat /tmp/temp.txt
# Output: cat: /tmp/temp.txt: No such file or directory
# The file is GONE because it was in the ephemeral layer
Step 19: Clean up
docker stop caddy-new caddy-temp2
docker rm caddy-new caddy-temp2
The files in ~/lab10-caddy/ remain intact on your host.
Under the hood, a bind mount is simply the kernel's mount --bind mechanism applied to the container's root filesystem. Conceptually, Docker does something like:
mount --bind ~/lab10-caddy/site /var/lib/docker/overlay2/.../merged/usr/share/caddy
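If you want to see the primitive itself without Docker, here is a minimal optional sketch using two scratch directories (the paths are arbitrary):
mkdir -p /tmp/bind-demo/src /tmp/bind-demo/target
echo "hello from src" > /tmp/bind-demo/src/file.txt
sudo mount --bind /tmp/bind-demo/src /tmp/bind-demo/target
cat /tmp/bind-demo/target/file.txt    # same file, now visible at a second path
sudo umount /tmp/bind-demo/target     # undo the bind mount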
Deliverable D
Provide screenshots showing:
- Output of curl http://localhost:8080 showing your custom HTML (initial version)
- Output of docker inspect caddy-persistent --format='{{json .Mounts}}' (formatted with python3 -m json.tool)
- After modifying index.html on the host, output of curl http://localhost:8080 showing the new content (without restarting the container)
- Output of ls -la ~/lab10-caddy/site/ showing both index.html and the test.html created from inside the container
- After removing the original container and starting caddy-new, output of curl http://localhost:8080 demonstrating that the content persisted
Exercise E: Multi-Container Infrastructure with Networking and Resource Limits
Objective: Build a complete three-tier architecture with:
- Purple container acting as reverse proxy (routes requests based on URL path)
- Red container serving backend content (memory-limited)
- Blue container serving backend content (CPU-limited)
This exercise synthesizes everything from Labs 7-9:
- Custom networks (Lab 7 bridges)
- Container-to-container communication (Lab 7 veth pairs and routing)
- Reverse proxy with path-based routing (Lab 9 Caddy configuration)
- Resource limits (cgroups)
Part 1: Create a Custom Network
Step 1: Create a custom bridge network
docker network create --subnet=10.10.0.0/24 labnet
This creates a new bridge (just like br-lab from Lab 7) with subnet 10.10.0.0/24.
Expected output:
a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6e7f8a9b0c1d2e3f4a5b6c7d8e9f0a1b2
This is the network ID.
Step 2: Inspect the network
docker network inspect labnet
Expected output (abbreviated):
[
{
"Name": "labnet",
"Id": "a1b2c3d4e5f6...",
"Driver": "bridge",
"IPAM": {
"Config": [
{
"Subnet": "10.10.0.0/24",
"Gateway": "10.10.0.1"
}
]
},
"Containers": {},
...
}
]
Key observations:
Driver: "bridge": Uses bridge networking (Layer 2 switching)Subnet: "10.10.0.0/24": Our specified subnetGateway: "10.10.0.1": Docker automatically assigns .1 as gatewayContainers: {}: No containers connected yet
Step 3: Verify bridge creation on host
ip link show | grep br-
Expected output:
5: br-a1b2c3d4e5f6: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default
Docker created a new bridge interface! The name includes the network ID prefix.
You can also use:
brctl show
Expected output:
bridge name        bridge id           STP enabled   interfaces
br-a1b2c3d4e5f6    8000.0242a1b2c3d4   no
docker0            8000.0242f7a8b9c0   no
Two bridges:
- docker0: Default Docker bridge
- br-a1b2c3d4e5f6: Our custom labnet bridge
Connection to Lab 7:
This is exactly what you did with:
sudo ip link add br-lab type bridge
sudo ip link set br-lab up
sudo ip addr add 10.0.0.1/24 dev br-lab
Docker automated it!
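Once containers join labnet in the next part, you can inspect the plumbing Docker created with the same tools you used in Lab 7. Optional check (interface names and counts will differ on your machine):
ip -br link show type veth          # one host-side veth interface per running container
bridge link show | grep br-         # those veths are enslaved to the labnet bridge
ip addr show | grep -B2 10.10.0.1   # the bridge itself holds the gateway address 10.10.0.1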
Part 2: Create Backend Services (Red and Blue)
Step 4: Create content directories
mkdir -p ~/lab10-multicontainer/{red,blue,purple}
Step 5: Create Red service content
cat > ~/lab10-multicontainer/red/index.html << 'EOF'
<!DOCTYPE html>
<html>
<head>
<title>Red Service</title>
</head>
<body style="background-color: #ffdddd">
<h1>Red Service</h1>
<div>
<p><strong>Container:</strong> Red</p>
<p><strong>IP:</strong> 10.10.0.2</p>
<p><strong>Resource Limit:</strong> Memory capped at 256MB</p>
<p>This backend is memory-constrained by cgroups.</p>
</div>
</body>
</html>
EOF
Step 6: Create Red Caddyfile
cat > ~/lab10-multicontainer/red/Caddyfile << 'EOF'
:80 {
root * /usr/share/caddy
file_server
log {
output stdout
format console
}
}
EOF
Step 7: Create Blue service content
cat > ~/lab10-multicontainer/blue/index.html << 'EOF'
<!DOCTYPE html>
<html>
<head>
<title>Blue Service</title>
</head>
<body style="background-color: #ddddff">
<h1>Blue Service</h1>
<div class="info">
<p><strong>Container:</strong> Blue</p>
<p><strong>IP:</strong> 10.10.0.3</p>
<p><strong>Resource Limit:</strong> CPU capped at 0.5 cores</p>
<p>This backend is CPU-constrained by cgroups.</p>
</div>
</body>
</html>
EOF
Step 8: Create Blue Caddyfile
cat > ~/lab10-multicontainer/blue/Caddyfile << 'EOF'
:80 {
root * /usr/share/caddy
file_server
log {
output stdout
format console
}
}
EOF
Step 9: Start Red container with memory limit
docker run -d \
--name red \
--network labnet \
--ip 10.10.0.2 \
--memory=256m \
--memory-swap=256m \
-v ~/lab10-multicontainer/red:/etc/caddy \
-v ~/lab10-multicontainer/red:/usr/share/caddy \
caddy
Flags explained:
- --network labnet: Connect to our custom network (not the default docker0)
- --ip 10.10.0.2: Assign a static IP address (just like ip addr add in Lab 7!)
- --memory=256m: cgroup memory limit (hard cap: 256 MB RAM)
- --memory-swap=256m: Total memory+swap limit (setting it equal to --memory means no swap)
  - If swap were 512m, the container could use 256 MB RAM plus 256 MB swap
- -v ~/lab10-multicontainer/red:/etc/caddy: Mount the Caddyfile
- -v ~/lab10-multicontainer/red:/usr/share/caddy: Mount the website content
Expected output: Container ID
Step 10: Verify Red is running
docker ps --filter "name=red"
Step 11: Test Red directly
Since Red has IP 10.10.0.2, let's verify we can reach it:
# From the host you can usually reach 10.10.0.2 directly (the host holds 10.10.0.1 on
# the labnet bridge), but to stay inside the container environment we'll use docker exec.
# Quick test: use docker exec to reach Red from Red itself
# (the caddy image is Alpine-based; if curl is missing, substitute: wget -qO- http://localhost)
docker exec red curl -s http://localhost | grep "<h1>"
Expected output:
<h1>Red Service</h1>
Step 12: Start Blue container with CPU limit
docker run -d \
--name blue \
--network labnet \
--ip 10.10.0.3 \
--cpus=0.5 \
-v ~/lab10-multicontainer/blue:/etc/caddy \
-v ~/lab10-multicontainer/blue:/usr/share/caddy \
caddy
Flags explained:
- --cpus=0.5: cgroup CPU limit (maximum 50% of one CPU core)
  - If CPU-intensive work is attempted, the kernel throttles it
- The other flags are the same as for Red
Step 13: Verify both containers are running
docker ps
Expected output:
CONTAINER ID   IMAGE   COMMAND          CREATED          STATUS          PORTS    NAMES
a1b2c3d4e5f6   caddy   "caddy run..."   20 seconds ago   Up 18 seconds   80/tcp   red
b7c8d9e0f1a2   caddy   "caddy run..."   10 seconds ago   Up 8 seconds    80/tcp   blue
Step 14: Test container-to-container communication
# From Red, ping Blue
docker exec red ping -c 2 10.10.0.3
Expected: Success!
# From Red, curl Blue's website
docker exec red curl -s http://10.10.0.3 | grep "<h1>"
Expected output:
<h1>Blue Service</h1>
Containers on the same custom network can communicate directly.
Part 3: Create Reverse Proxy (Purple)
Step 15: Create Purple's Caddyfile
This is the crucial piece—routing based on URL path (from Lab 9!)
cat > ~/lab10-multicontainer/purple/Caddyfile << 'EOF'
:80 {
# Health check endpoint
handle /health {
respond "Purple Reverse Proxy - OK" 200
}
# Root path
handle / {
respond "Purple Reverse Proxy - Use /red/ or /blue/ paths" 200
}
# Route /red/* to red container (strip /red prefix)
handle_path /red/* {
reverse_proxy red:80
}
# Route /blue/* to blue container (strip /blue prefix)
handle_path /blue/* {
reverse_proxy blue:80
}
# Logging
log {
output stdout
format console
}
}
EOF
Key points:
- handle_path /red/*: Matches any path starting with /red/; handle_path strips the prefix before forwarding
  - A client request for /red/index.html → the backend receives /index.html
- reverse_proxy red:80: Forward to the container named "red" on port 80
  - We use the hostname "red", not an IP!
  - Docker's embedded DNS resolves "red" to 10.10.0.2
  - The same applies to blue
Step 16: Start Purple reverse proxy
docker run -d \
--name purple \
--network labnet \
--ip 10.10.0.10 \
-p 8080:80 \
-v ~/lab10-multicontainer/purple:/etc/caddy \
caddy
Flags explained:
- --ip 10.10.0.10: Purple gets IP 10.10.0.10 (different from Red/Blue)
- -p 8080:80: Port mapping (host port 8080 → container port 80)
  - This is NAT (DNAT) from Lab 7!
  - Traffic to localhost:8080 on the host is forwarded to 10.10.0.10:80 in the container
Part 4: Testing the Infrastructure
Step 17: Test health check
curl http://localhost:8080/health
Expected output:
Purple Reverse Proxy - OK
This request:
- Reaches host's port 8080
- Docker's DNAT rule forwards to Purple (10.10.0.10:80)
- Purple's Caddy responds directly (no backend needed for /health)
Step 18: Test root path
curl http://localhost:8080/
Expected output:
Purple Reverse Proxy - Use /red/ or /blue/ paths
Step 19: Test routing to Red
curl http://localhost:8080/red/
Expected output: Red service HTML (with red background styling)
<!DOCTYPE html>
<html>
...
<h1>Red Service</h1>
<div>
<p><strong>Container:</strong> Red</p>
<p><strong>IP:</strong> 10.10.0.2</p>
...
What happened:
- Your curl → host port 8080
- DNAT → Purple (10.10.0.10:80)
- Purple sees the path /red/, strips /red, and forwards / to red:80
- Purple's DNS resolves red to 10.10.0.2
- The request goes to Red (10.10.0.2:80)
- Red's Caddy serves index.html
- The response travels back through Purple to your curl
Step 20: Test routing to Blue
curl http://localhost:8080/blue/
Expected output: Blue service HTML (with blue background styling)
Step 21: Test with verbose output to see headers
curl -v http://localhost:8080/red/ 2>&1 | grep -A10 "< HTTP"
Expected output:
< HTTP/1.1 200 OK
< Content-Type: text/html; charset=utf-8
< Server: Caddy
< Date: Thu, 12 Dec 2024 11:30:00 GMT
< Content-Length: 687
Observe: Server: Caddy header (from Purple, acting as proxy)
Step 22: Test in browser
Open in your web browser:
- http://localhost:8080/red/ → should show the red-styled page
- http://localhost:8080/blue/ → should show the blue-styled page
Part 5: Inspect Resource Limits
Step 23: Check Red's memory limit
docker inspect red --format='{{.HostConfig.Memory}}'
Expected output:
268435456
This is 256 MB in bytes (256 × 1024 × 1024 = 268435456).
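If you want to see this cap enforced (optional), allocate more than the limit inside a throwaway container and check the exit code. The python:3-alpine image and the allocation size here are only illustrative:
docker run --rm --memory=256m --memory-swap=256m python:3-alpine \
  python3 -c "data = bytearray(400 * 1024 * 1024)"
echo $?    # expect 137 (128 + SIGKILL): the kernel's OOM killer stopped the process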
Step 24: Check Blue's CPU limit
docker inspect blue --format='{{.HostConfig.NanoCpus}}'
Expected output:
500000000
This is 0.5 CPU cores in nanocpus (0.5 × 10^9 = 500,000,000).
Step 25: View cgroup settings from the host
Get Red's full container ID (Docker names its cgroup directory after the container ID, not after a PID):
RED_ID=$(docker inspect red --format '{{.Id}}')
echo "Red's ID: $RED_ID"
View the memory limit in the cgroup filesystem. The exact path depends on your cgroup version and Docker's cgroup driver; with cgroup v1 and the cgroupfs driver it is:
cat /sys/fs/cgroup/memory/docker/$RED_ID/memory.limit_in_bytes
(On a cgroup v2 host with the systemd driver, look at /sys/fs/cgroup/system.slice/docker-$RED_ID.scope/memory.max instead.)
Expected output:
268435456
View Blue's CPU quota:
BLUE_ID=$(docker inspect blue --format '{{.Id}}')
cat /sys/fs/cgroup/cpu/docker/$BLUE_ID/cpu.cfs_quota_us
(On cgroup v2, /sys/fs/cgroup/system.slice/docker-$BLUE_ID.scope/cpu.max shows "50000 100000".)
Expected output:
50000
Explanation:
- cpu.cfs_period_us: 100000 (the default, a 100 ms period)
- cpu.cfs_quota_us: 50000 (50 ms out of every 100 ms = 50% = 0.5 CPUs)
Connection to kernel concepts:
These cgroup files in /sys/fs/cgroup/ are how the kernel enforces resource limits. Docker configures these files, and the kernel does the actual enforcement.
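To watch that enforcement in action, you can generate CPU load inside Blue and observe the ceiling with docker stats. Optional sketch; the busy-loop uses the sh shell built into the Alpine-based caddy image:
docker exec -d blue sh -c 'while :; do :; done'   # start a CPU-burning loop (detached)
docker stats --no-stream blue                     # CPU % should settle near 50%, the --cpus=0.5 cap
docker restart blue                               # restart the container to kill the loop when done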
Part 6: Network Inspection
Step 26: Inspect labnet network showing all containers
docker network inspect labnet --format='{{json .Containers}}' | python3 -m json.tool
Expected output:
{
"a1b2c3d4e5f6...": {
"Name": "red",
"EndpointID": "...",
"MacAddress": "02:42:0a:0a:00:02",
"IPv4Address": "10.10.0.2/24",
"IPv6Address": ""
},
"b7c8d9e0f1a2...": {
"Name": "blue",
"EndpointID": "...",
"MacAddress": "02:42:0a:0a:00:03",
"IPv4Address": "10.10.0.3/24",
"IPv6Address": ""
},
"f8e9c7b6d5a4...": {
"Name": "purple",
"EndpointID": "...",
"MacAddress": "02:42:0a:0a:00:0a",
"IPv4Address": "10.10.0.10/24",
"IPv6Address": ""
}
}
All three containers are on the labnet network with their assigned IPs.
Step 27: Test DNS resolution between containers
# From Purple, resolve "red" hostname
docker exec purple nslookup red
Expected output:
Server:   127.0.0.11
Address:  127.0.0.11#53

Non-authoritative answer:
Name:     red
Address:  10.10.0.2
Explanation:
- 127.0.0.11: Docker's embedded DNS server (listening inside each container)
- It resolves the container name "red" to IP 10.10.0.2 (see the quick check below)
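You can see how this is wired up: Docker points each container's resolver at the embedded DNS server. Optional check (cat is provided by busybox in the caddy image):
docker exec purple cat /etc/resolv.conf
# nameserver 127.0.0.11
# (plus any search/options lines inherited from your host)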
Step 28: Test connectivity using hostnames
# Purple pings Red by hostname
docker exec purple ping -c 2 red
Expected: Success!
# Purple curls Blue by hostname
docker exec purple curl -s http://blue | grep "<h1>"
Expected output:
<h1>Blue Service</h1>
Part 7: Architecture Visualization
The architecture you built:
┌─────────────────┐
│ Your Host │
│ (Port 8080) │
└────────┬────────┘
│
Port Mapping (NAT)
8080 → 80
│
┌────────▼────────┐
│ Purple Proxy │
│ 10.10.0.10:80 │
│ (Caddy) │
└────────┬────────┘
│
Docker Network: labnet
(Bridge: br-a1b2c3d4e5f6)
10.10.0.0/24
│
┌─────────┴──────────┐
│ │
┌────────▼────────┐ ┌───────▼─────────┐
│ Red Service │ │ Blue Service │
│ 10.10.0.2:80 │ │ 10.10.0.3:80 │
│ (Caddy) │ │ (Caddy) │
│ [mem: 256MB] │ │ [cpu: 0.5] │
└─────────────────┘ └─────────────────┘
Request flow for curl http://localhost:8080/red/:
1. curl → localhost:8080 (your host)
2. Host's iptables DNAT rule → 10.10.0.10:80 (Purple)
3. Purple receives the request for /red/
4. Purple's Caddy config: handle_path /red/* → reverse_proxy red:80
5. Purple strips the /red prefix → the request becomes /
6. Purple's DNS resolves "red" → 10.10.0.2
7. Purple → Red (10.10.0.2:80) GET /
8. Red's Caddy serves /usr/share/caddy/index.html
9. Response: Red → Purple → Host → curl
Compare to Lab 7 and Lab 9:
- Lab 7: You manually created bridges, veth pairs, assigned IPs, configured routes, set up NAT
- Lab 9: You manually configured Caddy reverse proxy with path-based routing
- This exercise: Docker automated all the networking, you just specified what you wanted
Part 8: Cleanup
Step 29: Stop all containers
docker stop purple red blue
Step 30: Remove containers
docker rm purple red blue
Step 31: Remove the network
docker network rm labnet
Step 32: Verify cleanup
docker ps -a
docker network ls
Only default networks (bridge, host, none) should remain.
Step 33: Verify host bridge removed
ip link show | grep br-
The custom bridge (br-a1b2c3d4e5f6) should be gone, so the grep returns nothing; only the default docker0 bridge remains on the host.
Deliverable E
Provide screenshots showing:
- Output of docker network inspect labnet showing all three containers with their IPs (before running the curl tests)
- Output of curl http://localhost:8080/health (health check response)
- Output of curl http://localhost:8080/red/ (Red service HTML) with the red-styled content visible
- Output of curl http://localhost:8080/blue/ (Blue service HTML) with the blue-styled content visible
- Output of docker inspect red --format='{{.HostConfig.Memory}}' showing the memory limit in bytes
- Output of docker inspect blue --format='{{.HostConfig.NanoCpus}}' showing the CPU limit in nanocpus
- Output of docker exec purple nslookup red showing DNS resolution
Reference: Docker Command Quick Guide
This section provides a quick reference for Docker commands introduced in this lab.
Container Lifecycle
# Run container (create + start)
docker run [OPTIONS] IMAGE [COMMAND]
docker run -d nginx # Detached (background)
docker run -it ubuntu bash # Interactive with terminal
docker run --name mycontainer nginx # Assign name
# List containers
docker ps # Running only
docker ps -a # All (including stopped)
docker ps -q # Show only IDs (quiet)
# Start stopped container
docker start CONTAINER
# Stop running container
docker stop CONTAINER # Graceful (SIGTERM)
docker kill CONTAINER # Forceful (SIGKILL)
# Restart container
docker restart CONTAINER
# Remove container
docker rm CONTAINER # Must be stopped first
docker rm -f CONTAINER # Force remove (stop + remove)
# Execute command in running container
docker exec CONTAINER COMMAND
docker exec -it CONTAINER bash # Interactive shell
# View container logs
docker logs CONTAINER
docker logs -f CONTAINER # Follow (tail -f style)
# Inspect container (detailed JSON info)
docker inspect CONTAINER
docker inspect CONTAINER --format='{{.State.Status}}'
Image Management
# List images
docker images
docker image ls
# Pull image from registry
docker pull IMAGE[:TAG]
docker pull nginx # Latest tag (default)
docker pull nginx:1.21 # Specific tag
# Remove image
docker rmi IMAGE
docker image rm IMAGE
# View image layers
docker history IMAGE
# Remove unused images
docker image prune # Dangling images
docker image prune -a # All unused images
Network Management
# List networks
docker network ls
# Create network
docker network create NETWORK
docker network create --subnet=10.20.0.0/24 mynet
# Inspect network
docker network inspect NETWORK
# Connect container to network
docker network connect NETWORK CONTAINER
# Disconnect container from network
docker network disconnect NETWORK CONTAINER
# Remove network
docker network rm NETWORK
# Remove unused networks
docker network prune
Volume Management
# List volumes
docker volume ls
# Create volume
docker volume create VOLUME
# Inspect volume
docker volume inspect VOLUME
# Remove volume
docker volume rm VOLUME
# Remove unused volumes
docker volume prune
Resource Management
# CPU limits
--cpus=1.5 # Limit to 1.5 CPU cores
--cpu-shares=512 # Relative priority (default 1024)
--cpuset-cpus=0,1 # Pin to specific cores
# Memory limits
--memory=512m # Hard limit: 512 MB RAM
--memory=2g # Hard limit: 2 GB RAM
--memory-swap=1g # Total memory+swap
--memory-reservation=256m # Soft limit
# I/O limits
--device-read-bps=/dev/sda:10mb # Limit read bandwidth
--device-write-bps=/dev/sda:10mb # Limit write bandwidth
# Process limits
--pids-limit=100 # Max 100 processes
Inspection and Debugging
# View resource usage stats
docker stats
docker stats CONTAINER # Specific container
# View processes in container
docker top CONTAINER
# View port mappings
docker port CONTAINER
# Copy files between host and container
docker cp CONTAINER:/path/to/file ./file # Container → Host
docker cp ./file CONTAINER:/path/to/file # Host → Container
# Stream events from Docker daemon
docker events
# Show disk usage
docker system df
# System-wide cleanup
docker system prune # Remove unused objects
docker system prune -a # Remove all unused objects
docker system prune -a --volumes # Include volumes
Common Troubleshooting
Installation Issues
Problem: docker --version fails after installation
Solution:
# Check if Docker daemon is running
sudo systemctl status docker
# If not running, start it
sudo systemctl start docker
sudo systemctl enable docker
Problem: "Cannot connect to the Docker daemon"
Cause: Docker daemon not running or permission issue
Solution:
# Check daemon status
sudo systemctl status docker
# Check if your user is in docker group
groups | grep docker
# If not, add user and log out/in
sudo usermod -aG docker $USER
Permission Issues
Problem: "permission denied" when running docker commands
Solution:
# Option 1: Add user to docker group (permanent)
sudo usermod -aG docker $USER
# Then log out and log back in
# Option 2: Use sudo (temporary)
sudo docker run hello-world
Problem: Files created by container are owned by root
Cause: Container runs as root (UID 0) by default
Solution:
# Run container as specific user
docker run --user $(id -u):$(id -g) IMAGE
# Or use user namespaces (advanced)
Network Issues
Problem: Container can't access internet
Diagnostics:
# Test from container
docker exec CONTAINER ping -c 2 8.8.8.8
# Check Docker's NAT rules
sudo iptables -t nat -L -n | grep -i docker
# Check IP forwarding
sysctl net.ipv4.ip_forward
Solution:
# Enable IP forwarding if disabled
sudo sysctl -w net.ipv4.ip_forward=1
# Restart Docker daemon
sudo systemctl restart docker
Problem: Containers on custom network can't communicate
Diagnostics:
# Check network exists
docker network ls
# Check containers are on same network
docker network inspect NETWORK
# Test connectivity
docker exec CONTAINER1 ping CONTAINER2_IP
Solution:
# Ensure both containers on same network
docker network connect NETWORK CONTAINER
Problem: Port mapping not working (-p flag)
Diagnostics:
# Check port is actually mapped
docker port CONTAINER
# Check if host port is already in use
sudo ss -tulpn | grep :8080
Solution:
# If port in use, use different host port
docker run -p 8081:80 nginx
# Or stop the conflicting process
sudo fuser -k 8080/tcp
Storage Issues
Problem: "No space left on device"
Diagnostics:
# Check Docker disk usage
docker system df
# Check host disk space
df -h /var/lib/docker
Solution:
# Remove unused objects
docker system prune -a
# Remove specific old images
docker images
docker rmi IMAGE_ID
Problem: Changes in bind mount not visible in container
Cause: Path doesn't exist or wrong path specified
Solution:
# Verify path exists on host
ls -la /path/to/host/directory
# Use absolute paths
docker run -v /absolute/path:/container/path IMAGE
# Or use $PWD for current directory
docker run -v $PWD/relative/path:/container/path IMAGE
Resource Limit Issues
Problem: Container killed unexpectedly
Diagnostics:
# Check container exit code
docker inspect CONTAINER --format='{{.State.ExitCode}}'
# Exit code 137 = killed (often OOM)
# Check logs
docker logs CONTAINER
Solution:
# Increase memory limit
docker run --memory=1g IMAGE
# Or run without memory limit (default)
docker run IMAGE
Problem: Container using too much CPU
Solution:
# Limit CPU usage
docker run --cpus=0.5 IMAGE
# Check what's consuming CPU
docker exec CONTAINER top
Next Steps: Dockerfile and Docker Compose
Congratulations! You've mastered the fundamental OS concepts behind containers:
- Namespaces provide isolation (network, PID, mount, UTS, IPC, user, cgroup)
- cgroups enforce resource limits (CPU, memory, I/O)
- OverlayFS provides efficient layered filesystems
- Bridges and veth pairs connect containers (same as Lab 7!)
- Volumes persist data beyond container lifecycles
However, you've been using pre-built images and running containers with long docker run commands. In production environments, you need:
- Custom images tailored to your applications
- Reproducible builds that can be version-controlled
- Multi-container orchestration with coordinated startup and networking
This is where Dockerfile and Docker Compose come in.
Dockerfile: Building Custom Images
Instead of starting from a base image and manually installing packages, you define your application's environment in a Dockerfile:
# Start from Ubuntu base image
FROM ubuntu:22.04
# Install dependencies
RUN apt-get update && apt-get install -y \
python3 \
python3-pip \
&& rm -rf /var/lib/apt/lists/*
# Copy application code
COPY app.py /app/
COPY requirements.txt /app/
# Install Python dependencies
WORKDIR /app
RUN pip3 install -r requirements.txt
# Expose port
EXPOSE 8000
# Define startup command
CMD ["python3", "app.py"]
Build the image:
docker build -t myapp:1.0 .
Run containers from your custom image:
docker run -d -p 8000:8000 myapp:1.0
Benefits:
- Reproducibility: Same Dockerfile always builds identical image
- Version control: Dockerfile lives in git alongside code
- Documentation: Dockerfile explicitly documents dependencies and setup
- Automation: CI/CD pipelines can build images automatically
Common Dockerfile instructions:
- FROM: Base image to start from
- RUN: Execute a command during build (install packages, etc.)
- COPY/ADD: Copy files from the host into the image
- WORKDIR: Set the working directory
- ENV: Set environment variables
- EXPOSE: Document which ports the application uses
- CMD: Default command to run when the container starts
- ENTRYPOINT: Configure the container as an executable
Best practices:
- Use specific image tags (not latest)
- Minimize layers (combine RUN commands)
- Leverage the build cache (order instructions from least to most frequently changing)
- Use a .dockerignore file (like .gitignore for Docker builds)
- Don't run as root (use the USER instruction)
- Use multi-stage builds for smaller images
Docker Compose: Multi-Container Orchestration
Instead of running multiple docker run commands with complex options, define your entire infrastructure in a docker-compose.yml file:
version: '3.8'
services:
# Purple reverse proxy
purple:
image: caddy
ports:
- "8080:80"
volumes:
- ./purple:/etc/caddy
networks:
labnet:
ipv4_address: 10.10.0.10
depends_on:
- red
- blue
# Red backend
red:
image: caddy
volumes:
- ./red:/etc/caddy
- ./red:/usr/share/caddy
networks:
labnet:
ipv4_address: 10.10.0.2
deploy:
resources:
limits:
memory: 256M
# Blue backend
blue:
image: caddy
volumes:
- ./blue:/etc/caddy
- ./blue:/usr/share/caddy
networks:
labnet:
ipv4_address: 10.10.0.3
deploy:
resources:
limits:
cpus: '0.5'
networks:
labnet:
driver: bridge
ipam:
config:
- subnet: 10.10.0.0/24
Start entire infrastructure:
docker compose up
Or in detached mode:
docker compose up -d
Stop everything:
docker compose down
View logs:
docker compose logs -f
Benefits:
- Declarative: Describe desired state, not imperative commands
- Single source of truth: One file defines entire application
- Easy to share: Colleagues can run docker compose up and get an identical environment
- Development/production parity: The same compose file works everywhere
- Automatic networking: Compose creates network and DNS automatically
This is the production-ready way to deploy multi-container applications.
Further learning:
- Dockerfile: Learn advanced instructions, multi-stage builds, build arguments
- Docker Compose: Learn service dependencies, health checks, scaling
- Docker Swarm: Docker's built-in orchestration for multi-host deployments
- Kubernetes: Industry-standard container orchestration platform
- Container registries: Push images to Docker Hub, GitHub Container Registry, or private registries
- Security: Image scanning, rootless Docker, seccomp profiles, AppArmor
These topics build on the solid foundation you've established in this lab. You now understand what containers actually are at the OS level—the rest is learning tools that make containers easier to build and deploy.
Deliverables and Assessment
Submit a single PDF document containing all deliverables from Exercises A through E, organized with clear section headers matching the exercise labels.
Required deliverables:
- Deliverable A: Installation verification (screenshots)
- Deliverable B: Hello World and namespace inspection (screenshots)
- Deliverable C: Fedora exploration (screenshots)
- Deliverable D: Persistent storage (screenshots)
- Deliverable E: Multi-container infrastructure (screenshots)
Additional Resources
This lab introduced containerization using Docker, demonstrating how Linux kernel features (namespaces, cgroups, OverlayFS) are composed to create isolated application environments. You've seen that Docker is not magic—it's sophisticated automation of kernel primitives you already understand from previous labs.
For Further Study
Container Internals:
- Deep dive into runc (the OCI runtime that Docker uses)
- Understanding containerd (container runtime layer)
- Linux capabilities and security contexts
- AppArmor and SELinux profiles for containers
- User namespaces and rootless containers
- Seccomp profiles for system call filtering
Dockerfile Best Practices:
- Multi-stage builds for smaller images
- Build cache optimization strategies
- Layer ordering for efficient rebuilds
- .dockerignore for excluding files
- Security scanning with Trivy or Clair
- Distroless and minimal base images
Docker Compose Advanced:
- Service dependencies with health checks
- Environment-specific overrides
- Secrets management
- Multiple compose files (base + overrides)
- Named volumes vs bind mounts
- Integration testing with compose
Container Orchestration:
- Kubernetes architecture and concepts
- kubectl basics
- Pods, Services, Deployments, ConfigMaps
- Kubernetes networking (CNI plugins)
- Helm charts for package management
- Service mesh (Istio, Linkerd)
Container Networking Deep Dive:
- Bridge vs host vs macvlan networking
- Overlay networks for multi-host communication
- Network plugins and CNI specification
- Load balancing strategies
- Service discovery patterns
- Network policies and segmentation
Security:
- Container escape techniques and mitigations
- Image vulnerability scanning
- Runtime security monitoring (Falco)
- Least privilege principles
- Supply chain security
- Harbor for secure image registry
Performance:
- Container resource tuning
- Storage drivers (overlay2, btrfs, zfs)
- Network performance optimization
- Monitoring with Prometheus and Grafana
- Distributed tracing with Jaeger
Production Operations:
- Logging strategies (centralized logging)
- Health checks and readiness probes
- Rolling updates and blue-green deployments
- Backup and disaster recovery
- CI/CD integration (Jenkins, GitLab CI)
- Container registries (private, public)
Relevant Manual Pages
man 7 namespaces # Linux namespaces overview
man 7 cgroups # Control groups overview
man 2 unshare # Create namespaces
man 2 setns # Enter existing namespace
man 8 nsenter # Run program in namespace
man 1 docker # Docker CLI reference
man 1 docker-run # docker run reference
man 1 docker-compose # Docker Compose reference
Online Resources
Official Documentation:
- Docker Documentation - Comprehensive official docs
- Docker Hub - Public image registry
- OCI Specification - Open Container Initiative standards
- containerd Documentation - Container runtime
Tutorials and Guides:
- Docker Curriculum - Beginner-friendly tutorial
- Play with Docker - Free interactive playground
- Kubernetes Documentation - K8s official docs
- Dockerfile Best Practices
Deep Dives:
- Understanding Namespaces (LWN) - Series on Linux namespaces
- How Containers Work - Excellent blog post by Julia Evans
- Container Security Best Practices - Security-focused guide
Community:
- Docker Forums - Community support
- Stack Overflow - Docker Tag
- Reddit r/docker
- CNCF Slack - Cloud Native Computing Foundation community
Practice Environments:
- Katacoda - Interactive Docker and Kubernetes scenarios
- Play with Kubernetes - K8s playground
- KillerCoda - Interactive learning scenarios