OS Lab 7 - The Network Subsystem
Objectives
Upon completion of this lab, you will be able to:
- Explain how the kernel represents network abstractions: Interfaces, Addresses, Routes, and Neighbors.
- Use the modern iproute2 suite (ip link, ip addr, ip route, ip neigh) to manage kernel network objects.
- Create isolated network environments using Network Namespaces to simulate multiple virtual computers.
- Construct a complete virtual network topology (Switch, Router, NAT) from scratch using OS primitives.
- Diagnose connectivity issues by inspecting the ARP cache and Routing table to understand packet delivery.
- Understand the relationship between Layer 2 (MAC addresses) and Layer 3 (IP addresses) networking.
Introduction
In Lab 6, we explored Inter-Process Communication (IPC) mechanisms that allow processes to exchange data and coordinate their actions. We touched briefly on Sockets, which extend IPC beyond a single machine by enabling communication across network boundaries. However, using sockets effectively requires understanding what happens beneath the abstraction: how does data actually travel from one machine to another?
The answer lies in the kernel's network stack, a complex subsystem responsible for managing network interfaces, addressing, routing, and protocol handling. Usually, the network stack is configured automatically by background services like NetworkManager, systemd-networkd, or DHCP clients. These tools shield users from complexity, but they also obscure the fundamental mechanisms at work.
In this lab, we peel back those layers to interact directly with the kernel's networking subsystem. We will not use physical network cables or hardware switches. Instead, we will use the operating system itself to manufacture virtual hardware—virtual network cards, virtual cables, and virtual switches. By constructing networks programmatically, we gain deep insight into how the kernel manages connectivity, routing, and address resolution.
This lab is structured around a progression: we start with a simple point-to-point connection between two virtual machines, then evolve it into a switched network with multiple clients, routing capabilities, and internet access via Network Address Translation (NAT). Along the way, we examine the kernel's internal state at each step to understand exactly how packets flow through the system.
Understanding the network subsystem is essential for system administration, DevOps, container orchestration (Docker, Kubernetes), network troubleshooting, and security. These same primitives underlie modern cloud networking, virtual private networks, and service meshes.
Prerequisites
System Requirements
A running Linux machine with root privileges (via sudo).
Required Packages
The following packages must be installed:
sudo apt update
sudo apt install -y iproute2 iputils-ping traceroute nftables
- iproute2: The modern suite for network configuration, replacing legacy tools like ifconfig and route. Provides the ip command.
- iputils-ping: Provides the ping utility for testing connectivity.
- traceroute: Maps the path packets take through networks.
- nftables: The modern Linux firewall/NAT framework.
Knowledge Prerequisites
You should be familiar with:
- Process concepts from Lab 3 (PIDs, process hierarchy)
- File permissions from Lab 4 (execute bit, ownership)
- Bash scripting from Lab 5 (shebangs, variables, loops, functions)
- IPC concepts from Lab 6 (understanding of how processes communicate)
The Network Subsystem: Kernel Fundamentals
What is a Network?
To the Linux kernel, a "Network" is not a physical thing—it's an abstraction representing a group of computers that can communicate with each other directly, without requiring a router as an intermediary. This direct communication capability is what defines a network segment or broadcast domain.
A single computer can participate in multiple networks simultaneously. It simply needs a separate Network Interface for each network it wants to join. Think of interfaces as "plugs" or "sockets" that connect your computer to different communication channels.
The Four Pillars of Networking
The kernel's network subsystem is built on four fundamental abstractions. Understanding these is key to mastering network configuration:
- Interfaces: The physical or virtual network adapters that can send and receive packets. Each interface represents a connection point to a network.
- Addresses: IP addresses assigned to interfaces, giving them identities on the network. Addresses define "who" you are.
- Routes: Kernel rules that determine which interface should be used to reach a given destination. Routes define "how to get there."
- Neighbors: The kernel's cache mapping IP addresses to MAC addresses for direct communication. Neighbors define "where exactly on this wire."
These four abstractions work together. When you send a packet to an IP address:
- The kernel consults the Routing Table to determine which Interface to use.
- If the route indicates the destination is local (directly reachable), the kernel consults the Neighbor Table to find the MAC address.
- If the MAC address is unknown, the kernel uses ARP (Address Resolution Protocol) to discover it.
- The kernel constructs an Ethernet frame with the destination MAC address and sends it out the chosen Interface.
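The first two steps above can be sketched in bash. This is a pure-arithmetic illustration, not a real kernel interface; the function names route_decision and ip_to_int are our own.

```shell
#!/usr/bin/env bash
# Illustrative only: decide "direct vs. gateway" the way the kernel's
# first two steps do, by masking both addresses with the subnet mask.

# Convert a dotted-quad IPv4 address to a 32-bit integer.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

# route_decision LOCAL_IP PREFIX DEST_IP -> prints "direct" or "gateway"
route_decision() {
  local mask=$(( (0xFFFFFFFF << (32 - $2)) & 0xFFFFFFFF ))
  local local_net=$(( $(ip_to_int "$1") & mask ))
  local dest_net=$(( $(ip_to_int "$3") & mask ))
  if (( local_net == dest_net )); then
    echo direct    # same subnet: consult the Neighbor Table / ARP next
  else
    echo gateway   # different subnet: look for a route to a gateway
  fi
}

route_decision 10.0.0.1 24 10.0.0.50   # prints: direct
route_decision 10.0.0.1 24 8.8.8.8     # prints: gateway
```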
The iproute2 Suite: Modern Network Management
Historically, Linux used separate commands for each aspect of networking: ifconfig for interfaces, route for routing, arp for neighbors. These tools are now considered legacy. Modern Linux uses the unified iproute2 suite, centered around the ip command.
The ip command uses a consistent syntax:
ip OBJECT COMMAND [OPTIONS]
Where OBJECT is one of:
- link: Network interfaces (Layer 2)
- addr or address: IP addresses (Layer 3)
- route: Routing table entries
- neigh or neighbour: ARP/Neighbor cache
Common commands:
- ip link show: List all network interfaces
- ip addr show: Show IP addresses assigned to interfaces
- ip route show: Display the routing table
- ip neigh show: Display the neighbor (ARP) cache
The beauty of ip is that it provides a consistent, scriptable interface to the kernel's network state. Unlike older tools, ip is designed for automation and parsing by scripts.
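As a small illustration of that scriptability: because ip -o (one-line mode) emits one record per line, its output is easy to post-process with awk. The here-doc text below is canned sample output modeled on ip -o link show, not live data, so the parsing logic can be seen in isolation.

```shell
#!/usr/bin/env bash
# Parse "name mtu" pairs from one-line-per-interface output.
# `sample` is canned text shaped like `ip -o link show` output.
sample='1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP'

list_mtus() {
  awk '{
    name = $2; sub(/:$/, "", name)            # field 2 is "lo:" -> strip colon
    for (i = 1; i < NF; i++)
      if ($i == "mtu") print name, $(i + 1)   # value follows the literal "mtu"
  }'
}

echo "$sample" | list_mtus
# On a live system you would pipe real output in: ip -o link show | list_mtus
```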
Network Namespaces: Virtual Computers
A Network Namespace (often abbreviated "netns") is a kernel feature that creates an isolated copy of the network stack. Each namespace has its own:
- Set of network interfaces
- Routing table
- ARP cache
- Firewall rules
- Socket listening ports
From the kernel's perspective, a network namespace is effectively a separate "Virtual Computer" with its own completely independent networking configuration. Processes running inside a namespace cannot see or interact with interfaces or connections in other namespaces (unless explicitly bridged).
Your Linux host runs in the "default" or "root" namespace. When you run ip link show normally, you see the default namespace's interfaces. But we can create new namespaces to simulate remote machines for testing, all on a single physical computer.
Network namespaces are the foundation of container networking. When you run a Docker container, Docker creates a new network namespace for it, giving the container its own isolated network stack.
Commands for working with namespaces:
- ip netns add NAME: Create a new namespace
- ip netns list: List all namespaces
- ip netns exec NAME COMMAND: Execute a command inside a namespace
- ip netns delete NAME: Remove a namespace
When you execute a command with ip netns exec client_ns ip link, you're running the ip link command inside the client_ns namespace, so it sees only that namespace's interfaces.
Hands-on Exercises
In this lab, we will start by building a simple direct connection between two virtual computers, then evolve it step-by-step into a complex switched network with routing and NAT.
Exercise 1: Network Interfaces and Virtual Cables
Theory: The Virtual Ethernet (veth) Pair
Physical networks require network interface cards (NICs) and physical cables. Virtual networks require virtual equivalents. The Linux kernel provides several types of virtual interfaces:
- veth (Virtual Ethernet) pairs: A veth pair is like a virtual patch cable with two ends. Packets sent into one end immediately come out the other end. veth pairs are the fundamental building block for connecting network namespaces.
- bridges: Virtual switches that connect multiple interfaces together.
- tun/tap devices: Virtual interfaces used by userspace programs (like VPNs).
- vlan devices: Virtual interfaces representing 802.1Q VLAN tags.
In this exercise, we focus on veth pairs. When you create a veth pair, the kernel creates two interfaces that are permanently wired together. You can place these interfaces in different namespaces, effectively running a "cable" between two virtual machines.
Every network interface, physical or virtual, has several properties:
- Name: The interface identifier (e.g., eth0, v-host)
- MAC Address: A Layer 2 hardware address (48 bits, typically written as six hex pairs like aa:bb:cc:dd:ee:ff)
- State: UP (enabled) or DOWN (disabled)
- MTU: Maximum Transmission Unit, the largest packet size the interface can handle
Even virtual interfaces have MAC addresses. The kernel auto-generates them, though you can set them manually if needed.
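To see what "auto-generated" means, here is a bash sketch that builds a random MAC address the same general way: six random octets, with the locally-administered bit set and the multicast bit cleared in the first octet. The helper name random_mac is ours; the kernel's actual generator is internal.

```shell
#!/usr/bin/env bash
# Generate a random unicast, locally-administered MAC address.
random_mac() {
  local bytes=() i
  for i in 1 2 3 4 5 6; do
    bytes+=( $(( RANDOM % 256 )) )
  done
  # First octet: set the locally-administered bit (0x02),
  # clear the multicast bit (0x01).
  bytes[0]=$(( (bytes[0] | 0x02) & 0xFE ))
  printf '%02x:%02x:%02x:%02x:%02x:%02x\n' "${bytes[@]}"
}

random_mac
# To set one by hand later (requires root):
#   sudo ip link set dev v-host address "$(random_mac)"
```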
Practice: Creating Your First Virtual Network
We will create two virtual computers: your host (running in the default namespace) and a client (running in a new namespace called client_ns). We'll connect them with a veth pair.
Step 1: Examine Your Current Network Configuration
Before we begin constructing virtual networks, let's see what we're starting with:
ip link show
You should see at least two interfaces:
- lo: The loopback interface (127.0.0.1), used for local communication within the machine
- eth0 (or ens33, enp0s3, etc.): Your primary physical (or virtualized) network interface connected to the outside world
Note the MAC addresses, the state (UP or DOWN), and the MTU (typically 1500 bytes for Ethernet).
Each interface has an "ifindex" number, which is the kernel's internal identifier. You'll see output like:
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN mode DEFAULT group default qlen 1000
link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
2: eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP mode DEFAULT group default qlen 1000
link/ether 52:54:00:12:34:56 brd ff:ff:ff:ff:ff:ff
The flags like <BROADCAST,MULTICAST,UP,LOWER_UP> indicate interface capabilities and state. UP means the interface is enabled, LOWER_UP means the link layer is connected.
Step 2: Create a Virtual Ethernet Cable
Create a veth pair. This is like manufacturing a network cable with two connectors:
sudo ip link add dev v-host type veth peer name v-client
Let's break down this command:
- ip link add: Create a new interface
- dev v-host: Name one end "v-host"
- type veth: The interface type is a virtual ethernet pair
- peer name v-client: Name the other end "v-client"
Verify the creation:
ip link show
You should now see two additional interfaces: v-host and v-client. Both will be in state DOWN initially. Notice that each has been automatically assigned a MAC address by the kernel. These MAC addresses are randomly generated to avoid collisions.
Important: The two interfaces are linked. Packets sent to v-host will appear on v-client, and vice versa. This is how we'll connect our host to the virtual client machine.
Step 3: Create a Virtual Computer (Network Namespace)
Create a new network namespace to simulate a separate computer:
sudo ip netns add client_ns
Verify it exists:
ip netns list
You should see client_ns in the output.
This namespace is now a completely isolated network environment. It has its own set of interfaces (initially just a loopback), its own routing table, and its own ARP cache.
Let's peek inside to see what interfaces exist in the new namespace:
sudo ip netns exec client_ns ip link show
You should see only the loopback interface (lo). The v-client interface we created is still in the default namespace.
Step 4: Connect the Virtual Cable
Now we'll "plug" one end of our virtual cable into the virtual computer. We do this by moving the v-client interface from the default namespace into client_ns:
sudo ip link set v-client netns client_ns
This command transfers ownership of the v-client interface to the client_ns namespace.
Step 5: Verify the Topology
Verify the change in the default namespace:
ip link show
Notice that v-client is now gone from the default namespace. It has moved to client_ns.
Verify inside the namespace:
sudo ip netns exec client_ns ip link show
Now you should see both lo and v-client inside the namespace.
At this point, we have successfully created a virtual topology:
- Host (default namespace): Has interface v-host
- Client (client_ns namespace): Has interface v-client
- Connection: v-host and v-client are connected by a virtual cable
However, neither interface is UP yet, and neither has an IP address. They cannot communicate yet.
Deliverable A
After following exercise 1, provide the output of the following commands:
ip link show | grep v-host
sudo ip netns exec client_ns ip link show | grep v-client
Exercise 2: IP Addresses and Subnets
Theory: Identity and Reachability
Network interfaces allow computers to physically connect to a network, but to actually communicate, they need identities—IP addresses. An IP address serves two purposes:
- Identity: It uniquely identifies a device on a network (like a phone number)
- Routing: It indicates which network the device belongs to (like an area code)
An IP address by itself is not enough. We also need a subnet mask (or prefix length) that defines the "scope" of the local network—which other IP addresses are directly reachable without routing.
Understanding Subnet Masks and CIDR Notation
A subnet mask divides an IP address into two parts:
- Network portion: Identifies which network the address belongs to
- Host portion: Identifies the specific device within that network
CIDR (Classless Inter-Domain Routing) notation uses a slash followed by the number of bits in the network portion. For example:
- 10.0.0.5/24: The first 24 bits (10.0.0) are the network, leaving 8 bits for hosts (256 addresses: 10.0.0.0 - 10.0.0.255)
- 192.168.1.10/16: The first 16 bits (192.168) are the network, leaving 16 bits for hosts (65,536 addresses)
- 172.16.0.1/8: The first 8 bits (172) are the network, leaving 24 bits for hosts (16,777,216 addresses)
When you assign an IP address with a prefix length (e.g., 10.0.0.1/24), the kernel automatically understands that all addresses matching the network portion (10.0.0.x) are "local" or "directly reachable."
The Subnet Decision: How the Kernel Routes Locally
When a process wants to send a packet to a destination IP address, the kernel must decide: "Is this destination a neighbor on one of my local networks, or do I need to forward it to a router?"
The kernel's logic:
- For each interface with an assigned IP address and subnet mask, calculate the network address
- Check if the destination IP falls within any of these networks
- If yes → the destination is directly reachable; send the packet out that interface using the destination's MAC address
- If no → consult the routing table for a gateway to forward the packet through
Example: Your interface has IP 10.0.0.1/24
- Network: 10.0.0.0/24 (all addresses from 10.0.0.0 to 10.0.0.255)
- If you ping 10.0.0.50: The kernel recognizes this is in the local subnet and sends directly
- If you ping 8.8.8.8: The kernel recognizes this is NOT local and looks for a route to a gateway
This automatic local route is created when you assign an IP address. You don't need to manually configure routes for local subnets.
Static vs. Dynamic Configuration
IP addresses can be configured in two ways:
- Static: Manually assigned by an administrator using ip addr add
- Dynamic: Automatically assigned by a DHCP server
In production systems, DHCP is common because it allows centralized management of address allocation. In this lab, we use static configuration to understand exactly what the kernel is doing.
Practice: Assigning IP Addresses
We will assign IP addresses to both ends of our virtual cable, placing them in the same subnet so they can communicate directly.
Step 1: Understand the Current State
Check if any IP addresses are assigned to v-host:
ip addr show v-host
You should see only the MAC address, no IP addresses. The interface is also DOWN (note state DOWN).
Similarly, check the client:
sudo ip netns exec client_ns ip addr show v-client
Same situation: no IP address, interface DOWN.
Step 2: Assign IP Address to the Client
We'll assign 10.0.0.2/24 to the client's interface. The /24 means the first 24 bits are the network portion, so this device considers all addresses from 10.0.0.0 to 10.0.0.255 as local neighbors.
sudo ip netns exec client_ns ip addr add 10.0.0.2/24 dev v-client
Verify:
sudo ip netns exec client_ns ip addr show v-client
You should see:
inet 10.0.0.2/24 scope global v-client
Step 3: Bring the Client Interface UP
Interfaces are created in the DOWN state by default. We must explicitly enable them:
sudo ip netns exec client_ns ip link set v-client up
Also bring up the loopback interface (required for many network operations):
sudo ip netns exec client_ns ip link set lo up
Verify the state changed to UP:
sudo ip netns exec client_ns ip link show v-client
Look for state UP in the output.
Step 4: Assign IP Address to the Host
Now configure the host end of the cable with 10.0.0.1/24 (same subnet):
sudo ip addr add 10.0.0.1/24 dev v-host
Bring it up:
sudo ip link set v-host up
Verify:
ip addr show v-host
Step 5: Verify Connectivity
The moment of truth. Can the host reach the client?
ping -c 3 10.0.0.2
If successful, you'll see output like:
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.045 ms
64 bytes from 10.0.0.2: icmp_seq=2 ttl=64 time=0.038 ms
64 bytes from 10.0.0.2: icmp_seq=3 ttl=64 time=0.040 ms
Congratulations! You've established your first virtual network connection. The host and client can now communicate.
Can the client ping the host?
sudo ip netns exec client_ns ping -c 3 10.0.0.1
This should also succeed. Communication is bidirectional.
Why Does This Work?
Let's trace what happens when you ping 10.0.0.2 from the host:
- Destination Analysis: The kernel looks at the destination IP 10.0.0.2 and your interface v-host with IP 10.0.0.1/24. It calculates: "10.0.0.2 is in the 10.0.0.0/24 network, which matches my v-host subnet. This is a local destination."
- MAC Address Resolution: The kernel needs the MAC address of 10.0.0.2. It doesn't know it yet, so it broadcasts an ARP request on v-host: "Who has 10.0.0.2? Please tell 10.0.0.1."
- ARP Reply: The client (in client_ns) receives the ARP request on v-client, recognizes its own IP, and replies: "10.0.0.2 is at MAC address aa:bb:cc:dd:ee:ff" (the MAC of v-client).
- Packet Transmission: Now the host knows the MAC address. It constructs an ICMP Echo Request packet, wraps it in an Ethernet frame addressed to the client's MAC, and sends it out v-host.
- Packet Reception: The packet instantly appears on v-client (because they're a veth pair), travels up the network stack in client_ns, and the kernel processes the ICMP Echo Request.
- Reply: The client sends an ICMP Echo Reply back to 10.0.0.1, using the same process in reverse.
All of this happens in milliseconds, managed entirely by the kernel.
Deliverable B
After following exercise 2, show the output of the following commands:
ip addr show v-host
sudo ip netns exec client_ns ip addr show v-client
ping -c 3 10.0.0.2
Exercise 3: Neighbor Discovery and ARP
So far, we've worked with IP addresses, but the physical network uses MAC addresses. How does the kernel translate between them?
Theory
The Address Resolution Protocol (ARP)
ARP is the protocol that maps IP addresses to MAC addresses on Ethernet networks. When the kernel needs to send a packet to an IP address that it knows is on a local network, it must first discover the target's MAC address.
The ARP process:
- The kernel checks its Neighbor Table (ARP cache) for an existing entry mapping the destination IP to a MAC address
- If found, use it
- If not found:
- Broadcast an ARP Request packet on the local network: "Who has IP X.X.X.X? Tell Y.Y.Y.Y."
- Wait for an ARP Reply: "IP X.X.X.X is at MAC aa:bb:cc:dd:ee:ff"
- Cache this mapping in the Neighbor Table for future use
ARP is a broadcast protocol at Layer 2. Every device on the local network segment receives the ARP request, but only the device with the matching IP address responds.
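The cache-then-ask logic above can be sketched with a bash associative array standing in for the kernel's Neighbor Table. The function name and the canned MAC reply are invented for illustration.

```shell
#!/usr/bin/env bash
declare -A arp_cache   # stand-in for the kernel's Neighbor Table

resolve_mac() {  # resolve_mac IP -> cache hit, or simulate an ARP exchange
  local ip=$1
  if [[ -n "${arp_cache[$ip]+x}" ]]; then
    echo "hit: ${arp_cache[$ip]}"                  # found in the cache
  else
    echo "miss: broadcasting ARP request for $ip"  # who-has broadcast
    arp_cache[$ip]="aa:bb:cc:dd:ee:ff"             # pretend a reply came; cache it
  fi
}

resolve_mac 10.0.0.2   # first lookup: cache miss, ARP request goes out
resolve_mac 10.0.0.2   # second lookup: answered from the cache
```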
The Neighbor Table
The kernel maintains a Neighbor Table (also called the ARP cache) mapping IP addresses to MAC addresses. Entries have states:
- REACHABLE: The entry is valid and recently confirmed
- STALE: The entry is old but assumed still valid
- DELAY: Awaiting confirmation
- INCOMPLETE: ARP request sent, waiting for reply
- FAILED: ARP request failed, no reply received
Entries automatically expire after a timeout (typically a few minutes of inactivity) to handle cases where devices change MAC addresses or leave the network.
You can view the Neighbor Table with:
ip neigh show
Or the older command:
arp -n
Why ARP Matters for Troubleshooting
Many network connectivity issues that seem like "routing problems" are actually ARP problems:
- The kernel can't deliver packets because it never receives an ARP reply
- Stale ARP entries point to the wrong MAC address after a device changes
- ARP conflicts occur when two devices claim the same IP address
Understanding ARP is essential for diagnosing these issues.
Practice: Inspecting the ARP Cache
Step 1: View Current Neighbors
Check your neighbor table:
ip neigh show
You should see an entry for 10.0.0.2 (the client) with its MAC address and state REACHABLE:
10.0.0.2 dev v-host lladdr aa:bb:cc:dd:ee:ff REACHABLE
This entry was created when you pinged the client in Exercise 2. The "lladdr" (link-layer address) is the MAC address.
Also check from the client's perspective:
sudo ip netns exec client_ns ip neigh show
You should see an entry for 10.0.0.1 (the host).
Step 2: Flush the ARP Cache
Let's simulate a situation where the kernel has forgotten the MAC address mappings:
sudo ip neigh flush all
Verify they're gone:
ip neigh show
The entry for 10.0.0.2 should be absent or in state FAILED or INCOMPLETE.
Step 3: Watch ARP in Action
Now ping the client again:
ping -c 1 10.0.0.2
This ping should succeed. Immediately check the neighbor table:
ip neigh show
The entry for 10.0.0.2 has been automatically recreated. The kernel performed ARP resolution transparently during the ping.
Common ARP Issues
When troubleshooting connectivity:
- If ping shows "Destination Host Unreachable", check ip neigh. If the entry is FAILED, the remote host isn't responding to ARP requests (possibly down, wrong subnet, or firewall blocking ARP).
- If ping works but other services don't, ARP isn't the problem; look at routing or firewall rules.
- If connectivity is intermittent, check for duplicate IP addresses (two devices responding to the same IP).
Deliverable C
After following exercise 3, provide the output of:
ip neigh show
sudo ip neigh flush all
ip neigh show # Should be empty or show failed entries
ping -c 1 10.0.0.2
ip neigh show # Should show REACHABLE entry
Exercise 4: Routing and Gateway Configuration
Theory: Beyond the Local Network
So far, our host and client can communicate because they're in the same subnet (10.0.0.0/24). But what happens when you try to reach an IP address that doesn't match any of your local subnets? For example, how do you reach the internet (like Google's DNS server at 8.8.8.8)?
The Routing Table
The routing table is a kernel data structure that maps destination networks to interfaces and gateways. When the kernel needs to send a packet, it consults this table to decide where to send it.
Each routing table entry specifies:
- Destination network: Which IP addresses this route applies to (e.g., 10.0.0.0/24 or 0.0.0.0/0 for default)
- Gateway: The IP address of the router to forward packets through (or "direct" if no gateway needed)
- Interface: Which network interface to send packets out
- Metric: Priority when multiple routes match (lower is preferred)
View the routing table:
ip route show
Or the older command:
route -n
Example output:
default via 192.168.1.1 dev eth0 metric 100
10.0.0.0/24 dev v-host proto kernel scope link src 10.0.0.1
192.168.1.0/24 dev eth0 proto kernel scope link src 192.168.1.50
Let's decode this:
- default via 192.168.1.1 dev eth0: For any destination not matched by more specific routes, forward packets to the gateway at 192.168.1.1 through interface eth0. This is called the "default route" or "default gateway."
- 10.0.0.0/24 dev v-host: For destinations in 10.0.0.0/24, send directly out v-host (no gateway needed). This route was automatically created when we assigned 10.0.0.1/24 to v-host.
- 192.168.1.0/24 dev eth0: Local network, send directly out eth0.
The Default Route
The default route (destination 0.0.0.0/0 or shown as default) is the "route of last resort." It's a catch-all that matches any destination not covered by more specific routes. Without a default route, the kernel can only reach directly connected networks.
The gateway specified in the default route must itself be reachable via a local network. You can't set a gateway to an IP address that the kernel doesn't know how to reach.
How Routing Decisions Work
When sending a packet to destination IP D:
- The kernel searches the routing table for the most specific match (longest prefix match)
- If a match is found, use that route's interface and gateway
- If no match is found and no default route exists, return "Network is unreachable" error
Example: Routing to 8.8.8.8
- Check local routes: Does 8.8.8.8 match 10.0.0.0/24? No. Does it match 192.168.1.0/24? No.
- Check default route: Yes, default matches everything
- Use the default route: Send packet to gateway 192.168.1.1 via eth0
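Longest-prefix matching can be sketched in bash over a toy routing table. The entries and action labels below are invented, and the kernel uses an optimized lookup structure rather than a linear scan, but the selection rule is the same.

```shell
#!/usr/bin/env bash
# Toy routing table: "network/prefix action" (contents are invented).
routes=( "10.0.0.0/24 dev-v-host"
         "192.168.1.0/24 dev-eth0"
         "0.0.0.0/0 via-192.168.1.1" )

ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) | (b << 16) | (c << 8) | d ))
}

lookup() {  # lookup DEST_IP -> prints the action of the most specific match
  local dest entry net rest prefix action mask best_len=-1 best="unreachable"
  dest=$(ip_to_int "$1")
  for entry in "${routes[@]}"; do
    net=${entry%%/*}; rest=${entry#*/}
    prefix=${rest%% *}; action=${rest#* }
    mask=$(( prefix ? (0xFFFFFFFF << (32 - prefix)) & 0xFFFFFFFF : 0 ))
    if (( (dest & mask) == $(ip_to_int "$net") && prefix > best_len )); then
      best_len=$prefix; best=$action   # longer prefix = more specific = wins
    fi
  done
  echo "$best"
}

lookup 10.0.0.50   # matches 10.0.0.0/24 and default; /24 wins: dev-v-host
lookup 8.8.8.8     # only the default route matches: via-192.168.1.1
```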
Network Address Translation (NAT)
Private IP addresses (like 10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16) are not routable on the public internet. If the client (10.0.0.2) wants to reach Google (8.8.8.8), it faces a problem: Google's servers can't send replies back to 10.0.0.2 because that address is private and potentially used by millions of devices worldwide.
The solution is NAT (Network Address Translation), specifically a technique called "masquerading":
- The client sends a packet to 8.8.8.8 with source IP 10.0.0.2
- The packet reaches the host (acting as a router/gateway)
- The host rewrites the source IP from 10.0.0.2 to its own public IP (e.g., 203.0.113.5)
- The host forwards the packet to the internet
- Google replies to 203.0.113.5
- The host receives the reply, recognizes it belongs to the client's connection, rewrites the destination IP back to 10.0.0.2, and forwards it to the client
This source IP rewriting is called "masquerading" because the host "masks" the private IP behind its own public IP. The host maintains a connection table to track which internal client made which request.
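The connection table can be sketched with a bash associative array keyed by public address and port. The addresses are invented, and keeping the source port unchanged is a simplification we've made here; real masquerading also rewrites ports when two clients pick the same one.

```shell
#!/usr/bin/env bash
declare -A conntrack          # public "ip:port" -> private "ip:port"
PUBLIC_IP="203.0.113.5"       # invented public address for the host

outbound() {  # outbound PRIVATE_IP PORT -> prints the rewritten source
  conntrack["$PUBLIC_IP:$2"]="$1:$2"   # remember which client owns this flow
  echo "$PUBLIC_IP:$2"
}

inbound() {   # inbound PUBLIC_IP PORT -> prints the restored destination
  echo "${conntrack[$1:$2]:-drop}"     # replies with no tracked state are dropped
}

outbound 10.0.0.2 40000      # leaves as 203.0.113.5:40000
inbound  203.0.113.5 40000   # reply rewritten back to 10.0.0.2:40000
inbound  203.0.113.5 55555   # no tracked connection -> drop
```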
IP Forwarding
For the host to act as a router, the kernel must be configured to forward packets between interfaces. By default, Linux does not forward (for security reasons). We enable it with:
sudo sysctl -w net.ipv4.ip_forward=1
This setting tells the kernel: "If a packet arrives on one interface and is destined for an address reachable via another interface, forward it."
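Note that sysctl -w only lasts until reboot. To make forwarding persistent, the setting can be placed in a file under /etc/sysctl.d (the filename below is our choice) and reloaded with sudo sysctl --system:

```
# /etc/sysctl.d/99-lab-forwarding.conf
net.ipv4.ip_forward = 1
```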
Practice: Enabling Internet Access
Step 1: The Problem - No Route
From the client namespace, try to ping Google's DNS server:
sudo ip netns exec client_ns ping -c 2 8.8.8.8
Result: connect: Network is unreachable
Why? Let's check the client's routing table:
sudo ip netns exec client_ns ip route show
You should see only:
10.0.0.0/24 dev v-client proto kernel scope link src 10.0.0.2
This means the client knows how to reach 10.0.0.0/24, but it has no idea how to reach 8.8.8.8. There's no default route.
Step 2: Add a Default Route
Tell the client to use the host (10.0.0.1) as its default gateway:
sudo ip netns exec client_ns ip route add default via 10.0.0.1
Verify:
sudo ip netns exec client_ns ip route show
You should now see:
default via 10.0.0.1 dev v-client
10.0.0.0/24 dev v-client proto kernel scope link src 10.0.0.2
This tells the kernel: "For any destination I don't have a specific route for, send it to 10.0.0.1."
Step 3: The Problem - No Forwarding
Try pinging again:
sudo ip netns exec client_ns ping -c 2 8.8.8.8
It might still fail or timeout. Why? Even though the client sends packets to the host, the host might not be forwarding them. Let's enable IP forwarding on the host:
sudo sysctl -w net.ipv4.ip_forward=1
Verify:
sysctl net.ipv4.ip_forward
Should show: net.ipv4.ip_forward = 1
Step 4: The Problem - No NAT
Try pinging again:
sudo ip netns exec client_ns ping -c 2 8.8.8.8
It still might not work. The packets are being forwarded, but they have source IP 10.0.0.2. The internet routers don't know how to route replies back to private IP addresses. We need NAT.
Step 5: Configure NAT (Masquerade)
We'll use nftables to configure NAT. First, identify your internet-connected interface (replace eth0 if yours is different):
ip route show default
Look for default via X.X.X.X dev INTERFACE. Note the interface name (e.g., eth0).
Now configure NAT to masquerade traffic from the 10.0.0.0/24 subnet going out your internet interface:
sudo nft add table ip nat
sudo nft add chain ip nat postrouting { type nat hook postrouting priority srcnat \; }
sudo nft add rule ip nat postrouting ip saddr 10.0.0.0/24 oifname "eth0" masquerade
Important: Replace "eth0" with your actual internet interface name.
The nftables interface is a complex and powerful kernel tool for firewall and NAT management. Understanding exactly how it works is beyond the scope of this lab. For now, use the commands above as-is.
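For reference, the same three commands can be written as a declarative nftables ruleset file, loadable with sudo nft -f FILE (again, replace eth0 with your actual interface):

```
table ip nat {
    chain postrouting {
        type nat hook postrouting priority srcnat;
        ip saddr 10.0.0.0/24 oifname "eth0" masquerade
    }
}
```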
Verify the rule:
sudo nft list ruleset
Step 6: Success!
Now try pinging from the client:
sudo ip netns exec client_ns ping -c 3 8.8.8.8
Success! The client can now reach the internet.
What Just Happened?
When the client pings 8.8.8.8:
- Client kernel checks routing table, finds default route via 10.0.0.1
- Client sends packet to 10.0.0.1 (the host)
- Host receives packet, checks if IP forwarding is enabled (yes)
- Host checks its routing table for 8.8.8.8, finds default route via internet gateway
- Host's nftables NAT rule rewrites source IP from 10.0.0.2 to host's public IP
- Host forwards packet to internet gateway
- Packet reaches 8.8.8.8, reply comes back to host's public IP
- Host's NAT table recognizes this is a reply to client's connection
- Host rewrites destination IP from host's public IP back to 10.0.0.2
- Host forwards reply to client via v-host interface
All of this happens transparently at wire speed, managed by the kernel.
Deliverable D
After following exercise 4, provide the output of:
sudo ip netns exec client_ns ip route show
sudo ip netns exec client_ns ping -c 3 8.8.8.8
Exercise 5: Virtual Switches (Linux Bridge)
Theory: Hub-and-Spoke vs. Switched Networks
So far, we've created a simple point-to-point connection using a veth pair. This works for connecting two devices, but what if we want to connect multiple devices? We could create veth pairs between every pair of devices, but that's not scalable. A three-device network would need 3 pairs, a four-device network would need 6 pairs, and so on.
The traditional solution is a network switch.
Physical vs. Virtual Switches
A physical Ethernet switch is a hardware device with multiple ports. When it receives a frame on one port, it examines the destination MAC address and forwards the frame only to the port where that MAC address is located. The switch learns MAC address locations by observing source addresses on incoming frames.
A Linux bridge is a software implementation of a network switch. It's a virtual interface that you can "plug" other interfaces into. The bridge learns MAC addresses and forwards frames intelligently, just like a physical switch.
Key characteristics of a bridge:
- Operates at the data-link layer (Layer 2, using MAC addresses)
- Supports multiple connected interfaces
- Learns MAC addresses dynamically
- Provides a single broadcast domain for protocols like ARP
- Transparent to higher-layer protocols such as IP
Creating a Switched Topology
In this exercise, we'll recreate our network with a proper switched architecture:
[Host]
|
| v-host (connected to br-lab)
|
[br-lab] (bridge/switch)
|
+--- v-red-br ~~~ v-red-ns ----> [Red NS]
|
+--- v-blue-br ~~~ v-blue-ns ---> [Blue NS]
The bridge (br-lab) acts as a switch. The host, red namespace, and blue namespace are all "plugged into" different ports of this virtual switch.
Why This Matters
This architecture mirrors real networks:
- Home networks: Your home router has a built-in switch connecting your devices
- Data centers: Switches connect servers to network backbones
- Cloud environments: Virtual switches (like Open vSwitch) connect containers and VMs
- Container orchestration: Docker and Kubernetes use Linux bridges for container networking
Understanding bridges is essential for working with modern virtualization and containerization technologies.
Practice: Building a Switched Network
Step 1: Clean Up Previous Configuration
Remove the old point-to-point setup:
sudo ip netns delete client_ns
sudo ip link delete v-host
The veth pair is automatically cleaned up when we delete v-host (since they're paired).
Verify:
ip netns list
ip link show | grep v-
Both should show no results.
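Note that `ip netns delete` and `ip link delete` fail if the resource is already gone, which would abort a `set -e` script. A guarded cleanup sketch (using the names from this lab; useful groundwork for Challenge 1's `stop` operation) checks for existence first:

```shell
#!/bin/bash
# Guarded cleanup: only delete what actually exists, so the commands can
# be re-run safely even after a partial teardown.
cleanup_ns() {
    local ns="$1"
    if ip netns list | grep -qw "$ns"; then
        sudo ip netns delete "$ns"
        echo "deleted namespace $ns"
    else
        echo "namespace $ns not present, skipping"
    fi
}

cleanup_link() {
    local dev="$1"
    if ip link show "$dev" >/dev/null 2>&1; then
        sudo ip link delete "$dev"
        echo "deleted link $dev"
    else
        echo "link $dev not present, skipping"
    fi
}

cleanup_ns client_ns
cleanup_link v-host
```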
Step 2: Create the Bridge (Virtual Switch)
Create a Linux bridge named br-lab:
sudo ip link add name br-lab type bridge
Bring it up:
sudo ip link set br-lab up
Verify:
ip link show br-lab
You should see a new interface of type bridge in state UP.
Step 3: Create Network Namespaces for Two Clients
Create namespaces for "red" and "blue" clients:
sudo ip netns add red
sudo ip netns add blue
Verify:
ip netns list
Step 4: Create and Connect Red Client
Create a veth pair for the red client:
sudo ip link add v-red-br type veth peer name v-red-ns
Connect one end (v-red-br) to the bridge:
sudo ip link set v-red-br master br-lab
This "plugs" v-red-br into the switch. The master keyword means "this interface is now a port of the bridge."
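Once enslaved, the interface's `ip link show` line gains a `master br-lab` attribute. A small parsing sketch (the sample line below is illustrative; interface indexes and flags will differ on your machine) shows how to pull out the bridge a port belongs to:

```shell
#!/bin/bash
# After "ip link set v-red-br master br-lab", the ip-link output for the
# port includes "master br-lab". Sample line for illustration:
sample='12: v-red-br@if11: <BROADCAST,MULTICAST,UP> mtu 1500 qdisc noqueue master br-lab state UP'

# Print the word following "master", i.e. the owning bridge:
bridge=$(awk '{for (i = 1; i < NF; i++) if ($i == "master") print $(i+1)}' <<< "$sample")
echo "v-red-br is a port of: $bridge"
```

On the live system you would feed it `ip link show v-red-br` instead of the sample string.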
Move the other end into the red namespace:
sudo ip link set v-red-ns netns red
Bring up the bridge-side interface:
sudo ip link set v-red-br up
Assign IP to red client and bring it up:
sudo ip netns exec red ip addr add 10.0.0.2/24 dev v-red-ns
sudo ip netns exec red ip link set v-red-ns up
sudo ip netns exec red ip link set lo up
Step 5: Create and Connect Blue Client
Repeat the same process for blue:
sudo ip link add v-blue-br type veth peer name v-blue-ns
sudo ip link set v-blue-br master br-lab
sudo ip link set v-blue-ns netns blue
sudo ip link set v-blue-br up
sudo ip netns exec blue ip addr add 10.0.0.3/24 dev v-blue-ns
sudo ip netns exec blue ip link set v-blue-ns up
sudo ip netns exec blue ip link set lo up
Step 6: Configure the Host Interface
The host also needs to be connected to the switch. We'll assign an IP address directly to the bridge interface:
sudo ip addr add 10.0.0.1/24 dev br-lab
Why does this work? A bridge can have an IP address, making the host itself a participant on the switched network. This is simpler than creating another veth pair.
Step 7: Verify the Topology
Check bridge status:
ip link show master br-lab
This shows all interfaces connected to the bridge. You should see v-red-br and v-blue-br.
Or use the bridge utility:
bridge link show
Check IP addresses:
ip addr show br-lab
sudo ip netns exec red ip addr show v-red-ns
sudo ip netns exec blue ip addr show v-blue-ns
Step 8: Test Connectivity
Red to Blue (peer-to-peer):
sudo ip netns exec red ping -c 3 10.0.0.3
Blue to Red:
sudo ip netns exec blue ping -c 3 10.0.0.2
Red to Host:
sudo ip netns exec red ping -c 3 10.0.0.1
Blue to Host:
sudo ip netns exec blue ping -c 3 10.0.0.1
All of these should succeed. The bridge is forwarding frames between all three participants.
Host to Red:
ping -c 3 10.0.0.2
Host to Blue:
ping -c 3 10.0.0.3
Step 9: Enable Internet Access for Clients
Add default routes for both clients:
sudo ip netns exec red ip route add default via 10.0.0.1
sudo ip netns exec blue ip route add default via 10.0.0.1
Verify IP forwarding is still enabled (should be from Exercise 4):
sysctl net.ipv4.ip_forward
If not set to 1, enable it:
sudo sysctl -w net.ipv4.ip_forward=1
Ensure NAT is still configured (should be from Exercise 4):
sudo nft list ruleset | grep masquerade
If not present, add it again (replace eth0 with your internet interface):
sudo nft add table ip nat
sudo nft add chain ip nat postrouting { type nat hook postrouting priority srcnat \; }
sudo nft add rule ip nat postrouting ip saddr 10.0.0.0/24 oifname "eth0" masquerade
Test internet access:
sudo ip netns exec red ping -c 3 8.8.8.8
sudo ip netns exec blue ping -c 3 8.8.8.8
Both clients can now reach the internet through the host, which acts as both a switch (via the bridge) and a router (via IP forwarding and NAT).
Understanding the Data Flow
When Red (10.0.0.2) pings Blue (10.0.0.3):
- Red's kernel sees 10.0.0.3 is in the local subnet, sends ARP request
- ARP request goes out v-red-ns, through the veth pair to v-red-br
- v-red-br is connected to br-lab (the switch)
- br-lab floods the ARP request out every other port (v-blue-br and the bridge's own interface); the ingress port v-red-br is excluded
- Blue receives ARP request via v-blue-ns, replies with its MAC
- br-lab learns Blue's MAC address is on port v-blue-br
- Subsequent packets from Red to Blue are forwarded directly to v-blue-br (no broadcast)
- The bridge maintains a MAC address table, just like a physical switch
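The learn-then-forward behavior described above can be sketched as a toy model (this is an illustration in bash, not the kernel's implementation; it needs bash 4+ for associative arrays):

```shell
#!/bin/bash
# Toy model of a learning switch: a forwarding database maps MAC -> port.
declare -A fdb

handle_frame() {
    local in_port="$1" src_mac="$2" dst_mac="$3"
    fdb["$src_mac"]="$in_port"                    # learn: src MAC is on in_port
    if [[ -n "${fdb[$dst_mac]:-}" ]]; then
        echo "forward to ${fdb[$dst_mac]} only"   # destination already learned
    else
        echo "flood to all ports except $in_port" # unknown/broadcast: flood
    fi
}

handle_frame v-red-br  aa:aa:aa:aa:aa:01 ff:ff:ff:ff:ff:ff  # ARP request: flooded
handle_frame v-blue-br aa:aa:aa:aa:aa:02 aa:aa:aa:aa:aa:01  # reply: Red is known
handle_frame v-red-br  aa:aa:aa:aa:aa:01 aa:aa:aa:aa:aa:02  # now Blue is known too
```

The real bridge's learned table can be inspected with `bridge fdb show br br-lab`.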
When Red (10.0.0.2) pings Google (8.8.8.8):
- Red's kernel sees 8.8.8.8 is not in the local subnet, consults routing table
- Default route says to send to gateway 10.0.0.1 (the host)
- Red uses ARP to find MAC of 10.0.0.1
- Packet goes through veth pair to br-lab
- The frame's destination MAC is br-lab's own address (10.0.0.1 is assigned to br-lab), so it is delivered up to the host's network stack rather than forwarded out another port
- Host kernel receives packet, sees it's destined for 8.8.8.8
- IP forwarding is enabled, so host checks routing table
- Host finds default route via internet gateway
- NAT rule rewrites source IP from 10.0.0.2 to host's public IP
- Packet forwarded to internet
Scripting Challenges
These challenges test your ability to automate network configuration and extract useful diagnostic information from the kernel's network subsystem.
Challenge 1: Automated Network Builder
Write a bash script lab7_builder.sh that automatically constructs the Red/Blue/Bridge topology from Exercise 5, assigns IP addresses, enables routing and NAT, and provides cleanup functionality.
Requirements:
- Script Name: lab7_builder.sh
- Shebang and Best Practices:
  - Start with #!/bin/bash
  - Use set -euo pipefail for error handling
- Arguments:
  - start: Create and configure the network
  - stop: Clean up all created resources
- Start Operation Must:
  - Create bridge br-lab
  - Create namespaces red and blue
  - Create veth pairs and connect them to the bridge and namespaces
  - Assign IP addresses:
    - Host (br-lab): 10.0.0.1/24
    - Red: 10.0.0.2/24
    - Blue: 10.0.0.3/24
  - Bring all interfaces UP
  - Add default routes for both clients pointing to 10.0.0.1
  - Enable IP forwarding
  - Configure NAT/masquerade for 10.0.0.0/24 traffic going out the default internet interface
  - Print a success message: "Network topology created successfully"
- Stop Operation Must:
  - Delete namespaces (this automatically removes interfaces inside them)
  - Delete bridge
  - Flush nftables nat table
  - Disable IP forwarding (set back to 0)
  - Print a success message: "Network topology cleaned up successfully"
Testing:
chmod +x ./lab7_builder.sh
sudo ./lab7_builder.sh start
sudo ip netns exec red ping -c 2 10.0.0.3
sudo ip netns exec red ping -c 2 8.8.8.8
sudo ./lab7_builder.sh stop
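A possible starting point for the script's structure is sketched below. This is only the argument-dispatch skeleton, not a solution: the bodies of `start` and `stop` are left for you to fill in with the commands from Exercise 5.

```shell
#!/bin/bash
# lab7_builder.sh skeleton: argument dispatch only.
set -euo pipefail

start() {
    # TODO: create br-lab, namespaces, veth pairs, addresses,
    #       routes, IP forwarding, and NAT (see Exercise 5).
    echo "TODO: build topology"
}

stop() {
    # TODO: delete namespaces and bridge, flush NAT, disable forwarding.
    echo "TODO: tear down topology"
}

main() {
    case "${1:-}" in
        start) start ;;
        stop)  stop ;;
        *)     echo "Usage: lab7_builder.sh {start|stop}" >&2; return 1 ;;
    esac
}

# In the finished script, end with:  main "$@"
```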
Deliverable Challenge 1:
- Complete script source code with comments
- Output of running the script with
startargument - Output of connectivity tests (red to blue, red to internet)
- Output of running the script with
stopargument
Challenge 2: Namespace Inspector
Write a bash script lab7_inspect.sh that takes a namespace name as an argument and displays diagnostic information about that namespace's network configuration.
Requirements:
- Script Name: lab7_inspect.sh
- Shebang and Best Practices:
  - Start with #!/bin/bash
  - Use set -euo pipefail
  - Use functions
  - Comment your code
- Arguments:
  - Takes exactly one argument: the namespace name
  - If no argument or more than one argument, print usage and exit with an error
- Output Format (exactly as shown):

=== Network Inspection for Namespace: <ns_name> ===
[Interfaces]
<output of ip link show>
[IP Addresses]
<output of ip addr show>
[Routes]
<output of ip route show>
[Default Gateway]
<extract and show only the default gateway IP, or "None" if not configured>
[Neighbors (ARP Cache)]
<output of ip neigh show>
Functionality:
- Check if namespace exists before attempting to inspect
- All commands should run inside the specified namespace
- Extract the default gateway IP from the routing table (hint: grep for "default" and extract the IP after "via")
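The gateway-extraction step can be prototyped without root by running it against a captured routing table. The sample text below is illustrative output matching the Exercise 5 configuration:

```shell
#!/bin/bash
# Extract the default gateway IP from "ip route show" output.
# Sample routing table (illustrative) from the red namespace:
routes='default via 10.0.0.1 dev v-red-ns
10.0.0.0/24 dev v-red-ns proto kernel scope link src 10.0.0.2'

# The gateway is the third field of the "default" line.
gw=$(awk '/^default/ {print $3}' <<< "$routes")
echo "${gw:-None}"    # -> 10.0.0.1

# With no default route, the expansion falls back to "None":
gw2=$(awk '/^default/ {print $3}' <<< "10.0.0.0/24 dev v-red-ns scope link")
echo "${gw2:-None}"   # -> None
```

In the finished script, replace the sample string with the live output of `ip netns exec <ns> ip route show`.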
Testing (after running Challenge 1's start command):
chmod +x ./lab7_inspect.sh
sudo ./lab7_inspect.sh red
sudo ./lab7_inspect.sh blue
Deliverable Challenge 2:
- Complete script source code with comments
- Output of running the script for the
rednamespace (after pinging a few addresses to populate ARP cache)
Reference: Network Command Quick Guide
This section provides a quick reference for the commands introduced in this lab.
Interface Management (ip link)
# List all interfaces
ip link show
# Create veth pair
sudo ip link add dev <name1> type veth peer name <name2>
# Move interface to namespace
sudo ip link set <interface> netns <namespace>
# Bring interface up/down
sudo ip link set <interface> up
sudo ip link set <interface> down
# Delete interface
sudo ip link delete <interface>
# Create bridge
sudo ip link add name <bridge> type bridge
# Connect interface to bridge
sudo ip link set <interface> master <bridge>
Address Management (ip addr)
# Show addresses
ip addr show
ip addr show <interface>
# Add address
sudo ip addr add <ip>/<prefix> dev <interface>
# Example: sudo ip addr add 10.0.0.1/24 dev eth0
# Remove address
sudo ip addr del <ip>/<prefix> dev <interface>
# Flush all addresses from interface
sudo ip addr flush dev <interface>
Routing Management (ip route)
# Show routing table
ip route show
# Add route
sudo ip route add <network>/<prefix> via <gateway>
# Example: sudo ip route add 192.168.2.0/24 via 192.168.1.1
# Add default route
sudo ip route add default via <gateway>
# Example: sudo ip route add default via 10.0.0.1
# Delete route
sudo ip route del <network>/<prefix>
sudo ip route del default
# Flush routing table
sudo ip route flush table main
Neighbor Management (ip neigh)
# Show ARP cache
ip neigh show
# Flush ARP cache
sudo ip neigh flush all
# Add static ARP entry
sudo ip neigh add <ip> lladdr <mac> dev <interface>
# Delete ARP entry
sudo ip neigh del <ip> dev <interface>
Namespace Management (ip netns)
# List namespaces
ip netns list
# Create namespace
sudo ip netns add <name>
# Delete namespace
sudo ip netns delete <name>
# Execute command in namespace
sudo ip netns exec <name> <command>
# Example: sudo ip netns exec red ping 10.0.0.1
# Get shell in namespace
sudo ip netns exec <name> bash
Bridge Management
# Show bridge details
bridge link show
bridge fdb show # Show MAC address table
# Show which interfaces are part of a bridge
ip link show master <bridge>
IP Forwarding
# Check status
sysctl net.ipv4.ip_forward
# Enable (temporary)
sudo sysctl -w net.ipv4.ip_forward=1
# Disable
sudo sysctl -w net.ipv4.ip_forward=0
# Enable permanently (edit /etc/sysctl.conf)
echo "net.ipv4.ip_forward=1" | sudo tee -a /etc/sysctl.conf
NAT with nftables
# List current ruleset
sudo nft list ruleset
# Create NAT table and rules
sudo nft add table ip nat
sudo nft add chain ip nat postrouting { type nat hook postrouting priority srcnat \; }
sudo nft add rule ip nat postrouting ip saddr 10.0.0.0/24 oifname "eth0" masquerade
# Delete NAT table
sudo nft delete table ip nat
# Flush ruleset
sudo nft flush ruleset
Diagnostic Tools
# Test connectivity
ping -c 3 <ip>
# Trace route
traceroute <ip>
# Show listening sockets
ss -tuln
# Monitor traffic
sudo tcpdump -i <interface>
sudo tcpdump -i <interface> -n arp # Show only ARP traffic
sudo tcpdump -i <interface> -n icmp # Show only ping traffic
# Show interface statistics
ip -s link show <interface>
Common Network Topology Patterns
Point-to-Point Connection
[Host] <---veth pair---> [Namespace]
Use case: Simple container isolation, testing
Switched Network
[Bridge]
|
+-------+-------+
| | |
[Host] [NS1] [NS2]
Use case: Multiple containers/VMs on same network
Routed Network
[Internet] <---> [Host/Router] <---> [Bridge] <---> [Containers]
(NAT)
Use case: Containers with internet access
Summary Table: Network Components
| Component | Purpose | Layer | Key Commands |
|---|---|---|---|
| veth pair | Virtual cable connecting two interfaces | L2 | ip link add type veth |
| bridge | Virtual switch | L2 | ip link add type bridge |
| IP address | Device identity and subnet membership | L3 | ip addr add |
| Route | Path determination | L3 | ip route add |
| ARP/Neighbor | IP-to-MAC mapping | L2/L3 | ip neigh show |
| NAT | Private-to-public IP translation | L3 | nft add rule masquerade |
| Namespace | Isolated network stack | All | ip netns add |
Deliverables and Assessment
Submit a single PDF document containing:
The deliverables A-D and the solutions to Challenges 1 and 2.
For further study:
Advanced Topics:
- VLANs and 802.1Q tagging
- Open vSwitch (OVS) for advanced switching
- Network policies and firewalling with nftables
- Quality of Service (QoS) and traffic shaping
- IPv6 configuration and dual-stack networking
- VPN setup with WireGuard or OpenVPN
Container Networking:
- Docker networking modes (bridge, host, overlay)
- Kubernetes networking model and CNI plugins
- Service meshes (Istio, Linkerd)
Performance and Monitoring:
- Network performance testing with iperf
- Packet capture and analysis with Wireshark
- Continuous monitoring with Prometheus and Grafana
Security:
- Network segmentation and isolation
- Firewall rule design
- Intrusion detection systems
- Zero-trust networking
Relevant Manual Pages:
man 8 ip # iproute2 command reference
man 8 ip-link # Interface management
man 8 ip-address # Address management
man 8 ip-route # Routing management
man 8 ip-netns # Namespace management
man 8 bridge # Bridge management
man 7 netdevice # Network device overview
man 7 arp # ARP protocol
man 8 nft # nftables syntax
man 8 tcpdump # Packet capture