OS Lab 6 - Inter-Process Communication
Objectives
Upon completion of this lab, you will be able to:
- Explain how the kernel provides IPC mechanisms and how bash exposes them to scripts.
- Use environment variables and `export` to pass configuration from parent processes to children.
- Construct pipelines with anonymous pipes to connect process standard streams.
- Create and use named pipes (FIFOs) for communication between unrelated processes.
- Send signals to processes and handle them with `trap` for asynchronous event-driven scripting.
- Demonstrate basic socket communication using existing utilities for network IPC.
Introduction
In Lab 5, we explored bash scripting as a way to automate tasks by writing programs that the shell interprets. We learned how processes execute scripts, how to control flow with conditionals and loops, and how to structure code with functions. However, all of our scripts operated in isolation. Each script was a single process (or spawned child processes) that performed its task independently.
Real-world systems require processes to cooperate. A web server must communicate with a database. A shell pipeline connects the output of one program to the input of another. A service must respond to signals sent by the init system. This lab explores the kernel-provided mechanisms that enable processes to exchange data and coordinate their actions. We focus on five fundamental IPC mechanisms, all accessible from bash: environment variables, pipes, named pipes, signals, and sockets.
Understanding IPC is essential for system administration and software development. These mechanisms form the foundation of everything from command pipelines to distributed systems.
Prerequisites
System Requirements
A running instance of the course-provided Linux virtual machine with SSH or direct terminal access.
Required Packages
The following packages must be installed:
sudo apt update
sudo apt install -y caddy socat
- `caddy`: A modern HTTP server with automatic HTTPS. We use it to demonstrate socket communication.
- `socat`: A versatile networking tool that can work with Unix domain sockets.
Knowledge Prerequisites
You should be familiar with:
- Process concepts from Lab 3 (PIDs, process hierarchy, file descriptors)
- File permissions from Lab 4 (execute bit, ownership)
- Bash scripting fundamentals from Lab 5 (shebangs, builtins, variables, quoting, exit status, redirection, loops, functions)
Inter-Process Communication
What is IPC?
By default, processes are isolated from one another. Each process has its own memory space, file descriptor table, and execution context. This isolation provides security and stability, but it creates a problem: how can processes cooperate to accomplish complex tasks?
Inter-Process Communication (IPC) refers to the kernel-provided mechanisms that allow processes to exchange data and synchronize their actions. The kernel acts as an intermediary, providing channels, buffers, and signaling primitives that processes can use to communicate safely without violating isolation boundaries.
In this lab, we examine five IPC mechanisms arranged roughly by complexity:
- Environment Variables: The simplest form of IPC. A parent process passes key-value configuration to its children via inherited environment variables. Communication is unidirectional (parent → child) and occurs only at process creation.
- Pipes: The kernel creates a buffer connecting one process’s standard output to another’s standard input. Data flows in one direction through the pipe. Anonymous pipes exist only while the processes using them are running.
- Named Pipes (FIFOs): Like anonymous pipes, but visible in the filesystem. This persistence allows unrelated processes to connect to the same pipe by opening a file path.
- Signals: Asynchronous notifications sent from one process to another (or from the kernel to a process). Signals interrupt the receiving process, which can catch and handle them or allow default behavior (often termination).
- Sockets: Bidirectional communication channels that work across network boundaries or locally via Unix domain sockets. Multiple processes can connect to the same socket, enabling one-to-many communication patterns.
Each mechanism solves different problems and has different trade-offs in terms of complexity, performance, and flexibility.
Environment Variables and export
The Kernel’s Role
When a process forks, the child inherits a copy of the parent’s environment: a set of key-value string pairs that the kernel copies into the new process at creation time. The child can read these values and modify its own copy, but changes do not propagate back to the parent or to other processes. This inheritance mechanism provides a simple, unidirectional channel for passing configuration from parent to child.
Environment variables are not limited to shell scripts. Every process has an environment. When you run any program, it receives the environment from the shell that launched it. Programs written in C access this via the environ global variable or the third parameter to main(). Python uses os.environ. The environment is a universal convention for passing configuration.
Bash’s Role: export and env
In bash, variables are local to the shell process by default. When you set name=value, that variable exists in the shell’s memory but is not passed to child processes. The export builtin marks a variable for inclusion in the environment of future child processes:
myvar="hello"
export myvar
Or, more concisely:
export myvar="hello"
Once exported, all child processes started by this shell (scripts, programs, or subshells) will inherit myvar in their environment. To see the current environment, use the env command (or printenv), which prints all exported variables.
You can also set environment variables for a single command without affecting the shell:
DEBUG=1 ./myscript.sh
This syntax sets DEBUG=1 in the environment of myscript.sh only, without exporting it in the parent shell.
Common environment variables you’ve already been using include PATH (where bash searches for commands), HOME (your home directory), USER (your username), and SHELL (your login shell). These are all set by the login process and inherited by every subsequent process in your session.
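The inheritance rules above can be observed directly from the shell. A minimal sketch (the variable names `local_only` and `shared` are illustrative, not part of the lab):

```shell
#!/bin/bash
# Sketch: exported vs. unexported variables as seen by a child process.
local_only="parent only"                 # shell variable, not exported
export shared="visible to children"      # marked for the child's environment
# Single quotes: the expansion happens in the child shell, not here.
bash -c 'echo "child sees: [$local_only] [$shared]"'
# → child sees: [] [visible to children]
```

The child prints an empty value for `local_only` because only exported variables cross the fork/exec boundary.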
When to Use Environment Variables
Environment variables are appropriate for:
- Configuration that should be inherited by all child processes (e.g., locale settings, proxy configuration)
- Passing secrets or credentials to programs without embedding them in command-line arguments (which are visible via ps)
- Controlling program behavior via well-known variables like PATH, LD_LIBRARY_PATH, or TZ

They are not suitable for:
- Runtime communication between already-running processes (use pipes, sockets, or signals)
- Large amounts of data (the environment is limited in size, typically to a few megabytes)
- Bidirectional communication (child changes don’t affect the parent)
Pipes
The Kernel’s Pipe Mechanism
A pipe is a one-way data channel maintained in kernel memory. When you create a pipe, the kernel allocates a buffer (typically 64 KB on Linux) and returns two file descriptors: one for writing and one for reading. Data written to the write end is buffered by the kernel and can be read from the read end in FIFO (first-in, first-out) order.
Pipes are anonymous: they have no name in the filesystem. They exist only as long as at least one process holds a file descriptor to them. When all processes close their references to a pipe, the kernel deallocates it.
The key insight from Lab 3: a pipeline like cat file.txt | grep "error" | wc -l creates multiple processes (all in the same process group) connected by pipes. The kernel sets up the file descriptors so that cat’s stdout is connected to grep’s stdin, and grep’s stdout is connected to wc’s stdin. The processes run concurrently, with data flowing through kernel buffers as it’s produced and consumed.
Bash’s Pipe Operator
Bash creates pipes using the | operator. The syntax is simple:
command1 | command2
This creates a pipe and starts two processes. Bash configures command1’s stdout (FD 1) to point to the pipe’s write end and command2’s stdin (FD 0) to point to the pipe’s read end. The commands execute concurrently.
Longer pipelines work the same way:
cat data.txt | sort | uniq | wc -l
This creates three pipes connecting four processes. Data flows left-to-right through kernel buffers.
Pipes in Scripts
You can use pipes inside scripts just as you would interactively:
#!/bin/bash
set -euo pipefail
# Count unique IP addresses in an access log
cat /var/log/access.log | cut -d' ' -f1 | sort | uniq | wc -l
Pipes are particularly powerful when combined with bash’s process substitution feature <(command), but that’s beyond the scope of this introductory lab.
When to Use Pipes
Pipes are ideal for:
- Connecting the output of one program to the input of another in a linear processing chain
- Streaming data processing where data is generated and consumed incrementally
- Quick, one-off data transformations at the command line

They are not suitable for:
- Bidirectional communication (data flows one way only)
- Communication between unrelated processes that weren’t started in a pipeline
- Persistent communication (the pipe disappears when the processes exit)
Named Pipes (FIFOs)
The Kernel’s FIFO Mechanism
A named pipe, or FIFO (First-In, First-Out), is a pipe with a name in the filesystem. Unlike anonymous pipes, FIFOs persist as filesystem entries (though the data buffer is still in kernel memory). Any process with appropriate permissions can open the FIFO by its path, allowing unrelated processes to communicate.
When a process opens a FIFO for reading, it blocks until another process opens the same FIFO for writing (and vice versa). Once both ends are open, data flows through the kernel buffer just like an anonymous pipe. When all processes close their connections, the FIFO remains as a filesystem entry but the kernel buffer is deallocated.
You can identify a FIFO in ls -l output by the leading p in the permission string:
prw-r--r-- 1 user user 0 Nov 6 10:00 myfifo
Creating FIFOs with mkfifo
The mkfifo command creates a named pipe:
mkfifo /tmp/myfifo
Now two unrelated processes can communicate by opening this file. One writes to it:
echo "Hello from writer" > /tmp/myfifo
This command will block until a reader appears. In another terminal (or in the background), a reader can consume the data:
cat < /tmp/myfifo
When the reader connects, the writer unblocks, the message flows through the kernel buffer, and both commands complete.
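You can reproduce this rendezvous in a single terminal by putting the reader in the background. A sketch (the path `/tmp/demo_fifo` is illustrative):

```shell
#!/bin/bash
# Sketch: FIFO handshake in one terminal instead of two.
mkfifo /tmp/demo_fifo
cat < /tmp/demo_fifo &             # reader blocks while opening the FIFO
echo "hello" > /tmp/demo_fifo      # writer opens; both unblock, data flows
wait                               # reader prints "hello" and exits
rm /tmp/demo_fifo                  # the filesystem entry must be removed explicitly
```

Note that, unlike an anonymous pipe, the FIFO entry outlives both processes until you `rm` it.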
When to Use Named Pipes
Named pipes are useful for:
- Communication between unrelated processes that start at different times
- Producer-consumer patterns where one process generates data and another processes it
- Simple IPC without needing network sockets or shared files

They are not suitable for:
- Multiple simultaneous readers or writers (FIFO semantics become unpredictable)
- Persistent data storage (data is lost when all processes disconnect)
- Bidirectional communication (like anonymous pipes, FIFOs are one-way)
Signals and trap
The Kernel’s Signal Mechanism
A signal is an asynchronous notification sent to a process. Signals can be sent by other processes (via the kill system call) or by the kernel itself in response to events like segmentation faults, keyboard interrupts (Ctrl+C), or child process termination.
When the kernel delivers a signal to a process, it interrupts the process’s normal execution. The process can respond in one of three ways:
- Default Action: Each signal has a default behavior, often terminating the process. For example, SIGTERM (signal 15) requests graceful termination, while SIGKILL (signal 9) forces immediate termination.
- Ignore: The process can choose to ignore certain signals (except SIGKILL and SIGSTOP, which cannot be caught or ignored).
- Custom Handler: The process can register a function (a signal handler) to execute when the signal arrives. This allows the process to perform cleanup or other actions before deciding whether to continue or terminate.
Common signals:
- SIGINT (2): Sent by Ctrl+C. Default: terminate.
- SIGTERM (15): Polite request to terminate. Default: terminate. Most services handle this to perform a clean shutdown.
- SIGKILL (9): Immediate termination. Cannot be caught or ignored. Used as a last resort.
- SIGHUP (1): Historically “hang up” (modem disconnected). Often used to tell daemons to reload configuration.
- SIGCHLD (17): Sent to a parent when a child process terminates.
- SIGUSR1 (10) and SIGUSR2 (12): User-defined signals for custom purposes.
Signals are sent by PID:
kill -TERM 1234 # Send SIGTERM to process 1234
kill -9 1234 # Send SIGKILL to process 1234
kill -HUP 1234 # Send SIGHUP to process 1234
You can also use the %n job notation from Lab 3 to send signals to background jobs.
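Putting `kill` and exit statuses together, a quick sketch (the 128 + signal-number convention is standard shell behavior):

```shell
#!/bin/bash
# Sketch: terminate a background process by PID and inspect its exit status.
sleep 100 &                # start a disposable background process
pid=$!                     # PID of the most recent background job
kill -TERM "$pid"          # send SIGTERM by PID (kill -TERM %1 also works)
wait "$pid"                # collect the child's exit status
echo "exit status: $?"     # 128 + 15 (SIGTERM) = 143
```

A process killed by a signal reports exit status 128 plus the signal number, which is how scripts distinguish signal deaths from ordinary failures.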
Bash’s trap Builtin
The trap builtin allows a bash script to register a handler for incoming signals:
trap 'echo "Caught SIGINT"; exit' INT
This tells bash: “When SIGINT arrives, execute the command echo "Caught SIGINT"; exit.” The handler can be any bash command or function.
Common pattern for cleanup on exit:
#!/bin/bash
set -euo pipefail
cleanup() {
echo "Cleaning up temporary files..."
rm -f /tmp/myscript.*
}
trap cleanup EXIT
# Script body
echo "Running..."
sleep 10
echo "Done"
# cleanup() is automatically called on normal exit, Ctrl+C, or errors (with set -e)
The special signal EXIT isn’t a real signal; it’s a bash pseudo-signal that fires whenever the script exits for any reason.
When to Use Signals
Signals are appropriate for:
- Event-driven scripts that respond to external events
- Graceful shutdown and cleanup on termination
- Daemon control (reload config with SIGHUP, graceful shutdown with SIGTERM)
- Inter-process coordination where one process needs to notify another of state changes

They are not suitable for:
- Transferring data (a signal carries almost no information, just the signal number)
- Reliable communication (pending signals of the same type coalesce, so notifications can be missed)
- Complex coordination (race conditions are common)
Sockets
The Kernel’s Socket Mechanism
A socket is a bidirectional communication endpoint. Unlike pipes, sockets support two-way data flow. Unlike named pipes, sockets support multiple concurrent connections, making them suitable for client-server architectures.
There are two main types:
- Network Sockets: Use TCP or UDP to communicate over IP networks. These work across machines and are the foundation of the internet.
- Unix Domain Sockets: Use file paths (like named pipes) but support bidirectional communication and connection multiplexing. These work only on the same machine but are faster than network sockets.
When a server process binds to a socket, it listens for incoming connections. Multiple client processes can connect to the same server socket. Each connection is independent, with its own bidirectional channel. This one-to-many pattern distinguishes sockets from pipes and FIFOs.
For network sockets, the server binds to a port number (e.g., port 80 for HTTP). For Unix domain sockets, it binds to a filesystem path (e.g., /tmp/myservice.sock).
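To make the one-to-many model concrete before the exercises, here is a sketch of a Unix domain socket echo server using socat (socat is installed in the Prerequisites; the path `/tmp/echo.sock` is illustrative):

```shell
#!/bin/bash
# Sketch: a Unix-domain echo server; each client gets its own connection.
socat UNIX-LISTEN:/tmp/echo.sock,fork EXEC:/bin/cat &   # fork = accept many clients
server=$!
sleep 0.5                                               # give the server time to bind
echo "ping" | socat - UNIX-CONNECT:/tmp/echo.sock       # bidirectional: send, read reply
echo "pong" | socat - UNIX-CONNECT:/tmp/echo.sock       # a second, independent client
kill "$server"
rm -f /tmp/echo.sock
```

Each client connection is an independent bidirectional channel to the same listening socket, which is exactly what a FIFO cannot provide.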
Socket Communication from Bash
While bash doesn’t have native socket support, we can use existing tools to demonstrate socket communication without writing low-level code:
- caddy: A modern web server that can listen on both network and Unix domain sockets
- curl: A command-line HTTP client that supports Unix domain sockets
- socat: A general-purpose networking tool for creating and connecting to various socket types
These tools abstract the complexity of socket programming, allowing us to focus on the conceptual model.
When to Use Sockets
Sockets are ideal for:
- Client-server applications with multiple concurrent clients
- Networked communication across machines
- Bidirectional data exchange
- Services that need to accept connections from many processes

They are not necessary for:
- Simple one-to-one, one-way communication (use pipes instead)
- Configuration passing (use environment variables)
- Asynchronous notifications (use signals)
Hands-on Exercises
Exercise A: Environment Variables and export
This exercise demonstrates how environment variables are inherited by child processes but not propagated back to parents.
Steps:
- Check your current environment. Run `env | head -n 10` and observe some of the variables already set.
- Create a shell variable without exporting it: `myvar="not exported"`.
- Start a subshell with `bash` and try to read `myvar` with `echo "$myvar"`. It should be empty. Exit the subshell.
- Back in your original shell, export the variable: `export myvar="exported now"`.
- Start a subshell again with `bash -c 'echo "In child: $myvar"'` and verify that `myvar` is now accessible.
- In a subshell, modify the variable: `bash -c 'myvar="changed in child"; echo "Child modified: $myvar"'`.
- Print `myvar` in the parent shell: `echo "Parent still has: $myvar"`. Observe that the parent’s value is unchanged.
- Demonstrate setting an environment variable for a single command: `GREETING="Hello" bash -c 'echo $GREETING'`.
- Verify that `GREETING` is not set in your current shell: `echo "GREETING in parent: $GREETING"` (should be empty).
- Use `env` to run a command with a specific environment: `env DEBUG=1 bash -c 'echo "DEBUG=$DEBUG"'`.
Deliverable A: Provide the output showing: the subshell cannot see the unexported variable, the subshell can see the exported variable, the parent is unaffected by child changes, and single-command environment variable setting.
Exercise B: Pipes in Practice
This exercise explores anonymous pipes and how they connect processes.
Steps:
- Use a simple pipe to count lines: `cat /etc/passwd | wc -l`. Observe the result.
- Build a longer pipeline to find how many unique shells are in use: `cat /etc/passwd | cut -d: -f7 | sort | uniq`. Count them manually or pipe to `wc -l`.
- Demonstrate that processes in a pipeline run concurrently. Run `yes | head -n 5`. The `yes` command produces infinite output, but `head` reads only 5 lines and then exits, causing `yes` to receive a SIGPIPE and terminate.
- Create a sample log file for analysis:
cat > /tmp/lab6_sample.log <<'EOF'
2024-01-15 10:00:00 INFO Application started
2024-01-15 10:05:23 ERROR Database connection failed
2024-01-15 10:12:45 WARN Connection timeout
2024-01-15 10:15:00 INFO Retry successful
2024-01-15 10:20:00 ERROR Invalid configuration
2024-01-15 10:25:00 WARN Low memory
EOF
- Use a pipeline to count ERROR entries: `grep "ERROR" /tmp/lab6_sample.log | wc -l`.
- Use a pipeline to extract and count unique log levels: `cut -d' ' -f3 /tmp/lab6_sample.log | sort | uniq -c`.
- Combine multiple operations: find the timestamps of all ERROR entries: `grep "ERROR" /tmp/lab6_sample.log | cut -d' ' -f1,2`.
Deliverable B: Provide the output from the /etc/passwd shell analysis, the yes | head -n 5 demonstration, and the log file analysis commands showing ERROR count and unique log levels with counts.
Exercise C: Named Pipes (FIFOs)
This exercise demonstrates persistent named pipes and communication between unrelated processes.
Steps:
- Create a named pipe: `mkfifo /tmp/lab6_fifo`.
- Verify it exists and note its type: `ls -l /tmp/lab6_fifo`. The leading `p` indicates a FIFO.
- In one terminal, start a reader that will block: `cat < /tmp/lab6_fifo`. Leave this running.
- In a second terminal, write to the FIFO: `echo "Hello via FIFO" > /tmp/lab6_fifo`.
- Observe that the reader in the first terminal unblocks, displays the message, and exits.
- Demonstrate that the FIFO persists. List it again: `ls -l /tmp/lab6_fifo`.
- Test a more complex scenario. In terminal 1, start a reader that reopens the FIFO for each message (a single `while read ...; done < fifo` loop would see end-of-file as soon as the first writer closed): `while true; do read line < /tmp/lab6_fifo && echo "Received: $line"; done`.
- In terminal 2, send multiple messages:
echo "Message 1" > /tmp/lab6_fifo
echo "Message 2" > /tmp/lab6_fifo
echo "Message 3" > /tmp/lab6_fifo
- Observe the messages being received. Press Ctrl+C in terminal 1 to stop the reader.
- Remove the FIFO: `rm /tmp/lab6_fifo`.
Deliverable C: Provide the ls -l output showing the FIFO type, screenshots or output from both terminals showing the message exchange, and a brief explanation of how the FIFO persists between writes.
Exercise D: Signals and trap
This exercise demonstrates signal handling in bash using interactive commands.
Steps:
- Start a simple sleep command: `sleep 30`. Press Ctrl+C to interrupt it. Observe that it terminates immediately.
- Now set a trap for SIGINT before sleeping: `trap 'echo "Caught SIGINT, ignoring..."' INT; sleep 30`. Press Ctrl+C. The shell’s trap catches the signal and prints the message; note that `sleep` itself, which set no handler, still terminates.
- Demonstrate cleanup on exit. Save the following compound command as a small script (or paste it into a fresh `bash`), so the EXIT trap belongs to that shell rather than your login shell, then run it:
trap 'echo "Cleanup: removing temp file"; rm -f /tmp/lab6_test.txt' EXIT; \
touch /tmp/lab6_test.txt; \
echo "File created. Press Ctrl+C or wait..."; \
sleep 10; \
echo "Normal exit"
- Try both: let it complete normally, then run it again and interrupt with Ctrl+C. Observe cleanup happens both times.
- Start a background sleep: `sleep 100 &`. Note the PID displayed.
- Send SIGTERM to it: `kill -TERM <pid>` (replace `<pid>` with the actual PID). Verify it terminated: `jobs`.
- Start another background process that traps SIGTERM:
(trap 'echo "Caught SIGTERM, staying alive"' TERM; \
while true; do echo "Running..."; sleep 5; done) &
- Note the PID, then send SIGTERM: `kill -TERM <pid>`. Observe the trap message.
- Send SIGKILL to force termination: `kill -9 <pid>`. This cannot be caught.
- Verify all background jobs are gone: `jobs`.
Deliverable D: Provide output showing: the trapped SIGINT message, cleanup on both normal exit and Ctrl+C, the SIGTERM being caught with the trap message, and SIGKILL forcing termination.
Exercise E: Socket Communication Basics
This exercise provides a brief introduction to socket communication using existing tools.
Steps:
- Create a simple static site directory: `mkdir -p /tmp/lab6_site && echo "<h1>Hello from Caddy</h1>" > /tmp/lab6_site/index.html`.
- Start Caddy as a file server listening on a Unix domain socket: `caddy file-server --root /tmp/lab6_site --listen unix//tmp/caddy.sock &`. Note the PID.
- Wait a moment for Caddy to start. Verify the socket exists: `ls -l /tmp/caddy.sock`. Note the `s` type indicating a socket.
- Use `curl` to make an HTTP request via the Unix domain socket: `curl -v --unix-socket /tmp/caddy.sock http://localhost/`.
- Observe the response. Note that the communication is bidirectional: you sent an HTTP request, and the server responded with the HTML content.
- (Optional) Open multiple terminals and run `curl` simultaneously several times. Each request is handled independently, demonstrating the one-to-many capability of sockets.
- Stop Caddy by sending SIGTERM: `kill -TERM <caddy_pid>`, or use `pkill caddy`.
- Clean up: `rm -rf /tmp/lab6_site /tmp/caddy.sock`.
Deliverable E: Provide the ls -l output showing the socket file, the curl output demonstrating the request and response (especially the HTTP headers and HTML body), and a brief explanation (2-3 sentences) of how this differs from a named pipe in terms of directionality and connection multiplexing.
Scripting Challenges
Challenge 1: Log Aggregator with Named Pipes
Write a script /tmp/lab6_log_aggregator.sh that aggregates log messages from multiple sources using named pipes.
Requirements:
- Create three named pipes: `/tmp/log_fifo1`, `/tmp/log_fifo2`, `/tmp/log_fifo3`.
- Start three background processes that each write timestamped messages to one of the FIFOs (simulating different log sources). Each should write 5 messages at 1-second intervals.
- Implement a main loop that reads from all three FIFOs (hint: use `read` with a timeout via its `-t` option, or use multiple background readers).
- Write all received messages to a combined log file `/tmp/aggregated.log` with timestamps.
- Trap SIGTERM to clean up: kill background processes, remove FIFOs, and exit gracefully.
- Use: `set -euo pipefail`, functions, `trap`, `mkfifo`, background processes (`&`), proper cleanup.
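The read-with-timeout hint can be sketched in isolation. This is a starting point, not a solution; `/tmp/hint_fifo` and the messages are illustrative, and a real aggregator would service several FIFOs and append to a log file:

```shell
#!/bin/bash
# Hint sketch for Challenge 1: drain a FIFO with a read timeout so a
# main loop could poll several FIFOs in turn. Not a complete solution.
mkfifo /tmp/hint_fifo
( echo "one"; echo "two" ) > /tmp/hint_fifo &   # stand-in log source
exec 3< /tmp/hint_fifo        # open the read end once, keep it on FD 3
while read -r -t 1 -u 3 line; do
    echo "got: $line"         # a real aggregator would write to the log here
done                          # loop ends on timeout or end-of-file
exec 3<&-                     # close FD 3
rm /tmp/hint_fifo
```

Holding the read end open on a dedicated file descriptor avoids the reopen-per-message race, and `read -t` keeps the loop from blocking forever on an idle source.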
Test:
chmod +x /tmp/lab6_log_aggregator.sh
/tmp/lab6_log_aggregator.sh &
AGG_PID=$!
sleep 10
kill -TERM $AGG_PID
wait
cat /tmp/aggregated.log
Deliverable Challenge 1:
- Complete script with comments
- Contents of /tmp/aggregated.log showing interleaved messages from all three sources
- Brief explanation (2-3 sentences) of why named pipes were necessary here instead of anonymous pipes
Challenge 2: Graceful Service Controller
Write a script /tmp/lab6_service_controller.sh that manages a long-running service and responds to signals.
Requirements:
- The script acts as a daemon that runs indefinitely, printing a heartbeat message every 5 seconds.
- Accept one optional argument: a “config file” path. If provided, read a setting (e.g., `INTERVAL=10`) from the file to control the heartbeat interval.
- Trap SIGHUP: reload the configuration file and adjust the interval dynamically without restarting the script. Print “Configuration reloaded.”
- Trap SIGTERM: perform graceful shutdown. Print “Shutting down gracefully…”, wait 2 seconds, then exit cleanly.
- Trap SIGUSR1: export the current status to `/tmp/service_status.txt` (e.g., uptime, number of heartbeats sent, current interval). Print “Status exported.”
- Trap EXIT: perform cleanup. Print “Service stopped.”
- Maintain internal state: count the number of heartbeats sent, track the start time.
- Use: `set -euo pipefail`, functions, `trap` for multiple signals, variables for state, a loop, a cleanup handler.
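The SIGHUP-reload requirement can be sketched on its own. This is a hint, not a solution; the path `/tmp/hint.conf` and the `reload` function name are illustrative:

```shell
#!/bin/bash
# Hint sketch for Challenge 2: re-read a config file when SIGHUP arrives.
CONF=/tmp/hint.conf
echo "INTERVAL=2" > "$CONF"
INTERVAL=5
reload() { source "$CONF"; echo "Configuration reloaded (INTERVAL=$INTERVAL)"; }
trap reload HUP
kill -HUP $$            # simulate an operator sending SIGHUP to this script
echo "heartbeat every ${INTERVAL}s"     # now 2, not 5
rm -f "$CONF"
```

Because the trap handler runs inside the script’s own process, it can change variables (like the interval) that the main loop reads on its next iteration.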
Test:
echo "INTERVAL=3" > /tmp/service.conf
chmod +x /tmp/lab6_service_controller.sh
/tmp/lab6_service_controller.sh /tmp/service.conf &
SVC_PID=$!
sleep 10
kill -USR1 $SVC_PID # Export status
cat /tmp/service_status.txt
echo "INTERVAL=1" > /tmp/service.conf
kill -HUP $SVC_PID # Reload config
sleep 5
kill -TERM $SVC_PID # Graceful shutdown
wait
Deliverable Challenge 2:
- Complete script with comments
- Output showing heartbeats before and after SIGHUP (with visible interval change)
- Contents of /tmp/service_status.txt after SIGUSR1
- Output showing graceful shutdown message after SIGTERM
Reference: Common IPC Patterns
Quick reference for IPC mechanisms covered in this lab:
Environment Variables:
# Export a variable for child processes
export VAR="value"
# Set for one command only
VAR="value" command
# View all environment variables
env
printenv
Anonymous Pipes:
# Simple pipeline
command1 | command2
# Multi-stage pipeline
cat file.txt | grep "pattern" | sort | uniq
Named Pipes:
# Create a FIFO
mkfifo /path/to/fifo
# Write to FIFO (blocks until reader connects)
echo "data" > /path/to/fifo
# Read from FIFO (blocks until writer connects)
cat < /path/to/fifo
# Remove FIFO
rm /path/to/fifo
Signals:
# Send signal by name
kill -TERM <pid>
kill -HUP <pid>
# Send signal by number
kill -15 <pid>
# Force kill (cannot be caught)
kill -9 <pid>
# Trap signals in a script
trap 'echo "Caught signal"' INT TERM
trap cleanup EXIT
# Define cleanup function
cleanup() {
echo "Cleaning up..."
rm -f /tmp/myfiles.*
}
Sockets (using existing tools):
# Start a server on Unix domain socket (using socat)
socat UNIX-LISTEN:/tmp/service.sock,fork EXEC:'/usr/bin/myhandler'
# Connect as client (using socat)
echo "request" | socat - UNIX-CONNECT:/tmp/service.sock
# HTTP via Unix socket (using curl)
curl --unix-socket /tmp/service.sock http://localhost/path
Common Patterns Table
| Mechanism | Direction | Persistence | Use Case |
|---|---|---|---|
| Environment Variables | Parent → Child (one-way) | Inherited at fork | Configuration, credentials |
| Anonymous Pipes | One-way | Ephemeral (process lifetime) | Command chaining, streaming |
| Named Pipes (FIFO) | One-way | Filesystem entry persists | Unrelated process communication |
| Signals | One-way (notification) | Asynchronous event | Process control, event notification |
| Sockets | Bidirectional | Filesystem entry persists (Unix domain) | Client-server, network services |
Deliverables and Assessment
Submit a single document (PDF or similar) containing:
Exercise Deliverables:
- Exercise A: Outputs demonstrating export behavior, parent-child isolation, and single-command environment variables
- Exercise B: Pipeline outputs and complete analysis script
- Exercise C: FIFO creation output, producer/consumer scripts with sample output
- Exercise D: Complete daemon simulation script with signal handling demonstrations
- Exercise E: Socket file listing, curl output with HTTP headers, explanation of socket vs FIFO differences
Challenge Deliverables:
- Challenge 1: Complete log aggregator script, aggregated log contents, explanation
- Challenge 2: Complete service controller script, outputs demonstrating all signal handlers (SIGHUP reload, SIGUSR1 status export, SIGTERM shutdown)
Additional:
- Each deliverable should include command outputs (screenshots or text) and brief explanations where requested.
- For scripts, include the complete, commented source code and example execution output.
Additional Resources
This lab covers the fundamental IPC mechanisms accessible from bash. You’ve learned how processes inherit environment variables, communicate via pipes, respond to signals, and use sockets for network communication. These concepts form the foundation for system administration, scripting, and understanding how complex systems coordinate.
For further study:
- Advanced IPC: Shared memory, message queues, semaphores (typically used in C/C++ programs, not bash)
- Network programming: TCP/UDP sockets, client-server architecture
- D-Bus: A modern IPC system used by desktop environments and systemd
- Signal safety: Writing robust signal handlers (critical in C, less relevant in bash)
Relevant manual pages:
- `man 7 pipe` - Pipe overview
- `man 7 fifo` - Named pipe overview
- `man 7 signal` - Signal overview
- `man 7 unix` - Unix domain sockets
- `man bash` - Section on trap and job control