Diferență între revizuiri ale paginii „OS Lab 6 - Inter Process Communication”
(Pagină nouă: # OS Lab 6 - Inter-Process Communication ## Objectives Upon completion of this lab, you will be able to: - Explain how the kernel provides IPC mechanisms and how bash exposes th...) |
|||
| (Nu s-au afișat 3 versiuni intermediare efectuate de același utilizator) | |||
| Linia 1: | Linia 1: | ||
| − | + | <span id="objectives"></span> | |
| − | + | == Objectives == | |
| − | |||
Upon completion of this lab, you will be able to: | Upon completion of this lab, you will be able to: | ||
| − | + | * Explain how the kernel provides IPC mechanisms and how bash exposes them to scripts. | |
| − | + | * Use environment variables and <code>export</code> to pass configuration from parent processes to children. | |
| − | + | * Construct pipelines with anonymous pipes to connect process standard streams. | |
| − | + | * Create and use named pipes (FIFOs) for communication between unrelated processes. | |
| − | + | * Send signals to processes and handle them with <code>trap</code> for asynchronous event-driven scripting. | |
| − | + | * Demonstrate basic socket communication using existing utilities for network IPC. | |
| − | + | <span id="introduction"></span> | |
| + | == Introduction == | ||
In Lab 5, we explored bash scripting as a way to automate tasks by writing programs that the shell interprets. We learned how processes execute scripts, how to control flow with conditionals and loops, and how to structure code with functions. However, all of our scripts operated in isolation. Each script was a single process (or spawned child processes) that performed its task independently. | In Lab 5, we explored bash scripting as a way to automate tasks by writing programs that the shell interprets. We learned how processes execute scripts, how to control flow with conditionals and loops, and how to structure code with functions. However, all of our scripts operated in isolation. Each script was a single process (or spawned child processes) that performed its task independently. | ||
| Linia 20: | Linia 20: | ||
Understanding IPC is essential for system administration and software development. These mechanisms form the foundation of everything from command pipelines to distributed systems. | Understanding IPC is essential for system administration and software development. These mechanisms form the foundation of everything from command pipelines to distributed systems. | ||
| − | + | <span id="prerequisites"></span> | |
| + | == Prerequisites == | ||
| − | + | <span id="system-requirements"></span> | |
| + | === System Requirements === | ||
A running instance of the course-provided Linux virtual machine with SSH or direct terminal access. | A running instance of the course-provided Linux virtual machine with SSH or direct terminal access. | ||
| − | + | <span id="required-packages"></span> | |
| + | === Required Packages === | ||
The following packages must be installed: | The following packages must be installed: | ||
| − | + | <syntaxhighlight lang="bash">sudo apt update | |
| − | sudo apt update | + | sudo apt install -y caddy socat</syntaxhighlight> |
| − | sudo apt install -y caddy socat | + | * <code>caddy</code>: A modern HTTP server with automatic HTTPS. We use it to demonstrate socket communication. |
| − | + | * <code>socat</code>: A versatile networking tool that can work with Unix domain sockets. | |
| − | - | + | <span id="knowledge-prerequisites"></span> |
| − | + | === Knowledge Prerequisites === | |
| − | + | You should be familiar with: - Process concepts from Lab 3 (PIDs, process hierarchy, file descriptors) - File permissions from Lab 4 (execute bit, ownership) - Bash scripting fundamentals from Lab 5 (shebangs, builtins, variables, quoting, exit status, redirection, loops, functions) | |
| − | + | <span id="inter-process-communication"></span> | |
| − | - | + | == Inter-Process Communication == |
| − | - | ||
| − | - | ||
| − | + | <span id="what-is-ipc"></span> | |
| − | + | === What is IPC? === | |
| − | |||
By default, processes are isolated from one another. Each process has its own memory space, file descriptor table, and execution context. This isolation provides security and stability, but it creates a problem: how can processes cooperate to accomplish complex tasks? | By default, processes are isolated from one another. Each process has its own memory space, file descriptor table, and execution context. This isolation provides security and stability, but it creates a problem: how can processes cooperate to accomplish complex tasks? | ||
| Linia 55: | Linia 55: | ||
In this lab, we examine five IPC mechanisms arranged roughly by complexity: | In this lab, we examine five IPC mechanisms arranged roughly by complexity: | ||
| − | + | # '''Environment Variables''': The simplest form of IPC. A parent process passes key-value configuration to its children via inherited environment variables. Communication is unidirectional (parent → child) and occurs only at process creation. | |
| − | + | # '''Pipes''': The kernel creates a buffer connecting one process’s standard output to another’s standard input. Data flows in one direction through the pipe. Anonymous pipes exist only while the processes using them are running. | |
| − | + | # '''Named Pipes (FIFOs)''': Like anonymous pipes, but visible in the filesystem. This persistence allows unrelated processes to connect to the same pipe by opening a file path. | |
| − | + | # '''Signals''': Asynchronous notifications sent from one process to another (or from the kernel to a process). Signals interrupt the receiving process, which can catch and handle them or allow default behavior (often termination). | |
| − | + | # '''Sockets''': Bidirectional communication channels that work across network boundaries or locally via Unix domain sockets. Multiple processes can connect to the same socket, enabling one-to-many communication patterns. | |
| − | |||
| − | |||
| − | |||
| − | |||
Each mechanism solves different problems and has different trade-offs in terms of complexity, performance, and flexibility. | Each mechanism solves different problems and has different trade-offs in terms of complexity, performance, and flexibility. | ||
| − | + | <span id="environment-variables-and-export"></span> | |
| + | === Environment Variables and <code>export</code> === | ||
| − | + | <span id="the-kernels-role"></span> | |
| + | ==== The Kernel’s Role ==== | ||
| − | When a process forks, the child inherits a copy of the | + | When a process forks, the child inherits a copy of the parent’s environment: a set of key-value string pairs maintained by the kernel for each process. The child can read these values and modify its own copy, but changes do not propagate back to the parent or to other processes. This inheritance mechanism provides a simple, unidirectional channel for passing configuration from parent to child. |
| − | Environment variables are not limited to shell scripts. Every process has an environment. When you run any program, it receives the environment from the shell that launched it. Programs written in C access this via the | + | Environment variables are not limited to shell scripts. Every process has an environment. When you run any program, it receives the environment from the shell that launched it. Programs written in C access this via the <code>environ</code> global variable or the third parameter to <code>main()</code>. Python uses <code>os.environ</code>. The environment is a universal convention for passing configuration. |
| − | + | <span id="bashs-role-export-and-env"></span> | |
| + | ==== Bash’s Role: <code>export</code> and <code>env</code> ==== | ||
| − | In bash, variables are local to the shell process by default. When you set | + | In bash, variables are local to the shell process by default. When you set <code>name=value</code>, that variable exists in the shell’s memory but is '''not''' passed to child processes. The <code>export</code> builtin marks a variable for inclusion in the environment of future child processes: |
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| + | <syntaxhighlight lang="bash">myvar="hello" | ||
| + | export myvar</syntaxhighlight> | ||
Or, more concisely: | Or, more concisely: | ||
| − | + | <syntaxhighlight lang="bash">export myvar="hello"</syntaxhighlight> | |
| − | export myvar="hello" | + | Once exported, all child processes started by this shell (scripts, programs, or subshells) will inherit <code>myvar</code> in their environment. To see the current environment, use the <code>env</code> command (or <code>printenv</code>), which prints all exported variables. |
| − | |||
| − | |||
| − | Once exported, all child processes started by this shell (scripts, programs, or subshells) will inherit | ||
You can also set environment variables for a single command without affecting the shell: | You can also set environment variables for a single command without affecting the shell: | ||
| − | + | <syntaxhighlight lang="bash">DEBUG=1 ./myscript.sh</syntaxhighlight> | |
| − | DEBUG=1 ./myscript.sh | + | This syntax sets <code>DEBUG=1</code> in the environment of <code>myscript.sh</code> only, without exporting it in the parent shell. |
| − | |||
| − | + | Common environment variables you’ve already been using include <code>PATH</code> (where bash searches for commands), <code>HOME</code> (your home directory), <code>USER</code> (your username), and <code>SHELL</code> (your login shell). These are all set by the login process and inherited by every subsequent process in your session. | |
| − | + | <span id="when-to-use-environment-variables"></span> | |
| + | ==== When to Use Environment Variables ==== | ||
| − | + | Environment variables are appropriate for: - Configuration that should be inherited by all child processes (e.g., locale settings, proxy configuration) - Passing secrets or credentials to programs without embedding them in command-line arguments (which are visible via <code>ps</code>) - Controlling program behavior via well-known variables like <code>PATH</code>, <code>LD_LIBRARY_PATH</code>, or <code>TZ</code> | |
| − | + | They are '''not''' suitable for: - Runtime communication between already-running processes (use pipes, sockets, or signals) - Large amounts of data (environment is limited in size, typically a few megabytes) - Bi-directional communication (child changes don’t affect parent) | |
| − | - | ||
| − | - | ||
| − | |||
| − | + | <span id="pipes"></span> | |
| − | + | === Pipes === | |
| − | |||
| − | |||
| − | + | <span id="the-kernels-pipe-mechanism"></span> | |
| − | + | ==== The Kernel’s Pipe Mechanism ==== | |
| − | |||
A pipe is a one-way data channel maintained in kernel memory. When you create a pipe, the kernel allocates a buffer (typically 64 KB on Linux) and returns two file descriptors: one for writing and one for reading. Data written to the write end is buffered by the kernel and can be read from the read end in FIFO (first-in, first-out) order. | A pipe is a one-way data channel maintained in kernel memory. When you create a pipe, the kernel allocates a buffer (typically 64 KB on Linux) and returns two file descriptors: one for writing and one for reading. Data written to the write end is buffered by the kernel and can be read from the read end in FIFO (first-in, first-out) order. | ||
| Linia 122: | Linia 109: | ||
Pipes are anonymous: they have no name in the filesystem. They exist only as long as at least one process holds a file descriptor to them. When all processes close their references to a pipe, the kernel deallocates it. | Pipes are anonymous: they have no name in the filesystem. They exist only as long as at least one process holds a file descriptor to them. When all processes close their references to a pipe, the kernel deallocates it. | ||
| − | The key insight from Lab 3: a pipeline like | + | The key insight from Lab 3: a pipeline like <code>cat file.txt | grep "error" | wc -l</code> creates multiple processes (all in the same process group) connected by pipes. The kernel sets up the file descriptors so that <code>cat</code>’s stdout is connected to <code>grep</code>’s stdin, and <code>grep</code>’s stdout is connected to <code>wc</code>’s stdin. The processes run concurrently, with data flowing through kernel buffers as it’s produced and consumed. |
| − | + | <span id="bashs-pipe-operator"></span> | |
| + | ==== Bash’s Pipe Operator ==== | ||
| − | Bash creates pipes using the | + | Bash creates pipes using the <code>|</code> operator. The syntax is simple: |
| − | + | <syntaxhighlight lang="bash">command1 | command2</syntaxhighlight> | |
| − | command1 | command2 | + | This creates a pipe and starts two processes. Bash configures <code>command1</code>’s stdout (FD 1) to point to the pipe’s write end and <code>command2</code>’s stdin (FD 0) to point to the pipe’s read end. The commands execute concurrently. |
| − | |||
| − | |||
| − | This creates a pipe and starts two processes. Bash configures | ||
Longer pipelines work the same way: | Longer pipelines work the same way: | ||
| − | + | <syntaxhighlight lang="bash">cat data.txt | sort | uniq | wc -l</syntaxhighlight> | |
| − | cat data.txt | sort | uniq | wc -l | ||
| − | |||
| − | |||
This creates three pipes connecting four processes. Data flows left-to-right through kernel buffers. | This creates three pipes connecting four processes. Data flows left-to-right through kernel buffers. | ||
| − | + | <span id="pipes-in-scripts"></span> | |
| + | ==== Pipes in Scripts ==== | ||
You can use pipes inside scripts just as you would interactively: | You can use pipes inside scripts just as you would interactively: | ||
| − | + | <syntaxhighlight lang="bash">#!/bin/bash | |
| − | #!/bin/bash | ||
set -euo pipefail | set -euo pipefail | ||
# Count unique IP addresses in an access log | # Count unique IP addresses in an access log | ||
| − | cat /var/log/access.log | cut -d' ' -f1 | sort | uniq | wc -l | + | cat /var/log/access.log | cut -d' ' -f1 | sort | uniq | wc -l</syntaxhighlight> |
| − | + | Pipes are particularly powerful when combined with bash’s process substitution feature <code><(command)</code>, but that’s beyond the scope of this introductory lab. | |
| − | + | <span id="when-to-use-pipes"></span> | |
| + | ==== When to Use Pipes ==== | ||
| − | + | Pipes are ideal for: - Connecting the output of one program to the input of another in a linear processing chain - Streaming data processing where data is generated and consumed incrementally - Quick, one-off data transformations at the command line | |
| − | + | They are '''not''' suitable for: - Bi-directional communication (data flows one way only) - Communication between unrelated processes that weren’t started in a pipeline - Persistent communication (pipe disappears when processes exit) | |
| − | - | ||
| − | - | ||
| − | |||
| − | + | <span id="named-pipes-fifos"></span> | |
| − | - | + | === Named Pipes (FIFOs) === |
| − | - | ||
| − | |||
| − | + | <span id="the-kernels-fifo-mechanism"></span> | |
| − | + | ==== The Kernel’s FIFO Mechanism ==== | |
| − | |||
A named pipe, or FIFO (First-In, First-Out), is a pipe with a name in the filesystem. Unlike anonymous pipes, FIFOs persist as filesystem entries (though the data buffer is still in kernel memory). Any process with appropriate permissions can open the FIFO by its path, allowing unrelated processes to communicate. | A named pipe, or FIFO (First-In, First-Out), is a pipe with a name in the filesystem. Unlike anonymous pipes, FIFOs persist as filesystem entries (though the data buffer is still in kernel memory). Any process with appropriate permissions can open the FIFO by its path, allowing unrelated processes to communicate. | ||
| Linia 176: | Linia 153: | ||
When a process opens a FIFO for reading, it blocks until another process opens the same FIFO for writing (and vice versa). Once both ends are open, data flows through the kernel buffer just like an anonymous pipe. When all processes close their connections, the FIFO remains as a filesystem entry but the kernel buffer is deallocated. | When a process opens a FIFO for reading, it blocks until another process opens the same FIFO for writing (and vice versa). Once both ends are open, data flows through the kernel buffer just like an anonymous pipe. When all processes close their connections, the FIFO remains as a filesystem entry but the kernel buffer is deallocated. | ||
| − | You can identify a FIFO in | + | You can identify a FIFO in <code>ls -l</code> output by the leading <code>p</code> in the permission string: |
| − | + | <pre>prw-r--r-- 1 user user 0 Nov 6 10:00 myfifo</pre> | |
| − | prw-r--r-- 1 user user 0 Nov 6 10:00 myfifo | + | <span id="creating-fifos-with-mkfifo"></span> |
| − | + | ==== Creating FIFOs with <code>mkfifo</code> ==== | |
| − | + | The <code>mkfifo</code> command creates a named pipe: | |
| − | |||
| − | The | ||
| − | |||
| − | |||
| − | |||
| − | |||
| + | <syntaxhighlight lang="bash">mkfifo /tmp/myfifo</syntaxhighlight> | ||
Now two unrelated processes can communicate by opening this file. One writes to it: | Now two unrelated processes can communicate by opening this file. One writes to it: | ||
| − | + | <syntaxhighlight lang="bash">echo "Hello from writer" > /tmp/myfifo</syntaxhighlight> | |
| − | echo "Hello from writer" > /tmp/myfifo | ||
| − | |||
| − | |||
This command will block until a reader appears. In another terminal (or in the background), a reader can consume the data: | This command will block until a reader appears. In another terminal (or in the background), a reader can consume the data: | ||
| − | + | <syntaxhighlight lang="bash">cat < /tmp/myfifo</syntaxhighlight> | |
| − | cat < /tmp/myfifo | ||
| − | |||
| − | |||
When the reader connects, the writer unblocks, the message flows through the kernel buffer, and both commands complete. | When the reader connects, the writer unblocks, the message flows through the kernel buffer, and both commands complete. | ||
| − | + | <span id="when-to-use-named-pipes"></span> | |
| − | + | ==== When to Use Named Pipes ==== | |
| − | |||
| − | |||
| − | |||
| − | |||
| − | + | Named pipes are useful for: - Communication between unrelated processes that start at different times - Producer-consumer patterns where one process generates data and another processes it - Simple IPC without needing network sockets or shared files | |
| − | - | ||
| − | |||
| − | - | ||
| − | + | They are '''not''' suitable for: - Multiple simultaneous readers or writers (FIFO semantics become unpredictable) - Persistent data storage (data is lost when all processes disconnect) - Bi-directional communication (like anonymous pipes, FIFOs are one-way) | |
| − | + | <span id="signals-and-trap"></span> | |
| + | === Signals and <code>trap</code> === | ||
| − | + | <span id="the-kernels-signal-mechanism"></span> | |
| + | ==== The Kernel’s Signal Mechanism ==== | ||
| − | + | A signal is an asynchronous notification sent to a process. Signals can be sent by other processes (via the <code>kill</code> system call) or by the kernel itself in response to events like segmentation faults, keyboard interrupts (Ctrl+C), or child process termination. | |
| − | + | When the kernel delivers a signal to a process, it interrupts the process’s normal execution. The process can respond in one of three ways: | |
| − | + | # '''Default Action''': Each signal has a default behavior, often terminating the process. For example, <code>SIGTERM</code> (signal 15) gracefully terminates, while <code>SIGKILL</code> (signal 9) forces immediate termination. | |
| + | # '''Ignore''': The process can choose to ignore certain signals (except <code>SIGKILL</code> and <code>SIGSTOP</code>, which cannot be caught or ignored). | ||
| + | # '''Custom Handler''': The process can register a function (signal handler) to execute when the signal arrives. This allows the process to perform cleanup or take other actions before deciding whether to continue, terminate, or take other action. | ||
| − | + | Common signals: - <code>SIGINT</code> (2): Sent by Ctrl+C. Default: terminate. - <code>SIGTERM</code> (15): Polite request to terminate. Default: terminate. Most services handle this to perform clean shutdown. - <code>SIGKILL</code> (9): Immediate termination. Cannot be caught or ignored. Used as a last resort. - <code>SIGHUP</code> (1): Historically “hang up” (modem disconnected). Often used to tell daemons to reload configuration. - <code>SIGCHLD</code> (17): Sent to a parent when a child process terminates. - <code>SIGUSR1</code> (10) and <code>SIGUSR2</code> (12): User-defined signals for custom purposes. | |
| − | |||
| − | Common signals: | ||
| − | - | ||
| − | - | ||
| − | - | ||
| − | - | ||
| − | - | ||
| − | - | ||
Signals are sent by PID: | Signals are sent by PID: | ||
| − | + | <syntaxhighlight lang="bash">kill -TERM 1234 # Send SIGTERM to process 1234 | |
| − | kill -TERM 1234 # Send SIGTERM to process 1234 | ||
kill -9 1234 # Send SIGKILL to process 1234 | kill -9 1234 # Send SIGKILL to process 1234 | ||
| − | kill -HUP 1234 # Send SIGHUP to process 1234 | + | kill -HUP 1234 # Send SIGHUP to process 1234</syntaxhighlight> |
| − | + | You can also use the <code>%n</code> job notation from Lab 3 to send signals to background jobs. | |
| − | + | <span id="bashs-trap-builtin"></span> | |
| + | ==== Bash’s <code>trap</code> Builtin ==== | ||
| − | + | The <code>trap</code> builtin allows a bash script to register a handler for incoming signals: | |
| − | + | <syntaxhighlight lang="bash">trap 'echo "Caught SIGINT"; exit' INT</syntaxhighlight> | |
| − | + | This tells bash: “When SIGINT arrives, execute the command <code>echo "Caught SIGINT"; exit</code>.” The handler can be any bash command or function. | |
| − | |||
| − | trap 'echo "Caught SIGINT"; exit' INT | ||
| − | |||
| − | |||
| − | This tells bash: | ||
Common pattern for cleanup on exit: | Common pattern for cleanup on exit: | ||
| − | + | <syntaxhighlight lang="bash">#!/bin/bash | |
| − | #!/bin/bash | ||
set -euo pipefail | set -euo pipefail | ||
| Linia 276: | Linia 225: | ||
echo "Done" | echo "Done" | ||
| − | # cleanup() is automatically called on normal exit, Ctrl+C, or errors (with set -e) | + | # cleanup() is automatically called on normal exit, Ctrl+C, or errors (with set -e)</syntaxhighlight> |
| − | + | The special signal <code>EXIT</code> isn’t a real signal; it’s a bash pseudo-signal that fires whenever the script exits for any reason. | |
| − | + | <span id="when-to-use-signals"></span> | |
| + | ==== When to Use Signals ==== | ||
| − | + | Signals are appropriate for: - Event-driven scripts that respond to external events - Graceful shutdown and cleanup on termination - Daemon control (reload config with <code>SIGHUP</code>, graceful restart with <code>SIGTERM</code>) - Inter-process coordination where one process needs to notify another of state changes | |
| − | + | They are '''not''' suitable for: - Transferring data (signals carry almost no information, just the signal number) - Reliable communication (signals can be lost or delayed) - Complex coordination (race conditions are common) | |
| − | - | ||
| − | |||
| − | |||
| − | - | ||
| − | + | <span id="sockets"></span> | |
| − | + | === Sockets === | |
| − | |||
| − | |||
| − | + | <span id="the-kernels-socket-mechanism"></span> | |
| − | + | ==== The Kernel’s Socket Mechanism ==== | |
| − | |||
A socket is a bidirectional communication endpoint. Unlike pipes, sockets support two-way data flow. Unlike named pipes, sockets support multiple concurrent connections, making them suitable for client-server architectures. | A socket is a bidirectional communication endpoint. Unlike pipes, sockets support two-way data flow. Unlike named pipes, sockets support multiple concurrent connections, making them suitable for client-server architectures. | ||
| Linia 302: | Linia 245: | ||
There are two main types: | There are two main types: | ||
| − | + | # '''Network Sockets''': Use TCP or UDP to communicate over IP networks. These work across machines and are the foundation of the internet. | |
| − | + | # '''Unix Domain Sockets''': Use file paths (like named pipes) but support bidirectional communication and connection multiplexing. These work only on the same machine but are faster than network sockets. | |
| − | |||
When a server process binds to a socket, it listens for incoming connections. Multiple client processes can connect to the same server socket. Each connection is independent, with its own bidirectional channel. This one-to-many pattern distinguishes sockets from pipes and FIFOs. | When a server process binds to a socket, it listens for incoming connections. Multiple client processes can connect to the same server socket. Each connection is independent, with its own bidirectional channel. This one-to-many pattern distinguishes sockets from pipes and FIFOs. | ||
| − | For network sockets, the server binds to a port number (e.g., port 80 for HTTP). For Unix domain sockets, it binds to a filesystem path (e.g., | + | For network sockets, the server binds to a port number (e.g., port 80 for HTTP). For Unix domain sockets, it binds to a filesystem path (e.g., <code>/tmp/myservice.sock</code>). |
| − | + | <span id="socket-communication-from-bash"></span> | |
| + | ==== Socket Communication from Bash ==== | ||
| − | While bash | + | While bash doesn’t have native socket support, we can use existing tools to demonstrate socket communication without writing low-level code: |
| − | + | * '''caddy''': A modern web server that can listen on both network and Unix domain sockets | |
| − | + | * '''curl''': A command-line HTTP client that supports Unix domain sockets | |
| − | + | * '''socat''': A general-purpose networking tool for creating and connecting to various socket types | |
These tools abstract the complexity of socket programming, allowing us to focus on the conceptual model. | These tools abstract the complexity of socket programming, allowing us to focus on the conceptual model. | ||
| − | + | <span id="when-to-use-sockets"></span> | |
| + | ==== When to Use Sockets ==== | ||
| − | Sockets are ideal for: | + | Sockets are ideal for: - Client-server applications with multiple concurrent clients - Networked communication across machines - Bidirectional data exchange - Services that need to accept connections from many processes |
| − | - Client-server applications with multiple concurrent clients | ||
| − | - Networked communication across machines | ||
| − | - Bidirectional data exchange | ||
| − | - Services that need to accept connections from many processes | ||
| − | They are | + | They are '''not''' necessary for: - Simple one-to-one, one-way communication (use pipes instead) - Configuration passing (use environment variables) - Asynchronous notifications (use signals) |
| − | - Simple one-to-one, one-way communication (use pipes instead) | ||
| − | - Configuration passing (use environment variables) | ||
| − | - Asynchronous notifications (use signals) | ||
| − | + | <span id="hands-on-exercises"></span> | |
| + | == Hands-on Exercises == | ||
| − | + | <span id="exercise-a-environment-variables-and-export"></span> | |
| + | === Exercise A: Environment Variables and <code>export</code> === | ||
This exercise demonstrates how environment variables are inherited by child processes but not propagated back to parents. | This exercise demonstrates how environment variables are inherited by child processes but not propagated back to parents. | ||
| − | + | '''Steps:''' | |
| − | + | # Check your current environment. Run <code>env | head -n 10</code> and observe some of the variables already set. | |
| − | + | # Create a shell variable without exporting it: <code>myvar="not exported"</code>. | |
| − | + | # Start a subshell with <code>bash</code> and try to read <code>myvar</code> with <code>echo "$myvar"</code>. It should be empty. Exit the subshell. | |
| − | + | # Back in your original shell, export the variable: <code>export myvar="exported now"</code>. | |
| − | + | # Start a subshell again with <code>bash -c 'echo "In child: $myvar"'</code> and verify that <code>myvar</code> is now accessible. | |
| − | + | # In a subshell, modify the variable: <code>bash -c 'myvar="changed in child"; echo "Child modified: $myvar"'</code>. | |
| − | + | # Print <code>myvar</code> in the parent shell: <code>echo "Parent still has: $myvar"</code>. Observe that the parent’s value is unchanged. | |
| − | + | # Demonstrate setting an environment variable for a single command: <code>GREETING="Hello" bash -c 'echo $GREETING'</code>. | |
| − | + | # Verify that <code>GREETING</code> is not set in your current shell: <code>echo "GREETING in parent: $GREETING"</code> (should be empty). | |
| − | + | # Use <code>env</code> to run a command with a specific environment: <code>env DEBUG=1 bash -c 'echo "DEBUG=$DEBUG"'</code>. | |
| − | + | '''Deliverable A:''' Provide the output showing: the subshell cannot see the unexported variable, the subshell can see the exported variable, the parent is unaffected by child changes, and single-command environment variable setting. | |
| − | + | <span id="exercise-b-pipes-in-practice"></span> | |
| + | === Exercise B: Pipes in Practice === | ||
This exercise explores anonymous pipes and how they connect processes. | This exercise explores anonymous pipes and how they connect processes. | ||
| − | + | '''Steps:''' | |
| − | + | # Use a simple pipe to count lines: <code>cat /etc/passwd | wc -l</code>. Observe the result. | |
| − | + | # Build a longer pipeline to find how many unique shells are in use: <code>cat /etc/passwd | cut -d: -f7 | sort | uniq</code>. Count them manually or pipe to <code>wc -l</code>. | |
| − | + | # Demonstrate that processes in a pipeline run concurrently. Run <code>yes | head -n 5</code>. The <code>yes</code> command produces infinite output, but <code>head</code> reads only 5 lines and then exits, causing <code>yes</code> to receive a <code>SIGPIPE</code> and terminate. | |
| − | + | # Create a sample log file for analysis: | |
| − | + | ||
| − | cat > /tmp/lab6_sample.log <<'EOF' | + | <syntaxhighlight lang="bash">cat > /tmp/lab6_sample.log <<'EOF' |
2024-01-15 10:00:00 INFO Application started | 2024-01-15 10:00:00 INFO Application started | ||
2024-01-15 10:05:23 ERROR Database connection failed | 2024-01-15 10:05:23 ERROR Database connection failed | ||
| Linia 372: | Linia 312: | ||
2024-01-15 10:20:00 ERROR Invalid configuration | 2024-01-15 10:20:00 ERROR Invalid configuration | ||
2024-01-15 10:25:00 WARN Low memory | 2024-01-15 10:25:00 WARN Low memory | ||
| − | EOF | + | EOF</syntaxhighlight> |
| − | + | <ol start="5" style="list-style-type: decimal;"> | |
| − | + | <li>Use a pipeline to count ERROR entries: <code>grep "ERROR" /tmp/lab6_sample.log | wc -l</code>.</li> | |
| − | + | <li>Use a pipeline to extract and count unique log levels: <code>cut -d' ' -f3 /tmp/lab6_sample.log | sort | uniq -c</code>.</li> | |
| − | + | <li>Combine multiple operations: Find the timestamps of all ERROR entries: <code>grep "ERROR" /tmp/lab6_sample.log | cut -d' ' -f1,2</code>.</li></ol> | |
| − | + | '''Deliverable B:''' Provide the output from the <code>/etc/passwd</code> shell analysis, the <code>yes | head -n 5</code> demonstration, and the log file analysis commands showing ERROR count and unique log levels with counts. | |
| − | + | <span id="exercise-c-named-pipes-fifos"></span> | |
| + | === Exercise C: Named Pipes (FIFOs) === | ||
This exercise demonstrates persistent named pipes and communication between unrelated processes. | This exercise demonstrates persistent named pipes and communication between unrelated processes. | ||
| − | + | '''Steps:''' | |
| − | + | # Create a named pipe: <code>mkfifo /tmp/lab6_fifo</code>. | |
| − | + | # Verify it exists and note its type: <code>ls -l /tmp/lab6_fifo</code>. The leading <code>p</code> indicates a FIFO. | |
| − | + | # In one terminal, start a reader that will block: <code>cat < /tmp/lab6_fifo</code>. Leave this running. | |
| − | + | # In a second terminal, write to the FIFO: <code>echo "Hello via FIFO" > /tmp/lab6_fifo</code>. | |
| − | + | # Observe that the reader in the first terminal unblocks, displays the message, and exits. | |
| − | + | # Demonstrate that the FIFO persists. List it again: <code>ls -l /tmp/lab6_fifo</code>. | |
| − | + | # Test a more complex scenario. In terminal 1: <code>while read line; do echo "Received: $line"; done < /tmp/lab6_fifo</code>. | |
| − | + | # In terminal 2, send multiple messages: | |
| − | + | ||
| − | echo "Message 1" > /tmp/lab6_fifo | + | <syntaxhighlight lang="bash">echo "Message 1" > /tmp/lab6_fifo |
echo "Message 2" > /tmp/lab6_fifo | echo "Message 2" > /tmp/lab6_fifo | ||
| − | echo "Message 3" > /tmp/lab6_fifo | + | echo "Message 3" > /tmp/lab6_fifo</syntaxhighlight> |
| − | + | <ol start="9" style="list-style-type: decimal;"> | |
| − | + | <li>Observe the messages being received. Press Ctrl+C in terminal 1 to stop the reader.</li> | |
| − | + | <li>Remove the FIFO: <code>rm /tmp/lab6_fifo</code>.</li></ol> | |
| − | + | '''Deliverable C:''' Provide the <code>ls -l</code> output showing the FIFO type, screenshots or output from both terminals showing the message exchange, and a brief explanation of how the FIFO persists between writes. | |
| − | + | <span id="exercise-d-signals-and-trap"></span> | |
| + | === Exercise D: Signals and <code>trap</code> === | ||
This exercise demonstrates signal handling in bash using interactive commands. | This exercise demonstrates signal handling in bash using interactive commands. | ||
| − | + | '''Steps:''' | |
| + | |||
| + | # Start a simple sleep command: <code>sleep 30</code>. Press Ctrl+C to interrupt it. Observe that it terminates immediately. | ||
| + | # Now run a command that ignores SIGINT: <code>trap 'echo "Caught SIGINT, ignoring..."' INT; sleep 30</code>. Press Ctrl+C. The trap catches the signal and the sleep continues. | ||
| + | # Demonstrate cleanup on exit. Run this compound command: | ||
| − | + | <syntaxhighlight lang="bash">trap 'echo "Cleanup: removing temp file"; rm -f /tmp/lab6_test.txt' EXIT; \ | |
| − | |||
| − | |||
| − | |||
| − | trap 'echo "Cleanup: removing temp file"; rm -f /tmp/lab6_test.txt' EXIT; \ | ||
touch /tmp/lab6_test.txt; \ | touch /tmp/lab6_test.txt; \ | ||
echo "File created. Press Ctrl+C or wait..."; \ | echo "File created. Press Ctrl+C or wait..."; \ | ||
sleep 10; \ | sleep 10; \ | ||
| − | echo "Normal exit" | + | echo "Normal exit"</syntaxhighlight> |
| − | + | <ol start="4" style="list-style-type: decimal;"> | |
| − | + | <li>Try both: let it complete normally, then run it again and interrupt with Ctrl+C. Observe cleanup happens both times.</li> | |
| − | + | <li>Start a background sleep: <code>sleep 100 &</code>. Note the PID displayed.</li> | |
| − | + | <li>Send SIGTERM to it: <code>kill -TERM <pid></code> (replace <code><pid></code> with the actual PID). Verify it terminated: <code>jobs</code>.</li> | |
| − | + | <li>Start another background process that will ignore SIGTERM:</li></ol> | |
| − | + | ||
| − | (trap 'echo "Caught SIGTERM, staying alive"' TERM; \ | + | <syntaxhighlight lang="bash">(trap 'echo "Caught SIGTERM, staying alive"' TERM; \ |
| − | while true; do echo "Running..."; sleep 5; done) & | + | while true; do echo "Running..."; sleep 5; done) &</syntaxhighlight> |
| − | + | <ol start="8" style="list-style-type: decimal;"> | |
| − | + | <li>Note the PID, then send SIGTERM: <code>kill -TERM <pid></code>. Observe the trap message.</li> | |
| − | + | <li>Send SIGKILL to force termination: <code>kill -9 <pid></code>. This cannot be caught.</li> | |
| − | + | <li>Verify all background jobs are gone: <code>jobs</code>.</li></ol> | |
| − | + | '''Deliverable D:''' Provide output showing: the trapped SIGINT message, cleanup on both normal exit and Ctrl+C, the SIGTERM being caught with the trap message, and SIGKILL forcing termination. | |
| − | + | <span id="exercise-e-socket-communication-basics"></span> | |
| + | === Exercise E: Socket Communication Basics === | ||
This exercise provides a brief introduction to socket communication using existing tools. | This exercise provides a brief introduction to socket communication using existing tools. | ||
| − | + | '''Steps:''' | |
| + | |||
| + | # Create a simple static site directory: <code>mkdir -p /tmp/lab6_site && echo "<h1>Hello from Caddy</h1>" > /tmp/lab6_site/index.html</code>. | ||
| + | # Start Caddy as a file server listening on a Unix domain socket: <code>caddy file-server --root /tmp/lab6_site --listen unix//tmp/caddy.sock &</code>. Note the PID. | ||
| + | # Wait a moment for Caddy to start. Verify the socket exists: <code>ls -l /tmp/caddy.sock</code>. Note the <code>s</code> type indicating a socket. | ||
| + | # Use <code>curl</code> to make an HTTP request via the Unix domain socket: <code>curl -v --unix-socket /tmp/caddy.sock http://localhost/</code>. | ||
| + | # Observe the response. Note that the communication is bidirectional: you sent an HTTP request, and the server responded with the HTML content. | ||
| + | # (Optional) Open multiple terminals and run <code>curl</code> simultaneously several times. Each request is handled independently, demonstrating the one-to-many capability of sockets. | ||
| + | # Stop Caddy by sending <code>SIGTERM</code>: <code>kill -TERM <caddy_pid></code> or use <code>pkill caddy</code>. | ||
| + | # Clean up: <code>rm -rf /tmp/lab6_site /tmp/caddy.sock</code>. | ||
| − | + | '''Deliverable E:''' Provide the <code>ls -l</code> output showing the socket file, the <code>curl</code> output demonstrating the request and response (especially the HTTP headers and HTML body), and a brief explanation (2-3 sentences) of how this differs from a named pipe in terms of directionality and connection multiplexing. | |
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | + | <span id="scripting-challenges"></span> | |
| + | == Scripting Challenges == | ||
| − | + | <span id="challenge-1-log-aggregator-with-named-pipes"></span> | |
| + | === Challenge 1: Log Aggregator with Named Pipes === | ||
| − | + | Write a script <code>/tmp/lab6_log_aggregator.sh</code> that aggregates log messages from multiple sources using named pipes. | |
| − | + | '''Requirements:''' | |
| − | ** | + | * Create three named pipes: <code>/tmp/log_fifo1</code>, <code>/tmp/log_fifo2</code>, <code>/tmp/log_fifo3</code>. |
| + | * Start three background processes that each write timestamped messages to one of the FIFOs (simulating different log sources). Each should write 5 messages at 1-second intervals. | ||
| + | * Implement a main loop that reads from all three FIFOs simultaneously (hint: use <code>select</code> via <code>read</code> with timeout, or use multiple background readers). | ||
| + | * Write all received messages to a combined log file <code>/tmp/aggregated.log</code> with timestamps. | ||
| + | * Trap <code>SIGTERM</code> to clean up: kill background processes, remove FIFOs, and exit gracefully. | ||
| + | * Use: <code>set -euo pipefail</code>, functions, <code>trap</code>, <code>mkfifo</code>, background processes (<code>&</code>), proper cleanup. | ||
| − | + | '''Test:''' | |
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | + | <syntaxhighlight lang="bash">chmod +x /tmp/lab6_log_aggregator.sh | |
| − | |||
| − | chmod +x /tmp/lab6_log_aggregator.sh | ||
/tmp/lab6_log_aggregator.sh & | /tmp/lab6_log_aggregator.sh & | ||
AGG_PID=$! | AGG_PID=$! | ||
| Linia 474: | Linia 419: | ||
kill -TERM $AGG_PID | kill -TERM $AGG_PID | ||
wait | wait | ||
| − | cat /tmp/aggregated.log | + | cat /tmp/aggregated.log</syntaxhighlight> |
| − | + | '''Deliverable Challenge 1:''' - Complete script with comments - Contents of <code>/tmp/aggregated.log</code> showing interleaved messages from all three sources - Brief explanation (2-3 sentences) of why named pipes were necessary here instead of anonymous pipes | |
| − | + | <span id="challenge-2-graceful-service-controller"></span> | |
| − | - | + | === Challenge 2: Graceful Service Controller === |
| − | - | ||
| − | |||
| − | + | Write a script <code>/tmp/lab6_service_controller.sh</code> that manages a long-running service and responds to signals. | |
| − | + | '''Requirements:''' | |
| − | ** | + | * The script acts as a daemon that runs indefinitely, printing a heartbeat message every 5 seconds. |
| + | * Accept one optional argument: a “config file” path. If provided, read a setting (e.g., <code>INTERVAL=10</code>) from the file to control the heartbeat interval. | ||
| + | * Trap <code>SIGHUP</code>: Reload the configuration file and adjust the interval dynamically without restarting the script. Print “Configuration reloaded.” | ||
| + | * Trap <code>SIGTERM</code>: Perform graceful shutdown. Print “Shutting down gracefully…”, wait 2 seconds, then exit cleanly. | ||
| + | * Trap <code>SIGUSR1</code>: Export the current status to <code>/tmp/service_status.txt</code> (e.g., uptime, number of heartbeats sent, current interval). Print “Status exported.” | ||
| + | * Trap <code>EXIT</code>: Perform cleanup. Print “Service stopped.” | ||
| + | * Maintain internal state: count the number of heartbeats sent, track start time. | ||
| + | * Use: <code>set -euo pipefail</code>, functions, <code>trap</code> for multiple signals, variables for state, a loop, cleanup handler. | ||
| − | + | '''Test:''' | |
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | + | <syntaxhighlight lang="bash">echo "INTERVAL=3" > /tmp/service.conf | |
| − | |||
| − | echo "INTERVAL=3" > /tmp/service.conf | ||
chmod +x /tmp/lab6_service_controller.sh | chmod +x /tmp/lab6_service_controller.sh | ||
/tmp/lab6_service_controller.sh /tmp/service.conf & | /tmp/lab6_service_controller.sh /tmp/service.conf & | ||
| Linia 510: | Linia 451: | ||
sleep 5 | sleep 5 | ||
kill -TERM $SVC_PID # Graceful shutdown | kill -TERM $SVC_PID # Graceful shutdown | ||
| − | wait | + | wait</syntaxhighlight> |
| − | + | '''Deliverable Challenge 2:''' - Complete script with comments - Output showing heartbeats before and after <code>SIGHUP</code> (with visible interval change) - Contents of <code>/tmp/service_status.txt</code> after <code>SIGUSR1</code> - Output showing graceful shutdown message after <code>SIGTERM</code> | |
| − | + | <span id="reference-common-ipc-patterns"></span> | |
| − | - | + | == Reference: Common IPC Patterns == |
| − | - | ||
| − | - | ||
| − | |||
| − | + | Quick reference for IPC mechanisms covered in this lab: | |
| − | + | '''Environment Variables:''' | |
| − | + | <syntaxhighlight lang="bash"># Export a variable for child processes | |
| − | |||
| − | # Export a variable for child processes | ||
export VAR="value" | export VAR="value" | ||
| Linia 533: | Linia 469: | ||
# View all environment variables | # View all environment variables | ||
env | env | ||
| − | printenv | + | printenv</syntaxhighlight> |
| − | + | '''Anonymous Pipes:''' | |
| − | + | <syntaxhighlight lang="bash"># Simple pipeline | |
| − | |||
| − | # Simple pipeline | ||
command1 | command2 | command1 | command2 | ||
# Multi-stage pipeline | # Multi-stage pipeline | ||
| − | cat file.txt | grep "pattern" | sort | uniq | + | cat file.txt | grep "pattern" | sort | uniq</syntaxhighlight> |
| − | + | '''Named Pipes:''' | |
| − | + | <syntaxhighlight lang="bash"># Create a FIFO | |
| − | |||
| − | # Create a FIFO | ||
mkfifo /path/to/fifo | mkfifo /path/to/fifo | ||
| Linia 557: | Linia 489: | ||
# Remove FIFO | # Remove FIFO | ||
| − | rm /path/to/fifo | + | rm /path/to/fifo</syntaxhighlight> |
| − | + | '''Signals:''' | |
| − | + | <syntaxhighlight lang="bash"># Send signal by name | |
| − | |||
| − | # Send signal by name | ||
kill -TERM <pid> | kill -TERM <pid> | ||
kill -HUP <pid> | kill -HUP <pid> | ||
| Linia 580: | Linia 510: | ||
echo "Cleaning up..." | echo "Cleaning up..." | ||
rm -f /tmp/myfiles.* | rm -f /tmp/myfiles.* | ||
| − | } | + | }</syntaxhighlight> |
| − | + | '''Sockets (using existing tools):''' | |
| − | + | <syntaxhighlight lang="bash"># Start a server on Unix domain socket (using socat) | |
| − | |||
| − | # Start a server on Unix domain socket (using socat) | ||
socat UNIX-LISTEN:/tmp/service.sock,fork EXEC:'/usr/bin/myhandler' | socat UNIX-LISTEN:/tmp/service.sock,fork EXEC:'/usr/bin/myhandler' | ||
| Linia 592: | Linia 520: | ||
# HTTP via Unix socket (using curl) | # HTTP via Unix socket (using curl) | ||
| − | curl --unix-socket /tmp/service.sock http://localhost/path | + | curl --unix-socket /tmp/service.sock http://localhost/path</syntaxhighlight> |
| − | + | <span id="common-patterns-table"></span> | |
| + | == Common Patterns Table == | ||
| − | + | {| class="wikitable" | |
| + | |- | ||
| + | ! Mechanism | ||
| + | ! Direction | ||
| + | ! Persistence | ||
| + | ! Use Case | ||
| + | |- | ||
| + | | Environment Variables | ||
| + | | Parent → Child (one-way) | ||
| + | | Inherited at fork | ||
| + | | Configuration, credentials | ||
| + | |- | ||
| + | | Pipes (<code>\|</code>) | ||
| + | | One-way | ||
| + | | Ephemeral (process lifetime) | ||
| + | | Command chaining, streaming | ||
| + | |- | ||
| + | | Named Pipes (FIFO) | ||
| + | | One-way | ||
| + | | Filesystem entry persists | ||
| + | | Unrelated process communication | ||
| + | |- | ||
| + | | Signals | ||
| + | | One-way (notification) | ||
| + | | Asynchronous event | ||
| + | | Process control, event notification | ||
| + | |- | ||
| + | | Sockets | ||
| + | | Bidirectional | ||
| + | | Filesystem entry persists (Unix domain) | ||
| + | | Client-server, network services | ||
| + | |} | ||
| − | + | <span id="deliverables-and-assessment"></span> | |
| − | + | == Deliverables and Assessment == | |
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | + | Submit a single document (PDF or similar) containing: | |
| − | + | '''Exercise Deliverables:''' | |
| + | |||
| + | * Exercise A: Outputs demonstrating export behavior, parent-child isolation, and single-command environment variables | ||
| + | * Exercise B: Pipeline outputs and complete analysis script | ||
| + | * Exercise C: FIFO creation output, producer/consumer scripts with sample output | ||
| + | * Exercise D: Complete daemon simulation script with signal handling demonstrations | ||
| + | * Exercise E: Socket file listing, curl output with HTTP headers, explanation of socket vs FIFO differences | ||
| + | |||
| + | '''Challenge Deliverables:''' | ||
| + | |||
| + | * Challenge 1: Complete log aggregator script, aggregated log contents, explanation | ||
| + | * Challenge 2: Complete service controller script, outputs demonstrating all signal handlers (SIGHUP reload, SIGUSR1 status export, SIGTERM shutdown) | ||
| − | + | '''Additional:''' | |
| − | |||
| − | |||
| − | |||
| − | |||
| − | |||
| − | * | + | * Each deliverable should include command outputs (screenshots or text) and brief explanations where requested. |
| − | + | * For scripts, include the complete, commented source code and example execution output. | |
| − | |||
| − | + | <span id="additional-resources"></span> | |
| − | - | ||
| − | |||
| − | + | == Additional Resources == | |
| − | This lab covers the fundamental IPC mechanisms accessible from bash. | + | This lab covers the fundamental IPC mechanisms accessible from bash. You’ve learned how processes inherit environment variables, communicate via pipes, respond to signals, and use sockets for network communication. These concepts form the foundation for system administration, scripting, and understanding how complex systems coordinate. |
For further study: | For further study: | ||
| − | + | * Advanced IPC: Shared memory, message queues, semaphores (typically used in C/C++ programs, not bash) | |
| − | + | * Network programming: TCP/UDP sockets, client-server architecture | |
| − | + | * D-Bus: A modern IPC system used by desktop environments and systemd | |
| − | + | * Signal safety: Writing robust signal handlers (critical in C, less relevant in bash) | |
Relevant manual pages: | Relevant manual pages: | ||
| − | + | * <code>man 7 pipe</code> - Pipe overview | |
| − | + | * <code>man 7 fifo</code> - Named pipe overview | |
| − | + | * <code>man 7 signal</code> - Signal overview | |
| − | + | * <code>man 7 unix</code> - Unix domain sockets | |
| − | + | * <code>man bash</code> - Section on trap and job control | |
Versiunea curentă din 14 noiembrie 2025 15:06
Objectives
Upon completion of this lab, you will be able to:
- Explain how the kernel provides IPC mechanisms and how bash exposes them to scripts.
- Use environment variables and
exportto pass configuration from parent processes to children. - Construct pipelines with anonymous pipes to connect process standard streams.
- Create and use named pipes (FIFOs) for communication between unrelated processes.
- Send signals to processes and handle them with
trapfor asynchronous event-driven scripting. - Demonstrate basic socket communication using existing utilities for network IPC.
Introduction
In Lab 5, we explored bash scripting as a way to automate tasks by writing programs that the shell interprets. We learned how processes execute scripts, how to control flow with conditionals and loops, and how to structure code with functions. However, all of our scripts operated in isolation. Each script was a single process (or spawned child processes) that performed its task independently.
Real-world systems require processes to cooperate. A web server must communicate with a database. A shell pipeline connects the output of one program to the input of another. A service must respond to signals sent by the init system. This lab explores the kernel-provided mechanisms that enable processes to exchange data and coordinate their actions. We focus on five fundamental IPC mechanisms, all accessible from bash: environment variables, pipes, named pipes, signals, and sockets.
Understanding IPC is essential for system administration and software development. These mechanisms form the foundation of everything from command pipelines to distributed systems.
Prerequisites
System Requirements
A running instance of the course-provided Linux virtual machine with SSH or direct terminal access.
Required Packages
The following packages must be installed:
sudo apt update
sudo apt install -y caddy socat
caddy: A modern HTTP server with automatic HTTPS. We use it to demonstrate socket communication.socat: A versatile networking tool that can work with Unix domain sockets.
Knowledge Prerequisites
You should be familiar with: - Process concepts from Lab 3 (PIDs, process hierarchy, file descriptors) - File permissions from Lab 4 (execute bit, ownership) - Bash scripting fundamentals from Lab 5 (shebangs, builtins, variables, quoting, exit status, redirection, loops, functions)
Inter-Process Communication
What is IPC?
By default, processes are isolated from one another. Each process has its own memory space, file descriptor table, and execution context. This isolation provides security and stability, but it creates a problem: how can processes cooperate to accomplish complex tasks?
Inter-Process Communication (IPC) refers to the kernel-provided mechanisms that allow processes to exchange data and synchronize their actions. The kernel acts as an intermediary, providing channels, buffers, and signaling primitives that processes can use to communicate safely without violating isolation boundaries.
In this lab, we examine five IPC mechanisms arranged roughly by complexity:
- Environment Variables: The simplest form of IPC. A parent process passes key-value configuration to its children via inherited environment variables. Communication is unidirectional (parent → child) and occurs only at process creation.
- Pipes: The kernel creates a buffer connecting one process’s standard output to another’s standard input. Data flows in one direction through the pipe. Anonymous pipes exist only while the processes using them are running.
- Named Pipes (FIFOs): Like anonymous pipes, but visible in the filesystem. This persistence allows unrelated processes to connect to the same pipe by opening a file path.
- Signals: Asynchronous notifications sent from one process to another (or from the kernel to a process). Signals interrupt the receiving process, which can catch and handle them or allow default behavior (often termination).
- Sockets: Bidirectional communication channels that work across network boundaries or locally via Unix domain sockets. Multiple processes can connect to the same socket, enabling one-to-many communication patterns.
Each mechanism solves different problems and has different trade-offs in terms of complexity, performance, and flexibility.
Environment Variables and export
The Kernel’s Role
When a process forks, the child inherits a copy of the parent’s environment: a set of key-value string pairs maintained by the kernel for each process. The child can read these values and modify its own copy, but changes do not propagate back to the parent or to other processes. This inheritance mechanism provides a simple, unidirectional channel for passing configuration from parent to child.
Environment variables are not limited to shell scripts. Every process has an environment. When you run any program, it receives the environment from the shell that launched it. Programs written in C access this via the environ global variable or the third parameter to main(). Python uses os.environ. The environment is a universal convention for passing configuration.
Bash’s Role: export and env
In bash, variables are local to the shell process by default. When you set name=value, that variable exists in the shell’s memory but is not passed to child processes. The export builtin marks a variable for inclusion in the environment of future child processes:
myvar="hello"
export myvar
Or, more concisely:
export myvar="hello"
Once exported, all child processes started by this shell (scripts, programs, or subshells) will inherit myvar in their environment. To see the current environment, use the env command (or printenv), which prints all exported variables.
You can also set environment variables for a single command without affecting the shell:
DEBUG=1 ./myscript.sh
This syntax sets DEBUG=1 in the environment of myscript.sh only, without exporting it in the parent shell.
Common environment variables you’ve already been using include PATH (where bash searches for commands), HOME (your home directory), USER (your username), and SHELL (your login shell). These are all set by the login process and inherited by every subsequent process in your session.
When to Use Environment Variables
Environment variables are appropriate for: - Configuration that should be inherited by all child processes (e.g., locale settings, proxy configuration) - Passing secrets or credentials to programs without embedding them in command-line arguments (which are visible via ps) - Controlling program behavior via well-known variables like PATH, LD_LIBRARY_PATH, or TZ
They are not suitable for: - Runtime communication between already-running processes (use pipes, sockets, or signals) - Large amounts of data (environment is limited in size, typically a few megabytes) - Bi-directional communication (child changes don’t affect parent)
Pipes
The Kernel’s Pipe Mechanism
A pipe is a one-way data channel maintained in kernel memory. When you create a pipe, the kernel allocates a buffer (typically 64 KB on Linux) and returns two file descriptors: one for writing and one for reading. Data written to the write end is buffered by the kernel and can be read from the read end in FIFO (first-in, first-out) order.
Pipes are anonymous: they have no name in the filesystem. They exist only as long as at least one process holds a file descriptor to them. When all processes close their references to a pipe, the kernel deallocates it.
The key insight from Lab 3: a pipeline like cat file.txt | grep "error" | wc -l creates multiple processes (all in the same process group) connected by pipes. The kernel sets up the file descriptors so that cat’s stdout is connected to grep’s stdin, and grep’s stdout is connected to wc’s stdin. The processes run concurrently, with data flowing through kernel buffers as it’s produced and consumed.
Bash’s Pipe Operator
Bash creates pipes using the | operator. The syntax is simple:
command1 | command2
This creates a pipe and starts two processes. Bash configures command1’s stdout (FD 1) to point to the pipe’s write end and command2’s stdin (FD 0) to point to the pipe’s read end. The commands execute concurrently.
Longer pipelines work the same way:
cat data.txt | sort | uniq | wc -l
This creates three pipes connecting four processes. Data flows left-to-right through kernel buffers.
Pipes in Scripts
You can use pipes inside scripts just as you would interactively:
#!/bin/bash
set -euo pipefail
# Count unique IP addresses in an access log
cat /var/log/access.log | cut -d' ' -f1 | sort | uniq | wc -l
Pipes are particularly powerful when combined with bash’s process substitution feature <(command), but that’s beyond the scope of this introductory lab.
When to Use Pipes
Pipes are ideal for: - Connecting the output of one program to the input of another in a linear processing chain - Streaming data processing where data is generated and consumed incrementally - Quick, one-off data transformations at the command line
They are not suitable for: - Bi-directional communication (data flows one way only) - Communication between unrelated processes that weren’t started in a pipeline - Persistent communication (pipe disappears when processes exit)
Named Pipes (FIFOs)
The Kernel’s FIFO Mechanism
A named pipe, or FIFO (First-In, First-Out), is a pipe with a name in the filesystem. Unlike anonymous pipes, FIFOs persist as filesystem entries (though the data buffer is still in kernel memory). Any process with appropriate permissions can open the FIFO by its path, allowing unrelated processes to communicate.
When a process opens a FIFO for reading, it blocks until another process opens the same FIFO for writing (and vice versa). Once both ends are open, data flows through the kernel buffer just like an anonymous pipe. When all processes close their connections, the FIFO remains as a filesystem entry but the kernel buffer is deallocated.
You can identify a FIFO in ls -l output by the leading p in the permission string:
prw-r--r-- 1 user user 0 Nov 6 10:00 myfifo
Creating FIFOs with mkfifo
The mkfifo command creates a named pipe:
mkfifo /tmp/myfifo
Now two unrelated processes can communicate by opening this file. One writes to it:
echo "Hello from writer" > /tmp/myfifo
This command will block until a reader appears. In another terminal (or in the background), a reader can consume the data:
cat < /tmp/myfifo
When the reader connects, the writer unblocks, the message flows through the kernel buffer, and both commands complete.
When to Use Named Pipes
Named pipes are useful for: - Communication between unrelated processes that start at different times - Producer-consumer patterns where one process generates data and another processes it - Simple IPC without needing network sockets or shared files
They are not suitable for: - Multiple simultaneous readers or writers (FIFO semantics become unpredictable) - Persistent data storage (data is lost when all processes disconnect) - Bi-directional communication (like anonymous pipes, FIFOs are one-way)
Signals and trap
The Kernel’s Signal Mechanism
A signal is an asynchronous notification sent to a process. Signals can be sent by other processes (via the kill system call) or by the kernel itself in response to events like segmentation faults, keyboard interrupts (Ctrl+C), or child process termination.
When the kernel delivers a signal to a process, it interrupts the process’s normal execution. The process can respond in one of three ways:
- Default Action: Each signal has a default behavior, often terminating the process. For example,
SIGTERM(signal 15) gracefully terminates, whileSIGKILL(signal 9) forces immediate termination. - Ignore: The process can choose to ignore certain signals (except
SIGKILLandSIGSTOP, which cannot be caught or ignored). - Custom Handler: The process can register a function (signal handler) to execute when the signal arrives. This allows the process to perform cleanup or take other actions before deciding whether to continue, terminate, or take other action.
Common signals: - SIGINT (2): Sent by Ctrl+C. Default: terminate. - SIGTERM (15): Polite request to terminate. Default: terminate. Most services handle this to perform clean shutdown. - SIGKILL (9): Immediate termination. Cannot be caught or ignored. Used as a last resort. - SIGHUP (1): Historically “hang up” (modem disconnected). Often used to tell daemons to reload configuration. - SIGCHLD (17): Sent to a parent when a child process terminates. - SIGUSR1 (10) and SIGUSR2 (12): User-defined signals for custom purposes.
Signals are sent by PID:
kill -TERM 1234 # Send SIGTERM to process 1234
kill -9 1234 # Send SIGKILL to process 1234
kill -HUP 1234 # Send SIGHUP to process 1234
You can also use the %n job notation from Lab 3 to send signals to background jobs.
Bash’s trap Builtin
The trap builtin allows a bash script to register a handler for incoming signals:
trap 'echo "Caught SIGINT"; exit' INT
This tells bash: “When SIGINT arrives, execute the command echo "Caught SIGINT"; exit.” The handler can be any bash command or function.
Common pattern for cleanup on exit:
#!/bin/bash
set -euo pipefail
cleanup() {
echo "Cleaning up temporary files..."
rm -f /tmp/myscript.*
}
trap cleanup EXIT
# Script body
echo "Running..."
sleep 10
echo "Done"
# cleanup() is automatically called on normal exit, Ctrl+C, or errors (with set -e)
The special signal EXIT isn’t a real signal; it’s a bash pseudo-signal that fires whenever the script exits for any reason.
When to Use Signals
Signals are appropriate for: - Event-driven scripts that respond to external events - Graceful shutdown and cleanup on termination - Daemon control (reload config with SIGHUP, graceful restart with SIGTERM) - Inter-process coordination where one process needs to notify another of state changes
They are not suitable for: - Transferring data (signals carry almost no information, just the signal number) - Reliable communication (signals can be lost or delayed) - Complex coordination (race conditions are common)
Sockets
The Kernel’s Socket Mechanism
A socket is a bidirectional communication endpoint. Unlike pipes, sockets support two-way data flow. Unlike named pipes, sockets support multiple concurrent connections, making them suitable for client-server architectures.
There are two main types:
- Network Sockets: Use TCP or UDP to communicate over IP networks. These work across machines and are the foundation of the internet.
- Unix Domain Sockets: Use file paths (like named pipes) but support bidirectional communication and connection multiplexing. These work only on the same machine but are faster than network sockets.
When a server process binds to a socket, it listens for incoming connections. Multiple client processes can connect to the same server socket. Each connection is independent, with its own bidirectional channel. This one-to-many pattern distinguishes sockets from pipes and FIFOs.
For network sockets, the server binds to a port number (e.g., port 80 for HTTP). For Unix domain sockets, it binds to a filesystem path (e.g., /tmp/myservice.sock).
Socket Communication from Bash
While bash doesn’t have native socket support, we can use existing tools to demonstrate socket communication without writing low-level code:
- caddy: A modern web server that can listen on both network and Unix domain sockets
- curl: A command-line HTTP client that supports Unix domain sockets
- socat: A general-purpose networking tool for creating and connecting to various socket types
These tools abstract the complexity of socket programming, allowing us to focus on the conceptual model.
When to Use Sockets
Sockets are ideal for: - Client-server applications with multiple concurrent clients - Networked communication across machines - Bidirectional data exchange - Services that need to accept connections from many processes
They are not necessary for: - Simple one-to-one, one-way communication (use pipes instead) - Configuration passing (use environment variables) - Asynchronous notifications (use signals)
Hands-on Exercises
Exercise A: Environment Variables and export
This exercise demonstrates how environment variables are inherited by child processes but not propagated back to parents.
Steps:
- Check your current environment. Run
env | head -n 10and observe some of the variables already set. - Create a shell variable without exporting it:
myvar="not exported". - Start a subshell with
bashand try to readmyvarwithecho "$myvar". It should be empty. Exit the subshell. - Back in your original shell, export the variable:
export myvar="exported now". - Start a subshell again with
bash -c 'echo "In child: $myvar"'and verify thatmyvaris now accessible. - In a subshell, modify the variable:
bash -c 'myvar="changed in child"; echo "Child modified: $myvar"'. - Print
myvarin the parent shell:echo "Parent still has: $myvar". Observe that the parent’s value is unchanged. - Demonstrate setting an environment variable for a single command:
GREETING="Hello" bash -c 'echo $GREETING'. - Verify that
GREETINGis not set in your current shell:echo "GREETING in parent: $GREETING"(should be empty). - Use
envto run a command with a specific environment:env DEBUG=1 bash -c 'echo "DEBUG=$DEBUG"'.
Deliverable A: Provide the output showing: the subshell cannot see the unexported variable, the subshell can see the exported variable, the parent is unaffected by child changes, and single-command environment variable setting.
Exercise B: Pipes in Practice
This exercise explores anonymous pipes and how they connect processes.
Steps:
- Use a simple pipe to count lines:
cat /etc/passwd | wc -l. Observe the result. - Build a longer pipeline to find how many unique shells are in use:
cat /etc/passwd | cut -d: -f7 | sort | uniq. Count them manually or pipe towc -l. - Demonstrate that processes in a pipeline run concurrently. Run
yes | head -n 5. Theyescommand produces infinite output, butheadreads only 5 lines and then exits, causingyesto receive aSIGPIPEand terminate. - Create a sample log file for analysis:
cat > /tmp/lab6_sample.log <<'EOF'
2024-01-15 10:00:00 INFO Application started
2024-01-15 10:05:23 ERROR Database connection failed
2024-01-15 10:12:45 WARN Connection timeout
2024-01-15 10:15:00 INFO Retry successful
2024-01-15 10:20:00 ERROR Invalid configuration
2024-01-15 10:25:00 WARN Low memory
EOF
- Use a pipeline to count ERROR entries:
grep "ERROR" /tmp/lab6_sample.log | wc -l. - Use a pipeline to extract and count unique log levels:
cut -d' ' -f3 /tmp/lab6_sample.log | sort | uniq -c. - Combine multiple operations: Find the timestamps of all ERROR entries:
grep "ERROR" /tmp/lab6_sample.log | cut -d' ' -f1,2.
Deliverable B: Provide the output from the /etc/passwd shell analysis, the yes | head -n 5 demonstration, and the log file analysis commands showing ERROR count and unique log levels with counts.
Exercise C: Named Pipes (FIFOs)
This exercise demonstrates persistent named pipes and communication between unrelated processes.
Steps:
- Create a named pipe:
mkfifo /tmp/lab6_fifo. - Verify it exists and note its type:
ls -l /tmp/lab6_fifo. The leadingpindicates a FIFO. - In one terminal, start a reader that will block:
cat < /tmp/lab6_fifo. Leave this running. - In a second terminal, write to the FIFO:
echo "Hello via FIFO" > /tmp/lab6_fifo. - Observe that the reader in the first terminal unblocks, displays the message, and exits.
- Demonstrate that the FIFO persists. List it again:
ls -l /tmp/lab6_fifo. - Test a more complex scenario. In terminal 1:
while read line; do echo "Received: $line"; done < /tmp/lab6_fifo. - In terminal 2, send multiple messages:
echo "Message 1" > /tmp/lab6_fifo
echo "Message 2" > /tmp/lab6_fifo
echo "Message 3" > /tmp/lab6_fifo
- Observe the messages being received. Press Ctrl+C in terminal 1 to stop the reader.
- Remove the FIFO:
rm /tmp/lab6_fifo.
Deliverable C: Provide the ls -l output showing the FIFO type, screenshots or output from both terminals showing the message exchange, and a brief explanation of how the FIFO persists between writes.
Exercise D: Signals and trap
This exercise demonstrates signal handling in bash using interactive commands.
Steps:
- Start a simple sleep command:
sleep 30. Press Ctrl+C to interrupt it. Observe that it terminates immediately. - Now run a command that ignores SIGINT:
trap 'echo "Caught SIGINT, ignoring..."' INT; sleep 30. Press Ctrl+C. The trap catches the signal and the sleep continues. - Demonstrate cleanup on exit. Run this compound command:
trap 'echo "Cleanup: removing temp file"; rm -f /tmp/lab6_test.txt' EXIT; \
touch /tmp/lab6_test.txt; \
echo "File created. Press Ctrl+C or wait..."; \
sleep 10; \
echo "Normal exit"
- Try both: let it complete normally, then run it again and interrupt with Ctrl+C. Observe cleanup happens both times.
- Start a background sleep:
sleep 100 &. Note the PID displayed. - Send SIGTERM to it:
kill -TERM <pid>(replace<pid>with the actual PID). Verify it terminated:jobs. - Start another background process that will ignore SIGTERM:
(trap 'echo "Caught SIGTERM, staying alive"' TERM; \
while true; do echo "Running..."; sleep 5; done) &
- Note the PID, then send SIGTERM:
kill -TERM <pid>. Observe the trap message. - Send SIGKILL to force termination:
kill -9 <pid>. This cannot be caught. - Verify all background jobs are gone:
jobs.
Deliverable D: Provide output showing: the trapped SIGINT message, cleanup on both normal exit and Ctrl+C, the SIGTERM being caught with the trap message, and SIGKILL forcing termination.
Exercise E: Socket Communication Basics
This exercise provides a brief introduction to socket communication using existing tools.
Steps:
- Create a simple static site directory:
mkdir -p /tmp/lab6_site && echo "<h1>Hello from Caddy</h1>" > /tmp/lab6_site/index.html. - Start Caddy as a file server listening on a Unix domain socket:
caddy file-server --root /tmp/lab6_site --listen unix//tmp/caddy.sock &. Note the PID. - Wait a moment for Caddy to start. Verify the socket exists:
ls -l /tmp/caddy.sock. Note thestype indicating a socket. - Use
curlto make an HTTP request via the Unix domain socket:curl -v --unix-socket /tmp/caddy.sock http://localhost/. - Observe the response. Note that the communication is bidirectional: you sent an HTTP request, and the server responded with the HTML content.
- (Optional) Open multiple terminals and run
curlsimultaneously several times. Each request is handled independently, demonstrating the one-to-many capability of sockets. - Stop Caddy by sending
SIGTERM:kill -TERM <caddy_pid>or usepkill caddy. - Clean up:
rm -rf /tmp/lab6_site /tmp/caddy.sock.
Deliverable E: Provide the ls -l output showing the socket file, the curl output demonstrating the request and response (especially the HTTP headers and HTML body), and a brief explanation (2-3 sentences) of how this differs from a named pipe in terms of directionality and connection multiplexing.
Scripting Challenges
Challenge 1: Log Aggregator with Named Pipes
Write a script /tmp/lab6_log_aggregator.sh that aggregates log messages from multiple sources using named pipes.
Requirements:
- Create three named pipes:
/tmp/log_fifo1,/tmp/log_fifo2,/tmp/log_fifo3. - Start three background processes that each write timestamped messages to one of the FIFOs (simulating different log sources). Each should write 5 messages at 1-second intervals.
- Implement a main loop that reads from all three FIFOs simultaneously (hint: use
selectviareadwith timeout, or use multiple background readers). - Write all received messages to a combined log file
/tmp/aggregated.logwith timestamps. - Trap
SIGTERMto clean up: kill background processes, remove FIFOs, and exit gracefully. - Use:
set -euo pipefail, functions,trap,mkfifo, background processes (&), proper cleanup.
Test:
chmod +x /tmp/lab6_log_aggregator.sh
/tmp/lab6_log_aggregator.sh &
AGG_PID=$!
sleep 10
kill -TERM $AGG_PID
wait
cat /tmp/aggregated.log
Deliverable Challenge 1: - Complete script with comments - Contents of /tmp/aggregated.log showing interleaved messages from all three sources - Brief explanation (2-3 sentences) of why named pipes were necessary here instead of anonymous pipes
Challenge 2: Graceful Service Controller
Write a script /tmp/lab6_service_controller.sh that manages a long-running service and responds to signals.
Requirements:
- The script acts as a daemon that runs indefinitely, printing a heartbeat message every 5 seconds.
- Accept one optional argument: a “config file” path. If provided, read a setting (e.g.,
INTERVAL=10) from the file to control the heartbeat interval. - Trap
SIGHUP: Reload the configuration file and adjust the interval dynamically without restarting the script. Print “Configuration reloaded.” - Trap
SIGTERM: Perform graceful shutdown. Print “Shutting down gracefully…”, wait 2 seconds, then exit cleanly. - Trap
SIGUSR1: Export the current status to/tmp/service_status.txt(e.g., uptime, number of heartbeats sent, current interval). Print “Status exported.” - Trap
EXIT: Perform cleanup. Print “Service stopped.” - Maintain internal state: count the number of heartbeats sent, track start time.
- Use:
set -euo pipefail, functions,trapfor multiple signals, variables for state, a loop, cleanup handler.
Test:
echo "INTERVAL=3" > /tmp/service.conf
chmod +x /tmp/lab6_service_controller.sh
/tmp/lab6_service_controller.sh /tmp/service.conf &
SVC_PID=$!
sleep 10
kill -USR1 $SVC_PID # Export status
cat /tmp/service_status.txt
echo "INTERVAL=1" > /tmp/service.conf
kill -HUP $SVC_PID # Reload config
sleep 5
kill -TERM $SVC_PID # Graceful shutdown
wait
Deliverable Challenge 2: - Complete script with comments - Output showing heartbeats before and after SIGHUP (with visible interval change) - Contents of /tmp/service_status.txt after SIGUSR1 - Output showing graceful shutdown message after SIGTERM
Reference: Common IPC Patterns
Quick reference for IPC mechanisms covered in this lab:
Environment Variables:
# Export a variable for child processes
export VAR="value"
# Set for one command only
VAR="value" command
# View all environment variables
env
printenv
Anonymous Pipes:
# Simple pipeline
command1 | command2
# Multi-stage pipeline
cat file.txt | grep "pattern" | sort | uniq
Named Pipes:
# Create a FIFO
mkfifo /path/to/fifo
# Write to FIFO (blocks until reader connects)
echo "data" > /path/to/fifo
# Read from FIFO (blocks until writer connects)
cat < /path/to/fifo
# Remove FIFO
rm /path/to/fifo
Signals:
# Send signal by name
kill -TERM <pid>
kill -HUP <pid>
# Send signal by number
kill -15 <pid>
# Force kill (cannot be caught)
kill -9 <pid>
# Trap signals in a script
trap 'echo "Caught signal"' INT TERM
trap cleanup EXIT
# Define cleanup function
cleanup() {
echo "Cleaning up..."
rm -f /tmp/myfiles.*
}
Sockets (using existing tools):
# Start a server on Unix domain socket (using socat)
socat UNIX-LISTEN:/tmp/service.sock,fork EXEC:'/usr/bin/myhandler'
# Connect as client (using socat)
echo "request" | socat - UNIX-CONNECT:/tmp/service.sock
# HTTP via Unix socket (using curl)
curl --unix-socket /tmp/service.sock http://localhost/path
Common Patterns Table
| Mechanism | Direction | Persistence | Use Case |
|---|---|---|---|
| Environment Variables | Parent → Child (one-way) | Inherited at fork | Configuration, credentials |
| ) | One-way | Ephemeral (process lifetime) | Command chaining, streaming |
| Named Pipes (FIFO) | One-way | Filesystem entry persists | Unrelated process communication |
| Signals | One-way (notification) | Asynchronous event | Process control, event notification |
| Sockets | Bidirectional | Filesystem entry persists (Unix domain) | Client-server, network services |
Deliverables and Assessment
Submit a single document (PDF or similar) containing:
Exercise Deliverables:
- Exercise A: Outputs demonstrating export behavior, parent-child isolation, and single-command environment variables
- Exercise B: Pipeline outputs and complete analysis script
- Exercise C: FIFO creation output, producer/consumer scripts with sample output
- Exercise D: Complete daemon simulation script with signal handling demonstrations
- Exercise E: Socket file listing, curl output with HTTP headers, explanation of socket vs FIFO differences
Challenge Deliverables:
- Challenge 1: Complete log aggregator script, aggregated log contents, explanation
- Challenge 2: Complete service controller script, outputs demonstrating all signal handlers (SIGHUP reload, SIGUSR1 status export, SIGTERM shutdown)
Additional:
- Each deliverable should include command outputs (screenshots or text) and brief explanations where requested.
- For scripts, include the complete, commented source code and example execution output.
Additional Resources
This lab covers the fundamental IPC mechanisms accessible from bash. You’ve learned how processes inherit environment variables, communicate via pipes, respond to signals, and use sockets for network communication. These concepts form the foundation for system administration, scripting, and understanding how complex systems coordinate.
For further study:
- Advanced IPC: Shared memory, message queues, semaphores (typically used in C/C++ programs, not bash)
- Network programming: TCP/UDP sockets, client-server architecture
- D-Bus: A modern IPC system used by desktop environments and systemd
- Signal safety: Writing robust signal handlers (critical in C, less relevant in bash)
Relevant manual pages:
man 7 pipe- Pipe overviewman 7 fifo- Named pipe overviewman 7 signal- Signal overviewman 7 unix- Unix domain socketsman bash- Section on trap and job control