Difference between revisions of "SDPT Lab 5"
= Week 5 Lab Activity: Containerization & Cross-Compilation =

== Objective ==
This week, we eliminate the "It works on my machine" problem. You will learn how to build isolated Linux environments, cross-compile your <code>OvenController</code> C++ code for an ARM architecture, and solve the complex problem of linking third-party static and dynamic libraries compiled for foreign architectures.
  
 
We will accomplish this in three phases:
# '''Preparing CMake:''' Updating the build system to link an external database library.
# '''The Manual Container (The "Old Way"):''' Building the cross-compilation environment by hand to understand the underlying mechanics.
# '''Infrastructure as Code (The "Modern Way"):''' Automating the entire environment with a <code>Dockerfile</code>.
  
----
== Phase 1: Preparing CMake for Foreign Libraries ==
Imagine our <code>OvenController</code> needs to log telemetry data locally using an SQLite database. To do this, we need to link the <code>sqlite3</code> library.

Open your <code>CMakeLists.txt</code> from Week 4 and add the standard linking commands. ''(Note: CMake is smart enough to look for the correct architecture version of this library based on the compiler we provide later.)''

<syntaxhighlight lang="cmake">
# Add this near the bottom of your CMakeLists.txt
find_package(PkgConfig REQUIRED)
pkg_check_modules(SQLITE3 REQUIRED sqlite3)

# Add the includes and link the library to your executable
target_include_directories(unit_tests PRIVATE ${SQLITE3_INCLUDE_DIRS})
target_link_libraries(unit_tests PRIVATE ${SQLITE3_LIBRARIES})
</syntaxhighlight>
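
If the link step fails later, it helps to see what pkg-config actually resolved. A minimal sanity check, assuming the <code>SQLITE3</code> variable prefix from the <code>pkg_check_modules</code> call above:

<syntaxhighlight lang="cmake">
# Print the variables populated by pkg_check_modules(SQLITE3 ...)
message(STATUS "SQLite3 include dirs: ${SQLITE3_INCLUDE_DIRS}")
message(STATUS "SQLite3 libraries:    ${SQLITE3_LIBRARIES}")
</syntaxhighlight>

These lines print at configure time (when you run <code>cmake ..</code>), so you can confirm the ARM copy of the library was found before the build even starts.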

----
== Phase 2: The Manual Container (The "Old Way") ==
 
Before we automate, let's understand what Docker is actually doing under the hood.

'''1. Start a raw, interactive Ubuntu container:'''
Open your terminal and run a fresh Ubuntu 22.04 image, mounting your current code directory into <code>/app</code>:
<syntaxhighlight lang="bash">
docker run -it --name manual-build-env -v ${PWD}:/app ubuntu:22.04 /bin/bash
</syntaxhighlight>
''Notice your terminal prompt has changed. You are now the <code>root</code> user inside an isolated Linux environment.''

'''2. The "Multiarch" Superpower:'''
If we just run <code>apt-get install libsqlite3-dev</code>, Ubuntu will download the <code>x86_64</code> version. If we try to cross-compile our ARM code against it, the linker will fail with incompatible-format errors. We must explicitly tell Ubuntu's package manager to accept packages for a foreign architecture (<code>arm64</code>):
<syntaxhighlight lang="bash">
dpkg --add-architecture arm64
apt-get update
</syntaxhighlight>
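
To confirm the command took effect, you can ask <code>dpkg</code> which foreign architectures it now trusts. A small sketch (the fallback branch is our addition, for hosts without <code>dpkg</code>):

<syntaxhighlight lang="bash">
# List the foreign architectures dpkg will accept packages for;
# after the step above, this should include "arm64".
if command -v dpkg >/dev/null 2>&1; then
    arch_report="$(dpkg --print-foreign-architectures || true)"
else
    arch_report="dpkg not available on this host"
fi
echo "foreign architectures: ${arch_report:-none yet}"
</syntaxhighlight>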
  
'''3. Install the Toolchain and Foreign Libraries:'''
Now we install our cross-compiler, QEMU emulator, and the specifically compiled '''ARM version''' of our library (<code>:arm64</code>):
<syntaxhighlight lang="bash">
apt-get install -y build-essential cmake pkg-config g++-aarch64-linux-gnu qemu-user qemu-user-static
apt-get install -y libsqlite3-dev:arm64
</syntaxhighlight>
  
'''4. Cross-Compile and Emulate:'''
Navigate to your mounted code and build the project for ARM. We use a command-line flag to tell CMake to use the ARM compiler instead of native GCC.
<syntaxhighlight lang="bash">
cd /app
mkdir build_arm && cd build_arm
cmake -DCMAKE_CXX_COMPILER=aarch64-linux-gnu-g++ ..
make
</syntaxhighlight>
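
Passing <code>-DCMAKE_CXX_COMPILER</code> works for a quick lab build; the idiomatic CMake route for cross-compiling is a toolchain file. A sketch, assuming a file we name <code>arm64-toolchain.cmake</code> (the filename is our own choice; the sysroot path matches the Ubuntu cross packages installed above):

<syntaxhighlight lang="cmake">
# arm64-toolchain.cmake -- use with: cmake -DCMAKE_TOOLCHAIN_FILE=../arm64-toolchain.cmake ..
set(CMAKE_SYSTEM_NAME Linux)            # cross-compiling for a Linux target
set(CMAKE_SYSTEM_PROCESSOR aarch64)     # target CPU
set(CMAKE_C_COMPILER   aarch64-linux-gnu-gcc)
set(CMAKE_CXX_COMPILER aarch64-linux-gnu-g++)
# Search headers/libraries in the target sysroot, but run helper programs from the host
set(CMAKE_FIND_ROOT_PATH /usr/aarch64-linux-gnu)
set(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
set(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
set(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)
</syntaxhighlight>

Setting <code>CMAKE_SYSTEM_NAME</code> also switches CMake into cross-compiling mode, so it stops assuming build-host defaults.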
  
If you try to run <code>./unit_tests</code> natively, Linux will throw an <code>Exec format error</code>. Use QEMU to emulate the execution of your ARM binary. Because we linked a dynamic library, we must tell QEMU where to find the ARM shared objects (the <code>-L</code> flag):
<syntaxhighlight lang="bash">
qemu-aarch64 -L /usr/aarch64-linux-gnu/ ./unit_tests
</syntaxhighlight>
''Verify that your Google Tests pass on the emulated architecture!'' Type <code>exit</code> to leave and stop the container.
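
If you run the tests repeatedly, retyping <code>-L</code> gets old. qemu-user also reads the sysroot path from the <code>QEMU_LD_PREFIX</code> environment variable; a short sketch:

<syntaxhighlight lang="bash">
# Equivalent to the -L flag: tell QEMU once where the ARM loader
# and shared objects live.
export QEMU_LD_PREFIX=/usr/aarch64-linux-gnu
echo "QEMU sysroot: $QEMU_LD_PREFIX"
# qemu-aarch64 ./unit_tests   # no -L needed now
</syntaxhighlight>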
 
 
 
 
----
== Phase 3: Infrastructure as Code (The "Modern Way") ==
Manually typing <code>apt-get</code> commands, enabling architectures, and committing containers is tedious and error-prone. Modern DevOps engineers automate this using a <code>Dockerfile</code>.

'''Your Challenge:'''
Create a file named exactly <code>Dockerfile</code> (no extension) in your project root. Translate the manual steps you just took into an automated recipe that meets the following strict specifications:

# '''Base Image:''' Must use <code>ubuntu:22.04</code>.
# '''Environment:''' Set <code>DEBIAN_FRONTEND</code> to <code>noninteractive</code> to prevent the build from hanging on timezone prompts.
# '''Architecture:''' Use a <code>RUN</code> command to add the <code>arm64</code> architecture to <code>dpkg</code>.
# '''The Mega-Layer:''' Use a single <code>RUN</code> command to update <code>apt-get</code>, install all required tools (<code>build-essential cmake pkg-config g++-aarch64-linux-gnu qemu-user qemu-user-static libsqlite3-dev:arm64</code>), and immediately clean up the apt cache (<code>rm -rf /var/lib/apt/lists/*</code>) to keep the image size small. ''(Hint: chain your commands with <code>&&</code> and use <code>\</code> for multi-line formatting.)''
# '''Workspace:''' Set the working directory inside the container to <code>/app</code>.
# '''PID 1:''' Set the default command (<code>CMD</code>) to launch <code>/bin/bash</code> so the container stays alive interactively.
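
The "Mega-Layer" hint describes a common Dockerfile idiom. Sketched generically below — the package names are placeholders, not the required list, so this is not the challenge solution:

<syntaxhighlight lang="dockerfile">
# Generic single-layer install pattern: update, install, and clean up
# in one RUN so the apt cache never persists in any image layer.
RUN apt-get update && apt-get install -y \
    some-package \
    another-package \
    && rm -rf /var/lib/apt/lists/*
</syntaxhighlight>

Each <code>RUN</code> creates a filesystem layer; deleting the cache in a ''later'' layer would not shrink the image, which is why the cleanup must live in the same command as the install.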
'''Build and Test Your Image:'''
Once you have written your Dockerfile, test your infrastructure code by building and running it. It should behave exactly like your manual container did:
<syntaxhighlight lang="bash">
docker build -t automated-arm-env:latest .
docker run -it --rm -v ${PWD}:/app automated-arm-env:latest
</syntaxhighlight>
You now have a perfectly reproducible, portable embedded toolchain that natively resolves third-party ARM dependencies.

----
=== Assignment Submission ===
Upload '''ONLY''' your <code>Dockerfile</code> to the Moodle VPL assignment. The automated grading server will run static analysis on your infrastructure-as-code syntax to verify your architecture setup, toolchain requirements, and layer optimization.

Current revision as of 30 March 2026, 13:55
