Set up an ODROID XU4 RAID Server with Cross-Compiled Docker Images for ARM

Goal: Set up a modest 8-core ODROID XU4 ARM server[1] running RAID1 with Ubuntu Bionic LTS as a dedicated Docker environment for specialty network operations using cross-compiled Docker images for ARM.

ODROID XU4s are awesome. They are 8-core, 2GHz ARM SBCs[2] with Gigabit Ethernet and USB 3.0 connections. They only have 2GB of DDR3 RAM, but when paired with a CloudShell2 case and a couple of HDDs (or SSDs), they become an impressive NAS, or better, specialty network activity drivers for other projects. For example, I use a couple for my automated data-collection work across several VPNs.

ODROID XU4 in a CloudShell2
This guide is a reference for myself on how to set these devices up because I seem to set up a new one every half-year or so.

Requirements:

  • ODROID XU4[Q] ($50)
  • CloudShell2 case ($40)
  • CloudShell2 power adapter: 15V at 4A (5.5mm, 2.15mm DC jack) ($10)
  • 2× 1TB HDD[3] ($50 each)
  • 64GB microSD A1[4] card ($15)
  • MicroSD card reader
  • Rufus v3.3 (download)
  • Ubuntu tuned for XU4 (download)

Step 1 – Download Ubuntu, Flash a MicroSD Card

Download the ~370MB image and flash it directly to the microSD card with Rufus 3.3. Do check for bad blocks to avoid any surprises later. The latest image, with an LTS kernel at 4.14 or newer, will automatically resize the root filesystem without a reboot. Any size card will do, as you’ll move the root filesystem to disk later.

Flash the OS to a microSD card with Rufus
Windows causes headaches when flashing microSD cards. Do yourself a favor and extract that downloaded .xz file into the ~2.7GB .img file and flash that instead.

Step 2 – Assemble the CloudShell2, Set the RAID Level

You can find several YouTube videos on assembling these cases. Set the RAID level to RAID1 by sliding the tiny LCD-board DIP switches as follows: left up, right down. Also, set the ODROID DIP switch to boot from microSD (the default setting). Plug in your LAN cable. Hold down the RAID-set button on the bottom left of the LCD board and plug in the power cord. The red lights should blink and the drives should spin. Release the button after about 10s. It may reboot once to resize the root filesystem on the microSD.

If the device turns off and doesn’t boot up again, flash the microSD card again. A bad boot causes the HDDs to turn off.
RAID1 DIP settings

Step 3 – SSH into the ODROID, Move RootFS to Disk

Power on the CloudShell2 by plugging in the 15V/4A power supply.

If the next steps don’t work, or you cannot find the IP of the ODROID, then plug in an HDMI cable and observe the console output.
If the boot process for kernel 4.14 hangs on “random: crng init done”, then seriously reflash the microSD card with the extracted .img, not the .xz archive (this note is for me because I always forget). This works; I’m not kidding. Alternatively, the random number generator may simply be waiting for mouse or keyboard input, so plug in a keyboard and press a key.

Find the IP of the ODROID with either Angry IP Scanner, nmap -sn 192.168.1.0/24, or by logging into the gateway router and finding it there. Set a static DHCP entry while you’re at it.

Find the ODROID IP from the Gateway

When you have the IP, SSH into the device with the default username: root and password: odroid.

Before going any further, confirm the RAID setting is as intended. I prefer RAID1. See below.

RAID1 is not set
RAID1 is correctly set

Change your password next.
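The article doesn’t show the command, but a minimal sketch is:

```shell
# Replace the default "odroid" password with your own
passwd
```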

Change the hostname in a few places:
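A sketch of those places, assuming a new hostname of "xu4-nas" (pick your own):

```shell
# Writes /etc/hostname for you
sudo hostnamectl set-hostname xu4-nas

# Update the loopback entry too, or sudo will complain about
# an unresolvable hostname
sudo sed -i 's/odroid/xu4-nas/g' /etc/hosts
```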

Upgrade the distribution:
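A minimal sketch, assuming the stock apt sources on the image:

```shell
sudo apt update
sudo apt full-upgrade -y
# Reboot to pick up any new kernel
sudo reboot
```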

Now you have the microSD card as a backup OS. Create a new boot partition (I’ll use 20GB):
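The CloudShell2’s hardware RAID presents the mirrored pair as a single disk; assuming it shows up as /dev/sda, an interactive fdisk session looks roughly like this:

```shell
sudo fdisk /dev/sda
# Inside fdisk:
#   n        -> new partition
#   p        -> primary, partition 1
#   <Enter>  -> accept the default first sector
#   +20G     -> 20GB partition for the root filesystem
#   w        -> write the table and exit
```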

I like to create another partition for data to hold everything under the sun, but separated from the root filesystem. To do that with the remaining space:
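Again in fdisk, assuming /dev/sda:

```shell
sudo fdisk /dev/sda
# Inside fdisk:
#   n        -> new partition
#   p        -> primary, partition 2
#   <Enter>  -> accept the default first sector
#   <Enter>  -> accept the default last sector (all remaining space)
#   w        -> write the table and exit
```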

With two 1TB drives my RAID1 partition table looks like this:

1TB RAID1 partition table after fdisk

Format the partitions with ext4 and mount them:
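A sketch, assuming /dev/sda1 and /dev/sda2 from the previous step (the labels and mount points are my choice):

```shell
# Format both new partitions as ext4
sudo mkfs.ext4 -L root /dev/sda1
sudo mkfs.ext4 -L data /dev/sda2

# Mount them
sudo mkdir -p /mnt/root /mnt/data
sudo mount /dev/sda1 /mnt/root
sudo mount /dev/sda2 /mnt/data
```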

Sync the boot partition on the microSD card to /dev/sda1:
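The exact commands aren’t shown; a sketch using rsync, assuming /dev/sda1 is mounted at /mnt/root, which copies the live filesystem while skipping virtual filesystems and the target itself:

```shell
# Mount the new root partition (if not already mounted)
sudo mkdir -p /mnt/root
sudo mountpoint -q /mnt/root || sudo mount /dev/sda1 /mnt/root

# Copy the running root filesystem across
sudo rsync -aAXH --info=progress2 \
  --exclude={"/dev/*","/proc/*","/sys/*","/run/*","/tmp/*","/mnt/*","/media/*","/lost+found"} \
  / /mnt/root
```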

Prepare to change the boot partition:
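Assuming the new partitions are /dev/sda1 and /dev/sda2:

```shell
# Note the UUIDs of the new partitions; you'll need them for
# fstab and boot.ini
sudo blkid /dev/sda1 /dev/sda2
```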

Get the new boot partition UUID

Then make these changes:
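On my setup the edits look roughly like this; the UUID placeholders stand in for the values blkid reported (the /data mount point is my choice):

```shell
# 1) In /mnt/root/etc/fstab (the copy on the new root partition),
#    mount the new partitions instead of the microSD:
#      UUID=<sda1-uuid>  /      ext4  errors=remount-ro,noatime  0 1
#      UUID=<sda2-uuid>  /data  ext4  defaults,noatime           0 2
#
# 2) In boot.ini on the microSD's boot partition, change root= on the
#    kernel command line to root=UUID=<sda1-uuid> so the kernel boots
#    from the new root partition.
```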

Update the fstab file to mount the partitions

Restart your system with shutdown -r now.

Make sure to keep the microSD card in the ODROID. The boot.ini file is still read from it.

Your filesystem will now look like this if you run lsblk -f or df -h --output=source,target:

The root filesystem has been transferred

Step 4 – Turn on the Fan and LCD

By now the ODROID is getting hot with the fan off. Turn on the fan and the LCD:
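A sketch, assuming Hardkernel’s bionic repo (already configured on this image) still provides the odroid-cloudshell and cloudshell2-fan packages:

```shell
# LCD driver and fan-control service for the CloudShell2
sudo apt install -y odroid-cloudshell cloudshell2-fan
sudo reboot
```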

I prefer to use my own LCD and fan scripts on GitHub:

To turn the LCD on and off with my LCD script, you can run lcd.

Custom LCD on/off script
Be sure to unplug the HDMI cable if it is still attached or the LCD contents will display on the TV.

Step 5 – Install Docker and Compose

Install Docker by adding the stable ARM repo and installing the latest version:
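Following Docker’s official Ubuntu instructions, but with the armhf repo:

```shell
# Prerequisites and Docker's GPG key
sudo apt update
sudo apt install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -

# Stable armhf repo for Bionic
sudo add-apt-repository \
  "deb [arch=armhf] https://download.docker.com/linux/ubuntu bionic stable"

# Install the latest Docker CE
sudo apt update
sudo apt install -y docker-ce docker-ce-cli containerd.io
```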

Running apt-get install docker.io from Ubuntu’s own repos will likely install an older version of Docker on Bionic. Follow the steps below instead.

Create a docker user to pilot the server from. It’s part of the docker group so no sudo is required to operate Docker.
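A sketch of creating that user:

```shell
# Create the user (interactive: sets a password and home directory)
sudo adduser docker

# Membership in the docker group removes the need for sudo
sudo usermod -aG docker docker

# Verify from a fresh login shell
su - docker -c "docker ps"
```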

Install Docker Compose for ARM architectures with Python’s PIP. Be sure not to be root when installing docker-compose to limit what user scripts can do with your system (ref):
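A sketch; the -dev packages are my assumption, since PIP compiles some dependencies from source on ARM:

```shell
# Build dependencies for the Python packages
sudo apt install -y python3-pip python3-dev libffi-dev libssl-dev

# As the unprivileged docker user, NOT root
pip3 install --user docker-compose

~/.local/bin/docker-compose version
```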

Docker Compose’s prebuilt binaries are only published for x86_64 architectures, but with PIP (Python) it can be installed on ARM-based architectures.

Tip: To run the docker-compose command from a non-login shell (i.e. su docker), add PATH="$HOME/.local/bin:$PATH" to your ~/.bashrc file. If you log in as ‘docker’ through a login shell, ~/.local/bin is added to the path automatically.

Step 6 – Cross-Compile Docker Images for ARM with BuildKit

Many great Docker images are built for x64 architectures. For example, one of my favorite images is browserless/chrome but it is only supported on x64.

Headless Chrome image only supported on x64 architecture

Wonderfully, stable Docker CE 19.03 and later include the new BuildKit functionality (with buildx). Simply enable a flag:
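In Docker CE 19.03 the buildx command sits behind an experimental CLI flag:

```shell
# Enable the experimental CLI to expose "docker buildx"
export DOCKER_CLI_EXPERIMENTAL=enabled
docker buildx version
```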

Docker BuildKit integrated with the buildx command

Architecture emulators need to be installed, but this is as easy as running the following commands on an x64 machine:
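One well-known way to do this is with the multiarch/qemu-user-static image, which registers QEMU binfmt handlers for foreign architectures:

```shell
# One-off registration of QEMU handlers on the x64 host
docker run --rm --privileged multiarch/qemu-user-static --reset -p yes

# Test: run a 32-bit ARM container on the x64 machine;
# this should print "armv7l"
docker run --rm arm32v7/ubuntu uname -m
```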

The ODROID XU4 has an ARMv7 (32-bit) processor, so the above test result is exactly what we need. Next, still on an x64 system, set up a multi-architecture build instance like so:
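A sketch; the builder name is arbitrary:

```shell
# Create and select a builder that can target multiple platforms
docker buildx create --name xu4builder --use

# Bootstrap it and list the platforms it supports
docker buildx inspect --bootstrap
```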

Create a builder instance for multi-architecture Docker builds

Build the images you want for ARMv7 processors. For instance, in my Chrome-VPN project I build ARMv7 images like so and then I can pull those images to my ODROID machine.
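The shape of such a build, with an illustrative image name (substitute your own registry, repo, and tag):

```shell
# Cross-build for 32-bit ARM and push to a registry
docker buildx build --platform linux/arm/v7 \
  -t <dockerhub-user>/chrome-vpn:armv7 --push .

# Then, on the ODROID:
docker pull <dockerhub-user>/chrome-vpn:armv7
```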

TCO Analysis

Taking a trip to the AWS Monthly Cost Calculator, I note that an unreserved t3a.small (the newest at this time) and, say, 800GB of S3 storage in us-west-2 cost over US$30/mo, not to mention data-in and data-out costs, as well as the cost-per-PUT (e.g. 1,000,000 writes or copies cost US$5) with S3. My use case requires several CPU cores for concurrent-but-low-bandwidth network requests, heavy writing, and modest RAM, so this on-prem solution is well-suited.

The TCO is either US$215 once for this device (plus negligible electricity), or roughly US$37/mo, forever, in the cloud.

Results

High-end machines and cloud instances have their place, but not all use cases require them; sometimes a simple low-power machine with reliable spinning-platter hard drives is ideal.

Success: Here we set up an 8-core ARMv7 Docker environment on the popular ODROID XU4 in a CloudShell2 case, which has an LCD display and RAID1-enabled 1TB HDDs, cross-compiled some Docker images for ARMv7 from an x64 machine, and finally ran a cost analysis showing that the TCO of this on-prem device can be much lower than a cloud solution.

Notes:

  1. I’m going to be playing fast-and-loose with the term “server” in this article when really my use case makes it a “worker”.
  2. Single-Board Computers
  3. I prefer WD HDDs because they last longer than mid-range SSDs for heavy writes
  4. You don’t need an A2