48 --> 6
Recently, my two 24-core servers were shut down and replaced with a 6-core Pine64 RockPro64. There were three migrations: the first consolidated the two 24-core servers into one, the second moved from compiled-from-source binaries to docker containers, and the third moved everything from the last 24-core server to the RockPro64.
Pine64 RockPro64
If you read the hyperlink above about the RockPro64, you learned it is an ARM-based single board computer. The RockPro64 (RP64) is built around the Rockchip RK3399 System on a Chip (SoC). It's a great little board with plenty of I/O performance and computing power for serving up content on the Internet.
The RP64 system serving up this content is built from a few parts, mostly from Pine64:
- RockPro64 SBC
- 30mm tall heatsink
- PCI-e X4 to M.2/NGFF NVMe SSD Interface Card
- 128GB eMMC Module
- Crucial 2TB NVMe storage with giant heatsink/heatpipe
- Metal Desktop/NAS Casing
The whole system draws around 0.4 amps at 120 volts, roughly 48 watts. The system is also passively cooled and silent.
First Migration
Both 24-core servers were used for compiling software, generating image galleries, and such. I used both servers as one unit, so in effect a 48-core compile farm. Compiling code with gcc/make, llvm, and such works better with more cores. JuliaLang also runs well with the @distributed and @threads macros. It was pretty simple to migrate to a single server. The real change wasn't going from two servers to one, though; that comes in the second migration.
Second Migration
The second migration was the bigger change: from compiled source code to docker containers. Initially, I built all my containers from source I compiled myself. That works on a 24-core AMD64 server, but probably not so well on a 6-core ARM SoC. If you take a leap of faith that what's in the docker container matches what you'd compile and build yourself, then you don't need much CPU power to pull a container. Vive docker!
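As a rough sketch of what that looks like in practice, a compose entry for a prebuilt multi-arch image can pin the target architecture explicitly. The service name and image below are placeholders, not my actual setup:

```yaml
# Sketch only: the service and image are illustrative, not my real config.
services:
  web:
    image: nginx:stable       # official multi-arch image; docker resolves the arm64 build
    platform: linux/arm64     # optional, and needs a reasonably recent compose version
    restart: unless-stopped
```

Pulling an image like this costs bandwidth and disk space, not CPU, which is the whole point on a small board.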
It took some time to convert my configs to docker configs and docker-compose YAML files. The setup isn't so straightforward with docker. Docker volumes, mapping volumes to the host filesystem, and just generally figuring out what's inside a docker container take some experimentation and testing. I decided that the safer path was to convert to docker on the working system, before changing CPU architectures. Over the past few weeks, in my spare time, I'd convert a program or system to docker containers. It worked pretty well, and like most things, once you grok it, progress accelerates.
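For illustration, here is the general shape most services ended up taking in compose form. The service name, image, port, and paths are hypothetical, not copied from my configs:

```yaml
# Hypothetical converted service showing both styles of volume.
services:
  gallery:
    image: nginx:stable
    restart: unless-stopped
    ports:
      - "8080:80"
    volumes:
      # bind mount: content lives on the host filesystem and survives container rebuilds
      - /srv/gallery/html:/usr/share/nginx/html:ro
      # named volume: storage created and managed by docker itself
      - gallery-cache:/var/cache/nginx

volumes:
  gallery-cache:
```

Most of the experimentation is figuring out which paths inside a container actually need to map out to the host; everything else can stay ephemeral.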
This migration was also a time to rethink decisions and figure out what should be hosted by the RP64 versus other services. The big changes were to use:
- SmugMug for photos.lewman.com
- BunnyCDN for serving static content
- DNSimple for DNS hosting
- GitLab for code.lewman.com
Third Migration
Once the major services were whittled down to the minimum, the remaining docker containers could simply be migrated to the RP64. The whole system was set up in my home lab, DNS tested, services tested, and the new system prepped for deployment. The RP64 cutover was quick, about 15 minutes of racking, unracking, and turning on the system. After some quick tests, I migrated DNS and eccoti (here you are).
Overall, the migration went smoothly and everything continues to work.
Concerns
I thought through a lot of concerns about migrating from compiled source on redundant servers to a single computer with zero redundancy. Let's walk through them, if only as an exercise.
The servers were identical:
- Dual CPUs with 12 cores on each CPU
- ECC RAM
- 4 spinning hard drives set up in a RAID 5 array
- 2 SATA buses on separate controllers
- Redundant power supplies
- 4 network interfaces, set up in load-balance and failover modes
The RP64 has none of this; in fact, it's a series of single points of failure tied to each other. Other than electricity moving through the system, there are no moving parts. And that's just the hardware.
On the software level, migrating from source code to binaries in docker containers created by someone else was the biggest change. This is really a mindset change: from software I compiled from source with a compiler I trust, to trusting the developers and the community to verify that what's in the docker container does what it says. I also test the docker containers to make sure they behave as expected.
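One lightweight way to encode that kind of behavior check, assuming the service answers HTTP, is a compose healthcheck. The service name, image, and timings below are made up for illustration:

```yaml
# Illustrative healthcheck; the service, image, and timings are placeholders.
services:
  blog:
    image: nginx:stable
    healthcheck:
      # assumes curl exists inside the image; swap in wget or an app-specific check
      test: ["CMD-SHELL", "curl -fsS http://localhost/ || exit 1"]
      interval: 1m
      timeout: 10s
      retries: 3
```

After that, docker ps flags the container as healthy or unhealthy, which is a quick sanity check after a pull or an update.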
On the data level, there is still redundancy. The data on the NVMe is a copy of what's on the photo host, the code host, and the CDN. If it all falls apart, I can point everything back at the RP64 in a minute; if the RP64 fails, most things continue working as is. On top of this, the data is backed up offsite.
Why not the cloud?
Because. The cloud is someone else's computer, it's very expensive, and I wanted to test out low power computing. By using SmugMug, GitLab, and BunnyCDN, I've partially migrated to the cloud. I tried to pick the cheapest way into the cloud that wouldn't be so cheap it ends up hurting performance and reliability.
I repurposed the two servers by handing them over to orgs/projects that needed bare metal servers for build farms. More about this in the future.