Hiking Tilden Regional Park

The walk

In the endless quest to explore the local parks, we headed over to Tilden Regional Park in Berkeley. We picked an easy trail and headed into the mist. The Quarry Trail to Lower Big Springs Trail seemed like a nice loop before dinner.

I think it was only a 170 m climb from the parking lot to the trail peak. The park is popular with people walking their dogs, or in two instances, owners picking burrs out of their dogs' fur. The dogs were clearly not enjoying the forced grooming either.

Near the top of the trail, we came across this little rabbit. It was asleep, and I practiced my quietest walking to get close for the pictures. I'm not sure about the evolutionary advantage of sleeping on the open trail for this rabbit, but maybe the snakes or romping dogs in the grass are a far greater risk.

One thing I've learned from years of hunting is to sit quietly for a while and let the forest reveal itself, on its own terms. Walking and talking is a great way to have a conversation in the beauty of the woods; however, it keeps all of the local wildlife away from you. After sitting for a few minutes, this guy appeared.

That picture was the second shot of him. The first caused him to look up the hill at us.

In the end, it was a nice, quick hike. We looped the trails and headed back down to get some dinner. The mist added a certain surrealism to the experience.

The full photo album is available here.

Image Galleries

A challenge for the past few years has been finding a modern image gallery which generates static HTML/CSS/JS output. I want to host everything myself and not rely on 3rd party services to serve up the image gallery. I also want everything generated in advance and saved as straight files (HTML/CSS/JS). I've tried them all, and written a few of my own.

The previous image galleries were produced with a modified llgal script. They worked well, but lacked any real modern way to showcase the images without lots of clicking. The script was easy to run in a few minutes to generate a new set of HTML pages to showcase the images. The challenge with llgal is that it handles standard image formats fine, but doesn't handle movie files. I started hacking away on llgal to handle all media files, but then decided there must be something better.

The current image gallery is generated via ThumbsUp. ThumbsUp handles all the media files with ease, is multi-process/threaded, and separates the design from the content rather well. Rather than installing all of the requirements on a single machine, I use the official docker image and generate the galleries automatically. ThumbsUp will use as many CPU cores as you can give it, creating one processing thread per core. I upload the new images to a directory and kick off a script. The script is a mix of Ruby and Ansible to spin up a gallery-building host.
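Running the docker image from Ruby boils down to building one long docker command. This is only a sketch of that idea; the image name, mount points, and config path here are assumptions based on the public ThumbsUp docs, not my actual script.

```ruby
# Sketch: build the docker command that runs ThumbsUp over a photo
# directory. Image name and paths are illustrative assumptions.
def thumbsup_command(photos_dir, gallery_dir, config: "thumbsup.json")
  [
    "docker", "run", "--rm",
    "-v", "#{photos_dir}:/photos",    # source media mounted read side
    "-v", "#{gallery_dir}:/gallery",  # static HTML/CSS/JS lands here
    "thumbsupgallery/thumbsup",       # assumed image name
    "thumbsup",
    "--input", "/photos",
    "--output", "/gallery",
    "--config", "/photos/#{config}"
  ]
end

# system(*thumbsup_command("/srv/photos/tilden", "/srv/gallery/tilden"))
```

Passing the command as an array to `system` avoids shell quoting problems with odd file names.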

The Ruby script polls a few of the virtual host providers, such as Linode, Digital Ocean, Memset, and Amazon Web Services, to find the cheapest hourly rate with the most CPUs available. I also want the host on a "close" network to keep data transfer rates high, and by close, I mean as few hops or peering points as possible. The Ruby script runs an HTTP ping to each chosen provider to see which has the lowest latency, the most reliability, and the fewest hops. As the virtual host is powering on, an Ansible playbook (aka script) polls the host awaiting a login. Once the host responds, another Ansible playbook sets up the entire system as needed: mostly applying updates, installing a new kernel, and creating a directory structure.
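The selection step can be reduced to a tiny scoring function. This is a toy version with made-up numbers, not the real polling code: the actual script queries each provider's API and measures latency itself, while here the rates and latencies are hard-coded for illustration.

```ruby
# Toy version of "find the cheapest, closest build host".
# Rates and latencies are invented example data.
Provider = Struct.new(:name, :cpus, :hourly_rate, :latency_ms)

def pick_build_host(providers, min_cpus: 48)
  candidates = providers.select { |p| p.cpus >= min_cpus }
  # Cheapest per-CPU hour wins; ties broken by lowest latency.
  candidates.min_by { |p| [p.hourly_rate / p.cpus.to_f, p.latency_ms] }
end

providers = [
  Provider.new("linode", 48, 1.44, 12),
  Provider.new("do",     64, 1.90, 18),
  Provider.new("aws",    96, 3.80, 25)
]
pick_build_host(providers)   # the 64-CPU host: cheapest per CPU-hour
```

Scoring on price per CPU-hour rather than raw hourly rate is what makes the big hosts attractive despite their higher sticker price.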

After the new host returns from booting into a new kernel, a parallel rsync is kicked off to transfer the new media as fast as possible from the source host. The "closeness" of the build host to this source host is critical here because the goal is to keep the gigabit connections between hosts saturated. After all the images are transferred, a docker version of ThumbsUp kicks off based on a JSON configuration file. If the build host has 32 CPUs, it can process 32 images/movies simultaneously. I've been lucky in finding 48-core and larger hosts lately. It's pretty fun to watch ThumbsUp use 96 cores to build the finished gallery in a matter of minutes. A side observation is that AMD EPYC CPUs seem much faster at this process than Intel CPUs.
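The "parallel rsync" trick is simply splitting the file list into chunks and running one rsync per chunk. A rough sketch, with the worker count, flags, and destination as placeholder assumptions rather than my exact invocation:

```ruby
# Sketch: fan a file list out to N parallel rsync processes.
# Worker count, rsync flags, and destination are placeholders.
def rsync_chunks(files, workers)
  # Split into at most `workers` roughly equal chunks.
  files.each_slice((files.size / workers.to_f).ceil).to_a
end

def parallel_rsync(files, dest, workers: 8)
  pids = rsync_chunks(files, workers).map do |chunk|
    # One rsync per chunk: -a preserves attributes, -z compresses.
    spawn("rsync", "-az", *chunk, dest)
  end
  pids.each { |pid| Process.wait(pid) }
end
```

Multiple rsync streams help saturate a gigabit link because a single stream often stalls on per-file overhead rather than bandwidth.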

If the whole process completes successfully, I use a parallel rsync to pull the new image gallery back to the source host and host it for you to view. At most, the run takes around 21 minutes from start to finish and costs around $1 per run. Limiting the "find hosting provider" step to 48 CPUs provides more options and generally takes around 30 minutes, costing $0.80 to $1.07 depending upon network transfer times. After the final check that everything looks good, the script wipes the build host and deletes it from my account.
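The cost math is straightforward per-second (here, per-minute) billing. The hourly rate below is an example figure, not any provider's actual pricing:

```ruby
# Back-of-the-envelope cost for one run on an hourly-billed host.
# The rate is an example, not real provider pricing.
def run_cost(hourly_rate, minutes)
  (hourly_rate / 60.0 * minutes).round(2)
end

run_cost(2.88, 21)   # ~21-minute run on a $2.88/hr host => about $1
```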

The main reason for automating all of this is that the automation is faster than I am at the keyboard. And since I'm paying for every second of usage, saving 10-20% of the time matters. It would matter much more if these were longer-running jobs, but for the 18-35 minutes these take, we're talking pennies. I have other machine learning jobs I'm working on which take hours, and automation really helps keep those costs manageable. A secondary reason for the automation was to re-learn Ruby and Ansible. The automation also feeds my laziness: I can upload some images to the source host, kick off a script, and come back to check on status 20 minutes later. The longest part of the script is the prompt awaiting feedback on whether the gallery looks good or not:

puts "Does the final gallery of #{new_image_gallery_url} look good? (Yes or No)"
good_bad = gets.chomp
if good_bad.downcase.include?("y")
  teardown
else
  abort("Figure out what didn't work.")
end

All in all, the script and automation took around two weeks to fine-tune and get working smoothly. There's always some new condition I didn't expect while developing the scripts and playbooks, such as Docker Hub not responding, the docker ThumbsUp image quitting with some new error condition, or not being able to find a host with enough CPUs to make the build cost-effective. Anyway, that's the behind-the-scenes walkthrough of this statement:

The full photo album is available here.