Fleet Week and Deep Learning

Blue Angel Andrew Lewman

TL;DR: the photo album is available. I used deep learning to speed up publishing the images from the Fleet Week airshow.

Fleet Week is in town. This year the weather cooperated: clear skies, no fog, and great viewing of the airshow. I took a half day to shoot the airshow practice on Friday the 6th. I asked a stochastic pigeon for the best camera settings for my old Sony α6500. It suggested automatic mode, or 1/2000 shutter speed with ISO 1000. My lens is f/5.6, so no, those settings are incorrect, unless I want all dark shots emulating an oddly dark and dreary environment. Instead, I read a bunch of writings by various airshow photographers. First, I had to discount their suggested settings, because most of them shoot full-frame cameras with $10-20k fixed telephoto lenses (300mm, 400mm, or longer). I don't have a $30k setup. I have a refurb Sony α6500 with a refurb 100-400mm lens.

I finally arrived at a 1/1000 shutter speed with floating ISO limited to 100-1000. I set the lens to prefer infinity and set the shutter to continuous mid-range (roughly 5 shots per second). I then rode my bike down to the waterfront to find a good spot for shooting. It was a perfectly clear day with a nice breeze. I brought my monopod, but mostly just shot everything freehand. I'd need a multi-axis gimbal to track the fast sweeps properly.

The good thing is that being on the waterfront put me closest to the action. In fact, a few times the 100mm minimum was too zoomed in and I only got partial shots.

When 100mm is too much

After 3 hours, I had filled my SD card and gone through one and a half batteries. The good news is that the camera settings and the location worked great. The challenge is that I now had to sort through 2,576 images.

I started sorting with my usual process:
1. remove all out-of-focus, badly exposed, and badly composed images.
2. look through the bursts for the best-composed, most in-focus shots.
3. repeat #2 until we're down to a few hundred images.
4. add tags, keywords, and metadata.
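The focus check in step 1 can be roughly automated. This is my own sketch using the classic variance-of-the-Laplacian trick, not what digiKam actually does, and the default threshold is a made-up starting point you'd tune per lens and subject:

```python
import numpy as np

def sharpness_score(gray: np.ndarray) -> float:
    """Variance of a 3x3 Laplacian response: higher means more edge
    detail, i.e. a more in-focus image."""
    g = gray.astype(np.float64)
    lap = (g[:-2, 1:-1] + g[2:, 1:-1] + g[1:-1, :-2] + g[1:-1, 2:]
           - 4.0 * g[1:-1, 1:-1])
    return float(lap.var())

def cull_blurry(images: dict[str, np.ndarray], threshold: float = 100.0) -> list[str]:
    """Keep only the names of images that clear the sharpness threshold."""
    return [name for name, img in images.items() if sharpness_score(img) >= threshold]
```

In practice a defocused burst frame scores far lower than a crisp one, so even sorting by this score alone surfaces the keepers within each burst.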

This process got old quickly, so I looked for automation to help me sort the images. I used PhotoPrism to ingest the images and do some automatic object recognition. I then used digiKam's new "image quality sort" feature to, well, sort the images by quality. The deep learning model it uses is based upon this model. Lacking any better way to sort, I let the deep learning algorithm loose on my 2,576 photos. After burning through CPU for far too long (why OpenCL doesn't use the amdgpu is beyond me), the algorithm rated 390 images as acceptable, 1,212 as barely acceptable, and 470 as rejected. I put the 390 acceptable images into a photo album.
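The acceptable / barely acceptable / rejected split amounts to thresholding the model's quality score. A minimal sketch, with illustrative cutoffs on a 0-1 score (the actual model and thresholds digiKam uses will differ):

```python
def bucket_by_quality(scores: dict[str, float],
                      accept_at: float = 0.66,
                      reject_below: float = 0.33) -> tuple[list[str], list[str], list[str]]:
    """Split scored images into (accepted, barely_acceptable, rejected)
    using two made-up cutoffs on a normalized quality score."""
    accepted = [n for n, s in scores.items() if s >= accept_at]
    rejected = [n for n, s in scores.items() if s < reject_below]
    barely = [n for n in scores if n not in accepted and n not in rejected]
    return accepted, barely, rejected
```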

I need to pare it down further, as 390 is still too many, and the selection is still too repetitive. I need a "critics' choice" deep learning model for extra refinement. Another option is to re-run the algorithm on the 390 accepted images, and repeat until we're down to the last 25 or 50 or so "best images". As it is, though, selecting 390 images in an hour or so is vastly faster than I could manage manually.
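The re-run idea is easy to sketch: score the survivors again each round and keep the top fraction until you're down to the target count. `score` here is a placeholder for whatever model does the ranking; none of this is digiKam's actual API.

```python
from typing import Callable

def refine(images: list[str], score: Callable[[str], float],
           keep_at_most: int = 50, keep_fraction: float = 0.5) -> list[str]:
    """Repeatedly re-score the survivors and keep the top-scoring
    fraction until at most keep_at_most images remain."""
    survivors = list(images)
    while len(survivors) > keep_at_most:
        ranked = sorted(survivors, key=score, reverse=True)
        cut = max(keep_at_most, int(len(ranked) * keep_fraction))
        if cut >= len(survivors):  # avoid stalling when the fraction rounds up
            cut = keep_at_most
        survivors = ranked[:cut]
    return survivors
```

Starting from 390 images with a 0.5 keep fraction, this converges in three rounds (390 → 195 → 97 → 50).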

I consider it a win. Previously, previously, previously, previously, and previously