View this entire project on my GitHub.

Using Deep Learning to Accelerate Photography Workflows

Problem Statement

Digital photographers often shoot thousands of images for a single event or "gig." On average, fewer than 5% of the images shot will be kept, post-processed, and delivered to a client. The majority are culled or rejected (out of focus, mistimed, unwanted, etc.) before any editing in Adobe Lightroom, Photoshop, or similar tools. Because computer vision is a hard problem, editing software has no knowledge of the content of newly imported images.

The human eye can classify images quickly (sharp focus, landscape, portrait, too dark, too bright, etc.), but doing so requires viewing and assessing each image individually, a time-consuming process. That is time photographers could instead spend editing worthy images, delivering finished products to clients, and winning new business. In the digital age, clients expect ever-faster delivery, photographers are NOT in short supply, and those who can work more quickly gain a competitive advantage.

Objectives

This project used my personal photography portfolio, dating back to 2006 and containing over 54,000 images, to build a convolutional neural network that identifies the category (landscapes, people, etc.) of individual images. The trained model is used by a Python script (executed from the command line) to sort new images into their categories for a faster post-processing workflow.
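The command-line sorting step can be sketched as follows. This is a hypothetical illustration, not the project's actual script: the classify() function is a stand-in for the trained CNN's prediction, so only the file-moving logic is shown.

```python
import shutil
from pathlib import Path

# The two categories used by the final model.
CLASSES = ("people", "landscapes")

def classify(image_path):
    # Placeholder: a real implementation would load the image, resize it
    # to 256x256, and run the trained model's predict() on it. Here we
    # just pick a class deterministically so the sketch is runnable.
    return CLASSES[hash(image_path.name) % len(CLASSES)]

def sort_images(source_dir, dest_dir):
    """Move each JPEG in source_dir into a per-class subfolder of dest_dir."""
    source, dest = Path(source_dir), Path(dest_dir)
    for cls in CLASSES:
        (dest / cls).mkdir(parents=True, exist_ok=True)
    for image in sorted(source.glob("*.jpg")):
        shutil.move(str(image), str(dest / classify(image) / image.name))
```

Run against a folder of freshly imported images, this leaves every file in a category subfolder, ready for post-processing.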

Neural networks are computationally intensive and analyze photos pixel by pixel. Thousands of photos in the portfolio reach as high as 45 megapixels (45.4 million pixels per image). Using all 3 color channels (RGB), this equates to over 136 million values per image.

The final model uses 3728 selected images (reduced to 256x256 pixels with all 3 color channels), separated into 2 classes (people and landscapes), to keep training efficient.
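Downsampling to 256x256 is what makes training tractable; a quick comparison of values per image against a full-resolution frame shows why:

```python
# Values per image at the model's input size versus full resolution.
small = 256 * 256 * 3             # downsampled 256x256 RGB input
full = 45_400_000 * 3             # 45.4 MP full-resolution RGB frame
reduction = full / small
print(small, round(reduction))    # 196,608 values, ~693x smaller
```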
