• An automated workflow is developed to classify images of camouflaging cuttlefish.
• The classification methodology is based on texture learning.
• The system achieves up to 94% accuracy compared with human labels.
• Classifier output is used to propose a new model of cuttlefish camouflage.
Abstract
The automated processing of images for scientific analysis has become an integral part of projects that collect large amounts of data. Our recent study of cuttlefish camouflaging behavior captured ∼12,000 images of the animals’ responses to changing visual environments. This work presents an automated segmentation and classification workflow that alleviates the human cost of processing this complex data set. The specimens’ bodies are segmented from the background using a combination of intensity thresholding and Histogram of Oriented Gradients (HOG) features. Subregions of the segmented bodies are then used to train a texton-based classifier designed to codify traditional, manual methods of cuttlefish image analysis. The segmentation procedure correctly selected the body subregion in ∼95% of the images, and the classifier achieved an accuracy of ∼94% relative to manual annotation. In combination, the full workflow correctly processed ∼90% of the images. Additionally, we leverage the output of the classifier to propose a model of camouflage display that represents a given display as a superposition of the user-defined classes.
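To make the pipeline concrete, the sketch below illustrates the kind of steps the abstract describes: intensity-threshold segmentation, a HOG-style gradient-orientation descriptor, and nearest-texton classification. This is a minimal, hypothetical illustration, not the authors' implementation; the threshold rule, descriptor parameters, and toy textures are assumptions introduced here for demonstration.

```python
import numpy as np


def segment_body(img, thresh=None):
    """Crude intensity-threshold segmentation (stand-in for the paper's
    procedure): returns a boolean mask of below-threshold pixels."""
    if thresh is None:
        thresh = img.mean()  # assumed default; a tuned or Otsu threshold could replace it
    return img < thresh


def orientation_histogram(patch, n_bins=9):
    """HOG-style descriptor for one patch: a histogram of unsigned gradient
    orientations, weighted by gradient magnitude and L2-normalized."""
    gy, gx = np.gradient(patch.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), np.pi)  # fold orientations into [0, pi)
    hist, _ = np.histogram(ang, bins=n_bins, range=(0.0, np.pi), weights=mag)
    norm = np.linalg.norm(hist)
    return hist / norm if norm > 0 else hist


def classify_patch(patch, textons):
    """Assign the patch to the nearest 'texton' prototype by L2 distance."""
    desc = orientation_histogram(patch)
    dists = [np.linalg.norm(desc - t) for t in textons]
    return int(np.argmin(dists))


# Toy usage with two synthetic textures: horizontal vs. vertical stripes.
horiz = np.tile(np.sin(np.linspace(0, 8 * np.pi, 32))[:, None], (1, 32))
vert = horiz.T
textons = [orientation_histogram(horiz), orientation_histogram(vert)]

rng = np.random.default_rng(0)
noisy_vert = vert + 0.01 * rng.normal(size=vert.shape)
print(classify_patch(noisy_vert, textons))  # prints 1 (the vertical-stripe texton)
```

In a real workflow the texton dictionary would be learned from labeled training patches (e.g., by clustering descriptors), and the descriptor would be computed on cells within the segmented body mask rather than on whole toy images.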