Gore Classification and Censoring in Images

Date

2021-11-30

Publisher

Université d'Ottawa / University of Ottawa

Abstract

With the large amount of content posted on the Internet every day, moderators, investigators, and analysts can be exposed to hateful, pornographic, or graphic content as part of their work. Exposure to this kind of content can have a severe impact on their mental health, so measures must be taken to lessen this burden. Significant effort has been made to find and censor pornographic content; gore has not been researched to the same extent. Research in this domain has focused on protecting the public from graphic content in images, movies, or online videos. However, these solutions do little to flag such content for employees who must review it as part of their work. In this thesis, we address this problem by creating a full image processing pipeline to find and censor gore in images. This involves building a dataset, as none are publicly available, and training and testing different machine learning solutions to automatically censor gore content. We propose an Image Processing Pipeline consisting of two models: a classification model that determines whether an image contains gore, and a segmentation model that censors the gore in the image. The classification results can be used to reduce accidental exposure to gore, for example by blurring the image in search results. They can also reduce processing time and storage space by ensuring the segmentation model does not generate a censored image for every image submitted to the pipeline. Both models use pretrained Convolutional Neural Network (CNN) architectures and weights as part of their design and are fine-tuned using Machine Learning (ML) to maximize performance on the small datasets we gathered for these two tasks: the segmentation dataset contains 737 training images, while the classification dataset contains 3830 images.
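The two-stage gating described above can be sketched in a few lines. This is a hypothetical illustration, not the thesis's code: the function names and toy stand-in models are assumptions, while the real pipeline uses fine-tuned CNN classifiers and segmenters. The point it shows is that the classifier gates the segmentation step, so a censored image is only produced when gore is detected.

```python
# Minimal sketch of the two-stage pipeline (hypothetical names; the real
# models are fine-tuned CNNs). The classifier gates the segmentation step,
# saving processing time and storage when no gore is present.

def run_pipeline(image, classifier, segmenter, threshold=0.5):
    """Return (output image, gore_flag); segment only when classified as gore."""
    if classifier(image) < threshold:
        return image, False               # no gore detected: skip segmentation
    mask = segmenter(image)               # per-pixel gore mask (0/1)
    censored = [[0 if m else px for px, m in zip(prow, mrow)]
                for prow, mrow in zip(image, mask)]
    return censored, True                 # masked pixels blacked out

# Toy stand-ins for demonstration:
classifier = lambda img: 0.9              # pretend the CNN flags gore
segmenter = lambda img: [[1, 0], [0, 0]]  # pretend only the top-left pixel is gore
out, flagged = run_pipeline([[10, 20], [30, 40]], classifier, segmenter)
print(flagged, out)  # True [[0, 20], [30, 40]]
```

In practice the censoring step would blur or pixelate the masked region rather than zero it out, but the gating logic is the same.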
We explored several variations on the proposed models, inspired by existing solutions in similar domains such as pornographic content detection and censoring and medical wound segmentation. These variations include Multiple Instance Learning (MIL), Generative Adversarial Networks (GANs), and Mask R-CNN. The best classification model we trained is a voting ensemble that combines the results of four classification models; it achieved a 91.92% Double F1-Score, 87.30% precision, and 90.66% recall on the testing set. Our highest-performing segmentation model achieved a testing Intersection over Union (IoU) of 56.75%. However, when we employed the proposed Image Processing Pipeline (classification followed by segmentation), we achieved a testing IoU of 69.95%.
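For readers unfamiliar with the two evaluation ideas above, here is a hedged, self-contained illustration (not the thesis code) of hard majority voting across classifiers and of the IoU metric used to score segmentation masks:

```python
# Illustrative only: hard majority voting over binary classifier outputs,
# and Intersection over Union (IoU) between two flat binary masks.

def majority_vote(predictions):
    """Combine 0/1 predictions from several classifiers by majority vote."""
    return int(sum(predictions) >= len(predictions) / 2)

def iou(pred_mask, true_mask):
    """IoU = |intersection| / |union| over flat binary (0/1) masks."""
    inter = sum(p & t for p, t in zip(pred_mask, true_mask))
    union = sum(p | t for p, t in zip(pred_mask, true_mask))
    return inter / union if union else 1.0  # two empty masks agree perfectly

print(majority_vote([1, 0, 1, 1]))                 # 1: three of four models flag gore
print(round(iou([1, 1, 0, 0], [1, 0, 1, 0]), 2))   # 0.33: 1 shared pixel of 3 total
```

IoU penalizes both missed gore pixels and over-censoring, which is why it is a stricter score than per-pixel accuracy for this task.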

Keywords

Gore, Artificial Intelligence, Classification, Segmentation, Censoring
