
Sunday, 28 May 2023

Adult Image Classification Model



The NSFW (Not Suitable for Work) image deep-learning model open_nsfw is Yahoo's open-source classifier for adult images (images not suitable for the workplace). It is written in Python and released under the BSD 2-Clause license. The model targets real pornographic photographs only; it does not currently detect violent images, adult text, cartoons, or sketches. open_nsfw is built with the Caffe deep learning library and the CaffeOnSpark distributed training framework. The released model is pre-trained and assigns each image an NSFW score between 0 and 1: scores around 0.2 or below suggest the image is safe, while scores around 0.8 or above suggest it is very likely pornographic.

To run it, install Caffe and its Python extension, and make sure pycaffe is on your PYTHONPATH. The model is invoked through the classify_nsfw.py script:

python ./classify_nsfw.py \
--model_def nsfw_model/deploy.prototxt \
--pretrained_model nsfw_model/resnet_50_1by2_nsfw.caffemodel \
INPUT_IMAGE_PATH

Source code: https://github.com/yahoo/open_nsfw

-----------------------------------------------------------

Not Suitable for Work (NSFW) classification using deep neural network Caffe models.  

Open nsfw model

This repo contains code for running Not Suitable for Work (NSFW) classification deep neural network Caffe models. Please refer to our blog post, which describes this work and the experiments in more detail.

Not suitable for work classifier

Detecting offensive or adult images is an important problem that researchers have tackled for decades. With the evolution of computer vision and deep learning, the algorithms have matured, and we are now able to classify an image as not suitable for work with greater precision.

Defining NSFW material is subjective and the task of identifying these images is non-trivial. Moreover, what may be objectionable in one context can be suitable in another. For this reason, the model we describe below focuses only on one type of NSFW content: pornographic images. The identification of NSFW sketches, cartoons, text, images of graphic violence, or other types of unsuitable content is not addressed with this model.

Since images and user-generated content dominate the internet today, filtering nudity and other not-suitable-for-work images becomes an important problem. In this repository we open-source a Caffe deep neural network for preliminary filtering of NSFW images.

[Demo image]

Usage

  • The network takes in an image and outputs a probability (a score between 0 and 1) which can be used to filter not-suitable-for-work images. Scores below 0.2 indicate that the image is very likely safe. Scores above 0.8 indicate that the image is very likely NSFW. Scores in the middle range may be binned into different NSFW levels.
  • Depending on the dataset, use case, and types of images, we advise developers to choose suitable thresholds. Due to the difficult nature of the problem there will be errors, which depend on your use case, definition, and tolerance of NSFW content. Ideally, developers should create an evaluation set according to the definition of what is safe for their application, then fit an ROC curve to choose a suitable threshold if they are using the model as is (see the sketch after this list).
  • Results can be improved by fine-tuning the model for your dataset, use case, or definition of NSFW. We do not provide any guarantees of accuracy of results. Please read the disclaimer below.
  • Using human moderation for edge cases in combination with the machine-learned solution will help improve performance.
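
The threshold advice above can be made concrete with a small script. The following is only a sketch, not part of the repo: it assumes you have a labelled evaluation set, have already collected classify_nsfw.py scores for those images, and have scikit-learn installed; the 5% false-positive budget is an arbitrary example, not a recommendation.

# Hypothetical sketch: choosing an NSFW threshold from your own labelled
# evaluation set. Assumes scikit-learn is installed; none of this is part
# of the open_nsfw repo itself.
import numpy as np
from sklearn.metrics import roc_curve

# labels: 1 = NSFW, 0 = SFW, judged by *your* definition of "safe".
# scores: the 0-1 outputs returned by classify_nsfw.py for the same images.
labels = np.array([1, 0, 0, 1, 0, 1, 0, 0])                     # example data
scores = np.array([0.91, 0.05, 0.33, 0.78, 0.12, 0.66, 0.48, 0.02])

fpr, tpr, thresholds = roc_curve(labels, scores)

# Pick the threshold with the best true-positive rate whose false-positive
# rate stays under a budget your application can tolerate (5% here).
max_fpr = 0.05
ok = fpr <= max_fpr
best = thresholds[ok][np.argmax(tpr[ok])]
print("chosen threshold: %.3f (TPR=%.2f, FPR=%.2f)"
      % (best, tpr[ok].max(), fpr[ok][np.argmax(tpr[ok])]))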

Description of model

We trained the model on a dataset with NSFW images as positives and SFW (suitable for work) images as negatives. These images were editorially labelled. We cannot release the dataset or other details due to the nature of the data.

We used CaffeOnSpark, a wonderful framework for distributed learning that brings deep learning to Hadoop and Spark clusters, to train the models for our experiments. Big thanks to the CaffeOnSpark team!

The deep model was first pretrained on the ImageNet 1000-class dataset, and the weights were then fine-tuned on the NSFW dataset. We used the thin resnet_50_1by2 architecture as the pretrained network. The model was generated using the pynetbuilder tool and replicates the residual network paper's 50-layer network (with half the number of filters in each layer). You can find more details on how the model was generated and trained here.

Please note that deeper networks, or networks with more filters, can improve accuracy. We trained the model using a thin residual network architecture because it provides a good accuracy tradeoff while remaining lightweight in terms of runtime (FLOPs) and memory (number of parameters).

Docker Quickstart

This Docker quickstart guide can be used for evaluating the model quickly with minimal dependency installation.

Install Docker Engine

Build a caffe docker image (CPU)

docker build -t caffe:cpu https://raw.githubusercontent.com/BVLC/caffe/master/docker/cpu/Dockerfile

Check the caffe installation

docker run caffe:cpu caffe --version
caffe version 1.0.0-rc3

Run the docker image with a volume mapped to your open_nsfw repository. Your test_image.jpg should be located in this same directory.

cd open_nsfw
docker run --volume=$(pwd):/workspace caffe:cpu \
python ./classify_nsfw.py \
--model_def nsfw_model/deploy.prototxt \
--pretrained_model nsfw_model/resnet_50_1by2_nsfw.caffemodel \
test_image.jpg

We will get the NSFW score returned:

NSFW score:   0.14057905972
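
For scoring more than one image, one option (not part of the repo) is a small wrapper that reruns the same container for every file in the current directory and parses the "NSFW score:" line shown above. The image name caffe:cpu and the volume mount match the quickstart; everything else here is an illustrative assumption.

# Hypothetical batch wrapper around the Docker quickstart above.
# Assumes the caffe:cpu image has been built and that this script is
# run from the open_nsfw checkout, as in the quickstart.
import glob
import os
import subprocess

def nsfw_score(image_name):
    """Run classify_nsfw.py inside the caffe:cpu container and parse the score."""
    cmd = [
        "docker", "run", "--volume=%s:/workspace" % os.getcwd(), "caffe:cpu",
        "python", "./classify_nsfw.py",
        "--model_def", "nsfw_model/deploy.prototxt",
        "--pretrained_model", "nsfw_model/resnet_50_1by2_nsfw.caffemodel",
        image_name,
    ]
    out = subprocess.check_output(cmd).decode("utf-8")
    for line in out.splitlines():
        if line.startswith("NSFW score:"):
            return float(line.split(":")[1])
    raise RuntimeError("no score found in output:\n" + out)

if __name__ == "__main__":
    for path in sorted(glob.glob("*.jpg")):
        print("%-30s %.4f" % (path, nsfw_score(path)))

Starting a fresh container per image is slow but keeps the sketch simple; for larger batches you would score all images inside a single container run.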

Running the model

To run this model, please install Caffe and its python extension and make sure pycaffe is available in your PYTHONPATH.

We can use the classify_nsfw.py script to run the NSFW model. For convenience, we have provided the script in this repo as well; it prints the NSFW score.

python ./classify_nsfw.py \
--model_def nsfw_model/deploy.prototxt \
--pretrained_model nsfw_model/resnet_50_1by2_nsfw.caffemodel \
INPUT_IMAGE_PATH 
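
For reference, the core of what such a script does with pycaffe looks roughly like the sketch below. It is not the repo's classify_nsfw.py: the mean pixel values and the 'prob' output blob name are assumptions drawn from common Caffe ResNet deployments, so check nsfw_model/deploy.prototxt and the bundled script for the exact preprocessing.

# Rough sketch of scoring one image with pycaffe; not the repo's exact
# classify_nsfw.py. The mean pixel values and the 'prob' output blob name
# are assumptions -- verify against nsfw_model/deploy.prototxt.
import numpy as np
import caffe

net = caffe.Net('nsfw_model/deploy.prototxt',
                'nsfw_model/resnet_50_1by2_nsfw.caffemodel',
                caffe.TEST)

# Standard Caffe preprocessing: HxWxC float image in [0, 1] -> CxHxW BGR,
# scaled to [0, 255] with a per-channel mean subtracted.
transformer = caffe.io.Transformer({'data': net.blobs['data'].data.shape})
transformer.set_transpose('data', (2, 0, 1))
transformer.set_mean('data', np.array([104.0, 117.0, 123.0]))  # assumed BGR mean
transformer.set_raw_scale('data', 255)
transformer.set_channel_swap('data', (2, 1, 0))

image = caffe.io.load_image('test_image.jpg')        # RGB, float, [0, 1]
net.blobs['data'].data[...] = transformer.preprocess('data', image)

output = net.forward()
# Assumed output layout: output['prob'][0] = [sfw_score, nsfw_score]
print('NSFW score:', output['prob'][0][1])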

Contact

The model was trained by Jay Mahadeokar in collaboration with Sachin Farfade, Amar Ramesh Kamat, Armin Kappeler, and others. Special thanks to Gerry Pesavento for taking the initiative to open-source this model. If you have any queries, please raise an issue and we will get back to you ASAP.

from https://github.com/yahoo/open_nsfw