Explicitnet: An Extension for Not Safe for Work Content Moderation Using Multi-Model Deep Learning
Abstract
The rapid growth and spread of Not Safe For Work (NSFW) content on the internet is a significant problem, especially for children and teenagers. Such content can negatively affect the mental and emotional well-being of users, and it may also be displeasing or unprofessional to view in certain situations or contexts. This paper proposes a Chrome extension that uses multi-model deep learning techniques to flag and moderate NSFW content on the web. Through this extension, the user's browser can be configured to blur or prevent the loading of NSFW images and videos, which makes it simpler to use the internet in public or professional environments and helps shield users from exposure to harmful and sensitive content. The underlying models are trained on a wide range of both NSFW and non-NSFW content spanning many topics, including but not limited to vulgarity, violence, and gore. From this data, the extension learns the characteristics of NSFW content and can identify and regulate it with precision. Additionally, the extension is designed to be lightweight and efficient so that it does not degrade the user's browsing experience.
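
As an illustration of the blur-or-block behaviour described above, the following is a minimal sketch of how per-category model scores might be turned into a moderation action. The function name, category labels, and thresholds are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: fusing per-category NSFW confidence scores into a
# single moderation decision, as the extension's backend might do.
# Names and thresholds are illustrative assumptions only.

def moderate(scores: dict, threshold: float = 0.8) -> str:
    """Return an action ('block', 'blur', or 'allow') from model scores.

    `scores` maps a content category (e.g. 'nudity', 'violence', 'gore')
    to a confidence in [0, 1] produced by the corresponding model.
    """
    worst = max(scores.values(), default=0.0)
    if worst >= threshold:
        return "block"      # high confidence: prevent the content from loading
    if worst >= threshold / 2:
        return "blur"       # moderate confidence: blur instead of blocking
    return "allow"          # low confidence across all categories

print(moderate({"nudity": 0.05, "violence": 0.9}))  # prints "block"
```

In this sketch, blocking is reserved for high-confidence detections, while borderline scores fall back to blurring, so false positives degrade the page less severely.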
Keywords - Multimodal Deep Learning, Content Moderation, Not Safe For Work (NSFW), Extension.