A Python-based system for detecting faces in images and videos using YOLOv8, with the ability to censor detected faces. The system is designed to be modular and extensible.
- Face detection using YOLOv8
- Support for both image and video processing
- Easy-to-use interface
- Modular censoring system
- Trained on the WIDER FACE dataset via Roboflow
- Multiple masking methods, including blur, emoji, and text (see demo)
- Clone the repository:

```bash
git clone https://github.com/Spring-0/face-censor.git
cd face-censor
```

- Create a virtual environment and activate it:

```bash
python -m venv .venv
source .venv/bin/activate  # On Windows, use: .venv\Scripts\activate
```

- Install the required packages:

```bash
pip install -r requirements.txt
```

- Run:

```bash
python src/main.py
```
The project uses the WIDER FACE dataset from Roboflow for training. A pre-trained model is included, so re-training is only necessary if you want to customize it. To train your own model:
- Update this line in `training/training.py` if required:

```python
device="0"  # Set to "0" to utilize the GPU, otherwise set to "cpu" to utilize the CPU
```

- Create a `.env` file in the project root with your Roboflow API key:

```
ROBOFLOW_API_KEY=your_api_key_here
```

- Run the training script:

```bash
cd training
python3 training.py
```
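For reference, the training flow amounts to downloading the dataset through the Roboflow SDK and fine-tuning a YOLOv8 model with Ultralytics. The sketch below is a rough approximation of that flow, not the actual `training/training.py`: the workspace, project, and version identifiers are placeholders, and loading the key via `python-dotenv` is an assumption.

```python
# Rough sketch of the training flow; identifiers below are placeholders.
import os

from dotenv import load_dotenv   # assumption: the .env file is read with python-dotenv
from roboflow import Roboflow
from ultralytics import YOLO

load_dotenv()  # makes ROBOFLOW_API_KEY from .env available via os.environ

# Download the WIDER FACE dataset in YOLOv8 format via Roboflow
rf = Roboflow(api_key=os.environ["ROBOFLOW_API_KEY"])
project = rf.workspace("your-workspace").project("wider-face")  # placeholder names
dataset = project.version(1).download("yolov8")                 # placeholder version

# Fine-tune a pre-trained YOLOv8 checkpoint on the downloaded dataset
model = YOLO("yolov8n.pt")
model.train(
    data=f"{dataset.location}/data.yaml",
    epochs=100,   # placeholder hyperparameter
    device="0",   # "0" for GPU, "cpu" for CPU (see the note above)
)
```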
```python
# Face detection model
from models.yolo_detector import YOLOFaceDetector

# Masking methods (no need to import all, just the ones you want to use)
from masking.text import TextCensor
from masking.emoji import EmojiCensor
from masking.blur import BlurCensor

# Media processor
from processor import MediaProcessor

# Initialize the face detector model
detector = YOLOFaceDetector()
```
The masking method determines which effect is applied to detected faces. Create whichever of the following you want to use:
```python
text_censor = TextCensor(
    text="HELLO",              # The text to draw over faces
    draw_background=True,      # Whether to draw a solid background behind the text
    background_color="white",  # The color of the solid background
    text_color="black",        # The color of the text
    scale_factor=0.2           # The text size scaling factor, defaults to 0.5
)

emoji_censor = EmojiCensor(
    emoji="😁",                # The emoji to mask faces with
    font="seguiemj.ttf",       # The path to the emoji font file, defaults to "seguiemj.ttf"
    scale_factor=1.0           # The emoji size scaling factor, defaults to 1.0
)

blur_censor = BlurCensor(
    blur_factor=71             # The strength of the blur effect, defaults to 99
)
```
After creating the masking method object(s), pass one to the `MediaProcessor` constructor along with the detector:

```python
processor = MediaProcessor(detector, blur_censor)

# Process an image
processor.process_image("input.jpg", "output.jpg")

# Process a video
processor.process_video("input.mp4", "output.mp4")
```
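Putting the pieces together, a minimal end-to-end script that blurs every detected face in an image looks like this. It only reuses the calls shown above; the input/output paths are placeholders, and it assumes the script runs from `src/` so the imports resolve:

```python
from models.yolo_detector import YOLOFaceDetector
from masking.blur import BlurCensor
from processor import MediaProcessor

detector = YOLOFaceDetector()        # bundled pre-trained face detection model
censor = BlurCensor(blur_factor=71)  # swap in TextCensor or EmojiCensor as desired

processor = MediaProcessor(detector, censor)
processor.process_image("input.jpg", "output.jpg")  # placeholder paths
```

Because the censoring system is modular, a custom effect should plug in the same way as the built-in censors. The sketch below is purely illustrative: the `apply` method name and its `(frame, box)` signature are guesses at the project's interface, not its actual contract, so check the built-in censors in `masking/` before copying it.

```python
import cv2  # OpenCV is already a project dependency

class PixelateCensor:
    """Hypothetical custom censor; the method name and signature are assumptions."""

    def __init__(self, blocks=12):
        self.blocks = blocks  # number of pixel blocks across the face region

    def apply(self, frame, box):
        # box is assumed to be (x1, y1, x2, y2) pixel coordinates
        x1, y1, x2, y2 = box
        face = frame[y1:y2, x1:x2]
        # Downscale then upscale to produce a pixelation effect
        small = cv2.resize(face, (self.blocks, self.blocks),
                           interpolation=cv2.INTER_LINEAR)
        frame[y1:y2, x1:x2] = cv2.resize(small, (x2 - x1, y2 - y1),
                                         interpolation=cv2.INTER_NEAREST)
        return frame
```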
- Python 3.8+
- PyTorch
- OpenCV
- Ultralytics YOLOv8
- Roboflow
See `requirements.txt` for the complete list.
GNU General Public License - see the LICENSE file for details.
- Fork the repository
- Create your feature branch (`git checkout -b feature/AmazingFeature`)
- Commit your changes (`git commit -m 'Add some AmazingFeature'`)
- Push to the branch (`git push origin feature/AmazingFeature`)
- Open a Pull Request
- [x] Add emoji face masking
- [ ] Add support for real-time streams
- [ ] Add GUI interface
- [ ] Add partial face censoring (eyes)