This repository presents a lightweight face anti-spoofing solution built on the MiniFASNetV2SE architecture, achieving roughly 98% validation accuracy. It provides a training pipeline, pretrained weights, and ONNX export, making it well suited to applications that need to protect facial recognition against spoofing attempts.
Lightweight Face Antispoofing with MiniFAS
The face-antispoof-onnx repository features a lightweight and efficient face anti-spoofing model built on the MiniFASNetV2SE architecture. This model effectively distinguishes between genuine faces and various spoofing attacks, such as printed photographs and displayed images. Achieving approximately 98% validation accuracy on a dataset of more than 70,000 samples, this project is designed for applications requiring robust biometric security.
Core Features
- Model Type: A compact binary classifier that predicts one of two classes: Real or Spoof.
- Sizes:
  - ONNX Model: 1.82 MB
  - Quantized ONNX Model: 600 KB (see the quantization sketch after this list)
  - PyTorch Model: 1.95 MB
- Architecture: MiniFASNetV2SE, a MiniFAS variant designed specifically for anti-spoofing.
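One way such a quantized variant could be produced is with ONNX Runtime's dynamic (weight-only) quantization. The sketch below is an assumption about the workflow, not the repository's actual export script; only the file names from the models/ directory are taken from this README.

```python
# Minimal sketch: shrink the FP32 ONNX model with dynamic (weight-only) quantization.
# The file names follow the models/ directory listed in this README; the exact
# settings used by this repository may differ.
from onnxruntime.quantization import QuantType, quantize_dynamic

quantize_dynamic(
    model_input="models/best_model.onnx",              # FP32 source (~1.82 MB)
    model_output="models/best_model_quantized.onnx",   # INT8 weights (~600 KB)
    weight_type=QuantType.QUInt8,
)
```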
Performance Metrics
The model delivers strong performance, validated on the CelebA Spoof benchmark:
| Metric | Model | Quantized Model |
|---|---|---|
| Overall Accuracy | 97.80% | 97.80% |
| Real Accuracy | 98.16% | 98.14% |
| Spoof Accuracy | 97.50% | 97.52% |
| ROC-AUC | 0.9978 | 0.9978 |
| Average Precision | 0.9981 | 0.9981 |
The high average precision (>99%) at operational thresholds keeps the rate of falsely accepted spoof attempts low, making the model suitable for security-sensitive applications.
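For reference, metrics of this kind can be computed with scikit-learn. The snippet below is a generic sketch, not the repository's evaluation code; the labels, scores, and 0.5 decision threshold are placeholder assumptions.

```python
# Generic sketch of how the reported metrics could be computed with scikit-learn.
# `labels` (1 = real, 0 = spoof) and `scores` (predicted probability of "real")
# are hypothetical placeholders, not data from this repository.
import numpy as np
from sklearn.metrics import accuracy_score, average_precision_score, roc_auc_score

labels = np.array([1, 0, 1, 0, 1])                  # hypothetical ground truth
scores = np.array([0.98, 0.03, 0.91, 0.12, 0.87])   # hypothetical model outputs
preds = (scores >= 0.5).astype(int)                 # assumed decision threshold

print("Overall accuracy: ", accuracy_score(labels, preds))
print("Real accuracy:    ", accuracy_score(labels[labels == 1], preds[labels == 1]))
print("Spoof accuracy:   ", accuracy_score(labels[labels == 0], preds[labels == 0]))
print("ROC-AUC:          ", roc_auc_score(labels, scores))
print("Average precision:", average_precision_score(labels, scores))
```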
Model Availability
Pre-trained models are included in the models/ directory:
- best_model.pth: PyTorch checkpoint for training and fine-tuning.
- best_model.onnx: ONNX model for cross-platform inference (see the inference sketch after this list).
- best_model_quantized.onnx: Quantized ONNX model for production environments where size matters most.
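A minimal ONNX Runtime inference sketch is shown below. The 80x80 input size, channel order, lack of normalization, and class index order are assumptions about this model's preprocessing rather than documented facts; demo.py is the authoritative pipeline.

```python
# Minimal inference sketch with onnxruntime. Input size, preprocessing, and the
# (spoof, real) class index order are assumptions -- check demo.py for the exact
# pipeline used by this repository.
import cv2
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("models/best_model.onnx",
                               providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

face = cv2.imread("path/to/face.jpg")                 # cropped face image
face = cv2.resize(face, (80, 80)).astype(np.float32)  # assumed 80x80 input
blob = np.transpose(face, (2, 0, 1))[None]            # HWC -> NCHW, add batch dim

logits = session.run(None, {input_name: blob})[0][0]
probs = np.exp(logits) / np.exp(logits).sum()         # softmax over the two classes
print("assumed class order (spoof, real):", probs)
```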
Advantages of Using MiniFAS
Compared to the previous MobileNetV4-based model, the MiniFAS architecture offers:
- A more compact network, which translates into faster inference.
- A design tailored specifically to anti-spoofing, with better training and performance characteristics for this task.
- Enhanced texture learning through a Fourier Transform auxiliary loss, which helps separate real skin from printed or displayed reproductions (see the sketch after this list).
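A rough PyTorch sketch of such an auxiliary objective is given below: an auxiliary head predicts the log-magnitude Fourier spectrum of the input face, supervised alongside the real/spoof classification loss. The function names, shapes, and loss weighting are illustrative assumptions, not this repository's exact implementation.

```python
# Illustrative sketch of a Fourier-spectrum auxiliary loss in PyTorch.
# `ft_pred` would come from an auxiliary head of the network; names, shapes,
# and the 0.5 weighting are assumptions, not this repository's exact code.
import torch
import torch.nn.functional as F

def fourier_target(images: torch.Tensor) -> torch.Tensor:
    """images: (N, 3, H, W) in [0, 1] -> (N, 1, H, W) log-magnitude spectrum."""
    gray = images.mean(dim=1, keepdim=True)                       # simple grayscale
    spectrum = torch.fft.fftshift(torch.fft.fft2(gray), dim=(-2, -1))
    return torch.log1p(spectrum.abs())

def total_loss(logits, labels, ft_pred, images, aux_weight: float = 0.5):
    cls_loss = F.cross_entropy(logits, labels)                    # real/spoof loss
    target = fourier_target(images)
    # Match the auxiliary head's spatial resolution before comparing.
    target = F.interpolate(target, size=ft_pred.shape[-2:],
                           mode="bilinear", align_corners=False)
    aux_loss = F.mse_loss(ft_pred, target)
    return cls_loss + aux_weight * aux_loss
```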
Usage Example
To run the demo on a webcam or on a single image:

```bash
python demo.py --camera [index]          # For webcam input
python demo.py --image path/to/face.jpg  # For a single image
```
The model is integrated into systems such as SURI, an AI attendance solution.
Repository Structure
The repository is organized as follows:

```
├── src/       # Source code for MiniFAS
├── scripts/   # Tools for training and exporting models
├── docs/      # Technical documentation
├── models/    # Pre-trained models
└── demo.py    # Inference demo script
```
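The ONNX files can, in principle, be produced from the PyTorch checkpoint with torch.onnx.export. The sketch below is hypothetical: the import path `src.model_lib.MiniFASNetV2SE`, its constructor arguments, and the 80x80 input shape are assumptions, and the actual export tooling lives in scripts/.

```python
# Hedged sketch of exporting the PyTorch checkpoint to ONNX. The module path,
# constructor arguments, and input shape are assumptions -- the repository's
# own export tool in scripts/ is authoritative.
import torch
from src.model_lib import MiniFASNetV2SE  # hypothetical import path

model = MiniFASNetV2SE(num_classes=2)      # assumed constructor
state = torch.load("models/best_model.pth", map_location="cpu")
model.load_state_dict(state)
model.eval()

dummy = torch.randn(1, 3, 80, 80)          # assumed input shape
torch.onnx.export(
    model, dummy, "models/best_model.onnx",
    input_names=["input"], output_names=["logits"],
    opset_version=17,
)
```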
Limitations
The model performs best on well-lit, frontal face images. Detailed conditions and best practices are covered in the limitations documentation under docs/.
The face-antispoof-onnx repository is a practical starting point for developers and researchers adding face anti-spoofing to their applications.