CavaFace: Optimized for Qualcomm Devices

A PyTorch-based framework for training face recognition models that generates facial embeddings for verification and identification tasks.

This is based on the implementation of CavaFace found here. This repository contains pre-exported model files optimized for Qualcomm® devices. You can use the Qualcomm® AI Hub Models library to export with custom configurations. More details on model performance across various devices can be found here.

Qualcomm AI Hub Models uses Qualcomm AI Hub Workbench to compile, profile, and evaluate this model. Sign up to run these models on a hosted Qualcomm® device.
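
As a rough illustration of that workflow, the sketch below submits compile and profile jobs through the `qai_hub` client library; the model file name, device name, and input name are placeholders rather than values taken from this card, and an AI Hub account with a configured API token is required.

```python
# Hedged sketch: compiling and profiling a model on a hosted Qualcomm device
# via the qai_hub client. File name, device name, and input name are assumptions.
import qai_hub as hub

device = hub.Device("Samsung Galaxy S24 (Family)")  # example hosted device

# Submit a compile job for an exported ONNX model (placeholder file name).
compile_job = hub.submit_compile_job(
    model="cavaface.onnx",
    device=device,
    input_specs={"image": (1, 3, 112, 112)},  # 112x112 input, per Model Stats below
)

# Profile the compiled model on the hosted device to measure on-device latency.
profile_job = hub.submit_profile_job(
    model=compile_job.get_target_model(),
    device=device,
)
print(profile_job.download_profile())
```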

Getting Started

There are two ways to deploy this model on your device:

Option 1: Download Pre-Exported Models

Below are pre-exported model assets ready for deployment.

| Runtime | Precision | Chipset | SDK Versions | Download |
|---|---|---|---|---|
| ONNX | float | Universal | QAIRT 2.42, ONNX Runtime 1.24.1 | Download |
| QNN_DLC | float | Universal | QAIRT 2.43 | Download |
| TFLITE | float | Universal | QAIRT 2.43, TFLite 2.17.0 | Download |

For more device-specific assets and performance metrics, visit CavaFace on Qualcomm® AI Hub.
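
As a quick sanity check of a downloaded asset, the sketch below runs the pre-exported ONNX model with ONNX Runtime on the host CPU; the file name, input handling, and single-output assumption are illustrative, so check the downloaded asset for its actual input and output specs.

```python
# Hedged sketch: running a downloaded pre-exported ONNX asset with ONNX Runtime.
# The file name "cavaface.onnx" and the NCHW float32 layout are assumptions.
import numpy as np
import onnxruntime as ort

session = ort.InferenceSession("cavaface.onnx", providers=["CPUExecutionProvider"])
input_name = session.get_inputs()[0].name

# A 112x112 face crop (see Model Stats); replace with a real aligned face image.
face = np.random.rand(1, 3, 112, 112).astype(np.float32)

# Assumes the model exposes a single embedding output.
(embedding,) = session.run(None, {input_name: face})
print(embedding.shape)  # facial embedding used for verification/identification
```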

Option 2: Export with Custom Configurations

Use the Qualcomm® AI Hub Models Python library to compile and export the model with your own:

  • Custom weights (e.g., fine-tuned checkpoints)
  • Custom input shapes
  • Target device and runtime configurations

This option is ideal if you need to customize the model beyond the default configuration provided here.

See the CavaFace entry in our GitHub repository for usage instructions.
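
As a hedged sketch of how that usually looks with the library (the module path `qai_hub_models.models.cavaface` is an assumed model id; the exact identifier and export options are documented in the repository):

```python
# Hypothetical sketch: loading the model through qai_hub_models before export.
# The module path below is an assumed model id; verify it against the repository.
import torch
from qai_hub_models.models.cavaface import Model

model = Model.from_pretrained()  # default checkpoint; swap in fine-tuned weights if needed
model.eval()

# Quick local forward pass at the documented 112x112 input resolution.
sample = torch.rand(1, 3, 112, 112)
with torch.no_grad():
    embedding = model(sample)
print(embedding.shape)
```

The library also ships a per-model export entry point (typically invoked as `python -m qai_hub_models.models.<model_id>.export`) that accepts device, runtime, and weight options; see the repository for the flags this model supports.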

Model Details

Model Type: Facial recognition

Model Stats:

  • Model checkpoint: IR_SE_100_Combined_Epoch_24.pt
  • Input resolution: 112x112
  • Number of parameters: 65.5M
  • Model size (float): 249.96 MB
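
For context on how the embeddings produced by this checkpoint are typically consumed, the sketch below scores two embeddings with cosine similarity, the usual face-verification step; the embedding dimensionality and threshold are illustrative placeholders, not values from this card.

```python
# Illustrative sketch of face verification: two face crops are judged to be the
# same identity when the cosine similarity of their embeddings exceeds a threshold.
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    a, b = a.flatten(), b.flatten()
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# In practice these come from running the model on two aligned 112x112 face crops.
embedding_a = np.random.rand(512).astype(np.float32)  # 512-dim is an assumed size
embedding_b = np.random.rand(512).astype(np.float32)

THRESHOLD = 0.4  # placeholder; tune on a validation set for your deployment
print(cosine_similarity(embedding_a, embedding_b) > THRESHOLD)
```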

Performance Summary

| Model | Runtime | Precision | Chipset | Inference Time (ms) | Peak Memory Range (MB) | Primary Compute Unit |
|---|---|---|---|---|---|---|
| CavaFace | ONNX | float | Snapdragon® X2 Elite | 2.337 | 126 - 126 | NPU |
| CavaFace | ONNX | float | Snapdragon® X Elite | 4.511 | 126 - 126 | NPU |
| CavaFace | ONNX | float | Snapdragon® 8 Gen 3 Mobile | 3.2 | 0 - 110 | NPU |
| CavaFace | ONNX | float | Qualcomm® QCS8550 (Proxy) | 4.34 | 0 - 131 | NPU |
| CavaFace | ONNX | float | Qualcomm® QCS9075 | 6.789 | 0 - 3 | NPU |
| CavaFace | ONNX | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.64 | 0 - 79 | NPU |
| CavaFace | ONNX | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.265 | 0 - 92 | NPU |
| CavaFace | QNN_DLC | float | Snapdragon® X2 Elite | 2.602 | 0 - 0 | NPU |
| CavaFace | QNN_DLC | float | Snapdragon® X Elite | 4.456 | 0 - 0 | NPU |
| CavaFace | QNN_DLC | float | Snapdragon® 8 Gen 3 Mobile | 3.187 | 0 - 102 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® QCS8275 (Proxy) | 24.701 | 0 - 81 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® QCS8550 (Proxy) | 4.299 | 0 - 2 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® SA8775P | 30.193 | 0 - 81 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® QCS9075 | 6.772 | 0 - 2 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® QCS8450 (Proxy) | 8.941 | 0 - 109 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® SA7255P | 24.701 | 0 - 81 | NPU |
| CavaFace | QNN_DLC | float | Qualcomm® SA8295P | 7.956 | 0 - 86 | NPU |
| CavaFace | QNN_DLC | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.63 | 0 - 85 | NPU |
| CavaFace | QNN_DLC | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.235 | 0 - 84 | NPU |
| CavaFace | TFLITE | float | Snapdragon® 8 Gen 3 Mobile | 3.143 | 0 - 213 | NPU |
| CavaFace | TFLITE | float | Qualcomm® QCS8275 (Proxy) | 24.552 | 0 - 98 | NPU |
| CavaFace | TFLITE | float | Qualcomm® QCS8550 (Proxy) | 4.172 | 0 - 2 | NPU |
| CavaFace | TFLITE | float | Qualcomm® SA8775P | 6.896 | 0 - 98 | NPU |
| CavaFace | TFLITE | float | Qualcomm® QCS9075 | 6.699 | 0 - 129 | NPU |
| CavaFace | TFLITE | float | Qualcomm® QCS8450 (Proxy) | 8.752 | 0 - 221 | NPU |
| CavaFace | TFLITE | float | Qualcomm® SA7255P | 24.552 | 0 - 98 | NPU |
| CavaFace | TFLITE | float | Qualcomm® SA8295P | 7.934 | 0 - 101 | NPU |
| CavaFace | TFLITE | float | Snapdragon® 8 Elite For Galaxy Mobile | 2.589 | 0 - 102 | NPU |
| CavaFace | TFLITE | float | Snapdragon® 8 Elite Gen 5 Mobile | 2.233 | 0 - 101 | NPU |

License

  • The license for the original implementation of CavaFace can be found here.
