Image Feature Extraction

Tags: Transformers · JAX · Safetensors · MLX · PyTorch · aimv2_vision_model · vision · custom_code
Instructions for using apple/aimv2-large-patch14-native with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use apple/aimv2-large-patch14-native with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline(
    "image-feature-extraction",
    model="apple/aimv2-large-patch14-native",
    trust_remote_code=True,
)

# Or load the model directly
from transformers import AutoImageProcessor, AutoModel

processor = AutoImageProcessor.from_pretrained(
    "apple/aimv2-large-patch14-native", trust_remote_code=True
)
model = AutoModel.from_pretrained(
    "apple/aimv2-large-patch14-native", trust_remote_code=True
)
```

- MLX
How to use apple/aimv2-large-patch14-native with MLX:

```shell
# Download the model from the Hub
pip install "huggingface_hub[hf_xet]"
huggingface-cli download --local-dir aimv2-large-patch14-native apple/aimv2-large-patch14-native
```
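The Transformers snippet above yields per-patch embeddings for each image (typically exposed as `last_hidden_state`), and a common next step is pooling them into a single image-level vector. A minimal sketch with torch; the `mean_pool` helper and the tensor shapes are illustrative, not part of the model's API, and a dummy tensor stands in for real model output:

```python
import torch

def mean_pool(patch_embeddings: torch.Tensor) -> torch.Tensor:
    """Average per-patch embeddings into one vector per image.

    patch_embeddings: (batch, num_patches, hidden_dim)
    returns: (batch, hidden_dim)
    """
    return patch_embeddings.mean(dim=1)

# Dummy activations standing in for the model's last_hidden_state.
# With this "native"-resolution model, num_patches depends on the
# input image size, so 256 here is purely illustrative.
dummy = torch.randn(2, 256, 1024)
pooled = mean_pool(dummy)
print(tuple(pooled.shape))  # (2, 1024)
```

Mean pooling is only one choice; depending on the downstream task you might instead take a CLS-style token or an attention-weighted pool.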
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- LM Studio
Thanks for the model!
#2 · opened by harpreetsahota
Thank you so much for taking the time and effort to make this model available via Transformers!
For anyone who is interested, I have built a plugin for the model so that you can run it easily on a FiftyOne Dataset.
You can see details here: https://github.com/harpreetsahota204/aim-embeddings-plugin
You can also use apple/aimv2-large-patch14-224-lit via the Zero Shot Prediction plugin.
If you have any questions about the plugin, please feel free to open an issue.
Thanks again!