---
license: mit
---
# Introduction
This repository hosts the image encoder of the
[clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)
model for the [React Native
ExecuTorch](https://www.npmjs.com/package/react-native-executorch) library. It
includes the model exported for the XNNPACK backend in `.pte` format, ready
for use in the **ExecuTorch** runtime.
If you'd like to run this model in your own ExecuTorch runtime, refer to the
[official documentation](https://pytorch.org/executorch/stable/index.html) for
setup instructions.
## Compatibility
If you intend to use this model outside of React Native ExecuTorch, make sure
your runtime is compatible with the **ExecuTorch** version used to export the
`.pte` files. For more details, see the compatibility note in the [ExecuTorch
GitHub
repository](https://github.com/pytorch/executorch/blob/11d1742fdeddcf05bc30a6cfac321d2a2e3b6768/runtime/COMPATIBILITY.md?plain=1#L4).
If you work with React Native ExecuTorch, the constants exported by the
library guarantee compatibility with the runtime used behind the scenes.
This model was exported using **ExecuTorch** version 1.1.0, and **no forward
compatibility** is guaranteed: older versions of the runtime may not work with
these files.
### Repository Structure
The repository is organized into directories, one per backend:
- `xnnpack`

Each directory contains the model exported for the respective backend. The
`.pte` file should be passed to the `modelSource` parameter.
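As a minimal sketch of how a `.pte` file is supplied via `modelSource` (note:
`loadImageEncoder` and the file path below are illustrative placeholders, not
the actual React Native ExecuTorch API — see the library's documentation for
the real hooks and exported model constants):

```typescript
// Hypothetical sketch only: loadImageEncoder and the path below are
// placeholders standing in for the library's real loading mechanism.
type ModelSource = string;

interface ImageEncoder {
  modelSource: ModelSource;
}

// Stand-in for the library's loader, which would load the .pte file
// into the ExecuTorch runtime; here we only validate and record it.
function loadImageEncoder(modelSource: ModelSource): ImageEncoder {
  if (!modelSource.endsWith('.pte')) {
    throw new Error('modelSource must point to a .pte file');
  }
  return { modelSource };
}

// A bundled asset path or remote URL would go here.
const encoder = loadImageEncoder('path/to/clip_image_encoder.pte');
console.log(encoder.modelSource); // path/to/clip_image_encoder.pte
```

In the real library the model source typically comes from a constant the
package exports, which is what ties the file to a compatible runtime version.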