---
license: mit
---

# Introduction

This repository hosts the image encoder of the
[clip-vit-base-patch32](https://huggingface.co/openai/clip-vit-base-patch32)
model for the [React Native
ExecuTorch](https://www.npmjs.com/package/react-native-executorch) library. It
includes the model exported for the XNNPACK backend in `.pte` format, ready for use in the
**ExecuTorch** runtime.

If you'd like to run these models in your own ExecuTorch runtime, refer to the
[official documentation](https://pytorch.org/executorch/stable/index.html) for
setup instructions.

## Compatibility

If you intend to use this model outside of React Native ExecuTorch, make sure
your runtime is compatible with the **ExecuTorch** version used to export the
`.pte` files. For more details, see the compatibility note in the [ExecuTorch
GitHub
repository](https://github.com/pytorch/executorch/blob/11d1742fdeddcf05bc30a6cfac321d2a2e3b6768/runtime/COMPATIBILITY.md?plain=1#L4).
If you use React Native ExecuTorch, the constants exported by the library
guarantee compatibility with the runtime used behind the scenes.

These models were exported with **ExecuTorch** version 1.1.0, and **no forward
compatibility** is guaranteed: older versions of the runtime may not be able to
load these files.

### Repository Structure

The repository is organized into one directory per backend:

- `xnnpack`

Each directory contains the model exported for that backend. Pass the `.pte`
file to the `modelSource` parameter when loading the model.
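As a minimal, hypothetical sketch of how the repository layout maps to the `modelSource` parameter: the repository URL, file name, and helper function below are illustrative placeholders, not part of react-native-executorch, and the actual model-loading API is defined by that library.

```typescript
// Hypothetical sketch: building the `modelSource` value for a .pte file
// hosted in this repository. All names below are placeholders.
type ModelConfig = { modelSource: string };

function makeModelConfig(repoBaseUrl: string, fileName: string): ModelConfig {
  // Exported models live under one directory per backend, e.g. xnnpack/.
  return { modelSource: `${repoBaseUrl}/xnnpack/${fileName}` };
}

const config = makeModelConfig(
  "https://huggingface.co/<repo-id>/resolve/main", // placeholder repo URL
  "model.pte" // placeholder file name
);
console.log(config.modelSource);
```

The resulting string (or a locally bundled asset) is what React Native ExecuTorch expects as its `modelSource` input; consult the library documentation for the exact hook or module to pass it to.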