Instructions for using bumblebee-testing/tiny-random-PhiForTokenClassification with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use bumblebee-testing/tiny-random-PhiForTokenClassification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("token-classification", model="bumblebee-testing/tiny-random-PhiForTokenClassification")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForTokenClassification

tokenizer = AutoTokenizer.from_pretrained("bumblebee-testing/tiny-random-PhiForTokenClassification")
model = AutoModelForTokenClassification.from_pretrained("bumblebee-testing/tiny-random-PhiForTokenClassification")
```
- Notebooks
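A minimal sketch of invoking the pipeline once it is built. The input sentence is an arbitrary example, and because this is a tiny, randomly initialized test model, the predicted labels carry no real meaning; the sketch only shows the shape of the output.

```python
from transformers import pipeline

# Build the token-classification pipeline for the test model
pipe = pipeline("token-classification", model="bumblebee-testing/tiny-random-PhiForTokenClassification")

# Run it on a sample sentence; each result dict holds the token text,
# the predicted label ("entity"), and a confidence score
results = pipe("Hugging Face is based in New York City")
for r in results:
    print(r["word"], r["entity"], round(float(r["score"]), 3))
```

Each element of `results` is a plain dict, so the output can be filtered or post-processed with ordinary Python.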
- Google Colab
- Kaggle
- Xet hash: 39225a0ca04d68699b94e74e4e4fe9d6e92d6b9f11674fc49dad0f43b1e02421
- Size of remote file: 189 kB
- SHA256: f5abdf5459fb8ed8cdff1042b0475ca6aa89dba8dad6d4e31b18f8d83bd4747e
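The SHA256 above can be used to verify a downloaded copy of the file. A minimal sketch using Python's standard `hashlib`; the local filename is a hypothetical placeholder, not a path stated on this page.

```python
import hashlib

def sha256sum(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks
    so large files never need to fit in memory at once."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Expected digest, copied from the listing above
expected = "f5abdf5459fb8ed8cdff1042b0475ca6aa89dba8dad6d4e31b18f8d83bd4747e"

# Hypothetical local path for the downloaded file
# assert sha256sum("model.safetensors") == expected
```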
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.