Instructions for using sgugger/tiny-distilbert-classification with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use sgugger/tiny-distilbert-classification with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="sgugger/tiny-distilbert-classification")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sgugger/tiny-distilbert-classification")
model = AutoModelForSequenceClassification.from_pretrained("sgugger/tiny-distilbert-classification")
```
- Notebooks
- Google Colab
- Kaggle
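The pipeline shown above can be exercised directly on a sample sentence. A minimal sketch (the input sentence is an arbitrary example; note that this is a tiny test checkpoint, so the predicted label and score are not meaningful, only the output format is):

```python
from transformers import pipeline

# Load the text-classification pipeline for the tiny test model
pipe = pipeline("text-classification", model="sgugger/tiny-distilbert-classification")

# Run inference on an example sentence (assumed input, not from the model card)
result = pipe("This is a test sentence.")

# result is a list of dicts, each with "label" and "score" keys
print(result)
```

The pipeline handles tokenization, the forward pass, and softmax over the logits in one call; use the `AutoTokenizer`/`AutoModelForSequenceClassification` pair instead when you need access to raw logits or custom preprocessing.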
- Xet hash: ab380ded4b862d9c2db63fc71fa0d00e4ab531f70c87d855c4dd88deb246bb73
- Size of remote file: 264 kB
- SHA256: fc24d64f215909ffcda1eab978897015d369eef140bc2d051f690e05d556a4f2
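The SHA256 digest listed above can be verified locally after downloading the file. A minimal sketch using Python's `hashlib` (the local file path is an assumption, not taken from the page):

```python
import hashlib


def sha256_hex(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, reading in chunks
    so arbitrarily large files do not need to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()


# Hypothetical usage -- compare against the SHA256 shown on the page:
# sha256_hex("pytorch_model.bin") == "fc24d64f2159..."
```

Reading in fixed-size chunks keeps memory usage constant regardless of file size, which matters for larger model weights.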
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, accelerating uploads and downloads.