Instructions for using ModelTC/bart-base-squad with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers
How to use ModelTC/bart-base-squad with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="ModelTC/bart-base-squad")
```

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("ModelTC/bart-base-squad")
model = AutoModelForQuestionAnswering.from_pretrained("ModelTC/bart-base-squad")
```
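Calling the pipeline on a question/context pair returns the extracted answer span along with a confidence score. A minimal sketch; the question and context below are illustrative inputs, not taken from the model card:

```python
from transformers import pipeline

pipe = pipeline("question-answering", model="ModelTC/bart-base-squad")

# Hypothetical inputs for illustration; any question/context pair works the same way.
result = pipe(
    question="Where is the Eiffel Tower located?",
    context="The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
)
print(result)  # e.g. {'score': ..., 'start': ..., 'end': ..., 'answer': '...'}
```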
- Notebooks
  - Google Colab
  - Kaggle
| {"unk_token": "<unk>", "bos_token": "<s>", "eos_token": "</s>", "add_prefix_space": false, "errors": "replace", "sep_token": "</s>", "cls_token": "<s>", "pad_token": "<pad>", "mask_token": "<mask>", "trim_offsets": true, "use_fase": true, "special_tokens_map_file": null, "name_or_path": "/mnt/lustre/zhangyunchen/transformers/bart-base", "tokenizer_class": "BartTokenizer"} |