Tags: Feature Extraction · Transformers · Safetensors · mistral · Merge · mergekit · lazymergekit · Syed-Hasan-8503/Tess-Coder-7B-Mistral-v1.0 · mlabonne/AlphaMonarch-7B · Eval Results (legacy) · text-embeddings-inference
Instructions to use QueryloopAI/MonarchCoder-7B with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use QueryloopAI/MonarchCoder-7B with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("feature-extraction", model="QueryloopAI/MonarchCoder-7B")

# Load model directly
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("QueryloopAI/MonarchCoder-7B")
model = AutoModel.from_pretrained("QueryloopAI/MonarchCoder-7B")
```

- Notebooks
- Google Colab
- Kaggle
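The feature-extraction pipeline shown above under Transformers returns token-level hidden states as nested Python lists. A minimal usage sketch (the input string and the printed shape are illustrative assumptions, not part of the model card):

```python
# Output is assumed to be nested lists shaped [batch, tokens, hidden_size]
features = pipe("def add(a, b): return a + b")
print(len(features[0]), len(features[0][0]))  # tokens in the input, embedding width
```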
Update README.md
README.md CHANGED:

```diff
@@ -150,7 +150,7 @@ from transformers import AutoTokenizer
 import transformers
 import torch
 
-model = "
+model = "QueryloopAI/MonarchCoder-7B"
 messages = [{"role": "user", "content": "What is a large language model?"}]
 
 tokenizer = AutoTokenizer.from_pretrained(model)
```
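For context, the patched line sits inside the README's Transformers text-generation example. A minimal runnable sketch assuming the usual chat-template flow (the `apply_chat_template` call and the generation arguments are assumptions; only the imports, the model name, the `messages` list, and the tokenizer line are visible in the diff):

```python
import transformers
import torch
from transformers import AutoTokenizer

model = "QueryloopAI/MonarchCoder-7B"
messages = [{"role": "user", "content": "What is a large language model?"}]

tokenizer = AutoTokenizer.from_pretrained(model)
# Turn the chat messages into a prompt string (assumed step, not shown in the diff)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```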