---
license: mit
base_model:
- mistralai/Mistral-7B-Instruct-v0.3
- uukuguy/speechless-code-mistral-7b-v2.0
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
tags:
- merge
- mergekit
- lazymergekit
- mistralai/Mistral-7B-Instruct-v0.3
- uukuguy/speechless-code-mistral-7b-v2.0
- Nondzu/Mistral-7B-Instruct-v0.2-code-ft
- teknium/OpenHermes-2.5-Mistral-7B
- meta-math/MetaMath-Mistral-7B
---

# Axon26-Coder

Axon26-Coder is a DARE-TIES merge of the following Mistral-7B models, built with [LazyMergekit](https://colab.research.google.com/drive/1obulZ1ROXHjYLn6PPZJwRR6GzgQogxxb?usp=sharing):
* [mistralai/Mistral-7B-Instruct-v0.3](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.3)
* [uukuguy/speechless-code-mistral-7b-v2.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v2.0)
* [Nondzu/Mistral-7B-Instruct-v0.2-code-ft](https://huggingface.co/Nondzu/Mistral-7B-Instruct-v0.2-code-ft)
* [teknium/OpenHermes-2.5-Mistral-7B](https://huggingface.co/teknium/OpenHermes-2.5-Mistral-7B)
* [meta-math/MetaMath-Mistral-7B](https://huggingface.co/meta-math/MetaMath-Mistral-7B)

## 🧩 Configuration

```yaml
merge_method: dare_ties
base_model: mistralai/Mistral-7B-Instruct-v0.3
models:
  - model: mistralai/Mistral-7B-Instruct-v0.3
    parameters:
      weight: 0.15
      density: 0.5
  - model: uukuguy/speechless-code-mistral-7b-v2.0
    parameters:
      weight: 0.25
      density: 0.7
  - model: Nondzu/Mistral-7B-Instruct-v0.2-code-ft
    parameters:
      weight: 0.2
      density: 0.6
  - model: teknium/OpenHermes-2.5-Mistral-7B
    parameters:
      weight: 0.2
      density: 0.6
  - model: meta-math/MetaMath-Mistral-7B
    parameters:
      weight: 0.2
      density: 0.6
parameters:
  int8_mask: true
  normalize: true
dtype: bfloat16
tokenizer_source: mistralai/Mistral-7B-Instruct-v0.3
```
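
To reproduce the merge locally, save the configuration above to a YAML file and run it through [mergekit](https://github.com/arcee-ai/mergekit) (LazyMergekit wraps the same tool). The sketch below uses mergekit's Python entry point; the file name `axon26_coder.yml` and the output path are placeholders chosen for illustration, and the `mergekit-yaml axon26_coder.yml ./Axon26-Coder --cuda` CLI call is an equivalent alternative.

```python
# Sketch: re-run the DARE-TIES merge above with mergekit's Python API.
# Assumes `pip install mergekit` and that the YAML config has been saved as
# axon26_coder.yml (an illustrative name); the output path is also a placeholder.
import torch
import yaml

from mergekit.config import MergeConfiguration
from mergekit.merge import MergeOptions, run_merge

# Load the DARE-TIES configuration shown above.
with open("axon26_coder.yml", "r", encoding="utf-8") as fp:
    merge_config = MergeConfiguration.model_validate(yaml.safe_load(fp))

# Run the merge and write the merged weights (plus tokenizer) to the output path.
run_merge(
    merge_config,
    out_path="./Axon26-Coder",
    options=MergeOptions(
        cuda=torch.cuda.is_available(),  # merge on GPU if one is available
        copy_tokenizer=True,             # keep a tokenizer next to the merged weights
    ),
)
```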

## 💻 Usage

```python
# Install dependencies first: pip install -qU transformers accelerate

from transformers import AutoTokenizer
import transformers
import torch

model = "AIencoder/Axon26-Coder"
messages = [{"role": "user", "content": "What is a large language model?"}]

# Format the conversation with the model's chat template.
tokenizer = AutoTokenizer.from_pretrained(model)
prompt = tokenizer.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)

# Build a text-generation pipeline, placing the model across available devices.
pipeline = transformers.pipeline(
    "text-generation",
    model=model,
    torch_dtype=torch.float16,
    device_map="auto",
)

# Sample a response.
outputs = pipeline(prompt, max_new_tokens=256, do_sample=True, temperature=0.7, top_k=50, top_p=0.95)
print(outputs[0]["generated_text"])
```
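
If you prefer to manage the model object yourself rather than use the pipeline helper, the sketch below loads the same repository with `AutoModelForCausalLM` and sends it a coding-style prompt. The prompt text and sampling settings are illustrative only.

```python
# Sketch: the same model loaded directly (no pipeline helper).
# The coding prompt and generation settings below are illustrative, not prescriptive.
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

model_id = "AIencoder/Axon26-Coder"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16, device_map="auto")

messages = [{"role": "user", "content": "Write a Python function that checks whether a string is a palindrome."}]
input_ids = tokenizer.apply_chat_template(messages, add_generation_prompt=True, return_tensors="pt").to(model.device)

# Generate, then decode only the newly produced tokens.
output_ids = model.generate(input_ids, max_new_tokens=256, do_sample=True, temperature=0.7, top_p=0.95)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```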