---
license: llama2
library_name: transformers
tags:
- code
metrics:
- code_eval
base_model: WizardLM/WizardCoder-Python-7B-V1.0
inference: false
model_creator: WizardLM
model_type: llama
pipeline_tag: text-generation
quantized_by: Second State Inc.
---
<!-- header start -->
<!-- 200823 -->
<div style="width: auto; margin-left: auto; margin-right: auto">
<img src="https://github.com/LlamaEdge/LlamaEdge/raw/dev/assets/logo.svg" style="width: 100%; min-width: 400px; display: block; margin: auto;">
</div>
<hr style="margin-top: 1.0em; margin-bottom: 1.0em;">
<!-- header end -->
|
|
# WizardCoder-Python-7B-v1.0-GGUF
|
|
## Original Model
|
|
[WizardLM/WizardCoder-Python-7B-V1.0](https://huggingface.co/WizardLM/WizardCoder-Python-7B-V1.0)
|
|
## Run with LlamaEdge
|
|
- LlamaEdge version: [v0.2.8](https://github.com/LlamaEdge/LlamaEdge/releases/tag/0.2.8) and above
|
|
- Prompt template
|
|
  - Prompt type: `wizard-coder`
|
|
  - Prompt string
|
|
    ```text
    Below is an instruction that describes a task. Write a response that appropriately completes the request.

    ### Instruction:
    {instruction}

    ### Response:
    ```
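
    As a quick illustration (the instruction below is a hypothetical example, not part of the original card), the filled-in prompt can be assembled in the shell and passed to any runtime that expects a raw prompt string:

    ```bash
    # Build the wizard-coder prompt for one example instruction.
    # The instruction text is purely illustrative.
    PROMPT=$(cat <<'EOF'
    Below is an instruction that describes a task. Write a response that appropriately completes the request.

    ### Instruction:
    Write a Python function that returns the n-th Fibonacci number.

    ### Response:
    EOF
    )
    echo "$PROMPT"
    ```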
- Context size: `4096`
|
|
- Run as LlamaEdge service
|
|
  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-api-server.wasm -p wizard-coder
  ```
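
  Once the server is running, it can be queried over HTTP. The sketch below assumes the server's defaults (listening on port `8080` and exposing an OpenAI-compatible `/v1/chat/completions` endpoint); the request content is purely illustrative:

  ```bash
  # Send one chat completion request to the local LlamaEdge API server.
  # Port, endpoint, and model name are assumptions; adjust to your setup.
  curl -X POST http://localhost:8080/v1/chat/completions \
    -H 'accept: application/json' \
    -H 'Content-Type: application/json' \
    -d '{
          "messages": [
            {"role": "user", "content": "Write a Python function that reverses a string."}
          ],
          "model": "WizardCoder-Python-7B-V1.0"
        }'
  ```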
|
|
- Run as LlamaEdge command app
|
|
  ```bash
  wasmedge --dir .:. --nn-preload default:GGML:AUTO:WizardCoder-Python-7B-V1.0-Q5_K_M.gguf llama-chat.wasm -p wizard-coder -s 'Below is an instruction that describes a task. Write a response that appropriately completes the request.'
  ```
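
Both commands assume that WasmEdge (with its GGML plugin) is installed, and that the wasm application and the chosen GGUF file sit in the current directory. As a sketch, the wasm applications can be fetched from the LlamaEdge GitHub releases; the asset names below are assumed from the commands above:

```bash
# Fetch the LlamaEdge wasm applications referenced in the commands above.
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-api-server.wasm
curl -LO https://github.com/LlamaEdge/LlamaEdge/releases/latest/download/llama-chat.wasm
```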
|
|
## Quantized GGUF Models
|
|
| | Name | Quant method | Bits | Size | Use case | |
| | ---- | ---- | ---- | ---- | ----- | |
| | [WizardCoder-Python-7B-V1.0-Q2_K.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q2_K.gguf) | Q2_K | 2 | 2.53 GB| smallest, significant quality loss - not recommended for most purposes | |
| | [WizardCoder-Python-7B-V1.0-Q3_K_L.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_L.gguf) | Q3_K_L | 3 | 3.60 GB| small, substantial quality loss | |
| | [WizardCoder-Python-7B-V1.0-Q3_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_M.gguf) | Q3_K_M | 3 | 3.30 GB| very small, high quality loss | |
| | [WizardCoder-Python-7B-V1.0-Q3_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q3_K_S.gguf) | Q3_K_S | 3 | 2.95 GB| very small, high quality loss | |
| | [WizardCoder-Python-7B-V1.0-Q4_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_0.gguf) | Q4_0 | 4 | 3.83 GB| legacy; small, very high quality loss - prefer using Q3_K_M | |
| | [WizardCoder-Python-7B-V1.0-Q4_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_K_M.gguf) | Q4_K_M | 4 | 4.08 GB| medium, balanced quality - recommended | |
| | [WizardCoder-Python-7B-V1.0-Q4_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q4_K_S.gguf) | Q4_K_S | 4 | 3.86 GB| small, greater quality loss | |
| | [WizardCoder-Python-7B-V1.0-Q5_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_0.gguf) | Q5_0 | 5 | 4.65 GB| legacy; medium, balanced quality - prefer using Q4_K_M | |
| | [WizardCoder-Python-7B-V1.0-Q5_K_M.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_K_M.gguf) | Q5_K_M | 5 | 4.78 GB| large, very low quality loss - recommended | |
| | [WizardCoder-Python-7B-V1.0-Q5_K_S.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q5_K_S.gguf) | Q5_K_S | 5 | 4.65 GB| large, low quality loss - recommended | |
| | [WizardCoder-Python-7B-V1.0-Q6_K.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q6_K.gguf) | Q6_K | 6 | 5.53 GB| very large, extremely low quality loss | |
| | [WizardCoder-Python-7B-V1.0-Q8_0.gguf](https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/blob/main/WizardCoder-Python-7B-V1.0-Q8_0.gguf) | Q8_0 | 8 | 7.16 GB| very large, extremely low quality loss - not recommended | |
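
The table links point at the file pages; for scripted downloads, the standard Hugging Face `resolve` URL pattern can be used. The sketch below grabs the recommended Q5_K_M file; substitute any file name from the table:

```bash
# Download the Q5_K_M quantization (about 4.78 GB) into the current directory.
curl -LO https://huggingface.co/second-state/WizardCoder-Python-7B-v1.0-GGUF/resolve/main/WizardCoder-Python-7B-V1.0-Q5_K_M.gguf
```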