Instructions for using SVECTOR-CORPORATION/Spec-Coder-4b-V1 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
- Transformers
How to use SVECTOR-CORPORATION/Spec-Coder-4b-V1 with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="SVECTOR-CORPORATION/Spec-Coder-4b-V1")

# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("SVECTOR-CORPORATION/Spec-Coder-4b-V1")
model = AutoModelForCausalLM.from_pretrained("SVECTOR-CORPORATION/Spec-Coder-4b-V1")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use SVECTOR-CORPORATION/Spec-Coder-4b-V1 with vLLM:
Install from pip and serve model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "SVECTOR-CORPORATION/Spec-Coder-4b-V1"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SVECTOR-CORPORATION/Spec-Coder-4b-V1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker
```shell
docker model run hf.co/SVECTOR-CORPORATION/Spec-Coder-4b-V1
```
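Once the vLLM server is running, the same OpenAI-compatible endpoint can also be called from Python rather than curl. A minimal sketch, assuming the `requests` package is available and the server is on the default port; the helper name and prompt are illustrative:

```python
import json

SERVER_URL = "http://localhost:8000/v1/completions"  # default vLLM port

def build_completion_request(prompt, max_tokens=512, temperature=0.5):
    """Build the JSON body expected by the OpenAI-compatible /v1/completions endpoint."""
    return {
        "model": "SVECTOR-CORPORATION/Spec-Coder-4b-V1",
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

body = build_completion_request("def fibonacci(n):", max_tokens=64, temperature=0.2)
print(json.dumps(body, indent=2))

# To actually send it (requires a running server):
#   import requests
#   resp = requests.post(SERVER_URL, json=body, timeout=60)
#   print(resp.json()["choices"][0]["text"])
```

The same body works against the SGLang server below; only the port changes.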
- SGLang
How to use SVECTOR-CORPORATION/Spec-Coder-4b-V1 with SGLang:
Install from pip and serve model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "SVECTOR-CORPORATION/Spec-Coder-4b-V1" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SVECTOR-CORPORATION/Spec-Coder-4b-V1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "SVECTOR-CORPORATION/Spec-Coder-4b-V1" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "SVECTOR-CORPORATION/Spec-Coder-4b-V1",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
- Docker Model Runner
How to use SVECTOR-CORPORATION/Spec-Coder-4b-V1 with Docker Model Runner:
```shell
docker model run hf.co/SVECTOR-CORPORATION/Spec-Coder-4b-V1
```
Spec Coder V1
Spec Coder is a cutting-edge, open-source AI model designed to assist with fundamental coding tasks. It is built on the Llama architecture, allowing seamless access via tools like llama.cpp and Ollama. This makes Spec Coder highly compatible with a variety of systems, enabling flexible deployment both locally and in the cloud.
Trained on vast datasets, Spec Coder excels in generating code, completing code snippets, and understanding programming tasks across multiple languages. It can be used for code completion, debugging, and automated code generation, acting as a versatile assistant for developers.
Spec Coder is optimized for integration into developer tools, providing intelligent coding assistance and facilitating research in programming languages. Its advanced transformer-based architecture, with 4 billion parameters, allows it to perform tasks across different environments efficiently.
The model supports various downstream tasks including supervised fine-tuning (SFT) and reinforcement learning (RL) to improve its performance for specific programming tasks.
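As a hedged illustration of the SFT path: fine-tuning typically starts by flattening (instruction, solution) pairs into single training strings. The prompt template, field layout, and EOS token below are assumptions for the sketch, not the format Spec Coder was actually trained with:

```python
def format_sft_example(instruction, solution, eos_token="</s>"):
    """Flatten one (instruction, solution) pair into a single training string.
    The ### Instruction / ### Response template is illustrative only."""
    return f"### Instruction:\n{instruction}\n### Response:\n{solution}{eos_token}"

# A tiny toy dataset of coding tasks
pairs = [
    ("Write a function that reverses a string.",
     "def reverse(s):\n    return s[::-1]"),
    ("Return the maximum of two numbers.",
     "def max2(a, b):\n    return a if a > b else b"),
]
dataset = [format_sft_example(i, s) for i, s in pairs]
for example in dataset:
    print(example)
    print("---")
```

Strings in this shape can then be tokenized and fed to a standard causal-LM trainer (e.g. the Transformers `Trainer`) for SFT.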
Training Data
- Total Training Tokens: ~4.3 trillion tokens
- Corpus: The Stack, StarCoder Training Dataset, The Stack v2, CommitPack, OpenCodeReasoning, English Wikipedia
Training Details
- Context Window: 8,192 tokens
- Optimization: Standard language modeling objective
- Hardware: Cluster of 5 x RTX 4090 GPUs
- Training Duration: ~140 days
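Because the context window is 8,192 tokens, inputs longer than that must be truncated or chunked before inference. A minimal chunking sketch; the overlap size is an arbitrary choice, and a whitespace split stands in for the model's real tokenizer:

```python
def chunk_tokens(tokens, window=8192, overlap=256):
    """Split a token list into windows no longer than the model's context,
    with a small overlap so each chunk keeps some local context."""
    if window <= overlap:
        raise ValueError("window must exceed overlap")
    step = window - overlap
    return [tokens[i:i + window] for i in range(0, max(len(tokens) - overlap, 1), step)]

# Whitespace tokens stand in for real tokenizer output in this sketch
tokens = ("x = 1\n" * 5000).split()   # 15,000 pseudo-tokens
chunks = chunk_tokens(tokens, window=8192, overlap=256)
print(len(chunks), [len(c) for c in chunks])
```

In practice you would count tokens with the model's own tokenizer (`tokenizer(text)["input_ids"]`) rather than a whitespace split, since the two counts differ.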
Benchmarks
RepoBench 1.1 (Python)
| Model | 2k | 4k | 8k | 12k | 16k | Avg | Avg ≤ 8k |
|---|---|---|---|---|---|---|---|
| Spec-Coder-4b-V1 | 30.42% | 38.55% | 36.91% | 32.75% | 30.34% | 34.59% | 36.23% |
Syntax-Aware Fill-in-the-Middle (SAFIM)
| Model | Algorithmic | Control | API | Average |
|---|---|---|---|---|
| Spec-Coder-4b-V1 | 38.22% | 41.18% | 60.45% | 46.28% |
HumanEval Infilling
| Model | Single-Line | Multi-Line | Random Span |
|---|---|---|---|
| Spec-Coder-4b-V1 | 72.34% | 45.65% | 39.12% |
Limitations
- Biases: The model may reflect biases present in the public codebases.
- Security: Code generated by the model may contain security vulnerabilities. It is essential to verify and audit the code generated by the model for any potential risks.
Sample Usage
Here are examples of how to run and interact with Spec Coder:
```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "SVECTOR-CORPORATION/Spec-Coder-4b-V1"
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Complete a partial Python function
input_code = "def factorial(n):\n    if n == 0:"
inputs = tokenizer(input_code, return_tensors="pt")

# Pass the attention mask along with the input IDs, and budget
# new tokens explicitly rather than capping the total length
outputs = model.generate(**inputs, max_new_tokens=50, num_return_sequences=1)
generated_code = tokenizer.decode(outputs[0], skip_special_tokens=True)
print("Generated Python code:\n", generated_code)
```