---
license: apache-2.0
language:
  - en
pipeline_tag: text-generation
tags:
  - deepbrainz
  - reasoning
  - mathematics
  - code
  - enterprise
  - 0.6b
library_name: transformers
---

# DeepBrainz-R1-0.6B-Exp

**DeepBrainz-R1-0.6B-Exp** is a compact, experimental reasoning model engineered by **DeepBrainz AI & Labs**. Designed for efficiency and scalability, it specializes in structured chain-of-thought reasoning, mathematical problem solving, and logical analysis.

This model is part of the **DeepBrainz-R1 Series**, a family built to deliver frontier-class reasoning capabilities at compact, cost-effective parameter scales.

---

## 🚀 Model Highlights

- **Parameter Count:** ~0.6B
- **Context Window:** 32,768 tokens
- **Specialization:** STEM Reasoning, Logic, Code Analysis
- **Architecture:** Optimized Dense Transformer (Qwen2.5/3 Compatible)
- **Deployment:** Ready for vLLM, TGI, and local inference (a vLLM serving sketch follows below)
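
For serving, here is a minimal vLLM sketch. It assumes the checkpoint loads with vLLM's standard Qwen-compatible loader; the sampling values are illustrative, not official recommendations.

```python
# Minimal vLLM serving sketch -- assumes `pip install vllm` and a CUDA GPU.
from vllm import LLM, SamplingParams

llm = LLM(
    model="DeepBrainz/DeepBrainz-R1-0.6B-Exp",
    max_model_len=32768,  # matches the advertised context window
)

# Illustrative sampling settings, not tuned recommendations.
params = SamplingParams(temperature=0.6, max_tokens=512)

outputs = llm.generate(
    ["Prove that the sum of two even integers is even."],
    params,
)
print(outputs[0].outputs[0].text)
```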

---

## 🎯 Intended Use Cases

- **Agentic Workflows:** Reliable multi-step planning and tool use.
- **Math & Science:** Solving complex word problems and equations.
- **Code Generation:** Writing and debugging algorithms.
- **Structured Data Extraction:** Parsing and reasoning over unstructured text.

> **Note:** This is a post-trained reasoning variant intended for evaluation and experimentation.  
> It is not production-validated and is not optimized for open-ended conversational chat.

---

## 💻 Usage

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "DeepBrainz/DeepBrainz-R1-0.6B-Exp"

# Load the tokenizer and model; bfloat16 keeps the memory footprint small,
# and device_map="auto" (requires `accelerate`) places the weights on the
# available accelerator(s).
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype="bfloat16",
    device_map="auto"
)

prompt = "Analyze the time complexity of the following algorithm:"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

# Generate up to 256 new tokens and decode, dropping special tokens.
outputs = model.generate(**inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```
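Since the architecture is Qwen-compatible, the tokenizer likely ships a chat template; if so, wrapping prompts with `apply_chat_template` tends to yield more reliable completions than raw text. A hedged sketch, reusing the `model` and `tokenizer` objects loaded above:

```python
# Sketch: chat-style prompting via the tokenizer's chat template, if present.
# Assumes the `model` and `tokenizer` from the snippet above.
messages = [
    {"role": "user", "content": "Solve step by step: what is 17 * 24?"}
]
inputs = tokenizer.apply_chat_template(
    messages,
    add_generation_prompt=True,  # append the assistant-turn marker
    return_tensors="pt",
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens, not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```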

---

## 🛡️ Limitations & Safety

While this model demonstrates strong reasoning capabilities, it may still produce inaccurate information ("hallucinations"). Users should implement appropriate guardrails for production deployments.
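One illustrative guardrail pattern, entirely hypothetical and not part of the model's API, is to validate structured output before accepting it and retry on failure:

```python
import json

def extract_json(generate_fn, prompt, max_retries=3):
    """Retry generation until the model returns parseable JSON.

    `generate_fn` is any callable mapping a prompt string to a completion
    string (e.g. a thin wrapper around model.generate above). This helper
    is a hypothetical sketch, not an official utility.
    """
    for _ in range(max_retries):
        text = generate_fn(prompt)
        try:
            return json.loads(text)
        except json.JSONDecodeError:
            continue  # malformed output: try again
    raise ValueError("model did not return valid JSON")
```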

---

## 📜 License

This model is released under the **Apache 2.0** license, allowing for academic and commercial use.

---

<div align="center">
  <b>DeepBrainz AI & Labs</b><br>
  <i>Advancing General Intelligence through Scalable Reasoning</i>
</div>