Update README.md
README.md CHANGED
@@ -57,21 +57,32 @@ Only the solution portion of each example was used for loss computation through

## Training Configuration

-LoRA
+## Training Configuration (MI300X Run)
+
+**Method:** LoRA (full precision, bfloat16)
+**Precision:** bfloat16 (no 4-bit quantization)
+
+**LoRA settings**
+- Rank: 16
+- Alpha: 32
+- Dropout: 0.05
+- Target modules: `q_proj`, `k_proj`, `v_proj`, `o_proj`
+
+**Data & sequence**
+- Max sequence length: 1024
+
+**Optimization**
+- Batch size: 2
+- Gradient accumulation: 8
+- **Effective batch size:** 16
+- Learning rate: 1e-4
+- Optimizer: `adamw_torch`
+- Scheduler: cosine
+- Warmup: 5%
+
+**Training**
+- Epochs: 3
+

---
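For readers mapping these settings to code, the sketch below shows one way the listed hyperparameters could be expressed with Hugging Face `peft` and `transformers`. It is an illustration only, not the repository's training script: the base model name, `output_dir`, and the surrounding data/trainer wiring are placeholders introduced here.

```python
# Hypothetical sketch: map the README's hyperparameters onto PEFT/Transformers objects.
import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM, TrainingArguments

# Placeholder checkpoint; the actual base model is not named in this commit.
model = AutoModelForCausalLM.from_pretrained(
    "your-base-model",
    torch_dtype=torch.bfloat16,   # full-precision bfloat16, no 4-bit quantization
)

lora_config = LoraConfig(
    r=16,                         # Rank: 16
    lora_alpha=32,                # Alpha: 32
    lora_dropout=0.05,            # Dropout: 0.05
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)

training_args = TrainingArguments(
    output_dir="outputs",               # placeholder path
    per_device_train_batch_size=2,      # Batch size: 2
    gradient_accumulation_steps=8,      # 2 x 8 = effective batch size of 16
    learning_rate=1e-4,
    optim="adamw_torch",
    lr_scheduler_type="cosine",
    warmup_ratio=0.05,                  # Warmup: 5%
    num_train_epochs=3,
    bf16=True,
)
# Sequences would be truncated or packed to 1024 tokens at tokenization time.
```

With `per_device_train_batch_size=2` and `gradient_accumulation_steps=8`, each optimizer step sees 16 examples, which is the effective batch size quoted above.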