tags:
- typescript
- rust
- go
- bigcode-standard
- stack-v2-methodology
- commercial-safe
- pii-scrubbed
- license-audited
pretty_name: HSH Intelligence — GitHub Code AI Training Corpus (5K Sample)
size_categories:
- 1K<n<10K
---

# HSH Intelligence — GitHub Code AI Training Corpus

**5,000-record sample of the HSH Intelligence GitHub Code AI Training Corpus.**

A curated, production-grade sample of source code from top-tier public GitHub repositories — engineered for large language model training, fine-tuning, and code understanding research.

The full corpus contains **5.6 TB** of source code (**211 million+ files**, **7.05 billion lines**) across **14 production languages**.

---

## 10/10 Quality Checks

This sample passes all 10 industry-standard quality checks following **BigCode / The Stack v2** production methodology.

| # | Check | Tool | Result |
|---|---|---|---|
| 1 | License compliance | scancode-toolkit 32.5.0 | 0% copyleft |
| 2 | Secret detection | gitleaks 8.18.4 | 0 leaks |
| 3 | Near-duplicate removal | MinHash LSH (256-perm, 5-gram, 0.9 threshold) | 0% duplicates |
| 4 | Code complexity | radon 6.0.1 | 3.92 avg cyclomatic |
| 5 | Token diversity | tiktoken cl100k_base (GPT-4) | 63,712 unique tokens |
| 6 | Statistical balance | Custom audit | 1,000 records per language |
| 7 | Benchmark contamination | vs HumanEval (164) + MBPP (500) | 0 matches |
| 8 | PII beyond secrets | Custom regex + Luhn validation | 0 real PII |
| 9 | Syntax validation | Babel parser, syn 2.0, tsc, ast, gofmt | 98.0% parseable |
| 10 | Repo legitimacy | GitHub REST API verification | 100% verified |

**Reference:** Methodology follows [BigCode / The Stack v2](https://huggingface.co/datasets/bigcode/the-stack-v2) production standards.

Full audit certificate: [`QUALITY_CERTIFICATE.json`](./QUALITY_CERTIFICATE.json)
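Check 3 flags near-duplicates with MinHash LSH over 5-gram shingles. The similarity measure that LSH approximates can be sketched in a few lines of pure Python — an illustration of the idea only, not the production pipeline (which uses `datasketch`); the two snippets are made up for the example:

```python
# Illustrative sketch: 5-gram shingling + Jaccard similarity,
# the measure that MinHash LSH approximates at scale.
def shingles(code: str, n: int = 5) -> set:
    """Word-level n-gram shingles of a source snippet."""
    tokens = code.split()
    return {tuple(tokens[i:i + n]) for i in range(max(len(tokens) - n + 1, 1))}

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity between the shingle sets of two snippets."""
    sa, sb = shingles(a), shingles(b)
    return len(sa & sb) / len(sa | sb) if sa | sb else 0.0

original = "def add(a, b):\n    return a + b"
near_dup = "def add(x, y):\n    return x + y"
print(jaccard(original, original))  # identical files score 1.0
# Pairs scoring above the 0.9 threshold are dropped as near-duplicates.
```

At corpus scale the pairwise comparison is replaced by MinHash signatures and LSH bucketing, which is what the 256-permutation setting in the table refers to.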

---

## Sample Specifications

| Metric | Value |
|---|---|
| Records | 5,000 (curated subset) |
| Languages | 5 (Python, JavaScript, TypeScript, Go, Rust) |
| Records per language | 1,000 (perfectly balanced) |
| Unique repositories | 1,499 verified active on GitHub |
| Format | Apache Parquet (zstd compression) + CSV |
| Schema | 19 fields per record |
| Size | 13.4 MB (Parquet) / 49.9 MB (CSV) |
| License coverage | 100% commercial-safe (MIT, Apache-2.0, BSD, ISC) |
| PII status | Fully scrubbed (zero secrets, emails, IPs, SSNs) |
| Syntax validation | 98.0% parseable (industry standard: ≥ 95%) |

### Repository Quality

- 56.1% from repos with 10,000+ GitHub stars
- 6.1% archived repos (still valid, just not actively maintained)
- 0.0% deleted repos
- Top repos include: `facebook/react`, `ollama/ollama`, `django/django`, `AUTOMATIC1111/stable-diffusion-webui`
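The star breakdown above is easy to recompute on the sample itself. A minimal pure-Python sketch — the three records are fabricated stand-ins; real code would iterate over rows loaded from the Parquet file:

```python
# Toy stand-in records; real code would load the sample Parquet/CSV instead.
records = [
    {"repo_name": "facebook/react", "repo_stars": 220_000, "archived": False},
    {"repo_name": "ollama/ollama", "repo_stars": 90_000, "archived": False},
    {"repo_name": "example/tinyrepo", "repo_stars": 42, "archived": True},
]

# Share of records coming from 10K+ star repositories.
high_star = [r for r in records if r["repo_stars"] >= 10_000]
share = len(high_star) / len(records)
print(f"Records from 10K+ star repos: {share:.1%}")  # → 66.7%
```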

---

## Full Corpus Statistics

| Metric | Value |
|---|---|
| Total lines of code | 7.05 billion |
| Unique repositories | 3,710+ permissive-license repos |
| Programming languages | 14 production languages |
| Updates | Daily incremental |

**Languages covered:** Python, JavaScript, TypeScript, Go, Rust, Java, C++, Ruby, Swift, Kotlin, PHP, C#, Scala, Solidity

---
## License Coverage (Commercial-Safe Only)

| License | Status | Notes |
|---|---|---|
| LGPL-2.1 / LGPL-3.0 | EXCLUDED | Copyleft |
| No license / Proprietary | EXCLUDED | Default copyright |

License detection performed using **scancode-toolkit 32.5.0** with per-file SPDX classification.

---

## Schema (19 Fields)

| Field | Type | Description |
|---|---|---|
| `id` | string | Unique record identifier (sha256-prefixed) |
| `language` | string | Detected programming language |
| `repo_owner` | string | GitHub username or organization |
| `repo_name` | string | Repository name |
| `repo_stars` | integer | GitHub star count |
| `repo_forks` | integer | GitHub fork count |
| `repo_description` | string | Repository description |
| `repo_topics` | list[string] | GitHub repo topics |
| `license` | string | SPDX license identifier |
| `file_path` | string | Relative path within repo |
| `file_name` | string | Filename with extension |
| `file_size` | integer | File size in bytes |
| `code` | string | Raw source code content (PII-scrubbed) |
| `word_count` | integer | Total word count |
| `char_count` | integer | Character count |
| `line_count` | integer | Total lines of code |
| `data_quality_score` | float | Composite quality score (0.0–1.0) |
| `timestamp` | timestamp | Record creation timestamp |
| `scrubbed` | boolean | PII scrubbing flag (always `True`) |
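A quick structural check against this schema can catch malformed records before training. A minimal sketch — the field/type pairs mirror a subset of the table above, and the example record is fabricated:

```python
# Expected Python-side types for a subset of the 19 schema fields.
SCHEMA = {
    "id": str,
    "language": str,
    "repo_owner": str,
    "repo_name": str,
    "repo_stars": int,
    "code": str,
    "line_count": int,
    "data_quality_score": float,
    "scrubbed": bool,
}

def validate(record: dict) -> list:
    """Return a list of schema violations (empty list means valid)."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in record:
            errors.append(f"missing field: {field}")
        elif not isinstance(record[field], expected):
            errors.append(f"bad type for {field}: {type(record[field]).__name__}")
    return errors

record = {
    "id": "sha256-deadbeef", "language": "Python", "repo_owner": "django",
    "repo_name": "django", "repo_stars": 80000, "code": "print('hi')",
    "line_count": 1, "data_quality_score": 0.97, "scrubbed": True,
}
print(validate(record))  # → [] (record conforms)
```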

---

## Quick Start

### Load with Hugging Face Datasets

```python
from datasets import load_dataset

ds = load_dataset("HSH-Intelligence/github-code-corpus-sample")

print(ds)
print(ds["train"][0])

python_only = ds["train"].filter(
    lambda x: x["language"] == "Python" and x["data_quality_score"] >= 0.95
)
print(f"High-quality Python records: {len(python_only)}")
```

### Load directly with pandas

```python
import pandas as pd

df = pd.read_parquet(
    "hf://datasets/HSH-Intelligence/github-code-corpus-sample/github_code_sample_5000.parquet"
)

print(df.head())
print(f"Total records: {len(df):,}")
print(f"Languages: {df['language'].value_counts()}")
print(f"Top repos: {df['repo_name'].value_counts().head(10)}")
```

---
```bash
curl -H "X-API-Key: demo-key-12345" \
  "https://api.hshintelligence.com/api/v1/github-code-corpus?language=Rust&license=MIT&page_size=5"
```

Or run the interactive Google Colab notebook:
https://links.hshintelligence.com/github-demo
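The same query can be issued from Python with only the standard library. A sketch — the endpoint and demo key are taken from the curl example above; the request is constructed but not sent here:

```python
from urllib.parse import urlencode
from urllib.request import Request

# Build the same query as the curl example (demo key from the docs above).
BASE = "https://api.hshintelligence.com/api/v1/github-code-corpus"
params = {"language": "Rust", "license": "MIT", "page_size": 5}
url = f"{BASE}?{urlencode(params)}"

req = Request(url, headers={"X-API-Key": "demo-key-12345"})
print(req.full_url)
# To execute the call: urllib.request.urlopen(req).read()
```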

---
- **Code search and retrieval** — embedding training
- **Code understanding research** — academic benchmarks
- **Vertical AI** — domain-specific code assistants
- **Benchmark-safe evaluation** — zero contamination vs HumanEval/MBPP

---

## Why This Corpus

| vs. Alternative | HSH Intelligence Edge |
|---|---|
| The Stack v2 | Per-file license audit + provenance trail + 10-check quality verification |
| Common Crawl code | Pre-filtered, deduplicated, syntax-validated, PII-scrubbed |
| Custom GitHub scraping | Saves 4+ months of engineering work |
| Internal datasets | EU AI Act Article 10 compliance ready |
| Generic samples | Industry-standard 10/10 quality checks documented |

---

## Compliance & Provenance

- **EU AI Act Article 10** ready (training data governance)
- **GDPR** safe (zero PII verified)
- **CCPA** safe (no California resident data)
- **HIPAA** considerations addressed (no medical data)
- Per-record license audit trail
- Source attribution retained (`repo_owner`, `repo_name`)
- Quality scoring per record
- Zero PII (emails, phones, IPs, SSNs, credit cards verified)
- Zero secrets (API keys, tokens, credentials verified via gitleaks)
- Zero benchmark contamination (HumanEval, MBPP verified)

---

## Methodology

This dataset follows **BigCode / The Stack v2** production methodology with additional quality gates.

### Tools Used

| Category | Tools |
|---|---|
| License detection | scancode-toolkit |
| Secret scanning | gitleaks |
| Deduplication | datasketch MinHash LSH |
| Complexity analysis | radon |
| Tokenization | tiktoken (cl100k_base) |
| Syntax validation | Babel parser, syn 2.0, tsc, Python ast, gofmt |
| Repo verification | GitHub REST API v3 |
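The PII check (check 8 above) uses Luhn validation to decide whether a digit string is a plausible card number rather than harmless test data. The Luhn checksum itself is compact — a sketch of the standard algorithm, independent of the production regex pipeline:

```python
def luhn_valid(number: str) -> bool:
    """Luhn checksum: double every second digit from the right, sum mod 10."""
    digits = [int(d) for d in number if d.isdigit()]
    if len(digits) < 2:
        return False
    total = 0
    for i, d in enumerate(reversed(digits)):
        if i % 2 == 1:          # every second digit from the right
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return total % 10 == 0

# A well-known Luhn-valid test number vs. a random digit string.
print(luhn_valid("4539 1488 0343 6467"))  # → True
print(luhn_valid("1234 5678 9012 3456"))  # → False
```

Digit runs that pass the checksum (and match a card-number regex) are treated as potential PII; runs that fail it are ignored.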

### Quality Thresholds

- License compliance: < 0.1% copyleft (achieved: 0%)
- Secret leaks: zero tolerance (achieved: 0)
- Near-duplicates: < 5% (achieved: 0%)
- PII: zero tolerance (achieved: 0)
- Syntax validation: ≥ 95% parseable (achieved: 98%)
- Repo legitimacy: < 1% deleted (achieved: 0%)

Full quality certificate: [`QUALITY_CERTIFICATE.json`](./QUALITY_CERTIFICATE.json)
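These gates are straightforward to encode as an automated check. A minimal sketch — the metric values are the achieved numbers from the list above, but the dict shape is illustrative, not the actual `QUALITY_CERTIFICATE.json` schema:

```python
# Achieved audit metrics keyed by gate name (illustrative shape).
metrics = {
    "copyleft_pct": 0.0,       # must be < 0.1
    "secret_leaks": 0,         # must be == 0
    "near_dup_pct": 0.0,       # must be < 5.0
    "pii_hits": 0,             # must be == 0
    "parseable_pct": 98.0,     # must be >= 95.0
    "deleted_repo_pct": 0.0,   # must be < 1.0
}

# One predicate per quality gate, mirroring the thresholds above.
GATES = {
    "copyleft_pct": lambda v: v < 0.1,
    "secret_leaks": lambda v: v == 0,
    "near_dup_pct": lambda v: v < 5.0,
    "pii_hits": lambda v: v == 0,
    "parseable_pct": lambda v: v >= 95.0,
    "deleted_repo_pct": lambda v: v < 1.0,
}

failures = [name for name, ok in GATES.items() if not ok(metrics[name])]
print("PASS" if not failures else f"FAIL: {failures}")  # → PASS
```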

---

## Full Corpus Access

This is a **5,000-record evaluation sample**. The full corpus is available via commercial license:

| Tier | Records | Languages | Format |
|---|---|---|---|
| Sample (this dataset) | 5,000 | 5 | Parquet + CSV |
| Standard | 10M+ | 14 | Parquet |
| Enterprise | 211M+ (full) | 14 | Parquet (+JSONL on request) |

**Delivery options:**

- Cloud signed URL (Backblaze B2, AWS S3)
- Cross-cloud transfer (AWS, GCP, Azure)
- SFTP delivery for on-prem
- Daily incremental updates (Enterprise tier)

**Custom subsets available:** Filter by language, license, repo stars, complexity, or quality threshold.

**Licensing:** 1-year non-exclusive commercial license.

---

## Contact

- **Email:** sales@healingsunhaven.com
- **Website:** https://www.hshintelligence.com
- **Live API:** https://api.hshintelligence.com
- **Documentation:** https://links.hshintelligence.com/github-docs
- **Demo Colab:** https://links.hshintelligence.com/github-demo

---

## About HSH Intelligence

**HSH Intelligence** is the Data Division of **Healing Sun Haven LLC**, building production-grade AI training datasets and B2B intelligence products.

We engineer datasets across AI training, B2B intelligence, and decision-support — purpose-built for frontier AI labs and enterprise teams who demand industry-standard quality verification.

---

*This dataset is provided for evaluation purposes. The full 5.6 TB corpus is available under commercial license. Quality audit certificate, license documentation, and provenance trail included with all enterprise contracts.*

Audit date: 2026-05-07 | Methodology reference: BigCode/Stack v2 | Full quality report: QUALITY_CERTIFICATE.json