load_dataset appears broken
Hi, thank you for uploading. I tried to load this dataset with the usual
from datasets import load_dataset
ds = load_dataset("allenai/dolma3_mix-6T")
But this appears to load only a tiny portion of the dataset, 217,644 documents. The workaround is to clone the repo or use the hf CLI to download it to disk first, but there's something not fully correct in how the dataset registers with load_dataset.
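For reference, a minimal sketch of the download-to-disk workaround via huggingface_hub's snapshot_download (the local path and the commented-out folder filter are illustrative, not the repo's actual layout):

```python
# Sketch: mirror the dataset repo to local disk first, then read the files
# from there (e.g. by passing them to load_dataset via data_files).
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="allenai/dolma3_mix-6T",
    repo_type="dataset",
    local_dir="dolma3_mix-6T",  # illustrative local path
    # allow_patterns=["*common_crawl*"],  # optionally restrict to a subset of folders
)
```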
(Also if I may make a feature request, one thing of interest to many might be uniformly random samples of the data for quick exploration, e.g. similar to FineWeb 10B, 100B, 350B token subsets.)
Hi @karpathy, thanks so much for calling this out, and so sorry for the issue! This happened because of how we defined the data split in the README - we originally defined it as one of the common_crawl subfolders to keep it simple (the dataset is quite large, with many folders), but it looks like that causes a problem for load_dataset. I should be able to fix that ASAP!
Regarding your feature request, we do actually have a 150B sample available: https://huggingface.co/datasets/allenai/dolma3_mix-150B-1025. It's not uniform, but it's consistent with the upsampling we applied on the tokens for the hero run. Would that be sufficient? I'll relay this request to the team though - we should be able to do this relatively easily if not!
@baileyk got it, thank you!
I missed the 150B sample earlier. Do you have more details on what the "hero run" is? I'm not able to find a mention of it in the paper, for example. 150B would be a good amount and sufficient for my purposes; e.g. I was trying to swap it into nanochat to compare against FineWeb, so these are GPT-2/3 miniseries models and 150B is plenty.
I seem to have hit more errors, e.g.:
from datasets import load_dataset
ds = load_dataset("allenai/dolma3_mix-150B-1025", split="train", streaming=True)
for i, d in enumerate(ds):
    if i > 10:
        break
    print(d)
Produces:
raise TypeError(f"Couldn't cast array of type {_short_str(array.type)} to {_short_str(pa_type)}")
TypeError: Couldn't cast array of type struct<cc_dump: string, dolma2_qc: struct<0: double, 1: double>, exact_duplicates: int64, lang: struct<en: double>, madlad: struct<num_sentences: int64, rule.2: list<item: int64>, rule.5: list<item: int64>, status: string>, minhash: null, original_word_count: int64, sa_remove_ranges: list<item: null>, text_hash: string, warc_content_type: string, warc_date: string, warc_url: string, weborganizer: struct<__label__adult_content: double, __label__crime_and_law: double, __label__entertainment: double, __label__finance_and_business: double, __label__games: double, __label__home_and_hobbies: double, __label__science_math_and_technology: double, __label__social_life: double, __label__software: double, __label__software_development: double, __label__art_and_design: double, __label__education_and_jobs: double, __label__fashion_and_beauty: double, __label__health: double, __label__literature: double, __label__sports_and_fitness: double>, weborganizer_max: string> to string
And I'm not able to load_dataset without streaming=True because it crashes with a rate-limiting error. I'm not actually sure how to get around that on Hugging Face.
> Do you have more details on what the "hero run" is?
Yep, sorry - the hero run is just the larger run, so in this case that would be our 7B/32B runs. So the 150B sample is representatively upsampled relative to this (6T) dataset :)
@karpathy
Regarding the error you're hitting - ah yeah, unfortunately load_dataset is a bit picky about schemas, and because our data sources differ, we can't perfectly unify the schema across every source. I just updated the README to define the features a bit further, though, which should hopefully fix some of those errors. I just ran your code snippet and now I'm seeing the data populate:
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 6081/6081 [00:00<00:00, 56566.52it/s]
Resolving data files: 100%|█████████████████████████████████████████████████████████████████████████████████████████| 6081/6081 [00:00<00:00, 54736.84it/s]
{'id': 'fd27c4f3-b13c-488e-959a-5741223f5985', 'text': 'Myfreecam S\n\nSlutroulette Features:\nSlutroulette is a doppelganger of Streamate in regards to appearance as well as the only difference you will see is that the black background of Streamate is changed by a white one. The very same model cameras from Streamate are featured here.\n\nMyfreecam S\n\nWhen you slutroulette.com, you will certainly be asked to develop a totally free account, after which you will certainly be rerouted to the homepage of Slutroulette Live which is only Streamate...
I'll be able to take a closer look in the morning, but I should be able to resolve those for you if not already!
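In the meantime, if the rate-limit error on the non-streaming path persists, one thing worth trying (an untested suggestion on our side) is authenticating before loading, since anonymous requests to the Hub are throttled more aggressively than authenticated ones:

```python
# Untested suggestion: authenticate first, then retry the non-streaming load.
from huggingface_hub import login
from datasets import load_dataset

login()  # or set the HF_TOKEN environment variable instead
ds = load_dataset("allenai/dolma3_mix-150B-1025", split="train")
```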
@baileyk Claude managed to find a workaround for the download issue, but now I noticed that the https://huggingface.co/datasets/allenai/dolma3_mix-150B-1025 dataset looks malformed! A very large number of documents seem to be extremely short. For example, one entire document is just "Accurate Biz". You can see this in the HF viewer there. Possibly something went wrong with the data processing?
EDIT: this dataset seems to be broken too in some way. For example the id "2dece164-a4de-4720-a611-c6d5306912c1" document is just the single character "5" as text. Running some quick stats, about 15% of the documents are super tiny, less than 50 characters.
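A quick check along these lines (a sketch only; the sample size and the 50-character threshold are arbitrary):

```python
# Sketch: stream a slice of the 150B mix and count very short documents.
from datasets import load_dataset

ds = load_dataset("allenai/dolma3_mix-150B-1025", split="train", streaming=True)

total, short = 0, 0
for doc in ds.take(100_000):  # take() limits how much of the stream is scanned
    total += 1
    short += len(doc["text"]) < 50
print(f"{short}/{total} documents under 50 chars ({100 * short / total:.1f}%)")
```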
@karpathy Thanks for the update - the team is investigating and will get back to you shortly. It's expected that we encounter short documents, but we would also expect them to be filtered out by the initial heuristic filtering steps. There is some substring deduplication we do near the end of the pipeline that could remove large chunks of text, leaving a much smaller document as a result.
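To illustrate that failure mode, here is a toy sketch (the sa_remove_ranges field appears in the metadata schema above, but treating it as a list of character spans to delete is an assumption on my part):

```python
# Toy sketch: substring (suffix-array) dedup can leave a tiny residual document.
# Assumption: remove_ranges is a list of [start, end) character spans to delete.
def apply_remove_ranges(text: str, remove_ranges: list[tuple[int, int]]) -> str:
    kept, cursor = [], 0
    for start, end in sorted(remove_ranges):
        kept.append(text[cursor:start])  # keep the text before the duplicated span
        cursor = max(cursor, end)        # skip the duplicated span itself
    kept.append(text[cursor:])
    return "".join(kept)

doc = "Accurate Biz " + "boilerplate repeated across many pages " * 10
print(apply_remove_ranges(doc, [(13, len(doc))]))  # -> "Accurate Biz "
```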
@karpathy Just giving an update -- these datasets are indeed accurate to what we trained on, due to the deduplication procedure mentioned above. If it's of interest to you, the team can run the heuristics again on the 150B sample after suffix deduplication to remove these short texts. Let me know if that would be helpful and we can get it to you ASAP.
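In the meantime, if you'd rather just drop the short documents on your side, a minimal client-side filter would look something like this (the 50-character cutoff is arbitrary):

```python
# Sketch: drop very short documents on the fly while streaming.
from datasets import load_dataset

ds = load_dataset("allenai/dolma3_mix-150B-1025", split="train", streaming=True)
ds = ds.filter(lambda doc: len(doc["text"]) >= 50)  # arbitrary length cutoff
```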
For what it's worth, we have a plan in place for future iterations to remove these documents when the suffix arrays leave short texts behind.