Dataset schema (from the viewer header):
- title: string (length 1 to 290)
- body: string (length 0 to 228k)
- html_url: string (length 46 to 51)
- comments: list
- pull_request: dict
- number: int64 (1 to 5.59k)
- is_pull_request: bool (2 classes)
Complete rouge kwargs
In #701 we noticed that some kwargs were missing for rouge.
https://github.com/huggingface/datasets/pull/702
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/702", "html_url": "https://github.com/huggingface/datasets/pull/702", "diff_url": "https://github.com/huggingface/datasets/pull/702.diff", "patch_url": "https://github.com/huggingface/datasets/pull/702.patch", "merged_at": "2020-10-02T10:11:03"...
702
true
Add rouge 2 and rouge Lsum to rouge metric outputs
Continuation of #700. Rouge 2 and Rouge Lsum were missing from Rouge's outputs. Rouge Lsum is also useful to evaluate Rouge L for sentences with `\n`. Fix #617
https://github.com/huggingface/datasets/pull/701
[ "Oops, too late, sorry" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/701", "html_url": "https://github.com/huggingface/datasets/pull/701", "diff_url": "https://github.com/huggingface/datasets/pull/701.diff", "patch_url": "https://github.com/huggingface/datasets/pull/701.patch", "merged_at": "2020-10-02T09:52:18"...
701
true
Add rouge-2 in rouge_types for metric calculation
The description of the ROUGE metric says:
```
_KWARGS_DESCRIPTION = """
Calculates average rouge scores for a list of hypotheses and references
Args:
    predictions: list of predictions to score. Each predictions should be a string with tokens separated by spaces.
    references: list of reference for ...
https://github.com/huggingface/datasets/pull/700
[ "Indeed there's currently a mismatch between the description and what it rouge actually returns.\r\nThanks for proposing this fix :) \r\n\r\nI think it's better to return rouge 1-2-L.\r\nWas there a reason to only include rouge 1 and rouge L @thomwolf ? ", "rougeLsum is also missing, could you add it ?", "Addin...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/700", "html_url": "https://github.com/huggingface/datasets/pull/700", "diff_url": "https://github.com/huggingface/datasets/pull/700.diff", "patch_url": "https://github.com/huggingface/datasets/pull/700.patch", "merged_at": null }
700
true
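Taken together, #700, #701, and #702 make the metric's outputs and kwargs match its description. A minimal sketch of the resulting usage, assuming a `datasets` version with these PRs merged (the example strings are made up):
```python
from datasets import load_metric

rouge = load_metric("rouge")
predictions = ["the cat sat on the mat"]        # hypothetical summary
references = ["the cat was sitting on the mat"]  # hypothetical reference
# rouge_types is one of the kwargs completed in #702; rouge2 and rougeLsum
# were added to the outputs in #700/#701.
scores = rouge.compute(
    predictions=predictions,
    references=references,
    rouge_types=["rouge1", "rouge2", "rougeL", "rougeLsum"],
)
print(sorted(scores.keys()))  # ['rouge1', 'rouge2', 'rougeL', 'rougeLsum']
```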
XNLI dataset is not loading
`dataset = datasets.load_dataset(path='xnli')` shows the error below:
```
/opt/conda/lib/python3.7/site-packages/nlp/utils/info_utils.py in verify_checksums(expected_checksums, recorded_checksums, verification_name)
     36     if len(bad_urls) > 0:
     37         error_msg = "Checksums didn't match" + for_verifi...
https://github.com/huggingface/datasets/issues/699
[ "also i tried below code to solve checksum error \r\n`datasets-cli test ./datasets/xnli --save_infos --all_configs`\r\n\r\nand it shows \r\n\r\n```\r\n2020-10-02 07:06:16.588760: I tensorflow/stream_executor/platform/default/dso_loader.cc:44] Successfully opened dynamic library libcudart.so.10.1\r\nTraceback (most ...
null
699
false
Update README.md
Hey, I was just telling my subscribers to check out your repositories. Thank you.
https://github.com/huggingface/datasets/pull/697
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/697", "html_url": "https://github.com/huggingface/datasets/pull/697", "diff_url": "https://github.com/huggingface/datasets/pull/697.diff", "patch_url": "https://github.com/huggingface/datasets/pull/697.patch", "merged_at": null }
697
true
Elasticsearch index docs
I added the docs for ES indexes. I also added a `load_elasticsearch_index` method to load an index that has already been built. I checked the tests for the ES index and we have tests that mock ElasticSearch. I think this is good for now but at some point it would be cool to have an end-to-end test with a real ES...
https://github.com/huggingface/datasets/pull/696
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/696", "html_url": "https://github.com/huggingface/datasets/pull/696", "diff_url": "https://github.com/huggingface/datasets/pull/696.diff", "patch_url": "https://github.com/huggingface/datasets/pull/696.patch", "merged_at": "2020-10-02T07:48:18"...
696
true
Update XNLI download link
The old link isn't working anymore. I updated it with the new official link. Fix #690
https://github.com/huggingface/datasets/pull/695
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/695", "html_url": "https://github.com/huggingface/datasets/pull/695", "diff_url": "https://github.com/huggingface/datasets/pull/695.diff", "patch_url": "https://github.com/huggingface/datasets/pull/695.patch", "merged_at": "2020-10-01T14:01:14"...
695
true
Use GitHub instead of aws in remote dataset tests
Recently we switched from AWS S3 to GitHub to download dataset scripts. However, in the tests the dummy data were still downloaded from S3, so I changed the MockDownloadManager to download them from GitHub instead. Moreover I noticed that `anli`'s dummy data were quite heavy (18MB compressed, i.e. the ent...
https://github.com/huggingface/datasets/pull/694
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/694", "html_url": "https://github.com/huggingface/datasets/pull/694", "diff_url": "https://github.com/huggingface/datasets/pull/694.diff", "patch_url": "https://github.com/huggingface/datasets/pull/694.patch", "merged_at": "2020-10-02T07:47:26"...
694
true
Rachel ker add dataset/mlsum
.
https://github.com/huggingface/datasets/pull/693
[ "It looks like an outdated PR (we've already added mlsum). Closing it" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/693", "html_url": "https://github.com/huggingface/datasets/pull/693", "diff_url": "https://github.com/huggingface/datasets/pull/693.diff", "patch_url": "https://github.com/huggingface/datasets/pull/693.patch", "merged_at": null }
693
true
Update README.md
https://github.com/huggingface/datasets/pull/692
[ "Hacktoberfest spam", "To enhance its readability.....not Hacktoberfest spam", "How is adding a punctuation to the end of a sentence justified as \"To enhance its readability\". \r\nConsidering that this is not your first \"README enhancement '' please don't spam the open source community with useless PR to get...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/692", "html_url": "https://github.com/huggingface/datasets/pull/692", "diff_url": "https://github.com/huggingface/datasets/pull/692.diff", "patch_url": "https://github.com/huggingface/datasets/pull/692.patch", "merged_at": null }
692
true
Add UI filter to filter datasets based on task
This is great work, so a huge shoutout to the contributors and huggingface. The [/nlp/viewer](https://huggingface.co/nlp/viewer/) is great and the [/datasets](https://huggingface.co/datasets) page is great. I was wondering if in either or both places we can have a filter that selects if a dataset is good for the following...
https://github.com/huggingface/datasets/issues/691
[ "Already supported." ]
null
691
false
XNLI dataset: NonMatchingChecksumError
Hi, I tried to download the "xnli" dataset in Colab using `xnli = load_dataset(path='xnli')` but got a 'NonMatchingChecksumError':
```
NonMatchingChecksumError                 Traceback (most recent call last)
<ipython-input-27-a87bedc82eeb> in <module>()
----> 1 xnli = load_dataset(path='xnli')
3 frames
/usr...
https://github.com/huggingface/datasets/issues/690
[ "Thanks for reporting.\r\nThe data file must have been updated by the host.\r\nI'll update the checksum with the new one.", "Well actually it looks like the link isn't working anymore :(", "The new link is https://cims.nyu.edu/~sbowman/xnli/XNLI-1.0.zip\r\nI'll update the dataset script", "I'll do a release i...
null
690
false
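Once the dataset script was updated with the new link (#695), a hedged sketch of how a user could pick up the fix, assuming the `download_mode` argument of the 1.x `load_dataset` API:
```python
from datasets import load_dataset

# Force a fresh download so the stale cached file and its old checksum
# are not reused after upgrading datasets to a version with the new URL.
xnli = load_dataset("xnli", download_mode="force_redownload")
```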
Switch to pandas reader for text dataset
Following the discussion in #622, it appears that there's no appropriate way to use the pyarrow CSV reader to read text files, because of the separator. In this PR I switched to pandas to read the file. Moreover, pandas allows reading the file in chunks, which means that you can build the arrow dataset from a text...
https://github.com/huggingface/datasets/pull/689
[ "If the windows tests in the CI pass, today will be a happy day" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/689", "html_url": "https://github.com/huggingface/datasets/pull/689", "diff_url": "https://github.com/huggingface/datasets/pull/689.diff", "patch_url": "https://github.com/huggingface/datasets/pull/689.patch", "merged_at": "2020-09-30T16:45:31"...
689
true
Disable tokenizers parallelism in multiprocessed map
It was reported in #620 that using multiprocessing with a tokenizer shows this message:
```
The current process just got forked. Disabling parallelism to avoid deadlocks...
To disable this warning, please explicitly set TOKENIZERS_PARALLELISM=(true | false)
```
This message is shown when TOKENIZERS_PARALLELISM is...
https://github.com/huggingface/datasets/pull/688
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/688", "html_url": "https://github.com/huggingface/datasets/pull/688", "diff_url": "https://github.com/huggingface/datasets/pull/688.diff", "patch_url": "https://github.com/huggingface/datasets/pull/688.patch", "merged_at": "2020-10-01T08:45:45"...
688
true
`ArrowInvalid` occurs while running `Dataset.map()` function
It seems to fail to process the final batch. This [colab](https://colab.research.google.com/drive/1_byLZRHwGP13PHMkJWo62Wp50S_Z2HMD?usp=sharing) can reproduce the error. Code:
```python
# train_ds = Dataset(features: {
#     'title': Value(dtype='string', id=None),
#     'score': Value(dtype='float64', id=Non...
https://github.com/huggingface/datasets/issues/687
[ "Hi !\r\n\r\nThis is because `encode` expects one single text as input (str), or one tokenized text (List[str]).\r\nI believe that you actually wanted to use `encode_batch` which expects a batch of texts.\r\nHowever this method is only available for our \"fast\" tokenizers (ex: BertTokenizerFast).\r\nBertJapanese i...
null
687
false
Dataset browser url is still https://huggingface.co/nlp/viewer/
Might be worth updating to https://huggingface.co/datasets/viewer/
https://github.com/huggingface/datasets/issues/686
[ "Yes! might do it with @srush one of these days. Hopefully it won't break too many links (we can always redirect from old url to new)", "This was fixed but forgot to close the issue. cc @lhoestq @yjernite \r\n\r\nThanks @jarednielsen!" ]
null
686
false
Add features parameter to CSV
Add support for the `features` parameter when loading a csv dataset:
```python
from datasets import load_dataset, Features

features = Features({...})
csv_dataset = load_dataset("csv", data_files=["path/to/my/file.csv"], features=features)
```
I added tests to make sure that it is also compatible with the ca...
https://github.com/huggingface/datasets/pull/685
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/685", "html_url": "https://github.com/huggingface/datasets/pull/685", "diff_url": "https://github.com/huggingface/datasets/pull/685.diff", "patch_url": "https://github.com/huggingface/datasets/pull/685.patch", "merged_at": "2020-09-30T08:39:54"...
685
true
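A slightly fuller sketch of the new parameter, with a hypothetical two-column file (the feature names and file path are assumptions, not taken from the PR's tests):
```python
from datasets import load_dataset, Features, Value, ClassLabel

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["negative", "positive"]),
})
csv_dataset = load_dataset(
    "csv",
    data_files=["path/to/my/file.csv"],  # hypothetical file with text,label columns
    features=features,
)
```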
Fix column order issue in cast
Previously, the order of the columns in the features passed to `cast_` mattered. Worse, even when the features passed to `cast_` had the same order as the dataset features, the call could fail because the schema that was built was always in alphabetical order. This issue was reported by @lewtun in #623. To fix that I fi...
https://github.com/huggingface/datasets/pull/684
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/684", "html_url": "https://github.com/huggingface/datasets/pull/684", "diff_url": "https://github.com/huggingface/datasets/pull/684.diff", "patch_url": "https://github.com/huggingface/datasets/pull/684.patch", "merged_at": "2020-09-29T15:56:45"...
684
true
Fix wrong delimiter in text dataset
The delimiter is set to the bell character, as it is usually found nowhere in text files. However, in the text dataset the delimiter was set to `\b`, which is backspace in Python, while the bell character is `\a`. I replaced `\b` with `\a`. Hopefully it fixes the issues mentioned by some users in #622
https://github.com/huggingface/datasets/pull/683
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/683", "html_url": "https://github.com/huggingface/datasets/pull/683", "diff_url": "https://github.com/huggingface/datasets/pull/683.diff", "patch_url": "https://github.com/huggingface/datasets/pull/683.patch", "merged_at": null }
683
true
Update navbar chapter titles color
Consistency with the color change that was done in transformers at https://github.com/huggingface/transformers/pull/7423 It makes the background-color of the chapter titles in the docs navbar darker, to differentiate them from the inner sections. see changes [here](https://691-250213286-gh.circle-artifacts.com/0/do...
https://github.com/huggingface/datasets/pull/682
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/682", "html_url": "https://github.com/huggingface/datasets/pull/682", "diff_url": "https://github.com/huggingface/datasets/pull/682.diff", "patch_url": "https://github.com/huggingface/datasets/pull/682.patch", "merged_at": "2020-09-28T17:30:12"...
682
true
Adding missing @property (+2 small flake8 fixes).
Fixes #678
https://github.com/huggingface/datasets/pull/681
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/681", "html_url": "https://github.com/huggingface/datasets/pull/681", "diff_url": "https://github.com/huggingface/datasets/pull/681.diff", "patch_url": "https://github.com/huggingface/datasets/pull/681.patch", "merged_at": "2020-09-28T10:26:09"...
681
true
Fix bug related to boolean in GAP dataset.
### Why I did this The values in `row["A-coref"]` and `row["B-coref"]` are the strings `'TRUE'` or `'FALSE'`. Since any non-empty string is truthy, `bool('FALSE')` is equal to `True` in Python, so both values were always transformed into `True`. I fixed this problem. ### What I did I modified `bool(row["A-coref"])` and `bool(row["B-cor...
https://github.com/huggingface/datasets/pull/680
[ "Hi !\r\n\r\nGood catch, thanks for creating this PR :)\r\n\r\nCould you also regenerate the metadata for this dataset using \r\n```\r\ndatasets-cli test ./datasets/gap --save_infos --all_configs\r\n```\r\n\r\nThat'd be awesome", "@lhoestq Thank you for your revieing!!!\r\n\r\nI've performed it and have read CONT...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/680", "html_url": "https://github.com/huggingface/datasets/pull/680", "diff_url": "https://github.com/huggingface/datasets/pull/680.diff", "patch_url": "https://github.com/huggingface/datasets/pull/680.patch", "merged_at": "2020-09-29T15:54:47"...
680
true
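The pitfall behind this fix, plus one possible correction (the PR's exact replacement is truncated above, so the comparison below is a sketch, and the row is made up):
```python
# Any non-empty string is truthy, so casting the CSV strings is wrong:
assert bool("FALSE") is True

# Comparing against the literal gives the intended booleans:
row = {"A-coref": "FALSE", "B-coref": "TRUE"}  # hypothetical GAP row
a_coref = row["A-coref"] == "TRUE"  # False, as intended
b_coref = row["B-coref"] == "TRUE"  # True
```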
Fix negative ids when slicing with an array
```python
from datasets import Dataset

d = Dataset.from_dict({"a": range(10)})
print(d[[0, -1]])  # OverflowError
```
raises an error because of the negative id. This PR fixes that. Fix #668
https://github.com/huggingface/datasets/pull/679
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/679", "html_url": "https://github.com/huggingface/datasets/pull/679", "diff_url": "https://github.com/huggingface/datasets/pull/679.diff", "patch_url": "https://github.com/huggingface/datasets/pull/679.patch", "merged_at": "2020-09-28T14:42:19"...
679
true
The download instructions for c4 datasets are not contained in the error message
The manual download instructions are not clear:
```
The dataset c4 with config en requires manual data.
Please follow the manual download instructions: <bound method C4.manual_download_instructions of <datasets_modules.datasets.c4.830b0c218bd41fed439812c8dd19dbd4767d2a3faa385eb695cf8666c982b1b3.c4.C4 object at 0x7ff...
https://github.com/huggingface/datasets/issues/678
[ "Good catch !\r\nIndeed the `@property` is missing.\r\n\r\nFeel free to open a PR :)", "Also not that C4 is a dataset that needs an Apache Beam runtime to be generated.\r\nFor example Dataflow, Spark, Flink etc.\r\n\r\nUsually we generate the dataset on our side once and for all, but we haven't done it for C4 yet...
null
678
false
Move cache dir root creation in builder's init
We use lock files in the builder initialization, but sometimes the cache directory where they're supposed to live had not been created yet. To fix that, I moved the creation of the builder's cache dir root into the builder's init. Fix #671
https://github.com/huggingface/datasets/pull/677
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/677", "html_url": "https://github.com/huggingface/datasets/pull/677", "diff_url": "https://github.com/huggingface/datasets/pull/677.diff", "patch_url": "https://github.com/huggingface/datasets/pull/677.patch", "merged_at": "2020-09-28T14:42:42"...
677
true
train_test_split returns empty dataset item
I tried to split my dataset with `train_test_split`, but afterwards the items in the `train` and `test` `Dataset` are empty. The code:
```
yelp_data = datasets.load_from_disk('/home/ssd4/huanglianzhe/test_yelp')
print(yelp_data[0])
yelp_data = yelp_data.train_test_split(test_size=0.1)
print(yelp_data)
pri...
https://github.com/huggingface/datasets/issues/676
[ "The problem still exists after removing the cache files.", "Can you reproduce this example in a Colab so we can investigate? (or give more information on your software/hardware config)", "Thanks for reporting.\r\nI just found the issue, I'm creating a PR", "We'll do a release pretty soon to include the fix :...
null
676
false
Add custom dataset to NLP?
Is it possible to add a custom dataset such as a .csv to the NLP library? Thanks.
https://github.com/huggingface/datasets/issues/675
[ "Yes you can have a look here: https://huggingface.co/docs/datasets/loading_datasets.html#csv-files", "No activity, closing" ]
null
675
false
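The gist of the docs linked in the comment, as a minimal sketch (the file names are hypothetical):
```python
from datasets import load_dataset

# Single file:
dataset = load_dataset("csv", data_files="my_file.csv")
# Or with explicit splits:
dataset = load_dataset("csv", data_files={"train": "train.csv", "test": "test.csv"})
```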
load_dataset() won't download in Windows
I don't know if this is just me or Windows. Maybe other Windows users can chime in if they don't have this problem. I've been trying to get some of the tutorials working on Windows, but when I use the load_dataset() function, it just stalls and the script keeps running indefinitely without downloading anything. I've wa...
https://github.com/huggingface/datasets/issues/674
[ "I have the same issue. Tried to download a few of them and not a single one is downloaded successfully.\r\n\r\nThis is the output:\r\n```\r\n>>> dataset = load_dataset('blended_skill_talk', split='train')\r\nUsing custom data configuration default <-- This step never ends\r\n```", "This was fixed i...
null
674
false
blog_authorship_corpus crashed
This is just to report that when I pick blog_authorship_corpus in https://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus I get this: ![image](https://user-images.githubusercontent.com/7553188/94349542-4364f300-0013-11eb-897d-b25660a449f0.png)
https://github.com/huggingface/datasets/issues/673
[ "Thanks for reporting !\r\nWe'll free some memory" ]
null
673
false
Questions about XSUM
Hi there ✋ I'm looking into your `xsum` dataset and I have several questions about it. Here is how I loaded the data:
```
>>> data = datasets.load_dataset('xsum', version='1.0.1')
>>> data['train']
Dataset(features: {'document': Value(dtype='string', id=None), 'summary': Value(dtype='string', id=None)}, nu...
https://github.com/huggingface/datasets/issues/672
[ "We should try to regenerate the data using the official script.\r\nBut iirc that's what we used in the first place, so not sure why it didn't match in the first place.\r\n\r\nI'll let you know when the dataset is updated", "Thanks, looking forward to hearing your update on this thread. \r\n\r\nThis is a blocking...
null
672
false
[BUG] No such file or directory
This happens when both: 1. the Hugging Face datasets cache dir does not exist, and 2. you try to load a local dataset script. builder.py throws an error when trying to create a filelock in a directory (cache/datasets) that does not exist: https://github.com/huggingface/datasets/blob/master/src/datasets/builder.py#L177 Tested o...
https://github.com/huggingface/datasets/issues/671
[]
null
671
false
Fix SQuAD metric kwargs description
The `answer_start` field was missing in the kwargs docstring. This should fix #657. FYI, another fix was proposed by @tshrjn in #658, which suggests removing this field. However, IMO `answer_start` is useful to match the squad dataset format for consistency, even though it is not used in the metric computation. I th...
https://github.com/huggingface/datasets/pull/670
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/670", "html_url": "https://github.com/huggingface/datasets/pull/670", "diff_url": "https://github.com/huggingface/datasets/pull/670.diff", "patch_url": "https://github.com/huggingface/datasets/pull/670.patch", "merged_at": "2020-09-29T15:57:37"...
670
true
How to skip a example when running dataset.map
In my processing function I process examples and detect some invalid ones, which I do not want added to the train dataset. However, I did not find how to skip these recognized invalid examples when doing dataset.map.
https://github.com/huggingface/datasets/issues/669
[ "Hi @xixiaoyao,\r\nDepending on what you want to do you can:\r\n- use a first step of `filter` to filter out the invalid examples: https://huggingface.co/docs/datasets/processing.html#filtering-rows-select-and-filter\r\n- or directly detect the invalid examples inside the callable used with `map` and return them un...
null
669
false
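A runnable sketch of the first suggestion from the comment, filtering before mapping (`is_valid` is a hypothetical predicate and the toy data is made up):
```python
from datasets import Dataset

dataset = Dataset.from_dict({"text": ["good example", "", "another good one"]})

def is_valid(example):
    # Hypothetical validity check; replace with your own detection logic.
    return len(example["text"]) > 0

dataset = dataset.filter(is_valid)  # drops the empty example
print(len(dataset))  # 2
```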
OverflowError when slicing with an array containing negative ids
```python
from datasets import Dataset

d = Dataset.from_dict({"a": range(10)})
print(d[0])        # {'a': 0}
print(d[-1])       # {'a': 9}
print(d[[0, -1]])  # OverflowError
```
results in
```
---------------------------------------------------------------------------
OverflowError ...
https://github.com/huggingface/datasets/issues/668
[]
null
668
false
Loss not decrease with Datasets and Transformers
Hi, the following script is used to fine-tune a BertForSequenceClassification model on SST2. The script is adapted from [this colab](https://colab.research.google.com/github/huggingface/datasets/blob/master/notebooks/Overview.ipynb) that presents an example of fine-tuning BertForQuestionAnswering using squad data...
https://github.com/huggingface/datasets/issues/667
[ "And I tested it on T5ForConditionalGeneration, that works no problem.", "Hi did you manage to fix your issue ?\r\n\r\nIf so feel free to share your fix and close this thread" ]
null
667
false
Do both 'bookcorpus' and 'wikipedia' belong to the same datasets which Google used for pretraining BERT?
https://github.com/huggingface/datasets/issues/666
[ "No they are other similar copies but they are not provided by the official Bert models authors." ]
null
666
false
Running dataset.map raises TypeError: can't pickle Tokenizer objects
I load the squad dataset, then I want to process the data using the following function with the Hugging Face Transformers `LongformerTokenizer`.
```
def convert_to_features(example):
    # Tokenize contexts and questions (as pairs of inputs)
    input_pairs = [example['question'], example['context']]
    encodings = tokenizer.encode...
https://github.com/huggingface/datasets/issues/665
[ "Hi !\r\nIt works on my side with both the LongFormerTokenizer and the LongFormerTokenizerFast.\r\n\r\nWhich version of transformers/datasets are you using ?", "transformers and datasets are both the latest", "Then I guess you need to give us more informations on your setup (OS, python, GPU, etc) or a Google Co...
null
665
false
load_dataset from local squad.py, raise error: TypeError: 'NoneType' object is not callable
version: 1.0.2
```
train_dataset = datasets.load_dataset('squad')
```
The above code works. However, when I download squad.py from your server and save it locally as `my_squad.py`, running the following raises errors.
```
train_dataset = datasets.load_dataset('./my_squad.py')
...
https://github.com/huggingface/datasets/issues/664
[ "Hi !\r\nThanks for reporting.\r\nIt looks like no object inherits from `datasets.GeneratorBasedBuilder` (or more generally from `datasets.DatasetBuilder`) in your script.\r\n\r\nCould you check that there exist at least one dataset builder class ?", "Hi @xixiaoyao did you manage to fix your issue ?", "No activ...
null
664
false
Created dataset card snli.md
First draft of a dataset card using the SNLI corpus as an example. This is mostly based on the [Google Doc draft](https://docs.google.com/document/d/1dKPGP-dA2W0QoTRGfqQ5eBp0CeSsTy7g2yM8RseHtos/edit), but I added a few sections and moved some things around. - I moved **Who Was Involved** to follow **Language**, ...
https://github.com/huggingface/datasets/pull/663
[ "Adding a direct link to the rendered markdown:\r\nhttps://github.com/mcmillanmajora/datasets/blob/add_dataset_documentation/datasets/snli/README.md\r\n", "It would be amazing if we ended up with this much information on all of our datasets :) \r\n\r\nI don't think there's too much repetition, everything that is ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/663", "html_url": "https://github.com/huggingface/datasets/pull/663", "diff_url": "https://github.com/huggingface/datasets/pull/663.diff", "patch_url": "https://github.com/huggingface/datasets/pull/663.patch", "merged_at": "2020-10-12T20:26:52"...
663
true
Created dataset card snli.md
First draft of a dataset card using the SNLI corpus as an example
https://github.com/huggingface/datasets/pull/662
[ "Resubmitting on a new fork" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/662", "html_url": "https://github.com/huggingface/datasets/pull/662", "diff_url": "https://github.com/huggingface/datasets/pull/662.diff", "patch_url": "https://github.com/huggingface/datasets/pull/662.patch", "merged_at": null }
662
true
Replace pa.OSFile by open
It should fix #643
https://github.com/huggingface/datasets/pull/661
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/661", "html_url": "https://github.com/huggingface/datasets/pull/661", "diff_url": "https://github.com/huggingface/datasets/pull/661.diff", "patch_url": "https://github.com/huggingface/datasets/pull/661.patch", "merged_at": null }
661
true
add openwebtext
This adds [The OpenWebText Corpus](https://skylion007.github.io/OpenWebTextCorpus/), which is a clean and large text corpus for NLP pretraining. It is an open source effort to reproduce OpenAI's WebText dataset used by GPT-2, and it is also needed to reproduce ELECTRA. It solves #132. ### Besides dataset buildin...
https://github.com/huggingface/datasets/pull/660
[ "BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality test), I got like trailing space or mixed space and tab warning and error, and fixed them manually.", "> BTW, is there a one-line command to make our building scripts pass flake8 test? (included code quality te...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/660", "html_url": "https://github.com/huggingface/datasets/pull/660", "diff_url": "https://github.com/huggingface/datasets/pull/660.diff", "patch_url": "https://github.com/huggingface/datasets/pull/660.patch", "merged_at": "2020-09-28T09:07:26"...
660
true
Keep new columns in transmit format
When a dataset is formatted with a list of columns that `__getitem__` should return, calling `map` to add new columns doesn't add the new columns to this list. It caused `KeyError` issues in #620. I changed the logic to add those new columns to the list that `__getitem__` returns.
https://github.com/huggingface/datasets/pull/659
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/659", "html_url": "https://github.com/huggingface/datasets/pull/659", "diff_url": "https://github.com/huggingface/datasets/pull/659.diff", "patch_url": "https://github.com/huggingface/datasets/pull/659.patch", "merged_at": "2020-09-22T10:07:20"...
659
true
Fix squad metric's Features
Resolves issue [657](https://github.com/huggingface/datasets/issues/657).
https://github.com/huggingface/datasets/pull/658
[ "Closing this one in favor of #670 \r\n\r\nThanks again for reporting the issue and proposing this fix !\r\nLet me know if you have other remarks" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/658", "html_url": "https://github.com/huggingface/datasets/pull/658", "diff_url": "https://github.com/huggingface/datasets/pull/658.diff", "patch_url": "https://github.com/huggingface/datasets/pull/658.patch", "merged_at": null }
658
true
Squad Metric Description & Feature Mismatch
The [description](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L39) doesn't mention `answer_start` in squad. However the `datasets.features` require [it](https://github.com/huggingface/datasets/blob/master/metrics/squad/squad.py#L68). It's also not used in the evaluation.
https://github.com/huggingface/datasets/issues/657
[ "Thanks for reporting !\r\nThere indeed a mismatch between the features and the kwargs description\r\n\r\nI believe `answer_start` was added to match the squad dataset format for consistency, even though it is not used in the metric computation. I think I'd rather keep it this way, so that you can just give `refere...
null
657
false
Use multiprocess from pathos for multiprocessing
[Multiprocess](https://github.com/uqfoundation/multiprocess) (from the [pathos](https://github.com/uqfoundation/pathos) project) allows using lambda functions in a multiprocessed `map`. Using it was suggested by @kandorm. We're already using dill, which is its only dependency.
https://github.com/huggingface/datasets/pull/656
[ "We can just install multiprocess actually, I'll change that", "Just an FYI: I remember that I wanted to try pathos a couple of years back and I ran into issues considering cross-platform; the code would just break on Windows. If I can verify this PR by running CPU tests on Windows, let me know!", "That's good ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/656", "html_url": "https://github.com/huggingface/datasets/pull/656", "diff_url": "https://github.com/huggingface/datasets/pull/656.diff", "patch_url": "https://github.com/huggingface/datasets/pull/656.patch", "merged_at": "2020-09-28T14:45:39"...
656
true
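What this buys in practice, as a small sketch: with dill-based `multiprocess` doing the pickling, a lambda can be passed to a multiprocessed `map`, which the standard library's pickle would reject (the toy data is made up):
```python
from datasets import Dataset

dataset = Dataset.from_dict({"a": list(range(100))})

# The lambda survives the trip to the worker processes thanks to dill.
doubled = dataset.map(lambda x: {"b": x["a"] * 2}, num_proc=2)
print(doubled[0])  # {'a': 0, 'b': 0}
```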
added Winogrande debiased subset
The [Winogrande](https://arxiv.org/abs/1907.10641) paper mentions a `debiased` subset that wasn't in the first release; this PR adds it.
https://github.com/huggingface/datasets/pull/655
[ "To fix the CI you just have to copy the dummy data to the 1.1.0 folder, and maybe create the dummy ones for the `debiased` configuration", "Fixed! Thanks @lhoestq " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/655", "html_url": "https://github.com/huggingface/datasets/pull/655", "diff_url": "https://github.com/huggingface/datasets/pull/655.diff", "patch_url": "https://github.com/huggingface/datasets/pull/655.patch", "merged_at": "2020-09-21T16:16:04"...
655
true
Allow empty inputs in metrics
There was an arrow error when trying to compute a metric with empty inputs. The error was occurring when reading the arrow file, before calling metric._compute.
https://github.com/huggingface/datasets/pull/654
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/654", "html_url": "https://github.com/huggingface/datasets/pull/654", "diff_url": "https://github.com/huggingface/datasets/pull/654.diff", "patch_url": "https://github.com/huggingface/datasets/pull/654.patch", "merged_at": "2020-09-21T16:13:38"...
654
true
handle data alteration when trying type
Fix #649. The bug came from the type inference, which didn't handle a weird case in PyArrow. Indeed this code runs without error but alters the data in arrow:
```python
import pyarrow as pa

type = pa.struct({"a": pa.struct({"b": pa.string()})})
array_with_altered_data = pa.array([{"a": {"b": "foo", "c": "bar"}}...
https://github.com/huggingface/datasets/pull/653
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/653", "html_url": "https://github.com/huggingface/datasets/pull/653", "diff_url": "https://github.com/huggingface/datasets/pull/653.diff", "patch_url": "https://github.com/huggingface/datasets/pull/653.patch", "merged_at": "2020-09-21T16:13:05"...
653
true
handle connection error in download_prepared_from_hf_gcs
Fix #647
https://github.com/huggingface/datasets/pull/652
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/652", "html_url": "https://github.com/huggingface/datasets/pull/652", "diff_url": "https://github.com/huggingface/datasets/pull/652.diff", "patch_url": "https://github.com/huggingface/datasets/pull/652.patch", "merged_at": "2020-09-21T08:28:42"...
652
true
Problem with JSON dataset format
I have a local json dataset with the following form:
```
{
  'id01234': {'key1': value1, 'key2': value2, 'key3': value3},
  'id01235': {'key1': value1, 'key2': value2, 'key3': value3},
  ...
  'id09999': {'key1': value1, 'key2': value2, 'key3': value3}
}
```
Note that instead of a list of records i...
https://github.com/huggingface/datasets/issues/651
[ "Currently the `json` dataset doesn't support this format unfortunately.\r\nHowever you could load it with\r\n```python\r\nfrom datasets import Dataset\r\nimport pandas as pd\r\n\r\ndf = pd.read_json(\"path_to_local.json\", orient=\"index\")\r\ndataset = Dataset.from_pandas(df)\r\n```", "or you can make a custom ...
null
651
false
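The workaround quoted in the comments, spelled out as a self-contained sketch (the file name is hypothetical):
```python
import pandas as pd
from datasets import Dataset

# Read the {id: record} mapping with the ids as the index,
# then build a Dataset from the resulting DataFrame.
df = pd.read_json("path_to_local.json", orient="index")
dataset = Dataset.from_pandas(df)
```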
dummy data testing can't test datasets using `dl_manager.extract` in `_split_generators`
Hi, I recently wanted to add a dataset whose source data is like this
```
openwebtext.tar.xz
|__ openwebtext
    |__ subset000.xz
    |   |__ ....txt
    |   |__ ....txt
    |   ...
    |__ subset001.xz
    ....
```
So I wrote `openwebtext.py` like this
```
d...
https://github.com/huggingface/datasets/issues/650
[ "Hi :) \r\nIn your dummy data zip file you can just have `subset000.xz` as directories instead of compressed files.\r\nLet me know if it helps", "Thanks for your comment @lhoestq ,\r\nJust for confirmation, changing dummy data like this won't make dummy test test the functionality to extract `subsetxxx.xz` but ac...
null
650
false
Inconsistent behavior in map
I'm observing inconsistent behavior when applying `.map()`. This happens specifically when I'm incrementally adding onto a feature that is a nested dictionary. Here's a simple example that reproduces the problem:
```python
import datasets

# Dataset with a single feature called 'field' consisting of two examples
d...
https://github.com/huggingface/datasets/issues/649
[ "Thanks for reporting !\r\n\r\nThis issue must have appeared when we refactored type inference in `nlp`\r\nBy default the library tries to keep the same feature types when applying `map` but apparently it has troubles with nested structures. I'll try to fix that next week" ]
null
649
false
offset overflow when multiprocessing batched map on large datasets.
It only happened when "multiprocessing" + "batched" + "large dataset" came together.
```
def bprocess(examples):
    examples['len'] = []
    for text in examples['text']:
        examples['len'].append(len(text))
    return examples

wiki.map(bprocess, batched=True, num_proc=8)
```
```
----------------------------...
https://github.com/huggingface/datasets/issues/648
[ "This should be fixed with #645 ", "Feel free to re-open if it still occurs" ]
null
648
false
Cannot download dataset_info.json
I am running my job on a cloud server that does not allow connections from the standard compute nodes to outside resources. Hence, when I use `dataset.load_dataset()` to load data, I get an error like this:
```
ConnectionError: Couldn't reach https://storage.googleapis.com/huggingface-nlp/cache/datasets/text...
https://github.com/huggingface/datasets/issues/647
[ "Thanks for reporting !\r\nWe should add support for servers without internet connection indeed\r\nI'll do that early next week", "Thanks, @lhoestq !\r\nPlease let me know when it is available. ", "Right now the recommended way is to create the dataset on a server with internet connection and then to save it an...
null
647
false
Fix docs typos
This PR fixes a few typos in the docs and the error in the code snippet in the set_format section of docs/source/torch_tensorflow.rst. `torch.utils.data.DataLoader` expects padded batches, so it throws an error due to not being able to stack the unpadded tensors. If we follow the Quick tour from the docs where they add th...
https://github.com/huggingface/datasets/pull/646
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/646", "html_url": "https://github.com/huggingface/datasets/pull/646", "diff_url": "https://github.com/huggingface/datasets/pull/646.diff", "patch_url": "https://github.com/huggingface/datasets/pull/646.patch", "merged_at": "2020-09-21T16:14:12"...
646
true
Don't use take on dataset table in pyarrow 1.0.x
Fix #615
https://github.com/huggingface/datasets/pull/645
[ "I tried lower batch sizes and it didn't accelerate filter (quite the opposite actually).\r\nThe slow-down also appears for pyarrow 0.17.1 for some reason, not sure it comes from these changes", "I just checked the benchmarks of other PRs and some of them had 300s (!!) for filter. This needs some investigation.."...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/645", "html_url": "https://github.com/huggingface/datasets/pull/645", "diff_url": "https://github.com/huggingface/datasets/pull/645.diff", "patch_url": "https://github.com/huggingface/datasets/pull/645.patch", "merged_at": "2020-09-19T16:46:31"...
645
true
Better windows support
There are a few differences in the behavior of Python and PyArrow on Windows. For example, there are restrictions when accessing/deleting files that are open. Fix #590
https://github.com/huggingface/datasets/pull/644
[ "This PR is ready :)\r\nIt brings official support for windows.\r\n\r\nSome tests `AWSDatasetTest` are failing.\r\nThis is because I had to fix a few datasets that were not compatible with windows.\r\nThese test will pass once they got merged on master :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/644", "html_url": "https://github.com/huggingface/datasets/pull/644", "diff_url": "https://github.com/huggingface/datasets/pull/644.diff", "patch_url": "https://github.com/huggingface/datasets/pull/644.patch", "merged_at": "2020-09-25T14:02:28"...
644
true
Caching processed dataset at wrong folder
Hi guys, I ran this on my Colab (PRO):
```python
from datasets import load_dataset

dataset = load_dataset('text', data_files='/content/corpus.txt', cache_dir='/content/drive/My Drive', split='train')

def encode(examples):
    return tokenizer(examples['text'], truncation=True, padding='max_length')

dataset = ...
https://github.com/huggingface/datasets/issues/643
[ "Thanks for reporting !\r\nIt uses a temporary file to write the data.\r\nHowever it looks like the temporary file is not placed in the right directory during the processing", "Well actually I just tested and the temporary file is placed in the same directory, so it should work as expected.\r\nWhich version of `d...
null
643
false
Rename wnut fields
As mentioned in #641, it would be cool to have it follow the naming of the other NER datasets.
https://github.com/huggingface/datasets/pull/642
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/642", "html_url": "https://github.com/huggingface/datasets/pull/642", "diff_url": "https://github.com/huggingface/datasets/pull/642.diff", "patch_url": "https://github.com/huggingface/datasets/pull/642.patch", "merged_at": "2020-09-18T17:18:30"...
642
true
Add Polyglot-NER Dataset
Adds the [Polyglot-NER dataset](https://sites.google.com/site/rmyeid/projects/polylgot-ner) with named entity tags for 40 languages. I include separate configs for each language as well as a `combined` config which lumps them all together.
https://github.com/huggingface/datasets/pull/641
[ "Hi @joeddav thanks for adding this! (I did a long webarchive.org session to actually find that dataset a while ago).\r\n\r\nOne question: should we manually correct the labeling scheme to (at least) IOB1?\r\n\r\nThat means \"LOC\" will be converted to \"I-LOC\". IOB1 is not explict. mentioned in the paper, but it ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/641", "html_url": "https://github.com/huggingface/datasets/pull/641", "diff_url": "https://github.com/huggingface/datasets/pull/641.diff", "patch_url": "https://github.com/huggingface/datasets/pull/641.patch", "merged_at": "2020-09-20T03:04:43"...
641
true
Make shuffle compatible with temp_seed
This code used to return a different dataset at each run:
```python
import datasets as ds

dataset = ...
with ds.temp_seed(42):
    shuffled = dataset.shuffle()
```
Now it returns the same one since the seed is set
https://github.com/huggingface/datasets/pull/640
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/640", "html_url": "https://github.com/huggingface/datasets/pull/640", "diff_url": "https://github.com/huggingface/datasets/pull/640.diff", "patch_url": "https://github.com/huggingface/datasets/pull/640.patch", "merged_at": "2020-09-18T11:47:50"...
640
true
Update glue QQP checksum
Fix #638
https://github.com/huggingface/datasets/pull/639
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/639", "html_url": "https://github.com/huggingface/datasets/pull/639", "diff_url": "https://github.com/huggingface/datasets/pull/639.diff", "patch_url": "https://github.com/huggingface/datasets/pull/639.patch", "merged_at": "2020-09-18T11:37:07"...
639
true
GLUE/QQP dataset: NonMatchingChecksumError
Hi @lhoestq, I know you are busy and there are also other important issues. But if this is easy to fix, I am shamelessly wondering if you can give me some help, so I can evaluate my models and restart my development cycle asap. 😚 datasets version: editable install of master at 9/17 `datasets.load_data...
https://github.com/huggingface/datasets/issues/638
[ "Hi ! Sure I'll take a look" ]
null
638
false
Add MATINF
https://github.com/huggingface/datasets/pull/637
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/637", "html_url": "https://github.com/huggingface/datasets/pull/637", "diff_url": "https://github.com/huggingface/datasets/pull/637.diff", "patch_url": "https://github.com/huggingface/datasets/pull/637.patch", "merged_at": "2020-09-17T13:23:17"...
637
true
Consistent ner features
As discussed in #613, this PR aims at making NER feature names consistent across datasets. I changed the feature names of LinCE and XTREME/PAN-X.
https://github.com/huggingface/datasets/pull/636
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/636", "html_url": "https://github.com/huggingface/datasets/pull/636", "diff_url": "https://github.com/huggingface/datasets/pull/636.diff", "patch_url": "https://github.com/huggingface/datasets/pull/636.patch", "merged_at": "2020-09-17T09:52:58"...
636
true
Loglevel
Continuation of #618
https://github.com/huggingface/datasets/pull/635
[ "I think it's ready now @stas00, did you want to add something else ?\r\nThis PR includes your changes but with the level set to warning", "LGTM, thank you, @lhoestq " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/635", "html_url": "https://github.com/huggingface/datasets/pull/635", "diff_url": "https://github.com/huggingface/datasets/pull/635.diff", "patch_url": "https://github.com/huggingface/datasets/pull/635.patch", "merged_at": "2020-09-17T09:52:18"...
635
true
Add CoNLL-2000 dataset
Adds the CoNLL-2000 dataset used for text chunking. See https://www.clips.uantwerpen.be/conll2000/chunking/ for details and the [motivation](https://github.com/huggingface/transformers/pull/7041#issuecomment-692710948) behind this PR
https://github.com/huggingface/datasets/pull/634
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/634", "html_url": "https://github.com/huggingface/datasets/pull/634", "diff_url": "https://github.com/huggingface/datasets/pull/634.diff", "patch_url": "https://github.com/huggingface/datasets/pull/634.patch", "merged_at": "2020-09-17T10:38:10"...
634
true
Load large text file for LM pre-training resulting in OOM
I tried to pretrain Longformer using transformers and datasets, but I got OOM issues when loading a large text file. My script is roughly like this:
```python
from datasets import load_dataset

@dataclass
class DataCollatorForDatasetsLanguageModeling(DataCollatorForLanguageModeling):
    """ Data collator u...
https://github.com/huggingface/datasets/issues/633
[ "Not sure what could cause that on the `datasets` side. Could this be a `Trainer` issue ? cc @julien-c @sgugger ?", "There was a memory leak issue fixed recently in master. You should install from source and see if it fixes your problem.", "@lhoestq @sgugger Thanks for your comments. I have install from source ...
null
633
false
Fix typos in the loading datasets docs
This PR fixes two typos in the loading datasets docs, one of them being a broken link to the `load_dataset` function.
https://github.com/huggingface/datasets/pull/632
[ "thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/632", "html_url": "https://github.com/huggingface/datasets/pull/632", "diff_url": "https://github.com/huggingface/datasets/pull/632.diff", "patch_url": "https://github.com/huggingface/datasets/pull/632.patch", "merged_at": "2020-09-16T06:52:44"...
632
true
Fix text delimiter
I changed the delimiter in the `text` dataset script. It should fix the `pyarrow.lib.ArrowInvalid: CSV parse error` from #622. I changed the delimiter to an unused ASCII character that is not present in text files: `\b`
https://github.com/huggingface/datasets/pull/631
[ "Which OS are you using ?@abhi1nandy2", "> Which OS are you using ?\r\n\r\nPRETTY_NAME=\"Debian GNU/Linux 9 (stretch)\"\r\nNAME=\"Debian GNU/Linux\"\r\nVERSION_ID=\"9\"\r\nVERSION=\"9 (stretch)\"\r\nVERSION_CODENAME=stretch\r\nID=debian\r\nHOME_URL=\"https://www.debian.org/\"\r\nSUPPORT_URL=\"https://www.debian.o...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/631", "html_url": "https://github.com/huggingface/datasets/pull/631", "diff_url": "https://github.com/huggingface/datasets/pull/631.diff", "patch_url": "https://github.com/huggingface/datasets/pull/631.patch", "merged_at": "2020-09-15T08:26:25"...
631
true
Text dataset not working with large files
```
Traceback (most recent call last):
  File "examples/language-modeling/run_language_modeling.py", line 333, in <module>
    main()
  File "examples/language-modeling/run_language_modeling.py", line 262, in main
    get_dataset(data_args, tokenizer=tokenizer, cache_dir=model_args.cache_dir) if training_args.do_t...
https://github.com/huggingface/datasets/issues/630
[ "Seems like it works when setting ```block_size=2100000000``` or something arbitrarily large though.", "Can you give us some stats on the data files you use as inputs?", "Basically ~600MB txt files(UTF-8) * 59. \r\ncontents like ```안녕하세요, 이것은 예제로 한번 말해보는 텍스트입니다. 그냥 이렇다고요.<|endoftext|>\\n```\r\n\r\nAlso, it gets...
null
630
false
straddling object straddles two block boundaries
I am trying to read json data (it's an array with lots of dictionaries) and I am getting a block boundaries issue as below. I tried calling read_json with ReadOptions but no luck.
```
table = json.read_json(fn)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "pyarrow/_json.pyx", li...
https://github.com/huggingface/datasets/issues/629
[ "sorry it's an apache arrow issue." ]
null
629
false
Update docs links in the contribution guideline
Fixed the `add a dataset` and `share a dataset` links in the contribution guideline to refer to the new docs website.
https://github.com/huggingface/datasets/pull/628
[ "Thanks!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/628", "html_url": "https://github.com/huggingface/datasets/pull/628", "diff_url": "https://github.com/huggingface/datasets/pull/628.diff", "patch_url": "https://github.com/huggingface/datasets/pull/628.patch", "merged_at": "2020-09-15T06:19:35"...
628
true
fix (#619) MLQA features names
Fixed the feature names as suggested in #619 in the `_generate_examples` and `_info` methods of the MLQA loading script, and also changed the names in the `dataset_infos.json` file.
https://github.com/huggingface/datasets/pull/627
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/627", "html_url": "https://github.com/huggingface/datasets/pull/627", "diff_url": "https://github.com/huggingface/datasets/pull/627.diff", "patch_url": "https://github.com/huggingface/datasets/pull/627.patch", "merged_at": "2020-09-16T06:54:11"...
627
true
Update GLUE URLs (now hosted on FB)
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112. Note: rebased on huggingface/dat...
https://github.com/huggingface/datasets/pull/626
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/626", "html_url": "https://github.com/huggingface/datasets/pull/626", "diff_url": "https://github.com/huggingface/datasets/pull/626.diff", "patch_url": "https://github.com/huggingface/datasets/pull/626.patch", "merged_at": "2020-09-16T06:53:18"...
626
true
dtype of tensors should be preserved
After switching to `datasets` my model just broke. After a weekend of debugging, the issue was that my model could not handle the double that the Dataset provided, as it expected a float (but didn't give a warning, which seems to be a [PyTorch issue](https://discuss.pytorch.org/t/is-it-required-that-input-and-hidden-for-gru-...
https://github.com/huggingface/datasets/issues/625
[ "Indeed we convert tensors to list to be able to write in arrow format. Because of this conversion we lose the dtype information. We should add the dtype detection when we do type inference. However it would require a bit of refactoring since currently the conversion happens before the type inference..\r\n\r\nAnd t...
null
625
false
Add learningq dataset
Hi, Thank you again for this amazing repo. Would it be possible for y'all to add the LearningQ dataset - https://github.com/AngusGLChen/LearningQ ?
https://github.com/huggingface/datasets/issues/624
[]
null
624
false
Custom feature types in `load_dataset` from CSV
I am trying to load a local file with the `load_dataset` function and I want to predefine the feature types with the `features` argument. However, the types are always the same independent of the value of `features`. I am working with the local files from the emotion dataset. To get the data you can use the followi...
https://github.com/huggingface/datasets/issues/623
[ "Currently `csv` doesn't support the `features` attribute (unlike `json`).\r\nWhat you can do for now is cast the features using the in-place transform `cast_`\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset('csv', data_files=file_dict, delimiter=';', column_names=['text', 'label...
null
623
false
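A runnable version of the `cast_` workaround from the first comment, using made-up emotion-style data (the column names and label set are assumptions):
```python
from datasets import Dataset, Features, Value, ClassLabel

dataset = Dataset.from_dict({"text": ["i feel great", "i feel sad"], "label": [1, 0]})

features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["sadness", "joy"]),
})
dataset.cast_(features)  # in-place transform available at the time
print(dataset.features["label"])
```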
load_dataset for text files not working
Trying the following snippet, I get different problems on Linux and Windows. ```python dataset = load_dataset("text", data_files="data.txt") # or dataset = load_dataset("text", data_files=["data.txt"]) ``` (ps [This example](https://huggingface.co/docs/datasets/loading_datasets.html#json-files) shows that ...
https://github.com/huggingface/datasets/issues/622
[ "Can you give us more information on your os and pip environments (pip list)?", "@thomwolf Sure. I'll try downgrading to 3.7 now even though Arrow say they support >=3.5.\r\n\r\nLinux (Ubuntu 18.04) - Python 3.8\r\n======================\r\nPackage - Version\r\n---------------------\r\ncertifi 2...
null
622
false
[docs] Index: The native emoji looks kinda ugly in large size
https://github.com/huggingface/datasets/pull/621
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/621", "html_url": "https://github.com/huggingface/datasets/pull/621", "diff_url": "https://github.com/huggingface/datasets/pull/621.diff", "patch_url": "https://github.com/huggingface/datasets/pull/621.patch", "merged_at": "2020-09-15T06:20:02"...
621
true
map/filter multiprocessing raises errors and corrupts datasets
After upgrading to 1.0 I started seeing errors in my data loading script after enabling multiprocessing.
```python
...
ner_ds_dict = ner_ds.train_test_split(test_size=test_pct, shuffle=True, seed=seed)
ner_ds_dict["validation"] = ner_ds_dict["test"]
rel_ds_dict = rel_ds.train_test_si...
https://github.com/huggingface/datasets/issues/620
[ "It seems that I ran into the same problem\r\n```\r\ndef tokenize(cols, example):\r\n for in_col, out_col in cols.items():\r\n example[out_col] = hf_tokenizer.convert_tokens_to_ids(hf_tokenizer.tokenize(example[in_col]))\r\n return example\r\ncola = datasets.load_dataset('glue', 'cola')\r\ntokenized_cola = col...
null
620
false
Mistakes in MLQA features names
I think the following features in MLQA shouldn't be named the way they are:
1. `questions` (should be `question`)
2. `ids` (should be `id`)
3. `start` (should be `answer_start`)
The reasons I'm suggesting these features be renamed are:
* To make them consistent with other QA datasets like SQuAD, XQuAD, TyDiQA et...
https://github.com/huggingface/datasets/issues/619
[ "Indeed you're right ! Thanks for reporting that\r\n\r\nCould you open a PR to fix the features names ?" ]
null
619
false
sync logging utils with transformers
Sync the docs/code with the recent changes in transformers' `logging` utils:
1. change the default level to `WARNING`
2. add the `DATASETS_VERBOSITY` env var
3. expand the docs
https://github.com/huggingface/datasets/pull/618
[ "Also, some downloads and dataset processing can be quite long for large datasets like wikipedia/pg19/etc. We probably don't want to user to think that the library is hanging. Happy to reorganize logging between DEBUG/INFO/WARNING to make it less verbose by default though.", "The problem is that `transformers` im...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/618", "html_url": "https://github.com/huggingface/datasets/pull/618", "diff_url": "https://github.com/huggingface/datasets/pull/618.diff", "patch_url": "https://github.com/huggingface/datasets/pull/618.patch", "merged_at": null }
618
true
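A minimal sketch of the resulting controls, assuming the synced API mirrors transformers' logging utils:
```python
import datasets

# Programmatic control:
datasets.logging.set_verbosity_warning()  # the new default level
datasets.logging.set_verbosity_info()     # more detail, e.g. during long downloads

# Or via the environment, before the process starts:
#   DATASETS_VERBOSITY=error python my_script.py
```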
Compare different Rouge implementations
I used the RougeL implementation provided in `datasets` [here](https://github.com/huggingface/datasets/blob/master/metrics/rouge/rouge.py) and it gives numbers that match those reported in the Pegasus paper, but very different from those reported in other papers, [this](https://arxiv.org/pdf/1909.03186.pdf) for example. Ca...
https://github.com/huggingface/datasets/issues/617
[ "Updates - the differences between the following three\r\n(1) https://github.com/bheinzerling/pyrouge (previously popular. The one I trust the most)\r\n(2) https://github.com/google-research/google-research/tree/master/rouge\r\n(3) https://github.com/pltrdy/files2rouge (used in fairseq)\r\ncan be explained by two t...
null
617
false
UserWarning: The given NumPy array is not writeable, and PyTorch does not support non-writeable tensors
I am trying out the library and want to load in pickled data with `from_dict`. In that dict, one column `text` should be tokenized and the other (an embedding vector) should be retained. All other columns should be removed. When I eventually try to set the format for the columns with `set_format` I am getting this stra...
https://github.com/huggingface/datasets/issues/616
[ "I have the same issue", "Same issue here when Trying to load a dataset from disk.", "I am also experiencing this issue, and don't know if it's affecting my training.", "Same here. I hope the dataset is not being modified in-place.", "I think the only way to avoid this warning would be to do a copy of the n...
null
616
false
Offset overflow when slicing a big dataset with an array of indices in Pyarrow >= 1.0.0
How to reproduce:
```python
from datasets import load_dataset

wiki = load_dataset("wikipedia", "20200501.en", split="train")
wiki[[0]]
```
```
---------------------------------------------------------------------------
ArrowInvalid                              Traceback (most recent call last)
<ipython-input-13-38...
https://github.com/huggingface/datasets/issues/615
[ "Related: https://issues.apache.org/jira/browse/ARROW-9773\r\n\r\nIt's definitely a size thing. I took a smaller dataset with 87000 rows and did:\r\n```\r\nfor i in range(10,1000,20):\r\n table = pa.concat_tables([dset._data]*i)\r\n table.take([0])\r\n```\r\nand it broke at around i=300.\r\n\r\nAlso when `_in...
null
615
false
[doc] Update deploy.sh
https://github.com/huggingface/datasets/pull/614
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/614", "html_url": "https://github.com/huggingface/datasets/pull/614", "diff_url": "https://github.com/huggingface/datasets/pull/614.diff", "patch_url": "https://github.com/huggingface/datasets/pull/614.patch", "merged_at": "2020-09-14T08:49:17"...
614
true
Add CoNLL-2003 shared task dataset
Please consider adding the CoNLL-2003 shared task dataset, as it's beneficial for token classification tasks. The motivation behind this PR is the [PR](https://github.com/huggingface/transformers/pull/7041) in the transformers project. This dataset would be useful not only for the usual run-of-the-mill NER tasks but also fo...
https://github.com/huggingface/datasets/pull/613
[ "I think we should somewhere mention, that is the dataset in IOB2 tagging scheme, whereas the original dataset uses IOB1 :)", "Indeed this is something we want to mention.\r\n\r\nIf would want to add more details about the IOB1->2 change, feel free to ignore my suggestions and edit the description + update the da...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/613", "html_url": "https://github.com/huggingface/datasets/pull/613", "diff_url": "https://github.com/huggingface/datasets/pull/613.diff", "patch_url": "https://github.com/huggingface/datasets/pull/613.patch", "merged_at": "2020-09-17T10:36:38"...
613
true
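Once the dataset script is merged, usage should follow the standard pattern; a sketch of the expected interface, with `ner_tags` coming back as class-label ids in the IOB2 scheme mentioned in the comments:

```python
from datasets import load_dataset

conll = load_dataset("conll2003")
example = conll["train"][0]

print(example["tokens"])    # list of word strings
print(example["ner_tags"])  # list of integer class-label ids (IOB2)

# Recover the string labels from the ClassLabel feature.
label_names = conll["train"].features["ner_tags"].feature.names
print([label_names[i] for i in example["ner_tags"]])
```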
add multi-proc to dataset dict
Add multi-proc to `DatasetDict`
https://github.com/huggingface/datasets/pull/612
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/612", "html_url": "https://github.com/huggingface/datasets/pull/612", "diff_url": "https://github.com/huggingface/datasets/pull/612.diff", "patch_url": "https://github.com/huggingface/datasets/pull/612.patch", "merged_at": "2020-09-11T10:20:11"...
612
true
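A usage sketch of what this PR enables: forwarding `num_proc` through a `DatasetDict.map` call so every split is processed with multiple workers (the mapped function here is a stand-in):

```python
from datasets import load_dataset

# load_dataset returns a DatasetDict with train/validation/test splits.
dsets = load_dataset("glue", "mrpc")

def lowercase(example):
    # Stand-in per-example function; any picklable callable works.
    return {"sentence1": example["sentence1"].lower()}

# With this PR, num_proc is forwarded to each split's map call.
dsets = dsets.map(lowercase, num_proc=4)
```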
ArrowCapacityError: List array cannot contain more than 2147483646 child elements, have 2147483648
Hi, I'm trying to load a dataset from a pandas DataFrame, but I get the error: ```bash --------------------------------------------------------------------------- ArrowCapacityError Traceback (most recent call last) <ipython-input-7-146b6b495963> in <module> ----> 1 dataset = Dataset.from_pandas(emb)...
https://github.com/huggingface/datasets/issues/611
[ "Can you give us stats/information on your pandas DataFrame?", "```\r\n<class 'pandas.core.frame.DataFrame'>\r\nInt64Index: 17136104 entries, 0 to 17136103\r\nData columns (total 6 columns):\r\n # Column Dtype \r\n--- ------ ----- \r\n 0 item_id int64 \r\n 1 item_titl object \r\n...
null
611
false
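A common workaround for issue #611 above is to convert the DataFrame in row chunks so no single Arrow array exceeds the ~2^31 child-element cap, then concatenate the pieces; `from_pandas_chunked` below is a hypothetical helper, and the chunk size is an assumption:

```python
import pandas as pd
from datasets import Dataset, concatenate_datasets

def from_pandas_chunked(df: pd.DataFrame, chunk_size: int = 1_000_000) -> Dataset:
    # Convert slice by slice so each Arrow array stays under the cap,
    # then stitch the resulting datasets back together.
    parts = [
        Dataset.from_pandas(df.iloc[i:i + chunk_size], preserve_index=False)
        for i in range(0, len(df), chunk_size)
    ]
    return concatenate_datasets(parts)
```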
Load text file for RoBERTa pre-training.
I migrated my question from https://github.com/huggingface/transformers/pull/4009#issuecomment-690039444. I tried to train a RoBERTa model from scratch using transformers, but I got OOM issues when loading a large text file. Following the suggestion from @thomwolf, I tried to use `datasets` to load my text file....
https://github.com/huggingface/datasets/issues/610
[ "Could you try\r\n```python\r\nload_dataset('text', data_files='test.txt',cache_dir=\"./\", split=\"train\")\r\n```\r\n?\r\n\r\n`load_dataset` returns a dictionary by default, like {\"train\": your_dataset}", "Hi @lhoestq\r\nThanks for your suggestion.\r\n\r\nI tried \r\n```\r\ndataset = load_dataset('text', data...
null
610
false
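The fix suggested in the comments of issue #610 above boils down to requesting a split explicitly, since `load_dataset` otherwise returns a dictionary of splits; a minimal sketch:

```python
from datasets import load_dataset

# Without `split`, this returns {"train": Dataset}; with it, a Dataset.
dataset = load_dataset("text", data_files="test.txt", split="train")
print(dataset[0]["text"])  # the `text` script yields one line per example
```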
Update GLUE URLs (now hosted on FB)
NYU is switching dataset hosting from Google to FB. This PR closes https://github.com/huggingface/datasets/issues/608 and is necessary for https://github.com/jiant-dev/jiant/issues/161. This PR updates the data URLs based on changes made in https://github.com/nyu-mll/jiant/pull/1112.
https://github.com/huggingface/datasets/pull/609
[ "Thanks for opening this PR :) \r\n\r\nWe changed the name of the lib from nlp to datasets yesterday.\r\nCould you rebase from master and re-generate the dataset_info.json file to fix the name changes ?", "Rebased changes here: https://github.com/huggingface/datasets/pull/626" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/609", "html_url": "https://github.com/huggingface/datasets/pull/609", "diff_url": "https://github.com/huggingface/datasets/pull/609.diff", "patch_url": "https://github.com/huggingface/datasets/pull/609.patch", "merged_at": null }
609
true
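After swapping the URLs in a dataset script, the checksums and sizes recorded in its dataset infos file need to be regenerated, as the comments above note; the usual CLI invocation looks like the following (flags may vary by library version):

```bash
# Re-run the dataset tests and write updated checksums/sizes for all configs.
datasets-cli test ./datasets/glue --save_infos --all_configs
```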
Don't use the old NYU GLUE dataset URLs
NYU is switching dataset hosting from Google to FB. Initial changes to `datasets` are in https://github.com/jeswan/nlp/commit/b7d4a071d432592ded971e30ef73330529de25ce. What tests do you suggest I run before opening a PR? See: https://github.com/jiant-dev/jiant/issues/161 and https://github.com/nyu-mll/jiant/pull/111...
https://github.com/huggingface/datasets/issues/608
[ "Feel free to open the PR ;)\r\nThanks for updating the dataset_info.json file !" ]
null
608
false
Add transmit_format wrapper and tests
Same as #605, but using a decorator on top of dataset transforms that are not in-place.
https://github.com/huggingface/datasets/pull/607
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/607", "html_url": "https://github.com/huggingface/datasets/pull/607", "diff_url": "https://github.com/huggingface/datasets/pull/607.diff", "patch_url": "https://github.com/huggingface/datasets/pull/607.patch", "merged_at": "2020-09-10T15:21:47"...
607
true
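A sketch of the behavior this PR adds: a format set on a dataset carries over to datasets returned by out-of-place transforms such as `select`; the names are from the public API, though the propagation details may differ from the merged code:

```python
from datasets import Dataset

d = Dataset.from_dict({"a": [1, 2, 3, 4], "b": [0.1, 0.2, 0.3, 0.4]})
d.set_format(type="numpy", columns=["a"])

# select() is not in-place; with transmit_format the child keeps the format.
child = d.select(range(2))
print(child.format)  # expected: type 'numpy', columns ['a']
```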
Quick fix :)
`nlp` => `datasets`
https://github.com/huggingface/datasets/pull/606
[ ":heart:" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/606", "html_url": "https://github.com/huggingface/datasets/pull/606", "diff_url": "https://github.com/huggingface/datasets/pull/606.diff", "patch_url": "https://github.com/huggingface/datasets/pull/606.patch", "merged_at": "2020-09-10T16:18:30"...
606
true
[Datasets] Transmit format to children
Transmit the format to child datasets obtained when processing a dataset. Added a test. When concatenating datasets with disparate formats, the concatenated dataset's format is reset to defaults (see the sketch after this record).
https://github.com/huggingface/datasets/pull/605
[ "Closing as #607 was merged" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/605", "html_url": "https://github.com/huggingface/datasets/pull/605", "diff_url": "https://github.com/huggingface/datasets/pull/605.diff", "patch_url": "https://github.com/huggingface/datasets/pull/605.patch", "merged_at": null }
605
true
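The concatenation rule described above can be illustrated as follows; a sketch assuming the reset-to-defaults behavior landed as described (via #607) and that both `numpy` and `torch` formatting are available:

```python
from datasets import Dataset, concatenate_datasets

d1 = Dataset.from_dict({"a": [1, 2]})
d2 = Dataset.from_dict({"a": [3, 4]})

d1.set_format(type="numpy")
d2.set_format(type="torch")

# Disparate formats: the concatenated dataset falls back to the default
# (plain python objects) rather than guessing which parent's format to keep.
combined = concatenate_datasets([d1, d2])
print(combined.format["type"])  # expected: None (default)
```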
Update bucket prefix
cc @julien-c
https://github.com/huggingface/datasets/pull/604
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/604", "html_url": "https://github.com/huggingface/datasets/pull/604", "diff_url": "https://github.com/huggingface/datasets/pull/604.diff", "patch_url": "https://github.com/huggingface/datasets/pull/604.patch", "merged_at": "2020-09-10T12:45:32"...
604
true
Set scripts version to master
By default the scripts version is master, so that if the library is installed with ``` pip install git+http://github.com/huggingface/nlp.git ``` or ``` git clone http://github.com/huggingface/nlp.git pip install -e ./nlp ``` it will use the latest scripts, and not the ones from the previous version.
https://github.com/huggingface/datasets/pull/603
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/603", "html_url": "https://github.com/huggingface/datasets/pull/603", "diff_url": "https://github.com/huggingface/datasets/pull/603.diff", "patch_url": "https://github.com/huggingface/datasets/pull/603.patch", "merged_at": "2020-09-10T11:02:04"...
603
true
apply offset to indices in multiprocessed map
Fix #597. I fixed the indices by applying an offset. I added the case to our tests to make sure it doesn't happen again. I also added the message proposed by @thomwolf in #597: ```python >>> d.select(range(10)).map(fn, with_indices=True, batched=True, num_proc=2, load_from_cache_file=False) Done writing 10 ...
https://github.com/huggingface/datasets/pull/602
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/602", "html_url": "https://github.com/huggingface/datasets/pull/602", "diff_url": "https://github.com/huggingface/datasets/pull/602.diff", "patch_url": "https://github.com/huggingface/datasets/pull/602.patch", "merged_at": "2020-09-10T11:03:37"...
602
true
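A sketch checking the fixed behavior: with the offset applied, the indices seen across worker shards cover the full range instead of each shard restarting at zero (the mapped function is illustrative):

```python
from datasets import Dataset

d = Dataset.from_dict({"x": list(range(10))})

def record_indices(batch, indices):
    # After the fix, `indices` are global positions, so the two worker
    # shards see 0..4 and 5..9 rather than both seeing 0..4.
    return {"idx": indices}

result = d.map(record_indices, with_indices=True, batched=True, num_proc=2)
print(sorted(result["idx"]))  # expected: [0, 1, 2, ..., 9]
```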