Dataset schema:
- title: string (length 1 to 290)
- body: string (length 0 to 228k)
- html_url: string (length 46 to 51)
- comments: list
- pull_request: dict
- number: int64 (1 to 5.59k)
- is_pull_request: bool (2 classes)
Adding Enriched WebNLG dataset
This pull request adds the `en` and `de` versions of the [Enriched WebNLG](https://github.com/ThiagoCF05/webnlg) dataset.
https://github.com/huggingface/datasets/pull/1206
[ "Nice :) \r\n\r\ncould you add the tags and also remove all the dummy data files that are not zipped ? The diff currently shows 800 files changes xD", "Aaaaand it's rebase time - the new one is at #1264 !", "closing this one since a new PR was created" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1206", "html_url": "https://github.com/huggingface/datasets/pull/1206", "diff_url": "https://github.com/huggingface/datasets/pull/1206.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1206.patch", "merged_at": null }
1,206
true
add lst20 with manual download
passed on local: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_configs_lst20 ``` Not sure how to test: ``` RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_lst20 ``` ``` LST20 Corpus is a dataset for Thai language processin...
https://github.com/huggingface/datasets/pull/1205
[ "The pytest suite doesn't allow manual downloads so we just make sure that the `datasets-cli test` command to run without errors instead", "@lhoestq Changes made. Thank you for the review. I've made some same mistakes for https://github.com/huggingface/datasets/pull/1253 too. Will fix them before review." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1205", "html_url": "https://github.com/huggingface/datasets/pull/1205", "diff_url": "https://github.com/huggingface/datasets/pull/1205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1205.patch", "merged_at": "2020-12-09T16:33...
1,205
true
adding meta_woz dataset
https://github.com/huggingface/datasets/pull/1204
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1204", "html_url": "https://github.com/huggingface/datasets/pull/1204", "diff_url": "https://github.com/huggingface/datasets/pull/1204.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1204.patch", "merged_at": "2020-12-16T15:05...
1,204
true
Add Neural Code Search Dataset
https://github.com/huggingface/datasets/pull/1203
[ "> Really good thanks !\r\n> \r\n> I left a few comments\r\n\r\nThanks, resolved them :) ", "looks like this PR includes changes about many other files than the ones for Code Search\r\n\r\ncan you create another branch and another PR please ?", "> looks like this PR includes changes about many other files than ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1203", "html_url": "https://github.com/huggingface/datasets/pull/1203", "diff_url": "https://github.com/huggingface/datasets/pull/1203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1203.patch", "merged_at": null }
1,203
true
Medical question pairs
This dataset consists of 3048 similar and dissimilar medical question pairs hand-generated and labeled by Curai's doctors. Dataset : https://github.com/curai/medical-question-pair-dataset Paper : https://drive.google.com/file/d/1CHPGBXkvZuZc8hpr46HeHU6U6jnVze-s/view **No splits added**
https://github.com/huggingface/datasets/pull/1202
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1202", "html_url": "https://github.com/huggingface/datasets/pull/1202", "diff_url": "https://github.com/huggingface/datasets/pull/1202.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1202.patch", "merged_at": null }
1,202
true
adding medical-questions-pairs
https://github.com/huggingface/datasets/pull/1201
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1201", "html_url": "https://github.com/huggingface/datasets/pull/1201", "diff_url": "https://github.com/huggingface/datasets/pull/1201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1201.patch", "merged_at": null }
1,201
true
Update ADD_NEW_DATASET.md
Windows needs special treatment again: unfortunately, adding `torch` to the requirements does not work well (it crashes the installation). Users should first install torch manually and then continue with the other commands. This issue arises all the time when adding torch as a dependency, but because so many novice use...
https://github.com/huggingface/datasets/pull/1200
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1200", "html_url": "https://github.com/huggingface/datasets/pull/1200", "diff_url": "https://github.com/huggingface/datasets/pull/1200.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1200.patch", "merged_at": "2020-12-07T08:32...
1,200
true
Turkish NER dataset, script works fine, couldn't generate dummy data
I've written the script (Turkish_NER.py) that includes the dataset. The dataset is a zip inside another zip, and it's extracted as a .DUMP file. However, after preprocessing I only get an .arrow file. After running the script with no error messages, I get the dataset's .arrow file, LICENSE, and dataset_info.json.
https://github.com/huggingface/datasets/pull/1199
[ "the .DUMP file looks like a txt with one example per line so adding `--match_text_files *.DUMP --n_lines 50` to the dummy generation command might work .", "We can close this PR since a new PR was open at #1268 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1199", "html_url": "https://github.com/huggingface/datasets/pull/1199", "diff_url": "https://github.com/huggingface/datasets/pull/1199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1199.patch", "merged_at": null }
1,199
true
Add ALT
ALT dataset -- https://www2.nict.go.jp/astrec-att/member/mutiyama/ALT/
https://github.com/huggingface/datasets/pull/1198
[ "the `RemoteDatasetTest ` erros in the CI are fixed on master so it's fine", "used `Translation ` feature type and fixed few typos as you suggested.", "Sorry, I made a mistake. please see new PR here. https://github.com/huggingface/datasets/pull/1436" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1198", "html_url": "https://github.com/huggingface/datasets/pull/1198", "diff_url": "https://github.com/huggingface/datasets/pull/1198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1198.patch", "merged_at": null }
1,198
true
add taskmaster-2
Adding taskmaster-2 dataset. https://github.com/google-research-datasets/Taskmaster/tree/master/TM-2-2020
https://github.com/huggingface/datasets/pull/1197
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1197", "html_url": "https://github.com/huggingface/datasets/pull/1197", "diff_url": "https://github.com/huggingface/datasets/pull/1197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1197.patch", "merged_at": "2020-12-07T15:22...
1,197
true
Add IWSLT'15 English-Vietnamese machine translation Data
Preprocessed dataset from the IWSLT'15 English-Vietnamese machine translation task, from https://nlp.stanford.edu/projects/nmt/data/iwslt15.en-vi/
https://github.com/huggingface/datasets/pull/1196
[ "Thanks ! feel free to ping me once you've added the tags in the dataset card :) ", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1196", "html_url": "https://github.com/huggingface/datasets/pull/1196", "diff_url": "https://github.com/huggingface/datasets/pull/1196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1196.patch", "merged_at": "2020-12-11T18:26...
1,196
true
addition of py_ast
The dataset consists of parsed ASTs that were used to train and evaluate the DeepSyn tool. The Python programs are collected from GitHub repositories by removing duplicate files, removing project forks (copies of other existing repositories), and keeping only programs that parse and have at most 30'000 nodes in th...
https://github.com/huggingface/datasets/pull/1195
[ "Hi @reshinthadithyan !\r\n\r\nAs mentioned on the Slack, it would be better in this case to parse the file lines into the following feature structure:\r\n```python\r\n\"ast\": datasets.Sequence(\r\n {\r\n \"type\": datasets.Value(\"string\"),\r\n \"value\": datasets.Value(\"string\"),\r\n \...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1195", "html_url": "https://github.com/huggingface/datasets/pull/1195", "diff_url": "https://github.com/huggingface/datasets/pull/1195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1195.patch", "merged_at": null }
1,195
true
Add msr_text_compression
Add [MSR Abstractive Text Compression Dataset](https://msropendata.com/datasets/f8ce2ec9-7fbd-48f7-a8bb-2d2279373563)
https://github.com/huggingface/datasets/pull/1194
[ "the `RemoteDatasetTest ` error in the CI is fixed on master so it's fine" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1194", "html_url": "https://github.com/huggingface/datasets/pull/1194", "diff_url": "https://github.com/huggingface/datasets/pull/1194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1194.patch", "merged_at": "2020-12-09T10:53...
1,194
true
add taskmaster-1
Adding Taskmaster-1 dataset https://github.com/google-research-datasets/Taskmaster/tree/master/TM-1-2019
https://github.com/huggingface/datasets/pull/1193
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1193", "html_url": "https://github.com/huggingface/datasets/pull/1193", "diff_url": "https://github.com/huggingface/datasets/pull/1193.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1193.patch", "merged_at": "2020-12-07T15:08...
1,193
true
Add NewsPH_NLI dataset
This PR adds the NewsPH-NLI Dataset, the first benchmark dataset for sentence entailment in the low-resource Filipino language. Constructed by exploiting the structure of news articles. Contains 600,000 premise-hypothesis pairs, in a 70-15-15 split for training, validation, and testing. Link to the paper: https://...
https://github.com/huggingface/datasets/pull/1192
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1192", "html_url": "https://github.com/huggingface/datasets/pull/1192", "diff_url": "https://github.com/huggingface/datasets/pull/1192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1192.patch", "merged_at": "2020-12-07T15:39...
1,192
true
Added Translator Human Parity Data For a Chinese-English news transla…
…tion system from Open dataset list for Dataset sprint, Microsoft Datasets tab.
https://github.com/huggingface/datasets/pull/1191
[ "Can you run `make style` to format the code and fix the CI please ?", "> Can you run `make style` to format the code and fix the CI please ?\r\n\r\nI ran `make style` before this PR and just a few minutes ago. No changes to the code. Not sure why the CI is failing.", "Also, I attempted to see if I can get the ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1191", "html_url": "https://github.com/huggingface/datasets/pull/1191", "diff_url": "https://github.com/huggingface/datasets/pull/1191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1191.patch", "merged_at": "2020-12-09T13:22...
1,191
true
Add Fake News Detection in Filipino dataset
This PR adds the Fake News Filipino Dataset, a low-resource fake news detection corpus in Filipino. Contains 3,206 expertly-labeled news samples, half of which are real and half of which are fake. Link to the paper: http://www.lrec-conf.org/proceedings/lrec2020/index.html Link to the dataset/repo: https://github...
https://github.com/huggingface/datasets/pull/1190
[ "Hi! I'm the author of this paper (surprised to see our datasets have been added already).\r\n\r\nThat paper link only leads to the conference index, here's a link to the actual paper: https://www.aclweb.org/anthology/2020.lrec-1.316/\r\n\r\nWould it be fine if I also edited your gsheet entry to reflect this change...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1190", "html_url": "https://github.com/huggingface/datasets/pull/1190", "diff_url": "https://github.com/huggingface/datasets/pull/1190.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1190.patch", "merged_at": "2020-12-07T15:39...
1,190
true
Add Dengue dataset in Filipino
This PR adds the Dengue Dataset, a benchmark dataset for low-resource multiclass classification, with 4,015 training, 500 testing, and 500 validation examples, each labeled across five classes; each sample can belong to multiple classes. Collected as tweets. Link to the paper: https://ieeexplore.ieee.org/docu...
https://github.com/huggingface/datasets/pull/1189
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1189", "html_url": "https://github.com/huggingface/datasets/pull/1189", "diff_url": "https://github.com/huggingface/datasets/pull/1189.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1189.patch", "merged_at": "2020-12-07T15:38...
1,189
true
adding hind_encorp dataset
adding Hindi_Encorp05 dataset
https://github.com/huggingface/datasets/pull/1188
[ "help needed in dummy data", "extension of the file is .plaintext so dummy data generation is failing\r\n", "you can add the `--match_text_file \"*.plaintext\"` flag when generating the dummy data\r\n\r\nalso it looks like the PR is empty, is this expected ?", "yes it is expected because I made all my change...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1188", "html_url": "https://github.com/huggingface/datasets/pull/1188", "diff_url": "https://github.com/huggingface/datasets/pull/1188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1188.patch", "merged_at": null }
1,188
true
Added AQUA-RAT (Algebra Question Answering with Rationales) Dataset
https://github.com/huggingface/datasets/pull/1187
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1187", "html_url": "https://github.com/huggingface/datasets/pull/1187", "diff_url": "https://github.com/huggingface/datasets/pull/1187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1187.patch", "merged_at": "2020-12-07T15:37...
1,187
true
all tests passed
need help creating dummy data
https://github.com/huggingface/datasets/pull/1186
[ "looks like this PR includes changes to 5000 files\r\ncould you create a new branch and a new PR ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1186", "html_url": "https://github.com/huggingface/datasets/pull/1186", "diff_url": "https://github.com/huggingface/datasets/pull/1186.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1186.patch", "merged_at": null }
1,186
true
Add Hate Speech Dataset in Filipino
This PR adds the Hate Speech Dataset, a text classification dataset in Filipino, consisting of 10k tweets (training set) that are labeled as hate speech or non-hate speech. Released with 4,232 validation and 4,232 testing samples. Collected during the 2016 Philippine Presidential Elections. Link to the paper: https://p...
https://github.com/huggingface/datasets/pull/1185
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1185", "html_url": "https://github.com/huggingface/datasets/pull/1185", "diff_url": "https://github.com/huggingface/datasets/pull/1185.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1185.patch", "merged_at": "2020-12-07T15:35...
1,185
true
Add Adversarial SQuAD dataset
# Adversarial SQuAD Adding the Adversarial [SQuAD](https://github.com/robinjia/adversarial-squad) dataset as part of the sprint 🎉 This dataset adds adversarial sentences to a subset of the SQuAD dataset's dev examples. How to get the original squad example id is explained in readme->Data Instances. The whole data...
https://github.com/huggingface/datasets/pull/1184
[ "the CI error was just a connection error due to all the activity on the repo this week ^^'\r\nI re-ran it so it should be good now", "I hadn't realized the problem with the dummies since it had passed without errors.\r\nSuggestion: maybe we can show the user a warning based on the generated dummy size.", "Than...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1184", "html_url": "https://github.com/huggingface/datasets/pull/1184", "diff_url": "https://github.com/huggingface/datasets/pull/1184.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1184.patch", "merged_at": "2020-12-16T16:12...
1,184
true
add mkb dataset
This PR will add Mann Ki Baat dataset (parallel data for Indian languages).
https://github.com/huggingface/datasets/pull/1183
[ "Could you update the languages tags before we merge @VasudevGupta7 ?", "done.", "thanks !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1183", "html_url": "https://github.com/huggingface/datasets/pull/1183", "diff_url": "https://github.com/huggingface/datasets/pull/1183.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1183.patch", "merged_at": "2020-12-09T09:38...
1,183
true
ADD COVID-QA dataset
This PR adds the COVID-QA dataset, a question answering dataset consisting of 2,019 question/answer pairs annotated by volunteer biomedical experts on scientific articles related to COVID-19. Link to the paper: https://openreview.net/forum?id=JENSKEEzsoU Link to the dataset/repo: https://github.com/deepset-ai/COVID-...
https://github.com/huggingface/datasets/pull/1182
[ "merging since the CI is fixed on master", "Wow, thanks for including this dataset from my side as well!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1182", "html_url": "https://github.com/huggingface/datasets/pull/1182", "diff_url": "https://github.com/huggingface/datasets/pull/1182.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1182.patch", "merged_at": "2020-12-07T14:23...
1,182
true
added emotions detection in arabic dataset
Dataset for emotion detection in Arabic text. More info: https://github.com/AmrMehasseb/Emotional-Tone
https://github.com/huggingface/datasets/pull/1181
[ "Hi @abdulelahsm did you manage to fix your issue ?\r\nFeel free to ping me if you have questions or if you're ready for a review", "@lhoestq fixed it! ready to merge. I hope haha", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1181", "html_url": "https://github.com/huggingface/datasets/pull/1181", "diff_url": "https://github.com/huggingface/datasets/pull/1181.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1181.patch", "merged_at": "2020-12-21T09:53...
1,181
true
Add KorQuAD v2 Dataset
# The Korean Question Answering Dataset v2 Adding the [KorQuAD](https://korquad.github.io/) v2 dataset as part of the sprint 🎉 This dataset is very similar to SQuAD and is an extension of [squad_kor_v1](https://github.com/huggingface/datasets/pull/1178) which is why I added it as `squad_kor_v2`. - Crowd generat...
https://github.com/huggingface/datasets/pull/1180
[ "looks like this PR also includes the changes for the V1\r\nCould you only include the files of the V2 ?", "hmm I have made the dummy data lighter retested on local and it passed not sure why it fails here?", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1180", "html_url": "https://github.com/huggingface/datasets/pull/1180", "diff_url": "https://github.com/huggingface/datasets/pull/1180.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1180.patch", "merged_at": "2020-12-16T16:10...
1,180
true
Small update to the doc: add flatten_indices in doc
Small update to the doc: add flatten_indices in doc
https://github.com/huggingface/datasets/pull/1179
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1179", "html_url": "https://github.com/huggingface/datasets/pull/1179", "diff_url": "https://github.com/huggingface/datasets/pull/1179.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1179.patch", "merged_at": "2020-12-07T13:42...
1,179
true
Add KorQuAD v1 Dataset
# The Korean Question Answering Dataset Adding the [KorQuAD](https://korquad.github.io/KorQuad%201.0/) v1 dataset as part of the sprint 🎉 This dataset is very similar to SQuAD which is why I added it as `squad_kor_v1`. There is also a v2 which I added [here](https://github.com/huggingface/datasets/pull/1180). - ...
https://github.com/huggingface/datasets/pull/1178
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1178", "html_url": "https://github.com/huggingface/datasets/pull/1178", "diff_url": "https://github.com/huggingface/datasets/pull/1178.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1178.patch", "merged_at": "2020-12-07T13:41...
1,178
true
Add Korean NER dataset
This PR adds the [Korean named entity recognition dataset](https://github.com/kmounlp/NER). This dataset has been used in many downstream tasks, such as training [KoBERT](https://github.com/SKTBrain/KoBERT) for NER, as seen in this [KoBERT-CRF implementation](https://github.com/eagle705/pytorch-bert-crf-ner).
https://github.com/huggingface/datasets/pull/1177
[ "Closed via #1219 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1177", "html_url": "https://github.com/huggingface/datasets/pull/1177", "diff_url": "https://github.com/huggingface/datasets/pull/1177.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1177.patch", "merged_at": null }
1,177
true
Add OpenPI Dataset
Add the OpenPI Dataset by AI2 (AllenAI)
https://github.com/huggingface/datasets/pull/1176
[ "Hi @Bharat123rox ! It looks like some of the dummy data is broken or missing. Did you auto-generate it? Does the local test pass for you?", "@yjernite requesting you to have a look as to why the tests are failing only on Windows, there seems to be a backslash error somewhere, could it be the result of `os.path.j...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1176", "html_url": "https://github.com/huggingface/datasets/pull/1176", "diff_url": "https://github.com/huggingface/datasets/pull/1176.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1176.patch", "merged_at": null }
1,176
true
added ReDial dataset
Updating README. Dataset link: https://redialdata.github.io/website/datasheet
https://github.com/huggingface/datasets/pull/1175
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1175", "html_url": "https://github.com/huggingface/datasets/pull/1175", "diff_url": "https://github.com/huggingface/datasets/pull/1175.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1175.patch", "merged_at": "2020-12-07T13:21...
1,175
true
Add Universal Morphologies
Adding unimorph universal morphology annotations for 110 languages, pfew!!! one lemma per row with all possible forms and annotations https://unimorph.github.io/
https://github.com/huggingface/datasets/pull/1174
[ "Sorry for the delay, changed the default language to \"ady\" (first alphabetical) and only downloading the relevant files for each config (dataset_infos is till 918KB though)", "Thanks for merging it ! Looks all good\r\n\r\nLooks like I didn't reply to your last message, sorry about that.\r\nFeel free to ping me...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1174", "html_url": "https://github.com/huggingface/datasets/pull/1174", "diff_url": "https://github.com/huggingface/datasets/pull/1174.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1174.patch", "merged_at": "2021-01-26T16:41...
1,174
true
add wikipedia biography dataset
My first PR containing the Wikipedia biographies dataset. I have followed all the steps in the [guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). It passes all the tests.
https://github.com/huggingface/datasets/pull/1173
[ "Does anyone know why am I getting this \"Some checks were not successful\" message? For the _code_quality_ one, I have successfully run the flake8 command.", "Ok, I need to update the README.md, but don't know if that will fix the errors", "Hi @ACR0S , thanks for adding the dataset!\r\n\r\nIt looks like `black...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1173", "html_url": "https://github.com/huggingface/datasets/pull/1173", "diff_url": "https://github.com/huggingface/datasets/pull/1173.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1173.patch", "merged_at": "2020-12-07T11:13...
1,173
true
Add proto_qa dataset
Added dataset tags as required.
https://github.com/huggingface/datasets/pull/1172
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1172", "html_url": "https://github.com/huggingface/datasets/pull/1172", "diff_url": "https://github.com/huggingface/datasets/pull/1172.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1172.patch", "merged_at": "2020-12-07T11:12...
1,172
true
Add imdb Urdu Reviews dataset.
Added the imdb Urdu reviews dataset. More info about the dataset over <a href="https://github.com/mirfan899/Urdu">here</a>.
https://github.com/huggingface/datasets/pull/1171
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1171", "html_url": "https://github.com/huggingface/datasets/pull/1171", "diff_url": "https://github.com/huggingface/datasets/pull/1171.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1171.patch", "merged_at": "2020-12-07T11:11...
1,171
true
Fix path handling for Windows
https://github.com/huggingface/datasets/pull/1170
[ "@lhoestq here's the fix!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1170", "html_url": "https://github.com/huggingface/datasets/pull/1170", "diff_url": "https://github.com/huggingface/datasets/pull/1170.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1170.patch", "merged_at": "2020-12-07T10:47...
1,170
true
Add Opus fiskmo dataset for Finnish and Swedish for MT task
Adding fiskmo, a massive parallel corpus for Finnish and Swedish. For more info: http://opus.nlpl.eu/fiskmo.php
https://github.com/huggingface/datasets/pull/1169
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1169", "html_url": "https://github.com/huggingface/datasets/pull/1169", "diff_url": "https://github.com/huggingface/datasets/pull/1169.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1169.patch", "merged_at": "2020-12-07T11:04...
1,169
true
Add Naver sentiment movie corpus
This PR adds the [Naver sentiment movie corpus](https://github.com/e9t/nsmc), a dataset containing Korean movie reviews from Naver, the most commonly used search engine in Korea. This dataset is often used to benchmark models on Korean NLP tasks, as seen in [this paper](https://www.aclweb.org/anthology/2020.lrec-1.199....
https://github.com/huggingface/datasets/pull/1168
[ "Closed via #1252 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1168", "html_url": "https://github.com/huggingface/datasets/pull/1168", "diff_url": "https://github.com/huggingface/datasets/pull/1168.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1168.patch", "merged_at": null }
1,168
true
❓ On-the-fly tokenization with datasets, tokenizers, and torch Datasets and Dataloaders
Hi there, I have a question regarding "on-the-fly" tokenization. This question was elicited by reading the "How to train a new language model from scratch using Transformers and Tokenizers" [here](https://huggingface.co/blog/how-to-train). Towards the end there is this sentence: "If your dataset is very large, you c...
https://github.com/huggingface/datasets/issues/1167
[ "We're working on adding on-the-fly transforms in datasets.\r\nCurrently the only on-the-fly functions that can be applied are in `set_format` in which we transform the data in either numpy/torch/tf tensors or pandas.\r\nFor example\r\n```python\r\ndataset.set_format(\"torch\")\r\n```\r\napplies `torch.Tensor` to t...
null
1,167
false
Opus montenegrinsubs
Opus montenegrinsubs, language pair en-me. More info: http://opus.nlpl.eu/MontenegrinSubs.php
https://github.com/huggingface/datasets/pull/1166
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1166", "html_url": "https://github.com/huggingface/datasets/pull/1166", "diff_url": "https://github.com/huggingface/datasets/pull/1166.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1166.patch", "merged_at": "2020-12-07T11:02...
1,166
true
Add ar rest reviews
Added restaurant reviews in Arabic for sentiment analysis tasks.
https://github.com/huggingface/datasets/pull/1165
[ "Copy-pasted from the Slack discussion:\r\nthe annotation and language creators should be found , not unknown\r\nthe example should go under the \"Data Instances\" paragraph, not \"Data fields\"\r\ncan you remove the abstract from the citation and add it to the dataset description? More people will see that", "@y...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1165", "html_url": "https://github.com/huggingface/datasets/pull/1165", "diff_url": "https://github.com/huggingface/datasets/pull/1165.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1165.patch", "merged_at": "2020-12-21T17:06...
1,165
true
Add DaNe dataset
https://github.com/huggingface/datasets/pull/1164
[ "Thanks, this looks great!\r\n\r\nFor the code quality test, it looks like `flake8` is throwing the error, so you can tun `flake8 datasets` locally and fix the errors it points out until it passes" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1164", "html_url": "https://github.com/huggingface/datasets/pull/1164", "diff_url": "https://github.com/huggingface/datasets/pull/1164.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1164.patch", "merged_at": null }
1,164
true
Added memat : Xhosa-English parallel corpora
Added memat: Xhosa-English parallel corpora. For more info: http://opus.nlpl.eu/memat.php
https://github.com/huggingface/datasets/pull/1163
[ "The `RemoteDatasetTest` CI fail is fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1163", "html_url": "https://github.com/huggingface/datasets/pull/1163", "diff_url": "https://github.com/huggingface/datasets/pull/1163.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1163.patch", "merged_at": "2020-12-07T10:40...
1,163
true
Add Mocha dataset
More information: https://allennlp.org/mocha
https://github.com/huggingface/datasets/pull/1162
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1162", "html_url": "https://github.com/huggingface/datasets/pull/1162", "diff_url": "https://github.com/huggingface/datasets/pull/1162.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1162.patch", "merged_at": "2020-12-07T10:09...
1,162
true
Linguisticprobing
Adding linguistic probing datasets from "What you can cram into a single $&!#* vector: Probing sentence embeddings for linguistic properties" (https://www.aclweb.org/anthology/P18-1198/).
https://github.com/huggingface/datasets/pull/1161
[ "Thanks for your contribution, @sileod.\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nAs you already created this dataset under your organization namespace (https://huggingface.co/datasets/metaeval/linguisticprobing),...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1161", "html_url": "https://github.com/huggingface/datasets/pull/1161", "diff_url": "https://github.com/huggingface/datasets/pull/1161.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1161.patch", "merged_at": null }
1,161
true
adding TabFact dataset
Adding TabFact: A Large-scale Dataset for Table-based Fact Verification. https://github.com/wenhuchen/Table-Fact-Checking - The tables are stored as individual csv files, so we need to download 16,573 🤯 csv files. As a result the `datasets_infos.json` file is huge (6.62 MB). - Original dataset has nested structur...
https://github.com/huggingface/datasets/pull/1160
[ "FYI you guys are on GitHub's homepage 😍\r\n\r\n<img width=\"1589\" alt=\"Screenshot 2020-12-09 at 12 34 28\" src=\"https://user-images.githubusercontent.com/326577/101624883-a0ecc700-39e8-11eb-8a97-11af0d036536.png\">\r\n", "Yeayy 😍 🔥" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1160", "html_url": "https://github.com/huggingface/datasets/pull/1160", "diff_url": "https://github.com/huggingface/datasets/pull/1160.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1160.patch", "merged_at": "2020-12-09T09:12...
1,160
true
Add Roman Urdu dataset
This PR adds the [Roman Urdu dataset](https://archive.ics.uci.edu/ml/datasets/Roman+Urdu+Data+Set#).
https://github.com/huggingface/datasets/pull/1159
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1159", "html_url": "https://github.com/huggingface/datasets/pull/1159", "diff_url": "https://github.com/huggingface/datasets/pull/1159.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1159.patch", "merged_at": "2020-12-07T09:59...
1,159
true
Add BBC Hindi NLI Dataset
# Dataset Card for BBC Hindi NLI Dataset ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances...
https://github.com/huggingface/datasets/pull/1158
[ "Hi @avinsit123 !\r\nDid you manage to rename the dataset and apply the suggestion I mentioned for the data fields ?\r\nFeel free to ping me when you're ready for a review :) ", "Hi @avinsit123 ! Have you had a chance to take a look at my suggestions ?\r\nLet me know if you have questions or if I can help", "@l...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1158", "html_url": "https://github.com/huggingface/datasets/pull/1158", "diff_url": "https://github.com/huggingface/datasets/pull/1158.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1158.patch", "merged_at": "2021-02-05T09:48...
1,158
true
Add dataset XhosaNavy English -Xhosa
Add dataset XhosaNavy (English-Xhosa). More info: http://opus.nlpl.eu/XhosaNavy.php
https://github.com/huggingface/datasets/pull/1157
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1157", "html_url": "https://github.com/huggingface/datasets/pull/1157", "diff_url": "https://github.com/huggingface/datasets/pull/1157.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1157.patch", "merged_at": "2020-12-07T09:11...
1,157
true
add telugu-news corpus
Adding Telugu News Corpus to datasets.
https://github.com/huggingface/datasets/pull/1156
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1156", "html_url": "https://github.com/huggingface/datasets/pull/1156", "diff_url": "https://github.com/huggingface/datasets/pull/1156.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1156.patch", "merged_at": "2020-12-07T09:08...
1,156
true
Add BSD
This PR adds BSD, the Japanese-English business dialogue corpus by [Rikters et al., 2020](https://www.aclweb.org/anthology/D19-5204.pdf).
https://github.com/huggingface/datasets/pull/1155
[ "Glad to have more Japanese data! Couple of comments:\r\n- the abbreviation might confuse some people as there is also an OPUS BSD corpus, would you mind renaming it as `bsd_ja_en`?\r\n- `flake8` is throwing some errors, you can run it locally (`flake8 datasets`) and fix what it tells you until it's happy :)\r\n- W...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1155", "html_url": "https://github.com/huggingface/datasets/pull/1155", "diff_url": "https://github.com/huggingface/datasets/pull/1155.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1155.patch", "merged_at": "2020-12-07T09:27...
1,155
true
Opus sardware
Added Opus sardware dataset for English-to-Sardinian machine translation. For more info: http://opus.nlpl.eu/sardware.php
https://github.com/huggingface/datasets/pull/1154
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1154", "html_url": "https://github.com/huggingface/datasets/pull/1154", "diff_url": "https://github.com/huggingface/datasets/pull/1154.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1154.patch", "merged_at": "2020-12-05T17:05...
1,154
true
Adding dataset for proto_qa in huggingface datasets library
Added dataset for ProtoQA: A Question Answering Dataset for Prototypical Common-Sense Reasoning. Followed all steps for adding a new dataset.
https://github.com/huggingface/datasets/pull/1153
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1153", "html_url": "https://github.com/huggingface/datasets/pull/1153", "diff_url": "https://github.com/huggingface/datasets/pull/1153.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1153.patch", "merged_at": null }
1,153
true
hindi discourse analysis dataset commit
https://github.com/huggingface/datasets/pull/1152
[ "That's a great dataset to have! We need a couple more things to be good to go:\r\n- you should `make style` and `flake8 datasets` before pushing to make the code quality check happy :) \r\n- the dataset will need some dummy data which you should be able to auto-generate and test locally: https://github.com/hugging...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1152", "html_url": "https://github.com/huggingface/datasets/pull/1152", "diff_url": "https://github.com/huggingface/datasets/pull/1152.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1152.patch", "merged_at": "2020-12-14T19:44...
1,152
true
adding psc dataset
https://github.com/huggingface/datasets/pull/1151
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1151", "html_url": "https://github.com/huggingface/datasets/pull/1151", "diff_url": "https://github.com/huggingface/datasets/pull/1151.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1151.patch", "merged_at": "2020-12-09T11:38...
1,151
true
adding dyk dataset
https://github.com/huggingface/datasets/pull/1150
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1150", "html_url": "https://github.com/huggingface/datasets/pull/1150", "diff_url": "https://github.com/huggingface/datasets/pull/1150.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1150.patch", "merged_at": "2020-12-05T16:52...
1,150
true
Fix typo in the comment in _info function
https://github.com/huggingface/datasets/pull/1149
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1149", "html_url": "https://github.com/huggingface/datasets/pull/1149", "diff_url": "https://github.com/huggingface/datasets/pull/1149.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1149.patch", "merged_at": "2020-12-05T16:19...
1,149
true
adding polemo2 dataset
https://github.com/huggingface/datasets/pull/1148
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1148", "html_url": "https://github.com/huggingface/datasets/pull/1148", "diff_url": "https://github.com/huggingface/datasets/pull/1148.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1148.patch", "merged_at": "2020-12-05T16:51...
1,148
true
Vinay/add/telugu books
Real data tests are failing as this dataset needs to be manually downloaded
https://github.com/huggingface/datasets/pull/1147
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1147", "html_url": "https://github.com/huggingface/datasets/pull/1147", "diff_url": "https://github.com/huggingface/datasets/pull/1147.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1147.patch", "merged_at": "2020-12-05T16:36...
1,147
true
Add LINNAEUS
https://github.com/huggingface/datasets/pull/1146
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1146", "html_url": "https://github.com/huggingface/datasets/pull/1146", "diff_url": "https://github.com/huggingface/datasets/pull/1146.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1146.patch", "merged_at": "2020-12-05T16:35...
1,146
true
Add Species-800
https://github.com/huggingface/datasets/pull/1145
[ "thanks @lhoestq ! I probably need to do the same change in the `SplitGenerator`s (lines 107, 110 and 113). I'll open a new PR for that", "Yes indeed ! Good catch 👍 \r\nFeel free to open a PR and ping me", "Hi , theres a issue pulling species_800 dataset , throws google drive error \r\n\r\nerror: \r\n\r\n```...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1145", "html_url": "https://github.com/huggingface/datasets/pull/1145", "diff_url": "https://github.com/huggingface/datasets/pull/1145.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1145.patch", "merged_at": "2020-12-05T16:35...
1,145
true
Add JFLEG
This PR adds [JFLEG](https://www.aclweb.org/anthology/E17-2037/), an English grammatical error correction benchmark. The tests were successful on real data, although it would be great if I could get some guidance on the **dummy data**. Basically, **for each source sentence there are 4 possible gold standard target s...
https://github.com/huggingface/datasets/pull/1144
[ "Hi @j-chim ! You're right it does feel redundant: your option works better, but I'd even suggest having the references in a Sequence feature, which you can declare as:\r\n```\r\n\t features=datasets.Features(\r\n {\r\n \"sentence\": datasets.Value(\"string\"),\r\n ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1144", "html_url": "https://github.com/huggingface/datasets/pull/1144", "diff_url": "https://github.com/huggingface/datasets/pull/1144.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1144.patch", "merged_at": "2020-12-06T18:16...
1,144
true
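The JFLEG layout discussed above (one source sentence with 4 gold-standard corrections, grouped into a single list feature per the reviewer's suggestion) can be sketched in plain Python. This is a hedged illustration only: the `StringIO` handles stand in for the actual JFLEG source/reference files, whose real names and paths are not given here.

```python
from io import StringIO

# Stand-ins for one source file and four parallel reference files
# (hypothetical contents; JFLEG ships one correction per line per file).
src = StringIO("He go to school .\n")
refs = [StringIO("He goes to school .\n") for _ in range(4)]

def generate_examples(src_file, ref_files):
    """Yield one example per line, with all corrections grouped in a list."""
    for idx, lines in enumerate(zip(src_file, *ref_files)):
        sentence, *corrections = (line.strip() for line in lines)
        yield idx, {"sentence": sentence, "corrections": corrections}

for _id, example in generate_examples(src, refs):
    print(example)
```

Grouping the references this way maps naturally onto a `Sequence`-of-strings feature, avoiding four redundant per-reference columns.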
Add the Winograd Schema Challenge
Adds the Winograd Schema Challenge, including configs for the more canonical wsc273 as well as wsc285 with 12 new examples. - https://cs.nyu.edu/faculty/davise/papers/WinogradSchemas/WS.html The data format was a bit of a nightmare but I think I got it to a workable format.
https://github.com/huggingface/datasets/pull/1143
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1143", "html_url": "https://github.com/huggingface/datasets/pull/1143", "diff_url": "https://github.com/huggingface/datasets/pull/1143.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1143.patch", "merged_at": "2020-12-09T09:32...
1,143
true
Fix PerSenT
New PR for dataset PerSenT
https://github.com/huggingface/datasets/pull/1142
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1142", "html_url": "https://github.com/huggingface/datasets/pull/1142", "diff_url": "https://github.com/huggingface/datasets/pull/1142.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1142.patch", "merged_at": "2020-12-14T13:39...
1,142
true
Add GitHub version of ETH Py150 Corpus
Add the redistributable version of **ETH Py150 Corpus**
https://github.com/huggingface/datasets/pull/1141
[ "The `RemoteDatasetTest` is fixed on master so it's fine", "thanks for rebasing :)\r\n\r\nCI is green now, merging" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1141", "html_url": "https://github.com/huggingface/datasets/pull/1141", "diff_url": "https://github.com/huggingface/datasets/pull/1141.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1141.patch", "merged_at": "2020-12-07T10:00...
1,141
true
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
https://github.com/huggingface/datasets/pull/1140
[ "@lhoestq have made the suggested changes in the README file.", "@lhoestq Created a new PR #1231 with only the relevant files.\r\nclosing this one :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1140", "html_url": "https://github.com/huggingface/datasets/pull/1140", "diff_url": "https://github.com/huggingface/datasets/pull/1140.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1140.patch", "merged_at": null }
1,140
true
Add ReFreSD dataset
This PR adds the **ReFreSD dataset**. The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data. Need feedback on: - I couldn't generate the dummy data. The file we download is a tsv file, but without extension, I suppose...
https://github.com/huggingface/datasets/pull/1139
[ "Cool dataset! Replying in-line:\r\n\r\n> This PR adds the **ReFreSD dataset**.\r\n> The original data is hosted [on this github repo](https://github.com/Elbria/xling-SemDiv) and we use the `REFreSD_rationale` to expose all the data.\r\n> \r\n> Need feedback on:\r\n> \r\n> * I couldn't generate the dummy data. The ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1139", "html_url": "https://github.com/huggingface/datasets/pull/1139", "diff_url": "https://github.com/huggingface/datasets/pull/1139.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1139.patch", "merged_at": "2020-12-16T16:01...
1,139
true
updated after the class name update
@lhoestq <---
https://github.com/huggingface/datasets/pull/1138
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1138", "html_url": "https://github.com/huggingface/datasets/pull/1138", "diff_url": "https://github.com/huggingface/datasets/pull/1138.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1138.patch", "merged_at": "2020-12-05T15:43...
1,138
true
add wmt mlqe 2020 shared task
First commit for Shared task 1 (wmt_mlqe_task1) of WMT20 MLQE (quality estimation of machine translation). Note that I copied the tags in the README for only one (of the 7 configurations): `en-de`. There is one configuration for each pair of languages.
https://github.com/huggingface/datasets/pull/1137
[ "re-created in #1218 because this was too messy" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1137", "html_url": "https://github.com/huggingface/datasets/pull/1137", "diff_url": "https://github.com/huggingface/datasets/pull/1137.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1137.patch", "merged_at": null }
1,137
true
minor change in description in paws-x.py and updated dataset_infos
https://github.com/huggingface/datasets/pull/1136
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1136", "html_url": "https://github.com/huggingface/datasets/pull/1136", "diff_url": "https://github.com/huggingface/datasets/pull/1136.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1136.patch", "merged_at": "2020-12-06T18:02...
1,136
true
added paws
Updating README and tags for dataset card in a while
https://github.com/huggingface/datasets/pull/1135
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1135", "html_url": "https://github.com/huggingface/datasets/pull/1135", "diff_url": "https://github.com/huggingface/datasets/pull/1135.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1135.patch", "merged_at": "2020-12-09T17:17...
1,135
true
adding xquad-r dataset
https://github.com/huggingface/datasets/pull/1134
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1134", "html_url": "https://github.com/huggingface/datasets/pull/1134", "diff_url": "https://github.com/huggingface/datasets/pull/1134.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1134.patch", "merged_at": "2020-12-05T16:50...
1,134
true
Adding XQUAD-R Dataset
https://github.com/huggingface/datasets/pull/1133
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1133", "html_url": "https://github.com/huggingface/datasets/pull/1133", "diff_url": "https://github.com/huggingface/datasets/pull/1133.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1133.patch", "merged_at": null }
1,133
true
Add Urdu Sentiment Corpus (USC).
Added Urdu Sentiment Corpus. More details about the dataset over <a href="https://github.com/MuhammadYaseenKhan/Urdu-Sentiment-Corpus">here</a>.
https://github.com/huggingface/datasets/pull/1132
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1132", "html_url": "https://github.com/huggingface/datasets/pull/1132", "diff_url": "https://github.com/huggingface/datasets/pull/1132.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1132.patch", "merged_at": null }
1,132
true
Adding XQUAD-R Dataset
https://github.com/huggingface/datasets/pull/1131
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1131", "html_url": "https://github.com/huggingface/datasets/pull/1131", "diff_url": "https://github.com/huggingface/datasets/pull/1131.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1131.patch", "merged_at": null }
1,131
true
adding discovery
https://github.com/huggingface/datasets/pull/1130
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1130", "html_url": "https://github.com/huggingface/datasets/pull/1130", "diff_url": "https://github.com/huggingface/datasets/pull/1130.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1130.patch", "merged_at": "2020-12-14T13:03...
1,130
true
Adding initial version of cord-19 dataset
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _info(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIG...
https://github.com/huggingface/datasets/pull/1129
[ "Hi @ggdupont !\r\nHave you had a chance to take a look at my suggestions ?\r\nFeel free to ping me if you have questions or when you're ready for a review", "> Hi @ggdupont !\r\n> Have you had a chance to take a look at my suggestions ?\r\n> Feel free to ping me if you have questions or when you're ready for a r...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1129", "html_url": "https://github.com/huggingface/datasets/pull/1129", "diff_url": "https://github.com/huggingface/datasets/pull/1129.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1129.patch", "merged_at": null }
1,129
true
Add xquad-r dataset
https://github.com/huggingface/datasets/pull/1128
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1128", "html_url": "https://github.com/huggingface/datasets/pull/1128", "diff_url": "https://github.com/huggingface/datasets/pull/1128.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1128.patch", "merged_at": null }
1,128
true
Add wikiqaar dataset
Arabic Wiki Question Answering Corpus.
https://github.com/huggingface/datasets/pull/1127
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1127", "html_url": "https://github.com/huggingface/datasets/pull/1127", "diff_url": "https://github.com/huggingface/datasets/pull/1127.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1127.patch", "merged_at": "2020-12-07T16:39...
1,127
true
Adding babi dataset
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment. Supersedes #945 (problem with the rebase) and addresses the issues mentioned in the review (the dummy data are smaller now and code comments are fixed).
https://github.com/huggingface/datasets/pull/1126
[ "This is ok now @lhoestq\r\n\r\nI've included the tweak to `dummy_data` to only use the data transmitted to `_generate_examples` by default (it only do that if it can find at least one path to an existing file in the `gen_kwargs` and this can be unactivated with a flag).\r\n\r\nShould I extract it in another PR or ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1126", "html_url": "https://github.com/huggingface/datasets/pull/1126", "diff_url": "https://github.com/huggingface/datasets/pull/1126.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1126.patch", "merged_at": null }
1,126
true
Add Urdu fake news dataset.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
https://github.com/huggingface/datasets/pull/1125
[ "@lhoestq looks like a lot of files were updated... shall I create a new PR?", "Hi @chaitnayabasava ! you can try rebasing and see if that fixes the number of files changed, otherwise please do open a new PR with only the relevant files and close this one :) ", "Created a new PR #1230.\r\nclosing this one :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1125", "html_url": "https://github.com/huggingface/datasets/pull/1125", "diff_url": "https://github.com/huggingface/datasets/pull/1125.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1125.patch", "merged_at": null }
1,125
true
Add Xitsonga Ner
Clean Xitsonga Ner PR
https://github.com/huggingface/datasets/pull/1124
[ "looks like this PR includes changes about many files other than the ones related to xitsonga NER\r\n\r\ncould you create another branch and another PR please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1124", "html_url": "https://github.com/huggingface/datasets/pull/1124", "diff_url": "https://github.com/huggingface/datasets/pull/1124.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1124.patch", "merged_at": null }
1,124
true
adding cdt dataset
https://github.com/huggingface/datasets/pull/1123
[ "the `ms_terms` formatting CI fails is fixed on master", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1123", "html_url": "https://github.com/huggingface/datasets/pull/1123", "diff_url": "https://github.com/huggingface/datasets/pull/1123.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1123.patch", "merged_at": "2020-12-04T17:05...
1,123
true
Add Urdu fake news.
Added Urdu fake news dataset. More information about the dataset can be found <a href="https://github.com/MaazAmjad/Datasets-for-Urdu-news">here</a>.
https://github.com/huggingface/datasets/pull/1122
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1122", "html_url": "https://github.com/huggingface/datasets/pull/1122", "diff_url": "https://github.com/huggingface/datasets/pull/1122.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1122.patch", "merged_at": null }
1,122
true
adding cdt dataset
https://github.com/huggingface/datasets/pull/1121
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1121", "html_url": "https://github.com/huggingface/datasets/pull/1121", "diff_url": "https://github.com/huggingface/datasets/pull/1121.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1121.patch", "merged_at": null }
1,121
true
Add conda environment activation
Added activation of Conda environment before installing.
https://github.com/huggingface/datasets/pull/1120
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1120", "html_url": "https://github.com/huggingface/datasets/pull/1120", "diff_url": "https://github.com/huggingface/datasets/pull/1120.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1120.patch", "merged_at": "2020-12-04T16:40...
1,120
true
Add Google Great Code Dataset
https://github.com/huggingface/datasets/pull/1119
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1119", "html_url": "https://github.com/huggingface/datasets/pull/1119", "diff_url": "https://github.com/huggingface/datasets/pull/1119.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1119.patch", "merged_at": "2020-12-06T17:33...
1,119
true
Add Tashkeela dataset
Arabic Vocalized Words Dataset.
https://github.com/huggingface/datasets/pull/1118
[ "Sorry @lhoestq for the trouble, sometime I forget to change the names :/", "> Sorry @lhoestq for the trouble, sometime I forget to change the names :/\r\n\r\nhaha it's ok ;)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1118", "html_url": "https://github.com/huggingface/datasets/pull/1118", "diff_url": "https://github.com/huggingface/datasets/pull/1118.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1118.patch", "merged_at": "2020-12-04T15:46...
1,118
true
Fix incorrect MRQA train+SQuAD URL
Fix issue #1115
https://github.com/huggingface/datasets/pull/1117
[ "Thanks ! could you regenerate the dataset_infos.json file ?\r\n\r\n```\r\ndatasets-cli test ./datasets/mrqa --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\nalso cc @VictorSanh ", "Oooops, good catch @jimmycode ", "> Thanks ! could you regenerate the dataset_infos.json file ?\r\n> \r\n> ```\r\n>...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1117", "html_url": "https://github.com/huggingface/datasets/pull/1117", "diff_url": "https://github.com/huggingface/datasets/pull/1117.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1117.patch", "merged_at": "2020-12-06T17:14...
1,117
true
add dbpedia_14 dataset
This dataset corresponds to the DBpedia dataset requested in https://github.com/huggingface/datasets/issues/353.
https://github.com/huggingface/datasets/pull/1116
[ "Thanks for the review. \r\nCheers!", "Hi @hfawaz, this week we are doing the 🤗 `datasets` sprint (see some details [here](https://discuss.huggingface.co/t/open-to-the-community-one-week-team-effort-to-reach-v2-0-of-hf-datasets-library/2176)).\r\n\r\nNothing more to do on your side but it means that if you regis...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1116", "html_url": "https://github.com/huggingface/datasets/pull/1116", "diff_url": "https://github.com/huggingface/datasets/pull/1116.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1116.patch", "merged_at": "2020-12-05T15:36...
1,116
true
Incorrect URL for MRQA SQuAD train subset
https://github.com/huggingface/datasets/blob/4ef4c8f8b7a60e35c6fa21115fca9faae91c9f74/datasets/mrqa/mrqa.py#L53 The URL for the `train+SQuAD` subset of MRQA points to the dev set instead of the train set. It should be `https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz`.
https://github.com/huggingface/datasets/issues/1115
[ "good catch !" ]
null
1,115
false
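A wrong-split URL like the one reported above is easy to guard against with a small consistency check. The sketch below is hypothetical: only the train URL is quoted in the issue, and the dev-side URL is assumed to live under a `/dev/` directory mirroring it.

```python
# Hypothetical split -> URL table in the style of the MRQA loader.
# The bug was the "train+SQuAD" entry pointing at the dev file.
urls = {
    "train+SQuAD": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/train/SQuAD.jsonl.gz",
    "validation+SQuAD": "https://s3.us-east-2.amazonaws.com/mrqa/release/v2/dev/SQuAD.jsonl.gz",  # assumed path
}

def split_matches_url(split: str, url: str) -> bool:
    """Check that a split's URL contains the matching release directory."""
    expected_dir = "/train/" if split.startswith("train") else "/dev/"
    return expected_dir in url

print(all(split_matches_url(s, u) for s, u in urls.items()))  # True after the fix
```

Running such a check once over the full URL table would have flagged the swapped entry before release.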
Add sesotho ner corpus
Clean Sesotho PR
https://github.com/huggingface/datasets/pull/1114
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1114", "html_url": "https://github.com/huggingface/datasets/pull/1114", "diff_url": "https://github.com/huggingface/datasets/pull/1114.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1114.patch", "merged_at": "2020-12-04T15:02...
1,114
true
add qed
adding QED: Dataset for Explanations in Question Answering https://github.com/google-research-datasets/QED https://arxiv.org/abs/2009.06354
https://github.com/huggingface/datasets/pull/1113
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1113", "html_url": "https://github.com/huggingface/datasets/pull/1113", "diff_url": "https://github.com/huggingface/datasets/pull/1113.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1113.patch", "merged_at": "2020-12-05T15:41...
1,113
true
Initial version of cord-19 dataset from AllenAI with only the abstract
Initial version only reading the metadata in CSV. ### Checklist: - [x] Create the dataset script /datasets/my_dataset/my_dataset.py using the template - [x] Fill the _DESCRIPTION and _CITATION variables - [x] Implement _info(), _split_generators() and _generate_examples() - [x] Make sure that the BUILDER_CONFIG...
https://github.com/huggingface/datasets/pull/1112
[ "too ugly, I'll make a clean one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1112", "html_url": "https://github.com/huggingface/datasets/pull/1112", "diff_url": "https://github.com/huggingface/datasets/pull/1112.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1112.patch", "merged_at": null }
1,112
true
Add Siswati Ner corpus
Clean Siswati PR
https://github.com/huggingface/datasets/pull/1111
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1111", "html_url": "https://github.com/huggingface/datasets/pull/1111", "diff_url": "https://github.com/huggingface/datasets/pull/1111.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1111.patch", "merged_at": "2020-12-04T14:43...
1,111
true
Using a feature named "_type" fails with certain operations
A column named `_type` leads to a `TypeError: unhashable type: 'dict'` for certain operations:

```python
from datasets import Dataset, concatenate_datasets

ds = Dataset.from_dict({"_type": ["whatever"]}).map()
concatenate_datasets([ds])  # or simply Dataset(ds._data)
```

Context: We are using datasets to persi...
https://github.com/huggingface/datasets/issues/1110
[ "Thanks for reporting !\r\n\r\nIndeed this is a keyword in the library that is used to encode/decode features to a python dictionary that we can save/load to json.\r\nWe can probably change `_type` to something that is less likely to collide with user feature names.\r\nIn this case we would want something backward ...
null
1,110
false
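To see why a user column named `_type` collides with the library's internals (as the maintainer's comment explains, `_type` is the reserved key used when encoding features to a JSON-serializable dictionary), here is a toy sketch. It is not the actual `datasets` code — just an illustration of the reserved-key pattern and its failure mode.

```python
# Toy illustration of a serializer that reserves the key "_type" to tag
# encoded feature types. It cannot distinguish a genuine encoded feature
# from user data that happens to carry a "_type" key.

def encode_feature(feature_type: str, extra: dict) -> dict:
    # Library-style encoding: tag the dict with the feature's type name.
    return {"_type": feature_type, **extra}

def decode_feature(obj):
    # Anything carrying a "_type" key is treated as an encoded feature.
    if isinstance(obj, dict) and "_type" in obj:
        return ("decoded", obj["_type"])
    return ("plain", obj)

# A genuine encoded feature round-trips as expected:
print(decode_feature(encode_feature("Value", {"dtype": "string"})))  # ('decoded', 'Value')
# A user schema whose column is literally named "_type" is misinterpreted:
print(decode_feature({"_type": "whatever"}))  # ('decoded', 'whatever')
```

The fix directions discussed in the thread follow from this: either rename the sentinel to something less likely to collide, or escape user-provided keys before encoding.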
add woz_dialogue
Adding the Wizard-of-Oz task-oriented dialogue dataset https://github.com/nmrksic/neural-belief-tracker/tree/master/data/woz https://arxiv.org/abs/1604.04562
https://github.com/huggingface/datasets/pull/1109
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1109", "html_url": "https://github.com/huggingface/datasets/pull/1109", "diff_url": "https://github.com/huggingface/datasets/pull/1109.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1109.patch", "merged_at": "2020-12-05T15:40...
1,109
true
Add Sepedi NER corpus
Finally a clean PR for Sepedi
https://github.com/huggingface/datasets/pull/1108
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1108", "html_url": "https://github.com/huggingface/datasets/pull/1108", "diff_url": "https://github.com/huggingface/datasets/pull/1108.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1108.patch", "merged_at": "2020-12-04T14:39...
1,108
true
Add arsentd_lev dataset
Add The Arabic Sentiment Twitter Dataset for Levantine dialect (ArSenTD-LEV) Paper: [ArSentD-LEV: A Multi-Topic Corpus for Target-based Sentiment Analysis in Arabic Levantine Tweets](https://arxiv.org/abs/1906.01830) Homepage: http://oma-project.com/
https://github.com/huggingface/datasets/pull/1107
[ "thanks ! can you also regenerate the dataset_infos.json file please ?" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1107", "html_url": "https://github.com/huggingface/datasets/pull/1107", "diff_url": "https://github.com/huggingface/datasets/pull/1107.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1107.patch", "merged_at": "2020-12-05T15:38...
1,107
true