title: string (length 1–290)
body: string (length 0–228k)
html_url: string (length 46–51)
comments: list
pull_request: dict
number: int64 (1–5.59k)
is_pull_request: bool (2 classes)
add yahoo_answers_topics
This PR adds the Yahoo Answers topic classification dataset. More info: https://github.com/LC-John/Yahoo-Answers-Topic-Classification-Dataset cc @joeddav, @yjernite
https://github.com/huggingface/datasets/pull/1006
[ "feel free to merge/ping me to merge if there're no more changes to do" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1006", "html_url": "https://github.com/huggingface/datasets/pull/1006", "diff_url": "https://github.com/huggingface/datasets/pull/1006.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1006.patch", "merged_at": "2020-12-02T18:01...
1,006
true
Adding Autshumato South African languages:
https://repo.sadilar.org/handle/20.500.12185/7/discover?filtertype=database&filter_relational_operator=equals&filter=Multilingual+Text+Corpora%3A+Aligned
https://github.com/huggingface/datasets/pull/1005
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1005", "html_url": "https://github.com/huggingface/datasets/pull/1005", "diff_url": "https://github.com/huggingface/datasets/pull/1005.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1005.patch", "merged_at": "2020-12-03T13:13...
1,005
true
how large datasets are handled under the hood
Hi, I want to use multiple large datasets with a map-style dataloader, where they cannot fit into memory. Could you tell me how you handle the datasets under the hood? Do you bring everything into memory in the case of map-style datasets, or is there some sharding under the hood so data is brought into memory only when necessary, then...
https://github.com/huggingface/datasets/issues/1004
[ "This library uses Apache Arrow under the hood to store datasets on disk.\r\nThe advantage of Apache Arrow is that it allows to memory map the dataset. This allows to load datasets bigger than memory and with almost no RAM usage. It also offers excellent I/O speed.\r\n\r\nFor example when you access one element or ...
null
1,004
false
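The Apache Arrow answer above hinges on memory mapping: the dataset file stays on disk and the OS pages in only the bytes actually accessed. A minimal sketch of that idea with Python's stdlib `mmap` (the file name and byte layout are illustrative, not the actual Arrow format):

```python
import mmap
import os
import tempfile

# Stand-in for a large on-disk Arrow file: many fixed-size records.
path = os.path.join(tempfile.mkdtemp(), "big_table.bin")
with open(path, "wb") as f:
    f.write(b"row-0|row-1|row-2|" * 1_000)

with open(path, "rb") as f:
    with mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ) as mm:
        # Slicing a memory map reads only the requested bytes from disk,
        # so "opening" a file far bigger than RAM costs almost no memory.
        first_record = mm[:5]
        print(first_record)  # b'row-0'
```

This is only the mechanism; the actual library layers Arrow's columnar format and record batches on top of the same memory-mapped access pattern.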
Add multi_x_science_sum
Add Multi-XScience Dataset. github repo: https://github.com/yaolu/Multi-XScience paper: [Multi-XScience: A Large-scale Dataset for Extreme Multi-document Summarization of Scientific Articles](https://arxiv.org/abs/2010.14235)
https://github.com/huggingface/datasets/pull/1003
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1003", "html_url": "https://github.com/huggingface/datasets/pull/1003", "diff_url": "https://github.com/huggingface/datasets/pull/1003.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1003.patch", "merged_at": "2020-12-02T17:39...
1,003
true
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
null
https://github.com/huggingface/datasets/pull/1002
[ "Could you fix the dummy data before we merge ?\r\nLooks like the dummy `train.csv` is missing", "Thanks @Narsil @lhoestq for adding MeDAL :)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1002", "html_url": "https://github.com/huggingface/datasets/pull/1002", "diff_url": "https://github.com/huggingface/datasets/pull/1002.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1002.patch", "merged_at": "2020-12-03T13:14...
1,002
true
Adding Medal: MeDAL: Medical Abbreviation Disambiguation Dataset for Natural Language Understanding Pretraining
null
https://github.com/huggingface/datasets/pull/1001
[ "Dupe" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1001", "html_url": "https://github.com/huggingface/datasets/pull/1001", "diff_url": "https://github.com/huggingface/datasets/pull/1001.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1001.patch", "merged_at": null }
1,001
true
UM005: Urdu <> English Translation Dataset
Adds Urdu-English dataset for machine translation: http://ufal.ms.mff.cuni.cz/umc/005-en-ur/
https://github.com/huggingface/datasets/pull/1000
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/1000", "html_url": "https://github.com/huggingface/datasets/pull/1000", "diff_url": "https://github.com/huggingface/datasets/pull/1000.diff", "patch_url": "https://github.com/huggingface/datasets/pull/1000.patch", "merged_at": "2020-12-04T15:34...
1,000
true
add generated_reviews_enth
`generated_reviews_enth` is created as part of [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf) for the machine translation task. This dataset (referred to as `generated_reviews_yn` in [scb-mt-en-th-2020](https://arxiv.org/pdf/2007.03541.pdf)) consists of English product reviews generated by [CTRL](https://arxiv.org/abs/1...
https://github.com/huggingface/datasets/pull/999
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/999", "html_url": "https://github.com/huggingface/datasets/pull/999", "diff_url": "https://github.com/huggingface/datasets/pull/999.diff", "patch_url": "https://github.com/huggingface/datasets/pull/999.patch", "merged_at": "2020-12-03T11:17:28"...
999
true
adding yahoo_answers_qa
Adding Yahoo Answers QA dataset. More info: https://ciir.cs.umass.edu/downloads/nfL6/
https://github.com/huggingface/datasets/pull/998
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/998", "html_url": "https://github.com/huggingface/datasets/pull/998", "diff_url": "https://github.com/huggingface/datasets/pull/998.diff", "patch_url": "https://github.com/huggingface/datasets/pull/998.patch", "merged_at": "2020-12-02T13:26:06"...
998
true
Microsoft CodeXGlue
Datasets from https://github.com/microsoft/CodeXGLUE This contains 13 datasets: code_x_glue_cc_clone_detection_big_clone_bench code_x_glue_cc_clone_detection_poj_104 code_x_glue_cc_cloze_testing_all code_x_glue_cc_cloze_testing_maxmin code_x_glue_cc_code_completion_line code_x_glue_cc_code_completion_token ...
https://github.com/huggingface/datasets/pull/997
[ "#978 is working on adding code refinement\r\n\r\nmaybe we should keep the CodeXGlue benchmark (as glue) and don't merge the code_refinement dataset proposed in #978 ?\r\n\r\ncc @reshinthadithyan", "Hi @madlag and @lhoestq , I am extremely interested in getting this dataset into HF's library as I research in this...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/997", "html_url": "https://github.com/huggingface/datasets/pull/997", "diff_url": "https://github.com/huggingface/datasets/pull/997.diff", "patch_url": "https://github.com/huggingface/datasets/pull/997.patch", "merged_at": null }
997
true
NotADirectoryError while loading the CNN/Dailymail dataset
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.28 GiB, post-processed: Unknown size, total: 1.82 GiB) to /root/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0/0128610a44e10f25b4af6689441c72af86205282d26399642f7db38fa7535602... ---------------------------------------...
https://github.com/huggingface/datasets/issues/996
[ "Looks like the google drive download failed.\r\nI'm getting a `Google Drive - Quota exceeded` error while looking at the downloaded file.\r\n\r\nWe should consider finding a better host than google drive for this dataset imo\r\nrelated : #873 #864 ", "It is working now, thank you. \r\n\r\nShould I leave this iss...
null
996
false
added dataset circa
Dataset Circa added. Only README.md and dataset card left
https://github.com/huggingface/datasets/pull/995
[ "Blocked @k125-ak ;) Bye-bye" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/995", "html_url": "https://github.com/huggingface/datasets/pull/995", "diff_url": "https://github.com/huggingface/datasets/pull/995.diff", "patch_url": "https://github.com/huggingface/datasets/pull/995.patch", "merged_at": "2020-12-03T09:39:37"...
995
true
Add Sepedi ner corpus
https://github.com/huggingface/datasets/pull/994
[ "Looks like the PR includes commits about many other files.\r\nCould you create a clean branch from master, and create another PR ?", "Sorry, will do that. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/994", "html_url": "https://github.com/huggingface/datasets/pull/994", "diff_url": "https://github.com/huggingface/datasets/pull/994.diff", "patch_url": "https://github.com/huggingface/datasets/pull/994.patch", "merged_at": null }
994
true
Problem downloading amazon_reviews_multi
Thanks for adding the dataset. After trying to load the dataset, I am getting the following error: `ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_fr_train.json ` I used the following code to load the dataset: `load_dataset( dataset_name, ...
https://github.com/huggingface/datasets/issues/993
[ "Hi @hfawaz ! This is working fine for me. Is it a repeated occurence? Have you tried from the latest verion?", "Hi, it seems a connection problem. \r\nNow it says: \r\n`ConnectionError: Couldn't reach https://amazon-reviews-ml.s3-us-west-2.amazonaws.com/json/train/dataset_ja_train.json`" ]
null
993
false
Add CAIL 2018 dataset
https://github.com/huggingface/datasets/pull/992
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/992", "html_url": "https://github.com/huggingface/datasets/pull/992", "diff_url": "https://github.com/huggingface/datasets/pull/992.diff", "patch_url": "https://github.com/huggingface/datasets/pull/992.patch", "merged_at": "2020-12-02T16:49:01"...
992
true
Adding farsi_news dataset (https://github.com/sci2lab/Farsi-datasets)
null
https://github.com/huggingface/datasets/pull/991
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/991", "html_url": "https://github.com/huggingface/datasets/pull/991", "diff_url": "https://github.com/huggingface/datasets/pull/991.diff", "patch_url": "https://github.com/huggingface/datasets/pull/991.patch", "merged_at": "2020-12-03T11:01:26"...
991
true
Add E2E NLG
Adding the E2E NLG dataset. More info here : http://www.macs.hw.ac.uk/InteractionLab/E2E/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` and `_genera...
https://github.com/huggingface/datasets/pull/990
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/990", "html_url": "https://github.com/huggingface/datasets/pull/990", "diff_url": "https://github.com/huggingface/datasets/pull/990.diff", "patch_url": "https://github.com/huggingface/datasets/pull/990.patch", "merged_at": "2020-12-03T13:08:04"...
990
true
Fix SV -> NO
This PR fixes the small typo as seen in #956
https://github.com/huggingface/datasets/pull/989
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/989", "html_url": "https://github.com/huggingface/datasets/pull/989", "diff_url": "https://github.com/huggingface/datasets/pull/989.diff", "patch_url": "https://github.com/huggingface/datasets/pull/989.patch", "merged_at": "2020-12-02T09:18:14"...
989
true
making sure datasets are not loaded in memory and distributed training of them
Hi, I am dealing with large-scale datasets which I need to train on distributedly. I used the shard function to divide the dataset across the cores, without any sampler, but this does not work for distributed training and does not become any faster than 1 TPU core. 1) How can I make sure data is not loaded in memory? 2) In cas...
https://github.com/huggingface/datasets/issues/988
[ "my implementation of sharding per TPU core: https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/trainers/t5_trainer.py#L316 \r\nmy implementation of dataloader for this case https://github.com/google-research/ruse/blob/d4dd58a2d8efe0ffb1a9e9e77e3228d6824d3c3c/seq2seq/tasks...
null
988
false
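The per-core sharding pattern discussed in this issue can be sketched in plain Python. This is a round-robin shard in the spirit of `Dataset.shard(num_shards, index)` (the toy list stands in for an on-disk dataset; in the real library no worker would materialize the full data):

```python
# Round-robin sharding: worker `index` out of `num_shards` workers keeps
# every num_shards-th example, so the shards are disjoint and cover the data.
def shard(examples, num_shards, index):
    return examples[index::num_shards]

data = list(range(10))
worker_0 = shard(data, num_shards=4, index=0)
worker_1 = shard(data, num_shards=4, index=1)
print(worker_0)  # [0, 4, 8]
print(worker_1)  # [1, 5, 9]
```

For distributed training, each process would call this with its own rank as `index`, so every example is seen by exactly one worker per epoch.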
Add OPUS DOGC dataset
https://github.com/huggingface/datasets/pull/987
[ "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/987", "html_url": "https://github.com/huggingface/datasets/pull/987", "diff_url": "https://github.com/huggingface/datasets/pull/987.diff", "patch_url": "https://github.com/huggingface/datasets/pull/987.patch", "merged_at": "2020-12-04T13:27:41"...
987
true
Add SciTLDR Dataset
Adds the SciTLDR dataset by AI2. Added a README card with tags to the best of my knowledge. Multi-target summaries or TLDRs of scientific documents.
https://github.com/huggingface/datasets/pull/986
[ "CI failures seem to be unrelated (related to `norwegian_ner`)\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_configs_norwegian_ner\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::t...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/986", "html_url": "https://github.com/huggingface/datasets/pull/986", "diff_url": "https://github.com/huggingface/datasets/pull/986.diff", "patch_url": "https://github.com/huggingface/datasets/pull/986.patch", "merged_at": null }
986
true
Add GAP dataset
GAP dataset Gender bias coreference resolution
https://github.com/huggingface/datasets/pull/985
[ "This dataset already exists apparently, sorry :/ \r\nsee\r\nhttps://github.com/huggingface/datasets/blob/master/datasets/gap/gap.py\r\n\r\nFeel free to re-use the dataset card you did for `/datasets/gap`\r\n", "oh heck, my bad 🤦‍♂️ sorry", "I think you should also delete this branch." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/985", "html_url": "https://github.com/huggingface/datasets/pull/985", "diff_url": "https://github.com/huggingface/datasets/pull/985.diff", "patch_url": "https://github.com/huggingface/datasets/pull/985.patch", "merged_at": null }
985
true
committing Whoa file
https://github.com/huggingface/datasets/pull/984
[ "can't find the Whoa file since there's nothing left", "The classic `rm -rf` command - nice one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/984", "html_url": "https://github.com/huggingface/datasets/pull/984", "diff_url": "https://github.com/huggingface/datasets/pull/984.diff", "patch_url": "https://github.com/huggingface/datasets/pull/984.patch", "merged_at": null }
984
true
add mc taco
MC-TACO Temporal commonsense knowledge
https://github.com/huggingface/datasets/pull/983
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/983", "html_url": "https://github.com/huggingface/datasets/pull/983", "diff_url": "https://github.com/huggingface/datasets/pull/983.diff", "patch_url": "https://github.com/huggingface/datasets/pull/983.patch", "merged_at": "2020-12-02T15:37:46"...
983
true
add prachathai67k take2
I decided it will be faster to create a new pull request instead of fixing the rebase issues. continuing from https://github.com/huggingface/datasets/pull/954
https://github.com/huggingface/datasets/pull/982
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/982", "html_url": "https://github.com/huggingface/datasets/pull/982", "diff_url": "https://github.com/huggingface/datasets/pull/982.diff", "patch_url": "https://github.com/huggingface/datasets/pull/982.patch", "merged_at": "2020-12-02T10:18:11"...
982
true
add wisesight_sentiment take2
Take 2 since last time the rebase issues were taking me too much time to fix as opposed to just open a new one.
https://github.com/huggingface/datasets/pull/981
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/981", "html_url": "https://github.com/huggingface/datasets/pull/981", "diff_url": "https://github.com/huggingface/datasets/pull/981.diff", "patch_url": "https://github.com/huggingface/datasets/pull/981.patch", "merged_at": "2020-12-02T10:37:13"...
981
true
Wongnai - Thai reviews dataset
40,000 reviews, previously released on GitHub ( https://github.com/wongnai/wongnai-corpus ) with an LGPL license, and on a closed Kaggle competition ( https://www.kaggle.com/c/wongnai-challenge-review-rating-prediction/ )
https://github.com/huggingface/datasets/pull/980
[ "Thank you for contributing a Thai dataset, @mapmeld ! I'm super hyped. \r\nOne comment I may add is that wongnai-corpus has two datasets: review classification (this) and word tokenization (https://github.com/wongnai/wongnai-corpus/blob/master/search/labeled_queries_by_judges.txt).\r\nWould it be possible for you ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/980", "html_url": "https://github.com/huggingface/datasets/pull/980", "diff_url": "https://github.com/huggingface/datasets/pull/980.diff", "patch_url": "https://github.com/huggingface/datasets/pull/980.patch", "merged_at": "2020-12-02T15:30:04"...
980
true
[WIP] Add multi woz
This PR adds version 2.2 of the Multi-domain Wizard of OZ dataset: https://github.com/budzianowski/multiwoz/tree/master/data/MultiWOZ_2.2 It was a pretty big chunk of work to figure out the structure, so I still have to add the description to the README.md. On the plus side, the structure is broadly similar to that...
https://github.com/huggingface/datasets/pull/979
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/979", "html_url": "https://github.com/huggingface/datasets/pull/979", "diff_url": "https://github.com/huggingface/datasets/pull/979.diff", "patch_url": "https://github.com/huggingface/datasets/pull/979.patch", "merged_at": "2020-12-02T16:07:16"...
979
true
Add code refinement
### OVERVIEW Millions of open-source projects with numerous bug fixes are available in code repositories. This proliferation of software development histories can be leveraged to learn how to fix common programming bugs. Code refinement aims to automatically fix bugs in the code, which can contribute to reducing t...
https://github.com/huggingface/datasets/pull/978
[ "Also cc @madlag since I recall you wanted to work on CodeXGlue as well ?", "Yes, sorry I did not see earlier your message. I added 34 on the 35 datasets in CodeXGlue, tomorrow I will wrap it up, and so I will remove my version for code_refinement. Maybe we can just have a renaming after the merge, to have a cons...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/978", "html_url": "https://github.com/huggingface/datasets/pull/978", "diff_url": "https://github.com/huggingface/datasets/pull/978.diff", "patch_url": "https://github.com/huggingface/datasets/pull/978.patch", "merged_at": null }
978
true
Add ROPES dataset
ROPES dataset Reasoning over paragraph effects in situations - testing a system's ability to apply knowledge from a passage of text to a new situation. The task is framed into a reading comprehension task following squad-style extractive qa. One thing to note: labels of the test set are hidden (leaderboard submiss...
https://github.com/huggingface/datasets/pull/977
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/977", "html_url": "https://github.com/huggingface/datasets/pull/977", "diff_url": "https://github.com/huggingface/datasets/pull/977.diff", "patch_url": "https://github.com/huggingface/datasets/pull/977.patch", "merged_at": "2020-12-02T10:58:35"...
977
true
Arabic pos dialect
A README.md and loading script for the Arabic POS Dialect dataset. The README is missing the sections on personal information, biases, and limitations, as it would probably be better for those to be filled by someone who can read the contents of the dataset and is familiar with Arabic NLP.
https://github.com/huggingface/datasets/pull/976
[ "looks like this PR includes changes about many other files than the ones for Arabic POS Dialect\r\n\r\nCan you create another branch and another PR please ?", "Sorry! I'm not sure how I managed to do that. I'll make a new branch." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/976", "html_url": "https://github.com/huggingface/datasets/pull/976", "diff_url": "https://github.com/huggingface/datasets/pull/976.diff", "patch_url": "https://github.com/huggingface/datasets/pull/976.patch", "merged_at": null }
976
true
add MeTooMA dataset
This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and guideli...
https://github.com/huggingface/datasets/pull/975
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/975", "html_url": "https://github.com/huggingface/datasets/pull/975", "diff_url": "https://github.com/huggingface/datasets/pull/975.diff", "patch_url": "https://github.com/huggingface/datasets/pull/975.patch", "merged_at": "2020-12-02T10:58:55"...
975
true
Add MeTooMA Dataset
https://github.com/huggingface/datasets/pull/974
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/974", "html_url": "https://github.com/huggingface/datasets/pull/974", "diff_url": "https://github.com/huggingface/datasets/pull/974.diff", "patch_url": "https://github.com/huggingface/datasets/pull/974.patch", "merged_at": null }
974
true
Adding The Microsoft Terminology Collection dataset.
https://github.com/huggingface/datasets/pull/973
[ "I have to manually copy a dataset_infos.json file from another dataset and modify it since the `datasets-cli` isn't able to handle manually downloaded datasets yet (as far as I know).", "you can generate the dataset_infos.json file even for datasets with manual data\r\nTo do so just specify `--data_dir <path/to/the...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/973", "html_url": "https://github.com/huggingface/datasets/pull/973", "diff_url": "https://github.com/huggingface/datasets/pull/973.diff", "patch_url": "https://github.com/huggingface/datasets/pull/973.patch", "merged_at": "2020-12-04T15:12:46"...
973
true
Add Children's Book Test (CBT) dataset
Add the Children's Book Test (CBT) from Facebook (Hill et al. 2016). Sentence completion given a few sentences as context from a children's book.
https://github.com/huggingface/datasets/pull/972
[ "Hi @lhoestq,\r\n\r\nI guess this PR can be closed since we merged #2044?\r\n\r\nI have used the same link for the homepage, as it is where the dataset is provided, hope that is okay?", "Closing in favor of #2044, thanks again :)\r\n\r\n> I have used the same link for the homepage, as it is where the dataset is p...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/972", "html_url": "https://github.com/huggingface/datasets/pull/972", "diff_url": "https://github.com/huggingface/datasets/pull/972.diff", "patch_url": "https://github.com/huggingface/datasets/pull/972.patch", "merged_at": null }
972
true
add piqa
Physical Interaction: Question Answering (commonsense) https://yonatanbisk.com/piqa/
https://github.com/huggingface/datasets/pull/971
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/971", "html_url": "https://github.com/huggingface/datasets/pull/971", "diff_url": "https://github.com/huggingface/datasets/pull/971.diff", "patch_url": "https://github.com/huggingface/datasets/pull/971.patch", "merged_at": "2020-12-02T09:58:01"...
971
true
Add SWAG
Commonsense NLI -> https://rowanzellers.com/swag/
https://github.com/huggingface/datasets/pull/970
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/970", "html_url": "https://github.com/huggingface/datasets/pull/970", "diff_url": "https://github.com/huggingface/datasets/pull/970.diff", "patch_url": "https://github.com/huggingface/datasets/pull/970.patch", "merged_at": "2020-12-02T09:55:15"...
970
true
Add wiki auto dataset
This PR adds the WikiAuto sentence simplification dataset https://github.com/chaojiang06/wiki-auto This is also a prospective GEM task, hence the README.md
https://github.com/huggingface/datasets/pull/969
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/969", "html_url": "https://github.com/huggingface/datasets/pull/969", "diff_url": "https://github.com/huggingface/datasets/pull/969.diff", "patch_url": "https://github.com/huggingface/datasets/pull/969.patch", "merged_at": "2020-12-02T16:19:14"...
969
true
ADD Afrikaans NER
Afrikaans NER corpus
https://github.com/huggingface/datasets/pull/968
[ "One trick if you want to add other datasets: consider running these commands each time you want to add a new dataset\r\n```\r\ngit checkout master\r\ngit fetch upstream\r\ngit rebase upstream/master\r\ngit checkout -b add-<my_dataset_name>\r\n```" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/968", "html_url": "https://github.com/huggingface/datasets/pull/968", "diff_url": "https://github.com/huggingface/datasets/pull/968.diff", "patch_url": "https://github.com/huggingface/datasets/pull/968.patch", "merged_at": "2020-12-02T09:41:28"...
968
true
Add CS Restaurants dataset
This PR adds the Czech restaurants dataset for Czech NLG.
https://github.com/huggingface/datasets/pull/967
[ "Oh yeah, for some reason I thought you had to do it after the merge, I'll get on it", "Weird, now the CI seems to fail because of other datasets (XGLUE, Norwegian_NER)", "Yea you just need to rebase from master", "Re-opening a PR without the messed-up rebase" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/967", "html_url": "https://github.com/huggingface/datasets/pull/967", "diff_url": "https://github.com/huggingface/datasets/pull/967.diff", "patch_url": "https://github.com/huggingface/datasets/pull/967.patch", "merged_at": null }
967
true
Add CLINC150 Dataset
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
https://github.com/huggingface/datasets/pull/966
[ "Looks like your PR now shows changes in many other files than the ones for CLINC150.\r\nFeel free to create another branch and another PR", "created new [PR](https://github.com/huggingface/datasets/pull/1016)\r\n\r\nclosing this!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/966", "html_url": "https://github.com/huggingface/datasets/pull/966", "diff_url": "https://github.com/huggingface/datasets/pull/966.diff", "patch_url": "https://github.com/huggingface/datasets/pull/966.patch", "merged_at": null }
966
true
Add CLINC150 Dataset
Added CLINC150 Dataset. The link to the dataset can be found [here](https://github.com/clinc/oos-eval) and the paper can be found [here](https://www.aclweb.org/anthology/D19-1131.pdf) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
https://github.com/huggingface/datasets/pull/965
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/965", "html_url": "https://github.com/huggingface/datasets/pull/965", "diff_url": "https://github.com/huggingface/datasets/pull/965.diff", "patch_url": "https://github.com/huggingface/datasets/pull/965.patch", "merged_at": null }
965
true
Adding the WebNLG dataset
This PR adds data from the WebNLG challenge, with one configuration per release and challenge iteration. More information can be found [here](https://webnlg-challenge.loria.fr/) Unfortunately, the data itself comes from a pretty large number of small XML files, so the dummy data ends up being quite large (8.4 MB ...
https://github.com/huggingface/datasets/pull/964
[ "This is task is part of the GEM suite so will actually need a more complete dataset card. I'm taking a break for now though and will get back to it before merging :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/964", "html_url": "https://github.com/huggingface/datasets/pull/964", "diff_url": "https://github.com/huggingface/datasets/pull/964.diff", "patch_url": "https://github.com/huggingface/datasets/pull/964.patch", "merged_at": "2020-12-02T17:34:05"...
964
true
add CODAH dataset
Adding CODAH dataset. More info: https://github.com/Websail-NU/CODAH
https://github.com/huggingface/datasets/pull/963
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/963", "html_url": "https://github.com/huggingface/datasets/pull/963", "diff_url": "https://github.com/huggingface/datasets/pull/963.diff", "patch_url": "https://github.com/huggingface/datasets/pull/963.patch", "merged_at": "2020-12-02T13:21:25"...
963
true
Add Danish Political Comments Dataset
https://github.com/huggingface/datasets/pull/962
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/962", "html_url": "https://github.com/huggingface/datasets/pull/962", "diff_url": "https://github.com/huggingface/datasets/pull/962.diff", "patch_url": "https://github.com/huggingface/datasets/pull/962.patch", "merged_at": "2020-12-03T10:31:54"...
962
true
sample multiple datasets
Hi, I am dealing with multiple datasets and I need a dataloader over them with the condition that in each batch the data samples come from only one of the datasets. My main question is: - I need a way to sample the datasets first with some weights, let's say 2x dataset1, 1x dataset2; could you point me to how I c...
https://github.com/huggingface/datasets/issues/961
[ "here I share my dataloader currently for multiple tasks: https://gist.github.com/rabeehkarimimahabadi/39f9444a4fb6f53dcc4fca5d73bf8195 \r\n\r\nI need to train my model distributedly with this dataloader, \"MultiTasksataloader\", currently this does not work in a distributed fashion,\r\nto save on memory I tried to us...
null
961
false
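The weighted "one dataset per batch" scheme asked about in this issue can be sketched as follows: pick a source dataset for each batch with weights (here 2x dataset1, 1x dataset2), then draw the whole batch from that single source. The dataset names and contents are illustrative stand-ins:

```python
import random

random.seed(0)
dataset1 = [f"d1-{i}" for i in range(100)]
dataset2 = [f"d2-{i}" for i in range(100)]
sources = [dataset1, dataset2]
weights = [2, 1]  # sample dataset1 twice as often as dataset2

def sample_batch(batch_size=4):
    # First choose which dataset this batch comes from (weighted),
    # then draw the entire batch from that dataset only.
    source = random.choices(sources, weights=weights, k=1)[0]
    return random.sample(source, batch_size)

batch = sample_batch()
print(batch)
```

In a real training loop this choice would typically live inside a batch sampler so the dataloader itself yields homogeneous batches; the same two-step idea (weighted source choice, then within-source sampling) carries over unchanged.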
Add code to automate parts of the dataset card
Most parts of the "Dataset Structure" section can be generated automatically. This PR adds some code to do so.
https://github.com/huggingface/datasets/pull/960
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/960", "html_url": "https://github.com/huggingface/datasets/pull/960", "diff_url": "https://github.com/huggingface/datasets/pull/960.diff", "patch_url": "https://github.com/huggingface/datasets/pull/960.patch", "merged_at": null }
960
true
Add Tunizi Dataset
https://github.com/huggingface/datasets/pull/959
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/959", "html_url": "https://github.com/huggingface/datasets/pull/959", "diff_url": "https://github.com/huggingface/datasets/pull/959.diff", "patch_url": "https://github.com/huggingface/datasets/pull/959.patch", "merged_at": "2020-12-03T14:21:40"...
959
true
dataset(ncslgr): add initial loading script
clean #789
https://github.com/huggingface/datasets/pull/958
[ "@lhoestq I added the README files, and now the tests fail... (check commit history, only changed MD file)\r\nThe tests seem a bit unstable", "the `RemoteDatasetTest ` errors in the CI are fixed on master so it's fine", "merging since the CI is fixed on master" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/958", "html_url": "https://github.com/huggingface/datasets/pull/958", "diff_url": "https://github.com/huggingface/datasets/pull/958.diff", "patch_url": "https://github.com/huggingface/datasets/pull/958.patch", "merged_at": "2020-12-07T16:35:39"...
958
true
Isixhosa ner corpus
https://github.com/huggingface/datasets/pull/957
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/957", "html_url": "https://github.com/huggingface/datasets/pull/957", "diff_url": "https://github.com/huggingface/datasets/pull/957.diff", "patch_url": "https://github.com/huggingface/datasets/pull/957.patch", "merged_at": "2020-12-01T18:14:58"...
957
true
Add Norwegian NER
This PR adds the [Norwegian NER](https://github.com/ljos/navnkjenner) dataset. I have added the `conllu` package as a test dependency. This is required to properly parse the `.conllu` files.
https://github.com/huggingface/datasets/pull/956
[ "Merging this one, good job and thank you @jplu :) " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/956", "html_url": "https://github.com/huggingface/datasets/pull/956", "diff_url": "https://github.com/huggingface/datasets/pull/956.diff", "patch_url": "https://github.com/huggingface/datasets/pull/956.patch", "merged_at": "2020-12-01T18:09:21"...
956
true
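The Norwegian NER record above pulls in the `conllu` package to parse `.conllu` files. As a rough illustration only (the real loader uses `conllu`, not hand-rolled parsing), a CoNLL-U token line is ten tab-separated columns:

```python
# Minimal, illustrative sketch of parsing one CoNLL-U token line.
# Field names follow the CoNLL-U format: ID, FORM, LEMMA, UPOS, XPOS,
# FEATS, HEAD, DEPREL, DEPS, MISC. The example sentence is made up.
CONLLU_FIELDS = ["id", "form", "lemma", "upos", "xpos",
                 "feats", "head", "deprel", "deps", "misc"]

def parse_token_line(line: str) -> dict:
    cols = line.rstrip("\n").split("\t")
    if len(cols) != 10:
        raise ValueError(f"expected 10 tab-separated columns, got {len(cols)}")
    return dict(zip(CONLLU_FIELDS, cols))

token = parse_token_line("1\tHunden\thund\tNOUN\t_\tDefinite=Def\t2\tnsubj\t_\t_")
print(token["form"], token["upos"])  # Hunden NOUN
```

In practice the `conllu` package also handles comment lines, multiword tokens, and empty nodes, which is why the PR depends on it rather than splitting on tabs.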
Added PragmEval benchmark
https://github.com/huggingface/datasets/pull/955
[ "> Really cool ! Thanks for adding this one :)\r\n> Good job at adding all those citations for each task\r\n> \r\n> Looks like the dummy data test doesn't pass. Maybe some files are missing in the dummy_data.zip files ?\r\n> The error reports `pragmeval/verifiability/train.tsv` to be missing\r\n> \r\n> Also could y...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/955", "html_url": "https://github.com/huggingface/datasets/pull/955", "diff_url": "https://github.com/huggingface/datasets/pull/955.diff", "patch_url": "https://github.com/huggingface/datasets/pull/955.patch", "merged_at": "2020-12-03T09:36:47"...
955
true
add prachathai67k
`prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The prachathai-67k dataset was scraped from the news site Prachathai. We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articles with 12 curated tags ...
https://github.com/huggingface/datasets/pull/954
[ "Test failing for same issues as https://github.com/huggingface/datasets/pull/939\r\nPlease advise.\r\n\r\n```\r\n=========================== short test summary info ============================\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILED tests/test_dataset_common.py...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/954", "html_url": "https://github.com/huggingface/datasets/pull/954", "diff_url": "https://github.com/huggingface/datasets/pull/954.diff", "patch_url": "https://github.com/huggingface/datasets/pull/954.patch", "merged_at": null }
954
true
added health_fact dataset
Added dataset Explainable Fact-Checking for Public Health Claims (dataset_id: health_fact)
https://github.com/huggingface/datasets/pull/953
[ "Hi @lhoestq,\r\nInitially I tried int(-1) only in place of nan labels and missing values but I kept on getting this error ```pyarrow.lib.ArrowTypeError: Expected bytes, got a 'int' object``` maybe because I'm sending int values (-1) to objects which are string type" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/953", "html_url": "https://github.com/huggingface/datasets/pull/953", "diff_url": "https://github.com/huggingface/datasets/pull/953.diff", "patch_url": "https://github.com/huggingface/datasets/pull/953.patch", "merged_at": "2020-12-01T23:11:33"...
953
true
Add orange sum
Add OrangeSum a french abstractive summarization dataset. Paper: [BARThez: a Skilled Pretrained French Sequence-to-Sequence Model](https://arxiv.org/abs/2010.12321)
https://github.com/huggingface/datasets/pull/952
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/952", "html_url": "https://github.com/huggingface/datasets/pull/952", "diff_url": "https://github.com/huggingface/datasets/pull/952.diff", "patch_url": "https://github.com/huggingface/datasets/pull/952.patch", "merged_at": "2020-12-01T15:44:00"...
952
true
Prachathai67k
Add `prachathai-67k`: News Article Corpus and Multi-label Text Classification from Prachathai.com. The `prachathai-67k` dataset was scraped from the news site [Prachathai](prachathai.com). We filtered out those articles with less than 500 characters of body text, mostly images and cartoons. It contains 67,889 articl...
https://github.com/huggingface/datasets/pull/951
[ "Wrongly branching from existing branch of wisesight_sentiment. Closing and opening another one specifically for prachathai67k" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/951", "html_url": "https://github.com/huggingface/datasets/pull/951", "diff_url": "https://github.com/huggingface/datasets/pull/951.diff", "patch_url": "https://github.com/huggingface/datasets/pull/951.patch", "merged_at": null }
951
true
Support .xz file format
Add support to extract/uncompress files in .xz format.
https://github.com/huggingface/datasets/pull/950
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/950", "html_url": "https://github.com/huggingface/datasets/pull/950", "diff_url": "https://github.com/huggingface/datasets/pull/950.diff", "patch_url": "https://github.com/huggingface/datasets/pull/950.patch", "merged_at": "2020-12-01T13:39:18"...
950
true
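PR #950 above adds `.xz` extraction support. Python's standard library covers this format with `lzma`; a minimal round-trip sketch (not the library's actual extractor, which streams from files) looks like:

```python
import lzma

# Compress some bytes into the .xz container format and decompress them
# again. The .xz magic bytes are 0xFD '7' 'z' 'X' 'Z' 0x00.
original = b"some dataset payload"
compressed = lzma.compress(original, format=lzma.FORMAT_XZ)
restored = lzma.decompress(compressed)
assert restored == original
```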
Add GermaNER Dataset
https://github.com/huggingface/datasets/pull/949
[ "@lhoestq added. " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/949", "html_url": "https://github.com/huggingface/datasets/pull/949", "diff_url": "https://github.com/huggingface/datasets/pull/949.diff", "patch_url": "https://github.com/huggingface/datasets/pull/949.patch", "merged_at": "2020-12-03T14:06:40"...
949
true
docs(ADD_NEW_DATASET): correct indentation for script
https://github.com/huggingface/datasets/pull/948
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/948", "html_url": "https://github.com/huggingface/datasets/pull/948", "diff_url": "https://github.com/huggingface/datasets/pull/948.diff", "patch_url": "https://github.com/huggingface/datasets/pull/948.patch", "merged_at": "2020-12-01T11:25:18"...
948
true
Add europeana newspapers
This PR adds the [Europeana newspapers](https://github.com/EuropeanaNewspapers/ner-corpora) dataset.
https://github.com/huggingface/datasets/pull/947
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/947", "html_url": "https://github.com/huggingface/datasets/pull/947", "diff_url": "https://github.com/huggingface/datasets/pull/947.diff", "patch_url": "https://github.com/huggingface/datasets/pull/947.patch", "merged_at": "2020-12-02T09:42:09"...
947
true
add PEC dataset
A persona-based empathetic conversation dataset published at EMNLP 2020.
https://github.com/huggingface/datasets/pull/946
[ "The checks failed again even if I didn't make any changes.", "you just need to rebase from master to fix the CI :)", "Sorry for the mess, I'm confused by the rebase and thus created a new branch." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/946", "html_url": "https://github.com/huggingface/datasets/pull/946", "diff_url": "https://github.com/huggingface/datasets/pull/946.diff", "patch_url": "https://github.com/huggingface/datasets/pull/946.patch", "merged_at": null }
946
true
Adding Babi dataset - English version
Adding the English version of bAbI. Samples are taken from ParlAI for consistency with the main users at the moment.
https://github.com/huggingface/datasets/pull/945
[ "Replaced by #1126" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/945", "html_url": "https://github.com/huggingface/datasets/pull/945", "diff_url": "https://github.com/huggingface/datasets/pull/945.diff", "patch_url": "https://github.com/huggingface/datasets/pull/945.patch", "merged_at": null }
945
true
Add German Legal Entity Recognition Dataset
https://github.com/huggingface/datasets/pull/944
[ "thanks ! merging this one" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/944", "html_url": "https://github.com/huggingface/datasets/pull/944", "diff_url": "https://github.com/huggingface/datasets/pull/944.diff", "patch_url": "https://github.com/huggingface/datasets/pull/944.patch", "merged_at": "2020-12-03T13:06:54"...
944
true
The FLUE Benchmark
This PR adds the [FLUE](https://github.com/getalp/Flaubert/tree/master/flue) benchmark which is a set of different datasets to evaluate models for French content. Two datasets are missing, the French Treebank that we can use only for research purpose and we are not allowed to distribute, and the Word Sense disambigu...
https://github.com/huggingface/datasets/pull/943
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/943", "html_url": "https://github.com/huggingface/datasets/pull/943", "diff_url": "https://github.com/huggingface/datasets/pull/943.diff", "patch_url": "https://github.com/huggingface/datasets/pull/943.patch", "merged_at": "2020-12-01T15:24:30"...
943
true
D
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://github.com/huggingface/datasets/issues/942
[]
null
942
false
Add People's Daily NER dataset
https://github.com/huggingface/datasets/pull/941
[ "> LGTM thanks :)\n> \n> \n> \n> Before we merge, could you add a dataset card ? see here for more info: https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#tag-the-dataset-and-write-the-dataset-card\n> \n> \n> \n> Note that only the tags at the top of the dataset card are mandatory, if you feel ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/941", "html_url": "https://github.com/huggingface/datasets/pull/941", "diff_url": "https://github.com/huggingface/datasets/pull/941.diff", "patch_url": "https://github.com/huggingface/datasets/pull/941.patch", "merged_at": "2020-12-02T18:42:41"...
941
true
Add MSRA NER dataset
https://github.com/huggingface/datasets/pull/940
[ "LGTM, don't forget the tags ;)" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/940", "html_url": "https://github.com/huggingface/datasets/pull/940", "diff_url": "https://github.com/huggingface/datasets/pull/940.diff", "patch_url": "https://github.com/huggingface/datasets/pull/940.patch", "merged_at": "2020-12-01T07:25:53"...
940
true
add wisesight_sentiment
Add `wisesight_sentiment` Social media messages in Thai language with sentiment label (positive, neutral, negative, question) Model Card: --- YAML tags: annotations_creators: - expert-generated language_creators: - found languages: - th licenses: - cc0-1.0 multilinguality: - monolingual size_categories:...
https://github.com/huggingface/datasets/pull/939
[ "@lhoestq Thanks, Quentin. Removed the .ipynb_checkpoints and edited the README.md. The tests are failing because of other dataets. I'm figuring out why since the commits only have changes on `wisesight_sentiment`\r\n\r\n```\r\nFAILED tests/test_dataset_common.py::RemoteDatasetTest::test_builder_class_flue\r\nFAILE...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/939", "html_url": "https://github.com/huggingface/datasets/pull/939", "diff_url": "https://github.com/huggingface/datasets/pull/939.diff", "patch_url": "https://github.com/huggingface/datasets/pull/939.patch", "merged_at": null }
939
true
V-1.0.0 of isizulu_ner_corpus
https://github.com/huggingface/datasets/pull/938
[ "closing since it's been added in #957 " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/938", "html_url": "https://github.com/huggingface/datasets/pull/938", "diff_url": "https://github.com/huggingface/datasets/pull/938.diff", "patch_url": "https://github.com/huggingface/datasets/pull/938.patch", "merged_at": null }
938
true
Local machine/cluster Beam Datasets example/tutorial
Hi, I'm wondering if https://huggingface.co/docs/datasets/beam_dataset.html has a non-GCP or non-Dataflow example/tutorial? I tried to migrate it to run on DirectRunner and SparkRunner; however, there were way too many runtime errors that I had to fix during the process, and even so I wasn't able to get eit...
https://github.com/huggingface/datasets/issues/937
[ "I tried to make it run once on the SparkRunner but it seems that this runner has some issues when it is run locally.\r\nFrom my experience the DirectRunner is fine though, even if it's clearly not memory efficient.\r\n\r\nIt would be awesome though to make it work locally on a SparkRunner !\r\nDid you manage to ma...
null
937
false
Added HANS parses and categories
This pull request adds missing HANS information: the sentence parses, as well as the heuristic category.
https://github.com/huggingface/datasets/pull/936
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/936", "html_url": "https://github.com/huggingface/datasets/pull/936", "diff_url": "https://github.com/huggingface/datasets/pull/936.diff", "patch_url": "https://github.com/huggingface/datasets/pull/936.patch", "merged_at": "2020-12-01T13:19:40"...
936
true
add PIB dataset
This pull request will add PIB dataset.
https://github.com/huggingface/datasets/pull/935
[ "Hi, \r\n\r\nI am unable to get success in these tests. Can someone help me by pointing out possible errors?\r\n\r\nThanks", "Hi ! you can read the tests by logging in to circleci.\r\n\r\nAnyway for information here are the errors : \r\n```\r\ndatasets/pib/pib.py:19:1: F401 'csv' imported but unused\r\ndatasets/p...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/935", "html_url": "https://github.com/huggingface/datasets/pull/935", "diff_url": "https://github.com/huggingface/datasets/pull/935.diff", "patch_url": "https://github.com/huggingface/datasets/pull/935.patch", "merged_at": "2020-12-01T23:17:11"...
935
true
small updates to the "add new dataset" guide
small updates (corrections/typos) to the "add new dataset" guide
https://github.com/huggingface/datasets/pull/934
[ "cc @yjernite @lhoestq @thomwolf " ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/934", "html_url": "https://github.com/huggingface/datasets/pull/934", "diff_url": "https://github.com/huggingface/datasets/pull/934.diff", "patch_url": "https://github.com/huggingface/datasets/pull/934.patch", "merged_at": "2020-11-30T23:14:00"...
934
true
Add NumerSense
Adds the NumerSense dataset - Webpage/leaderboard: https://inklab.usc.edu/NumerSense/ - Paper: https://arxiv.org/abs/2005.00683 - Description: NumerSense is a new numerical commonsense reasoning probing task, with a diagnostic dataset consisting of 3,145 masked-word-prediction probes. Basically, it's a benchmark to ...
https://github.com/huggingface/datasets/pull/933
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/933", "html_url": "https://github.com/huggingface/datasets/pull/933", "diff_url": "https://github.com/huggingface/datasets/pull/933.diff", "patch_url": "https://github.com/huggingface/datasets/pull/933.patch", "merged_at": "2020-12-01T19:51:56"...
933
true
adding metooma dataset
https://github.com/huggingface/datasets/pull/932
[ "This PR adds the #MeToo MA dataset. It presents multi-label data points for tweets mined in the backdrop of the #MeToo movement. The dataset includes data points in the form of Tweet ids and appropriate labels. Please refer to the accompanying paper for detailed information regarding annotation, collection, and gu...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/932", "html_url": "https://github.com/huggingface/datasets/pull/932", "diff_url": "https://github.com/huggingface/datasets/pull/932.diff", "patch_url": "https://github.com/huggingface/datasets/pull/932.patch", "merged_at": null }
932
true
[WIP] complex_webqa - Error zipfile.BadZipFile: Bad CRC-32
Getting a `zipfile.BadZipFile: Bad CRC-32 for file 'web_snippets_train.json'` error when downloading the largest file from Dropbox: `https://www.dropbox.com/sh/7pkwkrfnwqhsnpo/AABVENv_Q9rFtnM61liyzO0La/web_snippets_train.json.zip?dl=1` Didn't manage to see how to solve that. Setting aside for now.
https://github.com/huggingface/datasets/pull/931
[ "Thanks for your contribution, @thomwolf. Are you still interested in adding this dataset?\r\n\r\nWe are removing the dataset scripts from this GitHub repo and moving them to the Hugging Face Hub: https://huggingface.co/datasets\r\n\r\nWe would suggest that you create this dataset there. Please, feel free to tell u...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/931", "html_url": "https://github.com/huggingface/datasets/pull/931", "diff_url": "https://github.com/huggingface/datasets/pull/931.diff", "patch_url": "https://github.com/huggingface/datasets/pull/931.patch", "merged_at": null }
931
true
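The CRC-32 error in PR #931 above means the downloaded archive itself is corrupt. One quick way to confirm that before debugging a loading script is `zipfile.ZipFile.testzip()`, which returns the name of the first bad member or `None` if every CRC checks out. A self-contained sketch (the member name here just mirrors the file from the PR):

```python
import io
import zipfile

# Build a tiny in-memory archive, then verify its members' checksums.
buf = io.BytesIO()
with zipfile.ZipFile(buf, "w") as zf:
    zf.writestr("web_snippets_train.json", "{}")
with zipfile.ZipFile(buf) as zf:
    bad = zf.testzip()
print(bad)  # None -> archive is intact
```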
Lambada
Added LAMBADA dataset. A couple of points of attention (mostly because I am not sure) - The training data are compressed in a .tar file inside the main tar.gz file. I had to manually un-tar the training file to access the examples. - The dev and test splits don't have the `category` field so I put `None` by defaul...
https://github.com/huggingface/datasets/pull/930
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/930", "html_url": "https://github.com/huggingface/datasets/pull/930", "diff_url": "https://github.com/huggingface/datasets/pull/930.diff", "patch_url": "https://github.com/huggingface/datasets/pull/930.patch", "merged_at": "2020-12-01T00:37:11"...
930
true
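The LAMBADA record above notes that the training data is a `.tar` nested inside the outer `.tar.gz`, which the author un-tarred manually. The inner archive can also be opened without touching disk by handing its member fileobj to `tarfile.open` again; a self-contained sketch with made-up file names:

```python
import io
import tarfile

def add_member(tar: tarfile.TarFile, name: str, data: bytes) -> None:
    info = tarfile.TarInfo(name)
    info.size = len(data)
    tar.addfile(info, io.BytesIO(data))

# Build outer.tar.gz containing train.tar, which contains one text file.
inner_buf = io.BytesIO()
with tarfile.open(fileobj=inner_buf, mode="w") as inner:
    add_member(inner, "example.txt", b"hello lambada")
outer_buf = io.BytesIO()
with tarfile.open(fileobj=outer_buf, mode="w:gz") as outer:
    add_member(outer, "train.tar", inner_buf.getvalue())

# Read the nested member back without extracting anything to disk.
outer_buf.seek(0)
with tarfile.open(fileobj=outer_buf, mode="r:gz") as outer:
    inner_fileobj = outer.extractfile("train.tar")
    with tarfile.open(fileobj=inner_fileobj) as inner:
        text = inner.extractfile("example.txt").read()
print(text)  # b'hello lambada'
```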
Add weibo NER dataset
https://github.com/huggingface/datasets/pull/929
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/929", "html_url": "https://github.com/huggingface/datasets/pull/929", "diff_url": "https://github.com/huggingface/datasets/pull/929.diff", "patch_url": "https://github.com/huggingface/datasets/pull/929.patch", "merged_at": "2020-12-03T13:36:54"...
929
true
Add the Multilingual Amazon Reviews Corpus
- **Name:** Multilingual Amazon Reviews Corpus (`amazon_reviews_multi`) - **Description:** A collection of Amazon reviews in English, Japanese, German, French, Spanish and Chinese. - **Paper:** https://arxiv.org/abs/2010.02573 ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` us...
https://github.com/huggingface/datasets/pull/928
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/928", "html_url": "https://github.com/huggingface/datasets/pull/928", "diff_url": "https://github.com/huggingface/datasets/pull/928.diff", "patch_url": "https://github.com/huggingface/datasets/pull/928.patch", "merged_at": "2020-12-01T16:04:27"...
928
true
Hello
https://github.com/huggingface/datasets/issues/927
[]
null
927
false
add inquisitive
Adding inquisitive qg dataset More info: https://github.com/wjko2/INQUISITIVE
https://github.com/huggingface/datasets/pull/926
[ "`dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\nAny idea ?", "> `dummy_data` right now contains all article files, keeping only the required articles for dummy data fails the dummy data test.\r\n> Any idea ?\r\n\r\nWe should defin...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/926", "html_url": "https://github.com/huggingface/datasets/pull/926", "diff_url": "https://github.com/huggingface/datasets/pull/926.diff", "patch_url": "https://github.com/huggingface/datasets/pull/926.patch", "merged_at": "2020-12-02T13:40:13"...
926
true
Add Turku NLP Corpus for Finnish NER
https://github.com/huggingface/datasets/pull/925
[ "> Did you generate the dummy data with the cli or manually ?\r\n\r\nIt was generated by the cli. Do you want me to make it smaller keep it like this?\r\n\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/925", "html_url": "https://github.com/huggingface/datasets/pull/925", "diff_url": "https://github.com/huggingface/datasets/pull/925.diff", "patch_url": "https://github.com/huggingface/datasets/pull/925.patch", "merged_at": "2020-12-03T14:07:10"...
925
true
Add DART
- **Name:** *DART* - **Description:** *DART is a large dataset for open-domain structured data record to text generation.* - **Paper:** *https://arxiv.org/abs/2007.02871* - **Data:** *https://github.com/Yale-LILY/dart#leaderboard* ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py...
https://github.com/huggingface/datasets/pull/924
[ "LGTM!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/924", "html_url": "https://github.com/huggingface/datasets/pull/924", "diff_url": "https://github.com/huggingface/datasets/pull/924.diff", "patch_url": "https://github.com/huggingface/datasets/pull/924.patch", "merged_at": "2020-12-02T03:13:41"...
924
true
Add CC-100 dataset
Add CC-100. Close #773
https://github.com/huggingface/datasets/pull/923
[ "Hello @lhoestq, I would like just to ask you if it is OK that I include this feature 9f32ba1 in this PR or you would prefer to have it in a separate one.\r\n\r\nI was wondering whether include also a test, but I did not find any test for the other file formats...", "Hi ! Sure that would be valuable to support .x...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/923", "html_url": "https://github.com/huggingface/datasets/pull/923", "diff_url": "https://github.com/huggingface/datasets/pull/923.diff", "patch_url": "https://github.com/huggingface/datasets/pull/923.patch", "merged_at": null }
923
true
Add XOR QA Dataset
Added XOR Question Answering Dataset. The link to the dataset can be found [here](https://nlp.cs.washington.edu/xorqa/) - [x] Followed the instructions in CONTRIBUTING.md - [x] Ran the tests successfully - [x] Created the dummy data
https://github.com/huggingface/datasets/pull/922
[ "Hi @sumanthd17 \r\n\r\nLooks like a good start! You will also need to add a Dataset card, following the instructions given [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#manually-tag-the-dataset-and-write-the-dataset-card)", "I followed the instructions mentioned there but my datas...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/922", "html_url": "https://github.com/huggingface/datasets/pull/922", "diff_url": "https://github.com/huggingface/datasets/pull/922.diff", "patch_url": "https://github.com/huggingface/datasets/pull/922.patch", "merged_at": "2020-12-02T03:12:21"...
922
true
add dream dataset
Adding DREAM: a dataset for dialogue-based reading comprehension. More details: https://dataset.org/dream/ https://github.com/nlpdata/dream
https://github.com/huggingface/datasets/pull/920
[ "> Awesome good job !\r\n> \r\n> Could you also add a dataset card using the template guide here : https://github.com/huggingface/datasets/blob/master/templates/README_guide.md\r\n> If you can't fill some fields then just leave `[N/A]`\r\n\r\nQuick amendment: `[N/A]` is for fields that are not relevant: if you can'...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/920", "html_url": "https://github.com/huggingface/datasets/pull/920", "diff_url": "https://github.com/huggingface/datasets/pull/920.diff", "patch_url": "https://github.com/huggingface/datasets/pull/920.patch", "merged_at": "2020-12-02T15:39:12"...
920
true
wrong length with datasets
Hi, I have an MRPC dataset which I convert to seq2seq format; it then looks like this: `Dataset(features: {'src_texts': Value(dtype='string', id=None), 'tgt_texts': Value(dtype='string', id=None)}, num_rows: 10) ` I feed it to a dataloader: ``` dataloader = DataLoader( train_dataset, ...
https://github.com/huggingface/datasets/issues/919
[ "Also, I cannot first convert it to torch format, since huggingface seq2seq_trainer codes process the datasets afterwards during datacollector function to make it optimize for TPUs. ", "sorry I misunderstood length of dataset with dataloader, closed. thanks " ]
null
919
false
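The confusion resolved in issue #919 above is that `len(dataloader)` counts batches, not examples: with `drop_last=False` a DataLoader's length is `ceil(len(dataset) / batch_size)`. A framework-free arithmetic sketch of that relationship (the function name is made up for illustration):

```python
import math

def num_batches(num_examples: int, batch_size: int, drop_last: bool = False) -> int:
    # Mirrors how a PyTorch DataLoader computes its length.
    if drop_last:
        return num_examples // batch_size
    return math.ceil(num_examples / batch_size)

# A 10-row dataset with batch_size=4 yields 3 batches (the last has 2 rows).
print(num_batches(10, 4))                  # 3
print(num_batches(10, 4, drop_last=True))  # 2
```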
Add conll2002
Adding the Conll2002 dataset for NER. More info here : https://www.clips.uantwerpen.be/conll2002/ner/ ### Checkbox - [x] Create the dataset script `/datasets/my_dataset/my_dataset.py` using the template - [x] Fill the `_DESCRIPTION` and `_CITATION` variables - [x] Implement `_infos()`, `_split_generators()` ...
https://github.com/huggingface/datasets/pull/918
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/918", "html_url": "https://github.com/huggingface/datasets/pull/918", "diff_url": "https://github.com/huggingface/datasets/pull/918.diff", "patch_url": "https://github.com/huggingface/datasets/pull/918.patch", "merged_at": "2020-11-30T18:34:29"...
918
true
Addition of Concode Dataset
## Overview The Concode dataset contains pairs of NL queries and the corresponding code (contextual code generation). Reference links: Paper = https://arxiv.org/pdf/1904.09086.pdf GitHub = https://github.com/microsoft/CodeXGLUE/tree/main/Text-Code/text-to-code
https://github.com/huggingface/datasets/pull/917
[ "Testing command doesn't work\r\n###trace\r\n-- Docs: https://docs.pytest.org/en/stable/warnings.html\r\n========================================================= short test summary info ========================================================== \r\nERROR tests/test_dataset_common.py - absl.testing.parameterized.No...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/917", "html_url": "https://github.com/huggingface/datasets/pull/917", "diff_url": "https://github.com/huggingface/datasets/pull/917.diff", "patch_url": "https://github.com/huggingface/datasets/pull/917.patch", "merged_at": null }
917
true
Add Swedish NER Corpus
https://github.com/huggingface/datasets/pull/916
[ "Yes the use of configs is optional", "@abhishekkrthakur we want to keep track of the information that is and isn't in the dataset cards so we're asking everyone to use the full template :) If there is some information in there that you really can't find or don't feel qualified to add, you can just leave the `[Mo...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/916", "html_url": "https://github.com/huggingface/datasets/pull/916", "diff_url": "https://github.com/huggingface/datasets/pull/916.diff", "patch_url": "https://github.com/huggingface/datasets/pull/916.patch", "merged_at": "2020-12-02T03:10:49"...
916
true
Shall we change the hashing to encoding to reduce potential replicated cache files?
Hi there. For now, we are using `xxhash` to hash the transformations into a fingerprint, and we save a copy of the processed dataset to disk if there is a new hash value. However, some transformations are idempotent or commutative with each other. I think that encoding the transformation chain as the finge...
https://github.com/huggingface/datasets/issues/915
[ "This is an interesting idea !\r\nDo you have ideas about how to approach the decoding and the normalization ?", "@lhoestq\r\nI think we first need to save the transformation chain to a list in `self._fingerprint`. Then we can\r\n- decode all the current saved datasets to see if there is already one that is equiv...
null
915
false
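Issue #915 above proposes replacing pure hashing with an encoding of the transformation chain, so that equivalent chains (repeated idempotent ops, reordered commutative ops) map to the same cache entry. A toy sketch of the idea, using stdlib `hashlib` in place of `xxhash` and entirely made-up normalization rules:

```python
import hashlib

IDEMPOTENT = {"sort"}                  # toy rule: applying twice == once
COMMUTATIVE = {"lowercase", "strip"}   # toy rule: order doesn't matter

def normalize(chain):
    # Drop consecutive repeats of idempotent ops.
    out = []
    for op in chain:
        if op in IDEMPOTENT and out and out[-1] == op:
            continue
        out.append(op)
    # Canonically order maximal runs of commutative ops.
    result, run = [], []
    for op in out:
        if op in COMMUTATIVE:
            run.append(op)
        else:
            result.extend(sorted(run))
            run = []
            result.append(op)
    result.extend(sorted(run))
    return result

def fingerprint(chain):
    return hashlib.sha256("|".join(normalize(chain)).encode()).hexdigest()[:16]

# Two equivalent chains map to the same cache fingerprint.
assert fingerprint(["strip", "lowercase", "sort", "sort"]) == \
       fingerprint(["lowercase", "strip", "sort"])
```

The hard part the issue hints at is deciding which real transformations actually satisfy these algebraic properties; this sketch just shows the normalize-then-hash shape.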
Add list_github_datasets api for retrieving dataset name list in github repo
Thank you for your great effort on unifying data processing for NLP! This PR adds a new API `list_github_datasets` to the `inspect` module. The reason for it is that the current `list_datasets` API needs to access https://huggingface.co/api/datasets to get a large JSON. However, this connection can be rea...
https://github.com/huggingface/datasets/pull/914
[ "We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?", "> We can look into removing some of the attributes from `GET /api/datasets` to make it smaller/faster, what do you think @lhoestq?\r\n\r\nyes at least remove all the `dummy_data.zip...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/914", "html_url": "https://github.com/huggingface/datasets/pull/914", "diff_url": "https://github.com/huggingface/datasets/pull/914.diff", "patch_url": "https://github.com/huggingface/datasets/pull/914.patch", "merged_at": null }
914
true
My new dataset PEC
A new dataset, PEC, published at EMNLP 2020.
https://github.com/huggingface/datasets/pull/913
[ "How to resolve these failed checks?", "Thanks for adding this one :) \r\n\r\nTo fix the check_code_quality, please run `make style` with the latest version of black, isort, flake8\r\nTo fix the test_no_encoding_on_file_open, make sure to specify the encoding each time you call `open()` on a text file.\r\nFor exa...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/913", "html_url": "https://github.com/huggingface/datasets/pull/913", "diff_url": "https://github.com/huggingface/datasets/pull/913.diff", "patch_url": "https://github.com/huggingface/datasets/pull/913.patch", "merged_at": null }
913
true
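The review comment on PR #913 above asks for an explicit encoding on every `open()` call for text files, because the default encoding is platform-dependent. A minimal sketch of the pattern (file name and contents are made up):

```python
import os
import tempfile

# Write and read a file containing non-ASCII text, pinning the encoding
# explicitly so the result doesn't depend on the platform's locale.
path = os.path.join(tempfile.mkdtemp(), "sample.txt")
with open(path, "w", encoding="utf-8") as f:
    f.write("emp\u00e1thique")
with open(path, encoding="utf-8") as f:
    content = f.read()
print(content)  # empáthique
```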
datasets module not found
Currently, running `from datasets import load_dataset` will throw a `ModuleNotFoundError: No module named 'datasets'` error.
https://github.com/huggingface/datasets/issues/911
[ "nvm, I'd made an assumption that the library gets installed with transformers. " ]
null
911
false
Grindr meeting app web.Grindr
## Adding a Dataset - **Name:** *name of the dataset* - **Description:** *short description of the dataset (or link to social media or blog post)* - **Paper:** *link to the dataset paper if available* - **Data:** *link to the Github repository or current dataset location* - **Motivation:** *what are some good reasons t...
https://github.com/huggingface/datasets/issues/910
[]
null
910
false
Add FiNER dataset
Hi, this PR adds "A Finnish News Corpus for Named Entity Recognition" as the new `finer` dataset. The dataset is described in [this paper](https://arxiv.org/abs/1908.04212). The data is publicly available in [this GitHub repository](https://github.com/mpsilfve/finer-data). Note: they provide two test sets. The additional te...
https://github.com/huggingface/datasets/pull/909
[ "> That's really cool thank you !\r\n> \r\n> Could you also add a dataset card ?\r\n> You can find a template here : https://github.com/huggingface/datasets/blob/master/templates/README.md\r\n\r\nThe full information for adding a dataset card can be found here :) \r\nhttps://github.com/huggingface/datasets/blob/mas...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/909", "html_url": "https://github.com/huggingface/datasets/pull/909", "diff_url": "https://github.com/huggingface/datasets/pull/909.diff", "patch_url": "https://github.com/huggingface/datasets/pull/909.patch", "merged_at": "2020-12-07T16:56:23"...
909
true
Add dependency on black for tests
Add package 'black' as an installation requirement for tests.
https://github.com/huggingface/datasets/pull/908
[ "Sorry, I have just seen that it was already in `QUALITY_REQUIRE`.\r\n\r\nFor some reason it did not get installed on my virtual environment..." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/908", "html_url": "https://github.com/huggingface/datasets/pull/908", "diff_url": "https://github.com/huggingface/datasets/pull/908.diff", "patch_url": "https://github.com/huggingface/datasets/pull/908.patch", "merged_at": null }
908
true
Remove os.path.join from all URLs
Remove `os.path.join` from all URLs in dataset scripts.
https://github.com/huggingface/datasets/pull/907
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/907", "html_url": "https://github.com/huggingface/datasets/pull/907", "diff_url": "https://github.com/huggingface/datasets/pull/907.diff", "patch_url": "https://github.com/huggingface/datasets/pull/907.patch", "merged_at": "2020-11-29T22:48:19"...
907
true
Fix url with backslash in windows for blimp and pg19
Following #903 I also fixed blimp and pg19 which were using the `os.path.join` to create urls cc @albertvillanova
https://github.com/huggingface/datasets/pull/906
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/906", "html_url": "https://github.com/huggingface/datasets/pull/906", "diff_url": "https://github.com/huggingface/datasets/pull/906.diff", "patch_url": "https://github.com/huggingface/datasets/pull/906.patch", "merged_at": "2020-11-27T18:19:55"...
906
true
Disallow backslash in urls
Following #903, @albertvillanova noticed that `os.path.join` is sometimes misused in dataset scripts to create URLs. This should be avoided since it doesn't work on Windows. I'm suggesting a test to make sure that none of the URLs in the dataset scripts contain backslashes. The tests ...
https://github.com/huggingface/datasets/pull/905
[ "Looks like the test doesn't detect all the problems fixed by #907 , I'll fix that", "Ok found why it doesn't detect the problems fixed by #907 . That's because for all those datasets the urls are actually fine (no backslash) on windows, even if it uses `os.path.join`.\r\n\r\nThis is because of the behavior of `o...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/905", "html_url": "https://github.com/huggingface/datasets/pull/905", "diff_url": "https://github.com/huggingface/datasets/pull/905.diff", "patch_url": "https://github.com/huggingface/datasets/pull/905.patch", "merged_at": "2020-11-29T22:48:36"...
905
true
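As the #905/#906/#907 discussion above explains, `os.path.join` inserts `\` as the separator on Windows, which corrupts URLs built with it. Joining URL segments with `posixpath.join` (or plain string formatting) is portable; a small sketch with a hypothetical base URL:

```python
import posixpath

# posixpath.join always uses forward slashes regardless of platform,
# whereas os.path.join would use "\\" on Windows and break the URL.
base = "https://example.com/data"
url = posixpath.join(base, "train", "file.json")
print(url)  # https://example.com/data/train/file.json
assert "\\" not in url
```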