| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
Very detailed step-by-step on how to add a dataset | Add very detailed step-by-step instructions to add a new dataset to the library. | https://github.com/huggingface/datasets/pull/904 | [
"Awesome! Thanks @lhoestq "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/904",
"html_url": "https://github.com/huggingface/datasets/pull/904",
"diff_url": "https://github.com/huggingface/datasets/pull/904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/904.patch",
"merged_at": "2020-11-30T09:56:26"... | 904 | true |
Fix URL with backslash in Windows | On Windows, `os.path.join` generates URLs containing backslashes when the first "path" does not end with a slash.
In general, `os.path.join` should be avoided for generating URLs. | https://github.com/huggingface/datasets/pull/903 | [
"@lhoestq I was indeed working on that... to make another commit on this feature branch...",
"But as you prefer... nevermind! :)",
"Ah what do you have in mind for the tests ? I was thinking of adding a check in the MockDownloadManager used for tests based on dummy data. I'm creating a PR right now, I'd be happ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/903",
"html_url": "https://github.com/huggingface/datasets/pull/903",
"diff_url": "https://github.com/huggingface/datasets/pull/903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/903.patch",
"merged_at": "2020-11-27T18:04:46"... | 903 | true |
Follow cache_dir parameter to gcs downloader | As noticed in #900 the cache_dir parameter was not followed to the downloader in the case of an already processed dataset hosted on our google storage (one of them is natural questions).
Fix #900 | https://github.com/huggingface/datasets/pull/902 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/902",
"html_url": "https://github.com/huggingface/datasets/pull/902",
"diff_url": "https://github.com/huggingface/datasets/pull/902.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/902.patch",
"merged_at": "2020-11-29T22:48:53"... | 902 | true |
Addition of Nl2Bash Dataset | ## Overview
The NL2Bash data contains over 10,000 instances of linux shell commands and their corresponding natural language descriptions provided by experts, from the Tellina system. The dataset features 100+ commonly used shell utilities.
## Footnotes
The following dataset marks the first ML on source code related... | https://github.com/huggingface/datasets/pull/901 | [
"Hello, thanks. I had a talk with the dataset authors, found out that the data now is obsolete and they'll get a stable version soon. So temporality closing the PR.\r\n Although I have a question, What should _id_ be in the return statement? Should that be something like a start index (or) the type of split will do... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/901",
"html_url": "https://github.com/huggingface/datasets/pull/901",
"diff_url": "https://github.com/huggingface/datasets/pull/901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/901.patch",
"merged_at": null
} | 901 | true |
datasets.load_dataset() custom caching directory bug | Hello,
I'm having an issue with loading a dataset with a custom `cache_dir`. Despite specifying the output dir, it is still downloaded to
`~/.cache`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```p... | https://github.com/huggingface/datasets/issues/900 | [
"Thanks for reporting ! I'm looking into it."
] | null | 900 | false |
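For context, a minimal sketch of the intended behaviour (the dataset name and path below are only illustrative): passing `cache_dir` to `load_dataset` should route all downloads and processed files there instead of `~/.cache`, which is what the fix in #902 restores for datasets processed on the Hugging Face GCS bucket.
```python
from datasets import load_dataset

# Hypothetical cache location; everything downloaded or generated for this call
# should land under this directory instead of ~/.cache/huggingface/datasets.
dataset = load_dataset("squad", split="train", cache_dir="/data/hf_cache")
```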
Allow arrow based builder in auto dummy data generation | Following #898 I added support for arrow based builder for the auto dummy data generator | https://github.com/huggingface/datasets/pull/899 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/899",
"html_url": "https://github.com/huggingface/datasets/pull/899",
"diff_url": "https://github.com/huggingface/datasets/pull/899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/899.patch",
"merged_at": "2020-11-27T13:30:08"... | 899 | true |
Adding SQA dataset | As discussed in #880
Seems like automatic dummy-data generation doesn't work if the builder is an `ArrowBasedBuilder`, do you think you could take a look @lhoestq ? | https://github.com/huggingface/datasets/pull/898 | [
"This dataset seems to have around 1000 configs. Therefore when creating the dummy data we end up with hundreds of MB of dummy data which we don't want to add in the repo.\r\nLet's make this PR on hold for now and find a solution after the sprint of next week",
"Closing in favor of #1566 "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/898",
"html_url": "https://github.com/huggingface/datasets/pull/898",
"diff_url": "https://github.com/huggingface/datasets/pull/898.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/898.patch",
"merged_at": null
} | 898 | true |
Dataset viewer issues | I was looking through the dataset viewer and I like it a lot. Version numbers, citation information, everything's there! I've spotted a few issues/bugs though:
- the URL is still under `nlp`, perhaps an alias for `datasets` can be made
- when I remove a **feature** (and the feature list is empty), I get an error. T... | https://github.com/huggingface/datasets/issues/897 | [
"Thanks for reporting !\r\ncc @srush for the empty feature list issue and the encoding issue\r\ncc @julien-c maybe we can update the url and just have a redirection from the old url to the new one ?",
"Ok, I redirected on our side to a new url. ⚠️ @srush: if you update the Streamlit config too to `/datasets/viewe... | null | 897 | false |
Add template and documentation for dataset card | This PR adds a template for dataset cards, as well as a guide to filling out the template and a completed example for the ELI5 dataset, building on the work of @mcmillanmajora
New pull requests adding datasets should now have a README.md file which serves both to hold the tags we will have to index the datasets and... | https://github.com/huggingface/datasets/pull/896 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/896",
"html_url": "https://github.com/huggingface/datasets/pull/896",
"diff_url": "https://github.com/huggingface/datasets/pull/896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/896.patch",
"merged_at": "2020-11-28T01:10:14"... | 896 | true |
Better messages regarding split naming | I made explicit the error message when a bad split name is used.
Also I wanted to allow the `-` symbol for split names but actually this symbol is used to name the arrow files `{dataset_name}-{dataset_split}.arrow` so we should probably keep it this way, i.e. not allowing the `-` symbol in split names. Moreover in t... | https://github.com/huggingface/datasets/pull/895 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/895",
"html_url": "https://github.com/huggingface/datasets/pull/895",
"diff_url": "https://github.com/huggingface/datasets/pull/895.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/895.patch",
"merged_at": "2020-11-27T13:30:59"... | 895 | true |
Allow several tag sets | Hi !
Currently we have three dataset cards: snli, cnn_dailymail and allocine.
For each one of those datasets a set of tags is defined. The set of tags contains fields like `multilinguality`, `task_ids`, `licenses`, etc.
For certain datasets like `glue` for example, there exist several configurations: `sst2`, `mnl... | https://github.com/huggingface/datasets/pull/894 | [
"Closing since we don't need to update the tags of those three datasets (for each one of them there is only one tag set)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/894",
"html_url": "https://github.com/huggingface/datasets/pull/894",
"diff_url": "https://github.com/huggingface/datasets/pull/894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/894.patch",
"merged_at": null
} | 894 | true |
add metrec: arabic poetry dataset | https://github.com/huggingface/datasets/pull/893 | [
"@lhoestq removed prints and added the dataset card. ",
"@lhoestq, I want to add other datasets as well. I am not sure if it is possible to do so with the same branch. ",
"Hi @zaidalyafeai, really excited to get more Arabic coverage in the lib, thanks for your contribution!\r\n\r\nCouple of last comments:\r\n- ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/893",
"html_url": "https://github.com/huggingface/datasets/pull/893",
"diff_url": "https://github.com/huggingface/datasets/pull/893.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/893.patch",
"merged_at": "2020-12-01T15:15:07"... | 893 | true | |
Add a few datasets of reference in the documentation | I started making a small list of various datasets of reference in the documentation.
Since many datasets share a lot in common I think it's good to have a list of datasets scripts to get some inspiration from.
Let me know what you think, and if you have ideas of other datasets that we may add to this list, please l... | https://github.com/huggingface/datasets/pull/892 | [
"Looks good to me. Do we also support TSV in this helper (explain if it should be text or CSV) and in the dummy-data creator?",
"snli is basically based on tsv files (but named as .txt) and it is in the list of datasets of reference.\r\nThe dummy data creator supports tsv",
"merging this one.\r\nIf you think of... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/892",
"html_url": "https://github.com/huggingface/datasets/pull/892",
"diff_url": "https://github.com/huggingface/datasets/pull/892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/892.patch",
"merged_at": "2020-11-27T18:08:44"... | 892 | true |
gitignore .python-version | ignore `.python-version` added by `pyenv` | https://github.com/huggingface/datasets/pull/891 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/891",
"html_url": "https://github.com/huggingface/datasets/pull/891",
"diff_url": "https://github.com/huggingface/datasets/pull/891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/891.patch",
"merged_at": "2020-11-26T13:28:26"... | 891 | true |
Add LER | https://github.com/huggingface/datasets/pull/890 | [
"Thanks for the comments. I addressed them and pushed again.\r\nWhen I run \"make quality\" I get the following error but I don't know how to resolve it or what the problem ist respectively:\r\nwould reformat /Users/joelniklaus/NextCloud/PhDJoelNiklaus/Code/datasets/datasets/ler/ler.py\r\nOh no! 💥 💔 💥\r\n1 file ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/890",
"html_url": "https://github.com/huggingface/datasets/pull/890",
"diff_url": "https://github.com/huggingface/datasets/pull/890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/890.patch",
"merged_at": null
} | 890 | true | |
Optional per-dataset default config name | This PR adds a `DEFAULT_CONFIG_NAME` class attribute to `DatasetBuilder`. This allows a dataset to have a specified default config name when a dataset has more than one config but the user does not specify it. For example, after defining `DEFAULT_CONFIG_NAME = "combined"` in PolyglotNER, a user can now do the following... | https://github.com/huggingface/datasets/pull/889 | [
"I like the idea ! And the approach is right imo\r\n\r\nNote that by changing this we will have to add a way for users to get the config lists of a dataset. In the current user workflow, the user could see the list of the config when the missing config error is raised but now it won't be the case because of the def... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/889",
"html_url": "https://github.com/huggingface/datasets/pull/889",
"diff_url": "https://github.com/huggingface/datasets/pull/889.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/889.patch",
"merged_at": "2020-11-30T17:27:27"... | 889 | true |
Nested lists are zipped unexpectedly | I might misunderstand something, but I expect that if I define:
```python
"top": datasets.features.Sequence({
"middle": datasets.features.Sequence({
"bottom": datasets.Value("int32")
})
})
```
And I then create an example:
```python
yield 1, {
"top": [{
"middle": [
{"bottom": 1},
... | https://github.com/huggingface/datasets/issues/888 | [
"Yes following the Tensorflow Datasets convention, objects with type `Sequence of a Dict` are actually stored as a `dictionary of lists`.\r\nSee the [documentation](https://huggingface.co/docs/datasets/features.html?highlight=features) for more details",
"Thanks.\r\nThis is a bit (very) confusing, but I guess if ... | null | 888 | false |
pyarrow.lib.ArrowNotImplementedError: MakeBuilder: cannot construct builder for type extension<arrow.py_extension_type> | I set up a new dataset, with a sequence of arrays (really, I want to have an array of (None, 137, 2), and the first dimension is dynamic)
```python
def _info(self):
return datasets.DatasetInfo(
description=_DESCRIPTION,
# This defines the different columns of the dataset and ... | https://github.com/huggingface/datasets/issues/887 | [
"Yes right now `ArrayXD` can only be used as a column feature type, not a subtype.\r\nWith the current Arrow limitations I don't think we'll be able to make it work as a subtype, however it should be possible to allow dimensions of dynamic sizes (`Array3D(shape=(None, 137, 2), dtype=\"float32\")` for example since ... | null | 887 | false |
Fix wikipedia custom config | It should be possible to use the wikipedia dataset with any `language` and `date`.
However it was not working as noticed in #784 . Indeed the custom wikipedia configurations were not enabled for some reason.
I fixed that and was able to run
```python
from datasets import load_dataset
load_dataset("./datasets/wi... | https://github.com/huggingface/datasets/pull/886 | [
"I think this issue is still not resolve yet. Please check my comment in the following issue, thanks.\r\n[#577](https://github.com/huggingface/datasets/issues/577#issuecomment-868122769)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/886",
"html_url": "https://github.com/huggingface/datasets/pull/886",
"diff_url": "https://github.com/huggingface/datasets/pull/886.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/886.patch",
"merged_at": "2020-11-25T15:42:13"... | 886 | true |
Very slow cold-start | Hi,
I expect when importing `datasets` that nothing major happens in the background, and so the import should be insignificant.
When I load a metric, or a dataset, it's fine that it takes time.
The following ranges from 3 to 9 seconds:
```
python -m timeit -n 1 -r 1 'from datasets import load_dataset'
```
edi... | https://github.com/huggingface/datasets/issues/885 | [
"Good point!",
"Yes indeed. We can probably improve that by using lazy imports",
"#1690 added fast start-up of the library "
] | null | 885 | false |
Auto generate dummy data | When adding a new dataset to the library, dummy data creation can take some time.
To make things easier I added a command line tool that automatically generates dummy data when possible.
The tool only supports certain data files types: txt, csv, tsv, jsonl, json and xml.
Here are some examples:
```
python data... | https://github.com/huggingface/datasets/pull/884 | [
"I took your comments into account.\r\nAlso now after compressing the dummy_data.zip file it runs a dummy data test (=make sure each split has at least 1 example using the dummy data)",
"I just tested the tool with some datasets and found out that it's not working for datasets that download files using `download_... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/884",
"html_url": "https://github.com/huggingface/datasets/pull/884",
"diff_url": "https://github.com/huggingface/datasets/pull/884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/884.patch",
"merged_at": "2020-11-26T14:18:46"... | 884 | true |
Downloading/caching only a part of a datasets' dataset. | Hi,
I want to use the validation data *only* (of Natural Questions).
I don't want to have the whole dataset cached on my machine, just the dev set.
Is this possible? I can't find a way to do it in the docs.
Thank you,
Sapir | https://github.com/huggingface/datasets/issues/883 | [
"Not at the moment but we could likely support this feature.",
"?",
"I think it would be a very helpful feature, because sometimes one only wants to evaluate models on the dev set, and the whole training data may be many times bigger.\r\nThis makes the task impossible with limited memory resources."
] | null | 883 | false |
Update README.md | "no label" is "-" in the original dataset but "-1" in Huggingface distribution. | https://github.com/huggingface/datasets/pull/882 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/882",
"html_url": "https://github.com/huggingface/datasets/pull/882",
"diff_url": "https://github.com/huggingface/datasets/pull/882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/882.patch",
"merged_at": "2021-01-29T10:41:06"... | 882 | true |
Use GCP download url instead of tensorflow custom download for boolq | BoolQ is a dataset that used tf.io.gfile.copy to download the file from a GCP bucket.
It prevented the dataset from being downloaded twice because of a FileAlreadyExistsError.
Even though the error could be fixed by providing `overwrite=True` to the tf.io.gfile.copy call, I changed the script to use GCP download urls and ... | https://github.com/huggingface/datasets/pull/881 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/881",
"html_url": "https://github.com/huggingface/datasets/pull/881",
"diff_url": "https://github.com/huggingface/datasets/pull/881.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/881.patch",
"merged_at": "2020-11-24T10:12:33"... | 881 | true |
Add SQA | ## Adding a Dataset
- **Name:** SQA (Sequential Question Answering) by Microsoft.
- **Description:** The SQA dataset was created to explore the task of answering sequences of inter-related questions on HTML tables. It has 6,066 sequences with 17,553 questions in total.
- **Paper:** https://www.microsoft.com/en-us/r... | https://github.com/huggingface/datasets/issues/880 | [
"I’ll take this one to test the workflow for the sprint next week cc @yjernite @lhoestq ",
"@thomwolf here's a slightly adapted version of the code from the [official Tapas repository](https://github.com/google-research/tapas/blob/master/tapas/utils/interaction_utils.py) that is used to turn the `answer_coordinat... | null | 880 | false |
boolq does not load | Hi
I am getting these errors trying to load boolq thanks
Traceback (most recent call last):
File "test.py", line 5, in <module>
data = AutoTask().get("boolq").get_dataset("train", n_obs=10)
File "/remote/idiap.svm/user.active/rkarimi/dev/internship/seq2seq/tasks/tasks.py", line 42, in get_dataset
d... | https://github.com/huggingface/datasets/issues/879 | [
"Hi ! It runs on my side without issues. I tried\r\n```python\r\nfrom datasets import load_dataset\r\nload_dataset(\"boolq\")\r\n```\r\n\r\nWhat version of datasets and tensorflow are your runnning ?\r\nAlso if you manage to get a minimal reproducible script (on google colab for example) that would be useful.",
"... | null | 879 | false |
Loading Data From S3 Path in Sagemaker | In SageMaker I'm trying to load the dataset from an S3 path as follows
`train_path = 's3://xxxxxxxxxx/xxxxxxxxxx/train.csv'
valid_path = 's3://xxxxxxxxxx/xxxxxxxxxx/validation.csv'
test_path = 's3://xxxxxxxxxx/xxxxxxxxxx/test.csv'
data_files = {}
data_files["train"] = train_path
data_files... | https://github.com/huggingface/datasets/issues/878 | [
"This would be a neat feature",
"> neat feature\r\n\r\nI dint get these clearly, can you please elaborate like how to work on these ",
"It could maybe work almost out of the box just by using `cached_path` in the text/csv/json scripts, no?",
"Thanks thomwolf and julien-c\r\n\r\nI'm still confusion on what you... | null | 878 | false |
DataLoader(datasets) becomes slower and slower within iterations | Hello, when I loop over my dataloader, the loading speed becomes slower and slower!
```
dataset = load_from_disk(dataset_path) # around 21,000,000 lines
lineloader = tqdm(DataLoader(dataset, batch_size=1))
for idx, line in enumerate(lineloader):
# do some thing for each line
```
In the begining, th... | https://github.com/huggingface/datasets/issues/877 | [
"Hi ! Thanks for reporting.\r\nDo you have the same slowdown when you iterate through the raw dataset object as well ? (no dataloader)\r\nIt would be nice to know whether it comes from the dataloader or not",
"> Hi ! Thanks for reporting.\r\n> Do you have the same slowdown when you iterate through the raw dataset... | null | 877 | false |
imdb dataset cannot be loaded | Hi
I am trying to load the imdb train dataset
`dataset = datasets.load_dataset("imdb", split="train")`
getting following errors, thanks for your help
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/... | https://github.com/huggingface/datasets/issues/876 | [
"It looks like there was an issue while building the imdb dataset.\r\nCould you provide more information about your OS and the version of python and `datasets` ?\r\n\r\nAlso could you try again with \r\n```python\r\ndataset = datasets.load_dataset(\"imdb\", split=\"train\", download_mode=\"force_redownload\")\r\n``... | null | 876 | false |
bug in boolq dataset loading | Hi
I am trying to load boolq dataset:
```
import datasets
datasets.load_dataset("boolq")
```
I am getting the following errors, thanks for your help
```
>>> import datasets
2020-11-22 09:16:30.070332: W tensorflow/stream_executor/platform/default/dso_loader.cc:60] Could not load dynamic library 'libcuda... | https://github.com/huggingface/datasets/issues/875 | [
"I just opened a PR to fix this.\r\nThanks for reporting !"
] | null | 875 | false |
trec dataset unavailable | Hi
when I try to load the trec dataset I am getting these errors, thanks for your help
`datasets.load_dataset("trec", split="train")
`
```
File "<stdin>", line 1, in <module>
File "/idiap/user/rkarimi/libs/anaconda3/envs/internship/lib/python3.7/site-packages/datasets/load.py", line 611, in load_dataset
... | https://github.com/huggingface/datasets/issues/874 | [
"This was fixed in #740 \r\nCould you try to update `datasets` and try again ?",
"This has been fixed in datasets 1.1.3"
] | null | 874 | false |
load_dataset('cnn_dailymail', '3.0.0') gives a 'Not a directory' error | ```
from datasets import load_dataset
dataset = load_dataset('cnn_dailymail', '3.0.0')
```
Stack trace:
```
---------------------------------------------------------------------------
NotADirectoryError Traceback (most recent call last)
<ipython-input-6-2e06a8332652> in <module>()
... | https://github.com/huggingface/datasets/issues/873 | [
"I get the same error. It was fixed some days ago, but again it appears",
"Hi @mrm8488 it's working again today without any fix so I am closing this issue.",
"I see the issue happening again today - \r\n\r\n[nltk_data] Downloading package stopwords to /root/nltk_data...\r\n[nltk_data] Package stopwords is alr... | null | 873 | false |
Add IndicGLUE dataset and Metrics | Added IndicGLUE benchmark for evaluating models on 11 Indian Languages. The descriptions of the tasks and the corresponding paper can be found [here](https://indicnlp.ai4bharat.org/indic-glue/)
- [x] Followed the instructions in CONTRIBUTING.md
- [x] Ran the tests successfully
- [x] Created the dummy data | https://github.com/huggingface/datasets/pull/872 | [
"thanks ! merging now"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/872",
"html_url": "https://github.com/huggingface/datasets/pull/872",
"diff_url": "https://github.com/huggingface/datasets/pull/872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/872.patch",
"merged_at": "2020-11-25T15:26:07"... | 872 | true |
terminate called after throwing an instance of 'google::protobuf::FatalException' | Hi
I am using the dataset "iwslt2017-en-nl", and after downloading it I am getting this error when trying to evaluate it on T5-base with seq2seq_trainer.py in the huggingface repo could you assist me please? thanks
100%|█████████████████████████████████████████████████████████████████████████████████████████████... | https://github.com/huggingface/datasets/issues/871 | [
"Loading the iwslt2017-en-nl config of iwslt2017 works fine on my side. \r\nMaybe you can open an issue on transformers as well ? And also add more details about your environment (OS, python version, version of transformers and datasets etc.)",
"closing now, figured out this is because the max length of decoder w... | null | 871 | false |
[Feature Request] Add optional parameter in text loading script to preserve linebreaks | I'm working on a project about rhyming verse using phonetic poetry and song lyrics, and line breaks are a vital part of the data.
I recently switched over to use the datasets library when my various corpora grew larger than my computer's memory. And so far, it is SO great.
But the first time I processed all of ... | https://github.com/huggingface/datasets/issues/870 | [
"Hi ! Thanks for your message.\r\nIndeed it's a free feature we can add and that can be useful.\r\nIf you want to contribute, feel free to open a PR to add it to the text dataset script :)",
"Resolved via #1913."
] | null | 870 | false |
Update ner datasets infos | Update the dataset_infos.json files for changes made in #850 regarding the ner datasets feature types (and the change to ClassLabel)
I also fixed the ner types of conll2003 | https://github.com/huggingface/datasets/pull/869 | [
":+1: Thanks for fixing it!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/869",
"html_url": "https://github.com/huggingface/datasets/pull/869",
"diff_url": "https://github.com/huggingface/datasets/pull/869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/869.patch",
"merged_at": "2020-11-19T14:14:17"... | 869 | true |
Consistent metric outputs | To automate the use of metrics, they should return consistent outputs.
In particular I'm working on adding a conversion of metrics to keras metrics.
To achieve this we need two things:
- have each metric return dictionaries of string -> floats since each keras metrics should return one float
- define in the metric ... | https://github.com/huggingface/datasets/pull/868 | [
"I keep this PR in stand-by for next week's datasets sprint. If the next release is 2.0.0 then we can include it given that it's breaking for many metrics"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/868",
"html_url": "https://github.com/huggingface/datasets/pull/868",
"diff_url": "https://github.com/huggingface/datasets/pull/868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/868.patch",
"merged_at": null
} | 868 | true |
Fix some metrics feature types | Replace `int` feature type to `int32` since `int` is not a pyarrow dtype in those metrics:
- accuracy
- precision
- recall
- f1
I also added the sklearn citation and used keyword arguments to remove future warnings | https://github.com/huggingface/datasets/pull/867 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/867",
"html_url": "https://github.com/huggingface/datasets/pull/867",
"diff_url": "https://github.com/huggingface/datasets/pull/867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/867.patch",
"merged_at": "2020-11-19T17:35:57"... | 867 | true |
OSCAR from Inria group | ## Adding a Dataset
- **Name:** *OSCAR* (Open Super-large Crawled ALMAnaCH coRpus), multilingual parsing of Common Crawl (separate crawls for many different languages), [here](https://oscar-corpus.com/).
- **Description:** *OSCAR or Open Super-large Crawled ALMAnaCH coRpus is a huge multilingual corpus obtained by la... | https://github.com/huggingface/datasets/issues/866 | [
"PR is already open here : #348 \r\nThe only thing remaining is to compute the metadata of each subdataset (one per language + shuffled/unshuffled).\r\nAs soon as #863 is merged we can start computing them. This will take a bit of time though",
"Grand, thanks for this!"
] | null | 866 | false |
Have Trouble importing `datasets` | I'm failing to import transformers (v4.0.0-dev), and tracing the cause seems to be failing to import datasets.
I cloned the newest version of datasets (master branch), and do `pip install -e .`.
Then, `import datasets` causes the error below.
```
~/workspace/Clone/datasets/src/datasets/utils/file_utils.py in ... | https://github.com/huggingface/datasets/issues/865 | [
"I'm sorry, this was a problem with my environment.\r\nNow that I have identified the cause of environmental dependency, I would like to fix it and try it.\r\nExcuse me for making a noise."
] | null | 865 | false |
Unable to download cnn_dailymail dataset | ### Script to reproduce the error
```
from datasets import load_dataset
train_dataset = load_dataset("cnn_dailymail", "3.0.0", split= 'train[:10%')
valid_dataset = load_dataset("cnn_dailymail","3.0.0", split="validation[:5%]")
```
### Error
```
-------------------------------------------------------------... | https://github.com/huggingface/datasets/issues/864 | [
"Same error here!\r\n",
"Same here! My kaggle notebook stopped working like yesterday. It's strange because I have fixed version of datasets==1.1.2",
"I'm looking at it right now",
"I couldn't reproduce unfortunately. I tried\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nload_dataset(\"cnn_dailymai... | null | 864 | false |
Add clear_cache parameter in the test command | For certain datasets like OSCAR #348 there are lots of different configurations and each one of them can take a lot of disk space.
I added a `--clear_cache` flag to the `datasets-cli test` command to be able to clear the cache after each configuration test to avoid filling up the disk. It should enable an easier gen... | https://github.com/huggingface/datasets/pull/863 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/863",
"html_url": "https://github.com/huggingface/datasets/pull/863",
"diff_url": "https://github.com/huggingface/datasets/pull/863.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/863.patch",
"merged_at": "2020-11-18T14:44:24"... | 863 | true |
Update head requests | Get requests and Head requests didn't have the same parameters. | https://github.com/huggingface/datasets/pull/862 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/862",
"html_url": "https://github.com/huggingface/datasets/pull/862",
"diff_url": "https://github.com/huggingface/datasets/pull/862.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/862.patch",
"merged_at": "2020-11-18T14:43:50"... | 862 | true |
Possible Bug: Small training/dataset file creates gigantic output | Hey guys,
I was trying to create a new bert model from scratch via _huggingface transformers + tokenizers + datasets_ (actually using this example script by your team: https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py). It was supposed to be a first test with a small 5 GB r... | https://github.com/huggingface/datasets/issues/861 | [
"The preprocessing tokenizes the input text. Tokenization outputs `input_ids`, `attention_mask`, `token_type_ids` and `special_tokens_mask`. All those are of length`max_seq_length` because of padding. Therefore for each sample it generate 4 *`max_seq_length` integers. Currently they're all saved as int64. This is w... | null | 861 | false |
wmt16 cs-en does not download | Hi
I am trying the wmt16 cs-en pair; this is perhaps similar to the ro-en issue. Thanks for the help.
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "finetune_t5_trainer.py", line 109, in <dictcomp>
split="train", n_obs=data_args.n_train) for task in data_args.task}
File "/hom... | https://github.com/huggingface/datasets/issues/860 | [
"We know host this file, so downloading should be more robust."
] | null | 860 | false |
Integrate file_lock inside the lib for better logging control | Previously the locking system of the lib was based on the file_lock package. However as noticed in #812 there were too many logs printed even when the datasets logging was set to warnings or errors.
For example
```python
import logging
logging.basicConfig(level=logging.INFO)
import datasets
datasets.set_verbo... | https://github.com/huggingface/datasets/pull/859 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/859",
"html_url": "https://github.com/huggingface/datasets/pull/859",
"diff_url": "https://github.com/huggingface/datasets/pull/859.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/859.patch",
"merged_at": "2020-11-16T17:06:42"... | 859 | true |
Add SemEval-2010 task 8 | Hi,
I don't know how to add dummy data, since I create the validation set out of the last 1000 examples of the train set. If you have a suggestion, I am happy to implement it.
Cheers,
Joel | https://github.com/huggingface/datasets/pull/858 | [
"Added dummy data and encoding to open(). Now everything should be fine, hopefully :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/858",
"html_url": "https://github.com/huggingface/datasets/pull/858",
"diff_url": "https://github.com/huggingface/datasets/pull/858.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/858.patch",
"merged_at": "2020-11-26T17:28:55"... | 858 | true |
Use pandas reader in csv | The pyarrow CSV reader has issues that the pandas one doesn't (see #836 ).
To fix that I switched to the pandas csv reader.
The new reader is compatible with all the pandas parameters to read csv files.
Moreover it reads csv by chunk in order to save RAM, while the pyarrow one loads everything in memory.
Fix #836... | https://github.com/huggingface/datasets/pull/857 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/857",
"html_url": "https://github.com/huggingface/datasets/pull/857",
"diff_url": "https://github.com/huggingface/datasets/pull/857.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/857.patch",
"merged_at": "2020-11-19T17:35:38"... | 857 | true |
Add open book corpus | Adds book corpus based on Shawn Presser's [work](https://github.com/soskek/bookcorpus/issues/27) @richarddwang, the author of the original BookCorpus dataset, suggested it should be named [OpenBookCorpus](https://github.com/huggingface/datasets/issues/486). I named it BookCorpusOpen to be easily located alphabetically... | https://github.com/huggingface/datasets/pull/856 | [
"@lhoestq I fixed issues except for the dummy_data zip file. But I think I know why is it happening. So when unzipping dummy_data.zip it gets save in /tmp directory where glob doesn't pick it up. For regular downloads, the archive gets unzipped in ~/.cache/huggingface. Could that be a reason?",
"Nice thanks :)\r\... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/856",
"html_url": "https://github.com/huggingface/datasets/pull/856",
"diff_url": "https://github.com/huggingface/datasets/pull/856.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/856.patch",
"merged_at": "2020-11-17T15:22:17"... | 856 | true |
Fix kor nli csv reader | The kor_nli dataset had an issue with the csv reader that was not able to parse the lines correctly. Some lines were merged together for some reason.
I fixed that by iterating through the lines directly instead of using a csv reader.
I also changed the feature names to match the other NLI datasets (i.e. use "premise"... | https://github.com/huggingface/datasets/pull/855 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/855",
"html_url": "https://github.com/huggingface/datasets/pull/855",
"diff_url": "https://github.com/huggingface/datasets/pull/855.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/855.patch",
"merged_at": "2020-11-16T13:59:12"... | 855 | true |
wmt16 does not download | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | https://github.com/huggingface/datasets/issues/854 | [
"Hi,I also posted it to the forum, but this is a bug, perhaps it needs to be reported here? thanks ",
"It looks like the official OPUS server for WMT16 doesn't provide the data files anymore (503 error).\r\nI searched a bit and couldn't find a mirror except maybe http://nlp.ffzg.hr/resources/corpora/setimes/ (the... | null | 854 | false |
concatenate_datasets support axis=0 or 1 ? | I want to achieve the following result

| https://github.com/huggingface/datasets/issues/853 | [
"Unfortunately `concatenate_datasets` only supports concatenating the rows, while what you want to achieve is concatenate the columns.\r\nCurrently to add more columns to a dataset, one must use `map`.\r\nWhat you can do is somehting like this:\r\n```python\r\n# suppose you have datasets d1, d2, d3\r\ndef add_colum... | null | 853 | false |
wmt cannot be downloaded | Hi, I appreciate your help with the following error, thanks
>>> from datasets import load_dataset
>>> dataset = load_dataset("wmt16", "ro-en", split="train")
Downloading and preparing dataset wmt16/ro-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /root/... | https://github.com/huggingface/datasets/issues/852 | [] | null | 852 | false |
Create ClassLabel for labelling tasks datasets | This PR adds a specific `ClassLabel` for the datasets that are about a labelling task such as POS, NER or Chunking. | https://github.com/huggingface/datasets/pull/850 | [
"@lhoestq Better?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/850",
"html_url": "https://github.com/huggingface/datasets/pull/850",
"diff_url": "https://github.com/huggingface/datasets/pull/850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/850.patch",
"merged_at": "2020-11-16T10:31:58"... | 850 | true |
Load amazon dataset | Hi,
I was going through the amazon_us_reviews dataset and found that the example API usage given on the website is different from the API usage when loading the dataset.
E.g. the API usage on the [website](https://huggingface.co/datasets/amazon_us_reviews):
```
from datasets import load_dataset
dataset = load_dataset("amaz... | https://github.com/huggingface/datasets/issues/849 | [
"Thanks for reporting !\r\nWe plan to show information about the different configs of the datasets on the website, with the corresponding `load_dataset` calls.\r\n\r\nAlso I think the bullet points formatting has been fixed"
] | null | 849 | false |
Error when concatenate_datasets | Hello, when I concatenate two datasets loaded from disk, I encounter a problem:
```
test_dataset = load_from_disk('data/test_dataset')
trn_dataset = load_from_disk('data/train_dataset')
train_dataset = concatenate_datasets([trn_dataset, test_dataset])
```
And it reported ValueError blow:
```
--------------... | https://github.com/huggingface/datasets/issues/848 | [
"As you can see in the error the test checks if `indices_mappings_in_memory` is True or not, which is different from the test you do in your script. In a dataset, both the data and the indices mapping can be either on disk or in memory.\r\n\r\nThe indices mapping correspond to a mapping on top of the data table tha... | null | 848 | false |
multiprocessing in dataset map "can only test a child process" | Using a dataset with a single 'text' field and a fast tokenizer in a jupyter notebook.
```
def tokenizer_fn(example):
return tokenizer.batch_encode_plus(example['text'])
ds_tokenized = text_dataset.map(tokenizer_fn, batched=True, num_proc=6, remove_columns=['text'])
```
```
-------------------------... | https://github.com/huggingface/datasets/issues/847 | [
"It looks like an issue with wandb/tqdm here.\r\nWe're using the `multiprocess` library instead of the `multiprocessing` builtin python package to support various types of mapping functions. Maybe there's some sort of incompatibility.\r\n\r\nCould you make a minimal script to reproduce or a google colab ?",
"hi f... | null | 847 | false |
Add HoVer multi-hop fact verification dataset | ## Adding a Dataset
- **Name:** HoVer
- **Description:** https://twitter.com/YichenJiang9/status/1326954363806429186 contains 20K claim verification examples
- **Paper:** https://arxiv.org/abs/2011.03088
- **Data:** https://hover-nlp.github.io/
- **Motivation:** There are still few multi-hop information extraction... | https://github.com/huggingface/datasets/issues/846 | [
"Hi @yjernite I'm new but wanted to contribute. Has anyone already taken this problem and do you think it is suitable for newbies?",
"Hi @tenjjin! This dataset is still up for grabs! Here's the link with the guide to add it. You should play around with the library first (download and look at a few datasets), the... | null | 846 | false |
amazon description fields as bullets | One more minor formatting change to amazon reviews's description (in addition to #844). Just reformatting the fields to display as a bulleted list in markdown. | https://github.com/huggingface/datasets/pull/845 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/845",
"html_url": "https://github.com/huggingface/datasets/pull/845",
"diff_url": "https://github.com/huggingface/datasets/pull/845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/845.patch",
"merged_at": "2020-11-12T18:50:54"... | 845 | true |
add newlines to amazon desc | Just a quick formatting fix to hopefully make it render nicer on Viewer | https://github.com/huggingface/datasets/pull/844 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/844",
"html_url": "https://github.com/huggingface/datasets/pull/844",
"diff_url": "https://github.com/huggingface/datasets/pull/844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/844.patch",
"merged_at": "2020-11-12T18:42:21"... | 844 | true |
use_custom_baseline still produces errors for bertscore | `metric = load_metric('bertscore')`
`a1 = "random sentences"`
`b1 = "random sentences"`
`metric.compute(predictions = [a1], references = [b1], lang = 'en')`
`Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/stephen_chan/.local/lib/python3.6/site-packages/datasets/metric.py"... | https://github.com/huggingface/datasets/issues/843 | [
"Thanks for reporting ! That's a bug indeed\r\nIf you want to contribute, feel free to fix this issue and open a PR :)",
"This error is because of a mismatch between `datasets` and `bert_score`. With `datasets=1.1.2` and `bert_score>=0.3.6` it works ok. So `pip install -U bert_score` should fix the problem. ",
... | null | 843 | false |
How to enable `.map()` pre-processing pipelines to support multi-node parallelism? | Hi,
Currently, multiprocessing can be enabled for the `.map()` stages on a single node. However, in the case of multi-node training, (since more than one node would be available) I'm wondering if it's possible to extend the parallel processing among nodes, instead of only 1 node running the `.map()` while the other ... | https://github.com/huggingface/datasets/issues/842 | [
"Right now multiprocessing only runs on single node.\r\n\r\nHowever it's probably possible to extend it to support multi nodes. Indeed we're using the `multiprocess` library from the `pathos` project to do multiprocessing in `datasets`, and `pathos` is made to support parallelism on several nodes. More info about p... | null | 842 | false |
Cannot reuse datasets already downloaded | Hello,
I need to connect to a frontal node (with http proxy, no gpu) before connecting to a gpu node (with no http proxy, so I cannot use wget and so on).
I successfully downloaded and reused the wikipedia dataset on the frontal node.
When I connect to the gpu node, I am supposed to use the downloaded datasets from the cache, but... | https://github.com/huggingface/datasets/issues/841 | [
"It seems the process needs '/datasets.huggingface.co/datasets/datasets/wikipedia/wikipedia.py'\r\nWhere and how to assign this ```wikipedia.py``` after I manually download it ?",
"\r\ndownload the ```wikipedia.py``` at the working directory and go with ```dataset = load_dataset('wikipedia.py', '20200501.en')``` ... | null | 841 | false |
Update squad_v2.py | Change lines 100 and 102 to prevent overwriting ```predictions``` variable. | https://github.com/huggingface/datasets/pull/840 | [
"With this change all the checks are passed.",
"Good"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/840",
"html_url": "https://github.com/huggingface/datasets/pull/840",
"diff_url": "https://github.com/huggingface/datasets/pull/840.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/840.patch",
"merged_at": "2020-11-11T15:26:35"... | 840 | true |
XSum dataset missing spaces between sentences | I noticed that the XSum dataset has no space between sentences. This could lead to worse results for anyone training or testing on it. Here's an example (0th entry in the test set):
`The London trio are up for best UK act and best album, as well as getting two nominations in the best song category."We got told like ... | https://github.com/huggingface/datasets/issues/839 | [] | null | 839 | false |
CNN/Dailymail Dataset Card | Link to the card page: https://github.com/mcmillanmajora/datasets/tree/cnn_dailymail_card/datasets/cnn_dailymail
One of the questions this dataset brings up is how we want to handle versioning of the cards to mirror versions of the dataset. The different versions of this dataset are used for different tasks (which may... | https://github.com/huggingface/datasets/pull/838 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/838",
"html_url": "https://github.com/huggingface/datasets/pull/838",
"diff_url": "https://github.com/huggingface/datasets/pull/838.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/838.patch",
"merged_at": "2020-11-25T21:09:50"... | 838 | true |
AlloCiné dataset card | Link to the card page: https://github.com/mcmillanmajora/datasets/blob/allocine_card/datasets/allocine/README.md
There wasn't as much information available for this dataset, so I'm wondering what's the best way to address open questions about the dataset. For example, where did the list of films that the dataset creat... | https://github.com/huggingface/datasets/pull/837 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/837",
"html_url": "https://github.com/huggingface/datasets/pull/837",
"diff_url": "https://github.com/huggingface/datasets/pull/837.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/837.patch",
"merged_at": "2020-11-25T21:56:27"... | 837 | true |
load_dataset with 'csv' is not working. while the same file is loading with 'text' mode or with pandas | Hi All
I am trying to load a custom dataset and I am trying to load a single file to make sure the file is loading correctly:
dataset = load_dataset('csv', data_files=files)
When I run it I get:
Downloading and preparing dataset csv/default-35575a1051604c88 (download: Unknown size, generated: Unknown size, post-... | https://github.com/huggingface/datasets/issues/836 | [
"Which version of pyarrow do you have ? Could you try to update pyarrow and try again ?",
"Thanks for the fast response. I have the latest version '2.0.0' (I tried to update)\r\nI am working with Python 3.8.5",
"I think that the issue is similar to this one:https://issues.apache.org/jira/browse/ARROW-9612\r\nTh... | null | 836 | false |
Wikipedia postprocessing | Hi, thanks for this library!
Running this code:
```py
import datasets
wikipedia = datasets.load_dataset("wikipedia", "20200501.de")
print(wikipedia['train']['text'][0])
```
I get:
```
mini|Ricardo Flores Magón
mini|Mexikanische Revolutionäre, Magón in der Mitte anführend, gegen die Diktatur von Porfir... | https://github.com/huggingface/datasets/issues/835 | [
"Hi @bminixhofer ! Parsing WikiMedia is notoriously difficult: this processing used [mwparserfromhell](https://github.com/earwig/mwparserfromhell) which is pretty good but not perfect.\r\n\r\nAs an alternative, you can also use the Wiki40b dataset which was pre-processed using an un-released Google internal tool",
... | null | 835 | false |
[GEM] add WikiLingua cross-lingual abstractive summarization dataset | ## Adding a Dataset
- **Name:** WikiLingua
- **Description:** The dataset includes ~770k article and summary pairs in 18 languages from WikiHow. The gold-standard article-summary alignments across languages were extracted by aligning the images that are used to describe each how-to step in an article.
- **Paper:** h... | https://github.com/huggingface/datasets/issues/834 | [
"Hey @yjernite. This is a very interesting dataset. Would love to work on adding it but I see that the link to the data is to a gdrive folder. Can I just confirm wether dlmanager can handle gdrive urls or would this have to be a manual dl?",
"Hi @KMFODA ! A version of WikiLingua is actually already accessible in ... | null | 834 | false |
[GEM] add ASSET text simplification dataset | ## Adding a Dataset
- **Name:** ASSET
- **Description:** ASSET is a crowdsourced
multi-reference corpus for assessing sentence simplification in English where each simplification was produced by executing several rewriting transformations.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.424.pdf
- **Dat... | https://github.com/huggingface/datasets/issues/833 | [] | null | 833 | false |
[GEM] add WikiAuto text simplification dataset | ## Adding a Dataset
- **Name:** WikiAuto
- **Description:** Sentences in English Wikipedia and their corresponding sentences in Simple English Wikipedia that are written with simpler grammar and word choices. A lot of lexical and syntactic paraphrasing.
- **Paper:** https://www.aclweb.org/anthology/2020.acl-main.70... | https://github.com/huggingface/datasets/issues/832 | [] | null | 832 | false |
[GEM] Add WebNLG dataset | ## Adding a Dataset
- **Name:** WebNLG
- **Description:** WebNLG consists of Data/Text pairs where the data is a set of triples extracted from DBpedia and the text is a verbalisation of these triples (16,095 data inputs and 42,873 data-text pairs). The data is available in English and Russian
- **Paper:** https://ww... | https://github.com/huggingface/datasets/issues/831 | [] | null | 831 | false |
[GEM] add ToTTo Table-to-text dataset | ## Adding a Dataset
- **Name:** ToTTo
- **Description:** ToTTo is an open-domain English table-to-text dataset with over 120,000 training examples that proposes a controlled generation task: given a Wikipedia table and a set of highlighted table cells, produce a one-sentence description.
- **Paper:** https://arxiv.o... | https://github.com/huggingface/datasets/issues/830 | [
"closed via #1098 "
] | null | 830 | false |
[GEM] add Schema-Guided Dialogue | ## Adding a Dataset
- **Name:** The Schema-Guided Dialogue Dataset
- **Description:** The Schema-Guided Dialogue (SGD) dataset consists of over 20k annotated multi-domain, task-oriented conversations between a human and a virtual assistant. These conversations involve interactions with services and APIs spanning 20 d... | https://github.com/huggingface/datasets/issues/829 | [] | null | 829 | false |
Add writer_batch_size attribute to GeneratorBasedBuilder | As specified in #741 one would need to specify a custom ArrowWriter batch size to avoid filling the RAM. Indeed the defaults buffer size is 10 000 examples but for multimodal datasets that contain images or videos we may want to reduce that. | https://github.com/huggingface/datasets/pull/828 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/828",
"html_url": "https://github.com/huggingface/datasets/pull/828",
"diff_url": "https://github.com/huggingface/datasets/pull/828.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/828.patch",
"merged_at": "2020-11-10T16:27:35"... | 828 | true |
[GEM] MultiWOZ dialogue dataset | ## Adding a Dataset
- **Name:** MultiWOZ (Multi-Domain Wizard-of-Oz)
- **Description:** 10k annotated human-human dialogues. Each dialogue consists of a goal, multiple user and system utterances as well as a belief state. Only system utterances are annotated with dialogue acts – there are no annotations from the user... | https://github.com/huggingface/datasets/issues/827 | [
"Hi @yjernite can I help in adding this dataset? \r\n\r\nI am excited about this because this will be my first contribution to the datasets library as well as to hugginface.",
"Resolved via https://github.com/huggingface/datasets/pull/979"
] | null | 827 | false |
[GEM] Add E2E dataset | ## Adding a Dataset
- **Name:** E2E NLG dataset (for End-to-end natural language generation)
- **Description:**a dataset for training end-to-end, datadriven natural language generation systems in the restaurant domain, the datasets consists of 5,751 dialogue-act Meaning Representations (structured data) and 8.1 refer... | https://github.com/huggingface/datasets/issues/826 | [] | null | 826 | false |
Add accuracy, precision, recall and F1 metrics | This PR adds several single metrics, namely:
- Accuracy
- Precision
- Recall
- F1
They all uses under the hood the sklearn metrics of the same name. They allow different useful features when training a multilabel/multiclass model:
- have a macro/micro/per label/weighted/binary/per sample score
- score only t... | https://github.com/huggingface/datasets/pull/825 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/825",
"html_url": "https://github.com/huggingface/datasets/pull/825",
"diff_url": "https://github.com/huggingface/datasets/pull/825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/825.patch",
"merged_at": "2020-11-11T19:23:43"... | 825 | true |
Discussion using datasets in offline mode | `datasets.load_dataset("csv", ...)` breaks if you have no connection (There is already this issue https://github.com/huggingface/datasets/issues/761 about it). It seems to be the same for metrics too.
I create this ticket to discuss a bit and gather what you have in mind or other propositions.
Here are some point... | https://github.com/huggingface/datasets/issues/824 | [
"No comments ?",
"I think it would be very cool. I'm currently working on a cluster from Compute Canada, and I have internet access only when I'm not in the nodes where I run the scripts. So I was expecting to be able to use the wmt14 dataset until I realized I needed internet connection even if I downloaded the ... | null | 824 | false |
how processing in batch works in datasets | Hi,
I need to process my datasets before they are passed to the dataloader in batches;
here is my code
```
class AbstractTask(ABC):
task_name: str = NotImplemented
preprocessor: Callable = NotImplemented
split_to_data_split: Mapping[str, str] = NotImplemented
tokenizer: Callable = NotImplemented
... | https://github.com/huggingface/datasets/issues/823 | [
"Hi I don’t think this is a request for a dataset like you labeled it.\r\n\r\nI also think this would be better suited for the forum at https://discuss.huggingface.co. we try to keep the issue for the repo for bug reports and new features/dataset requests and have usage questions discussed on the forum. Thanks.",
... | null | 823 | false |
datasets freezes | Hi, I want to load these two datasets and convert them to Dataset format in torch, but the code freezes for me. Could you have a look please? Thanks
dataset1 = load_dataset("squad", split="train[:10]")
dataset1 = dataset1.set_format(type='torch', columns=['context', 'answers', 'question'])
dataset2 = load_datase... | https://github.com/huggingface/datasets/issues/822 | [
"Pytorch is unable to convert strings to tensors unfortunately.\r\nYou can use `set_format(type=\"torch\")` on columns that can be converted to tensors, such as token ids.\r\n\r\nThis makes me think that we should probably raise an error or at least a warning when one tries to create pytorch tensors out of text col... | null | 822 | false |
`kor_nli` dataset isn't being loaded properly | There are two issues with the `kor_nli` dataset
1. csv.DictReader fails to split features by tab
- There should not be a `None` value in the label feature, but there is.
```python
kor_nli_train['train'].unique('gold_label')
# ['neutral', 'entailment', 'contradiction', None]
```
-... | https://github.com/huggingface/datasets/issues/821 | [] | null | 821 | false |
Update quail dataset to v1.3 | Updated quail to most recent version, to address the problem originally discussed [here](https://github.com/huggingface/datasets/issues/806). | https://github.com/huggingface/datasets/pull/820 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/820",
"html_url": "https://github.com/huggingface/datasets/pull/820",
"diff_url": "https://github.com/huggingface/datasets/pull/820.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/820.patch",
"merged_at": "2020-11-10T09:06:35"... | 820 | true |
Make save function use deterministic global vars order | The `dumps` function needs to be deterministic for the caching mechanism.
However in #816 I noticed that one of dill's methods for recursively collecting the globals of a function may return them in a different order each time it's used. To fix that I sort the globals by key in the `globs` dictionary.
I had to add a re... | https://github.com/huggingface/datasets/pull/819 | [
"Sorry, asking for help here, but the dill thread stop around 2013. Is it possible to use dill deterministically? I tried to monkeypatch the solution presented here into dill, but I suppose it requires forking their project.",
"Hi ! What we did was to subclass `dill`'s Pickler to fix the non-deterministic behavio... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/819",
"html_url": "https://github.com/huggingface/datasets/pull/819",
"diff_url": "https://github.com/huggingface/datasets/pull/819.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/819.patch",
"merged_at": "2020-11-11T15:20:50"... | 819 | true |
Fix type hints pickling in python 3.6 | Type hints can't be properly pickled in python 3.6. This was causing errors in the `run_mlm.py` script from `transformers` with python 3.6.
However cloudpickle proposed a [fix](https://github.com/cloudpipe/cloudpickle/pull/318/files) to make it work anyway.
The idea is just to implement the pickling/unpickling of parame... | https://github.com/huggingface/datasets/pull/818 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/818",
"html_url": "https://github.com/huggingface/datasets/pull/818",
"diff_url": "https://github.com/huggingface/datasets/pull/818.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/818.patch",
"merged_at": "2020-11-10T09:07:01"... | 818 | true |
Add MRQA dataset | ## Adding a Dataset
- **Name:** MRQA
- **Description:** Collection of different (subsets of) QA datasets all converted to the same format to evaluate out-of-domain generalization (the datasets come from different domains, distributions, etc.). Some datasets are used for training and others are used for evaluation. Th... | https://github.com/huggingface/datasets/issues/817 | [
"Done! cf #1117 and #1022"
] | null | 817 | false |
[Caching] Dill globalvars() output order is not deterministic and can cause cache issues. | Dill uses `dill.detect.globalvars` to get the globals used by a function in a recursive dump. `globalvars` returns a dictionary of all the globals that a dumped function needs. However the order of the keys in this dict is not deterministic and can cause caching issues.
To fix that one could register an implementati... | https://github.com/huggingface/datasets/issues/816 | [
"To show the issue:\r\n```\r\npython -c \"from datasets.fingerprint import Hasher; a=[]; func = lambda : len(a); print(Hasher.hash(func))\"\r\n```\r\ndoesn't always return the same ouput since `globs` is a dictionary with \"a\" and \"len\" as keys but sometimes not in the same order"
] | null | 816 | false |
Is dataset iterative or not? | Hi
I want to use your library for large-scale training, but I am not sure whether datasets are implemented as iterable datasets or not.
Could you provide me with an example of how I can use datasets as iterable datasets?
thanks | https://github.com/huggingface/datasets/issues/815 | [
"Hello !\r\nCould you give more details ?\r\n\r\nIf you mean iter through one dataset then yes, `Dataset` object does implement the `__iter__` method so you can use \r\n```python\r\nfor example in dataset:\r\n # do something\r\n```\r\n\r\nIf you want to iter through several datasets you can first concatenate the... | null | 815 | false |
Joining multiple datasets | Hi
I have multiple iterable datasets from your library with different sizes, and I want to join them so that each dataset is sampled equally often (smaller datasets more, larger ones less). Could you tell me how to implement this in pytorch? thanks | https://github.com/huggingface/datasets/issues/814 | [
"found a solution here https://discuss.pytorch.org/t/train-simultaneously-on-two-datasets/649/35, closed for now, thanks "
] | null | 814 | false |
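Besides the PyTorch-side solution linked in the comment, more recent versions of `datasets` can mix datasets directly; a sketch assuming two datasets of different sizes (the splits and sampling probabilities are placeholders):

```python
from datasets import load_dataset, interleave_datasets

small = load_dataset("squad", split="train[:1000]")
large = load_dataset("squad", split="train[1000:11000]")

# Equal probabilities oversample the smaller dataset relative to its size.
mixed = interleave_datasets([small, large], probabilities=[0.5, 0.5], seed=42)
```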
How to implement DistributedSampler with datasets | Hi,
I am using your datasets to define my dataloaders, and I am training finetune_trainer.py from the huggingface repo on them.
I need a DistributedSampler to be able to train the models on TPUs and distribute the load across the TPU cores. Could you tell me how I can implement the distributed sampler when using d... | https://github.com/huggingface/datasets/issues/813 | [
"Hi Apparently I need to shard the data and give one host a chunk, could you provide me please with examples on how to do it? I want to use it jointly with finetune_trainer.py in huggingface repo seq2seq examples. thanks. ",
"Hey @rabeehkarimimahabadi I'm actually looking for the same feature. Did you manage to g... | null | 813 | false |
Too much logging | I'm doing this at the beginning of my script:
from datasets.utils import logging as datasets_logging
datasets_logging.set_verbosity_warning()
but I'm still getting these logs:
[2020-11-07 15:45:41,908][filelock][INFO] - Lock 139958278886176 acquired on /home/username/.cache/huggingface/datasets/cfe20ffaa80ef1... | https://github.com/huggingface/datasets/issues/812 | [
"Hi ! Thanks for reporting :) \r\nI agree these one should be hidden when the logging level is warning, we'll fix that",
"+1, the amount of logging is excessive.\r\n\r\nMost of it indeed comes from `filelock.py`, though there are occasionally messages from other sources too. Below is an example (all of these mess... | null | 812 | false |
nlp viewer error | Hello,
when I select amazon_us_reviews in the nlp viewer, it shows an error.
https://huggingface.co/nlp/viewer/?dataset=amazon_us_reviews

| https://github.com/huggingface/datasets/issues/811 | [
"and also for 'blog_authorship_corpus'\r\nhttps://huggingface.co/nlp/viewer/?dataset=blog_authorship_corpus\r\n\r\n",
"Is this the problem of my local computer or ??",
"Related to:\r\n- #673"
] | null | 811 | false |
Fix seqeval metric | The current seqeval metric returns the following error when computed:
```
~/.cache/huggingface/modules/datasets_modules/metrics/seqeval/78a944d83252b5a16c9a2e49f057f4c6e02f18cc03349257025a8c9aea6524d8/seqeval.py in _compute(self, predictions, references, suffix)
102 scores = {}
103 for type_... | https://github.com/huggingface/datasets/pull/810 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/810",
"html_url": "https://github.com/huggingface/datasets/pull/810",
"diff_url": "https://github.com/huggingface/datasets/pull/810.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/810.patch",
"merged_at": "2020-11-09T14:04:27"... | 810 | true |
Add Google Taskmaster dataset | ## Adding a Dataset
- **Name:** Taskmaster
- **Description:** A large dataset of task-oriented dialogue with annotated goals (55K dialogues covering entertainment and travel reservations)
- **Paper:** https://arxiv.org/abs/1909.05358
- **Data:** https://github.com/google-research-datasets/Taskmaster
- **Motivation... | https://github.com/huggingface/datasets/issues/809 | [
"Hey @yjernite. Was going to start working on this but found taskmaster 1,2 & 3 in the datasets library already so think this can be closed now?",
"You are absolutely right :) \r\n\r\nClosed by https://github.com/huggingface/datasets/pull/1193 https://github.com/huggingface/datasets/pull/1197 https://github.com/h... | null | 809 | false |
dataset(dgs): initial dataset loading script | When trying to create dummy data I get:
> Dataset datasets with config None seems to already open files in the method `_split_generators(...)`. You might consider to instead only open files in the method `_generate_examples(...)` instead. If this is not possible the dummy data has to be created with less guidance. ... | https://github.com/huggingface/datasets/pull/808 | [
"Hi @AmitMY, \r\n\r\nWere you able to figure this out?",
"I did not.\r\nWith all the limitations this repo currently has, I had to create a repo of my own using tfds to mitigate them. \r\nhttps://github.com/sign-language-processing/datasets/tree/master/sign_language_datasets/datasets/dgs_corpus\r\n\r\nClosing as ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/808",
"html_url": "https://github.com/huggingface/datasets/pull/808",
"diff_url": "https://github.com/huggingface/datasets/pull/808.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/808.patch",
"merged_at": null
} | 808 | true |
load_dataset for LOCAL CSV files reports CONNECTION ERROR | ## load_dataset for LOCAL CSV files reports CONNECTION ERROR
- **Description:**
A local demo csv file:
```
import pandas as pd
import numpy as np
from datasets import load_dataset
import torch
import transformers
df = pd.DataFrame(np.arange(1200).reshape(300,4))
df.to_csv('test.csv', header=False, index=Fal... | https://github.com/huggingface/datasets/issues/807 | [
"Hi !\r\nThe url works on my side.\r\n\r\nIs the url working in your navigator ?\r\nAre you connected to internet ? Does your network block access to `raw.githubusercontent.com` ?",
"> Hi !\r\n> The url works on my side.\r\n> \r\n> Is the url working in your navigator ?\r\n> Are you connected to internet ? Does y... | null | 807 | false |
Quail dataset urls are out of date | <h3>Code</h3>
```
from datasets import load_dataset
quail = load_dataset('quail')
```
<h3>Error</h3>
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/text-machine-lab/quail/master/quail_v1.2/xml/ordered/quail_1.2_train.xml
```
As per [quail v1.3 commit](https://github.co... | https://github.com/huggingface/datasets/issues/806 | [
"Hi ! Thanks for reporting.\r\nWe should fix the urls and use quail 1.3.\r\nIf you want to contribute feel free to fix the urls and open a PR :) ",
"Done! PR [https://github.com/huggingface/datasets/pull/820](https://github.com/huggingface/datasets/pull/820)\r\n\r\nUpdated links and also regenerated the metadata ... | null | 806 | false |
On loading a metric from datasets, I get the following error | `from datasets import load_metric`
`metric = load_metric('bleurt')`
Traceback:
210 class _ArrayXDExtensionType(pa.PyExtensionType):
211
212 ndims: int = None
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType'
Any help will be appreciated. Thank you. | https://github.com/huggingface/datasets/issues/805 | [
"Hi ! We support only pyarrow > 0.17.1 so that we have access to the `PyExtensionType` object.\r\nCould you update pyarrow and try again ?\r\n```\r\npip install --upgrade pyarrow\r\n```"
] | null | 805 | false |
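A quick check of the installed pyarrow, following the comment above (the version requirement is the one stated there):

```python
import pyarrow

print(pyarrow.__version__)
# The attribute the error above says is missing; False indicates a pyarrow
# that does not expose it, in which case upgrading usually resolves the error:
#   pip install --upgrade pyarrow
print(hasattr(pyarrow, "PyExtensionType"))
```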
Empty output/answer in TriviaQA test set (both in 'kilt_tasks' and 'trivia_qa') | # The issue
It's all in the title, it appears to be fine on the train and validation sets.
Is there some kind of mapping to do like for the questions (see https://github.com/huggingface/datasets/blob/master/datasets/kilt_tasks/README.md) ?
# How to reproduce
```py
from datasets import load_dataset
kilt_tas... | https://github.com/huggingface/datasets/issues/804 | [
"cc @yjernite is this expected ?",
"Yes: TriviaQA has a private test set for the leaderboard [here](https://competitions.codalab.org/competitions/17208)\r\n\r\nFor the KILT training and validation portions, you need to link the examples from the TriviaQA dataset as detailed here:\r\nhttps://github.com/huggingface... | null | 804 | false |