| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
Duplicates in the LAMA dataset | I observed duplicates in the LAMA probing dataset; see the minimal example below.
```
>>> import datasets
>>> dataset = datasets.load_dataset('lama')
No config specified, defaulting to: lama/trex
Reusing dataset lama (/home/anam/.cache/huggingface/datasets/lama/trex/1.1.0/97deffae13eca0a18e77dfb3960bb31741e973586f5c... | https://github.com/huggingface/datasets/issues/2218 | [
"Hi,\r\n\r\ncurrently the datasets API doesn't have a dedicated function to remove duplicate rows, but since the LAMA dataset is not too big (it fits in RAM), we can leverage pandas to help us remove duplicates:\r\n```python\r\n>>> from datasets import load_dataset, Dataset\r\n>>> dataset = load_dataset('lama', spl... | null | 2,218 | false |
Revert breaking change in cache_files property | #2025 changed the format of `Dataset.cache_files`.
Before it was formatted like
```python
[{"filename": "path/to/file.arrow", "start": 0, "end": 1337}]
```
and it was changed to
```python
["path/to/file.arrow"]
```
since there are no start/end offsets available anymore.
To make this less breaking, I'm setting... | https://github.com/huggingface/datasets/pull/2217 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2217",
"html_url": "https://github.com/huggingface/datasets/pull/2217",
"diff_url": "https://github.com/huggingface/datasets/pull/2217.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2217.patch",
"merged_at": "2021-04-14T14:24... | 2,217 | true |
added real labels for glue/mrpc to test set | Added real labels to the `glue.py` `mrpc` task for the test split. | https://github.com/huggingface/datasets/pull/2216 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2216",
"html_url": "https://github.com/huggingface/datasets/pull/2216",
"diff_url": "https://github.com/huggingface/datasets/pull/2216.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2216.patch",
"merged_at": "2021-04-13T13:53... | 2,216 | true |
Add datasets SLR35 and SLR36 to OpenSLR | I would like to add [SLR35](https://openslr.org/35/) (18GB) and [SLR36](https://openslr.org/36/) (22GB), which are large Javanese and Sundanese ASR training datasets collected by Google in collaboration with Reykjavik University and Universitas Gadjah Mada in Indonesia. | https://github.com/huggingface/datasets/pull/2215 | [
"Hi @lhoestq,\r\nCould you please help me, I got this error message in all \"ci/circleci: run_dataset_script_tests_pyarrow*\" tests:\r\n```\r\n...\r\n \"\"\"Wrapper classes for various types of tokenization.\"\"\"\r\n \r\n from bleurt.lib import bert_tokenization\r\n import tensorflow.compat.v1 as tf\r\... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2215",
"html_url": "https://github.com/huggingface/datasets/pull/2215",
"diff_url": "https://github.com/huggingface/datasets/pull/2215.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2215.patch",
"merged_at": "2021-04-13T14:05... | 2,215 | true |
load_metric error: module 'datasets.utils.file_utils' has no attribute 'add_start_docstrings' | I'm having the same problem as [Notebooks issue 10](https://github.com/huggingface/notebooks/issues/10) on datasets 1.2.1, and it seems to be an issue with the datasets package.
```python
>>> from datasets import load_metric
>>> metric = load_metric("glue", "sst2")
Traceback (most recent call last):
File "<std... | https://github.com/huggingface/datasets/issues/2214 | [
"Hi @nsaphra, thanks for reporting.\r\n\r\nThis issue was fixed in `datasets` version 1.3.0. Could you please update `datasets` and tell me if the problem persists?\r\n```shell\r\npip install -U datasets\r\n```",
"There might be a bug in the conda version of `datasets` 1.2.1 where the datasets/metric scripts are ... | null | 2,214 | false |
Fix lc_quad download checksum | Fixes #2211 | https://github.com/huggingface/datasets/pull/2213 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2213",
"html_url": "https://github.com/huggingface/datasets/pull/2213",
"diff_url": "https://github.com/huggingface/datasets/pull/2213.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2213.patch",
"merged_at": "2021-04-14T13:42... | 2,213 | true |
Can't reach "https://storage.googleapis.com/illuin/fquad/train.json.zip" when trying to load fquad dataset | I'm trying to load the [fquad dataset](https://huggingface.co/datasets/fquad) by running:
```Python
fquad = load_dataset("fquad")
```
which produces the following error:
```
Using custom data configuration default
Downloading and preparing dataset fquad/default (download: 3.14 MiB, generated: 6.62 MiB, ... | https://github.com/huggingface/datasets/issues/2212 | [
"Hi ! Apparently the data are not available from this url anymore. We'll replace it with the new url when it's available",
"I saw this on their website when we request to download the dataset:\r\n\r\n\r\... | null | 2,212 | false |
Getting checksum error when trying to load lc_quad dataset | I'm having issues loading the [lc_quad](https://huggingface.co/datasets/lc_quad) dataset by running:
```Python
lc_quad = load_dataset("lc_quad")
```
which is giving me the following error:
```
Using custom data configuration default
Downloading and preparing dataset lc_quad/default (download: 3.69 MiB, ge... | https://github.com/huggingface/datasets/issues/2211 | [
"Hi,\r\n\r\nI've already opened a PR with the fix. If you are in a hurry, just build the project from source and run:\r\n```bash\r\ndatasets-cli test datasets/lc_quad --save_infos --all_configs --ignore_verifications\r\n```\r\n\r\n",
"Ah sorry, I tried searching but couldn't find any related PR. \r\n\r\nThank you... | null | 2,211 | false |
dataloading slow when using HUGE dataset | Hi,
When I use datasets with 600GB of data, dataloading becomes significantly slower.
I am experimenting with two datasets: one is about 60GB and the other 600GB.
Simply speaking, my code uses the `datasets.set_format("torch")` function and lets pytorch-lightning handle DDP training.
When looking at the pytorch... | https://github.com/huggingface/datasets/issues/2210 | [
"Hi ! Yes this is an issue with `datasets<=1.5.0`\r\nThis issue has been fixed by #2122 , we'll do a new release soon :)\r\nFor now you can test it on the `master` branch.",
"Hi, thank you for your answer. I did not realize that my issue stems from the same problem. "
] | null | 2,210 | false |
Add code of conduct to the project | Add code of conduct to the project and link it from README and CONTRIBUTING.
This was already done in `transformers`. | https://github.com/huggingface/datasets/pull/2209 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2209",
"html_url": "https://github.com/huggingface/datasets/pull/2209",
"diff_url": "https://github.com/huggingface/datasets/pull/2209.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2209.patch",
"merged_at": "2021-04-12T17:55... | 2,209 | true |
Remove Python2 leftovers | This PR removes Python2 leftovers since this project aims for Python3.6+ (and as of 2020 Python2 is no longer officially supported) | https://github.com/huggingface/datasets/pull/2208 | [
"merging since the CI is fixed on master"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2208",
"html_url": "https://github.com/huggingface/datasets/pull/2208",
"diff_url": "https://github.com/huggingface/datasets/pull/2208.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2208.patch",
"merged_at": "2021-04-14T13:40... | 2,208 | true |
making labels consistent across the datasets | Hi,
To access the labels one can type
```
>>> a.features['label']
ClassLabel(num_classes=3, names=['entailment', 'neutral', 'contradiction'], names_file=None, id=None)
```
The labels, however, are sometimes not consistent with the actual labels; for instance, in the case of XNLI, the actual labels are 0, 1, 2, but if ... | https://github.com/huggingface/datasets/issues/2207 | [
"Hi ! The ClassLabel feature type encodes the labels as integers.\r\nThe integer corresponds to the index of the label name in the `names` list of the ClassLabel.\r\nHere that means that the labels are 'entailment' (0), 'neutral' (1), 'contradiction' (2).\r\n\r\nYou can get the label names back by using `a.features... | null | 2,207 | false |
Got pyarrow error when loading a dataset while adding special tokens into the tokenizer | I added five more special tokens to the GPT2 tokenizer. But after that, when I try to preprocess the data using my previous code, I get the error shown below:
Traceback (most recent call last):
File "/home/xuyan/anaconda3/envs/convqa/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1687, in _map_sin... | https://github.com/huggingface/datasets/issues/2206 | [
"Hi,\r\n\r\nthe output of the tokenizers is treated specially in the lib to optimize the dataset size (see the code [here](https://github.com/huggingface/datasets/blob/master/src/datasets/arrow_writer.py#L138-L141)). It looks like that one of the values in a dictionary returned by the tokenizer is out of the assume... | null | 2,206 | false |
Updating citation information on LinCE readme | Hi!
I just updated the citation information in this PR. It had an additional bibtex from one of the datasets used in LinCE and then the LinCE bibtex. I removed the former and added a link that shows the full list of citations for each dataset.
Thanks! | https://github.com/huggingface/datasets/pull/2205 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2205",
"html_url": "https://github.com/huggingface/datasets/pull/2205",
"diff_url": "https://github.com/huggingface/datasets/pull/2205.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2205.patch",
"merged_at": "2021-04-12T17:53... | 2,205 | true |
Add configurable options to `seqeval` metric | Fixes #2148
Adds options to use strict mode, different evaluation schemes, and sample weights, and to adjust `zero_division` behavior if encountered.
`seqeval` provides schemes as objects, hence dynamic import from string, to avoid making the user do the import (thanks to @albertvillanova for the `importlib` idea). | https://github.com/huggingface/datasets/pull/2204 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2204",
"html_url": "https://github.com/huggingface/datasets/pull/2204",
"diff_url": "https://github.com/huggingface/datasets/pull/2204.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2204.patch",
"merged_at": "2021-04-15T13:49... | 2,204 | true |
updated banking77 train and test data | https://github.com/huggingface/datasets/pull/2203 | [
"Hi ! Can you add a description regarding this PR ? Why do you think we need to update the dummy data used to test the `banking77` dataset loading script ?",
"Closing for inactivity. Feel free to re-open if you want to push this change"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2203",
"html_url": "https://github.com/huggingface/datasets/pull/2203",
"diff_url": "https://github.com/huggingface/datasets/pull/2203.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2203.patch",
"merged_at": null
} | 2,203 | true | |
Add classes GenerateMode, DownloadConfig and Version to the documentation | Add documentation for classes `GenerateMode`, `DownloadConfig` and `Version`.
Update the docstring of `load_dataset` to create cross-reference links to the classes.
Related to #2187. | https://github.com/huggingface/datasets/pull/2202 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2202",
"html_url": "https://github.com/huggingface/datasets/pull/2202",
"diff_url": "https://github.com/huggingface/datasets/pull/2202.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2202.patch",
"merged_at": "2021-04-12T17:57... | 2,202 | true |
Fix ArrowWriter overwriting features in ArrowBasedBuilder | This should fix the issues with CSV loading experienced in #2153 and #2200.
The CSV builder is an ArrowBasedBuilder that had an issue with its ArrowWriter used to write the arrow file from the csv data.
The writer wasn't initialized with the features passed by the user. Therefore the writer was inferring the featur... | https://github.com/huggingface/datasets/pull/2201 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2201",
"html_url": "https://github.com/huggingface/datasets/pull/2201",
"diff_url": "https://github.com/huggingface/datasets/pull/2201.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2201.patch",
"merged_at": "2021-04-12T13:32... | 2,201 | true |
_prepare_split will overwrite DatasetBuilder.info.features | Hi, here is my issue:
I initialized a CSV dataset builder with specific features:
```
def get_dataset_features(data_args):
features = {}
if data_args.text_features:
features.update({text_feature: hf_features.Value("string") for text_feature in data_args.text_features.strip().split(",")})
if da... | https://github.com/huggingface/datasets/issues/2200 | [
"Hi ! This might be related to #2153 \r\n\r\nYou're right the ArrowWriter should be initialized with `features=self.info.features` ! Good catch\r\nI'm opening a PR to fix this and also to figure out how it was not caught in the tests\r\n\r\nEDIT: opened #2201",
"> Hi ! This might be related to #2153\r\n> \r\n> Yo... | null | 2,200 | false |
Fix backward compatibility in Dataset.load_from_disk | Fix backward compatibility when loading from disk an old dataset that was saved with indices using the key "_indices_data_files".
Related to #2195. | https://github.com/huggingface/datasets/pull/2199 | [
"Hi @lhoestq, could you please check if this makes sense? Thanks.",
"What about using `_indices_data_files` field in save_to_disk instead of `_indices_files` ?\r\nThis way future datasets can also be reloaded from older versions of the lib\r\n\r\n`_indices_files` was introduced in a recent PR and was not released... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2199",
"html_url": "https://github.com/huggingface/datasets/pull/2199",
"diff_url": "https://github.com/huggingface/datasets/pull/2199.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2199.patch",
"merged_at": "2021-04-09T15:57... | 2,199 | true |
added file_permission in load_dataset | As discussed in #2065, I've added a `file_permission` argument to `load_dataset`.
Added mainly 2 things here:
1) Permissions of downloaded datasets, when converted to .arrow files, can be changed with the `file_permission` argument in `load_dataset` (default is 0o644 only)
2) In case the user uses `map` later on t... | https://github.com/huggingface/datasets/pull/2198 | [
"From offline discussions: we want to make the permissions handling consistent with `transformers`. However from discussion in https://github.com/huggingface/transformers/pull/11119 it looks like it might not be a good solution to provide this argument. Users should use umask for now, and we'll see how things evol... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2198",
"html_url": "https://github.com/huggingface/datasets/pull/2198",
"diff_url": "https://github.com/huggingface/datasets/pull/2198.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2198.patch",
"merged_at": null
} | 2,198 | true |
fix missing indices_files in load_from_disk | This should fix #2195
`load_from_disk` was failing if there was no "_indices_files" field in state.json. This can happen if the dataset has no indices mapping | https://github.com/huggingface/datasets/pull/2197 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2197",
"html_url": "https://github.com/huggingface/datasets/pull/2197",
"diff_url": "https://github.com/huggingface/datasets/pull/2197.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2197.patch",
"merged_at": "2021-04-09T09:54... | 2,197 | true |
`load_dataset` caches two arrow files? | Hi,
I am using datasets to load a large JSON file of 587GB.
I checked the cached folder and found that there are two arrow files created:
* `cache-ed205e500a7dc44c.arrow` - 355G
* `json-train.arrow` - 582G
Why is the first file created?
If I delete it, would I still be able to `load_from_disk`? | https://github.com/huggingface/datasets/issues/2196 | [
"Hi ! Files that starts with `cache-*` are cached computation files, i.e. they are the cached results of map/filter/cast/etc. operations. For example if you used `map` on your dataset to transform it, then the resulting dataset is going to be stored and cached in a `cache-*` file. These files are used to avoid havi... | null | 2,196 | false |
KeyError: '_indices_files' in `arrow_dataset.py` | After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.
Trace:
```
Traceback (most recent call last):
File "load_data.py", line 11, in <module>
dataset = load_from_disk(SRC)
File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line ... | https://github.com/huggingface/datasets/issues/2195 | [
"Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...",
"Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues"
] | null | 2,195 | false |
py3.7: TypeError: can't pickle _LazyModule objects | While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:
```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]
export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language... | https://github.com/huggingface/datasets/issues/2194 | [
"\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n"
] | null | 2,194 | false |
Filtering/mapping on one column is very slow | I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.
I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_colu... | https://github.com/huggingface/datasets/issues/2193 | [
"Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoi... | null | 2,193 | false |
Fix typo in huggingface hub | pip knows how to resolve to `huggingface_hub`, but conda doesn't!
The `packaging` dependency is also required for the build to complete. | https://github.com/huggingface/datasets/pull/2192 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2192",
"html_url": "https://github.com/huggingface/datasets/pull/2192",
"diff_url": "https://github.com/huggingface/datasets/pull/2192.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2192.patch",
"merged_at": "2021-04-08T15:47... | 2,192 | true |
Refactorize tests to use Dataset as context manager | Refactorize Dataset tests to use Dataset as context manager. | https://github.com/huggingface/datasets/pull/2191 | [
"I find very interesting that idea of using a fixture instead!\r\n\r\nLet me rework a little bit this PR, @lhoestq.",
"@lhoestq, as this is a big refactoring, I had many problems to solve the conflicts with the master branch...\r\n\r\nTherefore, I think it is better to merge this as it is, and then to make other ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2191",
"html_url": "https://github.com/huggingface/datasets/pull/2191",
"diff_url": "https://github.com/huggingface/datasets/pull/2191.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2191.patch",
"merged_at": "2021-04-19T07:53... | 2,191 | true |
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs | I used load_dataset to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.
```
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')
# filtering out examples that a... | https://github.com/huggingface/datasets/issues/2190 | [
"Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```",
"Hello @albertvillanova, \r\n\r\nThanks for... | null | 2,190 | false |
save_to_disk doesn't work when we use the concatenate_datasets function before creating the final dataset object. | As you can see, it saves the entire dataset.
@lhoestq
You can check by going through the following example,
```
from datasets import load_from_disk,concatenate_datasets
loaded_data=load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n=20
kb_list=[loaded_data.shard(n, i, contiguous=True) for i... | https://github.com/huggingface/datasets/issues/2189 | [
"Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon"
] | null | 2,189 | false |
Duplicate data in Timit dataset | I ran a simple script to list all texts in the Timit dataset and the texts were all the same.
Is this dataset corrupted?
**Code:**
```python
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```
**Result:**
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of... | https://github.com/huggingface/datasets/issues/2188 | [
"Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```",
"Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n"
] | null | 2,188 | false |
Question (potential issue?) related to datasets caching | I thought I had disabled datasets caching in my code, as follows:
```
from datasets import set_caching_enabled
...
def main():
# disable caching in datasets
set_caching_enabled(False)
```
However, in my log files I see messages like the following:
```
04/07/2021 18:34:42 - WARNING - datasets.build... | https://github.com/huggingface/datasets/issues/2187 | [
"An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out ... | null | 2,187 | false |
GEM: new challenge sets | This PR updates the GEM dataset to:
- remove extraneous fields in WikiAuto after https://github.com/huggingface/datasets/pull/2171 fixed the source
- add context and services to Schema Guided Dialog
- Add new or update challenge sets for MLSUM ES and DE, XSUM, and SGD | https://github.com/huggingface/datasets/pull/2186 | [
"cc @sebastiangehrmann"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2186",
"html_url": "https://github.com/huggingface/datasets/pull/2186",
"diff_url": "https://github.com/huggingface/datasets/pull/2186.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2186.patch",
"merged_at": "2021-04-07T21:56... | 2,186 | true |
.map() and distributed training | Hi,
I have a question regarding distributed training and the `.map` call on a dataset.
I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`.
The dataset is then tokenized:
```python
datasets = load_from_disk(dataset_path=my_path)
[...]
def tokeni... | https://github.com/huggingface/datasets/issues/2185 | [
"Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seem... | null | 2,185 | false |
Implementation of class_encode_column | Addresses #2176
I'm happy to discuss the API and internals! | https://github.com/huggingface/datasets/pull/2184 | [
"Made the required changes @lhoestq , sorry it took so much time!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2184",
"html_url": "https://github.com/huggingface/datasets/pull/2184",
"diff_url": "https://github.com/huggingface/datasets/pull/2184.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2184.patch",
"merged_at": "2021-04-16T11:26... | 2,184 | true |
Fix s3fs tests for py36 and py37+ | Recently several changes happened:
1. latest versions of `fsspec` require python>3.7 for async features
2. `s3fs` added a dependency on `aiobotocore`, which is not compatible with the `moto` s3 mock context manager
This PR fixes both issues, by pinning `fsspec` and `s3fs` for python 3.6, and by using `moto` in ser... | https://github.com/huggingface/datasets/pull/2183 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2183",
"html_url": "https://github.com/huggingface/datasets/pull/2183",
"diff_url": "https://github.com/huggingface/datasets/pull/2183.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2183.patch",
"merged_at": "2021-04-08T08:54... | 2,183 | true |
Set default in-memory value depending on the dataset size | Set a default value for `in_memory` depending on the size of the dataset to be loaded.
Close #2179.
TODO:
- [x] Add a section in the docs about this.
- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be c... | https://github.com/huggingface/datasets/pull/2182 | [
"I ping @krandiash to keep him up to date.",
"TODO:\r\n- [x] Add a section in the docs about this.\r\n- ~Add a warning if someone tries to specify `cache_file_name=` in `map`, `filter` etc. on a dataset that is in memory, since the computation is not going to be cached in this case.~",
"@lhoestq I have a questi... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2182",
"html_url": "https://github.com/huggingface/datasets/pull/2182",
"diff_url": "https://github.com/huggingface/datasets/pull/2182.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2182.patch",
"merged_at": "2021-04-20T10:04... | 2,182 | true |
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries) | Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and am now using it for a fairly big project.
When loading a huge json file of 500GB, pyarrow complains as follows:
```
Traceback (most recent call last):
File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-pack... | https://github.com/huggingface/datasets/issues/2181 | [
"Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well... | null | 2,181 | false |
Add tel to xtreme tatoeba | This should fix issue #2149 | https://github.com/huggingface/datasets/pull/2180 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2180",
"html_url": "https://github.com/huggingface/datasets/pull/2180",
"diff_url": "https://github.com/huggingface/datasets/pull/2180.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2180.patch",
"merged_at": "2021-04-07T15:50... | 2,180 | true |
Load small datasets in-memory instead of using memory map | Currently all datasets are loaded using memory mapping by default in `load_dataset`.
However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the cach... | https://github.com/huggingface/datasets/issues/2179 | [] | null | 2,179 | false |
Fix cast memory usage by using map on subtables | The `cast` operation on a pyarrow Table may create new arrays in memory.
This is an issue since users expect memory mapped datasets to not fill up the RAM.
To fix that I used `map` to write a new arrow file on disk when cast is used.
To make things more convenient I introduced the `arrow` formatting of a dataset, ... | https://github.com/huggingface/datasets/pull/2178 | [
"I addressed your comments about the docstrings and the output validation :)",
"I updated the bleurt mocking method and bleurt test is passing now.\r\nI also ran the slow tests and they are passing for bleurt.",
"Thanks @lhoestq and @albertvillanova !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2178",
"html_url": "https://github.com/huggingface/datasets/pull/2178",
"diff_url": "https://github.com/huggingface/datasets/pull/2178.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2178.patch",
"merged_at": "2021-04-13T09:28... | 2,178 | true |
add social thumbnail | # What does this PR do?
I added OpenGraph/Twitter Card support to the docs to create nice social thumbnails.

To be able to add these I needed to install `sphinxext-op... | https://github.com/huggingface/datasets/pull/2177 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2177",
"html_url": "https://github.com/huggingface/datasets/pull/2177",
"diff_url": "https://github.com/huggingface/datasets/pull/2177.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2177.patch",
"merged_at": "2021-04-07T08:16... | 2,177 | true |
Converting a Value to a ClassLabel | Hi!
In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.`
Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks! | https://github.com/huggingface/datasets/issues/2176 | [
"Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class... | null | 2,176 | false |
dataset.search_batch() function outputs all -1 indices sometime. | I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.
During the retrieval phase exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_... | https://github.com/huggingface/datasets/issues/2175 | [
"Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.",
"@lhoestq @patrickvonplaten \r\n\r\nI also found another short... | null | 2,175 | false |
Pin docutils for better doc | The latest release of docutils makes the navbar in the documentation weird and the Markdown wrongly interpreted:

We had the same problem in Transformers and solved it by pinning docutils (a dep of sphinx... | https://github.com/huggingface/datasets/pull/2174 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2174",
"html_url": "https://github.com/huggingface/datasets/pull/2174",
"diff_url": "https://github.com/huggingface/datasets/pull/2174.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2174.patch",
"merged_at": "2021-04-06T12:55... | 2,174 | true |
Add OpenSLR dataset | OpenSLR (https://openslr.org/) is a site devoted to hosting speech and language resources, such as training corpora for speech recognition, and software related to speech recognition. There are around 80 speech datasets listed in OpenSLR; currently this PR includes only 9 speech datasets: SLR41, SLR42, SLR43, SLR44, SLR... | https://github.com/huggingface/datasets/pull/2173 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2173",
"html_url": "https://github.com/huggingface/datasets/pull/2173",
"diff_url": "https://github.com/huggingface/datasets/pull/2173.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2173.patch",
"merged_at": "2021-04-12T16:54... | 2,173 | true |
Pin fsspec lower than 0.9.0 | Today's release of `fsspec` 0.9.0 implied a new release of `s3fs` 0.6.0 but this version breaks the CI (see [here](https://app.circleci.com/pipelines/github/huggingface/datasets/5312/workflows/490f3240-cd1c-4dd1-bb60-b416771c5584/jobs/32734) for example)
I'm pinning `fsspec` until this has been resolved | https://github.com/huggingface/datasets/pull/2172 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2172",
"html_url": "https://github.com/huggingface/datasets/pull/2172",
"diff_url": "https://github.com/huggingface/datasets/pull/2172.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2172.patch",
"merged_at": "2021-04-06T09:49... | 2,172 | true |
Fixed the link to wikiauto training data. | https://github.com/huggingface/datasets/pull/2171 | [
"Also you can ignore the CI failing on `docs`, this has been fixed on master :)",
"@lhoestq I need to update other stuff on GEM later today too, so will merge this one and remove columns in the next PR!",
"Ok !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2171",
"html_url": "https://github.com/huggingface/datasets/pull/2171",
"diff_url": "https://github.com/huggingface/datasets/pull/2171.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2171.patch",
"merged_at": "2021-04-06T16:05... | 2,171 | true | |
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date | Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides
```
20201220/ 02-Feb-2021 01:36 -
20210101/ 21-Feb-2021 01:26 -
20210120/ ... | https://github.com/huggingface/datasets/issues/2170 | [
"It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the fi... | null | 2,170 | false |
Updated WER metric implementation to avoid memory issues | This is in order to fix this issue:
https://github.com/huggingface/datasets/issues/2078
| https://github.com/huggingface/datasets/pull/2169 | [
"Hi ! Thanks for suggesting this fix \r\nUnfortunately it looks like it's already been fixed by #2111 \r\n\r\nFeel free to share your thoughts about this PR !\r\n\r\nI'm closing this one if you don't mind."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2169",
"html_url": "https://github.com/huggingface/datasets/pull/2169",
"diff_url": "https://github.com/huggingface/datasets/pull/2169.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2169.patch",
"merged_at": null
} | 2,169 | true |
Preserve split type when reloading dataset | Fixes #2167
Using `eval` is not ideal for security reasons (in web apps I assume), but without it the code would be much more complex IMO.
In terms of style, instead of explicitly importing a private member (`_RelativeInstruction`), we can add these imports at the top of the module:
```python
from . import arr... | https://github.com/huggingface/datasets/pull/2168 | [
"Thanks for diving into this !\r\n\r\nBefore going further, I just want to make sure if using `eval` is the right solution\r\nPersonally I'm not a big fan of `eval` since it has many security concerns. Also storing string representations of python objects in the json files is not ideal either IMO, so maybe it's pos... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2168",
"html_url": "https://github.com/huggingface/datasets/pull/2168",
"diff_url": "https://github.com/huggingface/datasets/pull/2168.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2168.patch",
"merged_at": "2021-04-19T09:08... | 2,168 | true |
Split type not preserved when reloading the dataset | A minimal reproducible example:
```python
>>> from datasets import load_dataset, Dataset
>>> dset = load_dataset("sst", split="train")
>>> dset.save_to_disk("sst")
>>> type(dset.split)
<class 'datasets.splits.NamedSplit'>
>>> dset = Dataset.load_from_disk("sst")
>>> type(dset.split) # NamedSplit expected
<cla... | https://github.com/huggingface/datasets/issues/2167 | [] | null | 2,167 | false |
Regarding Test Sets for the GEM datasets | @yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)?
e.g.
```
from datasets import load_dataset
DATASET_NAME="common_gen"
data = load_dataset("gem", DATASET_NAME)
```
The test set doesn't have the target or references.
```
data['test... | https://github.com/huggingface/datasets/issues/2166 | [
"Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of... | null | 2,166 | false |
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset | Hi,
I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like:
```
train_ds = nlp.load_dataset('scientific_papers', 'arxiv')
train_ds.set_format(
type="torch",
columns=["input_ids", "attention_mask", "global_attention_mask", "labels"],
)
engine, _, _, _ = deepspeed.initialize(
... | https://github.com/huggingface/datasets/issues/2165 | [
"Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r... | null | 2,165 | false |
Replace assertTrue(isinstance with assertIsInstance in tests | Replaces all the occurrences of the `assertTrue(isinstance(` pattern with `assertIsInstance`. | https://github.com/huggingface/datasets/pull/2164 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2164",
"html_url": "https://github.com/huggingface/datasets/pull/2164",
"diff_url": "https://github.com/huggingface/datasets/pull/2164.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2164.patch",
"merged_at": "2021-04-06T14:41... | 2,164 | true |
Concat only unique fields in DatasetInfo.from_merge | I thought someone from the community with less experience would be interested in fixing this issue, but that wasn't the case.
Fixes #2103 | https://github.com/huggingface/datasets/pull/2163 | [
"Hi @mariosasko,\r\nJust came across this PR and I was wondering if we can use\r\n`description = \"\\n\\n\".join(OrderedDict.fromkeys([info.description for info in dataset_infos]))`\r\n\r\nThis will obviate the need for `unique` and is almost as fast as `set`. We could have used `dict` inplace of `OrderedDict` but ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2163",
"html_url": "https://github.com/huggingface/datasets/pull/2163",
"diff_url": "https://github.com/huggingface/datasets/pull/2163.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2163.patch",
"merged_at": "2021-04-06T14:39... | 2,163 | true |
visualization for cc100 is broken | Hi,
visualization through the dataset viewer for cc100 is broken:
https://huggingface.co/datasets/viewer/
Thanks a lot.
| https://github.com/huggingface/datasets/issues/2162 | [
"This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?",
"Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself bu... | null | 2,162 | false |
any possibility to download part of large datasets only? | Hi,
Some of the datasets I need, like cc100, are very large, so I wonder if I can download the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling. Thanks! | https://github.com/huggingface/datasets/issues/2161 | [
"Not yet but it’s on the short/mid-term roadmap (requested by many indeed).",
"oh, great, really awesome feature to have, thank you very much for the great, fabulous work",
"We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)",
"thanks a lot Quentin, this would be... | null | 2,161 | false |
data_args.preprocessing_num_workers almost freezes | Hi @lhoestq
I am running this code from huggingface transformers https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py
To speed up tokenization, since I am running on multiple datasets, I am using `data_args.preprocessing_num_workers = 4` with the opus100 corpus, but this moves ... | https://github.com/huggingface/datasets/issues/2160 | [
"Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ ... | null | 2,160 | false |
adding ccnet dataset | ## Adding a Dataset
- **Name:** ccnet
- **Description:**
Common Crawl
- **Paper:**
https://arxiv.org/abs/1911.00359
- **Data:**
https://github.com/facebookresearch/cc_net
- **Motivation:**
this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite importan... | https://github.com/huggingface/datasets/issues/2159 | [
"closing since I think this is cc100, just the name has been changed. thanks "
] | null | 2,159 | false |
viewer "fake_news_english" error | When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I've got this error:
> ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional depe... | https://github.com/huggingface/datasets/issues/2158 | [
"Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly",
"This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue"
] | null | 2,158 | false |
updated user permissions based on umask | Updated user permissions based on the running user's umask (#2065). Let me know if `0o666` looks good or whether I should change it to `~umask` only (to give execute permissions as well) | https://github.com/huggingface/datasets/pull/2157 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2157",
"html_url": "https://github.com/huggingface/datasets/pull/2157",
"diff_url": "https://github.com/huggingface/datasets/pull/2157.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2157.patch",
"merged_at": "2021-04-06T07:19... | 2,157 | true |
User permissions | Updated user permissions based on the running user's umask. Let me know if `0o666` looks good or whether I should change it to `~umask` only (to give execute permissions as well) | https://github.com/huggingface/datasets/pull/2156 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2156",
"html_url": "https://github.com/huggingface/datasets/pull/2156",
"diff_url": "https://github.com/huggingface/datasets/pull/2156.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2156.patch",
"merged_at": null
} | 2,156 | true |
Add table classes to the documentation | Following #2025 , I added the table classes to the documentation
cc @albertvillanova | https://github.com/huggingface/datasets/pull/2155 | [
"Just note that docstrings injected from PyArrow do not follow the same convention for formatting types in `Args` or `Returns` as we do... Not a big problem, anyway! 😄 "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2155",
"html_url": "https://github.com/huggingface/datasets/pull/2155",
"diff_url": "https://github.com/huggingface/datasets/pull/2155.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2155.patch",
"merged_at": "2021-03-31T15:42... | 2,155 | true |
Adding the NorNE dataset for Norwegian POS and NER | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | https://github.com/huggingface/datasets/pull/2154 | [
"Awesome!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2154",
"html_url": "https://github.com/huggingface/datasets/pull/2154",
"diff_url": "https://github.com/huggingface/datasets/pull/2154.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2154.patch",
"merged_at": "2021-04-01T09:16... | 2,154 | true |
load_dataset ignoring features | First of all, I'm sorry if it is a repeated issue or the changes are already in master, I searched and I didn't find anything.
I'm using datasets 1.5.0

As you can see, when I load the dataset, the C... | https://github.com/huggingface/datasets/issues/2153 | [
"Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201",
"Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.",
"Hi :) We're indeed working on tutorials that we will add to the docs... | null | 2,153 | false |
Update README.md | Updated some descriptions of Wino_Bias dataset. | https://github.com/huggingface/datasets/pull/2152 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2152",
"html_url": "https://github.com/huggingface/datasets/pull/2152",
"diff_url": "https://github.com/huggingface/datasets/pull/2152.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2152.patch",
"merged_at": "2021-04-01T10:20... | 2,152 | true |
Add support for axis in concatenate datasets | Add support for `axis` (0 or 1) in `concatenate_datasets`.
Close #853. | https://github.com/huggingface/datasets/pull/2151 | [
"@lhoestq I am going to implement the consolidation step you mentioned in #1870.",
"@lhoestq I was thinking that the order of the TableBlocks is not relevant, isn't it?\r\n\r\nI mean, in order to consolidate _consecutive_ in-memory table blocks, in this case:\r\n```\r\nblocks = [in_memory_1, memory_mapped, in_mem... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2151",
"html_url": "https://github.com/huggingface/datasets/pull/2151",
"diff_url": "https://github.com/huggingface/datasets/pull/2151.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2151.patch",
"merged_at": "2021-04-19T16:07... | 2,151 | true |
Allow pickling of big in-memory tables | This should fix issue #2134
Pickling is limited to objects under 4GiB, so it's not possible to pickle a big arrow table (for multiprocessing, for example).
For big tables, we have to write them on disk and only pickle the path to the table. | https://github.com/huggingface/datasets/pull/2150 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2150",
"html_url": "https://github.com/huggingface/datasets/pull/2150",
"diff_url": "https://github.com/huggingface/datasets/pull/2150.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2150.patch",
"merged_at": "2021-03-31T10:37... | 2,150 | true |
Telugu subset missing for xtreme tatoeba dataset |
```python
from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
```
```
ValueError: BuilderConfig tatoeba.tel not found.
```
but language tel is actually included in xtreme:
https://github.com/google-research/xtreme/blob/master/utils_preprocess.py
```python
def tatoeba_preprocess(args):
lang3_dict ... | https://github.com/huggingface/datasets/issues/2149 | [
"Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this",
"Fixed in #2180"
] | null | 2,149 | false |
Add configurable options to `seqeval` metric | Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation).
However, seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs... | https://github.com/huggingface/datasets/issues/2148 | [
"Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `impor... | null | 2,148 | false |
Render docstring return type as inline | This documentation setting will avoid having the return type in a separate line under `Return type`.
See e.g. current docs for `Dataset.to_csv`. | https://github.com/huggingface/datasets/pull/2147 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2147",
"html_url": "https://github.com/huggingface/datasets/pull/2147",
"diff_url": "https://github.com/huggingface/datasets/pull/2147.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2147.patch",
"merged_at": "2021-03-31T13:11... | 2,147 | true |
Dataset file size on disk is very large with 3D Array | Hi,
I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as 3D arrays with dtype=uint8.
The actual size on disk is surprisingly large. It takes 520 MB. Here is some info from `dataset_info.json`.
`{
"description": "",
"citation": ""... | https://github.com/huggingface/datasets/issues/2146 | [
"Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for exampl... | null | 2,146 | false |
Implement Dataset add_column | Implement `Dataset.add_column`.
Close #1954. | https://github.com/huggingface/datasets/pull/2145 | [
"#2274 has been merged. You can now merge master into this branch and use `assert_arrow_metadata_are_synced_with_dataset_features(dset)` to make sure that the metadata are good :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2145",
"html_url": "https://github.com/huggingface/datasets/pull/2145",
"diff_url": "https://github.com/huggingface/datasets/pull/2145.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2145.patch",
"merged_at": "2021-04-29T14:50... | 2,145 | true |
Loading wikipedia 20200501.en throws pyarrow related error | **Problem description**
I am getting the following error when trying to load wikipedia/20200501.en dataset.
**Error log**
Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikiped... | https://github.com/huggingface/datasets/issues/2144 | [
"That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```",
"Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa... | null | 2,144 | false |
task casting via load_dataset | WIP
Not satisfied with the API: it means that, as a dataset implementer, I need to write a boilerplate function and classes for each `<dataset><task>` "facet". | https://github.com/huggingface/datasets/pull/2143 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2143",
"html_url": "https://github.com/huggingface/datasets/pull/2143",
"diff_url": "https://github.com/huggingface/datasets/pull/2143.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2143.patch",
"merged_at": null
} | 2,143 | true |
Gem V1.1 | This branch updates the GEM benchmark to its 1.1 version which includes:
- challenge sets for most tasks
- detokenized TurkCorpus to match the rest of the text simplification subtasks
- fixed inputs for TurkCorpus and ASSET test sets
- 18 languages in WikiLingua
cc @sebastianGehrmann | https://github.com/huggingface/datasets/pull/2142 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2142",
"html_url": "https://github.com/huggingface/datasets/pull/2142",
"diff_url": "https://github.com/huggingface/datasets/pull/2142.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2142.patch",
"merged_at": "2021-03-30T00:10... | 2,142 | true |
added spans field for the wikiann datasets | Hi @lhoestq
I tried to add spans to the wikiann datasets.
Thanks a lot for kindly having a look.
This addresses https://github.com/huggingface/datasets/issues/2130.
Best regards
Rabeeh | https://github.com/huggingface/datasets/pull/2141 | [
"Hi @lhoestq \r\nThanks a lot for taking time checking it. I update \"dataset_infos.json\", I added description to the function of _generate_samples in wikiann.py but I was not sure about the format to write in README. thanks. ",
"Thanks !\r\n\r\nFor the fields description in the dataset card, something like thi... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2141",
"html_url": "https://github.com/huggingface/datasets/pull/2141",
"diff_url": "https://github.com/huggingface/datasets/pull/2141.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2141.patch",
"merged_at": "2021-03-31T13:27... | 2,141 | true |
add banking77 dataset | Intent classification/detection dataset from the banking category with 77 unique intents. | https://github.com/huggingface/datasets/pull/2140 | [
"@lhoestq I updated files"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2140",
"html_url": "https://github.com/huggingface/datasets/pull/2140",
"diff_url": "https://github.com/huggingface/datasets/pull/2140.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2140.patch",
"merged_at": "2021-04-09T09:32... | 2,140 | true |
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split | Hi,
Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`.
Here is the minimal reproducible example:
```python
from datasets import load_dataset
from dat... | https://github.com/huggingface/datasets/issues/2139 | [
"Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```",
"Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!"
] | null | 2,139 | false |
Add CER metric | Add the Character Error Rate (CER) metric that is used in ASR evaluation. I have also written unit tests (hopefully thorough enough), but I'm not sure how to integrate them into the existing codebase.
```python
from cer import CER
cer = CER()
class TestCER(unittest.TestCase):
def test_cer_case_senstive(self)... | https://github.com/huggingface/datasets/pull/2138 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2138",
"html_url": "https://github.com/huggingface/datasets/pull/2138",
"diff_url": "https://github.com/huggingface/datasets/pull/2138.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2138.patch",
"merged_at": "2021-04-06T07:14... | 2,138 | true |
Fix missing infos from concurrent dataset loading | This should fix issue #2131
When calling `load_dataset` at the same time from 2 workers, one of the workers could end up with missing split infos when reloading the dataset from the cache.
| https://github.com/huggingface/datasets/pull/2137 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2137",
"html_url": "https://github.com/huggingface/datasets/pull/2137",
"diff_url": "https://github.com/huggingface/datasets/pull/2137.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2137.patch",
"merged_at": "2021-03-31T10:35... | 2,137 | true |
fix dialogue action slot name and value | fix #2128 | https://github.com/huggingface/datasets/pull/2136 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2136",
"html_url": "https://github.com/huggingface/datasets/pull/2136",
"diff_url": "https://github.com/huggingface/datasets/pull/2136.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2136.patch",
"merged_at": "2021-03-31T12:48... | 2,136 | true |
en language data from MLQA dataset is missing | Hi
I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq Thank you for your help in fixing this issue. | https://github.com/huggingface/datasets/issues/2135 | [
"Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?",
"Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, ... | null | 2,135 | false |
Saving large in-memory datasets with save_to_disk crashes because of pickling | Using Datasets 1.5.0 on Python 3.7.
Recently I've been working on medium-to-large-size datasets (pretokenized raw text sizes from a few gigabytes to the low tens of gigabytes), and I have found that several preprocessing steps are massively faster when done in memory, and I have the ability to requisition a lot of RAM, so... | https://github.com/huggingface/datasets/issues/2134 | [
"Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_... | null | 2,134 | false |
bug in mlqa dataset | Hi
Looking into the MLQA dataset for language "ar":
```
"question": [
"\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?",
"\u0643\u0... | https://github.com/huggingface/datasets/issues/2133 | [
"If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u064... | null | 2,133 | false |
TydiQA dataset is mixed and is not split per language | Hi @lhoestq
Currently TydiQA is mixed, and users can only access the whole training set of all languages:
https://www.tensorflow.org/datasets/catalog/tydi_qa
To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes it hard to use. This is much convenien... | https://github.com/huggingface/datasets/issues/2132 | [
"You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\... | null | 2,132 | false |
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object | version: 1.5.0
I met a very strange error. I am training a large-scale language model and need to train on 2 machines (workers).
Sometimes I get this error: `TypeError: 'NoneType' object is not iterable`.
This is the traceback:
```
Traceback (most recent call last):
  File "run_gpt.py"... | https://github.com/huggingface/datasets/issues/2131 | [
"Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue",
"The PR got merged :)\r\nFeel free to try it out on the `master` br... | null | 2,131 | false |
wikiann dataset is missing columns | Hi
The Wikiann dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from huggingface datasets. Could you please have a look? Thank you @lhoestq | https://github.com/huggingface/datasets/issues/2130 | [
"Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ",
"Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined ... | null | 2,130 | false |
How to train BERT model with next sentence prediction? | Hello.
I'm trying to pretrain the BERT model with next sentence prediction. Is there any function that supports next sentence prediction,
like `TextDatasetForNextSentencePrediction` from `huggingface/transformers`?
| https://github.com/huggingface/datasets/issues/2129 | [
"Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.",
"Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction... | null | 2,129 | false |
Dialogue action slot name and value are reversed in MultiWoZ 2.2 | Hi @yjernite, thank you for adding MultiWoZ 2.2 to the huggingface datasets platform. It is very helpful!
I spotted an error: the order of dialogue action slot names and values is reversed.
https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.p... | https://github.com/huggingface/datasets/issues/2128 | [
"Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) "
] | null | 2,128 | false |
make documentation more clear to use different cloud storage | This PR extends the cloud storage documentation. To show you can use a different `fsspec` implementation. | https://github.com/huggingface/datasets/pull/2127 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2127",
"html_url": "https://github.com/huggingface/datasets/pull/2127",
"diff_url": "https://github.com/huggingface/datasets/pull/2127.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2127.patch",
"merged_at": "2021-03-29T12:16... | 2,127 | true |
Replace legacy torch.Tensor constructor with torch.tensor | The title says it all (motivated by [this issue](https://github.com/pytorch/pytorch/issues/53146) in the pytorch repo). | https://github.com/huggingface/datasets/pull/2126 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2126",
"html_url": "https://github.com/huggingface/datasets/pull/2126",
"diff_url": "https://github.com/huggingface/datasets/pull/2126.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2126.patch",
"merged_at": "2021-03-29T09:27... | 2,126 | true |
Is dataset timit_asr broken? | Using the `timit_asr` dataset, I saw that all records are the same.
``` python
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML
def show_random_elements(dataset, num_example... | https://github.com/huggingface/datasets/issues/2125 | [
"Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ",
"@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem."
] | null | 2,125 | false |
Adding ScaNN library to do MIPS? | @lhoestq Hi, I am thinking of adding this new Google library to do MIPS (maximum inner product search), similar to **add_faiss_index**. As the paper suggests, it is really fast when it comes to retrieving the nearest neighbors.
https://github.com/google-research/google-research/tree/master/scann
 but it sounds really cool !\r\n"
] | null | 2,124 | false |
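For context, ScaNN's README builds a searcher roughly as below (an adapted sketch; the parameters and random data are illustrative, and the embeddings are assumed to be float32 numpy arrays):

```python
import numpy as np
import scann

db = np.random.rand(10_000, 128).astype(np.float32)  # stand-in corpus embeddings
queries = np.random.rand(5, 128).astype(np.float32)

searcher = (
    scann.scann_ops_pybind.builder(db, 10, "dot_product")  # MIPS via dot product
    .tree(num_leaves=100, num_leaves_to_search=10, training_sample_size=10_000)
    .score_ah(2, anisotropic_quantization_threshold=0.2)
    .reorder(30)
    .build()
)
neighbors, distances = searcher.search_batched(queries)
```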
Problem downloading GEM wiki_auto_asset_turk dataset | @yjernite
### Summary
I am currently working on the GEM datasets and do not manage to download the wiki_auto_asset_turk data, whereas all other datasets download well with the same code.
### Steps to reproduce
Code snippet:
```python
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
d...
```
| https://github.com/huggingface/datasets/issues/2123 | [
"Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n``` ",
"Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.",
"Is there an... | null | 2,123 | false |
Fast table queries with interpolation search | ## Intro
This should fix issue #1803
Currently querying examples in a dataset is O(n) because of the underlying pyarrow ChunkedArrays implementation.
To fix this, I implemented interpolation search, which is quite effective since datasets usually satisfy the condition of evenly distributed chunks (the default ch... | https://github.com/huggingface/datasets/pull/2122 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2122",
"html_url": "https://github.com/huggingface/datasets/pull/2122",
"diff_url": "https://github.com/huggingface/datasets/pull/2122.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2122.patch",
"merged_at": "2021-04-06T14:33... | 2,122 | true |
Add Validation For README | Hi @lhoestq, @yjernite
This is a simple README parser. All classes specific to different sections can inherit the `Section` class, and we can define more attributes in each.
Let me know if this is going in the right direction :)
Currently the output looks like this, for `to_dict()` on `FashionMNIST` `README.md`:
... | https://github.com/huggingface/datasets/pull/2121 | [
"Good start! Here are some proposed next steps:\r\n- We want the Class structure to reflect the template - so the parser know what section titles to expect and when something has gone wrong\r\n- As a result, we don't need to parse the table of contents, since it will always be the same\r\n- For each section/subsect... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2121",
"html_url": "https://github.com/huggingface/datasets/pull/2121",
"diff_url": "https://github.com/huggingface/datasets/pull/2121.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2121.patch",
"merged_at": "2021-05-10T09:41... | 2,121 | true |
dataset viewer does not work anymore | Hi
I normally use this link to see all datasets and how I can load them:
https://huggingface.co/datasets/viewer/
Now I am getting:
502 Bad Gateway
nginx/1.18.0 (Ubuntu)
Could you bring this webpage back? It was very helpful, @lhoestq.
Thanks for your help | https://github.com/huggingface/datasets/issues/2120 | [
"Thanks for reporting :) We're looking into it",
"Back up. "
] | null | 2,120 | false |
copy.deepcopy os.environ instead of copy | Fixes: https://github.com/huggingface/datasets/issues/2115
- bug fix: using environ.copy() returns a dict.
- using deepcopy(environ) returns an `_Environ` object
- Changing the datatype of the `_Environ` object can break code if subsequent libraries perform operations using APIs exclusive to the environ object, lik... | https://github.com/huggingface/datasets/pull/2119 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2119",
"html_url": "https://github.com/huggingface/datasets/pull/2119",
"diff_url": "https://github.com/huggingface/datasets/pull/2119.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2119.patch",
"merged_at": "2021-03-26T15:13... | 2,119 | true |