| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
Remove os.environ.copy in Dataset.map | Replace `os.environ.copy` with in-place modification
Fixes #2115 | https://github.com/huggingface/datasets/pull/2118 | [
"I thought deepcopy on `os.environ` is unsafe (see [this](https://stackoverflow.com/questions/13142972/using-copy-deepcopy-on-os-environ-in-python-appears-broken)), but I can't replicate the behavior described in the linked SO thread.\r\n\r\nClosing this one because #2119 has a much cleaner approach."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2118",
"html_url": "https://github.com/huggingface/datasets/pull/2118",
"diff_url": "https://github.com/huggingface/datasets/pull/2118.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2118.patch",
"merged_at": null
} | 2,118 | true |
load_metric from local "glue.py" meets error 'NoneType' object is not callable | actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
---------------------------------------------------------------------------
TypeError Traceback (most recent... | https://github.com/huggingface/datasets/issues/2117 | [
"@Frankie123421 what was the resolution to this?",
"> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric",
"thank you!"
] | null | 2,117 | false |
Creating custom dataset results in error while calling the map() function | Calling `map()` of the `datasets` library results in an error while defining a custom dataset.
Reproducible example:
```
import datasets
class MyDataset(datasets.Dataset):
    def __init__(self, sentences):
        "Initialization"
        self.samples = sentences
    def __len__(self):
        "Denotes the ... | https://github.com/huggingface/datasets/issues/2116 | [
"Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over... | null | 2,116 | false |
The datasets.map() implementation modifies the datatype of os.environ object | In our testing, we noticed that the datasets.map() implementation is modifying the datatype of the Python os.environ object from '_Environ' to 'dict'.
This causes following function calls to fail as follows:
`
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes... | https://github.com/huggingface/datasets/issues/2115 | [] | null | 2,115 | false |
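A minimal sketch of the difference this issue describes and of the in-place approach named in PR 2118 above; the variable name is made up for illustration and this is not the library's fix itself.

```python
import os

# Copying os.environ yields a plain dict, so os._Environ behaviour is lost.
env_copy = os.environ.copy()
print(type(os.environ), type(env_copy))  # <class 'os._Environ'> <class 'dict'>

# In-place modification with restore keeps os.environ's original type intact.
saved = os.environ.get("MY_TEMP_VAR")  # "MY_TEMP_VAR" is a hypothetical variable name
os.environ["MY_TEMP_VAR"] = "1"
try:
    pass  # run the code that needs the temporary environment
finally:
    if saved is None:
        os.environ.pop("MY_TEMP_VAR", None)
    else:
        os.environ["MY_TEMP_VAR"] = saved
```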
Support for legal NLP datasets (EURLEX, ECtHR cases and EU-REG-IR) | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084)
- EU-REG-IR (https://arxiv.org/abs/2101.10726) | https://github.com/huggingface/datasets/pull/2114 | [
"> Awesome thank you :)\r\n> This is really cool\r\n> \r\n> I left a few comments.\r\n> \r\n> Also it looks like the dummy data are quite big (100-200KB each). Can you try to reduce their sizes please ? For example I noticed that all the jsonl files inside the `dummy_data.zip` files have 20 lines. Can you only keep... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2114",
"html_url": "https://github.com/huggingface/datasets/pull/2114",
"diff_url": "https://github.com/huggingface/datasets/pull/2114.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2114.patch",
"merged_at": "2021-03-31T10:38... | 2,114 | true |
Implement Dataset as context manager | When used as a context manager, the dataset would be safely deleted if an exception is raised.
This will avoid:
> During handling of the above exception, another exception occurred: | https://github.com/huggingface/datasets/pull/2113 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2113",
"html_url": "https://github.com/huggingface/datasets/pull/2113",
"diff_url": "https://github.com/huggingface/datasets/pull/2113.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2113.patch",
"merged_at": "2021-03-31T08:30... | 2,113 | true |
Support for legal NLP datasets (EURLEX and ECtHR cases) | Add support for two legal NLP datasets:
- EURLEX (https://www.aclweb.org/anthology/P19-1636/)
- ECtHR cases (https://arxiv.org/abs/2103.13084) | https://github.com/huggingface/datasets/pull/2112 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2112",
"html_url": "https://github.com/huggingface/datasets/pull/2112",
"diff_url": "https://github.com/huggingface/datasets/pull/2112.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2112.patch",
"merged_at": null
} | 2,112 | true |
Compute WER metric iteratively | Compute WER metric iteratively to avoid MemoryError.
Fix #2078. | https://github.com/huggingface/datasets/pull/2111 | [
"I discussed with Patrick and I think we could have a nice addition: have a parameter `concatenate_texts` that, if `True`, uses the old implementation.\r\n\r\nBy default `concatenate_texts` would be `False`, so that sentences are evaluated independently, and to save resources (the WER computation has a quadratic co... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2111",
"html_url": "https://github.com/huggingface/datasets/pull/2111",
"diff_url": "https://github.com/huggingface/datasets/pull/2111.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2111.patch",
"merged_at": "2021-04-06T07:20... | 2,111 | true |
Fix incorrect assertion in builder.py | Fix incorrect num_examples comparison assertion in builder.py | https://github.com/huggingface/datasets/pull/2110 | [
"Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\nSo unfortunately we can't use this assertion you suggested",
"> Hi ! The SplitInfo is not always available. By default you would get `split_info.num_examples == 0`\r\n> So unfortunately we can't use this assert... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2110",
"html_url": "https://github.com/huggingface/datasets/pull/2110",
"diff_url": "https://github.com/huggingface/datasets/pull/2110.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2110.patch",
"merged_at": "2021-04-12T13:33... | 2,110 | true |
Add more issue templates and customize issue template chooser | When opening an issue, it is not evident for the users how to choose a blank issue template. There is a link at the bottom of all the other issue templates (`Don’t see your issue here? Open a blank issue.`), but this is not very visible for users. This is the reason why many users finally chose the `add-dataset` templa... | https://github.com/huggingface/datasets/pull/2109 | [
"If you agree, I could also add a link to [Discussions](https://github.com/huggingface/datasets/discussions) in order to reinforce the use of Discussion to make Questions (instead of Issues).\r\n\r\nI could also add some other templates: Bug, Feature Request,...",
"@theo-m we wrote our same comments at the same t... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2109",
"html_url": "https://github.com/huggingface/datasets/pull/2109",
"diff_url": "https://github.com/huggingface/datasets/pull/2109.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2109.patch",
"merged_at": "2021-04-19T06:20... | 2,109 | true |
Is there a way to use a GPU only when training an Index in the process of add_faiss_index? | Motivation - Some FAISS indexes like IVF include a training step that clusters the dataset into a given number of indexes. It would be nice if we could use a GPU to do the training step and convert the index back to CPU as mentioned in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6... | https://github.com/huggingface/datasets/issues/2108 | [] | null | 2,108 | false |
Metadata validation | - `pydantic` metadata schema with dedicated validators against our taxonomy
- CI script to validate new changes against this schema and start a virtuous loop
- soft validation on tasks ids since we expect the taxonomy to undergo some changes in the near future
for reference with the current validation we have ~365... | https://github.com/huggingface/datasets/pull/2107 | [
"> Also I was wondering this is really needed to have `utils.metadata` as a submodule of `datasets` ? This is only used by the CI so I'm not sure we should have this in the actual `datasets` package.\r\n\r\nI'm unclear on the suggestion, would you rather have a root-level `./metadata.py` file? I think it's well whe... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2107",
"html_url": "https://github.com/huggingface/datasets/pull/2107",
"diff_url": "https://github.com/huggingface/datasets/pull/2107.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2107.patch",
"merged_at": "2021-04-26T08:27... | 2,107 | true |
WMT19 Dataset for Kazakh-English is not formatted correctly | In addition to the bug of languages being switched from issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have a one-off formatting error.
The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:
> ... | https://github.com/huggingface/datasets/issues/2106 | [
"Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is on... | null | 2,106 | false |
Request to remove S2ORC dataset | Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks! | https://github.com/huggingface/datasets/issues/2105 | [
"Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?",
"Hi @kyleclo,... | null | 2,105 | false |
Trouble loading wiki_movies | Hello,
I am trying to load_dataset("wiki_movies") and it gives me this error -
`FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingfa... | https://github.com/huggingface/datasets/issues/2104 | [
"Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```",
"Thanks a lot! That solved it and I was able to upload a model trained on it as well :)"
] | null | 2,104 | false |
citation, homepage, and license fields of `dataset_info.json` are duplicated many times | This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the json before running the `map` op on the dataset so it's unlikely it's coming from an earlier concatenation.
Example result:
```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {... | https://github.com/huggingface/datasets/issues/2103 | [
"Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease co... | null | 2,103 | false |
Move Dataset.to_csv to csv module | Move the implementation of `Dataset.to_csv` to module `datasets.io.csv`. | https://github.com/huggingface/datasets/pull/2102 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2102",
"html_url": "https://github.com/huggingface/datasets/pull/2102",
"diff_url": "https://github.com/huggingface/datasets/pull/2102.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2102.patch",
"merged_at": "2021-03-24T14:07... | 2,102 | true |
MIAM dataset - new citation details | Hi @lhoestq, I have updated the citations to reference an OpenReview preprint. | https://github.com/huggingface/datasets/pull/2101 | [
"Hi !\r\nLooks like there's a unicode error in the new citation in the miam.py file.\r\nCould you try to fix it ? Not sure from which character it comes from though\r\n\r\nYou can test if it works on your side with\r\n```\r\nRUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_dataset_all_con... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2101",
"html_url": "https://github.com/huggingface/datasets/pull/2101",
"diff_url": "https://github.com/huggingface/datasets/pull/2101.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2101.patch",
"merged_at": "2021-03-23T18:08... | 2,101 | true |
Fix deprecated warning message and docstring | Fix deprecated warnings:
- Use deprecated Sphinx directive in docstring
- Fix format of deprecated message
- Raise FutureWarning | https://github.com/huggingface/datasets/pull/2100 | [
"I have a question: what about `dictionary_encode_column_`?\r\n- It is deprecated in Dataset, but it recommends using a non-existing method instead: `Dataset.dictionary_encode_column` does not exist.\r\n- It is NOT deprecated in DatasetDict.",
"`dictionary_encode_column_ ` should be deprecated since it never work... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2100",
"html_url": "https://github.com/huggingface/datasets/pull/2100",
"diff_url": "https://github.com/huggingface/datasets/pull/2100.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2100.patch",
"merged_at": "2021-03-23T18:03... | 2,100 | true |
load_from_disk takes a long time to load local dataset | I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helpin... | https://github.com/huggingface/datasets/issues/2099 | [
"Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?",
"It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a... | null | 2,099 | false |
SQuAD version | Hi~
I want to train on the SQuAD dataset. Which version of SQuAD is it: 1.1 or 1.0? I'm new to QA and I couldn't find a description of it. | https://github.com/huggingface/datasets/issues/2098 | [
"Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55",
"Got it. Thank you~"
] | null | 2,098 | false |
fixes issue #1110 by descending further if `obj["_type"]` is a dict | Check metrics | https://github.com/huggingface/datasets/pull/2097 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2097",
"html_url": "https://github.com/huggingface/datasets/pull/2097",
"diff_url": "https://github.com/huggingface/datasets/pull/2097.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2097.patch",
"merged_at": null
} | 2,097 | true |
CoNLL 2003 dataset not including German | Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!
I was wondering if there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since a copy of it ... | https://github.com/huggingface/datasets/issues/2096 | [
"Hello. I've been looking for information about German Conll2003 and found your question. Official site (https://www.clips.uantwerpen.be/conll2003/ner/) mentions that organizers provide only annotation. German texts (ECI Multilingual Text Corpus) are not freely available and can be ordered from the Linguistic Data ... | null | 2,096 | false |
Fix: Allows a feature to be named "_type" | This PR tries to fix issue #1110. Sorry for taking so long to come back to this.
It's a simple fix, but I am not sure if it works for all possible types of `obj`. Let me know what you think @lhoestq | https://github.com/huggingface/datasets/pull/2093 | [
"Nice thank you !\r\nThis looks like a pretty simple yet effective fix ;)\r\nCould you just add a test in `test_features.py` to make sure that you can create `features` with a `_type` field and that it is possible to convert it as a dict and reload it ?\r\n```python\r\nfrom datasets import Features, Value\r\n\r\n# ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2093",
"html_url": "https://github.com/huggingface/datasets/pull/2093",
"diff_url": "https://github.com/huggingface/datasets/pull/2093.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2093.patch",
"merged_at": "2021-03-25T14:35... | 2,093 | true |
How to disable making arrow tables in load_dataset ? | Is there a way to disable the construction of arrow tables, or to make them on the fly as the dataset is being used ? | https://github.com/huggingface/datasets/issues/2092 | [
"Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do ... | null | 2,092 | false |
Fix copy snippet in docs | With this change the lines starting with `...` in the code blocks can be properly copied to clipboard. | https://github.com/huggingface/datasets/pull/2091 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2091",
"html_url": "https://github.com/huggingface/datasets/pull/2091",
"diff_url": "https://github.com/huggingface/datasets/pull/2091.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2091.patch",
"merged_at": "2021-03-23T17:18... | 2,091 | true |
Add machine translated multilingual STS benchmark dataset | also see here https://github.com/PhilipMay/stsb-multi-mt | https://github.com/huggingface/datasets/pull/2090 | [
"Hello dear maintainer, are there any comments or questions about this PR?",
"@iamollas thanks for the feedback. I did not see the template.\r\nI improved it...",
"Should be clean for merge IMO.",
"@lhoestq CI is green. ;-)",
"Thanks again ! this is awesome :)",
"Thanks for merging. :-)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2090",
"html_url": "https://github.com/huggingface/datasets/pull/2090",
"diff_url": "https://github.com/huggingface/datasets/pull/2090.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2090.patch",
"merged_at": "2021-03-29T13:00... | 2,090 | true |
Add documentation for dataset README.md files | Hi,
the dataset README files have special headers.
Somehow documentation of the allowed values and tags is missing.
Could you add that?
Just to give some concrete questions that should be answered imo:
- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which valu... | https://github.com/huggingface/datasets/issues/2089 | [
"Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a... | null | 2,089 | false |
change bibtex template to author instead of authors | Hi,
IMO when using BibTeX, `author` should be used instead of `authors`.
See here: http://www.bibtex.org/Using/de/
Thanks
Philip | https://github.com/huggingface/datasets/pull/2088 | [
"Trailing whitespace was removed. So more changes in diff than just this fix."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2088",
"html_url": "https://github.com/huggingface/datasets/pull/2088",
"diff_url": "https://github.com/huggingface/datasets/pull/2088.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2088.patch",
"merged_at": "2021-03-23T15:40... | 2,088 | true |
Update metadata if dataset features are modified | This PR adds a decorator that updates the dataset metadata if a previously executed transform modifies its features.
Fixes #2083
| https://github.com/huggingface/datasets/pull/2087 | [
"@lhoestq I'll try to add a test later if you think this approach with the wrapper is good.",
"Awesome thank you !\r\nYes this approach with a wrapper is good :)",
"@lhoestq Added a test. To verify that this change fixes the problem, replace:\r\n```\r\n!pip install datasets==1.5\r\n```\r\nwith:\r\n```\r\n!pip i... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2087",
"html_url": "https://github.com/huggingface/datasets/pull/2087",
"diff_url": "https://github.com/huggingface/datasets/pull/2087.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2087.patch",
"merged_at": "2021-04-09T09:25... | 2,087 | true |
change user permissions to -rw-r--r-- | Fix for #2065 | https://github.com/huggingface/datasets/pull/2086 | [
"I tried this with `ade_corpus_v2` dataset. `ade_corpus_v2-train.arrow` (downloaded dataset) and `cache-25d41a4d3c2d8a25.arrow` (ran a mapping function on the dataset) both had file permission with octal value of `0644`. "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2086",
"html_url": "https://github.com/huggingface/datasets/pull/2086",
"diff_url": "https://github.com/huggingface/datasets/pull/2086.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2086.patch",
"merged_at": "2021-03-24T13:59... | 2,086 | true |
Fix max_wait_time in requests | it was handled as a min time, not max cc @SBrandeis | https://github.com/huggingface/datasets/pull/2085 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2085",
"html_url": "https://github.com/huggingface/datasets/pull/2085",
"diff_url": "https://github.com/huggingface/datasets/pull/2085.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2085.patch",
"merged_at": "2021-03-23T15:36... | 2,085 | true |
CUAD - Contract Understanding Atticus Dataset | ## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** http... | https://github.com/huggingface/datasets/issues/2084 | [
"+1 on this request"
] | null | 2,084 | false |
`concatenate_datasets` throws error when changing the order of datasets to concatenate | Hey,
I played around with the `concatenate_datasets(...)` function: https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets
and noticed that when the order in which the datasets are concatenated changes an error is thrown where it shou... | https://github.com/huggingface/datasets/issues/2083 | [
"Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age... | null | 2,083 | false |
Updated card using information from data statement and datasheet | I updated and clarified the REFreSD [data card](https://github.com/mcmillanmajora/datasets/blob/refresd_card/datasets/refresd/README.md) with information from the Eleftheria's [website](https://elbria.github.io/post/refresd/). I added brief descriptions where the initial card referred to the paper, and I also recreated... | https://github.com/huggingface/datasets/pull/2082 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2082",
"html_url": "https://github.com/huggingface/datasets/pull/2082",
"diff_url": "https://github.com/huggingface/datasets/pull/2082.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2082.patch",
"merged_at": "2021-03-19T14:29... | 2,082 | true |
Fix docstrings issues | Fix docstring issues. | https://github.com/huggingface/datasets/pull/2081 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2081",
"html_url": "https://github.com/huggingface/datasets/pull/2081",
"diff_url": "https://github.com/huggingface/datasets/pull/2081.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2081.patch",
"merged_at": "2021-04-07T14:37... | 2,081 | true |
Multidimensional arrays in a Dataset | Hi,
I'm trying to put together a `datasets.Dataset` to be used with LayoutLM, which is available in `transformers`. This model requires as input the bounding boxes of each token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.
... | https://github.com/huggingface/datasets/issues/2080 | [
"Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,... | null | 2,080 | false |
Refactorize Metric.compute signature to force keyword arguments only | Minor refactoring of Metric.compute signature to force the use of keyword arguments, by using the single star syntax. | https://github.com/huggingface/datasets/pull/2079 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2079",
"html_url": "https://github.com/huggingface/datasets/pull/2079",
"diff_url": "https://github.com/huggingface/datasets/pull/2079.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2079.patch",
"merged_at": "2021-03-23T15:31... | 2,079 | true |
MemoryError when computing WER metric | Hi, I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for WER calculation:
```
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```
However, I receive the following exception:
`Traceback (most recent call last):
File ... | https://github.com/huggingface/datasets/issues/2078 | [
"Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compu... | null | 2,078 | false |
Bump huggingface_hub version | `0.0.2 => 0.0.6` | https://github.com/huggingface/datasets/pull/2077 | [
"🔥 "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2077",
"html_url": "https://github.com/huggingface/datasets/pull/2077",
"diff_url": "https://github.com/huggingface/datasets/pull/2077.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2077.patch",
"merged_at": "2021-03-18T11:33... | 2,077 | true |
Issue: Dataset download error | The download link in the `iwslt2017.py` file does not seem to work anymore.
For example, `FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`
Would be nice if we could modify the script and use the new download link? | https://github.com/huggingface/datasets/issues/2076 | [
"Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.",
"It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and th... | null | 2,076 | false |
ConnectionError: Couldn't reach common_voice.py | When I run:
from datasets import load_dataset, load_metric
common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
Got:
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/ma... | https://github.com/huggingface/datasets/issues/2075 | [
"Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?",
"@albertvillanova Thanks! It works well now. "
] | null | 2,075 | false |
Fix size categories in YAML Tags | This PR fixes several `size_categories` in YAML tags and makes them consistent. Additionally, I have added a few more categories after `1M`, up to `1T`. I would like to add that to the streamlit app also.
This PR also adds a couple of infos that I found missing.
The code for generating this:
```python
for datas... | https://github.com/huggingface/datasets/pull/2074 | [
"> It would be great if there was a way to make the task categories consistent too. For this, the streamlit app can look into all the datasets and check for existing categories and show them in the list. This may add some consistency.\r\n\r\nWe can also update the task lists here: https://github.com/huggingface/dat... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2074",
"html_url": "https://github.com/huggingface/datasets/pull/2074",
"diff_url": "https://github.com/huggingface/datasets/pull/2074.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2074.patch",
"merged_at": "2021-03-23T17:11... | 2,074 | true |
Fixes check of TF_AVAILABLE and TORCH_AVAILABLE | # What is this PR doing
This PR implements the checks for whether `Tensorflow` and `Pytorch` are available in the same way as `transformers` does. I added additional checks for the different `Tensorflow` and `torch` versions. #2068 | https://github.com/huggingface/datasets/pull/2073 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2073",
"html_url": "https://github.com/huggingface/datasets/pull/2073",
"diff_url": "https://github.com/huggingface/datasets/pull/2073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2073.patch",
"merged_at": "2021-03-18T09:09... | 2,073 | true |
Fix docstring issues | Fix docstring issues. | https://github.com/huggingface/datasets/pull/2072 | [
"I think I will stop pushing to this PR, so that it can me merged for today release. \r\n\r\nI will open another PR for further fixing docs.\r\n\r\nDo you agree, @lhoestq ?",
"Sounds good thanks !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2072",
"html_url": "https://github.com/huggingface/datasets/pull/2072",
"diff_url": "https://github.com/huggingface/datasets/pull/2072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2072.patch",
"merged_at": "2021-03-18T12:41... | 2,072 | true |
Multiprocessing is slower than single process | ```python
# benchmark_filter.py
import logging
import sys
import time
from datasets import load_dataset, set_caching_enabled
if __name__ == "__main__":
    set_caching_enabled(False)
    logging.basicConfig(level=logging.DEBUG)
    bc = load_dataset("bookcorpus")
    now = time.time()
    try:
... | https://github.com/huggingface/datasets/issues/2071 | [
"dupe of #1992"
] | null | 2,071 | false |
ArrowInvalid issue for squad v2 dataset | Hello, I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).
In the prepare_validation_features function, I made some modifications to tokenize a new set of questions with the original co... | https://github.com/huggingface/datasets/issues/2070 | [
"Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a b... | null | 2,070 | false |
Add and fix docstring for NamedSplit | Add and fix docstring for `NamedSplit`, which was missing. | https://github.com/huggingface/datasets/pull/2069 | [
"Maybe we should add some other split classes?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2069",
"html_url": "https://github.com/huggingface/datasets/pull/2069",
"diff_url": "https://github.com/huggingface/datasets/pull/2069.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2069.patch",
"merged_at": "2021-03-18T10:27... | 2,069 | true |
PyTorch not available error on SageMaker GPU docker though it is installed | I get an error when running data loading using the SageMaker SDK
```
File "main.py", line 34, in <module>
run_training()
File "main.py", line 25, in run_training
dm.setup('fit')
File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
return fn(*a... | https://github.com/huggingface/datasets/issues/2068 | [
"cc @philschmid ",
"Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`",
"Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6... | null | 2,068 | false |
Multiprocessing windows error | As described here https://huggingface.co/blog/fine-tune-xlsr-wav2vec2
When using the num_proc argument on Windows, the whole Python environment crashes and hangs in a loop.
For example at the map_to_array part.
An error occurs because the cache file already exists and Windows throws an error. After this the log c... | https://github.com/huggingface/datasets/issues/2067 | [
"Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..",
"```\r\nfrom datasets import load_dataset\r\n\r\ndatase... | null | 2,067 | false |
Fix docstring rendering of Dataset/DatasetDict.from_csv args | Fix the docstring rendering of Dataset/DatasetDict.from_csv args. | https://github.com/huggingface/datasets/pull/2066 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2066",
"html_url": "https://github.com/huggingface/datasets/pull/2066",
"diff_url": "https://github.com/huggingface/datasets/pull/2066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2066.patch",
"merged_at": "2021-03-17T09:21... | 2,066 | true |
Only user permission of saved cache files, not group | Hello,
It seems that when a cached file is saved from calling `dataset.map` for preprocessing, it gets the user permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue, as we have to continually reset the permissions of the files. Do you kno... | https://github.com/huggingface/datasets/issues/2065 | [
"Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb... | null | 2,065 | false |
Fix ted_talks_iwslt version error | This PR fixes the bug where the version argument would be passed twice if the dataset configuration was created on the fly.
Fixes #2059 | https://github.com/huggingface/datasets/pull/2064 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2064",
"html_url": "https://github.com/huggingface/datasets/pull/2064",
"diff_url": "https://github.com/huggingface/datasets/pull/2064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2064.patch",
"merged_at": "2021-03-16T18:00... | 2,064 | true |
[Common Voice] Adapt dataset script so that no manual data download is actually needed | This PR changes the dataset script so that no manual data dir is needed anymore. | https://github.com/huggingface/datasets/pull/2063 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2063",
"html_url": "https://github.com/huggingface/datasets/pull/2063",
"diff_url": "https://github.com/huggingface/datasets/pull/2063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2063.patch",
"merged_at": "2021-03-17T09:42... | 2,063 | true |
docs: fix missing quotation | The json code misses a quote | https://github.com/huggingface/datasets/pull/2062 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2062",
"html_url": "https://github.com/huggingface/datasets/pull/2062",
"diff_url": "https://github.com/huggingface/datasets/pull/2062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2062.patch",
"merged_at": "2021-03-17T09:21... | 2,062 | true |
Cannot load udpos subsets from xtreme dataset using load_dataset() | Hello,
I am trying to load the udpos English subset from the xtreme dataset, but it fails with an error during loading. I am using datasets v1.4.1 (pip install). I have tried other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) has no issue. I have also tried on Colab and ... | https://github.com/huggingface/datasets/issues/2061 | [
"@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.",
"Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset scr... | null | 2,061 | false |
Filtering refactor | fix https://github.com/huggingface/datasets/issues/2032
benchmarking is somewhat inconclusive, currently running on `book_corpus` with:
```python
bc = load_dataset("bookcorpus")
now = time.time()
bc.filter(lambda x: len(x["text"]) < 64)
elapsed = time.time() - now
print(elapsed)
```
t... | https://github.com/huggingface/datasets/pull/2060 | [
"I thought at first that the multiproc test was not relevant now that we do stuff only in memory, but I think there's something that's actually broken, my tiny benchmark on bookcorpus runs forever (2hrs+) when I add `num_proc=4` as a kwarg, will investigate 👀 \r\n\r\nI'm not familiar with the caching you describe ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2060",
"html_url": "https://github.com/huggingface/datasets/pull/2060",
"diff_url": "https://github.com/huggingface/datasets/pull/2060.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2060.patch",
"merged_at": null
} | 2,060 | true |
Error while following docs to load the `ted_talks_iwslt` dataset | I am currently trying to load the `ted_talks_iwslt` dataset into google colab.
The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so.
```python
dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")
```
Executing it results in the error ... | https://github.com/huggingface/datasets/issues/2059 | [
"@skyprince999 as you authored the PR for this dataset, any comments?",
"This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)"
] | null | 2,059 | false |
Is it possible to convert a `tfds` to HuggingFace `dataset`? | I was having some weird bugs with the HuggingFace version of the `C4` dataset, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :)
I can also open a new issue reporting the bug I'm receiving with `datasets.load_dataset('c4','en')` ... | https://github.com/huggingface/datasets/issues/2058 | [
"Hi! You can either save the TF dataset to one of the formats supported by datasets (`parquet`, `csv`, `json`, ...) or pass a generator function to `Dataset.from_generator` that yields its examples."
] | null | 2,058 | false |
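A hedged sketch of the `Dataset.from_generator` route mentioned in the reply, using a tiny in-memory `tf.data` pipeline as a stand-in for a real TFDS dataset.

```python
import tensorflow as tf
from datasets import Dataset

tf_ds = tf.data.Dataset.from_tensor_slices({"text": ["hello", "world"]})

def examples():
    # Wrap the TF dataset in a plain Python generator that yields dicts.
    for example in tf_ds.as_numpy_iterator():
        yield {"text": example["text"].decode("utf-8")}

hf_ds = Dataset.from_generator(examples)
print(hf_ds[0])
```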
update link to ZEST dataset | Updating the link as the original one is no longer working. | https://github.com/huggingface/datasets/pull/2057 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2057",
"html_url": "https://github.com/huggingface/datasets/pull/2057",
"diff_url": "https://github.com/huggingface/datasets/pull/2057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2057.patch",
"merged_at": "2021-03-16T17:06... | 2,057 | true |
issue with opus100/en-fr dataset | Hi
I am running the run_mlm.py code from the huggingface repo with the opus100/fr-en pair and I am getting this error. Note that this error occurs only for this pair and not the other pairs. Any idea why this is occurring and how I can solve it?
Thanks a lot @lhoestq for your help in advance.
`
thread '<unnamed>' panicked... | https://github.com/huggingface/datasets/issues/2056 | [
"@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ",
"Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers impor... | null | 2,056 | false |
is there a way to override a dataset object saved with save_to_disk? | At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override such an object? | https://github.com/huggingface/datasets/issues/2055 | [
"Hi\r\nYou can rename the arrow file and update the name in `state.json`",
"I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_da... | null | 2,055 | false |
Could not find file for ZEST dataset | I am trying to use the zest dataset from Allen AI using the code below in Colab,
```
!pip install -q datasets
from datasets import load_dataset
dataset = load_dataset("zest")
```
I am getting the following error,
```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: ... | https://github.com/huggingface/datasets/issues/2054 | [
"The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.",
"This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)",
"Thanks @lhoestq and @matt-peters ",
"I am closing this issue since its ... | null | 2,054 | false |
Add bAbI QA tasks | - **Name:** *The (20) QA bAbI tasks*
- **Description:** *The (20) QA bAbI tasks are a set of proxy tasks that evaluate reading comprehension via question answering. Our tasks measure understanding in several ways: whether a system is able to answer questions via chaining facts, simple induction, deduction and many mor... | https://github.com/huggingface/datasets/pull/2053 | [
"Hi @lhoestq,\r\n\r\nShould I remove the 160 configurations? Is it too much?\r\n\r\nEDIT:\r\nCan you also check the task category? I'm not sure if there is an appropriate tag for the same.",
"Thanks for the changes !\r\n\r\n> Should I remove the 160 configurations? Is it too much?\r\n\r\nYea 160 configuration is ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2053",
"html_url": "https://github.com/huggingface/datasets/pull/2053",
"diff_url": "https://github.com/huggingface/datasets/pull/2053.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2053.patch",
"merged_at": "2021-03-29T12:41... | 2,053 | true |
Timit_asr dataset repeats examples | Summary
When loading timit_asr dataset on datasets 1.4+, every row in the dataset is the same
Steps to reproduce
As an example, this code shows the text from the training part:
Code snippet:
```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']... | https://github.com/huggingface/datasets/issues/2052 | [
"Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```",
"Ty!"
] | null | 2,052 | false |
Add MDD Dataset | - **Name:** *MDD Dataset*
- **Description:** The Movie Dialog dataset (MDD) is designed to measure how well models can perform at goal and non-goal orientated dialog centered around the topic of movies (question answering, recommendation and discussion), from various movie reviews sources such as MovieLens and OMDb.
... | https://github.com/huggingface/datasets/pull/2051 | [
"Hi @lhoestq,\r\n\r\nI have added changes from review.",
"Thanks for approving :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2051",
"html_url": "https://github.com/huggingface/datasets/pull/2051",
"diff_url": "https://github.com/huggingface/datasets/pull/2051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2051.patch",
"merged_at": "2021-03-19T10:31... | 2,051 | true |
Build custom dataset to fine-tune Wav2Vec2 | Thank you for your recent tutorial on how to finetune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcript and their audio files) in a JSON file.
| https://github.com/huggingface/datasets/issues/2050 | [
"@lhoestq - We could simply use the \"general\" json dataset for this no? ",
"Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\n... | null | 2,050 | false |
Fix text-classification tags | There are different tags for text classification right now: `text-classification` and `text_classification`:
This PR fixes it.
| https://github.com/huggingface/datasets/pull/2049 | [
"LGTM, thanks for fixing."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2049",
"html_url": "https://github.com/huggingface/datasets/pull/2049",
"diff_url": "https://github.com/huggingface/datasets/pull/2049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2049.patch",
"merged_at": "2021-03-16T15:47... | 2,049 | true |
github is not always available - probably need a backup | Yesterday morning GitHub wasn't working:
```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubuser... | https://github.com/huggingface/datasets/issues/2048 | [] | null | 2,048 | false |
Multilingual dIalogAct benchMark (miam) | My collaborators (@EmileChapuis, @PierreColombo) and I within the Affective Computing team at Telecom Paris would like to anonymously publish the miam dataset. It is associated with a publication currently under review. We will update the dataset with full citations once the review period is over. | https://github.com/huggingface/datasets/pull/2047 | [
"Hello. All aforementioned changes have been made. I've also re-run black on miam.py. :-)",
"I will run isort again. Hopefully it resolves the current check_code_quality test failure.",
"Once the review period is over, feel free to open a PR to add all the missing information ;)",
"Hi! I will follow up right ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2047",
"html_url": "https://github.com/huggingface/datasets/pull/2047",
"diff_url": "https://github.com/huggingface/datasets/pull/2047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2047.patch",
"merged_at": "2021-03-19T10:47... | 2,047 | true |
add_faiss_index gets very slow when doing it iteratively | As the code below suggests, I want to run add_faiss_index every nth iteration of the training loop. I have 7.2 million documents. Usually, it takes 2.5 hours (if I run it as a separate process similar to the script given in rag/use_own_knowleldge_dataset.py). Now, this usually takes 5hrs. Is this normal? Any ... | https://github.com/huggingface/datasets/issues/2046 | [
"I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?",
"Hi,\r\n I am running the add_faiss_in... | null | 2,046 | false |
Preserve column ordering in Dataset.rename_column | Currently `Dataset.rename_column` doesn't necessarily preserve the order of the columns:
```python
>>> from datasets import Dataset
>>> d = Dataset.from_dict({'sentences': ["s1", "s2"], 'label': [0, 1]})
>>> d
Dataset({
features: ['sentences', 'label'],
num_rows: 2
})
>>> d.rename_column('sentences', '... | https://github.com/huggingface/datasets/pull/2045 | [
"Not sure why CI isn't triggered.\r\n\r\n@lhoestq Can you please help me with this? ",
"I don't know how to trigger it manually, but an empty commit should do the job"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2045",
"html_url": "https://github.com/huggingface/datasets/pull/2045",
"diff_url": "https://github.com/huggingface/datasets/pull/2045.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2045.patch",
"merged_at": "2021-03-16T14:35... | 2,045 | true |
Add CBT dataset | This PR adds the [CBT Dataset](https://arxiv.org/abs/1511.02301).
Note that I have also added the `raw` dataset as a separate configuration. I couldn't find a suitable "task" for it in YAML tags.
The dummy files have one example each, as the examples are slightly big. For `raw` dataset, I just used top few lines,... | https://github.com/huggingface/datasets/pull/2044 | [
"Hi @lhoestq,\r\n\r\nI have added changes from the review.",
"Thanks for approving @lhoestq "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2044",
"html_url": "https://github.com/huggingface/datasets/pull/2044",
"diff_url": "https://github.com/huggingface/datasets/pull/2044.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2044.patch",
"merged_at": "2021-03-19T10:29... | 2,044 | true |
Support pickle protocol for dataset splits defined as ReadInstruction | Fixes #2022 (+ some style fixes) | https://github.com/huggingface/datasets/pull/2043 | [
"@lhoestq But we don't perform conversion to a `NamedSplit` if `_split` is not a string which means it **will** be a `ReadInstruction` after reloading.",
"Yes right ! I read it wrong.\r\nPerfect then"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2043",
"html_url": "https://github.com/huggingface/datasets/pull/2043",
"diff_url": "https://github.com/huggingface/datasets/pull/2043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2043.patch",
"merged_at": "2021-03-16T14:05... | 2,043 | true |
Fix arrow memory checks issue in tests | The tests currently fail on `master` because the arrow memory verification doesn't return the expected memory evolution when loading an arrow table in memory.
From my experiments, the tests fail only when the full test suite is run.
This made me think that maybe some arrow objects from other tests were not freeing th... | https://github.com/huggingface/datasets/pull/2042 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2042",
"html_url": "https://github.com/huggingface/datasets/pull/2042",
"diff_url": "https://github.com/huggingface/datasets/pull/2042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2042.patch",
"merged_at": "2021-03-12T15:04... | 2,042 | true |
Doc2dial update data_infos and data_loaders | https://github.com/huggingface/datasets/pull/2041 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2041",
"html_url": "https://github.com/huggingface/datasets/pull/2041",
"diff_url": "https://github.com/huggingface/datasets/pull/2041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2041.patch",
"merged_at": "2021-03-16T11:09... | 2,041 | true | |
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk | Hi there,
I am trying to concat two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DataDict`, `PATH_DATA_CLS_*` are `Path`-objects):
```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```
Yie... | https://github.com/huggingface/datasets/issues/2040 | [
"Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no... | null | 2,040 | false |
Doc2dial rc | Added fix to handle the last turn that is a user turn. | https://github.com/huggingface/datasets/pull/2039 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2039",
"html_url": "https://github.com/huggingface/datasets/pull/2039",
"diff_url": "https://github.com/huggingface/datasets/pull/2039.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2039.patch",
"merged_at": null
} | 2,039 | true |
outdated dataset_infos.json might fail verifications | The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It would fail the data loader when verifying download checksums, etc.
Could you please update this file or point me to how to update it?
Thank you. | https://github.com/huggingface/datasets/issues/2038 | [
"Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```",
"Fixed by #2041, thanks again @songfeng !"
] | null | 2,038 | false |
Fix: Wikipedia - save memory by replacing root.clear with elem.clear | see: https://github.com/huggingface/datasets/issues/2031
What I did:
- replace root.clear with elem.clear
- remove lines to get root element
- $ make style
- $ make test
- some tests required some pip packages, I installed them.
Test results on origin/master and my branch are the same. I think it's not related... | https://github.com/huggingface/datasets/pull/2037 | [
"The error you got is minor and appeared in the last version of pyarrow, we'll fix the CI to take this into account. You can ignore it"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2037",
"html_url": "https://github.com/huggingface/datasets/pull/2037",
"diff_url": "https://github.com/huggingface/datasets/pull/2037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2037.patch",
"merged_at": "2021-03-16T11:01... | 2,037 | true |
Cannot load wikitext | When I execute this code:
```
>>> from datasets import load_dataset
>>> test_dataset = load_dataset("wikitext")
```
I got an error,any help?
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.p... | https://github.com/huggingface/datasets/issues/2036 | [
"Solved!"
] | null | 2,036 | false |
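One common cause of errors when loading `wikitext` is omitting the configuration name; a hedged example with an explicit config (this is not necessarily what resolved the issue above):

```python
from datasets import load_dataset

# wikitext ships several configurations; pick one explicitly.
test_dataset = load_dataset("wikitext", "wikitext-2-raw-v1")
print(test_dataset)
```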
wiki40b/wikipedia for almost all languages cannot be downloaded | Hi
I am trying to download the data as below:
```
from datasets import load_dataset
dataset = load_dataset("wiki40b", "cs")
print(dataset)
```
I am getting this error. @lhoestq I would be grateful if you could assist me with it; I get this error for almost all languages except English.
I rea... | https://github.com/huggingface/datasets/issues/2035 | [
"Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only ... | null | 2,035 | false |
Fix typo | Change `ENV_XDG_CACHE_HOME ` to `XDG_CACHE_HOME ` | https://github.com/huggingface/datasets/pull/2034 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2034",
"html_url": "https://github.com/huggingface/datasets/pull/2034",
"diff_url": "https://github.com/huggingface/datasets/pull/2034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2034.patch",
"merged_at": "2021-03-11T18:06... | 2,034 | true |
Raise an error for outdated sacrebleu versions | The `sacrebleu` metric seems to only work with sacrebleu>=1.4.12
For example using sacrebleu==1.2.10, an error is raised (from metric/sacrebleu/sacrebleu.py):
```python
def _compute(
self,
predictions,
references,
smooth_method="exp",
smooth_value=None,
force... | https://github.com/huggingface/datasets/pull/2033 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2033",
"html_url": "https://github.com/huggingface/datasets/pull/2033",
"diff_url": "https://github.com/huggingface/datasets/pull/2033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2033.patch",
"merged_at": "2021-03-11T17:58... | 2,033 | true |
Use Arrow filtering instead of writing a new arrow file for Dataset.filter | Currently the filter method reads the dataset batch by batch to write a new, filtered, arrow file on disk. Therefore all the reading + writing can take some time.
Using a mask directly on the Arrow table doesn't do any read or write operation, so it's significantly quicker.
I think there are two cases:
- i... | https://github.com/huggingface/datasets/issues/2032 | [] | null | 2,032 | false |
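A toy illustration of the idea behind this issue: filtering an Arrow table with a boolean mask stays in memory and writes nothing to disk. This is a sketch of the underlying pyarrow behaviour, not the `Dataset.filter` implementation:

```python
import pyarrow as pa

table = pa.table({"text": ["a", "b", "c"], "label": [0, 1, 0]})
mask = pa.array([True, False, True])

# No new arrow file is written: the filtered table is built directly from the mask.
filtered = table.filter(mask)
print(filtered.to_pydict())  # {'text': ['a', 'c'], 'label': [0, 0]}
```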
wikipedia.py generator that extracts XML doesn't release memory | I tried downloading Japanese Wikipedia, but it always failed, most likely because it runs out of memory.
I found that the generator function that extracts XML data in wikipedia.py doesn't release memory in the loop.
https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikip... | https://github.com/huggingface/datasets/issues/2031 | [
"Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?",
"OK! I'll send it later."
] | null | 2,031 | false |
Implement Dataset from text | Implement `Dataset.from_text`.
Analogous to #1943, #1946. | https://github.com/huggingface/datasets/pull/2030 | [
"I am wondering why only one test of \"keep_in_memory=True\" fails, when there are many other tests that test the same and it happens only in pyarrow_1..."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2030",
"html_url": "https://github.com/huggingface/datasets/pull/2030",
"diff_url": "https://github.com/huggingface/datasets/pull/2030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2030.patch",
"merged_at": "2021-03-18T13:29... | 2,030 | true |
Loading a faiss index KeyError | I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation.
The basic steps are:
1. Create a dataset (dataset1)
2. Create an embeddings column using DPR
3. Add a faiss index to the dataset
4. Save faiss index to a file
5. Create a new dataset (d... | https://github.com/huggingface/datasets/issues/2029 | [
"In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r... | null | 2,029 | false |
Adding PersiNLU reading-comprehension | https://github.com/huggingface/datasets/pull/2028 | [
"@lhoestq I think I have addressed all your comments. ",
"Thanks! @lhoestq Let me know if you want me to address anything to get this merged. ",
"It's all good thanks ;)\r\nmerging"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2028",
"html_url": "https://github.com/huggingface/datasets/pull/2028",
"diff_url": "https://github.com/huggingface/datasets/pull/2028.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2028.patch",
"merged_at": "2021-03-15T09:39... | 2,028 | true | |
Update format columns in Dataset.rename_columns | Fixes #2026 | https://github.com/huggingface/datasets/pull/2027 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2027",
"html_url": "https://github.com/huggingface/datasets/pull/2027",
"diff_url": "https://github.com/huggingface/datasets/pull/2027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2027.patch",
"merged_at": "2021-03-11T14:38... | 2,027 | true |
KeyError on using map after renaming a column | Hi,
I'm trying to use the `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying the `prepare_train_features` function.
Here is what I try:
```python
transform = Compose([ToPILImage(),... | https://github.com/huggingface/datasets/issues/2026 | [
"Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format... | null | 2,026 | false |
[Refactor] Use in-memory/memory-mapped/concatenation tables in Dataset | ## Intro
Currently there is one assumption that we need to change: a dataset is either fully in memory (dataset._data_files is empty), or the dataset can be reloaded from disk with memory mapping (using the dataset._data_files).
This assumption is used for pickling for example:
- in-memory dataset can just be pick... | https://github.com/huggingface/datasets/pull/2025 | [
"There is one more thing I would love to see. Let's say we iteratively keep updating a data source that loaded from **load_dataset** or **load_from_disk**. Now we need to save it to the same location by overriding the previous file inorder to save the disk space. At the moment **save_to_disk** can not assign a name... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2025",
"html_url": "https://github.com/huggingface/datasets/pull/2025",
"diff_url": "https://github.com/huggingface/datasets/pull/2025.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2025.patch",
"merged_at": "2021-03-26T16:51... | 2,025 | true |
Remove print statement from mnist.py | https://github.com/huggingface/datasets/pull/2024 | [
"Thanks for noticing !\r\n#2020 fixed this earlier today though ^^'\r\n\r\nClosing this one"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2024",
"html_url": "https://github.com/huggingface/datasets/pull/2024",
"diff_url": "https://github.com/huggingface/datasets/pull/2024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2024.patch",
"merged_at": null
} | 2,024 | true | |
Add Romanian to XQuAD | On Jan 18, XQuAD was updated with a new Romanian validation file ([xquad commit link](https://github.com/deepmind/xquad/commit/60cac411649156efb6aab9dd4c9cde787a2c0345))
| https://github.com/huggingface/datasets/pull/2023 | [
"Hi ! Thanks for updating XQUAD :)\r\n\r\nThe slow test is failing though since there's no dummy data nor metadata in dataset_infos.json for the romanian configuration.\r\n\r\nCould you please generate the dummy data with\r\n```\r\ndatasets-cli dummy_data ./datasets/xquad --auto_generate --json_field data\r\n```\r\... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2023",
"html_url": "https://github.com/huggingface/datasets/pull/2023",
"diff_url": "https://github.com/huggingface/datasets/pull/2023.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2023.patch",
"merged_at": "2021-03-15T10:08... | 2,023 | true |
ValueError when rename_column on splitted dataset | Hi there,
I am loading a `.tsv` file via `load_dataset` and subsequently splitting the rows into training and test sets via the `ReadInstruction` API like so:
```python
split = {
'train': ReadInstruction('train', to=90, unit='%'),
'test': ReadInstruction('train', from_=-10, unit='%')
}
dataset = load_datase... | https://github.com/huggingface/datasets/issues/2022 | [
"Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use... | null | 2,022 | false |
Interactively doing save_to_disk and load_from_disk corrupts the datasets object? | The dataset_info.json file saved after using save_to_disk gets corrupted as follows.

Is there a way to disable the cache that saves to /tmp/huggingface/datasets?
I have a feeling there is a seri... | https://github.com/huggingface/datasets/issues/2021 | [
"Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching."
] | null | 2,021 | false |
Remove unnecessary docstart check in conll-like datasets | Related to this PR: #1998
Additionally, this PR adds the docstart note to the conll2002 dataset card ([link](https://raw.githubusercontent.com/teropa/nlp/master/resources/corpora/conll2002/ned.train) to the raw data with `DOCSTART` lines).
| https://github.com/huggingface/datasets/pull/2020 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2020",
"html_url": "https://github.com/huggingface/datasets/pull/2020",
"diff_url": "https://github.com/huggingface/datasets/pull/2020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2020.patch",
"merged_at": "2021-03-11T13:33... | 2,020 | true |
Replace print with logging in dataset scripts | Replaces `print(...)` in the dataset scripts with the library logger. | https://github.com/huggingface/datasets/pull/2019 | [
"@lhoestq Maybe a script or even a test in `test_dataset_common.py` that verifies that a dataset script meets some set of quality standards (print calls and todos from the dataset script template are not present, etc.) could be added?",
"Yes definitely !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2019",
"html_url": "https://github.com/huggingface/datasets/pull/2019",
"diff_url": "https://github.com/huggingface/datasets/pull/2019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2019.patch",
"merged_at": "2021-03-11T16:14... | 2,019 | true |
Md gender card update | I updated the descriptions of the datasets as they appear in the HF repo and the descriptions of the source datasets according to what I could find from the paper and the references. I'm still a little unclear about some of the fields of the different configs, and there was little info on the word list and name list. I... | https://github.com/huggingface/datasets/pull/2018 | [
"Link to the card: https://github.com/mcmillanmajora/datasets/blob/md-gender-card/datasets/md_gender_bias/README.md",
"dataset card* @sgugger :p ",
"Ahah that's what I wanted to say @lhoestq, thanks for fixing. Not used to review the Datasets side ;-)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2018",
"html_url": "https://github.com/huggingface/datasets/pull/2018",
"diff_url": "https://github.com/huggingface/datasets/pull/2018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2018.patch",
"merged_at": "2021-03-12T17:31... | 2,018 | true |
Add TF-based Features to handle different modes of data | Hi,
I am creating this draft PR to work on adding features similar to [TF datasets](https://github.com/tensorflow/datasets/tree/master/tensorflow_datasets/core/features). I'll be starting with the `Tensor` and `FeatureConnector` classes, and will build upon them to add other features as well. This is a work in progress.
"url": "https://api.github.com/repos/huggingface/datasets/pulls/2017",
"html_url": "https://github.com/huggingface/datasets/pull/2017",
"diff_url": "https://github.com/huggingface/datasets/pull/2017.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/2017.patch",
"merged_at": null
} | 2,017 | true |