| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
[GH->HF] Part 2: Remove all dataset scripts from github | Now that all the datasets live on the Hub we can remove the /datasets directory that contains all the dataset scripts of this repository
- [x] Needs https://github.com/huggingface/datasets/pull/4973 to be merged first
- [x] and PR to be enabled on the Hub for non-namespaced datasets | https://github.com/huggingface/datasets/pull/4974 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"So this means metrics will be deleted from this repo in favor of the \"evaluate\" library? Maybe you guys could just redirect metrics to that library.",
"We are deprecating the metrics in `datasets` indeed and suggest users to swit... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4974",
"html_url": "https://github.com/huggingface/datasets/pull/4974",
"diff_url": "https://github.com/huggingface/datasets/pull/4974.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4974.patch",
"merged_at": "2022-10-03T17:07... | 4,974 | true |
[GH->HF] Load datasets from the Hub | Currently datasets with no namespace (e.g. squad, glue) are loaded from GitHub.
In this PR I changed this logic to use the Hugging Face Hub instead.
This is the first step in removing all the dataset scripts in this repository
related to discussions in https://github.com/huggingface/datasets/pull/4059 (I shoul... | https://github.com/huggingface/datasets/pull/4973 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Duplicate of:\r\n- #4059"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4973",
"html_url": "https://github.com/huggingface/datasets/pull/4973",
"diff_url": "https://github.com/huggingface/datasets/pull/4973.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4973.patch",
"merged_at": null
} | 4,973 | true |
Fix map batched with torch output | Reported in https://discuss.huggingface.co/t/typeerror-when-applying-map-after-set-format-type-torch/23067/2
Currently it fails if one uses batched `map` and the map function returns a torch tensor.
I fixed it for torch, tf, jax and pandas series. | https://github.com/huggingface/datasets/pull/4972 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4972",
"html_url": "https://github.com/huggingface/datasets/pull/4972",
"diff_url": "https://github.com/huggingface/datasets/pull/4972.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4972.patch",
"merged_at": "2022-09-20T09:39... | 4,972 | true |
Preserve non-`input_colums` in `Dataset.map` if `input_columns` are specified | Currently, if the `input_columns` list in `Dataset.map` is specified, the columns not in that list are dropped after the `map` transform.
This makes the behavior inconsistent with `IterableDataset.map`.
(It seems this issue was introduced by mistake in https://github.com/huggingface/datasets/pull/2246)
Fix h... | https://github.com/huggingface/datasets/pull/4971 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4971",
"html_url": "https://github.com/huggingface/datasets/pull/4971",
"diff_url": "https://github.com/huggingface/datasets/pull/4971.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4971.patch",
"merged_at": "2022-09-13T13:48... | 4,971 | true |
Support streaming nli_tr dataset | Support streaming nli_tr dataset.
This PR removes legacy `codecs.open` and replaces it with `open` that supports passing encoding.
Fix #3186. | https://github.com/huggingface/datasets/pull/4970 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4970",
"html_url": "https://github.com/huggingface/datasets/pull/4970",
"diff_url": "https://github.com/huggingface/datasets/pull/4970.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4970.patch",
"merged_at": "2022-09-12T08:43... | 4,970 | true |
Fix data URL and metadata of vivos dataset | After contacting the authors of the VIVOS dataset to report that their data server is down, we have received a reply from Hieu-Thi Luong that their data is now hosted on Zenodo: https://doi.org/10.5281/zenodo.7068130
This PR updates their data URL and some metadata (homepage, citation and license).
Fix #4936. | https://github.com/huggingface/datasets/pull/4969 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4969",
"html_url": "https://github.com/huggingface/datasets/pull/4969",
"diff_url": "https://github.com/huggingface/datasets/pull/4969.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4969.patch",
"merged_at": "2022-09-12T07:14... | 4,969 | true |
Support streaming compguesswhat dataset | Support streaming `compguesswhat` dataset.
Fix #3191. | https://github.com/huggingface/datasets/pull/4968 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4968",
"html_url": "https://github.com/huggingface/datasets/pull/4968",
"diff_url": "https://github.com/huggingface/datasets/pull/4968.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4968.patch",
"merged_at": "2022-09-12T07:58... | 4,968 | true |
Strip "/" in local dataset path to avoid empty dataset name error | null | https://github.com/huggingface/datasets/pull/4967 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Cool :-)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4967",
"html_url": "https://github.com/huggingface/datasets/pull/4967",
"diff_url": "https://github.com/huggingface/datasets/pull/4967.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4967.patch",
"merged_at": "2022-09-12T15:30... | 4,967 | true |
[Apple M1] MemoryError: Cannot allocate write+execute memory for ffi.callback() | ## Describe the bug
I'm trying to run `cast_column("audio", Audio())` on Apple M1 Pro, but it seems that it doesn't work.
## Steps to reproduce the bug
```python
import datasets
dataset = load_dataset("csv", data_files="./train.csv")["train"]
dataset = dataset.map(lambda x: {"audio": str(DATA_DIR / "audio" / ... | https://github.com/huggingface/datasets/issues/4965 | [
"Hi! This seems like a bug in `soundfile`. Could you please open an issue in their repo? `soundfile` works without any issues on my M1, so I'm not sure we can help.",
"Hi @mariosasko, can you share how you installed `soundfile` on your mac M1?",
"Hi @hoangtnm - I upgraded to python 3.10 and it fixed the proble... | null | 4,965 | false |
Column of arrays (2D+) are using unreasonably high memory | ## Describe the bug
When trying to store `Array2D, Array3D, etc` as column values in a dataset, accessing that column (or creating it, depending on how you create it, see code below) will cause more than a 10-fold increase in memory usage.
## Steps to reproduce the bug
```python
from datasets import Dataset, Features, Array2D, ... | https://github.com/huggingface/datasets/issues/4964 | [
"note i have tried the same code with `datasets` version 2.4.0, the outcome is the very same as described above.",
"Seems related to issues #4623 and #4802 so it would appear this issue has been around for a few months.",
"Hi ! `Dataset.from_dict` keeps the data in memory. You can write on disk and reload them ... | null | 4,964 | false |
Dataset without script does not support regular JSON data file | ### Link
https://huggingface.co/datasets/julien-c/label-studio-my-dogs
### Description
<img width="1115" alt="image" src="https://user-images.githubusercontent.com/326577/189422048-7e9c390f-bea7-4521-a232-43f049ccbd1f.png">
### Owner
Yes | https://github.com/huggingface/datasets/issues/4963 | [
"Hi @julien-c,\r\n\r\nOut of the box, we only support JSON lines (NDJSON) data files, but your data file is a regular JSON file. The reason is we use `pyarrow.json.read_json` and this only supports line-delimited JSON. "
] | null | 4,963 | false |
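Since `pyarrow.json.read_json` only handles line-delimited JSON, a regular JSON file holding an array of records has to be converted first. A hedged stdlib sketch of that conversion:

```python
import json

records = [{"text": "a dog", "label": 0}, {"text": "a cat", "label": 1}]

# A regular JSON file stores a single array...
regular = json.dumps(records)

# ...while JSON Lines (NDJSON) stores one object per line,
# which is what line-delimited readers expect.
ndjson = "\n".join(json.dumps(r) for r in records)

parsed_back = [json.loads(line) for line in ndjson.splitlines()]
```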
Update setup.py | exclude broken version of fsspec. See the [related issue](https://github.com/huggingface/datasets/issues/4961) | https://github.com/huggingface/datasets/pull/4962 | [
"Before addressing this PR, we should be sure about the issue. See my comment in:\r\n- https://github.com/huggingface/datasets/issues/4961#issuecomment-1243376247",
"Once we know 2022.8.2 works, I'm closing this PR, as the corresponding issue."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4962",
"html_url": "https://github.com/huggingface/datasets/pull/4962",
"diff_url": "https://github.com/huggingface/datasets/pull/4962.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4962.patch",
"merged_at": null
} | 4,962 | true |
fsspec 2022.8.2 breaks xopen in streaming mode | ## Describe the bug
When fsspec 2022.8.2 is installed in your environment, xopen will prematurely close files, making streaming mode inoperable.
## Steps to reproduce the bug
```python
import datasets
data = datasets.load_dataset('MLCommons/ml_spoken_words', 'id_wav', split='train', streaming=True)
```
... | https://github.com/huggingface/datasets/issues/4961 | [
"loading `fsspec==2022.7.1` fixes this issue, setup.py would need to be changed to prevent users from using the latest version of fsspec.",
"Opened [PR](https://github.com/huggingface/datasets/pull/4962) to address this.",
"Hi @DCNemesis, thanks for reporting.\r\n\r\nThat was a temporary issue in `fsspec` relea... | null | 4,961 | false |
BioASQ AttributeError: 'BuilderConfig' object has no attribute 'schema' | ## Describe the bug
I am trying to load a dataset from drive and running into an error.
## Steps to reproduce the bug
```python
data_dir = "/Users/dlituiev/repos/datasets/bioasq/BioASQ-training9b"
bioasq_task_b = load_dataset("aps/bioasq_task_b", data_dir=data_dir)
```
## Actual results
`AttributeError: ... | https://github.com/huggingface/datasets/issues/4960 | [
"Following worked:\r\n\r\n```\r\ndata_dir = \"/Users/dlituiev/repos/datasets/bioasq/\"\r\nbioasq_task_b = load_dataset(\"aps/bioasq_task_b\", data_dir=data_dir, name=\"bioasq_9b_source\")\r\n```\r\n\r\nWould maintainers be open to one of the following:\r\n- automating this with a latest default config (e.g. `bioas... | null | 4,960 | false |
Fix data URLs of compguesswhat dataset | After we informed the `compguesswhat` dataset authors about an error with their data URLs, they have updated them:
- https://github.com/CompGuessWhat/compguesswhat.github.io/issues/1
This PR updates their data URLs in our loading script.
Related to:
- #3191 | https://github.com/huggingface/datasets/pull/4959 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4959",
"html_url": "https://github.com/huggingface/datasets/pull/4959",
"diff_url": "https://github.com/huggingface/datasets/pull/4959.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4959.patch",
"merged_at": "2022-09-09T15:59... | 4,959 | true |
ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/2.4.0/datasets/jsonl/jsonl.py | Hi,
When I use load_dataset with local jsonl files, the error below happens, and when I type the link into the browser it prompts me with `404: Not Found`. I downloaded the other `.py` files using the same method and they work. It seems that the server is missing the appropriate file, or it is a problem with the code version.
```
C... | https://github.com/huggingface/datasets/issues/4958 | [
"I have solved this problem... The extension of the file should be `.json` not `.jsonl`"
] | null | 4,958 | false |
Add `Dataset.from_generator` | Add `Dataset.from_generator` to the API to allow creating datasets from data larger than RAM. The implementation relies on a packaged module not exposed in `load_dataset` to tie this method with `datasets`' caching mechanism.
Closes https://github.com/huggingface/datasets/issues/4417 | https://github.com/huggingface/datasets/pull/4957 | [
"I restarted the builder PR job just in case",
"_The documentation is not available anymore as the PR was closed or merged._",
"CI is now green. https://github.com/huggingface/doc-builder/pull/296 explains why it failed."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4957",
"html_url": "https://github.com/huggingface/datasets/pull/4957",
"diff_url": "https://github.com/huggingface/datasets/pull/4957.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4957.patch",
"merged_at": "2022-09-16T14:44... | 4,957 | true |
Fix TF tests for 2.10 | Fixes #4953 | https://github.com/huggingface/datasets/pull/4956 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4956",
"html_url": "https://github.com/huggingface/datasets/pull/4956",
"diff_url": "https://github.com/huggingface/datasets/pull/4956.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4956.patch",
"merged_at": "2022-09-08T15:14... | 4,956 | true |
Raise a more precise error when the URL is unreachable in streaming mode | See for example:
- https://github.com/huggingface/datasets/issues/3191
- https://github.com/huggingface/datasets/issues/3186
It would help provide clearer information on the Hub and help the dataset maintainer solve the issue by themselves quicker. Currently:
- https://huggingface.co/datasets/compguesswhat
... | https://github.com/huggingface/datasets/issues/4955 | [] | null | 4,955 | false |
Pin TensorFlow temporarily | Temporarily fix TensorFlow until a permanent solution is found.
Related to:
- #4953 | https://github.com/huggingface/datasets/pull/4954 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4954",
"html_url": "https://github.com/huggingface/datasets/pull/4954",
"diff_url": "https://github.com/huggingface/datasets/pull/4954.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4954.patch",
"merged_at": "2022-09-08T14:10... | 4,954 | true |
CI test of TensorFlow is failing | ## Describe the bug
The following CI test fails: https://github.com/huggingface/datasets/runs/8246722693?check_suite_focus=true
```
FAILED tests/test_py_utils.py::TempSeedTest::test_tensorflow - AssertionError:
```
Details:
```
_________________________ TempSeedTest.test_tensorflow _________________________
[... | https://github.com/huggingface/datasets/issues/4953 | [] | null | 4,953 | false |
Add test-datasets CI job | To avoid having too many conflicts in the datasets and metrics dependencies I split the CI into test and test-catalog
test runs the tests for the core of the `datasets` lib, while test-catalog tests the dataset scripts and metric scripts
This also makes `pip install -e .[dev]` much smaller for developers
WDYT ... | https://github.com/huggingface/datasets/pull/4952 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Closing this one since the dataset scripts will be removed in https://github.com/huggingface/datasets/pull/4974"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4952",
"html_url": "https://github.com/huggingface/datasets/pull/4952",
"diff_url": "https://github.com/huggingface/datasets/pull/4952.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4952.patch",
"merged_at": null
} | 4,952 | true |
Fix license information in qasc dataset card | This PR adds the license information to `qasc` dataset, once reported via GitHub by Tushar Khot, the dataset is licensed under CC BY 4.0:
- https://github.com/allenai/qasc/issues/5
| https://github.com/huggingface/datasets/pull/4951 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4951",
"html_url": "https://github.com/huggingface/datasets/pull/4951",
"diff_url": "https://github.com/huggingface/datasets/pull/4951.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4951.patch",
"merged_at": "2022-09-08T14:52... | 4,951 | true |
Update Enwik8 broken link and information | The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, and adds a little more information about enwik8. | https://github.com/huggingface/datasets/pull/4950 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4950",
"html_url": "https://github.com/huggingface/datasets/pull/4950",
"diff_url": "https://github.com/huggingface/datasets/pull/4950.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4950.patch",
"merged_at": "2022-09-08T14:51... | 4,950 | true |
Update enwik8 fixing the broken link | The current enwik8 dataset link gives a 502 Bad Gateway error, which can be viewed on https://huggingface.co/datasets/enwik8 (click the dropdown to see the dataset preview; it will show the error). This corrects the links and JSON metadata, and adds a little more information about enwik8. | https://github.com/huggingface/datasets/pull/4949 | [
"Closing pull request to following contributing guidelines of making a new branch and will make a new pull request"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4949",
"html_url": "https://github.com/huggingface/datasets/pull/4949",
"diff_url": "https://github.com/huggingface/datasets/pull/4949.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4949.patch",
"merged_at": null
} | 4,949 | true |
Fix minor typo in error message for missing imports | null | https://github.com/huggingface/datasets/pull/4948 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4948",
"html_url": "https://github.com/huggingface/datasets/pull/4948",
"diff_url": "https://github.com/huggingface/datasets/pull/4948.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4948.patch",
"merged_at": "2022-09-08T14:57... | 4,948 | true |
Try to fix the Windows CI after TF update 2.10 | null | https://github.com/huggingface/datasets/pull/4947 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4947). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4947",
"html_url": "https://github.com/huggingface/datasets/pull/4947",
"diff_url": "https://github.com/huggingface/datasets/pull/4947.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4947.patch",
"merged_at": null
} | 4,947 | true |
Introduce regex check when pushing as well | Closes https://github.com/huggingface/datasets/issues/4945 by adding a regex check when pushing to hub.
Let me know if this is helpful and if it's the fix you would have in mind for the issue and I'm happy to contribute tests. | https://github.com/huggingface/datasets/pull/4946 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Let me take over this PR if you don't mind"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4946",
"html_url": "https://github.com/huggingface/datasets/pull/4946",
"diff_url": "https://github.com/huggingface/datasets/pull/4946.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4946.patch",
"merged_at": "2022-09-13T10:16... | 4,946 | true |
Push to hub can push splits that do not respect the regex | ## Describe the bug
The `push_to_hub` method can push splits that do not respect the regex check that is used for downloads. Therefore, splits may be pushed but never re-used, which can be painful if the split was done after runtime preprocessing.
## Steps to reproduce the bug
```python
>>> from datasets import... | https://github.com/huggingface/datasets/issues/4945 | [] | null | 4,945 | false |
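The download-side check the issue refers to validates split names against a pattern like the one below (a sketch; the exact regex lives in `datasets.splits` and may differ), and the linked PR applies the same check before pushing:

```python
import re

# Hedged sketch of the split-name pattern used on the download side.
_split_re = r"^\w+(\.\w+)*$"

def is_valid_split_name(name):
    return re.match(_split_re, name) is not None

ok = is_valid_split_name("train.clean")
bad = is_valid_split_name("my split!")  # spaces/punctuation are rejected
```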
larger dataset, larger GPU memory in the training phase? Is that correct? | from datasets import set_caching_enabled
set_caching_enabled(False)
for ds_name in ["squad","newsqa","nqopen","narrativeqa"]:
train_ds = load_from_disk("../../../dall/downstream/processedproqa/{}-train.hf".format(ds_name))
break
train_ds = concatenate_datasets([train_ds,train_... | https://github.com/huggingface/datasets/issues/4944 | [
"does the trainer save it in GPU? sooo curious... how to fix it",
"It's my bad. didn't limit the input length"
] | null | 4,944 | false |
Add splits to MBPP dataset | This PR addresses https://github.com/huggingface/datasets/issues/4795 | https://github.com/huggingface/datasets/pull/4943 | [
"```\r\n(env) cwarny@Cedrics-Air datasets % RUN_SLOW=1 pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mbpp\r\n================================================================================================ test session starts ==========================================================... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4943",
"html_url": "https://github.com/huggingface/datasets/pull/4943",
"diff_url": "https://github.com/huggingface/datasets/pull/4943.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4943.patch",
"merged_at": "2022-09-13T12:27... | 4,943 | true |
Trec Dataset has incorrect labels | ## Describe the bug
Both coarse and fine labels seem to be out of line.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = "trec"
raw_datasets = load_dataset(dataset)
df = pd.DataFrame(raw_datasets["test"])
df.head()
```
## Expected results
text (string) | coarse_labe... | https://github.com/huggingface/datasets/issues/4942 | [
"Thanks for reporting, @wmpauli. \r\n\r\nIndeed we recently fixed this issue:\r\n- #4801 \r\n\r\nThe fix will be accessible after our next library release. In the meantime, you can have it by passing `revision=\"main\"` to `load_dataset`."
] | null | 4,942 | false |
Add Papers with Code ID to scifact dataset | This PR:
- adds Papers with Code ID
- forces sync between GitHub and Hub, which previously failed due to Hub validation error of the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true | https://github.com/huggingface/datasets/pull/4941 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4941",
"html_url": "https://github.com/huggingface/datasets/pull/4941",
"diff_url": "https://github.com/huggingface/datasets/pull/4941.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4941.patch",
"merged_at": "2022-09-06T18:26... | 4,941 | true |
Fix multilinguality tag and missing sections in xquad_r dataset card | This PR fixes issue reported on the Hub:
- Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1 | https://github.com/huggingface/datasets/pull/4940 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4940",
"html_url": "https://github.com/huggingface/datasets/pull/4940",
"diff_url": "https://github.com/huggingface/datasets/pull/4940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4940.patch",
"merged_at": "2022-09-12T10:08... | 4,940 | true |
Fix NonMatchingChecksumError in adv_glue dataset | Fix issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1 | https://github.com/huggingface/datasets/pull/4939 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4939",
"html_url": "https://github.com/huggingface/datasets/pull/4939",
"diff_url": "https://github.com/huggingface/datasets/pull/4939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4939.patch",
"merged_at": "2022-09-06T17:39... | 4,939 | true |
Remove main branch rename notice | We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months)
I also unpinned the github issue about the branch renaming | https://github.com/huggingface/datasets/pull/4938 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4938",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"merged_at": "2022-09-06T16:43... | 4,938 | true |
Remove deprecated identical_ok | `huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated and will be removed soon; it already has no effect when passed:
```python
Args:
...
identical_ok (`bool`, *optional*, defaults to `True`):
Deprecated: will be removed in 0.11.0.
... | https://github.com/huggingface/datasets/pull/4937 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4937",
"html_url": "https://github.com/huggingface/datasets/pull/4937",
"diff_url": "https://github.com/huggingface/datasets/pull/4937.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4937.patch",
"merged_at": "2022-09-06T22:21... | 4,937 | true |
vivos (Vietnamese speech corpus) dataset not accessible | ## Describe the bug
VIVOS data is not accessible anymore, neither of these links work (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dat... | https://github.com/huggingface/datasets/issues/4936 | [
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan om... | null | 4,936 | false |
Dataset Viewer issue for ubuntu_dialogs_corpus | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | https://github.com/huggingface/datasets/issues/4935 | [
"The dataset maintainers (https://huggingface.co/datasets/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download), and the dataset viewer respects this.\r\nWe will try to improve the error display though. Thank... | null | 4,935 | false |
Dataset Viewer issue for indonesian-nlp/librivox-indonesia | ### Link
https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
### Description
I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work with following error message:
```
Server error
Status code: 400
Exception: TypeEr... | https://github.com/huggingface/datasets/issues/4934 | [
"The error is not related to the dataset viewer. I'm having a look...",
"Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp/librivox-indonesia\")\r\nNo config specified, defaulting ... | null | 4,934 | false |
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable. | ## Describe the bug
`Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
## Steps to reproduce the bug
(In a python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarraw==9.0.0` and `pyarrow==8.0.0`.)
```python
from datasets import load_dataset
ds_... | https://github.com/huggingface/datasets/issues/4933 | [
"Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instea... | null | 4,933 | false |
Dataset Viewer issue for bigscience-biomedical/biosses | ### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (show where the dataset preview used to be) .
```
Status code: 40... | https://github.com/huggingface/datasets/issues/4932 | [
"Possibly not related to the dataset viewer in itself. cc @huggingface/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https://huggingface.co/datasets/bigscience-biomedical/biosses/blob/main/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets im... | null | 4,932 | false |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- coqa
- hyperpartisan_news_detection
- opinosis
- scientific_papers
- scifact
- search_qa
- wiki_qa
- wiki_split
- wikisql
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #489... | https://github.com/huggingface/datasets/pull/4931 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4931",
"html_url": "https://github.com/huggingface/datasets/pull/4931",
"diff_url": "https://github.com/huggingface/datasets/pull/4931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4931.patch",
"merged_at": "2022-09-06T05:39... | 4,931 | true |
Add cc-by-nc-2.0 to list of licenses | This PR adds the `cc-by-nc-2.0` to the list of licenses because it is required by `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md | https://github.com/huggingface/datasets/pull/4930 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"this list needs to be kept in sync with the ones in moon-landing and hub-docs :)",
"@julien-c don't you think it might be better to a have a single file (source of truth) in one of the repos and then use it in every other repo, ins... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4930",
"html_url": "https://github.com/huggingface/datasets/pull/4930",
"diff_url": "https://github.com/huggingface/datasets/pull/4930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4930.patch",
"merged_at": "2022-09-05T17:01... | 4,930 | true |
Fixes a typo in loading documentation | As shown in the [documentation page](https://huggingface.co/docs/datasets/loading), `"tr"in` should be `"train"`.

| https://github.com/huggingface/datasets/pull/4929 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"merged_at": "2022-09-05T13:06... | 4,929 | true |
Add ability to read-write to SQL databases. | Fixes #3094
Add ability to read/write to SQLite files and also read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency as it is fairly big and it remains optional.
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541... | https://github.com/huggingface/datasets/pull/4928 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Ah CI runs with `pandas=1.3.5` which doesn't return the number of row inserted.",
"wow this is super cool!",
"@lhoestq I'm getting error in integration tests, not sure if it's related to my PR. Any help would be appreciated :) \r... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4928",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"merged_at": "2022-10-03T16:32... | 4,928 | true |
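The SQL read/write feature above is built on pandas/SQLAlchemy, but the underlying round-trip it wraps can be sketched with the standard library alone. The snippet below is an illustration only (table and column names are made up), not the actual `datasets` implementation:

```python
import sqlite3

# Illustrative sketch of the SQL round-trip the feature wraps: write rows to a
# SQLite table, then read them back as the column data a `from_sql`-style
# loader would expose. Table/column names here are invented for the example.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE examples (text TEXT, label INTEGER)")
rows = [("positive review", 1), ("negative review", 0)]
conn.executemany("INSERT INTO examples VALUES (?, ?)", rows)
conn.commit()

# Reading back yields the rows to be turned into dataset columns.
loaded = conn.execute("SELECT text, label FROM examples").fetchall()
assert loaded == rows
conn.close()
```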
fix BLEU metric card | I've fixed some typos in BLEU metric card. | https://github.com/huggingface/datasets/pull/4927 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4927",
"html_url": "https://github.com/huggingface/datasets/pull/4927",
"diff_url": "https://github.com/huggingface/datasets/pull/4927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4927.patch",
"merged_at": "2022-09-09T16:28... | 4,927 | true |
Dataset infos in yaml | To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the YAML metadata of the readme already contain dataset metadata so we would have everything in one place.
To be more specific, I moved these fie... | https://github.com/huggingface/datasets/pull/4926 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Alright this is ready for review :)\r\nI mostly would like your opinion on the YAML structure and what we can do in the docs (IMO we can add the docs about those fields in the Hub docs). Other than that let me know if the changes in ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4926",
"html_url": "https://github.com/huggingface/datasets/pull/4926",
"diff_url": "https://github.com/huggingface/datasets/pull/4926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4926.patch",
"merged_at": "2022-10-03T09:11... | 4,926 | true |
Add note about loading image / audio files to docs | This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure.
Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447
cc @NielsRogge | https://github.com/huggingface/datasets/pull/4925 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4925). All of your documentation changes will be reflected on that endpoint.",
"Thanks for the feedback @polinaeterna ! I've reworded the docs a bit to integrate your comments and this should be ready for another review :)",
... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4925",
"html_url": "https://github.com/huggingface/datasets/pull/4925",
"diff_url": "https://github.com/huggingface/datasets/pull/4925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4925.patch",
"merged_at": null
} | 4,925 | true |
Concatenate_datasets loads everything into RAM | ## Describe the bug
When loading the datasets separately and saving them on disk, I want to concatenate them. But `concatenate_datasets` is filling up my RAM and the process gets killed. Is there a way to prevent this from happening or is this intended behaviour? Thanks in advance
## Steps to reproduce the bug
```... | https://github.com/huggingface/datasets/issues/4924 | [] | null | 4,924 | false |
decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround | `torchaudio>0.12` fails with decoding mp3 files if `ffmpeg<4`. currently we ask users to downgrade torchaudio, but sometimes it's not possible as torchaudio version is binded to torch version. as a temporary workaround we can decode mp3 with librosa (though it 60 times slower, at least it works)
another option would... | https://github.com/huggingface/datasets/pull/4923 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks ! Should we still support torchaudio>0.12 if it works ? And if it doesn't we can explain that downgrading is the right solution, or alternatively use librosa",
"@lhoestq \r\n\r\n> Should we still support torchaudio>0.12 if i... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4923",
"html_url": "https://github.com/huggingface/datasets/pull/4923",
"diff_url": "https://github.com/huggingface/datasets/pull/4923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4923.patch",
"merged_at": "2022-09-20T13:12... | 4,923 | true |
I/O error on Google Colab in streaming mode | ## Describe the bug
When trying to load a streaming dataset in Google Colab the loading fails with an I/O error
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
list(hf_ds.... | https://github.com/huggingface/datasets/issues/4922 | [] | null | 4,922 | false |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- eraser_multi_rc
- hotpot_qa
- metooma
- movie_rationales
- qanta
- quora
- quoref
- race
- ted_hrlr
- ted_talks_iwslt
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
... | https://github.com/huggingface/datasets/pull/4921 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4921",
"html_url": "https://github.com/huggingface/datasets/pull/4921",
"diff_url": "https://github.com/huggingface/datasets/pull/4921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4921.patch",
"merged_at": "2022-09-01T05:04... | 4,921 | true |
Unable to load local tsv files through load_dataset method | ## Describe the bug
Unable to load local tsv files through load_dataset method.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
data_files = {
'train': 'train.tsv',
'test': 'test.tsv'
}
raw_datasets = load_dataset('tsv', data_files=data_files)
## Expected results
I am p... | https://github.com/huggingface/datasets/issues/4920 | [
"Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_... | null | 4,920 | false |
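The reply above explains why there is no dedicated "tsv" builder: a TSV file is just a CSV with a tab delimiter, which is what `load_dataset("csv", sep="\t", ...)` passes down to pandas. A minimal stdlib sketch of that idea (the sample content is made up):

```python
import csv
import io

# A TSV is parsed exactly like a CSV once the delimiter is set to a tab.
tsv_content = "text\tlabel\ngood movie\t1\nbad movie\t0\n"
reader = csv.DictReader(io.StringIO(tsv_content), delimiter="\t")
records = list(reader)
assert records == [
    {"text": "good movie", "label": "1"},
    {"text": "bad movie", "label": "0"},
]
```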
feat: improve error message on Keys mismatch. closes #4917 | Hi @lhoestq what do you think?
Let me give you a code sample:
```py
>>> import datasets
>>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]})
>>> foo.save_to_disk('foo')
# edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz'
>>> datasets.load_from_disk('foo')
--------------------------... | https://github.com/huggingface/datasets/pull/4919 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"We are having an unrelated issue that makes several tests fail. We are working on that. Once fixed, you will be able to merge the main branch into this, so that you get the fix and the tests pass..."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4919",
"html_url": "https://github.com/huggingface/datasets/pull/4919",
"diff_url": "https://github.com/huggingface/datasets/pull/4919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4919.patch",
"merged_at": "2022-09-05T08:43... | 4,919 | true |
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines | ### Link
https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines
### Description
After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist.
### Owner
_No response_ | https://github.com/huggingface/datasets/issues/4918 | [
"Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n<img width=\"1508\" alt=\"Capture d'écran 2022-09-05 à 18 31 22\" src=\"https://user-images.githubusercontent.com/1676121/188489762-0ed86a7e-dfb3-46e8-a125-43b815a2c6f4.p...
Keys mismatch: make error message more informative | **Is your feature request related to a problem? Please describe.**
When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don't know when/why/how this happens but it deserves its own issue), you will get an error message like:
`ValueError: Keys mismatch: between {'bar': V... | https://github.com/huggingface/datasets/issues/4917 | [
"Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?",
"Is this open to work ... | null | 4,917 | false |
Apache Beam unable to write the downloaded wikipedia dataset | ## Describe the bug
Hi, I am currently trying to download the wikipedia dataset using
load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner'). However, I end up getting a FileNotFoundError. I get this error for any language I try to download. It downloads the file but while s...
"See:\r\n- #4915"
] | null | 4,916 | false |
FileNotFoundError while downloading wikipedia dataset for any language | ## Describe the bug
Hi, I am currently trying to download the wikipedia dataset using
load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner'). However, I end up getting a FileNotFoundError. I get this error for any language I try to download.
Environment:
## Step... | https://github.com/huggingface/datasets/issues/4915 | [
"Hi @Shilpac20,\r\n\r\nAs explained in the Wikipedia dataset card: https://huggingface.co/datasets/wikipedia\r\n> You can find the full list of languages and dates [here](https://dumps.wikimedia.org/backup-index.html).\r\n\r\nThis means that, before passing a specific date, you should first make sure it is availabl... | null | 4,915 | false |
Support streaming swda dataset | Support streaming swda dataset. | https://github.com/huggingface/datasets/pull/4914 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4914",
"html_url": "https://github.com/huggingface/datasets/pull/4914",
"diff_url": "https://github.com/huggingface/datasets/pull/4914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4914.patch",
"merged_at": "2022-08-30T11:14... | 4,914 | true |
Add license and citation information to cosmos_qa dataset | This PR adds the license information to `cosmos_qa` dataset, once reported via email by Yejin Choi, the dataset is licensed under CC BY 4.0.
This PR also updates the citation information. | https://github.com/huggingface/datasets/pull/4913 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4913",
"html_url": "https://github.com/huggingface/datasets/pull/4913",
"diff_url": "https://github.com/huggingface/datasets/pull/4913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4913.patch",
"merged_at": "2022-08-30T09:47... | 4,913 | true |
datasets map() handles all data at a stroke and takes long time | **1. Background**
The Hugging Face datasets package advises using `map()` to process data in batches. In the example code on pretraining a masked language model, they use `map()` to tokenize all data at a stroke before the train loop.
The corresponding code:
```
with accelerator.main_process_first():
tokenized_... | https://github.com/huggingface/datasets/issues/4912 | [
"Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both op... | null | 4,912 | false |
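The reply contrasts preprocessing everything up front via `map()` with on-the-fly processing in a data collator. What a batched `map` does conceptually can be sketched in plain Python: the user function receives a batch (a dict of column name to a slice of values) and returns new columns. This is an illustration only, not the `datasets` implementation:

```python
# Plain-Python sketch of batched map semantics: apply `fn` to fixed-size
# slices of the columnar data and gather the returned columns.
def batched_map(columns, fn, batch_size):
    num_rows = len(next(iter(columns.values())))
    out = {}
    for start in range(0, num_rows, batch_size):
        batch = {k: v[start:start + batch_size] for k, v in columns.items()}
        result = fn(batch)
        for k, v in result.items():
            out.setdefault(k, []).extend(v)
    return out

data = {"text": ["a", "bb", "ccc", "dddd", "eeeee"]}
lengths = batched_map(
    data, lambda b: {"length": [len(t) for t in b["text"]]}, batch_size=2
)
assert lengths == {"length": [1, 2, 3, 4, 5]}
```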
[Tests] Ensure `datasets` supports renamed repositories | On https://hf.co/datasets you can rename a dataset (or sometimes move it to another user/org). The website handles redirections correctly and AFAIK `datasets` does as well.
However it would be nice to have an integration test to make sure we don't break support for renamed datasets.
To implement this we can use t... | https://github.com/huggingface/datasets/issues/4911 | [
"You could also switch to using `huggingface_hub` more directly, where such a guarantee is already tested =)\r\n\r\ncc @Wauplin "
] | null | 4,911 | false |
Identical keywords in build_kwargs and config_kwargs lead to TypeError in load_dataset_builder() | ## Describe the bug
In `load_dataset_builder()`, `build_kwargs` and `config_kwargs` can contain the same keywords leading to a TypeError("type object got multiple values for keyword argument "xyz").
I ran into this problem with the keyword: `base_path`. It might happen with other kwargs as well. I think a quickfix... | https://github.com/huggingface/datasets/issues/4910 | [
"I am getting similar error - `TypeError: type object got multiple values for keyword argument 'name'` while following this [tutorial](https://huggingface.co/docs/datasets/dataset_script#create-a-dataset-loading-script). I am getting this error with the `dataset-cli test` command.\r\n\r\n`datasets` version: 2.4.0",... | null | 4,910 | false |
Update GLUE evaluation metadata | This PR updates the evaluation metadata for GLUE to:
* Include defaults for all configs except `ax` (which only has a `test` split with no known labels)
* Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private)
* Fix the `task_id` for some existing defaults
... | https://github.com/huggingface/datasets/pull/4909 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4909",
"html_url": "https://github.com/huggingface/datasets/pull/4909",
"diff_url": "https://github.com/huggingface/datasets/pull/4909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4909.patch",
"merged_at": "2022-08-29T14:51... | 4,909 | true |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- asnq
- clue
- common_gen
- cosmos_qa
- guardian_authorship
- hindi_discourse
- py_ast
- x_stance
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896 | https://github.com/huggingface/datasets/pull/4908 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4908",
"html_url": "https://github.com/huggingface/datasets/pull/4908",
"diff_url": "https://github.com/huggingface/datasets/pull/4908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4908.patch",
"merged_at": "2022-08-29T16:13... | 4,908 | true |
None Type error for swda datasets | ## Describe the bug
I got a `'NoneType' object is not callable` error while loading the swda dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("swda")
```
## Expected results
Run without error
## Environment info
<!-- You can run the command `datase... | https://github.com/huggingface/datasets/issues/4907 | [
"Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?",
"Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.",
"Ok, let us know ... | null | 4,907 | false |
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import) | ## Describe the bug
A clear and concise description of what the bug is.
Not able to import datasets
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import os
os.environ["WANDB_API_KEY"] = "0" ## to silence warning
import numpy as np
import random
import sklearn
import matplotlib.p... | https://github.com/huggingface/datasets/issues/4906 | [
"Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `data... | null | 4,906 | false |
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config | We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61
These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`.
How... | https://github.com/huggingface/datasets/pull/4904 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"This PR fixes a bug introduced in:\r\n- #4184"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4904",
"html_url": "https://github.com/huggingface/datasets/pull/4904",
"diff_url": "https://github.com/huggingface/datasets/pull/4904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4904.patch",
"merged_at": "2022-08-30T10:03... | 4,904 | true |
Fix CI reporting | Fix CI so that it reports defaults (failed and error) besides the custom (xfailed and xpassed) in the test summary.
This PR fixes a regression introduced by:
- #4845
This introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and error. | https://github.com/huggingface/datasets/pull/4903 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4903",
"html_url": "https://github.com/huggingface/datasets/pull/4903",
"diff_url": "https://github.com/huggingface/datasets/pull/4903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4903.patch",
"merged_at": "2022-08-26T17:46... | 4,903 | true |
Name the default config `default` | Currently, if a dataset has no configuration, a default configuration is created from the dataset name.
For example, for a dataset loaded from the hub repository, such as https://huggingface.co/datasets/user/dataset (repo id is `user/dataset`), the default configuration will be `user--dataset`.
It might be easier... | https://github.com/huggingface/datasets/issues/4902 | [] | null | 4,902 | false |
Raise ManualDownloadError from get_dataset_config_info | This PRs raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.
Related to:
- #4898
CC: @severo | https://github.com/huggingface/datasets/pull/4901 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4901",
"html_url": "https://github.com/huggingface/datasets/pull/4901",
"diff_url": "https://github.com/huggingface/datasets/pull/4901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4901.patch",
"merged_at": "2022-08-30T10:40... | 4,901 | true |
Dataset Viewer issue for asaxena1990/Dummy_dataset | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | https://github.com/huggingface/datasets/issues/4900 | [
"Seems to be linked to the use of the undocumented `_resolve_features` method in the dataset viewer backend:\r\n\r\n```\r\n>>> from datasets import load_dataset\r\n>>> dataset = load_dataset(\"asaxena1990/Dummy_dataset\", name=\"asaxena1990--Dummy_dataset\", split=\"train\", streaming=True)\r\nUsing custom data con... | null | 4,900 | false |
Re-add code and und language tags | This PR fixes the removal of 2 language tags done by:
- #4882
The tags are:
- "code": this is not a IANA tag but needed
- "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af
- used in "mc4" and "udhr" datasets | https://github.com/huggingface/datasets/pull/4899 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4899",
"html_url": "https://github.com/huggingface/datasets/pull/4899",
"diff_url": "https://github.com/huggingface/datasets/pull/4899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4899.patch",
"merged_at": "2022-08-26T10:24... | 4,899 | true |
Dataset Viewer issue for timit_asr | ### Link
_No response_
### Description
_No response_
### Owner
_No response_ | https://github.com/huggingface/datasets/issues/4898 | [
"Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/data... | null | 4,898 | false |
datasets generate large arrow file | Checking the large file in disk, and found the large cache file in the cifar10 data directory:

As we know, the size of the cifar10 dataset is ~130MB, but the cache file is almost 30GB in size, there may be so... | https://github.com/huggingface/datasets/issues/4897 | [
"Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?",
"@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 time... | null | 4,897 | false |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- anli
- coarse_discourse
- commonsense_qa
- cos_e
- ilist
- lc_quad
- web_questions
- xsum
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891 | https://github.com/huggingface/datasets/pull/4896 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4896",
"html_url": "https://github.com/huggingface/datasets/pull/4896",
"diff_url": "https://github.com/huggingface/datasets/pull/4896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4896.patch",
"merged_at": "2022-08-26T04:41... | 4,896 | true |
load_dataset method returns Unknown split "validation" even if this dir exists | ## Describe the bug
The `datasets.load_dataset` returns a `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")` even if the `validation` sub-directory exists in the local data path.
The data directories are as follows and a... | https://github.com/huggingface/datasets/issues/4895 | [
"I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n",
"@SamSamhuns could you please try to load it with t... | null | 4,895 | false |
Add citation information to makhzan dataset | This PR adds the citation information to `makhzan` dataset, once they have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43 | https://github.com/huggingface/datasets/pull/4894 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4894",
"html_url": "https://github.com/huggingface/datasets/pull/4894",
"diff_url": "https://github.com/huggingface/datasets/pull/4894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4894.patch",
"merged_at": "2022-08-25T13:19... | 4,894 | true |
Oversampling strategy for iterable datasets in `interleave_datasets` | In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` for iterable ... | https://github.com/huggingface/datasets/issues/4893 | [
"Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(Exa... | null | 4,893 | false |
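The oversampling strategy added in #4831 (exposed as `stopping_strategy="all_exhausted"`) keeps interleaving until every source has been fully seen at least once, restarting shorter sources from the beginning. A plain-Python sketch of that round-robin logic (an illustration, not the `datasets` code):

```python
from itertools import cycle

# Round-robin interleaving with oversampling: shorter sources cycle back to
# their start; we stop once every source has been consumed at least once.
def interleave_all_exhausted(*sources):
    iterators = [cycle(s) for s in sources]
    remaining = [len(s) for s in sources]  # unseen rows per source
    out = []
    while any(r > 0 for r in remaining):
        for i, it in enumerate(iterators):
            out.append(next(it))
            if remaining[i] > 0:
                remaining[i] -= 1
    return out

mixed = interleave_all_exhausted([0, 1, 2], [10, 11], [20, 21, 22, 23])
assert mixed[:3] == [0, 10, 20]
assert len(mixed) == 12  # 3 sources x 4 rounds (the longest source has 4 rows)
```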
Add citation to ro_sts and ro_sts_parallel datasets | This PR adds the citation information to `ro_sts_parallel` and `ro_sts_parallel` datasets, once they have replied our request for that information:
- https://github.com/dumitrescustefan/RO-STS/issues/4 | https://github.com/huggingface/datasets/pull/4892 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4892). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4892",
"html_url": "https://github.com/huggingface/datasets/pull/4892",
"diff_url": "https://github.com/huggingface/datasets/pull/4892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4892.patch",
"merged_at": "2022-08-25T10:49... | 4,892 | true |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
| https://github.com/huggingface/datasets/pull/4891 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4891",
"html_url": "https://github.com/huggingface/datasets/pull/4891",
"diff_url": "https://github.com/huggingface/datasets/pull/4891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4891.patch",
"merged_at": "2022-08-25T13:43... | 4,891 | true |
add Dataset.from_list | As discussed in #4885
I initially added this bit at the end, thinking filling this field was necessary as it is done in from_dict.
However, it seems the constructor takes care of filling info when it is empty.
```
if info.features is None:
info.features = Features(
{
col: generate_from_arro... | https://github.com/huggingface/datasets/pull/4890 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@albertvillanova it seems tests fail on pyarrow 6, perhaps from_pylist is a v7 method? How do you usually handle these version differences?\r\nAdded something that at least works"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4890",
"html_url": "https://github.com/huggingface/datasets/pull/4890",
"diff_url": "https://github.com/huggingface/datasets/pull/4890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4890.patch",
"merged_at": "2022-09-02T10:20... | 4,890 | true |
torchaudio 11.0 yields different results than torchaudio 12.1 when loading MP3 | ## Describe the bug
When loading Common Voice with torchaudio 0.11.0 the results are different to 0.12.1 which leads to problems in transformers see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and `torc... | https://github.com/huggingface/datasets/issues/4889 | [
"Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.",
"torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 deco... | null | 4,889 | false |
Dataset Viewer issue for subjqa | ### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though π€
### Owner
Yes | https://github.com/huggingface/datasets/issues/4888 | [
"It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it.",
"Fixed \r\n\r\nhttps://huggingface.co/datasets/subjqa\r\n\r\n<img width=\"1040\" alt=\"Capture d'écran 2022-09-08 à 10 23 26\" src=\"https://user-images.githubusercontent.com/1...
Add "cc-by-nc-sa-2.0" to list of licenses | Datasets side of https://github.com/huggingface/hub-docs/pull/285 | https://github.com/huggingface/datasets/pull/4887 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Sorry for the issue @albertvillanova! I think it's now fixed! :heart: "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4887",
"html_url": "https://github.com/huggingface/datasets/pull/4887",
"diff_url": "https://github.com/huggingface/datasets/pull/4887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4887.patch",
"merged_at": "2022-08-26T10:29... | 4,887 | true |
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid | ## Describe the bug
Loading huggan/CelebA-HQ throws pyarrow.lib.ArrowInvalid
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset('huggan/CelebA-HQ')
```
## Expected results
See https://colab.research.google.com/drive/141LJCcM2XyqprPY83nIQ-Zk3BbxWeahq?usp=sharing#... | https://github.com/huggingface/datasets/issues/4886 | [
"Hi! IIRC one of the files in this dataset is corrupted due to https://github.com/huggingface/datasets/pull/4081 (fixed now).\r\n\r\n@NielsRogge Could you please re-generate and re-push this dataset (or I can do it if you share the generation script)?",
"Could you put something in place to catch these problems? ... | null | 4,886 | false |
Create dataset from list of dicts | I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
which can error out on some more exotic values such as 2-d arrays for reas...
"Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementin... | null | 4,885 | false |
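`Dataset.from_dict` expects columnar data (a dict of column name to list of values), so a `from_list` convenience boils down to transposing a list of row dicts into that shape. A stdlib sketch of that transposition (column names here are invented for the example):

```python
# Row-to-column transposition that a `from_list` helper performs before
# handing off to the columnar `from_dict` path. Illustration only.
def rows_to_columns(rows):
    columns = {key: [] for key in rows[0]}
    for row in rows:
        for key, value in row.items():
            columns[key].append(value)
    return columns

rows = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]
assert rows_to_columns(rows) == {"text": ["hello", "world"], "label": [0, 1]}
```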
Fix documentation card of math_qa dataset | Fix documentation card of math_qa dataset. | https://github.com/huggingface/datasets/pull/4884 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4884). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4884",
"html_url": "https://github.com/huggingface/datasets/pull/4884",
"diff_url": "https://github.com/huggingface/datasets/pull/4884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4884.patch",
"merged_at": "2022-08-24T11:33... | 4,884 | true |
With dataloader RSS memory consumed by HF datasets monotonically increases | ## Describe the bug
When the HF datasets is used in conjunction with PyTorch Dataloader, the RSS memory of the process keeps on increasing when it should stay constant.
## Steps to reproduce the bug
Run and observe the output of this snippet which logs RSS memory.
```python
import psutil
import os
from transf... | https://github.com/huggingface/datasets/issues/4883 | [
"Are you sure there is a leak? How can I see it? You shared the script but not the output which you believe should indicate a leak.\r\n\r\nI modified your reproduction script to print only once per try as your original was printing too much info and you absolutely must add `gc.collect()` when doing any memory measu... | null | 4,883 | false |
Fix language tags resource file | This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08).
This PR also removes all BCP 47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See:
- #4753 | https://github.com/huggingface/datasets/pull/4882 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4882). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4882",
"html_url": "https://github.com/huggingface/datasets/pull/4882",
"diff_url": "https://github.com/huggingface/datasets/pull/4882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4882.patch",
"merged_at": "2022-08-24T13:58... | 4,882 | true |
Language names and language codes: connecting to a big database (rather than slow enrichment of custom list) | **The problem:**
Language diversity is an important dimension of the diversity of datasets. To find one's way around datasets, being able to search by language name and by standardized codes appears crucial.
Currently the list of language codes is [here](https://github.com/huggingface/datasets/blob/main/src/datase... | https://github.com/huggingface/datasets/issues/4881 | [
"Thanks for opening this discussion, @alexis-michaud.\r\n\r\nAs the language validation procedure is shared with other Hugging Face projects, I'm tagging them as well.\r\n\r\nCC: @huggingface/moon-landing ",
"on the Hub side, there is not fine grained validation we just check that `language:` contains an array of... | null | 4,881 | false |
Added names of less-studied languages | Added names of less-studied languages (nru β Narua and jya β Japhug) for existing datasets. | https://github.com/huggingface/datasets/pull/4880 | [
"OK, I removed Glottolog codes and only added ISO 639-3 ones. The former are for the moment in corpus card description, language details, and in subcorpora names.",
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4880). All of your documentation changes will be reflected on ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4880",
"html_url": "https://github.com/huggingface/datasets/pull/4880",
"diff_url": "https://github.com/huggingface/datasets/pull/4880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4880.patch",
"merged_at": "2022-08-24T12:52... | 4,880 | true |
Fix Citation Information section in dataset cards | Fix Citation Information section in dataset cards:
- cc_news
- conllpp
- datacommons_factcheck
- gnad10
- id_panl_bppt
- jigsaw_toxicity_pred
- kinnews_kirnews
- kor_sarcasm
- makhzan
- reasoning_bg
- ro_sts
- ro_sts_parallel
- sanskrit_classic
- telugu_news
- thaiqa_squad
- wiki_movies
This PR parti... | https://github.com/huggingface/datasets/pull/4879 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4879). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4879",
"html_url": "https://github.com/huggingface/datasets/pull/4879",
"diff_url": "https://github.com/huggingface/datasets/pull/4879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4879.patch",
"merged_at": "2022-08-24T04:09... | 4,879 | true |
[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file` | In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon)
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingfac... | https://github.com/huggingface/datasets/issues/4878 | [
"Resolved via https://github.com/huggingface/datasets/pull/4937."
] | null | 4,878 | false |
Fix documentation card of covid_qa_castorini dataset | Fix documentation card of covid_qa_castorini dataset. | https://github.com/huggingface/datasets/pull/4877 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4877). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4877",
"html_url": "https://github.com/huggingface/datasets/pull/4877",
"diff_url": "https://github.com/huggingface/datasets/pull/4877.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4877.patch",
"merged_at": "2022-08-23T18:05... | 4,877 | true |
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md` | Currently there are two places to find metadata for datasets:
- datasets_infos.json, which contains **per dataset config**
- description
- citation
- license
- splits and sizes
- checksums of the data files
- feature types
- and more
- YAML tags, which contain
- license
- language
- trai... | https://github.com/huggingface/datasets/issues/4876 | [
"also @osanseviero @Pierrci @SBrandeis potentially",
"Love this in principle π \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config... | null | 4,876 | false |
`_resolve_features` ignores the token | ## Describe the bug
When calling [`_resolve_features()`](https://github.com/huggingface/datasets/blob/54b532a8a2f5353fdb0207578162153f7b2da2ec/src/datasets/iterable_dataset.py#L1255) on a gated dataset, ie. a dataset which requires a token to be loaded, the token seems to be ignored even if it has been provided to `... | https://github.com/huggingface/datasets/issues/4875 | [
"Hi ! Your HF_ENDPOINT seems wrong because of the extra \"/\"\r\n```diff\r\n- os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co/\"\r\n+ os.environ[\"HF_ENDPOINT\"] = \"https://hub-ci.huggingface.co\"\r\n```\r\n\r\ncan you try again without the extra \"/\" ?",
"Oh, yes, sorry, but it's not the issue.\r... | null | 4,875 | false |
[docs] Some tiny doc tweaks | null | https://github.com/huggingface/datasets/pull/4874 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_4874). All of your documentation changes will be reflected on that endpoint."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4874",
"html_url": "https://github.com/huggingface/datasets/pull/4874",
"diff_url": "https://github.com/huggingface/datasets/pull/4874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4874.patch",
"merged_at": "2022-08-24T17:27... | 4,874 | true |
Multiple dataloader memory error | For the use of multiple datasets and tasks, we use more than 200 dataloaders, then pass them into `dataloader1, dataloader2, ..., dataloader200=accelerate.prepare(dataloader1, dataloader2, ..., dataloader200)`
It causes the memory error when generating batches. Any solutions to it?
```bash
File "/home/xxx/... | https://github.com/huggingface/datasets/issues/4873 | [
"Hi!\r\n\r\n200+ data loaders is a lot. Have you tried to reduce the number of datasets by concatenating/interleaving the ones with the same structure/task (the API is `{concatenate_datasets/interleave_datasets}([dset1, ..., dset_N])`)?",
"Hi @mariosasko, thank you for your reply. I tried pre-concatenating differ... | null | 4,873 | false |