| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
fix: update exception throw from OSError to EnvironmentError in `push… | Status:
Ready for review
Description of Changes:
Fixes #5075
Changes proposed in this pull request:
- Throw EnvironmentError instead of OSError in `push_to_hub` when the Hub token is not present. | https://github.com/huggingface/datasets/pull/5076 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5076",
"html_url": "https://github.com/huggingface/datasets/pull/5076",
"diff_url": "https://github.com/huggingface/datasets/pull/5076.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5076.patch",
"merged_at": "2022-10-07T14:33... | 5,076 | true |
Throw EnvironmentError when token is not present | Throw EnvironmentError instead of OSError ([link](https://github.com/huggingface/datasets/blob/6ad430ba0cdeeb601170f732d4bd977f5c04594d/src/datasets/arrow_dataset.py#L4306) to the line) in `push_to_hub` when the Hub token is not present. | https://github.com/huggingface/datasets/issues/5075 | [
"@mariosasko I've raised a PR #5076 against this issue. Please help to review. Thanks."
] | null | 5,075 | false |
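The change tracked in the two records above (raise `EnvironmentError` instead of `OSError` when no Hub token is present) can be sketched as below. This is an illustration, not the actual `datasets` implementation; the function name and the `HF_TOKEN` variable are hypothetical. Note that in Python 3, `EnvironmentError` is an alias of `OSError`, so existing `except OSError` handlers still catch it.

```python
import os

def resolve_hub_token(token=None):
    """Minimal sketch (not the actual `datasets` code) of the guard
    described above: fail loudly when no Hub token is available."""
    # The HF_TOKEN variable name here is illustrative.
    token = token or os.environ.get("HF_TOKEN")
    if token is None:
        # In Python 3, EnvironmentError is an alias of OSError,
        # so `except OSError` continues to catch this error.
        raise EnvironmentError(
            "You need to provide a `token` or be logged in to Hugging Face."
        )
    return token
```

Because of the alias, the change is about readability of the raising site rather than runtime behavior.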
Replace AssertionErrors with more meaningful errors | Replace the AssertionErrors with more meaningful errors such as ValueError, TypeError, etc.
The files with AssertionErrors that need to be replaced:
```
src/datasets/arrow_reader.py
src/datasets/builder.py
src/datasets/utils/version.py
``` | https://github.com/huggingface/datasets/issues/5074 | [
"Hi, can I pick up this issue?",
"#self-assign",
"Looks like the top-level `datasource` directory was removed when https://github.com/huggingface/datasets/pull/4974 was merged, so there are 3 source files to fix."
] | null | 5,074 | false |
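The refactor requested in the record above (replace `AssertionError` with meaningful errors) follows a common pattern, sketched below with hypothetical names not taken from the `datasets` source. Asserts are stripped under `python -O` and raise an unhelpful error type, so validation should use `TypeError`/`ValueError` instead.

```python
def set_version(version):
    """Illustrative before/after for the refactor described above
    (names are hypothetical, not from the `datasets` source)."""
    # Before: assert isinstance(version, str), "version must be a string"
    # Asserts vanish under `python -O` and raise a generic AssertionError.
    if not isinstance(version, str):
        raise TypeError(f"Expected a string version, got {type(version).__name__}")
    major, _, _rest = version.partition(".")
    if not major.isdigit():
        raise ValueError(f"Invalid version string: {version!r}")
    return version
```

Callers can then catch the specific exception type instead of a bare `AssertionError`.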
Restore saved format state in `load_from_disk` | Hello! @mariosasko
This pull request relates to issue #5050 and intends to add the format to datasets loaded from disk.
All I did was add a `set_format` call in `Dataset.load_from_disk`, as `DatasetDict.load_from_disk` relies on the former.
I don't know if I should add a test and where, so let me know if I should and ... | https://github.com/huggingface/datasets/pull/5073 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5073",
"html_url": "https://github.com/huggingface/datasets/pull/5073",
"diff_url": "https://github.com/huggingface/datasets/pull/5073.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5073.patch",
"merged_at": "2022-10-11T16:49... | 5,073 | true |
Image & Audio formatting for numpy/torch/tf/jax | Added support for image and audio formatting for numpy, torch, tf and jax.
For images, the dtype used is the one of the image (the one returned by PIL.Image), e.g. uint8
I also added support for string, binary and None types. In particular for torch and jax, strings are kept unchanged (previously it was returning... | https://github.com/huggingface/datasets/pull/5072 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I just added a consolidation step so that numpy arrays or tensors of images are stacked together if the shapes match, instead of having lists of tensors\r\n\r\nFeel free to review @mariosasko :)",
"I added a few lines in the docs a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5072",
"html_url": "https://github.com/huggingface/datasets/pull/5072",
"diff_url": "https://github.com/huggingface/datasets/pull/5072.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5072.patch",
"merged_at": "2022-10-10T13:21... | 5,072 | true |
Support DEFAULT_CONFIG_NAME when no BUILDER_CONFIGS | This PR supports defining a default config name, even if no predefined allowed config names are set.
Fix #5070.
CC: @stas00 | https://github.com/huggingface/datasets/pull/5071 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Super, thanks a lot for adding this support, Albert!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5071",
"html_url": "https://github.com/huggingface/datasets/pull/5071",
"diff_url": "https://github.com/huggingface/datasets/pull/5071.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5071.patch",
"merged_at": "2022-10-06T14:40... | 5,071 | true |
Support default config name when no builder configs | **Is your feature request related to a problem? Please describe.**
As discussed with @stas00, we could support defining a default config name, even if no predefined allowed config names are set. That is, support `DEFAULT_CONFIG_NAME`, even when `BUILDER_CONFIGS` is not defined.
**Additional context**
In order to ... | https://github.com/huggingface/datasets/issues/5070 | [
"Thank you for creating this feature request, Albert.\r\n\r\nFor context this is the datatest where Albert has been helping me to switch to on-the-fly split config https://huggingface.co/datasets/HuggingFaceM4/cm4-synthetic-testing\r\n\r\nand the attempt to switch on-the-fly splits was here: https://huggingface.co/... | null | 5,070 | false |
Fix CONTRIBUTING once dataset scripts transferred to Hub | This PR updates the `CONTRIBUTING.md` guide, once the all dataset scripts have been removed from the GitHub repo and transferred to the HF Hub:
- #4974
See diff here: https://github.com/huggingface/datasets/commit/e3291ecff9e54f09fcee3f313f051a03fdc3d94b
Additionally, this PR fixes the line separator that by som... | https://github.com/huggingface/datasets/pull/5067 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5067",
"html_url": "https://github.com/huggingface/datasets/pull/5067",
"diff_url": "https://github.com/huggingface/datasets/pull/5067.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5067.patch",
"merged_at": "2022-10-06T06:12... | 5,067 | true |
Support streaming gzip.open | This PR implements support for streaming out-of-the-box dataset scripts containing `gzip.open`.
This has been a recurring issue. See, e.g.:
- #5060
- #3191 | https://github.com/huggingface/datasets/pull/5066 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5066",
"html_url": "https://github.com/huggingface/datasets/pull/5066",
"diff_url": "https://github.com/huggingface/datasets/pull/5066.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5066.patch",
"merged_at": "2022-10-06T15:11... | 5,066 | true |
Ci py3.10 | Added a CI job for python 3.10
Some dependencies don't work on 3.10, like Apache Beam, so I removed them from the extras in this case.
I also removed some s3 fixtures that we don't use anymore (and that don't work on 3.10 anyway) | https://github.com/huggingface/datasets/pull/5065 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Does it sound good to you @albertvillanova ?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5065",
"html_url": "https://github.com/huggingface/datasets/pull/5065",
"diff_url": "https://github.com/huggingface/datasets/pull/5065.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5065.patch",
"merged_at": "2022-11-29T15:25... | 5,065 | true |
Align signature of create/delete_repo with latest hfh | This PR aligns the signature of `create_repo`/`delete_repo` with the current one in hfh, by removing deprecated `name` and `organization`, and using `repo_id` instead.
Related to:
- #5063
CC: @lhoestq | https://github.com/huggingface/datasets/pull/5064 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5064",
"html_url": "https://github.com/huggingface/datasets/pull/5064",
"diff_url": "https://github.com/huggingface/datasets/pull/5064.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5064.patch",
"merged_at": "2022-10-07T16:59... | 5,064 | true |
Align signature of list_repo_files with latest hfh | This PR aligns the signature of `list_repo_files` with the current one in `hfh`, by renaming deprecated `token` to `use_auth_token`.
This is already the case for `dataset_info`.
CC: @lhoestq | https://github.com/huggingface/datasets/pull/5063 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5063",
"html_url": "https://github.com/huggingface/datasets/pull/5063",
"diff_url": "https://github.com/huggingface/datasets/pull/5063.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5063.patch",
"merged_at": "2022-10-07T16:40... | 5,063 | true |
Fix CI hfh token warning | In our CI, we get warnings from `hfh` about using deprecated `token`: https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub
tests/te... | https://github.com/huggingface/datasets/pull/5062 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"good catch !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5062",
"html_url": "https://github.com/huggingface/datasets/pull/5062",
"diff_url": "https://github.com/huggingface/datasets/pull/5062.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5062.patch",
"merged_at": "2022-10-04T08:42... | 5,062 | true |
`_pickle.PicklingError: logger cannot be pickled` in multiprocessing `map` | ## Describe the bug
When I `map` with multiple processes, this error occurs. The `.name` of the `logger` that fails to pickle in the final line is `datasets.fingerprint`.
```
File "~/project/dataset.py", line 204, in <dictcomp>
split: dataset.map(
File ".../site-packages/datasets/arrow_dataset.py", line 24... | https://github.com/huggingface/datasets/issues/5061 | [
"This is maybe related to python 3.10, do you think you could try on 3.8 ?\r\n\r\nIn the meantime we'll keep improving the support for 3.10. Let me add a dedicated CI",
"I did some binary search and seems like the root cause is either `multiprocess` or `dill`. python 3.10 is fine. Specifically:\r\n- `multiprocess... | null | 5,061 | false |
Unable to Use Custom Dataset Locally | ## Describe the bug
I have uploaded a [dataset](https://huggingface.co/datasets/zpn/pubchem_selfies) and followed the instructions from the [dataset_loader](https://huggingface.co/docs/datasets/dataset_script#download-data-files-and-organize-splits) tutorial. In that tutorial, it says
```
If the data files live in ... | https://github.com/huggingface/datasets/issues/5060 | [
"Hi ! I opened a PR in your repo to fix this :)\r\nhttps://huggingface.co/datasets/zpn/pubchem_selfies/discussions/7\r\n\r\nbasically you need to use `open` for streaming to work properly",
"Thank you so much for this! Naive question, is this a feature of `open` or have you all overloaded it to be able to read fr... | null | 5,060 | false |
Fix typo | Fixes a small typo :) | https://github.com/huggingface/datasets/pull/5059 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5059",
"html_url": "https://github.com/huggingface/datasets/pull/5059",
"diff_url": "https://github.com/huggingface/datasets/pull/5059.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5059.patch",
"merged_at": "2022-10-03T17:32... | 5,059 | true |
Mark CI tests as xfail when 502 error | To make CI more robust, we could mark as xfail when the Hub raises a 502 error (besides 500 error):
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_skip_identical_files
- https://github.com/huggingface/datasets/actions/runs/3174626525/jobs/5171672431
```
> raise HTTPEr... | https://github.com/huggingface/datasets/pull/5058 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5058",
"html_url": "https://github.com/huggingface/datasets/pull/5058",
"diff_url": "https://github.com/huggingface/datasets/pull/5058.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5058.patch",
"merged_at": "2022-10-04T10:01... | 5,058 | true |
Support `converters` in `CsvBuilder` | Add the `converters` param to `CsvBuilder`, to help in situations like [this one](https://discuss.huggingface.co/t/typeerror-in-load-dataset-related-to-a-sequence-of-strings/23545).
| https://github.com/huggingface/datasets/pull/5057 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5057",
"html_url": "https://github.com/huggingface/datasets/pull/5057",
"diff_url": "https://github.com/huggingface/datasets/pull/5057.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5057.patch",
"merged_at": "2022-10-04T11:17... | 5,057 | true |
Fix broken URL's (GEM) | This PR fixes the broken URL's in GEM. cc. @lhoestq, @albertvillanova | https://github.com/huggingface/datasets/pull/5056 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5056). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @manandey. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5056",
"html_url": "https://github.com/huggingface/datasets/pull/5056",
"diff_url": "https://github.com/huggingface/datasets/pull/5056.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5056.patch",
"merged_at": null
} | 5,056 | true |
Fix backward compatibility for dataset_infos.json | While working on https://github.com/huggingface/datasets/pull/5018 I noticed a small bug introduced in #4926 regarding backward compatibility for dataset_infos.json
Indeed, when a dataset repo had both dataset_infos.json and README.md, the JSON file was ignored. This is unexpected: in practice it should be ignored o... | https://github.com/huggingface/datasets/pull/5055 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5055",
"html_url": "https://github.com/huggingface/datasets/pull/5055",
"diff_url": "https://github.com/huggingface/datasets/pull/5055.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5055.patch",
"merged_at": "2022-10-03T13:41... | 5,055 | true |
Fix license/citation information of squadshifts dataset card | This PR fixes the license/citation information of squadshifts dataset card, once the dataset owners have responded to our request for information:
- https://github.com/modestyachts/squadshifts-website/issues/1
Additionally, we have updated the mention in their website to our `datasets` library (they were referring ... | https://github.com/huggingface/datasets/pull/5054 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5054",
"html_url": "https://github.com/huggingface/datasets/pull/5054",
"diff_url": "https://github.com/huggingface/datasets/pull/5054.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5054.patch",
"merged_at": "2022-10-03T09:24... | 5,054 | true |
Intermittent JSON parse error when streaming the Pile | ## Describe the bug
I have an intermittent error when streaming the Pile, where I get a JSON parse error which causes my program to crash.
This is intermittent - when I rerun the program with the same random seed it does not crash in the same way. The exact point this happens also varied - it happened to me 11B tok... | https://github.com/huggingface/datasets/issues/5053 | [
"Maybe #2838 can help. In this PR we allow to skip bad chunks of JSON data to not crash the training\r\n\r\nDid you have warning messages before the error ?\r\n\r\nsomething like this maybe ?\r\n```\r\n03/24/2022 02:19:46 - WARNING - datasets.utils.streaming_download_manager - Got disconnected from remote data host... | null | 5,053 | false |
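The mitigation mentioned in the comment above (skip bad JSON chunks instead of crashing) can be sketched as a tolerant JSON Lines reader. This is an illustration of the idea only, not the `datasets` streaming code; the function name and `max_bad` threshold are made up.

```python
import json

def iter_jsonl_tolerant(lines, max_bad=100):
    """Yield parsed records from an iterable of JSON Lines strings,
    skipping malformed lines (sketch of the 'skip bad chunks' idea)."""
    bad = 0
    for line in lines:
        line = line.strip()
        if not line:
            continue
        try:
            yield json.loads(line)
        except json.JSONDecodeError:
            bad += 1
            if bad > max_bad:
                # Too many corrupt chunks likely means a real transfer problem.
                raise
```

A capped skip count keeps the reader from silently swallowing a truly corrupted stream.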
added from_generator method to IterableDataset class. | Hello,
This resolves issue #4988.
I added a method `from_generator` to class `IterableDataset`.
I modified the `read` method of input stream generator to also return Iterable_dataset.
| https://github.com/huggingface/datasets/pull/5052 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I added a test and moved the `streaming` param from `read` to `__init_`. Then, I also decided to update the `read` method of the rest of the packaged modules to account for this param. \r\n\r\n@hamid-vakilzadeh Are you OK with these ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5052",
"html_url": "https://github.com/huggingface/datasets/pull/5052",
"diff_url": "https://github.com/huggingface/datasets/pull/5052.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5052.patch",
"merged_at": "2022-10-05T12:10... | 5,052 | true |
Revert task removal in folder-based builders | Reverts the removal of `task_templates` in the folder-based builders. I also added the `AudioClassifaction` task for consistency.
This is needed to fix https://github.com/huggingface/transformers/issues/19177.
I think we should soon deprecate and remove the current task API (and investigate if it's possible to in... | https://github.com/huggingface/datasets/pull/5051 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5051",
"html_url": "https://github.com/huggingface/datasets/pull/5051",
"diff_url": "https://github.com/huggingface/datasets/pull/5051.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5051.patch",
"merged_at": "2022-10-03T12:21... | 5,051 | true |
Restore saved format state in `load_from_disk` | Even though we save the `format` state in `save_to_disk`, we don't restore it in `load_from_disk`. We should fix that.
Reported here: https://discuss.huggingface.co/t/save-to-disk-loses-formatting-information/23815 | https://github.com/huggingface/datasets/issues/5050 | [
"Hi, can I work on this?",
"Hi, sure! Let us know if you need some pointers/help."
] | null | 5,050 | false |
Add `kwargs` to `Dataset.from_generator` | Add the `kwargs` param to `from_generator` to align it with the rest of the `from_` methods (this param allows passing custom `writer_batch_size` for instance). | https://github.com/huggingface/datasets/pull/5049 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5049",
"html_url": "https://github.com/huggingface/datasets/pull/5049",
"diff_url": "https://github.com/huggingface/datasets/pull/5049.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5049.patch",
"merged_at": "2022-10-03T10:58... | 5,049 | true |
Fix bug with labels of eurlex config of lex_glue dataset | Fix for a critical bug in the EURLEX dataset label list to make LexGLUE EURLEX results replicable.
In LexGLUE (Chalkidis et al., 2022), the following is mentioned w.r.t. EUR-LEX: _"It supports four different label granularities, comprising 21, 127, 567, 7390 EuroVoc concepts, respectively. We use the 100 most frequ... | https://github.com/huggingface/datasets/pull/5048 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@JamesLYC88 here is the fix! Thanks again!",
"Thanks, @albertvillanova. When do you expect that this change will take effect when someone downloads the dataset?",
"The change is immediately available now, since this change we mad... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5048",
"html_url": "https://github.com/huggingface/datasets/pull/5048",
"diff_url": "https://github.com/huggingface/datasets/pull/5048.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5048.patch",
"merged_at": "2022-09-30T16:21... | 5,048 | true |
Fix cats_vs_dogs | Reported in https://github.com/huggingface/datasets/pull/3878
I updated the number of examples | https://github.com/huggingface/datasets/pull/5047 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5047",
"html_url": "https://github.com/huggingface/datasets/pull/5047",
"diff_url": "https://github.com/huggingface/datasets/pull/5047.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5047.patch",
"merged_at": "2022-09-30T09:34... | 5,047 | true |
Audiofolder creates empty Dataset if files same level as metadata | ## Describe the bug
When audio files are at the same level as the metadata (`metadata.csv` or `metadata.jsonl`), `load_dataset` returns a `DatasetDict` with no rows but the correct columns.
https://github.com/huggingface/datasets/blob/1ea4d091b7a4b83a85b2eeb8df65115d39af3766/docs/source/audio_dataset.mdx?plain... | https://github.com/huggingface/datasets/issues/5046 | [
"Hi! Unfortunately, I can't reproduce this behavior. Instead, I get `ValueError: audio at 2063_fe9936e7-62b2-4e62-a276-acbd344480ce_1.wav doesn't have metadata in /audio-data/metadata.csv`, which can be fixed by removing the `./` from the file name.\r\n\r\n(Link to a Colab that tries to reproduce this behavior: htt... | null | 5,046 | false |
Automatically revert to last successful commit to hub when a push_to_hub is interrupted | **Is your feature request related to a problem? Please describe.**
I pushed a modification of a large dataset (remove a column) to the hub. The push was interrupted after some files were committed to the repo. This left the dataset to raise an error on load_dataset() (ValueError couldn’t cast … because column names do... | https://github.com/huggingface/datasets/issues/5045 | [
"Could you share the error you got please ? Maybe the full stack trace if you have it ?\r\n\r\nMaybe `push_to_hub` be implemented as a single commit @Wauplin ? This way if it fails, the repo is still at the previous (valid) state instead of ending-up in an invalid/incimplete state.",
"> Maybe push_to_hub be imple... | null | 5,045 | false |
integrate `load_from_disk` into `load_dataset` | **Is your feature request related to a problem? Please describe.**
Is it possible to make `load_dataset` more universal similar to `from_pretrained` in `transformers` so that it can handle the hub, and the local path datasets of all supported types?
Currently one has to choose a different loader depending on how ... | https://github.com/huggingface/datasets/issues/5044 | [
"I agree the situation is not ideal and it would be awesome to use `load_dataset` to reload a dataset saved locally !\r\n\r\nFor context:\r\n\r\n- `load_dataset` works in three steps: download the dataset, then prepare it as an arrow dataset, and finally return a memory mapped arrow dataset. In particular it create... | null | 5,044 | false |
Fix `flatten_indices` with empty indices mapping | Fix #5038 | https://github.com/huggingface/datasets/pull/5043 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5043",
"html_url": "https://github.com/huggingface/datasets/pull/5043",
"diff_url": "https://github.com/huggingface/datasets/pull/5043.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5043.patch",
"merged_at": "2022-09-30T15:44... | 5,043 | true |
Update swiss judgment prediction | I forgot to add the new citation. | https://github.com/huggingface/datasets/pull/5042 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5042",
"html_url": "https://github.com/huggingface/datasets/pull/5042",
"diff_url": "https://github.com/huggingface/datasets/pull/5042.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5042.patch",
"merged_at": "2022-09-29T14:32... | 5,042 | true |
Support streaming hendrycks_test dataset. | This PR:
- supports streaming
- fixes the description section of the dataset card | https://github.com/huggingface/datasets/pull/5041 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5041",
"html_url": "https://github.com/huggingface/datasets/pull/5041",
"diff_url": "https://github.com/huggingface/datasets/pull/5041.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5041.patch",
"merged_at": "2022-09-29T12:07... | 5,041 | true |
Fix NonMatchingChecksumError in hendrycks_test dataset | Update metadata JSON.
Fix #5039. | https://github.com/huggingface/datasets/pull/5040 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5040",
"html_url": "https://github.com/huggingface/datasets/pull/5040",
"diff_url": "https://github.com/huggingface/datasets/pull/5040.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5040.patch",
"merged_at": "2022-09-29T10:04... | 5,040 | true |
Hendrycks Checksum | Hi,
The checksum for [hendrycks_test](https://huggingface.co/datasets/hendrycks_test) does not match; I guess the data has been updated on the remote.
```
datasets.utils.info_utils.NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://people.eecs.berkeley.edu/~hendrycks/data.... | https://github.com/huggingface/datasets/issues/5039 | [
"Thanks for reporting, @DanielHesslow. We are fixing it. ",
"@albertvillanova thanks for taking care of this so quickly!",
"The dataset metadata is fixed. You can download it normally."
] | null | 5,039 | false |
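The `NonMatchingChecksumError` in the record above comes from comparing the hash of downloaded bytes against recorded metadata. A minimal sketch of that kind of check, using stdlib `hashlib` (illustrative only; `datasets` keeps its recorded hashes in the dataset's metadata JSON, which is why updating that JSON fixed the issue):

```python
import hashlib

def verify_checksum(data: bytes, expected_sha256: str):
    """Sketch of a download-integrity check: compare the hash of the
    downloaded bytes against a recorded checksum, and fail if they differ."""
    got = hashlib.sha256(data).hexdigest()
    if got != expected_sha256:
        raise ValueError(
            f"Checksums didn't match: expected {expected_sha256}, got {got}"
        )
```

When the upstream file legitimately changes, the recorded checksum must be regenerated, which is exactly what #5040 did.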
`Dataset.unique` showing wrong output after filtering | ## Describe the bug
After filtering a dataset, and if no samples remain, `Dataset.unique` will return the unique values of the unfiltered dataset.
## Steps to reproduce the bug
```python
from datasets import Dataset
dataset = Dataset.from_dict({'id': [0]})
dataset = dataset.filter(lambda _: False)
print(data... | https://github.com/huggingface/datasets/issues/5038 | [
"Hi! It seems like `flatten_indices` (called in `unique`) doesn't know how to handle empty indices mappings. I'm working on the fix.",
"Thanks, that was fast!"
] | null | 5,038 | false |
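The bug in the record above (and the `flatten_indices` fix in #5043) can be illustrated with a toy model of a dataset that keeps an indices mapping over immutable storage. This is a sketch of the mechanism only, not the real arrow-backed implementation; all names here are invented.

```python
class TinyDataset:
    """Toy model of immutable storage + indices mapping (illustrative only)."""

    def __init__(self, data, indices=None):
        self.data = data  # immutable storage
        self.indices = list(range(len(data))) if indices is None else indices

    def filter(self, fn):
        # Filtering only rewrites the indices mapping, not the storage.
        return TinyDataset(self.data, [i for i in self.indices if fn(self.data[i])])

    def unique_buggy(self):
        # The bug class from the issue: scanning the storage directly,
        # ignoring a (possibly empty) indices mapping.
        return sorted(set(self.data))

    def unique(self):
        # The fix: resolve values through the indices mapping,
        # so an empty mapping yields no values.
        return sorted({self.data[i] for i in self.indices})
```

Filtering everything out leaves an empty mapping, which the buggy path ignores while the fixed path respects.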
Improve CI performance speed of PackagedDatasetTest | This PR speeds up the PackagedDatasetTest CI job. For Ubuntu (latest):
- Duration (without parallelism) before: 334.78s (5.58m)
- Duration (without parallelism) afterwards: 0.48s
The approach is passing a dummy `data_files` argument to load the builder, so that it avoids the slow inferring of it over the ... | https://github.com/huggingface/datasets/pull/5037 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"There was a CI error which seemed unrelated: https://github.com/huggingface/datasets/actions/runs/3143581330/jobs/5111807056\r\n```\r\nFAILED tests/test_load.py::test_load_dataset_private_zipped_images[True] - FileNotFoundError: http... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5037",
"html_url": "https://github.com/huggingface/datasets/pull/5037",
"diff_url": "https://github.com/huggingface/datasets/pull/5037.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5037.patch",
"merged_at": "2022-09-30T16:03... | 5,037 | true |
Add oversampling strategy iterable datasets interleave | Hello everyone,
Following the issue #4893 and the PR #4831, I propose here an oversampling strategy for a `IterableDataset` list.
The `all_exhausted` strategy stops building the new dataset as soon as all samples in each dataset have been added at least once.
It follows roughly the same logic behind #4831, namely... | https://github.com/huggingface/datasets/pull/5036 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5036",
"html_url": "https://github.com/huggingface/datasets/pull/5036",
"diff_url": "https://github.com/huggingface/datasets/pull/5036.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5036.patch",
"merged_at": "2022-09-30T12:28... | 5,036 | true |
Fix typos in load docstrings and comments | Minor fix of typos in load docstrings and comments | https://github.com/huggingface/datasets/pull/5035 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5035",
"html_url": "https://github.com/huggingface/datasets/pull/5035",
"diff_url": "https://github.com/huggingface/datasets/pull/5035.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5035.patch",
"merged_at": "2022-09-28T17:26... | 5,035 | true |
Update README.md of yahoo_answers_topics dataset | null | https://github.com/huggingface/datasets/pull/5034 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5034). All of your documentation changes will be reflected on that endpoint.",
"Thanks, @borgr. We have removed all dataset scripts from this repo. Subsequent PRs should be opened directly on the Hugging Face Hub.",
"Do you m... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5034",
"html_url": "https://github.com/huggingface/datasets/pull/5034",
"diff_url": "https://github.com/huggingface/datasets/pull/5034.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5034.patch",
"merged_at": null
} | 5,034 | true |
Remove redundant code from some dataset module factories | This PR removes some redundant code introduced by mistake after a refactoring in:
- #4576 | https://github.com/huggingface/datasets/pull/5033 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5033",
"html_url": "https://github.com/huggingface/datasets/pull/5033",
"diff_url": "https://github.com/huggingface/datasets/pull/5033.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5033.patch",
"merged_at": "2022-09-28T16:55... | 5,033 | true |
new dataset type: single-label and multi-label video classification | **Is your feature request related to a problem? Please describe.**
In my research, I am dealing with multi-modal (audio+text+frame sequence) video classification. It would be great if the datasets library supported generating multi-modal batches from a video dataset.
**Describe the solution you'd like**
Assume I h... | https://github.com/huggingface/datasets/issues/5032 | [
"Hi ! You can in the `features` folder how we implemented the audio and image feature types.\r\n\r\nWe can have something similar to videos. What we need to decide:\r\n- the video loading library to use\r\n- the output format when a user accesses a video type object\r\n- what parameters a `Video()` feature type nee... | null | 5,032 | false |
Support hfh 0.10 implicit auth | In huggingface-hub 0.10 the `token` parameter is deprecated for dataset_info and list_repo_files in favor of use_auth_token.
Moreover, if `use_auth_token=None`, the user's token is used implicitly.
I took those two changes into account
Close https://github.com/huggingface/datasets/issues/4990
TODO:
- [x] fi... | https://github.com/huggingface/datasets/pull/5031 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"@lhoestq it is now released so you can move forward with it :) ",
"I took your comments into account @Wauplin :)\r\nI also bumped the requirement to 0.2.0 because we're using `set_access_token`\r\n\r\ncc @albertvillanova WDYT ? I e... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5031",
"html_url": "https://github.com/huggingface/datasets/pull/5031",
"diff_url": "https://github.com/huggingface/datasets/pull/5031.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5031.patch",
"merged_at": "2022-09-30T09:15... | 5,031 | true |
Fast dataset iter | Use `pa.Table.to_reader` to make iteration over examples/batches faster in `Dataset.{__iter__, map}`
TODO:
* [x] benchmarking (the only benchmark for now - iterating over (single) examples of `bookcorpus` (75 mil examples) in Colab is approx. 2.3x faster)
* [x] check if iterating over bigger chunks + slicing to fe... | https://github.com/huggingface/datasets/pull/5030 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I ran some benchmarks (focused on the data fetching part of `__iter__`) and it seems like the combination `table.to_reader(batch_size)` + `RecordBatch.slice` performs the best ([script](https://gist.github.com/mariosasko/0248288a2e3a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5030",
"html_url": "https://github.com/huggingface/datasets/pull/5030",
"diff_url": "https://github.com/huggingface/datasets/pull/5030.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5030.patch",
"merged_at": "2022-09-29T15:48... | 5,030 | true |
Fix import in `ClassLabel` docstring example | This PR addresses a super-simple fix: adding a missing `import` to the `ClassLabel` docstring example, as it was formatted as `from datasets Features`, so it's been fixed to `from datasets import Features`. | https://github.com/huggingface/datasets/pull/5029 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5029",
"html_url": "https://github.com/huggingface/datasets/pull/5029",
"diff_url": "https://github.com/huggingface/datasets/pull/5029.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5029.patch",
"merged_at": "2022-09-27T12:27... | 5,029 | true |
passing parameters to the method passed to Dataset.from_generator() | Big thanks for providing dataset creation via a generator.
I want to ask whether there is any way that parameters can be passed to the Dataset.from_generator() method, as follows.
```
from datasets import Dataset
def gen(param1):
for idx in len(custom_dataset):
yield custom_dataset[id... | https://github.com/huggingface/datasets/issues/5028 | [
"Hi! Yes, you can either use the `gen_kwargs` param in `Dataset.from_generator` (`ds = Dataset.from_generator(gen, gen_kwargs={\"param1\": val})`) or wrap the generator function with `functools.partial`\r\n(`ds = Dataset.from_generator(functools.partial(gen, param1=\"val\"))`) to pass custom parameters to it.\r\n"
... | null | 5,028 | false |
Fix typo in error message | null | https://github.com/huggingface/datasets/pull/5027 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5027",
"html_url": "https://github.com/huggingface/datasets/pull/5027",
"diff_url": "https://github.com/huggingface/datasets/pull/5027.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5027.patch",
"merged_at": "2022-09-27T12:26... | 5,027 | true |
patch CI_HUB_TOKEN_PATH with Path instead of str | Should fix the tests for `huggingface_hub==0.10.0rc0` prerelease (see [failed CI](https://github.com/huggingface/datasets/actions/runs/3127805250/jobs/5074879144)).
Related to [this thread](https://huggingface.slack.com/archives/C02V5EA0A95/p1664195165294559) (internal link).
Note: this should be a backward compat... | https://github.com/huggingface/datasets/pull/5026 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5026",
"html_url": "https://github.com/huggingface/datasets/pull/5026",
"diff_url": "https://github.com/huggingface/datasets/pull/5026.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5026.patch",
"merged_at": "2022-09-26T14:28... | 5,026 | true |
Custom Json Dataset Throwing Error when batch is False | ## Describe the bug
I tried to create my custom dataset using the code below
```
from datasets import Features, Sequence, ClassLabel, Value, Array2D, Array3D
from torchvision import transforms
from transformers import AutoProcessor
# we'll use the Auto API here -... | https://github.com/huggingface/datasets/issues/5025 | [
"Hi! Our processors are meant to be used in `batched` mode, so if `batched` is `False`, you need to drop the batch dimension (the error message warns you that the array has an extra dimension meaning it's 4D instead of 3D) to avoid the error:\r\n```python\r\ndef prepare_examples(examples):\r\n #Some preporcessin... | null | 5,025 | false |
Fix string features of xcsr dataset | This PR fixes string features of `xcsr` dataset to avoid character splitting.
Fix #5023.
CC: @yangxqiao, @yuchenlin | https://github.com/huggingface/datasets/pull/5024 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5024",
"html_url": "https://github.com/huggingface/datasets/pull/5024",
"diff_url": "https://github.com/huggingface/datasets/pull/5024.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5024.patch",
"merged_at": "2022-09-28T07:54... | 5,024 | true |
Text strings are split into lists of characters in xcsr dataset | ## Describe the bug
Text strings are split into lists of characters.
Example for "X-CSQA-en":
```
{'id': 'd3845adc08414fda',
'lang': 'en',
'question': {'stem': ['T',
'h',
'e',
' ',
'd',
'e',
'n',
't',
'a',
'l',
' ',
'o',
'f',
'f',
'i',
'c',
'e',
... | https://github.com/huggingface/datasets/issues/5023 | [] | null | 5,023 | false |
Fix languages of X-CSQA configs in xcsr dataset | Fix #5017.
CC: @yangxqiao, @yuchenlin | https://github.com/huggingface/datasets/pull/5022 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks @lhoestq, I had missed that... ",
"thx for the super fast work @albertvillanova ! any estimate for when the relevant release will happen?\r\n\r\nThanks again ",
"@thesofakillers after a recent change in our library (see #4... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5022",
"html_url": "https://github.com/huggingface/datasets/pull/5022",
"diff_url": "https://github.com/huggingface/datasets/pull/5022.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5022.patch",
"merged_at": "2022-09-26T10:57... | 5,022 | true |
Split is inferred from filename and overrides metadata.jsonl | ## Describe the bug
Including the strings "test" or "train" anywhere in a filename causes `datasets` to infer the split and silently ignore all other files.
This behavior is documented for directory names but not filenames: https://huggingface.co/docs/datasets/image_dataset#imagefolder
## Steps to reproduce th... | https://github.com/huggingface/datasets/issues/5021 | [
"Hi! What's the structure of your image folder? `datasets` by default tries to infer to what split each file belongs based on directory/file names. If it's OK to load all the images inside the `dataset` folder in the `train` split, you can do the following:\r\n```python\r\ndataset = load_dataset(\"imagefolder\", da... | null | 5,021 | false |
Fix URLs of sbu_captions dataset | Forbidden
You don't have permission to access /~vicente/sbucaptions/sbu-captions-all.tar.gz on this server.
Additionally, a 403 Forbidden error was encountered while trying to use an ErrorDocument to handle the request.
Apache/2.4.6 (Red Hat Enterprise Linux) OpenSSL/1.0.2k-fips PHP/5.4.16 mod_fcgid/2.3.9 mod_ws... | https://github.com/huggingface/datasets/pull/5020 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5020",
"html_url": "https://github.com/huggingface/datasets/pull/5020",
"diff_url": "https://github.com/huggingface/datasets/pull/5020.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5020.patch",
"merged_at": "2022-09-28T07:18... | 5,020 | true |
Update swiss judgment prediction | Hi,
I updated the dataset to include additional data made available recently. When I test it locally, it seems to work. However, I get the following error with the dummy data creation:
`Dummy data generation done but dummy data test failed since splits ['train', 'validation', 'test'] have 0 examples for config 'fr... | https://github.com/huggingface/datasets/pull/5019 | [
"Thank you very much for the detailed review @albertvillanova!\r\n\r\nI updated the PR with the requested changes. ",
"At the end, I had to manually fix the conflict, so that CI tests are launched.\r\n\r\nPLEASE NOTE: you should first pull to incorporate the previous commit\r\n```shell\r\ngit pull\r\n```",
"_Th... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5019",
"html_url": "https://github.com/huggingface/datasets/pull/5019",
"diff_url": "https://github.com/huggingface/datasets/pull/5019.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5019.patch",
"merged_at": "2022-09-28T05:48... | 5,019 | true |
Create all YAML dataset_info | Following https://github.com/huggingface/datasets/pull/4926
Creates all the `dataset_info` YAML fields in the dataset cards
The JSON are also updated using the simplified backward compatible format added in https://github.com/huggingface/datasets/pull/4926
Needs https://github.com/huggingface/datasets/pull/4926 ... | https://github.com/huggingface/datasets/pull/5018 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5018). All of your documentation changes will be reflected on that endpoint.",
"Closing since https://github.com/huggingface/datasets/pull/4974 removed all the datasets scripts.\r\n\r\nIndividual PRs must be opened on the Huggi... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5018",
"html_url": "https://github.com/huggingface/datasets/pull/5018",
"diff_url": "https://github.com/huggingface/datasets/pull/5018.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5018.patch",
"merged_at": null
} | 5,018 | true |
xcsr: X-CSQA simply uses english for all alleged non-english data | ## Describe the bug
All the alleged non-english subcollections for the X-CSQA task in the [xcsr benchmark dataset ](https://huggingface.co/datasets/xcsr) seem to be copies of the english subcollection, rather than translations. This is in contrast to the data description:
> we automatically translate the original C... | https://github.com/huggingface/datasets/issues/5017 | [
"Thanks for reporting, @thesofakillers. Good catch. We are fixing this. "
] | null | 5,017 | false |
Fix tar extraction vuln | Fix for CVE-2007-4559
Description:
Directory traversal vulnerability in the (1) extract and (2) extractall functions in the tarfile
module in Python allows user-assisted remote attackers to overwrite arbitrary files via a .. (dot dot)
sequence in filenames in a TAR archive, a related issue to CVE-2001-1267.
I ... | https://github.com/huggingface/datasets/pull/5016 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5016",
"html_url": "https://github.com/huggingface/datasets/pull/5016",
"diff_url": "https://github.com/huggingface/datasets/pull/5016.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5016.patch",
"merged_at": "2022-09-29T12:40... | 5,016 | true |
Transfer dataset scripts to Hub | Before merging:
- #4974
TODO:
- [x] Create label: ["dataset contribution"](https://github.com/huggingface/datasets/pulls?q=label%3A%22dataset+contribution%22)
- [x] Create project: [Datasets: Transfer datasets to Hub](https://github.com/orgs/huggingface/projects/22/)
- [x] PRs:
- [x] Add dataset: we should r... | https://github.com/huggingface/datasets/issues/5015 | [
"Sounds good ! Can I help with anything ?"
] | null | 5,015 | false |
I need to read the custom dataset in conll format | I need to read the custom dataset in conll format
| https://github.com/huggingface/datasets/issues/5014 | [
"Hi! We don't currently have a builder for parsing custom `conll` datasets, but I guess we could add one as a packaged module (similarly to what [TFDS](https://github.com/tensorflow/datasets/blob/master/tensorflow_datasets/core/dataset_builders/conll/conll_dataset_builder.py) did). @lhoestq @albertvillanova WDYT?\r... | null | 5,014 | false |
would huggingface like publish cpp binding for datasets package ? | HI:
I use a C++ environment with libtorch, and I would like to use Hugging Face, but `datasets` has no C++ binding. Would you consider publishing a C++ binding for it?
thanks | https://github.com/huggingface/datasets/issues/5013 | [
"Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?",
"> Hi ! Can you share more information about your use case ? How could it help you to have cpp bindings versus using the python libraries ?\r\n\r\nfor example ,the huggingfac... | null | 5,013 | false |
Force JSON format regardless of file naming on S3 | I have a file on S3 created by Data Version Control, it looks like `s3://dvc/ac/badff5b134382a0f25248f1b45d7b2` but contains a json file. If I run
```python
dataset = load_dataset(
"json",
data_files='s3://dvc/ac/badff5b134382a0f25248f1b45d7b2'
)
```
It gives me
```
InvalidSchema: No connection adap... | https://github.com/huggingface/datasets/issues/5012 | [
"Hi ! Support for URIs like `s3://...` is not implemented yet in `data_files=`. You can use the HTTP URL instead if your data is public in the meantime"
] | null | 5,012 | false |
Audio: `encode_example` fails with IndexError | ## Describe the bug
Loading the dataset [earnings-22](https://huggingface.co/datasets/sanchit-gandhi/earnings22_split) from the Hub yields an Index Error. I created this dataset locally and then pushed to hub at the specified URL. Thus, I expect the dataset should work out-of-the-box! Indeed, the dataset viewer functi... | https://github.com/huggingface/datasets/issues/5011 | [
"Sorry bug on my part 😅 Closing "
] | null | 5,011 | false |
Add deprecation warning to multilingual_librispeech dataset card | Besides the current deprecation warning in the script of `multilingual_librispeech`, this PR adds a deprecation warning to its dataset card as well.
The format of the deprecation warning is aligned with the one in the library documentation when docstrings contain the `<Deprecated/>` tag.
Related to:
- #4060 | https://github.com/huggingface/datasets/pull/5010 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5010",
"html_url": "https://github.com/huggingface/datasets/pull/5010",
"diff_url": "https://github.com/huggingface/datasets/pull/5010.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5010.patch",
"merged_at": "2022-09-23T12:02... | 5,010 | true |
Error loading StonyBrookNLP/tellmewhy dataset from hub even though local copy loads correctly | ## Describe the bug
I have added a new dataset with the identifier `StonyBrookNLP/tellmewhy` to the hub. When I load the individual files from my local copy using `dataset = datasets.load_dataset("json", data_files="data/train.jsonl")`, it loads the dataset correctly. However, when I try to load it from the hub, I ge...
"I think this is because some columns are mostly empty lists. In particular the train and validation splits only have empty lists for `val_ann`. Therefore the type inference doesn't know which type is inside (or it would have to scan the other splits first before knowing).\r\n\r\nYou can fix that by specifying the ... | null | 5,009 | false |
Re-apply input columns change | Fixes the `filter` + `input_columns` combination, which is used in the `transformers` examples for instance.
Revert #5006 (which in turn reverts #4971)
Fix https://github.com/huggingface/datasets/issues/4858 | https://github.com/huggingface/datasets/pull/5008 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5008",
"html_url": "https://github.com/huggingface/datasets/pull/5008",
"diff_url": "https://github.com/huggingface/datasets/pull/5008.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5008.patch",
"merged_at": "2022-09-22T13:55... | 5,008 | true |
Add some note about running the transformers ci before a release | null | https://github.com/huggingface/datasets/pull/5007 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5007",
"html_url": "https://github.com/huggingface/datasets/pull/5007",
"diff_url": "https://github.com/huggingface/datasets/pull/5007.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5007.patch",
"merged_at": "2022-09-22T10:14... | 5,007 | true |
Revert input_columns change | Revert https://github.com/huggingface/datasets/pull/4971
Fix https://github.com/huggingface/datasets/issues/5005 | https://github.com/huggingface/datasets/pull/5006 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Merging this one and I'll check if it fixes the `transformers` CI before doing a patch release"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5006",
"html_url": "https://github.com/huggingface/datasets/pull/5006",
"diff_url": "https://github.com/huggingface/datasets/pull/5006.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5006.patch",
"merged_at": "2022-09-21T14:11... | 5,006 | true |
Release 2.5.0 breaks transformers CI | ## Describe the bug
As reported by @lhoestq:
> see https://app.circleci.com/pipelines/github/huggingface/transformers/47634/workflows/b491886b-e66e-4edb-af96-8b459e72aa25/jobs/564563
this is used here: [https://github.com/huggingface/transformers/blob/3b19c0317b6909e2d7f11b5053895ac55[…]torch/speech-pretraining/ru... | https://github.com/huggingface/datasets/issues/5005 | [
"Shall we revert https://github.com/huggingface/datasets/pull/4971 @mariosasko ?\r\n\r\nAnd for consistency we can update IterableDataset.map later"
] | null | 5,005 | false |
Remove license tag file and validation | As requested, we are removing the validation of the licenses from `datasets` because this is done on the Hub.
Fix #4994.
Related to:
- #4926, which is removing all the validation from `datasets` | https://github.com/huggingface/datasets/pull/5004 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5004",
"html_url": "https://github.com/huggingface/datasets/pull/5004",
"diff_url": "https://github.com/huggingface/datasets/pull/5004.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5004.patch",
"merged_at": "2022-09-22T11:45... | 5,004 | true |
Fix missing use_auth_token in streaming docstrings | This PRs fixes docstrings:
- adds the missing `use_auth_token` param
- updates syntax of param types
- adds params to docstrings without them
- fixes return/yield types
- fixes syntax | https://github.com/huggingface/datasets/pull/5003 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5003",
"html_url": "https://github.com/huggingface/datasets/pull/5003",
"diff_url": "https://github.com/huggingface/datasets/pull/5003.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5003.patch",
"merged_at": "2022-09-21T16:20... | 5,003 | true |
Dataset Viewer issue for loubnabnl/humaneval-x | ### Link
https://huggingface.co/datasets/loubnabnl/humaneval-x/viewer/
### Description
The dataset has subsets but the viewer gets stuck in the default subset even when I select another one (the data loading of the subsets works fine)
### Owner
Yes | https://github.com/huggingface/datasets/issues/5002 | [
"It's a bug! Thanks for reporting, I'm looking at it",
"Fixed."
] | null | 5,002 | false |
Support loading XML datasets | CC: @davanstrien | https://github.com/huggingface/datasets/pull/5001 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5001). All of your documentation changes will be reflected on that endpoint.",
"> CC: @davanstrien\r\n\r\nI should have some time to look at this on Friday :) ",
"@albertvillanova I've tried this with a few different XML data... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5001",
"html_url": "https://github.com/huggingface/datasets/pull/5001",
"diff_url": "https://github.com/huggingface/datasets/pull/5001.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5001.patch",
"merged_at": null
} | 5,001 | true |
Dataset Viewer issue for asapp/slue | ### Link
https://huggingface.co/datasets/asapp/slue/viewer/
### Description
Hi,
I wonder how to get the dataset viewer of our slue dataset to work.
Best,
Felix
### Owner
Yes | https://github.com/huggingface/datasets/issues/5000 | [
"<img width=\"519\" alt=\"Capture d’écran 2022-09-20 à 22 33 47\" src=\"https://user-images.githubusercontent.com/1676121/191358952-1220cb7d-745a-4203-a66b-3c707b25038f.png\">\r\n\r\n```\r\nNot found.\r\n\r\nError code: SplitsResponseNotFound\r\n```\r\n\r\nhttps://datasets-server.huggingface.co/splits?dataset=a... | null | 5,000 | false |
Add EmptyDatasetError | examples:
from the hub:
```python
Traceback (most recent call last):
File "playground/ttest.py", line 3, in <module>
print(load_dataset("lhoestq/empty"))
File "/Users/quentinlhoest/Desktop/hf/nlp/src/datasets/load.py", line 1686, in load_dataset
**config_kwargs,
File "/Users/quentinlhoest/Deskto... | https://github.com/huggingface/datasets/pull/4999 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4999",
"html_url": "https://github.com/huggingface/datasets/pull/4999",
"diff_url": "https://github.com/huggingface/datasets/pull/4999.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4999.patch",
"merged_at": "2022-09-21T12:21... | 4,999 | true |
Don't add a tag on the Hub on release | Datasets with no namespace on the Hub have tags to redirect to the version of datasets where they come from.
I’m about to remove them all because I think they look bad/unexpected in the UI and are not actually useful
Therefore I'm also disabling tagging.
Note that the CI job will be completely removed in https:/... | https://github.com/huggingface/datasets/pull/4998 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4998",
"html_url": "https://github.com/huggingface/datasets/pull/4998",
"diff_url": "https://github.com/huggingface/datasets/pull/4998.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4998.patch",
"merged_at": "2022-09-20T14:08... | 4,998 | true |
Add support for parsing JSON files in array form | Support parsing JSON files in the array form (top-level object is an array). For simplicity, `json.load` is used for decoding. This means the entire file is loaded into memory. If requested, we can optimize this by introducing a param similar to `lines` in [`pandas.read_json`](https://pandas.pydata.org/docs/reference/a... | https://github.com/huggingface/datasets/pull/4997 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4997",
"html_url": "https://github.com/huggingface/datasets/pull/4997",
"diff_url": "https://github.com/huggingface/datasets/pull/4997.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4997.patch",
"merged_at": "2022-09-20T15:40... | 4,997 | true |
Dataset Viewer issue for Jean-Baptiste/wikiner_fr | ### Link
https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr
### Description
```
Error code: StreamingRowsError
Exception: FileNotFoundError
Message: [Errno 2] No such file or directory: 'zip:/data/train::https:/huggingface.co/datasets/Jean-Baptiste/wikiner_fr/resolve/main/data.zip/state.json'
Tra... | https://github.com/huggingface/datasets/issues/4996 | [
"The script uses `Dataset.load_from_disk`, which as you can expect, doesn't work in streaming mode.\r\n\r\nIt would probably be more practical to load the dataset locally using `Dataset.load_from_disk` first and then `push_to_hub` to upload it in Parquet on the Hub",
"I've transferred this issue to the Hub repo: ... | null | 4,996 | false |
Get a specific Exception when the dataset has no data | In the dataset viewer on the Hub (https://huggingface.co/datasets/glue/viewer), we would like (https://github.com/huggingface/moon-landing/issues/3882) to show a specific message when the repository lacks any data files.
In that case, instead of showing a complex traceback, we want to show a call to action to help t... | https://github.com/huggingface/datasets/issues/4995 | [] | null | 4,995 | false |
delete the hardcoded license list in `datasets` | > Feel free to delete the license list in `datasets` [...]
>
> Also FYI in #4926 I also removed all the validation steps anyway (language, license, types etc.)
_Originally posted by @lhoestq in https://github.com/huggingface/datasets/issues/4930#issuecomment-1238401662_
> [...], in my opinion we can just delete... | https://github.com/huggingface/datasets/issues/4994 | [] | null | 4,994 | false |
fix: avoid casting tuples after Dataset.map | This PR updates features.py to avoid casting tuples to lists when reading the results of Dataset.map as suggested by @lhoestq [here](https://github.com/huggingface/datasets/issues/4676#issuecomment-1187371367) in https://github.com/huggingface/datasets/issues/4676.
| https://github.com/huggingface/datasets/pull/4993 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4993",
"html_url": "https://github.com/huggingface/datasets/pull/4993",
"diff_url": "https://github.com/huggingface/datasets/pull/4993.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4993.patch",
"merged_at": "2022-09-20T13:08... | 4,993 | true |
Support streaming iwslt2017 dataset | Support streaming iwslt2017 dataset.
Once this PR is merged:
- [x] Remove old ".tgz" data files from the Hub. | https://github.com/huggingface/datasets/pull/4992 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4992",
"html_url": "https://github.com/huggingface/datasets/pull/4992",
"diff_url": "https://github.com/huggingface/datasets/pull/4992.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4992.patch",
"merged_at": "2022-09-20T09:15... | 4,992 | true |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- aeslc
- empathetic_dialogues
- event2Mind
- gap
- iwslt2017
- newsgroup
- qa4mre
- scicite
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931
- ... | https://github.com/huggingface/datasets/pull/4991 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4991",
"html_url": "https://github.com/huggingface/datasets/pull/4991",
"diff_url": "https://github.com/huggingface/datasets/pull/4991.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4991.patch",
"merged_at": "2022-09-20T07:37... | 4,991 | true |
"no-token" is passed to `huggingface_hub` when token is `None` | ## Describe the bug
In the 2 lines listed below, a token is passed to `huggingface_hub` to get information from a dataset. If no token is provided, a "no-token" string is passed. What is the purpose of it? If there is no real one, I would prefer the `None` value to be sent directly and handled by `huggingface_hub`. I fee...
"Hi @Wauplin, thanks for raising this potential issue.\r\n\r\nThe choice of passing `\"no-token\"` instead of `None` was made in this PR:\r\n- #4536 \r\n\r\nAccording to the PR description, the reason why it is passed is to avoid that `HfApi.dataset_info` uses the local token when no token should be used.",
"Hi @... | null | 4,990 | false |
Running add_column() seems to corrupt existing sequence-type column info | I have a dataset that contains a column ("foo") that is a sequence type of length 4. So when I run .to_pandas() on it, the resulting dataframe correctly contains 4 columns - foo_0, foo_1, foo_2, foo_3. So the 1st row of the dataframe might look like:
ds = load_dataset(...)
df = ds.to_pandas()
df:
foo_0 | foo_1 ... | https://github.com/huggingface/datasets/issues/4989 | [
"Nevermind, I was incorrect."
] | null | 4,989 | false |
Add `IterableDataset.from_generator` to the API | We've just added `Dataset.from_generator` to the API. It would also be cool to add `IterableDataset.from_generator` to support creating an iterable dataset from a generator.
cc @lhoestq | https://github.com/huggingface/datasets/issues/4988 | [
"#take",
"Thanks @hamid-vakilzadeh ! Let us know if you have some questions or if we can help",
"Thank you! I certainly will reach out if I need any help."
] | null | 4,988 | false |
Embed image/audio data in dl_and_prepare parquet | Embed the bytes of the image or audio files in the Parquet files directly, instead of having a "path" that points to a local file.
Indeed Parquet files are often used to share data or to be used by workers that may not have access to the local files. | https://github.com/huggingface/datasets/pull/4987 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4987",
"html_url": "https://github.com/huggingface/datasets/pull/4987",
"diff_url": "https://github.com/huggingface/datasets/pull/4987.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4987.patch",
"merged_at": "2022-09-16T16:22... | 4,987 | true |
[doc] Fix broken snippet that had too many quotes | Hello!
### Pull request overview
* Fix broken snippet in https://huggingface.co/docs/datasets/main/en/process that has too many quotes
### Details
The snippet in question can be found here: https://huggingface.co/docs/datasets/main/en/process#map
This screenshot shows the issue, there is a quote too many, caus... | https://github.com/huggingface/datasets/pull/4986 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Spent the day familiarising myself with the huggingface line of products, and happened to run into some small issues here and there. Magically, I've found exactly one small issue in `transformers`, one in `accelerate` and now one in ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4986",
"html_url": "https://github.com/huggingface/datasets/pull/4986",
"diff_url": "https://github.com/huggingface/datasets/pull/4986.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4986.patch",
"merged_at": "2022-09-16T17:32... | 4,986 | true |
Prefer split patterns from directories over split patterns from filenames | related to https://github.com/huggingface/datasets/issues/4895
| https://github.com/huggingface/datasets/pull/4985 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Can we merge this one since the issue this PR fixes was reported for the second time? I also think we don't need a test for this simple change.",
"@mariosasko sure! could you please approve it? ",
"Hi there @polinaeterna @mariosa... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4985",
"html_url": "https://github.com/huggingface/datasets/pull/4985",
"diff_url": "https://github.com/huggingface/datasets/pull/4985.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4985.patch",
"merged_at": "2022-09-29T08:07... | 4,985 | true |
docs: ✏️ add links to the Datasets API | I added some links to the Datasets API in the docs. See https://github.com/huggingface/datasets-server/pull/566 for a companion PR in the datasets-server. The idea is to improve the discovery of the API through the docs.
I'm a bit shy about pasting a lot of links to the API in the docs, so it's minimal for now. I'm ... | https://github.com/huggingface/datasets/pull/4984 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"OK, thanks @lhoestq. I'll close this PR, and come back to it with @stevhliu once we work on https://github.com/huggingface/datasets-server/issues/568"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4984",
"html_url": "https://github.com/huggingface/datasets/pull/4984",
"diff_url": "https://github.com/huggingface/datasets/pull/4984.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4984.patch",
"merged_at": null
} | 4,984 | true |
How to convert torch.utils.data.Dataset to huggingface dataset? | I looked through the huggingface dataset docs, and it seems that there is no official support function to convert `torch.utils.data.Dataset` to huggingface dataset. However, there is a way to convert huggingface dataset to `torch.utils.data.Dataset`, like below:
```python
from datasets import Dataset
data = [[1, 2]... | https://github.com/huggingface/datasets/issues/4983 | [
"Hi! I think you can use the newly-added `from_generator` method for that:\r\n```python\r\nfrom datasets import Dataset\r\n\r\ndef gen():\r\n for idx in len(torch_dataset):\r\n yield torch_dataset[idx] # this has to be a dictionary\r\n ## or if it's an IterableDataset\r\n # for ex in torch_dataset:... | null | 4,983 | false |
Create dataset_infos.json with VALIDATION and TEST splits | The problem is described in that [issue](https://github.com/huggingface/datasets/issues/4895#issuecomment-1247975569).
> When I try to create data_infos.json using datasets-cli test Peter.py --save_infos --all_configs I get an error:
> ValueError: Unknown split "test". Should be one of ['train'].
>
> The data_i... | https://github.com/huggingface/datasets/issues/4982 | [
"@mariosasko could you help me with this issue? we've started the discussion from [here](https://github.com/huggingface/datasets/issues/4895#issuecomment-1248227130)",
"Hi again! Can you please pass the directory name containing the dataset script instead of the script name to `datasets-cli test`?",
"Yes, it wo... | null | 4,982 | false |
Can't create a dataset with `float16` features | ## Describe the bug
I can't create a dataset with `float16` features.
I understand from the traceback that this is a `pyarrow` error, but I don't see anywhere in the `datasets` documentation about how to successfully do this. Is it actually supported? I've tried older versions of `pyarrow` as well with the same e... | https://github.com/huggingface/datasets/issues/4981 | [
"Hi @dconathan, thanks for reporting.\r\n\r\nWe rely on Arrow as a backend, and as far as I know currently support for `float16` in Arrow is not fully implemented in Python (C++), hence the `ArrowNotImplementedError` you get.\r\n\r\nSee, e.g.: https://arrow.apache.org/docs/status.html?highlight=float16#data-types",... | null | 4,981 | false |
Make `pyarrow` optional | **Is your feature request related to a problem? Please describe.**
Is `pyarrow` really needed for every dataset?
**Describe the solution you'd like**
It is made optional.
**Describe alternatives you've considered**
Likely, no.
| https://github.com/huggingface/datasets/issues/4980 | [
"The whole datasets library is pretty much a wrapper to pyarrow (just take a look at some of the source for a Dataset) https://github.com/huggingface/datasets/blob/51aef08ad7053c0bfe8f9a961207b26df15850d3/src/datasets/arrow_dataset.py#L639 \r\n\r\nI think removing the pyarrow dependency would involve a complete rew... | null | 4,980 | false |
Fix missing tags in dataset cards | Fix missing tags in dataset cards:
- amazon_us_reviews
- art
- discofuse
- indic_glue
- ubuntu_dialogs_corpus
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
- #4931 | https://github.com/huggingface/datasets/pull/4979 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4979",
"html_url": "https://github.com/huggingface/datasets/pull/4979",
"diff_url": "https://github.com/huggingface/datasets/pull/4979.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4979.patch",
"merged_at": "2022-09-15T17:12... | 4,979 | true |
Update IndicGLUE download links | null | https://github.com/huggingface/datasets/pull/4978 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4978",
"html_url": "https://github.com/huggingface/datasets/pull/4978",
"diff_url": "https://github.com/huggingface/datasets/pull/4978.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4978.patch",
"merged_at": "2022-09-15T21:57... | 4,978 | true |
Providing dataset size | **Is your feature request related to a problem? Please describe.**
Especially for big datasets like [LAION](https://huggingface.co/datasets/laion/laion2B-en/), it's hard to know exactly the downloaded size (because there are many files and you don't have their exact size when downloaded).
**Describe the solution yo... | https://github.com/huggingface/datasets/issues/4977 | [
"Hi @sashavor, thanks for your suggestion.\r\n\r\nUntil now we have the CLI command \r\n```\r\ndatasets-cli test datasets/<your-dataset-folder> --save_infos --all_configs\r\n```\r\nthat generates the `dataset_infos.json` with the size of the downloaded dataset, among other information.\r\n\r\nWe are currently in th... | null | 4,977 | false |
Hope to adapt Python3.9 as soon as possible | **Is your feature request related to a problem? Please describe.**
A clear and concise description of what the problem is.
**Describe the solution you'd like**
A clear and concise description of what you want to happen.
**Describe alternatives you've considered**
A clear and concise description of any alternat... | https://github.com/huggingface/datasets/issues/4976 | [
"Hi! `datasets` should work in Python 3.9. What kind of issue have you encountered?",
"There is this related issue already: https://github.com/huggingface/datasets/issues/4113\r\nAnd I guess we need a CI job for 3.9 ^^",
"Perhaps we should report this issue in the `filelock` repo?"
] | null | 4,976 | false |
Add `fn_kwargs` param to `IterableDataset.map` | Add the `fn_kwargs` parameter to `IterableDataset.map`.
("Resolves" https://discuss.huggingface.co/t/how-to-use-large-image-text-datasets-in-hugging-face-hub-without-downloading-for-free/22780/3) | https://github.com/huggingface/datasets/pull/4975 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4975",
"html_url": "https://github.com/huggingface/datasets/pull/4975",
"diff_url": "https://github.com/huggingface/datasets/pull/4975.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4975.patch",
"merged_at": "2022-09-13T16:45... | 4,975 | true |