Columns:
- title: string (1-290 chars)
- body: string (0-228k chars)
- html_url: string (46-51 chars)
- comments: list
- pull_request: dict
- number: int64 (1-5.59k)
- is_pull_request: bool (2 classes)
Support cloud storage in load_dataset
It would be nice to be able to do ```python data_files=["s3://..."] storage_options = {...} load_dataset(..., data_files=data_files, storage_options=storage_options) ``` or even ```python load_dataset("gs://...") ``` The idea would be to use `fsspec` as in `download_and_prepare` and `save_to_disk`. This has...
https://github.com/huggingface/datasets/issues/5281
[ "Or for example an archive on GitHub releases! Before I added support for JXL (locally only, PR still pending) I was considering hosting my files on GitHub instead...", "+1 to this. I would like to use 'audiofolder' with a data_dir that's on S3, for example. I don't want to upload my dataset to the Hub, but I wo...
null
5,281
false
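A minimal sketch of the fsspec-based resolution the issue proposes, independent of `load_dataset` (the `storage_options` keys are s3fs-specific placeholders, and `s3fs` must be installed):
```python
import fsspec

# Hypothetical credentials; the exact keys depend on the fsspec backend (here s3fs).
storage_options = {"key": "<aws-access-key-id>", "secret": "<aws-secret-access-key>"}

# fsspec resolves the protocol prefix to a filesystem implementation,
# the same mechanism download_and_prepare and save_to_disk already rely on.
fs, path = fsspec.core.url_to_fs("s3://my-bucket/train.jsonl", **storage_options)
with fs.open(path, "rb") as f:
    first_line = f.readline()
```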
Import error
https://github.com/huggingface/datasets/blob/cd3d8e637cfab62d352a3f4e5e60e96597b5f0e9/src/datasets/__init__.py#L28 Hi, I get an error at the above line. I have Python version 3.8.13; the message says I need python>=3.7, which is true, but I think the if statement is not working properly (or the message is wrong)
https://github.com/huggingface/datasets/issues/5280
[ "Hi ! Can you \r\n```python\r\nimport platform\r\nprint(platform.python_version())\r\n```\r\nto see that it returns ?", "Hi,\n\n3.8.13\n\nGet Outlook for Android<https://aka.ms/AAb9ysg>\n________________________________\nFrom: Quentin Lhoest ***@***.***>\nSent: Tuesday, November 22, 2022 2:37:02 PM\nTo: huggingfa...
null
5,280
false
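For context, a hedged sketch of the diagnostic suggested in the comments, plus a tuple comparison that avoids the classic string-comparison pitfall (purely illustrative; the issue's root cause was still being triaged):
```python
import platform
import sys

print(platform.python_version())   # e.g. "3.8.13"
print(sys.version_info >= (3, 7))  # True on 3.8.13; tuple comparison is safe
print("3.10" >= "3.7")             # False! string comparison is the usual trap
```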
Warn about checksums
Computing the checksums takes a lot of time on big datasets, so we should at least add a warning to notify the user about this step. I also mentioned how to disable it and added a tqdm bar (delay=5 seconds). cc @ola13
https://github.com/huggingface/datasets/pull/5279
[ "_The documentation is not available anymore as the PR was closed or merged._", "I'm also in favor of disabling this by default - it's kinda impractical", "Great, thanks for the quick turnaround on this!" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5279", "html_url": "https://github.com/huggingface/datasets/pull/5279", "diff_url": "https://github.com/huggingface/datasets/pull/5279.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5279.patch", "merged_at": "2022-11-23T09:47...
5,279
true
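For reference, a sketch of how verification is skipped in the `datasets` 2.x line (the same flag is mentioned in the consumer-finance-complaints thread further down; the dataset name is a placeholder):
```python
from datasets import load_dataset

# Skips checksum/size verification of the downloaded files.
ds = load_dataset("some/dataset", ignore_verifications=True)
```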
load_dataset does not read jsonl metadata file properly
### Describe the bug Hi, I'm following [this page](https://huggingface.co/docs/datasets/image_dataset) to create a dataset of images and captions via an image folder and a metadata.json file, but I can't seem to get the dataloader to recognize the "text" column. It just spits out "image" and "label" as features. B...
https://github.com/huggingface/datasets/issues/5278
[ "Can you try to remove \"drop_labels=false\" ? It may force the loader to infer the labels instead of reading the metadata", "Hi, thanks for responding. I tried that, but it does not change anything.", "Can you try updating `datasets` ? Metadata support was added in `datasets` 2.4", "Probably the issue, will ...
null
5,278
false
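For reference, a minimal sketch of the layout the imagefolder loader expects per the docs page linked in the issue (requires datasets >= 2.4, as noted in the comments; file names below are placeholders):
```python
# Expected layout:
#   folder/train/metadata.jsonl
#   folder/train/0001.png
#
# metadata.jsonl — one JSON object per line, keyed by a relative "file_name":
#   {"file_name": "0001.png", "text": "a caption"}

from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="folder")
print(ds["train"].features)  # should show "image" and "text", not "label"
```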
Remove YAML integer keys from class_label metadata
Partially fix #5275.
https://github.com/huggingface/datasets/pull/5277
[ "_The documentation is not available anymore as the PR was closed or merged._", "Also note that this approach is valid when metadata keys are str, but also if they are int.\r\n- This will be helpful for any community dataset using old integer keys in their metadata", "perfect !" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5277", "html_url": "https://github.com/huggingface/datasets/pull/5277", "diff_url": "https://github.com/huggingface/datasets/pull/5277.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5277.patch", "merged_at": "2022-11-22T13:55...
5,277
true
Bug in downloading common_voice data and pushing a small chunk of it to one's own hub
### Describe the bug I'm trying to load the common voice dataset. Currently there is no implementation to download just part of the data, and I need just one part of it, without downloading the entire dataset. Help please? ![image](https://user-images.githubusercontent.com/48530104/203260511-26df766f-6013-4...
https://github.com/huggingface/datasets/issues/5276
[ "Sounds like one of the file is not a valid one, can you make sure you uploaded valid mp3 files ?", "Well I just sharded the original commonVoice dataset and pushed a small chunk of it in a private rep\n\nWhat did go wrong?\n\nHolen Sie sich Outlook für iOS<https://aka.ms/o0ukef>\n________________________________...
null
5,276
false
YAML integer keys are not preserved Hub server-side
After an internal discussion (https://github.com/huggingface/moon-landing/issues/4563): - YAML integer keys are not preserved server-side: they are transformed to strings - See for example this Hub PR: https://huggingface.co/datasets/acronym_identification/discussions/1/files - Original: ```yaml ...
https://github.com/huggingface/datasets/issues/5275
[ "@huggingface/datasets if you agree, I can make the bulk edit on the Hub to fix integer keys into strings.", "Ok for me, and we can merge (internal) https://github.com/huggingface/moon-landing/pull/4609", "FYI there are still 2k+ weekly users on `datasets` 2.6.1 which doesn't support the string label format for...
null
5,275
false
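A quick illustration of the parsing difference at stake, using PyYAML (the label string is hypothetical):
```python
import yaml

# A bare YAML key like `0:` parses as a Python int...
print(yaml.safe_load("0: negative"))    # {0: 'negative'}
# ...while quoting it yields the string key the Hub normalizes to.
print(yaml.safe_load("'0': negative"))  # {'0': 'negative'}
```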
load_dataset possibly broken for gated datasets?
### Describe the bug When trying to download the [winoground dataset](https://huggingface.co/datasets/facebook/winoground), I get this error unless I roll back the version of huggingface-hub: ``` [/usr/local/lib/python3.7/dist-packages/huggingface_hub/utils/_validators.py](https://localhost:8080/#) in validate_rep...
https://github.com/huggingface/datasets/issues/5274
[ "@BradleyHsu", "Btw, thanks very much for finding the hub rollback temporary fix and bringing the issue to our attention @KhoomeiK!", "I see the same issue when calling `load_dataset('poloclub/diffusiondb', 'large_random_1k')` with `datasets==2.7.1` and `huggingface-hub=0.11.0`. No issue with `datasets=2.6.1` a...
null
5,274
false
download_mode="force_redownload" does not refresh cached dataset
### Describe the bug `load_dataset` does not refresh the dataset when features are imported from an external file, even with `download_mode="force_redownload"`. The bug is not limited to nested fields; however, it is more likely to occur with nested fields. ### Steps to reproduce the bug To reproduce the bug, 3 files are ne...
https://github.com/huggingface/datasets/issues/5273
[]
null
5,273
false
Use pyarrow Tensor dtype
### Feature request I was going through the discussion of converting tensors to lists. Is there a way to leverage pyarrow's Tensors for nested arrays / embeddings? For example: ```python import pyarrow as pa import numpy as np x = np.array([[2, 2, 4], [4, 5, 100]], np.int32) pa.Tensor.from_numpy(x, dim_names=["dim1...
https://github.com/huggingface/datasets/issues/5272
[ "Hi ! We're using the Arrow format for the datasets, and PyArrow tensors are not part of the Arrow format AFAIK:\r\n\r\n> There is no direct support in the arrow columnar format to store Tensors as column values.\r\n\r\nsource: https://github.com/apache/arrow/issues/4802#issuecomment-508494694", "@wesm @rok its b...
null
5,272
false
Fix #5269
``` $ datasets-cli convert --datasets_directory <TAB> datasets_directory benchmarks/ docs/ metrics/ notebooks/ src/ templates/ tests/ utils/ ```
https://github.com/huggingface/datasets/pull/5271
[ "See <https://github.com/huggingface/datasets/issues/5269>" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5271", "html_url": "https://github.com/huggingface/datasets/pull/5271", "diff_url": "https://github.com/huggingface/datasets/pull/5271.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5271.patch", "merged_at": null }
5,271
true
When len(_URLS) > 16, download will hang
### Describe the bug ```python In [9]: dataset = load_dataset('Freed-Wu/kodak', split='test') Downloading: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.53k/2.53k [00:00<00:00, 1.88MB/s] [1...
https://github.com/huggingface/datasets/issues/5270
[ "It can fix the bug temporarily.\r\n```python\r\nfrom datasets import DownloadConfig\r\nconfig = DownloadConfig(num_proc=8)\r\nIn [5]: dataset = load_dataset('Freed-Wu/kodak', split='test', download_config=config)\r\nDownloading and preparing dataset kodak/default to /home/wzy/.cache/huggingface/datasets/Freed-Wu__...
null
5,270
false
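A cleaned-up version of the workaround quoted in the comments, capping the number of parallel downloads:
```python
from datasets import DownloadConfig, load_dataset

# Workaround from the thread: limit parallel downloads to avoid the hang.
config = DownloadConfig(num_proc=8)
dataset = load_dataset("Freed-Wu/kodak", split="test", download_config=config)
```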
Shell completions
### Feature request Like <https://github.com/huggingface/huggingface_hub/issues/1197>, datasets-cli may need it, too. ### Motivation See above. ### Your contribution Maybe.
https://github.com/huggingface/datasets/issues/5269
[ "I don't think we need completion on the datasets-cli, since we're mainly developing huggingface-cli", "I see." ]
null
5,269
false
Sharded save_to_disk + multiprocessing
Added `num_shards=` and `num_proc=` to `save_to_disk()` EDIT: also added `max_shard_size=` to `save_to_disk()`, and also `num_shards=` to `push_to_hub` I also: - deprecated the fs parameter in favor of storage_options (for consistency with the rest of the lib) in save_to_disk and load_from_disk - always embed t...
https://github.com/huggingface/datasets/pull/5268
[ "_The documentation is not available anymore as the PR was closed or merged._", "Added both num_shards and max_shard_size in push_to_hub/save_to_disk. Will take care of updating the tests later", "It's ready for a final review @mariosasko and @albertvillanova, let me know what you think :)", "Took your commen...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5268", "html_url": "https://github.com/huggingface/datasets/pull/5268", "diff_url": "https://github.com/huggingface/datasets/pull/5268.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5268.patch", "merged_at": "2022-12-14T18:22...
5,268
true
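A sketch of the API surface this PR adds, per its description (paths, repo id, and counts are placeholders):
```python
from datasets import load_dataset

ds = load_dataset("some/dataset", split="train")

# Either num_shards= or max_shard_size= controls sharding;
# num_proc= parallelizes the shard writes.
ds.save_to_disk("path/to/dataset_dir", num_shards=8, num_proc=4)

# num_shards= is also added to push_to_hub by this PR.
ds.push_to_hub("user/repo", num_shards=8)
```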
Fix `max_shard_size` docs
null
https://github.com/huggingface/datasets/pull/5267
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5267", "html_url": "https://github.com/huggingface/datasets/pull/5267", "diff_url": "https://github.com/huggingface/datasets/pull/5267.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5267.patch", "merged_at": "2022-11-18T17:25...
5,267
true
Specify arguments as keywords in librosa.resample to avoid future errors
Fixes a warning and future deprecation from `librosa.resample`: ``` FutureWarning: Pass orig_sr=16000, target_sr=48000 as keyword args. From version 0.10 passing these as positional arguments will result in an error array = librosa.resample(array, sampling_rate, self.sampling_rate, res_type="kaiser_best") ```
https://github.com/huggingface/datasets/pull/5266
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5266", "html_url": "https://github.com/huggingface/datasets/pull/5266", "diff_url": "https://github.com/huggingface/datasets/pull/5266.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5266.patch", "merged_at": "2022-11-21T15:41...
5,266
true
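A runnable before/after of the change, assuming a mono float array (librosa 0.10 rejects positional sample rates):
```python
import numpy as np
import librosa

array = np.zeros(16000, dtype=np.float32)  # 1 second of silence at 16 kHz

# Deprecated: librosa.resample(array, 16000, 48000, res_type="kaiser_best")
resampled = librosa.resample(array, orig_sr=16000, target_sr=48000, res_type="kaiser_best")
print(resampled.shape)  # (48000,)
```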
Get an IterableDataset from a map-style Dataset
This is useful to leverage iterable-dataset-specific features like: - fast approximate shuffling - lazy map, filter, etc. Iterating over the resulting iterable dataset should be at least as fast as iterating over the map-style dataset. Here are some ideas regarding the API: ```python # 1. # - consistency wi...
https://github.com/huggingface/datasets/issues/5265
[ "I think `stream` could be misleading since the data is not being streamed from remote endpoints (one could think that's the case when they see `load_dataset` followed by `stream`). Hence, I prefer the second option.\r\n\r\nPS: When we resolve https://github.com/huggingface/datasets/issues/4542, we could add `as_tf...
null
5,265
false
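For what it's worth, a sketch of how such a conversion could look once the API is settled (the method name below is an assumption for illustration, not the issue's final decision):
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": list(range(100))})

# Assumed method name; the issue above is precisely about naming.
iterable_ds = ds.to_iterable_dataset(num_shards=4)
iterable_ds = iterable_ds.shuffle(buffer_size=10, seed=42)  # fast approximate shuffling
first = next(iter(iterable_ds))
```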
`datasets` can't read a Parquet file in Python 3.9.13
### Describe the bug I have an error when trying to load this [dataset](https://huggingface.co/datasets/bigcode/the-stack-dedup-pjj) (it's private but I can add you to the bigcode org). `datasets` can't read one of the parquet files in the Java subset ```python from datasets import load_dataset ds = load_data...
https://github.com/huggingface/datasets/issues/5264
[ "Could you share the full stack trace please ?\r\n\r\n\r\nCan you also try running this code ? It can be useful to determine if the issue comes from `datasets` or `fsspec` (streaming) or `pyarrow` (parquet reading):\r\n```python\r\nds = load_dataset(\"parquet\", data_files=a_parquet_file_url, use_auth_token=True)\r...
null
5,264
false
Save a dataset in a determined number of shards
This is useful to distribute the shards to training nodes. This can be implemented in `save_to_disk` and can also leverage multiprocessing to speed up the process
https://github.com/huggingface/datasets/issues/5263
[]
null
5,263
false
AttributeError: 'Value' object has no attribute 'names'
Hello, I'm trying to build a model for custom token classification. I already followed the token classification course on huggingface, and while adapting the code to my work, this message occurs: 'Value' object has no attribute 'names' Here's my code: `raw_datasets` generates DatasetDict({ train: Datas...
https://github.com/huggingface/datasets/issues/5262
[ "Hi ! It looks like your \"isDif\" column is a Sequence of Value(\"string\"), not a Sequence of ClassLabel.\r\n\r\nYou can convert your Value(\"string\") feature type to a ClassLabel feature type this way:\r\n```python\r\nfrom datasets import ClassLabel, Sequence\r\n\r\n# provide the label_names yourself\r\nlabel_n...
null
5,262
false
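A runnable version of the fix sketched in the (truncated) comment, with toy data and hypothetical label names for the "isDif" column:
```python
from datasets import ClassLabel, Dataset, Sequence

ds = Dataset.from_dict({"isDif": [["B-Dif", "O"], ["O", "O"]]})  # toy data

# Provide the label names yourself (hypothetical here), then cast the
# Sequence(Value("string")) column to Sequence(ClassLabel).
label_names = ["O", "B-Dif"]
ds = ds.cast_column("isDif", Sequence(ClassLabel(names=label_names)))
print(ds.features["isDif"].feature.names)  # ['O', 'B-Dif']
```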
Add PubTables-1M
### Name PubTables-1M ### Paper https://openaccess.thecvf.com/content/CVPR2022/html/Smock_PubTables-1M_Towards_Comprehensive_Table_Extraction_From_Unstructured_Documents_CVPR_2022_paper.html ### Data https://github.com/microsoft/table-transformer ### Motivation Table Transformer is now available in 🤗 Transforme...
https://github.com/huggingface/datasets/issues/5261
[ "cc @albertvillanova the author would like to add this dataset to the hub: https://github.com/microsoft/table-transformer/issues/68#issuecomment-1319114621. Could you help him out?" ]
null
5,261
false
consumer-finance-complaints dataset not loading
### Describe the bug Error during dataset loading ### Steps to reproduce the bug ``` >>> import datasets >>> cf_raw = datasets.load_dataset("consumer-finance-complaints") Downloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████...
https://github.com/huggingface/datasets/issues/5260
[ "Thanks for reporting, @adiprasad.\r\n\r\nWe are having a look at it.", "I have opened an issue in that dataset Community tab on the Hub: https://huggingface.co/datasets/consumer-finance-complaints/discussions/1\r\n\r\nPlease note that in the meantime, you can load the dataset by passing `ignore_verifications=Tru...
null
5,260
false
datasets 2.7 introduces sharding error
### Describe the bug dataset fails to load with runtime error `RuntimeError: Sharding is ambiguous for this dataset: we found several data sources lists of different lengths, and we don't know over which list we should parallelize: - key audio_files has length 46 - key data has length 0 To fix this, check the ...
https://github.com/huggingface/datasets/issues/5259
[ "I notice a comment in the code says:\r\n`Having lists of different sizes makes sharding ambigious, raise an error in this case until we decide how to define sharding without ambiguity for users` \r\n \r\n ... which suggests this update was pushed knowing that it might break some things. But, it didn't seem to h...
null
5,259
false
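A hypothetical builder fragment showing the rule datasets 2.7 enforces: every list in `gen_kwargs` is treated as shardable, so keep exactly one list (or make all lists the same length):
```python
import datasets

class Sketch(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"path": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        audio_files = [f"audio_{i}.wav" for i in range(46)]  # hypothetical paths
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # one list only -> unambiguous sharding; scalars are fine alongside it
                gen_kwargs={"audio_files": audio_files, "data": "meta.json"},
            )
        ]

    def _generate_examples(self, audio_files, data):
        for i, path in enumerate(audio_files):
            yield i, {"path": path}
```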
Restore order of split names in dataset_info for canonical datasets
After a bulk edit of canonical datasets to create the YAML `dataset_info` metadata, the split names were accidentally sorted alphabetically. See for example: - https://huggingface.co/datasets/bc2gm_corpus/commit/2384629484401ecf4bb77cd808816719c424e57c Note that this order is the one appearing in the preview of the...
https://github.com/huggingface/datasets/issues/5258
[ "The bulk edit is running...\r\n\r\nSee for example: \r\n- A single config: https://huggingface.co/datasets/acronym_identification/discussions/2\r\n- Multiple configs: https://huggingface.co/datasets/babi_qa/discussions/1", "TODO: Add \"dataset_info\" YAML metadata to:\r\n- [x] \"chr_en\" has no metadata JSON fil...
null
5,258
false
remove an unused statement
remove the unused statement: `input_pairs = list(zip())`
https://github.com/huggingface/datasets/pull/5257
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5257", "html_url": "https://github.com/huggingface/datasets/pull/5257", "diff_url": "https://github.com/huggingface/datasets/pull/5257.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5257.patch", "merged_at": "2022-11-18T11:04...
5,257
true
fix wrong print
print `encoded_dataset.column_names`, not `dataset.column_names`
https://github.com/huggingface/datasets/pull/5256
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5256", "html_url": "https://github.com/huggingface/datasets/pull/5256", "diff_url": "https://github.com/huggingface/datasets/pull/5256.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5256.patch", "merged_at": "2022-11-18T11:05...
5,256
true
Add a Depth Estimation dataset - DIODE / NYUDepth / KITTI
### Name NYUDepth ### Paper http://cs.nyu.edu/~silberman/papers/indoor_seg_support.pdf ### Data https://cs.nyu.edu/~silberman/datasets/nyu_depth_v2.html ### Motivation Depth estimation is an important problem in computer vision. We have a couple of Depth Estimation models on Hub as well: * [GLPN...
https://github.com/huggingface/datasets/issues/5255
[ "Also cc @mariosasko and @lhoestq ", "Cool ! Let us know if you have questions or if we can help :)\r\n\r\nI guess we'll also have to create the NYU CS Department on the Hub ?", "> I guess we'll also have to create the NYU CS Department on the Hub ?\r\n\r\nYes, you're right! Let me add it to my profile first, a...
null
5,255
false
typo
null
https://github.com/huggingface/datasets/pull/5254
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5254", "html_url": "https://github.com/huggingface/datasets/pull/5254", "diff_url": "https://github.com/huggingface/datasets/pull/5254.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5254.patch", "merged_at": "2022-11-18T10:53...
5,254
true
typo
null
https://github.com/huggingface/datasets/pull/5253
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5253", "html_url": "https://github.com/huggingface/datasets/pull/5253", "diff_url": "https://github.com/huggingface/datasets/pull/5253.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5253.patch", "merged_at": "2022-11-18T10:53...
5,253
true
Support for decoding Image/Audio types in map when format type is not default one
Add support for decoding the `Image`/`Audio` types in `map` for the formats (Numpy, TF, Jax, PyTorch) other than the default one (Python). Additional improvements: * make `Dataset`'s "iter" API cleaner by removing `_iter` and replacing `_iter_batches` with `iter(batch_size)` (also implemented for `IterableDataset`...
https://github.com/huggingface/datasets/pull/5252
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5252). All of your documentation changes will be reflected on that endpoint.", "Yes, if the image column is the first in the batch keys, it will ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5252", "html_url": "https://github.com/huggingface/datasets/pull/5252", "diff_url": "https://github.com/huggingface/datasets/pull/5252.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5252.patch", "merged_at": "2022-12-13T16:59...
5,252
true
Docs are not generated after latest release
After the latest `datasets` release, version 2.7.0, the docs were not generated. As we have changed the release procedure (so that now we do not push directly to the main branch), maybe we should also change the corresponding GitHub action: https://github.com/huggingface/datasets/blob/edf1902f954c5568daadebcd8754bdad4...
https://github.com/huggingface/datasets/issues/5251
[ "After a discussion with @mishig25:\r\n- He said that this action should be triggered if we call our release branch according to the regex `v*-release`, as transformers does\r\n- I said that our procedure is different: our release branch is *temporary* and it is deleted just after the release PR is merged to main\r...
null
5,251
false
Change release procedure to use only pull requests
This PR changes the release procedure so that: - it only makes changes to the main branch via pull requests - it is no longer necessary to directly commit/push to the main branch Close #5251.
https://github.com/huggingface/datasets/pull/5250
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5250). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5250", "html_url": "https://github.com/huggingface/datasets/pull/5250", "diff_url": "https://github.com/huggingface/datasets/pull/5250.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5250.patch", "merged_at": "2022-11-22T16:27...
5,250
true
Protect the main branch from inadvertent direct pushes
We have decided to implement a protection mechanism in this repository, so that nobody (not even administrators) can inadvertently push directly to the main branch. See context here: - d7c942228b8dcf4de64b00a3053dce59b335f618 To do: - [x] Protect main branch - Settings > Branches > Branch protec...
https://github.com/huggingface/datasets/issues/5249
[]
null
5,249
false
Complete doc migration
Reverts huggingface/datasets#5214 Everything is handled on the doc-builder side now 😊
https://github.com/huggingface/datasets/pull/5248
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5248). All of your documentation changes will be reflected on that endpoint.", "Thanks for the fix @mishig25.\r\n\r\nI guess this is the reason why the docs are not generated for the latest release version 2.7.0? https://huggin...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5248", "html_url": "https://github.com/huggingface/datasets/pull/5248", "diff_url": "https://github.com/huggingface/datasets/pull/5248.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5248.patch", "merged_at": "2022-11-16T10:41...
5,248
true
Set dev version
null
https://github.com/huggingface/datasets/pull/5247
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5247). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5247", "html_url": "https://github.com/huggingface/datasets/pull/5247", "diff_url": "https://github.com/huggingface/datasets/pull/5247.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5247.patch", "merged_at": "2022-11-16T10:17...
5,247
true
Release: 2.7.0
null
https://github.com/huggingface/datasets/pull/5246
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5246", "html_url": "https://github.com/huggingface/datasets/pull/5246", "diff_url": "https://github.com/huggingface/datasets/pull/5246.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5246.patch", "merged_at": "2022-11-16T09:37...
5,246
true
Unable to rename columns in streaming dataset
### Describe the bug Trying to rename a column in a streaming dataset destroys the features object. ### Steps to reproduce the bug The following code illustrates the error: ``` from datasets import load_dataset dataset = load_dataset('mc4', 'en', streaming=True, split='train') dataset.info.features # {'text':...
https://github.com/huggingface/datasets/issues/5245
[ "Hi @peregilk this bug is directly related to https://github.com/huggingface/datasets/issues/3888, and still not fixed... But I'll try to have a look!", "Thanks @alvarobartt. It is great if you are able to fix it, but when reading the explanation it seems like it is possible to work around it.\r\n\r\nWe also trie...
null
5,245
false
Allow dataset streaming from a private source when loading a dataset with a dataset loading script
### Feature request Add arguments to the function _get_authentication_headers_for_url_ like custom_endpoint and custom_token in order to add flexibility when downloading files from a private source. It should also be possible to provide these arguments from the dataset loading script, maybe giving them to the dl_...
https://github.com/huggingface/datasets/issues/5244
[ "Hi ! What kind of private source ? We're exploring adding support for cloud storage and URIs like s3://, gs:// etc. with authentication in the download manager", "Hello! It's a google cloud storage, so gs://, but I'm using it with https.\r\nBeing able to provide a file system like [here](https://huggingface.co/d...
null
5,244
false
Download only split data
### Feature request Is it possible to download only the data that I am requesting and not the entire dataset? I run out of disk space as it seems to download the entire dataset, instead of only the part needed. common_voice["test"] = load_dataset("mozilla-foundation/common_voice_11_0", "en", split="test", ...
https://github.com/huggingface/datasets/issues/5243
[ "Hi @capsabogdan! Unfortunately, it's hard to implement because quite often datasets data is being hosted in a single archive for all splits :( So we have to download the whole archive to split it into splits. This is the case for CommonVoice too. \r\n\r\nHowever, for cases when data is distributed in separate arch...
null
5,243
false
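One workaround following the comments: streaming only fetches what is iterated, provided the split's data lives in its own archives (the comments note this holds for some datasets but not all):
```python
from datasets import load_dataset

common_voice_test = load_dataset(
    "mozilla-foundation/common_voice_11_0", "en",
    split="test", streaming=True, use_auth_token=True,
)
first = next(iter(common_voice_test))  # fetched lazily, no full download up front
```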
Failed Data Processing upon upload with zip file full of images
I went to AutoTrain and, under image classification, arrived at the step where it was time to prepare my dataset. Screenshot below ![image](https://user-images.githubusercontent.com/82735473/201814099-3cc5ff8a-88dc-4f5f-8140-f19560641d83.png) I chose the method 2 option. I have a csv file with two columns. ~23,000 files. I...
https://github.com/huggingface/datasets/issues/5242
[ "cc @abhishekkrthakur @SBrandeis " ]
null
5,242
false
Support hfh rc version
Otherwise the code doesn't work for hfh 0.11.0rc0; follow-up to #5237
https://github.com/huggingface/datasets/pull/5241
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5241", "html_url": "https://github.com/huggingface/datasets/pull/5241", "diff_url": "https://github.com/huggingface/datasets/pull/5241.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5241.patch", "merged_at": "2022-11-15T16:09...
5,241
true
Cleaner error tracebacks for dataset script errors
Make the traceback of the errors raised in `_generate_examples` cleaner for easier debugging. Additionally, initialize the `writer` in the for-loop to avoid the `ValueError` from `ArrowWriter.finalize` raised in the `finally` block when no examples are yielded before the `_generate_examples` error. <details> <s...
https://github.com/huggingface/datasets/pull/5240
[ "_The documentation is not available anymore as the PR was closed or merged._", "@lhoestq Good catch! This currently leads to an AttributeError (due to `writer` being None) on this line:\r\nhttps://github.com/huggingface/datasets/blob/fed1628d49a91f9ae259ddf6edbb252c7972d9a3/src/datasets/builder.py#L1552\r\n" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5240", "html_url": "https://github.com/huggingface/datasets/pull/5240", "diff_url": "https://github.com/huggingface/datasets/pull/5240.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5240.patch", "merged_at": "2022-11-15T18:24...
5,240
true
Add num_proc to from_csv/generator/json/parquet/text
Allow multiprocessing in the from_* methods
https://github.com/huggingface/datasets/pull/5239
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5239). All of your documentation changes will be reflected on that endpoint.", "I ended up moving `num_proc` to `AbstractDatasetReader.__init__` :)\r\n\r\nLet me know if it sounds good to you now" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5239", "html_url": "https://github.com/huggingface/datasets/pull/5239", "diff_url": "https://github.com/huggingface/datasets/pull/5239.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5239.patch", "merged_at": "2022-12-06T15:39...
5,239
true
Make `Version` hashable
Add `__hash__` to the `Version` class to make it hashable (and remove the unneeded methods), as `Version("0.0.0")` is the default value of `BuilderConfig.version` and the default fields of a dataclass need to be hashable in Python 3.11. Fix https://github.com/huggingface/datasets/issues/5230
https://github.com/huggingface/datasets/pull/5238
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5238", "html_url": "https://github.com/huggingface/datasets/pull/5238", "diff_url": "https://github.com/huggingface/datasets/pull/5238.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5238.patch", "merged_at": "2022-11-14T15:27...
5,238
true
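A minimal illustration of the Python 3.11 rule behind the fix: a dataclass field default must now be hashable, and a class that defines `__eq__` without `__hash__` is unhashable (stand-in class names, not the library's actual code):
```python
from dataclasses import dataclass

class Version:  # stand-in for datasets.utils.version.Version
    def __init__(self, v):
        self.v = v
    def __eq__(self, other):  # defining __eq__ alone sets __hash__ to None...
        return self.v == other.v
    def __hash__(self):       # ...so the fix is to define __hash__ explicitly
        return hash(self.v)

@dataclass
class BuilderConfig:
    version: Version = Version("0.0.0")  # OK on 3.11 only because Version is hashable
```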
Encode path only for old versions of hfh
The next version of `huggingface-hub`, 0.11, does encode the `path`, and we don't want to encode it twice
https://github.com/huggingface/datasets/pull/5237
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5237", "html_url": "https://github.com/huggingface/datasets/pull/5237", "diff_url": "https://github.com/huggingface/datasets/pull/5237.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5237.patch", "merged_at": "2022-11-14T17:35...
5,237
true
Handle ArrowNotImplementedError caused by try_type being Image or Audio in cast
Handle the `ArrowNotImplementedError` thrown when `try_type` is `Image` or `Audio` and the input array cannot be converted to their storage formats. Reproducer: ```python from datasets import Dataset from PIL import Image import requests ds = Dataset.from_dict({"image": [Image.open(requests.get("https://uploa...
https://github.com/huggingface/datasets/pull/5236
[ "_The documentation is not available anymore as the PR was closed or merged._", "> Not sure how we can have a test that is relevant for this though - feel free to add one if you have ideas\r\n\r\nYes, this was my reasoning for not adding a test. This change is pretty simple, so I think it's OK not to have a test ...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5236", "html_url": "https://github.com/huggingface/datasets/pull/5236", "diff_url": "https://github.com/huggingface/datasets/pull/5236.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5236.patch", "merged_at": "2022-11-14T16:01...
5,236
true
Pin `typer` version in tests to <0.5 to fix Windows CI
Otherwise `click` fails on Windows: ``` Traceback (most recent call last): File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "C:\hostedtoolcache\windows\Python\3.7.9\x64\lib\runpy.py", line 85, in _run_code exec(code, run_glob...
https://github.com/huggingface/datasets/pull/5235
[]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5235", "html_url": "https://github.com/huggingface/datasets/pull/5235", "diff_url": "https://github.com/huggingface/datasets/pull/5235.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5235.patch", "merged_at": "2022-11-14T13:41...
5,235
true
fix: dataset path should be absolute
cache_file_name depends on the dataset's path. A simple case where this could cause a problem: ``` import os import datasets def add_prefix(example): example["text"] = "Review: " + example["text"] return example ds = datasets.load_from_disk("a/relative/path") os.chdir("/tmp") ds_1 = ds.map(add_...
https://github.com/huggingface/datasets/pull/5234
[ "Good catch thanks ! Have you tried to use the absolue path in `MemoryMappedTable.__init__` in `table.py`?\r\n\r\nI think it can fix issues with relative paths at more levels than just fixing it `load_from_disk`. If it works I think it would be a more robust fix to this issue", "@lhoestq right, that actually fixe...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5234", "html_url": "https://github.com/huggingface/datasets/pull/5234", "diff_url": "https://github.com/huggingface/datasets/pull/5234.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5234.patch", "merged_at": "2022-12-07T23:46...
5,234
true
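A hedged completion of the truncated repro above (the final `map` call is my assumption of where the snippet was going): after `os.chdir`, the relative path no longer resolves to the same location, so anything derived from it (cache files, memory-mapped table paths) breaks.
```python
import os
import datasets

def add_prefix(example):
    example["text"] = "Review: " + example["text"]
    return example

ds = datasets.load_from_disk("a/relative/path")  # hypothetical relative path
os.chdir("/tmp")
# Before this PR, the relative path resolves differently from here on,
# so the derived cache_file_name / memory-mapped paths no longer match.
ds_1 = ds.map(add_prefix)
```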
Fix shards in IterableDataset.from_generator
Allow defining a sharded iterable dataset
https://github.com/huggingface/datasets/pull/5233
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5233", "html_url": "https://github.com/huggingface/datasets/pull/5233", "diff_url": "https://github.com/huggingface/datasets/pull/5233.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5233.patch", "merged_at": "2022-11-14T14:13...
5,233
true
Incompatible dill versions in datasets 2.6.1
### Describe the bug datasets version 2.6.1 has a dependency on dill<0.3.6. This causes a conflict with dill>=0.3.6 used by the multiprocess dependency in datasets 2.6.1. This issue is already fixed in https://github.com/huggingface/datasets/pull/5166/files, but has not yet been released. Please release a new version of the...
https://github.com/huggingface/datasets/issues/5232
[ "Thanks for reporting, @vinaykakade.\r\n\r\nWe are discussing about making a release early this week.\r\n\r\nPlease note that in the meantime, in your specific case (as we also pointed out here: https://github.com/huggingface/datasets/issues/5162#issuecomment-1291720293), you can circumvent the issue by pinning `mu...
null
5,232
false
Using `set_format(type='torch', columns=columns)` makes Array2D/3D columns stop formatting correctly
I have a Dataset with two Features defined as follows: ``` 'image': Array3D(dtype="int64", shape=(3, 224, 224)), 'bbox': Array2D(dtype="int64", shape=(512, 4)), ``` On said dataset, if I `dataset.set_format(type='torch')` and then use the dataset in a dataloader, these columns are correctly cast to Tensors of ...
https://github.com/huggingface/datasets/issues/5231
[ "In case others find this, the problem was not with set_format, but my usages of `to_pandas()` and `from_pandas()` which I was using during dataset splitting; somewhere in the chain of converting to and from pandas the `Array2D/Array3D` types get converted to series of `Sequence()` types" ]
null
5,231
false
dataclasses error when importing the library in python 3.11
### Describe the bug When I import datasets using python 3.11 the dataclasses standard library raises the following error: `ValueError: mutable default <class 'datasets.utils.version.Version'> for field version is not allowed: use default_factory` When I tried to import the library using the following jupyter note...
https://github.com/huggingface/datasets/issues/5230
[ "I opened [this issue](https://github.com/python/cpython/issues/99401).\r\nPython's maintainers say that the issue is caused by [this change](https://docs.python.org/3.11/whatsnew/3.11.html#dataclasses).\r\nI believe adding a `__hash__` method to `datasets.utils.version.Version` should solve (at least partially) th...
null
5,230
false
Type error when calling `map` over dataset containing 0-d tensors
### Describe the bug 0-dimensional tensors in a dataset lead to `TypeError: iteration over a 0-d array` when calling `map`. It is easy to generate such tensors by using `.with_format("...")` on the whole dataset. ### Steps to reproduce the bug ``` ds = datasets.Dataset.from_list([{"a": 1}, {"a": 1}]).with_fo...
https://github.com/huggingface/datasets/issues/5229
[ "Hi! \r\n\r\nWe could address this by calling `.item()` on such tensors to extract the value, but this would lose us the type, which could lead to storing the generated dataset in a suboptimal format. Considering this, I think the only proper fix would be implementing support for 0-D tensors on Apache Arrow's side ...
null
5,229
false
Loading a dataset from the hub fails if you happen to have a folder of the same name
### Describe the bug I'm not 100% sure this should be considered a bug, but it was certainly annoying to figure out the cause of. And perhaps I am just missing a specific argument needed to avoid this conflict. Basically I had a situation where multiple workers were downloading different parts of the glue dataset and ...
https://github.com/huggingface/datasets/issues/5228
[ "`load_dataset` first checks for a local directory before checking for the Hub.\r\n\r\nTo make it explicit that it has to fetch the Hub, we could support the `hffs` syntax:\r\n```python\r\nload_dataset(\"hf://datasets/glue\")\r\n```\r\n\r\nwould that work for you ? Also cc @mariosasko who's leading the `hffs` proje...
null
5,228
false
datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files
### Describe the bug From these lines: from datasets import list_datasets, load_dataset dataset = load_dataset("wikisql","binary") I get the error message: datasets.data_files.EmptyDatasetError: The directory at wikisql doesn't contain any data files And yet the 'wikisql' is reported to exist via the list_datas...
https://github.com/huggingface/datasets/issues/5227
[ "Fixed. Please close." ]
null
5,227
false
Q: Memory release when removing the column?
### Describe the bug How do I release memory when I use methods like `.remove_columns()` or `clear()` in notebooks? ```python from datasets import load_dataset common_voice = load_dataset("mozilla-foundation/common_voice_11_0", "ja", use_auth_token=True) # check memory -> RAM Used (GB): 0.704 / Total (GB) 33.670...
https://github.com/huggingface/datasets/issues/5226
[ "Hi ! Datasets are memory mapped from your disk, i.e. they're not loaded in RAM. This is possible thanks to the Arrow data format.\r\n\r\nTherefore the column you remove is not in RAM, so removing it doesn't cause the RAM to decrease.", "Thanks for the explanation! @lhoestq \r\nI wonder since it is memory mapped,...
null
5,226
false
Add video feature
### Feature request Add a `Video` feature to the library so folks can include videos in their datasets. ### Motivation Being able to load Video data would be quite helpful. However, there are some challenges when it comes to videos: 1. Videos, unlike images, can end up being extremely large files 2. Often times ...
https://github.com/huggingface/datasets/issues/5225
[ "@NielsRogge @rwightman may have additional requirements regarding this feature.\r\n\r\nWhen adding a new (decodable) type, the hardest part is choosing the right decoding library. What I mean by \"right\" here is that it has all the features we need and is easy to install (with GPU support?).\r\n\r\nSome candidate...
null
5,225
false
Seems to freeze when loading audio dataset with wav files from local folder
### Describe the bug I'm following the instructions in https://huggingface.co/docs/datasets/audio_load#audiofolder-with-metadata to be able to load a dataset from a local folder. I have everything in one folder: a train folder containing the audio files and the CSV. When I try to load the dataset and run from term...
https://github.com/huggingface/datasets/issues/5224
[ "I just tried to do the same but changing the `.wav` files to `.mp3` files and that doesn't fix it.", "I don't know if anyone will ever read this but I've tried to upload the same dataset with google colab and the output seems more clarifying. I didn't specify the train/test split so the dataset wasn't fully uplo...
null
5,224
false
Add SQL guide
This PR adapts @nateraw's awesome SQL notebook as a guide for the docs!
https://github.com/huggingface/datasets/pull/5223
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5223). All of your documentation changes will be reflected on that endpoint.", "I think we may want more content on this page that's not SQL related. Some of that content probably already lives in the main `load` docs page, but...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5223", "html_url": "https://github.com/huggingface/datasets/pull/5223", "diff_url": "https://github.com/huggingface/datasets/pull/5223.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5223.patch", "merged_at": "2022-11-15T17:40...
5,223
true
HuggingFace website is incorrectly reporting that my datasets are pickled
### Describe the bug HuggingFace is incorrectly reporting that my datasets are pickled. They are not pickled; they are simple ZIP files containing PNG images. Hopefully this is the right location to report this bug. ### Steps to reproduce the bug Inspect my dataset repository here: https://huggingface.co/datasets...
https://github.com/huggingface/datasets/issues/5222
[ "cc @McPatate maybe you know what's happening ?", "Yes I think I know what is happening. We check in zips for pickles, and the UI must display the pickle jar when a scan has an associated list of imports, even when empty.\r\n~I'll fix ASAP !~", "> I'll fix ASAP !\r\n\r\nActually I'd rather leave it like that f...
null
5,222
false
Cannot push
### Describe the bug I am facing an issue when I try to push a tar.gz file of around 11 GB to the Hub. ``` (venv) ╭─laptop@laptop ~/PersonalProjects/data/ulaanbal_v0 ‹main●› ╰─$ du -sh * 4.0K README.md 13G data 516K test.jsonl 18M train.jsonl 4.0K ulaanbal_v0.py 11G ulaanbal_v0.tar.gz 452K validation.jsonl...
https://github.com/huggingface/datasets/issues/5221
[ "Did you run `huggingface-cli lfs-enable-largefiles` before committing or before adding ? Maybe you can try before adding\r\n\r\nAnyway I'd encourage you to split your data into several TAR archives if possible, this way the dataset can loaded faster using multiprocessing (by giving each process a subset of shards ...
null
5,221
false
Implicit type conversion of lists in to_pandas
### Describe the bug ``` ds = Dataset.from_list([{'a':[1,2,3]}]) ds.to_pandas().a.values[0] ``` Results in `array([1, 2, 3])` -- a rather unexpected type conversion that makes downstream tools expecting lists unhappy. ### Steps to reproduce the bug See snippet ### Expected behavior Keep the original typ...
https://github.com/huggingface/datasets/issues/5220
[ "I think this behavior comes from PyArrow:\r\n```python\r\nimport pyarrow as pa\r\nt = pa.table({\"a\": [[0]]})\r\nt.to_pandas().a.values[0]\r\n# array([0])\r\n```\r\n\r\nI believe this has to do with zero-copy: you can get a pandas DataFrame without copying the buffers from arrow, and therefore end up with numpy a...
null
5,220
false
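The repro from the issue, plus one way to recover plain lists at the cost of a copy (the `.tolist()` round-trip is my suggestion, not a library feature):
```python
from datasets import Dataset

ds = Dataset.from_list([{"a": [1, 2, 3]}])
df = ds.to_pandas()
print(type(df.a.values[0]))  # numpy.ndarray, via Arrow's zero-copy conversion

df["a"] = df["a"].map(lambda x: x.tolist())  # convert each cell back to a Python list
print(type(df.a.values[0]))  # list
```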
Delta Tables usage using Datasets Library
### Feature request Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well. ### Motivation We know datasets library can absorb csv, json, parquet, etc. file formats but it would be great if Datasets library...
https://github.com/huggingface/datasets/issues/5219
[ "Hi ! Interesting :) Can you provide concrete examples of cases where it can be useful ?", "Few example blogs and posts that might help on this - \r\n\r\n1. https://hevodata.com/learn/databricks-delta-tables/\r\n2. https://docs.databricks.com/delta/index.html\r\n\r\nBasically, we are looking at utility of Dataset...
null
5,219
false
Delta Tables usage using Datasets Library
### Feature request Adding compatibility of Datasets library with Delta Format. Elevating the utilities of Datasets library from Machine Learning Scope to Data Engineering Scope as well. ### Motivation We know datasets library can absorb csv, json, parquet, etc. file formats but it would be great if Datasets library...
https://github.com/huggingface/datasets/issues/5218
[]
null
5,218
false
Reword E2E training and inference tips in the vision guides
Reference: https://github.com/huggingface/datasets/pull/5188#discussion_r1012148730
https://github.com/huggingface/datasets/pull/5217
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5217", "html_url": "https://github.com/huggingface/datasets/pull/5217", "diff_url": "https://github.com/huggingface/datasets/pull/5217.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5217.patch", "merged_at": "2022-11-10T01:36...
5,217
true
save_elasticsearch_index
Hi, I am new to Dataset and Elasticsearch. I was wondering whether there is any approach, equivalent to save_faiss_index, to save an Elasticsearch index locally for later use, to remove the need to re-index a dataset?
https://github.com/huggingface/datasets/issues/5216
[ "Hi ! I think there exist tools to dump and reload an index in your elastic search but I'm not super familiar with it.\r\n\r\nAnyway after reloading an index in elastic search you can call `ds.load_elasticsearch_index` which will connect the index to the dataset without re-indexing" ]
null
5,216
false
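A sketch of the flow described in the comment (host, port, index, and column names are placeholders): Elasticsearch keeps the index server-side, so reattaching it replaces any local save/load step.
```python
from datasets import load_dataset

ds = load_dataset("some/dataset", split="train")

# Session 1: build the index once; it persists inside Elasticsearch.
ds.add_elasticsearch_index("text", host="localhost", port=9200, es_index_name="my_index")

# Session 2 (later): reattach the existing index without re-indexing.
ds.load_elasticsearch_index("text", es_index_name="my_index", host="localhost", port=9200)
scores, examples = ds.get_nearest_examples("text", "my query", k=5)
```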
Update github pr docs actions
null
https://github.com/huggingface/datasets/pull/5214
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5214). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5214", "html_url": "https://github.com/huggingface/datasets/pull/5214", "diff_url": "https://github.com/huggingface/datasets/pull/5214.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5214.patch", "merged_at": "2022-11-08T15:39...
5,214
true
Add support for different configs with `push_to_hub`
will solve #5151 @lhoestq @albertvillanova @mariosasko This is still a super draft so please ignore code issues but I want to discuss some conceptually important things. I suggest a way to do `.push_to_hub("repo_id", "config_name")` with pushing parquet files to directories named as `config_name` (inside `data...
https://github.com/huggingface/datasets/pull/5213
[ "_The documentation is not available anymore as the PR was closed or merged._", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5213). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5213", "html_url": "https://github.com/huggingface/datasets/pull/5213", "diff_url": "https://github.com/huggingface/datasets/pull/5213.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5213.patch", "merged_at": null }
5,213
true
Fix CI require_beam maximum compatible dill version
A previous commit to main branch introduced an additional requirement on maximum compatible `dill` version with `apache-beam` in our CI `require_beam`: - d7c942228b8dcf4de64b00a3053dce59b335f618 - ec222b220b79f10c8d7b015769f0999b15959feb This PR fixes the maximum compatible `dill` version with `apache-beam`, which...
https://github.com/huggingface/datasets/pull/5212
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5212). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5212", "html_url": "https://github.com/huggingface/datasets/pull/5212", "diff_url": "https://github.com/huggingface/datasets/pull/5212.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5212.patch", "merged_at": "2022-11-15T06:32...
5,212
true
Update Overview.ipynb google colab
- removed metrics stuff - added image example - added audio example (with ffmpeg instructions) - updated the "add a new dataset" section
https://github.com/huggingface/datasets/pull/5211
[ "_The documentation is not available anymore as the PR was closed or merged._", "WDYT @albertvillanova ?", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5211). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5211", "html_url": "https://github.com/huggingface/datasets/pull/5211", "diff_url": "https://github.com/huggingface/datasets/pull/5211.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5211.patch", "merged_at": "2022-11-29T15:54...
5,211
true
Tweak readme
Tweaked some paragraphs mentioning the modalities we support + added a paragraph on security
https://github.com/huggingface/datasets/pull/5210
[ "_The documentation is not available anymore as the PR was closed or merged._", "Nit: We should also update the `Disclaimers` section to let the dataset owners know they should use Hub discussions rather than GH issues for removal requests/updates", "Updated the disclaimers section, thanks !\r\n\r\nDoes it soun...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5210", "html_url": "https://github.com/huggingface/datasets/pull/5210", "diff_url": "https://github.com/huggingface/datasets/pull/5210.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5210.patch", "merged_at": "2022-11-24T11:26...
5,210
true
Implement ability to define splits in metadata section of dataset card
### Feature request If you go here: https://huggingface.co/datasets/inria-soda/tabular-benchmark/tree/main you will see bunch of folders that has various CSV files. I’d like dataset viewer to show these files instead of only one dataset like it currently does. (and also people to be able to load them as splits inste...
https://github.com/huggingface/datasets/issues/5209
[ "@merveenoyan Do you want different files to be splits or configurations?\r\n\r\nFrom [what you specified in `Readme.md`](https://huggingface.co/datasets/inria-soda/tabular-benchmark/commit/fb4575853772c62a20203bdd6cc0202f5db4ce4e) I hypothesize that you want to have 4 **configs** corresponding to directories: `\"c...
null
5,209
false
Refactor CI hub fixtures to use monkeypatch instead of patch
Minor refactoring of CI to use `pytest` `monkeypatch` instead of `unittest` `patch`.
https://github.com/huggingface/datasets/pull/5208
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5208", "html_url": "https://github.com/huggingface/datasets/pull/5208", "diff_url": "https://github.com/huggingface/datasets/pull/5208.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5208.patch", "merged_at": "2022-11-08T06:49...
5,208
true
Connection error of the HuggingFace's dataset Hub due to SSLError with proxy
### Describe the bug It's weird. I could not connect normally to the dataset Hub of HuggingFace due to an SSLError in my office. Even when I try to connect using my company's proxy address (e.g., http_proxy and https_proxy), I'm getting the SSLError issue. What should I do to download the dataset stored in Hugg...
https://github.com/huggingface/datasets/issues/5207
[ "Hi ! It looks like an issue with your python environment, can you make sure you're able to run GET requests to https://huggingface.co using `requests` in python ?", "\r\nThanks for your reply. Does this mean that I have to use the `do_dataset `function and the `requests `function to download the dataset from the...
null
5,207
false
Use logging instead of printing to console
### Describe the bug Some logs ([here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L778), [here](https://github.com/huggingface/datasets/blob/4a6e1fe2735505efc7e3a3dbd3e1835da0702575/src/datasets/builder.py#L786), and [here](https://github.com/huggingfa...
https://github.com/huggingface/datasets/issues/5206
[ "Actually upon closer inspection, it is documented in the code that this behavior is intentional, so I'll close this." ]
null
5,206
false
Add missing `DownloadConfig.use_auth_token` value
This PR solves https://github.com/huggingface/datasets/issues/5204 Now the `token` is propagated so that `DownloadConfig.use_auth_token` value is set before trying to download private files from existing datasets in the Hub.
https://github.com/huggingface/datasets/pull/5205
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5205", "html_url": "https://github.com/huggingface/datasets/pull/5205", "diff_url": "https://github.com/huggingface/datasets/pull/5205.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5205.patch", "merged_at": "2022-11-07T16:20...
5,205
true
`push_to_hub` not propagating `token` through `DownloadConfig`
### Describe the bug When trying to upload a new 🤗 Dataset to the Hub via Python, and providing the `token` as a parameter to the `Dataset.push_to_hub` function, it just works for the first time, assuming that the dataset didn't exist before. But when trying to run `Dataset.push_to_hub` again over the same dataset...
https://github.com/huggingface/datasets/issues/5204
[ "#self-assign", "@lhoestq can you close this issue as part of the recent #5205 merge? Thanks 🤗 ", "Thank you :)" ]
null
5,204
false
Update canonical links to Hub links
This PR updates some of the canonical dataset links to their corresponding links on the Hub; closes #5200.
https://github.com/huggingface/datasets/pull/5203
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5203", "html_url": "https://github.com/huggingface/datasets/pull/5203", "diff_url": "https://github.com/huggingface/datasets/pull/5203.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5203.patch", "merged_at": "2022-11-07T18:40...
5,203
true
CI fails after bulk edit of canonical datasets
``` ______ test_get_dataset_config_info[paws-labeled_final-expected_splits2] _______ [gw0] linux -- Python 3.7.15 /opt/hostedtoolcache/Python/3.7.15/x64/bin/python path = 'paws', config_name = 'labeled_final' expected_splits = ['train', 'test', 'validation'] @pytest.mark.parametrize( "path, config...
https://github.com/huggingface/datasets/issues/5202
[ "Fixed by: https://huggingface.co/datasets/paws/discussions/1" ]
null
5,202
false
Do not sort splits in dataset info
I suggest not sorting splits by their names in dataset_info in the README so that they are displayed in the order specified in the loading script. Otherwise the `test` split is displayed first, see this repo: https://huggingface.co/datasets/paws What do you think? But I added sorting in tests to fix CI (for the same datase...
https://github.com/huggingface/datasets/pull/5201
[ "_The documentation is not available anymore as the PR was closed or merged._", "It would be coherent with https://github.com/huggingface/datasets-server/issues/614#issuecomment-1290534153", "I think we started working on this issue nearly at the same time... :sweat_smile: \r\n- CI was fixed with this: https://...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5201", "html_url": "https://github.com/huggingface/datasets/pull/5201", "diff_url": "https://github.com/huggingface/datasets/pull/5201.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5201.patch", "merged_at": "2022-11-04T14:45...
5,201
true
Some links to canonical datasets in the docs are outdated
As we don't have canonical datasets in the github repo anymore, some old links to them don't work. I don't know how many of them there are; I found a link to SuperGlue here: https://huggingface.co/docs/datasets/dataset_script#multiple-configurations, and probably there are more of them. These links should be replaced by li...
https://github.com/huggingface/datasets/issues/5200
[ "Thanks for catching this, I can go through the docs and replace the links to their corresponding datasets on the Hub!" ]
null
5,200
false
Deprecate dummy data generation command
Deprecate the `dummy_data` CLI command.
https://github.com/huggingface/datasets/pull/5199
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5199", "html_url": "https://github.com/huggingface/datasets/pull/5199", "diff_url": "https://github.com/huggingface/datasets/pull/5199.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5199.patch", "merged_at": "2022-11-04T13:59...
5,199
true
Add note about the name of a dataset script
Add a note that a dataset script should have the same name as the repo/dir, a bit related to this issue: https://github.com/huggingface/datasets/issues/5193. Also fixed two minor issues in the audio docs (broken links).
https://github.com/huggingface/datasets/pull/5198
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5198", "html_url": "https://github.com/huggingface/datasets/pull/5198", "diff_url": "https://github.com/huggingface/datasets/pull/5198.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5198.patch", "merged_at": "2022-11-04T12:46...
5,198
true
[zstd] Use max window log size
ZstdDecompressor has a parameter `max_window_size` to limit maximum memory usage when decompressing zstd files. The default `max_window_size` is not enough when files are compressed with the `zstd --ultra` flag. Change `max_window_size` to zstd's maximum window size. NOTE: `zstd.WINDOWLOG_MAX` is the log_2 value of the m...
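A minimal sketch of decompressing with the maximum window size (the file name is a placeholder):
```python
import zstandard as zstd

# 2 ** WINDOWLOG_MAX bytes is the largest window zstd supports; the
# decompressor's default limit rejects files made with `zstd --ultra`.
dctx = zstd.ZstdDecompressor(max_window_size=2 ** zstd.WINDOWLOG_MAX)

with open("data.jsonl.zst", "rb") as f:  # placeholder file
    with dctx.stream_reader(f) as reader:
        raw = reader.read()
```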
https://github.com/huggingface/datasets/pull/5197
[ "@albertvillanova Please take a review.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5197). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5197", "html_url": "https://github.com/huggingface/datasets/pull/5197", "diff_url": "https://github.com/huggingface/datasets/pull/5197.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5197.patch", "merged_at": null }
5,197
true
Use hfh hf_hub_url function
Small refactoring to use the `hf_hub_url` function from `huggingface_hub`. This PR also creates the `hub` module that will contain all the `huggingface_hub` functionality relevant to `datasets`. This is a necessary stage before implementing the use of the `hfh` caching system (which uses its `hf_hub_url` under the hood...
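For context, a minimal sketch of the helper being adopted (the repo id and filename are hypothetical):
```python
from huggingface_hub import hf_hub_url

# Build the resolved URL for a file inside a dataset repo on the Hub.
url = hf_hub_url(
    repo_id="user/my-dataset",  # hypothetical repo
    filename="data/train.csv",  # hypothetical file
    repo_type="dataset",
)
```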
https://github.com/huggingface/datasets/pull/5196
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5196). All of your documentation changes will be reflected on that endpoint.", "@lhoestq I think we should first agree if `datasets` can introduce the breaking change of ignoring `config.HUB_DATASETS_URL`: some users may have o...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5196", "html_url": "https://github.com/huggingface/datasets/pull/5196", "diff_url": "https://github.com/huggingface/datasets/pull/5196.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5196.patch", "merged_at": "2022-11-09T07:15...
5,196
true
[wip testing docs]
null
https://github.com/huggingface/datasets/pull/5195
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5195). All of your documentation changes will be reflected on that endpoint." ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5195", "html_url": "https://github.com/huggingface/datasets/pull/5195", "diff_url": "https://github.com/huggingface/datasets/pull/5195.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5195.patch", "merged_at": null }
5,195
true
Fix docs about dataset_info in YAML
This PR fixes some misalignment in the docs after we transferred the dataset_info from `dataset_infos.json` to YAML in the dataset card: - #4926 Related to: - #5193
https://github.com/huggingface/datasets/pull/5194
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5194", "html_url": "https://github.com/huggingface/datasets/pull/5194", "diff_url": "https://github.com/huggingface/datasets/pull/5194.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5194.patch", "merged_at": "2022-11-03T13:29...
5,194
true
"One or several metadata. were found, but not in the same directory or in a parent directory"
### Describe the bug
When loading my own dataset, I get an error. Here is my dataset link: https://huggingface.co/datasets/corentinm7/MyoQuant-SDH-Data

And the error after loading with:
```python
from datasets import load_dataset
load_dataset("corentinm7/MyoQuant-SDH-Data")
```
```python
Downlo...
```
https://github.com/huggingface/datasets/issues/5193
[ "Also unrelated but still: https://huggingface.co/docs/datasets/image_dataset#generate-the-dataset\r\n```If your loading script passed the test, you should now have a dataset_infos.json file in your dataset folder.```\r\nIt's not the case anymore as it's now in the readme.md, it was confusing to me", "And here is...
null
5,193
false
Drop labels in Image and Audio folders if files are on different levels in directory or if there is only one label
Will close https://github.com/huggingface/datasets/issues/5153

Drop labels by default (`drop_labels=None`) when:
* there are files on different levels of the directory hierarchy (checked via their path depth)
* all files are in the same directory (= only one label was inferred)

The first one fixes cases like this:
```
r...
```
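A sketch of the resulting user-facing behavior (the `data_dir` is hypothetical):
```python
from datasets import load_dataset

# All files in one directory -> only one label could be inferred,
# so labels are dropped by default (drop_labels=None).
ds = load_dataset("imagefolder", data_dir="path/to/images")

# Label inference can still be forced explicitly:
ds = load_dataset("imagefolder", data_dir="path/to/images", drop_labels=False)
```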
https://github.com/huggingface/datasets/pull/5192
[ "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on that endpoint.", "The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5192). All of your documentation changes will be reflected on...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5192", "html_url": "https://github.com/huggingface/datasets/pull/5192", "diff_url": "https://github.com/huggingface/datasets/pull/5192.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5192.patch", "merged_at": "2022-11-15T16:31...
5,192
true
Make torch.Tensor and spacy models cacheable
Override `Pickler.save` to implement deterministic, lazily registered reduction functions (inspired by https://github.com/uqfoundation/dill/blob/master/dill/_dill.py#L343) for `torch.Tensor` and spaCy models. Fix https://github.com/huggingface/datasets/issues/5170, fix https://github.com/huggingface/datasets/issues/...
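A sketch of the kind of transform this change makes cacheable (the values are illustrative):
```python
import torch
from datasets import Dataset

weights = torch.tensor([0.5, 0.5])
ds = Dataset.from_dict({"x": [[1.0, 2.0], [3.0, 4.0]]})

# The lambda closes over a torch.Tensor; with deterministic reduction its
# hash is stable across sessions, so the mapped result can be reused.
ds = ds.map(lambda ex: {"x": (torch.tensor(ex["x"]) * weights).tolist()})
```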
https://github.com/huggingface/datasets/pull/5191
[ "_The documentation is not available anymore as the PR was closed or merged._" ]
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5191", "html_url": "https://github.com/huggingface/datasets/pull/5191", "diff_url": "https://github.com/huggingface/datasets/pull/5191.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5191.patch", "merged_at": "2022-11-02T17:18...
5,191
true
`path` is `None` when downloading a custom audio dataset from the Hub
### Describe the bug
I've created an [audio dataset](https://huggingface.co/datasets/lewtun/audio-test-push) using the `audiofolder` feature described in the [docs](https://huggingface.co/docs/datasets/audio_dataset#audiofolder) and then pushed it to the Hub. Locally, I can see the `audio.path` feature is of the ...
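A sketch of the observed behavior (assuming the dataset has a `train` split):
```python
from datasets import load_dataset

ds = load_dataset("lewtun/audio-test-push", split="train")
sample = ds[0]["audio"]
print(sample["path"])   # None once loaded from the Hub
print(sample["array"])  # the decoded audio array is still available
```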
https://github.com/huggingface/datasets/issues/5190
[ "Hi! Yes, this is expected behavior - we do this as a security measure to not leak local paths (this info would be useless on other users' machines anyways) and only push audio bytes. \r\n" ]
null
5,190
false
Reduce friction in tabular dataset workflow by eliminating having splits when dataset is loaded
### Feature request
Sorry for the cryptic name, but I'd like to explain using the code itself. When I want to load a specific dataset from a repository (for instance, this one: https://huggingface.co/datasets/inria-soda/tabular-benchmark):
```python
from datasets import load_dataset

dataset = load_dataset("inria-soda/tabular-b...
```
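A sketch contrasting the current and requested behavior (the file name is a placeholder):
```python
from datasets import load_dataset

# Current: even split-less tabular data comes back as a DatasetDict
# with a single "train" split that must be indexed into.
dset = load_dataset("inria-soda/tabular-benchmark", data_files="data.csv")
df = dset["train"].to_pandas()

# Requested: when there is only one de-facto split, return the Dataset
# directly, e.g. load_dataset(...).to_pandas() with no ["train"] step.
```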
https://github.com/huggingface/datasets/issues/5189
[ "I have to admit I'm not a fan of this idea, as this would result in a non-consistent behavior between tabular and non-tabular datasets, which is confusing if done without the context you provided. Instead, we could consider returning a `Dataset` object rather than `DatasetDict` if there is only one split in the ge...
null
5,189
false
add: segmentation guide.
Closes #5181. I have opened a PR on the Hub (https://huggingface.co/datasets/huggingface/documentation-images/discussions/5) to include the images in our central Hub repository. Once the PR is merged, I will edit the image links. I have also prepared a [Colab Notebook](https://colab.research.google.com/drive/1BMDCfOT...
https://github.com/huggingface/datasets/pull/5188
[ "_The documentation is not available anymore as the PR was closed or merged._", "Thanks @osanseviero. Am I good to merge? ", "I would wait for a second approval just in case :) ", "Sure :) ", "Merging since the images have been pushed as LFS files ([PR](https://huggingface.co/datasets/huggingface/documentat...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5188", "html_url": "https://github.com/huggingface/datasets/pull/5188", "diff_url": "https://github.com/huggingface/datasets/pull/5188.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5188.patch", "merged_at": "2022-11-04T18:23...
5,188
true
chore: add notebook links to img cls and obj det.
Closes https://github.com/huggingface/datasets/issues/5182
https://github.com/huggingface/datasets/pull/5187
[ "_The documentation is not available anymore as the PR was closed or merged._", "@nateraw I guess the failing test is unrelated. ", "@sayakpaul Yea failures are unrelated. ", "Alright. Will wait for @osanseviero's take and then merge. ", "FYI @stevhliu ", "@osanseviero @stevhliu @nateraw thank you for yo...
{ "url": "https://api.github.com/repos/huggingface/datasets/pulls/5187", "html_url": "https://github.com/huggingface/datasets/pull/5187", "diff_url": "https://github.com/huggingface/datasets/pull/5187.diff", "patch_url": "https://github.com/huggingface/datasets/pull/5187.patch", "merged_at": "2022-11-03T01:49...
5,187
true
Incorrect error message when Dataset.from_sql fails and sqlalchemy not installed
### Describe the bug
When calling `Dataset.from_sql` (in my case, with sqlite3), it fails with a message
```
ValueError: Please pass `features` or at least one example when writing data
```
when I don't have `sqlalchemy` installed.

### Steps to reproduce the bug
Make a new sqlite db with `sqlite3` and `pandas` from...
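A hypothetical reproduction (the db, table, and column names are placeholders):
```python
import sqlite3

import pandas as pd
from datasets import Dataset

con = sqlite3.connect("test.db")
pd.DataFrame({"text": ["a", "b"], "label": [0, 1]}).to_sql("data", con)
con.close()

# A URI string requires sqlalchemy; without it installed, the error
# surfaced as the ValueError above instead of the ImportError.
ds = Dataset.from_sql("data", "sqlite:///test.db")
```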
https://github.com/huggingface/datasets/issues/5186
[ "Hi! The first `Dataset.from_sql` call also outputs the \"ImportError: Using URI string without sqlalchemy installed.\" message, but you also get \"During handling of the above exception another exception occurred: ...\" after which the ValueError is printed. I agree that this behavior makes it easy to miss the ori...
null
5,186
false
Allow passing a subset of output features to Dataset.map
### Feature request
Currently, `map` does one of two things to the features (if I'm not mistaken):
* when you do not pass `features`, types are assumed to be equal to the input if they can be cast, and inferred otherwise
* when you pass a full specification of `features`, output features are set to this

However, so...
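A sketch of the current requirement that the request wants to relax (the schema is illustrative):
```python
from datasets import ClassLabel, Dataset, Features, Value

ds = Dataset.from_dict({"text": ["good", "bad"]})

# Today, unchanged columns ("text") must be re-specified alongside new ones:
features = Features({
    "text": Value("string"),
    "label": ClassLabel(names=["neg", "pos"]),
})
ds = ds.map(
    lambda ex: {"label": 1 if ex["text"] == "good" else 0},
    features=features,
)
# The request: accept just the new part,
# e.g. features={"label": ClassLabel(names=["neg", "pos"])}.
```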
https://github.com/huggingface/datasets/issues/5185
[]
null
5,185
false
Loading an external dataset in a format similar to conll2003
I'm trying to load a custom dataset into a `Dataset` object. It's similar to conll2003 but with only 2 columns (word, entity). I used the following script:
```python
features = datasets.Features(
    {
        "tokens": datasets.Sequence(datasets.Value("string")),
        "ner_tags": datasets.Sequence(
            datasets.featu...
```
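A sketch of how such a two-column schema is typically completed (the label names are hypothetical):
```python
import datasets

features = datasets.Features(
    {
        "tokens": datasets.Sequence(datasets.Value("string")),
        "ner_tags": datasets.Sequence(
            datasets.features.ClassLabel(names=["O", "B-ENT", "I-ENT"])
        ),
    }
)
```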
https://github.com/huggingface/datasets/issues/5183
[]
null
5,183
false
Add notebook / other resource links to the task-specific data loading guides
Does it make sense to include links to notebooks / scripts that show how to use a dataset for training / fine-tuning a model? For example, here in https://huggingface.co/docs/datasets/image_classification we could include a mention of https://github.com/huggingface/notebooks/blob/main/examples/image_classificatio...
https://github.com/huggingface/datasets/issues/5182
[ "Yea this would be great! We would need an object detection tutorial notebook too if it doesn't already exist there. ", "There is one: https://huggingface.co/docs/datasets/object_detection.\r\n\r\nI will start the work. " ]
null
5,182
false
Add a guide for semantic segmentation
Currently, we have these guides for object detection and image classification: * https://huggingface.co/docs/datasets/object_detection * https://huggingface.co/docs/datasets/image_classification I am proposing adding a similar guide for semantic segmentation. I am happy to contribute a PR for it. Cc: @os...
https://github.com/huggingface/datasets/issues/5181
[ "Sure this sounds great! Would this be pure torchvision, albumentations, or something else?", "I am considering `torchvision` and `albumentations`. Also [works with TensorFlow](https://github.com/deep-diver/segformer-tf-transformers/blob/main/notebooks/TFSegFormer_Finetune.ipynb). \r\n\r\nI am assigning the issue...
null
5,181
false
An example or recommendations for creating large image datasets?
I know that Apache Beam and `datasets` have [some connector utilities](https://huggingface.co/docs/datasets/beam). But it's a little unclear what we mean by "But if you want to run your own Beam pipeline with Dataflow, here is how:". What does that pipeline do? As a user, I was wondering if we have this support for...
https://github.com/huggingface/datasets/issues/5180
[ "The beam utilities allow to prepare a dataset as parquet in your cloud storage. From my perspective this CLI is not super easy to use, but we've been working on a new python API to prepare a dataset in your cloud storage:\r\n```python\r\nfrom datasets import load_dataset_builder\r\n\r\nbuilder = load_dataset_build...
null
5,180
false