| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
Add section in tutorial for IterableDataset | Introduces `IterableDataset` in the tutorial section and shows how to access it. It also adds a brief next-steps section at the end: one path for users who want more explanation, and one for users who want something more practical and want to learn how to preprocess these dataset types. It'll complement the awesome new d... | https://github.com/huggingface/datasets/pull/5485 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5485",
"html_url": "https://github.com/huggingface/datasets/pull/5485",
"diff_url": "https://github.com/huggingface/datasets/pull/5485.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5485.patch",
"merged_at": "2023-02-01T18:08... | 5,485 | true |
Update docs for `nyu_depth_v2` dataset | This PR will fix the issue mentioned in #5461.
cc: @sayakpaul @lhoestq
| https://github.com/huggingface/datasets/pull/5484 | [
"I think I need to create another PR on https://huggingface.co/datasets/huggingface/documentation-images/tree/main/datasets for hosting the images there?",
"_The documentation is not available anymore as the PR was closed or merged._",
"Thanks for the update @awsaf49 !",
"> Thanks a lot for the updates!\r\n> ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5484",
"html_url": "https://github.com/huggingface/datasets/pull/5484",
"diff_url": "https://github.com/huggingface/datasets/pull/5484.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5484.patch",
"merged_at": "2023-02-05T14:15... | 5,484 | true |
Unable to upload dataset | ### Describe the bug
Uploading a simple dataset ends with an exception
### Steps to reproduce the bug
I created a new conda env with python 3.10, pip installed datasets and:
```python
>>> from datasets import load_dataset, load_from_disk, Dataset
>>> d = Dataset.from_dict({"text": ["hello"] * 2})
>>> d.pus... | https://github.com/huggingface/datasets/issues/5483 | [
"Seems to work now, perhaps it was something internal with our university's network."
] | null | 5,483 | false |
Reload features from Parquet metadata | The idea would be to allow this :
```python
ds.to_parquet("my_dataset/ds.parquet")
reloaded = load_dataset("my_dataset")
assert ds.features == reloaded.features
```
And it should also work with Image and Audio types (right now they're reloaded as a dict type)
This can be implemented by storing and reading th... | https://github.com/huggingface/datasets/issues/5482 | [
"I'd be happy to have a look, if nobody else has started working on this yet @lhoestq. \r\n\r\nIt seems to me that for the `arrow` format features are currently attached as metadata [in `datasets.arrow_writer`](https://github.com/huggingface/datasets/blob/5f810b7011a8a4ab077a1847c024d2d9e267b065/src/datasets/arrow_... | null | 5,482 | false |
Load a cached dataset as iterable | The idea would be to allow something like
```python
ds = load_dataset("c4", "en", as_iterable=True)
```
To be used to train models. It would load an IterableDataset from the cached Arrow files.
Cc @stas00
Edit : from the discussions we may load from cache when streaming=True | https://github.com/huggingface/datasets/issues/5481 | [
"Can I work on this issue? I am pretty new to this.",
"Hi ! Sure :) you can comment `#self-assign` to assign yourself to this issue.\r\n\r\nI can give you some pointers to get started:\r\n\r\n`load_dataset` works roughly this way:\r\n1. it instantiate a dataset builder using `load_dataset_builder()`\r\n2. the bui... | null | 5,481 | false |
Select columns of Dataset or DatasetDict | Close #5474 and #5468. | https://github.com/huggingface/datasets/pull/5480 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5480",
"html_url": "https://github.com/huggingface/datasets/pull/5480",
"diff_url": "https://github.com/huggingface/datasets/pull/5480.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5480.patch",
"merged_at": "2023-02-13T09:59... | 5,480 | true |
audiofolder works on local env, but creates empty dataset in a remote one, what dependencies could I be missing/outdated | ### Describe the bug
I'm using a custom audio dataset (400+ audio files) in the correct format for audiofolder. Although loading the dataset with audiofolder works in one local setup, it doesn't in a remote one (it just creates an empty dataset). I have both ffmpeg and libsndfile installed on both computers; what cou... | https://github.com/huggingface/datasets/issues/5479 | [] | null | 5,479 | false |
Tip for recomputing metadata | From this [feedback](https://discuss.huggingface.co/t/nonmatchingsplitssizeserror/30033) on the forum, thought I'd include a tip for recomputing the metadata numbers if it is your own dataset. | https://github.com/huggingface/datasets/pull/5478 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5478",
"html_url": "https://github.com/huggingface/datasets/pull/5478",
"diff_url": "https://github.com/huggingface/datasets/pull/5478.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5478.patch",
"merged_at": "2023-01-30T19:15... | 5,478 | true |
Unpin sqlalchemy once issue is fixed | Once the source issue is fixed:
- pandas-dev/pandas#51015
we should revert the pin introduced in:
- #5476 | https://github.com/huggingface/datasets/issues/5477 | [
"@albertvillanova It looks like that issue has been fixed so I made a PR to unpin sqlalchemy! "
] | null | 5,477 | false |
Pin sqlalchemy | since sqlalchemy update to 2.0.0 the CI started to fail: https://github.com/huggingface/datasets/actions/runs/4023742457/jobs/6914976514
the error comes from pandas: https://github.com/pandas-dev/pandas/issues/51015 | https://github.com/huggingface/datasets/pull/5476 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5476",
"html_url": "https://github.com/huggingface/datasets/pull/5476",
"diff_url": "https://github.com/huggingface/datasets/pull/5476.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5476.patch",
"merged_at": "2023-01-27T11:57... | 5,476 | true |
Dataset scan time is much slower than using native arrow | ### Describe the bug
I'm basically running the same scanning experiment from the tutorials https://huggingface.co/course/chapter5/4?fw=pt except now I'm comparing to a native pyarrow version.
I'm finding that the native pyarrow approach is much faster (2 orders of magnitude). Is there something I'm missing that exp... | https://github.com/huggingface/datasets/issues/5475 | [
"Hi ! In your code you only iterate on the Arrow buffers - you don't actually load the data as python objects. For a fair comparison, you can modify your code using:\r\n```diff\r\n- for _ in range(0, len(table), bsz):\r\n- _ = {k:table[k][_ : _ + bsz] for k in cols}\r\n+ for _ in range(0, len(table)... | null | 5,475 | false |
Column project operation on `datasets.Dataset` | ### Feature request
There is no operation to select a subset of columns of the original dataset. The expected API follows.
```python
a = Dataset.from_dict({
    'int': [0, 1, 2],
'char': ['a', 'b', 'c'],
'none': [None] * 3,
})
b = a.project('int', 'char') # usually, .select()
print(a.column_names) # std... | https://github.com/huggingface/datasets/issues/5474 | [
"Hi ! This would be a nice addition indeed :) This sounds like a duplicate of https://github.com/huggingface/datasets/issues/5468\r\n\r\n> Not sure. Some of my PRs are still open and some do not have any discussions.\r\n\r\nSorry to hear that, feel free to ping me on those PRs"
] | null | 5,474 | false |
Set dev version | null | https://github.com/huggingface/datasets/pull/5473 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5473",
"html_url": "https://github.com/huggingface/datasets/pull/5473",
"diff_url": "https://github.com/huggingface/datasets/pull/5473.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5473.patch",
"merged_at": "2023-01-26T19:38... | 5,473 | true |
Release: 2.9.0 | null | https://github.com/huggingface/datasets/pull/5472 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5472",
"html_url": "https://github.com/huggingface/datasets/pull/5472",
"diff_url": "https://github.com/huggingface/datasets/pull/5472.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5472.patch",
"merged_at": "2023-01-26T19:33... | 5,472 | true |
Add num_test_batches option | `to_tf_dataset` calls can be very costly because of the number of test batches drawn during `_get_output_signature`. The test batches are draw in order to estimate the shapes when creating the tensorflow dataset. This is necessary when the shapes can be irregular, but not in cases when the tensor shapes are the same ac... | https://github.com/huggingface/datasets/pull/5471 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"I thought this issue was resolved in my parallel `to_tf_dataset` PR! I changed the default `num_test_batches` in `_get_output_signature` to 20 and used a test batch size of 1 to maximize variance to detect shorter samples. I think it... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5471",
"html_url": "https://github.com/huggingface/datasets/pull/5471",
"diff_url": "https://github.com/huggingface/datasets/pull/5471.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5471.patch",
"merged_at": "2023-01-27T18:08... | 5,471 | true |
Update dataset card creation | Encourages users to create a dataset card on the Hub directly with the new metadata ui + import dataset card template instead of telling users to manually create and upload one. | https://github.com/huggingface/datasets/pull/5470 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"The CI failure is unrelated to your PR - feel free to merge :)",
"Haha thanks, you read my mind :)",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n##... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5470",
"html_url": "https://github.com/huggingface/datasets/pull/5470",
"diff_url": "https://github.com/huggingface/datasets/pull/5470.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5470.patch",
"merged_at": "2023-01-27T16:20... | 5,470 | true |
Remove deprecated `shard_size` arg from `.push_to_hub()` | The docstrings say that it was supposed to be deprecated since version 2.4.0, can we remove it? | https://github.com/huggingface/datasets/pull/5469 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5469",
"html_url": "https://github.com/huggingface/datasets/pull/5469",
"diff_url": "https://github.com/huggingface/datasets/pull/5469.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5469.patch",
"merged_at": "2023-01-26T17:30... | 5,469 | true |
Allow opposite of remove_columns on Dataset and DatasetDict | ### Feature request
In this blog post https://huggingface.co/blog/audio-datasets, I noticed the following code:
```python
COLUMNS_TO_KEEP = ["text", "audio"]
all_columns = gigaspeech["train"].column_names
columns_to_remove = set(all_columns) - set(COLUMNS_TO_KEEP)
gigaspeech = gigaspeech.remove_columns(column... | https://github.com/huggingface/datasets/issues/5468 | [
"Hi! I agree it would be nice to have a method like that. Instead of `keep_columns`, we can name it `select_columns` to be more aligned with PyArrow's naming convention (`pa.Table.select`).",
"Hi, I am a newbie to open source and would like to contribute. @mariosasko can I take up this issue ?",
"Hey, I also wa... | null | 5,468 | false |
Fix conda command in readme | The [conda forge channel](https://anaconda.org/conda-forge/datasets) is lagging behind (as of right now, only 2.7.1 is available), we should recommend using the [Hugging face channel](https://anaconda.org/HuggingFace/datasets) that we are maintaining
```
conda install -c huggingface datasets
``` | https://github.com/huggingface/datasets/pull/5467 | [
"ah didn't read well - it's all good",
"or maybe it isn't ? `-c huggingface -c conda-forge` installs from HF or from conda-forge ?",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5467",
"html_url": "https://github.com/huggingface/datasets/pull/5467",
"diff_url": "https://github.com/huggingface/datasets/pull/5467.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5467.patch",
"merged_at": null
} | 5,467 | true |
remove pathlib.Path with URIs | Pathlib will convert "//" to "/" which causes retry errors when downloading from cloud storage | https://github.com/huggingface/datasets/pull/5466 | [
"Thanks !\r\n`os.path.join` will use a backslash `\\` on windows which will also fail. You can use this instead in `load_from_disk`:\r\n```python\r\nfrom .filesystems import is_remote_filesystem\r\n\r\nis_local = not is_remote_filesystem(fs)\r\npath_join = os.path.join if is_local else posixpath.join\r\n```",
"Th... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5466",
"html_url": "https://github.com/huggingface/datasets/pull/5466",
"diff_url": "https://github.com/huggingface/datasets/pull/5466.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5466.patch",
"merged_at": "2023-01-26T16:59... | 5,466 | true |
audiofolder creates empty dataset even though the dataset passed in follows the correct structure | ### Describe the bug
The structure of my dataset folder called "my_dataset" is: a `data` folder and a `metadata.csv` file.
The `data` folder consists of all mp3 files, and `metadata.csv` consists of file locations like 'data/...mp3' and transcriptions. There are 400+ mp3 files and corresponding transcriptions for my dataset.
When I run the follo... | https://github.com/huggingface/datasets/issues/5465 | [] | null | 5,465 | false |
NonMatchingChecksumError for hendrycks_test | ### Describe the bug
The checksum of the file has likely changed on the remote host.
### Steps to reproduce the bug
`dataset = nlp.load_dataset("hendrycks_test", "anatomy")`
### Expected behavior
no error thrown
### Environment info
- `datasets` version: 2.2.1
- Platform: macOS-13.1-arm64-arm-64bit
- Pyt... | https://github.com/huggingface/datasets/issues/5464 | [
"Thanks for reporting, @sarahwie.\r\n\r\nPlease note this issue was already fixed in `datasets` 2.6.0 version:\r\n- #5040\r\n\r\nIf you update your `datasets` version, you will be able to load the dataset:\r\n```\r\npip install -U datasets\r\n```",
"Oops, missed that I needed to upgrade. Thanks!"
] | null | 5,464 | false |
Imagefolder docs: mention support of CSV and ZIP | null | https://github.com/huggingface/datasets/pull/5463 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5463",
"html_url": "https://github.com/huggingface/datasets/pull/5463",
"diff_url": "https://github.com/huggingface/datasets/pull/5463.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5463.patch",
"merged_at": "2023-01-25T18:26... | 5,463 | true |
Concatenate on axis=1 with misaligned blocks | Allow to concatenate on axis 1 two tables made of misaligned blocks.
For example if the first table has 2 row blocks of 3 rows each, and the second table has 3 row blocks of 2 rows each.
To do that, I slice the row blocks to re-align the blocks.
Fix https://github.com/huggingface/datasets/issues/5413 | https://github.com/huggingface/datasets/pull/5462 | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5462",
"html_url": "https://github.com/huggingface/datasets/pull/5462",
"diff_url": "https://github.com/huggingface/datasets/pull/5462.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5462.patch",
"merged_at": "2023-01-26T09:27... | 5,462 | true |
Discrepancy in `nyu_depth_v2` dataset | ### Describe the bug
I think there is a discrepancy between depth map of `nyu_depth_v2` dataset [here](https://huggingface.co/docs/datasets/main/en/depth_estimation) and actual depth map. Depth values somehow got **discretized/clipped** resulting in depth maps that are different from actual ones. Here is a side-by-sid... | https://github.com/huggingface/datasets/issues/5461 | [
"Ccing @dwofk (the author of `fast-depth`). \r\n\r\nThanks, @awsaf49 for reporting this. I believe this is because the NYU Depth V2 shipped from `fast-depth` is already preprocessed. \r\n\r\nIf you think it might be better to have the NYU Depth V2 dataset from BTS [here](https://huggingface.co/datasets/sayakpaul/ny... | null | 5,461 | false |
Document that removing all the columns returns an empty document and the num_row is lost | null | https://github.com/huggingface/datasets/pull/5460 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5460",
"html_url": "https://github.com/huggingface/datasets/pull/5460",
"diff_url": "https://github.com/huggingface/datasets/pull/5460.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5460.patch",
"merged_at": "2023-01-25T16:04... | 5,460 | true |
Disable aiohttp requoting of redirection URL | The library `aiohttp` performs a requoting of redirection URLs that unquotes the single quotation mark character: `%27` => `'`
This is a problem for the Hugging Face Hub, which requires the exact URL from the Location header.
Specifically, in the query component of the URL (`https://netloc/path?query`), the value for `re... | https://github.com/huggingface/datasets/pull/5459 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"Comment by @lhoestq:\r\n> Do you think we need this in `datasets` if it's fixed on the moon landing side ? In the aiohttp doc they consider those symbols as \"non-safe\" ",
"The lib `requests` does not perform that requote on redir... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5459",
"html_url": "https://github.com/huggingface/datasets/pull/5459",
"diff_url": "https://github.com/huggingface/datasets/pull/5459.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5459.patch",
"merged_at": "2023-01-31T08:37... | 5,459 | true |
slice split while streaming | ### Describe the bug
When using the `load_dataset` function with streaming set to True, slicing splits is apparently not supported.
Did I miss this in the documentation?
### Steps to reproduce the bug
`load_dataset("lhoestq/demo1",revision=None, streaming=True, split="train[:3]")`
causes ValueError: Bad split:... | https://github.com/huggingface/datasets/issues/5458 | [
"Hi! Yes, that's correct. When `streaming` is `True`, only split names can be specified as `split`, and for slicing, you have to use `.skip`/`.take` instead.\r\n\r\nE.g. \r\n`load_dataset(\"lhoestq/demo1\",revision=None, streaming=True, split=\"train[:3]\")`\r\n\r\nrewritten with `.skip`/`.take`:\r\n`load_dataset(\... | null | 5,458 | false |
prebuilt dataset relies on `downloads/extracted` | ### Describe the bug
I pre-built the dataset:
```
python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
```
and it can be used just fine.
now I wipe out `downloads/extracted` and it no longer works.
```
rm -r ~/.cache/huggingface... | https://github.com/huggingface/datasets/issues/5457 | [
"Hi! \r\n\r\nThis issue is due to our audio/image datasets not being self-contained. This allows us to save disk space (files are written only once) but also leads to the issues like this one. We plan to make all our datasets self-contained in Datasets 3.0.\r\n\r\nIn the meantime, you can run the following map to e... | null | 5,457 | false |
feat: tqdm for `to_parquet` | As described in #5418
I also noticed that the `to_json` function supports multiple workers whereas `to_parquet` doesn't. Is that not possible or not needed with Parquet, or has it just not been implemented yet?
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5456",
"html_url": "https://github.com/huggingface/datasets/pull/5456",
"diff_url": "https://github.com/huggingface/datasets/pull/5456.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5456.patch",
"merged_at": "2023-01-24T11:17... | 5,456 | true |
Single TQDM bar in multi-proc map | Use the "shard generator approach with periodic progress updates" (used in `save_to_disk` and multi-proc `load_dataset`) in `Dataset.map` to enable having a single TQDM progress bar in the multi-proc mode.
Closes https://github.com/huggingface/datasets/issues/771, closes https://github.com/huggingface/datasets/issue... | https://github.com/huggingface/datasets/pull/5455 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5455",
"html_url": "https://github.com/huggingface/datasets/pull/5455",
"diff_url": "https://github.com/huggingface/datasets/pull/5455.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5455.patch",
"merged_at": "2023-02-13T20:16... | 5,455 | true |
Save and resume the state of a DataLoader | It would be nice when using `datasets` with a PyTorch DataLoader to be able to resume a training from a DataLoader state (e.g. to resume a training that crashed)
What I have in mind (but lmk if you have other ideas or comments):
For map-style datasets, this requires to have a PyTorch Sampler state that can be sav... | https://github.com/huggingface/datasets/issues/5454 | [
"Something that'd be nice to have is \"manual update of state\". One of the learning from training LLMs is the ability to skip some batches whenever we notice huge spike might be handy.",
"Your outline spec is very sound and clear, @lhoestq - thank you!\r\n\r\n@thomasw21, indeed that would be a wonderful extra fe... | null | 5,454 | false |
Fix base directory while extracting insecure TAR files | This PR fixes the extraction of insecure TAR files by changing the base path against which TAR members are compared:
- from: "."
- to: `output_path`
This PR also adds tests for extracting insecure TAR files.
Related to:
- #5441
- #5452
@stas00 please note this PR addresses just one of the issues you pointe... | https://github.com/huggingface/datasets/pull/5453 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5453",
"html_url": "https://github.com/huggingface/datasets/pull/5453",
"diff_url": "https://github.com/huggingface/datasets/pull/5453.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5453.patch",
"merged_at": "2023-01-23T10:10... | 5,453 | true |
Swap log messages for symbolic/hard links in tar extractor | The log messages do not match their if-condition. This PR swaps them.
Found while investigating:
- #5441
CC: @lhoestq | https://github.com/huggingface/datasets/pull/5452 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5452",
"html_url": "https://github.com/huggingface/datasets/pull/5452",
"diff_url": "https://github.com/huggingface/datasets/pull/5452.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5452.patch",
"merged_at": "2023-01-23T08:31... | 5,452 | true |
ImageFolder BadZipFile: Bad offset for central directory | ### Describe the bug
I'm getting the following exception:
```
lib/python3.10/zipfile.py:1353 in _RealGetContents
   1350            # self.start_dir:  Position of start of central directory ...
"Hi ! Could you share the full stack trace ? Which dataset did you try to load ?",
"The `BadZipFile` error means the ZIP file is corrupted, so I'm closing this issue as it's not directly related to `datasets`."
] | null | 5,451 | false |
to_tf_dataset with a TF collator causes bizarrely persistent slowdown | ### Describe the bug
This will make more sense if you take a look at [a Colab notebook that reproduces this issue.](https://colab.research.google.com/drive/1rxyeciQFWJTI0WrZ5aojp4Ls1ut18fNH?usp=sharing)
Briefly, there are several datasets that, when you iterate over them with `to_tf_dataset` **and** a data colla... | https://github.com/huggingface/datasets/issues/5450 | [
"wtf",
"Couldn't find what's causing this, this will need more investigation",
"A possible hint: The function it seems to be spending a lot of time in (when iterating over the original dataset) is `_get_mp` in the PIL JPEG decoder: \r\n with the **HF** datasets section.
For example, if I have **50GB** on my **Onedrive*... | https://github.com/huggingface/datasets/issues/5442 | [
"Hi! \r\n\r\nWe use [`fsspec`](https://github.com/fsspec/filesystem_spec) to integrate with storage providers. You can find more info (and the usage examples) in [our docs](https://huggingface.co/docs/datasets/v2.8.0/filesystems#download-and-prepare-a-dataset-into-a-cloud-storage).\r\n\r\n[`gdrivefs`](https://githu... | null | 5,442 | false |
resolving a weird tar extract issue | ok, every so often, I have been getting a strange failure on dataset install:
```
$ python -c 'import sys; from datasets import load_dataset; ds=load_dataset(sys.argv[1])' HuggingFaceM4/general-pmd-synthetic-testing
No config specified, defaulting to: general-pmd-synthetic-testing/100.unique
Downloading and prep... | https://github.com/huggingface/datasets/pull/5441 | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5441",
"html_url": "https://github.com/huggingface/datasets/pull/5441",
"diff_url": "https://github.com/huggingface/datasets/pull/5441.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5441.patch",
"merged_at": null
} | 5,441 | true |
Fix documentation about batch samplers | null | https://github.com/huggingface/datasets/pull/5440 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5440",
"html_url": "https://github.com/huggingface/datasets/pull/5440",
"diff_url": "https://github.com/huggingface/datasets/pull/5440.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5440.patch",
"merged_at": "2023-01-18T17:50... | 5,440 | true |
[dataset request] Add Common Voice 12.0 | ### Feature request
Please add the Common Voice 12.0 dataset. Apart from English, a significant amount of audio data has been added to the other minor-language datasets.
### Motivation
The dataset link:
https://commonvoice.mozilla.org/en/datasets
| https://github.com/huggingface/datasets/issues/5439 | [
"@polinaeterna any tentative date on when the Common Voice 12.0 dataset will be added ?"
] | null | 5,439 | false |
Update actions/checkout in CD Conda release | This PR updates the "checkout" GitHub Action to its latest version, as previous ones are deprecated: https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead-of-node12/ | https://github.com/huggingface/datasets/pull/5438 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5438",
"html_url": "https://github.com/huggingface/datasets/pull/5438",
"diff_url": "https://github.com/huggingface/datasets/pull/5438.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5438.patch",
"merged_at": "2023-01-18T13:42... | 5,438 | true |
Can't load png dataset with 4 channel (RGBA) | I try to create a dataset which contains about 9000 png images 64x64 in size, and they are all 4-channel (RGBA). When trying to use load_dataset(), a dataset is created from only 2 images. What exactly interferes I cannot understand. | https://github.com/huggingface/datasets/issues/5437 | ... | null | 5,437 | false |
... | ... for extra speed
* Updates `actions/checkout` to `v3` (note that `v2` is [deprecated](https://github.blog/changelog/2022-09-22-github-actions-all-actions-will-begin-running-on-node16-instead... | https://github.com/huggingface/datasets/pull/5436 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5436",
"html_url": "https://github.com/huggingface/datasets/pull/5436",
"diff_url": "https://github.com/huggingface/datasets/pull/5436.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5436.patch",
"merged_at": "2023-01-18T06:29... | 5,436 | true |
Wrong statement in "Load a Dataset in Streaming mode" leads to data leakage | ### Describe the bug
In the [Split your dataset with take and skip](https://huggingface.co/docs/datasets/v1.10.2/dataset_streaming.html#split-your-dataset-with-take-and-skip), it states:
> Using take (or skip) prevents future calls to shuffle from shuffling the dataset shards order, otherwise the taken examples cou... | https://github.com/huggingface/datasets/issues/5435 | [
"Just for your information, Tensorflow confirmed this issue [here.](https://github.com/tensorflow/tensorflow/issues/59279)",
"Thanks for reporting, @HaoyuYang59.\r\n\r\nPlease note that these are different \"dataset\" objects: our docs refer to Hugging Face `datasets.Dataset` and not to TensorFlow `tf.data.Datase... | null | 5,435 | false |
sample_dataset module not found | null | https://github.com/huggingface/datasets/issues/5434 | [
"Hi! Can you describe what the actual error is?",
"working on the setfit example script\r\n\r\n from setfit import SetFitModel, SetFitTrainer, sample_dataset\r\n\r\nImportError: cannot import name 'sample_dataset' from 'setfit' (C:\\Python\\Python38\\lib\\site-packages\\setfit\\__init__.py)\r\n\r\n apart from t... | null | 5,434 | false |
Support latest Docker image in CI benchmarks | Once we find out the root cause of:
- #5431
we should revert the temporary pin on the Docker image version introduced by:
- #5432 | https://github.com/huggingface/datasets/issues/5433 | [
"Sorry, it was us:[^1] https://github.com/iterative/cml/pull/1317 & https://github.com/iterative/cml/issues/1319#issuecomment-1385599559; should be fixed with [v0.18.17](https://github.com/iterative/cml/releases/tag/v0.18.17).\r\n\r\n[^1]: More or less, see https://github.com/yargs/yargs/issues/873.",
"Opened htt... | null | 5,433 | false |
Fix CI benchmarks by temporarily pinning Docker image version | This PR fixes CI benchmarks, by temporarily pinning Docker image version, instead of "latest" tag.
It also updates the deprecated `cml-send-comment` command, using `cml comment create` instead.
Fix #5431. | https://github.com/huggingface/datasets/pull/5432 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5432",
"html_url": "https://github.com/huggingface/datasets/pull/5432",
"diff_url": "https://github.com/huggingface/datasets/pull/5432.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5432.patch",
"merged_at": "2023-01-17T08:51... | 5,432 | true |
CI benchmarks are broken: Unknown arguments: runnerPath, path | Our CI benchmarks are broken, raising `Unknown arguments` error: https://github.com/huggingface/datasets/actions/runs/3932397079/jobs/6724905161
```
Unknown arguments: runnerPath, path
```
Stack trace:
```
100%|██████████| 500/500 [00:01<00:00, 338.98ba/s]
Updating lock file 'dvc.lock'
To track the changes ... | https://github.com/huggingface/datasets/issues/5431 | [] | null | 5,431 | false |
Support Apache Beam >= 2.44.0 | Once we find out the root cause of:
- #5426
we should revert the temporary pin on apache-beam introduced by:
- #5429 | https://github.com/huggingface/datasets/issues/5430 | [
"Some of the shard files now have 0 number of rows.\r\n\r\nWe have opened an issue in the Apache Beam repo:\r\n- https://github.com/apache/beam/issues/25041"
] | null | 5,430 | false |
Fix CI by temporarily pinning apache-beam < 2.44.0 | Temporarily pin apache-beam < 2.44.0
Fix #5426. | https://github.com/huggingface/datasets/pull/5429 | [
"_The documentation is not available anymore as the PR was closed or merged._"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5429",
"html_url": "https://github.com/huggingface/datasets/pull/5429",
"diff_url": "https://github.com/huggingface/datasets/pull/5429.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5429.patch",
"merged_at": "2023-01-16T16:49... | 5,429 | true |
Load/Save FAISS index using fsspec | ### Feature request
From what I understand, `faiss` already supports this: [link](https://github.com/facebookresearch/faiss/wiki/Index-IO,-cloning-and-hyper-parameter-tuning#generic-io-support)
I would like to use a stream as input to `Dataset.load_faiss_index` and `Dataset.save_faiss_index`.
### Motivation
In... | https://github.com/huggingface/datasets/issues/5428 | [
"Hi! Sure, feel free to submit a PR. Maybe if we want to be consistent with the existing API, it would be cleaner to directly add support for `fsspec` paths in `Dataset.load_faiss_index`/`Dataset.save_faiss_index` in the same manner as it was done in `Dataset.load_from_disk`/`Dataset.save_to_disk`.",
"That's a gr... | null | 5,428 | false |
Unable to download dataset id_clickbait | ### Describe the bug
I tried to download the dataset `id_clickbait`, but received this error message.
```
FileNotFoundError: Couldn't find file at https://md-datasets-cache-zipfiles-prod.s3.eu-west-1.amazonaws.com/k42j7x2kpn-1.zip
```
When i open the link using browser, i got this XML data.
```xml
<?xml versi... | https://github.com/huggingface/datasets/issues/5427 | [
"Thanks for reporting, @ilos-vigil.\r\n\r\nWe have transferred this issue to the corresponding dataset on the Hugging Face Hub: https://huggingface.co/datasets/id_clickbait/discussions/1 "
] | null | 5,427 | false |
CI tests are broken: SchemaInferenceError | CI is broken, raising a `SchemaInferenceError`: see https://github.com/huggingface/datasets/actions/runs/3930901593/jobs/6721492004
```
FAILED tests/test_beam.py::BeamBuilderTest::test_download_and_prepare_sharded - datasets.arrow_writer.SchemaInferenceError: Please pass `features` or at least one example when writin... | https://github.com/huggingface/datasets/issues/5426 | [] | null | 5,426 | false |
Sort on multiple keys with datasets.Dataset.sort() | ### Feature request
From discussion on forum: https://discuss.huggingface.co/t/datasets-dataset-sort-does-not-preserve-ordering/29065/1
`sort()` does not preserve ordering, and it does not support sorting on multiple columns, nor a key function.
The suggested solution:
> ... having something similar to panda... | https://github.com/huggingface/datasets/issues/5425 | [
"Hi! \r\n\r\n`Dataset.sort` calls `df.sort_values` internally, and `df.sort_values` brings all the \"sort\" columns in memory, so sorting on multiple keys could be very expensive. This makes me think that maybe we can replace `df.sort_values` with `pyarrow.compute.sort_indices` - the latter can also sort on multipl... | null | 5,425 | false |
When applying `ReadInstruction` to custom load it's not DatasetDict but list of Dataset? | ### Describe the bug
I am loading datasets from custom `tsv` files stored locally and applying split instructions for each split. The ReadInstruction is being applied correctly, but I was expecting a `DatasetDict` and instead I get a list of `Dataset` objects.
### Steps to reproduce the bug
Steps to reproduc... | https://github.com/huggingface/datasets/issues/5424 | [
"Hi! You can get a `DatasetDict` if you pass a dictionary with read instructions as follows:\r\n```python\r\ninstructions = [\r\n ReadInstruction(split_name=\"train\", from_=0, to=10, unit='%', rounding='closest'),\r\n ReadInstruction(split_name=\"dev\", from_=0, to=10, unit='%', rounding='closest'),\r\n R... | null | 5,424 | false |
Datasets load error for saved github issues | ### Describe the bug
Loading a previously downloaded & saved dataset as described in the HuggingFace course:
`issues_dataset = load_dataset("json", data_files="issues/datasets-issues.jsonl", split="train")`
Gives this error:
datasets.builder.DatasetGenerationError: An error occurred while generating the dataset... | https://github.com/huggingface/datasets/issues/5422 | [
"I can confirm that the error exists!\r\nI'm trying to read 3 parquet files locally:\r\n```python\r\nfrom datasets import load_dataset, Features, Value, ClassLabel\r\n\r\nreview_dataset = load_dataset(\r\n \"parquet\",\r\n data_files={\r\n \"train\": os.path.join(sentiment_analysis_data_path, \"train.p... | null | 5,422 | false |
Support case-insensitive Hub dataset name in load_dataset | ### Feature request
The dataset name on the Hub is case-insensitive (see https://github.com/huggingface/moon-landing/pull/2399, internal issue), i.e., https://huggingface.co/datasets/GLUE redirects to https://huggingface.co/datasets/glue.
Ideally, we could load the glue dataset using the following:
```
from d... | https://github.com/huggingface/datasets/issues/5421 | [
"Closing as case-insensitivity should be only for URL redirection on the Hub. In the APIs, we will only support the canonical name (https://github.com/huggingface/moon-landing/pull/2399#issuecomment-1382085611)"
] | null | 5,421 | false |
ci: π‘ remove two obsolete issue templates | add-dataset is not needed anymore since the "canonical" datasets are on the Hub. And dataset-viewer is managed within the datasets-server project.
See https://github.com/huggingface/datasets/issues/new/choose
<img width="1245" alt="Capture dβeΜcran 2023-01-13 aΜ 13 59 58" src="https://user-images.githubuserconten... | https://github.com/huggingface/datasets/pull/5420 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5420",
"html_url": "https://github.com/huggingface/datasets/pull/5420",
"diff_url": "https://github.com/huggingface/datasets/pull/5420.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5420.patch",
"merged_at": "2023-01-13T13:29... | 5,420 | true |
label_column='labels' in datasets.TextClassification and 'label' or 'label_ids' in transformers.DataColator | ### Describe the bug
When preparing a dataset for a task using `datasets.TextClassification`, the output feature is named `labels`. When preparing the trainer using `transformers.DataCollator`, the default column name is `label` for a binary problem or `label_ids` for a multi-class problem.
It is required to rename the column... | https://github.com/huggingface/datasets/issues/5419 | [
"Hi! Thanks for pointing out this inconsistency. Changing the default value at this point is probably not worth it, considering we've started discussing the state of the task API internally - we will most likely deprecate the current one and replace it with a more robust solution that relies on the `train_eval_inde... | null | 5,419 | false |
Add ProgressBar for `to_parquet` | ### Feature request
Add a progress bar for `Dataset.to_parquet`, similar to how `to_json` works.
### Motivation
It's a bit frustrating to not know how long a dataset will take to write to file and if it's stuck or not without a progress bar
### Your contribution
Sure I can help if needed | https://github.com/huggingface/datasets/issues/5418 | [
"Thanks for your proposal, @zanussbaum. Yes, I agree that would definitely be a nice feature to have!",
"@albertvillanova Iβm happy to make a quick PR for the feature! let me know ",
"That would be awesome ! You can comment `#self-assign` to assign you to this issue and open a PR :) Will be happy to review",
... | null | 5,418 | false |
Fix RuntimeError: Sharding is ambiguous for this dataset | This PR fixes the RuntimeError: Sharding is ambiguous for this dataset.
The error for ambiguous sharding will be raised only if num_proc > 1.
Fix #5415, fix #5414.
Fix https://huggingface.co/datasets/ami/discussions/3. | https://github.com/huggingface/datasets/pull/5416 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"By the way, do we know how many datasets are impacted by this issue?\r\n\r\nMaybe we should do a patch release with this fix.",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated be... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5416",
"html_url": "https://github.com/huggingface/datasets/pull/5416",
"diff_url": "https://github.com/huggingface/datasets/pull/5416.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5416.patch",
"merged_at": "2023-01-18T14:09... | 5,416 | true |
RuntimeError: Sharding is ambiguous for this dataset | ### Describe the bug
When loading some datasets, a RuntimeError is raised.
For example, for "ami" dataset: https://huggingface.co/datasets/ami/discussions/3
```
.../huggingface/datasets/src/datasets/builder.py in _prepare_split(self, split_generator, check_duplicate_keys, file_format, num_proc, max_shard_size)
... | https://github.com/huggingface/datasets/issues/5415 | [] | null | 5,415 | false |
Sharding error with Multilingual LibriSpeech | ### Describe the bug
Loading the German Multilingual LibriSpeech dataset results in a RuntimeError regarding sharding with the following stacktrace:
```
Downloading and preparing dataset multilingual_librispeech/german to /home/nithin/datadrive/cache/huggingface/datasets/facebook___multilingual_librispeech/german/... | https://github.com/huggingface/datasets/issues/5414 | [
"Thanks for reporting, @Nithin-Holla.\r\n\r\nThis is a known issue for multiple datasets and we are investigating it:\r\n- See e.g.: https://huggingface.co/datasets/ami/discussions/3",
"Main issue:\r\n- #5415",
"@albertvillanova Thanks! As a workaround for now, can I use the dataset in streaming mode?",
"Yes,... | null | 5,414 | false |
concatenate_datasets fails when two dataset with shards > 1 and unequal shard numbers | ### Describe the bug
When using `concatenate_datasets([dataset1, dataset2], axis = 1)` to concatenate two datasets with shards > 1, it fails:
```
File "/home/xzg/anaconda3/envs/tri-transfer/lib/python3.9/site-packages/datasets/combine.py", line 182, in concatenate_datasets
return _concatenate_map_style_data... | https://github.com/huggingface/datasets/issues/5413 | [
"Hi ! Thanks for reporting :)\r\n\r\nI managed to reproduce the hub using\r\n```python\r\n\r\nfrom datasets import concatenate_datasets, Dataset, load_from_disk\r\n\r\nDataset.from_dict({\"a\": range(9)}).save_to_disk(\"tmp/ds1\")\r\nds1 = load_from_disk(\"tmp/ds1\")\r\nds1 = concatenate_datasets([ds1, ds1])\r\n\r\... | null | 5,413 | false |
load_dataset() cannot find dataset_info.json with multiple training runs in parallel | ### Describe the bug
I have a custom local dataset in JSON form. I am trying to do multiple training runs in parallel. The first training run runs with no issue. However, when I start another run on another GPU, the following code throws this error.
If there is a workaround to ignore the cache I think that would ... | https://github.com/huggingface/datasets/issues/5412 | [
"Hi ! It fails because the dataset is already being prepared by your first run. I'd encourage you to prepare your dataset before using it for multiple trainings.\r\n\r\nYou can also specify another cache directory by passing `cache_dir=` to `load_dataset()`.",
"Thank you! What do you mean by prepare it beforehand... | null | 5,412 | false |
Update docs of S3 filesystem with async aiobotocore | [s3fs has migrated to all async calls](https://github.com/fsspec/s3fs/commit/0de2c6fb3d87c08ea694de96dca0d0834034f8bf).
Updates the documentation to use `AioSession` with s3fs, both for the download manager and for working with datasets | https://github.com/huggingface/datasets/pull/5411 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5411",
"html_url": "https://github.com/huggingface/datasets/pull/5411",
"diff_url": "https://github.com/huggingface/datasets/pull/5411.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5411.patch",
"merged_at": "2023-01-18T11:12... | 5,411 | true |
Map-style Dataset to IterableDataset | Added `ds.to_iterable()` to get an iterable dataset from a map-style arrow dataset.
It also has a `num_shards` argument to split the dataset before converting to an iterable dataset. Sharding is important to enable efficient shuffling and parallel loading of iterable datasets.
TODO:
- [x] tests
- [x] docs
Fi... | https://github.com/huggingface/datasets/pull/5410 | [
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | read_batch_formatted_as_numpy after write_flattened_sequence | read_batch_formatted_a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5410",
"html_url": "https://github.com/huggingface/datasets/pull/5410",
"diff_url": "https://github.com/huggingface/datasets/pull/5410.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5410.patch",
"merged_at": "2023-02-01T16:36... | 5,410 | true |
Fix deprecation warning when use_auth_token passed to download_and_prepare | The `DatasetBuilder.download_and_prepare` argument `use_auth_token` was deprecated in:
- #5302
However, `use_auth_token` is still passed to `download_and_prepare` in our built-in `io` readers (csv, json, parquet,...).
This PR fixes it, so that no deprecation warning is raised.
Fix #5407. | https://github.com/huggingface/datasets/pull/5409 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5409",
"html_url": "https://github.com/huggingface/datasets/pull/5409",
"diff_url": "https://github.com/huggingface/datasets/pull/5409.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5409.patch",
"merged_at": "2023-01-06T10:59... | 5,409 | true |
dataset map function could not be hash properly | ### Describe the bug
I followed the [blog post](https://huggingface.co/blog/fine-tune-whisper#building-a-demo) to fine-tune a Cantonese transcription model.
When using the map function to prepare the dataset, the following warning pops up:
`common_voice = common_voice.map(prepare_dataset,
remove_... | https://github.com/huggingface/datasets/issues/5408 | [
"Hi ! On macos I tried with\r\n- py 3.9.11\r\n- datasets 2.8.0\r\n- transformers 4.25.1\r\n- dill 0.3.4\r\n\r\nand I was able to hash `prepare_dataset` correctly:\r\n```python\r\nfrom datasets.fingerprint import Hasher\r\nHasher.hash(prepare_dataset)\r\n```\r\n\r\nWhat version of transformers do you have ? Can you ... | null | 5,408 | false |
Datasets.from_sql() generates deprecation warning | ### Describe the bug
Calling `Datasets.from_sql()` generates a warning:
`.../site-packages/datasets/builder.py:712: FutureWarning: 'use_auth_token' was deprecated in version 2.7.1 and will be removed in 3.0.0. Pass 'use_auth_token' to the initializer/'load_dataset_builder' instead.`
### Steps to reproduce the ... | https://github.com/huggingface/datasets/issues/5407 | [
"Thanks for reporting @msummerfield. We are fixing it."
] | null | 5,407 | false |
[2.6.1][2.7.0] Upgrade `datasets` to fix `TypeError: can only concatenate str (not "int") to str` | `datasets` 2.6.1 and 2.7.0 started to stop supporting datasets like IMDB, ConLL or MNIST datasets.
When loading a dataset using 2.6.1 or 2.7.0, you may this error when loading certain datasets:
```python
TypeError: can only concatenate str (not "int") to str
```
This is because we started to update the metadat... | https://github.com/huggingface/datasets/issues/5406 | [
"I still get this error on 2.9.0\r\n<img width=\"1925\" alt=\"image\" src=\"https://user-images.githubusercontent.com/7208470/215597359-2f253c76-c472-4612-8099-d3a74d16eb29.png\">\r\n",
"Hi ! I just tested locally and or colab and it works fine for 2.9 on `sst2`.\r\n\r\nAlso the code that is shown in your stack t... | null | 5,406 | false |
size_in_bytes the same for all splits | ### Describe the bug
Hi, it looks like whenever you pull a dataset and get size_in_bytes, it returns the same size for all splits (and that size is the combined size of all splits). It seems like this shouldn't be the intended behavior since it is misleading. Here's an example:
```
>>> from datasets import load_da... | https://github.com/huggingface/datasets/issues/5405 | [
"Hi @Breakend,\r\n\r\nIndeed, the attribute `size_in_bytes` refers to the size of the entire dataset configuration, for all splits (size of downloaded files + Arrow files), not the specific split.\r\nThis is also the case for `download_size` (downloaded files) and `dataset_size` (Arrow files).\r\n\r\nThe size of th... | null | 5,405 | false |
Better integration of BIG-bench | ### Feature request
Ideally, it would be nice to have a maintained PyPI package for `bigbench`.
### Motivation
We'd like to allow anyone to access, explore and use any task.
### Your contribution
@lhoestq has opened an issue in their repo:
- https://github.com/google/BIG-bench/issues/906 | https://github.com/huggingface/datasets/issues/5404 | [
"Hi, I made my version : https://huggingface.co/datasets/tasksource/bigbench"
] | null | 5,404 | false |
Replace one letter import in docs | This PR updates a code example for consistency across the docs based on [feedback from this comment](https://github.com/huggingface/transformers/pull/20925/files/9fda31634d203a47d3212e4e8d43d3267faf9808#r1058769500):
"In terms of style we usually stay away from one-letter imports like this (even if the community use... | https://github.com/huggingface/datasets/pull/5403 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the docs fix for consistency.\r\n> \r\n> Again for consistency, it would be nice to make the same fix across all the docs, e.g.\r\n> \r\n> https://github.com/huggingface/datasets/blob/310cdddd1c43f9658de172b85b6509d07d5e... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5403",
"html_url": "https://github.com/huggingface/datasets/pull/5403",
"diff_url": "https://github.com/huggingface/datasets/pull/5403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5403.patch",
"merged_at": "2023-01-03T14:59... | 5,403 | true |
Missing state.json when creating a cloud dataset using a dataset_builder | ### Describe the bug
I use `load_dataset_builder` to create a builder and run `download_and_prepare` to upload it to S3. However, when trying to load it, `state.json` files are missing. Complete example:
```python
from aiobotocore.session import AioSession as Session
from datasets import load_from_disk, load_da... | https://github.com/huggingface/datasets/issues/5402 | [
"`load_from_disk` must be used on datasets saved using `save_to_disk`: they correspond to fully serialized datasets including their state.\r\n\r\nOn the other hand, `download_and_prepare` just downloads the raw data and convert them to arrow (or parquet if you want). We are working on allowing you to reload a datas... | null | 5,402 | false |
Support Dataset conversion from/to Spark | This PR implements Spark integration by supporting `Dataset` conversion from/to Spark `DataFrame`. | https://github.com/huggingface/datasets/pull/5401 | [
"The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/datasets/pr_5401). All of your documentation changes will be reflected on that endpoint.",
"Cool thanks !\r\n\r\nSpark DataFrame are usually quite big, and I believe here `from_spark` would load everything in the driver node's RAM, which i... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5401",
"html_url": "https://github.com/huggingface/datasets/pull/5401",
"diff_url": "https://github.com/huggingface/datasets/pull/5401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5401.patch",
"merged_at": null
} | 5,401 | true |
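Until a native `from_spark` lands, a sketch of the conversion path available today, with the caveat from the review above that `toPandas()` collects everything into the driver's RAM (the SparkSession setup is illustrative):

```python
from pyspark.sql import SparkSession
from datasets import Dataset

spark = SparkSession.builder.getOrCreate()
spark_df = spark.createDataFrame([("hello",), ("world",)], ["text"])

# Fine for small data; problematic for typical Spark-scale DataFrames,
# since the whole DataFrame is collected on the driver first.
ds = Dataset.from_pandas(spark_df.toPandas())
```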
Support streaming datasets with os.path.exists and Path.exists | Support streaming datasets with `os.path.exists` and `pathlib.Path.exists`. | https://github.com/huggingface/datasets/pull/5400 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5400",
"html_url": "https://github.com/huggingface/datasets/pull/5400",
"diff_url": "https://github.com/huggingface/datasets/pull/5400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5400.patch",
"merged_at": "2023-01-06T10:35... | 5,400 | true |
Got disconnected from remote data host. Retrying in 5sec [2/20] | ### Describe the bug
This happens while trying to upload my image dataset (stored as a CSV file) to huggingface by running the code below. The dataset consists of a little over 100k image-caption pairs.
### Steps to reproduce the bug
```
df = pd.read_csv('x.csv', encoding='utf-8-sig')
features = Features({
'link': Ima... | https://github.com/huggingface/datasets/issues/5399 | [] | null | 5,399 | false |
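A hedged reconstruction of what the truncated snippet above appears to be doing (column names follow the excerpt; the paths are illustrative): cast the path column to `Image` so the files themselves are embedded rather than uploaded as raw strings:

```python
import pandas as pd
from datasets import Dataset, Image

df = pd.DataFrame({"link": ["images/0001.jpg"], "caption": ["a caption"]})
ds = Dataset.from_pandas(df)
# Decode the files behind the paths so push_to_hub uploads image bytes.
ds = ds.cast_column("link", Image())
```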
Unpin pydantic | Once `pydantic` fixes their issue in their 1.10.3 version, unpin it.
See issue:
- #5394
See temporary fix:
- #5395 | https://github.com/huggingface/datasets/issues/5398 | [] | null | 5,398 | false |
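The temporary fix in #5395 pins the test dependency; a hypothetical `setup.py` excerpt of what such a pin looks like (the exact version bound and placement differ in the real diff):

```python
# setup.py (hypothetical excerpt)
TESTS_REQUIRE = [
    "spacy>=3",
    # pydantic 1.10.3 breaks spacy imports; unpin once it is yanked or fixed.
    "pydantic<1.10.3",
]
```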
Unpin pydantic test dependency | Once pydantic-1.10.3 has been yanked, we can unpin it: https://pypi.org/project/pydantic/1.10.3/
See reply by pydantic team https://github.com/pydantic/pydantic/issues/4885#issuecomment-1367819807
```
v1.10.3 has been yanked.
```
in response to spacy request: https://github.com/pydantic/pydantic/issues/4885#issu... | https://github.com/huggingface/datasets/pull/5397 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5397",
"html_url": "https://github.com/huggingface/datasets/pull/5397",
"diff_url": "https://github.com/huggingface/datasets/pull/5397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5397.patch",
"merged_at": "2022-12-30T10:43... | 5,397 | true |
Fix checksum verification | Expected checksum was verified against checksum dict (not checksum). | https://github.com/huggingface/datasets/pull/5396 | [
"Hi ! If I'm not mistaken both `expected_checksums[url]` and `recorded_checksums[url]` are dictionaries with keys \"checksum\" and \"num_bytes\". So we need to check whether `expected_checksums[url] != recorded_checksums[url]` (or simply `expected_checksums[url][\"checksum\"] != recorded_checksums[url][\"checksum\"... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5396",
"html_url": "https://github.com/huggingface/datasets/pull/5396",
"diff_url": "https://github.com/huggingface/datasets/pull/5396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5396.patch",
"merged_at": null
} | 5,396 | true |
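A sketch of the comparison discussed in the review, not the library's actual code: each side maps a URL to a dict with `"checksum"` and `"num_bytes"` keys, so the nested field has to be compared rather than the raw digest:

```python
def verify_checksums(expected_checksums, recorded_checksums):
    # Both arguments: {url: {"checksum": str, "num_bytes": int}}
    bad_urls = [
        url
        for url in expected_checksums
        if expected_checksums[url]["checksum"]
        != recorded_checksums[url]["checksum"]
    ]
    if bad_urls:
        raise ValueError(f"Checksums didn't match for: {bad_urls}")
```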
Temporarily pin pydantic test dependency | Temporarily pin `pydantic` until a permanent solution is found.
Fix #5394. | https://github.com/huggingface/datasets/pull/5395 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5395",
"html_url": "https://github.com/huggingface/datasets/pull/5395",
"diff_url": "https://github.com/huggingface/datasets/pull/5395.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5395.patch",
"merged_at": "2022-12-29T21:00... | 5,395 | true |
CI error: TypeError: dataclass_transform() got an unexpected keyword argument 'field_specifiers' | ### Describe the bug
While installing the dependencies, the CI raises a TypeError:
```
Traceback (most recent call last):
File "/opt/hostedtoolcache/Python/3.7.15/x64/lib/python3.7/runpy.py", line 183, in _run_module_as_main
mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
File "/opt/hoste... | https://github.com/huggingface/datasets/issues/5394 | [
"I still getting the same error :\r\n\r\n`python -m spacy download fr_core_news_lg\r\n`.\r\n`import spacy`",
"@MFatnassi, this issue and the corresponding fix only affect our Continuous Integration testing environment.\r\n\r\nNote that `datasets` does not depend on `spacy`."
] | null | 5,394 | false |
Finish deprecating the fs argument | See #5385 for some discussion on this
The `fs=` arg was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in `2.8.0` (to be removed in `3.0.0`). There are a few other places where the `fs=` arg was still used (functions/methods in `datasets.info` and `datasets.load`). This PR adds a similar beha...
"_The documentation is not available anymore as the PR was closed or merged._",
"> Thanks for the deprecation. Some minor suggested fixes below...\r\n> \r\n> Also note that the corresponding tests should be updated as well.\r\n\r\nThanks for the suggestions/typo fixes. I updated the failing test - passing locall... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5393",
"html_url": "https://github.com/huggingface/datasets/pull/5393",
"diff_url": "https://github.com/huggingface/datasets/pull/5393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5393.patch",
"merged_at": "2023-01-18T12:35... | 5,393 | true |
Fix Colab notebook link | Fix notebook link to open in Colab. | https://github.com/huggingface/datasets/pull/5392 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5392",
"html_url": "https://github.com/huggingface/datasets/pull/5392",
"diff_url": "https://github.com/huggingface/datasets/pull/5392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5392.patch",
"merged_at": "2023-01-03T15:27... | 5,392 | true |
Whisper Event - RuntimeError: The size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1000/1000 [2:52:21<00:00, 10.34s/it] | Done in a VM with a GPU (Ubuntu) following the [Whisper Event - PYTHON](https://github.com/huggingface/community-events/tree/main/whisper-fine-tuning-event#python-script) instructions.
Attempted using [RuntimeError: the size of tensor a (504) must match the size of tensor b (448) at non-singleton dimension 1 100% 1...
"Hey @catswithbats! Super sorry for the late reply! This is happening because there is data with label length (504) that exceeds the model's max length (448). \r\n\r\nThere are two options here:\r\n1. Increase the model's `max_length` parameter: \r\n```python\r\nmodel.config.max_length = 512\r\n```\r\n2. Filter dat... | null | 5,391 | false |
Error when pushing to the CI hub | ### Describe the bug
Note that it's a special case where the Hub URL is "https://hub-ci.huggingface.co", which does not appear if we do the same on the Hub (https://huggingface.co).
The call to `dataset.push_to_hub(...)` fails:
```
Pushing dataset shards to the dataset hub: 100%|██████████████████████████████████... | https://github.com/huggingface/datasets/issues/5390 | [
"Hmmm, git bisect tells me that the behavior is the same since https://github.com/huggingface/datasets/commit/67e65c90e9490810b89ee140da11fdd13c356c9c (3 Oct), i.e. https://github.com/huggingface/datasets/pull/4926",
"Maybe related to the discussions in https://github.com/huggingface/datasets/pull/5196",
"Maybe... | null | 5,390 | false |
Fix link in `load_dataset` docstring | Fix https://github.com/huggingface/datasets/issues/5387, fix https://github.com/huggingface/datasets/issues/4566 | https://github.com/huggingface/datasets/pull/5389 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5389",
"html_url": "https://github.com/huggingface/datasets/pull/5389",
"diff_url": "https://github.com/huggingface/datasets/pull/5389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5389.patch",
"merged_at": "2023-01-24T16:33... | 5,389 | true |
Getting Value Error while loading a dataset.. | ### Describe the bug
I am trying to load a dataset using the Hugging Face Datasets `load_dataset` method. I am getting the value error shown below. Can someone help with this? I am using a Windows laptop and a Google Colab notebook.
```
WARNING:datasets.builder:Using custom data configuration default-a1d9e8eaedd958cd
---... | https://github.com/huggingface/datasets/issues/5388 | [
"Hi! I can't reproduce this error locally (Mac) or in Colab. What version of `datasets` are you using?",
"Hi [mariosasko](https://github.com/mariosasko), the datasets version is '2.8.0'.",
"@valmetisrinivas you get that error because you imported `datasets` (and thus `fsspec`) before installing `zstandard`.\r\n... | null | 5,388 | false |
Missing documentation page : improve-performance | ### Describe the bug
Trying to access https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/cache#improve-performance, the page is missing.
The link is in here : https://huggingface.co/docs/datasets/v2.8.0/en/package_reference/loading_methods#datasets.load_dataset.keep_in_memory
### Steps to reproduce t... | https://github.com/huggingface/datasets/issues/5387 | [
"Hi! Our documentation builder does not support links to sections, hence the bug. This is the link it should point to https://huggingface.co/docs/datasets/v2.8.0/en/cache#improve-performance."
] | null | 5,387 | false |
`max_shard_size` in `datasets.push_to_hub()` breaks with large files | ### Describe the bug
The `max_shard_size` parameter of `Dataset.push_to_hub()` works unreliably with large files, generating shard files that are way past the specified limit.
In my private dataset, which contains unprocessed images of all sizes (up to `~100MB` per file), I've encountered cases where `max_shard_siz... | https://github.com/huggingface/datasets/issues/5386 | [
"Hi! \r\n\r\nThis behavior stems from the fact that we don't always embed image bytes in the underlying arrow table, which can lead to bad size estimation (we use the first 1000 table rows to [estimate](https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.... | null | 5,386 | false |
Is `fs=` deprecated in `load_from_disk()` as well? | ### Describe the bug
The `fs=` argument was deprecated from `Dataset.save_to_disk` and `Dataset.load_from_disk` in favor of automagically figuring it out via fsspec:
https://github.com/huggingface/datasets/blob/9a7272cd4222383a5b932b0083a4cc173fda44e8/src/datasets/arrow_dataset.py#L1339-L1340
Is there a reason the... | https://github.com/huggingface/datasets/issues/5385 | [
"Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR? ",
"> Hi! Yes, we should deprecate the `fs` param here. Would you be interested in submitting a PR?\r\n\r\nYeah I can do that sometime next week. Should the storage_options be a new arg here? Iβll look around for anywh... | null | 5,385 | false |
Handle 0-dim tensors in `cast_to_python_objects` | Fix #5229 | https://github.com/huggingface/datasets/pull/5384 | [
"_The documentation is not available anymore as the PR was closed or merged._",
"<details>\n<summary>Show benchmarks</summary>\n\nPyArrow==6.0.0\n\n<details>\n<summary>Show updated benchmarks!</summary>\n\n### Benchmark: benchmark_array_xd.json\n\n| metric | read_batch_formatted_as_numpy after write_array2d | rea... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/5384",
"html_url": "https://github.com/huggingface/datasets/pull/5384",
"diff_url": "https://github.com/huggingface/datasets/pull/5384.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/5384.patch",
"merged_at": "2023-01-13T16:00... | 5,384 | true |