| column | type | values |
| --- | --- | --- |
| url | stringlengths | 58 to 61 |
| repository_url | stringclasses | 1 value |
| labels_url | stringlengths | 72 to 75 |
| comments_url | stringlengths | 67 to 70 |
| events_url | stringlengths | 65 to 68 |
| html_url | stringlengths | 48 to 51 |
| id | int64 | 600M to 3.43B |
| node_id | stringlengths | 18 to 24 |
| number | int64 | 2 to 7.78k |
| title | stringlengths | 1 to 290 |
| user | dict | |
| labels | listlengths | 0 to 4 |
| state | stringclasses | 2 values |
| locked | bool | 1 class |
| assignee | dict | |
| assignees | listlengths | 0 to 4 |
| milestone | dict | |
| comments | listlengths | 0 to 30 |
| created_at | stringdate | 2020-04-14 18:18:51 to 2025-09-18 08:25:34 |
| updated_at | stringdate | 2020-04-29 09:23:05 to 2025-09-22 08:47:53 |
| closed_at | stringlengths | 20 to 20 |
| author_association | stringclasses | 4 values |
| type | null | |
| active_lock_reason | null | |
| draft | bool | 0 classes |
| pull_request | dict | |
| body | stringlengths | 0 to 228k |
| closed_by | dict | |
| reactions | dict | |
| timeline_url | stringlengths | 67 to 70 |
| performed_via_github_app | null | |
| state_reason | stringclasses | 4 values |
| sub_issues_summary | dict | |
| issue_dependencies_summary | dict | |
| is_pull_request | bool | 1 class |
https://api.github.com/repos/huggingface/datasets/issues/2195
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2195/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2195/comments
https://api.github.com/repos/huggingface/datasets/issues/2195/events
https://github.com/huggingface/datasets/issues/2195
854,070,194
MDU6SXNzdWU4NTQwNzAxOTQ=
2,195
KeyError: '_indices_files' in `arrow_dataset.py`
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "Thanks for reporting @samsontmr.\r\n\r\nIt seems a backward compatibility issue...", "Thanks @samsontmr this should be fixed on master now\r\n\r\nFeel free to reopen if you're still having issues" ]
2021-04-09T01:37:12Z
2021-04-09T09:55:09Z
2021-04-09T09:54:39Z
NONE
null
null
null
null
After pulling the latest master, I'm getting a crash when `load_from_disk` tries to load my local dataset.

Trace:
```
Traceback (most recent call last):
  File "load_data.py", line 11, in <module>
    dataset = load_from_disk(SRC)
  File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/load.py", line 784, in load_from_disk
    return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
  File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/dataset_dict.py", line 692, in load_from_disk
    dataset_dict[k] = Dataset.load_from_disk(dataset_dict_split_path, fs, keep_in_memory=keep_in_memory)
  File "/opt/conda/envs/py38/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 634, in load_from_disk
    if state["_indices_files"]:
KeyError: '_indices_files'
```

I believe this is the line causing the error since there may not be a `_indices_files` key in the older versions: https://github.com/huggingface/datasets/blob/b70141e3c5149430951773aaa0155555c5fb3e76/src/datasets/arrow_dataset.py#L634

May I suggest using `state.get()` instead of directly indexing the dictionary? @lhoestq
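A minimal sketch of the suggested `state.get()` fix as a standalone snippet; the example state dict is illustrative and this is not the patch that actually landed on master:

```python
# A state dict deserialized from a state.json written by an older
# version of datasets may lack the "_indices_files" key entirely.
state = {"_data_files": ["dataset.arrow"]}  # illustrative old-format state

# state.get() returns None instead of raising KeyError when the key is absent.
if state.get("_indices_files"):
    print("loading indices files")
else:
    print("no indices files recorded")
```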
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2195/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2195/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2194
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2194/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2194/comments
https://api.github.com/repos/huggingface/datasets/issues/2194/events
https://github.com/huggingface/datasets/issues/2194
853,909,452
MDU6SXNzdWU4NTM5MDk0NTI=
2,194
py3.7: TypeError: can't pickle _LazyModule objects
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "\r\nThis wasn't a `datasets` problem, but `transformers`' and it was solved here https://github.com/huggingface/transformers/pull/11168\r\n" ]
2021-04-08T21:02:48Z
2021-04-09T16:56:50Z
2021-04-09T01:52:57Z
CONTRIBUTOR
null
null
null
null
While this works fine with py3.8, under py3.7, with a totally new conda env and transformers install:

```
git clone https://github.com/huggingface/transformers
cd transformers
pip install -e .[testing]

export BS=1; rm -rf /tmp/test-clm; PYTHONPATH=src USE_TF=0 CUDA_VISIBLE_DEVICES=0 python \
examples/language-modeling/run_clm.py --model_name_or_path distilgpt2 --dataset_name wikitext \
--dataset_config_name wikitext-2-raw-v1 --do_train --max_train_samples 1 \
--per_device_train_batch_size $BS --output_dir /tmp/test-clm --block_size 128 --logging_steps 1 \
--fp16
```

```
Traceback (most recent call last):
  File "examples/language-modeling/run_clm.py", line 453, in <module>
    main()
  File "examples/language-modeling/run_clm.py", line 336, in main
    load_from_cache_file=not data_args.overwrite_cache,
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in map
    for k, dataset in self.items()
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/dataset_dict.py", line 303, in <dictcomp>
    for k, dataset in self.items()
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1259, in map
    update_data=update_data,
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 157, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 158, in wrapper
    self._fingerprint, transform, kwargs_for_fingerprint
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 105, in update_fingerprint
    hasher.update(transform_args[key])
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 57, in update
    self.m.update(self.hash(value).encode("utf-8"))
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 53, in hash
    return cls.hash_default(value)
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/fingerprint.py", line 46, in hash_default
    return cls.hash_bytes(dumps(value))
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 389, in dumps
    dump(obj, file)
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 361, in dump
    Pickler(file, recurse=True).dump(obj)
  File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump
    StockPickler.dump(self, obj)
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 437, in dump
    self.save(obj)
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/stas/anaconda3/lib/python3.7/site-packages/datasets/utils/py_utils.py", line 556, in save_function
    obj=obj,
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 638, in save_reduce
    save(args)
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 789, in save_tuple
    save(element)
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 504, in save
    f(self, obj) # Call unbound method with explicit self
  File "/home/stas/anaconda3/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict
    StockPickler.save_dict(pickler, obj)
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 859, in save_dict
    self._batch_setitems(obj.items())
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 885, in _batch_setitems
    save(v)
  File "/home/stas/anaconda3/lib/python3.7/pickle.py", line 524, in save
    rv = reduce(self.proto)
TypeError: can't pickle _LazyModule objects
```

```
$ python --version
Python 3.7.4

$ python -m torch.utils.collect_env
Collecting environment information...
PyTorch version: 1.8.0.dev20210110+cu110
Is debug build: False
CUDA used to build PyTorch: 11.0
ROCM used to build PyTorch: N/A

OS: Ubuntu 20.04.2 LTS (x86_64)
GCC version: (Ubuntu 9.3.0-17ubuntu1~20.04) 9.3.0
Clang version: 10.0.0-4ubuntu1
CMake version: version 3.16.3
```

Thanks.
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2194/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2194/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2193
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2193/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2193/comments
https://api.github.com/repos/huggingface/datasets/issues/2193/events
https://github.com/huggingface/datasets/issues/2193
853,725,707
MDU6SXNzdWU4NTM3MjU3MDc=
2,193
Filtering/mapping on one column is very slow
{ "avatar_url": "https://avatars.githubusercontent.com/u/39116809?v=4", "events_url": "https://api.github.com/users/norabelrose/events{/privacy}", "followers_url": "https://api.github.com/users/norabelrose/followers", "following_url": "https://api.github.com/users/norabelrose/following{/other_user}", "gists_url": "https://api.github.com/users/norabelrose/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/norabelrose", "id": 39116809, "login": "norabelrose", "node_id": "MDQ6VXNlcjM5MTE2ODA5", "organizations_url": "https://api.github.com/users/norabelrose/orgs", "received_events_url": "https://api.github.com/users/norabelrose/received_events", "repos_url": "https://api.github.com/users/norabelrose/repos", "site_admin": false, "starred_url": "https://api.github.com/users/norabelrose/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/norabelrose/subscriptions", "type": "User", "url": "https://api.github.com/users/norabelrose", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
closed
false
null
[]
null
[ "Hi ! Yes we are working on making `filter` significantly faster. You can look at related PRs here: #2060 #2178 \r\n\r\nI think you can expect to have the fast version of `filter` available next week.\r\n\r\nWe'll make it only select one column, and we'll also make the overall filtering operation way faster by avoi...
2021-04-08T18:16:14Z
2021-04-26T16:13:59Z
2021-04-26T16:13:59Z
CONTRIBUTOR
null
null
null
null
I'm currently using the `wikipedia` dataset— I'm tokenizing the articles with the `tokenizers` library using `map()` and also adding a new `num_tokens` column to the dataset as part of that map operation.

I want to be able to _filter_ the dataset based on this `num_tokens` column, but even when I specify `input_columns=['num_tokens']`, it seems that the entirety of each row is loaded into memory, which makes the operation take much longer than it should. Indeed, `filter` currently just calls `map`, and I found that in `_map_single` on lines 1690-1704 of `arrow_dataset.py`, the method is just grabbing slices of _all the rows_ of the dataset and then passing only the specified columns to the map function.

It seems that, when the user passes a value for `input_columns`, the `map` function should create a temporary pyarrow table by selecting just those columns, and then get slices from that table. Or something like that— I'm not very familiar with the pyarrow API.

I know that in the meantime I can sort of get around this by simply only returning the rows that match my filter criterion from the tokenizing function I pass to `map()`, but I actually _also_ want to map on just the `num_tokens` column in order to compute batches with a roughly uniform number of tokens per batch. I would also ideally like to be able to change my minimum and maximum article lengths without having to re-tokenize the entire dataset.

PS: This is definitely not a "dataset request." I'm realizing that I don't actually know how to remove labels from my own issues on other people's repos, if that is even possible.
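A minimal sketch of the column-selection idea from the issue, using `pyarrow.Table.select`; the toy table is illustrative, and this is not the actual `_map_single` code:

```python
import pyarrow as pa

# Toy stand-in for a dataset with one wide column and one narrow column.
table = pa.table({"text": ["a" * 1000] * 4, "num_tokens": [10, 250, 90, 400]})

# Selecting only the requested input columns *before* slicing avoids
# materializing the wide `text` column for every batch.
narrow = table.select(["num_tokens"])
print(narrow.slice(0, 2).to_pydict())  # {'num_tokens': [10, 250]}
```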
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2193/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2193/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2190
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2190/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2190/comments
https://api.github.com/repos/huggingface/datasets/issues/2190/events
https://github.com/huggingface/datasets/issues/2190
853,181,564
MDU6SXNzdWU4NTMxODE1NjQ=
2,190
News_commentary Dataset Translation Pairs are of Incorrect Language Specified Pairs
{ "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anassalamah", "id": 8571003, "login": "anassalamah", "node_id": "MDQ6VXNlcjg1NzEwMDM=", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "repos_url": "https://api.github.com/users/anassalamah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "type": "User", "url": "https://api.github.com/users/anassalamah", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @anassalamah,\r\n\r\nCould you please try with this:\r\n```python\r\ntrain_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[:98%]')\r\nval_ds = load_dataset(\"news_commentary\", lang1=\"ar\", lang2=\"en\", split='train[98%:]')\r\n```", "Hello @albertvillanova, \r\n\r\nThanks for...
2021-04-08T07:53:43Z
2021-05-24T10:03:55Z
2021-05-24T10:03:55Z
NONE
null
null
null
null
I used `load_dataset` to load the news_commentary dataset for "ar-en" translation pairs but found translations from Arabic to Hindi.

```python
train_ds = load_dataset("news_commentary", "ar-en", split='train[:98%]')
val_ds = load_dataset("news_commentary", "ar-en", split='train[98%:]')

# filtering out examples that are not ar-en translations but ar-hi
val_ds = val_ds.filter(lambda example, indice: indice not in chain(range(1312,1327), range(1384,1399), range(1030,1042)), with_indices=True)
```

* I'm fairly new to using datasets so I might be doing something wrong
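The first comment's suggested fix, loading via explicit `lang1`/`lang2` arguments instead of the "ar-en" config name, rendered as a runnable snippet:

```python
from datasets import load_dataset

# Explicit language-pair arguments, as suggested in the comment above.
train_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split="train[:98%]")
val_ds = load_dataset("news_commentary", lang1="ar", lang2="en", split="train[98%:]")
```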
{ "avatar_url": "https://avatars.githubusercontent.com/u/8571003?v=4", "events_url": "https://api.github.com/users/anassalamah/events{/privacy}", "followers_url": "https://api.github.com/users/anassalamah/followers", "following_url": "https://api.github.com/users/anassalamah/following{/other_user}", "gists_url": "https://api.github.com/users/anassalamah/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/anassalamah", "id": 8571003, "login": "anassalamah", "node_id": "MDQ6VXNlcjg1NzEwMDM=", "organizations_url": "https://api.github.com/users/anassalamah/orgs", "received_events_url": "https://api.github.com/users/anassalamah/received_events", "repos_url": "https://api.github.com/users/anassalamah/repos", "site_admin": false, "starred_url": "https://api.github.com/users/anassalamah/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/anassalamah/subscriptions", "type": "User", "url": "https://api.github.com/users/anassalamah", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2190/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2190/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2189
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2189/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2189/comments
https://api.github.com/repos/huggingface/datasets/issues/2189/events
https://github.com/huggingface/datasets/issues/2189
853,052,891
MDU6SXNzdWU4NTMwNTI4OTE=
2,189
save_to_disk doesn't work when we use concatenate_datasets function before creating the final dataset_object.
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! We refactored save_to_disk in #2025 so this doesn't happen.\r\nFeel free to try it on master for now\r\nWe'll do a new release soon" ]
2021-04-08T04:42:53Z
2022-06-01T16:32:15Z
2022-06-01T16:32:15Z
NONE
null
null
null
null
As you can see, it saves the entire dataset. @lhoestq

You can check by going through the following example,

```python
from datasets import load_from_disk, concatenate_datasets

loaded_data = load_from_disk('/home/gsir059/HNSW-ori/my_knowledge_dataset')
n = 20
kb_list = [loaded_data.shard(n, i, contiguous=True) for i in range(n)]
final_dataset = concatenate_datasets([kb_list[1], kb_list[2]])
final_dataset.save_to_disk('/home/gsir059/haha/k.arrow')
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2189/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2189/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2188
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2188/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2188/comments
https://api.github.com/repos/huggingface/datasets/issues/2188/events
https://github.com/huggingface/datasets/issues/2188
853,044,166
MDU6SXNzdWU4NTMwNDQxNjY=
2,188
Duplicate data in Timit dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4", "events_url": "https://api.github.com/users/thanh-p/events{/privacy}", "followers_url": "https://api.github.com/users/thanh-p/followers", "following_url": "https://api.github.com/users/thanh-p/following{/other_user}", "gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thanh-p", "id": 78190188, "login": "thanh-p", "node_id": "MDQ6VXNlcjc4MTkwMTg4", "organizations_url": "https://api.github.com/users/thanh-p/orgs", "received_events_url": "https://api.github.com/users/thanh-p/received_events", "repos_url": "https://api.github.com/users/thanh-p/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions", "type": "User", "url": "https://api.github.com/users/thanh-p", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting\r\nIf I recall correctly this has been recently fixed #1995\r\nCan you try to upgrade your local version of `datasets` ?\r\n```\r\npip install --upgrade datasets\r\n```", "Hi Ihoestq,\r\n\r\nThank you. It works after upgrading the datasets\r\n" ]
2021-04-08T04:21:54Z
2021-04-08T12:13:19Z
2021-04-08T12:13:19Z
NONE
null
null
null
null
I ran a simple piece of code to list all the texts in the Timit dataset, and the texts were all the same. Is this dataset corrupted?

**Code:**
```python
timit = load_dataset("timit_asr")
print(*timit['train']['text'], sep='\n')
```

**Result:**
```
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
Would such an act of refusal be useful?
...
...
Would such an act of refusal be useful?
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/78190188?v=4", "events_url": "https://api.github.com/users/thanh-p/events{/privacy}", "followers_url": "https://api.github.com/users/thanh-p/followers", "following_url": "https://api.github.com/users/thanh-p/following{/other_user}", "gists_url": "https://api.github.com/users/thanh-p/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/thanh-p", "id": 78190188, "login": "thanh-p", "node_id": "MDQ6VXNlcjc4MTkwMTg4", "organizations_url": "https://api.github.com/users/thanh-p/orgs", "received_events_url": "https://api.github.com/users/thanh-p/received_events", "repos_url": "https://api.github.com/users/thanh-p/repos", "site_admin": false, "starred_url": "https://api.github.com/users/thanh-p/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/thanh-p/subscriptions", "type": "User", "url": "https://api.github.com/users/thanh-p", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2188/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2188/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2187
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2187/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2187/comments
https://api.github.com/repos/huggingface/datasets/issues/2187/events
https://github.com/huggingface/datasets/issues/2187
852,939,736
MDU6SXNzdWU4NTI5Mzk3MzY=
2,187
Question (potential issue?) related to datasets caching
{ "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ioana-blue", "id": 17202292, "login": "ioana-blue", "node_id": "MDQ6VXNlcjE3MjAyMjky", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "repos_url": "https://api.github.com/users/ioana-blue/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "type": "User", "url": "https://api.github.com/users/ioana-blue", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
[]
null
[ "An educated guess: does this refer to the fact that depending on the custom column names in the dataset files (csv in this case), there is a dataset loader being created? and this dataset loader - using the \"custom data configuration\" is used among all jobs running using this particular csv files? (thinking out ...
2021-04-08T00:16:28Z
2023-01-03T18:30:38Z
null
NONE
null
null
null
null
I thought I had disabled datasets caching in my code, as follows:

```python
from datasets import set_caching_enabled
...
def main():
    # disable caching in datasets
    set_caching_enabled(False)
```

However, in my log files I see messages like the following:

```
04/07/2021 18:34:42 - WARNING - datasets.builder - Using custom data configuration default-888a87931cbc5877
04/07/2021 18:34:42 - WARNING - datasets.builder - Reusing dataset csv (xxxx/cache-transformers/datasets/csv/default-888a87931cbc5877/0.0.0/965b6429be0fc05f975b608ce64e1fa941cc8fb4f30629b523d2390f3c0e1a93
```

Can you please let me know what this reusing dataset csv means? I wouldn't expect any reusing with the datasets caching disabled. Thank you!
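One plausible explanation (an assumption based on how `datasets` separates its caches, not a reply from this thread): `set_caching_enabled(False)` only disables caching of transformed results from `map`/`filter`, while the "Reusing dataset csv" message refers to reusing the already-prepared raw dataset. Forcing regeneration would look roughly like:

```python
from datasets import load_dataset

ds = load_dataset(
    "csv",
    data_files="my_file.csv",          # hypothetical file
    download_mode="force_redownload",  # rebuild instead of reusing the cached copy
)
```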
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2187/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2187/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2185
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2185/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2185/comments
https://api.github.com/repos/huggingface/datasets/issues/2185/events
https://github.com/huggingface/datasets/issues/2185
852,684,395
MDU6SXNzdWU4NTI2ODQzOTU=
2,185
.map() and distributed training
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi, one workaround would be to save the mapped(tokenized in your case) file using `save_to_disk`, and having each process load this file using `load_from_disk`. This is what I am doing, and in this case, I turn off the ability to automatically load from the cache.\r\n\r\nAlso, multiprocessing the map function seem...
2021-04-07T18:22:14Z
2021-10-23T07:11:15Z
2021-04-09T15:38:31Z
CONTRIBUTOR
null
null
null
null
Hi,

I have a question regarding distributed training and the `.map` call on a dataset.

I have a local dataset "my_custom_dataset" that I am loading with `datasets = load_from_disk(dataset_path=my_path)`. `dataset` is then tokenized:

```python
datasets = load_from_disk(dataset_path=my_path)

[...]

def tokenize_function(examples):
    return tokenizer(examples[text_column_name])

logger.info("Mapping dataset to tokenized dataset.")
tokenized_datasets = datasets.map(
    tokenize_function,
    batched=True,
    num_proc=preprocessing_num_workers,
    remove_columns=column_names,
    load_from_cache_file=True,
)
```

I am using 31 workers (`preprocessing_num_workers=31`) and thus it creates 31 `cache*.arrow` files in `my_path/train` (there is only a train split).

When I relaunch the script, the tokenization map is skipped in favor of loading the 31 previously cached files, and that's perfect. Everything so far was done by launching a **single process script**.

I now launch the same training script in **distributed mode** (`python -m torch.distributed.launch --nproc_per_node 2`). However, once it reaches the map call, it re-does the tokenization... instead of loading the 31 cached files.

I tried adding the `cache_file_name` argument: `cache_file_name={"train": my_path/one_of_the_arrow_file}`, but I can't give the 31 cached files, so it probably isn't the right way to do it.

**My question: what is the best way to load cached files if they were pre-processed and dumped in multiple arrow files?** It seems automatically handled for single processes but fails on distributed training.
- I am following the same structure as the examples of transformers (more specifically [run_clm.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_clm.py) in my case)
- I am using 1.5.0 version of datasets if that matters.
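A minimal sketch of the workaround described in the comment above, where only rank 0 tokenizes and saves, and every process then loads the result; the paths, the stub `tokenize_function`, and the `gloo` backend are illustrative assumptions, not code from the issue:

```python
import os

import torch.distributed as dist
from datasets import load_from_disk

MY_PATH = "/path/to/my_custom_dataset"     # hypothetical input, as in the issue
TOKENIZED_PATH = "/tmp/tokenized_dataset"  # hypothetical output location

def tokenize_function(examples):
    # Stand-in for the real tokenizer call from the issue.
    return {"num_tokens": [len(t.split()) for t in examples["text"]]}

dist.init_process_group(backend="gloo")  # env:// init under torch.distributed.launch
if dist.get_rank() == 0 and not os.path.exists(TOKENIZED_PATH):
    ds = load_from_disk(MY_PATH)
    tokenized = ds.map(tokenize_function, batched=True, num_proc=31)
    tokenized.save_to_disk(TOKENIZED_PATH)
dist.barrier()  # other ranks wait until rank 0 has written the files
tokenized_datasets = load_from_disk(TOKENIZED_PATH)
```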
{ "avatar_url": "https://avatars.githubusercontent.com/u/16107619?v=4", "events_url": "https://api.github.com/users/VictorSanh/events{/privacy}", "followers_url": "https://api.github.com/users/VictorSanh/followers", "following_url": "https://api.github.com/users/VictorSanh/following{/other_user}", "gists_url": "https://api.github.com/users/VictorSanh/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/VictorSanh", "id": 16107619, "login": "VictorSanh", "node_id": "MDQ6VXNlcjE2MTA3NjE5", "organizations_url": "https://api.github.com/users/VictorSanh/orgs", "received_events_url": "https://api.github.com/users/VictorSanh/received_events", "repos_url": "https://api.github.com/users/VictorSanh/repos", "site_admin": false, "starred_url": "https://api.github.com/users/VictorSanh/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/VictorSanh/subscriptions", "type": "User", "url": "https://api.github.com/users/VictorSanh", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2185/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2185/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2181
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2181/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2181/comments
https://api.github.com/repos/huggingface/datasets/issues/2181/events
https://github.com/huggingface/datasets/issues/2181
852,261,607
MDU6SXNzdWU4NTIyNjE2MDc=
2,181
Error when loading a HUGE json file (pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries)
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Can you try to increase the block size ? For example\r\n```python\r\nblock_size_10MB = 10<<20\r\nload_dataset(\"json\", ..., block_size=block_size_10MB)\r\n```\r\nThe block size corresponds to how much bytes to process at a time from the input stream.\r\nThis will determine multi-threading granularity as well...
2021-04-07T10:26:46Z
2021-04-12T07:15:55Z
2021-04-12T07:15:55Z
NONE
null
null
null
null
Hi, thanks for the great library. I have used the brilliant library for a couple of small projects, and am now using it for a fairly big project.

When loading a huge json file of 500GB, pyarrow complains as follows:

```
Traceback (most recent call last):
  File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 531, in incomplete_dir
    yield tmp_dir
  File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 573, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 650, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/datasets/builder.py", line 1027, in _prepare_split
    for key, table in utils.tqdm(generator, unit=" tables", leave=False, disable=not_verbose):
  File "/home/user/.pyenv/versions/3.7.9/lib/python3.7/site-packages/tqdm/std.py", line 1133, in __iter__
    for obj in iterable:
  File "/app/.cache/huggingface/modules/datasets_modules/datasets/json/9498524fd296a6cca99c66d6c5be507d1c0991f5a814e535b507f4a66096a641/json.py", line 83, in _generate_tables
    parse_options=self.config.pa_parse_options,
  File "pyarrow/_json.pyx", line 247, in pyarrow._json.read_json
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries (try to increase block size?)
```

When using only a small portion of the sample file, say the first 100 lines, it works perfectly well. I see that it is an error from pyarrow, but could you give me a hint or possible solutions? #369 describes the same error and #372 claims to have fixed the issue, but I have no clue why I am still getting this one. Thanks in advance!
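The fix proposed in the comment above, passing a larger `block_size` through `load_dataset`; the 10 MB value comes from that comment, and the file path is illustrative:

```python
from datasets import load_dataset

block_size_10MB = 10 << 20  # 10 MiB
dataset = load_dataset(
    "json",
    data_files="/path/to/huge_file.json",  # hypothetical path
    block_size=block_size_10MB,            # bytes processed at a time from the input stream
)
```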
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen", "user_view_type": "public" }
{ "+1": 3, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 3, "url": "https://api.github.com/repos/huggingface/datasets/issues/2181/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2181/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2179
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2179/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2179/comments
https://api.github.com/repos/huggingface/datasets/issues/2179/events
https://github.com/huggingface/datasets/issues/2179
852,237,957
MDU6SXNzdWU4NTIyMzc5NTc=
2,179
Load small datasets in-memory instead of using memory map
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "c5def5", "default": fals...
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[]
2021-04-07T09:58:16Z
2021-04-20T10:04:04Z
2021-04-20T10:04:03Z
MEMBER
null
null
null
null
Currently all datasets are loaded using memory mapping by default in `load_dataset`. However this might not be necessary for small datasets. If a dataset is small enough, then it can be loaded in-memory and:
- its memory footprint would be small so it's ok
- in-memory computations/queries would be faster
- the caching on-disk would be disabled, making computations even faster (no I/O bound because of the disk)
- but running the same computation a second time would recompute everything since there would be no cached results on-disk. But this is probably fine since computations would be fast anyway + users should be able to provide a cache filename if needed.

Therefore, maybe the default behavior of `load_dataset` should be to load small datasets in-memory and big datasets using memory mapping.
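For reference, a minimal sketch of the explicit opt-in that `load_dataset` exposes for this, the `keep_in_memory` flag (dataset choice illustrative):

```python
from datasets import load_dataset

# Copy the dataset into RAM instead of memory-mapping the Arrow file on disk.
ds = load_dataset("imdb", split="train", keep_in_memory=True)
```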
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2179/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2179/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2176
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2176/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2176/comments
https://api.github.com/repos/huggingface/datasets/issues/2176/events
https://github.com/huggingface/datasets/issues/2176
851,865,795
MDU6SXNzdWU4NTE4NjU3OTU=
2,176
Converting a Value to a ClassLabel
{ "avatar_url": "https://avatars.githubusercontent.com/u/7272031?v=4", "events_url": "https://api.github.com/users/nelson-liu/events{/privacy}", "followers_url": "https://api.github.com/users/nelson-liu/followers", "following_url": "https://api.github.com/users/nelson-liu/following{/other_user}", "gists_url": "https://api.github.com/users/nelson-liu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nelson-liu", "id": 7272031, "login": "nelson-liu", "node_id": "MDQ6VXNlcjcyNzIwMzE=", "organizations_url": "https://api.github.com/users/nelson-liu/orgs", "received_events_url": "https://api.github.com/users/nelson-liu/received_events", "repos_url": "https://api.github.com/users/nelson-liu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nelson-liu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nelson-liu/subscriptions", "type": "User", "url": "https://api.github.com/users/nelson-liu", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
null
[]
null
[ "Hi @nelson-liu!\r\nHere is what I do to convert a string to class label:\r\n\r\n```python\r\nfrom datasets import load_dataset, features\r\n\r\n\r\ndset = load_dataset(...)\r\ncol_name = \"the string column name\"\r\n\r\nclass_names = dset.unique(col_name)\r\nclass_feature = features.ClassLabel(names=sorted(class...
2021-04-06T22:54:16Z
2022-06-01T16:31:49Z
2022-06-01T16:31:49Z
NONE
null
null
null
null
Hi! In the docs for `cast`, it's noted that `For non-trivial conversion, e.g. string <-> ClassLabel you should use map() to update the Dataset.` Would it be possible to have an example that demonstrates such a string <-> ClassLabel conversion using `map`? Thanks!
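A sketch of the conversion outlined in the first comment above: collect the unique strings, build a `ClassLabel`, map the strings to integer ids, then update the column's feature (the toy dataset is illustrative):

```python
from datasets import ClassLabel, Dataset

dset = Dataset.from_dict({"label": ["pos", "neg", "pos"]})  # toy example

class_names = sorted(dset.unique("label"))
class_feature = ClassLabel(names=class_names)

# Replace the string values with their integer ids, then update the schema.
dset = dset.map(lambda batch: {"label": class_feature.str2int(batch["label"])}, batched=True)
dset = dset.cast_column("label", class_feature)
print(dset.features["label"])  # ClassLabel(names=['neg', 'pos'], ...)
```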
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2176/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2176/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2175
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2175/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2175/comments
https://api.github.com/repos/huggingface/datasets/issues/2175/events
https://github.com/huggingface/datasets/issues/2175
851,836,096
MDU6SXNzdWU4NTE4MzYwOTY=
2,175
dataset.search_batch() function outputs all -1 indices sometime.
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Actually, I found the answer [here](https://github.com/facebookresearch/faiss/wiki/FAQ#what-does-it-mean-when-a-search-returns--1-ids). \r\n\r\nSo we have to do some modifications to the code for instances where the index doesn't retrieve any IDs.", "@lhoestq @patrickvonplaten \r\n\r\nI also found another short...
2021-04-06T21:50:49Z
2021-04-16T12:21:16Z
2021-04-16T12:21:15Z
NONE
null
null
null
null
I am working with RAG and playing around with different faiss indexes. At the moment I use **index = faiss.index_factory(768, "IVF65536_HNSW32,Flat")**.

During the retrieval phase, exactly in [this line of retrieval_rag.py](https://github.com/huggingface/transformers/blob/master/src/transformers/models/rag/retrieval_rag.py#L231), an error occurs when all retrieved indices are -1. Please refer to the screenshot of a PID worker.

![image](https://user-images.githubusercontent.com/16892570/113782387-37a67600-9786-11eb-9c29-acad661a9648.png)

Here, my retrieve batch size is 2 and n_docs is 5. I can solve this by working around np.stack, but I want to ask why we get an output index of -1. Do you have any idea :) ? Is this a problem of the index, where faiss can't find any similar vector? Is there documentation on the output index being -1?

@lhoestq
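Per the FAISS FAQ linked in the comment above, a search returns id -1 when the index cannot supply enough neighbors. A minimal sketch of guarding against that before `np.stack`-style post-processing (the function name and fallback policy are illustrative):

```python
import numpy as np

def replace_missing(ids: np.ndarray, fallback_id: int = 0) -> np.ndarray:
    # FAISS pads each row with -1 when fewer than n_docs neighbors are found;
    # substitute a valid fallback id so downstream lookups don't fail.
    return np.where(ids < 0, fallback_id, ids)

ids = np.array([[12, 7, -1, -1, -1], [3, -1, -1, -1, -1]])  # batch of 2, n_docs = 5
print(replace_missing(ids))
```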
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2175/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2175/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2170
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2170/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2170/comments
https://api.github.com/repos/huggingface/datasets/issues/2170/events
https://github.com/huggingface/datasets/issues/2170
850,913,228
MDU6SXNzdWU4NTA5MTMyMjg=
2,170
Wikipedia historic dumps are deleted but hf/datasets hardcodes dump date
{ "avatar_url": "https://avatars.githubusercontent.com/u/946903?v=4", "events_url": "https://api.github.com/users/leezu/events{/privacy}", "followers_url": "https://api.github.com/users/leezu/followers", "following_url": "https://api.github.com/users/leezu/following{/other_user}", "gists_url": "https://api.github.com/users/leezu/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leezu", "id": 946903, "login": "leezu", "node_id": "MDQ6VXNlcjk0NjkwMw==", "organizations_url": "https://api.github.com/users/leezu/orgs", "received_events_url": "https://api.github.com/users/leezu/received_events", "repos_url": "https://api.github.com/users/leezu/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leezu/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leezu/subscriptions", "type": "User", "url": "https://api.github.com/users/leezu", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "It seems that this can be fixed from user's end by including a `date` argument, like this:\r\n\r\n`dataset = datasets.load_dataset('wikipedia', '20200501.en', date='20210420')`\r\n\r\nYou can get available dates from [here](https://dumps.wikimedia.org/enwiki/).\r\n\r\nThis is not a proper fix however as all the fi...
2021-04-06T03:13:18Z
2021-06-16T01:10:50Z
null
NONE
null
null
null
null
Wikimedia does not keep all historical dumps. For example, as of today https://dumps.wikimedia.org/kowiki/ only provides ``` 20201220/ 02-Feb-2021 01:36 - 20210101/ 21-Feb-2021 01:26 - 20210120/ 02-Mar-2021 01:25 - 20210201/ 21-Mar-2021 01:26 - 20210220/ 02-Apr-2021 01:26 - 20210301/ 03-Mar-2021 08:10 - 20210320/ 21-Mar-2021 18:13 - 20210401/ 03-Apr-2021 10:08 - latest/ 03-Apr-2021 10:08 - ``` However, the wikipedia dataset provided in the library, only supports the following configs, none of which are applicable anymore when disregarding the cached datasets: ``` ValueError: BuilderConfig 20210401.ko not found. Available: ['20200501.aa', '20200501.ab', '20200501.ace', '20200501.ady', '20200501.af', '20200501.ak', '20200501.als', '20200501.am', '20200501.an', '20200501.ang', '20200501.ar', '20200501.arc', '20200501.arz', '20200501.as', '20200501.ast', '20200501.atj', '20200501.av', '20200501.ay', '20200501.az', '20200501.azb', '20200501.ba', '20200501.bar', '20200501.bat-smg', '20200501.bcl', '20200501.be', '20200501.be-x-old', '20200501.bg', '20200501.bh', '20200501.bi', '20200501.bjn', '20200501.bm', '20200501.bn', '20200501.bo', '20200501.bpy', '20200501.br', '20200501.bs', '20200501.bug', '20200501.bxr', '20200501.ca', '20200501.cbk-zam', '20200501.cdo', '20200501.ce', '20200501.ceb', '20200501.ch', '20200501.cho', '20200501.chr', '20200501.chy', '20200501.ckb', '20200501.co', '20200501.cr', '20200501.crh', '20200501.cs', '20200501.csb', '20200501.cu', '20200501.cv', '20200501.cy', '20200501.da', '20200501.de', '20200501.din', '20200501.diq', '20200501.dsb', '20200501.dty', '20200501.dv', '20200501.dz', '20200501.ee', '20200501.el', '20200501.eml', '20200501.en', '20200501.eo', '20200501.es', '20200501.et', '20200501.eu', '20200501.ext', '20200501.fa', '20200501.ff', '20200501.fi', '20200501.fiu-vro', '20200501.fj', '20200501.fo', '20200501.fr', '20200501.frp', '20200501.frr', '20200501.fur', '20200501.fy', '20200501.ga', '20200501.gag', '20200501.gan', '20200501.gd', '20200501.gl', '20200501.glk', '20200501.gn', '20200501.gom', '20200501.gor', '20200501.got', '20200501.gu', '20200501.gv', '20200501.ha', '20200501.hak', '20200501.haw', '20200501.he', '20200501.hi', '20200501.hif', '20200501.ho', '20200501.hr', '20200501.hsb', '20200501.ht', '20200501.hu', '20200501.hy', '20200501.ia', '20200501.id', '20200501.ie', '20200501.ig', '20200501.ii', '20200501.ik', '20200501.ilo', '20200501.inh', '20200501.io', '20200501.is', '20200501.it', '20200501.iu', '20200501.ja', '20200501.jam', '20200501.jbo', '20200501.jv', '20200501.ka', '20200501.kaa', '20200501.kab', '20200501.kbd', '20200501.kbp', '20200501.kg', '20200501.ki', '20200501.kj', '20200501.kk', '20200501.kl', '20200501.km', '20200501.kn', '20200501.ko', '20200501.koi', '20200501.krc', '20200501.ks', '20200501.ksh', '20200501.ku', '20200501.kv', '20200501.kw', '20200501.ky', '20200501.la', '20200501.lad', '20200501.lb', '20200501.lbe', '20200501.lez', '20200501.lfn', '20200501.lg', '20200501.li', '20200501.lij', '20200501.lmo', '20200501.ln', '20200501.lo', '20200501.lrc', '20200501.lt', '20200501.ltg', '20200501.lv', '20200501.mai', '20200501.map-bms', '20200501.mdf', '20200501.mg', '20200501.mh', '20200501.mhr', '20200501.mi', '20200501.min', '20200501.mk', '20200501.ml', '20200501.mn', '20200501.mr', '20200501.mrj', '20200501.ms', '20200501.mt', '20200501.mus', '20200501.mwl', '20200501.my', '20200501.myv', '20200501.mzn', '20200501.na', '20200501.nah', '20200501.nap', '20200501.nds', '20200501.nds-nl', '20200501.ne', '20200501.new', 
'20200501.ng', '20200501.nl', '20200501.nn', '20200501.no', '20200501.nov', '20200501.nrm', '20200501.nso', '20200501.nv', '20200501.ny', '20200501.oc', '20200501.olo', '20200501.om', '20200501.or', '20200501.os', '20200501.pa', '20200501.pag', '20200501.pam', '20200501.pap', '20200501.pcd', '20200501.pdc', '20200501.pfl', '20200501.pi', '20200501.pih', '20200501.pl', '20200501.pms', '20200501.pnb', '20200501.pnt', '20200501.ps', '20200501.pt', '20200501.qu', '20200501.rm', '20200501.rmy', '20200501.rn', '20200501.ro', '20200501.roa-rup', '20200501.roa-tara', '20200501.ru', '20200501.rue', '20200501.rw', '20200501.sa', '20200501.sah', '20200501.sat', '20200501.sc', '20200501.scn', '20200501.sco', '20200501.sd', '20200501.se', '20200501.sg', '20200501.sh', '20200501.si', '20200501.simple', '20200501.sk', '20200501.sl', '20200501.sm', '20200501.sn', '20200501.so', '20200501.sq', '20200501.sr', '20200501.srn', '20200501.ss', '20200501.st', '20200501.stq', '20200501.su', '20200501.sv', '20200501.sw', '20200501.szl', '20200501.ta', '20200501.tcy', '20200501.te', '20200501.tet', '20200501.tg', '20200501.th', '20200501.ti', '20200501.tk', '20200501.tl', '20200501.tn', '20200501.to', '20200501.tpi', '20200501.tr', '20200501.ts', '20200501.tt', '20200501.tum', '20200501.tw', '20200501.ty', '20200501.tyv', '20200501.udm', '20200501.ug', '20200501.uk', '20200501.ur', '20200501.uz', '20200501.ve', '20200501.vec', '20200501.vep', '20200501.vi', '20200501.vls', '20200501.vo', '20200501.wa', '20200501.war', '20200501.wo', '20200501.wuu', '20200501.xal', '20200501.xh', '20200501.xmf', '20200501.yi', '20200501.yo', '20200501.za', '20200501.zea', '20200501.zh', '20200501.zh-classical', '20200501.zh-min-nan', '20200501.zh-yue', '20200501.zu'] ``` The cached datasets: ``` % aws s3 --no-sign-request --endpoint-url https://storage.googleapis.com ls s3://huggingface-nlp/cache/datasets/wikipedia/ PRE 20200501.de/ PRE 20200501.en/ PRE 20200501.fr/ PRE 20200501.frr/ PRE 20200501.it/ PRE 20200501.simple/ ```
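As the first comment notes, newer dumps can be requested by passing the dump date explicitly instead of relying on a hardcoded config. A sketch of that workaround (the date is illustrative and must currently exist on dumps.wikimedia.org; non-preprocessed languages are built with Apache Beam, so a runner may be needed):

```python
import datasets

# Pick a date currently listed at e.g. https://dumps.wikimedia.org/kowiki/
dataset = datasets.load_dataset(
    "wikipedia",
    language="ko",
    date="20210401",
    beam_runner="DirectRunner",  # required for languages without a cached build
)
```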
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2170/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2170/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2167
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2167/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2167/comments
https://api.github.com/repos/huggingface/datasets/issues/2167/events
https://github.com/huggingface/datasets/issues/2167
849,944,891
MDU6SXNzdWU4NDk5NDQ4OTE=
2,167
Split type not preserved when reloading the dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2021-04-04T19:29:54Z
2021-04-19T09:08:55Z
2021-04-19T09:08:55Z
COLLABORATOR
null
null
null
null
A minimal reproducible example: ```python >>> from datasets import load_dataset, Dataset >>> dset = load_dataset("sst", split="train") >>> dset.save_to_disk("sst") >>> type(dset.split) <class 'datasets.splits.NamedSplit'> >>> dset = Dataset.load_from_disk("sst") >>> type(dset.split) # NamedSplit expected <class 'str'> ``` It seems like this bug was introduced in #2025.
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2167/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2167/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2166
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2166/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2166/comments
https://api.github.com/repos/huggingface/datasets/issues/2166/events
https://github.com/huggingface/datasets/issues/2166
849,778,545
MDU6SXNzdWU4NDk3Nzg1NDU=
2,166
Regarding Test Sets for the GEM datasets
{ "avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4", "events_url": "https://api.github.com/users/vyraun/events{/privacy}", "followers_url": "https://api.github.com/users/vyraun/followers", "following_url": "https://api.github.com/users/vyraun/following{/other_user}", "gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vyraun", "id": 17217068, "login": "vyraun", "node_id": "MDQ6VXNlcjE3MjE3MDY4", "organizations_url": "https://api.github.com/users/vyraun/orgs", "received_events_url": "https://api.github.com/users/vyraun/received_events", "repos_url": "https://api.github.com/users/vyraun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyraun/subscriptions", "type": "User", "url": "https://api.github.com/users/vyraun", "user_view_type": "public" }
[ { "color": "72f99f", "default": false, "description": "Discussions on the datasets", "id": 2067401494, "name": "Dataset discussion", "node_id": "MDU6TGFiZWwyMDY3NDAxNDk0", "url": "https://api.github.com/repos/huggingface/datasets/labels/Dataset%20discussion" } ]
closed
false
null
[]
null
[ "Hi @vyraun ! The test references for CommonGen are not publicly available: you can reach out to the original dataset authors if you would like to ask for them, but we will not be releasing them as part of GEM (March 31st was the release date for the test set inputs, references are incidentally released for some of...
2021-04-04T02:02:45Z
2021-04-06T08:13:12Z
2021-04-06T08:13:12Z
NONE
null
null
null
null
@yjernite Hi, are the test sets for the GEM datasets scheduled to be [added soon](https://gem-benchmark.com/shared_task)? e.g. ``` from datasets import load_dataset DATASET_NAME="common_gen" data = load_dataset("gem", DATASET_NAME) ``` The test set doesn't have the target or references. ``` data['test'][0] {'concept_set_id': 0, 'concepts': ['drill', 'field', 'run', 'team'], 'gem_id': 'common_gen-test-0', 'gem_parent_id': 'common_gen-test-0', 'references': [], 'target': ''} ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/17217068?v=4", "events_url": "https://api.github.com/users/vyraun/events{/privacy}", "followers_url": "https://api.github.com/users/vyraun/followers", "following_url": "https://api.github.com/users/vyraun/following{/other_user}", "gists_url": "https://api.github.com/users/vyraun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vyraun", "id": 17217068, "login": "vyraun", "node_id": "MDQ6VXNlcjE3MjE3MDY4", "organizations_url": "https://api.github.com/users/vyraun/orgs", "received_events_url": "https://api.github.com/users/vyraun/received_events", "repos_url": "https://api.github.com/users/vyraun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vyraun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vyraun/subscriptions", "type": "User", "url": "https://api.github.com/users/vyraun", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2166/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2166/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2165
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2165/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2165/comments
https://api.github.com/repos/huggingface/datasets/issues/2165/events
https://github.com/huggingface/datasets/issues/2165
849,771,665
MDU6SXNzdWU4NDk3NzE2NjU=
2,165
How to convert datasets.arrow_dataset.Dataset to torch.utils.data.Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4", "events_url": "https://api.github.com/users/y-rokutan/events{/privacy}", "followers_url": "https://api.github.com/users/y-rokutan/followers", "following_url": "https://api.github.com/users/y-rokutan/following{/other_user}", "gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/y-rokutan", "id": 24562381, "login": "y-rokutan", "node_id": "MDQ6VXNlcjI0NTYyMzgx", "organizations_url": "https://api.github.com/users/y-rokutan/orgs", "received_events_url": "https://api.github.com/users/y-rokutan/received_events", "repos_url": "https://api.github.com/users/y-rokutan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions", "type": "User", "url": "https://api.github.com/users/y-rokutan", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\na HF dataset can be converted to a Torch Dataset with a simple wrapper as follows:\r\n```python\r\nfrom torch.utils.data import Dataset\r\n \r\nclass HFDataset(Dataset):\r\n def __init__(self, dset):\r\n self.dset = dset\r\n\r\n def __getitem__(self, idx):\r\n return self.dset[idx]\r...
2021-04-04T01:01:48Z
2021-08-24T15:55:35Z
2021-04-07T15:06:04Z
NONE
null
null
null
null
Hi, I'm trying to pretrain a DeepSpeed model using the HF arxiv dataset like: ``` train_ds = nlp.load_dataset('scientific_papers', 'arxiv') train_ds.set_format( type="torch", columns=["input_ids", "attention_mask", "global_attention_mask", "labels"], ) engine, _, _, _ = deepspeed.initialize( args=args, model=model, model_parameters=[p for p in model.parameters() if p.requires_grad], training_data=train_ds) ``` but deepspeed.initialize accepts torch.utils.data.Dataset only. How can I convert an HF-style dataset to a torch-style dataset?
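As the first comment on this issue shows, a thin wrapper is enough, since DeepSpeed only type-checks for `torch.utils.data.Dataset`. A minimal sketch:

```python
from torch.utils.data import Dataset

class HFDataset(Dataset):
    """Wrap a Hugging Face dataset so it passes isinstance checks
    for torch.utils.data.Dataset (e.g. in deepspeed.initialize)."""
    def __init__(self, dset):
        self.dset = dset

    def __getitem__(self, idx):
        return self.dset[idx]

    def __len__(self):
        return len(self.dset)

# usage: train_ds = HFDataset(train_ds) before calling deepspeed.initialize(...)
```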
{ "avatar_url": "https://avatars.githubusercontent.com/u/24562381?v=4", "events_url": "https://api.github.com/users/y-rokutan/events{/privacy}", "followers_url": "https://api.github.com/users/y-rokutan/followers", "following_url": "https://api.github.com/users/y-rokutan/following{/other_user}", "gists_url": "https://api.github.com/users/y-rokutan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/y-rokutan", "id": 24562381, "login": "y-rokutan", "node_id": "MDQ6VXNlcjI0NTYyMzgx", "organizations_url": "https://api.github.com/users/y-rokutan/orgs", "received_events_url": "https://api.github.com/users/y-rokutan/received_events", "repos_url": "https://api.github.com/users/y-rokutan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/y-rokutan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/y-rokutan/subscriptions", "type": "User", "url": "https://api.github.com/users/y-rokutan", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2165/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2165/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2162
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2162/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2162/comments
https://api.github.com/repos/huggingface/datasets/issues/2162/events
https://github.com/huggingface/datasets/issues/2162
849,129,201
MDU6SXNzdWU4NDkxMjkyMDE=
2,162
visualization for cc100 is broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[ "This looks like an issue with the cc100 dataset itself but not sure\r\nDid you try loading cc100 on your machine ?", "Hi\nloading works fine, but the viewer only is broken\nthanks\n\nOn Wed, Apr 7, 2021 at 12:17 PM Quentin Lhoest ***@***.***>\nwrote:\n\n> This looks like an issue with the cc100 dataset itself bu...
2021-04-02T10:11:13Z
2022-10-05T13:20:24Z
2022-10-05T13:20:24Z
NONE
null
null
null
null
Hi, the visualization through the dataset viewer for cc100 is broken: https://huggingface.co/datasets/viewer/ Thanks a lot
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2162/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2162/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2161
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2161/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2161/comments
https://api.github.com/repos/huggingface/datasets/issues/2161/events
https://github.com/huggingface/datasets/issues/2161
849,127,041
MDU6SXNzdWU4NDkxMjcwNDE=
2,161
any possibility to download part of large datasets only?
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Not yet but it’s on the short/mid-term roadmap (requested by many indeed).", "oh, great, really awesome feature to have, thank you very much for the great, fabulous work", "We'll work on dataset streaming soon. This should allow you to only load the examples you need ;)", "thanks a lot Quentin, this would be...
2021-04-02T10:06:46Z
2022-10-05T13:26:51Z
2022-10-05T13:26:51Z
NONE
null
null
null
null
Hi, some of the datasets I need, like cc100, are very large, and I wonder if I can download just the first X samples of the shuffled/unshuffled data without first downloading the whole dataset and then sampling? Thanks
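The comments below mention that dataset streaming was on the roadmap; with a `datasets` version that supports it, a sketch of taking the first X samples without downloading everything (`cc100`'s `lang` kwarg is taken from its dataset card):

```python
from datasets import load_dataset

# streaming=True yields an IterableDataset that downloads lazily
stream = load_dataset("cc100", lang="en", split="train", streaming=True)
first_1000 = list(stream.take(1000))  # only the first 1000 examples are fetched
```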
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2161/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2161/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2160
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2160/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2160/comments
https://api.github.com/repos/huggingface/datasets/issues/2160/events
https://github.com/huggingface/datasets/issues/2160
849,052,921
MDU6SXNzdWU4NDkwNTI5MjE=
2,160
data_args.preprocessing_num_workers almost freezes
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi.\r\nI cannot always reproduce this issue, and on later runs I did not see it so far. Sometimes also I set 8 processes but I see less being showed, is this normal, here only 5 are shown for 8 being set, thanks\r\n\r\n```\r\n#3: 11%|███████████████▊ ...
2021-04-02T07:56:13Z
2021-04-02T10:14:32Z
2021-04-02T10:14:31Z
NONE
null
null
null
null
Hi @lhoestq, I am running this code from huggingface transformers (https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm.py) on multiple datasets, and to speed up tokenization I set data_args.preprocessing_num_workers = 4 with the opus100 corpus. Tokenization proceeds up to a point, then almost freezes for a while, and then resumes; overall it takes more time than the single-process case. I would appreciate your advice on how to use this option properly to speed things up. Thanks
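For reference, the pattern in run_mlm.py boils down to `Dataset.map` with `num_proc`; a self-contained sketch with a stand-in corpus (imdb here, not the issue's actual data). As the follow-up comment hints, workers get fixed shards, so a worker with a heavier shard lags behind and its progress bar can look frozen:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = load_dataset("imdb", split="train")

def tokenize_function(batch):
    return tokenizer(batch["text"], truncation=True)

# Each of the 4 worker processes tokenizes its own shard of the dataset;
# uneven shards make some bars stall while others finish.
tokenized = raw.map(tokenize_function, batched=True, num_proc=4)
```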
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2160/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2160/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2159
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2159/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2159/comments
https://api.github.com/repos/huggingface/datasets/issues/2159/events
https://github.com/huggingface/datasets/issues/2159
848,851,962
MDU6SXNzdWU4NDg4NTE5NjI=
2,159
adding ccnet dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "closing since I think this is cc100, just the name has been changed. thanks " ]
2021-04-01T23:28:36Z
2021-04-02T10:05:19Z
2021-04-02T10:05:19Z
NONE
null
null
null
null
## Adding a Dataset - **Name:** ccnet - **Description:** Common Crawl - **Paper:** https://arxiv.org/abs/1911.00359 - **Data:** https://github.com/facebookresearch/cc_net - **Motivation:** this is one of the most comprehensive clean monolingual datasets across a variety of languages. Quite important for cross-lingual research. Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md). thanks
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2159/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2159/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2158
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2158/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2158/comments
https://api.github.com/repos/huggingface/datasets/issues/2158/events
https://github.com/huggingface/datasets/issues/2158
848,506,746
MDU6SXNzdWU4NDg1MDY3NDY=
2,158
viewer "fake_news_english" error
{ "avatar_url": "https://avatars.githubusercontent.com/u/9447991?v=4", "events_url": "https://api.github.com/users/emanuelevivoli/events{/privacy}", "followers_url": "https://api.github.com/users/emanuelevivoli/followers", "following_url": "https://api.github.com/users/emanuelevivoli/following{/other_user}", "gists_url": "https://api.github.com/users/emanuelevivoli/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/emanuelevivoli", "id": 9447991, "login": "emanuelevivoli", "node_id": "MDQ6VXNlcjk0NDc5OTE=", "organizations_url": "https://api.github.com/users/emanuelevivoli/orgs", "received_events_url": "https://api.github.com/users/emanuelevivoli/received_events", "repos_url": "https://api.github.com/users/emanuelevivoli/repos", "site_admin": false, "starred_url": "https://api.github.com/users/emanuelevivoli/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/emanuelevivoli/subscriptions", "type": "User", "url": "https://api.github.com/users/emanuelevivoli", "user_view_type": "public" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[ "Thanks for reporting !\r\nThe viewer doesn't have all the dependencies of the datasets. We may add openpyxl to be able to show this dataset properly", "This viewer tool is deprecated now and the new viewer at https://huggingface.co/datasets/fake_news_english works fine, so I'm closing this issue" ]
2021-04-01T14:13:20Z
2022-10-05T13:22:02Z
2022-10-05T13:22:02Z
NONE
null
null
null
null
When I visit the [Huggingface - viewer](https://huggingface.co/datasets/viewer/) web site, under the dataset "fake_news_english" I get this error: > ImportError: To be able to use this dataset, you need to install the following dependencies['openpyxl'] using 'pip install # noqa: requires this pandas optional dependency for reading xlsx files' for instance' as well as the error traceback.
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2158/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2158/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2153
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2153/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2153/comments
https://api.github.com/repos/huggingface/datasets/issues/2153/events
https://github.com/huggingface/datasets/issues/2153
846,181,502
MDU6SXNzdWU4NDYxODE1MDI=
2,153
load_dataset ignoring features
{ "avatar_url": "https://avatars.githubusercontent.com/u/37592763?v=4", "events_url": "https://api.github.com/users/GuillemGSubies/events{/privacy}", "followers_url": "https://api.github.com/users/GuillemGSubies/followers", "following_url": "https://api.github.com/users/GuillemGSubies/following{/other_user}", "gists_url": "https://api.github.com/users/GuillemGSubies/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GuillemGSubies", "id": 37592763, "login": "GuillemGSubies", "node_id": "MDQ6VXNlcjM3NTkyNzYz", "organizations_url": "https://api.github.com/users/GuillemGSubies/orgs", "received_events_url": "https://api.github.com/users/GuillemGSubies/received_events", "repos_url": "https://api.github.com/users/GuillemGSubies/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GuillemGSubies/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GuillemGSubies/subscriptions", "type": "User", "url": "https://api.github.com/users/GuillemGSubies", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Hi ! Thanks for reporting. I opened a PR to fix this issue: #2201", "Nice question which helped me a lot! I have wasted a lot of time to the `DatasetDict` creation from a csv file. Hope the document of this module add some simple examples.", "Hi :) We're indeed working on tutorials that we will add to the docs...
2021-03-31T08:30:09Z
2022-10-05T13:29:12Z
2022-10-05T13:29:12Z
NONE
null
null
null
null
First of all, I'm sorry if this is a repeated issue or the changes are already in master; I searched and didn't find anything. I'm using datasets 1.5.0 ![image](https://user-images.githubusercontent.com/37592763/113114369-8f376580-920b-11eb-900d-94365b59f04b.png) As you can see, when I load the dataset the ClassLabels are ignored; I have to cast the dataset in order to make it work. Code to reproduce: ```python import datasets data_location = "/data/prueba_multiclase" features = datasets.Features( {"texto": datasets.Value("string"), "label": datasets.features.ClassLabel(names=["false", "true"])} ) dataset = datasets.load_dataset( "csv", data_files=data_location, delimiter="\t", features=features ) ``` Dataset I used: [prueba_multiclase.zip](https://github.com/huggingface/datasets/files/6235022/prueba_multiclase.zip) (it has to be unzipped) Thank you! ❤️
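Until the fix (#2201) landed, the workaround mentioned above was an explicit cast after loading; a sketch with the same features (the file path is illustrative, and on older `datasets` versions the in-place `cast_` was the available spelling):

```python
import datasets

features = datasets.Features(
    {
        "texto": datasets.Value("string"),
        "label": datasets.features.ClassLabel(names=["false", "true"]),
    }
)
dataset = datasets.load_dataset(
    "csv", data_files="prueba_multiclase.tsv", delimiter="\t"
)
# Apply the ClassLabel explicitly since load_dataset ignored the features arg
dataset["train"] = dataset["train"].cast(features)
```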
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2153/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2153/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2149
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2149/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2149/comments
https://api.github.com/repos/huggingface/datasets/issues/2149/events
https://github.com/huggingface/datasets/issues/2149
844,734,076
MDU6SXNzdWU4NDQ3MzQwNzY=
2,149
Telugu subset missing for xtreme tatoeba dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/50871412?v=4", "events_url": "https://api.github.com/users/cosmeowpawlitan/events{/privacy}", "followers_url": "https://api.github.com/users/cosmeowpawlitan/followers", "following_url": "https://api.github.com/users/cosmeowpawlitan/following{/other_user}", "gists_url": "https://api.github.com/users/cosmeowpawlitan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/cosmeowpawlitan", "id": 50871412, "login": "cosmeowpawlitan", "node_id": "MDQ6VXNlcjUwODcxNDEy", "organizations_url": "https://api.github.com/users/cosmeowpawlitan/orgs", "received_events_url": "https://api.github.com/users/cosmeowpawlitan/received_events", "repos_url": "https://api.github.com/users/cosmeowpawlitan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/cosmeowpawlitan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/cosmeowpawlitan/subscriptions", "type": "User", "url": "https://api.github.com/users/cosmeowpawlitan", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Good catch ! Thanks for reporting\r\n\r\nI just opened #2180 to fix this", "Fixed in #2180" ]
2021-03-30T15:26:34Z
2022-10-05T13:28:30Z
2022-10-05T13:28:30Z
CONTRIBUTOR
null
null
null
null
```python
from nlp import load_dataset
train_dataset = load_dataset('xtreme', 'tatoeba.tel')['validation']
# ValueError: BuilderConfig tatoeba.tel not found.
```
but language tel is actually included in xtreme (https://github.com/google-research/xtreme/blob/master/utils_preprocess.py):
```python
def tatoeba_preprocess(args):
    lang3_dict = {
        'afr':'af', 'ara':'ar', 'bul':'bg', 'ben':'bn',
        'deu':'de', 'ell':'el', 'spa':'es', 'est':'et',
        'eus':'eu', 'pes':'fa', 'fin':'fi', 'fra':'fr',
        'heb':'he', 'hin':'hi', 'hun':'hu', 'ind':'id',
        'ita':'it', 'jpn':'ja', 'jav':'jv', 'kat':'ka',
        'kaz':'kk', 'kor':'ko', 'mal':'ml', 'mar':'mr',
        'nld':'nl', 'por':'pt', 'rus':'ru', 'swh':'sw',
        'tam':'ta', 'tel':'te',  # <---- here
        'tha':'th', 'tgl':'tl', 'tur':'tr', 'urd':'ur',
        'vie':'vi', 'cmn':'zh', 'eng':'en',
    }
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2149/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2149/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2148
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2148/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2148/comments
https://api.github.com/repos/huggingface/datasets/issues/2148/events
https://github.com/huggingface/datasets/issues/2148
844,700,910
MDU6SXNzdWU4NDQ3MDA5MTA=
2,148
Add configurable options to `seqeval` metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/44571847?v=4", "events_url": "https://api.github.com/users/marrodion/events{/privacy}", "followers_url": "https://api.github.com/users/marrodion/followers", "following_url": "https://api.github.com/users/marrodion/following{/other_user}", "gists_url": "https://api.github.com/users/marrodion/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/marrodion", "id": 44571847, "login": "marrodion", "node_id": "MDQ6VXNlcjQ0NTcxODQ3", "organizations_url": "https://api.github.com/users/marrodion/orgs", "received_events_url": "https://api.github.com/users/marrodion/received_events", "repos_url": "https://api.github.com/users/marrodion/repos", "site_admin": false, "starred_url": "https://api.github.com/users/marrodion/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/marrodion/subscriptions", "type": "User", "url": "https://api.github.com/users/marrodion", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @marrodion. \r\n\r\nThanks for pointing this out. It would be great to incorporate this metric-specific enhancement.\r\n\r\nAnother possibility would be to require the user to input the scheme as a string `mode=\"strict\", scheme=\"IOB2\"` and then dynamically import the corresponding module using Python `impor...
2021-03-30T15:04:06Z
2021-04-15T13:49:46Z
2021-04-15T13:49:46Z
CONTRIBUTOR
null
null
null
null
Right now `load_metric("seqeval")` only works in the default mode of evaluation (equivalent to conll evaluation). However, the seqeval library [supports](https://github.com/chakki-works/seqeval#support-features) different evaluation schemes (IOB1, IOB2, etc.), which can be plugged in just by supporting additional kwargs in `Seqeval._compute`: https://github.com/huggingface/datasets/blob/85cf7ff920c90ca2e12bedca12b36d2a043c3da2/metrics/seqeval/seqeval.py#L109 Things that would be relevant are, for example, supporting `mode="strict", scheme=IOB2` to count only a full entity match as a true positive and omit partial matches. The only problem I see is that the spirit of `metrics` seems to be to not require additional imports from the user, and `seqeval` only supports schemes as objects, without any string aliases. It could be solved naively with a mapping like `{"IOB2": seqeval.scheme.IOB2}`, or left as is, requiring the user to explicitly import the scheme from `seqeval` if they want to configure it past the default implementation. If that makes sense, I am happy to implement the change.
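One of the comments suggests accepting the scheme as a string and resolving it dynamically with importlib; a sketch of that idea (the helper name is illustrative — the point is that the mapping lives inside the metric, so users never import seqeval themselves):

```python
import importlib

from seqeval.metrics import classification_report

def resolve_scheme(name):
    # e.g. "IOB2" -> seqeval.scheme.IOB2, without hardcoding a dict
    return getattr(importlib.import_module("seqeval.scheme"), name)

report = classification_report(
    [["B-PER", "I-PER", "O"]],   # y_true
    [["B-PER", "I-PER", "O"]],   # y_pred
    mode="strict",
    scheme=resolve_scheme("IOB2"),
)
print(report)
```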
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 1, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 2, "url": "https://api.github.com/repos/huggingface/datasets/issues/2148/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2148/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2146
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2146/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2146/comments
https://api.github.com/repos/huggingface/datasets/issues/2146/events
https://github.com/huggingface/datasets/issues/2146
844,673,244
MDU6SXNzdWU4NDQ2NzMyNDQ=
2,146
Dataset file size on disk is very large with 3D Array
{ "avatar_url": "https://avatars.githubusercontent.com/u/22685854?v=4", "events_url": "https://api.github.com/users/jblemoine/events{/privacy}", "followers_url": "https://api.github.com/users/jblemoine/followers", "following_url": "https://api.github.com/users/jblemoine/following{/other_user}", "gists_url": "https://api.github.com/users/jblemoine/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jblemoine", "id": 22685854, "login": "jblemoine", "node_id": "MDQ6VXNlcjIyNjg1ODU0", "organizations_url": "https://api.github.com/users/jblemoine/orgs", "received_events_url": "https://api.github.com/users/jblemoine/received_events", "repos_url": "https://api.github.com/users/jblemoine/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jblemoine/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jblemoine/subscriptions", "type": "User", "url": "https://api.github.com/users/jblemoine", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hi ! In the arrow file we store all the integers as uint8.\r\nSo your arrow file should weigh around `height x width x n_channels x n_images` bytes.\r\n\r\nWhat feature type do your TFDS dataset have ?\r\n\r\nIf it uses a `tfds.features.Image` type, then what is stored is the encoded data (as png or jpg for exampl...
2021-03-30T14:46:09Z
2021-04-16T13:07:02Z
null
NONE
null
null
null
null
Hi, I have created my own dataset using the provided dataset loading script. It is an image dataset where images are stored as a 3D Array with dtype=uint8. The actual size on disk is surprisingly large: it takes 520 MB. Here is some info from `dataset_info.json`: `{ "description": "", "citation": "", "homepage": "", "license": "", "features": { "image": { "shape": [224, 224, 3], "dtype": "uint8", "id": null, "_type": "Array3D", } }, "post_processed": null, "supervised_keys": null, "builder_name": "shot_type_image_dataset", "config_name": "default", "version": { "version_str": "0.0.0", "description": null, "major": 0, "minor": 0, "patch": 0, }, "splits": { "train": { "name": "train", "num_bytes": 520803408, "num_examples": 1479, "dataset_name": "shot_type_image_dataset", } }, "download_checksums": { "": { "num_bytes": 16940447118, "checksum": "5854035705efe08b0ed8f3cf3da7b4d29cba9055c2d2d702c79785350d72ee03", } }, "download_size": 16940447118, "post_processing_size": null, "dataset_size": 520803408, "size_in_bytes": 17461250526, }` I have created the same dataset with tensorflow_datasets and it takes only 125 MB on disk. I am wondering, is this normal behavior? I understand `Datasets` uses Arrow for serialization whereas TF uses TFRecords. This might be a problem for large datasets. Thanks for your help.
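For scale, the raw-pixel lower bound implied by the shape above (per the maintainer's formula in the comments) works out well below the observed 520 MB, which suggests the gap is serialization overhead rather than the pixels themselves. A quick check, with numbers taken from the dataset_info.json:

```python
height, width, channels, n_images = 224, 224, 3, 1479
raw_bytes = height * width * channels * n_images  # uint8 = 1 byte per value
print(f"{raw_bytes / 1e6:.1f} MB")  # ~222.6 MB of raw pixels vs. ~520 MB on disk
```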
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2146/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2146/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2144
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2144/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2144/comments
https://api.github.com/repos/huggingface/datasets/issues/2144/events
https://github.com/huggingface/datasets/issues/2144
844,352,067
MDU6SXNzdWU4NDQzNTIwNjc=
2,144
Loading wikipedia 20200501.en throws pyarrow related error
{ "avatar_url": "https://avatars.githubusercontent.com/u/26637405?v=4", "events_url": "https://api.github.com/users/TomPyonsuke/events{/privacy}", "followers_url": "https://api.github.com/users/TomPyonsuke/followers", "following_url": "https://api.github.com/users/TomPyonsuke/following{/other_user}", "gists_url": "https://api.github.com/users/TomPyonsuke/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/TomPyonsuke", "id": 26637405, "login": "TomPyonsuke", "node_id": "MDQ6VXNlcjI2NjM3NDA1", "organizations_url": "https://api.github.com/users/TomPyonsuke/orgs", "received_events_url": "https://api.github.com/users/TomPyonsuke/received_events", "repos_url": "https://api.github.com/users/TomPyonsuke/repos", "site_admin": false, "starred_url": "https://api.github.com/users/TomPyonsuke/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/TomPyonsuke/subscriptions", "type": "User", "url": "https://api.github.com/users/TomPyonsuke", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "That's how I loaded the dataset\r\n```python\r\nfrom datasets import load_dataset\r\nds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache')\r\n```", "Hi ! It looks like the arrow file in the folder\r\n`/usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa...
2021-03-30T10:38:31Z
2021-04-01T09:21:17Z
null
NONE
null
null
null
null
**Problem description** I am getting the following error when trying to load wikipedia/20200501.en dataset. **Error log** Downloading and preparing dataset wikipedia/20200501.en (download: 16.99 GiB, generated: 17.07 GiB, post-processed: Unknown size, total: 34.06 GiB) to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931... Downloading: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 14.6k/14.6k [00:00<00:00, 5.41MB/s] Downloading: 59%|███████████████████████████████████████████████████████████████████████████████████████▊ | 10.7G/18.3G [11:30<08:08, 15.5MB/s] Dataset wikipedia downloaded and prepared to /usr/local/workspace/NAS_NLP/cache/wikipedia/20200501.en/1.0.0/50aa706aa417bb77d910ad61211cc672c0ef3e0f224225a5e0a18277ade8b931. Subsequent calls will reuse this data. Traceback (most recent call last): File "load_wiki.py", line 2, in <module> ds = load_dataset('wikipedia', '20200501.en', cache_dir='/usr/local/workspace/NAS_NLP/cache') File "/usr/local/lib/python3.6/dist-packages/datasets/load.py", line 751, in load_dataset ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 746, in as_dataset map_tuple=True, File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in map_nested _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 204, in <listcomp> _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm) File "/usr/local/lib/python3.6/dist-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested return function(data_struct) File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 763, in _build_single_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/builder.py", line 835, in _as_dataset in_memory=in_memory, File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 215, in read return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 236, in read_files pa_table = self._read_files(files, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 171, in _read_files pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename pa_table = ArrowReader.read_table(filename, in_memory=in_memory) File "/usr/local/lib/python3.6/dist-packages/datasets/arrow_reader.py", line 324, in read_table pa_table = f.read_all() File "pyarrow/ipc.pxi", line 544, in pyarrow.lib.RecordBatchReader.read_all File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status OSError: Expected to be able to read 9176784 bytes for message body, got 4918712 **Detailed version info** datasets==1.5.0 - dataclasses [required: Any, installed: 0.8] - dill [required: Any, installed: 0.3.3] - fsspec [required: Any, installed: 0.8.7] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - huggingface-hub [required: 
<0.1.0, installed: 0.0.7] - filelock [required: Any, installed: 3.0.12] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - requests [required: Any, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: Any, installed: 4.49.0] - importlib-metadata [required: Any, installed: 1.7.0] - zipp [required: >=0.5, installed: 3.1.0] - multiprocess [required: Any, installed: 0.70.11.1] - dill [required: >=0.3.3, installed: 0.3.3] - numpy [required: >=1.17, installed: 1.17.0] - pandas [required: Any, installed: 1.1.5] - numpy [required: >=1.15.4, installed: 1.17.0] - python-dateutil [required: >=2.7.3, installed: 2.8.0] - six [required: >=1.5, installed: 1.15.0] - pytz [required: >=2017.2, installed: 2020.1] - pyarrow [required: >=0.17.1, installed: 3.0.0] - numpy [required: >=1.16.6, installed: 1.17.0] - requests [required: >=2.19.0, installed: 2.24.0] - certifi [required: >=2017.4.17, installed: 2020.6.20] - chardet [required: >=3.0.2,<4, installed: 3.0.4] - idna [required: >=2.5,<3, installed: 2.6] - urllib3 [required: >=1.21.1,<1.26,!=1.25.1,!=1.25.0, installed: 1.25.10] - tqdm [required: >=4.27,<4.50.0, installed: 4.49.0] - xxhash [required: Any, installed: 2.0.0]
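The pyarrow `OSError: Expected to be able to read 9176784 bytes for message body, got 4918712` typically means the cached Arrow file is incomplete; the log above also shows the download stopping at 59% before the dataset is marked as prepared. A minimal recovery sketch, assuming the cache is simply truncated and that your `datasets` version accepts the string `download_mode` shown here (1.5.0 takes the equivalent `GenerateMode.FORCE_REDOWNLOAD` enum value):

```python
from datasets import load_dataset

# Rebuild the cache from scratch instead of reusing the (likely truncated)
# local copy. Note that this re-downloads the full ~17 GB dump.
ds = load_dataset(
    "wikipedia",
    "20200501.en",
    cache_dir="/usr/local/workspace/NAS_NLP/cache",
    download_mode="force_redownload",
)
```

Deleting the `wikipedia/20200501.en` directory under the cache path by hand before calling `load_dataset` again should have the same effect.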
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2144/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2144/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2139
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2139/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2139/comments
https://api.github.com/repos/huggingface/datasets/issues/2139/events
https://github.com/huggingface/datasets/issues/2139
843,662,613
MDU6SXNzdWU4NDM2NjI2MTM=
2,139
TypeError when using save_to_disk in a dataset loaded with ReadInstruction split
{ "avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4", "events_url": "https://api.github.com/users/PedroMLF/events{/privacy}", "followers_url": "https://api.github.com/users/PedroMLF/followers", "following_url": "https://api.github.com/users/PedroMLF/following{/other_user}", "gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PedroMLF", "id": 22480495, "login": "PedroMLF", "node_id": "MDQ6VXNlcjIyNDgwNDk1", "organizations_url": "https://api.github.com/users/PedroMLF/orgs", "received_events_url": "https://api.github.com/users/PedroMLF/received_events", "repos_url": "https://api.github.com/users/PedroMLF/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions", "type": "User", "url": "https://api.github.com/users/PedroMLF", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi !\r\nI think this has been fixed recently on `master`.\r\nCan you try again by installing `datasets` from `master` ?\r\n```\r\npip install git+https://github.com/huggingface/datasets.git\r\n```", "Hi!\r\n\r\nUsing that version of the code solves the issue. Thanks!" ]
2021-03-29T18:23:54Z
2021-03-30T09:12:53Z
2021-03-30T09:12:53Z
NONE
null
null
null
null
Hi, Loading a dataset with `load_dataset` using a split defined via `ReadInstruction` and then saving it to disk results in the following error: `TypeError: Object of type ReadInstruction is not JSON serializable`. Here is the minimal reproducible example: ```python from datasets import load_dataset from datasets import ReadInstruction data_1 = load_dataset( "wikiann", "en", split="validation", ) data_1.save_to_disk("temporary_path_1") print("Save with regular split works.") data_2 = load_dataset( "wikiann", "en", split=ReadInstruction("validation", to=50, unit="%"), ) data_2.save_to_disk("temporary_path_2") ``` and the corresponding output: ``` Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9) Save with regular split works. Reusing dataset wikiann (/xxxxx/.cache/huggingface/datasets/wikiann/en/1.1.0/0b11a6fb31eea02f38ca17610657bfba3206100685283014daceb8da291c3be9) Traceback (most recent call last): File "bug.py", line 20, in <module> data_2.save_to_disk("temporary_path_2") File "/xxxxx/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 645, in save_to_disk json.dump(state, state_file, indent=2, sort_keys=True) File "/usr/lib/python3.7/json/__init__.py", line 179, in dump for chunk in iterable: File "/usr/lib/python3.7/json/encoder.py", line 431, in _iterencode yield from _iterencode_dict(o, _current_indent_level) File "/usr/lib/python3.7/json/encoder.py", line 405, in _iterencode_dict yield from chunks File "/usr/lib/python3.7/json/encoder.py", line 438, in _iterencode o = _default(o) File "/usr/lib/python3.7/json/encoder.py", line 179, in default raise TypeError(f'Object of type {o.__class__.__name__} ' TypeError: Object of type ReadInstruction is not JSON serializable ``` Let me know if there is some misuse from my end. Thanks in advance.
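Until the fix on `master` reaches a release, one possible workaround is to express the split with the string slicing syntax instead of a `ReadInstruction` object, so that nothing non-JSON-serializable ends up in the dataset state. A minimal sketch, assuming `validation[:50%]` selects the same subset as `ReadInstruction("validation", to=50, unit="%")`:

```python
from datasets import load_dataset

# The string split syntax avoids storing a ReadInstruction object,
# so save_to_disk can serialize the dataset state to JSON.
data_2 = load_dataset("wikiann", "en", split="validation[:50%]")
data_2.save_to_disk("temporary_path_2")
```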
{ "avatar_url": "https://avatars.githubusercontent.com/u/22480495?v=4", "events_url": "https://api.github.com/users/PedroMLF/events{/privacy}", "followers_url": "https://api.github.com/users/PedroMLF/followers", "following_url": "https://api.github.com/users/PedroMLF/following{/other_user}", "gists_url": "https://api.github.com/users/PedroMLF/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PedroMLF", "id": 22480495, "login": "PedroMLF", "node_id": "MDQ6VXNlcjIyNDgwNDk1", "organizations_url": "https://api.github.com/users/PedroMLF/orgs", "received_events_url": "https://api.github.com/users/PedroMLF/received_events", "repos_url": "https://api.github.com/users/PedroMLF/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PedroMLF/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PedroMLF/subscriptions", "type": "User", "url": "https://api.github.com/users/PedroMLF", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2139/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2139/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2135
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2135/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2135/comments
https://api.github.com/repos/huggingface/datasets/issues/2135/events
https://github.com/huggingface/datasets/issues/2135
843,246,344
MDU6SXNzdWU4NDMyNDYzNDQ=
2,135
en language data from MLQA dataset is missing
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Indeed only the languages of the `translate-train` data are included...\r\nI can't find a link to download the english train set on https://github.com/facebookresearch/MLQA though, do you know where we can download it ?", "Hi @lhoestq \r\nthank you very much for coming back to me, now I see, you are right, ...
2021-03-29T10:47:50Z
2021-03-30T10:20:23Z
2021-03-30T10:20:23Z
CONTRIBUTOR
null
null
null
null
Hi, I need the mlqa-translate-train.en dataset, but it is missing from the MLQA dataset. Could you have a look, please? @lhoestq Thank you for your help in fixing this issue.
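As a quick check of which MLQA configurations the library actually ships, you can list the config names. A small sketch, assuming a `datasets` version recent enough to expose `get_dataset_config_names`:

```python
from datasets import get_dataset_config_names

# Prints entries such as "mlqa-translate-train.ar"; the absence of an
# "mlqa-translate-train.en" entry confirms the English config is missing.
print(get_dataset_config_names("mlqa"))
```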
{ "avatar_url": "https://avatars.githubusercontent.com/u/6278280?v=4", "events_url": "https://api.github.com/users/rabeehk/events{/privacy}", "followers_url": "https://api.github.com/users/rabeehk/followers", "following_url": "https://api.github.com/users/rabeehk/following{/other_user}", "gists_url": "https://api.github.com/users/rabeehk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rabeehk", "id": 6278280, "login": "rabeehk", "node_id": "MDQ6VXNlcjYyNzgyODA=", "organizations_url": "https://api.github.com/users/rabeehk/orgs", "received_events_url": "https://api.github.com/users/rabeehk/received_events", "repos_url": "https://api.github.com/users/rabeehk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rabeehk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rabeehk/subscriptions", "type": "User", "url": "https://api.github.com/users/rabeehk", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2135/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2135/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2134
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2134/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2134/comments
https://api.github.com/repos/huggingface/datasets/issues/2134/events
https://github.com/huggingface/datasets/issues/2134
843,242,849
MDU6SXNzdWU4NDMyNDI4NDk=
2,134
Saving large in-memory datasets with save_to_disk crashes because of pickling
{ "avatar_url": "https://avatars.githubusercontent.com/u/5815801?v=4", "events_url": "https://api.github.com/users/prokopCerny/events{/privacy}", "followers_url": "https://api.github.com/users/prokopCerny/followers", "following_url": "https://api.github.com/users/prokopCerny/following{/other_user}", "gists_url": "https://api.github.com/users/prokopCerny/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/prokopCerny", "id": 5815801, "login": "prokopCerny", "node_id": "MDQ6VXNlcjU4MTU4MDE=", "organizations_url": "https://api.github.com/users/prokopCerny/orgs", "received_events_url": "https://api.github.com/users/prokopCerny/received_events", "repos_url": "https://api.github.com/users/prokopCerny/repos", "site_admin": false, "starred_url": "https://api.github.com/users/prokopCerny/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/prokopCerny/subscriptions", "type": "User", "url": "https://api.github.com/users/prokopCerny", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Hi !\r\nIndeed `save_to_disk` doesn't call pickle anymore. Though the `OverflowError` can still appear for in-memory datasets bigger than 4GB. This happens when doing this for example:\r\n```python\r\nimport pyarrow as pa\r\nimport pickle\r\n\r\narr = pa.array([0] * ((4 * 8 << 30) // 64))\r\ntable = pa.Table.from_...
2021-03-29T10:43:15Z
2021-05-03T17:59:21Z
2021-05-03T17:59:21Z
NONE
null
null
null
null
Using Datasets 1.5.0 on Python 3.7. Recently I've been working on medium-to-large datasets (pretokenized raw text sizes from a few gigabytes to low tens of gigabytes), and have found that several preprocessing steps are massively faster when done in memory. Since I can requisition a lot of RAM, I decided to do these steps completely outside the datasets library. So my workflow is to run several .map() calls on a datasets object; then, for the operation that is faster in memory, extract the necessary columns from the dataset and drop it whole, do the transformation in memory, and create a fresh Dataset object using .from_dict() or another method. When I then try to call save_to_disk(path) on the dataset, it crashes because of pickling, which appears to be caused by the use of an old pickle protocol that doesn't support objects larger than 4 GiB. ``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 80, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 75, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 60, in tokenize_and_chunkify contexts_dataset.save_to_disk(chunked_path) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 457, in save_to_disk self = pickle.loads(pickle.dumps(self)) OverflowError: cannot serialize a bytes object larger than 4 GiB ``` From what I've seen, this issue may already be fixed, as the line `self = pickle.loads(pickle.dumps(self))` does not appear to be present in the current state of the repository. To save these datasets to disk, I've resorted to calling .map() over them with `function=None` and specifying the .arrow cache file, and then creating a new dataset using the .from_file() method, which I can then safely save to disk (a sketch of this workaround is given after the traceback below). An additional issue when working with these large in-memory datasets comes up when using multiprocessing, and it is again related to pickling. I've tried to speed up the mapping with function=None by setting num_proc to the available CPU count, and I again get issues with transferring the dataset, with the following traceback. I am not sure if I should open a separate issue for that.
``` Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 
504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in 
save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295Traceback (most recent call last): File "./tokenize_and_chunkify_in_memory.py", line 94, in <module> main() File "./tokenize_and_chunkify_in_memory.py", line 89, in main tokenize_and_chunkify(config) File "./tokenize_and_chunkify_in_memory.py", line 67, in tokenize_and_chunkify contexts_dataset.map(function=None, cache_file_name=str(output_dir_path / "tmp.arrow"), writer_batch_size=50000, num_proc=config.threads) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in map transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1485, in <listcomp> transformed_shards = [r.get() for r in results] File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 657, in get raise self._value File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/pool.py", line 431, in _handle_tasks put(task) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/connection.py", line 209, in send self._send_bytes(_ForkingPickler.dumps(obj)) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/multiprocess/reduction.py", line 54, in dumps cls(buf, protocol, *args, **kwds).dump(obj) File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 454, in dump StockPickler.dump(self, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 437, in dump self.save(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, 
in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 662, in save_reduce save(state) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/home/cernypro/dev/envs/huggingface_gpu/lib/python3.7/site-packages/dill/_dill.py", line 941, in save_module_dict StockPickler.save_dict(pickler, obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 859, in save_dict self._batch_setitems(obj.items()) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 885, in _batch_setitems save(v) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File 
"/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 846, in _batch_appends save(tmp[0]) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 789, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 819, in save_list self._batch_appends(obj) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 843, in _batch_appends save(x) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 549, in save self.save_reduce(obj=obj, *rv) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 638, in save_reduce save(args) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 774, in save_tuple save(element) File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 504, in save f(self, obj) # Call unbound method with explicit self File "/mnt/appl/software/Python/3.7.4-GCCcore-8.3.0/lib/python3.7/pickle.py", line 732, in save_bytes self._write_large_bytes(BINBYTES + pack("<I", n), obj) struct.error: 'I' format requires 0 <= number <= 4294967295 ```
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2134/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2134/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2133
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2133/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2133/comments
https://api.github.com/repos/huggingface/datasets/issues/2133/events
https://github.com/huggingface/datasets/issues/2133
843,149,680
MDU6SXNzdWU4NDMxNDk2ODA=
2,133
bug in mlqa dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "If you print those questions, you get readable texts:\r\n```python\r\n>>> questions = [\r\n... \"\\u0645\\u062a\\u0649 \\u0628\\u062f\\u0627\\u062a \\u0627\\u0644\\u0645\\u062c\\u0644\\u0629 \\u0627\\u0644\\u0645\\u062f\\u0631\\u0633\\u064a\\u0629 \\u0641\\u064a \\u0646\\u0648\\u062a\\u0631\\u062f\\u0627\\u064...
2021-03-29T09:03:09Z
2021-03-30T17:40:57Z
2021-03-30T17:40:57Z
NONE
null
null
null
null
Hi, looking into the MLQA dataset for language "ar": ``` "question": [ "\u0645\u062a\u0649 \u0628\u062f\u0627\u062a \u0627\u0644\u0645\u062c\u0644\u0629 \u0627\u0644\u0645\u062f\u0631\u0633\u064a\u0629 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645 \u0628\u0627\u0644\u0646\u0634\u0631?", "\u0643\u0645 \u0645\u0631\u0629 \u064a\u062a\u0645 \u0646\u0634\u0631\u0647\u0627 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?", "\u0645\u0627 \u0647\u064a \u0627\u0644\u0648\u0631\u0642\u0629 \u0627\u0644\u064a\u0648\u0645\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?", "\u0643\u0645 \u0639\u062f\u062f \u0627\u0644\u0627\u0648\u0631\u0627\u0642 \u0627\u0644\u0627\u062e\u0628\u0627\u0631\u064a\u0629 \u0644\u0644\u0637\u0644\u0627\u0628 \u0627\u0644\u062a\u064a \u0648\u062c\u062f\u062a \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?", "\u0641\u064a \u0627\u064a \u0633\u0646\u0629 \u0628\u062f\u0627\u062a \u0648\u0631\u0642\u0629 \u0627\u0644\u0637\u0627\u0644\u0628 \u0627\u0644\u062d\u0633 \u0627\u0644\u0633\u0644\u064a\u0645 \u0628\u0627\u0644\u0646\u0634\u0631 \u0641\u064a \u0646\u0648\u062a\u0631\u062f\u0627\u0645?" ] ``` the questions appear to be in the wrong format and are not readable. Could you please have a look? Thanks @lhoestq
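For what it's worth, the `\uXXXX` sequences are standard JSON escapes for non-ASCII characters rather than corruption; once parsed, the strings render as ordinary Arabic. A small sketch:

```python
import json

# The JSON escape sequence decodes to readable Arabic once parsed:
escaped = '"\\u0645\\u062a\\u0649"'  # the first word of question 1
print(json.loads(escaped))          # -> متى ("when")
```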
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2133/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2133/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2132
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2132/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2132/comments
https://api.github.com/repos/huggingface/datasets/issues/2132/events
https://github.com/huggingface/datasets/issues/2132
843,142,822
MDU6SXNzdWU4NDMxNDI4MjI=
2,132
TydiQA dataset is mixed and is not split per language
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "You can filter the languages this way:\r\n```python\r\ntydiqa_en = tydiqa_dataset.filter(lambda x: x[\"language\"] == \"english\")\r\n```\r\n\r\nOtherwise maybe we can have one configuration per language ?\r\nWhat do you think of this for example ?\r\n\r\n```python\r\nload_dataset(\"tydiqa\", \"primary_task.en\")\...
2021-03-29T08:56:21Z
2021-04-04T09:57:15Z
null
NONE
null
null
null
null
Hi @lhoestq. Currently TydiQA is mixed, and the user can only access the whole training set across all languages: https://www.tensorflow.org/datasets/catalog/tydi_qa. To use this dataset, one needs to train/evaluate on each language separately, and having them mixed makes that hard. It would be much more convenient for users to have the data split per language, and I would appreciate your help on this. Meanwhile, until it is hopefully split per language, I would greatly appreciate guidance on how I can preprocess the data to get it per language. Thanks a lot.
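Until per-language configurations exist, one possible preprocessing step is to filter on the `language` column; a minimal sketch, assuming the loaded split exposes a `language` field as in the TFDS version:

```python
from datasets import load_dataset

tydiqa = load_dataset("tydiqa", "primary_task", split="train")
# Keep only one language; repeat with the other language names as needed.
tydiqa_en = tydiqa.filter(lambda example: example["language"] == "english")
```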
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2132/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2132/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2131
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2131/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2131/comments
https://api.github.com/repos/huggingface/datasets/issues/2131/events
https://github.com/huggingface/datasets/issues/2131
843,133,112
MDU6SXNzdWU4NDMxMzMxMTI=
2,131
When training with Multi-Node Multi-GPU the worker 2 has TypeError: 'NoneType' object
{ "avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4", "events_url": "https://api.github.com/users/andy-yangz/events{/privacy}", "followers_url": "https://api.github.com/users/andy-yangz/followers", "following_url": "https://api.github.com/users/andy-yangz/following{/other_user}", "gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andy-yangz", "id": 23011317, "login": "andy-yangz", "node_id": "MDQ6VXNlcjIzMDExMzE3", "organizations_url": "https://api.github.com/users/andy-yangz/orgs", "received_events_url": "https://api.github.com/users/andy-yangz/received_events", "repos_url": "https://api.github.com/users/andy-yangz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions", "type": "User", "url": "https://api.github.com/users/andy-yangz", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "Hi ! Thanks for reporting\r\nI was able to reproduce this issue. This was caused by missing split infos if a worker reloads the cache of the other worker.\r\n\r\nI just opened https://github.com/huggingface/datasets/pull/2137 to fix this issue", "The PR got merged :)\r\nFeel free to try it out on the `master` br...
2021-03-29T08:45:58Z
2021-04-10T11:08:55Z
2021-04-10T11:08:55Z
NONE
null
null
null
null
version: 1.5.0 I hit a very strange error. I am training a large-scale language model and need to train on 2 machines (workers), and sometimes I get `TypeError: 'NoneType' object is not iterable`. This is the traceback: ``` Traceback (most recent call last): File "run_gpt.py", line 316, in <module> main() File "run_gpt.py", line 222, in main delimiter="\t", column_names=["input_ids", "attention_mask", "chinese_ref"]) File "/data/miniconda3/lib/python3.7/site-packages/datasets/load.py", line 747, in load_dataset ds = builder_instance.as_dataset(...) File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 513, in download_and_prepare self.download_post_processing_resources(dl_manager) File "/data/miniconda3/lib/python3.7/site-packages/datasets/builder.py", line 673, in download_post_processing_resources for split in self.info.splits: TypeError: 'NoneType' object is not iterable WARNING:datasets.builder:Reusing dataset csv (/usr/local/app/.cache/huggingface/datasets/csv/default-1c257ebd48e225e7/0.0.0/2960f95a26e85d40ca41a230ac88787f715ee3003edaacb8b1f0891e9f04dda2) Traceback (most recent call last): File "/data/miniconda3/lib/python3.7/runpy.py", line 193, in _run_module_as_main "__main__", mod_spec) File "/data/miniconda3/lib/python3.7/runpy.py", line 85, in _run_code exec(code, run_globals) File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 340, in <module> main() File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 326, in main sigkill_handler(signal.SIGTERM, None) # not coming back File "/data/miniconda3/lib/python3.7/site-packages/torch/distributed/launch.py", line 301, in sigkill_handler raise subprocess.CalledProcessError(returncode=last_return_code, cmd=cmd) ``` On worker 1 it loads the dataset fine; however, on worker 2 I get this error. I hit it only from time to time; sometimes the run just goes well.
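The intermittent failure pattern suggests a race in which one worker reloads a cache that another worker is still writing. Pending the library fix, one common mitigation is to let a single rank prepare the dataset while the others wait at a barrier. A sketch, assuming the `torch.distributed` process group is already initialized by the launcher and that all workers share the cache directory (with node-local caches, gate on the local rank instead):

```python
import torch.distributed as dist
from datasets import load_dataset

def load_dataset_rank0_first(*args, **kwargs):
    # Non-zero ranks wait here until rank 0 has finished preparing the cache.
    if dist.get_rank() != 0:
        dist.barrier()
    ds = load_dataset(*args, **kwargs)
    # Rank 0 releases the other ranks only after the cache is complete.
    if dist.get_rank() == 0:
        dist.barrier()
    return ds
```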
{ "avatar_url": "https://avatars.githubusercontent.com/u/23011317?v=4", "events_url": "https://api.github.com/users/andy-yangz/events{/privacy}", "followers_url": "https://api.github.com/users/andy-yangz/followers", "following_url": "https://api.github.com/users/andy-yangz/following{/other_user}", "gists_url": "https://api.github.com/users/andy-yangz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/andy-yangz", "id": 23011317, "login": "andy-yangz", "node_id": "MDQ6VXNlcjIzMDExMzE3", "organizations_url": "https://api.github.com/users/andy-yangz/orgs", "received_events_url": "https://api.github.com/users/andy-yangz/received_events", "repos_url": "https://api.github.com/users/andy-yangz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/andy-yangz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/andy-yangz/subscriptions", "type": "User", "url": "https://api.github.com/users/andy-yangz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 1, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2131/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2131/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2130
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2130/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2130/comments
https://api.github.com/repos/huggingface/datasets/issues/2130/events
https://github.com/huggingface/datasets/issues/2130
843,111,936
MDU6SXNzdWU4NDMxMTE5MzY=
2,130
wikiann dataset is missing columns
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
[ "Here please find TFDS format of this dataset: https://www.tensorflow.org/datasets/catalog/wikiann\r\nwhere there is a span column, this is really necessary to be able to use the data, and I appreciate your help @lhoestq ", "Hi !\r\nApparently you can get the spans from the NER tags using `tags_to_spans` defined ...
2021-03-29T08:23:00Z
2021-08-27T14:44:18Z
2021-08-27T14:44:18Z
NONE
null
null
null
null
Hi, the WikiANN dataset needs to have a "spans" column, which is necessary to be able to use this dataset, but this column is missing from the huggingface datasets version. Could you please have a look? Thank you @lhoestq
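In the meantime, spans can be derived from the existing `tokens` and `ner_tags` columns (converting the integer tags to label strings via `ds.features["ner_tags"].feature.int2str`). A minimal IOB-to-spans sketch, assuming the usual WikiANN tag set (`O`, `B-PER`, `I-PER`, `B-ORG`, `I-ORG`, `B-LOC`, `I-LOC`):

```python
def tags_to_spans(tokens, tags):
    """Collect "TYPE: surface text" spans from IOB-tagged tokens."""
    spans, current, ent_type = [], [], None
    for token, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            if current:  # close the previous entity
                spans.append(f"{ent_type}: " + " ".join(current))
            current, ent_type = [token], tag[2:]
        elif tag.startswith("I-") and current:
            current.append(token)
        else:  # "O" tag, or a stray "I-" with no open entity
            if current:
                spans.append(f"{ent_type}: " + " ".join(current))
            current, ent_type = [], None
    if current:
        spans.append(f"{ent_type}: " + " ".join(current))
    return spans
```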
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2130/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2130/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2129
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2129/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2129/comments
https://api.github.com/repos/huggingface/datasets/issues/2129/events
https://github.com/huggingface/datasets/issues/2129
843,033,656
MDU6SXNzdWU4NDMwMzM2NTY=
2,129
How to train BERT model with next sentence prediction?
{ "avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4", "events_url": "https://api.github.com/users/jnishi/events{/privacy}", "followers_url": "https://api.github.com/users/jnishi/followers", "following_url": "https://api.github.com/users/jnishi/following{/other_user}", "gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jnishi", "id": 836541, "login": "jnishi", "node_id": "MDQ6VXNlcjgzNjU0MQ==", "organizations_url": "https://api.github.com/users/jnishi/orgs", "received_events_url": "https://api.github.com/users/jnishi/received_events", "repos_url": "https://api.github.com/users/jnishi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnishi/subscriptions", "type": "User", "url": "https://api.github.com/users/jnishi", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi !\r\nWe're not using `TextDatasetForNextSentencePrediction` in `datasets`.\r\nAlthough you can probably use the `TextDatasetForNextSentencePrediction.create_examples_from_document` on a dataset to prepare it for next sentence prediction.", "Thanks.\r\n\r\nDo you mean that `TextDatasetForNextSentencePrediction...
2021-03-29T06:48:03Z
2021-04-01T04:58:40Z
2021-04-01T04:58:40Z
NONE
null
null
null
null
Hello. I'm trying to pretrain a BERT model with next sentence prediction. Is there any function that supports next sentence prediction, like `TextDatasetForNextSentencePrediction` from `huggingface/transformers`?
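`datasets` itself does not ship an NSP helper; you can either apply `TextDatasetForNextSentencePrediction.create_examples_from_document` from `transformers` to your data, or pair sentences by hand. A minimal hand-rolled sketch (function and field names are illustrative) that builds 50/50 positive/negative pairs from documents given as lists of sentences, using the `transformers` convention of 0 = actual next sentence, 1 = random sentence:

```python
import random

def make_nsp_pairs(documents, seed=0):
    """documents: list of documents, each a list of sentence strings."""
    rng = random.Random(seed)
    pairs = []
    for doc in documents:
        for i in range(len(doc) - 1):
            if rng.random() < 0.5:
                # Positive pair: the true next sentence, label 0.
                pairs.append({"sentence_a": doc[i], "sentence_b": doc[i + 1],
                              "next_sentence_label": 0})
            else:
                # Negative pair: a sentence from a randomly chosen document, label 1.
                random_doc = rng.choice(documents)
                pairs.append({"sentence_a": doc[i], "sentence_b": rng.choice(random_doc),
                              "next_sentence_label": 1})
    return pairs
```

The resulting list can then be turned into a `Dataset` with `Dataset.from_dict` for tokenization via `.map()`.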
{ "avatar_url": "https://avatars.githubusercontent.com/u/836541?v=4", "events_url": "https://api.github.com/users/jnishi/events{/privacy}", "followers_url": "https://api.github.com/users/jnishi/followers", "following_url": "https://api.github.com/users/jnishi/following{/other_user}", "gists_url": "https://api.github.com/users/jnishi/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/jnishi", "id": 836541, "login": "jnishi", "node_id": "MDQ6VXNlcjgzNjU0MQ==", "organizations_url": "https://api.github.com/users/jnishi/orgs", "received_events_url": "https://api.github.com/users/jnishi/received_events", "repos_url": "https://api.github.com/users/jnishi/repos", "site_admin": false, "starred_url": "https://api.github.com/users/jnishi/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/jnishi/subscriptions", "type": "User", "url": "https://api.github.com/users/jnishi", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2129/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2129/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2128
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2128/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2128/comments
https://api.github.com/repos/huggingface/datasets/issues/2128/events
https://github.com/huggingface/datasets/issues/2128
843,023,910
MDU6SXNzdWU4NDMwMjM5MTA=
2,128
Dialogue action slot name and value are reversed in MultiWoZ 2.2
{ "avatar_url": "https://avatars.githubusercontent.com/u/31605305?v=4", "events_url": "https://api.github.com/users/adamlin120/events{/privacy}", "followers_url": "https://api.github.com/users/adamlin120/followers", "following_url": "https://api.github.com/users/adamlin120/following{/other_user}", "gists_url": "https://api.github.com/users/adamlin120/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adamlin120", "id": 31605305, "login": "adamlin120", "node_id": "MDQ6VXNlcjMxNjA1MzA1", "organizations_url": "https://api.github.com/users/adamlin120/orgs", "received_events_url": "https://api.github.com/users/adamlin120/received_events", "repos_url": "https://api.github.com/users/adamlin120/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adamlin120/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adamlin120/subscriptions", "type": "User", "url": "https://api.github.com/users/adamlin120", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "Hi\r\nGood catch ! Thanks for reporting\r\n\r\nIf you are interested in contributing, feel free to open a PR to fix this :) " ]
2021-03-29T06:34:02Z
2021-03-31T12:48:01Z
2021-03-31T12:48:01Z
CONTRIBUTOR
null
null
null
null
Hi @yjernite, thank you for adding MultiWoZ 2.2 to the Hugging Face datasets platform. It is very helpful! I spotted an error: the order of the dialogue action slot names and values is reversed. https://github.com/huggingface/datasets/blob/649b2c469779bc4221e1b6969aa2496d63eb5953/datasets/multi_woz_v22/multi_woz_v22.py#L251-L262
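For illustration, a toy sketch of the swap being reported. The keys `act_slot_name` and `act_slot_value` mirror the dataset's fields, but the surrounding code is hypothetical and not taken from the actual script:

```python
# Hypothetical demonstration of the reported bug: slot names and values
# end up in each other's fields when the dialogue-act dicts are built.
names = ["area", "pricerange"]
values = ["centre", "cheap"]

buggy = [{"act_slot_name": v, "act_slot_value": n} for n, v in zip(names, values)]  # reversed
fixed = [{"act_slot_name": n, "act_slot_value": v} for n, v in zip(names, values)]

assert fixed[0] == {"act_slot_name": "area", "act_slot_value": "centre"}
print("buggy:", buggy[0])
print("fixed:", fixed[0])
```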
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2128/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2128/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2125
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2125/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2125/comments
https://api.github.com/repos/huggingface/datasets/issues/2125/events
https://github.com/huggingface/datasets/issues/2125
842,690,570
MDU6SXNzdWU4NDI2OTA1NzA=
2,125
Is dataset timit_asr broken?
{ "avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4", "events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}", "followers_url": "https://api.github.com/users/kosuke-kitahara/followers", "following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}", "gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kosuke-kitahara", "id": 42398050, "login": "kosuke-kitahara", "node_id": "MDQ6VXNlcjQyMzk4MDUw", "organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs", "received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events", "repos_url": "https://api.github.com/users/kosuke-kitahara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions", "type": "User", "url": "https://api.github.com/users/kosuke-kitahara", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthanks for the report, but this is a duplicate of #2052. ", "@mariosasko \r\nThank you for your quick response! Following #2052, I've fixed the problem." ]
2021-03-28T08:30:18Z
2021-03-28T12:29:25Z
2021-03-28T12:29:25Z
NONE
null
null
null
null
Using the `timit_asr` dataset, I saw that all records are the same.

```python
from datasets import load_dataset, load_metric

timit = load_dataset("timit_asr")

from datasets import ClassLabel
import random
import pandas as pd
from IPython.display import display, HTML

def show_random_elements(dataset, num_examples=10):
    assert num_examples <= len(dataset), "Can't pick more elements than there are in the dataset."
    picks = []
    for _ in range(num_examples):
        pick = random.randint(0, len(dataset) - 1)
        while pick in picks:
            pick = random.randint(0, len(dataset) - 1)
        picks.append(pick)
    df = pd.DataFrame(dataset[picks])
    display(HTML(df.to_html()))

show_random_elements(timit['train'].remove_columns(["file", "phonetic_detail", "word_detail", "dialect_region", "id", "sentence_type", "speaker_id"]), num_examples=20)
```

Output:

<img width="312" alt="Screen Shot 2021-03-28 at 17 29 04" src="https://user-images.githubusercontent.com/42398050/112746646-21acee80-8feb-11eb-84f3-dbb5d4269724.png">

I double-checked it [here](https://huggingface.co/datasets/viewer/) and ran into the same problem.

<img width="1374" alt="Screen Shot 2021-03-28 at 17 32 07" src="https://user-images.githubusercontent.com/42398050/112746698-9bdd7300-8feb-11eb-97ed-5babead385f4.png">
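As a side check, one way to confirm the duplication without the random sampler above is to count unique transcriptions directly (a quick sketch; it assumes the split loads):

```python
from collections import Counter
from datasets import load_dataset

timit = load_dataset("timit_asr")
counts = Counter(timit["train"]["text"])
print(f"{len(timit['train'])} rows, {len(counts)} unique transcriptions")
print(counts.most_common(3))  # a handful of texts dominating would confirm the bug
```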
{ "avatar_url": "https://avatars.githubusercontent.com/u/42398050?v=4", "events_url": "https://api.github.com/users/kosuke-kitahara/events{/privacy}", "followers_url": "https://api.github.com/users/kosuke-kitahara/followers", "following_url": "https://api.github.com/users/kosuke-kitahara/following{/other_user}", "gists_url": "https://api.github.com/users/kosuke-kitahara/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kosuke-kitahara", "id": 42398050, "login": "kosuke-kitahara", "node_id": "MDQ6VXNlcjQyMzk4MDUw", "organizations_url": "https://api.github.com/users/kosuke-kitahara/orgs", "received_events_url": "https://api.github.com/users/kosuke-kitahara/received_events", "repos_url": "https://api.github.com/users/kosuke-kitahara/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kosuke-kitahara/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kosuke-kitahara/subscriptions", "type": "User", "url": "https://api.github.com/users/kosuke-kitahara", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2125/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2125/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2124
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2124/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2124/comments
https://api.github.com/repos/huggingface/datasets/issues/2124/events
https://github.com/huggingface/datasets/issues/2124
842,627,729
MDU6SXNzdWU4NDI2Mjc3Mjk=
2,124
Adding ScaNN library to do MIPS?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "I haven't played with it (yet) but it sounds really cool !\r\n" ]
2021-03-28T00:07:00Z
2021-03-29T13:23:43Z
null
NONE
null
null
null
null
@lhoestq Hi, I am thinking of adding this new Google library to do MIPS (maximum inner product search), similar to **add_faiss_index**. As the paper suggests, it is really fast at retrieving nearest neighbors. https://github.com/google-research/google-research/tree/master/scann ![image](https://user-images.githubusercontent.com/16892570/112738294-78ec9800-8fc6-11eb-9a5f-3d7ee5818e76.png)
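For reference, a rough sketch of what a ScaNN searcher looks like, adapted from the example in the google-research repo. A `datasets` integration (say, a hypothetical `add_scann_index()`) would still need to be written around something like this, and all parameter values are illustrative:

```python
import numpy as np
import scann

embeddings = np.random.rand(10_000, 128).astype(np.float32)  # stand-in for dataset embeddings

# Build a tree + asymmetric-hashing searcher for dot-product (MIPS) retrieval.
searcher = (
    scann.scann_ops_pybind.builder(embeddings, 10, "dot_product")
    .tree(num_leaves=200, num_leaves_to_search=20, training_sample_size=10_000)
    .score_ah(2, anisotropic_quantization_threshold=0.2)
    .reorder(100)
    .build()
)

queries = np.random.rand(5, 128).astype(np.float32)
neighbors, distances = searcher.search_batched(queries)
print(neighbors.shape, distances.shape)
```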
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2124/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2124/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2123
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2123/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2123/comments
https://api.github.com/repos/huggingface/datasets/issues/2123/events
https://github.com/huggingface/datasets/issues/2123
842,577,285
MDU6SXNzdWU4NDI1NzcyODU=
2,123
Problem downloading GEM wiki_auto_asset_turk dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/29705940?v=4", "events_url": "https://api.github.com/users/mille-s/events{/privacy}", "followers_url": "https://api.github.com/users/mille-s/followers", "following_url": "https://api.github.com/users/mille-s/following{/other_user}", "gists_url": "https://api.github.com/users/mille-s/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mille-s", "id": 29705940, "login": "mille-s", "node_id": "MDQ6VXNlcjI5NzA1OTQw", "organizations_url": "https://api.github.com/users/mille-s/orgs", "received_events_url": "https://api.github.com/users/mille-s/received_events", "repos_url": "https://api.github.com/users/mille-s/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mille-s/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mille-s/subscriptions", "type": "User", "url": "https://api.github.com/users/mille-s", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nsadly I can't replicate the problem on my Windows machine. Try to update the library to the newest version with:\r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n``` ", "Thanks for the answer! I updated the library but unfortunately it didn't solve the problem.", "Is there an...
2021-03-27T18:41:28Z
2021-05-12T16:15:18Z
2021-05-12T16:15:17Z
NONE
null
null
null
null
@yjernite

### Summary
I am currently working on the GEM datasets and cannot download the wiki_auto_asset_turk data, whereas all other datasets download fine with the same code.

### Steps to reproduce
Code snippet:
```python
from datasets import load_dataset
#dataset = load_dataset('gem', 'web_nlg_en')
dataset = load_dataset('gem', 'wiki_auto_asset_turk')
```

**Expected behavior:**
I expect the dataset to start downloading (a download bar appears and progresses toward 100%).

**Actual behavior:**
Instead of the download bar appearing, nothing happens; the following appears in the console as expected, but nothing more:

```
Downloading: 36.6kB [00:00, 37.2MB/s]
Downloading: 41.7kB [00:00, ?B/s]
Downloading and preparing dataset gem/wiki_auto_asset_turk (download: 121.37 MiB, generated: 145.69 MiB, post-processed: Unknown size, total: 267.07 MiB) to C:\Users\sfmil\.cache\huggingface\datasets\gem\wiki_auto_asset_turk\1.0.0\f252756d7f1b8f019aac71a1623b2950acfe10d25d956668ac4eae4e93c58b8d...
```

### Is this a regression?
No, it was the first time I tried to download this dataset (same for the other ones).

### Debug info
- Python version: Python 3.8.2
- OS version: Windows 10 Family
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2123/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2123/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2120
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2120/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2120/comments
https://api.github.com/repos/huggingface/datasets/issues/2120/events
https://github.com/huggingface/datasets/issues/2120
841,954,521
MDU6SXNzdWU4NDE5NTQ1MjE=
2,120
dataset viewer does not work anymore
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[ { "color": "94203D", "default": false, "description": "", "id": 2107841032, "name": "nlp-viewer", "node_id": "MDU6TGFiZWwyMTA3ODQxMDMy", "url": "https://api.github.com/repos/huggingface/datasets/labels/nlp-viewer" } ]
closed
false
null
[]
null
[ "Thanks for reporting :) We're looking into it", "Back up. " ]
2021-03-26T13:22:13Z
2021-03-26T15:52:22Z
2021-03-26T15:52:22Z
NONE
null
null
null
null
Hi, I normally use this link to see all datasets and how to load them: https://huggingface.co/datasets/viewer/ Now I am getting "502 Bad Gateway nginx/1.18.0 (Ubuntu)". Could you bring this webpage back? It was very helpful. @lhoestq thanks for your help
{ "avatar_url": "https://avatars.githubusercontent.com/u/35882?v=4", "events_url": "https://api.github.com/users/srush/events{/privacy}", "followers_url": "https://api.github.com/users/srush/followers", "following_url": "https://api.github.com/users/srush/following{/other_user}", "gists_url": "https://api.github.com/users/srush/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/srush", "id": 35882, "login": "srush", "node_id": "MDQ6VXNlcjM1ODgy", "organizations_url": "https://api.github.com/users/srush/orgs", "received_events_url": "https://api.github.com/users/srush/received_events", "repos_url": "https://api.github.com/users/srush/repos", "site_admin": false, "starred_url": "https://api.github.com/users/srush/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/srush/subscriptions", "type": "User", "url": "https://api.github.com/users/srush", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2120/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2120/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2117
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2117/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2117/comments
https://api.github.com/repos/huggingface/datasets/issues/2117/events
https://github.com/huggingface/datasets/issues/2117
841,535,283
MDU6SXNzdWU4NDE1MzUyODM=
2,117
load_metric from local "glue.py" meets error 'NoneType' object is not callable
{ "avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4", "events_url": "https://api.github.com/users/Frankie123421/events{/privacy}", "followers_url": "https://api.github.com/users/Frankie123421/followers", "following_url": "https://api.github.com/users/Frankie123421/following{/other_user}", "gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Frankie123421", "id": 54012361, "login": "Frankie123421", "node_id": "MDQ6VXNlcjU0MDEyMzYx", "organizations_url": "https://api.github.com/users/Frankie123421/orgs", "received_events_url": "https://api.github.com/users/Frankie123421/received_events", "repos_url": "https://api.github.com/users/Frankie123421/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions", "type": "User", "url": "https://api.github.com/users/Frankie123421", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@Frankie123421 what was the resolution to this?", "> @Frankie123421 what was the resolution to this?\r\n\r\nuse glue_metric.py instead of glue.py in load_metric", "thank you!" ]
2021-03-26T02:35:22Z
2021-08-25T21:44:05Z
2021-03-26T02:40:26Z
NONE
null
null
null
null
```python
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)
metric = load_metric(path='/home/glue.py', name=actual_task)
```

```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-7ab77a465d81> in <module>
      1 actual_task = "mnli" if task == "mnli-mm" else task
      2 dataset = load_dataset(path='/home/jcli/glue.py', name=actual_task)
----> 3 metric = load_metric(path='/home/jcli/glue.py', name=actual_task)

~/anaconda3/envs/pytorch/lib/python3.6/site-packages/datasets/load.py in load_metric(path, config_name, process_id, num_process, cache_dir, experiment_id, keep_in_memory, download_config, download_mode, script_version, **metric_init_kwargs)
    508         keep_in_memory=keep_in_memory,
    509         experiment_id=experiment_id,
--> 510         **metric_init_kwargs,
    511     )
    512 

TypeError: 'NoneType' object is not callable
```

Please help
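Per the resolution in the comments above, the fix is to point `load_metric` at the GLUE metric script rather than the dataset script. A minimal sketch (paths are illustrative):

```python
from datasets import load_dataset, load_metric

task = "mnli"  # illustrative
actual_task = "mnli" if task == "mnli-mm" else task
dataset = load_dataset(path='/home/glue.py', name=actual_task)        # dataset script
metric = load_metric(path='/home/glue_metric.py', name=actual_task)   # metric script, not the dataset script
```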
{ "avatar_url": "https://avatars.githubusercontent.com/u/54012361?v=4", "events_url": "https://api.github.com/users/Frankie123421/events{/privacy}", "followers_url": "https://api.github.com/users/Frankie123421/followers", "following_url": "https://api.github.com/users/Frankie123421/following{/other_user}", "gists_url": "https://api.github.com/users/Frankie123421/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Frankie123421", "id": 54012361, "login": "Frankie123421", "node_id": "MDQ6VXNlcjU0MDEyMzYx", "organizations_url": "https://api.github.com/users/Frankie123421/orgs", "received_events_url": "https://api.github.com/users/Frankie123421/received_events", "repos_url": "https://api.github.com/users/Frankie123421/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Frankie123421/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Frankie123421/subscriptions", "type": "User", "url": "https://api.github.com/users/Frankie123421", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2117/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2117/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2116
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2116/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2116/comments
https://api.github.com/repos/huggingface/datasets/issues/2116/events
https://github.com/huggingface/datasets/issues/2116
841,481,292
MDU6SXNzdWU4NDE0ODEyOTI=
2,116
Creating custom dataset results in error while calling the map() function
{ "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "events_url": "https://api.github.com/users/GeetDsa/events{/privacy}", "followers_url": "https://api.github.com/users/GeetDsa/followers", "following_url": "https://api.github.com/users/GeetDsa/following{/other_user}", "gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GeetDsa", "id": 13940397, "login": "GeetDsa", "node_id": "MDQ6VXNlcjEzOTQwMzk3", "organizations_url": "https://api.github.com/users/GeetDsa/orgs", "received_events_url": "https://api.github.com/users/GeetDsa/received_events", "repos_url": "https://api.github.com/users/GeetDsa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions", "type": "User", "url": "https://api.github.com/users/GeetDsa", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthe `_data` attribute is missing due to `MyDataset.__init__` not calling the parent `__init__`. However, I don't think it's a good idea to subclass the `datasets.Dataset` class (e.g. it's kind of dangerous to override `datasets.Dataset.__getitem__`). Instead, it's better to follow the \"association over...
2021-03-26T00:37:46Z
2021-03-31T14:30:32Z
2021-03-31T14:30:32Z
NONE
null
null
null
null
Calling `map()` from the `datasets` library results in an error when defining a custom dataset.

Reproducible example:
```python
import datasets

class MyDataset(datasets.Dataset):

    def __init__(self, sentences):
        "Initialization"
        self.samples = sentences

    def __len__(self):
        "Denotes the total number of samples"
        return len(self.samples)

    def __getitem__(self, index):
        "Generates one sample of data"
        # Select sample, load data and get label
        samples = self.samples[index]
        return samples

def preprocess_function_train(examples):
    inputs = examples
    labels = [example + tokenizer.eos_token for example in examples]
    inputs = tokenizer(inputs, max_length=30, padding=True, truncation=True)
    labels = tokenizer(labels, max_length=30, padding=True, truncation=True)
    model_inputs = inputs
    model_inputs["labels"] = labels["input_ids"]
    print("about to return")
    return model_inputs

## train["sentence"] is a dataframe column
train_dataset = MyDataset(train['sentence'].values.tolist())
train_dataset = train_dataset.map(
    preprocess_function_train,  # the original snippet passed `preprocess_function`, which is undefined above
    batched=True,
    batch_size=32
)
```

Stack trace of the error:
```
Traceback (most recent call last):
  File "dir/train_generate.py", line 362, in <module>
    main()
  File "dir/train_generate.py", line 245, in main
    train_dataset = train_dataset.map(
  File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 1244, in map
    return self._map_single(
  File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 149, in wrapper
    unformatted_columns = set(self.column_names) - set(self._format_columns or [])
  File "anaconda_dir/anaconda3/envs/env1/lib/python3.8/site-packages/datasets/arrow_dataset.py", line 526, in column_names
    return self._data.column_names
AttributeError: 'MyDataset' object has no attribute '_data'
```
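Following the suggestion in the comment above, here is a minimal sketch that builds the dataset with a constructor instead of subclassing `datasets.Dataset`, reusing the names from the snippet:

```python
from datasets import Dataset

# Build the dataset from the dataframe column instead of subclassing Dataset.
train_dataset = Dataset.from_dict({"sentence": train['sentence'].values.tolist()})
train_dataset = train_dataset.map(
    # map passes a dict of columns, so unwrap "sentence" for the
    # list-of-strings function defined above
    lambda examples: preprocess_function_train(examples["sentence"]),
    batched=True,
    batch_size=32,
)
```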
{ "avatar_url": "https://avatars.githubusercontent.com/u/13940397?v=4", "events_url": "https://api.github.com/users/GeetDsa/events{/privacy}", "followers_url": "https://api.github.com/users/GeetDsa/followers", "following_url": "https://api.github.com/users/GeetDsa/following{/other_user}", "gists_url": "https://api.github.com/users/GeetDsa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/GeetDsa", "id": 13940397, "login": "GeetDsa", "node_id": "MDQ6VXNlcjEzOTQwMzk3", "organizations_url": "https://api.github.com/users/GeetDsa/orgs", "received_events_url": "https://api.github.com/users/GeetDsa/received_events", "repos_url": "https://api.github.com/users/GeetDsa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/GeetDsa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/GeetDsa/subscriptions", "type": "User", "url": "https://api.github.com/users/GeetDsa", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2116/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2116/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2115
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2115/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2115/comments
https://api.github.com/repos/huggingface/datasets/issues/2115/events
https://github.com/huggingface/datasets/issues/2115
841,283,974
MDU6SXNzdWU4NDEyODM5NzQ=
2,115
The datasets.map() implementation modifies the datatype of os.environ object
{ "avatar_url": "https://avatars.githubusercontent.com/u/19983848?v=4", "events_url": "https://api.github.com/users/leleamol/events{/privacy}", "followers_url": "https://api.github.com/users/leleamol/followers", "following_url": "https://api.github.com/users/leleamol/following{/other_user}", "gists_url": "https://api.github.com/users/leleamol/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/leleamol", "id": 19983848, "login": "leleamol", "node_id": "MDQ6VXNlcjE5OTgzODQ4", "organizations_url": "https://api.github.com/users/leleamol/orgs", "received_events_url": "https://api.github.com/users/leleamol/received_events", "repos_url": "https://api.github.com/users/leleamol/repos", "site_admin": false, "starred_url": "https://api.github.com/users/leleamol/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/leleamol/subscriptions", "type": "User", "url": "https://api.github.com/users/leleamol", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2021-03-25T20:29:19Z
2021-03-26T15:13:52Z
2021-03-26T15:13:52Z
NONE
null
null
null
null
In our testing, we noticed that the `datasets.map()` implementation modifies the datatype of the Python `os.environ` object from `_Environ` to `dict`. This causes subsequent function calls to fail as follows:

```
x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
TypeError: get() takes no keyword arguments
```

It looks like the following line in the `datasets.map` implementation introduced this behavior.

https://github.com/huggingface/datasets/blob/0cb1ac06acb0df44a1cf4128d03a01865faa2504/src/datasets/arrow_dataset.py#L1421

Here is the test script to reproduce this error.

```python
from datasets import load_dataset
from transformers import AutoTokenizer
import os

def test_train():
    model_checkpoint = "distilgpt2"
    datasets = load_dataset('wikitext', 'wikitext-2-raw-v1')
    tokenizer = AutoTokenizer.from_pretrained(model_checkpoint, use_fast=True)
    tokenizer.pad_token = tokenizer.eos_token

    def tokenize_function(examples):
        y = tokenizer(examples['text'], truncation=True, max_length=64)
        return y

    x = os.environ.get("TEST_ENV_VARIABLE_BEFORE_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_BEFORE_dataset_map {x}")
    print(f"Data type of os.environ before datasets.map = {os.environ.__class__.__name__}")
    datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])
    print(f"Data type of os.environ after datasets.map = {os.environ.__class__.__name__}")
    x = os.environ.get("TEST_ENV_VARIABLE_AFTER_dataset_map", default=None)
    print(f"Testing environment variable: TEST_ENV_VARIABLE_AFTER_dataset_map {x}")

if __name__ == "__main__":
    test_train()
```
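Until this is fixed in the library, a defensive workaround (my own assumption, not an official recommendation) is to keep a reference to the real `os.environ` and restore it after `map()`, reusing the names from the script above:

```python
import os

env = os.environ  # keep a reference to the real _Environ object
datasets.map(tokenize_function, batched=True, num_proc=2, remove_columns=["text"])
if not isinstance(os.environ, os._Environ):
    os.environ = env  # restore if map() replaced it with a plain dict
```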
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2115/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2115/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2108
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2108/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2108/comments
https://api.github.com/repos/huggingface/datasets/issues/2108/events
https://github.com/huggingface/datasets/issues/2108
840,181,055
MDU6SXNzdWU4NDAxODEwNTU=
2,108
Is there a way to use a GPU only when training an index in the process of add_faiss_index?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[ { "color": "d876e3", "default": true, "description": "Further information is requested", "id": 1935892912, "name": "question", "node_id": "MDU6TGFiZWwxOTM1ODkyOTEy", "url": "https://api.github.com/repos/huggingface/datasets/labels/question" } ]
open
false
null
[]
null
[]
2021-03-24T21:32:16Z
2021-03-25T06:31:43Z
null
NONE
null
null
null
null
Motivation: some FAISS indexes, like IVF, include a training step that clusters the dataset into a given number of cells. It would be nice if we could use a GPU to do the training step and convert the index back to CPU, as shown in [this faiss example](https://gist.github.com/mdouze/46d6bbbaabca0b9778fca37ed2bcccf6).
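A minimal sketch of the GPU-train / CPU-serve pattern from the linked gist, assuming `faiss-gpu` is installed. An `add_faiss_index`-level option would essentially wrap this:

```python
import numpy as np
import faiss

d, nlist = 128, 100
xb = np.random.rand(100_000, d).astype(np.float32)  # stand-in for dataset embeddings

cpu_index = faiss.index_factory(d, f"IVF{nlist},Flat")
res = faiss.StandardGpuResources()                      # requires faiss-gpu
gpu_index = faiss.index_cpu_to_gpu(res, 0, cpu_index)   # move to GPU 0
gpu_index.train(xb)                                     # clustering runs on the GPU
cpu_index = faiss.index_gpu_to_cpu(gpu_index)           # convert back to CPU for serving
cpu_index.add(xb)
print(cpu_index.ntotal)
```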
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2108/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2108/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2106
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2106/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2106/comments
https://api.github.com/repos/huggingface/datasets/issues/2106/events
https://github.com/huggingface/datasets/issues/2106
839,084,264
MDU6SXNzdWU4MzkwODQyNjQ=
2,106
WMT19 Dataset for Kazakh-English is not formatted correctly
{ "avatar_url": "https://avatars.githubusercontent.com/u/22580542?v=4", "events_url": "https://api.github.com/users/trina731/events{/privacy}", "followers_url": "https://api.github.com/users/trina731/followers", "following_url": "https://api.github.com/users/trina731/following{/other_user}", "gists_url": "https://api.github.com/users/trina731/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/trina731", "id": 22580542, "login": "trina731", "node_id": "MDQ6VXNlcjIyNTgwNTQy", "organizations_url": "https://api.github.com/users/trina731/orgs", "received_events_url": "https://api.github.com/users/trina731/received_events", "repos_url": "https://api.github.com/users/trina731/repos", "site_admin": false, "starred_url": "https://api.github.com/users/trina731/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/trina731/subscriptions", "type": "User", "url": "https://api.github.com/users/trina731", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
open
false
null
[]
null
[ "Hi ! Thanks for reporting\r\n\r\nBy looking at the raw `news-commentary-v14.en-kk.tsv` file, it looks like there are at least 17 lines with this issue.\r\nMoreover these issues are not always the same:\r\n- L97 is only `kk` text and must be appended at the end of the `kk` text of the **next** line\r\n- L2897 is on...
2021-03-23T20:14:47Z
2021-03-25T21:36:20Z
null
NONE
null
null
null
null
In addition to the bug of languages being switched reported in issue #415, there are incorrect translations in the dataset because the English-Kazakh translations have an off-by-one formatting error. The News Commentary v14 parallel data set for kk-en from http://www.statmt.org/wmt19/translation-task.html has a bug here:

> Line 94. The Swiss National Bank, for its part, has been battling with the deflationary effects of the franc’s dramatic appreciation over the past few years. Швейцарияның Ұлттық банкі өз тарапынан, соңғы бірнеше жыл ішінде франк құнының қатты өсуінің дефляциялық әсерімен күресіп келеді.
>
> Line 95. Дефляциялық күштер 2008 жылы терең және ұзаққа созылған жаһандық дағдарысқа байланысты орын алған ірі экономикалық және қаржылық орын алмасулардың арқасында босатылды. Жеке қарыз қаражаты үлесінің қысқаруы орталық банктің рефляцияға жұмсалған күш-жігеріне тұрақты соққан қарсы желдей болды.
>
> Line 96. The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate. 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.

As you can see, line 95 has only the Kazakh translation, which should be part of line 96. This causes all of the following English-Kazakh translation pairs to be off by one, rendering ALL of those translations incorrect.

This issue was not fixed when the dataset was imported to Hugging Face. By running this code:

```python
import datasets
from datasets import load_dataset

dataset = load_dataset('wmt19', 'kk-en')
for key in dataset['train']['translation']:
    if 'The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008.' in key['kk']:
        print(key['en'])
        print(key['kk'])
        break
```

we get:

> 2009 жылы, алдыңғы қатарлы экономикалардың шамамен үштен бірі бағаның төмендеуін көрсетті, бұл соғыстан кейінгі жоғары деңгей болды.
> The deflationary forces were unleashed by the major economic and financial dislocations associated with the deep and protracted global crisis that erupted in 2008. Private deleveraging became a steady headwind to central bank efforts to reflate.

which shows that the issue still persists in the Hugging Face dataset: the Kazakh sentence matches the next English sentence in the dataset instead of the current one. Please let me know if you have any ideas to fix this off-by-one error in the dataset, or if this can be fixed by Hugging Face.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2106/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2106/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2105
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2105/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2105/comments
https://api.github.com/repos/huggingface/datasets/issues/2105/events
https://github.com/huggingface/datasets/issues/2105
839,059,226
MDU6SXNzdWU4MzkwNTkyMjY=
2,105
Request to remove S2ORC dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/13603748?v=4", "events_url": "https://api.github.com/users/kyleclo/events{/privacy}", "followers_url": "https://api.github.com/users/kyleclo/followers", "following_url": "https://api.github.com/users/kyleclo/following{/other_user}", "gists_url": "https://api.github.com/users/kyleclo/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/kyleclo", "id": 13603748, "login": "kyleclo", "node_id": "MDQ6VXNlcjEzNjAzNzQ4", "organizations_url": "https://api.github.com/users/kyleclo/orgs", "received_events_url": "https://api.github.com/users/kyleclo/received_events", "repos_url": "https://api.github.com/users/kyleclo/repos", "site_admin": false, "starred_url": "https://api.github.com/users/kyleclo/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/kyleclo/subscriptions", "type": "User", "url": "https://api.github.com/users/kyleclo", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "Hello @kyleclo! Currently, we are getting the data from your bucket, so if you remove it the HF script won't work anymore :) \r\n\r\nUntil you solve things on your end, @lhoestq suggested we just return a warning message when people try to load that dataset from HF. What would you like it to say?", "Hi @kyleclo,...
2021-03-23T19:43:06Z
2021-08-04T19:18:02Z
null
NONE
null
null
null
null
Hi! I was wondering if it's possible to remove [S2ORC](https://huggingface.co/datasets/s2orc) from hosting on Huggingface's platform? Unfortunately, there are some legal considerations about how we make this data available. Happy to add back to Huggingface's platform once we work out those hurdles! Thanks!
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 1, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2105/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2105/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2104
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2104/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2104/comments
https://api.github.com/repos/huggingface/datasets/issues/2104/events
https://github.com/huggingface/datasets/issues/2104
839,027,834
MDU6SXNzdWU4MzkwMjc4MzQ=
2,104
Trouble loading wiki_movies
{ "avatar_url": "https://avatars.githubusercontent.com/u/35391599?v=4", "events_url": "https://api.github.com/users/adityaarunsinghal/events{/privacy}", "followers_url": "https://api.github.com/users/adityaarunsinghal/followers", "following_url": "https://api.github.com/users/adityaarunsinghal/following{/other_user}", "gists_url": "https://api.github.com/users/adityaarunsinghal/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adityaarunsinghal", "id": 35391599, "login": "adityaarunsinghal", "node_id": "MDQ6VXNlcjM1MzkxNTk5", "organizations_url": "https://api.github.com/users/adityaarunsinghal/orgs", "received_events_url": "https://api.github.com/users/adityaarunsinghal/received_events", "repos_url": "https://api.github.com/users/adityaarunsinghal/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adityaarunsinghal/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adityaarunsinghal/subscriptions", "type": "User", "url": "https://api.github.com/users/adityaarunsinghal", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! `wiki_movies` was added in `datasets==1.2.0`. However it looks like you have `datasets==1.1.2`.\r\n\r\nTo use `wiki_movies`, please update `datasets` with\r\n```\r\npip install --upgrade datasets\r\n```", "Thanks a lot! That solved it and I was able to upload a model trained on it as well :)" ]
2021-03-23T18:59:54Z
2022-03-30T08:22:58Z
2022-03-30T08:22:58Z
NONE
null
null
null
null
Hello, I am trying to `load_dataset("wiki_movies")` and it gives me this error:

```
FileNotFoundError: Couldn't find file locally at wiki_movies/wiki_movies.py, or remotely at https://raw.githubusercontent.com/huggingface/datasets/1.1.2/datasets/wiki_movies/wiki_movies.py or https://s3.amazonaws.com/datasets.huggingface.co/datasets/datasets/wiki_movies/wiki_movies.py
```

Trying to run

```
python run_mlm.py \
    --model_name_or_path roberta-base \
    --dataset_name wiki_movies \
```

also gives the same error. Is this something on my end? From what I can tell, this dataset was re-added by @lhoestq a few months ago. Thank you!
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2104/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2104/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2103
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2103/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2103/comments
https://api.github.com/repos/huggingface/datasets/issues/2103/events
https://github.com/huggingface/datasets/issues/2103
838,946,916
MDU6SXNzdWU4Mzg5NDY5MTY=
2,103
citation, homepage, and license fields of `dataset_info.json` are duplicated many times
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
null
[]
null
[ "Thanks for reporting :)\r\nMaybe we can concatenate fields only if they are different.\r\n\r\nCurrently this is done here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/src/datasets/info.py#L180-L196\r\n\r\nThis can be a good first contribution to the library.\r\nPlease co...
2021-03-23T17:18:09Z
2021-04-06T14:39:59Z
2021-04-06T14:39:59Z
NONE
null
null
null
null
This happens after a `map` operation when `num_proc` is set to `>1`. I tested this by cleaning up the JSON before running the `map` op on the dataset, so it's unlikely it's coming from an earlier concatenation.

Example result:

```
"citation": "@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n url = {https://dumps.wikimedia.org}\n}\n\n@ONLINE {wikidump,\n author = {Wikimedia Foundation},\n title = {Wikimedia Downloads},\n
```

@lhoestq and I believe this is happening due to the fields being concatenated `num_proc` times.
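One possible fix along the lines suggested in the comments, concatenating a field only when the incoming value is not already present. `merge_field` below is a hypothetical helper, not the actual patch in `info.py`:

```python
def merge_field(existing: str, incoming: str, sep: str = "\n\n") -> str:
    """Concatenate only if the incoming value isn't already present."""
    if not existing:
        return incoming
    if incoming and incoming not in existing:
        return existing + sep + incoming
    return existing

citation = ""
for shard_citation in ["@ONLINE {wikidump, ...}"] * 8:  # each worker reports the same citation
    citation = merge_field(citation, shard_citation)
print(citation.count("@ONLINE"))  # 1 instead of 8
```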
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2103/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2103/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2099
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2099/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2099/comments
https://api.github.com/repos/huggingface/datasets/issues/2099/events
https://github.com/huggingface/datasets/issues/2099
838,523,819
MDU6SXNzdWU4Mzg1MjM4MTk=
2,099
load_from_disk takes a long time to load local dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi !\r\nCan you share more information about the features of your dataset ? You can get them by printing `my_dataset.features`\r\nCan you also share the code of your `map` function ?", "It is actually just the tokenized `wikipedia` dataset with `input_ids`, `attention_mask`, etc, with one extra column which is a...
2021-03-23T09:28:37Z
2021-03-23T17:12:16Z
2021-03-23T17:12:16Z
NONE
null
null
null
null
I have an extremely large tokenized dataset (24M examples) that loads in a few minutes. However, after adding a column similar to `input_ids` (basically a list of integers) and saving the dataset to disk, the load time goes to >1 hour. I've even tried using `np.uint8` after seeing #1985 but it doesn't seem to be helping (the total size seems to be smaller though). Does anyone know what could be the issue? Or does the casting of that column to `int8` need to happen in the function that writes the arrow table instead of in the `map` where I create the list of integers? Tagging @lhoestq since you seem to be working on these issues and PRs :)
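For what it's worth, here is a sketch of forcing the extra column to `int8` at write time by passing explicit `features` to `map`. The variable `my_dataset`, the function `add_tags`, and the column name `tags` are placeholders, since the real names aren't given above:

```python
from datasets import Features, Sequence, Value

# Declare the new column as int8 so the arrow table is written compactly
# instead of with the default int64.
features = Features({**my_dataset.features, "tags": Sequence(Value("int8"))})
my_dataset = my_dataset.map(add_tags, features=features)
```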
{ "avatar_url": "https://avatars.githubusercontent.com/u/15007950?v=4", "events_url": "https://api.github.com/users/samsontmr/events{/privacy}", "followers_url": "https://api.github.com/users/samsontmr/followers", "following_url": "https://api.github.com/users/samsontmr/following{/other_user}", "gists_url": "https://api.github.com/users/samsontmr/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/samsontmr", "id": 15007950, "login": "samsontmr", "node_id": "MDQ6VXNlcjE1MDA3OTUw", "organizations_url": "https://api.github.com/users/samsontmr/orgs", "received_events_url": "https://api.github.com/users/samsontmr/received_events", "repos_url": "https://api.github.com/users/samsontmr/repos", "site_admin": false, "starred_url": "https://api.github.com/users/samsontmr/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/samsontmr/subscriptions", "type": "User", "url": "https://api.github.com/users/samsontmr", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2099/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2099/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2098
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2098/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2098/comments
https://api.github.com/repos/huggingface/datasets/issues/2098/events
https://github.com/huggingface/datasets/issues/2098
838,447,959
MDU6SXNzdWU4Mzg0NDc5NTk=
2,098
SQuAD version
{ "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/h-peng17", "id": 39556019, "login": "h-peng17", "node_id": "MDQ6VXNlcjM5NTU2MDE5", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "repos_url": "https://api.github.com/users/h-peng17/repos", "site_admin": false, "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "type": "User", "url": "https://api.github.com/users/h-peng17", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! This is 1.1 as specified by the download urls here:\r\n\r\nhttps://github.com/huggingface/nlp/blob/349ac4398a3bcae6356f14c5754483383a60e8a4/datasets/squad/squad.py#L50-L55", "Got it. Thank you~" ]
2021-03-23T07:47:54Z
2021-03-26T09:48:54Z
2021-03-26T09:48:54Z
NONE
null
null
null
null
Hi! I want to train on the SQuAD dataset. What version of SQuAD is it, 1.1 or 1.0? I'm new to QA, and I couldn't find any description of it.
{ "avatar_url": "https://avatars.githubusercontent.com/u/39556019?v=4", "events_url": "https://api.github.com/users/h-peng17/events{/privacy}", "followers_url": "https://api.github.com/users/h-peng17/followers", "following_url": "https://api.github.com/users/h-peng17/following{/other_user}", "gists_url": "https://api.github.com/users/h-peng17/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/h-peng17", "id": 39556019, "login": "h-peng17", "node_id": "MDQ6VXNlcjM5NTU2MDE5", "organizations_url": "https://api.github.com/users/h-peng17/orgs", "received_events_url": "https://api.github.com/users/h-peng17/received_events", "repos_url": "https://api.github.com/users/h-peng17/repos", "site_admin": false, "starred_url": "https://api.github.com/users/h-peng17/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/h-peng17/subscriptions", "type": "User", "url": "https://api.github.com/users/h-peng17", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2098/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2098/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2096
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2096/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2096/comments
https://api.github.com/repos/huggingface/datasets/issues/2096/events
https://github.com/huggingface/datasets/issues/2096
838,038,379
MDU6SXNzdWU4MzgwMzgzNzk=
2,096
CoNLL 2003 dataset not including German
{ "avatar_url": "https://avatars.githubusercontent.com/u/8406802?v=4", "events_url": "https://api.github.com/users/rxian/events{/privacy}", "followers_url": "https://api.github.com/users/rxian/followers", "following_url": "https://api.github.com/users/rxian/following{/other_user}", "gists_url": "https://api.github.com/users/rxian/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/rxian", "id": 8406802, "login": "rxian", "node_id": "MDQ6VXNlcjg0MDY4MDI=", "organizations_url": "https://api.github.com/users/rxian/orgs", "received_events_url": "https://api.github.com/users/rxian/received_events", "repos_url": "https://api.github.com/users/rxian/repos", "site_admin": false, "starred_url": "https://api.github.com/users/rxian/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/rxian/subscriptions", "type": "User", "url": "https://api.github.com/users/rxian", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "Hello. I've been looking for information about German Conll2003 and found your question. Official site (https://www.clips.uantwerpen.be/conll2003/ner/) mentions that organizers provide only annotation. German texts (ECI Multilingual Text Corpus) are not freely available and can be ordered from the Linguistic Data ...
2021-03-22T19:23:56Z
2023-07-25T16:49:07Z
2023-07-25T16:49:07Z
NONE
null
null
null
null
Hello, thanks for all the work on developing and maintaining this amazing platform, which I am enjoying working with!

I was wondering whether there is a reason why the German CoNLL 2003 dataset is not included in the [repository](https://github.com/huggingface/datasets/tree/master/datasets/conll2003), since copies of it can be found in some places on the internet, such as GitHub? I could help add the German data to the hub, unless there are copyright issues that I am unaware of...

This is considering that many works use the union of the CoNLL 2002 and 2003 datasets for comparing cross-lingual NER transfer performance in `en`, `de`, `es`, and `nl`, e.g. [XLM-R](https://www.aclweb.org/anthology/2020.acl-main.747.pdf).

## Adding a Dataset
- **Name:** CoNLL 2003 German
- **Paper:** https://www.aclweb.org/anthology/W03-0419/
- **Data:** https://github.com/huggingface/datasets/tree/master/datasets/conll2003
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2096/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2096/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2092
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2092/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2092/comments
https://api.github.com/repos/huggingface/datasets/issues/2092/events
https://github.com/huggingface/datasets/issues/2092
836,984,043
MDU6SXNzdWU4MzY5ODQwNDM=
2,092
How to disable making Arrow tables in load_dataset?
{ "avatar_url": "https://avatars.githubusercontent.com/u/48825663?v=4", "events_url": "https://api.github.com/users/Jeevesh8/events{/privacy}", "followers_url": "https://api.github.com/users/Jeevesh8/followers", "following_url": "https://api.github.com/users/Jeevesh8/following{/other_user}", "gists_url": "https://api.github.com/users/Jeevesh8/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Jeevesh8", "id": 48825663, "login": "Jeevesh8", "node_id": "MDQ6VXNlcjQ4ODI1NjYz", "organizations_url": "https://api.github.com/users/Jeevesh8/orgs", "received_events_url": "https://api.github.com/users/Jeevesh8/received_events", "repos_url": "https://api.github.com/users/Jeevesh8/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Jeevesh8/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Jeevesh8/subscriptions", "type": "User", "url": "https://api.github.com/users/Jeevesh8", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! We plan to add streaming features in the future.\r\n\r\nThis should allow to load a dataset instantaneously without generating the arrow table. The trade-off is that accessing examples from a streaming dataset must be done in an iterative way, and with an additional (but hopefully minor) overhead.\r\nWhat do ...
2021-03-21T04:50:07Z
2022-06-01T16:49:52Z
2022-06-01T16:49:52Z
NONE
null
null
null
null
Is there a way to disable the construction of Arrow tables, or to build them on the fly as the dataset is being used?
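For context, a minimal sketch of the iterative access pattern described in the maintainers' reply, assuming a later `datasets` release where streaming landed (it was only planned at the time of this issue):

```python
from datasets import load_dataset

# Streaming skips building the Arrow table entirely; examples are
# generated on the fly, so access is iterative rather than random.
streamed = load_dataset(
    "oscar", "unshuffled_deduplicated_en", split="train", streaming=True
)

for i, example in enumerate(streamed):
    print(example["text"][:80])
    if i == 2:  # just peek at the first few examples
        break
```

The trade-off matches the one described in the reply: no upfront table construction, but a small per-example overhead and no random access.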
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2092/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2092/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2089
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2089/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2089/comments
https://api.github.com/repos/huggingface/datasets/issues/2089/events
https://github.com/huggingface/datasets/issues/2089
836,788,019
MDU6SXNzdWU4MzY3ODgwMTk=
2,089
Add documentation for dataset README.md files
{ "avatar_url": "https://avatars.githubusercontent.com/u/229382?v=4", "events_url": "https://api.github.com/users/PhilipMay/events{/privacy}", "followers_url": "https://api.github.com/users/PhilipMay/followers", "following_url": "https://api.github.com/users/PhilipMay/following{/other_user}", "gists_url": "https://api.github.com/users/PhilipMay/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/PhilipMay", "id": 229382, "login": "PhilipMay", "node_id": "MDQ6VXNlcjIyOTM4Mg==", "organizations_url": "https://api.github.com/users/PhilipMay/orgs", "received_events_url": "https://api.github.com/users/PhilipMay/received_events", "repos_url": "https://api.github.com/users/PhilipMay/repos", "site_admin": false, "starred_url": "https://api.github.com/users/PhilipMay/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/PhilipMay/subscriptions", "type": "User", "url": "https://api.github.com/users/PhilipMay", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! We are using the [datasets-tagging app](https://github.com/huggingface/datasets-tagging) to select the tags to add.\r\n\r\nWe are also adding the full list of tags in #2107 \r\nThis covers multilinguality, language_creators, licenses, size_categories and task_categories.\r\n\r\nIn general if you want to add a...
2021-03-20T11:44:38Z
2023-07-25T16:45:38Z
2023-07-25T16:45:37Z
CONTRIBUTOR
null
null
null
null
Hi, the dataset README files have special headers. Somehow, documentation of the allowed values and tags is missing. Could you add that?

Just to give some concrete questions that should be answered, IMO:

- which values can be passed to multilinguality?
- what should be passed to language_creators?
- which values should licenses have? What do I say when it is a custom license? Should I add a link?
- how should I choose size_categories? What are valid ranges?
- what are valid task_categories?

Thanks
Philip
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2089/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2089/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2084
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2084/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2084/comments
https://api.github.com/repos/huggingface/datasets/issues/2084/events
https://github.com/huggingface/datasets/issues/2084
835,750,671
MDU6SXNzdWU4MzU3NTA2NzE=
2,084
CUAD - Contract Understanding Atticus Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "+1 on this request" ]
2021-03-19T09:27:43Z
2021-04-16T08:50:44Z
2021-04-16T08:50:44Z
CONTRIBUTOR
null
null
null
null
## Adding a Dataset
- **Name:** CUAD - Contract Understanding Atticus Dataset
- **Description:** As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
- **Paper:** https://arxiv.org/abs/2103.06268
- **Data:** https://github.com/TheAtticusProject/cuad/
- **Motivation:** good domain-specific datasets are valuable

Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md).
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2084/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2084/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2083
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2083/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2083/comments
https://api.github.com/repos/huggingface/datasets/issues/2083/events
https://github.com/huggingface/datasets/issues/2083
835,695,425
MDU6SXNzdWU4MzU2OTU0MjU=
2,083
`concatenate_datasets` throws error when changing the order of datasets to concatenate
{ "avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4", "events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}", "followers_url": "https://api.github.com/users/patrickvonplaten/followers", "following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}", "gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/patrickvonplaten", "id": 23423619, "login": "patrickvonplaten", "node_id": "MDQ6VXNlcjIzNDIzNjE5", "organizations_url": "https://api.github.com/users/patrickvonplaten/orgs", "received_events_url": "https://api.github.com/users/patrickvonplaten/received_events", "repos_url": "https://api.github.com/users/patrickvonplaten/repos", "site_admin": false, "starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions", "type": "User", "url": "https://api.github.com/users/patrickvonplaten", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis bug is related to `Dataset.{remove_columns, rename_column, flatten}` not propagating the change to the schema metadata when the info features are updated, so this line is the culprit:\r\n```python\r\ncommon_voice_train = common_voice_train.remove_columns(['client_id', 'up_votes', 'down_votes', 'age...
2021-03-19T08:29:48Z
2021-04-09T09:25:33Z
2021-04-09T09:25:33Z
CONTRIBUTOR
null
null
null
null
Hey, I played around with the `concatenate_datasets(...)` function (https://huggingface.co/docs/datasets/package_reference/main_classes.html?highlight=concatenate_datasets#datasets.concatenate_datasets) and noticed that when the order in which the datasets are concatenated changes, an error is thrown where it should not, IMO.

Here is a Google Colab to reproduce the error: https://colab.research.google.com/drive/17VTFU4KQ735-waWZJjeOHS6yDTfV5ekK?usp=sharing
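For reference, a hedged sketch of a workaround along the lines of the first reply (which points at stale schema metadata after `remove_columns`): re-casting one dataset to the other's `features` before concatenating makes the call order-independent. The toy datasets below are hypothetical stand-ins for the Common Voice splits in the Colab:

```python
from datasets import Dataset, concatenate_datasets

a = Dataset.from_dict({"text": ["x", "y"], "label": [0, 1]})
b = Dataset.from_dict({"text": ["z"], "label": [1], "extra": ["drop me"]})

# Dropping a column can leave the schema metadata out of sync with the
# features; casting to a reference Features object normalizes it.
b = b.remove_columns(["extra"]).cast(a.features)

print(concatenate_datasets([a, b]).num_rows)  # 3
print(concatenate_datasets([b, a]).num_rows)  # 3, in either order
```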
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2083/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2083/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2080
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2080/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2080/comments
https://api.github.com/repos/huggingface/datasets/issues/2080/events
https://github.com/huggingface/datasets/issues/2080
835,023,000
MDU6SXNzdWU4MzUwMjMwMDA=
2,080
Multidimensional arrays in a Dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4", "events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}", "followers_url": "https://api.github.com/users/vermouthmjl/followers", "following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}", "gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vermouthmjl", "id": 3142085, "login": "vermouthmjl", "node_id": "MDQ6VXNlcjMxNDIwODU=", "organizations_url": "https://api.github.com/users/vermouthmjl/orgs", "received_events_url": "https://api.github.com/users/vermouthmjl/received_events", "repos_url": "https://api.github.com/users/vermouthmjl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions", "type": "User", "url": "https://api.github.com/users/vermouthmjl", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi !\r\n\r\nThis is actually supported ! but not yet in `from_pandas`.\r\nYou can use `from_dict` for now instead:\r\n```python\r\nfrom datasets import Dataset, Array2D, Features, Value\r\nimport pandas as pd\r\nimport numpy as np\r\n\r\ndataset = {\r\n 'bbox': [\r\n np.array([[1,2,3,4],[1,2,3,4],[1,2,3,...
2021-03-18T16:29:14Z
2021-03-25T12:46:53Z
2021-03-25T12:46:53Z
NONE
null
null
null
null
Hi,

I'm trying to put together a `datasets.Dataset` to be used with LayoutLM, which is available in `transformers`. This model requires as input the bounding boxes of each token of a sequence. This is when I realized that `Dataset` does not support multi-dimensional arrays as a value for a column in a row.

The following code results in a conversion error in pyarrow (`pyarrow.lib.ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column bbox with type object')`):

```python
from datasets import Dataset
import pandas as pd
import numpy as np

dataset = pd.DataFrame({
    'bbox': [
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]]),
        np.array([[1,2,3,4],[1,2,3,4],[1,2,3,4]])
    ],
    'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)
```

Since I wanted to use pytorch for the downstream training task, I also tried a few ways to directly put a column of 2-D pytorch tensors into a formatted dataset, but I can only get a list of 1-D tensors, or a list of arrays, or a list of lists:

```python
import torch
from datasets import Dataset
import pandas as pd

dataset = pd.DataFrame({
    'bbox': [
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]],
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]],
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]],
        [[1,2,3,4],[1,2,3,4],[1,2,3,4]]
    ],
    'input_ids': [1, 2, 3, 4]
})
dataset = Dataset.from_pandas(dataset)

def test(examples):
    return {'bbbox': torch.Tensor(examples['bbox'])}

dataset = dataset.map(test)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])

dataset.set_format(type='torch', columns=['input_ids', 'bbox'], output_all_columns=True)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])

def test2(examples):
    return {'bbbox': torch.stack(examples['bbox'])}

dataset = dataset.map(test2)
print(dataset[0]['bbox'])
print(dataset[0]['bbbox'])
```

Is it possible to support n-D arrays/tensors in datasets? It seems that it could also be useful for this [feature request](https://github.com/huggingface/datasets/issues/263).
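For completeness, a runnable sketch of the workaround from the (truncated) first reply, assuming `from_dict` plus an explicit `Array2D` feature, and PyTorch installed for the formatting step:

```python
import numpy as np
from datasets import Array2D, Dataset, Features, Value

features = Features({
    "bbox": Array2D(shape=(3, 4), dtype="int64"),  # fixed-shape 2-D arrays
    "input_ids": Value("int64"),
})

data = {
    "bbox": [np.array([[1, 2, 3, 4]] * 3) for _ in range(4)],
    "input_ids": [1, 2, 3, 4],
}

dataset = Dataset.from_dict(data, features=features)
dataset.set_format(type="torch", columns=["bbox", "input_ids"])
print(dataset[0]["bbox"].shape)  # expected: torch.Size([3, 4])
```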
{ "avatar_url": "https://avatars.githubusercontent.com/u/3142085?v=4", "events_url": "https://api.github.com/users/vermouthmjl/events{/privacy}", "followers_url": "https://api.github.com/users/vermouthmjl/followers", "following_url": "https://api.github.com/users/vermouthmjl/following{/other_user}", "gists_url": "https://api.github.com/users/vermouthmjl/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/vermouthmjl", "id": 3142085, "login": "vermouthmjl", "node_id": "MDQ6VXNlcjMxNDIwODU=", "organizations_url": "https://api.github.com/users/vermouthmjl/orgs", "received_events_url": "https://api.github.com/users/vermouthmjl/received_events", "repos_url": "https://api.github.com/users/vermouthmjl/repos", "site_admin": false, "starred_url": "https://api.github.com/users/vermouthmjl/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/vermouthmjl/subscriptions", "type": "User", "url": "https://api.github.com/users/vermouthmjl", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2080/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2080/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2078
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2078/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2078/comments
https://api.github.com/repos/huggingface/datasets/issues/2078/events
https://github.com/huggingface/datasets/issues/2078
834,694,819
MDU6SXNzdWU4MzQ2OTQ4MTk=
2,078
MemoryError when computing WER metric
{ "avatar_url": "https://avatars.githubusercontent.com/u/5707233?v=4", "events_url": "https://api.github.com/users/diego-fustes/events{/privacy}", "followers_url": "https://api.github.com/users/diego-fustes/followers", "following_url": "https://api.github.com/users/diego-fustes/following{/other_user}", "gists_url": "https://api.github.com/users/diego-fustes/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/diego-fustes", "id": 5707233, "login": "diego-fustes", "node_id": "MDQ6VXNlcjU3MDcyMzM=", "organizations_url": "https://api.github.com/users/diego-fustes/orgs", "received_events_url": "https://api.github.com/users/diego-fustes/received_events", "repos_url": "https://api.github.com/users/diego-fustes/repos", "site_admin": false, "starred_url": "https://api.github.com/users/diego-fustes/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/diego-fustes/subscriptions", "type": "User", "url": "https://api.github.com/users/diego-fustes", "user_view_type": "public" }
[ { "color": "25b21e", "default": false, "description": "A bug in a metric script", "id": 2067393914, "name": "metric bug", "node_id": "MDU6TGFiZWwyMDY3MzkzOTE0", "url": "https://api.github.com/repos/huggingface/datasets/labels/metric%20bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/o...
null
[ "Hi ! Thanks for reporting.\r\nWe're indeed using `jiwer` to compute the WER.\r\n\r\nMaybe instead of calling `jiwer.wer` once for all the preditions/references we can compute the WER iteratively to avoid memory issues ? I'm not too familial with `jiwer` but this must be possible.\r\n\r\nCurrently the code to compu...
2021-03-18T11:30:05Z
2021-05-01T08:31:49Z
2021-04-06T07:20:43Z
NONE
null
null
null
null
Hi,

I'm trying to follow the ASR example to try Wav2Vec. This is the code that I use for the WER calculation:

```python
wer = load_metric("wer")
print(wer.compute(predictions=result["predicted"], references=result["target"]))
```

However, I receive the following exception:

```
Traceback (most recent call last):
  File "/home/diego/IpGlobal/wav2vec/test_wav2vec.py", line 51, in <module>
    print(wer.compute(predictions=result["predicted"], references=result["target"]))
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/datasets/metric.py", line 403, in compute
    output = self._compute(predictions=predictions, references=references, **kwargs)
  File "/home/diego/.cache/huggingface/modules/datasets_modules/metrics/wer/73b2d32b723b7fb8f204d785c00980ae4d937f12a65466f8fdf78706e2951281/wer.py", line 94, in _compute
    return wer(references, predictions)
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 81, in wer
    truth, hypothesis, truth_transform, hypothesis_transform, **kwargs
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 192, in compute_measures
    H, S, D, I = _get_operation_counts(truth, hypothesis)
  File "/home/diego/miniconda3/envs/wav2vec3.6/lib/python3.6/site-packages/jiwer/measures.py", line 273, in _get_operation_counts
    editops = Levenshtein.editops(source_string, destination_string)
MemoryError
```

My system has more than 10 GB of available RAM. Looking at the code, I think it could be related to the way jiwer does the calculation, as it pastes all the sentences into a single string before calling the Levenshtein `editops` function.
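A hedged sketch of a chunked alternative: computing WER per chunk and weighting by each chunk's reference word count yields the corpus-level rate while bounding the strings handed to `Levenshtein.editops`. This assumes `jiwer` is installed and accepts lists of sentences:

```python
import jiwer

def chunked_wer(references, predictions, chunk_size=1000):
    # Accumulate word errors chunk by chunk instead of aligning one
    # giant concatenated string, which is what triggers the MemoryError.
    total_errors, total_words = 0.0, 0
    for start in range(0, len(references), chunk_size):
        refs = references[start:start + chunk_size]
        preds = predictions[start:start + chunk_size]
        n_words = sum(len(r.split()) for r in refs)
        total_errors += jiwer.wer(refs, preds) * n_words
        total_words += n_words
    return total_errors / total_words

# wer_value = chunked_wer(result["target"], result["predicted"])
```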
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2078/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2078/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2076
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2076/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2076/comments
https://api.github.com/repos/huggingface/datasets/issues/2076/events
https://github.com/huggingface/datasets/issues/2076
834,445,296
MDU6SXNzdWU4MzQ0NDUyOTY=
2,076
Issue: Dataset download error
{ "avatar_url": "https://avatars.githubusercontent.com/u/20436061?v=4", "events_url": "https://api.github.com/users/XuhuiZhou/events{/privacy}", "followers_url": "https://api.github.com/users/XuhuiZhou/followers", "following_url": "https://api.github.com/users/XuhuiZhou/following{/other_user}", "gists_url": "https://api.github.com/users/XuhuiZhou/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/XuhuiZhou", "id": 20436061, "login": "XuhuiZhou", "node_id": "MDQ6VXNlcjIwNDM2MDYx", "organizations_url": "https://api.github.com/users/XuhuiZhou/orgs", "received_events_url": "https://api.github.com/users/XuhuiZhou/received_events", "repos_url": "https://api.github.com/users/XuhuiZhou/repos", "site_admin": false, "starred_url": "https://api.github.com/users/XuhuiZhou/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/XuhuiZhou/subscriptions", "type": "User", "url": "https://api.github.com/users/XuhuiZhou", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
open
false
null
[]
null
[ "Hi @XuhuiZhou, thanks for reporting this issue. \r\n\r\nIndeed, the old links are no longer valid (404 Not Found error), and the script must be updated with the new links to Google Drive.", "It would be nice to update the urls indeed !\r\n\r\nTo do this, you just need to replace the urls in `iwslt2017.py` and th...
2021-03-18T06:36:06Z
2021-03-22T11:52:31Z
null
NONE
null
null
null
null
The download link in the `iwslt2017.py` file does not seem to work anymore. For example:

`FileNotFoundError: Couldn't find file at https://wit3.fbk.eu/archive/2017-01-trnted/texts/zh/en/zh-en.tgz`

It would be nice if we could modify the script to use the new download link.
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2076/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2076/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2075
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2075/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2075/comments
https://api.github.com/repos/huggingface/datasets/issues/2075/events
https://github.com/huggingface/datasets/issues/2075
834,301,246
MDU6SXNzdWU4MzQzMDEyNDY=
2,075
ConnectionError: Couldn't reach common_voice.py
{ "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LifaSun", "id": 6188893, "login": "LifaSun", "node_id": "MDQ6VXNlcjYxODg4OTM=", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "repos_url": "https://api.github.com/users/LifaSun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "type": "User", "url": "https://api.github.com/users/LifaSun", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @LifaSun, thanks for reporting this issue.\r\n\r\nSometimes, GitHub has some connectivity problems. Could you confirm that the problem persists?", "@albertvillanova Thanks! It works well now. " ]
2021-03-18T01:19:06Z
2021-03-20T10:29:41Z
2021-03-20T10:29:41Z
NONE
null
null
null
null
When I run:

```python
from datasets import load_dataset, load_metric

common_voice_train = load_dataset("common_voice", "zh-CN", split="train+validation")
common_voice_test = load_dataset("common_voice", "zh-CN", split="test")
```

Got:

`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/master/datasets/common_voice/common_voice.py`

Version: 1.4.1

Thanks! @lhoestq @LysandreJik @thomwolf
{ "avatar_url": "https://avatars.githubusercontent.com/u/6188893?v=4", "events_url": "https://api.github.com/users/LifaSun/events{/privacy}", "followers_url": "https://api.github.com/users/LifaSun/followers", "following_url": "https://api.github.com/users/LifaSun/following{/other_user}", "gists_url": "https://api.github.com/users/LifaSun/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LifaSun", "id": 6188893, "login": "LifaSun", "node_id": "MDQ6VXNlcjYxODg4OTM=", "organizations_url": "https://api.github.com/users/LifaSun/orgs", "received_events_url": "https://api.github.com/users/LifaSun/received_events", "repos_url": "https://api.github.com/users/LifaSun/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LifaSun/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LifaSun/subscriptions", "type": "User", "url": "https://api.github.com/users/LifaSun", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2075/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2075/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2071
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2071/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2071/comments
https://api.github.com/repos/huggingface/datasets/issues/2071/events
https://github.com/huggingface/datasets/issues/2071
833,950,824
MDU6SXNzdWU4MzM5NTA4MjQ=
2,071
Multiprocessing is slower than single process
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
null
[]
null
[ "dupe of #1992" ]
2021-03-17T16:08:58Z
2021-03-18T09:10:23Z
2021-03-18T09:10:23Z
CONTRIBUTOR
null
null
null
null
```python
# benchmark_filter.py
import logging
import sys
import time

from datasets import load_dataset, set_caching_enabled

if __name__ == "__main__":
    set_caching_enabled(False)
    logging.basicConfig(level=logging.DEBUG)

    bc = load_dataset("bookcorpus")

    now = time.time()
    try:
        bc["train"].filter(lambda x: len(x["text"]) < 64, num_proc=int(sys.argv[1]))
    except Exception as e:
        print(f"cancelled: {e}")
    elapsed = time.time() - now

    print(elapsed)
```

Running `python benchmark_filter.py 1` (20min+) is faster than `python benchmark_filter.py 2` (2hrs+).
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2071/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2071/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2070
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2070/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2070/comments
https://api.github.com/repos/huggingface/datasets/issues/2070/events
https://github.com/huggingface/datasets/issues/2070
833,799,035
MDU6SXNzdWU4MzM3OTkwMzU=
2,070
ArrowInvalid issue for squad v2 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/29818977?v=4", "events_url": "https://api.github.com/users/MichaelYxWang/events{/privacy}", "followers_url": "https://api.github.com/users/MichaelYxWang/followers", "following_url": "https://api.github.com/users/MichaelYxWang/following{/other_user}", "gists_url": "https://api.github.com/users/MichaelYxWang/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/MichaelYxWang", "id": 29818977, "login": "MichaelYxWang", "node_id": "MDQ6VXNlcjI5ODE4OTc3", "organizations_url": "https://api.github.com/users/MichaelYxWang/orgs", "received_events_url": "https://api.github.com/users/MichaelYxWang/received_events", "repos_url": "https://api.github.com/users/MichaelYxWang/repos", "site_admin": false, "starred_url": "https://api.github.com/users/MichaelYxWang/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/MichaelYxWang/subscriptions", "type": "User", "url": "https://api.github.com/users/MichaelYxWang", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! This error happens when you use `map` in batched mode and then your function doesn't return the same number of values per column.\r\n\r\nIndeed since you're using `map` in batched mode, `prepare_validation_features` must take a batch as input (i.e. a dictionary of multiple rows of the dataset), and return a b...
2021-03-17T13:51:49Z
2021-08-04T17:57:16Z
2021-08-04T17:57:16Z
NONE
null
null
null
null
Hello,

I am using the huggingface official question answering example notebook (https://colab.research.google.com/github/huggingface/notebooks/blob/master/examples/question_answering.ipynb).

In the `prepare_validation_features` function, I made some modifications to tokenize a new set of questions with the original contexts and save them in three different lists called `candidate_input_ids`, `candidate_attention_mask` and `candidate_token_type_ids`. When I try to run the next cell for `dataset.map`, I get the following error:

`ArrowInvalid: Column 1 named candidate_attention_mask expected length 1180 but got length 1178`

My code is as follows:

```python
def generate_candidate_questions(examples):
    val_questions = examples["question"]
    candidate_questions = random.sample(datasets["train"]["question"], len(val_questions))
    candidate_questions = [x[:max_length] for x in candidate_questions]
    return candidate_questions


def prepare_validation_features(examples, use_mixing=False):
    pad_on_right = tokenizer.padding_side == "right"
    tokenized_examples = tokenizer(
        examples["question" if pad_on_right else "context"],
        examples["context" if pad_on_right else "question"],
        truncation="only_second" if pad_on_right else "only_first",
        max_length=max_length,
        stride=doc_stride,
        return_overflowing_tokens=True,
        return_offsets_mapping=True,
        padding="max_length",
    )
    if use_mixing:
        candidate_questions = generate_candidate_questions(examples)
        tokenized_candidates = tokenizer(
            candidate_questions if pad_on_right else examples["context"],
            examples["context"] if pad_on_right else candidate_questions,
            truncation="only_second" if pad_on_right else "only_first",
            max_length=max_length,
            stride=doc_stride,
            return_overflowing_tokens=True,
            return_offsets_mapping=True,
            padding="max_length",
        )
    sample_mapping = tokenized_examples.pop("overflow_to_sample_mapping")
    tokenized_examples["example_id"] = []
    if use_mixing:
        tokenized_examples["candidate_input_ids"] = tokenized_candidates["input_ids"]
        tokenized_examples["candidate_attention_mask"] = tokenized_candidates["attention_mask"]
        tokenized_examples["candidate_token_type_ids"] = tokenized_candidates["token_type_ids"]
    for i in range(len(tokenized_examples["input_ids"])):
        sequence_ids = tokenized_examples.sequence_ids(i)
        context_index = 1 if pad_on_right else 0
        sample_index = sample_mapping[i]
        tokenized_examples["example_id"].append(examples["id"][sample_index])
        tokenized_examples["offset_mapping"][i] = [
            (o if sequence_ids[k] == context_index else None)
            for k, o in enumerate(tokenized_examples["offset_mapping"][i])
        ]
    return tokenized_examples


validation_features = datasets["validation"].map(
    lambda xs: prepare_validation_features(xs, True),
    batched=True,
    remove_columns=datasets["validation"].column_names
)
```

I guess this might happen because of `batched=True`. I see similar issues in this repo related to the arrow table length mismatch error, but in their cases the numbers vary a lot. In my case, the error always happens when the expected and actual lengths are very close.

Thanks for the help!
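As a minimal, self-contained illustration of the contract described in the first reply (hypothetical toy data, not the SQuAD pipeline): in batched mode every returned column must share one length, even when it differs from the input batch size:

```python
from datasets import Dataset

ds = Dataset.from_dict({"text": ["ab", "cde", "f"]})

def explode(batch):
    # One input row may yield several output rows, but *all* returned
    # columns must agree on the new length, or Arrow raises the
    # "expected length X but got length Y" error shown above.
    chars = [c for text in batch["text"] for c in text]
    return {"char": chars, "position": list(range(len(chars)))}

exploded = ds.map(explode, batched=True, remove_columns=ds.column_names)
print(exploded.num_rows)  # 6
```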
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2070/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2070/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2068
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2068/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2068/comments
https://api.github.com/repos/huggingface/datasets/issues/2068/events
https://github.com/huggingface/datasets/issues/2068
833,602,832
MDU6SXNzdWU4MzM2MDI4MzI=
2,068
PyTorch not available error on SageMaker GPU docker though it is installed
{ "avatar_url": "https://avatars.githubusercontent.com/u/1651457?v=4", "events_url": "https://api.github.com/users/sivakhno/events{/privacy}", "followers_url": "https://api.github.com/users/sivakhno/followers", "following_url": "https://api.github.com/users/sivakhno/following{/other_user}", "gists_url": "https://api.github.com/users/sivakhno/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/sivakhno", "id": 1651457, "login": "sivakhno", "node_id": "MDQ6VXNlcjE2NTE0NTc=", "organizations_url": "https://api.github.com/users/sivakhno/orgs", "received_events_url": "https://api.github.com/users/sivakhno/received_events", "repos_url": "https://api.github.com/users/sivakhno/repos", "site_admin": false, "starred_url": "https://api.github.com/users/sivakhno/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/sivakhno/subscriptions", "type": "User", "url": "https://api.github.com/users/sivakhno", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "cc @philschmid ", "Hey @sivakhno,\r\n\r\nhow does your `requirements.txt` look like to install the `datasets` library and which version of it are you running? Can you try to install `datasets>=1.4.0`", "Hi @philschmid - thanks for suggestion. I am using `datasets==1.4.1`. \r\nI have also tried using `torch=1.6...
2021-03-17T10:04:27Z
2021-06-14T04:47:30Z
2021-06-14T04:47:30Z
NONE
null
null
null
null
I get an error when running data loading using the SageMaker SDK:

```
  File "main.py", line 34, in <module>
    run_training()
  File "main.py", line 25, in run_training
    dm.setup('fit')
  File "/opt/conda/lib/python3.6/site-packages/pytorch_lightning/core/datamodule.py", line 92, in wrapped_fn
    return fn(*args, **kwargs)
  File "/opt/ml/code/data_module.py", line 103, in setup
    self.dataset[split].set_format(type="torch", columns=self.columns)
  File "/opt/conda/lib/python3.6/site-packages/datasets/fingerprint.py", line 337, in wrapper
    out = func(self, *args, **kwargs)
  File "/opt/conda/lib/python3.6/site-packages/datasets/arrow_dataset.py", line 995, in set_format
    _ = get_formatter(type, **format_kwargs)
  File "/opt/conda/lib/python3.6/site-packages/datasets/formatting/__init__.py", line 114, in get_formatter
    raise _FORMAT_TYPES_ALIASES_UNAVAILABLE[format_type]
ValueError: PyTorch needs to be installed to be able to return PyTorch tensors.
```

This happens when trying to execute dataset loading using this notebook https://github.com/PyTorchLightning/pytorch-lightning/blob/master/notebooks/04-transformers-text-classification.ipynb, specifically these lines:

```python
self.columns = [c for c in self.dataset[split].column_names if c in self.loader_columns]
self.dataset[split].set_format(type="torch", columns=self.columns)
```

The SageMaker docker image used is `763104351884.dkr.ecr.eu-central-1.amazonaws.com/pytorch-training:1.4.0-gpu-py3`.

By running the container interactively, I have checked that torch loading completes successfully by executing `https://github.com/huggingface/datasets/blob/master/src/datasets/config.py#L39`. Also, as the first lines in the data loading module I have:

```python
import os
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"
```

But unfortunately the error still persists. Any suggestions would be appreciated, as I am stuck. Many thanks!
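For debugging inside the container, a small hedged check along these lines (attribute names follow the `config.py` linked above and may differ across versions) shows what `datasets` itself detected at import time:

```python
import os

# These flags are only read once, when datasets is first imported,
# so they must be set before the import below.
os.environ["USE_TF"] = "0"
os.environ["USE_TORCH"] = "1"

import datasets.config

print("torch available per datasets:", datasets.config.TORCH_AVAILABLE)
print("torch version seen:", datasets.config.TORCH_VERSION)
```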
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2068/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2068/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2067
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2067/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2067/comments
https://api.github.com/repos/huggingface/datasets/issues/2067/events
https://github.com/huggingface/datasets/issues/2067
833,559,940
MDU6SXNzdWU4MzM1NTk5NDA=
2,067
Multiprocessing Windows error
{ "avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4", "events_url": "https://api.github.com/users/flozi00/events{/privacy}", "followers_url": "https://api.github.com/users/flozi00/followers", "following_url": "https://api.github.com/users/flozi00/following{/other_user}", "gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/flozi00", "id": 47894090, "login": "flozi00", "node_id": "MDQ6VXNlcjQ3ODk0MDkw", "organizations_url": "https://api.github.com/users/flozi00/orgs", "received_events_url": "https://api.github.com/users/flozi00/received_events", "repos_url": "https://api.github.com/users/flozi00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/flozi00/subscriptions", "type": "User", "url": "https://api.github.com/users/flozi00", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\nThis looks like a bug, could you try to provide a minimal code example that reproduces the issue ? This would be very helpful !\r\n\r\nOtherwise I can try to run the wav2vec2 code above on my side but probably not this week..", "```\r\nfrom datasets import load_dataset\r\n\r\ndatase...
2021-03-17T09:12:28Z
2021-08-04T17:59:08Z
2021-08-04T17:59:08Z
CONTRIBUTOR
null
null
null
null
As described here: https://huggingface.co/blog/fine-tune-xlsr-wav2vec2

When using the `num_proc` argument on Windows, the whole Python environment crashes and hangs in a loop, for example at the `map_to_array` part. An error occurs because the cache file already exists and Windows throws an error. After this, the log crashes into a loop.
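One thing worth ruling out first, sketched below: on Windows, multiprocessing uses the `spawn` start method, so any `num_proc` call must live under a `__main__` guard; otherwise each worker re-imports the module, re-runs the top-level code, and the run can loop and race on cache files. This is a hypothetical reduction of the blog's preprocessing, not the full wav2vec2 pipeline:

```python
from datasets import load_dataset

def prepare(batch):
    # Stand-in for the blog post's map_to_array step.
    batch["sentence"] = batch["sentence"].lower()
    return batch

if __name__ == "__main__":
    ds = load_dataset("common_voice", "tr", split="train")
    ds = ds.map(prepare, num_proc=4)
```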
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2067/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2067/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2065
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2065/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2065/comments
https://api.github.com/repos/huggingface/datasets/issues/2065/events
https://github.com/huggingface/datasets/issues/2065
833,291,432
MDU6SXNzdWU4MzMyOTE0MzI=
2,065
Only user permissions on saved cache files, not group
{ "avatar_url": "https://avatars.githubusercontent.com/u/57237365?v=4", "events_url": "https://api.github.com/users/lorr1/events{/privacy}", "followers_url": "https://api.github.com/users/lorr1/followers", "following_url": "https://api.github.com/users/lorr1/following{/other_user}", "gists_url": "https://api.github.com/users/lorr1/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lorr1", "id": 57237365, "login": "lorr1", "node_id": "MDQ6VXNlcjU3MjM3MzY1", "organizations_url": "https://api.github.com/users/lorr1/orgs", "received_events_url": "https://api.github.com/users/lorr1/received_events", "repos_url": "https://api.github.com/users/lorr1/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lorr1/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lorr1/subscriptions", "type": "User", "url": "https://api.github.com/users/lorr1", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" }, { "color": "7057ff", "default": true...
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nCurrently there's no way to specify this.\r\n\r\nWhen loading/processing a dataset, the arrow file is written using a temporary file. Then once writing is finished, it's moved to the cache directory (using `shutil.move` [here](https://github.com/huggingface/datasets/blob/f6b8251eb...
2021-03-17T00:20:22Z
2023-03-31T12:17:06Z
2021-05-10T06:45:29Z
NONE
null
null
null
null
Hello, it seems that when a cached file is saved by calling `dataset.map` for preprocessing, it gets only the user's permissions and none of the user's group permissions. As we share data files across members of our team, this is causing a bit of an issue: we have to continually reset the permissions of the files. Do you know any way around this, or a way to set the permissions correctly? A hedged workaround sketch follows.
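Until the library sets group permissions itself, one workaround sketch (the cache path is hypothetical) is to re-grant group read/write on the cache files after preprocessing:

```python
import os
import stat

CACHE_DIR = os.path.expanduser("~/.cache/huggingface/datasets")  # hypothetical shared cache

for root, dirs, files in os.walk(CACHE_DIR):
    for name in dirs + files:
        path = os.path.join(root, name)
        mode = os.stat(path).st_mode
        os.chmod(path, mode | stat.S_IRGRP | stat.S_IWGRP)  # add group read/write
```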
{ "avatar_url": "https://avatars.githubusercontent.com/u/30755778?v=4", "events_url": "https://api.github.com/users/LysandreJik/events{/privacy}", "followers_url": "https://api.github.com/users/LysandreJik/followers", "following_url": "https://api.github.com/users/LysandreJik/following{/other_user}", "gists_url": "https://api.github.com/users/LysandreJik/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/LysandreJik", "id": 30755778, "login": "LysandreJik", "node_id": "MDQ6VXNlcjMwNzU1Nzc4", "organizations_url": "https://api.github.com/users/LysandreJik/orgs", "received_events_url": "https://api.github.com/users/LysandreJik/received_events", "repos_url": "https://api.github.com/users/LysandreJik/repos", "site_admin": false, "starred_url": "https://api.github.com/users/LysandreJik/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/LysandreJik/subscriptions", "type": "User", "url": "https://api.github.com/users/LysandreJik", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2065/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2065/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2061
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2061/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2061/comments
https://api.github.com/repos/huggingface/datasets/issues/2061/events
https://github.com/huggingface/datasets/issues/2061
832,596,228
MDU6SXNzdWU4MzI1OTYyMjg=
2,061
Cannot load udpos subsets from xtreme dataset using load_dataset()
{ "avatar_url": "https://avatars.githubusercontent.com/u/55791365?v=4", "events_url": "https://api.github.com/users/adzcodez/events{/privacy}", "followers_url": "https://api.github.com/users/adzcodez/followers", "following_url": "https://api.github.com/users/adzcodez/following{/other_user}", "gists_url": "https://api.github.com/users/adzcodez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/adzcodez", "id": 55791365, "login": "adzcodez", "node_id": "MDQ6VXNlcjU1NzkxMzY1", "organizations_url": "https://api.github.com/users/adzcodez/orgs", "received_events_url": "https://api.github.com/users/adzcodez/received_events", "repos_url": "https://api.github.com/users/adzcodez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/adzcodez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/adzcodez/subscriptions", "type": "User", "url": "https://api.github.com/users/adzcodez", "user_view_type": "public" }
[ { "color": "7057ff", "default": true, "description": "Good for newcomers", "id": 1935892877, "name": "good first issue", "node_id": "MDU6TGFiZWwxOTM1ODkyODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue" } ]
closed
false
null
[]
null
[ "@lhoestq Adding \"_\" to the class labels in the dataset script will fix the issue.\r\n\r\nThe bigger issue IMO is that the data files are in conll format, but the examples are tokens, not sentences.", "Hi ! Thanks for reporting @adzcodez \r\n\r\n\r\n> @lhoestq Adding \"_\" to the class labels in the dataset scr...
2021-03-16T09:32:13Z
2021-06-18T11:54:11Z
2021-06-18T11:54:10Z
NONE
null
null
null
null
Hello, I am trying to load the udpos English subset from the xtreme dataset, but it fails during loading. I am using datasets v1.4.1, installed via pip. I have tried other udpos languages, which also fail, though loading a different subset altogether (such as XNLI) works fine. I have also tried on Colab and hit the same error. Reprex:

```python
from datasets import load_dataset
dataset = load_dataset('xtreme', 'udpos.English')
```

The error is `KeyError: '_'`. The full traceback is:

```
KeyError                                  Traceback (most recent call last)
<ipython-input-5-7181359ea09d> in <module>
      1 from datasets import load_dataset
----> 2 dataset = load_dataset('xtreme', 'udpos.English')

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
    738
    739     # Download and prepare data
--> 740     builder_instance.download_and_prepare(
    741         download_config=download_config,
    742         download_mode=download_mode,

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in download_and_prepare(self, download_config, download_mode, ignore_verifications, try_from_hf_gcs, dl_manager, base_path, use_auth_token, **download_and_prepare_kwargs)
    576                 logger.warning("HF google storage unreachable. Downloading and preparing it from source")
    577             if not downloaded_from_gcs:
--> 578                 self._download_and_prepare(
    579                     dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
    580                 )

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _download_and_prepare(self, dl_manager, verify_infos, **prepare_split_kwargs)
    654             try:
    655                 # Prepare split will record examples associated to the split
--> 656                 self._prepare_split(split_generator, **prepare_split_kwargs)
    657             except OSError as e:
    658                 raise OSError(

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\builder.py in _prepare_split(self, split_generator)
    977                 generator, unit=" examples", total=split_info.num_examples, leave=False, disable=not_verbose
    978             ):
--> 979                 example = self.info.features.encode_example(record)
    980                 writer.write(example)
    981         finally:

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example)
    946     def encode_example(self, example):
    947         example = cast_to_python_objects(example)
--> 948         return encode_nested_example(self, example)
    949
    950     def encode_batch(self, batch):

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
    840     # Nested structures: we allow dict, list/tuples, sequences
    841     if isinstance(schema, dict):
--> 842         return {
    843             k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
    844         }

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in <dictcomp>(.0)
    841     if isinstance(schema, dict):
    842         return {
--> 843             k: encode_nested_example(sub_schema, sub_obj) for k, (sub_schema, sub_obj) in utils.zip_dict(schema, obj)
    844         }
    845     elif isinstance(schema, (list, tuple)):

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_nested_example(schema, obj)
    868     # ClassLabel will convert from string to int, TranslationVariableLanguages does some checks
    869     elif isinstance(schema, (ClassLabel, TranslationVariableLanguages, Value, _ArrayXD)):
--> 870         return schema.encode_example(obj)
    871     # Other object should be directly convertible to a native Arrow type (like Translation and Translation)
    872     return obj

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in encode_example(self, example_data)
    647         # If a string is given, convert to associated integer
    648         if isinstance(example_data, str):
--> 649             example_data = self.str2int(example_data)
    650
    651         # Allowing -1 to mean no label.

~\Anaconda3\envs\mlenv\lib\site-packages\datasets\features.py in str2int(self, values)
    605             if value not in self._str2int:
    606                 value = value.strip()
--> 607             output.append(self._str2int[str(value)])
    608         else:
    609             # No names provided, try to integerize

KeyError: '_'
```
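As the discussion above suggests, the script's label list was missing the `_` tag. A sketch of the fix direction (the exact tag order in the script is an assumption): include `_` in the `ClassLabel` names so `str2int('_')` resolves.

```python
from datasets import ClassLabel

# Universal Dependencies POS tags plus the "_" placeholder the data contains
pos_tags = ClassLabel(
    names=[
        "ADJ", "ADP", "ADV", "AUX", "CCONJ", "DET", "INTJ", "NOUN", "NUM",
        "PART", "PRON", "PROPN", "PUNCT", "SCONJ", "SYM", "VERB", "X", "_",
    ]
)
assert pos_tags.str2int("_") == 17  # no longer raises KeyError
```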
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2061/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2061/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2059
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2059/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2059/comments
https://api.github.com/repos/huggingface/datasets/issues/2059/events
https://github.com/huggingface/datasets/issues/2059
832,579,156
MDU6SXNzdWU4MzI1NzkxNTY=
2,059
Error while following docs to load the `ted_talks_iwslt` dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/40426312?v=4", "events_url": "https://api.github.com/users/ekdnam/events{/privacy}", "followers_url": "https://api.github.com/users/ekdnam/followers", "following_url": "https://api.github.com/users/ekdnam/following{/other_user}", "gists_url": "https://api.github.com/users/ekdnam/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ekdnam", "id": 40426312, "login": "ekdnam", "node_id": "MDQ6VXNlcjQwNDI2MzEy", "organizations_url": "https://api.github.com/users/ekdnam/orgs", "received_events_url": "https://api.github.com/users/ekdnam/received_events", "repos_url": "https://api.github.com/users/ekdnam/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ekdnam/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ekdnam/subscriptions", "type": "User", "url": "https://api.github.com/users/ekdnam", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "@skyprince999 as you authored the PR for this dataset, any comments?", "This has been fixed in #2064 by @mariosasko (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)" ]
2021-03-16T09:12:19Z
2021-03-16T18:00:31Z
2021-03-16T18:00:07Z
NONE
null
null
null
null
I am currently trying to load the `ted_talks_iwslt` dataset into google colab. The [docs](https://huggingface.co/datasets/ted_talks_iwslt) mention the following way of doing so. ```python dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014") ``` Executing it results in the error attached below. ```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-6-7dcc67154ef9> in <module>()
----> 1 dataset = load_dataset("ted_talks_iwslt", language_pair=("it", "pl"), year="2014")

4 frames
/usr/local/lib/python3.7/dist-packages/datasets/load.py in load_dataset(path, name, data_dir, data_files, split, cache_dir, features, download_config, download_mode, ignore_verifications, keep_in_memory, save_infos, script_version, use_auth_token, **config_kwargs)
    730         hash=hash,
    731         features=features,
--> 732         **config_kwargs,
    733     )
    734

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, writer_batch_size, *args, **kwargs)
    927
    928     def __init__(self, *args, writer_batch_size=None, **kwargs):
--> 929         super(GeneratorBasedBuilder, self).__init__(*args, **kwargs)
    930         # Batch size used by the ArrowWriter
    931         # It defines the number of samples that are kept in memory before writing them

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in __init__(self, cache_dir, name, hash, features, **config_kwargs)
    241             name,
    242             custom_features=features,
--> 243             **config_kwargs,
    244         )
    245

/usr/local/lib/python3.7/dist-packages/datasets/builder.py in _create_builder_config(self, name, custom_features, **config_kwargs)
    337         if "version" not in config_kwargs and hasattr(self, "VERSION") and self.VERSION:
    338             config_kwargs["version"] = self.VERSION
--> 339         builder_config = self.BUILDER_CONFIG_CLASS(**config_kwargs)
    340
    341         # otherwise use the config_kwargs to overwrite the attributes

/root/.cache/huggingface/modules/datasets_modules/datasets/ted_talks_iwslt/024d06b1376b361e59245c5878ab8acf9a7576d765f2d0077f61751158e60914/ted_talks_iwslt.py in __init__(self, language_pair, year, **kwargs)
    219             description=description,
    220             version=datasets.Version("1.1.0", ""),
--> 221             **kwargs,
    222         )
    223

TypeError: __init__() got multiple values for keyword argument 'version'
``` How to resolve this? PS: Thanks a lot @huggingface team for creating this great library!
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2059/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2059/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2058
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2058/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2058/comments
https://api.github.com/repos/huggingface/datasets/issues/2058/events
https://github.com/huggingface/datasets/issues/2058
832,159,844
MDU6SXNzdWU4MzIxNTk4NDQ=
2,058
Is it possible to convert a `tfds` to HuggingFace `dataset`?
{ "avatar_url": "https://avatars.githubusercontent.com/u/6608232?v=4", "events_url": "https://api.github.com/users/abarbosa94/events{/privacy}", "followers_url": "https://api.github.com/users/abarbosa94/followers", "following_url": "https://api.github.com/users/abarbosa94/following{/other_user}", "gists_url": "https://api.github.com/users/abarbosa94/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/abarbosa94", "id": 6608232, "login": "abarbosa94", "node_id": "MDQ6VXNlcjY2MDgyMzI=", "organizations_url": "https://api.github.com/users/abarbosa94/orgs", "received_events_url": "https://api.github.com/users/abarbosa94/received_events", "repos_url": "https://api.github.com/users/abarbosa94/repos", "site_admin": false, "starred_url": "https://api.github.com/users/abarbosa94/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/abarbosa94/subscriptions", "type": "User", "url": "https://api.github.com/users/abarbosa94", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi! You can either save the TF dataset to one of the formats supported by datasets (`parquet`, `csv`, `json`, ...) or pass a generator function to `Dataset.from_generator` that yields its examples." ]
2021-03-15T20:18:47Z
2023-07-25T16:47:40Z
2023-07-25T16:47:40Z
CONTRIBUTOR
null
null
null
null
I was having some weird bugs with the `C4` dataset version from HuggingFace, so I decided to try to download `C4` from `tfds`. I would like to know if it is possible to convert a tfds dataset to the HuggingFace dataset format :) I can also open a new issue reporting the bug I'm hitting with `datasets.load_dataset('c4', 'en')` in the future if you think that would be useful. Thanks!
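For reference, a hedged sketch of one conversion path (assuming a recent `datasets` version that provides `Dataset.from_generator`, and using the `c4/en` tfds config as an example): iterate the TF dataset as numpy and feed it to a generator.

```python
import tensorflow_datasets as tfds
from datasets import Dataset

tf_ds = tfds.load("c4/en", split="train")

def gen():
    # yield plain Python dicts; tfds returns the text field as bytes
    for example in tfds.as_numpy(tf_ds):
        yield {"text": example["text"].decode("utf-8")}

hf_ds = Dataset.from_generator(gen)
```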
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2058/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2058/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2056
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2056/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2056/comments
https://api.github.com/repos/huggingface/datasets/issues/2056/events
https://github.com/huggingface/datasets/issues/2056
831,718,397
MDU6SXNzdWU4MzE3MTgzOTc=
2,056
issue with opus100/en-fr dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "@lhoestq I also deleted the cache and redownload the file and still the same issue, I appreciate any help on this. thanks ", "Here please find the minimal code to reproduce the issue @lhoestq note this only happens with MT5TokenizerFast\r\n\r\n```\r\nfrom datasets import load_dataset\r\nfrom transformers impor...
2021-03-15T11:32:42Z
2021-03-16T15:49:00Z
2021-03-16T15:48:59Z
NONE
null
null
null
null
Hi, I am running the run_mlm.py code of the huggingface repo with the opus100/fr-en pair and I am getting the error below; note that this error occurs only for this pair and not the other pairs. Any idea why this is occurring, and how I can solve it? Thanks a lot @lhoestq for your help in advance.

```
thread '<unnamed>' panicked at 'index out of bounds: the len is 617 but the index is 617', /__w/tokenizers/tokenizers/tokenizers/src/tokenizer/normalizer.rs:382:21
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
 63%|██████████████████████████████████████████████████████████▊ | 626/1000 [00:27<00:16, 22.69ba/s]
Traceback (most recent call last):
  File "run_mlm.py", line 550, in <module>
    main()
  File "run_mlm.py", line 412, in main
    in zip(data_args.dataset_name, data_args.dataset_config_name)]
  File "run_mlm.py", line 411, in <listcomp>
    logger) for dataset_name, dataset_config_name\
  File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 96, in get_tokenized_dataset
    load_from_cache_file=not data_args.overwrite_cache,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in map
    for k, dataset in self.items()
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/dataset_dict.py", line 448, in <dictcomp>
    for k, dataset in self.items()
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1309, in map
    update_data=update_data,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 204, in wrapper
    out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/fingerprint.py", line 337, in wrapper
    out = func(self, *args, **kwargs)
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1574, in _map_single
    batch, indices, check_same_num_examples=len(self.list_indexes()) > 0, offset=offset
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 1490, in apply_function_on_filtered_inputs
    function(*fn_args, effective_indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs)
  File "/user/dara/dev/codes/seq2seq/data/tokenize_datasets.py", line 89, in tokenize_function
    return tokenizer(examples[text_column_name], return_special_tokens_mask=True)
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2347, in __call__
    **kwargs,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_base.py", line 2532, in batch_encode_plus
    **kwargs,
  File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/transformers/tokenization_utils_fast.py", line 384, in _batch_encode_plus
    is_pretokenized=is_split_into_words,
pyo3_runtime.PanicException: index out of bounds: the len is 617 but the index is 617
```
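A hedged reproduction sketch distilled from the truncated comment above (the mt5 checkpoint name is an assumption): per the discussion, the panic only surfaces with the fast MT5 tokenizer.

```python
from datasets import load_dataset
from transformers import MT5TokenizerFast

dataset = load_dataset("opus100", "en-fr", split="train")
tokenizer = MT5TokenizerFast.from_pretrained("google/mt5-small")

def tokenize_function(examples):
    # opus100 rows look like {"translation": {"en": ..., "fr": ...}}
    texts = [pair["en"] for pair in examples["translation"]]
    return tokenizer(texts, return_special_tokens_mask=True)

dataset.map(tokenize_function, batched=True)
```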
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2056/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2056/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2055
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2055/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2055/comments
https://api.github.com/repos/huggingface/datasets/issues/2055/events
https://github.com/huggingface/datasets/issues/2055
831,684,312
MDU6SXNzdWU4MzE2ODQzMTI=
2,055
is there a way to override a dataset object saved with save_to_disk?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi\r\nYou can rename the arrow file and update the name in `state.json`", "I tried this way, but when there is a mapping process to the dataset, it again uses a random cache name. atm, I am trying to use the following method by setting an exact cache file,\r\n\r\n```\r\n dataset_with_embedding =csv_da...
2021-03-15T10:50:53Z
2021-03-22T04:06:17Z
2021-03-22T04:06:17Z
NONE
null
null
null
null
At the moment, when I use save_to_disk, it uses an arbitrary name for the arrow file. Is there a way to override the name of such an object?
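Following the suggestion in the comments above, a minimal sketch of the manual rename (all paths are hypothetical, and the exact `state.json` layout depends on the `datasets` version):

```python
import json
import os

dataset_dir = "my_dataset"          # directory written by save_to_disk
old_name = "dataset.arrow"          # the name save_to_disk picked
new_name = "my_fixed_name.arrow"    # the name you want

os.rename(os.path.join(dataset_dir, old_name), os.path.join(dataset_dir, new_name))

state_path = os.path.join(dataset_dir, "state.json")
with open(state_path) as f:
    state = json.load(f)
state["_data_files"] = [{"filename": new_name}]  # point the state at the renamed file
with open(state_path, "w") as f:
    json.dump(state, f)
```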
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2055/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2055/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2054
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2054/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2054/comments
https://api.github.com/repos/huggingface/datasets/issues/2054/events
https://github.com/huggingface/datasets/issues/2054
831,597,665
MDU6SXNzdWU4MzE1OTc2NjU=
2,054
Could not find file for ZEST dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" }
[ { "color": "2edb81", "default": false, "description": "A bug in a dataset script provided in the library", "id": 2067388877, "name": "dataset bug", "node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug" } ]
closed
false
null
[]
null
[ "The zest dataset url was changed (allenai/zest#3) and #2057 should resolve this.", "This has been fixed in #2057 by @matt-peters (thanks again !)\r\n\r\nThe fix is available on the master branch and we'll do a new release very soon :)", "Thanks @lhoestq and @matt-peters ", "I am closing this issue since its ...
2021-03-15T09:11:58Z
2021-05-03T09:30:24Z
2021-05-03T09:30:24Z
CONTRIBUTOR
null
null
null
null
I am trying to use the zest dataset from Allen AI using the code below in Colab, ``` !pip install -q datasets from datasets import load_dataset dataset = load_dataset("zest") ``` and I am getting the following error:

```
Using custom data configuration default
Downloading and preparing dataset zest/default (download: 5.53 MiB, generated: 19.96 MiB, post-processed: Unknown size, total: 25.48 MiB) to /root/.cache/huggingface/datasets/zest/default/0.0.0/1f7a230fbfc964d979bbca0f0130fbab3259fce547ee758ad8aa4f9c9bec6cca...
---------------------------------------------------------------------------
FileNotFoundError                         Traceback (most recent call last)
<ipython-input-6-18dbbc1a4b8a> in <module>()
      1 from datasets import load_dataset
      2
----> 3 dataset = load_dataset("zest")

9 frames
/usr/local/lib/python3.7/dist-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, resume_download, user_agent, local_files_only, use_etag, max_retries, use_auth_token)
    612                 )
    613             elif response is not None and response.status_code == 404:
--> 614                 raise FileNotFoundError("Couldn't find file at {}".format(url))
    615             _raise_if_offline_mode_is_enabled(f"Tried to reach {url}")
    616             raise ConnectionError("Couldn't reach {}".format(url))

FileNotFoundError: Couldn't find file at https://ai2-datasets.s3-us-west-2.amazonaws.com/zest/zest.zip
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/26653468?v=4", "events_url": "https://api.github.com/users/bhadreshpsavani/events{/privacy}", "followers_url": "https://api.github.com/users/bhadreshpsavani/followers", "following_url": "https://api.github.com/users/bhadreshpsavani/following{/other_user}", "gists_url": "https://api.github.com/users/bhadreshpsavani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/bhadreshpsavani", "id": 26653468, "login": "bhadreshpsavani", "node_id": "MDQ6VXNlcjI2NjUzNDY4", "organizations_url": "https://api.github.com/users/bhadreshpsavani/orgs", "received_events_url": "https://api.github.com/users/bhadreshpsavani/received_events", "repos_url": "https://api.github.com/users/bhadreshpsavani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/bhadreshpsavani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/bhadreshpsavani/subscriptions", "type": "User", "url": "https://api.github.com/users/bhadreshpsavani", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2054/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2054/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2052
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2052/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2052/comments
https://api.github.com/repos/huggingface/datasets/issues/2052/events
https://github.com/huggingface/datasets/issues/2052
831,135,704
MDU6SXNzdWU4MzExMzU3MDQ=
2,052
Timit_asr dataset repeats examples
{ "avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4", "events_url": "https://api.github.com/users/fermaat/events{/privacy}", "followers_url": "https://api.github.com/users/fermaat/followers", "following_url": "https://api.github.com/users/fermaat/following{/other_user}", "gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fermaat", "id": 7583522, "login": "fermaat", "node_id": "MDQ6VXNlcjc1ODM1MjI=", "organizations_url": "https://api.github.com/users/fermaat/orgs", "received_events_url": "https://api.github.com/users/fermaat/received_events", "repos_url": "https://api.github.com/users/fermaat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fermaat/subscriptions", "type": "User", "url": "https://api.github.com/users/fermaat", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nthis was fixed by #1995, so you can wait for the next release or install the package directly from the master branch with the following command: \r\n```bash\r\npip install git+https://github.com/huggingface/datasets\r\n```", "Ty!" ]
2021-03-14T11:43:43Z
2021-03-15T10:37:16Z
2021-03-15T10:37:16Z
NONE
null
null
null
null
Summary
When loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same.

Steps to reproduce
As an example, this code prints the text from the training part:

```
from datasets import load_dataset, load_metric
timit = load_dataset("timit_asr")
timit['train']['text']
#['Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
# 'Would such an act of refusal be useful?',
```

The same behavior happens for other columns.

Expected behavior: different rows, matching the actual timit_asr dataset.
Actual behavior: when loading the timit_asr dataset on datasets 1.4+, every row in the dataset is the same. I've checked datasets 1.3 and the rows are different.

Debug info
Python version: Python 3.6.12
Installed with: pip
OS version: Centos-release-7-9.2009.1.el7.centos.x86_64

Additional information
You can check the same behavior on https://huggingface.co/datasets/viewer/?dataset=timit_asr
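Once the fix from #1995 (mentioned in the comments above) is installed, a quick sanity-check sketch:

```python
from datasets import load_dataset

timit = load_dataset("timit_asr", split="train")
texts = timit["text"]
assert len(set(texts)) > 1, "rows are still all identical"
```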
{ "avatar_url": "https://avatars.githubusercontent.com/u/7583522?v=4", "events_url": "https://api.github.com/users/fermaat/events{/privacy}", "followers_url": "https://api.github.com/users/fermaat/followers", "following_url": "https://api.github.com/users/fermaat/following{/other_user}", "gists_url": "https://api.github.com/users/fermaat/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/fermaat", "id": 7583522, "login": "fermaat", "node_id": "MDQ6VXNlcjc1ODM1MjI=", "organizations_url": "https://api.github.com/users/fermaat/orgs", "received_events_url": "https://api.github.com/users/fermaat/received_events", "repos_url": "https://api.github.com/users/fermaat/repos", "site_admin": false, "starred_url": "https://api.github.com/users/fermaat/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/fermaat/subscriptions", "type": "User", "url": "https://api.github.com/users/fermaat", "user_view_type": "public" }
{ "+1": 1, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 1, "url": "https://api.github.com/repos/huggingface/datasets/issues/2052/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2052/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2050
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2050/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2050/comments
https://api.github.com/repos/huggingface/datasets/issues/2050/events
https://github.com/huggingface/datasets/issues/2050
831,006,551
MDU6SXNzdWU4MzEwMDY1NTE=
2,050
Build custom dataset to fine-tune Wav2Vec2
{ "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Omarnabk", "id": 72882909, "login": "Omarnabk", "node_id": "MDQ6VXNlcjcyODgyOTA5", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "repos_url": "https://api.github.com/users/Omarnabk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "type": "User", "url": "https://api.github.com/users/Omarnabk", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[ "@lhoestq - We could simply use the \"general\" json dataset for this no? ", "Sure you can use the json loader\r\n```python\r\ndata_files = {\"train\": \"path/to/your/train_data.json\", \"test\": \"path/to/your/test_data.json\"}\r\ntrain_dataset = load_dataset(\"json\", data_files=data_files, split=\"train\")\r\n...
2021-03-13T22:01:10Z
2021-03-15T09:27:28Z
2021-03-15T09:27:28Z
NONE
null
null
null
null
Thank you for your recent tutorial on how to fine-tune Wav2Vec2 on a custom dataset. The example you gave here (https://huggingface.co/blog/fine-tune-xlsr-wav2vec2) was on the CommonVoice dataset. However, what if I want to load my own dataset? I have a manifest (transcripts and their audio files) in a JSON file.
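As the reply above shows, the generic JSON loader covers this case. A sketch assuming hypothetical manifest paths whose records hold the transcript and the audio path:

```python
from datasets import load_dataset

data_files = {
    "train": "path/to/train_manifest.json",  # hypothetical manifest files
    "test": "path/to/test_manifest.json",
}
train_dataset = load_dataset("json", data_files=data_files, split="train")
test_dataset = load_dataset("json", data_files=data_files, split="test")
```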
{ "avatar_url": "https://avatars.githubusercontent.com/u/72882909?v=4", "events_url": "https://api.github.com/users/Omarnabk/events{/privacy}", "followers_url": "https://api.github.com/users/Omarnabk/followers", "following_url": "https://api.github.com/users/Omarnabk/following{/other_user}", "gists_url": "https://api.github.com/users/Omarnabk/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Omarnabk", "id": 72882909, "login": "Omarnabk", "node_id": "MDQ6VXNlcjcyODgyOTA5", "organizations_url": "https://api.github.com/users/Omarnabk/orgs", "received_events_url": "https://api.github.com/users/Omarnabk/received_events", "repos_url": "https://api.github.com/users/Omarnabk/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Omarnabk/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Omarnabk/subscriptions", "type": "User", "url": "https://api.github.com/users/Omarnabk", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2050/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2050/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2048
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2048/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2048/comments
https://api.github.com/repos/huggingface/datasets/issues/2048/events
https://github.com/huggingface/datasets/issues/2048
830,953,431
MDU6SXNzdWU4MzA5NTM0MzE=
2,048
github is not always available - probably need a backup
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
closed
false
null
[]
null
[]
2021-03-13T18:03:32Z
2022-04-01T15:27:10Z
2022-04-01T15:27:10Z
CONTRIBUTOR
null
null
null
null
Yesterday morning github wasn't working:

```
:/tmp$ wget https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
--2021-03-12 18:35:59--  https://raw.githubusercontent.com/huggingface/datasets/1.4.1/metrics/sacrebleu/sacrebleu.py
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.108.133, 185.199.111.133, 185.199.109.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.108.133|:443... connected.
HTTP request sent, awaiting response... 500 Internal Server Error
2021-03-12 18:36:11 ERROR 500: Internal Server Error.
```

Suggestion: have a failover system that replicates the data on another system and is used when gh isn't reachable. Perhaps gh can be the master and the replica a slave, so there is only one true source.
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2048/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2048/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2046
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2046/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2046/comments
https://api.github.com/repos/huggingface/datasets/issues/2046/events
https://github.com/huggingface/datasets/issues/2046
830,423,033
MDU6SXNzdWU4MzA0MjMwMzM=
2,046
add_faiss_index gets very slow when doing it iteratively
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "I think faiss automatically sets the number of threads to use to build the index.\r\nCan you check how many CPU cores are being used when you build the index in `use_own_knowleldge_dataset` as compared to this script ? Are there other programs running (maybe for rank>0) ?", "Hi,\r\n I am running the add_faiss_in...
2021-03-12T20:27:18Z
2021-03-24T22:29:11Z
2021-03-24T22:29:11Z
NONE
null
null
null
null
As the code below suggests, I want to run add_faiss_index at every nth iteration of the training loop. I have 7.2 million documents. Usually it takes 2.5 hours (if I run it as a separate process, similar to the script given in rag/use_own_knowledge_dataset.py). Now this usually takes 5 hrs. Is this normal? Any way to make this process faster? @lhoestq

```
def training_step(self, batch, batch_idx) -> Dict:
    if (not batch_idx == 0) and (batch_idx % 5 == 0):
        print("******************************************************")
        ctx_encoder = self.trainer.model.module.module.model.rag.ctx_encoder
        model_copy = type(ctx_encoder)(self.config_dpr)  # get a new instance; this will be loaded on the CPU
        model_copy.load_state_dict(ctx_encoder.state_dict())  # copy weights and stuff

        list_of_gpus = ['cuda:2', 'cuda:3']
        c_dir = '/custom/cache/dir'

        kb_dataset = load_dataset("csv", data_files=[self.custom_config.csv_path], split="train",
                                  delimiter="\t", column_names=["title", "text"], cache_dir=c_dir)
        print(kb_dataset)

        n = len(list_of_gpus)  # number of dedicated GPUs
        kb_list = [kb_dataset.shard(n, i, contiguous=True) for i in range(n)]
        # kb_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/haha-dir')
        print(self.trainer.global_rank)

        dataset_shards = self.re_encode_kb(
            model_copy.to(device=list_of_gpus[self.trainer.global_rank]),
            kb_list[self.trainer.global_rank])

        output = [None for _ in list_of_gpus]
        # self.trainer.accelerator_connector.accelerator.barrier("embedding_process")
        dist.all_gather_object(output, dataset_shards)

        # creation and re-initialization of the new index
        if self.trainer.global_rank == 0:  # saving will be done in the main process
            combined_dataset = concatenate_datasets(output)
            passages_path = self.config.passages_path
            logger.info("saving the dataset with ")
            # combined_dataset.save_to_disk('/hpc/gsir059/MY-Test/RAY/transformers/examples/research_projects/rag/MY-Passage')
            combined_dataset.save_to_disk(passages_path)

            logger.info("Add faiss index to the dataset that consist of embeddings")
            embedding_dataset = combined_dataset
            index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
            embedding_dataset.add_faiss_index("embeddings", custom_index=index)
            embedding_dataset.get_index("embeddings").save(self.config.index_path)
```
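A hedged sketch of one thing to check, following the comment above about faiss thread usage (the thread count and the toy data are assumptions): pin the OpenMP thread count explicitly and time the index build in isolation, outside the training loop.

```python
import faiss
import numpy as np
from datasets import Dataset

faiss.omp_set_num_threads(16)  # assumption: match the cores actually free during training

# toy stand-in for the real encodings (768-dim, as in the snippet above)
embeddings = np.random.rand(10_000, 768).astype("float32")
ds = Dataset.from_dict({"embeddings": embeddings.tolist()})

index = faiss.IndexHNSWFlat(768, 128, faiss.METRIC_INNER_PRODUCT)
ds.add_faiss_index("embeddings", custom_index=index)
```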
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2046/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2046/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2040
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2040/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2040/comments
https://api.github.com/repos/huggingface/datasets/issues/2040/events
https://github.com/huggingface/datasets/issues/2040
830,169,387
MDU6SXNzdWU4MzAxNjkzODc=
2,040
ValueError: datasets' indices [1] come from memory and datasets' indices [0] come from disk
{ "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simonschoe", "id": 53626067, "login": "simonschoe", "node_id": "MDQ6VXNlcjUzNjI2MDY3", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "repos_url": "https://api.github.com/users/simonschoe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "type": "User", "url": "https://api.github.com/users/simonschoe", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! To help me understand the situation, can you print the values of `load_from_disk(PATH_DATA_CLS_A)['train']._indices_data_files` and `load_from_disk(PATH_DATA_CLS_B)['train']._indices_data_files` ?\r\nThey should both have a path to an arrow file\r\n\r\nAlso note that from #2025 concatenating datasets will no...
2021-03-12T14:27:00Z
2021-08-04T18:00:43Z
2021-08-04T18:00:43Z
NONE
null
null
null
null
Hi there, I am trying to concatenate two datasets that I've previously saved to disk via `save_to_disk()` like so (note that both are saved as `DatasetDict`, and `PATH_DATA_CLS_*` are `Path` objects):

```python
concatenate_datasets([load_from_disk(PATH_DATA_CLS_A)['train'], load_from_disk(PATH_DATA_CLS_B)['train']])
```

This yields the following error:

```python
ValueError: Datasets' indices should ALL come from memory, or should ALL come from disk.
However datasets' indices [1] come from memory and datasets' indices [0] come from disk.
```

I've been trying to solve this for quite some time now. Both `DatasetDict`s were created by reading in a `csv` via `load_dataset` and subsequently processed using the various `datasets` methods (i.e. filter, map, remove column, rename column). I can't figure it out, though...

`load_from_disk(PATH_DATA_CLS_A)['train']` yields:

```python
Dataset({
    features: ['labels', 'text'],
    num_rows: 785
})
```

`load_from_disk(PATH_DATA_CLS_B)['train']` yields:

```python
Dataset({
    features: ['labels', 'text'],
    num_rows: 3341
})
```
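One hedged workaround sketch, assuming the goal is just to make the two datasets concatenable: materialize each dataset's indices with `flatten_indices()` first, so neither carries an on-disk indices mapping into the concatenation.

```python
from datasets import concatenate_datasets, load_from_disk

# PATH_DATA_CLS_A / PATH_DATA_CLS_B as defined above
ds_a = load_from_disk(PATH_DATA_CLS_A)["train"].flatten_indices()
ds_b = load_from_disk(PATH_DATA_CLS_B)["train"].flatten_indices()
combined = concatenate_datasets([ds_a, ds_b])
```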
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2040/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2040/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2038
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2038/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2038/comments
https://api.github.com/repos/huggingface/datasets/issues/2038/events
https://github.com/huggingface/datasets/issues/2038
830,036,875
MDU6SXNzdWU4MzAwMzY4NzU=
2,038
outdated dataset_infos.json might fail verifications
{ "avatar_url": "https://avatars.githubusercontent.com/u/2062185?v=4", "events_url": "https://api.github.com/users/songfeng/events{/privacy}", "followers_url": "https://api.github.com/users/songfeng/followers", "following_url": "https://api.github.com/users/songfeng/following{/other_user}", "gists_url": "https://api.github.com/users/songfeng/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/songfeng", "id": 2062185, "login": "songfeng", "node_id": "MDQ6VXNlcjIwNjIxODU=", "organizations_url": "https://api.github.com/users/songfeng/orgs", "received_events_url": "https://api.github.com/users/songfeng/received_events", "repos_url": "https://api.github.com/users/songfeng/repos", "site_admin": false, "starred_url": "https://api.github.com/users/songfeng/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/songfeng/subscriptions", "type": "User", "url": "https://api.github.com/users/songfeng", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Thanks for reporting.\r\n\r\nTo update the dataset_infos.json you can run:\r\n```\r\ndatasets-cli test ./datasets/doc2dial --all_configs --save_infos --ignore_verifications\r\n```", "Fixed by #2041, thanks again @songfeng !" ]
2021-03-12T11:41:54Z
2021-03-16T16:27:40Z
2021-03-16T16:27:40Z
CONTRIBUTOR
null
null
null
null
The [doc2dial/dataset_infos.json](https://github.com/huggingface/datasets/blob/master/datasets/doc2dial/dataset_infos.json) is outdated. It makes the data loader fail when verifying the download checksums, etc. Could you please update this file, or point me to how to update it? Thank you.
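For anyone blocked by this before the metadata file is regenerated, a hedged stopgap is to skip the verification step; this is a sketch, not the proper fix, and the dataset may additionally require a config name that is omitted here.

```python
from datasets import load_dataset

# ignore_verifications skips the checksum/size checks performed against
# dataset_infos.json; drop it once the metadata is fixed upstream
dataset = load_dataset("doc2dial", ignore_verifications=True)
```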
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2038/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2038/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2036
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2036/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2036/comments
https://api.github.com/repos/huggingface/datasets/issues/2036/events
https://github.com/huggingface/datasets/issues/2036
829,909,258
MDU6SXNzdWU4Mjk5MDkyNTg=
2,036
Cannot load wikitext
{ "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Gpwner", "id": 19349207, "login": "Gpwner", "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "repos_url": "https://api.github.com/users/Gpwner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "type": "User", "url": "https://api.github.com/users/Gpwner", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Solved!" ]
2021-03-12T09:09:39Z
2021-03-15T08:45:02Z
2021-03-15T08:44:44Z
NONE
null
null
null
null
When I execute this code ``` >>> from datasets import load_dataset >>> test_dataset = load_dataset("wikitext") ``` I get an error; any help? ``` Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 589, in load_dataset path, script_version=script_version, download_config=download_config, download_mode=download_mode, dataset=True File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module local_path = cached_path(file_path, download_config=download_config) File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 308, in cached_path use_etag=download_config.use_etag, File "/home/xxx/anaconda3/envs/transformer/lib/python3.7/site-packages/datasets/utils/file_utils.py", line 487, in get_from_cache raise ConnectionError("Couldn't reach {}".format(url)) ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py ```
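The traceback shows the failure happens while fetching the `wikitext.py` loading script over the network, not while reading the data itself. One hedged workaround sketch: download the script manually (e.g. from the URL in the error, on a machine with access) and point `load_dataset` at the local copy; the config name below is one of the real wikitext configs, chosen for illustration.

```python
from datasets import load_dataset

# assuming wikitext.py was saved locally next to this script, e.g. from
# https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/wikitext/wikitext.py
test_dataset = load_dataset("./wikitext.py", "wikitext-2-raw-v1")
```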
{ "avatar_url": "https://avatars.githubusercontent.com/u/19349207?v=4", "events_url": "https://api.github.com/users/Gpwner/events{/privacy}", "followers_url": "https://api.github.com/users/Gpwner/followers", "following_url": "https://api.github.com/users/Gpwner/following{/other_user}", "gists_url": "https://api.github.com/users/Gpwner/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/Gpwner", "id": 19349207, "login": "Gpwner", "node_id": "MDQ6VXNlcjE5MzQ5MjA3", "organizations_url": "https://api.github.com/users/Gpwner/orgs", "received_events_url": "https://api.github.com/users/Gpwner/received_events", "repos_url": "https://api.github.com/users/Gpwner/repos", "site_admin": false, "starred_url": "https://api.github.com/users/Gpwner/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/Gpwner/subscriptions", "type": "User", "url": "https://api.github.com/users/Gpwner", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2036/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2036/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2035
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2035/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2035/comments
https://api.github.com/repos/huggingface/datasets/issues/2035/events
https://github.com/huggingface/datasets/issues/2035
829,475,544
MDU6SXNzdWU4Mjk0NzU1NDQ=
2,035
wiki40b/wikipedia for almost all languages cannot be downloaded
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Dear @lhoestq for wikipedia dataset I also get the same error, I greatly appreciate if you could have a look into this dataset as well. Below please find the command to reproduce the error:\r\n\r\n```\r\ndataset = load_dataset(\"wikipedia\", \"20200501.bg\")\r\nprint(dataset)\r\n```\r\n\r\nYour library is my only ...
2021-03-11T19:54:54Z
2024-03-15T16:09:49Z
2024-03-15T16:09:48Z
NONE
null
null
null
null
Hi, I am trying to download the data as below: ``` from datasets import load_dataset dataset = load_dataset("wiki40b", "cs") print(dataset) ``` I am getting the error below. @lhoestq, I would be grateful if you could assist me with it. I am getting this error for almost all languages except English. I really need the majority of languages in this dataset to be able to train my models for a deadline, and your great, scalable, well-written library is my only hope for training the models at scale while being low on resources. Thank you very much. ``` (fast) dara@vgne046:/user/dara/dev/codes/seq2seq$ python test_data.py Downloading and preparing dataset wiki40b/cs (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to temp/dara/cache_home_2/datasets/wiki40b/cs/1.1.0/063778187363ffb294896eaa010fc254b42b73e31117c71573a953b0b0bf010f... Traceback (most recent call last): File "test_data.py", line 3, in <module> dataset = load_dataset("wiki40b", "cs") File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/load.py", line 746, in load_dataset use_auth_token=use_auth_token, File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 579, in download_and_prepare dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/datasets/builder.py", line 1105, in _download_and_prepare import apache_beam as beam File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/__init__.py", line 96, in <module> from apache_beam import io File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/__init__.py", line 23, in <module> from apache_beam.io.avroio import * File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/apache_beam-2.28.0-py3.7-linux-x86_64.egg/apache_beam/io/avroio.py", line 55, in <module> import avro File "<frozen importlib._bootstrap>", line 983, in _find_and_load File "<frozen importlib._bootstrap>", line 967, in _find_and_load_unlocked File "<frozen importlib._bootstrap>", line 668, in _load_unlocked File "<frozen importlib._bootstrap>", line 638, in _load_backward_compatible File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 34, in <module> File "/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/__init__.py", line 30, in LoadResource NotADirectoryError: [Errno 20] Not a directory: '/user/dara/libs/anaconda3/envs/fast/lib/python3.7/site-packages/avro_python3-1.9.2.1-py3.7.egg/avro/VERSION.txt' ```
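Note that the traceback actually fails while importing `apache_beam` (through a broken `avro-python3` egg install), before any download begins, so this is an environment issue rather than a download issue. As a hedged sketch, assuming the Beam environment is repaired (reinstalling `avro-python3` with pip rather than as an egg is a commonly reported fix for this `NotADirectoryError`): `wiki40b` is a Beam-prepared dataset, so a runner is typically passed explicitly for configs that are not pre-processed.

```python
from datasets import load_dataset

# DirectRunner executes the Beam preprocessing pipeline locally on this
# machine; large languages can still be heavy to prepare this way
dataset = load_dataset("wiki40b", "cs", beam_runner="DirectRunner")
print(dataset)
```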
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2035/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2035/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2032
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2032/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2032/comments
https://api.github.com/repos/huggingface/datasets/issues/2032/events
https://github.com/huggingface/datasets/issues/2032
829,250,912
MDU6SXNzdWU4MjkyNTA5MTI=
2,032
Use Arrow filtering instead of writing a new arrow file for Dataset.filter
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "color": "a2eeef", "default": true, "description": "New feature or request", "id": 1935892871, "name": "enhancement", "node_id": "MDU6TGFiZWwxOTM1ODkyODcx", "url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_ur...
null
[ "Actually table.filter returns a new table in memory, which can fill users RAM.\r\n\r\nTherefore it's not a good solution if we want to keep supporting bigger than RAM datastes" ]
2021-03-11T15:18:50Z
2024-01-19T13:26:32Z
2024-01-19T13:26:32Z
MEMBER
null
null
null
null
Currently the filter method reads the dataset batch by batch to write a new, filtered arrow file on disk. Therefore all the reading + writing can take some time. Using a mask directly on the arrow table doesn't do any read or write operations, so it's significantly quicker. I think there are two cases: - if the dataset doesn't have an indices mapping, then one can simply use the arrow filtering on the main arrow table `dataset._data.filter(...)` - if the dataset has an indices mapping, then the mask should be applied on the indices mapping table `dataset._indices.filter(...)` The indices mapping is used to map between the idx at `dataset[idx]` in `__getitem__` and the idx in the actual arrow table. The new filter method should therefore be faster, and allow users to pass either a filtering function (that returns a boolean given an example), or directly a mask. Feel free to discuss this idea in this thread :) One additional note: the refactor at #2025 would make all the pickle-related stuff work directly with the arrow filtering, so that we only need to change the Dataset.filter method without having to deal with pickle. cc @theo-m @gchhablani related issues: #1796 #1949
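To make the proposal concrete, here is a minimal pyarrow sketch (independent of the `datasets` internals, assuming a pyarrow version that provides `Table.filter`) showing that mask-based filtering involves no file I/O; note, as the first comment points out, that the filtered table is materialized in memory.

```python
import pyarrow as pa

table = pa.table({"text": ["a", "b", "c", "d"], "label": [0, 1, 0, 1]})

# build a boolean mask over the rows, e.g. keep rows where label == 0
mask = pa.array([lbl == 0 for lbl in table.column("label").to_pylist()])

# Table.filter applies the mask without reading or writing any arrow file;
# the result is a new in-memory table
filtered = table.filter(mask)
print(filtered.num_rows)  # 2
```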
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 4, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 4, "url": "https://api.github.com/repos/huggingface/datasets/issues/2032/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2032/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2031
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2031/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2031/comments
https://api.github.com/repos/huggingface/datasets/issues/2031/events
https://github.com/huggingface/datasets/issues/2031
829,122,778
MDU6SXNzdWU4MjkxMjI3Nzg=
2,031
wikipedia.py generator that extracts XML doesn't release memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/miyamonz", "id": 6331508, "login": "miyamonz", "node_id": "MDQ6VXNlcjYzMzE1MDg=", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "repos_url": "https://api.github.com/users/miyamonz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "type": "User", "url": "https://api.github.com/users/miyamonz", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @miyamonz \r\nThanks for investigating this issue, good job !\r\nIt would be awesome to integrate your fix in the library, could you open a pull request ?", "OK! I'll send it later." ]
2021-03-11T12:51:24Z
2021-03-22T08:33:52Z
2021-03-22T08:33:52Z
CONTRIBUTOR
null
null
null
null
I tried downloading the Japanese Wikipedia dataset, but it always failed, probably because it ran out of memory. I found that the generator function that extracts XML data in wikipedia.py doesn't release memory inside the loop. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L464-L502 `root.clear()` is intended to free memory, but it doesn't. https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L490 https://github.com/huggingface/datasets/blob/13a5b7db992ad5cf77895e4c0f76595314390418/datasets/wikipedia/wikipedia.py#L494 I replaced both calls with `elem.clear()`, and then it seems to work correctly. Here is the notebook to reproduce it: https://gist.github.com/miyamonz/dc06117302b6e85fa51cbf46dde6bb51#file-xtract_content-ipynb
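The generic pattern behind this fix, as a standalone sketch: with `xml.etree.ElementTree.iterparse`, each finished element must be cleared itself; clearing only the root leaves the already-built subtrees referenced, so memory grows with the size of the dump.

```python
import xml.etree.ElementTree as ET

def iter_pages(xml_path):
    """Stream <page> elements from a large XML dump with bounded memory."""
    for _event, elem in ET.iterparse(xml_path, events=("end",)):
        if elem.tag.endswith("page"):
            yield elem
            # clear the element we just consumed: this drops its whole
            # subtree, which clearing only the root would not release
            elem.clear()
```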
{ "avatar_url": "https://avatars.githubusercontent.com/u/6331508?v=4", "events_url": "https://api.github.com/users/miyamonz/events{/privacy}", "followers_url": "https://api.github.com/users/miyamonz/followers", "following_url": "https://api.github.com/users/miyamonz/following{/other_user}", "gists_url": "https://api.github.com/users/miyamonz/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/miyamonz", "id": 6331508, "login": "miyamonz", "node_id": "MDQ6VXNlcjYzMzE1MDg=", "organizations_url": "https://api.github.com/users/miyamonz/orgs", "received_events_url": "https://api.github.com/users/miyamonz/received_events", "repos_url": "https://api.github.com/users/miyamonz/repos", "site_admin": false, "starred_url": "https://api.github.com/users/miyamonz/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/miyamonz/subscriptions", "type": "User", "url": "https://api.github.com/users/miyamonz", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2031/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2031/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2029
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2029/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2029/comments
https://api.github.com/repos/huggingface/datasets/issues/2029/events
https://github.com/huggingface/datasets/issues/2029
829,097,290
MDU6SXNzdWU4MjkwOTcyOTA=
2,029
Loading a faiss index KeyError
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881", "user_view_type": "public" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
null
[]
null
[ "In your code `dataset2` doesn't contain the \"embeddings\" column, since it is created from the pandas DataFrame with columns \"text\" and \"label\".\r\n\r\nTherefore when you call `dataset2[embeddings_name]`, you get a `KeyError`.\r\n\r\nIf you want the \"embeddings\" column back, you can create `dataset2` with\r...
2021-03-11T12:16:13Z
2021-03-12T00:21:09Z
2021-03-12T00:21:09Z
NONE
null
null
null
null
I've recently been testing out RAG and DPR embeddings, and I've run into an issue that is not apparent in the documentation. The basic steps are: 1. Create a dataset (dataset1) 2. Create an embeddings column using DPR 3. Add a faiss index to the dataset 4. Save faiss index to a file 5. Create a new dataset (dataset2) with the same text and label information as dataset1 6. Try to load the faiss index from file to dataset2 7. Get `KeyError: "Column embeddings not in the dataset"` I've made a colab notebook that should show exactly what I did. Please switch to GPU runtime; I didn't check on CPU. https://colab.research.google.com/drive/1X0S9ZuZ8k0ybcoei4w7so6dS_WrABmIx?usp=sharing Ubuntu Version VERSION="18.04.5 LTS (Bionic Beaver)" datasets==1.4.1 faiss==1.5.3 faiss-gpu==1.7.0 torch==1.8.0+cu101 transformers==4.3.3 NVIDIA-SMI 460.56 Driver Version: 460.32.03 CUDA Version: 11.2 Tesla K80 I was basically following the steps here: https://huggingface.co/docs/datasets/faiss_and_ea.html#adding-a-faiss-index I included the exact code from the documentation at the end of the notebook to show that they don't work either.
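As the first comment explains, the `KeyError` comes from the second dataset never having had the `embeddings` column added, so indexing that column fails before the faiss index is even involved. A hedged sketch of the fix, reusing the `dataset1`/`dataset2` names from the notebook (the index file name is illustrative):

```python
# dataset2 was created from a DataFrame with only "text" and "label";
# copy the embeddings over from dataset1 before touching the index
dataset2 = dataset2.map(
    lambda example, idx: {"embeddings": dataset1[idx]["embeddings"]},
    with_indices=True,
)

# the saved faiss index can then be attached under that column name
dataset2.load_faiss_index("embeddings", "my_index.faiss")
```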
{ "avatar_url": "https://avatars.githubusercontent.com/u/24982805?v=4", "events_url": "https://api.github.com/users/nbroad1881/events{/privacy}", "followers_url": "https://api.github.com/users/nbroad1881/followers", "following_url": "https://api.github.com/users/nbroad1881/following{/other_user}", "gists_url": "https://api.github.com/users/nbroad1881/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/nbroad1881", "id": 24982805, "login": "nbroad1881", "node_id": "MDQ6VXNlcjI0OTgyODA1", "organizations_url": "https://api.github.com/users/nbroad1881/orgs", "received_events_url": "https://api.github.com/users/nbroad1881/received_events", "repos_url": "https://api.github.com/users/nbroad1881/repos", "site_admin": false, "starred_url": "https://api.github.com/users/nbroad1881/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/nbroad1881/subscriptions", "type": "User", "url": "https://api.github.com/users/nbroad1881", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2029/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2029/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2026
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2026/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2026/comments
https://api.github.com/repos/huggingface/datasets/issues/2026/events
https://github.com/huggingface/datasets/issues/2026
828,194,467
MDU6SXNzdWU4MjgxOTQ0Njc=
2,026
KeyError on using map after renaming a column
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nActually, the error occurs due to these two lines:\r\n```python\r\nraw_dataset.set_format('torch',columns=['img','label'])\r\nraw_dataset = raw_dataset.rename_column('img','image')\r\n```\r\n`Dataset.rename_column` doesn't update the `_format_columns` attribute, previously defined by `Dataset.set_format...
2021-03-10T18:54:17Z
2021-03-11T14:39:34Z
2021-03-11T14:38:40Z
CONTRIBUTOR
null
null
null
null
Hi, I'm trying to use `cifar10` dataset. I want to rename the `img` feature to `image` in order to make it consistent with `mnist`, which I'm also planning to use. By doing this, I was trying to avoid modifying `prepare_train_features` function. Here is what I try: ```python transform = Compose([ToPILImage(),ToTensor(),Normalize([0.0,0.0,0.0],[1.0,1.0,1.0])]) def prepare_features(examples): images = [] labels = [] print(examples) for example_idx, example in enumerate(examples["image"]): if transform is not None: images.append(transform(examples["image"][example_idx].permute(2,0,1))) else: images.append(examples["image"][example_idx].permute(2,0,1)) labels.append(examples["label"][example_idx]) output = {"label":labels, "image":images} return output raw_dataset = load_dataset('cifar10') raw_dataset.set_format('torch',columns=['img','label']) raw_dataset = raw_dataset.rename_column('img','image') features = datasets.Features({ "image": datasets.Array3D(shape=(3,32,32),dtype="float32"), "label": datasets.features.ClassLabel(names=[ "airplane", "automobile", "bird", "cat", "deer", "dog", "frog", "horse", "ship", "truck", ]), }) train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) ``` The error: ```python --------------------------------------------------------------------------- KeyError Traceback (most recent call last) <ipython-input-54-bf29672c53ee> in <module>() 14 ]), 15 }) ---> 16 train_dataset = raw_dataset.map(prepare_features, features = features,batched=True, batch_size=10000) 2 frames /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in map(self, function, with_indices, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint) 1287 test_inputs = self[:2] if batched else self[0] 1288 test_indices = [0, 1] if batched else 0 -> 1289 update_data = does_function_return_dict(test_inputs, test_indices) 1290 logger.info("Testing finished, running the mapping function on the dataset") 1291 /usr/local/lib/python3.7/dist-packages/datasets/arrow_dataset.py in does_function_return_dict(inputs, indices) 1258 fn_args = [inputs] if input_columns is None else [inputs[col] for col in input_columns] 1259 processed_inputs = ( -> 1260 function(*fn_args, indices, **fn_kwargs) if with_indices else function(*fn_args, **fn_kwargs) 1261 ) 1262 does_return_dict = isinstance(processed_inputs, Mapping) <ipython-input-52-b4dccbafb70d> in prepare_features(examples) 3 labels = [] 4 print(examples) ----> 5 for example_idx, example in enumerate(examples["image"]): 6 if transform is not None: 7 images.append(transform(examples["image"][example_idx].permute(2,0,1))) KeyError: 'image' ``` The print statement inside returns this: ```python {'label': tensor([6, 9])} ``` Apparently, both `img` and `image` do not exist after renaming. Note that this code works fine with `img` everywhere. Notebook: https://colab.research.google.com/drive/1SzESAlz3BnVYrgQeJ838vbMp1OsukiA2?usp=sharing
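Per the first comment, `rename_column` in this version does not update the `_format_columns` attribute recorded by `set_format`, so the formatted view silently drops the renamed column. A sketch of the workaround: rename first, then set the format using the new name.

```python
from datasets import load_dataset

raw_dataset = load_dataset("cifar10")

# rename before set_format: set_format records the column names, and
# rename_column (in this version) does not update that record afterwards
raw_dataset = raw_dataset.rename_column("img", "image")
raw_dataset.set_format("torch", columns=["image", "label"])
```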
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2026/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2026/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2022
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2022/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2022/comments
https://api.github.com/repos/huggingface/datasets/issues/2022/events
https://github.com/huggingface/datasets/issues/2022
827,435,033
MDU6SXNzdWU4Mjc0MzUwMzM=
2,022
ValueError when rename_column on split dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/53626067?v=4", "events_url": "https://api.github.com/users/simonschoe/events{/privacy}", "followers_url": "https://api.github.com/users/simonschoe/followers", "following_url": "https://api.github.com/users/simonschoe/following{/other_user}", "gists_url": "https://api.github.com/users/simonschoe/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/simonschoe", "id": 53626067, "login": "simonschoe", "node_id": "MDQ6VXNlcjUzNjI2MDY3", "organizations_url": "https://api.github.com/users/simonschoe/orgs", "received_events_url": "https://api.github.com/users/simonschoe/received_events", "repos_url": "https://api.github.com/users/simonschoe/repos", "site_admin": false, "starred_url": "https://api.github.com/users/simonschoe/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/simonschoe/subscriptions", "type": "User", "url": "https://api.github.com/users/simonschoe", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nThis is a bug so thanks for reporting it. `Dataset.__setstate__` is the problem, which is called when `Dataset.rename_column` tries to copy the dataset with `copy.deepcopy(self)`. This only happens if the `split` arg in `load_dataset` was defined as `ReadInstruction`.\r\n\r\nTo overcome this issue, use...
2021-03-10T09:40:38Z
2025-02-05T13:36:07Z
2021-03-16T14:05:05Z
NONE
null
null
null
null
Hi there, I am loading a `.tsv` file via `load_dataset` and subsequently splitting the rows into training and test sets via the `ReadInstruction` API like so: ```python split = { 'train': ReadInstruction('train', to=90, unit='%'), 'test': ReadInstruction('train', from_=-10, unit='%') } dataset = load_dataset( path='csv', # use 'text' loading script to load from local txt-files delimiter='\t', # xxx data_files=text_files, # list of paths to local text files split=split, # xxx ) dataset ``` Part of the output: ```python DatasetDict({ train: Dataset({ features: ['sentence', 'sentiment'], num_rows: 900 }) test: Dataset({ features: ['sentence', 'sentiment'], num_rows: 100 }) }) ``` Afterwards I'd like to rename the 'sentence' column to 'text' in order to be compatible with my modeling pipeline. However, if I run the following code I get a `ValueError`: ```python dataset['train'].rename_column('sentence', 'text') ``` ```python /usr/local/lib/python3.7/dist-packages/datasets/splits.py in __init__(self, name) 353 for split_name in split_names_from_instruction: 354 if not re.match(_split_re, split_name): --> 355 raise ValueError(f"Split name should match '{_split_re}'' but got '{split_name}'.") 356 357 def __str__(self): ValueError: Split name should match '^\w+(\.\w+)*$'' but got 'ReadInstruction('. ``` In particular, this behavior does not arise if I use the deprecated `rename_column_` method. Any idea what causes the error? I would assume it's something in the way I defined the split. Thanks in advance! :)
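Until the underlying `__setstate__` bug is fixed, one workaround (a sketch along the lines of the first comment's suggestion) is to express the same split with the string slicing syntax instead of `ReadInstruction` objects, which avoids the broken deepcopy path:

```python
from datasets import load_dataset

dataset = load_dataset(
    path='csv',
    delimiter='\t',
    data_files=text_files,  # same list of local files as above
    # string-based slicing is equivalent to the ReadInstruction objects
    split={'train': 'train[:90%]', 'test': 'train[-10%:]'},
)

# rename_column returns a new dataset rather than mutating in place
dataset['train'] = dataset['train'].rename_column('sentence', 'text')
```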
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2022/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2022/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2021
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2021/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2021/comments
https://api.github.com/repos/huggingface/datasets/issues/2021/events
https://github.com/huggingface/datasets/issues/2021
826,988,016
MDU6SXNzdWU4MjY5ODgwMTY=
2,021
Interactively doing save_to_disk and load_from_disk corrupts the datasets object?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi,\r\n\r\nCan you give us a minimal reproducible example? This [part](https://huggingface.co/docs/datasets/master/processing.html#controling-the-cache-behavior) of the docs explains how to control caching." ]
2021-03-10T02:48:34Z
2021-03-13T10:07:41Z
2021-03-13T10:07:41Z
NONE
null
null
null
null
The dataset_info.json file saved after using save_to_disk gets corrupted as follows. ![image](https://user-images.githubusercontent.com/16892570/110568474-ed969880-81b7-11eb-832f-2e5129656016.png) Is there a way to disable the cache that saves to /tmp/huggingface/datasets? I have a feeling there is a serious issue with caching.
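On the caching question, a hedged sketch of the available knobs, assuming a `datasets` version that ships `set_caching_enabled` (it is described in the caching documentation linked in the first comment); the cache path below is illustrative:

```python
import datasets

# turn off the fingerprint-based transform cache for this session
datasets.set_caching_enabled(False)

# or keep caching but redirect it away from the default location
dataset = datasets.load_dataset(
    "csv", data_files="data.tsv", cache_dir="/path/to/my_cache"
)
```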
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2021/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2021/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2012
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2012/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2012/comments
https://api.github.com/repos/huggingface/datasets/issues/2012/events
https://github.com/huggingface/datasets/issues/2012
825,634,064
MDU6SXNzdWU4MjU2MzQwNjQ=
2,012
No upstream branch
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "What's the issue exactly ?\r\n\r\nGiven an `upstream` remote repository with url `https://github.com/huggingface/datasets.git`, you can totally rebase from `upstream/master`.\r\n\r\nIt's mentioned at the beginning how to add the `upstream` remote repository\r\n\r\nhttps://github.com/huggingface/datasets/blob/987df...
2021-03-09T09:48:55Z
2021-03-09T11:33:31Z
2021-03-09T11:33:31Z
CONTRIBUTOR
null
null
null
null
Feels like the documentation on adding a new dataset is outdated? https://github.com/huggingface/datasets/blob/987df6b4e9e20fc0c92bc9df48137d170756fd7b/ADD_NEW_DATASET.md#L49-L54 There is no upstream branch on remote.
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2012/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2012/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2010
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2010/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2010/comments
https://api.github.com/repos/huggingface/datasets/issues/2010/events
https://github.com/huggingface/datasets/issues/2010
825,567,635
MDU6SXNzdWU4MjU1Njc2MzU=
2,010
Local testing fails
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists...
null
[ "I'm not able to reproduce on my side.\r\nCan you provide the full stacktrace please ?\r\nWhat version of `python` and `dill` do you have ? Which OS are you using ?", "```\r\nco_filename = '<ipython-input-2-e0383a102aae>', returned_obj = [0]\r\n ...
2021-03-09T09:01:38Z
2021-03-09T14:06:03Z
2021-03-09T14:06:03Z
CONTRIBUTOR
null
null
null
null
I'm following the CI setup as described in https://github.com/huggingface/datasets/blob/8eee4fa9e133fe873a7993ba746d32ca2b687551/.circleci/config.yml#L16-L19 in a new conda environment, at commit https://github.com/huggingface/datasets/commit/4de6dbf84e93dad97e1000120d6628c88954e5d4 and getting ``` FAILED tests/test_caching.py::RecurseDumpTest::test_dump_ipython_function - TypeError: an integer is required (got type bytes) 1 failed, 2321 passed, 5109 skipped, 10 warnings in 124.32s (0:02:04) ``` Seems like a discrepancy with CI, perhaps a lib version that's not controlled? Tried with `pyarrow=={1.0.0,0.17.1,2.0.0}`
{ "avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4", "events_url": "https://api.github.com/users/lhoestq/events{/privacy}", "followers_url": "https://api.github.com/users/lhoestq/followers", "following_url": "https://api.github.com/users/lhoestq/following{/other_user}", "gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/lhoestq", "id": 42851186, "login": "lhoestq", "node_id": "MDQ6VXNlcjQyODUxMTg2", "organizations_url": "https://api.github.com/users/lhoestq/orgs", "received_events_url": "https://api.github.com/users/lhoestq/received_events", "repos_url": "https://api.github.com/users/lhoestq/repos", "site_admin": false, "starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions", "type": "User", "url": "https://api.github.com/users/lhoestq", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2010/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2010/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2009
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2009/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2009/comments
https://api.github.com/repos/huggingface/datasets/issues/2009/events
https://github.com/huggingface/datasets/issues/2009
825,541,366
MDU6SXNzdWU4MjU1NDEzNjY=
2,009
Ambiguous documentation
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "color": "0075ca", "default": true, "description": "Improvements or additions to documentation", "id": 1935892861, "name": "documentation", "node_id": "MDU6TGFiZWwxOTM1ODkyODYx", "url": "https://api.github.com/repos/huggingface/datasets/labels/documentation" } ]
closed
false
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
[ { "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_ur...
null
[ "Hi @theo-m !\r\n\r\nA few lines above this line, you'll find that the `_split_generators` method returns a list of `SplitGenerator`s objects:\r\n\r\n```python\r\ndatasets.SplitGenerator(\r\n name=datasets.Split.VALIDATION,\r\n # These kwargs will be passed to _generate_examples\r\n gen_kwargs={\r\n ...
2021-03-09T08:42:11Z
2021-03-12T15:01:34Z
2021-03-12T15:01:34Z
CONTRIBUTOR
null
null
null
null
https://github.com/huggingface/datasets/blob/2ac9a0d24a091989f869af55f9f6411b37ff5188/templates/new_dataset_script.py#L156-L158 Looking at the template, I find this documentation line confusing: the method parameters don't include `gen_kwargs`, so I'm unclear where they're coming from. Happy to push a PR with a clearer statement once I understand the meaning.
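For context, the mechanism (spelled out in the first comment) is that each `SplitGenerator` returned by `_split_generators` declares `gen_kwargs`, and the library forwards those as keyword arguments to `_generate_examples`. A condensed sketch, with a hypothetical `_URL` and file layout:

```python
import datasets

_URL = "https://example.com/data.zip"  # hypothetical download location

class MyDataset(datasets.GeneratorBasedBuilder):
    def _info(self):
        return datasets.DatasetInfo(
            features=datasets.Features({"text": datasets.Value("string")})
        )

    def _split_generators(self, dl_manager):
        data_dir = dl_manager.download_and_extract(_URL)
        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                # everything in gen_kwargs is passed to _generate_examples
                gen_kwargs={"filepath": f"{data_dir}/train.jsonl"},
            ),
        ]

    def _generate_examples(self, filepath):
        # 'filepath' arrives here from the gen_kwargs declared above
        with open(filepath, encoding="utf-8") as f:
            for idx, line in enumerate(f):
                yield idx, {"text": line.strip()}
```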
{ "avatar_url": "https://avatars.githubusercontent.com/u/17948980?v=4", "events_url": "https://api.github.com/users/theo-m/events{/privacy}", "followers_url": "https://api.github.com/users/theo-m/followers", "following_url": "https://api.github.com/users/theo-m/following{/other_user}", "gists_url": "https://api.github.com/users/theo-m/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/theo-m", "id": 17948980, "login": "theo-m", "node_id": "MDQ6VXNlcjE3OTQ4OTgw", "organizations_url": "https://api.github.com/users/theo-m/orgs", "received_events_url": "https://api.github.com/users/theo-m/received_events", "repos_url": "https://api.github.com/users/theo-m/repos", "site_admin": false, "starred_url": "https://api.github.com/users/theo-m/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/theo-m/subscriptions", "type": "User", "url": "https://api.github.com/users/theo-m", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2009/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2009/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2007
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2007/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2007/comments
https://api.github.com/repos/huggingface/datasets/issues/2007/events
https://github.com/huggingface/datasets/issues/2007
824,518,158
MDU6SXNzdWU4MjQ1MTgxNTg=
2,007
How to not load huggingface datasets into memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "So maybe a summary here: \r\nIf I could fit a large model with batch_size = X into memory, is there a way I could train this model for huge datasets with keeping setting the same? thanks ", "The `datastets` library doesn't load datasets into memory. Therefore you can load a dataset that is terabytes big without ...
2021-03-08T12:35:26Z
2021-08-04T18:02:25Z
2021-08-04T18:02:25Z
NONE
null
null
null
null
Hi, I am running this example from the transformers library, version 4.3.3 (the full documentation is at https://github.com/huggingface/transformers/issues/8771, but the command below should work out of the box):

USE_TF=0 deepspeed run_seq2seq.py --model_name_or_path google/mt5-base --dataset_name wmt16 --dataset_config_name ro-en --source_prefix "translate English to Romanian: " --task translation_en_to_ro --output_dir /test/test_large --do_train --do_eval --predict_with_generate --max_train_samples 500 --max_val_samples 500 --max_source_length 128 --max_target_length 128 --sortish_sampler --per_device_train_batch_size 8 --val_max_target_length 128 --deepspeed ds_config.json --num_train_epochs 1 --eval_steps 25000 --warmup_steps 500 --overwrite_output_dir

(The script itself is at https://github.com/huggingface/transformers/blob/master/examples/seq2seq/run_seq2seq.py.) If I do not pass `max_train_samples` in the command above and instead load the full dataset, I get a memory error on a GPU with 24 gigabytes of memory. I need to train a large-scale mt5 model on large-scale datasets such as Wikipedia (several of them concatenated) or other multilingual datasets like OPUS. Could you help me avoid loading the full data into memory, so that the scripts do not depend on the dataset size? In the example above, I was hoping the script could work regardless of dataset size, so I can still train the model without subsampling the training set. Thank you so much @lhoestq for your great help in advance.
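For context, a minimal sketch of how `datasets` keeps data on disk rather than in RAM (the `20200501.en` config is used here only because it ships preprocessed files; this is an illustration, not the asker's exact setup):

```python
from datasets import load_dataset

# load_dataset writes the data to Arrow cache files and memory-maps them,
# so the corpus is not copied into RAM when the dataset object is created.
wiki = load_dataset("wikipedia", "20200501.en", split="train")

# Slicing reads only the requested rows from disk.
first_texts = wiki[0:8]["text"]
print(len(first_texts))
```

The memory pressure in a training script therefore usually comes from what is done with the rows afterwards (tokenized copies, long padded sequences, model activations), not from the load itself.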
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2007/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2007/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2005
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2005/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2005/comments
https://api.github.com/repos/huggingface/datasets/issues/2005/events
https://github.com/huggingface/datasets/issues/2005
824,275,035
MDU6SXNzdWU4MjQyNzUwMzU=
2,005
Setting to torch format not working with torchvision and MNIST
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Adding to the previous information, I think `torch.utils.data.DataLoader` is doing some conversion. \r\nWhat I tried:\r\n```python\r\ntrain_dataset = load_dataset('mnist')\r\n```\r\nI don't use any `map` or `set_format` or any `transform`. I use this directly, and try to load batches using the `DataLoader` with ba...
2021-03-08T07:38:11Z
2021-03-09T17:58:13Z
2021-03-09T17:58:13Z
CONTRIBUTOR
null
null
null
null
Hi, I am trying to use `torchvision.transforms` to handle the transformation of the image data in the `mnist` dataset. Assume I have a `transform` variable which contains a `torchvision.transforms` object. A snippet of what I am trying to do:

```python
def prepare_features(examples):
    images = []
    labels = []
    for example_idx, example in enumerate(examples["image"]):
        if transform is not None:
            images.append(transform(
                np.array(examples["image"][example_idx], dtype=np.uint8)
            ))
        else:
            images.append(torch.tensor(np.array(examples["image"][example_idx], dtype=np.uint8)))
        labels.append(torch.tensor(examples["label"][example_idx]))
    output = {"label": labels, "image": images}
    return output

raw_dataset = load_dataset('mnist')
train_dataset = raw_dataset.map(prepare_features, batched=True, batch_size=10000)
train_dataset.set_format("torch", columns=["image", "label"])
```

After this, I check the types of the following:

```python
print(type(train_dataset["train"]["label"]))
print(type(train_dataset["train"]["image"][0]))
```

This leads to the following output:

```python
<class 'torch.Tensor'>
<class 'list'>
```

I use `torch.utils.data.DataLoader` for batches, and the type of `batch["train"]["image"]` is also `<class 'list'>`. I don't understand why only the `label` is converted to a torch tensor while the image is not. How can I fix this issue? Thanks, Gunjan

EDIT: I just checked the shapes and the types: `batch["image"]` is actually a list of lists of tensors. The shape is (1, 28, 2, 28), where `batch_size` is 2. I don't understand why this is happening; ideally it should be a tensor of shape (2, 1, 28, 28).

EDIT 2: Inside `prepare_train_features`, the shape of `images[0]` is `torch.Size([1, 28, 28])`, so the conversion is working. However, the output of `map` is a list of lists of lists of lists.
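For readers hitting the same shape problem, a sketch of a common workaround, assuming a `datasets` version that provides `set_transform`: apply the torchvision transform at access time instead of baking tensors into the Arrow table with `map` (Arrow serializes tensors back as nested lists, which is where the list-of-lists output comes from):

```python
import numpy as np
import torch
from datasets import load_dataset
from torchvision import transforms

tfm = transforms.Compose([transforms.ToTensor()])

def to_tensors(batch):
    # Runs on access, so the tensors are never serialized back to Arrow.
    batch["image"] = torch.stack(
        [tfm(np.array(img, dtype=np.uint8)) for img in batch["image"]]
    )
    batch["label"] = torch.tensor(batch["label"])
    return batch

mnist = load_dataset("mnist", split="train")
mnist.set_transform(to_tensors)

print(mnist[0:2]["image"].shape)  # expected: torch.Size([2, 1, 28, 28])
```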
{ "avatar_url": "https://avatars.githubusercontent.com/u/29076344?v=4", "events_url": "https://api.github.com/users/gchhablani/events{/privacy}", "followers_url": "https://api.github.com/users/gchhablani/followers", "following_url": "https://api.github.com/users/gchhablani/following{/other_user}", "gists_url": "https://api.github.com/users/gchhablani/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/gchhablani", "id": 29076344, "login": "gchhablani", "node_id": "MDQ6VXNlcjI5MDc2MzQ0", "organizations_url": "https://api.github.com/users/gchhablani/orgs", "received_events_url": "https://api.github.com/users/gchhablani/received_events", "repos_url": "https://api.github.com/users/gchhablani/repos", "site_admin": false, "starred_url": "https://api.github.com/users/gchhablani/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/gchhablani/subscriptions", "type": "User", "url": "https://api.github.com/users/gchhablani", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2005/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2005/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2003
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2003/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2003/comments
https://api.github.com/repos/huggingface/datasets/issues/2003/events
https://github.com/huggingface/datasets/issues/2003
824,034,678
MDU6SXNzdWU4MjQwMzQ2Nzg=
2,003
Messages are being printed to the `stdout`
{ "avatar_url": "https://avatars.githubusercontent.com/u/1367529?v=4", "events_url": "https://api.github.com/users/mahnerak/events{/privacy}", "followers_url": "https://api.github.com/users/mahnerak/followers", "following_url": "https://api.github.com/users/mahnerak/following{/other_user}", "gists_url": "https://api.github.com/users/mahnerak/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mahnerak", "id": 1367529, "login": "mahnerak", "node_id": "MDQ6VXNlcjEzNjc1Mjk=", "organizations_url": "https://api.github.com/users/mahnerak/orgs", "received_events_url": "https://api.github.com/users/mahnerak/received_events", "repos_url": "https://api.github.com/users/mahnerak/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mahnerak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mahnerak/subscriptions", "type": "User", "url": "https://api.github.com/users/mahnerak", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "This is expected to show this message to the user via stdout.\r\nThis way the users see it directly and can cancel the downloading if they want to.\r\nCould you elaborate why it would be better to have it in stderr instead of stdout ?", "@lhoestq, sorry for the late reply\r\n\r\nI completely understand why you d...
2021-03-07T22:09:34Z
2023-07-25T16:35:21Z
2023-07-25T16:35:21Z
NONE
null
null
null
null
In this code segment, we can see that some messages are being printed to `stdout`: https://github.com/huggingface/datasets/blob/7e60bb509b595e8edc60a87f32b2bacfc065d607/src/datasets/builder.py#L545-L554 According to the comment, this is done intentionally, but I don't really understand why we don't log it at a higher level or print it directly to `stderr`. In my opinion, this kind of message should never be printed to stdout. At the very least, some configuration flag should be provided so that users can explicitly prevent the package from contaminating stdout.
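Until such a flag exists, a minimal workaround sketch using only the standard library: redirect `stdout` around the offending call and forward whatever was captured to `stderr` (the dataset name is just an example).

```python
import contextlib
import io
import sys

from datasets import load_dataset

# Capture anything the builder prints so it does not mix with the program's
# real output on stdout (e.g. when that output is piped to another process).
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    ds = load_dataset("ag_news", split="train[:100]")

# Forward the captured messages to stderr instead.
sys.stderr.write(buf.getvalue())
```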
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2003/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2003/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2001
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2001/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2001/comments
https://api.github.com/repos/huggingface/datasets/issues/2001/events
https://github.com/huggingface/datasets/issues/2001
823,946,706
MDU6SXNzdWU4MjM5NDY3MDY=
2,001
Empty evidence document ("provenance") in KILT ELI5 dataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/donggyukimc", "id": 16605764, "login": "donggyukimc", "node_id": "MDQ6VXNlcjE2NjA1NzY0", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "repos_url": "https://api.github.com/users/donggyukimc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "type": "User", "url": "https://api.github.com/users/donggyukimc", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Why did you close this issue? How did you end up finding the evidence documents? I'm running into a similar issue with other KILT tasks." ]
2021-03-07T15:41:35Z
2022-12-19T19:25:14Z
2021-03-17T05:51:01Z
NONE
null
null
null
null
In the original KILT benchmark (https://github.com/facebookresearch/KILT), every sample has an evidence document (i.e. a Wikipedia page id) for prediction. For example, a sample in the ELI5 dataset includes provenance (= the evidence document), like this: `{"id": "1kiwfx", "input": "In Trading Places (1983, Akroyd/Murphy) how does the scheme at the end of the movie work? Why would buying a lot of OJ at a high price ruin the Duke Brothers?", "output": [{"answer": "I feel so old. People have been askinbg what happened at the end of this movie for what must be the last 15 years of my life. It never stops. Every year/month/fortnight, I see someone asking what happened, and someone explaining. Andf it will keep on happening, until I am 90yrs old, in a home, with nothing but the Internet and my bladder to keep me going. And there it will be: \"what happens at the end of Trading Places?\""}, {"provenance": [{"wikipedia_id": "242855", "title": "Futures contract", "section": "Section::::Abstract.", "start_paragraph_id": 1, "start_character": 14, "end_paragraph_id": 1, "end_character": 612, "bleu_score": 0.9232808519770748}]}], "meta": {"partial_evidence": [{"wikipedia_id": "520990", "title": "Trading Places", "section": "Section::::Plot.\n", "start_paragraph_id": 7, "end_paragraph_id": 7, "meta": {"evidence_span": ["On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts.", "On television, they learn that Clarence Beeks is transporting a secret USDA report on orange crop forecasts. Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice.", "Winthorpe and Valentine recall large payments made to Beeks by the Dukes and realize that the Dukes plan to obtain the report to corner the market on frozen orange juice."]}}]}}` However, the KILT ELI5 dataset from the huggingface datasets library only contains an empty provenance list: `{'id': '1oy5tc', 'input': 'in football whats the point of wasting the first two plays with a rush - up the middle - not regular rush plays i get those', 'meta': {'left_context': '', 'mention': '', 'obj_surface': [], 'partial_evidence': [], 'right_context': '', 'sub_surface': [], 'subj_aliases': [], 'template_questions': []}, 'output': [{'answer': 'In most cases the O-Line is supposed to make a hole for the running back to go through. If you run too many plays to the outside/throws the defense will catch on.\n\nAlso, 2 5 yard plays gets you a new set of downs.', 'meta': {'score': 2}, 'provenance': []}, {'answer': "I you don't like those type of plays, watch CFL. We only get 3 downs so you can't afford to waste one. Lots more passing.", 'meta': {'score': 2}, 'provenance': []}]}` Should I perform some other procedure to obtain the evidence documents?
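For anyone checking their own copy, a small sketch that counts how many examples actually carry provenance; the config name `eli5` under `kilt_tasks` and the field layout are taken from the sample printed above, so treat them as assumptions:

```python
from datasets import load_dataset

eli5 = load_dataset("kilt_tasks", "eli5", split="validation")

def has_provenance(example):
    # Each answer in "output" carries a (possibly empty) provenance list.
    return any(len(out["provenance"]) > 0 for out in example["output"])

with_evidence = eli5.filter(has_provenance)
print(f"{len(with_evidence)}/{len(eli5)} examples have provenance")
```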
{ "avatar_url": "https://avatars.githubusercontent.com/u/16605764?v=4", "events_url": "https://api.github.com/users/donggyukimc/events{/privacy}", "followers_url": "https://api.github.com/users/donggyukimc/followers", "following_url": "https://api.github.com/users/donggyukimc/following{/other_user}", "gists_url": "https://api.github.com/users/donggyukimc/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/donggyukimc", "id": 16605764, "login": "donggyukimc", "node_id": "MDQ6VXNlcjE2NjA1NzY0", "organizations_url": "https://api.github.com/users/donggyukimc/orgs", "received_events_url": "https://api.github.com/users/donggyukimc/received_events", "repos_url": "https://api.github.com/users/donggyukimc/repos", "site_admin": false, "starred_url": "https://api.github.com/users/donggyukimc/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/donggyukimc/subscriptions", "type": "User", "url": "https://api.github.com/users/donggyukimc", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2001/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2001/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/2000
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/2000/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/2000/comments
https://api.github.com/repos/huggingface/datasets/issues/2000/events
https://github.com/huggingface/datasets/issues/2000
823,899,910
MDU6SXNzdWU4MjM4OTk5MTA=
2,000
Windows Permission Error (most recent version of datasets)
{ "avatar_url": "https://avatars.githubusercontent.com/u/73881148?v=4", "events_url": "https://api.github.com/users/itsLuisa/events{/privacy}", "followers_url": "https://api.github.com/users/itsLuisa/followers", "following_url": "https://api.github.com/users/itsLuisa/following{/other_user}", "gists_url": "https://api.github.com/users/itsLuisa/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/itsLuisa", "id": 73881148, "login": "itsLuisa", "node_id": "MDQ6VXNlcjczODgxMTQ4", "organizations_url": "https://api.github.com/users/itsLuisa/orgs", "received_events_url": "https://api.github.com/users/itsLuisa/received_events", "repos_url": "https://api.github.com/users/itsLuisa/repos", "site_admin": false, "starred_url": "https://api.github.com/users/itsLuisa/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/itsLuisa/subscriptions", "type": "User", "url": "https://api.github.com/users/itsLuisa", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi @itsLuisa !\r\n\r\nCould you give us more information about the error you're getting, please?\r\nA copy-paste of the Traceback would be nice to get a better understanding of what is wrong :) ", "Hello @SBrandeis , this is it:\r\n```\r\nTraceback (most recent call last):\r\n File \"C:\\Users\\Luisa\\AppData\\...
2021-03-07T11:55:28Z
2021-03-09T12:42:57Z
2021-03-09T12:42:57Z
NONE
null
null
null
null
Hi everyone, Can anyone help me with why the dataset loading script below raises a Windows Permission Error? I stuck quite closely to https://github.com/huggingface/datasets/blob/master/datasets/conll2003/conll2003.py, except that I want to load the data from three local three-column TSV files (id\ttokens\tpos_tags\n). I am using the most recent version of datasets. Thank you in advance! Luisa My script:

```python
import datasets
import csv

logger = datasets.logging.get_logger(__name__)


class SampleConfig(datasets.BuilderConfig):

    def __init__(self, **kwargs):
        super(SampleConfig, self).__init__(**kwargs)


class Sample(datasets.GeneratorBasedBuilder):
    BUILDER_CONFIGS = [
        SampleConfig(name="conll2003", version=datasets.Version("1.0.0"), description="Conll2003 dataset"),
    ]

    def _info(self):
        return datasets.DatasetInfo(
            description="Dataset with words and their POS-Tags",
            features=datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "tokens": datasets.Sequence(datasets.Value("string")),
                    "pos_tags": datasets.Sequence(
                        datasets.features.ClassLabel(
                            names=[
                                "''", ",", "-LRB-", "-RRB-", ".", ":", "CC", "CD", "DT", "EX",
                                "FW", "HYPH", "IN", "JJ", "JJR", "JJS", "MD", "NN", "NNP", "NNPS",
                                "NNS", "PDT", "POS", "PRP", "PRP$", "RB", "RBR", "RBS", "RP", "TO",
                                "UH", "VB", "VBD", "VBG", "VBN", "VBP", "VBZ", "WDT", "WP", "WRB", "``",
                            ]
                        )
                    ),
                }
            ),
            supervised_keys=None,
            homepage="https://catalog.ldc.upenn.edu/LDC2011T03",
            citation="Weischedel, Ralph, et al. OntoNotes Release 4.0 LDC2011T03. Web Download. Philadelphia: Linguistic Data Consortium, 2011.",
        )

    def _split_generators(self, dl_manager):
        loaded_files = dl_manager.download_and_extract(self.config.data_files)
        return [
            datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": loaded_files["train"]}),
            datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": loaded_files["test"]}),
            datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": loaded_files["val"]}),
        ]

    def _generate_examples(self, filepath):
        logger.info("generating examples from = %s", filepath)
        with open(filepath, encoding="cp1252") as f:
            data = csv.reader(f, delimiter="\t")
            ids = list()
            tokens = list()
            pos_tags = list()
            for id_, line in enumerate(data):
                #print(line)
                if len(line) == 1:
                    if tokens:
                        yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}
                        ids = list()
                        tokens = list()
                        pos_tags = list()
                else:
                    ids.append(line[0])
                    tokens.append(line[1])
                    pos_tags.append(line[2])
            # last example
            yield id_, {"id": ids, "tokens": tokens, "pos_tags": pos_tags}


def main():
    dataset = datasets.load_dataset(
        "data_loading.py",
        data_files={
            "train": "train.tsv",
            "test": "test.tsv",
            "val": "val.tsv"
        }
    )
    #print(dataset)


if __name__ == "__main__":
    main()
```
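One thing worth trying, as a sketch rather than a confirmed fix for this Windows error: since the files are purely local, the paths in `self.config.data_files` can be passed straight to the split generators, skipping `download_and_extract` and its temporary-file handling.

```python
def _split_generators(self, dl_manager):
    # self.config.data_files already holds local paths, so no download step
    # (and no temporary files that Windows might lock) is needed.
    files = self.config.data_files
    return [
        datasets.SplitGenerator(name=datasets.Split.TRAIN, gen_kwargs={"filepath": files["train"]}),
        datasets.SplitGenerator(name=datasets.Split.TEST, gen_kwargs={"filepath": files["test"]}),
        datasets.SplitGenerator(name=datasets.Split.VALIDATION, gen_kwargs={"filepath": files["val"]}),
    ]
```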
{ "avatar_url": "https://avatars.githubusercontent.com/u/33657802?v=4", "events_url": "https://api.github.com/users/SBrandeis/events{/privacy}", "followers_url": "https://api.github.com/users/SBrandeis/followers", "following_url": "https://api.github.com/users/SBrandeis/following{/other_user}", "gists_url": "https://api.github.com/users/SBrandeis/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/SBrandeis", "id": 33657802, "login": "SBrandeis", "node_id": "MDQ6VXNlcjMzNjU3ODAy", "organizations_url": "https://api.github.com/users/SBrandeis/orgs", "received_events_url": "https://api.github.com/users/SBrandeis/received_events", "repos_url": "https://api.github.com/users/SBrandeis/repos", "site_admin": false, "starred_url": "https://api.github.com/users/SBrandeis/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/SBrandeis/subscriptions", "type": "User", "url": "https://api.github.com/users/SBrandeis", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/2000/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/2000/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1997
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1997/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1997/comments
https://api.github.com/repos/huggingface/datasets/issues/1997/events
https://github.com/huggingface/datasets/issues/1997
823,679,465
MDU6SXNzdWU4MjM2Nzk0NjU=
1,997
from datasets import MoleculeDataset, GEOMDataset
{ "avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4", "events_url": "https://api.github.com/users/futianfan/events{/privacy}", "followers_url": "https://api.github.com/users/futianfan/followers", "following_url": "https://api.github.com/users/futianfan/following{/other_user}", "gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/futianfan", "id": 5087210, "login": "futianfan", "node_id": "MDQ6VXNlcjUwODcyMTA=", "organizations_url": "https://api.github.com/users/futianfan/orgs", "received_events_url": "https://api.github.com/users/futianfan/received_events", "repos_url": "https://api.github.com/users/futianfan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/futianfan/subscriptions", "type": "User", "url": "https://api.github.com/users/futianfan", "user_view_type": "public" }
[ { "color": "e99695", "default": false, "description": "Requesting to add a new dataset", "id": 2067376369, "name": "dataset request", "node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5", "url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request" } ]
closed
false
null
[]
null
[]
2021-03-06T15:50:19Z
2021-03-06T16:13:26Z
2021-03-06T16:13:26Z
NONE
null
null
null
null
I met the following error: `ImportError: cannot import name 'MoleculeDataset' from 'datasets'`. Has anyone met similar issues? Thanks!
{ "avatar_url": "https://avatars.githubusercontent.com/u/5087210?v=4", "events_url": "https://api.github.com/users/futianfan/events{/privacy}", "followers_url": "https://api.github.com/users/futianfan/followers", "following_url": "https://api.github.com/users/futianfan/following{/other_user}", "gists_url": "https://api.github.com/users/futianfan/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/futianfan", "id": 5087210, "login": "futianfan", "node_id": "MDQ6VXNlcjUwODcyMTA=", "organizations_url": "https://api.github.com/users/futianfan/orgs", "received_events_url": "https://api.github.com/users/futianfan/received_events", "repos_url": "https://api.github.com/users/futianfan/repos", "site_admin": false, "starred_url": "https://api.github.com/users/futianfan/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/futianfan/subscriptions", "type": "User", "url": "https://api.github.com/users/futianfan", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1997/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1997/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1996
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1996/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1996/comments
https://api.github.com/repos/huggingface/datasets/issues/1996/events
https://github.com/huggingface/datasets/issues/1996
823,573,410
MDU6SXNzdWU4MjM1NzM0MTA=
1,996
Error when exploring `arabic_speech_corpus`
{ "avatar_url": "https://avatars.githubusercontent.com/u/6879673?v=4", "events_url": "https://api.github.com/users/elgeish/events{/privacy}", "followers_url": "https://api.github.com/users/elgeish/followers", "following_url": "https://api.github.com/users/elgeish/following{/other_user}", "gists_url": "https://api.github.com/users/elgeish/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/elgeish", "id": 6879673, "login": "elgeish", "node_id": "MDQ6VXNlcjY4Nzk2NzM=", "organizations_url": "https://api.github.com/users/elgeish/orgs", "received_events_url": "https://api.github.com/users/elgeish/received_events", "repos_url": "https://api.github.com/users/elgeish/repos", "site_admin": false, "starred_url": "https://api.github.com/users/elgeish/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/elgeish/subscriptions", "type": "User", "url": "https://api.github.com/users/elgeish", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" }, { "color": "94203D", "default": false, "descrip...
closed
false
null
[]
null
[ "Thanks for reporting! We'll fix that as soon as possible", "Actually soundfile is not a dependency of this dataset.\r\nThe error comes from a bug that was fixed in this commit: https://github.com/huggingface/datasets/pull/1767/commits/c304e63629f4453367de2fd42883a78768055532\r\nBasically the library used to cons...
2021-03-06T05:55:20Z
2022-10-05T13:24:26Z
2022-10-05T13:24:26Z
NONE
null
null
null
null
Navigate to https://huggingface.co/datasets/viewer/?dataset=arabic_speech_corpus Error:
```
ImportError: To be able to use this dataset, you need to install the following dependencies['soundfile'] using 'pip install soundfile' for instance'
Traceback:
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/script_runner.py", line 332, in _run_script
    exec(code, module.__dict__)
File "/home/sasha/nlp-viewer/run.py", line 233, in <module>
    configs = get_confs(option)
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 604, in wrapped_func
    return get_or_create_cached_value()
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/streamlit/caching.py", line 588, in get_or_create_cached_value
    return_value = func(*args, **kwargs)
File "/home/sasha/nlp-viewer/run.py", line 145, in get_confs
    module_path = nlp.load.prepare_module(path, dataset=True
File "/home/sasha/.local/share/virtualenvs/lib-ogGKnCK_/lib/python3.7/site-packages/datasets/load.py", line 342, in prepare_module
    f"To be able to use this {module_type}, you need to install the following dependencies"
```
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1996/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1996/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1994
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1994/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1994/comments
https://api.github.com/repos/huggingface/datasets/issues/1994/events
https://github.com/huggingface/datasets/issues/1994
822,871,238
MDU6SXNzdWU4MjI4NzEyMzg=
1,994
not being able to get wikipedia es language
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
open
false
null
[]
null
[ "@lhoestq I really appreciate if you could help me providiing processed datasets, I do not really have access to enough resources to run the apache-beam and need to run the codes on these datasets. Only en/de/fr currently works, but I need all the languages more or less. thanks ", "Hi @dorost1234, I think I can ...
2021-03-05T08:31:48Z
2021-03-11T20:46:21Z
null
NONE
null
null
null
null
Hi, I am trying to run some code with the Wikipedia config 20200501.es and getting:

Traceback (most recent call last):
  File "run_mlm_t5.py", line 608, in <module>
    main()
  File "run_mlm_t5.py", line 359, in main
    datasets = load_dataset(data_args.dataset_name, data_args.dataset_config_name)
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/load.py", line 612, in load_dataset
    ignore_verifications=ignore_verifications,
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 527, in download_and_prepare
    dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
  File "/dara/libs/anaconda3/envs/success432/lib/python3.7/site-packages/datasets-1.2.1-py3.7.egg/datasets/builder.py", line 1050, in _download_and_prepare
    "\n\t`{}`".format(usage_example)
datasets.builder.MissingBeamOptions: Trying to generate a dataset using Apache Beam, yet no Beam Runner or PipelineOptions() has been provided in `load_dataset` or in the builder arguments. For big datasets it has to run on large-scale data processing tools like Dataflow, Spark, etc. More information about Apache Beam runners at https://beam.apache.org/documentation/runners/capability-matrix/ If you really want to run it locally because you feel like the Dataset is small enough, you can use the local beam runner called `DirectRunner` (you may run out of memory). Example of usage: `load_dataset('wikipedia', '20200501.es', beam_runner='DirectRunner')`

Thanks @lhoestq for any suggestion/help.
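As the error message itself suggests, a minimal sketch of the local fallback (memory-hungry for a full Wikipedia dump, so it may fail on modest machines):

```python
from datasets import load_dataset

# DirectRunner processes the Apache Beam pipeline on the local machine; for
# a full Wikipedia dump this can exhaust RAM, as the error message warns.
wiki_es = load_dataset("wikipedia", "20200501.es", beam_runner="DirectRunner")
print(wiki_es)
```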
null
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1994/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1994/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1993
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1993/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1993/comments
https://api.github.com/repos/huggingface/datasets/issues/1993/events
https://github.com/huggingface/datasets/issues/1993
822,758,387
MDU6SXNzdWU4MjI3NTgzODc=
1,993
How to load a dataset with load_from_disk and save it again after doing transformations without changing the original?
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! That looks like a bug, can you provide some code so that we can reproduce ?\r\nIt's not supposed to update the original dataset", "Hi, I experimented with RAG. \r\n\r\nActually, you can run the [use_own_knowldge_dataset.py](https://github.com/shamanez/transformers/blob/rag-end-to-end-retrieval/examples/rese...
2021-03-05T05:25:50Z
2021-03-22T04:05:50Z
2021-03-22T04:05:50Z
NONE
null
null
null
null
I am using the latest datasets library. In my work, I first use **load_from_disk** to load a dataset that contains 3.8 GB of data. Then, during my training process, I update that dataset object, add new elements, and save it in a different place. When I save the dataset with **save_to_disk**, the original dataset that is already on disk also gets updated. I do not want to update it. How can I prevent this?
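A sketch of the intended workflow, with placeholder paths and an assumed `text` column: transformations such as `map` return a new dataset object, and saving that object to a different directory should leave the original files untouched.

```python
from datasets import load_from_disk

ds = load_from_disk("/data/original_dataset")  # placeholder path

# map() does not modify ds in place; it returns a new dataset backed by
# new cache files.
updated = ds.map(lambda ex: {"text": ex["text"] + " [extra]"})

# Save the transformed copy elsewhere; /data/original_dataset stays as-is.
updated.save_to_disk("/data/updated_dataset")
```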
{ "avatar_url": "https://avatars.githubusercontent.com/u/16892570?v=4", "events_url": "https://api.github.com/users/shamanez/events{/privacy}", "followers_url": "https://api.github.com/users/shamanez/followers", "following_url": "https://api.github.com/users/shamanez/following{/other_user}", "gists_url": "https://api.github.com/users/shamanez/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/shamanez", "id": 16892570, "login": "shamanez", "node_id": "MDQ6VXNlcjE2ODkyNTcw", "organizations_url": "https://api.github.com/users/shamanez/orgs", "received_events_url": "https://api.github.com/users/shamanez/received_events", "repos_url": "https://api.github.com/users/shamanez/repos", "site_admin": false, "starred_url": "https://api.github.com/users/shamanez/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/shamanez/subscriptions", "type": "User", "url": "https://api.github.com/users/shamanez", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1993/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1993/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1992
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1992/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1992/comments
https://api.github.com/repos/huggingface/datasets/issues/1992/events
https://github.com/huggingface/datasets/issues/1992
822,672,238
MDU6SXNzdWU4MjI2NzIyMzg=
1,992
`datasets.map` multi processing much slower than single processing
{ "avatar_url": "https://avatars.githubusercontent.com/u/29157715?v=4", "events_url": "https://api.github.com/users/hwijeen/events{/privacy}", "followers_url": "https://api.github.com/users/hwijeen/followers", "following_url": "https://api.github.com/users/hwijeen/following{/other_user}", "gists_url": "https://api.github.com/users/hwijeen/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/hwijeen", "id": 29157715, "login": "hwijeen", "node_id": "MDQ6VXNlcjI5MTU3NzE1", "organizations_url": "https://api.github.com/users/hwijeen/orgs", "received_events_url": "https://api.github.com/users/hwijeen/received_events", "repos_url": "https://api.github.com/users/hwijeen/repos", "site_admin": false, "starred_url": "https://api.github.com/users/hwijeen/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/hwijeen/subscriptions", "type": "User", "url": "https://api.github.com/users/hwijeen", "user_view_type": "public" }
[ { "color": "d73a4a", "default": true, "description": "Something isn't working", "id": 1935892857, "name": "bug", "node_id": "MDU6TGFiZWwxOTM1ODkyODU3", "url": "https://api.github.com/repos/huggingface/datasets/labels/bug" } ]
open
false
null
[]
null
[ "Hi @hwijeen, you might want to look at issues #1796 and #1949. I think it could be something related to the I/O operations being performed.", "I see that many people are experiencing the same issue. Is this problem considered an \"official\" bug that is worth a closer look? @lhoestq", "Yes this looks like a bu...
2021-03-05T02:10:02Z
2024-06-08T20:18:03Z
null
NONE
null
null
null
null
Hi, thank you for the great library. I've been using datasets to pretrain language models, and it often involves datasets as large as ~70 GB. My data preparation consists of roughly two steps: `load_dataset`, which splits corpora into a table of sentences, and `map`, which converts each sentence into a list of integers using a tokenizer. I noticed that the `map` function with `num_proc=mp.cpu_count() // 2` takes more than 20 hours to finish the job, whereas `num_proc=1` gets the job done in about 5 hours. The machine I used has 40 cores and 126 GB of RAM. There were no other jobs running while the `map` function was running. What could be the reason? I would be happy to provide any information necessary to spot it. P.S. I was experiencing the imbalance issue mentioned [here](https://github.com/huggingface/datasets/issues/610#issuecomment-705177036) when using multiprocessing. P.S. 2: When I run `map` with `num_proc=1`, I see one tqdm bar but all the cores are working. When `num_proc=20`, only 20 cores work. ![Screen Shot 2021-03-05 at 11 04 59](https://user-images.githubusercontent.com/29157715/110056895-ef6cf000-7da2-11eb-8307-6698e9fb1ad4.png)
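For reference, a minimal sketch of the two configurations being compared; the corpus, tokenizer, and column name are stand-ins for the asker's setup. Batching is often a bigger lever than `num_proc`, since it amortizes the per-call overhead of the tokenizer and the Arrow writer:

```python
import multiprocessing as mp

from datasets import load_dataset
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
ds = load_dataset("ag_news", split="train")

def encode(batch):
    # Batched calls let the fast tokenizer process many sentences at once.
    return tok(batch["text"], truncation=True)

# Single-process, batched:
ds_single = ds.map(encode, batched=True)

# Multi-process, batched; each worker processes its own shard and writes
# its own cache file, so shard imbalance shows up as idle workers.
ds_multi = ds.map(encode, batched=True, num_proc=mp.cpu_count() // 2)
```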
null
{ "+1": 9, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 9, "url": "https://api.github.com/repos/huggingface/datasets/issues/1992/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1992/timeline
null
null
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1990
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1990/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1990/comments
https://api.github.com/repos/huggingface/datasets/issues/1990/events
https://github.com/huggingface/datasets/issues/1990
822,384,502
MDU6SXNzdWU4MjIzODQ1MDI=
1,990
OSError: Memory mapping file failed: Cannot allocate memory
{ "avatar_url": "https://avatars.githubusercontent.com/u/79165106?v=4", "events_url": "https://api.github.com/users/dorost1234/events{/privacy}", "followers_url": "https://api.github.com/users/dorost1234/followers", "following_url": "https://api.github.com/users/dorost1234/following{/other_user}", "gists_url": "https://api.github.com/users/dorost1234/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/dorost1234", "id": 79165106, "login": "dorost1234", "node_id": "MDQ6VXNlcjc5MTY1MTA2", "organizations_url": "https://api.github.com/users/dorost1234/orgs", "received_events_url": "https://api.github.com/users/dorost1234/received_events", "repos_url": "https://api.github.com/users/dorost1234/repos", "site_admin": false, "starred_url": "https://api.github.com/users/dorost1234/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/dorost1234/subscriptions", "type": "User", "url": "https://api.github.com/users/dorost1234", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Do you think this is trying to bring the dataset into memory and if I can avoid it to save on memory so it only brings a batch into memory? @lhoestq thank you", "It's not trying to bring the dataset into memory.\r\n\r\nActually, it's trying to memory map the dataset file, which is different. It allows to load l...
2021-03-04T18:21:58Z
2021-08-04T18:04:25Z
2021-08-04T18:04:25Z
NONE
null
null
null
null
Hi, I am trying to run some code with a wikipedia dataset; here is the command to reproduce the error. You can find the code for run_mlm.py in the huggingface repo here: https://github.com/huggingface/transformers/blob/v4.3.2/examples/language-modeling/run_mlm.py

```
python run_mlm.py --model_name_or_path bert-base-multilingual-cased --dataset_name wikipedia --dataset_config_name 20200501.en --do_train --do_eval --output_dir /dara/test --max_seq_length 128
```

I am using transformers version 4.3.2. But I get a memory error using this dataset. Is there a way I could save on memory with the datasets library when using the wikipedia dataset? In particular, I need to train a model with multiple wikipedia datasets concatenated. Thank you very much @lhoestq for your help and suggestions:

```
File "run_mlm.py", line 441, in <module>
    main()
  File "run_mlm.py", line 233, in main
    split=f"train[{data_args.validation_split_percentage}%:]",
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/load.py", line 750, in load_dataset
    ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 740, in as_dataset
    map_tuple=True,
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/utils/py_utils.py", line 225, in map_nested
    return function(data_struct)
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 757, in _build_single_dataset
    in_memory=in_memory,
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/builder.py", line 829, in _as_dataset
    in_memory=in_memory,
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 215, in read
    return self.read_files(files=files, original_instructions=instructions, in_memory=in_memory)
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 236, in read_files
    pa_table = self._read_files(files, in_memory=in_memory)
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 171, in _read_files
    pa_table: pa.Table = self._get_dataset_from_filename(f_dict, in_memory=in_memory)
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 302, in _get_dataset_from_filename
    pa_table = ArrowReader.read_table(filename, in_memory=in_memory)
  File "/dara/libs/anaconda3/envs/code/lib/python3.7/site-packages/datasets-1.3.0-py3.7.egg/datasets/arrow_reader.py", line 322, in read_table
    stream = stream_from(filename)
  File "pyarrow/io.pxi", line 782, in pyarrow.lib.memory_map
  File "pyarrow/io.pxi", line 743, in pyarrow.lib.MemoryMappedFile._open
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 99, in pyarrow.lib.check_status
OSError: Memory mapping file failed: Cannot allocate memory
```
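Since the failure comes from `mmap` rather than from physical RAM, one thing worth checking (a sketch; Unix-only, and the right limit values are system-dependent) is the process's virtual-address-space limit, which must be large enough to map the whole Arrow file:

```python
import resource

soft, hard = resource.getrlimit(resource.RLIMIT_AS)
print("virtual memory limit (soft, hard):", soft, hard)

# If the soft limit is finite and smaller than the dataset's Arrow files,
# memory mapping can fail with "Cannot allocate memory" even though the
# data is never fully loaded into RAM.
if soft != resource.RLIM_INFINITY:
    print("consider raising it, e.g. `ulimit -v unlimited` in the shell")
```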
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1990/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1990/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1989
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1989/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1989/comments
https://api.github.com/repos/huggingface/datasets/issues/1989/events
https://github.com/huggingface/datasets/issues/1989
822,328,147
MDU6SXNzdWU4MjIzMjgxNDc=
1,989
Question/problem with dataset labels
{ "avatar_url": "https://avatars.githubusercontent.com/u/17202292?v=4", "events_url": "https://api.github.com/users/ioana-blue/events{/privacy}", "followers_url": "https://api.github.com/users/ioana-blue/followers", "following_url": "https://api.github.com/users/ioana-blue/following{/other_user}", "gists_url": "https://api.github.com/users/ioana-blue/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/ioana-blue", "id": 17202292, "login": "ioana-blue", "node_id": "MDQ6VXNlcjE3MjAyMjky", "organizations_url": "https://api.github.com/users/ioana-blue/orgs", "received_events_url": "https://api.github.com/users/ioana-blue/received_events", "repos_url": "https://api.github.com/users/ioana-blue/repos", "site_admin": false, "starred_url": "https://api.github.com/users/ioana-blue/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/ioana-blue/subscriptions", "type": "User", "url": "https://api.github.com/users/ioana-blue", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "It seems that I get parsing errors for various fields in my data. For example now I get this:\r\n```\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 523, in <module>\r\n main()\r\n File \"../../../models/tr-4.3.2/run_puppets.py\", line 249, in main\r\n datasets = load_dataset(\"csv\", data_files...
2021-03-04T17:06:53Z
2023-07-24T14:39:33Z
2023-07-24T14:39:33Z
NONE
null
null
null
null
Hi, I'm using a dataset with two labels, "nurse" and "not nurse". For whatever reason (that I don't understand), I get an error that I think comes from the datasets package (using csv). Everything works fine if the labels are "nurse" and "surgeon". This is the trace I get:

```
File "../../../models/tr-4.3.2/run_puppets.py", line 523, in <module>
    main()
  File "../../../models/tr-4.3.2/run_puppets.py", line 249, in main
    datasets = load_dataset("csv", data_files=data_files)
  File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
    builder_instance.download_and_prepare(
  File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 572, in download_and_prepare
    self._download_and_prepare(
  File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 650, in _download_and_prepare
    self._prepare_split(split_generator, **prepare_split_kwargs)
  File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/builder.py", line 1028, in _prepare_split
    writer.write_table(table)
  File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/datasets/arrow_writer.py", line 292, in write_table
    pa_table = pa_table.cast(self._schema)
  File "pyarrow/table.pxi", line 1311, in pyarrow.lib.Table.cast
  File "pyarrow/table.pxi", line 265, in pyarrow.lib.ChunkedArray.cast
  File "/dccstor/redrug_ier/envs/last-tr/lib/python3.8/site-packages/pyarrow/compute.py", line 87, in cast
    return call_function("cast", [arr], options)
  File "pyarrow/_compute.pyx", line 298, in pyarrow._compute.call_function
  File "pyarrow/_compute.pyx", line 192, in pyarrow._compute.Function.call
  File "pyarrow/error.pxi", line 122, in pyarrow.lib.pyarrow_internal_check_status
  File "pyarrow/error.pxi", line 84, in pyarrow.lib.check_status
pyarrow.lib.ArrowInvalid: Failed to parse string: not nurse
```

Any ideas on how to fix this? For now, I'll probably make the labels numeric.
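A sketch of one way around the type auto-inference; the column names here are assumptions, and whether your `datasets` version accepts these exact keyword arguments should be verified. The idea is to read the label column explicitly as strings, so pyarrow never guesses its type from the first rows, and then encode it yourself:

```python
from datasets import ClassLabel, Features, Value, load_dataset

# Declare every column's type up front (assumed columns: "text", "label").
features = Features({"text": Value("string"), "label": Value("string")})
ds = load_dataset("csv", data_files={"train": "train.csv"}, features=features)

# Encode the string labels to integers explicitly.
label_type = ClassLabel(names=["not nurse", "nurse"])
ds = ds.map(lambda ex: {"label": label_type.str2int(ex["label"])})
```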
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1989/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1989/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1988
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1988/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1988/comments
https://api.github.com/repos/huggingface/datasets/issues/1988/events
https://github.com/huggingface/datasets/issues/1988
822,324,605
MDU6SXNzdWU4MjIzMjQ2MDU=
1,988
Readme.md is misleading about kinds of datasets?
{ "avatar_url": "https://avatars.githubusercontent.com/u/878399?v=4", "events_url": "https://api.github.com/users/surak/events{/privacy}", "followers_url": "https://api.github.com/users/surak/followers", "following_url": "https://api.github.com/users/surak/following{/other_user}", "gists_url": "https://api.github.com/users/surak/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/surak", "id": 878399, "login": "surak", "node_id": "MDQ6VXNlcjg3ODM5OQ==", "organizations_url": "https://api.github.com/users/surak/orgs", "received_events_url": "https://api.github.com/users/surak/received_events", "repos_url": "https://api.github.com/users/surak/repos", "site_admin": false, "starred_url": "https://api.github.com/users/surak/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/surak/subscriptions", "type": "User", "url": "https://api.github.com/users/surak", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "Hi ! Yes it's possible to use image data. There are already a few of them available (MNIST, CIFAR..)" ]
2021-03-04T17:04:20Z
2021-08-04T18:05:23Z
2021-08-04T18:05:23Z
NONE
null
null
null
null
Hi! In the README.md, you say: "efficient data pre-processing: simple, fast and reproducible data pre-processing for the above public datasets as well as your own local datasets in CSV/JSON/text." But here: https://github.com/huggingface/datasets/blob/master/templates/new_dataset_script.py#L82-L117 you mention other kinds of datasets, with images and so on. I'm confused. Is it possible to use the library to store, say, ImageNet locally?
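A minimal sketch of what loading image data can look like, assuming the builders named below exist in the installed version of `datasets` (the local path is a placeholder):
```python
from datasets import load_dataset

# Pre-packaged image datasets such as MNIST load like any other dataset.
mnist = load_dataset("mnist", split="train")
print(mnist[0])  # an image (or pixel array, depending on the version) plus its label

# For a local directory of images (one subfolder per class), newer releases
# ship a generic "imagefolder" builder; "path/to/imagenet" is a placeholder.
# imagenet = load_dataset("imagefolder", data_dir="path/to/imagenet")
```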
{ "avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4", "events_url": "https://api.github.com/users/albertvillanova/events{/privacy}", "followers_url": "https://api.github.com/users/albertvillanova/followers", "following_url": "https://api.github.com/users/albertvillanova/following{/other_user}", "gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/albertvillanova", "id": 8515462, "login": "albertvillanova", "node_id": "MDQ6VXNlcjg1MTU0NjI=", "organizations_url": "https://api.github.com/users/albertvillanova/orgs", "received_events_url": "https://api.github.com/users/albertvillanova/received_events", "repos_url": "https://api.github.com/users/albertvillanova/repos", "site_admin": false, "starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions", "type": "User", "url": "https://api.github.com/users/albertvillanova", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1988/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1988/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false
https://api.github.com/repos/huggingface/datasets/issues/1987
https://api.github.com/repos/huggingface/datasets
https://api.github.com/repos/huggingface/datasets/issues/1987/labels{/name}
https://api.github.com/repos/huggingface/datasets/issues/1987/comments
https://api.github.com/repos/huggingface/datasets/issues/1987/events
https://github.com/huggingface/datasets/issues/1987
822,308,956
MDU6SXNzdWU4MjIzMDg5NTY=
1,987
wmt15 is broken
{ "avatar_url": "https://avatars.githubusercontent.com/u/10676103?v=4", "events_url": "https://api.github.com/users/stas00/events{/privacy}", "followers_url": "https://api.github.com/users/stas00/followers", "following_url": "https://api.github.com/users/stas00/following{/other_user}", "gists_url": "https://api.github.com/users/stas00/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/stas00", "id": 10676103, "login": "stas00", "node_id": "MDQ6VXNlcjEwNjc2MTAz", "organizations_url": "https://api.github.com/users/stas00/orgs", "received_events_url": "https://api.github.com/users/stas00/received_events", "repos_url": "https://api.github.com/users/stas00/repos", "site_admin": false, "starred_url": "https://api.github.com/users/stas00/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/stas00/subscriptions", "type": "User", "url": "https://api.github.com/users/stas00", "user_view_type": "public" }
[]
closed
false
null
[]
null
[ "It's reachable for the viewer and me, so I suppose it was down at that moment?" ]
2021-03-04T16:46:25Z
2022-10-05T13:12:26Z
2022-10-05T13:12:26Z
CONTRIBUTOR
null
null
null
null
While testing the hotfix, I tried a random other wmt release and found wmt15 to be broken:
```
python -c 'from datasets import load_dataset; load_dataset("wmt15", "de-en")'
Downloading: 2.91kB [00:00, 818kB/s]
Downloading: 3.02kB [00:00, 897kB/s]
Downloading: 41.1kB [00:00, 19.1MB/s]
Downloading and preparing dataset wmt15/de-en (download: Unknown size, generated: Unknown size, post-processed: Unknown size, total: Unknown size) to /home/stas/.cache/huggingface/datasets/wmt15/de-en/1.0.0/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f...
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/load.py", line 740, in load_dataset
    builder_instance.download_and_prepare(
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 578, in download_and_prepare
    self._download_and_prepare(
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/builder.py", line 634, in _download_and_prepare
    split_generators = self._split_generators(dl_manager, **split_generators_kwargs)
  File "/home/stas/.cache/huggingface/modules/datasets_modules/datasets/wmt15/39ad5f9262a0910a8ad7028ad432731ad23fdf91f2cebbbf2ba4776b9859e87f/wmt_utils.py", line 757, in _split_generators
    downloaded_files = dl_manager.download_and_extract(urls_to_download)
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 283, in download_and_extract
    return self.extract(self.download(url_or_urls))
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 191, in download
    downloaded_path_or_paths = map_nested(
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 203, in map_nested
    mapped = [
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 204, in <listcomp>
    _single_map_nested((function, obj, types, None, True)) for obj in tqdm(iterable, disable=disable_tqdm)
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in _single_map_nested
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 160, in <listcomp>
    mapped = [_single_map_nested((function, v, types, None, True)) for v in pbar]
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 142, in _single_map_nested
    return function(data_struct)
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/download_manager.py", line 214, in _download
    return cached_path(url_or_filename, download_config=download_config)
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 274, in cached_path
    output_path = get_from_cache(
  File "/home/stas/anaconda3/envs/main-38/lib/python3.8/site-packages/datasets/utils/file_utils.py", line 614, in get_from_cache
    raise FileNotFoundError("Couldn't find file at {}".format(url))
FileNotFoundError: Couldn't find file at https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz
```
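A minimal triage sketch for failures like the one above, given that the comment thread suggests a transient hosting outage: check whether the file the builder requests is reachable, then retry. The URL is copied from the traceback; everything else is an assumption for illustration.
```python
import requests
from datasets import load_dataset

# URL copied from the FileNotFoundError above.
url = "https://huggingface.co/datasets/wmt/wmt15/resolve/main/training-parallel-nc-v10.tgz"

# A 200 here suggests the outage was temporary and a retry should succeed.
resp = requests.head(url, allow_redirects=True, timeout=30)
print(resp.status_code)

if resp.ok:
    wmt15 = load_dataset("wmt15", "de-en")
```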
{ "avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4", "events_url": "https://api.github.com/users/mariosasko/events{/privacy}", "followers_url": "https://api.github.com/users/mariosasko/followers", "following_url": "https://api.github.com/users/mariosasko/following{/other_user}", "gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}", "gravatar_id": "", "html_url": "https://github.com/mariosasko", "id": 47462742, "login": "mariosasko", "node_id": "MDQ6VXNlcjQ3NDYyNzQy", "organizations_url": "https://api.github.com/users/mariosasko/orgs", "received_events_url": "https://api.github.com/users/mariosasko/received_events", "repos_url": "https://api.github.com/users/mariosasko/repos", "site_admin": false, "starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}", "subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions", "type": "User", "url": "https://api.github.com/users/mariosasko", "user_view_type": "public" }
{ "+1": 0, "-1": 0, "confused": 0, "eyes": 0, "heart": 0, "hooray": 0, "laugh": 0, "rocket": 0, "total_count": 0, "url": "https://api.github.com/repos/huggingface/datasets/issues/1987/reactions" }
https://api.github.com/repos/huggingface/datasets/issues/1987/timeline
null
completed
{ "completed": 0, "percent_completed": 0, "total": 0 }
{ "blocked_by": 0, "blocking": 0, "total_blocked_by": 0, "total_blocking": 0 }
false