| url (string) | repository_url (string) | labels_url (string) | comments_url (string) | events_url (string) | html_url (string) | id (int64) | node_id (string) | number (int64) | title (string) | user (dict) | labels (list) | state (string) | locked (bool) | assignee (dict) | assignees (list) | milestone (dict) | comments (list) | created_at (timestamp[s]) | updated_at (timestamp[s]) | closed_at (timestamp[s]) | author_association (string) | active_lock_reason (null) | body (string) | reactions (dict) | timeline_url (string) | performed_via_github_app (null) | state_reason (string) | draft (bool) | pull_request (dict) | is_pull_request (bool) |
|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://api.github.com/repos/huggingface/datasets/issues/4941
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4941/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4941/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4941/events
|
https://github.com/huggingface/datasets/pull/4941
| 1,363,622,861
|
PR_kwDODunzps4-dQ9F
| 4,941
|
Add Papers with Code ID to scifact dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-06T17:46:37
| 2022-09-06T18:28:17
| 2022-09-06T18:26:01
|
MEMBER
| null |
This PR:
- adds Papers with Code ID
- forces a sync between GitHub and the Hub, which previously failed due to a Hub validation error on the license tag: https://github.com/huggingface/datasets/runs/8200223631?check_suite_focus=true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4941/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4941/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4941",
"html_url": "https://github.com/huggingface/datasets/pull/4941",
"diff_url": "https://github.com/huggingface/datasets/pull/4941.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4941.patch",
"merged_at": "2022-09-06T18:26:01"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4940
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4940/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4940/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4940/events
|
https://github.com/huggingface/datasets/pull/4940
| 1,363,513,058
|
PR_kwDODunzps4-c6WY
| 4,940
|
Fix multilinguality tag and missing sections in xquad_r dataset card
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-06T16:05:35
| 2022-09-12T10:11:07
| 2022-09-12T10:08:48
|
MEMBER
| null |
This PR fixes an issue reported on the Hub:
- Label as multilingual: https://huggingface.co/datasets/xquad_r/discussions/1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4940/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4940/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4940",
"html_url": "https://github.com/huggingface/datasets/pull/4940",
"diff_url": "https://github.com/huggingface/datasets/pull/4940.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4940.patch",
"merged_at": "2022-09-12T10:08:48"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4939
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4939/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4939/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4939/events
|
https://github.com/huggingface/datasets/pull/4939
| 1,363,468,679
|
PR_kwDODunzps4-cw4A
| 4,939
|
Fix NonMatchingChecksumError in adv_glue dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-06T15:31:16
| 2022-09-06T17:42:10
| 2022-09-06T17:39:16
|
MEMBER
| null |
Fix an issue reported on the Hub: https://huggingface.co/datasets/adv_glue/discussions/1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4939/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4939/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4939",
"html_url": "https://github.com/huggingface/datasets/pull/4939",
"diff_url": "https://github.com/huggingface/datasets/pull/4939.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4939.patch",
"merged_at": "2022-09-06T17:39:16"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4938
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4938/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4938/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4938/events
|
https://github.com/huggingface/datasets/pull/4938
| 1,363,429,228
|
PR_kwDODunzps4-coaB
| 4,938
|
Remove main branch rename notice
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-06T15:03:05
| 2022-09-06T16:46:11
| 2022-09-06T16:43:53
|
MEMBER
| null |
We added a notice in README.md to show that we renamed the master branch to main, but we can remove it now (it's been 2 months).
I also unpinned the GitHub issue about the branch renaming.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4938/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4938/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4938",
"html_url": "https://github.com/huggingface/datasets/pull/4938",
"diff_url": "https://github.com/huggingface/datasets/pull/4938.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4938.patch",
"merged_at": "2022-09-06T16:43:53"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4937
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4937/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4937/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4937/events
|
https://github.com/huggingface/datasets/pull/4937
| 1,363,426,946
|
PR_kwDODunzps4-cn6W
| 4,937
|
Remove deprecated identical_ok
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-06T15:01:24
| 2022-09-06T22:24:09
| 2022-09-06T22:21:57
|
MEMBER
| null |
`huggingface-hub` says that the `identical_ok` argument of `HfApi.upload_file` is now deprecated and will be removed soon. Passing it currently has no effect anyway:
```python
Args:
...
identical_ok (`bool`, *optional*, defaults to `True`):
Deprecated: will be removed in 0.11.0.
Changing this value has no effect.
...
```
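For illustration, here is a minimal sketch of a call site once the argument is dropped (the repo id is hypothetical; the other parameters are `upload_file`'s documented ones):
```python
from huggingface_hub import HfApi

api = HfApi()
# Same call as before, just without `identical_ok`:
# uploading a file identical to the remote one is accepted regardless.
api.upload_file(
    path_or_fileobj="README.md",  # local path or file-like object
    path_in_repo="README.md",     # destination path inside the repo
    repo_id="user/my-dataset",    # hypothetical repo id
    repo_type="dataset",
)
```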
There was only one occurrence of `identical_ok=False`, but it's probably not worth adding a check to verify whether the files were the same.
cc @mariosasko
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4937/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4937/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4937",
"html_url": "https://github.com/huggingface/datasets/pull/4937",
"diff_url": "https://github.com/huggingface/datasets/pull/4937.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4937.patch",
"merged_at": "2022-09-06T22:21:57"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4936
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4936/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4936/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4936/events
|
https://github.com/huggingface/datasets/issues/4936
| 1,363,274,907
|
I_kwDODunzps5RQeyb
| 4,936
|
vivos (Vietnamese speech corpus) dataset not accessible
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067388877,
"node_id": "MDU6TGFiZWwyMDY3Mzg4ODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20bug",
"name": "dataset bug",
"color": "2edb81",
"default": false,
"description": "A bug in a dataset script provided in the library"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"If you need an example of a small audio datasets, I just created few hours ago a speech dataset with only 300MB of compressed audio files https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia. It works also with streaming (@albertvillanova helped me adding this functionality) :-)",
"@cahya-wirawan omg this is awesome!! thank you! ",
"We have contacted the authors to ask them."
] | 2022-09-06T13:17:55
| 2022-09-21T06:06:02
| 2022-09-12T07:14:20
|
CONTRIBUTOR
| null |
## Describe the bug
VIVOS data is not accessible anymore; neither of these links works (at least from France):
* https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (data)
* https://ailab.hcmus.edu.vn/vivos (dataset page)
Therefore `load_dataset` doesn't work.
## Steps to reproduce the bug
```python
ds = load_dataset("vivos")
```
## Expected results
dataset loaded
## Actual results
```
ConnectionError: Couldn't reach https://ailab.hcmus.edu.vn/assets/vivos.tar.gz (ConnectionError(MaxRetryError("HTTPSConnectionPool(host='ailab.hcmus.edu.vn', port=443): Max retries exceeded with url: /assets/vivos.tar.gz (Caused by NewConnectionError('<urllib3.connection.HTTPSConnection object at 0x7f9d8a27d190>: Failed to establish a new connection: [Errno -5] No address associated with hostname'))")))
```
Will try to contact the authors, as we wanted to use Vivos as an example in the documentation on how to create scripts for audio datasets (https://github.com/huggingface/datasets/pull/4872), because it's small, straightforward, and uses tar archives.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4936/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4936/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4935
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4935/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4935/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4935/events
|
https://github.com/huggingface/datasets/issues/4935
| 1,363,226,736
|
I_kwDODunzps5RQTBw
| 4,935
|
Dataset Viewer issue for ubuntu_dialogs_corpus
|
{
"login": "CibinQuadance",
"id": 87330568,
"node_id": "MDQ6VXNlcjg3MzMwNTY4",
"avatar_url": "https://avatars.githubusercontent.com/u/87330568?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/CibinQuadance",
"html_url": "https://github.com/CibinQuadance",
"followers_url": "https://api.github.com/users/CibinQuadance/followers",
"following_url": "https://api.github.com/users/CibinQuadance/following{/other_user}",
"gists_url": "https://api.github.com/users/CibinQuadance/gists{/gist_id}",
"starred_url": "https://api.github.com/users/CibinQuadance/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/CibinQuadance/subscriptions",
"organizations_url": "https://api.github.com/users/CibinQuadance/orgs",
"repos_url": "https://api.github.com/users/CibinQuadance/repos",
"events_url": "https://api.github.com/users/CibinQuadance/events{/privacy}",
"received_events_url": "https://api.github.com/users/CibinQuadance/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"The dataset maintainers (https://huggingface.co/datasets/ubuntu_dialogs_corpus) decided to forbid the dataset from being downloaded automatically (https://huggingface.co/docs/datasets/v2.4.0/en/loading#manual-download), and the dataset viewer respects this.\r\nWe will try to improve the error display though. Thanks for reporting."
] | 2022-09-06T12:41:50
| 2022-09-06T12:51:25
| 2022-09-06T12:51:25
|
NONE
| null |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4935/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4935/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4934
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4934/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4934/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4934/events
|
https://github.com/huggingface/datasets/issues/4934
| 1,363,034,253
|
I_kwDODunzps5RPkCN
| 4,934
|
Dataset Viewer issue for indonesian-nlp/librivox-indonesia
|
{
"login": "cahya-wirawan",
"id": 7669893,
"node_id": "MDQ6VXNlcjc2Njk4OTM=",
"avatar_url": "https://avatars.githubusercontent.com/u/7669893?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cahya-wirawan",
"html_url": "https://github.com/cahya-wirawan",
"followers_url": "https://api.github.com/users/cahya-wirawan/followers",
"following_url": "https://api.github.com/users/cahya-wirawan/following{/other_user}",
"gists_url": "https://api.github.com/users/cahya-wirawan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cahya-wirawan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cahya-wirawan/subscriptions",
"organizations_url": "https://api.github.com/users/cahya-wirawan/orgs",
"repos_url": "https://api.github.com/users/cahya-wirawan/repos",
"events_url": "https://api.github.com/users/cahya-wirawan/events{/privacy}",
"received_events_url": "https://api.github.com/users/cahya-wirawan/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"The error is not related to the dataset viewer. I'm having a look...",
"Thanks @albertvillanova for checking the issue. Actually, I can use the dataset like following:\r\n```\r\n>>> from datasets import load_dataset\r\n>>> ds=load_dataset(\"indonesian-nlp/librivox-indonesia\")\r\nNo config specified, defaulting to: librivox-indonesia/all\r\nReusing dataset librivox-indonesia (/root/.cache/huggingface/datasets/indonesian-nlp___librivox-indonesia/all/1.0.0/9a934a42bfb53dc103003d191618443b8a786bea2bd7bb0bc2d9454b8494521e)\r\n100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1/1 [00:00<00:00, 500.87it/s]\r\n>>> ds\r\nDatasetDict({\r\n train: Dataset({\r\n features: ['path', 'language', 'reader', 'sentence', 'audio'],\r\n num_rows: 7815\r\n })\r\n})\r\n>>> ds[\"train\"][0]\r\n{'path': '/root/.cache/huggingface/datasets/downloads/extracted/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3/librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'language': 'sun', 'reader': '3174', 'sentence': 'pernyataan umum ngeunaan hak hak asasi manusa sakabeh manusa', 'audio': {'path': '/root/.cache/huggingface/datasets/downloads/extracted/c8ead52370fa28feb64643ea9d05cd7d820192dc8a1700d665ec45ec7624f5a3/librivox-indonesia/sundanese/universal-declaration-of-human-rights/human_rights_un_sun_brc_0000.mp3', 'array': array([ 0. , 0. , 0. , ..., -0.02419001,\r\n -0.01957154, -0.01502833], dtype=float32), 'sampling_rate': 44100}}\r\n\r\n```\r\nIt would be just nice if I also can see it using dataset viewer.",
"Yes, the issue arises when streaming (that is used by the viewer): your script does not support streaming and to support it in this case there are some subtleties that we are explaining better in our docs in a work-in progress pull request:\r\n- #4872\r\n\r\nJust note that when streaming, `local_extracted_archive` is None, and this code line generates the error:\r\n```python\r\nfilepath = local_extracted_archive + \"/librivox-indonesia/audio_transcription.csv\"\r\n```\r\n\r\nFor a proper implementation, you could have a look at: https://huggingface.co/datasets/common_voice/blob/main/common_voice.py\r\n\r\nYou can test your script locally by passing `streaming=True` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"indonesian-nlp/librivox-indonesia\", split=\"train\", streaming=True); item = next(iter(ds)); item\r\n```",
"Great, I will have a look and update the script. Thanks.",
"Hi @albertvillanova , I just add the streaming functionality and it works in the first try :-) Thanks a lot!",
"Awesome!!! :hugs: "
] | 2022-09-06T10:03:23
| 2022-09-06T12:46:40
| 2022-09-06T12:46:40
|
CONTRIBUTOR
| null |
### Link
https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia
### Description
I created a new speech dataset https://huggingface.co/datasets/indonesian-nlp/librivox-indonesia, but the dataset preview doesn't work, with the following error message:
```
Server error
Status code: 400
Exception: TypeError
Message: unsupported operand type(s) for +: 'NoneType' and 'str'
```
Please help, I am not sure what the problem is here. Thanks a lot.
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4934/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4934/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4933
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4933/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4933/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4933/events
|
https://github.com/huggingface/datasets/issues/4933
| 1,363,013,023
|
I_kwDODunzps5RPe2f
| 4,933
|
Dataset/DatasetDict.filter() cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
|
{
"login": "tianjianjiang",
"id": 4812544,
"node_id": "MDQ6VXNlcjQ4MTI1NDQ=",
"avatar_url": "https://avatars.githubusercontent.com/u/4812544?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/tianjianjiang",
"html_url": "https://github.com/tianjianjiang",
"followers_url": "https://api.github.com/users/tianjianjiang/followers",
"following_url": "https://api.github.com/users/tianjianjiang/following{/other_user}",
"gists_url": "https://api.github.com/users/tianjianjiang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/tianjianjiang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/tianjianjiang/subscriptions",
"organizations_url": "https://api.github.com/users/tianjianjiang/orgs",
"repos_url": "https://api.github.com/users/tianjianjiang/repos",
"events_url": "https://api.github.com/users/tianjianjiang/events{/privacy}",
"received_events_url": "https://api.github.com/users/tianjianjiang/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n\r\nIn your case, something like\r\n```python\r\nfrom datasets import load_dataset\r\n\r\n\r\nds_mc4_ja = load_dataset(\"mc4\", \"ja\") # This will take 6+ hours... perhaps test it with a toy dataset instead?\r\nds_mc4_ja_2020 = ds_mc4_ja.filter(\r\n lambda batch: [timestamp[:4] == \"2020\" for timestamp in batch[\"timestamp\"]],\r\n batched=True,\r\n)\r\n```\r\n\r\nLet me know if it helps !",
"> Hi ! When `batched=True`, you filter function must take a batch as input, and return a list of booleans.\r\n> [...]\r\n> Let me know if it helps !\r\n\r\nHi @lhoestq,\r\n\r\nAh, my bad, I totally forgot that part...\r\nSorry for the trouble and thank you for the kind help!"
] | 2022-09-06T09:47:48
| 2022-09-06T11:44:27
| 2022-09-06T11:44:27
|
CONTRIBUTOR
| null |
## Describe the bug
`Dataset/DatasetDict.filter()` cannot have `batched=True` due to `mask` (numpy array?) being non-iterable.
## Steps to reproduce the bug
(In a Python 3.7.12 env, I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)
```python
from datasets import load_dataset
ds_mc4_ja = load_dataset("mc4", "ja") # This will take 6+ hours... perhaps test it with a toy dataset instead?
ds_mc4_ja_2020 = ds_mc4_ja.filter(
lambda example: example["timestamp"][:4] == "2020",
batched=True,
)
```
## Expected results
No error
## Actual results
```python
---------------------------------------------------------------------------
RemoteTraceback Traceback (most recent call last)
RemoteTraceback:
"""
Traceback (most recent call last):
File "/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py", line 121, in worker
result = (True, func(*args, **kwds))
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 557, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 524, in wrapper
out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py", line 480, in wrapper
out = func(self, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2779, in _map_single
offset=offset,
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2655, in apply_function_on_filtered_inputs
processed_inputs = function(*fn_args, *additional_args, **fn_kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 2347, in decorated
result = f(decorated_item, *args, **kwargs)
File "/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py", line 4946, in get_indices_from_mask_function
indices_array = [i for i, to_keep in zip(indices, mask) if to_keep]
TypeError: zip argument #2 must support iteration
"""
The above exception was the direct cause of the following exception:
TypeError Traceback (most recent call last)
/tmp/ipykernel_51348/2345782281.py in <module>
7 batched=True,
8 # batch_size=10_000,
----> 9 num_proc=111,
10 )
11 # ds_mc4_ja_clean_2020 = ds_mc4_ja.filter(
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_names, writer_batch_size, fn_kwargs, num_proc, desc)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/dataset_dict.py in <dictcomp>(.0)
878 desc=desc,
879 )
--> 880 for k, dataset in self.items()
881 }
882 )
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in wrapper(*args, **kwargs)
522 }
523 # apply actual function
--> 524 out: Union["Dataset", "DatasetDict"] = func(self, *args, **kwargs)
525 datasets: List["Dataset"] = list(out.values()) if isinstance(out, dict) else [out]
526 # re-apply format to the output
/opt/conda/lib/python3.7/site-packages/datasets/fingerprint.py in wrapper(*args, **kwargs)
478 # Call actual function
479
--> 480 out = func(self, *args, **kwargs)
481
482 # Update fingerprint of in-place transforms + update in-place history of transforms
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in filter(self, function, with_indices, input_columns, batched, batch_size, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2920 new_fingerprint=new_fingerprint,
2921 input_columns=input_columns,
-> 2922 desc=desc,
2923 )
2924 new_dataset = copy.deepcopy(self)
/opt/conda/lib/python3.7/site-packages/datasets/arrow_dataset.py in map(self, function, with_indices, with_rank, input_columns, batched, batch_size, drop_last_batch, remove_columns, keep_in_memory, load_from_cache_file, cache_file_name, writer_batch_size, features, disable_nullable, fn_kwargs, num_proc, suffix_template, new_fingerprint, desc)
2498
2499 for index, async_result in results.items():
-> 2500 transformed_shards[index] = async_result.get()
2501
2502 assert (
/opt/conda/lib/python3.7/site-packages/multiprocess/pool.py in get(self, timeout)
655 return self._value
656 else:
--> 657 raise self._value
658
659 def _set(self, i, obj):
TypeError: zip argument #2 must support iteration
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-debian-10.12
- Python version: 3.7.12
- PyArrow version: 9.0.0
- Pandas version: 1.3.5
(I've tried 2.4.0 and 2.3.2 with both `pyarrow==9.0.0` and `pyarrow==8.0.0`.)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4933/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4933/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4932
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4932/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4932/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4932/events
|
https://github.com/huggingface/datasets/issues/4932
| 1,362,522,423
|
I_kwDODunzps5RNnE3
| 4,932
|
Dataset Viewer issue for bigscience-biomedical/biosses
|
{
"login": "galtay",
"id": 663051,
"node_id": "MDQ6VXNlcjY2MzA1MQ==",
"avatar_url": "https://avatars.githubusercontent.com/u/663051?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/galtay",
"html_url": "https://github.com/galtay",
"followers_url": "https://api.github.com/users/galtay/followers",
"following_url": "https://api.github.com/users/galtay/following{/other_user}",
"gists_url": "https://api.github.com/users/galtay/gists{/gist_id}",
"starred_url": "https://api.github.com/users/galtay/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/galtay/subscriptions",
"organizations_url": "https://api.github.com/users/galtay/orgs",
"repos_url": "https://api.github.com/users/galtay/repos",
"events_url": "https://api.github.com/users/galtay/events{/privacy}",
"received_events_url": "https://api.github.com/users/galtay/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Possibly not related to the dataset viewer in itself. cc @huggingface/datasets.\r\n\r\nIn particular, I think that the import of bigbiohub is not working here: https://huggingface.co/datasets/bigscience-biomedical/biosses/blob/main/biosses.py#L29 (requires a relative path?)\r\n\r\n```python\r\n>>> from datasets import get_dataset_config_names\r\n>>> get_dataset_config_names('bigscience-biomedical/biosses')\r\nDownloading builder script: 100%|██████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 8.00k/8.00k [00:00<00:00, 7.47MB/s]\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 289, in get_dataset_config_names\r\n dataset_module = dataset_module_factory(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1247, in dataset_module_factory\r\n raise e1 from None\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 1220, in dataset_module_factory\r\n return HubDatasetModuleFactoryWithScript(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 931, in get_module\r\n local_imports = _download_additional_modules(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/load.py\", line 215, in _download_additional_modules\r\n raise ImportError(\r\nImportError: To be able to use bigscience-biomedical/biosses, you need to install the following dependency: bigbiohub.\r\nPlease install it using 'pip install bigbiohub' for instance'\r\n```",
"Opened a PR here to (hopefully) fix the dataset script: https://huggingface.co/datasets/bigscience-biomedical/biosses/discussions/1/files",
"thanks for taking a look @severo . agree this isn't related to dataset viewer (sorry just clicked on the auto issue creator). also thanks @lhoestq , I see the format to use for relative imports. was a bit confused b/c it seems to be working here \r\n\r\nhttps://huggingface.co/datasets/bigscience-biomedical/scitail/blob/main/scitail.py#L31\r\n\r\nI'll try this PR a see what happens. ",
"closing as I think the issue is relative imports and attempting to read json files directly in the repo (thanks again @lhoestq ) "
] | 2022-09-05T22:40:32
| 2022-09-06T14:24:56
| 2022-09-06T14:24:56
|
NONE
| null |
### Link
https://huggingface.co/datasets/bigscience-biomedical/biosses
### Description
I've just been working on adding the dataset loader script to this dataset and working with the relative imports. I'm not sure how to interpret the error below (shown where the dataset preview used to be).
```
Status code: 400
Exception: ModuleNotFoundError
Message: No module named 'datasets_modules.datasets.bigscience-biomedical--biosses.ddbd5893bf6c2f4db06f407665eaeac619520ba41f69d94ead28f7cc5b674056.bigbiohub'
```
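For reference, a dataset script can load a helper module stored next to it in the repo via a relative import, roughly like this (the imported symbol is illustrative; the relative path is the key part):
```python
# biosses.py, stored alongside bigbiohub.py in the same dataset repo:
from .bigbiohub import Tasks  # illustrative name; a plain `import bigbiohub` fails to resolve
```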
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4932/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4932/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4931
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4931/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4931/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4931/events
|
https://github.com/huggingface/datasets/pull/4931
| 1,362,298,764
|
PR_kwDODunzps4-Y3L6
| 4,931
|
Fix missing tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-05T17:03:04
| 2022-09-22T12:40:15
| 2022-09-06T05:39:29
|
MEMBER
| null |
Fix missing tags in dataset cards:
- coqa
- hyperpartisan_news_detection
- opinosis
- scientific_papers
- scifact
- search_qa
- wiki_qa
- wiki_split
- wikisql
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
- #4921
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4931/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4931/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4931",
"html_url": "https://github.com/huggingface/datasets/pull/4931",
"diff_url": "https://github.com/huggingface/datasets/pull/4931.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4931.patch",
"merged_at": "2022-09-06T05:39:29"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4930
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4930/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4930/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4930/events
|
https://github.com/huggingface/datasets/pull/4930
| 1,362,193,587
|
PR_kwDODunzps4-Yflc
| 4,930
|
Add cc-by-nc-2.0 to list of licenses
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-05T15:37:32
| 2022-09-06T16:43:32
| 2022-09-05T17:01:04
|
MEMBER
| null |
This PR adds the `cc-by-nc-2.0` license to the list of licenses because it is required by the `scifact` dataset: https://github.com/allenai/scifact/blob/master/LICENSE.md
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4930/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4930/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4930",
"html_url": "https://github.com/huggingface/datasets/pull/4930",
"diff_url": "https://github.com/huggingface/datasets/pull/4930.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4930.patch",
"merged_at": "2022-09-05T17:01:04"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4929
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4929/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4929/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4929/events
|
https://github.com/huggingface/datasets/pull/4929
| 1,361,508,366
|
PR_kwDODunzps4-WK2w
| 4,929
|
Fixes a typo in loading documentation
|
{
"login": "sighingnow",
"id": 7144772,
"node_id": "MDQ6VXNlcjcxNDQ3NzI=",
"avatar_url": "https://avatars.githubusercontent.com/u/7144772?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sighingnow",
"html_url": "https://github.com/sighingnow",
"followers_url": "https://api.github.com/users/sighingnow/followers",
"following_url": "https://api.github.com/users/sighingnow/following{/other_user}",
"gists_url": "https://api.github.com/users/sighingnow/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sighingnow/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sighingnow/subscriptions",
"organizations_url": "https://api.github.com/users/sighingnow/orgs",
"repos_url": "https://api.github.com/users/sighingnow/repos",
"events_url": "https://api.github.com/users/sighingnow/events{/privacy}",
"received_events_url": "https://api.github.com/users/sighingnow/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-05T07:18:54
| 2022-09-06T02:11:03
| 2022-09-05T13:06:38
|
CONTRIBUTOR
| null |
As shown on the [documentation page](https://huggingface.co/docs/datasets/loading), the `"tr"in` should be `"train"`.

|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4929/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4929/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4929",
"html_url": "https://github.com/huggingface/datasets/pull/4929",
"diff_url": "https://github.com/huggingface/datasets/pull/4929.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4929.patch",
"merged_at": "2022-09-05T13:06:38"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4928
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4928/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4928/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4928/events
|
https://github.com/huggingface/datasets/pull/4928
| 1,360,941,172
|
PR_kwDODunzps4-Ubi4
| 4,928
|
Add ability to read-write to SQL databases.
|
{
"login": "Dref360",
"id": 8976546,
"node_id": "MDQ6VXNlcjg5NzY1NDY=",
"avatar_url": "https://avatars.githubusercontent.com/u/8976546?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Dref360",
"html_url": "https://github.com/Dref360",
"followers_url": "https://api.github.com/users/Dref360/followers",
"following_url": "https://api.github.com/users/Dref360/following{/other_user}",
"gists_url": "https://api.github.com/users/Dref360/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Dref360/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Dref360/subscriptions",
"organizations_url": "https://api.github.com/users/Dref360/orgs",
"repos_url": "https://api.github.com/users/Dref360/repos",
"events_url": "https://api.github.com/users/Dref360/events{/privacy}",
"received_events_url": "https://api.github.com/users/Dref360/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-03T19:09:08
| 2022-10-03T16:34:36
| 2022-10-03T16:32:28
|
CONTRIBUTOR
| null |
Fixes #3094
Add the ability to read/write SQLite files, and also to read from any SQL database supported by SQLAlchemy.
I didn't add SQLAlchemy as a dependency, as it is fairly big, and it remains optional.
I also recorded a Loom to showcase the feature.
https://www.loom.com/share/f0e602c2de8a46f58bca4b43333d541f
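A rough sketch of the resulting API (the `from_sql`/`to_sql` method names and the SQLite URI below are assumptions for illustration):
```python
from datasets import Dataset

ds = Dataset.from_dict({"id": [1, 2], "text": ["foo", "bar"]})

# Write the dataset to a table in a SQLite database:
ds.to_sql("my_table", "sqlite:///data.db")

# Read it back; any SQLAlchemy-supported connection URI should also work for reading:
ds2 = Dataset.from_sql("my_table", "sqlite:///data.db")
```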
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4928/reactions",
"total_count": 8,
"+1": 2,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 4,
"rocket": 0,
"eyes": 2
}
|
https://api.github.com/repos/huggingface/datasets/issues/4928/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4928",
"html_url": "https://github.com/huggingface/datasets/pull/4928",
"diff_url": "https://github.com/huggingface/datasets/pull/4928.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4928.patch",
"merged_at": "2022-10-03T16:32:28"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4927
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4927/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4927/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4927/events
|
https://github.com/huggingface/datasets/pull/4927
| 1,360,428,139
|
PR_kwDODunzps4-S0we
| 4,927
|
fix BLEU metric card
|
{
"login": "antoniolanza1996",
"id": 40452030,
"node_id": "MDQ6VXNlcjQwNDUyMDMw",
"avatar_url": "https://avatars.githubusercontent.com/u/40452030?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/antoniolanza1996",
"html_url": "https://github.com/antoniolanza1996",
"followers_url": "https://api.github.com/users/antoniolanza1996/followers",
"following_url": "https://api.github.com/users/antoniolanza1996/following{/other_user}",
"gists_url": "https://api.github.com/users/antoniolanza1996/gists{/gist_id}",
"starred_url": "https://api.github.com/users/antoniolanza1996/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/antoniolanza1996/subscriptions",
"organizations_url": "https://api.github.com/users/antoniolanza1996/orgs",
"repos_url": "https://api.github.com/users/antoniolanza1996/repos",
"events_url": "https://api.github.com/users/antoniolanza1996/events{/privacy}",
"received_events_url": "https://api.github.com/users/antoniolanza1996/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-02T17:00:56
| 2022-09-09T16:28:15
| 2022-09-09T16:28:15
|
CONTRIBUTOR
| null |
I've fixed some typos in the BLEU metric card.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4927/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4927/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4927",
"html_url": "https://github.com/huggingface/datasets/pull/4927",
"diff_url": "https://github.com/huggingface/datasets/pull/4927.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4927.patch",
"merged_at": "2022-09-09T16:28:15"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4926
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4926/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4926/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4926/events
|
https://github.com/huggingface/datasets/pull/4926
| 1,360,384,484
|
PR_kwDODunzps4-Srm1
| 4,926
|
Dataset infos in yaml
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 4564477500,
"node_id": "LA_kwDODunzps8AAAABEBBmPA",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20contribution",
"name": "dataset contribution",
"color": "0e8a16",
"default": false,
"description": "Contribution to a dataset script"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-09-02T16:10:05
| 2022-10-03T09:13:07
| 2022-10-03T09:11:12
|
MEMBER
| null |
To simplify the addition of new datasets, we'd like to have the dataset infos in the YAML and deprecate the dataset_infos.json file. YAML is readable and easy to edit, and the README's YAML header already contains dataset metadata, so we would have everything in one place.
To be more specific, I moved these fields from DatasetInfo to the YAML:
- config_name (if there are several configs)
- download_size
- dataset_size
- features
- splits
Here is what I ended up with for `squad`:
```yaml
dataset_info:
features:
- name: id
dtype: string
- name: title
dtype: string
- name: context
dtype: string
- name: question
dtype: string
- name: answers
sequence:
- name: text
dtype: string
- name: answer_start
dtype: int32
splits:
- name: train
num_bytes: 79346360
num_examples: 87599
- name: validation
num_bytes: 10473040
num_examples: 10570
config_name: plain_text
download_size: 35142551
dataset_size: 89819400
```
and it can be a list if there are several configs.
I already did the change for `conll2000` and `crime_and_punish` as examples.
## Implementation details
### Load/Read
This is done via `DatasetInfosDict.write_to_directory/from_directory`
I had to implement custom YAML export logic for `SplitDict`, `Version` and `Features`.
The first two are trivial, but the logic for `Features` is more complicated, because I added a simplification step (otherwise the YAML would be too long and less readable): it's just a formatting step to remove unnecessary nesting of the YAML data.
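To sketch what consuming this looks like (not the actual `datasets` implementation — the helper name and front-matter layout are assumptions), the `dataset_info` block can be pulled out of the README's YAML front matter with any YAML parser:
```python
# Minimal sketch: read the `dataset_info` block from a README.md front matter.
# Assumes the front matter is delimited by the first two '---' lines.
import yaml  # pip install pyyaml

def read_dataset_info(readme_path: str) -> dict:
    with open(readme_path, encoding="utf-8") as f:
        text = f.read()
    if not text.startswith("---"):
        return {}
    # maxsplit=2 keeps any later '---' in the README body intact
    _, front_matter, _ = text.split("---", 2)
    metadata = yaml.safe_load(front_matter) or {}
    return metadata.get("dataset_info", {})

info = read_dataset_info("README.md")
print(info.get("download_size"), info.get("dataset_size"))
```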
### Other changes
I had to update the DatasetModule factories to also download the README.md alongside the dataset scripts/data files, not just the dataset_infos.json.
## YAML validation
I removed the old validation code that was in metadata.py; now we can just use the Hub's YAML validation.
## Datasets-cli
The `datasets-cli test --save_infos` command now creates a README.md file with the dataset infos in it, instead of a dataset_infos.json file.
## Backward compatibility
`dataset_infos.json` files are still supported and loaded if they exist, for full backward compatibility.
I did, however, remove the unnecessary keys whose value is the default (like all the `id: null` entries from the Value feature types) to make them easier to read.
## TODO
- [x] add comments
- [x] tests
- [x] document the new YAML fields
- [x] try to reload the new dataset_infos.json file content with an old version of `datasets`
## EDITS
- removed "config_name" when there's only one config
- removed "version" for now (?), because it's not useful in general
- renamed the YAML field to "dataset_info" instead of "dataset_infos", since it only holds one config by default (and because "infos" is not English)
Fix https://github.com/huggingface/datasets/issues/4876
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4926/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4926/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4926",
"html_url": "https://github.com/huggingface/datasets/pull/4926",
"diff_url": "https://github.com/huggingface/datasets/pull/4926.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4926.patch",
"merged_at": "2022-10-03T09:11:12"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4925
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4925/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4925/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4925/events
|
https://github.com/huggingface/datasets/pull/4925
| 1,360,007,616
|
PR_kwDODunzps4-RbP5
| 4,925
|
Add note about loading image / audio files to docs
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-09-02T10:31:58
| 2022-09-26T12:21:30
| 2022-09-23T13:59:07
|
MEMBER
| null |
This PR adds a small note about how to load image / audio datasets that have multiple splits in their dataset structure.
Related forum thread: https://discuss.huggingface.co/t/loading-train-and-test-splits-with-audiofolder/22447
cc @NielsRogge
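For context, a minimal sketch of the pattern the note documents (the `data_dir` path and folder layout are placeholders): with a `train/`/`test/` folder structure, the packaged `imagefolder`/`audiofolder` builders infer the splits automatically.
```python
# Sketch: a layout like data/train/... and data/test/... yields a DatasetDict
# with "train" and "test" splits. "path/to/data" is a placeholder.
from datasets import load_dataset

ds = load_dataset("imagefolder", data_dir="path/to/data")
print(ds)
```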
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4925/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4925/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4925",
"html_url": "https://github.com/huggingface/datasets/pull/4925",
"diff_url": "https://github.com/huggingface/datasets/pull/4925.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4925.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4924
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4924/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4924/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4924/events
|
https://github.com/huggingface/datasets/issues/4924
| 1,358,611,513
|
I_kwDODunzps5Q-sQ5
| 4,924
|
Concatenate_datasets loads everything into RAM
|
{
"login": "louisdeneve",
"id": 39416047,
"node_id": "MDQ6VXNlcjM5NDE2MDQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/39416047?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/louisdeneve",
"html_url": "https://github.com/louisdeneve",
"followers_url": "https://api.github.com/users/louisdeneve/followers",
"following_url": "https://api.github.com/users/louisdeneve/following{/other_user}",
"gists_url": "https://api.github.com/users/louisdeneve/gists{/gist_id}",
"starred_url": "https://api.github.com/users/louisdeneve/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/louisdeneve/subscriptions",
"organizations_url": "https://api.github.com/users/louisdeneve/orgs",
"repos_url": "https://api.github.com/users/louisdeneve/repos",
"events_url": "https://api.github.com/users/louisdeneve/events{/privacy}",
"received_events_url": "https://api.github.com/users/louisdeneve/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-09-01T10:25:17
| 2022-09-01T11:50:54
| 2022-09-01T11:50:54
|
NONE
| null |
## Describe the bug
After loading the datasets separately and saving them to disk, I want to concatenate them. But `concatenate_datasets` fills up my RAM and the process gets killed. Is there a way to prevent this, or is this intended behaviour? Thanks in advance
## Steps to reproduce the bug
```python
import gcsfs
from datasets import load_from_disk, concatenate_datasets

gcs = gcsfs.GCSFileSystem(project='project')
datasets = [load_from_disk(f'path/to/slice/of/data/{i}', fs=gcs, keep_in_memory=False) for i in range(10)]
dataset = concatenate_datasets(datasets)
```
## Expected results
A concatenated dataset which is stored on my disk.
## Actual results
The concatenated dataset gets loaded into RAM, overflowing it, and the process gets killed.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.19.0-21-cloud-amd64-x86_64-with-glibc2.10
- Python version: 3.8.13
- PyArrow version: 8.0.1
- Pandas version: 1.4.3
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4924/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4924/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4923
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4923/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4923/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4923/events
|
https://github.com/huggingface/datasets/pull/4923
| 1,357,735,287
|
PR_kwDODunzps4-Jv7C
| 4,923
|
decode mp3 with librosa if torchaudio is > 0.12 as a temporary workaround
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-31T18:57:59
| 2022-11-02T11:54:33
| 2022-09-20T13:12:52
|
CONTRIBUTOR
| null |
`torchaudio>0.12` fails to decode mp3 files if `ffmpeg<4`. Currently we ask users to downgrade torchaudio, but sometimes that's not possible because the torchaudio version is bound to the torch version. As a temporary workaround we can decode mp3 with librosa (though it is about 60 times slower, at least it works).
Another option would be to ask users to install the required version of `ffmpeg`, but that is non-trivial on Colab: it's not among the apt packages on Ubuntu 18 and `conda` is not preinstalled (with `conda` it would be easily installable).
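For reference, a minimal sketch of what the librosa fallback looks like (the file path is a placeholder, and this is not the exact code of the PR):
```python
# Decode an mp3 into a float waveform plus its sampling rate with librosa.
# sr=None keeps the file's native sampling rate instead of resampling to 22050 Hz.
import librosa

array, sampling_rate = librosa.load("audio.mp3", sr=None, mono=True)
print(array.shape, sampling_rate)
```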
- [x] decode with torchaudio anyway if the version of ffmpeg is correct? it's 60 times faster
- [x] tests
- [x] DO NOT FORGET to get back all the tests
see https://github.com/huggingface/datasets/issues/4776 and https://github.com/huggingface/datasets/issues/3663#issuecomment-1225797165 (there is a Colab notebook to reproduce the error)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4923/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4923/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4923",
"html_url": "https://github.com/huggingface/datasets/pull/4923",
"diff_url": "https://github.com/huggingface/datasets/pull/4923.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4923.patch",
"merged_at": "2022-09-20T13:12:52"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4922
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4922/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4922/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4922/events
|
https://github.com/huggingface/datasets/issues/4922
| 1,357,684,018
|
I_kwDODunzps5Q7J0y
| 4,922
|
I/O error on Google Colab in streaming mode
|
{
"login": "jotterbach",
"id": 5595043,
"node_id": "MDQ6VXNlcjU1OTUwNDM=",
"avatar_url": "https://avatars.githubusercontent.com/u/5595043?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/jotterbach",
"html_url": "https://github.com/jotterbach",
"followers_url": "https://api.github.com/users/jotterbach/followers",
"following_url": "https://api.github.com/users/jotterbach/following{/other_user}",
"gists_url": "https://api.github.com/users/jotterbach/gists{/gist_id}",
"starred_url": "https://api.github.com/users/jotterbach/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/jotterbach/subscriptions",
"organizations_url": "https://api.github.com/users/jotterbach/orgs",
"repos_url": "https://api.github.com/users/jotterbach/repos",
"events_url": "https://api.github.com/users/jotterbach/events{/privacy}",
"received_events_url": "https://api.github.com/users/jotterbach/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-08-31T18:08:26
| 2022-08-31T18:15:48
| 2022-08-31T18:15:48
|
NONE
| null |
## Describe the bug
When trying to load a streaming dataset in Google Colab, the loading fails with an I/O error.
## Steps to reproduce the bug
```python
import datasets
from datasets import load_dataset
hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
list(hf_ds.take(5))
```
## Expected results
It should load five data points
## Actual results
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
[<ipython-input-13-7b5b8b1e7e58>](https://localhost:8080/#) in <module>
2 from datasets import load_dataset
3 hf_ds = load_dataset(path='wmt19', name='cs-en', streaming=True, split=datasets.Split.VALIDATION)
----> 4 list(hf_ds.take(5))
6 frames
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
716
717 def __iter__(self):
--> 718 for key, example in self._iter():
719 if self.features:
720 # `IterableDataset` automatically fills missing columns with None.
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in _iter(self)
706 else:
707 ex_iterable = self._ex_iterable
--> 708 yield from ex_iterable
709
710 def _iter_shard(self, shard_idx: int):
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
582
583 def __iter__(self):
--> 584 yield from islice(self.ex_iterable, self.n)
585
586 def shuffle_data_sources(self, generator: np.random.Generator) -> "TakeExamplesIterable":
[/usr/local/lib/python3.7/dist-packages/datasets/iterable_dataset.py](https://localhost:8080/#) in __iter__(self)
110
111 def __iter__(self):
--> 112 yield from self.generate_examples_fn(**self.kwargs)
113
114 def shuffle_data_sources(self, generator: np.random.Generator) -> "ExamplesIterable":
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _generate_examples(self, split_subsets, extraction_map, with_translation)
845 raise ValueError("Invalid number of files: %d" % len(files))
846
--> 847 for sub_key, ex in sub_generator(*sub_generator_args):
848 if not all(ex.values()):
849 continue
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in _parse_parallel_sentences(f1, f2, filename1, filename2)
923 l2_sentences, l2 = parse_file(f2_i, filename2)
924
--> 925 for line_id, (s1, s2) in enumerate(zip(l1_sentences, l2_sentences)):
926 key = f"{f_id}/{line_id}"
927 yield key, {l1: s1, l2: s2}
[~/.cache/huggingface/modules/datasets_modules/datasets/wmt19/aeadcbe9f1cbf9969e603239d33d3e43670cf250c1158edf74f5f6e74d4f21d0/wmt_utils.py](https://localhost:8080/#) in gen()
895
896 def gen():
--> 897 with open(path, encoding="utf-8") as f:
898 for line in f:
899 seg_match = re.match(seg_re, line)
ValueError: I/O operation on closed file.
```
## Environment info
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 9.0.0 (the same error happened with PyArrow version 6.0.0)
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4922/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4922/timeline
| null |
not_planned
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4921
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4921/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4921/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4921/events
|
https://github.com/huggingface/datasets/pull/4921
| 1,357,609,003
|
PR_kwDODunzps4-JVFV
| 4,921
|
Fix missing tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-31T16:52:27
| 2022-09-22T14:34:11
| 2022-09-01T05:04:53
|
MEMBER
| null |
Fix missing tags in dataset cards:
- eraser_multi_rc
- hotpot_qa
- metooma
- movie_rationales
- qanta
- quora
- quoref
- race
- ted_hrlr
- ted_talks_iwslt
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
- #4908
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4921/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4921/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4921",
"html_url": "https://github.com/huggingface/datasets/pull/4921",
"diff_url": "https://github.com/huggingface/datasets/pull/4921.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4921.patch",
"merged_at": "2022-09-01T05:04:53"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4920
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4920/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4920/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4920/events
|
https://github.com/huggingface/datasets/issues/4920
| 1,357,564,589
|
I_kwDODunzps5Q6sqt
| 4,920
|
Unable to load local tsv files through load_dataset method
|
{
"login": "DataNoob0723",
"id": 44038517,
"node_id": "MDQ6VXNlcjQ0MDM4NTE3",
"avatar_url": "https://avatars.githubusercontent.com/u/44038517?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DataNoob0723",
"html_url": "https://github.com/DataNoob0723",
"followers_url": "https://api.github.com/users/DataNoob0723/followers",
"following_url": "https://api.github.com/users/DataNoob0723/following{/other_user}",
"gists_url": "https://api.github.com/users/DataNoob0723/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DataNoob0723/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DataNoob0723/subscriptions",
"organizations_url": "https://api.github.com/users/DataNoob0723/orgs",
"repos_url": "https://api.github.com/users/DataNoob0723/repos",
"events_url": "https://api.github.com/users/DataNoob0723/events{/privacy}",
"received_events_url": "https://api.github.com/users/DataNoob0723/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @DataNoob0723,\r\n\r\nUnder the hood, we use `pandas` to load CSV/TSV files. Therefore, you should use \"csv\" and pass `sep=\"\\t\"`, as explained in our docs: https://huggingface.co/docs/datasets/v2.4.0/en/package_reference/loading_methods#from-files\r\n```python\r\nds = load_dataset('csv', sep=\"\\t\", data_files=data_files)\r\n``` "
] | 2022-08-31T16:13:39
| 2022-09-01T05:31:30
| 2022-09-01T05:31:30
|
NONE
| null |
## Describe the bug
Unable to load local tsv files through load_dataset method.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
from datasets import load_dataset

data_files = {
    'train': 'train.tsv',
    'test': 'test.tsv'
}
raw_datasets = load_dataset('tsv', data_files=data_files)
```
## Expected results
I am pretty sure the data files exist in the current directory. The above code should load them as a `DatasetDict`, but it threw an exception instead.
## Actual results
```
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
[<ipython-input-9-24207899c1af>](https://localhost:8080/#) in <module>
----> 1 raw_datasets = load_dataset('tsv', data_files='train.tsv')
2 frames
[/usr/local/lib/python3.7/dist-packages/datasets/load.py](https://localhost:8080/#) in dataset_module_factory(path, revision, download_config, download_mode, dynamic_modules_path, data_dir, data_files, **download_kwargs)
1244 f"Couldn't find a dataset script at {relative_to_absolute_path(combined_path)} or any data file in the same directory. "
1245 f"Couldn't find '{path}' on the Hugging Face Hub either: {type(e1).__name__}: {e1}"
-> 1246 ) from None
1247 raise e1 from None
1248 else:
FileNotFoundError: Couldn't find a dataset script at /content/tsv/tsv.py or any data file in the same directory. Couldn't find 'tsv' on the Hugging Face Hub either: FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/main/datasets/tsv/tsv.py
```
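As noted in the comments above, TSV files are handled by the `csv` builder with a tab separator; a sketch of the working call:
```python
from datasets import load_dataset

data_files = {'train': 'train.tsv', 'test': 'test.tsv'}
raw_datasets = load_dataset('csv', data_files=data_files, sep='\t')
```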
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-5.4.188+-x86_64-with-Ubuntu-18.04-bionic
- Python version: 3.7.13
- PyArrow version: 6.0.1
- Pandas version: 1.3.5
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4920/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4920/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4919
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4919/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4919/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4919/events
|
https://github.com/huggingface/datasets/pull/4919
| 1,357,441,599
|
PR_kwDODunzps4-IxDZ
| 4,919
|
feat: improve error message on Keys mismatch. closes #4917
|
{
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-31T14:41:36
| 2022-09-05T08:46:01
| 2022-09-05T08:43:33
|
CONTRIBUTOR
| null |
Hi @lhoestq what do you think?
Let me give you a code sample:
```py
>>> import datasets
>>> foo = datasets.Dataset.from_dict({'foo':[0,1], 'bar':[2,3]})
>>> foo.save_to_disk('foo')
# edit foo/dataset_info.json e.g. rename the 'foo' feature to 'baz'
>>> datasets.load_from_disk('foo')
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-2-4863e606b330> in <module>
----> 1 datasets.load_from_disk('foo')
~/code/datasets/src/datasets/load.py in load_from_disk(dataset_path, fs, keep_in_memory)
1851 raise FileNotFoundError(f"Directory {dataset_path} not found")
1852 if fs.isfile(Path(dest_dataset_path, config.DATASET_INFO_FILENAME).as_posix()):
-> 1853 return Dataset.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
1854 elif fs.isfile(Path(dest_dataset_path, config.DATASETDICT_JSON_FILENAME).as_posix()):
1855 return DatasetDict.load_from_disk(dataset_path, fs, keep_in_memory=keep_in_memory)
~/code/datasets/src/datasets/arrow_dataset.py in load_from_disk(dataset_path, fs, keep_in_memory)
1230 info=dataset_info,
1231 split=split,
-> 1232 fingerprint=state["_fingerprint"],
1233 )
1234
~/code/datasets/src/datasets/arrow_dataset.py in __init__(self, arrow_table, info, split, indices_table, fingerprint)
687 self.info.features = inferred_features
688 else: # make sure the nested columns are in the right order
--> 689 self.info.features = self.info.features.reorder_fields_as(inferred_features)
690
691 # Infer fingerprint if None
~/code/datasets/src/datasets/features/features.py in reorder_fields_as(self, other)
1771 return source
1772
-> 1773 return Features(recursive_reorder(self, other))
1774
1775 def flatten(self, max_depth=16) -> "Features":
~/code/datasets/src/datasets/features/features.py in recursive_reorder(source, target, stack)
1760 f"{source.keys()-target.keys()} are missing from dataset.arrow "
1761 f"and {target.keys()-source.keys()} are missing from dataset_info.json"+stack_position)
-> 1762 raise ValueError(message)
1763 return {key: recursive_reorder(source[key], target[key], stack + f".{key}") for key in target}
1764 elif isinstance(source, list):
ValueError: Keys mismatch: between {'baz': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (dataset_info.json) and {'foo': Value(dtype='int64', id=None), 'bar': Value(dtype='int64', id=None)} (inferred from dataset.arrow).
{'baz'} are missing from dataset.arrow and {'foo'} are missing from dataset_info.json
```
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4919/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4919/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4919",
"html_url": "https://github.com/huggingface/datasets/pull/4919",
"diff_url": "https://github.com/huggingface/datasets/pull/4919.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4919.patch",
"merged_at": "2022-09-05T08:43:33"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4918
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4918/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4918/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4918/events
|
https://github.com/huggingface/datasets/issues/4918
| 1,357,242,757
|
I_kwDODunzps5Q5eGF
| 4,918
|
Dataset Viewer issue for pysentimiento/spanish-targeted-sentiment-headlines
|
{
"login": "finiteautomata",
"id": 167943,
"node_id": "MDQ6VXNlcjE2Nzk0Mw==",
"avatar_url": "https://avatars.githubusercontent.com/u/167943?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/finiteautomata",
"html_url": "https://github.com/finiteautomata",
"followers_url": "https://api.github.com/users/finiteautomata/followers",
"following_url": "https://api.github.com/users/finiteautomata/following{/other_user}",
"gists_url": "https://api.github.com/users/finiteautomata/gists{/gist_id}",
"starred_url": "https://api.github.com/users/finiteautomata/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/finiteautomata/subscriptions",
"organizations_url": "https://api.github.com/users/finiteautomata/orgs",
"repos_url": "https://api.github.com/users/finiteautomata/repos",
"events_url": "https://api.github.com/users/finiteautomata/events{/privacy}",
"received_events_url": "https://api.github.com/users/finiteautomata/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting, it's fixed now (I refreshed it manually). It's a known issue; we hope it will be fixed permanently in a few days.\r\n\r\n<img width=\"1508\" alt=\"Capture d’écran 2022-09-05 à 18 31 22\" src=\"https://user-images.githubusercontent.com/1676121/188489762-0ed86a7e-dfb3-46e8-a125-43b815a2c6f4.png\">\r\n",
"Thanks @severo! "
] | 2022-08-31T12:09:07
| 2022-09-05T21:36:34
| 2022-09-05T16:32:44
|
NONE
| null |
### Link
https://huggingface.co/datasets/pysentimiento/spanish-targeted-sentiment-headlines
### Description
After moving the dataset from my user (`finiteautomata`) to the `pysentimiento` organization, the dataset viewer says that it doesn't exist.
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4918/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4918/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4917
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4917/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4917/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4917/events
|
https://github.com/huggingface/datasets/issues/4917
| 1,357,193,841
|
I_kwDODunzps5Q5SJx
| 4,917
|
Keys mismatch: make error message more informative
|
{
"login": "PaulLerner",
"id": 25532159,
"node_id": "MDQ6VXNlcjI1NTMyMTU5",
"avatar_url": "https://avatars.githubusercontent.com/u/25532159?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PaulLerner",
"html_url": "https://github.com/PaulLerner",
"followers_url": "https://api.github.com/users/PaulLerner/followers",
"following_url": "https://api.github.com/users/PaulLerner/following{/other_user}",
"gists_url": "https://api.github.com/users/PaulLerner/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PaulLerner/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PaulLerner/subscriptions",
"organizations_url": "https://api.github.com/users/PaulLerner/orgs",
"repos_url": "https://api.github.com/users/PaulLerner/repos",
"events_url": "https://api.github.com/users/PaulLerner/events{/privacy}",
"received_events_url": "https://api.github.com/users/PaulLerner/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
},
{
"id": 1935892877,
"node_id": "MDU6TGFiZWwxOTM1ODkyODc3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20first%20issue",
"name": "good first issue",
"color": "7057ff",
"default": true,
"description": "Good for newcomers"
}
] |
closed
| false
| null |
[] | null |
[
"Good idea ! I think this can be improved in `Features.reorder_fields_as()` indeed at\r\n\r\nhttps://github.com/huggingface/datasets/blob/7feeb5648a63b6135a8259dedc3b1e19185ee4c7/src/datasets/features/features.py#L1739-L1740\r\n\r\nIs it something you would be interested in contributing ?",
"Is this open to work on? I'd love to take on this as my first issue.",
"Hi @daspartho I’ve opened a PR #4919 \r\nI don’t think there’s much left to do",
"ok : )"
] | 2022-08-31T11:24:34
| 2022-09-05T08:43:38
| 2022-09-05T08:43:38
|
CONTRIBUTOR
| null |
**Is your feature request related to a problem? Please describe.**
When loading a dataset from disk with a defect in its `dataset_info.json` describing its features (I don’t know when/why/how this happens but it deserves its own issue), you will get an error message like:
`ValueError: Keys mismatch: between {'bar': Value(dtype='int64', id=None)} and {'foo': Value(dtype='int64', id=None)}`
This is fine when you have only a few features, like in the example, but it gets very hard to read when your dataset has many features.
**Describe the solution you'd like**
The error message should give the difference between the features (what keys are in A but missing in B and vice-versa). It should also tell which keys are inferred from `dataset.arrow` and which come from `dataset_info.json`.
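For illustration, a minimal sketch of the kind of diff the improved message could include (using the feature names from the example above):
```python
# Show which keys are missing on each side using set differences on dict views.
info_features = {"bar": "int64"}    # from dataset_info.json
arrow_features = {"foo": "int64"}   # inferred from dataset.arrow

missing_from_arrow = info_features.keys() - arrow_features.keys()
missing_from_info = arrow_features.keys() - info_features.keys()
print(f"{missing_from_arrow} are missing from dataset.arrow "
      f"and {missing_from_info} are missing from dataset_info.json")
```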
Willing to help :)
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4917/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4917/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4916
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4916/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4916/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4916/events
|
https://github.com/huggingface/datasets/issues/4916
| 1,357,076,940
|
I_kwDODunzps5Q41nM
| 4,916
|
Apache Beam unable to write the downloaded wikipedia dataset
|
{
"login": "Shilpac20",
"id": 71849081,
"node_id": "MDQ6VXNlcjcxODQ5MDgx",
"avatar_url": "https://avatars.githubusercontent.com/u/71849081?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Shilpac20",
"html_url": "https://github.com/Shilpac20",
"followers_url": "https://api.github.com/users/Shilpac20/followers",
"following_url": "https://api.github.com/users/Shilpac20/following{/other_user}",
"gists_url": "https://api.github.com/users/Shilpac20/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Shilpac20/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Shilpac20/subscriptions",
"organizations_url": "https://api.github.com/users/Shilpac20/orgs",
"repos_url": "https://api.github.com/users/Shilpac20/repos",
"events_url": "https://api.github.com/users/Shilpac20/events{/privacy}",
"received_events_url": "https://api.github.com/users/Shilpac20/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"See:\r\n- #4915"
] | 2022-08-31T09:39:25
| 2022-08-31T10:53:19
| 2022-08-31T10:53:19
|
NONE
| null |
## Describe the bug
Hi, I am currently trying to download the wikipedia dataset using
`load_dataset("wikipedia", language="aa", date="20220401", split="train", beam_runner='DirectRunner')`. However, I end up getting a FileNotFoundError. I get this error for any language I try to download. It downloads the files, but fails to write them to the Hugging Face cache. This happens for any available date of any language in the wikipedia dump. I had raised another issue earlier, #4915, but it probably was not clear enough and the responder misunderstood my problem. Hence I am raising one more issue. Any help is appreciated.
## Steps to reproduce the bug
```python
from datasets import load_dataset
load_dataset("wikipedia", language="aa", date="20220401", split="train",beam_runner='DirectRunner')
```
## Expected results
The dataset should load successfully.
## Actual results
I am pasting the error trace here:
```
Downloading builder script: 35.9kB [00:00, ?B/s]
Downloading metadata: 30.4kB [00:00, 1.94MB/s]
Using custom data configuration 20220401.aa-date=20220401,language=aa
Downloading and preparing dataset wikipedia/20220401.aa to C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559...
Downloading data: 100%|████████████████████████████████████████████████████████████| 11.1k/11.1k [00:00<00:00, 712kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.82s/it]
Extracting data files: 100%|█████████████████████████████████████████████████████████████████████| 1/1 [00:00<?, ?it/s]
Downloading data: 100%|███████████████████████████████████████████████████████████| 35.6k/35.6k [00:00<00:00, 84.3kB/s]
Downloading data files: 100%|████████████████████████████████████████████████████████████| 1/1 [00:02<00:00, 2.93s/it]
Traceback (most recent call last):
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train'
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
File "G:/abc/temp.py", line 32, in
beam_runner='DirectRunner')
File "G:\Python3.7\lib\site-packages\datasets\load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "G:\Python3.7\lib\site-packages\datasets\builder.py", line 1394, in _download_and_prepare
pipeline_results = pipeline.run()
File "G:\Python3.7\lib\site-packages\apache_beam\pipeline.py", line 574, in run
return self.runner.run_pipeline(self, self._options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\direct\direct_runner.py", line 131, in run_pipeline
return runner.run_pipeline(pipeline, options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 201, in run_pipeline
options)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 212, in run_via_runner_api
return self.run_stages(stage_context, stages)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 443, in run_stages
runner_execution_context, bundle_context_manager, bundle_input)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 776, in _execute_bundle
bundle_manager))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1000, in _run_bundle
data_input, data_output, input_timers, expected_timer_output)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\fn_runner.py", line 1309, in process_bundle
result_future = self._worker_handler.control_conn.push(process_bundle_req)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\portability\fn_api_runner\worker_handlers.py", line 380, in push
response = self.worker.do_instruction(request)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 598, in do_instruction
getattr(request, request_type), request.instruction_id)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\sdk_worker.py", line 635, in process_bundle
bundle_processor.process_bundle(instruction_id))
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 1004, in process_bundle
element.data)
File "G:\Python3.7\lib\site-packages\apache_beam\runners\worker\bundle_processor.py", line 227, in process_encoded
self.output(decoded_value)
File "apache_beam\runners\worker\operations.py", line 526, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 528, in apache_beam.runners.worker.operations.Operation.output
File "apache_beam\runners\worker\operations.py", line 237, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 324, in apache_beam.runners.worker.operations.GeneralPurposeConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 905, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 623, in apache_beam.runners.common.SimpleInvoker.invoke_process
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1491, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1581, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "apache_beam\runners\common.py", line 1694, in apache_beam.runners.common._OutputHandler._write_value_to_tag
File "apache_beam\runners\worker\operations.py", line 240, in apache_beam.runners.worker.operations.SingletonElementConsumerSet.receive
File "apache_beam\runners\worker\operations.py", line 907, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\worker\operations.py", line 908, in apache_beam.runners.worker.operations.DoOperation.process
File "apache_beam\runners\common.py", line 1419, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 1507, in apache_beam.runners.common.DoFnRunner._reraise_augmented
File "apache_beam\runners\common.py", line 1417, in apache_beam.runners.common.DoFnRunner.process
File "apache_beam\runners\common.py", line 837, in apache_beam.runners.common.PerWindowInvoker.invoke_process
File "apache_beam\runners\common.py", line 981, in apache_beam.runners.common.PerWindowInvoker._invoke_process_per_window
File "apache_beam\runners\common.py", line 1571, in apache_beam.runners.common._OutputHandler.handle_process_outputs
File "G:\Python3.7\lib\site-packages\apache_beam\io\iobase.py", line 1193, in process
self.writer = self.sink.open_writer(init_result, str(uuid.uuid4()))
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 202, in open_writer
return FileBasedSinkWriter(self, writer_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 419, in init
self.temp_handle = self.sink.open(temp_shard_path)
File "G:\Python3.7\lib\site-packages\apache_beam\io\parquetio.py", line 553, in open
self._file_handle = super().open(temp_path)
File "G:\Python3.7\lib\site-packages\apache_beam\options\value_provider.py", line 193, in _f
return fnc(self, *args, **kwargs)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filebasedsink.py", line 139, in open
temp_path, self.mime_type, self.compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\filesystems.py", line 224, in create
return filesystem.create(path, mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 163, in create
return self._path_open(path, 'wb', mime_type, compression_type)
File "G:\Python3.7\lib\site-packages\apache_beam\io\localfilesystem.py", line 140, in _path_open
raw_file = io.open(path, mode)
RuntimeError: FileNotFoundError: [Errno 2] No such file or directory: 'C:\Users\Shilpa\.cache\huggingface\datasets\wikipedia\20220401.aa-date=20220401,language=aa\2.0.0\aa542ed919df55cc5d3347f42dd4521d05ca68751f50dbc32bae2a7f1e167559.incomplete\beam-temp-wikipedia-train-880233e8287e11edaf9d3ca067f2714e\20a05238-6106-4420-a713-4eca6dd5959a.wikipedia-train' [while running 'train/Save to parquet/Write/WriteImpl/WriteBundles']
## Environment info
Python: 3.7.6
Windows 10 Pro
datasets: 2.4.0
apache_beam: 2.41.0
mwparserfromhell: 0.6.4
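A minimal workaround sketch, assuming the failure stems from Windows path-length limits on Beam's temp shard paths (the short cache directory is an illustrative assumption, not a confirmed fix; `HF_DATASETS_CACHE` is the standard cache override):
```python
import os

# Assumption: a short cache path keeps Beam's temp shard paths under the
# Windows path-length limit that appears to trigger the FileNotFoundError.
os.environ["HF_DATASETS_CACHE"] = "C:\\hfcache"

from datasets import load_dataset

ds = load_dataset("wikipedia", language="aa", date="20220401", beam_runner="DirectRunner")
```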
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4916/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4916/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4914
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4914/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4914/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4914/events
|
https://github.com/huggingface/datasets/pull/4914
| 1,355,482,624
|
PR_kwDODunzps4-CFyN
| 4,914
|
Support streaming swda dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-30T09:46:28
| 2022-08-30T11:16:33
| 2022-08-30T11:14:16
|
MEMBER
| null |
Support streaming swda dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4914/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4914/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4914",
"html_url": "https://github.com/huggingface/datasets/pull/4914",
"diff_url": "https://github.com/huggingface/datasets/pull/4914.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4914.patch",
"merged_at": "2022-08-30T11:14:15"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4913
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4913/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4913/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4913/events
|
https://github.com/huggingface/datasets/pull/4913
| 1,355,232,007
|
PR_kwDODunzps4-BP00
| 4,913
|
Add license and citation information to cosmos_qa dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-30T06:23:19
| 2022-08-30T09:49:31
| 2022-08-30T09:47:35
|
MEMBER
| null |
This PR adds the license information to `cosmos_qa` dataset, as reported via email by Yejin Choi: the dataset is licensed under CC BY 4.0.
This PR also updates the citation information.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4913/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4913/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4913",
"html_url": "https://github.com/huggingface/datasets/pull/4913",
"diff_url": "https://github.com/huggingface/datasets/pull/4913.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4913.patch",
"merged_at": "2022-08-30T09:47:35"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4912
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4912/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4912/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4912/events
|
https://github.com/huggingface/datasets/issues/4912
| 1,355,078,864
|
I_kwDODunzps5QxNzQ
| 4,912
|
datasets map() handles all data in one go and takes a long time
|
{
"login": "BruceStayHungry",
"id": 40711748,
"node_id": "MDQ6VXNlcjQwNzExNzQ4",
"avatar_url": "https://avatars.githubusercontent.com/u/40711748?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BruceStayHungry",
"html_url": "https://github.com/BruceStayHungry",
"followers_url": "https://api.github.com/users/BruceStayHungry/followers",
"following_url": "https://api.github.com/users/BruceStayHungry/following{/other_user}",
"gists_url": "https://api.github.com/users/BruceStayHungry/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BruceStayHungry/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BruceStayHungry/subscriptions",
"organizations_url": "https://api.github.com/users/BruceStayHungry/orgs",
"repos_url": "https://api.github.com/users/BruceStayHungry/repos",
"events_url": "https://api.github.com/users/BruceStayHungry/events{/privacy}",
"received_events_url": "https://api.github.com/users/BruceStayHungry/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[
"Hi ! Interesting question ;)\r\n\r\n> Which is better? Process in map() or in data-collator\r\n\r\nAs you said, both can be used in practice: map() if you want to preprocess before training, or a data-collator (or the equivalent `dataset.set_transform`) if you want to preprocess on-the-fly during training. Both options are great and really depend on your case.\r\n\r\nTo choose between the two, here are IMO the main caveats of each approach:\r\n- if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n- on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\n> Why huggingface advises map() function? There should be some advantages to using map()\r\n\r\nTo get the best throughput when training a model, it is often recommended to preprocess your dataset before training. Note that preprocessing may include other steps before tokenization such as data filtering, cleaning, chunking etc. which are often done before training.",
"Thanks for your clear explanation @lhoestq ! \r\n> * if your preprocessing takes too much CPU for example, using a data-collator may slow down your training and your GPUs may not work at full speed\r\n> * on the other hand, map() may take a lot of time and disk space to run if your dataset is too big.\r\n\r\nI really agree with you. There should be some trade-off between processing before and during the train loop.\r\nBesides, I find `map()` function can cache the results once it has been executed. Very useful!",
"I'm closing this issue if you don't mind, feel free to reopen if needed ;)"
] | 2022-08-30T02:25:56
| 2022-09-06T09:23:35
| 2022-09-06T09:23:35
|
NONE
| null |
**1. Background**
The Hugging Face datasets package advises using `map()` to process data in batches. In the example code for pretraining a masked language model, `map()` is used to tokenize all the data in one go before the training loop.
The corresponding code:
```python
with accelerator.main_process_first():
tokenized_datasets = raw_datasets.map(
tokenize_function,
batched=True,
num_proc=args.preprocessing_num_workers,
remove_columns=column_names,
load_from_cache_file=not args.overwrite_cache,
desc="Running tokenizer on every text in dataset"
)
```
**2. The problem**
When I try the same pretraining code with a much larger corpus, it takes quite a long time to tokenize.
Alternatively, we can tokenize in the `data-collator`. In this way, the program tokenizes only one batch per training step and avoids getting stuck in up-front tokenization (a sketch of this on-the-fly approach follows below).
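A minimal sketch of the on-the-fly alternative (the `imdb` dataset and `bert-base-uncased` checkpoint are placeholder examples, not the setup from the pretraining script):
```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
raw = load_dataset("imdb", split="train")

def tokenize(batch):
    # receives a batch (dict of lists) and returns the tokenized columns
    return tokenizer(batch["text"], truncation=True, padding="max_length")

# nothing is precomputed or cached: the transform runs per batch on __getitem__
raw.set_transform(tokenize)
print(raw[0]["input_ids"][:5])  # tokenized only when accessed
```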
**3. My question**
As described above, my questions are:
* **Which is better: processing in `map()` or in the `data-collator`?**
* **Why does Hugging Face advise using the `map()` function?** There should be some advantages to using `map()`.
Thanks for your answers!
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4912/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4912/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4909
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4909/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4909/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4909/events
|
https://github.com/huggingface/datasets/pull/4909
| 1,353,997,788
|
PR_kwDODunzps499Fhe
| 4,909
|
Update GLUE evaluation metadata
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-29T09:43:44
| 2022-08-29T14:53:29
| 2022-08-29T14:51:18
|
MEMBER
| null |
This PR updates the evaluation metadata for GLUE to:
* Include defaults for all configs except `ax` (which only has a `test` split with no known labels)
* Fix the default split from `test` to `validation` since `test` splits in GLUE have no labels (they're private)
* Fix the `task_id` for some existing defaults
cc @sashavor @douwekiela
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4909/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4909/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4909",
"html_url": "https://github.com/huggingface/datasets/pull/4909",
"diff_url": "https://github.com/huggingface/datasets/pull/4909.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4909.patch",
"merged_at": "2022-08-29T14:51:18"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4908
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4908/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4908/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4908/events
|
https://github.com/huggingface/datasets/pull/4908
| 1,353,995,574
|
PR_kwDODunzps499FDS
| 4,908
|
Fix missing tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-29T09:41:53
| 2022-09-22T14:35:56
| 2022-08-29T16:13:07
|
MEMBER
| null |
Fix missing tags in dataset cards:
- asnq
- clue
- common_gen
- cosmos_qa
- guardian_authorship
- hindi_discourse
- py_ast
- x_stance
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
- #4896
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4908/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4908/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4908",
"html_url": "https://github.com/huggingface/datasets/pull/4908",
"diff_url": "https://github.com/huggingface/datasets/pull/4908.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4908.patch",
"merged_at": "2022-08-29T16:13:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4907
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4907/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4907/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4907/events
|
https://github.com/huggingface/datasets/issues/4907
| 1,353,808,348
|
I_kwDODunzps5QsXnc
| 4,907
|
NoneType error for swda dataset
|
{
"login": "hannan72",
"id": 8229163,
"node_id": "MDQ6VXNlcjgyMjkxNjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/8229163?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/hannan72",
"html_url": "https://github.com/hannan72",
"followers_url": "https://api.github.com/users/hannan72/followers",
"following_url": "https://api.github.com/users/hannan72/following{/other_user}",
"gists_url": "https://api.github.com/users/hannan72/gists{/gist_id}",
"starred_url": "https://api.github.com/users/hannan72/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/hannan72/subscriptions",
"organizations_url": "https://api.github.com/users/hannan72/orgs",
"repos_url": "https://api.github.com/users/hannan72/repos",
"events_url": "https://api.github.com/users/hannan72/events{/privacy}",
"received_events_url": "https://api.github.com/users/hannan72/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @hannan72 ! I couldn't reproduce the error on my side, can you share the full stack trace please ?",
"Thanks a lot for your response @lhoestq \r\nThe problem is solved accidentally today and I don't know exactly why it was happened yesterday.\r\nThe issue can be closed.",
"Ok, let us know if you encounter the issue again ;)"
] | 2022-08-29T07:05:20
| 2022-08-30T14:43:41
| 2022-08-30T14:43:41
|
NONE
| null |
## Describe the bug
I got a `'NoneType' object is not callable` error while loading the swda dataset.
## Steps to reproduce the bug
```python
from datasets import load_dataset
dataset = load_dataset("swda")
```
## Expected results
The dataset loads without error.
## Environment info
- `datasets` version: 2.4.0
- Python version: 3.8.10
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4907/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4907/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4906
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4906/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4906/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4906/events
|
https://github.com/huggingface/datasets/issues/4906
| 1,353,223,925
|
I_kwDODunzps5QqI71
| 4,906
|
Can't import datasets AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
|
{
"login": "OPterminator",
"id": 63536981,
"node_id": "MDQ6VXNlcjYzNTM2OTgx",
"avatar_url": "https://avatars.githubusercontent.com/u/63536981?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/OPterminator",
"html_url": "https://github.com/OPterminator",
"followers_url": "https://api.github.com/users/OPterminator/followers",
"following_url": "https://api.github.com/users/OPterminator/following{/other_user}",
"gists_url": "https://api.github.com/users/OPterminator/gists{/gist_id}",
"starred_url": "https://api.github.com/users/OPterminator/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/OPterminator/subscriptions",
"organizations_url": "https://api.github.com/users/OPterminator/orgs",
"repos_url": "https://api.github.com/users/OPterminator/repos",
"events_url": "https://api.github.com/users/OPterminator/events{/privacy}",
"received_events_url": "https://api.github.com/users/OPterminator/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting, @OPterminator.\r\n\r\nHowever, we are not able to reproduce this issue.\r\n\r\nThere might be 2 reasons why you get this exception:\r\n- Either the name of your local Python file: if it is called `datasets.py` this could generate a circular import when trying to import the Hugging Face `datasets` library.\r\n - You could try to rename it and run it again.\r\n- Another cause could be the simultaneous use of the packages `nlp` and `datasets`. Please note that we renamed the Hugging Face `nlp` library to `datasets` more than 2 years ago: they are 2 versions of the same library.\r\n - Please try to update your script and use only `datasets` (`nlp` name is no longer in use and is out of date).",
"i am also facing this issue\r\n\r\n\r\n```\r\n----> 1 import datasets\r\n 3 dataset = datasets.load_dataset(\"ucberkeley-dlab/measuring-hate-speech\", \"binary\")\r\n 4 df = dataset[\"train\"].to_pandas()\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/__init__.py:52\r\n 50 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled\r\n 51 from .info import DatasetInfo, MetricInfo\r\n---> 52 from .inspect import (\r\n 53 get_dataset_config_info,\r\n 54 get_dataset_config_names,\r\n 55 get_dataset_infos,\r\n 56 get_dataset_split_names,\r\n 57 inspect_dataset,\r\n 58 inspect_metric,\r\n 59 list_datasets,\r\n 60 list_metrics,\r\n 61 )\r\n 62 from .iterable_dataset import IterableDataset\r\n 63 from .load import load_dataset, load_dataset_builder, load_from_disk, load_metric\r\n\r\nFile ~/.pyenv/versions/3.10.9/lib/python3.10/site-packages/datasets/inspect.py:30\r\n 28 from .download.streaming_download_manager import StreamingDownloadManager\r\n...\r\n---> 16 logger = datasets.utils.logging.get_logger(__name__)\r\n 19 if datasets.config.PYARROW_VERSION.major >= 7:\r\n 21 def pa_table_to_pylist(table):\r\n```"
] | 2022-08-28T02:23:24
| 2023-01-12T23:23:41
| 2022-10-03T12:22:50
|
NONE
| null |
## Describe the bug
Not able to import `datasets`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
import os
os.environ["WANDB_API_KEY"] = "0" ## to silence warning
import numpy as np
import random
import sklearn
import matplotlib.pyplot as plt
import pandas as pd
import sys
import tensorflow as tf
import plotly.express as px
import transformers
import tokenizers
import nlp as nlp
import utils
import datasets
```
## Expected results
The import should work normally.
## Actual results
---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
<ipython-input-21-b3b5b0b62103> in <module>
13 import nlp as nlp
14 import utils
---> 15 import datasets
~\anaconda3\lib\site-packages\datasets\__init__.py in <module>
44 from .fingerprint import disable_caching, enable_caching, is_caching_enabled, set_caching_enabled
45 from .info import DatasetInfo, MetricInfo
---> 46 from .inspect import (
47 get_dataset_config_info,
48 get_dataset_config_names,
~\anaconda3\lib\site-packages\datasets\inspect.py in <module>
28 from .download.streaming_download_manager import StreamingDownloadManager
29 from .info import DatasetInfo
---> 30 from .load import dataset_module_factory, import_main_class, load_dataset_builder, metric_module_factory
31 from .utils.file_utils import relative_to_absolute_path
32 from .utils.logging import get_logger
~\anaconda3\lib\site-packages\datasets\load.py in <module>
53 from .iterable_dataset import IterableDataset
54 from .metric import Metric
---> 55 from .packaged_modules import (
56 _EXTENSION_TO_MODULE,
57 _MODULE_SUPPORTS_METADATA,
~\anaconda3\lib\site-packages\datasets\packaged_modules\__init__.py in <module>
4 from typing import List
5
----> 6 from .csv import csv
7 from .imagefolder import imagefolder
8 from .json import json
~\anaconda3\lib\site-packages\datasets\packaged_modules\csv\csv.py in <module>
13
14
---> 15 logger = datasets.utils.logging.get_logger(__name__)
16
17 _PANDAS_READ_CSV_NO_DEFAULT_PARAMETERS = ["names", "prefix"]
AttributeError: partially initialized module 'datasets' has no attribute 'utils' (most likely due to a circular import)
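A quick diagnostic sketch (an assumption based on the usual cause of this error, not a confirmed fix): check whether a local file is shadowing the library before importing it:
```python
import importlib.util

# If this prints a path to a local datasets.py instead of site-packages,
# that local file is shadowing the Hugging Face library; rename it.
spec = importlib.util.find_spec("datasets")
print(spec.origin)
```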
## Environment info
- `datasets` version: 2.4.0
- Platform: Windows-10-10.0.22000-SP0
- Python version: 3.8.8
- PyArrow version: 9.0.0
- Pandas version: 1.2.4
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4906/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4906/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4904
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4904/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4904/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4904/events
|
https://github.com/huggingface/datasets/pull/4904
| 1,353,002,837
|
PR_kwDODunzps4959Ad
| 4,904
|
[LibriSpeech] Fix dev split local_extracted_archive for 'all' config
|
{
"login": "sanchit-gandhi",
"id": 93869735,
"node_id": "U_kgDOBZhWpw",
"avatar_url": "https://avatars.githubusercontent.com/u/93869735?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanchit-gandhi",
"html_url": "https://github.com/sanchit-gandhi",
"followers_url": "https://api.github.com/users/sanchit-gandhi/followers",
"following_url": "https://api.github.com/users/sanchit-gandhi/following{/other_user}",
"gists_url": "https://api.github.com/users/sanchit-gandhi/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanchit-gandhi/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanchit-gandhi/subscriptions",
"organizations_url": "https://api.github.com/users/sanchit-gandhi/orgs",
"repos_url": "https://api.github.com/users/sanchit-gandhi/repos",
"events_url": "https://api.github.com/users/sanchit-gandhi/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanchit-gandhi/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-27T10:04:57
| 2022-08-30T10:06:21
| 2022-08-30T10:03:25
|
CONTRIBUTOR
| null |
We define the keys for the `_DL_URLS` of the dev split as `dev.clean` and `dev.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L60-L61
These keys get forwarded to the `dl_manager` and thus the `local_extracted_archive`.
However, when calling `SplitGenerator` for the dev sets, we query the `local_extracted_archive` keys `validation.clean` and `validation.other`:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L212
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L219
The consequence of this is that the `local_extracted_archive` arg passed to `_generate_examples` is always `None`, as the keys `validation.clean` and `validation.other` do not exist in the `local_extracted_archive`.
When defining the `audio_file` in `_generate_examples`, since `local_extracted_archive` is always `None`, we always omit the `local_extracted_archive` path from the `audio_file` path, even in non-streaming mode:
https://github.com/huggingface/datasets/blob/2e7142a3c6500b560da45e8d5128e320a09fcbd4/datasets/librispeech_asr/librispeech_asr.py#L259-L263
Thus, `audio_file` will only ever be the streaming path (`audio_file`, not `os.path.join(local_extracted_archive, audio_file)`).
This PR fixes the `.get()` keys for the `local_extracted_archive` for the dev splits.
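A minimal sketch of the mismatch (the paths are illustrative):
```python
# the extracted-archive dict is keyed by the _DL_URLS names...
local_extracted_archive = {
    "dev.clean": "/extracted/dev-clean",
    "dev.other": "/extracted/dev-other",
}

# ...but was queried with the split names, so the lookup always missed
print(local_extracted_archive.get("validation.clean"))  # None -> streaming path used
print(local_extracted_archive.get("dev.clean"))         # the fixed lookup
```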
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4904/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4904/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4904",
"html_url": "https://github.com/huggingface/datasets/pull/4904",
"diff_url": "https://github.com/huggingface/datasets/pull/4904.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4904.patch",
"merged_at": "2022-08-30T10:03:25"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4903
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4903/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4903/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4903/events
|
https://github.com/huggingface/datasets/pull/4903
| 1,352,539,075
|
PR_kwDODunzps494aud
| 4,903
|
Fix CI reporting
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-26T17:16:30
| 2022-08-26T17:49:33
| 2022-08-26T17:46:59
|
MEMBER
| null |
Fix CI so that it reports the default statuses (failed and error) besides the custom ones (xfailed and xpassed) in the test summary.
This PR fixes a regression introduced by:
- #4845
This introduced the reporting of xfailed and xpassed, but wrongly removed the reporting of the defaults failed and error.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4903/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4903/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4903",
"html_url": "https://github.com/huggingface/datasets/pull/4903",
"diff_url": "https://github.com/huggingface/datasets/pull/4903.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4903.patch",
"merged_at": "2022-08-26T17:46:59"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4901
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4901/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4901/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4901/events
|
https://github.com/huggingface/datasets/pull/4901
| 1,352,438,915
|
PR_kwDODunzps494FNX
| 4,901
|
Raise ManualDownloadError from get_dataset_config_info
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-26T15:45:56
| 2022-08-30T10:42:21
| 2022-08-30T10:40:04
|
MEMBER
| null |
This PR raises a specific `ManualDownloadError` when `get_dataset_config_info` is called for a dataset that requires manual download.
Related to:
- #4898
CC: @severo
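A minimal usage sketch; the import location of `ManualDownloadError` is assumed from where `datasets` defines its builder errors:
```python
from datasets import get_dataset_config_info
from datasets.builder import ManualDownloadError  # assumed location

try:
    get_dataset_config_info("timit_asr")  # dataset requiring manual download
except ManualDownloadError as err:
    print(err)  # explicit message instead of an opaque SplitsNotFoundError
```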
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4901/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4901/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4901",
"html_url": "https://github.com/huggingface/datasets/pull/4901",
"diff_url": "https://github.com/huggingface/datasets/pull/4901.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4901.patch",
"merged_at": "2022-08-30T10:40:04"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4899
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4899/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4899/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4899/events
|
https://github.com/huggingface/datasets/pull/4899
| 1,352,031,286
|
PR_kwDODunzps492uTO
| 4,899
|
Re-add code and und language tags
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-26T09:48:57
| 2022-08-26T10:27:18
| 2022-08-26T10:24:20
|
MEMBER
| null |
This PR fixes the removal of 2 language tags done by:
- #4882
The tags are:
- "code": this is not a IANA tag but needed
- "und": this is one of the special scoped tags removed by 0d53202b9abce6fd0358cb00d06fcfd904b875af
- used in "mc4" and "udhr" datasets
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4899/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4899/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4899",
"html_url": "https://github.com/huggingface/datasets/pull/4899",
"diff_url": "https://github.com/huggingface/datasets/pull/4899.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4899.patch",
"merged_at": "2022-08-26T10:24:20"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4898
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4898/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4898/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4898/events
|
https://github.com/huggingface/datasets/issues/4898
| 1,351,851,254
|
I_kwDODunzps5Qk5z2
| 4,898
|
Dataset Viewer issue for timit_asr
|
{
"login": "InayatUllah932",
"id": 91126978,
"node_id": "MDQ6VXNlcjkxMTI2OTc4",
"avatar_url": "https://avatars.githubusercontent.com/u/91126978?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/InayatUllah932",
"html_url": "https://github.com/InayatUllah932",
"followers_url": "https://api.github.com/users/InayatUllah932/followers",
"following_url": "https://api.github.com/users/InayatUllah932/following{/other_user}",
"gists_url": "https://api.github.com/users/InayatUllah932/gists{/gist_id}",
"starred_url": "https://api.github.com/users/InayatUllah932/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/InayatUllah932/subscriptions",
"organizations_url": "https://api.github.com/users/InayatUllah932/orgs",
"repos_url": "https://api.github.com/users/InayatUllah932/repos",
"events_url": "https://api.github.com/users/InayatUllah932/events{/privacy}",
"received_events_url": "https://api.github.com/users/InayatUllah932/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Yes, the dataset viewer is based on `datasets`, and the following does not work:\r\n\r\n```\r\n>>> from datasets import get_dataset_split_names\r\n>>> get_dataset_split_names('timit_asr')\r\nDownloading builder script: 7.48kB [00:00, 6.69MB/s]\r\nTraceback (most recent call last):\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 354, in get_dataset_config_info\r\n for split_generator in builder._split_generators(\r\n File \"/home/slesage/.cache/huggingface/modules/datasets_modules/datasets/timit_asr/43f9448dd5db58e95ee48a277f466481b151f112ea53e27f8173784da9254fb2/timit_asr.py\", line 117, in _split_generators\r\n data_dir = os.path.abspath(os.path.expanduser(dl_manager.manual_dir))\r\n File \"/home/slesage/.pyenv/versions/3.9.6/lib/python3.9/posixpath.py\", line 231, in expanduser\r\n path = os.fspath(path)\r\nTypeError: expected str, bytes or os.PathLike object, not NoneType\r\n\r\nThe above exception was the direct cause of the following exception:\r\n\r\nTraceback (most recent call last):\r\n File \"<stdin>\", line 1, in <module>\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 404, in get_dataset_split_names\r\n info = get_dataset_config_info(\r\n File \"/home/slesage/hf/datasets-server/services/worker/.venv/lib/python3.9/site-packages/datasets/inspect.py\", line 359, in get_dataset_config_info\r\n raise SplitsNotFoundError(\"The split names could not be parsed from the dataset config.\") from err\r\ndatasets.inspect.SplitsNotFoundError: The split names could not be parsed from the dataset config.\r\n```\r\n\r\ncc @huggingface/datasets ",
"Due to license restriction, this dataset needs manual downloading of the original data.\r\n\r\nThis information is in the dataset card: https://huggingface.co/datasets/timit_asr\r\n> The dataset needs to be downloaded manually from https://catalog.ldc.upenn.edu/LDC93S1",
"Maybe a better error message for datasets that need manual downloading? @severo \r\n\r\nMaybe we can raise a specific excpetion as done from `load_dataset`...",
"Yes, ideally something like https://github.com/huggingface/datasets/blob/main/src/datasets/builder.py#L81\r\n",
"The preview is now disabled (and a descriptive warning is displayed) for datasets requiring manual download. See:\r\n\r\n\r\n"
] | 2022-08-26T07:12:05
| 2022-10-03T12:40:28
| 2022-10-03T12:40:27
|
NONE
| null |
### Link
_No response_
### Description
_No response_
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4898/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4898/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4897
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4897/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4897/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4897/events
|
https://github.com/huggingface/datasets/issues/4897
| 1,351,784,727
|
I_kwDODunzps5QkpkX
| 4,897
|
datasets generates a large arrow file
|
{
"login": "osayes",
"id": 18533904,
"node_id": "MDQ6VXNlcjE4NTMzOTA0",
"avatar_url": "https://avatars.githubusercontent.com/u/18533904?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osayes",
"html_url": "https://github.com/osayes",
"followers_url": "https://api.github.com/users/osayes/followers",
"following_url": "https://api.github.com/users/osayes/following{/other_user}",
"gists_url": "https://api.github.com/users/osayes/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osayes/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osayes/subscriptions",
"organizations_url": "https://api.github.com/users/osayes/orgs",
"repos_url": "https://api.github.com/users/osayes/repos",
"events_url": "https://api.github.com/users/osayes/events{/privacy}",
"received_events_url": "https://api.github.com/users/osayes/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Hi ! The cache files are the results of all the transforms you applied to the dataset using `map` for example.\r\nDid you run a transform that could potentially blow up the size of the dataset ?",
"@lhoestq,\r\nI don't remember, but I can't imagine what kind of transform may generate data that grow over 200 times in size. \r\nI think maybe it doesn' matter, it's just cache after all."
] | 2022-08-26T05:51:16
| 2022-09-18T05:07:52
| 2022-09-18T05:07:52
|
NONE
| null |
Checking large files on disk, I found this huge cache file in the cifar10 data directory:

As we know, the cifar10 dataset is only ~130 MB, but this cache file is almost 30 GB; there may be a problem here.
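A minimal sketch for inspecting and clearing those cache files with the standard `datasets` API:
```python
from datasets import load_dataset

ds = load_dataset("cifar10", split="train")
print(ds.cache_files)  # arrow files backing this dataset, including map() caches

# removes cache files in the dataset's cache directory, except the one
# currently in use, and returns how many were deleted
removed = ds.cleanup_cache_files()
print(f"removed {removed} cache file(s)")
```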
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4897/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4897/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4896
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4896/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4896/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4896/events
|
https://github.com/huggingface/datasets/pull/4896
| 1,351,180,409
|
PR_kwDODunzps49z4fU
| 4,896
|
Fix missing tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-25T16:41:43
| 2022-09-22T14:37:16
| 2022-08-26T04:41:48
|
MEMBER
| null |
Fix missing tags in dataset cards:
- anli
- coarse_discourse
- commonsense_qa
- cos_e
- ilist
- lc_quad
- web_questions
- xsum
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
- #4891
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4896/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4896/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4896",
"html_url": "https://github.com/huggingface/datasets/pull/4896",
"diff_url": "https://github.com/huggingface/datasets/pull/4896.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4896.patch",
"merged_at": "2022-08-26T04:41:48"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4895
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4895/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4895/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4895/events
|
https://github.com/huggingface/datasets/issues/4895
| 1,350,798,527
|
I_kwDODunzps5Qg4y_
| 4,895
|
load_dataset method raises Unknown split "validation" even though the directory exists
|
{
"login": "SamSamhuns",
"id": 13418507,
"node_id": "MDQ6VXNlcjEzNDE4NTA3",
"avatar_url": "https://avatars.githubusercontent.com/u/13418507?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/SamSamhuns",
"html_url": "https://github.com/SamSamhuns",
"followers_url": "https://api.github.com/users/SamSamhuns/followers",
"following_url": "https://api.github.com/users/SamSamhuns/following{/other_user}",
"gists_url": "https://api.github.com/users/SamSamhuns/gists{/gist_id}",
"starred_url": "https://api.github.com/users/SamSamhuns/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/SamSamhuns/subscriptions",
"organizations_url": "https://api.github.com/users/SamSamhuns/orgs",
"repos_url": "https://api.github.com/users/SamSamhuns/repos",
"events_url": "https://api.github.com/users/SamSamhuns/events{/privacy}",
"received_events_url": "https://api.github.com/users/SamSamhuns/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"I don't know the main problem but it looks like, it is ignoring the last directory in your case. So, create a directory called 'zzz' in the same folder as train, validation and test. if it doesn't work, create a directory called \"aaa\". It worked for me.\r\n",
"@SamSamhuns could you please try to load it with the current main-branch version of `datasets`? I suppose the problem is that it tries to get splits names from filenames in this case, ignoring directories names, but `val` wasn't in keywords at that time, but it was fixed recently in this PR https://github.com/huggingface/datasets/pull/4844. ",
"I have a similar problem.\r\nWhen I try to create `data_infos.json` using `datasets-cli test Peter.py --save_infos --all_configs` I get an error:\r\n`ValueError: Unknown split \"test\". Should be one of ['train'].`\r\n\r\nThe `data_infos.json` is created perfectly fine when I use only one split - `datasets.Split.TRAIN`\r\n\r\n@polinaeterna Could you help here please?\r\n\r\nYou can find the code here: https://huggingface.co/datasets/sberbank-ai/Peter/tree/add_splits (add_splits branch)",
"@skalinin It seems the `dataset_infos.json` of your dataset is missing the info on the test split (and `datasets-cli` doesn't ignore the cached infos at the moment, which is a known bug), so your issue is not related to this one. I think you can fix your issue by deleting all the cached `dataset_infos.json` (in the local repo and in `~/.cache/huggingface/modules`) before running the `datasets-cli test` command. Let us know if that doesn't help, and I can try to generate it myself.",
"This code indeed behaves as expected on `main`. But suppose the `val_234.png` is renamed to some other value not containing one of [these](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L31) keywords, in that case, this issue becomes relevant again because the real cause of it is the order in which we check the predefined split patterns to assign data files to each split - first we assign data files based on filenames, and only if this fails meaning not a single split found (`val` is not recognized here in the older versions of `datasets`, which results in an empty `validation` split), do we assign based on directory names.\r\n\r\n@polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if `data_dir` is specified (or if `load_dataset(data_dir)` is called)? ",
"> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nyes that makes sense !",
"Looks like the `val/validation` dir name issue is fixed with the current main-branch version of the `datasets` repository. \r\n\r\n> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nI agree with this as well. I would expect higher precedence to the directory name over the file name. Right now if I place a single file named `train_00001.jpg` under the `validation` directory, `load_dataset` cannot find the validation split.",
"Thanks for the reply\r\n\r\nI've created a separate [issue](https://github.com/huggingface/datasets/issues/4982#issue-1375604693) for my problem.",
"> @polinaeterna @lhoestq Perhaps one way to fix this would be to swap the [order](https://github.com/huggingface/datasets/blob/38c8c725f3996ff1ff03f6fd461aa6d645321034/src/datasets/data_files.py#L78-L79) of the patterns if data_dir is specified (or if load_dataset(data_dir) is called)?\r\n\r\nSounds good to me! opened a PR: https://github.com/huggingface/datasets/pull/4985",
"Hi there @polinaeterna @mariosasko ! I have installed 5.2.3.dev0, which should have this fix. Unfortunately, I am still getting the error:\r\n`ValueError: Unknown split \"validation\". Should be one of ['train'].` When I call `load_dataset(\"csv\", data_files=files, split=split)`\r\n\r\nAny help would be greatly appreciated!",
"hi @shaneacton ! could you please show your dataset structure?",
"Hi there @polinaeterna . My local CSV files are stored as follows:\r\nbinding:\r\n---------- tune.csv\r\n---------- public_data:\r\n--------------------------- train.csv\r\n\r\n`self.list_shards(split)` sucessfully finds the relevant data files",
"@shaneacton do you have `validation.csv`/`val.csv`/`valid.csv`/`dev.csv` file in your data folder? I can't find it in the structure you provided",
"@polinaeterna no, does the name of the split need to match the name of the file exactly?\r\n\r\nBut my train file is not actually named 'train.py' its called 'XXXXXXXXX_train_XXXXXXXX.csv'\r\nAnd the code works fine for train, but fails for validation.\r\nDoes the file name need to _contain_ the split name?",
"@shaneacton what files do you expect to be included in \"validation\" split? yes, you should somehow indicate that a file belongs to a certain split - either by including split name in a filename or by putting it into a folder with split name, you can also check out [this documentation page](https://huggingface.co/docs/datasets/main/en/repository_structure) :)\r\nby default all the data goes to a single `train` split",
"@polinaeterna I have specified my train/test/tune files via the `split_to_filepattern` argument when initialising my `FileDataSource` class. This is how `list_shards` is able to find the right files.\r\nAfter your last message, I have tried renaminig my data files to simply `train.csv` and `validation.csv`, however I am still getting the same error: `Unknown split \"validation\". Should be one of ['train']`",
"@polinaeterna I have solved the issue. The solution was to call:\r\n`load_dataset(\"csv\", data_files={split: files}, split=split)`"
] | 2022-08-25T12:11:00
| 2022-10-06T17:49:28
| 2022-09-29T08:07:50
|
NONE
| null |
## Describe the bug
`datasets.load_dataset` raises `ValueError: Unknown split "validation". Should be one of ['train', 'test'].` when running `load_dataset(local_data_dir_path, split="validation")`, even though the `validation` sub-directory exists in the local data path.
The data directories are as follows and attached to this issue:
```
test_data1
|_ train
|_ 1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ 234.png
|_ metadata.jsonl
...
test_data2
|_ train
|_ train_1012.png
|_ metadata.jsonl
...
|_ test
...
|_ validation
|_ val_234.png
|_ metadata.jsonl
...
```
They contain the same image files and `metadata.jsonl`, but the images in `test_data2` have the split names prepended, i.e.
`train_1012.png, val_234.png`, whereas the images in `test_data1` do not have the split names prepended, i.e. `1012.png, 234.png`.
I actually saw in another issue that `val` was not recognized as a split name, but here I would expect the files to take their split from the parent directory name, i.e. `val_234.png` should become part of the validation split.
## Steps to reproduce the bug
```python
import datasets
datasets.logging.set_verbosity_error()
from datasets import load_dataset, get_dataset_split_names
# the following only finds train, validation and test splits correctly
path = "./test_data1"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
# the following only finds train and test splits
path = "./test_data2"
print("######################", get_dataset_split_names(path), "######################")
dataset_list = []
for spt in ["train", "test", "validation"]:
dataset = load_dataset(path, split=spt)
dataset_list.append(dataset)
```
## Expected results
```
###################### ['train', 'test', 'validation'] ######################
###################### ['train', 'test', 'validation'] ######################
```
## Actual results
```
Traceback (most recent call last):
File "test_data_loader.py", line 11, in <module>
dataset = load_dataset(path, split=spt)
File "/home/venv/lib/python3.8/site-packages/datasets/load.py", line 1758, in load_dataset
ds = builder_instance.as_dataset(split=split, ignore_verifications=ignore_verifications, in_memory=keep_in_memory)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 893, in as_dataset
datasets = map_nested(
File "/home/venv/lib/python3.8/site-packages/datasets/utils/py_utils.py", line 385, in map_nested
return function(data_struct)
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 924, in _build_single_dataset
ds = self._as_dataset(
File "/home/venv/lib/python3.8/site-packages/datasets/builder.py", line 993, in _as_dataset
dataset_kwargs = ArrowReader(self._cache_dir, self.info).read(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 211, in read
files = self.get_file_instructions(name, instructions, split_infos)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 184, in get_file_instructions
file_instructions = make_file_instructions(
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 107, in make_file_instructions
absolute_instructions = instruction.to_absolute(name2len)
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in to_absolute
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 616, in <listcomp>
return [_rel_to_abs_instr(rel_instr, name2len) for rel_instr in self._relative_instructions]
File "/home/venv/lib/python3.8/site-packages/datasets/arrow_reader.py", line 433, in _rel_to_abs_instr
raise ValueError(f'Unknown split "{split}". Should be one of {list(name2len)}.')
ValueError: Unknown split "validation". Should be one of ['train', 'test'].
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version:
- Platform: Linux Ubuntu 18.04
- Python version: 3.8.12
- PyArrow version: 9.0.0
Data files
[test_data1.zip](https://github.com/huggingface/datasets/files/9424463/test_data1.zip)
[test_data2.zip](https://github.com/huggingface/datasets/files/9424468/test_data2.zip)
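As a side note, the resolution in the comment thread above suggests a workaround that sidesteps filename-based split inference entirely: pass an explicit split-to-files mapping to `load_dataset`. A minimal sketch, assuming the `test_data2` layout shown above:
```python
from datasets import load_dataset

# Map each split to its own directory explicitly instead of relying on
# filename-based split inference (which misses the "val_*" files).
data_files = {
    "train": "test_data2/train/**",
    "test": "test_data2/test/**",
    "validation": "test_data2/validation/**",
}
dataset = load_dataset("imagefolder", data_files=data_files)
print(dataset)  # all three splits should now be found
```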
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4895/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4895/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4894
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4894/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4894/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4894/events
|
https://github.com/huggingface/datasets/pull/4894
| 1,350,667,270
|
PR_kwDODunzps49yIvr
| 4,894
|
Add citation information to makhzan dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-25T10:16:40
| 2022-08-30T06:21:54
| 2022-08-25T13:19:41
|
MEMBER
| null |
This PR adds the citation information to the `makhzan` dataset, once they have replied to our request for that information:
- https://github.com/zeerakahmed/makhzan/issues/43
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4894/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4894/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4894",
"html_url": "https://github.com/huggingface/datasets/pull/4894",
"diff_url": "https://github.com/huggingface/datasets/pull/4894.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4894.patch",
"merged_at": "2022-08-25T13:19:41"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4893
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4893/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4893/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4893/events
|
https://github.com/huggingface/datasets/issues/4893
| 1,350,655,674
|
I_kwDODunzps5QgV66
| 4,893
|
Oversampling strategy for iterable datasets in `interleave_datasets`
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3761482852,
"node_id": "LA_kwDODunzps7gM6xk",
"url": "https://api.github.com/repos/huggingface/datasets/labels/good%20second%20issue",
"name": "good second issue",
"color": "BDE59C",
"default": false,
"description": "Issues a bit more difficult than \"Good First\" issues"
}
] |
closed
| false
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @lhoestq,\r\nI plunged into the code and it should be manageable for me to work on it!\r\n#take\r\n\r\nAlso, setting `d1`, `d2` and `d3` as you did raised a `SyntaxError: 'yield' inside list comprehension` for me, on Python 3.8.10.\r\nThe following snippet works for me though:\r\n```\r\nd1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\nd2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\nd3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n```\r\n\r\n",
"Great @ylacombe thanks ! I'm assigning you this issue",
"Hi @ylacombe :) Is there anything I can do to help ? Feel free to ping me if you have any question :)",
"Hi @lhoestq,\r\n\r\nI actually have already wrote the code last time [on this commit](https://github.com/ylacombe/datasets/commit/84769db97facc78a33ec53f7b1b395951e1804df) but I still have to change the docs and write some tests though. I'm working on it.\r\n\r\nHowever, I still your advice on one matter. \r\nIn #4831, when using a `Dataset` list with probabilities, I had change the original behavior so that it stops as soon as one or all datasets are out of samples. By nature, this behavior can't be applied with an `IterableDataset` because one only knows an iterable dataset is out of sample when receiving a StopIteration error after calling the iterator once again. \r\nTo sum up, as it is right know, the behavior is not consistent with an `IterableDataset` list or a `Dataset` list, when using probabilities.\r\nTo be honest, I think that the current behavior with a `Dataset` list is desirable and avoid having too many samples, so I would recommand keeping that as it is, but I can understand the desire to have the same behavior for both classes. \r\nWhat do you think ? Please let me know if you need more details.\r\n\r\n\r\nEDIT:\r\nHere is an example:\r\n```\r\n>>> from tests.test_iterable_dataset import *\r\n>>> d1 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [0, 1, 2]])), {}))\r\n>>> d2 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [10, 11, 12, 13]])), {}))\r\n>>> d3 = IterableDataset(ExamplesIterable((lambda: (yield from [(i, {\"a\": i}) for i in [20, 21, 22, 23, 24]])), {}))\r\n>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)\r\n>>> [x[\"a\"] for x in dataset]\r\n[10, 0, 11, 1, 2, 20, 12, 13]\r\n>>> from tests.test_arrow_dataset import *\r\n>>> d1 = Dataset.from_dict({\"a\": [0, 1, 2]})\r\n>>> d2 = Dataset.from_dict({\"a\": [10, 11, 12]})\r\n>>> d3 = Dataset.from_dict({\"a\": [20, 21, 22]})\r\n>>> interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)[\"a\"]\r\n[10, 0, 11, 1, 2]\r\n[10, 0, 11, 1, 2]\r\n```\r\n ",
"Hi ! Awesome :) \r\n\r\nMaybe you can pre-load the next sample to know if the dataset is empty or not ?\r\nThis way it should be possible to have the same behavior for `IterableDataset`",
"Hi @ylacombe let us know if we can help with anything :)",
"Hi @lhoestq, I've finally made some advances in the matter. I've modified the `IterableDataset` behavior so that it aligns with the `Dataset` behavior as we have discussed. The documentation has been dealt with too. \r\nIt works as expected on my examples. However I'm having trouble figuring out how to test `interleave_datasets` on `test_iterable_datasets.py` as I have never worked with pytest. Could you help me on that or give me some indications? \r\n",
"Thanks @ylacombe :)\r\n\r\nUsing the `pytest` command, you can run all the functions in a python file that start with \"test_*\" and make sure they return not errors:\r\n```\r\npytest tests/test_iterable_dataset.py\r\n```\r\n\r\nIn our case it can be nice to define a `test_interleave_datasets_with_oversampling` function. This function can contain the code example that we mentioned earlier in this github issue to make sure it works as expected.",
"Resolved via #5036."
] | 2022-08-25T10:06:55
| 2022-10-03T12:37:46
| 2022-10-03T12:37:46
|
MEMBER
| null |
In https://github.com/huggingface/datasets/pull/4831 @ylacombe added an oversampling strategy for `interleave_datasets`. However right now it doesn't work for datasets loaded using `load_dataset(..., streaming=True)`, which are `IterableDataset` objects.
It would be nice to expand `interleave_datasets` for iterable datasets as well to support this oversampling strategy
```python
>>> from datasets.iterable_dataset import IterableDataset, ExamplesIterable
>>> d1 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [0, 1, 2]], {}))
>>> d2 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [10, 11, 12, 13]], {}))
>>> d3 = IterableDataset(ExamplesIterable(lambda: [(yield i, {"a": i}) for i in [20, 21, 22, 23, 24]], {}))
>>> dataset = interleave_datasets([d1, d2, d3]) # is supported
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted") # is not supported yet
>>> [x["a"] for x in dataset]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
```
This can be implemented by adding the strategy to both `CyclingMultiSourcesExamplesIterable` and `RandomlyCyclingMultiSourcesExamplesIterable` used in `_interleave_iterable_datasets` in `iterable_dataset.py`
I would be happy to share some guidance if anyone would like to give it a shot :)
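For illustration, here is a plain-Python sketch of the `all_exhausted` cycling logic described above. It operates on ordinary lists rather than the actual `CyclingMultiSourcesExamplesIterable` classes, and its exact stopping point is only an approximation of what the library ended up implementing:
```python
from itertools import cycle

def interleave_all_exhausted(*sources):
    """Cycle over the sources in order, restarting any exhausted source,
    and stop once every source has been exhausted at least once.
    Assumes non-empty, re-iterable sources (e.g. lists)."""
    iterators = [iter(source) for source in sources]
    exhausted = [False] * len(sources)
    for i in cycle(range(len(sources))):
        try:
            yield next(iterators[i])
        except StopIteration:
            exhausted[i] = True
            if all(exhausted):
                return
            iterators[i] = iter(sources[i])  # oversample: restart this source
            yield next(iterators[i])

print(list(interleave_all_exhausted([0, 1, 2], [10, 11, 12, 13], [20, 21, 22, 23, 24])))
# -> [0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 10, 24, 2, 11]
```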
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4893/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4893/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4892
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4892/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4892/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4892/events
|
https://github.com/huggingface/datasets/pull/4892
| 1,350,636,499
|
PR_kwDODunzps49yCD3
| 4,892
|
Add citation to ro_sts and ro_sts_parallel datasets
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-25T09:51:06
| 2022-08-25T10:49:56
| 2022-08-25T10:49:56
|
MEMBER
| null |
This PR adds the citation information to `ro_sts_parallel` and `ro_sts_parallel` datasets, once they have replied our request for that information:
- https://github.com/dumitrescustefan/RO-STS/issues/4
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4892/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4892/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4892",
"html_url": "https://github.com/huggingface/datasets/pull/4892",
"diff_url": "https://github.com/huggingface/datasets/pull/4892.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4892.patch",
"merged_at": "2022-08-25T10:49:56"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4891
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4891/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4891/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4891/events
|
https://github.com/huggingface/datasets/pull/4891
| 1,350,589,813
|
PR_kwDODunzps49x382
| 4,891
|
Fix missing tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-25T09:14:17
| 2022-09-22T14:39:02
| 2022-08-25T13:43:34
|
MEMBER
| null |
Fix missing tags in dataset cards:
- aslg_pc12
- librispeech_lm
- mwsc
- opus100
- qasc
- quail
- squadshifts
- winograd_wsc
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
Related to:
- #4833
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4891/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4891/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4891",
"html_url": "https://github.com/huggingface/datasets/pull/4891",
"diff_url": "https://github.com/huggingface/datasets/pull/4891.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4891.patch",
"merged_at": "2022-08-25T13:43:34"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4890
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4890/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4890/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4890/events
|
https://github.com/huggingface/datasets/pull/4890
| 1,350,578,029
|
PR_kwDODunzps49x1YC
| 4,890
|
add Dataset.from_list
|
{
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-25T09:05:58
| 2022-09-02T10:22:59
| 2022-09-02T10:20:33
|
CONTRIBUTOR
| null |
As discussed in #4885
I initially added a features-filling step at the end, thinking it was necessary, as is done in `from_dict`.
However, it seems the constructor already takes care of filling `info` when it is empty:
```
if info.features is None:
info.features = Features(
{
col: generate_from_arrow_type(coldata.type)
for col, coldata in zip(pa_table.column_names, pa_table.columns)
}
)
```
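For context, a small sketch of the PyArrow primitive that the linked discussion suggested as the natural backend for `from_list` (the column names and values below are made up):
```python
import pyarrow as pa

# Table.from_pylist builds an Arrow table straight from a list of row dicts,
# inferring the column types, which is what lets the Dataset constructor
# fill `info.features` as shown above.
rows = [{"text": "hello", "label": 0}, {"text": "world", "label": 1}]
table = pa.Table.from_pylist(rows)
print(table.schema)  # text: string, label: int64
```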
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4890/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4890/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4890",
"html_url": "https://github.com/huggingface/datasets/pull/4890",
"diff_url": "https://github.com/huggingface/datasets/pull/4890.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4890.patch",
"merged_at": "2022-09-02T10:20:33"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4889
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4889/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4889/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4889/events
|
https://github.com/huggingface/datasets/issues/4889
| 1,349,758,525
|
I_kwDODunzps5Qc649
| 4,889
|
torchaudio 0.11.0 yields different results than torchaudio 0.12.1 when loading MP3
|
{
"login": "patrickvonplaten",
"id": 23423619,
"node_id": "MDQ6VXNlcjIzNDIzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/23423619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/patrickvonplaten",
"html_url": "https://github.com/patrickvonplaten",
"followers_url": "https://api.github.com/users/patrickvonplaten/followers",
"following_url": "https://api.github.com/users/patrickvonplaten/following{/other_user}",
"gists_url": "https://api.github.com/users/patrickvonplaten/gists{/gist_id}",
"starred_url": "https://api.github.com/users/patrickvonplaten/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/patrickvonplaten/subscriptions",
"organizations_url": "https://api.github.com/users/patrickvonplaten/orgs",
"repos_url": "https://api.github.com/users/patrickvonplaten/repos",
"events_url": "https://api.github.com/users/patrickvonplaten/events{/privacy}",
"received_events_url": "https://api.github.com/users/patrickvonplaten/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Maybe we can just pass this along to torchaudio @lhoestq @albertvillanova ? It be great if you could investigate if the errors lies in datasets or in torchaudio.",
"torchaudio did a change in [0.12](https://github.com/pytorch/audio/releases/tag/v0.12.0) on MP3 decoding (which affects common voice):\r\n> MP3 decoding is now handled by FFmpeg in sox_io backend. (https://github.com/pytorch/audio/pull/2419, https://github.com/pytorch/audio/pull/2428)\r\n> - FFmpeg is now used as fallback in sox_io backend, and now MP3 decoding is handled by FFmpeg. To load MP3 audio with torchaudio.load, please install a compatible version of FFmpeg (Version 4 when using an official binary distribution).\r\n> - Note that, whereas the previous MP3 decoding scheme pads the output audio, the new scheme does not. As a consequence, the new version returns shorter audio tensors.",
"Do we have a solution for this now? Should we just upgrade to `torchaudio 0.12.0` then? ",
"`datasets` supports `torchaudio` 0.12 if you have an environment that supports reading MP3 with `torchaudio`, i.e. if you have `ffmpeg>=4`",
"Closing as we no longer use `torchaudio` for decoding."
] | 2022-08-24T16:54:43
| 2023-03-02T15:33:05
| 2023-03-02T15:33:04
|
MEMBER
| null |
## Describe the bug
When loading Common Voice with torchaudio 0.11.0, the results are different from those with 0.12.1, which leads to problems in transformers; see: https://github.com/huggingface/transformers/pull/18749
## Steps to reproduce the bug
If you run the following code once with `torchaudio==0.11.0+cu102` and once with `torchaudio==0.12.1+cu102`, you can see that the tensors differ. This is a pretty big breaking change and makes some integration tests fail in Transformers.
```python
#!/usr/bin/env python3
from datasets import load_dataset
import datasets
import numpy as np
import torch
import torchaudio
print("torch vesion", torch.__version__)
print("torchaudio vesion", torchaudio.__version__)
save_audio = True
load_audios = False
if save_audio:
ds = load_dataset("common_voice", "en", split="train", streaming=True)
ds = ds.cast_column("audio", datasets.Audio(sampling_rate=16_000))
ds_iter = iter(ds)
sample = next(ds_iter)
np.save(f"audio_sample_{torch.__version__}", sample["audio"]["array"])
print(sample["audio"]["array"])
if load_audios:
array_torch_11 = np.load("/home/patrick/audio_sample_1.11.0+cu102.npy")
print("Array 11 Shape", array_torch_11.shape)
print("Array 11 abs sum", np.sum(np.abs(array_torch_11)))
array_torch_12 = np.load("/home/patrick/audio_sample_1.12.1+cu102.npy")
print("Array 12 Shape", array_torch_12.shape)
print("Array 12 abs sum", np.sum(np.abs(array_torch_12)))
```
Having saved the tensors the print output yields:
```
torch version 1.12.1+cu102
torchaudio version 0.12.1+cu102
Array 11 Shape (122880,)
Array 11 abs sum 1396.4988
Array 12 Shape (123264,)
Array 12 abs sum 1396.5193
```
## Expected results
torchaudio 0.11.0 and 0.12.1 should yield the same results.
## Actual results
See above.
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.1.1.dev0
- Platform: Linux-5.18.10-76051810-generic-x86_64-with-glibc2.34
- Python version: 3.9.7
- PyArrow version: 6.0.1
- Pandas version: 1.4.2
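As a follow-up, a quick diagnostic sketch for quantifying the difference, assuming the two `.npy` files written by the script above are present locally:
```python
import numpy as np

a11 = np.load("audio_sample_1.11.0+cu102.npy")
a12 = np.load("audio_sample_1.12.1+cu102.npy")

# Rough comparison over the common prefix; the two decoders may pad or
# offset the samples differently, so this is only indicative.
n = min(len(a11), len(a12))
print("length difference:", abs(len(a11) - len(a12)))
print("max abs diff over common prefix:", np.max(np.abs(a11[:n] - a12[:n])))
```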
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4889/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4889/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4888
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4888/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4888/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4888/events
|
https://github.com/huggingface/datasets/issues/4888
| 1,349,447,521
|
I_kwDODunzps5Qbu9h
| 4,888
|
Dataset Viewer issue for subjqa
|
{
"login": "lewtun",
"id": 26859204,
"node_id": "MDQ6VXNlcjI2ODU5MjA0",
"avatar_url": "https://avatars.githubusercontent.com/u/26859204?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lewtun",
"html_url": "https://github.com/lewtun",
"followers_url": "https://api.github.com/users/lewtun/followers",
"following_url": "https://api.github.com/users/lewtun/following{/other_user}",
"gists_url": "https://api.github.com/users/lewtun/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lewtun/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lewtun/subscriptions",
"organizations_url": "https://api.github.com/users/lewtun/orgs",
"repos_url": "https://api.github.com/users/lewtun/repos",
"events_url": "https://api.github.com/users/lewtun/events{/privacy}",
"received_events_url": "https://api.github.com/users/lewtun/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"It's a bug in the viewer, thanks for reporting it. We're hoping to update to a new version in the next few days which should fix it.",
"Fixed \r\n\r\nhttps://huggingface.co/datasets/subjqa\r\n\r\n<img width=\"1040\" alt=\"Capture d’écran 2022-09-08 à 10 23 26\" src=\"https://user-images.githubusercontent.com/1676121/189073210-2a57ff88-8bb1-44bd-851e-0e75473cea3f.png\">\r\n"
] | 2022-08-24T13:26:20
| 2022-09-08T08:23:42
| 2022-09-08T08:23:42
|
MEMBER
| null |
### Link
https://huggingface.co/datasets/subjqa
### Description
Getting the following error for this dataset:
```
Status code: 500
Exception: Status500Error
Message: 2 or more items returned, instead of 1
```
Not sure what's causing it though 🤔
### Owner
Yes
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4888/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4888/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4887
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4887/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4887/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4887/events
|
https://github.com/huggingface/datasets/pull/4887
| 1,349,426,693
|
PR_kwDODunzps49t_PM
| 4,887
|
Add "cc-by-nc-sa-2.0" to list of licenses
|
{
"login": "osanseviero",
"id": 7246357,
"node_id": "MDQ6VXNlcjcyNDYzNTc=",
"avatar_url": "https://avatars.githubusercontent.com/u/7246357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/osanseviero",
"html_url": "https://github.com/osanseviero",
"followers_url": "https://api.github.com/users/osanseviero/followers",
"following_url": "https://api.github.com/users/osanseviero/following{/other_user}",
"gists_url": "https://api.github.com/users/osanseviero/gists{/gist_id}",
"starred_url": "https://api.github.com/users/osanseviero/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/osanseviero/subscriptions",
"organizations_url": "https://api.github.com/users/osanseviero/orgs",
"repos_url": "https://api.github.com/users/osanseviero/repos",
"events_url": "https://api.github.com/users/osanseviero/events{/privacy}",
"received_events_url": "https://api.github.com/users/osanseviero/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-24T13:11:49
| 2022-08-26T10:31:32
| 2022-08-26T10:29:20
|
MEMBER
| null |
Datasets side of https://github.com/huggingface/hub-docs/pull/285
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4887/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4887/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4887",
"html_url": "https://github.com/huggingface/datasets/pull/4887",
"diff_url": "https://github.com/huggingface/datasets/pull/4887.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4887.patch",
"merged_at": "2022-08-26T10:29:20"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4885
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4885/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4885/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4885/events
|
https://github.com/huggingface/datasets/issues/4885
| 1,349,181,448
|
I_kwDODunzps5QauAI
| 4,885
|
Create dataset from list of dicts
|
{
"login": "sanderland",
"id": 48946947,
"node_id": "MDQ6VXNlcjQ4OTQ2OTQ3",
"avatar_url": "https://avatars.githubusercontent.com/u/48946947?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/sanderland",
"html_url": "https://github.com/sanderland",
"followers_url": "https://api.github.com/users/sanderland/followers",
"following_url": "https://api.github.com/users/sanderland/following{/other_user}",
"gists_url": "https://api.github.com/users/sanderland/gists{/gist_id}",
"starred_url": "https://api.github.com/users/sanderland/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/sanderland/subscriptions",
"organizations_url": "https://api.github.com/users/sanderland/orgs",
"repos_url": "https://api.github.com/users/sanderland/repos",
"events_url": "https://api.github.com/users/sanderland/events{/privacy}",
"received_events_url": "https://api.github.com/users/sanderland/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
| null |
[] | null |
[
"Hi @sanderland, thanks for your enhancement proposal.\r\n\r\nI agree with you that this would be useful.\r\n\r\nPlease note that under the hood, we use PyArrow tables as backend:\r\n- The implementation of `Dataset.from_dict` uses the PyArrow `Table.from_pydict`\r\n\r\nTherefore, I would suggest:\r\n- Implementing `Dataset.from_list` using the PyArrow `Table.from_pylist`\r\n\r\nWhat do you think?\r\nLet's see if other people have other suggestions...",
"Thanks for the quick and positive reply @albertvillanova! \r\n`from_list` seems sensible. Have opened a PR so we can discuss details there.",
"Resolved via #4890."
] | 2022-08-24T10:01:24
| 2022-09-08T16:02:52
| 2022-09-08T16:02:52
|
CONTRIBUTOR
| null |
I often find myself with data from a variety of sources, and a list of dicts is very common among these.
However, converting this to a Dataset is a little awkward, requiring either
```Dataset.from_pandas(pd.DataFrame(formatted_training_data))```
which can error out on some more exotic values, such as 2-d arrays, for reasons that are not entirely clear:
> ArrowInvalid: ('Can only convert 1-dimensional array values', 'Conversion failed for column labels with type object')
Alternatively:
```Dataset.from_dict({k: [s[k] for s in formatted_training_data] for k in formatted_training_data[0].keys()})```
Which works, but is a little ugly.
**Describe the solution you'd like**
Either `.from_dict` accepting a list of dicts, or a `.from_records` function accepting such.
I am happy to PR this; I just wanted to check that you are happy to accept it, that I haven't missed something obvious, and which of the two solutions would be preferred.
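For reference, this request was ultimately resolved by adding a `Dataset.from_list` constructor (see the comment thread above and PR #4890). A usage sketch, assuming a `datasets` release that ships it:
```python
from datasets import Dataset

formatted_training_data = [
    {"text": "good", "label": 1},
    {"text": "bad", "label": 0},
]
ds = Dataset.from_list(formatted_training_data)  # features are inferred
print(ds.features)
```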
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4885/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4885/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4884
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4884/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4884/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4884/events
|
https://github.com/huggingface/datasets/pull/4884
| 1,349,105,946
|
PR_kwDODunzps49s6Aj
| 4,884
|
Fix documentation card of math_qa dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-24T09:00:56
| 2022-08-24T11:33:17
| 2022-08-24T11:33:16
|
MEMBER
| null |
Fix documentation card of math_qa dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4884/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4884/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4884",
"html_url": "https://github.com/huggingface/datasets/pull/4884",
"diff_url": "https://github.com/huggingface/datasets/pull/4884.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4884.patch",
"merged_at": "2022-08-24T11:33:16"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4882
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4882/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4882/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4882/events
|
https://github.com/huggingface/datasets/pull/4882
| 1,348,913,665
|
PR_kwDODunzps49sRtv
| 4,882
|
Fix language tags resource file
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-24T06:06:01
| 2022-08-24T13:58:33
| 2022-08-24T13:58:30
|
MEMBER
| null |
This PR fixes/updates/adds ALL language tags from IANA (as of 2022-08-08).
This PR also removes all BCP 47 suffixes (the languages file only contains language subtags, i.e. ISO 639-1 or 639-2 codes; no script/region/variant suffixes). See:
- #4753
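For illustration, the reduction described above amounts to keeping only the primary language subtag of each BCP 47 tag; a hypothetical helper sketching it:
```python
def primary_subtag(bcp47_tag: str) -> str:
    """Keep only the primary language subtag (an ISO 639-1/639-2 code),
    dropping any script/region/variant suffixes."""
    return bcp47_tag.split("-")[0].lower()

print(primary_subtag("zh-Hant-TW"))  # -> "zh"
print(primary_subtag("en-US"))       # -> "en"
```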
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4882/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4882/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4882",
"html_url": "https://github.com/huggingface/datasets/pull/4882",
"diff_url": "https://github.com/huggingface/datasets/pull/4882.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4882.patch",
"merged_at": "2022-08-24T13:58:30"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4880
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4880/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4880/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4880/events
|
https://github.com/huggingface/datasets/pull/4880
| 1,348,452,776
|
PR_kwDODunzps49qyJr
| 4,880
|
Added names of less-studied languages
|
{
"login": "BenjaminGalliot",
"id": 23100612,
"node_id": "MDQ6VXNlcjIzMTAwNjEy",
"avatar_url": "https://avatars.githubusercontent.com/u/23100612?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/BenjaminGalliot",
"html_url": "https://github.com/BenjaminGalliot",
"followers_url": "https://api.github.com/users/BenjaminGalliot/followers",
"following_url": "https://api.github.com/users/BenjaminGalliot/following{/other_user}",
"gists_url": "https://api.github.com/users/BenjaminGalliot/gists{/gist_id}",
"starred_url": "https://api.github.com/users/BenjaminGalliot/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/BenjaminGalliot/subscriptions",
"organizations_url": "https://api.github.com/users/BenjaminGalliot/orgs",
"repos_url": "https://api.github.com/users/BenjaminGalliot/repos",
"events_url": "https://api.github.com/users/BenjaminGalliot/events{/privacy}",
"received_events_url": "https://api.github.com/users/BenjaminGalliot/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-23T19:32:38
| 2022-08-24T12:52:46
| 2022-08-24T12:52:46
|
CONTRIBUTOR
| null |
Added names of less-studied languages (nru – Narua and jya – Japhug) for existing datasets.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4880/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4880/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4880",
"html_url": "https://github.com/huggingface/datasets/pull/4880",
"diff_url": "https://github.com/huggingface/datasets/pull/4880.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4880.patch",
"merged_at": "2022-08-24T12:52:46"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4879
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4879/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4879/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4879/events
|
https://github.com/huggingface/datasets/pull/4879
| 1,348,346,407
|
PR_kwDODunzps49qbOl
| 4,879
|
Fix Citation Information section in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-23T18:06:43
| 2022-09-27T14:04:45
| 2022-08-24T04:09:07
|
MEMBER
| null |
Fix Citation Information section in dataset cards:
- cc_news
- conllpp
- datacommons_factcheck
- gnad10
- id_panl_bppt
- jigsaw_toxicity_pred
- kinnews_kirnews
- kor_sarcasm
- makhzan
- reasoning_bg
- ro_sts
- ro_sts_parallel
- sanskrit_classic
- telugu_news
- thaiqa_squad
- wiki_movies
This PR partially fixes the Citation Information section in dataset cards. Subsequent PRs will follow to complete this task.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4879/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4879/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4879",
"html_url": "https://github.com/huggingface/datasets/pull/4879",
"diff_url": "https://github.com/huggingface/datasets/pull/4879.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4879.patch",
"merged_at": "2022-08-24T04:09:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4878
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4878/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4878/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4878/events
|
https://github.com/huggingface/datasets/issues/4878
| 1,348,270,141
|
I_kwDODunzps5QXPg9
| 4,878
|
[not really a bug] `identical_ok` is deprecated in huggingface-hub's `upload_file`
|
{
"login": "severo",
"id": 1676121,
"node_id": "MDQ6VXNlcjE2NzYxMjE=",
"avatar_url": "https://avatars.githubusercontent.com/u/1676121?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/severo",
"html_url": "https://github.com/severo",
"followers_url": "https://api.github.com/users/severo/followers",
"following_url": "https://api.github.com/users/severo/following{/other_user}",
"gists_url": "https://api.github.com/users/severo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/severo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/severo/subscriptions",
"organizations_url": "https://api.github.com/users/severo/orgs",
"repos_url": "https://api.github.com/users/severo/repos",
"events_url": "https://api.github.com/users/severo/events{/privacy}",
"received_events_url": "https://api.github.com/users/severo/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892884,
"node_id": "MDU6TGFiZWwxOTM1ODkyODg0",
"url": "https://api.github.com/repos/huggingface/datasets/labels/help%20wanted",
"name": "help wanted",
"color": "008672",
"default": true,
"description": "Extra attention is needed"
},
{
"id": 1935892912,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEy",
"url": "https://api.github.com/repos/huggingface/datasets/labels/question",
"name": "question",
"color": "d876e3",
"default": true,
"description": "Further information is requested"
}
] |
closed
| false
| null |
[] | null |
[
"Resolved via https://github.com/huggingface/datasets/pull/4937."
] | 2022-08-23T17:09:55
| 2022-09-13T14:00:06
| 2022-09-13T14:00:05
|
CONTRIBUTOR
| null |
In the huggingface-hub dependency, the `identical_ok` argument has no effect in `upload_file` (and it will be removed soon).
See
https://github.com/huggingface/huggingface_hub/blob/43499582b19df1ed081a5b2bd7a364e9cacdc91d/src/huggingface_hub/hf_api.py#L2164-L2169
It's used here:
https://github.com/huggingface/datasets/blob/fcfcc951a73efbc677f9def9a8707d0af93d5890/src/datasets/dataset_dict.py#L1373-L1381
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4354-L4362
https://github.com/huggingface/datasets/blob/fdcb8b144ce3ef241410281e125bd03e87b8caa1/src/datasets/arrow_dataset.py#L4197-L4213
We should remove it.
Maybe the third code sample has an unexpected behavior since it uses the non-default value `identical_ok = False`, but the argument is ignored.
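For reference, a sketch of the updated call shape once the argument is dropped (the repo id and file names below are placeholders); per the linked huggingface-hub code, uploading identical content is already a no-op server-side, so the flag carries no information:
```python
from huggingface_hub import HfApi

api = HfApi()
# No `identical_ok` argument: re-uploading identical content is a no-op.
api.upload_file(
    path_or_fileobj="README.md",
    path_in_repo="README.md",
    repo_id="user/my-dataset",
    repo_type="dataset",
)
```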
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4878/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4878/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4877
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4877/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4877/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4877/events
|
https://github.com/huggingface/datasets/pull/4877
| 1,348,246,755
|
PR_kwDODunzps49qF-w
| 4,877
|
Fix documentation card of covid_qa_castorini dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-23T16:52:33
| 2022-08-23T18:05:01
| 2022-08-23T18:05:00
|
MEMBER
| null |
Fix documentation card of covid_qa_castorini dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4877/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4877/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4877",
"html_url": "https://github.com/huggingface/datasets/pull/4877",
"diff_url": "https://github.com/huggingface/datasets/pull/4877.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4877.patch",
"merged_at": "2022-08-23T18:05:00"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4876
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4876/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4876/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4876/events
|
https://github.com/huggingface/datasets/issues/4876
| 1,348,202,678
|
I_kwDODunzps5QW_C2
| 4,876
|
Move DatasetInfo from `datasets_infos.json` to the YAML tags in `README.md`
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"also @osanseviero @Pierrci @SBrandeis potentially",
"Love this in principle 🚀 \r\n\r\nLet's keep in mind users might rely on `dataset_infos.json` already.\r\n\r\nI'm not convinced by the two-syntax solution, wouldn't it be simpler to have only one syntax with a `default` config for datasets with only one config? ie, always having the `configs` field. This makes parsing the metadata easier IMO.\r\n\r\nMight also be good to wrap the tags under a `datasets_info` tag as follows:\r\n\r\n```yaml\r\ndescription: ...\r\ncitation: ...\r\ndataset_infos:\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n configs:\r\n - ...\r\n[...]\r\n```\r\n\r\nLet's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.",
"> Let's keep in mind users might rely on dataset_infos.json already.\r\n\r\nYea we'll full full backward compatibility\r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\nThe main things that may use or ingest these data IMO are:\r\n- users in the UI or IDE\r\n- `datasets` to populate `DatasetInfo` python object\r\n- moon landing which is already parsing YAML\r\n\r\nAm I missing something ? If not I think it's ok to use YAML\r\n\r\n> Might also be good to wrap the tags under a datasets_info tag as follows:\r\n\r\nMaybe one single syntax like this then ?\r\n```yaml\r\ndataset_infos:\r\n- config: unlabeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n- config: labeled\r\n download_size: 35142551\r\n dataset_size: 89789763\r\n version: 1.0.0\r\n splits:\r\n - name: train\r\n num_examples: 100\r\n features:\r\n - name: text\r\n dtype: string\r\n - name: label\r\n dtype: ClassLabel\r\n names:\r\n - negative\r\n - positive\r\n```\r\nand when you have only one config\r\n```yaml\r\ndataset_infos:\r\n- config: default\r\n splits:\r\n - name: train\r\n num_examples: 10000\r\n features:\r\n - name: text\r\n dtype: string\r\n```",
"love the idea, and the trend in general to move more things (like tasks) to a single place (YAML).\r\n\r\nalso, if you browse files on a dataset's page (in \"Files and versions\"), raw `README.md` files looks nice and readable, while `.json` files are just one long line that users need to scroll. \r\n\r\n> Let's also keep in mind that extracting YAML metadata from a markdown readme is a bit more fastidious for users than just parsing a JSON file.\r\n\r\ndo users often parse `datasets_infos.json` file themselves? ",
"> do users often parse datasets_infos.json file themselves?\r\n\r\nNot AFAIK, but I'm sure there should be a few users.\r\nUsers that access these info via the `DatasetInfo` from `datasets` won't see the change though e.g.\r\n```python\r\n>> from datasets import get_datasets_infos\r\n>>> get_datasets_infos(\"squad\")\r\n{'plain_text': DatasetInfo(description='Stanford Question Answering Dataset...\r\n```",
"> Maybe one single syntax like this then ?\r\n\r\nLGTM!\r\n\r\n> The main things that may use or ingest these data IMO are:\r\n> - users in the UI or IDE\r\n> - datasets to populate DatasetInfo python object\r\n> - moon landing which is already parsing YAML\r\n\r\nFair point!\r\n\r\nHaving dataset info in the README's YAML is great for API / `huggingface_hub` consumers as well as it will be inserted in the `cardData` field out of the box 🔥 \r\n",
"Very supportive of this!\r\n\r\nNesting an array of configs inside `dataset_infos: ` sounds good to me. One small tweak is that `config: default` can be optional for the default config (which can be the first one by convention)\r\n\r\nWe'll be able to implement metadata validation on the Hub side so we ensure that those metadata are always in the right format (maybe for @coyotte508 ? cc @Pierrci). From a quick glance the `features` might be the harder part to validate here, any doc will be welcome.\r\n\r\n### Other high-level points:\r\n- as we move from mostly academic datasets to *all* datasets (which include the data inside the repos), my intuition is that more and more datasets (Hub-stored) are going to be **single-config**\r\n- similarly, less and less datasets will have a loading script, **just the data + some metadata**\r\n- to lower the barrier to entry to contribution, in the long term users shouldn't need to compute/update this data via a command line. It could be filled automatically on the Hub through a \"bot\" inside Discussions & Pull requests for instance.",
"re: `config: default`\r\n\r\nNote also that the default config is not named `default`, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is `nbtpj--bionlp2021SAS` (which is awful)",
"> Note also that the default config is not named default, afaiu, but create from the repo name, eg: https://huggingface.co/datasets/nbtpj/bionlp2021SAS default config is nbtpj--bionlp2021SAS (which is awful)\r\n\r\nWe can change this to `default` I think or something else",
"> From a quick glance the features might be the harder part to validate here, any doc will be welcome.\r\n\r\nI dug into features validation, see:\r\n\r\n- the OpenAPI spec: https://github.com/huggingface/datasets-server/blob/main/chart/static-files/openapi.json#L460-L697\r\n- the node.js code: https://github.com/huggingface/moon-landing/blob/upgrade-datasets-server-client/server/lib/datasets/FeatureType.ts",
"> We can change this to default I think or something else\r\n\r\nI created https://github.com/huggingface/datasets/issues/4902 to discuss that",
"> Note also that the default config is not named `default`, afaiu, but create from the repo name\r\n\r\nin case of single-config you can even hide the config name from the UI IMO\r\n\r\n> I dug into features validation, see: the OpenAPI spec\r\n\r\nin moon-landing we use [Joi](https://joi.dev/api/) to validate metadata so we would need to generate from Joi code from the OpenAPI spec (or from somewhere else) but I guess that's doable – or just rewrite it manually, as it won't change often",
"I remember there was an ongoing discussion on this topic:\r\n- #3507\r\n\r\nI recall some of the concerns raised on that discussion:\r\n- @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627)\r\n- @severo: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776)\r\n - the metadata header might be very long, before reaching the start of the README/dataset card. \r\n - It also somewhat prevents including large strings like the checksums\r\n - two concepts are mixed in the same file (metadata and documentation). This means that if you're interested only in one of them, you still have to know how to parse the whole file. \r\n- @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: [#3507 (comment)](https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157)",
"Thanks for bringing these points up !\r\n\r\n> @lhoestq: Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets. They are using the exported dataset_infos.json files from github to get the metadata: https://github.com/huggingface/datasets/issues/3507#issuecomment-1056997627\r\n\r\nThe TFDS implementation is not super advanced, so it's ok IMO as long as we don't break all the dataset scripts. Note that users can still use `to_tf_dataset`.\r\n\r\nWe had a chance to discuss the two nexts points with @julien-c as well:\r\n\r\n> @severo: https://github.com/huggingface/datasets/issues/3507#issuecomment-1042779776\r\nthe metadata header might be very long, before reaching the start of the README/dataset card.\r\n\r\nIf we don't add the checksums we should be fine. We can also set a maximum number of supported configs in the README to keep it readable.\r\n\r\n> @severo: the future \"datasets server\" could be in charge of generating the dataset-info.json file: https://github.com/huggingface/datasets/issues/3507#issuecomment-1033752157\r\n\r\nI guess the \"HF Hub actions\" could open PRs to do the same in the YAML directly\r\n",
"Thanks for linking that similar discussion for context, @albertvillanova!"
] | 2022-08-23T16:16:41
| 2022-10-03T09:11:13
| 2022-10-03T09:11:13
|
MEMBER
| null |
Currently there are two places to find metadata for datasets:
- datasets_infos.json, which contains **per dataset config**
- description
- citation
- license
- splits and sizes
- checksums of the data files
- feature types
- and more
- YAML tags, which contain
- license
- language
- train-eval-index
- and more
It would be nice to have a single place instead. We can rely on the YAML tags more than the JSON file for consistency with models. And it would all be indexed by our back-end directly, which is nice to have.
One way would be to move everything to the YAML tags except the checksums (there can be tens of thousands of them). The description/citation is already in the dataset card, so we probably don't need them in the YAML as well; it would be redundant.
Here is an example for SQuAD:
```yaml
download_size: 35142551
dataset_size: 89789763
version: 1.0.0
splits:
- name: train
  num_examples: 87599
  num_bytes: 79317110
- name: validation
  num_examples: 10570
  num_bytes: 10472653
features:
- name: id
  dtype: string
- name: title
  dtype: string
- name: context
  dtype: string
- name: question
  dtype: string
- name: answers
  struct:
  - name: text
    list:
      dtype: string
  - name: answer_start
    list:
      dtype: int32
```
Since there is only one configuration for SQuAD, this structure is ok. For datasets with several configs, we can look at that in a second step, but IMO it would be ok to have these fields per config using another syntax:
```yaml
configs:
- config: unlabeled
  splits:
  - name: train
    num_examples: 10000
  features:
  - name: text
    dtype: string
- config: labeled
  splits:
  - name: train
    num_examples: 100
  features:
  - name: text
    dtype: string
  - name: label
    dtype: ClassLabel
    names:
    - negative
    - positive
```
So in the end you could specify a YAML tag either at the top level (for all configs) or per config in the `configs` field.
Alternatively, we could keep the config-specific stuff in `dataset_infos.json` as it is today.
Not sure yet what the best approach is here, but cc @julien-c @mariosasko @albertvillanova @polinaeterna for feedback :)
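The comments above point out that extracting YAML metadata from a markdown README is a bit more work than parsing a JSON file; a minimal sketch of that extraction (assuming PyYAML is installed and the front matter is delimited by `---` lines, as in Hub dataset cards — the function name is illustrative only):
```python
import yaml  # PyYAML, assumed installed

def read_dataset_card_metadata(path: str = "README.md") -> dict:
    """Parse the YAML front matter of a dataset card into a dict."""
    with open(path, encoding="utf-8") as f:
        text = f.read()
    if text.startswith("---"):
        # "---\n<yaml>\n---\n<markdown body>" -> keep only the <yaml> part
        _, front_matter, _ = text.split("---", 2)
        return yaml.safe_load(front_matter) or {}
    return {}
```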
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4876/reactions",
"total_count": 7,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 3,
"rocket": 0,
"eyes": 4
}
|
https://api.github.com/repos/huggingface/datasets/issues/4876/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4874
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4874/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4874/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4874/events
|
https://github.com/huggingface/datasets/pull/4874
| 1,347,618,197
|
PR_kwDODunzps49n_nI
| 4,874
|
[docs] Some tiny doc tweaks
|
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-23T09:19:40
| 2022-08-24T17:27:57
| 2022-08-24T17:27:56
|
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4874/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4874/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4874",
"html_url": "https://github.com/huggingface/datasets/pull/4874",
"diff_url": "https://github.com/huggingface/datasets/pull/4874.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4874.patch",
"merged_at": "2022-08-24T17:27:56"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4872
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4872/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4872/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4872/events
|
https://github.com/huggingface/datasets/pull/4872
| 1,347,180,765
|
PR_kwDODunzps49mjU9
| 4,872
|
Docs for creating an audio dataset
|
{
"login": "stevhliu",
"id": 59462357,
"node_id": "MDQ6VXNlcjU5NDYyMzU3",
"avatar_url": "https://avatars.githubusercontent.com/u/59462357?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/stevhliu",
"html_url": "https://github.com/stevhliu",
"followers_url": "https://api.github.com/users/stevhliu/followers",
"following_url": "https://api.github.com/users/stevhliu/following{/other_user}",
"gists_url": "https://api.github.com/users/stevhliu/gists{/gist_id}",
"starred_url": "https://api.github.com/users/stevhliu/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/stevhliu/subscriptions",
"organizations_url": "https://api.github.com/users/stevhliu/orgs",
"repos_url": "https://api.github.com/users/stevhliu/repos",
"events_url": "https://api.github.com/users/stevhliu/events{/privacy}",
"received_events_url": "https://api.github.com/users/stevhliu/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892861,
"node_id": "MDU6TGFiZWwxOTM1ODkyODYx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/documentation",
"name": "documentation",
"color": "0075ca",
"default": true,
"description": "Improvements or additions to documentation"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-08-23T01:07:09
| 2022-09-22T17:19:13
| 2022-09-21T10:27:04
|
MEMBER
| null |
This PR is a first draft of how to create audio datasets (`AudioFolder` and loading script). Feel free to let me know if there are any specificities I'm missing for this. 🙂
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4872/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4872/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4872",
"html_url": "https://github.com/huggingface/datasets/pull/4872",
"diff_url": "https://github.com/huggingface/datasets/pull/4872.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4872.patch",
"merged_at": "2022-09-21T10:27:04"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4871
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4871/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4871/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4871/events
|
https://github.com/huggingface/datasets/pull/4871
| 1,346,703,568
|
PR_kwDODunzps49k9Rm
| 4,871
|
Fix: wmt datasets - fix CWMT zh subsets
|
{
"login": "lhoestq",
"id": 42851186,
"node_id": "MDQ6VXNlcjQyODUxMTg2",
"avatar_url": "https://avatars.githubusercontent.com/u/42851186?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/lhoestq",
"html_url": "https://github.com/lhoestq",
"followers_url": "https://api.github.com/users/lhoestq/followers",
"following_url": "https://api.github.com/users/lhoestq/following{/other_user}",
"gists_url": "https://api.github.com/users/lhoestq/gists{/gist_id}",
"starred_url": "https://api.github.com/users/lhoestq/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/lhoestq/subscriptions",
"organizations_url": "https://api.github.com/users/lhoestq/orgs",
"repos_url": "https://api.github.com/users/lhoestq/repos",
"events_url": "https://api.github.com/users/lhoestq/events{/privacy}",
"received_events_url": "https://api.github.com/users/lhoestq/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-22T16:42:09
| 2022-08-23T10:00:20
| 2022-08-23T10:00:19
|
MEMBER
| null |
Fix https://github.com/huggingface/datasets/issues/4575
TODO: run `datasets-cli test`:
- [x] wmt17
- [x] wmt18
- [x] wmt19
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4871/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4871/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4871",
"html_url": "https://github.com/huggingface/datasets/pull/4871",
"diff_url": "https://github.com/huggingface/datasets/pull/4871.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4871.patch",
"merged_at": "2022-08-23T10:00:19"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4870
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4870/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4870/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4870/events
|
https://github.com/huggingface/datasets/pull/4870
| 1,346,160,498
|
PR_kwDODunzps49jGxD
| 4,870
|
audio folder check CI
|
{
"login": "polinaeterna",
"id": 16348744,
"node_id": "MDQ6VXNlcjE2MzQ4NzQ0",
"avatar_url": "https://avatars.githubusercontent.com/u/16348744?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/polinaeterna",
"html_url": "https://github.com/polinaeterna",
"followers_url": "https://api.github.com/users/polinaeterna/followers",
"following_url": "https://api.github.com/users/polinaeterna/following{/other_user}",
"gists_url": "https://api.github.com/users/polinaeterna/gists{/gist_id}",
"starred_url": "https://api.github.com/users/polinaeterna/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/polinaeterna/subscriptions",
"organizations_url": "https://api.github.com/users/polinaeterna/orgs",
"repos_url": "https://api.github.com/users/polinaeterna/repos",
"events_url": "https://api.github.com/users/polinaeterna/events{/privacy}",
"received_events_url": "https://api.github.com/users/polinaeterna/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-22T10:15:53
| 2022-11-02T11:54:35
| 2022-08-22T12:19:40
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4870/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4870/timeline
| null | null | true
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4870",
"html_url": "https://github.com/huggingface/datasets/pull/4870",
"diff_url": "https://github.com/huggingface/datasets/pull/4870.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4870.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4869
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4869/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4869/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4869/events
|
https://github.com/huggingface/datasets/pull/4869
| 1,345,513,758
|
PR_kwDODunzps49hBGY
| 4,869
|
Fix typos in documentation
|
{
"login": "fl-lo",
"id": 85993954,
"node_id": "MDQ6VXNlcjg1OTkzOTU0",
"avatar_url": "https://avatars.githubusercontent.com/u/85993954?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/fl-lo",
"html_url": "https://github.com/fl-lo",
"followers_url": "https://api.github.com/users/fl-lo/followers",
"following_url": "https://api.github.com/users/fl-lo/following{/other_user}",
"gists_url": "https://api.github.com/users/fl-lo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/fl-lo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/fl-lo/subscriptions",
"organizations_url": "https://api.github.com/users/fl-lo/orgs",
"repos_url": "https://api.github.com/users/fl-lo/repos",
"events_url": "https://api.github.com/users/fl-lo/events{/privacy}",
"received_events_url": "https://api.github.com/users/fl-lo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-21T15:10:03
| 2022-08-22T09:25:39
| 2022-08-22T09:09:58
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4869/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4869/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4869",
"html_url": "https://github.com/huggingface/datasets/pull/4869",
"diff_url": "https://github.com/huggingface/datasets/pull/4869.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4869.patch",
"merged_at": "2022-08-22T09:09:58"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4868
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4868/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4868/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4868/events
|
https://github.com/huggingface/datasets/pull/4868
| 1,345,191,322
|
PR_kwDODunzps49gBk0
| 4,868
|
adding mafand to datasets
|
{
"login": "dadelani",
"id": 23586676,
"node_id": "MDQ6VXNlcjIzNTg2Njc2",
"avatar_url": "https://avatars.githubusercontent.com/u/23586676?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/dadelani",
"html_url": "https://github.com/dadelani",
"followers_url": "https://api.github.com/users/dadelani/followers",
"following_url": "https://api.github.com/users/dadelani/following{/other_user}",
"gists_url": "https://api.github.com/users/dadelani/gists{/gist_id}",
"starred_url": "https://api.github.com/users/dadelani/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/dadelani/subscriptions",
"organizations_url": "https://api.github.com/users/dadelani/orgs",
"repos_url": "https://api.github.com/users/dadelani/repos",
"events_url": "https://api.github.com/users/dadelani/events{/privacy}",
"received_events_url": "https://api.github.com/users/dadelani/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-08-20T15:26:14
| 2022-08-22T11:00:50
| 2022-08-22T08:52:23
|
CONTRIBUTOR
| null |
I'm adding the MAFAND dataset by Masakhane, based on the paper/repository below:
Paper: https://aclanthology.org/2022.naacl-main.223/
Code: https://github.com/masakhane-io/lafand-mt
Please help merge this.
Everything works except for creating the dummy data file.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4868/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4868/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4868",
"html_url": "https://github.com/huggingface/datasets/pull/4868",
"diff_url": "https://github.com/huggingface/datasets/pull/4868.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4868.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4867
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4867/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4867/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4867/events
|
https://github.com/huggingface/datasets/pull/4867
| 1,344,982,646
|
PR_kwDODunzps49fZle
| 4,867
|
Complete tags of superglue dataset card
|
{
"login": "richarddwang",
"id": 17963619,
"node_id": "MDQ6VXNlcjE3OTYzNjE5",
"avatar_url": "https://avatars.githubusercontent.com/u/17963619?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/richarddwang",
"html_url": "https://github.com/richarddwang",
"followers_url": "https://api.github.com/users/richarddwang/followers",
"following_url": "https://api.github.com/users/richarddwang/following{/other_user}",
"gists_url": "https://api.github.com/users/richarddwang/gists{/gist_id}",
"starred_url": "https://api.github.com/users/richarddwang/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/richarddwang/subscriptions",
"organizations_url": "https://api.github.com/users/richarddwang/orgs",
"repos_url": "https://api.github.com/users/richarddwang/repos",
"events_url": "https://api.github.com/users/richarddwang/events{/privacy}",
"received_events_url": "https://api.github.com/users/richarddwang/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-19T23:44:39
| 2022-08-22T09:14:03
| 2022-08-22T08:58:31
|
CONTRIBUTOR
| null |
Related to #4479.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4867/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4867/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4867",
"html_url": "https://github.com/huggingface/datasets/pull/4867",
"diff_url": "https://github.com/huggingface/datasets/pull/4867.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4867.patch",
"merged_at": "2022-08-22T08:58:31"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4865
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4865/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4865/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4865/events
|
https://github.com/huggingface/datasets/issues/4865
| 1,344,552,626
|
I_kwDODunzps5QJD6y
| 4,865
|
Dataset Viewer issue for MoritzLaurer/multilingual_nli
|
{
"login": "MoritzLaurer",
"id": 41862082,
"node_id": "MDQ6VXNlcjQxODYyMDgy",
"avatar_url": "https://avatars.githubusercontent.com/u/41862082?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/MoritzLaurer",
"html_url": "https://github.com/MoritzLaurer",
"followers_url": "https://api.github.com/users/MoritzLaurer/followers",
"following_url": "https://api.github.com/users/MoritzLaurer/following{/other_user}",
"gists_url": "https://api.github.com/users/MoritzLaurer/gists{/gist_id}",
"starred_url": "https://api.github.com/users/MoritzLaurer/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/MoritzLaurer/subscriptions",
"organizations_url": "https://api.github.com/users/MoritzLaurer/orgs",
"repos_url": "https://api.github.com/users/MoritzLaurer/repos",
"events_url": "https://api.github.com/users/MoritzLaurer/events{/privacy}",
"received_events_url": "https://api.github.com/users/MoritzLaurer/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Thanks for reporting @MoritzLaurer.\r\n\r\nCurrently, the dataset preview is working properly: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli\r\n\r\nPlease note that when a dataset is modified, it might take some time until the preview is completely updated.\r\n\r\n@severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?",
"Thanks for your response. You are right, its now working well. I had waited for 30 min or so and refreshed several times and thought there was some other error. Yeah, a different error message sounds like a good idea to avoid confusion. ",
"I'm closing this issue then.",
"> @severo might it be worth adding a clearer error message, something like \"The preview is updating, please retry later\"?\r\n\r\nYes, it's a known issue, and we're about to ship a better version"
] | 2022-08-19T14:55:20
| 2022-08-22T14:47:14
| 2022-08-22T06:13:20
|
NONE
| null |
### Link
_No response_
### Description
I've just uploaded a new dataset to the hub and the viewer does not work for some reason, see here: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli
It displays the error:
```
Status code: 400
Exception: Status400Error
Message: The dataset does not exist.
```
Weirdly enough, the dataset viewer works for an earlier version of the same dataset. The only difference is that it is smaller, and I'm not aware of any other changes I have made: https://huggingface.co/datasets/MoritzLaurer/multilingual_nli_test
Do you know why the dataset viewer is not working?
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4865/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4865/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4863
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4863/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4863/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4863/events
|
https://github.com/huggingface/datasets/issues/4863
| 1,343,737,668
|
I_kwDODunzps5QF89E
| 4,863
|
TFDS wiki_dialog dataset to Huggingface dataset
|
{
"login": "djaym7",
"id": 12378820,
"node_id": "MDQ6VXNlcjEyMzc4ODIw",
"avatar_url": "https://avatars.githubusercontent.com/u/12378820?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/djaym7",
"html_url": "https://github.com/djaym7",
"followers_url": "https://api.github.com/users/djaym7/followers",
"following_url": "https://api.github.com/users/djaym7/following{/other_user}",
"gists_url": "https://api.github.com/users/djaym7/gists{/gist_id}",
"starred_url": "https://api.github.com/users/djaym7/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/djaym7/subscriptions",
"organizations_url": "https://api.github.com/users/djaym7/orgs",
"repos_url": "https://api.github.com/users/djaym7/repos",
"events_url": "https://api.github.com/users/djaym7/events{/privacy}",
"received_events_url": "https://api.github.com/users/djaym7/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 2067376369,
"node_id": "MDU6TGFiZWwyMDY3Mzc2MzY5",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset%20request",
"name": "dataset request",
"color": "e99695",
"default": false,
"description": "Requesting to add a new dataset"
}
] |
closed
| false
| null |
[] | null |
[
"@albertvillanova any help ? The linked dataset is in beam format which is similar to wikipedia dataset in huggingface that you scripted..",
"Nvm, I was able to port it to huggingface datasets, will upload to the hub soon",
"https://huggingface.co/datasets/djaym7/wiki_dialog",
"Thanks for the addition, @djaym7."
] | 2022-08-18T23:06:30
| 2022-08-22T09:41:45
| 2022-08-22T05:18:53
|
NONE
| null |
## Adding a Dataset
- **Name:** *Wiki_dialog*
- **Description:** https://github.com/google-research/dialog-inpainting#:~:text=JSON%20object%2C%20for-,example,-%3A
- **Paper:** https://arxiv.org/abs/2205.09073
- **Data:** https://github.com/google-research/dialog-inpainting
- **Motivation:** *Research and development on the biggest corpus of dialog data*
Instructions to add a new dataset can be found [here](https://github.com/huggingface/datasets/blob/main/ADD_NEW_DATASET.md).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4863/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4863/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4862
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4862/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4862/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4862/events
|
https://github.com/huggingface/datasets/issues/4862
| 1,343,464,699
|
I_kwDODunzps5QE6T7
| 4,862
|
Got "AttributeError: 'xPath' object has no attribute 'read'" when loading an excel dataset with my own code
|
{
"login": "yana-xuyan",
"id": 38536635,
"node_id": "MDQ6VXNlcjM4NTM2NjM1",
"avatar_url": "https://avatars.githubusercontent.com/u/38536635?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/yana-xuyan",
"html_url": "https://github.com/yana-xuyan",
"followers_url": "https://api.github.com/users/yana-xuyan/followers",
"following_url": "https://api.github.com/users/yana-xuyan/following{/other_user}",
"gists_url": "https://api.github.com/users/yana-xuyan/gists{/gist_id}",
"starred_url": "https://api.github.com/users/yana-xuyan/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/yana-xuyan/subscriptions",
"organizations_url": "https://api.github.com/users/yana-xuyan/orgs",
"repos_url": "https://api.github.com/users/yana-xuyan/repos",
"events_url": "https://api.github.com/users/yana-xuyan/events{/privacy}",
"received_events_url": "https://api.github.com/users/yana-xuyan/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"What's more, the downloaded data is actually a folder instead of an excel file.",
"Hi hi, instead of using `download_and_extract` function, I only use `download` function: `base_dir = Path(dl_manager.download(urls))`. It turns out that the code works for `datasets==2.2.2`, however, it doesn't work with `datasets==2.4.0`. ",
"Hi @yana-xuyan, thanks for reporting.\r\n\r\nIndeed you already found the answer: an Excel file should be just downloaded and not downloaded-and-extracted.\r\n\r\nThe reason why is that if you call also extract, our library will try to infer the compression format (and extract it). And Excel files are viewed as ZIP files and extracted as so (into a directory). This is because the Office Open XML is indeed a zipped file under the hood): https://en.wikipedia.org/wiki/Office_Open_XML\r\n> Office Open XML (also informally known as OOXML) is a **zipped**, XML-based file format\r\n```python\r\nimport zipfile\r\n\r\nzipfile.is_zipfile(\"filename.xlsx\")\r\n```\r\nreturns `True`.",
"Hi @albertvillanova, thank you for your reply! Do you have any clue on why the same error still exists with `datasets==2.4.0` even after I don't extract the downloaded file? FYI, if I downgrade to `datasets==2.2.2`, the code works fine.",
"I guess this has to do with the cache: you should remove the previously-wrongly generated directory from the cache; otherwise `datasets` tries to re-use it."
] | 2022-08-18T18:36:14
| 2022-08-31T09:25:08
| 2022-08-31T09:25:08
|
NONE
| null |
## Describe the bug
Loading an Excel (`.xlsx`) dataset with the custom loading script below raises `AttributeError: 'xPath' object has no attribute 'read'`.
## Steps to reproduce the bug
```python
# Sample code to reproduce the bug
# The dataset function is as follows:
from pathlib import Path
from typing import Dict, List, Tuple

import datasets
import pandas as pd

_CITATION = """\
"""

_DATASETNAME = "jadi_ide"

_DESCRIPTION = """\
"""

_HOMEPAGE = ""

_LICENSE = "Unknown"

_URLS = {
    _DATASETNAME: "https://github.com/fathanick/Javanese-Dialect-Identification-from-Twitter-Data/raw/main/Update 16K_Dataset.xlsx",
}

_SOURCE_VERSION = "1.0.0"


class JaDi_Ide(datasets.GeneratorBasedBuilder):
    SOURCE_VERSION = datasets.Version(_SOURCE_VERSION)

    BUILDER_CONFIGS = [
        NusantaraConfig(
            name="jadi_ide_source",
            version=SOURCE_VERSION,
            description="JaDi-Ide source schema",
            schema="source",
            subset_id="jadi_ide",
        ),
    ]

    DEFAULT_CONFIG_NAME = "source"

    def _info(self) -> datasets.DatasetInfo:
        if self.config.schema == "source":
            features = datasets.Features(
                {
                    "id": datasets.Value("string"),
                    "text": datasets.Value("string"),
                    "label": datasets.Value("string"),
                }
            )
        return datasets.DatasetInfo(
            description=_DESCRIPTION,
            features=features,
            homepage=_HOMEPAGE,
            license=_LICENSE,
            citation=_CITATION,
        )

    def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
        """Returns SplitGenerators."""
        # Dataset does not have predetermined split, putting all as TRAIN
        urls = _URLS[_DATASETNAME]
        base_dir = Path(dl_manager.download_and_extract(urls))
        data_files = {"train": base_dir}

        return [
            datasets.SplitGenerator(
                name=datasets.Split.TRAIN,
                gen_kwargs={
                    "filepath": data_files["train"],
                    "split": "train",
                },
            ),
        ]

    def _generate_examples(self, filepath: Path, split: str) -> Tuple[int, Dict]:
        """Yields examples as (key, example) tuples."""
        df = pd.read_excel(filepath, engine='openpyxl')
        df.columns = ["id", "text", "label"]

        if self.config.schema == "source":
            for row in df.itertuples():
                ex = {
                    "id": str(row.id),
                    "text": row.text,
                    "label": row.label,
                }
                yield row.id, ex
```
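Note that `NusantaraConfig` is not part of `datasets`; it is presumably a custom `BuilderConfig` subclass defined elsewhere in the reporter's project, so the snippet above is not runnable without that definition.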
## Expected results
Expecting to load the dataset smoothly.
## Actual results
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 1751, in load_dataset
use_auth_token=use_auth_token,
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 705, in download_and_prepare
dl_manager=dl_manager, verify_infos=verify_infos, **download_and_prepare_kwargs
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1227, in _download_and_prepare
super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 793, in _download_and_prepare
self._prepare_split(split_generator, **prepare_split_kwargs)
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/builder.py", line 1216, in _prepare_split
desc=f"Generating {split_info.name} split",
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/tqdm/std.py", line 1195, in __iter__
for obj in iterable:
File "/home/xuyan/.cache/huggingface/modules/datasets_modules/datasets/jadi_ide/7a539f2b6f726defea8fbe36ceda17bae66c370f6d6c418e3a08d760ebef7519/jadi_ide.py", line 107, in _generate_examples
df = pd.read_excel(filepath, engine='openpyxl')
File "/home/xuyan/anaconda3/lib/python3.7/site-packages/datasets/download/streaming_download_manager.py", line 701, in xpandas_read_excel
return pd.read_excel(BytesIO(filepath_or_buffer.read()), **kwargs)
AttributeError: 'xPath' object has no attribute 'read'
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: Linux-4.15.0-142-generic-x86_64-with-debian-stretch-sid
- Python version: 3.7.4
- PyArrow version: 9.0.0
- Pandas version: 0.25.1
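Following the diagnosis in the comments above, a minimal sketch of the fix (reusing the names from the script above): call `dl_manager.download` instead of `dl_manager.download_and_extract`, so the `.xlsx` file is not treated as a zip archive and unpacked into a directory.
```python
def _split_generators(self, dl_manager: datasets.DownloadManager) -> List[datasets.SplitGenerator]:
    urls = _URLS[_DATASETNAME]
    # .xlsx files are zip archives under the hood, so `download_and_extract`
    # would unpack them into a directory; `download` keeps the file intact.
    filepath = Path(dl_manager.download(urls))
    return [
        datasets.SplitGenerator(
            name=datasets.Split.TRAIN,
            gen_kwargs={"filepath": filepath, "split": "train"},
        ),
    ]
```
As the comments also note, a previously (wrongly) extracted directory may need to be removed from the cache so `datasets` does not reuse it.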
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4862/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4862/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4860
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4860/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4860/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4860/events
|
https://github.com/huggingface/datasets/pull/4860
| 1,342,311,540
|
PR_kwDODunzps49WjEu
| 4,860
|
Add collection3 dataset
|
{
"login": "pefimov",
"id": 16446994,
"node_id": "MDQ6VXNlcjE2NDQ2OTk0",
"avatar_url": "https://avatars.githubusercontent.com/u/16446994?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pefimov",
"html_url": "https://github.com/pefimov",
"followers_url": "https://api.github.com/users/pefimov/followers",
"following_url": "https://api.github.com/users/pefimov/following{/other_user}",
"gists_url": "https://api.github.com/users/pefimov/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pefimov/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pefimov/subscriptions",
"organizations_url": "https://api.github.com/users/pefimov/orgs",
"repos_url": "https://api.github.com/users/pefimov/repos",
"events_url": "https://api.github.com/users/pefimov/events{/privacy}",
"received_events_url": "https://api.github.com/users/pefimov/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892913,
"node_id": "MDU6TGFiZWwxOTM1ODkyOTEz",
"url": "https://api.github.com/repos/huggingface/datasets/labels/wontfix",
"name": "wontfix",
"color": "ffffff",
"default": true,
"description": "This will not be worked on"
}
] |
closed
| false
| null |
[] | null |
[] | 2022-08-17T21:31:42
| 2022-08-23T20:02:45
| 2022-08-22T09:08:59
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4860/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4860/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4860",
"html_url": "https://github.com/huggingface/datasets/pull/4860",
"diff_url": "https://github.com/huggingface/datasets/pull/4860.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4860.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4858
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4858/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4858/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4858/events
|
https://github.com/huggingface/datasets/issues/4858
| 1,340,859,853
|
I_kwDODunzps5P6-XN
| 4,858
|
map() function removes columns when input_columns is not None
|
{
"login": "pramodith",
"id": 16939722,
"node_id": "MDQ6VXNlcjE2OTM5NzIy",
"avatar_url": "https://avatars.githubusercontent.com/u/16939722?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/pramodith",
"html_url": "https://github.com/pramodith",
"followers_url": "https://api.github.com/users/pramodith/followers",
"following_url": "https://api.github.com/users/pramodith/following{/other_user}",
"gists_url": "https://api.github.com/users/pramodith/gists{/gist_id}",
"starred_url": "https://api.github.com/users/pramodith/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/pramodith/subscriptions",
"organizations_url": "https://api.github.com/users/pramodith/orgs",
"repos_url": "https://api.github.com/users/pramodith/repos",
"events_url": "https://api.github.com/users/pramodith/events{/privacy}",
"received_events_url": "https://api.github.com/users/pramodith/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Hi! Thanks for reporting! This looks like a bug. I've just opened a PR with the fix.",
"Awesome! Thank you. I'll close the issue once the PR gets merged. :-)",
"I guess we should reopen after the revert by:\r\n- #5006"
] | 2022-08-16T20:42:30
| 2022-09-22T13:55:24
| 2022-09-22T13:55:24
|
NONE
| null |
## Describe the bug
The map function removes features from the dataset that are not present in the _input_columns_ list of columns, despite those columns not being mentioned in the _remove_columns_ argument.
## Steps to reproduce the bug
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [0, 1, 0], "c": [2, 4, 5]})

def double(x, y):
    x = x * 2
    y = y * 2
    return {"d": x, "e": y}

ds.map(double, input_columns=["a", "c"])
```
## Expected results
```
Dataset({
    features: ['a', 'b', 'c', 'd', 'e'],
    num_rows: 3
})
```
## Actual results
```
Dataset({
    features: ['a', 'c', 'd', 'e'],
    num_rows: 3
})
```
In this specific example feature **b** should not be removed.
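With `input_columns=["a", "c"]`, the mapped function receives only the values of those columns. Until this is fixed, a possible workaround (a minimal sketch) is to drop `input_columns` and read the needed fields inside the function, which leaves all columns intact:
```python
from datasets import Dataset

ds = Dataset.from_dict({"a": [1, 2, 3], "b": [0, 1, 0], "c": [2, 4, 5]})

def double(example):
    # Read the needed columns from the example dict instead of relying
    # on `input_columns`, so no columns are dropped from the dataset.
    return {"d": example["a"] * 2, "e": example["c"] * 2}

ds = ds.map(double)
# Dataset({features: ['a', 'b', 'c', 'd', 'e'], num_rows: 3})
```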
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: linux (colab)
- Python version: 3.7.13
- PyArrow version: 6.0.1
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4858/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4858/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4857
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4857/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4857/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4857/events
|
https://github.com/huggingface/datasets/issues/4857
| 1,340,397,153
|
I_kwDODunzps5P5NZh
| 4,857
|
No preprocessed wikipedia is working on huggingface/datasets
|
{
"login": "aninrusimha",
"id": 30733039,
"node_id": "MDQ6VXNlcjMwNzMzMDM5",
"avatar_url": "https://avatars.githubusercontent.com/u/30733039?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/aninrusimha",
"html_url": "https://github.com/aninrusimha",
"followers_url": "https://api.github.com/users/aninrusimha/followers",
"following_url": "https://api.github.com/users/aninrusimha/following{/other_user}",
"gists_url": "https://api.github.com/users/aninrusimha/gists{/gist_id}",
"starred_url": "https://api.github.com/users/aninrusimha/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/aninrusimha/subscriptions",
"organizations_url": "https://api.github.com/users/aninrusimha/orgs",
"repos_url": "https://api.github.com/users/aninrusimha/repos",
"events_url": "https://api.github.com/users/aninrusimha/events{/privacy}",
"received_events_url": "https://api.github.com/users/aninrusimha/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @aninrusimha.\r\n\r\nPlease, note that the preprocessed datasets are still available, as described in the dataset card, e.g.: https://huggingface.co/datasets/wikipedia\r\n```python\r\nds = load_dataset(\"wikipedia\", \"20220301.en\")\r\n``` ",
"This is working now, but I was getting an error a few days ago when running an existing script. Unfortunately I did not do a proper bug report, but for some reason I was unable to load the dataset due to a request being made to the wikimedia website. However, its working now. Thanks for the reply!"
] | 2022-08-16T13:55:33
| 2022-08-17T13:35:08
| 2022-08-17T13:35:08
|
NONE
| null |
## Describe the bug
The 20220301 wikipedia dump has been deprecated, so there is now no working preprocessed wikipedia dump on huggingface.
https://huggingface.co/datasets/wikipedia
https://dumps.wikimedia.org/enwiki/
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4857/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4857/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4856
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4856/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4856/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4856/events
|
https://github.com/huggingface/datasets/issues/4856
| 1,339,779,957
|
I_kwDODunzps5P22t1
| 4,856
|
file missing when load_dataset with openwebtext on windows
|
{
"login": "kingstarcraft",
"id": 10361976,
"node_id": "MDQ6VXNlcjEwMzYxOTc2",
"avatar_url": "https://avatars.githubusercontent.com/u/10361976?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kingstarcraft",
"html_url": "https://github.com/kingstarcraft",
"followers_url": "https://api.github.com/users/kingstarcraft/followers",
"following_url": "https://api.github.com/users/kingstarcraft/following{/other_user}",
"gists_url": "https://api.github.com/users/kingstarcraft/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kingstarcraft/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kingstarcraft/subscriptions",
"organizations_url": "https://api.github.com/users/kingstarcraft/orgs",
"repos_url": "https://api.github.com/users/kingstarcraft/repos",
"events_url": "https://api.github.com/users/kingstarcraft/events{/privacy}",
"received_events_url": "https://api.github.com/users/kingstarcraft/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
| null |
[] | null |
[
"I have tried to extract ```0015896-b1054262f7da52a0518521e29c8e352c.txt``` from ```17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz``` with 7-zip\r\nand put the file into cache_path ```F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa```\r\nthere is still raise the same error and I find the file was removed from cache_path after I run the run_mlm.py with ```python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base```."
] | 2022-08-16T04:04:22
| 2023-01-04T03:39:12
| 2023-01-04T03:39:12
|
NONE
| null |
## Describe the bug
0015896-b1054262f7da52a0518521e29c8e352c.txt is missing when I run run_mlm.py with openwebtext. I checked the cache_path and cannot find 0015896-b1054262f7da52a0518521e29c8e352c.txt, but I can find this file in 17ecf461bfccd469a1fbc264ccb03731f8606eea7b3e2e8b86e13d18040bf5b3/urlsf_subset00-16_data.xz with 7-zip.
## Steps to reproduce the bug
```sh
python run_mlm.py --model_type roberta --tokenizer_name roberta-base --dataset_name openwebtext --per_device_train_batch_size 8 --per_device_eval_batch_size 8 --do_train --do_eval --output_dir F:/model/roberta-base
```
or
```python
from datasets import load_dataset
load_dataset("openwebtext", None, cache_dir=None, use_auth_token=None)
```
## Expected results
Loading is successful
## Actual results
```
Traceback (most recent call last):
  File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 704, in download_and_prepare
    self._download_and_prepare(
  File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 1227, in _download_and_prepare
    super()._download_and_prepare(dl_manager, verify_infos, check_duplicate_keys=verify_infos)
  File "D:\Python\v3.8.5\lib\site-packages\datasets\builder.py", line 795, in _download_and_prepare
    raise OSError(
OSError: Cannot find data file.
Original error:
[Errno 22] Invalid argument: 'F://huggingface/datasets/downloads/extracted/0901d27f43b7e9ac0577da0d0061c8c632ba0b70ecd1b4bfb21562d9b7486faa/0015896-b1054262f7da52a0518521e29c8e352c.txt'
```
## Environment info
<!-- You can run the command `datasets-cli env` and copy-and-paste its output below. -->
- `datasets` version: 2.4.0
- Platform: windows
- Python version: 3.8.5
- PyArrow version: 9.0.0
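Note that the failing path starts with `F://`, which contains a doubled slash. As a hedged workaround sketch (not part of the original report; the cache location `D:/hf_cache` is hypothetical), the cache can be pointed at a plain Windows path:
```python
from datasets import load_dataset

# Sketch: use an explicit, plain Windows cache path to rule out the
# doubled slash seen in 'F://...' (the cache location is hypothetical).
ds = load_dataset("openwebtext", cache_dir="D:/hf_cache")
```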
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4856/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4856/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4855
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4855/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4855/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4855/events
|
https://github.com/huggingface/datasets/issues/4855
| 1,339,699,975
|
I_kwDODunzps5P2jMH
| 4,855
|
Dataset Viewer issue for super_glue
|
{
"login": "wzsxxa",
"id": 54366859,
"node_id": "MDQ6VXNlcjU0MzY2ODU5",
"avatar_url": "https://avatars.githubusercontent.com/u/54366859?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/wzsxxa",
"html_url": "https://github.com/wzsxxa",
"followers_url": "https://api.github.com/users/wzsxxa/followers",
"following_url": "https://api.github.com/users/wzsxxa/following{/other_user}",
"gists_url": "https://api.github.com/users/wzsxxa/gists{/gist_id}",
"starred_url": "https://api.github.com/users/wzsxxa/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/wzsxxa/subscriptions",
"organizations_url": "https://api.github.com/users/wzsxxa/orgs",
"repos_url": "https://api.github.com/users/wzsxxa/repos",
"events_url": "https://api.github.com/users/wzsxxa/events{/privacy}",
"received_events_url": "https://api.github.com/users/wzsxxa/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 3470211881,
"node_id": "LA_kwDODunzps7O1zsp",
"url": "https://api.github.com/repos/huggingface/datasets/labels/dataset-viewer",
"name": "dataset-viewer",
"color": "E5583E",
"default": false,
"description": "Related to the dataset viewer on huggingface.co"
}
] |
closed
| false
| null |
[] | null |
[
"Thanks for reporting @wzsxxa.\r\n\r\nHowever the \"super_glue\" dataset is rendered properly by the Dataset preview: https://huggingface.co/datasets/super_glue"
] | 2022-08-16T01:34:56
| 2022-08-22T10:08:01
| 2022-08-22T10:07:45
|
NONE
| null |
### Link
https://huggingface.co/datasets/super_glue
### Description
Can't view the super_glue dataset on the web page.
### Owner
_No response_
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4855/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4855/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4853
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4853/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4853/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4853/events
|
https://github.com/huggingface/datasets/pull/4853
| 1,339,456,490
|
PR_kwDODunzps49NFNL
| 4,853
|
Fix bug and checksums in exams dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-15T20:17:57
| 2022-08-16T06:43:57
| 2022-08-16T06:29:06
|
MEMBER
| null |
Fix #4852.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4853/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4853/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4853",
"html_url": "https://github.com/huggingface/datasets/pull/4853",
"diff_url": "https://github.com/huggingface/datasets/pull/4853.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4853.patch",
"merged_at": "2022-08-16T06:29:06"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4852
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4852/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4852/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4852/events
|
https://github.com/huggingface/datasets/issues/4852
| 1,339,450,991
|
I_kwDODunzps5P1mZv
| 4,852
|
Bug in multilingual_with_para config of exams dataset and checksums error
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892857,
"node_id": "MDU6TGFiZWwxOTM1ODkyODU3",
"url": "https://api.github.com/repos/huggingface/datasets/labels/bug",
"name": "bug",
"color": "d73a4a",
"default": true,
"description": "Something isn't working"
}
] |
closed
| false
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"Hi @albertvillanova. Unfortunately I still get this error. Is this because the merge has yet to be released? Is there a way to track the release?",
"Hi @thesofakillers, yes you are right: the fix will be available after next release (it was planned for today; Monday at the latest).\r\n\r\nIn the meantime, you can use the version of the `exams` on our main branch by passing `revision` to `load_dataset`:\r\n```python\r\nds = load_dataset(\"exams\", revision=\"main\")\r\n```"
] | 2022-08-15T20:14:52
| 2022-09-16T09:50:55
| 2022-08-16T06:29:07
|
MEMBER
| null |
## Describe the bug
There is a bug in the "multilingual_with_para" config of the exams dataset:
```python
ds = load_dataset("./datasets/exams", split="train")
```
raises:
```
KeyError: 'choices'
```
Moreover, there is a NonMatchingChecksumError:
```
NonMatchingChecksumError: Checksums didn't match for dataset source files:
['https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/train_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/dev_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/multilingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/test_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_bg_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_hu_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_it_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_mk_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pl_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_pt_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sq_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_sr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_tr_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/train_vi_with_para.jsonl.tar.gz', 'https://github.com/mhardalov/exams-qa/raw/main/data/exams/cross-lingual/with_paragraphs/dev_vi_with_para.jsonl.tar.gz']
```
CC: @thesofakillers
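As a sketch of a common workaround for `NonMatchingChecksumError` (not taken from this issue), the cached archives can be re-downloaded so stale files are not checked against updated checksums:
```python
from datasets import load_dataset

# Sketch: force a fresh download of the source files; the config name
# is the one from the report above.
ds = load_dataset("exams", "multilingual_with_para", download_mode="force_redownload")
```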
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4852/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4852/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4851
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4851/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4851/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4851/events
|
https://github.com/huggingface/datasets/pull/4851
| 1,339,085,917
|
PR_kwDODunzps49L6ee
| 4,851
|
Fix license tag and Source Data section in billsum dataset card
|
{
"login": "kashif",
"id": 8100,
"node_id": "MDQ6VXNlcjgxMDA=",
"avatar_url": "https://avatars.githubusercontent.com/u/8100?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/kashif",
"html_url": "https://github.com/kashif",
"followers_url": "https://api.github.com/users/kashif/followers",
"following_url": "https://api.github.com/users/kashif/following{/other_user}",
"gists_url": "https://api.github.com/users/kashif/gists{/gist_id}",
"starred_url": "https://api.github.com/users/kashif/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/kashif/subscriptions",
"organizations_url": "https://api.github.com/users/kashif/orgs",
"repos_url": "https://api.github.com/users/kashif/repos",
"events_url": "https://api.github.com/users/kashif/events{/privacy}",
"received_events_url": "https://api.github.com/users/kashif/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-15T14:37:00
| 2022-08-22T13:56:24
| 2022-08-22T13:40:59
|
CONTRIBUTOR
| null |
Fixed the data source and license fields
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4851/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4851/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4851",
"html_url": "https://github.com/huggingface/datasets/pull/4851",
"diff_url": "https://github.com/huggingface/datasets/pull/4851.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4851.patch",
"merged_at": "2022-08-22T13:40:59"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4850
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4850/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4850/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4850/events
|
https://github.com/huggingface/datasets/pull/4850
| 1,338,702,306
|
PR_kwDODunzps49KnZ8
| 4,850
|
Fix test of _get_extraction_protocol for TAR files
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-15T08:37:58
| 2022-08-15T09:42:56
| 2022-08-15T09:28:46
|
MEMBER
| null |
While working in another PR, I discovered an xpass test (a test that is supposed to xfail but nevertheless passes) when testing `_get_extraction_protocol`: https://github.com/huggingface/datasets/runs/7818845285?check_suite_focus=true
```
XPASS tests/test_streaming_download_manager.py::test_streaming_dl_manager_get_extraction_protocol_throws[https://foo.bar/train.tar]
```
This PR:
- refactors the test so that it asserts the exceptions are raised, instead of xfailing
- fixes the test for TAR files: `_get_extraction_protocol` does not raise an exception for them, but returns "tar"
- fixes some wrongly named tests: swaps `test_streaming_dl_manager_get_extraction_protocol` with `test_streaming_dl_manager_get_extraction_protocol_gg_drive`
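A minimal sketch of the refactor pattern described above (the exact test code, exception type, and import path are assumptions, not the actual diff):
```python
import pytest

# Import path may differ by `datasets` version (assumption):
from datasets.download.streaming_download_manager import _get_extraction_protocol

def test_streaming_dl_manager_get_extraction_protocol_raises():
    # Sketch: assert the exception is raised instead of xfailing
    # (the exact exception type is an assumption here).
    with pytest.raises(NotImplementedError):
        _get_extraction_protocol("https://foo.bar/train.unsupported")

def test_streaming_dl_manager_get_extraction_protocol_tar():
    # TAR files should not raise; they resolve to the "tar" protocol.
    assert _get_extraction_protocol("https://foo.bar/train.tar") == "tar"
```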
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4850/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4850/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4850",
"html_url": "https://github.com/huggingface/datasets/pull/4850",
"diff_url": "https://github.com/huggingface/datasets/pull/4850.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4850.patch",
"merged_at": "2022-08-15T09:28:46"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4849
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4849/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4849/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4849/events
|
https://github.com/huggingface/datasets/pull/4849
| 1,338,273,900
|
PR_kwDODunzps49JN8d
| 4,849
|
1.18.x
|
{
"login": "Mr-Robot-001",
"id": 49282718,
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Robot-001",
"html_url": "https://github.com/Mr-Robot-001",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-14T15:09:19
| 2022-08-14T15:10:02
| 2022-08-14T15:10:02
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4849/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4849/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4849",
"html_url": "https://github.com/huggingface/datasets/pull/4849",
"diff_url": "https://github.com/huggingface/datasets/pull/4849.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4849.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4848
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4848/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4848/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4848/events
|
https://github.com/huggingface/datasets/pull/4848
| 1,338,271,833
|
PR_kwDODunzps49JNj_
| 4,848
|
a
|
{
"login": "Mr-Robot-001",
"id": 49282718,
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Robot-001",
"html_url": "https://github.com/Mr-Robot-001",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-14T15:01:16
| 2022-08-14T15:09:59
| 2022-08-14T15:09:59
|
NONE
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4848/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4848/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4848",
"html_url": "https://github.com/huggingface/datasets/pull/4848",
"diff_url": "https://github.com/huggingface/datasets/pull/4848.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4848.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4847
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4847/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4847/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4847/events
|
https://github.com/huggingface/datasets/pull/4847
| 1,338,270,636
|
PR_kwDODunzps49JNWX
| 4,847
|
Test win ci
|
{
"login": "Mr-Robot-001",
"id": 49282718,
"node_id": "MDQ6VXNlcjQ5MjgyNzE4",
"avatar_url": "https://avatars.githubusercontent.com/u/49282718?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/Mr-Robot-001",
"html_url": "https://github.com/Mr-Robot-001",
"followers_url": "https://api.github.com/users/Mr-Robot-001/followers",
"following_url": "https://api.github.com/users/Mr-Robot-001/following{/other_user}",
"gists_url": "https://api.github.com/users/Mr-Robot-001/gists{/gist_id}",
"starred_url": "https://api.github.com/users/Mr-Robot-001/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/Mr-Robot-001/subscriptions",
"organizations_url": "https://api.github.com/users/Mr-Robot-001/orgs",
"repos_url": "https://api.github.com/users/Mr-Robot-001/repos",
"events_url": "https://api.github.com/users/Mr-Robot-001/events{/privacy}",
"received_events_url": "https://api.github.com/users/Mr-Robot-001/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-14T14:57:00
| 2022-08-14T14:57:45
| 2022-08-14T14:57:45
|
NONE
| null |
aa
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4847/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4847/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4847",
"html_url": "https://github.com/huggingface/datasets/pull/4847",
"diff_url": "https://github.com/huggingface/datasets/pull/4847.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4847.patch",
"merged_at": null
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4846
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4846/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4846/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4846/events
|
https://github.com/huggingface/datasets/pull/4846
| 1,337,979,897
|
PR_kwDODunzps49IYSC
| 4,846
|
Update documentation card of miam dataset
|
{
"login": "PierreColombo",
"id": 22492839,
"node_id": "MDQ6VXNlcjIyNDkyODM5",
"avatar_url": "https://avatars.githubusercontent.com/u/22492839?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/PierreColombo",
"html_url": "https://github.com/PierreColombo",
"followers_url": "https://api.github.com/users/PierreColombo/followers",
"following_url": "https://api.github.com/users/PierreColombo/following{/other_user}",
"gists_url": "https://api.github.com/users/PierreColombo/gists{/gist_id}",
"starred_url": "https://api.github.com/users/PierreColombo/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/PierreColombo/subscriptions",
"organizations_url": "https://api.github.com/users/PierreColombo/orgs",
"repos_url": "https://api.github.com/users/PierreColombo/repos",
"events_url": "https://api.github.com/users/PierreColombo/events{/privacy}",
"received_events_url": "https://api.github.com/users/PierreColombo/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-13T14:38:55
| 2022-08-17T00:50:04
| 2022-08-14T10:26:08
|
CONTRIBUTOR
| null |
Hi !
The paper has been published at EMNLP.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4846/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4846/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4846",
"html_url": "https://github.com/huggingface/datasets/pull/4846",
"diff_url": "https://github.com/huggingface/datasets/pull/4846.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4846.patch",
"merged_at": "2022-08-14T10:26:08"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4845
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4845/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4845/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4845/events
|
https://github.com/huggingface/datasets/pull/4845
| 1,337,928,283
|
PR_kwDODunzps49IOjf
| 4,845
|
Mark CI tests as xfail if Hub HTTP error
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-13T10:45:11
| 2022-08-23T04:57:12
| 2022-08-23T04:42:26
|
MEMBER
| null |
In order to make testing more robust (and avoid merges to master with red tests), we could mark tests as xfailed (instead of failed) when the Hub raises some temporary HTTP errors.
This PR:
- marks tests as xfailed only if the Hub raises a 500 error for:
- test_upstream_hub
- makes pytest report the xfailed/xpassed tests.
More tests could also be marked if needed.
Examples of CI failures due to temporary Hub HTTP errors:
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_multiple_files
- https://github.com/huggingface/datasets/runs/7806855399?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/datasets/__DUMMY_TRANSFORMERS_USER__/test-16603108028233/commit/main (Request ID: aZeAQ5yLktoGHQYBcJ3zo)`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_no_token
- https://github.com/huggingface/datasets/runs/7840022996?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://s3.us-east-1.amazonaws.com/lfs-staging.huggingface.co/repos/81/e3/81e3b831fa9bf23190ec041f26ef7ff6d6b71c1a937b8ec1ef1f1f05b508c089/caae596caa179cf45e7c9ac0c6d9a9cb0fe2d305291bfbb2d8b648ae26ed38b6?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Content-Sha256=UNSIGNED-PAYLOAD&X-Amz-Credential=AKIA4N7VTDGOZQA2IKWK%2F20220815%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20220815T144713Z&X-Amz-Expires=900&X-Amz-Signature=5ddddfe8ef2b0601e80ab41c78a4d77d921942b0d8160bcab40ff894095e6823&X-Amz-SignedHeaders=host&x-id=PutObject`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_dict_to_hub_private
- https://github.com/huggingface/datasets/runs/7835921082?check_suite_focus=true
`requests.exceptions.HTTPError: 500 Server Error: Internal Server Error for url: https://hub-ci.huggingface.co/api/repos/create (Request ID: gL_1I7i2dii9leBhlZen-) - Internal Error - We're working hard to fix that as soon as possible!`
- FAILED tests/test_upstream_hub.py::TestPushToHub::test_push_dataset_to_hub_custom_features_image_list
- https://github.com/huggingface/datasets/runs/7835920900?check_suite_focus=true
- This is not 500, but 404:
`requests.exceptions.HTTPError: 404 Client Error: Not Found for url: [https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects](https://hub-ci.huggingface.co/datasets/__DUMMY_TRANSFORMERS_USER__/test-16605586458339.git/info/lfs/objects/batch)`
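A rough sketch of the approach (a hypothetical helper, not the actual diff): pytest's imperative `xfail` can turn a temporary Hub 500 into an xfail instead of a hard failure:
```python
import pytest
import requests

# Hypothetical helper: xfail a test on a temporary Hub 500 instead of
# failing it, and re-raise any other HTTP error as usual.
def raise_or_xfail(response: requests.Response) -> None:
    if response.status_code == 500:
        pytest.xfail(f"temporary Hub error: {response.status_code} for {response.url}")
    response.raise_for_status()
```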
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4845/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4845/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4845",
"html_url": "https://github.com/huggingface/datasets/pull/4845",
"diff_url": "https://github.com/huggingface/datasets/pull/4845.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4845.patch",
"merged_at": "2022-08-23T04:42:26"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4844
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4844/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4844/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4844/events
|
https://github.com/huggingface/datasets/pull/4844
| 1,337,878,249
|
PR_kwDODunzps49IFLa
| 4,844
|
Add 'val' to VALIDATION_KEYWORDS.
|
{
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-13T06:49:41
| 2022-08-30T10:17:35
| 2022-08-30T10:14:54
|
CONTRIBUTOR
| null |
This PR fixes #4839 by adding the word `"val"` to the `VALIDATION_KEYWORDS` so that the `load_dataset()` method with `imagefolder` (and probably some other builders as well) also reads folders named `"val"`.
I think the supported keywords should also be mentioned in the documentation, but I couldn't think of a proper place to add that.
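The change itself is tiny; a sketch of the updated constant (the exact surrounding code in `src/datasets/data_files.py` may differ):
```python
# Sketch: adding "val" lets directories named val/ be picked up as the
# validation split by the split-inference logic.
VALIDATION_KEYWORDS = ["validation", "valid", "dev", "val"]
```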
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4844/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4844/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4844",
"html_url": "https://github.com/huggingface/datasets/pull/4844",
"diff_url": "https://github.com/huggingface/datasets/pull/4844.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4844.patch",
"merged_at": "2022-08-30T10:14:54"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4843
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4843/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4843/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4843/events
|
https://github.com/huggingface/datasets/pull/4843
| 1,337,668,699
|
PR_kwDODunzps49HaWT
| 4,843
|
Fix typo in streaming docs
|
{
"login": "flozi00",
"id": 47894090,
"node_id": "MDQ6VXNlcjQ3ODk0MDkw",
"avatar_url": "https://avatars.githubusercontent.com/u/47894090?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/flozi00",
"html_url": "https://github.com/flozi00",
"followers_url": "https://api.github.com/users/flozi00/followers",
"following_url": "https://api.github.com/users/flozi00/following{/other_user}",
"gists_url": "https://api.github.com/users/flozi00/gists{/gist_id}",
"starred_url": "https://api.github.com/users/flozi00/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/flozi00/subscriptions",
"organizations_url": "https://api.github.com/users/flozi00/orgs",
"repos_url": "https://api.github.com/users/flozi00/repos",
"events_url": "https://api.github.com/users/flozi00/events{/privacy}",
"received_events_url": "https://api.github.com/users/flozi00/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T20:18:21
| 2022-08-14T11:43:30
| 2022-08-14T11:02:09
|
CONTRIBUTOR
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4843/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4843/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4843",
"html_url": "https://github.com/huggingface/datasets/pull/4843",
"diff_url": "https://github.com/huggingface/datasets/pull/4843.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4843.patch",
"merged_at": "2022-08-14T11:02:09"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4842
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4842/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4842/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4842/events
|
https://github.com/huggingface/datasets/pull/4842
| 1,337,527,764
|
PR_kwDODunzps49G8CC
| 4,842
|
Update stackexchange license
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T17:39:06
| 2022-08-14T10:43:18
| 2022-08-14T10:28:49
|
CONTRIBUTOR
| null |
The correct license of the stackexchange subset of the Pile is `cc-by-sa-4.0`, as can be seen, for example, here: https://stackoverflow.com/help/licensing
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4842/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4842/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4842",
"html_url": "https://github.com/huggingface/datasets/pull/4842",
"diff_url": "https://github.com/huggingface/datasets/pull/4842.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4842.patch",
"merged_at": "2022-08-14T10:28:49"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4841
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4841/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4841/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4841/events
|
https://github.com/huggingface/datasets/pull/4841
| 1,337,401,243
|
PR_kwDODunzps49Gf0I
| 4,841
|
Update ted_talks_iwslt license to include ND
|
{
"login": "cakiki",
"id": 3664563,
"node_id": "MDQ6VXNlcjM2NjQ1NjM=",
"avatar_url": "https://avatars.githubusercontent.com/u/3664563?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/cakiki",
"html_url": "https://github.com/cakiki",
"followers_url": "https://api.github.com/users/cakiki/followers",
"following_url": "https://api.github.com/users/cakiki/following{/other_user}",
"gists_url": "https://api.github.com/users/cakiki/gists{/gist_id}",
"starred_url": "https://api.github.com/users/cakiki/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/cakiki/subscriptions",
"organizations_url": "https://api.github.com/users/cakiki/orgs",
"repos_url": "https://api.github.com/users/cakiki/repos",
"events_url": "https://api.github.com/users/cakiki/events{/privacy}",
"received_events_url": "https://api.github.com/users/cakiki/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T16:14:52
| 2022-08-14T11:15:22
| 2022-08-14T11:00:22
|
CONTRIBUTOR
| null |
Excerpt from the paper's abstract: "Aside from its cultural and social relevance, this content, which is published under the Creative Commons BY-NC-ND license, also represents a precious language resource for the machine translation research community"
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4841/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4841/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4841",
"html_url": "https://github.com/huggingface/datasets/pull/4841",
"diff_url": "https://github.com/huggingface/datasets/pull/4841.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4841.patch",
"merged_at": "2022-08-14T11:00:22"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4839
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4839/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4839/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4839/events
|
https://github.com/huggingface/datasets/issues/4839
| 1,337,206,377
|
I_kwDODunzps5PtCZp
| 4,839
|
ImageFolder dataset builder does not read the validation data set if it is named as "val"
|
{
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"id": 1935892871,
"node_id": "MDU6TGFiZWwxOTM1ODkyODcx",
"url": "https://api.github.com/repos/huggingface/datasets/labels/enhancement",
"name": "enhancement",
"color": "a2eeef",
"default": true,
"description": "New feature or request"
}
] |
closed
| false
|
{
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
}
|
[
{
"login": "akt42",
"id": 98386959,
"node_id": "U_kgDOBd1EDw",
"avatar_url": "https://avatars.githubusercontent.com/u/98386959?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/akt42",
"html_url": "https://github.com/akt42",
"followers_url": "https://api.github.com/users/akt42/followers",
"following_url": "https://api.github.com/users/akt42/following{/other_user}",
"gists_url": "https://api.github.com/users/akt42/gists{/gist_id}",
"starred_url": "https://api.github.com/users/akt42/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/akt42/subscriptions",
"organizations_url": "https://api.github.com/users/akt42/orgs",
"repos_url": "https://api.github.com/users/akt42/repos",
"events_url": "https://api.github.com/users/akt42/events{/privacy}",
"received_events_url": "https://api.github.com/users/akt42/received_events",
"type": "User",
"site_admin": false
}
] | null |
[
"#take"
] | 2022-08-12T13:26:00
| 2022-08-30T10:14:55
| 2022-08-30T10:14:55
|
CONTRIBUTOR
| null |
**Is your feature request related to a problem? Please describe.**
Currently, the `'imagefolder'` dataset builder in [`load_dataset()`](https://github.com/huggingface/datasets/blob/2.4.0/src/datasets/load.py#L1541) only [supports](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31) the following names as the validation data set directory name: `["validation", "valid", "dev"]`. When the validation directory is named `'val'`, the dataset will not have a validation split. I expected this to be a trivial task but ended up spending a lot of time before realizing that only the above names are supported.
Here's a minimal example of `val` not being recognized:
```python
import os
import numpy as np
import cv2
from datasets import load_dataset
# creating a dummy data set with the following structure:
# ROOT
# | -- train
# | ---- class_1
# | ---- class_2
# | -- val
# | ---- class_1
# | ---- class_2
ROOT = "data"
for which in ["train", "val"]:
for class_name in ["class_1", "class_2"]:
dir_name = os.path.join(ROOT, which, class_name)
if not os.path.exists(dir_name):
os.makedirs(dir_name)
for i in range(10):
cv2.imwrite(
os.path.join(dir_name, f"{i}.png"),
np.random.random((224, 224))
)
# trying to create a data set
dataset = load_dataset(
"imagefolder",
data_dir=ROOT
)
>> dataset
DatasetDict({
train: Dataset({
features: ['image', 'label'],
num_rows: 20
})
})
# ^ note how the dataset only has a 'train' subset
```
**Describe the solution you'd like**
The suggestion is to add `"val"` to [that list](https://github.com/huggingface/datasets/blob/6c609a322da994de149b2c938f19439bca99408e/src/datasets/data_files.py#L31), as that's a commonly used name for the validation directory.
Also, in the documentation, explicitly mention that only such directory names are supported as train/val/test directories, to avoid confusion.
**Describe alternatives you've considered**
In the documentation, explicitly mention that only such directory names are supported as train/val/test directories without adding `val` to the above list.
**Additional context**
A question asked in the forum: [Loading an imagenet-style image dataset with train/val directories](https://discuss.huggingface.co/t/loading-an-imagenet-style-image-dataset-with-train-val-directories/21554)
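Until `val` is recognized automatically, a workaround sketch (not in the original post) is to map the split directories explicitly:
```python
from datasets import load_dataset

# Sketch: point each split at its directory by hand, so the val/ folder
# becomes the validation split regardless of keyword matching.
dataset = load_dataset(
    "imagefolder",
    data_files={"train": "data/train/**", "validation": "data/val/**"},
)
```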
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4839/reactions",
"total_count": 1,
"+1": 1,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4839/timeline
| null |
completed
| null | null | false
|
https://api.github.com/repos/huggingface/datasets/issues/4838
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4838/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4838/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4838/events
|
https://github.com/huggingface/datasets/pull/4838
| 1,337,194,918
|
PR_kwDODunzps49F08R
| 4,838
|
Fix documentation card of adv_glue dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T13:15:26
| 2022-08-15T10:17:14
| 2022-08-15T10:02:11
|
MEMBER
| null |
Fix documentation card of adv_glue dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4838/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4838/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4838",
"html_url": "https://github.com/huggingface/datasets/pull/4838",
"diff_url": "https://github.com/huggingface/datasets/pull/4838.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4838.patch",
"merged_at": "2022-08-15T10:02:11"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4837
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4837/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4837/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4837/events
|
https://github.com/huggingface/datasets/pull/4837
| 1,337,079,723
|
PR_kwDODunzps49Fb6l
| 4,837
|
Add support for CSV metadata files to ImageFolder
|
{
"login": "mariosasko",
"id": 47462742,
"node_id": "MDQ6VXNlcjQ3NDYyNzQy",
"avatar_url": "https://avatars.githubusercontent.com/u/47462742?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/mariosasko",
"html_url": "https://github.com/mariosasko",
"followers_url": "https://api.github.com/users/mariosasko/followers",
"following_url": "https://api.github.com/users/mariosasko/following{/other_user}",
"gists_url": "https://api.github.com/users/mariosasko/gists{/gist_id}",
"starred_url": "https://api.github.com/users/mariosasko/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/mariosasko/subscriptions",
"organizations_url": "https://api.github.com/users/mariosasko/orgs",
"repos_url": "https://api.github.com/users/mariosasko/repos",
"events_url": "https://api.github.com/users/mariosasko/events{/privacy}",
"received_events_url": "https://api.github.com/users/mariosasko/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T11:19:18
| 2022-08-31T12:01:27
| 2022-08-31T11:59:07
|
CONTRIBUTOR
| null |
Fix #4814
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4837/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4837/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4837",
"html_url": "https://github.com/huggingface/datasets/pull/4837",
"diff_url": "https://github.com/huggingface/datasets/pull/4837.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4837.patch",
"merged_at": "2022-08-31T11:59:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4835
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4835/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4835/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4835/events
|
https://github.com/huggingface/datasets/pull/4835
| 1,336,994,835
|
PR_kwDODunzps49FJg9
| 4,835
|
Fix documentation card of ethos dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T09:51:06
| 2022-08-12T13:13:55
| 2022-08-12T12:59:39
|
MEMBER
| null |
Fix documentation card of ethos dataset.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4835/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4835/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4835",
"html_url": "https://github.com/huggingface/datasets/pull/4835",
"diff_url": "https://github.com/huggingface/datasets/pull/4835.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4835.patch",
"merged_at": "2022-08-12T12:59:39"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4834
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4834/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4834/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4834/events
|
https://github.com/huggingface/datasets/pull/4834
| 1,336,993,511
|
PR_kwDODunzps49FJOu
| 4,834
|
Fix documentation card of recipe_nlg dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T09:49:39
| 2022-08-12T11:28:18
| 2022-08-12T11:13:40
|
MEMBER
| null |
Fix documentation card of recipe_nlg dataset
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4834/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4834/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4834",
"html_url": "https://github.com/huggingface/datasets/pull/4834",
"diff_url": "https://github.com/huggingface/datasets/pull/4834.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4834.patch",
"merged_at": "2022-08-12T11:13:40"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4833
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4833/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4833/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4833/events
|
https://github.com/huggingface/datasets/pull/4833
| 1,336,946,965
|
PR_kwDODunzps49E_Nk
| 4,833
|
Fix missing tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T09:04:52
| 2022-09-22T14:41:23
| 2022-08-12T09:45:55
|
MEMBER
| null |
Fix missing tags in dataset cards:
- boolq
- break_data
- definite_pronoun_resolution
- emo
- kor_nli
- pg19
- quartz
- sciq
- squad_es
- wmt14
- wmt15
- wmt16
- wmt17
- wmt18
- wmt19
- wmt_t2t
This PR partially fixes the missing tags in dataset cards. Subsequent PRs will follow to complete this task.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4833/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4833/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4833",
"html_url": "https://github.com/huggingface/datasets/pull/4833",
"diff_url": "https://github.com/huggingface/datasets/pull/4833.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4833.patch",
"merged_at": "2022-08-12T09:45:55"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4832
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4832/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4832/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4832/events
|
https://github.com/huggingface/datasets/pull/4832
| 1,336,727,389
|
PR_kwDODunzps49EQav
| 4,832
|
Fix tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-12T04:11:23
| 2022-08-12T04:41:55
| 2022-08-12T04:27:24
|
MEMBER
| null |
Fix wrong tags in dataset cards.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4832/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4832/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4832",
"html_url": "https://github.com/huggingface/datasets/pull/4832",
"diff_url": "https://github.com/huggingface/datasets/pull/4832.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4832.patch",
"merged_at": "2022-08-12T04:27:24"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4831
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4831/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4831/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4831/events
|
https://github.com/huggingface/datasets/pull/4831
| 1,336,199,643
|
PR_kwDODunzps49Cibf
| 4,831
|
Add oversampling strategies to interleave datasets
|
{
"login": "ylacombe",
"id": 52246514,
"node_id": "MDQ6VXNlcjUyMjQ2NTE0",
"avatar_url": "https://avatars.githubusercontent.com/u/52246514?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/ylacombe",
"html_url": "https://github.com/ylacombe",
"followers_url": "https://api.github.com/users/ylacombe/followers",
"following_url": "https://api.github.com/users/ylacombe/following{/other_user}",
"gists_url": "https://api.github.com/users/ylacombe/gists{/gist_id}",
"starred_url": "https://api.github.com/users/ylacombe/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/ylacombe/subscriptions",
"organizations_url": "https://api.github.com/users/ylacombe/orgs",
"repos_url": "https://api.github.com/users/ylacombe/repos",
"events_url": "https://api.github.com/users/ylacombe/events{/privacy}",
"received_events_url": "https://api.github.com/users/ylacombe/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T16:24:51
| 2022-12-04T11:23:54
| 2022-08-24T16:46:07
|
CONTRIBUTOR
| null |
Hello everyone,
Here is a proposal to improve `interleave_datasets` function.
Following Issue #3064 and @lhoestq's [comment](https://github.com/huggingface/datasets/issues/3064#issuecomment-1022333385), I propose here code that performs oversampling when interleaving a list of `Dataset` objects.
I encountered this problem myself while trying to implement training on a multilingual dataset following a training strategy similar to that of the [XLSUM paper](https://arxiv.org/pdf/2106.13822.pdf), a multilingual abstractive summarization dataset where the multilingual training set is created by sampling from the languages following a smoothing strategy. The main idea is to sample languages that have few samples more frequently than the others.
As in Issue #3064, the current default strategy is an undersampling strategy, which stops as soon as one dataset runs out of samples. The new `all_exhausted` strategy stops building the new dataset only once all samples in every dataset have been added at least once.
How does it work in practice:
- if ``probabilities`` is `None` and the strategy is `all_exhausted`, it simply performs a round-robin interleaving that stops when the longest dataset runs out of samples. Here the new dataset length will be $maxLengthDataset*nbDataset$.
- if ``probabilities`` is not `None` and the strategy is `all_exhausted`, it keeps track of the datasets that have run out of samples but continues to add samples from them to the new dataset, and stops as soon as every dataset has run out of samples at least once.
- In the other cases, it keeps the same behaviour as before, except that this time, when probabilities are specified, it really stops AS SOON AS a dataset runs out of samples.
More on the last sentence:
The previous example of `interleave_datasets` was:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12]
With my implementation, `dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)` gives:
>>> dataset["a"]
[10, 0, 11, 1, 2]
because `d1` is already out of samples just after `2` is added.
Example of the results of applying the different strategies:
>>> from datasets import Dataset, interleave_datasets
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22]})
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 10, 0, 1, 2, 21, 0, 11, 1, 2, 0, 1, 12, 2, 10, 0, 22]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> d1 = Dataset.from_dict({"a": [0, 1, 2]})
>>> d2 = Dataset.from_dict({"a": [10, 11, 12, 13]})
>>> d3 = Dataset.from_dict({"a": [20, 21, 22, 23, 24]})
>>> dataset = interleave_datasets([d1, d2, d3])
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22]
>>> dataset = interleave_datasets([d1, d2, d3], stopping_strategy="all_exhausted")
>>> dataset["a"]
[0, 10, 20, 1, 11, 21, 2, 12, 22, 0, 13, 23, 1, 0, 24]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42)
>>> dataset["a"]
[10, 0, 11, 1, 2]
>>> dataset = interleave_datasets([d1, d2, d3], probabilities=[0.7, 0.2, 0.1], seed=42, stopping_strategy="all_exhausted")
>>> dataset["a"]
[10, 0, 11, 1, 2, 20, 12, 13, ..., 0, 1, 2, 0, 24]
**Final note:** I have been using this code for a research project involving a large-scale multilingual dataset. One should be careful when using oversampling to avoid exploding the size of the final dataset: for example, if a very large dataset has a low probability of being sampled, the final dataset may end up several times the size of that large dataset.
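For illustration, here is a minimal, hypothetical sketch of how the `all_exhausted` index sequence could be generated when probabilities are given (this is not the actual implementation; the helper name and details are assumptions):
```python
import numpy as np

def all_exhausted_indices(lengths, probabilities, seed=42):
    # Sketch of the "all_exhausted" stopping strategy: keep sampling
    # source datasets according to `probabilities`, cycling within each
    # source, until every source has been fully seen at least once.
    rng = np.random.default_rng(seed)
    positions = [0] * len(lengths)
    exhausted = [False] * len(lengths)
    indices = []  # (source_id, row_id) pairs
    while not all(exhausted):
        source = int(rng.choice(len(lengths), p=probabilities))
        indices.append((source, positions[source] % lengths[source]))
        positions[source] += 1
        if positions[source] >= lengths[source]:
            exhausted[source] = True
    return indices
```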
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4831/reactions",
"total_count": 1,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 1,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4831/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4831",
"html_url": "https://github.com/huggingface/datasets/pull/4831",
"diff_url": "https://github.com/huggingface/datasets/pull/4831.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4831.patch",
"merged_at": "2022-08-24T16:46:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4830
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4830/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4830/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4830/events
|
https://github.com/huggingface/datasets/pull/4830
| 1,336,177,937
|
PR_kwDODunzps49Cdro
| 4,830
|
Fix task tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T16:06:06
| 2022-08-11T16:37:27
| 2022-08-11T16:23:00
|
MEMBER
| null | null |
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4830/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4830/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4830",
"html_url": "https://github.com/huggingface/datasets/pull/4830",
"diff_url": "https://github.com/huggingface/datasets/pull/4830.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4830.patch",
"merged_at": "2022-08-11T16:23:00"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4827
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4827/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4827/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4827/events
|
https://github.com/huggingface/datasets/pull/4827
| 1,335,994,312
|
PR_kwDODunzps49B1zi
| 4,827
|
Add license metadata to pg19
|
{
"login": "julien-c",
"id": 326577,
"node_id": "MDQ6VXNlcjMyNjU3Nw==",
"avatar_url": "https://avatars.githubusercontent.com/u/326577?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/julien-c",
"html_url": "https://github.com/julien-c",
"followers_url": "https://api.github.com/users/julien-c/followers",
"following_url": "https://api.github.com/users/julien-c/following{/other_user}",
"gists_url": "https://api.github.com/users/julien-c/gists{/gist_id}",
"starred_url": "https://api.github.com/users/julien-c/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/julien-c/subscriptions",
"organizations_url": "https://api.github.com/users/julien-c/orgs",
"repos_url": "https://api.github.com/users/julien-c/repos",
"events_url": "https://api.github.com/users/julien-c/events{/privacy}",
"received_events_url": "https://api.github.com/users/julien-c/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T13:52:20
| 2022-08-11T15:01:03
| 2022-08-11T14:46:38
|
MEMBER
| null |
As reported over email by Roy Rijkers
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4827/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4827/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4827",
"html_url": "https://github.com/huggingface/datasets/pull/4827",
"diff_url": "https://github.com/huggingface/datasets/pull/4827.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4827.patch",
"merged_at": "2022-08-11T14:46:38"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4826
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4826/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4826/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4826/events
|
https://github.com/huggingface/datasets/pull/4826
| 1,335,987,583
|
PR_kwDODunzps49B0V3
| 4,826
|
Fix language tags in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T13:47:14
| 2022-08-11T14:17:48
| 2022-08-11T14:03:12
|
MEMBER
| null |
Fix language tags in all dataset cards, so that they are validated (aligned with our `languages.json` resource).
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4826/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4826/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4826",
"html_url": "https://github.com/huggingface/datasets/pull/4826",
"diff_url": "https://github.com/huggingface/datasets/pull/4826.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4826.patch",
"merged_at": "2022-08-11T14:03:12"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4825
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4825/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4825/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4825/events
|
https://github.com/huggingface/datasets/pull/4825
| 1,335,856,882
|
PR_kwDODunzps49BYWL
| 4,825
|
[Windows] Fix Access Denied when using os.rename()
|
{
"login": "DougTrajano",
"id": 8703022,
"node_id": "MDQ6VXNlcjg3MDMwMjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8703022?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/DougTrajano",
"html_url": "https://github.com/DougTrajano",
"followers_url": "https://api.github.com/users/DougTrajano/followers",
"following_url": "https://api.github.com/users/DougTrajano/following{/other_user}",
"gists_url": "https://api.github.com/users/DougTrajano/gists{/gist_id}",
"starred_url": "https://api.github.com/users/DougTrajano/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/DougTrajano/subscriptions",
"organizations_url": "https://api.github.com/users/DougTrajano/orgs",
"repos_url": "https://api.github.com/users/DougTrajano/repos",
"events_url": "https://api.github.com/users/DougTrajano/events{/privacy}",
"received_events_url": "https://api.github.com/users/DougTrajano/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T11:57:15
| 2022-08-24T13:09:07
| 2022-08-24T13:09:07
|
CONTRIBUTOR
| null |
In this PR, we add an additional step for when `os.rename()` raises a `PermissionError` (Access Denied on Windows): we fall back to `shutil.move()` on the temp files.
Fix #2937
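A minimal sketch of the fallback pattern described above (the helper name is hypothetical):
```python
import os
import shutil

def rename_with_fallback(src, dst):
    # os.rename() is atomic when it works, but on Windows it can raise
    # PermissionError ("Access Denied") if another process still holds
    # a handle on the file.
    try:
        os.rename(src, dst)
    except PermissionError:
        # shutil.move() falls back to copy-then-delete when a plain
        # rename is not possible.
        shutil.move(src, dst)
```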
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4825/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4825/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4825",
"html_url": "https://github.com/huggingface/datasets/pull/4825",
"diff_url": "https://github.com/huggingface/datasets/pull/4825.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4825.patch",
"merged_at": "2022-08-24T13:09:07"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4824
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4824/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4824/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4824/events
|
https://github.com/huggingface/datasets/pull/4824
| 1,335,826,639
|
PR_kwDODunzps49BR5H
| 4,824
|
Fix titles in dataset cards
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T11:27:48
| 2022-08-11T13:46:11
| 2022-08-11T12:56:49
|
MEMBER
| null |
Fix all the titles in the dataset cards, so that they conform to the required format.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4824/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4824/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4824",
"html_url": "https://github.com/huggingface/datasets/pull/4824",
"diff_url": "https://github.com/huggingface/datasets/pull/4824.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4824.patch",
"merged_at": "2022-08-11T12:56:49"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4823
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4823/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4823/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4823/events
|
https://github.com/huggingface/datasets/pull/4823
| 1,335,687,033
|
PR_kwDODunzps49A0O_
| 4,823
|
Update data URL in mkqa dataset
|
{
"login": "albertvillanova",
"id": 8515462,
"node_id": "MDQ6VXNlcjg1MTU0NjI=",
"avatar_url": "https://avatars.githubusercontent.com/u/8515462?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/albertvillanova",
"html_url": "https://github.com/albertvillanova",
"followers_url": "https://api.github.com/users/albertvillanova/followers",
"following_url": "https://api.github.com/users/albertvillanova/following{/other_user}",
"gists_url": "https://api.github.com/users/albertvillanova/gists{/gist_id}",
"starred_url": "https://api.github.com/users/albertvillanova/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/albertvillanova/subscriptions",
"organizations_url": "https://api.github.com/users/albertvillanova/orgs",
"repos_url": "https://api.github.com/users/albertvillanova/repos",
"events_url": "https://api.github.com/users/albertvillanova/events{/privacy}",
"received_events_url": "https://api.github.com/users/albertvillanova/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T09:16:13
| 2022-08-11T09:51:50
| 2022-08-11T09:37:52
|
MEMBER
| null |
Update data URL in mkqa dataset.
Fix #4817.
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4823/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4823/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4823",
"html_url": "https://github.com/huggingface/datasets/pull/4823",
"diff_url": "https://github.com/huggingface/datasets/pull/4823.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4823.patch",
"merged_at": "2022-08-11T09:37:51"
}
| true
|
https://api.github.com/repos/huggingface/datasets/issues/4821
|
https://api.github.com/repos/huggingface/datasets
|
https://api.github.com/repos/huggingface/datasets/issues/4821/labels{/name}
|
https://api.github.com/repos/huggingface/datasets/issues/4821/comments
|
https://api.github.com/repos/huggingface/datasets/issues/4821/events
|
https://github.com/huggingface/datasets/pull/4821
| 1,335,664,588
|
PR_kwDODunzps49AvaE
| 4,821
|
Fix train_test_split docs
|
{
"login": "NielsRogge",
"id": 48327001,
"node_id": "MDQ6VXNlcjQ4MzI3MDAx",
"avatar_url": "https://avatars.githubusercontent.com/u/48327001?v=4",
"gravatar_id": "",
"url": "https://api.github.com/users/NielsRogge",
"html_url": "https://github.com/NielsRogge",
"followers_url": "https://api.github.com/users/NielsRogge/followers",
"following_url": "https://api.github.com/users/NielsRogge/following{/other_user}",
"gists_url": "https://api.github.com/users/NielsRogge/gists{/gist_id}",
"starred_url": "https://api.github.com/users/NielsRogge/starred{/owner}{/repo}",
"subscriptions_url": "https://api.github.com/users/NielsRogge/subscriptions",
"organizations_url": "https://api.github.com/users/NielsRogge/orgs",
"repos_url": "https://api.github.com/users/NielsRogge/repos",
"events_url": "https://api.github.com/users/NielsRogge/events{/privacy}",
"received_events_url": "https://api.github.com/users/NielsRogge/received_events",
"type": "User",
"site_admin": false
}
|
[] |
closed
| false
| null |
[] | null |
[] | 2022-08-11T08:55:45
| 2022-08-11T09:59:29
| 2022-08-11T09:45:40
|
CONTRIBUTOR
| null |
I saw that `stratify` support was added to the `train_test_split` method in #4322, hence the docs can be updated.
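For reference, a usage sketch of the stratified split (assuming the parameter is exposed as `stratify_by_column` and that the target column must be a `ClassLabel` feature):
```python
from datasets import ClassLabel, Dataset

ds = Dataset.from_dict({"text": ["a", "b", "c", "d"], "label": [0, 0, 1, 1]})
# Stratification requires the target column to be a ClassLabel feature.
ds = ds.cast_column("label", ClassLabel(names=["neg", "pos"]))
splits = ds.train_test_split(test_size=0.5, stratify_by_column="label", seed=42)
# Each split keeps roughly the same label proportions as the full dataset.
print(splits["train"]["label"], splits["test"]["label"])
```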
|
{
"url": "https://api.github.com/repos/huggingface/datasets/issues/4821/reactions",
"total_count": 0,
"+1": 0,
"-1": 0,
"laugh": 0,
"hooray": 0,
"confused": 0,
"heart": 0,
"rocket": 0,
"eyes": 0
}
|
https://api.github.com/repos/huggingface/datasets/issues/4821/timeline
| null | null | false
|
{
"url": "https://api.github.com/repos/huggingface/datasets/pulls/4821",
"html_url": "https://github.com/huggingface/datasets/pull/4821",
"diff_url": "https://github.com/huggingface/datasets/pull/4821.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/4821.patch",
"merged_at": "2022-08-11T09:45:40"
}
| true
|