title | body | html_url | comments | pull_request | number | is_pull_request
|---|---|---|---|---|---|---|
IsADirectoryError when trying to download C4 | **TLDR**:
I fail to download C4 and see a stacktrace originating in `IsADirectoryError` as an explanation for failure.
How can the problem be fixed?
**VERBOSE**:
I use Python version 3.7 and have the following dependencies listed in my project:
```
datasets==1.2.0
apache-beam==2.26.0
```
When runn... | https://github.com/huggingface/datasets/issues/1710 | [
"I haven't tested C4 on my side so there so there may be a few bugs in the code/adjustments to make.\r\nHere it looks like in c4.py, line 190 one of the `files_to_download` is `'/'` which is invalid.\r\nValid files are paths to local files or URLs to remote files.",
"Fixed once processed data is used instead:\r\n... | null | 1,710 | false |
Databases | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | https://github.com/huggingface/datasets/issues/1709 | [] | null | 1,709 | false |
<html dir="ltr" lang="en" class="focus-outline-visible"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- **Paper:** *link to the dataset paper if available*
- **Data:** *link to the Github repository or current dataset location*
- **Motivation:** *what are some good reasons t... | https://github.com/huggingface/datasets/issues/1708 | [] | null | 1,708 | false |
Added generated READMEs for datasets that were missing one. | This is it: we worked on a generator with Yacine @yjernite, and we generated dataset cards for all the missing ones (161), with all the information we could gather from the datasets repository, and using dummy_data to generate examples when possible.
Code is available here for the moment: https://github.com/madlag/datasets... | https://github.com/huggingface/datasets/pull/1707 | [
"Looks like we need to trim the ones with too many configs, will look into it tomorrow!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1707",
"html_url": "https://github.com/huggingface/datasets/pull/1707",
"diff_url": "https://github.com/huggingface/datasets/pull/1707.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1707.patch",
"merged_at": "2021-01-18T14:32... | 1,707 | true |
Error when downloading a large dataset on slow connection. | I receive the following error after about an hour trying to download the `openwebtext` dataset.
The code used is:
```python
import datasets
datasets.load_dataset("openwebtext")
```
> Traceback (most recent call last): ... | https://github.com/huggingface/datasets/issues/1706 | [
"Hi ! Is this an issue you have with `openwebtext` specifically or also with other datasets ?\r\n\r\nIt looks like the downloaded file is corrupted and can't be extracted using `tarfile`.\r\nCould you try loading it again with \r\n```python\r\nimport datasets\r\ndatasets.load_dataset(\"openwebtext\", download_mode=... | null | 1,706 | false |
Add information about caching and verifications in "Load a Dataset" docs | Related to #215.
Missing improvements from @lhoestq's #1703. | https://github.com/huggingface/datasets/pull/1705 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1705",
"html_url": "https://github.com/huggingface/datasets/pull/1705",
"diff_url": "https://github.com/huggingface/datasets/pull/1705.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1705.patch",
"merged_at": "2021-01-12T14:08... | 1,705 | true |
Update XSUM Factuality DatasetCard | Update XSUM Factuality DatasetCard | https://github.com/huggingface/datasets/pull/1704 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1704",
"html_url": "https://github.com/huggingface/datasets/pull/1704",
"diff_url": "https://github.com/huggingface/datasets/pull/1704.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1704.patch",
"merged_at": "2021-01-12T13:30... | 1,704 | true |
Improvements regarding caching and fingerprinting | This PR adds these features:
- Enable/disable caching
If disabled, the library will no longer reload cached dataset files when applying transforms to the datasets.
It is equivalent to setting `load_from_cache` to `False` in dataset transforms.
```python
from datasets import set_caching_enabled
set_cach... | https://github.com/huggingface/datasets/pull/1703 | [
"I few comments here for discussion:\r\n- I'm not convinced yet the end user should really have to understand the difference between \"caching\" and 'fingerprinting\", what do you think? I think fingerprinting should probably stay as an internal thing. Is there a case where we want cahing without fingerprinting or ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1703",
"html_url": "https://github.com/huggingface/datasets/pull/1703",
"diff_url": "https://github.com/huggingface/datasets/pull/1703.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1703.patch",
"merged_at": "2021-01-19T17:32... | 1,703 | true |
Fix importlib metadata import in py38 | In Python 3.8 there's no need to install `importlib_metadata` since it already exists as `importlib.metadata` in the standard lib. | https://github.com/huggingface/datasets/pull/1702 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1702",
"html_url": "https://github.com/huggingface/datasets/pull/1702",
"diff_url": "https://github.com/huggingface/datasets/pull/1702.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1702.patch",
"merged_at": "2021-01-08T10:47... | 1,702 | true |
Some datasets miss dataset_infos.json or dummy_data.zip | While working on the dataset README generation script at https://github.com/madlag/datasets_readme_generator, I noticed that some datasets miss a dataset_infos.json:
```
c4
lm1b
reclor
wikihow
```
And some do not have a dummy_data.zip:
```
kor_nli
math_dataset
mlqa
ms_marco
newsgroup
qa4mre
qanga... | https://github.com/huggingface/datasets/issues/1701 | [
"Thanks for reporting.\r\nWe should indeed add all the missing dummy_data.zip and also the dataset_infos.json at least for lm1b, reclor and wikihow.\r\n\r\nFor c4 I haven't tested the script and I think we'll require some optimizations regarding beam datasets before processing it.\r\n",
"Closing since the dummy d... | null | 1,701 | false |
Update Curiosity dialogs DatasetCard | Update Curiosity dialogs DatasetCard
There are some entries in the data fields section yet to be filled. There is little information regarding those fields. | https://github.com/huggingface/datasets/pull/1700 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1700",
"html_url": "https://github.com/huggingface/datasets/pull/1700",
"diff_url": "https://github.com/huggingface/datasets/pull/1700.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1700.patch",
"merged_at": "2021-01-12T18:51... | 1,700 | true |
Update DBRD dataset card and download URL | I've added the Dutch Book Review Dataset (DBRD) during the recent sprint. This pull request makes two minor changes:
1. I'm changing the download URL from Google Drive to the dataset's GitHub release package. This is now possible because of PR #1316.
2. I've updated the dataset card.
Cheers! 😄 | https://github.com/huggingface/datasets/pull/1699 | [
"not sure why the CI was not triggered though"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1699",
"html_url": "https://github.com/huggingface/datasets/pull/1699",
"diff_url": "https://github.com/huggingface/datasets/pull/1699.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1699.patch",
"merged_at": "2021-01-07T13:40... | 1,699 | true |
Update Coached Conv Pref DatasetCard | Update Coached Conversation Preference DatasetCard
"Really cool!\r\n\r\nCan you add some task tags for `dialogue-modeling` (under `sequence-modeling`) and `parsing` (under `structured-prediction`)?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1698",
"html_url": "https://github.com/huggingface/datasets/pull/1698",
"diff_url": "https://github.com/huggingface/datasets/pull/1698.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1698.patch",
"merged_at": "2021-01-08T17:04... | 1,698 | true |
Update DialogRE DatasetCard | Update the information in the dataset card for the Dialog RE dataset. | https://github.com/huggingface/datasets/pull/1697 | [
"Same as #1698, can you add a task tag for dialogue-modeling (under sequence-modeling) :) ?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1697",
"html_url": "https://github.com/huggingface/datasets/pull/1697",
"diff_url": "https://github.com/huggingface/datasets/pull/1697.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1697.patch",
"merged_at": "2021-01-07T13:34... | 1,697 | true |
Unable to install datasets | ** Edit **
I believe there's a bug with the package when you're installing it with Python 3.9. I recommend sticking with previous versions. Thanks, @thomwolf for the insight!
**Short description**
I followed the instructions for installing datasets (https://huggingface.co/docs/datasets/installation.html). Howev... | https://github.com/huggingface/datasets/issues/1696 | [
"Maybe try to create a virtual env with python 3.8 or 3.7",
"Thanks, @thomwolf! I fixed the issue by downgrading python to 3.7. ",
"Damn sorry",
"Damn sorry"
] | null | 1,696 | false |
fix ner_tag bugs in thainer | fix a bug that results in `ner_tag` always being equal to 'O'.
"> Thanks :)\r\n> \r\n> Apparently the dummy_data.zip got removed. Is this expected ?\r\n> Also can you remove the `data-pos.conll` file that you added ?\r\n\r\nNot expected. I forgot to remove the `dummy_data` folder used to create `dummy_data.zip`. \r\nChanged to only `dummy_data.zip`."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1695",
"html_url": "https://github.com/huggingface/datasets/pull/1695",
"diff_url": "https://github.com/huggingface/datasets/pull/1695.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1695.patch",
"merged_at": "2021-01-07T14:43... | 1,695 | true |
Add OSCAR | Continuation of #348
The files have been moved to S3 and only the unshuffled version is available.
Both original and deduplicated versions of each language are available.
Example of usage:
```python
from datasets import load_dataset
oscar_dedup_en = load_dataset("oscar", "unshuffled_deduplicated_en", split="... | https://github.com/huggingface/datasets/pull/1694 | [
"Hi @lhoestq, on the OSCAR dataset, the document boundaries are defined by an empty line. Are there any chances to keep this empty line or explicitly group the sentences of a document? I'm asking for this 'cause I need to know if some sentences belong to the same document on my current OSCAR dataset usage.",
"Ind... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1694",
"html_url": "https://github.com/huggingface/datasets/pull/1694",
"diff_url": "https://github.com/huggingface/datasets/pull/1694.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1694.patch",
"merged_at": "2021-01-25T09:10... | 1,694 | true |
Fix reuters metadata parsing errors | Was missing the last entry in each metadata category | https://github.com/huggingface/datasets/pull/1693 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1693",
"html_url": "https://github.com/huggingface/datasets/pull/1693",
"diff_url": "https://github.com/huggingface/datasets/pull/1693.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1693.patch",
"merged_at": "2021-01-07T14:01... | 1,693 | true |
Updated HuggingFace Datasets README (fix typos) | Awesome work on 🤗 Datasets. I found a couple of small typos in the README. Hope this helps. | https://github.com/huggingface/datasets/pull/1691 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1691",
"html_url": "https://github.com/huggingface/datasets/pull/1691",
"diff_url": "https://github.com/huggingface/datasets/pull/1691.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1691.patch",
"merged_at": "2021-01-07T10:06... | 1,691 | true |
Fast start up | Currently if optional dependencies such as tensorflow, torch, apache_beam, faiss and elasticsearch are installed, then it takes a long time to do `import datasets` since it imports all of these heavy dependencies.
To make a fast start up for `datasets` I changed that so that they are not imported when `datasets` is ... | https://github.com/huggingface/datasets/pull/1690 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1690",
"html_url": "https://github.com/huggingface/datasets/pull/1690",
"diff_url": "https://github.com/huggingface/datasets/pull/1690.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1690.patch",
"merged_at": "2021-01-06T14:20... | 1,690 | true |
Fix ade_corpus_v2 config names | There are currently some typos in the config names of the `ade_corpus_v2` dataset, I fixed them:
- Ade_corpos_v2_classificaion -> Ade_corpus_v2_classification
- Ade_corpos_v2_drug_ade_relation -> Ade_corpus_v2_drug_ade_relation
- Ade_corpos_v2_drug_dosage_relation -> Ade_corpus_v2_drug_dosage_relation | https://github.com/huggingface/datasets/pull/1689 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1689",
"html_url": "https://github.com/huggingface/datasets/pull/1689",
"diff_url": "https://github.com/huggingface/datasets/pull/1689.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1689.patch",
"merged_at": "2021-01-05T14:55... | 1,689 | true |
Fix DaNE last example | The last example from the DaNE dataset is empty.
Fix #1686 | https://github.com/huggingface/datasets/pull/1688 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1688",
"html_url": "https://github.com/huggingface/datasets/pull/1688",
"diff_url": "https://github.com/huggingface/datasets/pull/1688.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1688.patch",
"merged_at": "2021-01-05T14:00... | 1,688 | true |
Question: Shouldn't .info be a part of DatasetDict? | Currently, only `Dataset` contains the .info or .features, but since many datasets contain standard splits (train, test), the underlying information is (or at least should be) the same across the splits.
For instance:
```
>>> ds = datasets.load_dataset("conll2002", "es")
>>> ds.info
Traceback (most rece... | https://github.com/huggingface/datasets/issues/1687 | [
"We could do something. There is a part of `.info` which is split specific (cache files, split instructions) but maybe if could be made to work.",
"Yes this was kinda the idea I was going for. DatasetDict.info would be the shared info amongs the datasets (maybe even some info on how they differ). "
] | null | 1,687 | false |
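Until something like this is added, the split-level `Dataset` objects already carry the info, which is the workaround implied by the discussion; a sketch:
```python
import datasets

ds = datasets.load_dataset("conll2002", "es")

# DatasetDict itself has no .info, but each split is a Dataset and does;
# for standard splits the shared parts (description, features) coincide.
print(ds["train"].info.description)
print(ds["train"].features == ds["test"].features)  # True
```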
Dataset Error: DaNE contains empty samples at the end | The DaNE dataset contains empty samples at the end. They are naturally easy to remove using a filter, but they should probably not be there to begin with, as they can cause errors.
```python
>>> import datasets
[...]
>>> dataset = datasets.load_dataset("dane")
[...]
>>> dataset["test"][-1]
{'dep_ids': [], 'dep_labels': ... | https://github.com/huggingface/datasets/issues/1686 | [
"Thanks for reporting, I opened a PR to fix that",
"One the PR is merged the fix will be available in the next release of `datasets`.\r\n\r\nIf you don't want to wait the next release you can still load the script from the master branch with\r\n\r\n```python\r\nload_dataset(\"dane\", script_version=\"master\")\r\... | null | 1,686 | false |
Update README.md of covid-tweets-japanese | Update README.md of covid-tweets-japanese added by PR https://github.com/huggingface/datasets/pull/1367 and https://github.com/huggingface/datasets/pull/1402.
- Update "Data Splits" to be more precise that no information is provided for now.
- old: [More Information Needed]
- new: No information about data spl... | https://github.com/huggingface/datasets/pull/1685 | [
"Thanks for reviewing and merging!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1685",
"html_url": "https://github.com/huggingface/datasets/pull/1685",
"diff_url": "https://github.com/huggingface/datasets/pull/1685.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1685.patch",
"merged_at": "2021-01-06T09:31... | 1,685 | true |
Add CANER Corpus | What does this PR do?
Adds the following dataset:
https://github.com/RamziSalah/Classical-Arabic-Named-Entity-Recognition-Corpus
Who can review?
@lhoestq | https://github.com/huggingface/datasets/pull/1684 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1684",
"html_url": "https://github.com/huggingface/datasets/pull/1684",
"diff_url": "https://github.com/huggingface/datasets/pull/1684.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1684.patch",
"merged_at": "2021-01-25T09:09... | 1,684 | true |
`ArrowInvalid` occurs while running `Dataset.map()` function for DPRContext | It seems to fail on the final batch ):
steps to reproduce:
```
from datasets import load_dataset
from elasticsearch import Elasticsearch
import torch
from transformers import file_utils, set_seed
from transformers import DPRContextEncoder, DPRContextEncoderTokenizerFast
MAX_SEQ_LENGTH = 256
ctx_encoder = DPRCon... | https://github.com/huggingface/datasets/issues/1683 | [
"Looks like the mapping function returns a dictionary with a 768-dim array in the `embeddings` field. Since the map is batched, we actually expect the `embeddings` field to be an array of shape (batch_size, 768) to have one embedding per example in the batch.\r\n\r\nTo fix that can you try to remove one of the `[0]... | null | 1,683 | false |
Don't use xlrd for xlsx files | Since the latest release of `xlrd` (2.0), support for xlsx files has been dropped.
Therefore we needed to use something else.
A good alternative is `openpyxl`, which also has an integration with pandas, so we can still call `pd.read_excel`.
I left the unused import of `openpyxl` in the dataset scripts to show users that ... | https://github.com/huggingface/datasets/pull/1682 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1682",
"html_url": "https://github.com/huggingface/datasets/pull/1682",
"diff_url": "https://github.com/huggingface/datasets/pull/1682.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1682.patch",
"merged_at": "2021-01-04T18:13... | 1,682 | true |
Dataset "dane" missing | the `dane` dataset appear to be missing in the latest version (1.1.3).
```python
>>> import datasets
>>> datasets.__version__
'1.1.3'
>>> "dane" in datasets.list_datasets()
True
```
As we can see, it should be present, but it doesn't seem to be findable when using `load_dataset`.
```python
>>> datasets.load... | https://github.com/huggingface/datasets/issues/1681 | [
"Hi @KennethEnevoldsen ,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of datasets.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of datasets using pip:\r\npip i... | null | 1,681 | false |
added TurkishProductReviews dataset | This PR adds the **Turkish Product Reviews Dataset, which contains 235,165 product reviews collected online: 220,284 positive and 14,881 negative**.
- **Repository:** [turkish-text-data](https://github.com/fthbrmnby/turkish-text-data)
- **Point of Contact:** Fatih Barmanbay - @fthbrmnby | https://github.com/huggingface/datasets/pull/1680 | [
"@lhoestq, can you please review this PR?",
"Thanks for the suggestions. Updates were made and dataset_infos.json file was created again."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1680",
"html_url": "https://github.com/huggingface/datasets/pull/1680",
"diff_url": "https://github.com/huggingface/datasets/pull/1680.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1680.patch",
"merged_at": "2021-01-04T18:15... | 1,680 | true |
Can't import cc100 dataset | There is an issue importing the cc100 dataset.
```
from datasets import load_dataset
dataset = load_dataset("cc100")
```
FileNotFoundError: Couldn't find file at https://raw.githubusercontent.com/huggingface/datasets/1.1.3/datasets/cc100/cc100.py
During handling of the above exception, another exception occur... | https://github.com/huggingface/datasets/issues/1679 | [
"cc100 was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `cc100` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\nlang = \"en\"\r\ndataset = load_dataset(\"cc100\", la... | null | 1,679 | false |
Switchboard Dialog Act Corpus added under `datasets/swda` | Switchboard Dialog Act Corpus
Intro:
The Switchboard Dialog Act Corpus (SwDA) extends the Switchboard-1 Telephone Speech Corpus, Release 2,
with turn/utterance-level dialog-act tags. The tags summarize syntactic, semantic, and pragmatic information
about the associated turn. The SwDA project was undertaken at UC ... | https://github.com/huggingface/datasets/pull/1678 | [
"@lhoestq Thank you for your detailed comments! I fixed everything you suggested.\r\n\r\nPlease let me know if I'm missing anything else.",
"It looks like the Transcript and Utterance objects are missing, maybe we can mention it in the README ? Or just add them ? @gmihaila @bhavitvyamalik ",
"Hi @lhoestq,\r\nI'... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1678",
"html_url": "https://github.com/huggingface/datasets/pull/1678",
"diff_url": "https://github.com/huggingface/datasets/pull/1678.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1678.patch",
"merged_at": "2021-01-05T10:06... | 1,678 | true |
Switchboard Dialog Act Corpus added under `datasets/swda` | Pleased to announce that I added my first dataset, the **Switchboard Dialog Act Corpus**.
I think this is an important dataset to add since it is the only one related to dialogue act classification.
Hope the pull request is ok. I wasn't able to see any special formatting for the pull request form.
The Swi... | https://github.com/huggingface/datasets/pull/1677 | [
"Need to fix code formatting."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1677",
"html_url": "https://github.com/huggingface/datasets/pull/1677",
"diff_url": "https://github.com/huggingface/datasets/pull/1677.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1677.patch",
"merged_at": null
} | 1,677 | true |
new version of Ted Talks IWSLT (WIT3) | In the previous iteration #1608 I had used language pairs, which created 21,582 configs (109*108)!!!
Now, TED talks in _each language_ are a separate config. So it's much cleaner with _just 109 configs_ (one for each language). Dummy files were created manually.
Locally I was able to clear the `python dataset... | https://github.com/huggingface/datasets/pull/1676 | [
"> Nice thank you ! Actually as it is a translation dataset we should probably have one configuration = one language pair no ?\r\n> \r\n> Could you use the same trick for this dataset ?\r\n\r\nI was looking for this input, infact I had written a long post on the Slack channel,...(_but unfortunately due to the holid... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1676",
"html_url": "https://github.com/huggingface/datasets/pull/1676",
"diff_url": "https://github.com/huggingface/datasets/pull/1676.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1676.patch",
"merged_at": "2021-01-14T10:10... | 1,676 | true |
Add the 800GB Pile dataset? | ## Adding a Dataset
- **Name:** The Pile
- **Description:** The Pile is a 825 GiB diverse, open source language modelling data set that consists of 22 smaller, high-quality datasets combined together. See [here](https://twitter.com/nabla_theta/status/1345130408170541056?s=20) for the Twitter announcement
- **Paper:*... | https://github.com/huggingface/datasets/issues/1675 | [
"The pile dataset would be very nice.\r\nBenchmarks show that pile trained models achieve better results than most of actually trained models",
"The pile can very easily be added and adapted using this [tfds implementation](https://github.com/EleutherAI/The-Pile/blob/master/the_pile/tfds_pile.py) from the repo. \... | null | 1,675 | false |
dutch_social can't be loaded | Hi all,
I'm trying to import the `dutch_social` dataset described [here](https://huggingface.co/datasets/dutch_social).
However, the code that should load the data doesn't seem to be working, in particular because the corresponding files can't be found at the provided links.
```
(base) Koens-MacBook-Pro:~ koe... | https://github.com/huggingface/datasets/issues/1674 | [
"exactly the same issue in some other datasets.\r\nDid you find any solution??\r\n",
"Hi @koenvandenberge and @alighofrani95!\r\nThe datasets you're experiencing issues with were most likely added recently to the `datasets` library, meaning they have not been released yet. They will be released with the v2 of the... | null | 1,674 | false |
Unable to Download Hindi Wikipedia Dataset | I used the datasets library in Python to load the wikipedia dataset with the Hindi config 20200501.hi, along with something called beam_runner='DirectRunner', and it keeps giving me the error that the file is not found. I have attached screenshots of both the error and the code. Please help me understand how to reso...
"Currently this dataset is only available when the library is installed from source since it was added after the last release.\r\n\r\nWe pin the dataset version with the library version so that people can have a reproducible dataset and processing when pinning the library.\r\n\r\nWe'll see if we can provide access ... | null | 1,673 | false |
load_dataset hang on file_lock | I am trying to load the squad dataset. Fails on Windows 10 but succeeds in Colab.
Transformers: 3.3.1
Datasets: 1.0.2
Windows 10 (also tested in WSL)
```
datasets.logging.set_verbosity_debug()
train_dataset = load_dataset('squad', split='train')
valid_dataset = load_dataset('squad', split='validat... | https://github.com/huggingface/datasets/issues/1672 | [
"Can you try to upgrade to a more recent version of datasets?",
"Thank, upgrading to 1.1.3 resolved the issue.",
"Having the same issue with `datasets 1.1.3` of `1.5.0` (both tracebacks look the same) and `kilt_wikipedia`, Ubuntu 20.04\r\n\r\n```py\r\nIn [1]: from datasets import load_dataset ... | null | 1,672 | false |
connection issue | Hi
I am getting this connection issue, resulting in large failures on the cloud. @lhoestq I appreciate your help on this.
If I want to keep the code the same, so not using save_to_disk/load_from_disk, but save the datasets in the way load_dataset reads from and copy the files in the same folder the datasets library r...
"Also, mayjor issue for me is the format issue, even if I go through changing the whole code to use load_from_disk, then if I do \r\n\r\nd = datasets.load_from_disk(\"imdb\")\r\nd = d[\"train\"][:10] => the format of this is no more in datasets format\r\nthis is different from you call load_datasets(\"train[10]\")\... | null | 1,671 | false |
wiki_dpr pre-processing performance | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multipro... | https://github.com/huggingface/datasets/issues/1670 | [
"Hi ! And thanks for the tips :) \r\n\r\nIndeed currently `wiki_dpr` takes some time to be processed.\r\nMultiprocessing for dataset generation is definitely going to speed up things.\r\n\r\nRegarding the index note that for the default configurations, the index is downloaded instead of being built, which avoid spe... | null | 1,670 | false |
wiki_dpr dataset pre-processesing performance | I've been working with wiki_dpr and noticed that the dataset processing is seriously impaired in performance [1]. It takes about 12h to process the entire dataset. Most of this time is simply loading and processing the data, but the actual indexing is also quite slow (3h).
I won't repeat the concerns around multipro... | https://github.com/huggingface/datasets/issues/1669 | [
"Sorry, double posted."
] | null | 1,669 | false |
xed_en_fi dataset Cleanup | Fix ClassLabel feature type and minor mistakes in the dataset card | https://github.com/huggingface/datasets/pull/1668 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1668",
"html_url": "https://github.com/huggingface/datasets/pull/1668",
"diff_url": "https://github.com/huggingface/datasets/pull/1668.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1668.patch",
"merged_at": "2020-12-30T17:22... | 1,668 | true |
Fix NER metric example in Overview notebook | Fix errors in `NER metric example` section in `Overview.ipynb`.
```
---------------------------------------------------------------------------
ImportError Traceback (most recent call last)
<ipython-input-37-ee559b166e25> in <module>()
----> 1 ner_metric = load_metric('seqeval')
... | https://github.com/huggingface/datasets/pull/1667 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1667",
"html_url": "https://github.com/huggingface/datasets/pull/1667",
"diff_url": "https://github.com/huggingface/datasets/pull/1667.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1667.patch",
"merged_at": "2020-12-30T17:21... | 1,667 | true |
Add language to dataset card for Makhzan dataset. | Add language to dataset card. | https://github.com/huggingface/datasets/pull/1666 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1666",
"html_url": "https://github.com/huggingface/datasets/pull/1666",
"diff_url": "https://github.com/huggingface/datasets/pull/1666.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1666.patch",
"merged_at": "2020-12-30T17:20... | 1,666 | true |
Add language to dataset card for Counter dataset. | Add language. | https://github.com/huggingface/datasets/pull/1665 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1665",
"html_url": "https://github.com/huggingface/datasets/pull/1665",
"diff_url": "https://github.com/huggingface/datasets/pull/1665.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1665.patch",
"merged_at": "2020-12-30T17:20... | 1,665 | true |
removed \n in labels | updated social_i_qa labels as per #1633 | https://github.com/huggingface/datasets/pull/1664 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1664",
"html_url": "https://github.com/huggingface/datasets/pull/1664",
"diff_url": "https://github.com/huggingface/datasets/pull/1664.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1664.patch",
"merged_at": "2020-12-30T17:18... | 1,664 | true |
update saving and loading methods for faiss index so to accept path l… | - Update saving and loading methods for faiss index so as to accept path-like objects from pathlib
The current code only supports using a string type to save and load a faiss index. This change makes it possible to use a string type OR a Path from [pathlib](https://docs.python.org/3/library/pathlib.html). The code bec...
"Seems ok for me, what do you think @lhoestq ?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1663",
"html_url": "https://github.com/huggingface/datasets/pull/1663",
"diff_url": "https://github.com/huggingface/datasets/pull/1663.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1663.patch",
"merged_at": "2021-01-18T09:27... | 1,663 | true |
Arrow file is too large when saving vector data | I computed the sentence embedding of each sentence of the bookcorpus data using BERT base and saved them to disk. I used 20M sentences and the obtained arrow file is about 59GB while the original text file is only about 1.3GB. Are there any ways to reduce the size of the arrow file? | https://github.com/huggingface/datasets/issues/1662 | [
"Hi !\r\nThe arrow file size is due to the embeddings. Indeed if they're stored as float32 then the total size of the embeddings is\r\n\r\n20 000 000 vectors * 768 dimensions * 4 bytes per dimension ~= 60GB\r\n\r\nIf you want to reduce the size you can consider using quantization for example, or maybe using dimensi... | null | 1,662 | false |
updated dataset cards | added dataset instances to the card. | https://github.com/huggingface/datasets/pull/1661 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1661",
"html_url": "https://github.com/huggingface/datasets/pull/1661",
"diff_url": "https://github.com/huggingface/datasets/pull/1661.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1661.patch",
"merged_at": "2020-12-30T17:15... | 1,661 | true |
add dataset info | https://github.com/huggingface/datasets/pull/1660 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1660",
"html_url": "https://github.com/huggingface/datasets/pull/1660",
"diff_url": "https://github.com/huggingface/datasets/pull/1660.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1660.patch",
"merged_at": "2020-12-30T17:04... | 1,660 | true | |
update dataset info | https://github.com/huggingface/datasets/pull/1659 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1659",
"html_url": "https://github.com/huggingface/datasets/pull/1659",
"diff_url": "https://github.com/huggingface/datasets/pull/1659.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1659.patch",
"merged_at": "2020-12-30T16:55... | 1,659 | true | |
brwac dataset: add instances and data splits info | https://github.com/huggingface/datasets/pull/1658 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1658",
"html_url": "https://github.com/huggingface/datasets/pull/1658",
"diff_url": "https://github.com/huggingface/datasets/pull/1658.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1658.patch",
"merged_at": "2020-12-30T16:54... | 1,658 | true | |
mac_morpho dataset: add data splits info | https://github.com/huggingface/datasets/pull/1657 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1657",
"html_url": "https://github.com/huggingface/datasets/pull/1657",
"diff_url": "https://github.com/huggingface/datasets/pull/1657.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1657.patch",
"merged_at": "2020-12-30T16:51... | 1,657 | true | |
assin 2 dataset: add instances and data splits info | https://github.com/huggingface/datasets/pull/1656 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1656",
"html_url": "https://github.com/huggingface/datasets/pull/1656",
"diff_url": "https://github.com/huggingface/datasets/pull/1656.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1656.patch",
"merged_at": "2020-12-30T16:50... | 1,656 | true | |
assin dataset: add instances and data splits info | https://github.com/huggingface/datasets/pull/1655 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1655",
"html_url": "https://github.com/huggingface/datasets/pull/1655",
"diff_url": "https://github.com/huggingface/datasets/pull/1655.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1655.patch",
"merged_at": "2020-12-30T16:50... | 1,655 | true | |
lener_br dataset: add instances and data splits info | https://github.com/huggingface/datasets/pull/1654 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1654",
"html_url": "https://github.com/huggingface/datasets/pull/1654",
"diff_url": "https://github.com/huggingface/datasets/pull/1654.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1654.patch",
"merged_at": "2020-12-30T16:49... | 1,654 | true | |
harem dataset: add data splits info | https://github.com/huggingface/datasets/pull/1653 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1653",
"html_url": "https://github.com/huggingface/datasets/pull/1653",
"diff_url": "https://github.com/huggingface/datasets/pull/1653.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1653.patch",
"merged_at": "2020-12-30T16:49... | 1,653 | true | |
Update dataset cards from previous sprint | This PR updates the dataset cards/readmes for the 4 approved PRs I submitted in the previous sprint. | https://github.com/huggingface/datasets/pull/1652 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1652",
"html_url": "https://github.com/huggingface/datasets/pull/1652",
"diff_url": "https://github.com/huggingface/datasets/pull/1652.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1652.patch",
"merged_at": "2020-12-30T16:48... | 1,652 | true |
Add twi wordsim353 | Added the citation information to the README file | https://github.com/huggingface/datasets/pull/1651 | [
"Well actually it looks like it was already added in #1428 \r\n\r\nMaybe we can close this one ? Or you wanted to make changes to this dataset ?",
"Thank you, it's just a modification of Readme. I added the missing citation.",
"Indeed thanks"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1651",
"html_url": "https://github.com/huggingface/datasets/pull/1651",
"diff_url": "https://github.com/huggingface/datasets/pull/1651.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1651.patch",
"merged_at": "2021-01-04T09:39... | 1,651 | true |
Update README.md | added dataset summary | https://github.com/huggingface/datasets/pull/1650 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1650",
"html_url": "https://github.com/huggingface/datasets/pull/1650",
"diff_url": "https://github.com/huggingface/datasets/pull/1650.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1650.patch",
"merged_at": "2020-12-29T10:43... | 1,650 | true |
Update README.md | Added information in the dataset card | https://github.com/huggingface/datasets/pull/1649 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1649",
"html_url": "https://github.com/huggingface/datasets/pull/1649",
"diff_url": "https://github.com/huggingface/datasets/pull/1649.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1649.patch",
"merged_at": "2020-12-29T10:43... | 1,649 | true |
Update README.md | added dataset summary | https://github.com/huggingface/datasets/pull/1648 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1648",
"html_url": "https://github.com/huggingface/datasets/pull/1648",
"diff_url": "https://github.com/huggingface/datasets/pull/1648.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1648.patch",
"merged_at": "2020-12-29T10:39... | 1,648 | true |
NarrativeQA fails to load with `load_dataset` | When loading the NarrativeQA dataset with `load_dataset('narrativeqa')` as given in the documentation [here](https://huggingface.co/datasets/narrativeqa), I receive a cascade of exceptions, ending with
FileNotFoundError: Couldn't find file locally at narrativeqa/narrativeqa.py, or remotely at
https://r... | https://github.com/huggingface/datasets/issues/1647 | [
"Hi @eric-mitchell,\r\nI think the issue might be that this dataset was added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip i... | null | 1,647 | false |
Add missing homepage in some dataset cards | In some dataset cards the homepage field in the `Dataset Description` section was missing/empty | https://github.com/huggingface/datasets/pull/1646 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1646",
"html_url": "https://github.com/huggingface/datasets/pull/1646",
"diff_url": "https://github.com/huggingface/datasets/pull/1646.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1646.patch",
"merged_at": "2021-01-04T14:08... | 1,646 | true |
Rename "part-of-speech-tagging" tag in some dataset cards | `part-of-speech-tagging` was not part of the tagging taxonomy under `structure-prediction` | https://github.com/huggingface/datasets/pull/1645 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1645",
"html_url": "https://github.com/huggingface/datasets/pull/1645",
"diff_url": "https://github.com/huggingface/datasets/pull/1645.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1645.patch",
"merged_at": "2021-01-07T10:08... | 1,645 | true |
HoVeR dataset fails to load | Hi! I'm getting an error when trying to load **HoVeR** dataset. Another one (**SQuAD**) does work for me. I'm using the latest (1.1.3) version of the library.
Steps to reproduce the error:
```python
>>> from datasets import load_dataset
>>> dataset = load_dataset("hover")
Traceback (most recent call last):
... | https://github.com/huggingface/datasets/issues/1644 | [
"Hover was added recently, that's why it wasn't available yet.\r\n\r\nTo load it you can just update `datasets`\r\n```\r\npip install --upgrade datasets\r\n```\r\n\r\nand then you can load `hover` with\r\n\r\n```python\r\nfrom datasets import load_dataset\r\n\r\ndataset = load_dataset(\"hover\")\r\n```"
] | null | 1,644 | false |
Dataset social_bias_frames 404 | ```
>>> from datasets import load_dataset
>>> dataset = load_dataset("social_bias_frames")
...
Downloading and preparing dataset social_bias_frames/default
...
~/.pyenv/versions/3.7.6/lib/python3.7/site-packages/datasets/utils/file_utils.py in get_from_cache(url, cache_dir, force_download, proxies, etag_timeout, ... | https://github.com/huggingface/datasets/issues/1643 | [
"I see, master is already fixed in https://github.com/huggingface/datasets/commit/9e058f098a0919efd03a136b9b9c3dec5076f626"
] | null | 1,643 | false |
Ollie dataset | This is the dataset used to train the Ollie open information extraction algorithm. It has over 21M sentences. See http://knowitall.github.io/ollie/ for more details. | https://github.com/huggingface/datasets/pull/1642 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1642",
"html_url": "https://github.com/huggingface/datasets/pull/1642",
"diff_url": "https://github.com/huggingface/datasets/pull/1642.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1642.patch",
"merged_at": "2021-01-04T13:35... | 1,642 | true |
muchocine dataset cannot be downloaded | ```python
---------------------------------------------------------------------------
FileNotFoundError Traceback (most recent call last)
/usr/local/lib/python3.6/dist-packages/datasets/load.py in prepare_module(path, script_version, download_config, download_mode, dataset, force_local_path, ... | https://github.com/huggingface/datasets/issues/1641 | [
"I have encountered the same error with `v1.0.1` and `v1.0.2` on both Windows and Linux environments. However, cloning the repo and using the path to the dataset's root directory worked for me. Even after having the dataset cached - passing the path is the only way (for now) to load the dataset.\r\n\r\n```python\r\... | null | 1,641 | false |
Fix "'BertTokenizerFast' object has no attribute 'max_len'" | Tensorflow 2.3.0 gives:
FutureWarning: The `max_len` attribute has been deprecated and will be removed in a future version, use `model_max_length` instead.
Tensorflow 2.4.0 gives:
AttributeError 'BertTokenizerFast' object has no attribute 'max_len' | https://github.com/huggingface/datasets/pull/1640 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1640",
"html_url": "https://github.com/huggingface/datasets/pull/1640",
"diff_url": "https://github.com/huggingface/datasets/pull/1640.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1640.patch",
"merged_at": "2020-12-28T17:26... | 1,640 | true |
bug with sst2 in glue | Hi
I am getting very low accuracy on SST-2. I investigated this and observed that for this dataset the sentences are tokenized, unlike the other datasets in GLUE, please see below.
Are there any alternatives I could use to get untokenized sentences? I am unfortunately under time pressure to report some results on ...
"Maybe you can use nltk's treebank detokenizer ?\r\n```python\r\nfrom nltk.tokenize.treebank import TreebankWordDetokenizer\r\n\r\nTreebankWordDetokenizer().detokenize(\"it 's a charming and often affecting journey . \".split())\r\n# \"it's a charming and often affecting journey.\"\r\n```",
"I am looking for alte... | null | 1,639 | false |
Add id_puisi dataset | Puisi (poem) is an Indonesian poetic form. The dataset contains 7,223 Indonesian puisi with their titles and authors. :) | https://github.com/huggingface/datasets/pull/1638 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1638",
"html_url": "https://github.com/huggingface/datasets/pull/1638",
"diff_url": "https://github.com/huggingface/datasets/pull/1638.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1638.patch",
"merged_at": "2020-12-30T16:34... | 1,638 | true |
Added `pn_summary` dataset | #1635
You did a great job with the fluent procedure for adding a dataset. I took the chance to add the dataset on my own. Thank you for your awesome job, and I hope this dataset makes researchers happy, specifically those interested in the Persian language (Farsi)!
"As always, I got stuck in the correct order of imports 😅\r\n@lhoestq, It's finished!",
"@lhoestq, It's done! Is there anything else that needs changes?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1637",
"html_url": "https://github.com/huggingface/datasets/pull/1637",
"diff_url": "https://github.com/huggingface/datasets/pull/1637.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1637.patch",
"merged_at": "2021-01-04T13:43... | 1,637 | true |
winogrande cannot be downloaded | Hi,
I am getting this error when trying to run the code on the cloud. Thank you for any suggestion and help on this @lhoestq
```
File "./finetune_trainer.py", line 318, in <module>
main()
File "./finetune_trainer.py", line 148, in main
for task in data_args.tasks]
File "./finetune_trainer.py", ... | https://github.com/huggingface/datasets/issues/1636 | [
"I have same issue for other datasets (`myanmar_news` in my case).\r\n\r\nA version of `datasets` runs correctly on my local machine (**without GPU**) which looking for the dataset at \r\n```\r\nhttps://raw.githubusercontent.com/huggingface/datasets/master/datasets/myanmar_news/myanmar_news.py\r\n```\r\n\r\nMeanwhi... | null | 1,636 | false |
Persian Abstractive/Extractive Text Summarization | Assembling datasets tailored to different tasks and languages is a valuable goal. It would be great to have this dataset included.
## Adding a Dataset
- **Name:** *pn-summary*
- **Description:** *A well-structured summarization dataset for the Persian language consists of 93,207 records. It is prepared for Abs... | https://github.com/huggingface/datasets/issues/1635 | [] | null | 1,635 | false |
Inspecting datasets per category | Hi
Is there a way I could get all NLI datasets/all QA datasets, to get some understanding of the available datasets per category? It is hard for me to inspect the datasets one by one on the webpage. Thanks for the suggestions @lhoestq
"That's interesting, can you tell me what you think would be useful to access to inspect a dataset?\r\n\r\nYou can filter them in the hub with the search by the way: https://huggingface.co/datasets have you seen it?",
"Hi @thomwolf \r\nthank you, I was not aware of this, I was looking into the data viewer linked ... | null | 1,634 | false |
social_i_qa wrong format of labels | Hi,
there is an extra "\n" in the labels of the social_i_qa dataset; no big deal, but I was wondering if you could remove it to make it consistent.
So the label is 'label': '1\n', not '1'.
thanks
```
>>> import datasets
>>> from datasets import load_dataset
>>> dataset = load_dataset(
... 'social_i_qa')
cahce dir /jul... | https://github.com/huggingface/datasets/issues/1633 | [
"@lhoestq, should I raise a PR for this? Just a minor change while reading labels text file",
"Sure feel free to open a PR thanks !"
] | null | 1,633 | false |
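While waiting for the fix (merged as #1664 above), the label can be patched on the user side; a sketch:
```python
from datasets import load_dataset

dataset = load_dataset("social_i_qa")

# Strip the stray trailing newline from each label.
dataset = dataset.map(lambda ex: {"label": ex["label"].strip()})
```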
SICK dataset | Hi, it would be great to have this dataset included. I might be missing something, but I could not find it in the list of already included datasets. Thank you.
## Adding a Dataset
- **Name:** SICK
- **Description:** SICK consists of about 10,000 English sentence pairs that include many examples of the lexical,... | https://github.com/huggingface/datasets/issues/1632 | [] | null | 1,632 | false |
Update README.md | I made a small change for the citation | https://github.com/huggingface/datasets/pull/1631 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1631",
"html_url": "https://github.com/huggingface/datasets/pull/1631",
"diff_url": "https://github.com/huggingface/datasets/pull/1631.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1631.patch",
"merged_at": "2020-12-28T17:16... | 1,631 | true |
Adding UKP Argument Aspect Similarity Corpus | Hi, it would be great to have this dataset included.
## Adding a Dataset
- **Name:** UKP Argument Aspect Similarity Corpus
- **Description:** The UKP Argument Aspect Similarity Corpus (UKP ASPECT) includes 3,595 sentence pairs over 28 controversial topics. Each sentence pair was annotated via crowdsourcing as ei... | https://github.com/huggingface/datasets/issues/1630 | [
"Adding a link to the guide on adding a dataset if someone want to give it a try: https://github.com/huggingface/datasets#add-a-new-dataset-to-the-hub\r\n\r\nwe should add this guide to the issue template @lhoestq ",
"thanks @thomwolf , this is added now. The template is correct, sorry my mistake not to include i... | null | 1,630 | false |
add wongnai_reviews test set labels | - add test set labels provided by @ekapolc
- refactor `star_rating` to a `datasets.features.ClassLabel` field | https://github.com/huggingface/datasets/pull/1629 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1629",
"html_url": "https://github.com/huggingface/datasets/pull/1629",
"diff_url": "https://github.com/huggingface/datasets/pull/1629.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1629.patch",
"merged_at": "2020-12-28T17:23... | 1,629 | true |
made suggested changes to hate-speech-and-offensive-language | https://github.com/huggingface/datasets/pull/1628 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1628",
"html_url": "https://github.com/huggingface/datasets/pull/1628",
"diff_url": "https://github.com/huggingface/datasets/pull/1628.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1628.patch",
"merged_at": "2020-12-28T10:11... | 1,628 | true | |
`Dataset.map` disable progress bar | I can't find anything to turn off the `tqdm` progress bars while running a preprocessing function using `Dataset.map`. I want something akin to `disable_tqdm=True` in `transformers`. Is there something like that? | https://github.com/huggingface/datasets/issues/1627 | [
"Progress bar can be disabled like this:\r\n```python\r\nfrom datasets.utils.logging import set_verbosity_error\r\nset_verbosity_error()\r\n```\r\n\r\nThere is this line in `Dataset.map`:\r\n```python\r\nnot_verbose = bool(logger.getEffectiveLevel() > WARNING)\r\n```\r\n\r\nSo any logging level higher than `WARNING... | null | 1,627 | false |
Fix dataset_dict.shuffle with single seed | Fix #1610
I added support for a single integer seed in `DatasetDict.shuffle`. Previously only a dictionary of seeds was allowed.
Moreover I added the missing `seed` parameter. Previously only `seeds` was allowed. | https://github.com/huggingface/datasets/pull/1626 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1626",
"html_url": "https://github.com/huggingface/datasets/pull/1626",
"diff_url": "https://github.com/huggingface/datasets/pull/1626.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1626.patch",
"merged_at": "2021-01-04T10:00... | 1,626 | true |
Fixed bug in the shape property | Fix to the bug reported in issue #1622. Just replaced `return tuple(self._indices.num_rows, self._data.num_columns)` by `return (self._indices.num_rows, self._data.num_columns)`. | https://github.com/huggingface/datasets/pull/1625 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1625",
"html_url": "https://github.com/huggingface/datasets/pull/1625",
"diff_url": "https://github.com/huggingface/datasets/pull/1625.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1625.patch",
"merged_at": "2020-12-23T14:13... | 1,625 | true |
Cannot download ade_corpus_v2 | I tried to get the dataset following this URL: https://huggingface.co/datasets/ade_corpus_v2
but received this error:
`Traceback (most recent call last):
File "/opt/anaconda3/lib/python3.7/site-packages/datasets/load.py", line 267, in prepare_module
local_path = cached_path(file_path, download_con... | https://github.com/huggingface/datasets/issues/1624 | [
"Hi @him1411, the dataset you are trying to load has been added during the community sprint and has not been released yet. It will be available with the v2 of `datasets`.\r\nFor now, you should be able to load the datasets after installing the latest (master) version of `datasets` using pip:\r\n`pip install git+htt... | null | 1,624 | false |
Add CLIMATE-FEVER dataset | As suggested by @SBrandeis, a fresh PR that adds CLIMATE-FEVER. Replaces PR #1579.
---
A dataset adopting the FEVER methodology that consists of 1,535 real-world claims regarding climate change collected on the internet. Each claim is accompanied by five manually annotated evidence sentences retrieved from the Eng...
"Thank you @lhoestq for your comments! 😄 I added your suggested changes, ran the tests and regenerated `dataset_infos.json` and `dummy_data`."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1623",
"html_url": "https://github.com/huggingface/datasets/pull/1623",
"diff_url": "https://github.com/huggingface/datasets/pull/1623.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1623.patch",
"merged_at": "2020-12-22T17:53... | 1,623 | true |
Can't call shape on the output of select() | I get the error `TypeError: tuple expected at most 1 argument, got 2` when calling `shape` on the output of `select()`.
It's line 531 in shape in arrow_dataset.py that causes the problem:
``return tuple(self._indices.num_rows, self._data.num_columns)``
This makes sense, since `tuple(num1, num2)` is not a valid call.... | https://github.com/huggingface/datasets/issues/1622 | [
"Indeed that's a typo, do you want to open a PR to fix it?",
"Yes, created a PR"
] | null | 1,622 | false |
updated dutch_social.py for loading jsonl (lines instead of list) files | The data loader is modified to load files on the fly. Earlier it was reading the entire file and then processing the records.
Please refer to the previous PR #1321
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1621",
"html_url": "https://github.com/huggingface/datasets/pull/1621",
"diff_url": "https://github.com/huggingface/datasets/pull/1621.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1621.patch",
"merged_at": "2020-12-23T11:51... | 1,621 | true |
Adding myPOS2017 dataset | myPOS Corpus (Myanmar Part-of-Speech Corpus) for Myanmar-language NLP research and development | https://github.com/huggingface/datasets/pull/1620 | [
"I've updated the code and Readme to reflect your comments.\r\nThank you very much,",
"looks like this PR includes changes about many other files than the ones for myPOS2017\r\n\r\nCould you open another branch and another PR please ?\r\n(or fix this branch)",
"Hi @hungluumfc ! Have you had a chance to fix this... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1620",
"html_url": "https://github.com/huggingface/datasets/pull/1620",
"diff_url": "https://github.com/huggingface/datasets/pull/1620.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1620.patch",
"merged_at": null
} | 1,620 | true |
data loader for reading comprehension task | added the doc2dial data loader and dummy data for the reading comprehension task. | https://github.com/huggingface/datasets/pull/1619 | [
"Thank you for all the feedback! I have updated the dummy data with a zip under 30KB, which needs to include at least one data instance from both document domain and dialog domain. Please let me know if it is still too big. Thanks!",
"Thank you again for the feedback! I am not too sure what the preferable style f... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1619",
"html_url": "https://github.com/huggingface/datasets/pull/1619",
"diff_url": "https://github.com/huggingface/datasets/pull/1619.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1619.patch",
"merged_at": "2020-12-28T10:32... | 1,619 | true |
Can't filter language:EN on https://huggingface.co/datasets | When visiting https://huggingface.co/datasets, I don't see an obvious way to filter only English datasets. This is unexpected for me, am I missing something? I'd expect English to be selectable in the language widget. This problem reproduced on Mozilla Firefox and MS Edge. | https://github.com/huggingface/datasets/issues/1618 | [
"…it would be cool to have a small search widget in the filter dropdown as you have a ton of languages now here! Closing this in the meantime."
] | null | 1,618 | false |
cifar10 initial commit | CIFAR-10 dataset. Didn't add the tagging since there are no vision-related tags. | https://github.com/huggingface/datasets/pull/1617 | [
"Yee a Computer Vision dataset!",
"Yep, the first one ! Thank @czabo "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1617",
"html_url": "https://github.com/huggingface/datasets/pull/1617",
"diff_url": "https://github.com/huggingface/datasets/pull/1617.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1617.patch",
"merged_at": "2020-12-22T10:11... | 1,617 | true |
added TurkishMovieSentiment dataset | This PR adds **TurkishMovieSentiment: a dataset of Turkish movie reviews.**
- **Homepage:** [https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks](https://www.kaggle.com/mustfkeskin/turkish-movie-sentiment-analysis-dataset/tasks)
- **Point of Contact:** [Mustafa Keskin](htt... | https://github.com/huggingface/datasets/pull/1616 | [
"> I just generated the dataset_infos.json file\r\n> \r\n> Thanks for adding this one !\r\n\r\nThank you very much for your support."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1616",
"html_url": "https://github.com/huggingface/datasets/pull/1616",
"diff_url": "https://github.com/huggingface/datasets/pull/1616.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1616.patch",
"merged_at": "2020-12-23T16:50... | 1,616 | true |
Bug: Can't download TriviaQA with `load_dataset` - custom `cache_dir` | Hello,
I'm having an issue downloading the TriviaQA dataset with `load_dataset`.
## Environment info
- `datasets` version: 1.1.3
- Platform: Linux-4.19.129-aufs-1-x86_64-with-debian-10.1
- Python version: 3.7.3
## The code I'm running:
```python
import datasets
dataset = datasets.load_dataset("trivia_qa", "rc", c... | https://github.com/huggingface/datasets/issues/1615 | [
"Hi @SapirWeissbuch,\r\nWhen you are saying it freezes, at that time it is unzipping the file from the zip file it downloaded. Since it's a very heavy file it'll take some time. It was taking ~11GB after unzipping when it started reading examples for me. Hope that helps!\r\n` for a distributed sampler which needs to make sure datasets are consistent across different cores, for this, this is really necessary for me to use torch generator, based on documentation this generator is not supported with datasets, I... | https://github.com/huggingface/datasets/issues/1611 | [
"Is there a way one can convert the two generator? not sure overall what alternatives I could have to shuffle the datasets with a torch generator, thanks ",
"@lhoestq let me please expalin in more details, maybe you could help me suggesting an alternative to solve the issue for now, I have multiple large dataset... | null | 1,611 | false |
shuffle does not accept seed | Hi
I need to shuffle the dataset, but this needs to be based on epoch+seed to be consistent across the cores. When I pass a seed to shuffle, it is not accepted. Could you assist me with this? thanks @lhoestq
| https://github.com/huggingface/datasets/issues/1610 | [
"Hi, did you check the doc on `shuffle`?\r\nhttps://huggingface.co/docs/datasets/package_reference/main_classes.html?datasets.Dataset.shuffle#datasets.Dataset.shuffle",
"Hi Thomas\r\nthanks for reponse, yes, I did checked it, but this does not work for me please see \r\n\r\n```\r\n(internship) rkarimi@italix17:/i... | null | 1,610 | false |
Not able to use 'jigsaw_toxicity_pred' dataset | When trying to use jigsaw_toxicity_pred dataset, like this in a [colab](https://colab.research.google.com/drive/1LwO2A5M2X5dvhkAFYE4D2CUT3WUdWnkn?usp=sharing):
```
from datasets import list_datasets, list_metrics, load_dataset, load_metric
ds = load_dataset("jigsaw_toxicity_pred")
```
I see below error:
>... | https://github.com/huggingface/datasets/issues/1609 | [
"Hi @jassimran,\r\nThe `jigsaw_toxicity_pred` dataset has not been released yet, it will be available with version 2 of `datasets`, coming soon.\r\nYou can still access it by installing the master (unreleased) version of datasets directly :\r\n`pip install git+https://github.com/huggingface/datasets.git@master`\r\n... | null | 1,609 | false |