title | body | html_url | comments | pull_request | number | is_pull_request
|---|---|---|---|---|---|---|
Add Hateful Memes Dataset | ## Add Hateful Memes Dataset
- **Name:** Hateful Memes
- **Description:** [https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set]( https://ai.facebook.com/blog/hateful-memes-challenge-and-data-set)
- **Paper:** [https://arxiv.org/pdf/2005.04790.pdf](https://arxiv.org/pdf/2005.04790.pdf)
- **Data:** [Thi... | https://github.com/huggingface/datasets/issues/1810 | [
"I am not sure, but would `datasets.Sequence(datasets.Sequence(datasets.Sequence(datasets.Value(\"int\")))` work?",
"Also, I found the information for loading only subsets of the data [here](https://github.com/huggingface/datasets/blob/master/docs/source/splits.rst).",
"Hi @lhoestq,\r\n\r\nRequest you to check ... | null | 1,810 | false |
Add FreebaseQA dataset | Adding FreebaseQA dataset suggested in PR #1435 with minor edits. Also closes that PR.
Requesting @lhoestq to review. | https://github.com/huggingface/datasets/pull/1809 | [
"Hi ! It looks like this PR contains changes about other datasets than freebase_qa such as DuoRC.\r\n\r\nCan you remove these changes please ?",
"Hi @lhoestq,\r\n\r\nI think this happened because of rebasing. I'm unable to remove the duorc commit from the branch. GEM, Arabic sarcasm datasets are also there. I ca... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1809",
"html_url": "https://github.com/huggingface/datasets/pull/1809",
"diff_url": "https://github.com/huggingface/datasets/pull/1809.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1809.patch",
"merged_at": null
} | 1,809 | true |
writing Datasets in a human readable format | Hi
I see there is a save_to_disk function to save data, but this is not a human-readable format. Is there a way I could save a Dataset object in a human-readable format to a file like JSON? thanks @lhoestq | https://github.com/huggingface/datasets/issues/1808 | [
"AFAIK, there is currently no built-in method on the `Dataset` object to do this.\r\nHowever, a workaround is to directly use the Arrow table backing the dataset, **but it implies loading the whole dataset in memory** (correct me if I'm mistaken @lhoestq).\r\n\r\nYou can convert the Arrow table to a pandas datafram... | null | 1,808 | false |
Adding an aggregated dataset for the GEM benchmark | This dataset gathers modified versions of several other conditional text generation datasets which together make up the shared task for the Generation Evaluation and Metrics workshop (think GLUE for text generation)
The changes from the original datasets are detailed in the Dataset Cards on the GEM website, which ar... | https://github.com/huggingface/datasets/pull/1807 | [
"Nice !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1807",
"html_url": "https://github.com/huggingface/datasets/pull/1807",
"diff_url": "https://github.com/huggingface/datasets/pull/1807.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1807.patch",
"merged_at": "2021-02-02T18:06... | 1,807 | true |
Update details to MLSUM dataset | Update details to MLSUM dataset | https://github.com/huggingface/datasets/pull/1806 | [
"Thanks!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1806",
"html_url": "https://github.com/huggingface/datasets/pull/1806",
"diff_url": "https://github.com/huggingface/datasets/pull/1806.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1806.patch",
"merged_at": "2021-02-01T18:46... | 1,806 | true |
can't pickle SwigPyObject objects when calling dataset.get_nearest_examples from FAISS index | So, I have the following instances in my dataset
```
{'question': 'An astronomer observes that a planet rotates faster after a meteorite impact. Which is the most likely effect of
this increase in rotation?',
'answer': 'C',
'example_id': 'ARCCH_Mercury_7175875',
'options':[{'option_context': 'One effect of ... | https://github.com/huggingface/datasets/issues/1805 | [
"Hi ! Indeed we used to require mapping functions to be picklable with `pickle` or `dill` in order to cache the resulting datasets. And FAISS indexes are not picklable unfortunately.\r\n\r\nBut since #1703 this is no longer required (the caching will simply be disabled). This change will be available in the next re... | null | 1,805 | false |
Add SICK dataset | Adds the SICK dataset (http://marcobaroni.org/composes/sick.html).
Closes #1772.
Edit: also closes #1632, which is the original issue requesting the dataset. The newer one is a duplicate. | https://github.com/huggingface/datasets/pull/1804 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1804",
"html_url": "https://github.com/huggingface/datasets/pull/1804",
"diff_url": "https://github.com/huggingface/datasets/pull/1804.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1804.patch",
"merged_at": "2021-02-05T15:49... | 1,804 | true |
Querying examples from big datasets is slower than small datasets | After some experiments with bookcorpus I noticed that querying examples from big datasets is slower than from small ones.
For example
```python
from datasets import load_dataset
b1 = load_dataset("bookcorpus", split="train[:1%]")
b50 = load_dataset("bookcorpus", split="train[:50%]")
b100 = load_dataset("bookcorp... | https://github.com/huggingface/datasets/issues/1803 | [
"Hello, @lhoestq / @gaceladri : We have been seeing similar behavior with bigger datasets, where querying time increases. Are you folks aware of any solution that fixes this problem yet? ",
"Hi ! I'm pretty sure that it can be fixed by using the Arrow IPC file format instead of the raw streaming format but I ha... | null | 1,803 | false |
add github of contributors | This PR will add contributors GitHub id at the end of every dataset cards. | https://github.com/huggingface/datasets/pull/1802 | [
"@lhoestq Can you confirm if this format is fine? I will update cards based on your feedback.",
"On HuggingFace side we also have a mapping of hf user => github user (GitHub info used to be required when signing up until not long ago – cc @gary149 @beurkinger) so we can also add a link to HF profile",
"All the ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1802",
"html_url": "https://github.com/huggingface/datasets/pull/1802",
"diff_url": "https://github.com/huggingface/datasets/pull/1802.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1802.patch",
"merged_at": "2021-02-03T10:06... | 1,802 | true |
[GEM] Updated the source link of the data to update correct tokenized version. | https://github.com/huggingface/datasets/pull/1801 | [
"@mounicam we'll keep the original version in the Turk dataset proper, and use the updated file in the GEM aggregated dataset which I'll add later today\r\n\r\n@lhoestq do not merge, I'll close when I've submitted the GEM dataset PR :) ",
"Closed by https://github.com/huggingface/datasets/pull/1807"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1801",
"html_url": "https://github.com/huggingface/datasets/pull/1801",
"diff_url": "https://github.com/huggingface/datasets/pull/1801.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1801.patch",
"merged_at": null
} | 1,801 | true | |
Add DuoRC Dataset | Hi,
DuoRC SelfRC is one type of the [DuoRC Dataset](https://duorc.github.io/). DuoRC SelfRC is a crowdsourced Abstractive/Extractive Question-Answering dataset based on Wikipedia movie plots. It contains examples that may have answers in the movie plot, synthesized answers which are not present in the movie plot, or... | https://github.com/huggingface/datasets/pull/1800 | [
"Thanks for approving @lhoestq!\r\nWill apply these changes for the other datasets I've added too."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1800",
"html_url": "https://github.com/huggingface/datasets/pull/1800",
"diff_url": "https://github.com/huggingface/datasets/pull/1800.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1800.patch",
"merged_at": "2021-02-02T22:49... | 1,800 | true |
Update: SWDA - Fixed code to use all metadata features. Added comments and cleaned c… | This is a dataset I currently use in my research, and I realized some features are not being returned.
The previous code was not using all available metadata and was kind of messy.
I fixed the code to use all metadata and made some modifications to be more efficient and better formatted.
Please let me know if I need to ma... | https://github.com/huggingface/datasets/pull/1799 | [
"@yjernite Pushed all the changes you recommended. Thank you for your help!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1799",
"html_url": "https://github.com/huggingface/datasets/pull/1799",
"diff_url": "https://github.com/huggingface/datasets/pull/1799.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1799.patch",
"merged_at": "2021-02-09T15:49... | 1,799 | true |
Add Arabic sarcasm dataset | This MIT license dataset: https://github.com/iabufarha/ArSarcasm
Via https://sites.google.com/view/ar-sarcasm-sentiment-detection/ | https://github.com/huggingface/datasets/pull/1798 | [
"@lhoestq thanks for the comments - I believe these are now addressed. I re-generated the datasets_info.json and dummy data"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1798",
"html_url": "https://github.com/huggingface/datasets/pull/1798",
"diff_url": "https://github.com/huggingface/datasets/pull/1798.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1798.patch",
"merged_at": "2021-02-03T10:35... | 1,798 | true |
Connection error | Hi
I am hitting the error below; any help would be appreciated, thanks.
`train_data = datasets.load_dataset("xsum", split="train")`
`ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.0.2/datasets/xsum/xsum.py` | https://github.com/huggingface/datasets/issues/1797 | [
"Hi ! For future references let me add a link to our discussion here : https://github.com/huggingface/datasets/issues/759#issuecomment-770684693\r\n\r\nLet me know if you manage to fix your proxy issue or if we can do something on our end to help you :)"
] | null | 1,797 | false |
Filter on dataset too much slowww | I have a dataset with 50M rows.
For pre-processing, I need to tokenize this and filter out rows with overly long sequences.
My tokenization took roughly 12mins. I used `map()` with batch size 1024 and multi-process with 96 processes.
When I applied the `filter()` function it is taking too much time. I need to filter se... | https://github.com/huggingface/datasets/issues/1796 | [
"When I use the filter on the arrow table directly, it works like butter. But I can't find a way to update the table in `Dataset` object.\r\n\r\n```\r\nds_table = dataset.data.filter(mask=dataset['flag'])\r\n```",
"@thomwolf @lhoestq can you guys please take a look and recommend some solution.",
"Hi ! Currently... | null | 1,796 | false |
Custom formatting for lazy map + arrow data extraction refactor | Hi !
This PR refactors the way data are extracted from pyarrow tables to extend it to the use of custom formatting functions.
While the internal storage of the dataset is always the Apache Arrow format, by setting a specific format on a dataset, you can cast the output of `datasets.Dataset.__getitem__` in NumPy/p... | https://github.com/huggingface/datasets/pull/1795 | [
"This PR is amazing!!!\r\n\r\nI only looked at `arrow_dataset.py` and `formatting/formatting.py` but those look good to me.\r\n\r\nMy only (tiny) concern is the name of the function: I don't think it's self-evident that `set_format` applies a generic transformation, and some people might not look too far into the d... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1795",
"html_url": "https://github.com/huggingface/datasets/pull/1795",
"diff_url": "https://github.com/huggingface/datasets/pull/1795.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1795.patch",
"merged_at": "2021-02-05T09:54... | 1,795 | true |
Move silicone directory | The dataset was added in #1761 but not in the right directory. I'm moving it to /datasets | https://github.com/huggingface/datasets/pull/1794 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1794",
"html_url": "https://github.com/huggingface/datasets/pull/1794",
"diff_url": "https://github.com/huggingface/datasets/pull/1794.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1794.patch",
"merged_at": "2021-01-29T16:31... | 1,794 | true |
Minor fix the docstring of load_metric | Minor fix:
- duplicated attributes
- format fix | https://github.com/huggingface/datasets/pull/1793 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1793",
"html_url": "https://github.com/huggingface/datasets/pull/1793",
"diff_url": "https://github.com/huggingface/datasets/pull/1793.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1793.patch",
"merged_at": "2021-01-29T16:53... | 1,793 | true |
Allow loading dataset in-memory | Allow loading datasets either from:
- memory-mapped file (current implementation)
- from file descriptor, copying data to physical memory
Close #708 | https://github.com/huggingface/datasets/pull/1792 | [
"I am wondering how to test their difference...",
"> ring how to test their difference...\r\n\r\nHmm I don't think pyarrow exposes an API to check if a Table comes from a file that is memory-mapped. In particular since all the buffer/memory logic is in the C++ part of pyarrow.\r\n\r\nOtherwise we can still check ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1792",
"html_url": "https://github.com/huggingface/datasets/pull/1792",
"diff_url": "https://github.com/huggingface/datasets/pull/1792.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1792.patch",
"merged_at": "2021-02-12T14:13... | 1,792 | true |
Small fix with corrected logging of train vectors | Now you can set `train_size` to the whole dataset size via `train_size = -1` and login writes not `Training the index with the first -1 vectors` but (for example) `Training the index with the first 16123 vectors`. And maybe more than dataset length. Logging will be correct | https://github.com/huggingface/datasets/pull/1791 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1791",
"html_url": "https://github.com/huggingface/datasets/pull/1791",
"diff_url": "https://github.com/huggingface/datasets/pull/1791.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1791.patch",
"merged_at": "2021-01-29T17:05... | 1,791 | true |
ModuleNotFoundError: No module named 'apache_beam', when specific languages. | ```py
import datasets
wiki = datasets.load_dataset('wikipedia', '20200501.ja', cache_dir='./datasets')
```
then `ModuleNotFoundError: No module named 'apache_beam'` happened.
The error doesn't appear when it's '20200501.en'.
I don't know Apache Beam, but according to #498 it isn't necessary when it's saved to lo... | https://github.com/huggingface/datasets/issues/1790 | [
"Hi !\r\n\r\nApache Beam is a framework used to define data transformation pipelines. These pipeline can then be run in many runtimes: DataFlow, Spark, Flink, etc. There also exist a local runner called the DirectRunner.\r\nWikipedia is a dataset that requires some parsing, so to allow the processing to be run on t... | null | 1,790 | false |
[BUG FIX] typo in the import path for metrics | This tiny PR fixes a typo introduced in https://github.com/huggingface/datasets/pull/1726 which prevents loading new metrics | https://github.com/huggingface/datasets/pull/1789 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1789",
"html_url": "https://github.com/huggingface/datasets/pull/1789",
"diff_url": "https://github.com/huggingface/datasets/pull/1789.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1789.patch",
"merged_at": "2021-01-28T18:13... | 1,789 | true |
Doc2dial rc | https://github.com/huggingface/datasets/pull/1788 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1788",
"html_url": "https://github.com/huggingface/datasets/pull/1788",
"diff_url": "https://github.com/huggingface/datasets/pull/1788.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1788.patch",
"merged_at": null
} | 1,788 | true | |
Update the CommonGen citation information | https://github.com/huggingface/datasets/pull/1787 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1787",
"html_url": "https://github.com/huggingface/datasets/pull/1787",
"diff_url": "https://github.com/huggingface/datasets/pull/1787.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1787.patch",
"merged_at": "2021-01-28T13:56... | 1,787 | true | |
How to use split dataset | 
Hey,
I want to split the lambada dataset into corpus, test, train and valid txt files (like penn treebank) but I am not able to achieve this. What I am doing is, executing the lambada.py file in my pro... | https://github.com/huggingface/datasets/issues/1786 | [
"By default, all 3 splits will be loaded if you run the following:\r\n\r\n```python\r\nfrom datasets import load_dataset\r\ndataset = load_dataset(\"lambada\")\r\nprint(dataset[\"train\"])\r\nprint(dataset[\"valid\"])\r\n\r\n```\r\n\r\nIf you wanted to do load this manually, you could do this:\r\n\r\n```python\r\nf... | null | 1,786 | false |
Not enough disk space (Needed: Unknown size) when caching on a cluster | I'm running some experiments where I'm caching datasets on a cluster and accessing it through multiple compute nodes. However, I get an error when loading the cached dataset from the shared disk.
The exact error thrown:
```bash
>>> load_dataset(dataset, cache_dir="/path/to/cluster/shared/path")
OSError: Not eno... | https://github.com/huggingface/datasets/issues/1785 | [
"Hi ! \r\n\r\nWhat do you mean by \"disk_usage(\".\").free` can't compute on the cluster's shared disk\" exactly ?\r\nDoes it return 0 ?",
"Yes, that's right. It shows 0 free space even though there is. I suspect it might have to do with permissions on the shared disk.\r\n\r\n```python\r\n>>> disk_usage(\".\")\r\... | null | 1,785 | false |
JSONDecodeError on JSON with multiple lines | Hello :),
I have been trying to load data using a JSON file. Based on the [docs](https://huggingface.co/docs/datasets/loading_datasets.html#json-files), the following format is supported:
```json
{"key1":11, "key2":12, "key3":13}
{"key1":21, "key2":22, "key3":23}
```
But, when I try loading a dataset with th... | https://github.com/huggingface/datasets/issues/1784 | [
"Hi !\r\n\r\nThe `json` dataset script does support this format. For example loading a dataset with this format works on my side:\r\n```json\r\n{\"key1\":11, \"key2\":12, \"key3\":13}\r\n{\"key1\":21, \"key2\":22, \"key3\":23}\r\n```\r\n\r\nCan you show the full stacktrace please ? Also which version of datasets an... | null | 1,784 | false |
Dataset Examples Explorer | In the older version of Datasets, there was a useful Dataset Explorer that allowed users to visualize the examples (training, test and validation) of a particular dataset; it is no longer there in the current version.
Hope HuggingFace can re-enable the feature and at least allow viewing of the first 20 examples of a ... | https://github.com/huggingface/datasets/issues/1783 | [
"Hi @ChewKokWah,\r\n\r\nWe're working on it! In the meantime, you can still find the dataset explorer at the following URL: https://huggingface.co/datasets/viewer/",
"Glad to see that it still exist, this existing one is more than good enough for me, it is feature rich, simple to use and concise. \r\nHope similar... | null | 1,783 | false |
Update pyarrow import warning | Update the minimum version to >=0.17.1 in the pyarrow version check and update the message.
I also moved the check to the top of `__init__.py`. | https://github.com/huggingface/datasets/pull/1782 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1782",
"html_url": "https://github.com/huggingface/datasets/pull/1782",
"diff_url": "https://github.com/huggingface/datasets/pull/1782.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1782.patch",
"merged_at": "2021-01-26T13:50... | 1,782 | true |
AttributeError: module 'pyarrow' has no attribute 'PyExtensionType' during import | I'm using Colab, and suddenly this morning there is this error. Have a look below!

| https://github.com/huggingface/datasets/issues/1781 | [
"Hi ! I'm not able to reproduce the issue. Can you try restarting your runtime ?\r\n\r\nThe PyExtensionType is available in pyarrow starting 0.17.1 iirc. If restarting your runtime doesn't fix this, can you try updating pyarrow ?\r\n```\r\npip install pyarrow --upgrade\r\n```",
"We should bump up the version test... | null | 1,781 | false |
Update SciFact URL | Hi,
I'm following up this [issue](https://github.com/huggingface/datasets/issues/1717). I'm the SciFact dataset creator, and I'm trying to update the SciFact data url in your repo. Thanks again for adding the dataset!
Basically, I'd just like to change the `_URL` to `"https://scifact.s3-us-west-2.amazonaws.com/re... | https://github.com/huggingface/datasets/pull/1780 | [
"Hi ! The error you get is the result of some verifications the library is doing when loading a dataset that already has some metadata in the dataset_infos.json. You can ignore the verifications with \r\n```\r\npython datasets-cli test datasets/scifact --save_infos --all_configs --ignore_verifications\r\n```\r\nThi... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1780",
"html_url": "https://github.com/huggingface/datasets/pull/1780",
"diff_url": "https://github.com/huggingface/datasets/pull/1780.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1780.patch",
"merged_at": "2021-01-28T10:19... | 1,780 | true |
Ignore definition line number of functions for caching | As noticed in #1718 , when a function used for processing with `map` is moved inside its python file, then the change of line number causes the caching mechanism to consider it as a different function. Therefore in this case, it recomputes everything.
This is because we were not ignoring the line number definition f... | https://github.com/huggingface/datasets/pull/1779 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1779",
"html_url": "https://github.com/huggingface/datasets/pull/1779",
"diff_url": "https://github.com/huggingface/datasets/pull/1779.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1779.patch",
"merged_at": "2021-01-26T10:20... | 1,779 | true |
Narrative QA Manual | Submitting the manual version of Narrative QA script which requires a manual download from the original repository | https://github.com/huggingface/datasets/pull/1778 | [
"@lhoestq sorry I opened a new pull request because of some issues with the previous code base. This pull request is originally from #1364",
"Excellent comments. Thanks for those valuable suggestions. I changed everything as you have pointed out :) ",
"I've copied the same template as NarrativeQA now. Please le... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1778",
"html_url": "https://github.com/huggingface/datasets/pull/1778",
"diff_url": "https://github.com/huggingface/datasets/pull/1778.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1778.patch",
"merged_at": "2021-01-29T09:34... | 1,778 | true |
GPT2 MNLI training using run_glue.py | Edit: I'm closing this because I actually meant to post this in `transformers `not `datasets`
Running this on Google Colab,
```
!python run_glue.py \
--model_name_or_path gpt2 \
--task_name mnli \
--do_train \
--do_eval \
--max_seq_length 128 \
--per_gpu_train_batch_size 10 \
--gradient_accu... | https://github.com/huggingface/datasets/issues/1777 | [] | null | 1,777 | false |
[Question & Bug Report] Can we preprocess a dataset on the fly? | I know we can use `Datasets.map` to preprocess a dataset, but I'm using it with a very large corpus which generates a huge cache file (several TB of cache from a 400 GB text file). I have no disk large enough to save it. Can we preprocess a dataset on the fly without generating a cache?
BTW, I tried raising `writer_batch_si... | https://github.com/huggingface/datasets/issues/1776 | [
"We are very actively working on this. How does your dataset look like in practice (number/size/type of files)?",
"It's a text file with many lines (about 1B) of Chinese sentences. I use it to train language model using https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm... | null | 1,776 | false |
Efficient ways to iterate the dataset | For a large dataset that does not fit in memory, how can I select only a subset of features from each example?
If I iterate over the dataset and then select the subset of features one by one, the resulting memory usage will be huge. Any way to solve this?
Thanks | https://github.com/huggingface/datasets/issues/1775 | [
"It seems that selecting a subset of colums directly from the dataset, i.e., dataset[\"column\"], is slow.",
"I was wrong, ```dataset[\"column\"]``` is fast."
] | null | 1,775 | false |
is it possible to make slice to be more compatible like python list and numpy? | Hi,
see below error:
```
AssertionError: Requested slice [:10000000000000000] incompatible with 20 examples.
``` | https://github.com/huggingface/datasets/issues/1774 | [
"Hi ! Thanks for reporting.\r\nI am working on changes in the way data are sliced from arrow. I can probably fix your issue with the changes I'm doing.\r\nIf you have some code to reproduce the issue it would be nice so I can make sure that this case will be supported :)\r\nI'll make a PR in a few days ",
"Good i... | null | 1,774 | false |
bug in loading datasets | Hi,
I need to load a dataset, I use these commands:
```
from datasets import load_dataset
dataset = load_dataset('csv', data_files={'train': 'sick/train.csv',
'test': 'sick/test.csv',
'validation': 'sick/validation.csv'})
prin... | https://github.com/huggingface/datasets/issues/1773 | [
"Looks like an issue with your csv file. Did you use the right delimiter ?\r\nApparently at line 37 the CSV reader from pandas reads 2 fields instead of 1.",
"Note that you can pass any argument you would pass to `pandas.read_csv` as kwargs to `load_dataset`. For example you can do\r\n```python\r\nfrom datasets i... | null | 1,773 | false |
Adding SICK dataset | Hi
It would be great to include SICK dataset.
## Adding a Dataset
- **Name:** SICK
- **Description:** a well known entailment dataset
- **Paper:** http://marcobaroni.org/composes/sick.html
- **Data:** http://marcobaroni.org/composes/sick.html
- **Motivation:** this is an important NLI benchmark
Instruction... | https://github.com/huggingface/datasets/issues/1772 | [] | null | 1,772 | false |
Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.1/datasets/csv/csv.py | Hi,
When I load_dataset from local csv files, the error below happened; it looks like raw.githubusercontent.com was blocked by the Chinese government. But why does it need to download csv.py? Shouldn't it be included when pip installing the package?
```
Traceback (most recent call last):
File "/home/tom/pyenv/pystory/lib/python3.6/site-p... | https://github.com/huggingface/datasets/issues/1771 | [
"I temporary manually download csv.py as custom dataset loading script",
"Indeed in 1.2.1 the script to process csv file is downloaded. Starting from the next release though we include the csv processing directly in the library.\r\nSee PR #1726 \r\nWe'll do a new release soon :)",
"Thanks."
] | null | 1,771 | false |
how can I combine 2 dataset with different/same features? | To combine 2 datasets by a one-to-one map, like `ds = zip(ds1, ds2)`:
ds1: {'text'}, ds2: {'text'}, combine ds:{'src', 'tgt'}
or with different features:
ds1: {'src'}, ds2: {'tgt'}, combine ds:{'src', 'tgt'} | https://github.com/huggingface/datasets/issues/1770 | [
"Hi ! Currently we don't have a way to `zip` datasets but we plan to add this soon :)\r\nFor now you'll need to use `map` to add the fields from one dataset to the other. See the comment here for more info : https://github.com/huggingface/datasets/issues/853#issuecomment-727872188",
"Good to hear.\r\nCurrently I ... | null | 1,770 | false |
_pickle.PicklingError: Can't pickle typing.Union[str, NoneType]: it's not the same object as typing.Union when calling datasets.map with num_proc=2 | It may be a bug in multiprocessing with Datasets; when I disable multiprocessing by setting num_proc to None, everything works fine.
The script I use is https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_mlm_wwm.py
Script args:
```
--model_name_or_path
../../../model/chine... | https://github.com/huggingface/datasets/issues/1769 | [
"More information: `run_mlm.py` will raise same error when `data_args.line_by_line==True`\r\n\r\nhttps://github.com/huggingface/transformers/blob/9152f16023b59d262b51573714b40325c8e49370/examples/language-modeling/run_mlm.py#L300\r\n",
"Hi ! What version of python and datasets do you have ? And also what version ... | null | 1,769 | false |
Mention kwargs in the Dataset Formatting docs | Hi,
This was discussed in Issue #1762 where the docs didn't mention that keyword arguments to `datasets.Dataset.set_format()` are allowed.
To prevent people from having to check the code/method docs, I just added a couple of lines in the docs.
Please let me know your thoughts on this.
Thanks,
Gunjan
@lho... | https://github.com/huggingface/datasets/pull/1768 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1768",
"html_url": "https://github.com/huggingface/datasets/pull/1768",
"diff_url": "https://github.com/huggingface/datasets/pull/1768.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1768.patch",
"merged_at": "2021-01-25T09:14... | 1,768 | true |
Add Librispeech ASR | This PR adds the librispeech asr dataset: https://www.tensorflow.org/datasets/catalog/librispeech
There are 2 configs: "clean" and "other" whereas there are two "train" datasets for "clean", hence the name "train.100" and "train.360".
As suggested by @lhoestq, due to the enormous size of the dataset in `.arrow` f... | https://github.com/huggingface/datasets/pull/1767 | [
"> Awesome thank you !\r\n> \r\n> The dummy data are quite big but it was expected given that the raw files are flac files.\r\n> Given that the script doesn't even read the flac files I think we can remove them. Or maybe use empty flac files (see [here](https://hydrogenaud.io/index.php?topic=118685.0) for example).... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1767",
"html_url": "https://github.com/huggingface/datasets/pull/1767",
"diff_url": "https://github.com/huggingface/datasets/pull/1767.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1767.patch",
"merged_at": "2021-01-25T20:37... | 1,767 | true |
Issues when run two programs compute the same metrics | I got the following error when running two different programs that both compute sacrebleu metrics. It seems that both read and write to the same location (.cache/huggingface/metrics/sacrebleu/default/default_experiment-1-0.arrow) where it caches the batches:
```
File "train_matching_min.py", line 160, in <module>ch... | https://github.com/huggingface/datasets/issues/1766 | [
"Hi ! To avoid collisions you can specify a `experiment_id` when instantiating your metric using `load_metric`. It will replace \"default_experiment\" with the experiment id that you provide in the arrow filename. \r\n\r\nAlso when two `experiment_id` collide we're supposed to detect it using our locking mechanism.... | null | 1,766 | false |
Error iterating over Dataset with DataLoader | I have a Dataset that I've mapped a tokenizer over:
```
encoded_dataset.set_format(type='torch',columns=['attention_mask','input_ids','token_type_ids'])
encoded_dataset[:1]
```
```
{'attention_mask': tensor([[1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1]]),
'input_ids': tensor([[ 101, 178, 1198, 1400, 1714, 22233, 2... | https://github.com/huggingface/datasets/issues/1765 | [
"Instead of:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_sampler=32)\r\n```\r\nIt should be:\r\n```python\r\ndataloader = torch.utils.data.DataLoader(encoded_dataset, batch_size=32)\r\n```\r\n\r\n`batch_sampler` accepts a Sampler object or an Iterable, so you get an error.",
"@... | null | 1,765 | false |
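The distinction behind the fix above: `batch_size` is an int, while `batch_sampler` expects a sampler or iterable of index batches. What `batch_size=32` does can be sketched without `torch` by a minimal batching helper (an illustration, not PyTorch's implementation):

```python
def iter_batches(dataset, batch_size):
    # Mimics DataLoader(dataset, batch_size=...): yield consecutive
    # slices of `batch_size` examples (the last batch may be smaller).
    batch = []
    for example in dataset:
        batch.append(example)
        if len(batch) == batch_size:
            yield batch
            batch = []
    if batch:
        yield batch

data = list(range(10))
batches = list(iter_batches(data, batch_size=4))
```

Passing an int where an iterable of index lists is expected (the `batch_sampler` case) is what triggered the original error.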
Connection Issues | Today, I am getting connection issues while loading a dataset and the metric.
```
Traceback (most recent call last):
File "src/train.py", line 180, in <module>
train_dataset, dev_dataset, test_dataset = create_race_dataset()
File "src/train.py", line 130, in create_race_dataset
train_dataset = load_da... | https://github.com/huggingface/datasets/issues/1764 | [
"Academic WIFI was blocking."
] | null | 1,764 | false |
PAWS-X: Fix csv Dictreader splitting data on quotes |
```python
from datasets import load_dataset
# load english paws-x dataset
datasets = load_dataset('paws-x', 'en')
print(len(datasets['train'])) # outputs 49202 but official dataset has 49401 pairs
print(datasets['train'].unique('label')) # outputs [1, 0, -1] but labels are binary [0,1]
... | https://github.com/huggingface/datasets/pull/1763 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1763",
"html_url": "https://github.com/huggingface/datasets/pull/1763",
"diff_url": "https://github.com/huggingface/datasets/pull/1763.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1763.patch",
"merged_at": "2021-01-22T10:13... | 1,763 | true |
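The quoting bug this PR fixes is easy to reproduce with the stdlib: when a tab-separated field begins with a quote character, `csv`'s default quoting consumes text across the delimiter. A small demonstration (not the actual paws-x loader):

```python
import csv
import io

# A tab-separated row whose second field begins with a quote character.
line = 'id1\t"a\tb"\t1\n'

# Default quoting: the leading quote opens a quoted field, so the tab
# inside it is swallowed and the quotes are stripped -> 3 fields.
default_fields = next(csv.reader(io.StringIO(line), delimiter="\t"))

# QUOTE_NONE treats quotes as ordinary characters, so the row splits
# on every tab exactly like str.split -> 4 fields.
safe_fields = next(csv.reader(io.StringIO(line), delimiter="\t",
                              quoting=csv.QUOTE_NONE))
```

This is how rows were silently merged in paws-x, producing too few pairs and the spurious `-1` label.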
Unable to format dataset to CUDA Tensors | Hi,
I came across this [link](https://huggingface.co/docs/datasets/torch_tensorflow.html) where the docs show how to convert a dataset to a particular format. I see that there is an option to convert it to tensors, but I don't see any option to convert it to CUDA tensors.
I tried this, but Dataset doesn't suppor... | https://github.com/huggingface/datasets/issues/1762 | [
"Hi ! You can get CUDA tensors with\r\n\r\n```python\r\ndataset.set_format(\"torch\", columns=columns, device=\"cuda\")\r\n```\r\n\r\nIndeed `set_format` passes the `**kwargs` to `torch.tensor`",
"Hi @lhoestq,\r\n\r\nThanks a lot. Is this true for all format types?\r\n\r\nAs in, for 'torch', I can have `**kwargs`... | null | 1,762 | false |
Add SILICONE benchmark | My collaborators and I within the Affective Computing team at Telecom Paris would like to re-submit our spoken dialogue dataset for publication.
This is a new pull request relative to the [previously closed request](https://github.com/huggingface/datasets/pull/1712) which was reviewed by @lhoestq.
| https://github.com/huggingface/datasets/pull/1761 | [
"Thanks for the feedback. All your comments have been addressed!",
"Thank you for your constructive feedback! I now know how to best format future datasets that our team plans to publish in the near future :)",
"Awesome ! Looking forward to it :) ",
"Hi @lhoestq ! One last question. Our research team would li... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1761",
"html_url": "https://github.com/huggingface/datasets/pull/1761",
"diff_url": "https://github.com/huggingface/datasets/pull/1761.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1761.patch",
"merged_at": "2021-01-26T13:50... | 1,761 | true |
More tags | Since the hub v2 is going to be released soon I figured it would be great to add the missing tags at least for some of the datasets of reference listed [here](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#write-the-loadingprocessing-code) | https://github.com/huggingface/datasets/pull/1760 | [
"Conll has `multilingual` but is only tagged as `en`",
"good catch, that was a bad copy paste x)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1760",
"html_url": "https://github.com/huggingface/datasets/pull/1760",
"diff_url": "https://github.com/huggingface/datasets/pull/1760.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1760.patch",
"merged_at": "2021-01-22T09:40... | 1,760 | true |
wikipedia dataset incomplete | Hey guys,
I am using the https://github.com/huggingface/datasets/tree/master/datasets/wikipedia dataset.
Unfortunately, I found out that the German dataset is incomplete.
For reasons unknown to me, the number of inhabitants has been removed from many pages:
Thorey-sur-Ouche has 128 inhabitants a... | https://github.com/huggingface/datasets/issues/1759 | [
"Hi !\r\nFrom what pickle file fo you get this ?\r\nI guess you mean the dataset loaded using `load_dataset` ?",
"yes sorry, I used the `load_dataset`function and saved the data to a pickle file so I don't always have to reload it and are able to work offline. ",
"The wikipedia articles are processed using the ... | null | 1,759 | false |
dataset.search() (elastic) cannot reliably retrieve search results | I am trying to use elastic search to retrieve the indices of items in the dataset in their precise order, given shuffled training indices.
The problem I have is that I cannot retrieve reliable results with my data on my first search. I have to run the search **twice** to get the right answer.
I am indexing data t... | https://github.com/huggingface/datasets/issues/1758 | [
"Hi !\r\nI tried your code on my side and I was able to workaround this issue by waiting a few seconds before querying the index.\r\nMaybe this is because the index is not updated yet on the ElasticSearch side ?",
"Thanks for the feedback! I added a 30 second \"sleep\" and that seemed to work well!"
] | null | 1,758 | false |
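The 30-second sleep that worked in the comments above can be replaced by a small retry loop that polls until the index reports the expected hits. This sketch uses a stand-in search callable rather than a real Elasticsearch client:

```python
import time

def search_when_ready(search_fn, expected_hits, retries=5, delay=0.01):
    # Poll until the index reports the expected number of hits,
    # instead of assuming indexing is instantaneous.
    for _ in range(retries):
        hits = search_fn()
        if len(hits) >= expected_hits:
            return hits
        time.sleep(delay)
    return hits

# Stand-in index that only becomes "refreshed" on the third call.
calls = {"n": 0}
def fake_search():
    calls["n"] += 1
    return [1, 2, 3] if calls["n"] >= 3 else []

results = search_when_ready(fake_search, expected_hits=3)
```

Polling avoids both the unreliable first query and a fixed worst-case wait.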
FewRel | ## Adding a Dataset
- **Name:** FewRel
- **Description:** Large-Scale Supervised Few-Shot Relation Classification Dataset
- **Paper:** @inproceedings{han2018fewrel,
title={FewRel:A Large-Scale Supervised Few-Shot Relation Classification Dataset with State-of-the-Art Evaluation},
auth... | https://github.com/huggingface/datasets/issues/1757 | [
"+1",
"@dspoka Please check the following link : https://github.com/thunlp/FewRel\r\nThis link mentions two versions of the datasets. Also, this one seems to be the official link.\r\n\r\nI am assuming this is the correct link and implementing based on the same.",
"Hi @lhoestq,\r\n\r\nThis issue can be closed, I... | null | 1,757 | false |
Ccaligned multilingual translation dataset | ## Adding a Dataset
- **Name:** *name of the dataset*
- **Description:** *short description of the dataset (or link to social media or blog post)*
- CCAligned consists of parallel or comparable web-document pairs in 137 languages aligned with English. These web-document pairs were constructed by performing language ... | https://github.com/huggingface/datasets/issues/1756 | [] | null | 1,756 | false |
Using select/reordering datasets slows operations down immensely | I am using portions of HF's helpful work in preparing / scoring the SQuAD 2.0 data. The problem I have is that after using `select` to re-order the dataset, computations slow down immensely: the total scoring process on 131k training examples, which would take maybe 3 minutes, now takes over an hour.
The below examp... | https://github.com/huggingface/datasets/issues/1755 | [
"You can use `Dataset.flatten_indices()` to make it fast after a select or shuffle.",
"Thanks for the input! I gave that a try by adding this after my selection / reordering operations, but before the big computation task of `score_squad`\r\n\r\n```\r\nexamples = examples.flatten_indices()\r\nfeatures = features.... | null | 1,755 | false |
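The slowdown reported above comes from `select` storing an indices mapping instead of rewriting the data, so every row access pays an extra indirection; `flatten_indices()` materializes the reordered rows once. A toy sketch of the two access patterns (illustrative, not the Arrow-backed implementation):

```python
class ToyDataset:
    def __init__(self, rows, indices=None):
        self.rows = rows
        self.indices = indices  # None means rows are stored in order

    def select(self, indices):
        # Cheap: only records the mapping, rows are untouched.
        return ToyDataset(self.rows, list(indices))

    def flatten_indices(self):
        # One-off cost: materialize rows in the selected order,
        # after which lookups are direct again.
        if self.indices is None:
            return self
        return ToyDataset([self.rows[i] for i in self.indices])

    def __getitem__(self, i):
        if self.indices is None:
            return self.rows[i]
        return self.rows[self.indices[i]]  # extra indirection per access

ds = ToyDataset(["a", "b", "c", "d"]).select([3, 1])
flat = ds.flatten_indices()
```

In the real library the indirection also defeats contiguous Arrow reads, which is why flattening after a `select` or `shuffle` restores the original speed.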
Use a config id in the cache directory names for custom configs | As noticed by @JetRunner there were some issues when trying to generate a dataset using a custom config that is based on an existing config.
For example in the following code the `mnli_custom` would reuse the cache used to create `mnli` instead of generating a new dataset with the new label classes:
```python
from ... | https://github.com/huggingface/datasets/pull/1754 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1754",
"html_url": "https://github.com/huggingface/datasets/pull/1754",
"diff_url": "https://github.com/huggingface/datasets/pull/1754.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1754.patch",
"merged_at": "2021-01-25T09:12... | 1,754 | true |
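The idea behind this fix, deriving a distinct cache identifier from the custom config parameters so `mnli_custom` no longer reuses the `mnli` cache, can be sketched with a deterministic hash over the overridden kwargs (the naming scheme below is an assumption for illustration, not the library's exact one):

```python
import hashlib

def config_cache_id(config_name, custom_kwargs):
    # Same config name + same overrides -> same cache dir;
    # any changed parameter (e.g. new label classes) -> new dir.
    if not custom_kwargs:
        return config_name
    payload = repr(sorted(custom_kwargs.items())).encode("utf-8")
    suffix = hashlib.sha256(payload).hexdigest()[:8]
    return f"{config_name}-{suffix}"

base = config_cache_id("mnli", {})
custom = config_cache_id("mnli", {"label_classes": ["contradiction", "entailment", "neutral"]})
same_custom = config_cache_id("mnli", {"label_classes": ["contradiction", "entailment", "neutral"]})
```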
fix comet citations | I realized COMET citations were not showing in the hugging face metrics page:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105164848-8b9da900-5b0d-11eb-9e20-a38f559d2037.png">
This pull request is intended to fix that.
Thanks! | https://github.com/huggingface/datasets/pull/1753 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1753",
"html_url": "https://github.com/huggingface/datasets/pull/1753",
"diff_url": "https://github.com/huggingface/datasets/pull/1753.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1753.patch",
"merged_at": "2021-01-20T14:39... | 1,753 | true |
COMET metric citation | In my last pull request to add COMET metric, the citations where not following the usual "format". Because of that they where not correctly displayed on the website:
<img width="814" alt="Screenshot 2021-01-20 at 09 48 44" src="https://user-images.githubusercontent.com/17256847/105158000-686efb80-5b05-11eb-8bb0-9c8... | https://github.com/huggingface/datasets/pull/1752 | [
"I think its better to create a new branch with this fix. I forgot I was still using the old branch."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1752",
"html_url": "https://github.com/huggingface/datasets/pull/1752",
"diff_url": "https://github.com/huggingface/datasets/pull/1752.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1752.patch",
"merged_at": null
} | 1,752 | true |
Updated README for the Social Bias Frames dataset | See the updated card at https://github.com/mcmillanmajora/datasets/tree/add-SBIC-card/datasets/social_bias_frames. I incorporated information from the [SBIC data statement](https://homes.cs.washington.edu/~msap/social-bias-frames/DATASTATEMENT.html), paper, and the corpus README file included with the dataset download. | https://github.com/huggingface/datasets/pull/1751 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1751",
"html_url": "https://github.com/huggingface/datasets/pull/1751",
"diff_url": "https://github.com/huggingface/datasets/pull/1751.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1751.patch",
"merged_at": "2021-01-20T14:56... | 1,751 | true |
Fix typo in README.md of cnn_dailymail | When I read the README.md of `CNN/DailyMail Dataset`, there seems to be a typo `CCN`.
I am afraid this is a trivial matter, but I would like to make a suggestion for revision. | https://github.com/huggingface/datasets/pull/1750 | [
"Good catch, thanks!",
"Thank you for merging!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1750",
"html_url": "https://github.com/huggingface/datasets/pull/1750",
"diff_url": "https://github.com/huggingface/datasets/pull/1750.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1750.patch",
"merged_at": "2021-01-19T09:48... | 1,750 | true |
Added metadata and correct splits for swda. | Switchboard Dialog Act Corpus
I made some changes following @bhavitvyamalik recommendation in #1678:
* Contains all metadata.
* Used official implementation from the [/swda](https://github.com/cgpotts/swda) repo.
* Add official train and test splits used in [Stolcke et al. (2000)](https://web.stanford.edu/~jur... | https://github.com/huggingface/datasets/pull/1749 | [
"I will push updates tomorrow.",
"@lhoestq thank you for your comments! I went ahead and fixed the code 😃. Please let me know if I missed anything."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1749",
"html_url": "https://github.com/huggingface/datasets/pull/1749",
"diff_url": "https://github.com/huggingface/datasets/pull/1749.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1749.patch",
"merged_at": "2021-01-29T18:38... | 1,749 | true |
add Stuctured Argument Extraction for Korean dataset | https://github.com/huggingface/datasets/pull/1748 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1748",
"html_url": "https://github.com/huggingface/datasets/pull/1748",
"diff_url": "https://github.com/huggingface/datasets/pull/1748.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1748.patch",
"merged_at": "2021-01-19T11:26... | 1,748 | true | |
datasets slicing with seed | Hi
I need to slice a dataset with a random seed. I looked into the documentation here: https://huggingface.co/docs/datasets/splits.html
I could not find a seed option; could you please tell me how I can get a slice for different seeds?
thank you.
@lhoestq | https://github.com/huggingface/datasets/issues/1747 | [
"Hi :) \r\nThe slicing API from https://huggingface.co/docs/datasets/splits.html doesn't shuffle the data.\r\nYou can shuffle and then take a subset of your dataset with\r\n```python\r\n# shuffle and take the first 100 examples\r\ndataset = dataset.shuffle(seed=42).select(range(100))\r\n```\r\n\r\nYou can find more... | null | 1,747 | false |
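The answer above (`dataset.shuffle(seed=42).select(range(100))`) is reproducible because the permutation is fully determined by the seed. The same idea in stdlib Python (a stand-in for the library's shuffle, not its implementation):

```python
import random

def seeded_slice(examples, seed, n):
    # Deterministic: the same seed always yields the same subset.
    indices = list(range(len(examples)))
    random.Random(seed).shuffle(indices)
    return [examples[i] for i in indices[:n]]

data = list(range(1000))
first = seeded_slice(data, seed=42, n=5)
second = seeded_slice(data, seed=42, n=5)
```

Varying the seed gives different (but each individually reproducible) slices.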
Fix release conda worflow | The current workflow yaml file is not valid according to https://github.com/huggingface/datasets/actions/runs/487638110 | https://github.com/huggingface/datasets/pull/1746 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1746",
"html_url": "https://github.com/huggingface/datasets/pull/1746",
"diff_url": "https://github.com/huggingface/datasets/pull/1746.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1746.patch",
"merged_at": "2021-01-18T11:31... | 1,746 | true |
difference between wsc and wsc.fixed for superglue | Hi
I see two versions of wsc in superglue, and I am not sure what the differences are and which one is the original. Could you help clarify the differences? thanks @lhoestq
"From the description given in the dataset script for `wsc.fixed`:\r\n```\r\nThis version fixes issues where the spans are not actually substrings of the text.\r\n```"
] | null | 1,745 | false |
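The quoted difference, that `wsc.fixed` repairs spans which are not actual substrings of the text, boils down to a simple invariant. A hedged sketch of the check (the field layout here is illustrative, not the exact SuperGLUE schema):

```python
def span_is_consistent(text, span_text, span_start):
    # The "fixed" variant guarantees that the annotated span really
    # occurs in the passage at the recorded character offset.
    return text[span_start:span_start + len(span_text)] == span_text

text = "Mark told Pete many lies about himself."
good = span_is_consistent(text, "Pete", 10)
bad = span_is_consistent(text, "Peter", 10)
```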
Add missing "brief" entries to reuters | This brings the number of examples for ModApte to match the stated `Training set (9,603 docs)...Test Set (3,299 docs)` | https://github.com/huggingface/datasets/pull/1744 | [
"@lhoestq I ran `make style` but CI code quality still failing and I don't have access to logs",
"It's also likely that due to the previous placement of the field initialization, much of the data about topics etc was simply wrong and carried over from previous entries. Model scores seem to improve significantly w... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1744",
"html_url": "https://github.com/huggingface/datasets/pull/1744",
"diff_url": "https://github.com/huggingface/datasets/pull/1744.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1744.patch",
"merged_at": "2021-01-18T11:26... | 1,744 | true |
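The "carried over from previous entries" behavior mentioned in the comments is a classic parsing bug: per-record fields initialized once before the loop instead of being reset for each record. A minimal sketch of the bug and the fix (toy records, not the real ModApte parser):

```python
def parse_buggy(records):
    parsed = []
    topics = []  # initialized once: stale values leak into later records
    for rec in records:
        if "topics" in rec:
            topics = rec["topics"]
        parsed.append({"id": rec["id"], "topics": topics})
    return parsed

def parse_fixed(records):
    parsed = []
    for rec in records:
        topics = rec.get("topics", [])  # reset for every record
        parsed.append({"id": rec["id"], "topics": topics})
    return parsed

records = [{"id": 1, "topics": ["grain"]}, {"id": 2}]  # second record has no topics
buggy = parse_buggy(records)
fixed = parse_fixed(records)
```

In the buggy version the second record inherits the first record's topics, which matches the wrong carried-over metadata described above.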
Issue while Creating Custom Metric | Hi Team,
I am trying to create a custom metric for my training as follows, where f1 is my own metric:
```python
def _info(self):
# TODO: Specifies the datasets.MetricInfo object
return datasets.MetricInfo(
# This is the description that will appear on the metrics page.
... | https://github.com/huggingface/datasets/issues/1743 | [
"Currently it's only possible to define the features for the two columns `references` and `predictions`.\r\nThe data for these columns can then be passed to `metric.add_batch` and `metric.compute`.\r\nInstead of defining more columns `text`, `offset_mapping` and `ground` you must include them in either references a... | null | 1,743 | false |
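The constraint explained above, that a metric only defines the `predictions` and `references` columns and everything else must be packed into them, can be illustrated with a small binary F1 helper taking exactly those two arguments (a sketch, not the `datasets.Metric` API itself):

```python
def f1_binary(predictions, references):
    # Plain binary F1 over the two columns a Metric receives.
    tp = sum(1 for p, r in zip(predictions, references) if p == 1 and r == 1)
    fp = sum(1 for p, r in zip(predictions, references) if p == 1 and r == 0)
    fn = sum(1 for p, r in zip(predictions, references) if p == 0 and r == 1)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

score = f1_binary(predictions=[1, 0, 1, 1], references=[1, 0, 0, 1])
```

Extra per-example information (offsets, raw text) would have to be encoded inside these two columns rather than passed as additional ones.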
Add GLUE Compat (compatible with transformers<3.5.0) | Link to our discussion on Slack (HF internal)
https://huggingface.slack.com/archives/C014N4749J9/p1609668119337400
The next step is to add a compatible option in the new `run_glue.py`
I duplicated `glue` and made the following changes:
1. Change the name to `glue_compat`.
2. Change the label assignments for MN... | https://github.com/huggingface/datasets/pull/1742 | [
"Maybe it would be simpler to just overwrite the order of the label classes of the `glue` dataset ?\r\n```python\r\nmnli = load_dataset(\"glue\", \"mnli\", label_classes=[\"contradiction\", \"entailment\", \"neutral\"])\r\n```",
"Sounds good. Will close the issue if that works."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1742",
"html_url": "https://github.com/huggingface/datasets/pull/1742",
"diff_url": "https://github.com/huggingface/datasets/pull/1742.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1742.patch",
"merged_at": null
} | 1,742 | true |
error when running fine-tuning on text classification | dataset:sem_eval_2014_task_1
pretrained_model:bert-base-uncased
error description:
when I use these resources to fine-tune a text classification model on sem_eval_2014_task_1, there is always a problem (the error also occurs when I use other datasets). And I followed the colab code (url:https://colab.researc...
"none"
] | null | 1,741 | false |
add id_liputan6 dataset | id_liputan6 is a large-scale Indonesian summarization dataset. The articles were harvested from an online news portal, and obtain 215,827 document-summary pairs: https://arxiv.org/abs/2011.00679 | https://github.com/huggingface/datasets/pull/1740 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1740",
"html_url": "https://github.com/huggingface/datasets/pull/1740",
"diff_url": "https://github.com/huggingface/datasets/pull/1740.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1740.patch",
"merged_at": "2021-01-20T13:41... | 1,740 | true |
fixes and improvements for the WebNLG loader | - fixes test sets loading in v3.0
- adds additional fields for v3.0_ru
- adds info to the WebNLG data card | https://github.com/huggingface/datasets/pull/1739 | [
"The dataset card is fantastic!\r\n\r\nLooks good to me! Did you check that this still passes the slow tests with the existing dummy data?",
"Yes, I ran and passed all the tests specified in [this guide](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md#automatically-add-code-metadata), inclu... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1739",
"html_url": "https://github.com/huggingface/datasets/pull/1739",
"diff_url": "https://github.com/huggingface/datasets/pull/1739.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1739.patch",
"merged_at": "2021-01-29T10:53... | 1,739 | true |
Conda support | Will push a new version on anaconda cloud every time a tag starting with `v` is pushed (like `v1.2.2`).
Will appear here: https://anaconda.org/huggingface/datasets
Depends on `conda-forge` for now, so the following is required for installation:
```
conda install -c huggingface -c conda-forge datasets
``` | https://github.com/huggingface/datasets/pull/1738 | [
"Nice thanks :) \r\nNote that in `datasets` the tags are simply the version without the `v`. For example `1.2.1`.",
"Do you push tags only for versions?",
"Yes I've always used tags only for versions"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1738",
"html_url": "https://github.com/huggingface/datasets/pull/1738",
"diff_url": "https://github.com/huggingface/datasets/pull/1738.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1738.patch",
"merged_at": "2021-01-15T10:08... | 1,738 | true |
update link in TLC to be github links | Based on this issue https://github.com/huggingface/datasets/issues/1064, I can now use the official links.
| https://github.com/huggingface/datasets/pull/1737 | [
"Thanks for updating this!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1737",
"html_url": "https://github.com/huggingface/datasets/pull/1737",
"diff_url": "https://github.com/huggingface/datasets/pull/1737.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1737.patch",
"merged_at": "2021-01-14T10:25... | 1,737 | true |
Adjust BrWaC dataset features name | I added this dataset some days ago, and today I used it to train some models and realized that the names of the features aren't so good.
Looking at the current features hierarchy, we have "paragraphs" with a list of "sentences" with a list of "sentences?!". But the actual hierarchy is a "text" with a list of "paragr... | https://github.com/huggingface/datasets/pull/1736 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1736",
"html_url": "https://github.com/huggingface/datasets/pull/1736",
"diff_url": "https://github.com/huggingface/datasets/pull/1736.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1736.patch",
"merged_at": "2021-01-14T10:29... | 1,736 | true |
Update add new dataset template | This PR fixes a few typos in the "Add new dataset template" and clarifies a bit what to do for the dummy data creation when the `auto_generate` flag can't work. | https://github.com/huggingface/datasets/pull/1735 | [
"Add new \"dataset\"? ;)",
"Lol, too used to Transformers ;-)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1735",
"html_url": "https://github.com/huggingface/datasets/pull/1735",
"diff_url": "https://github.com/huggingface/datasets/pull/1735.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1735.patch",
"merged_at": "2021-01-14T15:16... | 1,735 | true |
Fix empty token bug for `thainer` and `lst20` | add a condition to check if tokens exist before yielding in `thainer` and `lst20` | https://github.com/huggingface/datasets/pull/1734 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1734",
"html_url": "https://github.com/huggingface/datasets/pull/1734",
"diff_url": "https://github.com/huggingface/datasets/pull/1734.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1734.patch",
"merged_at": "2021-01-14T10:42... | 1,734 | true |
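The guard added by this PR, yielding only when the token list is non-empty, is a common pattern in dataset generators; sketched here with a toy generator (not the actual `thainer`/`lst20` scripts):

```python
def generate_examples(sentences):
    # Skip blank sentences so no example is emitted with an empty
    # token list (which would break downstream feature encoding).
    idx = 0
    for sentence in sentences:
        tokens = sentence.split()
        if tokens:
            yield idx, {"tokens": tokens}
            idx += 1

examples = list(generate_examples(["hello world", "", "   ", "ok"]))
```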
connection issue with glue, what is the data url for glue? | Hi
my code sometimes fails due to a connection issue with glue. Could you tell me the URL the datasets library is trying to read GLUE from, so I can test whether the issue is on my side or not
thanks | https://github.com/huggingface/datasets/issues/1733 | [
"Hello @juliahane, which config of GLUE causes you trouble?\r\nThe URLs are defined in the dataset script source code: https://github.com/huggingface/datasets/blob/master/datasets/glue/glue.py"
] | null | 1,733 | false |
[GEM Dataset] Added TurkCorpus, an evaluation dataset for sentence simplification. | We want to use TurkCorpus for validation and testing of the sentence simplification task. | https://github.com/huggingface/datasets/pull/1732 | [
"Thank you for the feedback! I updated the code. "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1732",
"html_url": "https://github.com/huggingface/datasets/pull/1732",
"diff_url": "https://github.com/huggingface/datasets/pull/1732.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1732.patch",
"merged_at": "2021-01-14T10:19... | 1,732 | true |
Couldn't reach swda.py | ConnectionError: Couldn't reach https://raw.githubusercontent.com/huggingface/datasets/1.2.0/datasets/swda/swda.py
| https://github.com/huggingface/datasets/issues/1731 | [
"Hi @yangp725,\r\nThe SWDA has been added very recently and has not been released yet, thus it is not available in the `1.2.0` version of 🤗`datasets`.\r\nYou can still access it by installing the latest version of the library (master branch), by following instructions in [this issue](https://github.com/huggingface... | null | 1,731 | false |
Add MNIST dataset | This PR adds the MNIST dataset to the library. | https://github.com/huggingface/datasets/pull/1730 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1730",
"html_url": "https://github.com/huggingface/datasets/pull/1730",
"diff_url": "https://github.com/huggingface/datasets/pull/1730.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1730.patch",
"merged_at": "2021-01-13T10:19... | 1,730 | true |
Is there support for Deep learning datasets? | I looked around this repository and, looking at the datasets, I think there's no support for image datasets. Or am I missing something? For example, to add a repo like this https://github.com/DZPeru/fish-datasets
"Hi @ZurMaD!\r\nThanks for your interest in 🤗 `datasets`. Support for image datasets is at an early stage, with CIFAR-10 added in #1617 \r\nMNIST is also on the way: #1730 \r\n\r\nIf you feel like adding another image dataset, I would advise starting by reading the [ADD_NEW_DATASET.md](https://github.com/huggingfa... | null | 1,729 | false |
Add an entry to an arrow dataset | Is it possible to add an entry to a dataset object?
**Motivation: I want to transform the sentences in the dataset and add them to the original dataset**
For example, say we have the following code:
``` python
from datasets import load_dataset
# Load a dataset and print the first examples in the training s... | https://github.com/huggingface/datasets/issues/1728 | [
"Hi @ameet-1997,\r\nI think what you are looking for is the `concatenate_datasets` function: https://huggingface.co/docs/datasets/processing.html?highlight=concatenate#concatenate-several-datasets\r\n\r\nFor your use case, I would use the [`map` method](https://huggingface.co/docs/datasets/processing.html?highlight... | null | 1,728 | false |
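The two suggestions in the comments, transforming examples with a `map`-style function and appending them with a `concatenate`-style function, can be sketched over plain lists of example dicts (stand-ins for `Dataset.map` and `concatenate_datasets`, not the real API):

```python
def map_examples(dataset, fn):
    # Apply a transform to a copy of every example, like Dataset.map.
    return [fn(dict(example)) for example in dataset]

def concatenate(*datasets):
    # Append datasets row-wise, like datasets.concatenate_datasets.
    merged = []
    for ds in datasets:
        merged.extend(ds)
    return merged

original = [{"sentence1": "First sentence."}, {"sentence1": "Second sentence."}]

def transform(example):
    example["sentence1"] = example["sentence1"].lower()
    return example

augmented = concatenate(original, map_examples(original, transform))
```

This mirrors the motivating use case: keep the original sentences and append their transformed versions.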
BLEURT score calculation raises UnrecognizedFlagError | Calling the `compute` method for **bleurt** metric fails with an `UnrecognizedFlagError` for `FLAGS.bleurt_batch_size`.
My environment:
```
python==3.8.5
datasets==1.2.0
tensorflow==2.3.1
cudatoolkit==11.0.221
```
Test code for reproducing the error:
```
from datasets import load_metric
bleurt = load_me... | https://github.com/huggingface/datasets/issues/1727 | [
"Upgrading tensorflow to version 2.4.0 solved the issue.",
"I still have the same error even with TF 2.4.0.",
"And I have the same error with TF 2.4.1. I believe this issue should be reopened. Any ideas?!",
"I'm seeing the same issue with TF 2.4.1 when running the following in https://colab.research.google.co... | null | 1,727 | false |
Offline loading | As discussed in #824 it would be cool to make the library work in offline mode.
Currently if there's no internet connection then modules (datasets or metrics) that have already been loaded in the past can't be loaded, and a ConnectionError is raised.
This is because `prepare_module` fetches online for the latest vers... | https://github.com/huggingface/datasets/pull/1726 | [
"It's maybe a bit annoying to add but could we maybe have as well a version of the local data loading scripts in the package?\r\nThe `text`, `json`, `csv`. Thinking about people like in #1725 who are expecting to be able to work with local data without downloading anything.\r\n\r\nMaybe we can add them to package_d... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1726",
"html_url": "https://github.com/huggingface/datasets/pull/1726",
"diff_url": "https://github.com/huggingface/datasets/pull/1726.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1726.patch",
"merged_at": "2021-01-19T16:42... | 1,726 | true |
load the local dataset | your guidebook's example is like
>>> from datasets import load_dataset
>>> dataset = load_dataset('json', data_files='my_file.json')
but the first arg is path...
so how should I do it if I want to load a local dataset for model training?
I will be grateful if you can help me handle this problem!
thanks a lot! | https://github.com/huggingface/datasets/issues/1725 | [
"You should rephrase your question or give more examples and details on what you want to do.\r\n\r\nit’s not possible to understand it and help you with only this information.",
"sorry for that.\r\ni want to know how could i load the train set and the test set from the local ,which api or function should i use .\... | null | 1,725 | false |
ADD S3 support for downloading and uploading processed datasets | # What does this PR do?
This PR adds the functionality to load and save `datasets` from and to s3.
You can save `datasets` with either `Dataset.save_to_disk()` or `DatasetDict.save_to_disk`.
You can load `datasets` with either `load_from_disk` or `Dataset.load_from_disk()`, `DatasetDict.load_from_disk()`.
Lo... | https://github.com/huggingface/datasets/pull/1723 | [
"I created the documentation for `FileSystem Integration for cloud storage` with loading and saving datasets to/from a filesystem with an example of using `datasets.filesystem.S3Filesystem`. I added a note on the `Saving a processed dataset on disk and reload` saying that it is also possible to use other filesystem... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1723",
"html_url": "https://github.com/huggingface/datasets/pull/1723",
"diff_url": "https://github.com/huggingface/datasets/pull/1723.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1723.patch",
"merged_at": "2021-01-26T17:02... | 1,723 | true |
could not run models on an offline server successfully | Hi, I really need your help with this.
I am trying to fine-tune a RoBERTa on a remote server which strictly bans internet access. I tried to install all the packages by hand and to run run_mlm.py on the server. It works well on colab, but when I try to run it on this offline server, it shows:
\r\n\r\nYes, I think it might make sense to enhance the tool a tiny bit to prevent this automatically",
"That's the lightest I can make it...it's long-range summarization so a single sample has ~11000 toke... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1721",
"html_url": "https://github.com/huggingface/datasets/pull/1721",
"diff_url": "https://github.com/huggingface/datasets/pull/1721.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1721.patch",
"merged_at": "2021-01-12T11:41... | 1,721 | true |
Adding the NorNE dataset for NER | NorNE is a manually annotated corpus of named entities which extends the annotation of the existing Norwegian Dependency Treebank. Comprising both of the official standards of written Norwegian (Bokmål and Nynorsk), the corpus contains around 600,000 tokens and annotates a rich set of entity types including persons, or... | https://github.com/huggingface/datasets/pull/1720 | [
"Quick question, @lhoestq. In this specific dataset, two special types `GPE_LOC` and `GPE_ORG` can easily be altered depending on the task, choosing either the more general `GPE` tag or the more specific `LOC`/`ORG` tags, conflating them with the other annotations of the same type. However, I have not found an easy... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1720",
"html_url": "https://github.com/huggingface/datasets/pull/1720",
"diff_url": "https://github.com/huggingface/datasets/pull/1720.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1720.patch",
"merged_at": null
} | 1,720 | true |
Fix column list comparison in transmit format | As noticed in #1718, cached files might not be reloaded when new columns were added.
This is because of an issue in `transmit_format` where the column list comparison fails, since the order was not deterministic. This causes `transmit_format` to apply an unnecessary `set_format` transform with shuffled col...
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1719",
"html_url": "https://github.com/huggingface/datasets/pull/1719",
"diff_url": "https://github.com/huggingface/datasets/pull/1719.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1719.patch",
"merged_at": "2021-01-11T18:45... | 1,719 | true |
Possible cache miss in datasets | Hi,
I am using the datasets package and even though I run the same data processing functions, datasets always recomputes the results instead of loading them from the cache.
I have attached an example script that for me reproduces the problem.
In the attached example the second map function always recomputes instead of loading fr... | https://github.com/huggingface/datasets/issues/1718 | [
"Thanks for reporting !\r\nI was able to reproduce thanks to your code and find the origin of the bug.\r\nThe cache was not reusing the same file because one object was not deterministic. It comes from a conversion from `set` to `list` in the `datasets.arrrow_dataset.transmit_format` function, where the resulting l... | null | 1,718 | false |
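The cause described in the comment above — a `set`-to-`list` conversion producing a non-deterministic column order, which changes the computed fingerprint and so misses the cache file — can be illustrated with a small standalone sketch. The `fingerprint` helper here is a toy stand-in for the library's hashing, not its actual internals:

```python
import hashlib
import json

def fingerprint(columns) -> str:
    """Toy fingerprint: hash the column list, as a caching layer might."""
    return hashlib.sha256(json.dumps(columns).encode()).hexdigest()[:12]

all_columns = {"input_ids", "attention_mask", "labels"}

# Non-deterministic: list(set) order depends on string hashing, which is
# randomized across Python processes, so two runs can produce different
# orders and therefore different fingerprints / cache file names.
unstable = list(all_columns)

# Deterministic: sorting fixes a canonical order, so the fingerprint is
# stable across runs and the previously written cache file is found again.
stable = sorted(all_columns)

print(stable)  # -> ['attention_mask', 'input_ids', 'labels']
print(fingerprint(stable) == fingerprint(sorted(all_columns)))  # -> True
```

The fix merged in #1719 follows the same idea: make the column ordering deterministic before it enters the fingerprint computation.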
SciFact dataset - minor changes | Hi,
SciFact dataset creator here. First of all, thanks for adding the dataset to Huggingface, much appreciated!
I'd like to make a few minor changes, including the citation information and the `_URL` from which to download the dataset. Can I submit a PR for this?
It also looks like the dataset is being downloa... | https://github.com/huggingface/datasets/issues/1717 | [
"Hi Dave,\r\nYou are more than welcome to open a PR to make these changes! 🤗\r\nYou will find the relevant information about opening a PR in the [contributing guide](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md) and in the [dataset addition guide](https://github.com/huggingface/datasets/blob... | null | 1,717 | false |
Add Hatexplain Dataset | Adding Hatexplain - the first benchmark hate speech dataset covering multiple aspects of the issue | https://github.com/huggingface/datasets/pull/1716 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1716",
"html_url": "https://github.com/huggingface/datasets/pull/1716",
"diff_url": "https://github.com/huggingface/datasets/pull/1716.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1716.patch",
"merged_at": "2021-01-18T14:21... | 1,716 | true |
add Korean intonation-aided intention identification dataset | https://github.com/huggingface/datasets/pull/1715 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1715",
"html_url": "https://github.com/huggingface/datasets/pull/1715",
"diff_url": "https://github.com/huggingface/datasets/pull/1715.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1715.patch",
"merged_at": "2021-01-12T17:14... | 1,715 | true | |
Adding adversarialQA dataset | Adding the adversarialQA dataset (https://adversarialqa.github.io/) from Beat the AI (https://arxiv.org/abs/2002.00293) | https://github.com/huggingface/datasets/pull/1714 | [
"Oh that's a really cool one, we'll review/merge it soon!\r\n\r\nIn the meantime, do you have any specific positive/negative feedback on the process of adding a datasets Max?\r\nDid you follow the instruction in the [detailed step-by-step](https://github.com/huggingface/datasets/blob/master/ADD_NEW_DATASET.md)?",
... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1714",
"html_url": "https://github.com/huggingface/datasets/pull/1714",
"diff_url": "https://github.com/huggingface/datasets/pull/1714.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1714.patch",
"merged_at": "2021-01-13T16:05... | 1,714 | true |
Installation using conda | Will a conda package for installing datasets be added to the huggingface conda channel? I have installed transformers using conda and would like to use the datasets library to use some of the scripts in the transformers/examples folder but am unable to do so at the moment as datasets can only be installed using pip and... | https://github.com/huggingface/datasets/issues/1713 | [
"Yes indeed the idea is to have the next release on conda cc @LysandreJik ",
"Great! Did you guys have a timeframe in mind for the next release?\r\n\r\nThank you for all the great work in developing this library.",
"I think we can have `datasets` on conda by next week. Will see what I can do!",
"Thank you. Lo... | null | 1,713 | false |
Silicone | My collaborators and I within the Affective Computing team at Telecom Paris would like to push our spoken dialogue dataset for publication. | https://github.com/huggingface/datasets/pull/1712 | [
"When should we expect to see our dataset appear in the search dropdown at huggingface.co?",
"Hi @eusip,\r\n\r\n> When should we expect to see our dataset appear in the search dropdown at huggingface.co?\r\n\r\nwhen this PR is merged.",
"Thanks!",
"I've implemented all the changes requested by @lhoestq but I ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1712",
"html_url": "https://github.com/huggingface/datasets/pull/1712",
"diff_url": "https://github.com/huggingface/datasets/pull/1712.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1712.patch",
"merged_at": null
} | 1,712 | true |
Fix windows path scheme in cached path | As noticed in #807 there's currently an issue with `cached_path` not raising `FileNotFoundError` on windows for absolute paths. This is due to the way we check for a path to be local or not. The check on the scheme using urlparse was incomplete.
I fixed this and added tests | https://github.com/huggingface/datasets/pull/1711 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/1711",
"html_url": "https://github.com/huggingface/datasets/pull/1711",
"diff_url": "https://github.com/huggingface/datasets/pull/1711.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/1711.patch",
"merged_at": "2021-01-11T09:23... | 1,711 | true |
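The incomplete scheme check mentioned in the PR body above can be reproduced with the standard library alone: on Windows, the drive letter of an absolute path parses as a one-letter URL scheme. The function names and the whitelist of remote schemes below are assumptions for illustration, not the library's actual code:

```python
from urllib.parse import urlparse

# Naive check: any non-empty scheme is treated as "remote". A Windows
# drive letter parses as a one-letter scheme, so an absolute local path
# is wrongly classified as remote and never raises FileNotFoundError.
def is_remote_naive(path: str) -> bool:
    return urlparse(path).scheme != ""

# Stricter check: only accept known remote schemes, so "c" (from C:\...)
# falls through to the local-path handling.
def is_remote_fixed(path: str) -> bool:
    return urlparse(path).scheme in ("http", "https", "s3", "ftp")

win_path = r"C:\Users\me\data.csv"
print(urlparse(win_path).scheme)   # -> 'c'
print(is_remote_naive(win_path))   # -> True (the bug)
print(is_remote_fixed(win_path))   # -> False
```

With the stricter check, a missing local file on Windows reaches the filesystem lookup and raises `FileNotFoundError` as expected.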