| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
Add seed in metrics | With #361 we noticed that some metrics were not deterministic.
In this PR I allow the user to specify numpy's seed when instantiating a metric with `load_metric`.
The seed is set only when `compute` is called, and reset afterwards.
Moreover when calling `compute` with the same metric instance (i.e. same experiment... | https://github.com/huggingface/datasets/pull/404 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/404",
"html_url": "https://github.com/huggingface/datasets/pull/404",
"diff_url": "https://github.com/huggingface/datasets/pull/404.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/404.patch",
"merged_at": "2020-07-20T10:12:34"... | 404 | true |
return python objects instead of arrays by default | We were using to_pandas() to convert from arrow types; however, it returns numpy arrays instead of python lists.
I fixed it by using to_pydict/to_pylist instead.
Fix #387
It was mentioned in https://github.com/huggingface/transformers/issues/5729
| https://github.com/huggingface/datasets/pull/403 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/403",
"html_url": "https://github.com/huggingface/datasets/pull/403",
"diff_url": "https://github.com/huggingface/datasets/pull/403.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/403.patch",
"merged_at": "2020-07-17T11:37:00"... | 403 | true |
Search qa | add SearchQA dataset
#336 | https://github.com/huggingface/datasets/pull/402 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/402",
"html_url": "https://github.com/huggingface/datasets/pull/402",
"diff_url": "https://github.com/huggingface/datasets/pull/402.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/402.patch",
"merged_at": "2020-07-16T14:26:59"... | 402 | true |
add web_questions | add the WebQuestions dataset
#336
Maybe @patrickvonplaten you can help with the dummy_data structure? It is still broken | https://github.com/huggingface/datasets/pull/401 | [
"What does the `nlp-cli dummy_data` command returns ?",
"`test.json` -> `test` \r\nand \r\n`train.json` -> `train`\r\n\r\nas shown by the `nlp-cli dummy_data` command ;-)",
"LGTM for merge @lhoestq - I let you merge if you want to."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/401",
"html_url": "https://github.com/huggingface/datasets/pull/401",
"diff_url": "https://github.com/huggingface/datasets/pull/401.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/401.patch",
"merged_at": "2020-08-06T06:16:19"... | 401 | true |
Web questions | add the WebQuestions dataset
#336 | https://github.com/huggingface/datasets/pull/400 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/400",
"html_url": "https://github.com/huggingface/datasets/pull/400",
"diff_url": "https://github.com/huggingface/datasets/pull/400.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/400.patch",
"merged_at": null
} | 400 | true |
Spelling mistake | In the "Formatting the dataset" section, "The two toehr modifications..." should be "The two other modifications..."; the word "other" is misspelled as "toehr". | https://github.com/huggingface/datasets/pull/399 | [
"Thanks!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/399",
"html_url": "https://github.com/huggingface/datasets/pull/399",
"diff_url": "https://github.com/huggingface/datasets/pull/399.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/399.patch",
"merged_at": "2020-07-16T06:49:37"... | 399 | true |
Add inline links | Add inline links to `Contributing.md` | https://github.com/huggingface/datasets/pull/398 | [
"Do you mind adding a link to the much more extended pages on adding and sharing a dataset in the new documentation?",
"Sure, I will do that too"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/398",
"html_url": "https://github.com/huggingface/datasets/pull/398",
"diff_url": "https://github.com/huggingface/datasets/pull/398.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/398.patch",
"merged_at": "2020-07-22T10:14:22"... | 398 | true |
Add contiguous sharding | This makes dset.shard() play nice with nlp.concatenate_datasets(). When I originally wrote the shard() method, I was thinking about a distributed training scenario, but https://github.com/huggingface/nlp/pull/389 also uses it for splitting the dataset for distributed preprocessing.
Usage:
```
nlp.concatenate_datas... | https://github.com/huggingface/datasets/pull/397 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/397",
"html_url": "https://github.com/huggingface/datasets/pull/397",
"diff_url": "https://github.com/huggingface/datasets/pull/397.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/397.patch",
"merged_at": "2020-07-17T16:59:30"... | 397 | true |
Fix memory issue when doing select | We were passing the `nlp.Dataset` object to get the hash for the new dataset's file name.
Fix #395 | https://github.com/huggingface/datasets/pull/396 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/396",
"html_url": "https://github.com/huggingface/datasets/pull/396",
"diff_url": "https://github.com/huggingface/datasets/pull/396.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/396.patch",
"merged_at": "2020-07-16T08:07:30"... | 396 | true |
Memory issue when doing select | As noticed in #389, the following code loads the entire wikipedia in memory.
```python
import nlp
w = nlp.load_dataset("wikipedia", "20200501.en", split="train")
w.select([0])
```
This is caused by [this line](https://github.com/huggingface/nlp/blob/master/src/nlp/arrow_dataset.py#L626) for some reason, that ... | https://github.com/huggingface/datasets/issues/395 | [] | null | 395 | false |
Remove remaining nested dict | This PR deletes the remaining unnecessary nested dicts
#378 | https://github.com/huggingface/datasets/pull/394 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/394",
"html_url": "https://github.com/huggingface/datasets/pull/394",
"diff_url": "https://github.com/huggingface/datasets/pull/394.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/394.patch",
"merged_at": "2020-07-16T07:39:51"... | 394 | true |
Fix extracted files directory for the DownloadManager | The cache dir was often cluttered by extracted files because of the download manager.
For downloaded files, we are using the `downloads` directory to make things easier to navigate, but extracted files were still placed at the root of the cache directory. To fix that I changed the directory for extracted files to ca... | https://github.com/huggingface/datasets/pull/393 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/393",
"html_url": "https://github.com/huggingface/datasets/pull/393",
"diff_url": "https://github.com/huggingface/datasets/pull/393.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/393.patch",
"merged_at": "2020-07-17T17:02:14"... | 393 | true |
Style change detection | Another [PAN task](https://pan.webis.de/clef20/pan20-web/style-change-detection.html). This time about identifying when the style/author changes in documents.
- There's the possibility of adding the [PAN19](https://zenodo.org/record/3577602) and PAN18 style change detection tasks too (these are datasets whose labels... | https://github.com/huggingface/datasets/pull/392 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/392",
"html_url": "https://github.com/huggingface/datasets/pull/392",
"diff_url": "https://github.com/huggingface/datasets/pull/392.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/392.patch",
"merged_at": "2020-07-17T17:13:23"... | 392 | true |
Concatenate datasets | I'm constructing the "WikiBooks" dataset, which is a concatenation of Wikipedia & BookCorpus. So I implemented the `Dataset.from_concat()` method, which concatenates two datasets with the same schema.
This would also be useful if someone wants to pretrain on a large generic dataset + their own custom dataset. Not in... | https://github.com/huggingface/datasets/pull/390 | [
"Looks cool :)\r\n\r\nI feel like \r\n```python\r\nconcatenated_dataset = dataset1.concatenate(dataset2)\r\n```\r\ncould be more natural. What do you think ?\r\n\r\nAlso could you also concatenate the `nlp.Dataset._data_files` ?\r\n```python\r\nreturn cls(table, info=info, split=split, data_files=self._data_files +... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/390",
"html_url": "https://github.com/huggingface/datasets/pull/390",
"diff_url": "https://github.com/huggingface/datasets/pull/390.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/390.patch",
"merged_at": "2020-07-22T09:49:58"... | 390 | true |
Fix pickling of SplitDict | It would be nice to pickle and unpickle Datasets, as done in [this tutorial](https://github.com/patil-suraj/exploring-T5/blob/master/T5_on_TPU.ipynb). Example:
```
wiki = nlp.load_dataset('wikipedia', split='train')
def sentencize(examples):
...
wiki = wiki.map(sentencize, batched=True)
torch.save(wiki, '... | https://github.com/huggingface/datasets/pull/389 | [
"By the way, the reason this is an issue for me is because I want to be able to \"save\" changes made to a dataset by writing something to disk. In this case, I would like to pre-process my dataset once, and then train multiple models on the dataset later without having to re-process the data. \r\n\r\nIs pickling/u... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/389",
"html_url": "https://github.com/huggingface/datasets/pull/389",
"diff_url": "https://github.com/huggingface/datasets/pull/389.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/389.patch",
"merged_at": null
} | 389 | true |
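A sketch of the round trip this PR is meant to enable; `wikitext` is used here as a lighter stand-in for the wikipedia example above:

```python
import nlp
import torch

wiki = nlp.load_dataset("wikitext", "wikitext-2-raw-v1", split="train")
torch.save(wiki, "wiki.pt")            # pickles the Dataset, including its SplitDict
wiki_restored = torch.load("wiki.pt")  # should round-trip without errors
assert len(wiki_restored) == len(wiki)
```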
🐛 [Dataset] Cannot download wmt14, wmt15 and wmt17 | 1. I try downloading `wmt14`, `wmt15`, `wmt17`, `wmt19` with the following code:
```
nlp.load_dataset('wmt14','de-en')
nlp.load_dataset('wmt15','de-en')
nlp.load_dataset('wmt17','de-en')
nlp.load_dataset('wmt19','de-en')
```
The code runs but the download speed is **extremely slow**, the same behaviour is not ob... | https://github.com/huggingface/datasets/issues/388 | [
"similar slow download speed here for nlp.load_dataset('wmt14', 'fr-en')\r\n`\r\nDownloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 658M/658M [1:00:42<00:00, 181kB/s]\r\nDownloading: 100%|ββββββββββββββββββββββββββββββββββββββββββββββββββββββββββ| 918M/918M [1:39:38<00:00, 154kB/s]\r\nDow... | null | 388 | false |
Conversion through to_pandas outputs numpy arrays for lists instead of python objects | In a related question, the conversion through to_pandas outputs numpy arrays for the lists instead of python objects.
Here is an example:
```python
>>> dataset._data.slice(key, 1).to_pandas().to_dict("list")
{'sentence1': ['Amrozi accused his brother , whom he called " the witness " , of deliberately distorting hi... | https://github.com/huggingface/datasets/issues/387 | [
"To convert from arrow type we have three options: to_numpy, to_pandas and to_pydict/to_pylist.\r\n\r\n- to_numpy and to_pandas return numpy arrays instead of lists but are very fast.\r\n- to_pydict/to_pylist can be 100x slower and become the bottleneck for reading data, but at least they return lists.\r\n\r\nMaybe... | null | 387 | false |
Update dataset loading and features - Add TREC dataset | This PR:
- adds a template for a new dataset script
- updates the caching structure so that the path to the cached data files is also a function of the dataset loading script hash. This way when you update a loading script the data will be automatically updated instead of falling back to the previous version (which is ...
"I just copied the files that are on google storage to follow the new `_relative_data_dir ` format. It should be good to merge now :)\r\n\r\nWell actually it seems there are some merge conflicts to fix first"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/386",
"html_url": "https://github.com/huggingface/datasets/pull/386",
"diff_url": "https://github.com/huggingface/datasets/pull/386.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/386.patch",
"merged_at": "2020-07-16T08:17:58"... | 386 | true |
Remove unnecessary nested dict | This PR removes the unnecessary nested dictionaries used in some datasets. For now the following datasets are updated:
- MLQA
- RACE
Will be adding more if necessary.
#378 | https://github.com/huggingface/datasets/pull/385 | [
"We can probably scan the dataset scripts with a regexpr to try to identify this pattern cc @patrickvonplaten maybe",
"@mariamabarham This script should work. I tested it for a couple of datasets. There might be exceptions where the script breaks - did not test everything.\r\n\r\n```python\r\n#!/usr/bin/env pytho... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/385",
"html_url": "https://github.com/huggingface/datasets/pull/385",
"diff_url": "https://github.com/huggingface/datasets/pull/385.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/385.patch",
"merged_at": "2020-07-15T10:03:53"... | 385 | true |
Adding the Linguistic Code-switching Evaluation (LinCE) benchmark | Hi,
First of all, this library is really cool! Thanks for putting all of this together!
This PR contains the [Linguistic Code-switching Evaluation (LinCE) benchmark](https://ritual.uh.edu/lince). As described in the official website (FAQ):
> 1. Why do we need LinCE?
>LinCE brings 10 code-switching datasets t... | https://github.com/huggingface/datasets/pull/383 | [
"I am checking the details of the CI log for the failed test, but I don't see how the error relates to the code I added; the error is coming from a config builder different than the `LinceConfig`, and it crashes when `self.config.data_files` because is self.config is None. I would appreciate if someone could help m... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/383",
"html_url": "https://github.com/huggingface/datasets/pull/383",
"diff_url": "https://github.com/huggingface/datasets/pull/383.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/383.patch",
"merged_at": "2020-07-16T16:19:46"... | 383 | true |
1080 | https://github.com/huggingface/datasets/issues/382 | [] | null | 382 | false | |
NLp | https://github.com/huggingface/datasets/issues/381 | [] | null | 381 | false | |
[dataset] Structure of MLQA seems unnecessarily nested | The features of the MLQA dataset comprise several nested dictionaries with a single element inside (for `questions` and `ids`): https://github.com/huggingface/nlp/blob/master/datasets/mlqa/mlqa.py#L90-L97
Should we keep this @mariamabarham @patrickvonplaten? Was this added for compatibility with tfds?
```python
... | https://github.com/huggingface/datasets/issues/378 | [
"Same for the RACE dataset: https://github.com/huggingface/nlp/blob/master/datasets/race/race.py\r\n\r\nShould we scan all the datasets to remove this pattern of un-necessary nesting?",
"You're right, I think we don't need to use the nested dictionary. \r\n"
] | null | 378 | false |
Iyy!!! | https://github.com/huggingface/datasets/issues/377 | [] | null | 377 | false | |
to_pandas conversion doesn't always work | For some complex nested types, the conversion from Arrow to python dict through pandas doesn't seem to be possible.
Here is an example using the official SQUAD v2 JSON file.
This example was found while investigating #373.
```python
>>> squad = load_dataset('json', data_files={nlp.Split.TRAIN: ["./train-v2.0.... | https://github.com/huggingface/datasets/issues/376 | [
"**Edit**: other topic previously in this message moved to a new issue: https://github.com/huggingface/nlp/issues/387",
"Could you try to update pyarrow to >=0.17.0 ? It should fix the `to_pandas` bug\r\n\r\nAlso I'm not sure that structures like list<struct> are fully supported in the lib (none of the datasets u... | null | 376 | false |
TypeError when computing bertscore | Hi,
I installed nlp 0.3.0 via pip, and my python version is 3.7.
When I tried to compute bertscore with the code:
```
import nlp
bertscore = nlp.load_metric('bertscore')
# load hyps and refs
...
print (bertscore.compute(hyps, refs, lang='en'))
```
I got the following error.
```
Traceback (most rece... | https://github.com/huggingface/datasets/issues/375 | [
"I am not able to reproduce this issue on my side.\r\nCould you give us more details about the inputs you used ?\r\n\r\nI do get another error though:\r\n```\r\n~/.virtualenvs/hf-datasets/lib/python3.7/site-packages/bert_score/utils.py in bert_cos_score_idf(model, refs, hyps, tokenizer, idf_dict, verbose, batch_siz... | null | 375 | false |
Add dataset post processing for faiss indexes | # Post processing of datasets for faiss indexes
Now that we can have datasets with embeddings (see `wiki_dpr` for example), we can allow users to load the dataset + get the Faiss index that comes with it to do nearest-neighbor queries.
## Implementation proposition
- Faiss indexes have to be added to the `nlp.... | https://github.com/huggingface/datasets/pull/374 | [
"I changed the `wiki_dpr` script to ignore the last 24 examples for now. Hopefully we'll have the full version soon.\r\nThe datasets_infos.json and the data on GCS are updated.\r\n\r\nAnd I also added a check to make sure we don't have post processing resources in sub-directories.",
"I added a dummy config that c... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/374",
"html_url": "https://github.com/huggingface/datasets/pull/374",
"diff_url": "https://github.com/huggingface/datasets/pull/374.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/374.patch",
"merged_at": "2020-07-13T13:44:01"... | 374 | true |
Segmentation fault when loading local JSON dataset as of #372 | The last issue was closed (#369) once the #372 update was merged. However, I'm still not able to load a SQuAD formatted JSON file. Instead of the previously recorded pyarrow error, I now get a segmentation fault.
```
dataset = nlp.load_dataset('json', data_files={nlp.Split.TRAIN: ["./datasets/train-v2.0.json"]}, f... | https://github.com/huggingface/datasets/issues/373 | [
"I've seen this sort of thing before -- it might help to delete the directory -- I've also noticed that there is an error with the json Dataloader for any data I've tried to load. I've replaced it with this, which skips over the data feature population step:\r\n\r\n\r\n```python\r\nimport os\r\n\r\nimport pyarrow.j... | null | 373 | false |
Make the json script more flexible | Fix https://github.com/huggingface/nlp/issues/359
Fix https://github.com/huggingface/nlp/issues/369
The JSON script can now accept JSON files containing a single dict with the records as a list in one attribute of the dict (previously it only accepted JSON files containing records as rows of dicts in the file).
In t... | https://github.com/huggingface/datasets/pull/372 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/372",
"html_url": "https://github.com/huggingface/datasets/pull/372",
"diff_url": "https://github.com/huggingface/datasets/pull/372.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/372.patch",
"merged_at": "2020-07-10T14:52:05"... | 372 | true |
Fix cached file path for metrics with different config names | The config name was not taken into account when building the cached file path.
It should fix #368 | https://github.com/huggingface/datasets/pull/371 | [
"Thanks for the fast fix!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/371",
"html_url": "https://github.com/huggingface/datasets/pull/371",
"diff_url": "https://github.com/huggingface/datasets/pull/371.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/371.patch",
"merged_at": "2020-07-10T13:45:20"... | 371 | true |
Allow indexing Dataset via np.ndarray | https://github.com/huggingface/datasets/pull/370 | [
"Looks like a flaky CI, failed download from S3."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/370",
"html_url": "https://github.com/huggingface/datasets/pull/370",
"diff_url": "https://github.com/huggingface/datasets/pull/370.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/370.patch",
"merged_at": "2020-07-10T14:05:43"... | 370 | true | |
can't load local dataset: pyarrow.lib.ArrowInvalid: straddling object straddles two block boundaries | Trying to load a local SQuAD-formatted dataset (from a JSON file, about 60MB):
```
dataset = nlp.load_dataset(path='json', data_files={nlp.Split.TRAIN: ["./path/to/file.json"]})
```
causes
```
Traceback (most recent call last):
File "dataloader.py", line 9, in <module>
["./path/to/file.json"]})
File "/... | https://github.com/huggingface/datasets/issues/369 | [
"I am able to reproduce this with the official SQuAD `train-v2.0.json` file downloaded directly from https://rajpurkar.github.io/SQuAD-explorer/",
"I am facing this issue in transformers library 3.0.2 while reading a csv using datasets.\r\nIs this fixed in latest version? \r\nI updated the latest version 4.0.1 bu... | null | 369 | false |
load_metric can't acquire lock anymore | I can't load metric (glue) anymore after an error in a previous run. I even removed the whole cache folder `/home/XXX/.cache/huggingface/`, and the issue persisted. What are the steps to fix this?
Traceback (most recent call last):
File "/home/XXX/miniconda3/envs/ML-DL-py-3.7/lib/python3.7/site-packages/n... | https://github.com/huggingface/datasets/issues/368 | [
"I found that, in the same process (or the same interactive session), if I do\r\n\r\nimport nlp\r\n\r\nm1 = nlp.load_metric('glue', 'mrpc')\r\nm2 = nlp.load_metric('glue', 'sst2')\r\n\r\nI will get the same error `ValueError: Cannot acquire lock, caching file might be used by another process, you should setup a uni... | null | 368 | false |
Update Xtreme to add PAWS-X es | This PR adds the `PAWS-X.es` in the Xtreme dataset #362 | https://github.com/huggingface/datasets/pull/367 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/367",
"html_url": "https://github.com/huggingface/datasets/pull/367",
"diff_url": "https://github.com/huggingface/datasets/pull/367.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/367.patch",
"merged_at": "2020-07-09T12:37:10"... | 367 | true |
Add quora dataset | Added the [Quora question pairs dataset](https://www.quora.com/q/quoradata/First-Quora-Dataset-Release-Question-Pairs).
Implementation Notes:
- I used the original version provided on the quora website. There's also a [Kaggle competition](https://www.kaggle.com/c/quora-question-pairs) which has a nice train/test sp... | https://github.com/huggingface/datasets/pull/366 | [
"Tests seem to be failing because of pandas",
"Kaggle needs authentification to download datasets. We don't have a way to handle that in the lib for now"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/366",
"html_url": "https://github.com/huggingface/datasets/pull/366",
"diff_url": "https://github.com/huggingface/datasets/pull/366.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/366.patch",
"merged_at": "2020-07-13T17:35:21"... | 366 | true |
How to augment data? | Is there any clean way to augment data?
For now my workaround is to use batched map, like this:
```python
def aug(samples):
# Simply copy the existing data to have x2 amount of data
for k, v in samples.items():
samples[k].extend(v)
return samples
dataset = dataset.map(aug, batched=T... | https://github.com/huggingface/datasets/issues/365 | [
"Using batched map is probably the easiest way at the moment.\r\nWhat kind of augmentation would you like to do ?",
"Some samples in the dataset are too long, I want to divide them in several samples.",
"Using batched map is the way to go then.\r\nWe'll make it clearer in the docs that map could be used for aug... | null | 365 | false |
add MS MARCO dataset | This PR adds the MS MARCO dataset as requested in issue #336. MS MARCO has multiple tasks, including:
- Passage and Document Retrieval
- Keyphrase Extraction
- QA and NLG
This PR only adds the 2 versions of the QA and NLG task dataset, which were released with the original paper here https://arxiv.org/pd...
"The dummy data for v2.1 is missing as far as I can see. I think running the dummy data command should work correctly here. ",
"Also, it might be that the structure of the dummy data is wrong - looking at `generate_examples` the structure does not look too easy.",
"The fact that the dummy data for v2.1 is miss... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/364",
"html_url": "https://github.com/huggingface/datasets/pull/364",
"diff_url": "https://github.com/huggingface/datasets/pull/364.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/364.patch",
"merged_at": "2020-08-06T06:15:48"... | 364 | true |
Adding support for generic multi-dimensional tensors and auxiliary image data for multimodal datasets | nlp/features.py:
The main factory class is MultiArray, every single time this class is called, a corresponding pyarrow extension array and type class is generated (and added to the list of globals for future use) for a given root data type and set of dimensions/shape. I provide examples on working with this in datas... | https://github.com/huggingface/datasets/pull/363 | [
"Thank you! I just marked this as a draft PR. It probably would be better to create specific Array2D and Array3D classes as needed instead of a generic MultiArray for now, it should simplify the code a lot too so, I'll update it as such. Also i was meaning to reply earlier, but I wanted to thank you for the testing... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/363",
"html_url": "https://github.com/huggingface/datasets/pull/363",
"diff_url": "https://github.com/huggingface/datasets/pull/363.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/363.patch",
"merged_at": "2020-08-24T09:59:35"... | 363 | true |
[dataset subset missing] xtreme paws-x | I tried nlp.load_dataset('xtreme', 'PAWS-X.es') but got a ValueError.
It turns out that the subset for Spanish is missing
https://github.com/google-research-datasets/paws/tree/master/pawsx | https://github.com/huggingface/datasets/issues/362 | [
"You're right, thanks for pointing it out. We will update it "
] | null | 362 | false |
🐛 [Metrics] ROUGE is non-deterministic | If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.
Refer to [this Colab notebook](https://colab.research.google.com/drive/1wRssNXgb9ldcp4ulwj-hMJn0ywhDOiDy?usp=sharing) for reproducing the problem.
Example of F-score for ROUGE-1, ROUGE-2, ROUGE-L in 2 differe... | https://github.com/huggingface/datasets/issues/361 | [
"Hi, can you give a full self-contained example to reproduce this behavior?",
"> Hi, can you give a full self-contained example to reproduce this behavior?\r\n\r\nThere is a notebook in the post ;)",
"> If I run the ROUGE metric 2 times, with same predictions / references, the scores are slightly different.\r\n... | null | 361 | false |
[Feature request] Add dataset.ragged_map() function for many-to-many transformations | `dataset.map()` enables one-to-one transformations. Input one example and output one example. This is helpful for tokenizing and cleaning individual lines.
`dataset.filter()` enables one-to-(one-or-none) transformations. Input one example and output either zero/one example. This is helpful for removing portions from t... | https://github.com/huggingface/datasets/issues/360 | [
"Actually `map(batched=True)` can already change the size of the dataset.\r\nIt can accept examples of length `N` and returns a batch of length `M` (can be null or greater than `N`).\r\n\r\nI'll make that explicit in the doc that I'm currently writing.",
"You're two steps ahead of me :) In my testing, it also wor... | null | 360 | false |
ArrowBasedBuilder _prepare_split parse_schema breaks on nested structures | I tried using the Json dataloader to load some JSON lines files, but got an exception in the parse_schema function.
```
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-23-9aecfbee53bd> in <mo... | https://github.com/huggingface/datasets/issues/359 | [
"Hi, it depends on what it is in your `dataset_builder.py` file. Can you share it?\r\n\r\nIf you are just loading `json` files, you can also directly use the `json` script (which will find the schema/features from your JSON structure):\r\n\r\n```python\r\nfrom nlp import load_dataset\r\nds = load_dataset(\"json\", ... | null | 359 | false |
Starting to add some real doc | Adding a lot of documentation for:
- load a dataset
- explore the dataset object
- process data with the dataset
- add a new dataset script
- share a dataset script
- full package reference
This version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.htm... | https://github.com/huggingface/datasets/pull/358 | [
"Ok this is starting to be really big so it's probably good to merge this first version of the doc and continue in another PR :)\r\n\r\nThis first version of the doc can be explored here: https://2219-250213286-gh.circle-artifacts.com/0/docs/_build/html/index.html"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/358",
"html_url": "https://github.com/huggingface/datasets/pull/358",
"diff_url": "https://github.com/huggingface/datasets/pull/358.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/358.patch",
"merged_at": "2020-07-14T09:58:15"... | 358 | true |
Add hashes to cnn_dailymail | The URL hashes are helpful for comparing results from other sources. | https://github.com/huggingface/datasets/pull/357 | [
"Looks you to me :)\r\n\r\nCould you also update the json file that goes with the dataset script by doing \r\n```\r\nnlp-cli test ./datasets/cnn_dailymail --save_infos --all_configs\r\n```\r\nIt will update the features metadata and the size of the dataset with your changes.",
"@lhoestq I ran that command.\r\n\r\... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/357",
"html_url": "https://github.com/huggingface/datasets/pull/357",
"diff_url": "https://github.com/huggingface/datasets/pull/357.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/357.patch",
"merged_at": "2020-07-13T14:16:38"... | 357 | true |
Add text dataset | Usage:
```python
from nlp import load_dataset
dset = load_dataset("text", data_files="/path/to/file.txt")["train"]
```
I created a dummy_data.zip which contains three files: `train.txt`, `test.txt`, `dev.txt`. Each of these contains two lines. It passes
```bash
RUN_SLOW=1 pytest tests/test_dataset_common... | https://github.com/huggingface/datasets/pull/356 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/356",
"html_url": "https://github.com/huggingface/datasets/pull/356",
"diff_url": "https://github.com/huggingface/datasets/pull/356.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/356.patch",
"merged_at": "2020-07-10T14:19:03"... | 356 | true |
can't load SNLI dataset | `nlp` seems to load `snli` from some URL based on nlp.stanford.edu. This subdomain is frequently down -- including right now, when I'd like to load `snli` in a Colab notebook, but can't.
Is there a plan to move these datasets to huggingface servers for a more stable solution?
Btw, here's the stack trace:
```
... | https://github.com/huggingface/datasets/issues/355 | [
"I just added the processed files of `snli` on our google storage, so that when you do `load_dataset` it can download the processed files from there :)\r\n\r\nWe are thinking about having available those processed files for more datasets in the future, because sometimes files aren't available (like for `snli`), or ... | null | 355 | false |
More faiss control | Allow users to specify a faiss index they created themselves, as indexes can sometimes be composite, for example | https://github.com/huggingface/datasets/pull/354 | [
"> Ok, so we're getting rid of the `FaissGpuOptions`?\r\n\r\nWe support `device=...` because it's simple, but faiss GPU options can be used in so many ways (you can set different gpu options for the different parts of your index for example) that it's probably better to let the user create and configure its index a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/354",
"html_url": "https://github.com/huggingface/datasets/pull/354",
"diff_url": "https://github.com/huggingface/datasets/pull/354.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/354.patch",
"merged_at": "2020-07-09T09:54:51"... | 354 | true |
[Dataset requests] New datasets for Text Classification | We are missing a few datasets for Text Classification which is an important field.
Namely, it would be really nice to add:
- [x] TREC-6 dataset (see here for instance: https://pytorchnlp.readthedocs.io/en/latest/source/torchnlp.datasets.html#torchnlp.datasets.trec_dataset) **[done]**
- #386
- [x] Yelp-5
- #... | https://github.com/huggingface/datasets/issues/353 | [
"Pinging @mariamabarham as well",
"- `nlp` has MR! It's called `rotten_tomatoes`\r\n- SST is part of GLUE, or is that just SST-2?\r\n- `nlp` also has `ag_news`, a popular news classification dataset\r\n\r\nI'd also like to see:\r\n- the Yahoo Answers topic classification dataset\r\n- the Kaggle Fake News classifi... | null | 353 | false |
🐛 [BugFix] fix seqeval | Fix how seqeval processes labels such as 'B', 'B-ARGM-LOC' | https://github.com/huggingface/datasets/pull/352 | [
"I think this is good but can you detail a bit the behavior before and after your fix?",
"examples:\r\n\r\ninput: `['B', 'I', 'I', 'O', 'B', 'I']`\r\nbefore: `[('B', 0, 0), ('I', 1, 2), ('B', 4, 4), ('I', 5, 5)]`\r\nafter: `[('_', 0, 2), ('_', 4, 5)]`\r\n\r\ninput: `['B-ARGM-LOC', 'I-ARGM-LOC', 'I-ARGM-LOC', 'O',... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/352",
"html_url": "https://github.com/huggingface/datasets/pull/352",
"diff_url": "https://github.com/huggingface/datasets/pull/352.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/352.patch",
"merged_at": "2020-07-16T08:26:46"... | 352 | true |
add pandas dataset | Create a dataset from serialized pandas dataframes.
Usage:
```python
from nlp import load_dataset
dset = load_dataset("pandas", data_files="df.pkl")["train"]
``` | https://github.com/huggingface/datasets/pull/351 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/351",
"html_url": "https://github.com/huggingface/datasets/pull/351",
"diff_url": "https://github.com/huggingface/datasets/pull/351.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/351.patch",
"merged_at": "2020-07-08T14:15:15"... | 351 | true |
add from_pandas and from_dict | I added two new methods to the `Dataset` class:
- `from_pandas()` to create a dataset from a pandas dataframe
- `from_dict()` to create a dataset from a dictionary (keys = columns)
It uses the `pa.Table.from_pandas` and `pa.Table.from_pydict` functions to do so.
It is also possible to specify the features types v... | https://github.com/huggingface/datasets/pull/350 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/350",
"html_url": "https://github.com/huggingface/datasets/pull/350",
"diff_url": "https://github.com/huggingface/datasets/pull/350.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/350.patch",
"merged_at": "2020-07-08T14:14:32"... | 350 | true |
Hyperpartisan news detection | Adding the hyperpartisan news detection dataset from PAN. This contains news article text, labelled with whether they're hyper-partisan and what kinds of biases they display.
Implementation notes:
- As with many PAN tasks, the data is hosted on [Zenodo](https://zenodo.org/record/1489920) and must be requested before... | https://github.com/huggingface/datasets/pull/349 | [
"Thank you so much for working on this! This is awesome!\r\n\r\nHow much would it help you if we would remove the manual request?\r\n\r\nWe are naturally interested in getting some broad idea of how many people and who are using our dataset. But if you consider hosting the dataset yourself, I would rather remove th... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/349",
"html_url": "https://github.com/huggingface/datasets/pull/349",
"diff_url": "https://github.com/huggingface/datasets/pull/349.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/349.patch",
"merged_at": "2020-07-07T14:57:11"... | 349 | true |
Add OSCAR dataset | I don't know if the tests pass; when I run them, they try to download the whole corpus, which is around 3.5TB compressed, and I don't have that kind of space. I'll really need some help with it!
Thanks! | https://github.com/huggingface/datasets/pull/348 | [
"@pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\n ",
"> @pjox I think the tests don't pass because you haven't provided any dummy data (`dummy_data.zip`).\r\n\r\nBut can I do the dummy data without running `python nlp-cli test datasets/<your-dataset-folder... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/348",
"html_url": "https://github.com/huggingface/datasets/pull/348",
"diff_url": "https://github.com/huggingface/datasets/pull/348.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/348.patch",
"merged_at": null
} | 348 | true |
'cp950' codec error from load_dataset('xtreme', 'tydiqa') | 
I guess the error is related to a Python source-encoding issue: my PC is trying to decode the source code with the wrong encoding/decoding tools, perhaps:
https://www.python.org/dev/peps/pep-0263/
I gues... | https://github.com/huggingface/datasets/issues/347 | [
"This is probably a Windows issue, we need to specify the encoding when `load_dataset()` reads the original CSV file.\r\nTry to find the `open()` statement called by `load_dataset()` and add an `encoding='utf-8'` parameter.\r\nSee issues #242 and #307 ",
"It should be in `xtreme.py:L755`:\r\n```python\r\n ... | null | 347 | false |
Add emotion dataset | Hello 🤗 team!
I am trying to add an emotion classification dataset ([link](https://github.com/dair-ai/emotion_dataset)) to `nlp` but I am a bit stuck about what I should do when the URL for the dataset is not a ZIP file, but just a pickled `pandas.DataFrame` (see [here](https://www.dropbox.com/s/607ptdakxuh5i4s/me... | https://github.com/huggingface/datasets/pull/346 | [
"I've tried it and am getting the same error as you.\r\n\r\nYou could use the text files rather than the pickle:\r\n```\r\nhttps://www.dropbox.com/s/ikkqxfdbdec3fuj/test.txt\r\nhttps://www.dropbox.com/s/1pzkadrvffbqw6o/train.txt\r\nhttps://www.dropbox.com/s/2mzialpsgf9k5l3/val.txt\r\n```\r\n\r\nThen you would get a... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/346",
"html_url": "https://github.com/huggingface/datasets/pull/346",
"diff_url": "https://github.com/huggingface/datasets/pull/346.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/346.patch",
"merged_at": "2020-07-13T14:39:38"... | 346 | true |
Supporting documents in ELI5 | I was attempting to use the ELI5 dataset, when I realized that huggingface does not provide the supporting documents (the source documents from the common crawl). Without the supporting documents, this makes the dataset about as useful for my project as a block of cheese, or some other more apt metaphor. According to ... | https://github.com/huggingface/datasets/issues/345 | [
"Hi @saverymax ! For licensing reasons, the original team was unable to release pre-processed CommonCrawl documents. Instead, they provided a script to re-create them from a CommonCrawl dump, but it unfortunately requires access to a medium-large size cluster:\r\nhttps://github.com/facebookresearch/ELI5#downloading... | null | 345 | false |
Search qa | This PR adds the Search QA dataset used in **SearchQA: A New Q&A Dataset Augmented with Context from a Search Engine**. The dataset has the following config names:
- raw_jeopardy: raw data
- train_test_val: which is the splitted version
#336 | https://github.com/huggingface/datasets/pull/344 | [
"Could you rebase from master just to make sure we won't break anything for `fever` pls @mariamabarham ?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/344",
"html_url": "https://github.com/huggingface/datasets/pull/344",
"diff_url": "https://github.com/huggingface/datasets/pull/344.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/344.patch",
"merged_at": null
} | 344 | true |
Fix nested tensorflow format | In #339 and #337 we are thinking about adding a way to export datasets to tfrecords.
However I noticed that it was not possible to do `dset.set_format("tensorflow")` on datasets with nested features like `squad`. I fixed that using nested map operations to convert features to `tf.ragged.constant`.
I also added ... | https://github.com/huggingface/datasets/pull/343 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/343",
"html_url": "https://github.com/huggingface/datasets/pull/343",
"diff_url": "https://github.com/huggingface/datasets/pull/343.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/343.patch",
"merged_at": "2020-07-06T13:11:51"... | 343 | true |
Features should be updated when `map()` changes schema | `dataset.map()` can change the schema and column names.
We should update the features in this case (with what is possible to infer). | https://github.com/huggingface/datasets/issues/342 | [
"`dataset.column_names` are being updated but `dataset.features` aren't indeed..."
] | null | 342 | false |
add fever dataset | This PR adds the FEVER dataset https://fever.ai/ used in the paper: FEVER: a large-scale dataset for Fact Extraction and VERification (https://arxiv.org/pdf/1803.05355.pdf).
#336 | https://github.com/huggingface/datasets/pull/341 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/341",
"html_url": "https://github.com/huggingface/datasets/pull/341",
"diff_url": "https://github.com/huggingface/datasets/pull/341.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/341.patch",
"merged_at": "2020-07-06T13:03:47"... | 341 | true |
Update cfq.py | Make the dataset name consistent with the paper: Compositional Freebase Question => Compositional Freebase Questions.
"Thanks @brainshawn for this update"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/340",
"html_url": "https://github.com/huggingface/datasets/pull/340",
"diff_url": "https://github.com/huggingface/datasets/pull/340.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/340.patch",
"merged_at": "2020-07-03T12:33:50"... | 340 | true |
Add dataset.export() to TFRecords | Fixes https://github.com/huggingface/nlp/issues/337
Some design decisions:
- Simplified the function API to not handle sharding. It writes the entire dataset as a single TFRecord file. This simplifies the function logic and users can use other functions (`select`, `shard`, etc) to handle custom sharding or splitt... | https://github.com/huggingface/datasets/pull/339 | [
"Really cool @jarednielsen !\r\nDo you think we can make it work with dataset with nested features like `squad` ?\r\n\r\nI just did a PR to fix `.set_format` for datasets with nested features, but as soon as it's merged we could try to make the conversion work on a dataset like `squad`.",
"For datasets with neste... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/339",
"html_url": "https://github.com/huggingface/datasets/pull/339",
"diff_url": "https://github.com/huggingface/datasets/pull/339.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/339.patch",
"merged_at": "2020-07-22T09:16:11"... | 339 | true |
Run `make style` | These files get changed when I run `make style` on an unrelated PR. Upstreaming these changes so development on a different branch can be easier. | https://github.com/huggingface/datasets/pull/338 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/338",
"html_url": "https://github.com/huggingface/datasets/pull/338",
"diff_url": "https://github.com/huggingface/datasets/pull/338.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/338.patch",
"merged_at": "2020-07-02T18:03:10"... | 338 | true |
[Feature request] Export Arrow dataset to TFRecords | The TFRecord generation process is error-prone and requires complex separate Python scripts to download and preprocess the data. I propose to combine the user-friendly features of `nlp` with the speed and efficiency of TFRecords. Sample API:
```python
# use these existing methods
ds = load_dataset("wikitext", "wik... | https://github.com/huggingface/datasets/issues/337 | [] | null | 337 | false |
[Dataset requests] New datasets for Open Question Answering | We are still missing a few datasets for Open-Question Answering, which is currently a field in strong development.
Namely, it would be really nice to add:
- WebQuestions (Berant et al., 2013) [done]
- CuratedTrec (Baudis et al. 2015) [not open-source]
- MS-MARCO (NGuyen et al. 2016) [done]
- SearchQA (Dunn et al.... | https://github.com/huggingface/datasets/issues/336 | [] | null | 336 | false |
BioMRC Dataset presented in BioNLP 2020 ACL Workshop | https://github.com/huggingface/datasets/pull/335 | [
"I fixed the issues that you pointed out, re-run all the test and pushed the fixed code :-)",
"```\r\n=================================== FAILURES ===================================\r\n___________________ AWSDatasetTest.test_load_dataset_pandas ____________________\r\n\r\nself = <tests.test_dataset_common.AWSDat... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/335",
"html_url": "https://github.com/huggingface/datasets/pull/335",
"diff_url": "https://github.com/huggingface/datasets/pull/335.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/335.patch",
"merged_at": "2020-07-15T08:02:07"... | 335 | true | |
Add dataset.shard() method | Fixes https://github.com/huggingface/nlp/issues/312 | https://github.com/huggingface/datasets/pull/334 | [
"Great, done!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/334",
"html_url": "https://github.com/huggingface/datasets/pull/334",
"diff_url": "https://github.com/huggingface/datasets/pull/334.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/334.patch",
"merged_at": "2020-07-06T12:35:36"... | 334 | true |
fix variable name typo | https://github.com/huggingface/datasets/pull/333 | [
"Good catch :)\r\nI think there is another occurence that needs to be fixed in the second gist (line 4924 of the notebook file):\r\n```python\r\nbleu = nlp.load_metric(...)\r\n```",
"Was fixed in e16f79b5f7fc12a6a30c777722be46897a272e6f\r\nClosing it."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/333",
"html_url": "https://github.com/huggingface/datasets/pull/333",
"diff_url": "https://github.com/huggingface/datasets/pull/333.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/333.patch",
"merged_at": null
} | 333 | true | |
Add wiki_dpr | Presented in the [Dense Passage Retrieval paper](https://arxiv.org/pdf/2004.04906.pdf), this dataset consists of 21M passages from the English wikipedia along with their 768-dim embeddings computed using DPR's context encoder.
Note on the implementation:
- There are two configs: with and without the embeddings (73G... | https://github.com/huggingface/datasets/pull/332 | [
"The two configurations don't have the same sizes, I may change that so that they both have 21015300 examples for convenience, even though it's supposed to have 21015324 examples in total.\r\n\r\nOne configuration only has 21015300 examples because it seems that the embeddings of the last 24 examples are missing.",... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/332",
"html_url": "https://github.com/huggingface/datasets/pull/332",
"diff_url": "https://github.com/huggingface/datasets/pull/332.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/332.patch",
"merged_at": "2020-07-06T12:21:16"... | 332 | true |
Loading CNN/Daily Mail dataset produces `nlp.utils.info_utils.NonMatchingSplitsSizesError` | ```
>>> import nlp
>>> nlp.load_dataset('cnn_dailymail', '3.0.0')
Downloading and preparing dataset cnn_dailymail/3.0.0 (download: 558.32 MiB, generated: 1.26 GiB, total: 1.81 GiB) to /u/jm8wx/.cache/huggingface/datasets/cnn_dailymail/3.0.0/3.0.0...
Traceback (most recent call last):
File "<stdin>", line 1, in... | https://github.com/huggingface/datasets/issues/331 | [
"I couldn't reproduce on my side.\r\nIt looks like you were not able to generate all the examples, and you have the problem for each split train-test-validation.\r\nCould you try to enable logging, try again and send the logs ?\r\n```python\r\nimport logging\r\nlogging.basicConfig(level=logging.INFO)\r\n```",
"he... | null | 331 | false |
Doc red | Adding [DocRED](https://github.com/thunlp/DocRED) - a relation extraction dataset which tests document-level RE. A few implementation notes:
- There are 2 separate versions of the training set - *annotated* and *distant*. Instead of `nlp.Split.Train` I've used the splits `"train_annotated"` and `"train_distant"` to ... | https://github.com/huggingface/datasets/pull/330 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/330",
"html_url": "https://github.com/huggingface/datasets/pull/330",
"diff_url": "https://github.com/huggingface/datasets/pull/330.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/330.patch",
"merged_at": "2020-07-05T12:27:29"... | 330 | true |
[Bug] FileLock dependency incompatible with filesystem | I'm downloading a dataset successfully with
`load_dataset("wikitext", "wikitext-2-raw-v1")`
But when I attempt to cache it on an external volume, it hangs indefinitely:
`load_dataset("wikitext", "wikitext-2-raw-v1", cache_dir="/fsx") # /fsx is an external volume mount`
The filesystem when hanging looks like thi... | https://github.com/huggingface/datasets/issues/329 | [
"Hi, can you give details on your environment/os/packages versions/etc?",
"Environment is Ubuntu 18.04, Python 3.7.5, nlp==0.3.0, filelock=3.0.12.\r\n\r\nThe external volume is Amazon FSx for Lustre, and it by default creates files with limited permissions. My working theory is that FileLock creates a lockfile th... | null | 329 | false |
Fork dataset | We have a multi-task learning model training setup that I'm trying to convert to the Arrow-based nlp dataset.
We're currently training a custom TensorFlow model but the nlp paradigm should be a bridge for us to be able to use the wealth of pre-trained models in Transformers.
Our preprocessing flow parses raw text and... | https://github.com/huggingface/datasets/issues/328 | [
"To be able to generate the Arrow dataset you need to either use our csv or json utilities `load_dataset(\"json\", data_files=my_json_files)` OR write your own custom dataset script (you can find some inspiration from the [squad](https://github.com/huggingface/nlp/blob/master/datasets/squad/squad.py) script for exa... | null | 328 | false |
set seed for shuffling tests | Some tests were randomly failing because of a missing seed in a test for `train_test_split(shuffle=True)`
"url": "https://api.github.com/repos/huggingface/datasets/pulls/327",
"html_url": "https://github.com/huggingface/datasets/pull/327",
"diff_url": "https://github.com/huggingface/datasets/pull/327.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/327.patch",
"merged_at": "2020-07-02T08:34:04"... | 327 | true |
Large dataset in Squad2-format | At the moment we are building a large question answering dataset and are thinking about sharing it with the huggingface community.
Because of limited computing power we split it into multiple tiles, but they are all in the same format.
Right now the most important facts about it are these:
- Contexts: 1.047.671
- questions: 1.677... | https://github.com/huggingface/datasets/issues/326 | [
"I'm pretty sure you can get some inspiration from the squad_v2 script. It looks like the dataset is quite big so it will take some time for the users to generate it, but it should be reasonable.\r\n\r\nAlso you are saying that you are still making the dataset grow in size right ?\r\nIt's probably good practice to ... | null | 326 | false |
Add SQuADShifts dataset | This PR adds the four new variants of the SQuAD dataset used in [The Effect of Natural Distribution Shift on Question Answering Models](https://arxiv.org/abs/2004.14444) to facilitate evaluating model robustness to distribution shift. | https://github.com/huggingface/datasets/pull/325 | [
"Very cool to have this dataset, thank you for adding it :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/325",
"html_url": "https://github.com/huggingface/datasets/pull/325",
"diff_url": "https://github.com/huggingface/datasets/pull/325.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/325.patch",
"merged_at": "2020-06-30T17:07:31"... | 325 | true |
Error when calculating glue score | I was trying the glue score along with other metrics here. But glue gives me this error:
```
import nlp
glue_metric = nlp.load_metric('glue',name="cola")
glue_score = glue_metric.compute(predictions, references)
```
```
---------------------------------------------------------------------------
--------------... | https://github.com/huggingface/datasets/issues/324 | [
"The glue metric for cola is a metric for classification. It expects label ids as integers as inputs.",
"I want to evaluate a sentence pair whether they are semantically equivalent, so I used MRPC and it gives the same error, does that mean we have to encode the sentences and parse as input?\r\n\r\nusing BertToke... | null | 324 | false |
Add package path to sys when downloading package as github archive | This fixes the `coval.py` metric so that imports within the downloaded module work correctly. We can use a similar trick to add the BLEURT metric (@ankparikh)
@thomwolf not sure how you feel about adding to the `PYTHONPATH` from the script. This is the only way I could make it work with my understanding of `importli... | https://github.com/huggingface/datasets/pull/323 | [
"Sorry for the long diff, everything after the imports comes from `black` for code quality :/ ",
" I think it's fine and I can't think of another way to make the import work anyways.\r\n\r\nMaybe we can have the `sys.path` behavior inside `prepare_module` instead ? Currently it seems to come out of nowhere in the... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/323",
"html_url": "https://github.com/huggingface/datasets/pull/323",
"diff_url": "https://github.com/huggingface/datasets/pull/323.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/323.patch",
"merged_at": null
} | 323 | true |
output nested dict in get_nearest_examples | As we are using a columnar format like arrow as the backend for datasets, we expect to have a dictionary of columns when we slice a dataset like in this example:
```python
my_examples = dataset[0:10]
print(type(my_examples))
# >>> dict
print(my_examples["my_column"][0]
# >>> this is the first element of the colum... | https://github.com/huggingface/datasets/pull/322 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/322",
"html_url": "https://github.com/huggingface/datasets/pull/322",
"diff_url": "https://github.com/huggingface/datasets/pull/322.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/322.patch",
"merged_at": "2020-07-02T08:33:32"... | 322 | true |
ERROR:root:mwparserfromhell | Hi,
I am trying to download some wikipedia data but I got this error for Spanish "es" (there may be other languages with the same error; I haven't tried all of them).
`ERROR:root:mwparserfromhell ParseError: This is a bug and should be reported. Info: C tokenizer exited with non-empty token sta... | https://github.com/huggingface/datasets/issues/321 | [
"It looks like it comes from `mwparserfromhell`.\r\n\r\nWould it be possible to get the bad `section` that causes this issue ? The `section` string is from `datasets/wikipedia.py:L548` ? You could just add a `try` statement and print the section if the line `section_text.append(section.strip_code().strip())` crashe... | null | 321 | false |
Blog Authorship Corpus, Non Matching Splits Sizes Error, nlp viewer | Selecting `blog_authorship_corpus` in the nlp viewer throws the following error:
```
NonMatchingSplitsSizesError: [{'expected': SplitInfo(name='train', num_bytes=610252351, num_examples=532812, dataset_name='blog_authorship_corpus'), 'recorded': SplitInfo(name='train', num_bytes=614706451, num_examples=535568, dat... | https://github.com/huggingface/datasets/issues/320 | [
"I wonder if this means downloading failed? That corpus has a really slow server.",
"This dataset seems to have a decoding problem that results in inconsistencies in the number of generated examples.\r\nSee #215.\r\nThat's why we end up with a `NonMatchingSplitsSizesError `."
] | null | 320 | false |
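A hedged workaround sketch while the decoding issue stands: skip the split-size verification (flag as in the `nlp` library of this era). This silences the error but does not fix the underlying inconsistency from #215.
```python
import nlp

# Hedged workaround: bypass the check that raises NonMatchingSplitsSizesError.
dataset = nlp.load_dataset("blog_authorship_corpus", ignore_verifications=True)
```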
Nested sequences with dicts | I'm pretty much finished [adding a dataset](https://github.com/ghomasHudson/nlp/blob/DocRED/datasets/docred/docred.py) for [DocRED](https://github.com/thunlp/DocRED), but I'm getting an error when trying to add a nested `nlp.features.sequence(nlp.features.sequence({key:value,...}))`.
The original data is in this form... | https://github.com/huggingface/datasets/issues/319 | [
"Oh yes, this is a backward compatibility feature with tensorflow_dataset in which a `Sequence` or `dict` is converted in a `dict` of `lists`, unfortunately it is not very intuitive, see here: https://github.com/huggingface/nlp/blob/master/src/nlp/features.py#L409\r\n\r\nTo avoid this behavior, you can just define ... | null | 319 | false |
Multitask | Following our discussion in #217, I've implemented a first working version of `MultiDataset`.
There's a function `build_multitask()` which takes either individual `nlp.Dataset`s or `dicts` of splits and constructs `MultiDataset`(s). I've added a notebook with example usage.
I've implemented many of the `nlp.Datas... | https://github.com/huggingface/datasets/pull/318 | [
"It's definitely going in the right direction ! Thanks for giving it a try\r\n\r\nI really like the API.\r\nIMO it's fine right now if we don't have all the dataset transforms (map, filter, etc.) as it can be done before building the multitask dataset, but it will be important to have them in the end.\r\nAll the fo... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/318",
"html_url": "https://github.com/huggingface/datasets/pull/318",
"diff_url": "https://github.com/huggingface/datasets/pull/318.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/318.patch",
"merged_at": null
} | 318 | true |
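A hypothetical usage sketch based solely on this PR's description; `build_multitask` is the function proposed here, not a released API, and its exact signature is an assumption.
```python
import nlp

squad = nlp.load_dataset("squad", split="train")
imdb = nlp.load_dataset("imdb", split="train")

# `build_multitask` is defined in this PR's branch (not a released API);
# its signature is an assumption based on the PR description.
multitask_ds = build_multitask(squad, imdb)
```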
Adding a dataset with multiple subtasks | I intend to add the datasets of the MT Quality Estimation shared tasks to `nlp`. However, they have different subtasks -- such as word-level, sentence-level and document-level quality estimation -- each of which has different language pairs, with some of the data reused across subtasks.
For example, in [QE 201... | https://github.com/huggingface/datasets/issues/317 | [
"For one dataset you can have different configurations that each have their own `nlp.Features`.\r\nWe imagine having one configuration per subtask for example.\r\nThey are loaded with `nlp.load_dataset(\"my_dataset\", \"my_config\")`.\r\n\r\nFor example the `glue` dataset has many configurations. It is a bit differ... | null | 317 | false |
add AG News dataset | adds support for the AG-News topic classification dataset | https://github.com/huggingface/datasets/pull/316 | [
"Thanks @jxmorris12 for adding this adding. \r\nCan you please add a small description of the PR?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/316",
"html_url": "https://github.com/huggingface/datasets/pull/316",
"diff_url": "https://github.com/huggingface/datasets/pull/316.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/316.patch",
"merged_at": "2020-06-30T08:31:55"... | 316 | true |
[Question] Best way to batch a large dataset? | I'm training on large datasets such as Wikipedia and BookCorpus. Following the instructions in [the tutorial notebook](https://colab.research.google.com/github/huggingface/nlp/blob/master/notebooks/Overview.ipynb), I see the following recommended for TensorFlow:
```python
train_tf_dataset = train_tf_dataset.filter(... | https://github.com/huggingface/datasets/issues/315 | [
"Update: I think I've found a solution.\r\n\r\n```python\r\noutput_types = {\"input_ids\": tf.int64, \"token_type_ids\": tf.int64, \"attention_mask\": tf.int64}\r\ndef train_dataset_gen():\r\n for i in range(len(train_dataset)):\r\n yield train_dataset[i]\r\ntf_dataset = tf.data.Dataset.from_generator(tra... | null | 315 | false |
Fixed singular very minor spelling error | An instance of "independantly" was changed to "independently". That's all. | https://github.com/huggingface/datasets/pull/314 | [
"Thank you BatJeti! The storm-joker, aka the typo, finally got caught!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/314",
"html_url": "https://github.com/huggingface/datasets/pull/314",
"diff_url": "https://github.com/huggingface/datasets/pull/314.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/314.patch",
"merged_at": "2020-06-25T12:43:59"... | 314 | true |
Add MWSC | Adding the [Modified Winograd Schema Challenge](https://github.com/salesforce/decaNLP/blob/master/local_data/schema.txt) dataset, which formed part of the [decaNLP](http://decanlp.com/) benchmark. Not sure how much use people would find for it outside of the benchmark, but it is general purpose.
Code is heavily bo... | https://github.com/huggingface/datasets/pull/313 | [
"Looks good to me"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/313",
"html_url": "https://github.com/huggingface/datasets/pull/313",
"diff_url": "https://github.com/huggingface/datasets/pull/313.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/313.patch",
"merged_at": "2020-06-30T08:28:10"... | 313 | true |
[Feature request] Add `shard()` method to dataset | Currently, to shard a dataset into 10 pieces on different ranks, you can run
```python
rank = 3 # for example
size = 10
dataset = nlp.load_dataset('wikitext', 'wikitext-2-raw-v1', split=f"train[{rank*10}%:{(rank+1)*10}%]")
```
However, this breaks down if you have a number of ranks that doesn't divide cleanly... | https://github.com/huggingface/datasets/issues/312 | [
"Hi Jared,\r\nInteresting, thanks for raising this question. You can also do that after loading with `dataset.select()` or `dataset.filter()` which let you keep only a specific subset of rows in a dataset.\r\nWhat is your use-case for sharding?",
"Thanks for the pointer to those functions! It's still a little mor... | null | 312 | false |
Add qa_zre | Adding the QA-ZRE dataset from ["Zero-Shot Relation Extraction via Reading Comprehension"](http://nlp.cs.washington.edu/zeroshot/).
A common processing step seems to be replacing the `XXX` placeholder with the `subject`. I've left this out as it's something you could easily do with `map`. | https://github.com/huggingface/datasets/pull/311 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/311",
"html_url": "https://github.com/huggingface/datasets/pull/311",
"diff_url": "https://github.com/huggingface/datasets/pull/311.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/311.patch",
"merged_at": "2020-06-29T16:37:38"... | 311 | true |
add wikisql | Adding the [WikiSQL](https://github.com/salesforce/WikiSQL) dataset.
Interesting things to note:
- Have copied the function (`_convert_to_human_readable`) which converts the SQL query to a human-readable (string) format as this is what most people will want when actually using this dataset for NLP applications.
- ... | https://github.com/huggingface/datasets/pull/310 | [
"That's great work @ghomasHudson !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/310",
"html_url": "https://github.com/huggingface/datasets/pull/310",
"diff_url": "https://github.com/huggingface/datasets/pull/310.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/310.patch",
"merged_at": "2020-06-25T12:32:25"... | 310 | true |
Add narrative qa | Test cases for the dummy data don't pass.
Only contains data for the summaries (not the whole story) | https://github.com/huggingface/datasets/pull/309 | [
"Does it make sense to download the full stories? I remember attempting to implement this dataset a while ago and ended up with something like:\r\n```python\r\n def _split_generators(self, dl_manager):\r\n \"\"\"Returns SplitGenerators.\"\"\"\r\n\r\n dl_dir = dl_manager.download_and_extract(_DOWNLO... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/309",
"html_url": "https://github.com/huggingface/datasets/pull/309",
"diff_url": "https://github.com/huggingface/datasets/pull/309.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/309.patch",
"merged_at": null
} | 309 | true |
Specify utf-8 encoding for MRPC files | Fixes #307, again probably a Windows-related issue. | https://github.com/huggingface/datasets/pull/308 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/308",
"html_url": "https://github.com/huggingface/datasets/pull/308",
"diff_url": "https://github.com/huggingface/datasets/pull/308.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/308.patch",
"merged_at": "2020-06-25T12:16:09"... | 308 | true |
Specify encoding for MRPC | Same as #242, but with MRPC: on Windows, I get a `UnicodeDecodeError` when I try to download the dataset:
```python
dataset = nlp.load_dataset('glue', 'mrpc')
```
```python
Downloading and preparing dataset glue/mrpc (download: Unknown size, generated: Unknown size, total: Unknown size) to C:\Users\Python\.cache... | https://github.com/huggingface/datasets/issues/307 | [] | null | 307 | false |
add pg19 dataset | https://github.com/huggingface/nlp/issues/274
Add functioning PG19 dataset with dummy data
`cos_e.py` was just auto-linted by `make style` | https://github.com/huggingface/datasets/pull/306 | [
"@lucidrains - Thanks a lot for making the PR - PG19 is a super important dataset! Thanks for making it. Many people are asking for PG-19, so it would be great to have that in the library as soon as possible @thomwolf .",
"@mariamabarham yup! around 11GB!",
"I'm looking forward to our first deep learning writte... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/306",
"html_url": "https://github.com/huggingface/datasets/pull/306",
"diff_url": "https://github.com/huggingface/datasets/pull/306.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/306.patch",
"merged_at": "2020-07-06T07:55:59"... | 306 | true |
Importing downloaded package repository fails | The `get_imports` function in `src/nlp/load.py` has a feature to download a package as a zip archive of the github repository and import functions from the unpacked directory. This is used for example in the `metrics/coval.py` file, and would be useful to add BLEURT (@ankparikh).
Currently however, the code seems to... | https://github.com/huggingface/datasets/issues/305 | [] | null | 305 | false |
Problem while printing doc string when instantiating multiple metrics. | When I load more than one metric and try to print the doc string of a particular metric, it shows the doc strings of all imported metrics one after the other, which looks quite confusing and clumsy.
Attached [Colab](https://colab.research.google.com/drive/13H0ZgyQ2se0mqJ2yyew0bNEgJuHaJ8H3?usp=sharing) Notebook for problem ... | https://github.com/huggingface/datasets/issues/304 | [] | null | 304 | false |
allow to move files across file systems | Users are allowed to use the `cache_dir` that they want.
Therefore it can happen that we try to move files across filesystems.
We were using `os.rename`, which doesn't allow that, so I changed those calls to `shutil.move`.
This should fix #301 | https://github.com/huggingface/datasets/pull/303 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/303",
"html_url": "https://github.com/huggingface/datasets/pull/303",
"diff_url": "https://github.com/huggingface/datasets/pull/303.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/303.patch",
"merged_at": "2020-06-23T15:08:43"... | 303 | true |
Question - Sign Language Datasets | An emerging field in NLP is SLP - sign language processing.
I was wondering about adding datasets here, specifically because it's shaping up to be large and easily usable.
The metrics for sign language to text translation are the same.
So, what do you think about (me, or others) adding datasets here?
An exa... | https://github.com/huggingface/datasets/issues/302 | [
"Even more complicating - \r\n\r\nAs I see it, datasets can have \"addons\".\r\nFor example, the WebNLG dataset is a dataset for data-to-text. However, a work of mine and other works enriched this dataset with text plans / underlying text structures. In that case, I see a need to load the dataset \"WebNLG\" with \"... | null | 302 | false |
Setting cache_dir gives error on wikipedia download | First of all thank you for a super handy library! I'd like to download large files to a specific drive so I set `cache_dir=my_path`. This works fine with e.g. imdb and squad. But on wikipedia I get an error:
```
nlp.load_dataset('wikipedia', '20200501.de', split = 'train', cache_dir=my_path)
```
```
OSError ... | https://github.com/huggingface/datasets/issues/301 | [
"Whoops didn't mean to close this one.\r\nI did some changes, could you try to run it from the master branch ?",
"Now it works, thanks!"
] | null | 301 | false |