| title | body | html_url | comments | pull_request | number | is_pull_request |
|---|---|---|---|---|---|---|
check if transformers has PreTrainedTokenizerBase | Fix #598 | https://github.com/huggingface/datasets/pull/601 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/601",
"html_url": "https://github.com/huggingface/datasets/pull/601",
"diff_url": "https://github.com/huggingface/datasets/pull/601.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/601.patch",
"merged_at": "2020-09-10T11:01:36"... | 601 | true |
Pickling error when loading dataset | Hi,
I modified line 136 in the original [run_language_modeling.py](https://github.com/huggingface/transformers/blob/master/examples/language-modeling/run_language_modeling.py) as:
```
# line 136: return LineByLineTextDataset(tokenizer=tokenizer, file_path=file_path, block_size=args.block_size)
dataset = load_da... | https://github.com/huggingface/datasets/issues/600 | [
"When I change from python3.6 to python3.8, it works! ",
"Does it work when you install `nlp` from source on python 3.6?",
"No, still the pickling error.",
"I wasn't able to reproduce on google colab (python 3.6.9 as well) with \r\n\r\npickle==4.0\r\ndill=0.3.2\r\ntransformers==3.1.0\r\ndatasets=1.0.1 (also t... | null | 600 | false |
Add MATINF dataset | @lhoestq The command to create metadata failed. I guess it's because the zip is not downloaded from a remote address? How to solve that? Also the CI fails and I don't know how to fix that :( | https://github.com/huggingface/datasets/pull/599 | [
"Hi ! sorry for the late response\r\n\r\nCould you try to rebase from master ? We changed the named of the library last week so you have to include this change in your code.\r\n\r\nCan you give me more details about the error you get when running the cli command ?\r\n\r\nNote that in case of a manual download you h... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/599",
"html_url": "https://github.com/huggingface/datasets/pull/599",
"diff_url": "https://github.com/huggingface/datasets/pull/599.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/599.patch",
"merged_at": null
} | 599 | true |
The current version of the package on github has an error when loading dataset | Unlike the package installed from pip, the version installed from source results in an error when loading a dataset (the pip version is completely fine):
To recreate the error:
First, installing nlp directly from source:
```
git clone https://github.com/huggingface/nlp.git
cd nlp
pip install -e .
``... | https://github.com/huggingface/datasets/issues/598 | [
"Thanks for reporting !\r\nWhich version of transformers are you using ?\r\nIt looks like it doesn't have the PreTrainedTokenizerBase class",
"I was using transformer 2.9. And I switch to the latest transformer package. Everything works just fine!!\r\n\r\nThanks for helping! I should look more carefully next time... | null | 598 | false |
Indices incorrect with multiprocessing | When `num_proc` > 1, the indices argument passed to the map function is incorrect:
```python
d = load_dataset('imdb', split='test[:1%]')
def fn(x, inds):
print(inds)
return x
d.select(range(10)).map(fn, with_indices=True, batched=True)
# [0, 1]
# [0, 1, 2, 3, 4, 5, 6, 7, 8, 9]
d.select(range(10... | https://github.com/huggingface/datasets/issues/597 | [
"I fixed a bug that could cause this issue earlier today. Could you pull the latest version and try again ?",
"Still the case on master.\r\nI guess we should have an offset in the multi-procs indeed (hopefully it's enough).\r\n\r\nAlso, side note is that we should add some logging before the \"test\" to say we ar... | null | 597 | false |
[style/quality] Moving to isort 5.0.0 + style/quality on datasets and metrics | Move the repo to isort 5.0.0.
Also start testing style/quality on datasets and metrics.
Specific rule: we allow F401 (unused imports) in metrics so we can add imports that detect missing dependencies early on.
Maybe we could add this in datasets too, but while cleaning this I've seen many examples of really unused i...
"Ready for review @lhoestq, just updated a few 156 files here"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/596",
"html_url": "https://github.com/huggingface/datasets/pull/596",
"diff_url": "https://github.com/huggingface/datasets/pull/596.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/596.patch",
"merged_at": "2020-09-10T10:05:03"... | 596 | true |
`Dataset`/`DatasetDict` has no attribute 'save_to_disk' | Hi,
As the title indicates, both `Dataset` and `DatasetDict` classes don't seem to have the `save_to_disk` method. While the file [`arrow_dataset.py`](https://github.com/huggingface/nlp/blob/34bf0b03bfe03e7f77b8fec1cd48f5452c4fc7c1/src/nlp/arrow_dataset.py) in the repo here has the method, the file `arrow_dataset.p... | https://github.com/huggingface/datasets/issues/595 | [
"`pip install git+https://github.com/huggingface/nlp.git` should have done the job.\r\n\r\nDid you uninstall `nlp` before installing from github ?",
"> Did you uninstall `nlp` before installing from github ?\r\n\r\nI did not. I created a new environment and installed `nlp` directly from `github` and it worked!\r\... | null | 595 | false |
Fix germeval url | Continuation of #593 but without the dummy data hack | https://github.com/huggingface/datasets/pull/594 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/594",
"html_url": "https://github.com/huggingface/datasets/pull/594",
"diff_url": "https://github.com/huggingface/datasets/pull/594.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/594.patch",
"merged_at": "2020-09-09T13:34:34"... | 594 | true |
GermEval 2014: new download urls | Hi,
unfortunately, the download links for the GermEval 2014 dataset have changed: they're now located on a Google Drive.
I changed the URLs and bumped the version from 1.0.0 to 2.0.0. | https://github.com/huggingface/datasets/pull/593 | [
"/cc: @vblagoje",
"Closing this one as #594 is merged (same changes except the dummy data hack)",
"Awesome @stefan-it ! @lhoestq how soon can I use the fixed GermEval dataset in HF token classification examples?",
"I've manually updated the script on S3, so you can actually use it right now with\r\n```python\... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/593",
"html_url": "https://github.com/huggingface/datasets/pull/593",
"diff_url": "https://github.com/huggingface/datasets/pull/593.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/593.patch",
"merged_at": null
} | 593 | true |
Test in memory and on disk | I added test parameters to do every test both in memory and on disk.
I also found a bug in concatenate_dataset thanks to the new tests and fixed it. | https://github.com/huggingface/datasets/pull/592 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/592",
"html_url": "https://github.com/huggingface/datasets/pull/592",
"diff_url": "https://github.com/huggingface/datasets/pull/592.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/592.patch",
"merged_at": "2020-09-09T13:50:03"... | 592 | true |
fix #589 (backward compat) | Fix #589 | https://github.com/huggingface/datasets/pull/591 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/591",
"html_url": "https://github.com/huggingface/datasets/pull/591",
"diff_url": "https://github.com/huggingface/datasets/pull/591.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/591.patch",
"merged_at": "2020-09-09T08:57:54"... | 591 | true |
The process cannot access the file because it is being used by another process (Windows) | Hi, I consistently get the following error when developing on my PC (Windows 10):
```
train_dataset = train_dataset.map(convert_to_features, batched=True)
File "C:\Users\saareliad\AppData\Local\Continuum\miniconda3\envs\py38\lib\site-packages\nlp\arrow_dataset.py", line 970, in map
shutil.move(tmp_file.... | https://github.com/huggingface/datasets/issues/590 | [
"Hi, which version of `nlp` are you using?\r\n\r\nBy the way we'll be releasing today a significant update fixing many issues (but also comprising a few breaking changes).\r\nYou can see more informations here #545 and try it by installing from source from the master branch.",
"I'm using version 0.4.0.\r\n\r\n",
... | null | 590 | false |
Cannot use nlp.load_dataset text, AttributeError: module 'nlp.utils' has no attribute 'logging' |
```
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp/load.py", line 533, in load_dataset
builder_cls = import_main_class(module_path, dataset=True)
File "/root/anaconda3/envs/pytorch/lib/python3.7/site-packages/nlp... | https://github.com/huggingface/datasets/issues/589 | [] | null | 589 | false |
Support pathlike obj in load dataset | Fix #582
(I recreated the PR, I got an issue with git) | https://github.com/huggingface/datasets/pull/588 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/588",
"html_url": "https://github.com/huggingface/datasets/pull/588",
"diff_url": "https://github.com/huggingface/datasets/pull/588.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/588.patch",
"merged_at": "2020-09-08T07:45:17"... | 588 | true |
Support pathlike obj in load dataset | Fix #582 | https://github.com/huggingface/datasets/pull/587 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/587",
"html_url": "https://github.com/huggingface/datasets/pull/587",
"diff_url": "https://github.com/huggingface/datasets/pull/587.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/587.patch",
"merged_at": null
} | 587 | true |
Better message when data files is empty | Fix #581 | https://github.com/huggingface/datasets/pull/586 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/586",
"html_url": "https://github.com/huggingface/datasets/pull/586",
"diff_url": "https://github.com/huggingface/datasets/pull/586.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/586.patch",
"merged_at": "2020-09-09T09:00:07"... | 586 | true |
Fix select for pyarrow < 1.0.0 | Fix #583 | https://github.com/huggingface/datasets/pull/585 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/585",
"html_url": "https://github.com/huggingface/datasets/pull/585",
"diff_url": "https://github.com/huggingface/datasets/pull/585.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/585.patch",
"merged_at": "2020-09-08T07:43:15"... | 585 | true |
Use github versioning | Right now dataset scripts and metrics are downloaded from S3 which is in sync with master. It means that it's not currently possible to pin the dataset/metric script version.
To fix that I changed the download url from S3 to github, and added a `version` parameter in `load_dataset` and `load_metric` to pin a certai... | https://github.com/huggingface/datasets/pull/584 | [
"I noticed that datasets like `cnn_dailymail` need the `version` parameter to be passed to its `config_kwargs`.\r\nShall we rename the `version` paramater in `load_dataset` ? Maybe `repo_version` or `script_version` ?"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/584",
"html_url": "https://github.com/huggingface/datasets/pull/584",
"diff_url": "https://github.com/huggingface/datasets/pull/584.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/584.patch",
"merged_at": "2020-09-09T13:37:34"... | 584 | true |
ArrowIndexError on Dataset.select | If the indices table consists of several chunks, then `dataset.select` raises an `ArrowIndexError` for pyarrow < 1.0.0
Example:
```python
from nlp import load_dataset
mnli = load_dataset("glue", "mnli", split="train")
shuffled = mnli.shuffle(seed=42)
mnli.select(list(range(len(mnli))))
```
rai... | https://github.com/huggingface/datasets/issues/583 | [] | null | 583 | false |
Allow for PathLike objects | Using PathLike objects as input for `load_dataset` does not seem to work. The following will throw an error.
```python
files = list(Path(r"D:\corpora\yourcorpus").glob("*.txt"))
dataset = load_dataset("text", data_files=files)
```
Traceback:
```
Traceback (most recent call last):
File "C:/dev/python/dut... | https://github.com/huggingface/datasets/issues/582 | [] | null | 582 | false |
Better error message when input file does not exist | In the following scenario, when `data_files` is an empty list, the stack trace and error message could be improved. This can probably be solved by checking for each file whether it actually exists and/or whether the argument is not false-y.
```python
dataset = load_dataset("text", data_files=[])
```
Example err... | https://github.com/huggingface/datasets/issues/581 | [] | null | 581 | false |
nlp re-creates already-there caches when using a script, but not within a shell | `nlp` keeps creating new caches for the same file when launching `filter` from a script, and behaves correctly from within the shell.
Example: try running
```
import nlp
hans_easy_data = nlp.load_dataset('hans', split="validation").filter(lambda x: x['label'] == 0)
hans_hard_data = nlp.load_dataset('hans', s... | https://github.com/huggingface/datasets/issues/580 | [
"Couln't reproduce on my side :/ \r\nlet me know if you manage to reproduce on another env (colab for example)",
"Fixed with a clean re-install!"
] | null | 580 | false |
Doc metrics | Adding documentation on metrics loading/using/sharing | https://github.com/huggingface/datasets/pull/579 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/579",
"html_url": "https://github.com/huggingface/datasets/pull/579",
"diff_url": "https://github.com/huggingface/datasets/pull/579.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/579.patch",
"merged_at": "2020-09-10T13:06:10"... | 579 | true |
Add CommonGen Dataset | CC Authors:
@yuchenlin @MichaelZhouwang | https://github.com/huggingface/datasets/pull/578 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/578",
"html_url": "https://github.com/huggingface/datasets/pull/578",
"diff_url": "https://github.com/huggingface/datasets/pull/578.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/578.patch",
"merged_at": "2020-09-07T11:49:07"... | 578 | true |
Some languages in wikipedia dataset are not loading | Hi,
I am working with the `wikipedia` dataset and I have a script that goes over 92 of the available languages in that dataset. So far I have detected that `ar`, `af`, `an` are not loading. Other languages like `fr` and `en` are working fine. Here's how I am loading them:
```
import nlp
langs = ['ar', 'af', '...
"Some wikipedia languages have already been processed by us and are hosted on our google storage. This is the case for \"fr\" and \"en\" for example.\r\n\r\nFor other smaller languages (in terms of bytes), they are directly downloaded and parsed from the wikipedia dump site.\r\nParsing can take some time for langua... | null | 577 | false |
Fix the code block in doc | https://github.com/huggingface/datasets/pull/576 | [
"thanks :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/576",
"html_url": "https://github.com/huggingface/datasets/pull/576",
"diff_url": "https://github.com/huggingface/datasets/pull/576.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/576.patch",
"merged_at": "2020-09-07T07:37:18"... | 576 | true | |
Couldn't reach certain URLs and for the ones that can be reached, code just blocks after downloading. | Hi,
I'm following the [quick tour](https://huggingface.co/nlp/quicktour.html) and tried to load the glue dataset:
```
>>> from nlp import load_dataset
>>> dataset = load_dataset('glue', 'mrpc', split='train')
```
However, this ran into a `ConnectionError` saying it could not reach the URL (just pasting the la... | https://github.com/huggingface/datasets/issues/575 | [
"Update:\r\n\r\nThe imdb download completed after a long time (about 45 mins). Ofcourse once download loading was instantaneous. Also, the loaded object was of type `arrow_dataset`. \r\n\r\nThe urls for glue still doesn't work though.",
"Thanks for the report, I'll give a look!",
"I am also seeing a similar err... | null | 575 | false |
Add modules cache | As discussed in #554, we should use a module cache directory outside of the python packages directory since we may not have write permissions.
I added a new HF_MODULES_PATH directory that is added to the python path when doing `import nlp`.
In this directory, a module `nlp_modules` is created so that datasets can ... | https://github.com/huggingface/datasets/pull/574 | [
"All the tests pass on my side. Not sure if it is a cache issue or a pytest issue or a circleci issue.\r\nEDIT: I have the same error on google colab. Trying to fix that",
"I think I fixed it (sorry didn't notice you were on it as well)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/574",
"html_url": "https://github.com/huggingface/datasets/pull/574",
"diff_url": "https://github.com/huggingface/datasets/pull/574.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/574.patch",
"merged_at": "2020-09-07T09:01:35"... | 574 | true |
Faster caching for text dataset | As mentioned in #546 and #548 , hashing `data_files` contents to get the cache directory name for a text dataset can take a long time.
To make it faster I changed the hashing so that it takes into account the `path` and the `last modified timestamp` of each data file, instead of iterating through the content of each... | https://github.com/huggingface/datasets/pull/573 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/573",
"html_url": "https://github.com/huggingface/datasets/pull/573",
"diff_url": "https://github.com/huggingface/datasets/pull/573.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/573.patch",
"merged_at": "2020-09-04T12:53:23"... | 573 | true |
Add CLUE Benchmark (11 datasets) | Add 11 tasks of [CLUE](https://github.com/CLUEbenchmark/CLUE). | https://github.com/huggingface/datasets/pull/572 | [
"Thanks, @lhoestq! I've addressed the comments. \r\nAlso, I have tried to use `ClassLabel` [when possible](https://github.com/huggingface/nlp/pull/572/files#diff-1026ac7d7b78bf029cb0ebe63162c77dR297). Is there still somewhere else we can use `ClassLabel`? ",
"I believe CI failure is unrelated.",
"Great job! "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/572",
"html_url": "https://github.com/huggingface/datasets/pull/572",
"diff_url": "https://github.com/huggingface/datasets/pull/572.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/572.patch",
"merged_at": "2020-09-07T09:59:10"... | 572 | true |
Serialization | I added `save` and `load` methods to serialize/deserialize a dataset object in a folder.
It moves the arrow files there (or writes them if the tables were in memory), and saves the pickle state in a json file `state.json`, except the info, which goes in a separate file `dataset_info.json`.
Example:
```python
import ... | https://github.com/huggingface/datasets/pull/571 | [
"I've added save/load for dataset dicts.\r\n\r\nI agree that in the future we should also have a way to save indexes too, and also the in-place history of transforms.\r\n\r\nAlso I understand that it would be cool to have the load function directly at the root of the library, but I'm not sure this should be inside ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/571",
"html_url": "https://github.com/huggingface/datasets/pull/571",
"diff_url": "https://github.com/huggingface/datasets/pull/571.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/571.patch",
"merged_at": "2020-09-07T07:46:07"... | 571 | true |
add reuters21578 dataset | Reopening this PR after the merge. | https://github.com/huggingface/datasets/pull/570 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/570",
"html_url": "https://github.com/huggingface/datasets/pull/570",
"diff_url": "https://github.com/huggingface/datasets/pull/570.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/570.patch",
"merged_at": "2020-09-03T10:46:51"... | 570 | true |
Revert "add reuters21578 dataset" | Reverts huggingface/nlp#471 | https://github.com/huggingface/datasets/pull/569 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/569",
"html_url": "https://github.com/huggingface/datasets/pull/569",
"diff_url": "https://github.com/huggingface/datasets/pull/569.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/569.patch",
"merged_at": "2020-09-03T10:07:12"... | 569 | true |
`metric.compute` throws `ArrowInvalid` error | I get the following error with `rouge.compute`. It happens only with distributed training, and it occurs randomly, so I can't easily reproduce it. This is using `nlp==0.4.0`
```
File "/home/beltagy/trainer.py", line 92, in validation_step
rouge_scores = rouge.compute(predictions=generated_str, references=gold_st... | https://github.com/huggingface/datasets/issues/568 | [
"Hmm might be related to what we are solving in #564",
"Could you try to update to `datasets>=1.0.0` (we changed the name of the library) and try again ?\r\nIf is was related to the distributed setup settings it must be fixed.\r\nIf it was related to empty metric inputs it's going to be fixed in #654 ",
"Closin... | null | 568 | false |
Fix BLEURT metrics for backward compatibility | Fix #565 | https://github.com/huggingface/datasets/pull/567 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/567",
"html_url": "https://github.com/huggingface/datasets/pull/567",
"diff_url": "https://github.com/huggingface/datasets/pull/567.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/567.patch",
"merged_at": "2020-09-03T07:29:50"... | 567 | true |
Remove logger pickling to fix gg colab issues | `logger` objects are not picklable in Google Colab, contrary to `logger` objects in jupyter notebooks or in python shells.
This creates some issues in Google Colab right now.
Indeed, when calling any `Dataset` method, the fingerprint update pickles the transform function, and as the logger comes with it, it results in... | https://github.com/huggingface/datasets/pull/566 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/566",
"html_url": "https://github.com/huggingface/datasets/pull/566",
"diff_url": "https://github.com/huggingface/datasets/pull/566.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/566.patch",
"merged_at": "2020-09-03T16:31:52"... | 566 | true |
No module named 'nlp.logging' | Hi, I am using nlp version 0.4.0. I am trying to use bleurt as an eval metric; however, the bleurt script imports nlp.logging, which raises the following error. What am I missing?
```
>>> import nlp
2020-09-02 13:47:09.210310: I tensorflow/stream_executor/platform/default/dso_loader.cc:48] Successfully opened dynamic l... | https://github.com/huggingface/datasets/issues/565 | [
"Thanks for reporting.\r\n\r\nApparently this is a versioning issue: the lib downloaded the `bleurt` script from the master branch where we did this change recently. We'll fix that in a new release this week or early next week. Cc @thomwolf \r\n\r\nUntil that, I'd suggest you to download the right bleurt folder fro... | null | 565 | false |
Wait for writing in distributed metrics | There were CI bugs where a distributed metric would try to read all the files in process 0 while the other processes hadn't started writing yet.
To fix that I added a custom locking mechanism that waits for the file to exist before trying to read it. | https://github.com/huggingface/datasets/pull/564 | [
"I agree this fix the problem for the CI where the files are always created in a new and clean temporary directory.\r\n\r\nHowever, in a general setting of a succession of fast distributed operation, the files could already exist from previous metrics runs but one process may still finish before another has even st... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/564",
"html_url": "https://github.com/huggingface/datasets/pull/564",
"diff_url": "https://github.com/huggingface/datasets/pull/564.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/564.patch",
"merged_at": "2020-09-09T09:13:22"... | 564 | true |
[Large datasets] Speed up download and processing | Various improvements to speed-up creation and processing of large scale datasets.
Currently:
- distributed downloads
- remove etag from datafiles hashes to spare a request when restarting a failed download | https://github.com/huggingface/datasets/pull/563 | [
"Looks all good :)\r\nI rebased from master and added a test for parallel `map_nested`",
"you're da best"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/563",
"html_url": "https://github.com/huggingface/datasets/pull/563",
"diff_url": "https://github.com/huggingface/datasets/pull/563.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/563.patch",
"merged_at": "2020-09-09T09:03:32"... | 563 | true |
[Reproducibility] Allow pinning versions of datasets/metrics | Repurpose the `version` attribute in datasets and metrics to let the user pin a specific version of the dataset and metric scripts:
```
dataset = nlp.load_dataset('squad', version='1.0.0')
metric = nlp.load_metric('squad', version='1.0.0')
```
Notes:
- version numbers are the release versions of the library
- curre... | https://github.com/huggingface/datasets/pull/562 | [
"Closing this one in favor of #584 "
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/562",
"html_url": "https://github.com/huggingface/datasets/pull/562",
"diff_url": "https://github.com/huggingface/datasets/pull/562.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/562.patch",
"merged_at": null
} | 562 | true |
Made `share_dataset` more readable | https://github.com/huggingface/datasets/pull/561 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/561",
"html_url": "https://github.com/huggingface/datasets/pull/561",
"diff_url": "https://github.com/huggingface/datasets/pull/561.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/561.patch",
"merged_at": "2020-09-03T09:00:29"... | 561 | true | |
Using custom DownloadConfig results in an error | ## Version / Environment
Ubuntu 18.04
Python 3.6.8
nlp 0.4.0
## Description
Loading the `imdb` dataset works fine when I don't specify any `download_config` argument. When I create a custom `DownloadConfig` object and pass it to the `nlp.load_dataset` function, this results in an error.
## How to reprodu... | https://github.com/huggingface/datasets/issues/560 | [
"From my limited understanding, part of the issue seems related to the `prepare_module` and `download_and_prepare` functions each handling the case where no config is passed. For example, `prepare_module` does mutate the object passed and forces the flags `extract_compressed_file` and `force_extract` to `True`.\r\... | null | 560 | false |
Adding the KILT knowledge source and tasks | This adds Wikipedia pre-processed for KILT, as well as the task data. Only the question IDs are provided for TriviaQA, but they can easily be mapped back with:
```
import nlp
kilt_wikipedia = nlp.load_dataset('kilt_wikipedia')
kilt_tasks = nlp.load_dataset('kilt_tasks')
triviaqa = nlp.load_dataset('trivia_qa',... | https://github.com/huggingface/datasets/pull/559 | [
"Feel free to merge when you are happy with it @yjernite :-)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/559",
"html_url": "https://github.com/huggingface/datasets/pull/559",
"diff_url": "https://github.com/huggingface/datasets/pull/559.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/559.patch",
"merged_at": "2020-09-04T18:05:47"... | 559 | true |
Rerun pip install -e | Hopefully it fixes the github actions | https://github.com/huggingface/datasets/pull/558 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/558",
"html_url": "https://github.com/huggingface/datasets/pull/558",
"diff_url": "https://github.com/huggingface/datasets/pull/558.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/558.patch",
"merged_at": "2020-09-01T17:24:50"... | 558 | true |
Fix a few typos | https://github.com/huggingface/datasets/pull/557 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/557",
"html_url": "https://github.com/huggingface/datasets/pull/557",
"diff_url": "https://github.com/huggingface/datasets/pull/557.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/557.patch",
"merged_at": "2020-09-02T07:39:06"... | 557 | true | |
Add DailyDialog | http://yanran.li/dailydialog.html
https://arxiv.org/pdf/1710.03957.pdf
| https://github.com/huggingface/datasets/pull/556 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/556",
"html_url": "https://github.com/huggingface/datasets/pull/556",
"diff_url": "https://github.com/huggingface/datasets/pull/556.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/556.patch",
"merged_at": "2020-09-03T15:38:39"... | 556 | true |
Upgrade pip in benchmark github action | It looks like it fixes the `import nlp` issue we have | https://github.com/huggingface/datasets/pull/555 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/555",
"html_url": "https://github.com/huggingface/datasets/pull/555",
"diff_url": "https://github.com/huggingface/datasets/pull/555.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/555.patch",
"merged_at": "2020-09-01T15:26:15"... | 555 | true |
nlp downloads to its module path | I am trying to package `nlp` for Nix, because it is now an optional dependency for `transformers`. The problem that I encounter is that the `nlp` library downloads to the module path, which is typically not writable in most package management systems:
```
>>> import nlp
>>> squad_dataset = nlp.load_dataset('squad')
... | https://github.com/huggingface/datasets/issues/554 | [
"Indeed this is a known issue arising from the fact that we try to be compatible with cloupickle.\r\n\r\nDoes this also happen if you are installing in a virtual environment?",
"> Indeed this is a know issue with the fact that we try to be compatible with cloupickle.\r\n> \r\n> Does this also happen if you are in... | null | 554 | false |
[Fix GitHub Actions] test adding tmate | https://github.com/huggingface/datasets/pull/553 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/553",
"html_url": "https://github.com/huggingface/datasets/pull/553",
"diff_url": "https://github.com/huggingface/datasets/pull/553.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/553.patch",
"merged_at": null
} | 553 | true | |
Add multiprocessing | Adding multiprocessing to `.map`
It works in 3 steps:
- shard the dataset in `num_proc` shards
- spawn one process per shard and call `map` on them
- concatenate the resulting datasets
Example of usage:
```python
from nlp import load_dataset
dataset = load_dataset("squad", split="train")
def function... | https://github.com/huggingface/datasets/pull/552 | [
"Logging looks like\r\n\r\n```\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #0 will write at playground/tmp_00000_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess #1 will write at playground/tmp_00001_of_00004.arrow\r\nDone writing 21900 indices in 3854400 bytes .\r\nProcess... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/552",
"html_url": "https://github.com/huggingface/datasets/pull/552",
"diff_url": "https://github.com/huggingface/datasets/pull/552.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/552.patch",
"merged_at": "2020-09-02T10:01:25"... | 552 | true |
added HANS dataset | Adds the [HANS](https://github.com/tommccoy1/hans) dataset to evaluate NLI systems. | https://github.com/huggingface/datasets/pull/551 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/551",
"html_url": "https://github.com/huggingface/datasets/pull/551",
"diff_url": "https://github.com/huggingface/datasets/pull/551.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/551.patch",
"merged_at": "2020-09-01T12:17:10"... | 551 | true |
[BUGFIX] Solving mismatched checksum issue for the LinCE dataset (#539) | Hi,
I have added the updated `dataset_infos.json` file for the LinCE benchmark. This update is to fix the mismatched checksum bug #539 for one of the datasets in the LinCE benchmark. To update the file, I ran this command from the nlp root directory:
```
python nlp-cli test ./datasets/lince --save_infos --all_co... | https://github.com/huggingface/datasets/pull/550 | [
"Thanks a lot for that!\r\nThe line you are mentioning is a bug indeed, do you mind fixing it at the same time?",
"No worries! \r\n\r\nI pushed right away the fix, but then I realized that the master branch already had it, so I ended up merging the master branch with lince locally and then overwriting the previou... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/550",
"html_url": "https://github.com/huggingface/datasets/pull/550",
"diff_url": "https://github.com/huggingface/datasets/pull/550.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/550.patch",
"merged_at": "2020-09-03T09:06:01"... | 550 | true |
Fix bleurt logging import | Bleurt started throwing an error in some code we have.
This looks like the fix but...
It's also unnerving that even a prebuilt docker image with pinned versions can be working 1 day and then fail the next (especially for production systems).
Any way for us to pin your metrics code so that they are guaranteed not... | https://github.com/huggingface/datasets/pull/549 | [
"Thatβs a good point that we started to discuss internally as well. We should pin the dataset en metrics code by default indeed.\r\nLetβs update this in the coming release.",
"Ok closed this with #567 and we are working on a more general solution to pin dataset version in #562 (should be in the coming release)."
... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/549",
"html_url": "https://github.com/huggingface/datasets/pull/549",
"diff_url": "https://github.com/huggingface/datasets/pull/549.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/549.patch",
"merged_at": null
} | 549 | true |
[Breaking] Switch text loading to multi-threaded PyArrow loading | Test if we can get better performance for large-scale text datasets by using multi-threaded text file loading based on the Apache Arrow multi-threaded CSV loader.
If it works ok, it would fix #546.
**Breaking change**:
The text lines no longer include the final line-breaks. | https://github.com/huggingface/datasets/pull/548 | [
"Awesome !\r\nAlso I was wondering if we should try to make the hashing of the `data_files` faster (it is used to build the cache directory of datasets like `text` or `json`). Right now it reads each file and hashes all of its data. We could simply hash the path and some metadata including the `time last modified` ... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/548",
"html_url": "https://github.com/huggingface/datasets/pull/548",
"diff_url": "https://github.com/huggingface/datasets/pull/548.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/548.patch",
"merged_at": "2020-09-08T10:19:57"... | 548 | true |
[Distributed] Making loading distributed datasets a bit safer | Add some file-locks during dataset loading | https://github.com/huggingface/datasets/pull/547 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/547",
"html_url": "https://github.com/huggingface/datasets/pull/547",
"diff_url": "https://github.com/huggingface/datasets/pull/547.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/547.patch",
"merged_at": "2020-08-31T15:16:29"... | 547 | true |
Very slow data loading on large dataset | I made a simple python script to check the NLP library speed, which loads 1.1 TB of textual data.
It has been 8 hours and it is still on the loading step.
It does work when the text dataset is small (about 1 GB), but it doesn't scale.
It also uses a single thread during the data loading step.
```
train_fil... | https://github.com/huggingface/datasets/issues/546 | [
"When you load a text file for the first time with `nlp`, the file is converted into Apache Arrow format. Arrow allows to use memory-mapping, which means that you can load an arbitrary large dataset.\r\n\r\nNote that as soon as the conversion has been done once, the next time you'll load the dataset it will be much... | null | 546 | false |
New release coming up for this library | Hi all,
A few words on the roadmap for this library.
The next release will be a big one and is planned for the end of this week.
In addition to the support for indexed datasets (useful for non-parametric models like REALM, RAG, DPR, knn-LM and many other fast dataset retrieval technics), it will:
- have support f... | https://github.com/huggingface/datasets/issues/545 | [
"Update: release is planed mid-next week."
] | null | 545 | false |
[Distributed] Fix load_dataset error when multiprocessing + add test | Fix #543 + add test | https://github.com/huggingface/datasets/pull/544 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/544",
"html_url": "https://github.com/huggingface/datasets/pull/544",
"diff_url": "https://github.com/huggingface/datasets/pull/544.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/544.patch",
"merged_at": "2020-08-31T11:15:10"... | 544 | true |
nlp.load_dataset is not safe for multi processes when loading from local files | Loading from local files, e.g., `dataset = nlp.load_dataset('csv', data_files=['file_1.csv', 'file_2.csv'])`
concurrently from multiple processes, will raise `FileExistsError` from builder's line 430, https://github.com/huggingface/nlp/blob/6655008c738cb613c522deb3bd18e35a67b2a7e5/src/nlp/builder.py#L423-L438
Likel... | https://github.com/huggingface/datasets/issues/543 | [
"I'll take a look!"
] | null | 543 | false |
Add TensorFlow example | Update the Quick Tour documentation in order to add the TensorFlow equivalent source code for the classification example. Now it is possible to select either the code in PyTorch or in TensorFlow in the Quick tour. | https://github.com/huggingface/datasets/pull/542 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/542",
"html_url": "https://github.com/huggingface/datasets/pull/542",
"diff_url": "https://github.com/huggingface/datasets/pull/542.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/542.patch",
"merged_at": "2020-08-31T09:49:19"... | 542 | true |
Best practices for training tokenizers with nlp | Hi, thank you for developing this library.
What do you think are the best practices for training tokenizers using `nlp`? In the documentation and examples, I could only find pre-trained tokenizers being used. | https://github.com/huggingface/datasets/issues/541 | [
"Docs that explain how to train a tokenizer with `datasets` are available here: https://huggingface.co/docs/tokenizers/training_from_memory#using-the-datasets-library"
] | null | 541 | false |
[BUGFIX] Fix Race Dataset Checksum bug | In #537 I noticed that there was a bug in checksum checking when I tried to download the race dataset. The reason for this is that the current preprocessing was only considering the `high school` data and ignoring the `middle` data. This PR just fixes it :)
Moreover, I have added some descriptions. | https://github.com/huggingface/datasets/pull/540 | [
"I'm not sure this would fix #537 .\r\nHowever your point about the missing `middle` data is right and we probably want to include these data as well.\r\nDo you think it would we worth having different configurations for this dataset for users who want to only load part of it (`high school` or `middle` or `all`) ?"... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/540",
"html_url": "https://github.com/huggingface/datasets/pull/540",
"diff_url": "https://github.com/huggingface/datasets/pull/540.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/540.patch",
"merged_at": "2020-09-18T11:42:20"... | 540 | true |
[Dataset] `NonMatchingChecksumError` due to an update in the LinCE benchmark data | Hi,
There is a `NonMatchingChecksumError` error for the `lid_msaea` (language identification for Modern Standard Arabic - Egyptian Arabic) dataset from the LinCE benchmark due to a minor update on that dataset.
How can I update the checksum of the library to solve this issue? The error is below and it also appea... | https://github.com/huggingface/datasets/issues/539 | [
"Hi @gaguilar \r\n\r\nIf you want to take care of this, it very simple, you just need to regenerate the `dataset_infos.json` file as indicated [in the doc](https://huggingface.co/nlp/share_dataset.html#adding-metadata) by [installing from source](https://huggingface.co/nlp/installation.html#installing-from-source) ... | null | 539 | false |
[logging] Add centralized logging - Bump-up cache loads to warnings | Add a `nlp.logging` module to set the global logging level easily. The verbosity level also controls the tqdm bars (disabled when set higher than INFO).
You can use:
```
nlp.logging.set_verbosity(verbosity: int)
nlp.logging.set_verbosity_info()
nlp.logging.set_verbosity_warning()
nlp.logging.set_verbosity_debug... | https://github.com/huggingface/datasets/pull/538 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/538",
"html_url": "https://github.com/huggingface/datasets/pull/538",
"diff_url": "https://github.com/huggingface/datasets/pull/538.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/538.patch",
"merged_at": "2020-08-31T11:42:50"... | 538 | true |
[Dataset] RACE dataset Checksums error | Hi there, I would just like to use this awesome lib to fine-tune on the RACE dataset. I have performed the following steps:
```
dataset = nlp.load_dataset("race")
len(dataset["train"]), len(dataset["validation"])
```
But then I got the following error:
```
----------------------------------... | https://github.com/huggingface/datasets/issues/537 | [
"`NonMatchingChecksumError` means that the checksum of the downloaded file is not the expected one.\r\nEither the file you downloaded was corrupted along the way, or the host updated the file.\r\nCould you try to clear your cache and run `load_dataset` again ? If the error is still there, it means that there was an... | null | 537 | false |
Fingerprint | This PR is a continuation of #513, in which many in-place functions were introduced or updated (`cast_`, `flatten_`, etc.).
However, the caching didn't handle these changes. Indeed, the caching took into account only the previous cache file name of the table, and not the possible in-place transforms of the table.
To fix t... | https://github.com/huggingface/datasets/pull/536 | [
"I changed the way I implemented fingerprint updates to use decorator functions.\r\n\r\nI also added a new attribute called `_inplace_history` that stores the in-place history of transforms (like cast_, rename_columns, etc.). This history is useful to replay the changes that were done in-place when unpickling a dat... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/536",
"html_url": "https://github.com/huggingface/datasets/pull/536",
"diff_url": "https://github.com/huggingface/datasets/pull/536.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/536.patch",
"merged_at": "2020-08-31T14:20:39"... | 536 | true |
Benchmarks | Adding some benchmarks with DVC/CML
To add a new tracked benchmark:
- create a new python benchmarking script in `./benchmarks/`. The script can use the utilities in `./benchmarks/utils.py` and should output a JSON file with results in `./benchmarks/results/`.
- add a new pipeline stage in [dvc.yaml](./dvc.yaml) w... | https://github.com/huggingface/datasets/pull/535 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/535",
"html_url": "https://github.com/huggingface/datasets/pull/535",
"diff_url": "https://github.com/huggingface/datasets/pull/535.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/535.patch",
"merged_at": "2020-08-27T08:39:59"... | 535 | true |
`list_datasets()` is broken. | version = '0.4.0'
`list_datasets()` is broken. It results in the following error :
```
In [3]: nlp.list_datasets()
Out[3]: ---------------------------------------------------------------------------
AttributeError Traceback (most recent call last)
~/.virtualenvs/san-lgUCsFg_/lib/py... | https://github.com/huggingface/datasets/issues/534 | [
"Thanks for reporting !\r\nThis has been fixed in #475 and the fix will be available in the next release",
"What you can do instead to get the list of the datasets is call\r\n\r\n```python\r\nprint([dataset.id for dataset in nlp.list_datasets()])\r\n```",
"Thanks @lhoestq . "
] | null | 534 | false |
Fix ArrayXD for pyarrow 0.17.1 by using non fixed length list arrays | It should fix the CI problems in #513 | https://github.com/huggingface/datasets/pull/533 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/533",
"html_url": "https://github.com/huggingface/datasets/pull/533",
"diff_url": "https://github.com/huggingface/datasets/pull/533.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/533.patch",
"merged_at": "2020-08-26T08:02:23"... | 533 | true |
File exists error when used with TPU | Hi,
I'm getting a "File exists" error when I use [text dataset](https://github.com/huggingface/nlp/tree/master/datasets/text) for pre-training a RoBERTa model using `transformers` (3.0.2) and `nlp`(0.4.0) on a VM with TPU (v3-8).
I modified [line 131 in the original `run_language_modeling.py`](https://github.com/... | https://github.com/huggingface/datasets/issues/532 | [
"I am facing probably facing similar issues with \r\n\r\n`wiki40b_en_100_0`",
"Could you try to run `dataset = load_dataset(\"text\", data_files=file_path, split=\"train\")` once before calling the script ?\r\n\r\nIt looks like several processes try to create the dataset in arrow format at the same time. If the d... | null | 532 | false |
add concatenate_datasets to the docs | https://github.com/huggingface/datasets/pull/531 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/531",
"html_url": "https://github.com/huggingface/datasets/pull/531",
"diff_url": "https://github.com/huggingface/datasets/pull/531.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/531.patch",
"merged_at": "2020-08-25T09:02:19"... | 531 | true | |
use ragged tensor by default | I think it's better if it's clear whether the returned tensor is ragged or not when the type is set to tensorflow.
Previously it was a tensor (not ragged) if numpy could stack the output (which can change depending on the batch of examples you take), which makes things difficult to handle, as it may sometimes return a r...
"Yes I agree. Maybe something that lets specify different format depending on the column ? Especially to better control dtype and shape (and ragged for tf)\r\n\r\nOh and I forgot: this one should also fix the second issue found in #477 for the next release",
"I am running into the same issue with the error messag... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/530",
"html_url": "https://github.com/huggingface/datasets/pull/530",
"diff_url": "https://github.com/huggingface/datasets/pull/530.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/530.patch",
"merged_at": "2020-08-24T19:22:25"... | 530 | true |
Add MLSUM | Hello (again :) !),
So, I started a new branch because of a [rebase issue](https://github.com/huggingface/nlp/pull/463), sorry for the mess.
However, the command `pytest tests/test_dataset_common.py::LocalDatasetTest::test_load_real_dataset_mlsum` still fails because there is no default language dataset : the s... | https://github.com/huggingface/datasets/pull/529 | [
"Could you test to run the test using the changes in #527 and let me know if it fixes the issue ? If so I'll merge it and we'll be good to go :)",
"Hello, it does work on the fixing real dataset branch. Merci Quentin :)",
"Nice, glad to hear that :)\r\nde rien !"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/529",
"html_url": "https://github.com/huggingface/datasets/pull/529",
"diff_url": "https://github.com/huggingface/datasets/pull/529.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/529.patch",
"merged_at": "2020-08-26T08:04:10"... | 529 | true |
fix missing variable names in docs | fix #524 | https://github.com/huggingface/datasets/pull/528 | [
"The problem came from `default: ` that is rendered differently and hides the parameter names. I changed `default: ...` to `defaults to ...`"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/528",
"html_url": "https://github.com/huggingface/datasets/pull/528",
"diff_url": "https://github.com/huggingface/datasets/pull/528.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/528.patch",
"merged_at": "2020-08-25T09:04:03"... | 528 | true |
Fix config used for slow test on real dataset | As noticed in #470, #474, #476, and #504, the slow test `test_load_real_dataset` couldn't run on datasets that require config parameters.
To fix that I replaced it with one test that uses the first config of BUILDER_CONFIGS, `test_load_real_dataset`, and another test that runs all of the configs in BUILDER_CONFIGS, `test_load... | https://github.com/huggingface/datasets/pull/527 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/527",
"html_url": "https://github.com/huggingface/datasets/pull/527",
"diff_url": "https://github.com/huggingface/datasets/pull/527.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/527.patch",
"merged_at": "2020-08-25T09:20:44"... | 527 | true |
Returning None instead of "python" if dataset is unformatted | Following the discussion on Slack, this small fix ensures that calling `dataset.set_format(type=dataset.format["type"])` works properly. Slightly breaking as calling `dataset.format` when the dataset is unformatted will return `None` instead of `python`. | https://github.com/huggingface/datasets/pull/526 | [
"We have to change the tests to expect `None` instead of `python` then",
"Merging!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/526",
"html_url": "https://github.com/huggingface/datasets/pull/526",
"diff_url": "https://github.com/huggingface/datasets/pull/526.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/526.patch",
"merged_at": "2020-08-24T12:50:42"... | 526 | true |
wmt download speed example | Continuing from the slack 1.0 roadmap thread with @lhoestq, I realized that slow downloads are only a problem sometimes. Here are a few examples; I suspect there are multiple issues. All commands were run from the same gcp us-central-1f machine.
```
import nlp
nlp.load_dataset('wmt16', 'de-en')
```
Downloads at 49.1 K... | https://github.com/huggingface/datasets/issues/525 | [
"Thanks for creating the issue :)\r\nThe download link for wmt-en-de raw looks like a mirror. We should use that instead of the current url.\r\nIs this mirror official ?\r\n\r\nAlso it looks like for `ro-en` it tried to download other languages. If we manage to only download the one that is asked it'd be cool\r\n\r... | null | 525 | false |
Some docs are missing parameter names | See https://huggingface.co/nlp/master/package_reference/main_classes.html#nlp.Dataset.map. I believe this is because the parameter names are enclosed in backticks in the docstrings, maybe it's an old docstring format that doesn't work with the current Sphinx version. | https://github.com/huggingface/datasets/issues/524 | [
"Indeed, good catch!"
] | null | 524 | false |
Speed up Tokenization by optimizing cast_to_python_objects | I changed how `cast_to_python_objects` works to make it faster.
It is used to cast numpy/pytorch/tensorflow/pandas objects to python lists, and it works recursively.
To avoid iterating over possibly long lists, it first checks if the first element that is not None has to be cast.
If the first element needs to be... | https://github.com/huggingface/datasets/pull/523 | [
"I took your comments into account and added tests for `cast_to_python_objects`"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/523",
"html_url": "https://github.com/huggingface/datasets/pull/523",
"diff_url": "https://github.com/huggingface/datasets/pull/523.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/523.patch",
"merged_at": "2020-08-24T08:54:14"... | 523 | true |
dictionnary typo in docs | In many places, dictionary is spelled dictionnary; not sure if it's on purpose or not.
Fixed in this PR:
https://github.com/huggingface/nlp/pull/521 | https://github.com/huggingface/datasets/issues/522 | [
"Thanks!"
] | null | 522 | false |
Fix dictionnary (dictionary) typo | This error happens many times I'm thinking maybe its spelled like this on purpose? | https://github.com/huggingface/datasets/pull/521 | [
"Hahah thanks Yonatan. It was not on purpose, we are just not very good at spelling :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/521",
"html_url": "https://github.com/huggingface/datasets/pull/521",
"diff_url": "https://github.com/huggingface/datasets/pull/521.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/521.patch",
"merged_at": "2020-08-20T07:52:04"... | 521 | true |
Transform references for sacrebleu | Currently it is impossible to use sacrebleu when len(predictions) != the number of references per prediction (very uncommon), due to a strange format expected by sacrebleu. If one passes in the data to `nlp.metric.compute()` in sacrebleu format, `nlp` throws an error due to mismatching lengths between predictions and r... | https://github.com/huggingface/datasets/pull/520 | [
"I think I agree @lhoestq so I pushed a change.\r\nThanks for your work on the library!"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/520",
"html_url": "https://github.com/huggingface/datasets/pull/520",
"diff_url": "https://github.com/huggingface/datasets/pull/520.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/520.patch",
"merged_at": "2020-08-20T09:30:53"... | 520 | true |
[BUG] Metrics throwing new error on master since 0.4.0 | The following error occurs when passing in references of type `List[List[str]]` to metrics like bleu.
This wasn't happening on 0.4.0 but is happening now on master.
```
File "/usr/local/lib/python3.7/site-packages/nlp/metric.py", line 226, in compute
self.add_batch(predictions=predictions, references=references)
... | https://github.com/huggingface/datasets/issues/519 | [
"Update - maybe this is only failing on bleu because I was not tokenizing inputs to the metric",
"Closing - seems to be just forgetting to tokenize. And found the helpful discussion in huggingface/evaluate#105 "
] | null | 519 | false |
[METRICS, breaking] Refactor caching behavior, pickle/cloudpickle metrics and dataset, add tests on metrics | Move the acquisition of the filelock to a later stage during metrics processing so the metric can be pickled/cloudpickled after instantiation.
Also add some tests on pickling, concurrent but separate metric instances and concurrent and distributed metric instances.
Significantly changes the caching behavior for the metri... | https://github.com/huggingface/datasets/pull/518 | [
"(test failure is unrelated)",
"As discussed with @thomwolf merging since the hyperparameter-search has been merged in transformers."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/518",
"html_url": "https://github.com/huggingface/datasets/pull/518",
"diff_url": "https://github.com/huggingface/datasets/pull/518.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/518.patch",
"merged_at": "2020-08-24T16:01:39"... | 518 | true |
add MLDoc dataset | Hi,
I am recommending that someone add MLDoc, a multilingual news topic classification dataset.
- Here's a link to the Github: https://github.com/facebookresearch/MLDoc
- and the paper: http://www.lrec-conf.org/proceedings/lrec2018/pdf/658.pdf
Looks like the dataset contains news stories in multiple languages... | https://github.com/huggingface/datasets/issues/517 | [
"Any updates on this?",
"This request is still an open issue waiting to be addressed by any community member, @GuillemGSubies."
] | null | 517 | false |
[Breaking] Rename formated to formatted | `formated` is not correct but `formatted` is | https://github.com/huggingface/datasets/pull/516 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/516",
"html_url": "https://github.com/huggingface/datasets/pull/516",
"diff_url": "https://github.com/huggingface/datasets/pull/516.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/516.patch",
"merged_at": "2020-08-20T08:41:16"... | 516 | true |
Fix batched map for formatted dataset | If you had a dataset formatted as numpy for example, and tried to do a batched map, then it would crash because one of the elements from the inputs was missing for unchanged columns (ex: batch of length 999 instead of 1000).
This happened during the creation of the `pa.Table`, since columns had different lengths. | https://github.com/huggingface/datasets/pull/515 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/515",
"html_url": "https://github.com/huggingface/datasets/pull/515",
"diff_url": "https://github.com/huggingface/datasets/pull/515.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/515.patch",
"merged_at": "2020-08-20T20:30:42"... | 515 | true |
dataset.shuffle(keep_in_memory=True) is never allowed | As of commit ef4aac2, the usage of the parameter `keep_in_memory=True` is never possible: `dataset.select(keep_in_memory=True)`
The commit added the lines
```python
# lines 994-996 in src/nlp/arrow_dataset.py
assert (
not keep_in_memory or cache_file_name is None
), "Please use either... | https://github.com/huggingface/datasets/issues/514 | [
"This seems to be fixed in #513 for the filter function, replacing `cache_file_name` with `indices_cache_file_name` in the assert. Although not for the `map()` function @thomwolf ",
"Maybe I'm a bit tired but I fail to see the issue here.\r\n\r\nSince `cache_file_name` is `None` by default, if you set `keep_in_me... | null | 514 | false |
[speedup] Use indices mappings instead of deepcopy for all the samples reordering methods | Use an indices mapping instead of rewriting the dataset for all the samples re-ordering/selection methods (`select`, `sort`, `shuffle`, `shard`, `train_test_split`).
Added a `flatten_indices` method which copies the dataset to a new table to remove the indices mapping, with tests.
All the samples re-ordering/selecti... | https://github.com/huggingface/datasets/pull/513 | [
"Ok I fixed `concatenate_datasets` and added tests\r\nFeel free to merge if it's good for you @thomwolf ",
"Ok, adding some benchmarks for map/filters and then I'll merge",
"Warning from pytorch that we should maybe consider at some point @lhoestq:\r\n```\r\n/__w/nlp/nlp/src/nlp/arrow_dataset.py:648: UserWarnin... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/513",
"html_url": "https://github.com/huggingface/datasets/pull/513",
"diff_url": "https://github.com/huggingface/datasets/pull/513.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/513.patch",
"merged_at": "2020-08-28T08:41:50"... | 513 | true |
Delete CONTRIBUTING.md | https://github.com/huggingface/datasets/pull/512 | [
"π±",
"Yeah, this is spammy behavior. I've reported the user handle."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/512",
"html_url": "https://github.com/huggingface/datasets/pull/512",
"diff_url": "https://github.com/huggingface/datasets/pull/512.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/512.patch",
"merged_at": null
} | 512 | true | |
dataset.shuffle() and select() resets format. Intended? | Calling `dataset.shuffle()` or `dataset.select()` on a dataset resets its format set by `dataset.set_format()`. Is this intended or an oversight?
When working on quite large datasets that require a lot of preprocessing I find it convenient to save the processed dataset to file using `torch.save("dataset.pt")`. Later... | https://github.com/huggingface/datasets/issues/511 | [
"Hi @vegarab yes feel free to open a discussion here.\r\n\r\nThis design choice was not very much thought about.\r\n\r\nSince `dataset.select()` (like all the method without a trailing underscore) is non-destructive and returns a new dataset it has most of its properties initialized from scratch (except the table a... | null | 511 | false |
Version of numpy to use the library | Thank you so much for your excellent work! I would like to use nlp library in my project. While importing nlp, I am receiving the following error `AttributeError: module 'numpy.random' has no attribute 'Generator'` Numpy version in my project is 1.16.0. May I learn which numpy version is used for the nlp library.
Th... | https://github.com/huggingface/datasets/issues/510 | [
"Seems like this method was added in 1.17. I'll add a requirement on this.",
"Thank you so much. After upgrading the numpy library, it worked."
] | null | 510 | false |
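The missing attribute only exists from numpy 1.17 onwards, which matches the resolution in the comments of #510 above; a quick check (the snippet is ours, not from the issue):
```python
import numpy as np

print(np.__version__)            # nlp needs numpy >= 1.17 for this
rng = np.random.default_rng(42)  # raises AttributeError on numpy 1.16
print(rng.integers(0, 10, size=3))
```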
Converting TensorFlow dataset example | Hi,
I want to use TensorFlow datasets with this repo, I noticed you made some conversion script,
can you give a simple example of using it?
Thanks
| https://github.com/huggingface/datasets/issues/509 | [
"Do you want to convert a dataset script to the tfds format ?\r\nIf so, we currently have a comversion script nlp/commands/convert.py but it is a conversion script that goes from tfds to nlp.\r\nI think it shouldn't be too hard to do the changes in reverse (at some manual adjustments).\r\nIf you manage to make it w... | null | 509 | false |
TypeError: Receiver() takes no arguments | I am trying to load a wikipedia data set
```
import nlp
from nlp import load_dataset
dataset = load_dataset("wikipedia", "20200501.en", split="train", cache_dir=data_path, beam_runner='DirectRunner')
#dataset = load_dataset('wikipedia', '20200501.sv', cache_dir=data_path, beam_runner='DirectRunner')
```
Th... | https://github.com/huggingface/datasets/issues/508 | [
"Which version of Apache Beam do you have (can you copy your full environment info here)?",
"apache-beam==2.23.0\r\nnlp==0.4.0\r\n\r\nFor me this was resolved by running the same python script on Linux (or really WSL). ",
"Do you manage to run a dummy beam pipeline with python on windows ? \r\nYou can test a du... | null | 508 | false |
Errors when I use | I tried the following example code from https://huggingface.co/deepset/roberta-base-squad2 and got errors
I am using **transformers 3.0.2** code .
from transformers.pipelines import pipeline
from transformers.modeling_auto import AutoModelForQuestionAnswering
from transformers.tokenization_auto import AutoToke... | https://github.com/huggingface/datasets/issues/507 | [
"Looks like an issue with 3.0.2 transformers version. Works fine when I use \"master\" version of transformers."
] | null | 507 | false |
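The truncated imports in #507 above use module paths that moved in later transformers releases; a hedged version of the model-card example with top-level imports (the question and context strings are abbreviated, not quoted exactly):
```python
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline

model_name = "deepset/roberta-base-squad2"
qa = pipeline(
    "question-answering",
    model=AutoModelForQuestionAnswering.from_pretrained(model_name),
    tokenizer=AutoTokenizer.from_pretrained(model_name),
)
print(qa(question="Why is model conversion important?",
         context="The option to convert models between FARM and transformers "
                 "gives freedom to the user."))
```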
fix dataset.map for function without outputs | As noticed in #505 , giving a function that doesn't return anything in `.map` raises an error because of an unreferenced variable.
I fixed that and added tests.
Thanks @avloss for reporting | https://github.com/huggingface/datasets/pull/506 | [] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/506",
"html_url": "https://github.com/huggingface/datasets/pull/506",
"diff_url": "https://github.com/huggingface/datasets/pull/506.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/506.patch",
"merged_at": "2020-08-17T11:24:38"... | 506 | true |
tmp_file referenced before assignment | Just learning about this library - so might've not set up all the flags correctly, but was getting this error about "tmp_file". | https://github.com/huggingface/datasets/pull/505 | [
"Thanks for reporting the issue ! I'm creating a new PR to fix it and add tests.\r\n(I'm doing a new PR because I know there's some other place where it needs to be fixed)",
"I'm closing this one as I created the other PR."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/505",
"html_url": "https://github.com/huggingface/datasets/pull/505",
"diff_url": "https://github.com/huggingface/datasets/pull/505.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/505.patch",
"merged_at": null
} | 505 | true |
Added downloading to Hyperpartisan news detection | Following the discussion on Slack and #349, I've updated the hyperpartisan dataset to pull directly from Zenodo rather than manual install, which should make this dataset much more accessible. Many thanks to @johanneskiesel !
Currently doesn't pass `test_load_real_dataset` - I'm using `self.config.name` which is `de... | https://github.com/huggingface/datasets/pull/504 | [
"Thank you @ghomasHudson for making our dataset available! This is great!",
"The test passes since #527 :)"
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/504",
"html_url": "https://github.com/huggingface/datasets/pull/504",
"diff_url": "https://github.com/huggingface/datasets/pull/504.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/504.patch",
"merged_at": "2020-08-27T08:18:41"... | 504 | true |
CompGuessWhat?! 0.2.0 | We updated some metadata information associated with the dataset. In addition, we've updated the `create_dummy_data.py` script to generate data samples for the dataset. | https://github.com/huggingface/datasets/pull/503 | [
"I don't see any significant change in the dataset script (except the version value update), can you check that again please ?",
"Hi @aleSuglia , can you check that all the changes you wanted to do are in the dataset script ?",
"Hey sorry but I'm in the middle of a conference deadline. I'll let you know asap!",... | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/503",
"html_url": "https://github.com/huggingface/datasets/pull/503",
"diff_url": "https://github.com/huggingface/datasets/pull/503.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/503.patch",
"merged_at": null
} | 503 | true |
Fix tokenizers caching | I've found some cases where the caching didn't work properly for tokenizers:
1. if a tokenizer has a regex pattern, then the caching would be inconsistent across sessions
2. if a tokenizer has a cache attribute that changes after some calls, then the caching would not work after cache updates
3. if a tokenizer is u... | https://github.com/huggingface/datasets/pull/502 | [
"This should fix #501 and also the issue you sent me on slack @sgugger ."
] | {
"url": "https://api.github.com/repos/huggingface/datasets/pulls/502",
"html_url": "https://github.com/huggingface/datasets/pull/502",
"diff_url": "https://github.com/huggingface/datasets/pull/502.diff",
"patch_url": "https://github.com/huggingface/datasets/pull/502.patch",
"merged_at": "2020-08-19T13:37:17"... | 502 | true |