html_url: string (lengths 48–51)
title: string (lengths 5–268)
comments: string (lengths 63–51.8k)
body: string (lengths 0–36.2k)
comment_length: int64 (values 16–1.52k)
text: string (lengths 164–54.1k)
embeddings: list
https://github.com/huggingface/datasets/issues/3634
Dataset.shuffle(seed=None) gives fixed row permutation
Hi! Thanks for reporting! Yes, this is not expected behavior. I've opened a PR with the fix.
## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5], "label": ["a", "b", "c", "d", "e"]} ) # Doesn't work...
17
Dataset.shuffle(seed=None) gives fixed row permutation ## Describe the bug Repeated attempts to `shuffle` a dataset without specifying a seed give the same results. ## Steps to reproduce the bug ```python import datasets # Some toy example data = datasets.Dataset.from_dict( {"feature": [1, 2, 3, 4, 5],...
[ 0.2256516367, -0.2673810124, 0.0401451588, 0.1096486673, 0.2171934992, 0.0051689851, 0.5835940242, -0.0988347903, -0.2688475251, 0.4067517519, 0.0916137397, 0.4485055506, -0.1216275916, 0.30659163, 0.1507817656, 0.1546613276, 0.3101200461, 0.0387729742, -0.0948881954, -0.262906...
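The fix for the issue above lands in `datasets` itself, but the expected behavior can be illustrated with a small stand-in (`shuffle_rows` is a hypothetical helper, not the library's internals): build a fresh RNG per call, so `seed=None` draws OS entropy and yields a new permutation each time, while a fixed seed stays reproducible.

```python
import random

def shuffle_rows(rows, seed=None):
    # Hypothetical stand-in for Dataset.shuffle: a fresh RNG per call.
    # With seed=None, Python seeds from OS entropy, so repeated calls
    # give different permutations -- the behavior the report expected.
    rng = random.Random(seed)
    order = list(range(len(rows)))
    rng.shuffle(order)
    return [rows[i] for i in order]

data = list(range(100))
# Fixed seed: reproducible permutation.
assert shuffle_rows(data, seed=42) == shuffle_rows(data, seed=42)
# No seed: two calls almost surely differ (probability of a collision
# over 100 elements is 1/100!).
assert shuffle_rows(data) != shuffle_rows(data)
# Still a permutation: no rows lost or duplicated.
assert sorted(shuffle_rows(data)) == data
```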
https://github.com/huggingface/datasets/issues/3632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
Hi @AnzorGozalishvili, Maybe their site was temporarily down, but it seems to work fine now. Could you please try again and confirm if the problem persists?
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/), which isn't accessible. Also the URLs for dataset file ...
26
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid) ## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable. Check out the dataset [homepage](ht...
[ -0.1619237661, 0.0289881676, 0.0137884766, 0.213355571, 0.1414141208, 0.0469214134, 0.2221632004, 0.1372686774, 0.1100815609, 0.0725905597, -0.2368589789, 0.2323393971, 0.105717063, 0.1059431136, 0.3223291636, 0.0096261734, 0.0029664445, -0.1015315205, -0.2132701129, 0.04631373...
https://github.com/huggingface/datasets/issues/3632
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid)
Hi @albertvillanova, I checked and it works. It seems it really was down only temporarily. Thanks!
## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable. Check out the dataset [homepage](http://data.statmt.org/cc-100/), which isn't accessible. Also the URLs for dataset file ...
16
Adding CC-100: Monolingual Datasets from Web Crawl Data (Datasets links are invalid) ## Describe the bug The dataset links are no longer valid for CC-100. It seems that the website that was hosting these files is no longer accessible, and therefore this dataset has become unusable. Check out the dataset [homepage](ht...
[ -0.1686966121, 0.0409710146, 0.0132524855, 0.2089214176, 0.1232639402, 0.0467350595, 0.2052616477, 0.1223095506, 0.090792425, 0.0844413564, -0.2521444559, 0.25331375, 0.0570643358, 0.1296910495, 0.3611768484, 0.0015481981, 0.0005133852, -0.1116832048, -0.2077596188, 0.049201142...
https://github.com/huggingface/datasets/issues/3625
Add a metadata field for when source data was produced
A question to the datasets maintainers: is there a policy about how the set of allowed metadata fields is maintained and expanded? Metadata are very important, but defining a standard is always a struggle between exhaustivity and simplicity. Archivists have Dublin Core, open data has https://fr...
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests mak...
87
Add a metadata field for when source data was produced **Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period info...
[ -0.3327313662, 0.2428779155, -0.0236785784, -0.1035892591, -0.2707235217, -0.161868602, 0.3315870762, 0.2197172642, -0.5503545403, -0.0032399925, 0.4462813437, 0.3695133626, -0.199112162, 0.0012922501, -0.3244539201, -0.0545660928, -0.0300791133, 0.1974953562, 0.0388663746, 0.2...
https://github.com/huggingface/datasets/issues/3625
Add a metadata field for when source data was produced
> Metadata are very important, but defining a standard is always a struggle between exhaustivity and simplicity. Archivists have Dublin Core, open data has [frictionlessdata.io](https://frictionlessdata.io/), geo has ISO 19139 and INSPIRE, etc., and it's always a mess! I'm not sure we want to dig t...
**Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period information is not included. This feature request suggests mak...
190
Add a metadata field for when source data was produced **Is your feature request related to a problem? Please describe.** The current problem is that information about when source data was produced is not easily visible. Though there are a variety of metadata fields available in the dataset viewer, time period info...
[ -0.3327313662, 0.2428779155, -0.0236785784, -0.1035892591, -0.2707235217, -0.161868602, 0.3315870762, 0.2197172642, -0.5503545403, -0.0032399925, 0.4462813437, 0.3695133626, -0.199112162, 0.0012922501, -0.3244539201, -0.0545660928, -0.0300791133, 0.1974953562, 0.0388663746, 0.2...
https://github.com/huggingface/datasets/issues/3621
Consider adding `ipywidgets` as a dependency.
Hi! We use `tqdm` to display progress bars, so I suggest you open this issue in their repo.
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab ser...
18
Consider adding `ipywidgets` as a dependency. When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, be...
[ -0.3716931641, 0.6513088346, -0.0791809782, -0.2230323553, 0.1602570266, -0.1791561693, 0.592005074, 0.1570910513, 0.0256811772, 0.2411099225, -0.2169428468, 0.0295983255, -0.1418178231, 0.468570739, 0.1340662986, 0.1758034974, 0.0730534568, 0.4884797335, -0.4984247386, -0.0682...
https://github.com/huggingface/datasets/issues/3621
Consider adding `ipywidgets` as a dependency.
It depends on how you use `tqdm`, no? Doesn't this library import via:

```python
from tqdm.notebook import tqdm
```
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab ser...
19
Consider adding `ipywidgets` as a dependency. When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, be...
[ -0.2867106199, 0.4942075312, -0.0472910628, -0.2324493229, 0.1650672704, -0.220628038, 0.6369206309, 0.1494795531, -0.0111662177, 0.1839485317, -0.280236572, 0.0793604776, -0.1705818176, 0.2784908116, 0.1941234022, 0.2538711131, 0.0409017466, 0.3885454834, -0.5920870304, -0.022...
https://github.com/huggingface/datasets/issues/3621
Consider adding `ipywidgets` as a dependency.
Hi! Sorry for the late reply. We import `tqdm` as `from tqdm.auto import tqdm`, which should be equivalent to `from tqdm.notebook import tqdm` in Jupyter.
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab ser...
25
Consider adding `ipywidgets` as a dependency. When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, be...
[ -0.1685499251, 0.5084441304, -0.0195758417, -0.2097097784, 0.1149033159, -0.1665549576, 0.5911034346, 0.0885964856, 0.1410358548, 0.094332315, -0.1721209437, 0.0448136367, -0.1704341769, 0.2847237885, 0.2028015554, 0.2025005966, 0.1609587222, 0.4420511127, -0.4774923325, -0.061...
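The `from tqdm.auto import tqdm` behavior discussed above is frontend dispatch. As a hypothetical sketch (not tqdm's actual source), one way such a chooser could avoid the reported `IProgress`/`ImportError` is to fall back to the plain console bar whenever the widget frontend is unavailable:

```python
def pick_progress_bar(in_notebook: bool, ipywidgets_available: bool) -> str:
    # Illustrative dispatch logic (hypothetical, not tqdm's real code):
    # use the rich widget bar only when both a notebook frontend and
    # ipywidgets are present; otherwise degrade to the console bar.
    if in_notebook and ipywidgets_available:
        return "notebook"
    return "console"

# The reported error corresponds to the notebook-without-ipywidgets case,
# where the widget frontend has nothing to render with.
assert pick_progress_bar(True, True) == "notebook"
assert pick_progress_bar(True, False) == "console"
assert pick_progress_bar(False, True) == "console"
```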
https://github.com/huggingface/datasets/issues/3621
Consider adding `ipywidgets` as a dependency.
Any objection if I make a PR that checks if the widgets library is installed beforehand?
When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, because I need to shut down the jupyterlab ser...
16
Consider adding `ipywidgets` as a dependency. When I install `datasets` in a fresh virtualenv with jupyterlab I always see this error. ``` ImportError: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html ``` It's a bit of a nuisance, be...
[ -0.1698446423, 0.625315249, -0.0337269083, -0.1724758744, 0.0254313052, -0.0749921128, 0.5609041452, 0.0913157389, 0.3140844107, 0.1015946716, 0.078153044, -0.0286185611, -0.151409179, 0.2816159725, 0.2125277072, 0.2335123122, 0.2751716375, 0.5633582473, -0.3676190972, -0.09886...
https://github.com/huggingface/datasets/issues/3618
TIMIT Dataset not working with GPU
Hi! I think you should avoid calling `timit_train['audio']`: by doing so you're **loading the whole audio column into memory**. This is problematic in your case because the TIMIT dataset is huge. If you want to access the audio data of some samples, you should do this instead: `timit_train[:10]["train"]`, for exam...
## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4...
84
TIMIT Dataset not working with GPU ## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimi...
[ -0.1453542113, -0.1040818691, -0.00356368, 0.4680360258, 0.4012950063, -0.0252673663, 0.4242496789, 0.2931960821, -0.1032611355, 0.0186554566, -0.0983693153, 0.5495102406, -0.0063605495, 0.2732047737, 0.1169297248, -0.2103835642, -0.1081266552, -0.1027195901, -0.19415012, 0.210...
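The advice above (slice a few rows instead of reading a whole column) can be illustrated with a toy table. The class below is a hypothetical stand-in, not the `datasets` internals: column access materializes every row, while a slice touches only the requested rows.

```python
class LazyColumnTable:
    """Toy stand-in (hypothetical, not the `datasets` library) for an
    on-disk table that only materializes the rows you ask for."""

    def __init__(self, columns):
        self._columns = columns   # column name -> list of stored values
        self.materialized = 0     # how many values were actually loaded

    def __getitem__(self, key):
        if isinstance(key, str):
            # ds["audio"]: decodes EVERY row of that column into memory.
            values = list(self._columns[key])
            self.materialized += len(values)
            return values
        # ds[:10] with a slice: touches only the requested rows per column.
        out = {name: col[key] for name, col in self._columns.items()}
        self.materialized += sum(len(v) for v in out.values())
        return out


ds = LazyColumnTable({"audio": list(range(10_000)), "file": list(range(10_000))})
ds[:10]                      # loads 10 rows x 2 columns = 20 values
after_slice = ds.materialized
ds["audio"]                  # loads the full 10,000-row column
assert after_slice == 20
assert ds.materialized == 20 + 10_000
```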
https://github.com/huggingface/datasets/issues/3618
TIMIT Dataset not working with GPU
I get the same error when I try to do `timit_train[0]` or really any indexing into the whole thing. Really, that IS the code snippet that reproduces the issue. If you index into other fields like 'file' or whatever, it works. As soon as one of the fields you're looking into is 'audio', you get that issue. It's a we...
## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4...
93
TIMIT Dataset not working with GPU ## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimi...
[ -0.1453542113, -0.1040818691, -0.00356368, 0.4680360258, 0.4012950063, -0.0252673663, 0.4242496789, 0.2931960821, -0.1032611355, 0.0186554566, -0.0983693153, 0.5495102406, -0.0063605495, 0.2732047737, 0.1169297248, -0.2103835642, -0.1081266552, -0.1027195901, -0.19415012, 0.210...
https://github.com/huggingface/datasets/issues/3618
TIMIT Dataset not working with GPU
Ok, I see! From the error you got, it looks like the `value` encoded in the arrow file of the TIMIT dataset you loaded is a string instead of a dictionary with keys "path" and "bytes", which we no longer support since 1.18. Can you try regenerating the dataset with `load_dataset('timit_asr', download_mode="force_redown...
## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimized) environment, with a single ml.g4...
62
TIMIT Dataset not working with GPU ## Describe the bug I am trying to use the TIMIT dataset in order to fine-tune a Wav2Vec2 model, and I am unable to load the "audio" column from the dataset when working with a GPU. I am working on Amazon Sagemaker Studio, on the Python 3 (PyTorch 1.8 Python 3.6 GPU Optimi...
[ -0.1453542113, -0.1040818691, -0.00356368, 0.4680360258, 0.4012950063, -0.0252673663, 0.4242496789, 0.2931960821, -0.1032611355, 0.0186554566, -0.0983693153, 0.5495102406, -0.0063605495, 0.2732047737, 0.1169297248, -0.2103835642, -0.1081266552, -0.1027195901, -0.19415012, 0.210...
https://github.com/huggingface/datasets/issues/3615
Dataset BnL Historical Newspapers does not work in streaming mode
@albertvillanova let me know if there is anything I can do to help with this. I had a quick look at the code again and thought I could try the following changes: - use `download` instead of `download_and_extract` https://github.com/huggingface/datasets/blob/d3d339fb86d378f4cb3c5d1de423315c07a466c6/datasets/bnl_newspap...
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
66
Dataset BnL Historical Newspapers does not work in streaming mode ## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that...
[ -0.5219474435, -0.026741961, 0.0061430652, 0.2931693196, 0.2553721666, 0.1118415147, 0.1230439991, 0.3831239939, 0.1705756783, -0.0961003527, -0.0062708762, 0.0785974413, -0.1458618343, 0.0590120964, -0.236588791, -0.3113933802, 0.2990855873, 0.1549980938, 0.0718444735, 0.05890...
https://github.com/huggingface/datasets/issues/3615
Dataset BnL Historical Newspapers does not work in streaming mode
Thanks @davanstrien. I have already been working on it so that it can be used in the BigScience workshop. I agree that `rglob()` is not efficient in this case. I tried different solutions without success: - `iter_archive` cannot be used in this case because it does not support ZIP files yet. Finally I h...
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
57
Dataset BnL Historical Newspapers does not work in streaming mode ## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that...
[ -0.5628231168, -0.1082761958, 0.0663999617, 0.3091987967, 0.1017114893, 0.0588339008, 0.2123790532, 0.4981285334, 0.177158609, -0.0876947269, -0.0313860998, 0.1764090061, -0.2093244344, -0.0390730761, -0.2260355949, -0.2108296007, 0.2899224162, 0.2498149425, 0.0469625555, 0.050...
https://github.com/huggingface/datasets/issues/3615
Dataset BnL Historical Newspapers does not work in streaming mode
I see this is fixed now 🙂. I also picked up a few other tips from your refactors, so hopefully my next attempts will support streaming from the start.
## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that it works fast in streaming mode. CC: @davanstrien
29
Dataset BnL Historical Newspapers does not work in streaming mode ## Describe the bug When trying to load in streaming mode, it "hangs"... ## Steps to reproduce the bug ```python ds = load_dataset("bnl_newspapers", split="train", streaming=True) ``` ## Expected results The code should be optimized, so that...
[ -0.639200449, 0.1436474174, 0.0176750589, 0.0161989536, 0.2014867514, -0.0171704646, 0.3039646149, 0.3379111588, 0.1105153561, -0.0491941422, 0.0870378762, 0.1462349445, -0.1340525448, 0.1133008599, -0.2418841124, -0.3361700475, 0.1876149029, 0.2186702639, 0.042955488, -0.01793...
https://github.com/huggingface/datasets/issues/3613
Files not updating in dataset viewer
Yes. The jobs queue is full right now, following an upgrade... Back to normal in the next few hours, hopefully. I'll look at your datasets to make sure the dataset viewer works as expected on them.
## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is reading a cached version of the dataset and...
35
Files not updating in dataset viewer ## Dataset viewer issue for '*name of the dataset*' **Link:** Some examples: * https://huggingface.co/datasets/abidlabs/crowdsourced-speech4 * https://huggingface.co/datasets/abidlabs/test-audio-13 *short description of the issue* It seems that the dataset viewer is read...
[ -0.3797994256, 0.1163970158, -0.1014503166, 0.1263846308, 0.069827646, 0.0949605405, 0.0576605536, 0.2934433222, 0.0349196307, 0.1519840509, 0.0154691096, 0.0699015185, -0.0732654855, -0.0716505423, -0.1866235733, 0.1329112053, 0.1891683191, -0.0158929173, -0.1370634884, -0.042...
https://github.com/huggingface/datasets/issues/3608
Add support for continuous metrics (RMSE, MAE)
Hey @ck37, you can always use a custom metric as explained [in this guide from HF](https://huggingface.co/docs/datasets/master/loading_metrics.html#using-a-custom-metric-script). If this issue is to be contributed to (to enhance the metric API), I think [this link](https://scikit-learn.org/stable/modules/gen...
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP m...
40
Add support for continuous metrics (RMSE, MAE) **Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for...
[ -0.2287482768, -0.3802615106, -0.0881353021, -0.1381486356, 0.2096349597, 0.160359323, -0.230401352, 0.0744787082, 0.3844649196, 0.0689124912, -0.062576957, 0.2589017451, -0.3647340834, 0.2241398096, -0.1389572769, -0.3047387302, -0.0666413903, -0.0653638989, 0.2313116342, 0.11...
https://github.com/huggingface/datasets/issues/3608
Add support for continuous metrics (RMSE, MAE)
You can use a local metric script just by providing its path instead of the usual shortcut name
**Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for hate speech. Once we have this outcome our NLP m...
18
Add support for continuous metrics (RMSE, MAE) **Is your feature request related to a problem? Please describe.** I am uploading our dataset and models for the "Constructing interval measures" method we've developed, which uses item response theory to convert multiple discrete labels into a continuous spectrum for...
[ -0.3416827023, -0.2790041566, -0.1028750315, -0.1780608147, 0.1408375949, 0.1478315294, -0.1508249342, 0.2230169773, 0.4105049968, 0.1154447794, 0.0020978018, 0.2769636214, -0.3721655905, 0.2808315158, -0.1003166586, -0.2971885502, -0.1172103286, -0.0224492773, 0.2668163478, 0....
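The comments above point to custom metric scripts for continuous outcomes. Independently of the `datasets` metric API (function names below are mine, as a plain-Python sketch), RMSE and MAE over paired predictions and references are just:

```python
import math

def rmse(preds, refs):
    # Root-mean-squared error over paired predictions/references.
    assert len(preds) == len(refs) and preds
    return math.sqrt(sum((p - r) ** 2 for p, r in zip(preds, refs)) / len(preds))

def mae(preds, refs):
    # Mean absolute error over the same pairs.
    assert len(preds) == len(refs) and preds
    return sum(abs(p - r) for p, r in zip(preds, refs)) / len(preds)

assert rmse([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]) == 0.0
assert mae([0.0, 2.0], [1.0, 1.0]) == 1.0
assert abs(rmse([0.0, 2.0], [1.0, 1.0]) - 1.0) < 1e-12
```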
https://github.com/huggingface/datasets/issues/3606
audio column not saved correctly after resampling
Hi! We just released a new version of `datasets` that should fix this. I tested resampling and using save/load_from_disk afterwards, and it seems to be fixed now.
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of the common voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk() - load with load_from_disk() ## Expected resul...
28
audio column not saved correctly after resampling ## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of the common voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk(...
[ -0.2528468966, 0.1135269701, 0.0847015381, 0.2255780399, 0.460318774, -0.073203519, 0.1811225116, 0.3404786289, -0.1009492427, 0.0725950599, -0.4832224846, 0.369177103, -0.0401452184, -0.0950341672, -0.0164750442, -0.1994142979, 0.2537722588, 0.0790266395, -0.1742042005, -0.216...
https://github.com/huggingface/datasets/issues/3606
audio column not saved correctly after resampling
Hi @lhoestq, Just tested the latest datasets version, and confirming that this is fixed for me. Thanks!
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of the common voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk() - load with load_from_disk() ## Expected resul...
17
audio column not saved correctly after resampling ## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of the common voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk(...
[ -0.2528468966, 0.1135269701, 0.0847015381, 0.2255780399, 0.460318774, -0.073203519, 0.1811225116, 0.3404786289, -0.1009492427, 0.0725950599, -0.4832224846, 0.369177103, -0.0401452184, -0.0950341672, -0.0164750442, -0.1994142979, 0.2537722588, 0.0790266395, -0.1742042005, -0.216...
https://github.com/huggingface/datasets/issues/3606
audio column not saved correctly after resampling
Also, just an FYI: data that I had previously saved (with save_to_disk) from common voice using datasets==1.17.0 now gives the error below when loading (with load_from_disk) using datasets==1.18.0. However, when starting fresh using load_dataset and then doing the resampling, the save/load_from_disk round trip worked fine. ``...
## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of the common voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk() - load with load_from_disk() ## Expected resul...
290
audio column not saved correctly after resampling ## Describe the bug After resampling the audio column, saving with save_to_disk doesn't seem to save with the correct type. ## Steps to reproduce the bug - load a subset of the common voice dataset (48 kHz) - resample the audio column to 16 kHz - save with save_to_disk(...
[ -0.2528468966, 0.1135269701, 0.0847015381, 0.2255780399, 0.460318774, -0.073203519, 0.1811225116, 0.3404786289, -0.1009492427, 0.0725950599, -0.4832224846, 0.369177103, -0.0401452184, -0.0950341672, -0.0164750442, -0.1994142979, 0.2537722588, 0.0790266395, -0.1742042005, -0.216...
https://github.com/huggingface/datasets/issues/3598
Readme info not being parsed to show on Dataset card page
I suspect a Markdown parsing error. @severo, do you want to take a quick look at it when you have some time?
## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatal...
22
Readme info not being parsed to show on Dataset card page ## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README...
[ -0.3140523732, -0.4610441625, -0.0429990664, 0.4984800518, 0.3827379644, 0.3550934196, 0.1448007375, 0.2580851018, -0.0173245277, 0.1981521398, 0.2267410755, 0.4555899203, 0.2250601947, 0.2512841523, 0.1963027567, 0.0808806345, 0.0349486545, -0.092020914, 0.1977230459, -0.10673...
https://github.com/huggingface/datasets/issues/3598
Readme info not being parsed to show on Dataset card page
# Problem The issue seems to be coming from the front matter of the README:

```
---
annotations_creators:
- no-annotation
language_creators:
- machine-generated
languages:
- 'ca'
- 'de'
licenses:
- cc-by-4.0
multilinguality:
- translation
pretty_name: Catalan-German aligned corpora to train NMT systems.
size_...
```
## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatal...
121
Readme info not being parsed to show on Dataset card page ## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README...
[ -0.2200216651, -0.4916219413, 0.001602566, 0.6471502781, 0.460257858, 0.2593294382, 0.0698466077, 0.2069071531, -0.0885194764, 0.1984075308, 0.1808967143, 0.4812800288, 0.3071290255, 0.126343295, 0.2062165588, 0.1296862662, 0.0109634679, -0.2193579376, 0.2972039878, -0.18370930...
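The comment above traces the problem to the README's front matter. A minimal sketch of the extraction step that has to succeed before the card renders (a hypothetical helper, not the Hub's actual parser): split the leading `---` fence pair off the README, then render the remaining body.

```python
def split_front_matter(text):
    # Hypothetical, minimal YAML-front-matter splitter: assumes the README
    # may start with a `---` fence pair; returns (metadata, body).
    if not text.startswith("---\n"):
        return None, text          # no front matter at all
    end = text.find("\n---", 4)
    if end == -1:
        return None, text          # unterminated fence: treat as plain body
    meta = text[4:end]
    body = text[end + 4:].lstrip("\n")
    return meta, body

meta, body = split_front_matter("---\nlicenses:\n- cc-by-4.0\n---\n# Title\n\nText")
assert meta == "licenses:\n- cc-by-4.0"
assert body.startswith("# Title")
```

If this split (or the YAML inside it) fails, the whole card body can end up not being shown, which matches the symptom in the report.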
https://github.com/huggingface/datasets/issues/3598
Readme info not being parsed to show on Dataset card page
Thank you. It finally worked after implementing your changes and leaving a blank line between the title and the text in the description.
## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README file is this one: https://huggingface.co/datasets/softcatal...
20
Readme info not being parsed to show on Dataset card page ## Describe the bug The info contained in the README.md file is not being shown in the dataset main page. Basic info and table of contents are properly formatted in the README. ## Steps to reproduce the bug # Sample code to reproduce the bug The README...
[ -0.1167043969, -0.1736114621, -0.0457949899, 0.1973761022, 0.4050149024, 0.149337709, 0.2383347899, 0.0577396452, 0.0127058886, 0.2358467281, 0.1479034126, 0.3144131601, 0.3428182304, 0.3346486688, 0.2078316659, -0.0082408553, 0.0039033387, 0.0413327813, 0.1572258323, -0.048784...
https://github.com/huggingface/datasets/issues/3597
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content
Hi! The `cd` command in Jupyter/Colab needs to start with `%`, so this should work:

```
!git clone https://github.com/huggingface/datasets.git
%cd datasets
!pip install -e ".[streaming]"
```
## Bug Installing `datasets` with the streaming extra gives the following error. ## Steps to reproduce the bug ```python ! git clone https://github.com/huggingface/datasets.git ! cd datasets ! pip install -e ".[streaming]" ``` ## Actual results Cloning into 'datasets'... remote: Enumerating objects: 50816, done. remot...
26
ERROR: File "setup.py" or "setup.cfg" not found. Directory cannot be installed in editable mode: /content ## Bug Installing `datasets` with the streaming extra gives the following error. ## Steps to reproduce the bug ```python ! git clone https://github.com/huggingface/datasets.git ! cd datasets ! pip install -e ".[streami...
[ -0.2805958986, 0.0059206099, 0.0532629862, 0.0867149234, 0.1233595163, 0.1511144936, -0.0670933351, 0.1101514399, -0.2740464807, 0.1692894697, -0.2694028616, 0.3483410478, -0.2103759199, 0.2704070807, -0.011787775, -0.1742322594, 0.0239560548, 0.331048429, -0.2299905121, 0.1471...
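The reason `!cd` fails in the report above is that each `!` line runs in its own throwaway subshell, while `%cd` changes the kernel process itself. A small sketch of the difference, modeling `%cd` with `os.chdir` (the temp directory is just an illustration):

```python
import os
import subprocess
import tempfile

start = os.getcwd()
target = tempfile.mkdtemp()

# `!cd datasets`: the cd happens in a subshell that exits immediately,
# so the kernel's working directory is unchanged afterwards -- which is
# why `pip install -e .` then runs from /content instead.
subprocess.run(["sh", "-c", f'cd "{target}"'], check=True)
assert os.path.samefile(os.getcwd(), start)

# `%cd datasets`: changes the kernel process itself, and it sticks.
os.chdir(target)
assert os.path.samefile(os.getcwd(), target)
os.chdir(start)  # restore
```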
https://github.com/huggingface/datasets/issues/3596
Loss of cast `Image` feature on certain dataset method
Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature.
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
30
Loss of cast `Image` feature on certain dataset method ## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to ...
[ -0.212974757, 0.0474174246, 0.038972795, 0.0052658841, 0.6166043282, 0.1256070584, 0.5568077564, 0.3356984258, -0.0696557909, 0.0527435541, -0.058810696, 0.4363155663, 0.0470174327, -0.1245157272, 0.2604289949, -0.2225403041, 0.1796381027, -0.0886645988, -0.2497292608, -0.00938...
https://github.com/huggingface/datasets/issues/3596
Loss of cast `Image` feature on certain dataset method
> Hi! Thanks for reporting! The issue with `cast_column` should be fixed by #3575 and after we merge that PR I'll start working on the `push_to_hub` support for the `Image`/`Audio` feature. Thanks, I'll keep an eye out for #3575 getting merged. I managed to use `push_to_hub` successfully with images when they were loa...
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
92
Loss of cast `Image` feature on certain dataset method ## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to ...
[ -0.212974757, 0.0474174246, 0.038972795, 0.0052658841, 0.6166043282, 0.1256070584, 0.5568077564, 0.3356984258, -0.0696557909, 0.0527435541, -0.058810696, 0.4363155663, 0.0470174327, -0.1245157272, 0.2604289949, -0.2225403041, 0.1796381027, -0.0886645988, -0.2497292608, -0.00938...
https://github.com/huggingface/datasets/issues/3596
Loss of cast `Image` feature on certain dataset method
Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ?
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
25
Loss of cast `Image` feature on certain dataset method ## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to ...
[ -0.212974757, 0.0474174246, 0.038972795, 0.0052658841, 0.6166043282, 0.1256070584, 0.5568077564, 0.3356984258, -0.0696557909, 0.0527435541, -0.058810696, 0.4363155663, 0.0470174327, -0.1245157272, 0.2604289949, -0.2225403041, 0.1796381027, -0.0886645988, -0.2497292608, -0.00938...
https://github.com/huggingface/datasets/issues/3596
Loss of cast `Image` feature on certain dataset method
> Hi ! We merged the PR and did a release of `datasets` that includes the changes. Can you try updating `datasets` and try again ? Thanks for checking. There is no longer an error when calling `select` but it appears the cast value isn't preserved. Before `select` ```python dataset.features {'url': Image(id=Non...
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
64
Loss of cast `Image` feature on certain dataset method ## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to ...
[ -0.212974757, 0.0474174246, 0.038972795, 0.0052658841, 0.6166043282, 0.1256070584, 0.5568077564, 0.3356984258, -0.0696557909, 0.0527435541, -0.058810696, 0.4363155663, 0.0470174327, -0.1245157272, 0.2604289949, -0.2225403041, 0.1796381027, -0.0886645988, -0.2497292608, -0.00938...
https://github.com/huggingface/datasets/issues/3596
Loss of cast `Image` feature on certain dataset method
Hmmm, if I re-run your google colab I'm getting the right type at the end: ``` sample.features # {'url': Image(id=None)} ```
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
21
Loss of cast `Image` feature on certain dataset method ## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to ...
[ -0.212974757, 0.0474174246, 0.038972795, 0.0052658841, 0.6166043282, 0.1256070584, 0.5568077564, 0.3356984258, -0.0696557909, 0.0527435541, -0.058810696, 0.4363155663, 0.0470174327, -0.1245157272, 0.2604289949, -0.2225403041, 0.1796381027, -0.0886645988, -0.2497292608, -0.00938...
https://github.com/huggingface/datasets/issues/3596
Loss of cast `Image` feature on certain dataset method
Apologies - I've just run again and also got this output. I have also successfully used the `push_to_hub` method. I think this is fixed now, so I will close this issue.
## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to an `image`. This also happens when using select on a data...
30
Loss of cast `Image` feature on certain dataset method ## Describe the bug When a column is cast to an `Image` feature, the cast type appears to be lost during certain operations. I first noticed this when using the `push_to_hub` method on a dataset that contained urls pointing to images which had been cast to ...
[ -0.212974757, 0.0474174246, 0.038972795, 0.0052658841, 0.6166043282, 0.1256070584, 0.5568077564, 0.3356984258, -0.0696557909, 0.0527435541, -0.058810696, 0.4363155663, 0.0470174327, -0.1245157272, 0.2604289949, -0.2225403041, 0.1796381027, -0.0886645988, -0.2497292608, -0.00938...
https://github.com/huggingface/datasets/issues/3583
Add The Medical Segmentation Decathlon Dataset
Hello! I have recently been involved with a medical image segmentation project myself and was going through `The Medical Segmentation Decathlon Dataset` as well. I haven't had experience adding datasets to this repository yet but would love to get started. Should I take this issue? If yes, I've got two quest...
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:*...
107
Add The Medical Segmentation Decathlon Dataset ## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, ...
[ 0.0942726061, -0.0452841111, -0.0943571702, -0.0122368205, -0.0802382752, 0.1419505775, 0.3158585727, 0.0644161105, -0.1904926002, 0.0251721032, 0.0979224294, -0.4280961454, -0.202884689, 0.2265471965, 0.256485343, -0.3091827035, 0.045162648, -0.2086950392, 0.3492043316, -0.205...
https://github.com/huggingface/datasets/issues/3583
Add The Medical Segmentation Decathlon Dataset
Hi! Sure, feel free to take this issue. You can self-assign the issue by commenting `#self-assign`. To answer your questions: 1. It makes the most sense to add each one as a separate config, so one dataset script with 10 configs in a single PR. 2. Just set masks in the test set to `None`. Note that the images/m...
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:*...
111
Add The Medical Segmentation Decathlon Dataset ## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, ...
[ -0.0416804701, -0.2808955312, -0.0886708796, -0.0931892172, 0.0039942875, 0.0898778066, 0.5937296748, 0.3467416465, 0.2115156204, 0.2499487549, 0.0095475689, -0.1307509691, -0.2073960602, 0.416721046, 0.2649593949, -0.1821739376, -0.0593423396, 0.0177171286, -0.0619516596, 0.00...
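The "one dataset script with 10 configs" suggestion in the reply above can be sketched as follows. This is a hypothetical stand-in, not the actual dataset script: a plain dataclass replaces `datasets.BuilderConfig`, and only the task names (the ten real MSD tasks) come from the dataset itself.

```python
from dataclasses import dataclass

# The ten Medical Segmentation Decathlon tasks; in a real script each
# would become one datasets.BuilderConfig entry in BUILDER_CONFIGS.
MSD_TASKS = [
    "Task01_BrainTumour", "Task02_Heart", "Task03_Liver",
    "Task04_Hippocampus", "Task05_Prostate", "Task06_Lung",
    "Task07_Pancreas", "Task08_HepaticVessel", "Task09_Spleen",
    "Task10_Colon",
]

@dataclass
class MSDConfig:
    """Hypothetical stand-in for datasets.BuilderConfig."""
    name: str

# One script, ten configs: users would pick a task via
# load_dataset("<script>", "Task09_Spleen")-style selection.
BUILDER_CONFIGS = [MSDConfig(name=task) for task in MSD_TASKS]
print(len(BUILDER_CONFIGS))
```

With this layout, a single PR covers all ten tasks, and test-set masks can simply be yielded as `None` as suggested above.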
https://github.com/huggingface/datasets/issues/3583
Add The Medical Segmentation Decathlon Dataset
> Note that the images/masks in this dataset are in NIfTI format, which our `Image` feature currently doesn't support, so I think it's best to yield the paths to the images/masks in the script and add a preprocessing section to the card where we explain how to load/process the images/masks with `nibabel` (I can help wi...
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:*...
74
Add The Medical Segmentation Decathlon Dataset ## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, ...
[ -0.1030204222, -0.3495714366, -0.0923557281, -0.0719333515, 0.0702075511, 0.0257596634, 0.4989385605, 0.398943454, 0.1786017269, 0.288439244, 0.121741049, -0.0976137295, -0.0700705349, 0.4220615923, 0.2009046078, -0.2264316082, -0.0912129357, 0.1593818814, -0.0431683697, -0.061...
https://github.com/huggingface/datasets/issues/3583
Add The Medical Segmentation Decathlon Dataset
This is great! There is a first model on the HUb that uses this dataset! https://huggingface.co/MONAI/example_spleen_segmentation
## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, multi-site data, and small objects. - **Paper:*...
16
Add The Medical Segmentation Decathlon Dataset ## Adding a Dataset - **Name:** *The Medical Segmentation Decathlon Dataset* - **Description:** The underlying data set was designed to explore the axis of difficulties typically encountered when dealing with medical images, such as small data sets, unbalanced labels, ...
[ -0.0270238314, -0.3896330893, -0.1645667255, -0.2371069044, 0.0072027589, 0.0695009977, 0.4940696955, 0.3347842097, 0.0314945988, 0.3018218577, -0.0489493683, -0.1498311609, -0.1921692789, 0.4657769203, 0.2673726678, -0.333525002, -0.0200640932, 0.1890469044, 0.1262889653, -0.1...
https://github.com/huggingface/datasets/issues/3582
conll 2003 dataset source url is no longer valid
Thanks for reporting ! I pushed a temporary fix on `master` that uses a URL from a previous commit to access the dataset for now, until we have a better solution
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual r...
31
conll 2003 dataset source url is no longer valid ## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expect...
[ 0.0253509022, 0.0967388824, 0.065923214, 0.2801699936, -0.0838980228, -0.0983769894, 0.3944682181, 0.1143219024, -0.3661052287, 0.010017517, -0.1598331183, 0.153081283, -0.1152235493, -0.116748184, 0.0637081191, 0.1321539283, 0.0121241845, 0.000835884, 0.0810598731, 0.031079491...
https://github.com/huggingface/datasets/issues/3582
conll 2003 dataset source url is no longer valid
I changed the URL again to use another host, the fix is available on `master` and we'll probably do a new release of `datasets` tomorrow. In the meantime, feel free to do `load_dataset(..., revision="master")` to use the fixed script
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual r...
39
conll 2003 dataset source url is no longer valid ## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expect...
[ 0.0253509022, 0.0967388824, 0.065923214, 0.2801699936, -0.0838980228, -0.0983769894, 0.3944682181, 0.1143219024, -0.3661052287, 0.010017517, -0.1598331183, 0.153081283, -0.1152235493, -0.116748184, 0.0637081191, 0.1321539283, 0.0121241845, 0.000835884, 0.0810598731, 0.031079491...
https://github.com/huggingface/datasets/issues/3582
conll 2003 dataset source url is no longer valid
We just released a new version of `datasets` with a working URL. Feel free to update `datasets` and try again :)
## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expected results The dataset should load. ## Actual r...
21
conll 2003 dataset source url is no longer valid ## Describe the bug Loading `conll2003` dataset fails because it was removed (just yesterday 1/14/2022) from the location it is looking for. ## Steps to reproduce the bug ```python from datasets import load_dataset load_dataset("conll2003") ``` ## Expect...
[ 0.0253509022, 0.0967388824, 0.065923214, 0.2801699936, -0.0838980228, -0.0983769894, 0.3944682181, 0.1143219024, -0.3661052287, 0.010017517, -0.1598331183, 0.153081283, -0.1152235493, -0.116748184, 0.0637081191, 0.1321539283, 0.0121241845, 0.000835884, 0.0810598731, 0.031079491...
https://github.com/huggingface/datasets/issues/3580
Bug in wiki bio load
+1, here's the error I got: ``` >>> from datasets import load_dataset >>> >>> load_dataset("wiki_bio") Downloading: 7.58kB [00:00, 4.42MB/s] Downloading: 2.71kB [00:00, 1.30MB/s] Using custom data configuration default Downloading and preparing dataset wiki_bio/default (download: 318.53 MiB, generated: 736.9...
wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com...
154
Bug in wiki bio load wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-image...
[ -0.0613215007, 0.0982688144, -0.030225791, 0.1260512322, 0.0026656557, 0.2975159883, 0.6769422293, 0.3991516232, 0.4630135298, 0.0595791228, 0.1997943223, -0.2422289401, 0.3955300152, 0.3415488601, 0.2029338628, -0.0963699743, 0.0754149184, 0.1798694134, 0.164155513, 0.02346262...
https://github.com/huggingface/datasets/issues/3580
Bug in wiki bio load
@alejandrocros and @lhoestq - you added the wiki_bio dataset in #1173. It doesn't work anymore. Can you take a look at this?
wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com...
22
Bug in wiki bio load wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-image...
[ 0.0321893357, 0.0049939719, -0.0749883205, 0.122026898, 0.016177563, 0.2904135883, 0.6395812631, 0.2936856747, 0.4558522105, 0.0400671251, 0.0808757097, -0.1471681297, 0.4145267904, 0.3167213798, 0.2637818158, -0.0694372877, 0.1417486817, 0.1680009663, 0.0526714362, -0.04016467...
https://github.com/huggingface/datasets/issues/3580
Bug in wiki bio load
And if something is wrong with Google Drive, you could try to download (and collate and unzip) from here: https://github.com/DavidGrangier/wikipedia-biography-dataset
wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com...
20
Bug in wiki bio load wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-image...
[ 0.1476312876, 0.1652756631, -0.0648238063, 0.1482926756, -0.0046193241, 0.3781234324, 0.5248034, 0.3122829199, 0.400731355, 0.1247579157, 0.1112694964, -0.2811300457, 0.4056453109, 0.1641614735, 0.063554883, -0.2315274328, 0.0691660717, 0.2504971027, 0.1303062588, -0.0925101787...
https://github.com/huggingface/datasets/issues/3580
Bug in wiki bio load
Hi ! Thanks for reporting. I've downloaded the data and concatenated them into a zip file available here: https://huggingface.co/datasets/wiki_bio/tree/main/data I guess we can update the dataset script to use this zip file now :)
wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-images.githubusercontent.com...
34
Bug in wiki bio load wiki_bio is failing to load because of a failing drive link . Can someone fix this ? ![7E90023B-A3B1-4930-BA25-45CCCB4E1710](https://user-images.githubusercontent.com/3104771/149617870-5a32a2da-2c78-483b-bff6-d7534215a423.png) ![653C1C76-C725-4A04-A0D8-084373BA612F](https://user-image...
[ 0.0163171161, -0.0223626625, -0.0350460447, 0.2036607116, 0.0102495905, 0.2816864252, 0.5000677109, 0.3356057107, 0.4480063319, 0.0293331668, 0.1067662314, -0.1336872429, 0.4862632751, 0.4575192034, 0.2189482152, -0.1626470238, 0.1658850759, 0.1360336542, 0.0206579864, -0.01275...
https://github.com/huggingface/datasets/issues/3578
label information get lost after parquet serialization
Hi ! We did a release of `datasets` today that may fix this issue. Can you try updating `datasets` and trying again ? EDIT: the issue is still there actually I think we can fix that by storing the Features in the parquet schema metadata, and then reload them when loading the parquet file
## Describe the bug In *dataset_info.json* file, information about the label get lost after the dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='train') dataset.save_to_disk("normal_save") # save ...
54
label information get lost after parquet serialization ## Describe the bug In *dataset_info.json* file, information about the label get lost after the dataset serialization. ## Steps to reproduce the bug ```python from datasets import load_dataset # normal save dataset = load_dataset('glue', 'sst2', split='...
[ 0.100364916, 0.291457355, 0.0859543756, 0.2712121308, 0.2289622724, 0.0452237986, 0.1918179691, 0.0536162853, -0.1856913865, 0.2095583528, 0.2182401866, 0.7100892663, 0.148033008, 0.3368785977, 0.0191617422, -0.0620363764, 0.0825962573, 0.0399495289, 0.1877959669, -0.1167053208...
https://github.com/huggingface/datasets/issues/3572
ConnectionError in IndicGLUE dataset
@sahoodib, thanks for reporting. Indeed, none of the data links appearing in the IndicGLUE website are working, e.g.: https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/soham-articles.tar.gz ``` <Error> <Code>UserProjectAccountProblem</Code> <Message>User project billing account not in...
While I am trying to load IndicGLUE dataset (https://huggingface.co/datasets/indic_glue) it is giving me with the error: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403)
67
ConnectionError in IndicGLUE dataset While I am trying to load IndicGLUE dataset (https://huggingface.co/datasets/indic_glue) it is giving me with the error: ``` ConnectionError: Couldn't reach https://storage.googleapis.com/ai4bharat-public-indic-nlp-corpora/evaluations/wikiann-ner.tar.gz (error 403) @sahoodib, ...
[ -0.4043236375, 0.3878386617, -0.0267183259, 0.6261589527, 0.0222605765, 0.0886498168, 0.1807953119, 0.0415817574, 0.0253699701, 0.0412166119, -0.0606036223, -0.2099455595, 0.328040719, 0.0549296439, 0.1314405203, 0.036257498, -0.2428231835, 0.1725625396, -0.070566155, 0.2514490...
https://github.com/huggingface/datasets/issues/3568
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError
Hi @fabianslife, thanks for reporting. I think you were using an old version of `datasets` because this bug was already fixed in version `1.13.0` (13 Oct 2021): - Fix: 55fd140a63b8f03a0e72985647e498f1fc799d3f - PR: #3046 - Issue: #2969 Please, feel free to update the library: `pip install -U datasets`.
I wanted to download the Medical Dialog Dataset from huggingface, using this github link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from google drive, I unpacked everything and put it in the same folder as the medical_dialog.py which is: ``` ...
47
Downloading Hugging Face Medical Dialog Dataset NonMatchingSplitsSizesError I wanted to download the Medical Dialog Dataset from huggingface, using this github link: https://github.com/huggingface/datasets/tree/master/datasets/medical_dialog After downloading the raw datasets from google drive, I unpacked every...
[ -0.0822347105, -0.1012541056, 0.0506370142, 0.2017839402, 0.2633808851, 0.1685770303, 0.0695230365, 0.3702645302, 0.0356435217, 0.1093899533, -0.3243410289, -0.1678570658, -0.2540262043, 0.2069634944, -0.0428670831, -0.0932669938, -0.1223635525, 0.1837748289, -0.0260311496, -0....
https://github.com/huggingface/datasets/issues/3563
Dataset.from_pandas preserves useless index
Hi! That makes sense. Sure, feel free to open a PR! Just a small suggestion: let's make `preserve_index` a parameter of `Dataset.from_pandas` (which we then pass to `InMemoryTable.from_pandas`) with `None` as a default value to not have this as a breaking change.
## Describe the bug Let's say that you want to create a Dataset object from a pandas dataframe. Most likely you will write something like this: ``` import pandas as pd from datasets import Dataset df = pd.read_csv('some_dataset.csv') # Some DataFrame preprocessing code... dataset = Dataset.from_pandas(df) `...
42
Dataset.from_pandas preserves useless index ## Describe the bug Let's say that you want to create a Dataset object from a pandas dataframe. Most likely you will write something like this: ``` import pandas as pd from datasets import Dataset df = pd.read_csv('some_dataset.csv') # Some DataFrame preprocessing...
[ -0.2243756503, 0.3234053254, 0.0587055907, 0.1705539376, 0.2824808955, 0.0902998, 0.4146197438, 0.3388771415, -0.3311904967, 0.2017637938, -0.0870507881, 0.7348047495, 0.0466929898, -0.0187785905, 0.0809659511, -0.0653653294, 0.0340247042, 0.3033463359, -0.2229703665, 0.0121820...
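The useless-index behaviour described in this issue originates in pandas: any filtering or sampling leaves a non-default index, which then gets serialized as an extra `__index_level_0__` column. A minimal pandas-only sketch of the workaround (resetting the index before conversion), assuming the proposed `preserve_index` parameter is not yet available:

```python
import pandas as pd

df = pd.DataFrame({"feature": [1, 2, 3], "label": ["a", "b", "c"]})
filtered = df[df["feature"] > 1]  # index is now [1, 2], not [0, 1]

# Without this, Dataset.from_pandas would keep the stale index around
# as an extra "__index_level_0__" column; dropping it restores a
# clean default RangeIndex.
clean = filtered.reset_index(drop=True)
print(list(clean.index))
```

The suggested `preserve_index=None` default would make this explicit `reset_index(drop=True)` step unnecessary without breaking existing callers.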
https://github.com/huggingface/datasets/issues/3561
Cannot load ‘bookcorpusopen’
The host of this copy of the dataset (https://the-eye.eu) is down and has been down for a good amount of time ([potentially months](https://www.reddit.com/r/Roms/comments/q82s15/theeye_downdied/)) Finding this dataset is a little esoteric, as the original authors took down the official BookCorpus dataset some time a...
## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_pre...
109
Cannot load ‘bookcorpusopen’ ## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https:...
[ -0.2679781318, 0.2642430663, -0.0406800844, 0.3686515391, -0.1236103103, 0.0088020796, 0.3477190137, 0.2313549519, -0.0628313869, -0.0716194585, -0.3619252145, 0.118346341, 0.317332238, 0.1622451842, 0.2099274844, 0.1416336894, 0.1095947549, 0.1244373322, -0.0420725048, -0.1039...
https://github.com/huggingface/datasets/issues/3561
Cannot load ‘bookcorpusopen’
Hi! The `bookcorpusopen` dataset is not working for the same reason as explained in this comment: https://github.com/huggingface/datasets/issues/3504#issuecomment-1004564980
## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_pre...
17
Cannot load ‘bookcorpusopen’ ## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https:...
[ -0.304099828, -0.0313268453, -0.0976268277, 0.3863776624, 0.1762519777, -0.0748730153, 0.3626802564, 0.1297377497, 0.0686660558, -0.011390727, -0.3443722129, 0.2929640412, 0.4512723684, 0.4727550447, 0.2462410331, -0.2257454246, 0.0581259392, 0.0326237418, -0.1498839706, -0.012...
https://github.com/huggingface/datasets/issues/3561
Cannot load ‘bookcorpusopen’
Hi @HUIYINXUE, it should work now that the data owners created a mirror server with all data, and we updated the URL in our library.
## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https://the-eye.eu/public/AI/pile_pre...
25
Cannot load ‘bookcorpusopen’ ## Describe the bug Cannot load 'bookcorpusopen' ## Steps to reproduce the bug ```python dataset = load_dataset('bookcorpusopen') ``` or ```python dataset = load_dataset('bookcorpusopen',script_version='master') ``` ## Actual results ConnectionError: Couldn't reach https:...
[ -0.2113384753, 0.1578316838, -0.0710607544, 0.3246243894, -0.0623678938, -0.1400378644, 0.3946614563, 0.1990953535, 0.0004558234, -0.0569876432, -0.3438051939, 0.318657428, 0.4406041205, 0.2468894124, 0.1852912009, -0.1837173253, -0.0093768798, 0.0977922305, -0.2471719235, -0.1...
https://github.com/huggingface/datasets/issues/3558
Integrate Milvus (pymilvus) library
Hi @mariosasko, I just searched randomly and found this issue~ I'm the tech lead of Milvus and we are looking forward to integrating Milvus with huggingface datasets. Any suggestions on how we could start?
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
34
Integrate Milvus (pymilvus) library Milvus is a popular open-source vector database. We should add a new vector index to support this project. Hi @mariosasko, I just searched randomly and found this issue~ I'm the tech lead of Milvus and we are looking forward to integrating Milvus with huggingface datasets. ...
[ 0.0640774444, -0.6056883931, -0.1171271652, 0.3045168221, 0.2704040408, 0.0546639301, -0.0459689684, 0.1624069214, 0.0897542015, 0.3288972378, -0.0334931016, -0.2064775378, -0.2139965892, 0.0561555438, -0.0317880772, 0.0591635332, -0.0616706572, 0.2412931919, 0.001262023, -0.03...
https://github.com/huggingface/datasets/issues/3558
Integrate Milvus (pymilvus) library
Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer.
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
37
Integrate Milvus (pymilvus) library Milvus is a popular open-source vector database. We should add a new vector index to support this project. Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/E...
[ -0.19968687, -0.2113723904, -0.2736818194, -0.103936702, -0.0298224445, -0.0717026517, -0.0178257227, 0.2599717975, 0.2742888629, 0.0905229002, -0.0979632288, -0.1096255183, -0.2341021597, -0.0411486551, 0.0008002831, 0.0969581455, 0.0917071402, 0.1584693938, 0.0501033776, -0.0...
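To see the shape of the index interface that `search.py` expects, here is a deliberately tiny, brute-force stand-in for what a Milvus-backed index adapter would implement. The `add_vectors`/`search` names only loosely mirror the Faiss pattern in that file, and pymilvus itself is not used:

```python
import math

class ToyVectorIndex:
    """Hypothetical sketch of a vector index adapter; brute-force
    nearest-neighbour search stands in for a real pymilvus backend."""

    def __init__(self):
        self.vectors = []

    def add_vectors(self, vecs):
        self.vectors.extend(vecs)

    def search(self, query, k=1):
        # Rank stored vectors by Euclidean distance to the query.
        def dist(a, b):
            return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

        ranked = sorted(
            range(len(self.vectors)),
            key=lambda i: dist(self.vectors[i], query),
        )
        return ranked[:k]

idx = ToyVectorIndex()
idx.add_vectors([[0.0, 0.0], [1.0, 1.0], [5.0, 5.0]])
print(idx.search([0.9, 1.1], k=1))
```

A real integration would replace the in-memory list and brute-force loop with pymilvus collection calls while keeping this add/search surface, which is what lets `datasets` swap index backends behind one interface.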
https://github.com/huggingface/datasets/issues/3558
Integrate Milvus (pymilvus) library
> Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss/ElasticSearch support. We could set up a Slack channel for additional guidance. Let me know what you prefer. Sure, we take a look and do some r...
Milvus is a popular open-source vector database. We should add a new vector index to support this project.
47
Integrate Milvus (pymilvus) library Milvus is a popular open-source vector database. We should add a new vector index to support this project. > Hi! For starters, I suggest you take a look at this file: https://github.com/huggingface/datasets/blob/master/src/datasets/search.py, which contains all the code for Faiss...
[ -0.1838495284, -0.2007055879, -0.2723218203, -0.1145946383, -0.0485320874, -0.0991515443, -0.0014097604, 0.2520477474, 0.2506119907, 0.0776884779, -0.082934998, -0.1187220886, -0.2495795786, -0.0352785327, -0.0100632748, 0.0800941512, 0.0907990932, 0.1512244493, 0.0490903668, -...
https://github.com/huggingface/datasets/issues/3555
DuplicatedKeysError when loading tweet_qa dataset
Hi, we've just merged the PR with the fix. The fixed version of the dataset can be downloaded as follows: ```python import datasets dset = datasets.load_dataset("tweet_qa", revision="master") ```
When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs: `DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e Keys should be unique and deterministic in nature ` Might be related to issues #2433 and #2333 - `datasets` ...
28
DuplicatedKeysError when loading tweet_qa dataset When loading the tweet_qa dataset with `load_dataset('tweet_qa')`, the following error occurs: `DuplicatedKeysError: FAILURE TO GENERATE DATASET ! Found duplicate Key: 2a167f9e016ba338e1813fed275a6a1e Keys should be unique and deterministic in nature ` Might b...
[ 0.0209830124, -0.2672317326, -0.0047798767, 0.206936419, 0.3006342351, 0.0638986528, 0.1432422847, 0.3013512194, -0.1967462152, 0.0905203298, 0.000505654, 0.3443004489, -0.199397102, 0.2585407495, 0.1344979852, -0.0611697547, -0.1748432666, -0.061942149, -0.0889877006, 0.046102...
https://github.com/huggingface/datasets/issues/3554
ImportError: cannot import name 'is_valid_waiter_error'
Hi! I can't reproduce this error in Colab, but I'm assuming you are using Amazon SageMaker Studio Notebooks (you mention the `conda_pytorch_p36` kernel), so maybe @philschmid knows more about what might be causing this issue?
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-u...
35
ImportError: cannot import name 'is_valid_waiter_error' Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: `...
[ -0.147882849, -0.1469279379, -0.2213934511, 0.1788341254, 0.1569535136, -0.1981433779, 0.3735085726, 0.1772010922, -0.0016112427, 0.0087774508, -0.1001608148, 0.3891834319, 0.0245094839, 0.115436241, -0.0132378004, -0.0450459421, 0.0596403442, 0.2368056029, -0.1831156015, -0.09...
https://github.com/huggingface/datasets/issues/3554
ImportError: cannot import name 'is_valid_waiter_error'
Hey @mariosasko. Yes, I am using **Amazon SageMaker Studio Jupyter Labs**. However, I no longer need this notebook, but it would be nice to have this problem solved for others. So don't stress too much if you two can't reproduce the error.
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-u...
41
ImportError: cannot import name 'is_valid_waiter_error' Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: `...
[ -0.147882849, -0.1469279379, -0.2213934511, 0.1788341254, 0.1569535136, -0.1981433779, 0.3735085726, 0.1772010922, -0.0016112427, 0.0087774508, -0.1001608148, 0.3891834319, 0.0245094839, 0.115436241, -0.0132378004, -0.0450459421, 0.0596403442, 0.2368056029, -0.1831156015, -0.09...
https://github.com/huggingface/datasets/issues/3554
ImportError: cannot import name 'is_valid_waiter_error'
Hey @danielbellhv, This issue might be related to Studio probably not having an up to date `botocore` and `boto3` version. I ran into this as well a while back. My workaround was ```python # using older dataset due to incompatibility of sagemaker notebook & aws-cli with > s3fs and fsspec to >= 2021.10 !pip inst...
Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: ``` Requirement already satisfied: datasets in /home/ec2-u...
90
ImportError: cannot import name 'is_valid_waiter_error' Based on [SO post](https://stackoverflow.com/q/70606147/17840900). I'm following along to this [Notebook][1], cell "**Loading the dataset**". Kernel: `conda_pytorch_p36`. I run: ``` ! pip install datasets transformers optimum[intel] ``` Output: `...
[ -0.147882849, -0.1469279379, -0.2213934511, 0.1788341254, 0.1569535136, -0.1981433779, 0.3735085726, 0.1772010922, -0.0016112427, 0.0087774508, -0.1001608148, 0.3891834319, 0.0245094839, 0.115436241, -0.0132378004, -0.0450459421, 0.0596403442, 0.2368056029, -0.1831156015, -0.09...
https://github.com/huggingface/datasets/issues/3553
set_format("np") no longer works for Image data
This error also propagates to jax and is even trickier to fix, since `.with_format(type='jax')` will use numpy conversion internally (and fail). For a three-line failure: ```python dataset = datasets.load_dataset("mnist") dataset.set_format("jax") X_train = dataset["train"]["image"] ```
## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work, `set_format(...
35
set_format("np") no longer works for Image data ## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy...
[ -0.0351040848, 0.1116232276, -0.0163861923, -0.1090965569, 0.5629806519, -0.0685696006, 0.6217341423, 0.4775542617, -0.1744560152, 0.1781869382, 0.0658131242, 0.527074039, -0.088053219, 0.1173616201, 0.0401965939, -0.3605453968, 0.072074309, 0.2647365034, 0.0808508024, 0.159616...
https://github.com/huggingface/datasets/issues/3553
set_format("np") no longer works for Image data
Hi! We've recently introduced a new Image feature that yields PIL Images (and caches transforms on them) instead of arrays. However, this feature requires a custom transform to yield np arrays directly: ```python ddict = datasets.load_dataset("mnist") def pil_image_to_array(batch): return {"image": [np.ar...
## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work, `set_format(...
127
set_format("np") no longer works for Image data ## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy...
[ -0.1515493989, 0.0228351578, 0.0084198676, -0.0925700441, 0.4749735594, -0.1852437109, 0.6634012461, 0.2563917935, -0.260232538, 0.1248800829, -0.1729411781, 0.5843823552, -0.1788263172, 0.1715169847, 0.1118780747, -0.3430039287, 0.0372204855, 0.3603931665, 0.0618001521, 0.1723...
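The custom transform in the comment above is truncated; here is a minimal, self-contained sketch of the same PIL-to-NumPy conversion, run on a toy batch instead of the real MNIST dataset (the helper name is hypothetical, not part of the `datasets` API):

```python
import numpy as np
from PIL import Image

def pil_batch_to_numpy(batch):
    # Hypothetical helper: convert a batch column of PIL images into
    # NumPy arrays, mirroring what a custom transform would do now that
    # the Image feature yields PIL Images instead of arrays.
    return {"image": [np.asarray(img) for img in batch["image"]]}

# Toy batch standing in for a datasets image column (instead of real MNIST)
batch = {"image": [Image.new("L", (28, 28)) for _ in range(2)]}
out = pil_batch_to_numpy(batch)
print(out["image"][0].shape)  # (28, 28)
```

With a real dataset, a function like this would be passed to `with_transform` rather than applied by hand.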
https://github.com/huggingface/datasets/issues/3553
set_format("np") no longer works for Image data
Yes, I agree it should return arrays and not a PIL image (and possibly an array instead of a dict for audio data). I'm currently finishing some code refactoring of the image and audio features and opening a PR today. Maybe we can look into that after the refactoring.
## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy array ``` but now it doesn't work, `set_format(...
48
set_format("np") no longer works for Image data ## Describe the bug `dataset.set_format("np")` no longer works for image data, previously you could load the MNIST like this: ```python dataset = load_dataset("mnist") dataset.set_format("np") X_train = dataset["train"]["image"][..., None] # <== No longer a numpy...
[ -0.3260746598, 0.0780695602, -0.0351971686, -0.0143146245, 0.3845542967, -0.3260154128, 0.5448749065, 0.2585427761, -0.1670454443, 0.1273833066, -0.1693264544, 0.7061198354, -0.2626221478, 0.0751629323, 0.1509307921, -0.2775861919, 0.1291610748, 0.2442281842, 0.1163542941, 0.06...
https://github.com/huggingface/datasets/issues/3548
Specify the feature types of a dataset on the Hub without needing a dataset script
After looking into this, discovered that this is already supported if the `dataset_infos.json` file is configured correctly! Here is a working example: https://huggingface.co/datasets/abidlabs/test-audio-13 This should probably be documented, though.
**Is your feature request related to a problem? Please describe.** Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to specify the types of the column, so that when loading the dataset I directly get the feat...
30
Specify the feature types of a dataset on the Hub without needing a dataset script **Is your feature request related to a problem? Please describe.** Currently if I upload a CSV with paths to audio files, the column type is string instead of Audio. **Describe the solution you'd like** I'd like to be able to spec...
[ -0.1981454343, -0.3338792622, -0.0593696311, -0.015883591, 0.2914840281, 0.0536743775, 0.5299441814, 0.2025271356, 0.2964348197, 0.2206585556, -0.0498791374, 0.4667156041, -0.0649666712, 0.60937953, 0.117898874, 0.0383371338, 0.154357627, 0.2395860553, -0.0806983635, 0.15829841...
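For context on the comment above, a rough illustration of what a `dataset_infos.json` features entry declaring an Audio column might look like. The field names and nesting are approximate assumptions, not copied from the linked working example:

```json
{
  "default": {
    "features": {
      "audio": {
        "sampling_rate": 16000,
        "_type": "Audio"
      },
      "label": {
        "dtype": "string",
        "_type": "Value"
      }
    }
  }
}
```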
https://github.com/huggingface/datasets/issues/3547
Datasets created with `push_to_hub` can't be accessed in offline mode
Thanks for reporting. I think this can be fixed by improving the `CachedDatasetModuleFactory` and making it look into the `parquet` cache directory (datasets from push_to_hub are loaded with the parquet dataset builder). I'll look into it
## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/matched_passages_wikidata") ``` in bash: ``` export HF_DATASETS_OFFLIN...
36
Datasets created with `push_to_hub` can't be accessed in offline mode ## Describe the bug In offline mode, one can still access previously-cached datasets. This fails with datasets created with `push_to_hub`. ## Steps to reproduce the bug in Python: ``` import datasets mpwiki = datasets.load_dataset("teven/ma...
[ -0.3195171058, -0.3339677453, 0.0332719758, 0.2394816726, 0.1446232647, 0.0975356847, 0.4311662614, 0.1092434898, 0.1457218379, 0.002082299, -0.164107725, 0.093978785, 0.1401516795, 0.2055078447, 0.0176242907, -0.014554346, 0.2575738132, -0.1719705909, 0.1519204676, -0.04281570...
https://github.com/huggingface/datasets/issues/3543
Allow loading community metrics from the hub, just like datasets
Hi ! Thanks for your message :) This is a great idea indeed. We haven't started working on this yet though. For now I guess you can host your metric on the Hub (either with your model or your dataset) and use `hf_hub_download` to download it (docs [here](https://github.com/huggingface/huggingface_hub/blob/main/docs/hub...
**Is your feature request related to a problem? Please describe.** Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do it with the metric uploaded to the hub. This means that if I want to allow other users to use it, they must d...
48
Allow loading community metrics from the hub, just like datasets **Is your feature request related to a problem? Please describe.** Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do it with the metric uploaded to the hub. Th...
[ -0.5048287511, -0.1317422688, -0.0456834771, 0.1188793108, 0.0301415231, -0.0557257459, 0.1593100578, 0.013089898, 0.4941082001, 0.4180036783, -0.4741497338, 0.2941925526, -0.0305409059, 0.3988614082, -0.0859557539, -0.0085316356, -0.2955570221, -0.0084171826, -0.1162706763, 0....
https://github.com/huggingface/datasets/issues/3543
Allow loading community metrics from the hub, just like datasets
Here's the code I used, in case it can be of help to someone else: ```python import os, shutil from huggingface_hub import hf_hub_download def download_metric(repo_id, file_path): # repo_id: for models "username/model_name", for datasets "datasets/username/model_name" local_metric_path = hf_hub_download(r...
**Is your feature request related to a problem? Please describe.** Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do it with the metric uploaded to the hub. This means that if I want to allow other users to use it, they must d...
55
Allow loading community metrics from the hub, just like datasets **Is your feature request related to a problem? Please describe.** Currently, I can load a metric implemented by me by providing the local path to the file in `load_metric`. However, there is no option to do it with the metric uploaded to the hub. Th...
[ -0.47042045, -0.1260262728, 0.0011760294, 0.1479982883, 0.0563812032, -0.0769120753, 0.1348915249, 0.0765777901, 0.6092924476, 0.4103240371, -0.5060341358, 0.2940291762, -0.0671618879, 0.2850588262, -0.073428452, -0.0907046348, -0.2695915997, -0.027828997, -0.1544400901, -0.026...
https://github.com/huggingface/datasets/issues/3518
Add PubMed Central Open Access dataset
In the framework of BigScience: - bigscience-workshop/data_tooling#121 I have created this dataset as a community dataset: https://huggingface.co/datasets/albertvillanova/pmc_open_access However, I was wondering whether it may be more appropriate to move it under an org namespace: `pubmed_central` or `pmc` This wa...
## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.ncbi.nlm....
64
Add PubMed Central Open Access dataset ## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if avai...
[ 0.0158515554, 0.2765922248, -0.0521292724, -0.076617375, 0.0842334926, 0.0609657019, 0.1692059785, 0.1762033403, 0.0278634988, 0.0237174761, -0.0096464222, 0.0166482441, -0.215217486, -0.1154047474, 0.1166006252, -0.1897338033, 0.328087002, -0.1133034304, 0.1321059763, 0.042029...
https://github.com/huggingface/datasets/issues/3518
Add PubMed Central Open Access dataset
Why not! Having them under such namespaces would also help people searching for this kind of dataset. We can also invite people from pubmed at one point
## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if available* - **Data:** https://www.ncbi.nlm....
28
Add PubMed Central Open Access dataset ## Adding a Dataset - **Name:** PubMed Central Open Access - **Description:** The PMC Open Access Subset includes more than 3.4 million journal articles and preprints that are made available under license terms that allow reuse. - **Paper:** *link to the dataset paper if avai...
[ -0.1280044317, 0.0742644891, -0.2116854191, -0.2449233383, -0.0705326125, 0.0678361356, 0.2654060125, 0.323445946, 0.0099771302, 0.2698273361, -0.0652089715, 0.0095179658, -0.2703078091, 0.0643100739, 0.0232646074, -0.0676072687, 0.0715512931, 0.1311511248, 0.2244762331, 0.0050...
https://github.com/huggingface/datasets/issues/3510
`wiki_dpr` details for Open Domain Question Answering tasks
Hi ! According to the DPR paper, the wikipedia dump is the one from Dec. 20, 2018. Each instance contains a paragraph of at most 100 words, as well as the title of the wikipedia page it comes from and the DPR embedding (a 768-d vector).
Hey guys! Thanks for creating the `wiki_dpr` dataset! I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it uses, etc. Please respond at your earliest convenience regard...
46
`wiki_dpr` details for Open Domain Question Answering tasks Hey guys! Thanks for creating the `wiki_dpr` dataset! I am currently trying to use the dataset for context retrieval using DPR on NQ questions and need details about what each of the files and data instances mean, which version of the Wikipedia dump it...
[ 0.146807313, -0.3599883616, -0.1104262546, 0.5688871145, -0.2651137412, 0.0071313893, 0.1473311186, 0.1168371066, -0.1801617593, -0.2350244075, -0.0067062248, -0.0107021453, 0.0064400793, 0.1478558034, 0.1353964508, -0.5107955933, 0.10089463, 0.0089568133, 0.0648308471, -0.2404...
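The comment above describes each wiki_dpr instance as a passage of at most 100 words with a title and a 768-d DPR embedding. As a toy sketch of how such embeddings are used for retrieval (random vectors standing in for the real dataset; DPR ranks passages by inner product with the question embedding):

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy stand-in for wiki_dpr: 5 passages, each with a 768-d DPR embedding
passage_embeddings = rng.normal(size=(5, 768)).astype(np.float32)

# A question embedding that is very close to passage 3
question_embedding = passage_embeddings[3] + 0.01 * rng.normal(size=768).astype(np.float32)

# DPR retrieval scores passages by inner product with the question embedding
scores = passage_embeddings @ question_embedding
best = int(np.argmax(scores))
print(best)
```

With the real dataset, the same scoring is what the Faiss index attached to `wiki_dpr` performs at scale.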
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
IMO, the data streaming test is good enough of a test that the dataset works correctly (assuming that we can more or less ensure that if streaming works then the non-streaming case will also work), so that for datasets that have a working dataset preview, we can remove the dummy data IMO. On the other hand, it seems li...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
105
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2486526221, 0.0137668708, 0.0282523055, 0.1720922291, 0.0348425992, 0.2506077588, 0.3021661937, 0.1551800519, -0.1902869642, 0.0683339164, 0.2187203616, 0.0646495596, -0.0933549777, 0.244366765, -0.11610955, -0.0490053557, 0.1391410977, 0.0312336441, 0.1161179468, 0.07466427...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
I don't know all the details, but generally I'd be in favor of unifying the metadata formats into YAML inside .md (and so deprecating the dataset_infos.json) (Ultimately the CI can run on "HuggingFace Actions" instead of on GitHub)
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
38
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2565703988, -0.0132413674, 0.0461184047, 0.1416006088, 0.139359653, 0.2571756244, 0.3560964167, 0.188470155, -0.2326669395, 0.0781047493, 0.1527014375, 0.1550492197, -0.1056066528, 0.2942405343, -0.0938436612, 0.0398536287, 0.1528295577, -0.0513016991, 0.0435610339, 0.116653...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
The dataset_infos.json file currently has these useful infos for each dataset configuration, that I think can be moved to the dataset tags: - Size of the dataset in MB: download size, arrow file size, and total size (sum of download + arrow) - Size of each split in MB and number of examples. Again this can be moved t...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
355
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2210954279, 0.0069160708, 0.0213334933, 0.2000364214, 0.1244293451, 0.3079423308, 0.2800020278, 0.2172093689, -0.0784053355, 0.096394375, 0.2354323119, 0.1877381653, -0.0900824666, 0.3275886178, -0.072766304, -0.015771471, 0.1152651459, 0.0301218908, 0.173539862, 0.108022689...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
(note that if we wanted to display sizes, etc we could also pretty easily parse the `dataset_infos.json` on the hub side)
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
21
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2448803186, -0.0072730361, 0.0365820192, 0.1965517402, 0.078777127, 0.2738459408, 0.3356241882, 0.1830960214, -0.1585740298, 0.1159566864, 0.1745054424, 0.12919496, -0.0386308022, 0.2549432516, -0.1055927351, 0.0186514128, 0.1146853268, 0.0495393276, 0.0804453269, 0.06552887...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
I agree that we can move the relevant parts of `dataset_infos.json` to the YAML tags. > On the other hand, it seems like not all datasets have streaming enabled yet and for those datasets (if they are used a lot), I think it would be nice to continue testing some dummy data. > Yes indeed, or at least make sure ...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
112
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2844230533, -0.0424161591, 0.0515675135, 0.1613871753, 0.0921573266, 0.2515726089, 0.2895365357, 0.1487917751, -0.1381627172, 0.073765479, 0.223337993, 0.0595797002, -0.0699712709, 0.300771147, -0.0091128284, -0.0684612691, 0.1368359923, 0.0008898102, 0.1397280395, 0.0916286...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
About dummy data, please see e.g. this PR: https://github.com/huggingface/datasets/pull/3692/commits/62368daac0672041524a471386d5e78005cf357a - I updated the previous dummy data: I just had to rename the file and its directory - the dummy data zip contains only a single file: `pubmed22n0001.xml.gz` Then I discov...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
151
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.1429853886, -0.060847979, 0.0550825633, 0.2112828642, 0.1136176214, 0.2524834871, 0.3205831051, 0.1741959453, -0.1172098815, 0.1219318509, 0.1199110895, 0.0238968544, -0.037739858, 0.2567646205, -0.0817495063, -0.0914015099, 0.127621904, 0.0220600981, 0.0700239986, 0.0524603...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
I mention in https://github.com/huggingface/datasets-server/wiki/Preliminary-design that the future "datasets server" could be in charge of generating both the dummy data and the dataset-info.json file if required (or their equivalent).
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
28
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.1777145565, -0.003646753, 0.0536854379, 0.2157488018, 0.0634276718, 0.2323122323, 0.3523750007, 0.1556675434, -0.1672702879, 0.1026074961, 0.1788233519, 0.0926263481, -0.0486217141, 0.3518205583, -0.0754186213, -0.0335629694, 0.135946095, 0.0997845978, 0.0033886631, 0.086744...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
Hi ! I think dummy data generation is out of scope for the datasets server, since it's about generating the original data files. It would be amazing to have it generate the dataset_infos.json, though!
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
35
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.1948589534, 0.0102332085, 0.0505853146, 0.2380984575, 0.0520472601, 0.2354520112, 0.3769235611, 0.118540749, -0.1812932342, 0.0504686907, 0.1859699786, 0.1205503345, -0.0686953738, 0.2514625788, -0.1029324681, 0.0122028058, 0.1358127594, 0.0883083045, 0.0188365951, 0.0695580...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
From some offline discussion with @mariosasko and especially for vision datasets, we'll probably not require dummy data anymore and use streaming instead :) This will make adding a new dataset much easier. This should also make sure that streaming works as expected directly in the CI, without having to check the datas...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
58
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.324167043, 0.0028328344, 0.0530671589, 0.1227909923, 0.0889795721, 0.2199126035, 0.2552140355, 0.181075722, -0.151201278, 0.098315753, 0.1719060093, 0.1244265884, -0.0338424742, 0.3266551495, -0.1211210117, -0.0729696378, 0.1732610166, 0.0790969282, 0.107607387, 0.1135489419...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
It seems that migration from dataset-info.json to dataset card YAML has been decided. Probably it's a good idea, but I didn't find the pros and cons of this decision, so here are some I could think of: pros: - only one file to parse, share, sync - it gives a hint to the users that if you write your dataset card, you...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
210
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2458129972, 0.0725631118, 0.0490858741, 0.1374435872, 0.0960530788, 0.2946307957, 0.3775528669, 0.2325655669, -0.2260082364, 0.0578873195, 0.204494223, 0.1616473794, -0.0737449676, 0.1169880629, -0.0357842892, 0.0991988853, 0.1957191378, -0.08241155, 0.0424896665, 0.08213612...
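As a rough illustration of the YAML-tags direction discussed in this thread, dataset-info-style metadata moved into a README front matter might look like this (field names and values are illustrative assumptions):

```yaml
---
dataset_info:
  features:
    - name: text
      dtype: string
    - name: label
      dtype: int64
  splits:
    - name: train
      num_examples: 100
      num_bytes: 12345
  download_size: 6789
  dataset_size: 12345
---
```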
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums Note that we co...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
214
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2032292634, 0.0275494643, 0.0572872944, 0.1954613179, 0.13915582, 0.2496224791, 0.4129858315, 0.1944946349, -0.1112275198, 0.0788740665, 0.1915496439, 0.1532637924, -0.013792634, 0.2418874502, -0.0354791172, 0.0570903011, 0.1467804313, -0.0240850616, 0.0660683811, 0.10137407...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
> the metadata header might be very long, before reaching the start of the README/dataset card. It might be surprising when you edit the file because the metadata is not shown on top when the dataset card is rendered (dataset page). It also somewhat prevents including large strings like the checksums We can definite...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
66
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.2525478601, -0.004522847, 0.0422584862, 0.2192870975, 0.1078495383, 0.2615562379, 0.3254029155, 0.2054439634, -0.1367204785, 0.0993757993, 0.178440094, 0.1867020875, -0.0159963649, 0.2168878168, -0.1041497812, 0.0664003864, 0.1291632652, 0.040331427, 0.0660883859, 0.04071745...
https://github.com/huggingface/datasets/issues/3507
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data
Tensorflow Datasets catalog includes a community catalog where you can find and use HF datasets (see [here](https://www.tensorflow.org/datasets/community_catalog/huggingface)). FYI I noticed today that they are using the exported dataset_infos.json files from github to get the metadata (see their code [here](https:/...
I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point of having also the JSON metadata (dataset_infos.json file)? On the other hand, the d...
39
Discuss whether support canonical datasets w/o dataset_infos.json and/or dummy data I open this PR to have a public discussion about this topic and make a decision. As previously discussed, once we have the metadata in the dataset card (README file, containing both Markdown info and YAML tags), what is the point o...
[ -0.1930968761, -0.0408927612, 0.0171367414, 0.1483712047, 0.0837191492, 0.2352292389, 0.4617837965, 0.3679298759, -0.1482816488, 0.1370883435, 0.0799530819, 0.0977993309, -0.0425133295, 0.1569617093, -0.0325315818, -0.1178674698, 0.0716281459, 0.084002845, 0.0171172768, -0.1026...
https://github.com/huggingface/datasets/issues/3505
cast_column function not working with map function in streaming mode for Audio features
Hi! This is probably due to the fact that `IterableDataset.map` sets `features` to `None` before mapping examples. We can fix the issue by passing the old `features` dict to the map generator and performing encoding/decoding there (before calling the map transform function).
## Describe the bug I am trying to use Audio class for loading audio features using custom dataset. I am able to cast 'audio' feature into 'Audio' format with cast_column function. On using map function, I am not getting 'Audio' casted feature but getting path of audio file only. I am getting features of 'audio' of s...
42
cast_column function not working with map function in streaming mode for Audio features ## Describe the bug I am trying to use Audio class for loading audio features using custom dataset. I am able to cast 'audio' feature into 'Audio' format with cast_column function. On using map function, I am not getting 'Audio' ...
[ -0.2622295916, -0.2025067061, -0.0266618859, 0.3577167988, 0.6027312875, -0.2296938449, 0.3419704735, 0.3029460907, 0.2976568043, -0.0616048686, -0.0520022623, 0.7353785634, -0.1707476377, 0.1945964545, -0.0731458217, -0.2837214768, 0.1454188973, 0.1953483373, -0.2825004458, -0...
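The fix sketched in the comment above — re-applying feature decoding after the user's map function — can be illustrated library-agnostically. `map_with_decoding` and its plain-callable `decoders` are hypothetical stand-ins for `datasets` Features, not the actual implementation:

```python
def map_with_decoding(rows, fn, decoders):
    """Apply a map function, then re-apply per-column decoders so typed
    columns survive the transform (a sketch of the proposed fix; in real
    `datasets`, Features handle encoding/decoding, not plain callables)."""
    for row in rows:
        row = fn(dict(row))  # user transform on a copy of the raw row
        for col, decode in decoders.items():
            if col in row:
                row[col] = decode(row[col])  # restore the decoded form
        yield row

rows = [{"audio": "sample.wav", "rate": 8000}]
decoders = {"audio": lambda path: {"path": path, "array": None}}
mapped = list(map_with_decoding(rows, lambda r: {**r, "rate": 16000}, decoders))
```

Here the `audio` column still comes out as a decoded structure after the map, instead of degrading to a bare path.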
https://github.com/huggingface/datasets/issues/3504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
Hi @ToddMorrill, thanks for reporting. Three weeks ago I contacted the team who created the Pile dataset to report this issue with their data host server: https://the-eye.eu. They told me that unfortunately, the-eye was heavily affected by the recent tornado catastrophe in the US. They hope to have their data back...
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce ...
53
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBME...
[ 0.0793420821, -0.1990926862, -0.0149749918, 0.3214186728, 0.0415689573, 0.0988509506, 0.2786985934, 0.3070810735, -0.0256312788, 0.0241771862, 0.0213186704, 0.0649859682, 0.2614779174, -0.0090518696, 0.0891454816, -0.137454614, -0.1224960014, 0.0288257934, 0.1671686172, -0.0706...
https://github.com/huggingface/datasets/issues/3504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
Hi @ToddMorrill, people from the Pile team have mirrored their data in a new host server: https://mystic.the-eye.eu See: - #3627 It should work if you update your URL. We should also update the URL in our course material.
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce ...
38
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBME...
[ 0.0793420821, -0.1990926862, -0.0149749918, 0.3214186728, 0.0415689573, 0.0988509506, 0.2786985934, 0.3070810735, -0.0256312788, 0.0241771862, 0.0213186704, 0.0649859682, 0.2614779174, -0.0090518696, 0.0891454816, -0.137454614, -0.1224960014, 0.0288257934, 0.1671686172, -0.0706...
https://github.com/huggingface/datasets/issues/3504
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst
The old URL is still present in the HuggingFace course here: https://huggingface.co/course/chapter5/4?fw=pt I have created a PR for the Notebook here: https://github.com/huggingface/notebooks/pull/148 Not sure if the HTML is in a public repo. I wasn't able to find it.
## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Steps to reproduce ...
38
Unable to download PUBMED_title_abstracts_2019_baseline.jsonl.zst ## Describe the bug I am unable to download the PubMed dataset from the link provided in the [Hugging Face Course (Chapter 5 Section 4)](https://huggingface.co/course/chapter5/4?fw=pt). https://the-eye.eu/public/AI/pile_preliminary_components/PUBME...
[ 0.0793420821, -0.1990926862, -0.0149749918, 0.3214186728, 0.0415689573, 0.0988509506, 0.2786985934, 0.3070810735, -0.0256312788, 0.0241771862, 0.0213186704, 0.0649859682, 0.2614779174, -0.0090518696, 0.0891454816, -0.137454614, -0.1224960014, 0.0288257934, 0.1671686172, -0.0706...
https://github.com/huggingface/datasets/issues/3499
Adjusting chunk size for streaming datasets
Hi ! Data streaming uses `fsspec` to read the data files progressively. IIRC the block size for buffering is 5MiB by default. So every time you finish iterating over a block, it downloads the next one. You can still try to increase the `fsspec` block size for buffering if it can help. To do so you just need to increase...
**Is your feature request related to a problem? Please describe.** I want to use mc4 which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) I hit a performance bottleneck because of the ...
99
Adjusting chunk size for streaming datasets **Is your feature request related to a problem? Please describe.** I want to use mc4 which I cannot save locally, so I stream it. However, I want to process the entire dataset and filter some documents from it. With the current chunk size of around 1000 documents (right?) ...
[ -0.5548549294, -0.2060949802, -0.1399072856, 0.0709016994, 0.0956754312, 0.0037738408, -0.1515313387, 0.3243740797, -0.1375258565, 0.2203868628, 0.0862965137, -0.0971089303, -0.1958140284, 0.5901542902, -0.1302232891, -0.1306979507, -0.1648022532, 0.0590562895, 0.0277798381, 0....
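The buffering idea above can be sketched without any `datasets` internals; `iter_chunks` is a hypothetical helper showing how a larger chunk amortizes per-item overhead when post-processing a stream:

```python
from itertools import islice

def iter_chunks(iterable, chunk_size):
    """Yield successive lists of up to `chunk_size` items from any iterable.
    A generic buffering pattern: the same idea applies when filtering a
    streamed dataset, where a larger chunk reduces per-batch overhead."""
    it = iter(iterable)
    while True:
        chunk = list(islice(it, chunk_size))
        if not chunk:
            return
        yield chunk

# Example: filter documents chunk by chunk instead of one at a time.
docs = ({"text": f"doc {i}"} for i in range(10))
kept = []
for chunk in iter_chunks(docs, 4):
    kept.extend(d for d in chunk if int(d["text"].split()[1]) % 2 == 0)
```

Raising `chunk_size` trades memory for fewer, larger processing steps — the same trade-off as raising the `fsspec` block size mentioned above.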
https://github.com/huggingface/datasets/issues/3490
Does datasets support load text from HDFS?
Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other filesystems (cloud storages, hdfs...) at one point though :)
The raw text data is stored on HDFS because the dataset is too large to store on my development machine, so I wonder whether datasets supports reading data from HDFS?
27
Does datasets support load text from HDFS? The raw text data is stored on HDFS because the dataset is too large to store on my development machine, so I wonder whether datasets supports reading data from HDFS? Hi ! `datasets` currently supports reading local files or files over HTTP. We may add support for other files...
[ -0.3971250355, 0.0771138072, -0.218753159, 0.3979658782, 0.0818777084, -0.0516116247, 0.3947126865, 0.0203535222, 0.3752122521, -0.010376093, -0.409894079, -0.113229543, 0.0458144359, 0.4414334893, 0.1317270696, 0.1704085916, 0.0876928717, -0.0880468041, -0.078074418, -0.066489...
https://github.com/huggingface/datasets/issues/3488
URL query parameters are set as path in the compression hop for fsspec
I think the test passes because it simply ignores what's after `gzip://`. The returned urlpath is expected to look like `gzip://filename::url`, and the filename is currently considered to be what's after the final `/`, hence the result. We can decide to change this and simply have `gzip://::url`; that way we don't...
## Describe the bug There is an issue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: For ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=1Bt4Garpf0QLiwkJhHJzXaVa0I0H5Qhwz" urlpath = StreamingDownloadManager()....
61
URL query parameters are set as path in the compression hop for fsspec ## Describe the bug There is an issue with `StreamingDownloadManager._extract`. I don't know how the test `test_streaming_gg_drive_gzipped` passes: For ```python TEST_GG_DRIVE_GZIPPED_URL = "https://drive.google.com/uc?export=download&id=...
[ -0.1200753972, -0.2668937147, 0.0669212714, 0.1035846621, 0.1400488615, -0.0840494558, 0.0135708721, 0.1877432615, -0.1366844773, 0.3031463623, 0.1042386517, 0.077818729, 0.1425676942, 0.2099909484, 0.179748401, -0.2270773351, -0.1237718537, -0.102872692, 0.0296686087, -0.10076...
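The `gzip://filename::url` convention under discussion can be illustrated with a toy parser; `split_chained_urlpath` is a simplified sketch, not fsspec's actual URL chaining logic:

```python
def split_chained_urlpath(urlpath):
    """Split a chained urlpath like 'gzip://inner::https://host/x.gz' into
    (protocol, inner_filename, base_url). Toy illustration only: real
    fsspec chaining supports multiple hops and many more protocols."""
    hop, base = urlpath.split("::", 1)       # compression hop vs. base URL
    protocol, inner = hop.split("://", 1)    # e.g. 'gzip' and the filename
    return protocol, inner, base

proto, inner, base = split_chained_urlpath(
    "gzip://data.jsonl::https://example.com/data.jsonl.gz"
)
```

With the alternative `gzip://::url` form discussed above, `inner` would simply be empty, sidestepping the query-parameter problem.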
https://github.com/huggingface/datasets/issues/3485
skip columns which cannot set to specific format when set_format
You can add columns that you wish to set into `torch` format using `dataset.set_format("torch", ['id', 'abc'])` so that input batch of the transform only contains those columns
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns. **Describe the solution you'd like** skip columns which cannot set to specific forma...
27
skip columns which cannot set to specific format when set_format **Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every columns in datasets can convert to `torch`, however, sometimes I want to keep some string columns. **Describe the sol...
[ -0.3243052065, -0.1946023852, -0.0388779715, -0.1543871015, 0.4741528332, 0.2856017053, 0.2840107083, 0.4905923307, -0.3940932453, 0.0933625996, -0.0263921414, 0.2987178564, -0.0623508319, 0.3534404337, -0.2112528384, -0.2602641881, -0.0206499528, 0.2999278009, -0.2954287529, 0...
https://github.com/huggingface/datasets/issues/3485
skip columns which cannot set to specific format when set_format
Sorry, I missed the `output_all_columns` arg and thought that after `dataset.set_format("torch", columns=columns)` I could only get the specific columns I assigned.
**Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every column in the dataset can be converted to `torch`; however, sometimes I want to keep some string columns. **Describe the solution you'd like** skip columns which cannot set to specific forma...
18
skip columns which cannot set to specific format when set_format **Is your feature request related to a problem? Please describe.** When using `dataset.set_format("torch")`, I must make sure every columns in datasets can convert to `torch`, however, sometimes I want to keep some string columns. **Describe the sol...
[ -0.2914406061, -0.2245162427, -0.0692841187, -0.1242927238, 0.4770968258, 0.3206258416, 0.1285251379, 0.5214928389, -0.3331496119, 0.2311048955, -0.0285504814, 0.3594092429, -0.1113210693, 0.3187877536, -0.200254783, -0.0854486376, -0.0791164264, 0.2076987624, -0.1604340672, 0....
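The behavior discussed above (format only some columns, keep string columns via `output_all_columns=True`) can be mimicked with plain dicts. `format_row` is a hypothetical illustration, with tuples standing in for tensors:

```python
def format_row(row, tensor_columns, output_all_columns=True):
    """Mimic of column-selective formatting: convert only `tensor_columns`
    (tuples stand in for torch tensors here) and, when `output_all_columns`
    is True, pass the remaining columns through untouched. Illustrative
    only -- not the `datasets` implementation."""
    out = {k: tuple(v) for k, v in row.items() if k in tensor_columns}
    if output_all_columns:
        out.update({k: v for k, v in row.items() if k not in tensor_columns})
    return out

row = {"id": [1, 2, 3], "label": ["a", "b", "c"]}
formatted = format_row(row, tensor_columns={"id"})
```

This mirrors `dataset.set_format("torch", columns=["id"], output_all_columns=True)`: selected columns come back in the tensor format while string columns stay as plain Python values.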
https://github.com/huggingface/datasets/issues/3484
make shape verification to use ArrayXD instead of nested lists for map
Hi! Yes, this makes sense for numeric values, but first I have to finish https://github.com/huggingface/datasets/pull/3336 because currently ArrayXD only allows the first dimension to be dynamic.
As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=1&uniqifier=1), IMO shape verification should use ArrayXD instead of nest...
26
make shape verification to use ArrayXD instead of nested lists for map As described in https://github.com/huggingface/datasets/issues/2005#issuecomment-793716753 and mentioned by @mariosasko in [image feature example](https://colab.research.google.com/drive/1mIrTnqTVkWLJWoBzT1ABSe-LFelIep1c#scrollTo=ow3XHDvf2I0B&line=...
[ -0.3617073298, -0.4997362196, -0.1934599876, 0.2382970601, 0.3340795338, -0.0900439098, 0.3655255735, 0.2702302039, 0.4711226225, 0.2727881074, -0.013780782, 0.1076696068, -0.056096945, 0.2658049166, 0.0668741688, -0.0237629842, 0.1465830803, 0.2901389301, -0.3523030281, -0.003...
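The shape check being discussed can be sketched for plain nested lists; `nested_shape` is a toy stand-in for the fixed-shape guarantee an `ArrayXD` feature would provide (real `ArrayXD` also allows a dynamic first dimension):

```python
def nested_shape(x):
    """Return the shape of a regularly nested list, or raise on ragged
    nesting -- a toy version of the verification that an ArrayXD-backed
    column would enforce by construction."""
    if not isinstance(x, list):
        return ()
    shapes = {nested_shape(v) for v in x}
    if len(shapes) > 1:
        raise ValueError(f"ragged nesting: {shapes}")
    return (len(x),) + (next(iter(shapes)) if shapes else ())

shape = nested_shape([[1, 2], [3, 4], [5, 6]])
```

A ragged input such as `[[1], [2, 3]]` raises immediately, which is the early failure the issue asks `map` to provide.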
https://github.com/huggingface/datasets/issues/3480
the compression format requested when saving a dataset in json format is not respected
Thanks for reporting @SaulLu. At first sight I think the problem is caused because `pandas` only takes into account the `compression` parameter if called with a non-null file path or buffer. And in our implementation, we call pandas `to_json` with `None` `path_or_buf`. We should fix this: - either handling direc...
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression t...
67
the compression format requested when saving a dataset in json format is not respected ## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression...
[ 0.0959543958, 0.084326975, 0.0052433158, 0.1249306872, 0.2311229408, 0.2284452617, 0.1374153793, 0.735542655, -0.1682051718, 0.0322644599, 0.0988725051, 0.6285889149, 0.1668547392, -0.1779368669, -0.0252587833, -0.1777684838, 0.2336666286, 0.278378576, 0.1768105775, -0.04890567...
https://github.com/huggingface/datasets/issues/3480
the compression format requested when saving a dataset in json format is not respected
I was wondering if we could handle the `compression` parameter ourselves. Compression types would be similar to what `pandas` offers. Initially, we can try this with 2-3 compression types and see how well it works. Let me know if that sounds good; I can raise a PR for this next week.
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression t...
52
the compression format requested when saving a dataset in json format is not respected ## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression...
[ 0.0959543958, 0.084326975, 0.0052433158, 0.1249306872, 0.2311229408, 0.2284452617, 0.1374153793, 0.735542655, -0.1682051718, 0.0322644599, 0.0988725051, 0.6285889149, 0.1668547392, -0.1779368669, -0.0252587833, -0.1777684838, 0.2336666286, 0.278378576, 0.1768105775, -0.04890567...
https://github.com/huggingface/datasets/issues/3480
the compression format requested when saving a dataset in json format is not respected
Hi ! Thanks for your help @bhavitvyamalik :) Maybe let's start with `gzip` ? I think it's the most common use case, then if we're fine with it we can add other compression methods
## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression="gzip"`, the saved file is not compressed. Would you also have expected compression t...
34
the compression format requested when saving a dataset in json format is not respected ## Describe the bug In the documentation of the `to_json` method, it is stated in the parameters that > **to_json_kwargs – Parameters to pass to pandas’s pandas.DataFrame.to_json. however when we pass for example `compression...
[ 0.0959543958, 0.084326975, 0.0052433158, 0.1249306872, 0.2311229408, 0.2284452617, 0.1374153793, 0.735542655, -0.1682051718, 0.0322644599, 0.0988725051, 0.6285889149, 0.1668547392, -0.1779368669, -0.0252587833, -0.1777684838, 0.2336666286, 0.278378576, 0.1768105775, -0.04890567...
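Until the `compression` kwarg is honored end-to-end, the gzip-first approach discussed above can be done by hand. `to_json_gz` is a minimal stdlib sketch (gzip only, JSON Lines), not the eventual `datasets` fix:

```python
import gzip
import json
import os
import tempfile

def to_json_gz(records, path):
    """Write records as gzip-compressed JSON Lines, handling compression
    ourselves instead of relying on pandas' `compression` parameter
    (a sketch of the workaround discussed, not the real fix)."""
    with gzip.open(path, "wt", encoding="utf-8") as f:
        for rec in records:
            f.write(json.dumps(rec) + "\n")

records = [{"feature": 1, "label": "a"}, {"feature": 2, "label": "b"}]
path = os.path.join(tempfile.mkdtemp(), "data.jsonl.gz")
to_json_gz(records, path)

# Round-trip to confirm the file really is compressed JSON Lines.
with gzip.open(path, "rt", encoding="utf-8") as f:
    loaded = [json.loads(line) for line in f]
```

Checking the first two bytes of the file for the gzip magic number (`1f 8b`) is a quick way to confirm compression actually happened, which is exactly what the original bug report found missing.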
https://github.com/huggingface/datasets/issues/3475
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish
Hi @puzzler10, thanks for reporting. Please note this dataset is not hosted on Hugging Face Hub. See: https://github.com/huggingface/datasets/blob/c8f914473b041833fd47178fa4373cdcb56ac522/datasets/rotten_tomatoes/rotten_tomatoes.py#L42 If there are issues with the source data of a dataset, you should contact th...
## Describe the bug See title. I don't think this is intentional and they probably should be removed. If they stay the dataset description should be at least updated to make it clear to the user. ## Steps to reproduce the bug Go to the [dataset viewer](https://huggingface.co/datasets/viewer/?dataset=rotten_tomato...
95
The rotten_tomatoes dataset of movie reviews contains some reviews in Spanish ## Describe the bug See title. I don't think this is intentional and they probably should be removed. If they stay the dataset description should be at least updated to make it clear to the user. ## Steps to reproduce the bug Go to t...
[ 0.0775980204, 0.3383808732, -0.1574923396, 0.2872702479, 0.3433725834, 0.2087222636, 0.0929915011, 0.2478998899, -0.117431201, 0.0967246294, -0.4815720916, 0.0033949746, 0.0114612821, -0.1756013334, 0.0343776084, 0.0748326182, 0.2239582539, -0.0237260349, -0.173426643, -0.25152...
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
As discussed, I remember I set `decoded=False` here to avoid decoding just by iterating over examples of the dataset. We wanted to decode only if the "audio" field (for the Audio feature) was accessed.
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
32
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.1267590225, -0.3187014461, -0.0562812574, 0.4169413149, 0.2039797753, -0.0484367609, 0.1617482156, 0.0489288829, -0.0526031516, 0.2663674951, 0.0970081314, 0.5622793436, -0.0772200599, -0.1984000951, -0.0889359042, -0.1298073977, 0.0134134991, 0.3308178186, 0.031991154, -0.0...
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
> I set decoded=False here to avoid decoding just by iterating over examples of dataset. We wanted to decode only if the "audio" field (for Audio feature) was accessed https://github.com/huggingface/datasets/pull/3430 will add more control to decoding, so I think it's OK to enable decoding in `__iter__` for now. Aft...
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
61
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.1568084657, -0.3336033225, -0.0613897145, 0.4453437328, 0.2357357889, -0.0553116128, 0.1046500728, 0.0181350149, -0.0682632402, 0.2494532913, 0.0704558417, 0.5809364915, -0.1020295396, -0.1611496806, -0.0695886537, -0.1490104347, 0.0153620439, 0.3161056936, -0.0318765901, -0...
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
@mariosasko I wonder why there is no issue with the `Audio` feature when decoding is disabled in `__iter__`, whereas there is with the `Image` feature. Enabling decoding in `__iter__` will make the Audio regression tests fail: https://github.com/huggingface/datasets/runs/4608657230?check_suite_focus=true ``` =======================...
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
52
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.1508928686, -0.2987540662, -0.1016991585, 0.3872718811, 0.1785421968, -0.0492620952, 0.2464302778, -0.004940175, -0.0694659576, 0.2798422873, 0.1267050803, 0.5622631907, -0.068370223, -0.1775305271, -0.0727771223, -0.1908052713, 0.0280031171, 0.3463977873, -0.0484352782, -0....
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
Please also note that the regression tests were implemented in accordance with the specifications: - when doing a `map` (which calls `__iter__`) of a function that doesn't access the audio field, the decoding should be disabled; this is why the decoding is disabled in `__iter__` (and only enabled in `__getitem__`).
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
50
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.1789715141, -0.2866581976, -0.0869732425, 0.371363461, 0.1995099634, -0.053819146, 0.1906980127, 0.0316141956, -0.0140854567, 0.2848644853, 0.1102567166, 0.6305354834, -0.0778806582, -0.2217885107, -0.0395907164, -0.1451563835, -0.0334660597, 0.3611385822, -0.0952342898, -0....
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
> I wonder why there is no issue in Audio feature with decoding disabled in __iter__, whereas there is in Image feature. @albertvillanova Not sure if I understand this part. Currently, both the Image and the Audio feature don't decode data in `__iter__`, so their behavior is aligned there.
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
49
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.0908605009, -0.3369823098, -0.0806186199, 0.4213090837, 0.1601836979, -0.0303244703, 0.2214541733, -0.0054399865, -0.0804104805, 0.285702318, 0.0867209435, 0.5229113102, -0.0322172977, -0.1369960457, -0.1372658908, -0.2022331357, 0.0346545689, 0.3320819438, 0.0491904169, -0....
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
Therefore, this is not an issue for either the Audio or the Image feature. Could you please elaborate more on the expected use case? @lhoestq @NielsRogge The expected use cases (in accordance with the specs: see #2324): - decoding should be enabled when accessing a specific item (`__getitem__`) - decoding should be...
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
88
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.130967021, -0.3112417161, -0.0679310188, 0.3901095688, 0.1656422317, -0.0171957146, 0.1670644283, 0.0072897859, -0.0455675423, 0.2232476175, 0.1539147198, 0.5721437335, -0.0708791912, -0.1515946537, -0.0927556232, -0.1920374781, 0.0026815264, 0.3228372931, -0.0092045367, -0....
https://github.com/huggingface/datasets/issues/3473
Iterating over a vision dataset doesn't decode the images
For me it's not an issue, actually. I just (mistakenly) tried to iterate over a PyTorch Dataset instead of a PyTorch DataLoader, i.e. I did this: `batch = next(iter(train_ds)) ` whereas I actually wanted to do `batch = next(iter(train_dataloader))` and then it turned out that in the first case, the imag...
## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_dataset("mnist", split="train") first_image = mnist[0]["image"...
66
Iterating over a vision dataset doesn't decode the images ## Describe the bug If I load `mnist` and I iterate over the dataset, the images are not decoded, and the dictionary with the bytes is returned. ## Steps to reproduce the bug ```python from datasets import load_dataset import PIL mnist = load_datas...
[ -0.087873362, -0.309632659, -0.0400017574, 0.3918700814, 0.1334655285, -0.0821136907, 0.1896429658, 0.0339636281, 0.0456691384, 0.2231494039, 0.1672267914, 0.5336773992, -0.0977951139, -0.1827502549, -0.042795606, -0.1867164522, -0.0286199581, 0.2891237438, -0.031148402, -0.077...
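The decoding policy the thread converges on — decode on `__getitem__`, yield raw values on plain iteration — can be sketched as a toy class. `LazyDecodingDataset` illustrates the pattern only; it is not the real `datasets` Image/Audio code:

```python
class LazyDecodingDataset:
    """Sketch of the decoding policy described above: decode a field on
    item access, but yield raw rows during plain iteration so a `map`
    over other fields stays cheap. Illustration only."""
    def __init__(self, rows, decode_field, decode_fn):
        self.rows = rows
        self.decode_field = decode_field
        self.decode_fn = decode_fn

    def __getitem__(self, i):
        row = dict(self.rows[i])  # copy so the stored row stays raw
        row[self.decode_field] = self.decode_fn(row[self.decode_field])
        return row

    def __iter__(self):
        yield from self.rows  # raw, undecoded rows

raw = [{"image": b"\x00\x01", "label": 7}]
ds = LazyDecodingDataset(raw, "image", decode_fn=lambda b: list(b))
item = ds[0]            # decoded on access
first = next(iter(ds))  # raw bytes, as when iterating the Dataset directly
```

This also explains the observation at the end of the thread: iterating the Dataset itself hits `__iter__` (raw), while a DataLoader indexes via `__getitem__` (decoded).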