---
license: cc-by-nc-nd-4.0
task_categories:
- robotics
tags:
- LeRobot
configs:
- config_name: default
  data_files: data/*/*.parquet
language:
- en
---
<span style="color: red; font-weight: bold; font-size: 24px;">⚠️ TODO: waiting for information; fill in the links below</span>
<div align="center">
  <a href="">
    <img src="https://img.shields.io/badge/GitHub-grey?logo=GitHub" alt="GitHub Badge">
  </a>
  <a href="">
    <img src="https://img.shields.io/badge/Project%20Page-blue?style=plastic" alt="Project Page Badge">
  </a>
  <a href="">
    <img src="https://img.shields.io/badge/Research_Blog-black?style=flat" alt="Research Blog Badge">
  </a>
  <a href="">
    <img src="https://img.shields.io/badge/Dataset-Overview-brightgreen?logo=googleforms" alt="Dataset Overview Badge">
  </a>
</div>

# Contents
- [About the Dataset](#about-the-dataset)
- [Dataset Structure](#dataset-structure)
  - [Folder hierarchy](#folder-hierarchy)
  - [Details](#details)
- [Download the Dataset](#download-the-dataset)
- [Load the Dataset](#load-the-dataset)
- [License and Citation](#license-and-citation)

# [About the Dataset](#contents)
- This dataset was created using [LeRobot](https://github.com/huggingface/lerobot).
- **~200 hours of real-world scenarios** across **1** main task and **3** sub-tasks.
  - A clothing-organization task that involves identifying the type of clothing and deciding the next action based on its category.
- **Sub-tasks**
  - **Folding**
    - Randomly pick a piece of clothing from the basket and place it on the workbench.
    - If it is a short T-shirt, fold it.
  - **Hanging Preparation**
    - Randomly pick a piece of clothing from the basket and place it on the workbench.
    - If it is a dress shirt, locate the collar and drag the clothing to the right side.
  - **Hanging**
    - Hang the dress shirt properly.

# [Dataset Structure](#contents)

## [Folder hierarchy](#contents)
```text
dataset_root/
├── data/
│   ├── chunk-000/
│   │   ├── episode_000000.parquet
│   │   ├── episode_000001.parquet
│   │   └── ...
│   └── ...
├── videos/
│   ├── chunk-000/
│   │   ├── observation.images.hand_left/
│   │   │   ├── episode_000000.mp4
│   │   │   ├── episode_000001.mp4
│   │   │   └── ...
│   │   ├── observation.images.hand_right/
│   │   │   ├── episode_000000.mp4
│   │   │   ├── episode_000001.mp4
│   │   │   └── ...
│   │   ├── observation.images.top_head/
│   │   │   ├── episode_000000.mp4
│   │   │   ├── episode_000001.mp4
│   │   │   └── ...
│   │   └── ...
│   └── ...
├── meta/
│   ├── info.json
│   ├── episodes.jsonl
│   ├── tasks.jsonl
│   └── episodes_stats.jsonl
└── README.md
```

<a id='Details'></a>
## [Details](#contents)
### info.json
The basic structure of [meta/info.json](meta/info.json):
```json
{
    "codebase_version": "v2.1",
    "robot_type": "agilex",
    "total_episodes": ...,
    "total_frames": ...,
    "total_tasks": ...,
    "total_videos": ...,
    "total_chunks": ...,
    "chunks_size": ...,
    "fps": ...,
    "splits": {
        "train": ...
    },
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
    "features": {
        "observation.images.top_head": {
            "dtype": "video",
            "shape": [480, 640, 3],
            "names": ["height", "width", "channel"],
            "info": {
                "video.height": 480,
                "video.width": 640,
                "video.codec": "av1",
                "video.pix_fmt": "yuv420p",
                "video.is_depth_map": false,
                "video.fps": 30,
                "video.channels": 3,
                "has_audio": false
            }
        },
        "observation.images.hand_left": {
            ...
        },
        "observation.images.hand_right": {
            ...
        },
        "observation.state": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "action": {
            "dtype": "float32",
            "shape": [14],
            "names": null
        },
        "timestamp": {
            "dtype": "float32",
            "shape": [1],
            "names": null
        },
        "frame_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "episode_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        },
        "task_index": {
            "dtype": "int64",
            "shape": [1],
            "names": null
        }
    }
}
```
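The `data_path` and `video_path` templates above can be expanded into concrete file paths. A minimal sketch, where the inline dict stands in for the real `meta/info.json` (in practice read it with `json.load`) and the `chunks_size` value of 1000 is only illustrative:

```python
# Illustrative stand-in for meta/info.json; in practice:
# info = json.load(open("meta/info.json"))
info = {
    "chunks_size": 1000,  # assumed value, check your info.json
    "data_path": "data/chunk-{episode_chunk:03d}/episode_{episode_index:06d}.parquet",
    "video_path": "videos/chunk-{episode_chunk:03d}/{video_key}/episode_{episode_index:06d}.mp4",
}

episode_index = 42
chunk = episode_index // info["chunks_size"]  # episodes are grouped into chunks

data_file = info["data_path"].format(episode_chunk=chunk, episode_index=episode_index)
video_file = info["video_path"].format(
    episode_chunk=chunk, episode_index=episode_index,
    video_key="observation.images.top_head",
)
print(data_file)   # data/chunk-000/episode_000042.parquet
```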

### [Parquet file format](#contents)
| Field Name | Shape | Meaning |
|------------|-------|---------|
| observation.state | [N, 14] | Left arm joint angles `[:, :6]`, left gripper opening `[:, 6]`; right arm joint angles `[:, 7:13]`, right gripper opening `[:, 13]` |
| action | [N, 14] | Same layout as `observation.state`: left arm joints `[:, :6]`, left gripper `[:, 6]`; right arm joints `[:, 7:13]`, right gripper `[:, 13]` |
| timestamp | [N, 1] | Time elapsed since the start of the episode (in seconds) |
| frame_index | [N, 1] | Index of this frame within the current episode (0-indexed) |
| episode_index | [N, 1] | Index of the episode this frame belongs to |
| index | [N, 1] | Globally unique index across all frames in the dataset |
| task_index | [N, 1] | Index identifying the task type being performed |
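The slicing convention above can be sketched as follows. The synthetic array stands in for one episode's `observation.state` column; in practice you would load it from a parquet file, e.g. with `pd.read_parquet("data/chunk-000/episode_000000.parquet")`:

```python
import numpy as np

# Synthetic stand-in for one episode's observation.state column (N = 5 frames).
# In practice: state = np.stack(pd.read_parquet(...)["observation.state"].to_numpy())
state = np.zeros((5, 14), dtype=np.float32)

left_joints = state[:, :6]      # left arm joint angles
left_gripper = state[:, 6]      # left gripper opening
right_joints = state[:, 7:13]   # right arm joint angles
right_gripper = state[:, 13]    # right gripper opening

print(left_joints.shape, right_joints.shape)  # (5, 6) (5, 6)
```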

### [tasks.jsonl](meta/tasks.jsonl)
`positive`/`negative`: labels indicating the advantage of each frame's action, where "positive" means the action benefits future outcomes and "negative" means otherwise.
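Each `meta/*.jsonl` file holds one JSON record per line. A minimal reading sketch; the inline sample records are purely illustrative, not the dataset's actual schema (in practice, iterate over `open("meta/tasks.jsonl")`):

```python
import json

# Illustrative sample; real records come from meta/tasks.jsonl, one per line.
sample = "\n".join([
    '{"task_index": 0, "task": "Folding"}',
    '{"task_index": 1, "task": "Hanging"}',
])
tasks = [json.loads(line) for line in sample.splitlines()]
print(tasks[0]["task"])  # Folding
```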
# [Download the Dataset](#contents)
### Python Script

```python
from huggingface_hub import hf_hub_download, snapshot_download
from datasets import load_dataset

# Download a single file
hf_hub_download(
    repo_id="OpenDriveLab-org/kai0",
    filename="episodes.jsonl",
    subfolder="meta",
    repo_type="dataset",
    local_dir="/where/you/want/to/save"
)

# Download a specific folder
snapshot_download(
    repo_id="OpenDriveLab-org/kai0",
    local_dir="/where/you/want/to/save",
    repo_type="dataset",
    allow_patterns=["data/*"]
)

# Load the entire dataset
dataset = load_dataset("OpenDriveLab-org/kai0")
```

### Terminal (CLI)

```bash
# Download a single file
hf download OpenDriveLab-org/kai0 \
    --include "meta/info.json" \
    --repo-type dataset \
    --local-dir "/where/you/want/to/save"

# Download a specific folder
hf download OpenDriveLab-org/kai0 \
    --repo-type dataset \
    --include "meta/*" \
    --local-dir "/where/you/want/to/save"

# Download the entire dataset
hf download OpenDriveLab-org/kai0 \
    --repo-type dataset \
    --local-dir "/where/you/want/to/save"
```

# [Load the Dataset](#contents)

## For LeRobot version < 0.4.0

Choose the appropriate import based on your version:

| Version | Import Path |
|---------|-------------|
| `<= 0.1.0` | `from lerobot.common.datasets.lerobot_dataset import LeRobotDataset` |
| `> 0.1.0` and `< 0.4.0` | `from lerobot.datasets.lerobot_dataset import LeRobotDataset` |

```python
# For version <= 0.1.0
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# For version > 0.1.0 and < 0.4.0
from lerobot.datasets.lerobot_dataset import LeRobotDataset

# Load the dataset
dataset = LeRobotDataset(repo_id='where/the/dataset/you/stored')
```
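If a script must support both layouts, the import path can be chosen at runtime by comparing versions. A stdlib-only sketch of that comparison, where the `installed` string is a hypothetical stand-in for `lerobot.__version__` (real version strings may carry suffixes this naive parser does not handle):

```python
def parse_version(v: str) -> tuple:
    # Naive parser: handles plain "X.Y.Z" strings only.
    return tuple(int(part) for part in v.split("."))

installed = "0.3.1"  # hypothetical; in practice use lerobot.__version__

if parse_version(installed) <= parse_version("0.1.0"):
    module = "lerobot.common.datasets.lerobot_dataset"
elif parse_version(installed) < parse_version("0.4.0"):
    module = "lerobot.datasets.lerobot_dataset"
else:
    module = None  # v3.0 format: migrate the dataset first

print(module)  # lerobot.datasets.lerobot_dataset
```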

## For LeRobot version >= 0.4.0

You need to migrate the dataset from v2.1 to v3.0 first. See the official documentation: [Migrate the dataset from v2.1 to v3.0](https://huggingface.co/docs/lerobot/lerobot-dataset-v3)

```bash
python -m lerobot.datasets.v30.convert_dataset_v21_to_v30 --repo-id=<HF_USER/DATASET_ID>
```
<span style="color: red; font-weight: bold; font-size: 24px;">⚠️ TODO: waiting for information to fill in below</span>
# License and Citation
All the data and code within this repo are under [CC BY-NC-ND 4.0](https://creativecommons.org/licenses/by-nc-nd/4.0/). Please consider citing our project if it helps your research.

```BibTeX
@misc{,
  title={},
  author={},
  howpublished={\url{}},
  year={}
}
```