---
configs:
- config_name: torchbench
  data_files:
  - split: benchmark
    path: "backend_bench_problems.parquet"
- config_name: ops_traces_models
  data_files:
  - split: operator_input_models
    path: "operator_input_models_mapping.parquet"
---

# TorchBench

The TorchBench suite of [BackendBench](https://github.com/meta-pytorch/BackendBench) is designed to mimic real-world use cases. It provides operators and inputs derived from 155 model traces found in [TIMM](https://huggingface.co/timm) (67), [Hugging Face Transformers](https://huggingface.co/docs/transformers/en/index) (45), and [TorchBench](https://github.com/pytorch/benchmark) (43). (These are also the models PyTorch developers use to [validate performance](https://hud.pytorch.org/benchmark/compilers).) You can view the origin of these traces by switching the subset in the dataset viewer to `ops_traces_models`, or to `torchbench` for the full dataset.

When running BackendBench, much of the extra information about what you are testing is abstracted away, so you can simply run `uv run python --suite torchbench ...`. Here, however, we provide the test suite as a dataset that can be explored directly. It includes details about why certain operations and arguments were included or excluded, reflecting the careful consideration behind curating the set.

You can download the dataset in either format:

- `backend_bench_problems.parquet` (default format on Hugging Face)
- `backend_bench_problems.json` (more human-readable)

### Fields

- **uuid** – Unique identifier for the `(op_name, args)` pair.
- **op_name** – Full name of the operator being tested.
- **args** – Serialized form of the inputs from the trace. [See details below](#serialized-arguments-in-backendbench).
- **runnable** – Whether the operator is runnable in BackendBench (some are not yet supported).
- **included_in_benchmark** – Whether this `(op_name, args)` pair is tested in the TorchBench suite.
- **why_excluded** – If not included, a list of reasons for exclusion (e.g., "BackendBench does not support correctness testing for random ops yet," "BackendBench does not support correctness testing for tensor creation and manipulation ops yet").
- **is_synthetic** – Marks synthetically generated inputs (e.g., very large tensors). These are currently excluded from the benchmark.
- **runtime_ms** – Execution time (ms) on our hardware (a single GPU from a machine with 8× H100s and an AMD EPYC 9654 96-core processor).
- **relative_runtime_to_kernel_launch** – `runtime_ms` divided by the runtime of a dummy CUDA op (`torch.empty(0, device='cuda')`), which represents kernel-launch overhead.
- **is_overhead_dominated_op** – Flags operator/argument pairs running close to CUDA launch overhead as "performance canaries." [Histogram analysis](https://github.com/meta-pytorch/BackendBench/issues/108) showed that a 1.3× threshold above CUDA overhead is a useful cutoff. These tests can be run for sanity-checking kernels with `uv run python --suite torchbench --check-overhead-dominated-ops ...`.
- **count** – Number of times this operator/input pair appeared in model traces.
- **in_models** – List of models (from real-world traces) in which this operator/input pair appears.
- **in_models_count** – Number of distinct models in which this operator/input pair occurs.
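The relationship among the three runtime fields can be sketched in a few lines. Note the launch-overhead constant below is an illustrative placeholder, not our measured value, and the helper names mirror the field names only for clarity:

```python
# Illustrative sketch of how the overhead-related fields relate to each other.
KERNEL_LAUNCH_MS = 0.005   # assumed runtime of a dummy op like torch.empty(0, device='cuda')
OVERHEAD_THRESHOLD = 1.3   # cutoff suggested by the histogram analysis

def relative_runtime_to_kernel_launch(runtime_ms: float,
                                      launch_ms: float = KERNEL_LAUNCH_MS) -> float:
    """Express runtime_ms as a multiple of kernel-launch overhead."""
    return runtime_ms / launch_ms

def is_overhead_dominated_op(runtime_ms: float,
                             launch_ms: float = KERNEL_LAUNCH_MS) -> bool:
    """Flag ops whose runtime is within 1.3x of launch overhead."""
    return relative_runtime_to_kernel_launch(runtime_ms, launch_ms) <= OVERHEAD_THRESHOLD
```

Under these assumed numbers, an op measured at 0.006 ms would be flagged as a performance canary (1.2× launch overhead), while one at 0.05 ms (10×) would not.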

# Serialized Arguments in BackendBench

Arguments are generally serialized by storing tensor shapes and dtypes while preserving everything else as-is, which keeps the format fairly intuitive. For example:

`((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})`

Below we describe the format in detail for rigor.
## Format

BackendBench stores function arguments as strings containing all parameters needed to reproduce PyTorch operations:

```python
((arg1, arg2, ...), {'key1': val1, 'key2': val2})
```

```python
(([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
```

## Tensor Representation

Tensors use the format `T([shape], dtype)` or `T([shape], dtype, [stride])`:

```python
T([10, 20], f32)       # 10×20 float32 tensor
T([1, 512, 768], f16)  # 1×512×768 float16 tensor
T([64], i32)           # 64-element int32 vector
```

**Data types**: `f16`/`f32`/`f64` (float), `bf16` (bfloat16), `i32`/`i64` (int), `b8` (bool)
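As a small worked example, the shape and dtype in a `T(...)` spec together determine the tensor's memory footprint. The byte sizes below are the standard PyTorch element sizes; the helper itself is a sketch for exploring the dataset, not part of BackendBench:

```python
import math

# Bytes per element for each short dtype code (standard PyTorch element sizes).
DTYPE_BYTES = {"f16": 2, "f32": 4, "f64": 8, "bf16": 2, "i32": 4, "i64": 8, "b8": 1}

def spec_nbytes(shape, dtype):
    """Memory footprint implied by a T([shape], dtype) spec, assuming a contiguous tensor."""
    return math.prod(shape) * DTYPE_BYTES[dtype]

spec_nbytes([10, 20], "f32")       # 800 bytes
spec_nbytes([1, 512, 768], "f16")  # 786432 bytes
```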

## Examples

**Single tensor argument:**

```python
((T([48, 24, 28, 28], f16),), {})
```

A 48×24×28×28 float16 tensor, with no keyword arguments.

**Multiple tensors:**

```python
((T([8, 8, 8, 8, 8], f16), T([8, 8, 8, 8, 8], f16)), {})
```

Two 5D tensors of identical shape.

**Mixed arguments:**

```python
((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16, 'device': 'cuda'})
```

A tensor and a list as positional arguments, plus keyword arguments.

**Complex nested:**

```python
(([T([5, 5], f32), T([3, 3], i64), 42],), {'weight': T([3, 3], f32)})
```

A list containing tensors and a number, plus a tensor keyword argument.

## Argument Types

- **Tensors**: `T([shape], dtype)`
- **Lists**: `[item1, item2, ...]` (can contain tensors)
- **Primitives**: `42`, `'hello'`, `True`, `None`
- **PyTorch objects**: `torch.float16`, `torch.strided`
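Because the serialized strings read as Python expressions, one way to explore them is to evaluate them against lightweight stand-ins for `T`, the dtype codes, and `torch`. The stand-ins below are illustrative assumptions; BackendBench's own deserializer resolves these names to real `torch` objects:

```python
from types import SimpleNamespace

# Stand-in for a tensor spec: records shape and dtype instead of allocating memory.
def T(shape, dtype, stride=None):
    return {"shape": tuple(shape), "dtype": dtype, "stride": stride}

# Stand-ins for the dtype codes and the torch objects that may appear in kwargs.
_env = {code: code for code in ("f16", "f32", "f64", "bf16", "i32", "i64", "b8")}
_env["T"] = T
_env["torch"] = SimpleNamespace(float16="torch.float16", strided="torch.strided")

def parse_serialized_args(s):
    """Evaluate a serialized-argument string into an (args, kwargs) pair."""
    return eval(s, {"__builtins__": {}}, _env)

args, kwargs = parse_serialized_args(
    "((T([128, 256], f16), [1024, 249, 249]), {'dtype': torch.float16, 'device': 'cuda'})"
)
```

Evaluating with empty `__builtins__` keeps the expression restricted to the tensor specs, lists, and primitives described above.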

# Trace Files in BackendBench

This repository also includes `.txt` trace files, which were the original output format of the model traces and are used to compose the dataset. Their structure is as follows:

## Format

Trace files capture PyTorch operations and their arguments from real model executions:

```
Operator: operation_name
cnt: count, serialized_arguments
cnt: count, serialized_arguments
...
```

## Structure

**Operator line**: Specifies the PyTorch operation

```
Operator: aten.add.Tensor
Operator: aten.relu.default
Operator: aten.linear.default
```

**Count lines**: Show how often each argument combination was used

```
cnt: 42, ((T([10, 20], f16), T([10, 20], f16)), {})
cnt: 0, ((T([5, 5], f32), T([5, 5], f32)), {})
```

## Reading Count Lines

- **`cnt: 42`** – The argument combination appeared 42 times in traced models
- **`cnt: 0`** – Synthetic/generated arguments (not from real models)
- **`cnt: > 0`** – Real usage frequency from model traces

**Arguments**: Same format as the serialized arguments above – `((args), {kwargs})`

## Example

```
Operator: aten.add.Tensor
cnt: 156, ((T([1, 512, 768], f16), T([1, 512, 768], f16)), {})
cnt: 89, ((T([32, 128], f32), T([32, 128], f32)), {})
cnt: 0, ((T([10, 10], f16), T([10, 10], f16)), {})

Operator: aten.relu.default
cnt: 234, ((T([64, 256], f16),), {})
```
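Since the trace format is line-oriented, it is straightforward to parse. Here is a minimal sketch (the function name is ours, not BackendBench's) applied to the example above:

```python
def parse_trace(text):
    """Parse a trace file into {operator: [(count, serialized_args), ...]}."""
    ops = {}
    current = None
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("Operator:"):
            current = line.split(":", 1)[1].strip()
            ops[current] = []
        elif line.startswith("cnt:"):
            count, args = line[len("cnt:"):].split(",", 1)
            ops[current].append((int(count), args.strip()))
    return ops

trace = """\
Operator: aten.add.Tensor
cnt: 156, ((T([1, 512, 768], f16), T([1, 512, 768], f16)), {})
cnt: 89, ((T([32, 128], f32), T([32, 128], f32)), {})
cnt: 0, ((T([10, 10], f16), T([10, 10], f16)), {})

Operator: aten.relu.default
cnt: 234, ((T([64, 256], f16),), {})
"""
ops = parse_trace(trace)
```

The serialized-argument portion of each count line is kept as a string here; it can be fed to a deserializer like the one sketched in the previous section.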

This shows:

- `aten.add.Tensor` called 156 times with 1×512×768 tensors
- The same operation called 89 times with 32×128 tensors
- One synthetic test case (`cnt: 0`)
- `aten.relu.default` called 234 times with a 64×256 tensor

**Note: Traces may be deprecated in the future, but they are described here because they are currently included in the dataset/codebase.**

# Acknowledgements

We are extremely grateful to the [TritonBench](https://github.com/pytorch-labs/tritonbench/tree/main) team for these traces and their intuitive format.