---
library_name: diffusers
tags:
- pruna-ai
- safetensors
---
# Model Card for pruna-test/test-save-tiny-stable-diffusion-pipe-smashed
This model was created using the [pruna](https://github.com/PrunaAI/pruna) library. Pruna is a model optimization framework built for developers, enabling you to deliver more efficient models with minimal implementation overhead.
## Usage
First things first, you need to install the pruna library:
```bash
pip install pruna
```
You can [use the diffusers library to load the model](https://huggingface.co/pruna-test/test-save-tiny-stable-diffusion-pipe-smashed?library=diffusers), but this might not apply all optimizations by default.
To ensure that all optimizations are applied, load the model with the pruna library:
```python
from pruna import PrunaModel

loaded_model = PrunaModel.from_pretrained(
    "pruna-test/test-save-tiny-stable-diffusion-pipe-smashed"
)
# Inference works through the methods of the base model; for a diffusers
# text-to-image pipeline, that is the usual pipeline call, e.g.:
# image = loaded_model("a prompt describing the image").images[0]
```
For inference, you can use the inference methods of the original model, as shown in [the original model card](https://huggingface.co/hf-internal-testing/tiny-stable-diffusion-pipe?library=diffusers).
Alternatively, you can visit [the Pruna documentation](https://docs.pruna.ai/en/stable/) for more information.
## Smash Configuration
The compression configuration of the model is stored in the `smash_config.json` file, which describes the optimization methods that were applied to the model.
```json
{
  "awq": false,
  "c_generate": false,
  "c_translate": false,
  "c_whisper": false,
  "deepcache": false,
  "diffusers_int8": false,
  "fastercache": false,
  "flash_attn3": false,
  "fora": false,
  "gptq": false,
  "half": false,
  "hqq": false,
  "hqq_diffusers": false,
  "hyper": false,
  "ifw": false,
  "img2img_denoise": false,
  "ipex_llm": false,
  "llm_int8": false,
  "pab": false,
  "padding_pruning": false,
  "qkv_diffusers": false,
  "quanto": false,
  "realesrgan_upscale": false,
  "reduce_noe": false,
  "ring_attn": false,
  "sage_attn": false,
  "stable_fast": false,
  "text_to_image_distillation_inplace_perp": false,
  "text_to_image_distillation_lora": false,
  "text_to_image_distillation_perp": false,
  "text_to_image_inplace_perp": false,
  "text_to_image_lora": false,
  "text_to_image_perp": false,
  "text_to_text_inplace_perp": false,
  "text_to_text_lora": false,
  "text_to_text_perp": false,
  "torch_compile": false,
  "torch_dynamic": false,
  "torch_structured": false,
  "torch_unstructured": false,
  "torchao": false,
  "whisper_s2t": false,
  "x_fast": false,
  "zipar": false,
  "batch_size": 1,
  "device": "cpu",
  "device_map": null,
  "save_fns": [],
  "save_artifacts_fns": [],
  "load_fns": [
    "diffusers"
  ],
  "load_artifacts_fns": [],
  "reapply_after_load": {}
}
```
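Every optimization flag in this configuration is `false`, so no compression method was applied to this test model. A quick way to check that programmatically is to parse the file and filter the boolean flags. The snippet below is a stdlib-only sketch (not part of the pruna API) that inlines a small excerpt of the configuration above:

```python
import json

# Excerpt of a smash_config.json; boolean keys name optimization methods,
# and a value of true means the method was applied to the model.
config_text = """
{
  "deepcache": false,
  "half": false,
  "torch_compile": false,
  "batch_size": 1,
  "device": "cpu"
}
"""

config = json.loads(config_text)

# Collect only the boolean flags that are set; non-boolean settings such
# as "batch_size" and "device" are skipped by the `is True` check.
enabled = [key for key, value in config.items() if value is True]
print(enabled)  # -> [] for this model, since every flag is false
```

To inspect a real checkpoint, the same filter can be run on the `smash_config.json` file downloaded alongside the model weights.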
## 🌍 Join the Pruna AI community!
[Twitter](https://twitter.com/PrunaAI)
[GitHub](https://github.com/PrunaAI)
[LinkedIn](https://www.linkedin.com/company/93832878/admin/feed/posts/?feedType=following)
[Discord](https://discord.gg/JFQmtFKCjd)
[Reddit](https://www.reddit.com/r/PrunaAI/)