New models and LoRAs
Could you please add these?
https://civitai.com/models/1138829?modelVersionId=1882574
https://civitai.com/models/626603/hyperrealistic-pony-or-illustrious?modelVersionId=1914557
https://civitai.com/models/1390490/neon-silhouette-style-illustriousxl
https://civitai.com/models/1290802/powerpuffmixlora
https://civitai.com/models/1719676/shantae-from-shantae-illustriousxl
Hi. I added them for now.
OK. I'm avoiding reposting models that are in early access, so I'll leave midnightpony for later, but I've converted the rest for now.😀
model request 2, thanks!
https://civitai.com/models/24350?modelVersionId=2001227 (only PerfectDeliberate XL version)
https://civitai.com/models/1745682/bridgetoons-mouse-mix (all versions are kinda good)
https://civitai.com/models/1832088/bridgetoons-comix
https://civitai.com/models/1691010/bridgetoons-mix (same here, although ver 4 and 3 seem to be the best)
https://civitai.com/models/1784467/lunaris-vey
https://civitai.com/models/1309123/shift
Could you please add these as well?
https://civitai.com/models/1586360/eternalvibe
https://civitai.com/models/1554081/editijon-kpop
https://civitai.com/models/1231943/detailer-il
https://civitai.com/models/1546402/colorij?modelVersionId=1749739
https://civitai.com/models/1235640/doubleexposure-il
https://civitai.com/models/1887699/cybersijren
https://civitai.com/models/1377820/add-micro-details-concept-illustrious-or-pony-or-noobai?modelVersionId=1963644
Could you please add version 9.0 of Five Stars Illustrious?
That model appears to be usable only directly on Civitai. The download button itself is disabled.
Ah, disappointing. Do you happen to know of any semi-realistic checkpoints with saturated colors like that one?
EDIT: Requesting these:
https://civitai.com/models/24350/perfectdeliberate?modelVersionId=2725374
https://civitai.com/models/111274/perfectdeliberate-anime
https://civitai.com/models/1984269/ptd-style-contrast-glow-and-lighting-enhancement
https://civitai.com/models/1359028?modelVersionId=2734657
Sorry. Hugging Face's storage limits have gotten stricter, so I can't upload any new large files anymore. At least for the time being...😓
Oh, man, that sucks. Hopefully they'll up the limit soon. On that note, does this space require that you upload the model yourself, or can you use any model that anyone uploads to Hugging Face?
or can you use any model that anyone uploads to Hugging Face?
this.
So, if I uploaded the model myself, would you be willing to add it to your space?
if I uploaded the model myself, would you be willing to add it to your space?
Yeah. But one thing to note: Only the model itself can be added to the model list. Adding LoRAs that way is a bit tricky (due to the structure of my space...).
Nice, thanks. As far as the LoRAs, could you delete one that I've previously requested to add another? I'd like to swap DoubleExposure IL for (PTD) Style: Contrast, Glow & Lighting Enhancement.
Sorry. Currently, I'm not really in the mood to manage LoRA files either...
Alright, that's totally fine. :)
I'll try to upload some models and link them here once I'm done.
Hm, I thought I could just upload the SafeTensor file from Civitai, but there seems to be more to it than that. I'm not really sure what to do.
I could just upload the SafeTensor file from Civitai,
Actually, with DiffuseCraft/DiffuseCraftMod, if it's uploaded to the HF Hub, you can use the standalone safetensors file as-is. The URL specification might be a bit confusing though...
Like this: https://huggingface.co/Raelina/Raehoshi-illust-XL-9/blob/main/raehoshi-illust-xl-9.safetensors
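To make that "URL specification" less confusing: a blob URL like the one above encodes both the repo id and the filename. A rough sketch of how it breaks down (pure string handling, no network; split_hf_blob_url is a hypothetical helper, not part of any library):

```python
# Sketch: splitting a Hugging Face "blob" URL into repo id and filename.
# The URL below is the example from the message above.

def split_hf_blob_url(url: str):
    """Return (repo_id, filename) from a hf.co .../blob/<revision>/<file> URL."""
    prefix = "https://huggingface.co/"
    path = url[len(prefix):]                 # "Raelina/.../blob/main/....safetensors"
    repo_id, rest = path.split("/blob/", 1)  # repo part vs. file part
    revision, filename = rest.split("/", 1)  # drop the revision ("main")
    return repo_id, filename

url = "https://huggingface.co/Raelina/Raehoshi-illust-XL-9/blob/main/raehoshi-illust-xl-9.safetensors"
repo_id, filename = split_hf_blob_url(url)
print(repo_id)   # Raelina/Raehoshi-illust-XL-9
print(filename)  # raehoshi-illust-xl-9.safetensors
```

Once split like this, the same repo_id/filename pair is what you would hand to huggingface_hub's hf_hub_download if you wanted to fetch the file yourself.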
but there seems to be more to it than that.
Yeah. In my case, to work around bugs in the SDXL standard VAE, or to handle scenarios like a single safetensors file without a built-in VAE, I apply a conversion like the following to select the VAE, scheduler, or CLIP:
The easiest way
Use Diffusers like this:
- load the single .safetensors file with StableDiffusionXLPipeline.from_single_file(...)
- replace the scheduler with Euler A
- replace the VAE with madebyollin/sdxl-vae-fp16-fix
- save the pipeline with save_pretrained(...)
This works because Diffusers supports loading SDXL from a single file, and save_pretrained() writes the normal Diffusers folder structure with subfolders. In Diffusers, the A1111 "Euler a" sampler corresponds to EulerAncestralDiscreteScheduler. The sdxl-vae-fp16-fix VAE was made to avoid NaNs in SDXL fp16 inference, and its model card says it also works with SDXL 1.0.
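For reference, the "Euler a" correspondence above fits a broader pattern of A1111 sampler names mapping to Diffusers scheduler classes. Only the "Euler a" entry comes from the text; the other entries are widely used community mappings, listed purely as illustration:

```python
# A few common A1111 sampler names and the Diffusers scheduler classes they
# correspond to. "Euler a" is the one discussed above; the rest are common
# community mappings shown for context.
A1111_TO_DIFFUSERS = {
    "Euler a": "EulerAncestralDiscreteScheduler",
    "Euler": "EulerDiscreteScheduler",
    "DDIM": "DDIMScheduler",
    "DPM++ 2M": "DPMSolverMultistepScheduler",
}

print(A1111_TO_DIFFUSERS["Euler a"])  # EulerAncestralDiscreteScheduler
```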
Install once
Install Diffusers and its companion libraries:
pip install -U diffusers transformers accelerate safetensors
Diffusers’ install docs list diffusers and transformers, and the model-format docs note that safetensors support requires the Safetensors library to be installed.
One-model converter script
Save this as convert_one_sdxl.py.
import gc

import torch
from diffusers import (
    AutoencoderKL,
    EulerAncestralDiscreteScheduler,
    StableDiffusionXLPipeline,
)

# ==== EDIT THESE 2 LINES ====
input_file = r"/path/to/your_model.safetensors"
output_dir = r"/path/to/your_model_diffusers"
# ============================

print("Loading SDXL safetensors file...")
pipe = StableDiffusionXLPipeline.from_single_file(
    input_file,
    use_safetensors=True,
    torch_dtype=torch.float16,
)

print("Setting scheduler to Euler A...")
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)

print("Loading fp16-fix VAE...")
pipe.vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)

print("Saving as Diffusers folder...")
pipe.save_pretrained(
    output_dir,
    safe_serialization=True,
)

# Free memory after saving.
del pipe
gc.collect()
if torch.cuda.is_available():
    torch.cuda.empty_cache()

print("Done.")
print(f"Saved to: {output_dir}")
This script always loads in torch.float16, saves the pipeline in the Diffusers folder format, and makes the saved folder default to Euler A and the fp16-fix VAE, because those components are set on the pipeline before saving. save_pretrained() is the Diffusers-supported way to create that folder layout.
Run it
python convert_one_sdxl.py
After it finishes, output_dir will contain a normal Diffusers model folder instead of one big checkpoint file.
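If you want a quick sanity check of the result, the saved folder has a predictable top-level layout. A small sketch, assuming the standard Diffusers SDXL layout (the path is the hypothetical output_dir from the script above):

```python
import os

# Typical top-level entries of an SDXL pipeline saved with save_pretrained().
EXPECTED = [
    "model_index.json",
    "scheduler",
    "text_encoder",
    "text_encoder_2",
    "tokenizer",
    "tokenizer_2",
    "unet",
    "vae",
]

output_dir = "/path/to/your_model_diffusers"  # hypothetical path
if os.path.isdir(output_dir):
    # Report any expected entry that is missing from the saved folder.
    missing = [n for n in EXPECTED if not os.path.exists(os.path.join(output_dir, n))]
    print("missing entries:", missing or "none")
else:
    print("expected entries:", ", ".join(EXPECTED))
```

Loading it back is then just StableDiffusionXLPipeline.from_pretrained(output_dir, torch_dtype=torch.float16), the same as for any Hub-hosted Diffusers model.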
What each important line does
- from_single_file(...): loads your single SDXL .safetensors checkpoint into Diffusers.
- torch_dtype=torch.float16: forces the pipeline to load in fp16.
- EulerAncestralDiscreteScheduler.from_config(...): changes the scheduler to Euler A.
- AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", ...): replaces the VAE with the fp16-safe SDXL VAE fix.
- save_pretrained(..., safe_serialization=True): writes the Diffusers folders and saves the weights as safetensors.
Minimal answer
Use this pattern:
pipe = StableDiffusionXLPipeline.from_single_file(
    input_file,
    use_safetensors=True,
    torch_dtype=torch.float16,
)
pipe.scheduler = EulerAncestralDiscreteScheduler.from_config(pipe.scheduler.config)
pipe.vae = AutoencoderKL.from_pretrained(
    "madebyollin/sdxl-vae-fp16-fix",
    torch_dtype=torch.float16,
)
pipe.save_pretrained(output_dir, safe_serialization=True)
That converts one SDXL 1.0 .safetensors file into a Diffusers folder, with Euler A, the fp16-fix VAE, and fp16 loading set as its defaults by the converter.
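Since the goal of the thread is getting models into a space, the last step is pushing the converted folder back to the Hub. A minimal sketch, assuming huggingface_hub is installed and you are logged in; repo_id and output_dir are hypothetical placeholders:

```python
# Sketch: arguments for uploading the converted Diffusers folder to the Hub.
# Both names below are placeholders; replace them with your own.
repo_id = "your-username/your-model-diffusers"
output_dir = "/path/to/your_model_diffusers"

upload_kwargs = dict(
    repo_id=repo_id,
    folder_path=output_dir,
    repo_type="model",
)
print(upload_kwargs)

# To actually upload (needs huggingface_hub and a login token):
# from huggingface_hub import HfApi
# HfApi().upload_folder(**upload_kwargs)
```

The resulting repo can then be linked here, the same way as the Raehoshi example URL earlier in the thread.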