How to use with the Diffusers library
```shell
pip install -U diffusers transformers accelerate
```

```python
import torch
from diffusers import DiffusionPipeline

# switch device_map to "mps" for Apple devices
pipe = DiffusionPipeline.from_pretrained(
    "digiplay/SweetMuse_diffusers",
    dtype=torch.bfloat16,
    device_map="cuda",
)

prompt = "Astronaut in a jungle, cold color palette, muted colors, detailed, 8k"
image = pipe(prompt).images[0]
```
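The snippet hard-codes `device_map="cuda"` with a comment to switch to `"mps"` on Apple devices. A small helper (a sketch, not part of the original card) can pick the device at runtime instead:

```python
import torch

def pick_device() -> str:
    """Return the best available torch device string: cuda, mps, or cpu."""
    if torch.cuda.is_available():
        return "cuda"
    # mps is only present on Apple-silicon builds of PyTorch
    if getattr(torch.backends, "mps", None) is not None and torch.backends.mps.is_available():
        return "mps"
    return "cpu"

device = pick_device()
```

You can then pass `device_map=device` (or call `pipe.to(device)`) so the same script runs on CUDA, Apple, or CPU machines.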

Model info: https://civitai.com/models/81668/sweetmuse

Commercial use OK ❀️

Author's Twitter: https://twitter.com/minami_ai01

Sample image (download): 2023-06-08T064531.900.png

Recently the diffusers converter has been failing on this model for reasons unknown: loading it with diffusers raises AutoencoderKL errors. Don't worry, just use the code below to load an external VAE and you can still generate images :)

```python
from diffusers import DiffusionPipeline
from diffusers.models import AutoencoderKL

model_id = "digiplay/SweetMuse_diffusers"

# swap in the standard fine-tuned MSE VAE to work around the AutoencoderKL error
vae = AutoencoderKL.from_pretrained("stabilityai/sd-vae-ft-mse")

pipe = DiffusionPipeline.from_pretrained(model_id, vae=vae)
```