
dylanebert/multi-view-diffusion

Image-to-3D
Diffusers
Safetensors
MVDreamPipeline

Instructions for using dylanebert/multi-view-diffusion with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.

  • Libraries
  • Diffusers

    How to use dylanebert/multi-view-diffusion with Diffusers:

    pip install -U diffusers transformers accelerate

    import numpy as np
    import torch
    from diffusers import DiffusionPipeline
    from PIL import Image

    # MVDreamPipeline is defined in this repository rather than in diffusers
    # itself, so it must be loaded as a custom pipeline with
    # trust_remote_code=True; switch "cuda" to "mps" for Apple devices
    pipe = DiffusionPipeline.from_pretrained(
        "dylanebert/multi-view-diffusion",
        custom_pipeline="dylanebert/multi-view-diffusion",
        torch_dtype=torch.float16,
        trust_remote_code=True,
    ).to("cuda")

    # the pipeline is image-conditioned (Image-to-3D): pass an RGB array
    # scaled to [0, 1]; the text prompt may be left empty
    image = np.array(Image.open("input.png").convert("RGB"), dtype=np.float32) / 255.0
    images = pipe("", image, guidance_scale=5, num_inference_steps=30, elevation=0)
  • Notebooks
  • Google Colab
  • Kaggle
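The custom MVDreamPipeline appears to return a batch of generated views as a NumPy array rather than a standard diffusers output object. A minimal sketch of converting that output into PIL images, assuming a (num_views, H, W, 3) float layout with values in [0, 1] (an assumption about the pipeline's output, not documented in this card):

```python
import numpy as np
from PIL import Image


def views_to_images(views: np.ndarray) -> list:
    """Convert a (num_views, H, W, 3) float array in [0, 1] into PIL images."""
    return [
        Image.fromarray((np.clip(v, 0.0, 1.0) * 255).astype(np.uint8))
        for v in views
    ]


# synthetic data standing in for pipeline output: 4 views of 256x256 RGB
views = np.random.rand(4, 256, 256, 3).astype(np.float32)
images = views_to_images(views)
```

Each resulting image can then be saved with images[i].save(f"view_{i}.png") or passed downstream to a sparse-view 3D reconstruction model.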
multi-view-diffusion / image_encoder
1.26 GB
  • 4 contributors
  • History: 1 commit
    dylanebert: add imagedream (3916e55, almost 2 years ago)
  • config.json
    563 Bytes
    add imagedream almost 2 years ago
  • model.safetensors
    1.26 GB
    add imagedream almost 2 years ago
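As a rough sanity check on the listing above, a 1.26 GB model.safetensors is consistent with an image encoder of roughly 630M parameters stored at 2 bytes per weight (fp16). This is an estimate from file size alone, not a figure stated anywhere in the repository:

```python
def approx_param_count(file_size_bytes: float, bytes_per_param: int = 2) -> float:
    """Estimate parameter count from checkpoint size, ignoring header overhead."""
    return file_size_bytes / bytes_per_param


params = approx_param_count(1.26e9)  # 1.26 GB at fp16
print(f"{params / 1e6:.0f}M parameters")  # prints: 630M parameters
```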