Deployment Guide for FLUX Kontext Style Transfer Space

Quick Start

1. Create a New Hugging Face Space

  1. Go to Hugging Face Spaces (https://huggingface.co/spaces)
  2. Click "Create new Space"
  3. Choose:
    • Space name: flux-kontext-style-transfer (or your preferred name)
    • License: Apache 2.0
    • SDK: Gradio
    • Hardware: ZeroGPU (recommended) or T4 Medium
  4. Click "Create Space"

2. Upload Files

Upload all the files from this directory to your new Space:

  • app.py - Main application file
  • requirements.txt - Python dependencies
  • README.md - Space documentation
  • config.py - Configuration settings
  • .gitignore - Git ignore file
  • Dockerfile - Docker configuration (optional)
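
If you prefer the command line over the web upload form, the huggingface_hub library can push the whole directory in one call. A minimal sketch, assuming you have run huggingface-cli login and replacing the placeholder Space ID with your own:

from huggingface_hub import HfApi

api = HfApi()  # picks up the token from `huggingface-cli login` or HF_TOKEN

# Push every file in this directory to the Space repository.
api.upload_folder(
    folder_path=".",
    repo_id="your-username/flux-kontext-style-transfer",  # placeholder Space ID
    repo_type="space",
)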

3. Space Configuration

The Space should automatically start building once you upload the files. The README.md contains the necessary YAML frontmatter with the Space configuration.
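
For reference, a typical frontmatter block looks like the one below; the title, emoji, colors, and SDK version are illustrative, so keep whatever the included README.md already specifies:

---
title: FLUX Kontext Style Transfer
emoji: 🎨
colorFrom: purple
colorTo: pink
sdk: gradio
sdk_version: 5.23.1
app_file: app.py
pinned: false
license: apache-2.0
---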

4. Hardware Requirements

For optimal performance, use:

  • ZeroGPU: Best for public spaces (free with queue)
  • T4 Medium or Large: For consistent performance
  • A10G Small or Medium: For faster inference
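
Note that on ZeroGPU, a GPU is attached only while a function decorated with @spaces.GPU runs; the rest of the time the Space sits on CPU. A minimal sketch of the pattern (the pipeline class, model ID, and duration are illustrative; app.py may differ):

import spaces
import torch
from diffusers import FluxKontextPipeline  # illustrative; match the class app.py uses

# Loaded once at startup; ZeroGPU attaches the GPU only inside decorated calls.
pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
).to("cuda")

@spaces.GPU(duration=120)  # request the GPU for up to 120 seconds per call
def generate(prompt, image):
    return pipe(prompt=prompt, image=image).images[0]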

5. Environment Variables (Optional)

If you need to set environment variables:

  1. Go to your Space settings
  2. Add variables in the "Variables and secrets" section
  3. Common variables:
    • HF_TOKEN: Hugging Face token (if needed for private models)
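
Inside app.py, a secret set this way is read from the environment. A minimal sketch, assuming the token is only needed to authenticate model downloads:

import os
from huggingface_hub import login

hf_token = os.environ.get("HF_TOKEN")  # set under "Variables and secrets"
if hf_token:
    login(token=hf_token)  # authenticates downloads of gated or private models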

File Structure

your-space/
├── app.py              # Main Gradio application
├── requirements.txt    # Python dependencies
├── README.md           # Space documentation with metadata
├── config.py           # Configuration settings
├── .gitignore          # Git ignore patterns
├── Dockerfile          # Docker configuration (optional)
└── deploy.md           # This deployment guide

Features Included

  • Complete Gradio Interface: Ready-to-use web interface
  • 20+ Style LoRAs: All styles from the original model
  • GPU Optimization: Configured for ZeroGPU
  • Memory Management: Efficient GPU memory usage
  • Examples: Pre-loaded example images
  • Advanced Settings: Customizable parameters
  • Professional UI: Clean, modern interface

Customization Options

Adding New Styles

  1. Update STYLE_TYPE_LORA_DICT in app.py
  2. Add new LoRA files to the model repository
  3. Update style descriptions in config.py
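
As a sketch, a new entry might look like the following; the style name, filename, and value format are placeholders, so mirror the structure of the existing entries in app.py:

# app.py -- hypothetical new style entry
STYLE_TYPE_LORA_DICT["Watercolor"] = "Watercolor_lora_weights.safetensors"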

UI Modifications

  • Edit the CSS in app.py for custom styling
  • Modify the Gradio layout in the interface section
  • Add new components or remove existing ones
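
As a minimal sketch of custom styling, CSS is passed to the Blocks constructor in app.py; the selector and rule below are placeholders:

import gradio as gr

custom_css = """
.gradio-container {max-width: 1100px; margin: auto;}
"""

with gr.Blocks(css=custom_css) as demo:
    gr.Markdown("# FLUX Kontext Style Transfer")
    # ... existing interface components ...

demo.launch()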

Performance Tuning

  • Adjust default parameters in config.py
  • Modify memory management settings
  • Update hardware requirements in README.md
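
The exact names live in config.py; the values below are illustrative placeholders rather than the shipped defaults:

# config.py -- illustrative tuning knobs, not the actual shipped values
DEFAULT_IMAGE_SIZE = 1024         # lower (e.g. 768) to reduce GPU memory use
DEFAULT_NUM_INFERENCE_STEPS = 28  # fewer steps run faster at some quality cost
DEFAULT_GUIDANCE_SCALE = 2.5
ENABLE_CPU_OFFLOAD = False        # set True on smaller GPUs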

Troubleshooting

Common Issues

  1. Out of Memory Errors

    • Reduce default image size
    • Enable CPU offloading in config (see the offloading sketch after this list)
    • Use smaller batch sizes
  2. Slow Loading

    • LoRAs are downloaded on first use
    • Consider pre-downloading popular LoRAs
    • Use faster hardware tier
  3. Import Errors

    • Check requirements.txt versions
    • Ensure all dependencies are compatible
    • Update to latest diffusers version
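
If memory errors persist, diffusers' offloading helpers are the usual remedy. A minimal sketch, assuming the pipeline class matches the one app.py imports:

import torch
from diffusers import FluxKontextPipeline  # illustrative; match app.py

pipe = FluxKontextPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-Kontext-dev", torch_dtype=torch.bfloat16
)
# Instead of pipe.to("cuda"), stream weights to the GPU only when each component runs.
pipe.enable_model_cpu_offload()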

Performance Tips

  • Use ZeroGPU for cost-effective deployment
  • Cache LoRA files for faster loading
  • Implement model compilation for speed
  • Monitor GPU memory usage
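
For example, popular LoRA files can be pulled into the local Hugging Face cache at startup so the first user request does not pay the download cost; the repository ID and filenames below are placeholders:

from huggingface_hub import hf_hub_download

# Warm the cache for the most frequently used styles (placeholder repo and files).
for lora_file in ["ghibli_lora.safetensors", "anime_lora.safetensors"]:
    hf_hub_download(repo_id="your-org/kontext-style-loras", filename=lora_file)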

Support

For issues with:

License

This deployment is released under the Apache 2.0 License, following the original model's licensing.