Installing ComfyUI on Windows for AMD GPUs

Wed Jan 15 2025

ComfyUI is a powerful and modular GUI that allows you to design and execute advanced stable diffusion pipelines using a graph-based interface. This blog post will guide you through the process of installing ComfyUI on a Windows system with an AMD GPU.

Prerequisites

Before we begin, make sure you have the following:

  • A Windows operating system.
  • An AMD GPU. Note that some older models might require specific driver configurations.
  • 7-Zip for extracting the ComfyUI archive (if you're using the portable version).
  • Python installed (version 3.12 is recommended for ComfyUI itself; if pip install torch-directml fails, check which Python versions torch-directml currently supports, as it can lag behind the newest Python releases).

Installation Methods

There are two primary methods for installing ComfyUI on Windows with an AMD GPU:

  1. Using the Portable Standalone Build (Easiest)
  2. Manual Installation (For More Control)

Method 1: Using the Portable Standalone Build

This is the simplest method, especially if you're new to ComfyUI or prefer a quick setup.

  1. Download: Go to the ComfyUI releases page and download the latest portable build for Windows (e.g., ComfyUI_windows_portable_nvidia.7z). Even though the file name mentions Nvidia, it will still work for AMD GPUs if you follow the DirectML instructions below.
  2. Extract: Use 7-Zip to extract the downloaded archive to your desired location.
  3. Install Torch DirectML: Open a command prompt or PowerShell window in the extracted ComfyUI folder and run:

     ```bash
     pip install torch-directml
     ```

  4. Place Models: Put your Stable Diffusion checkpoints (the large .ckpt or .safetensors files) in the ComfyUI\models\checkpoints directory.
  5. Run ComfyUI:

     ```bash
     python main.py --directml
     ```
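To sanity-check that your models landed where ComfyUI will look for them, a small sketch like the following can list what the checkpoints folder contains (the find_checkpoints helper is illustrative, not part of ComfyUI — ComfyUI scans this directory itself on startup):

```python
from pathlib import Path

def find_checkpoints(comfy_root: str) -> list[str]:
    """List the model files under <comfy_root>/models/checkpoints.

    Hypothetical helper for verifying an install; ComfyUI loads
    .ckpt and .safetensors files from this directory.
    """
    ckpt_dir = Path(comfy_root) / "models" / "checkpoints"
    if not ckpt_dir.is_dir():
        return []
    return sorted(p.name for p in ckpt_dir.iterdir()
                  if p.suffix in {".ckpt", ".safetensors"})
```

If this returns an empty list after you have copied your models, double-check that the files ended up in models\checkpoints and not a subfolder of your downloads directory.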

That's it! ComfyUI should now be running. You can access the interface through your web browser, typically at http://127.0.0.1:8188.

Method 2: Manual Installation

This method gives you more control over the installation process but requires a few more steps.

  1. Clone the Repository: Open your terminal or command prompt and clone the ComfyUI repository:

     ```bash
     git clone https://github.com/comfyanonymous/ComfyUI.git
     ```

  2. Navigate to the Directory:

     ```bash
     cd ComfyUI
     ```

  3. Install Dependencies:

     ```bash
     pip install -r requirements.txt
     ```

  4. Install Torch DirectML:

     ```bash
     pip install torch-directml
     ```

  5. Place Models: Put your Stable Diffusion checkpoints in models/checkpoints and your VAE models in models/vae.
  6. Run ComfyUI:

     ```bash
     python main.py --directml
     ```

Note for Specific AMD GPUs:

  • HSA_OVERRIDE_GFX_VERSION is a ROCm environment variable, so these overrides apply when you run ComfyUI on a ROCm build of PyTorch (i.e., on Linux) rather than through DirectML. If your specific AMD card model isn't recognized there, you may need one of these overrides:
    • For the RX 6700, RX 6600, and some other RDNA2 cards: HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
    • For the RX 7600 and some other RDNA3 cards: HSA_OVERRIDE_GFX_VERSION=11.0.0 python main.py
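Setting the variable in the process environment before launching is all the override does. A small launcher sketch from Python, for cases where you start ComfyUI from a script (launch_env is an illustrative helper, not part of ComfyUI):

```python
import os

def launch_env(gfx_version: str) -> dict:
    """Copy the current environment and add the ROCm GFX override."""
    env = os.environ.copy()
    env["HSA_OVERRIDE_GFX_VERSION"] = gfx_version
    return env

# Usage (uncomment to actually start ComfyUI with the override):
# import subprocess
# subprocess.run(["python", "main.py"], env=launch_env("10.3.0"))
```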

Sharing Models with Other UIs

If you have another Stable Diffusion UI installed (like Automatic1111) and want to share models with ComfyUI to save disk space, you can modify the model search paths:

  1. Rename: Find the extra_model_paths.yaml.example file in the ComfyUI directory and rename it to extra_model_paths.yaml.
  2. Edit: Open the extra_model_paths.yaml file with a text editor.
  3. Configure Paths: Modify the paths in the file to point to the model directories of your other UI.
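The file's layout follows extra_model_paths.yaml.example in the ComfyUI repository. A sketch pointing ComfyUI at an Automatic1111 install, using a subset of the keys from that example file (the base_path below is a placeholder — substitute the actual location of your other UI):

```yaml
a111:
    base_path: C:\stable-diffusion-webui
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
```

Each entry maps a ComfyUI model type to a directory relative to base_path, so both UIs can read the same files without duplicating them on disk.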

Conclusion

You have now successfully installed ComfyUI on your Windows system with an AMD GPU. You can start exploring the powerful features of ComfyUI and create complex Stable Diffusion workflows. For examples and inspiration, visit the ComfyUI Examples page. Remember that the ComfyUI community is active and helpful, so don't hesitate to seek support on the Matrix space or Comfy.org if you encounter any issues. Happy creating!