Installing ComfyUI on Windows with NVIDIA GPUs

2025-01-15
4 min read
Author: Z.Shinchven

Introduction

ComfyUI is a highly flexible, node-based GUI and backend for Stable Diffusion models, giving users fine-grained control over the design and execution of complex AI art workflows. This guide walks you through installing ComfyUI on a Windows system equipped with an NVIDIA GPU, taking full advantage of CUDA for accelerated performance.

Prerequisites

Before you begin, ensure you have the following:

  • Operating System: Windows 10 or 11.
  • NVIDIA GPU: An NVIDIA graphics card with up-to-date drivers. Ensure your drivers support the latest CUDA version (e.g., CUDA 12.x).
  • 7-Zip: Installed for extracting archives. Download it from 7-zip.org.
  • Git: (Optional, but recommended for manual installation) Download and install from git-scm.com.

Installation Methods

There are two primary methods for installing ComfyUI on Windows with an NVIDIA GPU, both leveraging CUDA for performance.

Method 1: Using the Portable Standalone Build (Recommended)

This is the easiest method and the recommended one, as it ships with Python and all necessary dependencies pre-packaged.

  1. Download the Portable Build: Go to the ComfyUI Releases page. Under the "Assets" section of the latest release, download the appropriate portable build for NVIDIA GPUs. Look for files such as:
    • ComfyUI_windows_portable_nvidia.7z (standard, often with newer CUDA/Python)
    • ComfyUI_windows_portable_nvidia_cu118_py310.7z (or similar, specifying older CUDA/Python for compatibility with older GPUs like NVIDIA 10 series).
  2. Extract the Archive: Use 7-Zip to extract the downloaded .7z file to a location of your choice (e.g., C:\ComfyUI_Portable).
    • Important: If you encounter issues during extraction or running, right-click the .7z file, go to Properties, and check the "Unblock" box, then try extracting again.
  3. Place Your Models: Copy your Stable Diffusion checkpoint files (the large .ckpt or .safetensors files) into the ComfyUI\models\checkpoints directory within the extracted folder.
    • For other model types (LoRAs, VAEs, etc.), place them in their respective ComfyUI\models\ subdirectories.
  4. Run ComfyUI: Navigate to the extracted ComfyUI folder and double-click run_nvidia_gpu.bat. This will launch the ComfyUI server and open the interface in your web browser, typically at http://127.0.0.1:8188.
    • Troubleshooting: If the application doesn't start, ensure your NVIDIA drivers are up to date.
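The model placement in step 3 can be sketched as a small Python helper. This is a hypothetical illustration, not part of ComfyUI; the folder names match the portable build's default models directory layout:

```python
from pathlib import Path

# Subdirectories under ComfyUI\models for each model type.
# These names match the portable build's default layout.
DEST = {
    "checkpoint": "checkpoints",
    "lora": "loras",
    "vae": "vae",
}

def model_destination(comfy_root: str, kind: str, filename: str) -> Path:
    """Return where a model file should be copied (hypothetical helper)."""
    if not filename.endswith((".ckpt", ".safetensors")):
        raise ValueError("expected a .ckpt or .safetensors file")
    return Path(comfy_root) / "models" / DEST[kind] / filename
```

For example, `model_destination("C:/ComfyUI_Portable/ComfyUI", "checkpoint", "sd15.safetensors")` resolves to the `models/checkpoints` folder described in step 3.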

Method 2: Manual Installation (Advanced)

This method provides more control over your Python environment and dependencies.

  1. Clone the Repository: Open your terminal (Command Prompt or PowerShell) and clone the ComfyUI repository:

    git clone https://github.com/comfyanonymous/ComfyUI.git
    cd ComfyUI
    
  2. Install PyTorch with CUDA Support: It's recommended to create and activate a Python virtual environment first (e.g., python -m venv venv then .\venv\Scripts\activate on Windows) to avoid conflicts. Install PyTorch, torchvision, and torchaudio with CUDA support. The CUDA version (e.g., cu130 for CUDA 13.0) should match your NVIDIA driver capabilities.

    • Stable PyTorch (Recommended):

      pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
      
    • Nightly PyTorch (for the latest features/performance, potentially less stable):

      pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130
      
    • Troubleshooting "Torch not compiled with CUDA enabled": This error means the installed PyTorch build lacks CUDA support (typically a CPU-only wheel was picked up). First, uninstall the existing packages: pip uninstall torch torchvision torchaudio. Then reinstall using the appropriate command above, ensuring the cuXXX tag matches a CUDA version your driver supports.

  3. Install Other Dependencies: Install the remaining Python packages required by ComfyUI:

    pip install -r requirements.txt
    
  4. Place Your Models: Move your Stable Diffusion checkpoints (.ckpt or .safetensors files) to ComfyUI/models/checkpoints and your VAE models to ComfyUI/models/vae. Similarly, place LoRAs in ComfyUI/models/loras.

  5. Run ComfyUI: Execute ComfyUI from your terminal:

    python main.py
    

    This will start the ComfyUI server, and you can access the interface via the URL provided in the terminal output.
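The cuXXX suffix in the wheel URLs above is simply the CUDA version with the dot removed (cu130 for CUDA 13.0, cu118 for CUDA 11.8). A small hypothetical helper illustrates the pattern:

```python
# Hypothetical helper: build a PyTorch wheel index URL for a given
# CUDA version, following the cuXXX naming used by download.pytorch.org.
def torch_index_url(cuda_version: str, nightly: bool = False) -> str:
    tag = "cu" + cuda_version.replace(".", "")  # e.g. "13.0" -> "cu130"
    channel = "whl/nightly" if nightly else "whl"
    return f"https://download.pytorch.org/{channel}/{tag}"
```

For example, `torch_index_url("11.8")` yields the cu118 index used by the older-GPU compatibility build mentioned in Method 1.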

Sharing Models with Other UIs

If you have another Stable Diffusion UI (like Automatic1111) installed and want to share models with ComfyUI to save disk space, you can configure ComfyUI to look in external directories.

  1. Rename Config File: Find the extra_model_paths.yaml.example file in your ComfyUI directory and rename it to extra_model_paths.yaml.
  2. Edit the File: Open extra_model_paths.yaml with a text editor. Modify the paths within this file to point to the model directories of your other UI. This allows ComfyUI to use models without duplicating them.
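As a sketch, an extra_model_paths.yaml pointing at an Automatic1111 install might look like the following. The paths here are examples only; adjust base_path to your own installation, and consult the shipped .example file for the full set of supported keys:

```yaml
a111:
    base_path: C:\stable-diffusion-webui\

    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
```

Relative paths in each entry are resolved against base_path, so ComfyUI reads the models directly from the other UI's folders.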

Conclusion

You have successfully installed ComfyUI on your Windows system with an NVIDIA GPU, harnessing the power of CUDA. You are now ready to dive into its flexible node-based interface and explore advanced Stable Diffusion workflows. For further examples and community support, refer to the official ComfyUI GitHub and its associated resources. Happy creating!
