Introduction
ComfyUI is a highly flexible and powerful node-based GUI and backend for Stable Diffusion models, allowing users to design and execute complex AI art workflows with unparalleled control. This guide walks you through installing ComfyUI on a Windows system equipped with an NVIDIA GPU, taking full advantage of CUDA for accelerated performance.
Prerequisites
Before you begin, ensure you have the following:
- Operating System: Windows 10 or 11.
- NVIDIA GPU: An NVIDIA graphics card with up-to-date drivers. Ensure your drivers support the latest CUDA version (e.g., CUDA 12.x).
- 7-Zip: Installed for extracting archives. Download it from 7-zip.org.
- Git: (Optional, but recommended for manual installation) Download and install from git-scm.com.
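Before you begin, you can quickly confirm which of these tools are already on your PATH. The sketch below is a minimal, hypothetical check (it assumes the 7-Zip command-line binary `7z` and `nvidia-smi` were added to PATH during installation, which is not always the case):

```python
import shutil

def find_tool(name):
    """Return the full path of an executable found on PATH, or None if missing."""
    return shutil.which(name)

# Hypothetical tool names used by this guide; adjust for your setup.
for tool in ("git", "7z", "nvidia-smi"):
    path = find_tool(tool)
    print(f"{tool}: {path or 'not found on PATH'}")
```

If `nvidia-smi` is found, running it in a terminal reports your driver version and the highest CUDA version that driver supports, which is useful later when choosing a PyTorch build.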
Installation Methods
There are two primary methods for installing ComfyUI on Windows with an NVIDIA GPU, both leveraging CUDA for performance.
Method 1: Using the Portable Standalone Build (Recommended)
This is the easiest and most recommended method, as it includes Python and all necessary dependencies pre-packaged.
- Download the Portable Build:
Go to the ComfyUI Releases page. Under the "Assets" section of the latest release, download the appropriate portable build for NVIDIA GPUs. Look for files such as:
  - `ComfyUI_windows_portable_nvidia.7z` (standard, often with a newer CUDA/Python)
  - `ComfyUI_windows_portable_nvidia_cu118_py310.7z` (or similar, specifying an older CUDA/Python for compatibility with older GPUs like the NVIDIA 10 series)
- Extract the Archive:
Use 7-Zip to extract the downloaded `.7z` file to a location of your choice (e.g., `C:\ComfyUI_Portable`).
  - Important: If you encounter issues during extraction or running, right-click the `.7z` file, go to Properties, check the "Unblock" box, and then try extracting again.
- Place Your Models:
Copy your Stable Diffusion checkpoint files (the large `.ckpt` or `.safetensors` files) into the `ComfyUI\models\checkpoints` directory within the extracted folder.
  - For other model types (LoRAs, VAEs, etc.), place them in their respective `ComfyUI\models\` subdirectories.
- Run ComfyUI:
Navigate to the extracted ComfyUI folder and double-click `run_nvidia_gpu.bat`. This will launch the ComfyUI server and open the interface in your web browser, typically at `http://127.0.0.1:8188`.
  - Troubleshooting: If the application doesn't start, ensure your NVIDIA drivers are up to date.
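Once the batch file is running, you can confirm the server is reachable before opening the browser yourself. A minimal Python sketch, assuming the default address (yours will differ if you changed the port):

```python
import urllib.request
import urllib.error

def server_ready(url="http://127.0.0.1:8188", timeout=2.0):
    """Return True if a web server answers with HTTP 200 at `url`."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except (urllib.error.URLError, OSError):
        return False

# Prints True once run_nvidia_gpu.bat has finished starting up, False otherwise.
print(server_ready())
```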
Method 2: Manual Installation (Advanced)
This method provides more control over your Python environment and dependencies.
- Clone the Repository: Open your terminal (Command Prompt or PowerShell) and clone the ComfyUI repository:

```shell
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
```

- Install PyTorch with CUDA Support: It's recommended to create and activate a Python virtual environment first (e.g., `python -m venv venv` then `.\venv\Scripts\activate` on Windows) to avoid conflicts. Install PyTorch, torchvision, and torchaudio with CUDA support. The CUDA version (e.g., `cu130` for CUDA 13.0) should match your NVIDIA driver capabilities.
  - Stable PyTorch (Recommended):

    ```shell
    pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
    ```

  - Nightly PyTorch (for the latest features/performance, potentially less stable):

    ```shell
    pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130
    ```

  - Troubleshooting "Torch not compiled with CUDA enabled": This error means PyTorch isn't correctly linked against your CUDA installation. First, uninstall any existing `torch` packages with `pip uninstall torch`, then reinstall using the appropriate command above, making sure the `cuXXX` version matches your system's CUDA setup.
- Install Other Dependencies: Install the remaining Python packages required by ComfyUI:

```shell
pip install -r requirements.txt
```

- Place Your Models: Move your Stable Diffusion checkpoints (`.ckpt` or `.safetensors` files) to `ComfyUI/models/checkpoints` and your VAE models to `ComfyUI/models/vae`. Similarly, place LoRAs in `ComfyUI/models/loras`.
- Run ComfyUI: Execute ComfyUI from your terminal:

```shell
python main.py
```

This will start the ComfyUI server, and you can access the interface via the URL provided in the terminal output.
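If ComfyUI starts but generation falls back to the CPU, the usual cause is a CPU-only PyTorch wheel. The hedged check below reports whether a CUDA-enabled torch is importable; it is written so it is safe to run even in an environment where torch isn't installed yet:

```python
import importlib.util

def cuda_status():
    """Describe whether a CUDA-enabled PyTorch is importable in this environment."""
    if importlib.util.find_spec("torch") is None:
        return "torch is not installed in this environment"
    import torch  # deferred so the check still works without torch
    if torch.cuda.is_available():
        return f"CUDA available: {torch.cuda.get_device_name(0)}"
    return "torch installed, but CUDA is not available (CPU-only build?)"

print(cuda_status())
```

If this reports a CPU-only build, follow the "Torch not compiled with CUDA enabled" troubleshooting step above.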
Sharing Models with Other UIs
If you have another Stable Diffusion UI (like Automatic1111) installed and want to share models with ComfyUI to save disk space, you can configure ComfyUI to look in external directories.
- Rename Config File: Find the `extra_model_paths.yaml.example` file in your ComfyUI directory and rename it to `extra_model_paths.yaml`.
- Edit the File: Open `extra_model_paths.yaml` with a text editor. Modify the paths within this file to point to the model directories of your other UI. This allows ComfyUI to use models without duplicating them.
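As a sketch, a configuration for sharing an Automatic1111 install might look like the following. The `base_path` is a hypothetical location, and the subpaths mirror Automatic1111's default folder layout; check the shipped `extra_model_paths.yaml.example` for the exact keys your ComfyUI version supports:

```yaml
a111:
    base_path: C:\stable-diffusion-webui   # hypothetical A1111 install path
    checkpoints: models/Stable-diffusion
    vae: models/VAE
    loras: models/Lora
    embeddings: embeddings
```

Restart ComfyUI after editing the file so the new paths are picked up.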
Conclusion
You have successfully installed ComfyUI on your Windows system with an NVIDIA GPU, harnessing the power of CUDA. You are now ready to dive into its flexible node-based interface and explore advanced Stable Diffusion workflows. For further examples and community support, refer to the official ComfyUI GitHub and its associated resources. Happy creating!
