Introduction
ComfyUI is a highly flexible, node-based GUI and backend for Stable Diffusion. It's a favorite among power users for its performance and the ability to create complex generation workflows. Linux is often the best environment for running AI workloads due to its efficiency and developer-friendly tools.
This guide will walk you through installing ComfyUI on a Linux system, with specific instructions for NVIDIA, AMD, and Intel Arc GPUs.
Prerequisites
Before starting, ensure your system is up to date and has the following installed:
- Git: For cloning the repository.
- Python 3: Version 3.12 or 3.13 is recommended (3.14 works but is experimental).
- Virtual Environment (venv): Highly recommended to keep dependencies isolated.
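If you are not sure whether these are present, you can check them and, on a Debian/Ubuntu-based system (an assumption; substitute your distro's package manager), install anything missing:
# Check what is already installed
git --version
python3 --version
# Example install on Debian/Ubuntu (package names may differ on other distros)
sudo apt update
sudo apt install git python3 python3-venv python3-pip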
Step 1: Clone the Repository
Open your terminal and clone the ComfyUI repository to your desired location:
git clone https://github.com/comfyanonymous/ComfyUI.git
cd ComfyUI
Step 2: Set Up a Virtual Environment
Creating a virtual environment prevents conflicts with your system's Python packages.
# Create a virtual environment named 'venv'
python3 -m venv venv
# Activate the environment
source venv/bin/activate
Note: You will need to run source venv/bin/activate every time you open a new terminal to run ComfyUI.
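A quick way to confirm the environment is active is to check which Python interpreter your shell resolves; it should point inside the venv folder you just created:
# Should print a path ending in ComfyUI/venv/bin/python
which python
# To leave the environment later, simply run:
deactivate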
Step 3: Install PyTorch (GPU Specific)
This is the most critical step. You need to install the version of PyTorch that matches your hardware.
Option A: NVIDIA GPUs (CUDA)
For NVIDIA users, CUDA is the standard.
Stable Version (Recommended):
pip install torch torchvision torchaudio --extra-index-url https://download.pytorch.org/whl/cu130
Nightly Version (For latest features):
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/cu130
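Whichever build you choose, a quick sanity check from inside the activated venv confirms that PyTorch sees your GPU (this uses standard PyTorch calls, nothing ComfyUI-specific):
# Should print True plus the CUDA version the wheel was built against
python -c "import torch; print(torch.cuda.is_available(), torch.version.cuda)"
# The NVIDIA driver itself can be inspected with:
nvidia-smi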
Option B: AMD GPUs (ROCm)
AMD users should use ROCm. Support has improved significantly with recent updates.
Stable Version (ROCm 6.4):
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/rocm6.4
Nightly Version (ROCm 7.1+):
pip install --pre torch torchvision torchaudio --index-url https://download.pytorch.org/whl/nightly/rocm7.1
For RDNA 3 / 4 (RX 7000/9000 Series) - Specific Nightlies: If you have a modern card, use the specialized nightly builds for best performance:
- RX 7000 Series:
pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx110X-dgpu/
- RX 9000 Series:
pip install --pre torch torchvision torchaudio --index-url https://rocm.nightlies.amd.com/v2/gfx120X-all/
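The same kind of sanity check works for ROCm builds; on these wheels torch.version.hip reports the HIP version while the usual CUDA calls are routed through ROCm:
# Should print True plus a HIP version string on a working ROCm install
python -c "import torch; print(torch.cuda.is_available(), torch.version.hip)"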
Option C: Intel Arc GPUs (XPU)
Intel Arc users can use the native XPU support.
Stable Version:
pip install torch torchvision torchaudio --index-url https://download.pytorch.org/whl/xpu
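For Arc cards, recent PyTorch releases expose the XPU backend directly, so a similar check applies:
# Should print True if the XPU backend can see your Arc GPU
python -c "import torch; print(torch.xpu.is_available())"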
Step 4: Install Dependencies
Once PyTorch is installed, install the rest of the required Python packages:
pip install -r requirements.txt
Step 5: Place Your Models
Before running the server, you need to add your Stable Diffusion models.
- Checkpoints: Copy your .ckpt or .safetensors files to ComfyUI/models/checkpoints/.
- VAEs: Copy VAE files to ComfyUI/models/vae/.
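As a concrete example (the filenames below are placeholders for whatever models you have downloaded):
# Replace the filenames with your actual model files
cp ~/Downloads/my-model.safetensors ComfyUI/models/checkpoints/
cp ~/Downloads/my-vae.safetensors ComfyUI/models/vae/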
Step 6: Run ComfyUI
You are now ready to launch ComfyUI!
python main.py
Access the interface by opening your browser to http://127.0.0.1:8188.
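If you want to reach the interface from another machine on your network, ComfyUI's main.py also accepts --listen and --port flags (shown here with example values):
# Listen on all interfaces instead of only localhost
python main.py --listen 0.0.0.0 --port 8188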
Troubleshooting & Tips
- AMD RDNA 2 Issues: If you have an RX 6000 series card (e.g., 6700 XT) and it crashes, try forcing the graphics version:
HSA_OVERRIDE_GFX_VERSION=10.3.0 python main.py
- "Torch not compiled with CUDA enabled": This means you installed the generic torch package instead of the CUDA version. Uninstall it (pip uninstall torch) and re-run the specific command from Step 3.
- Performance:
  - AMD: To enable experimental memory optimization:
TORCH_ROCM_AOTRITON_ENABLE_EXPERIMENTAL=1 python main.py --use-pytorch-cross-attention
  - General: If you have low VRAM, use the --lowvram flag: python main.py --lowvram.
Conclusion
Installing ComfyUI on Linux is a straightforward process that rewards you with a stable and high-performance environment for AI art generation. Whether you are Team Green, Red, or Blue, ComfyUI has you covered.
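When you do want to pull in the latest changes, updating a git-based install is usually just a pull plus a dependency refresh, run from the ComfyUI folder with the venv active:
# Update the code and its Python dependencies
git pull
pip install -r requirements.txt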
For the latest updates, keep an eye on the official repository. Happy prompting!
