Deploy ComfyUI with Flux Models Easily on Google Colab

Running powerful AI image generation tools like ComfyUI, especially with cutting-edge models like Flux, often requires significant local setup and powerful hardware. Google Colab offers a fantastic alternative, providing free access to GPUs in the cloud.

This post will guide you through using a prepared Google Colab notebook to quickly set up ComfyUI and download the necessary Flux models (FP8, Schnell, and Regular FP16) along with their dependencies. The full code for the notebook is included below.

What the Colab Notebook Does

The provided Colab notebook code automates the entire setup process:

  • Clones the latest ComfyUI repository.
  • Installs all required Python packages.
  • Downloads the different Flux model variants (Single-file FP8, Schnell FP8, Regular FP16) using wget.
  • Downloads the specific CLIP and VAE models needed for each Flux variant using wget.
  • Organizes all downloaded files into the correct ComfyUI/models/ subdirectories (checkpoints, unet, clip, vae).
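After the downloads finish, it is worth confirming that every file landed where ComfyUI expects it. The sketch below is a hypothetical helper (not part of the notebook itself); the file names and subdirectories match the wget targets used in the notebook code.

```python
import os

# Files each Flux variant expects, relative to ComfyUI/models/,
# mirroring the wget download targets in the notebook.
EXPECTED_FILES = {
    "fp8_single": ["checkpoints/flux1-dev-fp8.safetensors"],
    "schnell_fp8": [
        "unet/flux1-schnell-fp8.safetensors",
        "vae/flux_schnell_ae.safetensors",
        "clip/clip_l.safetensors",
        "clip/t5xxl_fp8_e4m3fn.safetensors",
    ],
    "regular_fp16": [
        "unet/flux1-dev.safetensors",
        "vae/flux_regular_ae.safetensors",
        "clip/clip_l.safetensors",
        "clip/t5xxl_fp16.safetensors",
    ],
}

def missing_files(models_dir="ComfyUI/models"):
    """Return every expected file that is not present under models_dir."""
    return [
        f
        for files in EXPECTED_FILES.values()
        for f in files
        if not os.path.exists(os.path.join(models_dir, f))
    ]
```

Running `missing_files()` in a fresh Colab cell after Cell 2 should return an empty list; any path it returns points at a download that failed.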

Colab Notebook Code

You can copy and paste the code below into separate cells in a Google Colab notebook.

```python
# -*- coding: utf-8 -*-
"""
Colab Notebook for Setting Up ComfyUI with Flux Models using wget and %cd

This notebook automates the following steps:
1. Clones the ComfyUI repository.
2. Installs necessary dependencies.
3. Navigates into the models directory.
4. Downloads the different Flux model variants (Single-file FP8, Schnell FP8,
   Regular FP16) into relative subdirectories.
5. Downloads the required CLIP models and VAEs into relative subdirectories.
6. Places all downloaded files into their correct relative directories within
   the ComfyUI installation.

Instructions:
1. Create a new Google Colab notebook.
2. Ensure the runtime type is set to GPU (Runtime > Change runtime type).
3. Copy the code sections below into separate cells in your notebook.
4. Run each cell sequentially.
5. After the setup is complete, run the final cell to start ComfyUI
   (it navigates back to the ComfyUI root first).
6. A link (usually ending with `trycloudflare.com` or `gradio.live`) will be
   generated. Click this link to access the ComfyUI interface in your browser.
7. Once in the ComfyUI interface, you can manually load the workflow JSON
   files provided in the original tutorial.
"""

# Cell 1: Clone ComfyUI Repository and Install Dependencies
!git clone https://github.com/comfyanonymous/ComfyUI.git
%cd ComfyUI
!pip install -r requirements.txt
# Install xformers for potential performance improvements (optional but recommended)
!pip install xformers

# Cell 2: Navigate to Models Dir, Create Subdirs, and Download Files using wget
import os

# Navigate into the models directory
%cd models

# --- Create Subdirectories ---
# Create directories relative to the current 'models' directory
os.makedirs("checkpoints", exist_ok=True)
os.makedirs("unet", exist_ok=True)
os.makedirs("clip", exist_ok=True)
os.makedirs("vae", exist_ok=True)

# --- Download Files using wget directly into relative paths ---
print("\n--- Downloading Single-file FP8 Model ---")
# Download directly into the 'checkpoints' subdirectory
!wget -c -O checkpoints/flux1-dev-fp8.safetensors https://huggingface.co/Comfy-Org/flux1-dev/resolve/main/flux1-dev-fp8.safetensors

print("\n--- Downloading Schnell FP8 Models & Dependencies ---")
# Download directly into the respective subdirectories
!wget -c -O unet/flux1-schnell-fp8.safetensors https://huggingface.co/Comfy-Org/flux1-schnell/resolve/main/flux1-schnell-fp8.safetensors
!wget -c -O vae/flux_schnell_ae.safetensors https://huggingface.co/black-forest-labs/FLUX.1-schnell/resolve/main/ae.safetensors
!wget -c -O clip/clip_l.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/clip_l.safetensors
!wget -c -O clip/t5xxl_fp8_e4m3fn.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp8_e4m3fn.safetensors

print("\n--- Downloading Regular FP16 Models & Dependencies ---")
# Note: You might need to accept the terms on the Hugging Face page first
# (manually, in a browser) if this wget fails. If you still encounter issues,
# download the file manually and upload it to Colab's ComfyUI/models/unet directory.
!wget -c -O unet/flux1-dev.safetensors https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/flux1-dev.safetensors
!wget -c -O vae/flux_regular_ae.safetensors https://huggingface.co/black-forest-labs/FLUX.1-dev/resolve/main/ae.safetensors
# clip_l.safetensors was already downloaded (or attempted) above
!wget -c -O clip/t5xxl_fp16.safetensors https://huggingface.co/comfyanonymous/flux_text_encoders/resolve/main/t5xxl_fp16.safetensors

print("\n--- All Downloads Attempted ---")
print("Please check the output for any download errors.")
print("Files should be in the respective subdirectories within the current 'models' folder.")

# Navigate back to the ComfyUI root directory before starting the server
%cd ..

# Cell 3: Run ComfyUI
# This starts the ComfyUI server from the root directory. Note that ComfyUI
# itself does not create a public URL; on Colab you need a tunnel such as
# cloudflared or localtunnel to reach the interface from your browser, which
# is what produces links like trycloudflare.com.
# If you get an error about port 8188 being in use, restart the Colab runtime.
!python main.py --listen --port 8188 --enable-cors-header --preview-method auto
# Note: The first run might take a while as it sets things up.
# Once you see output like "To see the GUI go to: https://...", click the link.
# You will need to manually load the workflow JSON files into the ComfyUI interface.
```

Where to Find Flux Workflow JSON Files

After setting up ComfyUI using the Colab notebook, you'll need workflow files (.json) to load into the interface. The official ComfyUI examples site hosts reference Flux workflows, and community hubs that share ComfyUI workflows are another good source.

Remember to download the .json file and use the "Load" button in the ComfyUI interface running in your Colab instance.
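Before uploading a downloaded file, a quick sanity check can save a confusing "Load" failure. The helper below is a hypothetical sketch, assuming the UI-export workflow format with top-level "nodes" and "links" keys:

```python
import json

def looks_like_workflow(text):
    """Heuristic check that a string is a ComfyUI UI-export workflow.
    Assumes the exported format's top-level 'nodes' and 'links' keys."""
    try:
        data = json.loads(text)
    except json.JSONDecodeError:
        return False
    return isinstance(data, dict) and "nodes" in data and "links" in data

sample = '{"nodes": [], "links": [], "version": 0.4}'
print(looks_like_workflow(sample))  # True
```

Note that ComfyUI also has a separate API-export format (a flat mapping of node IDs); files in that format are meant for the HTTP API rather than the "Load" button.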

How to Use the Notebook Code

  1. Create Notebook: Open Google Colab and create a new notebook.
  2. Set Runtime: Ensure your Colab notebook is using a GPU runtime (Runtime > Change runtime type > Hardware accelerator > GPU).
  3. Copy & Paste Cells: Copy the code sections marked # Cell 1, # Cell 2, and # Cell 3 into separate code cells in your Colab notebook.
  4. Run Cell 1 (Setup): Execute the first code cell. This installs ComfyUI and dependencies.
  5. Run Cell 2 (Download Models): Execute the second code cell. This downloads all the models using wget. Monitor the output for errors.
  6. Run Cell 3 (Start ComfyUI): Execute the third code cell. This starts the server.
  7. Access ComfyUI: Look for a URL in the output of the third cell (e.g., https://....trycloudflare.com). Click this link to open the ComfyUI web interface.
  8. Load Workflows: Your ComfyUI instance is running! Use the "Load" button in the interface to load the Flux workflow .json files from the original tutorial.
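If the link from step 7 does not respond immediately, the server may still be starting. A small poll of ComfyUI's `/system_stats` endpoint (a real API route the server exposes) can confirm it is up; this snippet is an optional extra cell, not part of the notebook above:

```python
import urllib.request

def server_ready(url="http://127.0.0.1:8188/system_stats", timeout=5):
    """Return True once the ComfyUI server from Cell 3 answers requests."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return resp.status == 200
    except OSError:
        # Connection refused / timed out: the server is not up yet.
        return False
```

Because Cell 3 blocks while the server runs, you would run this from a second Colab cell (or check the public URL once your tunnel prints it).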

Important Notes

  • GPU Runtime: T4 GPU (often free tier) is usually sufficient for the FP8 models. You might need a higher-tier GPU (like A100 from paid plans) for the Regular FP16 model due to VRAM requirements (24GB+).
  • Download Errors: If wget fails for the regular flux1-dev.safetensors model, visit the Hugging Face page in your browser, accept the terms, then rerun the download cell. Alternatively, download it manually and upload it to the ComfyUI/models/unet/ directory in Colab using the file browser on the left.
  • Workflow Files: This notebook sets up the environment and models. You still need the workflow .json files to tell ComfyUI how to connect the nodes.
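The VRAM guidance above can be reduced to a simple rule of thumb. In the sketch below, the 24 GB threshold comes from the note on the FP16 model; the 12 GB floor for the FP8 variants is an assumption, not a measured requirement (a Colab T4 reports roughly 16 GB):

```python
def pick_flux_variant(vram_gb):
    """Suggest a Flux variant for a given amount of GPU VRAM.
    Thresholds: ~24 GB+ for Regular FP16 (per the notes above);
    the 12 GB minimum for FP8 is an assumed, conservative floor."""
    if vram_gb >= 24:
        return "flux1-dev (FP16)"
    if vram_gb >= 12:
        return "flux1-dev-fp8 / flux1-schnell-fp8"
    return "insufficient VRAM for these variants"

print(pick_flux_variant(16))  # flux1-dev-fp8 / flux1-schnell-fp8
```

On Colab you can read the actual VRAM from `torch.cuda.get_device_properties(0).total_memory` and feed it in.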