Want to train LoRAs for Flux.1, Stable Diffusion, Z-Image, or Qwen-Image but lack a powerful local GPU? One of the easiest ways is to run AI Toolkit on Google Colab. This guide shows you how to get the toolkit up and running in minutes using a cloud-hosted UI.
Quick Start
To get started, simply run the following setup block in a Google Colab notebook with a GPU runtime (T4, L4, or A100).
#@title Setup ENV
!git clone https://github.com/ostris/ai-toolkit.git
%cd ai-toolkit
!pip install -r requirements.txt
from huggingface_hub import login
from google.colab import userdata
# Read the token from Colab Secrets (key name: HF_TOKEN)
hf_token = userdata.get('HF_TOKEN')
if not hf_token:
    raise ValueError("HF_TOKEN not found in Colab Secrets.")
login(hf_token)
print("Logged in to Hugging Face successfully.")
!wget https://github.com/cloudflare/cloudflared/releases/latest/download/cloudflared-linux-amd64.deb
!dpkg -i cloudflared-linux-amd64.deb
!pip install gradio --upgrade
%cd /content/ai-toolkit/ui
!npm install && npm run build
Running the AI Toolkit UI
Once the environment is set up, you can run the web interface. We use cloudflared to create a secure tunnel, allowing you to access the UI from your local browser.
#@title Run AI Toolkit UI
%cd /content/ai-toolkit/ui
import subprocess
import threading
import time
import socket
def iframe_thread(port):
    # Wait until the UI server is listening on the given port
    while True:
        time.sleep(0.5)
        sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        result = sock.connect_ex(('127.0.0.1', port))
        sock.close()
        if result == 0:
            break
    print("\nAI Toolkit UI finished loading, trying to run cloudflared\n")
    # Start a cloudflared tunnel and print the public URL once it appears
    p = subprocess.Popen(["cloudflared", "tunnel", "--url", "http://127.0.0.1:{}".format(port)], stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    for line in p.stderr:
        l = line.decode()
        if "trycloudflare.com " in l:
            print("This is the URL to access the AI Toolkit UI:", l[l.find("http"):], end='')
threading.Thread(target=iframe_thread, daemon=True, args=(8675,)).start()
# Set a secure password for the UI
!AI_TOOLKIT_AUTH=YOUR_SUPER_SECRET_PASSWORD_HERE npm run build_and_start
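Rather than inventing a password by hand, you can generate a strong random one with Python's standard secrets module and paste it into the command above (a small sketch; the variable name is illustrative):

```python
import secrets

# Generate a URL-safe random password for AI_TOOLKIT_AUTH
password = secrets.token_urlsafe(16)
print(password)
```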
Prerequisites
- Hugging Face Token: You need a Hugging Face write token to download base models and upload your results. Store it in your Colab Secrets as HF_TOKEN.
- GPU Runtime: Ensure your Colab notebook is using a GPU (Runtime -> Change runtime type -> T4 GPU or better).
- Storage: Depending on the model size, you may want to mount Google Drive to save your checkpoints permanently.
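If you mount Google Drive, a small helper like the one below can copy finished checkpoints there so they survive the end of the Colab session. This is only a sketch: the paths and the backup_checkpoints helper are illustrative, not part of AI Toolkit, and in Colab you would first run drive.mount as shown in the comments.

```python
import shutil
from pathlib import Path

# In Colab, mount Drive first (uncomment these two lines):
# from google.colab import drive
# drive.mount('/content/drive')

def backup_checkpoints(src_dir, dst_dir):
    """Copy every .safetensors checkpoint from src_dir into dst_dir."""
    dst = Path(dst_dir)
    dst.mkdir(parents=True, exist_ok=True)
    copied = []
    for ckpt in sorted(Path(src_dir).glob("*.safetensors")):
        shutil.copy2(ckpt, dst / ckpt.name)
        copied.append(ckpt.name)
    return copied

# Example (paths are illustrative):
# backup_checkpoints("/content/ai-toolkit/output/my_lora",
#                    "/content/drive/MyDrive/lora_checkpoints")
```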
Key Features
- Multi-Model Support: Train LoRAs for Stable Diffusion XL, Flux.1, Z-Image, and Qwen-Image.
- Web UI: A user-friendly interface for configuring training parameters without touching JSON files.
- Cloudflared Integration: Bypass Colab's local networking restrictions to access the full UI features.
How to Train LoRAs
Once you have the UI running, follow these steps to start your training:
- Upload Dataset: Prepare your training images and corresponding .txt caption files (ensure each pair shares the same filename, e.g., image1.jpg and image1.txt). Upload them to the toolkit's dataset directory via the UI.
- Create a New Job: In the interface, add a "New Job." Select your desired base model (e.g., Flux or SDXL), point to your uploaded dataset, and provide an example prompt for validation.
- Start the Training Queue: Note that jobs are not automatically started upon creation. You must navigate to the Training Queue and manually start the jobs you've created to begin the training process.
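Before uploading, it can help to verify that every image in your dataset has a matching caption file. The check_captions helper below is a hypothetical utility for this, not part of AI Toolkit:

```python
from pathlib import Path

IMAGE_EXTS = {".jpg", ".jpeg", ".png", ".webp"}

def check_captions(dataset_dir):
    """Return the names of images that lack a same-named .txt caption."""
    missing = []
    for img in sorted(Path(dataset_dir).iterdir()):
        if img.suffix.lower() in IMAGE_EXTS and not img.with_suffix(".txt").exists():
            missing.append(img.name)
    return missing

# Example: list any unpaired images before uploading
# print(check_captions("/content/my_dataset"))
```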
What is AI Toolkit?
AI Toolkit is a powerful, all-in-one training suite designed by Ostris for fine-tuning the latest diffusion models. It supports high-performance models like Flux.1, SDXL, Z-Image, and Qwen-Image. Its standout feature is the built-in web interface, which simplifies the complex process of setting up training parameters, managing datasets, and monitoring progress without needing to manually edit JSON configuration files.
What is Google Colab?
Google Colab is a cloud-based service that allows you to write and execute Python code in your browser. It is particularly popular in the AI community because it provides free or low-cost access to powerful GPUs like the NVIDIA T4, L4, and A100. By running AI Toolkit on Colab, you can perform heavy-duty LoRA training without needing to invest in expensive local hardware.
