
Create Your Own ChatGPT with One API and NextChat or Lobe Chat

Introduction

ChatGPT, a revolutionary generative AI, took the world by storm in 2023, transforming the way we work, study, and interact with technology. Its capabilities in generating human-like text, translating languages, and answering complex questions have captivated users worldwide. While ChatGPT has garnered immense popularity, many are unaware that OpenAI offers a GPT-model API service, enabling developers to create their own ChatGPT-like AI assistants. Recognizing this opportunity, numerous AI service providers have emerged, offering their own LLM (Large Language Model) APIs. This opens up an exciting possibility: a more comprehensive and versatile ChatGPT-style application that can reach multiple LLM models from different providers through a single API key and entry point.

To harness the full potential of these diverse LLM models, a unified gateway is essential. This gateway would serve as a central hub, seamlessly managing and accessing multiple language models from various providers. Additionally, a customizable ChatGPT-like web client would provide an intuitive user interface, allowing users to interact with the models in a user-friendly manner. Fortunately, the open source community has stepped up to the challenge, providing a solution that empowers developers to build their own ChatGPT-like AI assistants with ease.

In this blog post, we will walk through building a ChatGPT-like AI assistant using One API, an open-source gateway that manages and distributes access to many LLM providers behind a single OpenAI-compatible endpoint. We will also look at popular ChatGPT-like web clients, NextChat and Lobe Chat, and show how to connect them to that gateway. By the end of this post, you will have a clear picture of how to build a powerful and versatile AI assistant that leverages the capabilities of multiple LLM models.

One API: Your Unified Gateway to Leading AI Language Models

One API is a robust OpenAI API management and distribution system that stands out for its versatility and user-friendly approach. With One API, you have the luxury of accessing a multitude of large language models (LLMs) through a single, streamlined interface. Whether you're using OpenAI's SDK or the standard HTTP request protocol, One API simplifies the process by requiring just one API key.
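
To get a feel for what "one key, one entry point" means in practice, here is a minimal sketch that calls One API's OpenAI-compatible endpoint with the official openai Python SDK. The base URL, port, token, and model name are assumptions for illustration; adjust them to your own deployment and configured channels.

# A minimal sketch: calling a One API deployment through the openai SDK.
# The token, base URL, and model name below are placeholders.
from openai import OpenAI

client = OpenAI(
    api_key="sk-your-one-api-token",        # a token generated in One API
    base_url="http://localhost:3000/v1",    # One API's entry point
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",                  # any model you have configured
    messages=[{"role": "user", "content": "Hello from my own ChatGPT!"}],
)
print(response.choices[0].message.content)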

Key Features:

  • Multi-Provider Support: Connect with various AI providers using one unified system.
  • Streaming Support: Enjoy real-time interaction with most models, thanks to the streaming support.
  • Built-In Billing: Control your costs with a token and request-based billing system, where the admin can set individual prices for each model and provider.
  • Pay As You Go: Users benefit from a flexible, usage-based pricing model that can be more cost-effective than fixed subscriptions such as ChatGPT Plus.

NextChat: A Customizable ChatGPT-Like Web Client


NextChat, formerly known as ChatGPT Next Web, is a cross-platform web client for OpenAI-compatible services. It goes beyond the standard chat interface by offering enhanced features for those who want a more tailored AI interaction.

Key Features:

  • Custom Instructions and Prompts: Tailor your AI experience with custom GPT instructions and prompt shortcuts.
  • Built-in Agents: Enjoy a variety of built-in agents with different instructions and prompts.
  • Adjustable Parameters: Fine-tune your conversations by setting parameters such as temperature, max_tokens, top_p, and attached message count (see the sketch after this list).
  • Greater Customization: NextChat provides a more personalized touch compared to the standard ChatGPT, catering to users who want more control over their AI interactions.
  • PWA Support: The web app can be installed as a PWA (Progressive Web App) on your device, allowing for a more native experience.
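
Most of these knobs map directly onto parameters of the underlying chat completion request. Here is a rough sketch of what NextChat's settings translate to on the wire, again going through One API; the model name, token, and values are illustrative, and the attached message count is handled client-side by how much history the client resends.

# A sketch of the request parameters a client like NextChat controls.
# Model name, token, base URL, and values are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-your-one-api-token", base_url="http://localhost:3000/v1")

response = client.chat.completions.create(
    model="gpt-4",
    messages=[
        {"role": "system", "content": "You are a concise coding assistant."},  # custom instruction
        {"role": "user", "content": "Explain Docker volumes in two sentences."},
    ],
    temperature=0.7,   # sampling randomness
    top_p=0.9,         # nucleus sampling cutoff
    max_tokens=256,    # upper bound on the reply length
)
print(response.choices[0].message.content)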

Lobe Chat: Your Sleek AI Communication Platform


Lobe Chat is an open-source chatbot framework that redefines AI interactions. It's a powerful, user-friendly platform that offers:

  • Modern UI: Enjoy a sleek, intuitive interface that's easy on the eyes.
  • GPT-4-Vision Support: Engage in image-based conversations with GPT-4-Vision (see the sketch after this list).
  • Multimodal Capabilities: Interact using text, voice, and more for a richer experience.
  • Agent Store: You can install the shared agents from the agent store, or create and share your own agents.
  • Custom Instructions: Create your own custom instructions for a more personalized AI experience.
  • Plugin System: Easily extend functionality with custom plugins such as search, mind map, website crawler, and more.
  • Speech Synthesis: Engage with AI in natural, spoken dialogue.
  • PWA Support: The web app can be installed as a PWA (Progressive Web App) on your device, allowing for a more native experience.
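
Because Lobe Chat speaks the same OpenAI-compatible protocol, an image-based conversation ultimately becomes a chat completion request with image content. Below is a minimal sketch of such a request sent through One API; the model name and image URL are assumptions, and you need a vision-capable model configured as a channel.

# A sketch of a vision (image-based) request via the OpenAI-compatible API.
# Token, base URL, model, and image URL are placeholders.
from openai import OpenAI

client = OpenAI(api_key="sk-your-one-api-token", base_url="http://localhost:3000/v1")

response = client.chat.completions.create(
    model="gpt-4-vision-preview",
    messages=[{
        "role": "user",
        "content": [
            {"type": "text", "text": "What is in this picture?"},
            {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
        ],
    }],
    max_tokens=300,
)
print(response.choices[0].message.content)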

Combining One API with ChatGPT-Like Web Clients: A Personal AI Powerhouse

By integrating One API with NextChat or Lobe Chat, you can deploy your own private ChatGPT instance using your API key and custom GPT prompts. This combination offers a unique opportunity:

  • One Key, Multiple Providers: Access a variety of LLM providers and models with a single API key (see the sketch after this list).
  • Private and Shareable: Set up a private ChatGPT for yourself or invite friends and family to enjoy your custom AI platform.
  • Easy Deployment: Thanks to Docker, both projects are containerized, allowing for straightforward deployment with docker-compose.
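
Routing by model name is what makes the "one key" idea work: One API looks at the model field of each request and forwards it to whichever channel serves that model. A small sketch, assuming you have configured channels for the example model names below (they are placeholders, not requirements):

# One key, multiple providers: the same client and token, different models.
# The model names must match channels configured in your One API instance.
from openai import OpenAI

client = OpenAI(api_key="sk-your-one-api-token", base_url="http://localhost:3000/v1")

for model in ["gpt-3.5-turbo", "gemini-pro", "claude-2"]:
    reply = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": "Say hi and tell me which model you are."}],
    )
    print(f"{model}: {reply.choices[0].message.content}")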

Deploying One API and NextChat

To get started with your personalized ChatGPT experience, you'll need to:

  1. Install Docker and Docker Compose on your system.
  2. Clone the repositories for One API and NextChat.
  3. Configure the settings according to your preferences and the provided documentation.
  4. Use docker-compose to deploy each service, and you're ready to engage with your customized AI.

Below is an example docker-compose.yml that runs One API (backed by MySQL and Redis) alongside NextChat, with Lobe Chat included as an optional alternative:
version: "3.8"

services:
  # https://github.com/songquanpeng/one-api/blob/main/docker-compose.yml
  one-api:
    image: justsong/one-api:latest # Replace it with `justsong/one-api-en:latest` to use the English version
    container_name: one-api
    restart: always
    command: --log-dir /app/logs
    ports:
      - "3000:3000"
    volumes:
      - ./data/oneapi:/data
      - ./logs:/app/logs
    environment:
      - SQL_DSN=oneapi:123456@tcp(db:3306)/one-api  # Modify this line, or comment it out to use SQLite as the database
      - REDIS_CONN_STRING=redis://redis
      - SESSION_SECRET=random_string  # Replace with a random string
      - TZ=Asia/Shanghai  # Use your own timezone
    depends_on:
      - redis
      - db
    healthcheck:
      test: [ "CMD-SHELL", "wget -q -O - http://localhost:3000/api/status | grep -o '\"success\":\\s*true' | awk -F: '{print $2}'" ]
      interval: 30s
      timeout: 10s
      retries: 3

  redis:
    image: redis:latest
    container_name: redis
    restart: always

  db:
    image: mysql:8.2.0
    restart: always
    container_name: mysql
    volumes:
      - ./data/mysql:/var/lib/mysql
    ports:
      - '3306:3306'
    environment: # Init the database with these environment variables
      TZ: Asia/Shanghai
      MYSQL_ROOT_PASSWORD: 'OneAPI@justsong'
      MYSQL_USER: oneapi
      MYSQL_PASSWORD: '123456'
      MYSQL_DATABASE: one-api

  # https://github.com/ChatGPTNextWeb/ChatGPT-Next-Web/blob/main/docker-compose.yml
  chatgpt-next-web:
    image: yidadaa/chatgpt-next-web
    container_name: chatgpt-next-web
    restart: always
    ports:
      - "3001:3000"

  # Alternatively, you can deploy Lobe Chat instead of NextChat. It's a ChatGPT-like web client with a modern UI, GPT-4-Vision support, and active development.
  lobe-chat:
    image: lobehub/lobe-chat  # Official Lobe Chat image
    container_name: lobe-chat  # Optional: Specify a custom container name
    environment:  # Define environment variables
      - CUSTOM_MODELS=gemini-pro
    #   - NODE_ENV=production
    #   - HOSTNAME=0.0.0.0
    #   - PORT=3210
    #   - ACCESS_CODE=lobe66
    #   - NEXT_PUBLIC_CUSTOM_MODELS=
    #   - OPENAI_API_KEY=
    #   - OPENAI_PROXY_URL=
    #   - USE_AZURE_OPENAI=
    #   - AZURE_API_KEY=
    #   - AZURE_API_VERSION=
    ports:
      - "3002:3210"  # Map the container's port to the host
    user: "1001"  # Run the container as the nextjs user
    # volumes:  # Define any volumes if needed
      # - ./local/path:/app/path  # Uncomment and modify if you need to bind mount local directories
    restart: unless-stopped  # Optional: Restart policy

If you are using Nginx as a reverse proxy, you may need to add the following configuration to your Nginx configuration file to ensure streaming output works correctly:

server {
    # listen, server_name, and TLS settings omitted; adjust to your site

    location / {
        proxy_pass http://127.0.0.1:3001;  # your NextChat (or One API) upstream
        proxy_http_version 1.1;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection "upgrade";
        chunked_transfer_encoding off;
        proxy_cache off;
        proxy_buffering off;
        proxy_request_buffering off;
    }
}

Once the stack is up (for example with docker compose up -d), the services are reachable on your machine: One API at http://localhost:3000, NextChat at http://localhost:3001, and Lobe Chat (if you enabled it) at http://localhost:3002.

Please follow the documentation of One API to configure the settings and deploy them properly.

How to Use Your Custom ChatGPT

After successfully deploying your services, the first step is to configure your preferred Large Language Models (LLMs) and their respective providers as channels within One API. Next, generate a token, which will serve as your API key. Enter this key, together with One API's base URL (its entry point), in the NextChat settings to link the two services.
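
Before wiring the token into a client, it can be worth a quick sanity check that it actually reaches your One API instance. A small sketch (token and URL are placeholders): list the models the token can access.

# Sanity check: list the models reachable with your One API token.
# Token and base URL are placeholders for your own deployment.
from openai import OpenAI

client = OpenAI(api_key="sk-your-one-api-token", base_url="http://localhost:3000/v1")

for model in client.models.list():
    print(model.id)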

Once the configuration is complete, you are ready to engage in conversation with your personalized AI.

For added convenience, you can designate NextChat as the default chat client in One API's settings. By doing so, a shortcut for automatic configuration of NextChat will be made accessible on One API's token management page, streamlining the process of connecting and using your AI platform.

Configuring Lobe Chat is a similar process: set the API key and entry point in the Lobe Chat settings, or pre-fill them through the OPENAI_API_KEY and OPENAI_PROXY_URL environment variables shown (commented out) in the docker-compose file above. Once configured, you can access your AI platform by visiting the Lobe Chat URL. There is currently no one-click configuration shortcut for Lobe Chat.

Conclusion

By combining One API with NextChat or Lobe Chat, you can create a private, customizable, and cost-effective ChatGPT experience for personal or shared use. This combination lets you tap into multiple LLM models through a single endpoint, giving you a more comprehensive and versatile AI assistant. With the help of these open source projects, you can build your own ChatGPT-like assistant with ease. Happy chatting!