OpenAI's o3-mini: Where to Find this Powerful Reasoning Model

Introduction

OpenAI has recently released its latest large language model (LLM), o3-mini, designed to provide enhanced reasoning capabilities with improved efficiency and cost-effectiveness. It's the most cost-efficient model in OpenAI's reasoning series, making it more accessible to a wider audience. This article explores the various platforms where developers and users can access this powerful model.

OpenAI ChatGPT

o3-mini is available on ChatGPT for all user tiers. Free users can access o3-mini by selecting "Reason" in the message composer or regenerating a response. This marks the first time a reasoning model has been made available to free users in ChatGPT. ChatGPT Plus, Team, and Pro users can select "o3-mini" or "o3-mini-high" from the model picker dropdown list, where it replaces the o1-mini model. Pro users have unlimited access to both o3-mini and o3-mini-high, while Plus and Team users have a daily limit of 150 messages, increased from the 50 messages/day limit with o1-mini.

The o3-mini model offers a reasoning effort setting that balances response speed against reasoning depth. In the API this is exposed as a parameter with low, medium, and high levels; in ChatGPT it surfaces as the choice between o3-mini (medium effort) and o3-mini-high. Low effort suits quick, straightforward answers, while high effort is better reserved for complex queries that require deeper analysis.

OpenAI states that o3-mini provides a specialized alternative for technical domains that require speed and precision. With medium reasoning effort, it matches the performance of o1 in science, math, and coding while delivering faster responses. In fact, o3-mini is 24% faster than o1-mini in A/B testing, with an average response time of 7.7 seconds compared to o1-mini's 10.16 seconds.

o3-mini access is included in the existing ChatGPT subscription tiers at their standard prices; the message limits for Plus, Team, and Pro users are described above, though OpenAI has not published a specific o3-mini message cap for free-tier users.

OpenAI API

o3-mini is available through OpenAI's API, offering developers access to its advanced reasoning capabilities for various applications. It is the first small reasoning model to support highly requested developer features, including function calling, developer messages, and Structured Outputs, making it production-ready.
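
As a minimal sketch of what a call might look like with the official openai Python SDK (assuming an OPENAI_API_KEY in the environment; parameter names should be checked against the current API reference):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="medium",  # "low", "medium", or "high"
        messages=[
            # o-series models take a "developer" message in place of the
            # classic "system" message for instructions.
            {"role": "developer", "content": "You are a concise math tutor."},
            {"role": "user", "content": "Explain why the sum of two odd numbers is even."},
        ],
    )

    print(response.choices[0].message.content)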

Features

o3-mini boasts several key features that enhance AI reasoning and customization:

  • Reasoning effort parameter: Lets developers set low, medium, or high reasoning levels, trading reasoning depth against response latency.
  • Structured outputs: The model now supports JSON Schema constraints, making it easier to generate well-defined, structured outputs for automated workflows (see the sketch after this list).
  • Functions and Tools support: o3-mini seamlessly integrates with functions and external tools, making it ideal for AI-powered automation.
  • Developer messages: The "role": "developer" attribute replaces the system message in previous models, offering more flexible and structured instruction handling.
  • Context window and output limit: o3-mini has a 200,000-token context window and supports up to 100,000 output tokens, far more than GPT-4o's roughly 16,000-token output cap, allowing it to generate longer, more comprehensive responses.
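
As a hedged sketch of how Structured Outputs and the reasoning effort parameter can be combined through the openai Python SDK (the schema, field names, and prompt below are illustrative assumptions, not an official example):

    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY in the environment

    # Illustrative JSON Schema constraining the model to a fixed answer shape.
    schema = {
        "type": "object",
        "properties": {
            "answer": {"type": "string"},
            "confidence": {"type": "number"},
        },
        "required": ["answer", "confidence"],
        "additionalProperties": False,
    }

    response = client.chat.completions.create(
        model="o3-mini",
        reasoning_effort="high",
        response_format={
            "type": "json_schema",
            "json_schema": {"name": "graded_answer", "strict": True, "schema": schema},
        },
        messages=[
            {"role": "developer", "content": "Answer in the requested JSON format."},
            {"role": "user", "content": "Is 2^31 - 1 a prime number?"},
        ],
    )

    print(response.choices[0].message.content)  # a JSON string matching the schema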

Pricing

The pricing for o3-mini on the OpenAI API is as follows:

Model                           Input               Cached input        Output
o3-mini (o3-mini-2025-01-31)    $1.10 / 1M tokens   $0.55 / 1M tokens   $4.40 / 1M tokens

This pricing makes o3-mini significantly more cost-efficient than OpenAI's larger o1 reasoning model, and its per-token rates also come in below general-purpose models such as GPT-4o.
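
As a rough illustration of what these rates mean per request (a back-of-the-envelope estimate only; actual billing depends on caching, batching, and any pricing changes):

    # Cost estimate at the published o3-mini rates (USD per 1M tokens).
    INPUT_PER_M = 1.10
    CACHED_PER_M = 0.55
    OUTPUT_PER_M = 4.40

    def estimate_cost(input_tokens, output_tokens, cached_tokens=0):
        """Estimate the USD cost of a single o3-mini request."""
        fresh_input = input_tokens - cached_tokens
        return (fresh_input * INPUT_PER_M
                + cached_tokens * CACHED_PER_M
                + output_tokens * OUTPUT_PER_M) / 1_000_000

    # Example: 10,000 input tokens (2,000 of them cached) and 3,000 output tokens.
    print(f"${estimate_cost(10_000, 3_000, cached_tokens=2_000):.4f}")  # about $0.0231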

Access

API access to o3-mini is currently limited to developers in API usage tiers 3, 4, and 5.

GitHub Marketplace Models

o3-mini is also available on GitHub Copilot and GitHub Models. GitHub Copilot Pro, Business, and Enterprise users can access it via the model picker in Visual Studio Code and in Copilot Chat on github.com, with support for Visual Studio and JetBrains IDEs expected soon. Paid Copilot subscribers are limited to 50 o3-mini messages every 12 hours.

To access o3-mini on GitHub Marketplace Models, developers need a valid GitHub account and must agree to the Marketplace Terms of Service. Accessing o3-mini through GitHub Copilot requires a GitHub Copilot license.

Developers can use the GitHub Models playground to experiment with o3-mini and compare it with other models from Cohere, DeepSeek, Meta, and Mistral. This allows developers to explore the model's capabilities and evaluate its performance for different tasks.
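
As a sketch, GitHub Models can also be called programmatically with an OpenAI-compatible client pointed at GitHub's inference endpoint and authenticated with a personal access token; the endpoint URL and token handling below are assumptions to verify against GitHub's current documentation:

    import os
    from openai import OpenAI

    # Assumed GitHub Models inference endpoint; authenticate with a GitHub
    # personal access token (read here from GITHUB_TOKEN).
    client = OpenAI(
        base_url="https://models.inference.ai.azure.com",
        api_key=os.environ["GITHUB_TOKEN"],
    )

    response = client.chat.completions.create(
        model="o3-mini",
        messages=[{"role": "user", "content": "Summarize what a binary heap is."}],
    )
    print(response.choices[0].message.content)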

Azure OpenAI Service

Microsoft Azure OpenAI Service also provides access to o3-mini. Developers can sign up for Azure AI Foundry to access o3-mini and leverage its capabilities for various applications. Azure OpenAI Service adds enterprise-grade security, compliance, and data privacy controls, backed by a 99.9% availability SLA.

To access o3-mini on Azure OpenAI Service, users need a Microsoft Azure account and must register for access through the Azure AI Foundry platform. Once access is granted, developers can update their existing integrations or create new automation pipelines leveraging JSON Schema and reasoning control. Azure also provides tools to monitor performance, allowing developers to track latency metrics and optimize system workflows.
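
A minimal sketch of calling an o3-mini deployment with the openai SDK's Azure client is shown below; the endpoint, API version, and deployment name are placeholders to replace with values from your own Azure resource:

    import os
    from openai import AzureOpenAI

    # Placeholders: your Azure OpenAI endpoint, key, and a supported API version.
    client = AzureOpenAI(
        azure_endpoint="https://<your-resource>.openai.azure.com",
        api_key=os.environ["AZURE_OPENAI_API_KEY"],
        api_version="2024-12-01-preview",  # assumption: confirm the current version
    )

    response = client.chat.completions.create(
        model="o3-mini",           # the name of your o3-mini deployment
        reasoning_effort="medium",
        messages=[{"role": "user", "content": "Draft a regex that matches ISO 8601 dates."}],
    )
    print(response.choices[0].message.content)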

Azure OpenAI Service provides o3-mini in the following regions:

Region            Model      Availability
East US 2         o3-mini    Global Standard
Sweden Central    o3-mini    Global Standard

OpenRouter

OpenRouter provides access to a wide range of LLMs from different providers through a single, OpenAI-compatible API. Developers can create an OpenRouter account and access these models without needing a separate account with each underlying provider; whether o3-mini itself is listed depends on OpenRouter's current model catalog.
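
Because OpenRouter exposes an OpenAI-compatible endpoint, a call could look like the sketch below; the model slug is an assumption and should be checked against OpenRouter's catalog:

    import os
    from openai import OpenAI

    # OpenRouter's OpenAI-compatible endpoint; authenticate with an OpenRouter key.
    client = OpenAI(
        base_url="https://openrouter.ai/api/v1",
        api_key=os.environ["OPENROUTER_API_KEY"],
    )

    response = client.chat.completions.create(
        model="openai/o3-mini",  # assumed slug; verify it in OpenRouter's model list
        messages=[{"role": "user", "content": "Give three uses for a reasoning-focused LLM."}],
    )
    print(response.choices[0].message.content)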

Performance Benchmarks

o3-mini has been evaluated on various benchmarks, demonstrating its capabilities in coding, STEM reasoning, and logical problem-solving. Here's a summary of its performance:

  • AIME (American Invitational Mathematics Examination): o3-mini with medium reasoning effort matches the performance of o1, while with high reasoning effort, it outperforms both o1-mini and o1.
  • GPQA (Graduate-Level Google-Proof Q&A): o3-mini with medium reasoning effort matches o1's performance, and with high reasoning effort, it demonstrates strong performance in detailed and factual question-answering tasks.
  • FrontierMath: o3-mini with high reasoning effort performs better than its predecessor on FrontierMath, solving over 32% of problems on the first attempt, including more than 28% of the challenging (T3) problems.
  • Codeforces: o3-mini achieves progressively higher Elo scores with increased reasoning effort, outperforming o1-mini and matching o1's performance with medium reasoning effort.
  • SWE-bench Verified: o3-mini is OpenAI's highest-performing released model on SWE-bench Verified.
  • LiveBench Coding: o3-mini surpasses o1-high even at medium reasoning effort, highlighting its efficiency in coding tasks.

Safety and Risk Assessment

OpenAI's Safety Advisory Group (SAG) has classified the o3-mini model as medium risk overall under the Preparedness Framework. It scores medium risk for persuasion, CBRN (chemical, biological, radiological, nuclear), and model autonomy, and low risk for cybersecurity.

Use Cases

o3-mini's advanced reasoning capabilities make it suitable for a wide range of applications. Here are a few examples:

  • Code generation: o3-mini can generate code in various programming languages, assist with debugging, and provide code suggestions.
  • Question answering: o3-mini can answer complex questions accurately and provide detailed explanations.
  • Problem-solving: o3-mini can solve logical problems, mathematical equations, and scientific queries.
  • AI-powered automation: o3-mini can be integrated with external tools and APIs to automate tasks and workflows.

Conclusion

OpenAI's o3-mini is a powerful and versatile reasoning model that offers significant improvements in efficiency and cost-effectiveness compared to its predecessors. Its availability across various platforms, including ChatGPT, OpenAI API, GitHub Marketplace Models, and Azure OpenAI Service, makes it accessible to a wide range of developers and users. The model's strong performance in various benchmarks highlights its capabilities in coding, STEM reasoning, and logical problem-solving.

The introduction of features like the reasoning effort parameter, Structured Outputs, and support for functions and tools further enhances o3-mini's utility for developers. These features allow for greater control over the model's reasoning process and enable seamless integration with external tools and automated workflows.

While o3-mini currently lacks support for image processing, its text-only processing capability, combined with its advanced reasoning abilities and cost-effectiveness, makes it a compelling choice for various applications. The availability of o3-mini to free users in ChatGPT is a significant step towards democratizing access to powerful AI models, potentially driving innovation in various fields, such as coding, STEM research, and AI-powered automation.
