Omnifact supports multiple model families from top providers like OpenAI, Anthropic, Google, and Mistral. The “best” model isn’t always the most powerful one—it depends on your specific task, speed requirements, and hosting preferences. You can switch models directly in the chat interface at any time to find the one that works best for your current needs.

How to switch models

You can select the AI model before you send a message or at any point during a conversation, using the model selector in either the chat header or the chat input area. See Selecting an AI Model for more details, and Model Management for which models are available to your team. Choose your model based on what you need to achieve.

Flagship

Best for: Complex writing, nuanced analysis, creativity, and high-quality output.

  • GPT-5.2: OpenAI’s most capable model for general tasks.
  • Claude 4.6 Sonnet: Anthropic’s balanced model, excellent for writing and coding.
  • Gemini 3 Pro: Google’s top-tier model with strong reasoning capabilities.
  • Mistral Large: Mistral’s flagship model, offering strong performance and EU hosting options.

Fast & Efficient

Best for: Quick questions, summaries, simple tasks, and speed.

  • GPT-5 mini: Fast and cost-effective for everyday tasks.
  • Claude 4.5 Haiku: Extremely fast and efficient, great for simple queries.
  • Gemini 2.5 Flash: Optimized for high speed and low latency.
  • GPT-4o mini: A capable small model for general use.

Reasoning

Best for: Math, logic puzzles, complex coding architecture, and multi-step problem solving.

  • o3 / o3-mini: OpenAI’s reasoning-focused models that “think” before responding.
  • o4-mini: Next-generation efficient reasoning.
  • Gemini 3 Pro: Strong reasoning capabilities across modalities.

Image Generation

Best for: Creating visual assets from text descriptions.

  • GPT-Image-1: Best for prompt adherence, text rendering, and polished final assets.
  • Nano Banana Pro: Best for higher-quality artistic outputs and demanding creative workflows.
  • Nano Banana: Best for speed, quick iteration, and playful stylized outputs.

Some older models may still appear in your workspace for compatibility. For new work, prefer the newer GPT-5, Claude 4, Gemini 2.5/3, and Mistral models.

Strategy: Combining Models

You don’t have to stick to one model for an entire conversation. A powerful workflow involves combining models:
  1. Draft with Speed: Use a “Fast & Efficient” model like GPT-5 mini or Claude 4.5 Haiku to create outlines, brainstorm ideas, or draft initial text. This is quick and saves quota.
  2. Refine with Quality: Switch to a “Flagship” model like GPT-5.2 or Claude 4.6 Sonnet to rewrite, polish, or critique the work.
  3. Solve Hard Problems with Reasoning: If you hit a logic blocker, switch to o3 or Gemini 3 Pro to work through the specific problem, then switch back.
Common Pitfall: Many users stick to the default model for everything. While the default is capable, this often means using a “sledgehammer” model for simple tasks (wasting quota and speed) or struggling with complex tasks that a reasoning model could solve easily.

How to read model names

Model names often follow patterns that hint at their capability and speed.
Pattern          Meaning                    Examples
Mini / Nano      Faster, cheaper, lighter   GPT-5 mini, GPT-5 nano
Flash / Haiku    Speed and efficiency       Gemini 2.5 Flash, Claude 4.5 Haiku
Pro / Sonnet     Mainline capable models    Gemini 2.5 Pro, Claude 4.6 Sonnet
Opus             Highest capability tier    Claude 4.6 Opus
o3 / o4          Reasoning-focused models   o3, o3-mini, o4-mini
The “o” in GPT-4o stood for “omni,” signaling a multimodal flagship model, but naming conventions vary by provider. Always check the model description if unsure.

Model Hosting, Privacy, and Compliance

The Privacy Filter protects you regardless of the model. Omnifact’s Privacy Filter masks sensitive data before it reaches any model, adding a layer of protection to every interaction no matter where the model is hosted.
When choosing a model, consider your data residency requirements:
  • EU-hosted by default: Azure OpenAI, Mistral AI
  • US-hosted: OpenAI, Anthropic, Google
  • Custom providers: Hosting depends on your specific workspace configuration.
In the model picker, look for the hosting-region indicator flag to confirm where a model is processed.

Image Generation Restrictions

When the Privacy Filter is enabled, image generation may be restricted to specific models or providers that comply with your organization’s data privacy standards (often requiring EU hosting). If you cannot select certain image models, this compliance setting is likely active.