
3.3 Multi‑Model Management (Model Manager)

Pop provides a powerful and flexible multi‑model management system that lets you switch freely among model providers such as OpenAI, DeepSeek, Ollama local models, and custom enterprise API models. This chapter explains how to configure, manage, switch between, and optimize these models.


🎯 Why Do You Need Multiple Models?

Different AI models excel in different task scenarios:

  • OpenAI (GPT series): general ability, reasoning, stability
  • DeepSeek series: math and logic, coding, stronger reasoning
  • Local models (Ollama / LM Studio): offline use, privacy, low cost
  • Enterprise self‑hosted models: data privacy, custom fine‑tuning
  • Multi‑vendor API pools: choose the optimal model per task

Therefore, Pop allows you to configure multiple models simultaneously and switch freely in each session/window.
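As a sketch of what per‑task model selection looks like in practice, the pairing above can be expressed as a simple routing table. The task labels and model names below are illustrative examples, not Pop's actual configuration:

```python
# Illustrative task-to-model routing table (names are examples, not Pop's config).
TASK_MODEL_MAP = {
    "coding": "deepseek-r1",   # strong at code and multi-step reasoning
    "creative": "gpt-4o",      # strong general and creative ability
    "private": "llama3",       # local model for privacy-sensitive work
}

def pick_model(task: str, default: str = "gpt-4o-mini") -> str:
    """Return the preferred model for a task, falling back to a cheap default."""
    return TASK_MODEL_MAP.get(task, default)
```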


🧱 Model Management Interface Structure

The model manager typically includes:

  1. Model List (configured models)
  2. Add New Model
  3. Model Details / Edit
  4. Global Default Model
  5. Enable / Disable Switch

You can view all model statuses, test connectivity, and adjust settings.


🛠 Model Types & Configuration

Pop supports the following categories, each with slightly different configuration requirements.


1) OpenAI Models

Supported:

  • GPT‑4 series
  • GPT‑4o / GPT‑4o mini
  • GPT‑3.5 series
  • Custom OpenAI‑compatible endpoints

Required Fields:

  • API Key
  • Base URL (optional, for OpenAI‑compatible services)
  • Model name (e.g., gpt-4o)
  • Default parameters (temperature, max_tokens, etc.)

Optional Features:

  • Proxy support
  • Rate‑limit protection
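The required and optional fields above map naturally onto a single configuration record. A minimal sketch, assuming illustrative field names rather than Pop's internal schema:

```python
def make_openai_config(api_key: str,
                       model: str = "gpt-4o",
                       base_url: str = "https://api.openai.com/v1",
                       temperature: float = 0.7,
                       max_tokens: int = 1024) -> dict:
    """Bundle the fields needed for an OpenAI or OpenAI-compatible model entry."""
    return {
        "api_key": api_key,
        "base_url": base_url,  # override this for OpenAI-compatible services
        "model": model,
        "params": {"temperature": temperature, "max_tokens": max_tokens},
    }
```

Only the API key is mandatory; everything else has a sensible default that a per‑session override can still change.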

2) DeepSeek Models

Supported:

  • DeepSeek‑R1
  • DeepSeek‑V3
  • All DeepSeek official API models

Required Fields:

  • API Key
  • Base URL (optional)
  • Model name (e.g., deepseek-r1)

Best For:

  • Math reasoning
  • Code generation
  • Multi‑step chain‑of‑thought tasks
  • High‑complexity logic
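DeepSeek's official API follows the OpenAI‑compatible chat‑completions format, so in practice only the base URL and model name change. A sketch of assembling such a request (nothing is sent here; the base URL shown is an assumption you should verify against DeepSeek's documentation):

```python
import json

def build_chat_request(api_key: str,
                       model: str = "deepseek-r1",
                       base_url: str = "https://api.deepseek.com"):
    """Assemble URL, headers, and JSON body for an OpenAI-style chat call."""
    url = base_url.rstrip("/") + "/chat/completions"
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": "Hello"}],
    })
    return url, headers, body
```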

3) Local Models (Ollama / LM Studio)

Pop supports local LLMs — ideal for offline and privacy‑sensitive scenarios.

Required Fields:

  • Local server address (default http://localhost:11434/)
  • Model name (e.g., llama3, qwen2.5, mistral)

Features:

  • Fully offline
  • Privacy‑safe
  • Speed depends on local hardware
  • Zero cost

Best For:

  • Enterprise intranet
  • Personal knowledge workflows
  • Document summarization and daily tasks
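Ollama's local server exposes a small REST API on the default address above. A sketch of building a non‑streaming generation request against it (the `/api/generate` route is Ollama's standard endpoint; check your installed version's docs):

```python
def ollama_request(model: str,
                   prompt: str,
                   host: str = "http://localhost:11434") -> tuple[str, dict]:
    """Build the URL and payload for a non-streaming Ollama generate call."""
    url = host.rstrip("/") + "/api/generate"
    payload = {"model": model, "prompt": prompt, "stream": False}
    return url, payload
```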

4) Enterprise / Custom API Models

For custom endpoints such as:

  • OpenAI‑compatible APIs
  • Custom embedding services
  • Enterprise gateways (ChatGLM, Qwen, DeepSeek private edition)

Required Fields:

  • API Key
  • Base URL (required)
  • Model name
  • Custom headers (optional)

Use Cases:

  • Corporate intranet deployment
  • Auditing & logging
  • Fully controlled model output
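When calling an enterprise gateway, custom headers are typically merged on top of the standard bearer‑auth header. A minimal sketch, with example header names that are purely illustrative:

```python
from typing import Optional

def gateway_headers(api_key: str, custom: Optional[dict] = None) -> dict:
    """Standard bearer auth plus any gateway-specific custom headers."""
    headers = {
        "Authorization": f"Bearer {api_key}",
        "Content-Type": "application/json",
    }
    if custom:
        headers.update(custom)  # e.g. an internal audit or tenant-ID header
    return headers
```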

🧩 Model Parameters (Per‑Session Customization)

Each AI session can have its own model parameters:

  • temperature: randomness; higher values produce more creative output
  • top_p: nucleus sampling range
  • presence_penalty: discourages revisiting the same topics
  • frequency_penalty: discourages repeating the same content
  • max_tokens: maximum output length

Settings apply only to the current session.
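One way to think about per‑session parameters is as an override layer: session values win, and anything unset falls back to the model's defaults. A sketch of that merge, with assumed default values:

```python
# Assumed model-level defaults for illustration only.
MODEL_DEFAULTS = {"temperature": 0.7, "top_p": 1.0, "max_tokens": 1024}

def effective_params(session_overrides: dict) -> dict:
    """Merge session-level overrides over the model's default parameters."""
    return {**MODEL_DEFAULTS, **session_overrides}
```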


🔄 Switching Models in a Conversation

Switching models is easy:

  • Open the Model Selector
  • Choose a model from the list
  • Takes effect immediately

Examples:

  • Window A uses DeepSeek
  • Window B uses GPT‑4o
  • Window C uses a local LLM

Perfect for comparing outputs.


🧪 Testing Model Connectivity

You can click Test Connection to:

  • Validate request success
  • Check response time
  • Confirm API Key correctness

If the test fails, a relevant error message (401 Unauthorized, 404 Not Found, SSL errors, etc.) is shown.
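Under the hood, a connection test boils down to sending one cheap request and mapping the failure mode to a readable hint. A sketch of the status‑code mapping (the wording is illustrative, not Pop's actual messages):

```python
def diagnose(status_code: int) -> str:
    """Map an HTTP status from a test request to a human-readable hint."""
    hints = {
        200: "OK: model responded",
        401: "401 Unauthorized: check the API key",
        404: "404 Not Found: check the Base URL or model name",
        429: "429 Too Many Requests: rate limit hit",
    }
    return hints.get(status_code, f"Unexpected status {status_code}")
```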


⭐ Best Practices & Recommendations

  • Coding / debugging: DeepSeek‑R1 / GPT‑4
  • Creative writing: GPT‑4o / GPT‑4o mini
  • Translation: GPT‑4o
  • Math reasoning: DeepSeek‑R1
  • Local privacy tasks: Ollama (Qwen / Llama)
  • Long‑document tasks: GPT‑4o / DeepSeek‑V3
  • Enterprise gateway: custom OpenAI‑compatible API

📌 Summary

Pop’s multi‑model system allows users to:

  • Use the best model for each task
  • Switch models, roles, and knowledge bases freely
  • Combine cloud + local models
  • Meet both enterprise and personal AI needs