
GMICloud

Browse models provided by GMICloud (Terms of Service)

6 models

[Chart: Tokens processed on OpenRouter]

  • DeepSeek: DeepSeek V4 Pro

    DeepSeek V4 Pro is a large-scale Mixture-of-Experts model from DeepSeek with 1.6T total parameters and 49B activated parameters, supporting a 1M-token context window. It is designed for advanced reasoning, coding, and long-horizon agent workflows, with strong performance across knowledge, math, and software engineering benchmarks. Built on the same architecture as DeepSeek V4 Flash, it introduces a hybrid attention system for efficient long-context processing. Reasoning efforts `high` and `xhigh` are supported; `xhigh` maps to max reasoning. It is well suited for complex workloads such as full-codebase analysis, multi-step automation, and large-scale information synthesis, where both capability and efficiency are critical.

    by deepseek · Apr 24, 2026 · 1.05M context · $1.392/M input tokens · $2.784/M output tokens
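Both DeepSeek V4 models above accept the reasoning efforts `high` and `xhigh`, with `xhigh` mapping to maximum reasoning. A minimal sketch of how such a request body might look, assuming OpenRouter's chat-completions endpoint and a hypothetical model slug (`deepseek/deepseek-v4-pro` is an assumption; check the model page for the exact identifier):

```python
import json

# OpenRouter chat-completions endpoint (requests are authorized with a
# Bearer API key; the send itself is sketched in a comment below).
OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"


def build_request(prompt: str, effort: str = "high") -> dict:
    """Build a chat-completions body with a reasoning effort attached.

    The listing above says only `high` and `xhigh` are supported for
    these models, so other values are rejected here.
    """
    if effort not in ("high", "xhigh"):
        raise ValueError("this listing supports only 'high' and 'xhigh'")
    return {
        "model": "deepseek/deepseek-v4-pro",  # assumed slug, not verified
        "messages": [{"role": "user", "content": prompt}],
        "reasoning": {"effort": effort},  # `xhigh` maps to max reasoning
    }


body = build_request("Summarize this codebase's build system.", effort="xhigh")
print(json.dumps(body, indent=2))
# To actually send it, something like:
#   requests.post(OPENROUTER_URL, json=body,
#                 headers={"Authorization": f"Bearer {API_KEY}"})
```

Keeping the payload construction separate from the HTTP call makes the effort validation easy to exercise without hitting the API.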
  • DeepSeek: DeepSeek V4 Flash

    DeepSeek V4 Flash is an efficiency-optimized Mixture-of-Experts model from DeepSeek with 284B total parameters and 13B activated parameters, supporting a 1M-token context window. It is designed for fast inference and high-throughput workloads, while maintaining strong reasoning and coding performance. The model includes hybrid attention for efficient long-context processing. Reasoning efforts `high` and `xhigh` are supported; `xhigh` maps to max reasoning. It is well suited for applications such as coding assistants, chat systems, and agent workflows where responsiveness and cost efficiency are important.

    by deepseek · Apr 24, 2026 · 1.05M context · $0.112/M input tokens · $0.224/M output tokens
  • Z.ai: GLM 5.1

    GLM-5.1 delivers a major leap in coding capability, with particularly significant gains on long-horizon tasks. Unlike previous models built around minute-level interactions, GLM-5.1 can work independently and continuously on a single task for more than 8 hours, autonomously planning, executing, and revising its work throughout the process, ultimately delivering complete, engineering-grade results.

    by z-ai · Apr 7, 2026 · 203K context · $0.98/M input tokens · $3.08/M output tokens
  • Qwen: Qwen3.5 397B A17B

    Qwen3.5 397B-A17B is a native vision-language model in the Qwen3.5 series, built on a hybrid architecture that integrates a linear attention mechanism with a sparse mixture-of-experts design for higher inference efficiency. It delivers state-of-the-art performance comparable to leading-edge models across a wide range of tasks, including language understanding, logical reasoning, code generation, agent-based tasks, image understanding, video understanding, and graphical user interface (GUI) interactions. With its robust code-generation and agent capabilities, the model exhibits strong generalization across diverse agent tasks.

    by qwen · Feb 16, 2026 · 256K context · $0.60/M input tokens · $3.60/M output tokens
  • Z.ai: GLM 5

    GLM-5 is Z.ai’s flagship open-source foundation model engineered for complex systems design and long-horizon agent workflows. Built for expert developers, it delivers production-grade performance on large-scale programming tasks, rivaling leading closed-source models. With advanced agentic planning, deep backend reasoning, and iterative self-correction, GLM-5 moves beyond code generation to full-system construction and autonomous execution.

    by z-ai · Feb 11, 2026 · 203K context · $0.60/M input tokens · $1.92/M output tokens
  • DeepSeek: DeepSeek V3 0324

    DeepSeek V3 0324 is a 685B-parameter mixture-of-experts model and the latest iteration of the flagship chat model family from the DeepSeek team. It succeeds the [DeepSeek V3](/deepseek/deepseek-chat-v3) model and performs well across a variety of tasks.

    by deepseek · Mar 24, 2025 · 131K context · $0.29/M input tokens · $1.14/M output tokens