LobeChat
Model List (24 models)
| Series | Model | Model ID | Context | Description |
| --- | --- | --- | --- | --- |
| Hunyuan | Hunyuan Large | `Tencent/Hunyuan-A52B-Instruct` | 32K | Hunyuan-Large is the industry's largest open-source Transformer-based MoE model, with 389 billion total parameters and 52 billion active parameters. |
| DeepSeek | DeepSeek V2.5 | `deepseek-ai/DeepSeek-V2.5` | 32K | DeepSeek V2.5 combines the strengths of previous versions, enhancing both general and coding capabilities. |
| Qwen | Qwen2.5 7B | `Qwen/Qwen2.5-7B-Instruct` | 32K | Qwen2.5 is a new series of large language models designed to optimize the handling of instruction-based tasks. |
| Qwen | Qwen2.5 14B | `Qwen/Qwen2.5-14B-Instruct` | 32K | Qwen2.5 is a new series of large language models designed to optimize the handling of instruction-based tasks. |
| Qwen | Qwen2.5 32B | `Qwen/Qwen2.5-32B-Instruct` | 32K | Qwen2.5 is a new series of large language models designed to optimize the handling of instruction-based tasks. |
| Qwen | Qwen2.5 72B | `Qwen/Qwen2.5-72B-Instruct-128K` | 128K | Qwen2.5 is a new large language model series with enhanced understanding and generation capabilities. |
| Qwen | Qwen2 VL 7B | `Pro/Qwen/Qwen2-VL-7B-Instruct` | 32K | Qwen2-VL is the latest iteration of the Qwen-VL model, achieving state-of-the-art performance on visual understanding benchmarks. |
| Qwen | Qwen2 VL 72B | `Qwen/Qwen2-VL-72B-Instruct` | 32K | Qwen2-VL is the latest iteration of the Qwen-VL model, achieving state-of-the-art performance on visual understanding benchmarks. |
| Qwen | Qwen2.5 Math 72B | `Qwen/Qwen2.5-Math-72B-Instruct` | 4K | Qwen2.5-Math focuses on mathematical problem solving, providing expert answers to challenging problems. |
| Qwen | Qwen2.5 Coder 32B | `Qwen/Qwen2.5-Coder-32B-Instruct` | 32K | Qwen2.5-Coder focuses on code writing. |
| InternLM | InternLM2.5 7B | `internlm/internlm2_5-7b-chat` | 32K | InternLM2.5 offers intelligent dialogue solutions across multiple scenarios. |
| InternLM | InternLM2.5 20B | `internlm/internlm2_5-20b-chat` | 32K | The innovative open-source model InternLM2.5 enhances dialogue intelligence through its large parameter count. |
| InternLM | InternVL2 8B | `Pro/OpenGVLab/InternVL2-8B` | 32K | InternVL2 demonstrates exceptional performance across visual-language tasks, including document and chart understanding, scene text understanding, OCR, and scientific and mathematical problem solving. |
| InternLM | InternVL2 26B | `OpenGVLab/InternVL2-26B` | 32K | InternVL2 demonstrates exceptional performance across visual-language tasks, including document and chart understanding, scene text understanding, OCR, and scientific and mathematical problem solving. |
| InternLM | InternVL2 Llama3 76B | `OpenGVLab/InternVL2-Llama3-76B` | 8K | InternVL2 demonstrates exceptional performance across visual-language tasks, including document and chart understanding, scene text understanding, OCR, and scientific and mathematical problem solving. |
| ChatGLM | GLM-4 9B | `THUDM/glm-4-9b-chat` | 32K | GLM-4 9B is an open-source version that provides an optimized conversational experience for chat applications. |
| Yi | Yi-1.5 9B | `01-ai/Yi-1.5-9B-Chat-16K` | 16K | Yi-1.5 9B supports 16K tokens, providing efficient and smooth language generation. |
| Yi | Yi-1.5 34B | `01-ai/Yi-1.5-34B-Chat-16K` | 16K | Yi-1.5 34B delivers superior performance in industry applications, backed by a wealth of training samples. |
| Gemma | Gemma 2 9B | `google/gemma-2-9b-it` | 8K | Gemma 2 is Google's lightweight open-source text model series. |
| Gemma | Gemma 2 27B | `google/gemma-2-27b-it` | 8K | Gemma 2 continues the design philosophy of being lightweight and efficient. |
| Meta | Llama 3.1 8B | `meta-llama/Meta-Llama-3.1-8B-Instruct` | 32K | Llama 3.1 provides multilingual support and is one of the industry's leading generative models. |
| Meta | Llama 3.1 70B | `meta-llama/Meta-Llama-3.1-70B-Instruct` | 32K | Llama 3.1 70B offers efficient conversational support in multiple languages. |
| Meta | Llama 3.1 405B | `meta-llama/Meta-Llama-3.1-405B-Instruct` | 32K | Llama 3.1 405B is a powerful model for pre-training and instruction tuning. |
| Meta | Llama 3.1 Nemotron 70B | `nvidia/Llama-3.1-Nemotron-70B-Instruct` | 32K | Llama 3.1 Nemotron 70B is a large language model customized by NVIDIA, designed to improve the helpfulness of LLM-generated responses to user queries. |
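
The Model ID column is the value you pass as the `model` field when calling the provider through an OpenAI-compatible chat completions endpoint, which is the usual pattern for providers configured in LobeChat. The snippet below is a minimal sketch, not the provider's documented usage: the base URL is a placeholder and the `PROVIDER_API_KEY` environment variable name is an assumption.

```python
import os

from openai import OpenAI

# Minimal sketch: the base URL is a placeholder for the provider's
# OpenAI-compatible endpoint, and PROVIDER_API_KEY is an assumed env var name.
client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key=os.environ["PROVIDER_API_KEY"],
)

# "model" takes a Model ID from the table above; stay within that model's
# context window (e.g. 128K for Qwen/Qwen2.5-72B-Instruct-128K).
response = client.chat.completions.create(
    model="Qwen/Qwen2.5-72B-Instruct-128K",
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize the difference between MoE and dense models."},
    ],
    max_tokens=512,
)

print(response.choices[0].message.content)
```

The same call shape works for any entry in the list; only the `model` string and the context budget change.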