| Model | Model ID | Description | Context Window |
| --- | --- | --- | --- |
|  |  | Llama 3.1 is a leading model family from Meta, with sizes up to 405B parameters, suited to complex dialogue, multilingual translation, and data analysis. | 128K |
| Llama 3.1 70B | llama3.1:70b |  | 128K |
| Llama 3.1 405B | llama3.1:405b |  | 128K |
| Code Llama 7B | codellama | Code Llama is an LLM focused on code generation and discussion, with broad programming-language coverage, suited to developer environments. | 16K |
| Code Llama 13B | codellama:13b |  | 16K |
| Code Llama 34B | codellama:34b |  | 16K |
| Code Llama 70B | codellama:70b |  | 16K |
| Gemma 2 2B | gemma2:2b |  | 8K |
| Gemma 2 9B | gemma2 | Gemma 2 is an efficient model family from Google, covering scenarios from small applications to complex data processing. | 8K |
| Gemma 2 27B | gemma2:27b |  | 8K |
| CodeGemma 2B | codegemma:2b |  | 8K |
| CodeGemma 7B | codegemma | CodeGemma is a lightweight language model dedicated to a variety of programming tasks, supporting rapid iteration and integration. | 8K |
| Phi-3 3.8B | phi3 | Phi-3 is a lightweight open model from Microsoft, suited to efficient integration and large-scale knowledge reasoning. | 128K |
| Phi-3 14B | phi3:14b |  | 128K |
| WizardLM 2 7B | wizardlm2 | WizardLM 2 is a language model from Microsoft AI that excels at complex dialogue, multilingual tasks, reasoning, and intelligent-assistant use cases. | 32K |
| WizardLM 2 8x22B | wizardlm2:8x22b |  | 64K |
| MathΣtral 7B | mathstral | MathΣtral is designed for scientific research and mathematical reasoning, offering strong computational capability and result interpretation. | 32K |
| Mistral 7B | mistral | Mistral is a 7B model released by Mistral AI, suited to a wide range of language-processing needs. | 32K |
| Mixtral 8x7B | mixtral | Mixtral is a mixture-of-experts model from Mistral AI with open weights, offering support for code generation and language understanding. | 32K |
| Mixtral 8x22B | mixtral:8x22b |  | 64K |
| Mistral Large 123B | mistral-large | Mistral Large is Mistral's flagship model, combining code generation, mathematics, and reasoning capabilities, with a 128K context window. | 128K |
| Mistral Nemo 12B | mistral-nemo | Mistral Nemo is a high-performance 12B model developed by Mistral AI in collaboration with NVIDIA. | 128K |
| Codestral 22B | codestral | Codestral is Mistral AI's first code model, providing excellent support for code-generation tasks. | 32K |
| Aya 23 8B | aya | Aya 23 is a multilingual model from Cohere, supporting 23 languages to facilitate diverse language applications. |  |
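The Model IDs above follow the `name:tag` convention used by Ollama, so a listed tag can be passed directly to a local Ollama endpoint. The snippet below is a minimal sketch under that assumption: it presumes an Ollama server on the default port 11434, and that the chosen tag (here `llama3.1:70b`) has already been pulled; it sends one prompt to the `/api/generate` route and requests a context size well inside the 128K limit listed in the table.

```python
import json
import urllib.request

# Assumption: a local Ollama server is running on the default port and the
# model tag below (any "Model ID" from the table) has already been pulled,
# e.g. with `ollama pull llama3.1:70b`.
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "llama3.1:70b",   # swap in any Model ID from the table
    "prompt": "Explain the difference between Mixtral 8x7B and Mistral Large in two sentences.",
    "stream": False,           # ask for a single JSON object rather than a token stream
    "options": {
        "num_ctx": 8192,       # requested context size, well inside the 128K listed above
    },
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With streaming disabled, the reply is a single JSON document whose
# "response" field holds the generated text.
with urllib.request.urlopen(request) as resp:
    result = json.load(resp)

print(result["response"])
```

Setting `stream` to `False` keeps the example to one JSON response; an interactive client would normally leave streaming on and read the reply line by line as tokens arrive.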