InternLM

InternVL2-8B
InternVL2-8B is a powerful vision-language model that supports multimodal processing of images and text. It accurately recognizes image content and generates relevant descriptions or answers.

Providers Supporting This Model

Provider       Model          Max Context   Max Output   Input Price   Output Price
SiliconCloud   InternVL2-8B   32K           --           $0.05         $0.05
GiteeAI        InternVL2-8B   --            --           --            --
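
As a quick illustration of the multimodal interface, here is a minimal sketch of sending InternVL2-8B an image plus a text prompt through an OpenAI-compatible endpoint. The base URL and model identifier below are assumptions modeled on SiliconCloud; verify both against your provider's documentation.

```python
# Minimal sketch: query InternVL2-8B with an image and a text prompt
# through an OpenAI-compatible API. The base URL and model ID are
# assumptions; check your provider's documentation.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.siliconflow.cn/v1",  # assumed SiliconCloud endpoint
    api_key="YOUR_API_KEY",
)

response = client.chat.completions.create(
    model="OpenGVLab/InternVL2-8B",  # assumed provider-side model ID
    messages=[
        {
            "role": "user",
            "content": [
                {"type": "text", "text": "Describe what is shown in this image."},
                {"type": "image_url", "image_url": {"url": "https://example.com/photo.jpg"}},
            ],
        }
    ],
)
print(response.choices[0].message.content)
```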

Model Parameters

Randomness (temperature)

This setting controls the diversity of the model's responses. Lower values produce more predictable, conventional responses, while higher values encourage more varied and less common ones. At 0, the model always gives the same response to a given input.

Type: FLOAT
Default Value: 1.00
Range: 0.00 ~ 2.00
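
To make the effect concrete, here is an illustrative sketch (not provider code) of how temperature rescales logits before the softmax; the logit values are invented for demonstration.

```python
# Illustrative sketch: temperature divides the logits before softmax.
# Lower temperature sharpens the distribution toward the top token;
# higher temperature flattens it. APIs typically treat temperature=0
# as greedy argmax rather than dividing by zero.
import numpy as np

def softmax_with_temperature(logits, temperature):
    scaled = np.asarray(logits) / temperature   # temperature=1.0 leaves logits unchanged
    exps = np.exp(scaled - scaled.max())        # subtract max for numerical stability
    return exps / exps.sum()

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.2))  # near one-hot: top token dominates
print(softmax_with_temperature(logits, 1.0))  # the default distribution
print(softmax_with_temperature(logits, 2.0))  # flatter: rarer tokens gain probability
```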
Nucleus Sampling (top_p)

This setting restricts sampling to the most likely portion of the vocabulary: only the top tokens whose cumulative probability reaches P are considered. Lower values make the model's responses more predictable, while the default value of 1.00 lets the model choose from the full vocabulary.

Type: FLOAT
Default Value: 1.00
Range: 0.00 ~ 1.00
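
The truncation itself is simple; here is an illustrative sketch (not provider code) of nucleus filtering over a toy probability vector.

```python
# Illustrative sketch: keep the smallest set of tokens whose cumulative
# probability reaches p, zero out the rest, and renormalize.
import numpy as np

def top_p_filter(probs, p):
    probs = np.asarray(probs, dtype=float)
    order = np.argsort(probs)[::-1]              # token indices, most likely first
    cumulative = np.cumsum(probs[order])
    cutoff = np.searchsorted(cumulative, p) + 1  # smallest prefix reaching p
    kept = order[:cutoff]
    filtered = np.zeros_like(probs)
    filtered[kept] = probs[kept]
    return filtered / filtered.sum()             # renormalize over the nucleus

probs = [0.5, 0.3, 0.15, 0.05]
print(top_p_filter(probs, 0.8))  # keeps the top two tokens (0.5 + 0.3 reaches 0.8)
print(top_p_filter(probs, 1.0))  # default: the entire vocabulary stays eligible
```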
Topic Freshness (presence_penalty)

This setting discourages the model from reusing vocabulary that has already appeared in the text. It applies a flat, one-time penalty to any token that has occurred at least once; the penalty does not grow with repetition. Positive values nudge the model toward new topics, while negative values encourage vocabulary reuse (see the combined sketch after the frequency penalty entry).

Type: FLOAT
Default Value: 0.00
Range: -2.00 ~ 2.00
Frequency Penalty (frequency_penalty)

This setting reduces the likelihood that the model repeats tokens it has already produced, with a penalty that scales with how often each token has occurred: the more frequently a token appears, the more strongly it is penalized. Negative values encourage vocabulary reuse.

Type: FLOAT
Default Value: 0.00
Range: -2.00 ~ 2.00
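
The two penalties are easy to confuse, so here is an illustrative sketch (not provider code) of the usual OpenAI-style logit adjustment: presence is a flat one-time deduction, while frequency compounds with every occurrence.

```python
# Illustrative sketch: OpenAI-style penalty semantics.
#   presence_penalty:  flat deduction if the token appeared at all
#   frequency_penalty: deduction multiplied by the occurrence count
from collections import Counter

def apply_penalties(logits, generated_tokens, presence_penalty, frequency_penalty):
    counts = Counter(generated_tokens)
    adjusted = dict(logits)
    for token, count in counts.items():
        if token in adjusted:
            adjusted[token] -= presence_penalty           # once, regardless of count
            adjusted[token] -= frequency_penalty * count  # grows with repetition
    return adjusted

logits = {"cat": 2.0, "dog": 1.5}
history = ["cat", "cat", "cat"]
print(apply_penalties(logits, history, presence_penalty=0.5, frequency_penalty=0.0))
# "cat" drops by 0.5 once, no matter how many repetitions: {'cat': 1.5, 'dog': 1.5}
print(apply_penalties(logits, history, presence_penalty=0.0, frequency_penalty=0.5))
# "cat" drops by 0.5 * 3 occurrences: {'cat': 0.5, 'dog': 1.5}
```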
Single Response Limit (max_tokens)

This setting caps the number of tokens the model can generate in a single response. Higher values allow longer replies, while lower values keep responses concise. Tune it to the application to get the desired response length and level of detail.

Type: INT
Default Value: --
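
Putting the parameters together, here is a sketch of a single request that sets all five; the endpoint and model identifier are the same assumptions as in the earlier example.

```python
# Sketch: one request setting every parameter documented above.
# Base URL and model ID are assumptions; verify with your provider.
from openai import OpenAI

client = OpenAI(base_url="https://api.siliconflow.cn/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="OpenGVLab/InternVL2-8B",  # assumed provider-side model ID
    messages=[{"role": "user", "content": "Summarize nucleus sampling in two sentences."}],
    temperature=0.7,        # below the 1.00 default: more predictable wording
    top_p=0.9,              # sample from the top 90% of probability mass
    presence_penalty=0.3,   # nudge the model toward new topics
    frequency_penalty=0.3,  # discourage verbatim repetition
    max_tokens=256,         # cap the reply at 256 tokens
)
print(response.choices[0].message.content)
```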

Related Models

Qwen · Qwen2.5-72B-Instruct
Qwen2.5-72B-Instruct supports a 16K context and can generate long texts exceeding 8K tokens. It enables seamless interaction with external systems through function calls, greatly enhancing flexibility and scalability. The model's knowledge has grown significantly, and its coding and mathematical abilities are much improved, with multilingual support covering more than 29 languages.
16K

Qwen · Qwen2.5-Coder-32B-Instruct
Qwen2.5-Coder-32B-Instruct is a large language model designed specifically for code generation, code understanding, and efficient development workflows, built on 32 billion parameters to meet diverse programming needs.

Qwen · Qwen2.5-7B-Instruct
Qwen2.5-7B-Instruct is a large language model with 7 billion parameters that supports function calls and seamless interaction with external systems, greatly enhancing flexibility and scalability. It is optimized for Chinese and multilingual scenarios and supports applications such as intelligent Q&A and content generation.

Qwen · Qwen2.5-32B-Instruct
Qwen2.5-32B-Instruct is a large language model with 32 billion parameters, offering balanced performance, optimized for Chinese and multilingual scenarios, and supporting applications such as intelligent Q&A and content generation.

Qwen · Qwen2.5-14B-Instruct
Qwen2.5-14B-Instruct is a large language model with 14 billion parameters, delivering excellent performance, optimized for Chinese and multilingual scenarios, and supporting applications such as intelligent Q&A and content generation.