Hunyuan
@Hunyuan
9 models
A large language model developed by Tencent, with strong Chinese-language creative writing, logical reasoning in complex contexts, and reliable task execution.

Supported Models

Hunyuan

| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- |
| 256K | 6K | -- | -- |
| 32K | 2K | $0.63 | $0.70 |
| 256K | 6K | $2.10 | $8.40 |
| 32K | 4K | $2.10 | $7.00 |
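
To get a rough sense of what a single request costs at these rates, the sketch below multiplies token counts by the listed prices. It assumes the prices are quoted in USD per million tokens (a common convention for model pricing pages, but not stated here, so confirm against Tencent Hunyuan's official pricing) and uses made-up token counts purely for illustration.

```ts
// Rough per-request cost estimate.
// Assumption: the listed prices are USD per 1,000,000 tokens.
interface ModelPricing {
  inputPerMillionTokens: number;  // USD per 1M input tokens
  outputPerMillionTokens: number; // USD per 1M output tokens
}

function estimateCostUSD(
  pricing: ModelPricing,
  inputTokens: number,
  outputTokens: number,
): number {
  return (
    (inputTokens / 1_000_000) * pricing.inputPerMillionTokens +
    (outputTokens / 1_000_000) * pricing.outputPerMillionTokens
  );
}

// Example: the $2.10 / $8.40 model above, with a 10K-token prompt and a 2K-token reply.
const cost = estimateCostUSD(
  { inputPerMillionTokens: 2.1, outputPerMillionTokens: 8.4 },
  10_000,
  2_000,
);
console.log(cost.toFixed(4)); // ≈ 0.0378 USD
```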

Using Tencent Hunyuan in LobeChat


Tencent Hunyuan is a large language model launched by Tencent, designed to provide users with intelligent assistant services. It uses natural language processing to help users solve problems, offer suggestions, and generate content. By conversing with the model, users can quickly access the information they need, improving work efficiency.

This article will guide you on how to use Tencent Hunyuan in LobeChat.

Step 1: Obtain the Tencent Hunyuan API Key

  • Register and log in to the Tencent Cloud Console
  • Navigate to Hunyuan Large Model and click on API KEY Management
  • Create an API key
Create API Key
  • Click View, then copy the API key from the pop-up panel and store it securely (you can optionally sanity-check the key with the short sketch after this list)
Save Key
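
Before wiring the key into LobeChat, you can optionally verify that it works with a standalone request. The sketch below assumes Tencent Hunyuan exposes an OpenAI-compatible endpoint at `https://api.hunyuan.cloud.tencent.com/v1` and uses `hunyuan-lite` as a placeholder model id; both are assumptions to check against Tencent's current documentation.

```ts
// Minimal connectivity check for a Tencent Hunyuan API key.
// Assumptions: the OpenAI-compatible endpoint URL and the model id below.
const HUNYUAN_BASE_URL = 'https://api.hunyuan.cloud.tencent.com/v1';

async function checkHunyuanKey(apiKey: string): Promise<void> {
  const res = await fetch(`${HUNYUAN_BASE_URL}/chat/completions`, {
    method: 'POST',
    headers: {
      Authorization: `Bearer ${apiKey}`,
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      model: 'hunyuan-lite', // placeholder model id for this sketch
      messages: [{ role: 'user', content: 'Hello' }],
    }),
  });

  if (!res.ok) {
    throw new Error(`Key check failed: ${res.status} ${await res.text()}`);
  }
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content);
}

// Usage (Node 18+): checkHunyuanKey(process.env.HUNYUAN_API_KEY ?? '');
```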

Step 2: Configure Tencent Hunyuan in LobeChat

  • Go to the Settings page in LobeChat
  • Find the Tencent Hunyuan settings under Language Models
Enter API Key
  • Enter the API key you obtained
  • Select a Tencent Hunyuan model for your AI assistant and start the conversation
Select Tencent Hunyuan Model and Start Conversation

During use, you may incur charges payable to the API service provider; please refer to Tencent Hunyuan's pricing policy for details.

You can now engage in conversations using the models provided by Tencent Hunyuan in LobeChat.

Related Providers

LobeHub
@LobeHub
12 models
LobeChat Cloud invokes AI models through officially deployed APIs and measures usage with a Credits system, where Credits correspond to the tokens consumed by the large models.
OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.