Tongyi
@Qwen
20 models
Tongyi Qianwen is a large-scale language model independently developed by Alibaba Cloud, featuring strong natural language understanding and generation capabilities. It can answer various questions, create written content, express opinions, and write code, playing a role in multiple fields.

Supported Models

Tongyi

| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
|------------------------|-----------------------|-------------|--------------|
| 128K                   | --                    | $0.04       | $0.08        |
| 128K                   | --                    | $0.11       | $0.28        |
| 32K                    | --                    | $2.80       | $8.40        |
| 1M                     | --                    | $0.07       | $0.28        |
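
For rough budgeting, a request's cost follows directly from the table: multiply the input and output token counts by the matching rate. The sketch below assumes the listed prices are quoted in USD per million tokens (the convention pricing cards like these generally use); check DashScope's current price list before relying on the numbers, and note that the tier names here are just this example's labels.

```ts
// Minimal cost estimator, assuming the table's prices are USD per 1M tokens.
interface ModelPrice {
  inputPerMTok: number;  // USD per 1M input tokens
  outputPerMTok: number; // USD per 1M output tokens
}

function estimateCostUSD(price: ModelPrice, inputTokens: number, outputTokens: number): number {
  return (inputTokens / 1_000_000) * price.inputPerMTok +
         (outputTokens / 1_000_000) * price.outputPerMTok;
}

// Example: 12,000 input tokens and 2,000 output tokens on the $0.11 / $0.28 tier.
const midTier: ModelPrice = { inputPerMTok: 0.11, outputPerMTok: 0.28 };
console.log(estimateCostUSD(midTier, 12_000, 2_000).toFixed(6)); // ≈ 0.001880
```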

Using Tongyi Qianwen in LobeChat

Tongyi Qianwen is a large-scale language model independently developed by Alibaba Cloud, with strong natural language understanding and generation capabilities. It can answer questions, create written content, express opinions, and write code, making it useful across many fields.

This document will guide you through using Tongyi Qianwen in LobeChat:

Step 1: Activate the DashScope Model Service

  • Visit and sign in to the Alibaba Cloud DashScope platform
  • If this is your first visit, you need to activate the DashScope service
  • If you have already activated it, skip this step
Activate the DashScope service

Step 2: Obtain a DashScope API Key

  • Go to the API-KEY page and create an API key
Create a Tongyi Qianwen API key
  • Copy the API key from the pop-up dialog and store it safely
Copy the Tongyi Qianwen API key

Store the key securely, as it is shown only once. If you lose it, you will need to create a new one.
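
Before wiring the key into LobeChat, you can optionally confirm it works with a direct call to DashScope's OpenAI-compatible endpoint. This is a minimal sketch: the base URL and the qwen-turbo model name come from DashScope's compatible-mode documentation, but verify both there (regional endpoints differ), and the DASHSCOPE_API_KEY variable name is simply this example's own convention.

```ts
// Minimal API key check against DashScope's OpenAI-compatible chat endpoint.
// Assumptions: Node 18+ (global fetch) and the key exported as DASHSCOPE_API_KEY.
const apiKey = process.env.DASHSCOPE_API_KEY;

async function checkKey(): Promise<void> {
  const res = await fetch("https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "qwen-turbo", // any Qwen model your account can access
      messages: [{ role: "user", content: "ping" }],
    }),
  });

  if (!res.ok) {
    throw new Error(`DashScope returned ${res.status}: ${await res.text()}`);
  }
  const data = await res.json();
  console.log(data.choices?.[0]?.message?.content ?? data);
}

checkKey().catch(console.error);
```

A successful response indicates both that the key is valid and that the DashScope service from Step 1 is active.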

Step 3: Configure Tongyi Qianwen in LobeChat

  • Open the Settings page in LobeChat
  • Under Language Model, find the Tongyi Qianwen settings
Enter the API key
  • Enable Tongyi Qianwen and enter the API key you obtained
  • Select a Qwen model for your AI assistant to start chatting
Select a Qwen model and start the conversation

You may be charged by the API provider during use; please refer to Tongyi Qianwen's pricing policy.

You can now chat with the models provided by Tongyi Qianwen in LobeChat.

Related Providers

LobeHub
@LobeHub
12 models
LobeChat Cloud provides access to AI models through officially deployed APIs, using a Credits system to meter usage, with Credits corresponding to the tokens consumed by the models.
OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products offer strong performance and cost-effectiveness and are widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.