SiliconCloud
@SiliconCloud
17 models
SiliconFlow is dedicated to accelerating AGI for the benefit of humanity, enhancing large-scale AI efficiency through an easy-to-use and cost-effective GenAI stack.

Supported Models

SiliconCloud
| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- |
| 32K | -- | $0.19 | $0.19 |
| 32K | -- | -- | -- |
| 32K | -- | $0.10 | $0.10 |
| 32K | -- | $0.18 | $0.18 |
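
The prices above are listed without an explicit unit. As a rough illustration only, the sketch below estimates the cost of a single request under the assumption that prices are in USD per million tokens; check SiliconCloud's pricing page for the actual billing unit.

```ts
// Back-of-envelope cost estimate for one request.
// Assumption (not stated on this page): prices are USD per 1M tokens.
const INPUT_PRICE_PER_MTOK = 0.19;  // example row: $0.19 input
const OUTPUT_PRICE_PER_MTOK = 0.19; // example row: $0.19 output

function estimateCostUSD(inputTokens: number, outputTokens: number): number {
  return (
    (inputTokens / 1_000_000) * INPUT_PRICE_PER_MTOK +
    (outputTokens / 1_000_000) * OUTPUT_PRICE_PER_MTOK
  );
}

// A chat turn with 1,500 prompt tokens and 500 completion tokens:
console.log(estimateCostUSD(1_500, 500).toFixed(6)); // ≈ 0.000380
```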

Using SiliconCloud in LobeChat

SiliconCloud is a cost-effective large-model service provider, offering services such as text generation and image generation.

This document will guide you on how to use SiliconCloud in LobeChat:

Step 1: Get your SiliconCloud API Key

Currently, new users receive 14 yuan of free credit upon registration.
  • Go to the API Key menu and click Create New API Key

  • Copy the API key and keep it safe (a quick way to verify that it works is sketched below)
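
Once you have a key, you can sanity-check it before configuring LobeChat. The sketch below assumes SiliconCloud exposes an OpenAI-compatible endpoint at https://api.siliconflow.cn/v1 (not stated on this page; confirm the base URL in SiliconCloud's docs), and the SILICONCLOUD_API_KEY environment variable name is purely illustrative.

```ts
// Minimal key check against an assumed OpenAI-compatible /models endpoint.
// Requires Node 18+ (global fetch). The env var name is hypothetical.
const apiKey = process.env.SILICONCLOUD_API_KEY ?? "sk-...";

async function checkKey(): Promise<void> {
  const res = await fetch("https://api.siliconflow.cn/v1/models", {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) {
    throw new Error(`Key check failed: ${res.status} ${res.statusText}`);
  }
  const body = await res.json();
  console.log(body); // should list the models this key can access
}

checkKey().catch(console.error);
```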

Step 2: Configure SiliconCloud in LobeChat

  • Visit the App Settings interface of LobeChat

  • Under Language Model, find the SiliconCloud settings

  • Enable SiliconCloud and enter the obtained API key

  • Choose a SiliconCloud model for your assistant and start chatting

You may need to pay the API service provider during use; please refer to SiliconCloud's fee policy for details.

Now you can use the models provided by SiliconCloud for conversation in LobeChat.
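
For reference, when an assistant chats with a SiliconCloud model, the underlying request is an OpenAI-compatible chat completion. A minimal sketch is shown below; the base URL and the model name Qwen/Qwen2.5-7B-Instruct are assumptions for illustration, not taken from this page.

```ts
// Illustrative chat completion against an assumed OpenAI-compatible endpoint.
// Model name and base URL are placeholders; use the values shown in your
// SiliconCloud console.
const apiKey = process.env.SILICONCLOUD_API_KEY ?? "sk-...";

async function chat(prompt: string): Promise<string> {
  const res = await fetch("https://api.siliconflow.cn/v1/chat/completions", {
    method: "POST",
    headers: {
      Authorization: `Bearer ${apiKey}`,
      "Content-Type": "application/json",
    },
    body: JSON.stringify({
      model: "Qwen/Qwen2.5-7B-Instruct",
      messages: [{ role: "user", content: prompt }],
    }),
  });
  if (!res.ok) throw new Error(`Request failed: ${res.status}`);
  const data = await res.json();
  return data.choices[0].message.content;
}

chat("Hello!").then(console.log).catch(console.error);
```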

Related Providers

LobeHub
@LobeHub
12 models
LobeChat Cloud provides access to AI models through officially deployed APIs and uses a Credits system to measure model usage, with Credits corresponding to the tokens consumed by the large models.
OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Its products deliver strong performance and cost-effectiveness and are widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
7 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
12 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.