Groq
@Groq
12 models
Groq's LPU inference engine has excelled in the latest independent large language model (LLM) benchmarks, redefining the standard for AI solutions with its remarkable speed and efficiency. Groq delivers near-instant inference speed and performs strongly in cloud-based deployments.

Supported Models

Groq

| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- |
| 8K | 8K | $0.05 | $0.08 |
| 8K | 8K | $0.59 | $0.79 |
| 128K | 8K | $0.05 | $0.08 |
| 128K | 8K | $0.59 | $0.79 |

Using Groq in LobeChat

Groq's LPU inference engine has excelled in the latest independent large language model (LLM) benchmarks, redefining the standard for AI solutions with its remarkable speed and efficiency. Through LobeChat's integration with Groq Cloud, you can now easily leverage Groq's technology to accelerate large language model inference in LobeChat.

The Groq LPU inference engine has consistently reached 300 tokens per second in internal benchmarks, and benchmarks by ArtificialAnalysis.ai confirm that Groq outperforms other providers in throughput (241 tokens per second) and in total time to receive 100 output tokens (0.8 seconds).

This document will guide you through using Groq in LobeChat:

Obtain a GroqCloud API Key

First, you need to obtain an API Key from the GroqCloud Console.

Obtain a GroqCloud API Key

Create an API Key in the API Keys menu of the console.

Save the GroqCloud API Key

Store the key shown in the popup safely; it appears only once. If you lose it, you will need to create a new key.
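Before entering the key into LobeChat, you can sanity-check it outside the app. The sketch below builds a request against Groq's OpenAI-compatible chat completions endpoint; the endpoint URL and the model name `"llama3-8b-8192"` are assumptions for illustration — substitute a model listed in your GroqCloud Console.

```python
# Minimal sketch: verifying a GroqCloud API key with the standard library.
# Assumes Groq exposes an OpenAI-compatible endpoint at
# https://api.groq.com/openai/v1; the model name below is illustrative.
import json
import urllib.request


def build_chat_request(api_key: str, model: str, prompt: str) -> urllib.request.Request:
    """Build an OpenAI-style chat completion request for Groq Cloud."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        "https://api.groq.com/openai/v1/chat/completions",
        data=body,
        headers={
            "Authorization": f"Bearer {api_key}",  # the key from GroqCloud Console
            "Content-Type": "application/json",
        },
        method="POST",
    )


# Sending the request requires a valid key and network access:
# with urllib.request.urlopen(build_chat_request("gsk_...", "llama3-8b-8192", "Hello")) as resp:
#     print(json.load(resp)["choices"][0]["message"]["content"])
```

A 401 response means the key is invalid or was revoked; in that case, create a new key in the console.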

Configure Groq in LobeChat

You can find the Groq configuration option under Settings -> Language Model; enter the API Key you just obtained there.

Groq provider settings

Next, select a Groq-supported model in the assistant's model options, and you can experience Groq's powerful performance in LobeChat.

Related Providers

LobeHub
@LobeHub
12 models
LobeChat Cloud invokes AI models through officially deployed APIs and uses a Credits system to meter usage, corresponding to the tokens consumed by the underlying large models.
OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.