With GitHub Models, developers can become AI engineers and leverage the industry's leading AI models.

Supported Models

Maximum Context    Maximum Output    Input Price    Output Price
128K               64K               $3.00          $12.00
128K               32K               $15.00         $60.00
128K               16K               $0.15          $0.60
128K               --                $2.50          $10.00

Using GitHub Models in LobeChat


GitHub Models is a feature recently launched by GitHub that gives developers a free platform to access and experiment with a variety of AI models. GitHub Models provides an interactive sandbox environment where users can test different model parameters and prompts and observe how models respond. The platform supports a range of advanced language models, including OpenAI's GPT-4o, Meta's Llama 3.1, and Mistral's Large 2, covering a broad spectrum from large language models to task-specific models.

This article walks you through using GitHub Models in LobeChat.

GitHub Models Rate Limits

Currently, usage of the Playground and the free API is limited by requests per minute, requests per day, tokens per request, and concurrent requests. Once a rate limit is reached, you must wait for the limit to reset before making further requests. Rate limits differ by model tier (low, high, and embedding models). For model-tier information, see the GitHub Marketplace.


These limits may change at any time; for the latest details, refer to the official GitHub documentation.
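When a request exceeds one of these limits, the API typically responds with HTTP 429. A minimal retry sketch with capped exponential backoff and jitter (the helper names and the dict-shaped response are illustrative, not part of any GitHub API):

```python
import random
import time

def backoff_delay(attempt, base=1.0, cap=60.0):
    """Full-jitter exponential backoff: a random wait in [0, min(cap, base * 2**attempt)]."""
    return random.uniform(0, min(cap, base * (2 ** attempt)))

def call_with_retry(send_request, max_attempts=5, base=1.0):
    """Call send_request(); if it reports a 429 status, sleep and retry up to max_attempts times.

    send_request is any zero-argument callable returning a dict with a "status" key
    (a stand-in for your real HTTP call).
    """
    for attempt in range(max_attempts):
        response = send_request()
        if response.get("status") != 429:
            return response
        time.sleep(backoff_delay(attempt, base=base))
    raise RuntimeError("rate limit: retries exhausted")
```

The jitter spreads retries out so that many clients hitting the limit at once do not all retry in lockstep.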


GitHub Models Configuration Guide

Step 1: Obtain a GitHub Access Token

  • Log in to GitHub and open the access tokens page
  • Create and configure a new access token
Create an access token
  • Copy and save the generated token from the results page
Save the access token
  • During the GitHub Models preview period, you must apply to join the waitlist; access is granted only after approval.

  • Store the access token securely, as it is shown only once. If you lose it, you will need to create a new one.
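Because the token is shown only once, a common pattern is to keep it out of your code entirely and read it from an environment variable at runtime. A minimal sketch (the variable name GITHUB_TOKEN is a convention used here, not something LobeChat itself requires):

```python
import os

def load_github_token(env_var="GITHUB_TOKEN"):
    """Read the GitHub access token from the environment; fail fast if it is missing."""
    token = os.environ.get(env_var)
    if not token:
        raise RuntimeError(f"set {env_var} to your GitHub access token")
    return token
```

Failing fast with a clear message beats passing an empty token downstream, where the resulting 401 error is harder to trace back to a missing variable.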

Step 2: Configure GitHub Models in LobeChat

  • Open the LobeChat settings page
  • Under Language Model, find the GitHub settings
Enter the access token
  • Enter the access token you obtained
  • Select a GitHub model for your AI assistant to start a conversation
Select a GitHub model and start a conversation

You can now chat in LobeChat using the models provided by GitHub.
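Outside of LobeChat, the same token can also be used to call GitHub Models directly through its OpenAI-compatible chat completions API. A standard-library sketch (the endpoint URL and model name are assumptions based on GitHub Models' OpenAI-compatible interface; verify both against the current GitHub Models documentation):

```python
import json
import urllib.request

# Assumed OpenAI-compatible endpoint for GitHub Models; check the official docs for the current URL.
ENDPOINT = "https://models.inference.ai.azure.com/chat/completions"

def build_chat_request(token, model, prompt):
    """Build an OpenAI-style chat completion request authenticated with a GitHub access token."""
    body = json.dumps({
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }).encode("utf-8")
    return urllib.request.Request(
        ENDPOINT,
        data=body,
        headers={
            "Authorization": f"Bearer {token}",
            "Content-Type": "application/json",
        },
        method="POST",
    )

# Sending the request requires a valid token and network access, e.g.:
# with urllib.request.urlopen(build_chat_request(token, "gpt-4o-mini", "Hello")) as resp:
#     print(json.load(resp))
```

Separating request construction from sending keeps the token-bearing headers and payload easy to inspect before anything goes over the wire.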
