Cloudflare Workers AI

Supported Models

Cloudflare

| Maximum Context Length | Maximum Output Length | Input Price | Output Price |
|------------------------|-----------------------|-------------|--------------|
| 16K                    | --                    | --          | --           |
| 2K                     | --                    | --          | --           |
| 4K                     | --                    | --          | --           |
| 8K                     | --                    | --          | --           |

Using Cloudflare Workers AI in LobeChat


Cloudflare Workers AI is a service that integrates AI capabilities into the Cloudflare Workers serverless computing platform. It runs machine learning models on Cloudflare's global network, providing fast, scalable inference while reducing operational overhead.
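As background, the service is exposed to Workers through an AI binding. The snippet below is a minimal sketch only, assuming a `binding = "AI"` entry under `[ai]` in wrangler.toml and `@cloudflare/workers-types` for the `Ai` type; the model identifier is just an example. LobeChat itself does not use this binding — it talks to Workers AI over the REST API covered in the steps below.

```typescript
// Minimal Worker using the Workers AI binding (assumes `binding = "AI"` under
// [ai] in wrangler.toml; the model id below is only an example).
export interface Env {
  AI: Ai;
}

export default {
  async fetch(_request: Request, env: Env): Promise<Response> {
    // Run a chat-style inference request on Cloudflare's network.
    const result = await env.AI.run("@cf/meta/llama-3-8b-instruct", {
      messages: [{ role: "user", content: "Hello from a Worker!" }],
    });
    return Response.json(result);
  },
};
```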

This document will guide you through using Cloudflare Workers AI in LobeChat:

Step 1: Obtain Your Cloudflare Workers AI API Key

  • Log in to the Cloudflare dashboard and open the Workers AI page.
  • In the Using REST API section, click the Create Workers AI API Token button.
  • In the drawer dialog, copy and save your API token.
  • Also, copy and save your Account ID.
  • Please store your API token securely, as it will only be displayed once. If you accidentally lose it, you will need to create a new token.
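
Before moving on, you can optionally verify that the token and Account ID work by calling the Workers AI REST API directly. The sketch below assumes Node 18+ (for the built-in fetch); `CF_ACCOUNT_ID` and `CF_API_TOKEN` are placeholder environment variable names for this script only, and the model identifier is just an example — substitute any model available in your dashboard.

```typescript
// Sanity check for the token and Account ID using the Workers AI REST API.
async function main() {
  const accountId = process.env.CF_ACCOUNT_ID; // the Account ID you copied
  const apiToken = process.env.CF_API_TOKEN;   // the Workers AI API token you copied
  const model = "@cf/meta/llama-3-8b-instruct"; // example model id

  const res = await fetch(
    `https://api.cloudflare.com/client/v4/accounts/${accountId}/ai/run/${model}`,
    {
      method: "POST",
      headers: {
        Authorization: `Bearer ${apiToken}`,
        "Content-Type": "application/json",
      },
      body: JSON.stringify({
        messages: [{ role: "user", content: "Say hello in one sentence." }],
      }),
    },
  );

  // A working token returns HTTP 200 with { success: true, result: { response: ... } }.
  console.log(res.status, await res.text());
}

main();
```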

Step 2: Configure Cloudflare Workers AI in LobeChat

  • Go to the Settings interface in LobeChat.
  • Under Language Model, find the Cloudflare settings.
  • Enter the API Token you obtained.
  • Enter your Account ID.
  • Choose a Cloudflare Workers AI model for your AI assistant to start the conversation.

You may incur charges while using the API service; please refer to Cloudflare's pricing policy for details.

At this point, you can start conversing with the models provided by Cloudflare Workers AI in LobeChat.

Related Providers

LobeHub
@LobeHub
12 models
LobeChat Cloud provides access to AI models through officially deployed APIs, metering usage with a Credits system that corresponds to the tokens consumed by the large models.
OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.