Baidu Cloud
Wenxin Qianfan
@Wenxin
12 models
An enterprise-grade, one-stop platform for developing and serving large models and AI-native applications, providing a comprehensive, user-friendly toolchain that covers the entire generative AI workflow, from model development through application development.

Supported Models

Wenxin
| Model | Maximum Context Length | Maximum Output Length | Input Price | Output Price |
| --- | --- | --- | --- | --- |
| — | 8K | -- | $0.11 | $0.28 |
| — | 8K | -- | $0.11 | $0.28 |
| — | 128K | -- | $0.11 | $0.28 |
| — | 8K | -- | $4.20 | $12.61 |
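
The page does not state a pricing unit; assuming the common per-million-token convention (an assumption, not confirmed here), a back-of-the-envelope request cost works out as in this sketch:

```ts
// Rough per-request cost estimate.
// Assumption: listed prices are USD per 1M tokens (not confirmed by this page).
const inputPricePerM = 0.11;  // $ per 1M input tokens (first three models)
const outputPricePerM = 0.28; // $ per 1M output tokens

const inputTokens = 2_000;  // prompt size for one request
const outputTokens = 1_000; // completion size

const cost =
  (inputTokens / 1_000_000) * inputPricePerM +
  (outputTokens / 1_000_000) * outputPricePerM;

console.log(`Estimated cost: $${cost.toFixed(6)}`); // -> Estimated cost: $0.000500
```

At these rates, even long conversations cost fractions of a cent; the fourth listed model ($4.20 / $12.61) is roughly 40x more expensive per token.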

Using Wenxin Qianfan in LobeChat


Wenxin Qianfan is an artificial intelligence large language model platform launched by Baidu, supporting a variety of application scenarios, including literary creation, commercial copywriting, and mathematical logic reasoning. The platform features deep semantic understanding and generation capabilities across modalities and languages, and it is widely utilized in fields such as search Q&A, content creation, and smart office applications.

This article will guide you on how to use Wenxin Qianfan in LobeChat.

Step 1: Obtain the Wenxin Qianfan API Key

  • Register and log in to the Baidu Intelligent Cloud Console
  • Navigate to Baidu Intelligent Cloud Qianfan ModelBuilder
  • Choose Application Access from the left-side menu
  • Create an application
  • Open the Security Authentication -> Access Key management page from the user account menu
  • Copy the Access Key and Secret Key and store them securely; the sketch below shows one way to sanity-check them before use
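
Baidu's classic Qianfan flow exchanges an application key pair for a short-lived access token via an OAuth-style endpoint; whether it accepts the IAM Access Key/Secret Key pair from the Security Authentication page depends on the credential type your console issues, so treat this as a sketch rather than the canonical check:

```ts
// Sketch: exchange a Qianfan key pair for an access token (Baidu's classic
// OAuth-style flow). The env var names below are placeholders, not LobeChat's.
const AK = process.env.QIANFAN_ACCESS_KEY!;
const SK = process.env.QIANFAN_SECRET_KEY!;

const url =
  'https://aip.baidubce.com/oauth/2.0/token' +
  `?grant_type=client_credentials&client_id=${AK}&client_secret=${SK}`;

const res = await fetch(url, { method: 'POST' });
const body = await res.json();

if (body.access_token) {
  console.log(`Credentials OK; token expires in ${body.expires_in}s`);
} else {
  console.error('Token request failed:', body);
}
```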

Step 2: Configure Wenxin Qianfan in LobeChat

  • Go to the Settings interface in LobeChat
  • Locate the settings for Wenxin Qianfan under Language Model
  • Enter the obtained Access Key and Secret Key
  • Select a Wenxin Qianfan model for your AI assistant to start interacting

API usage is billed by the service provider; refer to Wenxin Qianfan's pricing policy for current rates.

You can now use the models provided by Wenxin Qianfan for conversations in LobeChat.
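
Under the hood, LobeChat makes REST calls to Qianfan on your behalf. For reference, a minimal direct chat call against Baidu's classic endpoint might look like the sketch below; the `completions` path segment is model-specific and assumed here, so consult Qianfan's documentation for the path matching your chosen model:

```ts
// Sketch: one-shot chat call against Qianfan's classic REST path.
// Assumptions: the 'completions' segment targets a default ERNIE chat model,
// and the access token was obtained as in the Step 1 sketch above.
const accessToken = process.env.QIANFAN_ACCESS_TOKEN!;

const res = await fetch(
  'https://aip.baidubce.com/rpc/2.0/ai_custom/v1/wenxinworkshop/chat/completions' +
  `?access_token=${accessToken}`,
  {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      messages: [{ role: 'user', content: 'Hello, Wenxin Qianfan!' }],
    }),
  },
);

const data = await res.json();
console.log(data.result); // the model's reply text (field name per the classic API)
```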

Related Providers

OpenAI
@OpenAI
22 models
OpenAI is a global leader in artificial intelligence research, with models like the GPT series pushing the frontiers of natural language processing. OpenAI is committed to transforming multiple industries through innovative and efficient AI solutions. Their products demonstrate significant performance and cost-effectiveness, widely used in research, business, and innovative applications.
Ollama
@Ollama
40 models
Ollama provides models that cover a wide range of fields, including code generation, mathematical operations, multilingual processing, and conversational interaction, catering to diverse enterprise-level and localized deployment needs.
Anthropic
Claude
@Anthropic
8 models
Anthropic is a company focused on AI research and development, offering a range of advanced language models such as Claude 3.5 Sonnet, Claude 3 Sonnet, Claude 3 Opus, and Claude 3 Haiku. These models achieve an ideal balance between intelligence, speed, and cost, suitable for various applications from enterprise workloads to rapid-response scenarios. Claude 3.5 Sonnet, as their latest model, has excelled in multiple evaluations while maintaining a high cost-performance ratio.
AWS
Bedrock
@Bedrock
14 models
Bedrock is a service provided by Amazon AWS, focusing on delivering advanced AI language and visual models for enterprises. Its model family includes Anthropic's Claude series, Meta's Llama 3.1 series, and more, offering a range of options from lightweight to high-performance, supporting tasks such as text generation, conversation, and image processing for businesses of varying scales and needs.
Google
Gemini
@Google
14 models
Google's Gemini series represents its most advanced, versatile AI models, developed by Google DeepMind, designed for multimodal capabilities, supporting seamless understanding and processing of text, code, images, audio, and video. Suitable for various environments from data centers to mobile devices, it significantly enhances the efficiency and applicability of AI models.