
ModelScope Provider Configuration

ModelScope (魔搭社区) is Alibaba's open-source model community, offering access to a wide range of AI models. This guide will help you set up the ModelScope provider in LobeChat.

Prerequisites

Before using the ModelScope API, you need to:

  1. Create a ModelScope account

  2. Bind an Alibaba Cloud account

    • Important: the ModelScope API requires a linked Alibaba Cloud account
    • Visit your ModelScope access token page
    • Follow the instructions to bind your Alibaba Cloud account
    • This step is required for API access
  3. Obtain an API token

    • After binding your Alibaba Cloud account, generate an API token
    • Copy the token for use in LobeChat

Configuration

Environment Variables

Add the following environment variables to your .env file:

bash
# Enable the ModelScope provider
ENABLED_MODELSCOPE=1

# ModelScope API key (required)
MODELSCOPE_API_KEY=your_modelscope_api_token

# Optional: custom model list (comma-separated)
MODELSCOPE_MODEL_LIST=deepseek-ai/DeepSeek-V3-0324,Qwen/Qwen3-235B-A22B

# Optional: proxy URL (if needed)
MODELSCOPE_PROXY_URL=https://your-proxy-url
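For illustration, here is a minimal Python sketch of how a server might read these variables at startup. It mirrors the .env example above (comma-separated model list, optional proxy URL) and is not LobeChat's actual implementation.

```python
import os

# Seed the environment with the example values from the .env snippet above
# (in a real deployment these come from your .env file or the shell).
os.environ.setdefault("ENABLED_MODELSCOPE", "1")
os.environ.setdefault("MODELSCOPE_API_KEY", "your_modelscope_api_token")
os.environ.setdefault(
    "MODELSCOPE_MODEL_LIST",
    "deepseek-ai/DeepSeek-V3-0324,Qwen/Qwen3-235B-A22B",
)

def load_modelscope_config() -> dict:
    """Collect the ModelScope settings from the environment."""
    return {
        "enabled": os.environ.get("ENABLED_MODELSCOPE") == "1",
        "api_key": os.environ["MODELSCOPE_API_KEY"],
        # The model list is comma-separated; strip whitespace and skip blanks.
        "models": [
            m.strip()
            for m in os.environ.get("MODELSCOPE_MODEL_LIST", "").split(",")
            if m.strip()
        ],
        "proxy_url": os.environ.get("MODELSCOPE_PROXY_URL"),  # optional
    }

config = load_modelscope_config()
```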

Docker Configuration

If you use Docker, add the ModelScope environment variables to your docker-compose.yml:

yaml
environment:
  - ENABLED_MODELSCOPE=1
  - MODELSCOPE_API_KEY=your_modelscope_api_token
  - MODELSCOPE_MODEL_LIST=deepseek-ai/DeepSeek-V3-0324,Qwen/Qwen3-235B-A22B

Available Models

ModelScope provides access to a wide range of models, including:

  • DeepSeek models: the DeepSeek-V3 and DeepSeek-R1 series
  • Qwen models: the Qwen3 and Qwen2.5 series
  • Llama models: the Meta-Llama-3 series
  • Other models: various open-source models

Troubleshooting

Common Issues

  1. "Please bind an Alibaba Cloud account before use" error

    • Bind your Alibaba Cloud account as described in the Prerequisites section above
  2. 401 authentication error

    • Check that your API token is correct
    • Make sure the token has not expired
    • Verify that your Alibaba Cloud account is properly bound
  3. Model unavailable

    • Some models may require additional permissions
    • Check the access requirements on the model's page on ModelScope

Debug Mode

Enable debug mode to see detailed logs:

bash
DEBUG_MODELSCOPE_CHAT_COMPLETION=1

Notes

  • The ModelScope API is compatible with the OpenAI API format
  • The service is primarily designed for users in China
  • Some models may have usage restrictions or require additional verification
  • Some models return API responses in Chinese by default
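Because the API follows the OpenAI format, a chat request uses the standard chat-completions payload shape. The Python sketch below assembles such a request; the base URL and key are assumptions for illustration, so check ModelScope's API documentation for the endpoint your account should use.

```python
import json

# Assumed values for illustration only; substitute your own.
BASE_URL = "https://api-inference.modelscope.cn/v1"  # assumption: verify in ModelScope docs
API_KEY = "your_modelscope_api_token"                # placeholder token

def build_chat_request(model: str, user_message: str) -> tuple[str, dict, bytes]:
    """Return (url, headers, body) for an OpenAI-style chat-completions call."""
    url = f"{BASE_URL}/chat/completions"
    headers = {
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    }
    body = json.dumps({
        "model": model,  # full namespaced model ID, e.g. "Qwen/Qwen3-32B"
        "messages": [{"role": "user", "content": user_message}],
    }).encode("utf-8")
    return url, headers, body

url, headers, body = build_chat_request("Qwen/Qwen3-32B", "Hello")
```

The resulting url, headers, and body can be sent with any HTTP client.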

Support

For ModelScope-specific issues, refer to ModelScope's own documentation and support channels.

For LobeChat integration issues, refer to the LobeChat documentation and community.

Model ID Format

ModelScope uses namespace-prefixed model IDs, for example:

txt
deepseek-ai/DeepSeek-V3-0324
deepseek-ai/DeepSeek-R1-0528
Qwen/Qwen3-235B-A22B
Qwen/Qwen3-32B

When configuring the model list, use the full model ID format.
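As a sketch, a hypothetical helper (not part of LobeChat or ModelScope) that splits and validates this "namespace/model-name" format:

```python
def split_model_id(model_id: str) -> tuple[str, str]:
    """Split a namespaced model ID into (namespace, name), or raise ValueError."""
    namespace, sep, name = model_id.partition("/")
    if not sep or not namespace or not name:
        raise ValueError(f"expected 'namespace/name', got {model_id!r}")
    return namespace, name
```

For example, "deepseek-ai/DeepSeek-V3-0324" splits into the namespace "deepseek-ai" and the model name "DeepSeek-V3-0324"; a bare name like "Qwen3-32B" is rejected.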

API Limits

  • The ModelScope API is rate-limited
  • Some models may require special permissions
  • Monitoring API usage in production is recommended
  • Some premium models may require payment
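A common way to cope with rate limits is exponential backoff with jitter. The sketch below is illustrative, not LobeChat code; `call` stands in for any zero-argument function that performs the API request and returns an HTTP status code.

```python
import random
import time

def call_with_backoff(call, max_retries: int = 5, base_delay: float = 0.5) -> int:
    """Retry `call` while it returns HTTP 429, doubling the wait each time."""
    delay = base_delay
    status = call()
    for _ in range(max_retries):
        if status != 429:  # success or a non-rate-limit error: stop retrying
            break
        time.sleep(delay + random.uniform(0, delay / 10))  # small jitter
        delay *= 2  # exponential backoff
        status = call()
    return status
```

For example, a call that is rate-limited twice and then succeeds returns 200 after two short waits.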
