Add dedicated VLM model configuration and wire it through multi-tenant VLM creation #10
```diff
@@ -15,6 +15,11 @@ LLM_API_KEY="your_llm_api_key_here"
 LLM_BASE_URL="https://ark.ap-southeast.bytepluses.com/api/v3"
 # Name of the model to use
 LLM_MODEL=seed-1-6-250615
+# VLM model for multimodal image understanding (required, independent of LLM_MODEL)
+VLM_MODEL=seed-1-6-250615
+# Optional: dedicated key/base URL for the VLM (reuses the LLM settings when unset)
+# VLM_API_KEY="your_vlm_api_key_here"
+# VLM_BASE_URL="https://api.example.com/v1"
 # LLM provider identifier (ark/openai/claude)
 LLM_PROVIDER=ark
 # VLM image understanding API timeout (seconds, default 120)
```
Comment on lines 15 to 25

Owner: Can't the LLM and VLM use different providers? Does the VLM really have to use the LLM's Base URL?
Contributor (Author): The VLM now supports its own API Key/Base URL (VLM_API_KEY/VLM_BASE_URL have been added to the env example), so it no longer has to reuse the LLM Base URL (e5548d2).
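The fallback described in the env comments (dedicated VLM credentials win, otherwise reuse the LLM's) can be sketched as a small helper. This is an illustrative function, not the PR's actual code; the name `resolve_vlm_settings` is an assumption.

```python
def resolve_vlm_settings(env: dict) -> tuple:
    """Resolve the effective VLM model/key/base URL from env-style settings.

    Hypothetical sketch of the fallback the env comments describe:
    VLM_MODEL is required, while VLM_API_KEY / VLM_BASE_URL reuse the
    LLM settings when unset.
    """
    model = env["VLM_MODEL"]  # required: raises KeyError when missing
    api_key = env.get("VLM_API_KEY") or env["LLM_API_KEY"]
    base_url = env.get("VLM_BASE_URL") or env["LLM_BASE_URL"]
    return model, api_key, base_url
```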
```diff
@@ -318,4 +323,3 @@ TZ=Asia/Shanghai
 # --- Python configuration ---
 PYTHONUNBUFFERED=1  # Disable output buffering so logs show up in real time
```
```diff
@@ -8,7 +8,6 @@
 Refactor rationale: unify configuration management, renaming from
 provider-oriented to capability-oriented
 """

 import os
 from typing import Optional
 from pydantic import Field
 from pydantic_settings import BaseSettings

@@ -22,6 +21,9 @@ class LLMConfig(BaseSettings):
     api_key: str = Field(..., description="LLM API Key")
     base_url: str = Field(..., description="LLM API Base URL")
     model: str = Field(default="seed-1-6-250615", description="LLM Model Name")
+    vlm_model: str = Field(..., description="VLM Model Name", alias="VLM_MODEL")
+    vlm_api_key: Optional[str] = Field(default=None, description="VLM API Key", alias="VLM_API_KEY")
+    vlm_base_url: Optional[str] = Field(default=None, description="VLM API Base URL", alias="VLM_BASE_URL")
```
Comment on lines +24 to +26
```diff
     vlm_timeout: int = Field(default=120, description="VLM Image Understanding Timeout (seconds)")
     timeout: int = Field(default=60, description="General LLM Timeout (seconds)")
```
The documentation says VLM_MODEL is required (必填), and in the code at src/config.py:24 `vlm_model` uses `Field(...)`, which indeed makes it required at the pydantic validation level. However, the comment also says it should be "independent from LLM_MODEL" (独立于 LLM_MODEL), yet the default for LLM_MODEL and the example value for VLM_MODEL are both `seed-1-6-250615`, which blurs whether the two should actually differ. Consider clarifying whether VLM_MODEL may reuse the same model as LLM_MODEL (for models that support both text and vision), or whether they must be different models.
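How the new fields might be consumed when creating a per-tenant VLM client (the PR title mentions wiring through multi-tenant VLM creation) can be sketched as below. The `VLMClientSettings` type and `vlm_settings_from_config` name are assumptions for illustration, not the PR's actual API; only the field names mirror `LLMConfig` above.

```python
from dataclasses import dataclass


@dataclass
class VLMClientSettings:
    """Per-tenant settings a VLM client factory might consume (illustrative)."""
    model: str
    api_key: str
    base_url: str
    timeout: int = 120


def vlm_settings_from_config(cfg: dict) -> VLMClientSettings:
    # Apply the fallback the fields above describe: dedicated VLM
    # key/URL win, otherwise the LLM credentials are reused.
    return VLMClientSettings(
        model=cfg["vlm_model"],                            # required field
        api_key=cfg.get("vlm_api_key") or cfg["api_key"],  # optional, falls back
        base_url=cfg.get("vlm_base_url") or cfg["base_url"],
        timeout=cfg.get("vlm_timeout", 120),
    )
```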