[Bug] When using the DeepSeek-R1 model, the top of the message returned by the assistant is slightly truncated #5530
Comments
📦 Deployment environment
Docker
📌 Software version
v1.47.9
💻 System environment
macOS
🌐 Browser
Edge
🐛 Problem description
📷 Steps to reproduce
No response
🚦 Expected results
No response
📝 Supplementary information
No response
👀 @ysicing Thank you for raising this issue. We will investigate the matter and get back to you as soon as possible.
I found a similar closed issue related to the lobe-chat framework, but unfortunately no solution was found in its comments [1]. For your specific issue with the DeepSeek-R1 model, where the information at the top of the reply is slightly truncated, you might want to check whether any updates or patches are available for the version you are using (v1.47.9). Additionally, reviewing the rendering component responsible for the chat interface might provide insights.

To continue talking to Dosu, mention @dosu.
The issue of text being slightly truncated at the top when using the DeepSeek-R1 model in the lobe-chat application could be related to the rendering component mentioned above. Additionally, the application handles different screen sizes and resolutions using layout tokens, which might influence text display. This includes specific dimensions for UI components that adapt to various screen sizes, potentially affecting how text is rendered on different devices [2][3]. To address this, you might want to investigate the rendering component and the layout tokens.

To continue talking to Dosu, mention @dosu.
In fact, this is not just a problem with DeepSeek R1 replies; it happens whenever an LLM response carries <> tags at its beginning and end.
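For illustration, here is a minimal TypeScript sketch of one possible workaround: escape any tag the renderer does not recognize, so a reply wrapped in angle-bracket tags shows up as literal text instead of being swallowed when the markup is treated as raw HTML. This is not lobe-chat's actual code; `escapeUnknownTags` and the `KNOWN_TAGS` allowlist are hypothetical names.

```ts
// Hypothetical allowlist of tags the markdown renderer is allowed to keep.
const KNOWN_TAGS = new Set(['b', 'i', 'em', 'strong', 'code', 'pre']);

// Escape any other <tag> so it renders as visible text rather than raw HTML.
function escapeUnknownTags(markdown: string): string {
  return markdown.replace(
    /<\/?([a-zA-Z][\w-]*)[^>]*>/g,
    (tag, name: string) =>
      KNOWN_TAGS.has(name.toLowerCase())
        ? tag
        : tag.replace(/</g, '&lt;').replace(/>/g, '&gt;'),
  );
}

// escapeUnknownTags('<think>…</think>Hello') keeps the whole reply visible.
```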
Should we perhaps support, for specific models such as deepseek-r1, parsing the `<think>` tag so that the model's thinking process can be displayed separately?
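Something along these lines, as a hedged sketch (none of these names come from the lobe-chat codebase): split a raw R1 completion into its reasoning block and its final answer, so the UI can render the thinking process in its own collapsible panel.

```ts
interface ParsedReply {
  reasoning: string | null; // contents of a leading <think>…</think> block, if any
  answer: string;           // the visible reply that follows it
}

// Pull a leading <think>…</think> block out of a raw model completion.
function parseThinkTags(raw: string): ParsedReply {
  const match = raw.match(/^\s*<think>([\s\S]*?)<\/think>\s*/);
  if (!match) return { reasoning: null, answer: raw };
  return { reasoning: match[1].trim(), answer: raw.slice(match[0].length) };
}
```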
Also, as can be seen on the official DeepSeek website, the CoT does not need to be placed back into the conversation context. Found open-webui's way of handling this: open-webui/open-webui#8706
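To illustrate that last point, a sketch under the assumption of an OpenAI-style message array (the helper name is made up): strip the `<think>…</think>` block from previous assistant turns before they are sent back as context, since the CoT is not meant to be replayed to the model.

```ts
interface ChatMessage {
  role: 'system' | 'user' | 'assistant';
  content: string;
}

// Remove reasoning blocks from past assistant turns before the next request.
function stripReasoningFromHistory(history: ChatMessage[]): ChatMessage[] {
  return history.map((msg) =>
    msg.role === 'assistant'
      ? { ...msg, content: msg.content.replace(/<think>[\s\S]*?<\/think>\s*/g, '') }
      : msg,
  );
}
```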