
[Bug] When using the DeepSeek-R1 model, the top of the assistant's reply is slightly truncated #5530

Open
ysicing opened this issue Jan 21, 2025 · 12 comments
Labels
🐛 Bug Something isn't working | 缺陷

Comments

@ysicing

ysicing commented Jan 21, 2025

📦 Deployment environment

Docker

📌 Software version

v1.47.9

💻 System environment

macOS

🌐 Browser

Edge

🐛 Problem description

Image

📷 Steps to reproduce

No response

🚦 Expected results

No response

📝 Supplementary information

No response

@ysicing ysicing added the 🐛 Bug Something isn't working | 缺陷 label Jan 21, 2025
@github-project-automation github-project-automation bot moved this to Roadmap - Chat 1.x in Lobe Chat Routine Jan 21, 2025

@lobehubbot
Member

👀 @ysicing

Thank you for raising an issue. We will investigate the matter and get back to you as soon as possible.
Please make sure you have given us as much context as possible.


dosubot bot commented Jan 21, 2025

I found a similar closed issue related to the lobe-chat framework, but unfortunately, no solution was found in the comments [1].

For your specific issue with the DeepSeek-R1 model where the information at the top is slightly truncated, you might want to check if there are any updates or patches available for the version you are using (v1.47.9). Additionally, reviewing the rendering component responsible for the chat interface might provide insights. The Render component in the Render.tsx file is responsible for displaying chat artifacts and handling interactions [2][3].



dosubot bot commented Jan 21, 2025

The issue of text being slightly truncated at the top when using the DeepSeek-R1 model in the lobe-chat application could be related to the renderMessage function in the ChatItem component. This function is responsible for rendering messages and might be affecting how text is displayed [1].

Additionally, the application handles different screen sizes and resolutions using layout tokens, which might influence text display. This includes specific dimensions for UI components that adapt to various screen sizes, potentially affecting how text is rendered on different devices [2][3].

To address this issue, you might want to investigate the renderMessage function and any CSS or styling rules applied to the chat interface that could affect text rendering, especially in the Edge browser on macOS.


@2019YKL

2019YKL commented Jan 21, 2025

Reproducible on a Mac (M3) with a docker compose deployment, version v1.47.9.

Image

@YuJian920

Actually this is not just a problem with DeepSeek R1 replies; it happens whenever an LLM response has <> tags at its beginning and end.
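This kind of truncation is consistent with a markdown/HTML renderer treating the unknown wrapping tag as an HTML element and hiding it together with part of the content. As an illustrative sketch (the function name and approach are assumptions, not lobe-chat code), one fix is to escape a wrapping tag pair before rendering:

```typescript
// Hypothetical helper (not part of lobe-chat): if a reply is wrapped in a
// single unknown tag pair such as <think>…</think>, escape the angle
// brackets so a markdown renderer shows the tag as text instead of
// swallowing it as an HTML element.
function escapeWrappingTag(content: string): string {
  const match = content.match(/^<(\w+)>([\s\S]*)<\/\1>$/);
  if (!match) return content;
  const [, tag, inner] = match;
  return `&lt;${tag}&gt;${inner}&lt;/${tag}&gt;`;
}
```

A reply that is not wrapped in a tag pair passes through unchanged.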


@deepwzh

deepwzh commented Jan 22, 2025

Should support be added for specific models, such as deepseek-r1, to parse the <think> tag and display the thinking process separately?
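A minimal sketch of that idea, assuming the reasoning arrives inline as a <think>…</think> prefix (the function and interface names are made up for illustration, not lobe-chat code):

```typescript
interface ParsedReply {
  reasoning: string | null; // content of the <think> block, if present
  answer: string;           // the user-visible reply
}

// Split a DeepSeek-R1-style reply into the reasoning block and the answer,
// so the UI can render the thinking process in a separate, collapsible view.
function splitThinkTag(raw: string): ParsedReply {
  const match = raw.match(/^\s*<think>([\s\S]*?)<\/think>\s*/);
  if (!match) return { reasoning: null, answer: raw };
  return { reasoning: match[1].trim(), answer: raw.slice(match[0].length) };
}
```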


@wangerzi

Also, as can be seen on the official DeepSeek website, the CoT does not need to be placed into the conversation context.

Image

Found open-webui's way of handling this: open-webui/open-webui#8706
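Following that approach, the reasoning can also be stripped from earlier assistant turns before the history is resent to the API, since the CoT is not part of the conversation context. A hedged sketch (the names are illustrative, not lobe-chat or open-webui code):

```typescript
interface ChatMessage {
  role: "user" | "assistant" | "system";
  content: string;
}

// Drop <think>…</think> reasoning from earlier assistant messages before
// resending the history, keeping only the final answers in the context.
function stripReasoning(history: ChatMessage[]): ChatMessage[] {
  return history.map((m) =>
    m.role === "assistant"
      ? { ...m, content: m.content.replace(/<think>[\s\S]*?<\/think>\s*/g, "") }
      : m,
  );
}
```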

