feat(lightspeed): introduce lightspeed backend #1988
Conversation
Quality Gate failed: failed conditions reported by the analysis.
```ts
router.post(
  '/chat/completions',
  validateCompletionsRequest,
```
Maybe we can validate only via openai's `APIError`, or by using the `openapi.yaml` schema and `openapi-backend`.
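A minimal sketch of the second suggestion, assuming an Express-style router and that the plugin ships an `openapi.yaml` next to the source; the middleware name mirrors `validateCompletionsRequest` from the diff, and the exact `openapi-backend` call shapes should be checked against its docs:

```ts
import OpenAPIBackend from 'openapi-backend';
import type { Request, Response, NextFunction } from 'express';

// Load and parse the schema once at startup (the path is an assumption).
const api = new OpenAPIBackend({ definition: './openapi.yaml' });
api.init();

// Express middleware: reject requests that do not match the schema.
export function validateCompletionsRequest(
  req: Request,
  res: Response,
  next: NextFunction,
) {
  const result = api.validateRequest({
    method: req.method,
    path: req.path,
    body: req.body,
    query: req.query as Record<string, string | string[]>,
    headers: req.headers as Record<string, string | string[]>,
  });
  if (!result.valid) {
    res.status(400).json({ errors: result.errors });
    return;
  }
  next();
}
```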
```ts
      stream: true,
    });

    for await (const chunk of stream) {
```
We would need streaming support in the API as well so that we can show streaming data in the UI.
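For reference, a hedged sketch of what the backend side could look like, building on the `stream: true` call from this diff: set server-sent-event headers and forward each delta as it arrives (the `model` and `messages` variables are assumed to be in scope, and `res` is an Express response):

```ts
// Tell the client to expect server-sent events.
res.setHeader('Content-Type', 'text/event-stream');
res.setHeader('Cache-Control', 'no-cache');
res.setHeader('Connection', 'keep-alive');

const stream = await openai.chat.completions.create({
  model,
  messages,
  stream: true,
});

// Forward each token delta to the client as soon as it arrives.
for await (const chunk of stream) {
  const delta = chunk.choices[0]?.delta?.content ?? '';
  if (delta) {
    res.write(`data: ${JSON.stringify({ content: delta })}\n\n`);
  }
}
res.end();
```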
Yes, I tried this approach in the frontend plugin to read streamed data from the backend (the request has the `'Content-Type': 'text/event-stream'` header set):

```ts
const reader = response.body!
  .pipeThrough(new TextDecoderStream())
  .getReader();
while (true) {
  const { value, done } = await reader.read();
  if (done) break;
  console.log('Received: ', value);
}
```

This logs the data as it is streamed from the backend.
```ts
  });

  try {
    const stream = await openai.chat.completions.create({
```
Wouldn't it be better if we just proxy requests from the UI to the API server? That way we can utilize any OpenAI-compliant API from the UI directly. We can build other APIs into the backend as needed, like chat history, but I am not sure if we should wrap the completion API like this.
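If the proxy route were taken, a minimal sketch using `http-proxy-middleware` might look like the following; the target URL, mount path, and environment variable names are assumptions, not part of this PR:

```ts
import { createProxyMiddleware } from 'http-proxy-middleware';

// Forward OpenAI-compatible calls straight to the model server, attaching
// the API key server-side so it never reaches the browser.
router.use(
  '/v1',
  createProxyMiddleware({
    target: process.env.LLM_SERVER_URL ?? 'http://localhost:8080',
    changeOrigin: true,
    headers: { Authorization: `Bearer ${process.env.LLM_API_KEY}` },
  }),
);
```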
Yes, that could be another approach, with messages then saved via separate API calls. We should discuss this and then redefine RHIDP-2949 if needed.
New plugins should be contributed to https://github.com/backstage/community-plugins as this repo is deprecated.
```diff
@@ -0,0 +1,78 @@
+# Lightspeed Backend
+
+This is the lightspeed backend plugin that enables you to interact with any LLM server running a model with OpenAI's API compatibility.
```
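As a usage illustration, a frontend call to the completions route from this PR might look like the following (the mount path, model name, and payload shape are assumptions); the streamed response can then be read with the `TextDecoderStream` reader shown earlier:

```ts
// Hypothetical client-side call; adjust the mount path to the real plugin route.
const response = await fetch('/api/lightspeed/chat/completions', {
  method: 'POST',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({
    model: 'some-openai-compatible-model', // assumed model name
    messages: [{ role: 'user', content: 'What is Backstage?' }],
  }),
});
```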
New plugins should be contributed to https://github.com/backstage/community-plugins as this repo is deprecated.
Description:
Introduces a new backend plugin that can act as a proxy to LLM models.
Fixes:
RHIDP-2949