
Predicted tokens support in instructor #1312

Open · wants to merge 2 commits into base: main

Conversation

@sreeprasannar (Contributor) commented Jan 18, 2025

#1145 is already trivially supported by instructor, since the kwargs are passed straight through to the OpenAI endpoint. This PR bubbles up the predicted-token details from the response into the total usage in `update_total_usage`, which can be useful for callers.

A test is also added to demonstrate the correct behaviour.


Important

Add tracking of predicted tokens in update_total_usage and test in test_predicted_outputs.py.

  • Behavior:
    • update_total_usage in utils.py now tracks accepted_prediction_tokens and rejected_prediction_tokens from completion_tokens_details.
  • Testing:
    • New test test_predicted_outputs in test_predicted_outputs.py verifies tracking of predicted tokens in response usage.
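The accumulation described above can be sketched as follows. This is a simplified, hypothetical version: the dataclasses stand in for OpenAI's `CompletionUsage`/`CompletionTokensDetails` response types, and the real `update_total_usage` in instructor's `utils.py` may differ in detail.

```python
from dataclasses import dataclass, field


@dataclass
class CompletionTokensDetails:
    # Populated by the API when a `prediction` is supplied with the request;
    # both counts default to 0 otherwise.
    accepted_prediction_tokens: int = 0
    rejected_prediction_tokens: int = 0


@dataclass
class Usage:
    prompt_tokens: int = 0
    completion_tokens: int = 0
    total_tokens: int = 0
    completion_tokens_details: CompletionTokensDetails = field(
        default_factory=CompletionTokensDetails
    )


def update_total_usage(total: Usage, response_usage: Usage) -> Usage:
    """Accumulate one response's usage into a running total,
    including the predicted-token details."""
    total.prompt_tokens += response_usage.prompt_tokens
    total.completion_tokens += response_usage.completion_tokens
    total.total_tokens += response_usage.total_tokens
    details = response_usage.completion_tokens_details
    if details is not None:
        total.completion_tokens_details.accepted_prediction_tokens += (
            details.accepted_prediction_tokens or 0
        )
        total.completion_tokens_details.rejected_prediction_tokens += (
            details.rejected_prediction_tokens or 0
        )
    return total
```

Callers can then inspect `total.completion_tokens_details` after a (possibly retried) structured-output call to see how many of the predicted tokens were accepted versus rejected.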

This description was created by Ellipsis for 3f38122. It will automatically update as commits are pushed.

…ompletion usage

This is only relevant when `prediction` is present in the request kwargs; otherwise the counts default to 0.
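For context, a caller opts into predicted outputs by passing a `prediction` entry among the request kwargs, which instructor forwards unchanged to the OpenAI endpoint. A minimal sketch of the request shape follows; the `prediction` structure matches OpenAI's Predicted Outputs API, while the model name, messages, and draft content are illustrative only.

```python
# A draft the model can accept tokens from verbatim, speeding up generation.
draft_code = 'def greet(name):\n    return f"Hello, {name}!"\n'

# Kwargs a caller might pass through instructor to the OpenAI endpoint.
request_kwargs = {
    "model": "gpt-4o",  # illustrative model name
    "messages": [
        {"role": "user", "content": "Rename the function greet to welcome."}
    ],
    # Predicted Outputs: the API reports accepted/rejected prediction
    # tokens in usage.completion_tokens_details of the response.
    "prediction": {"type": "content", "content": draft_code},
}

print(sorted(request_kwargs))
```

With this PR, the accepted and rejected prediction-token counts from such a call are carried through into the accumulated usage rather than being dropped.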
@ellipsis-dev ellipsis-dev bot (Contributor) left a comment

👍 Looks good to me! Reviewed everything up to 3f38122 in 21 seconds

More details
  • Looked at 53 lines of code in 2 files
  • Skipped 0 files when reviewing.
  • Skipped posting 2 drafted comments based on config settings.
1. tests/llm/test_openai/test_predicted_outputs.py:3
  • Draft comment:
    The client parameter is not initialized in this test function. Ensure that client is properly set up before using it in the test.
  • Reason this comment was not posted:
    Confidence changes required: 80%
    The test function is missing a proper setup for the client parameter, which is crucial for the test to run correctly. This could lead to test failures or unexpected behavior.
2. tests/llm/test_openai/test_predicted_outputs.py:33
  • Draft comment:
    Assertions should include error messages for better debugging. This applies to both assertions in this file.
  • Reason this comment was not posted:
    Confidence changes required: 80%
    The assertions in the test file lack error messages, which is against the rule that assertions should always have an error message.

Workflow ID: wflow_01M0obeWPvzXT3GQ


You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
