Commit

Merge pull request #12 from mrsteele/feature/stringifyFeature
feat: adding stringify argument
mrsteele authored Aug 7, 2023
2 parents e031f77 + 8c1841c commit 3d6855c
Showing 3 changed files with 33 additions and 3 deletions.
15 changes: 13 additions & 2 deletions README.md
@@ -57,10 +57,13 @@ Embeddings support a lot of data, and sometimes more data than you have room for
// protect your requests from going over:
await fetch('https://api.openai.com/v1/embeddings', {
  method: 'POST',
-  body: JSON.stringify(truncateWrapper({
+  body: truncateWrapper({
    model: 'text-embedding-ada-002',
+    opts: {
+      stringify: true // we will even take care of this for you
+    },
    inputs: ['large data set, pretend this goes on for most of eternity...']
-  }))
+  })
})
```

@@ -101,6 +101,14 @@ const truncatedBody = truncateWrapper({
})
```

#### Options

You can pass options to the truncate wrapper as seen in the examples above. The following options are currently supported (a usage sketch follows the list):

* **limit** (Int) - The token limit you want to enforce on the messages/input. The limit applies to the aggregated total of all messages (GPT/completions) and to each individual input for embeddings, which is how OpenAI calculates them. Defaults to the model maximum.
* **buffer** (Int) - An additional restriction applied on top of the limit; the effective maximum is `max = limit - buffer`. Defaults to `0`.
* **stringify** (Bool) - Whether the output should be returned as a JSON string instead of a plain object. Defaults to `false`.

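For instance, the three options can be combined in a single request. This is a minimal sketch assuming, as the examples above suggest, that `limit` and `buffer` sit alongside `stringify` inside `opts`; the specific values are arbitrary:

```js
const { truncateWrapper } = require('openai-tokens')

const body = truncateWrapper({
  model: 'gpt-3.5-turbo',
  opts: {
    limit: 1000,    // enforce a 1000-token ceiling instead of the model maximum
    buffer: 100,    // leave headroom: the effective max is 1000 - 100 = 900 tokens
    stringify: true // return a JSON string instead of a plain object
  },
  messages: [{ role: 'user', content: 'a very long prompt...' }]
})
```
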
### Validate

The validation tools are used when you need information about prompt cost or token counts.
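
A minimal sketch of calling the validation side, assuming a `validateWrapper` export that accepts the same body shape as `truncateWrapper`; the exact fields of the returned report are not shown in this diff, so treat them as placeholders:

```js
const { validateWrapper } = require('openai-tokens')

// Sketch only: the shape of the returned report is an assumption here.
const report = validateWrapper({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'How many tokens will this cost?' }]
})

console.log(report) // token / cost information for the prompt
```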
18 changes: 18 additions & 0 deletions src/index.test.js
@@ -34,4 +34,22 @@ describe('Index', () => {
    expect(response.model).toBe('text-embedding-ada-002')
    expect(response.messages.length).toBe(1)
  })

  test('truncateWrapper - stringify', () => {
    const obj = {
      model: 'gpt-3.5-turbo',
      opts: {
        stringify: true
      },
      messages: [{
        role: 'user',
        content: 'hi'
      }]
    }
    const response = truncateWrapper(obj)

    const { opts, ...expected } = obj

    expect(response).toBe(JSON.stringify(expected))
  })
})
3 changes: 2 additions & 1 deletion src/truncate.js
@@ -87,7 +87,8 @@ const truncateCompletion = (originalBody = {}) => {
*/
const truncateWrapper = (body = {}) => {
  const fn = body.input ? truncateEmbedding : truncateCompletion
-  return fn(body)
+  const res = fn(body)
+  return body.opts?.stringify ? JSON.stringify(res) : res
}

module.exports = {

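With this change, `truncateWrapper` can hand back a request body that is already serialized. A minimal usage sketch of the new behaviour, assuming the package is consumed as `openai-tokens` (the return type with `stringify: true` matches the new test above):

```js
const { truncateWrapper } = require('openai-tokens')

// Default behaviour: a plain object you serialize yourself.
const body = truncateWrapper({
  model: 'gpt-3.5-turbo',
  messages: [{ role: 'user', content: 'hi' }]
})

// With stringify: true, the wrapper calls JSON.stringify on the result,
// so it can be passed straight to fetch as the request body.
const stringBody = truncateWrapper({
  model: 'gpt-3.5-turbo',
  opts: { stringify: true },
  messages: [{ role: 'user', content: 'hi' }]
})

console.log(typeof body)       // 'object'
console.log(typeof stringBody) // 'string'
```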