## Description
Added a batching feature to LLMs so that `onTokenCallback` is not
triggered on every token, but after each batch, to reduce the number of
rerenders.
### Introduces a breaking change?
- [ ] Yes
- [x] No
### Type of change
- [ ] Bug fix (change which fixes an issue)
- [x] New feature (change which adds functionality)
- [x] Documentation update (improves or adds clarity to existing
documentation)
- [ ] Other (chores, tests, code style improvements etc.)
### Tested on
- [x] iOS
- [x] Android
### Testing instructions
<!-- Provide step-by-step instructions on how to test your changes.
Include setup details if necessary. -->
### Screenshots
<!-- Add screenshots here, if applicable -->
### Related issues
<!-- Link related issues here using #issue-number -->
### Checklist
- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings
### Additional notes
<!-- Include any additional information, assumptions, or context that
reviewers might need to understand this PR. -->
---------
Co-authored-by: Mateusz Kopciński <[email protected]>
| Field | Type | Description |
| ----- | ---- | ----------- |
|`generate()`|`(messages: Message[], tools?: LLMTool[]) => Promise<void>`| Runs the model to complete the chat passed in the `messages` argument. It doesn't manage conversation context. |
|`interrupt()`|`() => void`| Function to interrupt the current inference. |
|`response`|`string`| State of the generated response. This field is updated with each token generated by the model. |
|`token`|`string`| The most recently generated token. |
|`isReady`|`boolean`| Indicates whether the model is ready. |
|`isGenerating`|`boolean`| Indicates whether the model is currently generating a response. |
|`downloadProgress`|`number`| Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
|`error`| <code>string | null</code> | Contains the error message if the model failed to load. |
|`configure`|`({chatConfig?: Partial<ChatConfig>, toolsConfig?: ToolsConfig, generationConfig?: GenerationConfig}) => void`| Configures chat, tool calling, and generation. See more details in [configuring the model](#configuring-the-model). |
|`sendMessage`|`(message: string) => Promise<void>`| Function to add a user message to the conversation. After the model responds, `messageHistory` will be updated with both the user message and the model response. |
|`deleteMessage`|`(index: number) => void`| Deletes all messages starting with the message at position `index`. After deletion, `messageHistory` will be updated. |
|`messageHistory`|`Message[]`| History containing all messages in the conversation. This field is updated after the model responds to `sendMessage`. |
|`getGeneratedTokenCount`|`() => number`| Returns the number of tokens generated in the last response. |
You can use the functions returned from this hook in two ways:

1. Functional/pure - we will not keep any state for you. You'll need to keep conversation history and handle function calling yourself. Use `generate` (and rarely `forward`) and `response`. Note that you don't need to run `configure` to use those; furthermore, `chatConfig` and `toolsConfig` will not have any effect on these functions (see the sketch after this list).

2. Managed/stateful - we will manage conversation state. Tool calls will be parsed and called automatically after passing appropriate callbacks. See more at [managed LLM chat](#managed-llm-chat).
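A minimal sketch of the functional/pure style, using only `generate`, `response`, and `isReady` from the reference table above. The `useLLM` hook name, import path, its model argument, and the exact `Message` shape are assumptions for illustration; adjust them to the setup described elsewhere in these docs.

```tsx
import { useEffect } from 'react';
// Hook name and import path are assumptions - adjust to your setup.
import { useLLM } from 'react-native-executorch';

export function useHaiku(): string {
  // Model/tokenizer configuration intentionally omitted - pass it as described in the loading section.
  const llm = useLLM(/* model configuration */);

  useEffect(() => {
    if (!llm.isReady) return;
    // Functional/pure usage: we pass the whole conversation ourselves.
    // No `configure` call is required for `generate`.
    llm.generate([
      { role: 'system', content: 'You are a helpful assistant.' }, // assumed Message shape
      { role: 'user', content: 'Write a haiku about token batching.' },
    ]);
  }, [llm.isReady]);

  // `response` is updated as tokens (or token batches) arrive.
  return llm.response;
}
```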
- **`displayToolCalls`** - If set to true, JSON tool calls will be displayed in the chat. If false, only answers will be displayed.

**`generationConfig`** - Object configuring generation settings, currently only output token batching.

- **`outputTokenBatchSize`** - Soft upper limit on the number of tokens in each token batch (in certain cases there can be more tokens in a given batch, e.g. when the batch would otherwise end with a special emoji join character).

- **`batchTimeInterval`** - Upper limit on the time interval between consecutive token batches.
### Sending a message
In order to send a message to the model, one can use the following code:
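For illustration, a minimal sketch of the managed/stateful flow using `sendMessage`, `messageHistory`, `isReady`, and `isGenerating` from the reference table; the component structure, hook name, import path, model argument, and `Message` shape are assumptions, not the library's exact API.

```tsx
import React from 'react';
import { Button, Text, View } from 'react-native';
// Hook name and import path are assumptions - adjust to your setup.
import { useLLM } from 'react-native-executorch';

export function ChatScreen() {
  // Model/tokenizer configuration intentionally omitted - pass it as described in the loading section.
  const llm = useLLM(/* model configuration */);

  return (
    <View>
      {/* `messageHistory` holds both user messages and model responses (assumed Message shape). */}
      {llm.messageHistory.map((message, index) => (
        <Text key={index}>{`${message.role}: ${message.content}`}</Text>
      ))}
      <Button
        title="Send"
        disabled={!llm.isReady || llm.isGenerating}
        // `sendMessage` appends the user message and triggers the model's reply.
        onPress={() => llm.sendMessage('Hello! What can you do?')}
      />
    </View>
  );
}
```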
## Token Batching
Depending on the selected model and the user's device, generation speed can exceed 60 tokens per second. If the `tokenCallback` triggers rerenders and is invoked on every single token, it can significantly decrease the app's performance. To alleviate this and help improve performance, we've implemented token batching. To configure it, call the `configure` method and pass a `generationConfig`. There you can set two parameters, `outputTokenBatchSize` and `batchTimeInterval`, which set the number of tokens collected before a batch is emitted and the maximum time interval between consecutive batches, respectively. A batch is emitted when either `batchTimeInterval` elapses since the last batch or `outputTokenBatchSize` tokens have been generated. This allows for smooth output even if the model lags during generation. The defaults are 10 tokens and an 80 ms time interval (~12 batches per second).
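For example, a `configure` call that makes the defaults explicit might look like the sketch below. It assumes `llm` is the object returned by the hook (as in the earlier sketches) and that `batchTimeInterval` is expressed in milliseconds, matching the quoted defaults; the `generationConfig` fields themselves come from the configuration section above.

```tsx
useEffect(() => {
  if (!llm.isReady) return;
  llm.configure({
    generationConfig: {
      // Emit a batch once roughly 10 tokens have accumulated...
      outputTokenBatchSize: 10,
      // ...or once 80 ms have passed since the last batch, whichever comes first
      // (assumed to be milliseconds, per the defaults quoted above).
      batchTimeInterval: 80,
    },
  });
}, [llm.isReady]);
```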