Commit f80168a

mkopcins and Mateusz Kopciński authored
feature: llm output tokens batching (#628)
## Description

Added a batching feature to LLMs so that `onTokenCallback` is not triggered on each token but after every batch, reducing the number of rerenders.

### Introduces a breaking change?

- [ ] Yes
- [x] No

### Type of change

- [ ] Bug fix (change which fixes an issue)
- [x] New feature (change which adds functionality)
- [x] Documentation update (improves or adds clarity to existing documentation)
- [ ] Other (chores, tests, code style improvements etc.)

### Tested on

- [x] iOS
- [x] Android

### Checklist

- [x] I have performed a self-review of my code
- [x] I have commented my code, particularly in hard-to-understand areas
- [x] I have updated the documentation accordingly
- [x] My changes generate no new warnings

---

Co-authored-by: Mateusz Kopciński <[email protected]>
1 parent cec2156 · commit f80168a

File tree

12 files changed: +207 -55 lines


docs/docs/02-hooks/01-natural-language-processing/useLLM.md

Lines changed: 33 additions & 15 deletions
````diff
@@ -60,20 +60,21 @@ For more information on loading resources, take a look at [loading models](../..
 
 ### Returns
 
-| Field | Type | Description |
-| --- | --- | --- |
-| `generate()` | `(messages: Message[], tools?: LLMTool[]) => Promise<void>` | Runs model to complete chat passed in `messages` argument. It doesn't manage conversation context. |
-| `interrupt()` | `() => void` | Function to interrupt the current inference. |
-| `response` | `string` | State of the generated response. This field is updated with each token generated by the model. |
-| `token` | `string` | The most recently generated token. |
-| `isReady` | `boolean` | Indicates whether the model is ready. |
-| `isGenerating` | `boolean` | Indicates whether the model is currently generating a response. |
-| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
-| `error` | <code>string &#124; null</code> | Contains the error message if the model failed to load. |
-| `configure` | `({ chatConfig?: Partial<ChatConfig>, toolsConfig?: ToolsConfig }) => void` | Configures chat and tool calling. See more details in [configuring the model](#configuring-the-model). |
-| `sendMessage` | `(message: string) => Promise<void>` | Function to add user message to conversation. After model responds, `messageHistory` will be updated with both user message and model response. |
-| `deleteMessage` | `(index: number) => void` | Deletes all messages starting with message on `index` position. After deletion `messageHistory` will be updated. |
-| `messageHistory` | `Message[]` | History containing all messages in conversation. This field is updated after model responds to `sendMessage`. |
+| Field | Type | Description |
+| --- | --- | --- |
+| `generate()` | `(messages: Message[], tools?: LLMTool[]) => Promise<void>` | Runs model to complete chat passed in `messages` argument. It doesn't manage conversation context. |
+| `interrupt()` | `() => void` | Function to interrupt the current inference. |
+| `response` | `string` | State of the generated response. This field is updated with each token generated by the model. |
+| `token` | `string` | The most recently generated token. |
+| `isReady` | `boolean` | Indicates whether the model is ready. |
+| `isGenerating` | `boolean` | Indicates whether the model is currently generating a response. |
+| `downloadProgress` | `number` | Represents the download progress as a value between 0 and 1, indicating the extent of the model file retrieval. |
+| `error` | <code>string &#124; null</code> | Contains the error message if the model failed to load. |
+| `configure` | `({chatConfig?: Partial<ChatConfig>, toolsConfig?: ToolsConfig, generationConfig?: GenerationConfig}) => void` | Configures chat and tool calling. See more details in [configuring the model](#configuring-the-model). |
+| `sendMessage` | `(message: string) => Promise<void>` | Function to add user message to conversation. After model responds, `messageHistory` will be updated with both user message and model response. |
+| `deleteMessage` | `(index: number) => void` | Deletes all messages starting with message on `index` position. After deletion `messageHistory` will be updated. |
+| `messageHistory` | `Message[]` | History containing all messages in conversation. This field is updated after model responds to `sendMessage`. |
+| `getGeneratedTokenCount` | `() => number` | Returns the number of tokens generated in the last response. |
 
 <details>
 <summary>Type definitions</summary>
````

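For orientation, here is a minimal sketch of how the returned fields fit together after this change. Hook initialization is deliberately omitted (it follows the loading-models link in the hunk above), and the `LLMType`/`Message` names and the `{ role, content }` message shape are assumed from this doc's type definitions rather than verified package exports.

```tsx
// Assumed import: these type names match the doc's "Type definitions" block,
// but whether they are exported under exactly these names is not verified.
import type { LLMType } from 'react-native-executorch';

// Sketch only: `llm` is the object returned by useLLM(...); loading the model
// is covered in the "loading models" section and omitted here.
async function askOnce(llm: LLMType, prompt: string): Promise<string> {
  if (!llm.isReady || llm.isGenerating) return '';

  // Functional usage: pass the whole conversation yourself.
  await llm.generate([{ role: 'user', content: prompt }]);

  // `response` now holds the completed answer; the new helper reports how
  // many tokens the model produced for it.
  console.log(`generated ${llm.getGeneratedTokenCount()} tokens`);
  return llm.response;
}
```
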
````diff
@@ -102,9 +103,11 @@ interface LLMType {
   configure: ({
     chatConfig,
     toolsConfig,
+    generationConfig,
   }: {
     chatConfig?: Partial<ChatConfig>;
     toolsConfig?: ToolsConfig;
+    generationConfig?: GenerationConfig;
   }) => void;
   generate: (messages: Message[], tools?: LLMTool[]) => Promise<void>;
   sendMessage: (message: string) => Promise<void>;
````

````diff
@@ -138,6 +141,11 @@ interface ToolCall {
   arguments: Object;
 }
 
+interface GenerationConfig {
+  outputTokenBatchSize: number;
+  batchTimeInterval: number;
+}
+
 type LLMTool = Object;
 ```
 
````

````diff
@@ -147,7 +155,7 @@ type LLMTool = Object;
 
 You can use functions returned from this hooks in two manners:
 
-1. Functional/pure - we will not keep any state for you. You'll need to keep conversation history and handle function calling yourself. Use `generate` (and rarely `forward`) and `response`. Note that you don't need to run `configure` to use those. Furthermore, it will not have any effect on those functions.
+1. Functional/pure - we will not keep any state for you. You'll need to keep conversation history and handle function calling yourself. Use `generate` (and rarely `forward`) and `response`. Note that you don't need to run `configure` to use those. Furthermore, `chatConfig` and `toolsConfig` will not have any effect on those functions.
 
 2. Managed/stateful - we will manage conversation state. Tool calls will be parsed and called automatically after passing appropriate callbacks. See more at [managed LLM chat](#managed-llm-chat).
 
````

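To make the two styles above concrete, here is a short sketch under the same assumptions as the earlier example (type names taken from this doc's type definitions, not verified exports).

```tsx
// Assumed import: names follow the doc's "Type definitions" block.
import type { LLMType, Message } from 'react-native-executorch';

// Functional/pure: the caller owns the history; `configure` is not required.
async function pureCompletion(llm: LLMType, history: Message[]): Promise<string> {
  await llm.generate(history); // completes the chat passed in `history`
  return llm.response;         // full answer accumulated in `response`
}

// Managed/stateful: the hook keeps `messageHistory` updated for us.
async function managedTurn(llm: LLMType, text: string): Promise<Message[]> {
  await llm.sendMessage(text); // appends the user message and the model reply
  return llm.messageHistory;
}
```
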

````diff
@@ -267,6 +275,12 @@ To configure model (i.e. change system prompt, load initial conversation history
 
 - **`displayToolCalls`** - If set to true, JSON tool calls will be displayed in chat. If false, only answers will be displayed.
 
+**`generationConfig`** - Object configuring generation settings, currently only output token batching.
+
+- **`outputTokenBatchSize`** - Soft upper limit on the number of tokens in each token batch (in certain cases a batch can contain more tokens, e.g. when it would otherwise end with a special emoji join character).
+
+- **`batchTimeInterval`** - Upper limit on the time interval between consecutive token batches.
+
 ### Sending a message
 
 In order to send a message to the model, one can use the following code:
````

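A minimal sketch of the call the new bullets describe; the numbers are the defaults quoted in the Token Batching section below, and `batchTimeInterval` is assumed to be in milliseconds as that section implies.

```tsx
// Assuming `llm` is the object returned by useLLM(...):
llm.configure({
  generationConfig: {
    outputTokenBatchSize: 10, // emit a batch after ~10 tokens...
    batchTimeInterval: 80,    // ...or after 80 ms, whichever happens first
  },
});
```
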
````diff
@@ -459,6 +473,10 @@ The response should include JSON:
 }
 ```
 
+## Token Batching
+
+Depending on the selected model and the user's device, generation speed can exceed 60 tokens per second. If the token callback triggers rerenders and is invoked on every single token, it can significantly degrade the app's performance. To alleviate this, we've implemented token batching. To configure it, call the `configure` method and pass `generationConfig` with two parameters: `outputTokenBatchSize` and `batchTimeInterval`. They set, respectively, the number of tokens collected before a batch is emitted and the maximum time interval between consecutive batches. A batch is emitted when either `batchTimeInterval` elapses since the last batch or `outputTokenBatchSize` tokens have been generated, so updates stay smooth even if the model stalls during generation. The defaults are 10 tokens and 80 ms (~12 batches per second).
+
 ## Available models
 
 | Model Family | Sizes | Quantized |
````

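To make the emission rule concrete, below is a small, self-contained sketch of the behavior the Token Batching paragraph describes (flush on whichever comes first, batch size or time interval). It illustrates the documented rule only; it is not the library's internal implementation, and all names in it are invented for the example.

```ts
// Illustrative only: mimics the documented "size OR time, whichever first" rule.
type EmitBatch = (tokens: string[]) => void;

function createTokenBatcher(
  outputTokenBatchSize: number, // soft cap on tokens per batch
  batchTimeInterval: number,    // max ms between consecutive batches
  emit: EmitBatch
) {
  let buffer: string[] = [];
  let timer: ReturnType<typeof setTimeout> | undefined;

  const flush = () => {
    if (timer) clearTimeout(timer);
    timer = undefined;
    if (buffer.length > 0) {
      emit(buffer);
      buffer = [];
    }
  };

  return {
    push(token: string) {
      buffer.push(token);
      if (buffer.length >= outputTokenBatchSize) {
        flush(); // size threshold reached
      } else if (!timer) {
        timer = setTimeout(flush, batchTimeInterval); // time threshold fallback
      }
    },
    end: flush, // emit whatever is left when generation finishes
  };
}
```
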
