Commit bbd7f53

chore: update docs subsections
1 parent b9de78e commit bbd7f53

24 files changed: +357 additions, −236 deletions


docs/docs/02-hooks/01-natural-language-processing/useSpeechToText.md

Lines changed: 15 additions & 15 deletions
@@ -75,20 +75,20 @@ For more information on loading resources, take a look at [loading models](../..
 
 ### Returns
 
-| Field | Type | Description |
-| --------------------------- | ---------------------------------------------------------------------------------------------------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
-| `transcribe` | `(waveform: Float32Array \| number[], options?: DecodingOptions \| undefined) => Promise<string>` | Starts a transcription process for a given input array, which should be a waveform at 16kHz. The second argument is an options object, e.g. `{ language: 'es' }` for multilingual models. Resolves a promise with the output transcription when the model is finished. Passing `number[]` is deprecated. |
-| `stream` | `(options?: DecodingOptions \| undefined) => Promise<string>` | Starts a streaming transcription process. Use in combination with `streamInsert` to feed audio chunks and `streamStop` to end the stream. The argument is an options object, e.g. `{ language: 'es' }` for multilingual models. Updates `committedTranscription` and `nonCommittedTranscription` as transcription progresses. |
-| `streamInsert` | `(waveform: Float32Array \| number[]) => void` | Inserts a chunk of audio data (sampled at 16kHz) into the ongoing streaming transcription. Call this repeatedly as new audio data becomes available. Passing `number[]` is deprecated. |
-| `streamStop` | `() => void` | Stops the ongoing streaming transcription process. |
-| `encode` | `(waveform: Float32Array \| number[]) => Promise<Float32Array>` | Runs the encoding part of the model on the provided waveform. Passing `number[]` is deprecated. |
-| `decode` | `(tokens: number[] \| Int32Array, encoderOutput: Float32Array \| number[]) => Promise<Float32Array>` | Runs the decoder of the model. Passing `number[]` is deprecated. |
-| `committedTranscription` | `string` | Contains the part of the transcription that is finalized and will not change. Useful for displaying stable results during streaming. |
-| `nonCommittedTranscription` | `string` | Contains the part of the transcription that is still being processed and may change. Useful for displaying live, partial results during streaming. |
-| `error` | `string \| null` | Contains the error message if the model failed to load. |
-| `isGenerating` | `boolean` | Indicates whether the model is currently processing an inference. |
-| `isReady` | `boolean` | Indicates whether the model has successfully loaded and is ready for inference. |
-| `downloadProgress` | `number` | Tracks the progress of the model download process. |
+| Field | Type | Description |
+| --------------------------- | ---------------------------------------------------------------------------------------------------- | ----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
+| `transcribe` | `(waveform: Float32Array \| number[], options?: DecodingOptions \| undefined) => Promise<string>` | Starts a transcription process for a given input array, which should be a waveform at 16kHz. The second argument is an options object, e.g. `{ language: 'es' }` for multilingual models. Resolves a promise with the output transcription when the model is finished. Passing `number[]` is deprecated. |
+| `stream` | `(options?: DecodingOptions \| undefined) => Promise<string>` | Starts a streaming transcription process. Use in combination with `streamInsert` to feed audio chunks and `streamStop` to end the stream. The argument is an options object, e.g. `{ language: 'es' }` for multilingual models. Updates `committedTranscription` and `nonCommittedTranscription` as transcription progresses. |
+| `streamInsert` | `(waveform: Float32Array \| number[]) => void` | Inserts a chunk of audio data (sampled at 16kHz) into the ongoing streaming transcription. Call this repeatedly as new audio data becomes available. Passing `number[]` is deprecated. |
+| `streamStop` | `() => void` | Stops the ongoing streaming transcription process. |
+| `encode` | `(waveform: Float32Array \| number[]) => Promise<Float32Array>` | Runs the encoding part of the model on the provided waveform. Passing `number[]` is deprecated. |
+| `decode` | `(tokens: number[] \| Int32Array, encoderOutput: Float32Array \| number[]) => Promise<Float32Array>` | Runs the decoder of the model. Passing `number[]` is deprecated. |
+| `committedTranscription` | `string` | Contains the part of the transcription that is finalized and will not change. Useful for displaying stable results during streaming. |
+| `nonCommittedTranscription` | `string` | Contains the part of the transcription that is still being processed and may change. Useful for displaying live, partial results during streaming. |
+| `error` | `string \| null` | Contains the error message if the model failed to load. |
+| `isGenerating` | `boolean` | Indicates whether the model is currently processing an inference. |
+| `isReady` | `boolean` | Indicates whether the model has successfully loaded and is ready for inference. |
+| `downloadProgress` | `number` | Tracks the progress of the model download process. |
 
 <details>
 <summary>Type definitions</summary>
@@ -340,4 +340,4 @@ function App() {
 
 | Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] |
 | ------------ | :--------------------: | :----------------: |
-| WHISPER_TINY | 900 | 600 |
+| WHISPER_TINY | 410 | 375 |
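
The table above notes that passing `number[]` waveforms to `transcribe`, `streamInsert`, and `encode` is deprecated in favor of `Float32Array`. A minimal sketch of migrating legacy audio buffers before handing them to the hook; `toWaveform` is a hypothetical helper, not part of the library API:

```typescript
// Convert a legacy number[] waveform (16 kHz PCM samples) into the
// Float32Array form the hook's functions expect.
// `toWaveform` is an illustrative helper, not a library export.
function toWaveform(samples: number[] | Float32Array): Float32Array {
  // Already the preferred type: pass through without copying.
  if (samples instanceof Float32Array) {
    return samples;
  }
  return Float32Array.from(samples);
}
```

A converted chunk can then be fed to `streamInsert` repeatedly as audio arrives, or passed once to `transcribe`.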

docs/docs/02-hooks/01-natural-language-processing/useTextEmbeddings.md

Lines changed: 12 additions & 12 deletions
@@ -133,25 +133,25 @@ For the supported models, the returned embedding vector is normalized, meaning t
 
 | Model | Android (XNNPACK) [MB] | iOS (XNNPACK) [MB] |
 | -------------------------- | :--------------------: | :----------------: |
-| ALL_MINILM_L6_V2 | 85 | 100 |
-| ALL_MPNET_BASE_V2 | 390 | 465 |
-| MULTI_QA_MINILM_L6_COS_V1 | 115 | 130 |
-| MULTI_QA_MPNET_BASE_DOT_V1 | 415 | 490 |
-| CLIP_VIT_BASE_PATCH32_TEXT | 195 | 250 |
+| ALL_MINILM_L6_V2 | 95 | 110 |
+| ALL_MPNET_BASE_V2 | 405 | 455 |
+| MULTI_QA_MINILM_L6_COS_V1 | 120 | 140 |
+| MULTI_QA_MPNET_BASE_DOT_V1 | 435 | 455 |
+| CLIP_VIT_BASE_PATCH32_TEXT | 200 | 280 |
 
 ### Inference time
 
 :::warning warning
 Times presented in the tables are measured as consecutive runs of the model. Initial run times may be up to 2x longer due to model loading and initialization.
 :::
 
-| Model | iPhone 16 Pro (XNNPACK) [ms] | iPhone 14 Pro Max (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) | OnePlus 12 (XNNPACK) [ms] |
-| -------------------------- | :--------------------------: | :------------------------------: | :------------------------: | :--------------------------: | :-----------------------: |
-| ALL_MINILM_L6_V2 | 15 | 22 | 23 | 36 | 31 |
-| ALL_MPNET_BASE_V2 | 71 | 96 | 101 | 112 | 105 |
-| MULTI_QA_MINILM_L6_COS_V1 | 15 | 22 | 23 | 36 | 31 |
-| MULTI_QA_MPNET_BASE_DOT_V1 | 71 | 95 | 100 | 112 | 105 |
-| CLIP_VIT_BASE_PATCH32_TEXT | 31 | 47 | 48 | 55 | 49 |
+| Model | iPhone 17 Pro (XNNPACK) [ms] | iPhone 16 Pro (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | OnePlus 12 (XNNPACK) [ms] |
+| -------------------------- | :--------------------------: | :--------------------------: | :------------------------: | :-------------------------------: | :-----------------------: |
+| ALL_MINILM_L6_V2 | 16 | 16 | 19 | 54 | 28 |
+| ALL_MPNET_BASE_V2 | 115 | 116 | 144 | 145 | 95 |
+| MULTI_QA_MINILM_L6_COS_V1 | 16 | 16 | 20 | 47 | 28 |
+| MULTI_QA_MPNET_BASE_DOT_V1 | 112 | 119 | 144 | 146 | 96 |
+| CLIP_VIT_BASE_PATCH32_TEXT | 47 | 45 | 57 | 65 | 48 |
 
 :::info
 Benchmark times for text embeddings are highly dependent on the sentence length. The numbers above are based on a sentence of around 80 tokens. For shorter or longer sentences, inference time may vary accordingly.
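 
The hunk context above states that the returned embedding vectors are normalized, so cosine similarity between two embeddings reduces to a plain dot product. A small self-contained sketch of that property; the function names are illustrative, not part of the library:

```typescript
// For L2-normalized embeddings, cosine similarity is just the dot product.
function dot(a: Float32Array, b: Float32Array): number {
  let sum = 0;
  for (let i = 0; i < a.length; i++) {
    sum += a[i] * b[i];
  }
  return sum;
}

// Normalize a raw vector to unit length (what the supported models
// already do for their outputs, per the docs).
function normalize(v: Float32Array): Float32Array {
  const norm = Math.sqrt(dot(v, v));
  return Float32Array.from(v, (x) => x / norm);
}
```

With normalized vectors, ranking candidate sentences by similarity to a query embedding is a matter of sorting by `dot(query, candidate)` in descending order.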

docs/docs/02-hooks/02-computer-vision/useClassification.md

Lines changed: 3 additions & 3 deletions
@@ -100,14 +100,14 @@ function App() {
 
 | Model | Android (XNNPACK) [MB] | iOS (Core ML) [MB] |
 | ----------------- | :--------------------: | :----------------: |
-| EFFICIENTNET_V2_S | 130 | 85 |
+| EFFICIENTNET_V2_S | 230 | 87 |
 
 ### Inference time
 
 :::warning warning
 Times presented in the tables are measured as consecutive runs of the model. Initial run times may be up to 2x longer due to model loading and initialization.
 :::
 
-| Model | iPhone 16 Pro (Core ML) [ms] | iPhone 13 Pro (Core ML) [ms] | iPhone SE 3 (Core ML) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | OnePlus 12 (XNNPACK) [ms] |
+| Model | iPhone 17 Pro (Core ML) [ms] | iPhone 16 Pro (Core ML) [ms] | iPhone SE 3 (Core ML) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | OnePlus 12 (XNNPACK) [ms] |
 | ----------------- | :--------------------------: | :--------------------------: | :------------------------: | :-------------------------------: | :-----------------------: |
-| EFFICIENTNET_V2_S | 100 | 120 | 130 | 180 | 170 |
+| EFFICIENTNET_V2_S | 105 | 110 | 149 | 299 | 227 |

docs/docs/02-hooks/02-computer-vision/useImageEmbeddings.md

Lines changed: 3 additions & 3 deletions
@@ -123,9 +123,9 @@ For the supported models, the returned embedding vector is normalized, meaning t
 Times presented in the tables are measured as consecutive runs of the model. Initial run times may be up to 2x longer due to model loading and initialization. Performance also heavily depends on image size, because resizing is an expensive operation, especially on low-end devices.
 :::
 
-| Model | iPhone 16 Pro (XNNPACK) [ms] | iPhone 14 Pro Max (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | OnePlus 12 (XNNPACK) [ms] |
-| --------------------------- | :--------------------------: | :------------------------------: | :------------------------: | :-------------------------------: | :-----------------------: |
-| CLIP_VIT_BASE_PATCH32_IMAGE | 48 | 64 | 69 | 65 | 63 |
+| Model | iPhone 17 Pro (XNNPACK) [ms] | iPhone 16 Pro (XNNPACK) [ms] | iPhone SE 3 (XNNPACK) [ms] | Samsung Galaxy S24 (XNNPACK) [ms] | OnePlus 12 (XNNPACK) [ms] |
+| --------------------------- | :--------------------------: | :--------------------------: | :------------------------: | :-------------------------------: | :-----------------------: |
+| CLIP_VIT_BASE_PATCH32_IMAGE | 70 | 70 | 90 | 66 | 58 |
 
 :::info
 Image embedding benchmark times are measured using 224×224 pixel images, as required by the model. All input images, whether larger or smaller, are resized to 224×224 before processing. Resizing is typically fast for small images but may be noticeably slower for very large images, which can increase total inference time.
