
Commit ac9a946

Merge branch 'main' into @nk/monorepo-setup

2 parents c9db730 + a38e0a1

2 files changed: +18 −23 lines


README.md

Lines changed: 18 additions & 23 deletions
@@ -10,16 +10,14 @@
 
 **Table of contents:**
 
-- [🦙 **Quickstart - Running Llama**](#-quickstart---running-llama)
-- [1️⃣ **Installation**](#1️⃣-installation)
-- [2️⃣ **Setup \& Initialization**](#2️⃣-setup--initialization)
-- [3️⃣ **Run the model!**](#3️⃣-run-the-model)
-- [Minimal supported versions](#minimal-supported-versions)
-- [Examples 📲](#examples-)
-- [Warning](#warning)
-- [License](#license)
-- [What's next?](#whats-next)
-- [React Native ExecuTorch is created by Software Mansion](#react-native-executorch-is-created-by-software-mansion)
+- [Compatibility](#compatibility)
+- [Ready-made models 🤖](#ready-made-models-)
+- [Documentation 📚](#documentation-)
+- [Quickstart - Running Llama 🦙](#quickstart---running-llama-)
+- [Minimal supported versions](#minimal-supported-versions)
+- [Examples 📲](#examples-)
+- [License](#license)
+- [What's next?](#whats-next)
 
 ## Compatibility
 

@@ -36,7 +34,7 @@ To run any AI model in ExecuTorch, you need to export it to a `.pte` format. If
 Take a look at how our library can help you build your React Native AI features in our docs:
 https://docs.swmansion.com/react-native-executorch
 
-# 🦙 **Quickstart - Running Llama**
+## **Quickstart - Running Llama** 🦙
 
 **Get started with AI-powered text generation in 3 easy steps!**
 

@@ -48,8 +46,6 @@ yarn add react-native-executorch
 cd ios && pod install && cd ..
 ```
 
----
-
 ### 2️⃣ **Setup & Initialization**
 
 Add this to your component file:
@@ -58,6 +54,7 @@ Add this to your component file:
 import {
   useLLM,
   LLAMA3_2_1B,
+  LLAMA3_2_TOKENIZER,
   LLAMA3_2_TOKENIZER_CONFIG,
 } from 'react-native-executorch';
 
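The hunk above adds `LLAMA3_2_TOKENIZER` to the import list but elides how the constants are handed to the hook. As a rough, unofficial sketch, the setup presumably amounts to passing the model and tokenizer sources to `useLLM` as a config object; the field names `modelSource`, `tokenizerSource`, and `tokenizerConfigSource` below are assumptions, and plain string stand-ins replace the library's real exports so the snippet runs on its own:

```typescript
// Plain-TS model of the assumed config shape; the field names are guesses,
// and the constants below are stand-ins for the library's real exports.
type LLMConfig = {
  modelSource: string; // e.g. the LLAMA3_2_1B model binary
  tokenizerSource: string; // e.g. LLAMA3_2_TOKENIZER
  tokenizerConfigSource: string; // e.g. LLAMA3_2_TOKENIZER_CONFIG
};

// Hypothetical stand-in values; the real exports point at model assets.
const LLAMA3_2_1B = 'llama3_2_1b.pte';
const LLAMA3_2_TOKENIZER = 'tokenizer.bin';
const LLAMA3_2_TOKENIZER_CONFIG = 'tokenizer_config.json';

// Mirrors the config object a component would presumably pass to useLLM(...).
function buildLLMConfig(): LLMConfig {
  return {
    modelSource: LLAMA3_2_1B,
    tokenizerSource: LLAMA3_2_TOKENIZER,
    tokenizerConfigSource: LLAMA3_2_TOKENIZER_CONFIG,
  };
}

console.log(buildLLMConfig());
```

The point of the diff's change is visible here: the tokenizer itself and its config are separate sources, so both constants must be imported and wired in.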
@@ -72,8 +69,6 @@ function MyComponent() {
 }
 ```
 
----
-
 ### 3️⃣ **Run the model!**
 
 ```tsx
@@ -91,18 +86,18 @@ const handleGenerate = async () => {
 
 ## Minimal supported versions
 
-The minimal supported version is 17.0 for iOS and Android 13.
+The minimal supported versions are:
+* iOS 17.0
+* Android 13
 
 ## Examples 📲
 
-https://github.com/user-attachments/assets/27ab3406-c7f1-4618-a981-6c86b53547ee
-
 We currently host a few example apps demonstrating use cases of our library:
 
-- apps/llm - chat application showcasing use of LLMs
-- apps/speech-to-text - Whisper and Moonshine models ready for transcription tasks
-- apps/computer-vision - computer vision related tasks
-- apps/text-embeddings - computing text representations for semantic search
+- `apps/llm` - chat application showcasing use of LLMs
+- `apps/speech-to-text` - Whisper and Moonshine models ready for transcription tasks
+- `apps/computer-vision` - computer vision related tasks
+- `apps/text-embeddings` - computing text representations for semantic search
 
 If you would like to run one, navigate to its project directory, for example `apps/llm` from the repository root, and install dependencies with:
 
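The hunk above begins with `const handleGenerate = async () => {` but truncates the body. Below is a minimal, unofficial sketch of what such a handler does, with a small stub in place of the real `useLLM` result; the `generate` and `response` names are assumptions not confirmed by this diff:

```typescript
// Hedged sketch of the "Run the model!" step. A stub stands in for the real
// hook so the control flow is runnable; the real library runs the .pte model.
type LLM = {
  generate: (prompt: string) => Promise<void>;
  response: string; // assumed: the hook accumulates generated text here
};

// Stub that echoes the prompt instead of running inference on-device.
function makeStubLLM(): LLM {
  const llm: LLM = {
    response: '',
    generate: async (prompt: string) => {
      llm.response = `echo: ${prompt}`;
    },
  };
  return llm;
}

const llm = makeStubLLM();

// Shape of the truncated handler from the diff: await generation, then read
// the accumulated response.
const handleGenerate = async (): Promise<string> => {
  await llm.generate('Hello, Llama!');
  return llm.response;
};

handleGenerate().then((out) => console.log(out));
```

In the actual app this handler would presumably be wired to a button press, with `llm.response` rendered in the component as it updates.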

@@ -130,7 +125,7 @@ or iOS:
 yarn expo run:ios
 ```
 
-### Warning
+### Warning ⚠️
 
 Running LLMs requires a significant amount of RAM. If you are encountering unexpected app crashes, try to increase the amount of RAM allocated to the emulator.
 

(Second changed file: 9.43 KB binary file, not shown.)

0 commit comments
