Run larger models by offloading to Ollama’s cloud while keeping your local workflow.
- Supported models: `deepseek-v3.1:671b-cloud`, `gpt-oss:20b-cloud`, `gpt-oss:120b-cloud`, `kimi-k2:1t-cloud`, `qwen3-coder:480b-cloud`, `kimi-k2-thinking`. See [Ollama Models - Cloud](https://ollama.com/search?c=cloud) for more information.
### Run via local Ollama
1) Sign in (one-time):
```
ollama signin
```
2) Pull a cloud model:
```
ollama pull gpt-oss:120b-cloud
```
3) Make a request:
```python
from ollama import Client

client = Client()

messages = [
  {
    'role': 'user',
    'content': 'Why is the sky blue?',
  },
]

for part in client.chat('gpt-oss:120b-cloud', messages=messages, stream=True):
  print(part.message.content, end='', flush=True)
```
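The loop above prints each chunk as it arrives; to keep the full reply as well, accumulate the chunks while printing. The pattern can be sketched with a plain iterator standing in for the streamed response (no Ollama server needed; `fake_stream` is purely illustrative):

```python
# Illustrative stand-in for a streaming chat response: an iterator of
# partial content fragments, like part.message.content above.
def fake_stream():
    yield from ['The sky ', 'is blue ', 'because of Rayleigh scattering.']

answer = ''
for chunk in fake_stream():
    answer += chunk  # accumulate the fragments into the full reply

print(answer)
```

With the real client, replacing `fake_stream()` with the `client.chat(..., stream=True)` iterator and `chunk` with `part.message.content` gives the same accumulation.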
### Cloud API (ollama.com)
Access cloud models directly by pointing the client at `https://ollama.com`.
1) Create an API key from [ollama.com](https://ollama.com/settings/keys), then set:
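A typical shape is to export the key as an environment variable before running the client (the `OLLAMA_API_KEY` name is an assumption here; the key page on ollama.com shows the exact command):

```
export OLLAMA_API_KEY="your_api_key"
```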