How to use with Aider? #2064
Replies: 3 comments
-
@engelmi @bmahabirbu is this something we should handle via the relay server? RamaLama is not necessarily the exact same as Ollama: we provide an OpenAI-compatible API, but not necessarily the full Ollama API. By default, when you talk to an API running within the container, that API is provided by llama.cpp.
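To make that concrete, here is a sketch (not taken from the thread) assuming the container's server listens on localhost:8080, the usual default for `ramalama serve`: the llama.cpp server answers OpenAI-style routes under `/v1/`, while Ollama's native routes under `/api/` are not provided, so clients that expect the full Ollama API see 404s.

```bash
# OpenAI-compatible route served by llama.cpp inside the container -- this answers:
curl http://localhost:8080/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "Say hello"}]}'

# Native Ollama route (model listing) -- llama.cpp does not implement it, so this returns 404:
curl -i http://localhost:8080/api/tags
```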
-
Yes, Ollama essentially spins up a server which provides a REST API used by its CLI. The RamaLama CLI (run/serve) spins up a container and starts the LLM, providing the OpenAI-compatible API.

@rhatdan Yes, I think the daemon could be extended for this.

@splitbrain There is an effort to split RamaLama into a CLI ("frontend") and a server providing a REST API ("backend"):

@splitbrain I assume with
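In practice (a sketch with a placeholder model name, assuming the default port of 8080), serving a model and checking the OpenAI-compatible API it exposes looks something like:

```bash
# Pull the model if needed and run the inference server (llama.cpp by default)
# inside a container; the model name here is only a placeholder.
ramalama serve --port 8080 tinyllama

# From another shell: the OpenAI-style model listing should answer.
curl http://localhost:8080/v1/models
```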
-
Thanks for the hint that the Ollama API is different from the OpenAI-compatible API. Using the following works:
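The exact configuration isn't shown here, but a setup along these lines (model name, port, and dummy key are placeholders) points Aider at the OpenAI-compatible endpoint rather than treating the server as an Ollama instance:

```bash
# Use Aider's generic OpenAI-compatible provider instead of the ollama/ one.
# The local server normally ignores the key, but the variable must be set.
export OPENAI_API_BASE=http://localhost:8080/v1
export OPENAI_API_KEY=dummy

aider --model openai/tinyllama
```

The key detail is the `openai/` model prefix together with `OPENAI_API_BASE`; the `ollama/` provider would instead call Ollama's native `/api/` routes, which this server does not provide.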
-
From what I understand, this should be compatible with Ollama, but I can't get it to work with Aider.
I'm starting the model like this:
And aider with:
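(For illustration only, since the original commands are not shown here: an Ollama-style pairing of ramalama and aider, with a placeholder model name and port, would be roughly the following.)

```bash
# Hypothetical reconstruction, not the poster's actual commands.
ramalama serve --port 8080 tinyllama

# Configure aider as if the server were Ollama:
export OLLAMA_API_BASE=http://localhost:8080
aider --model ollama/tinyllama
```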
But all I get are 404s:
The ramalama console shows:
Can anyone give me a hint on how to get it working?