Describe the bug
Unable to use an Ollama local model: the interpreter exits on startup with `HUGGINGFACE_API_KEY not found in .env file.` even though the configured model is local.
To Reproduce
Steps to reproduce the behavior:
- Install the package with pip
- Run `interpreter -m local-model -md code -dc`
- The following error is raised:
```
Traceback (most recent call last):
  File "<frozen runpy>", line 198, in _run_module_as_main
  File "<frozen runpy>", line 88, in _run_code
  File "C:\Users\x\.conda\envs\xy\Scripts\interpreter.exe\__main__.py", line 7, in <module>
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\interpreter.py", line 56, in main
    interpreter = Interpreter(args)
                  ^^^^^^^^^^^^^^^^^
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\libs\interpreter_lib.py", line 49, in __init__
    self.initialize()
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\libs\interpreter_lib.py", line 85, in initialize
    self.initialize_client()
  File "C:\Users\x\.conda\envs\xy\Lib\site-packages\open_code_interpreter\libs\interpreter_lib.py", line 127, in initialize_client
    raise Exception(f"{api_key_name} not found in .env file.")
Exception: HUGGINGFACE_API_KEY not found in .env file.
```
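For context, here is a minimal sketch of what the failing check in `initialize_client()` plausibly looks like, based only on the traceback above; the branching on the model-name prefix is an assumption, not the project's actual code.

```python
# Hypothetical reconstruction of the failing check; only the raise line
# is confirmed by the traceback, everything else is an assumption.
import os
from dotenv import load_dotenv

def initialize_client(model_name: str) -> None:
    # Load variables from a .env file in the working directory.
    load_dotenv()
    # Assumption: the key to look up is derived from the model prefix,
    # and an unrecognized model name falls back to HuggingFace.
    if model_name.startswith("ollama/"):
        api_key_name = None  # a local model should need no API key
    else:
        api_key_name = "HUGGINGFACE_API_KEY"
    if api_key_name and not os.getenv(api_key_name):
        # This is the exception seen in the traceback above.
        raise Exception(f"{api_key_name} not found in .env file.")
```

If the lookup keys off `HF_MODEL` and that line is commented out in the config below, a fallback to `HUGGINGFACE_API_KEY` would explain why a purely local model hits this error.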
Expected behavior
The app should start and run against the local Ollama model.
Desktop (please complete the following information):
- OS: Windows 11
- Browser: Edge
local-model.config:
````
# The temperature parameter controls the randomness of the model's output.
# Lower values make the output more deterministic.
temperature = 0.1

# The maximum number of new tokens that the model can generate.
max_tokens = 2048

# The start separator for the generated code.
start_sep = ```

# The end separator for the generated code.
end_sep = ```

# If True, the first line of the generated text will be skipped.
skip_first_line = True

# The model used for generating the code.
#HF_MODEL = ollama/qwen2.5-coder

api_base = https://localhost:11434/api/generate
````
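To rule out the local server itself, the generate endpoint from the config can be exercised directly. A minimal sketch, assuming the stock Ollama REST API and the `qwen2.5-coder` tag from the commented-out `HF_MODEL` line; note that Ollama serves plain HTTP on port 11434 by default, while `api_base` above uses `https`.

```python
import requests

# Probe the endpoint the config points at. Ollama listens on plain HTTP
# by default, so the scheme here differs from the https:// in api_base.
resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "qwen2.5-coder",  # tag assumed from the HF_MODEL line above
        "prompt": "Write a hello-world in Python.",
        "stream": False,           # one JSON object instead of a token stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```

If this request succeeds, the Ollama server is healthy and the failure is confined to the interpreter's client initialization.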