
Commit 2034bdd

Merge pull request #565 from rhatdan/version

Bump to v0.5.0

2 parents ad08674 + e764515

File tree

7 files changed: +8 additions, -18 deletions

README.md

Lines changed: 3 additions & 3 deletions

@@ -112,7 +112,7 @@ curl -fsSL https://raw.githubusercontent.com/containers/ramalama/s/install.sh |
 
 ### Running Models
 
-You can `run` a chatbot on a model using the `run` command. By default, it pulls from the ollama registry.
+You can `run` a chatbot on a model using the `run` command. By default, it pulls from the Ollama registry.
 
 Note: RamaLama will inspect your machine for native GPU support and then will
 use a container engine like Podman to pull an OCI container image with the
@@ -158,7 +158,7 @@ ollama://moondream:latest 6 days ago
 ```
 ### Pulling Models
 
-You can `pull` a model using the `pull` command. By default, it pulls from the ollama registry.
+You can `pull` a model using the `pull` command. By default, it pulls from the Ollama registry.
 
 ```
 $ ramalama pull granite-code
@@ -167,7 +167,7 @@ $ ramalama pull granite-code
 
 ### Serving Models
 
-You can `serve` multiple models using the `serve` command. By default, it pulls from the ollama registry.
+You can `serve` multiple models using the `serve` command. By default, it pulls from the Ollama registry.
 
 ```
 $ ramalama serve --name mylama llama3
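
All three commands above resolve bare model names against the Ollama registry. As a minimal sketch of that documented default (the helper `normalize_model_ref` is hypothetical, not code from this commit):

```python
# Illustrative only: resolve a bare model name against the default Ollama
# registry, matching the ollama:// references shown by `ramalama list`.
def normalize_model_ref(name: str, default_scheme: str = "ollama") -> str:
    """Prefix a bare model name with the default registry scheme."""
    if "://" in name:       # already fully qualified, e.g. oci://...
        return name
    if ":" not in name:     # no tag given; assume :latest
        name += ":latest"
    return f"{default_scheme}://{name}"

print(normalize_model_ref("granite-code"))  # ollama://granite-code:latest
```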

docs/ramalama.1.md

Lines changed: 1 addition & 1 deletion

@@ -19,7 +19,7 @@ Running in containers eliminates the need for users to configure the host system
 
 RamaLama pulls AI Models from model registries. Starting a chatbot or a rest API service from a simple single command. Models are treated similarly to how Podman and Docker treat container images.
 
-When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behaviour. When neither are installed RamaLama attempts to run the model with software on the local system.
+When both Podman and Docker are installed, RamaLama defaults to Podman, The `RAMALAMA_CONTAINER_ENGINE=docker` environment variable can override this behavior. When neither are installed RamaLama attempts to run the model with software on the local system.
 
 Note:
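
The changed paragraph documents the engine-selection order: the environment variable wins, then Podman, then Docker, then the bare host. A minimal stdlib-only sketch of that logic (the function name `select_container_engine` is hypothetical; RamaLama's real implementation may differ):

```python
import os
import shutil

def select_container_engine() -> str | None:
    """Pick an engine: env override first, then Podman, then Docker."""
    override = os.getenv("RAMALAMA_CONTAINER_ENGINE")
    if override:
        return override
    for engine in ("podman", "docker"):
        if shutil.which(engine):  # found on PATH?
            return engine
    return None  # neither installed: run with software on the host
```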

docs/ramalama.conf.5.md

Lines changed: 1 addition & 1 deletion

@@ -105,7 +105,7 @@ llama.cpp explains this as:
 
 The lower the number is, the more deterministic the response.
 
-The higher the number is the more creative the response is, but moee likely to hallucinate when set too high.
+The higher the number is the more creative the response is, but more likely to hallucinate when set too high.
 
 Usage: Lower numbers are good for virtual assistants where we need deterministic responses. Higher numbers are good for roleplay or creative tasks like editing stories
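
For context on the corrected sentence: temperature divides the logits before softmax, so low values sharpen the distribution toward one token and high values flatten it. A toy illustration of that effect (not code from llama.cpp or RamaLama):

```python
import math

def softmax_with_temperature(logits, temperature):
    """Scale logits by 1/temperature, then apply a numerically stable softmax."""
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.5]
print(softmax_with_temperature(logits, 0.1))  # near one-hot: deterministic
print(softmax_with_temperature(logits, 2.0))  # flatter: more varied output
```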

pyproject.toml

Lines changed: 1 addition & 1 deletion

@@ -1,6 +1,6 @@
 [project]
 name = "ramalama"
-version = "0.4.0"
+version = "0.5.0"
 dependencies = [
   "argcomplete",
 ]
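
The version string is bumped in three places in this commit (pyproject.toml here, plus the RPM spec and setup.py below). One quick way to confirm an installed build picked up the bump, using only the stdlib (not part of this commit):

```python
from importlib.metadata import version

print(version("ramalama"))  # expected to print 0.5.0 after this release
```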

ramalama/common.py

Lines changed: 0 additions & 10 deletions

@@ -99,16 +99,6 @@ def find_working_directory():
     return os.path.dirname(__file__)
 
 
-def run_curl_cmd(args, filename):
-    if not verify_checksum(filename):
-        try:
-            run_cmd(args, debug=args.debug)
-        except subprocess.CalledProcessError as e:
-            if e.returncode == 22:
-                perror(filename + " not found")
-            raise e
-
-
 def verify_checksum(filename):
     """
     Verifies if the SHA-256 checksum of a file matches the checksum provided in
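
The deleted `run_curl_cmd` was a wrapper around `verify_checksum`, whose docstring appears in the context lines above. A hedged sketch of SHA-256 file verification in the same spirit, assuming the expected digest is embedded in a `sha256:<hex>`-style filename (an illustration, not the project's exact code):

```python
import hashlib
import os

def sha256_matches(filename: str) -> bool:
    """Compare a file's SHA-256 digest against the hex digest in its name."""
    base = os.path.basename(filename)   # e.g. "sha256:abc123..."
    if not base.startswith("sha256:"):
        raise ValueError(f"{filename} has no embedded sha256 digest")
    expected = base.split(":", 1)[1]
    h = hashlib.sha256()
    with open(filename, "rb") as f:
        for chunk in iter(lambda: f.read(64 * 1024), b""):
            h.update(chunk)             # hash the file in chunks
    return h.hexdigest() == expected
```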

rpm/python-ramalama.spec

Lines changed: 1 addition & 1 deletion

@@ -1,7 +1,7 @@
 %global pypi_name ramalama
 %global forgeurl https://github.com/containers/%{pypi_name}
 # see ramalama/version.py
-%global version0 0.4.0
+%global version0 0.5.0
 %forgemeta
 
 %global summary RamaLama is a command line tool for working with AI LLM models

setup.py

Lines changed: 1 addition & 1 deletion

@@ -63,7 +63,7 @@ def find_package_modules(self, package, package_dir):
 
 setuptools.setup(
     name="ramalama",
-    version="0.4.0",
+    version="0.5.0",
     packages=find_packages(),
     cmdclass={"build_py": build_py},
     scripts=["bin/ramalama"],
