docsite/docs/commands/ramalama/info.mdx (13 additions, 3 deletions)
@@ -20,14 +20,24 @@ show this help message and exit
 
 ## FIELDS
 
+The `Accelerator` field indicates the accelerator type for the machine.
+
+The `Config` field shows the list of paths to RamaLama configuration files used.
+
 The `Engine` field indicates the OCI container engine used to launch the container in which to run the AI Model
 
 The `Image` field indicates the default container image in which to run the AI Model
 
-The `Runtime` field indicates which backend engine is used to execute the AI model:
+The `Inference` field lists the inference engine currently in use, along with the available engine specification and schema files used for model inference.
+
+For example:
+
+- `llama.cpp`
+- `vllm`
+- `mlx`
+
+The `Selinux` field indicates whether SELinux is enabled.
 
-- `llama.cpp`: Uses the llama.cpp library for model execution
-- `vllm`: Uses the vLLM library for model execution
+The `Shortnames` field shows the configuration files used to define AI Model short names, as well as the merged list of short names.
 
 The `Store` field indicates the directory path where RamaLama stores its persistent data, including downloaded models, configuration files, and cached data. By default, this is located in the user's local share directory.
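
For reference, `ramalama info` prints its report as JSON, so each field documented above corresponds to a top-level key in that output. The sketch below is a hypothetical example only: the key layout and every value shown (accelerator name, config paths, image tag, short-name entries, store location) are illustrative assumptions, not output captured from a real system.

```bash
# Hypothetical `ramalama info` output, for illustration only.
# Actual keys and values depend on your system, configuration,
# and RamaLama version.
$ ramalama info
{
  "Accelerator": "cuda",
  "Config": [
    "/usr/share/ramalama/ramalama.conf",
    "/etc/ramalama/ramalama.conf"
  ],
  "Engine": "podman",
  "Image": "quay.io/ramalama/ramalama:latest",
  "Inference": {
    "Engine": "llama.cpp",
    "Specs": ["llama.cpp", "vllm", "mlx"]
  },
  "Selinux": true,
  "Shortnames": {
    "Files": ["/usr/share/ramalama/shortnames.conf"],
    "Names": {
      "tiny": "ollama://tinyllama"
    }
  },
  "Store": "/home/user/.local/share/ramalama"
}
```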