Replies: 17 comments 5 replies
-
With
-
Can you take the logs of the container that is returning garbage? Not reproducing with, interesting :)

The logs start with this: which should be independent of Vulkan
-
Would not be surprised if the garbage output is related to #1374.
-
Here's what I get, with container logs.

First attempt just crashed:

On container side:

Second attempt didn't crash but produced garbage:

On container side:
-
Is there an easy way to try out a newer Mesa? I see Rawhide has 25.1.x (https://packages.fedoraproject.org/pkgs/mesa/mesa-libEGL/) but not 25.2.x, which is presumably better. Is there a way to pull it into the container? (Would we need to backport the slp COPR patches, or are they already present in the 25.1.x branch?) I wonder if it would be too hard to build a replacement container from Nix packages that already contain 25.2.x+...
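One avenue might be a Containerfile that upgrades only the Mesa packages from Rawhide. This is an untested sketch: the base image name, the package list, and whether Rawhide's repos resolve cleanly on top of the existing image are all assumptions.

```dockerfile
# Hypothetical sketch: swap in Rawhide's newer Mesa on a Fedora-based image.
# Image name and package set are guesses; adjust for the actual ramalama image.
FROM quay.io/ramalama/ramalama:latest
RUN dnf -y upgrade --releasever=rawhide \
        mesa-vulkan-drivers mesa-libGL mesa-libEGL \
    && dnf clean all
```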
-
The important part of the logs is: so it's a template issue, most likely nothing linked to the Vulkan backend.

The latest Ramalama image should use Mesa
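To check which Mesa a given image actually ships, something like the following could work (a sketch; `<image>` is a placeholder, and the `rpm` query assumes a Fedora-based image):

```shell
# Query the Mesa package version inside the container image.
podman run --rm <image> rpm -q mesa-vulkan-drivers

# Or, if vulkaninfo is installed inside the image:
podman run --rm <image> vulkaninfo --summary
```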
-
@kpouget when I run with an image that uses
-
And to be on the safe side, I rebuilt the image with the latest

The behaviour is still bad.
-
But is it an old image, or a recent one? (If it's an old one, things may have changed since then.) It could be interesting to see the behavior when running natively: Metal vs Vulkan vs Kompute. Does Vulkan fail while everything else succeeds?
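A possible way to compare the backends natively (untested sketch; the CMake option names follow llama.cpp's GGML_* convention, and the Kompute backend may not exist in recent checkouts):

```shell
git clone https://github.com/ggml-org/llama.cpp
cd llama.cpp
for backend in METAL VULKAN KOMPUTE; do
    cmake -B "build-$backend" "-DGGML_$backend=ON"
    cmake --build "build-$backend" -j
    # Run the same prompt against each build and diff the outputs, e.g.:
    # "./build-$backend/bin/llama-cli" -m model.gguf -p "Hello" -n 32
done
```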
-
@kpouget when I test
-
Some more testing.

With

With

With

So I tried with a different model, as listed in README:

I found a model that is listed in

So not sure which model would even work with
-
The fact that I was able to reproduce it on llama vulkan with

I did

I still don't know a fix for the podman scenario.

UPD: I probably hadn't restarted the podman machine properly the previous time I checked. I can now confirm that the linked PR also fixes the issue for the podman scenario.
-
Not sure if this bug needs anything done in ramalama. At best, the README could mention that a newer MoltenVK is needed when Vulkan is used (krunkit and llama server with Vulkan enabled). Not sure this project wants to track that explicitly, though.
-
Where is MoltenVK installed, locally on the Mac? Is there a way via PyPI to force an upgrade? Or is this yet another reason to add a DMG installer?
-
or

On macOS,

It can also come from the Podman installer, which contains the krunkit release binaries, so must be
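To see which MoltenVK a Mac is actually picking up, something like this could help (a sketch; the library path assumes Apple Silicon Homebrew, and krunkit may load MoltenVK dynamically rather than link it, in which case otool shows nothing):

```shell
brew info molten-vk                               # Homebrew-installed version, if any
ls -l /opt/homebrew/lib/libMoltenVK.dylib         # Apple Silicon Homebrew path
otool -L "$(which krunkit)" | grep -i moltenvk    # direct link dependency, if any
```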
-
Can you open an issue with those packagers so that krunkit automatically pulls release binaries of 1.2.12 or later?
-
Issue Description
This same behavior is seen for other models, for example:

When the image is compiled with #1831 then the result is functionally ok-ish, but nvtop shows the GPU is not loaded and the CPU is spinning. (I guess that's expected since Vulkan is off?)

(Note that the model seems to not stop at im_end for some reason and goes on in the example. But at least it looks like reasonable output.)

I tried to backport the Mesa patch mentioned here, since it was referred to from this comment as a possible solution. I took the slp COPR, added my attempt at the backport here, then rebuilt the image with this COPR as input. It didn't change anything. NOTE: I may have made some mistakes when backporting; there were conflicts to resolve that I could have messed up, because I know very little about Mesa. For reference, here's my backport patch.
(I think it would be nice to try it out with Mesa 25.2+, but I don't know how to do that with Fedora images.)
Steps to reproduce the issue
Describe the results you received
.
Describe the results you expected
.
ramalama info output
Upstream Latest Release
Yes
Additional environment details
All components - podman, ramalama, libkrun and krunkit - are built via Nixpkgs.
In case you need to consult how krunkit / libkrun-efi were built,
Additional information
No response