
Conversation

@talkraghu

Summary:
This PR pins the Go base image to golang:1.24.4 in the piraeus-ha-controller Dockerfile.
The change reduces exposure to known CVEs.
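
For reference, assuming the builder stage currently uses the golang:1 line quoted later in this thread, the pinned variant would read:

FROM --platform=$BUILDPLATFORM golang:1.24.4 as builder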

Testing:
Rebuilt the image locally using golang:1.24.4.
Deployed it in a cluster with a working LINSTOR setup.
Verified that the ha-controller pod starts correctly and behaves as expected. Pod status and logs are below:

[root@rags2node-k8sc-node1-2 nfs_csi]# kubectl get pods -A | grep nsp-psa
nsp-psa-privileged   ha-controller-m8l98                                        1/1     Running   0               48s
nsp-psa-privileged   ha-controller-svqdj                                        1/1     Running   0               48s
nsp-psa-privileged   linstor-controller-7d4bbf6778-lk8gw                        1/1     Running   0               49s
nsp-psa-privileged   linstor-csi-controller-7dfb5d4dd5-tcxkm                    7/7     Running   0               48s
nsp-psa-privileged   linstor-csi-node-85xts                                     3/3     Running   0               48s
nsp-psa-privileged   linstor-csi-node-gq2l5                                     3/3     Running   0               48s
nsp-psa-privileged   linstor-satellite.rags2node-node1-92xhk                    2/2     Running   0               43s
nsp-psa-privileged   linstor-satellite.rags2node-node2-m9lhx                    2/2     Running   0               44s
nsp-psa-privileged   local-path-provisioner-6459897cff-4l84p                    1/1     Running   0               75m
nsp-psa-privileged   nsp-piraeus-operator-controller-manager-68f9c8dcb9-vqxzn   1/1     Running   0               84s
nsp-psa-restricted   cert-manager-7576cb7bb6-lhg7j                              1/1     Running   0               78m
nsp-psa-restricted   cert-manager-cainjector-664c7dbbbc-m7l4h                   1/1     Running   0               78m
nsp-psa-restricted   cert-manager-webhook-747dcc8f67-mps7t                      1/1     Running   0               78m
nsp-psa-restricted   nsp-pki-server-5b7994f699-vljgk                            1/1     Running   0               76m
nsp-psa-restricted   trust-manager-8dd9688bd-45tvc                              1/1     Running   2 (76m ago)     76m
[root@rags2node-k8sc-node1-2 nfs_csi]# kubectl logs -f ha-controller-m8l98 -n nsp-psa-privileged | more
I0701 08:53:11.056824       1 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
I0701 08:53:11.056952       1 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
I0701 08:53:11.056957       1 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
I0701 08:53:11.056961       1 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
I0701 08:53:11.056964       1 envvar.go:172] "Feature gate default state" feature="InOrderInformers" enabled=true
I0701 08:53:11.057559       1 agent.go:207] version: v0.0.0-0.unknown
I0701 08:53:11.057570       1 agent.go:208] node: rags2node-node2
I0701 08:53:11.057605       1 agent.go:234] waiting for caches to sync
I0701 08:53:11.158621       1 agent.go:236] caches synced
I0701 08:53:11.158666       1 agent.go:259] starting reconciliation
I0701 08:53:21.058476       1 agent.go:259] starting reconciliation
I0701 08:53:31.057807       1 agent.go:259] starting reconciliation
I0701 08:53:41.057959       1 agent.go:259] starting reconciliation
I0701 08:53:51.057686       1 agent.go:259] starting reconciliation
I0701 08:54:01.058131       1 agent.go:259] starting reconciliation
I0701 08:54:11.057784       1 agent.go:259] starting reconciliation
I0701 08:54:21.057960       1 agent.go:259] starting reconciliation
I0701 08:54:31.057778       1 agent.go:259] starting reconciliation
I0701 08:54:41.057690       1 agent.go:259] starting reconciliation
I0701 08:54:51.057783       1 agent.go:259] starting reconciliation
I0701 08:55:01.058097       1 agent.go:259] starting reconciliation
I0701 08:55:11.058491       1 agent.go:259] starting reconciliation
I0701 08:55:21.058073       1 agent.go:259] starting reconciliation
I0701 08:55:31.058544       1 agent.go:259] starting reconciliation
I0701 08:55:41.057625       1 agent.go:259] starting reconciliation
I0701 08:55:51.057692       1 agent.go:259] starting reconciliation
I0701 08:56:01.057672       1 agent.go:259] starting reconciliation
I0701 08:56:11.058627       1 agent.go:259] starting reconciliation
I0701 08:56:21.058630       1 agent.go:259] starting reconciliation

The CVE scan reports 0 critical CVEs and 9 high CVEs.
I believe the high CVEs can be ignored; they are pasted below.

[vagrant@nsp-latest generated]$ cat piraeus-ha-controller-secure | grep -i high
libldap-2.5-0       2.5.13+dfsg-5           (won't fix)  deb   CVE-2023-2953     High        79.88    1.1  
libldap-common      2.5.13+dfsg-5           (won't fix)  deb   CVE-2023-2953     High        79.88    1.1  
perl-base           5.36.0-7+deb12u2        (won't fix)  deb   CVE-2023-31484    High        79.42    1.1  
libpam-modules      1.5.2-6+deb12u1                      deb   CVE-2025-6020     High         4.44  < 0.1  
libpam-modules-bin  1.5.2-6+deb12u1                      deb   CVE-2025-6020     High         4.44  < 0.1  
libpam-runtime      1.5.2-6+deb12u1                      deb   CVE-2025-6020     High         4.44  < 0.1  
libpam0g            1.5.2-6+deb12u1                      deb   CVE-2025-6020     High         4.44  < 0.1  
libc-bin            2.36-9+deb12u10         (won't fix)  deb   CVE-2025-4802     High         0.55  < 0.1  
libc6               2.36-9+deb12u10         (won't fix)  deb   CVE-2025-4802     High         0.55  < 0.1  
[vagrant@nsp-latest generated]$ 

@WanzenBug @js185692

Raghavendra K added 2 commits July 1, 2025 04:56
@WanzenBug
Member

This is not a general solution. It would just cause us to have to update the base image reference whenever a new Go version is released. In the current setup, it will also not push a new 1.3.0 image. This would not scale, and we would spend the whole day just rebuilding images.

What would need to happen instead is that the latest tag is rebuilt whenever an update to the base image or toolchain is available. That rebuild would need to use --no-cache or equivalent.
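
For illustration only (the image reference and platforms below are placeholders, not the project's actual build invocation), such a cache-less rebuild could look roughly like:

docker buildx build --no-cache --platform linux/amd64,linux/arm64 -t <registry>/piraeus-ha-controller:latest --push .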

@talkraghu
Author

Hi @WanzenBug, thank you for your quick and thoughtful response.

You're right — manually pinning Go versions in the Dockerfile is not a long-term solution and does not scale well across multiple versions like v1.3.0.

My goal here was simply to highlight known CVEs and reduce exposure in the short term by updating to a secure Go version (1.24.4). However, I fully agree with your point: a general solution would require an automated rebuild process that can detect upstream changes and refresh the :latest image accordingly, using something like --no-cache.
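
As a rough sketch only (the digest file, image reference, and exact commands below are hypothetical, not the project's CI), such a trigger could compare the base image digest and rebuild when it changes:

# rebuild :latest only when the golang:1 base image has been updated upstream
docker pull golang:1
new_digest=$(docker inspect --format '{{index .RepoDigests 0}}' golang:1)
if [ "$new_digest" != "$(cat .golang-base-digest 2>/dev/null)" ]; then
    docker buildx build --no-cache -t <registry>/piraeus-ha-controller:latest .
    echo "$new_digest" > .golang-base-digest
fi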

Unfortunately, I don't have prior experience with GitHub Actions or workflow automation, so I am not in a position to implement that myself. Since you and your team are more familiar with the project's CI/CD setup, I would really appreciate it if you could consider adding such a rebuild process as part of your release or security hygiene process.

I would be happy to continue scanning and reporting if that helps, and thanks again for maintaining the project.

@talkraghu
Author

talkraghu commented Jul 1, 2025

Instead of hardcoding the Go version to 1.24.4, can we set it to the "latest" version?

i.e. change from "FROM --platform=$BUILDPLATFORM golang:1 as builder" to "FROM --platform=$BUILDPLATFORM golang:latest as builder"

The "latest" is getting is go with version 1.24.4

@WanzenBug
Member

i.e. change from "FROM --platform=$BUILDPLATFORM golang:1 as builder" to "FROM --platform=$BUILDPLATFORM golang:latest as builder"

golang:1 will also fetch the latest image, so there is not actually anything to change, other than rebuilding without any caches...
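
A quick way to check what golang:1 currently resolves to (the version in the output will change over time; at the time of this discussion it was 1.24.4):

docker run --rm golang:1 go version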

@talkraghu
Author

talkraghu commented Jul 1, 2025

Thanks @WanzenBug, shall I withdraw this pull request? And as you said, could you please trigger the rebuild without caches so that we get a new Docker image?
