Hey everyone,
this is not an issue or bug ticket, but rather a request: could someone explain to me how to run Restic-based backups with operator-based installations like kube-prometheus-stack? I want to take backups of a StatefulSet with three Prometheus Pods and ship everything to my AWS S3 bucket. Every time I apply the following BackupConfiguration:
```yaml
apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  labels:
    app.kubernetes.io/component: stash-backup
    app.kubernetes.io/name: prometheus
    app.kubernetes.io/part-of: kube-prometheus
  name: prometheus-s3
  namespace: ...
spec:
  driver: Restic
  repository:
    name: prometheus-s3
  retentionPolicy:
    keepDaily: 7
    keepLast: 96
    keepMonthly: 11
    keepWeekly: 4
    name: keep-monthly-for-one-year
    prune: true
  runtimeSettings:
    container:
      resources:
        limits:
          cpu: 4000m
          memory: 4Gi
        requests:
          cpu: 100m
          memory: 250Mi
      securityContext:
        allowPrivilegeEscalation: false
        capabilities:
          drop:
            - ALL
        privileged: false
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: RuntimeDefault
    pod:
      securityContext:
        runAsGroup: 1000
        runAsNonRoot: true
        runAsUser: 1000
        seccompProfile:
          type: RuntimeDefault
  schedule: "24 1 * * *"
  target:
    ref:
      apiVersion: apps/v1
      kind: StatefulSet
      name: prometheus
    volumeMounts:
      - name: source-dataprometheus-db
        mountPath: /prometheus
```

I see one of the Prometheus StatefulSet Pods restart (I assume Stash is trying to inject its sidecar), but then nothing happens and the sidecar is gone (I assume the Prometheus Operator reconciles the StatefulSet and removes it again). The StatefulSet Pod logs look like this:

```
I0820 07:31:25.890222 1 request.go:629] Waited for 195.541837ms due to client-side throttling, not priority and fairness, request: GET:https://10.233.0.1:443/apis/snapshot.storage.k8s.io/v1/namespaces/.../volumesnapshots/prometheus-db-prometheus-0-1724138821
```

Cheers
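Since the throttled request above targets the snapshot.storage.k8s.io API, one alternative I have been looking at is Stash's VolumeSnapshotter driver, which takes CSI snapshots of the PVCs instead of injecting a Restic sidecar into the Pods, so it should not fight the operator's reconciliation. This is only a rough sketch of how I understand that driver would be configured; `csi-snapshot-class` is a placeholder for whatever VolumeSnapshotClass the cluster actually provides, and the retention fields are copied from my config above:

```yaml
apiVersion: stash.appscode.com/v1beta1
kind: BackupConfiguration
metadata:
  name: prometheus-snapshot
  namespace: ...
spec:
  # VolumeSnapshotter snapshots the target's PVCs via CSI instead of
  # running a Restic sidecar inside the Prometheus Pods.
  driver: VolumeSnapshotter
  schedule: "24 1 * * *"
  retentionPolicy:
    name: keep-monthly-for-one-year
    keepDaily: 7
    prune: true
  target:
    ref:
      apiVersion: apps/v1
      kind: StatefulSet
      name: prometheus
    # Placeholder: must match an existing VolumeSnapshotClass in the cluster.
    snapshotClassName: csi-snapshot-class
```

I have not verified this against the kube-prometheus-stack setup, so corrections welcome.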