Description
/kind bug
1. What kops version are you running? The command kops version will display
this information.
1.32.2 -> 1.34.0
2. What Kubernetes version are you running? kubectl version will print the
version if a cluster is running or provide the Kubernetes version specified as
a kops flag.
1.33.5
3. What cloud provider are you using?
AWS
4. What commands did you run? What is the simplest way to reproduce this issue?
kops get assets
5. What happened after the commands executed?
W1029 04:37:22.366026 389 executor.go:141] error running task "IAMRolePolicy/bastions.k8s.example.net" (0s remaining to succeed): error rendering PolicyDocument: error opening resource: DNS ZoneID not set
W1029 04:37:22.366060 389 executor.go:141] error running task "IAMRolePolicy/nodes.k8s.example.net" (0s remaining to succeed): error rendering PolicyDocument: error opening resource: DNS ZoneID not set
W1029 04:37:22.366074 389 executor.go:141] error running task "IAMRolePolicy/masters.k8s.example.net" (0s remaining to succeed): error rendering PolicyDocument: error opening resource: DNS ZoneID not set
Error: error running tasks: deadline exceeded executing task IAMRolePolicy/bastions.k8s.example.net. Example error: error rendering PolicyDocument: error opening resource: DNS ZoneID not set
6. What did you expect to happen?
kops get assets should return a list of images and versions.
7. Please provide your cluster manifest. Execute
kops get --name my.example.com -o yaml to display your cluster manifest.
You may want to remove your cluster name and other sensitive information.
apiVersion: kops.k8s.io/v1alpha2
kind: Cluster
metadata:
creationTimestamp: null
generation: 7
name: REDACTED
spec:
additionalPolicies:
master: |
[
{
"Effect": "Allow",
"Action": [
"ecr:BatchCheckLayerAvailability",
"ecr:BatchGetImage",
"ecr:GetDownloadUrlForLayer",
"ecr:GetAuthorizationToken"
],
"Resource": ["*"]
},
{
"Effect": "Allow",
"Action": [
"kms:Decrypt",
"kms:Encrypt"
],
"Resource": [
"REDACTED"
]
}
]
node: |
[
{
"Effect": "Allow",
"Action": [
"sts:AssumeRole"
],
"Resource": [
"REDACTED"
]
}
]
api:
loadBalancer:
additionalSecurityGroups:
- REDACTED
class: Network
crossZoneLoadBalancing: true
idleTimeoutSeconds: 4000
type: Public
assets:
containerProxy: AWS-ACC-NUMBER.dkr.ecr.ap-southeast-2.amazonaws.com/k8s
authentication:
aws:
image: PRIVATE-DOCKER-HUB-REPO/aws-iam-authenticator:0.7.8.0
authorization:
rbac: {}
awsLoadBalancerController:
enabled: false
certManager:
enabled: true
channel: stable
cloudConfig:
awsEBSCSIDriver:
enabled: true
cloudProvider: aws
configBase: s3://S3-BUCKET-FOR-STATE/REDACTED
dnsZone: REDACTED
encryptionConfig: true
etcdClusters:
- etcdMembers:
- instanceGroup: master-a
name: a
- instanceGroup: master-c
name: c
- instanceGroup: master-b
name: b
name: main
- etcdMembers:
- instanceGroup: master-a
name: a
- instanceGroup: master-c
name: c
- instanceGroup: master-b
name: b
name: events
fileAssets:
- content: |
apiVersion: audit.k8s.io/v1 # This is required.
kind: Policy
# Don't generate audit events for all requests in RequestReceived stage.
omitStages:
- "RequestReceived"
rules:
# The following requests were manually identified as high-volume and low-risk,
# so drop them.
- level: None
users: ["system:kube-proxy"]
verbs: ["watch"]
resources:
- group: "" # core
resources: ["endpoints", "services"]
- level: None
users: ["system:unsecured"]
namespaces: ["kube-system"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["configmaps"]
- level: None
users: ["kubelet"] # legacy kubelet identity
verbs: ["get"]
resources:
- group: "" # core
resources: ["nodes"]
- level: None
userGroups: ["system:nodes"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["nodes"]
- level: None
users:
- system:kube-controller-manager
- system:kube-scheduler
- system:serviceaccount:kube-system:endpoint-controller
verbs: ["get", "update"]
namespaces: ["kube-system"]
resources:
- group: "" # core
resources: ["endpoints"]
- level: None
users: ["system:apiserver"]
verbs: ["get"]
resources:
- group: "" # core
resources: ["namespaces"]
# Don't log these read-only URLs.
- level: None
nonResourceURLs:
- /nginxhealth*
- /health*
- /version
# Don't log events requests.
- level: None
resources:
- group: "" # core
resources: ["events"]
# Secrets, ConfigMaps, and TokenReviews can contain sensitive & binary data,
# so only log at the Metadata level.
- level: Metadata
resources:
- group: "" # core
resources: ["secrets", "configmaps"]
- group: authentication.k8s.io
resources: ["tokenreviews"]
# Get responses can be large; skip them.
- level: Request
verbs: ["get", "list", "watch"]
resources:
- group: "" # core
- group: "admissionregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "networking.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "settings.k8s.io"
- group: "storage.k8s.io"
# Default level for known APIs
- level: RequestResponse
resources:
- group: "" # core
- group: "admissionregistration.k8s.io"
- group: "apps"
- group: "authentication.k8s.io"
- group: "authorization.k8s.io"
- group: "autoscaling"
- group: "batch"
- group: "certificates.k8s.io"
- group: "extensions"
- group: "networking.k8s.io"
- group: "policy"
- group: "rbac.authorization.k8s.io"
- group: "settings.k8s.io"
- group: "storage.k8s.io"
# Default level for all other requests.
- level: Metadata
name: audit-policy.yaml
path: /srv/kubernetes/kube-apiserver/audit-policy.yaml
roles:
- ControlPlane
- content: |
apiVersion: v1
kind: Pod
metadata:
annotations:
scheduler.alpha.kubernetes.io/critical-pod: ""
labels:
k8s-app: aws-encryption-provider
name: aws-encryption-provider
namespace: kube-system
spec:
containers:
- image: AWS-ACC-ID.dkr.ecr.ap-southeast-2.amazonaws.com/aws-encryption-provider:0.0.6
name: aws-encryption-provider
command:
- /aws-encryption-provider
- --key=REDACTED
- --region=ap-southeast-2
- --listen=/srv/kubernetes/kube-apiserver/socket.sock
- --health-port=:8083
ports:
- containerPort: 8083
protocol: TCP
livenessProbe:
httpGet:
path: /healthz
port: 8083
volumeMounts:
- mountPath: /srv/kubernetes/kube-apiserver
name: kmsplugin
hostNetwork: true
priorityClassName: system-cluster-critical
volumes:
- name: kmsplugin
hostPath:
path: /srv/kubernetes/kube-apiserver
type: DirectoryOrCreate
name: aws-encryption-provider.yaml
path: /etc/kubernetes/manifests/aws-encryption-provider.yaml
roles:
- ControlPlane
iam:
allowContainerRegistry: true
legacy: false
kubeAPIServer:
auditLogMaxAge: 10
auditLogMaxBackups: 1
auditLogMaxSize: 100
auditLogPath: /var/log/kube-apiserver-audit.log
auditPolicyFile: /srv/kubernetes/kube-apiserver/audit-policy.yaml
disableBasicAuth: true
enableAggregatorRouting: true
runtimeConfig:
autoscaling/v2beta1: "true"
kubeDNS:
nodeLocalDNS:
enabled: true
provider: CoreDNS
kubelet:
anonymousAuth: false
authenticationTokenWebhook: true
authorizationMode: Webhook
podInfraContainerImage: registry.k8s.io/pause:3.9
streamingConnectionIdleTimeout: 0s
kubernetesApiAccess:
- REDACTED
kubernetesVersion: 1.33.5
networkCIDR: 10.29.0.0/16
networkID: VPC-ID
networking:
calico:
crossSubnet: true
mtu: 8912
typhaReplicas: 3
nodeTerminationHandler:
enabled: false
nonMasqueradeCIDR: 100.64.0.0/10
snapshotController:
enabled: true
sshAccess:
- REDACTED
subnets:
- id: subnet-
name: ap-southeast-2c
type: Utility
zone: ap-southeast-2c
- id: subnet-
name: ap-southeast-2b
type: Utility
zone: ap-southeast-2b
- id: subnet-
name: ap-southeast-2a
type: Utility
zone: ap-southeast-2a
- egress: External
id: subnet-
name: private-ap-southeast-2c
type: Private
zone: ap-southeast-2c
- egress: External
id: subnet-
name: private-ap-southeast-2b
type: Private
zone: ap-southeast-2b
- egress: External
id: subnet-
name: private-ap-southeast-2a
type: Private
zone: ap-southeast-2a
topology:
dns:
type: Public
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2025-10-29T02:34:20Z"
labels:
kops.k8s.io/cluster: REDACTED
name: bastions
spec:
associatePublicIp: true
instanceMetadata:
httpPutResponseHopLimit: 3
httpTokens: required
machineType: t2.micro
maxSize: 1
minSize: 1
role: Bastion
rootVolumeEncryption: true
rootVolumeType: gp3
subnets:
- ap-southeast-2c
- ap-southeast-2b
- ap-southeast-2a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2025-10-29T02:34:21Z"
labels:
kops.k8s.io/cluster: REDACTED
name: k8s-nodes
spec:
additionalSecurityGroups:
- sg-
cloudLabels:
k8s.io/cluster-autoscaler/enabled: ""
k8s.io/cluster-autoscaler/REDACTED: ""
k8s.io/cluster-autoscaler/node-template/label: ""
instanceMetadata:
httpPutResponseHopLimit: 3
httpTokens: required
machineType: m6a.2xlarge
maxSize: 6
minSize: 3
nodeLabels:
kops.k8s.io/instancegroup: k8s-nodes
role: Node
rootVolumeEncryption: true
rootVolumeType: gp3
subnets:
- private-ap-southeast-2c
- private-ap-southeast-2b
- private-ap-southeast-2a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2025-10-29T02:34:20Z"
labels:
kops.k8s.io/cluster: REDACTED
name: master-a
spec:
additionalSecurityGroups:
- sg-
instanceMetadata:
httpPutResponseHopLimit: 3
httpTokens: required
machineType: r6g.large
maxSize: 1
minSize: 1
nodeLabels:
dedicated: master
kops.k8s.io/instancegroup: master-a
role: Master
rootVolumeEncryption: true
rootVolumeType: gp3
subnets:
- ap-southeast-2a
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2025-10-29T02:34:21Z"
labels:
kops.k8s.io/cluster: REDACTED
name: master-b
spec:
additionalSecurityGroups:
- sg-
instanceMetadata:
httpPutResponseHopLimit: 3
httpTokens: required
machineType: r6g.large
maxSize: 1
minSize: 1
nodeLabels:
dedicated: master
kops.k8s.io/instancegroup: master-b
role: Master
rootVolumeEncryption: true
rootVolumeType: gp3
subnets:
- ap-southeast-2b
---
apiVersion: kops.k8s.io/v1alpha2
kind: InstanceGroup
metadata:
creationTimestamp: "2025-10-29T02:34:21Z"
labels:
kops.k8s.io/cluster: REDACTED
name: master-c
spec:
additionalSecurityGroups:
- sg-
instanceMetadata:
httpPutResponseHopLimit: 3
httpTokens: required
machineType: r6g.large
maxSize: 1
minSize: 1
nodeLabels:
dedicated: master
kops.k8s.io/instancegroup: master-c
role: Master
rootVolumeEncryption: true
rootVolumeType: gp3
subnets:
- ap-southeast-2c
8. Please run the commands with most verbose logging by adding the -v 10 flag.
Paste the logs into this report, or in a gist and provide the gist link here.
W1029 04:37:22.366026 389 executor.go:141] error running task "IAMRolePolicy/bastions.k8s.example.net" (0s remaining to succeed): error rendering PolicyDocument: error opening resource: DNS ZoneID not set
W1029 04:37:22.366060 389 executor.go:141] error running task "IAMRolePolicy/nodes.k8s.example.net" (0s remaining to succeed): error rendering PolicyDocument: error opening resource: DNS ZoneID not set
W1029 04:37:22.366074 389 executor.go:141] error running task "IAMRolePolicy/masters.k8s.example.net" (0s remaining to succeed): error rendering PolicyDocument: error opening resource: DNS ZoneID not set
Error: error running tasks: deadline exceeded executing task IAMRolePolicy/bastions.k8s.example.net. Example error: error rendering PolicyDocument: error opening resource: DNS ZoneID not set
9. Anything else do we need to know?
If I change spec.dnsZone to the Hosted Zone ID, it runs fine. If I set it to the DNS name, it fails. This failure started in kops 1.32.x and is still present in 1.34.0. It only affects kops get assets; other kops commands appear to complete without error.
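For reference, this is a sketch of the workaround in the cluster spec. The zone ID below is a placeholder, not a real value from this cluster:

```yaml
spec:
  # Workaround: set dnsZone to the Route 53 Hosted Zone ID instead of the
  # DNS name. Z0123456789ABCDEF is illustrative only; the real ID can be
  # looked up with, e.g.:
  #   aws route53 list-hosted-zones-by-name --dns-name example.net
  dnsZone: Z0123456789ABCDEF
```

With the zone ID in place, kops get assets no longer hits the "DNS ZoneID not set" error, though per the docs spec.dnsZone is supposed to accept either form.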