e2e: Multiple namespaces #80
base: master
Conversation
Pull Request Test Coverage Report for Build 19827019308
💛 - Coveralls
Review comment on e2e/tests/multi-namespace-multinet.yml:

    spec:
      containers:
      - name: macvlan-worker1
        image: ghcr.io/k8snetworkplumbingwg/multi-networkpolicy-iptables:e2e-test
using ghcr.io here is breaking the e2e test as they were designed.
do you mean it is not directly using changes from /e2e/Dockerfile ?
All the tests use this image configuration, which I know is suboptimal, but it does not relate to this PR, right?
I mean all of them use localhost:5000
Got it. Yes, that would be a good improvement
A networkpolicy can refer to multiple NADs, which can be in different namespaces. Create an end2end test case that involves pods from multiple namespaces. Signed-off-by: Andrea Panattoni <[email protected]>
Force-pushed from 530b42a to b87e443
Walkthrough
Adds an end-to-end test suite for MultiNetworkPolicy validation across multiple namespaces with distinct network attachments. Includes a BATS test script that creates test resources, verifies cross-namespace pod connectivity according to policy rules, checks denial cases, and cleans up resources, including lingering iptables cache files.
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
Pre-merge checks: ❌ Failed checks (1 inconclusive), ✅ Passed checks (2 passed)
Actionable comments posted: 0
♻️ Duplicate comments (1)
e2e/tests/multi-namespace-multinet.yml (1)
114-114: Align test image registry with existing e2e convention
Other e2e manifests in this repo use a local registry (localhost:5000/...) so that the tests exercise the image built from /e2e/Dockerfile. Using ghcr.io/...:e2e-test here diverges from that pattern and can break the "use locally built image" assumption. Consider switching these image references to the local registry variant used elsewhere (or centralizing the image name via a variable/template) for consistency.
Verify which image reference your existing e2e tests use (e.g., via rg -n "localhost:5000" e2e/tests) and align this manifest to that pattern; one way to centralize the reference is sketched below.
Also applies to: 131-131, 149-149, 166-166, 184-184, 201-201
🧹 Nitpick comments (4)
e2e/tests/multi-namespace-multinet.yml (1)
112-118: Privileged test pods: acceptable for e2e, but consider documenting or scoping
All six pods run with securityContext.privileged: true, which static analysis correctly flags as high-risk. For these iptables/multi-network e2e tests that's likely required, but it would help future readers and security tooling if this were explicitly called out as test-only and/or excluded from generic policy checks. If you keep them privileged, I'd suggest either:

- Adding a brief comment that these pods must be privileged for network/iptables manipulation; and/or
- Configuring your security scanners to ignore this specific e2e path (see the sketch after this item).

Confirm that the privileged context here matches what other existing e2e tests in this repo use, and that your CI security scanners are configured to treat this directory as test-only.
Also applies to: 129-135, 147-153, 164-170, 182-188, 199-205
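If the scanner in question is Checkov (as the findings further down suggest), one hedged option is to skip the specific checks for the e2e manifests only; the exact invocation depends on how Checkov is wired into CI:

```bash
# Sketch: skip the privileged/root-container checks for the e2e test manifests,
# leaving them active for everything else. Adjust to the actual CI invocation.
checkov -d e2e/tests --skip-check CKV_K8S_16,CKV_K8S_20,CKV_K8S_23
```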
e2e/tests/multi-namespace-multinet.bats (3)
3-6: Update comment to reflect three-namespace scenario
The header still says this test creates "two namespaces", but the manifest and tests clearly involve three namespaces (A, B, and C). Updating the comment avoids confusion for future readers.
Double-check the manifest (multi-namespace-multinet.yml) to ensure all three namespaces are indeed required, then adjust this wording accordingly.
8-20: setup() precomputes pod IPs before resources exist in the first test
setup() runs before every test, so for the very first test (which actually creates the manifest) the get_net1_ip calls happen before any pods exist. That's harmless here (the IP variables aren't used in that test and failures from kubectl exec inside command substitution are effectively ignored), but it's a bit surprising and does extra kubectl calls. Consider either:

- Moving the get_net1_ip calls into the tests that actually need them (after the kubectl wait), or
- Adding a brief comment explaining that the first run of setup() runs before resource creation and is effectively a no-op for IP discovery. A guarded variant is sketched after this item.

Verify how Bats runs setup() in your CI (order and set -e behavior) to ensure these early kubectl exec failures can't accidentally cause test failures on your environment.
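A possible shape for the guarded variant, as a sketch only: the pod names, variable names, and the argument order of the get_net1_ip helper are assumptions about the actual test files.

```bash
# Sketch: skip IP discovery while the pods don't exist yet, so the first test
# (which applies the manifest) doesn't issue doomed `kubectl exec` calls.
setup() {
  source "${BATS_TEST_DIRNAME}/common.bash"

  # Only resolve net1 IPs once the manifest has been applied by the first test.
  if kubectl -n test-namespace-a get pod pod-1 >/dev/null 2>&1; then
    a_pod1_net1_ip=$(get_net1_ip "test-namespace-a" "pod-1")
    b_pod1_net1_ip=$(get_net1_ip "test-namespace-b" "pod-1")
  fi
}
```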
102-116: Cleanup: also consider namespace-C pods and multiple controller pods
Two minor robustness points in the cleanup test:

- You wait for pod deletion only in namespaces A and B. Since the manifest also creates namespace C, waiting there as well would make cleanup more deterministic:
  kubectl -n test-namespace-c wait --all --for=delete pod --timeout="${kubewait_timeout}"
- The pod_name pipeline can return multiple lines if there are several matching kube-system pods. In that case kubectl exec ${pod_name} will fail. You might want to either restrict to the first match (e.g., | head -n1), or loop over matches and assert all have zero iptables cache files (see the sketch after this item).

These tweaks would make the final iptables-cache assertion more stable across different cluster topologies.
Test this cleanup step on a multi-worker cluster (or kind cluster with multiple worker nodes) to confirm the kubectl exec still behaves as expected when more than one matching pod exists.
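A sketch of the loop-over-matches option; the grep pattern and the cache directory path are assumptions about the deployed controller, not taken from the repo:

```bash
# Sketch: assert the iptables cache is empty on every matching controller pod,
# not just the first one returned by the pipeline.
for pod in $(kubectl -n kube-system get pods -o name | grep multi-networkpolicy); do
  files=$(kubectl -n kube-system exec "${pod}" -- \
    sh -c 'ls /var/lib/multi-networkpolicy/iptables 2>/dev/null | wc -l')
  [ "${files}" -eq 0 ]
done
```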
📜 Review details
Configuration used: CodeRabbit UI
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
- e2e/tests/multi-namespace-multinet.bats (1 hunks)
- e2e/tests/multi-namespace-multinet.yml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
e2e/tests/multi-namespace-multinet.bats (1)
e2e/tests/common.bash (1)
get_net1_ip(5-12)
🪛 Checkov (3.2.334)
e2e/tests/multi-namespace-multinet.yml
[medium] 102-118: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[high] 102-118: Container should not be privileged
(CKV_K8S_16)
[medium] 102-118: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 119-135: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[high] 119-135: Container should not be privileged
(CKV_K8S_16)
[medium] 119-135: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 137-153: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[high] 137-153: Container should not be privileged
(CKV_K8S_16)
[medium] 137-153: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 154-170: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[high] 154-170: Container should not be privileged
(CKV_K8S_16)
[medium] 154-170: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 172-188: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[high] 172-188: Container should not be privileged
(CKV_K8S_16)
[medium] 172-188: Minimize the admission of root containers
(CKV_K8S_23)
[medium] 189-205: Containers should not run with allowPrivilegeEscalation
(CKV_K8S_20)
[high] 189-205: Container should not be privileged
(CKV_K8S_16)
[medium] 189-205: Minimize the admission of root containers
(CKV_K8S_23)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
- GitHub Check: build-images
- GitHub Check: test (1.24.x, ubuntu-latest)
- GitHub Check: e2e-kind
- GitHub Check: Analyze (go)
🔇 Additional comments (4)
e2e/tests/multi-namespace-multinet.yml (1)
12-32: Namespace, NAD, and MultiNetworkPolicy wiring looks consistent
Namespaces, labels, NAD names/IPAM ranges, and the MultiNetworkPolicy podSelector/namespaceSelector/policy-for annotation are internally consistent and match the intended traffic matrix (A/pod-1 selected, ingress from B/pod-1, egress to B/pod-2, no references to C beyond policy-for). No functional issues from a manifest-wiring perspective.
Please still run this end-to-end on a fresh cluster to confirm the policy behaves exactly as expected with your MultiNetworkPolicy implementation.
Also applies to: 33-53, 55-75, 77-97, 206-235
e2e/tests/multi-namespace-multinet.bats (3)
38-72: Connectivity assertions align with the MultiNetworkPolicy definition
The "Allowed connectivity" and "Denied connectivity" tests match the policy semantics:

- B/pod-1 → A/pod-1 and A/pod-1 → B/pod-2 are allowed.
- All other {A1→A2, B1, C1, C2} and {A2, B2, C1, C2 → A1} flows are denied.

This gives good coverage of both ingress and egress constraints on the selected A/pod-1 while leaving other pods unaffected.
Once the readiness timing is solid, it's worth running these tests with extra logging (e.g., set -x or additional kubectl get dumps on failure) to make sure any unexpected failures stem from policy logic rather than timing; a minimal teardown hook for this is sketched below.
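A minimal sketch of such a teardown hook: writing to file descriptor 3 is the BATS convention for surfacing output in the runner log, and the multi-networkpolicies resource name is an assumption about the installed CRD.

```bash
# Sketch: dump cluster state after each test so unexpected failures can be told
# apart from readiness/timing problems.
teardown() {
  {
    echo "# state after ${BATS_TEST_NAME}:"
    kubectl get pods -A -o wide
    kubectl get multi-networkpolicies -A
  } >&3 2>&1 || true
}
```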
74-100: "Allowed by policy absence" section nicely validates non-selected pods remain unrestricted
The flows you exercise here (A/pod-2 and B/pod-1 to peers) correctly demonstrate that pods not selected by the MultiNetworkPolicy still enjoy default connectivity, which is an important regression check.
No issues with the expectations; this complements the denied-traffic tests well.
If you later add more policies in this test suite, re‑verify that none of them accidentally select these pods and change the expected behavior for this section.
22-36: Wait for namespace-C pods and avoid fixed sleep for iptables sync
This test waits for pods in namespaces A and B, but later tests rely on pods in namespace C as well. Not waiting for namespace-C pods to become Ready can make connectivity tests flaky if those pods start slowly. The fixed sleep 3 for iptables sync is also somewhat brittle.
Suggestions:

- Add a readiness wait for namespace C, similar to A/B:
  kubectl -n test-namespace-c wait --all --for=condition=ready pod --timeout="${kubewait_timeout}"
- If feasible, replace the fixed sleep with a more deterministic signal that the MultiNetworkPolicy iptables rules have been applied (even a slightly longer but still fixed timeout tuned to your CI would be more robust than "3 seconds and hope"); a polling sketch follows this item.
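One possible shape for that deterministic signal, as a sketch only: the probe command (nc on port 5555), the pod and variable names, and the retry budget are assumptions to be tuned against the actual test harness and image.

```bash
# Sketch: poll until the allowed B/pod-1 -> A/pod-1 flow works instead of
# sleeping a fixed 3 seconds and hoping the iptables rules are in place.
wait_for_policy_sync() {
  local tries=30
  for _ in $(seq 1 "${tries}"); do
    if kubectl -n test-namespace-b exec pod-1 -- \
         sh -c "echo x | nc -w 1 ${a_pod1_net1_ip} 5555" >/dev/null 2>&1; then
      return 0
    fi
    sleep 1
  done
  return 1
}
```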
A networkpolicy can refer to multiple NADs, which can be in different namespaces. Create an end2end test case that involves pods from multiple namespaces.