
Conversation

@zeeke (Member) commented Oct 8, 2025

A networkpolicy can refer to multiple NADs, which can be in different namespaces. Create an end2end test case that involves pods from multiple namespaces.
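
For context, a minimal, hypothetical sketch of such a policy (resource and network names are illustrative, not the ones used by the test manifest): the k8s.v1.cni.cncf.io/policy-for annotation takes a comma-separated list of NetworkAttachmentDefinitions, and each entry may be prefixed with the namespace it lives in.

    # Illustrative only: a MultiNetworkPolicy whose policy-for annotation points
    # at NADs in two different namespaces.
    kubectl apply -f - <<'EOF'
    apiVersion: k8s.cni.cncf.io/v1beta1
    kind: MultiNetworkPolicy
    metadata:
      name: cross-namespace-sample
      namespace: ns-a
      annotations:
        k8s.v1.cni.cncf.io/policy-for: macvlan-net-a, ns-b/macvlan-net-b
    spec:
      podSelector:
        matchLabels:
          app: pod-1
      policyTypes:
      - Ingress
      ingress:
      - from:
        - namespaceSelector:
            matchLabels:
              kubernetes.io/metadata.name: ns-b
          podSelector:
            matchLabels:
              app: pod-1
    EOF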

Summary by CodeRabbit

  • Tests
    • Added end-to-end tests validating MultiNetworkPolicy behavior across multiple namespaces with distinct network attachments and cross-namespace pod-to-pod connectivity.



coveralls commented Oct 8, 2025

Pull Request Test Coverage Report for Build 19827019308

Details

  • 0 of 0 changed or added relevant lines in 0 files are covered.
  • No unchanged relevant lines lost coverage.
  • Overall coverage remained the same at 57.865%

Totals (Coverage Status)
  • Change from base Build 19426624268: 0.0%
  • Covered Lines: 1133
  • Relevant Lines: 1958

💛 - Coveralls

spec:
  containers:
  - name: macvlan-worker1
    image: ghcr.io/k8snetworkplumbingwg/multi-networkpolicy-iptables:e2e-test


Using ghcr.io here is breaking the e2e tests as they were designed.

zeeke (Member, Author):


Do you mean it is not directly using changes from /e2e/Dockerfile?

All the tests have this image configuration, which I know is suboptimal, but it does not relate to this PR, right?


I mean all of them use localhost:5000

zeeke (Member, Author):


Got it. Yes, that would be a good improvement

A networkpolicy can refer to multiple NADs, which can be in different namespaces.
Create an end2end test case that involves pods from multiple namespaces.

Signed-off-by: Andrea Panattoni <[email protected]>
@zeeke force-pushed the e2e/different-namespaces branch from 530b42a to b87e443 on December 1, 2025 at 14:58

coderabbitai bot commented Dec 1, 2025

Walkthrough

Adds end-to-end test suite for MultiNetworkPolicy validation across multiple namespaces with distinct network attachments. Includes BATS test script that creates test resources, verifies cross-namespace pod connectivity according to policy rules, checks denial cases, and cleans up resources including lingering iptables cache files.

Changes

Cohort: Multi-namespace MultiNetworkPolicy E2E tests
File(s): e2e/tests/multi-namespace-multinet.bats, e2e/tests/multi-namespace-multinet.yml
Summary: Introduces a comprehensive E2E test suite validating MultiNetworkPolicy behavior across three namespaces with macvlan network attachments. The test script exercises pod-to-pod connectivity enforcement, policy-based ingress/egress rules, the absence of connectivity without policies, and resource cleanup with iptables cache verification. The YAML manifest defines namespaces, NetworkAttachmentDefinitions with distinct IPAM ranges, six pods with network attachments, and a MultiNetworkPolicy with cross-namespace ingress/egress rules.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • YAML manifest complexity: multiple namespaces, NetworkAttachmentDefinitions with IPAM configuration, and MultiNetworkPolicy with cross-namespace selector rules require validation
  • Test script verification: ensure all connectivity checks (allow and deny cases), status assertions, and cleanup steps correctly validate the policy behavior
  • Namespace and network attachment references: confirm proper linkage between pods, networks, and policy definitions across three namespaces

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
  • Title check (❓ Inconclusive): The title 'e2e: Multiple namespaces' is vague and generic, using non-descriptive language that doesn't convey the specific nature of the changes. Resolution: consider a more specific title that captures the main change, such as 'e2e: Add multi-namespace MultiNetworkPolicy tests' or 'e2e: Test MultiNetworkPolicy across multiple namespaces'.
✅ Passed checks (2 passed)
  • Description Check (✅ Passed): Check skipped - CodeRabbit's high-level summary is enabled.
  • Docstring Coverage (✅ Passed): Docstring coverage is 100.00%, which is sufficient. The required threshold is 80.00%.

coderabbitai bot left a comment


Actionable comments posted: 0

♻️ Duplicate comments (1)
e2e/tests/multi-namespace-multinet.yml (1)

114-114: Align test image registry with existing e2e convention

Other e2e manifests in this repo use a local registry (localhost:5000/...) so that the tests exercise the image built from /e2e/Dockerfile. Using ghcr.io/...:e2e-test here diverges from that pattern and can break the “use locally built image” assumption.

Consider switching these image references to the local registry variant used elsewhere (or centralizing the image name via a variable/template) for consistency.

Verify which image reference your existing e2e tests use (e.g., via rg -n "localhost:5000" e2e/tests) and align this manifest to that pattern.

Also applies to: 131-131, 149-149, 166-166, 184-184, 201-201
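
As a rough sketch of that alignment (the local image name and tag below are assumptions; check what the other manifests actually reference before rewriting anything):

    # List the image references used across the e2e manifests.
    rg -n "image:" e2e/tests/*.yml
    # If the other tests use a localhost:5000 image, rewrite this manifest to match.
    sed -i 's|ghcr.io/k8snetworkplumbingwg/multi-networkpolicy-iptables:e2e-test|localhost:5000/multi-networkpolicy-iptables:e2e-test|' \
      e2e/tests/multi-namespace-multinet.yml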

🧹 Nitpick comments (4)
e2e/tests/multi-namespace-multinet.yml (1)

112-118: Privileged test pods: acceptable for e2e, but consider documenting or scoping

All six pods run with securityContext.privileged: true, which static analysis correctly flags as high‑risk. For these iptables/multi‑network e2e tests that’s likely required, but it would help future readers and security tooling if this were explicitly called out as test‑only and/or excluded from generic policy checks.

If you keep them privileged, I’d suggest either:

  • Adding a brief comment that these pods must be privileged for network/iptables manipulation; and/or
  • Configuring your security scanners to ignore this specific e2e path.

Confirm that the privileged context here matches what other existing e2e tests in this repo use, and that your CI security scanners are configured to treat this directory as test‑only.

Also applies to: 129-135, 147-153, 164-170, 182-188, 199-205
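
One quick, illustrative way to do that confirmation from the repo root:

    # Check whether the other e2e manifests already run their test pods privileged,
    # i.e. whether this manifest simply follows the existing convention.
    rg -n "privileged: true" e2e/tests/*.yml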

e2e/tests/multi-namespace-multinet.bats (3)

3-6: Update comment to reflect three‑namespace scenario

The header still says this test creates “two namespaces”, but the manifest and tests clearly involve three namespaces (A, B, and C). Updating the comment avoids confusion for future readers.

Double‑check the manifest (multi-namespace-multinet.yml) to ensure all three namespaces are indeed required, then adjust this wording accordingly.


8-20: setup() precomputes pod IPs before resources exist in the first test

setup() runs before every test, so for the very first test (which actually creates the manifest) the get_net1_ip calls happen before any pods exist. That’s harmless here (the IP variables aren’t used in that test and failures from kubectl exec inside command substitution are effectively ignored), but it’s a bit surprising and does extra kubectl calls.

Consider either:

  • Moving the get_net1_ip calls into the tests that actually need them (after the kubectl wait), or
  • Adding a brief comment explaining that the first run of setup() runs before resource creation and is effectively a no‑op for IP discovery.

Verify how Bats runs setup() in your CI (order and set -e behavior) to ensure these early kubectl exec failures can’t accidentally cause test failures on your environment.
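
A minimal sketch of the first option, assuming get_net1_ip takes a namespace and a pod name (its real signature is in e2e/tests/common.bash) and that the pods are named pod-1/pod-2:

    # Hypothetical: resolve the net1 IPs inside the test that needs them, after
    # the readiness waits, instead of unconditionally in setup().
    @test "allowed connectivity across namespaces" {
      kubectl -n test-namespace-a wait --all --for=condition=ready pod --timeout="${kubewait_timeout}"
      kubectl -n test-namespace-b wait --all --for=condition=ready pod --timeout="${kubewait_timeout}"

      a_pod1_ip=$(get_net1_ip test-namespace-a pod-1)
      b_pod1_ip=$(get_net1_ip test-namespace-b pod-1)

      # ...connectivity assertions against ${a_pod1_ip} and ${b_pod1_ip}...
    }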


102-116: Cleanup: also consider namespace‑C pods and multiple controller pods

Two minor robustness points in the cleanup test:

  1. You wait for pod deletion only in namespaces A and B. Since the manifest also creates namespace C, waiting there as well would make cleanup more deterministic:
    kubectl -n test-namespace-c wait --all --for=delete pod --timeout="${kubewait_timeout}"
  2. The pod_name pipeline can return multiple lines if there are several matching kube‑system pods. In that case kubectl exec ${pod_name} will fail. You might want to either:
    • Restrict to the first match (e.g., | head -n1), or
    • Loop over matches and assert all have zero iptables cache files.

These tweaks would make the final iptables‑cache assertion more stable across different cluster topologies.

Test this cleanup step on a multi‑worker cluster (or kind cluster with multiple worker nodes) to confirm the kubectl exec still behaves as expected when more than one matching pod exists.
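
A sketch of the loop variant suggested above (the grep pattern used to pick the multi-networkpolicy pods and the iptables cache path are assumptions, not verified against the repo):

    # Assert that every matching kube-system pod has an empty iptables cache,
    # instead of exec'ing into a single, possibly ambiguous, pod name.
    for pod in $(kubectl -n kube-system get pods -o name | grep multi-networkpolicy); do
      cache_files=$(kubectl -n kube-system exec "${pod}" -- \
        sh -c 'ls /var/lib/multi-networkpolicy/iptables 2>/dev/null | wc -l')
      [ "${cache_files}" -eq 0 ]
    done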

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6304ccd and b87e443.

📒 Files selected for processing (2)
  • e2e/tests/multi-namespace-multinet.bats (1 hunks)
  • e2e/tests/multi-namespace-multinet.yml (1 hunks)
🧰 Additional context used
🧬 Code graph analysis (1)
e2e/tests/multi-namespace-multinet.bats (1)
e2e/tests/common.bash (1)
  • get_net1_ip (5-12)
🪛 Checkov (3.2.334)
e2e/tests/multi-namespace-multinet.yml

[high] Container should not be privileged (CKV_K8S_16)
[medium] Containers should not run with allowPrivilegeEscalation (CKV_K8S_20)
[medium] Minimize the admission of root containers (CKV_K8S_23)

Each of the three findings above is reported for every pod block in the manifest: lines 102-118, 119-135, 137-153, 154-170, 172-188, and 189-205.

⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (4)
  • GitHub Check: build-images
  • GitHub Check: test (1.24.x, ubuntu-latest)
  • GitHub Check: e2e-kind
  • GitHub Check: Analyze (go)
🔇 Additional comments (4)
e2e/tests/multi-namespace-multinet.yml (1)

12-32: Namespace, NAD, and MultiNetworkPolicy wiring looks consistent

Namespaces, labels, NAD names/IPAM ranges, and the MultiNetworkPolicy podSelector/namespaceSelector/policy-for annotation are internally consistent and match the intended traffic matrix (A/pod‑1 selected, ingress from B/pod‑1, egress to B/pod‑2, no references to C beyond policy-for). No functional issues from a manifest‑wiring perspective.

Please still run this end‑to‑end on a fresh cluster to confirm the policy behaves exactly as expected with your MultiNetworkPolicy implementation.

Also applies to: 33-53, 55-75, 77-97, 206-235
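
A couple of illustrative commands for that end-to-end confirmation (pod and interface names are assumptions; multi-networkpolicy-iptables renders its rules inside the target pod's network namespace, so that is where to look):

    # Inspect the rules rendered into the selected pod and compare them with the
    # intended ingress/egress matrix, plus the pod's secondary-interface address.
    kubectl -n test-namespace-a exec pod-1 -- iptables -S
    kubectl -n test-namespace-a exec pod-1 -- ip -o addr show dev net1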

e2e/tests/multi-namespace-multinet.bats (3)

38-72: Connectivity assertions align with the MultiNetworkPolicy definition

The “Allowed connectivity” and “Denied connectivity” tests match the policy semantics:

  • B/pod‑1 → A/pod‑1 and A/pod‑1 → B/pod‑2 are allowed.
  • All other {A1→A2,B1,C1,C2} and {A2,B2,C1,C2→A1} flows are denied.

This gives good coverage of both ingress and egress constraints on the selected A/pod‑1 while leaving other pods unaffected.

Once the readiness timing is solid, it’s worth running these tests with extra logging (e.g., set -x or additional kubectl get dumps on failure) to confirm that any unexpected failures aren’t due to timing rather than policy logic.
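
A small sketch of the "kubectl get dumps on failure" idea (the helper name is made up, and the plural resource name for MultiNetworkPolicy may differ on a given cluster):

    # Hypothetical helper: dump cluster state to BATS file descriptor 3 so it
    # shows up in the test output, making timing problems distinguishable from
    # genuine policy problems. Call it before a flaky assertion or in teardown().
    dump_debug_state() {
      {
        echo "--- pods ---"
        kubectl get pods -A -o wide
        echo "--- multi-networkpolicies ---"
        kubectl get multi-networkpolicies -A -o yaml
      } >&3 2>&1 || true
    }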


74-100: “Allowed by policy absence” section nicely validates non‑selected pods remain unrestricted

The flows you exercise here (A/pod‑2 and B/pod‑1 to peers) correctly demonstrate that pods not selected by the MultiNetworkPolicy still enjoy default connectivity, which is an important regression check.

No issues with the expectations; this complements the denied‑traffic tests well.

If you later add more policies in this test suite, re‑verify that none of them accidentally select these pods and change the expected behavior for this section.


22-36: Wait for namespace‑C pods and avoid fixed sleep for iptables sync

This test waits for pods in namespaces A and B, but later tests rely on pods in namespace C as well. Not waiting for namespace‑C pods to become Ready can make connectivity tests flaky if those pods start slowly. The fixed sleep 3 for iptables sync is also somewhat brittle.

Suggestions:

  • Add a readiness wait for namespace C, similar to A/B:
    kubectl -n test-namespace-c wait --all --for=condition=ready pod --timeout="${kubewait_timeout}"
  • If feasible, replace the fixed sleep with a more deterministic signal that the MultiNetworkPolicy iptables rules have been applied (even a slightly longer but still fixed timeout tuned to your CI would be more robust than "3 seconds and hope").
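
A sketch of both points (the pod name and the MULTI-INGRESS chain name are assumptions about what multi-networkpolicy-iptables installs in the pod's network namespace):

    # Wait for all three namespaces, then poll for the policy-generated chain
    # instead of relying on a fixed sleep.
    for ns in test-namespace-a test-namespace-b test-namespace-c; do
      kubectl -n "${ns}" wait --all --for=condition=ready pod --timeout="${kubewait_timeout}"
    done
    for _ in $(seq 1 30); do
      kubectl -n test-namespace-a exec pod-1 -- iptables -S MULTI-INGRESS &>/dev/null && break
      sleep 1
    done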
