---
weight: 20
title: "KubeVirt L2 Integration"
description: "Integrate OpenPERouter with KubeVirt for VM-to-VM traffic via L2 EVPN/VXLAN overlay"
icon: "article"
date: "2025-06-15T15:03:22+02:00"
lastmod: "2025-06-15T15:03:22+02:00"
toc: true
---

This example demonstrates how to connect KubeVirt virtual machines to an L2 EVPN/VXLAN overlay using OpenPERouter, extending the [Layer 2 integration example](layer2.md).

## Overview

The setup creates both Layer 2 and Layer 3 VNIs, with OpenPERouter automatically creating a Linux bridge on the host. Two KubeVirt virtual machines are connected to this bridge via Multus secondary interfaces of type `bridge`.

### Architecture

- **L3VNI (VNI 100)**: Provides routing capabilities and connects to external networks
- **L2VNI (VNI 110)**: Creates a Layer 2 domain for VM-to-VM communication
- **Linux Bridge**: Automatically created by OpenPERouter for VM connectivity
- **VM Connectivity**: VMs connect to the bridge using Multus network attachments

## Prerequisites

- A Kubernetes cluster with KubeVirt installed
- OpenPERouter deployed and configured
- Multus CNI installed and configured
- Access to a real cluster (this example cannot run on kind); a quick environment check is sketched below
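
A minimal sketch for verifying these components are present; the namespace names (`kubevirt`, `openperouter-system`) are assumptions and may differ in your deployment:

```bash
# KubeVirt control plane running? (namespace assumed to be "kubevirt")
kubectl get pods -n kubevirt

# OpenPERouter pods running? (namespace assumed to be "openperouter-system")
kubectl get pods -n openperouter-system

# Multus CRD registered?
kubectl get crd network-attachment-definitions.k8s.cni.cncf.io
```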

## Configuration

### 1. OpenPERouter Resources

Create the L3VNI and L2VNI resources:

```yaml
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L3VNI
metadata:
  name: red
  namespace: openperouter-system
spec:
  asn: 64514
  vni: 100
  localcidr:
    ipv4: 192.169.10.0/24
  hostasn: 64515
---
apiVersion: openpe.openperouter.github.io/v1alpha1
kind: L2VNI
metadata:
  name: layer2
  namespace: openperouter-system
spec:
  hostmaster:
    autocreate: true
    type: bridge
  l2gatewayip: 192.170.1.1/24
  vni: 110
  vrf: red
  vxlanport: 4789
```

**Key Configuration Points:**

- The L2VNI references the L3VNI via the `vrf: red` field, enabling routed traffic
- `hostmaster.autocreate: true` creates a `br-hs-110` bridge on the host
- `l2gatewayip` defines the gateway IP for the VM subnet
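
Assuming both resources are saved to a file such as `vnis.yaml` (a hypothetical name), applying and verifying them might look like the following; the plural resource names are assumptions derived from the kind names:

```bash
kubectl apply -f vnis.yaml

# Confirm the resources were accepted (plural names assumed)
kubectl get l3vnis,l2vnis -n openperouter-system

# On a node, the bridge should appear once the router reconciles
ip link show br-hs-110
```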

### 2. Network Attachment Definition

Create a Multus network attachment definition for the bridge:

```yaml
apiVersion: k8s.cni.cncf.io/v1
kind: NetworkAttachmentDefinition
metadata:
  name: evpn
  namespace: default
spec:
  config: |
    {
      "cniVersion": "0.3.1",
      "name": "evpn",
      "type": "bridge",
      "bridge": "br-hs-110",
      "macspoofchk": false,
      "disableContainerInterface": true
    }
```
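
Once applied, the attachment should be visible in the target namespace; `net-attach-def` is the short name Multus registers for this CRD:

```bash
kubectl get net-attach-def evpn -n default
```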

### 3. Virtual Machine Configuration

Create two virtual machines with network connectivity:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachine
metadata:
  name: vm-cirros
spec:
  running: false
  template:
    spec:
      networks:
        - name: evpn
          multus:
            networkName: evpn
      domain:
        devices:
          interfaces:
            - bridge: {}
              name: evpn
          disks:
            - disk:
                bus: virtio
              name: containerdisk
            - disk:
                bus: virtio
              name: cloudinitdisk
        resources:
          requests:
            memory: 1024M
      terminationGracePeriodSeconds: 0
      volumes:
        - containerDisk:
            image: quay.io/kubevirt/cirros-container-disk-demo:devel
          name: containerdisk
        - cloudInitNoCloud:
            userData: |
              #!/bin/sh
              sudo ip address add 192.170.1.3/24 dev eth0
              sudo ip r add default via 192.170.1.1
              echo 'printed from cloud-init userdata'
          name: cloudinitdisk
```

**VM Configuration Details:**

- Uses the `evpn` network attachment for bridge connectivity
- Cloud-init configures the VM's IP address and default gateway
- The second VM should use IP `192.170.1.4/24` for testing; see the snippet below
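
The second VM can reuse the manifest above with a different `metadata.name` (`vm-cirros2` is an assumed name, any unique one works) and its own address in the cloud-init user data; only the changed volume is shown:

```yaml
- cloudInitNoCloud:
    userData: |
      #!/bin/sh
      sudo ip address add 192.170.1.4/24 dev eth0
      sudo ip r add default via 192.170.1.1
  name: cloudinitdisk
```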

## Validation

### VM-to-VM Connectivity

Test connectivity between the two VMs:

```bash
# Start both VMs (they are created with running: false)
virtctl start vm-cirros
virtctl start vm-cirros2   # second VM, name as assumed above

# Connect to the first VM's console
virtctl console vm-cirros

# From inside the VM, ping the other VM
ping 192.170.1.4
```

Expected output:

```
PING 192.170.1.4 (192.170.1.4): 56 data bytes
64 bytes from 192.170.1.4: seq=0 ttl=64 time=0.470 ms
64 bytes from 192.170.1.4: seq=1 ttl=64 time=0.321 ms
```

### Packet Flow Verification

Monitor traffic from inside the router pod to verify the packet flow. The interface names in the expected output further below come from capturing on all interfaces:

```bash
# Inside the router pod, capture ICMP traffic on all interfaces
tcpdump -i any -n icmp
```
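
Getting a shell in the router pod depends on how OpenPERouter is deployed; the namespace below is an assumption and the pod name is a placeholder, so adjust both to your installation:

```bash
# Find the router pod running on the node that hosts the VM
kubectl get pods -n openperouter-system -o wide

# Run the capture from inside it (pod name is a placeholder)
kubectl exec -it -n openperouter-system <router-pod> -- tcpdump -i any -n icmp
```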

Expected packet flow:

```
13:56:16.151606 pe-110 P IP 192.170.1.3 > 192.170.1.4: ICMP echo request
13:56:16.151610 vni110 Out IP 192.170.1.3 > 192.170.1.4: ICMP echo request
13:56:16.152073 vni110 P IP 192.170.1.4 > 192.170.1.3: ICMP echo reply
13:56:16.152075 pe-110 Out IP 192.170.1.4 > 192.170.1.3: ICMP echo reply
```

The request enters the router on `pe-110` and leaves encapsulated through `vni110`, confirming that VM-to-VM traffic traverses the EVPN overlay.

### Layer 3 Connectivity

Test connectivity to hosts in the L3VNI domain:

```bash
# From the VM console, ping a host reachable through the L3VNI
ping 192.168.10.3
```

Expected output:

```
PING 192.168.10.3 (192.168.10.3): 56 data bytes
64 bytes from 192.168.10.3: seq=0 ttl=62 time=1.207 ms
64 bytes from 192.168.10.3: seq=1 ttl=62 time=0.998 ms
```
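
If this ping fails, checking the EVPN routes from the router's FRR instance can show whether the VM's prefix was learned; this assumes a `vtysh`-capable FRR shell inside the router pod:

```bash
# Inside the router pod: list EVPN routes learned over BGP
vtysh -c "show bgp l2vpn evpn"
```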

### Live Migration Testing

Verify that connectivity persists during live migration:

```bash
# Start a continuous ping from one VM to the other
ping 192.170.1.4

# In another terminal, live migrate the pinging VM
virtctl migrate vm-cirros
```

The ping should continue working throughout the migration process.
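
To observe the migration while the ping runs, watching the VMI is one option; the node column changes once the VM lands on the destination node:

```bash
kubectl get vmi vm-cirros -o wide -w
```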

## Troubleshooting

### Common Issues

1. **Bridge not created**: Verify the L2VNI has `hostmaster.autocreate: true`
2. **VM cannot reach gateway**: Check that the VM's IP is in the same subnet as `l2gatewayip`
3. **No VM-to-VM connectivity**: Ensure both VMs are connected to the same bridge (`br-hs-110`)

### Debug Commands

```bash
# Check bridge creation (run on the node)
ip link show br-hs-110

# Verify the network attachment exists
kubectl get network-attachment-definitions

# Check the VM's interfaces (run inside the VM console)
virtctl console vm-cirros
ip addr show
```