weight: 13

Automatic Interconnection of Underlay and Overlay Subnets

If a cluster has both Underlay and Overlay subnets, by default Pods in the Overlay subnet can reach Pod IPs in the Underlay subnet through the gateway using NAT. Pods in the Underlay subnet, however, rely on manually configured node routes to reach Pods in the Overlay subnet.

To achieve automatic interconnection between Underlay and Overlay subnets, you can manually modify the YAML of the Underlay subnet. Once configured, Kube-OVN uses an additional Underlay IP to connect the Underlay subnet to the ovn-cluster logical router and sets up the corresponding routing rules to enable interconnection.

Procedure

  1. Go to Administrator.

  2. In the left navigation bar, click on Cluster Management > Resource Management.

  3. Enter Subnet to filter resource objects.

  4. Click on ⋮ > Update next to the Underlay subnet to be modified.

  5. Edit the YAML, adding the field u2oInterconnection: true under spec.

  6. Click Update.

Note: Existing workloads (Pods) in the Underlay subnet must be recreated for the change to take effect.
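For reference, an Underlay subnet manifest with interconnection enabled might look like the following sketch. The subnet name, CIDR, gateway, and VLAN are placeholder values; adapt them to your environment.

```yaml
# Hypothetical Underlay subnet; metadata.name, cidrBlock, gateway,
# and vlan are example values.
apiVersion: kubeovn.io/v1
kind: Subnet
metadata:
  name: underlay-subnet1
spec:
  protocol: IPv4
  cidrBlock: 172.20.0.0/16
  gateway: 172.20.0.1
  vlan: vlan1
  u2oInterconnection: true   # enables Underlay <-> Overlay interconnection
```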

Isolation Between Underlay Subnets with u2oInterconnection Enabled

When multiple Underlay subnets have u2oInterconnection: true enabled, traffic between them no longer goes through the physical gateway but is routed directly via the internal OVN network.
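If the kubectl-ko plugin is installed, you can inspect the routes Kube-OVN programs on the ovn-cluster logical router to confirm that the interconnected subnets are routed internally:

```shell
# List static routes on the ovn-cluster logical router
# (requires the kubectl-ko plugin shipped with Kube-OVN).
kubectl ko nbctl lr-route-list ovn-cluster
```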

If you need to isolate two Underlay subnets while both have u2oInterconnection enabled, you must first configure the kube-ovn-controller parameter, then configure the subnet isolation.

Step 1: Configure kube-ovn-controller

Modify the kube-ovn-controller Deployment to disable connection tracking skip for destination logical port IPs:

kubectl edit deployment kube-ovn-controller -n kube-system

Add or modify the following argument:

spec:
  template:
    spec:
      containers:
      - name: kube-ovn-controller
        args:
        - --ls-ct-skip-dst-lport-ips=false
CAUTION

--ls-ct-skip-dst-lport-ips controls whether to skip connection tracking (conntrack) for traffic destined to logical port IPs. The default value is true, which skips conntrack to improve performance. Setting it to false does not affect functionality but may slightly impact performance.

However, for Underlay subnets with ACL-based isolation, you must set it to false. Otherwise, gateway-to-Pod traffic will fail (e.g., ping requests reach the Pod but replies are dropped), because ACL isolation uses allow-related which requires conntrack state; without it, replies cannot be identified as "related" and get dropped.
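After saving, the Deployment rolls out new controller Pods. You can confirm the argument took effect with a check along these lines:

```shell
# Print the container args and look for the flag.
kubectl -n kube-system get deployment kube-ovn-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}' \
  | tr ',' '\n' | grep ls-ct-skip-dst-lport-ips
```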

Step 2: Configure Subnet Isolation

Configure the subnet with the following parameters:

spec:
  u2oInterconnection: true
  acls:
  - action: drop
    direction: to-lport  # Ingress direction (traffic entering the logical port)
    match: ip4.src == 172.20.0.0/16
    priority: 1002
  - action: drop
    direction: to-lport  # Ingress direction
    match: ip4.src == 192.50.0.0/16
    priority: 1002

ACL Parameters:

| Parameter | Description |
| --- | --- |
| action | The action to take: allow, drop, or allow-related |
| direction | Traffic direction: to-lport (ingress) or from-lport (egress) |
| match | OVN match expression using L2-L4 fields and boolean operators |
| priority | Rule priority (higher values are evaluated first; recommended range: 1002-1899) |
NOTE
  • The acls field provides priority-based rule evaluation, offering more flexibility than standard Kubernetes NetworkPolicy.
  • When using to-lport direction, ip4.src refers to the source IP of incoming traffic.
  • Recommended priority range: 1002 to 1899 to avoid conflicts with system default ACL rules.
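To verify the isolation, you can ping across the two subnets from inside a Pod. The Pod name and target IP below are placeholders for workloads in the two Underlay subnets:

```shell
# pod-a lives in 172.20.0.0/16; 192.50.0.5 stands in for a Pod IP
# in the other subnet (both hypothetical).
# With the drop ACLs in place, cross-subnet pings should time out,
# while pings within the same subnet still succeed.
kubectl exec pod-a -- ping -c 3 -W 2 192.50.0.5
```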