Uplink Topologies
When deploying bare-metal Kubernetes, servers (Machines) can connect to the network fabric through several different Uplink Topologies.
There are a few common options, balancing simplicity against resiliency and flexibility:
Single-homed L2 Uplink
The server connects with a single NIC to one Top-of-Rack (ToR)/leaf switch. The port is configured as either:
- Access port: server lives in a single VLAN
- Trunk port: limited set of VLANs are allowed to the host
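As a sketch, a single-homed trunk uplink might look like the following netplan fragment (assuming Ubuntu with netplan; the interface name `eno1`, VLAN ID, and addresses are illustrative placeholders):

```yaml
# Hypothetical sketch: single NIC (eno1) on a trunk port, VLAN 100 tagged on the host
network:
  version: 2
  ethernets:
    eno1: {}
  vlans:
    vlan100:
      id: 100
      link: eno1
      addresses: [10.0.100.10/24]
      routes:
        - to: default
          via: 10.0.100.1
```

For an access port, the VLAN stanza disappears and the address is configured directly on `eno1`; the switch handles the VLAN tagging.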
When it’s used:
- Lab, test, or proof-of-concept setups
- Small edge or branch deployments
- Environments where high availability isn’t required
| Pros | Cons |
|---|---|
| Very simple to configure | Single point of failure (NIC or switch) |
| Minimal operational overhead | Limited scalability and resiliency |
| No special features required | Not suitable for production-grade high availability |
Dual-homed L2 Uplink with Link Aggregation
The server uses two (or more) NICs bonded together, e.g. with LACP (IEEE 802.3ad) or in active/standby mode.
The uplinks connect either:
- To the same switch (classic port-channel), or
- To a pair of switches in MLAG/vPC/stacking mode for redundancy.
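A host-side LACP bond could be sketched in netplan as follows (again assuming netplan; interface names and addresses are placeholders, and the switch side must present a matching port-channel or MLAG/vPC pair):

```yaml
# Hypothetical sketch: active/active LACP bond over two NICs to an MLAG switch pair
network:
  version: 2
  ethernets:
    eno1: {}
    eno2: {}
  bonds:
    bond0:
      interfaces: [eno1, eno2]
      parameters:
        mode: 802.3ad            # LACP
        lacp-rate: fast
        transmit-hash-policy: layer3+4
      addresses: [10.0.100.10/24]
      routes:
        - to: default
          via: 10.0.100.1
```

The `transmit-hash-policy` choice determines how flows are spread across the links; `layer3+4` hashes on IP and port, which usually balances Kubernetes traffic better than the default MAC-based policy.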
When it’s used:
- Most common setup in enterprise production data centers
- Environments where redundancy and higher bandwidth are important
- Default choice for mission-critical workloads without advanced routing requirements
| Pros | Cons |
|---|---|
| Provides redundancy (NIC or switch) | Requires MLAG/vPC/stack config on switches |
| Higher bandwidth with active/active | Limited to L2 — scaling across racks is harder |
| Still simple compared to routed BGP | Troubleshooting can be harder if LACP/MLAG issues occur |
Dual-homed L3 Uplink with ECMP
The server participates directly in the routed fabric. Each NIC connects to a ToR/leaf switch using Layer-3, often with BGP sessions established.
Common in EVPN/VXLAN fabrics where hosts act as fabric peers instead of just L2 endpoints.
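On the host, this is typically implemented with a routing daemon such as FRR. A minimal sketch, assuming BGP unnumbered peering over both NICs and a loopback address advertised into the fabric (the ASNs, interface names, and address are illustrative):

```
! Hypothetical FRR sketch: host peers with both ToR switches, ECMP across uplinks
router bgp 65001
 neighbor eno1 interface remote-as external
 neighbor eno2 interface remote-as external
 address-family ipv4 unicast
  network 10.0.0.10/32
  maximum-paths 2
 exit-address-family
```

With both sessions up, the host holds two equal-cost routes toward the fabric and the fabric holds two routes back to the host's loopback, so traffic survives the loss of either NIC or switch without any L2 bonding.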
When it’s used:
- Modern data centers with leaf-spine architectures
- Kubernetes or cloud-native clusters at scale
- Multi-tenant environments where VLAN stretching is undesirable
| Pros | Cons |
|---|---|
| Scales well — no VLAN sprawl or stretching | Higher operational complexity on the host |
| Fabric treats servers as first-class participants | Requires BGP/FRR or similar routing on the host |
| Enables automation, multi-tenancy, and mobility | Needs more operational maturity to run reliably |
| Simplifies network with routed underlay model | Not as widely adopted in smaller environments |
L3 Uplinks can be combined with L2 link aggregation to simplify configuration and reduce management overhead. For a good example of how this can be integrated, see the Isovalent Cilium and Cisco ACI Blog Post.
Next, check out how Network Profiles can be used to model the different Uplink Topologies in meltcloud.