Juniper Networks Juniper Cloud Native Router

Details
- Model: Juniper Cloud Native Router 25.4
- Published: 2025-12-19
- Manufacturer: Juniper Networks, Inc.
- Location: 1133 Innovation Way Sunnyvale, California 94089 USA
- Contact: 574-537-8900
- Website: www.juniper.net
Product Information
- The Juniper Cloud Native Router 25.4 is a high-performance router designed for cloud-native environments. It offers a wide range of Layer 2 (L2) and Layer 3 (L3) features to support various networking requirements.
Product Usage Instructions:
- Introduction:
- The Juniper Cloud-Native Router provides advanced networking capabilities for cloud environments. It consists of various components such as vRouter Datapath, Deployment Modes, and Interfaces Overview.
- L2 Features:
- The L2 features include Static VXLAN, Layer 2 Circuit, Loop Detection, Access Control Lists, MAC Learning, Native VLAN, and more. Follow the specific configuration steps provided in the user manual for each feature.
- L3 Features:
- The L3 features cover protocols like LLDP, DHCP Relay, CoS, TWAMP, Segment Routing, IPsec Security Services, and more. Configure these features based on your network requirements and follow the guidelines outlined in the manual.
- Workload Configuration:
- Learn about different use cases and configurations for the Cloud-Native Router, including deploying L2 Pods with Kernel Interfaces in Access Mode. Refer to the detailed instructions provided in the user guide for step-by-step setup.
Frequently Asked Questions
Q: Is the Juniper Cloud Native Router compatible with all cloud platforms?
A: The Juniper Cloud Native Router is designed to work seamlessly with various cloud platforms. However, it's recommended to check for specific compatibility requirements with your cloud provider.
Q: How can I update the firmware of the Cloud Native Router?
A: Firmware updates for the Cloud Native Router can be obtained from the official Juniper Networks website. Follow the instructions provided in the firmware update guide to ensure a smooth update process.
Juniper Cloud Native Router 25.4 User Guide
Juniper Networks, Inc. 1133 Innovation Way Sunnyvale, California 94089 USA 574-537-8900 www.juniper.net
Juniper Networks, the Juniper Networks logo, Juniper, and Junos are registered trademarks of Juniper Networks, Inc. in the United States and other countries. All other trademarks, service marks, registered marks, or registered service marks are the property of their respective owners.
Juniper Networks assumes no responsibility for any inaccuracies in this document. Juniper Networks reserves the right to change, modify, transfer, or otherwise revise this publication without notice.
Juniper Cloud Native Router 25.4 User Guide Copyright © 2025 Juniper Networks, Inc. All rights reserved.
The information in this document is current as of the date on the title page.
YEAR 2000 NOTICE
Juniper Networks hardware and software products are Year 2000 compliant. Junos OS has no known time-related limitations through the year 2038. However, the NTP application is known to have some difficulty in the year 2036.
END USER LICENSE AGREEMENT
The Juniper Networks product that is the subject of this technical documentation consists of (or is intended for use with) Juniper Networks software. Use of such software is subject to the terms and conditions of the End User License Agreement ("EULA") posted at https://support.juniper.net/support/eula/. By downloading, installing, or using such software, you agree to the terms and conditions of that EULA.
Table of Contents
1
Introduction
Juniper Cloud-Native Router Overview | 2
Juniper Cloud-Native Router Components | 5
Juniper Cloud-Native Router vRouter Datapath | 11
Cloud-Native Router Deployment Modes | 13
Cloud-Native Router Interfaces Overview | 14
2
L2 Features
L2 Features Overview
Static VXLAN with IPv4 and IPv6 Underlay | 28
Layer 2 Circuit | 35
Loop Detection in Pure L2 Mode | 50
Access Control Lists (Firewall Filters) | 52
MAC Learning and Aging | 55
APIs and CLI Commands for Bond Interfaces | 57
Native VLAN | 60
Enabling Dynamic Device Personalization (DDP) on Individual Interfaces | 62
3
L3 Features
L3 Features Overview
Link Layer Discovery Protocol (LLDP) | 67
LLDP Overview
LLDP Verification | 70
DHCP Relay | 73
Layer-3 Class of Service (CoS) | 80
L3 CoS Overview
L3 CoS Configuration | 87
L3 CoS Verification | 95
Two-Way Active Measurement Protocol (TWAMP) | 111
Segment Routing | 122
Topology-Independent Loop-Free Alternates (TI-LFA) | 143
TI-LFA Overview
TI-LFA Configuration (SR-MPLS on IS-IS) | 147
Access Control Lists (Firewall Filters) | 157
IPsec Security Services | 176
Cloud-Native Router as a Transit Gateway | 205
EVPN Type 5 Routing over VXLAN Tunnels | 206
Integrated Routing and Bridging on JCNR | 214
L3 Routing Protocols | 222
MPLS Support | 225
Bidirectional Forwarding Detection (BFD) | 226
Virtual Router Redundancy Protocol (VRRP) | 228
Virtual Routing Instance (VRF-Lite) | 229
ECMP | 233
BGP Unnumbered | 233
Layer-3 VLAN Sub-Interfaces | 234
Enabling Dynamic Device Personalization (DDP) on Individual Interfaces | 236
4
Workload Configuration
Cloud-Native Router Use-Cases and Configuration Overview
Deploy L2 Pod with Kernel Interface (Access Mode) | 245
Overview
Configuration Example
Deploy L2 Pod with virtio Interface (Trunk Mode) | 250
Overview
Configuration Example
Deploy L3 Pod with VPN Interface | 255
Overview
Configuration Example
Deploy L3 Pod with VLAN Sub-Interface | 261
Overview
Configuration Example
Deploy a KubeVirt-based VM | 268
Configuration Steps | 268
5
Monitoring and Logging
Using Cloud-Native Router Controller CLI (cRPD) | 283
Telemetry Capabilities | 289
Logging and Notifications | 333
6
Troubleshooting
Troubleshoot using the vRouter CLI | 339
Troubleshoot using Introspect | 351
7
Appendix
Access cRPD CLI | 354
Access vRouter CLI | 356
Juniper Technology Previews (Tech Previews) | 358
CHAPTER 1
Introduction
IN THIS CHAPTER
Juniper Cloud-Native Router Overview | 2
Juniper Cloud-Native Router Components | 5
Juniper Cloud-Native Router vRouter Datapath | 11
Cloud-Native Router Deployment Modes | 13
Cloud-Native Router Interfaces Overview | 14
Juniper Cloud-Native Router Overview
SUMMARY
This topic provides an overview of the Juniper Cloud-Native Router (JCNR), its use cases, and its features.
IN THIS SECTION
Overview | 2
Use Cases | 2
Architecture and Key Components | 3
Features | 4

Overview
While 5G unleashes higher bandwidth, lower latency, and higher capacity, it also brings new infrastructure challenges, such as an increased number of base stations and cell sites, more backhaul links with larger capacity, and more cell site routers and aggregation routers. Service providers are integrating cloud-native infrastructure in distributed RAN (D-RAN) topologies, which are usually small, leased spaces with limited power, space, and cooling. The disaggregation of the radio access network (RAN) and the expansion of 5G data centers into cloud hyperscalers have added new requirements for cloud-native routing. The Juniper Cloud-Native Router gives service providers the flexibility to meet the expansion requirements of 5G rollouts, reducing both CapEx and OpEx.
Juniper Cloud-Native Router (JCNR) is a containerized router that combines Juniper's proven routing technology with the Junos containerized routing protocol daemon (cRPD) as the controller and a high-performance Data Plane Development Kit (DPDK) or extended Berkeley Packet Filter (eBPF) eXpress Data Path (XDP) datapath-based vRouter forwarding plane. It is implemented in Kubernetes and interacts seamlessly with a Kubernetes container network interface (CNI) framework.
Use Cases
The Cloud-Native Router has the following use cases:
· Radio Access Network (RAN)
The new 5G-only sites are a mix of centralized RAN (C-RAN) and distributed RAN (D-RAN). The C-RAN sites are typically large sites owned by the carrier and continue to deploy physical routers. The D-RAN sites, on the other hand, are tens of thousands of smaller sites, closer to the users.
Optimization of CapEx and OpEx is a huge factor for the large number of D-RAN sites. These sites are also typically leased, with limited space, power, and cooling capacities. There is limited connectivity over leased lines for transit back to the mobile core. Juniper Cloud-Native Router is designed to work within the constraints of a D-RAN. It is integrated with the distributed unit (DU) and installable on an existing 1 U server.
· Telco virtual private cloud (VPC)
The 5G data centers are expanding into cloud hyperscalers to support more radio sites. The cloud-native routing available in public cloud environments does not support the routing demands of telco VPCs, such as MPLS, quality of service (QoS), L3 VPN, and more. The Juniper Cloud-Native Router integrates directly into the cloud as a containerized network function (CNF), managed as a cloud-native Kubernetes component, while providing advanced routing capabilities.
Architecture and Key Components
The Juniper Cloud-Native Router consists of the Junos containerized routing protocol daemon (cRPD) as the control plane (Cloud-Native Router Controller), providing topology discovery, route advertisement and forwarding information base (FIB) programming, as well as dynamic underlays and overlays. It uses the Data Plane Development Kit (DPDK) or eBPF XDP datapath enabled vRouter as a forwarding plane, providing packet forwarding for applications in a pod and host path I/O for protocol sessions. The third component is the Cloud-Native Router container network interface (CNI) that interacts with Kubernetes as a secondary CNI to create pod interfaces, assign addresses, and generate the router configuration. The CNI also provides networking for virtual machines created using KubeVirt.
The Data Plane Development Kit (DPDK) is an open source set of libraries and drivers. DPDK enables fast packet processing by allowing network interface cards (NICs) to send direct memory access (DMA) packets directly into an application’s address space. The applications poll for packets, to avoid the overhead of interrupts from the NIC. Integrating with DPDK allows a vRouter to process more packets per second than is possible when the vRouter runs as a kernel module.
The extended Berkeley Packet Filter (eBPF) is a Linux kernel technology that executes user-defined programs inside a sandbox virtual machine. It enables low-level networking programs to execute with optimal performance. The eXpress Data Path (XDP) framework enables high-speed packet processing for the eBPF programs. Cloud-Native Router supports an eBPF XDP datapath-based vRouter.
In this integrated solution, the Cloud-Native Router Controller uses gRPC, a high-performance remote procedure call framework, to exchange messages and communicate with the vRouter, creating the fully functional Cloud-Native Router. This close communication allows you to:
· Learn about fabric and workload interfaces.
· Provision DPDK or kernel-based interfaces for Kubernetes pods as needed.
· Configure IPv4 and IPv6 address allocation for pods.
· Run routing protocols such as IS-IS, BGP, and OSPF and much more.
Features
· Easy deployment, removal, and upgrade on general purpose compute devices using Helm.
· Higher packet forwarding performance with the DPDK-based JCNR-vRouter.
· Full routing, switching, and forwarding stacks in software.
· Out-of-the-box software-based open radio access network (O-RAN) support.
· Quick spin-up with containerized deployment.
· Highly scalable solution.
· Deployable in Container Network Function (CNF) or Container Network Interface (CNI) modes.
· CNI support for pods and KubeVirt-based VMs.
· L3 features such as transit gateway, support for routing protocols, BFD, VRRP, VRF-Lite, EVPN Type-5, ECMP and BGP Unnumbered, access control lists, and SRv6.
· L2 functionality, such as MAC learning, MAC aging, MAC limiting, native VLAN, L2 statistics, and access control lists (ACLs).
· L2 reachability to Radio Units (RU) for management traffic.
· L2 or L3 reachability to physical distributed units (DU) such as 5G millimeter wave DUs or 4G DUs.
· VLAN tagging and bridge domains.
· Trunk and access ports.
· Support for multiple virtual functions (VF) on Ethernet NICs.
· Support for bonded VF interfaces.
· Rate limiting of egress broadcast, unknown unicast, and multicast traffic on fabric interfaces.
· IPv4 and IPv6 routing.
Juniper Cloud-Native Router Components

SUMMARY
The Juniper Cloud-Native Router solution consists of several components including the Cloud-Native Router Controller, the Data Plane Development Kit (DPDK) or extended Berkeley Packet Filter (eBPF) eXpress Data Path (XDP) datapath-based Cloud-Native Router vRouter, and the JCNR-CNI. This topic provides a brief overview of the components of the Juniper Cloud-Native Router.
IN THIS SECTION
Cloud-Native Router Components | 5
Cloud-Native Router Controller | 7
Cloud-Native Router vRouter | 8
JCNR-CNI | 9
Syslog-NG | 11
Cloud-Native Router Components
The Juniper Cloud-Native Router has primarily three components–the Cloud-Native Router Controller control plane, the Cloud-Native Router vRouter forwarding plane, and the JCNR-CNI for Kubernetes integration. All Cloud-Native Router components are deployed as containers.
Figure 1 on page 6 shows the components of the Juniper Cloud-Native Router inside a Kubernetes cluster when implemented with DPDK based vRouter.
Figure 1: Components of Juniper Cloud-Native Router (DPDK Datapath)
Figure 2 on page 7 shows the components of the Juniper Cloud-Native Router inside a Kubernetes cluster when implemented with eBPF XDP based vRouter.
Figure 2: Components of Juniper Cloud-Native Router (eBPF XDP Datapath)
Cloud-Native Router Controller
The Cloud-Native Router Controller is the control plane of the cloud-native router solution that runs the Junos containerized routing protocol daemon (cRPD). It is implemented as a StatefulSet. The controller communicates with the other elements of the cloud-native router. Configuration, policies, and rules that you set on the controller at deployment time are communicated to the Cloud-Native Router vRouter and other components for implementation. For example, firewall filters (ACLs) configured on the controller are sent to the Cloud-Native Router vRouter (through the vRouter agent).

Juniper Cloud-Native Router Controller functionality:
· Exposes Junos OS compatible CLI configuration and operation commands that are accessible to external automation and orchestration systems using the NETCONF protocol.
· Supports vRouter as the high-speed forwarding plane. This enables applications that are built using the DPDK framework to send and receive packets directly between the application and the vRouter without passing through the kernel.
· Supports configuration of VLAN-tagged sub-interfaces on physical function (PF), virtual function (VF), virtio, access, and trunk interfaces managed by the DPDK-enabled vRouter.
· Supports configuration of bridge domains, VLANs, and virtual switches.
· Advertises DPDK application reachability to the core network using routing protocols, primarily BGP, IS-IS, and OSPF.
· Distributes L3 network reachability information of the pods inside and outside a cluster.
· Maintains configuration for the L2 firewall.
· Passes configuration information to the vRouter through the vRouter-agent.
· Stores license key information.
· Works as a BGP speaker, establishing peer relationships with other BGP speakers to exchange routing information.
· Exports control plane telemetry data to Prometheus and gNMI.

Configuration Options

Use the configlet resource to configure the cRPD pods.
Cloud-Native Router vRouter
The Cloud-Native Router vRouter is a high-performance datapath component. It is an alternative to the Linux bridge or the Open vSwitch (OVS) module in the Linux kernel. It runs as a user-space process. The vRouter functionality is implemented in two pods, one for the vrouter-agent and the vrouter-telemetry-exporter, and the other for the vrouter-agent-dpdk. This split gives you the flexibility to tailor CPU resources to the different vRouter components as needed. The vRouter supports both the Data Plane Development Kit (DPDK) and extended Berkeley Packet Filter (eBPF) eXpress Data Path (XDP) datapath.
NOTE: Cloud-Native Router eBPF XDP Datapath is a “Juniper Technology Preview (Tech Preview)” on page 358 feature. Limited features are supported. See Juniper Cloud-Native Router vRouter Datapath for more details.
Cloud-Native Router vRouter functionality:
· Performs routing with Layer 3 virtual private networks.
· Performs L2 forwarding.
· Supports high-performance DPDK-based forwarding.
· Supports high-performance eBPF XDP datapath-based forwarding.
· Exports data plane telemetry data to Prometheus and gNMI.

Benefits of vRouter:
· High-performance packet processing.
· Forwarding plane provides faster forwarding capabilities than kernel-based forwarding.
· Forwarding plane is more scalable than kernel-based forwarding.
· Support for the following NICs:
· Intel E810 (Columbiaville) family
· Intel XL710 (Fortville) family
· NVIDIA Mellanox ConnectX-6 and ConnectX-7
JCNR-CNI
JCNR-CNI is a new container network interface (CNI) developed by Juniper. JCNR-CNI is a Kubernetes CNI plugin installed on each node to provision network interfaces for application pods and KubeVirt-based virtual machines (VMs). During pod creation, Kubernetes delegates pod interface creation and configuration to JCNR-CNI. JCNR-CNI interacts with the Cloud-Native Router Controller and the vRouter to set up DPDK interfaces. When a pod is removed, JCNR-CNI is invoked to de-provision the pod interface, configuration, and associated state in Kubernetes and cloud-native router components. JCNR-CNI works as a secondary CNI, along with the Multus CNI, to add and configure pod interfaces.

JCNR-CNI functionality:
· Manages the networking tasks in Kubernetes pods, such as:
· assigning IP addresses.
· allocating MAC addresses.
· setting up untagged, access, and other interfaces between the pod and vRouter in a Kubernetes cluster.
· creating VLAN sub-interfaces.
· creating L3 interfaces.
· Acts on pod events such as add and delete.
· Generates cRPD configuration.

The JCNR-CNI manages the secondary interfaces that the pods use. It creates the required interfaces based on the configuration in YAML-formatted network attachment definition (NAD) files. The JCNR-CNI configures some interfaces before passing them to their final location or connection point and provides an API for further interface configuration options such as:
· Instantiating different kinds of pod interfaces.
· Creating virtio-based high-performance interfaces for pods that leverage the DPDK data plane.
· Creating veth pair interfaces that allow pods to communicate using the Linux kernel networking stack.
· Creating pod interfaces in access or trunk mode.
· Attaching pod interfaces to bridge domains and virtual routers.
· Supporting the IPAM plug-in for dynamic IP address allocation.
· Allocating unique socket interfaces for virtio interfaces.
· Managing the networking tasks in pods, such as assigning IP addresses and setting up interfaces between the pod and vRouter in a Kubernetes cluster.
· Connecting pod interfaces to a network, including pod-to-pod and pod-to-network.
· Integrating with the vRouter for offloading packet processing.

Benefits of JCNR-CNI:
· Improved pod interface management
· Customizable administrative and monitoring capabilities
· Increased performance through tight integration with the controller and vRouter components

The role of JCNR-CNI in pod creation: When you create a pod for use in the Cloud-Native Router, the Kubernetes component known as kubelet calls the Multus CNI to set up pod networking and interfaces. Multus reads the annotations section of the pod manifest to find the NADs. If a NAD points to JCNR-CNI as the CNI plug-in, Multus calls JCNR-CNI to set up the pod interface. JCNR-CNI creates the interface as specified in the NAD. JCNR-CNI then generates and pushes a configuration into the controller.
Syslog-NG
Juniper Cloud-Native Router uses a syslog-ng pod to gather event logs from cRPD and vRouter and transform the logs into JSON-based notifications. The notifications are logged to a file. Syslog-ng runs as a daemonset.
Juniper Cloud-Native Router vRouter Datapath
SUMMARY
Cloud-Native Router supports both a Data Plane Development Kit (DPDK) and an extended Berkeley Packet Filter (eBPF) eXpress Data Path (XDP) datapath-based vRouter forwarding plane.
IN THIS SECTION
Data Plane Development Kit (DPDK) | 11
eBPF XDP | 12
The Cloud-Native Router vRouter forwarding plane supports both the Data Plane Development Kit (DPDK) and extended Berkeley Packet Filter (eBPF) eXpress Data Path (XDP) datapath for high-speed packet processing.
Data Plane Development Kit (DPDK)
DPDK is an open-source set of libraries and drivers for rapid packet processing. DPDK enables fast packet processing by allowing network interface cards (NICs) to send direct memory access (DMA) packets directly into an application’s address space. This method of packet routing lets the application poll for packets, which prevents the overhead of interrupts from the NIC.
DPDK’s poll mode drivers (PMDs) use the physical interface (NIC) of a VM’s host instead of the Linux kernel’s interrupt-based drivers. The NIC’s registers operate in user space, which makes them accessible by DPDK’s PMDs. As a result, the host OS does not need to manage the NIC’s registers. This means that the DPDK application manages all packet polling, packet processing, and packet forwarding of a NIC. Instead of waiting for an I/O interrupt to occur, a DPDK application constantly polls for packets and processes these packets immediately upon receiving them. The vRouter dataplane is based on DPDK 24.11.
eBPF XDP
NOTE: This is a “Juniper Technology Preview (Tech Preview)” on page 358 feature.
Cloud-Native Router also supports an eBPF XDP datapath-based vRouter. eBPF (extended Berkeley Packet Filter) is a Linux kernel technology that executes user-defined programs inside a sandbox virtual machine. It enables low-level networking programs to execute with optimal performance. The eXpress Data Path (XDP) framework enables high-speed packet processing for the eBPF programs. Cloud-Native Router supports XDP in native (driver) mode on bare-metal server deployments for limited drivers only. See the System Requirements for more details.
Benefits of eBPF XDP Datapath
Benefits of the eBPF XDP datapath include:
· An eBPF XDP kernel program and its custom library are easier to maintain across kernel versions and have wider kernel compatibility. The kernel dependencies are limited to a small set of eBPF helper functions.
· The program is safer, since it is analyzed by the built-in Linux eBPF verifier before it is loaded into the kernel.
· Offers higher performance by using kernel bypass and omitting socket buffer (skb) allocation.
Supported Cloud-Native Router Features for eBPF XDP
The following Cloud-Native Router features are supported with eBPF XDP for IPv4 traffic only:
· L3 traffic with Cloud-Native Router deployed as a sending, receiving, or transit router
· VRF-Lite
· MPLSoUDP
· IGPs: OSPF, IS-IS
· BGP route advertisements
NOTE: When deploying JCNR, you can configure the agentModeType attribute in the Helm chart to select either a DPDK-based or an eBPF XDP datapath-based vRouter.
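For example, the datapath selection might appear in the deployer values.yaml as in the fragment below. Only the agentModeType key name is taken from this guide; the nesting and the literal mode values shown are assumptions, so check the values.yaml shipped with your JCNR release for the exact schema.

```yaml
# Fragment of a JCNR deployer values.yaml (illustrative nesting and values).
common:
  agentModeType: dpdk   # assumed value selecting the DPDK datapath;
                        # the eBPF XDP datapath uses the alternative value
                        # documented for your release (for example, "xdp")
```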
Cloud-Native Router Deployment Modes
SUMMARY
Read this topic to know about the various modes of deploying the cloud-native router.
IN THIS SECTION
Deployment Modes | 13
Deployment Modes
Starting with Juniper Cloud-Native Router Release 23.2, you can deploy and operate Juniper Cloud-Native Router in L2, L3, and L2-L3 modes. The mode is auto-derived from the interface configuration in the values.yaml file prior to deployment.
NOTE: In the values.yaml file:
· When all the interfaces have an interface_mode key configured, the mode of deployment is L2.
· When one or more interfaces have an interface_mode key configured and some of the interfaces do not have the interface_mode key configured, the mode of deployment is L2-L3.
· When none of the interfaces have the interface_mode key configured, the mode of deployment is L3.
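The mode-derivation rules can be illustrated with a values.yaml fragment. The fabricInterface nesting and key names below are a sketch based on the interface_mode semantics described in this topic; the exact schema may differ by release, so treat everything except the interface_mode key as an assumption.

```yaml
# Illustrative fabric interface fragment: a mixed configuration
# (one interface with interface_mode, one without) yields L2-L3 mode.
fabricInterface:
  - eth1:
      interface_mode: trunk      # interface_mode present -> switched (L2) side
      vlan-id-list: [100, 200]   # assumed key for the trunk's VLANs
  - eth2: {}                     # no interface_mode key -> routed (L3) side
```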
In L2 mode, the cloud-native router behaves like a switch; it does not perform any routing functions and does not run any routing protocols. The pod network uses VLANs to direct traffic to various destinations.
In L3 mode, the cloud-native router behaves like a router and therefore performs routing functions and runs routing protocols such as ISIS, BGP, OSPF, and segment routing-MPLS. In L3 mode, the pod network is divided into an IPv4 or IPv6 underlay network and an IPv4 or IPv6 overlay network. The underlay network is used for control plane traffic.
The L2-L3 mode provides the functionality of both the switch and the router at the same time. It enables Cloud-Native Router to act as both a switch and a router simultaneously by performing switching on one set of interfaces and routing on the other set of interfaces. Cell site routers in a 5G deployment need to handle both L2 and L3 traffic. DHCP packets from a radio unit (RU) are an example of L2 traffic, and data packets moving from the outdoor unit (ODU) to the central unit (CU) are an example of L3 traffic.
Cloud-Native Router Interfaces Overview
SUMMARY
This topic provides information on the network communication interfaces provided by the JCNR-Controller. Fabric interfaces are aggregated interfaces that receive traffic from multiple interfaces. Interfaces to which different workloads are connected are called workload interfaces.
IN THIS SECTION
Juniper Cloud-Native Router Interface Types | 14
Cloud-Native Router Interface Details | 15
Read this topic to understand the network communication interfaces provided by the JCNR-Controller. We cover interface names, what they connect to, how they communicate and the services they provide.
Juniper Cloud-Native Router Interface Types
Juniper Cloud-Native Router supports two types of interfaces:
· Fabric interfaces–Aggregated interfaces that receive traffic from multiple interfaces. Fabric interfaces are always physical interfaces. They can be either a physical function (PF) or a virtual function (VF). The throughput requirement for these interfaces is higher, hence multiple hardware queues are allocated to them. Each hardware queue is allocated a dedicated CPU core. The interfaces are configured for the cloud-native router using the appropriate values.yaml file in the deployer Helm charts. You can view the interface mapping using the dpdkinfo -c command (see the "Troubleshoot using the vRouter CLI" on page 339 topic for more details). You also have fabric workload interfaces that have a low throughput requirement. Only one hardware queue is allocated to such an interface, thereby saving precious CPU resources. These interfaces can be configured using the appropriate values.yaml file in the deployer Helm charts.
· Workload interfaces–Interfaces to which different workloads are connected. They can be either software-based or hardware-based interfaces. Software-based interfaces (pod interfaces) are either high-performance interfaces using the Data Plane Development Kit (DPDK) poll mode driver (PMD) or low-performance interfaces using the kernel driver. Typically, the DPDK interfaces are used for data traffic, such as GPRS Tunneling Protocol for user data (GTP-U) traffic, and the kernel-based interfaces are used for control plane data traffic such as TCP. The kernel pod interfaces are typically for operations, administration, and maintenance (OAM) traffic or are used by non-DPDK pods. The kernel pod interfaces are configured as a veth pair, with one end of the interface in the pod and the other end in the Linux kernel on the host. The DPDK native pod interfaces (virtio interfaces) are plumbed as vhost-user interfaces to the DPDK vRouter by the CNI. Cloud-Native Router also supports bonded interfaces via the link bonding PMD. These interfaces can be configured using the appropriate values.yaml file in the deployer Helm charts.
Cloud-Native Router supports different types of VLAN interfaces, including trunk, access, and sub-interfaces, across fabric and workload interfaces.
Cloud-Native Router Interface Details
The different Cloud-Native Router interfaces are described in detail below:
Agent Interface

The vRouter has only one agent interface. The agent interface enables communication between the vRouter-agent and the vRouter containers. On the vRouter CLI, when you issue the vif --list command, the agent interface looks like this:
vif0/0      Socket: unix Type:Agent HWaddr:00:00:5e:00:01:00 Vrf:65535
            Flags:L2 QOS:-1 Ref:3
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:0  bytes:0 errors:0
            TX packets:650  bytes:99307 errors:0
            Drops:0
L3 Fabric Interface (DPDK)

A Layer 3 fabric interface bound to DPDK. The L3 fabric interface in cRPD can be reviewed on the cRPD shell using the Junos show interfaces command:
show interfaces routing ens2f2
Interface        State       Addresses
ens2f2           Up          MPLS  enabled
                             ISO   enabled
                             INET  192.21.2.4
                             INET6 2001:192:21:2::4
                             INET6 fe80::c5da:7e9c:e168:56d7
                             INET6 fe80::a0be:69ff:fe59:8b58
The corresponding physical and tap interfaces can be seen on the vRouter using the vif --list command on the vRouter shell.
vif0/1      PCI: 0000:17:01.1 (Speed 25000, Duplex 1) NH: 7 MTU: 9000   <- PCI Address
            Type:Physical HWaddr:d6:93:87:91:45:6c IPaddr:192.21.2.4    <- Physical interface
            IP6addr:2001:192:21:2::4                                    <- IPv6 address
            DDP: OFF SwLB: ON
            Vrf:2 Mcast Vrf:2 Flags:L3L2Vof QOS:0 Ref:16                <- L3 (only) interface
            RX port packets:423168341 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:17:01.1  Status: UP  Driver: net_iavf
            RX packets:423168341  bytes:29123418594 errors:0
            TX packets:417508247  bytes:417226216530 errors:0
            Drops:8
            TX port packets:417508247 errors:0

vif0/2      PMD: ens2f2 NH: 12 MTU: 9000            <- Tap interface name as seen by cRPD
            Type:Host HWaddr:d6:93:87:91:45:6c IPaddr:192.21.2.4        <- Tap interface type
            IP6addr:2001:192:21:2::4
            DDP: OFF SwLB: ON
            Vrf:2 Mcast Vrf:65535 Flags:L3DProxyEr QOS:-1 Ref:15
            TxXVif:1                                <- cross-connected to vif 1
            RX device packets:306995  bytes:25719830 errors:0
            RX queue packets:306995 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:306995  bytes:25719830 errors:0
            TX packets:307489  bytes:25880250 errors:0
            Drops:0
            TX queue packets:307489 errors:0
            TX device packets:307489  bytes:25880250 errors:0
L3 Bond Interface (DPDK)

A Layer 3 bond interface bound to DPDK.
show interfaces routing bond34
Interface        State       Addresses
bond34           Up          INET6 2001:192:7:7::4
                             ISO   enabled
                             INET  192.7.7.4
                             INET6 fe80::527c:6fff:fe48:7574
vif0/3      PCI: 0000:00:00.0 (Speed 25000, Duplex 1) NH: 6 MTU: 1514   <- Bond interface (PCI id 0)
            Type:Physical HWaddr:50:7c:6f:48:75:74 IPaddr:192.7.7.4     <- Physical interface
            IP6addr:2001:192:7:7::4
            DDP: OFF SwLB: ON
            Vrf:1 Mcast Vrf:1 Flags:TcL3L2Vof QOS:0 Ref:18
            RX port packets:402183888 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: eth_bond_bond34  Status: UP  Driver: net_bonding  <- Bonded master
            Slave Interface(0): 0000:5e:00.0  Status: UP  Driver: net_ice       <- Bond slave - 1
            Slave Interface(1): 0000:af:00.0  Status: UP  Driver: net_ice       <- Bond slave - 2
            RX packets:402183888  bytes:49519387070 errors:0
            TX packets:79226  bytes:7330912 errors:0
            Drops:1393
            TX port packets:79226 errors:0

vif0/4      PMD: bond34 NH: 11 MTU: 9000
            Type:Host HWaddr:50:7c:6f:48:75:74 IPaddr:192.7.7.4         <- Tap interface
            IP6addr:2001:192:7:7::4
            DDP: OFF SwLB: ON
            Vrf:1 Mcast Vrf:65535 Flags:L3DProxyEr QOS:-1 Ref:15
            TxXVif:3                                <- Tap interface for bond
            RX device packets:76357  bytes:7101918 errors:0
            RX queue packets:76357 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:76357  bytes:7101918 errors:0
            TX packets:75349  bytes:6946908 errors:0
            Drops:0
            TX queue packets:75349 errors:0
            TX device packets:75349  bytes:6946908 errors:0
L3 Pod VLAN Sub-Interface (DPDK)
Starting in Juniper Cloud-Native Router Release 23.2, the cloud-native router supports the use of VLAN sub-interfaces in L3 mode, bound to DPDK. Corresponding interface state in cRPD:
show interfaces routing ens1f0v1.201
Interface
State Addresses
ens1f0v1.201
Up MPLS enabled
ISO enabled
INET6 fe80::b89c:fff:feab:e2c9
vif0/2
PCI: 0000:17:01.1 (Speed 25000, Duplex 1) NH: 7 MTU: 9000 Type:Physical HWaddr:d6:93:87:91:45:6c IPaddr:0.0.0.0 IP6addr:fe80::d493:87ff:fe91:456c <- IPv6 address DDP: OFF SwLB: ON Vrf:2 Mcast Vrf:2 Flags:L3L2Vof QOS:0 Ref:16 <- L3 (only) interface RX port packets:423168341 errors:0 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Fabric Interface: 0000:17:01.1 Status: UP Driver: net_iavf RX packets:423168341 bytes:29123418594 errors:0 TX packets:417508247 bytes:417226216530 errors:0 Drops:8 TX port packets:417508247 errors:0
vif0/5
PMD: ens1f0v1 NH: 12 MTU: 9000 Type:Host HWaddr:d6:93:87:91:45:6c IPaddr:0.0.0.0 IP6addr:fe80::d493:87ff:fe91:456c DDP: OFF SwLB: ON Vrf:2 Mcast Vrf:65535 Flags:L3DProxyEr QOS:-1 Ref:15 TxXVif:2 <- L3 (only) tap interface
RX device packets:306995 bytes:25719830 errors:0 RX queue packets:306995 errors:0 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:306995 bytes:25719830 errors:0 TX packets:307489 bytes:25880250 errors:0 Drops:0 TX queue packets:307489 errors:0 TX device packets:307489 bytes:25880250 errors:0
vif0/9
Virtual: ens1f0v1.201 Vlan(o/i)(,S): 201/201 Parent:vif0/2 NH: 36 MTU: 1514 <- VLAN fabric sub-intf with parent as vif 2 and VLAN tag 201
Type:Virtual(Vlan) HWaddr:d6:93:87:91:45:6c IPaddr:103.1.1.2
IP6addr:fe80::d493:87ff:fe91:456c
DDP: OFF SwLB: ON
Vrf:1 Mcast Vrf:1 Flags:L3DProxyEr QOS:-1 Ref:4
RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RX packets:0 bytes:0 errors:0
TX packets:0 bytes:0 errors:0
Drops:0
vif0/10 Virtual: ens1f0v1.201 Vlan(o/i)(,S): 201/201 Parent:vif0/5 NH: 21 MTU: 9000 Type:Virtual(Vlan) HWaddr:d6:93:87:91:45:6c IPaddr:103.1.1.2 IP6addr:fe80::d493:87ff:fe91:456c DDP: OFF SwLB: ON Vrf:1 Mcast Vrf:65535 Flags:L3DProxyEr QOS:-1 Ref:4 TxXVif:9 <- VLAN tap sub-intf
cross connected to fabric sub-intf vif 9 and parent as tap intf vif 5 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
vif0/50
PMD: vhostnet1-9403fd77-648a-47 NH: 177 MTU: 9160 <- pod interface
Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:0.0.0.0 DDP: OFF SwLB: ON Vrf:65535 Mcast Vrf:65535 Flags:L3DProxyEr QOS:-1 Ref:20 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
vif0/51 Virtual: vhostnet1-9403fd77-648a-47.201 Vlan(o/i)(,S): 201/201 NH: 17 MTU: 1514 Parent:vif0/50 <- L3 pod sub-interface, parent is the pod interface
Type:Virtual(Vlan) HWaddr:00:00:5e:00:01:00 IPaddr:99.62.0.2
IP6addr:1234::633e:2
DDP: OFF SwLB: ON
Vrf:2 Mcast Vrf:2 Flags:PL3DProxyEr QOS:-1 Ref:4
RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
RX packets:0 bytes:0 errors:0
TX packets:0 bytes:0 errors:0
Drops:0
L3 Pod Kernel Interface
These are non-DPDK L3 pod interfaces. Interface state in the cRPD:
show interfaces routing jvknet1-0af476e
Interface
State Addresses
jvknet1-0af476e Up INET6 enabled
INET6 abcd:2:51:1::4
ISO enabled
INET enabled
INET 2.51.1.4
vif0/13
Ethernet: jvknet1-0af476e NH: 35 MTU: 9160 <- Kernel interface (jvk) of CNF Type:Virtual HWaddr:00:00:5e:00:01:00 IPaddr:2.51.1.4 <- pod/ workload IP6addr:abcd:2:51:1::4 DDP: OFF SwLB: ON Vrf:1 Mcast Vrf:1 Flags:PL3DVofProxyEr QOS:-1 Ref:11 RX port packets:47 errors:0 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:47 bytes:13012 errors:0 TX packets:0 bytes:0 errors:0 Drops:47
L2 Fabric Interface (DPDK, Physical Trunk)
DPDK L2 fabric interfaces, which are associated with the physical network interface card (NIC) on the host server, accept traffic from multiple VLANs. The trunk interfaces accept only tagged packets. Any untagged packets are dropped. These interfaces can accept a VLAN filter to allow only specific VLAN packets. A trunk interface can be a part of multiple bridge-domains (BD). A bridge domain is a set of logical ports that share the same flooding or broadcast characteristics. Like a VLAN, a bridge domain spans one or more ports of multiple devices.
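The trunk admission rules above (tagged-only, optional VLAN filter) can be sketched as a small model. This is illustrative only; the function `trunk_accepts` is a hypothetical name, not part of any JCNR or Junos API:

```python
# Sketch of L2 trunk-interface admission rules (illustrative, not JCNR code):
# only tagged frames are accepted, and an optional VLAN filter such as
# "vlan-id-list 1001-1100" restricts which tags pass.

def trunk_accepts(vlan_tag, vlan_filter=None):
    """Return True if a frame with this VLAN tag is accepted on a trunk.

    vlan_tag    -- the 802.1Q tag, or None for an untagged frame
    vlan_filter -- optional range/set of allowed VLAN IDs
    """
    if vlan_tag is None:        # trunk interfaces drop untagged frames
        return False
    if vlan_filter is None:     # no filter: any tagged frame passes
        return True
    return vlan_tag in vlan_filter

allowed = range(1001, 1101)          # matches vlan-id-list 1001-1100
print(trunk_accepts(1050, allowed))  # True  - tagged, in the allowed list
print(trunk_accepts(None, allowed))  # False - untagged frames are dropped
print(trunk_accepts(200, allowed))   # False - tag outside the VLAN filter
```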
The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):
interfaces {
    ens786f0v0 {
        unit 0 {
            family bridge {
                interface-mode trunk;
                vlan-id-list 1001-1100;
            }
        }
    }
}
On the vRouter CLI when you issue the vif --list command, the DPDK VF fabric interface looks like this:
vif0/1
PCI: 0000:31:01.0 (Speed 10000, Duplex 1) Type:Physical HWaddr:d6:22:c5:42:de:c3 Vrf:65535 Flags:L2Vof QOS:-1 Ref:12 RX queue packets:11813 errors:1 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 1 0 Fabric Interface: 0000:31:01.0 Status: UP Driver: net_iavf Vlan Mode: Trunk Vlan: 1001-1100 RX packets:0 bytes:0 errors:49962 TX packets:18188356 bytes:2037400554 errors:0 Drops:49963
DPDK L2 Bond Interface (Active-Standby, Trunk)
Layer 2 bond interfaces accept traffic from multiple VLANs. A bond interface runs in active-backup mode (mode 1). You define the bond interface in the helm chart configuration as follows:
bondInterfaceConfigs:
  - name: "bond0"
    mode: 1    # ACTIVE_BACKUP MODE
    slaveInterfaces:
      - "ens2f0v1"
      - "ens2f1v1"

- bond0:
    ddp: "auto"
    interface_mode: trunk
    vlan-id-list: [1001-1100]
    storm-control-profile: rate_limit_pf1
    native-vlan-id: 1001
    no-local-switching: true
The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):
interfaces {
    bond0 {
        unit 0 {
            family bridge {
                interface-mode trunk;
                vlan-id-list 1001-1100;
            }
        }
    }
}
On the vRouter CLI when you issue the vif --list command, the bond interface looks like this:
vif0/2
PCI: 0000:00:00.0 (Speed 10000, Duplex 1) Type:Physical HWaddr:32:f8:ad:8c:d3:bc Vrf:65535 Flags:L2Vof QOS:-1 Ref:8 RX queue packets:1882 errors:0
RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 Fabric Interface: eth_bond_bond0 Status: UP Driver: net_bonding Slave Interface(0): 0000:81:01.0 Status: UP Driver: net_iavf Slave Interface(1): 0000:81:03.0 Status: UP Driver: net_iavf Vlan Mode: Trunk Vlan: 1001-1100 RX packets:8108366000 bytes:486501960000 errors:4234 TX packets:65083776 bytes:4949969408 errors:0 Drops:8108370394
DPDK L2 Pod Interface

DPDK L2 Pod Interface (Virtio Trunk)
The trunk interfaces accept only tagged packets. Any untagged packets are dropped. These interfaces can accept a VLAN filter to allow only specific VLAN packets. A trunk interface can be a part of multiple bridge-domains (BD). A bridge domain is a set of logical ports that share the same flooding or broadcast characteristics. Like a VLAN, a bridge domain spans one or more ports of multiple devices. Virtio interfaces are associated with pod interfaces that use virtio on the DPDK data plane.
The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):
interfaces {
    vhost242ip-93883f16-9ebb-4acf-b {
        unit 0 {
            family bridge {
                interface-mode trunk;
                vlan-id-list 1001-1003;
            }
        }
    }
}
On the vRouter CLI when you issue the vif --list command, the virtio interface with the DPDK data plane looks like this:
vif0/3
PMD: vhost242ip-93883f16-9ebb-4acf-b Type:Virtual HWaddr:00:16:3e:7e:84:a3 Vrf:65535 Flags:L2 QOS:-1 Ref:13 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Vlan Mode: Trunk Vlan: 1001-1003 RX packets:0 bytes:0 errors:0 TX packets:10604432 bytes:1314930908 errors:0 Drops:0 TX port packets:0 errors:10604432
L2 Pod Kernel Interface (Access)
The access interfaces accept both tagged and untagged packets. Untagged packets are tagged with the access VLAN or access BD. Any tagged packets other than the ones with the access VLAN are dropped. An access interface is part of a single bridge domain and does not have a parent interface.
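The access-mode rules can be sketched as follows. This is an illustrative model (the function `access_ingress` is a hypothetical name, not a JCNR API):

```python
# Sketch of L2 access-interface handling (illustrative): untagged frames are
# tagged with the access VLAN; tagged frames are accepted only if they carry
# the access VLAN, otherwise they are dropped.

ACCESS_VLAN = 1001  # matches the bd1001 example in this section

def access_ingress(vlan_tag, access_vlan=ACCESS_VLAN):
    """Return the effective VLAN for an accepted frame, or None if dropped."""
    if vlan_tag is None:
        return access_vlan        # untagged: tag with the access VLAN
    if vlan_tag == access_vlan:
        return vlan_tag           # tagged with the access VLAN: accept
    return None                   # any other tag: drop

print(access_ingress(None))   # 1001 - untagged frame picks up the access VLAN
print(access_ingress(1001))   # 1001 - matching tag is accepted
print(access_ingress(2000))   # None - mismatching tag is dropped
```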
The cRPD interface configuration using the show configuration command looks like this (the output is trimmed for brevity):
routing-instances {
    switch {
        instance-type virtual-switch;
        bridge-domains {
            bd1001 {
                vlan-id 1001;
                interface jvknet1-eed79ff;
            }
        }
    }
}
On the vRouter CLI when you issue the vif --list command, the veth pair interface looks like this:
vif0/4
Ethernet: jvknet1-88c44c3 Type:Virtual HWaddr:02:00:00:3a:8f:73 Vrf:0 Flags:L2Vof QOS:-1 Ref:10 RX queue packets:524 errors:0 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Vlan Mode: Access Vlan Id: 1001 OVlan Id: 1001 RX packets:9 bytes:802 errors:515 TX packets:0 bytes:0 errors:0 Drops: 525
L2 Pod VLAN Sub-interface (DPDK)
You can configure a user pod with a Layer 2 VLAN sub-interface and attach it to the Cloud-Native Router instance. VLAN sub-interfaces are like logical interfaces on a physical switch or router. They accept only tagged packets that match the configured VLAN tag. A sub-interface has a parent interface, and a parent interface can have multiple sub-interfaces, each with its own VLAN ID. When you run the cloud-native router, you must associate each sub-interface with a specific VLAN.
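The parent-to-sub-interface relationship amounts to demultiplexing frames by VLAN tag. A minimal sketch, with an illustrative lookup table keyed on the sub-interface from this section (the function `demux` is hypothetical, not a JCNR API):

```python
# Sketch of VLAN sub-interface demultiplexing (illustrative): a parent
# interface maps each configured VLAN ID to one sub-interface; frames whose
# tag matches no sub-interface are not delivered.

subif_by_vlan = {
    3003: "vhostnet1-71cd7db1-1a5e-49.3003",  # sub-interface from this section
}

def demux(vlan_tag, table=subif_by_vlan):
    """Return the sub-interface that receives a frame with this tag, if any."""
    return table.get(vlan_tag)  # no match: the frame is not delivered

print(demux(3003))  # 'vhostnet1-71cd7db1-1a5e-49.3003'
print(demux(100))   # None - no sub-interface for this VLAN tag
```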
The cRPD interface configuration viewed using the show configuration command is as shown below (the output is trimmed for brevity).
For L2:
routing-instances {
    switch {
        instance-type virtual-switch;
        bridge-domains {
            bd3003 {
                vlan-id 3003;
                interface vhostnet1-71cd7db1-1a5e-49.3003;
            }
        }
    }
}
On the vRouter, a VLAN sub-interface configuration is as shown below:
vif0/4
PMD: vhostnet1-71cd7db1-1a5e-49 MTU: 9160 Type:Virtual HWaddr:02:00:00:84:dc:42 DDP: OFF SwLB: ON Vrf:65535 Flags:L2 QOS:-1 Ref:14 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0 TX port packets:0 errors:293
vif0/5
Virtual: vhostnet1-71cd7db1-1a5e-49.3003 Vlan(o/i)(,S): 3003/3003 Parent:vif0/4 Type:Virtual(Vlan) HWaddr:00:99:99:99:33:09 Vrf:0 Flags:L2 QOS:-1 Ref:3
RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
RELATED DOCUMENTATION
Cloud-Native Router Use-Cases and Configuration Overview
CHAPTER 2
L2 Features
IN THIS CHAPTER
L2 Features Overview | 28
Static VXLAN with IPv4 and IPv6 Underlay | 28
Layer 2 Circuit | 35
Loop Detection in Pure L2 Mode | 50
Access Control Lists (Firewall Filters) | 52
MAC Learning and Aging | 55
APIs and CLI Commands for Bond Interfaces | 57
Native VLAN | 60
Enabling Dynamic Device Personalization (DDP) on Individual Interfaces | 62
L2 Features Overview
SUMMARY Read this topic to learn about the features available in the Juniper Cloud-Native Router when deployed in L2 (switch) mode.
The Juniper Cloud-Native Router supports multiple "deployment modes" on page 13. In L2 mode, the cloud-native router behaves like a switch: it performs no routing functions and runs no routing protocols. The pod network uses VLANs to direct traffic to various destinations. This chapter provides information about the various L2 features supported by JCNR.
Static VXLAN with IPv4 and IPv6 Underlay
SUMMARY
Juniper Cloud-Native Router supports static VXLAN to extend Layer 2 networks over a Layer 3 IP underlay through manually configured VXLAN tunnels.
IN THIS SECTION
Configuration | 29
Verification | 31
Static VXLAN (Virtual Extensible LAN) enables organizations to extend Layer 2 networks over a Layer 3 IP underlay through manually configured VXLAN tunnels. The VXLAN Network Identifier (VNI) and VXLAN Tunnel Endpoint (VTEP) are configured manually, instead of relying on EVPN control plane protocols to dynamically discover MAC-to-VTEP mappings. Static VXLAN is ideal for small-scale environments, air-gapped systems, edge computing nodes, and deployments that require predictable behavior and a simpler operational model. Static VXLAN offers key advantages, including lower complexity, reduced resource consumption, and easier deployment in minimalistic infrastructures. However, it also requires greater diligence in configuration and operational monitoring to prevent inconsistencies and outages. Since dynamic advertisement and auto-discovery are not available, all failover and redundancy mechanisms must be carefully planned. You can read more about static VXLAN in the Junos documentation.
Static VXLAN is often implemented in edge, enterprise, and telecom use cases, such as 5G networks (slice isolation between the DU and CU), extending an enterprise LAN across geographies, and remote branch deployments.
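Because there is no control plane, the forwarding decision reduces to a static lookup. A minimal sketch of that decision, with illustrative addresses and a hypothetical function name (`vxlan_destinations` is not a JCNR API):

```python
# Sketch of the static-VXLAN forwarding decision (illustrative): a locally
# learned MAC maps to one remote VTEP; unknown-unicast and broadcast frames
# are replicated to every statically configured remote VTEP for the VNI.

static_remote_vteps = ["10.5.5.5", "2001:db8:10:5:5::5"]  # per bridge domain
mac_to_vtep = {"00:10:94:00:00:05": "10.5.5.5"}           # local MAC learning

def vxlan_destinations(dest_mac):
    """Return the list of remote VTEPs a frame is tunneled to."""
    if dest_mac in mac_to_vtep:          # known unicast
        return [mac_to_vtep[dest_mac]]
    return list(static_remote_vteps)     # BUM: flood to all static VTEPs

print(vxlan_destinations("00:10:94:00:00:05"))  # ['10.5.5.5']
print(vxlan_destinations("ff:ff:ff:ff:ff:ff"))  # both static VTEPs
```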
Configuration
Static VXLAN configuration includes multiple VTEPs, each configured with a set of VNIs and corresponding remote VTEPs. There is no central controller or signaling mechanism. Traffic flows based on static mappings and local MAC learning. You must configure the following elements on the Cloud-Native Router to bring up static VXLAN:
· Configure IP loopback interface or source interface for VXLAN
· Assign a unique VNI for each logical Layer 2 domain
· Ensure reachability for all configured remote VTEPs
· Enable VLAN tagging or bridge domains to map to VNIs
· Set the MTU in the deployment helm chart to accommodate VXLAN header overhead
NOTE: Cloud-Native Router must be deployed in L2 or L2-L3 mode to support bridge domains.
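The MTU requirement in the last bullet follows from the size of the added headers. A rough budget, under the common assumption of an outer Ethernet + IP + UDP + VXLAN encapsulation (the function names are illustrative):

```python
# Rough VXLAN MTU budget (illustrative): the encapsulation adds an outer
# Ethernet, IP, UDP, and VXLAN header, so the fabric MTU must exceed the
# workload MTU by that overhead. An IPv6 underlay needs 20 more bytes
# than IPv4.

OUTER_ETH, UDP, VXLAN = 14, 8, 8

def vxlan_overhead(ipv6_underlay=False):
    outer_ip = 40 if ipv6_underlay else 20
    return OUTER_ETH + outer_ip + UDP + VXLAN

def required_fabric_mtu(workload_mtu, ipv6_underlay=False):
    return workload_mtu + vxlan_overhead(ipv6_underlay)

print(vxlan_overhead())                    # 50 bytes over IPv4
print(vxlan_overhead(ipv6_underlay=True))  # 70 bytes over IPv6
print(required_fabric_mtu(1500))           # 1550
```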
You must perform static VXLAN configuration using a Configlet. Review Customize JCNR Configuration for more details. A sample configlet is provided below:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set interfaces lo0 unit 0 family inet address 10.3.3.3/32
    set interfaces lo0 unit 0 family inet6 address 2001:db8:10:3:3::3/128
    set interfaces enp94s0f2v0 unit 0 family bridge interface-mode trunk
    set interfaces enp94s0f2v0 unit 0 family bridge vlan-id-list 201-205
    set routing-instances vswitch instance-type virtual-switch
    set routing-instances vswitch interface enp94s0f2v0
    set routing-instances vswitch vtep-source-interface lo0.0
    set routing-instances vswitch bridge-domains bd201 vlan-id 201
    set routing-instances vswitch bridge-domains bd201 vxlan vni 2001
    set routing-instances vswitch bridge-domains bd201 vxlan static-remote-vtep-list 2001:db8:10:5:5::5
    set routing-instances vswitch bridge-domains bd202 vlan-id 202
    set routing-instances vswitch bridge-domains bd202 vxlan vni 2002
    set routing-instances vswitch bridge-domains bd202 vxlan static-remote-vtep-list 10.5.5.5
    set routing-instances vswitch bridge-domains bd203 vlan-id 203
    set routing-instances vswitch bridge-domains bd203 vxlan vni 2003
    set routing-instances vswitch bridge-domains bd203 vxlan static-remote-vtep-list 10.5.5.5
    set routing-instances vswitch bridge-domains bd204 vlan-id 204
    set routing-instances vswitch bridge-domains bd204 vxlan vni 2004
    set routing-instances vswitch bridge-domains bd204 vxlan static-remote-vtep-list 10.5.5.5
    set routing-instances vswitch bridge-domains bd205 vlan-id 205
    set routing-instances vswitch bridge-domains bd205 vxlan vni 2005
    set routing-instances vswitch bridge-domains bd205 vxlan static-remote-vtep-list 2001:db8:10:5:5::5
  crpdSelector:
    matchLabels:
      node: worker
You can also configure a Layer 2 circuit (L2CKT) with static VXLAN, such that Layer 2 control traffic can tunnel over a VXLAN overlay network by manually configuring the tunnel endpoints. The L2CKT stitching requires lt interface pairing: one lt unit of the pair is part of the bridge domain with encapsulation ethernet-bridge, and the other participates in the L2 circuit with encapsulation ethernet-ccc. A sample configlet is provided below:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set interfaces lo0 unit 0 family inet address 10.3.3.3/32
    set interfaces lo0 unit 0 family inet6 address 2001:db8:10:3:3::3/128
    set interfaces lt unit 1 peer-unit 2
    set interfaces lt unit 1 vlan-id 102
    set interfaces lt unit 1 encapsulation ethernet-bridge
    set interfaces lt unit 2 peer-unit 1
    set interfaces lt unit 2 encapsulation ethernet-ccc
    set interfaces lt unit 3 peer-unit 4
    set interfaces lt unit 3 vlan-id 104
    set interfaces lt unit 3 encapsulation ethernet-bridge
    set interfaces lt unit 4 peer-unit 3
    set interfaces lt unit 4 encapsulation ethernet-ccc
    set protocols l2circuit neighbor 111.1.1.1 interface lt.2 virtual-circuit-id 102
    set protocols l2circuit neighbor 111.1.1.1 interface lt.2 ignore-encapsulation-mismatch
    set protocols l2circuit neighbor 111.1.1.1 interface lt.2 ignore-mtu-mismatch
    set protocols l2circuit neighbor 111.1.1.1 interface lt.2 pseudowire-status-tlv
    set protocols l2circuit neighbor 111.1.1.1 interface lt.4 virtual-circuit-id 104
    set protocols l2circuit neighbor 111.1.1.1 interface lt.4 no-control-word
    set protocols l2circuit neighbor 111.1.1.1 interface lt.4 ignore-encapsulation-mismatch
    set protocols l2circuit neighbor 111.1.1.1 interface lt.4 ignore-mtu-mismatch
    set protocols l2circuit neighbor 111.1.1.1 interface lt.4 pseudowire-status-tlv
    set routing-instances static-vxlan instance-type virtual-switch
    set routing-instances static-vxlan vtep-source-interface lo0.0
    set routing-instances static-vxlan bridge-domains bd102 vlan-id 102
    set routing-instances static-vxlan bridge-domains bd102 interface lt.1
    set routing-instances static-vxlan bridge-domains bd102 vxlan vni 1002
    set routing-instances static-vxlan bridge-domains bd102 vxlan static-remote-vtep-list 10.7.7.7
    set routing-instances static-vxlan bridge-domains bd104 vlan-id 104
    set routing-instances static-vxlan bridge-domains bd104 interface lt.3
    set routing-instances static-vxlan bridge-domains bd104 vxlan vni 1004
    set routing-instances static-vxlan bridge-domains bd104 vxlan static-remote-vtep-list 2001:db8:10:7:7::7
  crpdSelector:
    matchLabels:
      node: worker
Verification
You can verify the static VXLAN configuration using the "vRouter CLI" on page 356:
· Verify the interface list corresponding to the lt interfaces:
bash-5.1# vif --list
Vrouter Interface Table
…
vif0/9
Virtual: lt.1 Vlan: 102
Type:LT HWaddr:00:00:5e:00:01:00 DDP: OFF SwLB: ON Vrf:2 Flags:L2L QOS:-1 Ref:6 TxXVif:10 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Vlan Mode: Access Vlan Id: 102 OVlan Id: 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
vif0/10
Virtual: lt.2 NH: 66 Type:LT HWaddr:00:00:5e:00:01:00 IPaddr:0.0.0.0 DDP: OFF SwLB: ON Vrf:65535 Mcast Vrf:65535 Flags:L3ProxyEr QOS:-1 Ref:4 TxXVif:9 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
vif0/11
Virtual: lt.3 Vlan: 104 Type:LT HWaddr:00:00:5e:00:01:00 DDP: OFF SwLB: ON Vrf:2 Flags:L2L QOS:-1 Ref:6 TxXVif:12 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 Vlan Mode: Access Vlan Id: 104 OVlan Id: 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
vif0/12
Virtual: lt.4 NH: 69 Type:LT HWaddr:00:00:5e:00:01:00 IPaddr:0.0.0.0 DDP: OFF SwLB: ON Vrf:65535 Mcast Vrf:65535 Flags:L3ProxyEr QOS:-1 Ref:4 TxXVif:11 RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 RX packets:0 bytes:0 errors:0 TX packets:0 bytes:0 errors:0 Drops:0
· Verify the VXLAN table using the vxlan --dump command:
bash-5.1# vxlan --dump
VXLAN Table
Flags: L=Local bridge learn
VNID NextHop BD Flags
—————————
… <trimmed>
· Verify the bridge domain table using the bd --dump command:

bash-5.1# bd --dump
Bridge Domain (BD) Table
-------------------------------------------
VRF     VLAN    BD
-------------------------------------------
1       102     10
1       104     13
2       201     3
2       202     14
2       203     4
2       204     5
2       205     1
-------------------------------------------
· Verify the routes in the vRouter bridge table for VRFs 1 and 2 corresponding to the static-vxlan and vswitch routing instances defined in the configuration:
bash-5.1# rt --dump 1 --family bridge
Flags: L=Label Valid, Df=DHCP flood, Mm=Mac Moved, L2c=L2 Evpn Control Word, N=New Entry, Ec=EvpnControlProcessing
vRouter bridge table 0/1
Index     BdID   DestMac             Flags   Label/VNID   Nexthop   Stats
33808     13     0:10:94:0:0:c       LDf     1004         48        17313339
67044     10     0:10:94:0:0:5       Df      -            41        17315525
129816    13     ff:ff:ff:ff:ff:ff   LDf     0            90        24788
162204    0      0:0:5e:0:1:0        Df      -            3         0
181564    10     ff:ff:ff:ff:ff:ff   LDf     0            52        18341
236720    13     0:10:94:0:0:b       Df      -            87        17313665
248572    10     0:10:94:0:0:9       LDf     1002         33        17315380
bash-5.1# rt --dump 2 --family bridge
Flags: L=Label Valid, Df=DHCP flood, Mm=Mac Moved, L2c=L2 Evpn Control Word, N=New Entry, Ec=EvpnControlProcessing
vRouter bridge table 0/2
Index     BdID   DestMac             Flags   Label/VNID   Nexthop   Stats
30120     2      0:10:94:0:0:1b      Df      -            11        2966399
73480     0      0:0:5e:0:1:0        Df      -            3         3
78924     14     ff:ff:ff:ff:ff:ff   LDf     0            69        24065
85392     5      ff:ff:ff:ff:ff:ff   LDf     0            71        24063
101204    4      0:10:94:0:0:1e      LDf     2003         28        5935375
109492    7      0:11:11:11:21:11    Df      -            118       1
113348    5      0:10:94:0:0:15      Df      -            11        5935600
141852    4      0:10:94:0:0:14      Df      -            11        5935517
146472    14     0:10:94:0:0:13      Df      -            11        5935522
154692    4      ff:ff:ff:ff:ff:ff   LDf     0            70        24063
160996    1      0:10:94:0:0:16      Df      -            11        2966405
162964    14     0:10:94:0:0:1d      LDf     2002         28        5935388
177844    3      ff:ff:ff:ff:ff:ff   LDf     0            68        2990359
208392    2      ff:ff:ff:ff:ff:ff   LDf     0            77        2990320
214548    3      0:10:94:0:0:12      Df      -            11        2966408
236544    5      0:10:94:0:0:1f      LDf     2004         28        5935479
246800    7      ff:ff:ff:ff:ff:ff   LDf     0            103       454
257312    1      ff:ff:ff:ff:ff:ff   LDf     0            72        2990345
Layer 2 Circuit
SUMMARY
Juniper Cloud-Native Router supports Layer 2 circuits over an IP/MPLS-based service provider network. This topic provides configuration and verification details.
IN THIS SECTION
Configuration | 36
Verification | 43
Juniper Cloud-Native Router supports Layer 2 circuits for a point-to-point Layer 2 connection over an IP/MPLS-based service provider’s network. To establish the Layer 2 circuit, it uses Label Distribution Protocol (LDP) as the signaling protocol to advertise the ingress label to the remote PE routers. A targeted LDP session is established between the loopback addresses of the two PEs to exchange VPN labels. For more information on L2 circuits, please review Layer 2 Circuit Overview.
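The endpoint-matching requirement can be sketched as follows: a circuit comes up only when each PE points at the other's loopback and both agree on the virtual-circuit ID. This is an illustrative model with hypothetical names, not a signaling implementation:

```python
# Sketch of L2-circuit endpoint matching (illustrative): a virtual circuit is
# signaled between two PE loopbacks and comes up only when both sides agree
# on the neighbor address and the virtual-circuit-id.

pe1 = {"lo0": "192.168.1.11", "neighbor": "192.168.3.33", "vc_id": 100}
pe2 = {"lo0": "192.168.3.33", "neighbor": "192.168.1.11", "vc_id": 100}

def circuit_up(a, b):
    """Check that each PE targets the other's loopback with the same VC ID."""
    return (a["neighbor"] == b["lo0"]
            and b["neighbor"] == a["lo0"]
            and a["vc_id"] == b["vc_id"])

print(circuit_up(pe1, pe2))  # True
```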
The following Layer-2 circuit features are supported by the Cloud-Native Router:
· Enable l2circuit protocol on Physical (PF/VF), VLAN sub-interface, bond interface (towards core), and pod interfaces
· Ethernet CCC encapsulations: ethernet-ccc and vlan-ccc
· Local interface switching
· Control word
· Protect interface for CE redundancy
· Backup neighbor for core redundancy
· Pseudowire cold standby and hot standby
· L2 circuit with BGP-Labeled Unicast (BGP-LU)
· Dual VLAN tagging
· Interoperability with other Junos devices
Configuration
You can configure Layer 2 circuits in Juniper Cloud-Native Router using configlets. Multiple configlet samples are provided in this section based on the following topology:
For Layer 2 circuit configuration, you configure the Ethernet-based CE-facing interface with the CCC encapsulation type of your choice, ethernet-ccc or vlan-ccc. The Layer 2 circuit settings, such as the remote PE neighbor (usually its loopback address), the interface connected to the CE router, and a virtual circuit identifier for the virtual circuit (VC), are configured under the edit protocols l2circuit statement. Finally, you configure MPLS, LDP, and an IGP to enable signaling for your Layer 2 circuit. Please review Example: Ethernet-based Layer 2 Circuit Configuration for an end-to-end Junos configuration example.
Layer 2 Circuit with ethernet-ccc
Configure Ethernet CCC encapsulation on CE-facing Ethernet interfaces on PE-1 (JCNR) that must accept packets carrying standard Tag Protocol ID (TPID) values.
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-l2-ckt-pe1
  namespace: jcnr
spec:
  config: |-
    set interfaces enp13s0f2 unit 0 family inet address 172.16.25.1/24
    set interfaces enp13s0f0 description "to CE-1 7/4"
    set interfaces enp13s0f0 unit 0 encapsulation ethernet-ccc
    set interfaces lo0 unit 0 family inet address 192.168.1.11/32
    set routing-options router-id 192.168.1.11
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0 virtual-circuit-id 100
    set protocols ldp interface enp13s0f2
    set protocols ldp interface lo0.0
    set protocols mpls interface enp13s0f2.0
    set protocols ospf area 0.0.0.0 interface lo0.0
    set protocols ospf area 0.0.0.0 interface enp13s0f2
  crpdSelector:
    matchLabels:
      kubernetes.io/hostname: node-1
Layer 2 Circuits with vlan-ccc
Configure VLAN CCC encapsulation on CE-facing Ethernet interfaces on PE-1 (JCNR) with VLAN tagging enabled.
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-l2-ckt-pe1
  namespace: jcnr
spec:
  config: |-
    set interfaces enp13s0f2 unit 0 family inet address 172.16.25.1/24
    set interfaces enp13s0f0 description "to CE-1 7/4"
    set interfaces enp13s0f0 unit 102 vlan-id 102
    set interfaces enp13s0f0 unit 102 encapsulation vlan-ccc
    set interfaces enp13s0f0 unit 104 vlan-id 104
    set interfaces enp13s0f0 unit 104 encapsulation vlan-ccc
    set interfaces enp13s0f0 unit 106 vlan-id 106
    set interfaces enp13s0f0 unit 106 encapsulation vlan-ccc
    set interfaces lo0 unit 0 family inet address 192.168.1.11/32
    set routing-options router-id 192.168.1.11
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.102 virtual-circuit-id 102
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.104 virtual-circuit-id 104
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.106 virtual-circuit-id 106
    set protocols ldp interface enp13s0f2
    set protocols ldp interface lo0.0
    set protocols mpls interface enp13s0f2.0
    set protocols ospf area 0.0.0.0 interface lo0.0
    set protocols ospf area 0.0.0.0 interface enp13s0f2
  crpdSelector:
    matchLabels:
      kubernetes.io/hostname: node-1
Layer 2 Circuit with Local Switching
You can configure a Layer 2 circuit with local switching. Optionally, configure protect-interface for the local or remote end to ensure traffic switches over when the primary interface goes down. Protect interfaces act as backups for their associated interfaces that link a virtual circuit to its destination. The primary interface has priority over the protect interface and carries network traffic as long as it is functional. If the primary interface fails, the protect interface is activated. Optionally, configure no-revert to prevent switching back to the primary interface when it comes back up.
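The primary/protect selection logic can be sketched as a small state check. This is an illustrative model (the function `active_interface` is hypothetical, not a Junos API):

```python
# Sketch of protect-interface selection (illustrative): traffic uses the
# primary interface while it is up; on failure the protect interface takes
# over, and with no-revert it keeps carrying traffic even after the primary
# recovers.

def active_interface(primary_up, protect_up, on_protect, no_revert):
    """Return which role carries traffic, honoring the no-revert setting."""
    if primary_up and not (no_revert and on_protect):
        return "primary"
    return "protect" if protect_up else "down"

print(active_interface(True, True, on_protect=False, no_revert=True))   # primary
print(active_interface(False, True, on_protect=True, no_revert=True))   # protect
print(active_interface(True, True, on_protect=True, no_revert=True))    # protect (no-revert)
```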
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-l2-ckt-pe1
  namespace: jcnr
spec:
  config: |-
    set interfaces enp13s0f2 unit 0 family inet address 172.16.25.1/24
    set interfaces enp13s0f0 description "to CE-1 7/4"
    set interfaces enp13s0f0 unit 0 encapsulation ethernet-ccc
    set interfaces lo0 unit 0 family inet address 192.168.1.11/32
    set routing-options router-id 192.168.1.11
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0 virtual-circuit-id 100
    set protocols l2circuit local-switching interface enp13s0f0 no-revert protect-interface enp13s0f1 end-interface interface enp13s0f3 no-revert protect-interface enp13s0f8
    set protocols ldp interface enp13s0f2
    set protocols ldp interface lo0.0
    set protocols mpls interface enp13s0f2.0
    set protocols ospf area 0.0.0.0 interface lo0.0
    set protocols ospf area 0.0.0.0 interface enp13s0f2
  crpdSelector:
    matchLabels:
      kubernetes.io/hostname: node-1
Layer 2 Circuit with Ignore Mismatch
You can configure the Layer 2 circuit to establish even when the MTU (ignore-mtu-mismatch), encapsulation (ignore-encapsulation-mismatch), or VLAN ID (no-vlan-id-validate) configured on the CE device interface does not match the settings configured on the Layer 2 circuit interface.
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-l2-ckt-pe1
  namespace: jcnr
spec:
  config: |-
    set interfaces enp13s0f2 unit 0 family inet address 172.16.25.1/24
    set interfaces enp13s0f0 description "to CE-1 7/4"
    set interfaces enp13s0f0 unit 0 encapsulation ethernet-ccc
    set interfaces lo0 unit 0 family inet address 192.168.1.11/32
    set routing-options router-id 192.168.1.11
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0 virtual-circuit-id 100 ignore-encapsulation-mismatch ignore-mtu-mismatch no-vlan-id-validate
    set protocols ldp interface enp13s0f2
    set protocols ldp interface lo0.0
    set protocols mpls interface enp13s0f2.0
    set protocols ospf area 0.0.0.0 interface lo0.0
    set protocols ospf area 0.0.0.0 interface enp13s0f2
  crpdSelector:
    matchLabels:
      kubernetes.io/hostname: node-1
Static Layer 2 Circuits
Configure static Layer 2 circuit pseudowires for networks that do not support LDP or do not have LDP enabled. Static pseudowires require you to configure static values for the incoming and outgoing labels that enable a pseudowire connection.
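Because there is no signaling, the labels on the two PEs must mirror each other by hand: one side's incoming label is the other side's outgoing label for the same circuit. A minimal consistency check, with illustrative values taken from the sample below's label scheme (the function `labels_consistent` is hypothetical):

```python
# Sketch of the static-pseudowire label constraint (illustrative): for each
# virtual circuit, the incoming label configured on one PE must equal the
# outgoing label configured on its peer, and vice versa, or traffic is
# silently black-holed.

pe1_labels = {102: {"in": 103, "out": 102}, 104: {"in": 105, "out": 104}}
pe2_labels = {102: {"in": 102, "out": 103}, 104: {"in": 104, "out": 105}}

def labels_consistent(a, b):
    """Check that each virtual circuit's static labels mirror each other."""
    return all(
        a[vc]["in"] == b[vc]["out"] and a[vc]["out"] == b[vc]["in"]
        for vc in a
    )

print(labels_consistent(pe1_labels, pe2_labels))  # True
```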
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-l2-ckt-pe1
  namespace: jcnr
spec:
  config: |-
    set interfaces enp13s0f2 unit 0 family inet address 172.16.25.1/24
    set interfaces enp13s0f0 description "to CE-1 7/4"
    set interfaces enp13s0f0 unit 102 vlan-id 102
    set interfaces enp13s0f0 unit 102 encapsulation vlan-ccc
    set interfaces enp13s0f0 unit 104 vlan-id 104
    set interfaces enp13s0f0 unit 104 encapsulation vlan-ccc
    set interfaces enp13s0f0 unit 106 vlan-id 106
    set interfaces enp13s0f0 unit 106 encapsulation vlan-ccc
    set interfaces lo0 unit 0 family inet address 192.168.1.11/32
    set routing-options router-id 192.168.1.11
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.102 static incoming-label 103
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.102 static outgoing-label 102
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.102 virtual-circuit-id 102
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.104 static incoming-label 105
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.104 static outgoing-label 104
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.104 virtual-circuit-id 104
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.106 static incoming-label 107
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.106 static outgoing-label 106
    set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.106 virtual-circuit-id 106
    set protocols ldp interface enp13s0f2
    set protocols mpls label-range static-label-range 16 200
    set protocols mpls interface enp13s0f2.0
    set protocols ospf area 0.0.0.0 interface lo0.0
    set protocols ospf area 0.0.0.0 interface enp13s0f2
  crpdSelector:
    matchLabels:
      kubernetes.io/hostname: node-1
Layer 2 Circuit with BGP-LU
Enable Layer 2 circuit over BGP-Labeled Unicast (BGP-LU). BGP-LU advertises the ingress label to its peer PE routers.
apiVersion: configplane.juniper.net/v1 kind: Configlet metadata:
name: configlet-l2-ckt-pe1 namespace: jcnr spec: config: |-
set interfaces enp13s0f2 unit 0 family inet address 172.16.25.1/24 set interfaces enp13s0f0 description “to CE-1 7/4” set interfaces enp13s0f0 unit 102 vlan-id 102 set interfaces enp13s0f0 unit 102 encapsulation vlan-ccc set interfaces enp13s0f0 unit 104 vlan-id 104 set interfaces enp13s0f0 unit 104 encapsulation vlan-ccc set interfaces enp13s0f0 unit 106 vlan-id 106 set interfaces enp13s0f0 unit 106 encapsulation vlan-ccc set interfaces lo0 unit 0 family inet address 192.168.1.11/32 set policy-options policy-statement local-prefixes from protocol direct set policy-options policy-statement local-prefixes from prefix-list local-prefixes set policy-options policy-statement local-prefixes then accept set policy-options policy-statement send-pe from route-filter 192.168.1.11/32 exact set policy-options policy-statement send-pe then accept set policy-options prefix-list local-prefixes 192.168.1.11/32 set routing-options router-id 192.168.1.11 set routing-options autonomous-system 65001 set routing-options rib-groups INET0_to_INET3 import-rib inet.0 set routing-options rib-groups INET0_to_INET3 import-rib inet.3 set protocols bgp group external type external set protocols bgp group external local-address 172.16.25.1 set protocols bgp group external family inet labeled-unicast rib inet.3 set protocols bgp group external family inet unicast rib-group INET0_to_INET3 set protocols bgp group external export local-prefixes
set protocols bgp group external neighbor 172.16.30.11 peer-as 65002
set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.102 virtual-circuit-id 102
set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.104 virtual-circuit-id 104
set protocols l2circuit neighbor 192.168.3.33 interface enp13s0f0.106 virtual-circuit-id 106
set protocols ldp interface lo0.0
set protocols mpls interface enp13s0f2
crpdSelector:
  matchLabels:
    kubernetes.io/hostname: node-1
Configuring via JCNR-CNI (Pod Configuration)
When using the Cloud-Native Router in CNI mode, you can configure cRPD with layer 2 circuit configuration using the JCNR-CNI. Here is an example pod configuration:
---
apiVersion: v1
kind: Namespace
metadata:
  name: jcnr-l2vpn-tests
---
apiVersion: "k8s.cni.cncf.io/v1"
kind: NetworkAttachmentDefinition
metadata:
  name: l2ckt-pod1-nad
  namespace: jcnr-l2vpn-tests
spec:
  config: '{
    "cniVersion": "0.4.0",
    "name": "l2ckt-pod1-nad",
    "plugins": [
      {
        "type": "jcnr",
        "kubeConfig": "/etc/kubernetes/kubelet.conf"
      }
    ]
  }'
---
apiVersion: v1
kind: Pod
metadata:
  name: l2ckt-pod1-nad
  namespace: jcnr-l2vpn-tests
  annotations:
    k8s.v1.cni.cncf.io/networks: |
      [
        {
          "name": "l2ckt-pod1-nad",
          "interface": "net1",
          "cni-args": {
            "l2CktNbr": "192.168.3.33",
            "l2CktVcid": "10"
          }
        }
      ]
spec:
  containers:
… <trimmed>
Verification
You can verify the Layer 2 circuit configuration and statistics on the “cRPD shell” on page 354 and “vRouter shell” on page 356.
Verify L2 circuit Connections
Display status information about Layer 2 virtual circuits from the Cloud-Native Router to its neighbors using the show l2circuit connections command on the cRPD.
user@host> show l2circuit connections
Layer-2 Circuit Connections:
Legend for connection status (St)
EI -- encapsulation invalid
NP -- interface h/w not present
MM -- mtu mismatch
Dn -- down
EM -- encapsulation mismatch
VC-Dn -- Virtual circuit Down
CM -- control-word mismatch
Up -- operational
VM -- vlan id mismatch
CF -- Call admission control failure
OL -- no outgoing label
IB -- TDM incompatible bitrate
NC -- intf encaps not CCC/TCC
BK -- Backup Connection
CB -- rcvd cell-bundle size bad
LD -- local site signaled down
RD -- remote site signaled down
XX -- unknown
TM -- TDM misconfiguration
ST -- Standby Connection
SP -- Static Pseudowire
RS -- remote site standby
HS -- Hot-standby Connection
Legend for interface status
Up -- operational
Dn -- down
Neighbor: 192.168.3.33
    Interface                 Type  St     Time last up          # Up trans
    enp13s0f0.102(vc 102)     rmt   Up     Mar 12 15:45:20 2025           2
Remote PE: 192.168.3.33, Negotiated control-word: Yes (Null)
Incoming label: 17, Outgoing label: 16
Negotiated PW status TLV: No
Local interface: enp13s0f0.102, Status: Up, Encapsulation: VLAN
Flow Label Transmit: No, Flow Label Receive: No
    enp13s0f0.104(vc 104)     rmt   Up     Mar 12 15:45:24 2025           1
Remote PE: 192.168.3.33, Negotiated control-word: Yes (Null)
Incoming label: 18, Outgoing label: 17
Negotiated PW status TLV: No
Local interface: enp13s0f0.104, Status: Up, Encapsulation: VLAN
Flow Label Transmit: No, Flow Label Receive: No
    enp13s0f0.106(vc 106)     rmt   Up     Mar 12 15:45:24 2025           1
Remote PE: 192.168.3.33, Negotiated control-word: Yes (Null)
Incoming label: 19, Outgoing label: 18
Negotiated PW status TLV: No
Local interface: enp13s0f0.106, Status: Up, Encapsulation: VLAN
Flow Label Transmit: No, Flow Label Receive: No
Verify the LDP sessions
Display information about LDP sessions using the show ldp session command on the cRPD.
user@root> show ldp session extensive
Address: 192.168.3.33, State: Operational, Connection: Open, Hold time: 23
  Session ID: 192.168.1.11:0--192.168.3.33:0
  Next keepalive in 3 seconds
  Passive, Maximum PDU: 4096, Hold time: 30, Neighbor count: 1
  Neighbor types: configured-layer2
Keepalive interval: 10, Connect retry interval: 1
Local address: 192.168.1.11, Remote address: 192.168.3.33
Up for 01:13:18
Capabilities advertised: none
Capabilities received: none
Protection: disabled
Session flags: none
Local – Restart: disabled, Helper mode: enabled
Remote – Restart: disabled, Helper mode: enabled
Local maximum neighbor reconnect time: 120000 msec
Local maximum neighbor recovery time: 240000 msec
Local Label Advertisement mode: Downstream unsolicited
Remote Label Advertisement mode: Downstream unsolicited
Negotiated Label Advertisement mode: Downstream unsolicited
MTU discovery: disabled
Nonstop routing state: Not in sync
Next-hop addresses received:
192.168.3.33
Queue depth: 0
Message type         Total                    Last 5 seconds
                     Sent      Received      Sent      Received
Initialization       1         1             0         0
Keepalive            439       439           1         1
Notification         0         0             0         0
Address              1         1             0         0
Address withdraw     0         0             0         0
Label mapping        7         7             0         0
Label request        0         0             0         0
Label withdraw       3         3             0         0
Label release        3         3             0         0
Label abort          0         0             0         0
Verify LDP Database
user@root> show ldp database
Input label database, 192.168.1.11:0--192.168.3.33:0
Labels received: 14
  Label     Prefix
     27     L2CKT CtrlWord ETHERNET VC 9
     28     L2CKT CtrlWord ETHERNET VC 10

Output label database, 111.1.1.1:0--133.3.3.3:0
Labels advertised: 14
  Label     Prefix
     36     L2CKT CtrlWord ETHERNET VC 9
     37     L2CKT CtrlWord ETHERNET VC 10
Verify vRouter Interfaces
Verify the vRouter interfaces for CE-facing interfaces.
bash-5.1# vif --list
Vrouter Interface Table
Flags: P=Policy, X=Cross Connect, S=Service Chain, Mr=Receive Mirror
       Mt=Transmit Mirror, Tc=Transmit Checksum Offload, L3=Layer 3, L2=Layer 2
       D=DHCP, Vp=Vhost Physical, Pr=Promiscuous, Vnt=Native Vlan Tagged
       Mnp=No MAC Proxy, Dpdk=DPDK PMD Interface, Rfl=Receive Filtering Offload, Mon=Interface is Monitored
       Uuf=Unknown Unicast Flood, Vof=VLAN insert/strip offload, Df=Drop New Flows, L=MAC Learning Enabled
       Proxy=MAC Requests Proxied Always, Er=Etree Root, Mn=Mirror without Vlan Tag, HbsL=HBS Left Intf
       HbsR=HBS Right Intf, Ig=Igmp Trap Enabled, Ml=MAC-IP Learning Enabled, Me=Multicast Enabled
       LsDp=Link down in DP only, Ccc=CCC Enabled, HwTs=Hardware support for Timestamp
…<trimmed>
vif0/2      PCI: 0000:0d:00.0 (Speed 10000, Duplex 1) NH: 7 MTU: 9000
            Type:Physical HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:TcL3Vof QOS:0 Ref:18
            RX port packets:899840301 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: 0000:0d:00.0  Status: UP  Driver: net_ice
            RX packets:899840302 bytes:4080744017446 errors:0
            TX packets:896126245 bytes:4059828585689 errors:0
            Drops:1315423
            TX port packets:896126245 errors:0
vif0/9      PMD: enp13s0f0 NH: 21 MTU: 9000
            Type:Host HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3ProxyEr QOS:-1 Ref:17 TxXVif:2
            RX device packets:70 bytes:6064 errors:0
            RX queue packets:70 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:70 bytes:6064 errors:0
            TX packets:0 bytes:0 errors:0
            Drops:0

vif0/16     Virtual: enp13s0f0.102 Vlan(o/i)(,S): 102/102 Parent:vif0/9
            Sub-type: Host-tap
            Type:Virtual(Vlan) HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3ProxyEr QOS:-1 Ref:1 TxXVif:17
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:8 bytes:592 errors:0
            TX packets:0 bytes:0 errors:0
            Drops:0

vif0/17     Virtual: enp13s0f0.102 Vlan(o/i)(,S): 102/102 NH: 44 Parent:vif0/2
            Sub-type: physical-tap
            Type:Virtual(Vlan) HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:Ccc QOS:-1 Ref:5
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:299403723 bytes:1357784724696 errors:0
            TX packets:298673909 bytes:1351924185670 errors:0
            Drops:612067

vif0/18     Virtual: enp13s0f0.104 Vlan(o/i)(,S): 104/104 Parent:vif0/9
            Sub-type: Host-tap
            Type:Virtual(Vlan) HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3ProxyEr QOS:-1 Ref:1 TxXVif:19
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:9 bytes:666 errors:0
            TX packets:0 bytes:0 errors:0
            Drops:0

vif0/19     Virtual: enp13s0f0.104 Vlan(o/i)(,S): 104/104 NH: 48 Parent:vif0/2
            Sub-type: physical-tap
            Type:Virtual(Vlan) HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:Ccc QOS:-1 Ref:5
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:299381730 bytes:1357688891917 errors:0
            TX packets:298673939 bytes:1351924259113 errors:0
            Drops:593380

vif0/20     Virtual: enp13s0f0.106 Vlan(o/i)(,S): 106/106 Parent:vif0/9
            Sub-type: Host-tap
            Type:Virtual(Vlan) HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:65535 Flags:L3ProxyEr QOS:-1 Ref:1 TxXVif:21
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:8 bytes:592 errors:0
            TX packets:0 bytes:0 errors:0
            Drops:0

vif0/21     Virtual: enp13s0f0.106 Vlan(o/i)(,S): 106/106 NH: 56 Parent:vif0/2
            Sub-type: physical-tap
            Type:Virtual(Vlan) HWaddr:40:a6:b7:c4:23:f4 IPaddr:0.0.0.0
            DDP: OFF SwLB: ON
            Vrf:0 Mcast Vrf:0 Flags:Ccc QOS:-1 Ref:5
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            RX packets:299359992 bytes:1357592722594 errors:0
            TX packets:298673884 bytes:1351924089886 errors:0
            Drops:571666
Verify Interface Rate Statistics
Verify the interface statistics, including received and transmitted packets and errors.
bash-5.1# vif --list --rate
Interface rate statistics
-------------------------
Interface Name                             RX Errors  RX Packets    TX Errors  TX Packets
vif0/0     Agent: unix
vif0/1     Physical: eth_bond_bond0
vif0/2     Physical: 0000:0d:00.0
vif0/3     Physical: 0000:0d:00.1
vif0/4     Physical: 0000:0d:00.2
vif0/5     Physical: 0000:0d:00.3
vif0/6     Physical: 0000:b5:00.2
vif0/7     Physical: 0000:b5:00.3
vif0/8     Host: bond0
vif0/9     Host: enp13s0f0
vif0/10    Host: enp13s0f1
vif0/11    Host: enp13s0f2
vif0/12    Host: enp13s0f3
vif0/13    Host: enp181s0f2
vif0/14    Host: enp181s0f3
vif0/16    Virtual(Vlan): enp13s0f0.102
Key 'q' for quit, key 'k' for previous page, key 'j' for next page. 2025-03-12 16:58:44 +0000
Verify Routes and Nexthops
Verify the routes and nexthops for interfaces with Ethernet CCC encapsulation enabled.
bash-5.1# rt --dump 0 --family ccc
Flags: L=Label Valid, L2c=L2 Control Word
50
vRouter ccc table 0/0
Interface Id    Flags    Label/VNID    Nexthop    Stats
1               LL2c     18            61         0
300             LL2c     17            61         0
693             LL2c     110           40         0
657             LL2c     16            61         0
bash-5.1# nh --get 61
Id:61  Type:Tunnel
       Fmly: AF_MPLS  Rid:0  Ref_cnt:4  Vrf:0
       Flags:Valid, Etree Root, MPLS,
       Oif:1 Len:14 Data:36 d0 5b f4 f3 ef a2 2a 5f 77 e7 8b 88 47
       Number of Transport Labels:1  Transport Labels:31,
bash-5.1# nh --get 40
Id:40  Type:Tunnel
       Fmly: AF_MPLS  Rid:0  Ref_cnt:2  Vrf:0
       Flags:Valid, Etree Root, MPLS,
       Oif:1 Len:14 Data:36 d0 5b f4 f3 ef a2 2a 5f 77 e7 8b 88 47
       Number of Transport Labels:1  Transport Labels:18,
Loop Detection in Pure L2 Mode
SUMMARY
Juniper Cloud-Native Router supports Layer 2 loop detection mechanisms by detecting frequent MAC address movements between ports.
Juniper Cloud-Native Router supports a Layer 2 loop detection mechanism in the vRouter data path. Frequent MAC address movements between ports, often the result of incorrect wiring, can create L2 loops, leading to network instability, broadcast storms, and degraded performance. Traditional loop detection mechanisms such as the Spanning Tree Protocol (STP) are not always feasible in modern data center environments.
Cloud-Native Router implements a loop detection mechanism that identifies MAC address learning loops by monitoring the frequency of MAC address movements between ports. The vRouter uses the MAC move table to track these movements, recording the source MAC address, VLAN tag, hit count, and timestamp. A high hit count on the same MAC address entry in the MAC move table indicates that the MAC address is moving continuously between two ports within a short interval of time.
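The hit-count mechanism described above can be sketched in Python. This is an illustration only: the class, threshold, and time window are assumptions for the sketch, not JCNR's actual implementation or values.

```python
import time

class MacMoveTable:
    """Illustrative MAC-move tracker: a (MAC, VLAN) entry records the
    last port and a hit count that increments on every port change;
    many moves within a short window signal an L2 loop."""

    def __init__(self, threshold=50, window=10.0):
        self.threshold = threshold          # assumed loop threshold
        self.window = window                # assumed detection window (s)
        self.table = {}                     # (mac, vlan) -> [port, hits, start_ts]

    def observe(self, mac, vlan, port, now=None):
        """Record a learned (mac, vlan, port); return True on loop detection."""
        now = time.monotonic() if now is None else now
        key = (mac, vlan)
        entry = self.table.get(key)
        if entry is None or entry[0] == port:
            self.table[key] = [port, 0, now]    # new entry or no move
            return False
        _, hits, start = entry
        if now - start > self.window:           # window expired: restart counting
            self.table[key] = [port, 0, now]
            return False
        hits += 1                               # MAC moved to a different port
        self.table[key] = [port, hits, start]
        return hits >= self.threshold
```

For example, a MAC flapping between two ports trips the detector once the hit count crosses the threshold inside the window.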
You can use the rt --show-mac-move --family bridge command on the "vRouter shell" on page 356 to view the MAC moves:
bash-5.1# rt --family bridge --show-mac-move
MAC Move Detection Table
==========================================================
MAC Address          BDID    Hit Count    Timestamp
==========================================================
00:01:01:01:01:03    12      46           1731038878
==========================================================
Total Entries: 1 / 10000 (Used/Capacity)
The vRouter also logs the detected loop and shuts down affected ports.
2024-11-18 06:14:19,130 DPCORE: NOTIFICATION, CRIT, Purel2 Macmove Loop Detected, mac 00:01:01:01:01:03, vlan 1221, Current Port 5, Older Port 1, hitcount 52
2024-11-18 06:14:19,130 VROUTER: Func: dpdk_port_shutdown, Line 1296, Shutting Down Port 0 of Vif 5
2024-11-18 06:14:19,343 lcore 17 called tx_pkt_burst for not ready port (
2024-11-18 06:14:23,242 VROUTER: Port ID: 4 Link Status: DOWN intf_name:0000:d8:00.0 drv_name:net_ixgbe
2024-11-18 06:14:23,242 VROUTER: Notifed Link status update to agent for interface 0000:d8:00.0 as 0
2024-11-18 06:14:36,299 DPCORE: vr_purel2_mac_move_table_req_process vr_purel2_mac_move_table_req 456 received OP 1
2024-11-18 06:14:38,133 DPCORE: vr_purel2_mac_move_table_req_process vr_purel2_mac_move_table_req 456 received OP 1
A Syslog notification is also generated when the loop is detected:
{"jcnr": {"header": {"sysUpTime": "40 days, 11 hours, 35 minutes, 45 seconds", "program": "vrouter-dpdk", "notificationType": "DPCORE: ", "eventDate": "2024-12-02T21:32:23-08:00"}, "body": "NOTIFICATION, CRIT, Purel2 Macmove Loop Detected, mac 00:01:01:01:01:03, vlan 1221. Current Port 4, Older Port 2, hitcount 51"}}
Access Control Lists (Firewall Filters)
SUMMARY
Read this topic to learn about Layer 2 access control lists (Firewall filters) in the cloud-native router.
IN THIS SECTION Access Control Lists (Firewall Filters) | 52 Configuration Example | 53 Troubleshooting | 54
Access Control Lists (Firewall Filters)
Starting with Juniper Cloud-Native Router Release 22.2, we've included a limited firewall filter capability. You can configure the filters using the Junos OS CLI within the cloud-native router controller, using NETCONF, or using the cloud-native router APIs. Starting with Juniper Cloud-Native Router Release 23.2, you can also configure firewall filters using node annotations and a custom configuration template at the time of Cloud-Native Router deployment. Please review the deployment guide for more details.
During deployment, the system defines and applies firewall filters to block traffic from passing directly between the router interfaces. You can dynamically define and apply more filters. Use the firewall filters to:
· Define firewall filters for bridge family traffic.
· Define filters based on one or more of the following fields: source MAC address, destination MAC address, or EtherType.
· Define multiple terms within each filter.
· Discard the traffic that matches the filter.
· Apply filters to bridge domains.
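As a rough illustration of how such a filter evaluates a frame, here is a Python sketch. The field names, frame representation, and functions are hypothetical, not the vRouter's actual data structures; a term discards a frame only when every configured field matches.

```python
def term_matches(term, frame):
    """Return True when every field present in the term matches the frame.
    Supported match fields mirror the list above: source MAC,
    destination MAC, and EtherType."""
    for field in ("src_mac", "dst_mac", "ether_type"):
        want = term.get(field)
        if want is not None and frame.get(field) != want:
            return False
    return True

def apply_filter(terms, frame):
    """Discard the frame if any term matches; otherwise accept it
    (discard is the only supported action)."""
    for term in terms:
        if term_matches(term, frame):
            return "discard"
    return "accept"

# Terms modeled on the configuration example in this topic.
filter_example = [
    {"src_mac": "10:10:10:10:10:10",
     "dst_mac": "10:10:10:10:10:11",
     "ether_type": 0x0806},   # ARP
]
```

A frame matching all three fields is discarded; anything else passes through unchanged.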
Configuration Example
Below you can see an example of a firewall filter configuration from a cloud-native router deployment:
root@jcnr01> show configuration firewall
firewall {
    family {
        bridge {
            filter example {
                term t1 {
                    from {
                        destination-mac-address 10:10:10:10:10:11;
                        source-mac-address 10:10:10:10:10:10;
                        ether-type arp;
                    }
                    then {
                        discard;
                    }
                }
            }
        }
    }
}
NOTE: You can configure up to 16 terms in a single firewall filter. The only then action you can configure in a firewall filter is the discard action.
After configuration, you must apply your firewall filters to a bridge domain using the set routing-instances vswitch bridge-domains bd3001 forwarding-options filter input filter1 configuration command. Then you must commit the configuration for the firewall filter to take effect. To see how many packets matched the filter (per VLAN), you can issue the show firewall filter filter1 command on the controller CLI. For example:
show firewall filter filter1
Filter : filter1
vlan-id : 3001
Term    Packets
t1      0
In the previous example, we applied the filter to the bridge domain bd3001. The filter has not yet matched any packets.
Troubleshooting
The following table lists some of the potential problems that you might face when you implement firewall rules or ACLs in the cloud-native router. You run most of these commands on the host server.
Table 1: L2 Firewall Filter or ACL Troubleshooting

Problem: Firewall filters or ACLs not working
  Possible cause and resolution: The gRPC connection (port 50052) to the vRouter is down. Check the gRPC connection.
    Command: netstat -antp|grep 50052
  Possible cause and resolution: The ui-pubd process is not running. Check whether ui-pubd is running.
    Command: ps aux|grep ui-pubd

Problem: Firewall filter or ACL show commands not working
  Possible cause and resolution: The gRPC connection (port 50052) to the vRouter is down. Check the gRPC connection.
    Command: netstat -antp|grep 50052
  Possible cause and resolution: The firewall service is not running.
    Commands: ps aux|grep firewall
              show log filter.log (you must run this command in the JCNR-controller (cRPD) CLI)
MAC Learning and Aging
SUMMARY
Juniper Cloud-Native Router provides automated learning and aging of MAC addresses. Read this topic for an overview of the MAC learning and aging functionality in the cloud-native router.
IN THIS SECTION MAC Learning | 55 MAC Entry Aging | 57
MAC Learning
MAC learning enables the cloud-native router to efficiently send the received packets to their respective destinations. The cloud-native router maintains a table of MAC addresses grouped by interface. The table includes MAC addresses, VLANs, and the interface on which the vRouter learns each MAC address and VLAN. The MAC table informs the vRouter about the MAC addresses that each interface can reach.
The cloud-native router caches the source MAC address of a new packet flow to record the incoming interface in the MAC table, learning MAC addresses separately for each VLAN or bridge domain. Queries sent to the MAC table return the interface associated with a given MAC address and VLAN key. To enable MAC learning, the cloud-native router performs these steps:
· Records the incoming interface into the MAC table by caching the source MAC address for a new packet flow.
· Learns the MAC addresses for each VLAN or bridge domain.
· Creates a key in the MAC table from the MAC address and VLAN of the packet.
If the destination MAC address and VLAN are missing (lookup failure), the cloud-native router floods the packet out all the interfaces (except the incoming interface) in the bridge domain.
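The learning and flooding behavior described above can be sketched as follows. This is illustrative Python, not the vRouter implementation; the class and names are assumptions for the sketch.

```python
class MacTable:
    """Illustrative MAC table keyed on (MAC, VLAN); a lookup failure
    floods out every bridge-domain port except the incoming one."""

    def __init__(self, bridge_ports):
        self.bridge_ports = set(bridge_ports)
        self.table = {}   # (mac, vlan) -> interface

    def learn(self, src_mac, vlan, in_port):
        # Cache the source MAC per VLAN against the incoming interface.
        self.table[(src_mac, vlan)] = in_port

    def forward(self, dst_mac, vlan, in_port):
        """Return the set of ports the frame is sent out of."""
        out = self.table.get((dst_mac, vlan))
        if out is None:
            # Lookup failure: flood, excluding the incoming interface.
            return self.bridge_ports - {in_port}
        return {out}
```

A known destination yields a single output port; an unknown one floods the rest of the bridge domain.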
By default:
· MAC table entries time out after 60 seconds.
· The MAC table size is limited to 10,240 entries.
We recommend that you do not change the default values. Please contact Juniper Support if you need to change the default values.
You can see the MAC table entries by using:
· Introspect agent at http://host server IP:8085/mac_learning.xml#Snh_FetchL2MacEntry
· The command show bridge mac-table on the Cloud-Native Router controller CLI:
show bridge mac-table
Routing Instance : default-domain:default-project:ip-fabric:__default__
Bridging domain VLAN id : 3002
MAC Address          MAC Flags    Logical Interface
00:00:5E:00:53:01    D            bond0
· The command rt --dump vrf_id --family bridge on the CLI of the vRouter pod:
bash-5.1# rt --dump 0 --family bridge
Flags: L=Label Valid, Df=DHCP flood, Mm=Mac Moved, L2c=L2 Evpn Control Word, N=New Entry,
       Ec=EvpnControlProcessing
vRouter bridge table 0/1
Index    BdID    DestMac              Flags    Label/VNID    Nexthop    Stats
5644     1       00:10:94:00:00:01    Df       -             23         615123154
6480     1       00:10:94:00:00:01    Df       -             16         615123154
5628     1       00:10:94:00:00:01    LDf      0             26         615123154
If you exceed the MAC address limit, the counter pkt_drop_due_to_mactable_limit increments. You can see this counter by using the introspect agent at http://host server IP:8085/Snh_AgentStatsReq.
If you delete or disable an interface, the cloud-native router deletes all the MAC entries associated with that interface from the MAC table.
MAC Entry Aging
The aging timeout for cached MAC entries is 60 seconds. You can configure the aging timeout at deployment time by editing the values.yaml file. The minimum timeout is 60 seconds and the maximum timeout is 10,240 seconds. You can see the time that is left for each MAC entry through introspect at http://host server IP:8085/mac_learning.xml#Snh_FetchL2MacEntry. We show an example of the output below:
l2_mac_entry_list
vrf_id  vlan_id  mac                time_since_add   last_stats_change  index  packets
0       1001     00:10:94:00:00:01  12:55:14.248785  00:00:00.155450    5644   615123154
0       1001     00:10:94:00:00:01  12:55:14.247765  00:00:00.155461    6480   615123154
0       1002     00:10:94:00:00:01  12:55:14.248295  00:00:00.155470    5628   615123154
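The aging bounds described above (default 60 seconds, configurable between 60 and 10,240 seconds in values.yaml) can be expressed as a small helper. This is an illustration of the documented limits, not JCNR code, and the function name is hypothetical.

```python
def effective_aging_timeout(configured=None):
    """Return the MAC-entry aging timeout in seconds: the default is 60,
    and any configured value is clamped to the documented 60..10240 range."""
    if configured is None:
        return 60
    return max(60, min(10240, configured))
```

For example, configuring 30 seconds still yields the 60-second minimum, while 20,000 seconds is capped at 10,240.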
APIs and CLI Commands for Bond Interfaces
SUMMARY
Read this topic to learn about the APIs and CLIs available in the L2 mode of the Juniper Cloud-Native Router. Cloud-Native Router supports an API that can be used to force traffic to switch from the active interface to the standby interface in a bonded pair. Another Cloud-Native Router API and a CLI can be used to view the active node details in a bond interface.
IN THIS SECTION APIs for Bond Interfaces | 58 CLI Commands for Bond Interfaces | 59
APIs for Bond Interfaces
When you run cloud-native router in L2 mode with cascaded nodes, you can configure those nodes to use bond interfaces. You can configure the bond mode in the values.yaml file before deployment. For example:
bondInterfaceConfigs:
  - name: "bond0"
    mode: 1                 # ACTIVE_BACKUP MODE
    slaveInterfaces:
      - "enp59s0f0v0"
      - "enp59s0f0v1"
API to View the Active and Backup Interfaces in a Bond Interface Pair
Starting with Cloud-Native Router Release 23.3, use the REST API call curl -X GET http://127.0.0.1:9091/bond-get-active/bond0 on localhost port 9091 to fetch the active and backup interface details of a bond interface pair.
A sample output is shown below:
root@nodep23:~# curl -X GET http://127.0.0.1:9091/bond-get-active/bond0
{"active": "0000:af:01.0", "backup": "0000:af:01.1"}
API to Force Bond Link Switchover
Starting with Cloud-Native Router Release 22.4, you can force traffic switchover from an active to backup interface in a bond interface pair using a REST API. If you have configured the bond interface pair in the ACTIVE_BACKUP mode before deploying JCNR, then the vRouter-agent exposes the REST API call: curl -X POST http://127.0.0.1:9091/bond-switch/bond0 on localhost port 9091. Use this REST API call to force traffic to switch from the active interface to the backup interface.
A sample output is shown below:
root@nodep23:~# curl -X GET http://127.0.0.1:9091/bond-get-active/bond0
{"active": "0000:af:01.0", "backup": "0000:af:01.1"}
root@nodep23:~# curl -X POST http://127.0.0.1:9091/bond-switch/bond0
{}
root@nodep23:~# curl -X GET http://127.0.0.1:9091/bond-get-active/bond0
{"active": "0000:af:01.1", "backup": "0000:af:01.0"}
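A small Python wrapper around the two REST calls shown above might look like this. It assumes the vRouter-agent endpoint on localhost port 9091 as documented; the function names are illustrative, and the network calls must be run on the JCNR host itself.

```python
import json
import urllib.request

AGENT = "http://127.0.0.1:9091"   # vRouter-agent REST endpoint (documented)

def bond_url(action, bond="bond0"):
    """Build the agent URL for 'bond-get-active' or 'bond-switch'."""
    return f"{AGENT}/{action}/{bond}"

def get_active(bond="bond0"):
    """GET the active/backup pair, e.g. {"active": "...", "backup": "..."}."""
    with urllib.request.urlopen(bond_url("bond-get-active", bond)) as resp:
        return json.load(resp)

def force_switchover(bond="bond0"):
    """POST to force traffic from the active to the backup interface."""
    req = urllib.request.Request(bond_url("bond-switch", bond),
                                 data=b"", method="POST")
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)
```

Calling force_switchover() and then get_active() again should show the active and backup PCI addresses swapped, as in the sample output above.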
CLI Commands for Bond Interfaces
The vRouter contains the following CLI commands related to bond interfaces:
· dpdkinfo -b: displays the active interface in a bonded pair.
Slave Interface(0): 0000:17:01.0
Slave Interface Driver: net_iavf
Slave Interface (0): Active
Slave Interface Mac: 6E:BD:45:0F:4A:02
MII status: UP
MII Link Speed: 10000 Mbps

Slave Interface(1): 0000:17:11.0
Slave Interface Driver: net_iavf
Slave Interface Mac: 6E:BD:45:0F:4A:C2
MII status: UP
MII Link Speed: 25000 Mbps
· dpdkinfo -n: displays the traffic statistics associated with your bond interfaces.
[root@jcnr-01 /]# dpdkinfo -n
Master Info (eth_bond_bond0):
  RX Device Packets: 72019, Bytes: 96419113, Errors: 0, Nombufs: 0
  Dropped RX Packets: 37475
  TX Device Packets: 0, Bytes: 0, Errors: 0
  Queue Rx:  Tx:  Rx Bytes:  Tx Bytes:  Errors:

Slave Info (0000:17:01.0):
  RX Device Packets: 72019, Bytes: 66073908, Errors: 0, Nombufs: 0
  Dropped RX Packets: 588
  TX Device Packets: 0, Bytes: 0, Errors: 0
  Queue Rx:  Tx:  Rx Bytes:  Tx Bytes:  Errors:

Slave Info (0000:17:11.0):
  RX Device Packets: 0, Bytes: 30345205, Errors: 0, Nombufs: 0
  Dropped RX Packets: 36887
  TX Device Packets: 0, Bytes: 0, Errors: 0
  Queue Rx:  Tx:  Rx Bytes:  Tx Bytes:  Errors:
Native VLAN
IN THIS SECTION
Native VLAN | 61
Starting in Juniper Cloud-Native Router Release 23.1, Cloud-Native Router supports receiving and forwarding untagged packets on a trunk interface. Typically, trunk ports accept only tagged packets, and
the untagged packets are dropped. You can enable a Cloud-Native Router fabric trunk port to accept untagged packets by configuring a native VLAN identifier (ID) on the interface on which you want the untagged packets to be received. When a Cloud-Native Router fabric trunk port is enabled to accept untagged packets, such packets are forwarded in the native VLAN domain.
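The trunk-port behavior described above can be sketched as a simple classification function. This is illustrative only; the vRouter's actual datapath logic is more involved, and the function name is an assumption.

```python
def classify(frame_vlan, allowed_vlans, native_vlan=None):
    """Return the VLAN domain an incoming trunk-port frame joins,
    or None if the frame is dropped.

    frame_vlan of None models an untagged frame: it is accepted only
    when a native VLAN id is configured, and it joins that domain.
    Tagged frames are accepted only for VLANs on the trunk's list."""
    if frame_vlan is None:                 # untagged frame
        return native_vlan                 # None means drop (no native VLAN)
    return frame_vlan if frame_vlan in allowed_vlans else None
```

With native-vlan-id 100 configured, an untagged frame is forwarded in VLAN 100; without it, the same frame is dropped.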
Native VLAN
Enable the native-vlan-id key in the Helm chart, at the time of deployment, to configure the VLAN identifier and associate it with untagged data packets received on the fabric trunk interface. Edit the values.yaml file in the Juniper_Cloud_Native_Router_<release-number>/helmchart directory and add the key native-vlan-id along with a value for it. For example:
fabricInterface:
  - eth1:
      ddp: on
      interface_mode: trunk
      vlan-id-list: [100, 200, 300, 700-705]
      storm-control-profile: rate_limit_pf1
      native-vlan-id: 100
      no-local-switching: true
NOTE: After editing the values.yaml file, you have to install or upgrade Cloud-Native Router using the edited values.yaml to ensure that the native-vlan-id key is enabled.
To verify that native VLAN is enabled for an interface, connect to the vRouter agent by executing the kubectl exec -it -n contrail contrail-vrouter-<agent container> -- bash command, and then run the command vif --get <interface index id>. A sample output is shown below:
vif0/1      PCI: 0000:00:00.0 (Speed 10000, Duplex 1)
            Type:Physical HWaddr:6a:45:b2:a8:ce:5c
            Vrf:0 Flags:L2Vof QOS:-1 Ref:11
            RX port packets:36550 errors:0
            RX queue packets:36550 errors:0
            RX queue errors to lcore 0 0 0 0 0 0 0 0 0 0 0 0 0 0
            Fabric Interface: eth_bond_bond0  Status: UP  Driver: net_bonding
            Slave Interface(0): 0000:3b:02.0  Status: UP  Driver: net_iavf
            Vlan Mode: Trunk  Vlan: 100 200 300  Native vlan id: 100
RX packets:36550 bytes:5875795 errors:0 TX packets:0 bytes:0 errors:0 Drops:613
Enabling Dynamic Device Personalization (DDP) on Individual Interfaces
SUMMARY Dynamic Device Personalization (DDP) is an Intel technology that provides a programmable packet-processing pipeline as a profile loaded onto the NIC. Cloud-Native Router supports enabling DDP on individual interfaces.
Starting with Juniper Cloud-Native Router (JCNR) Release 23.2, Cloud-Native Router supports enabling Dynamic Device Personalization (DDP) on individual interfaces. This feature is available in L2, L3, and L2-L3 modes. DDP is an Intel technology that provides a programmable packet-processing pipeline as a profile loaded onto the NIC. Multiple Intel NICs support this technology, and the level of support varies by NIC type. DDP is used in packet classification: the profiles applied to the NIC can classify multiple packet formats in hardware, improving throughput to the Data Plane Development Kit (DPDK). Because Cloud-Native Router supports interfaces from different NIC cards, a deployment might have one interface from a NIC that supports DDP and another interface from a NIC that does not. Enabling DDP per interface overcomes such issues.
NOTE: For E810 PF, Cloud-Native Router loads the DDP package which is bundled with JCNR. However, for other NICs, ensure you load the DDP package on the NICs before starting JCNR.
A DDP configuration is available per interface. This option overrides the global DDP (ddp) configuration for that interface. If you do not configure DDP on an interface, the global configuration value applies to that interface. If you do not configure the global DDP configuration either, the global default of off takes effect.
NOTE: DDP is supported on the following NICs:
· E810 VF
· E810 PF
· X710 PF
· XXV710 PF
DDP support is not available when interfaces are defined under subnets.
You should configure DDP in the helm chart before deployment. Configuring DDP in the helm charts, both globally and at the interface level, is optional. If you do not configure the DDP keys, then the default value for global DDP, which is off, takes effect. The global DDP configuration is available in the values.yaml file as shown below:
# Set ddp to enable Dynamic Device Personalization (DDP)
# Provides datapath optimization at NIC for traffic like GTPU, SCTP etc.
# Options include auto or on or off; default: off
ddp: "auto"
You can configure one of the following options for ddp at the interface level:
1. auto: Cloud-Native Router checks whether the NIC supports DDP during deployment and configures DPDK accordingly. Detecting DDP support at run time makes it easier to deploy Cloud-Native Router in volume.
2. on: enables DDP on the interface without validating the NIC. Use this option only if you are sure that the NIC supports DDP.
3. off: the default option at the interface level. This option disables DDP on the interface.
For example,
eth1:
  ddp: "off"   ## auto or on or off
NOTE: Each interface can have a different configuration for ddp. DDP is enabled for a bond interface only if all the slave interface NICs support DDP.
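The precedence rules described above (the interface setting overrides the global one, the global default is off, and a bond needs DDP support on every member) can be sketched in Python. This is an illustration of the documented behavior, not JCNR code; the functions and the treatment of "auto" as potentially-supported are assumptions.

```python
def effective_ddp(interface_ddp=None, global_ddp=None):
    """Resolve the ddp value for one interface: the per-interface
    setting wins, then the global setting, then the default 'off'."""
    if interface_ddp is not None:
        return interface_ddp
    if global_ddp is not None:
        return global_ddp
    return "off"

def bond_ddp(member_settings, global_ddp=None):
    """A bond interface gets DDP only when every member resolves to a
    DDP-capable setting ('on', or 'auto' where the NIC supports DDP)."""
    return all(effective_ddp(m, global_ddp) in ("on", "auto")
               for m in member_settings)
```

So with a global ddp of "auto", an interface explicitly set to "off" stays off, while its siblings inherit "auto".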
Chapter 3
L3 Features
IN THIS CHAPTER L3 Features Overview | 67 Link Layer Discovery Protocol (LLDP) | 67 DHCP Relay | 73 Layer-3 Class of Service (CoS) | 80 Two-Way Active Measurement Protocol (TWAMP) | 111 Segment Routing | 122 Topology-Independent Loop-Free Alternates (TI-LFA) | 143 Access Control Lists (Firewall Filters) | 157 IPsec Security Services | 176 Cloud-Native Router as a Transit Gateway | 205 EVPN Type 5 Routing over VXLAN Tunnels | 206 Integrated Routing and Bridging on JCNR | 214 L3 Routing Protocols | 222 MPLS Support | 225 Bidirectional Forwarding Detection (BFD) | 226 Virtual Router Redundancy Protocol (VRRP) | 228 Virtual Routing Instance (VRF-Lite) | 229 ECMP | 233 BGP Unnumbered | 233 Layer-3 VLAN Sub-Interfaces | 234 Enabling Dynamic Device Personalization (DDP) on Individual Interfaces | 236
L3 Features Overview
SUMMARY Read this topic to learn about the features available in the Juniper Cloud-Native Router when deployed in L3 (router) mode.
The Juniper Cloud-Native Router supports multiple “deployment modes” on page 13. In L3 mode, the cloud-native router behaves like a router and so performs routing functions and runs routing protocols such as ISIS, BGP, OSPF, and segment routing-MPLS. In L3 mode, the pod network is divided into an IPv6 underlay network and an IPv4 or IPv6 overlay network. The IPv6 underlay network is used for control plane traffic. All L3 features are supported with DPDK datapath. Please review Supported Cloud-Native Router Features for eBPF XDP to verify feature support for eBPF XDP datapath. This chapter provides information about the various L3 features supported by JCNR.
Link Layer Discovery Protocol (LLDP)
SUMMARY
Juniper Cloud-Native Router supports LLDP on Layer-3 interfaces to advertise capabilities, identity, and other information onto a LAN.
IN THIS SECTION LLDP Overview | 67 LLDP Verification | 70
LLDP Overview
The Link Layer Discovery Protocol (LLDP) is an industry-standard method to enable networked devices to advertise capabilities, identity, and other information onto a LAN. The Juniper Cloud-Native Router supports LLDP on Layer 3 interfaces. LLDP information is sent at a fixed interval as an Ethernet frame.
Each frame contains one LLDP protocol data unit (PDU). Each LLDP PDU is a sequence of type-length-value (TLV) elements. Cloud-Native Router transmits the following mandatory and non-mandatory TLVs:
Table 2: Supported TLVs

Mandatory           | Non-Mandatory
Chassis ID          | Port Description
Port ID             | System Name
Time to Live (TTL)  | System Description
End TLV             | System Capabilities (MAC/PHY configurations and status)
                    | Link Aggregation (Type 3 and Type 7)
                    | Maximum Frame Size
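The TLV encoding used by these elements (a 7-bit type and a 9-bit length packed into a 16-bit header, followed by the value bytes) can be sketched in Python. This is an illustrative parser for the wire format defined by IEEE 802.1AB, not JCNR code; the sample MAC address is taken from the configlet later in this topic:

```python
import struct

def parse_lldp_tlvs(pdu: bytes):
    """Parse an LLDPDU into (type, value) pairs.

    Each TLV starts with a 16-bit header: 7 bits of type and 9 bits
    of length, followed by `length` bytes of value.
    """
    tlvs = []
    offset = 0
    while offset + 2 <= len(pdu):
        header = struct.unpack_from("!H", pdu, offset)[0]
        tlv_type = header >> 9       # upper 7 bits
        tlv_len = header & 0x1FF     # lower 9 bits
        offset += 2
        value = pdu[offset:offset + tlv_len]
        offset += tlv_len
        tlvs.append((tlv_type, value))
        if tlv_type == 0:            # End Of LLDPDU TLV terminates the PDU
            break
    return tlvs

# A minimal PDU: Chassis ID (type 1, subtype 4 = MAC address),
# TTL (type 3, 2-byte value), End TLV (type 0, zero length).
mac = bytes.fromhex("2c6bf5156bc0")
chassis = struct.pack("!H", (1 << 9) | 7) + b"\x04" + mac
ttl = struct.pack("!H", (3 << 9) | 2) + struct.pack("!H", 120)
end = struct.pack("!H", 0)
tlvs = parse_lldp_tlvs(chassis + ttl + end)
print(tlvs)
```

The End Of LLDPDU TLV (type 0, length 0) stops the walk, mirroring the End TLV listed as mandatory above.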
NOTE: By default, the Management-address TLV is not sent. The Management-address TLV is sent out only once the management-address or management-interface is configured on the Cloud-Native Router.
You can configure tlv-filter and tlv-select to filter or select non-mandatory TLVs. Cloud-Native Router receives all TLVs supported by Junos OS. You can view the received TLVs using the show lldp neighbors command output. Review the LLDP Overview Junos documentation for more details.
LLDP Configuration
You can configure the following options for the LLDP protocol in the Cloud-Native Router:
# set protocols lldp ?
Possible completions:
  advertisement-interval       Transmit interval for LLDP messages (5..32768 seconds)
+ apply-groups                 Groups from which to inherit configuration data
+ apply-groups-except          Don’t inherit configuration data from these groups
> chassis-id                   Chassis-id to be used for Chassis ID TLV generation
  dest-mac-type                Destination address to be used
  disable                      Disable LLDP
  fast-rx-processing           Start optimised processing of received pdu
  hold-multiplier              Hold timer interval for LLDP messages (2..10)
> interface                    Interface configuration
  lldp-tx-fast-init            Transmission count in fast transmission mode (1..8)
  management-address           LLDP management address
  management-interface         Management interface to be used in LLDP PDUs
  mau-type                     Populate mau-type in lldp PDU
  neighbour-port-info-display  Show lldp neighbors to display port-id or port-description
  port-description-type        The Interfaces Group MIB object to be used for Port Description TLV generation
  port-id-subtype              Sub-type to be used for Port ID TLV generation
  system-description           System description to be used in system-description TLV
  system-name                  System name to be used in system-name TLV
+ tlv-filter                   Filter TLVs to be sent
+ tlv-select                   Select TLVs to be sent
> traceoptions                 Trace options for LLDP
  transmit-delay               Transmit delay time interval for LLDP messages (1..8192 seconds)
Review the edit-protocols-lldp topic for more details about the configurable options.
You must perform LLDP configuration using a Configlet. Review Customize JCNR Configuration for more details. A sample configlet is provided below:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set protocols lldp enable
    set protocols lldp management-address 192.168.1.10
    set protocols lldp advertisement-interval 5
    set protocols lldp transmit-delay 1
    set protocols lldp hold-multiplier 2
    set protocols lldp chassis-id chassis-id-type mac-address
    set protocols lldp chassis-id chassis-id-value 2c:6b:f5:15:6b:c0
    set protocols lldp interface ens19
    set protocols lldp system-name jcnr1.ix.juniper.net
  crpdSelector:
    matchLabels:
      node: worker
LLDP Verification
You can verify the LLDP configuration and statistics using the cRPD show commands:

1. Verify the LLDP configuration:
user@host> show lldp
LLDP                      : Enabled
Advertisement interval    : 5 seconds
Transmit delay            : 1 seconds
Hold timer                : 10 seconds
Notification interval     : 5 Second(s)
Tx fast start count       : 1 Packets
Config Trap Interval      : 0 seconds
Connection Hold timer     : 300 seconds
Port ID TLV subtype       : locally-assigned
Port Description TLV type : interface-alias (ifAlias)

Interface    Parent Interface    LLDP       LLDP-MED    Power Negotiation
ens19        -                   Enabled    -           -
user@host> show lldp detail
LLDP                      : Enabled
Advertisement interval    : 5 seconds
Transmit delay            : 1 seconds
Hold timer                : 10 seconds
Notification interval     : 5 Second(s)
Tx fast start count       : 1 Packets
Config Trap Interval      : 0 seconds
Connection Hold timer     : 300 seconds
Port ID TLV subtype       : locally-assigned
Port Description TLV type : interface-alias (ifAlias)

Interface  Parent Interface  LLDP     LLDP-MED  Power Negotiation  Neighbor count  Dest MAC
ens19      -                 Enabled  -         -                  1               01:80:C2:00:00:0E

Basic Management TLVs supported: End Of LLDPDU, Chassis ID, Port ID, Time To Live, Port Description, System Name, System Description, System Capabilities, Management Address

Organizationally Specific TLVs supported: Port VLAN tag, Port VLAN name, MAC/PHY configuration/status, Link aggregation, Maximum Frame Size
2. Verify the LLDP local information of the device:
user@host> show lldp local-information
LLDP Local Information details

Chassis ID   : 2c:6b:f5:15:6b:c0
Chassis type : Mac address
System name  : jcnr1.ix.juniper.net
System descr : Juniper Networks, Inc. crpd internet router, kernel JUNOS, Build date: 2024-11-19 12:16:34 UTC Copyright (c) 1996-2024 Juniper Networks, Inc.

System capabilities
    Supported : Bridge Router
    Enabled   : Bridge Router

Management Information
    Interface Name              : ens19
    Address Subtype             : IPv4(1)
    Address                     : 192.168.1.10
    Interface Number            : 0
    Interface Numbering Subtype : Unknown(1)

Interface name  Parent Interface  Interface ID  Interface description  Status
ens19           -                 6             ens19                  Up
user@host> show lldp statistics
Interface  Parent Interface  Received  Unknown TLVs  With Errors  Discarded TLVs  Transmitted  Untransmitted
ens19      -                 458       0             0            0               486          0
3. Verify LLDP neighbors:
user@host> show lldp neighbors
Local Interface  Parent Interface  Chassis Id         Port info  System Name
ens19            -                 2c:6b:f5:16:6b:c0  ens20      jcnr2.ix.juniper.net
user@host> show lldp neighbors detail
LLDP Neighbor Information:

Local Information:
Index: 2 Time to live: 10 Time mark: Wed Nov 20 12:08:13 2024 Age: 4 secs

Local Interface  : ens19
Parent Interface : -
Local Port ID    : ens19
Ageout Count     : 0

Neighbor Information:

Chassis type       : Mac address
Chassis ID         : 2c:6b:f5:16:6b:c0
Port type          : Locally assigned
Port ID            : ens20
Port description   : ens20
System name        : jcnr2.ix.juniper.net
System Description : Juniper Networks, Inc. crpd internet router, kernel JUNOS, Build date: 2024-11-19 12:16:34 UTC Copyright (c) 1996-2024 Juniper Networks, Inc.

System capabilities
    Supported : Bridge Router
    Enabled   : Bridge Router

Management address
    Address Type      : IPv4(1)
    Address           : 192.168.1.20
    Interface Number  : 0
    Interface Subtype : Unknown(1)
    OID               : 1.

Organization Info
    OUI     : IEEE 802.3 Private (0x00120f)
    Subtype : MAC/PHY Configuration/Status (1)
    Info    : Autonegotiation [not supported, disabled (0x0)], PMD Autonegotiation Capability (0x6c1d), MAU Type (0x0)
    Index   : 1

Organization Info
    OUI     : IEEE 802.3 Private (0x00120f)
    Subtype : Link Aggregation (3)
    Info    : Aggregation Status [supported, disabled (0x1)], Aggregation Port ID (0)
    Index   : 2

Organization Info
    OUI     : IEEE 802.3 Private (0x00120f)
    Subtype : Maximum Frame Size (4)
    Info    : MTU Size (1500)
    Index   : 3

Organization Info
    OUI     : Ethernet Bridged (0x0080c2)
    Subtype : Link Aggregation 802.1 (7)
    Info    : Aggregation Status [supported, disabled, Aggregator Port (0x1)], Aggregation Port ID (0)
    Index   : 4
DHCP Relay

SUMMARY
Juniper Cloud-Native Router can relay DHCP messages between cascaded Next-Generation Distributed Units (NGDUs) and an external DHCP server.
Juniper Cloud-Native Router can be configured as a stateless DHCP relay agent for an L2-L3 deployment. It can relay DHCP messages between cascaded Next-Generation Distributed Units (NGDUs) and an external DHCP server. It supports a simple, non-snooping packet-forwarding DHCPv4 and DHCPv6 relay function between the DHCP client and DHCP server, and does not maintain leases or client state.

When configured as a DHCPv4 relay agent, the Cloud-Native Router is bypassed for subsequent lease renewals once the client has obtained its address and configuration from the DHCP server. You can configure the same behavior for the DHCPv6 implementation as well. In the forward-only implementation, the relay agent does not participate in the state exchange between the client and server. Hence, events such as reboot, Graceful Routing Engine switchover (GRES), or failover can quickly self-correct as the clients retry interrupted transactions.
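To illustrate what "forward-only" means, the sketch below models the one field a DHCPv4 relay does touch on the client-to-server path: the giaddr field of the BOOTP header, which tells the server which subnet the client sits on. This is a simplified, hypothetical model (the relay address 192.168.1.1 and the raw-byte handling are assumptions for illustration), not JCNR code:

```python
import socket

def relay_to_server(packet: bytearray, relay_ip: bytes) -> bytearray:
    """Forward-only DHCPv4 relay step: if the client-originated packet
    has no gateway address yet, stamp this relay's address into giaddr
    (bytes 24..28 of the BOOTP header). No lease or client state is
    kept beyond this, so the relay can restart without breaking clients."""
    GIADDR_OFFSET = 24  # op(1)+htype(1)+hlen(1)+hops(1)+xid(4)+secs(2)+flags(2)+ciaddr(4)+yiaddr(4)+siaddr(4)
    if packet[GIADDR_OFFSET:GIADDR_OFFSET + 4] == b"\x00\x00\x00\x00":
        packet[GIADDR_OFFSET:GIADDR_OFFSET + 4] = relay_ip
    return packet

pkt = bytearray(240)   # minimal BOOTP header, giaddr zeroed
pkt[0] = 1             # op = BOOTREQUEST
relayed = relay_to_server(pkt, socket.inet_aton("192.168.1.1"))
print(socket.inet_ntoa(bytes(relayed[24:28])))  # 192.168.1.1
```

Once the client holds a lease, renewals are unicast directly to the server, which is why the relay is bypassed for subsequent renewals.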
Configuration
The following table lists the knobs and overrides that are supported for DHCPv4 and DHCPv6 relay options on Cloud-Native Router:

Table 3: DHCPv4 and DHCPv6 Support

Protocol | Supported Knobs                                               | Supported Overrides
DHCPv4   | forward-only; relay-option-82                                 | always-write-option-82 (circuit-id | remote-id); relay-source; trust-option-82; user-defined-option-82 string
DHCPv6   | forward-only; relay-agent-interface-id; relay-agent-remote-id | No DHCPv6 overrides supported
The configuration syntax for DHCPv4 relay agent is provided below. You can configure DHCPv4 relay agent under the [edit] and [edit routing-instances] hierarchy. Please review DHCP Relay CLI for command description and options.
[edit]
forwarding-options {
    dhcp-relay {
        active-server-group name;
        duplicate-clients-in-subnet (incoming-interface | option-82);
        forward-only;
        overrides {
            always-write-option-82 (circuit-id | remote-id);
            relay-source;
            trust-option-82;
            user-defined-option-82 string;
        }
        relay-option-82 {
            circuit-id {
                prefix {
                    host-name;
                    logical-system-name;
                    routing-instance-name;
                }
                use-interface-description (device | logical);
                user-defined;
            }
            remote-id {
                prefix {
                    host-name;
                    logical-system-name;
                    routing-instance-name;
                }
                use-interface-description (device | logical);
            }
        }
        server-group name {
            ip-address;
        }
        group name {
            relay-option-82 {
                circuit-id {
                    prefix {
                        host-name;
                        logical-system-name;
                        routing-instance-name;
                    }
                    use-interface-description (device | logical);
                    user-defined;
                }
                remote-id {
                    prefix {
                        host-name;
                        logical-system-name;
                        routing-instance-name;
                    }
                    use-interface-description (device | logical);
                }
            }
            active-server-group name;
            interface interface_name;
        }
    }
}
NOTE: If a packet arrives with an option-82 record and trust-option-82 is not configured, the packet is dropped. If a packet arrives with an option-82 record while relay-option-82 is configured, the original incoming option-82 value is preserved with no changes.
The configuration syntax for the DHCPv6 relay agent is provided below. You can configure the DHCPv6 relay agent under the [edit] and [edit routing-instances] hierarchy. Review DHCPv6 Relay CLI for command descriptions and options.
[edit]
forwarding-options {
    dhcp-relay {
        dhcpv6 {
            active-server-group name;
            forward-only;
            relay-agent-interface-id {
                prefix {
                    host-name;
                    logical-system-name;
                    routing-instance-name;
                }
                use-interface-description (device | logical);
            }
            relay-agent-remote-id {
                prefix {
                    host-name;
                    logical-system-name;
                    routing-instance-name;
                }
                use-interface-description (device | logical);
            }
            server-group <name> {
                ip-address;
            }
            group name {
                relay-agent-interface-id {
                    prefix {
                        host-name;
                        logical-system-name;
                        routing-instance-name;
                    }
                    use-interface-description (device | logical);
                }
                relay-agent-remote-id {
                    prefix {
                        host-name;
                        logical-system-name;
                        routing-instance-name;
                    }
                    use-interface-description (device | logical);
                }
                active-server-group name;
                interface interface_name;
            }
        }
    }
}
You can configure DHCP tracing using the traceoptions configuration as shown in the snippet below:
[edit]
system {
    processes {
        dhcp-service {
            traceoptions {
                file jdhcpd size 20m;
                level all;
                flag all;
            }
        }
    }
}
Verification

You can verify the DHCP statistics via the “cRPD shell” on page 354. Use show dhcp statistics to view DHCP service statistics.

root@controller-0> show dhcp statistics
Packets dropped:
    Total                  16
    No routing instance    16
Use show dhcp relay statistics to display DHCP relay statistics.
root@controller-0> show dhcp relay statistics
Packets dropped:
    Total                   16
    dhcp-service total      16

Messages received:
    BOOTREQUEST             0
    DHCPDECLINE             0
    DHCPDISCOVER            0
    DHCPINFORM              0
    DHCPRELEASE             0
    DHCPREQUEST             0
    DHCPLEASEACTIVE         0
    DHCPLEASEUNASSIGNED     0
    DHCPLEASEUNKNOWN        0
    DHCPLEASEQUERYDONE      0
    DHCPACTIVELEASEQUERY    0

Messages sent:
    BOOTREPLY               0
    DHCPOFFER               0
    DHCPACK                 10
    DHCPNAK                 0
    DHCPFORCERENEW          2
    DHCPLEASEQUERY          2
    DHCPBULKLEASEQUERY      0
    DHCPLEASEACTIVE         0
    DHCPLEASEUNASSIGNED     0
    DHCPLEASEUNKNOWN        0
    DHCPLEASEQUERYDONE      0
    DHCPACTIVELEASEQUERY    7
Use show dhcpv6 relay statistics to view DHCPv6 relay statistics.
root@controller-0> show dhcpv6 relay statistics
Dhcpv6 Packets dropped:
    Total                       0

Messages received:
    DHCPV6_DECLINE              0
    DHCPV6_SOLICIT              2
    DHCPV6_INFORMATION_REQUEST  0
    DHCPV6_RELEASE              0
    DHCPV6_REQUEST              2
    DHCPV6_CONFIRM              0
    DHCPV6_RENEW                0
    DHCPV6_REBIND               0
    DHCPV6_RELAY_FORW           0
    DHCPV6_LEASEQUERY_REPLY     0
    DHCPV6_LEASEQUERY_DATA      0
    DHCPV6_LEASEQUERY_DONE      0
    DHCPV6_ACTIVELEASEQUERY     0

Messages sent:
    DHCPV6_ADVERTISE            0
    DHCPV6_REPLY                0
    DHCPV6_RECONFIGURE          0
    DHCPV6_RELAY_REPLY          0
    DHCPV6_LEASEQUERY           0
    DHCPV6_LEASEQUERY_REPLY     2
    DHCPV6_LEASEQUERY_DATA      0
    DHCPV6_LEASEQUERY_DONE      0
    DHCPV6_ACTIVELEASEQUERY     0
You can clear the DHCP statistics using the commands provided below:
clear dhcp statistics
clear dhcp relay statistics
clear dhcpv6 relay statistics
Layer-3 Class of Service (CoS)
SUMMARY

Juniper Cloud-Native Router supports Layer 3 Class of Service (CoS), also known as L3 Quality of Service (QoS). This topic provides an overview of the supported CoS mechanisms, followed by configuration and verification examples.
IN THIS SECTION

L3 CoS Overview | 80
L3 CoS Configuration | 87
L3 CoS Verification | 95
L3 CoS Overview
IN THIS SECTION

Cloud-Native Router Supported CoS Mechanisms | 81
When a network experiences congestion and delay, some packets must be prioritized to avoid random loss of data. Class of service (CoS), also known as Quality of Service (QoS), accomplishes this prioritization by dividing similar types of traffic, such as e-mail, streaming video, voice, and large document file transfers, into classes. You then apply different levels of priority, such as those for throughput and packet loss, to each group, thereby controlling traffic behavior. Juniper Cloud-Native Router supports CoS, enabling it to differentiate and classify traffic. It can either drop traffic or lower the priority of the traffic as per the rules configured. You can read more about Class of Service in the Junos documentation.
The Cloud-Native Router CoS application supports DiffServ, which uses a 6-bit differentiated services code point (DSCP) in the differentiated services field of the IPv4 and IPv6 packet header. For IPv6, DSCP is referred to as traffic class. The configuration uses DSCP values to decide the CoS treatment for the incoming packet. The DSCP field is also used to notify the modified priority of a packet to the next hop.
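Since the 6-bit DSCP occupies the upper bits of the IPv4 ToS byte (or the IPv6 Traffic Class byte), with the lower two bits reserved for ECN, reading and rewriting it comes down to a pair of bit operations. A minimal sketch:

```python
def dscp_from_tos(tos_byte: int) -> int:
    """Extract the 6-bit DSCP from the upper six bits of the
    IPv4 ToS byte (or IPv6 Traffic Class byte)."""
    return tos_byte >> 2

def tos_with_dscp(tos_byte: int, dscp: int) -> int:
    """Rewrite the code point while preserving the two ECN bits,
    as a rewrite rule does when remarking a packet."""
    return ((dscp & 0x3F) << 2) | (tos_byte & 0x03)

# ToS 0xB8 (binary 10111000) carries DSCP 46, Expedited Forwarding.
print(dscp_from_tos(0xB8))              # 46
# Remark the same packet to DSCP 10 (binary 001010).
print(hex(tos_with_dscp(0xB8, 0b001010)))  # 0x28
```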
Cloud-Native Router Supported CoS Mechanisms
Cloud-Native Router supports the Classifier, Policer, and Rewrite/Marker CoS mechanisms. The sections below describe the Cloud-Native Router CoS implementation in detail.
Forwarding Classes and Loss Priority
The forwarding classes affect the forwarding and marking policies applied to packets as they transit JCNR. By default, Cloud-Native Router has no forwarding classes defined. You can define up to 16 custom forwarding classes mapped to 8 queues.
In the Cloud-Native Router CoS implementation, the forwarding class, along with the loss priority, is used for rewrite rules and the policer. Loss priority is set by the classifier as low, medium-low, medium-high, or high. Here are some of the ways loss priority is used in the Cloud-Native Router CoS implementation:
Table 4: Using Forwarding Class and Loss Priority

QoS Block  | How Loss Priority Is Used
Classifier | Forwarding class and loss priority are set by the classifier.
Rewrite    | Loss priority is used as an index, along with the traffic class (forwarding class), to obtain a new DSCP value.
Policer    | Only a color-aware policer uses loss priority. Loss priority maps to traffic colors as follows: loss priority Low is mapped to Green; loss priorities Medium High and Medium Low are mapped to Yellow; loss priority High is mapped to Red.
Scheduler  | Forwarding class and loss priority are inputs to the scheduler for queuing the traffic based on strict priority.
Classifiers
Packet classification refers to the examination of an incoming packet. This function associates the packet with a particular CoS servicing level. Classifiers associate incoming packets with a forwarding class and loss priority and, based on the associated forwarding class, assign packets to output queues. Two general types of classifiers are supported:
· Behavior Aggregate Classifier–Behavior aggregate (BA) is a method of classification that operates on a packet as it enters JCNR. The CoS value in the packet header is examined, and this single field determines the CoS settings applied to the packet. BA classifiers allow you to set the forwarding class and loss priority of a packet based on the Differentiated Services code point (DSCP) value and DSCP IPv6 value. A BA classifier is configured at the interface level. You can read more about Behavior Aggregate Classifiers in the Junos documentation.
· Multifield Traffic Classifier–A multifield (MF) classifier can examine multiple fields in the packet, such as the source and destination address of the packet as well as the source and destination port numbers. With multifield classifiers, you set the forwarding class and loss priority of a packet based on firewall filter (ACL) rules. If a packet matches both BA and MF classifiers, the MF classifier takes precedence. You can read more about the Multifield Traffic Classifier in the Junos documentation.
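Conceptually, a BA classifier is a lookup table keyed on the DSCP code point. The sketch below models a few entries in the style of the BA_1 classifier configured later in this topic; the `best-effort` fallback is illustrative and reflects the fact that JCNR applies no implicit best-effort treatment, so a catch-all rule must be configured explicitly:

```python
# DSCP code point -> (forwarding class, loss priority),
# modeled on the BA_1 sample classifier in this topic.
BA_1 = {
    0b000001: ("af11", "high"),
    0b001010: ("af11", "low"),
    0b101111: ("ef", "low"),
    0b110000: ("nc1", "low"),
}

def classify(dscp: int):
    """Return the (forwarding class, loss priority) for a DSCP value.
    The fallback stands in for an explicitly configured catch-all rule."""
    return BA_1.get(dscp, ("best-effort", "low"))

print(classify(0b101111))  # ('ef', 'low')
```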
Policers
Policers allow you to limit traffic of a certain class to a specified bandwidth and burst size. A policer meters (measures) each packet against the configured traffic rates and burst sizes. It then passes the packet and the metering result to the marker (rewrite rules), which assigns a packet loss priority that corresponds to the metering result. Based on the particular set of traffic limits configured, a policer identifies a traffic flow as belonging to one of either two or three categories, similar to the colors of a traffic light used to control automobile traffic. Policers can be applied per packet or per traffic class. The Cloud-Native Router CoS implementation supports 16 policer profiles. You can read more about Policer Implementation in the Junos documentation.

In the Cloud-Native Router CoS implementation, policers work in single-rate three-color marker (srTCM) or two-rate three-color marker (trTCM) mode. Policers can perform traffic color marking in color-aware or color-blind mode. In color-aware mode, the policer takes the packet's color, as derived by the classifier, as additional input. In color-blind mode, the policer does not consider the packet's color while determining the new color. The tables below describe the considerations for each of the modes.
· Single-rate three-color–A Single Rate Three Color Marker (srTCM) meters traffic based on the configured committed information rate (CIR), committed burst size (CBS), and peak burst size (PBS). A single-rate three-color policer is most useful when a service is structured according to packet length and not peak arrival rate. Traffic is marked as belonging to one of three categories (green, yellow, or red) based on the following considerations:
Table 5: Color Aware srTCM

Incoming Color | Packet Metered Against | Possible Cases          | New Color | Action on the Packet
Green          | CIR, CBS, PBS          | Below CBS               | Green     | Not dropped
               |                        | Above CBS but below PBS | Yellow    | Change the traffic class using the rewrite rules.
               |                        | Above PBS               | Red       | Discard
Yellow         | PBS                    | Below PBS               | Yellow    | Change the traffic class using the rewrite rules.
               |                        | Above PBS               | Red       | Discard
Red            | Not metered            | NA                      | Red       | Discard
Table 6: Color Blind srTCM

Packet Metered Against | Possible Cases          | New Color | Action on the Packet
CIR, CBS, PBS          | Below CBS               | Green     | Not dropped
                       | Above CBS but below PBS | Yellow    | Change the traffic class using the rewrite rules.
                       | Above PBS               | Red       | Discard
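The srTCM behavior in Tables 5 and 6 can be modeled as two token buckets refilled at the CIR, per RFC 2697: a committed bucket sized CBS and an excess bucket sized PBS. The sketch below is an illustrative color-blind model, not the JCNR datapath implementation; the rates and sizes are made-up example values:

```python
class SrTCM:
    """Color-blind single-rate three-color marker (RFC 2697 sketch)."""

    def __init__(self, cir_bps, cbs, pbs):
        self.cir = cir_bps / 8.0      # refill rate in bytes per second
        self.cbs, self.pbs = cbs, pbs
        self.tc, self.te = cbs, pbs   # committed and excess bucket levels
        self.last = 0.0

    def meter(self, now, pkt_len):
        # Refill both buckets at CIR for the time elapsed since last packet.
        tokens = (now - self.last) * self.cir
        self.last = now
        self.tc = min(self.cbs, self.tc + tokens)
        self.te = min(self.pbs, self.te + tokens)
        if pkt_len <= self.tc:
            self.tc -= pkt_len
            return "green"            # within CBS: not dropped
        if pkt_len <= self.te:
            self.te -= pkt_len
            return "yellow"           # above CBS but within PBS: remark
        return "red"                  # above PBS: discard

# A back-to-back burst of 1000-byte packets with no time for refill:
m = SrTCM(cir_bps=8000, cbs=1500, pbs=3000)
colors = [m.meter(0.0, 1000) for _ in range(5)]
print(colors)  # ['green', 'yellow', 'yellow', 'yellow', 'red']
```

The first packet fits the committed bucket, the next three drain the excess bucket, and the fifth exceeds both, matching the row order of Table 6.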
· Two-rate three-color–A Two Rate Three Color Marker (trTCM) meters traffic based on the configured CIR and peak information rate (PIR). A two-rate three-color policer is most useful when a service is structured according to arrival rates and not necessarily packet length. Traffic is marked as belonging to one of three categories based on the following considerations:
Table 7: Color Aware trTCM

Incoming Color | Packet Metered Against | Possible Cases          | New Color | Action on the Packet
Green          | CIR, PIR               | Below CIR               | Green     | Not dropped
               |                        | Above CIR but below PIR | Yellow    | Change the traffic class using the rewrite rules.
               |                        | Above PIR               | Red       | Discard
Yellow         | PIR                    | Below PIR               | Yellow    | Change the traffic class using the rewrite rules.
               |                        | Above PIR               | Red       | Discard
Red            | Not metered            | NA                      | Red       | Discard
Table 8: Color Blind trTCM

Packet Metered Against | Possible Cases          | New Color | Action on the Packet
CIR, PIR               | Below CIR               | Green     | Not dropped
                       | Above CIR but below PIR | Yellow    | Change the traffic class using the rewrite rules.
                       | Above PIR               | Red       | Discard
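The trTCM behavior in Tables 7 and 8 uses two independently refilled buckets, per RFC 2698: a peak bucket refilled at the PIR and a committed bucket refilled at the CIR. Another illustrative color-blind sketch with made-up rates, including the color-to-loss-priority mapping that the marker applies afterwards:

```python
class TrTCM:
    """Color-blind two-rate three-color marker (RFC 2698 sketch)."""

    def __init__(self, cir_bps, pir_bps, cbs, pbs):
        self.cir, self.pir = cir_bps / 8.0, pir_bps / 8.0  # bytes/second
        self.cbs, self.pbs = cbs, pbs
        self.tc, self.tp = cbs, pbs   # committed and peak bucket levels
        self.last = 0.0

    def meter(self, now, pkt_len):
        elapsed = now - self.last
        self.last = now
        self.tc = min(self.cbs, self.tc + elapsed * self.cir)
        self.tp = min(self.pbs, self.tp + elapsed * self.pir)
        if pkt_len > self.tp:
            return "red"              # above PIR: discard
        if pkt_len > self.tc:
            self.tp -= pkt_len
            return "yellow"           # above CIR but below PIR: remark
        self.tp -= pkt_len
        self.tc -= pkt_len
        return "green"                # within CIR: not dropped

# Colors map to loss priority as the marker sees them (green -> low,
# yellow -> medium-low, red -> high).
COLOR_TO_LOSS_PRIORITY = {"green": "low", "yellow": "medium-low", "red": "high"}

m = TrTCM(cir_bps=8000, pir_bps=16000, cbs=1000, pbs=2000)
colors = [m.meter(0.0, 1000) for _ in range(3)]
print(colors)  # ['green', 'yellow', 'red']
```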
The colors marked by the policer are mapped to loss priority as follows:

Table 9: Mapping Colors to Loss Priority

Color  | Loss Priority
Green  | Low
Yellow | Medium-Low (Medium-High is not used while mapping color to loss priority)
Red    | High
Rewrite/Marker
A rewrite rule, or marker, sets the appropriate CoS bits in the outgoing packet. This allows the next downstream routing device to classify the packet into the appropriate service group. Rewriting, or marking, outbound packets is useful when the routing device is at the border of a network and must alter the CoS values to meet the policies of the targeted peer.

A rewrite profile is applied to the interface and is based on the forwarding class and loss priority derived by the classifier and/or the metering results of the policer. Cloud-Native Router rewrite rules support copying the outer IPv4/IPv6 DSCP marking to the inner IP header DSCP field, as well as MPLS EXP bit marking. Cloud-Native Router supports 16 rewrite profiles. You can read more about rewrite rules in the Junos documentation.
NOTE: Cloud-Native Router CoS implementation does not support implicit best effort treatment for traffic that does not match any CoS block. An explicit catch-all rule must be configured for best effort treatment.
NOTE: Cloud-Native Router CoS implementation modifies the DSCP value of the outer IP header only for tunneled packets (MPLSoUDP and VXLAN).
Scheduler
The scheduler is the last block in the Cloud-Native Router’s CoS implementation and computes the priority of the packets. Cloud-Native Router implements a strict-priority 8-queue scheduler, with priority ordered from high to low. The forwarding class is directly mapped to scheduler priority. The Cloud-Native Router supports a maximum of 4 scheduler profiles in the deployment helm chart and a maximum of 16 scheduler maps and forwarding classes. You can read more about the scheduler in the Junos documentation. You can configure a scheduler with one of 8 priorities, as provided in the table below. Note that the scheduler priorities are already mapped to the 8 interface queues.
Table 10: Scheduler Priorities

Priority    | Scheduling Priority (Queue)
high        | Scheduling priority 1
low         | Scheduling priority 7 (Least)
low-high    | Scheduling priority 5
low-latency | Scheduling priority 4
low-medium  | Scheduling priority 6
medium-high | Scheduling priority 2
medium-low  | Scheduling priority 3
strict-high | Scheduling priority 0 (Highest)
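Strict-priority scheduling means a lower-numbered queue is always drained before any higher-numbered queue receives service. A minimal sketch (the queued packet labels are illustrative):

```python
import collections

class StrictPriorityScheduler:
    """Strict-priority scheduler over 8 queues; queue 0 is highest."""

    def __init__(self, num_queues=8):
        self.queues = [collections.deque() for _ in range(num_queues)]

    def enqueue(self, queue_num, pkt):
        self.queues[queue_num].append(pkt)

    def dequeue(self):
        # Always serve the lowest-numbered (highest-priority) backlogged queue.
        for q in self.queues:
            if q:
                return q.popleft()
        return None  # all queues empty

sched = StrictPriorityScheduler()
sched.enqueue(7, "best-effort")   # queue 7: priority low
sched.enqueue(0, "strict-high")   # queue 0: priority strict-high
sched.enqueue(1, "ef")            # queue 1: priority high
order = [sched.dequeue() for _ in range(3)]
print(order)  # ['strict-high', 'ef', 'best-effort']
```

A consequence of strict priority is that a constantly backlogged queue 0 starves all lower queues, which is exactly what the shaper's per-queue transmit-rate limit guards against.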
Dropper and Shaper
The scheduler block also includes dropper and shaper modules.
· Dropper–The DPDK dropper module drops packets arriving at the scheduler block to avoid congestion. The drop is performed based on the weighted random early detection (WRED) drop profile maps configured. In the event of severe congestion, the dropper module may perform tail drop, resulting in dropping all arriving packets. The drop profile is applied per scheduler queue and may be based on packet loss priority. A maximum of 32 drop profiles and 3 drop profile maps per scheduler are supported. You can read more about dropper in Junos documentation.
· Shaper–The shaper or transmit rate is used to shape traffic per egress queue. By limiting the queue transmit rate, shaper can prevent high priority queues from starving low priority queues. Shaping rate is applied on the scheduler queues.
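The WRED drop decision in the dropper can be sketched as a function of the average queue length against a drop profile's thresholds. This is an illustrative model (the minimum/maximum threshold and maximum-probability parameters are assumed example values), not the DPDK dropper:

```python
import random

def wred_drop(avg_queue_len, min_th, max_th, max_p):
    """WRED drop decision: never drop below min_th, always drop at or
    above max_th (the tail-drop region), and in between drop with a
    probability rising linearly from 0 to max_p."""
    if avg_queue_len < min_th:
        return False
    if avg_queue_len >= max_th:
        return True
    p = max_p * (avg_queue_len - min_th) / (max_th - min_th)
    return random.random() < p

print(wred_drop(5, 10, 40, 0.1))   # False: below minimum threshold
print(wred_drop(50, 10, 40, 0.1))  # True: tail-drop region
```

In a per-queue drop profile map, packets with a higher loss priority would be matched against a more aggressive (lower-threshold) profile.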
Supported Interfaces
Cloud-Native Router supports the following interfaces for CoS implementation:
Table 11: Supported Interfaces for CoS Implementation

CoS Component | Pod Interface | Fabric Interface | IRB Interface
Classifier    | Supported     | Supported        | Supported
Policer       | Supported     | Supported        | Supported
Marker        | Supported     | Supported        | Supported
Scheduler     | Not Supported | Supported        | Not Supported
L3 CoS Configuration
You must configure a CoS scheduler profile per interface in the helm chart that defines the number of scheduler lcores and bandwidth. Review Helm Chart customization for more details. You must perform CoS configuration on the Cloud-Native Router control plane using a Configlet. Review Customize Cloud-Native Router Configuration for more details. Sample configlets are provided below:
Configure Forwarding Classes
Cloud-Native Router does not have any forwarding classes configured by default. You can configure up to 16 forwarding classes, mapped to 8 queues. The user-defined queue mapping is ignored, since queue numbers are derived from the priority defined in the scheduler configuration, mapped one-to-one with the queue number:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set class-of-service forwarding-classes class af11 queue-num 3
    set class-of-service forwarding-classes class af12 queue-num 3
    set class-of-service forwarding-classes class af13 queue-num 3
    set class-of-service forwarding-classes class af21 queue-num 4
    set class-of-service forwarding-classes class af22 queue-num 4
    set class-of-service forwarding-classes class af23 queue-num 4
    set class-of-service forwarding-classes class af31 queue-num 5
    set class-of-service forwarding-classes class af32 queue-num 5
    set class-of-service forwarding-classes class af33 queue-num 5
    set class-of-service forwarding-classes class cs1 queue-num 6
    set class-of-service forwarding-classes class cs2 queue-num 6
    set class-of-service forwarding-classes class cs3 queue-num 7
    set class-of-service forwarding-classes class cs4 queue-num 7
    set class-of-service forwarding-classes class ef queue-num 1
    set class-of-service forwarding-classes class nc1 queue-num 0
    set class-of-service forwarding-classes class nc2 queue-num 0
  crpdSelector:
    matchLabels:
      node: worker
Configure Classifiers
You can configure a BA classifier and apply it on an interface:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set class-of-service classifiers dscp BA_1 forwarding-class af11 loss-priority high code-points 000001
    set class-of-service classifiers dscp BA_1 forwarding-class af11 loss-priority low code-points 001010
    set class-of-service classifiers dscp BA_1 forwarding-class af11 loss-priority medium-high code-points 001110
    set class-of-service classifiers dscp BA_1 forwarding-class af11 loss-priority medium-low code-points 001100
    set class-of-service classifiers dscp BA_1 forwarding-class ef loss-priority low code-points 101111
    set class-of-service classifiers dscp BA_1 forwarding-class nc1 loss-priority low code-points 110000
    set class-of-service classifiers dscp BA_1 forwarding-class nc1 loss-priority low code-points 111000
    set class-of-service classifiers dscp-ipv6 BA6_1 forwarding-class ef loss-priority low code-points 101111
    set class-of-service classifiers dscp-ipv6 BA6_1 forwarding-class nc2 loss-priority high code-points 111000
    set class-of-service interfaces eno2 unit 0 classifiers dscp BA_1
    set class-of-service interfaces eno2 unit 0 classifiers dscp-ipv6 BA6_1
  crpdSelector:
    matchLabels:
      node: worker
You can configure an MF classifier as a firewall filter:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set firewall family inet filter MCFF_1 term 1 from source-port 1001
    set firewall family inet filter MCFF_1 term 1 then forwarding-class af11
    set firewall family inet filter MCFF_1 term 1 then loss-priority medium-low
    set firewall family inet filter MCFF_1 term 2 from icmp-type echo-request
    set firewall family inet filter MCFF_1 term 2 then forwarding-class af31
    set firewall family inet filter MCFF_1 term 2 then loss-priority medium-low
    set firewall family inet filter MCFF_1 term 3 then accept
    set firewall family inet6 filter MCFF6_1 term 1 from source-port 1001
    set firewall family inet6 filter MCFF6_1 term 1 then forwarding-class af11
    set firewall family inet6 filter MCFF6_1 term 1 then loss-priority low
    set firewall family inet6 filter MCFF6_1 term 2 then accept
    set interfaces eno2 unit 0 family inet filter input MCFF_1
    set interfaces eno2 unit 0 family inet6 filter input MCFF6_1
  crpdSelector:
    matchLabels:
      node: worker
Configure Policers
You can configure the policer with the action type three-color-policer as follows:
Single Rate Three Color Meter (srTCM) Policer:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set firewall three-color-policer srTCM_1 action loss-priority high then discard
    set firewall three-color-policer srTCM_1 single-rate color-aware
    set firewall three-color-policer srTCM_1 single-rate committed-information-rate 500m
    set firewall three-color-policer srTCM_1 single-rate committed-burst-size 20k
    set firewall three-color-policer srTCM_1 single-rate excess-burst-size 20k
    set firewall family inet filter FF_1 term 1 then three-color-policer single-rate srTCM_1
    set firewall family inet6 filter FF6_1 term 1 then three-color-policer single-rate srTCM_1
  crpdSelector:
    matchLabels:
      node: worker
Two Rate Three Color Meter (trTCM) Policer:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set firewall three-color-policer trTCM_1 action loss-priority high then discard
    set firewall three-color-policer trTCM_1 two-rate color-aware
    set firewall three-color-policer trTCM_1 two-rate committed-information-rate 100m
    set firewall three-color-policer trTCM_1 two-rate committed-burst-size 2048
    set firewall three-color-policer trTCM_1 two-rate peak-information-rate 200m
    set firewall three-color-policer trTCM_1 two-rate peak-burst-size 2048
    set firewall family inet filter FF_1 term 1 from forwarding-class af11
    set firewall family inet filter FF_1 term 1 then three-color-policer two-rate trTCM_1
    set firewall family inet6 filter FF6_1 term 1 from forwarding-class af11
    set firewall family inet6 filter FF6_1 term 1 then three-color-policer two-rate trTCM_1
  crpdSelector:
    matchLabels:
      node: worker
You can configure classifiers with policers.
BA classifier with policer:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set firewall family inet filter FF_1 term 1 from forwarding-class af11
    set firewall family inet filter FF_1 term 1 then three-color-policer single-rate srTCM_1
    set firewall family inet6 filter FF6_1 term 1 from forwarding-class af11
    set firewall family inet6 filter FF6_1 term 1 then three-color-policer single-rate srTCM_1
    set class-of-service interfaces eno2 unit 0 classifiers dscp BA_1
    set class-of-service interfaces eno2 unit 0 classifiers dscp-ipv6 BA6_1
    set interfaces eno2 unit 0 family inet filter input FF_1
    set interfaces eno2 unit 0 family inet6 filter input FF6_1
  crpdSelector:
    matchLabels:
      node: worker
MF classifier as a firewall filter with policer:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set firewall family inet filter MCFF_1 term 1 from source-port 1001
    set firewall family inet filter MCFF_1 term 1 then forwarding-class af11
    set firewall family inet filter MCFF_1 term 1 then three-color-policer single-rate srTCM_1
    set firewall family inet filter MCFF_1 term 2 from icmp-type echo-request
    set firewall family inet filter MCFF_1 term 2 then forwarding-class af31
    set firewall family inet filter MCFF_1 term 2 then three-color-policer single-rate srTCM_1
    set firewall family inet filter MCFF_1 term 3 then accept
    set firewall family inet6 filter MCFF6_1 term 1 from source-port 1001
    set firewall family inet6 filter MCFF6_1 term 1 then forwarding-class af11
    set firewall family inet6 filter MCFF6_1 term 1 then three-color-policer single-rate srTCM_1
    set firewall family inet6 filter MCFF6_1 term 2 then accept
    set interfaces eno2 unit 0 family inet filter input MCFF_1
    set interfaces eno2 unit 0 family inet6 filter input MCFF6_1
  crpdSelector:
    matchLabels:
      node: worker
NOTE: When the same policer is configured for multiple firewall filter terms, multiple policer instances are created, one per term.
Configure Rewrite/Marker
You can configure rewrite rules (markers) as follows:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set class-of-service rewrite-rules dscp RE_1 forwarding-class af11 loss-priority high code-point 010010
    set class-of-service rewrite-rules dscp RE_1 forwarding-class af11 loss-priority low code-point 001100
    set class-of-service rewrite-rules dscp RE_1 forwarding-class af11 loss-priority medium-high code-point 000001
    set class-of-service rewrite-rules dscp RE_1 forwarding-class af11 loss-priority medium-low code-point 001110
    set class-of-service rewrite-rules dscp RE_1 forwarding-class ef loss-priority low code-point 001010
    set class-of-service rewrite-rules dscp RE_1 forwarding-class ef loss-priority medium-high code-point 001001
    set class-of-service rewrite-rules dscp-ipv6 RE6_1 forwarding-class af11 loss-priority high code-point 010010
    set class-of-service rewrite-rules dscp-ipv6 RE6_1 forwarding-class af11 loss-priority low code-point 001100
    set class-of-service rewrite-rules dscp-ipv6 RE6_1 forwarding-class af11 loss-priority medium-high code-point 000001
    set class-of-service rewrite-rules dscp-ipv6 RE6_1 forwarding-class af11 loss-priority medium-low code-point 001110
    set class-of-service rewrite-rules dscp-ipv6 RE6_1 forwarding-class ef loss-priority low code-point 001001
    set class-of-service interfaces eno3 unit 0 rewrite-rules dscp RE_1
    set class-of-service interfaces eno3 unit 0 rewrite-rules dscp-ipv6 RE6_1
  crpdSelector:
    matchLabels:
      node: worker
MPLS EXP rewrite for tunnel packets:
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    set class-of-service rewrite-rules exp RE_3 forwarding-class af11 loss-priority low code-point 111
    set class-of-service rewrite-rules exp RE_3 forwarding-class af11 loss-priority medium-high code-point 101
    set class-of-service rewrite-rules exp RE_3 forwarding-class af11 loss-priority medium-low code-point 110
    set class-of-service interfaces eno3 unit 0 rewrite-rules exp RE_3
  crpdSelector:
    matchLabels:
      node: worker
Configure Schedulers
You can configure schedulers with one of eight priorities. A scheduler map defines the mapping from forwarding classes to schedulers.
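As a hedged sketch of how a scheduler configuration could be delivered through a Configlet, following the same resource shape as the earlier examples in this section: the scheduler names (SCH_BE, SCH_EF), scheduler-map name (SMAP_1), transmit rates, and priorities below are illustrative assumptions, not values from this guide.

```yaml
apiVersion: configplane.juniper.net/v1
kind: Configlet
metadata:
  name: configlet-sample
  namespace: jcnr
spec:
  config: |-
    # Illustrative schedulers: names, rates, and priorities are assumptions
    set class-of-service schedulers SCH_BE transmit-rate percent 20
    set class-of-service schedulers SCH_BE priority low
    set class-of-service schedulers SCH_EF transmit-rate percent 30
    set class-of-service schedulers SCH_EF priority strict-high
    # Scheduler map binds forwarding classes to the schedulers above
    set class-of-service scheduler-maps SMAP_1 forwarding-class best-effort scheduler SCH_BE
    set class-of-service scheduler-maps SMAP_1 forwarding-class ef scheduler SCH_EF
    # Apply the scheduler map to an interface
    set class-of-service interfaces eno2 scheduler-map SMAP_1
  crpdSelector:
    matchLabels:
      node: worker
```

The statement hierarchy follows standard Junos class-of-service syntax; check the release notes for which scheduler options the cloud-native router data plane supports.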
Documents / Resources

Juniper Networks Juniper Cloud Native Router [pdf] User Guide
25.4, Juniper Cloud Native Router, Cloud Native Router, Native Router