RT-3.52: Multidimensional test for Static GUE Encap/Decap based on BGP path selection and selective DSCP marking
The goal of this test is to:

- Test the implementation of static GUE encap where the tunnel endpoint is resolved over EBGP, while the payload's destination is learned over IBGP.
- Confirm that, before GUE encapsulation, the device correctly selects the path for the payload destination from multiple available IBGP routes. Path selection must follow the BGP best-path algorithm rules, such as preferring routes with a higher BGP Local Preference, and must successfully switch over to backup/alternative IBGP paths when the preferred path fails.
- Validate that encapsulated traffic has its TOS bits copied from the inner header to the outer header.
- Confirm that the DUT, configured to encapsulate traffic over multiple tunnels, tunnels traffic towards the correct tunnel destination based on the IBGP-learned routes.
- Confirm that the DUT, configured to decapsulate traffic received on various tunnel destinations, accurately decapsulates traffic across all of those destinations.
- Confirm that the TTL value of the outer IP header created during GUE encapsulation can be explicitly configured.
- Confirm that when the DUT handles GUEv1 traffic from the reverse path, it successfully performs decapsulation. During decapsulation, the DUT must not transfer the DSCP and TTL bits from the outer header to the inner header. Instead, following decapsulation, the DUT should decrement the inner header's TTL by 1 before forwarding the packet.
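The encapsulation behavior called out above (inner-to-outer TOS copy, explicitly configured outer TTL) can be sketched as follows. This is a minimal Python illustration of building the outer IPv6 fixed header for IPoUDP, not DUT code; the addresses and payload length are placeholders.

```python
import struct

def build_outer_ipv6(inner_dscp: int, configured_ttl: int,
                     src: bytes, dst: bytes, payload_len: int) -> bytes:
    """Build the 40-byte outer IPv6 fixed header for GUE (IPoUDP) encap.

    The outer Traffic Class carries the inner header's DSCP (top 6 bits,
    ECN zeroed), and the Hop Limit is the explicitly configured TTL.
    """
    traffic_class = inner_dscp << 2                   # DSCP in bits 7..2, ECN = 0
    ver_tc_flow = (6 << 28) | (traffic_class << 20)   # version 6, flow label 0
    udp_length = 8 + payload_len                      # UDP header + inner packet
    return struct.pack("!IHBB", ver_tc_flow, udp_length,
                       17,                            # next header = UDP
                       configured_ttl) + src + dst

hdr = build_outer_ipv6(inner_dscp=10, configured_ttl=64,
                       src=bytes(16), dst=bytes(16), payload_len=100)
outer_dscp = ((hdr[0] & 0x0F) << 2) | (hdr[1] >> 6)
```

Decapsulation goes the other way: the outer DSCP/TTL are discarded rather than copied back, and the inner TTL is decremented by 1 before forwarding.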
Please note: in the diagram below, ATE1 and ATE2 can be the same ATE or different ATEs.
```mermaid
graph LR;
subgraph DUT [DUT]
B1[Port1]
B2[Port2]
B4[Port4]
end
subgraph ATE2 [ATE2]
C1[Port1]
C3[Port3]
end
A1[ATE1:Port1] <-- "IBGP (ASN100)" --> B1;
B2 <-- "IBGP (ASN100)" --> C1;
B4 <-- "EBGP (ASN100:ASN200)" --> C3;
```
```
+-------------------------------------------+
| ATE1 (ASN-100)                            |
|-------------------------------------------|
| IBGP Peer:                                |
| - $ATE1_IBGP.v4/32                        |
| - $ATE1_IBGP.v6/128                       |
|                                           |
| Prefixes Advertised:                      |
| - $ATE1_PORT1_user[1-5].v4/24             |
| - $ATE1_PORT1_user[1-5].v6/64             |
|                                           |
| ATE1_Port1 <-> DUT_Port1:                 |
| - $ATE1PORT1_DUTPORT1_1.v4/31             |
| - $ATE1PORT1_DUTPORT1_1.v6/127            |
+---------------------+---------------------+
                      |
                      | (IS-IS and IBGP)
                      |
+---------------------+---------------------+
| Port1: (to ATE1)                          |
| - $ATE1PORT1_DUTPORT1_2.v4/31             |
| - $ATE1PORT1_DUTPORT1_2.v6/127            |
|-------------------------------------------|
| DUT (ASN-100)                             |
|-------------------------------------------|
| Loopback:                                 |
| - $DUT_lo0.v4                             |
| - $DUT_lo0.v6                             |
|-------------------------------------------|
| Port2: (to ATE2_Port1) <------------+ (IS-IS and IBGP)
| - $ATE2PORT1_DUTPORT2_2.v4/31       |     |
| - $ATE2PORT1_DUTPORT2_2.v6/127      |     |
|-------------------------------------------|
| Port4: (to ATE2_Port3)              |     |
| - $ATE2PORT3_DUTPORT4_2.v4/31 <----------------------+ (EBGP)
| - $ATE2PORT3_DUTPORT4_2.v6/127      |     |          |
+-------------------------------------------+          |
                                     |                 |
+-----------------------------------------------------+|
| ATE2 (ASN 100 & 200)               |                ||
|-----------------------------------------------------+|
| Port1: (to DUT) <--+               |                 |
| - $ATE2PORT1_DUTPORT2_1.v4/31      |                 |
| - $ATE2PORT1_DUTPORT2_1.v6/127     |                 |
|-----------------------------------------------------+|
| Port3: (to DUT) <-------------+                      |
| - $ATE2PORT3_DUTPORT4_1.v4/31                        |
| - $ATE2PORT3_DUTPORT4_1.v6/127                       |
|-----------------------------------------------------+
| IBGP Peer IPs:                                      |
| - $ATE2_C.IBGP.v6/128                               |
| - $ATE2_M.IBGP.v4/32, .v6/128                       |
| - $ATE2_IBGP.v4/32, .v6/128                         |
| Prefixes Advertised over IBGP and EBGP peering:     |
| - $ATE2_INTERNAL[6-10].v4/24                        |
| - $ATE2_INTERNAL[6-10].v6/64                        |
+-----------------------------------------------------+
```
| Different IP addresses | Description |
|---|---|
| `$ATE1PORT1_DUTPORT1_1.v4/31`, `$ATE1PORT1_DUTPORT1_1.v6/127` | Configured on the ATE1 side of the ATE1_Port1<>DUT_Port1 point-to-point connection; used for the IS-IS adjacency. |
| `$ATE1PORT1_DUTPORT1_2.v4/31`, `$ATE1PORT1_DUTPORT1_2.v6/127` | Configured on the DUT side of the ATE1_Port1<>DUT_Port1 point-to-point connection; used for the IS-IS adjacency. |
| `$ATE1_IBGP.v4/32`, `$ATE1_IBGP.v6/128` | Exchanged over the IS-IS adjacency. Used to establish the IBGP peering between ATE1 and the DUT. |
| `$ATE1_PORT1_user[1-5].v4/24`, `$ATE1_PORT1_user[1-5].v6/64` | `$ATE1_IBGP.v[46]` advertises these user prefixes to `$DUT_lo0.v[46]` over IBGP. `$DUT_lo0.v[46]` advertises them further to `$ATE2_PORT1.IBGP.v[46]` and to `$ATE2_C.IBGP.v6`. |
| `$ATE2PORT1_DUTPORT2_1.v4/31`, `$ATE2PORT1_DUTPORT2_1.v6/127` | Configured on the ATE2 side of the ATE2_Port1<>DUT_Port2 point-to-point connection; used for the IS-IS adjacency. |
| `$ATE2PORT1_DUTPORT2_2.v4/31`, `$ATE2PORT1_DUTPORT2_2.v6/127` | Configured on the DUT side of the DUT_Port2<>ATE2_Port1 point-to-point connection; used for the IS-IS adjacency. |
| `$ATE2_PORT1.IBGP.v4/32`, `$ATE2_PORT1.IBGP.v6/128` | Regular IBGP peering between `$ATE2_PORT1.IBGP.v[46]` and `$DUT_lo0.v[46]`. |
| `$ATE2_C.IBGP.v6/128`, `$ATE2_PPNH[12].v6/128` | For the IBGP peering between `$ATE2_C.IBGP.v6` and `$DUT_lo0.v6`. `$ATE2_PPNH[12].v6` are the pseudo-protocol next-hops for the IBGP routes advertised by `$ATE2_C.IBGP.v6` to the DUT. |
| `$ATE2_M.IBGP.v4/32`, `$ATE2_M.IBGP.v6/128` | For the IBGP peering between `$ATE2_M.IBGP.v[46]` and `$DUT_lo0.v[46]`. |
| `$ATE2PORT3_DUTPORT4_1.v4/31`, `$ATE2PORT3_DUTPORT4_1.v6/127` | Configured on the ATE2 side of the ATE2_Port3<>DUT_Port4 point-to-point connection; used for the EBGP peering between DUT_Port4<>ATE2_Port3. |
| `$ATE2PORT3_DUTPORT4_2.v4/31`, `$ATE2PORT3_DUTPORT4_2.v6/127` | Configured on the DUT side of the ATE2_Port3<>DUT_Port4 point-to-point connection; used for the EBGP peering between DUT_Port4<>ATE2_Port3. |
| `$DUT_lo0.v4`, `$DUT_lo0.v6` | Advertised over IS-IS and used for IBGP peering. Also used as the IPoUDP tunnel source address. |
| `$DUT_TE11.v6/128` | IPoUDP tunnel destination address on the DUT. This IP MUST receive traffic meant for a single shard on the DUT. |
| `$DUT_TE10.v6/128` | IPoUDP tunnel destination address on the DUT. This IP MUST receive traffic meant for multiple shards on the DUT. |
| `$ATE2_INTERNAL_TE11.v6/128` | IPoUDP tunnel destination address on ATE2. This IP MUST receive traffic meant for a single shard on the ATE. |
| `$ATE2_INTERNAL_TE10.v6/128` | IPoUDP tunnel destination address on ATE2. This IP MUST receive traffic meant for multiple shards on the ATE. |
| `$ATE2_INTERNAL[6-10].v4/24`, `$ATE2_INTERNAL[6-10].v6/64` | Internal public prefixes. Advertised to the DUT over the IBGP peerings `$ATE2_PORT1.IBGP.v[46]`<>`$DUT_lo0.v[46]` and `$ATE2_C.IBGP.v6`<>`$DUT_lo0.v6`. Advertised further to `$ATE1_IBGP.v[46]` over the IBGP peering `$ATE1_IBGP.v[46]`<>`$DUT_lo0.v[46]` on their respective AFI peering. |
IS-IS:

| Different IS-IS L2 adjacencies | Prefixes advertised |
|---|---|
| ATE1_Port1<>DUT_Port1 | ATE1_PORT1 --> DUT_PORT1: `$ATE1_IBGP.v4`, `$ATE1_IBGP.v6`. DUT_PORT1 --> ATE1_PORT1: `$DUT_lo0.v4`, `$DUT_lo0.v6`. |
| ATE2_Port1<>DUT_Port2 | ATE2_Port1 --> DUT_Port2: `$ATE2_PORT1.IBGP.v4/32`, `$ATE2_PORT1.IBGP.v6/128`, `$ATE2_C.IBGP.v6/128`, `$ATE2_M.IBGP.v6/128`. DUT_Port2 --> ATE2_Port1: `$DUT_lo0.v4`, `$DUT_lo0.v6`. |
BGP:

| Different peering | BGP peering type | Prefixes advertised |
|---|---|---|
| `$ATE1_IBGP.v[46]`<>`$DUT_lo0.v[46]` | IBGP | `$DUT_lo0.v[46]` is the route-reflector server and `$ATE1_IBGP.v[46]` is the route-reflector client. `$ATE1_IBGP.v[46]` advertises prefixes `$ATE1_PORT1_user[1-5].v[46]` to `$DUT_lo0.v[46]` on their respective AFI peering. `$DUT_lo0.v[46]` advertises `$ATE2_INTERNAL[6-10].v[46]` to `$ATE1_IBGP.v[46]` on their respective AFI peering. MULTIPATH is enabled on this peering. |
| `$ATE2_IBGP.v[46]`<>`$DUT_lo0.v[46]` | IBGP | `$ATE2_IBGP.v[46]` advertises `$ATE2_INTERNAL[6-10].v[46]`, `$ATE2_INTERNAL_TE10.v6/64` and `$ATE2_INTERNAL_TE11.v6/64` to `$DUT_lo0.v[46]` on their respective AFI peering. `$DUT_lo0.v[46]` advertises `$ATE1_PORT1_user[1-5].v[46]`, `$DUT_TE10.v6/64`, `$DUT_TE11.v6/64`, `$DUT_TE10.v6/128` and `$DUT_TE11.v6/128` to `$ATE2_IBGP.v[46]`. |
| `$ATE2_C.IBGP.v6`<>`$DUT_lo0.v6` | IBGP | `$ATE2_C.IBGP.v6` advertises `$ATE2_INTERNAL[6-8].v[46]` to `$DUT_lo0.v6` with next-hop `$ATE2_PPNH1.v6/128` and a Local-Pref of 200, and similarly advertises `$ATE2_INTERNAL[9-10].v[46]` to `$DUT_lo0.v6` with next-hop `$ATE2_PPNH2.v6/128` and a Local-Pref of 200. Please note: these prefixes are advertised gradually by `$ATE2_C.IBGP.v6` in different sub-tests: `$ATE2_INTERNAL6.v[46]` in RT-3.52.2 to RT-3.52.9; `$ATE2_INTERNAL7.v[46]` in RT-3.52.3 to RT-3.52.9; `$ATE2_INTERNAL8.v[46]` in RT-3.52.4 to RT-3.52.9; `$ATE2_INTERNAL9.v[46]` in RT-3.52.5 to RT-3.52.9; `$ATE2_INTERNAL10.v[46]` in RT-3.52.6 to RT-3.52.9. `$DUT_lo0.v6` advertises `$ATE1_PORT1_user[1-5].v[46]` to `$ATE2_C.IBGP.v6`. |
| `$ATE2_M.IBGP.v[46]`<>`$DUT_lo0.v[46]` | IBGP | `$DUT_lo0.v[46]` advertises all of its ECMP routes to `$ATE2_M.IBGP.v[46]` on the respective AFI peering. This peering has ADD-PATH for multipath routes enabled. |
| `$ATE2_Port3`<>`$DUT_Port4` | EBGP | ATE2_Port3 advertises `$ATE2_INTERNAL_TE10.v6/64` and `$ATE2_INTERNAL_TE11.v6/64` to DUT_Port4. DUT_Port4 advertises `$DUT_TE10.v6/128`, `$DUT_TE11.v6/128`, `$DUT_TE10.v6/64` and `$DUT_TE11.v6/64` to ATE2_Port3. |

| Local ASN | Interfaces |
|---|---|
| ASN100 | `$ATE1_IBGP.v[46]`, `$DUT_lo0.v[46]`, `$ATE2_IBGP.v[46]`, `$ATE2_C.IBGP.v6`, `$ATE2_M.IBGP.v[46]`, `$DUT_Port2` |
| ASN200 | `$ATE2_Port3` |
Different Flows used throughout the test:

| Src_destination of flows | From_IP --> To_IP | DSCP | Tunnel endpoint used |
|---|---|---|---|
| Flow-Set#1 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user1.v4/24` --> `$ATE2_INTERNAL6.v4/24` | BE1 | `$ATE2_INTERNAL_TE11.v6/128` |
| Flow-Set#1 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user1.v6/64` --> `$ATE2_INTERNAL6.v6/64` | BE1 | `$ATE2_INTERNAL_TE11.v6/128` |
| Flow-Set#1 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user2.v4/24` --> `$ATE2_INTERNAL7.v4/24` | AF1 | `$ATE2_INTERNAL_TE11.v6/128` |
| Flow-Set#1 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user2.v6/64` --> `$ATE2_INTERNAL7.v6/64` | AF1 | `$ATE2_INTERNAL_TE11.v6/128` |
| Flow-Set#1 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user3.v4/24` --> `$ATE2_INTERNAL8.v4/24` | AF2 | `$ATE2_INTERNAL_TE11.v6/128` |
| Flow-Set#1 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user3.v6/64` --> `$ATE2_INTERNAL8.v6/64` | AF2 | `$ATE2_INTERNAL_TE11.v6/128` |
| Flow-Set#2 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user4.v4/24` --> `$ATE2_INTERNAL9.v4/24` | AF3 | `$ATE2_INTERNAL_TE10.v6/128` |
| Flow-Set#2 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user4.v6/64` --> `$ATE2_INTERNAL9.v6/64` | AF3 | `$ATE2_INTERNAL_TE10.v6/128` |
| Flow-Set#2 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user5.v4/24` --> `$ATE2_INTERNAL10.v4/24` | AF4 | `$ATE2_INTERNAL_TE10.v6/128` |
| Flow-Set#2 from ATE1_Port1 --> ATE2 (either Port1 or Port3, depending on the DUT's FIB entries) | `$ATE1_PORT1_user5.v6/64` --> `$ATE2_INTERNAL10.v6/64` | AF4 | `$ATE2_INTERNAL_TE10.v6/128` |
| Flow-Set#3 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE11.v6/64`. ATE2 must use different IPs in `$DUT_TE11.v6/64` for the different traffic classes (BE1 to AF2) | `$ATE2_INTERNAL6.v4/24` --> `$ATE1_PORT1_user1.v4/24` | BE1 | `$DUT_TE11.v6/64` |
| Flow-Set#3 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE11.v6/64`. ATE2 must use different IPs in `$DUT_TE11.v6/64` for the different traffic classes (BE1 to AF2) | `$ATE2_INTERNAL6.v6/64` --> `$ATE1_PORT1_user1.v6/64` | BE1 | `$DUT_TE11.v6/64` |
| Flow-Set#3 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE11.v6/64`. ATE2 must use different IPs in `$DUT_TE11.v6/64` for the different traffic classes (BE1 to AF2) | `$ATE2_INTERNAL7.v4/24` --> `$ATE1_PORT1_user2.v4/24` | AF1 | `$DUT_TE11.v6/64` |
| Flow-Set#3 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE11.v6/64`. ATE2 must use different IPs in `$DUT_TE11.v6/64` for the different traffic classes (BE1 to AF2) | `$ATE2_INTERNAL7.v6/64` --> `$ATE1_PORT1_user2.v6/64` | AF1 | `$DUT_TE11.v6/64` |
| Flow-Set#3 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE11.v6/64`. ATE2 must use different IPs in `$DUT_TE11.v6/64` for the different traffic classes (BE1 to AF2) | `$ATE2_INTERNAL8.v4/24` --> `$ATE1_PORT1_user3.v4/24` | AF2 | `$DUT_TE11.v6/64` |
| Flow-Set#3 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE11.v6/64`. ATE2 must use different IPs in `$DUT_TE11.v6/64` for the different traffic classes (BE1 to AF2) | `$ATE2_INTERNAL8.v6/64` --> `$ATE1_PORT1_user3.v6/64` | AF2 | `$DUT_TE11.v6/64` |
| Flow-Set#4 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE10.v6/64`. ATE2 must use different IPs in `$DUT_TE10.v6/64` for the different traffic classes (AF3 to AF4) | `$ATE2_INTERNAL9.v4/24` --> `$ATE1_PORT1_user4.v4/24` | AF3 | `$DUT_TE10.v6/64` |
| Flow-Set#4 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE10.v6/64`. ATE2 must use different IPs in `$DUT_TE10.v6/64` for the different traffic classes (AF3 to AF4) | `$ATE2_INTERNAL9.v6/64` --> `$ATE1_PORT1_user4.v6/64` | AF3 | `$DUT_TE10.v6/64` |
| Flow-Set#4 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE10.v6/64`. ATE2 must use different IPs in `$DUT_TE10.v6/64` for the different traffic classes (AF3 to AF4) | `$ATE2_INTERNAL10.v4/24` --> `$ATE1_PORT1_user5.v4/24` | AF4 | `$DUT_TE10.v6/64` |
| Flow-Set#4 from ATE2_Port3 --> ATE1_Port1, GUE-encapsulated with tunnel destination in `$DUT_TE10.v6/64`. ATE2 must use different IPs in `$DUT_TE10.v6/64` for the different traffic classes (AF3 to AF4) | `$ATE2_INTERNAL10.v6/64` --> `$ATE1_PORT1_user5.v6/64` | AF4 | `$DUT_TE10.v6/64` |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL6.v4/24` --> `$ATE1_PORT1_user1.v4/24` | BE1 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL6.v6/64` --> `$ATE1_PORT1_user1.v6/64` | BE1 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL7.v4/24` --> `$ATE1_PORT1_user2.v4/24` | AF1 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL7.v6/64` --> `$ATE1_PORT1_user2.v6/64` | AF1 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL8.v4/24` --> `$ATE1_PORT1_user3.v4/24` | AF2 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL8.v6/64` --> `$ATE1_PORT1_user3.v6/64` | AF2 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL9.v4/24` --> `$ATE1_PORT1_user4.v4/24` | AF3 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL9.v6/64` --> `$ATE1_PORT1_user4.v6/64` | AF3 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL10.v4/24` --> `$ATE1_PORT1_user5.v4/24` | AF4 | N/A |
| Flow-Set#5 from ATE2:Port1 to ATE1:Port1, sent unencapsulated | `$ATE2_INTERNAL10.v6/64` --> `$ATE1_PORT1_user5.v6/64` | AF4 | N/A |
In addition to the adjacencies and peering configurations described in the tables above, the DUT requires the following configurations:

- IS-IS:
  - The DUT's loopback interface must be passive for IS-IS.
- BGP:
  - Define import and export route policies to match the advertisements for each BGP peering.
- Static GUE Encapsulation:
  - Configure static GUE encapsulation as follows:
    - Define the UDP port to be used for IPv4oUDP and IPv6oUDP, which is 6080.
    - Define the tunnel NHG configuration with these parameters:
      - ttl = 64
      - tunnel-source = `$DUT_lo0.v6`
      - tunnel-destination1 = `$ATE2_INTERNAL_TE11.v6/128`
      - tunnel-destination2 = `$ATE2_INTERNAL_TE10.v6/128`
    - The DUT must have static routes pointing `$ATE2_PPNH1.v6/128` and `$ATE2_PPNH2.v6/128` to the NHG created above (below is an example of the static routes). The IBGP peer `$ATE2_C.IBGP.v6/128` is expected to advertise both IPv4 and IPv6 prefixes with the next-hop as `$ATE2_PPNH1.v6/128` or `$ATE2_PPNH2.v6/128`.

      ```
      static dst: $ATE2_PPNH1.v6/128 next-hop: $ATE2_INTERNAL_TE11.v6/128
      static dst: $ATE2_PPNH2.v6/128 next-hop: $ATE2_INTERNAL_TE10.v6/128
      ```
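The recursive resolution implied by these static routes can be sketched as follows: a minimal Python illustration using the test's placeholder names, in which a route's pseudo-protocol next-hop (PPNH) recurses through the static route into the tunnel destination of the encap next-hop-group.

```python
# Static routes from the example above: PPNH -> GUE tunnel destination.
STATIC_ROUTES = {
    "$ATE2_PPNH1.v6/128": "$ATE2_INTERNAL_TE11.v6/128",
    "$ATE2_PPNH2.v6/128": "$ATE2_INTERNAL_TE10.v6/128",
}

def tunnel_destination(bgp_next_hop: str) -> str:
    """Resolve an IBGP route's pseudo-protocol next-hop to the tunnel
    destination of the encap next-hop-group it recurses into.

    Next-hops that are not PPNHs resolve normally (returned unchanged).
    """
    return STATIC_ROUTES.get(bgp_next_hop, bgp_next_hop)
```

So a prefix advertised by `$ATE2_C.IBGP.v6` with next-hop `$ATE2_PPNH1.v6/128` is encapsulated towards `$ATE2_INTERNAL_TE11.v6/128`.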
- GUE Decapsulation:
  - For a GUE decapsulation node, configure the following:
    - UDP port 6080 (configurable) must be used for decapsulating IPv4 and IPv6 payloads. The implementation MUST look at the first 4 bits of the UDP payload to determine the GUE version, as well as whether the IP version of the payload is IPv4 or IPv6, as explained in the IETF draft.
    - The decapsulation node must be configured to decapsulate traffic received for the ranges `$DUT_TE11.v6/64` and `$DUT_TE10.v6/64`, in place of the corresponding /128 addresses.
    - After decapsulation, the outer TTL and DSCP bits must not be copied to the inner header.
- Use Health-1.1: Generic Health Check. If errors are identified, the test MUST fail.
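The first-nibble inspection described above can be sketched as follows (a minimal Python illustration of the classification rule, not a full GUE parser):

```python
def classify_gue_payload(udp_payload: bytes) -> str:
    """Classify a UDP/6080 payload by its first 4 bits.

    A first nibble of 0x4 or 0x6 means direct IP-over-UDP encapsulation
    (the payload is a bare IPv4 or IPv6 packet); 0x0 means a GUE
    variant-0 header follows instead of an IP header.
    """
    if not udp_payload:
        raise ValueError("empty UDP payload")
    nibble = udp_payload[0] >> 4
    if nibble == 4:
        return "ipv4"
    if nibble == 6:
        return "ipv6"
    if nibble == 0:
        return "gue-v0"
    raise ValueError(f"unrecognized first nibble: {nibble:#x}")
```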
RT-3.52.1: Baseline test

- Test Steps:
  - Generate the DUT configuration as specified in the test setup.
  - Use gNMI to push the configuration to the DUT.
  - Ensure no prefixes are exchanged over the IBGP peering between `$ATE2_C.IBGP.v6` and `$DUT_lo0.v6`. Validate this using OC:
    - /network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state/session-state
    - /network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state/prefixes/received
    - /network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state/prefixes/sent
  - Validate the DUT using the Health-1.1 steps.
  - Start Flow-Sets 1, 2 and 5.
  - Send 50,000 packets per flow at 1,000 pps.
  - Validate zero packet loss on the ATE side.
  - Execute post-test health checks and compare the results with the baseline. Verify that there are no core dumps or other issues.
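The zero-loss validation in the steps above reduces to a tx/rx comparison per flow. A small illustrative Python sketch (`flow_stats` is a hypothetical dict of OTG flow counters, not a real OTG API):

```python
def assert_zero_loss(flow_stats: dict) -> None:
    """Raise if any flow lost packets (frames sent != frames received)."""
    lossy = {name: stats for name, stats in flow_stats.items()
             if stats["tx"] != stats["rx"]}
    if lossy:
        raise AssertionError(f"packet loss detected: {lossy}")

# Example: 50,000 packets per flow, all received on the ATE.
assert_zero_loss({"flow-set1-user1-v4-be1": {"tx": 50000, "rx": 50000}})
```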
RT-3.52.2: Migrate BE1 flows from the DUT_Port2 --> ATE2_Port1 path to the DUT_Port4 --> ATE2_Port3 path

- Test Steps:
  - Re-run the baseline test (RT-3.52.1), with flows active in Flow-Set #1, Flow-Set #2 and Flow-Set #5.
  - Execute the health checks described previously.
  - The IBGP session between `$ATE2_C.IBGP.v6` and `$DUT_lo0.v6` should now advertise only `$ATE2_INTERNAL6.v[46]`, with a local preference of 200 and the pseudo-protocol next-hop `$ATE2_PPNH1.v6/128`.
  - `$DUT_lo0.v6` advertises `$ATE1_PORT1_user[1-5].v[46]` to `$ATE2_C.IBGP.v6`.
- Expectations:
  - Routes to prefixes `$ATE2_INTERNAL6.v4/24` and `$ATE2_INTERNAL6.v6/64`, learned from `$ATE2_C.IBGP.v6/128`, should be placed in the FIB. Other prefixes from ATE2 will continue to be learned via the IBGP peering between `$ATE2_PORT1.IBGP.v[46]` and `$DUT_lo0.v[46]` and hence will be in the DUT's FIB. Please use the AFT paths specified below for verification.
  - Flows destined for `$ATE2_INTERNAL6.v4/24` and `$ATE2_INTERNAL6.v6/64` should be GUE-encapsulated with tunnel destination `$ATE2_INTERNAL_TE11.v6` and routed over the EBGP peering between `$ATE2_Port3` and `$DUT_Port4`, and these flows must be 100% successful (zero loss). Please check this on ATE2 (capture one packet per flow using OTG and decode it to validate the headers; the remaining packets can be trusted to also be encapsulated). Verify that traffic is unencapsulated before the migration and encapsulated after it.
  - The outer header TTL should be 63 upon arrival at `ATE2_Port1` (before decapsulation). Please check this on ATE2.
  - Post decapsulation at `ATE2_Port1`, ensure that the DSCP bits on the payload are the same as the DSCP bits set by ATE1:Port1 before sending the traffic to the DUT for encap.
  - Verify on the ATEs that the number of packets sent is the same as the number of encapsulated packets received per tunnel endpoint. Also check the interface counters using OC: /interfaces/interface/state/counters/out-unicast-pkts
  - Unencapsulated flows from ATE2 to `ATE1_Port1` must have 100% success (zero loss), routing via the IBGP peering between `$ATE2_IBGP.v[46]` and `$DUT_lo0.v[46]`.
  - Post-test health checks should be performed and compared against the baseline. Verify the absence of drops or core dumps. If any are found, the test MUST fail.
AFT paths to be used for verification:

/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/prefix
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/prefix
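The path-selection expectation above can be sketched as a best-path comparison. This is an illustrative Python sketch, assuming the paths learned via `$ATE2_PORT1.IBGP` carry the default Local Preference of 100 while the `$ATE2_C.IBGP.v6` paths carry 200 (only the Local-Pref value of 200 is stated in the test; the default of 100 is an assumption):

```python
def best_path(paths):
    """Simplified BGP best-path: higher LOCAL_PREF wins, then shorter
    AS path, then lower router-id as the tie-breaker."""
    return min(paths, key=lambda p: (-p["local_pref"],
                                     len(p["as_path"]), p["router_id"]))

paths = [
    {"peer": "$ATE2_PORT1.IBGP.v6", "local_pref": 100,
     "as_path": [], "router_id": "1.1.1.1"},
    {"peer": "$ATE2_C.IBGP.v6", "local_pref": 200,
     "as_path": [], "router_id": "2.2.2.2"},  # next-hop $ATE2_PPNH1.v6
]
winner = best_path(paths)
```

If the preferred path is withdrawn, re-running `best_path` over the remaining candidates yields the `$ATE2_PORT1.IBGP` path, which is the switchover behavior the later sub-tests exercise.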
RT-3.52.3 to RT-3.52.6: Follow the same steps as in RT-3.52.2 and gradually migrate one traffic class at a time, in the following order. Note the changes in RT-3.52.5 and RT-3.52.6.
* Migrate routing of AF1 flows from `DUT_Port2` --> `ATE2_Port1` to
`DUT_Port4` --> `ATE2_Port3`.
* BE1 and AF1 are now migrated
* Complete validation same as RT-3.52.2 above
* Migrate routing of AF2 flows from `DUT_Port2` --> `ATE2_Port1` to
`DUT_Port4` --> `ATE2_Port3`.
* BE1-AF2 are now migrated
* Complete validation same as RT-3.52.2 above
* Migrate routing of AF3 flows from `DUT_Port2` --> `ATE2_Port1` to
`DUT_Port4` --> `ATE2_Port3`.
* `$ATE2_C.IBGP.v6` will advertise `$ATE2_INTERNAL9.v4/24` and
`$ATE2_INTERNAL9.v6/64` with next-hop as `$ATE2_PPNH2.v6/128`.
Traffic towards `$ATE2_INTERNAL9.v[46]/24` will have tunnel
destination `$ATE2_INTERNAL_TE10.v6/128`.
* BE1-AF3 are now migrated
* Complete validation same as RT-3.52.2 above
* Migrate routing of AF4 flows from `DUT_Port2` --> `ATE2_Port1` to
`DUT_Port4` --> `ATE2_Port3`.
* `$ATE2_C.IBGP.v6` will advertise `$ATE2_INTERNAL10.v4/24` and
`$ATE2_INTERNAL10.v6/64` with next-hop as `$ATE2_PPNH2.v6/128`
Traffic towards `$ATE2_INTERNAL10.v[46]` will have tunnel
destination `$ATE2_INTERNAL_TE10.v6/128`.
* BE1-AF4 are now migrated
* Complete validation same as RT-3.52.2 above
RT-3.52.7: Bidirectional encapsulated traffic

- Situation:
  - The test begins from the state established in RT-3.52.3, where all traffic from ATE1 to ATE2 is encapsulated by the DUT and routed via the DUT_Port4 --> ATE2_Port3 path.
- Test Steps:
  - Perform all previously defined health checks as a baseline.
  - Stop Flow-Set #5 and start Flow-Sets #3 and #4, resulting in active flows for Flow-Set #1 through Flow-Set #4.
- Expectations:
  - Traffic from ATE1 to ATE2 should be GUE-encapsulated with tunnel destinations `$ATE2_INTERNAL_TE11.v6/128` and `$ATE2_INTERNAL_TE10.v6/128` and routed out `$DUT_Port4`<>`$ATE2_Port3`. Verify this on ATE2.
  - `ATE2_Port3` sends encapsulated flows (Flow-Set #3 and Flow-Set #4) to `ATE1_Port1` through the DUT. BE1 to AF2 flows are expected to have a tunnel destination of `$DUT_TE11.v6/128`, while AF3 and AF4 flows should have `$DUT_TE10.v6/128` as their tunnel destination. Traffic should reach the destination successfully with zero loss.
  - Post-test health checks should be performed and compared against the baseline. Verify the absence of drops or core dumps. If any are found, the test MUST fail.
RT-3.52.8: Withdraw the tunnel endpoints over EBGP

- Situation:
  - The test begins from the final state of RT-3.52.7. In this state, the DUT encapsulates BE1-AF2 traffic from ATE1 to ATE2 towards tunnel destination address `$ATE2_INTERNAL_TE11.v6/128`, and AF3-AF4 traffic is encapsulated towards `$ATE2_INTERNAL_TE10.v6/128`. Similarly, BE1-AF2 traffic from ATE2 to ATE1 is encapsulated with tunnel destination `$DUT_TE11.v6/128`, and AF3-AF4 traffic uses `$DUT_TE10.v6/128`.
  - ATE2 does not send any unencapsulated flows (Flow-Set #5).
- Test Steps:
  - Execute the previously defined health checks as a baseline.
  - Flow-Sets #1 through #4 should be active.
  - On `ATE2_Port3`, stop advertising the prefixes `$ATE2_INTERNAL_TE11.v6/128` and `$ATE2_INTERNAL_TE10.v6/128` to `DUT_Port4` over EBGP.
- Expectations:
  - When `ATE2_Port3` withdraws the route advertisement on the EBGP peering with `DUT_Port4`:
    - The tunnel endpoints `$ATE2_INTERNAL_TE11.v6/128` and `$ATE2_INTERNAL_TE10.v6/128`, learned via the IBGP peering between `$ATE2_IBGP.v[46]` and `$DUT_lo0.v[46]`, should be placed in the FIB.
    - Traffic from ATE1 to ATE2 should then take the DUT_Port2 --> ATE2_Port1 path after encapsulation on the DUT, with no traffic loss expected due to this change. Please verify this behavior on ATE2.
    - Post-test health checks should be performed and compared against the baseline. Verify the absence of drops or core dumps. If any are found, the test MUST fail.
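The expected fallback for the tunnel-endpoint routes can be sketched as route preference across protocols. Illustrative Python only; the route-preference values below are placeholders standing in for a platform's protocol preference (administrative distance), not any vendor's actual numbers:

```python
PREFERENCE = {"ebgp": 20, "ibgp": 200}  # placeholder values: lower wins

def fib_route(candidates):
    """Install the most-preferred available path for a prefix."""
    return min(candidates, key=lambda r: PREFERENCE[r["proto"]])

tunnel_endpoint_routes = [
    {"proto": "ebgp", "via": "DUT_Port4 -> ATE2_Port3"},
    {"proto": "ibgp", "via": "DUT_Port2 -> ATE2_Port1"},
]
before = fib_route(tunnel_endpoint_routes)
# The EBGP withdrawal removes that candidate; the IBGP-learned copy of
# the tunnel endpoint takes over in the FIB.
after = fib_route([r for r in tunnel_endpoint_routes if r["proto"] != "ebgp"])
```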
RT-3.52.9: Inflight
- Situation:
  - The test starts from the end state of RT-3.52.8; restart all the flows. Traffic from ATE1 to ATE2 is routed via the DUT_Port2 --> ATE2_Port1 path after encapsulation on the DUT. ATE2 --> ATE1 traffic is routed via the ATE2_Port3 --> DUT_Port4 path.
  - Static routes for `$ATE2_PPNH1.v6/128` and `$ATE2_PPNH2.v6/128` are active because the tunnel endpoints `$ATE2_INTERNAL_TE11.v6/128` and `$ATE2_INTERNAL_TE10.v6/128` are reachable via the IBGP peering between `$ATE2_IBGP.v[46]` and `$DUT_lo0.v[46]`.
  - Routes for `$ATE2_INTERNAL[6-10].v[46]`, advertised by ATE2 over the IBGP peering `$ATE2_C.IBGP.v6`<>`$DUT_lo0.v6`, remain active on the DUT.
  - Verify the FIB entries using the AFT streamed data:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/prefix
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/prefix
- Test Steps:
  - Configure `$ATE2_IBGP.v[46]` to stop advertising the tunnel endpoints `$ATE2_INTERNAL_TE11.v6/128` and `$ATE2_INTERNAL_TE10.v6/128` to `$DUT_lo0.v[46]` over their IBGP peering.
- Expectations:
  - Static routes for `$ATE2_PPNH1.v6/128` and `$ATE2_PPNH2.v6/128` must become invalid. Verify using the AFT table.
  - Routes for `$ATE2_INTERNAL[6-10].v[46]` advertised by ATE2 over the IBGP peering `$ATE2_C.IBGP.v6`<>`$DUT_lo0.v6` must become invalid. Verify using the AFT table.
  - Routes for `$ATE2_INTERNAL[6-10].v[46]` advertised by `$ATE2_IBGP.v[46]` over the IBGP peering `$ATE2_IBGP.v[46]`<>`$DUT_lo0.v[46]` must be placed in the FIB. Verify using AFT.
  - Traffic from ATE1 to ATE2 towards `$ATE2_INTERNAL[6-10].v[46]` destinations must not experience any drops and should be routed unencapsulated via the ATE2_Port1<>DUT_Port2 path.
  - Post-test health checks should be performed and compared against the baseline. Verify the absence of drops or core dumps. If any are found, the test MUST fail.
  - AFT paths below to be used for verification:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/prefix
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/prefix
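The invalidation chain above (withdrawn tunnel endpoints, therefore dead static routes, therefore dead PPNH-based routes) can be sketched as follows, an illustrative Python fragment using the test's placeholder names:

```python
def active_static_routes(static_routes: dict, rib: set) -> dict:
    """A static route stays installed only while its next-hop resolves."""
    return {dst: nh for dst, nh in static_routes.items() if nh in rib}

STATICS = {
    "$ATE2_PPNH1.v6/128": "$ATE2_INTERNAL_TE11.v6/128",
    "$ATE2_PPNH2.v6/128": "$ATE2_INTERNAL_TE10.v6/128",
}
# Before the withdrawal, the tunnel endpoints are in the RIB via IBGP.
before = active_static_routes(STATICS, {"$ATE2_INTERNAL_TE11.v6/128",
                                        "$ATE2_INTERNAL_TE10.v6/128"})
# After $ATE2_IBGP stops advertising them, both statics become invalid,
# which in turn invalidates every route resolving via the PPNHs.
after = active_static_routes(STATICS, set())
```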
- Test Steps:
  - Establish all IS-IS adjacencies. Ensure that prefix `$ATE2_C.IBGP.v6/128` is not advertised from `ATE2_Port1` to `DUT_Port2`, and prefixes `$DUT_lo0.v[46]` are not advertised from `DUT_Port2` to `ATE2_Port1`. Validate using AFT entries.
  - Run the previously defined health checks.
  - On their mutual EBGP session, `DUT_Port4` advertises `$DUT_lo0.v[46]` and `ATE2_Port3` advertises `$ATE2_C.IBGP.v6/128`, in addition to any existing exchanges. This establishes the IBGP session between `$DUT_lo0.v6/128` and `$ATE2_C.IBGP.v6/128` via the EBGP session between `DUT_Port4` and `ATE2_Port3`.
  - Disable the connection between `DUT_Port2` and `ATE2_Port1`.
  - Verify that `$ATE2_C.IBGP.v6/128` and `$DUT_lo0.v6/128` exchange the same prefixes as before, according to the table mentioned earlier, over their IBGP session.
  - Start all flows from Flow-Set #1 to Flow-Set #4.
- Expectations:
  - Ensure no packet drops occur after the IBGP transport migration. Validate on the ATE.
  - Packets should be sent encapsulated between `DUT:Port4` and `ATE2:Port3`. Validate on the ATE.
  - Post-test health checks should be performed and compared against the baseline. Verify the absence of drops or core dumps. If any are found, the test MUST fail. The following AFT paths are to be used for validation:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/prefix
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/octets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/packets-forwarded
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/next-hop-group
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/origin-protocol
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/prefix
{
"network-instances": {
"network-instance": [
{
"name": "DEFAULT",
"config": {
"name": "DEFAULT"
},
"protocols": {
"protocol": [
{
"identifier": "STATIC",
"name": "STATIC",
"config": {
"identifier": "STATIC",
"name": "STATIC"
},
"static-routes": {
"static": [
{
"config": {
"prefix": "fc00:10::1/128"
},
"next-hop-group": {
"config": {
"name": "ENCAP-NHG-1"
}
},
"prefix": "fc00:10::1/128"
}
]
}
}
]
},
"static": {
"next-hop-groups": {
"next-hop-group": [
{
"name": "ENCAP-NHG-1",
"config": {
"name": "ENCAP-NHG-1"
},
"next-hops": {
"next-hop": [
{
"index": "0",
"config": {
"index": "0"
}
}
]
}
}
]
},
"next-hops": {
"next-hop": [
{
"index": "0",
"config": {
"index": "0"
},
"encap-headers": {
"encap-header": [
{
"index": 0,
"config": {
"index": 0,
"type": "UDPV4"
},
"udp-v4": {
"config": {
"dscp": 32,
"dst-ip": "10.50.50.1",
"dst-udp-port": 6080,
"ip-ttl": 64,
"src-ip": "10.5.5.5",
"src-udp-port": 49152
}
}
}
]
}
}
]
}
}
}
]
}
}

{
"defined-sets": {
"ipv6-prefix-sets": {
"ipv6-prefix-set": [
{
"name": "dst_prefix_v6_gue",
"config": {
"name": "dst_prefix_v6_gue",
"prefix": [
"2001:db8:1::/64",
"2001:db8:2::/64"
]
}
}
]
}
},
"network-instances": {
"network-instance": [
{
"name": "DEFAULT",
"config": {
"name": "DEFAULT"
},
"policy-forwarding": {
"policies": {
"policy": [
{
"policy-id": "decap-policy",
"config": {
"policy-id": "decap-policy"
},
"rules": {
"rule": [
{
"sequence-id": 1,
"config": {
"sequence-id": 1
},
"action": {
"config": {
"decapsulate-gue": true
}
},
"ipv6": {
"config": {
"destination-address-prefix-set": "dst_prefix_v6_gue",
"protocol": "IP_UDP"
}
},
"transport": {
"config": {
"destination-port": 6080
}
}
}
]
}
}
]
}
}
}
]
}
}

paths:
# config
/network-instances/network-instance/static/next-hop-groups/next-hop-group/config/name:
/network-instances/network-instance/static/next-hop-groups/next-hop-group/next-hops/next-hop/config/index:
/network-instances/network-instance/static/next-hops/next-hop/config/index:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/config/index:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/config/type:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/udp-v4/config/dscp:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/udp-v4/config/dst-ip:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/udp-v4/config/dst-udp-port:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/udp-v4/config/ip-ttl:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/udp-v4/config/src-ip:
/network-instances/network-instance/static/next-hops/next-hop/encap-headers/encap-header/udp-v4/config/src-udp-port:
/network-instances/network-instance/protocols/protocol/static-routes/static/next-hop-group/config/name:
# telemetry
# BGP
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/state/session-state:
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state/prefixes/received:
/network-instances/network-instance/protocols/protocol/bgp/neighbors/neighbor/afi-safis/afi-safi/state/prefixes/sent:
# IS-IS
/network-instances/network-instance/protocols/protocol/isis/interfaces/interface/levels/level/adjacencies/adjacency/state/adjacency-state:
# AFT
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/octets-forwarded:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/counters/packets-forwarded:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/next-hop-group:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/origin-protocol:
/network-instances/network-instance/afts/ipv4-unicast/ipv4-entry/state/prefix:
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/octets-forwarded:
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/counters/packets-forwarded:
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/next-hop-group:
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/origin-protocol:
/network-instances/network-instance/afts/ipv6-unicast/ipv6-entry/state/prefix:
# interface counters
/interfaces/interface/state/counters/out-unicast-pkts:
rpcs:
gnmi:
gNMI.Set:
union_replace: true
gNMI.Subscribe:
on_change: true

Specify the minimum DUT-type:
- FFF - fixed form factor
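For illustration only, the decap-policy rule defined earlier and the required post-decapsulation header handling can be modeled as a minimal Python sketch (helper names are hypothetical, not a DUT or OpenConfig API):

```python
import ipaddress

# Prefix set from the dst_prefix_v6_gue defined-set in the decap policy.
DST_PREFIXES = [ipaddress.ip_network(p)
                for p in ("2001:db8:1::/64", "2001:db8:2::/64")]

def matches_decap_rule(dst_ip, ip_proto, dst_port):
    """True when a packet hits the decap-policy rule: destination inside
    dst_prefix_v6_gue, protocol UDP (17), destination UDP port 6080."""
    dst = ipaddress.ip_address(dst_ip)
    return (ip_proto == 17 and dst_port == 6080
            and any(dst in p for p in DST_PREFIXES))

def inner_header_after_decap(inner_dscp, inner_ttl):
    """DSCP and TTL are NOT copied from the outer header; the only
    change is that the inner TTL is decremented by 1 before forwarding."""
    return {"dscp": inner_dscp, "ttl": inner_ttl - 1}
```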