Test the Network Connectivity
iPerf is a tool commonly used to measure network performance and perform network tuning. The iPerf application consists of two main components: iperf-sleep (the client) and iperf-server (the server).
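The two components map onto plain iPerf roles: one process listens as the server while another connects to it as the client. The sketch below is illustrative only and runs outside Kubernetes; the host name iperf-host.example.com is a placeholder.

```
# Terminal 1: start an iPerf server listening on TCP port 5201.
iperf -s -p 5201

# Terminal 2: connect as a client, report every second (-i 1),
# and cap the sending rate at 10 Mbit/s (-b 10Mb).
iperf -c iperf-host.example.com -p 5201 -i 1 -b 10Mb
```

The in-cluster test at the end of this procedure uses the same client flags against the server pod.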
Prerequisites for Using the iPerf Tool
To deploy an application, you must create a namespace for that application in both the client and server clusters before creating the slice.

Create the iperf namespace on the worker clusters identified as the client and the server using the following command:

```
kubectl create ns iperf
```
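You can confirm that the namespace exists on each worker cluster before proceeding; this is standard kubectl and carries no KubeSlice-specific assumptions.

```
# Verify that the namespace exists on each worker cluster.
kubectl get ns iperf
```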
Deploy the iPerf Application
Deploy the iPerf application and test the network connectivity between the worker clusters.
You can also use an intra-cluster slice to test intra-cluster connectivity. To know more, see deploying the iPerf application on an intra-cluster slice.

Identify a worker cluster as the client and another worker cluster as the server, and configure them to test the network connectivity.

To establish connectivity between the two worker clusters:
- Switch context to the worker cluster identified as the client using the following command:

  ```
  kubectx <cluster name>
  ```
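  If you do not have kubectx installed, the built-in kubectl equivalent below works the same way; replace <context name> with the kubeconfig context entry for the worker cluster.

  ```
  # List the available contexts, then switch to the client cluster's context.
  kubectl config get-contexts
  kubectl config use-context <context name>
  ```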
- Onboard the existing iperf namespace onto the slice. To know more, see onboarding namespaces. A minimal sketch of the slice configuration involved follows this step.

  Caution: Ensure that you have onboarded the iperf namespace before deploying the application. Namespace onboarding takes some time, so if you create the namespace only after the slice is created, you could face issues when you deploy the application.
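  Namespaces are onboarded through the slice configuration. The fragment below is a minimal sketch, assuming a slice named water and the KubeSlice SliceConfig API; a real SliceConfig has additional required fields (cluster list, slice subnet, and so on) that are omitted here.

  ```yaml
  apiVersion: controller.kubeslice.io/v1alpha1
  kind: SliceConfig
  metadata:
    name: water                # the example slice used in this procedure
  spec:
    namespaceIsolationProfile:
      applicationNamespaces:
        - namespace: iperf     # onboard the iperf namespace
          clusters:
            - '*'              # onboard it on all worker clusters in the slice
  ```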
- Create the iperf-sleep.yaml file using the following template:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: iperf-sleep
    namespace: iperf
    labels:
      app: iperf-sleep
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: iperf-sleep
    template:
      metadata:
        labels:
          app: iperf-sleep
      spec:
        containers:
        - name: iperf
          image: mlabbe/iperf
          imagePullPolicy: Always
          command: ["/bin/sleep", "3650d"]
        - name: sidecar
          image: nicolaka/netshoot
          imagePullPolicy: IfNotPresent
          command: ["/bin/sleep", "3650d"]
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
            allowPrivilegeEscalation: true
            privileged: true
  ```
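  Both containers only sleep: the iperf container stays idle until you exec into it to run the client, and the netshoot sidecar is a network-debugging toolbox. As an optional check once the pod is up, you can inspect the pod's network interfaces from the sidecar; the slice interface name (often nsm0 in KubeSlice deployments) may vary.

  ```
  # Inspect the pod's network interfaces from the debugging sidecar.
  kubectl exec -it deploy/iperf-sleep -c sidecar -n iperf -- ip addr
  ```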
- Apply the iperf-sleep.yaml using the following command:

  ```
  kubectl apply -f iperf-sleep.yaml -n iperf
  ```
- Validate the iPerf client by checking if the pods are running on the worker cluster using the following command:

  ```
  kubectl get pods -n iperf
  ```

  Expected Output

  ```
  NAME                           READY   STATUS    RESTARTS   AGE
  iperf-sleep-676b945fbf-9l9h7   3/3     Running   0          60s
  ```
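  If the pod is not in the Running state, standard kubectl inspection commands can help narrow down the cause; nothing below is KubeSlice-specific.

  ```
  # Show pod details and recent events for scheduling or image-pull failures.
  kubectl describe pod -l app=iperf-sleep -n iperf
  kubectl get events -n iperf --sort-by=.metadata.creationTimestamp
  ```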
- Switch context to the worker cluster identified as the server using the following command:

  ```
  kubectx <cluster name>
  ```
- Onboard the existing iperf namespace onto the slice on this cluster as well. To know more, see onboarding namespaces.

  Caution: Ensure that you have onboarded the iperf namespace before deploying the application. Namespace onboarding takes some time, so if you create the namespace only after the slice is created, you could face issues when you deploy the application.
- Create the iperf-server.yaml file using the following template:

  ```yaml
  apiVersion: apps/v1
  kind: Deployment
  metadata:
    name: iperf-server
    namespace: iperf
    labels:
      app: iperf-server
  spec:
    replicas: 1
    selector:
      matchLabels:
        app: iperf-server
    template:
      metadata:
        labels:
          app: iperf-server
      spec:
        containers:
        - name: iperf
          image: mlabbe/iperf
          imagePullPolicy: Always
          args:
            - '-s'
            - '-p'
            - '5201'
          ports:
          - containerPort: 5201
            name: server
        - name: sidecar
          image: nicolaka/netshoot
          imagePullPolicy: IfNotPresent
          command: ["/bin/sleep", "3650d"]
          securityContext:
            capabilities:
              add: ["NET_ADMIN"]
            allowPrivilegeEscalation: true
            privileged: true
  ---
  apiVersion: networking.kubeslice.io/v1beta1
  kind: ServiceExport
  metadata:
    name: iperf-server
    namespace: iperf
  spec:
    slice: <slicename> # for example, water
    selector:
      matchLabels:
        app: iperf-server
    ingressEnabled: false
    ports:
    - name: tcp
      containerPort: 5201
      protocol: TCP
  ```
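  The ServiceExport object makes the iperf-server service reachable from other clusters on the slice under the DNS name iperf-server.iperf.svc.slice.local, which the client uses later in this procedure. After applying it, you can inspect the exported service with standard kubectl; describe works on any custom resource.

  ```
  # Inspect the exported service and its endpoint status.
  kubectl describe serviceexport iperf-server -n iperf
  ```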
- Apply the iperf-server.yaml configured in the worker cluster using the following command:

  ```
  kubectl apply -f iperf-server.yaml -n iperf
  ```
- Validate the iPerf server by checking if the pods are running on the worker cluster using the following command:

  ```
  kubectl get pods -n iperf
  ```

  Expected Output

  ```
  NAME                            READY   STATUS    RESTARTS   AGE
  iperf-server-7889799774-s5zrs   3/3     Running   0          60s
  ```
- Validate the service export of the iPerf server on the worker cluster identified as the server using the following command:

  ```
  kubectl get serviceexport -n iperf
  ```

  Expected Output

  ```
  NAME           SLICE   INGRESS   PORT(S)    ENDPOINTS   STATUS
  iperf-server   water             5201/TCP   1           READY
  ```
- Validate the service import of the iPerf server on the worker cluster using the following command:

  ```
  kubectl get serviceimport -n iperf
  ```

  Expected Output

  ```
  NAME           SLICE   PORT(S)    ENDPOINTS   STATUS
  iperf-server   water   5201/TCP   1           READY
  ```
- Validate the service import of the iPerf server on the other worker cluster (identified as the client) by running the following command:

  ```
  kubectl get serviceimport -n iperf
  ```

  Expected Output

  ```
  NAME           SLICE   PORT(S)    ENDPOINTS   STATUS
  iperf-server   water   5201/TCP   1           READY
  ```
- Switch context to the iPerf client cluster using the following command:

  ```
  kubectx <cluster name>
  ```
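  Before running the test, you can optionally confirm that the slice DNS name resolves from the client pod. The lookup below runs in the netshoot sidecar, which ships with nslookup; this is a sanity check rather than part of the documented procedure.

  ```
  # Resolve the exported server name from inside the client pod.
  kubectl exec -it deploy/iperf-sleep -c sidecar -n iperf -- \
    nslookup iperf-server.iperf.svc.slice.local
  ```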
- Check the connectivity from the iPerf client using the following command:

  ```
  kubectl exec -it deploy/iperf-sleep -c iperf -n iperf -- iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1 -b 10Mb
  ```
  Expected Output

  ```
  ------------------------------------------------------------
  Client connecting to iperf-server.iperf.svc.slice.local, TCP port 5201
  TCP window size: 45.0 KByte (default)
  ------------------------------------------------------------
  [ 1] local 10.1.1.5 port 58116 connected with 10.1.2.5 port 5201
  [ ID] Interval        Transfer    Bandwidth
  [ 1] 0.00-1.00 sec    640 KBytes  5.24 Mbits/sec
  [ 1] 1.00-2.00 sec    640 KBytes  5.24 Mbits/sec
  [ 1] 2.00-3.00 sec    640 KBytes  5.24 Mbits/sec
  [ 1] 3.00-4.00 sec    512 KBytes  4.19 Mbits/sec
  [ 1] 4.00-5.00 sec    640 KBytes  5.24 Mbits/sec
  [ 1] 5.00-6.00 sec    768 KBytes  6.29 Mbits/sec
  [ 1] 6.00-7.00 sec    512 KBytes  4.19 Mbits/sec
  [ 1] 7.00-8.00 sec    512 KBytes  4.19 Mbits/sec
  [ 1] 8.00-9.00 sec    512 KBytes  4.19 Mbits/sec
  [ 1] 9.00-10.00 sec   768 KBytes  6.29 Mbits/sec
  [ 1] 10.00-10.45 sec  384 KBytes  7.04 Mbits/sec
  [ 1] 0.00-10.45 sec   6.38 MBytes 5.12 Mbits/sec
  ```

  Success: The connectivity test between the worker clusters on a slice is successful!
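  The run above is a short smoke test. For a steadier bandwidth estimate you can lengthen the run; -t (test duration in seconds) and -i (reporting interval) are standard iPerf client flags, though the exact output depends on the iPerf version in the mlabbe/iperf image.

  ```
  # Run a 30-second test, reporting every 5 seconds.
  kubectl exec -it deploy/iperf-sleep -c iperf -n iperf -- \
    iperf -c iperf-server.iperf.svc.slice.local -p 5201 -t 30 -i 5
  ```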