
Test the Network Connectivity

iPerf is a tool commonly used to measure network performance and perform network tuning. The iPerf application consists of two main services: iperf-sleep (the client) and iperf-server (the server).

Prerequisites for Using the iPerf Tool

To deploy an application, you must create a namespace for that application in both the client and server clusters before creating the slice.

Create the iperf namespace on the worker clusters identified as the client and server using the following command:

kubectl create ns iperf
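
For example, assuming the client and server worker clusters are registered as kubectx contexts named worker-1 and worker-2 (hypothetical names; substitute your own), you can create the namespace on both clusters in one pass:

# Hypothetical context names; replace them with your client and server worker clusters.
for ctx in worker-1 worker-2; do
  kubectx "$ctx"               # switch to the worker cluster
  kubectl create ns iperf      # create the application namespace on that cluster
done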

Deploy the iPerf Application

Deploy the iPerf application and test the network connectivity between the worker clusters.

info

You can also use an intra-cluster slice to test the intra-cluster connectivity. To know more, see deploying the iPerf application on an intra-cluster slice.

Identify one worker cluster as the client and another worker cluster as the server, and configure them to test the network connectivity.

To establish the connectivity between two worker clusters:

  1. Switch context to the worker cluster identified as the client using the following command:

    kubectx <cluster name>
  2. Onboard the existing iperf namespace to the slice. To know more, see onboarding namespaces and the example SliceConfig sketched after this procedure.

    caution

    Ensure that the iperf namespace is already onboarded onto the slice. If you create the namespace only after the slice is created, onboarding takes some time, and deploying the application before onboarding completes can cause issues.

  3. Create the iperf-sleep.yaml file using the following template.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: iperf-sleep
      namespace: iperf
      labels:
        app: iperf-sleep
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: iperf-sleep
      template:
        metadata:
          labels:
            app: iperf-sleep
        spec:
          containers:
          - name: iperf
            image: mlabbe/iperf
            imagePullPolicy: Always
            command: ["/bin/sleep", "3650d"]
          - name: sidecar
            image: nicolaka/netshoot
            imagePullPolicy: IfNotPresent
            command: ["/bin/sleep", "3650d"]
            securityContext:
              capabilities:
                add: ["NET_ADMIN"]
              allowPrivilegeEscalation: true
              privileged: true
  4. Apply the iperf-sleep.yaml file using the following command:

    kubectl apply -f iperf-sleep.yaml -n iperf
  5. Validate the iPerf client by checking if the pods are running on the worker cluster using the following command:

    kubectl get pods -n iperf

    Expected Output

    NAME                           READY   STATUS    RESTARTS   AGE
    iperf-sleep-676b945fbf-9l9h7   3/3     Running   0          60s
  6. Switch context to the worker cluster identified as the server using the following command:

    kubectx <cluster name>
  7. Onboard the existing iperf namespace to the slice. To know more, see onboarding namespaces and the example SliceConfig sketched after this procedure.

    caution

    Ensure that the iperf namespace is already onboarded onto the slice. If you create the namespace only after the slice is created, onboarding takes some time, and deploying the application before onboarding completes can cause issues.

  8. Create the iperf-server.yaml file using the following template.

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: iperf-server
      namespace: iperf
      labels:
        app: iperf-server
    spec:
      replicas: 1
      selector:
        matchLabels:
          app: iperf-server
      template:
        metadata:
          labels:
            app: iperf-server
        spec:
          containers:
          - name: iperf
            image: mlabbe/iperf
            imagePullPolicy: Always
            args:
            - '-s'
            - '-p'
            - '5201'
            ports:
            - containerPort: 5201
              name: server
          - name: sidecar
            image: nicolaka/netshoot
            imagePullPolicy: IfNotPresent
            command: ["/bin/sleep", "3650d"]
            securityContext:
              capabilities:
                add: ["NET_ADMIN"]
              allowPrivilegeEscalation: true
              privileged: true
    ---
    apiVersion: networking.kubeslice.io/v1beta1
    kind: ServiceExport
    metadata:
      name: iperf-server
      namespace: iperf
    spec:
      slice: <slicename> # water
      selector:
        matchLabels:
          app: iperf-server
      ingressEnabled: false
      ports:
      - name: tcp
        containerPort: 5201
        protocol: TCP
  9. Apply the iperf-server.yaml file on the worker cluster using the following command:

    kubectl apply -f iperf-server.yaml -n iperf
  10. Validate the iPerf server by checking if the pods are running on the worker cluster using the following command:

    kubectl get pods -n iperf

    Expected Output

    NAME                            READY   STATUS    RESTARTS   AGE
    iperf-server-7889799774-s5zrs   3/3     Running   0          60s
  11. Validate the service export of the iPerf server on the worker cluster identified as the server using the following command:

    kubectl get serviceexport -n iperf

    Expected Output

    NAME           SLICE   INGRESS   PORT(S)    ENDPOINTS   STATUS
    iperf-server   water             5201/TCP   1           READY
  12. Validate the service import of the iPerf server on the worker cluster identified as the server using the following command:

    kubectl get serviceimport -n iperf

    Expected Output

    NAME           SLICE   PORT(S)    ENDPOINTS   STATUS
    iperf-server   water   5201/TCP   1           READY
  13. Validate the service import of the iPerf server on the other worker cluster (identified as the client) by running the following command:

    kubectl get serviceimport -n iperf

    Expected Output

    NAME           SLICE   PORT(S)    ENDPOINTS   STATUS
    iperf-server   water   5201/TCP   1           READY
  14. Switch context to the worker cluster identified as the client using the following command:

    kubectx <cluster name>
  15. Check the connectivity from the iPerf client using the following command:

    kubectl exec -it deploy/iperf-sleep -c iperf -n iperf -- iperf -c iperf-server.iperf.svc.slice.local -p 5201 -i 1 -b 10Mb;

    Expected Output

    ------------------------------------------------------------
    Client connecting to iperf-server.iperf.svc.slice.local, TCP port 5201
    TCP window size: 45.0 KByte (default)
    ------------------------------------------------------------
    [  1] local 10.1.1.5 port 58116 connected with 10.1.2.5 port 5201
    [ ID] Interval         Transfer    Bandwidth
    [  1] 0.00-1.00 sec    640 KBytes  5.24 Mbits/sec
    [  1] 1.00-2.00 sec    640 KBytes  5.24 Mbits/sec
    [  1] 2.00-3.00 sec    640 KBytes  5.24 Mbits/sec
    [  1] 3.00-4.00 sec    512 KBytes  4.19 Mbits/sec
    [  1] 4.00-5.00 sec    640 KBytes  5.24 Mbits/sec
    [  1] 5.00-6.00 sec    768 KBytes  6.29 Mbits/sec
    [  1] 6.00-7.00 sec    512 KBytes  4.19 Mbits/sec
    [  1] 7.00-8.00 sec    512 KBytes  4.19 Mbits/sec
    [  1] 8.00-9.00 sec    512 KBytes  4.19 Mbits/sec
    [  1] 9.00-10.00 sec   768 KBytes  6.29 Mbits/sec
    [  1] 10.00-10.45 sec  384 KBytes  7.04 Mbits/sec
    [  1] 0.00-10.45 sec   6.38 MBytes 5.12 Mbits/sec
    success

    The connectivity between the worker clusters on a slice is successful!
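
For reference, the namespace onboarding required in steps 2 and 7 is declared in the slice configuration applied on the controller cluster, rather than by a separate command on the workers. The following is a minimal sketch of the relevant SliceConfig fields, assuming a slice named water (as shown in the outputs above) and a project namespace of the form kubeslice-<projectname>; check the exact field names against the onboarding namespaces documentation for your installed version.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: water                          # slice name referenced by the ServiceExport
  namespace: kubeslice-<projectname>   # project namespace on the controller cluster
spec:
  # Other required SliceConfig fields (sliceSubnet, clusters, and so on) are omitted here.
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf               # onboard the iperf namespace onto the slice
        clusters:
          - '*'                        # or list the client and server worker clusters explicitly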
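
If the iperf test in step 15 does not connect, one quick check is to confirm that the exported service name resolves inside the client pod. The command below is a sketch that reuses the netshoot sidecar already present in the iperf-sleep deployment; the addresses returned depend on your slice configuration.

kubectl exec -it deploy/iperf-sleep -c sidecar -n iperf -- nslookup iperf-server.iperf.svc.slice.local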