Version: 0.4.0

Create a Slice

Introduction

This topic describes the steps to create a slice on the worker clusters that are registered with the KubeSlice Controller.

Prerequisites

Before you begin, ensure the following prerequisites are met:

  • You have the KubeSlice Controller installed on a separate cluster. For more information, see Installing the KubeSlice Controller.
  • You have registered two or more worker clusters with the KubeSlice Controller. For more information, see Registering the Worker Cluster.
  • You have installed Istio in the worker clusters to configure the external gateways.
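
As a reference point, a default-profile installation with istioctl is one common way to install Istio (your environment may instead use Helm or a distribution-specific chart); run it against each worker cluster context:

# Install Istio with the default profile; assumes istioctl is on your PATH.
istioctl install --set profile=default -y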

Creating the Slice YAML File

After successfully registering the worker clusters with the KubeSlice Controller, create a slice to onboard your application namespaces. A slice can span multiple clusters, or it can be an intra-cluster slice confined to a single cluster.

The following information is required to create the .yaml file for the slice.

info

Before creating the slice configuration .yaml file, see Managing Namespaces to add namespaces in the slice configuration.

Slice Configuration

info

For example YAML files on kind clusters, see kind YAML examples.

Create the slice configuration .yaml file using the following template.

info

If you want to add a QoS profile configuration for multiple slices, create a standard QoS profile. Add the name of the external QoS profile as the value of standardQosProfileName in the slice configuration YAML file.

In a slice configuration YAML file, the standardQosProfileName parameter and the qosProfileDetails object are mutually exclusive.
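
For example, a slice reusing an external profile named profile1 (an illustrative name; see Create a Standard QoS Profile below for a complete example) would carry only the reference in its spec:

spec:
  standardQosProfileName: profile1 #omit the qosProfileDetails object entirely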

info

To understand more about the configuration parameters, see Slice Configuration Parameters.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: <slice-name>
  namespace: kubeslice-<projectname>
spec:
  sliceSubnet: <slice-subnet>
  maxClusters: <2 - 32> #Ex: 5. By default, the maxClusters value is set to 16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - <registered-cluster-name-1>
    - <registered-cluster-name-2>
  qosProfileDetails:
    queueType: HTB
    priority: <qos_priority> #keep integer values from 0 to 3
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - '*'
    isolationEnabled: false #make this true in case you want to enable isolation
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - '*'
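
For reference, a minimal filled-in sketch of this template, assuming a slice named red in the project namespace kubeslice-avesha with two illustrative worker cluster names:

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: red
  namespace: kubeslice-avesha
spec:
  sliceSubnet: 10.1.0.0/16
  maxClusters: 16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - worker-cluster-1
    - worker-cluster-2
  qosProfileDetails:
    queueType: HTB
    priority: 1
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - '*'
    isolationEnabled: false
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - '*'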

Slice Configuration using Istio Gateway

A slice can be configured to use Istio ingress and egress gateways for East-West traffic (inter-cluster traffic that egresses from one cluster and ingresses into another). Gateways operate at the edges of the clusters. The ingress gateway acts as the entry point and the egress gateway acts as the exit point for East-West traffic on a slice.

info

The ingress/egress gateway is not a core component of KubeSlice; it is an add-on feature that users can activate if needed. Currently, Istio gateways are the only type of external gateways supported.

A slice can be configured to route application traffic in different ways. The following scenarios show how to configure a slice with and without egress and ingress gateways.

Scenario 1: Slice Configuration only with Egress Gateways

Create the slice configuration file with Istio egress gateway using the following template.

info

If you want to add a QoS profile configuration for multiple slices, create a standard QoS profile. Add the name of the external QoS profile as the value of standardQosProfileName in the slice configuration YAML file.

In a slice configuration YAML file, the standardQosProfileName parameter and the qosProfileDetails object are mutually exclusive.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: <slice-name>
  namespace: kubeslice-<projectname>
spec:
  sliceSubnet: <slice-subnet> #Ex: 10.1.0.0/16
  maxClusters: <2 - 32> #Ex: 5. By default, the maxClusters value is set to 16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - <registered-cluster-name-1>
    - <registered-cluster-name-2>
  qosProfileDetails:
    queueType: HTB
    priority: 1 #keep integer values from 0 to 3
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - '*'
    isolationEnabled: false #make this true in case you want to enable isolation
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - '*'
  externalGatewayConfig:
    - ingress:
        enabled: false
      egress:
        enabled: true
      nsIngress:
        enabled: false
      gatewayType: istio
      clusters:
        - <cluster-name-1>
    - ingress:
        enabled: false
      egress:
        enabled: false
      nsIngress:
        enabled: false
      gatewayType: istio
      clusters:
        - <cluster-name-2>

Scenario 2: Slice Configuration only with Ingress Gateways

Create the slice configuration file with Istio ingress gateways using the following template.

info

If you want to add a QoS profile configuration for multiple slices, create a standard QoS profile. Add the name of the external QoS profile as the value of standardQosProfileName in the slice configuration YAML file.

In a slice configuration YAML file, the standardQosProfileName parameter and the qosProfileDetails object are mutually exclusive.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: <slice-name>
  namespace: kubeslice-<projectname>
spec:
  sliceSubnet: <slice-subnet> #Ex: 10.1.0.0/16
  maxClusters: <2 - 32> #Ex: 5. By default, the maxClusters value is set to 16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - <registered-cluster-name-1>
    - <registered-cluster-name-2>
  qosProfileDetails:
    queueType: HTB
    priority: 1 #keep integer values from 0 to 3
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - '*'
    isolationEnabled: false #make this true in case you want to enable isolation
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - '*'
  externalGatewayConfig:
    - ingress:
        enabled: false
      egress:
        enabled: false
      nsIngress:
        enabled: false
      gatewayType: istio
      clusters:
        - <cluster-name-1>
    - ingress:
        enabled: true
      egress:
        enabled: false
      nsIngress:
        enabled: false
      gatewayType: istio
      clusters:
        - <cluster-name-2>

Scenario 3: Slice Configuration with Egress and Ingress Gateways

Create the slice configuration file with Istio ingress and egress gateways using the following template.

info

If you want to add a QoS profile configuration for multiple slices, create a standard QoS profile. Add the name of the external QoS profile as the value of standardQosProfileName in the slice configuration YAML file.

In a slice configuration YAML file, the standardQosProfileName parameter and the qosProfileDetails object are mutually exclusive.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: <slice-name>
  namespace: kubeslice-<projectname>
spec:
  sliceSubnet: <slice-subnet>
  maxClusters: <2 - 32> #Ex: 5. By default, the maxClusters value is set to 16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - <registered-cluster-name-1>
    - <registered-cluster-name-2>
  qosProfileDetails:
    queueType: HTB
    priority: <qos_priority> #keep integer values from 0 to 3
    tcType: BANDWIDTH_CONTROL
    bandwidthCeilingKbps: 5120
    bandwidthGuaranteedKbps: 2560
    dscpClass: AF11
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf
        clusters:
          - '*'
    isolationEnabled: false #make this true in case you want to enable isolation
    allowedNamespaces:
      - namespace: kube-system
        clusters:
          - '*'
  externalGatewayConfig: #enable the required gateways on the desired clusters
    - ingress:
        enabled: false
      egress:
        enabled: true
      gatewayType: istio
      clusters:
        - <cluster-name-1>
    - ingress:
        enabled: true
      egress:
        enabled: false
      gatewayType: istio
      clusters:
        - <cluster-name-2>
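
With this configuration, traffic leaves <cluster-name-1> through its Istio egress gateway and enters <cluster-name-2> through its Istio ingress gateway. After applying such a slice, one quick sanity check (a generic Istio check, not a KubeSlice-specific command) is to confirm the Istio deployments are healthy on each worker cluster:

# Confirm the Istio control plane and gateway deployments are running
kubectl get deployments -n istio-system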

Apply the Slice Configuration

The following information is required.

Variable                  Description
<cluster name>            The name of the cluster.
<slice configuration>     The name of the slice configuration file.
<project namespace>       The project namespace on which you apply the slice configuration file.

Perform these steps:

  1. Switch context to the KubeSlice Controller using the following command (a plain kubectl alternative is shown after these steps):

    kubectx <cluster name>
  2. Apply the YAML on the project namespace using the following command:

    kubectl apply -f <slice configuration>.yaml -n <project namespace>
    success

    You have successfully created the slice on the worker clusters.
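
If kubectx is not installed, plain kubectl can switch the context as well, assuming the controller cluster's kubeconfig context name matches the cluster name:

# Equivalent to kubectx <cluster name>
kubectl config use-context <cluster name>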

Create a Standard QoS Profile

The slice configuration file contains a QoS profile object. To apply a QoS profile to multiple slices, you can create a separate QoS profile YAML file and reference it in each slice configuration.

Create a Standard QoS Profile YAML File

Use the following template to create a standard sliceqosconfig file.

info

To understand more about the configuration parameters, see Standard QoS Profile Parameters.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceQoSConfig
metadata:
  name: profile1
spec:
  queueType: HTB
  priority: 1
  tcType: BANDWIDTH_CONTROL
  bandwidthCeilingKbps: 5120
  bandwidthGuaranteedKbps: 2562
  dscpClass: AF11

Apply the Standard QoS Profile YAML File

Apply the slice-qos-config file using the following command.

kubectl apply -f <full path of slice-qos-config.yaml> -n project-namespace
info

If the file is in your current directory, the filename alone is sufficient:

kubectl apply -f slice-qos-config.yaml -n project-namespace

Validate the Standard QoS Profile

To validate the standard QoS profile that you created, use the following command:

kubectl get sliceqosconfigs.controller.kubeslice.io -n project-namespace

Expected Output

NAME       AGE
profile1   33s

After applying the slice-qos-config.yaml file, reference the profile in a slice configuration by setting the standardQosProfileName parameter to the profile name, as illustrated in the following examples.

info

In a slice configuration YAML file, the standardQosProfileName parameter and the qosProfileDetails object are mutually exclusive.

Example of using the standard QoS Profile without Istio

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: red
spec:
  sliceSubnet: 10.1.0.0/16
  maxClusters: <2 - 32> #Ex: 5. By default, the maxClusters value is set to 16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - cluster-1
    - cluster-2
  standardQosProfileName: profile1

Example of using the standard QoS Profile with Istio

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: red
spec:
  sliceSubnet: 10.1.0.0/16
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - cluster-1
    - cluster-2
  standardQosProfileName: profile1
  externalGatewayConfig:
    - ingress:
        enabled: false
      egress:
        enabled: false
      nsIngress:
        enabled: false
      gatewayType: none
      clusters:
        - "*"
    - ingress:
        enabled: true
      egress:
        enabled: true
      nsIngress:
        enabled: true
      gatewayType: istio
      clusters:
        - cluster-2

Validate the Installation

Validate the slice configuration on the KubeSlice Controller and the worker clusters.

Validate the Slice on the Controller Cluster

Perform these steps:

  1. Validate the slice configuration in the KubeSlice controller, using the following command:

    kubectl get workersliceconfig -n kubeslice-<projectname>

    Example

    kubectl get workersliceconfig -n kubeslice-avesha

    Example Output

    NAME                       AGE
    red-dev-worker-cluster-1   45s
    red-dev-worker-cluster-2   45s
  2. Validate the slice gateway in the KubeSlice Controller, using the following command:

    kubectl get workerslicegateway -n kubeslice-<projectname>

    Example

    kubectl get workerslicegateway -n kubeslice-avesha

    Example Output

    NAME                                            AGE
    red-dev-worker-cluster-1-dev-worker-cluster-2   45s
    red-dev-worker-cluster-2-dev-worker-cluster-1   45s

Validate the Slice on the Worker Clusters

Perform these steps:

  1. Validate the slice creation on worker clusters, using the following command on each worker cluster:

    kubectl get slice -n kubeslice-system

    Example Output

    NAME   AGE
    red    45s
  2. Validate the slice gateway creation on your worker cluster, using the following command:

    kubectl get slicegw -n kubeslice-system

    Example Output

    NAME                                            SUBNET        REMOTE SUBNET   REMOTE CLUSTER         GW STATUS
    red-dev-worker-cluster-1-dev-worker-cluster-2   10.1.1.0/24   10.1.2.0/24     dev-worker-cluster-2
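
To inspect a slice gateway in more detail, for example while GW STATUS is still empty because the tunnel is coming up, you can describe the object with standard kubectl (the gateway name comes from the output above):

# Show status details and events for a specific slice gateway
kubectl describe slicegw red-dev-worker-cluster-1-dev-worker-cluster-2 -n kubeslice-system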