Version: 0.4.0

Install KubeSlice

There are five main steps to installing KubeSlice:

  • Installing the KubeSlice Controller
  • Creating a project
  • Registering the worker clusters
  • Creating a slice
  • Onboarding namespaces onto the slices

This topic describes the steps to install KubeSlice in your existing multi-cluster configuration.

Install KubeSlice in a Multi-Cluster Configuration

kubeslice-cli is a tool that simplifies installing KubeSlice into your existing clusters. It installs KubeSlice using a description of those clusters that you provide in a custom topology YAML file. Depending on your requirements, the KubeSlice Controller and worker components can be installed either incrementally (one cluster at a time) or across all the clusters in your topology at once.

For demonstration purposes, to set up a three-cluster topology of kind clusters in your local environment, use the --profile <minimal-demo | full-demo> option instead of the --config option.
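For example, the following commands run the demo installation. This is a sketch of typical usage; verify the exact profile behavior for your version with kubeslice-cli install --help.

```shell
# Minimal demo: creates three kind clusters and installs the
# KubeSlice Controller and worker components into them.
kubeslice-cli install --profile=minimal-demo

# Full demo: the same setup with a more complete demo configuration.
kubeslice-cli install --profile=full-demo
```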

Install the KubeSlice Controller and Worker Clusters

You must create a topology configuration file that includes the names of the clusters, the project name, and the cluster contexts that host the KubeSlice Controller and the worker clusters. For more information, see sample topology configuration file.
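The following is a minimal sketch of such a topology file. The cluster names, context names, and project name are placeholder values you must replace with your own; verify the schema against the sample topology configuration file for your version.

```yaml
configuration:
  cluster_configuration:
    kube_config_path: <PATH-TO-KUBECONFIG>
    controller:
      name: controller
      context_name: kind-controller
    workers:
      - name: worker-1
        context_name: kind-worker-1
      - name: worker-2
        context_name: kind-worker-2
  kubeslice_configuration:
    project_name: demo
```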

Use the following command to install the controller and worker cluster:

kubeslice-cli --config=<topology-configuration-file> install

The above command installs the KubeSlice Controller, creates a project, and registers the worker clusters with the project by installing the Slice Operator on each worker cluster.

Register a Worker Cluster

The kubeslice-cli allows you to add a new worker cluster to an existing KubeSlice configuration.

Use the following template to add a new worker cluster.

configuration:
  cluster_configuration:
    kube_config_path: <kubeconfig-file>
    controller:
      name: <controller-cluster-name>
      context_name: <controller-cluster-context>
    workers:
      - name: <new-worker-cluster-name>
        context_name: <new-worker-cluster-context>
  kubeslice_configuration:
    project_name: <project-namespace>

The following is an example topology file for registering a new worker cluster.

configuration:
  cluster_configuration:
    kube_config_path: <PATH-TO-KUBECONFIG>
    controller:
      name: controller
      context_name: kind-controller
    workers:
      - name: worker-3
        context_name: ks-w-3
  kubeslice_configuration:
    project_name: demo

If the KubeSlice Controller is already installed, the -s controller option skips installing it. The -s kind option skips creating a new kind cluster, the -s enterprise option skips the enterprise installation, and the -s demo option skips creating a demo setup.

In a demo setup, use the following command to register a new worker cluster with the KubeSlice Controller:

kubeslice-cli install --config=<new-worker-topology-yaml> -s kind -s controller -s enterprise -s demo

In an existing multi-cluster configuration, use the following command to register a new worker cluster with the KubeSlice Controller:

kubeslice-cli install --config=<new-worker-topology-yaml> -s controller

Onboard Namespaces

To onboard your existing namespaces (and their applications) onto a slice, follow these steps:

  1. Create a slice configuration YAML file (choose the namespaces, clusters, and so on to be part of the slice).
  2. Use the kubeslice-cli create sliceConfig command to apply the slice configuration YAML file.

Create a Slice

Use the following template to create a slice configuration YAML file.


To understand more about the configuration parameters, see Slice Configuration Parameters.

apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: <slice-name> # The name of the slice
spec:
  sliceSubnet: <slice-subnet> # The slice subnet
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  sliceIpamType: Local
  clusters:
    - <worker-cluster-name1> # The name of your worker cluster1
    - <worker-cluster-name2> # The name of your worker cluster2
  qosProfileDetails:
    queueType: HTB
    priority: 0
    bandwidthCeilingKbps: 30000
    bandwidthGuaranteedKbps: 20000
    dscpClass: AF11
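To onboard namespaces onto the slice, the slice configuration also accepts a namespace isolation profile under the slice spec. The following is a minimal sketch; the iperf namespace and the wildcard cluster selector are illustrative values, so verify the field names against the Slice Configuration Parameters for your version.

```yaml
  namespaceIsolationProfile:
    applicationNamespaces:
      - namespace: iperf # an existing namespace to onboard (illustrative)
        clusters:
          - '*' # onboard this namespace on all worker clusters in the slice
```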

Apply the Slice Configuration YAML file


The kubeslice-cli create sliceConfig command returns after the configuration is applied. However, in each cluster, the relevant pods for controlling and managing the slice may still be starting. Wait for the slice to finish initializing before deploying services to it.
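One way to check readiness, assuming the default installation namespaces, is to watch the KubeSlice pods on each worker cluster until they are all Running (kubeslice-system is the namespace the worker components install into; the context name is illustrative):

```shell
# Watch the slice components on a worker cluster until all pods are Running
kubectl --context=kind-worker-1 get pods -n kubeslice-system --watch
```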

To apply the slice configuration, use the following command:

kubeslice-cli create sliceConfig -n <project-namespace> -f <slice-configuration-yaml>


Example

kubeslice-cli create sliceConfig -n kubeslice-demo -f slice-config.yaml

Example Output

🏃 Running command: /usr/local/bin/kubectl apply -f slice-config.yaml -n kubeslice-demo created

Successfully Applied Slice Configuration.

Deploy the Application


If the application is already deployed on a namespace that is onboarded to a slice, redeploy the application.
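For example, assuming a Deployment named iperf-server in the iperf namespace (illustrative names), a rollout restart re-creates the pods after the namespace is onboarded:

```shell
# Re-create the application pods on the onboarded namespace
kubectl --context=kind-worker-1 -n iperf rollout restart deployment/iperf-server
```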

Create a Service Export

To create a service export, use the following command:

kubeslice-cli create serviceExportConfig -f <service-export-yaml> -n <application-namespace>
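The service export YAML describes the service to be exported over the slice. The following sketch mirrors the fields shown in the describe output later in this topic; the apiVersion and the exact field names are assumptions, so verify them against the CRDs installed in your cluster.

```yaml
apiVersion: controller.kubeslice.io/v1alpha1 # assumption; verify against your installed CRDs
kind: ServiceExportConfig
metadata:
  name: iperf-server-iperf-kind-ks-w-1
spec:
  sliceName: slice-red
  sourceCluster: kind-ks-w-1
  serviceName: iperf-server
  serviceNamespace: iperf
  serviceDiscoveryPorts:
    - name: tcp
      port: 5201
      protocol: TCP
```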

Validate the Service Export

When an application service runs on one of the worker clusters that are onboarded to a slice, the worker generates a ServiceExport for the application and propagates it to the KubeSlice Controller.

To verify the service export on the controller cluster, use the following command:

kubeslice-cli get serviceExportConfig -n <project-namespace>


Example

kubeslice-cli get serviceExportConfig -n kubeslice-demo

Example Output

Fetching KubeSlice serviceExportConfig...
🏃 Running command: /usr/local/bin/kubectl get -n kubeslice-demo
NAME                             AGE
iperf-server-iperf-kind-ks-w-1   43s

To view the details of the service export configuration, use the following command:

kubeslice-cli describe serviceExportConfig <resource-name> -n <project-namespace>


Example

kubeslice-cli describe serviceExportConfig iperf-server-iperf-kind-ks-w-1 -n kubeslice-demo

The following output shows that the ServiceExportConfig for the iperf-server application is present on the controller cluster.

Describe KubeSlice serviceExportConfig...
🏃 Running command: /usr/local/bin/kubectl describe iperf-server-iperf-kind-ks-w-1 -n kubeslice-demo
Name:         iperf-server-iperf-kind-ks-w-1
Namespace:    kubeslice-demo
Labels:       original-slice-name=slice-red
Annotations:  <none>
API Version:
Kind:         ServiceExportConfig
Spec:
  Service Discovery Ports:
    Name:      tcp
    Port:      5201
    Protocol:  TCP
  Service Name:       iperf-server
  Service Namespace:  iperf
  Slice Name:         slice-red
  Source Cluster:     kind-ks-w-1

Modify the Service Discovery Configuration

kubeslice-cli enables you to modify the service discovery parameters. For example, to change the port on which the service runs, edit the value and save; this updates the ServiceExportConfig, which is then propagated again to all the worker clusters.

To edit the service export configuration, use the following command:

kubeslice-cli edit serviceExportConfig <resource-name> -n <project-namespace>


Example

kubeslice-cli edit serviceExportConfig iperf-server-iperf-kind-ks-w-1 -n kubeslice-demo

Example Output

Editing KubeSlice serviceExportConfig...
🏃 Running command: /usr/local/bin/kubectl edit iperf-server-iperf-kind-ks-w-1 -n kubeslice-demo