
· 4 min read

The KubeSlice project recently hosted the second episode of its Office Hours series on Thursday, 25th August 2022. This project, which was just open sourced by Avesha, brings together the entire community every other Thursday to learn more about different facets of KubeSlice. Visit our YouTube channel to watch recordings of earlier events, including our very first office hours.

In this episode, Prabhu Navali, Director of Engineering at Avesha, covered the configuration of KubeSlice and how it can be used to provide multi-tenancy in a multi-cluster Kubernetes environment.

KubeSlice is built on the concept of dividing a cluster so that different teams inside an organization can use a dedicated set of resources, and it uses pre-existing Kubernetes primitives to define tenancy. With the aid of an overlay network, on which you can specify namespaces and applications, it becomes possible to construct a wider workspace over many clusters, so that applications can easily span between them.

In addition to the above, KubeSlice also addresses the following challenges while providing multi-tenancy in a multi-cluster Kubernetes environment:

  • How can clusters that are dispersed across numerous cloud providers, geographies, and edges be connected?
  • How can the same multi-tenancy be maintained throughout these clusters?

To learn more about how it achieves this, refer to the KubeSlice documentation.
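To make the tenancy model concrete, here is a minimal sketch of what a slice definition looks like, loosely based on the SliceConfig resource from the KubeSlice documentation. The API version, field names, and values below are assumptions drawn from the docs, so treat this as illustrative rather than authoritative:

```shell
# Create a slice spanning two worker clusters and onboard the
# book-info namespace onto it (all names and values are illustrative).
kubectl apply -f - <<EOF
apiVersion: controller.kubeslice.io/v1alpha1
kind: SliceConfig
metadata:
  name: demo-slice
  namespace: kubeslice-demo     # the project namespace on the controller
spec:
  sliceSubnet: 10.1.0.0/16      # overlay subnet shared across clusters
  sliceType: Application
  sliceGatewayProvider:
    sliceGatewayType: OpenVPN
    sliceCaType: Local
  clusters:
    - worker-1
    - worker-2
  namespaceIsolationProfile:
    isolationEnabled: true      # isolate slice namespaces from the rest of the cluster
    applicationNamespaces:
      - namespace: book-info
        clusters:
          - '*'
EOF
```

Applied on the controller cluster, a definition like this gives each team its own overlay subnet, member clusters, and onboarded namespaces.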

With this foundation laid, Prabhu demonstrated the inner workings of KubeSlice with the help of two use cases.

If you want to follow along with the demo, make sure you have the prerequisites installed first.

For his live demo, Prabhu used the kind cluster bash automation script available in the GitHub repo, although you can also set up the clusters and install the slice manually. The script stands up the Kubernetes clusters using kind and installs the necessary KubeSlice components.

After installing the prerequisites, you can follow along by cloning the repository locally and running the kind.sh script. To let you test the multi-cluster functionality, the script also deploys an iPerf client-server application. Alternatively, Terraform can be used to install kind and Kubernetes on AWS EC2 instances.
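For reference, a minimal sketch of that flow; the repository location is an assumption (the book-info config lives in kubeslice/examples, so kind.sh is assumed to be there too), so adjust the path to wherever the script lives in the repo you clone:

```shell
# Clone the repository containing the automation script
# (location assumed; use the repo linked from the KubeSlice docs).
git clone https://github.com/kubeslice/examples.git
cd examples

# Create the kind clusters, install the KubeSlice components,
# and deploy the iPerf client-server test application.
bash kind.sh
```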

P.S. We are in the process of optimizing and simplifying these for a better user experience. If this is something you are interested in helping with, do hop on to the #kubeslice channel on the Kubernetes Slack.

After testing the connectivity using iPerf, Prabhu deployed a simple web application called book-info using a YAML config file. This application displays information about a book, such as the author, year of publication, reviews, and ratings, in the browser. The config file used to deploy the application is available in the kubeslice/examples repository on GitHub.

The product page is deployed in worker cluster-1, while the book-info details, reviews, and ratings services are deployed in worker cluster-2 and made available over the slice through service exports. A NodePort service exposes the product page in worker cluster-1. Since the kind.sh script creates a book-info namespace in each cluster, the application is automatically onboarded onto the slice once the YAML file is deployed (see the sketch below). Once onboarded, it can be accessed across the slice while remaining isolated from the other namespaces within each cluster, solving the noisy/nosey neighbor problem. On top of this, granular quotas can be set to keep any particular application from hogging resources, ensuring an equitable distribution of resources in a multi-cluster, multi-tenant setup.
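A rough sketch of what following along looks like once the clusters are up; the kubeconfig context names and manifest file names below are assumptions, so substitute the values from your own setup and the kubeslice/examples repository:

```shell
# Manifest names and contexts are illustrative; use the files from
# the kubeslice/examples repo and your own kubeconfig contexts.

# Product page into worker cluster-1 (namespace created by kind.sh):
kubectl --context kind-worker-1 apply -n book-info -f productpage.yaml

# Details, reviews, and ratings into worker cluster-2:
kubectl --context kind-worker-2 apply -n book-info -f details-reviews-ratings.yaml

# Look up the NodePort exposing the product page on worker cluster-1,
# then open http://<worker-1-node-ip>:<nodeport>/ in a browser.
kubectl --context kind-worker-1 get svc -n book-info
```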

If you face any issues while following along, or want to know more about the project, we encourage you to raise questions in the #kubeslice channel on the Kubernetes Slack, where we hang out. Additionally, the Office Hours are intended to be as interactive as possible, and we’d love for you to join us at the next one, to be held on 8th September 2022. To receive an invite for the next one in your inbox and stay updated with the latest goings-on in the project, join our Google Group.

Until next time!

· 6 min read

The KubeSlice project recently hosted the third episode of its Office Hours series on Thursday, 8th September 2022.

In this episode, Eric Peterson, Vice President of Engineering at Avesha, walked us through the roadmap, introduced multi-cluster connectivity, one of the basic problems KubeSlice is attempting to address, and demonstrated how inter-cluster connectivity and Kubernetes' support for multiple tenants work.

With KubeSlice, a slice can span two or more clusters. Within each cluster, you allocate a set of resources and a set of namespaces, and together these form what we ultimately call a slice. A slice can also work effectively within a single cluster, so even if you have just one cluster, you can still divide its resources into slices and later expand a slice over several clusters. One of the first things KubeSlice does is install an overlay network. Deploying applications on the overlay helps you avoid the address conflicts that arise because different clouds have different addressing policies: if the same addresses are used in every cluster, interconnecting them causes address overlap issues. KubeSlice addresses this by putting the overlay in place and enabling you to communicate on top of it. All applications that are part of the same slice are reachable, and specific services can be exported from one cluster to other clusters. From the application's perspective, an exported service is just any other service: the program contacts it using its DNS entry, and traffic is simply routed, through whichever inter-cluster policy you specify, into the remote cluster where that particular service is offered.
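As a sketch of what exporting a service looks like, based on the ServiceExport resource described in the KubeSlice documentation; the API version, slice name, labels, and ports here are assumptions, so consult the docs for the exact schema:

```shell
# Export the "reviews" service onto the slice so other clusters can
# reach it by its slice DNS name (all values are illustrative).
kubectl apply -f - <<EOF
apiVersion: networking.kubeslice.io/v1beta1
kind: ServiceExport
metadata:
  name: reviews
  namespace: book-info
spec:
  slice: demo-slice
  selector:
    matchLabels:
      app: reviews
  ports:
    - name: http
      containerPort: 9080
      protocol: TCP
EOF
```

Consumers on the other clusters then resolve the service through a slice-scoped DNS name (of the form reviews.book-info.demo-slice.slice.local per the KubeSlice docs), and the overlay routes the traffic to the cluster where the service actually runs.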

Additionally, there are other options in Kubernetes for multi-cluster communication. There are a few different ways to accomplish it, but essentially traffic has to leave the cluster, traverse external firewalls and API gateways, and re-enter the remote cluster; this is referred to as the north-south path, and it adds complication precisely because of those firewalls and API gateways. With KubeSlice, we are attempting to make that part simpler from the standpoint of the application, for both the operator and the application developer. It also lowers costs, because traffic is sent over the overlay network without the need for any firewalls. You can additionally deploy mTLS, which helps ensure that applications extended across multiple clusters are still actually who you think they are.

Observability is one of the most important pieces: you should be able to monitor application performance and know where traffic is going. For example, it could be a good idea to install some applications at the edge so they can provide a lower-latency version of their service to end consumers, and better awareness of where traffic is going helps you decide where to place your applications; this is a crucial component of observability. Multi-tenancy here means the capacity to carve up a cluster's resources and assign them to various organizations or teams.

Contribution Roadmap

In the KubeSlice roadmap, we've put a number of concepts in place, along with some initiatives you can pick up and carry forward, and there are some reasonable ways to get familiar with KubeSlice. There is a kind.sh file in the repository that produces a topology of three kind clusters: one serves as the KubeSlice multi-cluster controller, and the other two are set up as KubeSlice worker clusters. A book-info example is also now available under the examples, and it is open source.
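If you've run the script, here is a quick way to verify the topology; the cluster names below are assumptions, so use whatever names kind.sh actually creates:

```shell
# List the clusters the script created (names are illustrative,
# e.g. a controller plus two workers).
kind get clusters

# kind generates a kubeconfig context per cluster; switch between
# them to inspect the controller and worker components.
kubectl config get-contexts
kubectl --context kind-controller get pods --all-namespaces
```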

Istio in KubeSlice

Istio is a service mesh. Deployed in a single cluster, it takes all of the services you want to relate together and provides the infrastructure on top for them to communicate with each other. It also provides observability features for monitoring which pod is speaking to which other pod and how much traffic is passing between them, making it very simple to understand what is happening. In addition, it offers certificate-management capabilities that ensure, when an application has been broken into many microservices and distributed across various nodes in a cluster, or across many clusters, that you are still talking to the applications you think you are talking to, preserving that level of service and security. This security is maintained via mTLS.
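As an illustration of how mesh-wide mTLS is enforced in Istio (this is standard Istio configuration rather than anything KubeSlice-specific, shown here only as a sketch):

```shell
# A mesh-wide PeerAuthentication in Istio's root namespace requires
# mTLS for all workload-to-workload traffic in the mesh.
kubectl apply -f - <<EOF
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: istio-system
spec:
  mtls:
    mode: STRICT
EOF
```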

You can also connect various clouds, such as Google, Amazon, Microsoft, Oracle, and others, down to edge locations or private data centers, by using KubeSlice in any Kubernetes configuration.

Lastly, there is our CI/CD pipeline, which we track internally for our infrastructure. Now that we've open-sourced the product, we want contributors from the community, whether they add something to the examples or to the open-source software itself, such as the controller, to be aware of the appropriate procedures and to be able to develop and test their changes properly, so they can be sure that what they're doing isn't harmful or breaking anything. And once a change passes, they can also contribute a new test to confirm the new functionality.

You can jump in via the KubeSlice Roadmap and feel free to take up any issue; if you have questions about something or need any guidance and support, you can reach out on the #kubeslice channel on the Kubernetes Slack, where we hang out. Additionally, the Office Hours are intended to be as interactive as possible, and we’d love for you to join us at the next one, to be held on 22nd September 2022. To receive an invite for the next one in your inbox and stay updated with the latest goings-on in the project, join our Google Group. Visit our YouTube channel to watch recordings of earlier events, including our very first office hours.

Until next time!