Isolation by Design: A Peek Into the Future of CIAM Using Kubernetes
Embracing best practices in technology, even from a project’s inception, doesn’t have to be difficult. There are numerous – arguably too many – open-source tools contributing to software efficiency, security and simplification. Engineering teams can produce immensely resilient, uncompromisingly secure applications by utilizing a mere few of these tools.
At Strivacity, we are doing just that – embracing and supporting these sophisticated tools to solve our customers’ Customer Identity and Access Management (CIAM) challenges in an incredibly secure way. Additionally, we believe in transparency, and continuously share how we help brands break down barriers associated with CIAM cloud adoption.
So, what are some of these sophisticated tools, and what do they offer? Let’s start with one that, although it has been around for years, is rapidly growing in popularity and becoming ubiquitous in software development and site reliability engineering. When paired with your cloud provider, this system can provide an array of options to secure your application. I am referring to Kubernetes!
It’s no secret Kubernetes is the go-to container orchestration platform, despite some of us initially preferring other tools. With its self-healing, auto-provisioning and auto-scaling features, it’s your operations and site reliability engineering (SRE) teams’ best friend. Additionally, its flexibility, broad support features, and usability across operating systems and software – such as Docker Desktop, Minikube, MicroK8s, kops and Skaffold – make it your development team’s most reliable sidekick. But most importantly, it can be your CISO’s cozy, warm security blanket.
What are some things you can do to lock down your application within your Kubernetes cluster in the cloud? Depending on your application, company and paranoia levels, you may be required to use a combination of the following solutions – at Strivacity, they’re all factored into our offering.
Host in a Private Cloud
Running Kubernetes for an installation in a dedicated cloud or an isolated virtual private cloud (VPC) brings the probability that an attack on another installation or cluster compromises yours to near zero. This is one of the most secure options, since all resources are dedicated to your brand.
Dedicated Instance Tenancy in the Cloud
Virtual machines (VMs) run on physical hardware, and that hardware is likely hosting VMs owned by you as well as by others. If this is a concern, you can run on dedicated hardware for an additional cost from the cloud provider.
Shared or Dedicated Nodes/VMs for Applications in Kubernetes
By default, a cluster's scheduler places applications wherever resources are available – efficient behavior you'd expect from any container orchestration platform. However, this can place different brands’ applications on the same underlying nodes. While this is typically OK – especially if encryption is used – we can restrict brand-specific applications to brand-specific nodes by defining a Kubernetes affinity. There are several types of affinity, some stricter than others, but a typical example is a node affinity that requires applications to be scheduled onto nodes carrying specific labels.
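As a sketch of what that looks like – the brand label key and values here are hypothetical, chosen for illustration – a required node affinity in a Deployment could be written as:

```yaml
# Hypothetical Deployment fragment: pods for the "acme" brand may only
# be scheduled onto nodes carrying the label brand=acme.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: acme-app
spec:
  replicas: 2
  selector:
    matchLabels:
      app: acme-app
  template:
    metadata:
      labels:
        app: acme-app
    spec:
      affinity:
        nodeAffinity:
          # "required" makes this a hard rule; the "preferred" variant
          # would express a softer scheduling hint instead.
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: brand
                    operator: In
                    values:
                      - acme
      containers:
        - name: app
          image: acme/app:1.0
```

With this in place, the scheduler will leave these pods pending rather than place them on a node without the matching label.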
Network Policies in Kubernetes
This one is pretty simple. In Kubernetes, a namespace is the next layer of isolation after the cluster itself. Namespaces are most commonly used as a logical layer of separation without any real isolation, but by defining network policies scoped to a namespace, you can limit application communication to within that namespace – while other namespaced mechanisms, such as RBAC rules and resource quotas, restrict API access and constrain resource usage in the same per-namespace fashion.
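As a minimal sketch – assuming a hypothetical "acme" namespace – a network policy that keeps traffic inside the namespace could look like this:

```yaml
# Hypothetical policy: pods in the "acme" namespace accept ingress
# traffic only from other pods in the same namespace.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-same-namespace-only
  namespace: acme
spec:
  podSelector: {}          # empty selector: applies to every pod in the namespace
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # allow any pod in this same namespace
```

Note that network policies are enforced by the cluster's network plugin, so the cluster must run a plugin that supports them for this to take effect.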
Node Authorization, Restriction and Labeling in Kubernetes
When describing dedicated nodes above, we briefly discussed Kubernetes affinities and how you can use a node’s labels to schedule specific applications. That wouldn’t be overly useful if processes you do not control could change those labels. This is where Node authorization and the NodeRestriction admission plugin play a critical role: together they prevent unauthorized API access and ensure a compromised node cannot use its kubelet credentials to maliciously relabel its own node object in order to attract pods.
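As a sketch, both features are enabled through flags on the API server (shown here in isolation, without the rest of a real kube-apiserver invocation):

```shell
# Enable the Node authorizer alongside RBAC, and the NodeRestriction
# admission plugin, when starting the API server.
kube-apiserver \
  --authorization-mode=Node,RBAC \
  --enable-admission-plugins=NodeRestriction
```

With NodeRestriction enabled, kubelets cannot add, modify or remove node labels under the reserved node-restriction.kubernetes.io/ prefix, which makes that prefix a good home for the scheduling-critical labels used by the affinities above.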
Taints and Tolerations in Kubernetes
Taints approach node dedication from the opposite direction: instead of applications choosing nodes, a "taint" placed on a node repels applications from being scheduled there unless they explicitly "tolerate" the "taint."
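As an illustrative sketch – the node name, key and value are hypothetical – a node can be tainted with kubectl, and a brand's pods given a matching toleration in their pod spec:

```yaml
# First, taint the node so only tolerating pods may be scheduled there:
#   kubectl taint nodes node-1 brand=acme:NoSchedule
#
# Then add a matching toleration to the pod spec:
tolerations:
  - key: "brand"
    operator: "Equal"
    value: "acme"
    effect: "NoSchedule"
```

A toleration allows scheduling onto the tainted node but doesn't require it, which is why taints are commonly paired with the affinities described earlier to both reserve nodes for a brand and keep that brand's pods on them.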
Node Selection in Kubernetes
And finally, the most concise of these constraints, the node selector, lets applications "select" the nodes that will host them by listing labels a node must carry.
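A minimal sketch of a pod spec fragment using a hypothetical brand label:

```yaml
# Hypothetical pod spec fragment: schedule only onto nodes
# labeled brand=acme.
spec:
  nodeSelector:
    brand: acme
  containers:
    - name: app
      image: acme/app:1.0
```

Despite its simplicity, a node selector is still a hard requirement: if no node carries the matching label, the pod stays unscheduled.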
At this point, with the above solutions, we’ve taken a solid first step toward isolating and securing applications by simply using features built into Kubernetes running in the cloud. Implement every solution above in your cluster, save for hosting in a private cloud, and you have a multi-tenant cluster with very little overhead and cost. Congratulations!
Kubernetes – hard tenancy models in particular – is constantly evolving and improving, so expect this tool to harden and expand.
In an upcoming post, we’ll discuss mechanisms to secure applications in your newly secured multi-tenant cluster – specifically, how to encrypt data at rest and in transit using a service mesh – as well as best practices around brand-, namespace- and application-specific secret management. Furthermore, we’ll talk about what is still shared within a cluster, even after implementing these security features.