Infrastructure: Planning for Change

Eric Lordahl
  • Oct 16, 2020
  • 8 min read

As infrastructure improves and keeps pace with the natural evolution of available technology, the best ways to run applications change. Finding the best way for your product can be sticky, with varying solutions that may depend on your team, company, or even industry. In this multi-part blog series, I'll discuss how we at Strivacity host our product, how we securely distribute and isolate our services, why a microservice architecture is paramount to the future success of any team, and how it all comes together to provide an ultramodern architecture of secure, versatile, and cost-effective products. It's not too technical, but it does assume a baseline understanding of modern software development fundamentals --- containers, continuous integration, bad Kubernetes jokes, etc. This is part one:

KIAM: Cloud-native CIAM on Kubernetes

Hosting

Let's start with arguably the most important of our infrastructure decisions -- the primary cloud provider to host Strivacity Fusion. We chose Amazon Web Services (AWS). Our teams have a wealth of knowledge and experience using its services successfully at scale in production, it's a global platform that has been widely adopted, and its security services and documentation are thorough and detailed. That said, we've built our platform so that migrating away from AWS, or adding multi-cloud support, would not be terribly difficult. I'm intentionally not saying it would be trivial, because it's not a completely effortless operation. It would largely consist of translating our automation playbooks from one cloud provider to another, and because we carefully select cloud-native components and microservices that run well on Kubernetes, that translation is minimal. That's it!

Now, unless you skipped the last paragraph, it's no secret that we're using Kubernetes, and have been a Kubernetes shop since the beginning. It allows us to be nimble and effective, to efficiently focus on our CIAM solutions rather than operational problems, and to scale scale scale. There are similar platforms, but none that feel as natural and useful to our teams. Containers and microservices gave development and operations teams the boost they needed to get their processes out of the stratosphere, and Kubernetes puts them k-omfortably into a k-ruising trajectory to k-ontinue with rapid, reliable releases (with Kubernetes you must replace all Cs with Ks, right?). In addition to running Fusion in Kubernetes, it is also the home to most of our hosted development tools, like our CI platform. Furthermore, we've separated our application deployment mechanism (think publishing an application to Kubernetes) from our Kubernetes deployment mechanism (provisioning Kubernetes itself). So again, if we want to run on a provider other than AWS, we can.
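To make the separation between "publishing an application" and "provisioning Kubernetes" a bit more concrete, here's a minimal sketch using the official Kubernetes Python client. It isn't our actual tooling, and the service name and image are made up, but it shows the idea: the publish step only talks to whatever cluster your kubeconfig points at, and doesn't care whether that cluster was provisioned on AWS, another cloud, or bare metal.

```python
from kubernetes import client, config

# Talk to whichever cluster the local kubeconfig points at; how that
# cluster was provisioned (EKS, another cloud, bare metal) is irrelevant here.
config.load_kube_config()
apps = client.AppsV1Api()

deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="example-service"),  # hypothetical service
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "example-service"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "example-service"}),
            spec=client.V1PodSpec(
                containers=[
                    client.V1Container(
                        name="example-service",
                        image="registry.example.com/example-service:1.0.0",  # made-up image
                    )
                ]
            ),
        ),
    ),
)

apps.create_namespaced_deployment(namespace="default", body=deployment)
```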

No Feature-lock: Portability and Flexibility

If you're thinking, "Did they develop a single solution capable of running anywhere? In the cloud, in my virtual infrastructure, or on bare metal?" (this is likely NOT news to your engineering team): well, sort of. A big chunk of our solution is portable, with only bits and pieces requiring specific hardware and/or middleware support. We've buckled ourselves into production-grade Kubernetes, and we let it worry about the underlying hosting platform.

And ... we know not everyone will run in the cloud. In fact, I know of production systems (not Strivacity's) running on bare-metal today! That may not be you, or us, but our solutions are portable, and our support options are flexible, at least as much as Kubernetes is, by design.

Want to test a small feature? Want to implement a new service? Is your PM screaming about a "pivot" due to some shift in the market? No problem. Since we're not embalming a decade-old behemoth application but embracing a microservices architecture, there's no need to panic or scramble.

If you weren't paying for a product or service, would the provider rebuild it? And if you knew the answer was 'yes', would you still make the purchase? We're using the best tools for the task at hand, which keeps changes and improvements from having to be glacial. And let's face it, that's a very good thing! How many of us have been through a migration that involves sweeping changes which never quite cross the finish line? Whether there's some legacy service a customer is still paying for (and probably costing more to maintain), or some other team that just can't let go due to the IKEA effect, hanging on can slow you down. Plainly stated, that's not something we're willing to build into our application.

Maturity, Stability, Confidence

OK OK, but if it's versatile and can run anywhere, why focus solely on AWS? Five years ago, there may have been hesitation to jump into what was seemingly an ocean of promise, accompanied by the unknowns inherent to such depth. Teams might have consciously remained on their own hardware to stifle the uncertainty, but there's been an incredible, ubiquitous shift for a multitude of reasons. I won't go into specifics, but as cloud providers' services have expanded and matured, they're likely to be just as secure as, if not more secure than, any in-house hosting solution. Furthermore, if you consider the added convenience (think Data Sovereignty -- or see below), it's a pretty solid pitch to run past your CISO, especially for smaller teams that don't have the luxury, or budget, of building out dedicated infrastructure farms. AWS provides this level of maturity, and the open-source community is well aware. While many open-source tools and applications support numerous cloud providers, we've found that feature sets specific to AWS are typically rich and, in most cases, considered stable. This is in contrast to support for other cloud providers, which often ships as experimental or alpha/beta features (more on this later).

Data Sovereignty

Flipping back to CIAM for a second: Data Sovereignty is the idea that the storage, processing, and general handling of data must adhere to the laws of the country in which it resides. While it seems pretty straightforward, it wasn't always obvious where your data was stored once it was in the cloud. With Strivacity, you'll know, because you'll choose!
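As a toy illustration of what "you'll choose" can look like in practice (not our actual provisioning code), here's a sketch using boto3 that pins a storage bucket to an explicitly chosen AWS region; the bucket name and region are placeholders:

```python
import boto3

# The region is an explicit, per-customer choice rather than an accident of hosting.
region = "eu-central-1"  # placeholder: whichever jurisdiction the brand requires

s3 = boto3.client("s3", region_name=region)
s3.create_bucket(
    Bucket="example-brand-user-data",  # hypothetical bucket name
    CreateBucketConfiguration={"LocationConstraint": region},
)
```

The point isn't the specific service; it's that data residency becomes a parameter you set deliberately, not something you discover after the fact.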

Isolation & Tenancy

Now that you know how and where our products are hosted, let's talk about how our environments are isolated. It may just be me, but "tenant" has been a point of confusion at times: Is there a single instance of a tenant? Multiple tenants to an instance? Are you multi-tenant? multi-instance? hyper-region? Quasi-huh? Are we talking about another company's software that happens to be running on the same hardware within your cloud provider's infrastructure? Is it the single, monolithic environment providing services to multiple brands? Does the software itself have some other definition of tenancy that it's pushing? Or did you simply miss your monthly lease payment, and your landlord has demoted you from beloved "friend" to dreaded "tenant?"

For us, our product is in line with the multi-instance terminology. Every brand (what we call a customer) using our product has its own dedicated and isolated environment. We took a security-first approach when making this decision, and while we may offer a multi-tenant option in the future, it hasn't been our focus.

To be clear, there is nothing wrong with a multi-tenant solution. If you're using a cloud-based productivity application, or one with less of a focus on security, a model where multiple customers share a single environment, perhaps with dedicated datastores, may be a better fit. In fact, we initially considered a multi-tenant offering as a way of delivering a more cost-effective application to appeal to the individual developer. However, we ultimately chose not to focus on this initially because of some complexities around data security and transfer within the same private cloud (we had a lot to do!).

Our solution gives us the ability to operate and scale easily within the microservice paradigm, without the typical downsides of a multi-instance scheme, so we feel we're achieving the best of both worlds with our Kubernetes-based, multi-instance offering and complete brand isolation.

Overall, each brand's environment has a dedicated Kubernetes cluster, dedicated VMs, applications, and databases --- everything, all safely within its own VPC (AWS's Virtual Private Cloud)! When adhering to modern software development best practices, the overhead that comes with managing "multiple environments" is minimal. In fact, we're transferring terabytes of data each month for the provisioning of environments alone.
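To sketch what "one isolated environment per brand" might look like at the VPC layer, here's an illustrative boto3 snippet (not our provisioning playbooks; the brand name, region, and CIDR are invented) that creates and tags a dedicated VPC for a single brand:

```python
import boto3

def provision_brand_vpc(brand: str, region: str, cidr: str) -> str:
    """Create a dedicated, tagged VPC for one brand's environment (illustrative only)."""
    ec2 = boto3.client("ec2", region_name=region)
    vpc_id = ec2.create_vpc(CidrBlock=cidr)["Vpc"]["VpcId"]
    # Tag the VPC so the brand's cluster, VMs, and databases can all be traced back to it.
    ec2.create_tags(
        Resources=[vpc_id],
        Tags=[{"Key": "Name", "Value": f"{brand}-env"}, {"Key": "brand", "Value": brand}],
    )
    return vpc_id

# Hypothetical usage: one VPC per brand, in the region that brand chose.
provision_brand_vpc("example-brand", "eu-central-1", "10.42.0.0/16")
```

Everything else in that brand's environment (the Kubernetes cluster, node VMs, and databases) then lives inside that VPC, which is what keeps one brand's traffic and data from ever sharing a network with another's.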

Fusion

So that's it. We're deploying modular services to Kubernetes on AWS, using tools that have massive communities of support and by doing so keeping our options, and yours, flexible, reliable, and secure. Many of us have experience managing and scaling variants of our above solutions, and would love to discuss them -- drop us a line, or see for yourself with a free trial at https://strivacity.com/#signup.
