---
title: Multi-Tenancy - Micro Clusters
weight: 5
---

Part of the Multi-Tenancy Con presented by Adobe

## Challenges

* Spin up edge infrastructure globally, and fast

## Implementation

### First Try - Single-Tenant Clusters

* Azure in the base - AWS on the edge
* Single-tenant clusters (simpler governance)
* Responsibility is shared between app and platform teams (monitoring, ingress, etc.)
* Problem: huge manual investment and overprovisioning
* Result: access control for tenant namespaces and capacity planning -> pretty much a multi-tenant cluster with one tenant per cluster

### Second Try - Microclusters

* One cluster per service

### Third Try - Multi-Tenancy

* Use a set of components deployed by the platform team (ingress, CI/CD, monitoring, ...)
* Harmonized general runtime (cloud-agnostic), codenamed Ethos -> over 300 clusters
* Both shared clusters (shared by namespace) and dedicated clusters
* Cluster config is a basic JSON with name, capacity, and teams
* Capacity management gets monitored using Prometheus
* Cluster changes should be non-disruptive -> K8s-Shredder
* Cost efficiency: use good PDBs and liveness/readiness probes alongside resource requests and limits
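
The cluster config mentioned above might look something like the following. This is only a sketch based on the "name, capacity, teams" bullet; the field names and values are assumptions, not Ethos's actual schema:

```json
{
  "name": "ethos-prod-eu-west-1",
  "capacity": {
    "nodes": 12,
    "cpu": "96",
    "memory": "384Gi"
  },
  "teams": ["team-ingress", "team-payments"]
}
```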
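The cost-efficiency practices from the last bullet (PDBs, probes, requests/limits) can be sketched as a minimal Deployment plus PodDisruptionBudget; all names, paths, and values are illustrative:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-service
spec:
  replicas: 3
  selector:
    matchLabels:
      app: example-service
  template:
    metadata:
      labels:
        app: example-service
    spec:
      containers:
        - name: app
          image: example/app:1.0
          # Requests drive scheduling and capacity planning; limits cap usage.
          resources:
            requests:
              cpu: 250m
              memory: 256Mi
            limits:
              cpu: 500m
              memory: 512Mi
          # Probes let the kubelet restart unhealthy pods and gate traffic.
          livenessProbe:
            httpGet:
              path: /healthz
              port: 8080
          readinessProbe:
            httpGet:
              path: /ready
              port: 8080
---
# A PDB keeps a minimum number of pods up during voluntary disruptions,
# e.g. node drains performed during cluster changes.
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: example-service-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: example-service
```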
## Conclusion
* There is a trade-off between cost, customization, setup effort, and security when choosing between single-tenant and multi-tenant clusters