---
title: Building a large scale multi-cloud multi-region SaaS platform with kubernetes controllers
weight: 8
tags:
- platform
- operator
- scaling
---
> Interchangeable wording in this talk: controller == operator
A talk by Elastic.
## About Elastic
* Elastic Cloud as a managed service
* Deployed across AWS/GCP/Azure in over 50 regions
* 600,000+ containers
### Elastic and Kube
* They offer Elastic Observability
* They offer the ECK operator for simplified deployments
## The baseline
* Goal: A large-scale (1M+ containers), resilient platform on k8s
* Architecture
* Global Control: The user-facing control plane (API), backed by controllers
* Regional Apps: The "shitload" of kubernetes clusters where the actual customer services live
## Scalability
* Challenge: How large can a single cluster be, and how many clusters do we need?
* Problem: Only basic guidelines exist for that
* Decision: Horizontally scale the number of clusters (500-1K nodes each)
* Decision: Disposable clusters
* Throw away without data loss
* Single source of truth is not the cluster's etcd but external -> no etcd backups needed
* Everything can be recreated any time
## Controllers
{{% notice style="note" %}}
I won't copy the explanations of operators/controllers into these notes
{{% /notice %}}
* Many different controllers, including (but not limited to)
* Cluster controller: Registers a cluster with the control plane
* Project controller: Schedules a user's project onto a cluster
* Product controllers (Elasticsearch, Kibana, etc.)
* Ingress/Certmanager
* Sometimes controllers depend on other controllers -> potential complexity
* Pro:
* Resilient (self-healing)
* Level-triggered (driven by desired state, not by individual events/procedures)
* Reasoning about a desired state is simpler than about a state machine
* Official controller runtime lib
* Workqueue: automatic dedup, retry with backoff, and so on (see the sketch below)
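As a quick refresher on what the controller-runtime lib gives you for free (level-triggered reconciles, a deduplicating workqueue, retries with backoff), here is a minimal reconciler sketch. This is my own illustration, not Elastic's code: `ProjectReconciler` and the ConfigMap stand-in resource are assumptions.
```go
package main

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	ctrl "sigs.k8s.io/controller-runtime"
	"sigs.k8s.io/controller-runtime/pkg/client"
	"sigs.k8s.io/controller-runtime/pkg/log"
)

// ProjectReconciler is a made-up name; Elastic's controllers are not public.
type ProjectReconciler struct {
	client.Client
}

// Reconcile is level-triggered: it only receives a namespace/name key and
// re-reads the current state, so it converges no matter which event (or
// missed event) caused the enqueue.
func (r *ProjectReconciler) Reconcile(ctx context.Context, req ctrl.Request) (ctrl.Result, error) {
	var cm corev1.ConfigMap // stand-in for a real project resource
	if err := r.Get(ctx, req.NamespacedName, &cm); err != nil {
		// Returning an error re-enqueues the key with exponential backoff;
		// the workqueue also dedups keys that are already waiting.
		return ctrl.Result{}, client.IgnoreNotFound(err)
	}
	log.FromContext(ctx).Info("reconciling", "name", req.Name)
	// ... compare desired vs. actual state and act on the difference ...
	return ctrl.Result{}, nil
}

func main() {
	mgr, err := ctrl.NewManager(ctrl.GetConfigOrDie(), ctrl.Options{})
	if err != nil {
		panic(err)
	}
	if err := ctrl.NewControllerManagedBy(mgr).
		For(&corev1.ConfigMap{}).
		Complete(&ProjectReconciler{Client: mgr.GetClient()}); err != nil {
		panic(err)
	}
	if err := mgr.Start(ctrl.SetupSignalHandler()); err != nil {
		panic(err)
	}
}
```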
## Global Controllers
* Basic operation
* Uses the project config from Elastic Cloud as the desired state
* The actual state is a k8s resource in another cluster
* Challenge: Where is the source of truth if the data is not stored in etcd?
* Solution: External datastore (postgres)
* Challenge: How do we sync the DB records to kubernetes?
* Potential solution: Replace etcd with the external DB
* Chosen solution:
* The global controllers don't use CRDs for storage; instead they expose a web API
* Reconciliation now interacts with the external DB and Go channels (as the queue) instead
* The CRs for the downstream operators are then created by the global controller (see the sketch below)
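Elastic didn't show code for this, but a rough sketch of the idea (read the desired state from Postgres, create ECK CRs in a regional cluster) could look like the following; the table, columns, namespace, and spec mapping are invented for illustration.
```go
package globalcontroller

import (
	"context"
	"database/sql"

	_ "github.com/lib/pq" // Postgres driver (assumed; any driver works)
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

// syncProjects reads the desired state from the external Postgres DB and
// materializes it as ECK CRs in a regional cluster. Table name, columns,
// namespace, and the spec mapping are invented for this sketch.
func syncProjects(ctx context.Context, db *sql.DB, regional client.Client) error {
	rows, err := db.QueryContext(ctx, `SELECT id, stack_version FROM projects`)
	if err != nil {
		return err
	}
	defer rows.Close()

	for rows.Next() {
		var id, version string
		if err := rows.Scan(&id, &version); err != nil {
			return err
		}

		// Build an Elasticsearch CR (ECK's GVK) from the DB row.
		cr := &unstructured.Unstructured{}
		cr.SetGroupVersionKind(schema.GroupVersionKind{
			Group: "elasticsearch.k8s.elastic.co", Version: "v1", Kind: "Elasticsearch",
		})
		cr.SetNamespace("projects")
		cr.SetName("project-" + id)
		if err := unstructured.SetNestedField(cr.Object, version, "spec", "version"); err != nil {
			return err
		}

		// Create-if-missing keeps the regional cluster disposable: rerunning
		// the sync against a freshly provisioned cluster recreates everything.
		if err := regional.Create(ctx, cr); err != nil && !apierrors.IsAlreadyExists(err) {
			return err
		}
	}
	return rows.Err()
}
```
Because the external DB is the single source of truth, this loop can be pointed at a brand-new cluster and rebuild all CRs, which is what makes the disposable-cluster decision above workable.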
### Large scale
* Problem: Reconcile gets triggered for all objects on restart (to make sure nothing gets missed and everything is handled by the latest controller version)
* Idea: Just create more workers for 100K+ Objects
* Problem: CPU go brrr and db gets overloaded
* Problem: If you create an item during a restart, it suddenly sits at the end of a 100K+ item work-queue
### Reconcile
* User-driven events are processed ASAP
* Reconcile of everything should still happen, but slowly in the background with low prio
* Solution: Status.LastReconciledRevision (a timestamp) gets compared to the object's revision; if the revision is larger -> user change
* Prioritization: Just a custom event handler on top of the normal queue plus a low-prio queue
* Low-prio queue: Just a queue that adds items to the normal work-queue with a rate limit (sketched below, after the diagram)
```mermaid
flowchart LR
low-->rl(ratelimit)
rl-->wq(work queue)
wq-->controller
high-->wq
```
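The diagram boils down to a small piece of queue plumbing. A hypothetical sketch with client-go's workqueue follows; the rate budget and the use of untyped keys are my assumptions, not from the talk.
```go
package lowprio

import (
	"context"

	"golang.org/x/time/rate"
	"k8s.io/client-go/util/workqueue"
)

// DrainLowPriority feeds background "reconcile everything" keys from a
// dedicated low-priority queue into the controller's normal workqueue
// through a rate limiter, so user-driven keys added directly to the main
// queue are never starved. The 10 keys/sec budget is an assumption.
func DrainLowPriority(ctx context.Context, low workqueue.Interface, main workqueue.RateLimitingInterface) {
	limiter := rate.NewLimiter(rate.Limit(10), 1)
	for {
		key, shutdown := low.Get()
		if shutdown {
			return
		}
		if err := limiter.Wait(ctx); err != nil { // context cancelled
			low.Done(key)
			return
		}
		// The main workqueue dedups: if the same key is already queued
		// (e.g. because of a user event), this Add is effectively a no-op.
		main.Add(key)
		low.Done(key)
	}
}
```
The rate limit is what keeps a full background resync from overloading the CPU and the external DB, while user changes keep flowing into the main queue at full speed.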
## Related
* Argo for CI/CD
* Crossplane for cluster auto-provisioning