Day 4 typos

This commit is contained in:
Nicolai Ort 2024-03-26 15:19:51 +01:00
parent daf83861af
commit 9ee562e88d
Signed by: niggl
GPG Key ID: 13AFA55AF62F269F
8 changed files with 94 additions and 72 deletions

View File

@ -93,3 +93,25 @@ Tekton
KPack
Multiarch
Tanzu
Kubebuilder
finalizer
OLM
depply
CatalogD
Rukoak
kapp
Depply
Jetstack
kube-lego
PKI-usecase
multimanager
kubebuider
kubebuilder
FluentD
FluentBit
OpenMetrics
upsert
tektone-based
ODIT.Services
Planetscale
vitess

View File

@ -9,11 +9,11 @@ tags:
## Problems
* Dockerfiles are hard and not 100% reproducible
* Buildpoacks are reproducible but result in large single-arch images
* Buildpacks are reproducible but result in large single-arch images
* Nix has multiple ways of doing things
## Solutions
* Degger as a CI solution
* Multistage docker images with distroless -> Small image, small attack surcface
* Language specific solutions (ki, jib)
* Dagger as a CI solution
* Multistage docker images with distroless -> Small image, small attack surface
* Language-specific solutions (`ko`, `jib`)

View File

@ -5,12 +5,12 @@ tags:
- ebpf
---
A talk by isovalent with a full room (one of the large ones).
A talk by Isovalent with a full room (one of the large ones).
## Baseline
* eBPF lets you run custom code in the kernel -> close to hardware
* Typical usecases: Networking, Observability, Tracing/Profiling, security
* Typical use cases: Networking, Observability, Tracing/Profiling, security
* Question: Is eBPF Turing complete and can it be used for more complex scenarios (TLS, L7)?
## eBPF verifier
@ -19,9 +19,9 @@ A talk by isovalent with a full room (one of the large ones).
* Principles
* Read memory only with correct permissions
* All writes to valid and safe memory
* Valid in-bounds and well formed control flow
* Execution on-cpu time is bounded: sleep, scheduled callbacks, interations, program acutally compketes
* Aquire/release and reference count semantics
* Valid in-bounds and well-formed control flow
* Execution on CPU time is bounded: sleep, scheduled callbacks, iterations, program actually completes
* Acquire/release and reference count semantics
## Demo: Game of life
@ -34,7 +34,7 @@ A talk by isovalent with a full room (one of the large ones).
* Instruction limit to let the verifier actually verify the program in reasonable time
* Limit is based on: Instruction limit and verifier step limit
* nowadays the limit it 4096 unprivileged calls and 1 million privileged istructions
* nowadays the limit is 4096 unprivileged calls and 1 million privileged instructions
* Only jump forward -> No loops
* Is a basic limitation to ensure no infinite loops can ruin the day
* Limitation: Only finite iterations can be performed
@ -43,14 +43,14 @@ A talk by isovalent with a full room (one of the large ones).
* Solution: subprograms (aka functions) and the limit applies to each function -> `x*subprograms = x*limit`
* Limit: Needs real skill
* Programs have to terminate
* Well eBPF really only wants to release the cpu, the program doesn't have to end per se
* Iterator: walk abitrary lists of objects
* Sleep on pagefault or other memory operations
* Well, eBPF really only wants to release the CPU; the program doesn't have to end per se
* Iterator: walk arbitrary lists of objects
* Sleep on page fault or other memory operations
* Timer callbacks (including timer 0 for "run me ASAP")
* Memory allocation
* Maps are used as the memory management system
## Result
* You can execure abitrary tasks via eBPF
* You can execute arbitrary tasks via eBPF
* It can be used for HTTP or TLS - it's just not implemented yet™

View File

@ -7,20 +7,20 @@ tags:
- scaling
---
By the nice opertor framework guys at IBM and RedHat.
By the nice operator framework guys at IBM and Red Hat.
I'll skip the baseline introduction of what an operator is.
## Operator SDK
> Build the operator
* Kubebuilder with v4 Plugines -> Supports the latest Kubernetes
* Java Operator SDK is not a part of Operator SDK and they released 5.0.0
* Kubebuilder with v4 Plugins -> Supports the latest Kubernetes
* Java Operator SDK is now a part of Operator SDK, and they released 5.0.0
* Now with server-side apply in the background
* Better status updates and finalizer handling
* Dependent ressource handling (alongside optional dependent ressources)
* Dependent resource handling (alongside optional dependent resources)
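The finalizer handling mentioned above follows a common reconcile pattern. A minimal stdlib-only Go sketch of that pattern (the `Object` type and finalizer name are made up for illustration; real controllers use `metav1.ObjectMeta` and controller-runtime helpers such as `controllerutil.AddFinalizer`):

```go
package main

import "fmt"

// Object models only the metadata fields the finalizer flow cares about;
// a real operator works on metav1.ObjectMeta instead.
type Object struct {
	Finalizers        []string
	DeletionRequested bool
}

const myFinalizer = "example.odit.services/cleanup" // hypothetical name

func hasFinalizer(o *Object, f string) bool {
	for _, v := range o.Finalizers {
		if v == f {
			return true
		}
	}
	return false
}

func addFinalizer(o *Object, f string) {
	if !hasFinalizer(o, f) {
		o.Finalizers = append(o.Finalizers, f)
	}
}

func removeFinalizer(o *Object, f string) {
	out := o.Finalizers[:0]
	for _, v := range o.Finalizers {
		if v != f {
			out = append(out, v)
		}
	}
	o.Finalizers = out
}

// reconcile shows the usual two-phase flow: ensure the finalizer while the
// object lives, run cleanup and drop the finalizer once deletion is requested.
func reconcile(o *Object) string {
	if !o.DeletionRequested {
		addFinalizer(o, myFinalizer)
		return "ensured"
	}
	if hasFinalizer(o, myFinalizer) {
		// external cleanup would happen here
		removeFinalizer(o, myFinalizer)
		return "cleaned"
	}
	return "done"
}

func main() {
	o := &Object{}
	fmt.Println(reconcile(o)) // ensured
	o.DeletionRequested = true
	fmt.Println(reconcile(o)) // cleaned
	fmt.Println(reconcile(o)) // done
}
```

The point of the pattern: the object cannot disappear while the finalizer is set, so cleanup is guaranteed to run before deletion completes.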
## Operator Liefecycle Manager
## Operator Lifecycle Manager
> Manage the operator -> An operator for installing operators
@ -28,16 +28,16 @@ I'll skip the baseline introduction of what an operator is.
* New API Set -> The old CRDs were overwhelming
* More GitOps friendly with per-tenant support
* Prediscribes update paths (maybe upgrade)
* Suport for operator bundels as k8s manifests/helmchart
* Prescribes update paths (maybe upgrade)
* Support for operator bundles as k8s manifests/helm chart
### OLM v1 Components
* Cluster Extension (User-Facing API)
* Defines the app you want to install
* Resolvs requirements through catalogd/depply
* Catalogd (Catalog Server/Operator)
* Depply (Dependency/Contraint solver)
* Resolves requirements through Catalogd/Deppy
* Catalogd (Catalog Server/Operator)
* Deppy (Dependency/Constraint solver)
* Applier (RukPak/kapp compatible)
```mermaid

View File

@ -7,20 +7,20 @@ tags:
- security
---
A talk by the certmanager maintainers that also staffed the certmanager booth.
Humor is present, but the main focus is still thetechnical integration
A talk by the cert-manager maintainers that also staffed the cert-manager booth.
Humor is present, but the main focus is still the technical integration
## Baseline
* Certmanager is the best™ way of getting certificats
* Poster features: Autorenewal, ACME, PKI, HC Vault
* cert-manager is the best™ way of getting certificates
* Poster features: Auto-renewal, ACME, PKI, HC Vault
* Numbers: 20M downloads, 427 contributors, 11.3k GitHub stars
* Currently on the gratuation path
* Currently on the graduation path
## History
* 2016: Jetstack created kube-lego -> An operator that generated LE certificates for ingress based on annotations
* 2o17: Certmanager launch -> Cert ressources and issuer ressources
* 2017: cert-manager launch -> Cert resources and issuer resources
* 2020: v1.0.0 and joined CNCF sandbox
* 2022: CNCF incubating
* 2024: Passed the CNCF security audit and on the way to graduation
@ -30,17 +30,17 @@ Humor is present, but the main focus is still thetechnical integration
### How it came to be
* The idea: Mix the digital certificate with the classical seal
* Started as the stamping idea to celebrate v1 and send contributors a thank you with candels
* Problems: Candels are not allowed -> Therefor glue gun
* Started as the stamping idea to celebrate v1 and send contributors a thank you with candles
* Problems: Candles are not allowed -> Therefore glue gun
### How it works
* Components
* RASPI with k3s
* Raspberry Pi with k3s
* Printer
* Certmanager
* A go-based webui
* QR-Code: Contains link to certificate with privatekey
* cert-manager
* A Go-based web UI
* QR code: Contains link to certificate with private key
```mermaid
flowchart LR
@ -53,14 +53,14 @@ flowchart LR
### What is new this year
* Idea: Certs should be usable for TLS
* Solution: The QR-Code links to a zip-download with the cert and provate key
* Solution: The QR-Code links to a zip-download with the cert and private key
* New: ECDSA for everything
* New: A stable root CA with intermediate for every conference
* New: Guestbook that can only be signed with a booth issued certificate -> Available via script
## Learnings
* This demo is just a private CA with certmanager -> Can be applied to any PKI-usecase
* This demo is just a private CA with cert-manager -> Can be applied to any PKI use case
* The certificate can be created via the CR, CSI driver (create secret and mount in container), ingress annotations, ...
* You can use multiple different Issuers (CA Issuer aka PKI, Let's Encrypt, Vault, AWS, ...)
@ -74,4 +74,4 @@ flowchart LR
## Conclusion
* This is not just a demo -> Just apply it for machines
* They have regular meetings (daily standups and bi-weekly)
* They have regular meetings (daily stand-ups and bi-weekly)

View File

@ -7,14 +7,14 @@ tags:
- scaling
---
A talk by TikTok/ByteDace (duh) focussed on using central controllers instead of on the edge.
A talk by TikTok/ByteDance (duh) focused on using central controllers instead of controllers on the edge.
## Background
> Global means non-China
* Edge platform team for cdn, livestreaming, uploads, realtime communication, etc.
* Around 250 cluster with 10-600 nodes each - mostly non-cloud aka baremetal
* Edge platform team for CDN, livestreaming, uploads, real-time communication, etc.
* Around 250 clusters with 10-600 nodes each - mostly non-cloud aka bare-metal
* Architecture: Control plane clusters (platform services) - data plane clusters (workload by other teams)
* Platform includes logs, metrics, configs, secrets, ...
@ -24,28 +24,28 @@ A talk by TikTok/ByteDace (duh) focussed on using central controllers instead of
* Operators are essential for platform features
* As the feature requests increase, more operators are needed
* The deployment of operators throughout many clusters is complex (namespace, deployments, pollicies, ...)
* The deployment of operators throughout many clusters is complex (namespace, deployments, policies, ...)
### Edge
* Limited ressources
* Cost implication of platfor features
* Limited resources
* Cost implication of platform features
* Real-time processing demands by platform features
* Balancing act between ressorces used by workload vs platform features (20-25%)
* Balancing act between resources used by workload vs platform features (20-25%)
### The classic flow
1. New feature get's requested
2. Use kube-buiders with the sdk to create the operator
1. New feature gets requested
2. Use kubebuilder with the SDK to create the operator
3. Create namespaces and configs in all clusters
4. Deploy operator to all clsuters
4. Deploy operator to all clusters
## Possible Solution
### Centralized Control Plane
* Problem: The controller implementation is limited to a cluster boundry
* Idea: Why not create a signle operator that can manage multiple edge clusters
* Problem: The controller implementation is limited to a cluster boundary
* Idea: Why not create a single operator that can manage multiple edge clusters
* Implementation: Just modify kubebuilder to accept multiple clients (and caches)
* Result: It works -> Simpler deployment and troubleshooting
* Concerns: High code complexity -> Long familiarization
@ -54,14 +54,14 @@ A talk by TikTok/ByteDace (duh) focussed on using central controllers instead of
### Attempt it a bit more like kubebuilder
* Each cluster has its own manager
* There is a central multimanager that starts all of the cluster specific manager
* There is a central multimanager that starts all the cluster-specific managers
* Controller registration to the manager now handles cluster names
* The reconciler knows which cluster it is working on
* The multi cluster management basicly just tets all of the cluster secrets and create a manager+controller for each cluster secret
* Challenges: Network connectifiy
* The multi-cluster management basically just tests all the cluster secrets and creates a manager+controller for each cluster secret
* Challenges: Network connectivity
* Solutions:
* Dynamic add/remove of clusters with go channels to prevent pod restarts
* Connectivity health checks -> For loss the recreate manager get's triggered
* Connectivity health checks -> On connectivity loss, the `recreate manager` gets triggered
```mermaid
flowchart TD
@ -80,7 +80,7 @@ flowchart LR
## Conclusion
* Acknowlege ressource contrains on edge
* Acknowledge resource constraints on edge
* Embrace open-source adoption instead of building your own
* Simplify deployment
* Recognize your own optionated approach and it's use cases
* Recognize your own opinionated approach and its use cases

View File

@ -15,22 +15,22 @@ Notes may be a bit unstructured due to tired note taker.
## Basics
* Fluentbit is compatible with
* prometheus (It can replace the prometheus scraper and node exporter)
* openmetrics
* opentelemetry (HTTPS input/output)
* FluentBit is compatible with
* Prometheus (It can replace the Prometheus scraper and node exporter)
* OpenMetrics
* OpenTelemetry (HTTPS input/output)
* FluentBit can export to Prometheus, Splunk, InfluxDB or others
* So pretty much it can be used to collect data from a bunch of sources and pipe it out to different backend destinations
* Fluent ecosystem: No vendor lock-in to observability
### Arhitectures
### Architectures
* The fluent agent collects data and can send it to one or multiple locations
* FluentBit can be used for aggregation from other sources
### In the kubernetes logging ecosystem
### In the Kubernetes logging ecosystem
* Pods logs to console -> Streamed stdout/err gets piped to file
* Pod logs to console -> Streamed stdout/err gets piped to file
* The logs in the file get encoded as JSON with metadata (date, channel)
* Labels and annotations only live in the control plane -> You have to collect them additionally -> Expensive
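For illustration, one line of that JSON-with-metadata log file (assuming Docker's `json-file` encoding, which matches the date/channel metadata described) can be parsed like this; a sketch, not FluentBit's actual parser:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// dockerLogLine models one line of Docker's json-file log driver:
// the raw log text plus the metadata the talk mentions.
type dockerLogLine struct {
	Log    string `json:"log"`
	Stream string `json:"stream"` // the "channel": stdout or stderr
	Time   string `json:"time"`   // the date metadata
}

func parseLogLine(raw []byte) (dockerLogLine, error) {
	var l dockerLogLine
	err := json.Unmarshal(raw, &l)
	return l, err
}

func main() {
	raw := []byte(`{"log":"hello from the app\n","stream":"stdout","time":"2024-03-22T09:30:00.123Z"}`)
	line, err := parseLogLine(raw)
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s [%s] %s", line.Time, line.Stream, line.Log)
}
```

Note that pod labels and annotations do not appear in these lines at all, which is why the collector has to query the API server for them separately.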
@ -56,8 +56,8 @@ flowchart LR
### Solution
* Solution: Processor - a seperate thread segmented by telemetry type
* Plugins can be written in your favourite language /c, rust, go, ...)
* Solution: Processor - a separate thread segmented by telemetry type
* Plugins can be written in your favorite language (C, Rust, Go, ...)
```mermaid
flowchart LR
@ -74,7 +74,7 @@ flowchart LR
### General new features in v3
* Native HTTP/2 support in core
* Contetn modifier with multiple operations (insert, upsert, delete, rename, hash, extract, convert)
* Content modifier with multiple operations (insert, upsert, delete, rename, hash, extract, convert)
* Metrics selector (include or exclude metrics) with matcher (name, prefix, substring, regex)
* SQL processor -> Use SQL expression for selections (instead of filters)
* Better OpenTelemetry output
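The metrics selector semantics (include/exclude with name, prefix, substring, or regex matchers) can be sketched in Go. This mirrors the behavior described above, not FluentBit's actual configuration keys or code:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchKind mirrors the matcher modes the talk lists for the v3
// metrics selector: name, prefix, substring, regex.
type matchKind int

const (
	byName matchKind = iota
	byPrefix
	bySubstring
	byRegex
)

type selector struct {
	kind    matchKind
	pattern string
	include bool // include=false drops matching metrics instead
	re      *regexp.Regexp
}

func newSelector(kind matchKind, pattern string, include bool) (*selector, error) {
	s := &selector{kind: kind, pattern: pattern, include: include}
	if kind == byRegex {
		re, err := regexp.Compile(pattern)
		if err != nil {
			return nil, err
		}
		s.re = re
	}
	return s, nil
}

func (s *selector) matches(name string) bool {
	switch s.kind {
	case byName:
		return name == s.pattern
	case byPrefix:
		return strings.HasPrefix(name, s.pattern)
	case bySubstring:
		return strings.Contains(name, s.pattern)
	default:
		return s.re.MatchString(name)
	}
}

// apply keeps or drops each metric name depending on include/exclude mode.
func (s *selector) apply(names []string) []string {
	var out []string
	for _, n := range names {
		if s.matches(n) == s.include {
			out = append(out, n)
		}
	}
	return out
}

func main() {
	metrics := []string{"node_cpu_seconds_total", "node_memory_bytes", "http_requests_total"}
	sel, _ := newSelector(byPrefix, "node_", true)
	fmt.Println(sel.apply(metrics)) // [node_cpu_seconds_total node_memory_bytes]
}
```

Flipping `include` to `false` turns the same matcher into an exclude rule, which is the point of having one selector with two modes.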

View File

@ -15,15 +15,15 @@ Who have I talked to today, are there any follow-ups or learnings?
They will follow up with a quick demo
{{% /notice %}}
* A interesting tektone-based CI/CD solutions that also integrates with oter platforms
* May be interesting for either ODIT or some of our customers
* An interesting Tekton-based CI/CD solution that also integrates with other platforms
* May be interesting for either ODIT.Services or some of our customers
## Docker
* Talked to one salesperson just aboput the general conference
* Talked to one technical guy about docker buildtime optimization
* Talked to one salesperson just about the general conference
* Talked to one technical guy about Docker build-time optimization
## Rancher/Suse
## Rancher/SUSE
* I just got some swag, a friend of mine got a demo focusing on runtime security