Day 1 typos

Nicolai Ort 2024-03-26 14:39:44 +01:00
parent b515be2220
commit e2e3b2fdf3
Signed by: niggl
GPG Key ID: 13AFA55AF62F269F
19 changed files with 160 additions and 119 deletions

.vscode/ltex.dictionary.en-US.txt vendored Normal file

@@ -0,0 +1,38 @@
+CloudNativeCon
+Syntasso
+OpenTelemetry
+Multitannancy
+Multitenancy
+PDBs
+Buildpacks
+buildpacks
+Konveyor
+GenAI
+Kube
+Kustomize
+KServe
+kube
+InferenceServices
+Replicafailure
+etcd
+RBAC
+CRDs
+CRs
+GitOps
+CnPG
+mTLS
+WAL
+AZs
+DBs
+kNative
+Kaniko
+Dupr
+crossplane
+DBaaS
+APPaaS
+CLUSTERaaS
+OpsManager
+multicluster
+Statefulset
+eBPF
+Parca

.vscode/ltex.disabledRules.en-US.txt vendored Normal file

@@ -0,0 +1,3 @@
+ARROWS
+ARROWS
+ARROWS


@@ -0,0 +1,2 @@
+{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QJust create a replica cluster via WAL-files from S3 on another kube cluster (lags 5 mins behind)\nYou can also activate replication streaming\\E$"}
+{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QResulting needs\nCluster aaS (using crossplane - in this case using aws)\nDBaaS (using crossplane - again usig pq on aws)\nApp aaS\\E$"}


@@ -9,7 +9,7 @@ This current version is probably full of typos - will fix later. This is what ty
 ## How did I get there?
-I attended KubeCon + CloudNAtiveCon Europe 2024 as the one and only [ODIT.Services](https://odit.services) representative.
+I attended KubeCon + CloudNativeCon Europe 2024 as the one and only [ODIT.Services](https://odit.services) representative.
 ## Style Guide


@@ -7,4 +7,4 @@ tags:
 ---
 The first "event" of the day was - as always - the opening keynote.
-Today presented by Redhat and Syntasso.
+Today presented by Red Hat and Syntasso.


@@ -6,34 +6,33 @@ tags:
 - dx
 ---
-By VMware (of all people) - kinda funny that they chose this title with the wole Broadcom fun.
+By VMware (of all people) - kinda funny that they chose this title with the whole Broadcom fun.
 The main topic of this talk is: What interface do we choose for what capability.
 ## Personas
-* Experts: Kubernetes, DB Engee
+* Experts: Kubernetes, DB engineer
 * Users: Employees that just want to do stuff
-* Platform Engeneers: Connect Users to Services by Experts
+* Platform engineers: Connect Users to Services by Experts
 ## Goal
-* Create Interfaces
-* Interface: Connect Users to Services
-* Problem: Many diferent types of Interfaces (SaaS, GUI, CLI) with different capabilities
+* Create Interfaces: Connect Users to Services
+* Problem: Many different types of Interfaces (SaaS, GUI, CLI) with different capabilities
 ## Dimensions
 > These are the dimensions of interface design proposed in the talk
 * Autonomy: external dependency (low) <-> self-service (high)
   * low: Ticket system -> But sometimes good for getting an expert
-  * high: Portal -> Nice, but somethimes we just need a human contact
+  * high: Portal -> Nice, but sometimes we just need a human contact
 * Contextual distance: stay in the same tool (low) <-> switch tools (high)
   * low: IDE plugin -> High potential friction if stuff goes wrong/complex (context switch needed)
   * high: Wiki or ticketing system
 * Capability skill: anyone can do it (low) <-> Made for experts (high)
-  * low: transparent sidecar (eg vuln scanner)
-  * high: cli
+  * low: transparent sidecar (e.g. vulnerability scanner)
+  * high: CLI
 * Interface skill: anyone can do it (low) <-> needs specialized interface skills (high)
   * low: Documentation in web aka wiki-style
   * high: Code templates (a sample helm values.yaml or raw terraform provider)
@@ -42,4 +41,4 @@ The main topic of this talk is: What interface do we choose for what capability.
 * You can use multiple interfaces for one capability
 * APIs (proverbial pig) are the most important interface b/c it can provide the baseline for all other interfaces
-* The beautification (lipstick) of the API through other interfaces makes uers happy
+* The beautification (lipstick) of the API through other interfaces makes users happy
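
To picture the "code template" end of the interface-skill dimension above: a platform team might hand users a sample Helm values.yaml like the following. This is a purely hypothetical sketch - the chart layout and keys are invented for illustration.

```yaml
# Hypothetical values.yaml handed to users as a code-template interface:
# powerful and flexible, but the user needs Helm and YAML skills to wield it.
replicaCount: 2
image:
  repository: registry.example.com/team/app
  tag: "1.4.2"
ingress:
  enabled: true
  host: app.platform.example.com
resources:
  requests:
    cpu: 100m
    memory: 128Mi
```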


@@ -62,10 +62,10 @@ Presented by the implementers at Thoughtworks (TW).
 ### Observability
 * Tool: Honeycomb
-* Metrics: Opentelemetry
+* Metrics: OpenTelemetry
 * Operator reconcile steps are exposed as traces
 ## Q&A
-* Your teams are pretty autonomus -> What to do with more classic teams: Over a multi-year jurney every team settles on the ownership and selfservice approach
-* How to teams get access to stages: They just get temselves a stage namespace, attach to ingress and have fun (admission handles the rest)
+* Your teams are pretty autonomous -> What to do with more classic teams: Over a multi-year journey every team settles on the ownership and self-service approach
+* How teams get access to stages: They just get themselves a stage namespace, attach to ingress and have fun (admission handles the rest)


@@ -17,6 +17,6 @@ No real value
 ## What do we need
 * User documentation
-* Adoption & Patnership
+* Adoption & Partnership
 * Platform as a Product
 * Customer feedback


@@ -10,7 +10,7 @@ tags:
 - multicluster
 ---
-Part of the Multitannancy Con presented by Adobe
+Part of the Multi-tenancy Con presented by Adobe
 ## Challenges
@@ -22,24 +22,24 @@ Part of the Multitannancy Con presented by Adobe
 * Azure in Base - AWS on the edge
 * Single Tenant Clusters (Simpler Governance)
-* Responsibility is Shared between App and Platform (Monitoring, Ingress, etc)
-* Problem: Huge manual investment and overprovisioning
+* Responsibility is Shared between App and Platform (Monitoring, Ingress, etc.)
+* Problem: Huge manual investment and over provisioning
 * Result: Access Control to tenant Namespaces and Capacity Planning -> Pretty much a multi tenant cluster with one tenant per cluster
-### Second Try - Microcluster
+### Second Try - Micro Clusters
 * One Cluster per Service
-### Third Try - Multitennancy
+### Third Try - Multi-tenancy
 * Use a bunch of components deployed by platform Team (Ingress, CD/CD, Monitoring, ...)
-* Harmonized general Runtime (cloud agnostic): Codenamed Ethos -> OVer 300 Clusters
+* Harmonized general Runtime (cloud-agnostic): Code-named Ethos -> Over 300 Clusters
 * Both shared clusters (shared by namespace) and dedicated clusters
-* Cluster config is a basic json with name, capacity, teams
-* Capacity Managment get's Monitored using Prometheus
-* Cluster Changes should be non-desruptive -> K8S-Shredder
-* Cost efficiency: Use good PDBs and livelyness/readyness Probes alongside ressource requests and limits
+* Cluster config is a basic JSON with name, capacity, teams
+* Capacity Management gets Monitored using Prometheus
+* Cluster Changes should be nondestructive -> K8S-Shredder
+* Cost efficiency: Use good PDBs and liveliness/readiness Probes alongside resource requests and limits
 ## Conclusion
-* There is a balance between cost, customization, setup and security between single-tenant und multi-tenant
+* There is a balance between cost, customization, setup and security between single-tenant and multi-tenant
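
The "basic JSON with name, capacity, teams" cluster config mentioned above might look roughly like this. The field values are guesses for illustration, not Adobe's actual Ethos schema (shown in YAML-compatible JSON):

```yaml
# Hypothetical per-cluster config; the talk only names these three fields.
{
  "name": "ethos-prod-va6",
  "capacity": 128,
  "teams": ["team-a", "team-b"]
}
```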


@@ -3,42 +3,41 @@ title: Lightning talks
 weight: 6
 ---
-The lightning talks are 10-minute talks by diferent cncf projects.
-## Building contaienrs at scale using buildpacks
-A Project lightning talk by heroku and the cncf buildpacks.
+The lightning talks are 10-minute talks by different CNCF projects.
+## Building containers at scale using buildpacks
+A Project lightning talk by Heroku and the CNCF buildpacks.
 ### How and why buildpacks?
-* What: A simple way to build reproducible contaienr images
-* Why: Scale, Reuse, Rebase
-  * Rebase: Buildpacks are structured as layers
+* What: A simple way to build reproducible container images
+* Why: Scale, Reuse, Rebase: Buildpacks are structured as layers
 * Dependencies, app builds and the runtime are seperated -> Easy update
-* How: Use the PAck CLI `pack build <image>` `docker run <image>`
+* How: Use the Pack CLI `pack build <image>` `docker run <image>`
 ## Konveyor
-A Platform for migration of legacy apps to cloudnative platforms.
-* Parts: Hub, Analysis (with langugage server), Assesment
+A Platform for migration of legacy apps to cloud native platforms.
+* Parts: Hub, Analysis (with language server), assessment
 * Roadmap: Multi language support, GenAI, Asset Generation (e.g. Kube Deployments)
-## Argo'S Communuty Driven Development
-Pretty mutch a short intropduction to Argo Project
+## Argo's Community Driven Development
+Pretty much a short introduction to Argo Project
 * Project Parts: Workflows (CI), Events, CD, Rollouts
-* NPS: Net Promoter Score (How likely are you to recoomend this) -> Everyone loves argo (based on their survey)
-* Rollouts: Can be based with prometheus metrics
+* NPS: Net Promoter Score (How likely are you to recommend this) -> Everyone loves Argo (based on their survey)
+* Rollouts: Can be based with Prometheus metrics
 ## Flux
-* Components: Helm, Kustomize, Terrafrorm, ...
-* Flagger Now supports gateway api, prometheus, datadog and more
+* Components: Helm, Kustomize, Terraform, ...
+* Flagger Now supports gateway API, Prometheus, Datadog and more
 * New Releases
-## A quick logg at the TAG App-Delivery
+## A quick look at the TAG App-Delivery
 * Mission: Everything related to cloud-native application delivery
 * Bi-Weekly Meetings
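
To make the Rollouts bullet above concrete: Prometheus-backed analysis in Argo Rollouts is typically declared through an AnalysisTemplate, roughly like this sketch (the query, threshold and service argument are invented for illustration):

```yaml
apiVersion: argoproj.io/v1alpha1
kind: AnalysisTemplate
metadata:
  name: success-rate
spec:
  args:
    - name: service-name
  metrics:
    - name: success-rate
      interval: 1m
      # Fail the analysis (and thereby the rollout) below 95% success rate
      successCondition: result[0] >= 0.95
      provider:
        prometheus:
          address: http://prometheus.monitoring:9090
          query: |
            sum(rate(http_requests_total{service="{{args.service-name}}",code!~"5.."}[5m]))
            /
            sum(rate(http_requests_total{service="{{args.service-name}}"}[5m]))
```

A Rollout then references the template from its canary strategy, and the controller promotes or aborts based on the metric result.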


@@ -8,30 +8,30 @@ tags:
 - dx
 ---
-This talks looks at bootstrapping Platforms using KSere.
-They do this in regards to AI Workflows.
-## Szenario
-* Deploy AI Workloads - Sometime consiting of different parts
+This talk looks at bootstrapping Platforms using KServe.
+They do this in regard to AI Workflows.
+## Scenario
+* Deploy AI Workloads - Sometime consisting of different parts
 * Models get stored in a model registry
 ## Baseline
 * Consistent APIs throughout the platform
-* Not the kube api directly b/c:
-  * Data scientists are a bit overpowered by the kube api
-  * Not only Kubernetes (also monitoring tools, feedback tools, etc)
+* Not the kube API directly b/c:
+  * Data scientists are a bit overpowered by the kube API
+  * Not only Kubernetes (also monitoring tools, feedback tools, etc.)
   * Better debugging experience for specific workloads
-## The debugging api
+## The debugging API
 * Specific API with enhanced statuses and consistent UX across Code and UI
-* Exampüle Endpoints: Pods, Deployments, InferenceServices
-* Provides a status summary-> Consistent health info across all related ressources
-* Example: Deployments have progress/availability, Pods have phases, Containers have readyness -> What do we interpret how?
-* Evaluation: Progressing, Available Count vs Readyness, Replicafailure, Pod Phase, Container Readyness
-* The rules themselfes may be pretty complex, but - since the user doesn't have to check them themselves - the status is simple
+* Example Endpoints: Pods, Deployments, InferenceServices
+* Provides a status summary-> Consistent health info across all related resources
+* Example: Deployments have progress/availability, Pods have phases, Containers have readiness -> What do we interpret how?
+* Evaluation: Progressing, Available Count vs Readiness, Replicafailure, Pod Phase, Container Readiness
+* The rules themselves may be pretty complex, but - since the user doesn't have to check them themselves - the status is simple
 ### Debugging Metrics
@@ -47,15 +47,15 @@ They do this in regards to AI Workflows.
 * Kine is used to replace/extend etcd with the relational dock db -> Relation namespace<->manifests is stored here and RBAC can be used
 * Launchpad: Select Namespace and check resource (fuel) availability/utilization
-### Clsuter maintainance
-* Deplyoments can be launched to multiple clusters (even two clusters at once) -> HA through identical clusters
-* The excact same manifests get deployed to two clusters
-* Cluster desired state is stored externally to enable effortless upogrades, rescale, etc
+### Cluster maintenance
+* Deployments can be launched to multiple clusters (even two clusters at once) -> HA through identical clusters
+* The exact same manifests get deployed to two clusters
+* Cluster desired state is stored externally to enable effortless upgrades, rescale, etc
 ### Versioning API
-* Basicly the dock DB
+* Basically the dock DB
 * CRDs are the representations of the inference manifests
 * Rollbacks, Promotion and History is managed via the CRs
 * Why not GitOps: Internal Diffs, deployment overrides, customized features
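
For reference, the raw Kubernetes signals that the debugging API's evaluation (described above) has to reconcile look like this - a hand-written, trimmed sample, not output from the platform in the talk:

```yaml
# Deployment status (trimmed): the evaluation reads these conditions
status:
  availableReplicas: 2
  conditions:
    - type: Progressing       # rollout is making progress
      status: "True"
      reason: NewReplicaSetAvailable
    - type: Available         # minimum replica count is up
      status: "True"
    # a ReplicaFailure condition would appear here on quota/scheduling errors
---
# Pod status (trimmed): phase plus per-container readiness
status:
  phase: Running
  containerStatuses:
    - name: kserve-container
      ready: true
      restartCount: 0
```

The point of the summary API is that users never have to cross-reference these different shapes themselves.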


@@ -7,25 +7,25 @@ tags:
 - db
 ---
-A short Talk as Part of the DOK day - presendet by the VP of CloudNative at EDB (one of the biggest PG contributors)
+A short Talk as Part of the Data on Kubernetes day - presented by the VP of Cloud Native at EDB (one of the biggest PG contributors)
 Stated target: Make the world your single point of failure
 ## Proposal
-* Get rid of Vendor-Lockin using the oss projects PG, K8S and CnPG
+* Get rid of Vendor-Lockin using the OSS projects PG, K8S and CnPG
 * PG was the DB of the year 2023 and a bunch of other times in the past
 * CnPG is a Level 5 mature operator
 ## 4 Pillars
-* Seamless KubeAPI Integration (Operator PAttern)
+* Seamless Kube API Integration (Operator Pattern)
 * Advanced observability (Prometheus Exporter, JSON logging)
 * Declarative Config (Deploy, Scale, Maintain)
-* Secure by default (Robust contaienrs, mTLS, and so on)
+* Secure by default (Robust containers, mTLS, and so on)
 ## Clusters
-* Basic Ressource that defines name, instances, snyc and storage (and other params that have same defaults)
+* Basic Resource that defines name, instances, sync and storage (and other parameters that have same defaults)
 * Implementation: Operator creates:
   * The volumes (PG_Data, WAL (Write ahead log)
   * Primary and Read-Write Service
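
A minimal Cluster resource of the kind described above might look like this (values are illustrative; CloudNativePG fills in the remaining defaults):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-demo
spec:
  instances: 3        # one primary, two replicas
  storage:
    size: 10Gi        # dedicated WAL storage can be requested separately
```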
@@ -35,15 +35,15 @@ Stated target: Make the world your single point of failure
 * Failure detected
 * Stop R/W Service
 * Promote Replica
-* Activat R/W Service
-* Kill old promary and demote to replica
+* Activate R/W Service
+* Kill old primary and demote to replica
 ## Backup/Recovery
-* Continuos Backup: Write Ahead Log Backup to object store
+* Continuous Backup: Write Ahead Log Backup to object store
 * Physical: Create from primary or standby to object store or kube volumes
-* Recovery: Copy full backup and apply WAL until target (last transactio or specific timestamp) is reached
-* Replica Cluster: Basicly recreates a new cluster to a full recovery but keeps the cluster in Read-Only Replica Mode
+* Recovery: Copy full backup and apply WAL until target (last transaction or specific timestamp) is reached
+* Replica Cluster: Basically recreates a new cluster to a full recovery but keeps the cluster in Read-Only Replica Mode
 * Planned: Backup Plugin Interface
 ## Multi-Cluster
@@ -51,21 +51,21 @@ Stated target: Make the world your single point of failure
 * Just create a replica cluster via WAL-files from S3 on another kube cluster (lags 5 mins behind)
 * You can also activate replication streaming
-## Reccomended architecutre
-* Dev Cluster: 1 Instance without PDB and with Continuos backup
-* Prod: 3 Nodes with automatic failover and continuos backups
+## Recommended architecture
+* Dev Cluster: 1 Instance without PDB and with Continuous backup
+* Prod: 3 Nodes with automatic failover and continuous backups
 * Symmetric: Two clusters
   * Primary: 3-Node Cluster
-  * Secondary: WAL-Based 3-Node Cluster with a designated primary (to take over if primary cluster fails)
-* Symmetric Streaming: Same as Secondary, but you manually enable the streaming api for live replication
+  * Secondary: WAL based 3-Node Cluster with a designated primary (to take over if primary cluster fails)
+* Symmetric Streaming: Same as Secondary, but you manually enable the streaming API for live replication
 * Cascading Replication: Scale Symmetric to more clusters
-* Single availability zone: Well, do your best to spread to nodes and aspire to streched kubernetes to more AZs
+* Single availability zone: Well, do your best to spread to nodes and aspire to stretched Kubernetes to more AZs
 ## Roadmap
 * Replica Cluster (Symmetric) Switchover
 * Synchronous Symmetric
-* 3rd PArty Plugins
+* 3rd Party Plugins
 * Manage DBs via the Operator
 * Storage Autoscaling
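
The "replica cluster via WAL-files from S3" pattern above maps to a Cluster that bootstraps from another cluster's object-store backups and stays in replica mode - roughly like this sketch (bucket path and secret names are placeholders):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-demo-dr            # lives in the second kube cluster
spec:
  instances: 3
  storage:
    size: 10Gi
  replica:
    enabled: true             # keep the cluster in read-only replica mode
    source: pg-demo
  bootstrap:
    recovery:
      source: pg-demo         # seed from the primary's backups
  externalClusters:
    - name: pg-demo
      barmanObjectStore:
        destinationPath: s3://pg-backups/pg-demo
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: SECRET_ACCESS_KEY
```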


@@ -4,14 +4,14 @@ weight: 9
 ---
 > When I say serverless I don't mean lambda - I mean serverless
-> That is thousands of lines of yaml - but I don't want to depress you
+> That is thousands of lines of YAML - but I don't want to depress you
 > It will be eventually done
 > Imagine this error is not happening
 > Just imagine how I did this last night
 ## Goal
-* Take my sourcecode and run it, scale it - jsut don't ask me
+* Take my source code and run it, scale it - just don't ask me
 ## Baseline
@@ -20,9 +20,9 @@ weight: 9
 * Use Kaniko/Shipwright for building
 * Use Dupr for inter-service Communication
-## Openfunction
-> The glue between different tools to achive serverless
+## Open function
+> The glue between different tools to achieve serverless
 * CRD that describes:
   * Build this image and push it to the registry
@@ -35,8 +35,8 @@ weight: 9
 * Open Questions
   * Where are the serverless servers -> Cluster, dependencies, secrets
-  * How do I create DBs, etc
+  * How do I create DBs, etc.
 * Resulting needs
-  * Cluster aaS (using crossplane - in this case using aws)
-  * DBaaS (using crossplane - again usig pq on aws)
-  * App aaS
+  * CLUSTERaaS (using crossplane - in this case using AWS)
+  * DBaaS (using crossplane - again using pg on AWS)
+  * APPaaS
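
The "DBaaS using crossplane" need above typically surfaces to developers as a claim against a platform-defined composite resource. A hypothetical sketch in the style of the Crossplane getting-started guide - the PostgreSQLInstance kind and its fields are made up here; the platform team would define the matching XRD and an AWS-backed Composition:

```yaml
apiVersion: database.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: orders-db
spec:
  parameters:
    storageGB: 20
  compositionSelector:
    matchLabels:
      provider: aws           # routes the claim to the AWS Composition
  writeConnectionSecretToRef:
    name: orders-db-conn      # app reads credentials from this secret
```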


@@ -14,21 +14,21 @@ Another talk as part of the Data On Kubernetes Day.
 * Managed: Atlas
 * Semi: Cloud manager
-* Selfhosted: Enterprise and community operator
-### Mongo on K8s
+* Self-hosted: Enterprise and community operator
+### MongoDB on K8s
 * Cluster Architecture
   * Control Plane: Operator
-  * Data Plane: MongoDB Server + Agen (Sidecar Proxy)
+  * Data Plane: MongoDB Server + Agent (Sidecar Proxy)
 * Enterprise Operator
-  * Opsmanager CR: Deploys 3-node operator DB and OpsManager
-  * MongoDB CR: The MongoDB cLusters (Compromised of agents)
-* Advanced Usecase: Data Platform with mongodb on demand
-  * Control Plane on one cluster (or on VMs/Hardmetal), data plane in tennant clusters
+  * OpsManager CR: Deploys 3-node operator DB and OpsManager
+  * MongoDB CR: The MongoDB clusters (Compromised of agents)
+* Advanced use case: Data Platform with MongoDB on demand
+  * Control Plane on one cluster (or on VMs/Bare-metal), data plane in tenant clusters
   * Result: MongoDB CR can not relate to OpsManager CR directly
 ## Pitfalls
-* Storage: Agnostic, Topology aware, configureable and resizeable (can't be done with statefulset)
+* Storage: Agnostic, Topology aware, configurable and resizable (can't be done with Statefulset)
 * Networking: Cluster-internal (Pod to Pod/Service), External (Split horizon over multicluster)
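
For orientation, the MongoDB CR mentioned above looks roughly like this - heavily trimmed and from memory of the mongodb.com/v1 samples, so treat the field names as approximate:

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5"
  # the indirection to OpsManager that breaks once control plane and
  # data plane live in different clusters
  opsManager:
    configMapRef:
      name: my-project
  credentials: my-credentials
```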


@@ -9,8 +9,8 @@ tags:
 ## CNCF Platform maturity model
-* Was donated to the cncf by syntasso
-* Constantly evolving since 1.0 in november 2023
+* Was donated to the CNCF by Syntasso
+* Constantly evolving since 1.0 in November 2023
 ### Overview
@@ -25,7 +25,7 @@ tags:
 * Investment: How are funds/staff allocated to platform capabilities
 * Adoption: How and why do users discover this platform
 * Interfaces: How do users interact with and consume platform capabilities
-* Operations: How are platforms and capabilities planned, prioritzed, developed and maintained
+* Operations: How are platforms and capabilities planned, prioritized, developed and maintained
 * Measurement: What is the process for gathering and incorporating feedback/learning?
 ## Goals
@@ -34,24 +34,24 @@ tags:
 * Outcomes & Practices
 * Where are you at
 * Limits & Opportunities
-* Behaviours and outcome
+* Behaviors and outcome
 * Balance People and processes
 ## Typical Journeys
-### Steps of the jurney
+### Steps of the journey
 1. What are your goals and limitations
 2. What is my current landscape
-3. Plan babysteps & iterate
-### Szenarios
+3. Plan baby steps & iterate
+### Scenarios
 * Bad: I want to improve my k8s platform
 * Good: Scaling an enterprise COE (Center Of Excellence)
   * What: Onboard 20 Teams within 20 Months and enforce 8 security regulations
   * Where: We have a dedicated team of centrally funded people
-  * Lay the foundation: More funding for more larger teams -> Switch from Project to platform mindset
+  * Lay the foundation: More funding for more, larger teams -> Switch from Project to platform mindset
   * Do your technical Due diligence in parallel
 ## Key Lessons
@@ -60,8 +60,8 @@ tags:
 * Know your landscape
 * Plan in baby steps and iterate
 * Lay the foundation for building the right thing and not just anything
-* Dont forget to do your technical dd in parallel
+* Don't forget to do your technical dd in parallel
 ## Conclusion
-* Majurity model is a helpful part but not the entire plan
+* Maturity model is a helpful part but not the entire plan


@@ -6,43 +6,43 @@ tags:
 - network
 ---
-Held by Cilium regarding ebpf and hubble
+Held by Cilium regarding eBPF and Hubble
 ## eBPF
 > Extend the capabilities of the kernel without requiring to change the kernel source code or load modules
 * Benefits: Reduce performance overhead, gain deep visibility while being widely available
-* Example Tools: Parca (Profiling), Cilium (Networking), Hubble (Opservability), Tetragon (Security)
+* Example Tools: Parca (Profiling), Cilium (Networking), Hubble (Observability), Tetragon (Security)
 ## Cilium
-> Opensource Solution for network connectivity between workloads
+> Open source Solution for network connectivity between workloads
 ## Hubble
 > Observability-Layer for cilium
-### Featureset
+### Feature set
 * CLI: TCP-Dump on steroids + API Client
 * UI: Graphical dependency and connectivity map
-* Prometheus + Grafana + Opentelemetry compatible
+* Prometheus + Grafana + OpenTelemetry compatible
 * Metrics up to L7
 ### Where can it be used
 * Service dependency with frequency
-* Kinds of http calls
+* Kinds of HTTP calls
 * Network Problems between L4 and L7 (including DNS)
 * Application Monitoring through status codes and latency
 * Security-Related Network Blocks
-* Services accessed from outside the cluser
+* Services accessed from outside the cluster
 ### Architecture
-* Cilium Agent: Runs as the CNI für all Pods
-* Server: Runs on each node and retrieves the ebpf from cilium
+* Cilium Agent: Runs as the CNI for all Pods
+* Server: Runs on each node and retrieves the eBPF from cilium
 * Relay: Provide visibility throughout all nodes
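
As a sketch of how the feature set above gets switched on: Hubble ships with the Cilium Helm chart and is enabled through values along these lines (a minimal example; check the chart for the authoritative keys):

```yaml
hubble:
  enabled: true
  relay:
    enabled: true     # cluster-wide visibility instead of per-node only
  ui:
    enabled: true     # the graphical dependency/connectivity map
  metrics:
    enabled:
      - dns
      - http          # L7/status-code metrics for Prometheus
```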
 ## TL;DR


@@ -7,10 +7,10 @@ weight: 1
 Day one is the Day for co-located events aka CloudNativeCon.
 I spent most of the day attending the Platform Engineering Day - as one might have guessed it's all about platform engineering.
-Everything started with badge pickup - a very smooth experence (but that may be related to me showing up an hour or so too early).
-## Talk reccomandations
+Everything started with badge pickup - a very smooth experience (but that may be related to me showing up an hour or so too early).
+## Talk recommendations
 * Beyond Platform Thinking...
-* Hitchhikers Guide to ...
+* Hitchhiker's Guide to ...
 * To K8S and beyond...


@@ -11,7 +11,7 @@ The keynote itself was presented by the CEO of the CNCF.
 ## The numbers
-* Over 2000 attendees
+* Over 12000 attendees
 * 10 Years of Kubernetes
 * 60% of large organizations expect rapid cost increases due to AI/ML (FinOps Survey)


@@ -5,7 +5,7 @@ weight: 2
 ---
 Day two is also the official day one of KubeCon (Day one was just CloudNativeCon).
-This is where all of the people joined (over 2000)
+This is where all of the people joined (over 12000)
 The opening keynotes were a mix of talks and panel discussions.
 The main topic was - who could have guessed - AI and ML.