From e2e3b2fdf3a272e1c146af6c0c2f2cab52100844 Mon Sep 17 00:00:00 2001 From: Nicolai Ort Date: Tue, 26 Mar 2024 14:39:44 +0100 Subject: [PATCH] Day 1 typos --- .vscode/ltex.dictionary.en-US.txt | 38 +++++++++++++++++++ .vscode/ltex.disabledRules.en-US.txt | 3 ++ .vscode/ltex.hiddenFalsePositives.en-US.txt | 2 + content/_index.md | 2 +- content/day1/01_opening.md | 2 +- ..._sometimes_lipstick_is_what_a_pig_needs.md | 21 +++++----- content/day1/03_beyond_platform_thinking.md | 6 +-- content/day1/04_user_friendsly_devplatform.md | 2 +- content/day1/05_multitennancy.md | 22 +++++------ content/day1/06_lightning_talks.md | 31 ++++++++------- .../day1/07_hitchhikers_guid_to_platform.md | 36 +++++++++--------- content/day1/08_scaling_pg.md | 34 ++++++++--------- content/day1/09_serverless.md | 16 ++++---- ...ons_learned_from_building_a_db_operator.md | 16 ++++---- content/day1/11_platform_beyond_k8s.md | 20 +++++----- content/day1/12_oversavbility.md | 18 ++++----- content/day1/_index.md | 6 +-- content/day2/01_opening.md | 2 +- content/day2/_index.md | 2 +- 19 files changed, 160 insertions(+), 119 deletions(-) create mode 100644 .vscode/ltex.dictionary.en-US.txt create mode 100644 .vscode/ltex.disabledRules.en-US.txt create mode 100644 .vscode/ltex.hiddenFalsePositives.en-US.txt diff --git a/.vscode/ltex.dictionary.en-US.txt b/.vscode/ltex.dictionary.en-US.txt new file mode 100644 index 0000000..1640dde --- /dev/null +++ b/.vscode/ltex.dictionary.en-US.txt @@ -0,0 +1,38 @@ +CloudNativeCon +Syntasso +OpenTelemetry +Multitannancy +Multitenancy +PDBs +Buildpacks +buildpacks +Konveyor +GenAI +Kube +Kustomize +KServe +kube +InferenceServices +Replicafailure +etcd +RBAC +CRDs +CRs +GitOps +CnPG +mTLS +WAL +AZs +DBs +kNative +Kaniko +Dupr +crossplane +DBaaS +APPaaS +CLUSTERaaS +OpsManager +multicluster +Statefulset +eBPF +Parca diff --git a/.vscode/ltex.disabledRules.en-US.txt b/.vscode/ltex.disabledRules.en-US.txt new file mode 100644 index 0000000..1e7df84 --- /dev/null +++ b/.vscode/ltex.disabledRules.en-US.txt @@ -0,0 +1,3 @@ +ARROWS +ARROWS +ARROWS diff --git a/.vscode/ltex.hiddenFalsePositives.en-US.txt b/.vscode/ltex.hiddenFalsePositives.en-US.txt new file mode 100644 index 0000000..6977368 --- /dev/null +++ b/.vscode/ltex.hiddenFalsePositives.en-US.txt @@ -0,0 +1,2 @@ +{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QJust create a replica cluster via WAL-files from S3 on another kube cluster (lags 5 mins behind)\nYou can also activate replication streaming\\E$"} +{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QResulting needs\nCluster aaS (using crossplane - in this case using aws)\nDBaaS (using crossplane - again usig pq on aws)\nApp aaS\\E$"} diff --git a/content/_index.md b/content/_index.md index 18aebe2..62c3707 100644 --- a/content/_index.md +++ b/content/_index.md @@ -9,7 +9,7 @@ This current version is probably full of typos - will fix later. This is what ty ## How did I get there? -I attended KubeCon + CloudNAtiveCon Europe 2024 as the one and only [ODIT.Services](https://odit.services) representative. +I attended KubeCon + CloudNativeCon Europe 2024 as the one and only [ODIT.Services](https://odit.services) representative. ## Style Guide diff --git a/content/day1/01_opening.md b/content/day1/01_opening.md index ef4fe8c..02a679a 100644 --- a/content/day1/01_opening.md +++ b/content/day1/01_opening.md @@ -7,4 +7,4 @@ tags: --- The first "event" of the day was - as always - the opening keynote. -Today presented by Redhat and Syntasso. +Today presented by Red Hat and Syntasso. 
diff --git a/content/day1/02_sometimes_lipstick_is_what_a_pig_needs.md b/content/day1/02_sometimes_lipstick_is_what_a_pig_needs.md index 95b2f08..b7618bd 100644 --- a/content/day1/02_sometimes_lipstick_is_what_a_pig_needs.md +++ b/content/day1/02_sometimes_lipstick_is_what_a_pig_needs.md @@ -6,34 +6,33 @@ tags: - dx --- -By VMware (of all people) - kinda funny that they chose this title with the wole Broadcom fun. +By VMware (of all people) - kinda funny that they chose this title with the whole Broadcom fun. The main topic of this talk is: What interface do we choose for what capability. ## Personas -* Experts: Kubernetes, DB Engee +* Experts: Kubernetes, DB engineer * Users: Employees that just want to do stuff -* Platform Engeneers: Connect Users to Services by Experts +* Platform engineers: Connect Users to Services by Experts ## Goal -* Create Interfaces -* Interface: Connect Users to Services -* Problem: Many diferent types of Interfaces (SaaS, GUI, CLI) with different capabilities +* Create Interfaces: Connect Users to Services +* Problem: Many different types of Interfaces (SaaS, GUI, CLI) with different capabilities ## Dimensions > These are the dimensions of interface design proposed in the talk -* Autonomy: external dependency (low) <-> self-service (high) +* Autonomy: external dependency (low) <-> self-service (high) * low: Ticket system -> But sometimes good for getting an expert - * high: Portal -> Nice, but somethimes we just need a human contact + * high: Portal -> Nice, but sometimes we just need a human contact * Contextual distance: stay in the same tool (low) <-> switch tools (high) * low: IDE plugin -> High potential friction if stuff goes wrong/complex (context switch needed) * high: Wiki or ticketing system * Capability skill: anyone can do it (low) <-> Made for experts (high) - * low: transparent sidecar (eg vuln scanner) - * high: cli + * low: transparent sidecar (e.g. vulnerability scanner) + * high: CLI * Interface skill: anyone can do it (low) <-> needs specialized interface skills (high) * low: Documentation in web aka wiki-style * high: Code templates (a sample helm values.yaml or raw terraform provider) @@ -42,4 +41,4 @@ The main topic of this talk is: What interface do we choose for what capability. * You can use multiple interfaces for one capability * APIs (proverbial pig) are the most important interface b/c it can provide the baseline for all other interfaces -* The beautification (lipstick) of the API through other interfaces makes uers happy +* The beautification (lipstick) of the API through other interfaces makes users happy diff --git a/content/day1/03_beyond_platform_thinking.md b/content/day1/03_beyond_platform_thinking.md index 658636e..06947e3 100644 --- a/content/day1/03_beyond_platform_thinking.md +++ b/content/day1/03_beyond_platform_thinking.md @@ -62,10 +62,10 @@ Presented by the implementers at Thoughtworks (TW). 
### Observability

* Tool: Honeycomb
-* Metrics: Opentelemetry
+* Metrics: OpenTelemetry
* Operator reconcile steps are exposed as traces

## Q&A

-* Your teams are pretty autonomus -> What to do with more classic teams: Over a multi-year jurney every team settles on the ownership and selfservice approach
-* How to teams get access to stages: They just get temselves a stage namespace, attach to ingress and have fun (admission handles the rest)
+* Your teams are pretty autonomous -> What to do with more classic teams: Over a multi-year journey every team settles on the ownership and self-service approach
+* How teams get access to stages: They just get themselves a stage namespace, attach to ingress and have fun (admission handles the rest)
diff --git a/content/day1/04_user_friendsly_devplatform.md b/content/day1/04_user_friendsly_devplatform.md
index 78e1cd3..e619bbd 100644
--- a/content/day1/04_user_friendsly_devplatform.md
+++ b/content/day1/04_user_friendsly_devplatform.md
@@ -17,6 +17,6 @@ No real value

## What do we need

* User documentation
-* Adoption & Patnership
+* Adoption & Partnership
* Platform as a Product
* Customer feedback
diff --git a/content/day1/05_multitennancy.md b/content/day1/05_multitennancy.md
index ba96bac..6fcf125 100644
--- a/content/day1/05_multitennancy.md
+++ b/content/day1/05_multitennancy.md
@@ -10,7 +10,7 @@ tags:
- multicluster
---

-Part of the Multitannancy Con presented by Adobe
+Part of the Multi-tenancy Con presented by Adobe

## Challenges

@@ -22,24 +22,24 @@ Part of the Multitannancy Con presented by Adobe

* Azure in Base - AWS on the edge
* Single Tenant Clusters (Simpler Governance)
-* Responsibility is Shared between App and Platform (Monitoring, Ingress, etc)
-* Problem: Huge manual investment and overprovisioning
+* Responsibility is Shared between App and Platform (Monitoring, Ingress, etc.)
+* Problem: Huge manual investment and over-provisioning
* Result: Access Control to tenant Namespaces and Capacity Planning
-> Pretty much a multi tenant cluster with one tenant per cluster

-### Second Try - Microcluster
+### Second Try - Micro Clusters

* One Cluster per Service

-### Third Try - Multitennancy
+### Third Try - Multi-tenancy

* Use a bunch of components deployed by platform Team (Ingress, CD/CD, Monitoring, ...)
-* Harmonized general Runtime (cloud agnostic): Codenamed Ethos -> OVer 300 Clusters
+* Harmonized general Runtime (cloud-agnostic): Code-named Ethos -> Over 300 Clusters
* Both shared clusters (shared by namespace) and dedicated clusters
-* Cluster config is a basic json with name, capacity, teams
-* Capacity Managment get's Monitored using Prometheus
-* Cluster Changes should be non-desruptive -> K8S-Shredder
-* Cost efficiency: Use good PDBs and livelyness/readyness Probes alongside ressource requests and limits
+* Cluster config is a basic JSON with name, capacity, teams (see the sketch below)
+* Capacity Management gets monitored using Prometheus
+* Cluster Changes should be nondestructive -> K8S-Shredder
+* Cost efficiency: Use good PDBs and liveness/readiness probes alongside resource requests and limits

## Conclusion

-* There is a balance between cost, customization, setup and security between single-tenant und multi-tenant
+* There is a trade-off between cost, customization, setup and security when choosing between single-tenant and multi-tenant
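Aside, for illustration: a minimal sketch of what such a per-cluster config could look like - only the three fields (name, capacity, teams) come from the talk, everything else is invented:

```yaml
# Hypothetical cluster config - the talk only named the fields, not the schema.
# The talk said "basic JSON"; JSON is shown as-is since it is also valid YAML.
{
  "name": "ethos-prod-eu01",
  "capacity": "400-nodes",
  "teams": ["team-commerce", "team-identity"]
}
```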
diff --git a/content/day1/06_lightning_talks.md b/content/day1/06_lightning_talks.md
index 3840f3e..741a6ab 100644
--- a/content/day1/06_lightning_talks.md
+++ b/content/day1/06_lightning_talks.md
@@ -3,42 +3,41 @@ title: Lightning talks
weight: 6
---

-The lightning talks are 10-minute talks by diferent cncf projects.
+The lightning talks are 10-minute talks by different CNCF projects.

-## Building contaienrs at scale using buildpacks
+## Building containers at scale using buildpacks

-A Project lightning talk by heroku and the cncf buildpacks.
+A Project lightning talk by Heroku and the CNCF buildpacks.

### How and why buildpacks?

-* What: A simple way to build reproducible contaienr images
-* Why: Scale, Reuse, Rebase
-* Rebase: Buildpacks are structured as layers
+* What: A simple way to build reproducible container images
+* Why: Scale, Reuse, Rebase: Buildpacks are structured as layers
* Dependencies, app builds and the runtime are seperated -> Easy update
-* How: Use the PAck CLI `pack build ` `docker run `
+* How: Use the Pack CLI `pack build <image>`, then `docker run <image>`

## Konveyor

A Platform for migration of legacy apps to cloud native platforms.

-* Parts: Hub, Analysis (with langugage server), Assesment
+* Parts: Hub, Analysis (with language server), Assessment
* Roadmap: Multi language support, GenAI, Asset Generation (e.g. Kube Deployments)

-## Argo'S Communuty Driven Development
+## Argo's Community Driven Development

-Pretty mutch a short intropduction to Argo Project
+Pretty much a short introduction to the Argo Project

* Project Parts: Workflows (CI), Events, CD, Rollouts
-* NPS: Net Promoter Score (How likely are you to recoomend this) -> Everyone loves argo (based on their survey)
-* Rollouts: Can be based with prometheus metrics
+* NPS: Net Promoter Score (How likely are you to recommend this) -> Everyone loves Argo (based on their survey)
+* Rollouts: Can be based on Prometheus metrics

## Flux

-* Components: Helm, Kustomize, Terrafrorm, ...
+* Components: Helm, Kustomize, Terraform, ... (see the example below)
+* Flagger now supports the Gateway API, Prometheus, Datadog and more
+* New Releases
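Since Flux came up: a minimal sketch of the kind of resource Flux reconciles - a `Kustomization` pointing at a Git source (Flux v2 `kustomize.toolkit.fluxcd.io` API; repo name and path are invented):

```yaml
apiVersion: kustomize.toolkit.fluxcd.io/v1
kind: Kustomization
metadata:
  name: my-apps                # invented
  namespace: flux-system
spec:
  interval: 10m                # reconcile every 10 minutes
  path: ./deploy/prod          # invented path inside the repo
  prune: true                  # remove resources that were deleted from Git
  sourceRef:
    kind: GitRepository
    name: my-repo              # invented; must exist as a GitRepository object
```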
-## A quick logg at the TAG App-Delivery
+## A quick look at the TAG App-Delivery

* Mission: Everything related to cloud-native application delivery
* Bi-Weekly Meetings
diff --git a/content/day1/07_hitchhikers_guid_to_platform.md b/content/day1/07_hitchhikers_guid_to_platform.md
index 5ef1e62..9262d1c 100644
--- a/content/day1/07_hitchhikers_guid_to_platform.md
+++ b/content/day1/07_hitchhikers_guid_to_platform.md
@@ -8,30 +8,30 @@ tags:
- dx
---

-This talks looks at bootstrapping Platforms using KSere.
-They do this in regards to AI Workflows.
+This talk looks at bootstrapping Platforms using KServe.
+They do this in regard to AI Workflows.

-## Szenario
+## Scenario

-* Deploy AI Workloads - Sometime consiting of different parts
+* Deploy AI Workloads - Sometimes consisting of different parts
* Models get stored in a model registry

## Baseline

* Consistent APIs throughout the platform
-* Not the kube api directly b/c:
-  * Data scientists are a bit overpowered by the kube api
-  * Not only Kubernetes (also monitoring tools, feedback tools, etc)
+* Not the kube API directly b/c:
+  * Data scientists are a bit overwhelmed by the kube API
+  * Not only Kubernetes (also monitoring tools, feedback tools, etc.)
* Better debugging experience for specific workloads

-## The debugging api
+## The debugging API

* Specific API with enhanced statuses and consistent UX across Code and UI
-* Exampüle Endpoints: Pods, Deployments, InferenceServices
-* Provides a status summary-> Consistent health info across all related ressources
-  * Example: Deployments have progress/availability, Pods have phases, Containers have readyness -> What do we interpret how?
-  * Evaluation: Progressing, Available Count vs Readyness, Replicafailure, Pod Phase, Container Readyness
-* The rules themselfes may be pretty complex, but - since the user doesn't have to check them themselves - the status is simple
+* Example Endpoints: Pods, Deployments, InferenceServices
+* Provides a status summary -> Consistent health info across all related resources
+  * Example: Deployments have progress/availability, Pods have phases, Containers have readiness -> What do we interpret how?
+  * Evaluation: Progressing, Available Count vs Readiness, Replicafailure, Pod Phase, Container Readiness
+* The rules themselves may be pretty complex, but - since the user doesn't have to check them themselves - the status is simple (sketched below)
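To make the "simple status" idea concrete, an entirely invented sketch of what such an aggregated answer could look like (the talk did not show a schema):

```yaml
# Hypothetical debugging-API response - invented for illustration
resource: inferenceservice/sentiment-model
status: Degraded                     # the one simple verdict for the user
checks:
  deployment: Progressing            # from progress/availability conditions
  pods: 2/3 Ready                    # from pod phases
  containers: model-server ImagePullBackOff   # from container readiness
```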
### Debugging Metrics

@@ -47,15 +47,15 @@ They do this in regards to AI Workflows.
* Kine is used to replace/extend etcd with the relational dock db -> Relation namespace<->manifests is stored here and RBAC can be used
* Launchpad: Select Namespace and check resource (fuel) availability/utilization

-### Clsuter maintainance
+### Cluster maintenance

-* Deplyoments can be launched to multiple clusters (even two clusters at once) -> HA through identical clusters
-* The excact same manifests get deployed to two clusters
-* Cluster desired state is stored externally to enable effortless upogrades, rescale, etc
+* Deployments can be launched to multiple clusters (even two clusters at once) -> HA through identical clusters
+* The exact same manifests get deployed to two clusters
+* Cluster desired state is stored externally to enable effortless upgrades, rescale, etc.

### Versioning API

-* Basicly the dock DB
+* Basically the dock DB
* CRDs are the representations of the inference manifests
* Rollbacks, Promotion and History is managed via the CRs
* Why not GitOps: Internal Diffs, deployment overrides, customized features
diff --git a/content/day1/08_scaling_pg.md b/content/day1/08_scaling_pg.md
index 852a97e..271eee2 100644
--- a/content/day1/08_scaling_pg.md
+++ b/content/day1/08_scaling_pg.md
@@ -7,25 +7,25 @@ tags:
- db
---

-A short Talk as Part of the DOK day - presendet by the VP of CloudNative at EDB (one of the biggest PG contributors)
+A short Talk as Part of the Data on Kubernetes Day - presented by the VP of Cloud Native at EDB (one of the biggest PG contributors)

Stated target: Make the world your single point of failure

## Proposal

-* Get rid of Vendor-Lockin using the oss projects PG, K8S and CnPG
+* Get rid of vendor lock-in using the OSS projects PG, K8S and CnPG
* PG was the DB of the year 2023 and a bunch of other times in the past
* CnPG is a Level 5 mature operator

## 4 Pillars

-* Seamless KubeAPI Integration (Operator PAttern)
+* Seamless Kube API Integration (Operator Pattern)
* Advanced observability (Prometheus Exporter, JSON logging)
* Declarative Config (Deploy, Scale, Maintain)
-* Secure by default (Robust contaienrs, mTLS, and so on)
+* Secure by default (Robust containers, mTLS, and so on)

## Clusters

-* Basic Ressource that defines name, instances, snyc and storage (and other params that have same defaults)
+* Basic Resource that defines name, instances, sync and storage (and other parameters that have sane defaults - see the sketch below)
* Implementation: Operator creates:
  * The volumes (PG_Data, WAL (Write ahead log)
  * Primary and Read-Write Service

## Failover

* Failure detected
* Stop R/W Service
* Promote Replica
-* Activat R/W Service
-* Kill old promary and demote to replica
+* Activate R/W Service
+* Kill old primary and demote to replica

## Backup/Recovery

-* Continuos Backup: Write Ahead Log Backup to object store
+* Continuous Backup: Write Ahead Log Backup to object store
* Physical: Create from primary or standby to object store or kube volumes
-* Recovery: Copy full backup and apply WAL until target (last transactio or specific timestamp) is reached
-* Replica Cluster: Basicly recreates a new cluster to a full recovery but keeps the cluster in Read-Only Replica Mode
+* Recovery: Copy full backup and apply WAL until target (last transaction or specific timestamp) is reached
+* Replica Cluster: Basically recreates a new cluster to a full recovery but keeps the cluster in Read-Only Replica Mode
* Planned: Backup Plugin Interface
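For reference, a minimal sketch of such a declarative CnPG cluster with continuous WAL backup (`postgresql.cnpg.io/v1` API; name, size and bucket are invented - check the CnPG docs before copying):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-prod                     # invented
spec:
  instances: 3                      # 1 primary + 2 replicas, automatic failover
  storage:
    size: 20Gi
  backup:
    retentionPolicy: 30d
    barmanObjectStore:              # continuous backup: WAL archiving to S3
      destinationPath: s3://my-backups/pg-prod   # invented bucket
      s3Credentials:
        accessKeyId:
          name: s3-creds            # invented secret name
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: s3-creds
          key: ACCESS_SECRET_KEY
```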
## Multi-Cluster

@@ -51,21 +51,21 @@ Stated target: Make the world your single point of failure

* Just create a replica cluster via WAL-files from S3 on another kube cluster (lags 5 mins behind)
* You can also activate replication streaming

-## Reccomended architecutre
+## Recommended architecture

-* Dev Cluster: 1 Instance without PDB and with Continuos backup
-* Prod: 3 Nodes with automatic failover and continuos backups
+* Dev Cluster: 1 Instance without PDB and with Continuous backup
+* Prod: 3 Nodes with automatic failover and continuous backups
* Symmetric: Two clusters
  * Primary: 3-Node Cluster
-  * Secondary: WAL-Based 3-Node Cluster with a designated primary (to take over if primary cluster fails)
-* Symmetric Streaming: Same as Secondary, but you manually enable the streaming api for live replication
+  * Secondary: WAL-based 3-Node Cluster with a designated primary (to take over if the primary cluster fails)
+* Symmetric Streaming: Same as Secondary, but you manually enable the streaming API for live replication
* Cascading Replication: Scale Symmetric to more clusters
-* Single availability zone: Well, do your best to spread to nodes and aspire to streched kubernetes to more AZs
+* Single availability zone: Well, do your best to spread across nodes and aspire to stretch Kubernetes across more AZs

## Roadmap

* Replica Cluster (Symmetric) Switchover
* Synchronous Symmetric
-* 3rd PArty Plugins
+* 3rd Party Plugins
* Manage DBs via the Operator
* Storage Autoscaling
diff --git a/content/day1/09_serverless.md b/content/day1/09_serverless.md
index 3630087..6ee17ec 100644
--- a/content/day1/09_serverless.md
+++ b/content/day1/09_serverless.md
@@ -4,14 +4,14 @@ weight: 9
---

> When I say serverless I don't mean lambda - I mean serverless
-> That is thousands of lines of yaml - but I don't want to depress you
+> That is thousands of lines of YAML - but I don't want to depress you
> It will be eventually done
> Imagine this error is not happening
> Just imagine how I did this last night

## Goal

-* Take my sourcecode and run it, scale it - jsut don't ask me
+* Take my source code and run it, scale it - just don't ask me

## Baseline

* Use Kaniko/Shipwright for building
* Use Dupr for inter-service Communication

-## Openfunction
+## OpenFunction

-> The glue between different tools to achive serverless
+> The glue between different tools to achieve serverless

* CRD that describes:
  * Build this image and push it to the registry

* Open Questions
  * Where are the serverless servers -> Cluster, dependencies, secrets
-  * How do I create DBs, etc
+  * How do I create DBs, etc.
* Resulting needs
-  * Cluster aaS (using crossplane - in this case using aws)
-  * DBaaS (using crossplane - again usig pq on aws)
-  * App aaS
+  * CLUSTERaaS (using crossplane - in this case using AWS)
+  * DBaaS (using crossplane - again using pg on AWS)
+  * APPaaS
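A rough sketch of such a function CRD, going by OpenFunction's published examples (`core.openfunction.io` API - exact fields differ between API versions, and every name/URL here is invented):

```yaml
apiVersion: core.openfunction.io/v1beta2
kind: Function
metadata:
  name: hello-world                # invented
spec:
  image: registry.example.com/demo/hello-world:latest   # where the built image is pushed
  build:
    builder: openfunction/builder-go:latest             # buildpack-style builder
    srcRepo:
      url: https://github.com/example/hello-world       # invented repo
  serving:                         # deploy + scale (Knative/Dapr under the hood)
    template:
      containers:
        - name: function
          imagePullPolicy: IfNotPresent
```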
diff --git a/content/day1/10_lessons_learned_from_building_a_db_operator.md b/content/day1/10_lessons_learned_from_building_a_db_operator.md
index 3a13b83..46e03c8 100644
--- a/content/day1/10_lessons_learned_from_building_a_db_operator.md
+++ b/content/day1/10_lessons_learned_from_building_a_db_operator.md
@@ -14,21 +14,21 @@ Another talk as part of the Data On Kubernetes Day.
* Managed: Atlas
* Semi: Cloud manager
-* Selfhosted: Enterprise and community operator
+* Self-hosted: Enterprise and community operator

-### Mongo on K8s
+### MongoDB on K8s

* Cluster Architecture
  * Control Plane: Operator
-  * Data Plane: MongoDB Server + Agen (Sidecar Proxy)
+  * Data Plane: MongoDB Server + Agent (Sidecar Proxy)
* Enterprise Operator
-  * Opsmanager CR: Deploys 3-node operator DB and OpsManager
-  * MongoDB CR: The MongoDB cLusters (Compromised of agents)
+  * OpsManager CR: Deploys 3-node operator DB and OpsManager
+  * MongoDB CR: The MongoDB clusters (Comprised of agents)
-* Advanced Usecase: Data Platform with mongodb on demand
-  * Control Plane on one cluster (or on VMs/Hardmetal), data plane in tennant clusters
+* Advanced use case: Data Platform with MongoDB on demand
+  * Control Plane on one cluster (or on VMs/Bare-metal), data plane in tenant clusters
* Result: MongoDB CR can not relate to OpsManager CR directly

## Pitfalls

-* Storage: Agnostic, Topology aware, configureable and resizeable (can't be done with statefulset)
+* Storage: Agnostic, Topology aware, configurable and resizable (can't be done with Statefulset)
* Networking: Cluster-internal (Pod to Pod/Service), External (Split horizon over multicluster)
diff --git a/content/day1/11_platform_beyond_k8s.md b/content/day1/11_platform_beyond_k8s.md
index d736743..c136f52 100644
--- a/content/day1/11_platform_beyond_k8s.md
+++ b/content/day1/11_platform_beyond_k8s.md
@@ -9,8 +9,8 @@ tags:

## CNCF Platform maturity model

-* Was donated to the cncf by syntasso
-* Constantly evolving since 1.0 in november 2023
+* Was donated to the CNCF by Syntasso
+* Constantly evolving since 1.0 in November 2023

### Overview

@@ -25,7 +25,7 @@ tags:
  * Investment: How are funds/staff allocated to platform capabilities
  * Adoption: How and why do users discover this platform
  * Interfaces: How do users interact with and consume platform capabilities
-  * Operations: How are platforms and capabilities planned, prioritzed, developed and maintained
+  * Operations: How are platforms and capabilities planned, prioritized, developed and maintained
  * Measurement: What is the process for gathering and incorporating feedback/learning?

## Goals

@@ -34,24 +34,24 @@ tags:
* Outcomes & Practices
  * Where are you at
  * Limits & Opportunities
-  * Behaviours and outcome
+  * Behaviors and outcome
* Balance People and processes

## Typical Journeys

-### Steps of the jurney
+### Steps of the journey

1. What are your goals and limitations
2. What is my current landscape
-3. Plan babysteps & iterate
+3. Plan baby steps & iterate

-### Szenarios
+### Scenarios

* Bad: I want to improve my k8s platform
* Good: Scaling an enterprise COE (Center Of Excellence)
  * What: Onboard 20 Teams within 20 Months and enforce 8 security regulations
  * Where: We have a dedicated team of centrally funded people
-  * Lay the foundation: More funding for more larger teams -> Switch from Project to platform mindset
+  * Lay the foundation: More funding for more, larger teams -> Switch from Project to platform mindset
  * Do your technical Due diligence in parallel

## Key Lessons

* Know your goals and limitations
* Know your landscape
* Plan in baby steps and iterate
* Lay the foundation for building the right thing and not just anything
-  * Dont forget to do your technical dd in parallel
+  * Don't forget to do your technical due diligence in parallel

## Conclusion

-* Majurity model is a helpful part but not the entire plan
+* Maturity model is a helpful part but not the entire plan
diff --git a/content/day1/12_oversavbility.md b/content/day1/12_oversavbility.md
index 739baab..93b89c5 100644
--- a/content/day1/12_oversavbility.md
+++ b/content/day1/12_oversavbility.md
@@ -6,43 +6,43 @@ tags:
- network
---

-Held by Cilium regarding ebpf and hubble
+Held by Cilium regarding eBPF and Hubble

## eBPF

> Extend the capabilities of the kernel without requiring to change the kernel source code or load modules

* Benefits: Reduce performance overhead, gain deep visibility while being widely available
-* Example Tools: Parca (Profiling), Cilium (Networking), Hubble (Opservability), Tetragon (Security)
+* Example Tools: Parca (Profiling), Cilium (Networking), Hubble (Observability), Tetragon (Security)

## Cilium

-> Opensource Solution for network connectivity between workloads
+> Open source Solution for network connectivity between workloads

## Hubble

> Observability-Layer for cilium

-### Featureset
+### Feature set

* CLI: TCP-Dump on steroids + API Client
* UI: Graphical dependency and connectivity map
-* Prometheus + Grafana + Opentelemetry compatible
+* Prometheus + Grafana + OpenTelemetry compatible
* Metrics up to L7

### Where can it be used

* Service dependency with frequency
-* Kinds of http calls
+* Kinds of HTTP calls
* Network Problems between L4 and L7 (including DNS)
* Application Monitoring through status codes and latency
* Security-Related Network Blocks
-* Services accessed from outside the cluser
+* Services accessed from outside the cluster

### Architecture

-* Cilium Agent: Runs as the CNI für all Pods
-* Server: Runs on each node and retrieves the ebpf from cilium
+* Cilium Agent: Runs as the CNI for all Pods
+* Server: Runs on each node and retrieves the eBPF data from Cilium
* Relay: Provide visibility throughout all nodes

## TL;DR
diff --git a/content/day1/_index.md b/content/day1/_index.md
index 9959604..3d9c278 100644
--- a/content/day1/_index.md
+++ b/content/day1/_index.md
@@ -7,10 +7,10 @@ weight: 1
Day one is the Day for co-located events aka CloudNativeCon.
I spent most of the day attending the Platform Engineering Day - as one might have guessed it's all about platform engineering.

-Everything started with badge pickup - a very smooth experence (but that may be related to me showing up an hour or so too early).
+Everything started with badge pickup - a very smooth experience (but that may be related to me showing up an hour or so too early).

-## Talk reccomandations
+## Talk recommendations

* Beyond Platform Thinking...
-* Hitchhikers Guide to ...
+* Hitchhiker's Guide to ...
* To K8S and beyond...
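PS: if you want to poke at Hubble from the observability talk above, it ships with the Cilium Helm chart - a minimal values sketch (value names as in recent chart versions; verify against the chart before use):

```yaml
hubble:
  enabled: true
  relay:
    enabled: true          # visibility across all nodes
  ui:
    enabled: true          # the graphical dependency/connectivity map
  metrics:
    enabled:               # L3-L7 metrics for Prometheus
      - dns
      - drop
      - tcp
      - http
```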
diff --git a/content/day2/01_opening.md b/content/day2/01_opening.md
index 81cd1ac..f5a27bf 100644
--- a/content/day2/01_opening.md
+++ b/content/day2/01_opening.md
@@ -11,7 +11,7 @@ The keynote itself was presented by the CEO of the CNCF.

## The numbers

-* Over 2000 attendees
+* Over 12000 attendees
* 10 Years of Kubernetes
* 60% of large organizations expect rapid cost increases due to AI/ML (FinOps Survey)
diff --git a/content/day2/_index.md b/content/day2/_index.md
index 111229d..d23c1a6 100644
--- a/content/day2/_index.md
+++ b/content/day2/_index.md
@@ -5,7 +5,7 @@ weight: 2
---

Day two is also the official day one of KubeCon (Day one was just CloudNativeCon).
-This is where all of the people joined (over 2000)
+This is where all of the people joined (over 12000).
The opening keynotes were a mix of talks and panel discussions.
The main topic was - who could have guessed - AI and ML.