Day 1 typos
parent b515be2220
commit e2e3b2fdf3
.vscode/ltex.dictionary.en-US.txt (vendored, new file, +38)
@@ -0,0 +1,38 @@
+CloudNativeCon
+Syntasso
+OpenTelemetry
+Multitannancy
+Multitenancy
+PDBs
+Buildpacks
+buildpacks
+Konveyor
+GenAI
+Kube
+Kustomize
+KServe
+kube
+InferenceServices
+Replicafailure
+etcd
+RBAC
+CRDs
+CRs
+GitOps
+CnPG
+mTLS
+WAL
+AZs
+DBs
+kNative
+Kaniko
+Dupr
+crossplane
+DBaaS
+APPaaS
+CLUSTERaaS
+OpsManager
+multicluster
+Statefulset
+eBPF
+Parca
.vscode/ltex.disabledRules.en-US.txt (vendored, new file, +3)
@@ -0,0 +1,3 @@
+ARROWS
+ARROWS
+ARROWS
.vscode/ltex.hiddenFalsePositives.en-US.txt (vendored, new file, +2)
@@ -0,0 +1,2 @@
+{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QJust create a replica cluster via WAL-files from S3 on another kube cluster (lags 5 mins behind)\nYou can also activate replication streaming\\E$"}
+{"rule":"MORFOLOGIK_RULE_EN_US","sentence":"^\\QResulting needs\nCluster aaS (using crossplane - in this case using aws)\nDBaaS (using crossplane - again usig pq on aws)\nApp aaS\\E$"}
@@ -9,7 +9,7 @@ This current version is probably full of typos - will fix later. This is what ty
 
 ## How did I get there?
 
-I attended KubeCon + CloudNAtiveCon Europe 2024 as the one and only [ODIT.Services](https://odit.services) representative.
+I attended KubeCon + CloudNativeCon Europe 2024 as the one and only [ODIT.Services](https://odit.services) representative.
 
 ## Style Guide
 
@@ -7,4 +7,4 @@ tags:
 ---
 
 The first "event" of the day was - as always - the opening keynote.
-Today presented by Redhat and Syntasso.
+Today presented by Red Hat and Syntasso.
@@ -6,20 +6,19 @@ tags:
 - dx
 ---
 
-By VMware (of all people) - kinda funny that they chose this title with the wole Broadcom fun.
+By VMware (of all people) - kinda funny that they chose this title with the whole Broadcom fun.
 The main topic of this talk is: What interface do we choose for what capability.
 
 ## Personas
 
-* Experts: Kubernetes, DB Engee
+* Experts: Kubernetes, DB engineer
 * Users: Employees that just want to do stuff
-* Platform Engeneers: Connect Users to Services by Experts
+* Platform engineers: Connect Users to Services by Experts
 
 ## Goal
 
-* Create Interfaces
-* Interface: Connect Users to Services
-* Problem: Many diferent types of Interfaces (SaaS, GUI, CLI) with different capabilities
+* Create Interfaces: Connect Users to Services
+* Problem: Many different types of Interfaces (SaaS, GUI, CLI) with different capabilities
 
 ## Dimensions
 
@@ -27,13 +26,13 @@ The main topic of this talk is: What interface do we choose for what capability.
 
 * Autonomy: external dependency (low) <-> self-service (high)
   * low: Ticket system -> But sometimes good for getting an expert
-  * high: Portal -> Nice, but somethimes we just need a human contact
+  * high: Portal -> Nice, but sometimes we just need a human contact
 * Contextual distance: stay in the same tool (low) <-> switch tools (high)
   * low: IDE plugin -> High potential friction if stuff goes wrong/complex (context switch needed)
   * high: Wiki or ticketing system
 * Capability skill: anyone can do it (low) <-> Made for experts (high)
-  * low: transparent sidecar (eg vuln scanner)
-  * high: cli
+  * low: transparent sidecar (e.g. vulnerability scanner)
+  * high: CLI
 * Interface skill: anyone can do it (low) <-> needs specialized interface skills (high)
   * low: Documentation in web aka wiki-style
   * high: Code templates (a sample helm values.yaml or raw terraform provider)
@@ -42,4 +41,4 @@ The main topic of this talk is: What interface do we choose for what capability.
 
 * You can use multiple interfaces for one capability
 * APIs (proverbial pig) are the most important interface b/c it can provide the baseline for all other interfaces
-* The beautification (lipstick) of the API through other interfaces makes uers happy
+* The beautification (lipstick) of the API through other interfaces makes users happy
@@ -62,10 +62,10 @@ Presented by the implementers at Thoughtworks (TW).
 ### Observability
 
 * Tool: Honeycomb
-* Metrics: Opentelemetry
+* Metrics: OpenTelemetry
 * Operator reconcile steps are exposed as traces
 
 ## Q&A
 
-* Your teams are pretty autonomus -> What to do with more classic teams: Over a multi-year jurney every team settles on the ownership and selfservice approach
-* How to teams get access to stages: They just get temselves a stage namespace, attach to ingress and have fun (admission handles the rest)
+* Your teams are pretty autonomous -> What to do with more classic teams: Over a multi-year journey every team settles on the ownership and self-service approach
+* How teams get access to stages: They just get themselves a stage namespace, attach to ingress and have fun (admission handles the rest)
@@ -17,6 +17,6 @@ No real value
 ## What do we need
 
 * User documentation
-* Adoption & Patnership
+* Adoption & Partnership
 * Platform as a Product
 * Customer feedback
@@ -10,7 +10,7 @@ tags:
 - multicluster
 ---
 
-Part of the Multitannancy Con presented by Adobe
+Part of the Multi-tenancy Con presented by Adobe
 
 ## Challenges
 
@@ -22,24 +22,24 @@ Part of the Multitannancy Con presented by Adobe
 
 * Azure in Base - AWS on the edge
 * Single Tenant Clusters (Simpler Governance)
-* Responsibility is Shared between App and Platform (Monitoring, Ingress, etc)
+* Responsibility is Shared between App and Platform (Monitoring, Ingress, etc.)
 * Problem: Huge manual investment and over provisioning
 * Result: Access Control to tenant Namespaces and Capacity Planning -> Pretty much a multi tenant cluster with one tenant per cluster
 
-### Second Try - Microcluster
+### Second Try - Micro Clusters
 
 * One Cluster per Service
 
-### Third Try - Multitennancy
+### Third Try - Multi-tenancy
 
 * Use a bunch of components deployed by platform Team (Ingress, CD/CD, Monitoring, ...)
-* Harmonized general Runtime (cloud agnostic): Codenamed Ethos -> OVer 300 Clusters
+* Harmonized general Runtime (cloud-agnostic): Code-named Ethos -> Over 300 Clusters
 * Both shared clusters (shared by namespace) and dedicated clusters
-* Cluster config is a basic json with name, capacity, teams
-* Capacity Managment get's Monitored using Prometheus
-* Cluster Changes should be non-desruptive -> K8S-Shredder
-* Cost efficiency: Use good PDBs and livelyness/readyness Probes alongside ressource requests and limits
+* Cluster config is a basic JSON with name, capacity, teams
+* Capacity Management gets Monitored using Prometheus
+* Cluster Changes should be nondestructive -> K8S-Shredder
+* Cost efficiency: Use good PDBs and liveness/readiness Probes alongside resource requests and limits
 
 ## Conclusion
 
-* There is a balance between cost, customization, setup and security between single-tenant und multi-tenant
+* There is a balance between cost, customization, setup and security between single-tenant and multi-tenant
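For context, the cost-efficiency bullet above maps to a few lines of manifest. A minimal sketch (all names and numbers are illustrative, not from the talk):

```yaml
# Keep at least 2 replicas alive during voluntary disruptions (e.g. K8S-Shredder draining a node)
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: web-pdb
spec:
  minAvailable: 2
  selector:
    matchLabels:
      app: web
---
# Probes plus requests/limits so capacity planning has real numbers to work with
apiVersion: v1
kind: Pod
metadata:
  name: web
  labels:
    app: web
spec:
  containers:
    - name: web
      image: registry.example.com/web:1.0  # placeholder image
      livenessProbe:
        httpGet: { path: /healthz, port: 8080 }
      readinessProbe:
        httpGet: { path: /ready, port: 8080 }
      resources:
        requests: { cpu: 100m, memory: 128Mi }
        limits: { cpu: 500m, memory: 256Mi }
```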
@@ -3,42 +3,41 @@ title: Lightning talks
 weight: 6
 ---
 
-The lightning talks are 10-minute talks by diferent cncf projects.
+The lightning talks are 10-minute talks by different CNCF projects.
 
-## Building contaienrs at scale using buildpacks
+## Building containers at scale using buildpacks
 
-A Project lightning talk by heroku and the cncf buildpacks.
+A Project lightning talk by Heroku and the CNCF buildpacks.
 
 ### How and why buildpacks?
 
-* What: A simple way to build reproducible contaienr images
-* Why: Scale, Reuse, Rebase
-* Rebase: Buildpacks are structured as layers
+* What: A simple way to build reproducible container images
+* Why: Scale, Reuse, Rebase: Buildpacks are structured as layers
 * Dependencies, app builds and the runtime are separated -> Easy update
-* How: Use the PAck CLI `pack build <image>` `docker run <image>`
+* How: Use the Pack CLI `pack build <image>` `docker run <image>`
 
 ## Konveyor
 
 A Platform for migration of legacy apps to cloud native platforms.
 
-* Parts: Hub, Analysis (with langugage server), Assesment
+* Parts: Hub, Analysis (with language server), assessment
 * Roadmap: Multi language support, GenAI, Asset Generation (e.g. Kube Deployments)
 
-## Argo'S Communuty Driven Development
+## Argo's Community Driven Development
 
-Pretty mutch a short intropduction to Argo Project
+Pretty much a short introduction to Argo Project
 
 * Project Parts: Workflows (CI), Events, CD, Rollouts
-* NPS: Net Promoter Score (How likely are you to recoomend this) -> Everyone loves argo (based on their survey)
-* Rollouts: Can be based with prometheus metrics
+* NPS: Net Promoter Score (How likely are you to recommend this) -> Everyone loves Argo (based on their survey)
+* Rollouts: Can be based with Prometheus metrics
 
 ## Flux
 
-* Components: Helm, Kustomize, Terrafrorm, ...
-* Flagger Now supports gateway api, prometheus, datadog and more
+* Components: Helm, Kustomize, Terraform, ...
+* Flagger Now supports gateway API, Prometheus, Datadog and more
 * New Releases
 
-## A quick logg at the TAG App-Delivery
+## A quick look at the TAG App-Delivery
 
 * Mission: Everything related to cloud-native application delivery
 * Bi-Weekly Meetings
@@ -8,30 +8,30 @@ tags:
 - dx
 ---
 
-This talks looks at bootstrapping Platforms using KSere.
-They do this in regards to AI Workflows.
+This talk looks at bootstrapping Platforms using KServe.
+They do this in regard to AI Workflows.
 
-## Szenario
+## Scenario
 
-* Deploy AI Workloads - Sometime consiting of different parts
+* Deploy AI Workloads - Sometimes consisting of different parts
 * Models get stored in a model registry
 
 ## Baseline
 
 * Consistent APIs throughout the platform
-* Not the kube api directly b/c:
-  * Data scientists are a bit overpowered by the kube api
-  * Not only Kubernetes (also monitoring tools, feedback tools, etc)
+* Not the kube API directly b/c:
+  * Data scientists are a bit overpowered by the kube API
+  * Not only Kubernetes (also monitoring tools, feedback tools, etc.)
   * Better debugging experience for specific workloads
 
-## The debugging api
+## The debugging API
 
 * Specific API with enhanced statuses and consistent UX across Code and UI
-* Exampüle Endpoints: Pods, Deployments, InferenceServices
-* Provides a status summary-> Consistent health info across all related ressources
-* Example: Deployments have progress/availability, Pods have phases, Containers have readyness -> What do we interpret how?
-* Evaluation: Progressing, Available Count vs Readyness, Replicafailure, Pod Phase, Container Readyness
-* The rules themselfes may be pretty complex, but - since the user doesn't have to check them themselves - the status is simple
+* Example Endpoints: Pods, Deployments, InferenceServices
+* Provides a status summary -> Consistent health info across all related resources
+* Example: Deployments have progress/availability, Pods have phases, Containers have readiness -> What do we interpret how?
+* Evaluation: Progressing, Available Count vs Readiness, Replicafailure, Pod Phase, Container Readiness
+* The rules themselves may be pretty complex, but - since the user doesn't have to check them themselves - the status is simple
 
 ### Debugging Metrics
 
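For reference, an InferenceService - the KServe resource these status rules roll up - looks roughly like this (model format and storage location are placeholders):

```yaml
apiVersion: serving.kserve.io/v1beta1
kind: InferenceService
metadata:
  name: iris-classifier
spec:
  predictor:
    model:
      modelFormat:
        name: sklearn
      # placeholder - in this setup the URI would come from the model registry
      storageUri: s3://models/iris/v1
```

The summary status described above would then fold deployment progress, pod phases and container readiness of everything behind this one resource into a single health field.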
@@ -47,15 +47,15 @@ They do this in regards to AI Workflows.
 * Kine is used to replace/extend etcd with the relational dock db -> Relation namespace<->manifests is stored here and RBAC can be used
 * Launchpad: Select Namespace and check resource (fuel) availability/utilization
 
-### Clsuter maintainance
+### Cluster maintenance
 
-* Deplyoments can be launched to multiple clusters (even two clusters at once) -> HA through identical clusters
-* The excact same manifests get deployed to two clusters
-* Cluster desired state is stored externally to enable effortless upogrades, rescale, etc
+* Deployments can be launched to multiple clusters (even two clusters at once) -> HA through identical clusters
+* The exact same manifests get deployed to two clusters
+* Cluster desired state is stored externally to enable effortless upgrades, rescale, etc
 
 ### Versioning API
 
-* Basicly the dock DB
+* Basically the dock DB
 * CRDs are the representations of the inference manifests
 * Rollbacks, Promotion and History is managed via the CRs
 * Why not GitOps: Internal Diffs, deployment overrides, customized features
@@ -7,25 +7,25 @@ tags:
 - db
 ---
 
-A short Talk as Part of the DOK day - presendet by the VP of CloudNative at EDB (one of the biggest PG contributors)
+A short Talk as Part of the Data on Kubernetes day - presented by the VP of Cloud Native at EDB (one of the biggest PG contributors)
 Stated target: Make the world your single point of failure
 
 ## Proposal
 
-* Get rid of Vendor-Lockin using the oss projects PG, K8S and CnPG
+* Get rid of Vendor-Lockin using the OSS projects PG, K8S and CnPG
 * PG was the DB of the year 2023 and a bunch of other times in the past
 * CnPG is a Level 5 mature operator
 
 ## 4 Pillars
 
-* Seamless KubeAPI Integration (Operator PAttern)
+* Seamless Kube API Integration (Operator Pattern)
 * Advanced observability (Prometheus Exporter, JSON logging)
 * Declarative Config (Deploy, Scale, Maintain)
-* Secure by default (Robust contaienrs, mTLS, and so on)
+* Secure by default (Robust containers, mTLS, and so on)
 
 ## Clusters
 
-* Basic Ressource that defines name, instances, snyc and storage (and other params that have same defaults)
+* Basic Resource that defines name, instances, sync and storage (and other parameters that have same defaults)
 * Implementation: Operator creates:
   * The volumes (PG_Data, WAL (Write ahead log)
   * Primary and Read-Write Service
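A sketch of such a basic Cluster resource, going by the CloudNativePG docs (values are examples):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-main
spec:
  instances: 3        # one primary, two replicas
  minSyncReplicas: 1  # synchronous replication bounds
  maxSyncReplicas: 1
  storage:
    size: 20Gi        # data volume; a separate WAL volume can be added via walStorage
```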
@@ -35,15 +35,15 @@ Stated target: Make the world your single point of failure
 * Failure detected
 * Stop R/W Service
 * Promote Replica
-* Activat R/W Service
-* Kill old promary and demote to replica
+* Activate R/W Service
+* Kill old primary and demote to replica
 
 ## Backup/Recovery
 
-* Continuos Backup: Write Ahead Log Backup to object store
+* Continuous Backup: Write Ahead Log Backup to object store
 * Physical: Create from primary or standby to object store or kube volumes
-* Recovery: Copy full backup and apply WAL until target (last transactio or specific timestamp) is reached
-* Replica Cluster: Basicly recreates a new cluster to a full recovery but keeps the cluster in Read-Only Replica Mode
+* Recovery: Copy full backup and apply WAL until target (last transaction or specific timestamp) is reached
+* Replica Cluster: Basically recreates a new cluster to a full recovery but keeps the cluster in Read-Only Replica Mode
 * Planned: Backup Plugin Interface
 
 ## Multi-Cluster
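A hedged sketch of what the continuous backup and a point-in-time recovery from the bullets above look like in the Cluster spec (field names as documented for CloudNativePG; bucket and credentials are made up):

```yaml
# Continuous backup: WAL archiving plus physical base backups to an object store
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-main
spec:
  instances: 3
  storage:
    size: 20Gi
  backup:
    barmanObjectStore:
      destinationPath: s3://pg-backups/main  # made-up bucket
      s3Credentials:
        accessKeyId:
          name: aws-creds
          key: ACCESS_KEY_ID
        secretAccessKey:
          name: aws-creds
          key: SECRET_ACCESS_KEY
---
# Recovery: bootstrap a new cluster from the full backup, replaying WAL up to a target
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-restored
spec:
  instances: 3
  storage:
    size: 20Gi
  bootstrap:
    recovery:
      source: pg-main
      recoveryTarget:
        targetTime: "2024-03-19 12:00:00"  # omit to recover to the last transaction
  externalClusters:
    - name: pg-main
      barmanObjectStore:
        destinationPath: s3://pg-backups/main
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: SECRET_ACCESS_KEY
```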
@@ -51,21 +51,21 @@ Stated target: Make the world your single point of failure
 * Just create a replica cluster via WAL-files from S3 on another kube cluster (lags 5 mins behind)
 * You can also activate replication streaming
 
-## Reccomended architecutre
+## Recommended architecture
 
-* Dev Cluster: 1 Instance without PDB and with Continuos backup
-* Prod: 3 Nodes with automatic failover and continuos backups
+* Dev Cluster: 1 Instance without PDB and with Continuous backup
+* Prod: 3 Nodes with automatic failover and continuous backups
 * Symmetric: Two clusters
   * Primary: 3-Node Cluster
-  * Secondary: WAL-Based 3-Node Cluster with a designated primary (to take over if primary cluster fails)
-* Symmetric Streaming: Same as Secondary, but you manually enable the streaming api for live replication
+  * Secondary: WAL based 3-Node Cluster with a designated primary (to take over if primary cluster fails)
+* Symmetric Streaming: Same as Secondary, but you manually enable the streaming API for live replication
 * Cascading Replication: Scale Symmetric to more clusters
-* Single availability zone: Well, do your best to spread to nodes and aspire to streched kubernetes to more AZs
+* Single availability zone: Well, do your best to spread to nodes and aspire to stretched Kubernetes to more AZs
 
 ## Roadmap
 
 * Replica Cluster (Symmetric) Switchover
 * Synchronous Symmetric
-* 3rd PArty Plugins
+* 3rd Party Plugins
 * Manage DBs via the Operator
 * Storage Autoscaling
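The WAL-based secondary from the symmetric architecture above is essentially a Cluster bootstrapped from the primary's object store with replica mode switched on - roughly (again a sketch, names made up):

```yaml
apiVersion: postgresql.cnpg.io/v1
kind: Cluster
metadata:
  name: pg-secondary  # lives in the other kube cluster / AZ
spec:
  instances: 3
  storage:
    size: 20Gi
  replica:
    enabled: true     # stays read-only until promoted
    source: pg-primary
  bootstrap:
    recovery:
      source: pg-primary
  externalClusters:
    - name: pg-primary
      # WAL-file based replication; for streaming you would add connection parameters instead
      barmanObjectStore:
        destinationPath: s3://pg-backups/primary
        s3Credentials:
          accessKeyId:
            name: aws-creds
            key: ACCESS_KEY_ID
          secretAccessKey:
            name: aws-creds
            key: SECRET_ACCESS_KEY
```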
@@ -4,14 +4,14 @@ weight: 9
 ---
 
 > When I say serverless I don't mean lambda - I mean serverless
-> That is thousands of lines of yaml - but I don't want to depress you
+> That is thousands of lines of YAML - but I don't want to depress you
 > It will be eventually done
 > Imagine this error is not happening
 > Just imagine how I did this last night
 
 ## Goal
 
-* Take my sourcecode and run it, scale it - jsut don't ask me
+* Take my source code and run it, scale it - just don't ask me
 
 ## Baseline
 
@@ -22,7 +22,7 @@ weight: 9
 
 ## Open function
 
-> The glue between different tools to achive serverless
+> The glue between different tools to achieve serverless
 
 * CRD that describes:
   * Build this image and push it to the registry
@@ -35,8 +35,8 @@ weight: 9
 
 * Open Questions
   * Where are the serverless servers -> Cluster, dependencies, secrets
-  * How do I create DBs, etc
+  * How do I create DBs, etc.
 * Resulting needs
-  * Cluster aaS (using crossplane - in this case using aws)
-  * DBaaS (using crossplane - again usig pq on aws)
-  * App aaS
+  * CLUSTERaaS (using crossplane - in this case using AWS)
+  * DBaaS (using crossplane - again using pg on AWS)
+  * APPaaS
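For the consumer, "DBaaS using crossplane" usually boils down to a namespaced claim against an API the platform team defined. Hypothetical kind and group here - the real ones come from whatever XRD/Composition is installed:

```yaml
# Hypothetical claim - kind and apiVersion are defined by the platform's own XRD
apiVersion: platform.example.org/v1alpha1
kind: PostgreSQLInstance
metadata:
  name: my-app-db
spec:
  parameters:
    storageGB: 20
    region: eu-central-1
  compositionSelector:
    matchLabels:
      provider: aws  # the matching Composition maps this to managed AWS resources
  writeConnectionSecretToRef:
    name: my-app-db-conn  # the app reads its credentials from this Secret
```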
@@ -14,21 +14,21 @@ Another talk as part of the Data On Kubernetes Day.
 
 * Managed: Atlas
 * Semi: Cloud manager
-* Selfhosted: Enterprise and community operator
+* Self-hosted: Enterprise and community operator
 
-### Mongo on K8s
+### MongoDB on K8s
 
 * Cluster Architecture
   * Control Plane: Operator
-  * Data Plane: MongoDB Server + Agen (Sidecar Proxy)
+  * Data Plane: MongoDB Server + Agent (Sidecar Proxy)
 * Enterprise Operator
-  * Opsmanager CR: Deploys 3-node operator DB and OpsManager
-  * MongoDB CR: The MongoDB cLusters (Compromised of agents)
-* Advanced Usecase: Data Platform with mongodb on demand
-  * Control Plane on one cluster (or on VMs/Hardmetal), data plane in tennant clusters
+  * OpsManager CR: Deploys 3-node operator DB and OpsManager
+  * MongoDB CR: The MongoDB clusters (Comprised of agents)
+* Advanced use case: Data Platform with MongoDB on demand
+  * Control Plane on one cluster (or on VMs/Bare-metal), data plane in tenant clusters
   * Result: MongoDB CR can not relate to OpsManager CR directly
 
 ## Pitfalls
 
-* Storage: Agnostic, Topology aware, configureable and resizeable (can't be done with statefulset)
+* Storage: Agnostic, Topology aware, configurable and resizable (can't be done with Statefulset)
 * Networking: Cluster-internal (Pod to Pod/Service), External (Split horizon over multicluster)
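A MongoDB CR under the enterprise operator is roughly this shape (sketched from memory of the operator docs; the project ConfigMap and credentials Secret are assumed to exist):

```yaml
apiVersion: mongodb.com/v1
kind: MongoDB
metadata:
  name: my-replica-set
spec:
  type: ReplicaSet
  members: 3
  version: "6.0.5"
  opsManager:            # links this data-plane cluster to the OpsManager deployed via its own CR
    configMapRef:
      name: my-project   # assumed ConfigMap with the OpsManager URL and project name
  credentials: my-om-credentials  # assumed Secret holding the OpsManager API key
```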
@@ -9,8 +9,8 @@ tags:
 
 ## CNCF Platform maturity model
 
-* Was donated to the cncf by syntasso
-* Constantly evolving since 1.0 in november 2023
+* Was donated to the CNCF by Syntasso
+* Constantly evolving since 1.0 in November 2023
 
 ### Overview
 
|
||||
* Investment: How are funds/staff allocated to platform capabilities
|
||||
* Adoption: How and why do users discover this platform
|
||||
* Interfaces: How do users interact with and consume platform capabilities
|
||||
* Operations: How are platforms and capabilities planned, prioritzed, developed and maintained
|
||||
* Operations: How are platforms and capabilities planned, prioritized, developed and maintained
|
||||
* Measurement: What is the process for gathering and incorporating feedback/learning?
|
||||
|
||||
## Goals
|
||||
@@ -34,24 +34,24 @@ tags:
 * Outcomes & Practices
 * Where are you at
 * Limits & Opportunities
-* Behaviours and outcome
+* Behaviors and outcome
 * Balance People and processes
 
 ## Typical Journeys
 
-### Steps of the jurney
+### Steps of the journey
 
 1. What are your goals and limitations
 2. What is my current landscape
 3. Plan baby steps & iterate
 
-### Szenarios
+### Scenarios
 
 * Bad: I want to improve my k8s platform
 * Good: Scaling an enterprise COE (Center Of Excellence)
   * What: Onboard 20 Teams within 20 Months and enforce 8 security regulations
   * Where: We have a dedicated team of centrally funded people
-  * Lay the foundation: More funding for more larger teams -> Switch from Project to platform mindset
+  * Lay the foundation: More funding for more, larger teams -> Switch from Project to platform mindset
   * Do your technical Due diligence in parallel
 
 ## Key Lessons
@@ -60,8 +60,8 @@ tags:
 * Know your landscape
 * Plan in baby steps and iterate
 * Lay the foundation for building the right thing and not just anything
-* Dont forget to do your technical dd in parallel
+* Don't forget to do your technical dd in parallel
 
 ## Conclusion
 
-* Majurity model is a helpful part but not the entire plan
+* Maturity model is a helpful part but not the entire plan
@@ -6,14 +6,14 @@ tags:
 - network
 ---
 
-Held by Cilium regarding ebpf and hubble
+Held by Cilium regarding eBPF and Hubble
 
 ## eBPF
 
 > Extend the capabilities of the kernel without requiring to change the kernel source code or load modules
 
 * Benefits: Reduce performance overhead, gain deep visibility while being widely available
-* Example Tools: Parca (Profiling), Cilium (Networking), Hubble (Opservability), Tetragon (Security)
+* Example Tools: Parca (Profiling), Cilium (Networking), Hubble (Observability), Tetragon (Security)
 
 ## Cilium
 
@@ -27,22 +27,22 @@ Held by Cilium regarding ebpf and hubble
 
 * CLI: TCP-Dump on steroids + API Client
 * UI: Graphical dependency and connectivity map
-* Prometheus + Grafana + Opentelemetry compatible
+* Prometheus + Grafana + OpenTelemetry compatible
 * Metrics up to L7
 
 ### Where can it be used
 
 * Service dependency with frequency
-* Kinds of http calls
+* Kinds of HTTP calls
 * Network Problems between L4 and L7 (including DNS)
 * Application Monitoring through status codes and latency
 * Security-Related Network Blocks
-* Services accessed from outside the cluser
+* Services accessed from outside the cluster
 
 ### Architecture
 
-* Cilium Agent: Runs as the CNI für all Pods
-* Server: Runs on each node and retrieves the ebpf from cilium
+* Cilium Agent: Runs as the CNI for all Pods
+* Server: Runs on each node and retrieves the eBPF from cilium
 * Relay: Provide visibility throughout all nodes
 
 ## TL;DR
@@ -7,10 +7,10 @@ weight: 1
 Day one is the Day for co-located events aka CloudNativeCon.
 I spent most of the day attending the Platform Engineering Day - as one might have guessed it's all about platform engineering.
 
-Everything started with badge pickup - a very smooth experence (but that may be related to me showing up an hour or so too early).
+Everything started with badge pickup - a very smooth experience (but that may be related to me showing up an hour or so too early).
 
-## Talk reccomandations
+## Talk recommendations
 
 * Beyond Platform Thinking...
-* Hitchhikers Guide to ...
+* Hitchhiker's Guide to ...
 * To K8S and beyond...
@@ -11,7 +11,7 @@ The keynote itself was presented by the CEO of the CNCF.
 
 ## The numbers
 
-* Over 2000 attendees
+* Over 12000 attendees
 * 10 Years of Kubernetes
 * 60% of large organizations expect rapid cost increases due to AI/ML (FinOps Survey)
 
@@ -5,7 +5,7 @@ weight: 2
 ---
 
 Day two is also the official day one of KubeCon (Day one was just CloudNativeCon).
-This is where all of the people joined (over 2000)
+This is where all of the people joined (over 12000)
 
 The opening keynotes were a mix of talks and panel discussions.
 The main topic was - who could have guessed - AI and ML.