Compare commits

...

7 Commits

| SHA1 | Message | Checks | Date |
|------|---------|--------|------|
| 14edda0bfb | docs(day2): Added multi tenant isolation talk | Build latest image / build-container (push): successful in 45s | 2025-07-22 16:19:51 +02:00 |
| b4b5c11f12 | fix: ":" in title | Build latest image / build-container (push): successful in 45s | 2025-07-22 15:55:01 +02:00 |
| c5fd44b890 | docs(day2): Added devex talk | Build latest image / build-container (push): failing after 33s | 2025-07-22 15:51:59 +02:00 |
| 3b8fdea331 | fix(day1): Added missing tag | | 2025-07-22 15:47:34 +02:00 |
| 6c5ae7ac6a | fix: Changed baseurl | | 2025-07-22 15:47:23 +02:00 |
| b78b472be2 | docs(day2): Added compliance automation talk notes | Build latest image / build-container (push): failing after 32s | 2025-07-22 14:45:02 +02:00 |
| 0c9aa34b7f | docs(day2): Added q&a to kcp talk | | 2025-07-22 14:26:23 +02:00 |
8 changed files with 163 additions and 2 deletions

@@ -1,4 +1,4 @@
baseURL: "https://cnsmuc25.nicolai-ort.com"
baseURL: "https://cnsmunich25.nicolai-ort.com"
title: "Cloud Native Summit Munich 2025"

@@ -17,6 +17,8 @@ After a short talk with my boss, I got sent there by my employer [DATEV eG](http
I'd say that attending CNS Munich 2025 was worth it. The event is pretty close to my place of employment (2hrs by car or train) and relatively small in size (400 attendees). The talks varied a bit - the first day had a bunch of interesting talks, but the second day indulged in AI-related talks (and they were not quite my cup of tea). This might be fine for others, but I've heard enough about AI use cases for the coming years at the last events I attended (or just on Reddit).
Maybe distributing the AI talks over the two days - while always providing an interesting alternative - might be the right move for next time.
Apart from AI, many talks focused on platforms and platform engineering.
## And how does this website get its content?
```mermaid

@@ -3,6 +3,7 @@ title: What going cloud native taught us about developer experience
weight: 7
tags:
- devex
- dx
---
<!-- {{% button href="https://youtu.be/rkteV6Mzjfs" style="warning" icon="video" %}}Watch talk on YouTube{{% /button %}} -->

@@ -102,3 +102,8 @@ OrgA-->TeamA
- The internal platform can be bought, customized, or DIYed, but the API layer does not change -> interchangeable backends (see the workspace sketch below)
- Kubernetes is already widespread and makes it easy to use different projects
- Backed by the CNCF, flat learning curve
## Q&A
- Is OIDC provided: Yes, right now globally for all workspaces; per-workspace OIDC is WIP
- What about kcp x Crossplane: Yes, it is possible; more in September with a talk during Container Days
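For context on what a tenant boundary looks like in kcp: a workspace is created like any other Kubernetes-style object. A minimal sketch, assuming kcp's tenancy API (`tenancy.kcp.io/v1alpha1`) and the built-in `universal` workspace type - verify both against the kcp version you run:

```yaml
# Sketch only: API group/version and workspace type assumed from the kcp docs, not from the talk.
apiVersion: tenancy.kcp.io/v1alpha1
kind: Workspace
metadata:
  name: team-a
spec:
  type:
    name: universal   # assumed built-in workspace type
    path: root
```

Each workspace then behaves like its own logical cluster with its own API surface, which is what makes the backends behind that API interchangeable.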

@@ -0,0 +1,55 @@
---
title: "Automating Compliance and Infrastructure Plumbing: Tackling the Boring Stuff"
weight: 6
tags:
- compliance
- backstage
---
<!-- {{% button href="https://youtu.be/rkteV6Mzjfs" style="warning" icon="video" %}}Watch talk on YouTube{{% /button %}} -->
<!-- {{% button href="https://docs.google.com/presentation/d/1nEK0CVC_yQgIDqwsdh-PRihB6dc9RyT-" style="tip" icon="person-chalkboard" %}}Slides{{% /button %}} -->
They basically presented a bunch of examples of how their platform handles the creation of different resources.
Most of the examples were too detailed, so I did not note them down.
The DX also did not feel that easy (at least from their examples and screenshots).
## The "Blueprint"
### Idea
- Centralized Configuration (Source of truth)
- Automatic provisioning and management of services
- Continuous reconciliation
- Version control (git) for auditing
### Platform components
- Classic: Slow manual provisioning with a tendency towards config drift
- Service Catalog: YAML files in a central repo following the Backstage entity definition (see the sketch after this list)
- Automation: GitOps
- Backstage: For the UI
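As a rough idea of what such a catalog entry looks like, here is a generic Backstage `Component` descriptor; the names and annotation values are illustrative, not taken from the talk:

```yaml
# catalog-info.yaml - generic Backstage component descriptor (illustrative, not from the talk)
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service                           # hypothetical service name
  annotations:
    backstage.io/kubernetes-id: payments-service   # used by the Kubernetes plugin to correlate resources
spec:
  type: service
  lifecycle: production
  owner: team-payments                             # hypothetical owning team
```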
### Implementation
- A bunch of backstage components with operators (some crossplane, some not)
- Example - New resource with namespace: The namespace gets created in Kubernetes and Elasticsearch, alongside an EntraID group whose members feed the RoleBinding for the namespace (see the sketch after this list)
- Example - DNS: Registers a route in Kong, a DNS record via ExternalDNS, and generates a certificate for the route (via cert-manager)
- Monitoring: Elasticsearch, CR(D) status/events, Backstage catalog (just shows the Kubernetes status)
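On the Kubernetes side, the namespace example boils down to plain objects like the following; the group and role names are made up, and the EntraID group itself would be provisioned by their operator outside of Kubernetes:

```yaml
# Illustrative only - names are hypothetical; the EntraID group is created separately by an operator.
apiVersion: v1
kind: Namespace
metadata:
  name: team-a-prod
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: team-a-admins
  namespace: team-a-prod
subjects:
  - kind: Group
    name: "entra-group-team-a"   # hypothetical EntraID group, mapped in via OIDC
    apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: ClusterRole
  name: admin                    # built-in role, scoped to the namespace by the RoleBinding
  apiGroup: rbac.authorization.k8s.io
```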
### Challenges
- Developer buy-in -> Workshops, talks, enforcement b/c compliance and stuff
- Integration with existing systems
- Conflicting requirements -> They just forced a decision because "compliance needs a unified interface"
## Q&A
- Why the Backstage YAML format: Well, the engineers just decided to
- How did you convince them to switch over from ServiceNow: No one was sad to get rid of ServiceNow
- Is Backstage read-only: No, it also supports write actions (natively and through Headlamp)
## TL;DR
- They use Git(Ops) for auditing
- They use operators and Crossplane for reconciliation
- Backstage acts as the UI for all of this (visualizes service status and relationships)

content/day2/07_devex.md

@@ -0,0 +1,69 @@
---
title: "Creating a smooth Developer Experience: from complexity to simplicity"
weight: 7
tags:
- dx
- devex
---
<!-- {{% button href="https://youtu.be/rkteV6Mzjfs" style="warning" icon="video" %}}Watch talk on YouTube{{% /button %}} -->
<!-- {{% button href="https://docs.google.com/presentation/d/1nEK0CVC_yQgIDqwsdh-PRihB6dc9RyT-" style="tip" icon="person-chalkboard" %}}Slides{{% /button %}} -->
## Why?
- Complexity in software dev is increasing
- Kubernetes itself is simple, but the landscape is gigantic
- Automation reduces complexity, but creating automation adds complexity
## Cognitive Load
- Intrinsic Load: Natural complexity of a task
- Germane: Effort needed to build expertise and optimise workflows
- Extraneous: Unnecessary load caused by poor tools or processes combined with distractions
## Example for the rest of the talk
1. Dev: Needs Environment
2. Ops: Sets up the env in multiple steps
3. Dev: Receives the new env
### Problems
- Maybe the ops person has no time
- Doing things via tickets takes a shit ton of time
## Our Lord and savior: Platform Engineering
1. Centralize
2. Automate: Automate the devops cycle
3. Simplify: Solve complexities associated with the cloud
4. Efficiency: Avoid bottlenecks
5. Quality and Reliability
6. Self-Service (see the sketch below)
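To make the self-service point concrete: the idea is that a dev requests an environment declaratively instead of opening a ticket. This is a purely hypothetical sketch - the API group, kind, and fields are invented for illustration (in practice this could be backed by something like Crossplane):

```yaml
# Hypothetical self-service request - API group, kind, and fields are invented for illustration.
apiVersion: platform.example.com/v1alpha1
kind: EnvironmentClaim
metadata:
  name: feature-x-dev
spec:
  team: team-a
  size: small   # t-shirt sizing instead of raw infrastructure parameters
  ttl: 72h      # environment cleans itself up - no ops ticket needed
```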
### Circular Economy
1. When a team creates a service, it can publish it to a marketplace
2. Another team can use it, customize it, and publish the customized version to the marketplace
## Metrics
- The invisible engine behind every innovation -> No improvement without proving it
- Who wants to know what?
  - User: Wants to measure the platform's impact on DX and productivity (feedback, provisioning time, ...)
  - Org: How does the platform support org efficiency and delivery speed (time to market, cost savings, ...)
  - Platform: Stability and team effectiveness (mean time to resolution, uptime, ...)
- Metric frameworks: DORA (DevOps Research and Assessment), SPACE (Satisfaction, Performance, Activity, Communication, Efficiency)
## And how do we drag AI into this?
- Frameworks can make DX measurable
- Challenge: New tools and processes can create friction
- AI can help with: Assistance, code generation, debugging, conversational explanations instead of reading docs yourself
## Tips
- Adopt the right mindset (why a platform, what do I need, and how would it impact existing metrics)
- Invest in an effortless self-service experience
- Gradually move habits towards the platform - don't force everything at once
- Create solutions that people enjoy using

@@ -0,0 +1,29 @@
---
title: Isolating Workloads in Multi-Tenant Kubernetes Clusters
weight: 8
tags:
- multi-tenant
- isolation
---
<!-- {{% button href="https://youtu.be/rkteV6Mzjfs" style="warning" icon="video" %}}Watch talk on YouTube{{% /button %}} -->
<!-- {{% button href="https://docs.google.com/presentation/d/1nEK0CVC_yQgIDqwsdh-PRihB6dc9RyT-" style="tip" icon="person-chalkboard" %}}Slides{{% /button %}} -->
## Container Isolation
- It's a process with capabilities and user access control
- Plus: Namespaces, CGroups, Seccomp (see the sketch after this list)
- Problem: The shared kernel means a vulnerability in the runtime or kernel exposes everything else on the node
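In plain Kubernetes terms, this baseline isolation is what you tune through the pod's security context. A minimal sketch using standard fields (image and values are just an example):

```yaml
# Minimal example of the "normal" isolation knobs: seccomp profile plus dropped capabilities.
apiVersion: v1
kind: Pod
metadata:
  name: locked-down
spec:
  containers:
    - name: app
      image: busybox:1.36              # example image
      command: ["sleep", "3600"]
      securityContext:
        allowPrivilegeEscalation: false
        seccompProfile:
          type: RuntimeDefault         # default seccomp filter of the container runtime
        capabilities:
          drop: ["ALL"]                # drop all Linux capabilities
```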
## Sandboxing
- Solution: Sandboxing (wrapping) the container to isolate it from the kernel
- Software-based with gVisor: A software layer that basically emulates the kernel by intercepting all syscalls
- Hardware-based with Kata: Create a VM (one per pod) that runs our secure container instead of just running it on the host (see the RuntimeClass sketch after this list)
- Impact: Startup time with Kata or gVisor is roughly 2x the time needed by traditional runc
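In Kubernetes, both approaches are typically wired in via a `RuntimeClass`. A sketch assuming gVisor's `runsc` handler is installed and registered with containerd on the nodes (the Kata equivalent just uses a different handler name):

```yaml
# Assumes the gVisor runtime (runsc) is installed on the node and registered with containerd.
apiVersion: node.k8s.io/v1
kind: RuntimeClass
metadata:
  name: gvisor
handler: runsc               # containerd runtime handler name
---
apiVersion: v1
kind: Pod
metadata:
  name: sandboxed
spec:
  runtimeClassName: gvisor   # run this pod inside the gVisor sandbox
  containers:
    - name: app
      image: busybox:1.36
      command: ["sleep", "3600"]
```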
## Optimisation
- Unikernel: A stripped-down kernel that only contains what our application needs
- urunc: CRI-compatible runtime with sandboxing and unikernel support -> Sets up the specialized env, builds the container, and then starts the optimized VM
- Impact: Depending on the urunc variant, only 16-30% slower than native runc

@@ -6,7 +6,7 @@ weight: 2
The schedule on day 2 was pretty AI-platform focused.
Sadly, all of the AI-focused talks were about building workflows and platforms with GitOps and friends, not about actually building the base (GPU scheduling and so on).
-We also had some "normal" work tasks resulting in less talks visited and more "normal" work + networking.
+We also had some "normal" work tasks, resulting in fewer talks visited (well, I skipped two talk slots) and a bit of "normal" work + networking.
## Recommended talks