Nico Vibert, my beloved,,,,
is Cilium native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?
That would make sense to me, since native routing relies on the underlying layer 2/3 network.
Or am I required to turn on SNAT using the IP masquerading feature?
Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...
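From skimming the docs so far, it looks like native routing leaves return routing to the underlying network, so either the router needs routes for the pod CIDRs pointing at the nodes, or masquerading stays on. The Helm knobs involved seem to be roughly these (a sketch; field names per the Cilium Helm chart, the CIDR is a placeholder):

```yaml
# Sketch: IPv6 native routing without masquerading
routingMode: native
autoDirectNodeRoutes: true                  # per-node routes when nodes share an L2 segment
ipv6:
  enabled: true
ipv6NativeRoutingCIDR: "2001:db8:1::/64"    # destinations in this CIDR are not masqueraded
enableIPv6Masquerade: false                 # keep pod GUAs visible instead of SNATing to the node
```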
Kubernetes DNS question:
Couldn't the CNI actually manage DNS instead of CoreDNS?
I mean, it'd potentially be a lot of data to throw at eBPF for in-cluster records, but it's already distributing routing information anyway.
It could also enforce upstream resolvers for all pods via DNAT, assuming DNSSEC remains a niche concern.
...in my defence, I never said my ideas were good...
OK, serious question: what do I replace Tailscale with? Can I run IPsec in a k8s pod?
I know #cilium can do multicluster but it wasn't fun. Managing another CA and making sure policies are multicluster-aware sucks. And I've hit a few issues where I had to restart the cilium node agent until it'd finally catch up (was a while ago, so maybe a non-issue nowadays).
What I want is a k8s service in cluster A that resolves to a k8s pod in cluster B. It's almost HTTP-only, but not quite.
I guess I could get away with setting up an LB pool in both clusters and a host-to-host WireGuard or IPsec tunnel to bridge those LB pools. Still not ideal, as it'd be harder to firewall everything off.
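For reference, the Cluster Mesh way of getting exactly that (a Service in cluster A resolving to pods in cluster B) is a shared Service annotated as global, defined with the same name and namespace in both clusters. A sketch with made-up names:

```yaml
# Applied in both clusters; endpoints from the remote cluster get merged in.
apiVersion: v1
kind: Service
metadata:
  name: backend
  namespace: apps
  annotations:
    service.cilium.io/global: "true"
spec:
  selector:
    app: backend
  ports:
  - port: 80
    targetPort: 8080
```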
A race condition in the #Cilium agent can cause the agent to ignore labels that should be applied to a node. This could in turn cause CiliumClusterwideNetworkPolicies intended for nodes with the ignored label to not apply, leading to policy bypass.
#ebpf
https://github.com/cilium/cilium/security/advisories/GHSA-q7w8-72mr-vpgw
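For context, this is the kind of policy affected: a CiliumClusterwideNetworkPolicy scoped to nodes via nodeSelector. If the agent misses the node label, the policy silently never attaches to that node. Labels here are made up for illustration:

```yaml
apiVersion: cilium.io/v2
kind: CiliumClusterwideNetworkPolicy
metadata:
  name: lock-down-ingress-nodes
spec:
  nodeSelector:                          # host-firewall policy keyed on a node label
    matchLabels:
      node-role.example.com/ingress: "true"
  ingress:
  - fromEntities:
    - cluster                            # only allow traffic from within the cluster
```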
Using Cilium Hubble Exporter to log blocked egress traffic on Azure Kubernetes Service https://www.danielstechblog.io/using-cilium-hubble-exporter-to-log-blocked-egress-traffic-on-azure-kubernetes-service/ #Azure #AKS #AzureKubernetesService #Kubernetes #Cilium
Up early studying the Cilium docs. Hopefully some great wisdom is revealed to me about IPv6 native routing and what's holding me up
I also ended up replacing my Ubiquiti routers with #mikrotik. I want working load balancing in this cluster, so I have tried out #metallb. However, I could not get it to work in L2 operating mode with either #calico or #cilium. So, I've overhauled my home network to support BGP in order to have more options. This project keeps getting bigger.
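For the record, the Cilium side of the BGP setup looks roughly like this (a sketch; ASNs, addresses and the selector are placeholders, and bgpControlPlane.enabled has to be set in the Helm values):

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumBGPPeeringPolicy
metadata:
  name: homelab-bgp
spec:
  nodeSelector:
    matchLabels:
      kubernetes.io/os: linux
  virtualRouters:
  - localASN: 64512
    exportPodCIDR: false
    neighbors:
    - peerAddress: "192.168.88.1/32"     # the MikroTik router
      peerASN: 64512
    serviceSelector:                     # match-all trick: advertise every LoadBalancer service
      matchExpressions:
      - key: somekey
        operator: NotIn
        values: ["never-used-value"]
```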
Note to self: when converting a Kubernetes cluster running Cilium as CNI from MetalLB to Cilium's new L2 announcements, you need to tweak some settings in your Cilium installation: in particular, enabling Cilium to act as a kube-proxy replacement (if you aren't already doing so) and enabling L2 announcements, which in turn means kube-proxy needs to be disabled in k3s.
In other words: k3s on my Raspi4 is now providing a loadbalancer to blocky using Cilium's L2 announcements...
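Roughly the settings in question (a sketch; names per the Cilium Helm chart and k3s config.yaml, the API server address is a placeholder):

```yaml
# values.yaml for the Cilium Helm chart
kubeProxyReplacement: true        # older charts use "strict" here
k8sServiceHost: 192.168.1.10      # API server address, needed once kube-proxy is gone
k8sServicePort: 6443
l2announcements:
  enabled: true
---
# /etc/rancher/k3s/config.yaml
flannel-backend: "none"
disable-network-policy: true
disable-kube-proxy: true
```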
Read “eBPF and Cilium Are Cool, So Why Do We Keep Choosing Kube-Proxy?” by Mr.PlanB on Medium: https://medium.com/@PlanB./ebpf-and-cilium-are-cool-so-why-do-we-keep-choosing-kube-proxy-4423bdefca7d #k8s #cilium
My virtual #opensuse Leap is now the host platform for my k3s #kubernetes server. Next up: #Postgres and my #Java platform on top! And then some playing around with #cilium.
Egress traffic blocking with Cilium cluster-wide network policies on Azure Kubernetes Service https://www.danielstechblog.io/egress-traffic-blocking-with-cilium-cluster-wide-network-policies-on-azure-kubernetes-service/ #Azure #AKS #AzureKubernetesService #Kubernetes #Cilium
Cursed homelab update:
As of a couple of days ago, I'm one VM lighter and running a supported version of Kubernetes. (Not that my servers are any less unhappy)
I had previously installed Kubernetes 1.25, as that was the default in @geerlingguy's excellent Ansible role and I didn't know better, so it very quickly got severely out of date. A couple of days ago I ran the gauntlet of getting it up to 1.31. The upgrade path is easy but time-consuming, and is crying out to be automated, assuming I can get machine-readable output from kubeadm. (The big hiccough here is that manual steps may be required and I need a way to detect that. Blind upgrades are easy, all things considered.)
That VM was running a Squid proxy in the hope that my servers' updates wouldn't use too much bandwidth. With my servers being a touch overloaded, it was by far the weak point on the network and frequently bogged down due to disk slowness. I'd also advertised it using WPAD so all our Windows work computers automatically used it.
It is now two pods in my Kubernetes cluster with a couple of services, some Gateway API magic and some HAProxy configuration to hook it up to the outside world. At the same time, I replaced my existing Apache reverse-proxying with Gateway routes, and now the vast majority of my internal web serving goes through that gateway. The Gateway API is awesome and Cilium makes it all so easy and efficient (but has some minor gotchas due to it using eBPF magic to do its stuff instead of "real" TCP and such.)
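Roughly the shape of it (a sketch; names, namespaces and the hostname are made up):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: internal-gateway
  namespace: infra
spec:
  gatewayClassName: cilium            # served by Cilium's Gateway API implementation
  listeners:
  - name: http
    protocol: HTTP
    port: 80
    allowedRoutes:
      namespaces:
        from: All
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: squid-route
  namespace: proxy
spec:
  parentRefs:
  - name: internal-gateway
    namespace: infra
  hostnames:
  - "proxy.home.example"
  rules:
  - backendRefs:
    - name: squid
      port: 3128
```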
Next steps are:
1. Repartitioning server #1 so I can give Ceph a few extra TB and finish moving stuff off server #2
2. Upgrading server #2 to a low-power 4th gen Intel motherboard which should give it quite a performance boost
this week i learned that cilium can do layer 2 advertisement and can act as a load balancer. i'm considering dropping metallb, and instead letting cilium handle that responsibility.
does anybody have experience with this kind of setup? this is for a homelab running on proxmox, and it's exposed to the internet using chisel tunnel
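from what i've read so far, it boils down to two CRDs on top of l2announcements.enabled and kube-proxy replacement in the cilium install. a sketch with made-up names and a placeholder range:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-pool
spec:
  blocks:                        # older cilium releases call this field "cidrs"
  - cidr: "192.168.50.0/27"
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-loadbalancers
spec:
  loadBalancerIPs: true
  interfaces:
  - ^eth[0-9]+                   # only announce on these NICs
  nodeSelector:
    matchExpressions:
    - key: node-role.kubernetes.io/control-plane
      operator: DoesNotExist
```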
xyhhx vs gvisor and cilium: round 2