#cilium

Is Cilium's native routing mode supposed to publish pod IPs on the interfaces in the host network namespace?

That would make sense to me, since it would just use the network's native layer 2/3 routing.

Or am I required to turn on SNAT using the IP masquerading feature?

Pods are getting valid IPv6 GUAs in the LAN/host subnet, but of course nothing can return to them...
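
For what it's worth, my understanding of the knobs involved, as Helm values (a sketch only; the prefix is a placeholder, and whether the LAN additionally needs static routes back to the pod prefix is exactly the open question):

```yaml
# Sketch: native routing with IPv6 masquerading off, so pod GUAs leave the
# node unchanged. Something still has to route the pod prefix back to the
# right node, e.g. autoDirectNodeRoutes between nodes on a shared L2 segment
# or a static route on the LAN router.
routingMode: native
ipv6:
  enabled: true
enableIPv6Masquerade: false
autoDirectNodeRoutes: true              # install per-node pod routes on L2 peers
ipv6NativeRoutingCIDR: "2001:db8::/56"  # placeholder prefix; not masqueraded
```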

Kubernetes DNS question:

Couldn't the CNI actually manage DNS instead of CoreDNS?

I mean, it'd potentially be a lot of data to throw at eBPF for in-cluster records, but the CNI is already distributing routing information anyway.

It could also enforce upstream DNS servers for all pods by using DNAT - assuming DNSSEC remains a niche concern.

...in my defence, I never said my ideas were good...
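
That said, part of this exists today: if a Cilium policy names port 53 with DNS rules, the agent transparently proxies the pods' DNS through itself, which is most of the DNAT-enforcement idea. A minimal sketch, assuming the stock kube-dns labels:

```yaml
apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: dns-visibility   # hypothetical name
spec:
  endpointSelector: {}   # all pods in this namespace
  egress:
    - toEndpoints:
        - matchLabels:
            "k8s:io.kubernetes.pod.namespace": kube-system
            "k8s-app": kube-dns
      toPorts:
        - ports:
            - port: "53"
              protocol: ANY
          rules:
            dns:
              - matchPattern: "*"   # proxy everything, allow all names
```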

OK, serious question: what do I replace Tailscale with? Can I run IPsec in a k8s pod?

I know #cilium can do multicluster, but it wasn't fun. Managing another CA and making sure policies are multicluster-aware sucks. And I've hit a few issues where I had to restart the Cilium node agent until it'd finally catch up (that was a while ago, so it may be a non-issue nowadays).

What I want is a k8s service in cluster A that resolves to a k8s pod in cluster B. It's almost HTTP-only, but not quite.

I guess I could get away with setting up an LB pool in both clusters and running a host-to-host WireGuard or IPsec tunnel to bridge those LB pools. Still not ideal, as it'd be harder to firewall everything off.
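
For completeness, the ClusterMesh version of "service in A, pods in B" is just a Service declared identically in both clusters and annotated as global - with all the CA and agent pain above as the price of entry. A sketch with hypothetical names:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: my-app           # must share name + namespace across both clusters
  annotations:
    service.cilium.io/global: "true"   # load-balance to backends in all meshed clusters
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
```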

Note to self: when converting a Kubernetes cluster with Cilium as the CNI to replace MetalLB with Cilium's new L2 announcements, you need to tweak some settings in your Cilium installation: in particular, enabling Cilium to act as a kube-proxy replacement (if you aren't already doing so) and enabling the L2 announcements. That in turn means kube-proxy needs to be disabled in k3s (see the sketch below).

In other words: k3s on my Raspi 4 is now providing a load balancer for blocky using Cilium's L2 announcements...
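
Something like this is what I mean (a sketch, not gospel: Helm value names as of Cilium 1.14+, and the addresses are made up):

```yaml
# Helm values:
kubeProxyReplacement: true       # "strict" on older releases
l2announcements:
  enabled: true
k8sServiceHost: 192.0.2.10       # API server address; required once kube-proxy is gone
k8sServicePort: 6443
---
# A pool to allocate LoadBalancer IPs from (spec.cidrs instead of blocks on pre-1.15):
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: homelab-pool
spec:
  blocks:
    - cidr: 192.168.1.240/28
---
# ...and a policy saying what to announce:
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: default-l2
spec:
  loadBalancerIPs: true
  externalIPs: true
```

On the k3s side that means starting the server with --disable-kube-proxy (and, for Cilium generally, --flannel-backend=none, --disable-network-policy, and --disable=servicelb so klipper-lb stays out of the way).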

Cursed homelab update:

As of a couple of days ago, I'm one VM lighter and running a supported version of Kubernetes. (Not that my servers are any less unhappy.)

I had previously installed Kubernetes 1.25, as that was the default in @geerlingguy's excellent Ansible role and I didn't know better, so it very quickly got severely out of date. A couple of days ago I ran the gauntlet of getting it up to 1.31. The upgrade path is easy but time-consuming, and is crying out to be automated, assuming I can get machine-readable output from kubeadm. (The big hiccough here is that there's the potential for manual steps being required, and I need a way to detect that. Blind upgrades are easy, all things considered.)
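
For reference, one minor-version hop looks roughly like this on a Debian-ish control-plane node (versions are examples; the pkgs.k8s.io apt repo is per-minor-version, so the sources list needs switching each hop, and kubeadm refuses to skip minors):

```bash
apt-mark unhold kubeadm
apt-get update && apt-get install -y kubeadm='1.26.15-*'
apt-mark hold kubeadm
kubeadm upgrade plan                 # the step where manual actions get flagged
kubeadm upgrade apply v1.26.15
apt-mark unhold kubelet kubectl
apt-get install -y kubelet='1.26.15-*' kubectl='1.26.15-*'
apt-mark hold kubelet kubectl
systemctl daemon-reload && systemctl restart kubelet
```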

That VM was running a Squid proxy in the hope that my servers' updates wouldn't use too much bandwidth. With my servers being a touch overloaded, it was by far the weak point on the network and frequently bogged down due to disk slowness. I'd also advertised it using WPAD, so all our Windows work computers automatically used it.

It is now two pods in my Kubernetes cluster, with a couple of services, some Gateway API magic and some HAProxy configuration to hook it up to the outside world. At the same time, I replaced my existing Apache reverse-proxying with Gateway routes, and now the vast majority of my internal web serving goes through that gateway. The Gateway API is awesome, and Cilium makes it all easy and efficient (though it has some minor gotchas, since it does its work with eBPF magic rather than "real" TCP sockets).
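
The shape of it, roughly (a sketch with hypothetical names; the cilium gatewayClassName comes from Cilium's Gateway API support):

```yaml
apiVersion: gateway.networking.k8s.io/v1
kind: Gateway
metadata:
  name: internal-gw          # hypothetical
spec:
  gatewayClassName: cilium
  listeners:
    - name: http
      protocol: HTTP
      port: 80
---
apiVersion: gateway.networking.k8s.io/v1
kind: HTTPRoute
metadata:
  name: some-internal-site   # one of these per site Apache used to front
spec:
  parentRefs:
    - name: internal-gw
  hostnames:
    - site.example.internal
  rules:
    - backendRefs:
        - name: some-internal-site
          port: 8080
```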

Next steps are:
1. Repartitioning server #1 so I can give Ceph a few extra TB and finish moving stuff off server #2
2. Upgrading server #2 to a low-power 4th gen Intel motherboard which should give it quite a performance boost

this week i learned that cilium can do layer 2 advertisement and can act as a load balancer. i'm considering dropping metallb, and instead letting cilium handle that responsibility.

does anybody have experience with this kind of setup? this is for a homelab running on proxmox, and it's exposed to the internet using a chisel tunnel
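
fwiw, one way to see it working once it's on: cilium takes a kubernetes lease per announced service, and the lease holder is the node currently answering ARP for that VIP (naming per the cilium docs):

```bash
# leases are named cilium-l2announce-<namespace>-<service>; the HOLDER
# column shows which node is doing the announcing for that service's IP
kubectl -n kube-system get lease | grep cilium-l2announce
```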