hachyderm.io is one of the many independent Mastodon servers you can use to participate in the fediverse.
Hachyderm is a safe space, LGBTQIA+ and BLM friendly, made up primarily of tech industry professionals worldwide. Note that many non-user account types have restrictions - please see our About page.

#MetalLB

beyondwatts
#Talos #kubernetes single-node cluster up and running with #Calico, #MetalLB, #Traefik, and a test #whoami deployment.

In no way scientific, but it feels much more responsive than #microk8s.

Next step: rebuild a clean node and then migrate some services.

Travis KJ5DCL
wawaweewa, #cilium #ingress adventure time!

https://docs.cilium.io/en/stable/network/servicemesh/ingress/
https://docs.cilium.io/en/stable/network/lb-ipam/

Look ma, no #MetalLB!

makuharigaijin
I also ended up replacing my Ubiquiti routers with #mikrotik. I want working load balancing in this cluster, so I tried out #metallb. However, I could not get it to work in L2 operating mode with either #calico or #cilium, so I've overhauled my home network to support BGP in order to have more options. This project keeps getting bigger.

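For context on what MetalLB's BGP mode involves on the cluster side, here is a minimal sketch of the two MetalLB resources; the ASNs, router address, and pool name are made-up placeholders, not the poster's actual setup, and the RouterOS end still has to be configured to peer with each node.

```yaml
# Hypothetical values throughout; adjust ASNs, the router IP, and the pool name.
apiVersion: metallb.io/v1beta2
kind: BGPPeer
metadata:
  name: mikrotik                 # the upstream RouterOS box
  namespace: metallb-system
spec:
  myASN: 64512                   # ASN the MetalLB speakers present
  peerASN: 64513                 # ASN configured on the router
  peerAddress: 192.168.1.1       # router's address on the node subnet
---
apiVersion: metallb.io/v1beta1
kind: BGPAdvertisement
metadata:
  name: advertise-pool
  namespace: metallb-system
spec:
  ipAddressPools:
    - homelab-pool               # an existing IPAddressPool holding the service IPs
```
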
thecodelab
MetalLB in Kubernetes claims IP addresses on your existing physical NICs using either ARP or BGP. 💡 The MetalLB speaker pod responds to ARP requests for service IPs, leveraging the node's existing network interface and MAC address. #Kubernetes #Networking #BareMetal #MetalLB #ServiceIP

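As a concrete illustration of the L2 (ARP) mode described above, a minimal sketch of the two resources a current MetalLB release expects; the address range is a placeholder and must be unused addresses on the nodes' subnet.

```yaml
# Placeholder range; pick free addresses on the same subnet as the nodes.
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: lan-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: lan-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - lan-pool          # speakers answer ARP for IPs allocated from this pool
```
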
Johannes Kastl
Note to self: when converting a Kubernetes cluster with Cilium as CNI to replace MetalLB with Cilium's new L2 announcements, you need to tweak some settings in your Cilium installation: in particular, enabling Cilium to act as a kube-proxy replacement (if you are not already doing so) and enabling the L2 announcements. Which means kube-proxy needs to be disabled in k3s.

In other words: k3s on my Raspi4 is now providing a load balancer to blocky using Cilium's L2 announcements...

#kubernetes #k3s #cilium #homelab #metalLB #loadbalancer #dns #blocky #hellyeah

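A rough sketch of the Helm values the post is alluding to, assuming a recent Cilium release (1.14+); the API server address is a placeholder, and on k3s the built-in kube-proxy would be turned off separately (e.g. via its --disable-kube-proxy server flag).

```yaml
# values.yaml for the cilium chart - illustrative, not a complete install
kubeProxyReplacement: true       # older releases expect "strict" instead of true
l2announcements:
  enabled: true                  # lets Cilium answer ARP for LoadBalancer IPs
k8sServiceHost: 192.168.1.10     # placeholder: API server must be reachable without kube-proxy
k8sServicePort: 6443
```
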
Gilgwath
Those who've been reading my toots might have picked up on the fact that I'm building a #kubernetes cluster from scratch (yes, I like pain). After figuring out #cri_o, #calico, #certmanager, #metallb, #traefik and #cloudnativepg, I finally deployed my first actual application: #nextcloud! Wueeh! Extremely stoked! Now I need to figure out how to rope in my ZFS box for persistence, and then I'm ready for a deployment in testing! #k8s #selfhosting

xyhhx 🔻 (plz hire me)
This week I learned that Cilium can do layer 2 advertisement and can act as a load balancer. I'm considering dropping MetalLB and instead letting Cilium handle that responsibility.

Does anybody have experience with this kind of setup? This is for a homelab running on Proxmox, and it's exposed to the internet using a chisel tunnel.

#cilium #metalLB #kubernetes #networking #loadBalancers #k8s #homelab #selfHosting #CNI #BGP

h2owasser🌊
#Kubernetes at home on an old, used #minipc driven by #Talos #Linux and #proxmox. #selfhosted #quorum #Loadbalancer #Metallb #Devops 🖥️ 12 CPU, 48 GB RAM. Low #Power consumption.

chihuamaranian
I spent today trying to set up #istio and/or #cilium in my #microk8s single-node cluster.

The basic setup works, but falls apart when I try to use either as an ingress.

They both want a proper load balancer, and I really don't understand #metallb well enough to troubleshoot it alongside the network mesh alongside the ingress config.

The MetalLB docs assume I have a large range of IPs. I just have one computer statically assigned on a home router. I'm out of my depth.

#homelab #kubernetes #selfHosted

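On the "large range of IPs" point: MetalLB is also happy with a pool of exactly one address, as long as it is a spare IP on the home subnet (outside the router's DHCP range and not the node's own address). A hypothetical single-address pool might look like this:

```yaml
# 192.168.1.60 is a placeholder: one unused address on the home LAN
apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: single-ip
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.60/32            # a /32 (or an x-x range) is a valid one-address pool
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: single-ip-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - single-ip
```
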
Markus Werle
@vwbusguy I am curious about what you use as your #Kubernetes stack, especially which load balancer you prefer for pods that need their own unique IP address. We played around with #k3s and #MetalLB, but I would like to know if you prefer other options.

Sean Hood
I have this weird intermittent issue accessing services on my #k3s box over #tailscale with subnet routing. I'm running #metallb for Service type: LoadBalancer. Sometimes things are fine, other times not. If I SSH into the server and use SOCKS, everything works fine. I can access other services via the subnet router fine too.

Mostly rambling for when I figure out wtf is going on and either fix or replace it.

Tbh, MetalLB might be replaced soon with Cilium's L2 announcements.

GeneBean
I've got these bits running in a VM now, so next is to add #Multus CNI, kube-vip, and something to provide load balancer services. I'm going to check out the option in Cilium and see if it's a good alternative to #MetalLB. Once that's sorted, it's going to be time for #ArgoCD and the #Nginx ingress (ingress-nginx). After that I have several pieces of the puzzle left to finish researching and assembling as a PoC.

Johannes Kastl
Has anyone used #talos #kubernetes with #cilium as #CNI and successfully gotten the IPAM part of Cilium working? This basically should do what #metallb does: announce "additional" IPs via L2 or BGP and provide them as load balancer IPs inside the cluster.

I have the machine running, Cilium is running, I defined the Cilium IPPool, and my #traefik service got an external IP address from the IP pool's range. I have IngressRoutes ready and DNS set up for the IngressRoute hostnames to point to that IP address.

But I cannot reach anything on that IP; curl tells me that it could not connect to the server...

Looks like nothing is listening on that IP.

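One plausible gap in a setup like this (an assumption, not a diagnosis of the poster's cluster): Cilium's LB-IPAM only allocates the external IP; actually announcing it on the network still needs either BGP peering or an L2 announcement policy, with l2announcements enabled in the Cilium config. A sketch of the two objects, with placeholder names, CIDR, and interface:

```yaml
apiVersion: cilium.io/v2alpha1
kind: CiliumLoadBalancerIPPool
metadata:
  name: lb-pool
spec:
  blocks:                        # older Cilium releases call this field "cidrs"
    - cidr: 192.168.1.240/28     # placeholder range on the node subnet
---
apiVersion: cilium.io/v2alpha1
kind: CiliumL2AnnouncementPolicy
metadata:
  name: announce-lb-ips
spec:
  loadBalancerIPs: true          # answer ARP for LoadBalancer service IPs
  interfaces:
    - eth0                       # placeholder: the interface facing the LAN
```
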
Aaron Longchamps
The good news is that my core problem seems to be fixed: powering down the k8s nodes and powering them back up will not break MetalLB. All the pods (eventually) become healthy in their own sequence and pass traffic.

#homelab #kubernetes #metallb #networking

Aaron Longchamps
Found the easy way to fix this sequencing issue, since Kubernetes is an orchestration engine after all: by using a priority class for each piece, I can ask Kubernetes to run the controller first, followed by the speaker pods.

That being said, I think the scope for these priorities is per node. If other nodes are ready to run the DaemonSet pods, they'll run. I might have accidentally got it working by having the controller start before the speaker on whichever node is running both of them.

Scheduling info: https://kubernetes.io/docs/tasks/administer-cluster/guaranteed-scheduling-critical-addon-pods/

#homelab #kubernetes #metallb #networking

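A hedged sketch of what "a priority class for each piece" could look like; the names and values below are invented for illustration, and each class would be referenced via priorityClassName in the MetalLB controller Deployment and speaker DaemonSet pod specs. As the post notes, priority mainly influences scheduling and preemption on a given node rather than guaranteeing a cluster-wide startup order.

```yaml
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: metallb-controller-first   # hypothetical name
value: 100000                      # higher value is scheduled (and preempts) first
globalDefault: false
description: Run the MetalLB controller ahead of the speakers.
---
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: metallb-speaker-later      # hypothetical name
value: 90000
globalDefault: false
description: Speakers follow once the controller is up.
```
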
Aaron Longchamps
Well, I'm part of the way there. It seems there's a leftover 'speaker' process that is part of the MetalLB service, and there's one of these pods running on each node. 5 nodes, 5 pods, a 1:1 relationship. This is normal.

When I boot up my k8s home lab cluster, all 5 of the k8s nodes have unhealthy MetalLB pods with the same error. I don't leave the ESXi host running all the time because it's a whole HPE DL380 Gen9 server, loud fans and all :)

Using netstat, I found the offending leftover process using my port, killed that process, deleted the pod (forcing k8s to redeploy it), and then everything was healthy again. This process was also called 'speaker', which tells me it is part of MetalLB itself.

Now what I can't tell is why there seems to be some 'abandoned' process using :7946 when the pod going away means the port usage should also go away. Two processes can't bind to the same port, of course, so it has to be something left over from the last time the MetalLB services were running.

I might be able to figure out what's going on via k8s or via Linux's own internal mechanisms. I'm not sure which one is the better approach yet.

The next question I have is this: if I reboot one of the k8s nodes, does it come up unhealthy again? If so, I'm just going to automate this repair with Ansible, at least in the short term.

To really debug this, I would need to track processes as they are launched and ports as they are consumed, and line that up with Kubernetes events and logs from the MetalLB containers. This would be a really good use case for a dedicated syslog endpoint that everything sends to.

#metallb #kubernetes #opensource #golang #ansible #linux #troubleshooting #syslog

Aaron Longchamps
Contemplating digging into the MetalLB source code tonight so I can figure out why:

1) it always works the first time I install it
2) it never works after a shutdown/startup of my homelab k8s environment, no matter what

There's a pretty obvious error about port 7946 being in use, so at least it makes sense why it's not starting up. The weird part I still need to figure out is why the port is in use.

Sample part of the error:
listen tcp 10.0.1.103:7946: bind: address already in use

#homelab #kubernetes #kubernetesnetworking #metallb

Mika
I've just merged a huge PR to my #Orked (O-tomated RKE Distribution - GREAT NAME I KNOW) that makes it easier than ever for anyone to set up a production-ready #RKE2 #Kubernetes cluster in their #homelab.

With this collection of scripts, all you need to do is provision the required nodes, including a login/management node, and run the scripts right from the login node to configure all of the other nodes that make up the cluster. This setup includes:

- Configuring the login node with any required or essential dependencies (such as #Helm, #Docker, #k9s, #kubens, #kubectx, etc.)
- Setting up passwordless #SSH access from the login node to the rest of the Kubernetes nodes
- Updating the hosts file for strictly necessary name resolution on the login node and between the Kubernetes nodes
- Necessary, best-practice configuration of all the Kubernetes nodes, including networking configuration, disabling unnecessary services, disabling swap, loading required modules, etc.
- Installation and configuration of RKE2 on all the Kubernetes nodes and joining them together as a cluster
- Installation and configuration of #Longhorn storage, including formatting/configuring the virtual disks on the worker nodes
- Deployment and configuration of #MetalLB as the cluster's load balancer
- Deployment and configuration of #Ingress #NGINX as the ingress controller and reverse proxy for the cluster - this helps manage external access to the services in the cluster
- Setup and configuration of #cert-manager to obtain and renew #LetsEncrypt certs automatically - supports both #DNS and HTTP validation with #Cloudflare
- Installation and configuration of #csi-driver-smb, which adds support for integrating your external SMB storage with the Kubernetes cluster

Besides these, there are also some other helper scripts to make certain related tasks easy, such as a script to set a unique static IP address and hostname, and another to toggle #SELinux enforcement on or off - should you need to turn it off (temporarily).

If you already have an existing RKE2 cluster, there's a step-by-step guide on how you could use it to easily configure and join additional nodes to your cluster if you're planning on expanding.

Orked currently expects and supports #RockyLinux 8+ (it should also support other #RHEL distros such as #AlmaLinux), but I am planning to improve the project over time by adding more #Linux distros, #IPv6 support, and possibly even #K3s for a more lightweight #RaspberryPi cluster, for example.

I've used this exact setup to deploy and manage vital services for hundreds of unique clients/organisations, and I've become obsessed with sharing it with everyone and making it easier to get started. If this is something that interests you, feel free to check it out!

If you're wondering what to deploy on a Kubernetes cluster - feel free to also check out my #mika Helm chart repo 🥳

🔗 https://github.com/irfanhakim-as/orked
🔗 https://github.com/irfanhakim-as/charts

Mika
Update: I've figured out that I have worse networking knowledge than I thought. For people like me who plan to spin up a secondary #Kubernetes cluster - also with #Ingress support to publish your service online - do the following:

1. Ensure #MetalLB and the Ingress #NGINX controller have been set up on the cluster; no (port) customisation required - leave them as is.

2. Register a domain name on your DNS server, i.e. #Cloudflare, pointing towards your public IP (router) - I have a #Helm chart (https://github.com/irfanhakim-as/charts/tree/master/mika/cloudflareddns) that automates this and works with non-enterprise networks with a dynamic public IP.

3. Set up port forwarding on your router for the second cluster - HTTP (WAN port: 8080, virtual host port: 80, LAN host IP: cluster 2 load balancer IP) and HTTPS (WAN port: 8443, virtual host port: 443, LAN host IP: cluster 2 load balancer IP).

4. Deploy an ingress for said service as per usual.

All of these steps are the same for your "primary" cluster except the different WAN ports, since 80 and 443 will have been taken by your primary cluster on your router/network.

NOW, the thing I missed throughout this goose chase is... when you visit that domain you've specified in your service's ingress (in your cluster 2), please add the custom port you've used (i.e. :8443) at the end... 🙃 Otherwise, of course you'll get 404 NGINX errors, since when you don't append the custom port to the address it will hit the default/standard ports 80 (http) or 443 (https) instead.

Jesus, this could've all been avoided if I hadn't gone autopilot mode too much in my networking classes years ago. Happy kubernetes-ing, everyone.