Cassandrich

Some technical thoughts on init & service launching/supervision systems: most dependencies & ordering relationships are bogus.

A common antipattern is for stuff to depend on the network or another service being up, but this dependency is either fake or a bug.

The network, for example, can have a blip or outage at any time. A dependency on it coming up is inherently a TOCTOU race. But no decent software cares; it already handles ups and downs gracefully.

This is why my boot script is basically just a & b & c & d & e & (& = background).
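A minimal sketch of what that looks like, with placeholder daemons standing in for the real list:

#!/bin/sh
# Everything is launched in the background, unconditionally. Each daemon is
# expected to tolerate its dependencies (network, peers) coming and going.
# The service names below are placeholders, not an actual boot configuration.
syslogd &
dhcpcd &
ntpd &
sshd &
httpd &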

@dalias Incidentally, this suggests that *all* NFS and iSCSI implementations packaged on most distros are badly-written software.
@dalias I certainly wouldn't complain if good implementations popped up.

@dalias most software does not meet your criteria for decent

@dalias

I can't find it right now, but your minimal init that basically just kicks off some scripts and then lives on to reap zombies is nice and clean.
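Something of that shape, as a sketch (not the actual code being referenced, which I also couldn't find; /etc/rc stands in for the boot script):

#!/bin/sh
# PID 1 sketch: start the boot script, then spend the rest of life reaping.
/etc/rc &
while :; do
    sleep 60 &   # keep one known child so 'wait' always has something to block on
    wait         # while blocked here, the shell also reaps zombies reparented to PID 1
done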

@dalias
Interesting. The networking example is really good.

What do you think about mount points? Is there a simple way for software to wait until the correct mount points are available?

@sertonix The mount point should come up immediately with no network or device access and block on accesses while the underlying resource is intermittently unavailable.

If it's synchronous, you have a blip-leading-to-boot-failure condition.
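The classic way to hit that condition is a synchronous network mount in fstab (server and path here are example names):

# /etc/fstab entry; if the server is unreachable at boot, mount(8) blocks or
# fails, and the boot sequence stalls or errors out with it
server:/export  /mnt/data  nfs  defaults  0 0
# the nfs 'bg' mount option retries in the background instead, but the mount
# point still isn't usable until the server first answers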

@dalias As far as I know, the mount command blocks until the resource is available. Is there an option or event that waits only until the mount point exists?

@sertonix Implementing the fs backend in FUSE rather than using the kernel NFS impl? 🤷 🙄 🤦

(Not necessarily a serious suggestion, just indicative of where problem is.)

@dalias

To answer my own question here: I think (or at least hope) that autofs can be used to reserve mount points for later use.

kernel.org/doc/html/v6.12/file (autofs - how it works, from the Linux Kernel documentation)

There is, however, a risk of deadlock when there is a recursive dependency.

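A sketch of how that could look (all names, paths, and the timeout are example values): an indirect autofs map reserving mount points under /net, with the actual mount deferred to first access.

# recent automount versions read extra master-map entries from /etc/auto.master.d
cat > /etc/auto.master.d/net.autofs <<'EOF'
/net  /etc/auto.net  --timeout=60
EOF

# indirect map: /net/data appears on lookup and is mounted on first access
cat > /etc/auto.net <<'EOF'
data  -fstype=nfs,soft  server:/export/data
EOF

rc-service autofs start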
@dalias Bullseye on what's probably the only thing I don't like about OpenRC: restarting the network stack, or worse, shutting down one interface, shouldn't cause a whole bunch of side effects like shutting down services, especially when there are runlevels if you want offline/online modes.

@lanodan I've never used OpenRC to do that, and what I like about it is that it didn't make me. I can just use ifup/ifdown or even ifconfig.

@dalias ifup/ifdown as in the Debian-style /etc/network/interfaces? Haven't used those in ages, but I remember not liking it at all.

I think I'll end up just removing "provide net" from dhcpcd and netifrc; no more side effects. (And maybe throw away netifrc entirely; I don't think I need that wrapper when I know all the commands.)
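If I'm remembering OpenRC's rc.conf dependency-override mechanism right (the rc_<service>_<deptype> variables), that could be done without editing the init scripts; treat these lines as an unverified sketch:

# /etc/rc.conf: strip the 'net' provide from dhcpcd and a netifrc interface
rc_dhcpcd_provide="!net"
rc_net_eth0_provide="!net"   # netifrc interface services are named net.<iface>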
@dalias A dependency on a network interface with a statically assigned IP address is perfectly fine, and even necessary with multiple interfaces. A dependency on the existence of a statically created virtual interface is likewise required if you want to support VLANs/VXLANs, VRFs, GRE tunnels, or VPNs.

Those do not depend on any external state; they merely require that the software be started after the network configuration is done, regardless of whether the cable is plugged in.
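For instance, a statically created VLAN interface (interface name, VLAN ID, and address are example values) exists and is addressable the moment these commands run, carrier or no carrier:

ip link add link eth0 name eth0.42 type vlan id 42   # create the VLAN device
ip addr add 192.0.2.10/24 dev eth0.42                # static address
ip link set eth0.42 up
# a service binding to 192.0.2.10 only needs to start after this step;
# nothing here depends on the cable being plugged in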

Handling dynamically added/removed network interfaces and IP addresses tends to end up as a non-portable pile of bugs. The most illustrative example is Avahi, whose dynamic handling code has been broken for decades, despite its creator schooling others about it.