#zfs

24 posts · 23 participants · 4 posts today

Report for week 16 (KW16):

- Mon: Commissioning, together with #Heilein, of a #Unifi #Netzwerk with 72 access points and 13 switches
- Tue: Continuation, finalization, firewall rules, network segmentation
- Wed: Continuation, #Dokumentation is a must, miscellaneous
- Thu: New DSL connection, #OPNsense updates. Moved #SAP VMs to other #Server hardware via ZFS snapshots (sketch below). First #Teammeeting
- Fri: Public holiday

Highlight: What did we ever do without #ZFS? Industrial settings can look sharp, too.
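Moving VMs between machines that way presumably comes down to a ZFS send/receive pair; a rough sketch with placeholder pool, dataset, and host names, not the actual setup from the report:

# take a consistent, recursive snapshot of the VM dataset
zfs snapshot -r tank/vms/sap01@migrate
# stream it to the new hardware and receive it there without mounting it yet
zfs send -R tank/vms/sap01@migrate | ssh new-host zfs receive -u tank/vms/sap01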

Happy Easter!

Auto decrypting datasets

Hi folks,

I have a question concerning automatic decryption of datasets when rebooting. On a recent 2.5A episode (237), Allan mentioned that datasets which have a keylocation set should (?) decrypt automatically. I have two servers running Ubuntu (one on 24.04 with zfs 2.2.2 and one on 22.04 with zfs 2.1.5), and neither is showing the expected behaviour.
I was wondering whether this might be due to the zfs version or the distro configs? The way Allan phrased it made it sound like a zfs-native function. At the moment my solution would be either a script + cron or a systemd unit file, but I would like to know if I am missing something on the zfs side that would make it work without additional stuff.

Thanks!
D

4 posts - 3 participants

Read full topic
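As far as I understand the tooling, keys only get loaded when something actually runs zfs load-key during boot (the OpenZFS systemd units/generator can do that when enabled), so a keylocation on its own may not be enough. The systemd-unit fallback mentioned in the question might look roughly like this, assuming the keys live in files referenced by each dataset's keylocation rather than keylocation=prompt; paths and ordering here are assumptions, not a verified recipe:

[Unit]
Description=Load ZFS encryption keys from keylocation files
DefaultDependencies=no
After=zfs-import.target
Before=zfs-mount.service

[Service]
Type=oneshot
RemainAfterExit=yes
# loads every key whose keylocation is readable; the zfs binary path differs per distro
ExecStart=/usr/sbin/zfs load-key -a

[Install]
WantedBy=zfs-mount.service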


Using it has been a learning experience. I wish I had paid more attention to the importance of creating datasets early on, instead of, like now, having to move my data back and forth to split it into sets that make sense.
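For anyone in the same spot, carving out purpose-specific datasets after the fact is mostly a matter of creating them and then moving the files across; a small sketch with made-up pool and path names:

# create the datasets you wish you had from day one
zfs create -o mountpoint=/srv/photos tank/photos
zfs create -o mountpoint=/srv/music tank/music
# data already sitting in a catch-all dataset still has to be copied across
# the dataset boundary (files do not move between datasets by themselves)
rsync -a /srv/data/photos/ /srv/photos/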

Replied in thread

@vermaden @justine
Addendum: prefix "pkg upgrade" with "make-snapshots" to be able to roll back the file system(s) without fuss ...

make-snapshots \
&& pkg upgrade && make-snapshots \
&& pkg autoremove && make-snapshots \
&& pkg clean && make-snapshots

... I had made the change before the issue of {missing,disappearing}-packages-on-upgrade that various other people are currently experiencing.
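make-snapshots there is the poster's own helper; a plausible minimal version, if all it does is take a recursive, timestamped snapshot of the boot pool ("zroot" is an assumption):

#!/bin/sh
# recursive snapshot of the whole pool, named after the current time
zfs snapshot -r zroot@"pkg-$(date +%Y%m%d-%H%M%S)"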

Documentation in operating systems is cool. It is possible to extend and rewrite utilities as time goes on, as #freebsd proves. You can still have cool utilities, like #containers and #zfs and #hypervisors, good docs for them and a consistent base system.

I dunno where I am going with this, other than wishing I didn't have to peruse the Arch wiki and the Gentoo wiki for everything when I get stuck, and instead could just "man xyz" and get good answers, speaking as a #nixos user.

Replied in thread
@ianthetechie @feld I can confirm that #Python on #FreeBSD behaves as one would expect. It consumes all RAM (with #ZFS releasing ARC as expected) and then dips into swap. As soon as Python releases memory after the ingestion routine, the swap is purged to near zero and the RAM then becomes available (and used) by the system. Far more predictable and reliable.

If you have big, vertical workloads, FreeBSD is where it is at.

fuckkkk ZFS why you do this. guess i know what i'm spending tonight doing.

any #zfs experts out there able to help me recover my pool? i apparently broke things quite significantly lol
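Not enough detail here to guess what broke, but when a pool refuses to import, a common first diagnostic pass looks something like this; each line is a separate thing to try depending on how far the import gets, and "tank" is a placeholder name:

zpool import                      # list pools visible on the attached devices
zpool import -o readonly=on tank  # try a read-only import first, touching nothing
zpool status -v tank              # per-device state plus any files with errors
zpool import -F -n tank           # dry run of a rewind import (-F discards the last few transactions)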

Today, out of nowhere, my shell started to misbehave. My prompt suddenly reverted to default. Some unexpected "command not found" errors started popping up. Something was off.

I went to check my shell configuration. The directory was not there. I then went to look into ~/.config. Half of the directories inside were simply gone.

I immediately flipped into what the fuck is happening mode.

This system is an Alpine root-on-ZFS. I have a script called by cron every 20 minutes that snapshots everything.
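(A setup like that can be as small as a crontab line plus a one-line script; the names below are placeholders, not necessarily what is actually running here.)

# crontab: every 20 minutes
*/20 * * * * /usr/local/bin/rolling-snap.sh

# rolling-snap.sh: keep one rolling, recursive snapshot per interval
zfs destroy -r zroot@20m 2>/dev/null
zfs snapshot -r zroot@20m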

First I went into the snapshot directory and started copying some things. I soon noticed there was just too much missing. How would I even map out what was gone in the first place? Even so, copying would only go so far, because I was duplicating things.

I looked to my left at the resource monitor. I had less than 1 GB left of free space. That was not going to work.

I flip some pages, looking for an incantation...

% zfs rollback zroot/home@20m

A long second hung in the air. Then all the resource monitor's bars flipped at once to green: 53% free space.

"Blessed be the ZFS Daemons and the Authors who conjured Them."

The system was still confused, so I rebooted. It let its consciousness drift, as it is used to, uncertainty still heavy in the air. Then it resurfaced... every line of output unconcerned.

Back up, no trace was left of the seconds leading up to the warp. The only suspicion came from a cryptography guardian, who noticed something was wrong about the timestamps. "Please re-enter the passcodes", it asked. Every other blob was either unconcerned or unimpressed with the glitch.

Like any time travel, the only trace left was in my memory. No history anywhere has me looking for that spell. I booted 20 minutes into the past, and that is the point from which I am writing to you.

A nice side effect of expanding my ZFS pool is that scrubs are now quite a bit faster. Down from ~19 hours to under 15. Makes sense since it can read data faster now.

I'd still like to know why the speed of a scrub decreases as it progresses.

scrub repaired 0B in 14:27:28 with 0 errors on Mon Apr 14 14:51:33 2025

Still messing around with my system with a broken #openzfs filesystem. To be honest, #btrfs was never this problematic for me in the past. But I have to admit that I never used raid5 there (known to be broken), while I do use raidz on #zfs.