
Lazyweb, what's a reasonable removable media filesystem to use for a ridiculously large number of smallish files, hundreds of GB total? FAT32 is just incredibly slow.

@dalias probably depends on what operating systems you need to be able to read it

@aburka Really only *needs* to be Linux, but in a pinch it's preferable to be more widely readable.

@dalias @aburka that many small files is going to be the worst performance ever. If you can consolidate them into a tarball, a zip, a rar, even without compression, before copying, you'll see a huge performance increase. This was my bane when I had to do backups of a PB of data across many multi-TB volumes. Weeks. When they moved to "archive" status and I consolidated them, it took hours to write the same data to WORM tapes. It's the open/read/close cycle: open and close take a fixed time per file, so more files means slower.
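The consolidation step described above can be sketched with Python's standard-library `tarfile` module. This is a minimal illustration, not anyone's actual backup tooling; the function name `consolidate` is made up for the example. Mode `"w"` writes an uncompressed tar, so the copy stays CPU-cheap and the win comes purely from replacing many per-file open/write/close cycles with one sequential stream.

```python
import tarfile
from pathlib import Path

def consolidate(src_dir: str, archive_path: str) -> int:
    """Pack every file under src_dir into one uncompressed tar archive.

    Returns the number of files archived. Hypothetical helper for
    illustration: copy the single resulting .tar to the removable
    medium instead of the original file tree.
    """
    count = 0
    with tarfile.open(archive_path, "w") as tar:  # "w" = plain tar, no compression
        for path in sorted(Path(src_dir).rglob("*")):
            if path.is_file():
                # Store paths relative to src_dir so the archive extracts cleanly
                tar.add(path, arcname=path.relative_to(src_dir))
                count += 1
    return count
```

Unpacking on the other end is the same cost moved in time, as the next reply points out, but on slow media the single sequential write is usually the part you care about.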

@emag @aburka It's not the open/close taking up time. It's the actual data transfer. But even if it were, putting the data into archives would just move the cost to archiving and unarchiving it. All the costs here except the disk being stupidly slow are pretty much fundamental.