"why does software require maintenance, it's not like it wears down"

Because software is not generally useful for any inherent good; what matters is its relationship with the surrounding context, and that context is in perpetual change.

@chris__martin @dalias > "why does software require maintenance"

Improperly defined interfaces between software and the systems & libraries it depends on, most of the time.

Then there's the odd occasion, once in a while (often with decades-long spacing), when one needs to change something to support an entirely new system.

> what matters is its relationship with the surrounding context, and that context is in perpetual change.

That's mostly true for software with unlimited scope. It is perfectly possible to have software that is /complete/ and needs no more maintenance than the second example I gave above.

@lispi314@udongein.xyz @chris__martin@functional.cafe @dalias@hachyderm.io Free and open source software by its very definition encourages people to make frequent changes to every part of the system; world-breaking changes that would be unthinkable elsewhere are welcomed on a daily basis, and community members are generally proud of how innovative they are (e.g. one can port an entire OS to a new CPU within a year). The consequence of this system is that most software projects are not a product (there is no such thing as "the software") but a human process: reporting issues, creating breakages, writing patches, doing CI/CD, packaging for distros, all in constant motion. If the motion stops, the software will stop working very soon.

If you look at a Win32 app, it's exactly the opposite: it's a product, not a process. Once it's completed it's "set in stone", and some people will still use the same binary 20 years later, sometimes spending great effort to keep the mysterious binary running even when very little is known about it. The latter "minimum maintenance" approach has historically rarely been used by the free software community. Perhaps some projects should try it seriously.

@niconiconi @dalias @chris__martin @lispi314 Well, you can try to do the same kind of thing as the few ancient enterprise binaries which still work on current Windows, with static Linux binaries (like Mosaic can be run that way) or by shipping the few libraries without ABI guarantees (Opera 12 still runs).
But it'll also mean just throwing out security entirely, which enterprise software does all the time.

Meanwhile, source code usually keeps working for quite a while as well, with fewer problems when it comes to security, but you do need to be careful about your dependencies, which seems to be a lost ideal in many ecosystems today.

@lanodan @niconiconi @dalias @chris__martin > with static Linux binaries (like Mosaic can be run that way)

This is an issue mostly because the wrong layer is being distributed. The Operating System APIs for Linux are specified and standard at the ABI level, so particular cached computations (a binary) targeting that ABI will keep working in environments that support it (different computer architectures being different environments). For most Linux binaries, though, the stable/standard compatibility layer is the source level, inherited from libraries that only provide compatibility & reliability guarantees at the source level.
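For instance (a rough, untested sketch of mine): a program whose only real contract is the kernel's syscall ABI keeps working anywhere that ABI is honoured, independently of what the libraries around it do.

    /* sketch: write to stdout through the raw Linux syscall interface,
       the layer that actually carries a stability guarantee */
    #include <sys/syscall.h>
    #include <unistd.h>

    int main(void) {
        const char msg[] = "hello from the stable kernel ABI\n";
        /* syscall(2) is just libc's convenience trampoline; the contract
           being relied on here is SYS_write itself, not a libc symbol */
        syscall(SYS_write, 1, msg, sizeof msg - 1);
        return 0;
    }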

It makes no sense to distribute the program at a layer that is not subject to reliability guarantees otherwise mentioned. (related: https://gbracha.blogspot.com/2020/01/the-build-is-always-broken.html)

(I don't think GNU LibC makes any claims of ABI stability across versions, so a given cached version of it cannot be used as a reliable primary artifact.)

> But it'll also mean just throwing out security entirely, which enterprise software does all the time.

Unless cryptography is involved, that shouldn't be the case and suggests there's something wrong with the running environment rather than the program.

For cryptography there are ways of handling that, mainly presenting a stable API at some layer or another. Programs using SSH don't need to know more than its command-line interface (which SSH itself has somewhat standardized, I think); programs using I2P (as clients) need nothing more than standard HTTP or TCP support (through reverse-proxy tunnels) and do not care about the I2P version used. And of course, libraries presenting a stable API (whether type/function-call based or protocol-based, with appropriate upgrade support in the background), which I think is the standard way to go now, will simply keep working.
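For instance (sketch only; the host name is made up), a caller can lean on ssh purely through that command-line contract:

    /* sketch: depending only on ssh's command-line interface,
       never on any particular ssh library version or ABI */
    #include <stdio.h>

    int main(void) {
        /* hypothetical host; the only interface used is "ssh <host> <command>" */
        FILE *p = popen("ssh backup.example.org uptime", "r");
        if (!p) return 1;

        char line[256];
        while (fgets(line, sizeof line, p))
            fputs(line, stdout);

        return pclose(p) == 0 ? 0 : 1;
    }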

@lispi314 @chris__martin @dalias @niconiconi @lanodan
> I don't think GNU LibC makes any claims of ABI stability across versions

Slight confusion: glibc uses versioned symbols extensively to provide different variants of its functions, transparently supporting all the older ABIs when they change. They only add new versions and never remove old ones, so old binaries run just fine. This feature exists in the linker, but few libraries have the discipline to make guarantees using it.
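Roughly what that looks like from the library side (untested sketch, names made up):

    /* libfrob.c - GNU symbol versioning sketch.
       Build: gcc -shared -fPIC libfrob.c -o libfrob.so \
                  -Wl,--version-script=libfrob.map
       where libfrob.map defines the version nodes:
           LIBFROB_1.0 { global: frob; local: *; };
           LIBFROB_2.0 { global: frob; } LIBFROB_1.0;
    */

    /* old behaviour, kept so already-linked binaries keep running */
    int frob_old(int x) { return x + 1; }

    /* new behaviour, picked up by anything linked from now on */
    int frob_new(int x) { return 2 * x + 1; }

    /* bind each implementation to a version node; @@ marks the default */
    __asm__(".symver frob_old, frob@LIBFROB_1.0");
    __asm__(".symver frob_new, frob@@LIBFROB_2.0");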

@raven667 @lispi314 @chris__martin @niconiconi @lanodan It's a very bad mechanism, because symbol versioning binds at link time, but dependency on a particular version is determined at compile time.

The right way to do this is to throw away symbol versioning and do versioned interfaces with the preprocessor in the library's public header file. Bonus: it's portable, not dependent on a GNU tooling extension.
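Something like this, say (rough sketch, names hypothetical):

    /* libfrob.h - interface versioning done in the public header:
       plain C, portable, no linker extensions involved */
    int frob_v1(int x);   /* old semantics, kept around forever */
    int frob_v2(int x);   /* current semantics */

    /* which version a caller gets is decided here, at compile time */
    #if defined(LIBFROB_WANT_V1)
    #define frob frob_v1
    #else
    #define frob frob_v2
    #endif

Callers just write frob(); the concrete symbol is chosen by the same header the compiler saw, so the compile-time and link-time views of the dependency can't disagree.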