What a browser privacy policy should look like:

We acknowledge that your use of the browser is purely an interaction between you and the sites you explicitly attempt to connect to by following links or entering URLs, and does not involve any third parties unless you have explicitly added extensions to make it so.

We acknowledge that your settings are chosen by you to protect your privacy and accessibility needs, and will not attempt to override them with updates.

We acknowledge that attempting to subvert your privacy configuration through updates, either ones delivered through automatic channels you have opted in to or via publications indicating that an update is important to your security, constitutes unauthorized use of your computer and may be subject to prosecution (such as under the US CFAA).

...

@dalias what browser do you use now that Firefox has had its ‘don’t be evil’ moment?
I got LibreWolf yesterday.

@theearthisapringle @dalias I am probably switching to LibreWolf as well, although with some reservations.

* It's dependent on Firefox for upstream development.
* The switch/transfer process from Firefox is awful. Well, really, it's nonexistent.

**edit**: Also, there's no mobile version. Desktop only.

@swordgeek @theearthisapringle @dalias I’d avoid downstream forks of browsers unless they have a record of pulling updates from upstream within days of upstream updates.

@alwayscurious @swordgeek @theearthisapringle I'd do the opposite. If they're just pulling everything immediately from upstream, they're not vetting changes and they're vulnerable to whatever latest shenanigans upstream pulls. A responsible fork triages upstream changes into critical security/safety fixes, desirable new functionality that can be merged on a relaxed schedule, and antifeatures that will just be new difficulties in future merge conflicts.

@dalias @swordgeek @theearthisapringle The problem is the security patch gap. If one diverges too far from upstream then one risks not being able to release security patches in time.

@alwayscurious @swordgeek @theearthisapringle This is really a problem in the philosophy of security fixes that I've written about in detail before. It's harder to work around when you don't control upstream's bad behavior, but it should be possible to mitigate most security problems without even needing code changes, as long as you can document what functionality to gate to cut off the vector without excessive loss of functionality.

Most browser vulns are fixable with an extension blocking the garbage feature nobody asked for.
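
As a rough sketch of that kind of gate — assuming a WebExtension environment where manifest.json can register a `document_start` content script in the page's main world (supported in current Chromium and recent Firefox), and picking WebUSB purely as an illustrative target, not a reference to any particular vuln:

```js
// disable-webusb.js — declared in manifest.json under "content_scripts" with
// "run_at": "document_start" and "world": "MAIN", so it runs in the page's own
// JS context before any page script can capture a reference to the API.
(() => {
  try {
    // WebIDL attributes are configurable by default, so the accessor on the
    // prototype can simply be deleted; pages then see navigator.usb === undefined.
    delete Navigator.prototype.usb;
  } catch (_) {
    // Fallback: shadow it with an own data property on the navigator instance.
    Object.defineProperty(window.navigator, "usb", { value: undefined });
  }
})();
```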

@dalias @swordgeek @theearthisapringle A lot of browser vulnerabilities are JS engine bugs, and those are much harder to mitigate unless one disables JS altogether.

@alwayscurious @swordgeek @theearthisapringle That happens a lot more in Chrome than Firefox probably because of their SV cowboy attitudes about performance, but it might also be a matter of more eyes/valuable targets.

In any case, if you have a real kill switch for JIT, or even better an option to disable the native zomg-UB-is-so-fast engine and use Duktape or something (I suspect you could even do that with an extension running Duktape compiled to wasm...), even these can be mitigated without updates.
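
For the Firefox family (LibreWolf included), that kill switch already exists as prefs; a minimal user.js sketch, assuming current pref names (they have been stable for years but can move between releases):

```js
// user.js — run the JS engine interpreter-only: no generated native code pages.
user_pref("javascript.options.baselinejit", false);   // baseline JIT tier
user_pref("javascript.options.ion", false);           // IonMonkey optimizing JIT
user_pref("javascript.options.asmjs", false);         // asm.js AOT compilation
user_pref("javascript.options.native_regexp", false); // JIT-compiled regexps
// Optionally also gate wasm, which is its own native-code-generation path:
// user_pref("javascript.options.wasm", false);
```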

@dalias @theearthisapringle @swordgeek @alwayscurious I've been disabling the JIT for a while now.

Even Microsoft's browser research team published an article on how little return the JIT gives versus the security improvement of disabling it, in the majority of use cases: https://microsoftedge.github.io/edgevr/posts/Super-Duper-Secure-Mode/

It's not because it's impossible to do JIT safely/properly, it's *purely* because JS engines prioritize speed over correctness & security every time.

@lispi314 @alwayscurious @theearthisapringle @swordgeek An unexplored aspect of this is that "JIT" typically refers to two often-conflated but unrelated things:

1. Performing transformations on the AST/IR to optimize the code abstractly, and

2. Dynamic translation into native machine code and injection of that into the live process.

It's #1 that gets you the vast majority of the performance benefits, but #2 that introduces all the vulnerabilities (because it's all YOLO, there's no formal model for safety of any of that shit).
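
A toy illustration of the split (hypothetical AST, not any engine's internals): `fold` below is work in sense #1, a rewrite whose output is still plain data, while execution stays in an ordinary interpreter; nothing here ever maps writable+executable memory, which is the sense-#2 step.

```ts
// A tiny expression AST: numbers, variables, and addition.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "var"; name: string }
  | { kind: "add"; left: Expr; right: Expr };

// Sense #1: optimize the AST abstractly (constant folding). The result is
// still just a data structure; no machine code exists anywhere.
function fold(e: Expr): Expr {
  if (e.kind !== "add") return e;
  const left = fold(e.left);
  const right = fold(e.right);
  if (left.kind === "num" && right.kind === "num") {
    return { kind: "num", value: left.value + right.value };
  }
  return { kind: "add", left, right };
}

// Execution stays in a plain interpreter: no dynamic translation to native
// code, no injection of executable pages (the sense-#2 machinery).
function evaluate(e: Expr, env: Map<string, number>): number {
  switch (e.kind) {
    case "num": return e.value;
    case "var": {
      const v = env.get(e.name);
      if (v === undefined) throw new Error(`unbound variable ${e.name}`);
      return v;
    }
    case "add": return evaluate(e.left, env) + evaluate(e.right, env);
  }
}

// (1 + 2) + x folds to 3 + x before it ever runs.
const onePlusTwo: Expr = { kind: "add", left: { kind: "num", value: 1 }, right: { kind: "num", value: 2 } };
const program: Expr = { kind: "add", left: onePlusTwo, right: { kind: "var", name: "x" } };
console.log(evaluate(fold(program), new Map([["x", 10]]))); // 13
```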

@dalias @swordgeek @theearthisapringle @alwayscurious The latter has been done better by Self among other languages.

There are ways to structure it, but most importantly, JS JITs usually discard type info instead of keeping it in the generated native code, because otherwise the performance wins are more marginal. That's the wrong thing to do. It's a dynamic language, it should act like it.

@hayley would be able to give more concrete and relevant examples.

@lispi314 @dalias @alwayscurious @theearthisapringle @swordgeek Here's your formal model (not including deoptimisation/stack replacement shenanigans): https://applied-langua.ge/~hayley/honours-thesis.pdf

I don't follow the point about discarding type information and how it affects performance.

@hayley @dalias @theearthisapringle @swordgeek @lispi314 JS is a very badly designed language from a performance perspective: every property access is semantically a dictionary lookup, and the JS engine must do heroic optimizations to get rid of that lookup. It’s much easier to write a Scheme or Common Lisp compiler because record type accessors are strictly typed, so they will either access something with a known offset or raise a type error.
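
To make "semantically a dictionary lookup" concrete, a sketch of the unoptimized model (hypothetical `JSObject` shape, not any engine's representation); the second function is the record-accessor model in the comparison, which TypeScript can only approximate statically since it still runs as JS underneath:

```ts
// What `receiver.key` means in unoptimized JS: a string-keyed search over the
// object's own slots and then up the prototype chain, at every single access.
type JSObject = { slots: Map<string, unknown>; proto: JSObject | null };

function getProperty(receiver: JSObject, key: string): unknown {
  for (let o: JSObject | null = receiver; o !== null; o = o.proto) {
    if (o.slots.has(key)) return o.slots.get(key);
  }
  return undefined; // a missing property isn't even an error
}

// The record-accessor model: the field is at a statically known slot for this
// exact type, or the program is rejected. Nothing to search, nothing to cache.
interface Point { x: number; y: number }
function getX(p: Point): number {
  return p.x; // fixed field; a non-Point argument is a (compile-time) type error
}
```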

@alwayscurious @swordgeek @theearthisapringle @dalias @hayley > every property access is semantically a dictionary lookup

Oh, it did the Python silliness.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley Yup! Duck typing is absolutely horrible from a performance perspective, unless compile-time monomorphization gets rid of it.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley Yep. But getting rid of it is JIT in the type 1 sense not the type 2 sense.

@dalias @lispi314 @theearthisapringle @swordgeek @hayley What kind of performance can one get from a type-1 only JIT? If one only compiles to a bytecode then performance is limited to that of an interpreter, and my understanding is that even threaded code is still quite a bit slower than native code (due to CPU branch predictor limitations I think?). On the other hand, compiling to a safe low-level IR (such as WebAssembly or a typed assembly language) and generating machine code from that could get great performance, but that requires trusting the assembler (which, while probably much simpler than a full JS engine, isn’t trivial either).

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley Nobody cares if it's a constant factor like 3x slower if it's safe. Dynamic injection of executable pages is always unsafe. But I think it can be made even closer than that in typical code that's memory-access bound, not insn-rate bound.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley And if you get to choose the bytecode you have a lot of options to make it easier to execute with high performance.
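
A sketch of what "choosing the bytecode" can buy, using a made-up fixed-width stack bytecode (not any real engine's format): fixed-size instructions keep the dispatch loop cheap and predictable, and every operand and stack effect is validated, so the worst a hostile program can do is make the interpreter throw.

```ts
// A made-up fixed-width stack bytecode: each instruction is [opcode, operand].
enum Op { PushConst, Add, Return }

function run(code: Int32Array, consts: readonly number[]): number {
  const stack: number[] = [];
  for (let pc = 0; pc + 1 < code.length; pc += 2) {
    const operand = code[pc + 1];
    switch (code[pc]) {
      case Op.PushConst:
        // Operands index a validated constant pool; they are never raw
        // addresses, so a hostile program can't read outside it.
        if (operand < 0 || operand >= consts.length) throw new Error("bad const index");
        stack.push(consts[operand]);
        break;
      case Op.Add: {
        const b = stack.pop();
        const a = stack.pop();
        if (a === undefined || b === undefined) throw new Error("stack underflow");
        stack.push(a + b);
        break;
      }
      case Op.Return: {
        const r = stack.pop();
        if (r === undefined) throw new Error("stack underflow");
        return r;
      }
      default:
        throw new Error("bad opcode"); // unknown bytes fail closed
    }
  }
  throw new Error("fell off the end of the code");
}

// 2 + 3: even a malformed program can only make this throw, never escape it.
const code = new Int32Array([Op.PushConst, 0, Op.PushConst, 1, Op.Add, 0, Op.Return, 0]);
console.log(run(code, [2, 3])); // 5
```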

@dalias @lispi314 @theearthisapringle @swordgeek @hayley If you want performance that is anything close to what the hardware can actually do, you aren't doing most of the work on the CPU. You are doing it on the GPU, and that is a nightmare of its own security-wise. Oh, and I highly doubt you will ever be able to run an interpreter there with performance that is remotely reasonable, due to how the hardware works.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley We're talking about a web browser not AAA gaming. GPU access should be denied.

@dalias @lispi314 @theearthisapringle @swordgeek @hayley People want to run games. How should they do it? “Don’t do it” is not an answer.

If you limit the browser too much, people will just run desktop applications instead, and for stuff that isn’t fully trusted that is a security regression.

@alwayscurious @hayley @swordgeek @theearthisapringle @dalias Don't run proprietary malware games on hardware that is trusted for anything at all, I guess.

Though games sure could try to optimize to run properly purely on the CPU, specifically by abandoning the notion that they'll be given undue trust.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley That means throwing away more than a decade's worth of hardware advancements, not to mention completely ruining battery life on mobile. An Apple M1 CPU can only emulate a mid-2010s Intel iGPU, and uses (probably a lot) more power while doing so.

GPUs exist for a reason: they are vastly more efficient for not-too-branchy, highly-threaded code where throughput is more important than latency. The problem is not that games want to use the GPU. The problem is that there is no secure way to let untrusted code use the GPU.

> Don't run proprietary malware games on hardware that is trusted for anything at all, I guess.

That’s tantamount to accepting that the vast majority of people’s systems will always be insecure. I’m not willing to give up that fight.

@alwayscurious @hayley @swordgeek @theearthisapringle @dalias It may be possible to formulate a bytecode & runtime where semantic uses of the GPU could be proven safe, and to require its use or that of a compatible API.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley WebGPU is the closest I can think of, and it is full of holes. I’m much more interested in solutions that assume the attacker can execute arbitrary GPU (or CPU) machine code, and still keep the user’s system and data safe.

@alwayscurious @hayley @swordgeek @theearthisapringle @dalias I consider this impossible without access to the full hardware schematics & documentation.

One can only ever be relatively sure that a subset of operations is not risky to permit.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley They can run their games in any of the plethora of other browsers if they're too slow. You don't put vuln surface and fingerprinting surface in the browser you use for everything just so it can also be a AAA game platform.

@dalias @hayley @swordgeek @theearthisapringle @alwayscurious There is also this, yes. I think you mentioned in another thread the desirability of proper process isolation instead of intermixing uses of a single browser between security domains.

This could be extended to a game-only browser with each site being isolated from others.