What a browser privacy policy should look like:

We acknowledge that your use of the browser is purely an interaction between you and the sites you explicitly attempt to connect to by following links or entering URLs, and does not involve any third parties unless you have explicitly added extensions to make it so.

We acknowledge that your settings are chosen by you to protect your privacy and accessibility needs, and will not attempt to override them with updates.

We acknowledge that attempting to subvert your privacy configuration through updates, whether delivered through automatic channels you have opted in to or promoted via publications claiming the update is important to your security, constitutes unauthorized use of your computer and may be subject to prosecution (such as under the US CFAA).

...

@dalias what browser do you use now that Firefox has had its ‘don’t be evil’ moment?
I got librewolf yesterday

@theearthisapringle @dalias I am probably switching to LibreWolf as well, although with some reservations.

* it depends on Firefox for upstream development
* the switch/transfer process from Firefox is awful. Well really, it's nonexistent.

**edit**: Also, there's no mobile version. Desktop only.

@swordgeek @theearthisapringle @dalias I’d avoid downstream forks of browsers unless they have a record of pulling updates from upstream within days of each upstream release.

@alwayscurious @swordgeek @theearthisapringle I'd do the opposite. If they're just pulling everything immediately from upstream, they're not vetting changes and they're vulnerable to whatever latest shenanigans upstream pulls. A responsible fork triages upstream changes into critical security/safety fixes, desirable new functionality that can be merged on a relaxed schedule, and antifeatures that will just be new difficulties in future merge conflicts.

@dalias @swordgeek @theearthisapringle The problem is the security patch gap. If one diverges too far from upstream then one risks not being able to release security patches in time.

@alwayscurious @swordgeek @theearthisapringle This is really a problem in philosophy of security fixes I've written about in detail before. It's harder to work around when you don't control upstream's bad behavior, but it should be possible to mitigate most security problems without even needing code changes, as long as you can document what functionality to gate to cut off the vector without excessive loss of functionality.

Most browser vulns are fixable with an extension blocking the garbage feature nobody asked for.
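E.g. a content script that just nukes the risky global before page scripts run. A minimal sketch (hypothetical, not a hardened extension; real blockers have to inject into the page's own realm and handle bypasses this ignores):

```typescript
// content-script.ts, meant to run at document_start so it executes
// before any page script. Gates a risky feature by making the
// global unreachable and unrestorable.
for (const risky of ["SharedArrayBuffer", "WebAssembly"]) {
  try {
    Object.defineProperty(window, risky, {
      value: undefined,    // feature detection sees it as absent
      writable: false,
      configurable: false, // page code can't redefine it back
    });
  } catch (e) {
    console.warn(`could not gate ${risky}:`, e);
  }
}
```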

@dalias @swordgeek @theearthisapringle A lot of browser vulnerabilities are JS engine bugs, and those are much harder to mitigate unless one disables JS altogether.

@alwayscurious @swordgeek @theearthisapringle That happens a lot more in Chrome than Firefox probably because of their SV cowboy attitudes about performance, but it might also be a matter of more eyes/valuable targets.

In any case, if you have a real kill switch for JIT, or even better an option to disable the native zomg-UB-is-so-fast engine and use DukTape or something (I suspect you could even do that with an extension running DukTape compiled to wasm...), even these can be mitigated without updates.

@dalias @theearthisapringle @swordgeek @alwayscurious I've been disabling the JIT for a while now.

Even Microsoft's browser research team published an article on how little return the JIT gives versus the security improvement from disabling it, in the majority of use cases: https://microsoftedge.github.io/edgevr/posts/Super-Duper-Secure-Mode/

It's not because it's impossible to do JIT safely/properly, it's *purely* because JS engines prioritize speed over correctness & security every time.
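For Firefox-lineage browsers the kill switch is a handful of about:config prefs. A user.js sketch (pref names as of recent Firefox releases; verify against your own build):

```
// user.js: disable the JS JIT tiers in Firefox/LibreWolf
user_pref("javascript.options.baselinejit", false); // baseline JIT
user_pref("javascript.options.ion", false);         // optimizing (Ion/Warp) JIT
user_pref("javascript.options.asmjs", false);       // asm.js compilation
// optional, heavier-handed: drop WebAssembly too
// user_pref("javascript.options.wasm", false);
```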

@lispi314 @alwayscurious @theearthisapringle @swordgeek An unexplored aspect of this is that "JIT" typically refers to two often conflated but unrelated things:

1. Performing transformations on the AST/IR to optimize the code abstractly, and

2. Dynamic translation into native machine code and injection of that into the live process.

It's #1 that gets you the vast majority of the performance benefits, but #2 that introduces all the vulnerabilities (because it's all YOLO, there's no formal model for safety of any of that shit).
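A toy illustration of the difference, assuming nothing about any real engine: sense 1 is a pure tree-to-tree rewrite like this constant folder; sense 2 would take the result and write executable machine code into the live process.

```typescript
// Sense 1 "JIT": abstract optimization over the AST/IR.
// No native code is generated or injected anywhere.
type Expr =
  | { kind: "num"; value: number }
  | { kind: "add"; left: Expr; right: Expr };

// Constant folding: (1 + 2) + x becomes 3 + x, purely as data.
function fold(e: Expr): Expr {
  if (e.kind === "num") return e;
  const left = fold(e.left);
  const right = fold(e.right);
  if (left.kind === "num" && right.kind === "num") {
    return { kind: "num", value: left.value + right.value };
  }
  return { kind: "add", left, right };
}
```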

@dalias @swordgeek @theearthisapringle @alwayscurious The latter has been done better by Self among other languages.

There are ways to structure it, but most importantly JS JIT usually discards type info instead of keeping it in the native dynamic code, because otherwise the performance wins are more marginal. That's the wrong thing to do. It's a dynamic language, it should act like it.

@hayley would be able to give more concrete and relevant examples.

@lispi314 @alwayscurious @theearthisapringle @swordgeek @hayley Discarding type info is an aspect of 1 in order to make 2 easier. It probably also saves some resources. But it's at best constant-factor stuff, not vast wins like most of 1.

@dalias @hayley @swordgeek @theearthisapringle @alwayscurious Failed to open TBB for the web UI to edit fast enough, then...

I also meant to add that JS JIT also discards a *lot* of checks in compiling. This is again something it should do to a much lesser degree if at all.

Basically JS JIT seems to have the goal of turning dynamic interpreted code into static native code. That's the wrong approach.

@lispi314 @alwayscurious @theearthisapringle @swordgeek @hayley That's only supposed to happen in code paths where you can prove the type is known, like for data that's known numeric or known integer. But there have been lots of vulns in V8 in this area...

@dalias @lispi314 @theearthisapringle @swordgeek @hayley Are you sure about that?

I’m pretty sure you cannot get good JS performance unless:

  1. a + b is a few machine instructions (add + branch on overflow, etc), not a call to a runtime function.
  2. a.b is the equivalent of a C struct access, not a dictionary lookup.
  3. a[b] (if a is an Array or TypedArray) is an array access with bounds check on an object with known type, not a generic property lookup.

Getting those things to work efficiently requires pretty much the whole profiling + type inference + deoptimization mess. I believe the same was found to be the case for Ruby runtimes: the attempts that did not use deoptimization were not able to compete with TruffleRuby on performance.
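(Roughly, the trick behind making a.b a struct access is hidden classes plus inline caches. A toy sketch with made-up structures, nothing like any real engine's actual internals:)

```typescript
// Toy hidden classes ("shapes") plus a per-call-site inline cache.
interface Shape { id: number; offsets: Map<string, number>; }
interface Obj   { shape: Shape; slots: unknown[]; }
interface InlineCache { shapeId: number; offset: number; }

function getProp(o: Obj, name: string, ic: InlineCache): unknown {
  if (o.shape.id === ic.shapeId) {
    return o.slots[ic.offset];              // fast path: fixed offset
  }
  const offset = o.shape.offsets.get(name); // slow path: dictionary
  if (offset === undefined) return undefined;
  ic.shapeId = o.shape.id;                  // remember for next time
  ic.offset = offset;
  return o.slots[offset];
}
```

The vulnerabilities show up when an engine takes the fast path on a stale assumption; that is exactly what deoptimization exists to prevent.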

A much more sound approach would be to write a compiler that targets a memory-safe IR, such that even if the compiler frontend is buggy, the resulting code will still be memory-safe (even if wrong). I believe that V8’s code sandbox is designed to do just that: if the resulting code is wrong, it might do arbitrarily wrong things at the JS level, but it won’t be able to read or write things it should not be able to.
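The shape of that idea, as a toy sketch (illustrative only; V8's actual sandbox works at a different level): every load/store the generated code can express is masked into one flat guest heap, so a frontend bug corrupts guest data but can't touch the host.

```typescript
// One flat guest heap; indices are masked into range, so even
// a miscompiled index stays inside the cage.
const HEAP_SIZE = 1 << 20;               // must be a power of two
const heap = new Float64Array(HEAP_SIZE);
const MASK = HEAP_SIZE - 1;

// The only memory primitives the IR is allowed to express:
function load(index: number): number {
  return heap[index & MASK];             // wrong value at worst
}
function store(index: number, value: number): void {
  heap[index & MASK] = value;            // never a wild host write
}
```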

If this is still insufficient hardening, I think that pushing developers to use WebAssembly for anything remotely performance-critical is the only feasible option.

@alwayscurious @dalias @lispi314 @theearthisapringle @swordgeek > write a compiler that targets a memory-safe IR, such that even if the compiler frontend is buggy, the resulting code will still be memory-safe (even if wrong).

you're not gonna believe what my bachelor's thesis (linked up-thread) is about

@hayley @dalias @theearthisapringle @swordgeek @lispi314 Interestingly, graphics compilers targeting GPUs have no need for this, because sensitive CPU-side data structures are not even mapped into the GPU address space.

@lispi314 @dalias @alwayscurious @theearthisapringle @swordgeek Here's your formal model (not including deoptimisation/stack replacement shenanigans): https://applied-langua.ge/~hayley/honours-thesis.pdf

I don't follow the point about discarding type information and how it affects performance.

@hayley @alwayscurious @theearthisapringle @swordgeek @lispi314 I'll grant that you can make a model, but if it's not supported by the underlying host language's semantics, its memory and lifetime models, etc., then there's going to be loads of load-bearing UB in the glue...

@dalias @hayley @theearthisapringle @swordgeek @lispi314 In theory, this can be solved by writing a formal proof of correctness for not only the source code but also the generated machine code. That removes the host-language compiler from the trusted computing base. Whether that is practical is another matter.

@hayley @dalias @theearthisapringle @swordgeek @lispi314 JS is a very badly designed language from a performance perspective: every property access is semantically a dictionary lookup, and the JS engine must do heroic optimizations to get rid of that lookup. It’s much easier to write a Scheme or Common Lisp compiler because record type accessors are strictly typed, so they will either access something with a known offset or raise a type error.
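Spelled out, the semantics the engine has to start from look like this (simplified, ignoring getters, proxies, etc.):

```typescript
// What a.b *means* in JS: search a chain of dictionaries.
type JSObj = { props: Map<string, unknown>; proto: JSObj | null };

function semanticGet(o: JSObj, key: string): unknown {
  for (let cur: JSObj | null = o; cur !== null; cur = cur.proto) {
    if (cur.props.has(key)) return cur.props.get(key);
  }
  return undefined; // a missing property isn't even an error
}
// A defstruct accessor, by contrast, is "check the type tag, then
// load the slot at a fixed offset": no search, no chain.
```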

@alwayscurious @swordgeek @theearthisapringle @dalias @hayley > every property access is semantically a dictionary lookup

Oh, it did the Python silliness.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley Yup! Duck typing is absolutely horrible from a performance perspective, unless compile-time monomorphization gets rid of it.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley Yep. But getting rid of it is JIT in the type 1 sense not the type 2 sense.

@dalias @lispi314 @theearthisapringle @swordgeek @hayley What kind of performance can one get from a type-1 only JIT? If one only compiles to a bytecode then performance is limited to that of an interpreter, and my understanding is that even threaded code is still quite a bit slower than native code (due to CPU branch predictor limitations I think?). On the other hand, compiling to a safe low-level IR (such as WebAssembly or a typed assembly language) and generating machine code from that could get great performance, but that requires trusting the assembler (which, while probably much simpler than a full JS engine, isn’t trivial either).
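For concreteness, "type-1 only" bottoms out in a dispatch loop like this toy stack machine. The per-opcode branch is where the interpreter overhead (and the branch predictor pain) lives; threaded code just replaces the switch with indirect jumps:

```typescript
// Toy switch-dispatch bytecode interpreter.
enum Op { PUSH, ADD, PRINT, HALT }

function run(code: number[]): void {
  const stack: number[] = [];
  let pc = 0;
  for (;;) {
    switch (code[pc++] as Op) {
      case Op.PUSH: stack.push(code[pc++]); break;
      case Op.ADD: {
        const b = stack.pop()!, a = stack.pop()!;
        stack.push(a + b);
        break;
      }
      case Op.PRINT: console.log(stack[stack.length - 1]); break;
      case Op.HALT: return;
    }
  }
}

run([Op.PUSH, 2, Op.PUSH, 3, Op.ADD, Op.PRINT, Op.HALT]); // prints 5
```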

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley Nobody cares if it's a constant factor like 3 slower if it's safe. Dynamic injection of executable pages is always unsafe. But I think it can be made even closer than that in typical code that's memory access bound not insn rate bound.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley And if you get to choose the bytecode you have a lot of options to make it easier to execute with high performance.

@dalias @lispi314 @theearthisapringle @swordgeek @hayley If you want performance anything close to what the hardware can actually do, you aren't doing most of the work on the CPU. You are doing it on the GPU, and that is a nightmare of its own security-wise. Oh, and I highly doubt you will ever be able to run an interpreter there with remotely reasonable performance, due to how the hardware works.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley We're talking about a web browser not AAA gaming. GPU access should be denied.

@dalias @lispi314 @theearthisapringle @swordgeek @hayley People want to run games. How should they do it? “Don’t do it” is not an answer.

If you limit the browser too much, people will just run desktop applications instead, and for stuff that isn’t fully trusted that is a security regression.

@alwayscurious @hayley @swordgeek @theearthisapringle @dalias Don't run proprietary malware games on hardware that is trusted for anything at all, I guess.

Though games sure could try to optimize to run properly purely on the CPU, specifically by abandoning the notion that they'll be given undue trust.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley That means throwing away more than a decade’s worth of hardware advancements, not to mention completely ruining battery life on mobile. An Apple M1 CPU can only emulate a mid-2010s Intel iGPU and uses a (probably a lot) more power while doing so.

GPUs exist for a reason: they are vastly more efficient for not-too-branchy, highly-threaded code where throughput is more important than latency. The problem is not that games want to use the GPU. The problem is that there is no secure way to let untrusted code use the GPU.

> Don't run proprietary malware games on hardware that is trusted for anything at all, I guess.

That’s tantamount to accepting that the vast majority of people’s systems will always be insecure. I’m not willing to give up that fight.

@alwayscurious @hayley @swordgeek @theearthisapringle @dalias It may be possible to formulate a bytecode & runtime where semantic uses of the GPU could be proven safe, and then require its use, or that of a compatible API.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley WebGPU is the closest I can think of, and it is full of holes. I’m much more interested in solutions that assume the attacker can execute arbitrary GPU (or CPU) machine code, and still keep the user’s system and data safe.

@alwayscurious @hayley @swordgeek @theearthisapringle @dalias I consider this impossible without access to the full hardware schematics & documentation.

One can only ever be relatively sure a subset of operations are not risky to permit.

@alwayscurious @lispi314 @theearthisapringle @swordgeek @hayley They can run their games in any of the plethora of other browsers if they're too slow. You don't put vuln surface and fingerprinting surface in the browser you use for everything just so it can also be a AAA game platform.

@dalias @hayley @swordgeek @theearthisapringle @alwayscurious There is also this, yes. I think you mentioned in another thread the desirability of proper process isolation instead of intermixing uses of a single browser between security domains.

This could be extended to a game-only browser with each site being isolated from others.

@dalias @lispi314 @theearthisapringle @swordgeek @hayley I was assuming that it would be a factor of 10 or more, especially in tight loops doing math. Not an issue for most code, but a major issue for something like a game engine. Also, a factor of 3 slowdown could easily become a factor of 3 increase in power consumption, and people absolutely do care about that.

One problem with things like this is that if one makes the web unusable as a platform for certain applications, people will fall back to unsandboxed desktop applications, which are far, far worse from a security perspective if they don’t come from a trusted source. Applications that just do lightweight DOM manipulation probably don’t care about JS performance too much, but anything doing heavy math is going to be seriously hurt.

@alwayscurious @dalias @theearthisapringle @swordgeek @lispi314 >It’s much easier to write a Scheme or Common Lisp compiler

and yet our Scheme and Common Lisp compilers suck and the JS ones are half-decent

> they will either access something with a known offset or raise a type error

not so with CLOS (and I don't think "don't use CLOS" is a solution)

@hayley @dalias @theearthisapringle @swordgeek @lispi314 I’ve heard that SBCL is pretty good. Otherwise, this is almost certainly due to the vast amounts of funding poured into making JS engines efficient.

@hayley @dalias @theearthisapringle @swordgeek @lispi314 I didn’t consider CLOS because it is well known to be something that comes with a performance penalty and therefore should not be used in hot code paths. defstruct is the appropriate way to write a cheap-to-use record type.

@hayley @swordgeek @theearthisapringle @alwayscurious @dalias There are ways to statically resolve & compile CLOS calls.

Those *do* involve other tradeoffs around liveness or redefinition though. At least when implemented as libraries.

@lispi314 @dalias @alwayscurious @theearthisapringle @swordgeek I would simply not implement them as a library, my god https://bibliography.selflanguage.org/_static/dynamic-deoptimization.pdf is 33 years old, can we please have 90s compiler tech I beg of you

@hayley @swordgeek @theearthisapringle @alwayscurious @dalias Of course. I mentioned that because I know of a specific library, not because it's the best option or anything like that.

@lispi314 @dalias @theearthisapringle @swordgeek @hayley The problem with applying deoptimization to Lisp isn’t that it can’t be done (it can), but that someone needs to actually do it.

@hayley @alwayscurious @dalias @lispi314 @swordgeek @theearthisapringle TBF writing a whole ass common lisp compiler seems pretty fucking hard (whereas js seems quite achievable for one person with not that much effort). but mostly not because optimisation lol