about:config
Firefox About:Config Tricks & Hardening (based on kb.mozillazine.org, 12bytes.org, user.js project)
Speed up and harden your Mozilla Firefox Browser!
==== Introduced by CHEF-KOCH ====
This project provides a hardened user.js, among some other files, which anyone can drag & drop into their Firefox/Fenix profile folder; this process can (of course) be automated via external scripts/programs and is optional. Please make sure you check the documents & FAQ.
More is not always better and can lead to problems (website breakage, bugs, etc.).
- The FFCK project hopes to provide a middle way between privacy and normal daily usage habits without breaking too much.
- The FFCK project does not believe in FUD or claims made up without evidence.
- The FFCK project does not explain web standards and is not designed to discuss web security policy mechanisms; Mozilla itself has an entire portal for this.
- The FFCK project does not document every about:config flag; if you want documentation, check the official docs (all.js) or read the given documentation.
- The owner of this project highly recommends that you only use the latest stable Firefox version (or the beta, for testing).
- Harden the browser against known data disclosure or code execution vulnerabilities (see Security aspect).
- Harden the browser's encryption (cipher suites, protocols, trusted CAs).
- Limit possibilities to uniquely identify the browser/device using browser fingerprinting.
- Limit the browser from storing anything even remotely sensitive persistently.
- Limit the possibilities to track the user through web analytics.
- Reduce the overall attack surface by disabling various features.
- Remain usable in a daily-use environment.
Several changes you can make within the Firefox options (and the invisible ones via about:config) might reduce the overall attack surface; they can be tested against several test pages to check whether they work or not. Those pages are mostly PoC pages meant to prove that certain things can be exploited or are vulnerable. However, those changes depend heavily on the settings and browser version, and they need to be constantly maintained to adopt new changes which might come from the browser or from new web standards.
The project list is not designed to secure Firefox; it is designed to bring as much privacy (via about:config changes) as possible. The project owner does not believe that the configuration increases any security aspect of the browser itself; there has never been proof that an about:config change really has an impact on the overall security level of an individual user:
- An attacker can use the open-source list to find weaknesses (forgotten flags, mistakes, or other strategies to bypass it).
- The browser itself ignores several flags, or the provided options are bugged or entirely broken.
- The flags depend on the browser's source code, which basically means that if the source is flawed or doesn't work as advertised, an attacker can find a way around it with a new proof of concept (PoC) which might not even be publicly available.
- About:config changes do not mitigate attacks like MITM, data leaks, source-code problems, exploits, 0-days, or other spoofing attacks which might be unknown.
Achieving more security is much harder than building defenses against privacy-related attacks, because those two things are not the same, and the philosophy (in my opinion) must be that important problems are always addressed by the browser itself, without the need to change some flags/options. Stay away from "Ultimate Firefox Privacy & Security Guides": in 99% of all cases they are clickbait and quickly outdated. Browser security itself is very, very hard to achieve, if not almost impossible, and needs to be monitored constantly.
What does Firefox (by default) submit?
Firefox sends back about as much data as Chrome does, or more. I believe this is not needed in a stable environment, which is why all of it was disabled via configuration. You can review your telemetry data via
about:preferences#privacy (and about:telemetry) in the "Firefox Data Collection and Use" section.
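The telemetry-related collection can be switched off in a user.js; a minimal sketch, assuming the pref names from recent Firefox releases (these change over time, so verify against all.js for your version):

```javascript
// user.js fragment — disable Firefox telemetry & health-report uploads.
// Pref names are assumptions based on recent releases; verify in all.js.
user_pref("toolkit.telemetry.enabled", false);                  // core telemetry
user_pref("toolkit.telemetry.unified", false);                  // unified telemetry pipeline
user_pref("datareporting.healthreport.uploadEnabled", false);   // health-report upload
user_pref("datareporting.policy.dataSubmissionEnabled", false); // master data-submission switch
```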
Why am I still fingerprintable?
Test pages like Panopticlick have several problems which need to be addressed, and that's why they are not 100% reliable in the real world.
- The uniqueness score on a test page such as Panopticlick is only relative to how many visitors the test website had, or how many entries are in its database. Compared to the real world, that is only a small percentage. This basically means that if a lot of, e.g., Tor users/browsers visit the website and you then run your test with Chrome, you will be displayed as more unique, because in that database Chrome is underrepresented, even though it is the more widely used browser (statistically speaking). Likewise, if such pages test older browser clients, you will rank higher (more unique) in their database, because statistically outdated browser IDs are far less often used (and tested).
- Randomized fingerprint values are questionable. In fact, randomization does not prevent you from being fingerprinted at all; the ID would be even more unique, because it changes with every single visited page. That means you are, overall, identifiable faster, since no one else had this particular fingerprint on the same page at the same time. This is also the reason why Tor users all get the same fingerprint: it makes it impossible to identify which user was on which page with which fingerprint (because they are all the same).
- A test page does not cover every scenario; it is possible that new (unknown) tricks are already circulating on the internet which are not (yet) covered.
- You don't need to hide/fake every possible value, because that forces a website to develop new techniques to bypass it - as often happens: e.g. The New York Times detects ad-blockers, and each time this got "fixed" it ended up with more trackers & anti-adblock scripts. The goal should be to swim with the masses.
- The test page or its results can be manipulated to fake the outcome or to corrupt the database (with wrong results).
Problematic fingerprinting techniques
Here is an overview of problematic fingerprinting techniques which can compromise your privacy. The list is not complete, because not every fingerprinting technique is still functional or a threat to your privacy: the browser might already protect you against it, or it is addressed in the configuration file in this repository.
navigator.mediaDevices.enumerateDevices() - Can expose you by giving away identifiers of the media devices on your client; mainly a Chrome problem, though.
Resource Timing API Support - Mostly a Chromium problem, but some browsers still allow the Resource Timing API, which is a significant privacy risk.
window.navigator.mediaDevices.enumerateDevices basically allows a website, without any permission or consent, to learn about your device capabilities. See here and here for more information.
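If you want to cut this off at the source, the relevant about:config switches can go into a user.js; a sketch under the assumption that the current pref names still apply (verify in all.js):

```javascript
// user.js fragment — limit device enumeration (pref names may differ per version)
user_pref("media.navigator.enabled", false);      // getUserMedia / media-device info
user_pref("media.peerconnection.enabled", false); // WebRTC, which can also leak local IPs
```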
navigator.maxTouchPoints - Allows websites to access device-capability information.
TLS session resumption tracking - Advertisers track people using TLS session resumption data in order to "improve their service" and to collect (and possibly sell) such data.
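A possible counter-measure is a hidden Firefox pref that disables session identifiers/tickets; it does not exist by default and has to be created by hand, and the name is an assumption based on current builds, so verify it first:

```javascript
// user.js fragment — hidden pref, create it manually; name may change between versions
user_pref("security.ssl.disable_session_identifiers", true); // no TLS session resumption IDs
```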
URL tracking parameters - Stripping tracking query-string parameters can be done via scripts/extensions but is still not implemented in Firefox (or any other browser) by default.
window.name - A known bug in Firefox, and a well-discussed topic, which allows passing cookies through window.name; part of a bigger project to keep third parties from abusing first-party storage.
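To illustrate the mechanism: a value written to window.name on one site survives a top-level navigation to another site, so it can carry an identifier across origins. A minimal simulation, with a plain object standing in for the browsing context (window only exists in a browser, and navigate() is a hypothetical helper):

```javascript
// Simulates how window.name survives cross-origin navigation while
// cookies/storage do not. "navigate" stands in for a real top-level load.
function navigate(context, newOrigin) {
  return { origin: newOrigin, name: context.name }; // name is carried over
}

let ctx = { origin: "https://site-a.example", name: "" };
ctx.name = "tracking-id-12345";                // site A stashes an identifier
ctx = navigate(ctx, "https://site-b.example"); // user navigates away
console.log(ctx.origin, ctx.name);             // site B can still read the ID
```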
HSTS fingerprinting - HSTS fingerprinting can also be achieved in a first-party context; this is still an issue in almost all browsers.
speechSynthesis getVoices - Can expose device information; it's unclear how often it is used and which websites/apps really use it.
navigator.connection - See here for more information; it basically allows websites to read the current network state and track changes.
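Several of the small APIs listed above can be switched off outright; a user.js sketch, assuming the pref names from recent releases (check all.js for your version):

```javascript
// user.js fragment — disable small fingerprintable APIs (names may change)
user_pref("dom.netinfo.enabled", false);           // navigator.connection
user_pref("media.webspeech.synth.enabled", false); // speechSynthesis / getVoices()
user_pref("dom.w3c_touch_events.enabled", 0);      // touch events; maxTouchPoints reads 0
```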
IPTC Metadata - Services/websites like Facebook & Instagram (but not Twitter) use IPTC metadata in images to track their users and their behavior. Removal instructions (there is no extension or script for it) are explained here.
Keyboard API fingerprinting - Already semi-controlled via our configuration file, but still a problem. Firefox's implementation is less attackable than Chrome's.
Trackability of QUIC connections - Not really a QUIC problem; it's connected to pre-fetching data, which is (of course) not only a QUIC problem. It is listed, however, because anyone who uses the protocol can be tracked (there is only an on/off toggle, without any further protection).
FP2.js - fingerprintjs2 and other stuff like UA, fonts, window.navigator & co. This is "more or less" already mitigated in our configuration (but not fully).
Service workers - Third-party service workers should be blocked by default. You can also do this with popular web-filtering extensions; disallowing all service workers might break several services/extensions. For WebKit there are double-keyed service workers.
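If you accept the breakage, service workers can also be disabled globally; a sketch assuming the current pref name (verify in all.js):

```javascript
// user.js fragment — disable service workers entirely; breaks push
// notifications, PWAs, and possibly some extensions
user_pref("dom.serviceWorkers.enabled", false);
```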
First-party storage isolation - Currently the only fully working implementation is integrated in Tor Browser; Firefox's own isolation mechanism (FPI) e.g. isolates the DNS cache.
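Firefox's FPI can be toggled via prefs; a sketch, with pref names as in recent releases (an assumption, so verify in all.js):

```javascript
// user.js fragment — first-party isolation (double-keys cookies, cache, DNS, etc.)
user_pref("privacy.firstparty.isolate", true);
user_pref("privacy.firstparty.isolate.restrict_opener_access", true); // stricter; may break cross-site logins
```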
Ultra-Sonic device tracking - Audio-context data can be blocked to prevent advertisements/websites from exposing/deanonymizing user traffic. Firefox already protects you against it (see config) but does not prevent cross-device tracking, because it is still possible to draw connections between devices playing and listening for ultrasonic sounds.
Tracking of Zoom Levels - This has already been "solved", but some fingerprinting websites still show correct results (needs more work, same as USDT?).
eTag and cached scripts tracking - This still seems to be an issue for Chrome/Firefox. Extensions like SecretAgent can overwrite the ETag to prevent such tracking.
navigator.sendBeacon - This is not used exclusively for tracking; the problem is that disabling it will break websites. However, the Beacon API can be disabled (or parts of it, which we already did).
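The on/off switch for the Beacon API is a single pref; a sketch assuming the current name (verify in all.js):

```javascript
// user.js fragment — disable the Beacon API (removes navigator.sendBeacon)
user_pref("beacon.enabled", false);
```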
Overdramatizing the "importance" of font fingerprinting tracking
This gets a special place here, because apparently there is a lot of misinformation regarding font "tracking". Here are the official documents covering the technical aspect and how it has been implemented:
One (of many) popular deployed libraries that uses font-based fingerprinting; you see such scripts on various websites.
Some real background:
- Anti font-fingerprinting was officially introduced in Firefox 52 (stable).
- Font fingerprinting is not a security risk; it's a privacy concern/problem and nothing more. Making things up à la "the website knows who I am because they know my fonts" is nothing but horseshit (I'm terribly sorry to say it this way).
- Detecting and even tracking fonts doesn't expose you as a person. The bits of information sent by popular "fingerprinting scripts" are not enough to expose "important information", and the overall effectiveness is questionable - just because you assume/know that I use e.g. "Arial" as a font does not mean there are automatically exploits or methods which can "somehow" compromise me, and even if there were, they would need to bypass additional browser/OS protection mechanisms first.
- A tracking website can get the list of fonts available on your (local) system; in Safari this list is hard-coded, while under Firefox we control/"block" it via configuration or enable Firefox's own "tracking protection" mechanism. There are tons of extensions to fake the read-out, randomize it, or pre-set zero or xyz fonts so that the detection gets wrong results, in case you like to fine-tune it.
- Rendering in applications/browsers on e.g. Windows is done in user mode, which got fixed after 7 years; before that it was running in kernel mode.
- Privilege escalation is a security problem, so why isn't it connected to a "security" topic (according to you)? Because privilege escalation is a problem that does not (only) affect fonts (rendering software/engines). The software needs to be fixed, not the fonts; this already happened in LibreOffice, MS Office, Windows, and many other systems and applications.
- Using scripts/extensions - why? Ask the webmaster/website owner to implement open-source fonts if there are privacy concerns. This is often easier than messing around with external tools (which you have to trust); a lot of website owners are simply not aware of the privacy concerns regarding fonts, or of any alternatives.
- Using extensions to "emulate" web content will (at some point) cause trouble: it consumes more memory at Firefox start, and it does not prevent websites from getting "crippled". Some extensions address this by adding a whitelist mode; however, that is also only an "eat or die" solution and at best a workaround. Emulated or outdated fonts/resources might cause additional problems, because the website/CDN/hoster might then trigger another script which shows warnings about outdated browsers/fonts etc. (unless you block such additional extension/browser detection, which we of course did in our configuration).
- Faking, randomizing, or emulating fonts can make you more unique. Whenever the website/script detects that there are no fonts, or "strange" readings, you get a more unique ID back (from e.g. font-fingerprinting test pages or trackers).
- Assuming that a website sells data because it detects e.g. your fonts is FUD, unless that is explicitly mentioned on the website, or there are additional information/leaks to prove the website wrong. A lot of websites do not sell any information even if they run trackers.
- There are bug-related problems which might even break font detection/results or affect any tracking measurements.
- The integrated tracking protection does not protect you against all "privacy attacks"; it is designed primarily to enhance your local privacy from other users on your network or on the same machine. Most people don't like that their history, searches, cookies, and temporary files get deleted, because they typically trust their own local machine or the IT admins. The problem here is that TP aims to provide some basic protection against remote machines, but not against all known attacks.
- Firefox's own TP is (by default) not set to the strongest settings because, as explained above, some people want to keep their files, or it might just end up in a broken or crippled website.
- Using anti-tracking mechanisms results in (overall) more tracking, because websites try to bypass them (it's a cat-and-mouse game). Keep in mind that some trackers are also open source and designed to collect only metadata to detect website problems; this is controversial, but from the perspective of a developer who tries to "improve" his site it is totally legitimate. As of today, the integrated TP in Mozilla's browser has no ability to whitelist individual websites/scripts/resources within its GUI. That being said, it's an eat-or-die solution, because the user has to disable tracking protection (pause it) completely.
- The project does not enable any of Mozilla's "tracking protection" mechanisms in the configuration; web-filtering (which is what tracking protection basically should be) can be done with other (better) solutions (see suggested extensions). At this point it's still unclear why Mozilla did not work directly with certain known developers to introduce several "anti-tracking" techniques which are known to be more effective and controllable than the currently integrated ones.
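For completeness, the font read-out discussed above can also be narrowed via prefs if you decide you want that despite the uniqueness caveat; a sketch assuming current pref names, with placeholder whitelist values (verify in all.js):

```javascript
// user.js fragment — limit font enumeration (may make you *more* unique, see above)
user_pref("browser.display.use_document_fonts", 0); // ignore fonts specified by pages
user_pref("font.system.whitelist", "Arial, Courier New, Georgia"); // placeholder whitelist
```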
How do I switch from Chrome to Firefox?
There are several pages you can read to make your switch easier: