Closed Bug 1781277 Opened 3 years ago Closed 3 months ago

Do not base storage estimate on user's disk when RFP is enabled

Categories

(Core :: Storage: Quota Manager, enhancement)


Tracking


RESOLVED FIXED
142 Branch
Tracking Status
firefox-esr102 --- affected
firefox-esr140 --- fixed
firefox142 --- fixed

People

(Reporter: tjr, Assigned: fkilic)

References

(Blocks 1 open bug)

Details

(Keywords: perf-alert)

Attachments

(2 files, 3 obsolete files)

When RFP is enabled we should not make a storage estimate based on any user hardware, and just pick a number and use that. Nothing we pick is likely to collide nicely with some percentage of existing users (at least without doing a telemetry measurement) so I'll arbitrarily offer 10GB?

When we finish this patch, we would like to backport it to ESR 102 for Tor.

Actually, we use the percentage of disk capacity only when an origin is persisted. Normally we use the group limit which is usually 10GB.

Clearing the ESR tracking request to get it out of the bug queries. Feel free to re-nominate if/when this bug starts moving forward.

Looking at the code, it seems like even the group limit can be based on the user's disk space - if there is less than 50 GB, but more than 10MB, that will leak through in the group limit. (And if there is less than 10 MB, that's also a leak.)

Given this, I think what I'm leaning towards would be to hardcode the limit when RFP is enabled, similar to the override pref. This would affect all origins (that is, an exempted origin wouldn't get a different quota) but that's okay.

If you have 50GB or more free, the group limit is capped to 10GB, the only thing that leaks is the user has 50GB or more space. If you have 50MB - 50GB, you would, in theory, be leaking the exact amount of disk space. If we made a cap, we will disguise 50MB - cap. (I think...)
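The leak described above can be sketched numerically. This is an illustrative model, not actual QuotaManager code; it assumes the numbers discussed in this thread (group limit roughly 20% of total disk, clamped between 10 MB and 10 GB), under which any disk between 50 MB and 50 GB is exactly recoverable from the reported limit.

```javascript
// Illustrative model of the leak (assumed numbers from this thread, not
// Firefox source): group limit = clamp(20% of total disk, 10 MB, 10 GB).
const MB = 1024 * 1024;
const GB = 1024 * MB;

function groupLimit(totalDiskBytes) {
  return Math.min(10 * GB, Math.max(10 * MB, Math.floor(totalDiskBytes * 0.2)));
}

// A 20 GB disk yields a 4 GB limit, so the disk size is recoverable as
// limit / 0.2 anywhere in the 50 MB - 50 GB range. Outside that range only
// a single bit leaks (">= 50 GB" or "< 50 MB").
console.log(groupLimit(20 * GB) === 4 * GB);     // true
console.log(groupLimit(1024 * GB) === 10 * GB);  // true: capped
console.log(groupLimit(20 * MB) === 10 * MB);    // true: floored
```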

My worry is how will the browser behave if the user's free disk space is very low, say 100MB, but we report there is more space available? I presume the website will be able to detect this somehow... so in effect we are really only disguising things for people who have at least cap free space.

Does that sound correct?

Flags: needinfo?(jvarga)

(In reply to Tom Ritter [:tjr] from comment #3)

Sorry for the late response.

Looking at the code, it seems like even the group limit can be based on the user's disk space - if there is less than 50 GB, but more than 10MB, that will leak through in the group limit. (And if there is less than 10 MB, that's also a leak.)

Given this, I think what I'm leaning towards would be to hardcode the limit when RFP is enabled, similar to the override pref. This would affect all origins (that is, an exempted origin wouldn't get a different quota) but that's okay.

Doing it in a similar way as with the pref override sounds good to me. Let's keep things simple.

If you have 50GB or more free, the group limit is capped to 10GB, the only thing that leaks is the user has 50GB or more space. If you have 50MB - 50GB, you would, in theory, be leaking the exact amount of disk space. If we made a cap, we will disguise 50MB - cap. (I think...)

My worry is how will the browser behave if the user's free disk space is very low, say 100MB, but we report there is more space available? I presume the website will be able to detect this somehow... so in effect we are really only disguising things for people who have at least cap free space.

Does that sound correct?

Well, we used to calculate the limits based on the free disk space, which had its upsides and downsides. The upside was that it was less likely for a write operation to fail due to a full-disk situation. The downside was obviously the fingerprinting. We switched to total disk space to mitigate the fingerprinting and also to match the spec, which explicitly states "It must not be a function of the available storage space on the device."
So now it's more likely to end up in a full-disk situation; that's more or less a given, and we can only try to come up with new mitigations if we see that users experience it too often.
From this point of view, reporting more space available in the situation you described would be consistent with the approach we've already taken (reporting more space available in general).

Not sure if it matters here, but we plan to increase the group limit cap from 10 GB to something like 40 GB in near future.

Flags: needinfo?(jvarga)

I forgot to mention that you probably want to inspect QuotaManager::GetUsageAndLimitForEstimate which is used to handle navigator.storage.estimate(). Especially https://searchfox.org/mozilla-central/rev/d307d4d9f06dab6d16e963a4318e5e8ff4899141/dom/quota/ActorsParent.cpp#6356-6358 where we return the calculated global limit as quota (using total disk space).

(In reply to Jan Varga [:janv] from comment #5)

I forgot to mention that you probably want to inspect QuotaManager::GetUsageAndLimitForEstimate which is used to handle navigator.storage.estimate(). Especially https://searchfox.org/mozilla-central/rev/d307d4d9f06dab6d16e963a4318e5e8ff4899141/dom/quota/ActorsParent.cpp#6356-6358 where we return the calculated global limit as quota (using total disk space) .

In that case we probably want to hardcode the limit (quota) as well.

Yes please, I think it would be very important to lie about the free space for best-effort storage.

(In reply to Jan Varga [:janv] from comment #4)

Not sure if it matters here, but we plan to increase the group limit cap from 10 GB to something like 40 GB in near future.

Have you considered having a different cap for Android?
I think the current min(0.1 * totalSpace, 10GB) is already bad for many old or low end devices (from a fingerprinting point of view).
Doing some bucketing for all users (e.g., always round to the GB/GiB, not only with RFP enabled) would be good, in my opinion.

Then, of course, lying unconditionally when RFP is enabled would be the best.
Whenever the total disk is less than 100GiB, Firefox currently leaks the exact storage size with byte precision.
I'd expect a consistent number of users to pair RFP with VMs with less than 100GiB of storage, or with live systems (e.g., Tails), in which case the amount of memory is leaked, instead.

If the attempt to fix this is as abandoned as it seems, could you please enhance the information text boxes in the privacy settings and explain that the hard disk size is exposed to random websites?

(In reply to Simon Mainey from comment #7)

FYI

Thanks for this extra context! I understand the unlanded proposed Tor fix (PR 690) to return, when resist fingerprinting is active, 8 TiB as the limit if the origin is persisted and 10 GiB if it isn't. The patch is also bit-rotted against m-c, especially since bug 1838415 made all RFP checks specify which target they're using. The targets can potentially change at runtime to fix web-compat breakage in bug 1824222, with some notable worker changes in https://phabricator.services.mozilla.com/D185014.

It does not appear that any target was created for storage purposes yet, and we probably would want to add that.

Restating other potentially notable context as it relates to the Tor Browser:

(In reply to Tom Ritter [:tjr] from comment #3)

My worry is how will the browser behave if the user's free disk space is very low, say 100MB, but we report there is more space available? I presume the website will be able to detect this somehow... so in effect we are really only disguising things for people who have at least cap free space.

Currently, because we do not monitor the available disk space from QuotaManager, it's possible for us to attempt to over-provision storage use. What would potentially happen is that the disk fills up, lower level OS calls start failing due to a full disk, those get mapped to/reported as NS_ERROR_FILE_NO_DEVICE_SPACE which will in turn map to NS_ERROR_DOM_QUOTA_EXCEEDED_ERR which map to QuotaExceededError as reported to content. (Browser functionality that's trying to perform writes for other reasons would also likely have issues.)

We likely need an enhancement to QuotaManager as a follow-up to bug 1735713 to establish a "cap" capability like you discuss to help with this. Right now we use mTemporaryStorageLimit for both defining the group limit and checking if a quota usage increment would put us over that limit. It seems like we really want a concept of "reported" and "actual". Reported could still continue to be based on the disk capacity (although we do need to resume quantization like we were doing with the chunk size logic pre-bug 1735713). But actual really needs to be based on the pre-bug 1735713 approach of basically: "amount of disk already used by QM for storage" + "amount of free disk space" - "buffer for the OS, other apps, and Firefox". We used a percentage (50%) before instead of subtracting off a buffer, but it seems like we definitely would want to ensure an absolute buffer/cap even if we did use a percentage. And this should be driving our decision of when to NotifyStoragePressure and clear origins based on storage pressure, so in the event we do have at least enough storage space available to honor what we're claiming for active origins, that should work out.

Pre-bug 1735713 we didn't actually update our free space awareness as we ran, just at quota init. This could be a reasonable initial approach because of the potential complexity in dealing with hysteresis, in which case we would want to make sure the buffer has a large enough absolute factor to provide some level of confidence about not running into problems from the HTTP cache, temporary files, etc.
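A minimal sketch of the "reported" vs "actual" split described above (assumed design, not shipped code; the buffer size and quantization chunk are made-up parameters for illustration):

```javascript
const GB = 1024 ** 3;

// "Actual" enforcement limit, in the pre-bug-1735713 spirit: space QM
// already uses plus free space, minus an absolute buffer for the OS,
// other apps, and Firefox itself. (bufferBytes is a hypothetical value.)
function actualLimit(qmUsedBytes, freeBytes, bufferBytes = 2 * GB) {
  return Math.max(0, qmUsedBytes + freeBytes - bufferBytes);
}

// "Reported" limit: still derived from disk capacity, but quantized to a
// chunk so the exact disk size is not exposed.
function reportedLimit(totalDiskBytes, chunkBytes = 1 * GB) {
  return Math.floor((totalDiskBytes * 0.2) / chunkBytes) * chunkBytes;
}
```

With this split, quota checks would use `actualLimit` while `estimate()` returns `reportedLimit`, which is the decoupling the following paragraph builds on.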

With quota enforcement then somewhat decoupled from the reported quota, we could then adopt something akin to the Tor patch approach where we define a consistent quota to report per origin (possibly varying between mobile and desktop, which is probably otherwise fingerprintable), but with the confidence that the origin shouldn't be able to fill up the disk.

For the case where the user literally only has 100MB of free disk space, I think that ends up being the case where we need the OS to help out. I know windows has a low disk space warning it triggers, and I think that's something that the browser can't really help with, other than QM should run its storage pressure logic in such a case to try and clean up storage it's using. That said, especially if we introduce an absolute buffer/cap, I guess there's a question of whether we should try and do something where there's enough space for the cap, but not really any space for storage. We could disable use of Storage APIs in general, but that might be more hurtful than helpful. Maybe the least bad thing would be to just make sure that our buffer/cap is set so that we are willing to use enough disk space so the low disk space warning would still trigger on the given platform?

Assuming you need a fake storage object to virtualize the whole storage API during unit tests, why not use such an object, make it 10 MB in size, and that's it? I have yet to understand why a random website should store gigabytes of stuff on a user's machine, especially in private mode.
I do not know how FF stores things, but I assume you cannot map web files and directories directly to user files and directories, otherwise you leak the maximum directory depth (might be relative to your FF profile folder), the maximum file name length, the special characters supported in file names, or even suffer from directory traversal attacks using ".." or such. So, assuming you have it all sandboxed, why not just put a "virtual HDD" underneath all the complicated stuff you have built on top of it, and make it a fixed size? If you let all storage-related stuff, including estimation and whatnot, pass through a virtual file system, you have both: total determinism and total control. If you make it too complicated it will not work anyway.

One general note: a mode like resistFingerprinting, which should not be a config switch but really a checkbox in the privacy settings, should weight privacy more heavily than any website feature. If the user prefers privacy and WebGL would not work because it leaks too many bits, disable WebGL. If WebRTC is known to leak IP addresses (didn't it use to leak even UUID-like IPv6 addresses?), disable it. If storage is leaking your HDD size, disable it.
I do not see why a privacy-first mode does not fit into the year 2024. Most of the advanced JS features are not essential; few websites use them for their intended purpose, others use them for tracking only. The quota estimation issue has been a productive ingredient of commercial fingerprinters for years, and no one followed up on this bug... even though it leaks so many bits! If you could sacrifice function to a privacy mode, you could quickly disable the function while a fix is in the works. This is safe and transparent, and trackers obtain a single bit only. Fingerprinting add-ons that need to modify JS will leak a patched JS environment. So, assuming that going for privacy is one golden bit you will always leak and never be able to prevent, follow-up entropy from disabling other features is not a problem. All is good if all FF instances behave the same, and with such a switch you could just go back to surfing and wait until bugs are fixed... safely...

A couple of points

(In reply to Andrew Sutherland [:asuth] (he/him) from comment #10)

which means the browser is always in permanent private browsing mode
symmetry between PBM and non-PBM

Just FYI, it is entirely possible that Tor Browser integrates a non-PBM "somewhere". Bug 1366318 & others: IDB, caches, SWers, clearkey in persistentState (Bug 1706121) etc, when all plumbed and working, we should hopefully have no trivial-performant core [1] differences.

So I agree, it would be best for permissions/gating to align between PBM and non-PBM. Auto-rejection + timing is an issue, but so are fingerprinting scripts asking, as it's user-hostile unless it's intended to be used. So if the default was to always await the user in both modes, that works for me.

[1] Orthogonal to that is FPP which is ETP Strict (and site exceptions) or PB mode, so not a 100% indicator but trivial to detect and a degree of confidence based on defaults

possibly varying between mobile and desktop, which is probably otherwise fingerprintable

It's impossible to hide this distinction and Tor Browser doesn't even try - it literally says Windows, Mac, Linux or Android 10; Mobile in navigator. So using different strategies for desktop vs mobile (or even per platform) here is fine if it makes sense. I just don't see platform as being equivalent - this is a h/w disk-space issue.

Duplicate of this bug: 1871979

Further comment regarding

"symmetry between PBM and non-PBM": this really should not block the fix of HDD size leak. its just one bit.

And most ironically, I think they use another property of storage to detect PBM anyway:

storage.getDirectory() => ok => non private mode
storage.getDirectory() => "Security error when calling GetDirectory" => private mode

:/

This is a really hard issue unfortunately :| We could lie for navigator.storage.estimate() calls and just return max(usage + 1GB, 10GB) to make the estimate result useless for fingerprinters and that would be fine for MOST fingerprinters (non-invasive ones), but if a fingerprinter is really determined to get this number, they could just put items to an IndexedDB until they get a quota error. While the 10GB cap can help with this, devices with less than 10GB free storage will still leak their real storage size if an invasive fingerprinter tries to fill up the indexeddb with junk. The only real solution I can think of for this is changing the cap to 1GB in RFP and hope that no one visits pages with really invasive fingerprinters (fingerprinters willing to fill up to 1gb of user data per origin) with less than 1GB of free storage available.

Changing the cap to 1GB should be fine for most people, but I'm not sure if that would be enough for Tor users, because some tend to run Tor in a VM, and we can't control the amount of storage allocated.

Other than the 10gb/1gb issue, I think for fingerprinting purposes lying about limit for navigator.storage.estimate() calls sounds like a viable option.
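The spoofed-estimate idea can be sketched as below. This is a hypothetical illustration only: comment 17 writes max(usage + 1GB, 10GB), while the later summary in this thread writes min(...); the sketch uses min, i.e. the "10 GB cap" reading.

```javascript
const GB = 1024 ** 3;

// Hypothetical spoofed quota for navigator.storage.estimate(): the
// reported number tracks only the origin's own usage, never the disk.
function spoofedQuota(usageBytes) {
  return Math.min(usageBytes + 1 * GB, 10 * GB);
}

console.log(spoofedQuota(0) === 1 * GB);        // true: fresh origin sees 1 GB headroom
console.log(spoofedQuota(12 * GB) === 10 * GB); // true: capped at 10 GB
```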

(In reply to Fatih Kilic from comment #17)

Changing the cap to 1GB should be fine for most people, but I'm not sure if that would be enough for Tor users, because some tend to run Tor in a VM, and we can't control the amount of storage allocated.

Other than the 10gb/1gb issue, I think for fingerprinting purposes lying about limit for navigator.storage.estimate() calls sounds like a viable option.

This might be a problem for Tails users, who run a live system, and system memory will also be their storage limit.
I guess that nowadays 1GB should be fine for that too, though 😄.

Assignee: nobody → fkilic
Status: NEW → ASSIGNED

(In reply to Pier Angelo Vendrame from comment #18)

This might be a problem for Tails users, who run a live system, and system memory will also be their storage limit.
I guess that nowadays 1GB should be fine for that too, though 😄.

Oh right, there's that too. I really don't think any fingerprinter would try to fill up IndexedDB to fingerprint a user. To successfully do it, you would have to first fill it up, which might take a while depending on write speeds, and clear it back before the page unloads. Doing this on every origin a user visits would also be very costly. I was going to suggest that disabling IndexedDB might make sense for Tor, but if a user wants that level of privacy, it makes more sense to use no-JS mode instead, as IndexedDB/storage isn't the only source of entropy and is probably one of the harder/costlier ones to exploit.

I separated the StorageEstimate and StorageCap RFP targets, though, just so that we can allow Firefox users to do +AllTargets,-StorageCap if they wish, or allow it for specific sites using remote granular overrides.

Could someone briefly summarize what the latest plan here is and what the patches intend to do?
It would be also good if @asuth took a quick look since he commented a lot about this previously.
Thanks.

In summary: lie about the quota and set it to min(usage + 1GB, 10GB), and set the temporary limit to 1 GB, but both are managed with different flags, so it is up to the user to enable them. We can wait for @asuth's comment and :tjr's review.

(In reply to Fatih Kilic from comment #24)

In summary lie about quota and set it to min(usage + 1GB, 10GB) and set temporary limit to 1 GB but both are managed with different flags, so it is up to the user to enable them. We can wait for @asuth's comment and :tjr's review.

Andrew, are you ok with this solution ?
It seems you proposed something much more complex in comment 10.

Flags: needinfo?(bugmail)

(In reply to Fatih Kilic from comment #17)

This is a really hard issue unfortunately :| We could lie for navigator.storage.estimate() calls and just return max(usage + 1GB, 10GB) to make the estimate result useless for fingerprinters and that would be fine for MOST fingerprinters (non-invasive ones), but if a fingerprinter is really determined to get this number, they could just put items to an IndexedDB until they get a quota error.

Would this be mitigated by not faking just the quota returned by estimate() but doing that actually for the limit (quota) which is used internally by the quota manager for quota checks?

Would this be mitigated by not faking just quota returned by estimate() but doing that actually for the limit (quota) which is used internally by quota manager for quota checks?

Yeah, we could lie about it in the parent process as well, but my thinking is that no fingerprinter would try to hit the actual cap by filling up the browser with junk; that's why I did the implementation like this. I don't really want to break websites with legit uses (I don't know what type of website would require 1GB of content, but I'm sure there might be one).

Also, I feel like if privacy measures need to be at that level, dom.quotaManager.temporaryStorage.fixedLimit could be used to actually limit the quota for even more hardened browsers like Tor, Mullvad, etc.

(In reply to Fatih Kilic from comment #27)

Would this be mitigated by not faking just quota returned by estimate() but doing that actually for the limit (quota) which is used internally by quota manager for quota checks?

Yeah we could lie about it in parent process as well, but my idea is no fingerprinter would try to hit the actual cap by filling up the browser with junk, so that's why I did the implementation like this. I don't really want to break websites for legit uses

But your patch actually allows the quota manager to use the normal group limit internally for quota checks; it's just estimate() that reports a different limit (quota). So the internal limit would still potentially be large, allowing a lot of data to be stored.
Am I missing something?

So my understanding is, a fingerprinter could get info about the disk space using two methods:

  • Filling the browser with junk until it hits a quota error
  • Calling navigator.storage.estimate()

The first option (IMO) wouldn't be very desirable for fingerprinters, for the following reasons:

  • Most fingerprinters need to be fast; trying to hit the cap would take a while.
  • User experience would suffer from the I/O
  • Junk data would have to be reliably cleared out before the user exits the page. For example, if the fingerprinter doesn't reach 10GB of quota before the user exits the page, then that data is going to take a lot of space as long as it isn't evicted, and this process would need to run per origin.

So, IMO it is enough to prevent the quick way to get the disk space and keep the internal checks without any modifications/spoofing. The goal isn't to break websites requiring large storage, but to prevent fingerprinters from gathering the disk space in a quick way. D223842 should also set the group limit to 1GB (after I fix the typo).

Just to be clear navigator.storage.estimate() is used by applications to avoid quota exceeded errors (or rather mitigate them), see this example:
https://storage.spec.whatwg.org/#example-3a7051a8
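A minimal sketch of that pattern follows. The `storage` parameter is injectable here only so the snippet can run outside a browser; in a page you would call it with the real `navigator.storage`.

```javascript
// Check remaining headroom before a large write, per the spec example's
// intent: degrade gracefully instead of hitting QuotaExceededError.
async function canStore(bytesNeeded, storage = navigator.storage) {
  const { usage, quota } = await storage.estimate();
  return quota - usage >= bytesNeeded;
}
```

An app would call `if (await canStore(videoSize)) { /* cache it */ }` and fall back to streaming otherwise, which is why a spoofed quota needs to stay plausible.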

This might require more work, but could we generate the global limit based on a random number and then also generate group limit based on random numbers for each eTLD+1 domain ?

(In reply to Jan Varga [:janv] from comment #30)

Just to be clear navigator.storage.estimate() is used by applications to avoid quota exceeded errors (or rather mitigate them), see this example:
https://storage.spec.whatwg.org/#example-3a7051a8

This might require more work, but could we generate the global limit based on a random number and then also generate group limit based on random numbers for each eTLD+1 domain ?

See also bug 1290481

There's also one very simple option: D223842 would use a 50GB fixed cap (instead of 1GB/10GB). The group limit would then always be 10GB. Nothing would be based on disk capacity or free disk space. We could provide a smaller cap for mobile (Android).

The goal: hide users with less than 50gb storage size (because the group limit uses 20% of total disk space).
Proposed Solutions

  • Usage + 1gb, capped at 10gb
    • Cons
      • vulnerable to filling up with junk to get the actual value
      • not consistent between internal and reported values
    • Pros
      • doesn't affect chrome
    • Achieves goal? Yes, as long as no one fills the browser with junk; more likely to cause breakage
  • Fixed 50gb cap, or lower on Android
    • Cons
      • vulnerable to filling up with junk to get the actual value
      • affects chrome too, but a search on Searchfox suggests storage.estimate is not used by any component at the moment
      • Android devices with more than 50gb of storage could be affected (maybe we can provide the same storage limit as desktop for Android 12+ devices, because we already expose Android 12+; this assumes Android 12+ devices have more than 50gb of storage, though)
    • Pros
      • more consistent between internal and reported values
    • Achieves goal? Yes, as long as no one fills the browser with junk; less likely to cause breakage
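The fixed-cap option amounts to feeding the existing limit computation a constant instead of the real disk size. An illustrative sketch (the "group limit = global limit / 5" factor is taken from a later comment in this thread, not from the patch itself):

```javascript
const GB = 1024 ** 3;
const PRETEND_DISK = 50 * GB; // fixed cap, independent of real hardware

// The usual machinery (per this thread: group limit = global limit / 5,
// capped at 10 GB) then lands on the same 10 GB for every user.
function groupLimitUnderFixedCap(pretendDiskBytes = PRETEND_DISK) {
  return Math.min(10 * GB, pretendDiskBytes / 5);
}

console.log(groupLimitUnderFixedCap() === 10 * GB); // true: one constant for everyone
```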

I agree with your solution. If both are susceptible to the junk-data thing, then choosing the one with more consistency makes sense. Since the estimate is based on total disk space and not free space, devs should still expect to get out-of-quota errors if there's no free space available anyway, so to not break web-facing behaviour I think your solution makes more sense.

Also if other browsers require even stricter limits, they can just set dom.quotaManager.temporaryStorage.fixedLimit to a lower limit.

chrome wouldn't be affected because chrome code uses special persistent storage with no quota checks

Oh, I see. Even better then.

Attachment #9427585 - Attachment is obsolete: true
Attachment #9427584 - Attachment is obsolete: true

The patch is now pursuing Jan's suggested comment 32 behavior and that seems good. In comment 10 I was discussing us handling the QM storage directory actually running out of disk space, but I think that's something that is out-of-scope for this bug and something QM would need to handle first.

Flags: needinfo?(bugmail)

Status?

I believe Tor decided to set dom.quotaManager.temporaryStorage.fixedLimit and lock the pref instead of patching the behaviour.

That's also an option.

Let's do it.

Use tor behaviour if rfp = true

(In reply to Bug from comment #42)

Use tor behaviour if rfp = true

Tests at https://arkenfox.github.io/TZP/tzp.html#storage

Apologies if I mix up the "terms" ... Tor Browser currently only has to deal with PB mode and returns a fixed value for storage quota

navigator.storage.estimate().then(estimate => {let bytes = estimate.quota})

PB mode does not currently show a storage-manager permission prompt - and if requested, it simply returns the same value as without the permission. RFP would also need to decide what happens when the permission is granted - is this permission session-only? If so, then I would suggest RFP allow it, similar to canvas - i.e. it is only the one site, not a universal reveal of the real value. Keep in mind that Tor Browser might move to using normal windows (not PB windows) in the future.

navigator.storage.persist().then(function(persistent) {
    navigator.storage.estimate().then(estimate => {
        let bytes = estimate.quota;
    });
});

Do we really want to implement a RFP target for limiting this, or should we just use the pref and close this issue?

(In reply to Fatih Kilic [:fkilic] from comment #44)

Do we really want to implement a RFP target for limiting this, or should we just use the pref and close this issue?

IIUIC ... we'd still want the behavior behind a target - shouldn't all RFP protections be behind targets? Are there some that aren't?

Yep, usually having RFPTargets is a better solution than using prefs.

Not that I know of. Only letterboxing, I think. My thinking was that if different forks wanted different limits, we wouldn't have to deal with it, and we wouldn't have to update it every X years (in Firefox's tree), for example. But if RFPTargets are preferred, sure.

Is this the correct place to steal the fixed amount from Tor Browser?

Attachment #9427586 - Attachment description: Bug 1781277: Implement StorageCap RFP target. r?tjr → Bug 1781277: Implement DiskStorageLimit RFP target. r?tjr

(In reply to Fatih Kilic [:fkilic] from comment #47)

Not that I know of. Only letter boxing i think.

I was thinking about prefs like spoof English, which gives trouble at several levels (e.g., it's the only RFPTarget which doesn't obey RFP; it changes intl.accepted_language behind the scenes, which is problematic because of how that pref is implemented and for tests; and maybe others I can't think of right now).

My thinking was if different forks wanted different limits, we wouldn't have to deal with it and we wouldn't have to update it every x years (on firefox's tree) for example, but i mean if RFPTargets preferred, sure.

Is this the correct place to steal the fixed amount from Tor Browser?

The value was chosen to match non-RFP Firefox, which grants 10GiB to sites.
However, it was multiplied by 5 because that's how the mechanism is implemented.
It'd be great to have a method to query the default quota (these 10GiB), without hardcoding it in nsRFPService.h or something similar.
I.e., it'd be good to have a static method to replace the 10 GB hardcoded in https://searchfox.org/mozilla-central/rev/cc4985b75f4b6434eaba39c0dce35768521fcd6f/dom/quota/ActorsParent.cpp#7344.

Pushed by ctuns@mozilla.com: https://github.com/mozilla-firefox/firefox/commit/ae22ab00ba11 https://hg.mozilla.org/integration/autoland/rev/db3e68818933 Revert "Bug 1781277: Implement DiskStorageLimit RFP target. r=tjr,dom-storage-reviewers,asuth" for causing xpcshell failures in test_temporaryStorageRFP.js

Backed out for causing xpcshell failures

  • Backout link
  • Push with failures
  • Failure Log
  • Failure line: TEST-UNEXPECTED-FAIL | dom/quota/test/xpcshell/test_temporaryStorageRFP.js | xpcshell return code: 0
    TEST-UNEXPECTED-FAIL | dom/quota/test/xpcshell/test_temporaryStorageRFP.js | testSteps - [testSteps : 56] RFP limit should be applied - 6871947673.6 == 6871947673
Flags: needinfo?(fkilic)

I forgot to Math.floor the second expected limit value. Fixed it and pushed it.
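The failure above is the classic strict-equality-on-floats shape. A hypothetical reconstruction (the 32 GiB total and the 0.2 fraction are inferred from the numbers in the failure line, not taken from the actual test):

```javascript
// 34359738368 * 0.2 produces the float 6871947673.6; comparing it with the
// integer limit 6871947673 fails until the expectation is floored.
const spoofedTotal = 34359738368; // hypothetical 32 GiB
const fraction = 0.2;             // hypothetical share used for the limit

const wrongExpected = spoofedTotal * fraction;             // 6871947673.6
const fixedExpected = Math.floor(spoofedTotal * fraction); // 6871947673
```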

Flags: needinfo?(fkilic)
Status: ASSIGNED → RESOLVED
Closed: 3 months ago
Resolution: --- → FIXED
Target Milestone: --- → 142 Branch

site: https://arkenfox.github.io/TZP/tzp.html#storage

control: defaults (no RFP)

  • storage quota: 10 GB [10737418240 bytes]

RFP

  • set dom.quotaManager.temporaryStorage.fixedLimit to e.g. 32000000 (so we can get a different value to RFP)
  • close and reopen FF (otherwise the fixedLimit isn't updated)
    • storage quota: 6-7 GB [6553600000 bytes]
  • in a new tab enable RFP
  • reload the test (F5)
    • storage quota: 6-7 GB [6553600000 bytes] - expected 10 GB
  • close all tabs but not the browser, reload the test
    • still 6-7 GB, expected 10GB

@fkilic - unless I'm doing something wrong, the pref is overriding the RFP value?

edit: nightly is Built from https://hg.mozilla.org/mozilla-central/rev/588b002c5e6c93b6a1a561996949ab74eee2fd9d

The temporary storage limit is set only once per run AFAICT, so you have to close and reopen the browser after enabling the RFP target (similar to the refresh-rate RFP target).

Regressions: 1975544

tested, restarts between each

  • default
    • storage quota: 10 GB [10737418240 bytes]
    • allowed: 465 GB [499388512256 bytes]
  • control (pref 32000000)
    • storage quota: 6-7 GB [6553600000 bytes]
    • allowed: 30.5 GB [32768000000 bytes] <-- because of the fixedLimit
  • RFP (pref 32000000 to show difference)
    • storage quota: 10 GB [10737418240 bytes] - verified
    • allowed: 50 GB [53687091200 bytes] <-- edit: ????? because of RFP's override fixedLimit

real world

  • RFP (pref -1 = default)
    • storage quota: 10 GB [10737418240 bytes]
    • allowed: 465 GB [499388512256 bytes]
    • can't tell if it's working because it matches my default

@fkilic - is this what we expected?

sorry for the noise - ESR140.1 uplift?

RFP (pref -1 = default)

...
allowed: 465 GB [499388512256 bytes]

Hmm, I think this one should be 50GB as well. Do you still get 465GB when you clear site data from the padlock icon?

Do you still get 465gbs when you clear site data from the padlock icon

Nope, I now get 50GB - and in fact I get 50GB every time.

NI why I got/reported 465 earlier - weird - doesn't seem possible since after a restart the values are locked in!

There's bug 1558478, but I'm not sure if that's why.

Attachment #9498610 - Flags: approval-mozilla-esr140?

Huh, https://phabricator.services.mozilla.com/D256110 picked up the wrong commit/revision.

Attachment #9498610 - Attachment is obsolete: true
Attachment #9498610 - Flags: approval-mozilla-esr140?
Attachment #9498611 - Flags: approval-mozilla-esr140?

Huh again, it is picking up the old patch and not the new one, interesting. I'll manually update the uplift patch.

firefox-esr140 Uplift Approval Request

  • User impact if declined: Only affects "Resist Fingerprinting" users' UX. Firefox doesn't support this configuration. We are requesting the uplift for Tor Browser.
  • Code covered by automated testing: yes
  • Fix verified in Nightly: yes
  • Needs manual QE test: no
  • Steps to reproduce for manual QE testing: See steps in https://bugzilla.mozilla.org/show_bug.cgi?id=1781277#c55, but restart the browser and clear site data after persistence permission is granted
  • Risk associated with taking this patch: None
  • Explanation of risk level: N/A
  • String changes made/needed: No
  • Is Android affected?: yes
Attachment #9498611 - Flags: approval-mozilla-esr140? → approval-mozilla-esr140+

(In reply to Pulsebot from comment #50)

Pushed by ctuns@mozilla.com:
https://github.com/mozilla-firefox/firefox/commit/ae22ab00ba11
https://hg.mozilla.org/integration/autoland/rev/db3e68818933
Revert "Bug 1781277: Implement DiskStorageLimit RFP target.
r=tjr,dom-storage-reviewers,asuth" for causing xpcshell failures in
test_temporaryStorageRFP.js

Perfherder has detected a devtools performance change from push db3e688189334dee49b1ff946a3cae0b039fa074. As author of one of the patches included in that push, we need your help to address this regression.

If you have any questions or need any help with the investigation, please reach out to a performance sheriff. Alternatively, you can find help on Slack by joining #perf-help, and on Matrix you can find help by joining #perftest.

Regressions:

Ratio Test Platform Options Absolute values (old vs new)
4% damp custom.netmonitor.exportHar macosx1470-64-shippable e10s fission stylo webrender 598.47 -> 621.90

Details of the alert can be found in the alert summary, including links to graphs and comparisons for each of the affected tests.

If you need the profiling jobs you can trigger them yourself from treeherder job view or ask a performance sheriff to do that for you.

You can run all of these tests on try with ./mach try perf --alert 45903

The following documentation link provides more information about this command.

Keywords: perf-alert

I think this is a wrong alert. This patch adds a single line of change (besides moving stuff around) that's pretty optimized, so the impact should be very minimal. Moreover, the alert is on the backout, so with our additional check no longer present, performance should be the same as before. So I think this is just a wrong alert.

I agree that the alert seems likely to be noise/erroneous (and reiterating the alert is on the backout, not the landing). devtools code primarily runs under the system principal which is not subject to quota limits in the first place (also I think the HAR is only accumulated in memory, not written to disk?). I did a quick scan of the code used in the test and it does not seem likely that anything should be related. I also tried to look at the alert to see if there is anything like a profile available to sanity check, but in multiple attempted reloads, the treeherder UI errored out with different errors that all seem to suggest server problems.

QA Whiteboard: [qa-triage-done-c142/b141]
See Also: → 1977787