Closed Bug 1823641 Opened 3 years ago Closed 3 years ago

0.49 - 0.3% Base Content JS / Base Content JS (Windows) regression on Thu March 16 2023

Categories

(Toolkit :: General, defect)

Tracking

RESOLVED WONTFIX
Tracking Status
firefox-esr102 --- unaffected
firefox111 --- unaffected
firefox112 --- unaffected
firefox113 --- affected

People

(Reporter: bacasandrei, Unassigned)

References

(Regression)

Details

(Keywords: perf, perf-alert, regression)

Attachments

(1 file)

Perfherder has detected an awsy performance regression from push 651dadff8b4864650922a5beaa0c458cfda0028d. Since you are the author of one of the patches included in that push, we need your help to address this regression.

Regressions:

Ratio Test Platform Options Absolute values (old vs new)
0.49% Base Content JS windows11-64-2009-shippable-qr fission 1,620,292.50 -> 1,628,292.00
0.30% Base Content JS windows11-64-2009-shippable-qr fission 1,620,243.00 -> 1,625,104.00

Details of the alert can be found in the alert summary, including links to graphs and comparisons for each of the affected tests. Please follow our guide to handling regression bugs and let us know your plans within 3 business days, or the offending patch(es) may be backed out in accordance with our regression policy.

If you need the profiling jobs, you can trigger them yourself from the Treeherder job view or ask a sheriff to do that for you.

For more information on performance sheriffing please see our FAQ.

Flags: needinfo?(standard8)

The patches only changed how we load our modules. So I suspect the regression is likely related to how the JS engine loads modules; hence I'm passing the NI to :arai to see if they have any idea.
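
For context, a typical consumer-side conversion in the ESMification work looks roughly like this sketch (Foo.sys.mjs is a placeholder name, not necessarily one of the files touched by this push):

  // Before: load a classic JSM.
  // const { Foo } = ChromeUtils.import("resource://gre/modules/Foo.jsm");

  // After: load the equivalent ES module.
  const { Foo } = ChromeUtils.importESModule("resource://gre/modules/Foo.sys.mjs");

  // Or, from inside another system ES module, a static import:
  // import { Foo } from "resource://gre/modules/Foo.sys.mjs";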

Flags: needinfo?(standard8) → needinfo?(arai.unmht)

Set release status flags based on info from the regressing bug 1822556

there is some previous investigation into the memory consumption in bug 1793588.

on Windows, the regression is around +8kB, and there are ~40 files converted in the patch. so, roughly +200B each, which doesn't sound too big.
on macOS and Linux, there seems to be about +4kB around that range, so roughly +100B each.
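
(for reference: taking the larger Windows alert above, 1,628,292 - 1,620,292.5 is roughly 8,000 bytes, and 8,000 / ~40 converted files is roughly 200 bytes per file; the ~4kB on macOS and Linux works out to ~100 bytes per file the same way)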

possible source of the regression is either:

  1. module metadata held by each module
  2. hash table size class (see the sketch below)
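
to illustrate the second point: if a module-registry hash table grows its storage in power-of-two steps (as mozilla hash tables generally do), then registering a handful of extra modules can push it past a threshold and cost a whole extra doubling rather than a few entries' worth of memory. a minimal sketch of that effect, with the load factor and entry size picked purely for illustration:

  // Illustration only: power-of-two capacity growth means memory jumps in steps.
  function tableBytes(entries, entryBytes = 16, maxLoadFactor = 0.75) {
    let capacity = 4; // assumed minimum capacity
    while (entries > capacity * maxLoadFactor) {
      capacity *= 2;
    }
    return capacity * entryBytes;
  }

  // e.g. growing from 190 to 200 entries crosses 256 * 0.75 = 192 and doubles
  // the table: tableBytes(190) == 4096, tableBytes(200) == 8192.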

I'll check the memory consumption on xpcshell to see if the module itself takes much.

for each before/after patch, I've tested loading 0, 5, 10, 15, 20, 25, 30, 35, 40, and 45 modules in xpcshell and checked the memory consumption.
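
roughly, such a measurement can be scripted in xpcshell along these lines (a sketch only, not necessarily the harness used here; MODULES is a placeholder list, and a real script has to keep the event loop spinning until the finish callback runs):

  // Import the modules under test, then dump the js-main-runtime memory reports.
  const mgr = Cc["@mozilla.org/memory-reporter-manager;1"]
                .getService(Ci.nsIMemoryReporterManager);

  const MODULES = [
    "resource://gre/modules/Foo.sys.mjs", // placeholder entries
  ];
  for (const url of MODULES) {
    ChromeUtils.importESModule(url);
  }

  const lines = [];
  mgr.getReports(
    (process, path, kind, units, amount) => {
      if (path.startsWith("js-main-runtime/")) {
        lines.push(path + ": " + amount);
      }
    },
    null,
    () => print(lines.join("\n")),
    null,
    /* anonymize = */ false
  );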

I observed an increase in the following items, but none of them are proportional to the number of modules:

  • js-main-runtime/zones/unused-gc-things
  • js-main-runtime/zones/sundries/malloc-heap
  • js-main-runtime/gc-heap/unused-arenas
  • js-main-runtime/realms/sundries/malloc-heap

so, it could be related to the memory consumption of the module loader itself, rather than to each module script.

This seems potentially concerning if it's increased overhead from ESMification as we continue those efforts.

:jonco, can I have your input here?

Flags: needinfo?(arai.unmht) → needinfo?(jcoppeard)

Thanks for the detailed memory comparison, arai.

From this we can see the biggest contributions are from:

  • js-main-runtime/gc-heap/unused-arenas
  • these are unused arenas which will be decommitted at the next GC
  • js-main-runtime/zones/unused-gc-things
  • this is external fragmentation that will be cleaned up at the next shrinking GC
  • js-main-runtime/realms/sundries/malloc-heap
    • not enough information to see where this is coming from
  • js-main-runtime/zones/sundries/malloc-heap
    • not enough information to see where this is coming from
  • explicit/js-non-window/runtime/gc/nursery-malloced-buffers
    • these will either be cleaned up at the next minor GC or become part of js-main-runtime/realms/classes/objects/malloc-heap

So mostly this is stuff that will get cleaned up when we GC.
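
As a practical note, one can force collections from a privileged context such as xpcshell before measuring, so that this transient memory is excluded. A minimal sketch using the standard Components.utils helpers:

  // Force collections so unused arenas and fragmentation are reclaimed first.
  Cu.forceGC();          // full GC
  Cu.forceCC();          // cycle collection
  Cu.forceShrinkingGC(); // shrinking GC, which decommits unused arenas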

I don't think this is a big deal. We've converted over 65% of our modules to ESM without a significant memory regression, and arai's work shows that the memory increase is not proportional to the number of modules, so I don't think there's anything to worry about here.

Flags: needinfo?(jcoppeard)

Based on Jon's comment, I think this is effectively wontfix: there's not really anything we meaningfully can or want to do to address this regression specifically. The size is small, it doesn't look like it's proportional to the number of loaded modules, and therefore it shouldn't get worse.

Status: NEW → RESOLVED
Closed: 3 years ago
Resolution: --- → WONTFIX
