Problems With Evaluating Panoptic Scene Graphs
Our research identified a significant flaw in the current evaluation protocol for scene graph generation models: existing works do not always enforce a single predicted mask per subject or object and a single predicate distribution per subject-object pair.
Consequently, models that output multiple masks or predicate distributions gain an unfair advantage over models that output exactly one of each.
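A toy example makes this advantage concrete. The sketch below uses a deliberately simplified Recall@k over (subject, predicate, object) triplets; the real protocol also requires mask IoU matching and averages recall per predicate class (mR@k), both omitted here for brevity. All names are illustrative, not from our implementation.

```python
def recall_at_k(predictions, gt, k):
    """Fraction of ground-truth triplets found among the top-k predictions.
    Simplified: ignores mask IoU matching and per-predicate averaging."""
    top_k = predictions[:k]
    return sum(g in top_k for g in gt) / len(gt)

gt = [("person", "riding", "horse")]

# A model restricted to one predicate distribution per pair gets one
# guess for this pair -- here, a wrong one.
single = [("person", "on", "horse")]

# A model that emits several distributions for the same pair can hedge
# across predicates, so one of its copies hits the ground truth without
# any better understanding of the scene.
multi = [("person", "on", "horse"), ("person", "riding", "horse")]

print(recall_at_k(single, gt, 2))  # → 0.0
print(recall_at_k(multi, gt, 2))   # → 1.0
```

The hedging model scores higher purely because duplicates were never aggregated, which is exactly the loophole described above.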
The figure above shows a schematic comparison of the outputs of existing one-stage methods and our proposed two-stage method. One-stage methods often output multiple masks per real-world object, visualized with colored masks in B. This results in one predicate score distribution per mask-mask pair, but multiple distributions for pairs that share the same ground-truth subject and object. In current evaluation implementations, these multiple masks and relations are not aggregated and can therefore be exploited to increase mR@k scores. Our new method does not have this flaw.
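Closing the loophole amounts to collapsing all predictions that share a subject-object pair into a single predicate distribution before scoring. A minimal sketch of such an aggregation step is shown below; the function name and the element-wise-max rule are illustrative choices for this post, not necessarily the exact rule used in our evaluation code.

```python
from collections import OrderedDict

def enforce_single_distribution(predictions):
    """Collapse multiple predicate score lists for the same subject-object
    pair into one via element-wise max (illustrative aggregation rule).

    predictions: list of (subject_id, object_id, predicate_scores) tuples,
    where subject/object IDs already refer to merged, per-object masks.
    """
    merged = OrderedDict()
    for subj, obj, scores in predictions:
        key = (subj, obj)
        if key in merged:
            merged[key] = [max(a, b) for a, b in zip(merged[key], scores)]
        else:
            merged[key] = list(scores)
    return [(s, o, d) for (s, o), d in merged.items()]

# Two distributions for pair (1, 2) collapse into one; pair (3, 4) is kept.
preds = [(1, 2, [0.1, 0.9]), (1, 2, [0.8, 0.2]), (3, 4, [0.5, 0.5])]
print(enforce_single_distribution(preds))
# → [(1, 2, [0.8, 0.9]), (3, 4, [0.5, 0.5])]
```

After this step, every subject-object pair contributes exactly one distribution to the ranking, so duplicate predictions can no longer crowd the top-k.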
The effect of an unfair evaluation can be seen below:
The figure above compares the achieved mR@50 scores under: (1) the original, flawed MultiMPO protocol; (2) an exploit that improves MultiMPO scores for two-stage methods by using a better mask model; and (3) our newly introduced, fair SingleMPO protocol. Once all methods are evaluated equally under SingleMPO, the mR@50 scores of all one-stage methods decline, by as much as 19.3 points.