A Fair Ranking and New Model for Panoptic Scene Graph Generation

University of Augsburg
ECCV 2024

Abstract

In panoptic scene graph generation (PSGG), models predict interactions between objects in an image, grounded by panoptic segmentation masks. Previous evaluations of panoptic scene graphs have been subject to an erroneous evaluation protocol in which multiple masks for the same object can lead to multiple relation distributions per mask-mask pair, which can be exploited to increase the final score. We correct this flaw and provide a fair ranking over a wide range of existing PSGG models. Under the corrected protocol, scores increase by up to 7.4 mR@50 for two-stage methods while dropping by up to 19.3 mR@50 for one-stage methods, highlighting the importance of a correct evaluation. Contrary to recent publications, we show that existing two-stage methods are competitive with one-stage methods. Building on this, we introduce the Decoupled SceneFormer (DSFormer), a novel two-stage model that outperforms all existing scene graph models by a large margin of +11 mR@50 and +10 mNgR@50 on the corrected evaluation, setting a new state of the art. As a core design principle, DSFormer encodes subject and object masks directly into feature space.

Problems With Evaluating Panoptic Scene Graphs

Our research identified a significant flaw in the current evaluation protocol for scene graph generation models: existing works did not always enforce a single predicted mask per subject/object and a single predicate distribution per subject-object pair.
Consequently, models that output multiple masks or predicate distributions gain an unfair advantage over models that do not.

The figure above shows a schematic comparison of the output of existing one-stage methods to that of our proposed two-stage method. One-stage methods often output multiple masks per real-world object, visualized with colored masks in B. This yields one predicate score distribution per mask-mask pair, but multiple distributions for pairs that share the same ground-truth subject and object. In current evaluation implementations, these multiple masks and relations are not aggregated and can therefore be exploited to increase mR@k scores. Our new method does not have this flaw.
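A minimal sketch of the aggregation step a fair evaluation requires (illustrative only; names and the merge policy are hypothetical, not the benchmark's actual code): all predictions matched to the same ground-truth subject and object are collapsed into a single predicate distribution before recall is computed.

```python
import numpy as np

def enforce_single_mpo(predictions):
    """Keep at most one predicate distribution per (subject, object) pair.

    `predictions` is a list of dicts with keys:
      'subject_id', 'object_id'  -- ground-truth instance each mask was matched to
      'scores'                   -- np.ndarray of predicate scores
    Duplicates for the same pair are merged by keeping the distribution with
    the highest peak score (one of several reasonable policies).
    """
    best = {}
    for pred in predictions:
        key = (pred['subject_id'], pred['object_id'])
        if key not in best or pred['scores'].max() > best[key]['scores'].max():
            best[key] = pred
    return list(best.values())

# Three raw predictions, two of which describe the same subject-object pair:
raw = [
    {'subject_id': 0, 'object_id': 1, 'scores': np.array([0.1, 0.7, 0.2])},
    {'subject_id': 0, 'object_id': 1, 'scores': np.array([0.3, 0.4, 0.3])},
    {'subject_id': 2, 'object_id': 1, 'scores': np.array([0.6, 0.2, 0.2])},
]
deduped = enforce_single_mpo(raw)
print(len(deduped))  # 2
```

Other merge policies (averaging the distributions, keeping the largest mask's prediction) are equally valid; the point is that exactly one distribution per pair enters the ranking.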

The effect of an unfair evaluation can be seen below:

The figure above shows a comparison of achieved mR@50 scores with: (1) the original flawed MultiMPO values, (2) an exploit that improves MultiMPO scores of two-stage methods by using a better mask model, and (3) our newly introduced fair SingleMPO. Even though all methods are evaluated on equal footing, mR@50 scores for all one-stage methods decline under SingleMPO, with a maximum decrease of 19.3.
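A toy computation (hypothetical, not the benchmark implementation) makes the inflation mechanism concrete: if duplicate predicate distributions per pair are allowed, a model can hedge with several guesses for the same subject-object pair inside the top-k, which a committed single-prediction model cannot do.

```python
def recall_at_k(pred_triplets, gt_triplets, k):
    """Fraction of ground-truth triplets hit by the top-k predictions.

    Triplets are (subject_id, predicate, object_id);
    `pred_triplets` is assumed to be sorted by score already.
    """
    top = set(pred_triplets[:k])
    return sum(t in top for t in gt_triplets) / len(gt_triplets)

gt = [(0, 'on', 1)]

# A fair model commits to one predicate per pair -- and guesses wrong:
single = [(0, 'beside', 1), (2, 'near', 3)]
# Duplicated pairs let a model place several predicates for the same pair:
multi = [(0, 'beside', 1), (0, 'on', 1), (2, 'near', 3)]

print(recall_at_k(single, gt, k=2))  # 0.0
print(recall_at_k(multi, gt, k=2))   # 1.0
```

The second model is not better at relation prediction; it simply occupies more top-k slots per pair, which is exactly what SingleMPO disallows.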

DSFormer Architecture

Recent work pivoted to one-stage architectures for panoptic scene graph generation. However, we are confident that a scene graph model trained end-to-end is unlikely to outperform a dedicated segmentation model at mask prediction. Therefore, we devise a new scene graph architecture that is not trained end-to-end but instead receives subject and object masks as input.
For a given subject-object prompt, DSFormer encodes the two mask locations as an additional positional encoding added to each patch token in feature space.
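A minimal numpy sketch of this idea (illustrative, not the released implementation; the function name, pooling scheme, and overlap policy are assumptions): each mask is pooled onto the patch grid, every patch is assigned a state (background, subject, or object), and a learned embedding of that state is added to the patch token like a positional encoding.

```python
import numpy as np

def add_mask_encoding(tokens, subj_mask, obj_mask, state_embed, grid):
    """Add a learned subject/object location signal to patch tokens.

    tokens:      (N, D) patch tokens, N = grid * grid
    subj/obj:    (H, W) binary masks for the prompted pair
    state_embed: (3, D) learned table -- 0 = background, 1 = subject, 2 = object
    """
    H, W = subj_mask.shape
    ph, pw = H // grid, W // grid

    # Max-pool each mask onto the patch grid: a patch counts as covered
    # if the mask touches any pixel inside it.
    def pool(mask):
        return mask.reshape(grid, ph, grid, pw).max(axis=(1, 3)).flatten()

    state = np.zeros(grid * grid, dtype=int)
    state[pool(subj_mask) > 0.5] = 1
    state[pool(obj_mask) > 0.5] = 2   # object wins on overlap (a design choice)

    # Added like a positional encoding: per token, the transformer sees
    # whether the patch lies on the subject, the object, or neither.
    return tokens + state_embed[state]

# Toy example: 4x4 image, 2x2 patch grid, 3-dim tokens.
tokens = np.zeros((4, 3))
subj = np.zeros((4, 4)); subj[:2] = 1   # subject covers the top half
obj = np.zeros((4, 4)); obj[2:] = 1     # object covers the bottom half
embed = np.eye(3)                       # one-hot stand-in for a learned embedding
out = add_mask_encoding(tokens, subj, obj, embed, grid=2)
print(out.shape)  # (4, 3)
```

In a real model the embedding table would be trained jointly with the transformer, and the masks would come from a frozen panoptic segmentation model.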

Evaluation Results

Evaluation results are shown below. The original MultiMPO column shows the originally published values.

                 Predicate Classification ↑     Scene Graph Generation ↑
Method           mR@20   mR@50   mNgR@50        mR@20   mR@50   mNgR@50     original MultiMPO
IMP              11.25   12.72   27.58           8.81    9.78   21.73        7.88
MOTIFS           20.00   21.83   47.98          15.10   16.32   37.96       10.10
GPS-Net          15.46   18.62   33.60          12.35   14.48   27.14        7.49
VCTree           21.19   23.07   50.24          16.29   17.58   39.41       10.20
PSGTR                -       -       -          10.93   11.62   27.57       20.80
PSGFormer            -       -       -           8.20    8.20   21.75       18.30
Pair-Net             -       -       -          18.02   19.64   21.48       28.50
HiLo                 -       -       -          17.51   18.33   40.48       37.60
Ours             34.03   40.06   64.05          27.20   30.67   50.08      (50.08)