Evaluation Metrics
1 mIoU (Mean Intersection over Union)
Mean Intersection over Union (mIoU) measures segmentation accuracy. For each class, IoU (Intersection over Union) is the area of the intersection between the predicted segmentation and the ground truth divided by the area of their union; mIoU is the average of the per-class IoU values over all classes. Higher is better.
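As an illustration, the following minimal sketch (assuming NumPy, integer label maps of equal shape, and the hypothetical names pred and gt) computes mIoU from a confusion matrix:

```python
import numpy as np

def mean_iou(pred, gt, num_classes):
    """Mean IoU over classes, from integer label maps of equal shape."""
    # Confusion matrix: rows = ground-truth class, columns = predicted class.
    conf = np.bincount(
        gt.reshape(-1) * num_classes + pred.reshape(-1),
        minlength=num_classes ** 2,
    ).reshape(num_classes, num_classes)
    intersection = np.diag(conf)                      # correctly classified pixels per class
    union = conf.sum(0) + conf.sum(1) - intersection  # predicted + true - overlap
    iou = intersection / np.maximum(union, 1)         # guard against empty classes
    return iou.mean()
```

In practice, classes that are absent from both the prediction and the ground truth are usually excluded from the average rather than counted as zero.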
2 maxF (Maximum F-measure)
maxF is based on the F-measure, the harmonic mean of precision and recall. In saliency detection, precision is the proportion of pixels the model marks as salient that are truly salient, and recall is the proportion of truly salient pixels that the model detects. To compute maxF, the predicted saliency map is binarized at a series of thresholds, the F-measure is computed for each binarization, and maxF is the maximum F-measure obtained over the sweep. maxF is reported on a 0-100 scale; the closer the value is to 100, the more closely the detected salient regions match the ground-truth salient regions.
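A minimal sketch of this threshold sweep, assuming a saliency map normalized to [0, 1] and a binary ground-truth mask (the names are illustrative, and note that many saliency benchmarks use a weighted F-measure with beta^2 = 0.3 rather than the plain harmonic mean shown here):

```python
import numpy as np

def max_f_measure(saliency, gt_mask, num_thresholds=255):
    """Maximum F-measure over binarization thresholds; saliency in [0, 1]."""
    gt = gt_mask.astype(bool)
    best_f = 0.0
    for t in np.linspace(0.0, 1.0, num_thresholds, endpoint=False):
        pred = saliency > t                           # binarize at this threshold
        tp = np.logical_and(pred, gt).sum()           # correctly detected salient pixels
        precision = tp / max(pred.sum(), 1)
        recall = tp / max(gt.sum(), 1)
        if precision + recall > 0:
            f = 2 * precision * recall / (precision + recall)  # harmonic mean
            best_f = max(best_f, f)
    return 100.0 * best_f                             # reported on a 0-100 scale
```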
3 RMSE (Root Mean Square Error)
RMSE measures the absolute error between the predicted and true depth values: it is the square root of the mean of the squared per-pixel differences between the predicted depth map and the ground-truth depth map. Lower is better.
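For example, per-pixel depth RMSE can be computed as follows (a minimal sketch; pred_depth and gt_depth are assumed to be equally shaped arrays, and invalid pixels are often masked out in practice):

```python
import numpy as np

def depth_rmse(pred_depth, gt_depth, valid_mask=None):
    """Root mean square error between predicted and true depth maps."""
    if valid_mask is None:
        valid_mask = np.ones_like(gt_depth, dtype=bool)
    diff = pred_depth[valid_mask] - gt_depth[valid_mask]
    return np.sqrt(np.mean(diff ** 2))  # lower is better
```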
4 mErr (Mean Angular Error)
mErr is a commonly used metric for surface normal estimation. It is the angle between the estimated normal direction and the true normal direction at each pixel, averaged over all pixels and typically reported in degrees. Lower is better.
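The per-pixel angle is the arccosine of the dot product between the unit predicted and ground-truth normals. A minimal sketch, assuming H x W x 3 normal maps with illustrative names:

```python
import numpy as np

def mean_angular_error(pred_normals, gt_normals):
    """Mean angle (in degrees) between predicted and true surface normals."""
    # Normalize both normal maps to unit length (epsilon guards zero vectors).
    pred = pred_normals / np.maximum(
        np.linalg.norm(pred_normals, axis=-1, keepdims=True), 1e-12)
    gt = gt_normals / np.maximum(
        np.linalg.norm(gt_normals, axis=-1, keepdims=True), 1e-12)
    # Cosine of the per-pixel angle, clipped for numerical safety.
    cos = np.clip((pred * gt).sum(axis=-1), -1.0, 1.0)
    return np.degrees(np.arccos(cos)).mean()  # lower is better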
5 MTL gain (Multi-Task Learning gain)
To evaluate the overall performance across all tasks, we follow the method of Maninis et al. (2019) and adopt the multi-task learning gain metric (∆m): the relative change of each task's metric for the multi-task model with respect to its single-task baseline, averaged over all tasks, with the sign flipped for metrics where a lower value is better. A positive ∆m indicates that the multi-task model outperforms the single-task baselines on average.
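A minimal sketch of this computation, following the formula from Maninis et al. (2019) (the function name and the example values below are illustrative):

```python
def mtl_gain(multi_task, baseline, lower_is_better):
    """Multi-task learning gain (Delta_m) in percent.

    multi_task, baseline: per-task metric values, in the same order.
    lower_is_better: True for metrics such as RMSE and mErr,
                     False for metrics such as mIoU and maxF.
    """
    gains = []
    for m, b, lower in zip(multi_task, baseline, lower_is_better):
        sign = -1.0 if lower else 1.0      # flip the sign when lower is better
        gains.append(sign * (m - b) / b)   # relative change vs. the baseline
    return 100.0 * sum(gains) / len(gains)

# Illustrative values only: mIoU and maxF (higher is better),
# RMSE and mErr (lower is better).
print(mtl_gain([66.0, 84.0, 0.60, 14.8],
               [65.0, 83.0, 0.62, 15.0],
               [False, False, True, True]))  # ~1.83 (% average gain)
```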