SoDeep: A Sorting Deep Net to Learn Ranking Loss Surrogates

Published in CVPR, 2019

Abstract

Several tasks in machine learning are evaluated using non-differentiable metrics such as mean average precision or Spearman correlation. However, their non-differentiability prevents their use as objective functions in a learning framework. Surrogate and relaxation methods exist but tend to be specific to a given metric.

In the present work, we introduce a new method to learn approximations of such non-differentiable objective functions. Our approach is based on a deep architecture that approximates the sorting of arbitrary sets of scores. It is trained virtually for free using synthetic data. This sorting deep (SoDeep) net can then be combined in a plug-and-play manner with existing deep architectures. We demonstrate the value of our approach on three different tasks that require ranking: cross-modal text-image retrieval, multi-label image classification, and visual memorability ranking. Our approach yields very competitive results on these three tasks, which validates the merit and the flexibility of SoDeep as a proxy for the sorting operation in ranking-based losses.
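The key enabler described above is that supervision for the sorter comes for free: any randomly drawn score vector can be paired with its ground-truth rank vector computed by ordinary (non-differentiable) sorting. The sketch below, a minimal NumPy illustration and not the paper's actual training code, shows how such synthetic (scores, ranks) pairs can be generated; the function names and the uniform score distribution are illustrative assumptions.

```python
import numpy as np

def rank_vector(scores):
    """Ground-truth rank of each score, with 0 for the largest value.

    This is the non-differentiable target that a sorting net would
    learn to regress, making rank prediction differentiable end to end.
    """
    order = np.argsort(-scores)          # indices that sort descending
    ranks = np.empty_like(order)
    ranks[order] = np.arange(len(scores))  # invert the permutation
    return ranks

def synthetic_batch(batch_size, dim, seed=None):
    """Generate (scores, ranks) training pairs from random scores.

    The uniform distribution here is an illustrative choice; any
    distribution of score vectors yields valid supervision.
    """
    rng = np.random.default_rng(seed)
    scores = rng.uniform(-1.0, 1.0, size=(batch_size, dim))
    ranks = np.stack([rank_vector(s) for s in scores])
    return scores, ranks
```

Once a differentiable sorter is trained on such pairs, a ranking metric like Spearman correlation can be approximated by applying the metric's formula to the sorter's (differentiable) predicted ranks instead of the true ranks.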


Citation

If you found this work useful, please cite the associated paper:

M. Engilberge, L. Chevallier, P. Pérez, and M. Cord, “SoDeep: A Sorting Deep Net to Learn Ranking Loss Surrogates,” in Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 2019, pp. 10792–10801.

BibTex:

@inproceedings{engilbergeSoDeep2019,
  title = {SoDeep: A Sorting Deep Net to Learn Ranking Loss Surrogates},
  booktitle = {Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  author = {Engilberge, Martin and Chevallier, Louis and Pérez, Patrick and Cord, Matthieu},
  year = {2019},
  pages = {10792--10801}
}