This data competition studies the problem of map synchronization. The goal is to construct consistent and faithful maps across a collection of related objects, where the maps take the form of either feature correspondences across images/shapes or rigid transformations between depth scans. Map synchronization is a fundamental task in many scientific disciplines, with applications ranging from 3D geometry reconstruction from partial scans, data-driven geometry completion and reconstruction, and texture transfer to comparative biology, joint data analysis, and data exploration and organization.
This data competition is hosted as part of the 2018 IEEE 28th International Workshop on Machine Learning for Signal Processing (MLSP), September 17-20, 2018, Aalborg, Denmark. Awards will be given at the workshop, and time slots can be arranged during MLSP for presenting and discussing competition results.
A typical pipeline for map synchronization consists of two stages: first, pairwise maps are computed between pairs of objects in isolation; second, these noisy pairwise maps are jointly optimized so that the resulting collection of maps is consistent.
The motivation is two-fold. First, there is currently no standard benchmark dataset for this task: the methods developed in the literature were evaluated on different datasets, making fair comparison difficult. This data competition aims to provide a platform that allows empirical comparison of various map synchronization methods.
Second, from the theoretical perspective, although map synchronization can be formulated as constrained low-rank matrix recovery, the problem stands in stark contrast to most existing low-rank recovery problems. Two aspects are worth emphasizing, which also motivate initiating the data competition:
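To make the low-rank formulation concrete, the following sketch (an illustration added here, not part of the benchmark tooling) builds the block matrix of consistent pairwise rotations R_ij = R_i R_j^T and checks that its rank equals 3, independent of the number of objects:

```python
import numpy as np

rng = np.random.default_rng(0)

def random_rotation(rng):
    # QR decomposition of a Gaussian matrix yields a random orthogonal matrix.
    q, r = np.linalg.qr(rng.standard_normal((3, 3)))
    q *= np.sign(np.diag(r))      # normalize column signs
    if np.linalg.det(q) < 0:      # enforce det = +1 (a proper rotation)
        q[:, 0] *= -1
    return q

n = 5
rotations = [random_rotation(rng) for _ in range(n)]

# Block matrix of all consistent pairwise maps R_ij = R_i R_j^T.
G = np.block([[Ri @ Rj.T for Rj in rotations] for Ri in rotations])

# G = R R^T, where R is the 3n x 3 stack of absolute rotations, so rank(G) = 3.
print(np.linalg.matrix_rank(G))  # 3, for any n
```

Observing only a subset of the pairwise blocks, possibly corrupted, and recovering this low-rank structure under the rotation (or permutation) constraints is what distinguishes map synchronization from unconstrained matrix completion.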
We also invite submissions of new methods to this data competition, to advance the state of the art in map synchronization.
The first track aims at computing dense correspondences across geometric objects. We provide both the input objects and precomputed pairwise maps. The input objects come from the SHREC07 Watertight Repository as well as Shape COSEG. For categories from SHREC07, evaluation is with respect to the feature-point annotations provided by Vladimir G. Kim et al. For large-scale categories in Shape COSEG, evaluation is with respect to the ground-truth segmentations in Shape COSEG. For map synchronization techniques that operate on a subset of samples, we provide tools for sampling points from a given mesh and for extrapolating correspondences between samples into dense correspondences between mesh vertices.
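To illustrate the sample-to-dense extrapolation step, here is a minimal nearest-neighbor sketch (our own illustration with hypothetical names, not the actual tool shipped with the benchmark): each source vertex inherits the correspondence of its nearest sample point.

```python
import numpy as np

def extrapolate_dense(src_vertices, src_samples, sample_map, tgt_samples):
    """Extend a correspondence defined on samples to all mesh vertices.

    src_vertices : (V, 3) vertices of the source mesh
    src_samples  : (S, 3) sample points on the source mesh
    sample_map   : (S,)   index into tgt_samples for each source sample
    tgt_samples  : (T, 3) sample points on the target mesh
    Returns a (V, 3) array of target positions, one per source vertex.
    """
    # Assign each source vertex to its nearest source sample ...
    d2 = ((src_vertices[:, None, :] - src_samples[None, :, :]) ** 2).sum(-1)
    nearest = d2.argmin(axis=1)
    # ... and transfer it through the sample-level correspondence.
    return tgt_samples[sample_map[nearest]]
```

A practical implementation would use a spatial index (e.g. a k-d tree) instead of the brute-force distance matrix, and could blend several nearby samples rather than copying the single nearest one.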
The rotation track aims at computing consistent pairwise rotations across range scans or images, with applications in geometry reconstruction from range scans and in multi-view structure from motion on internet images. The depth scans are taken from the Redwood dataset. We provide both the input point clouds and pairwise alignments obtained using Super4PCS. The ground-truth transformations are obtained by manual alignment and refined using global rigid alignment. Note that we only use the rotation components for evaluation, due to the popularity of this task. Three rotation-synchronization datasets for multi-view structure from motion are taken from the IU Computer Vision Lab and from CVL, IIS.
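For reference, a standard spectral baseline for this track can be sketched as follows (our own illustrative implementation of the well-known eigenvector approach, not the competition's reference code): stack the observed pairwise rotations into a block matrix, take its top three eigenvectors, and project each 3x3 block back onto the rotation group.

```python
import numpy as np

def synchronize_rotations(pairwise, n):
    """Spectral rotation synchronization (a common baseline).

    pairwise : dict mapping (i, j) with i < j to a 3x3 matrix R_ij ~ R_i R_j^T
    n        : number of scans/cameras
    Returns a list of n estimated absolute rotations, up to a global rotation.
    """
    G = np.zeros((3 * n, 3 * n))
    for (i, j), R in pairwise.items():
        G[3*i:3*i+3, 3*j:3*j+3] = R
        G[3*j:3*j+3, 3*i:3*i+3] = R.T
    for i in range(n):
        G[3*i:3*i+3, 3*i:3*i+3] = np.eye(3)

    # The top-3 eigenvectors of G approximate the stacked absolute rotations.
    _, v = np.linalg.eigh(G)
    U = v[:, -3:] * np.sqrt(n)
    if np.linalg.det(U[:3, :3]) < 0:   # fix the global orientation gauge
        U[:, -1] *= -1

    rotations = []
    for i in range(n):
        # Project each 3x3 block onto SO(3) via SVD.
        u, _, vt = np.linalg.svd(U[3*i:3*i+3])
        R = u @ vt
        if np.linalg.det(R) < 0:
            R = u @ np.diag([1.0, 1.0, -1.0]) @ vt
        rotations.append(R)
    return rotations
```

With noiseless, complete observations this recovers the ground-truth rotations exactly up to a global rotation; robust methods for the competition must additionally cope with missing and outlier pairwise measurements.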
For each track, we release a subset of the benchmark datasets so that participants can test their algorithms; the remaining subset is held out for evaluation. Please submit your code to firstname.lastname@example.org. The submission should include the source code, an executable, and a readme file explaining how to compile the source code and run the executable. We provide evaluation code to assess the output of each algorithm; the specification is given in the readme file of each benchmark dataset.