A fundamental problem in robotic perception is matching identical objects or data, with applications such as loop closure detection, place recognition, object tracking, and map fusion. The problem becomes more challenging when matching is done jointly across multiple, multimodal sets of data; however, this setting also greatly improves the robustness and accuracy of matching in the presence of noise and outliers. At present, multimodal techniques do not leverage multiway information, and multiway techniques do not incorporate different modalities, leading to inferior results. To address this gap, members of the DCIST alliance developed a principled mixed-integer quadratic programming framework for formulating multimodal, multiway data association, along with a novel algorithm, called Multimodality association matrIX fusER (MIXER), to find solutions. MIXER uses a continuous relaxation within a projected gradient descent scheme that efficiently produces solutions guaranteed to be feasible for the integer program. Experiments demonstrated that correspondences obtained from MIXER are more robust to noise and errors than those of state-of-the-art techniques. Tested on a robotics dataset, MIXER achieved a 35% increase in data association accuracy (measured as the F1 score) over the best alternative.
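The core idea of relaxing a binary association problem and solving it by projected gradient steps can be illustrated with a toy sketch. The snippet below is not the authors' MIXER implementation; it is a minimal, hypothetical example that maximizes a quadratic affinity objective u^T S u over a relaxed association vector u in [0, 1]^n, projecting back onto the box after each gradient step and rounding at the end to recover a feasible binary solution. The affinity matrix S and all names are assumptions for illustration.

```python
import numpy as np

def projected_gradient_association(S, n_iters=200, lr=0.05, seed=0):
    """Illustrative sketch (not MIXER itself): maximize u^T S u over the
    relaxed set [0, 1]^n by projected gradient ascent, then round.
    S[i, j] scores the mutual consistency of candidate associations i, j."""
    rng = np.random.default_rng(seed)
    n = S.shape[0]
    u = rng.uniform(0.4, 0.6, size=n)         # start in the relaxed interior
    for _ in range(n_iters):
        grad = 2.0 * S @ u                    # gradient of the quadratic u^T S u
        u = np.clip(u + lr * grad, 0.0, 1.0)  # project back onto the box [0, 1]^n
    return (u > 0.5).astype(int)              # round to a feasible binary solution

# Toy affinity: candidate associations 0 and 1 reinforce each other,
# while association 2 conflicts with both.
S = np.array([[ 2.0,  1.0, -2.0],
              [ 1.0,  2.0, -2.0],
              [-2.0, -2.0,  0.5]])
print(projected_gradient_association(S))
```

In this toy problem the ascent drives the two mutually consistent associations toward 1 and the conflicting one toward 0, so the rounded output selects associations 0 and 1 only; MIXER's actual projection enforces the richer feasibility constraints of the multimodal integer program.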
Points of Contact: Jonathan How (PI), Kaveh Fathian
Citation: P. C. Lusk, R. Roy, K. Fathian, J. P. How, “MIXER: A Principled Framework for Multimodal, Multiway Data Association,” in IEEE ICRA Workshop on Robust Perception, 2021.