Figure: Construction of a comparison tree in the Euclidean setting: (i) take the current cell, (ii) pick random pivots with opposite labels, (iii) split the cell according to the ordinal comparisons involving the pivots, (iv) iterate.

Ulrike von Luxburg (Project Leader),
Damien Garreau,
Michael Perrot,
Leena Chennuru Vankadara,
Siavash Haghiri
(University of Tübingen)

In the typical machine learning setting, we are given a data set $\mathcal{D}$ of objects and a distance function $\delta$ that quantifies how ``close'' the objects are to each other. In recent years, a whole new branch of the machine learning literature has emerged that relaxes this scenario: instead of being able to evaluate the distance function $\delta$ itself, we only get to see the results of comparisons such as $\delta(A,B) < \delta(C,D)$, where $A,B,C,D\in\mathcal{D}$. We refer to any collection of answers to such comparisons as ordinal distance information. Our group investigates how machine learning algorithms can work with such data. One can learn a Euclidean representation that respects the comparisons, but evaluating the quality of such an embedding is difficult \citep{VanLux18}. As an alternative, we proposed several algorithmic solutions that learn directly from the comparisons.
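To make the setting concrete, here is a minimal sketch of a quadruplet comparison oracle on a toy Euclidean data set (the variable names and the toy data are illustrative, not part of any of the cited papers):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))  # toy Euclidean data set

def quadruplet(a, b, c, d):
    """Answer the ordinal comparison delta(A, B) < delta(C, D).

    In the comparison-based setting, the learner only ever sees this
    boolean answer -- never the underlying distance values themselves.
    """
    return np.linalg.norm(X[a] - X[b]) < np.linalg.norm(X[c] - X[d])
```

A comparison-based algorithm may call such an oracle many times, but it can never inspect $\delta$ directly.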

First, we developed kernels, allowing us to use kernel methods with ordinal distance comparisons \citep{KleLux17b}. In another line of work, we used ordinal information of the form ``$A$ is most central among $A$, $B$, and $C$'' to develop algorithms for medoid estimation, outlier identification, classification, and clustering \citep{KleLux17a}. Our approach consists of studying intersections of disks (the ``lens'') and relative neighborhood graphs. We also proposed a method for large-scale classification with generalization guarantees: the idea is to aggregate the ordinal comparisons into many weak classifiers and then use boosting to learn a weighted combination with good predictive performance.
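The lens depth idea behind the medoid estimator can be sketched in a few lines. The snippet below computes lens depths from pairwise distances for brevity; note that each containment test is exactly one ``most central among three'' comparison, so in principle only ordinal answers are needed. The function name and implementation details are illustrative:

```python
import numpy as np
from itertools import combinations

def lens_depth_medoid(X):
    """Estimate the medoid via the lens depth function.

    A point x lies in the "lens" of a pair (b, c) when
    max(d(x, b), d(x, c)) < d(b, c), i.e. when x is the most central
    of the three points.  The medoid estimate is the point contained
    in the largest number of lenses.
    """
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    depth = np.zeros(n)
    for b, c in combinations(range(n), 2):
        for x in range(n):
            if x != b and x != c and max(D[x, b], D[x, c]) < D[b, c]:
                depth[x] += 1
    return int(np.argmax(depth))
```

For points on a line, the lens of a pair is exactly the open segment between them, so the estimate lands on the central point.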

The search for nearest neighbors is another example of a classical machine learning task for which we proposed a comparison-based approach. The comparison tree \citep{HagGhoLux17} is a random tree constructed by recursively splitting the space: at each node, random pivots are chosen and data points are assigned to the nearest pivot. This structure allows for efficient search for the nearest neighbors of a query point, and we proved theoretical guarantees on the performance of this method. Since individual decision trees have a tendency to overfit on the training set, it is natural to aggregate many of these trees into a *random forest*. Despite having access to very little information, these ``comparison-based random forests'' perform about as well as methods that have access to the true distances. We can also prove that they are statistically consistent \citep{HagGarLux18}.
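The recursive construction can be sketched as follows. This is a simplified illustration of the idea, not the implementation from the paper: each split asks only ``is point $i$ closer to pivot $p_1$ or to pivot $p_2$?'', which is a single triplet comparison, and a query is routed to a leaf with the same comparisons.

```python
import numpy as np

rng = np.random.default_rng(0)

def build_tree(X, indices, leaf_size=10):
    """One comparison tree: pick two random pivots, assign every point
    in the cell to its nearer pivot, and recurse on the two halves."""
    if len(indices) <= leaf_size:
        return {"leaf": indices}
    p1, p2 = rng.choice(indices, size=2, replace=False)
    left, right = [], []
    for i in indices:
        closer_p1 = np.linalg.norm(X[i] - X[p1]) <= np.linalg.norm(X[i] - X[p2])
        (left if closer_p1 else right).append(i)
    if not left or not right:  # degenerate split: stop early
        return {"leaf": indices}
    return {"pivots": (p1, p2),
            "left": build_tree(X, left, leaf_size),
            "right": build_tree(X, right, leaf_size)}

def query(tree, X, q):
    """Route a query to a leaf by pivot comparisons, then scan the leaf."""
    while "leaf" not in tree:
        p1, p2 = tree["pivots"]
        closer_p1 = np.linalg.norm(q - X[p1]) <= np.linalg.norm(q - X[p2])
        tree = tree["left"] if closer_p1 else tree["right"]
    return min(tree["leaf"], key=lambda i: np.linalg.norm(q - X[i]))
```

Aggregating the answers of many such independently built trees gives the comparison-based random forest described above.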

In a related study \citep{TanGarLux18}, we investigated the theoretical properties of classic random forests. More precisely, we showed that the subsampling step in the construction of a random forest is important, in the sense that no subsampling or too severe subsampling can lead to inconsistency.

We also proposed clustering algorithms that are similar in spirit to agglomerative hierarchical clustering, but use only ordinal information. We proved theoretically that, in a planted hierarchical model, single linkage and complete linkage recover the correct hierarchy only for prohibitively large signal-to-noise ratios, whereas our average-linkage-based methods recover it even for small ratios.
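One simple way to feed ordinal information into an agglomerative procedure is to replace the raw distances by a rank-based proxy built purely from quadruplet comparisons. The construction below is an illustrative sketch under that idea, not the exact statistic analyzed in the paper:

```python
import numpy as np
from itertools import combinations

def ordinal_dissimilarity(X, n_ref=50, seed=0):
    """Proxy dissimilarity built from quadruplet comparisons only.

    For each pair (i, j), count how often delta(i, j) exceeds the
    distance of a random reference pair (k, l).  This score depends
    only on the ordering of distances, never on their values.
    Reference pairs with k == l have distance zero and lose every
    comparison, which is harmless.
    """
    rng = np.random.default_rng(seed)
    n = len(X)
    D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
    refs = rng.integers(0, n, size=(n_ref, 2))
    S = np.zeros((n, n))
    for i, j in combinations(range(n), 2):
        wins = np.mean([D[i, j] > D[k, l] for k, l in refs])
        S[i, j] = S[j, i] = wins
    return S
```

Running average linkage on such a matrix separates well-separated groups, since between-cluster pairs win far more reference comparisons than within-cluster pairs.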


**Foundations of Comparison-Based Hierarchical Clustering**
2018, arXiv preprint (arXiv:1811.00928) (article)

**Boosting for Comparison-Based Learning**
2018, arXiv preprint (arXiv:1810.13333) (article)

**Measures of distortion for machine learning**
In *Proceedings of Neural Information Processing Systems (NIPS)*, 2018 (inproceedings)

**When do random forests fail?**
In *Proceedings of Neural Information Processing Systems (NIPS)*, December 2018 (inproceedings)

**Lens Depth Function and k-Relative Neighborhood Graph: Versatile Tools for Ordinal Data Analysis**
*Journal of Machine Learning Research (JMLR)*, 18, 2017 (article)

**Kernel functions based on triplet comparisons**
In *Proceedings of Neural Information Processing Systems (NIPS)*, 2017 (inproceedings)

**Comparison-based nearest neighbor search**
In *Artificial Intelligence and Statistics (AISTATS)*, 2017 (inproceedings)