Rohit Babbar - Machine Learning Strategies for Large-scale Taxonomies

Friday, 17 October 2014, 12:00

Speaker: Rohit Babbar

Jury:

  • Yiming Yang, Carnegie Mellon University, reviewer
  • Yann Guermeur, Université de Lorraine, reviewer
  • Bernhard Schölkopf, Max Planck Institute for Intelligent Systems, examiner
  • Denis Trystram, Université Grenoble Alpes, examiner
  • Thierry Artières, Université Pierre et Marie Curie, examiner
  • Eric Gaussier, Université Grenoble Alpes, thesis advisor
  • Massih-Reza Amini, Université Grenoble Alpes, thesis co-advisor

 

Technical production: Djamel Hadji | All rights reserved

In the era of Big Data, we need efficient and scalable machine learning algorithms that can automatically classify terabytes of data. In this thesis, we study the machine learning challenges of classification in large-scale taxonomies. These challenges include the computational complexity of training and prediction, as well as performance on unseen data. In the first part of the thesis, we study the power-law distribution underlying large-scale taxonomies. This analysis motivates the derivation of bounds on the space complexity of hierarchical classifiers. Exploiting this distribution further, we design a classification scheme that achieves better accuracy on large-scale, power-law-distributed categories. We also propose an efficient method for model selection when training multi-class versions of classifiers such as Support Vector Machines and Logistic Regression. Finally, we address another key model-selection problem in large-scale classification.
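The power-law property mentioned above can be illustrated with a toy simulation (synthetic data only, not from the thesis; `sample_power_law`, `alpha`, and `kmax` are made-up names and parameters for this sketch): when category sizes follow a power law, most categories contain very few documents while a handful are very large.

```python
import bisect
import random

random.seed(0)

def sample_power_law(n, alpha=2.0, kmax=10_000):
    """Draw n synthetic category sizes from P(k) proportional to k^-alpha,
    with k in 1..kmax, via inverse-transform sampling on the discrete CDF."""
    weights = [k ** -alpha for k in range(1, kmax + 1)]
    total = sum(weights)
    cdf, acc = [], 0.0
    for w in weights:
        acc += w / total
        cdf.append(acc)
    # bisect_left finds the first CDF entry >= u; clamp guards against
    # floating-point rounding at the upper end of the support.
    return [min(bisect.bisect_left(cdf, random.random()), kmax - 1) + 1
            for _ in range(n)]

sizes = sorted(sample_power_law(5_000))
median, largest = sizes[len(sizes) // 2], sizes[-1]
# Heavy tail: the median category is tiny compared to the largest one.
print(median, largest)
```

The same qualitative shape (a long tail of rare categories) is what drives the space-complexity bounds for hierarchical classifiers discussed in the abstract.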


This concerns the choice between flat and hierarchical classification, viewed from a learning-theoretic perspective. The generalization-error analysis we present explains the empirical findings of many recent studies in large-scale hierarchical classification. We further exploit the derived bounds to propose two methods for adapting a given taxonomy of categories into output taxonomies that yield better test accuracy when used in a top-down setup.
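The top-down setup referred to above can be sketched as follows (a minimal illustration, not the thesis implementation: the taxonomy, the keyword-based routers, and names like `predict_top_down` are invented here; in practice each internal node would hold a trained classifier such as an SVM or logistic-regression model). Prediction walks from the root to a leaf, so it costs one classifier call per level of the hierarchy rather than one per leaf category as in flat one-vs-rest classification.

```python
# Toy taxonomy: node -> list of children; leaves have no entry.
taxonomy = {
    "root": ["sports", "science"],
    "sports": ["football", "tennis"],
    "science": ["physics", "biology"],
}

def make_router(keywords):
    """Build a stand-in node classifier that routes a text to the first
    child whose keyword list matches (a real system would use a trained
    model per node instead)."""
    def route(text, children):
        for child in children:
            if any(w in text for w in keywords.get(child, [])):
                return child
        return children[0]  # fallback when nothing matches
    return route

router = make_router({
    "sports": ["match", "goal", "serve"],
    "science": ["experiment", "cell", "quantum"],
    "football": ["goal", "match"],
    "tennis": ["serve"],
    "physics": ["quantum"],
    "biology": ["cell"],
})

def predict_top_down(text):
    node = "root"
    while node in taxonomy:          # descend until we reach a leaf
        node = router(text, taxonomy[node])
    return node
```

For example, `predict_top_down("a quantum experiment")` descends root → science → physics. The flat alternative would instead train one binary classifier per leaf and score all of them, which is where the accuracy/complexity trade-off analyzed in the thesis arises.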