Session Detail Information

Cluster: Sparse Optimization and Applications

Session Information: Thursday, July 16, 14:45 - 16:15

Title: Convex Optimization and Statistical Learning
Chair: Venkat Chandrasekaran, Caltech, 1200 E. California Blvd, MC 305-16, Pasadena CA 91125, United States of America, venkatc@caltech.edu

Abstract Details

Title: Nearly Linear Time Algorithms for Structured Sparsity
 Presenting Author: Chinmay Hegde, MIT, 32 Vassar St, G564, Cambridge MA 02139, United States of America, chinmay@csail.mit.edu
 
Abstract: Structured sparsity has been proven beneficial in a number of applications in statistical learning and signal processing. However, these benefits do not come for free: enforcing complex structures in data typically involves cumbersome, computationally intensive algorithms. I will outline a series of new methods for structured sparse modeling that integrate ideas from combinatorial optimization and approximation algorithms. For several structure classes, these methods enjoy a nearly linear time complexity, thereby enabling their application to massive datasets.
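As a deliberately simple illustration of the kind of model-projection step such algorithms build on, here is a sketch of projecting a signal onto a block-sparsity model in nearly linear time. The block structure, function name, and parameters are illustrative assumptions, not the specific structure classes or methods from the talk.

```python
import numpy as np

def project_block_sparse(x, block_size, k):
    """Keep the k blocks of x with the largest l2 energy; zero out the rest.

    One pass to compute block energies plus a sort over B = n/block_size
    blocks, i.e. nearly linear in the signal length n.
    """
    n = x.size
    assert n % block_size == 0, "block_size must divide the signal length"
    blocks = x.reshape(-1, block_size)       # view: B rows of length block_size
    energy = (blocks ** 2).sum(axis=1)       # per-block l2 energy
    keep = np.argsort(energy)[-k:]           # indices of the top-k blocks
    out = np.zeros_like(blocks)
    out[keep] = blocks[keep]
    return out.reshape(n)

# Example: length-12 signal, blocks of size 4, keep the 2 strongest blocks.
x = np.array([5., 4, 0, 0,  0.1, 0, 0.2, 0,  0, 3, 3, 0])
print(project_block_sparse(x, block_size=4, k=2))
```

For richer structure classes (trees, graphs, clusters), exact projection can be intractable, which is where the approximation-algorithm ideas mentioned in the abstract come in.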
  
Title: ***no show*** The Entropic Barrier: A Simple and Optimal Universal Self-Concordant Barrier
 Presenting Author: Sebastien Bubeck, Microsoft, Microsoft campus, Redmond, United States of America, sebubeck@microsoft.com
 Co-Author: Ronen Eldan, Postdoc, Microsoft Research Redmond, roneneldan@gmail.com
 
Abstract: A fundamental result in the theory of Interior Point Methods is Nesterov and Nemirovski's construction of a universal self-concordant barrier. In this talk I will introduce the entropic barrier, a new (and in some sense optimal) universal self-concordant barrier. The entropic barrier connects many topics of interest in Machine Learning: exponential families, convex duality, log-concave distributions, Mirror Descent, and exponential weights.
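For background, the construction can be stated compactly; the sketch below follows the standard definition from the Bubeck and Eldan paper, with the self-concordance parameter deliberately omitted rather than quoted from memory.

```latex
% For a convex body $K \subset \mathbb{R}^n$, let $f$ be the
% log-partition function of the exponential family supported on $K$:
\[
  f(\theta) = \log \int_{K} e^{\langle \theta, x \rangle}\, dx .
\]
% The entropic barrier is the Fenchel conjugate of $f$,
\[
  f^{*}(x) = \sup_{\theta \in \mathbb{R}^{n}}
             \bigl( \langle \theta, x \rangle - f(\theta) \bigr),
  \qquad x \in \operatorname{int} K,
\]
% which is where the links to exponential families, convex duality,
% and log-concave distributions highlighted in the abstract enter.
```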
  
Title: Convex Regularization with the Diversity Function
 Presenting Author: Maryam Fazel, Associate Professor, University of Washington, Campus Box 352500, Seattle WA 98195, United States of America, mfazel@uw.edu
 Co-Author: Amin Jalali, University of Washington, Electrical Engineering Dept., Seattle WA 98195, United States of America, amjalali@uw.edu
 Lin Xiao, Microsoft Research, Machine Learning Group, Redmond WA 98052, United States of America, Lin.Xiao@microsoft.com
 
Abstract: We propose a new class of penalties, called diversity functions, that promote orthogonality among a set of vectors, with applications in hierarchical classification, multitask learning, and estimation of vectors with disjoint supports. We give conditions under which these penalties are convex, show how to characterize the subdifferential and compute the proximal operator, and discuss efficient optimization algorithms. We present numerical experiments on a hierarchical classification application.
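To make the orthogonality-promoting idea concrete, here is a toy penalty of this general flavor: the sum of absolute pairwise inner products between columns, which is zero exactly when the columns are mutually orthogonal. This is an illustrative stand-in, not the authors' diversity function, and this particular instance need not be convex; the talk's point is precisely to give conditions under which penalties in the class are.

```python
import numpy as np

def pairwise_overlap_penalty(X):
    """Sum of |<x_i, x_j>| over all unordered pairs of columns of X."""
    G = np.abs(X.T @ X)          # absolute Gram matrix of the columns
    np.fill_diagonal(G, 0.0)     # drop the self-terms <x_i, x_i>
    return 0.5 * G.sum()         # each pair counted once

# Orthogonal columns incur zero penalty; overlapping supports do not.
X_orth = np.array([[1., 0.], [0., 1.], [0., 0.]])
X_over = np.array([[1., 1.], [1., 0.], [0., 1.]])
print(pairwise_overlap_penalty(X_orth))  # 0.0
print(pairwise_overlap_penalty(X_over))  # 1.0
```

Vectors with disjoint supports are automatically orthogonal, which is how a penalty of this type can serve the disjoint-support estimation application mentioned in the abstract.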