Subgradient Descent Learns Orthogonal Dictionaries
From MaRDI portal
Publication:6308749
arXiv: 1810.10702 · MaRDI QID: Q6308749
Author name not available
Publication date: 24 October 2018
Abstract: This paper concerns dictionary learning, i.e., sparse coding, a fundamental representation learning problem. We show that a subgradient descent algorithm, with random initialization, can provably recover orthogonal dictionaries on a natural nonsmooth, nonconvex minimization formulation of the problem, under mild statistical assumptions on the data. This is in contrast to previous provable methods that require either expensive computation or delicate initialization schemes. Our analysis develops several tools for characterizing landscapes of nonsmooth functions, which might be of independent interest for provable training of deep networks with nonsmooth activations (e.g., ReLU), among numerous other applications. Preliminary experiments corroborate our analysis and show that our algorithm works well empirically in recovering orthogonal dictionaries.
Has companion code repository: https://github.com/sunju/ODL_L1