On the Reconstruction Risk of Convolutional Sparse Dictionary Learning

Publication: MaRDI QID Q6290611

arXiv: 1708.08587

Author name not available

Publication date: 29 August 2017

Abstract: Sparse dictionary learning (SDL) has become a popular method for adaptively identifying parsimonious representations of a dataset, a fundamental problem in machine learning and signal processing. While most work on SDL assumes a training dataset of independent and identically distributed samples, a variant known as convolutional sparse dictionary learning (CSDL) relaxes this assumption, allowing more general sequential data sources, such as time series or other dependent data. Although recent work has explored the statistical properties of classical SDL, the statistical properties of CSDL remain unstudied. This paper begins to close this gap by identifying the minimax convergence rate of CSDL in terms of reconstruction risk, both upper bounding the risk of an established CSDL estimator and proving a matching information-theoretic lower bound. Our results indicate that consistency in reconstruction risk is possible precisely in the 'ultra-sparse' setting, in which the sparsity (i.e., the number of feature occurrences) grows as o(N) in the length N of the training sequence. Notably, our results rely on very weak assumptions, allowing arbitrary dictionaries and dependent measurement noise. Finally, we verify our theoretical results with numerical experiments on synthetic data.
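To make the generative model and the risk criterion concrete, the following is a minimal NumPy sketch (not the paper's companion code) of the convolutional sparse model: a length-N signal is a sum of dictionary filters convolved with sparse activation sequences, observed under additive noise, and reconstruction risk is measured as mean squared error per sample. All names and parameter values here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from the paper):
N, K, F = 1000, 3, 16   # signal length, number of filters, filter length
s = 10                  # total number of feature occurrences ("sparsity"), s = o(N)

# Arbitrary unit-norm dictionary filters.
D = rng.normal(size=(K, F))
D /= np.linalg.norm(D, axis=1, keepdims=True)

# Sparse activations: s occurrences spread across the K filters.
A = np.zeros((K, N - F + 1))
for _ in range(s):
    A[rng.integers(K), rng.integers(N - F + 1)] = rng.normal()

# Clean signal x = sum_k D_k * A_k (1-D convolutions), noisy observation y.
x = sum(np.convolve(A[k], D[k]) for k in range(K))
y = x + 0.1 * rng.normal(size=N)

# Reconstruction risk of an estimator x_hat: ||x_hat - x||_2^2 / N.
# As a baseline, evaluate the trivial estimator x_hat = y, whose risk is
# just the empirical noise variance; a CSDL estimator aims to do better.
risk = np.mean((y - x) ** 2)
print(f"reconstruction risk of x_hat = y: {risk:.4f}")
```

The ultra-sparse regime of the paper corresponds to letting N grow while s/N tends to zero, which is what makes consistent reconstruction possible under these weak assumptions.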




Has companion code repository: https://github.com/sss1/convolutional-dictionary








