Incoherent dictionary learning method based on unit norm tight frame and manifold optimization for sparse representation (Q1793796)
From MaRDI portal
scientific article; zbMATH DE number 6953786
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Incoherent dictionary learning method based on unit norm tight frame and manifold optimization for sparse representation | scientific article; zbMATH DE number 6953786 | |
Statements
Incoherent dictionary learning method based on unit norm tight frame and manifold optimization for sparse representation (English)
12 October 2018
Summary: Optimizing the mutual coherence of a learned dictionary plays an important role in sparse representation and compressed sensing. In this paper, an efficient framework is developed to learn an incoherent dictionary for sparse representation. In particular, the coherence of a previous dictionary (or Gram matrix) is reduced sequentially by finding a new dictionary (or Gram matrix) that is closest to the reference unit norm tight frame of the previous dictionary (or Gram matrix). The optimization problem is solved by enforcing the tightness and coherence constraints alternately at each iteration of the algorithm. A distinctive aspect of the proposed framework is that the learned dictionary can approximate an equiangular tight frame. Furthermore, manifold optimization is used to avoid degenerate sparse representations that can arise when only the coherence of the learned dictionary is reduced; this step is performed after, rather than during, the dictionary update. Experiments on synthetic and real audio data show that the proposed methods achieve notably lower coherence, run faster, and are more robust than several existing methods.
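The alternating scheme sketched in the summary — project the Gram matrix onto a coherence constraint, then back onto a (low-rank) tight-frame-like set — can be illustrated with a minimal numpy sketch. This is not the authors' exact algorithm (their unit-norm-tight-frame reference and manifold-optimization step are not reproduced here); the `mu_target` threshold and the eigenvalue truncation are illustrative assumptions in the spirit of classical alternating-projection frame design.

```python
import numpy as np

def mutual_coherence(D):
    """Largest absolute inner product between distinct unit-norm columns."""
    G = np.abs(D.T @ D)
    np.fill_diagonal(G, 0.0)
    return G.max()

def incoherent_dictionary(D, mu_target, iters=50):
    """Reduce the mutual coherence of a dictionary D (n x K) by alternating
    projections: clip the off-diagonal Gram entries to +/- mu_target
    (coherence constraint), then rebuild a rank-n factor from the top
    eigenpairs (tightness-style constraint) and renormalize the columns.
    A sketch of the idea only, not the paper's algorithm."""
    n, K = D.shape
    D = D / np.linalg.norm(D, axis=0)           # unit-norm columns
    for _ in range(iters):
        G = D.T @ D                              # current Gram matrix
        # Coherence constraint: shrink off-diagonal entries
        H = np.clip(G, -mu_target, mu_target)
        np.fill_diagonal(H, 1.0)
        # Rank-n projection: keep the n largest eigenpairs of H
        w, V = np.linalg.eigh(H)
        idx = np.argsort(w)[::-1][:n]
        D = np.sqrt(np.maximum(w[idx], 0.0))[:, None] * V[:, idx].T
        D = D / np.linalg.norm(D, axis=0)        # back to unit norms
    return D
```

For a random Gaussian 16 x 40 dictionary, a few dozen iterations typically push the coherence well below its initial value, toward (but not below) the Welch bound sqrt((K - n) / (n (K - 1))).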