Independent component analysis: principles and practice (Q2703796)

scientific article
    Statements

    Publication date: 15 March 2001

    Keywords: independent component analysis; independent components analysis; blind source separation; data classification and visualisation; particle filters; model order selection; dynamic source models; neural networks; graphical modelling methods; image denoising; mobile communications; varia

    Title: Independent component analysis: principles and practice (English)
    Independent Component Analysis (ICA) is a method of separating out independent sources from linearly mixed data, and belongs to the class of general linear models. Perhaps the most famous illustration of ICA is the ``cocktail party problem'', in which a listener is faced with the problem of separating the independent voices chattering at a cocktail party. ICA provides a better decomposition than other well-known models such as principal components analysis.

    This self-contained book contains a structured series of edited papers by leading researchers in the field, including an extensive introduction to ICA. The major theoretical bases are reviewed from a modern perspective, current developments are surveyed and many case studies of applications are described in detail. The latter include biomedical examples, signal and image denoising and mobile communications. ICA is discussed in the framework of general linear models, but also in comparison with other paradigms such as neural network and graphical modelling methods. The book is ideal for graduate students and researchers in the field. Contents:

    Ch. 1, Introduction (\textit{S.J. Roberts} and \textit{R.M. Everson}), offers an introduction to independent components analysis and aims to give the reader an accessible way into the techniques, issues and jargon of ICA. Ch. 2, Fast ICA by a fixed-point algorithm that maximizes non-Gaussianity (\textit{A. Hyvärinen}), details one of the most popular approaches to ICA, based on polynomial approximations to the mutual information. The author details the theoretical development of the approximations, their justification, and the rapid fixed-point algorithm by which the sources are recovered. Background material on the relationships between learning algorithms is also presented, along with results on a number of datasets. Ch. 3, ICA, graphical models and variational methods (\textit{H. Attias}), pitches ICA into the important context of graphical models, whereby the relationships between model parameters are represented by a directed acyclic graph. Flexible ICA methods (sometimes known as Independent Factor Analysis), in which the source densities are modelled by mixtures of Gaussians and an explicit additive (sensor) noise term exists, are considered. Inference is performed using a variational learning approach. Ch. 4, Nonlinear ICA (\textit{J. Karhunen}), extends ICA from a general linear model of source mixing to the nonlinear case. The author explores in detail the issues involved in forming learning paradigms for such nonlinear ICA and develops promising algorithms to deal with nonlinear mixing. The chapter is illustrated with comparative examples.

    Ch. 5, Separation of non-stationary natural signals (\textit{L.C. Parra} and \textit{C.D. Spencer}), extends ICA to consider the issue of source non-stationarity. The authors show how the higher-order statistics used to locate independent components arise naturally in non-stationary signals. They examine whether linear mixing is a good model for acoustic signals and natural images. Exploiting the property of non-stationarity, they show how good blind separation may be achieved using multiple linear decorrelation. Ch. 6, Separation of non-stationary sources: algorithms and performance (\textit{J.-F. Cardoso} and \textit{D.-T. Pham}), offers a different perspective on the separation of non-stationary sources. An elegant methodology is derived in which non-stationarity may be handled, and indeed utilized, to aid in the unmixing process. Ch. 7, Blind source separation by sparse decomposition in a signal dictionary (\textit{M. Zibulevsky, B.A. Pearlmutter, P. Bofill} and \textit{P. Kisilev}), considers source separation in the case when the sources are represented by a sparse mixture from a signal dictionary (such as wavelet packets). Under these circumstances ICA naturally seeks sources which are as sparse in their representation as possible. This extra information enables the authors to obtain impressive results in situations where there are more sources than observations. Ch. 8, Ensemble learning for blind source separation (\textit{J.W. Miskin} and \textit{D.J.C. MacKay}), pitches ICA as a graphical model, with densities over variables being inferred using a variational learning framework. As both parameters and hyper-parameters of the model are inferred as part of a single learning strategy, this approach is referred to as ensemble learning. The authors consider mixture-of-Gaussians source models and show results from model selection on real-world problems. They also show that a positivity constraint on the hypothesized mixing process gives rise to ICA solutions which are more local in their support.

    Ch. 9, Image processing methods using ICA mixture models (\textit{T.-W. Lee} and \textit{M.S. Lewicki}), applies ICA to the domain of image decomposition and processing. By assuming that natural images are generated by a linear combination of independent sources (textures and edges, for example), ICA may be used to estimate, given an image, a basis for decomposition. The authors show that this basis has an intuitively appealing form (typically that of local filters) and show how ICA may then be utilized to perform image denoising. Ch. 10, Latent class and trait models for data classification and visualisation (\textit{M.A. Girolami}), regards ICA as a general linear transformation of the same form as the linear discriminant of pattern classification. Using a nested hierarchy of ICA decompositions of real data, the author shows that excellent results may be obtained in difficult unsupervised classification problems. He then proceeds to consider the process of visualisation of high-dimensional data (mapping to a two-dimensional space, for example) as an ICA-like procedure for which learning rules may be obtained.

    Ch. 11, Particle filters for non-stationary ICA (\textit{R.M. Everson} and \textit{S.J. Roberts}), considers a model in which the mixing matrix of a linear ICA model is non-stationary. This matrix may then be tracked using a particle filter. The approach is shown to be very effective when tracking non-stationary mixing of temporally uncorrelated sources. Some solutions to the more difficult problem of tracking mixtures of temporally correlated sources are presented. Ch. 12, ICA: model order selection and dynamic source models (\textit{W.D. Penny, S.J. Roberts} and \textit{R.M. Everson}), extends the standard ICA model by allowing the source density models to be dynamic rather than static. This is achieved by the use of linear dynamic models within the ICA framework. The authors also consider the important issue of model order selection to determine the most probable number of underlying sources. Results are presented for a variety of datasets.
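    The fixed-point scheme described in Ch. 2 admits a compact illustration. The following is a minimal sketch, not code from the book: it assumes pre-whitened data and uses the common tanh contrast function in place of the polynomial approximations the chapter derives; the mixing matrix, source densities and all variable names here are this sketch's own choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two independent non-Gaussian sources (Laplacian and uniform), linearly mixed.
n = 5000
S = np.vstack([rng.laplace(size=n), rng.uniform(-1.0, 1.0, size=n)])
A = np.array([[1.0, 0.6],
              [0.4, 1.0]])        # illustrative mixing matrix (an assumption)
X = A @ S

# Centre and whiten the observations so the unmixing rows can be unit-norm.
X = X - X.mean(axis=1, keepdims=True)
d, E = np.linalg.eigh(np.cov(X))
Xw = E @ np.diag(d ** -0.5) @ E.T @ X

def fastica(Xw, n_comp=2, n_iter=200):
    """Deflationary fixed-point ICA: w <- E[x g(w.x)] - E[g'(w.x)] w,
    followed by deflation against already-found rows and renormalisation."""
    W = np.zeros((n_comp, n_comp))
    for i in range(n_comp):
        w = rng.standard_normal(n_comp)
        w /= np.linalg.norm(w)
        for _ in range(n_iter):
            wx = w @ Xw
            g, g_prime = np.tanh(wx), 1.0 - np.tanh(wx) ** 2
            w_new = (Xw * g).mean(axis=1) - g_prime.mean() * w
            w_new -= W[:i].T @ (W[:i] @ w_new)   # deflate: stay orthogonal
            w_new /= np.linalg.norm(w_new)
            converged = abs(abs(w_new @ w) - 1.0) < 1e-10
            w = w_new
            if converged:
                break
        W[i] = w
    return W

W = fastica(Xw)
S_hat = W @ Xw    # recovered sources, up to permutation, sign and scale
```

    A quick sanity check is to correlate each row of `S_hat` with the true sources: each true source should be strongly correlated (in absolute value) with exactly one recovered component, reflecting the permutation and sign ambiguities inherent to ICA.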