Latent-space disentanglement with untrained generator networks for the isolation of different motion types in video data
Publication: 6399743
arXiv: 2205.10367
MaRDI QID: Q6399743
Author name not available
Publication date: 20 May 2022
Abstract: Isolating different types of motion in video data is a highly relevant problem in video analysis. Applications can be found, for example, in dynamic medical or biological imaging, where the analysis and further processing of the dynamics of interest are often complicated by additional, unwanted dynamics, such as motion of the measurement subject. In this work, it is empirically shown that a representation of video data via untrained generator networks, together with a specific technique for latent-space disentanglement that uses minimal, one-dimensional information on some of the underlying dynamics, allows one to efficiently isolate different, highly non-linear motion types. In particular, such a representation allows one to freeze any selection of motion types and to obtain accurate, independent representations of the other dynamics of interest. Obtaining such a representation does not require any pre-training on a training data set; i.e., all parameters of the generator network are learned directly from a single video.
Companion code repository: https://github.com/hollerm/generator_based_motion_separation