Spatiotemporal super-resolution reconstruction based on robust optical flow and Zernike moment for video sequences (Q474057)
scientific article; zbMATH DE number 6372622
| Language | Label | Description | Also known as |
|---|---|---|---|
| English | Spatiotemporal super-resolution reconstruction based on robust optical flow and Zernike moment for video sequences | scientific article; zbMATH DE number 6372622 | |
Statements
Spatiotemporal super-resolution reconstruction based on robust optical flow and Zernike moment for video sequences (English)
0 references
24 November 2014
0 references
Summary: To improve the spatiotemporal resolution of video sequences, this paper proposes a novel spatiotemporal super-resolution reconstruction model (STSR) based on robust optical flow and Zernike moments, which integrates spatial and temporal resolution reconstruction into a unified framework. The model does not rely on accurate subpixel motion estimation, is robust to noise and rotation, and effectively overcomes the problems of holes and block artifacts. First, we propose an efficient robust optical flow motion estimation model that preserves motion details; then we introduce a biweighted fusion strategy to implement spatiotemporal motion compensation. Next, combining a self-adaptive region-correlation judgment strategy, we construct a fast fuzzy registration scheme based on Zernike moments for better STSR with higher efficiency; the final video sequences with high spatiotemporal resolution are then obtained by fusing the complementary and redundant information between adjacent video frames, exploiting their nonlocal self-similarity. Experimental results demonstrate that the proposed method outperforms existing methods in both subjective visual and objective quantitative evaluations.
0 references
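The fuzzy registration step in the summary above rests on Zernike moments, whose magnitudes are invariant to in-plane rotation. The following is a minimal sketch, not the authors' implementation: it computes rotation-invariant Zernike moment features of square grayscale patches and compares two patches by feature distance. The helper names (`radial_poly`, `zernike_magnitudes`, `patches_match`), the maximum order `n_max`, and the tolerance `tol` are illustrative assumptions; only NumPy is assumed.

```python
# Minimal sketch of rotation-invariant Zernike moment features for
# patch matching between adjacent frames (assumed, not the paper's code).
import numpy as np
from math import factorial

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_{n,|m|}(rho)."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s) * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_magnitudes(patch, n_max=4):
    """Magnitudes |Z_{n,m}| of a square grayscale patch mapped to the unit disk."""
    h, w = patch.shape
    y, x = np.mgrid[-1:1:h * 1j, -1:1:w * 1j]
    rho = np.hypot(x, y)
    theta = np.arctan2(y, x)
    disk = rho <= 1.0                      # keep only pixels inside the unit disk
    f = patch.astype(float) * disk
    feats = []
    for n in range(n_max + 1):
        for m in range(n + 1):
            if (n - m) % 2:                # Z_{n,m} defined only for n-|m| even
                continue
            # V*_{n,m} = R_{n,|m|}(rho) * exp(-j m theta)
            V = radial_poly(n, m, rho) * np.exp(-1j * m * theta) * disk
            Z = (n + 1) / np.pi * np.sum(f * V)
            feats.append(abs(Z))           # the magnitude is rotation invariant
    return np.array(feats)

def patches_match(p1, p2, tol=0.05):
    """Fuzzy match: accept when the relative feature distance is below tol."""
    f1, f2 = zernike_magnitudes(p1), zernike_magnitudes(p2)
    return np.linalg.norm(f1 - f2) <= tol * (np.linalg.norm(f1) + 1e-12)
```

Because only the magnitudes |Z_{n,m}| enter the feature vector, two patches that differ by an in-plane rotation yield (up to discretization) the same features, which is consistent with the summary's claim that the registration is robust to rotation.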
0.86256576
0 references