
TIP 2016

Encoder-Driven Inpainting Strategy in Multiview Video Compression

In free viewpoint video systems, a user has the freedom to select a virtual view from which an image of the 3D scene is rendered, and the scene is commonly represented by color and depth images of multiple nearby viewpoints. In such a representation, there exists data redundancy across multiple dimensions: 1) a 3D voxel may be represented by pixels in multiple viewpoint images (inter-view redundancy); 2) a pixel patch may recur in a distant spatial region of the same image due to self-similarity (inter-patch redundancy); and 3) pixels in a local spatial region tend to be similar (inter-pixel redundancy). It is important to exploit these redundancies during inter-view prediction toward effective multiview video compression. In this paper, we propose an encoder-driven inpainting strategy for inter-view predictive coding, where explicit instructions are transmitted minimally, and the decoder is left to independently recover the remaining missing data via inpainting, resulting in lower coding ...
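
To make the inter-patch redundancy idea concrete, the following is a minimal, illustrative Python sketch of decoder-side exemplar-based hole filling: missing (e.g., disoccluded) pixels are recovered by copying from the best-matching fully known patch elsewhere in the same image. The function name fill_missing_patch, the fixed 8x8 patch grid, and the grayscale NumPy input are hypothetical choices for illustration only and do not reproduce the paper's encoder-driven strategy.

```python
import numpy as np

def fill_missing_patch(image, mask, patch=8):
    """Fill pixels where mask is True by copying from the best-matching
    fully known patch in the same image (inter-patch redundancy sketch)."""
    h, w = image.shape
    out = image.copy()
    known = ~mask

    # Collect candidate source patches that contain no missing pixels.
    sources = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            if known[y:y + patch, x:x + patch].all():
                sources.append(image[y:y + patch, x:x + patch])

    # Fill each patch that contains missing pixels, matching on its known pixels.
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            m = mask[y:y + patch, x:x + patch]
            k = ~m
            if not m.any() or not k.any() or not sources:
                continue
            target = image[y:y + patch, x:x + patch]
            # Pick the source patch closest to the target on the known pixels.
            best = min(sources, key=lambda s: float(np.sum((s[k] - target[k]) ** 2)))
            filled = target.copy()
            filled[m] = best[m]
            out[y:y + patch, x:x + patch] = filled
    return out

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    img = rng.random((64, 64))
    hole = np.zeros_like(img, dtype=bool)
    hole[20:28, 30:38] = True          # a small disoccluded block, e.g. after view warping
    damaged = img.copy()
    damaged[hole] = 0.0
    restored = fill_missing_patch(damaged, hole)
    print("mean abs error in hole:", float(np.abs(restored[hole] - img[hole]).mean()))
```

In the actual scheme described in the abstract, the encoder would additionally decide which regions the decoder can inpaint acceptably on its own and send explicit correction instructions only where inpainting alone would fail; the sketch above covers only the decoder-side filling step.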
Added: 11 Apr 2016
Updated: 11 Apr 2016
Type: Journal
Year: 2016
Where: TIP
Authors: Yu Gao, Gene Cheung, Thomas Maugey, Pascal Frossard, Jie Liang