

Video coding with MC-EZBC and redundant-wavelet multihypothesis

Motion compensation with redundant-wavelet multihypothesis, in which multiple predictions that are diverse in transform phase contribute to a single motion estimate, is deployed into the fully scalable MC-EZBC video coder. The bidirectional motion-compensated temporal-filtering process of MC-EZBC is adapted to the redundant-wavelet domain, wherein transform redundancy is exploited to generate a phase-diverse multihypothesis prediction of the true temporal filtering. Noise not captured by the motion model is substantially reduced, leading to greater coding efficiency. Experimental results show that the proposed system achieves substantial gains in rate-distortion performance over the original MC-EZBC coder for sequences with fast or complex motion.
Joseph B. Boettcher, James E. Fowler
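
The redundant-wavelet multihypothesis idea can be illustrated with a small sketch. The following Python/NumPy code is not the authors' MC-EZBC implementation: it assumes a single-level undecimated Haar transform with periodic extension and a single whole-pixel motion vector for the entire frame, and the function names (rdwt_analysis, rdwt_synthesis, rwmh_predict) are illustrative only. Motion compensation is applied to every full-resolution subband of the redundant transform, and the synthesis step averages the reconstructions contributed by all transform phases into one prediction.

    import numpy as np

    def rdwt_analysis(frame):
        # Single-level 2-D redundant (undecimated) Haar analysis with periodic
        # extension.  No downsampling: all four subbands keep full resolution,
        # and this redundancy supplies the transform-phase diversity.
        def lo(x, ax):
            return 0.5 * (x + np.roll(x, -1, axis=ax))
        def hi(x, ax):
            return 0.5 * (x - np.roll(x, -1, axis=ax))
        l, h = lo(frame, 0), hi(frame, 0)
        return lo(l, 1), hi(l, 1), lo(h, 1), hi(h, 1)  # LL, LH, HL, HH

    def rdwt_synthesis(ll, lh, hl, hh):
        # Inverse of rdwt_analysis: every pixel is rebuilt from each transform
        # phase that covers it, and the results are averaged; this averaging
        # is the phase-diverse multihypothesis combination.
        def merge(l, h, ax):
            return 0.5 * ((l + h) + np.roll(l - h, 1, axis=ax))
        return merge(merge(ll, lh, 1), merge(hl, hh, 1), 0)

    def rwmh_predict(reference, mv):
        # Predict the current frame by motion compensating the reference frame
        # in the redundant-wavelet domain (one whole-pixel motion vector for
        # the whole frame, purely for illustration) and letting the synthesis
        # average the phase-diverse hypotheses into a single prediction.
        dy, dx = mv
        subbands = rdwt_analysis(reference)
        compensated = [np.roll(np.roll(b, dy, axis=0), dx, axis=1)
                       for b in subbands]
        return rdwt_synthesis(*compensated)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        reference = rng.random((64, 64))
        prediction = rwmh_predict(reference, mv=(2, -1))
        # With global whole-pixel motion and no model mismatch, all phase
        # hypotheses agree and the prediction equals the shifted reference.
        expected = np.roll(np.roll(reference, 2, axis=0), -1, axis=1)
        print(np.allclose(prediction, expected))  # True

In the actual coder the compensation is per-block with estimated motion fields inside the bidirectional temporal filter, so the phase hypotheses disagree and their averaging suppresses noise that the motion model does not capture; the toy check above only verifies that the redundant transform round-trips correctly under whole-pixel motion.
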
Type: Conference
Year: 2005
Where: ICIP (IEEE)