3D Video Spatiotemporal Multiple Description Coding Considering Region of Interest

Authors: Ehsan Rahimi and Chris Joslin

Affiliation: Department of Systems and Computer Engineering, Carleton University, 1125 Colonel By Dr., Ottawa, ON, Canada

Keyword(s): 3D Video, Stereoscopic Video, Color Image, Depth Map Image, Multiple Description Coding, Temporal MDC, Spatial MDC, Hybrid MDC, Reliable Multimedia Streaming, Region of Interest.

Abstract: 3D video applications have become increasingly popular as the equipment needed to receive and display 3D video has become more widely available. As a result, the demand for processing power and bandwidth to stream and display 3D multimedia services over both wired and wireless networks continues to grow. Because channel failure is an inherent part of communication between a transmitter and a receiver, robust video streaming remains an active research topic. Strengthening robustness against failure requires adding redundancy, which in turn reduces coding and compression efficiency, so there is a trade-off between coding efficiency and the robustness of the stream. Among the different approaches to reliable video streaming, this paper introduces a new reliable 3D video streaming scheme based on hybrid multiple description coding. The proposed multiple description coding creates 3D video descriptions by identifying the interesting objects in the scene. To this end, a region-of-interest map is first extracted from the depth map image using an algorithm of low complexity compared with available machine learning methods. Once the region of interest is identified, the proposed hybrid multiple description coding algorithm creates the descriptions for the color video by combining the advantages of spatial and temporal multiple description coding: first, a non-identical decimation method assigns more bandwidth to the identified objects; second, background quality is improved using temporal information. In this way, the proposed method provides better visual performance, since the human eye is more sensitive to objects than to individual pixels, and the background is reconstructed with higher quality, since it usually exhibits low motion and temporal information is a better choice for estimating lost data. Objective test results confirm that the proposed method outperforms previous methods.
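The hybrid spatial-temporal idea outlined in the abstract can be illustrated with a small sketch. The Python example below is not the authors' implementation: it assumes a simple depth-threshold rule for region-of-interest extraction, even/odd column decimation to form two descriptions, duplication of ROI pixels into both descriptions, and temporal concealment of lost background columns from the previous frame. All function names and parameters are illustrative.

# Minimal sketch of ROI-driven hybrid MDC, assuming an 8-bit depth map and
# two spatial descriptions formed by polyphase (even/odd column) decimation.
import numpy as np

def extract_roi_mask(depth: np.ndarray, threshold: int = 128) -> np.ndarray:
    """Label pixels closer than `threshold` as region of interest
    (assumes nearer objects have smaller depth values)."""
    return depth < threshold

def make_descriptions(frame: np.ndarray, roi: np.ndarray):
    """Split a color frame into two descriptions: background columns are
    decimated (even -> D0, odd -> D1), ROI pixels are copied into both."""
    d0 = np.zeros_like(frame)
    d1 = np.zeros_like(frame)
    d0[:, 0::2] = frame[:, 0::2]   # even columns carried by description 0
    d1[:, 1::2] = frame[:, 1::2]   # odd columns carried by description 1
    d0[roi] = frame[roi]           # ROI kept at full resolution in both
    d1[roi] = frame[roi]
    return d0, d1

def reconstruct_from_d0(d0, roi, prev_frame=None):
    """Side-decoder reconstruction when only description 0 arrives.
    Missing background columns are estimated temporally from the previous
    frame when available, otherwise copied from the neighbouring column."""
    rec = d0.copy()
    missing = np.zeros(rec.shape[:2], dtype=bool)
    missing[:, 1::2] = True        # odd columns travelled in the lost description
    missing &= ~roi                # ROI was duplicated, so it is intact
    if prev_frame is not None:
        rec[missing] = prev_frame[missing]
    else:
        rec[:, 1::2][missing[:, 1::2]] = rec[:, 0::2][missing[:, 1::2]]
    return rec

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    color = rng.integers(0, 256, size=(120, 160, 3), dtype=np.uint8)
    depth = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)
    roi = extract_roi_mask(depth)
    d0, d1 = make_descriptions(color, roi)
    rec = reconstruct_from_d0(d0, roi, prev_frame=color)  # previous frame stand-in
    print("ROI coverage:", roi.mean(), "reconstruction shape:", rec.shape)

In this sketch the non-identical treatment of the ROI plays the role of assigning more bandwidth to foreground objects, while the background relies on temporal concealment, mirroring the trade-off the abstract describes.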

CC BY-NC-ND 4.0


Paper citation in several formats:
Rahimi, E. and Joslin, C. (2020). 3D Video Spatiotemporal Multiple Description Coding Considering Region of Interest. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP; ISBN 978-989-758-402-2; ISSN 2184-4321, SciTePress, pages 474-481. DOI: 10.5220/0009158304740481

@conference{visapp20,
author={Ehsan Rahimi and Chris Joslin},
title={3D Video Spatiotemporal Multiple Description Coding Considering Region of Interest},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP},
year={2020},
pages={474-481},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0009158304740481},
isbn={978-989-758-402-2},
issn={2184-4321},
}

TY - CONF

JO - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (VISIGRAPP 2020) - Volume 4: VISAPP
TI - 3D Video Spatiotemporal Multiple Description Coding Considering Region of Interest
SN - 978-989-758-402-2
IS - 2184-4321
AU - Rahimi, E.
AU - Joslin, C.
PY - 2020
SP - 474
EP - 481
DO - 10.5220/0009158304740481
PB - SciTePress