Audio-guided Video Interpolation via Human Pose Features

Takayuki Nakatsuka, Masatoshi Hamanaka, Shigeo Morishima

Abstract

This paper describes a method that generates the in-between frames of two videos of a musical instrument being played. While image generation has achieved remarkable results in recent years, there remains ample scope for improvement in video generation. The keys to improving the quality of generated video are high resolution and temporal coherence. We addressed these requirements by using not only visual information but also aural information. The critical point of our method is the use of two-dimensional pose features to generate high-resolution in-between frames from the input audio. We constructed a deep neural network with a recurrent structure that infers pose features from the input audio, and an encoder-decoder network that pads and generates video frames using those pose features. Our method furthermore adopts a fusion approach that generates, pads, and retrieves video frames to improve the output video. Pose features play an essential role both in end-to-end training, owing to their differentiable property, and in combining the generating, padding, and retrieving approaches. We conducted a user study and confirmed that the proposed method is effective for generating interpolated videos.
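The two-stage pipeline described in the abstract (a recurrent network inferring 2D pose features from audio, followed by an encoder-decoder producing video frames from those features) could be sketched as follows. All dimensions, layer choices, and names here (mel-band count, joint count, 32×32 output frames, `AudioToPose`, `PoseToFrame`) are illustrative assumptions, not the paper's actual configuration.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not from the paper):
# 80 mel bands per audio frame, 17 body joints, 256 hidden units.
N_MELS, N_JOINTS, HIDDEN = 80, 17, 256

class AudioToPose(nn.Module):
    """Recurrent network inferring 2D pose features from audio frames."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.GRU(N_MELS, HIDDEN, num_layers=2, batch_first=True)
        self.head = nn.Linear(HIDDEN, N_JOINTS * 2)  # (x, y) per joint

    def forward(self, mel):               # mel: (B, T, N_MELS)
        h, _ = self.rnn(mel)
        return self.head(h).view(mel.size(0), mel.size(1), N_JOINTS, 2)

class PoseToFrame(nn.Module):
    """Encoder-decoder generating one video frame from a pose feature."""
    def __init__(self):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Linear(N_JOINTS * 2, 512), nn.ReLU(),
            nn.Linear(512, 64 * 8 * 8), nn.ReLU())
        self.dec = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid())

    def forward(self, pose):              # pose: (B, N_JOINTS, 2)
        z = self.enc(pose.flatten(1)).view(-1, 64, 8, 8)
        return self.dec(z)                # (B, 3, 32, 32) RGB frame

mel = torch.randn(2, 100, N_MELS)         # 2 clips, 100 audio frames each
poses = AudioToPose()(mel)                # (2, 100, 17, 2) pose sequence
frame = PoseToFrame()(poses[:, 0])        # frame for the first time step
```

Because both stages are differentiable, such a pipeline can be trained end to end, which matches the role the abstract assigns to pose features as the interface between the audio and video networks.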


Paper Citation


in Harvard Style

Nakatsuka T., Hamanaka M. and Morishima S. (2020). Audio-guided Video Interpolation via Human Pose Features. In Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP, ISBN 978-989-758-402-2, pages 27-35. DOI: 10.5220/0008876600270035


in Bibtex Style

@conference{visapp20,
author={Takayuki Nakatsuka and Masatoshi Hamanaka and Shigeo Morishima},
title={Audio-guided Video Interpolation via Human Pose Features},
booktitle={Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP},
year={2020},
pages={27-35},
publisher={SciTePress},
organization={INSTICC},
doi={10.5220/0008876600270035},
isbn={978-989-758-402-2},
}


in EndNote Style

TY - CONF

JO - Proceedings of the 15th International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications - Volume 5: VISAPP
TI - Audio-guided Video Interpolation via Human Pose Features
SN - 978-989-758-402-2
AU - Nakatsuka T.
AU - Hamanaka M.
AU - Morishima S.
PY - 2020
SP - 27
EP - 35
DO - 10.5220/0008876600270035