3D Head Pose Estimation through Facial Features and Deep Convolutional Neural Networks
Face image analysis is one of several important cues in computer vision. Over the last five decades, methods for face analysis have received immense attention owing to their large-scale application in various face analysis tasks. Face parsing strongly benefits many human face image analysis tasks, including face pose estimation. In this paper we propose a 3D head pose estimation framework built on a prior end-to-end deep face parsing model. We develop an end-to-end face parts segmentation framework based on deep convolutional neural networks (DCNNs). To train the deep face parts parsing model, we label face images with seven classes: eyes, brows, nose, hair, mouth, skin, and background. We extract features from grayscale images using DCNNs and train a classifier on the extracted features. Using a probabilistic classification method, we produce grayscale probability maps for each dense semantic class. In a second stage of DCNNs, we extract features from the grayscale probability maps produced during the segmentation phase. We assess the performance of the proposed model on four standard head pose datasets, Pointing'04, Annotated Facial Landmarks in the Wild (AFLW), Boston University (BU), and ICT-3DHP, and obtain superior results compared to previous methods.
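The two-stage pipeline described in the abstract can be illustrated with a minimal sketch, assuming a PyTorch-style implementation: a first DCNN segments a grayscale face image into seven per-class probability maps, and a second DCNN consumes those maps to predict head pose. The class names, layer sizes, module names (FaceParsingNet, HeadPoseNet), and the regression head are illustrative assumptions, not the authors' exact architecture.

```python
import torch
import torch.nn as nn

class FaceParsingNet(nn.Module):
    """Stage 1: per-pixel classification into 7 face classes
    (eyes, brows, nose, hair, mouth, skin, background)."""
    def __init__(self, num_classes: int = 7):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        )
        self.classifier = nn.Conv2d(64, num_classes, 1)

    def forward(self, gray: torch.Tensor) -> torch.Tensor:
        # Softmax over the class dimension yields one probability map
        # (a grayscale image) per dense semantic class.
        return torch.softmax(self.classifier(self.encoder(gray)), dim=1)

class HeadPoseNet(nn.Module):
    """Stage 2: DCNN that extracts features from the 7 probability maps
    and predicts head pose (here as yaw, pitch, roll regression)."""
    def __init__(self, num_maps: int = 7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(num_maps, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, 3)  # yaw, pitch, roll

    def forward(self, prob_maps: torch.Tensor) -> torch.Tensor:
        return self.head(self.features(prob_maps))

if __name__ == "__main__":
    gray_face = torch.randn(1, 1, 128, 128)   # grayscale input image
    prob_maps = FaceParsingNet()(gray_face)   # (1, 7, 128, 128)
    pose = HeadPoseNet()(prob_maps)           # (1, 3) pose estimate
    print(prob_maps.shape, pose.shape)
```

In this sketch the second network sees only the segmentation probability maps, mirroring the paper's idea that face parsing output, rather than raw pixels, drives the pose estimation stage.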
Other Information
Published in: Computers, Materials & Continua
License: https://creativecommons.org/licenses/by/4.0
See article on publisher's website: https://dx.doi.org/10.32604/cmc.2020.013590
Funding
Open Access funding provided by the Qatar National Library.
Language
- English
Publisher
- Tech Science Press
Publication Year
- 2021
License statement
This item is licensed under the Creative Commons Attribution 4.0 International License.
Institution affiliated with
- Hamad Bin Khalifa University
- College of Science and Engineering - HBKU