Spatiotemporal Learning Transformer for Video-Based Human Pose Estimation

Authors: Gai, D., Feng, R., Min, W., Yang, X., Su, P., Wang, Q. and Han, Q.

Journal: IEEE Transactions on Circuits and Systems for Video Technology

Volume: 33

Issue: 9

Pages: 4564-4576

eISSN: 1558-2205

ISSN: 1051-8215

DOI: 10.1109/TCSVT.2023.3269666

Abstract:

Multi-frame human pose estimation has long been an appealing and fundamental problem in visual perception. Owing to frequent rapid motion and pose occlusion in videos, the task is extremely challenging. Current state-of-the-art methods model spatiotemporal features by fusing every frame in the local sequence equally, which weakens the target-frame information. In addition, existing approaches usually emphasize deep features while ignoring the detailed information carried by shallow feature maps, resulting in the loss of crucial features. To address these problems, we propose an effective framework, the spatiotemporal learning transformer for video-based human pose estimation (SLT-Pose), which consists of a Personalized Feature Extraction Module (PFEM), a Self-feature Refinement Module (SRM), a Cross-frame Temporal Learning Module (CTLM), and a Disentangled Keypoint Detector (DKD). Specifically, PFEM extracts and modulates individual frame features to adapt to varying human shapes, and integrates the single-frame features into spatiotemporal features. SRM then establishes globally correlated spatial cues on the target frame to obtain a refined feature. Next, CTLM searches the spatiotemporal features for the information most closely related to the target frame, intensifying the interaction between the target frame and the local sequence using both shallow detailed and deep semantic representations. Finally, DKD extracts the disentangled characteristics of each joint and encodes the articulated joint pairs of the human body, enabling the model to predict keypoint heatmaps reasonably and accurately. Extensive experiments on three human motion benchmarks, PoseTrack2017, PoseTrack2018, and Sub-JHMDB, demonstrate that SLT-Pose performs favorably against state-of-the-art approaches in terms of both objective evaluation and subjective visual quality.
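For orientation, the following is a minimal, hypothetical PyTorch sketch of the pipeline the abstract describes. Only the module names (PFEM, SRM, CTLM, DKD) and the target-frame-vs-sequence attention idea come from the abstract; all internals (layer sizes, the shared CNN stem, single-scale attention) are illustrative assumptions, not the authors' implementation, which additionally fuses shallow and deep features and encodes articulated joint pairs, both omitted here.

import torch
import torch.nn as nn

class SLTPoseSketch(nn.Module):
    """Toy stand-in for the SLT-Pose stages: PFEM -> SRM -> CTLM -> DKD."""

    def __init__(self, channels: int = 64, heads: int = 4, joints: int = 17):
        super().__init__()
        # PFEM stand-in: a shared per-frame CNN stem (the paper's module also
        # modulates features to the varying human shape, omitted here).
        self.pfem = nn.Sequential(
            nn.Conv2d(3, channels, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, stride=2, padding=1), nn.ReLU(),
        )
        # SRM stand-in: self-attention over the target frame's spatial tokens.
        self.srm = nn.MultiheadAttention(channels, heads, batch_first=True)
        # CTLM stand-in: cross-attention where target-frame queries attend to
        # the spatiotemporal tokens of the whole local sequence.
        self.ctlm = nn.MultiheadAttention(channels, heads, batch_first=True)
        # DKD stand-in: a per-joint 1x1 head producing keypoint heatmaps.
        self.dkd = nn.Conv2d(channels, joints, kernel_size=1)

    def forward(self, frames: torch.Tensor, target_idx: int = 0) -> torch.Tensor:
        # frames: (B, T, 3, H, W) local sequence; target_idx selects the frame
        # whose pose is estimated.
        b, t, _, _, _ = frames.shape
        feats = self.pfem(frames.flatten(0, 1))                      # (B*T, C, h, w)
        _, c, h, w = feats.shape
        feats = feats.view(b, t, c, h, w)
        tgt = feats[:, target_idx].flatten(2).transpose(1, 2)        # (B, h*w, C)
        seq = feats.permute(0, 1, 3, 4, 2).reshape(b, t * h * w, c)  # (B, T*h*w, C)
        tgt, _ = self.srm(tgt, tgt, tgt)     # refine target-frame spatial cues
        fused, _ = self.ctlm(tgt, seq, seq)  # gather sequence cues for the target
        fused = fused.transpose(1, 2).reshape(b, c, h, w)
        return self.dkd(fused)               # (B, joints, h, w) heatmaps

# Example: 2 clips of 5 frames at 64x48 yield (2, 17, 16, 12) heatmaps.
heatmaps = SLTPoseSketch()(torch.randn(2, 5, 3, 64, 48))

The sketch keeps the abstract's key asymmetry: the target frame supplies the attention queries, so the sequence is mined for target-relevant information rather than fused with equal weight across frames.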

https://eprints.bournemouth.ac.uk/38864/

Sources: Scopus; Web of Science (Lite); BURO EPrints
