YogNet: A two-stream network for realtime multiperson yoga action recognition and posture correction
Authors: Yadav, S.K., Agarwal, A., Kumar, A., Tiwari, K., Pandey, H.M. and Akbar, S.A.
Journal: Knowledge-Based Systems
Publisher: Elsevier
Volume: 250
ISSN: 0950-7051
eISSN: 1872-7409
DOI: 10.1016/j.knosys.2022.109097
Abstract: Yoga is a traditional Indian exercise. It specifies various body postures, called asanas, whose practice benefits physical, mental, and spiritual well-being. To support yoga practitioners, there is a need for an expert yoga asana recognition system that can automatically analyze a practitioner's postures and provide suitable posture-correction instructions. This paper proposes YogNet, a multi-person yoga expert system for 20 asanas that uses a two-stream deep spatiotemporal neural network architecture. The first stream uses a keypoint-detection approach to detect the practitioner's pose, followed by the formation of bounding boxes around the subject. The model then applies time-distributed convolutional neural networks (CNNs) to extract frame-wise postural features, followed by regularized long short-term memory (LSTM) networks that give temporal predictions. The second stream uses 3D-CNNs for spatiotemporal feature extraction from RGB videos. Finally, the scores of the two streams are combined using multiple fusion techniques. A yoga asana recognition (YAR) database containing 1206 videos is collected using a single 2D web camera over 367 minutes with the help of 16 participants, and contains four view variations, i.e., front, back, left, and right sides. The proposed system is novel in that it is the earliest two-stream deep-learning-based system that can perform multi-person yoga asana recognition and correction in real time. Simulation results reveal that the YogNet system achieved accuracies of 77.29%, 89.29%, and 96.31% using the pose stream, the RGB stream, and the fusion of both streams, respectively. These results are sufficiently high to recommend the system for general adoption.
https://eprints.bournemouth.ac.uk/36993/
Sources: Scopus, Web of Science (Lite), Manual, BURO EPrints