High-Level Feature Extraction for Crowd Behaviour Analysis: A Computer Vision Approach

Authors: Bruno, A., Ferjani, M., Sabeur, Z., Arbab-Zavar, B., Cetinkaya, D., Johnstone, L., Sallal, M. and Benaouda, D.

Journal: Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume: 13374 LNCS

Pages: 59-70

eISSN: 1611-3349

ISBN: 9783031133237

ISSN: 0302-9743

DOI: 10.1007/978-3-031-13324-4_6

Abstract:

The advent of deep learning has introduced disruptive techniques with unprecedented accuracy in many fields and scenarios. Tasks such as detecting regions of interest and extracting semantic features from images and video sequences are now tackled effectively thanks to the availability of publicly accessible, adequately annotated datasets. This paper describes a use case in which a stack of deep learning models is used for crowd behaviour analysis. The system consists of two main modules preceded by a pre-processing step. The first deep learning module integrates YOLOv5 and DeepSORT to detect and track pedestrians in video sequences from CCTV cameras. The second module ingests each pedestrian's spatial coordinates, velocity, and trajectory to cluster groups of people using the Coherent Neighbor Invariance technique. The method envisages acquiring video sequences from cameras overlooking pedestrian areas, such as public parks or squares, in order to detect any unusual crowd behaviour. By design, the system first checks whether anomalies are under way at the microscale level; it then returns clusters of people at the mesoscale level based on velocity and trajectories. This work is part of the physical behaviour detection module developed for the S4AllCities H2020 project.
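For illustration, the detection-and-tracking stage described in the abstract can be sketched as follows. This is a minimal approximation, not the authors' implementation: it assumes the publicly released ultralytics YOLOv5 hub model and the third-party deep-sort-realtime package, and the video path, confidence threshold, and tracker parameters are placeholder choices.

```python
# Illustrative sketch of a YOLOv5 + DeepSORT pedestrian tracking loop.
# Assumptions: ultralytics YOLOv5 via torch.hub and the third-party
# deep-sort-realtime package; all parameters are placeholders, not the paper's.
import cv2
import torch
from deep_sort_realtime.deepsort_tracker import DeepSort

model = torch.hub.load("ultralytics/yolov5", "yolov5s")  # COCO-pretrained detector
model.conf = 0.4                                          # detection confidence threshold
tracker = DeepSort(max_age=30)                            # drop tracks unseen for 30 frames

cap = cv2.VideoCapture("cctv_sequence.mp4")               # hypothetical input video
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)          # YOLOv5 expects RGB input
    detections = model(rgb).xyxy[0].cpu().numpy()         # rows: x1, y1, x2, y2, conf, cls
    # Keep only the 'person' class (COCO class 0) as (bbox_xywh, confidence, class) tuples.
    persons = [([x1, y1, x2 - x1, y2 - y1], conf, "person")
               for x1, y1, x2, y2, conf, cls in detections if int(cls) == 0]
    tracks = tracker.update_tracks(persons, frame=frame)  # DeepSORT data association
    for t in tracks:
        if t.is_confirmed():
            print(t.track_id, t.to_ltrb())  # per-pedestrian ID and box for the next module
cap.release()
```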

https://eprints.bournemouth.ac.uk/37092/

Source: Scopus

High-Level Feature Extraction for Crowd Behaviour Analysis: A Computer Vision Approach

Authors: Bruno, A., Ferjani, M., Sabeur, Z., Arbab-Zavar, B., Cetinkaya, D., Johnstone, L., Sallal, M. and Benaouda, D.

Journal: IMAGE ANALYSIS AND PROCESSING, ICIAP 2022 WORKSHOPS, PT II

Volume: 13374

Pages: 59-70

eISSN: 1611-3349

ISBN: 978-3-031-13323-7

ISSN: 0302-9743

DOI: 10.1007/978-3-031-13324-4_6

https://eprints.bournemouth.ac.uk/37092/

Source: Web of Science (Lite)

High-level feature extraction for crowd behaviour analysis: a computer vision approach

Authors: Bruno, A., Ferjani, M., Sabeur, Z., Arbab-Zavar, B., Cetinkaya, D., Johnstone, L., Sallal, M. and Benaouda, D.

Conference: Human Behaviour Analysis for Smart City Environment Safety (HBAxSCES) held within ICIAP'21 (21st International Conference on Image Analysis and Processing)

Dates: 23 May 2022

https://eprints.bournemouth.ac.uk/37092/

Source: Manual

High-level feature extraction for crowd behaviour analysis: a computer vision approach

Authors: Bruno, A., Ferjani, M., Sabeur, Z., Arbab-Zavar, B., Cetinkaya, D., Johnstone, L., Sallal, M. and Benaouda, D.

Conference: ICIAP 2021: International Conference on Image Analysis and Processing: Human Behaviour Analysis for Smart City Environment Safety (HBAxSCES)

Pages: 1-12

Publisher: Italian Association for Research in Computer Vision, Pattern Recognition and Machine Learning (CVPL, formerly GIRPR), part of the International Association for Pattern Recognition (IAPR)

Abstract:

The advent of deep learning has introduced disruptive techniques with unprecedented accuracy in many fields and scenarios. Tasks such as detecting regions of interest and extracting semantic features from images and video sequences are now tackled effectively thanks to the availability of publicly accessible, adequately annotated datasets. This paper describes a use case in which a stack of deep learning models is used for crowd behaviour analysis. The system consists of two main modules preceded by a pre-processing step. The first deep learning module integrates YOLOv5 and DeepSORT to detect and track pedestrians in video sequences from CCTV cameras. The second module ingests each pedestrian's spatial coordinates, velocity, and trajectory to cluster groups of people using the Coherent Neighbor Invariance technique. The method envisages acquiring video sequences from cameras overlooking pedestrian areas, such as public parks or squares, in order to detect any unusual crowd behaviour. By design, the system first checks whether anomalies are under way at the microscale level; it then returns clusters of people at the mesoscale level based on velocity and trajectories. This work is part of the physical behaviour detection module developed for the S4AllCities H2020 project.
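The mesoscale grouping step can be illustrated with a simplified stand-in. The paper uses the Coherent Neighbor Invariance technique; the sketch below instead clusters pedestrians by position and velocity with scikit-learn's DBSCAN, purely to show how per-pedestrian tracker output (coordinates and velocities) can be turned into groups. The example states, feature weighting, and eps value are arbitrary illustrative choices.

```python
# Illustrative stand-in for the mesoscale grouping step: cluster pedestrians
# whose positions and velocities are similar. DBSCAN is used here as a simple
# substitute and is NOT the Coherent Neighbor Invariance technique itself.
import numpy as np
from sklearn.cluster import DBSCAN

# Hypothetical per-pedestrian state at one time step, produced by the tracker:
# track_id -> (x, y, vx, vy) in image coordinates and pixels per frame.
states = {
    1: (120.0, 340.0, 1.2, 0.1),
    2: (128.0, 352.0, 1.1, 0.0),
    3: (540.0, 200.0, -0.9, 0.4),
}

ids = list(states)
# Weight velocity more heavily so groups share a direction, not just a location.
features = np.array([[x, y, 50.0 * vx, 50.0 * vy] for x, y, vx, vy in states.values()])

labels = DBSCAN(eps=40.0, min_samples=2).fit_predict(features)  # -1 marks isolated pedestrians
for track_id, label in zip(ids, labels):
    print(f"pedestrian {track_id} -> group {label}")
```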

https://eprints.bournemouth.ac.uk/37092/

https://sites.google.com/view/hbaxsces/home

Source: BURO EPrints