Multi-Dialect Arabic Speech Recognition

Authors: Ali, A.R.

Journal: Proceedings of the International Joint Conference on Neural Networks

DOI: 10.1109/IJCNN48605.2020.9206658

Abstract:

This paper presents the design and development of a multi-dialect automatic speech recognition system for Arabic. Deep neural networks have become an effective tool for sequential data problems, particularly when the system is trained end to end. Arabic speech recognition is a complex task because of the existence of multiple dialects, the scarcity of large corpora, and missing vocalization. The first contribution of this work is therefore the development of a large multi-dialectal corpus with fully or at least partially vocalized transcriptions. Because this open-source corpus is gathered from multiple sources, its transcriptions contain non-standard Arabic alphabets, which are normalized by defining a common character set. The second contribution is a framework for training an acoustic model that achieves state-of-the-art performance. The network architecture combines convolutional and recurrent layers. Spectrogram features of the audio are extracted in the time-frequency domain and fed into the network. The output frames produced by the recurrent model are further trained to align the audio features with their corresponding transcription sequences. Sequence alignment is performed using a beam-search decoder with a tetra-gram language model. The proposed system achieves a 14% error rate, outperforming previous systems.
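
The abstract describes a convolutional-plus-recurrent acoustic model over spectrogram features whose per-frame outputs are aligned with the transcription, but it does not spell out the layers, loss, or hyperparameters. The sketch below fills those in with common choices for this kind of end-to-end system: a small convolutional front end, bidirectional GRUs, and a CTC alignment loss. The layer sizes, the CTC objective, and the 40-character normalized alphabet are illustrative assumptions, not the authors' published configuration.

```python
# Minimal sketch (not the authors' code): CNN + RNN acoustic model over
# log-spectrogram frames with a CTC-style frame-to-transcription alignment
# loss. All sizes and the choice of GRU/CTC are assumptions for illustration.
import torch
import torch.nn as nn

class AcousticModel(nn.Module):
    def __init__(self, n_mels=80, hidden=256, n_chars=40):
        super().__init__()
        # Convolutional front end over the (time x frequency) spectrogram.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, stride=(2, 2), padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, stride=(2, 1), padding=1),
            nn.ReLU(),
        )
        # Recurrent layers model the temporal context of the conv features.
        self.rnn = nn.GRU(32 * (n_mels // 2), hidden, num_layers=3,
                          bidirectional=True, batch_first=True)
        # Per-frame scores over the normalized character set plus a CTC blank.
        self.fc = nn.Linear(2 * hidden, n_chars + 1)

    def forward(self, spec):                 # spec: (batch, time, n_mels)
        x = self.conv(spec.unsqueeze(1))     # (batch, 32, time/4, n_mels/2)
        b, c, t, f = x.shape
        x = x.permute(0, 2, 1, 3).reshape(b, t, c * f)
        x, _ = self.rnn(x)
        return self.fc(x).log_softmax(-1)    # (batch, time/4, n_chars + 1)

# One training step with CTC (blank index 0) on dummy data.
model = AcousticModel()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)
spec = torch.randn(4, 400, 80)               # dummy spectrogram batch
targets = torch.randint(1, 41, (4, 50))      # dummy character-id targets
log_probs = model(spec)                      # (batch, T', n_chars + 1)
input_lens = torch.full((4,), log_probs.size(1), dtype=torch.long)
target_lens = torch.full((4,), 50, dtype=torch.long)
loss = ctc(log_probs.transpose(0, 1), targets, input_lens, target_lens)
loss.backward()
```

At inference, the per-frame log-probabilities would be passed to a beam-search decoder combined with the tetra-gram language model mentioned in the abstract; that decoding step is outside this sketch.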

Source: Scopus

Multi-Dialect Arabic Speech Recognition

Authors: Ali, A.R.

Journal: 2020 International Joint Conference on Neural Networks (IJCNN)

ISSN: 2161-4393

DOI: 10.1109/IJCNN48605.2020.9206658

Source: Web of Science (Lite)