Asynchronous Stochastic Variational Inference

Authors: Mohamed, S., Bouchachia, A. and Sayed-Mouchaweh, M.

Conference: INNS Big Data and Deep Learning 2019

Dates: 16-18 April 2019

Abstract:

Stochastic variational inference (SVI) employs stochastic optimization to scale Bayesian computation up to massive data. Since SVI is at its core a stochastic gradient-based algorithm, horizontal parallelism can be harnessed to enable larger-scale inference. We propose a lock-free parallel implementation of SVI that distributes computation over multiple slaves in an asynchronous fashion. We show that our implementation achieves linear speed-up while guaranteeing an asymptotic ergodic convergence rate of O(1/√T), provided the number of slaves is bounded by √T, where T is the total number of iterations. The implementation runs in a high-performance computing environment using the Message Passing Interface for Python (MPI4py). The empirical evaluation shows that our parallel SVI is lossless, performing comparably to its serial counterpart while achieving linear speed-up.

https://eprints.bournemouth.ac.uk/32444/

Source: Manual
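The asynchronous, lock-free update pattern the abstract describes can be illustrated in miniature. The sketch below is not the authors' MPI4py implementation: it replaces MPI slaves with Python threads in a single process, and reduces SVI to a Robbins-Monro stochastic update of one variational parameter (the posterior mean of a Gaussian). All names (`worker`, `lam`, the step-size constants `tau`, `kappa`) are illustrative assumptions, not from the paper.

```python
import threading
import numpy as np

# Synthetic data: N observations from a Gaussian with unknown mean 3.0.
rng = np.random.default_rng(0)
N = 10_000
data = rng.normal(loc=3.0, scale=1.0, size=N)

# Shared variational parameter and iteration counter. As in the
# asynchronous scheme sketched here, no lock protects them: workers
# may read a slightly stale value, which the theory tolerates.
lam = 0.0
step = [0]

def worker(iters, seed, batch=100, tau=1.0, kappa=0.6):
    """Apply `iters` lock-free stochastic updates from random minibatches."""
    global lam
    local_rng = np.random.default_rng(seed)  # per-worker RNG (thread safety)
    for _ in range(iters):
        step[0] += 1                      # racy increment, intentionally unlocked
        rho = (step[0] + tau) ** -kappa   # decaying Robbins-Monro step size
        mb = data[local_rng.integers(0, N, size=batch)]
        # SVI-style convex combination of old parameter and noisy estimate.
        lam = (1.0 - rho) * lam + rho * mb.mean()

# Four "slaves" running asynchronously against the shared parameter.
threads = [threading.Thread(target=worker, args=(500, s)) for s in range(4)]
for th in threads:
    th.start()
for th in threads:
    th.join()

print(float(lam))  # converges toward the true mean, 3.0
```

In the paper's setting, the threads would be MPI slaves exchanging updates with a master via MPI4py messages rather than shared memory; the decaying step size plays the same role in both cases.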

Conference: INNS Big Data and Deep Learning 2019: Proceedings of the International Neural Networks Society Conference

Published in: Recent Advances in Big Data and Deep Learning

Publisher: Springer

ISBN: 978-3-030-16840-7

https://innsbddl2019.org/

Source: BURO EPrints