Competitive regression.
Authors: Bowden, M. and Jamil, W.
Institution: Bournemouth University, Faculty of Science and Technology
Abstract: This thesis investigates the predictive complexity of online regression. In essence, it considers supervised learning from a data sequence consisting of n-dimensional inputs and the corresponding outputs. The online learning scenario considered here is one of sequentially arriving data, with no stochastic assumptions made about the nature of the arriving data. At each trial, the learner receives an input and produces a prediction; the true output is then observed, and the learner suffers a loss that is subsequently used to learn. The goal of online regression in this thesis is to minimise the regret suffered relative to the minimum of the sum of squares, i.e. the loss of the best fixed predictor in hindsight.
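To make the protocol concrete, here is a minimal sketch (an editorial illustration, not code from the thesis) of square-loss online regression and of the regret against the batch least-squares minimiser; the learner's `predict` and `update` methods are a hypothetical interface.

```python
import numpy as np

def online_regression(learner, X, y):
    """Run the online protocol: predict, observe, suffer square loss, learn.

    `learner` is any object exposing predict(x) and update(x, y)
    (a hypothetical interface, not taken from the thesis).
    X has shape (T, n); y has shape (T,).
    """
    cumulative_loss = 0.0
    for x_t, y_t in zip(X, y):
        y_hat = learner.predict(x_t)            # step 1: predict from the input
        cumulative_loss += (y_t - y_hat) ** 2   # step 2: observe output, suffer loss
        learner.update(x_t, y_t)                # step 3: learn from the revealed output
    # Regret: cumulative loss minus the minimum of the sum of squares,
    # i.e. the loss of the best fixed linear predictor in hindsight.
    theta_star, *_ = np.linalg.lstsq(X, y, rcond=None)
    best_loss = np.sum((y - X @ theta_star) ** 2)
    return cumulative_loss - best_loss
```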
In the present work, three novel algorithms are derived and analysed: Online Shrinkage via Limit of Gibbs sampler (OSLOG), Competitive Iterated Ridge Regression (CIRR), and Competitive Normalised Least Squares (CNLS). The development of these algorithms is driven by Kolmogorov complexity (also known as “competitive analysis”). OSLOG draws on the Bayesian approach, CIRR relies on game theory, and CNLS makes use of gradient-descent methods. The analysis of the algorithms addresses two aspects: (1) formulating an upper bound on the cumulative square loss, and (2) identifying the precise conditions under which they perform better than other algorithms. The theoretical results indicate that they enjoy better guarantees than the state-of-the-art algorithms. An empirical study conducted on real-world datasets also shows that the proposed algorithms outperform the state-of-the-art algorithms and come close to the minimum of the sum of squares.
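For flavour, the following is a minimal sketch of plain online ridge regression, a classical baseline in the competitive-regression family to which CIRR belongs; it is not one of the thesis's algorithms, and the regularisation parameter `a` is an assumed input.

```python
import numpy as np

class OnlineRidge:
    """Plain online ridge regression via rank-one (Sherman-Morrison) updates.

    A classical baseline from the competitive-regression literature,
    not the OSLOG/CIRR/CNLS algorithms derived in the thesis.
    """

    def __init__(self, dim, a=1.0):
        self.A_inv = np.eye(dim) / a   # inverse of (a*I + sum of x x^T)
        self.b = np.zeros(dim)         # running sum of y_t * x_t

    def predict(self, x):
        # Ridge prediction: x^T (a*I + sum x x^T)^{-1} b
        return float(x @ self.A_inv @ self.b)

    def update(self, x, y):
        # Sherman-Morrison rank-one update of the inverse Gram matrix.
        Ax = self.A_inv @ x
        self.A_inv -= np.outer(Ax, Ax) / (1.0 + x @ Ax)
        self.b += y * x
```

With the protocol sketch above, `online_regression(OnlineRidge(dim=X.shape[1]), X, y)` would return this baseline's regret against the batch least-squares fit.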
https://eprints.bournemouth.ac.uk/34279/
Source: Manual