
On accelerated gradient approximation for least square regression with L1-regularization

Zhang, Y and Sun, J (2015) On accelerated gradient approximation for least square regression with L1-regularization. In: Proceedings - 2015 IEEE Symposium Series on Computational Intelligence, SSCI 2015.

Full text not available from this repository.

Abstract

In this paper, we consider an online least-squares regression problem whose objective function is the sum of a quadratic loss and an L1 regularization term on the model parameters. For each training sample, we propose to approximate the L1 regularization by a convex function, which yields an overall convex approximation to the original objective. We apply an efficient accelerated stochastic approximation algorithm to solve this approximation. The algorithm does not need to store previous samples, which reduces the space complexity. We further prove that it is guaranteed to converge to the global optimum at a rate of O(ln n / √n), where n is the number of training samples. The proof relies on a weaker assumption than those used in similar work.
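The abstract does not give the paper's exact algorithm, but the idea it describes can be sketched under assumptions: below, |w| is replaced by the smooth convex surrogate sqrt(w² + ε) (a hypothetical choice of approximation), and a Nesterov-style momentum step with a decaying step size stands in for the accelerated stochastic approximation scheme. The function names, step-size schedule, and constants are all illustrative, not taken from the paper.

```python
import numpy as np

# Hypothetical smooth convex surrogate for |w|: sqrt(w^2 + eps).
# The abstract does not specify the paper's actual approximation.
def smooth_l1(w, eps=1e-3):
    return np.sqrt(w ** 2 + eps)

def smooth_l1_grad(w, eps=1e-3):
    return w / np.sqrt(w ** 2 + eps)

def online_accelerated_lasso(samples, dim, lam=0.01, eps=1e-3, step=0.1):
    """Single pass over a stream of (x, y) pairs using Nesterov-style
    momentum on the quadratic loss plus the smoothed L1 penalty.
    Only the current and previous iterates are stored, so past samples
    are never revisited (matching the low space complexity in the abstract)."""
    w = np.zeros(dim)       # current iterate
    w_prev = np.zeros(dim)  # previous iterate, used by the momentum term
    for t, (x, y) in enumerate(samples, start=1):
        # extrapolation (momentum) point
        v = w + (t - 1.0) / (t + 2.0) * (w - w_prev)
        # stochastic gradient of 0.5*(x.v - y)^2 + lam*sum(smooth_l1(v))
        g = (v @ x - y) * x + lam * smooth_l1_grad(v, eps)
        w_prev = w
        w = v - (step / np.sqrt(t)) * g  # step size decaying like 1/sqrt(t)
    return w

# Synthetic stream: a sparse ground-truth model with small noise.
rng = np.random.default_rng(0)
true_w = np.array([1.0, 0.0, -2.0])
X = rng.normal(size=(2000, 3))
y = X @ true_w + 0.01 * rng.normal(size=2000)
w_hat = online_accelerated_lasso(zip(X, y), dim=3)
print(w_hat)
```

Each sample is consumed once and discarded, which is what makes the approach online; the decaying 1/√t step size is one standard schedule consistent with an O(ln n / √n) type rate, though the paper's own schedule may differ.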

Item Type: Conference or Workshop Item (Paper)
Additional Information: Published proceedings: Proceedings - 2015 IEEE Symposium Series on Computational Intelligence, SSCI 2015
Subjects: Q Science > QA Mathematics > QA75 Electronic computers. Computer science
Depositing User: Jim Jamieson
Date Deposited: 15 Dec 2016 16:03
Last Modified: 07 Apr 2021 10:16
URI: http://repository.essex.ac.uk/id/eprint/17629
