Perdikis, Serafeim and Leeb, Robert and Chavarriaga, Ricardo and Millan, Jose del R (2021) Context-Aware Learning for Generative Models. IEEE Transactions on Neural Networks and Learning Systems, 32 (8). pp. 3471-3483. DOI https://doi.org/10.1109/tnnls.2020.3011671
Abstract
This work studies the class of algorithms for learning with side information that emerges from extending generative models with embedded context-related variables. Using finite mixture models (FMMs) as the prototypical Bayesian network, we show that maximum-likelihood estimation (MLE) of parameters through expectation-maximization (EM) improves over the regular unsupervised case and can approach the performance of supervised learning, despite the absence of any explicit ground-truth data labeling. By direct application of the missing information principle (MIP), the algorithms' performance is proven to range between the conventional supervised and unsupervised MLE extremes in proportion to the information content of the contextual assistance provided. The benefits include higher estimation precision, smaller standard errors, faster convergence rates, and improved classification accuracy or regression fitness, demonstrated in various scenarios that also highlight important properties of and differences among the outlined situations. Applicability is showcased with three real-world unsupervised classification scenarios employing Gaussian mixture models. Importantly, we exemplify the natural extension of this methodology to any type of generative model by deriving an equivalent context-aware algorithm for variational autoencoders (VAEs), thus broadening the spectrum of applicability to unsupervised deep learning with artificial neural networks. The latter is contrasted with a neural-symbolic algorithm exploiting side information.
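The core idea — EM for a Gaussian mixture in which per-sample contextual side information weights the E-step responsibilities, interpolating between unsupervised and supervised estimation — can be illustrated with a minimal sketch. This is an assumption-laden toy, not the paper's algorithm: the function name, the 1-D setting, and the simple prior-weighting scheme are all illustrative choices. With uniform context priors it reduces to ordinary unsupervised EM; with one-hot priors it behaves like supervised MLE.

```python
import numpy as np

def context_aware_em_gmm(X, context_priors, n_iter=50):
    """EM for a 1-D Gaussian mixture where each sample carries a
    context prior over components (soft side information).

    X              : (n,) data
    context_priors : (n, k) nonnegative weights; uniform rows give
                     plain unsupervised EM, one-hot rows mimic labels
    """
    n, k = context_priors.shape
    # Initialise means from data quantiles, shared empirical variance
    mu = np.quantile(X, np.linspace(0.25, 0.75, k))
    var = np.full(k, np.var(X))
    pi = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: Gaussian likelihoods x mixing weights x context priors
        ll = np.exp(-0.5 * (X[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = ll * pi * context_priors
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: standard responsibility-weighted updates
        nk = resp.sum(axis=0)
        mu = (resp * X[:, None]).sum(axis=0) / nk
        var = (resp * (X[:, None] - mu) ** 2).sum(axis=0) / nk
        pi = nk / n
    return mu, var, pi
```

The more informative the rows of `context_priors` (the further from uniform), the closer the estimates sit to the supervised extreme, mirroring the MIP-based interpolation result described in the abstract.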
| Item Type: | Article |
|---|---|
| Divisions: | Faculty of Science and Health; Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor: | Unnamed user with email elements@essex.ac.uk |
| Depositing User: | Unnamed user with email elements@essex.ac.uk |
| Date Deposited: | 14 Jul 2020 14:15 |
| Last Modified: | 16 May 2024 20:29 |
| URI: | http://repository.essex.ac.uk/id/eprint/28199 |
Available files
Filename: Perdikis_CA.pdf