Yousaf, Muhammad Zain and Guerrero, Josep M and Sadiq, Muhammad Tariq (2025) Optimizing machine learning algorithms for fault classification in rolling bearings: A Bayesian Optimization approach. Engineering Applications of Artificial Intelligence, 150. p. 110597. DOI https://doi.org/10.1016/j.engappai.2025.110597
Abstract
Modern power machinery is inherently complex and operates under dynamic conditions, so it demands advanced deep-learning-based solutions to diagnose bearing faults in rotating equipment, which cause unplanned downtime, safety issues, and operational challenges. However, most deep learning approaches aim to improve performance by incorporating hybrid neural networks built from multiple convolutional and temporal units, often overlooking the optimization of the large number of hyperparameters that define the structure and performance of hybrid models, along with the associated computational constraints. To address this gap, this study presents an innovative approach for the detection and classification of bearing faults by integrating an optimized sparse deep autoencoder (DAE) with a Bidirectional Long Short-Term Memory (Bi-LSTM) model. The optimal network structure and hyperparameters are determined through Bayesian optimization (BO) in a parallel setting, which automatically searches for network configurations that improve the feature extraction ability of the DAE and the generalization ability of the Bi-LSTM for more efficient fault classification in rolling bearings. Parallel optimization accelerates network structure and hyperparameter tuning by evaluating multiple configurations at once, leveraging the full potential of available multi-core Central Processing Units (CPUs) and Graphics Processing Units (GPUs) in conjunction with a lightweight BO surrogate model. This autonomous and user-friendly framework generates inputs via principal component analysis (PCA) for linear feature extraction and the BO-DAE for non-linear feature extraction and selection, which are then used to train a BO-enhanced Bi-LSTM. This three-stage optimized method effectively captures spatial and temporal dependencies in vibration signals, achieving superior efficiency, accuracy, and reliability compared with shallow and deep learning models.
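The first stage of the pipeline described above (linear feature extraction with principal component analysis) can be sketched in pure NumPy. The data, dimensionality, and number of retained components below are illustrative assumptions, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for vibration-signal features: 200 samples, 16 dimensions,
# with one direction given dominant variance so PCA has something to find.
X = rng.normal(size=(200, 16))
X[:, 0] *= 5.0

# PCA via eigendecomposition of the sample covariance matrix.
Xc = X - X.mean(axis=0)                      # center the data
cov = Xc.T @ Xc / (len(Xc) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)       # ascending eigenvalues
order = np.argsort(eigvals)[::-1]            # sort descending
components = eigvecs[:, order[:4]]           # keep top-4 principal axes
Z = Xc @ components                          # linear feature matrix

explained = eigvals[order[:4]].sum() / eigvals.sum()
print(f"top-4 PCs explain {explained:.1%} of total variance")
```

In the paper's framework these linear features are concatenated with the non-linear features learned by the BO-tuned sparse DAE before training the Bi-LSTM classifier.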
Evaluation metrics, including macro precision (99.50 %), recall (99.60 %), F1-Score (99.57 %), and Cohen's Kappa metric (Cκ = 99.53 %), demonstrate the efficacy of our approach for bearing fault classification in industrial applications.
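As a hedged illustration of the Bayesian optimization loop the abstract describes, the sketch below tunes a single hypothetical hyperparameter using a Gaussian-process surrogate with a lower-confidence-bound acquisition. The objective function, kernel length-scale, and iteration budget are assumptions for illustration only; the paper's actual BO setup tunes the DAE and Bi-LSTM structure and hyperparameters, evaluated in parallel:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical validation-error surface over one normalized hyperparameter;
# stands in for actually training and validating the DAE/Bi-LSTM.
def val_error(x):
    return (x - 0.3) ** 2 + 0.05 * np.sin(8 * x)

def rbf(a, b, ls=0.2):
    # Squared-exponential kernel: a common GP surrogate choice.
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ls) ** 2)

grid = np.linspace(0.0, 1.0, 201)          # candidate hyperparameter values
xs = list(rng.uniform(0.0, 1.0, size=3))   # initial random evaluations
ys = [val_error(x) for x in xs]

for _ in range(10):
    X, y = np.array(xs), np.array(ys)
    K = rbf(X, X) + 1e-6 * np.eye(len(X))  # jitter for numerical stability
    Ks = rbf(grid, X)
    Kinv = np.linalg.inv(K)
    mu = Ks @ Kinv @ y                     # GP posterior mean on the grid
    var = 1.0 - np.einsum("ij,jk,ik->i", Ks, Kinv, Ks)
    sigma = np.sqrt(np.maximum(var, 1e-12))
    # Lower-confidence-bound acquisition: exploit low predicted error,
    # explore regions of high surrogate uncertainty.
    x_next = grid[np.argmin(mu - 2.0 * sigma)]
    xs.append(float(x_next))
    ys.append(val_error(x_next))

best = xs[int(np.argmin(ys))]
print(f"best hyperparameter ~ {best:.3f}, val error ~ {min(ys):.4f}")
```

The parallel variant in the paper would propose and evaluate several candidate configurations per iteration across CPU/GPU workers rather than one at a time.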
| Item Type | Article |
| --- | --- |
| Subjects | Z Bibliography. Library Science. Information Resources > ZR Rights Retention |
| Divisions | Faculty of Science and Health > Computer Science and Electronic Engineering, School of |
| SWORD Depositor | Unnamed user with email elements@essex.ac.uk |
| Depositing User | Unnamed user with email elements@essex.ac.uk |
| Date Deposited | 06 May 2025 09:31 |
| Last Modified | 06 May 2025 09:33 |
| URI | http://repository.essex.ac.uk/id/eprint/40566 |
Available files
Filename: Revised manuscript with no changes marked.pdf
Licence: Creative Commons: Attribution 4.0