Table of Links
Abstract and 1. Introduction
2. Preliminaries and 2.1. Blind deconvolution
2.2. Quadratic neural networks
3. Methodology
3.1. Time domain quadratic convolutional filter
3.2. Superiority of cyclic features extraction by QCNN
3.3. Frequency domain linear filter with envelope spectrum objective function
3.4. Integral optimization with uncertainty-aware weighing scheme
4. Computational experiments
4.1. Experimental configurations
4.2. Case study 1: PU dataset
4.3. Case study 2: JNU dataset
4.4. Case study 3: HIT dataset
5. Computational experiments
5.1. Comparison of BD methods
5.2. Classification results on various noise conditions
5.3. Employing ClassBD to deep learning classifiers
5.4. Employing ClassBD to machine learning classifiers
5.5. Feature extraction ability of quadratic and conventional networks
5.6. Comparison of ClassBD filters
6. Conclusions
Appendix and References
5.4. Employing ClassBD to machine learning classifiers
Compared with deep learning models, classical machine learning (ML) classifiers offer some distinct advantages, including strong interpretability and lightweight architectures. However, these "shallow" ML methods invariably rely on human-engineered features for bearing fault diagnosis, which limits their generalization ability and prevents end-to-end diagnostic modeling [6]. Given that ClassBD enhances the performance of deep learning classifiers, we posit that it can also serve as a feature extractor to augment the performance of classical ML classifiers.
Consequently, in this experiment, we utilize the pre-trained ClassBD as a feature extractor and feed its output into several ML classifiers for comparison: support vector machine (SVM) [88], k-nearest neighbor (KNN) [89], random forest (RF) [90], logistic regression (LR) [91], and a highly efficient gradient boosting decision tree (LightGBM) [92].
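The pipeline above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the actual pre-trained ClassBD filter is a neural network whose weights are not reproduced here, so `classbd_features` below is a labeled placeholder (a simple moving-average filter standing in for the learned time/frequency-domain filters), and the vibration data and labels are synthetic. The scikit-learn classifiers mirror four of the five compared methods; LightGBM is omitted to keep the sketch dependency-light.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score

def classbd_features(x):
    # PLACEHOLDER for the pre-trained ClassBD filter: any callable mapping a
    # raw vibration segment to a filtered feature vector fits here. A simple
    # moving-average (low-pass) filter stands in for the learned filters.
    return np.convolve(x, np.ones(5) / 5, mode="same")

# Synthetic stand-in data: 200 segments of length 256, two fault classes.
rng = np.random.default_rng(0)
n, length = 200, 256
labels = rng.integers(0, 2, size=n)
raw = rng.normal(size=(n, length)) + labels[:, None]  # toy class-dependent signals

# Extract features with the (placeholder) BD filter, then fit the shallow classifiers.
feats = np.stack([classbd_features(x) for x in raw])
X_tr, X_te, y_tr, y_te = train_test_split(feats, labels, random_state=0)

for name, clf in [("SVM", SVC()),
                  ("KNN", KNeighborsClassifier()),
                  ("RF", RandomForestClassifier(random_state=0)),
                  ("LR", LogisticRegression(max_iter=1000))]:
    clf.fit(X_tr, y_tr)
    print(name, round(f1_score(y_te, clf.predict(X_te), average="macro"), 3))
```

Swapping `classbd_features` for the real pre-trained ClassBD forward pass (with gradients disabled) yields the feature-extractor configuration evaluated in this subsection.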
The results are presented in Table 14. Evidently, ClassBD substantially improves the performance of the ML methods. Directly inputting raw signals into these ML classifiers yields markedly poor performance; on the JNU and HIT datasets, SVM and RF even fail to converge. With the incorporation of ClassBD, however, classification performance improves considerably. For instance, KNN achieves a 90.71% F1 score on the JNU dataset, compared to a mere 2.89% F1 score without ClassBD, surpassing even some deep learning methods. Nonetheless, the ML methods remain unstable across datasets: the best-performing one achieves only a 49.77% F1 score on the PU dataset. Despite this, we believe the combination of ClassBD and ML methods presents a promising direction toward highly interpretable and efficient diagnostic approaches, and we will explore this topic further in future work.
Authors:
(1) Jing-Xiao Liao, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China and School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;
(2) Chao He, School of Mechanical, Electronic and Control Engineering, Beijing Jiaotong University, Beijing, China;
(3) Jipu Li, Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China;
(4) Jinwei Sun, School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;
(5) Shiping Zhang (Corresponding author), School of Instrumentation Science and Engineering, Harbin Institute of Technology, Harbin, China;
(6) Xiaoge Zhang (Corresponding author), Department of Industrial and Systems Engineering, The Hong Kong Polytechnic University, Hong Kong, Special Administrative Region of China.
This paper is available on arxiv under CC by 4.0 Deed (Attribution 4.0 International) license.