Key concepts
This paper presents a computationally efficient learning method for Fuzzy Logic Systems (FLSs) embedded within the realm of Deep Learning (DL), tackling the challenge of learning from large-scale data.
Summary
The paper focuses on the learning problem of Type-1 (T1) and Interval Type-2 (IT2) Fuzzy Logic Systems (FLSs) and presents a computationally efficient learning method embedded within the realm of Deep Learning (DL).
Key highlights:
- Provides parameterization tricks that transform the constrained learning problem of FLSs into an unconstrained one, enabling the use of standard DL optimizers.
- Presents efficient mini-batched inference implementations for both T1-FLS and IT2-FLS, eliminating the iterative nature of the Karnik-Mendel Algorithm (KMA) for IT2-FLS.
- The proposed method minimizes training time while leveraging optimizers and automatic differentiation provided within DL frameworks.
- Illustrates the efficiency of the DL framework for FLSs on benchmark datasets, showing significant improvements in training time without compromising accuracy.
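The parameterization tricks mentioned above can be illustrated with a minimal sketch. The function names and the specific transforms below (softplus for positivity, sigmoid for a bounded lower-membership height) are illustrative assumptions, not necessarily the exact transforms the authors use; the point is that the optimizer updates unconstrained parameters while the FLS always sees valid ones.

```python
import math

def softplus(rho):
    # Map an unconstrained parameter rho to a strictly positive value,
    # e.g. the standard deviation of a Gaussian membership function.
    return math.log1p(math.exp(rho))

def it2_height(tau):
    # Map an unconstrained parameter tau to (0, 1), e.g. a scaling of
    # the lower MF so it never exceeds the upper MF (illustrative).
    return 1.0 / (1.0 + math.exp(-tau))

def gaussian_mf(x, center, rho_sigma):
    # sigma > 0 is guaranteed by the softplus reparameterization, so a
    # standard DL optimizer can update rho_sigma freely without
    # projection or clipping.
    sigma = softplus(rho_sigma)
    return math.exp(-0.5 * ((x - center) / sigma) ** 2)
```

With this reparameterization, gradients flow through `softplus` and `it2_height` via automatic differentiation, so no constrained-optimization machinery is needed.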
The authors first provide background on T1 and IT2 FLSs, then present the core components of the DL framework to learn FLSs, including handling constraints and developing efficient mini-batch FLS inferences. The paper concludes with a performance analysis on various datasets, demonstrating the effectiveness of the proposed approach.
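A mini-batched T1-FLS inference can be sketched as below. This is a didactic NumPy version of a batched first-order TSK inference, written under assumed shapes and Gaussian memberships; the authors' implementation lives inside a DL framework, but the broadcasting structure is the same.

```python
import numpy as np

def t1_fls_batch(X, centers, sigmas, theta):
    """Batched first-order TSK T1-FLS inference (illustrative sketch).

    X:       (B, D) mini-batch of inputs
    centers: (R, D) Gaussian MF centers, one row per rule
    sigmas:  (R, D) Gaussian MF widths (assumed positive)
    theta:   (R, D+1) consequent parameters [bias, weights] per rule
    """
    # Memberships for every (sample, rule, feature) via broadcasting: (B, R, D)
    diff = (X[:, None, :] - centers[None, :, :]) / sigmas[None, :, :]
    mu = np.exp(-0.5 * diff ** 2)
    # Product t-norm over features gives rule firing strengths: (B, R)
    f = mu.prod(axis=2)
    w = f / f.sum(axis=1, keepdims=True)
    # First-order TSK consequents y_r(x) = theta_r0 + theta_r[1:] . x : (B, R)
    Y = theta[None, :, 0] + X @ theta[:, 1:].T
    # Weighted average defuzzification: (B,)
    return (w * Y).sum(axis=1)
```

Because the whole batch is one broadcasted tensor expression, the same code runs unchanged on GPU tensors in a DL framework and is differentiable end to end.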
Statistics
The training time for the proposed T1-FLS and IT2-FLS (abbreviated as IT2-fKM) implementations is significantly shorter than that of the traditional KMA-based IT2-FLS.
For the CCPP dataset with 15 rules, the training time for T1-FLS is 11s, IT2-fKM is 50s, while the KMA-based IT2-FLS takes 57h 11m.
For the BH dataset with 15 rules, the training time for T1-FLS is 5s, IT2-fKM is 9s, while the KMA-based IT2-FLS takes 2h 28m.
For the ENB dataset with 15 rules, the training time for T1-FLS is 7s, IT2-fKM is 45s, while the KMA-based IT2-FLS takes 9h 57m.
Quotes
"Our implementation of IT2-FLS was 7218 times faster than the KMA since our implementation computed all of the possible combinations of the KMA in parallel in GPU."
"Thanks to our efficient implementations, we were able to seamlessly solve the learning problem of the FLSs by leveraging automatic differentiation and DL optimizers."