Interpretable Neural Network Tree for Continuous-Feature Data Sets


Abstract

The neural network tree (NNTree) is a hybrid learning model. We have previously proposed a genetic algorithm based on simplified multiple-objective optimization for evolving NNTrees, and shown through experiments that an NNTree can be interpreted easily if the number of inputs to each expert neural network (ENN) is limited. One remaining problem is that the input features are continuous. This means that even if the number of inputs to each ENN is, say, 4, the number of corresponding binary inputs will be 64 if each continuous input is represented as a 16-bit binary number, and the computational complexity of interpretation is therefore proportional to 2^64. To make NNTrees more interpretable, we propose an interpretable NNTree obtained through self-organized learning of the features. We show through experiments that NNTrees built from the training data after self-organized learning are equally as good as those obtained from the original data. Further, for the databases we used, the number of quantization points in each dimension is usually less than 10. This means that 3 or 4 binary inputs are enough to represent each continuous input, and thus the NNTrees so obtained are much more interpretable.
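As a rough sketch of the arithmetic behind these figures (the symbols $n$, $b$, and $k$ below are introduced here only for illustration, not taken from the paper): an ENN with $n$ continuous inputs, each quantized to $k$ levels and encoded with $b = \lceil \log_2 k \rceil$ bits, has a binary input space of size $2^{nb}$, which is the number of cases an exhaustive interpretation must consider:

\[
n = 4,\; b = 16 \;\Rightarrow\; 2^{nb} = 2^{64} \approx 1.8 \times 10^{19},
\]
\[
k \le 10 \;\Rightarrow\; b = \lceil \log_2 10 \rceil = 4 \;\Rightarrow\; 2^{nb} = 2^{16} = 65{,}536.
\]

This is why reducing each dimension to fewer than 10 quantization points, as reported above, shrinks the interpretation problem from an intractable size to a readily enumerable one.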