MOEA/D with opposition-based learning for multiobjective optimization problems


Abstract

Multiobjective evolutionary algorithm based on decomposition (MOEA/D) has attracted a great deal of attention and achieved considerable success in the field of evolutionary multiobjective optimization. It converts a multiobjective optimization problem (MOP) into a set of scalar optimization subproblems and then uses an evolutionary algorithm (EA) to optimize these subproblems simultaneously. However, there is a great deal of randomness in MOEA/D. Researchers in the fields of evolutionary algorithms, reinforcement learning, and neural networks have reported that considering randomness and opposition simultaneously has an advantage over pure randomness. A scheme called opposition-based learning (OBL) has been proposed in the machine learning field to exploit this idea. In this paper, OBL is integrated into the framework of MOEA/D to accelerate its convergence; the proposed approach is therefore called opposition-based learning MOEA/D (MOEA/D-OBL). Compared with MOEA/D, MOEA/D-OBL uses an opposition-based initial population and an opposition-based learning strategy to generate offspring during the evolutionary process. It is compared with its parent algorithm MOEA/D on four representative kinds of MOPs and on many-objective optimization problems. Experimental results indicate that MOEA/D-OBL outperforms or performs similarly to MOEA/D. Moreover, the sensitivity to the parameter of the generalized opposite point and to the probability of applying OBL is experimentally investigated.
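To make the opposition idea concrete, the sketch below is an illustrative example only, not the paper's exact procedure: the function names, the scalar toy objective, and the best-half selection are assumptions. It shows the standard OBL construction of an opposition-based initial population, where each random point x in a box [a, b] is paired with its opposite a + b - x and the better half of the combined set is retained.

```python
import numpy as np

def opposite(x, lower, upper):
    """Opposite point of x in the box [lower, upper]: x_opp = lower + upper - x."""
    return lower + upper - x

def opposition_based_init(obj, n_pop, lower, upper, rng=None):
    """Build an initial population from n_pop random points and their opposites,
    keeping the n_pop individuals with the smallest objective values.

    `obj` is a placeholder scalar objective used only for this sketch; in a
    decomposition-based setting each subproblem would use its own scalarizing
    function instead.
    """
    rng = np.random.default_rng() if rng is None else rng
    dim = len(lower)
    pop = lower + rng.random((n_pop, dim)) * (upper - lower)   # random points
    opp = opposite(pop, lower, upper)                          # their opposites
    both = np.vstack([pop, opp])
    scores = np.apply_along_axis(obj, 1, both)
    best = np.argsort(scores)[:n_pop]                          # keep better half
    return both[best]

# Usage with a toy sphere objective on [0, 1]^10 (illustrative only).
lower, upper = np.zeros(10), np.ones(10)
init_pop = opposition_based_init(lambda x: float(np.sum(x**2)), 100, lower, upper)
print(init_pop.shape)  # (100, 10)
```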