Using Gradient Based Information to Build Hybrid Multi-objective Evolutionary Algorithms


Abstract

Over the last decades, evolutionary algorithms have become very popular for solving multi-objective optimization problems (MOPs). Several multi-objective evolutionary algorithms (MOEAs) have been developed to solve MOPs with successful results. A feature of these algorithms is that they do not exploit concrete information about the continuity or differentiability of the objective functions, which is considered problem-domain information. One question that arises when seeking more efficient MOEAs is whether including this mathematical information during the MOEA execution is effective. In particular, we are interested in exploiting the gradient information of the objective functions during the evolutionary search. In this thesis, the inclusion of gradient-based local searchers into MOEAs is presented. An in-depth study of gradient-based search directions is included, as well as proposals for diverse types of hybridization. This coupling has two aims: to improve the performance of these stochastic algorithms, and to efficiently refine their solution sets. Hybrid gradient-based MOEAs are built and tested in this work on widely used benchmark MOPs. The numerical results are analyzed and discussed; conclusions and extensions toward promising future research paths are also included.
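To illustrate the kind of hybridization the abstract refers to, the following is a minimal sketch, not the thesis's actual method: a candidate solution (such as one produced by a MOEA's variation operators) is refined by descending the gradient of a weighted sum of the objectives. The problem used here (Schaffer's bi-objective function, with Pareto set [0, 2]), the weights, and the function name `gradient_local_search` are all illustrative assumptions.

```python
# Illustrative bi-objective problem (Schaffer's function): minimize both
# f1(x) = x^2 and f2(x) = (x - 2)^2; the Pareto set is the interval [0, 2].
def f1(x): return x ** 2
def f2(x): return (x - 2.0) ** 2
def grad_f1(x): return 2.0 * x
def grad_f2(x): return 2.0 * (x - 2.0)

def gradient_local_search(x, w1=0.5, w2=0.5, step=0.1, iters=50):
    """Refine a candidate x by gradient descent on a weighted sum of the
    objectives -- one simple way a MOEA offspring could be polished.
    (Hypothetical helper, not the method studied in the thesis.)"""
    for _ in range(iters):
        # Descent direction of the scalarized objective w1*f1 + w2*f2.
        g = w1 * grad_f1(x) + w2 * grad_f2(x)
        x = x - step * g
    return x

# Refine a poor candidate, as a MOEA mutation might produce:
x0 = 5.0
x_refined = gradient_local_search(x0)
# With equal weights the scalarized optimum is x = 1, which the
# refined point approaches; it now lies on the Pareto set [0, 2].
```

In an actual hybrid MOEA, such a local searcher would be invoked on selected individuals between generations, with the weights (or a more sophisticated multi-objective descent direction) chosen to preserve diversity along the Pareto front.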