The Role of Communication Messages and Explicit Niching in Distributed Evolutionary Multi-Objective Optimization


Abstract

Dealing with optimization problems with more than one objective has been an important research area in evolutionary computation. The class of multi-objective problems (MOPs) is an important one because multi-objectivity arises in almost all aspects of human life, where each problem usually involves several compromises. Multi-objective evolutionary algorithms (MOEAs) have been applied widely to many real-world problems. This is because (1) they work with a population throughout the search, which offers more flexible control in finding a set of efficient solutions, and (2) real-world problems are usually black-box problems for which an explicit mathematical representation is unavailable. However, MOEAs usually require a large amount of computational effort, which is a substantial challenge in bringing MOEAs into practice. This thesis primarily aims to address this challenge through an investigation into issues of scalability and the balance between exploration and exploitation. These have been outstanding research challenges, not only for MOEAs, but also for evolutionary algorithms in general. A distributed framework of local models using explicit niching is introduced as an overarching approach to solving multi-objective optimization problems. This framework is used to address a two-part question: first, the role of communication messages, and second, the role of explicit niching, in distributed evolutionary multi-objective optimization. The concept behind the framework of local models is for the search to be conducted locally in different areas of the decision search space, which allows the local models to be distributed across different processing nodes. During the optimization process, local models interact (exchange messages) with each other using rules inspired by Particle Swarm Optimization (PSO). Hence, the hypothesis of this work is that running several search engines simultaneously in different local areas is better for exploiting local information, while exchanging messages among those diverse engines can provide a better exploration strategy. Within this framework, although the models work locally, they gain access to some global knowledge of one another through these interactions. In order to validate the proposed framework, a series of experiments on a wide range of test problems was conducted. These experiments were motivated by the following studies, which, taken together, contribute to the verification of our hypothesis: (1) studying the performance of the framework under different aspects such as initialization, convergence, diversity, scalability, and sensitivity to the framework's parameters, (2) investigating interleaving guidance in both the decision and objective spaces, (3) applying local models using estimation of distributions, (4) evaluating local models in noisy environments, and (5) examining the role of communication messages and explicit niching in distributed computing.
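As an illustration only (not the algorithm developed in this thesis), the sketch below shows one way a PSO-inspired rule could steer a local model's region of the decision space using a message received from another model; the class and parameter names (LocalModel, inertia, c_local, c_global) are assumptions made for the example.

```python
# Illustrative sketch only: a minimal, hypothetical PSO-inspired update rule for
# coordinating local models. Names and coefficients are assumptions for
# illustration, not the thesis's implementation.
import numpy as np

class LocalModel:
    """A local model searching one region of the decision space."""
    def __init__(self, centre, rng):
        self.centre = np.asarray(centre, dtype=float)   # current centre of the niche
        self.velocity = np.zeros_like(self.centre)      # PSO-style velocity of the niche
        self.best_centre = self.centre.copy()           # best region found by this model
        self.rng = rng

    def receive_message(self, neighbour_best_centre,
                        inertia=0.7, c_local=1.5, c_global=1.5):
        """Move this model's region using a PSO-like rule driven by the
        exchanged message (here: another model's best centre)."""
        r1, r2 = self.rng.random(2)
        self.velocity = (inertia * self.velocity
                         + c_local * r1 * (self.best_centre - self.centre)
                         + c_global * r2 * (neighbour_best_centre - self.centre))
        self.centre = self.centre + self.velocity

# Tiny usage example: two local models in a 2-D decision space; model A is
# nudged towards the region reported in a message from model B.
rng = np.random.default_rng(0)
model_a = LocalModel(centre=[0.0, 0.0], rng=rng)
model_b_best = np.array([1.0, 1.0])   # "message" received from another node
model_a.receive_message(model_b_best)
print(model_a.centre)
```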
The experimental results showed that: (1) the use of local models increases MOEAs' chances of improving their performance in finding the Pareto optimal front, (2) interaction strategies using PSO rules are suitable for controlling local models, and they can also be coupled with specialization to refine the obtained non-dominated set, (3) estimation-of-distribution approaches improve when coupled with local models, (4) local models work well in noisy environments, and (5) the communication cost in distributed systems with local models can be reduced significantly by using summary information (such as the direction information naturally determined by local models) as the communication messages, compared with conventional approaches that communicate descriptive information about individuals. In summary, the proposed framework is a successful step towards efficient distributed MOEAs.
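As a hedged illustration of point (5), the following sketch contrasts the size of a hypothetical message carrying a whole population of individuals with one carrying only a summary direction vector; the encodings and the way the direction is computed here are assumptions, not the thesis's protocol.

```python
# Illustrative sketch only: comparing the size of two hypothetical message types
# that a processing node might broadcast. Sending one summary direction vector is
# far cheaper than serialising every individual; the exact encodings are
# assumptions made for this example.
import pickle
import numpy as np

rng = np.random.default_rng(1)
population = rng.random((200, 30))          # 200 individuals, 30 decision variables

# Conventional message: descriptive information of individuals (whole population).
descriptive_msg = pickle.dumps(population)

# Summary message: a single direction vector derived from the local model
# (placeholder computation standing in for a real direction estimate).
direction = population.mean(axis=0) - 0.5
summary_msg = pickle.dumps(direction)

print(len(descriptive_msg), "bytes vs", len(summary_msg), "bytes")
```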