Parallel Optimisation Algorithms for Continuous, Non-Linear Numerical Simulations


In computational science and engineering there is a growing number of increasingly sophisticated, rigorous and realistic numerical simulations of physical systems. Detailed knowledge of a particular area of inquiry is expressed in mathematical terms, realised in computer programs and run on increasingly powerful computer systems. Such simulations are now commonplace across a growing range of industrial design areas. Users of these models often want to understand their behaviour in response to a variety of input stimuli, bounded by various operational parameters. Central to the engineering design process is the need to find the combination of design parameters that minimises (or maximises) some design objective. Optimisation programs apply mathematical techniques and heuristics to guide a computer in choosing trial parameter sets itself, in an attempt to satisfy the expressed design objective.

The more realistic numerical simulations become, the more computational resources they demand. Many consume hours, or days, of computing time on supercomputers to deliver a single trial solution, and optimisation algorithms invariably require the model to be run more than once, often many times. In the absence of any means to further reduce the computational cost of a single run, there are two possible responses to this dilemma:

1. reduce the number of model evaluations required by the optimisation algorithm, or
2. reduce the time the whole collection of model evaluations takes by running as many of them as possible at the same time.

The research in this thesis is directed toward the second approach: developing methods that use parallel computing to reduce total optimisation time by exploiting concurrency within the optimisation algorithms developed. For generality, it assumes that the numerical simulations to which these methods may be applied have real-valued parameters, i.e. they are continuous, and that they may be non-linear in nature.
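The second response can be sketched in miniature: a batch of trial parameter sets is evaluated concurrently, so that the wall-clock time approaches that of a single model run rather than the sum of all runs. The sketch below is illustrative only: it uses a cheap stand-in objective (a sphere function, not one of the numerical simulations discussed here) and Python's thread pool, where a real system would dispatch expensive simulation runs to separate processors. All names (`objective`, `evaluate_batch`) are assumptions for the example.

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    # Hypothetical stand-in for an expensive simulation run:
    # the sphere function, minimised at the origin.
    return sum(xi * xi for xi in x)

def evaluate_batch(trials, workers=4):
    """Evaluate a batch of trial parameter sets concurrently."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(objective, trials))

# Three trial parameter sets evaluated at the same time;
# the best one is selected afterwards by the optimiser.
trials = [(1.0, 2.0), (0.0, 0.0), (-3.0, 4.0)]
costs = evaluate_batch(trials)
best = trials[costs.index(min(costs))]
```

The point of the sketch is only the shape of the computation: the optimisation algorithm proposes several trial points per iteration instead of one, and the evaluations run side by side.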
The following contributions are described:

* The idea of developing a set of "sandbox" case studies for effective testing of optimisation algorithms is presented and established as a feasible alternative to the use of artificial test functions. An initial set of problems with varying characteristics is also presented.
* A parallel implementation of the quasi-Newton gradient method with the BFGS update is presented, and its efficacy is demonstrated in comparison to a corresponding sequential algorithm and to the widely used method of simulated annealing.
* The use of a method of parallel line search with the Nelder-Mead simplex algorithm is described, and its advantages over the original algorithm, in both speed and reliability, are clearly shown.
* New direct search methods, the Reducing Set Concurrent Simplex (RSCS) algorithm and its line-searching variants, are presented, and their superior performance compared to a variety of direct search methods is demonstrated.
* A novel Evolutionary Programming algorithm using concepts of self-organised criticality, EPSOC, is presented and demonstrated to be superior in performance to a wide variety of gradient, direct search and stochastic methods on a set of test cases drawn from real-world problems. Evidence is presented of its potential for multi-objective optimisation using a novel implementation with multiple "virtual" archives.
* Methods of preconditioning optimisation problems to reduce the total time taken to achieve an optimal result are presented. Temporal preconditioning, based on the time behaviour of the numerical simulations, is demonstrated to yield substantial speedup.
* Some conclusions are drawn on the applicability of specific optimisation methods to different classes of real-world problems.
All of the methods described are implemented in the framework of a general-purpose optimisation toolset, Nimrod/O, to provide a sound basis for future work, easy adoption across a wide range of engineering design problems and potential commercial application.