Convergence rates of stochastic global optimisation algorithms with backtracking : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Statistics at Massey University
dc.contributor.author | Alexander, D.L.J. | |
dc.date.accessioned | 2010-09-05T21:15:33Z | |
dc.date.available | NO_RESTRICTION | en_US |
dc.date.available | 2010-09-05T21:15:33Z | |
dc.date.issued | 2004 | |
dc.description.abstract | A useful measure of quality of a global optimisation algorithm such as simulated annealing is the length of time it must be run to reach a global optimum within a certain accuracy. Such a performance measure assists in choosing and tuning algorithms. This thesis proposes an approach to obtaining such a measure through successive approximation of a generic stochastic global optimisation algorithm with a sequence of stochastic processes culminating in backtracking adaptive search. The overall approach is to approximate the progress of an optimisation algorithm with that of a model process, backtracking adaptive search. The known convergence rate of the model then provides an estimator of the unknown convergence rate of the original algorithm. Parameters specifying this model are chosen based on observation of the optimisation algorithm. The optimisation algorithm may first be approximated with a time-inhomogeneous Markovian process defined on the problem range. The distribution of the number of iterations to convergence for this averaged range process is shown to be identical with that of the original process. This process is itself approximated by a time-homogeneous Markov process in the range, the asymptotic averaged range process. This approximation is defined for all Markovian optimisation algorithms and a weak condition under which its convergence time closely matches that of the original algorithm is developed. The asymptotic averaged range process is of the same form as backtracking adaptive search, the final stage of approximation. Backtracking adaptive search is an optimisation algorithm which generalises pure adaptive search and hesitant adaptive search. In this thesis the distribution of the number of iterations for which the algorithm runs in order to reach a sufficiently extreme objective function level is derived. 
Several examples of backtracking adaptive search on finite problems are also presented, including special cases that have received attention in the literature. Computational results of the entire approximation framework are reported for several examples. The method can be applied to any optimisation algorithm to obtain an estimate of the time required to obtain solutions of a certain quality. Directions for further work in order to improve the accuracy of such estimates are also indicated. | en_US |
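The abstract describes backtracking adaptive search as a Markov process on objective-function levels that improves, hesitates, or backtracks at each iteration, and derives the distribution of the number of iterations needed to reach a sufficiently extreme level. As a rough illustration only, the following sketch simulates such a process on a finite set of levels and estimates the expected iterations to convergence by Monte Carlo. The transition probabilities `p_improve` and `p_back`, and the uniform sampling over better/worse levels, are illustrative assumptions and not the specific model derived in the thesis.

```python
import random

def backtracking_adaptive_search(levels, target, p_improve=0.6, p_back=0.1,
                                 seed=None, max_iter=100_000):
    """Simulate a simple backtracking-adaptive-search-style process on a
    finite, sorted set of objective levels (minimisation).

    At each iteration the process moves to a uniformly chosen strictly
    better level with probability p_improve, backtracks to a uniformly
    chosen worse level with probability p_back, and otherwise hesitates
    (stays put).  Returns the number of iterations taken to reach a level
    no worse than `target`.  These dynamics are an illustrative stand-in
    for the model analysed in the thesis, not its exact specification.
    """
    rng = random.Random(seed)
    levels = sorted(levels)
    current = levels[-1]  # start from the worst level
    for n in range(1, max_iter + 1):
        better = [v for v in levels if v < current]
        worse = [v for v in levels if v > current]
        u = rng.random()
        if better and u < p_improve:
            current = rng.choice(better)       # adaptive (improving) step
        elif worse and u < p_improve + p_back:
            current = rng.choice(worse)        # backtracking step
        # otherwise: hesitate at the current level
        if current <= target:
            return n
    return max_iter  # did not converge within the iteration budget

# Monte Carlo estimate of the expected iterations to reach the optimum
runs = [backtracking_adaptive_search(range(10), target=0, seed=s)
        for s in range(200)]
print(sum(runs) / len(runs))
```

Under such a model, estimating an optimisation algorithm's convergence time amounts to fitting the improve/backtrack probabilities from observed runs of the algorithm and then using the model's known iteration distribution, which is the overall strategy the abstract outlines.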
dc.identifier.uri | http://hdl.handle.net/10179/1613 | |
dc.language.iso | en | en_US |
dc.publisher | Massey University | en_US |
dc.rights | The Author | en_US |
dc.subject | Convergence rate | en_US |
dc.subject | Markov process | en_US |
dc.subject.other | Fields of Research::230000 Mathematical Sciences::230200 Statistics::230202 Stochastic analysis and modelling | en_US |
dc.title | Convergence rates of stochastic global optimisation algorithms with backtracking : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Statistics at Massey University | en_US |
dc.type | Thesis | en_US |
massey.contributor.author | Alexander, D.L.J. | |
thesis.degree.discipline | Statistics | en_US |
thesis.degree.grantor | Massey University | en_US |
thesis.degree.level | Doctoral | en_US |
thesis.degree.name | Doctor of Philosophy (Ph.D.) | en_US |
Files
License bundle
1 - 1 of 1
- Name: license.txt
- Size: 895 B
- Description: Item-specific license agreed to upon submission