
Local unimodal sampling: Difference between revisions


Revision as of 17:26, 22 February 2011

Local unimodal sampling (LUS) is an optimization procedure that can be used on functions that are not continuous. Such optimization methods are also known as direct-search, derivative-free, purely primal, or black-box methods.

LUS works by having a single position in the search-space and moving to a new position whenever doing so improves the fitness or cost function. New positions are sampled from a neighbourhood of the current position using a uniform distribution. The sampling-range is initially the full search-space and decreases exponentially during optimization.[1]

Motivation

[Figure: When the current position x is far from the optimum, the probability of finding an improvement through uniform random sampling is 1/2.]
[Figure: As the optimum is approached, the probability of finding further improvements through uniform sampling decreases towards zero if the sampling-range d is kept fixed.]

When a fixed sampling-range is used to randomly sample the search-space of a unimodal function, the probability of finding improved positions decreases as the optimum is approached, because a shrinking portion of the sampling-range yields improved fitness (see the figures for the single-dimensional case). Hence, the sampling-range must be decreased somehow. Pattern search works well for optimizing unimodal functions, and its halving of the sampling-range was translated into a formula that has a similar effect when sampling from a uniform distribution.

Algorithm

Let f: ℝⁿ → ℝ be the fitness or cost function which must be minimized. Let x ∈ ℝⁿ designate a position or candidate solution in the search-space. The LUS algorithm can then be described as:

  • Initialize x ~ U(b_lo, b_up) with a random uniform position in the search-space, where b_lo and b_up are the lower and upper boundaries, respectively.
  • Set the initial sampling range to cover the entire search-space: d = b_up − b_lo
  • Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
    • Pick a random vector a ~ U(−d, d)
    • Add this to the current position x to create the new potential position y = x + a
    • If f(y) < f(x) then move to the new position by setting x = y, otherwise decrease the sampling-range by multiplying with factor q (see below): d = q·d
  • Now x holds the best-found position. A code sketch of this loop follows below.
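
The following is a minimal sketch of this loop in Python with NumPy. The function name lus, the fixed-iteration termination criterion, and the clipping of candidate positions to the search-space boundaries are illustrative assumptions; the article does not specify how out-of-bounds samples are handled.

    import numpy as np

    def lus(f, b_lo, b_up, max_iter=10000, alpha=1/3, rng=None):
        """Minimize f over the box [b_lo, b_up] by local unimodal sampling."""
        rng = np.random.default_rng() if rng is None else rng
        b_lo, b_up = np.asarray(b_lo, float), np.asarray(b_up, float)
        n = b_lo.size                       # dimensionality of the search-space
        q = 2.0 ** (-alpha / n)             # sampling-range decrease factor (see below)
        x = rng.uniform(b_lo, b_up)         # random uniform initial position
        fx = f(x)
        d = b_up - b_lo                     # initial range covers the full search-space
        for _ in range(max_iter):
            a = rng.uniform(-d, d)          # sample the neighbourhood uniformly
            y = np.clip(x + a, b_lo, b_up)  # candidate position (clipping is an assumption)
            fy = f(y)
            if fy < fx:
                x, fx = y, fy               # move on improvement
            else:
                d = q * d                   # otherwise shrink the sampling-range
        return x, fx

For example, lus(lambda v: np.sum(v**2), [-5.0, -5.0], [5.0, 5.0]) minimizes the two-dimensional sphere function and returns a position near the origin.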

Sampling-range decrease factor

The factor q for exponentially decreasing the sampling-range is defined by: q = 2^(−α/n), where n is the dimensionality of the search-space and α is a user-adjustable parameter. Setting 0 < α < 1 causes slower decrease of the sampling-range and setting α > 1 causes more rapid decrease of the sampling-range. Typically it is set to α = 1/3.

Note that decreasing the sampling-range n times by factor q results in an overall decrease of qⁿ = 2^(−α), and for α = 1 this would mean a halving of the sampling-range.
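
As a quick numerical check of this relationship, in the same Python register as the sketch above (the particular values of n and α are chosen only for illustration):

    # q**n == 2**(-alpha): the overall decrease after n range-reductions.
    n = 2
    for alpha in (1/3, 1.0):
        q = 2.0 ** (-alpha / n)
        print(alpha, round(q, 4), round(q ** n, 4))
    # alpha = 1/3: q ≈ 0.8909 and q**n ≈ 0.7937 (= 2**(-1/3))
    # alpha = 1.0: q ≈ 0.7071 and q**n == 0.5, the halving noted above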

References

  1. Pedersen, M.E.H. (2010). Tuning & Simplifying Heuristical Optimization (PDF) (PhD thesis). University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group.