Local unimodal sampling

Article snapshot taken from Wikipedia under the Creative Commons Attribution-ShareAlike license.
'''Local unimodal sampling (LUS)''' is an optimization procedure that can be used on functions that need not be continuous or differentiable. Such optimization methods are also known as direct-search, derivative-free, purely primal, or black-box methods.

LUS maintains a single position in the search-space and moves to a new position whenever that move improves the fitness or cost function. New positions are sampled from a neighbourhood of the current position using a uniform distribution. The sampling-range initially covers the full search-space and decreases exponentially during optimization.<ref name=pedersen08thesis/>

== Motivation ==


When a fixed sampling-range is used for randomly sampling the search-space of a unimodal function, the probability of finding improved positions decreases as the optimum is approached, because a shrinking portion of the sampling-range yields improved fitness. The sampling-range must therefore be decreased somehow. The bisection method works well for optimizing unimodal functions, and its halving of the sampling-range was translated into a formula with a similar effect when sampling from a uniform distribution.

== Algorithm ==

Let ''f'':&nbsp;{{Unicode|&#x211D;}}<sup>''n''</sup>&nbsp;→ {{Unicode|&#x211D;}} be the fitness or cost function which must be minimized. Let '''x'''&nbsp;∈ {{Unicode|&#x211D;}}<sup>''n''</sup> designate a position or candidate solution in the search-space. The LUS algorithm can then be described as:

* Initialize '''x'''&nbsp;~&nbsp;''U''('''b<sub>lo</sub>''','''b<sub>up</sub>''') with a random, uniformly distributed position in the search-space, where '''b<sub>lo</sub>''' and '''b<sub>up</sub>''' are the lower and upper boundaries, respectively.
* Set the initial sampling range to cover the entire search-space: '''d'''&nbsp;=&nbsp;'''b<sub>up</sub>'''&nbsp;&minus;&nbsp;'''b<sub>lo</sub>'''
* Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
** Pick a random vector '''a'''&nbsp;~&nbsp;''U''(&minus;'''d''',&nbsp;'''d''')
** Add this to the current position '''x''' to create the new potential position '''y'''&nbsp;=&nbsp;'''x'''&nbsp;+&nbsp;'''a'''
** If (''f''('''y''')&nbsp;<&nbsp;''f''('''x''')) then move to the new position by setting '''x'''&nbsp;=&nbsp;'''y''', otherwise decrease the sampling-range by multiplying with factor&nbsp;''q'' (see below): '''d'''&nbsp;=&nbsp;''q''&nbsp;'''d'''
* Now '''x''' holds the best-found position.
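The steps above can be sketched in Python as follows. The function name `lus_minimize` and the clipping of candidate positions to the search-space boundaries are illustrative assumptions; the article does not specify how out-of-bounds samples are handled.

```python
import numpy as np

def lus_minimize(f, b_lo, b_up, alpha=1/3, max_iter=1000, rng=None):
    """Sketch of local unimodal sampling: minimize f over the box [b_lo, b_up]."""
    rng = np.random.default_rng() if rng is None else rng
    b_lo = np.asarray(b_lo, dtype=float)
    b_up = np.asarray(b_up, dtype=float)
    n = b_lo.size
    q = 2.0 ** (-alpha / n)            # sampling-range decrease factor
    x = rng.uniform(b_lo, b_up)        # random initial position
    fx = f(x)
    d = b_up - b_lo                    # initial sampling range: whole search-space
    for _ in range(max_iter):
        a = rng.uniform(-d, d)         # random vector from current neighbourhood
        y = np.clip(x + a, b_lo, b_up) # candidate position (clipping is an assumption)
        fy = f(y)
        if fy < fx:
            x, fx = y, fy              # improvement: move to the new position
        else:
            d = q * d                  # no improvement: shrink the sampling-range
    return x, fx
```

For example, minimizing the sphere function `f(x) = sum(x**2)` over `[-5, 5]^2` drives the best-found fitness toward zero as the sampling-range contracts.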

== Sampling-range decrease factor ==

The factor ''q'' for exponentially decreasing the sampling-range is defined by: ''q''&nbsp;=&nbsp;2<sup>&minus;''α''/''n''</sup> where ''n'' is the dimensionality of the search-space and ''α'' is a user-adjustable parameter. Setting 0&nbsp;&lt;&nbsp;''α''&nbsp;&lt;&nbsp;1 causes slower decrease of the sampling-range, and ''α''&nbsp;&gt;&nbsp;1 causes more rapid decrease. Typically, ''α''&nbsp;=&nbsp;1/3.

Note that decreasing the sampling-range ''n'' times by factor ''q'' results in an overall decrease of ''q''<sup>''n''</sup>&nbsp;=&nbsp;2<sup>&minus;''α''</sup> and for ''α''&nbsp;=&nbsp;1 this would mean a halving of the sampling-range.
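As a worked example of the relation above, for a two-dimensional search-space with the typical setting ''α'' = 1/3:

```python
# Sampling-range decrease factor q = 2**(-alpha/n) for n = 2, alpha = 1/3.
n, alpha = 2, 1/3
q = 2 ** (-alpha / n)    # per-step decrease factor, about 0.8909
overall = q ** n         # decrease after n steps: equals 2**(-alpha), about 0.7937
```

Each failed sample shrinks the range to about 89% of its size, and ''n'' = 2 such shrinks give the overall factor 2<sup>&minus;1/3</sup>; with ''α'' = 1, ''n'' shrinks would halve the range exactly.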

== See also ==

* Random optimization is a related family of optimization methods which sample from a normal distribution instead of a uniform distribution.
* Random search is a related family of optimization methods which sample from a hypersphere instead of a uniform distribution.
* Pattern search takes steps along the axes of the search-space using exponentially decreasing step sizes.

== References ==

{{reflist|refs=
<ref name=pedersen08thesis>
{{cite book
|type=PhD thesis
|title=Tuning & Simplifying Heuristical Optimization
|url=http://www.hvass-labs.org/people/magnus/thesis/pedersen08thesis.pdf
|last=Pedersen
|first=M.E.H.
|year=2010
|publisher=University of Southampton, School of Engineering Sciences, Computational Engineering and Design Group
}}
</ref>
}}

{{Optimization algorithms}}
