
Iterated logarithm

Inverse function to a tower of powers. For the theorem in probability theory, see Law of the iterated logarithm.
Figure 1. Demonstrating log* 4 = 2 for the base-e iterated logarithm. The value of the iterated logarithm can be found by "zig-zagging" on the curve y = log_b(x) from the input n to the interval [0, 1]. In this case, b = e. The zig-zagging entails starting from the point (n, 0) and iteratively moving to (n, log_b(n)), to (0, log_b(n)), to (log_b(n), 0).

In computer science, the iterated logarithm of n, written log* n (usually read "log star"), is the number of times the logarithm function must be iteratively applied before the result is less than or equal to 1. The simplest formal definition is the result of this recurrence relation:

\log^* n := \begin{cases} 0 & \text{if } n \leq 1; \\ 1 + \log^*(\log n) & \text{if } n > 1 \end{cases}
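For concreteness, the recurrence translates directly into a short recursive function. The sketch below is in Python and uses the natural logarithm, matching the base-e definition; the function name log_star is chosen here for illustration.

import math

def log_star(n: float) -> int:
    """Base-e iterated logarithm: the number of times log must be
    applied before the result is at most 1, per the recurrence above."""
    if n <= 1:
        return 0
    return 1 + log_star(math.log(n))

print(log_star(4))  # 2, as in Figure 1: ln(4) ~ 1.39, ln(1.39) ~ 0.33 <= 1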

In computer science, lg* is often used to indicate the binary iterated logarithm, which iterates the binary logarithm (with base 2) instead of the natural logarithm (with base e). Mathematically, the iterated logarithm is well defined for any base greater than e^{1/e} ≈ 1.444667, not only for base 2 and base e. The "super-logarithm" function slog_b(n) is "essentially equivalent" to the base-b iterated logarithm (although differing in minor details of rounding) and forms an inverse to the operation of tetration.
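As a sketch of the general definition, the loop below computes the iterated logarithm for an arbitrary base b (assumed greater than e^{1/e} so the iteration terminates); the function name iterated_log is chosen here for illustration. Floating-point rounding can matter right at interval boundaries; an exact integer method for base 2 is sketched after the table below.

import math

def iterated_log(n: float, b: float = 2.0) -> int:
    """Iterated logarithm of n to base b: b = 2 gives lg*, b = math.e gives log*."""
    count = 0
    while n > 1:
        n = math.log(n, b)
        count += 1
    return count

print(iterated_log(100, 2))     # 4, since 100 lies in (16, 65536]
print(iterated_log(4, math.e))  # 2, matching Figure 1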

Analysis of algorithms

The iterated logarithm is useful in the analysis of algorithms and computational complexity, appearing in the time and space complexity bounds of some algorithms, such as Devillers' randomized O(n log* n) algorithms for problems in computational geometry, Alon and Azar's parallel algorithm for finding an approximate maximum, and Cole and Vishkin's deterministic coin tossing technique for optimal parallel list ranking.

The iterated logarithm grows at an extremely slow rate, much slower than the logarithm itself or repeated applications of it. This is because tetration grows much faster than the iterated exponential:

{}^{y}b = \underbrace{b^{b^{\cdot^{\cdot^{b}}}}}_{y} \gg \underbrace{b^{b^{\cdot^{\cdot^{b^{x}}}}}}_{n}

so the inverse grows much more slowly: \log_b^* x \ll \log_b^n x.

For all values of n relevant to counting the running times of algorithms implemented in practice (i.e., n ≤ 2^65536, which is far more than the estimated number of atoms in the known universe), the iterated logarithm with base 2 has a value no more than 5.

The base-2 iterated logarithm
x                   lg* x
(−∞, 1]             0
(1, 2]              1
(2, 4]              2
(4, 16]             3
(16, 65536]         4
(65536, 2^65536]    5

Higher bases give smaller iterated logarithms.
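Because lg* is (up to rounding) the inverse of a power tower of twos, it can be computed exactly for integers by growing such a tower until it reaches n, which avoids the floating-point caveat mentioned earlier. The sketch below uses this to spot-check the table boundaries and the claim that lg* stays at 5 all the way up to 2^65536; the name lg_star is chosen here for illustration.

def lg_star(n: int) -> int:
    """Base-2 iterated logarithm of a positive integer: the smallest k
    such that a power tower of k twos is at least n."""
    tower, k = 1, 0
    while tower < n:
        tower = 2 ** tower
        k += 1
    return k

# lg* steps up by one just past each interval boundary in the table above.
for boundary in (1, 2, 4, 16, 65536):
    print(boundary, lg_star(boundary), lg_star(boundary + 1))

print(lg_star(2 ** 65536))  # 5, the largest value for any practically relevant n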

Other applications

The iterated logarithm is closely related to the generalized logarithm function used in symmetric level-index arithmetic. The additive persistence of a number, the number of times one must replace the number by the sum of its digits before reaching its digital root, is O(log* n).
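For instance, the additive persistence of 199 is 3, since 199 → 19 → 10 → 1. A minimal sketch of that computation in Python (the helper name additive_persistence is chosen here for illustration):

def additive_persistence(n: int) -> int:
    """Number of times n must be replaced by its digit sum before
    reaching a single digit (its digital root)."""
    steps = 0
    while n >= 10:
        n = sum(int(d) for d in str(n))
        steps += 1
    return steps

print(additive_persistence(199))  # 3: 199 -> 19 -> 10 -> 1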

In computational complexity theory, Santhanam shows that the computational resources DTIME (computation time for a deterministic Turing machine) and NTIME (computation time for a nondeterministic Turing machine) are distinct up to n√(log* n).


References

  1. Cormen, Thomas H.; Leiserson, Charles E.; Rivest, Ronald L.; Stein, Clifford (2009). "The iterated logarithm function, in Section 3.2: Standard notations and common functions". Introduction to Algorithms (3rd ed.). MIT Press and McGraw-Hill. pp. 58–59. ISBN 0-262-03384-4.
  2. Furuya, Isamu; Kida, Takuya (2019). "Compaction of Church numerals". Algorithms. 12 (8): 159. doi:10.3390/a12080159. MR 3998658.
  3. Devillers, Olivier (March 1992). "Randomization yields simple O(n log* n) algorithms for difficult Ω(n) problems" (PDF). International Journal of Computational Geometry & Applications. 2 (1): 97–111. arXiv:cs/9810007. doi:10.1142/S021819599200007X. MR 1159844. S2CID 60203.
  4. Alon, Noga; Azar, Yossi (April 1989). "Finding an approximate maximum" (PDF). SIAM Journal on Computing. 18 (2): 258–267. doi:10.1137/0218017. MR 0986665.
  5. Cole, Richard; Vishkin, Uzi (July 1986). "Deterministic coin tossing with applications to optimal parallel list ranking" (PDF). Information and Control. 70 (1): 32–53. doi:10.1016/S0019-9958(86)80023-7. MR 0853994.
  6. Santhanam, Rahul (2001). "On separators, segregators and time versus space" (PDF). Proceedings of the 16th Annual IEEE Conference on Computational Complexity, Chicago, Illinois, USA, June 18-21, 2001. IEEE Computer Society. pp. 286–294. doi:10.1109/CCC.2001.933895. ISBN 0-7695-1053-1.