Self-Similarity of Network Data Analysis

In computer networks, self-similarity is a feature of network data transfer dynamics. When modeling network data dynamics, traditional time series models such as the autoregressive moving average (ARMA) model are not appropriate, because these models provide only a finite number of parameters and hence capture interactions only within a finite time window, whereas network data usually have a long-range dependent temporal structure. A self-similar process is one way of modeling network data dynamics with such long-range correlation. This article defines and describes network data transfer dynamics in the context of a self-similar process. Properties of the process are shown, and methods are given for plotting and estimating the parameters that model the self-similarity of network data.

Definition

Let X be a weakly stationary (second-order stationary) process with mean μ, variance σ², and autocorrelation function γ(t). Assume that the autocorrelation function has the form γ(t) → t^(−β) L(t) as t → ∞, where 0 < β < 1 and L(t) is a slowly varying function at infinity, that is, lim_{t→∞} L(tx)/L(t) = 1 for all x > 0. For example, L(t) = const and L(t) = log(t) are slowly varying functions.
Let X_k^(m) = (1/m^H)(X_{km−m+1} + ⋯ + X_{km}), k = 1, 2, 3, …, denote the aggregated series over non-overlapping blocks of size m, for each positive integer m.
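
As a concrete illustration, the aggregation step is easy to express in code. The following is a minimal Python sketch (the function name and the use of NumPy are choices of this sketch, not part of the article):

import numpy as np

def aggregate(x, m, H):
    """Aggregated series X_k^(m): non-overlapping block sums of size m,
    scaled by m^(-H), as in the definition above."""
    k = len(x) // m                  # number of complete blocks
    return x[:k * m].reshape(k, m).sum(axis=1) / m ** H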

Exactly self-similar process

  • X is called an exactly self-similar process if there exists a self-similarity parameter H such that X_k^(m) has the same distribution as X. An example of an exactly self-similar process is Fractional Gaussian Noise (FGN) with 1/2 < H < 1.

Definition: Fractional Gaussian Noise (FGN)

X(t) = B_H(t + 1) − B_H(t), for all t ≥ 1, is called Fractional Gaussian Noise, where B_H(·) is a fractional Brownian motion.
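
The estimation methods described below are commonly validated on synthetic FGN, so a sampler is useful to have. The following Python sketch draws an FGN sample by the standard circulant-embedding (Davies–Harte) construction; this algorithm is not described in the article itself, and the function name is an assumption of the sketch.

import numpy as np

def fgn(n, H, rng=None):
    """Sample n points of Fractional Gaussian Noise with Hurst parameter H
    by circulant embedding of the Toeplitz covariance matrix."""
    rng = rng or np.random.default_rng()
    k = np.arange(n + 1)
    # Autocovariance of FGN: gamma(k) = 0.5*(|k+1|^2H - 2|k|^2H + |k-1|^2H)
    gamma = 0.5 * (np.abs(k + 1) ** (2 * H) - 2 * np.abs(k) ** (2 * H)
                   + np.abs(k - 1) ** (2 * H))
    # First row of the 2n x 2n circulant matrix embedding the covariance
    row = np.concatenate([gamma, gamma[-2:0:-1]])
    lam = np.fft.fft(row).real        # circulant eigenvalues (non-negative for FGN)
    lam = np.clip(lam, 0.0, None)     # guard against tiny negative round-off
    xi = rng.standard_normal(2 * n)
    # Apply the circulant square root to xi via the convolution theorem
    z = np.fft.ifft(np.sqrt(lam) * np.fft.fft(xi))
    return z.real[:n]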

Exactly second-order self-similar process

  • X is called an exactly second-order self-similar process if there exists a self-similarity parameter H such that X_k^(m) has the same variance and autocorrelation as X.

Asymptotically second-order self-similar process

  • X is called an asymptotically second-order self-similar process with self-similarity parameter H if γ^(m)(t) → (1/2)[(t + 1)^{2H} − 2t^{2H} + (t − 1)^{2H}] as m → ∞, for all t = 1, 2, 3, …

Related properties of self-similar processes

Long-range dependence (LRD)

Let X(t) be a weakly stationary (second-order stationary) process with mean μ and variance σ². The autocorrelation function (ACF) at lag t is given by γ(t) = cov(X(h), X(h + t))/σ² = E[(X(h) − μ)(X(h + t) − μ)]/σ².

Definition:

A weakly stationary process is said to be long-range dependent if Σ_{t=0}^{∞} |γ(t)| = ∞.

A process that satisfies γ(t) → t^(−β) L(t) as t → ∞ is said to have long-range dependence. The spectral density of a long-range dependent process follows a power law near the origin. Equivalently to γ(t) → t^(−β) L(t), X has long-range dependence if its spectral density f(w) = Σ_{t=0}^{∞} γ(t) e^{iwt} has the form w^(−γ) L(w) as w → 0, where 0 < γ < 1 and L is slowly varying at 0.

Slowly decaying variances

Let X^(m) = (1/m)(X_1 + ⋯ + X_m). When the autocorrelation function of a self-similar process satisfies γ(t) → t^(−β) L(t) as t → ∞, it also satisfies Var(X^(m)) → a m^(−β) as m → ∞, where a is a finite positive constant independent of m and 0 < β < 1.
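
Using the fgn and aggregate sketches above, this slow decay of the block-mean variance can be checked numerically (the sample size, H, and block sizes below are illustrative choices):

import numpy as np

x = fgn(2 ** 16, H=0.8)
for m in (16, 64, 256):
    xm = x[:len(x) // m * m].reshape(-1, m).mean(axis=1)   # block means
    # Var(X^(m)) should decay roughly like m^(-beta) with beta = 2 - 2H = 0.4
    print(m, xm.var())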

Estimating the self-similarity parameter H

R/S analysis

Assume that the underlying process X is Fractional Gaussian Noise. Consider the series X_1, …, X_n, and let Y_n = Σ_{i=1}^{n} X_i.

The sample variance of X_i is S²(n) = (1/n) Σ_{i=1}^{n} X_i² − (1/n²) Y_n².

Definition: R/S statistic

(R/S)(n) = (1/S(n)) [ max_{0≤t≤n} (Y_t − (t/n) Y_n) − min_{0≤t≤n} (Y_t − (t/n) Y_n) ]

If X is FGN, then E[(R/S)(n)] → C_H · n^H.
Consider fitting the regression model log (R/S)(n) = log C_H + H log n + ε_n, where ε_n ~ N(0, σ²).
In particular, for a time series of length N, divide the data into k groups each of size N/k and compute (R/S)(n) for each group.
Thus for each n we have k pairs of data (log n, log (R/S)(n)). There are k points for each n, so we can fit a regression model to estimate H more accurately. If the slope of the regression line is between 0.5 and 1, the process is self-similar.
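
In code, the blocked R/S procedure might look like the following Python sketch (the default block sizes and the function names are choices of this sketch):

import numpy as np

def rs_statistic(x):
    """R/S statistic of one block, following the definition above."""
    n = len(x)
    y = np.cumsum(x)                                   # partial sums Y_t
    dev = np.concatenate(([0.0], y - np.arange(1, n + 1) / n * y[-1]))
    r = dev.max() - dev.min()                          # rescaled range R(n)
    return r / x.std()                                 # divide by S(n)

def hurst_rs(x, block_sizes=(64, 128, 256, 512)):
    """Estimate H by regressing log (R/S)(n) on log n over several block sizes."""
    log_n, log_rs = [], []
    for n in block_sizes:
        for j in range(len(x) // n):                   # non-overlapping groups
            block = x[j * n:(j + 1) * n]
            log_n.append(np.log(n))
            log_rs.append(np.log(rs_statistic(block)))
    slope, _ = np.polyfit(log_n, log_rs, 1)            # slope is the H estimate
    return slope

As noted above, a fitted slope between 0.5 and 1 indicates self-similarity.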

Variance-time plot

The variance of the sample mean is given by Var(X̄_n) → c n^{2H−2} for some c > 0.
To estimate H, calculate the sample means X̄_1(k), X̄_2(k), …, X̄_{m_k}(k) of m_k sub-series of length k.
The overall mean is X̄(k) = (1/m_k) Σ_{i=1}^{m_k} X̄_i(k), and the sample variance is S²(k) = (1/(m_k − 1)) Σ_{i=1}^{m_k} (X̄_i(k) − X̄(k))².
The variance-time plot is obtained by plotting log S²(k) against log k, and a simple least-squares line can be fitted through the resulting points, ignoring the small values of k.

For large values of k, the points in the plot are expected to be scattered around a straight line with negative slope 2H − 2. For short-range dependence or independence among the observations, the slope of the straight line equals −1.
Self-similarity can be inferred when the estimated slope lies asymptotically between −1 and 0, and an estimate of the degree of self-similarity is given by Ĥ = 1 + (slope)/2.
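
A minimal Python sketch of the variance-time estimator, under the same hedges as above (the function name and default block sizes are illustrative):

import numpy as np

def hurst_variance_time(x, block_sizes=(16, 32, 64, 128, 256)):
    """Estimate H from the variance-time plot of log S^2(k) against log k."""
    log_k, log_var = [], []
    for k in block_sizes:
        m_k = len(x) // k
        means = x[:m_k * k].reshape(m_k, k).mean(axis=1)   # means of length-k sub-series
        log_k.append(np.log(k))
        log_var.append(np.log(means.var(ddof=1)))          # sample variance S^2(k)
    slope, _ = np.polyfit(log_k, log_var, 1)               # expected slope: 2H - 2
    return 1 + slope / 2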

Periodogram-based analysis

Whittle's approximate maximum likelihood estimator (MLE) is applied to estimate the Hurst parameter via the spectral density of X. It is not only a tool for visualizing the Hurst parameter, but also a method for statistical inference about the parameters, via the asymptotic properties of the MLE. In particular, X is assumed to be a Gaussian process. Write the spectral density of X as f_x(w; θ) = σ_ε² f_x(w; (1, η)), where θ = (σ_ε², η) = (σ_ε², H, θ_3, …, θ_k), H = (γ + 1)/2, and θ_3, …, θ_k parameterize a short-range autoregressive (AR) time series model, that is, X_j = Σ_{i=1}^{k} α_i X_{j−i} + ε_j with Var(ε_j) = σ_ε².

Thus, Whittle's estimator η̂ of η minimizes the function Q(η) = ∫_{−π}^{π} I(w)/f(w; (1, η)) dw, where I(w) = (2πn)^{−1} |Σ_{j=1}^{n} X_j e^{iwj}|² denotes the periodogram of X, and σ̂² = ∫_{−π}^{π} I(w)/f(w; (1, η̂)) dw. These integrals can be evaluated by a Riemann sum.

Then n^{1/2}(θ̂ − θ) is asymptotically normally distributed if X_j can be expressed as an infinite moving average.
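
The sketch below implements a simplified Whittle-type estimator in Python. It keeps only the power-law factor w^(1−2H) of the spectral density and profiles out the scale σ_ε², omitting the short-range AR parameters θ_3, …, θ_k of the full model above; that simplification, and the function name, are assumptions of this sketch.

import numpy as np
from scipy.optimize import minimize_scalar

def whittle_hurst(x):
    """Minimize a Riemann-sum Whittle objective under f(w; H) ~ G * w^(1-2H)."""
    n = len(x)
    j = np.arange(1, (n - 1) // 2 + 1)
    w = 2.0 * np.pi * j / n                                   # Fourier frequencies
    per = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2.0 * np.pi * n)  # I(w)

    def objective(H):
        # The scale G is profiled out analytically before minimizing over H.
        g = np.mean(per * w ** (2.0 * H - 1.0))
        return np.log(g) + (1.0 - 2.0 * H) * np.mean(np.log(w))

    res = minimize_scalar(objective, bounds=(0.51, 0.99), method="bounded")
    return res.x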

To estimate H, one first calculates this periodogram. Since I_n(w) is an estimator of the spectral density, a series with long-range dependence should have a periodogram proportional to |w|^{1−2H} close to the origin. The periodogram plot is obtained by plotting log I_n(w) against log w.
Fitting a regression model of log I_n(w) on log w then gives a slope β̂, which is an estimate of 1 − 2H. Thus the estimate Ĥ = (1 − β̂)/2 is obtained.
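
A Python sketch of this periodogram regression; restricting the fit to the lowest frequencies (the frac cut-off below is an arbitrary choice of this sketch) reflects the fact that the power law holds only near the origin:

import numpy as np

def hurst_periodogram(x, frac=0.1):
    """Estimate H by regressing log I_n(w) on log w near the origin."""
    n = len(x)
    j = np.arange(1, int(frac * (n // 2)) + 1)         # lowest Fourier frequencies
    w = 2.0 * np.pi * j / n
    per = np.abs(np.fft.fft(x - x.mean())[j]) ** 2 / (2.0 * np.pi * n)  # I_n(w)
    slope, _ = np.polyfit(np.log(w), np.log(per), 1)   # slope estimates 1 - 2H
    return (1.0 - slope) / 2.0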

Note:
There are two common problems when applying the periodogram method. First, if the data do not follow a Gaussian distribution, a transformation of the data can address this problem. Second, the sample spectrum may deviate from the assumed spectral density; an aggregation method is suggested to address this. If X is a Gaussian process and the spectral density of X satisfies w^(−γ) L(w) as w → 0, then the aggregated series m^(−H) L^(−1/2)(m) Σ_{i=(j−1)m+1}^{jm} (X_i − E(X_i)), j = 1, 2, …, ⌊n/m⌋, converges in distribution to FGN as m → ∞.
