
Bartels–Stewart algorithm

Algorithm in numerical linear algebra

In numerical linear algebra, the Bartels–Stewart algorithm is used to numerically solve the Sylvester matrix equation AX - XB = C. Developed by R.H. Bartels and G.W. Stewart in 1971, it was the first numerically stable method that could be systematically applied to solve such equations. The algorithm works by using the real Schur decompositions of A and B to transform AX - XB = C into a triangular system that can then be solved using forward or backward substitution. In 1979, G. Golub, C. Van Loan and S. Nash introduced an improved version of the algorithm, known as the Hessenberg–Schur algorithm. It remains a standard approach for solving Sylvester equations when X is of small to moderate size.

The algorithm

Let X, C ∈ R^{m×n}, and assume that the eigenvalues of A are distinct from the eigenvalues of B. Then the matrix equation AX - XB = C has a unique solution. The Bartels–Stewart algorithm computes X by applying the following steps; a code sketch of the full procedure is given after the list.

1. Compute the real Schur decompositions

R = U^T A U,
S = V^T B^T V.

The matrices R and S are block upper triangular, with diagonal blocks of size 1 × 1 or 2 × 2.

2. Set F = U T C V . {\displaystyle F=U^{T}CV.}

3. Solve the simplified system R Y - Y S^T = F, where Y = U^T X V. This can be done using forward substitution on the blocks. Specifically, if s_{k-1,k} = 0, then

(R - s_{kk} I) y_k = f_k + \sum_{j=k+1}^{n} s_{kj} y_j,

where y_k is the kth column of Y. When s_{k-1,k} ≠ 0, the columns [y_{k-1}, y_k] should be concatenated and solved for simultaneously.

4. Set X = U Y V T . {\displaystyle X=UYV^{T}.}

Computational cost

Using the QR algorithm, the real Schur decompositions in step 1 require approximately 10(m^3 + n^3) flops, so that the overall computational cost is 10(m^3 + n^3) + 2.5(mn^2 + nm^2) flops.
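For example, in the square case m = n this total is 10(n^3 + n^3) + 2.5(n^3 + n^3) = 25 n^3 flops, so the two Schur decompositions dominate the cost.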

Simplifications and special cases

In the special case where B = -A^T and C is symmetric, the solution X will also be symmetric. This symmetry can be exploited so that Y is found more efficiently in step 3 of the algorithm.
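As an illustration, the hedged sketch below (in the same illustrative Python style as above) handles the case B = -A^T with real A and symmetric C, i.e. the Lyapunov-type equation AX + XA^T = C. Only one Schur decomposition is needed, and the transformed unknown Y = U^H X U inherits the symmetry of C; a tuned implementation would also exploit that symmetry inside the triangular solve, which this sketch omits.

import numpy as np
from scipy.linalg import schur, solve_triangular

def lyapunov_bartels_stewart(A, C):
    """Solve A X + X A^T = C (the case B = -A^T) for real A and symmetric C,
    assuming no two eigenvalues of A sum to zero."""
    R, U = schur(A, output="complex")   # A = U R U^H: a single Schur decomposition
    F = U.conj().T @ C @ U              # F = U^H C U is Hermitian when C is symmetric
    m = C.shape[0]
    Y = np.zeros((m, m), dtype=complex)
    I = np.eye(m)
    # Solve R Y + Y R^H = F: column k involves only columns k+1, ..., m-1,
    # so sweep backward over the columns.
    for k in range(m - 1, -1, -1):
        rhs = F[:, k] - Y[:, k + 1:] @ R[k, k + 1:].conj()
        Y[:, k] = solve_triangular(R + np.conj(R[k, k]) * I, rhs)
    return (U @ Y @ U.conj().T).real    # X is symmetric up to roundoff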

The Hessenberg–Schur algorithm

The Hessenberg–Schur algorithm replaces the decomposition R = U^T A U in step 1 with the decomposition H = Q^T A Q, where H is an upper Hessenberg matrix. This leads to a system of the form H Y - Y S^T = F that can be solved using forward substitution. The advantage of this approach is that H = Q^T A Q can be found using Householder reflections at a cost of (5/3) m^3 flops, compared to the 10 m^3 flops required to compute the real Schur decomposition of A.
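A sketch of this variant, in the same illustrative Python style (again simplified by using a complex Schur form for B), differs from the earlier sketch only in that A is reduced to Hessenberg rather than Schur form:

import numpy as np
from scipy.linalg import hessenberg, schur

def hessenberg_schur(A, B, C):
    """Solve AX - XB = C using a Hessenberg form of A and a Schur form of B."""
    H, Q = hessenberg(A, calc_q=True)   # A = Q H Q^T, H upper Hessenberg
    S, V = schur(B, output="complex")   # B = V S V^H, S upper triangular
    F = Q.conj().T @ C @ V
    m, n = C.shape
    Y = np.zeros((m, n), dtype=complex)
    I = np.eye(m)
    for k in range(n):
        rhs = F[:, k] + Y[:, :k] @ S[:k, k]
        # H - S[k, k] I is upper Hessenberg; a production code would solve it
        # in O(m^2) flops with Givens rotations. A generic dense solve is
        # used here purely for clarity.
        Y[:, k] = np.linalg.solve(H - S[k, k] * I, rhs)
    X = Q @ Y @ V.conj().T
    return X.real if np.isrealobj(A) and np.isrealobj(B) and np.isrealobj(C) else X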

Software and implementation

The subroutines required for the Hessenberg–Schur variant of the Bartels–Stewart algorithm are implemented in the SLICOT library. These are used in the MATLAB Control System Toolbox.
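SciPy likewise provides a Bartels–Stewart-based solver, scipy.linalg.solve_sylvester, which relies on the LAPACK routine *trsyl and solves equations written as AX + XB = Q. A brief usage example, adapted to the sign convention of this article by negating B:

import numpy as np
from scipy.linalg import solve_sylvester

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((5, 4))

# solve_sylvester handles A X + X B = Q, so pass -B to solve A X - X B = C.
X = solve_sylvester(A, -B, C)
assert np.allclose(A @ X - X @ B, C)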

Alternative approaches

For large systems, the O(m^3 + n^3) cost of the Bartels–Stewart algorithm can be prohibitive. When A and B are sparse or structured, so that linear solves and matrix–vector products involving them are efficient, iterative algorithms can potentially perform better. These include projection-based methods, which use Krylov subspace iterations; methods based on the alternating direction implicit (ADI) iteration; and hybridizations that involve both projection and ADI. Iterative methods can also be used to directly construct low-rank approximations to X when solving AX - XB = C.
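As a rough illustration of the projection idea, the hedged sketch below (illustrative names only, not a library routine) applies a Galerkin projection onto Krylov subspaces generated by A and B^T for a rank-one right-hand side C = c1 c2^T, solving only a small projected Sylvester equation and returning a low-rank approximation to X.

import numpy as np
from scipy.linalg import solve_sylvester

def krylov_galerkin_sylvester(A, B, c1, c2, k):
    """Approximate the solution of A X - X B = c1 c2^T by Galerkin projection
    onto the Krylov subspaces K_k(A, c1) and K_k(B^T, c2)."""
    def krylov_basis(M, v, k):
        # Monomial Krylov basis orthonormalized by QR; a practical code would
        # build the basis with the Arnoldi process for numerical stability.
        cols = [v]
        for _ in range(k - 1):
            cols.append(M @ cols[-1])
        Q, _ = np.linalg.qr(np.column_stack(cols))
        return Q
    Vk = krylov_basis(A, c1, k)
    Wk = krylov_basis(B.T, c2, k)
    # Projected (k x k) Sylvester equation: Ak Yk - Yk Bk = Ck.
    Ak = Vk.T @ A @ Vk
    Bk = Wk.T @ B @ Wk
    Ck = np.outer(Vk.T @ c1, Wk.T @ c2)
    Yk = solve_sylvester(Ak, -Bk, Ck)
    return Vk @ Yk @ Wk.T   # low-rank approximation to X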

References

  1. Bartels, R. H.; Stewart, G. W. (1972). "Solution of the matrix equation AX + XB = C [F4]". Communications of the ACM. 15 (9): 820–826. doi:10.1145/361573.361582. ISSN 0001-0782.
  2. Golub, G.; Nash, S.; Van Loan, C. (1979). "A Hessenberg–Schur method for the problem AX + XB = C". IEEE Transactions on Automatic Control. 24 (6): 909–913. doi:10.1109/TAC.1979.1102170. hdl:1813/7472. ISSN 0018-9286.
  3. Simoncini, V. (2016). "Computational Methods for Linear Matrix Equations". SIAM Review. 58 (3): 377–441. doi:10.1137/130912839. hdl:11585/586011. ISSN 0036-1445. S2CID 17271167.