Hankel matrix

In linear algebra, a Hankel matrix (or catalecticant matrix), named after Hermann Hankel, is a square matrix in which each ascending skew-diagonal from left to right is constant. For example,

$$\begin{bmatrix}
a & b & c & d & e \\
b & c & d & e & f \\
c & d & e & f & g \\
d & e & f & g & h \\
e & f & g & h & i
\end{bmatrix}.$$

More generally, a Hankel matrix is any $n \times n$ matrix $A$ of the form

$$A = \begin{bmatrix}
a_0 & a_1 & a_2 & \cdots & a_{n-1} \\
a_1 & a_2 & a_3 & \cdots & a_n \\
a_2 & a_3 & a_4 & \cdots & a_{n+1} \\
\vdots & \vdots & \vdots & \ddots & \vdots \\
a_{n-1} & a_n & a_{n+1} & \cdots & a_{2n-2}
\end{bmatrix}.$$

In terms of the components, if the $i,j$ element of $A$ is denoted $A_{ij}$, and assuming $i \leq j$, then we have $A_{i,j} = A_{i+k,\,j-k}$ for all $k = 0, \dots, j-i$.
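
As a concrete illustration of this definition, here is a minimal sketch, assuming NumPy and SciPy are available; the sequence values are arbitrary placeholders.

    # Build a Hankel matrix from a sequence and check the constant
    # skew-diagonal property A[i, j] == A[i + k, j - k], i.e. that every
    # entry depends only on i + j.
    import numpy as np
    from scipy.linalg import hankel

    a = np.arange(9)             # the sequence a_0, ..., a_8 (n = 5)
    A = hankel(a[:5], a[4:])     # first column a_0..a_4, last row a_4..a_8

    n = A.shape[0]
    assert all(A[i, j] == a[i + j] for i in range(n) for j in range(n))
    print(A)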

Properties

  • Any Hankel matrix is symmetric.
  • Let $J_n$ be the $n \times n$ exchange matrix. If $H$ is an $m \times n$ Hankel matrix, then $H = TJ_n$ where $T$ is an $m \times n$ Toeplitz matrix (see the numerical sketch after this list).
    • If $T$ is real symmetric, then $H = TJ_n$ will have the same eigenvalues as $T$ up to sign.
  • The Hilbert matrix is an example of a Hankel matrix.
  • The determinant of a Hankel matrix is called a catalecticant.
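
The Toeplitz relation mentioned in the list above can be verified directly. The following is a minimal sketch, assuming NumPy and SciPy; the matrix entries are arbitrary example values.

    # A Hankel matrix equals a Toeplitz matrix times the exchange matrix:
    # H = T @ J_n, where J_n reverses column order.
    import numpy as np
    from scipy.linalg import hankel, toeplitz

    n = 4
    c = np.arange(1, n + 1)               # first column 1, 2, 3, 4
    r = np.arange(n, 2 * n)               # last row 4, 5, 6, 7
    H = hankel(c, r)

    J = np.fliplr(np.eye(n, dtype=int))   # exchange matrix J_n
    T = H @ J                             # J_n is its own inverse, so T = H J_n
    assert np.array_equal(T, toeplitz(T[:, 0], T[0, :]))   # T is Toeplitz
    assert np.array_equal(H, T @ J)                        # and H = T J_n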

Hankel operator

Given a formal Laurent series

$$f(z) = \sum_{n=-\infty}^{N} a_n z^n,$$

the corresponding Hankel operator is defined as

$$H_f \colon \mathbf{C}[z] \to z^{-1}\mathbf{C}[[z^{-1}]].$$

This takes a polynomial $g \in \mathbf{C}[z]$ and sends it to the product $fg$, but discards all powers of $z$ with a non-negative exponent, so as to give an element in $z^{-1}\mathbf{C}[[z^{-1}]]$, the formal power series with strictly negative exponents. The map $H_f$ is in a natural way $\mathbf{C}[z]$-linear, and its matrix with respect to the elements $1, z, z^2, \dots \in \mathbf{C}[z]$ and $z^{-1}, z^{-2}, \dots \in z^{-1}\mathbf{C}[[z^{-1}]]$ is the Hankel matrix

$$\begin{bmatrix}
a_{-1} & a_{-2} & a_{-3} & \cdots \\
a_{-2} & a_{-3} & a_{-4} & \cdots \\
a_{-3} & a_{-4} & a_{-5} & \cdots \\
\vdots & \vdots & \vdots & \ddots
\end{bmatrix}.$$

Any Hankel matrix arises in this way. A theorem due to Kronecker says that the rank of this matrix is finite precisely if $f$ is a rational function, that is, a fraction of two polynomials $f(z) = p(z)/q(z)$.
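
Kronecker's theorem can be observed numerically. Below is a minimal sketch, assuming NumPy; the rational function $f(z) = 1/(z - 1/2)$ is an example chosen here only because its Laurent coefficients $a_{-k}$ are easy to write down.

    # For |z| > 1/2:  1/(z - 1/2) = sum_{k >= 1} (1/2)^(k-1) z^(-k),
    # so the coefficient of z^(-k) is a_{-k} = (1/2)^(k-1). The Hankel matrix
    # built from these coefficients has finite rank (here, rank 1).
    import numpy as np

    m = 8
    coeff = np.array([0.5 ** (k - 1) for k in range(1, 2 * m)])   # a_{-1}, a_{-2}, ...

    # (i, j) entry is a_{-(i+j+1)}, with 0-based i and j.
    H = np.array([[coeff[i + j] for j in range(m)] for i in range(m)])

    print(np.linalg.matrix_rank(H))   # -> 1: finite rank, as Kronecker's theorem predicts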

Approximations

We are often interested in approximations of the Hankel operators, possibly by low-order operators. In order to approximate the output of the operator, we can use the spectral norm (operator 2-norm) to measure the error of our approximation. This suggests singular value decomposition as a possible technique to approximate the action of the operator.

Note that the matrix $A$ does not have to be finite. If it is infinite, traditional methods of computing individual singular vectors will not work directly. We also require that the approximation itself be a Hankel matrix; AAK theory (after Adamyan, Arov and Krein) shows that the best approximation of a given rank in the operator norm can indeed be chosen to be a Hankel operator.
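
In the finite-dimensional case, a common practical stand-in (not the AAK construction itself) is to truncate the SVD and then average the anti-diagonals to restore Hankel structure. Here is a minimal sketch, assuming NumPy; the signal is an arbitrary example.

    # Rank-k approximation of a finite Hankel matrix via the truncated SVD,
    # followed by anti-diagonal averaging to project back onto Hankel matrices
    # (often called Cadzow denoising).
    import numpy as np

    rng = np.random.default_rng(0)
    seq = np.sin(0.4 * np.arange(15)) + 0.05 * rng.standard_normal(15)
    n = 8
    H = np.array([[seq[i + j] for j in range(n)] for i in range(n)])

    k = 2
    U, s, Vt = np.linalg.svd(H)
    H_k = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]        # best rank-k fit in the 2-norm

    # Average each anti-diagonal (entries with the same i + j) of H_k.
    avg = np.array([np.fliplr(H_k).diagonal(n - 1 - t).mean() for t in range(2 * n - 1)])
    H_hankel = np.array([[avg[i + j] for j in range(n)] for i in range(n)])

    print(np.linalg.norm(H - H_k, 2), np.linalg.norm(H - H_hankel, 2))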

Hankel matrix transform

The Hankel matrix transform, or simply Hankel transform, of a sequence $b_k$ is the sequence of the determinants of the Hankel matrices formed from $b_k$. Given an integer $n > 0$, define the corresponding $(n \times n)$-dimensional Hankel matrix $B_n$ as having the matrix elements $[B_n]_{i,j} = b_{i+j}$. Then the sequence $h_n$ given by

$$h_n = \det B_n$$

is the Hankel transform of the sequence $b_k$. The Hankel transform is invariant under the binomial transform of a sequence. That is, if one writes

$$c_n = \sum_{i=0}^{n} \binom{n}{i} b_i$$

as the binomial transform of the sequence $b_n$, then one has $\det B_n = \det C_n$, where $C_n$ denotes the Hankel matrix formed from the sequence $c_k$.
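
The transform can be computed directly from its definition. Here is a minimal sketch, assuming NumPy; the Catalan numbers are used as input because their Hankel transform is known to be the all-ones sequence, which makes the result easy to check.

    # Hankel matrix transform: the sequence of determinants det B_n with
    # [B_n]_{i,j} = b_{i+j} (0-based indices here).
    import numpy as np
    from math import comb

    def hankel_transform(b, terms):
        """Return [det B_1, ..., det B_terms]; needs len(b) >= 2 * terms - 1."""
        out = []
        for n in range(1, terms + 1):
            B = np.array([[b[i + j] for j in range(n)] for i in range(n)], dtype=float)
            out.append(round(np.linalg.det(B)))
        return out

    catalan = [comb(2 * k, k) // (k + 1) for k in range(11)]   # C_0, ..., C_10
    print(hankel_transform(catalan, 6))   # -> [1, 1, 1, 1, 1, 1]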

Applications of Hankel matrices

Hankel matrices are formed when, given a sequence of output data, a realization of an underlying state-space or hidden Markov model is desired. The singular value decomposition of the Hankel matrix provides a means of computing the A, B, and C matrices which define the state-space realization. The Hankel matrix formed from the signal has been found useful for decomposition of non-stationary signals and time-frequency representation.
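
As a sketch of how the SVD of such a Hankel matrix yields a state-space realization (in the spirit of the Ho-Kalman / eigensystem realization idea), assuming NumPy; the "true" system below is a hypothetical example used only to generate impulse-response data.

    # Recover (A, B, C) from the Hankel matrix of Markov parameters
    # h_k = C A^(k-1) B via the SVD; the realization is unique up to similarity.
    import numpy as np

    A_true = np.array([[0.9, 0.2], [0.0, 0.5]])
    B_true = np.array([[1.0], [1.0]])
    C_true = np.array([[1.0, 0.0]])

    h = [(C_true @ np.linalg.matrix_power(A_true, k - 1) @ B_true).item()
         for k in range(1, 12)]                                 # Markov parameters

    m = 5
    H0 = np.array([[h[i + j] for j in range(m)] for i in range(m)])      # Hankel matrix
    H1 = np.array([[h[i + j + 1] for j in range(m)] for i in range(m)])  # shifted Hankel matrix

    r = 2                                    # model order = rank of H0
    U, s, Vt = np.linalg.svd(H0)
    Obs = U[:, :r] * np.sqrt(s[:r])          # observability factor
    Con = np.sqrt(s[:r])[:, None] * Vt[:r]   # controllability factor

    A_est = np.linalg.pinv(Obs) @ H1 @ np.linalg.pinv(Con)
    B_est = Con[:, :1]                       # first column of the controllability factor
    C_est = Obs[:1, :]                       # first row of the observability factor

    h_est = [(C_est @ np.linalg.matrix_power(A_est, k - 1) @ B_est).item()
             for k in range(1, 12)]
    print(np.allclose(h, h_est))             # -> True: the realization reproduces the data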

Method of moments for polynomial distributions

The method of moments applied to polynomial distributions results in a Hankel matrix that needs to be inverted in order to obtain the weight parameters of the polynomial distribution approximation.
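
A minimal, hypothetical sketch of this idea, assuming NumPy: if a density on $[0,1]$ is modeled as a polynomial $w(x) = \sum_j c_j x^j$, its moments $m_k = \int_0^1 x^k w(x)\,dx = \sum_j c_j/(k+j+1)$ are governed by a coefficient matrix that depends only on $k+j$, i.e. a Hankel (Hilbert-type) matrix, and inverting it recovers the weights.

    # Solve the Hankel system of moments for the polynomial weights c_j.
    import numpy as np

    degree = 3
    c_true = np.array([0.2, 0.5, 0.1, 0.8])             # hypothetical weights

    # M[k, j] = integral_0^1 x^(k+j) dx = 1 / (k + j + 1): a Hankel matrix.
    M = np.array([[1.0 / (k + j + 1) for j in range(degree + 1)]
                  for k in range(degree + 1)])
    moments = M @ c_true                                 # "observed" moments

    c_est = np.linalg.solve(M, moments)                  # invert the Hankel matrix
    print(np.allclose(c_est, c_true))                    # -> True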

Positive Hankel matrices and the Hamburger moment problems

A real sequence $b_k$ is a Hamburger moment sequence, that is, $b_k = \int x^k \, d\mu(x)$ for some positive measure $\mu$ on the real line, if and only if every Hankel matrix $[B_n]_{i,j} = b_{i+j}$ formed from it is positive semidefinite.
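
As an illustration, assuming NumPy, the moments of the standard normal distribution (a choice of moment sequence made here purely for the example) give Hankel matrices that are all positive semidefinite.

    # The moments of N(0, 1) are 1, 0, 1, 0, 3, 0, 15, ... for k = 0, 1, 2, ...;
    # since they form a Hamburger moment sequence, every Hankel matrix built
    # from them is positive semidefinite.
    import numpy as np

    def gaussian_moment(k):
        """E[X^k] for X ~ N(0, 1): 0 for odd k, (k - 1)!! for even k."""
        if k % 2 == 1:
            return 0.0
        return float(np.prod(np.arange(1, k, 2))) if k > 0 else 1.0

    b = [gaussian_moment(k) for k in range(11)]
    for n in range(1, 6):
        B = np.array([[b[i + j] for j in range(n)] for i in range(n)])
        assert np.linalg.eigvalsh(B).min() > -1e-9     # positive semidefinite
    print("all Hankel moment matrices are positive semidefinite")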

See also

  • Cauchy matrix
  • Jacobi operator
  • Toeplitz matrix, an "upside down" (that is, row-reversed) Hankel matrix
  • Vandermonde matrix
