
18.3 Matrix Factorizations
==========================

 -- : R = chol (A)
 -- : [R, P] = chol (A)
 -- : [R, P, Q] = chol (A)
 -- : [R, P, Q] = chol (A, "vector")
 -- : [L, ...] = chol (..., "lower")
 -- : [R, ...] = chol (..., "upper")
     Compute the upper Cholesky factor, R, of the real symmetric or
     complex Hermitian positive definite matrix A.

     The upper Cholesky factor R is computed by using the upper
     triangular part of matrix A and is defined by

          R' * R = A.

     Calling ‘chol’ using the optional "upper" flag has the same
     behavior.  In contrast, using the optional "lower" flag, ‘chol’
     returns the lower triangular factorization, computed by using the
     lower triangular part of matrix A, such that

          L * L' = A.

     Called with one output argument, ‘chol’ fails if matrix A is not
     positive definite.  Note that if matrix A is not real symmetric or
     complex Hermitian then the lower triangular part is considered to
     be the (complex conjugate) transpose of the upper triangular part,
     or vice versa, given the "lower" flag.

     Called with two or more output arguments, P flags whether the
     matrix A was positive definite and ‘chol’ does not fail.  A zero
     value of
     P indicates that matrix A is positive definite and R gives the
     factorization.  Otherwise, P will have a positive value.

     If called with three output arguments, matrix A must be sparse,
     and a sparsity-preserving row/column permutation is applied to
     matrix A prior to the factorization.  That is, R is the
     factorization of ‘A(Q,Q)’ such that

          R' * R = Q' * A * Q.

     The sparsity-preserving permutation is generally returned as a
     matrix.  However, given the optional flag "vector", Q will be
     returned as a vector such that

          R' * R = A(Q, Q).

     In general, the lower triangular factorization is significantly
     faster for sparse matrices.
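
     For example, a minimal sketch of both calling forms, using an
     arbitrary positive definite test matrix:

          A = [2, 1; 1, 2];        # symmetric positive definite
          R = chol (A);            # upper triangular factor
          norm (R' * R - A)        # expected: approximately 0

          [R, P] = chol (A);       # P is 0: A is positive definite
          [R, P] = chol (-A);      # P is positive: -A is not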

     See also: hess, lu, qr, qz, schur, svd, ichol, cholinv, chol2inv,
     cholupdate, cholinsert, choldelete, cholshift.

 -- : cholinv (A)
     Compute the inverse of the symmetric positive definite matrix A
     using the Cholesky factorization.

     See also: chol, chol2inv, inv.

 -- : chol2inv (U)
     Invert a symmetric, positive definite square matrix from its
     Cholesky decomposition, U.

     Note that U should be an upper-triangular matrix with positive
     diagonal elements.  ‘chol2inv (U)’ provides ‘inv (U'*U)’ but it is
     much faster than using ‘inv’.
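
     As an illustrative sketch, with an arbitrary positive definite
     test matrix:

          A = [4, 2; 2, 3];
          U = chol (A);              # A == U' * U
          Ainv = chol2inv (U);       # equivalent to inv (U' * U)
          norm (Ainv - inv (A))      # expected: approximately 0
          norm (cholinv (A) - Ainv)  # expected: approximately 0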

     See also: chol, cholinv, inv.

 -- : [R1, INFO] = cholupdate (R, U, OP)
     Update or downdate a Cholesky factorization.

     Given an upper triangular matrix R and a column vector U, attempt
     to determine another upper triangular matrix R1 such that

        • R1'*R1 = R'*R + U*U' if OP is "+"

        • R1'*R1 = R'*R - U*U' if OP is "-"

     If OP is "-", INFO is set to

        • 0 if the downdate was successful,

        • 1 if R'*R - U*U' is not positive definite,

        • 2 if R is singular.

     If INFO is not present, an error message is printed in cases 1 and
     2.
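
     A minimal sketch of a rank-1 update, with arbitrary test values:

          A = [4, 2; 2, 3];
          R = chol (A);
          u = [1; 1];
          R1 = cholupdate (R, u, "+");    # factor of A + u*u'
          norm (R1' * R1 - (A + u*u'))    # expected: approximately 0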

     See also: chol, cholinsert, choldelete, cholshift.

 -- : R1 = cholinsert (R, J, U)
 -- : [R1, INFO] = cholinsert (R, J, U)
     Update a Cholesky factorization given a row or column to insert in
     the original factored matrix.

     Given a Cholesky factorization of a real symmetric or complex
     Hermitian positive definite matrix A = R'*R, R upper triangular,
     return the Cholesky factorization of A1, where A1(p,p) = A,
     A1(:,j) = A1(j,:)' = U and p = [1:j-1,j+1:n+1].  U(J) should be
     positive.

     On return, INFO is set to

        • 0 if the insertion was successful,

        • 1 if A1 is not positive definite,

        • 2 if R is singular.

     If INFO is not present, an error message is printed in cases 1 and
     2.
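
     A sketch of inserting a new second row/column, with arbitrary
     test values:

          A = [4, 2; 2, 3];
          R = chol (A);
          u = [1; 5; 1];                # new column; u(2) is positive
          R1 = cholinsert (R, 2, u);
          A1 = [4, 1, 2; 1, 5, 1; 2, 1, 3];
          norm (R1' * R1 - A1)          # expected: approximately 0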

     See also: chol, cholupdate, choldelete, cholshift.

 -- : R1 = choldelete (R, J)
     Update a Cholesky factorization given a row or column to delete
     from the original factored matrix.

     Given a Cholesky factorization of a real symmetric or complex
     Hermitian positive definite matrix A = R'*R, R upper triangular,
     return the Cholesky factorization of A(p,p), where
     p = [1:j-1,j+1:n+1].
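
     A sketch of deleting the second row/column, with an arbitrary
     test matrix:

          A1 = [4, 1, 2; 1, 5, 1; 2, 1, 3];
          R1 = chol (A1);
          R = choldelete (R1, 2);
          norm (R' * R - A1([1, 3], [1, 3]))   # expected: approximately 0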

     See also: chol, cholupdate, cholinsert, cholshift.

 -- : R1 = cholshift (R, I, J)
     Update a Cholesky factorization given a range of columns to shift
     in the original factored matrix.

     Given a Cholesky factorization of a real symmetric or complex
     Hermitian positive definite matrix A = R'*R, R upper triangular,
     return the Cholesky factorization of A(p,p), where p is the
     permutation
     ‘p = [1:i-1, shift(i:j, 1), j+1:n]’ if I < J
     or
     ‘p = [1:j-1, shift(j:i,-1), i+1:n]’ if J < I.
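
     A sketch with I = 1 and J = 3, so that p = [3, 1, 2]:

          A = [4, 1, 2; 1, 5, 1; 2, 1, 3];
          R = chol (A);
          R1 = cholshift (R, 1, 3);
          p = [3, 1, 2];
          norm (R1' * R1 - A(p, p))     # expected: approximately 0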

     See also: chol, cholupdate, cholinsert, choldelete.

 -- : H = hess (A)
 -- : [P, H] = hess (A)
     Compute the Hessenberg decomposition of the matrix A.

     The Hessenberg decomposition is ‘P * H * P' = A’ where P is a
     square unitary matrix (‘P' * P = I’, using complex-conjugate
     transposition) and H is upper Hessenberg (‘H(i, j) = 0 for all
     i > j+1’).

     The Hessenberg decomposition is usually used as the first step in
     an eigenvalue computation, but has other applications as well (see
     Golub, Nash, and Van Loan, IEEE Transactions on Automatic Control,
     1979).

     See also: eig, chol, lu, qr, qz, schur, svd.

 -- : [L, U] = lu (A)
 -- : [L, U, P] = lu (A)
 -- : [L, U, P, Q] = lu (S)
 -- : [L, U, P, Q, R] = lu (S)
 -- : [...] = lu (S, THRES)
 -- : Y = lu (...)
 -- : [...] = lu (..., "vector")
     Compute the LU decomposition of A.

     If A is full then subroutines from LAPACK are used, and if A is
     sparse then UMFPACK is used.

     The result is returned in a permuted form, according to the
     optional return value P.  For example, given the matrix ‘A = [1, 2;
     3, 4]’,

          [l, u, p] = lu (A)

     returns

          l =

            1.00000  0.00000
            0.33333  1.00000

          u =

            3.00000  4.00000
            0.00000  0.66667

          p =

            0  1
            1  0

     The matrix is not required to be square.

     When called with two or three output arguments and a sparse input
     matrix, ‘lu’ does not attempt to perform sparsity-preserving
     column permutations.  Called with a fourth output argument, the
     sparsity-preserving column transformation Q is returned, such that
     ‘P * A * Q = L * U’.  This is the *preferred* way to call ‘lu’
     with sparse input matrices.

     Called with a fifth output argument and a sparse input matrix, ‘lu’
     attempts to use a scaling factor R on the input matrix such that ‘P
     * (R \ A) * Q = L * U’.  This typically leads to a sparser and more
     stable factorization.

     An additional input argument THRES, which defines the pivoting
     threshold, can be given.  THRES can be a scalar, in which case it
     defines the UMFPACK pivoting tolerance for both symmetric and
     unsymmetric cases.  If THRES is a 2-element vector, then the first
     element defines the pivoting tolerance for the unsymmetric UMFPACK
     pivoting strategy and the second for the symmetric strategy.  By
     default, the values defined by ‘spparms’ are used ([0.1, 0.001]).

     Given the string argument "vector", ‘lu’ returns the values of P
     and Q as vector values, such that for a full matrix, ‘A(P,:) = L *
     U’, and ‘R(P,:) * A(:,Q) = L * U’.

     With two output arguments, ‘lu’ returns the permuted forms of the
     upper and lower triangular matrices, such that ‘A = L * U’.  With
     one output argument Y, the matrix returned by the LAPACK routines
     is returned.  If the input matrix is sparse then the matrix L is
     embedded into U to give a return value similar to the full case.
     For both full and sparse matrices, ‘lu’ loses the permutation
     information.
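
     For example, a minimal sketch of the "vector" form for a full
     matrix:

          A = [1, 2; 3, 4];
          [L, U, p] = lu (A, "vector");   # p is a permutation vector
          norm (A(p,:) - L * U)           # expected: approximately 0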

     See also: luupdate, ilu, chol, hess, qr, qz, schur, svd.

 -- : [L, U] = luupdate (L, U, X, Y)
 -- : [L, U, P] = luupdate (L, U, P, X, Y)
     Given an LU factorization of a real or complex matrix A = L*U,
     L lower unit trapezoidal and U upper trapezoidal, return the
     LU factorization of A + X*Y.', where X and Y are column vectors
     (rank-1 update) or matrices with an equal number of columns
     (rank-k update).

     Optionally, row-pivoted updating can be used by supplying a row
     permutation (pivoting) matrix P; in that case, an updated
     permutation matrix is returned.  Note that if L, U, P is a pivoted
     LU factorization as obtained by ‘lu’:

          [L, U, P] = lu (A);

     then a factorization of A+X*Y.' can be obtained either as

          [L1, U1] = luupdate (L, U, P*X, Y)

     or

          [L1, U1, P1] = luupdate (L, U, P, X, Y)

     The first form uses the unpivoted algorithm, which is faster, but
     less stable.  The second form uses a slower pivoted algorithm,
     which is more stable.

     The matrix case is done as a sequence of rank-1 updates; thus, for
     large enough k, it will be both faster and more accurate to
     recompute the factorization from scratch.
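
     A minimal sketch of a pivoted rank-1 update with arbitrary test
     values, assuming the usual ‘P*A = L*U’ convention of ‘lu’:

          A = magic (4);
          [L, U, P] = lu (A);
          x = ones (4, 1);
          y = (1:4)';
          [L1, U1, P1] = luupdate (L, U, P, x, y);
          norm (P1 * (A + x*y') - L1 * U1)   # expected: approximately 0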

     See also: lu, cholupdate, qrupdate.

 -- : [Q, R] = qr (A)
 -- : [Q, R, P] = qr (A) # non-sparse A
 -- : X = qr (A) # non-sparse A
 -- : R = qr (A) # sparse A
 -- : [C, R] = qr (A, B)
 -- : [...] = qr (..., 0)
 -- : [...] = qr (..., "vector")
 -- : [...] = qr (..., "matrix")
     Compute the QR factorization of A, using standard LAPACK
     subroutines.

     The QR factorization is

          Q * R = A

     where Q is an orthogonal matrix and R is upper triangular.

     For example, given the matrix ‘A = [1, 2; 3, 4]’,

          [Q, R] = qr (A)

     returns

          Q =

            -0.31623  -0.94868
            -0.94868   0.31623

          R =

            -3.16228  -4.42719
             0.00000  -0.63246

     which multiplied together return the original matrix

          Q * R
            ⇒
               1.0000   2.0000
               3.0000   4.0000

     If just a single return value is requested then it is either R, if
     A is sparse, or X, such that ‘R = triu (X)’ if A is full.  (Note:
     unlike most commands, the single return value is not the first
     return value when multiple values are requested.)

     If the matrix A is full, and a third output P is requested, then
     ‘qr’ calculates the permuted QR factorization

          Q * R = A * P

     where Q is an orthogonal matrix, R is upper triangular, and P is a
     permutation matrix.

     The permuted QR factorization has the additional property that the
     diagonal entries of R are ordered by decreasing magnitude.  In
     other words, ‘abs (diag (R))’ will be ordered from largest to
     smallest.

     For example, given the matrix ‘A = [1, 2; 3, 4]’,

          [Q, R, P] = qr (A)

     returns

          Q =

            -0.44721  -0.89443
            -0.89443   0.44721

          R =

            -4.47214  -3.13050
             0.00000   0.44721

          P =

             0  1
             1  0

     If the input matrix A is sparse then the sparse QR factorization is
     computed using CXSPARSE.  Because the matrix Q is, in general, a
     full matrix, it is recommended to request only one return value R.
     In that case, the computation avoids the construction of Q and
     returns R such that ‘R = chol (A' * A)’.

     If an additional matrix B is supplied and two return values are
     requested, then ‘qr’ returns C, where ‘C = Q' * B’.  This allows
     the least squares approximation of ‘A \ B’ to be calculated as

          [C, R] = qr (A, B)
          X = R \ C

     If the final argument is the string "vector" then P is a
     permutation vector (of the columns of A) instead of a permutation
     matrix.  In this case, the defining relationship is

          Q * R = A(:, P)

     The default, however, is to return a permutation matrix and this
     may be explicitly specified by using a final argument of "matrix".

     If the final argument is the scalar 0 an "economy" factorization is
     returned.  When the original matrix A has size MxN and M > N then
     the "economy" factorization will calculate just N rows in R and N
     columns in Q and omit the zeros in R.  If M ≤ N there is no
     difference between the economy and standard factorizations.  When
     calculating an "economy" factorization the output P is always a
     vector rather than a matrix.
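
     For example, a sketch of the "economy" form for a tall matrix:

          A = [1, 2; 3, 4; 5, 6];   # 3x2, so M > N
          [Q, R] = qr (A, 0);       # Q is 3x2, R is 2x2
          norm (A - Q * R)          # expected: approximately 0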

     Background: The QR factorization has applications in the solution
     of least squares problems

          min norm (A*x - b)

     for overdetermined systems of equations (i.e., A is a tall, thin
     matrix).

     The permuted QR factorization ‘[Q, R, P] = qr (A)’ allows the
     construction of an orthogonal basis of ‘span (A)’.

     See also: chol, hess, lu, qz, schur, svd, qrupdate, qrinsert,
     qrdelete, qrshift.

 -- : [Q1, R1] = qrupdate (Q, R, U, V)
     Update a QR factorization given update vectors or matrices.

     Given a QR factorization of a real or complex matrix A = Q*R,
     Q unitary and R upper trapezoidal, return the QR factorization of
     A + U*V', where U and V are column vectors (rank-1 update) or
     matrices with an equal number of columns (rank-k update).  Notice that
     the latter case is done as a sequence of rank-1 updates; thus, for
     k large enough, it will be both faster and more accurate to
     recompute the factorization from scratch.

     The QR factorization supplied may be either full (Q is square) or
     economized (R is square).
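
     A minimal sketch of a rank-1 update, with arbitrary test values:

          A = magic (4);
          [Q, R] = qr (A);
          u = ones (4, 1);
          v = (1:4)';
          [Q1, R1] = qrupdate (Q, R, u, v);
          norm (Q1 * R1 - (A + u*v'))   # expected: approximately 0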

     See also: qr, qrinsert, qrdelete, qrshift.

 -- : [Q1, R1] = qrinsert (Q, R, J, X, ORIENT)
     Update a QR factorization given a row or column to insert in the
     original factored matrix.

     Given a QR factorization of a real or complex matrix A = Q*R,
     Q unitary and R upper trapezoidal, return the QR factorization of
     [A(:,1:j-1), X, A(:,j:n)], where X is a column vector to be
     inserted into A (if ORIENT is "col"), or the QR factorization of
     [A(1:j-1,:); X; A(j:n,:)], where X is a row vector to be inserted
     into A (if ORIENT is "row").

     The default value of ORIENT is "col".  If ORIENT is "col", X may
     be a matrix and J an index vector resulting in the QR
     factorization of a matrix B such that B(:,J) gives X and
     B(:,J) = [] gives A.
     Notice that the latter case is done as a sequence of k insertions;
     thus, for k large enough, it will be both faster and more accurate
     to recompute the factorization from scratch.

     If ORIENT is "col", the QR factorization supplied may be either
     full (Q is square) or economized (R is square).

     If ORIENT is "row", full factorization is needed.
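
     A sketch of inserting a new second column, with arbitrary test
     values:

          A = magic (4);
          [Q, R] = qr (A);
          x = (1:4)';
          [Q1, R1] = qrinsert (Q, R, 2, x, "col");
          B = [A(:,1), x, A(:,2:4)];
          norm (Q1 * R1 - B)            # expected: approximately 0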

     See also: qr, qrupdate, qrdelete, qrshift.

 -- : [Q1, R1] = qrdelete (Q, R, J, ORIENT)
     Update a QR factorization given a row or column to delete from the
     original factored matrix.

     Given a QR factorization of a real or complex matrix A = Q*R,
     Q unitary and R upper trapezoidal, return the QR factorization of
     [A(:,1:j-1), A(:,j+1:n)], i.e., A with its J-th column deleted
     (if ORIENT is "col"), or the QR factorization of
     [A(1:j-1,:); A(j+1:n,:)], i.e., A with its J-th row deleted (if
     ORIENT is "row").  The default value of ORIENT is "col".

     If ORIENT is "col", J may be an index vector resulting in the
     QR factorization of a matrix B such that A(:,J) = [] gives B.
     Notice that the latter case is done as a sequence of k deletions;
     thus, for k large enough, it will be both faster and more accurate
     to recompute the factorization from scratch.

     If ORIENT is "col", the QR factorization supplied may be either
     full (Q is square) or economized (R is square).

     If ORIENT is "row", full factorization is needed.
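
     A sketch of deleting the second column, with an arbitrary test
     matrix:

          A = magic (4);
          [Q, R] = qr (A);
          [Q1, R1] = qrdelete (Q, R, 2, "col");
          B = A;
          B(:,2) = [];
          norm (Q1 * R1 - B)            # expected: approximately 0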

     See also: qr, qrupdate, qrinsert, qrshift.

 -- : [Q1, R1] = qrshift (Q, R, I, J)
     Update a QR factorization given a range of columns to shift in the
     original factored matrix.

     Given a QR factorization of a real or complex matrix A = Q*R,
     Q unitary and R upper trapezoidal, return the QR factorization of
     A(:,p), where p is the permutation
     ‘p = [1:i-1, shift(i:j, 1), j+1:n]’ if I < J
     or
     ‘p = [1:j-1, shift(j:i,-1), i+1:n]’ if J < I.
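
     A sketch with I = 1 and J = 3, so that p = [3, 1, 2, 4]:

          A = magic (4);
          [Q, R] = qr (A);
          [Q1, R1] = qrshift (Q, R, 1, 3);
          p = [3, 1, 2, 4];
          norm (Q1 * R1 - A(:,p))       # expected: approximately 0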

     See also: qr, qrupdate, qrinsert, qrdelete.

 -- : LAMBDA = qz (A, B)
 -- : [AA, BB, Q, Z, V, W, LAMBDA] = qz (A, B)
 -- : [AA, BB, Z] = qz (A, B, OPT)
 -- : [AA, BB, Z, LAMBDA] = qz (A, B, OPT)
     Compute the QZ decomposition of a generalized eigenvalue problem.

     The generalized eigenvalue problem is defined as

     A x = LAMBDA B x

     There are three calling forms of the function:

       1. ‘LAMBDA = qz (A, B)’

          Compute the generalized eigenvalues LAMBDA.

       2. ‘[AA, BB, Q, Z, V, W, LAMBDA] = qz (A, B)’

          Compute QZ decomposition, generalized eigenvectors, and
          generalized eigenvalues.


               A * V = B * V * diag (LAMBDA)
               W' * A = diag (LAMBDA) * W' * B
               AA = Q * A * Z, BB = Q * B * Z


          with Q and Z orthogonal (unitary for complex case).

       3. ‘[AA, BB, Z {, LAMBDA}] = qz (A, B, OPT)’

          As in form 2 above, but allows ordering of generalized
          eigenpairs for, e.g., solution of discrete time algebraic
          Riccati equations.  Form 3 is not available for complex
          matrices, and does not compute the generalized eigenvectors V,
          W, nor the orthogonal matrix Q.

          OPT
               specifies the ordering of the eigenvalues of the GEP
               pencil.  The leading block of the revised pencil
               contains all eigenvalues that satisfy:

               "N"
                    unordered (default)

               "S"
                    small: leading block has all |LAMBDA| < 1

               "B"
                    big: leading block has all |LAMBDA| ≥ 1

               "-"
                    negative real part: leading block has all
                    eigenvalues in the open left half-plane

               "+"
                    non-negative real part: leading block has all
                    eigenvalues in the closed right half-plane

     Note: ‘qz’ performs permutation balancing, but not scaling (see
     ‘balance’), which may lead to less accurate results than ‘eig’.
     The order of output arguments was selected for compatibility with
     MATLAB.
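
     For example, a minimal sketch of the first calling form, whose
     result should agree (up to ordering) with ‘eig’:

          A = [1, 2; 3, 4];
          B = [5, 6; 7, 8];
          lambda = qz (A, B)    # generalized eigenvalues of (A, B)
          eig (A, B)            # same values, possibly reordered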

     See also: eig, balance, lu, chol, hess, qr, qzhess, schur, svd.

 -- : [AA, BB, Q, Z] = qzhess (A, B)
     Compute the Hessenberg-triangular decomposition of the matrix
     pencil ‘(A, B)’, returning ‘AA = Q * A * Z’, ‘BB = Q * B * Z’, with
     Q and Z orthogonal.

     For example:

          [aa, bb, q, z] = qzhess ([1, 2; 3, 4], [5, 6; 7, 8])
               ⇒ aa = [ -3.02244, -4.41741;  0.92998,  0.69749 ]
               ⇒ bb = [ -8.60233, -9.99730;  0.00000, -0.23250 ]
               ⇒  q = [ -0.58124, -0.81373; -0.81373,  0.58124 ]
               ⇒  z = [ 1, 0; 0, 1 ]

     The Hessenberg-triangular decomposition is the first step in Moler
     and Stewart’s QZ decomposition algorithm.

     Algorithm taken from Golub and Van Loan, ‘Matrix Computations, 2nd
     edition’.

     See also: lu, chol, hess, qr, qz, schur, svd.

 -- : S = schur (A)
 -- : S = schur (A, "real")
 -- : S = schur (A, "complex")
 -- : S = schur (A, OPT)
 -- : [U, S] = schur (...)
     Compute the Schur decomposition of A.

     The Schur decomposition is defined as

          S = U' * A * U

     where U is a unitary matrix (‘U' * U’ is the identity) and S is upper
     triangular.  The eigenvalues of A (and S) are the diagonal elements
     of S.  If the matrix A is real, then the real Schur decomposition
     is computed, in which the matrix U is orthogonal and S is block
     upper triangular with blocks of size at most ‘2 x 2’ along the
     diagonal.  The diagonal elements of S (or the eigenvalues of the ‘2
     x 2’ blocks, when appropriate) are the eigenvalues of A and S.

     The default for real matrices is a real Schur decomposition.  A
     complex decomposition may be forced by passing the flag "complex".

     The eigenvalues are optionally ordered along the diagonal according
     to the value of OPT.  ‘OPT = "a"’ indicates that all eigenvalues
     with negative real parts should be moved to the leading block of S
     (used in ‘are’), ‘OPT = "d"’ indicates that all eigenvalues with
     magnitude less than one should be moved to the leading block of S
     (used in ‘dare’), and ‘OPT = "u"’, the default, indicates that no
     ordering of eigenvalues should occur.  The leading K columns of U
     always span the A-invariant subspace corresponding to the K leading
     eigenvalues of S.

     The Schur decomposition is used to compute eigenvalues of a square
     matrix, and has applications in the solution of algebraic Riccati
     equations in control (see ‘are’ and ‘dare’).
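
     For example, a minimal sketch of the two-output form:

          A = [1, 2; 3, 4];
          [U, S] = schur (A);
          norm (U * S * U' - A)   # expected: approximately 0
          ## A has real eigenvalues here, so S is truly triangular
          ## and diag (S) matches eig (A).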

     See also: rsf2csf, ordschur, lu, chol, hess, qr, qz, svd.

 -- : [U, T] = rsf2csf (UR, TR)
     Convert a real, upper quasi-triangular Schur form TR to a complex,
     upper triangular Schur form T.

     Note that the following relations hold:

     UR * TR * UR' = U * T * U' and ‘U' * U’ is the identity matrix I.

     Note also that U and T are not unique.
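
     A minimal sketch, using a rotation matrix whose real Schur form
     contains a 2x2 block:

          A = [0, -1; 1, 0];         # eigenvalues are +i and -i
          [UR, TR] = schur (A);      # TR is one 2x2 block
          [U, T] = rsf2csf (UR, TR);
          norm (U * T * U' - A)      # expected: approximately 0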

     See also: schur.

 -- : [UR, SR] = ordschur (U, S, SELECT)
     Reorder the real Schur factorization (U,S) obtained with the
     ‘schur’ function, so that selected eigenvalues appear in the
     upper left diagonal blocks of the quasi-triangular Schur matrix.

     The logical vector SELECT specifies the selected eigenvalues as
     they appear along the diagonal of S.

     For example, given the matrix ‘A = [1, 2; 3, 4]’, and its Schur
     decomposition

          [U, S] = schur (A)

     which returns

          U =

            -0.82456  -0.56577
             0.56577  -0.82456

          S =

            -0.37228  -1.00000
             0.00000   5.37228


     It is possible to reorder the decomposition so that the positive
     eigenvalue is in the upper left corner, by doing:

          [U, S] = ordschur (U, S, [0,1])

     See also: schur.

 -- : ANGLE = subspace (A, B)
     Determine the largest principal angle between two subspaces spanned
     by the columns of matrices A and B.
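
     For example, two lines in the xy-plane separated by 45 degrees:

          a = [1; 0; 0];
          b = [1; 1; 0];
          subspace (a, b)   # expected: approximately pi/4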

 -- : S = svd (A)
 -- : [U, S, V] = svd (A)
 -- : [U, S, V] = svd (A, "econ")
 -- : [U, S, V] = svd (A, 0)
     Compute the singular value decomposition of A.

     The singular value decomposition is defined by the relation

          A = U*S*V'

     The function ‘svd’ normally returns only the vector of singular
     values.  When called with three return values, it computes U, S,
     and V.  For example,

          svd (hilb (3))

     returns

          ans =

            1.4083189
            0.1223271
            0.0026873

     and

          [u, s, v] = svd (hilb (3))

     returns

          u =

            -0.82704   0.54745   0.12766
            -0.45986  -0.52829  -0.71375
            -0.32330  -0.64901   0.68867

          s =

            1.40832  0.00000  0.00000
            0.00000  0.12233  0.00000
            0.00000  0.00000  0.00269

          v =

            -0.82704   0.54745   0.12766
            -0.45986  -0.52829  -0.71375
            -0.32330  -0.64901   0.68867

     When given a second argument that is not 0, ‘svd’ returns an
     economy-sized decomposition, eliminating the unnecessary rows or
     columns of U or V.

     If the second argument is exactly 0, then the choice of
     decomposition is based on the matrix A.  If A has more rows than
     columns then an economy-sized decomposition is returned, otherwise
     a regular decomposition is calculated.
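
     For example, a sketch of the economy-sized form for a tall
     matrix:

          A = [1, 2; 3, 4; 5, 6];       # 3x2, more rows than columns
          [U, S, V] = svd (A, "econ");  # U is 3x2, S and V are 2x2
          norm (A - U * S * V')         # expected: approximately 0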

     Algorithm Notes: When calculating the full decomposition (left and
     right singular matrices in addition to singular values) there is a
     choice of two routines in LAPACK.  The default routine used by
     Octave is ‘gesdd’ which is 5X faster than the alternative ‘gesvd’,
     but may use more memory and may be less accurate for some matrices.
     See the documentation for ‘svd_driver’ for more information.

     See also: svd_driver, svds, eig, lu, chol, hess, qr, qz.

 -- : VAL = svd_driver ()
 -- : OLD_VAL = svd_driver (NEW_VAL)
 -- : svd_driver (NEW_VAL, "local")
     Query or set the underlying LAPACK driver used by ‘svd’.

     Currently recognized values are "gesdd" and "gesvd".  The default
     is "gesdd".

     When called from inside a function with the "local" option, the
     variable is changed locally for the function and any subroutines it
     calls.  The original variable value is restored when exiting the
     function.

     Algorithm Notes: The LAPACK library provides two routines for
     calculating the full singular value decomposition (left and right
     singular matrices as well as singular values).  When calculating
     just the singular values the following discussion is not relevant.

     The default routine used by Octave is the newer ‘gesdd’, which is
     based on a Divide-and-Conquer algorithm that is 5X faster than the
     alternative ‘gesvd’, which is based on QR factorization.  However,
     the new algorithm can use significantly more memory.  For an MxN
     input matrix the memory usage is of order O(min(M,N) ^ 2), whereas
     the alternative is of order O(max(M,N)). In general, modern
     computers have abundant memory so Octave has chosen to prioritize
     speed.

     In addition, there have been instances in the past where some input
     matrices were not accurately decomposed by ‘gesdd’.  This appears
     to have been resolved with modern versions of LAPACK.  However, if
     certainty is required, the accuracy of the decomposition can always
     be tested after the fact with

          [U, S, V] = svd (X);
          norm (X - U*S*V', "fro")

     See also: svd.

 -- : [HOUSV, BETA, ZER] = housh (X, J, Z)
     Compute the Householder reflection vector HOUSV that reflects X
     onto the J-th column of the identity matrix, i.e.,

          (I - beta*housv*housv')x =  norm (x)*e(j) if x(j) < 0,
          (I - beta*housv*housv')x = -norm (x)*e(j) if x(j) >= 0

     Inputs

     X
          vector

     J
          index into vector

     Z
          threshold for zero (usually should be the number 0)

     Outputs (see Golub and Van Loan):

     BETA
          If beta = 0, then no reflection need be applied (zer set to 0)

      HOUSV
           Householder reflection vector
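
     Assuming the reflection identity stated above, a sketch like the
     following should hold (x is an arbitrary test vector):

          x = [3; 4; 0];               # x(1) >= 0, norm (x) == 5
          [housv, beta, zer] = housh (x, 1, 0);
          (eye (3) - beta * housv * housv') * x
               ## expected: approximately [-5; 0; 0], i.e., -norm (x)*e(1)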

 -- : [U, H, NU] = krylov (A, V, K, EPS1, PFLG)
     Construct an orthogonal basis U of a block Krylov subspace.

     The block Krylov subspace has the following form:

          [v a*v a^2*v ... a^(k+1)*v]

     The construction is made with Householder reflections to guard
     against loss of orthogonality.

     If V is a vector, then H contains the Hessenberg matrix such that
     ‘a*u == u*h+rk*ek'’, in which ‘rk = a*u(:,k)-u*h(:,k)’, and ‘ek'’
     is the vector ‘[0, 0, ..., 1]’ of length K.  Otherwise, H is
     meaningless.

     If V is a vector and K is greater than ‘length (A) - 1’, then H
     contains the complete Hessenberg decomposition of A, such that
     ‘a*u == u*h’.

     The value of NU is the dimension of the span of the Krylov
     subspace (based on EPS1).

     The optional parameter EPS1 is the threshold for zero.  The default
     value is 1e-12.

     If the optional parameter PFLG is nonzero, row pivoting is used to
     improve numerical behavior.  The default value is 0.
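
     A hedged sketch of a basic call (the input values are arbitrary);
     the columns of U should be approximately orthonormal:

          A = rand (5);
          v = ones (5, 1);
          [U, H, nu] = krylov (A, v, 3);
          norm (U' * U - eye (columns (U)))   # expected: approximately 0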

     Reference: A. Hodel, P. Misra, ‘Partial Pivoting in the Computation
     of Krylov Subspaces of Large Sparse Systems’, Proceedings of the
     42nd IEEE Conference on Decision and Control, December 2003.

