(octave.info)Linear Least Squares



25.4 Linear Least Squares
=========================

Octave also supports linear least squares minimization.  That is, Octave
can find the parameter b such that the model y = x*b fits data (x,y) as
well as possible, assuming zero-mean Gaussian noise.  If the noise is
assumed to be isotropic, the problem can be solved using the ‘\’ or ‘/’
operators or the ‘ols’ function.  In the general case where the noise
is assumed to be anisotropic, the ‘gls’ function is needed.
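As a minimal sketch of the isotropic case using the ‘\’ operator (the
sample data below is invented for illustration):

```matlab
% Fit y = b(1) + b(2)*x by ordinary least squares using '\'.
x = (1:5)';                      % predictor values
X = [ones(5,1), x];              % design matrix with intercept column
y = [1.1; 2.0; 2.9; 4.1; 5.0];   % noisy observations
b = X \ y;                       % least squares estimate
r = y - X*b;                     % residuals
```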

 -- : [BETA, SIGMA, R] = ols (Y, X)
     Ordinary least squares (OLS) estimation.

     OLS applies to the multivariate model Y = X*B + E where Y is a
     t-by-p matrix, X is a t-by-k matrix, B is a k-by-p matrix, and E is
     a t-by-p matrix.

     Each row of Y is a p-variate observation in which each column
     represents a variable.  Likewise, the rows of X represent k-variate
     observations or possibly designed values.  Furthermore, the
     collection of observations X must be of adequate rank, k, otherwise
     B cannot be uniquely estimated.

     The observation errors, E, are assumed to originate from an
     underlying p-variate distribution with zero mean and p-by-p
     covariance matrix S, both constant conditioned on X.  Furthermore,
     the matrix S is constant with respect to each observation such that
     ‘mean (E) = 0’ and ‘cov (vec (E)) = kron (S, I)’.  (For cases that
     do not meet these criteria, such as autocorrelated errors, see
     generalized least squares, ‘gls’, for more efficient estimation.)

     The return values BETA, SIGMA, and R are defined as follows.

     BETA
          The OLS estimator for matrix B.  BETA is calculated directly
          via ‘inv (X'*X) * X' * Y’ if the matrix ‘X'*X’ is of full
          rank.  Otherwise, ‘BETA = pinv (X) * Y’ where ‘pinv (X)’
          denotes the pseudoinverse of X.

     SIGMA
          The OLS estimator for the matrix S,

               SIGMA = (Y-X*BETA)' * (Y-X*BETA) / (t-rank(X))

     R
          The matrix of OLS residuals, ‘R = Y - X*BETA’.

     See also: gls, pinv.
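     A short sketch with invented data (here t = 5, k = 2, p = 1):

```matlab
% Univariate OLS fit with an intercept; the data is illustrative only.
X = [ones(5,1), (1:5)'];            % t-by-k design matrix
Y = [1.1; 2.0; 2.9; 4.1; 5.0];      % t-by-p observations
[beta, sigma, r] = ols (Y, X);      % beta: k-by-p, sigma: p-by-p, r: t-by-p
```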

 -- : [BETA, V, R] = gls (Y, X, O)
     Generalized least squares (GLS) model.

     Perform a generalized least squares estimation for the multivariate
     model Y = X*B + E where Y is a t-by-p matrix, X is a t-by-k matrix,
     B is a k-by-p matrix and E is a t-by-p matrix.

     Each row of Y is a p-variate observation in which each column
     represents a variable.  Likewise, the rows of X represent k-variate
     observations or possibly designed values.  Furthermore, the
     collection of observations X must be of adequate rank, k, otherwise
     B cannot be uniquely estimated.

     The observation errors, E, are assumed to originate from an
     underlying p-variate distribution with zero mean but possibly
     heteroscedastic observations.  That is, in general, ‘mean (E) = 0’
     and ‘cov (vec (E)) = (s^2)*O’ in which s is a scalar and O is a
     t*p-by-t*p matrix.

     The return values BETA, V, and R are defined as follows.

     BETA
          The GLS estimator for matrix B.

     V
          The GLS estimator for scalar s^2.

     R
          The matrix of GLS residuals, R = Y - X*BETA.

     See also: ols.
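     A short sketch with a diagonal O that down-weights the last two
     observations (the covariance values are illustrative only):

```matlab
% GLS with heteroscedastic errors: later observations are noisier.
X = [ones(5,1), (1:5)'];
Y = [1.1; 2.0; 2.9; 4.1; 5.0];
O = diag ([1, 1, 1, 4, 4]);         % t*p-by-t*p covariance (t = 5, p = 1)
[beta, v, r] = gls (Y, X, O);
```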

 -- : X = lsqnonneg (C, D)
 -- : X = lsqnonneg (C, D, X0)
 -- : X = lsqnonneg (C, D, X0, OPTIONS)
 -- : [X, RESNORM] = lsqnonneg (...)
 -- : [X, RESNORM, RESIDUAL] = lsqnonneg (...)
 -- : [X, RESNORM, RESIDUAL, EXITFLAG] = lsqnonneg (...)
 -- : [X, RESNORM, RESIDUAL, EXITFLAG, OUTPUT] = lsqnonneg (...)
 -- : [X, RESNORM, RESIDUAL, EXITFLAG, OUTPUT, LAMBDA] = lsqnonneg (...)

     Minimize ‘norm (C*X - D)’ subject to ‘X >= 0’.

     C and D must be real matrices.

     X0 is an optional initial guess for the solution X.

     OPTIONS is an options structure to change the behavior of the
     algorithm (see ‘optimset’).  ‘lsqnonneg’ recognizes these options:
     "MaxIter", "TolX".

     Outputs:

     RESNORM
          The squared 2-norm of the residual: ‘norm (C*X-D)^2’

     RESIDUAL
          The residual: ‘D-C*X’

     EXITFLAG
          An indicator of convergence.  0 indicates that the iteration
          count was exceeded, and therefore convergence was not reached;
          >0 indicates that the algorithm converged.  (The algorithm is
          stable and will converge given enough iterations.)

     OUTPUT
          A structure with two fields:

             • "algorithm": The algorithm used ("nnls")

             • "iterations": The number of iterations taken.

     LAMBDA
          Undocumented output

     See also: pqpnonneg, lscov, optimset.
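     For example, with data chosen so that the unconstrained least
     squares solution would have a negative component, which the
     constraint clamps to zero:

```matlab
% The unconstrained solution here is [2; -1]; with the nonnegativity
% constraint the second component is forced to zero.
C = [1, 0; 0, 1; 1, 1];
d = [2; -1; 1];
[x, resnorm] = lsqnonneg (C, d);    % x = [1.5; 0], resnorm = 1.5
```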

 -- : X = lscov (A, B)
 -- : X = lscov (A, B, V)
 -- : X = lscov (A, B, V, ALG)
 -- : [X, STDX, MSE, S] = lscov (...)

     Compute a generalized linear least squares fit.

     Estimate X under the model B = A*X + W, where the noise W is
     assumed to follow a normal distribution with covariance matrix
     (sigma^2)*V.

     If the size of the coefficient matrix A is n-by-p, the size of the
     vector/array of constant terms B must be n-by-k.

     The optional input argument V may be an n-element vector of
     positive weights (inverse variances), or an n-by-n symmetric
     positive semi-definite matrix representing the covariance of B.  If
     V is not supplied, the ordinary least squares solution is returned.

     The ALG input argument, a hint about which solution method to use,
     is currently ignored.

     Besides the least-squares estimate matrix X (p-by-k), the function
     also returns STDX (p-by-k), the error standard deviation of
     estimated X; MSE (k-by-1), the estimated data error covariance
     scale factors (sigma^2); and S (p-by-p, or p-by-p-by-k if k > 1),
     the error covariance of X.

     Reference: Golub and Van Loan (1996), ‘Matrix Computations (3rd
     Ed.)’, Johns Hopkins, Section 5.6.3

     See also: ols, gls, lsqnonneg.
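     A short sketch of a weighted fit, with hypothetical
     inverse-variance weights that give the middle observation extra
     influence:

```matlab
% Weighted least squares via lscov; data and weights are invented
% for illustration.
A = [ones(5,1), (1:5)'];
b = [1.1; 2.0; 2.9; 4.1; 5.0];
w = [1; 1; 4; 1; 1];                % n-element vector of positive weights
[x, stdx, mse] = lscov (A, b, w);
```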

 -- : optimset ()
 -- : OPTIONS = optimset ()
 -- : OPTIONS = optimset (PAR, VAL, ...)
 -- : OPTIONS = optimset (OLD, PAR, VAL, ...)
 -- : OPTIONS = optimset (OLD, NEW)
     Create options structure for optimization functions.

     When called without any input or output arguments, ‘optimset’
     prints a list of all valid optimization parameters.

     When called with one output and no inputs, return an options
     structure with all valid option parameters initialized to ‘[]’.

     When called with a list of parameter/value pairs, return an options
     structure with only the named parameters initialized.

     When the first input is an existing options structure OLD, the
     values are updated from either the PAR/VAL list or from the options
     structure NEW.
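     For example (parameter values here are illustrative):

```matlab
% Build an options structure, then update it in place.
opts = optimset ("MaxIter", 200, "TolX", 1e-8);
opts = optimset (opts, "Display", "final");   % update an existing structure
```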

     Valid parameters are:

     AutoScaling

     ComplexEqn

     Display
          Request verbose display of results from optimizations.  Values
          are:

          "off" [default]
               No display.

          "iter"
               Display intermediate results for every loop iteration.

          "final"
               Display the result of the final loop iteration.

          "notify"
               Display the result of the final loop iteration if the
               function has failed to converge.

     FinDiffType

     FunValCheck
          When enabled, display an error if the objective function
          returns an invalid value (a complex number, NaN, or Inf).
          Must be set to "on" or "off" [default].  Note: the functions
          ‘fzero’ and ‘fminbnd’ correctly handle Inf values and only
          complex values or NaN will cause an error in this case.

     GradObj
          When set to "on", the function to be minimized must return a
          second argument which is the gradient, or first derivative, of
          the function at the point X.  If set to "off" [default], the
          gradient is computed via finite differences.

     Jacobian
          When set to "on", the function to be minimized must return a
          second argument which is the Jacobian, or first derivative, of
          the function at the point X.  If set to "off" [default], the
          Jacobian is computed via finite differences.

     MaxFunEvals
          Maximum number of function evaluations before optimization
          stops.  Must be a positive integer.

     MaxIter
          Maximum number of algorithm iterations before optimization
          stops.  Must be a positive integer.

     OutputFcn
          A user-defined function executed once per algorithm iteration.

     TolFun
          Termination criterion for the function output.  If the
          difference in the calculated objective function between one
          algorithm iteration and the next is less than ‘TolFun’ the
          optimization stops.  Must be a positive scalar.

     TolX
          Termination criterion for the function input.  If the
          difference in X, the current search point, between one
          algorithm iteration and the next is less than ‘TolX’ the
          optimization stops.  Must be a positive scalar.

     TypicalX

     Updating

     See also: optimget.

 -- : optimget (OPTIONS, PARNAME)
 -- : optimget (OPTIONS, PARNAME, DEFAULT)
     Return the specific option PARNAME from the optimization options
     structure OPTIONS created by ‘optimset’.

     If PARNAME is not defined then return DEFAULT if supplied,
     otherwise return an empty matrix.

     See also: optimset.
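     For example (values are illustrative):

```matlab
% Read a parameter back from an options structure, and fall back to a
% default for an unset one.
opts = optimset ("TolX", 1e-8);
tol = optimget (opts, "TolX");           % the value that was set
mi  = optimget (opts, "MaxIter", 400);   % unset, so the default is returned
```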

