(octave.info)Specialized Solvers
18.5 Specialized Solvers
========================
-- : X = bicg (A, B)
-- : X = bicg (A, B, TOL)
-- : X = bicg (A, B, TOL, MAXIT)
-- : X = bicg (A, B, TOL, MAXIT, M)
-- : X = bicg (A, B, TOL, MAXIT, M1, M2)
-- : X = bicg (A, B, TOL, MAXIT, M, [], X0)
-- : X = bicg (A, B, TOL, MAXIT, M1, M2, X0)
-- : X = bicg (A, B, TOL, MAXIT, M, [], X0, ...)
-- : X = bicg (A, B, TOL, MAXIT, M1, M2, X0, ...)
-- : [X, FLAG, RELRES, ITER, RESVEC] = bicg (A, B, ...)
Solve the linear system of equations ‘A * X = B’ by means of the
Bi-Conjugate Gradient iterative method.
The input arguments are:
• A is the matrix of the linear system and it must be square. A
can be passed as a matrix, function handle, or inline function
‘Afun’ such that ‘Afun (x, "notransp") = A * x’ and
‘Afun (x, "transp") = A' * x’. Additional parameters to
‘Afun’ may be passed after X0.
• B is the right-hand side vector. It must be a column vector
with the same number of rows as A.
• TOL is the required relative tolerance for the residual error,
‘B - A * X’. The iteration stops if
‘norm (B - A * X)’ ≤ ‘TOL * norm (B)’. If TOL is omitted or
empty, then a tolerance of 1e-6 is used.
• MAXIT is the maximum allowed number of iterations; if MAXIT is
omitted or empty then a value of 20 is used.
• M1, M2 are the preconditioners. The preconditioner M is given
as ‘M = M1 * M2’. Both M1 and M2 can be passed as a matrix or
as a function handle or inline function ‘g’ such that
‘g (X, "notransp") = M1 \ X’ or ‘g (X, "notransp") = M2 \ X’
and ‘g (X, "transp") = M1' \ X’ or
‘g (X, "transp") = M2' \ X’. If M1 is omitted or empty, then
preconditioning is not applied. The preconditioned system is
theoretically equivalent to applying the ‘bicg’ method to the
linear system ‘inv (M1) * A * inv (M2) * Y = inv (M1) * B’ and
‘inv (M2') * A' * inv (M1') * Z = inv (M2') * B’ and then
setting ‘X = inv (M2) * Y’.
• X0 is the initial guess. If X0 is omitted or empty then the
function sets X0 to a zero vector by default.
Any arguments which follow X0 are treated as parameters and are passed
in an appropriate manner to any of the functions (AFUN or the
preconditioner functions) that have been given to ‘bicg’.
The output parameters are:
• X is the computed approximation to the solution of
‘A * X = B’. If the algorithm did not converge, then X is the
iterate with the minimum residual.
• FLAG indicates the exit status:
• 0: The algorithm converged to within the prescribed
tolerance.
• 1: The algorithm did not converge and it reached the
maximum number of iterations.
• 2: The preconditioner matrix is singular.
• 3: The algorithm stagnated, i.e., the absolute value of
the difference between the current iteration X and the
previous is less than ‘eps * norm (X,2)’.
• 4: The algorithm could not continue because intermediate
values became too small or too large for reliable
computation.
• RELRES is the ratio of the final residual to its initial
value, measured in the Euclidean norm.
• ITER is the iteration number at which X was computed.
• RESVEC is a vector containing the residual at each iteration.
The total number of iterations performed is given by ‘length
(RESVEC) - 1’.
Consider a trivial problem with a tridiagonal matrix
n = 20;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1] * n ^ 2, 1, n)) + ...
toeplitz (sparse (1, 2, -1, 1, n) * n / 2, ...
sparse (1, 2, 1, 1, n) * n / 2);
b = A * ones (n, 1);
[M1, M2] = ilu (A); # in this tridiag case, it corresponds to lu (A)
M = M1 * M2;
Afun = @(x, string) strcmp (string, "notransp") * (A * x) + ...
strcmp (string, "transp") * (A' * x);
Mfun = @(x, string) strcmp (string, "notransp") * (M \ x) + ...
strcmp (string, "transp") * (M' \ x);
M1fun = @(x, string) strcmp (string, "notransp") * (M1 \ x) + ...
strcmp (string, "transp") * (M1' \ x);
M2fun = @(x, string) strcmp (string, "notransp") * (M2 \ x) + ...
strcmp (string, "transp") * (M2' \ x);
EXAMPLE 1: simplest usage of ‘bicg’
x = bicg (A, b)
EXAMPLE 2: ‘bicg’ with a function that computes ‘A*X’ and ‘A'*X’
x = bicg (Afun, b, [], n)
EXAMPLE 3: ‘bicg’ with a preconditioner matrix M
x = bicg (A, b, 1e-6, n, M)
EXAMPLE 4: ‘bicg’ with a function as preconditioner
x = bicg (Afun, b, 1e-6, n, Mfun)
EXAMPLE 5: ‘bicg’ with preconditioner matrices M1 and M2
x = bicg (A, b, 1e-6, n, M1, M2)
EXAMPLE 6: ‘bicg’ with functions as preconditioners
x = bicg (Afun, b, 1e-6, n, M1fun, M2fun)
EXAMPLE 7: ‘bicg’ with a function that requires an additional argument
function y = Ap (A, x, string, z)
## compute A^z * x or (A^z)' * x
y = x;
if (strcmp (string, "notransp"))
for i = 1:z
y = A * y;
endfor
elseif (strcmp (string, "transp"))
for i = 1:z
y = A' * y;
endfor
endif
endfunction
Apfun = @(x, string, p) Ap (A, x, string, p);
x = bicg (Apfun, b, [], [], [], [], [], 2);
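The multi-output form can be used to check convergence programmatically. The following sketch is not part of the original example set; it reuses the A, b, and n defined in the setup above and assumes that code has been run:

```octave
## call bicg with all outputs to inspect the convergence history
[x, flag, relres, iter, resvec] = bicg (A, b, 1e-10, n);
if (flag == 0)
  printf ("converged at iteration %d, relres = %g\n", iter, relres);
endif
## plot the relative residual history (iteration 0 is the initial guess)
semilogy (0:length (resvec) - 1, resvec / norm (b));
```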
References:
1. Y. Saad, ‘Iterative Methods for Sparse Linear Systems’, Second
edition, 2003, SIAM.
See also: bicgstab, cgs, gmres, pcg, qmr, tfqmr.
-- : X = bicgstab (A, B, TOL, MAXIT, M1, M2, X0, ...)
-- : X = bicgstab (A, B, TOL, MAXIT, M, [], X0, ...)
-- : [X, FLAG, RELRES, ITER, RESVEC] = bicgstab (A, B, ...)
Solve ‘A x = b’ using the stabilized Bi-Conjugate Gradient
iterative method.
The input parameters are:
− A is the matrix of the linear system and it must be square. A
can be passed as a matrix, function handle, or inline function
‘Afun’ such that ‘Afun(x) = A * x’. Additional parameters to
‘Afun’ are passed after X0.
− B is the right hand side vector. It must be a column vector
with the same number of rows as A.
− TOL is the required relative tolerance for the residual error,
‘B - A * X’. The iteration stops if
‘norm (B - A * X)’ ≤ ‘TOL * norm (B)’. If TOL is omitted or
empty, then a tolerance of 1e-6 is used.
− MAXIT is the maximum number of iterations; if not given or
set to [], the default value ‘min (20, numel (b))’ is used.
− M1, M2 are the preconditioners. The preconditioner M is given
as ‘M = M1 * M2’. Both M1 and M2 can be passed as a matrix or
as a function handle or inline function ‘g’ such that ‘g(X) =
M1 \ X’ or ‘g(X) = M2 \ X’. The technique used is right
preconditioning, i.e., the system ‘A * inv (M) * Y = B’ is
solved and then ‘X = inv (M) * Y’.
− X0 is the initial guess; if not given or set to [], the
default value ‘zeros (size (B))’ is used.
The arguments which follow X0 are treated as parameters and are
passed in a proper way to any of the functions (A or M) which are
passed to ‘bicgstab’.
The output parameters are:
− X is the computed approximation. If the method does not
converge, then X is the iterate with the minimum residual.
− FLAG indicates the exit status:
− 0: the algorithm converged to within the chosen tolerance
− 1: the maximum number of iterations was reached before
convergence
− 2: the preconditioner matrix is singular
− 3: the algorithm reached stagnation
− 4: the algorithm can’t continue due to a division by zero
− RELRES is the relative residual, obtained as
‘norm (A*X - B) / norm (B)’.
− ITER is the iteration at which X was computed. Since X is
computed in two half-steps per iteration, ITER may be a
half-integer, reported as ‘ITER + 0.5’.
− RESVEC is a vector containing the residual after each half and
each full iteration (the half iterations appear because X is
computed in two steps at each iteration). The total number of
full iterations performed is given by
‘(length (RESVEC) - 1) / 2’.
Let us consider a trivial problem with a tridiagonal matrix
n = 20;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1] * n ^ 2, 1, n)) + ...
toeplitz (sparse (1, 2, -1, 1, n) * n / 2, ...
sparse (1, 2, 1, 1, n) * n / 2);
b = A * ones (n, 1);
[M1, M2] = ilu (A); # in this tridiag case, it corresponds to lu (A)
M = M1 * M2;
Afun = @(x) A * x;
Mfun = @(x) M \ x;
M1fun = @(x) M1 \ x;
M2fun = @(x) M2 \ x;
EXAMPLE 1: simplest usage of ‘bicgstab’
x = bicgstab (A, b, [], n)
EXAMPLE 2: ‘bicgstab’ with a function which computes ‘A * X’
x = bicgstab (Afun, b, [], n)
EXAMPLE 3: ‘bicgstab’ with a preconditioner matrix M
x = bicgstab (A, b, 1e-06, n, M)
EXAMPLE 4: ‘bicgstab’ with a function as preconditioner
x = bicgstab (Afun, b, 1e-6, n, Mfun)
EXAMPLE 5: ‘bicgstab’ with preconditioner matrices M1 and M2
x = bicgstab (A, b, 1e-6, n, M1, M2)
EXAMPLE 6: ‘bicgstab’ with functions as preconditioners
x = bicgstab (Afun, b, 1e-6, n, M1fun, M2fun)
EXAMPLE 7: ‘bicgstab’ with a function that requires an additional
argument
function y = Ap (A, x, z) # compute A^z * x
y = x;
for i = 1:z
y = A * y;
endfor
endfunction
Apfun = @(x, p) Ap (A, x, p);
x = bicgstab (Apfun, b, [], [], [], [], [], 2);
EXAMPLE 8: explicit example to show that ‘bicgstab’ uses a right
preconditioner
[M1, M2] = ilu (A + 0.1 * eye (n)); # factorization of A perturbed
M = M1 * M2;
## reference solution computed by bicgstab after one iteration
[x_ref, fl] = bicgstab (A, b, [], 1, M)
## right preconditioning
[y, fl] = bicgstab (A / M, b, [], 1)
x = M \ y # compare x and x_ref
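Because ‘bicgstab’ records the residual after every half step, RESVEC and ITER are related as described above. A small sketch, not part of the original examples, reusing the A, b, and n defined in the setup:

```octave
[x, flag, relres, iter, resvec] = bicgstab (A, b, 1e-10, n);
## total number of full iterations actually performed
n_full = (length (resvec) - 1) / 2;
## iter may end in 0.5 if the method stopped at a half iteration
disp ([iter, n_full])
```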
References:
1. Y. Saad, ‘Iterative Methods for Sparse Linear Systems’, Second
edition, 2003, SIAM
See also: bicg, cgs, gmres, pcg, qmr, tfqmr.
-- : X = cgs (A, B, TOL, MAXIT, M1, M2, X0, ...)
-- : X = cgs (A, B, TOL, MAXIT, M, [], X0, ...)
-- : [X, FLAG, RELRES, ITER, RESVEC] = cgs (A, B, ...)
Solve ‘A x = b’, where A is a square matrix, using the Conjugate
Gradients Squared method.
The input arguments are:
− A is the matrix of the linear system and it must be square. A
can be passed as a matrix, function handle, or inline function
‘Afun’ such that ‘Afun(x) = A * x’. Additional parameters to
‘Afun’ are passed after X0.
− B is the right hand side vector. It must be a column vector
with the same number of rows as A.
− TOL is the relative tolerance; if not given or set to [], the
default value 1e-6 is used.
− MAXIT is the maximum number of iterations; if not given or
set to [], the default value ‘min (20, numel (b))’ is used.
− M1, M2 are the preconditioners. The preconditioner matrix is
given as ‘M = M1 * M2’. Both M1 and M2 can be passed as a
matrix or as a function handle or inline function ‘g’ such
that ‘g(x) = M1 \ x’ or ‘g(x) = M2 \ x’. If M1 is empty or
not passed, then no preconditioner is applied. The technique
used is right preconditioning, i.e., the system
‘A*inv(M)*y = b’ is solved and then ‘X = inv(M)*y’.
− X0 the initial guess, if not given or set to [] the default
value ‘zeros (size (b))’ is used.
The arguments which follow X0 are treated as parameters and are
passed in a proper way to any of the functions (A or M) which are
passed to ‘cgs’.
The output parameters are:
− X is the computed approximation. If the method does not
converge, then X is the iterate with the minimum residual.
− FLAG indicates the exit status:
− 0: the algorithm converged to within the chosen tolerance
− 1: the maximum number of iterations was reached before
convergence
− 2: the preconditioner matrix is singular
− 3: the algorithm reached stagnation
− 4: the algorithm can’t continue due to a division by zero
− RELRES is the relative residual, obtained as
‘norm (A*X - B) / norm (B)’.
− ITER is the iteration at which X was computed.
− RESVEC is a vector containing the residual at each iteration.
The total number of iterations performed is given by
‘length (RESVEC) - 1’.
Let us consider a trivial problem with a tridiagonal matrix
n = 20;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1] * n ^ 2, 1, n)) + ...
toeplitz (sparse (1, 2, -1, 1, n) * n / 2, ...
sparse (1, 2, 1, 1, n) * n / 2);
b = A * ones (n, 1);
[M1, M2] = ilu (A); # in this tridiag case, it corresponds to lu (A)
M = M1 * M2;
Afun = @(x) A * x;
Mfun = @(x) M \ x;
M1fun = @(x) M1 \ x;
M2fun = @(x) M2 \ x;
EXAMPLE 1: simplest usage of ‘cgs’
x = cgs (A, b, [], n)
EXAMPLE 2: ‘cgs’ with a function which computes ‘A * X’
x = cgs (Afun, b, [], n)
EXAMPLE 3: ‘cgs’ with a preconditioner matrix M
x = cgs (A, b, 1e-06, n, M)
EXAMPLE 4: ‘cgs’ with a function as preconditioner
x = cgs (Afun, b, 1e-6, n, Mfun)
EXAMPLE 5: ‘cgs’ with preconditioner matrices M1 and M2
x = cgs (A, b, 1e-6, n, M1, M2)
EXAMPLE 6: ‘cgs’ with functions as preconditioners
x = cgs (Afun, b, 1e-6, n, M1fun, M2fun)
EXAMPLE 7: ‘cgs’ with a function that requires an additional argument
function y = Ap (A, x, z) # compute A^z * x
y = x;
for i = 1:z
y = A * y;
endfor
endfunction
Apfun = @(x, p) Ap (A, x, p);
x = cgs (Apfun, b, [], [], [], [], [], 2);
EXAMPLE 8: explicit example to show that ‘cgs’ uses a right
preconditioner
[M1, M2] = ilu (A + 0.3 * eye (n)); # factorization of A perturbed
M = M1 * M2;
## reference solution computed by cgs after one iteration
[x_ref, fl] = cgs (A, b, [], 1, M)
## right preconditioning
[y, fl] = cgs (A / M, b, [], 1)
x = M \ y # compare x and x_ref
References:
1. Y. Saad, ‘Iterative Methods for Sparse Linear Systems’, Second
edition, 2003, SIAM
See also: pcg, bicgstab, bicg, gmres, qmr, tfqmr.
-- : X = gmres (A, B, RESTART, TOL, MAXIT, M1, M2, X0, ...)
-- : X = gmres (A, B, RESTART, TOL, MAXIT, M, [], X0, ...)
-- : [X, FLAG, RELRES, ITER, RESVEC] = gmres (A, B, ...)
Solve ‘A x = b’ using the Preconditioned GMRES iterative method
with restart, a.k.a. PGMRES(restart).
The input arguments are:
− A is the matrix of the linear system and it must be square. A
can be passed as a matrix, function handle, or inline function
‘Afun’ such that ‘Afun(x) = A * x’. Additional parameters to
‘Afun’ are passed after X0.
− B is the right hand side vector. It must be a column vector
with the same number of rows as A.
− RESTART is the number of iterations before the method
restarts. If it is [] or equal to N = numel (b), then
restarting is not applied.
− TOL is the required relative tolerance for the preconditioned
residual error, ‘inv (M) * (B - A * X)’. The iteration stops
if ‘norm (inv (M) * (B - A * X)) ≤ TOL * norm (inv (M) * B)’.
If TOL is omitted or empty, then a tolerance of 1e-6 is used.
− MAXIT is the maximum number of outer iterations; if not given
or set to [], the default value ‘min (10, N / RESTART)’ is
used. Note that if RESTART is empty, then MAXIT is the
maximum number of iterations. If RESTART and MAXIT are both
nonempty, then the maximum number of iterations is
‘RESTART * MAXIT’. If both RESTART and MAXIT are empty, then
the maximum number of iterations is set to ‘min (10, N)’.
− M1, M2 are the preconditioners. The preconditioner M is given
as ‘M = M1 * M2’. Both M1 and M2 can be passed as a matrix,
function handle, or inline function ‘g’ such that ‘g(x) = M1 \
x’ or ‘g(x) = M2 \ x’. If M1 is [] or not given, then the
preconditioner is not applied. The technique used is left
preconditioning, i.e., the system ‘inv(M) * A * X = inv(M) * B’
is solved instead of ‘A * X = B’.
− X0 is the initial guess, if not given or set to [], then the
default value ‘zeros (size (B))’ is used.
The arguments which follow X0 are treated as parameters and are
passed in a proper way to any of the functions (A, M, M1, or M2)
which are passed to ‘gmres’.
The outputs are:
− X is the computed approximation. If the method does not
converge, then X is the iterate with the minimum residual.
− FLAG indicates the exit status:
0 : iteration converged to within the specified tolerance
1 : maximum number of iterations exceeded
2 : the preconditioner matrix is singular
3 : algorithm reached stagnation (the relative difference between two
consecutive iterations is less than eps)
− RELRES is the value of the relative preconditioned residual of
the approximation X.
− ITER is a vector containing the number of outer iterations and
inner iterations performed to compute X. That is:
• ITER(1): the number of outer iterations, i.e., how many
times the method restarted (if RESTART is empty or equal
to N, then it is 1; otherwise 1 ≤ ITER(1) ≤ MAXIT).
• ITER(2): the number of iterations performed before the
restart, i.e., the method restarts when ‘ITER(2) =
RESTART’. If RESTART is empty or equal to N, then
1 ≤ ITER(2) ≤ MAXIT.
More precisely, the approximation X is computed at iteration
‘(ITER(1) - 1) * RESTART + ITER(2)’. Since the output X
corresponds to the solution with minimal preconditioned
residual, the total number of iterations that the method
performed is given by ‘length (RESVEC) - 1’.
− RESVEC is a vector containing the preconditioned relative
residual at each iteration, including the 0-th iteration ‘norm
(A * X0 - B)’.
Let us consider a trivial problem with a tridiagonal matrix
n = 20;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1] * n ^ 2, 1, n)) + ...
toeplitz (sparse (1, 2, -1, 1, n) * n / 2, ...
sparse (1, 2, 1, 1, n) * n / 2);
b = A * ones (n, 1);
restart = 5;
[M1, M2] = ilu (A); # in this tridiag case, it corresponds to lu (A)
M = M1 * M2;
Afun = @(x) A * x;
Mfun = @(x) M \ x;
M1fun = @(x) M1 \ x;
M2fun = @(x) M2 \ x;
EXAMPLE 1: simplest usage of ‘gmres’
x = gmres (A, b, [], [], n)
EXAMPLE 2: ‘gmres’ with a function which computes ‘A * X’
x = gmres (Afun, b, [], [], n)
EXAMPLE 3: usage of ‘gmres’ with the restart
x = gmres (A, b, restart);
EXAMPLE 4: ‘gmres’ with a preconditioner matrix M with and without
restart
x = gmres (A, b, [], 1e-06, n, M)
x = gmres (A, b, restart, 1e-06, n, M)
EXAMPLE 5: ‘gmres’ with a function as preconditioner
x = gmres (Afun, b, [], 1e-6, n, Mfun)
EXAMPLE 6: ‘gmres’ with preconditioner matrices M1 and M2
x = gmres (A, b, [], 1e-6, n, M1, M2)
EXAMPLE 7: ‘gmres’ with functions as preconditioners
x = gmres (Afun, b, [], 1e-6, n, M1fun, M2fun)
EXAMPLE 8: ‘gmres’ with a function that requires an additional argument
function y = Ap (A, x, p) # compute A^p * x
y = x;
for i = 1:p
y = A * y;
endfor
endfunction
Apfun = @(x, p) Ap (A, x, p);
x = gmres (Apfun, b, [], [], [], [], [], [], 2);
EXAMPLE 9: explicit example to show that ‘gmres’ uses a left
preconditioner
[M1, M2] = ilu (A + 0.1 * eye (n)); # factorization of A perturbed
M = M1 * M2;
## reference solution computed by gmres after one iteration
[x_ref, fl] = gmres (A, b, [], [], 1, M)
## left preconditioning
[x, fl] = gmres (M \ A, M \ b, [], [], 1)
x # compare x and x_ref
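The two-element ITER output can be combined as described above to recover the global iteration count. The following sketch is not part of the original example set; it reuses A, b, n, and restart from the setup above:

```octave
[x, flag, relres, iter, resvec] = gmres (A, b, restart, 1e-10, n);
## iter(1) = outer (restart) cycles, iter(2) = inner iterations
global_iter = (iter(1) - 1) * restart + iter(2);
## total number of iterations actually performed
total_iter = length (resvec) - 1;
```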
References:
1. Y. Saad, ‘Iterative Methods for Sparse Linear Systems’, Second
edition, 2003, SIAM
See also: bicg, bicgstab, cgs, pcg, pcr, qmr, tfqmr.
-- : X = qmr (A, B, RTOL, MAXIT, M1, M2, X0)
-- : X = qmr (A, B, RTOL, MAXIT, P)
-- : [X, FLAG, RELRES, ITER, RESVEC] = qmr (A, B, ...)
Solve ‘A x = b’ using the Quasi-Minimal Residual iterative method
(without look-ahead).
− RTOL is the relative tolerance; if not given or set to [],
the default value 1e-6 is used.
− MAXIT is the maximum number of iterations; if not given or
set to [], the default value ‘min (20, numel (b))’ is used.
− X0 is the initial guess; if not given or set to [], the
default value ‘zeros (size (b))’ is used.
A can be passed as a matrix or as a function handle or inline
function ‘f’ such that ‘f(x, "notransp") = A*x’ and ‘f(x, "transp")
= A'*x’.
The preconditioner P is given as ‘P = M1 * M2’. Both M1 and M2 can
be passed as a matrix or as a function handle or inline function
‘g’ such that ‘g(x, "notransp") = M1 \ x’ or ‘g(x, "notransp") = M2
\ x’ and ‘g(x, "transp") = M1' \ x’ or ‘g(x, "transp") = M2' \ x’.
If called with more than one output parameter, the additional
outputs are:
− FLAG indicates the exit status:
− 0: the algorithm converged to within the chosen tolerance
− 1: the maximum number of iterations was reached before
convergence
− 3: the algorithm reached stagnation
(the value 2 is unused but skipped for compatibility).
− RELRES is the final value of the relative residual.
− ITER is the number of iterations performed.
− RESVEC is a vector containing the residual norms at each
iteration.
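Unlike the other solvers in this section, this entry gives no usage example. A minimal sketch in the same style follows; the tridiagonal A and b mirror the setup used for ‘bicg’ and are assumptions here, not part of the original manual:

```octave
n = 20;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1] * n ^ 2, 1, n)) + ...
    toeplitz (sparse (1, 2, -1, 1, n) * n / 2, ...
              sparse (1, 2, 1, 1, n) * n / 2);
b = A * ones (n, 1);
## simplest usage
x = qmr (A, b);
## with a function handle computing A*x and A'*x, as for bicg
Afun = @(x, string) strcmp (string, "notransp") * (A * x) + ...
                    strcmp (string, "transp") * (A' * x);
x = qmr (Afun, b, 1e-6, n);
```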
References:
1. R. Freund and N. Nachtigal, ‘QMR: a quasi-minimal residual
method for non-Hermitian linear systems’, Numerische
Mathematik, 1991, 60, pp. 315-339.
2. R. Barrett, M. Berry, T. Chan, J. Demmel, J. Donato, J.
Dongarra, V. Eijkhout, R. Pozo, C. Romine, and H. van der
Vorst, ‘Templates for the solution of linear systems: Building
blocks for iterative methods’, SIAM, 2nd ed., 1994.
See also: bicg, bicgstab, cgs, gmres, pcg.
-- : X = tfqmr (A, B, TOL, MAXIT, M1, M2, X0, ...)
-- : X = tfqmr (A, B, TOL, MAXIT, M, [], X0, ...)
-- : [X, FLAG, RELRES, ITER, RESVEC] = tfqmr (A, B, ...)
Solve ‘A x = b’ using the Transpose-Free Quasi-Minimal Residual
(TFQMR) method, based on CGS.
The input parameters are:
− A is the matrix of the linear system and it must be square. A
can be passed as a matrix, function handle, or inline function
‘Afun’ such that ‘Afun(x) = A * x’. Additional parameters to
‘Afun’ are passed after X0.
− B is the right hand side vector. It must be a column vector
with the same number of rows as A.
− TOL is the relative tolerance; if not given or set to [], the
default value 1e-6 is used.
− MAXIT is the maximum number of iterations; if not given or
set to [], the default value ‘min (20, numel (b))’ is used.
For compatibility, since the method behaves differently
depending on whether the iteration number is odd or even, one
‘tfqmr’ iteration is counted as an entire odd-even cycle:
to complete one full iteration, the algorithm performs two
sub-iterations, the odd one and the even one.
− M1, M2 are the preconditioners. The preconditioner M is given
as ‘M = M1 * M2’. Both M1 and M2 can be passed as a matrix or
as a function handle or inline function ‘g’ such that ‘g(x) =
M1 \ x’ or ‘g(x) = M2 \ x’. The technique used is right
preconditioning, i.e., the system ‘A*inv(M)*y = b’ is solved
and then ‘x = inv(M)*y’, instead of ‘A x = b’.
− X0 is the initial guess; if not given or set to [], the
default value ‘zeros (size (b))’ is used.
The arguments which follow X0 are treated as parameters and are
passed in a proper way to any of the functions (A or M) which are
passed to ‘tfqmr’.
The output parameters are:
− X is the computed approximation. If the method does not
converge, then X is the iterate with the minimum residual.
− FLAG indicates the exit status:
− 0: the algorithm converged to within the chosen tolerance
− 1: the maximum number of iterations was reached before
convergence
− 2: the preconditioner matrix is singular
− 3: the algorithm reached stagnation
− 4: the algorithm can’t continue due to a division by zero
− RELRES is the relative residual, obtained as
‘norm (A*X - B) / norm (B)’.
− ITER is the iteration at which X was computed.
− RESVEC is a vector containing the residual at each iteration
(including ‘norm (b - A * x0)’). The total number of
iterations performed is given by ‘length (RESVEC) - 1’.
Let us consider a trivial problem with a tridiagonal matrix
n = 20;
A = toeplitz (sparse ([1, 1], [1, 2], [2, 1] * n ^ 2, 1, n)) + ...
toeplitz (sparse (1, 2, -1, 1, n) * n / 2, ...
sparse (1, 2, 1, 1, n) * n / 2);
b = A * ones (n, 1);
[M1, M2] = ilu (A); # in this tridiag case, it corresponds to lu (A)
M = M1 * M2;
Afun = @(x) A * x;
Mfun = @(x) M \ x;
M1fun = @(x) M1 \ x;
M2fun = @(x) M2 \ x;
EXAMPLE 1: simplest usage of ‘tfqmr’
x = tfqmr (A, b, [], n)
EXAMPLE 2: ‘tfqmr’ with a function which computes ‘A * X’
x = tfqmr (Afun, b, [], n)
EXAMPLE 3: ‘tfqmr’ with a preconditioner matrix M
x = tfqmr (A, b, 1e-06, n, M)
EXAMPLE 4: ‘tfqmr’ with a function as preconditioner
x = tfqmr (Afun, b, 1e-6, n, Mfun)
EXAMPLE 5: ‘tfqmr’ with preconditioner matrices M1 and M2
x = tfqmr (A, b, 1e-6, n, M1, M2)
EXAMPLE 6: ‘tfqmr’ with functions as preconditioners
x = tfqmr (Afun, b, 1e-6, n, M1fun, M2fun)
EXAMPLE 7: ‘tfqmr’ with a function that requires an additional argument
function y = Ap (A, x, z) # compute A^z * x
y = x;
for i = 1:z
y = A * y;
endfor
endfunction
Apfun = @(x, p) Ap (A, x, p);
x = tfqmr (Apfun, b, [], [], [], [], [], 2);
EXAMPLE 8: explicit example to show that ‘tfqmr’ uses a right
preconditioner
[M1, M2] = ilu (A + 0.3 * eye (n)); # factorization of A perturbed
M = M1 * M2;
## reference solution computed by tfqmr after one iteration
[x_ref, fl] = tfqmr (A, b, [], 1, M)
## right preconditioning
[y, fl] = tfqmr (A / M, b, [], 1)
x = M \ y # compare x and x_ref
References:
1. Y. Saad, ‘Iterative Methods for Sparse Linear Systems’, Second
edition, 2003, SIAM
See also: bicg, bicgstab, cgs, gmres, pcg, qmr, pcr.