Lapack functions
    

These functions are mainly available for dense matrices. When a function is also available for sparse matrices, this is specified and its use is detailed. For dense matrices, Lapack subroutines are called, so you need to define SELDON_WITH_BLAS and SELDON_WITH_LAPACK.
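
For dense matrices, a typical file therefore starts as follows (a minimal sketch; linking against the BLAS and Lapack libraries is assumed):

// BLAS and Lapack interfaces for dense matrices
#define SELDON_WITH_BLAS
#define SELDON_WITH_LAPACK

#include "Seldon.hxx"

using namespace Seldon;

int main()
{
  // the Lapack functions described below can now be used
  return 0;
}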

Lapack flags

Several classes have been implemented as matrix flags:

  • SeldonTranspose : specifies whether the original matrix, its transpose or its conjugate transpose is used (e.g. in a product).
  • SeldonSide : specifies the left or right side (e.g. left or right multiplication, left or right eigenvectors).
  • SeldonConjugate : specifies whether a system is conjugated or not.
  • SeldonNorm : specifies which norm to use (1-norm or infinity-norm).
  • SeldonUplo : specifies whether the upper or lower triangular part of a matrix is used.
  • SeldonDiag : specifies whether the diagonal is assumed to contain only ones (unit diagonal) or not.

Below is an example showing how to use these flags:

// if you want to define a function with trans
// which can be either transpose, no transpose or conjugate transpose
// you declare trans as SeldonTranspose
void Mlt(const SeldonTranspose& trans, const Matrix<double>& A, const Vector<double>& x, Vector<double>& y)
{
  if (trans.NoTrans())
    {
      // for the non transpose case (y = A x)
    }
  else if (trans.Trans())
    {
      // transpose case (y = A^T x)
    }
  else if (trans.ConjTrans())
    {
      // transpose conjugate case (y = A^H x)      
    }
}

// if you want to avoid an "if" inside your function, you can write three functions:
void MltFast(const class_SeldonNoTrans& trans, const Matrix<double>& A, const Vector<double>& x, Vector<double>& y)
{
   // for the non transpose case (y = A x)
}

void MltFast(const class_SeldonTrans& trans, const Matrix<double>& A, const Vector<double>& x, Vector<double>& y)
{
  // transpose case (y = A^T x)
}

void MltFast(const class_SeldonConjTrans& trans, const Matrix<double>& A, const Vector<double>& x, Vector<double>& y)
{
   // transpose conjugate case (y = A^H x)      
}
      
int main()
{
  Matrix<double> A;
  Vector<double> x, y;
  
  // you can call the function Mlt
  // with SeldonTrans, SeldonNoTrans or SeldonConjTrans (which are instances of class SeldonTranspose)
  Mlt(SeldonNoTrans, A, x, y); // y = A x
  Mlt(SeldonTrans, A, x, y); // y = A^T x
  Mlt(SeldonConjTrans, A, x, y); // y = A^H x

  // MltFast works in the same way (you save an "if" with MltFast)
  MltFast(SeldonNoTrans, A, x, y); // y = A x
  MltFast(SeldonTrans, A, x, y); // y = A^T x
  MltFast(SeldonConjTrans, A, x, y); // y = A^H x

  // the following classes work in the same way as SeldonTranspose (see file MatrixFlag.hxx):

  // SeldonTranspose : SeldonNoTrans, SeldonTrans and SeldonConjTrans
  // methods : Trans(), NoTrans() and ConjTrans()
  
  // SeldonDiag : SeldonNonUnit and SeldonUnit (to specify whether the diagonal is unit or not)
  // methods : Unit() and NonUnit()
  
  // SeldonUplo : SeldonUpper and SeldonLower (to use the upper or lower triangular part of a matrix)
  // methods : Lower() and Upper()
  
  // SeldonNorm : SeldonNormInf and SeldonNorm1

  // SeldonConjugate : SeldonConj and SeldonUnConj
  // method : Conj()
  
  // SeldonSide : SeldonLeft and SeldonRight (multiplication on the left or right)
  // method : Left() and Right()
}
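
In the same way, the remaining flags can be used to dispatch inside your own functions. The sketch below is illustrative only (SolveTriangular is not a Seldon function); it merely shows the methods Upper()/Lower() and Unit()/NonUnit() listed above:

// hypothetical user function dispatching on SeldonUplo and SeldonDiag
void SolveTriangular(const SeldonUplo& uplo, const SeldonDiag& diag,
                     const Matrix<double>& A, Vector<double>& x)
{
  if (uplo.Upper())
    {
      // solve with the upper triangular part of A
      // if diag.Unit(), the diagonal is assumed to contain only ones
    }
  else
    {
      // solve with the lower triangular part of A (uplo.Lower())
    }
}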

Functions

GetLU performs an LU (or LDL^T) factorization
SolveLU solves a linear system by using an LU factorization
GetAndSolveLU factorization and solution of a linear system
RefineSolutionLU improves the solution computed by SolveLU
GetCholesky performs a Cholesky (A = LL^T) factorization for symmetric positive definite matrices
SolveCholesky solves linear system by using Cholesky factorization
MltCholesky performs matrix vector product by using Cholesky factorization
ReciprocalConditionNumber computes the inverse of matrix condition number
GetScalingFactors computes row and column scalings to equilibrate a matrix
GetInverse computes the matrix inverse
GetQR QR factorization of matrix
GetQR_Pivot QR factorization of matrix with pivoting
GetLQ LQ factorization of matrix
GetQ_FromQR explicitly forms Q from a QR factorization
MltQ_FromQR multiplies a matrix by Q
GetQ_FromLQ explicitly forms Q from an LQ factorization
MltQ_FromLQ multiplies a matrix by Q
SolveQR solves least-square problems by using QR factorization
SolveLQ solves least-square problems by using LQ factorization
GetEigenvalues computes eigenvalues
GetEigenvaluesEigenvectors computes eigenvalues and eigenvectors
GetSVD performs singular value decomposition (SVD)
GetHessenberg reduces a dense matrix to its Hessenberg form
GetQZ reduces matrices A and B to quasi-triangular forms
SolveHessenberg solution of H X = Y with H a Hessenberg matrix
SolveHessenbergTwo solution of H X = Y with H an upper triangular matrix with two non-zero subdiagonals
SolveSylvester solution of the Sylvester equation A X B^H + C X D^H = E
GetPseudoInverse computes the pseudo-inverse of a given matrix



GetLU, SolveLU

Syntax :

  void GetLU(Matrix&, Vector<int>& pivot);
  void SolveLU(const Matrix&, const Vector<int>& pivot, Vector&);
  void GetLU(Matrix&, MatrixMumps&);
  void SolveLU(MatrixMumps&, Vector&);
  void GetLU(Matrix&, MatrixPastix&);
  void SolveLU(MatrixPastix&, Vector&);
  void GetLU(Matrix&, MatrixSuperLU&);
  void SolveLU(MatrixSuperLU&, Vector&);
  void GetLU(Matrix&, MatrixUmfPack&);
  void SolveLU(MatrixUmfPack&, Vector&);
  void GetAndSolveLU(Matrix&, Vector&);

GetLU performs an LU factorization (or an LDL^T factorization for symmetric matrices) of the provided matrix. This function is implemented both for dense and sparse matrices. In the case of sparse matrices, Seldon is interfaced with external libraries, namely MUMPS, UmfPack, SuperLU and Pastix. You need to define SELDON_WITH_MUMPS, SELDON_WITH_SUPERLU, SELDON_WITH_PASTIX and/or SELDON_WITH_UMFPACK if you want to factorize a sparse matrix. After a call to GetLU, you can call SolveLU to solve a linear system by using the computed factorization. A class enabling the choice between the different direct solvers has also been implemented; its use is detailed in the section devoted to direct solvers. If you want to perform a factorization immediately followed by a solution, you can use the function GetAndSolveLU; note that this function clears the factorization at the end, so it is not available if you need to compute other solutions afterwards.

Example :

// lapack for dense matrices
#define SELDON_WITH_LAPACK

// external libraries for sparse matrices
#define SELDON_WITH_MUMPS
#define SELDON_WITH_SUPERLU
#define SELDON_WITH_UMFPACK
#define SELDON_WITH_PASTIX

#include "Seldon.hxx"
#include "SeldonSolver.hxx"

using namespace Seldon;

int main()
{
  // dense matrices
  Matrix<double> A(3, 3);
  Vector<int> pivot;

  // initialization of A
  A.FillRand();

  // factorization
  GetLU(A, pivot);

  // now you can solve the linear system A x = b
  Vector<double> X(3), B(3);
  B.Fill();
  X = B;
  SolveLU(A, pivot, X);
  
  // sparse matrices, use of Mumps for example
  int n = 3;
  MatrixMumps<double> mat_lu;
  Matrix<double, General, ArrayRowSparse> Asp(n, n);
  // you add all non-zero entries to matrix Asp
  // then you call GetLU to factorize the matrix
  GetLU(Asp, mat_lu);
  // Asp is empty after GetLU
  // if you want to keep it, GetLU(Asp, mat_lu, true)
  // you can solve Asp x = b 
  X = B;
  SolveLU(mat_lu, X);

  // If you have a single right-hand side
  // you can factorize and solve in a single step
  // if Seldon is interfaced with one direct solver (Mumps, Pastix, etc)
  // this solver will be used
  Asp.Reallocate(n, n); //...
  X = B;
  GetAndSolveLU(Asp, X);
  // Asp should be empty after calling this function
  
  return 0;
}

Related topics :

Direct solvers for sparse linear systems

Location :

Lapack_LinearEquations.cxx
Mumps.cxx
SuperLU.cxx
UmfPack.cxx
Pastix.cxx
SparseSolver.cxx

RefineSolutionLU

Syntax :

  void RefineSolutionLU(const Matrix&, const Matrix&, Vector<int>&,
                        Vector&, const Vector&, T&, T&);

RefineSolutionLU improves the computed solution to a system of linear equations and provides error bounds and backward error estimates for the solution. This function should be called after GetLU and SolveLU.

Example :

Matrix<double> A(3, 3);
Matrix<double> mat_lu(3, 3);
// initialization of A
A.Fill();
// factorization of A
mat_lu = A;
Vector<int> pivot;
GetLU(mat_lu, pivot);

// solution of linear system
Vector<double> X(3), B(3);
B.Fill();
X = B;
SolveLU(mat_lu, pivot, X);

// now we can call RefineSolutionLU
// backward error
double berr;
// forward error
double ferr;
RefineSolutionLU(A, mat_lu, pivot, X, B, ferr, berr);

Related topics :

GetLU

Location :


Lapack_LinearEquations.cxx

GetCholesky, SolveCholesky, MltCholesky

Syntax :

  void GetCholesky(Matrix&);
  void SolveCholesky(SeldonTrans, const Matrix&, Vector&);
  void SolveCholesky(SeldonNoTrans, const Matrix&, Vector&);
  void MltCholesky(SeldonTrans, const Matrix&, Vector&);
  void MltCholesky(SeldonNoTrans, const Matrix&, Vector&);

GetCholesky performs a Cholesky factorization (A = LL^T) of the provided matrix. This function is implemented both for dense and sparse symmetric matrices. For sparse matrices, it is preferable to use the Cholmod interface (#define SELDON_WITH_CHOLMOD). SolveCholesky performs a solve with L or L^T, and MltCholesky performs a matrix-vector product with L or L^T. For sparse matrices, you can use SparseCholeskySolver, which is detailed in the section devoted to direct solvers.

Example :

// lapack for dense matrices
#define SELDON_WITH_LAPACK
// Cholmod for sparse matrices
#define SELDON_WITH_CHOLMOD

#include "Seldon.hxx"

int main()
{
  // symmetric matrix
  int n = 3;
  Matrix<double, Symmetric, RowSymPacked> A(n, n);

  // for example, you can set A = C^T*C (which is what the MltAdd call below computes)
  // if you don't provide a positive definite matrix, GetCholesky will fail
  Matrix<double, General, RowMajor> C(n, n), A2(n, n);
  C.FillRand(); Mlt(1e-9, C); A2.Fill(0);
  MltAdd(1.0, SeldonTrans, C, SeldonNoTrans, C, 0.0, A2);
  for (int i = 0; i < n; i++)
    for (int j = i; j < n; j++)
      A(i, j) = A2(i, j);

  // Cholesky factorization
  GetCholesky(A);

  Vector<double> X(n), B(n);
  X.Fill();

  // you can perform a matrix vector product by A : b = A*x = L L^T x
  // first step : b = L^T x
  B = X;
  MltCholesky(SeldonTrans, A, B);
  // second step : b = L b
  MltCholesky(SeldonNoTrans, A, B);

  // now you can solve the linear system A x = b, ie L L^T x = b
  // first step x = L \ b
  X = B;
  SolveCholesky(SeldonNoTrans, A, X);
  // second step x = L^T \ x
  SolveCholesky(SeldonTrans, A, X);

  // for sparse matrices, use Cholmod
  MatrixCholmod mat_chol;
  Matrix<double, Symmetric, ArrayRowSymSparse> M(n, n);
  GetCholesky(M, mat_chol);
  // M should be empty after GetCholesky
  // if you want to keep it, GetCholesky(M, mat_chol, true);
  // first step x = L \ b
  X = B;
  SolveCholesky(SeldonNoTrans, mat_chol, X);
  // second step x = L^T \ x
  SolveCholesky(SeldonTrans, mat_chol, X);
  
  // MltCholesky is also available

  return 0;
}

Location :

Lapack_LinearEquations.cxx
Cholmod.cxx
SparseCholeskyFactorisation.cxx

ReciprocalConditionNumber

Syntax :

  T ReciprocalConditionNumber(const Matrix&, const Vector<int>&,
                              SeldonNorm1, T&);
  T ReciprocalConditionNumber(const Matrix&, const Vector<int>&,
                              SeldonNormInf, T&);

This function estimates the reciprocal of the condition number of a matrix A, in either the 1-norm or the infinity-norm, using the LU factorization computed by GetLU.

Example :

Matrix<double> A(3, 3);
// initialization of A
A.Fill();
// computation of 1-norm and infinity-norm of matrix
double anorm_one = Norm1(A);
double anorm_inf = NormInf(A);
// factorization of A
Vector<int> pivot;
GetLU(A, pivot);

// computation of reciprocal of condition number in 1-norm
double rcond = ReciprocalConditionNumber(A, pivot, SeldonNorm1, anorm_one);
// or infinity norm
rcond = ReciprocalConditionNumber(A, pivot, SeldonNormInf, anorm_inf);
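
The returned value is the reciprocal of the condition number, so the condition number itself (in the chosen norm) is recovered by a simple inversion, for example:

// condition number in the corresponding norm
// (meaningful only if rcond is not too close to zero)
double cond = 1.0 / rcond;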

Related topics :

GetLU

Location :

Lapack_LinearEquations.cxx

GetScalingFactors

Syntax :

  void GetScalingFactors(const Matrix&, Vector&, Vector&, T&, T&, T&);

This function computes row and column scalings intended to equilibrate a matrix and reduce its condition number. It is only defined for the storages RowMajor and ColMajor (unsymmetric dense matrices).

Example :

Matrix<double> A(5, 3);
// initialization of A
A.Fill();

// computation of scaling
int m = A.GetM(), n = A.GetN();
Vector<double> row_scale(m), col_scale(n);
double row_condition_number, col_condition_number, amax;

GetScalingFactors(A, row_scale, col_scale, row_condition_number, col_condition_number, amax);
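
The factors can then be applied entrywise to equilibrate the matrix. This is an illustrative sketch, assuming the usual Lapack (dgeequ) convention where the scaled matrix is diag(row_scale) * A * diag(col_scale):

// scaled matrix : A(i, j) <- row_scale(i) * A(i, j) * col_scale(j)
for (int i = 0; i < m; i++)
  for (int j = 0; j < n; j++)
    A(i, j) *= row_scale(i) * col_scale(j);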

Location :

Lapack_LinearEquations.cxx

GetInverse

Syntax :

  void GetInverse(Matrix&);

This function overwrites a dense matrix with its inverse.

Example :

Matrix<double> A(3, 3);
// initialization of A
A.Fill();

// computation of the inverse
GetInverse(A);

Related topics :

GetLU

Location :

Lapack_LinearEquations.cxx

GetQR, SolveQR

Syntax :

  void GetQR(Matrix&, Vector&)
  void GetQR(Matrix&, Matrix&, Matrix&)
  void SolveQR(Matrix&, Vector&, Vector&)

GetQR computes the QR factorization of a rectangular matrix, while SolveQR exploits this factorization to solve a least-squares problem. These functions are only defined for the storages RowMajor and ColMajor (unsymmetric dense matrices). You can also form the factors Q and R explicitly, but this requires more storage.

Example :

Matrix<double> A(5, 3);
// initialization of A
A.Fill();

// QR Factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetQR(A, tau);

// Solving Least squares problem QR X = B  (m >= n)
Vector<double> X, B(m);
B.Fill();
SolveQR(A, tau, X);

// you can also compute Q and R such that A = Q R
A.Fill();
Matrix<double> Q, R;
GetQR(A, Q, R);

Location :

Lapack_LeastSquares.cxx

GetQR_Pivot

Syntax :

  void GetQR_Pivot(Matrix&, Vector&, Vector<int>& pivot)

GetQR_Pivot computes the QR factorization of a rectangular matrix with column pivoting (Lapack subroutine dgeqp3); it is only implemented for the ColMajor storage.

Example :

Matrix<double> A(5, 3);
// initialization of A
A.Fill();

// QR Factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
Vector<int> pivot;
GetQR_Pivot(A, tau, pivot);

// Solving Least squares problem QR X = B  (m >= n)
Vector<double> X, B(m);
B.Fill();
SolveQR(A, tau, X);

Location :

Lapack_LeastSquares.cxx

GetLQ, SolveLQ

Syntax :

  void GetLQ(Matrix<T>&, Vector<T>&);
  void GetLQ(Matrix&, Matrix&, Matrix&)
  void SolveLQ(Matrix<T>&, Vector<T>&, Vector<T>&);

GetLQ computes the LQ factorization of a rectangular matrix, while SolveLQ exploits this factorization to solve a least-squares problem. These functions are only defined for the storages RowMajor and ColMajor (unsymmetric dense matrices). You can also form the factors L and Q explicitly, but this requires more storage.

Example :

Matrix<double> A(3, 5);
// initialization of A
A.Fill();

// LQ Factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetLQ(A, tau);

// Solving Least squares problem LQ X = B  (m <= n)
Vector<double> X, B(m);
B.Fill();
SolveLQ(A, tau, X);

// you can also compute Q and L such that A = L Q
A.Fill();
Matrix<double> Q, L;
GetLQ(A, L, Q);

Location :

Lapack_LeastSquares.cxx

MltQ_FromQR, MltQ_FromLQ

Syntax :

  void MltQ_FromQR(const Side&, const Trans&, const Matrix&,
                   const Vector&, Matrix&);
  void MltQ_FromLQ(const Side&, const Trans&, const Matrix&,
                   const Vector&, Matrix&);

This function multiplies a matrix by Q, where Q is the orthogonal matrix computed during QR factorization (or LQ factorization). This is only defined for storages RowMajor and ColMajor (unsymmetric dense matrices).

Example :

Matrix<double> A(5, 3);
// initialization of A
A.Fill();

// QR Factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetQR(A, tau);

// computation of Q*C
Matrix<double> C(m, m);
MltQ_FromQR(SeldonLeft, SeldonNoTrans, A, tau, C);

// you can compute C*transpose(Q)
MltQ_FromQR(SeldonRight, SeldonTrans, A, tau, C);

// for complex numbers, you have SeldonConjTrans

Location :

Lapack_LeastSquares.cxx

GetQ_FromQR, GetQ_FromLQ

Syntax :

  void GetQ_FromQR(Matrix&, Vector&);
  void GetQ_FromLQ(Matrix&, Vector&);

This function overwrites the QR factorization (or LQ factorization) with the matrix Q. It is only defined for the storages RowMajor and ColMajor (unsymmetric dense matrices).

Example :

Matrix<double> A(5, 3);
// initialization of A
A.Fill();

// QR Factorization
int m = A.GetM(), n = A.GetN();
Vector<double> tau;
GetQR(A, tau);

Matrix<double> Q = A;
GetQ_FromQR(Q, tau);

// same thing with an LQ factorization (m, n and tau are reused)
A.Fill();
GetLQ(A, tau);

Q = A;
GetQ_FromLQ(Q, tau);

Location :

Lapack_LeastSquares.cxx

GetEigenvalues

Syntax :

  void GetEigenvalues(Matrix&, Vector&);
  void GetEigenvalues(Matrix&, Vector&, Vector&);
  void GetEigenvalues(Matrix&, Matrix&, Vector&);
  void GetEigenvalues(Matrix&, Matrix&, Vector&, Vector&);
  void GetEigenvalues(Matrix&, Matrix&, Vector&, Vector&, Vector&);

This function computes the eigenvalues of a matrix. The matrix is modified by this function.

Example :

Matrix<double> A(5, 5);
Vector<double> lambda_real, lambda_imag;
// initialization of A
A.Fill();

// computing eigenvalues (real part and imaginary part)
GetEigenvalues(A, lambda_real, lambda_imag);

// for symmetric matrices, eigenvalues are real
Matrix<double, Symmetric, RowSymPacked> B(5, 5);
Vector<double> lambda;
// initialization of B
B.Fill();
GetEigenvalues(B, lambda);

// for hermitian matrices too
Matrix<complex<double>, General, RowHermPacked> C(5, 5);
// initialization of C
C.Fill();
GetEigenvalues(C, lambda);

// other complex matrices -> complex eigenvalues
Matrix<complex<double>, Symmetric, RowSymPacked> D(5, 5);
Vector<complex<double> > lambda_cpx;
// initialization of D
D.Fill();
GetEigenvalues(D, lambda_cpx);

The function can also solve a generalized eigenvalue problem, as detailed in the following example.

Example :

// symmetric matrices A, B
Matrix<double, Symmetric, RowSymPacked> A(5, 5), B(5, 5);
Vector<double> lambda;
// initialization of A and B as you want
// B has to be positive definite
A.FillRand(); B.SetIdentity();

// we solve generalized eigenvalue problem
// i.e. seeking lambda so that A x = lambda B x
// the function assumes that B is POSITIVE DEFINITE
GetEigenvalues(A, B, lambda);

// same use for hermitian matrices
Matrix<complex<double>, General, RowHermPacked> Ah(5, 5), Bh(5,5);
// initialize Ah and Bh as you want
// Bh has to be positive definite
// as a result, the eigenvalues are real
GetEigenvalues(Ah, Bh, lambda);

// other complex matrices
// provide complex eigenvalues, potentially infinite if B is indefinite
Matrix<complex<double> > C(5, 5), D(5, 5);
Vector<complex<double> > alphac, betac;

// eigenvalues are written in the form lambda = alphac/betac
GetEigenvalues(C, D, alphac, betac);

// for unsymmetric real matrices, real part and imaginary are stored
// in different vectors
Matrix<double> Ar(5, 5), Br(5, 5);
Vector<double> alpha_real, alpha_imag, beta;

// lambda are written in the form lambda = (alpha_real,alpha_imag)/beta
GetEigenvalues(Ar, Br, alpha_real, alpha_imag, beta);

Location :

Lapack_Eigenvalues.cxx

GetEigenvaluesEigenvectors

Syntax :

  void GetEigenvaluesEigenvectors(Matrix&, Vector&, Matrix&);
  void GetEigenvaluesEigenvectors(Matrix&, Vector&, Vector&, Matrix&);
  void GetEigenvaluesEigenvectors(Matrix&, Matrix&, Vector&, Matrix&);
  void GetEigenvaluesEigenvectors(Matrix&, Matrix&, Vector&, Vector&,
                                  Matrix&);
  void GetEigenvaluesEigenvectors(Matrix&, Matrix&, Vector&, Vector&,
                                  Vector&, Matrix&);

This function computes the eigenvalues and eigenvectors of a matrix. The matrix is modified by the call. Each eigenvector is stored in a column of the eigenvector matrix. For real unsymmetric matrices, you compute the real and imaginary parts of the eigenvalues; if the j-th eigenvalue is real, the j-th column contains the associated eigenvector, whereas if the j-th and (j+1)-th eigenvalues form a complex conjugate pair, the j-th and (j+1)-th columns contain vectors A and B such that A+iB and A-iB are the associated complex conjugate eigenvectors of the initial matrix. This function has also been overloaded for sparse eigenvalue problems, by calling an external eigenvalue solver (such as Arpack or Anasazi).

Example :

Matrix<double> A(5, 5);
Vector<double> lambda_real, lambda_imag;
Matrix<double> eigen_vectors;
// initialization of A
A.Fill();

// computing eigenvalues and eigenvectors
GetEigenvaluesEigenvectors(A, lambda_real, lambda_imag, eigen_vectors);

// for symmetric matrices, eigenvalues are real
Matrix<double, Symmetric, RowSymPacked> B(5, 5);
Vector<double> lambda;
// initialization of B
B.Fill();
GetEigenvaluesEigenvectors(B, lambda, eigen_vectors);

// for hermitian matrices too (eigenvectors are complex)
Matrix<complex<double>, General, RowHermPacked> C(5, 5);
Matrix<complex<double> > eigen_vec_cpx;
// initialization of C
C.Fill();
GetEigenvaluesEigenvectors(C, lambda, eigen_vec_cpx);

// other complex matrices -> complex eigenvalues and complex eigenvectors
Matrix<complex<double>, Symmetric, RowSymPacked> D(5, 5);
Vector<complex<double> > lambda_cpx;
// initialization of D
D.Fill();
GetEigenvaluesEigenvectors(D, lambda_cpx, eigen_vec_cpx);
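
For the real unsymmetric matrix A above, the eigenvector associated with a complex eigenvalue can be reconstructed from two consecutive columns of eigen_vectors, as explained previously. A small illustrative sketch (j is a hypothetical index such that lambda_imag(j) > 0):

// eigenvector associated with lambda_real(j) + i*lambda_imag(j)
int j = 0;
int n = eigen_vectors.GetM();
Vector<complex<double> > v(n);
for (int k = 0; k < n; k++)
  v(k) = complex<double>(eigen_vectors(k, j), eigen_vectors(k, j + 1));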

As for GetEigenvalues, this function can also be used to solve generalized eigenvalue problems, as detailed in the following example.

Example :

// symmetric matrices A, B
Matrix<double, Symmetric, RowSymPacked> A(5, 5), B(5, 5);
Vector<double> lambda;
Matrix<double> eigen_vectors;
// initialization of A and B as you want
// B has to be positive definite
A.FillRand(); B.SetIdentity();

// we solve generalized eigenvalue problem
// i.e. seeking lambda so that A x = lambda B x
// the function assumes that B is POSITIVE DEFINITE
GetEigenvaluesEigenvectors(A, B, lambda, eigen_vectors);

// same use for hermitian matrices
Matrix<complex<double>, General, RowHermPacked> Ah(5, 5), Bh(5,5);
// initialize Ah and Bh as you want
// Bh has to be positive definite
// as a result, the eigenvalues are real (eigenvectors are complex)
Matrix<complex<double> > eigen_vec_cpx;
GetEigenvaluesEigenvectors(Ah, Bh, lambda, eigen_vec_cpx);

// other complex matrices
// provide complex eigenvalues, potentially infinite if B is indefinite
Matrix<complex<double> > C(5, 5), D(5, 5);
Vector<complex<double> > alphac, betac;

// eigenvalues are written in the form lambda = alphac/betac
GetEigenvaluesEigenvectors(C, D, alphac, betac, eigen_vec_cpx);

// for unsymmetric real matrices, real part and imaginary are stored
// in different vectors
Matrix<double> Ar(5, 5), Br(5, 5);
Vector<double> alpha_real, alpha_imag, beta;

// lambda are written in the form lambda = (alpha_real,alpha_imag)/beta
GetEigenvaluesEigenvectors(Ar, Br, alpha_real, alpha_imag, beta, eigen_vectors);

Location :

Lapack_Eigenvalues.cxx

GetSVD

Syntax :

  void GetSVD(Matrix&, Vector&, Matrix&, Matrix&);

This function computes the singular value decomposition of a rectangular matrix; as a result, it is only defined for the storages RowMajor and ColMajor.

Example :

Matrix<double> A(10, 5);
Vector<double> lambda;
Matrix<double> U, V;
// initialization of A
A.Fill();

// computing singular value decomposition
// A = U diag(lambda) V
GetSVD(A, lambda, U, V);
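
Since Lapack returns the singular values in decreasing order, they can for instance be used to estimate the 2-norm condition number of A (a small sketch under that assumption):

// spectral (2-norm) condition number : largest singular value / smallest one
double cond = lambda(0) / lambda(lambda.GetM() - 1);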

Location :

Lapack_Eigenvalues.cxx

GetHessenberg

Syntax :

  void GetHessenberg(Matrix& A, Matrix& Q);
  void GetHessenberg(Matrix& A, Matrix& B, Matrix& Q, Matrix& Z);

This function reduces a matrix A to its Hessenberg form, Q being an orthogonal matrix that is generated during the procedure.

Example :

// square matrix
Matrix<double> A(10, 10), Q;
A.FillRand();

// computation of H and Q, such that
// Q^H A Q = H
// A is replaced by H
// equivalent Matlab function : [Q, A] = hess(A)
GetHessenberg(A, Q);

// If you consider two matrices A, B, you can compute
// AA, BB, Q and Z such that
// Q^H A Z = AA  and Q^H B Z = BB
// A and B are overwritten with matrices AA and BB
// Equivalent Matlab function :
// [A, B, Q, Z] = hess(A, B); Q = Q'; Z = Z';
Matrix<double> B(10, 10), Z;
B.FillRand();
GetHessenberg(A, B, Q, Z);

Location :

Lapack_Eigenvalues.cxx

GetQZ

Syntax :

  void GetQZ(Matrix& A, Matrix& B, Matrix& Q, Matrix& Z);

This function reduces the matrices A and B to quasi-triangular forms, Q and Z being orthogonal matrices that are generated during the procedure.

Example :

// square matrix
Matrix<double> A(10, 10), Q;
A.FillRand();

// If you consider two matrices A, B, you can compute
// AA, BB, Q and Z such that
// Q^H A Z = AA  and Q^H B Z = BB
// A and B are overwritten with matrices AA and BB
// Equivalent Matlab function :
// [A, B, Q, Z] = qz(A, B); Q = Q'; Z = Z';
Matrix<double> B(10, 10), Z;
B.FillRand();
GetQZ(A, B, Q, Z);

Location :

Lapack_Eigenvalues.cxx

SolveHessenberg

Syntax :

  void SolveHessenberg(Matrix& A, Vector& x);

This function replaces x by the solution of A y = x, assuming that A is a Hessenberg matrix (upper triangular plus one subdiagonal). A is modified in the process, so it can only be used to solve a single system.

Example :

// Hessenberg matrix (upper triangular + subdiagonal)
Matrix<double> A(10, 10), Q;
A.FillRand();
for (int i = 0; i < 10; i++)
  for (int j = 0; j < i-1; j++)
    A(i, j) = 0;

// solution of A x = b
Vector<double> x, b(10);
b.FillRand(); x = b;
SolveHessenberg(A, x);

Location :

Lapack_Eigenvalues.cxx

SolveHessenbergTwo

Syntax :

  void SolveHessenbergTwo(Matrix& A, Vector& x);

This function replaces x by the solution of A y = x, assuming that A is an upper triangular matrix plus two subdiagonals. A is modified in the process, so it can only be used to solve a single system.

Example :

// 2-Hessenberg matrix (upper triangular + two subdiagonals)
Matrix<double> A(10, 10), Q;
A.FillRand();
for (int i = 0; i < 10; i++)
  for (int j = 0; j < i-2; j++)
    A(i, j) = 0;

// solution of A x = b
Vector<double> x, b(10);
b.FillRand(); x = b;
SolveHessenbergTwo(A, x);

Location :

Lapack_Eigenvalues.cxx

SolveSylvester

Syntax :

  void SolveSylvester(Matrix& A, Matrix& B, Matrix& C, Matrix& D, Matrix& E);

This function replaces E by the solution of A X B^H + C X D^H = E.

Example :

// Sylvester equation : A X B^H + C X D^H = E
Matrix<complex<double> > A(10, 10), B(10, 10), C(10, 10), D(10, 10), E(10, 10);
A.FillRand(); B.FillRand(); C.FillRand();
D.FillRand(); E.FillRand();

// E is overwritten by solution, A, B, C and D are modified in the process
SolveSylvester(A, B, C, D, E);

Location :

Lapack_Eigenvalues.cxx

GetPseudoInverse

Syntax :

  void GetPseudoInverse(Matrix& A, double epsilon);

This function replaces the matrix A by its pseudo-inverse.

Example :

// we fill a rectangular matrix
Matrix<double> A(12, 10);
A.FillRand();

// A is overwritten by its pseudo-inverse
// the second argument is a threshold below which singular values are treated as zero
GetPseudoInverse(A, 1e-15);
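
Once A holds its pseudo-inverse, a least-squares solution of the original system A x = b can be obtained with a simple matrix-vector product. This is an illustrative sketch (Mlt computes the matrix-vector product x = A b):

// pinv(A) is now stored in A (of size 10 x 12)
Vector<double> b(12), x(10);
b.FillRand();
// x = pinv(A) * b : least-squares solution of the original system
Mlt(A, b, x);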

Location :

Lapack_Eigenvalues.cxx