The RPA equations (3) and (4) constitute a non-hermitian
eigenproblem with a
non-positive-definite norm.
We solve this problem with an
iterative method that, during each
iteration, needs only the product of the RPA matrix and a density
vector, that is, the right-hand sides of Eqs. (3) and (4):
Various iterative methods that need only the products of the matrix
to be diagonalized with vectors exist for
non-hermitian matrix eigenvalue equations; good examples with pseudocode
are given in Ref. [1]. We chose the non-hermitian Lanczos
method [5] in a modified form, because it conserves all
odd-power energy-weighted sum rules (EWSRs) if the starting vector
(pivot) of the iteration is chosen suitably.
In this work, we start from a pivot vector whose elements
are set to the matrix elements of the electromagnetic multipole operator.
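As an illustration, a pivot of this kind can be assembled from the particle-hole matrix elements of the multipole operator. The following Python sketch is ours, not the authors' code; the function name, the array layout (X-type block followed by Y-type block), and the conjugation of the lower block are assumptions that must be matched to the conventions of Eqs. (3) and (4).

```python
import numpy as np

def multipole_pivot(f_ph):
    """Pivot built from the particle-hole matrix elements f_ph = <p|F|h>
    of a hermitian multipole operator F (illustrative sketch).

    For hermitian F the lower (Y-type) block holds <h|F|p> = f_ph*.
    The overall phase convention assumed here must agree with the
    RPA equations of the text.
    """
    f_ph = np.asarray(f_ph, dtype=complex)
    return np.concatenate([f_ph, f_ph.conj()])
```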
Because we calculate the RPA matrix-vector products with the mean-field
method, and not with a precalculated RPA matrix, we introduce small
but significant numerical noise into the resulting vectors. If corrective
measures are not taken to remove or reduce this noise, the iterative
method fails and produces complex RPA eigenvalues early on in the
iteration. We stabilize our iterative solution method by modifying
the method of Ref. [5] in two ways. First, we use the
non-hermitian Arnoldi method instead of the non-hermitian Lanczos method.
The advantage of the Arnoldi method is that it orthogonalizes each new
basis vector against all previous basis vectors and their opposite-norm
partners, that is,
In the Lanczos method, only the tridiagonal part of the RPA matrix is calculated, and small changes in the basis vectors due to Lanczos re-orthogonalization (which must always be used to preserve orthogonality) do not show up in the constructed RPA matrix. In the Arnoldi method, these small but important elements outside the tridiagonal part improve the stability compared to Lanczos.
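An Arnoldi step with the indefinite RPA metric can be sketched in Python as follows. This is an illustrative reconstruction, not the authors' implementation: the vector layout (X block stacked on top of the Y block), the function names, and the single modified Gram-Schmidt pass per iteration are assumptions.

```python
import numpy as np

def eta_dot(u, v, n):
    """Indefinite RPA inner product <u, v> = u^dagger diag(I, -I) v."""
    return np.vdot(u[:n], v[:n]) - np.vdot(u[n:], v[n:])

def rpa_arnoldi(matvec, pivot, n, m):
    """Arnoldi iteration with the indefinite RPA metric (sketch).

    matvec : callable returning the RPA supermatrix product S v
    pivot  : starting vector of length 2 n
    n      : particle-hole dimension (X and Y blocks each of length n)
    m      : number of Krylov basis vectors to build (assumed m < 2 n)
    """
    V = []                      # eta-orthonormal basis vectors
    sigma = []                  # signs of their indefinite norms
    h = np.zeros((m, m), dtype=complex)   # projected (Krylov-space) matrix
    w = pivot / np.sqrt(abs(eta_dot(pivot, pivot, n)))
    V.append(w)
    sigma.append(np.sign(eta_dot(w, w, n).real))
    for k in range(m - 1):
        w = matvec(V[k])
        # orthogonalize against ALL previous vectors (the Arnoldi step)
        for j, vj in enumerate(V):
            h[j, k] = sigma[j] * eta_dot(vj, w, n)
            w = w - h[j, k] * vj
        nrm = eta_dot(w, w, n).real
        h[k + 1, k] = np.sqrt(abs(nrm))   # residual magnitude
        V.append(w / h[k + 1, k])
        sigma.append(np.sign(nrm))
    return np.array(V), np.array(sigma), h
```

With an exact supermatrix of the structure S = [[A, B], [-B*, -A*]], the generated basis is orthonormal with respect to the indefinite metric, up to roundoff.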
The norm of the new residual vector obtained in Eq. (12) can
be either positive or negative. In practice, we do not use
Eq. (13), which in exact arithmetic would duplicate the
results of Eq. (12). Instead, we store only the positive-norm basis
states and use a method similar to that of Ref. [5] to
change the sign of the norm when the norm of the residual
vector in Eq. (12)
is negative. Thus, explicitly, for the positive norm
of the residual vector
,
we define the new normalized positive-norm basis vector as
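One way to realize this sign handling is sketched below; this is an illustrative Python fragment, not the exact procedure of Ref. [5]. The assumption made here is that a negative-norm residual is replaced by its opposite-norm partner (Y*, X*), whose RPA norm is positive, before normalizing.

```python
import numpy as np

def rpa_norm(w, n):
    """Indefinite RPA norm <w, w> = |X|^2 - |Y|^2 for w = (X, Y)."""
    return (np.vdot(w[:n], w[:n]) - np.vdot(w[n:], w[n:])).real

def normalize_with_sign(w, n):
    """Normalize a residual vector, flipping to its positive-norm
    partner when the RPA norm is negative (sketch of one possible
    sign handling; the bookkeeping of the sign must be carried into
    the projected Krylov-space matrix)."""
    nrm = rpa_norm(w, n)
    if nrm < 0:
        # the opposite-norm partner (Y*, X*) has norm -nrm > 0
        w = np.concatenate([w[n:].conj(), w[:n].conj()])
        nrm = -nrm
    return w / np.sqrt(nrm)
```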
When the maximum number of iterations has been performed or the
iteration has been stopped earlier, the generated Krylov-space
RPA matrix, with dimension , is diagonalized with
standard methods; that is, we solve:
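The diagonalization of the small projected matrix can be done with any dense non-hermitian eigensolver. The sketch below uses NumPy; the function name and the name `hk` for the Krylov-space RPA matrix are ours, and discarding eigenvalues with a non-negligible imaginary part is an assumed post-processing step.

```python
import numpy as np

def krylov_rpa_energies(hk, tol=1e-8):
    """Diagonalize the small Krylov-space RPA matrix hk with a standard
    dense eigensolver and return its (approximately) real eigenvalues,
    sorted ascending, together with the right eigenvectors."""
    evals, evecs = np.linalg.eig(hk)    # general non-hermitian solver
    keep = np.abs(evals.imag) < tol     # drop spurious complex pairs
    return np.sort(evals.real[keep]), evecs
```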
The standard RPA method, which constructs and diagonalizes the full RPA matrix, can ensure that the lower matrices in the RPA supermatrix are exact complex conjugates of the upper matrices. Our mean-field method can produce small differences between the implicitly used upper and lower RPA matrices due to finite numerical precision. The consequence is that we will in general have and . This spoils the consistency of Eqs. (12) and (13) and can cause the Arnoldi iteration to fail and produce complex-energy solutions.
The numerical errors in the matrix-vector products can be reduced by
symmetrization. We thus calculate the RPA fields twice, first using
the densities of a positive-norm basis vector
, and second using the densities of the negative-norm
vector
. The two
resulting vectors are subtracted from each other to obtain the final
stabilized RPA matrix-vector product,
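The symmetrized product can be sketched as follows. This is an illustrative Python reconstruction under the assumed supermatrix structure S = [[A, B], [-B*, -A*]] and the (X, Y) vector layout; the function names are ours. It exploits the exact identity S v = -P(S P(v)), where P maps (X, Y) to the opposite-norm partner (Y*, X*), so that averaging the two noisy mean-field products cancels errors that break this symmetry.

```python
import numpy as np

def opposite_norm_partner(w, n):
    """Map an RPA vector (X, Y) to its opposite-norm partner (Y*, X*)."""
    return np.concatenate([w[n:].conj(), w[:n].conj()])

def symmetrized_matvec(matvec, v, n):
    """Stabilized RPA matrix-vector product (sketch).

    The two mean-field products are combined so that for an exact
    supermatrix the result equals the plain product, while noise that
    violates the upper/lower conjugation symmetry is averaged out."""
    w1 = matvec(v)
    w2 = matvec(opposite_norm_partner(v, n))
    return 0.5 * (w1 - opposite_norm_partner(w2, n))
```

For an exactly structured supermatrix the symmetrized product reproduces the plain product; the benefit appears only when the implicit upper and lower blocks differ by numerical noise.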