LCOV - code coverage report
Current view: top level - src/elpa1/legacy_interface - elpa1.F90 (source / functions)
Test: coverage_50ab7a7628bba174fc62cee3ab72b26e81f87fe5.info
Date: 2018-01-10 09:29:53
Coverage: Lines: 4 / 4 (100.0 %) | Functions: 1 / 1 (100.0 %)

          Line data    Source code
       1             : !    This file is part of ELPA.
       2             : !
       3             : !    The ELPA library was originally created by the ELPA consortium,
       4             : !    consisting of the following organizations:
       5             : !
       6             : !    - Rechenzentrum Garching der Max-Planck-Gesellschaft (RZG),
       7             : !    - Bergische Universität Wuppertal, Lehrstuhl für angewandte
       8             : !      Informatik,
       9             : !    - Technische Universität München, Lehrstuhl für Informatik mit
      10             : !      Schwerpunkt Wissenschaftliches Rechnen,
      11             : !    - Fritz-Haber-Institut, Berlin, Abt. Theorie,
      12             : !    - Max-Planck-Institut für Mathematik in den Naturwissenschaften,
      13             : !      Leipzig, Abt. Komplexe Strukturen in Biologie und Kognition,
      14             : !      and
      15             : !    - IBM Deutschland GmbH
      16             : !
      17             : !    This particular source code file contains additions, changes and
      18             : !    enhancements authored by Intel Corporation, which is not part of
      19             : !    the ELPA consortium.
      20             : !
      21             : !    More information can be found here:
      22             : !    http://elpa.mpcdf.mpg.de/
      23             : !
      24             : !    ELPA is free software: you can redistribute it and/or modify
      25             : !    it under the terms of the version 3 of the license of the
      26             : !    GNU Lesser General Public License as published by the Free
      27             : !    Software Foundation.
      28             : !
      29             : !    ELPA is distributed in the hope that it will be useful,
      30             : !    but WITHOUT ANY WARRANTY; without even the implied warranty of
      31             : !    MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
      32             : !    GNU Lesser General Public License for more details.
      33             : !
      34             : !    You should have received a copy of the GNU Lesser General Public License
      35             : !    along with ELPA.  If not, see <http://www.gnu.org/licenses/>
      36             : !
      37             : !    ELPA reflects a substantial effort on the part of the original
      38             : !    ELPA consortium, and we ask you to respect the spirit of the
      39             : !    license that we chose: i.e., please contribute any changes you
      40             : !    may have back to the original ELPA library distribution, and keep
      41             : !    any derivatives of ELPA under the same license that we chose for
      42             : !    the original distribution, the GNU Lesser General Public License.
      43             : !
      44             : !
      45             : ! ELPA1 -- Faster replacements for ScaLAPACK symmetric eigenvalue routines
      46             : !
      47             : ! Copyright of the original code rests with the authors inside the ELPA
      48             : ! consortium. The copyright of any additional modifications shall rest
      49             : ! with their original authors, but shall adhere to the licensing terms
      50             : ! distributed along with the original code in the file "COPYING".
      51             : 
      52             : !> \mainpage
      53             : !> Eigenvalue SoLvers for Petaflop-Applications (ELPA)
      54             : !> \par
      55             : !> http://elpa.mpcdf.mpg.de
      56             : !>
      57             : !> \par
      58             : !>    The ELPA library was originally created by the ELPA consortium,
      59             : !>    consisting of the following organizations:
      60             : !>
      61             : !>    - Max Planck Computing and Data Facility (MPCDF) formerly known as
      62             : !>      Rechenzentrum Garching der Max-Planck-Gesellschaft (RZG),
      63             : !>    - Bergische Universität Wuppertal, Lehrstuhl für angewandte
      64             : !>      Informatik,
      65             : !>    - Technische Universität München, Lehrstuhl für Informatik mit
      66             : !>      Schwerpunkt Wissenschaftliches Rechnen,
      67             : !>    - Fritz-Haber-Institut, Berlin, Abt. Theorie,
      68             : !>    - Max-Planck-Institut für Mathematik in den Naturwissenschaften,
      69             : !>      Leipzig, Abt. Komplexe Strukturen in Biologie und Kognition,
      70             : !>      and
      71             : !>    - IBM Deutschland GmbH
      72             : !>
      73             : !>   Some parts and enhancements of ELPA have been contributed and authored
      74             : !>   by Intel Corporation, which is not part of the ELPA consortium.
      75             : !>
      76             : !>   Contributions to the ELPA source have been authored by (in alphabetical order):
      77             : !>
      78             : !> \author T. Auckenthaler, Volker Blum, A. Heinecke, L. Huedepohl, R. Johanni, Werner Jürgens, and A. Marek
      79             : 
      80             : 
      81             : #include "config-f90.h"
      82             : 
      83             : !> \brief Fortran module which provides the routines to use the one-stage ELPA solver
      84             : module elpa1
      85             :   use, intrinsic :: iso_c_binding
      86             :   use elpa_utilities
      87             :   use elpa1_auxiliary
      88             :   !use elpa1_utilities
      89             : 
      90             :   implicit none
      91             : 
      92             :   ! The following routines are public:
      93             :   private
      94             : 
      95             :   public :: elpa_get_communicators               !< Sets MPI row/col communicators as needed by ELPA
      96             : 
      97             :   public :: elpa_solve_evp_real_1stage_double    !< Driver routine for real double-precision 1-stage eigenvalue problem
      98             : 
      99             :   public :: solve_evp_real_1stage                !< Driver routine for real double-precision eigenvalue problem
     100             :   public :: solve_evp_real_1stage_double         !< Driver routine for real double-precision eigenvalue problem
     101             : #ifdef WANT_SINGLE_PRECISION_REAL
     102             :   public :: solve_evp_real_1stage_single         !< Driver routine for real single-precision eigenvalue problem
     103             :   public :: elpa_solve_evp_real_1stage_single    !< Driver routine for real single-precision 1-stage eigenvalue problem
     104             : 
     105             : #endif
     106             :   public :: elpa_solve_evp_complex_1stage_double !< Driver routine for complex 1-stage eigenvalue problem
     107             :   public :: solve_evp_complex_1stage             !< Driver routine for complex double-precision eigenvalue problem
     108             :   public :: solve_evp_complex_1stage_double      !< Driver routine for complex double-precision eigenvalue problem
     109             : #ifdef WANT_SINGLE_PRECISION_COMPLEX
     110             :   public :: solve_evp_complex_1stage_single      !< Driver routine for complex single-precision eigenvalue problem
     111             :   public :: elpa_solve_evp_complex_1stage_single !< Driver routine for complex 1-stage eigenvalue problem
     112             : #endif
     113             : 
      114             :   ! imported from elpa1_auxiliary
     115             : 
     116             :   public :: elpa_mult_at_b_real_double       !< Multiply double-precision real matrices A**T * B
     117             :   public :: mult_at_b_real                   !< old, deprecated interface to multiply double-precision real matrices A**T * B  DO NOT USE
     118             : 
     119             :   public :: elpa_mult_ah_b_complex_double    !< Multiply double-precision complex matrices A**H * B
      120             :   public :: mult_ah_b_complex                !< old, deprecated interface to multiply double-precision complex matrices A**H * B  DO NOT USE
     121             : 
     122             :   public :: elpa_invert_trm_real_double      !< Invert double-precision real triangular matrix
     123             :   public :: invert_trm_real                  !< old, deprecated interface to invert double-precision real triangular matrix  DO NOT USE
     124             : 
     125             :   public :: elpa_invert_trm_complex_double   !< Invert double-precision complex triangular matrix
     126             :   public :: invert_trm_complex               !< old, deprecated interface to invert double-precision complex triangular matrix  DO NOT USE
     127             : 
     128             :   public :: elpa_cholesky_real_double        !< Cholesky factorization of a double-precision real matrix
     129             :   public :: cholesky_real                    !< old, deprecated interface to do Cholesky factorization of a double-precision real matrix  DO NOT USE
     130             : 
     131             :   public :: elpa_cholesky_complex_double     !< Cholesky factorization of a double-precision complex matrix
     132             :   public :: cholesky_complex                 !< old, deprecated interface to do Cholesky factorization of a double-precision complex matrix  DO NOT USE
     133             : 
     134             :   public :: elpa_solve_tridi_double          !< Solve a double-precision tridiagonal eigensystem with divide and conquer method
     135             : 
     136             : #ifdef WANT_SINGLE_PRECISION_REAL
     137             :   public :: elpa_mult_at_b_real_single       !< Multiply single-precision real matrices A**T * B
     138             :   public :: elpa_invert_trm_real_single      !< Invert single-precision real triangular matrix
     139             :   public :: elpa_cholesky_real_single        !< Cholesky factorization of a single-precision real matrix
     140             :   public :: elpa_solve_tridi_single          !< Solve a single-precision tridiagonal eigensystem with divide and conquer method
     141             : #endif
     142             : 
     143             : #ifdef WANT_SINGLE_PRECISION_COMPLEX
     144             :   public :: elpa_mult_ah_b_complex_single    !< Multiply single-precision complex matrices A**H * B
     145             :   public :: elpa_invert_trm_complex_single   !< Invert single-precision complex triangular matrix
     146             :   public :: elpa_cholesky_complex_single     !< Cholesky factorization of a single-precision complex matrix
     147             : #endif
     148             : 
     149             :   ! Timing results, set by every call to solve_evp_xxx
     150             : 
     151             :   real(kind=c_double), public :: time_evp_fwd    !< time for forward transformations (to tridiagonal form)
     152             :   real(kind=c_double), public :: time_evp_solve  !< time for solving the tridiagonal system
     153             :   real(kind=c_double), public :: time_evp_back   !< time for back transformations of eigenvectors
     154             : 
     155             :   logical, public :: elpa_print_times = .false. !< Set elpa_print_times to .true. for explicit timing outputs
     156             : 
     157             : 
     158             :   interface solve_evp_real_1stage
     159             :     module procedure elpa_solve_evp_real_1stage_double
     160             :   end interface
     161             : 
     162             : !> \brief elpa_solve_evp_real_1stage_double: Fortran function to solve the real eigenvalue problem with 1-stage solver. This is called by "elpa_solve_evp_real"
     163             : !>
     164             : !  Parameters
     165             : !
     166             : !> \param  na                   Order of matrix a
     167             : !>
     168             : !> \param  nev                  Number of eigenvalues needed.
     169             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     170             : !>
     171             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     172             : !>                              Distribution is like in Scalapack.
     173             : !>                              The full matrix must be set (not only one half like in scalapack).
     174             : !>                              Destroyed on exit (upper and lower half).
     175             : !>
     176             : !>  \param lda                  Leading dimension of a
     177             : !>
     178             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     179             : !>
     180             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     181             : !>                              Distribution is like in Scalapack.
      182             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     183             : !>                              even if only a part of the eigenvalues is needed.
     184             : !>
     185             : !>  \param ldq                  Leading dimension of q
     186             : !>
     187             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     188             : !>
     189             : !>  \param matrixCols           distributed number of matrix columns
     190             : !>
     191             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     192             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     193             : !>
     194             : !>  \result                     success
     195             :   !interface elpa_solve_evp_real_1stage_double
     196             :    ! module procedure solve_evp_real_1stage_double
     197             :   !end interface
     198             : 
     199             :   interface solve_evp_real_1stage_double
     200             :     module procedure elpa_solve_evp_real_1stage_double
     201             :   end interface
     202             : 
     203             : 
      204             : !> \brief solve_evp_complex_1stage: old, deprecated Fortran interface to solve the complex eigenvalue problem with the 1-stage solver. It will be removed at some point; better use "elpa_solve_evp_complex_1stage_double" or "elpa_solve_evp_complex" instead.
     205             : !>
     206             : !> \details
     207             : !> The interface and variable definition is the same as in "elpa_solve_evp_complex_1stage_double"
     208             : !  Parameters
     209             : !
     210             : !> \param  na                   Order of matrix a
     211             : !>
     212             : !> \param  nev                  Number of eigenvalues needed.
     213             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     214             : !>
     215             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     216             : !>                              Distribution is like in Scalapack.
     217             : !>                              The full matrix must be set (not only one half like in scalapack).
     218             : !>                              Destroyed on exit (upper and lower half).
     219             : !>
     220             : !>  \param lda                  Leading dimension of a
     221             : !>
     222             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     223             : !>
     224             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     225             : !>                              Distribution is like in Scalapack.
      226             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     227             : !>                              even if only a part of the eigenvalues is needed.
     228             : !>
     229             : !>  \param ldq                  Leading dimension of q
     230             : !>
     231             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     232             : !>
     233             : !>  \param matrixCols           distributed number of matrix columns
     234             : !>
     235             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     236             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     237             : !>
     238             : !>  \result                     success
     239             :   interface solve_evp_complex_1stage
     240             :     module procedure elpa_solve_evp_complex_1stage_double
     241             :   end interface
     242             : 
     243             : !> \brief solve_evp_complex_1stage_double: Fortran function to solve the complex eigenvalue problem with 1-stage solver. This is called by "elpa_solve_evp_complex"
     244             : !>
     245             : !  Parameters
     246             : !
     247             : !> \param  na                   Order of matrix a
     248             : !>
     249             : !> \param  nev                  Number of eigenvalues needed.
     250             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     251             : !>
     252             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     253             : !>                              Distribution is like in Scalapack.
     254             : !>                              The full matrix must be set (not only one half like in scalapack).
     255             : !>                              Destroyed on exit (upper and lower half).
     256             : !>
     257             : !>  \param lda                  Leading dimension of a
     258             : !>
     259             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     260             : !>
     261             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     262             : !>                              Distribution is like in Scalapack.
      263             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     264             : !>                              even if only a part of the eigenvalues is needed.
     265             : !>
     266             : !>  \param ldq                  Leading dimension of q
     267             : !>
     268             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     269             : !>
     270             : !>  \param matrixCols           distributed number of matrix columns
     271             : !>
     272             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     273             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     274             : !>
     275             : !>  \result                     success
     276             :   interface solve_evp_complex_1stage_double
     277             :     module procedure elpa_solve_evp_complex_1stage_double
     278             :   end interface
     279             : 
     280             : #ifdef WANT_SINGLE_PRECISION_REAL
     281             : !> \brief solve_evp_real_1stage_single: Fortran function to solve the real single-precision eigenvalue problem with 1-stage solver
     282             : !>
     283             : !  Parameters
     284             : !
     285             : !> \param  na                   Order of matrix a
     286             : !>
     287             : !> \param  nev                  Number of eigenvalues needed.
     288             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     289             : !>
     290             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     291             : !>                              Distribution is like in Scalapack.
     292             : !>                              The full matrix must be set (not only one half like in scalapack).
     293             : !>                              Destroyed on exit (upper and lower half).
     294             : !>
     295             : !>  \param lda                  Leading dimension of a
     296             : !>
     297             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     298             : !>
     299             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     300             : !>                              Distribution is like in Scalapack.
      301             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     302             : !>                              even if only a part of the eigenvalues is needed.
     303             : !>
     304             : !>  \param ldq                  Leading dimension of q
     305             : !>
     306             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     307             : !>
     308             : !>  \param matrixCols           distributed number of matrix columns
     309             : !>
     310             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     311             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     312             : !>
     313             : !>  \result                     success
     314             : 
     315             :   interface solve_evp_real_1stage_single
     316             :     module procedure elpa_solve_evp_real_1stage_single
     317             :   end interface
     318             : #endif
     319             : 
     320             : #ifdef WANT_SINGLE_PRECISION_COMPLEX
     321             : !> \brief solve_evp_complex_1stage_single: Fortran function to solve the complex single-precision eigenvalue problem with 1-stage solver
     322             : !>
     323             : !  Parameters
     324             : !
     325             : !> \param  na                   Order of matrix a
     326             : !>
     327             : !> \param  nev                  Number of eigenvalues needed.
     328             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     329             : !>
     330             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     331             : !>                              Distribution is like in Scalapack.
     332             : !>                              The full matrix must be set (not only one half like in scalapack).
     333             : !>                              Destroyed on exit (upper and lower half).
     334             : !>
     335             : !>  \param lda                  Leading dimension of a
     336             : !>
     337             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     338             : !>
     339             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     340             : !>                              Distribution is like in Scalapack.
      341             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     342             : !>                              even if only a part of the eigenvalues is needed.
     343             : !>
     344             : !>  \param ldq                  Leading dimension of q
     345             : !>
     346             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     347             : !>
     348             : !>  \param matrixCols           distributed number of matrix columns
     349             : !>
     350             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     351             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     352             : !>  \param mpi_comm_all         global MPI communicator
      353             : !>  \param useGPU               use GPU version (.true. or .false.)
     354             : !>
     355             : !>  \result                     success
     356             : interface solve_evp_complex_1stage_single
     357             :   module procedure elpa_solve_evp_complex_1stage_single
     358             : end interface
     359             : #endif
     360             : 
     361             : 
     362             : contains
     363             : 
     364             : !-------------------------------------------------------------------------------
     365             : 
      366             : ! All ELPA routines need MPI communicators for communicating within
      367             : ! rows or columns of processes; these are set here.
      368             : ! mpi_comm_rows/mpi_comm_cols can be freed with MPI_Comm_free when no longer needed.
     369             : !
     370             : !  Parameters
     371             : !
     372             : !> \param  mpi_comm_global   Global communicator for the calculations (in)
     373             : !>
     374             : !> \param  my_prow           Row coordinate of the calling process in the process grid (in)
     375             : !>
     376             : !> \param  my_pcol           Column coordinate of the calling process in the process grid (in)
     377             : !>
     378             : !> \param  mpi_comm_rows     Communicator for communicating within rows of processes (out)
     379             : !>
     380             : !> \param  mpi_comm_cols     Communicator for communicating within columns of processes (out)
     381             : !> \result mpierr            integer error value of mpi_comm_split function
     382             : 
     383             : 
     384       12288 : function elpa_get_communicators(mpi_comm_global, my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols) result(mpierr)
     385             :    ! use precision
     386             :    use elpa_mpi
     387             :    use iso_c_binding
     388             :    implicit none
     389             : 
     390             :    integer(kind=c_int), intent(in)  :: mpi_comm_global, my_prow, my_pcol
     391             :    integer(kind=c_int), intent(out) :: mpi_comm_rows, mpi_comm_cols
     392             : 
     393             :    integer(kind=c_int)              :: mpierr
     394             : 
     395             :    ! mpi_comm_rows is used for communicating WITHIN rows, i.e. all processes
     396             :    ! having the same column coordinate share one mpi_comm_rows.
     397             :    ! So the "color" for splitting is my_pcol and the "key" is my row coordinate.
     398             :    ! Analogous for mpi_comm_cols
     399             : 
     400       12288 :    call mpi_comm_split(mpi_comm_global,my_pcol,my_prow,mpi_comm_rows,mpierr)
     401       12288 :    call mpi_comm_split(mpi_comm_global,my_prow,my_pcol,mpi_comm_cols,mpierr)
     402             : 
     403       12288 : end function elpa_get_communicators
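The function above can be illustrated with a minimal caller sketch. It assumes MPI is already available and uses an illustrative row-major mapping from the world rank to grid coordinates (np_rows x np_cols); the grid setup is not part of ELPA and is shown only to motivate the my_prow/my_pcol arguments:

```fortran
program get_communicators_sketch
  use elpa1, only : elpa_get_communicators
  use mpi
  implicit none
  integer :: nprocs, my_rank, np_rows, np_cols
  integer :: my_prow, my_pcol, mpi_comm_rows, mpi_comm_cols, mpierr

  call mpi_init(mpierr)
  call mpi_comm_size(MPI_COMM_WORLD, nprocs, mpierr)
  call mpi_comm_rank(MPI_COMM_WORLD, my_rank, mpierr)

  ! Illustrative grid: as square as possible, row-major rank ordering.
  do np_cols = int(sqrt(real(nprocs))), 1, -1
     if (mod(nprocs, np_cols) == 0) exit
  end do
  np_rows = nprocs / np_cols
  my_prow = my_rank / np_cols
  my_pcol = mod(my_rank, np_cols)

  ! Derive the row and column communicators required by all ELPA routines.
  mpierr = elpa_get_communicators(MPI_COMM_WORLD, my_prow, my_pcol, &
                                  mpi_comm_rows, mpi_comm_cols)

  ! ... call the ELPA solvers with mpi_comm_rows/mpi_comm_cols here ...

  call mpi_comm_free(mpi_comm_rows, mpierr)
  call mpi_comm_free(mpi_comm_cols, mpierr)
  call mpi_finalize(mpierr)
end program get_communicators_sketch
```

Note that the result value is the error code of the last mpi_comm_split call, so it can be checked against MPI_SUCCESS like any other MPI return value.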
     404             : 
     405             : 
     406             : !> \brief elpa_solve_evp_real_1stage_double: Fortran function to solve the real double-precision eigenvalue problem with 1-stage solver
     407             : !>
     408             : !  Parameters
     409             : !
     410             : !> \param  na                   Order of matrix a
     411             : !>
     412             : !> \param  nev                  Number of eigenvalues needed.
     413             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     414             : !>
     415             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     416             : !>                              Distribution is like in Scalapack.
     417             : !>                              The full matrix must be set (not only one half like in scalapack).
     418             : !>                              Destroyed on exit (upper and lower half).
     419             : !>
     420             : !>  \param lda                  Leading dimension of a
     421             : !>
     422             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     423             : !>
     424             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     425             : !>                              Distribution is like in Scalapack.
      426             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     427             : !>                              even if only a part of the eigenvalues is needed.
     428             : !>
     429             : !>  \param ldq                  Leading dimension of q
     430             : !>
     431             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     432             : !>
     433             : !>  \param matrixCols           distributed number of matrix columns
     434             : !>
     435             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     436             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     437             : !>  \param mpi_comm_all         global MPI communicator
     438             : !>  \param useGPU              use GPU version (.true. or .false.)
     439             : !>
     440             : !>  \result                     success
     441             : 
     442             : #define REALCASE 1
     443             : #define DOUBLE_PRECISION 1
     444             : #include "../../general/precision_macros.h"
     445             : #include "./elpa1_template.F90"
     446             : #undef REALCASE
     447             : #undef DOUBLE_PRECISION
     448             : 
     449             : #ifdef WANT_SINGLE_PRECISION_REAL
     450             : !> \brief elpa_solve_evp_real_1stage_single: Fortran function to solve the real single-precision eigenvalue problem with 1-stage solver
     451             : !>
     452             : !  Parameters
     453             : !
     454             : !> \param  na                   Order of matrix a
     455             : !>
     456             : !> \param  nev                  Number of eigenvalues needed.
     457             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     458             : !>
     459             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     460             : !>                              Distribution is like in ScaLAPACK.
     461             : !>                              The full matrix must be set (not only one half as in ScaLAPACK).
     462             : !>                              Destroyed on exit (upper and lower half).
     463             : !>
     464             : !>  \param lda                  Leading dimension of a
     465             : !>
     466             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     467             : !>
     468             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     469             : !>                              Distribution is like in ScaLAPACK.
     470             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     471             : !>                              even if only a part of the eigenvalues is needed.
     472             : !>
     473             : !>  \param ldq                  Leading dimension of q
     474             : !>
     475             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     476             : !>
     477             : !>  \param matrixCols           distributed number of matrix columns
     478             : !>
     479             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     480             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     481             : !>  \param mpi_comm_all         global MPI communicator
     482             : !>  \param useGPU               use GPU version (.true. or .false.)
     483             : !>
     484             : !>  \result                     success
     485             : 
     486             : #define REALCASE 1
     487             : #define SINGLE_PRECISION 1
     488             : #include "../../general/precision_macros.h"
     489             : #include "./elpa1_template.F90"
     490             : #undef REALCASE
     491             : #undef SINGLE_PRECISION
     492             : #endif /* WANT_SINGLE_PRECISION_REAL */
     493             : 
     494             : !> \brief elpa_solve_evp_complex_1stage_double: Fortran function to solve the complex double-precision eigenvalue problem with 1-stage solver
     495             : !>
     496             : !  Parameters
     497             : !
     498             : !> \param  na                   Order of matrix a
     499             : !>
     500             : !> \param  nev                  Number of eigenvalues needed.
     501             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     502             : !>
     503             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     504             : !>                              Distribution is like in ScaLAPACK.
     505             : !>                              The full matrix must be set (not only one half as in ScaLAPACK).
     506             : !>                              Destroyed on exit (upper and lower half).
     507             : !>
     508             : !>  \param lda                  Leading dimension of a
     509             : !>
     510             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     511             : !>
     512             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     513             : !>                              Distribution is like in ScaLAPACK.
     514             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     515             : !>                              even if only a part of the eigenvalues is needed.
     516             : !>
     517             : !>  \param ldq                  Leading dimension of q
     518             : !>
     519             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     520             : !>
     521             : !>  \param matrixCols           distributed number of matrix columns
     522             : !>
     523             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     524             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     525             : !>  \param mpi_comm_all         global MPI communicator
     526             : !>  \param useGPU              use GPU version (.true. or .false.)
     527             : !>
     528             : !>  \result                     success
     529             : #define COMPLEXCASE 1
     530             : #define DOUBLE_PRECISION 1
     531             : #include "../../general/precision_macros.h"
     532             : #include "./elpa1_template.F90"
     533             : #undef DOUBLE_PRECISION
     534             : #undef COMPLEXCASE
     535             : 
     536             : 
     537             : #ifdef WANT_SINGLE_PRECISION_COMPLEX
     538             : 
     539             : !> \brief elpa_solve_evp_complex_1stage_single: Fortran function to solve the complex single-precision eigenvalue problem with 1-stage solver
     540             : !>
     541             : !  Parameters
     542             : !
     543             : !> \param  na                   Order of matrix a
     544             : !>
     545             : !> \param  nev                  Number of eigenvalues needed.
     546             : !>                              The smallest nev eigenvalues/eigenvectors are calculated.
     547             : !>
     548             : !> \param  a(lda,matrixCols)    Distributed matrix for which eigenvalues are to be computed.
     549             : !>                              Distribution is like in ScaLAPACK.
     550             : !>                              The full matrix must be set (not only one half as in ScaLAPACK).
     551             : !>                              Destroyed on exit (upper and lower half).
     552             : !>
     553             : !>  \param lda                  Leading dimension of a
     554             : !>
     555             : !>  \param ev(na)               On output: eigenvalues of a, every processor gets the complete set
     556             : !>
     557             : !>  \param q(ldq,matrixCols)    On output: Eigenvectors of a
     558             : !>                              Distribution is like in ScaLAPACK.
     559             : !>                              Must always be dimensioned to the full size (corresponding to (na,na))
     560             : !>                              even if only a part of the eigenvalues is needed.
     561             : !>
     562             : !>  \param ldq                  Leading dimension of q
     563             : !>
     564             : !>  \param nblk                 blocksize of cyclic distribution, must be the same in both directions!
     565             : !>
     566             : !>  \param matrixCols           distributed number of matrix columns
     567             : !>
     568             : !>  \param mpi_comm_rows        MPI-Communicator for rows
     569             : !>  \param mpi_comm_cols        MPI-Communicator for columns
     570             : !>  \param mpi_comm_all         global MPI communicator
     571             : !>  \param useGPU               use GPU version (.true. or .false.)
     572             : !>
     573             : !>  \result                     success
     574             : 
     575             : #define COMPLEXCASE 1
     576             : #define SINGLE_PRECISION 1
     577             : #include "../../general/precision_macros.h"
     578             : #include "./elpa1_template.F90"
     579             : #undef COMPLEXCASE
     580             : #undef SINGLE_PRECISION
     581             : #endif /* WANT_SINGLE_PRECISION_COMPLEX */
     582             : 
     583             : end module elpa1

Generated by: LCOV version 1.12