correct the SO version which was wrong in ELPA 2021.05.001
allow the user to set the mapping of MPI tasks to GPU ids via set/get
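A minimal sketch of how such a mapping could look from the C interface; the
key name "use_gpu_id" and the round-robin scheme are assumptions, not taken
from this changelog, so check the option list of your ELPA release:

    #include <mpi.h>
    #include <elpa/elpa.h>

    /* Round-robin mapping of MPI ranks to GPU device ids.
     * The option name "use_gpu_id" is an assumption. */
    void map_rank_to_gpu(elpa_t handle, int gpus_per_node)
    {
        int rank, error, gpu_id, check;

        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        gpu_id = rank % gpus_per_node;   /* simple round-robin mapping */

        elpa_set(handle, "use_gpu_id", gpu_id, &error);

        /* read the value back to verify the mapping was accepted */
        elpa_get(handle, "use_gpu_id", &check, &error);
    }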
experimental feature: port to AMD GPUs; works correctly, but performance is
as yet unclear; only tested with --with-mpi=0
On request, ELPA can print the pinning of MPI tasks and OpenMP threads
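A sketch, assuming the report is requested through an ordinary option key;
the name "output_pinning_information" is a guess based on this entry and
should be verified against your release:

    #include <elpa/elpa.h>

    /* Request a report of the MPI-task / OpenMP-thread pinning.
     * The key "output_pinning_information" is an assumption. */
    void enable_pinning_report(elpa_t handle)
    {
        int error;
        elpa_set(handle, "output_pinning_information", 1, &error);
    }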
support for FUGAKU: some minor fixes are still pending due to compiler
issues
BUG FIX: if the matrix is already banded, check that the bandwidth is >= 2.
A bandwidth of 1 is NOT allowed, since it would imply that the input matrix
is already diagonal, which the ELPA algorithms do not support
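A hedged sketch of declaring an already-banded input matrix under this rule;
the key "bandwidth" is assumed from the ELPA option naming scheme:

    #include <elpa/elpa.h>

    /* Declare an already-banded input matrix to ELPA. A bandwidth
     * of 1 would mean the matrix is already diagonal and is now
     * rejected, so only pass values >= 2. */
    void set_banded_input(elpa_t handle, int bw)
    {
        int error;
        if (bw >= 2) {
            elpa_set(handle, "bandwidth", bw, &error);
        }
    }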
BUG FIX in internal test programs: do not consider a residual of 0.0 to be
an error
support for skew-symmetric matrices now enabled by default
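A hypothetical sketch of calling the skew-symmetric solver; the routine name
elpa_skew_eigenvectors and the argument layout follow the ELPA naming scheme
but should be verified against the headers of your release:

    #include <elpa/elpa.h>

    /* a: local block of the real skew-symmetric matrix,
     * ev: eigenvalues (the spectrum is purely imaginary, so the
     * imaginary parts are returned), q: eigenvector storage.
     * Names and layout are illustrative only. */
    void solve_skew(elpa_t handle, double *a, double *ev, double *q)
    {
        int error;
        elpa_skew_eigenvectors(handle, a, ev, q, &error);
    }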
BUG FIX in generalized case: in setups like "mpiexec -np 4 ./validate_real_double_generalized_1stage_random 90 90 45"
ELPA_SETUPS now checks (in the case of MPI runs) whether the user-provided
BLACSGRID is reasonable (i.e. ELPA no longer relies on the user verifying the
BLACSGRID before calling ELPA); if this check fails, ELPA returns with an
error
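A sketch of the resulting calling pattern: the grid check is now reported by
elpa_setup itself. The process-grid variables are illustrative, and the
matrix-size/blocking parameters are assumed to have been set beforehand:

    #include <stdio.h>
    #include <mpi.h>
    #include <elpa/elpa.h>

    /* my_prow / my_pcol are the BLACS process-grid coordinates
     * obtained earlier, e.g. from blacs_gridinfo. */
    void setup_elpa(elpa_t handle, int my_prow, int my_pcol)
    {
        int error;

        elpa_set(handle, "mpi_comm_parent",
                 MPI_Comm_c2f(MPI_COMM_WORLD), &error);
        elpa_set(handle, "process_row", my_prow, &error);
        elpa_set(handle, "process_col", my_pcol, &error);

        /* elpa_setup now validates the BLACS grid itself */
        error = elpa_setup(handle);
        if (error != ELPA_OK) {
            fprintf(stderr, "ELPA rejected the BLACS grid: %s\n",
                    elpa_strerr(error));
            MPI_Abort(MPI_COMM_WORLD, 1);
        }
    }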
limit the number of OpenMP threads to one if the MPI thread level is not at least MPI_THREAD_SERIALIZED
allow checking of the supported threading level of the MPI library at build time
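A sketch of the runtime rule two entries above, mirroring in user code what
the library now does internally: if the MPI library does not provide at least
MPI_THREAD_SERIALIZED, fall back to a single OpenMP thread.

    #include <mpi.h>
    #include <omp.h>

    int main(int argc, char **argv)
    {
        int provided;
        MPI_Init_thread(&argc, &argv, MPI_THREAD_MULTIPLE, &provided);

        if (provided < MPI_THREAD_SERIALIZED) {
            /* threading is not safe alongside this MPI library:
             * restrict OpenMP to one thread */
            omp_set_num_threads(1);
        }

        /* ... set up and call ELPA here ... */

        MPI_Finalize();
        return 0;
    }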