multiple loadcases: speeding up linear solutions and parallelizing solutions. #3991
## Introduction

I have a massive simulation that takes ages to complete. It runs on a very refined model (10^7 degrees of freedom) with many load cases (10^2), where every node carries a nodal force (so no superelements, even for linear analyses, since every node would be a master node). The boundary conditions are the same for each load case (i.e., same constraints), but the nodal forces change at every loadstep.
I have two questions about speeding up the solution, described in the following sections. I would love to contribute an example if I manage to solve this, since it would increase speed by quite a lot.

## Faster solve

As far as I understand from looking at the .out files, for linear runs it looks like `lssolve` is doing something like this under the hood:

```julia
for i in 1:loadcase_count
    # remove the constrained degrees of freedom
    x = K \ F[i]  # solve each loadstep, recalculating LU(K) every time
    # compute results and store in the .rst file
end
```

That basically means the solver recomputes LU(K) for every load case instead of factorizing K once. Considering that the constrained degrees of freedom in my case are always the same, it would make much more sense to do the following:

```julia
# remove the constrained degrees of freedom
Klu = lu!(K)  # factorize once
for i in 1:loadcase_count
    x = Klu \ F[i]  # reuse the factorization: cheap forward/back substitution
    # compute results and store in the .rst file
end
```

To make it more general, it would be sufficient to first check which load cases share the same constraints, and then run the loop above once per set of constraints:

```julia
for i in 1:loadcase_count
    # check which load cases share the same set of constraints
end
for s in 1:sets_of_constraints_count
    # remove the constrained DOFs for this set
    Klu = lu!(K)
    for i in 1:loadcases_with_same_constraints_count
        x = Klu \ F[i]
        # compute results and store in the .rst file
    end
end
```

Is it possible to implement something like this using PyMAPDL? I could do it using MAPDL Math, but then I would also have to work out the nodal results on my own, and honestly I'm not into that (maybe there is a smart way to do it). Another option would be to dig deep into the MAPDL solver and write the equivalent of a NASTRAN DMAP to alter the solver behavior, but my firm would not allow me to do that.

## Parallelize the solution

Considering that I can have more than one processor available, and that solver time does not scale linearly with the number of cores used, I would love to do something like the following in Julia:

```julia
Threads.@threads for i in 1:loadcase_count
    mapdl.lssolve(i)
end
```

I have worked it out (not entirely, but I'm close) in bash, launching Ansys in batch mode and doing (more or less) the following:
```shell
apdl_code_to_first_solve_filename=apdl_code_to_first_solve
loadcase_count=number_of_load_cases

cat <<EOF > "${apdl_code_to_first_solve_filename}.ans"
! general preprocessing and solution options
*do,loadcase,1,${loadcase_count}
  ! apply load cases
*enddo
save,${apdl_code_to_first_solve_filename},db
EOF

ansysVXX -i "${apdl_code_to_first_solve_filename}.ans" -o "${apdl_code_to_first_solve_filename}.out" -j "${apdl_code_to_first_solve_filename}" -np $X

for i in $(seq 1 "$loadcase_count"); do
    echo "resume,${apdl_code_to_first_solve_filename},db" >  "loadstep_${i}.ans"
    echo "/solu"                                          >> "loadstep_${i}.ans"
    echo "lssolve,${i}"                                   >> "loadstep_${i}.ans"
    ansysVXX -i "loadstep_${i}.ans" -o "loadstep_${i}.out" -j "loadstep_${i}" -np 1 &
    # check that I haven't run out of processors (a simple if, not copied here)
done
# then combine the result files; I still haven't figured that out, but it should be fairly easy
```

The issue is that doing it like this creates a crazy amount of files (basically a .full file for each loadstep) that are useless. I could generate the .full file just once and have all the load cases access the same one, but I can't manage to do it. Is there a workaround for this in PyMAPDL? I think `MapdlPool` will start generating a lot of .full files as well; does that make sense?
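To make the intended speed-up concrete, here is a small self-contained illustration of the factor-once/solve-many pattern described above. It uses SciPy as a stand-in for the reduced stiffness matrix (the names `K`, `F`, and the matrix itself are purely illustrative, not MAPDL data):

```python
# Illustration (not MAPDL): factorize K once, then reuse the factorization
# for many right-hand sides, instead of refactorizing per load case.
import numpy as np
from scipy.sparse import identity, random as sprandom
from scipy.sparse.linalg import splu

n, n_loadcases = 500, 8
rng = np.random.default_rng(0)

# A well-conditioned sparse matrix standing in for the reduced K
# (strong diagonal so the system is comfortably nonsingular).
K = (sprandom(n, n, density=0.01, random_state=0) + 10.0 * identity(n)).tocsc()
F = rng.standard_normal((n, n_loadcases))  # one column per load case

lu = splu(K)  # the expensive step, done exactly once
X = np.column_stack([lu.solve(F[:, i]) for i in range(n_loadcases)])

# every column of X solves K x_i = f_i
assert np.allclose(K @ X, F, atol=1e-8)
```

For 10^2 load cases on a 10^7-DOF model, the triangular solves inside the loop are far cheaper than repeating the factorization, which is exactly the saving being asked for.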
Hi @francesco123212

Please see the KUSE and EQSLV commands. EQSLV has a field to keep the solver files from a sparse (direct) solver analysis, and KUSE can be used to tell the solver to reuse those files. Since the boundary conditions of the linear model don't change, you can use these to skip the creation of the K matrix for each set of loads.

You will need to solve the first load set with the EQSLV keep option set, then issue KUSE prior to solving the rest of the load sets.

Why don't you try that on a small set, say 2-3 load sets, and convince yourself that it is working? Then we can move on to other ideas to speed this up.

Do you have access to a single compute server or many? What is the hardware in terms of RAM and CPU make/model? Depending on the answers, we may want to set up a pool to solve.

Mike
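To illustrate the suggestion above, the command sequence might look roughly like this. This is only a sketch: the EQSLV field positions and the KUSE key value are my reading of the commands and should be checked against the MAPDL Command Reference before use.

```apdl
/solu
antype,static
eqslv,sparse,,,,keep    ! sparse direct solver; keep the factorized-matrix files
! ...apply the first load set...
solve
kuse,1                  ! reuse the existing factorized matrix
! ...apply the next load set...
solve
kuse,0                  ! restore the default behavior when done
finish
```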