Replies: 1 comment
-
Hello, I think you should use |
-
Hello everyone,
I'm currently modifying the turbulent channel flow case in Xcompact3D for a custom problem. My goal is to compute the flow rate vs. time at a cross-section perpendicular to the x-axis, which I obtain from the average velocity computed inside the code.
Since the mesh is non-uniform in the channel-height direction, I use an area-weighted average to compute the mean velocity from the velocity component ux1. The integration is performed locally on each MPI process, and the partial results are combined with MPI_ALLREDUCE.
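For reference, the area-weighted (trapezoidal) average on a non-uniform grid can be sketched as follows; the grid and velocity profile here are made up for illustration, not Xcompact3D data:

```python
import numpy as np

# Illustrative non-uniform (stretched) grid and velocity profile --
# placeholders only, just to show the weighting rule.
y = np.array([0.0, 0.1, 0.25, 0.5, 0.75, 0.9, 1.0])
ux = 4.0 * y * (1.0 - y)

# Node j carries half the distance to each neighbour (trapezoidal weights).
w = np.empty_like(y)
w[0] = 0.5 * (y[1] - y[0])
w[-1] = 0.5 * (y[-1] - y[-2])
w[1:-1] = 0.5 * (y[2:] - y[:-2])

flow_per_depth = np.sum(w * ux)      # integral of ux over y (per unit depth in z)
u_avg = flow_per_depth / np.sum(w)   # area-weighted average velocity
```

These node weights reproduce the trapezoidal rule exactly, so the weighted sum matches a direct interval-by-interval integration of the same profile.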
The issue arises from the domain decomposition. For instance, if the y-direction grid spans points 1–100 and is split across 2 MPI processes (points 1–50 and 51–100), the area-weighted contribution of the cell between the interface points 50 and 51 is missed, which under-predicts the average velocity (equivalently, the integrated area). I want to avoid gathering the entire field on one process because of memory and performance constraints.
Below is a simplified version of my implementation in case_custom.f90:
end subroutine flowrate_channel_tpg
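One way to avoid losing the interface cell is to build the trapezoidal node weights from the global grid and let each rank sum only over the node indices it owns: the half-cell straddling the interface then belongs to exactly one node, and summing the partial results (as MPI_ALLREDUCE with MPI_SUM would) recovers the full integral. A serial sketch of the idea with two "ranks" and a made-up grid:

```python
import numpy as np

def node_weights(y):
    """Trapezoidal node weights: each node gets half the gap to each neighbour."""
    w = np.empty_like(y)
    w[0] = 0.5 * (y[1] - y[0])
    w[-1] = 0.5 * (y[-1] - y[-2])
    w[1:-1] = 0.5 * (y[2:] - y[:-2])
    return w

# Made-up global stretched grid, split between two "ranks" at index k.
y = np.array([0.0, 0.05, 0.15, 0.3, 0.5, 0.7, 0.85, 0.95, 1.0])
ux = np.ones_like(y)   # uniform velocity: exact integral = y[-1] - y[0] = 1.0
k = 4                  # rank 0 owns nodes 0..k-1, rank 1 owns nodes k..end

# Wrong: each rank builds weights from its local points only, so the
# interval between y[k-1] and y[k] is counted by neither rank.
bad = np.sum(node_weights(y[:k]) * ux[:k]) + np.sum(node_weights(y[k:]) * ux[k:])

# Right: weights come from the global grid; the local partial sums then
# add up exactly (each term is one rank's contribution to the reduction).
w = node_weights(y)
good = np.sum(w[:k] * ux[:k]) + np.sum(w[k:] * ux[k:])
```

In practice each rank only needs the slice of the global weight array matching its own pencil, which the decomposition's local start/end indices into the global grid provide, so no field gathering is required.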
My question:
Is there a recommended way to compute cross-sectional integrals on non-uniform grids correctly inside the code when the domain is parallel-decomposed?
Any guidance or suggestions would be greatly appreciated!
Thanks and best regards,
Ravi