
Multicomponent diffusion fluxes, thermal conduction, and mixture viscosity #934


Open
Wants to merge 51 commits into master

Conversation

@DimAdam-01 (Contributor) commented Jul 11, 2025

User description

Description


This update implements the diffusive fluxes for chemical species, includes thermal conduction in the energy equation, and computes the mixture’s viscosity.
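
For reference, the new diffusive terms take the standard mixture-averaged form (a sketch of the intended physics, stated up to the code's sign conventions; the implementation lives in src/common/m_chemistry.fpp):

$$j_k = -\rho\, D_{k,\mathrm{mix}} \frac{W_k}{\overline{W}} \frac{\partial X_k}{\partial \xi} + Y_k \sum_{i=1}^{N_s} \rho\, D_{i,\mathrm{mix}} \frac{W_i}{\overline{W}} \frac{\partial X_i}{\partial \xi}, \qquad q = -\lambda \frac{\partial T}{\partial \xi} + \sum_{k=1}^{N_s} h_k\, j_k$$

Here $D_{k,\mathrm{mix}}$ are the mixture-averaged diffusivities, $W_k$ and $\overline{W}$ the species and mixture molar masses, $X_k$ and $Y_k$ the mole and mass fractions, $h_k$ the species enthalpies, and $\lambda$ the thermal conductivity. The $Y_k \sum_i (\cdot)$ term is the correction that enforces $\sum_k j_k = 0$ (mass conservation).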


Type of change

  • New feature (non-breaking change which adds functionality)
  • Bug fix (non-breaking change which fixes an issue)

Scope

  • This PR comprises a set of related changes with a common goal


How Has This Been Tested?


  • Test A: One-dimensional multicomponent diffusion of four distinct species, as detailed in DOI 10.2514/6.2020-1751. [figures attached]
  • Test B: A two-dimensional premixed flame with an imposed perturbation; the test tracks the growth of the ensuing thermo-diffusive instability. [figures attached]

Test Configuration:

  • What computers and compilers did you use to test this: my laptop, Delta (A100), and Delta AI (GH)

Checklist

  • I have added comments for the new code
  • I added Doxygen docstrings to the new code
  • I have made corresponding changes to the documentation (docs/)
  • I have added regression tests to the test suite so that people can verify in the future that the feature is behaving as expected
  • I have added example cases in examples/ that demonstrate my new feature performing as expected.
    They run to completion and demonstrate "interesting physics"
  • I ran ./mfc.sh format before committing my code
  • New and existing tests pass locally with my changes, including with GPU capability enabled (both NVIDIA hardware with NVHPC compilers and AMD hardware with CRAY compilers) and disabled
  • This PR does not introduce any repeated code (it follows the DRY principle)
  • I cannot think of a way to condense this code and reduce any introduced additional line count

If your code changes any code source files (anything in src/simulation)

To make sure the code is performing as expected on GPU devices, I have:

  • Checked that the code compiles using NVHPC compilers
  • Checked that the code compiles using CRAY compilers
  • Ran the code on either V100, A100, or H100 GPUs and ensured the new feature performed as expected (the GPU results match the CPU results)
  • Ran the code on MI200+ GPUs and ensured the new features performed as expected (the GPU results match the CPU results)
  • Enclosed the new feature via nvtx ranges so that they can be identified in profiles
  • Ran a Nsight Systems profile using ./mfc.sh run XXXX --gpu -t simulation --nsys, and have attached the output file (.nsys-rep) and plain text results to this PR
  • Ran a Rocprof Systems profile using ./mfc.sh run XXXX --gpu -t simulation --rsys --hip-trace, and have attached the output file and plain text results to this PR.
  • Ran my code using various numbers of different GPUs (1, 2, and 8, for example) in parallel and made sure that the results scale similarly to what happens if you run without the new code/feature

PR Type

Enhancement, Tests, Bug fix


Description

• Implemented comprehensive multicomponent diffusion fluxes for chemical species transport
• Added thermal conduction capabilities to the energy equation
• Integrated mixture viscosity calculations based on chemistry
• Added new subroutines compute_viscosity_and_inversion and s_compute_chemistry_diffusion_flux for transport property calculations (see the sketch after this list)
• Integrated chemistry diffusion flux computation into RHS calculation pipeline
• Added mass conservation corrections and thermal conduction contributions
• Fixed undefined variable bug in chemistry sound speed calculation (replaced Tolerance with verysmall)
• Added comprehensive test cases for multicomponent diffusion and 1D premixed flame simulations
• Added golden reference data and metadata for validation of new chemistry features
• Created new example case for 1D premixed flame simulation with chemistry
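
The mixture-viscosity rule itself is not shown on this page. As an illustration of the kind of computation a routine like compute_viscosity_and_inversion performs, here is a minimal Python sketch of Wilke's mixture rule (the choice of Wilke's formula, and all names here, are assumptions for illustration, not necessarily the PR's implementation):

    import numpy as np

    def wilke_mixture_viscosity(X, mu, W):
        """Wilke's semi-empirical mixture rule (assumed here for illustration).

        X: mole fractions, mu: pure-species viscosities [Pa*s],
        W: species molar masses [kg/mol]; all of length num_species.
        """
        X, mu, W = map(np.asarray, (X, mu, W))
        # phi[i, j] = (1 + sqrt(mu_i/mu_j) * (W_j/W_i)**0.25)**2 / sqrt(8 * (1 + W_i/W_j))
        phi = (1.0 + np.sqrt(mu[:, None] / mu[None, :]) * (W[None, :] / W[:, None]) ** 0.25) ** 2 \
              / np.sqrt(8.0 * (1.0 + W[:, None] / W[None, :]))
        # mu_mix = sum_i X_i * mu_i / (sum_j X_j * phi[i, j])
        return float(np.sum(X * mu / (phi @ X)))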


Changes walkthrough 📝

Relevant files
Enhancement
4 files
m_rhs.fpp
Integrate chemistry diffusion flux computation into RHS   

src/simulation/m_rhs.fpp

• Added allocation for energy flux source when chemistry diffusion is enabled without viscous effects
• Integrated chemistry diffusion flux computation into the RHS calculation pipeline
• Separated flux application logic for viscous/surface tension vs chemistry diffusion cases
• Added conditional energy flux handling for non-viscous chemistry diffusion scenarios

+127/-34
m_chemistry.fpp
Implement multicomponent diffusion and mixture viscosity calculations

src/common/m_chemistry.fpp

• Added new subroutine compute_viscosity_and_inversion for mixture viscosity calculation
• Implemented comprehensive s_compute_chemistry_diffusion_flux subroutine for multicomponent diffusion
• Added transport property calculations including mass diffusivities and thermal conductivity
• Implemented mass conservation corrections and thermal conduction contributions

+167/-1
m_riemann_solvers.fpp
Integrate chemistry viscosity and diffusion flux initialization

src/simulation/m_riemann_solvers.fpp

• Added chemistry-based viscosity computation calls in Riemann solver routines
• Integrated compute_viscosity_and_inversion function calls for chemistry cases
• Added flux source initialization for chemistry diffusion in all spatial directions

+51/-0
case.py
Add 1D premixed flame chemistry example case                         

examples/1D_Premixed_Flame/case.py

• Added new example case for 1D premixed flame simulation with chemistry
• Configured multicomponent diffusion and reaction parameters
• Set up initial conditions with species mass fractions and temperature profiles

+93/-0
Bug fix
1 file
m_variables_conversion.fpp
Fix undefined variable in chemistry sound speed calculation

src/common/m_variables_conversion.fpp

• Replaced undefined Tolerance variable with verysmall in chemistry sound speed calculation

+1/-2
Tests
12 files
cases.py
Add chemistry diffusion and premixed flame test cases       

toolchain/mfc/test/cases.py

• Added new test case for 1D multicomponent diffusion validation
• Added test case for 1D premixed flame simulation
• Updated existing chemistry test tolerances and WENO parameters

+30/-2
golden.txt
Add golden reference data for multicomponent diffusion test

tests/D54F258B/golden.txt

• Added golden reference data for multicomponent diffusion test case
• Contains conservative and primitive variable outputs at different time steps

+56/-0
golden-metadata.txt
Update test metadata for chemistry test configuration       

tests/4D7D85B4/golden-metadata.txt

• Updated test metadata with new invocation parameters and system information
• Modified test configuration details and CPU specifications

+15/-117
golden.txt
Add golden reference data for multicomponent diffusion test

tests/A745163D/golden.txt

• Added comprehensive golden reference data for multicomponent diffusion test case
• Contains numerical values for conservative and primitive variables across multiple time steps
• Data includes 8 different variables (cons.1-8 and prim.1-8) at two time points (000000 and 000050)
• Each line contains 101 data points representing spatial distribution of variables

+32/-0
golden-metadata.txt
Update test metadata for 1D Chemistry configuration           

tests/6B3AE48F/golden-metadata.txt

• Updated test metadata with new invocation command for 1D Chemistry tests
• Changed Git commit hash and branch from qtsf-merge to Diffusion
• Removed detailed CMake configuration sections for simulation, pre_process, and syscheck
• Updated CPU information with different model and virtualization details

+15/-117
golden-metadata.txt
Update test metadata for 1D Chemistry configuration           

tests/2BDE2018/golden-metadata.txt

• Updated test metadata with new invocation command for 1D Chemistry tests
• Changed Git commit hash and branch from qtsf-merge to Diffusion
• Removed detailed CMake configuration sections for simulation, pre_process, and syscheck
• Updated CPU information with different model and virtualization details

+15/-117
golden-metadata.txt
Add metadata for MultiComponent_Diffusion test case           

tests/D54F258B/golden-metadata.txt

• Added new test metadata file for MultiComponent_Diffusion test case
• Contains complete CMake configuration details for simulation, syscheck, and pre_process
• Includes CPU information and system specifications
• Uses Debug build type with MPI enabled

+149/-0
golden-metadata.txt
Test metadata update for 1D Chemistry configuration           

tests/5DCF300C/golden-metadata.txt

• Updated test invocation from Chemistry to 1D Chemistry with generate flag
• Changed MPI configuration from disabled to enabled
• Updated Git commit hash and branch from qtsf-merge to Diffusion
• Removed detailed CMake configuration sections for simulation, pre_process, and syscheck
• Updated CPU model from i9-12900HK to i7-12700H with corresponding hardware changes

+15/-117
golden-metadata.txt
Test metadata update for 1D Chemistry configuration           

tests/F8ADA51B/golden-metadata.txt

• Updated test invocation from Chemistry to 1D Chemistry with generate flag
• Changed MPI configuration from disabled to enabled
• Updated Git commit hash and branch from qtsf-merge to Diffusion
• Removed detailed CMake configuration sections for simulation, pre_process, and syscheck
• Updated CPU model from i9-12900HK to i7-12700H with corresponding hardware changes

+15/-117
golden-metadata.txt
New test metadata for MultiComponent_Diffusion                     

tests/A745163D/golden-metadata.txt

• Created new test metadata file for MultiComponent_Diffusion test
• Configured with MPI enabled and clean Git state on Diffusion branch
• Recorded CPU information for i7-12700H processor
• Includes complete system vulnerability and cache information

+50/-0
golden-metadata.txt
New test metadata for 1D Chemistry                                             

tests/54BEE3ED/golden-metadata.txt

• Created new test metadata file for 1D Chemistry test
• Configured with MPI enabled and dirty Git state on Diffusion branch
• Recorded CPU information for i7-12700H processor
• Includes complete system vulnerability and cache information

+50/-0
golden-metadata.txt
New test metadata for 1D Chemistry                                             

tests/140C353C/golden-metadata.txt

• Created new test metadata file for 1D Chemistry test
• Configured with MPI enabled and dirty Git state on Diffusion branch
• Recorded CPU information for i7-12700H processor
• Includes complete system vulnerability and cache information

+50/-0
Additional files
6 files
golden.txt +56/-0   
golden.txt +26/-26 
golden.txt +26/-26 
golden.txt +56/-0   
golden.txt +26/-26 
golden.txt +26/-26 

Need help?
  • Type /help how to ... in the comments thread for any questions about Qodo Merge usage.
  • Check out the documentation for more information.

    qodo-merge-pro bot commented Jul 11, 2025

    PR Reviewer Guide 🔍

    (Review updated until commit 42e8f58)

    Here are some key observations to aid the review process:

    ⏱️ Estimated effort to review: 4 🔵🔵🔵🔵⚪
    🧪 PR contains tests
    🔒 No security concerns identified
    ⚡ Recommended focus areas for review

    Code Duplication

    The flux computation loops for the x, y, and z directions contain nearly identical code blocks, differing only in the grid-spacing calculations and array indexing. This duplication increases the maintenance burden and the potential for inconsistencies. (A refactoring sketch follows the excerpt below.)

        if (surface_tension .or. viscous) then
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = momxb, E_idx
                            rhs_vf(i)%sf(j, k, l) = &
                                rhs_vf(i)%sf(j, k, l) + 1._wp/dx(j)* &
                                (flux_src_n(i)%sf(j - 1, k, l) &
                                 - flux_src_n(i)%sf(j, k, l))
                        end do
                    end do
                end do
            end do
        end if
    
        if (chem_params%diffusion) then
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = chemxb, chemxe
                            rhs_vf(i)%sf(j, k, l) = &
                                rhs_vf(i)%sf(j, k, l) + 1._wp/dx(j)* &
                                (flux_src_n(i)%sf(j - 1, k, l) &
                                 - flux_src_n(i)%sf(j, k, l))
                        end do
    
                        if (.not. viscous) then
                            rhs_vf(E_idx)%sf(j, k, l) = &
                                rhs_vf(E_idx)%sf(j, k, l) + 1._wp/dx(j)* &
                                (flux_src_n(E_idx)%sf(j - 1, k, l) &
                                 - flux_src_n(E_idx)%sf(j, k, l))
                        end if
                    end do
                end do
            end do
        end if
    
    elseif (idir == 2) then ! y-direction
    
        if (surface_tension) then
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        rhs_vf(c_idx)%sf(j, k, l) = &
                            rhs_vf(c_idx)%sf(j, k, l) + 1._wp/dy(k)* &
                            q_prim_vf(c_idx)%sf(j, k, l)* &
                            (flux_src_n(advxb)%sf(j, k, l) - &
                             flux_src_n(advxb)%sf(j, k - 1, l))
                    end do
                end do
            end do
        end if
    
        if (cyl_coord .and. ((bc_y%beg == BC_REFLECTIVE) .or. (bc_y%beg == BC_AXIS))) then
            if (viscous) then
                if (p > 0) then
                    call s_compute_viscous_stress_tensor(q_prim_vf, &
                                                         dq_prim_dx_vf(mom_idx%beg:mom_idx%end), &
                                                         dq_prim_dy_vf(mom_idx%beg:mom_idx%end), &
                                                         dq_prim_dz_vf(mom_idx%beg:mom_idx%end), &
                                                         tau_Re_vf, &
                                                         idwbuff(1), idwbuff(2), idwbuff(3))
                else
                    call s_compute_viscous_stress_tensor(q_prim_vf, &
                                                         dq_prim_dx_vf(mom_idx%beg:mom_idx%end), &
                                                         dq_prim_dy_vf(mom_idx%beg:mom_idx%end), &
                                                         dq_prim_dy_vf(mom_idx%beg:mom_idx%end), &
                                                         tau_Re_vf, &
                                                         idwbuff(1), idwbuff(2), idwbuff(3))
                end if
    
                $:GPU_PARALLEL_LOOP(collapse=2)
                do l = 0, p
                    do j = 0, m
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = momxb, E_idx
                            rhs_vf(i)%sf(j, 0, l) = &
                                rhs_vf(i)%sf(j, 0, l) + 1._wp/(y_cc(1) - y_cc(-1))* &
                                (tau_Re_vf(i)%sf(j, -1, l) &
                                 - tau_Re_vf(i)%sf(j, 1, l))
                        end do
                    end do
                end do
    
            end if
    
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 1, n
                    do j = 0, m
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = momxb, E_idx
                            rhs_vf(i)%sf(j, k, l) = &
                                rhs_vf(i)%sf(j, k, l) + 1._wp/dy(k)* &
                                (flux_src_n(i)%sf(j, k - 1, l) &
                                 - flux_src_n(i)%sf(j, k, l))
                        end do
                    end do
                end do
            end do
    
        else
    
            if (viscous .or. surface_tension) then
                $:GPU_PARALLEL_LOOP(collapse=3)
                do l = 0, p
                    do k = 0, n
                        do j = 0, m
                            $:GPU_LOOP(parallelism='[seq]')
                            do i = momxb, E_idx
                                rhs_vf(i)%sf(j, k, l) = &
                                    rhs_vf(i)%sf(j, k, l) + 1._wp/dy(k)* &
                                    (flux_src_n(i)%sf(j, k - 1, l) &
                                     - flux_src_n(i)%sf(j, k, l))
                            end do
                        end do
                    end do
                end do
            end if
    
            if (chem_params%diffusion) then
                $:GPU_PARALLEL_LOOP(collapse=3)
                do l = 0, p
                    do k = 0, n
                        do j = 0, m
                            $:GPU_LOOP(parallelism='[seq]')
                            do i = chemxb, chemxe
                                rhs_vf(i)%sf(j, k, l) = &
                                    rhs_vf(i)%sf(j, k, l) + 1._wp/dy(k)* &
                                    (flux_src_n(i)%sf(j, k - 1, l) &
                                     - flux_src_n(i)%sf(j, k, l))
                            end do
                            if (.not. viscous) then
                                rhs_vf(E_idx)%sf(j, k, l) = &
                                    rhs_vf(E_idx)%sf(j, k, l) + 1._wp/dy(k)* &
                                    (flux_src_n(E_idx)%sf(j, k - 1, l) &
                                     - flux_src_n(E_idx)%sf(j, k, l))
                            end if
                        end do
                    end do
                end do
            end if
        end if
    
        ! Applying the geometrical viscous Riemann source fluxes calculated as average
        ! of values at cell boundaries
        if (cyl_coord) then
            if ((bc_y%beg == BC_REFLECTIVE) .or. (bc_y%beg == BC_AXIS)) then
    
                $:GPU_PARALLEL_LOOP(collapse=3)
                do l = 0, p
                    do k = 1, n
                        do j = 0, m
                            $:GPU_LOOP(parallelism='[seq]')
                            do i = momxb, E_idx
                                rhs_vf(i)%sf(j, k, l) = &
                                    rhs_vf(i)%sf(j, k, l) - 5.e-1_wp/y_cc(k)* &
                                    (flux_src_n(i)%sf(j, k - 1, l) &
                                     + flux_src_n(i)%sf(j, k, l))
                            end do
                        end do
                    end do
                end do
    
                if (viscous) then
                    $:GPU_PARALLEL_LOOP(collapse=2)
                    do l = 0, p
                        do j = 0, m
                            $:GPU_LOOP(parallelism='[seq]')
                            do i = momxb, E_idx
                                rhs_vf(i)%sf(j, 0, l) = &
                                    rhs_vf(i)%sf(j, 0, l) - 1._wp/y_cc(0)* &
                                    tau_Re_vf(i)%sf(j, 0, l)
                            end do
                        end do
                    end do
                end if
            else
    
                $:GPU_PARALLEL_LOOP(collapse=3)
                do l = 0, p
                    do k = 0, n
                        do j = 0, m
                            $:GPU_LOOP(parallelism='[seq]')
                            do i = momxb, E_idx
                                rhs_vf(i)%sf(j, k, l) = &
                                    rhs_vf(i)%sf(j, k, l) - 5.e-1_wp/y_cc(k)* &
                                    (flux_src_n(i)%sf(j, k - 1, l) &
                                     + flux_src_n(i)%sf(j, k, l))
                            end do
                        end do
                    end do
                end do
    
            end if
        end if
    
    elseif (idir == 3) then ! z-direction
    
        if (surface_tension) then
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        rhs_vf(c_idx)%sf(j, k, l) = &
                            rhs_vf(c_idx)%sf(j, k, l) + 1._wp/dz(l)* &
                            q_prim_vf(c_idx)%sf(j, k, l)* &
                            (flux_src_n(advxb)%sf(j, k, l) - &
                             flux_src_n(advxb)%sf(j, k, l - 1))
                    end do
                end do
            end do
        end if
    
        if (viscous .or. surface_tension) then
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = momxb, E_idx
                            rhs_vf(i)%sf(j, k, l) = &
                                rhs_vf(i)%sf(j, k, l) + 1._wp/dz(l)* &
                                (flux_src_n(i)%sf(j, k, l - 1) &
                                 - flux_src_n(i)%sf(j, k, l))
                        end do
                    end do
                end do
            end do
        end if
    
        if (chem_params%diffusion) then
            $:GPU_PARALLEL_LOOP(collapse=3)
            do l = 0, p
                do k = 0, n
                    do j = 0, m
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = chemxb, chemxe
                            rhs_vf(i)%sf(j, k, l) = &
                                rhs_vf(i)%sf(j, k, l) + 1._wp/dz(l)* &
                                (flux_src_n(i)%sf(j, k, l - 1) &
                                 - flux_src_n(i)%sf(j, k, l))
                        end do
                        if (.not. viscous) then
                            rhs_vf(E_idx)%sf(j, k, l) = &
                                rhs_vf(E_idx)%sf(j, k, l) + 1._wp/dz(l)* &
                                (flux_src_n(E_idx)%sf(j, k, l - 1) &
                                 - flux_src_n(E_idx)%sf(j, k, l))
                        end if
                    end do
                end do
            end do
        end if
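
    A minimal sketch of how the three direction branches could be folded into one routine, using a direction index to pick the stencil offset and grid spacing. This is written in Python/NumPy for brevity, with hypothetical names (rhs, flux_src, dxi); it is not MFC's Fortran API:

        import numpy as np

        def apply_flux_divergence(rhs, flux_src, dxi, idir, eqs):
            """Add 1/dxi * (F_(i-1) - F_i) along direction idir to rhs[eqs].

            rhs, flux_src: (neq, nx, ny, nz) arrays; dxi: 1-D cell widths
            along direction idir (1=x, 2=y, 3=z); eqs: slice of equations
            to update (momentum..energy, or the species block).
            """
            axis = idir  # axis 0 indexes equations, axes 1..3 are x, y, z
            f = flux_src[eqs]
            f_left = np.roll(f, 1, axis=axis)  # flux at the i-1 face (toy boundary; real code uses ghost cells)
            shape = [1, 1, 1, 1]
            shape[axis] = dxi.size
            rhs[eqs] += (1.0 / dxi).reshape(shape) * (f_left - f)

        # One routine replaces the three copied loop nests, e.g.:
        # apply_flux_divergence(rhs, flux_src, dx, 1, slice(momxb, E_idx + 1))
        # apply_flux_divergence(rhs, flux_src, dy, 2, slice(chemxb, chemxe + 1))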
    Performance Concern

    The diffusion flux computation involves nested loops with multiple expensive function calls (get_mixture_molecular_weight, get_species_mass_diffusivities_mixavg, etc.) inside the innermost loop. This could significantly impact performance for large grids.

    subroutine s_compute_chemistry_diffusion_flux(idir, q_prim_qp, flux_src_vf, irx, iry, irz)
    
        type(scalar_field), dimension(sys_size), intent(in) :: q_prim_qp
        type(scalar_field), dimension(sys_size), intent(inout) :: flux_src_vf
        type(int_bounds_info), intent(in) :: irx, iry, irz
    
        integer, intent(in) :: idir
    
        real(wp), dimension(num_species) :: Xs_L, Xs_R, Xs_cell, Ys_L, Ys_R, Ys_cell
        real(wp), dimension(num_species) :: mass_diffusivities_mixavg1, mass_diffusivities_mixavg2
        real(wp), dimension(num_species) :: mass_diffusivities_mixavg_Cell, dXk_dxi, h_l, h_r, h_k
        real(wp), dimension(num_species) :: Mass_Diffu_Flux
        real(wp) :: Mass_Diffu_Energy
        real(wp) :: MW_L, MW_R, MW_cell, Rgas_L, Rgas_R, T_L, T_R, P_L, P_R, rho_L, rho_R, rho_cell, rho_Vic
        real(wp) :: lambda_L, lambda_R, lambda_Cell, dT_dxi, grid_spacing
    
        integer :: x, y, z, i, n, eqn
        integer, dimension(3) :: offsets
    
        isc1 = irx; isc2 = iry; isc3 = irz
    
        $:GPU_UPDATE(device='[isc1,isc2,isc3]')
    
        if (chemistry) then
            ! Set offsets based on direction using array indexing
            offsets = 0
            offsets(idir) = 1
    
            $:GPU_PARALLEL_LOOP(collapse=3,  private='[Ys_L, Ys_R, Ys_cell, &
                    & Xs_L, Xs_R, mass_diffusivities_mixavg1, mass_diffusivities_mixavg2, &
                    & mass_diffusivities_mixavg_Cell, h_l, h_r, Xs_cell, h_k, &
                    & dXk_dxi,Mass_Diffu_Flux]', copyin='[offsets]')
            do z = isc3%beg, isc3%end
                do y = isc2%beg, isc2%end
                    do x = isc1%beg, isc1%end
                        ! Calculate grid spacing using direction-based indexing
                        select case (idir)
                        case (1)
                            grid_spacing = x_cc(x + 1) - x_cc(x)
                        case (2)
                            grid_spacing = y_cc(y + 1) - y_cc(y)
                        case (3)
                            grid_spacing = z_cc(z + 1) - z_cc(z)
                        end select
    
                        ! Extract species mass fractions
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = chemxb, chemxe
                            Ys_L(i - chemxb + 1) = q_prim_qp(i)%sf(x, y, z)
                            Ys_R(i - chemxb + 1) = q_prim_qp(i)%sf(x + offsets(1), y + offsets(2), z + offsets(3))
                            Ys_cell(i - chemxb + 1) = 0.5_wp*(Ys_L(i - chemxb + 1) + Ys_R(i - chemxb + 1))
                        end do
    
                        ! Calculate molecular weights and mole fractions
                        call get_mixture_molecular_weight(Ys_L, MW_L)
                        call get_mixture_molecular_weight(Ys_R, MW_R)
                        MW_cell = 0.5_wp*(MW_L + MW_R)
    
                        call get_mole_fractions(MW_L, Ys_L, Xs_L)
                        call get_mole_fractions(MW_R, Ys_R, Xs_R)
    
                        ! Calculate gas constants and thermodynamic properties
                        Rgas_L = gas_constant/MW_L
                        Rgas_R = gas_constant/MW_R
    
                        P_L = q_prim_qp(E_idx)%sf(x, y, z)
                        P_R = q_prim_qp(E_idx)%sf(x + offsets(1), y + offsets(2), z + offsets(3))
    
                        rho_L = q_prim_qp(1)%sf(x, y, z)
                        rho_R = q_prim_qp(1)%sf(x + offsets(1), y + offsets(2), z + offsets(3))
    
                        T_L = P_L/rho_L/Rgas_L
                        T_R = P_R/rho_R/Rgas_R
    
                        rho_cell = 0.5_wp*(rho_L + rho_R)
                        dT_dxi = (T_R - T_L)/grid_spacing
    
                        ! Get transport properties
                        call get_species_mass_diffusivities_mixavg(P_L, T_L, Ys_L, mass_diffusivities_mixavg1)
                        call get_species_mass_diffusivities_mixavg(P_R, T_R, Ys_R, mass_diffusivities_mixavg2)
    
                        call get_mixture_thermal_conductivity_mixavg(T_L, Ys_L, lambda_L)
                        call get_mixture_thermal_conductivity_mixavg(T_R, Ys_R, lambda_R)
    
                        call get_species_enthalpies_rt(T_L, h_l)
                        call get_species_enthalpies_rt(T_R, h_r)
    
                        ! Calculate species properties and gradients
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = chemxb, chemxe
                            h_l(i - chemxb + 1) = h_l(i - chemxb + 1)*gas_constant*T_L/molecular_weights(i - chemxb + 1)
                            h_r(i - chemxb + 1) = h_r(i - chemxb + 1)*gas_constant*T_R/molecular_weights(i - chemxb + 1)
                            Xs_cell(i - chemxb + 1) = 0.5_wp*(Xs_L(i - chemxb + 1) + Xs_R(i - chemxb + 1))
                            h_k(i - chemxb + 1) = 0.5_wp*(h_l(i - chemxb + 1) + h_r(i - chemxb + 1))
                            dXk_dxi(i - chemxb + 1) = (Xs_R(i - chemxb + 1) - Xs_L(i - chemxb + 1))/grid_spacing
                        end do
    
                        ! Calculate mixture-averaged diffusivities
                        $:GPU_LOOP(parallelism='[seq]')
                        do i = chemxb, chemxe
                            mass_diffusivities_mixavg_Cell(i - chemxb + 1) = &
                                (mass_diffusivities_mixavg2(i - chemxb + 1) + mass_diffusivities_mixavg1(i - chemxb + 1))/ &
                                2.0_wp*(1.0_wp - Xs_cell(i - chemxb + 1))/(1.0_wp - Ys_cell(i - chemxb + 1))
                        end do
    
                        lambda_Cell = 0.5_wp*(lambda_R + lambda_L)
    
                        ! Calculate mass diffusion fluxes
                        rho_Vic = 0.0_wp
                        Mass_Diffu_Energy = 0.0_wp
    
                        $:GPU_LOOP(parallelism='[seq]')
                        do eqn = chemxb, chemxe
                            Mass_Diffu_Flux(eqn - chemxb + 1) = rho_cell*mass_diffusivities_mixavg_Cell(eqn - chemxb + 1)* &
                                                                molecular_weights(eqn - chemxb + 1)/MW_cell*dXk_dxi(eqn - chemxb + 1)
                            rho_Vic = rho_Vic + Mass_Diffu_Flux(eqn - chemxb + 1)
                            Mass_Diffu_Energy = Mass_Diffu_Energy + h_k(eqn - chemxb + 1)*Mass_Diffu_Flux(eqn - chemxb + 1)
                        end do
    
                        ! Apply corrections for mass conservation
                        $:GPU_LOOP(parallelism='[seq]')
                        do eqn = chemxb, chemxe
                            Mass_Diffu_Energy = Mass_Diffu_Energy - h_k(eqn - chemxb + 1)*Ys_cell(eqn - chemxb + 1)*rho_Vic
                            Mass_Diffu_Flux(eqn - chemxb + 1) = Mass_Diffu_Flux(eqn - chemxb + 1) - rho_Vic*Ys_cell(eqn - chemxb + 1)
                        end do
    
                        ! Add thermal conduction contribution
                        Mass_Diffu_Energy = lambda_Cell*dT_dxi + Mass_Diffu_Energy
    
                        ! Update flux arrays
                        flux_src_vf(E_idx)%sf(x, y, z) = flux_src_vf(E_idx)%sf(x, y, z) - Mass_Diffu_Energy
    
                        $:GPU_LOOP(parallelism='[seq]')
                        do eqn = chemxb, chemxe
                            flux_src_vf(eqn)%sf(x, y, z) = flux_src_vf(eqn)%sf(x, y, z) - Mass_diffu_Flux(eqn - chemxb + 1)
                        end do
                    end do
                end do
            end do
        end if
    
    end subroutine s_compute_chemistry_diffusion_flux
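
    One possible mitigation for the cost flagged above: evaluate the expensive transport properties once per cell and reuse them for both adjacent faces, since each interior cell is the right state of one face and the left state of the next. A minimal 1-D Python sketch of the idea (get_props is a hypothetical stand-in for calls such as get_species_mass_diffusivities_mixavg; this is not the PR's code):

        import numpy as np

        def face_transport_properties(P, T, Y, get_props):
            """Cache per-cell properties, then average per face.

            P, T: arrays of length ncells; Y: (ncells, nspecies) mass fractions.
            In the PR's loop, get_props-style calls run twice per face (left and
            right states), roughly 2x per interior cell; caching halves that.
            """
            ncells = len(P)
            cell_props = [np.asarray(get_props(P[i], T[i], Y[i])) for i in range(ncells)]
            # Face i lies between cells i and i+1; reuse the cached cell values.
            return [0.5 * (cell_props[i] + cell_props[i + 1]) for i in range(ncells - 1)]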
    Logic Complexity

    The conditional logic for handling energy flux updates when viscous is false appears in multiple locations with slight variations. The interaction between viscous and diffusion flags needs careful validation to ensure correct physics.

    if (.not. viscous) then
        rhs_vf(E_idx)%sf(j, k, l) = &
            rhs_vf(E_idx)%sf(j, k, l) + 1._wp/dx(j)* &
            (flux_src_n(E_idx)%sf(j - 1, k, l) &
             - flux_src_n(E_idx)%sf(j, k, l))
    end if
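
    Schematically, the invariant these flags must preserve is that the energy flux divergence is applied exactly once per direction sweep. A tiny Python enumeration of the guard logic as read from the excerpts above (an interpretation for review purposes, not the PR's code):

        from itertools import product

        def energy_passes(viscous, surface_tension, diffusion):
            """Count how many excerpted loop nests add the E_idx flux divergence."""
            n = 0
            if surface_tension or viscous:
                n += 1  # the momxb..E_idx nest includes the energy equation
            if diffusion and not viscous:
                n += 1  # the diffusion nest adds the energy term only when not viscous
            return n

        # Any combination printing a count other than 1 (while a source is active)
        # deserves a close look during review:
        for v, s, d in product((False, True), repeat=3):
            if v or s or d:
                print(f"viscous={v}, surface_tension={s}, diffusion={d}: {energy_passes(v, s, d)}")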


    qodo-merge-pro bot commented Jul 11, 2025

    PR Code Suggestions ✨

    Latest suggestions up to 42e8f58
    Explore these optional code suggestions:

    Category: Possible issue
    Fix duplicate dictionary key
    Suggestion Impact:The suggestion was directly implemented - the duplicate key "patch_icpp(1)%Y(10)" was changed to "patch_icpp(1)%Y(9)" exactly as suggested

    code diff:

    -    "patch_icpp(1)%Y(10)": "0.0",
    +    "patch_icpp(1)%Y(9)": "0.0",

    The same dictionary key "patch_icpp(1)%Y(10)" is defined twice, which means the
    first assignment will be overwritten by the second. This appears to be a
    copy-paste error.

    examples/1D_Premixed_Flame/case.py [78-79]

    -"patch_icpp(1)%Y(10)": "0.0",
    +"patch_icpp(1)%Y(9)": "0.0",
     "patch_icpp(1)%Y(10)": "(0.7452197771115008 + (0.7452197771115058 - 0.7452197771115008) * 0.5*(1-tanh(1250*(x-0.04))))",
    Suggestion importance[1-10]: 8


    Why: The suggestion correctly identifies a duplicate dictionary key patch_icpp(1)%Y(10), which is a bug that would cause the first value to be silently overwritten.

    Impact: Medium
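
    As a one-line illustration of the Python behavior behind this suggestion: duplicate keys in a dict literal are legal, and the last occurrence silently wins.

        d = {"patch_icpp(1)%Y(10)": "0.0",
             "patch_icpp(1)%Y(10)": "tanh profile expression"}
        print(d)  # {'patch_icpp(1)%Y(10)': 'tanh profile expression'} - the first value is discarded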
    Prevent division by zero

    Add a check to prevent division by zero when Ys_cell approaches 1.0. This could
    cause numerical instability or crashes when a species mass fraction is very
    close to unity.

    src/common/m_chemistry.fpp [253-258]

     $:GPU_LOOP(parallelism='[seq]')
     do i = chemxb, chemxe
    -    mass_diffusivities_mixavg_Cell(i - chemxb + 1) = &
    -        (mass_diffusivities_mixavg2(i - chemxb + 1) + mass_diffusivities_mixavg1(i - chemxb + 1))/ &
    -        2.0_wp*(1.0_wp - Xs_cell(i - chemxb + 1))/(1.0_wp - Ys_cell(i - chemxb + 1))
    +    if (abs(1.0_wp - Ys_cell(i - chemxb + 1)) > verysmall) then
    +        mass_diffusivities_mixavg_Cell(i - chemxb + 1) = &
    +            (mass_diffusivities_mixavg2(i - chemxb + 1) + mass_diffusivities_mixavg1(i - chemxb + 1))/ &
    +            2.0_wp*(1.0_wp - Xs_cell(i - chemxb + 1))/(1.0_wp - Ys_cell(i - chemxb + 1))
    +    else
    +        mass_diffusivities_mixavg_Cell(i - chemxb + 1) = 0.0_wp
    +    end if
     end do
    Suggestion importance[1-10]: 7


    Why: The suggestion correctly identifies a potential division-by-zero issue if Ys_cell is close to 1, and proposes a valid fix to improve numerical stability.

    Impact: Medium

    Previous suggestions

    Suggestions up to commit 61b88ee
    Category: Possible issue
    Fix duplicate dictionary key
    Suggestion Impact:The suggestion was directly implemented - the duplicate key "patch_icpp(1)%Y(10)" was changed to "patch_icpp(1)%Y(9)" exactly as suggested

    code diff:

    -    "patch_icpp(1)%Y(10)": "0.0",
    +    "patch_icpp(1)%Y(9)": "0.0",

    The same dictionary key patch_icpp(1)%Y(10) is defined twice, causing the first
    assignment to be overwritten. Use different species indices or remove the
    duplicate key definition.

    examples/1D_Premixed_Flame/case.py [78-79]

    -"patch_icpp(1)%Y(10)": "0.0",
    +"patch_icpp(1)%Y(9)": "0.0",
     "patch_icpp(1)%Y(10)": "(0.7452197771115008 + (0.7452197771115058 - 0.7452197771115008) * 0.5*(1-tanh(1250*(x-0.04))))",
    Suggestion importance[1-10]: 9


    Why: The suggestion correctly identifies a duplicate dictionary key patch_icpp(1)%Y(10), which would cause the first value to be silently overwritten, leading to incorrect initial conditions for the simulation.

    Impact: High
    Prevent division by zero

    Division by zero can occur when Ys_cell approaches 1.0 for a species. Add a
    small epsilon to the denominator to prevent numerical instability and division
    by zero errors.

    src/common/m_chemistry.fpp [253-258]

     $:GPU_LOOP(parallelism='[seq]')
     do i = chemxb, chemxe
         mass_diffusivities_mixavg_Cell(i - chemxb + 1) = &
             (mass_diffusivities_mixavg2(i - chemxb + 1) + mass_diffusivities_mixavg1(i - chemxb + 1))/ &
    -        2.0_wp*(1.0_wp - Xs_cell(i - chemxb + 1))/(1.0_wp - Ys_cell(i - chemxb + 1))
    +        2.0_wp*(1.0_wp - Xs_cell(i - chemxb + 1))/(1.0_wp - Ys_cell(i - chemxb + 1) + verysmall)
     end do
    Suggestion importance[1-10]: 8


    Why: The suggestion correctly identifies a potential division-by-zero error if a species mass fraction Ys_cell is 1, which is a valid physical scenario that would crash the simulation.

    Impact: Medium

    @DimAdam-01 DimAdam-01 closed this Jul 11, 2025
    @DimAdam-01 DimAdam-01 reopened this Jul 12, 2025

    codecov bot commented Jul 12, 2025

    Codecov Report

    Attention: Patch coverage is 40.50633% with 94 lines in your changes missing coverage. Please review.

    Project coverage is 43.74%. Comparing base (8026a1c) to head (742a84b).
    Report is 3 commits behind head on master.

    Files with missing lines                 | Patch %  | Lines
    src/simulation/m_rhs.fpp                 | 24.24%   | 40 Missing and 10 partials ⚠️
    src/common/m_chemistry.fpp               | 63.23%   | 25 Missing ⚠️
    src/simulation/m_riemann_solvers.fpp     | 17.39%   | 15 Missing and 4 partials ⚠️
    Additional details and impacted files
    @@            Coverage Diff             @@
    ##           master     #934      +/-   ##
    ==========================================
    + Coverage   43.71%   43.74%   +0.02%     
    ==========================================
      Files          68       68              
      Lines       18360    18470     +110     
      Branches     2292     2310      +18     
    ==========================================
    + Hits         8026     8079      +53     
    - Misses       8945     8992      +47     
    - Partials     1389     1399      +10     

    ☔ View full report in Codecov by Sentry.
