
Fix mv read loop hardcoding 4 instead of nnode #1190

Open
sbryngelson wants to merge 5 commits into MFlowCode:master from sbryngelson:fix/mv-loop-nnode

Conversation

sbryngelson (Member) commented Feb 21, 2026

User description

Summary

Severity: HIGH — mv data file loop uses wrong bounds and offsets when nnode != 4.

File: src/pre_process/m_start_up.fpp, lines 479-482

The pb (pressure) read loop just above correctly uses nnode for both the loop bound and file number offset calculation. The mv (mass/velocity) read loop immediately below hardcodes 4 instead.

Before

! pb loop (correct)
do r = 1, nnode
    write (file_num, '(I0)') sys_size + r + (i - 1)*nnode
    file_loc = ... '/pb' ...

! mv loop (wrong)
do r = 1, 4                                              ! should be nnode
    write (file_num, '(I0)') sys_size + r + (i - 1)*4    ! should be nnode
    file_loc = ... '/mv' ...

After

do r = 1, nnode
    write (file_num, '(I0)') sys_size + r + (i - 1)*nnode

Why this went undetected

nnode is 4 in the most common configuration, so the hardcoded value happens to be correct.
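The off-by-stride arithmetic can be sketched numerically (a minimal Python illustration; the `sys_size` and `nb` values are made up, and `file_numbers` is a hypothetical helper, not part of the codebase):

```python
# Sketch of the pb/mv file-numbering scheme described above.
# sys_size and nb are illustrative values, not taken from MFC.
sys_size = 10
nb = 2

def file_numbers(nnode, stride):
    """File indices produced by: sys_size + r + (i - 1)*stride."""
    return [sys_size + r + (i - 1) * stride
            for i in range(1, nb + 1)        # bubble index
            for r in range(1, nnode + 1)]    # quadrature-node index

# With the default nnode = 4, the hardcoded stride of 4 is harmless.
assert file_numbers(4, 4) == [11, 12, 13, 14, 15, 16, 17, 18]

# With nnode = 6, the hardcoded stride of 4 makes bubble 2 reuse
# file numbers already consumed by bubble 1 (15 and 16 collide).
correct = file_numbers(6, 6)  # 11..16, then 17..22
buggy = file_numbers(6, 4)    # 11..16, then 15..20
assert len(set(buggy)) < len(set(correct))
```

With `nnode = 4` the two strides coincide, which is why the bug only surfaces for non-default quadrature-node counts.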

Test plan

  • Run bubble restart with non-default nnode value
  • Verify all mv data files are read correctly

🤖 Generated with Claude Code

Fixes #1210


CodeAnt-AI Description

Fix reading and MPI I/O for QBMM variables when quadrature node count is not 4

What Changed

  • Replaced hardcoded "4" with the nnode parameter across file read loops, MPI I/O view/var allocations, and buffer pack/unpack loops so QBMM fields (pb/mv and related buffers) scale with actual quadrature nodes
  • Corrected the mv data-file read loop in startup so mv files are located and read correctly when nnode != 4
  • Adjusted MPI buffer sizes and array allocations so MPI file views, reads, and send/recv buffers match the expanded QBMM variable range

Impact

✅ Correct mv file reads with non-default nnode
✅ Fewer MPI I/O size/view mismatches for QBMM variables
✅ Correct buffer packing/unpacking for variable node counts


Summary by CodeRabbit

  • Bug Fixes
    • Corrected MPI communication buffers and I/O data allocation to scale with the actual number of quadrature nodes (nnode), improving compatibility with non-default node counts and memory sizing.

Copilot AI review requested due to automatic review settings February 21, 2026 03:23
codeant-ai bot commented Feb 21, 2026

CodeAnt AI is reviewing your PR.



coderabbitai bot commented Feb 21, 2026

📝 Walkthrough


The pull request systematically replaces hardcoded multiplier 4 with the dynamic nnode variable across MPI communication buffers, I/O initialization, and data-reading paths. This changes buffer sizing, memory layout, and indexing calculations from fixed to node-count-dependent values throughout initialization, allocation, and loop bounds.

Changes

  • MPI Communication Buffers (src/common/m_mpi_common.fpp): Replaced all nb*4 calculations with nb*nnode in buffer indexing, MPI datatype subarray creations, loop bounds, and r-index computations affecting packing/unpacking and GPU parallel loops when qbmm_comm is active.
  • Pre-Process I/O Initialization (src/pre_process/m_global_parameters.fpp, src/pre_process/m_start_up.fpp): Updated MPI I/O data allocation sizes and loop bounds from sys_size + 2*nb*4 to sys_size + 2*nb*nnode; adjusted file index calculations in data-reading loops from (i - 1)*4 to (i - 1)*nnode for bubble variable reads.
  • Simulation I/O Initialization (src/simulation/m_global_parameters.fpp): Changed MPI I/O data allocation sizes from sys_size + 2*nb*4 to sys_size + 2*nb*nnode for bubble-related data in parallel I/O initialization.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested labels

Review effort 3/5

Poem

🐰 Four was once the magic key,
Now nnode sets the buffers free,
Scaling wide across the nodes,
Memory dances down new roads,
Dynamic sizing, bold and true!

🚥 Pre-merge checks | ✅ 3 | ❌ 2

❌ Failed checks (1 warning, 1 inconclusive)

  • Linked Issues check (⚠️ Warning): The PR addresses the mv loop hardcoding but does not address the uninitialized time_real variable bug from linked issue #1210. Resolution: add a fix for the uninitialized time_real in src/post_process/m_data_output.fpp to align with the issue #1210 objectives.
  • Out of Scope Changes check (❓ Inconclusive): Changes to MPI buffer allocations and QBMM variable handling are consistent with the stated objective of parameterizing the hardcoded 4s as nnode, though they go beyond the stated mv-loop scope. Resolution: clarify whether the MPI I/O and buffer changes are intended fixes for the same nnode parameterization issue or a separate scope.

✅ Passed checks (3 passed)

  • Title check (✅ Passed): The title accurately and concisely summarizes the primary fix: replacing the hardcoded 4 with nnode in the mv read loop.
  • Description check (✅ Passed): The PR description includes a clear summary, a before/after code comparison, an issue reference, and a test plan, though the testing checklist items are unchecked.
  • Docstring Coverage (✅ Passed): No functions found in the changed files to evaluate; skipping the docstring coverage check.


@codeant-ai codeant-ai bot added the size:XS This PR changes 0-9 lines, ignoring generated files label Feb 21, 2026
codeant-ai bot commented Feb 21, 2026

Nitpicks 🔍

🔒 No security issues identified
⚡ Recommended areas for review

  • Parallel I/O inconsistency
    The serial mv loop was updated to use nnode, but the parallel ic-reading loop still uses a hardcoded 4 in the bound sys_size + 2*nb*4. This is an inconsistency: parallel I/O ranges should be updated to use nnode as well (or otherwise validated) so serialized and parallel read logic match.

  • File-number string width
    file_num is allocated with length based only on sys_size digits. When computing filenames with indices like sys_size + r + (i-1)*nnode (and the pb/mv ranges), the printed numeric value may require more digits than computed. This can lead to truncated filenames or incorrect file names when nnode, nb, or sys_size grows. Ensure file_num can hold the maximum index used.

  • Serial mv loop update
    The serial mv file-read loop was changed to iterate r=1..nnode and to compute the file number using nnode (sys_size + r + (i-1)*nnode). Verify this matches the intended file-numbering scheme for all nb and nnode combinations and is consistent with the pb loop. Validate with non-default nnode to ensure expected files are opened and read.
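The file_num width concern above can be checked numerically (a Python sketch of the digit-count arithmetic; the sys_size, nb, and nnode values are hypothetical, chosen only to cross a digit boundary):

```python
from math import floor, log10

def digits(n):
    # Decimal digits needed to print n (n >= 1), matching the
    # Fortran expression int(floor(log10(real(n)))) + 1.
    return int(floor(log10(n))) + 1

sys_size, nb, nnode = 95, 2, 4          # illustrative values only
max_index = sys_size + 2 * nb * nnode   # largest file number written

# A buffer sized from sys_size alone holds 2 characters, but the
# largest index written needs 3; an internal '(I0)' write into the
# short buffer would overflow (typically filling with '*').
assert digits(sys_size) == 2
assert max_index == 111
assert digits(max_index) == 3
```

Sizing the buffer from the maximum index actually written (sys_size + 2*nb*nnode) removes the truncation risk for any nnode.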

codeant-ai bot commented Feb 21, 2026

CodeAnt AI finished reviewing your PR.

Copilot AI left a comment

Pull request overview

This PR fixes a bug in the bubble restart file reading logic where the mv data file loop was hardcoded to iterate 4 times instead of using the nnode variable, which could cause incorrect file reads when nnode != 4.

Changes:

  • Updated the mv data file loop to use nnode instead of the hardcoded value 4
  • Corrected the file number offset calculation to use nnode for proper indexing

cubic-dev-ai[bot]

This comment was marked as off-topic.

The pb loop just above correctly uses nnode for both the loop bound
and file number offset. The mv loop should match.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
sbryngelson and others added 2 commits February 21, 2026 00:51
…ommon

Fix all remaining instances of the literal integer 4 representing the
QBMM quadrature node count (nnode) across simulation and common modules,
completing the nnode parameterization started in the PR.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
Fix remaining instances of the literal 4 representing QBMM quadrature
node count in pre_process/m_global_parameters.fpp and m_start_up.fpp,
consistent with the fixes already applied to common and simulation.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
codecov bot commented Feb 21, 2026

Codecov Report

❌ Patch coverage is 10.00000% with 18 lines in your changes missing coverage. Please review.
✅ Project coverage is 44.05%. Comparing base (df28255) to head (9a63b88).
⚠️ Report is 9 commits behind head on master.

Files with missing lines:
  • src/common/m_mpi_common.fpp: 7.14% patch coverage, 13 lines missing ⚠️
  • src/pre_process/m_start_up.fpp: 0.00% patch coverage, 3 lines missing ⚠️
  • src/pre_process/m_global_parameters.fpp: 33.33% patch coverage, 2 lines missing ⚠️
Additional details and impacted files
@@           Coverage Diff           @@
##           master    #1190   +/-   ##
=======================================
  Coverage   44.05%   44.05%           
=======================================
  Files          70       70           
  Lines       20496    20496           
  Branches     1989     1989           
=======================================
  Hits         9030     9030           
  Misses      10328    10328           
  Partials     1138     1138           

☔ View full report in Codecov by Sentry.

codeant-ai bot commented Feb 22, 2026

CodeAnt AI is running Incremental review



@codeant-ai codeant-ai bot added size:M This PR changes 30-99 lines, ignoring generated files and removed size:XS This PR changes 0-9 lines, ignoring generated files labels Feb 22, 2026
 if (bubbles_euler .and. qbmm .and. .not. polytropic) then
-    do i = sys_size + 1, sys_size + 2*nb*4
+    do i = sys_size + 1, sys_size + 2*nb*nnode
         allocate (MPI_IO_DATA%var(i)%sf(0:m, 0:n, 0:p))

Suggestion: In the loop initializing the extended MPI_IO_DATA%var range for the bubbles_euler .and. qbmm .and. .not. polytropic case, sf is allocated and then immediately pointer-assigned to null(). This discards the only reference to the newly allocated memory, leaking it for each i in that range. Since sf is later pointer-associated with other targets, the allocation is unnecessary; remove it so only the nullification remains. [memory leak]

Severity Level: Major ⚠️
- ⚠️ Extra MPI_IO_DATA extended-range buffers leaked at initialization.
- ⚠️ Increased memory footprint for qbmm Euler bubble runs.
Suggested change (remove this line):
-allocate (MPI_IO_DATA%var(i)%sf(0:m, 0:n, 0:p))
Steps of Reproduction ✅
1. Run any simulation configuration that sets `bubbles_euler = .true.`, `qbmm = .true.`,
and `polytropic = .false.` in the input, causing the non-polytropic Eulerian bubble + qbmm
path to be active when `m_global_parameters` is initialized.

2. During startup, the driver calls `s_initialize_global_parameters_module` in
`src/simulation/m_global_parameters.fpp` (see around lines 1100–1400), which computes
`sys_size` and then executes the allocation block at lines 1235–1238 for
`MPI_IO_DATA%var(sys_size+1:sys_size+2*nb*nnode)%sf`.

3. For each `i` in `sys_size + 1, ..., sys_size + 2*nb*nnode`, the statement `allocate
(MPI_IO_DATA%var(i)%sf(0:m, 0:n, 0:p))` allocates a new target and associates the pointer
component `sf` with it, and the next line `MPI_IO_DATA%var(i)%sf => null()` immediately
disassociates `sf` from that target.

4. Because no other pointer references the just-allocated targets and there is no
subsequent `deallocate` for them (finalization in `s_finalize_global_parameters_module`
only nullifies `sf` again; see around lines 1500–1560), those allocations become
unreachable and remain allocated until process termination, which can be confirmed by
running the executable under a memory debugger (e.g., Valgrind) and observing leaked
blocks originating from `m_global_parameters.fpp:1236`.

codeant-ai bot commented Feb 22, 2026

CodeAnt AI Incremental review completed.

@github-actions

Claude Code Review

No issues found. Checked for bugs and CLAUDE.md compliance.

coderabbitai bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/pre_process/m_start_up.fpp (1)

417-420: ⚠️ Potential issue | 🟡 Minor

file_num buffer may be too short for sys_size + nb*nnode when nnode is large.

The buffer is sized for the digit count of sys_size alone, but numbers written during pb/mv reads go up to sys_size + nb*nnode. For the current default (nnode = 4) this is benign because sys_size already incorporates nb*nmom (which exceeds nb*4). However, for larger nnode values — which this PR explicitly enables — the number of digits can increase, causing Fortran to fill file_num with '*' characters and silently failing to find the data file.

🐛 Proposed fix
-character(LEN= &
-          int(floor(log10(real(sys_size, wp)))) + 1) :: file_num
+character(LEN= &
+          int(floor(log10(real(sys_size + 2*nb*nnode, wp)))) + 1) :: file_num


Labels

size:M This PR changes 30-99 lines, ignoring generated files

Development

Successfully merging this pull request may close these issues.

time_real used uninitialized in Lagrangian bubble output

1 participant