feat: measuring compute efficiency per job #221
base: main
Conversation
Walkthrough

This update introduces job efficiency reporting to the SLURM executor plugin for Snakemake. It replaces the log cleanup mechanism with a shutdown hook, adds new configuration options for efficiency reports, implements the report generation logic, updates documentation, adjusts dependencies, and expands the test suite to validate the new feature.

Changes
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant User
    participant Executor
    participant EfficiencyReport
    participant SLURM (sacct)
    participant FileSystem
    User->>Executor: Run workflow with efficiency_report enabled
    Executor->>Executor: On shutdown, call clean_old_logs
    Executor->>EfficiencyReport: create_efficiency_report(threshold, uuid, path, logger)
    EfficiencyReport->>SLURM (sacct): Query job data (by workflow UUID)
    SLURM (sacct)-->>EfficiencyReport: Return job accounting data
    EfficiencyReport->>FileSystem: Write efficiency_report_<uuid>.csv
    EfficiencyReport->>Executor: Log report location and warnings if needed
```
…-executor-plugin-slurm into docs/review-new-docs
…TODOs for clarification
…on for regular jobs, as this is covered in the where to do configuration section and the main snakemake docs
…seful for a user -- just explain special cases MPI and GPU below
Actionable comments posted: 0
♻️ Duplicate comments (1)
tests/tests.py (1)
41-60: ⚠️ Potential issue — Critical issue: Test will fail without proper SLURM environment mocking.
Based on previous review discussions, this test will fail in environments without SLURM because the efficiency report generation depends on the `sacct` command. The current implementation searches for files that won't be created if `sacct` fails. You need to implement the mocking solution discussed in previous reviews. Here's the corrected implementation:
def test_simple_workflow(self, tmp_path): + import subprocess + from unittest.mock import patch, MagicMock + + # Mock sacct output with expected format: + # JobID|JobName|Comment|Elapsed|TotalCPU|NNodes|NCPUS|MaxRSS|ReqMem + mock_sacct_output = "1234|test_job|rule_test|00:01:30|00:00:45|1|2|1024K|2G\n1235|another_job|rule_another|00:02:00|00:01:00|1|4|2048K|4G" + + # Create a mock result object + mock_result = MagicMock() + mock_result.stdout = mock_sacct_output + mock_result.returncode = 0 + + def mock_subprocess_run(cmd, *args, **kwargs): + # Check if this is a sacct command + if isinstance(cmd, list) and len(cmd) > 0 and 'sacct' in cmd[0]: + return mock_result + else: + # For other commands, call the real subprocess.run + return subprocess.run(cmd, *args, **kwargs) + + # Patch subprocess.run in the correct module + with patch('snakemake_executor_plugin_slurm.subprocess.run', side_effect=mock_subprocess_run): self.run_workflow("simple", tmp_path) - # The efficiency report is created in the - # current working directory pattern = re.compile(r"efficiency_report_[\w-]+\.csv") report_found = False - # as the directory is unclear, we need a path walk: - for root, dirs, files in os.walk("/tmp/pytest-of-runner/"): - for fname in files: - if pattern.match(fname): - report_found = True - report_path = os.path.join(root, fname) - # Verify it's not empty - assert ( - os.stat(report_path).st_size > 0 - ), f"Efficiency report {report_path} is empty" - break + # Check both current working directory and tmp_path for the report + from pathlib import Path + for search_dir in [Path.cwd(), tmp_path]: + for filepath in search_dir.glob("efficiency_report_*.csv"): + if pattern.match(filepath.name): + report_found = True + assert filepath.stat().st_size > 0, f"Efficiency report {filepath} is empty" + break + if report_found: + break + assert report_found, "Efficiency report file not found"Additional improvements:
- Remove hardcoded path: Replaced the environment-specific "/tmp/pytest-of-runner/" path with a proper search in the current directory and `tmp_path`
- Add SLURM mocking: Mock the `sacct` command to return realistic test data
- Fix unused variable: Use `_` instead of `dirs` in the loop (addresses static analysis warning)

🧰 Tools
🪛 Ruff (0.11.9)
50-50: Loop control variable `dirs` not used within loop body. Rename unused `dirs` to `_dirs` (B007)
🪛 Pylint (3.3.7)
[convention] 41-41: Missing function or method docstring
(C0116)
[warning] 50-50: Unused variable 'dirs'
(W0612)
🧹 Nitpick comments (2)
tests/tests.py (2)
28-39
: Add missing docstring for the new test class.The static analysis correctly flags missing class docstring. Consider adding documentation to explain the purpose of this test class.
class TestEfficiencyReport(snakemake.common.tests.TestWorkflowsLocalStorageBase): + """Test class for verifying efficiency report generation functionality.""" __test__ = True
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 28-28: Missing class docstring
(C0115)
[convention] 31-31: Missing function or method docstring
(C0116)
[convention] 34-34: Missing function or method docstring
(C0116)
50-50: Fix unused variable flagged by static analysis.
The `dirs` variable in `os.walk()` is not used. Replace it with `_` to indicate it's intentionally unused.

```diff
- for root, dirs, files in os.walk("/tmp/pytest-of-runner/"):
+ for root, _, files in os.walk("/tmp/pytest-of-runner/"):
```

Note: This suggestion is part of the larger refactoring needed above.
🧰 Tools
🪛 Ruff (0.11.9)
50-50: Loop control variable `dirs` not used within loop body. Rename unused `dirs` to `_dirs` (B007)
🪛 Pylint (3.3.7)
[warning] 50-50: Unused variable 'dirs'
(W0612)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/tests.py
(2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/tests.py (1)
snakemake_executor_plugin_slurm/__init__.py (1)
ExecutorSettings
(42-124)
🪛 Pylint (3.3.7)
tests/tests.py
[convention] 1-1: Missing module docstring
(C0114)
[convention] 28-28: Missing class docstring
(C0115)
[convention] 31-31: Missing function or method docstring
(C0116)
[convention] 34-34: Missing function or method docstring
(C0116)
[convention] 41-41: Missing function or method docstring
(C0116)
[warning] 50-50: Unused variable 'dirs'
(W0612)
🪛 Ruff (0.11.9)
tests/tests.py
50-50: Loop control variable dirs
not used within loop body
Rename unused dirs
to _dirs
(B007)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
🔇 Additional comments (2)
tests/tests.py (2)
1-2
: Imports added for filesystem operations and pattern matching.The new imports support the efficiency report file search functionality in the test.
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 1-1: Missing module docstring
(C0114)
22-25
: Timing adjustment for better test stability.Increasing the initial delay from 1 to 2 seconds should help with test reliability.
@coderabbitai: This line in the test
🧩 Analysis chain

@cmeesters Let me examine the current test implementation and the efficiency report creation code to suggest better alternatives to the hard-coded path. Based on my analysis, here are several better approaches to replace the hard-coded path:

Recommended Solutions

1. Use the actual log directory (Best approach)

The efficiency report is saved in the configured log directory or current working directory. You should look there instead:

```python
def test_simple_workflow(self, tmp_path):
    self.run_workflow("simple", tmp_path)
    # Look in current working directory first
    cwd = os.getcwd()
    efficiency_files = []
    # Search in current working directory
    for file in os.listdir(cwd):
        if re.match(r'efficiency_report_[\w-]+\.csv', file):
            efficiency_files.append(os.path.join(cwd, file))
    # If no files found in cwd, check the tmp_path directory tree
    if not efficiency_files:
        for root, _, files in os.walk(str(tmp_path)):
            for file in files:
                if re.match(r'efficiency_report_[\w-]+\.csv', file):
                    efficiency_files.append(os.path.join(root, file))
    assert len(efficiency_files) > 0, "No efficiency report file found"
    assert os.path.getsize(efficiency_files[0]) > 0, "Efficiency report file is empty"
```

2. Use environment variables (Portable approach)

```python
import tempfile

def test_simple_workflow(self, tmp_path):
    self.run_workflow("simple", tmp_path)
    # Use system temporary directory
    temp_dir = os.environ.get('TMPDIR', tempfile.gettempdir())
    # Search for efficiency report files
    found = False
    for root, _, files in os.walk(temp_dir):
        for file in files:
            if re.match(r'efficiency_report_[\w-]+\.csv', file):
                file_path = os.path.join(root, file)
                assert os.path.getsize(file_path) > 0
                found = True
                break
        if found:
            break
    assert found, f"No efficiency report found in {temp_dir}"
```

3. Mock the file creation (Most reliable for testing)

```python
from unittest.mock import patch, mock_open

def test_simple_workflow(self, tmp_path):
    mock_file = mock_open()
    with patch("builtins.open", mock_file):
        with patch("pandas.DataFrame.to_csv") as mock_to_csv:
            self.run_workflow("simple", tmp_path)
            # Verify that to_csv was called (efficiency report was created)
            mock_to_csv.assert_called_once()
            # Verify the filename pattern
            call_args = mock_to_csv.call_args[0]
            assert re.match(r'.*efficiency_report_[\w-]+\.csv', call_args[0])
```

4. Configure explicit output directory

If possible, modify the test setup to specify where the efficiency report should be written:

```python
def test_simple_workflow(self, tmp_path):
    # Create a specific directory for test outputs
    output_dir = tmp_path / "efficiency_reports"
    output_dir.mkdir()
    # If the executor supports configuring the output directory, use it
    # Then search only in that directory
    self.run_workflow("simple", tmp_path)
    efficiency_files = list(output_dir.glob("efficiency_report_*.csv"))
    assert len(efficiency_files) > 0
    assert efficiency_files[0].stat().st_size > 0
```

Why `tmp_path` might not have worked
The efficiency report creation logic needs to be aligned with where the test expects to find the file. I recommend approach #1 as it matches the actual file creation logic in your code, making the test more realistic and maintainable.

Perfect! Now I can see exactly what's happening. Looking at the efficiency report creation code, the file is saved to:
In your Recommended SolutionsOption 1: Search in Current Working Directory (Simplest)Since no def test_simple_workflow(self, tmp_path):
self.run_workflow("simple", tmp_path)
# The efficiency report is saved in current working directory
pattern = re.compile(r"efficiency_report_[\w-]+\.csv")
cwd = os.getcwd()
# Search current working directory
for fname in os.listdir(cwd):
if pattern.match(fname):
report_path = os.path.join(cwd, fname)
assert os.stat(report_path).st_size > 0, f"Efficiency report {report_path} is empty"
return # Test passed
assert False, f"Efficiency report file not found in {cwd}" Option 2: Configure logdir to use tmp_path (Most Controlled)Configure the test to save the efficiency report in a controlled location: def get_executor_settings(self) -> Optional[ExecutorSettingsBase]:
return ExecutorSettings(
efficiency_report=True,
logdir=Path("/tmp/test_efficiency_reports"), # Or use tmp_path somehow
init_seconds_before_status_checks=5,
)
def test_simple_workflow(self, tmp_path):
# Create the log directory
log_dir = Path("/tmp/test_efficiency_reports")
log_dir.mkdir(exist_ok=True)
self.run_workflow("simple", tmp_path)
pattern = re.compile(r"efficiency_report_[\w-]+\.csv")
# Search in the configured log directory
for fname in os.listdir(log_dir):
if pattern.match(fname):
report_path = log_dir / fname
assert report_path.stat().st_size > 0, f"Efficiency report {report_path} is empty"
return # Test passed
assert False, f"Efficiency report file not found in {log_dir}" Option 3: Hybrid Approach (Most Robust)Search in both possible locations: def test_simple_workflow(self, tmp_path):
self.run_workflow("simple", tmp_path)
pattern = re.compile(r"efficiency_report_[\w-]+\.csv")
# Check current working directory first
cwd = os.getcwd()
for fname in os.listdir(cwd):
if pattern.match(fname):
report_path = os.path.join(cwd, fname)
assert os.stat(report_path).st_size > 0, f"Efficiency report {report_path} is empty"
return # Test passed
# If not found in cwd, check if there's a configured logdir
settings = self.get_executor_settings()
if hasattr(settings, 'logdir') and settings.logdir:
for fname in os.listdir(settings.logdir):
if pattern.match(fname):
report_path = os.path.join(settings.logdir, fname)
assert os.stat(report_path).st_size > 0, f"Efficiency report {report_path} is empty"
return # Test passed
assert False, f"Efficiency report file not found in {cwd} or configured logdir" Option 4: Use Environment Variable for Temp DirectoryFor maximum portability across different CI environments: import tempfile
def test_simple_workflow(self, tmp_path):
# Temporarily change working directory to tmp_path
original_cwd = os.getcwd()
try:
os.chdir(tmp_path)
self.run_workflow("simple", tmp_path)
pattern = re.compile(r"efficiency_report_[\w-]+\.csv")
# Search in current directory (which is now tmp_path)
for fname in os.listdir("."):
if pattern.match(fname):
report_path = os.path.join(".", fname)
assert os.stat(report_path).st_size > 0, f"Efficiency report {report_path} is empty"
return # Test passed
assert False, f"Efficiency report file not found in {tmp_path}"
finally:
os.chdir(original_cwd) Why
Actionable comments posted: 2
♻️ Duplicate comments (1)
snakemake_executor_plugin_slurm/__init__.py (1)
796-817: ⚠️ Potential issue — Fix inconsistent return behavior.
The method returns `None` on error but has no explicit return on success, making the return behavior inconsistent. Add an explicit return statement at the end of the method:
self.logger.info( f"Efficiency report for workflow {self.run_uuid} saved to {logfile}." ) + return logfile
Alternatively, if no return value is needed, remove the `return None` on error for consistency.

🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 796-796: Either all return statements in a function should return an expression, or none of them should.
(R1710)
🧹 Nitpick comments (4)
tests/tests.py (2)
1-3
: Add module docstring and consider organizing imports.The static analysis tool correctly identifies the missing module docstring. Consider adding a brief description of what this test module covers.
+""" +Test suite for the SLURM executor plugin, including workflow execution, +efficiency reporting, GPU/GRES string handling, and resource configuration. +""" import os import re from pathlib import Path🧰 Tools
🪛 Pylint (3.3.7)
[convention] 1-1: Missing module docstring
(C0114)
29-42
: Add class and method docstrings.The static analysis tool correctly identifies missing docstrings for the new test class and methods.
class TestEfficiencyReport(snakemake.common.tests.TestWorkflowsLocalStorageBase): + """Test class for efficiency report generation functionality.""" __test__ = True def get_executor(self) -> str: + """Return the executor name for testing.""" return "slurm" def get_executor_settings(self) -> Optional[ExecutorSettingsBase]: + """Return executor settings with efficiency reporting enabled.""" return ExecutorSettings(🧰 Tools
🪛 Pylint (3.3.7)
[convention] 29-29: Missing class docstring
(C0115)
[convention] 32-32: Missing function or method docstring
(C0116)
[convention] 35-35: Missing function or method docstring
(C0116)
snakemake_executor_plugin_slurm/__init__.py (2)
116-146: LGTM! Well-designed configuration fields with good defaults.
The new ExecutorSettings fields are well-documented and provide appropriate configurability for the efficiency reporting feature. The 80% default threshold is reasonable for CPU efficiency warnings.
Consider clarifying in the `efficiency_report_path` help text that it should be a directory path, not a full file path, since the implementation appends the filename to this path.
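For readers skimming this thread, a minimal sketch of how these settings are exercised follows, mirroring the test setup in this PR. Only `efficiency_report` and `efficiency_report_path` are field names confirmed in this review; the directory value is a placeholder.

```python
from pathlib import Path

from snakemake_executor_plugin_slurm import ExecutorSettings

# enable the report and point it at a directory (not a file path);
# the filename efficiency_report_<uuid>.csv is appended by the plugin
settings = ExecutorSettings(
    efficiency_report=True,
    efficiency_report_path=Path("/scratch/my_workflow/reports"),  # placeholder
)
```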
904-904: Fix typo in comment.
There's a typo in the comment that should be corrected for clarity.

```diff
- # we construct a path object to allow for a customi
+ # we construct a path object to allow for a custom
```
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
snakemake_executor_plugin_slurm/__init__.py
(6 hunks)tests/tests.py
(2 hunks)
🧰 Additional context used
🧠 Learnings (1)
snakemake_executor_plugin_slurm/__init__.py (1)
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/__init__.py:0-0
Timestamp: 2025-06-03T17:10:51.258Z
Learning: In the snakemake-executor-plugin-slurm repository, the create_efficiency_report method in snakemake_executor_plugin_slurm/__init__.py contains temporary testing/debugging code that logs directory contents using os.listdir() and directory paths. This code was inserted for testing purposes only and should be removed from the production implementation.
🧬 Code Graph Analysis (2)
snakemake_executor_plugin_slurm/__init__.py (2)
snakemake_executor_plugin_slurm/utils.py (1)
delete_empty_dirs
(26-48)snakemake_executor_plugin_slurm/efficiency_report.py (3)
time_to_seconds
(5-17)parse_maxrss
(20-30)parse_reqmem
(33-50)
tests/tests.py (1)
snakemake_executor_plugin_slurm/__init__.py (1)
ExecutorSettings
(42-146)
🪛 Pylint (3.3.7)
snakemake_executor_plugin_slurm/__init__.py
[error] 19-19: Unable to import 'pandas'
(E0401)
[refactor] 796-796: Either all return statements in a function should return an expression, or none of them should.
(R1710)
tests/tests.py
[convention] 1-1: Missing module docstring
(C0114)
[convention] 29-29: Missing class docstring
(C0115)
[convention] 32-32: Missing function or method docstring
(C0116)
[convention] 35-35: Missing function or method docstring
(C0116)
[convention] 43-43: Missing function or method docstring
(C0116)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
🔇 Additional comments (9)
tests/tests.py (1)
23-27: Minor timing adjustment looks good.
The increase from 1 to 2 seconds for `init_seconds_before_status_checks` is a reasonable change to improve test reliability.

snakemake_executor_plugin_slurm/__init__.py (8)
18-19
: LGTM! Pandas import is appropriate for the new efficiency reporting functionality.The pandas import is necessary for DataFrame operations in the efficiency report generation. The static analysis error is a false positive since pandas is properly added as a dependency.
🧰 Tools
🪛 Pylint (3.3.7)
[error] 19-19: Unable to import 'pandas'
(E0401)
32-37
: LGTM! Well-structured imports for the new efficiency reporting functionality.The imports from the local modules are appropriately organized and necessary for the efficiency reporting feature.
191-204: LGTM! Clean shutdown method implementation.
The shutdown method override properly calls the parent method first, then performs the additional cleanup and optional efficiency reporting. This is a well-structured approach that follows good OOP practices.
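As a rough illustration of the pattern being praised here — not the plugin's verbatim code, and the `self.workflow.executor_settings` accessor is an assumption:

```python
# sketch: shutdown hook that runs base-class teardown first, then the
# plugin-specific cleanup and the optional efficiency report
def shutdown(self):
    super().shutdown()              # let the base executor finish first
    self.clean_old_logs()           # cleanup moved from atexit to shutdown
    settings = self.workflow.executor_settings  # assumed accessor
    if settings.efficiency_report:
        self.create_efficiency_report()
```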
215-215
: LGTM! Improved logging consistency.Adding the trailing period improves consistency in log message formatting.
224-224
: LGTM! Appropriate severity level for deletion failures.Changing from warning to error is appropriate since file deletion failures represent actual problems that should be highlighted.
229-231
: LGTM! Improved error handling and messaging.The enhanced error message and appropriate error severity level provide better feedback when directory cleanup fails.
788-794
: LGTM! Improved error message formatting.The enhanced formatting makes the error message clearer and more readable while maintaining the same important information.
796-921: LGTM! Comprehensive efficiency reporting implementation with excellent functionality.
This is a well-implemented efficiency reporting feature that:
✅ Uses controlled sacct format ensuring column presence (following previous feedback)
✅ Properly handles missing Comment column with appropriate warnings
✅ Protects against division by zero in memory usage calculations
✅ Filters out batch/extern jobs appropriately
✅ Provides configurable efficiency threshold warnings
✅ Saves reports to configurable locations

The implementation demonstrates good understanding of SLURM accounting data and provides valuable insights for workflow optimization.
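For readers unfamiliar with the underlying accounting query, here is a hedged sketch of the kind of call and parsing this feature relies on. The exact flags, job-name filter, and column handling in the plugin may differ; the pipe-separated column layout is taken from the mock data shown earlier in this thread.

```python
import io
import shlex
import subprocess

import pandas as pd


def fetch_job_accounting(run_uuid: str) -> pd.DataFrame:
    """Illustrative only: pull per-job accounting data for one workflow run."""
    fields = "JobID,JobName,Comment,Elapsed,TotalCPU,NNodes,NCPUS,MaxRSS,ReqMem"
    # '-X' restricts output to allocations (skips .batch/.extern steps);
    # filtering by job name assumes jobs are named with the run UUID
    cmd = f"sacct -X --parsable2 --noheader --format={fields} --name {run_uuid}"
    result = subprocess.run(shlex.split(cmd), capture_output=True, text=True, check=True)
    return pd.read_csv(
        io.StringIO(result.stdout),
        sep="|",
        header=None,
        names=fields.split(","),
    )
```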
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 796-796: Either all return statements in a function should return an expression, or none of them should.
(R1710)
Actionable comments posted: 0
♻️ Duplicate comments (1)
tests/tests.py (1)
43-72: ⚠️ Potential issue — CRITICAL: Implement proper SLURM environment mocking as discussed in previous reviews.
This test implementation still has the fundamental issues that were extensively discussed in previous reviews:
- Hardcoded paths: Using `/tmp/pytest-of-runner/` makes the test fragile and non-portable
- Missing SLURM mocking: The test will fail in CI environments without SLURM since the `sacct` command is not mocked
- Path inconsistencies: The search logic is disconnected from where the efficiency report is actually created
Based on the extensive previous review discussion, you need to implement the recommended mocking solution.
def test_simple_workflow(self, tmp_path): - # for an unkown reason, the efficiency report is not created - # reliably in `tmp_path`, so we use a fixed path - # to ensure the test is reproducible - - # a worklfow aborted: - # error message: - # OSError: Cannot save file into a non-existent directory: - # '/tmp/efficiency_report_test' - # runpath = Path("/tmp/efficiency_report_test") - # runpath.mkdir(parents=True, exist_ok=True) - self.run_workflow("simple", tmp_path) - - # The efficiency report is created in the - # current working directory - pattern = re.compile(r"efficiency_report_[\w-]+\.csv") - report_found = False - - # as the directory is unclear, we need a path walk: - for root, _, files in os.walk("/tmp/pytest-of-runner/"): - for fname in files: - if pattern.match(fname): - report_found = True - report_path = os.path.join(root, fname) - # Verify it's not empty - assert ( - os.stat(report_path).st_size > 0 - ), f"Efficiency report {report_path} is empty" - break - assert report_found, "Efficiency report file not found" + """Test efficiency report generation with proper SLURM mocking.""" + import subprocess + from unittest.mock import patch, MagicMock + from pathlib import Path + + # Mock sacct output with the exact format expected: + # JobID|JobName|Comment|Elapsed|TotalCPU|NNodes|NCPUS|MaxRSS|ReqMem + mock_sacct_output = "1234|test_job|rule_test|00:01:30|00:00:45|1|2|1024K|2G\n1235|another_job|rule_another|00:02:00|00:01:00|1|4|2048K|4G" + + # Create a mock result object that mimics subprocess.CompletedProcess + mock_result = MagicMock() + mock_result.stdout = mock_sacct_output + mock_result.returncode = 0 + + # Mock subprocess.run to return our fake sacct data + def mock_subprocess_run(cmd, *args, **kwargs): + # Check if this is a sacct command + if isinstance(cmd, list) and len(cmd) > 0 and 'sacct' in cmd[0]: + return mock_result + else: + # For other commands, call the real subprocess.run + return subprocess.run(cmd, *args, **kwargs) + + # Patch subprocess.run in the correct module + with patch('snakemake_executor_plugin_slurm.subprocess.run', side_effect=mock_subprocess_run): + self.run_workflow("simple", tmp_path) + + # The efficiency report is created in the current working directory + pattern = re.compile(r"efficiency_report_[\w-]+\.csv") + report_found = False + + # Check both cwd and the tmp_path for the report file + for search_dir in [Path.cwd(), tmp_path]: + for filepath in search_dir.glob("efficiency_report_*.csv"): + if pattern.match(filepath.name): + report_found = True + # Verify it's not empty + assert filepath.stat().st_size > 0, f"Efficiency report {filepath} is empty" + break + if report_found: + break + + assert report_found, "Efficiency report was not generated"This solution:
- ✅ Eliminates hardcoded paths
- ✅ Mocks the SLURM `sacct` command as extensively discussed in previous reviews
- ✅ Works in any environment (with or without SLURM)
- ✅ Tests the actual efficiency report generation logic
- ✅ Uses proper search locations instead of hardcoded paths
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 43-43: Missing function or method docstring
(C0116)
🧹 Nitpick comments (2)
tests/tests.py (2)
3-3: Remove unused import.
The `Path` import from `pathlib` is not used in the current implementation.

```diff
-from pathlib import Path
```
🧰 Tools
🪛 Ruff (0.11.9)
3-3:
pathlib.Path
imported but unusedRemove unused import:
pathlib.Path
(F401)
🪛 Pylint (3.3.7)
[warning] 3-3: Unused Path imported from pathlib
(W0611)
29-42
: Add missing docstrings for better code documentation.The new test class and its methods lack docstrings, which affects code maintainability and understanding.
class TestEfficiencyReport(snakemake.common.tests.TestWorkflowsLocalStorageBase): + """Test class for SLURM efficiency reporting functionality.""" __test__ = True def get_executor(self) -> str: + """Return the executor type for efficiency report testing.""" return "slurm" def get_executor_settings(self) -> Optional[ExecutorSettingsBase]: + """Configure executor settings to enable efficiency reporting.""" return ExecutorSettings( efficiency_report=True, init_seconds_before_status_checks=5, # efficiency_report_path=Path("/tmp/efficiency_report_test"), # seconds_between_status_checks=5, )🧰 Tools
🪛 Pylint (3.3.7)
[convention] 29-29: Missing class docstring
(C0115)
[convention] 32-32: Missing function or method docstring
(C0116)
[convention] 35-35: Missing function or method docstring
(C0116)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/tests.py
(2 hunks)
🧰 Additional context used
🪛 Ruff (0.11.9)
tests/tests.py
3-3: pathlib.Path
imported but unused
Remove unused import: pathlib.Path
(F401)
🪛 Pylint (3.3.7)
tests/tests.py
[convention] 1-1: Missing module docstring
(C0114)
[warning] 3-3: Unused Path imported from pathlib
(W0611)
[convention] 29-29: Missing class docstring
(C0115)
[convention] 32-32: Missing function or method docstring
(C0116)
[convention] 35-35: Missing function or method docstring
(C0116)
[convention] 43-43: Missing function or method docstring
(C0116)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
Actionable comments posted: 0
♻️ Duplicate comments (1)
tests/tests.py (1)
43-74: ⚠️ Potential issue — Critical issues remain unresolved; implement the previously recommended mocking solution.
This test still suffers from the same fundamental issues that were extensively discussed in previous reviews:
- Hardcoded paths: Using `/tmp/pytest-of-runner/` makes tests fragile and environment-dependent
- Missing SLURM mocking: The test will fail in CI environments without SLURM because the `sacct` command is unavailable
- Debug code: Contains print statements and commented-out troubleshooting code

The comprehensive mocking solution was already provided in previous reviews but hasn't been implemented. As discussed, the root cause is that `sacct` fails in test environments, so no efficiency report gets created. Implement the previously recommended solution:
def test_simple_workflow(self, tmp_path): - # for an unkown reason, the efficiency report is not created - # reliably in `tmp_path`, so we use a fixed path - # to ensure the test is reproducible - - # a worklfow aborted: - # error message: - # OSError: Cannot save file into a non-existent directory: - # '/tmp/efficiency_report_test' - # runpath = Path("/tmp/efficiency_report_test") - # runpath.mkdir(parents=True, exist_ok=True) - self.run_workflow("simple", tmp_path) - - # The efficiency report is created in the - # current working directory - pattern = re.compile(r"efficiency_report_[\w-]+\.csv") - report_found = False - # report the tmp_path directory for debugging - print(f"'tmp_path' is: {tmp_path}") - - # as the directory is unclear, we need a path walk: - for root, _, files in os.walk("/tmp/pytest-of-runner/"): - for fname in files: - if pattern.match(fname): - report_found = True - report_path = os.path.join(root, fname) - # Verify it's not empty - assert ( - os.stat(report_path).st_size > 0 - ), f"Efficiency report {report_path} is empty" - break - assert report_found, "Efficiency report file not found" + """Test efficiency report generation with proper SLURM mocking.""" + import subprocess + from unittest.mock import patch, MagicMock + from pathlib import Path + + # Mock sacct output with the exact format expected: + # JobID|JobName|Comment|Elapsed|TotalCPU|NNodes|NCPUS|MaxRSS|ReqMem + mock_sacct_output = "1234|test_job|rule_test|00:01:30|00:00:45|1|2|1024K|2G\n1235|another_job|rule_another|00:02:00|00:01:00|1|4|2048K|4G" + + # Create a mock result object that mimics subprocess.CompletedProcess + mock_result = MagicMock() + mock_result.stdout = mock_sacct_output + mock_result.returncode = 0 + + # Mock subprocess.run to return our fake sacct data + def mock_subprocess_run(cmd, *args, **kwargs): + # Check if this is a sacct command + if isinstance(cmd, list) and len(cmd) > 0 and 'sacct' in cmd[0]: + return mock_result + else: + # For other commands, call the real subprocess.run + return subprocess.run(cmd, *args, **kwargs) + + # Patch subprocess.run in the correct module + with patch('snakemake_executor_plugin_slurm.subprocess.run', side_effect=mock_subprocess_run): + self.run_workflow("simple", tmp_path) + + # The efficiency report is created in the current working directory + pattern = re.compile(r"efficiency_report_[\w-]+\.csv") + report_found = False + + # Check both cwd and the tmp_path for the report file + for search_dir in [Path.cwd(), tmp_path]: + for filepath in search_dir.glob("efficiency_report_*.csv"): + if pattern.match(filepath.name): + report_found = True + # Verify it's not empty + assert filepath.stat().st_size > 0, f"Efficiency report {filepath} is empty" + break + if report_found: + break + + assert report_found, "Efficiency report was not generated"This solution:
- ✅ Eliminates all hardcoded paths
- ✅ Mocks the SLURM `sacct` command as required
- ✅ Works in any environment (with or without SLURM)
- ✅ Tests the actual efficiency report generation logic
- ✅ Removes debug code and commented sections
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 43-43: Missing function or method docstring
(C0116)
🧹 Nitpick comments (1)
tests/tests.py (1)
29-41
: Add missing docstrings to address static analysis warnings.The test class and methods are missing docstrings, which would improve code documentation and address the pylint warnings.
+class TestEfficiencyReport(snakemake.common.tests.TestWorkflowsLocalStorageBase): + """Test class for SLURM efficiency report generation functionality.""" __test__ = True def get_executor(self) -> str: + """Return the executor name for testing.""" return "slurm" def get_executor_settings(self) -> Optional[ExecutorSettingsBase]: + """Return executor settings with efficiency reporting enabled.""" return ExecutorSettings(🧰 Tools
🪛 Pylint (3.3.7)
[convention] 29-29: Missing class docstring
(C0115)
[convention] 32-32: Missing function or method docstring
(C0116)
[convention] 35-35: Missing function or method docstring
(C0116)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/tests.py
(2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/tests.py (1)
snakemake_executor_plugin_slurm/__init__.py (1)
ExecutorSettings
(42-146)
🪛 Pylint (3.3.7)
tests/tests.py
[convention] 1-1: Missing module docstring
(C0114)
[convention] 29-29: Missing class docstring
(C0115)
[convention] 32-32: Missing function or method docstring
(C0116)
[convention] 35-35: Missing function or method docstring
(C0116)
[convention] 43-43: Missing function or method docstring
(C0116)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
🔇 Additional comments (2)
tests/tests.py (2)
1-2
: LGTM! Imports are appropriate for the test functionality.The
os
andre
imports are necessary for the file operations and regex pattern matching used in the efficiency report test.🧰 Tools
🪛 Pylint (3.3.7)
[convention] 1-1: Missing module docstring
(C0114)
23-26
: LGTM! Timing adjustment improves test stability.Increasing the initial delay from 1 to 2 seconds is a reasonable change to improve test reliability and avoid potential race conditions.
Actionable comments posted: 0
♻️ Duplicate comments (1)
tests/tests.py (1)
44-75: ⚠️ Potential issue — Critical: Implement the SLURM mocking solution from previous reviews.
This test implementation still has the same critical issues that were extensively discussed in previous reviews:
- Hardcoded paths: Using `/tmp/pytest-of-runner/` makes tests fragile and non-portable
- Missing SLURM mocking: Without a SLURM environment, the `sacct` command fails and no efficiency report is created
- Test will fail in CI: As noted in previous reviews, this approach doesn't work in environments without SLURM

Please implement the mocking solution that was provided in previous reviews:
def test_simple_workflow(self, tmp_path): - # for an unkown reason, the efficiency report is not created - # reliably in `tmp_path`, so we use a fixed path - # to ensure the test is reproducible - - # a worklfow aborted: - # error message: - # OSError: Cannot save file into a non-existent directory: - # '/tmp/efficiency_report_test' - # runpath = Path("/tmp/efficiency_report_test") - # runpath.mkdir(parents=True, exist_ok=True) - self.run_workflow("simple", tmp_path) - - # The efficiency report is created in the - # current working directory - pattern = re.compile(r"efficiency_report_[\w-]+\.csv") - report_found = False - # report the tmp_path directory for debugging - print(f"'tmp_path' is: {tmp_path}") - - # as the directory is unclear, we need a path walk: - for root, _, files in os.walk("/tmp/pytest-of-runner/"): - for fname in files: - if pattern.match(fname): - report_found = True - report_path = os.path.join(root, fname) - # Verify it's not empty - assert ( - os.stat(report_path).st_size > 0 - ), f"Efficiency report {report_path} is empty" - break - assert report_found, "Efficiency report file not found" + """Test efficiency report generation with proper SLURM mocking.""" + import subprocess + from unittest.mock import patch, MagicMock + from pathlib import Path + + # Mock sacct output with the exact format expected: + # JobID|JobName|Comment|Elapsed|TotalCPU|NNodes|NCPUS|MaxRSS|ReqMem + mock_sacct_output = "1234|test_job|rule_test|00:01:30|00:00:45|1|2|1024K|2G\n1235|another_job|rule_another|00:02:00|00:01:00|1|4|2048K|4G" + + # Create a mock result object that mimics subprocess.CompletedProcess + mock_result = MagicMock() + mock_result.stdout = mock_sacct_output + mock_result.returncode = 0 + + # Mock subprocess.run to return our fake sacct data + def mock_subprocess_run(cmd, *args, **kwargs): + # Check if this is a sacct command + if isinstance(cmd, list) and len(cmd) > 0 and 'sacct' in cmd[0]: + return mock_result + else: + # For other commands, call the real subprocess.run + return subprocess.run(cmd, *args, **kwargs) + + # Patch subprocess.run in the correct module + with patch('snakemake_executor_plugin_slurm.subprocess.run', side_effect=mock_subprocess_run): + self.run_workflow("simple", tmp_path) + + # The efficiency report is created in the current working directory + pattern = re.compile(r"efficiency_report_[\w-]+\.csv") + report_found = False + + # Check both cwd and the tmp_path for the report file + for search_dir in [Path.cwd(), tmp_path]: + for filepath in search_dir.glob("efficiency_report_*.csv"): + if pattern.match(filepath.name): + report_found = True + # Verify it's not empty + assert filepath.stat().st_size > 0, f"Efficiency report {filepath} is empty" + break + if report_found: + break + + assert report_found, "Efficiency report was not generated"This solution:
- ✅ Eliminates hardcoded paths
- ✅ Mocks the SLURM `sacct` command as discussed in previous reviews
- ✅ Works in any environment (with or without SLURM)
- ✅ Tests the actual efficiency report generation logic
- ✅ Uses proper search locations instead of hardcoded paths
🧰 Tools
🪛 Pylint (3.3.7)
[convention] 44-44: Missing function or method docstring
(C0116)
🧹 Nitpick comments (1)
tests/tests.py (1)
30-42
: Add docstrings for better documentation.The test class structure and settings are appropriate for testing efficiency report functionality. However, consider adding docstrings to improve code documentation as flagged by static analysis.
+class TestEfficiencyReport(snakemake.common.tests.TestWorkflowsLocalStorageBase): + """Test class for SLURM efficiency report generation functionality.""" __test__ = True + def get_executor(self) -> str: + """Return the executor type for efficiency report testing.""" return "slurm" + def get_executor_settings(self) -> Optional[ExecutorSettingsBase]: + """Configure executor settings to enable efficiency reporting.""" return ExecutorSettings(🧰 Tools
🪛 Pylint (3.3.7)
[convention] 30-30: Missing class docstring
(C0115)
[convention] 33-33: Missing function or method docstring
(C0116)
[convention] 36-36: Missing function or method docstring
(C0116)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
tests/tests.py
(2 hunks)
🧰 Additional context used
🧬 Code Graph Analysis (1)
tests/tests.py (1)
snakemake_executor_plugin_slurm/__init__.py (1)
ExecutorSettings
(42-146)
🪛 Pylint (3.3.7)
tests/tests.py
[convention] 1-1: Missing module docstring
(C0114)
[convention] 30-30: Missing class docstring
(C0115)
[convention] 33-33: Missing function or method docstring
(C0116)
[convention] 36-36: Missing function or method docstring
(C0116)
[convention] 44-44: Missing function or method docstring
(C0116)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
🔇 Additional comments (2)
tests/tests.py (2)
1-2
: LGTM! Appropriate imports for the new test functionality.The added imports for
os
andre
are necessary for the filesystem operations and regex pattern matching used in the efficiency report test.🧰 Tools
🪛 Pylint (3.3.7)
[convention] 1-1: Missing module docstring
(C0114)
24-27
: LGTM! Minor timing adjustments for test stability.The increase in initialization delay from 1 to 2 seconds and the commented-out status check interval are reasonable adjustments for test reliability.
"Please consult the documentation if you are unsure how to " | ||
"query the status of your jobs." | ||
) | ||
|
||
def create_efficiency_report(self): |
Maybe move this into a separate module, in order to keep init smaller?
yes, this was the idea, initially. Yet, not possible with the atexit
registration. Now, a refactoring might be possible. Need to look into it.
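If that refactor is attempted, one hedged way to keep `__init__.py` small while still registering the work at exit would be to import the function from the separate module and register it together with its arguments (the signature is taken from the walkthrough above; whether this plays well with the executor's lifecycle is exactly what would need checking):

```python
# sketch only: register a function that lives in another module
import atexit

from snakemake_executor_plugin_slurm.efficiency_report import create_efficiency_report

atexit.register(
    create_efficiency_report,
    threshold,    # efficiency threshold from the executor settings
    run_uuid,     # workflow run UUID used to query sacct
    report_path,  # directory for efficiency_report_<uuid>.csv
    logger,       # executor logger
)
```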
tests/tests.py
Outdated
```python
    # for an unkown reason, the efficiency report is not created
    # reliably in `tmp_path`, so we use a fixed path
    # to ensure the test is reproducible

    # a worklfow aborted:
    # error message:
    # OSError: Cannot save file into a non-existent directory:
    # '/tmp/efficiency_report_test'
    # runpath = Path("/tmp/efficiency_report_test")
    # runpath.mkdir(parents=True, exist_ok=True)
    self.run_workflow("simple", tmp_path)

    # The efficiency report is created in the
    # current working directory
    pattern = re.compile(r"efficiency_report_[\w-]+\.csv")
    report_found = False
    # report the tmp_path directory for debugging
    print(f"'tmp_path' is: {tmp_path}")

    # as the directory is unclear, we need a path walk:
    for root, _, files in os.walk("/tmp/pytest-of-runner/"):
        for fname in files:
            if pattern.match(fname):
                report_found = True
                report_path = os.path.join(root, fname)
                # Verify it's not empty
                assert (
                    os.stat(report_path).st_size > 0
                ), f"Efficiency report {report_path} is empty"
                break
```
I don't get this.
me neither. Yet, every attempt to simply look for the file failed. You might want to have a look in the actions.
here you go:
- https://github.com/snakemake/snakemake-executor-plugin-slurm/actions/runs/15635040777/job/44055058897 - a successful run with that convoluted code from above
- https://github.com/snakemake/snakemake-executor-plugin-slurm/actions/runs/15641548011/job/44069833011 - a broken run, in which that logger shows that the file we are looking for is in pwd, and in which looking in `tmp_path` fails.
Actionable comments posted: 3
🧹 Nitpick comments (2)
snakemake_executor_plugin_slurm/efficiency_report.py (2)
5-17: Consider supporting day-prefixed SLURM times
SLURM may emit times like `3-12:34:56` (days-hours). `time_to_seconds` currently ignores the day component, so reported runtimes can be off by ≥24 h. A minimal extension:
if "-" in time_str: # D-H:M:S days, hms = time_str.split("-", 1) parts = hms.split(":") return int(days) * 86400 + int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2])🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 11-16: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
157-168
: Simplify and ensurelogfile
path construction
Path()
instancep
is redundant; use a one-liner and rely on the earlier suggested import.-logfile = f"efficiency_report_{run_uuid}.csv" -if e_report_path: - logfile = Path(e_report_path) / logfile -else: - logfile = p.cwd() / logfile +logfile = Path(e_report_path) / f"efficiency_report_{run_uuid}.csv" if e_report_path \ + else Path.cwd() / f"efficiency_report_{run_uuid}.csv"🧰 Tools
🪛 Ruff (0.11.9)
159-159: Undefined name
Path
(F821)
163-166: Use ternary operator
logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead ofif
-else
-blockReplace
if
-else
-block withlogfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
164-164: Undefined name
Path
(F821)
🪛 Pylint (3.3.7)
[error] 159-159: Undefined variable 'Path'
(E0602)
[error] 164-164: Undefined variable 'Path'
(E0602)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
snakemake_executor_plugin_slurm/__init__.py
(6 hunks)snakemake_executor_plugin_slurm/efficiency_report.py
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (1)
- snakemake_executor_plugin_slurm/init.py
🧰 Additional context used
🧠 Learnings (1)
snakemake_executor_plugin_slurm/efficiency_report.py (1)
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:34-49
Timestamp: 2025-05-26T12:22:07.005Z
Learning: In the parse_reqmem function in snakemake_executor_plugin_slurm/efficiency_report.py, the conditional check for per_unit == "/node" is necessary for proper conversion of memory values in multi-node jobs, not redundant as it might initially appear.
🪛 Ruff (0.11.9)
snakemake_executor_plugin_slurm/efficiency_report.py
64-64: Undefined name subprocess
(F821)
65-65: Undefined name shlex
(F821)
68-68: Undefined name subprocess
(F821)
159-159: Undefined name Path
(F821)
163-166: Use ternary operator logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead of if
-else
-block
Replace if
-else
-block with logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
164-164: Undefined name Path
(F821)
🪛 Pylint (3.3.7)
snakemake_executor_plugin_slurm/efficiency_report.py
[refactor] 11-16: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
[error] 64-64: Undefined variable 'subprocess'
(E0602)
[error] 65-65: Undefined variable 'shlex'
(E0602)
[error] 68-68: Undefined variable 'subprocess'
(E0602)
[error] 159-159: Undefined variable 'Path'
(E0602)
[error] 164-164: Undefined variable 'Path'
(E0602)
[refactor] 53-53: Either all return statements in a function should return an expression, or none of them should.
(R1710)
df["CPU Efficiency (%)"] = ( | ||
df["TotalCPU_sec"] / (df["Elapsed_sec"] * df["NCPUS"]) | ||
) * 100 | ||
df["CPU Efficiency (%)"] = df["CPU Efficiency (%)"].fillna(0).round(2) | ||
|
Handle division-by-zero and `inf` in the CPU-efficiency calculation
Jobs with `Elapsed_sec == 0` or `NCPUS == 0` yield `inf` or `NaN`, which survive `fillna` and propagate to CSV/logs.
df["CPU Efficiency (%)"] = (
df["TotalCPU_sec"] / (df["Elapsed_sec"].clip(lower=1) * df["NCPUS"].clip(lower=1))
) * 100
df.replace([np.inf, -np.inf], 0, inplace=True)
(Requires import numpy as np
.)
Committable suggestion skipped: line range outside the PR's diff.
🤖 Prompt for AI Agents
In snakemake_executor_plugin_slurm/efficiency_report.py around lines 114 to 118,
the CPU efficiency calculation can produce infinite or NaN values when
Elapsed_sec or NCPUS are zero, which are not handled by fillna. To fix this,
after computing the CPU Efficiency column, replace infinite values with NaN
using numpy's isinf function, then fill NaNs with zero and round the results.
Make sure to import numpy as np at the top of the file.
# If the "Comment" column is empty, | ||
# a) delete the column | ||
# b) issue a warning | ||
if df["Comment"].isnull().all(): | ||
logger.warning( | ||
f"No comments found for workflow {run_uuid}. " | ||
"This field is used to store the rule name. " | ||
"Please ensure that the 'comment' field is set for your cluster. " | ||
"Administrators can set this up in the SLURM configuration." | ||
) | ||
df.drop(columns=["Comment"], inplace=True) | ||
# remember, that the comment column is not available | ||
nocomment = True |
🛠️ Refactor suggestion
`Comment` emptiness check misses empty-string rows
`isnull()` only catches NaN. When `sacct` returns an empty field it is read as `''`, so the branch treating "no comment available" is skipped and later code assumes rule names exist.
-if df["Comment"].isnull().all():
+if df["Comment"].replace("", pd.NA).isna().all():
# If the "Comment" column is empty, | |
# a) delete the column | |
# b) issue a warning | |
if df["Comment"].isnull().all(): | |
logger.warning( | |
f"No comments found for workflow {run_uuid}. " | |
"This field is used to store the rule name. " | |
"Please ensure that the 'comment' field is set for your cluster. " | |
"Administrators can set this up in the SLURM configuration." | |
) | |
df.drop(columns=["Comment"], inplace=True) | |
# remember, that the comment column is not available | |
nocomment = True | |
# If the "Comment" column is empty, | |
# a) delete the column | |
# b) issue a warning | |
if df["Comment"].replace("", pd.NA).isna().all(): | |
logger.warning( | |
f"No comments found for workflow {run_uuid}. " | |
"This field is used to store the rule name. " | |
"Please ensure that the 'comment' field is set for your cluster. " | |
"Administrators can set this up in the SLURM configuration." | |
) | |
df.drop(columns=["Comment"], inplace=True) | |
# remember, that the comment column is not available | |
nocomment = True |
🤖 Prompt for AI Agents
In snakemake_executor_plugin_slurm/efficiency_report.py around lines 88 to 100,
the check for empty "Comment" column uses isnull() which misses empty strings.
Update the condition to also consider empty strings by checking if all entries
are either null or empty strings. This ensures the warning and column drop
happen correctly when comments are missing.
Actionable comments posted: 3
♻️ Duplicate comments (2)
snakemake_executor_plugin_slurm/efficiency_report.py (2)
94-107: ⚠️ Potential issue — Empty-string comments aren't caught → downstream KeyError
`isnull()` ignores `''`. Replace empty strings with `NA` first.

```diff
-if df["Comment"].isnull().all():
+if df["Comment"].replace("", pd.NA).isna().all():
```
117-121: 🛠️ Refactor suggestion — Division-by-zero → `inf`, propagated to CSV
`Elapsed_sec` or `NCPUS` can be 0/NaN, yielding `inf`. Clip to ≥1 and scrub infinities:

```diff
-df["CPU Efficiency (%)"] = (
-    df["TotalCPU_sec"] / (df["Elapsed_sec"] * df["NCPUS"])
-) * 100
-df["CPU Efficiency (%)"] = df["CPU Efficiency (%)"].fillna(0).round(2)
+df["CPU Efficiency (%)"] = (
+    df["TotalCPU_sec"]
+    / (df["Elapsed_sec"].clip(lower=1) * df["NCPUS"].clip(lower=1))
+) * 100
+df.replace([np.inf, -np.inf], 0, inplace=True)
+df["CPU Efficiency (%)"] = df["CPU Efficiency (%)"].fillna(0).round(2)
```
🧹 Nitpick comments (3)
docs/further.md (1)
621-624: Phrase needs a comma + slight re-phrasing for clarity
Missing comma before the clause "as this plugin …", and "the memory values is" → "value is".
-With `--slurm-efficiency-report` you can generate a table of all efficiency data. A logfile `efficiency_report_<workflow_id>.log` will be generated in your current directory. This is equivalent to the information with `seff <jobid>` for individual jobs. It works best if "comments" are stored as a job property on your cluster as this plugin uses the "comment" parameter to store the rule name. +With `--slurm-efficiency-report`, you can generate a table of all efficiency data. +A logfile named `efficiency_report_<workflow_id>.log` will be generated in your current directory. +This is equivalent to running `seff <jobid>` for individual jobs. +It works best if “comments” are stored as a job property on your cluster, because this plugin uses the “comment” parameter to store the rule name.🧰 Tools
🪛 LanguageTool
[uncategorized] ~623-~623: Possible missing comma found.
Context: ...s" are stored as a job property on your cluster as this plugin uses the "comment" param...(AI_HYDRA_LEO_MISSING_COMMA)
snakemake_executor_plugin_slurm/efficiency_report.py (2)
14-20: Simplify control flow & keep return types consistent
`return` in the first branch makes the subsequent `elif` unnecessary, and mixing `int`/`float` for seconds is harmless but inconsistent.

```diff
-    if len(parts) == 3:  # H:M:S
-        return int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2])
-    elif len(parts) == 2:  # M:S
-        return int(parts[0]) * 60 + float(parts[1])
-    elif len(parts) == 1:  # S
-        return float(parts[0])
+    if len(parts) == 3:  # H:M:S
+        h, m, s = map(float, parts)
+        return h * 3600 + m * 60 + s
+    if len(parts) == 2:  # M:S
+        m, s = map(float, parts)
+        return m * 60 + s
+    if len(parts) == 1:  # S
+        return float(parts[0])
```

🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 14-19: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
166-170
: Avoid needlessPath()
instance & write CSV without index- logfile = f"efficiency_report_{run_uuid}.csv" - if e_report_path: - logfile = Path(e_report_path) / logfile - else: - logfile = p.cwd() / logfile - df.to_csv(logfile) + logfile = ( + Path(e_report_path) if e_report_path else Path.cwd() + ) / f"efficiency_report_{run_uuid}.csv" + df.to_csv(logfile, index=False)🧰 Tools
🪛 Ruff (0.11.9)
166-169: Use ternary operator
logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead ofif
-else
-blockReplace
if
-else
-block withlogfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (4)
docs/further.md
(1 hunks)pyproject.toml
(1 hunks)snakemake_executor_plugin_slurm/__init__.py
(6 hunks)snakemake_executor_plugin_slurm/efficiency_report.py
(1 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
- pyproject.toml
- snakemake_executor_plugin_slurm/init.py
🧰 Additional context used
🧠 Learnings (1)
snakemake_executor_plugin_slurm/efficiency_report.py (1)
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:34-49
Timestamp: 2025-05-26T12:22:07.005Z
Learning: In the parse_reqmem function in snakemake_executor_plugin_slurm/efficiency_report.py, the conditional check for per_unit == "/node" is necessary for proper conversion of memory values in multi-node jobs, not redundant as it might initially appear.
🪛 Ruff (0.11.9)
snakemake_executor_plugin_slurm/efficiency_report.py
166-169: Use ternary operator logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead of if
-else
-block
Replace if
-else
-block with logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
🪛 Pylint (3.3.7)
snakemake_executor_plugin_slurm/efficiency_report.py
[refactor] 14-19: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
[refactor] 56-56: Either all return statements in a function should return an expression, or none of them should.
(R1710)
🪛 LanguageTool
docs/further.md
[uncategorized] ~623-~623: Possible missing comma found.
Context: ...s" are stored as a job property on your cluster as this plugin uses the "comment" param...
(AI_HYDRA_LEO_MISSING_COMMA)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
import re | ||
import pandas as pd | ||
from pathlib import Path | ||
import subprocess | ||
import shlex | ||
|
`numpy` missing – required later for ∞/NaN sanitising
Several calculations later replace/compare with `np.inf`, but `numpy` is not imported, which will raise `NameError` once that code is added (or when you adopt the fix below).
Add the import now so follow-up patches don’t forget.
import re
import pandas as pd
from pathlib import Path
import subprocess
import shlex
+import numpy as np
def parse_reqmem(reqmem, number_of_nodes=1):
    """Convert requested memory to MB."""
    if pd.isna(reqmem) or reqmem.strip() == "":
        return 0
    match = re.match(
        r"(\d+)([KMG])?(\S+)?", reqmem
    )  # Handles "4000M" or "4G" or "2G/node"
    if match:
        value, unit, per_unit = match.groups()
        value = int(value)
        unit_multipliers = {"K": 1 / 1024, "M": 1, "G": 1024}
        mem_mb = value * unit_multipliers.get(unit, 1)
        if per_unit and "/node" in per_unit:
            # the memory values is per node, hence we need to
            # multiply with the number of nodes
            return mem_mb * number_of_nodes
        return mem_mb  # Default case (per CPU or total)
    return 0
🛠️ Refactor suggestion
NaN propagation & ‘/cpu’ interpretation
- When number_of_nodes is NaN, multiplying mem_mb * number_of_nodes yields NaN, propagating through the dataframe.
- SLURM sometimes encodes memory as 4G/cpu; the "per-CPU" case currently falls through to "total", overstating requested memory.

Consider:
- if per_unit and "/node" in per_unit:
- # the memory values is per node, hence we need to
- # multiply with the number of nodes
- return mem_mb * number_of_nodes
- return mem_mb # Default case (per CPU or total)
+ if per_unit:
+ if "/node" in per_unit:
+ return mem_mb * (number_of_nodes if pd.notna(number_of_nodes) else 1)
+ if "/cpu" in per_unit:
+ # multiply by CPUs per task later; keep per-cpu value for now
+ return mem_mb
+ return mem_mb
@coderabbitai - there is no data frame present in this function. Hence, pd.notna() will not work as expected. Both /node and /cpu are apparently undocumented. I found /node by chance. Is there any documentation link you are aware of?
🧩 Analysis chain
@cmeesters You're absolutely right about pd.notna() - that was my mistake. Since parse_reqmem operates on scalar values, not DataFrame columns, the correct approach for NaN checking would be different.

Let me search for SLURM documentation about memory specification formats to clarify the /node and /cpu suffixes:
For the NaN handling in scalar context, here's the corrected approach:
if per_unit and "/node" in per_unit:
    # Guard against NaN/None values in number_of_nodes
    if number_of_nodes is None or (isinstance(number_of_nodes, float) and math.isnan(number_of_nodes)):
        return mem_mb  # fallback to treating as total
    return mem_mb * number_of_nodes
However, since you're passing row["NNodes"], which comes from a pandas DataFrame after pd.to_numeric(), it might be better to handle NaN values at the DataFrame level before calling this function.
🌐 Web query:
SLURM sacct ReqMem field format documentation memory per node per cpu
💡 Result:
In SLURM, the sacct command provides job accounting information, including the ReqMem field, which indicates the minimum required memory for a job. The format of the ReqMem field can specify memory requirements per CPU or per node:

- Memory per CPU: appended with a 'c' (e.g., ReqMem=4Gc), indicating the amount of memory requested per CPU.
- Memory per Node: appended with an 'n' (e.g., ReqMem=16Gn), indicating the total amount of memory requested per node.

For example, if a job requests 4 GB of memory per CPU and is allocated 2 CPUs, the ReqMem field would display ReqMem=8Gc. If the same job requests 16 GB of memory per node and is allocated 1 node, the ReqMem field would display ReqMem=16Gn.

This format allows users to specify memory requirements based on the number of CPUs or nodes allocated to the job. For more detailed information, refer to the SLURM documentation on the sacct command. (slurm.schedmd.com)
Thank you for the correction! Based on the SLURM documentation, I found the official format for the ReqMem field.

According to SLURM documentation:
- Memory per CPU: uses the c suffix (e.g., ReqMem=4Gc)
- Memory per node: uses the n suffix (e.g., ReqMem=16Gn)

However, there's a discrepancy: your current code checks for "/node" in the suffix, but the documentation shows it should be just "n". This suggests either:
- Different SLURM versions use different formats
- Site-specific configurations might affect the format
Corrected function based on documented format:
def parse_reqmem(reqmem, number_of_nodes=1):
"""Convert requested memory to MB."""
if pd.isna(reqmem) or reqmem.strip() == "":
return 0
match = re.match(r"(\d+(?:\.\d+)?)([KMG])?([cn]?)", reqmem)
if match:
value, unit, per_unit = match.groups()
value = float(value)
unit_multipliers = {"K": 1 / 1024, "M": 1, "G": 1024}
mem_mb = value * unit_multipliers.get(unit, 1)
if per_unit == "n": # per node
return mem_mb * max(number_of_nodes, 1) # avoid multiplication by 0
elif per_unit == "c": # per CPU - needs CPU count from caller
return mem_mb # Return per-CPU value; caller handles multiplication
return mem_mb # Default: total memory
return 0
Recommendation: Since you found /node working in practice, you might want to check your actual SLURM output to see which format is being used in your environment. The implementation might need to handle both documented (n) and undocumented (/node) formats.
🧠 Learnings used
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:34-49
Timestamp: 2025-05-26T12:22:07.005Z
Learning: In the parse_reqmem function in snakemake_executor_plugin_slurm/efficiency_report.py, the conditional check for per_unit == "/node" is necessary for proper conversion of memory values in multi-node jobs, not redundant as it might initially appear.
Actionable comments posted: 2
♻️ Duplicate comments (4)
snakemake_executor_plugin_slurm/efficiency_report.py (4)
38-55: Still suffers from NaN-propagation & /cpu overestimation
The issues previously raised about handling number_of_nodes = NaN and "/cpu" modifiers remain unresolved. See earlier discussion; the same patch applies.

96-105: isnull() misses empty-string comments
Rows where sacct leaves Comment blank are '', not NaN, so the column is treated as filled. Replace with the combined null / empty-string check proposed earlier.

118-123: Division-by-zero still possible, yields inf/NaN
When Elapsed_sec or NCPUS is 0 the efficiency explodes. Clip the denominators or replace infinities as previously suggested.

142-144: str.contains drops NaN rows
Add na=False to avoid unintentionally discarding jobs with missing names.
🧹 Nitpick comments (2)
snakemake_executor_plugin_slurm/efficiency_report.py (2)
7-8: Remove temporary os debug import and directory listing before merging
os is only used for a debug log (line 177). Keeping ad-hoc debug code and extra imports in production modules clutters the API surface and makes linting noisier. Drop the import and the associated logger.debug once investigation is done.

-import os  # only temporarily needed for printf debugging
...
-    # state directory contents for debugging purposes
-    logger.debug(f"Current directory contents in '{p.cwd()}': {os.listdir(p.cwd())}")

162-171: Construct logfile without the throw-away Path() instance
Minor readability nit: Path.cwd() is clearer than creating an intermediate object.

-    p = Path()
...
-        logfile = p.cwd() / logfile
+        logfile = Path.cwd() / logfile
🪛 Ruff (0.11.9)
168-171: Use ternary operator
logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead ofif
-else
-blockReplace
if
-else
-block withlogfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
snakemake_executor_plugin_slurm/efficiency_report.py
(1 hunks)
🧰 Additional context used
🧠 Learnings (1)
snakemake_executor_plugin_slurm/efficiency_report.py (1)
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:34-49
Timestamp: 2025-05-26T12:22:07.005Z
Learning: In the parse_reqmem function in snakemake_executor_plugin_slurm/efficiency_report.py, the conditional check for per_unit == "/node" is necessary for proper conversion of memory values in multi-node jobs, not redundant as it might initially appear.
🪛 Ruff (0.11.9)
snakemake_executor_plugin_slurm/efficiency_report.py
168-171: Use ternary operator logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead of if
-else
-block
Replace if
-else
-block with logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
🪛 Pylint (3.3.7)
snakemake_executor_plugin_slurm/efficiency_report.py
[refactor] 16-21: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
[refactor] 58-58: Either all return statements in a function should return an expression, or none of them should.
(R1710)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
    try:
        result = subprocess.run(
            shlex.split(cmd), capture_output=True, text=True, check=True
        )
        lines = result.stdout.strip().split("\n")
    except subprocess.CalledProcessError:
        logger.error(f"Failed to retrieve job data for workflow {run_uuid}.")
        return None
🛠️ Refactor suggestion
Gracefully handle empty sacct output

If sacct returns no rows (e.g., wrong run_uuid or accounting purge), lines == [''] and DataFrame construction throws ValueError: 1 columns passed, 9 expected. Add an explicit emptiness check to avoid crashing the shutdown hook.
- lines = result.stdout.strip().split("\n")
+ raw = result.stdout.strip()
+ if not raw:
+ logger.warning(f"No accounting data returned for workflow {run_uuid}.")
+ return None
+ lines = raw.split("\n")
Co-authored-by: coderabbitai[bot] <136622811+coderabbitai[bot]@users.noreply.github.com>
Actionable comments posted: 0
♻️ Duplicate comments (5)
snakemake_executor_plugin_slurm/efficiency_report.py (5)
68-76: ⚠️ Potential issue: Handle empty sacct output to avoid DataFrame crash

If sacct returns nothing (e.g., accounting purge, wrong run_uuid), lines == [''] and the DataFrame constructor raises ValueError: 1 columns passed, 9 expected.

-        lines = result.stdout.strip().split("\n")
+        raw = result.stdout.strip()
+        if not raw:
+            logger.warning(f"No accounting data returned for workflow {run_uuid}.")
+            return None
+        lines = raw.split("\n")

Prevents the shutdown hook from failing and ensures the executor exits gracefully.

142-144: ⚠️ Potential issue: str.contains must set na=False

Rows with NaN in JobName raise a TypeError: bad operand type for unary ~ when this filter executes.

-df = df[~df["JobName"].str.contains("batch|extern")]
+df = df[~df["JobName"].str.contains("batch|extern", na=False)]

Guarantees robust filtering even when JobName is missing.

38-55: 🛠️ Refactor suggestion: Extend parse_reqmem for /cpu specs & NaN node counts

Memory values like 4G/cpu are interpreted as TOTAL, overstating the request. Also, number_of_nodes may be NaN, propagating to mem_mb * number_of_nodes.

-        if per_unit and "/node" in per_unit:
-            # the memory values is per node, hence we need to
-            # multiply with the number of nodes
-            return mem_mb * number_of_nodes
-        return mem_mb  # Default case (per CPU or total)
+        if per_unit:
+            if "/node" in per_unit:
+                nodes = number_of_nodes if pd.notna(number_of_nodes) else 1
+                return mem_mb * nodes
+            if "/cpu" in per_unit:
+                # keep per-CPU value; caller will multiply by NCPUS if needed
+                return mem_mb
+        return mem_mb  # total memory request

Prevents NaN propagation and corrects per-CPU interpretation.

93-104: 🛠️ Refactor suggestion: Empty-string "Comment" rows not detected

isnull() catches only NaN. SLURM often returns empty strings ('') when no comment is set, so the warning/drop logic is silently skipped.

-if df["Comment"].isnull().all():
+if df["Comment"].replace("", pd.NA).isna().all():

Ensures users are warned and later code relying on RuleName doesn't break.

118-123: ⚠️ Potential issue: Guard against division-by-zero / inf in CPU-efficiency calculation

Elapsed_sec or NCPUS can legitimately be 0 (e.g. cancelled jobs, bookkeeping rows). The current formula therefore yields inf/NaN, which survive fillna and propagate to the CSV and warning loop.

-df["CPU Efficiency (%)"] = (
-    df["TotalCPU_sec"] / (df["Elapsed_sec"] * df["NCPUS"])
-) * 100
-df["CPU Efficiency (%)"] = df["CPU Efficiency (%)"].fillna(0).round(2)
+df["CPU Efficiency (%)"] = (
+    df["TotalCPU_sec"]
+    / (df["Elapsed_sec"].clip(lower=1) * df["NCPUS"].clip(lower=1))
+) * 100
+# replace inf/-inf produced by unexpected zeros
+df.replace([np.inf, -np.inf], 0, inplace=True)
+df["CPU Efficiency (%)"] = df["CPU Efficiency (%)"].fillna(0).round(2)

Requires import numpy as np at the top. Without this, the report and threshold warnings are unreliable.
snakemake_executor_plugin_slurm/efficiency_report.py (2)
162-177
: Remove temporary debug helpers & simplify path handling
os.listdir()
dump and the intermediatePath()
instance are left-over debug artefacts and bloat logs.- # we construct a path object to allow for a customi - # logdir, if specified - p = Path() ... - else: - logfile = p.cwd() / logfile + else: + logfile = Path.cwd() / logfile ... - logger.debug(f"Current directory contents in '{p.cwd()}': {os.listdir(p.cwd())}") + # drop noisy directory listingAlso delete
import os
at the top once the debug line is removed.🧰 Tools
🪛 Ruff (0.11.9)
168-171: Use ternary operator
logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead ofif
-else
-blockReplace
if
-else
-block withlogfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
10-22
: Minor: simplify control flow intime_to_seconds
Ruff flagged the unnecessary
elif
cascade after an earlyreturn
.
Refactor improves clarity:- if len(parts) == 3: # H:M:S - return int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2]) - elif len(parts) == 2: # M:S - return int(parts[0]) * 60 + float(parts[1]) - elif len(parts) == 1: # S - return float(parts[0]) + if len(parts) == 3: # H:M:S + return int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2]) + if len(parts) == 2: # M:S + return int(parts[0]) * 60 + float(parts[1]) + if len(parts) == 1: # S + return float(parts[0]) return 0Pure readability polish; functional behaviour unchanged.
🧰 Tools
🪛 Pylint (3.3.7)
[refactor] 16-21: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (1)
snakemake_executor_plugin_slurm/efficiency_report.py
(1 hunks)
🧰 Additional context used
🧠 Learnings (1)
snakemake_executor_plugin_slurm/efficiency_report.py (1)
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:34-49
Timestamp: 2025-05-26T12:22:07.005Z
Learning: In the parse_reqmem function in snakemake_executor_plugin_slurm/efficiency_report.py, the conditional check for per_unit == "/node" is necessary for proper conversion of memory values in multi-node jobs, not redundant as it might initially appear.
🪛 Ruff (0.11.9)
snakemake_executor_plugin_slurm/efficiency_report.py
168-171: Use ternary operator logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
instead of if
-else
-block
Replace if
-else
-block with logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile
(SIM108)
🪛 Pylint (3.3.7)
snakemake_executor_plugin_slurm/efficiency_report.py
[refactor] 16-21: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
[refactor] 58-58: Either all return statements in a function should return an expression, or none of them should.
(R1710)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Actionable comments posted: 4
♻️ Duplicate comments (2)
snakemake_executor_plugin_slurm/efficiency_report.py (2)
63-76: Empty sacct output crashes DataFrame construction.
If sacct returns no rows (e.g. accounting purged), lines == [''] and pd.DataFrame raises ValueError: 1 columns passed, 9 expected.

result = subprocess.run(..., check=True)
raw = result.stdout.strip()
if not raw:
    logger.warning("No SLURM accounting data found for %s", run_uuid)
    return None
lines = raw.split("\n")

118-123: Division by zero → inf CPU-efficiency values leak into CSV.
Guard the denominator:

df["CPU Efficiency (%)"] = (
-    df["TotalCPU_sec"] / (df["Elapsed_sec"] * df["NCPUS"])
+    df["TotalCPU_sec"] / (df["Elapsed_sec"].clip(lower=1)
+                          * df["NCPUS"].clip(lower=1))
) * 100
df.replace([np.inf, -np.inf], 0, inplace=True)

(Requires import numpy as np.)
🧹 Nitpick comments (3)
snakemake_executor_plugin_slurm/efficiency_report.py (3)
10-22
:time_to_seconds
mixesint
andfloat
, unnecessaryelif
chain.More concise, type-consistent implementation:
- parts = time_str.split(":") - - if len(parts) == 3: # H:M:S - return int(parts[0]) * 3600 + int(parts[1]) * 60 + float(parts[2]) - elif len(parts) == 2: # M:S - return int(parts[0]) * 60 + float(parts[1]) - elif len(parts) == 1: # S - return float(parts[0]) - return 0 + h, m, s = (["0", "0", "0"] + time_str.split(":"))[-3:] + return float(h) * 3600 + float(m) * 60 + float(s)
162-172
: UnnecessaryPath()
instance and if-block – simplify path building.- p = Path() - logfile = f"efficiency_report_{run_uuid}.csv" - if e_report_path: - logfile = Path(e_report_path) / logfile - else: - logfile = p.cwd() / logfile + logfile = (Path(e_report_path) if e_report_path else Path.cwd()) / \ + f"efficiency_report_{run_uuid}.csv"
174-178
: Remove leftover printf-style debugging (os.listdir
) before merging.Debug output of full current directory is noisy and may expose PII paths in CI
logs. Recommend dropping or downgrading tologger.debug
only when
logger.isEnabledFor(DEBUG)
.- logger.debug(f"Current directory contents in '{p.cwd()}': {os.listdir(p.cwd())}") + logger.debug("Current directory contents: %s", + list(Path.cwd().iterdir()) if logger.isEnabledFor(logging.DEBUG) else "")
📜 Review details
Configuration used: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
📒 Files selected for processing (2)
snakemake_executor_plugin_slurm/efficiency_report.py
(1 hunks)tests/tests.py
(2 hunks)
🧰 Additional context used
🧠 Learnings (1)
snakemake_executor_plugin_slurm/efficiency_report.py (2)
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:34-49
Timestamp: 2025-05-26T12:22:07.005Z
Learning: In the parse_reqmem function in snakemake_executor_plugin_slurm/efficiency_report.py, the conditional check for per_unit == "/node" is necessary for proper conversion of memory values in multi-node jobs, not redundant as it might initially appear.
Learnt from: cmeesters
PR: snakemake/snakemake-executor-plugin-slurm#221
File: snakemake_executor_plugin_slurm/efficiency_report.py:0-0
Timestamp: 2025-06-16T08:54:07.947Z
Learning: In the context of SLURM executor plugin development, user cmeesters (experienced with SLURM) noted they are not aware of SLURM typically reporting memory values with decimals (like 1.5G), but accepted defensive programming to handle such cases anyway.
🪛 Ruff (0.11.9)
snakemake_executor_plugin_slurm/efficiency_report.py
168-171: Use ternary operator logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile instead of if-else-block

Replace if-else-block with logfile = Path(e_report_path) / logfile if e_report_path else p.cwd() / logfile

(SIM108)
tests/tests.py
40-40: Undefined name Path
(F821)
63-63: Undefined name Path
(F821)
🪛 Pylint (3.3.7)
snakemake_executor_plugin_slurm/efficiency_report.py
[refactor] 16-21: Unnecessary "elif" after "return", remove the leading "el" from "elif"
(R1705)
[refactor] 58-58: Either all return statements in a function should return an expression, or none of them should.
(R1710)
tests/tests.py
[error] 40-40: Undefined variable 'Path'
(E0602)
[error] 63-63: Undefined variable 'Path'
(E0602)
⏰ Context from checks skipped due to timeout of 90000ms (1)
- GitHub Check: testing
🔇 Additional comments (1)
tests/tests.py (1)
45-55: Still no SLURM mocking ‒ test will fail on CI runners without sacct.
Previous review rounds already highlighted this. Without a running SLURM scheduler, sacct returns nothing → no report → test fails.

Consider patching subprocess.run inside snakemake_executor_plugin_slurm.efficiency_report:

from unittest.mock import patch, MagicMock

mock = MagicMock(stdout="123|job|rule|0:00:10|0:00:05|1|1|10M|100M")
mock.returncode = 0
with patch(
    "snakemake_executor_plugin_slurm.efficiency_report.subprocess.run",
    return_value=mock,
):
    self.run_workflow("simple", tmp_path)

This keeps the test environment-agnostic.
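Building on that snippet, a full test could be shaped roughly as follows; this is a hedged sketch, and the test name, the fake sacct row, and the report-location check are assumptions rather than code from this PR:

from pathlib import Path
from unittest.mock import MagicMock, patch

def test_efficiency_report_with_mocked_sacct(self, tmp_path):
    # one fake sacct row in the field order JobID|JobName|Comment|Elapsed|TotalCPU|NNodes|NCPUS|MaxRSS|ReqMem
    mock = MagicMock(returncode=0, stdout="123|job|rule_a|0:00:10|0:00:05|1|1|10M|100M")
    with patch(
        "snakemake_executor_plugin_slurm.efficiency_report.subprocess.run",
        return_value=mock,
    ):
        self.run_workflow("simple", tmp_path)
    # the report directory depends on efficiency_report_path; cwd is assumed here
    reports = list(Path.cwd().glob("efficiency_report_*.csv"))
    assert reports and reports[0].stat().st_size > 0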
import os
import re
Path is not imported – all subsequent uses will raise NameError.

You reference Path later in the file (Path.pwd() / Path.cwd()), but never import it.
-import os
-import re
+import os
+import re
+from pathlib import Path
    def get_executor_settings(self) -> Optional[ExecutorSettingsBase]:
        return ExecutorSettings(
            efficiency_report=True,
            init_seconds_before_status_checks=5,
            efficiency_report_path=Path.pwd() / "efficiency_report_test",
            # seconds_between_status_checks=5,
        )
Wrong API ‒ Path.pwd() does not exist, and the variable is mis-typed anyway.

Path provides Path.cwd(), not pwd(). Using the non-existent method will raise AttributeError the first time this test runs.

-            efficiency_report_path=Path.pwd() / "efficiency_report_test",
+            efficiency_report_path=Path.cwd() / "efficiency_report_test",
🧰 Tools
🪛 Ruff (0.11.9)
40-40: Undefined name Path
(F821)
🪛 Pylint (3.3.7)
[error] 40-40: Undefined variable 'Path'
(E0602)
        report_path = None
        expected_path = Path.pwd() / "efficiency_report_test"

        # Check if the efficiency report file exists - based on the regex pattern
        for fname in os.listdir(expected_path):
            if pattern.match(fname):
                report_found = True
                report_path = os.path.join(expected_path, fname)
                # Verify it's not empty
🛠️ Refactor suggestion
expected_path may not exist → FileNotFoundError and false-negatives.

You derive the directory from Path.pwd() (see previous issue) and then call os.listdir(expected_path) without verifying that the directory exists. If the report is written elsewhere (or generation failed) the test crashes instead of failing gracefully.
-expected_path = Path.pwd() / "efficiency_report_test"
-if not expected_path.exists():
- pytest.fail(f"Expected efficiency-report directory {expected_path} not found")
+expected_path = Path.cwd() / "efficiency_report_test"
+expected_path.mkdir(exist_ok=True) # create when using a tmp path
Committable suggestion skipped: line range outside the PR's diff.
🧰 Tools
🪛 Ruff (0.11.9)
63-63: Undefined name Path
(F821)
🪛 Pylint (3.3.7)
[error] 63-63: Undefined variable 'Path'
(E0602)
def parse_reqmem(reqmem, number_of_nodes=1):
    """Convert requested memory to MB."""
    if pd.isna(reqmem) or reqmem.strip() == "":
        return 0
    match = re.match(
        r"(\d+)([KMG])?(\S+)?", reqmem
    )  # Handles "4000M" or "4G" or "2G/node"
    if match:
        value, unit, per_unit = match.groups()
        value = int(value)
        unit_multipliers = {"K": 1 / 1024, "M": 1, "G": 1024}
        mem_mb = value * unit_multipliers.get(unit, 1)
        if per_unit and "/node" in per_unit:
            # the memory values is per node, hence we need to
            # multiply with the number of nodes
            return mem_mb * number_of_nodes
        return mem_mb  # Default case (per CPU or total)
    return 0
🛠️ Refactor suggestion
parse_reqmem misses decimals & documented c/n suffixes, propagates NaNs.
- match = re.match(
- r"(\d+)([KMG])?(\S+)?", reqmem
- )
+ # 4Gc (per-CPU) / 16Gn (per-node) / 2.5G
+ match = re.match(r"(\d+(?:\.\d+)?)([KMG])?([cn]|/node)?", reqmem)
- value = int(value)
+ value = float(value)
- if per_unit and "/node" in per_unit:
- return mem_mb * number_of_nodes
+ if per_unit in ("n", "/node"): # per node
+ nodes = 1 if pd.isna(number_of_nodes) else number_of_nodes
+ return mem_mb * nodes
+ # `/c` or `c` → per-CPU; caller may multiply later
Catches 1.5G, proper per-CPU/per-node semantics, and avoids mem_mb * NaN → NaN.
The aim of this PR is to measure compute efficiency per job, similar to what seff reports.

Note: seff reports so-called "Memory Efficiency". What is meant is "Memory Usage", because an application which reserves a compute node to compute and hardly uses RAM to do so will have an apparently low "Memory Efficiency". It needs to reserve the memory of that node and might be highly efficient, yet it will not have used the memory.

The resulting code of this PR will hence NOT report warnings about memory usage.
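To make that distinction concrete, here is a rough sketch of the two quantities as they can be derived from sacct fields; the helper names are illustrative and not the exact functions added by this PR:

def cpu_efficiency_percent(total_cpu_sec, elapsed_sec, ncpus):
    # share of the reserved core-seconds that were actually used
    return 100.0 * total_cpu_sec / (max(elapsed_sec, 1) * max(ncpus, 1))

def memory_usage_percent(max_rss_mb, requested_mb):
    # what seff calls "Memory Efficiency": peak usage relative to the reservation;
    # a low value only means unused memory, not an inefficient job
    return 100.0 * max_rss_mb / max(requested_mb, 1)

# example: 4 cores reserved for 1 hour with 1 core-hour of CPU time -> 25 % CPU efficiency
print(cpu_efficiency_percent(3600, 3600, 4))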
This PR is related to issue #147
Summary by CodeRabbit
New Features
Documentation
Bug Fixes
Tests
Chores