
Conversation

@mikel-brostrom
Owner

No description provided.

Copilot AI review requested due to automatic review settings October 24, 2025 07:31
Contributor

Copilot AI left a comment


Pull Request Overview

This PR refactors the TensorRT engine exporter to align with a standard onnx2engine pattern while adding robust error handling, DLA support for Jetson devices, and optional metadata embedding in the engine file.

Key Changes:

  • Added comprehensive error handling and validation for TensorRT export process
  • Introduced DLA (Deep Learning Accelerator) support for NVIDIA Jetson devices with proper validation
  • Implemented an optional metadata header for engine files (see the sketch after this list)

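For orientation, the sketch below shows one way such an onnx2engine helper can be structured, assuming the TensorRT ≥ 8.4 Python API. It is not the PR's actual implementation; the function name, parameters (workspace in GB, half, dla_core), and error messages are illustrative assumptions.

import tensorrt as trt

def onnx2engine(onnx_file, engine_file, workspace=4, half=False, dla_core=None):
    # Parse the ONNX graph into a TensorRT network definition
    logger = trt.Logger(trt.Logger.INFO)
    builder = trt.Builder(logger)
    network = builder.create_network(1 << int(trt.NetworkDefinitionCreationFlag.EXPLICIT_BATCH))
    parser = trt.OnnxParser(network, logger)
    with open(onnx_file, "rb") as f:
        if not parser.parse(f.read()):
            raise RuntimeError(f"Failed to parse ONNX file: {onnx_file}")

    config = builder.create_builder_config()
    # workspace is given in gigabytes and converted to bytes here
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, int(workspace * (1 << 30)))
    if half and builder.platform_has_fast_fp16:
        config.set_flag(trt.BuilderFlag.FP16)
    if dla_core is not None:
        # DLA only exists on Jetson-class devices; unsupported layers fall back to the GPU
        config.default_device_type = trt.DeviceType.DLA
        config.DLA_core = dla_core
        config.set_flag(trt.BuilderFlag.GPU_FALLBACK)

    serialized = builder.build_serialized_network(network, config)
    if serialized is None:
        raise RuntimeError("TensorRT engine build failed")
    with open(engine_file, "wb") as f:
        f.write(serialized)
    return engine_file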

    import tensorrt as trt
except ImportError:
    import tensorrt as trt
    import tensorrt as trt  # noqa: F401

Copilot AI Oct 24, 2025


The # noqa: F401 suppresses the unused import warning, but the import is actually used on line 86. Remove the # noqa: F401 comment since this import is legitimately used.

Suggested change
import tensorrt as trt # noqa: F401
import tensorrt as trt

return self._onnx2engine(
    onnx_file=str(onnx_file),
    engine_file=str(self.file.with_suffix(".engine")),
    workspace=getattr(self, "workspace", None),  # GB (float or int); handled below

Copilot AI Oct 24, 2025


[nitpick] The comment 'GB (float or int); handled below' could be clearer. Consider specifying that workspace is expected in gigabytes and will be converted to bytes in the _onnx2engine method.

Suggested change
workspace=getattr(self, "workspace", None), # GB (float or int); handled below
workspace=getattr(self, "workspace", None), # workspace expected in gigabytes (float or int); will be converted to bytes in _onnx2engine
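As a hedged illustration of what the suggested comment describes (not the PR's actual code), the conversion inside _onnx2engine might look like the following, assuming a TensorRT builder config object named config:

# workspace arrives in gigabytes (float or int); convert to bytes for TensorRT
if workspace is not None and workspace > 0:
    config.set_memory_pool_limit(trt.MemoryPoolType.WORKSPACE, int(workspace * (1 << 30)))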

Comment on lines +86 to +87
import tensorrt as trt # noqa


Copilot AI Oct 24, 2025


The # noqa comment without a specific error code suppresses all linting warnings. Since TensorRT is already imported at line 46, this is a duplicate import. Remove this redundant import or consolidate it at the module level if needed in both methods.

Suggested change
import tensorrt as trt # noqa

# Optional metadata block (length + JSON) BEFORE engine bytes to mirror your example
if metadata is not None:
    meta = json.dumps(metadata)
    t.write(len(meta).to_bytes(4, byteorder="little", signed=True))

Copilot AI Oct 24, 2025


Using signed=True for a length value is unusual and could lead to issues if the metadata length exceeds 2GB (max signed 32-bit int). Consider using signed=False for length fields unless there's a specific protocol requirement for signed integers.

Suggested change
t.write(len(meta).to_bytes(4, byteorder="little", signed=True))
t.write(len(meta).to_bytes(4, byteorder="little", signed=False))
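For context, here is a minimal sketch of the length-prefixed metadata round-trip with an unsigned length field. The helper names (write_engine_with_metadata, read_engine_with_metadata) and the UTF-8 encoding step are assumptions for illustration, not the PR's actual functions:

import json

def write_engine_with_metadata(path, engine_bytes, metadata=None):
    with open(path, "wb") as t:
        if metadata is not None:
            meta = json.dumps(metadata).encode("utf-8")
            t.write(len(meta).to_bytes(4, byteorder="little", signed=False))  # unsigned length prefix
            t.write(meta)                                                      # JSON metadata header
        t.write(engine_bytes)  # raw serialized engine follows the optional header

def read_engine_with_metadata(path):
    # Assumes the header was written; a production reader would also validate the length
    with open(path, "rb") as f:
        meta_len = int.from_bytes(f.read(4), byteorder="little", signed=False)
        metadata = json.loads(f.read(meta_len).decode("utf-8"))
        engine_bytes = f.read()
    return metadata, engine_bytes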
