I am running CUDA 12.8, TensorRT 10.7.0, and torch 2.6.0. When I try generating the engine file with python tools/trt.py -n yolox-x -c weights/yolox_x.pth I get the following error:
2025-03-05 01:42:21.741 | INFO | __main__:main:60 - loaded checkpoint done.
[03/05/2025-01:42:22] [TRT] [I] [MemUsageChange] Init CUDA: CPU +0, GPU +0, now: CPU 1210, GPU 6014 (MiB)
[03/05/2025-01:42:22] [TRT] [I] [MemUsageChange] Init builder kernel library: CPU +1, GPU +0, now: CPU 1413, GPU 6218 (MiB)
[03/05/2025-01:42:24] [TRT] [W] ITensor::setType(Float) was called on non I/O tensor: output_0. This will have no effect unless this tensor is marked as an output.
[03/05/2025-01:42:24] [TRT] [E] IBuilder::buildSerializedNetwork: Error Code 9: API Usage Error (Target GPU SM 87 is not supported by this TensorRT release.)
Saving torch model to ./YOLOX_outputs/yolox_x/model_trt.pth
2025-03-05 01:42:24.098 | ERROR | __main__:<module>:93 - An error has been caught in function '<module>', process 'MainProcess' (6468), thread 'MainThread' (281473344879872):
Traceback (most recent call last):
> File "/workspaces/aion-yolo/offline/aion-yolox/tools/trt.py", line 93, in <module>
main()
└ <function main at 0xfffefc354900>
File "/opt/venv/lib/python3.12/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
│ │ └ {}
│ └ ()
└ <function main at 0xfffefc4cba60>
File "/workspaces/aion-yolo/offline/aion-yolox/tools/trt.py", line 77, in main
torch.save(model_trt.state_dict(), os.path.join(
│ │ │ │ │ │ └ <function join at 0xffff9e314ae0>
│ │ │ │ │ └ <module 'posixpath' (frozen)>
│ │ │ │ └ <module 'os' (frozen)>
│ │ │ └ <function Module.state_dict at 0xfffefe130fe0>
│ │ └ TRTModule()
│ └ <function save at 0xfffefe63f560>
└ <module 'torch' from '/opt/venv/lib/python3.12/site-packages/torch/__init__.py'>
File "/opt/venv/lib/python3.12/site-packages/torch/nn/modules/module.py", line 2222, in state_dict
hook_result = hook(self, destination, prefix, local_metadata)
│ │ │ │ └ {'version': 1}
│ │ │ └ ''
│ │ └ OrderedDict()
│ └ TRTModule()
└ <function TRTModule._on_state_dict at 0xfffefc34dc60>
File "/opt/venv/lib/python3.12/site-packages/torch2trt-0.5.0-py3.12.egg/torch2trt/trt_module.py", line 60, in _on_state_dict
state_dict[prefix + "engine"] = bytearray(self.engine.serialize())
│ │ │ └ None
│ │ └ TRTModule()
│ └ ''
└ OrderedDict()
AttributeError: 'NoneType' object has no attribute 'serialize'
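For context, the AttributeError looks like a secondary failure: the TensorRT build aborts with the "Target GPU SM 87 is not supported by this TensorRT release" error, so torch2trt's TRTModule ends up with engine=None, and the later state_dict() call crashes when it tries to serialize it. A minimal sketch of a guard around the save step (hypothetical helper, not part of the upstream tools/trt.py) that would surface the real build error instead:

```python
import os
import torch


def save_trt_state_dict(model_trt, out_dir):
    # Hypothetical helper: refuse to serialize when the TensorRT build failed.
    # torch2trt's TRTModule keeps engine=None in that case, and calling
    # state_dict() then raises AttributeError on engine.serialize().
    if getattr(model_trt, "engine", None) is None:
        raise RuntimeError(
            "TensorRT engine build failed; see the [TRT] [E] message above "
            "(here: the installed TensorRT release does not support SM 87)."
        )
    torch.save(model_trt.state_dict(), os.path.join(out_dir, "model_trt.pth"))
```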
What is the best way to resolve? Thank you in advance.