Fixed parts of issue #56 #60

Closed
wants to merge 4 commits
1 change: 1 addition & 0 deletions .gitignore
@@ -2,3 +2,4 @@
.pytest_cache/
__pycache__/
*.egg-info/*
.venv
45 changes: 44 additions & 1 deletion README.md
@@ -108,7 +108,50 @@ Migration guide:
This function is deprecated. Use the `torch.nn.attention.sdpa_kernel` context manager instead.

Migration guide:

Each boolean input parameter of `sdp_kernel` (each defaulting to `True` unless specified) corresponds to a `SDPBackend`.
For every parameter that is `True`, include the corresponding backend in the list passed to `sdpa_kernel`.


### TOR102 Unsafe use of `torch.load` without the `weights_only` parameter

The use of `torch.load` without the `weights_only` parameter is unsafe.
Loading an untrusted pickle file may lead to the execution of arbitrary malicious code and potential security issues.

Migration Guide:

Use `weights_only=True` unless you trust the source of the file and need full pickle functionality; only in that case should you explicitly set `weights_only=False`.
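A short sketch of both modes (the file path and dictionary contents are illustrative):

```python
import os
import tempfile

import torch

path = os.path.join(tempfile.mkdtemp(), "checkpoint.pt")
torch.save({"weights": torch.ones(3)}, path)

# Safe: only tensors and plain containers are deserialized,
# arbitrary pickle code cannot run
state = torch.load(path, weights_only=True)

# Unsafe: full pickle deserialization; reserve for trusted files only
# state = torch.load(path, weights_only=False)

print(state["weights"])
```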


### TOR104 Use of non-public function

#### torch.utils.data._utils.collate.default_collate

Public functions are documented and supported by the library maintainers. Use of the non-public function
`torch.utils.data._utils.collate.default_collate` is discouraged because it can change without notice in future versions,
potentially breaking your code.

Migration Guide:

For better maintainability and compatibility, please use the public function `torch.utils.data.dataloader.default_collate` instead.
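The swap is a one-line import change; a minimal sketch:

```python
import torch

# Discouraged (private module, may change without notice):
# from torch.utils.data._utils.collate import default_collate

# Preferred (public, documented):
from torch.utils.data.dataloader import default_collate

# default_collate stacks a list of samples into one batched tensor
batch = default_collate([torch.tensor([1, 2]), torch.tensor([3, 4])])
print(batch)  # a stacked 2x2 tensor
```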

### TOR201 Parameter `pretrained` is deprecated, please use `weights` instead.

The parameter `pretrained` has been deprecated in TorchVision models since TorchVision 0.13 (released alongside PyTorch 1.12). Use the `weights` parameter instead, passing a weights enum (or `None` for random initialization).

### TOR202 The transform `v2.ToTensor()` is deprecated and will be removed in a future release.

The transform `v2.ToTensor()` is deprecated and will be removed in a future release. Instead, please use `v2.Compose([v2.ToImage(), v2.ToDtype(torch.float32, scale=True)])`.


### TOR203 Consider replacing `import torchvision.models as models` with `from torchvision import models`.

Consider replacing `import torchvision.models as models` with `from torchvision import models` to improve clarity and maintainability and to follow common practice.
The alias form can cause confusion and namespace conflicts with other modules or variables named `models`; the explicit import style helps avoid such issues.

### TOR401 Detected DataLoader running with synchronized implementation

Running `DataLoader` with its synchronous implementation can hurt data-loading performance, especially when dealing with large datasets. A viable solution is to set the `num_workers` parameter to a value greater than 0 when constructing the `DataLoader`; this parallelizes the loading operations and can significantly improve performance.
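A minimal sketch (the dataset and batch size are placeholders; iterating the loader is what actually spawns the worker processes):

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

dataset = TensorDataset(torch.arange(100).float())

# num_workers=0 (the default) loads every batch in the main process;
# num_workers > 0 spawns worker processes that load batches in parallel
loader = DataLoader(dataset, batch_size=10, num_workers=2)
print(loader.num_workers)  # 2
```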

#### torch.chain_matmul
