Create rule S6979 #3959


Merged (5 commits, Sep 9, 2024)
2 changes: 2 additions & 0 deletions rules/S6979/metadata.json
@@ -0,0 +1,2 @@
{
}
23 changes: 23 additions & 0 deletions rules/S6979/python/metadata.json
@@ -0,0 +1,23 @@
{
"title": "\"torch.tensor\" should be used instead of \"torch.autograd.Variable\"",
"type": "CODE_SMELL",
"status": "ready",
"remediation": {
"func": "Constant\/Issue",
"constantCost": "2min"
},
"tags": [
],
"defaultSeverity": "Major",
"ruleSpecification": "RSPEC-6979",
"sqKey": "S6979",
"scope": "All",
"defaultQualityProfiles": ["Sonar way"],
"quickfix": "targeted",
"code": {
"impacts": {
"MAINTAINABILITY": "MEDIUM"
},
"attribute": "CONVENTIONAL"
}
}
66 changes: 66 additions & 0 deletions rules/S6979/python/rule.adoc
@@ -0,0 +1,66 @@
This rule raises when a `torch.autograd.Variable` is instantiated.

== Why is this an issue?

The PyTorch Variable API has been deprecated: the behavior it provided is now available directly on PyTorch tensors and can be controlled with the `requires_grad` parameter.

The Variable API now returns tensors anyway, so migrating should not cause any breaking changes.

== How to fix it

Replace the call to `torch.autograd.Variable` with a call to `torch.tensor`, setting the `requires_grad` parameter to `True` if needed.

=== Code examples

==== Noncompliant code example

[source,python,diff-id=1,diff-type=noncompliant]
----
import torch

x = torch.autograd.Variable(torch.tensor([1.0]), requires_grad=True) # Noncompliant
x2 = torch.autograd.Variable(torch.tensor([1.0])) # Noncompliant
----

==== Compliant solution

[source,python,diff-id=1,diff-type=compliant]
----
import torch

x = torch.tensor([1.0], requires_grad=True)
x2 = torch.tensor([1.0])
----


== Resources
=== Documentation

* PyTorch documentation - https://pytorch.org/docs/stable/autograd.html#variable-deprecated[Variable API]


ifdef::env-github,rspecator-view[]

(visible only on this page)

== Implementation specification

Should be pretty straightforward to implement.

=== Message

Primary: Replace this call with a call to "torch.tensor".


=== Issue location

Primary: Name of the function call

=== Quickfix

Determining how to reference the `torch.tensor` function might be tricky.
If there is an import like `from torch import tensor`, replace the call with `tensor(...)`.
Otherwise, replace it with `torch.tensor(...)`.
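The import-resolution logic described above could be sketched roughly as follows. This is a hypothetical stand-alone helper (the `tensor_call_text` name and the use of the standard-library `ast` module are illustrative assumptions, not the actual analyzer implementation):

```python
import ast


def tensor_call_text(source: str) -> str:
    """Decide how a quickfix should spell the replacement call,
    based on the imports present in the analyzed module.
    Hypothetical sketch, not the real analyzer code."""
    tree = ast.parse(source)
    for node in ast.walk(tree):
        # `from torch import tensor` (possibly aliased) makes the
        # bare name available in the module, so prefer it.
        if isinstance(node, ast.ImportFrom) and node.module == "torch":
            for alias in node.names:
                if alias.name == "tensor":
                    return (alias.asname or alias.name) + "(...)"
    # Otherwise fall back to the fully qualified name.
    return "torch.tensor(...)"
```

For example, a module containing `from torch import tensor as t` would get the replacement text `t(...)`, while a module with a plain `import torch` would get `torch.tensor(...)`.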


endif::env-github,rspecator-view[]