Add check for second value in sum: Logsumexp #90

Closed · wants to merge 11 commits
Changes from 8 commits
23 changes: 16 additions & 7 deletions tests/fixtures/misc/checker/logsumexp.py
@@ -3,12 +3,21 @@
b = torch.randn(5)

# logsumexp
y = torch.log(torch.sum(torch.exp(x), 1, keepdim=True))
y = torch.log(torch.sum(torch.exp(2.5 + x), 1))
y = torch.log(torch.sum(torch.exp(x), 1, keepdim=True))  # all positional arguments for the sum call, with keepdim=True
y = torch.log(torch.sum(torch.exp(2.5 + x), 1))  # addition inside the exp call
y = torch.log(torch.sum(torch.exp(x), dim=1, keepdim=True))  # all keyword arguments for the sum call
y = torch.log(torch.sum(torch.exp(x), dim=1))  # default value of keepdim is False
y = torch.log(torch.sum(torch.exp(x), dim=(1, 2)))  # default value of keepdim is False

# not logsumexp
y = torch.log(torch.sum(torch.exp(x), 1, keepdim=True) + 2.5)
y = torch.log(torch.sum(torch.exp(x) + 2.5, 1))
y = torch.log(2 + x)
y = torch.sum(torch.log(torch.exp(x)), 1)
y = torch.exp(torch.sum(torch.log(x), 1, keepdim=True))
y = torch.log(torch.sum(torch.exp(x), 1, keepdim=True) + 2.5)  # can't have an addition inside the log call
y = torch.log(torch.sum(torch.exp(x) + 2.5, 1))  # can't have an addition in the first sum argument, which must be the exp call
y = torch.log(2 + x)  # missing sum and exp
y = torch.sum(torch.log(torch.exp(x)), 1)  # log and sum in the wrong order
y = torch.exp(torch.sum(torch.log(x), 1, keepdim=True))  # order of log, sum, and exp is reversed
y = torch.log(torch.sum(torch.exp(2.5)))  # not flagged: sum has no second argument and exp takes a number instead of a tensor
y = torch.log(torch.sum(torch.exp(x)), dim=1)  # dim is not part of the sum call
y = torch.log(torch.sum(torch.exp(x)), dim=None)  # dim is not part of the sum call, and dim is None
y = torch.log(torch.sum(torch.exp(x), keepdim=True, dim=None))  # dim argument cannot be None
y = torch.log(torch.sum(torch.exp(x), dim=(1, None)))  # dim argument cannot be a tuple containing None
y = torch.log(torch.sum(torch.exp(x), dim=(None, None)))  # dim argument cannot be a tuple containing None
Review — Contributor @kit1980, Feb 4, 2025:
No need to check for dim=(None,None) or dim=(1,None); it cannot happen, because if dim is present it is an int or a tuple of ints: https://pytorch.org/docs/stable/generated/torch.sum.html

3 changes: 3 additions & 0 deletions tests/fixtures/misc/checker/logsumexp.txt
@@ -1,2 +1,5 @@
6:5 TOR108 Use numerically stabilized `torch.logsumexp`.
7:5 TOR108 Use numerically stabilized `torch.logsumexp`.
8:5 TOR108 Use numerically stabilized `torch.logsumexp`.
9:5 TOR108 Use numerically stabilized `torch.logsumexp`.
10:5 TOR108 Use numerically stabilized `torch.logsumexp`.
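For context on why TOR108 recommends `torch.logsumexp`: the naive composition `log(sum(exp(x)))` overflows for large inputs, while the stabilized form shifts by the maximum, using the identity logsumexp(x) = m + log Σ exp(x_i − m) with m = max(x). A minimal pure-Python sketch of the difference (illustrative only, not TorchFix or PyTorch code):

```python
import math

def naive_logsumexp(xs):
    # Overflows as soon as any x is large: math.exp(1000) raises OverflowError.
    return math.log(sum(math.exp(x) for x in xs))

def stable_logsumexp(xs):
    # Shift by the maximum so every exponent is <= 0 before exponentiating.
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs))

print(stable_logsumexp([1000.0, 1000.0]))  # ~1000.6931; the naive version overflows
```

The shift changes nothing mathematically (the factor exp(m) is pulled out of the sum and reappears as the added m), but it keeps every intermediate value representable.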
49 changes: 43 additions & 6 deletions torchfix/visitors/misc/__init__.py
@@ -184,9 +184,46 @@ def visit_Call(self, node):
)
== "torch.exp"
):
self.add_violation(
node,
error_code=self.ERRORS[0].error_code,
message=self.ERRORS[0].message(),
replacement=None,
)
if (
self.get_qualified_name_for_call(node.args[0].value)
== "torch.sum"
):
if (
self.get_qualified_name_for_call(
node.args[0].value.args[0].value
)
== "torch.exp"
):
dim_arg = self.get_specific_arg(
node.args[0].value, arg_name="dim", arg_pos=1
)
if dim_arg: # checks if dim argument is present
if isinstance(
(Review — Contributor @kit1980, Feb 3, 2025: These lines are redundant, no? Later there are checks for cst.Integer and cst.Tuple.
Author reply: Yes. Removed them since they were test code.)

dim_arg.value, cst.Integer
) or isinstance(
dim_arg.value, cst.Tuple
): # checks if dim argument is an integer or tuple
if (
isinstance(dim_arg.value, cst.Integer)
(Review — Contributor @kit1980: cst.Integer cannot be None, so this is a meaningless condition.
Author reply: The condition checks both that the value is an Integer and that it does not hold the literal None, since tuples passed to dim cannot contain None values.)
and dim_arg.value.value != "None"
):
self.add_violation(
node,
error_code=self.ERRORS[0].error_code,
message=self.ERRORS[0].message(),
replacement=None,
)
elif isinstance(
dim_arg.value, cst.Tuple
) and all(
isinstance(element.value, cst.Integer)
and element.value.value != "None"
for element in dim_arg.value.elements
): # checks if all elements of the
# tuple are not None
self.add_violation(
node,
error_code=self.ERRORS[0].error_code,
message=self.ERRORS[0].message(),
replacement=None,
)
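As a rough illustration of the pattern this visitor matches — the nested `torch.log(torch.sum(torch.exp(...)))` chain with a `dim` that is an int or a tuple of ints — here is a sketch using the stdlib `ast` module. This is not TorchFix code (TorchFix uses libcst and its own `get_qualified_name_for_call` helper); all names below are hypothetical:

```python
import ast

def _qualname(node):
    # Dotted name for simple attribute calls like torch.sum(...), else None.
    if isinstance(node, ast.Call) and isinstance(node.func, ast.Attribute):
        if isinstance(node.func.value, ast.Name):
            return f"{node.func.value.id}.{node.func.attr}"
    return None

def flags_tor108(src: str) -> bool:
    """Flag torch.log(torch.sum(torch.exp(...), dim)) where dim is an
    int or a tuple of ints, whether positional or keyword."""
    for node in ast.walk(ast.parse(src)):
        if _qualname(node) != "torch.log" or len(node.args) != 1:
            continue
        inner = node.args[0]
        if _qualname(inner) != "torch.sum":
            continue
        if not inner.args or _qualname(inner.args[0]) != "torch.exp":
            continue
        # dim may be the second positional argument or a keyword argument.
        dim = inner.args[1] if len(inner.args) > 1 else None
        for kw in inner.keywords:
            if kw.arg == "dim":
                dim = kw.value
        if dim is None:
            continue
        if isinstance(dim, ast.Constant) and isinstance(dim.value, int):
            return True
        if isinstance(dim, ast.Tuple) and all(
            isinstance(e, ast.Constant) and isinstance(e.value, int)
            for e in dim.elts
        ):
            return True
    return False
```

Note how the sketch mirrors the cases in the test fixture: an addition anywhere in the chain breaks the `torch.sum`/`torch.exp` qualified-name match, a `dim` attached to `torch.log` instead of `torch.sum` is ignored, and `dim=None` or a tuple containing `None` fails the int checks.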