Commit 89a40da

Update README.md
1 parent e4e6713 commit 89a40da

File tree

2 files changed: 10 additions, 1 deletion
README.md

Lines changed: 2 additions & 1 deletion
@@ -9,7 +9,8 @@ There are a few "sights" you can metaphorically visit in this repository:
 - Build C++ and/or CUDA extensions by going into the `cpp/` or `cuda/` folder and executing `python setup.py install`,
 - JIT-compile C++ and/or CUDA extensions by going into the `cpp/` or `cuda/` folder and calling `python jit.py`, which will JIT-compile the extension and load it,
 - Benchmark Python vs. C++ vs. CUDA by running `python benchmark.py {py, cpp, cuda} [--cuda]`,
-- Run gradient-checks on the code by running `python grad_check.py {py, cpp, cuda}`.
+- Run gradient checks on the code by running `python grad_check.py {py, cpp, cuda} [--cuda]`.
+- Run output checks on the code by running `python check.py {forward, backward} [--cuda]`.
 
 ## Authors

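The gradient-check script referenced in the updated README is not part of this diff, but a natural way to implement such a check is PyTorch's torch.autograd.gradcheck, which compares the analytical gradients produced by backward() against numerical finite-difference gradients. A minimal, self-contained sketch follows; the function and tensor shapes are hypothetical stand-ins, not the repository's LLTM op.

import torch

# Hypothetical op standing in for a custom autograd function; gradcheck
# accepts any callable mapping input tensors to output tensors.
def scaled_tanh(input, weight):
    return torch.tanh(input) * weight

# gradcheck needs double precision and requires_grad=True so that the
# finite-difference comparison is numerically meaningful.
x = torch.randn(4, 3, dtype=torch.double, requires_grad=True)
w = torch.randn(4, 3, dtype=torch.double, requires_grad=True)

# Raises an exception if analytical and numerical gradients disagree
# beyond the given tolerances.
torch.autograd.gradcheck(scaled_tanh, (x, w), eps=1e-6, atol=1e-4)
print('Ok')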
check.py

Lines changed: 8 additions & 0 deletions
@@ -24,6 +24,11 @@ def check_equal(first, second, verbose):
         np.testing.assert_allclose(x, y, err_msg="Index: {}".format(i))
 
 
+def zero_grad(variables):
+    for variable in variables:
+        variable.grad.zero_()
+
+
 def check_forward(variables, with_cuda, verbose):
     baseline_values = python.lltm_baseline.LLTMFunction.apply(*variables)
     cpp_values = cpp.lltm.LLTMFunction.apply(*variables)
@@ -44,6 +49,8 @@ def check_backward(variables, with_cuda, verbose):
     (baseline_values[0] + baseline_values[1]).sum().backward()
     grad_baseline = [var.grad for var in variables]
 
+    zero_grad(variables)
+
     cpp_values = cpp.lltm.LLTMFunction.apply(*variables)
     (cpp_values[0] + cpp_values[1]).sum().backward()
     grad_cpp = [var.grad for var in variables]
@@ -53,6 +60,7 @@ def check_backward(variables, with_cuda, verbose):
     print('Ok')
 
     if with_cuda:
+        zero_grad(variables)
         cuda_values = cuda.lltm.LLTMFunction.apply(*variables)
         (cuda_values[0] + cuda_values[1]).sum().backward()
         grad_cuda = [var.grad for var in variables]

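The new zero_grad calls matter because PyTorch accumulates gradients: every backward() call adds into each leaf tensor's .grad rather than overwriting it, so without a reset between the baseline, C++, and CUDA passes the later gradient comparisons would also contain contributions from the earlier passes. A standalone sketch of the effect (the tensors here are illustrative, not taken from check.py):

import torch

x = torch.ones(3, requires_grad=True)

# First backward pass: d/dx of sum(2 * x) is 2 for every element.
(2 * x).sum().backward()
first = x.grad.clone()          # clone, since .grad is updated in place below

# Second backward pass without zeroing: gradients accumulate (2 + 3 = 5).
(3 * x).sum().backward()
accumulated = x.grad.clone()

# Zeroing first, as check.py's zero_grad now does, isolates the second pass.
x.grad.zero_()
(3 * x).sum().backward()
isolated = x.grad.clone()

print(first)        # tensor([2., 2., 2.])
print(accumulated)  # tensor([5., 5., 5.])
print(isolated)     # tensor([3., 3., 3.])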