
Commit 9e26ef3

Automated tutorials push
1 parent a21d50b commit 9e26ef3

File tree

205 files changed: +12857 / −12696 lines


_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "fd7b9d2d",
+   "id": "07ca7101",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -50,7 +50,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "46c9b1d1",
+   "id": "5c80c64e",
    "metadata": {},
    "source": [
     "\n",

_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "cbdc2894",
+   "id": "65cd1d93",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -47,7 +47,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "a6876e37",
+   "id": "355262c2",
    "metadata": {},
    "source": [
     "\n",

_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "077f87e9",
+   "id": "c15d3934",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -50,7 +50,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "552e9b60",
+   "id": "edc64c5d",
    "metadata": {},
    "source": [
     "\n",

_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "9a3fe3ca",
+   "id": "416301d9",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -51,7 +51,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "eb3a0ab0",
+   "id": "f8d96351",
    "metadata": {},
    "source": [
     "\n",

_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "9416f772",
+   "id": "1f4610df",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -53,7 +53,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "2fcf8bbd",
+   "id": "73c65f06",
    "metadata": {},
    "source": [
     "\n",

_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "a88c8a4e",
+   "id": "bd2b8139",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -51,7 +51,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "95a37240",
+   "id": "4fc6d23a",
    "metadata": {},
    "source": [
     "\n",

_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "07b323fb",
+   "id": "4fdbbc06",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -50,7 +50,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "def99fa8",
+   "id": "59a4103a",
    "metadata": {},
    "source": [
     "\n",

_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
   {
    "cell_type": "code",
    "execution_count": null,
-   "id": "bd934e01",
+   "id": "f252b13b",
    "metadata": {},
    "outputs": [],
    "source": [
@@ -50,7 +50,7 @@
   },
   {
    "cell_type": "markdown",
-   "id": "32100206",
+   "id": "9f07beb5",
    "metadata": {},
    "source": [
     "\n",
_images/sphx_glr_coding_ddpg_001.png

-505 Bytes

Ten other regenerated plot images changed in size (between −9.06 KB and +688 bytes).

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.
 
 
   0%| | 0/10000 [00:00<?, ?it/s]
-  8%|8 | 800/10000 [00:00<00:05, 1723.48it/s]
- 16%|#6 | 1600/10000 [00:02<00:17, 482.01it/s]
- 24%|##4 | 2400/10000 [00:03<00:10, 692.03it/s]
- 32%|###2 | 3200/10000 [00:04<00:07, 889.82it/s]
- 40%|#### | 4000/10000 [00:04<00:05, 1051.29it/s]
- 48%|####8 | 4800/10000 [00:05<00:04, 1175.63it/s]
- 56%|#####6 | 5600/10000 [00:05<00:03, 1268.34it/s]
-reward: -2.15 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-2.11/6.67, grad norm= 81.09, loss_value= 480.25, loss_actor= 14.00, target value: -12.47: 56%|#####6 | 5600/10000 [00:06<00:03, 1268.34it/s]
-reward: -2.15 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-2.11/6.67, grad norm= 81.09, loss_value= 480.25, loss_actor= 14.00, target value: -12.47: 64%|######4 | 6400/10000 [00:07<00:04, 881.03it/s]
-reward: -0.09 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-1.62/5.64, grad norm= 233.21, loss_value= 279.81, loss_actor= 12.44, target value: -10.15: 64%|######4 | 6400/10000 [00:07<00:04, 881.03it/s]
-reward: -0.09 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-1.62/5.64, grad norm= 233.21, loss_value= 279.81, loss_actor= 12.44, target value: -10.15: 72%|#######2 | 7200/10000 [00:08<00:03, 729.85it/s]
-reward: -3.25 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-2.81/6.06, grad norm= 324.82, loss_value= 370.70, loss_actor= 15.91, target value: -17.90: 72%|#######2 | 7200/10000 [00:09<00:03, 729.85it/s]
-reward: -3.25 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-2.81/6.06, grad norm= 324.82, loss_value= 370.70, loss_actor= 15.91, target value: -17.90: 80%|######## | 8000/10000 [00:10<00:03, 637.41it/s]
-reward: -4.74 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-1.98/5.00, grad norm= 235.25, loss_value= 263.46, loss_actor= 16.64, target value: -12.76: 80%|######## | 8000/10000 [00:10<00:03, 637.41it/s]
-reward: -4.74 (r0 = -2.19), reward eval: reward: -0.01, reward normalized=-1.98/5.00, grad norm= 235.25, loss_value= 263.46, loss_actor= 16.64, target value: -12.76: 88%|########8 | 8800/10000 [00:11<00:02, 590.95it/s]
-reward: -4.36 (r0 = -2.19), reward eval: reward: -3.03, reward normalized=-2.24/5.10, grad norm= 145.03, loss_value= 217.91, loss_actor= 14.59, target value: -15.13: 88%|########8 | 8800/10000 [00:14<00:02, 590.95it/s]
-reward: -4.36 (r0 = -2.19), reward eval: reward: -3.03, reward normalized=-2.24/5.10, grad norm= 145.03, loss_value= 217.91, loss_actor= 14.59, target value: -15.13: 96%|#########6| 9600/10000 [00:15<00:00, 404.44it/s]
-reward: -5.12 (r0 = -2.19), reward eval: reward: -3.03, reward normalized=-2.91/4.94, grad norm= 82.40, loss_value= 199.72, loss_actor= 14.63, target value: -21.11: 96%|#########6| 9600/10000 [00:15<00:00, 404.44it/s]
-reward: -5.12 (r0 = -2.19), reward eval: reward: -3.03, reward normalized=-2.91/4.94, grad norm= 82.40, loss_value= 199.72, loss_actor= 14.63, target value: -21.11: : 10400it [00:17, 364.44it/s]
-reward: -3.22 (r0 = -2.19), reward eval: reward: -3.03, reward normalized=-3.20/4.04, grad norm= 242.17, loss_value= 150.76, loss_actor= 15.33, target value: -22.04: : 10400it [00:18, 364.44it/s]
+  8%|8 | 800/10000 [00:00<00:05, 1639.57it/s]
+ 16%|#6 | 1600/10000 [00:02<00:17, 487.17it/s]
+ 24%|##4 | 2400/10000 [00:03<00:10, 705.08it/s]
+ 32%|###2 | 3200/10000 [00:03<00:07, 909.04it/s]
+ 40%|#### | 4000/10000 [00:04<00:05, 1079.15it/s]
+ 48%|####8 | 4800/10000 [00:04<00:04, 1230.99it/s]
+ 56%|#####6 | 5600/10000 [00:05<00:03, 1352.05it/s]
+reward: -2.34 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.57/6.06, grad norm= 61.27, loss_value= 241.02, loss_actor= 14.64, target value: -16.11: 56%|#####6 | 5600/10000 [00:06<00:03, 1352.05it/s]
+reward: -2.34 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.57/6.06, grad norm= 61.27, loss_value= 241.02, loss_actor= 14.64, target value: -16.11: 64%|######4 | 6400/10000 [00:06<00:04, 899.53it/s]
+reward: -0.14 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.31/5.83, grad norm= 156.11, loss_value= 344.75, loss_actor= 12.38, target value: -15.08: 64%|######4 | 6400/10000 [00:07<00:04, 899.53it/s]
+reward: -0.14 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.31/5.83, grad norm= 156.11, loss_value= 344.75, loss_actor= 12.38, target value: -15.08: 72%|#######2 | 7200/10000 [00:08<00:03, 736.68it/s]
+reward: -1.14 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.23/5.64, grad norm= 53.64, loss_value= 203.78, loss_actor= 10.83, target value: -13.65: 72%|#######2 | 7200/10000 [00:09<00:03, 736.68it/s]
+reward: -1.14 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.23/5.64, grad norm= 53.64, loss_value= 203.78, loss_actor= 10.83, target value: -13.65: 80%|######## | 8000/10000 [00:10<00:03, 645.26it/s]
+reward: -4.46 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.25/5.01, grad norm= 107.66, loss_value= 201.47, loss_actor= 14.83, target value: -15.88: 80%|######## | 8000/10000 [00:10<00:03, 645.26it/s]
+reward: -4.46 (r0 = -2.30), reward eval: reward: 0.01, reward normalized=-2.25/5.01, grad norm= 107.66, loss_value= 201.47, loss_actor= 14.83, target value: -15.88: 88%|########8 | 8800/10000 [00:11<00:02, 595.57it/s]
+reward: -5.10 (r0 = -2.30), reward eval: reward: -4.21, reward normalized=-2.79/5.24, grad norm= 125.92, loss_value= 211.97, loss_actor= 16.07, target value: -19.41: 88%|########8 | 8800/10000 [00:14<00:02, 595.57it/s]
+reward: -5.10 (r0 = -2.30), reward eval: reward: -4.21, reward normalized=-2.79/5.24, grad norm= 125.92, loss_value= 211.97, loss_actor= 16.07, target value: -19.41: 96%|#########6| 9600/10000 [00:15<00:00, 400.81it/s]
+reward: -4.87 (r0 = -2.30), reward eval: reward: -4.21, reward normalized=-3.25/5.48, grad norm= 365.64, loss_value= 370.61, loss_actor= 15.09, target value: -22.95: 96%|#########6| 9600/10000 [00:15<00:00, 400.81it/s]
+reward: -4.87 (r0 = -2.30), reward eval: reward: -4.21, reward normalized=-3.25/5.48, grad norm= 365.64, loss_value= 370.61, loss_actor= 15.09, target value: -22.95: : 10400it [00:17, 364.77it/s]
+reward: -4.24 (r0 = -2.30), reward eval: reward: -4.21, reward normalized=-3.06/4.72, grad norm= 93.86, loss_value= 195.94, loss_actor= 17.67, target value: -22.37: : 10400it [00:18, 364.77it/s]
 
 
 
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 28.623 seconds)
+   **Total running time of the script:** ( 0 minutes 28.439 seconds)
 
 
 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/cpp_extension.rst.txt

Lines changed: 2 additions & 0 deletions
@@ -1207,3 +1207,5 @@ examples displayed in this note `here
 <https://github.com/pytorch/extension-cpp>`_. If you have questions, please use
 `the forums <https://discuss.pytorch.org>`_. Also be sure to check our `FAQ
 <https://pytorch.org/cppdocs/notes/faq.html>`_ in case you run into any issues.
+A blog on writing extensions for AMD ROCm can be found `here
+<https://rocm.blogs.amd.com/artificial-intelligence/cpp-extn/readme.html>`_.

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
 .. code-block:: none
 
     loss: 5.167
-    elapsed time (seconds): 214.3
+    elapsed time (seconds): 207.7
     loss: 5.168
-    elapsed time (seconds): 119.1
+    elapsed time (seconds): 118.3
 
 
 
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 42.296 seconds)
+   **Total running time of the script:** ( 5 minutes 34.400 seconds)
 
 
 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 34 additions & 38 deletions
@@ -410,37 +410,33 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
 
   0%| | 0.00/548M [00:00<?, ?B/s]
-  3%|2 | 16.2M/548M [00:00<00:03, 169MB/s]
-  6%|6 | 33.1M/548M [00:00<00:03, 174MB/s]
-  9%|9 | 50.6M/548M [00:00<00:02, 178MB/s]
- 12%|#2 | 68.1M/548M [00:00<00:02, 180MB/s]
- 16%|#5 | 86.1M/548M [00:00<00:02, 183MB/s]
- 19%|#9 | 104M/548M [00:00<00:02, 185MB/s]
- 22%|##2 | 123M/548M [00:00<00:02, 187MB/s]
- 26%|##5 | 141M/548M [00:00<00:02, 188MB/s]
- 29%|##9 | 159M/548M [00:00<00:02, 189MB/s]
- 32%|###2 | 177M/548M [00:01<00:02, 189MB/s]
- 36%|###5 | 196M/548M [00:01<00:01, 190MB/s]
- 39%|###9 | 214M/548M [00:01<00:01, 190MB/s]
- 42%|####2 | 232M/548M [00:01<00:01, 190MB/s]
- 46%|####5 | 250M/548M [00:01<00:01, 190MB/s]
- 49%|####9 | 269M/548M [00:01<00:01, 190MB/s]
- 52%|#####2 | 287M/548M [00:01<00:01, 190MB/s]
- 56%|#####5 | 305M/548M [00:01<00:01, 191MB/s]
- 59%|#####9 | 324M/548M [00:01<00:01, 191MB/s]
- 62%|######2 | 342M/548M [00:01<00:01, 191MB/s]
- 66%|######5 | 360M/548M [00:02<00:01, 191MB/s]
- 69%|######9 | 378M/548M [00:02<00:00, 191MB/s]
- 72%|#######2 | 397M/548M [00:02<00:00, 191MB/s]
- 76%|#######5 | 415M/548M [00:02<00:00, 191MB/s]
- 79%|#######9 | 433M/548M [00:02<00:00, 191MB/s]
- 82%|########2 | 452M/548M [00:02<00:00, 191MB/s]
- 86%|########5 | 470M/548M [00:02<00:00, 191MB/s]
- 89%|########9 | 488M/548M [00:02<00:00, 191MB/s]
- 92%|#########2| 506M/548M [00:02<00:00, 191MB/s]
- 96%|#########5| 524M/548M [00:02<00:00, 191MB/s]
- 99%|#########9| 543M/548M [00:03<00:00, 191MB/s]
-100%|##########| 548M/548M [00:03<00:00, 189MB/s]
+  4%|3 | 20.5M/548M [00:00<00:02, 214MB/s]
+  8%|7 | 41.2M/548M [00:00<00:02, 215MB/s]
+ 11%|#1 | 62.4M/548M [00:00<00:02, 218MB/s]
+ 15%|#5 | 83.6M/548M [00:00<00:02, 219MB/s]
+ 19%|#9 | 105M/548M [00:00<00:02, 220MB/s]
+ 23%|##2 | 126M/548M [00:00<00:02, 220MB/s]
+ 27%|##6 | 147M/548M [00:00<00:01, 220MB/s]
+ 31%|### | 168M/548M [00:00<00:01, 220MB/s]
+ 35%|###4 | 189M/548M [00:00<00:01, 221MB/s]
+ 38%|###8 | 210M/548M [00:01<00:01, 221MB/s]
+ 42%|####2 | 232M/548M [00:01<00:01, 221MB/s]
+ 46%|####6 | 253M/548M [00:01<00:01, 221MB/s]
+ 50%|####9 | 274M/548M [00:01<00:01, 221MB/s]
+ 54%|#####3 | 295M/548M [00:01<00:01, 221MB/s]
+ 58%|#####7 | 316M/548M [00:01<00:01, 221MB/s]
+ 62%|######1 | 337M/548M [00:01<00:01, 220MB/s]
+ 65%|######5 | 358M/548M [00:01<00:00, 220MB/s]
+ 69%|######9 | 379M/548M [00:01<00:00, 220MB/s]
+ 73%|#######3 | 400M/548M [00:01<00:00, 220MB/s]
+ 77%|#######6 | 422M/548M [00:02<00:00, 220MB/s]
+ 81%|######## | 442M/548M [00:02<00:00, 220MB/s]
+ 85%|########4 | 464M/548M [00:02<00:00, 220MB/s]
+ 88%|########8 | 484M/548M [00:02<00:00, 220MB/s]
+ 92%|#########2| 506M/548M [00:02<00:00, 220MB/s]
+ 96%|#########6| 526M/548M [00:02<00:00, 219MB/s]
+100%|#########9| 548M/548M [00:02<00:00, 220MB/s]
+100%|##########| 548M/548M [00:02<00:00, 220MB/s]
 
 
 
@@ -761,22 +757,22 @@ Finally, we can run the algorithm.
 
 Optimizing..
 run [50]:
-Style Loss : 4.123869 Content Loss: 4.182580
+Style Loss : 4.168906 Content Loss: 4.182115
 
 run [100]:
-Style Loss : 1.127630 Content Loss: 3.023808
+Style Loss : 1.157379 Content Loss: 3.045664
 
 run [150]:
-Style Loss : 0.704597 Content Loss: 2.645709
+Style Loss : 0.716242 Content Loss: 2.652578
 
 run [200]:
-Style Loss : 0.479943 Content Loss: 2.490248
+Style Loss : 0.478957 Content Loss: 2.489498
 
 run [250]:
-Style Loss : 0.346584 Content Loss: 2.404827
+Style Loss : 0.346494 Content Loss: 2.404430
 
 run [300]:
-Style Loss : 0.265733 Content Loss: 2.350008
+Style Loss : 0.264193 Content Loss: 2.346598
 
 
 
@@ -785,7 +781,7 @@ Finally, we can run the algorithm.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 36.658 seconds)
+   **Total running time of the script:** ( 0 minutes 36.240 seconds)
 
 
 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.597 seconds)
+   **Total running time of the script:** ( 0 minutes 0.589 seconds)
 
 
 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:

0 commit comments