Commit 0cfccf2

Automated tutorials push
1 parent 305101b commit 0cfccf2

File tree

183 files changed (+10599 / -9859 lines)


_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "ed0fac73",
+"id": "d3c44a3b",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "1fac403b",
+"id": "cadbfc27",
 "metadata": {},
 "source": [
 "\n",

_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "cca9e040",
+"id": "545968d6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -47,7 +47,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "d7537f93",
+"id": "b9c43220",
 "metadata": {},
 "source": [
 "\n",

_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "f605c109",
+"id": "dfe3ad0c",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "385743f6",
+"id": "54349c43",
 "metadata": {},
 "source": [
 "\n",

_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "6e91dc77",
+"id": "08989ae5",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "c4fcb891",
+"id": "56db2c22",
 "metadata": {},
 "source": [
 "\n",

_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "da53bf76",
+"id": "d51cb2b4",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "e5e459fa",
+"id": "99578330",
 "metadata": {},
 "source": [
 "\n",

_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "71cf7b6f",
+"id": "e8ab2e66",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "5df4ed87",
+"id": "821eef43",
 "metadata": {},
 "source": [
 "\n",

_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "16911d73",
+"id": "2fc67b48",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "5bab964e",
+"id": "e948beaa",
 "metadata": {},
 "source": [
 "\n",

_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "da367d56",
+"id": "109c2d90",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "a15c695d",
+"id": "655c25fd",
 "metadata": {},
 "source": [
 "\n",
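Every notebook diff above touches only cell ``id`` fields. This churn is expected for an automated push: nbformat 4.5+ requires a per-cell identifier, and if the regeneration pipeline assigns ids randomly, re-running the same conversion yields a nonempty diff every time. A minimal sketch of that mechanism, assuming random 8-hex-character ids (the helper names are hypothetical; the actual tutorial-build tooling has its own id logic):

```python
import json
import uuid

def random_cell_id() -> str:
    # Hypothetical helper: an 8-hex-char id, the same shape as the
    # ids in the diffs above (e.g. "ed0fac73" replaced by "d3c44a3b").
    return uuid.uuid4().hex[:8]

def regenerate_ids(notebook: dict) -> dict:
    # Assign a fresh random id to every cell. Two runs over an
    # identical notebook therefore almost always differ, which is
    # exactly the 2-additions/2-deletions pattern seen per file.
    for cell in notebook["cells"]:
        cell["id"] = random_cell_id()
    return notebook

nb = {"cells": [{"cell_type": "code", "id": "ed0fac73",
                 "metadata": {}, "outputs": [], "source": []}]}
regenerate_ids(nb)
print(json.dumps(nb["cells"][0]))
```

Deriving the id deterministically instead (for example, from a hash of the cell source) would remove this diff noise from automated pushes.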

_images/sphx_glr_coding_ddpg_001.png (and several other regenerated plot images; binary diffs with size deltas between 316 Bytes and 5.13 KB)

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1632,26 +1632,26 @@ modules we need.


 0%| | 0/10000 [00:00<?, ?it/s]
-8%|8 | 800/10000 [00:00<00:03, 2906.66it/s]
-16%|#6 | 1600/10000 [00:01<00:10, 765.70it/s]
-24%|##4 | 2400/10000 [00:02<00:06, 1154.25it/s]
-32%|###2 | 3200/10000 [00:02<00:04, 1518.98it/s]
-40%|#### | 4000/10000 [00:02<00:03, 1834.60it/s]
-48%|####8 | 4800/10000 [00:02<00:02, 2108.97it/s]
-56%|#####6 | 5600/10000 [00:03<00:01, 2327.16it/s]
-reward: -2.19 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.80/6.29, grad norm= 307.70, loss_value= 439.06, loss_actor= 12.75, target value: -18.85: 56%|#####6 | 5600/10000 [00:05<00:01, 2327.16it/s]
-reward: -2.19 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.80/6.29, grad norm= 307.70, loss_value= 439.06, loss_actor= 12.75, target value: -18.85: 64%|######4 | 6400/10000 [00:05<00:04, 788.25it/s]
-reward: -1.84 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.67/5.37, grad norm= 113.56, loss_value= 254.95, loss_actor= 13.86, target value: -16.73: 64%|######4 | 6400/10000 [00:07<00:04, 788.25it/s]
-reward: -1.84 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.67/5.37, grad norm= 113.56, loss_value= 254.95, loss_actor= 13.86, target value: -16.73: 72%|#######2 | 7200/10000 [00:08<00:05, 507.75it/s]
-reward: -3.68 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.43/5.39, grad norm= 109.65, loss_value= 275.44, loss_actor= 15.73, target value: -14.91: 72%|#######2 | 7200/10000 [00:10<00:05, 507.75it/s]
-reward: -3.68 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.43/5.39, grad norm= 109.65, loss_value= 275.44, loss_actor= 15.73, target value: -14.91: 80%|######## | 8000/10000 [00:11<00:04, 410.42it/s]
-reward: -3.46 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.96/4.69, grad norm= 48.92, loss_value= 169.77, loss_actor= 18.82, target value: -19.16: 80%|######## | 8000/10000 [00:13<00:04, 410.42it/s]
-reward: -3.46 (r0 = -1.89), reward eval: reward: -0.01, reward normalized=-2.96/4.69, grad norm= 48.92, loss_value= 169.77, loss_actor= 18.82, target value: -19.16: 88%|########8 | 8800/10000 [00:14<00:03, 362.09it/s]
-reward: -5.27 (r0 = -1.89), reward eval: reward: -4.71, reward normalized=-3.01/5.13, grad norm= 64.85, loss_value= 192.72, loss_actor= 19.04, target value: -19.81: 88%|########8 | 8800/10000 [00:17<00:03, 362.09it/s]
-reward: -5.27 (r0 = -1.89), reward eval: reward: -4.71, reward normalized=-3.01/5.13, grad norm= 64.85, loss_value= 192.72, loss_actor= 19.04, target value: -19.81: 96%|#########6| 9600/10000 [00:18<00:01, 290.75it/s]
-reward: -4.04 (r0 = -1.89), reward eval: reward: -4.71, reward normalized=-2.63/4.39, grad norm= 70.28, loss_value= 162.83, loss_actor= 15.91, target value: -19.09: 96%|#########6| 9600/10000 [00:19<00:01, 290.75it/s]
-reward: -4.04 (r0 = -1.89), reward eval: reward: -4.71, reward normalized=-2.63/4.39, grad norm= 70.28, loss_value= 162.83, loss_actor= 15.91, target value: -19.09: : 10400it [00:21, 257.96it/s]
-reward: -7.61 (r0 = -1.89), reward eval: reward: -4.71, reward normalized=-3.05/5.54, grad norm= 193.22, loss_value= 325.05, loss_actor= 19.07, target value: -19.84: : 10400it [00:24, 257.96it/s]
+8%|8 | 800/10000 [00:00<00:03, 2743.96it/s]
+16%|#6 | 1600/10000 [00:01<00:11, 763.17it/s]
+24%|##4 | 2400/10000 [00:02<00:06, 1135.11it/s]
+32%|###2 | 3200/10000 [00:02<00:04, 1499.19it/s]
+40%|#### | 4000/10000 [00:02<00:03, 1815.64it/s]
+48%|####8 | 4800/10000 [00:02<00:02, 2086.49it/s]
+56%|#####6 | 5600/10000 [00:03<00:01, 2296.08it/s]
+reward: -1.88 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-1.03/6.27, grad norm= 311.07, loss_value= 446.30, loss_actor= 12.66, target value: -6.03: 56%|#####6 | 5600/10000 [00:05<00:01, 2296.08it/s]
+reward: -1.88 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-1.03/6.27, grad norm= 311.07, loss_value= 446.30, loss_actor= 12.66, target value: -6.03: 64%|######4 | 6400/10000 [00:05<00:04, 746.08it/s]
+reward: -2.00 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.21/5.59, grad norm= 95.63, loss_value= 265.29, loss_actor= 13.03, target value: -14.48: 64%|######4 | 6400/10000 [00:07<00:04, 746.08it/s]
+reward: -2.00 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.21/5.59, grad norm= 95.63, loss_value= 265.29, loss_actor= 13.03, target value: -14.48: 72%|#######2 | 7200/10000 [00:08<00:05, 493.59it/s]
+reward: -4.82 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.49/5.17, grad norm= 53.90, loss_value= 204.50, loss_actor= 14.36, target value: -15.45: 72%|#######2 | 7200/10000 [00:10<00:05, 493.59it/s]
+reward: -4.82 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.49/5.17, grad norm= 53.90, loss_value= 204.50, loss_actor= 14.36, target value: -15.45: 80%|######## | 8000/10000 [00:11<00:04, 401.43it/s]
+reward: -5.29 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.87/5.49, grad norm= 188.91, loss_value= 289.90, loss_actor= 17.97, target value: -19.30: 80%|######## | 8000/10000 [00:13<00:04, 401.43it/s]
+reward: -5.29 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.87/5.49, grad norm= 188.91, loss_value= 289.90, loss_actor= 17.97, target value: -19.30: 88%|########8 | 8800/10000 [00:14<00:03, 356.31it/s]
+reward: -3.86 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-2.50/5.87, grad norm= 115.25, loss_value= 303.35, loss_actor= 18.57, target value: -17.39: 88%|########8 | 8800/10000 [00:17<00:03, 356.31it/s]
+reward: -3.86 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-2.50/5.87, grad norm= 115.25, loss_value= 303.35, loss_actor= 18.57, target value: -17.39: 96%|#########6| 9600/10000 [00:18<00:01, 287.48it/s]
+reward: -4.78 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-3.09/4.74, grad norm= 56.83, loss_value= 192.36, loss_actor= 19.61, target value: -21.98: 96%|#########6| 9600/10000 [00:20<00:01, 287.48it/s]
+reward: -4.78 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-3.09/4.74, grad norm= 56.83, loss_value= 192.36, loss_actor= 19.61, target value: -21.98: : 10400it [00:22, 254.71it/s]
+reward: -5.00 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-3.24/4.47, grad norm= 111.75, loss_value= 225.29, loss_actor= 17.31, target value: -21.10: : 10400it [00:24, 254.71it/s]



@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 28.285 seconds)
+**Total running time of the script:** ( 0 minutes 28.685 seconds)


 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none

 loss: 5.167
-elapsed time (seconds): 202.4
+elapsed time (seconds): 210.7
 loss: 5.168
-elapsed time (seconds): 118.4
+elapsed time (seconds): 120.7


@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 29.609 seconds)
+**Total running time of the script:** ( 5 minutes 40.322 seconds)


 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 38 additions & 41 deletions
@@ -410,40 +410,37 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

 0%| | 0.00/548M [00:00<?, ?B/s]
-3%|2 | 15.6M/548M [00:00<00:03, 164MB/s]
-6%|5 | 31.9M/548M [00:00<00:03, 167MB/s]
-9%|8 | 48.4M/548M [00:00<00:03, 169MB/s]
-12%|#1 | 64.9M/548M [00:00<00:02, 171MB/s]
-15%|#4 | 81.4M/548M [00:00<00:02, 171MB/s]
-18%|#7 | 97.9M/548M [00:00<00:02, 171MB/s]
-21%|## | 114M/548M [00:00<00:02, 172MB/s]
-24%|##3 | 131M/548M [00:00<00:02, 172MB/s]
-27%|##6 | 148M/548M [00:00<00:02, 172MB/s]
-30%|##9 | 164M/548M [00:01<00:02, 173MB/s]
-33%|###2 | 181M/548M [00:01<00:02, 173MB/s]
-36%|###5 | 197M/548M [00:01<00:02, 172MB/s]
-39%|###9 | 214M/548M [00:01<00:02, 173MB/s]
-42%|####2 | 230M/548M [00:01<00:01, 173MB/s]
-45%|####5 | 247M/548M [00:01<00:01, 173MB/s]
-48%|####8 | 264M/548M [00:01<00:01, 172MB/s]
-51%|#####1 | 280M/548M [00:01<00:01, 172MB/s]
-54%|#####4 | 296M/548M [00:01<00:01, 172MB/s]
-57%|#####7 | 313M/548M [00:01<00:01, 173MB/s]
-60%|###### | 330M/548M [00:02<00:01, 173MB/s]
-63%|######3 | 346M/548M [00:02<00:01, 173MB/s]
-66%|######6 | 363M/548M [00:02<00:01, 173MB/s]
-69%|######9 | 380M/548M [00:02<00:01, 173MB/s]
-72%|#######2 | 396M/548M [00:02<00:00, 173MB/s]
-75%|#######5 | 413M/548M [00:02<00:00, 173MB/s]
-78%|#######8 | 430M/548M [00:02<00:00, 173MB/s]
-81%|########1 | 446M/548M [00:02<00:00, 173MB/s]
-84%|########4 | 463M/548M [00:02<00:00, 173MB/s]
-87%|########7 | 480M/548M [00:02<00:00, 173MB/s]
-91%|######### | 496M/548M [00:03<00:00, 173MB/s]
-94%|#########3| 513M/548M [00:03<00:00, 173MB/s]
-97%|#########6| 529M/548M [00:03<00:00, 174MB/s]
-100%|#########9| 546M/548M [00:03<00:00, 174MB/s]
-100%|##########| 548M/548M [00:03<00:00, 173MB/s]
+3%|3 | 17.6M/548M [00:00<00:03, 184MB/s]
+7%|6 | 35.6M/548M [00:00<00:02, 186MB/s]
+10%|9 | 53.8M/548M [00:00<00:02, 188MB/s]
+13%|#3 | 72.0M/548M [00:00<00:02, 189MB/s]
+16%|#6 | 90.1M/548M [00:00<00:02, 189MB/s]
+20%|#9 | 108M/548M [00:00<00:02, 189MB/s]
+23%|##3 | 126M/548M [00:00<00:02, 189MB/s]
+26%|##6 | 144M/548M [00:00<00:02, 189MB/s]
+30%|##9 | 163M/548M [00:00<00:02, 190MB/s]
+33%|###3 | 181M/548M [00:01<00:02, 190MB/s]
+36%|###6 | 199M/548M [00:01<00:01, 190MB/s]
+40%|###9 | 217M/548M [00:01<00:01, 190MB/s]
+43%|####2 | 236M/548M [00:01<00:01, 190MB/s]
+46%|####6 | 254M/548M [00:01<00:01, 190MB/s]
+50%|####9 | 272M/548M [00:01<00:01, 190MB/s]
+53%|#####2 | 290M/548M [00:01<00:01, 190MB/s]
+56%|#####6 | 309M/548M [00:01<00:01, 190MB/s]
+60%|#####9 | 327M/548M [00:01<00:01, 190MB/s]
+63%|######2 | 345M/548M [00:01<00:01, 190MB/s]
+66%|######6 | 363M/548M [00:02<00:01, 190MB/s]
+70%|######9 | 382M/548M [00:02<00:00, 190MB/s]
+73%|#######2 | 400M/548M [00:02<00:00, 190MB/s]
+76%|#######6 | 418M/548M [00:02<00:00, 190MB/s]
+80%|#######9 | 436M/548M [00:02<00:00, 190MB/s]
+83%|########2 | 454M/548M [00:02<00:00, 189MB/s]
+86%|########6 | 472M/548M [00:02<00:00, 189MB/s]
+89%|########9 | 490M/548M [00:02<00:00, 188MB/s]
+93%|#########2| 508M/548M [00:02<00:00, 189MB/s]
+96%|#########6| 527M/548M [00:02<00:00, 189MB/s]
+99%|#########9| 545M/548M [00:03<00:00, 189MB/s]
+100%|##########| 548M/548M [00:03<00:00, 189MB/s]



@@ -764,22 +761,22 @@ Finally, we can run the algorithm.

 Optimizing..
 run [50]:
-Style Loss : 3.975973 Content Loss: 4.121006
+Style Loss : 4.251205 Content Loss: 4.215905

 run [100]:
-Style Loss : 1.140689 Content Loss: 3.033468
+Style Loss : 1.166493 Content Loss: 3.052896

 run [150]:
-Style Loss : 0.720279 Content Loss: 2.653354
+Style Loss : 0.716793 Content Loss: 2.662554

 run [200]:
-Style Loss : 0.492001 Content Loss: 2.500135
+Style Loss : 0.478618 Content Loss: 2.496649

 run [250]:
-Style Loss : 0.353916 Content Loss: 2.407208
+Style Loss : 0.348233 Content Loss: 2.406508

 run [300]:
-Style Loss : 0.270373 Content Loss: 2.350080
+Style Loss : 0.266141 Content Loss: 2.351221


@@ -788,7 +785,7 @@ Finally, we can run the algorithm.

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 36.929 seconds)
+**Total running time of the script:** ( 0 minutes 36.865 seconds)


 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.566 seconds)
+**Total running time of the script:** ( 0 minutes 0.591 seconds)


 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
