Commit 72ce17d

Automated tutorials push
1 parent 7fa51c2 commit 72ce17d

File tree

177 files changed

+9899
-10314
lines changed


_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "08a56b87",
+"id": "f58659f7",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "6a1a1ff1",
+"id": "f682f082",
 "metadata": {},
 "source": [
 "\n",

_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "fa407add",
+"id": "62a8d9fa",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -47,7 +47,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "798a5677",
+"id": "8c389022",
 "metadata": {},
 "source": [
 "\n",

_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "70184db4",
+"id": "10f283be",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "fccaaa0c",
+"id": "5bd82f1b",
 "metadata": {},
 "source": [
 "\n",

_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "87401fce",
+"id": "44c5b791",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "30621f77",
+"id": "bef53218",
 "metadata": {},
 "source": [
 "\n",

_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "f2168b84",
+"id": "4936e0ae",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "31b0ca73",
+"id": "e0482226",
 "metadata": {},
 "source": [
 "\n",

_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "7e4da9f6",
+"id": "701273da",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "bbf7b3e9",
+"id": "9c992c0a",
 "metadata": {},
 "source": [
 "\n",

_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "6bc61fe1",
+"id": "a5b7eac5",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "5a8aecd1",
+"id": "5cb609e3",
 "metadata": {},
 "source": [
 "\n",

_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "81372e17",
+"id": "fb874688",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "65055fea",
+"id": "4b321f64",
 "metadata": {},
 "source": [
 "\n",

_images/sphx_glr_coding_ddpg_001.png

360 Bytes
-2.75 KB
333 Bytes
-4.55 KB
-5.39 KB
_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1632,26 +1632,26 @@ modules we need.
 
 
   0%| | 0/10000 [00:00<?, ?it/s]
-  8%|8 | 800/10000 [00:00<00:03, 2865.54it/s]
- 16%|#6 | 1600/10000 [00:01<00:10, 764.01it/s]
- 24%|##4 | 2400/10000 [00:02<00:06, 1129.65it/s]
- 32%|###2 | 3200/10000 [00:02<00:04, 1464.27it/s]
- 40%|#### | 4000/10000 [00:02<00:03, 1743.21it/s]
- 48%|####8 | 4800/10000 [00:03<00:02, 1980.60it/s]
- 56%|#####6 | 5600/10000 [00:03<00:02, 2168.96it/s]
- reward: -2.15 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-1.93/6.43, grad norm= 46.25, loss_value= 392.95, loss_actor= 12.56, target value: -13.07: 56%|#####6 | 5600/10000 [00:05<00:02, 2168.96it/s]
- reward: -2.15 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-1.93/6.43, grad norm= 46.25, loss_value= 392.95, loss_actor= 12.56, target value: -13.07: 64%|######4 | 6400/10000 [00:05<00:04, 770.32it/s]
- reward: -2.25 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-3.09/5.77, grad norm= 350.55, loss_value= 343.68, loss_actor= 15.60, target value: -20.42: 64%|######4 | 6400/10000 [00:07<00:04, 770.32it/s]
- reward: -2.25 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-3.09/5.77, grad norm= 350.55, loss_value= 343.68, loss_actor= 15.60, target value: -20.42: 72%|#######2 | 7200/10000 [00:08<00:05, 506.73it/s]
- reward: -4.05 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-2.29/5.78, grad norm= 123.61, loss_value= 323.63, loss_actor= 14.73, target value: -13.32: 72%|#######2 | 7200/10000 [00:10<00:05, 506.73it/s]
- reward: -4.05 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-2.29/5.78, grad norm= 123.61, loss_value= 323.63, loss_actor= 14.73, target value: -13.32: 80%|######## | 8000/10000 [00:11<00:04, 412.53it/s]
- reward: -5.09 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-3.16/5.30, grad norm= 159.55, loss_value= 223.76, loss_actor= 18.63, target value: -20.20: 80%|######## | 8000/10000 [00:13<00:04, 412.53it/s]
- reward: -5.09 (r0 = -2.13), reward eval: reward: -0.01, reward normalized=-3.16/5.30, grad norm= 159.55, loss_value= 223.76, loss_actor= 18.63, target value: -20.20: 88%|########8 | 8800/10000 [00:14<00:03, 363.29it/s]
- reward: -3.35 (r0 = -2.13), reward eval: reward: -2.23, reward normalized=-2.44/5.63, grad norm= 80.14, loss_value= 254.06, loss_actor= 16.65, target value: -17.43: 88%|########8 | 8800/10000 [00:17<00:03, 363.29it/s]
- reward: -3.35 (r0 = -2.13), reward eval: reward: -2.23, reward normalized=-2.44/5.63, grad norm= 80.14, loss_value= 254.06, loss_actor= 16.65, target value: -17.43: 96%|#########6| 9600/10000 [00:18<00:01, 291.92it/s]
- reward: -2.30 (r0 = -2.13), reward eval: reward: -2.23, reward normalized=-2.77/4.88, grad norm= 70.12, loss_value= 169.27, loss_actor= 15.45, target value: -19.94: 96%|#########6| 9600/10000 [00:19<00:01, 291.92it/s]
- reward: -2.30 (r0 = -2.13), reward eval: reward: -2.23, reward normalized=-2.77/4.88, grad norm= 70.12, loss_value= 169.27, loss_actor= 15.45, target value: -19.94: : 10400it [00:22, 257.27it/s]
- reward: -4.31 (r0 = -2.13), reward eval: reward: -2.23, reward normalized=-2.33/4.33, grad norm= 114.30, loss_value= 152.00, loss_actor= 13.96, target value: -15.90: : 10400it [00:24, 257.27it/s]
+  8%|8 | 800/10000 [00:00<00:03, 2656.70it/s]
+ 16%|#6 | 1600/10000 [00:02<00:11, 703.73it/s]
+ 24%|##4 | 2400/10000 [00:02<00:07, 1060.83it/s]
+ 32%|###2 | 3200/10000 [00:02<00:04, 1409.48it/s]
+ 40%|#### | 4000/10000 [00:02<00:03, 1720.27it/s]
+ 48%|####8 | 4800/10000 [00:03<00:02, 1989.46it/s]
+ 56%|#####6 | 5600/10000 [00:03<00:01, 2206.06it/s]
+ reward: -2.53 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-1.55/6.15, grad norm= 204.25, loss_value= 355.01, loss_actor= 12.59, target value: -9.48: 56%|#####6 | 5600/10000 [00:05<00:01, 2206.06it/s]
+ reward: -2.53 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-1.55/6.15, grad norm= 204.25, loss_value= 355.01, loss_actor= 12.59, target value: -9.48: 64%|######4 | 6400/10000 [00:05<00:04, 805.38it/s]
+ reward: -1.70 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-2.61/5.60, grad norm= 180.59, loss_value= 249.88, loss_actor= 12.86, target value: -16.44: 64%|######4 | 6400/10000 [00:07<00:04, 805.38it/s]
+ reward: -1.70 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-2.61/5.60, grad norm= 180.59, loss_value= 249.88, loss_actor= 12.86, target value: -16.44: 72%|#######2 | 7200/10000 [00:08<00:05, 510.78it/s]
+ reward: -4.61 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-2.26/5.27, grad norm= 179.66, loss_value= 241.11, loss_actor= 17.01, target value: -13.69: 72%|#######2 | 7200/10000 [00:10<00:05, 510.78it/s]
+ reward: -4.61 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-2.26/5.27, grad norm= 179.66, loss_value= 241.11, loss_actor= 17.01, target value: -13.69: 80%|######## | 8000/10000 [00:11<00:04, 411.41it/s]
+ reward: -5.24 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-2.27/5.40, grad norm= 58.48, loss_value= 215.63, loss_actor= 15.19, target value: -15.19: 80%|######## | 8000/10000 [00:13<00:04, 411.41it/s]
+ reward: -5.24 (r0 = -2.06), reward eval: reward: 0.00, reward normalized=-2.27/5.40, grad norm= 58.48, loss_value= 215.63, loss_actor= 15.19, target value: -15.19: 88%|########8 | 8800/10000 [00:13<00:03, 377.16it/s]
+ reward: -1.48 (r0 = -2.06), reward eval: reward: -3.87, reward normalized=-2.69/5.37, grad norm= 80.61, loss_value= 257.50, loss_actor= 14.64, target value: -19.63: 88%|########8 | 8800/10000 [00:17<00:03, 377.16it/s]
+ reward: -1.48 (r0 = -2.06), reward eval: reward: -3.87, reward normalized=-2.69/5.37, grad norm= 80.61, loss_value= 257.50, loss_actor= 14.64, target value: -19.63: 96%|#########6| 9600/10000 [00:18<00:01, 293.97it/s]
+ reward: -3.67 (r0 = -2.06), reward eval: reward: -3.87, reward normalized=-2.95/5.16, grad norm= 122.91, loss_value= 259.81, loss_actor= 18.23, target value: -20.63: 96%|#########6| 9600/10000 [00:19<00:01, 293.97it/s]
+ reward: -3.67 (r0 = -2.06), reward eval: reward: -3.87, reward normalized=-2.95/5.16, grad norm= 122.91, loss_value= 259.81, loss_actor= 18.23, target value: -20.63: : 10400it [00:21, 258.98it/s]
+ reward: -4.87 (r0 = -2.06), reward eval: reward: -3.87, reward normalized=-2.87/4.29, grad norm= 94.53, loss_value= 180.18, loss_actor= 19.29, target value: -19.89: : 10400it [00:24, 258.98it/s]
 
 
@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 28.395 seconds)
+   **Total running time of the script:** ( 0 minutes 28.224 seconds)
 
 
 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none
 
     loss: 5.167
-    elapsed time (seconds): 207.0
+    elapsed time (seconds): 204.4
     loss: 5.168
-    elapsed time (seconds): 117.5
+    elapsed time (seconds): 119.4
 
 
@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 33.228 seconds)
+   **Total running time of the script:** ( 5 minutes 32.552 seconds)
 
 
 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 40 additions & 39 deletions
@@ -410,38 +410,39 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
 
   0%| | 0.00/548M [00:00<?, ?B/s]
-  3%|3 | 16.8M/548M [00:00<00:03, 175MB/s]
-  6%|6 | 34.0M/548M [00:00<00:03, 178MB/s]
-  9%|9 | 51.2M/548M [00:00<00:02, 179MB/s]
- 13%|#2 | 68.6M/548M [00:00<00:02, 180MB/s]
- 16%|#5 | 85.9M/548M [00:00<00:02, 180MB/s]
- 19%|#8 | 103M/548M [00:00<00:02, 180MB/s]
- 22%|##1 | 120M/548M [00:00<00:02, 180MB/s]
- 25%|##5 | 138M/548M [00:00<00:02, 180MB/s]
- 28%|##8 | 155M/548M [00:00<00:02, 180MB/s]
- 31%|###1 | 172M/548M [00:01<00:02, 180MB/s]
- 35%|###4 | 190M/548M [00:01<00:02, 181MB/s]
- 38%|###7 | 207M/548M [00:01<00:01, 181MB/s]
- 41%|#### | 224M/548M [00:01<00:01, 181MB/s]
- 44%|####4 | 242M/548M [00:01<00:01, 182MB/s]
- 47%|####7 | 259M/548M [00:01<00:01, 182MB/s]
- 51%|##### | 277M/548M [00:01<00:01, 182MB/s]
- 54%|#####3 | 294M/548M [00:01<00:01, 182MB/s]
- 57%|#####6 | 312M/548M [00:01<00:01, 182MB/s]
- 60%|###### | 329M/548M [00:01<00:01, 182MB/s]
- 63%|######3 | 347M/548M [00:02<00:01, 182MB/s]
- 66%|######6 | 364M/548M [00:02<00:01, 182MB/s]
- 70%|######9 | 382M/548M [00:02<00:00, 182MB/s]
- 73%|#######2 | 400M/548M [00:02<00:00, 183MB/s]
- 76%|#######6 | 417M/548M [00:02<00:00, 177MB/s]
- 79%|#######9 | 434M/548M [00:02<00:00, 178MB/s]
- 82%|########2 | 452M/548M [00:02<00:00, 179MB/s]
- 86%|########5 | 469M/548M [00:02<00:00, 180MB/s]
- 89%|########8 | 486M/548M [00:02<00:00, 180MB/s]
- 92%|#########1| 504M/548M [00:02<00:00, 181MB/s]
- 95%|#########5| 521M/548M [00:03<00:00, 181MB/s]
- 98%|#########8| 538M/548M [00:03<00:00, 181MB/s]
-100%|##########| 548M/548M [00:03<00:00, 181MB/s]
+  3%|2 | 15.6M/548M [00:00<00:03, 163MB/s]
+  6%|5 | 31.5M/548M [00:00<00:03, 165MB/s]
+  9%|8 | 47.6M/548M [00:00<00:03, 167MB/s]
+ 12%|#1 | 64.6M/548M [00:00<00:02, 171MB/s]
+ 15%|#4 | 81.8M/548M [00:00<00:02, 174MB/s]
+ 18%|#8 | 98.8M/548M [00:00<00:02, 175MB/s]
+ 21%|##1 | 116M/548M [00:00<00:02, 176MB/s]
+ 24%|##4 | 133M/548M [00:00<00:02, 177MB/s]
+ 27%|##7 | 150M/548M [00:00<00:02, 177MB/s]
+ 30%|### | 167M/548M [00:01<00:02, 177MB/s]
+ 34%|###3 | 184M/548M [00:01<00:02, 177MB/s]
+ 37%|###6 | 201M/548M [00:01<00:02, 177MB/s]
+ 40%|###9 | 218M/548M [00:01<00:01, 177MB/s]
+ 43%|####2 | 235M/548M [00:01<00:01, 177MB/s]
+ 46%|####5 | 252M/548M [00:01<00:01, 177MB/s]
+ 49%|####9 | 269M/548M [00:01<00:01, 177MB/s]
+ 52%|#####2 | 286M/548M [00:01<00:01, 177MB/s]
+ 55%|#####5 | 303M/548M [00:01<00:01, 177MB/s]
+ 58%|#####8 | 320M/548M [00:01<00:01, 177MB/s]
+ 61%|######1 | 337M/548M [00:02<00:01, 177MB/s]
+ 65%|######4 | 354M/548M [00:02<00:01, 177MB/s]
+ 68%|######7 | 371M/548M [00:02<00:01, 177MB/s]
+ 71%|####### | 388M/548M [00:02<00:00, 177MB/s]
+ 74%|#######3 | 404M/548M [00:02<00:00, 177MB/s]
+ 77%|#######6 | 422M/548M [00:02<00:00, 177MB/s]
+ 80%|######## | 438M/548M [00:02<00:00, 177MB/s]
+ 83%|########3 | 456M/548M [00:02<00:00, 177MB/s]
+ 86%|########6 | 472M/548M [00:02<00:00, 177MB/s]
+ 89%|########9 | 490M/548M [00:02<00:00, 178MB/s]
+ 92%|#########2| 506M/548M [00:03<00:00, 178MB/s]
+ 96%|#########5| 524M/548M [00:03<00:00, 177MB/s]
+ 99%|#########8| 540M/548M [00:03<00:00, 177MB/s]
+100%|##########| 548M/548M [00:03<00:00, 176MB/s]
 
 
@@ -762,22 +763,22 @@ Finally, we can run the algorithm.
 
 Optimizing..
 run [50]:
-Style Loss : 4.149775 Content Loss: 4.157228
+Style Loss : 4.347193 Content Loss: 4.249933
 
 run [100]:
-Style Loss : 1.104719 Content Loss: 3.005375
+Style Loss : 1.185907 Content Loss: 3.055794
 
 run [150]:
-Style Loss : 0.700091 Content Loss: 2.638469
+Style Loss : 0.725471 Content Loss: 2.664198
 
 run [200]:
-Style Loss : 0.471553 Content Loss: 2.486511
+Style Loss : 0.489452 Content Loss: 2.500590
 
 run [250]:
-Style Loss : 0.344631 Content Loss: 2.400878
+Style Loss : 0.353058 Content Loss: 2.408744
 
 run [300]:
-Style Loss : 0.263820 Content Loss: 2.348750
+Style Loss : 0.268822 Content Loss: 2.351739
 
 
@@ -786,7 +787,7 @@ Finally, we can run the algorithm.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 36.749 seconds)
+   **Total running time of the script:** ( 0 minutes 36.782 seconds)
 
 
 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.575 seconds)
+   **Total running time of the script:** ( 0 minutes 0.566 seconds)
 
 
 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:

0 commit comments

Comments
 (0)