Commit 8c5c41c

Automated tutorials push
1 parent 7b9d39f commit 8c5c41c

File tree

188 files changed: +13393 −12583 lines


_images/sphx_glr_coding_ddpg_001.png

-138 Bytes

(Additional regenerated _images/*.png files, names hidden in the large-commit view: +1.71 KB, -93 Bytes, -115 Bytes, +63 Bytes, -180 Bytes, -755 Bytes, +5.89 KB, -2.33 KB, -77 Bytes, -253 Bytes, -3.85 KB)

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.

     0%| | 0/10000 [00:00<?, ?it/s]
-    8%|8 | 800/10000 [00:00<00:05, 1680.42it/s]
-   16%|#6 | 1600/10000 [00:03<00:17, 471.71it/s]
-   24%|##4 | 2400/10000 [00:03<00:11, 665.05it/s]
-   32%|###2 | 3200/10000 [00:04<00:07, 856.11it/s]
-   40%|#### | 4000/10000 [00:04<00:05, 1027.08it/s]
-   48%|####8 | 4800/10000 [00:05<00:04, 1166.85it/s]
-   56%|#####6 | 5600/10000 [00:05<00:03, 1277.85it/s]
-  reward: -2.85 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-2.54/6.66, grad norm= 155.46, loss_value= 513.11, loss_actor= 18.16, target value: -14.97: 56%|#####6 | 5600/10000 [00:06<00:03, 1277.85it/s]
-  reward: -2.85 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-2.54/6.66, grad norm= 155.46, loss_value= 513.11, loss_actor= 18.16, target value: -14.97: 64%|######4 | 6400/10000 [00:07<00:04, 849.73it/s]
-  reward: -0.17 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-2.64/5.64, grad norm= 91.66, loss_value= 263.18, loss_actor= 15.17, target value: -16.35: 64%|######4 | 6400/10000 [00:08<00:04, 849.73it/s]
-  reward: -0.17 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-2.64/5.64, grad norm= 91.66, loss_value= 263.18, loss_actor= 15.17, target value: -16.35: 72%|#######2 | 7200/10000 [00:08<00:03, 704.46it/s]
-  reward: -2.90 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-3.10/6.28, grad norm= 337.43, loss_value= 382.46, loss_actor= 14.87, target value: -19.26: 72%|#######2 | 7200/10000 [00:09<00:03, 704.46it/s]
-  reward: -2.90 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-3.10/6.28, grad norm= 337.43, loss_value= 382.46, loss_actor= 14.87, target value: -19.26: 80%|######## | 8000/10000 [00:10<00:03, 620.21it/s]
-  reward: -4.48 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-2.83/5.17, grad norm= 155.36, loss_value= 266.41, loss_actor= 16.28, target value: -18.26: 80%|######## | 8000/10000 [00:11<00:03, 620.21it/s]
-  reward: -4.48 (r0 = -3.05), reward eval: reward: 0.00, reward normalized=-2.83/5.17, grad norm= 155.36, loss_value= 266.41, loss_actor= 16.28, target value: -18.26: 88%|########8 | 8800/10000 [00:12<00:02, 582.56it/s]
-  reward: -5.43 (r0 = -3.05), reward eval: reward: -5.34, reward normalized=-2.46/5.01, grad norm= 233.46, loss_value= 207.46, loss_actor= 20.07, target value: -16.09: 88%|########8 | 8800/10000 [00:14<00:02, 582.56it/s]
-  reward: -5.43 (r0 = -3.05), reward eval: reward: -5.34, reward normalized=-2.46/5.01, grad norm= 233.46, loss_value= 207.46, loss_actor= 20.07, target value: -16.09: 96%|#########6| 9600/10000 [00:15<00:01, 392.16it/s]
-  reward: -5.40 (r0 = -3.05), reward eval: reward: -5.34, reward normalized=-3.72/5.35, grad norm= 273.07, loss_value= 350.02, loss_actor= 21.10, target value: -25.81: 96%|#########6| 9600/10000 [00:16<00:01, 392.16it/s]
-  reward: -5.40 (r0 = -3.05), reward eval: reward: -5.34, reward normalized=-3.72/5.35, grad norm= 273.07, loss_value= 350.02, loss_actor= 21.10, target value: -25.81: : 10400it [00:18, 360.77it/s]
-  reward: -4.90 (r0 = -3.05), reward eval: reward: -5.34, reward normalized=-4.18/4.47, grad norm= 120.83, loss_value= 254.43, loss_actor= 27.65, target value: -28.47: : 10400it [00:19, 360.77it/s]
+    8%|8 | 800/10000 [00:00<00:05, 1732.40it/s]
+   16%|#6 | 1600/10000 [00:02<00:17, 488.68it/s]
+   24%|##4 | 2400/10000 [00:03<00:10, 693.03it/s]
+   32%|###2 | 3200/10000 [00:04<00:07, 878.39it/s]
+   40%|#### | 4000/10000 [00:04<00:05, 1035.33it/s]
+   48%|####8 | 4800/10000 [00:05<00:04, 1177.04it/s]
+   56%|#####6 | 5600/10000 [00:05<00:03, 1283.39it/s]
+  reward: -2.12 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.19/6.34, grad norm= 169.82, loss_value= 407.42, loss_actor= 15.43, target value: -13.32: 56%|#####6 | 5600/10000 [00:06<00:03, 1283.39it/s]
+  reward: -2.12 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.19/6.34, grad norm= 169.82, loss_value= 407.42, loss_actor= 15.43, target value: -13.32: 64%|######4 | 6400/10000 [00:07<00:04, 877.86it/s]
+  reward: -0.22 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.38/5.77, grad norm= 73.76, loss_value= 250.94, loss_actor= 14.13, target value: -16.05: 64%|######4 | 6400/10000 [00:07<00:04, 877.86it/s]
+  reward: -0.22 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.38/5.77, grad norm= 73.76, loss_value= 250.94, loss_actor= 14.13, target value: -16.05: 72%|#######2 | 7200/10000 [00:08<00:03, 723.38it/s]
+  reward: -3.28 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.41/5.98, grad norm= 135.06, loss_value= 295.38, loss_actor= 13.13, target value: -15.44: 72%|#######2 | 7200/10000 [00:09<00:03, 723.38it/s]
+  reward: -3.28 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.41/5.98, grad norm= 135.06, loss_value= 295.38, loss_actor= 13.13, target value: -15.44: 80%|######## | 8000/10000 [00:10<00:03, 642.40it/s]
+  reward: -4.93 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.81/5.28, grad norm= 257.19, loss_value= 257.16, loss_actor= 14.93, target value: -18.10: 80%|######## | 8000/10000 [00:10<00:03, 642.40it/s]
+  reward: -4.93 (r0 = -2.92), reward eval: reward: -0.01, reward normalized=-2.81/5.28, grad norm= 257.19, loss_value= 257.16, loss_actor= 14.93, target value: -18.10: 88%|########8 | 8800/10000 [00:11<00:02, 598.47it/s]
+  reward: -4.99 (r0 = -2.92), reward eval: reward: -5.87, reward normalized=-3.09/5.10, grad norm= 126.61, loss_value= 224.68, loss_actor= 19.46, target value: -20.96: 88%|########8 | 8800/10000 [00:14<00:02, 598.47it/s]
+  reward: -4.99 (r0 = -2.92), reward eval: reward: -5.87, reward normalized=-3.09/5.10, grad norm= 126.61, loss_value= 224.68, loss_actor= 19.46, target value: -20.96: 96%|#########6| 9600/10000 [00:15<00:00, 404.31it/s]
+  reward: -5.12 (r0 = -2.92), reward eval: reward: -5.87, reward normalized=-2.99/5.42, grad norm= 104.60, loss_value= 239.53, loss_actor= 20.24, target value: -21.99: 96%|#########6| 9600/10000 [00:15<00:00, 404.31it/s]
+  reward: -5.12 (r0 = -2.92), reward eval: reward: -5.87, reward normalized=-2.99/5.42, grad norm= 104.60, loss_value= 239.53, loss_actor= 20.24, target value: -21.99: : 10400it [00:18, 356.24it/s]
+  reward: -4.86 (r0 = -2.92), reward eval: reward: -5.87, reward normalized=-3.77/4.27, grad norm= 89.54, loss_value= 168.68, loss_actor= 23.35, target value: -25.89: : 10400it [00:18, 356.24it/s]
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:

  .. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 29.113 seconds)
+ **Total running time of the script:** ( 0 minutes 28.909 seconds)

  .. _sphx_glr_download_advanced_coding_ddpg.py:
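
The ``loss_value`` and ``loss_actor`` columns in the progress log above are the two DDPG objectives. A minimal plain-PyTorch sketch of how these are conventionally computed — the tutorial itself builds them with TorchRL's loss modules, and the network names and sizes below are illustrative assumptions, not the tutorial's code:

.. code-block:: python

   import torch
   import torch.nn as nn

   obs_dim, act_dim, gamma = 11, 1, 0.99  # hypothetical sizes and discount

   def mlp(inp, out):
       # Stand-in for the tutorial's TorchRL actor/value networks.
       return nn.Sequential(nn.Linear(inp, 64), nn.Tanh(), nn.Linear(64, out))

   actor, critic = mlp(obs_dim, act_dim), mlp(obs_dim + act_dim, 1)
   target_actor, target_critic = mlp(obs_dim, act_dim), mlp(obs_dim + act_dim, 1)
   target_actor.load_state_dict(actor.state_dict())
   target_critic.load_state_dict(critic.state_dict())

   # A batch as it would come out of the replay buffer.
   obs, act = torch.randn(32, obs_dim), torch.randn(32, act_dim)
   reward, next_obs = torch.randn(32, 1), torch.randn(32, obs_dim)

   # Value loss: TD target computed with the (frozen) target networks.
   with torch.no_grad():
       next_q = target_critic(torch.cat([next_obs, target_actor(next_obs)], -1))
       td_target = reward + gamma * next_q
   loss_value = nn.functional.mse_loss(critic(torch.cat([obs, act], -1)), td_target)

   # Actor loss: ascend Q(s, pi(s)) by minimizing its negative.
   loss_actor = -critic(torch.cat([obs, actor(obs)], -1)).mean()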

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.

  .. code-block:: none

   loss: 5.167
-  elapsed time (seconds): 204.0
+  elapsed time (seconds): 199.4
   loss: 5.168
-  elapsed time (seconds): 116.6
+  elapsed time (seconds): 114.7
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

  .. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 5 minutes 31.500 seconds)
+ **Total running time of the script:** ( 5 minutes 24.982 seconds)

  .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
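
The two loss/time pairs above compare the tutorial's float LSTM word-language model against its dynamically quantized counterpart. A minimal sketch of the conversion step, using a hypothetical stand-in model since the tutorial's own model and corpus are not reproduced in this diff:

.. code-block:: python

   import torch
   import torch.nn as nn

   class WordLM(nn.Module):
       """Hypothetical stand-in for the tutorial's LSTM language model."""
       def __init__(self, vocab=10000, emb=256, hidden=256):
           super().__init__()
           self.embed = nn.Embedding(vocab, emb)
           self.lstm = nn.LSTM(emb, hidden)
           self.decode = nn.Linear(hidden, vocab)

       def forward(self, tokens):
           out, _ = self.lstm(self.embed(tokens))
           return self.decode(out)

   model = WordLM().eval()

   # Dynamic quantization: int8 weights for the listed module types;
   # activations are quantized on the fly at inference time.
   qmodel = torch.quantization.quantize_dynamic(
       model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
   )

   tokens = torch.randint(0, 10000, (35, 1))  # (seq_len, batch)
   with torch.no_grad():
       print(qmodel(tokens).shape)  # same float API, faster CPU inference

This is consistent with the numbers in the diff: the loss is essentially unchanged (5.167 vs 5.168) because only weights are converted, while single-threaded inference time roughly halves.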

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 33 additions & 39 deletions
@@ -410,38 +410,32 @@ network to evaluation mode using ``.eval()``.

  Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

    0%| | 0.00/548M [00:00<?, ?B/s]
-   3%|3 | 16.5M/548M [00:00<00:03, 173MB/s]
-   6%|6 | 33.5M/548M [00:00<00:03, 176MB/s]
-   9%|9 | 50.5M/548M [00:00<00:02, 177MB/s]
-  12%|#2 | 67.8M/548M [00:00<00:02, 178MB/s]
-  15%|#5 | 84.9M/548M [00:00<00:02, 178MB/s]
-  19%|#8 | 102M/548M [00:00<00:02, 179MB/s]
-  22%|##1 | 119M/548M [00:00<00:02, 179MB/s]
-  25%|##4 | 136M/548M [00:00<00:02, 179MB/s]
-  28%|##8 | 154M/548M [00:00<00:02, 179MB/s]
-  31%|###1 | 171M/548M [00:01<00:02, 179MB/s]
-  34%|###4 | 188M/548M [00:01<00:02, 179MB/s]
-  37%|###7 | 205M/548M [00:01<00:02, 179MB/s]
-  41%|#### | 222M/548M [00:01<00:01, 179MB/s]
-  44%|####3 | 239M/548M [00:01<00:01, 179MB/s]
-  47%|####6 | 256M/548M [00:01<00:01, 179MB/s]
-  50%|####9 | 274M/548M [00:01<00:01, 179MB/s]
-  53%|#####3 | 291M/548M [00:01<00:01, 179MB/s]
-  56%|#####6 | 308M/548M [00:01<00:01, 180MB/s]
-  59%|#####9 | 325M/548M [00:01<00:01, 180MB/s]
-  63%|######2 | 343M/548M [00:02<00:01, 180MB/s]
-  66%|######5 | 360M/548M [00:02<00:01, 179MB/s]
-  69%|######8 | 377M/548M [00:02<00:01, 179MB/s]
-  72%|#######1 | 394M/548M [00:02<00:00, 179MB/s]
-  75%|#######5 | 411M/548M [00:02<00:00, 179MB/s]
-  78%|#######8 | 428M/548M [00:02<00:00, 179MB/s]
-  81%|########1 | 446M/548M [00:02<00:00, 179MB/s]
-  84%|########4 | 463M/548M [00:02<00:00, 179MB/s]
-  88%|########7 | 480M/548M [00:02<00:00, 178MB/s]
-  91%|######### | 498M/548M [00:02<00:00, 181MB/s]
-  94%|#########4| 516M/548M [00:03<00:00, 184MB/s]
-  98%|#########7| 534M/548M [00:03<00:00, 186MB/s]
- 100%|##########| 548M/548M [00:03<00:00, 180MB/s]
+   4%|3 | 20.9M/548M [00:00<00:02, 218MB/s]
+   8%|7 | 42.5M/548M [00:00<00:02, 223MB/s]
+  12%|#1 | 64.1M/548M [00:00<00:02, 224MB/s]
+  16%|#5 | 85.8M/548M [00:00<00:02, 225MB/s]
+  20%|#9 | 107M/548M [00:00<00:02, 225MB/s]
+  24%|##3 | 129M/548M [00:00<00:01, 226MB/s]
+  28%|##7 | 151M/548M [00:00<00:01, 226MB/s]
+  31%|###1 | 172M/548M [00:00<00:01, 226MB/s]
+  35%|###5 | 194M/548M [00:00<00:01, 219MB/s]
+  39%|###9 | 216M/548M [00:01<00:01, 221MB/s]
+  43%|####3 | 237M/548M [00:01<00:01, 223MB/s]
+  47%|####7 | 259M/548M [00:01<00:01, 224MB/s]
+  51%|#####1 | 281M/548M [00:01<00:01, 225MB/s]
+  55%|#####5 | 302M/548M [00:01<00:01, 225MB/s]
+  59%|#####9 | 324M/548M [00:01<00:01, 226MB/s]
+  63%|######3 | 346M/548M [00:01<00:00, 226MB/s]
+  67%|######7 | 367M/548M [00:01<00:00, 226MB/s]
+  71%|####### | 389M/548M [00:01<00:00, 226MB/s]
+  75%|#######4 | 411M/548M [00:01<00:00, 226MB/s]
+  79%|#######8 | 432M/548M [00:02<00:00, 226MB/s]
+  83%|########2 | 454M/548M [00:02<00:00, 226MB/s]
+  87%|########6 | 476M/548M [00:02<00:00, 226MB/s]
+  91%|######### | 498M/548M [00:02<00:00, 227MB/s]
+  95%|#########4| 519M/548M [00:02<00:00, 227MB/s]
+  99%|#########8| 541M/548M [00:02<00:00, 227MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 225MB/s]
@@ -762,22 +756,22 @@ Finally, we can run the algorithm.

  Optimizing..
  run [50]:
- Style Loss : 3.992533 Content Loss: 4.103969
+ Style Loss : 4.038539 Content Loss: 4.104365

  run [100]:
- Style Loss : 1.138738 Content Loss: 3.028159
+ Style Loss : 1.119395 Content Loss: 3.013746

  run [150]:
- Style Loss : 0.703683 Content Loss: 2.647084
+ Style Loss : 0.697405 Content Loss: 2.644443

  run [200]:
- Style Loss : 0.469144 Content Loss: 2.487169
+ Style Loss : 0.471228 Content Loss: 2.485675

  run [250]:
- Style Loss : 0.344788 Content Loss: 2.401090
+ Style Loss : 0.343578 Content Loss: 2.401301

  run [300]:
- Style Loss : 0.262716 Content Loss: 2.349406
+ Style Loss : 0.261325 Content Loss: 2.348558
@@ -786,7 +780,7 @@ Finally, we can run the algorithm.

  .. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 39.124 seconds)
+ **Total running time of the script:** ( 0 minutes 38.355 seconds)

  .. _sphx_glr_download_advanced_neural_style_tutorial.py:
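
The download progress diffed above is the VGG19 checkpoint (``vgg19-dcbb9e9d.pth``, ~548M) that the tutorial fetches before switching the network to evaluation mode using ``.eval()``, as the hunk header quotes. A minimal sketch of that step; the ``weights`` enum is the current torchvision idiom and an assumption about the tutorial's exact call:

.. code-block:: python

   import torch
   from torchvision import models

   # First use downloads vgg19-dcbb9e9d.pth into the torch hub cache --
   # the ~548M transfer whose progress bar appears in the diff above.
   vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()

   # Style transfer optimizes the input image, not the network weights.
   for p in vgg.parameters():
       p.requires_grad_(False)

   img = torch.rand(1, 3, 128, 128)  # dummy input image
   with torch.no_grad():
       print(vgg(img).shape)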
