
Commit a4395c8

Automated tutorials push
1 parent e6b5935 commit a4395c8

File tree: 188 files changed (+13471 / -12323 lines)
_images/sphx_glr_coding_ddpg_001.png and other regenerated tutorial images: binary files changed (size deltas ranging from -7.98 KB to +7.08 KB; previews not shown).

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.

   0%| | 0/10000 [00:00<?, ?it/s]
-  8%|8 | 800/10000 [00:00<00:05, 1635.05it/s]
- 16%|#6 | 1600/10000 [00:02<00:17, 488.67it/s]
- 24%|##4 | 2400/10000 [00:03<00:10, 693.90it/s]
- 32%|###2 | 3200/10000 [00:03<00:07, 898.92it/s]
- 40%|#### | 4000/10000 [00:04<00:05, 1078.60it/s]
- 48%|####8 | 4800/10000 [00:04<00:04, 1235.64it/s]
- 56%|#####6 | 5600/10000 [00:05<00:03, 1360.94it/s]
- reward: -2.67 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.24/6.42, grad norm= 121.21, loss_value= 442.12, loss_actor= 15.34, target value: -14.76: 56%|#####6 | 5600/10000 [00:06<00:03, 1360.94it/s]
- reward: -2.67 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.24/6.42, grad norm= 121.21, loss_value= 442.12, loss_actor= 15.34, target value: -14.76: 64%|######4 | 6400/10000 [00:06<00:03, 932.78it/s]
- reward: -0.15 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.68/5.62, grad norm= 95.21, loss_value= 284.37, loss_actor= 15.45, target value: -17.78: 64%|######4 | 6400/10000 [00:07<00:03, 932.78it/s]
- reward: -0.15 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.68/5.62, grad norm= 95.21, loss_value= 284.37, loss_actor= 15.45, target value: -17.78: 72%|#######2 | 7200/10000 [00:08<00:03, 747.27it/s]
- reward: -3.37 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.59/6.10, grad norm= 74.55, loss_value= 307.46, loss_actor= 14.67, target value: -15.21: 72%|#######2 | 7200/10000 [00:09<00:03, 747.27it/s]
- reward: -3.37 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.59/6.10, grad norm= 74.55, loss_value= 307.46, loss_actor= 14.67, target value: -15.21: 80%|######## | 8000/10000 [00:09<00:03, 657.24it/s]
- reward: -4.37 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.57/4.64, grad norm= 48.72, loss_value= 192.21, loss_actor= 17.71, target value: -17.12: 80%|######## | 8000/10000 [00:10<00:03, 657.24it/s]
- reward: -4.37 (r0 = -3.61), reward eval: reward: -0.01, reward normalized=-2.57/4.64, grad norm= 48.72, loss_value= 192.21, loss_actor= 17.71, target value: -17.12: 88%|########8 | 8800/10000 [00:11<00:01, 608.99it/s]
- reward: -5.41 (r0 = -3.61), reward eval: reward: -5.49, reward normalized=-2.94/5.05, grad norm= 121.34, loss_value= 215.46, loss_actor= 20.00, target value: -19.67: 88%|########8 | 8800/10000 [00:14<00:01, 608.99it/s]
- reward: -5.41 (r0 = -3.61), reward eval: reward: -5.49, reward normalized=-2.94/5.05, grad norm= 121.34, loss_value= 215.46, loss_actor= 20.00, target value: -19.67: 96%|#########6| 9600/10000 [00:14<00:00, 410.18it/s]
- reward: -4.46 (r0 = -3.61), reward eval: reward: -5.49, reward normalized=-3.61/5.30, grad norm= 308.88, loss_value= 336.67, loss_actor= 19.23, target value: -26.05: 96%|#########6| 9600/10000 [00:15<00:00, 410.18it/s]
- reward: -4.46 (r0 = -3.61), reward eval: reward: -5.49, reward normalized=-3.61/5.30, grad norm= 308.88, loss_value= 336.67, loss_actor= 19.23, target value: -26.05: : 10400it [00:17, 363.84it/s]
- reward: -4.49 (r0 = -3.61), reward eval: reward: -5.49, reward normalized=-3.30/3.94, grad norm= 92.94, loss_value= 152.10, loss_actor= 24.58, target value: -23.26: : 10400it [00:18, 363.84it/s]
+  8%|8 | 800/10000 [00:00<00:05, 1780.84it/s]
+ 16%|#6 | 1600/10000 [00:02<00:17, 488.94it/s]
+ 24%|##4 | 2400/10000 [00:03<00:10, 705.84it/s]
+ 32%|###2 | 3200/10000 [00:03<00:07, 907.78it/s]
+ 40%|#### | 4000/10000 [00:04<00:05, 1075.55it/s]
+ 48%|####8 | 4800/10000 [00:04<00:04, 1222.64it/s]
+ 56%|#####6 | 5600/10000 [00:05<00:03, 1338.15it/s]
+ reward: -2.77 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.40/6.62, grad norm= 164.27, loss_value= 396.87, loss_actor= 12.97, target value: -15.44: 56%|#####6 | 5600/10000 [00:06<00:03, 1338.15it/s]
+ reward: -2.77 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.40/6.62, grad norm= 164.27, loss_value= 396.87, loss_actor= 12.97, target value: -15.44: 64%|######4 | 6400/10000 [00:06<00:04, 897.53it/s]
+ reward: -0.15 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.46/5.64, grad norm= 165.72, loss_value= 260.42, loss_actor= 13.35, target value: -15.62: 64%|######4 | 6400/10000 [00:07<00:04, 897.53it/s]
+ reward: -0.15 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.46/5.64, grad norm= 165.72, loss_value= 260.42, loss_actor= 13.35, target value: -15.62: 72%|#######2 | 7200/10000 [00:08<00:03, 718.68it/s]
+ reward: -2.73 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.34/6.00, grad norm= 116.30, loss_value= 268.66, loss_actor= 13.29, target value: -15.24: 72%|#######2 | 7200/10000 [00:09<00:03, 718.68it/s]
+ reward: -2.73 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.34/6.00, grad norm= 116.30, loss_value= 268.66, loss_actor= 13.29, target value: -15.24: 80%|######## | 8000/10000 [00:10<00:03, 631.02it/s]
+ reward: -4.72 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.68/5.33, grad norm= 98.38, loss_value= 215.18, loss_actor= 17.06, target value: -17.23: 80%|######## | 8000/10000 [00:10<00:03, 631.02it/s]
+ reward: -4.72 (r0 = -2.01), reward eval: reward: -0.00, reward normalized=-2.68/5.33, grad norm= 98.38, loss_value= 215.18, loss_actor= 17.06, target value: -17.23: 88%|########8 | 8800/10000 [00:11<00:02, 590.95it/s]
+ reward: -5.42 (r0 = -2.01), reward eval: reward: -5.04, reward normalized=-2.73/5.36, grad norm= 65.22, loss_value= 257.60, loss_actor= 20.45, target value: -18.03: 88%|########8 | 8800/10000 [00:14<00:02, 590.95it/s]
+ reward: -5.42 (r0 = -2.01), reward eval: reward: -5.04, reward normalized=-2.73/5.36, grad norm= 65.22, loss_value= 257.60, loss_actor= 20.45, target value: -18.03: 96%|#########6| 9600/10000 [00:15<00:01, 399.48it/s]
+ reward: -5.40 (r0 = -2.01), reward eval: reward: -5.04, reward normalized=-3.53/5.24, grad norm= 410.03, loss_value= 340.43, loss_actor= 19.12, target value: -24.79: 96%|#########6| 9600/10000 [00:15<00:01, 399.48it/s]
+ reward: -5.40 (r0 = -2.01), reward eval: reward: -5.04, reward normalized=-3.53/5.24, grad norm= 410.03, loss_value= 340.43, loss_actor= 19.12, target value: -24.79: : 10400it [00:17, 365.12it/s]
+ reward: -4.35 (r0 = -2.01), reward eval: reward: -5.04, reward normalized=-3.57/4.59, grad norm= 83.81, loss_value= 232.02, loss_actor= 22.47, target value: -25.23: : 10400it [00:18, 365.12it/s]
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 28.313 seconds)
+**Total running time of the script:** ( 0 minutes 28.522 seconds)


 .. _sphx_glr_download_advanced_coding_ddpg.py:
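The `loss_value` and `loss_actor` figures in the changed progress lines are the two DDPG objectives the tutorial optimizes (critic regression and deterministic policy gradient). As a rough reference only, here is a minimal plain-PyTorch sketch of those two terms; `actor`, `critic`, and the target networks are hypothetical stand-ins, since the tutorial itself builds its loss module from TorchRL components.

```python
import torch
import torch.nn.functional as F

def ddpg_losses(batch, actor, critic, target_actor, target_critic, gamma=0.99):
    # batch: (obs, action, reward, next_obs, done) tensors; done is a 0/1 float mask.
    obs, action, reward, next_obs, done = batch

    with torch.no_grad():
        # Bootstrapped target computed with the (slow-moving) target networks.
        next_action = target_actor(next_obs)
        target_q = reward + gamma * (1.0 - done) * target_critic(next_obs, next_action)

    # Critic regression toward the bootstrapped target -- the logged "loss_value".
    loss_value = F.mse_loss(critic(obs, action), target_q)

    # Deterministic policy gradient: maximize Q under the current critic -- "loss_actor".
    loss_actor = -critic(obs, actor(obs)).mean()
    return loss_value, loss_actor
```

Each loss is backpropagated into its own set of parameters (critic and actor optimizers, respectively), with the target networks updated by a soft/Polyak copy.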

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
 .. code-block:: none

 loss: 5.167
-elapsed time (seconds): 199.8
+elapsed time (seconds): 210.1
 loss: 5.168
-elapsed time (seconds): 115.5
+elapsed time (seconds): 115.7
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 26.263 seconds)
+**Total running time of the script:** ( 5 minutes 36.854 seconds)


 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
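The two elapsed times in this file compare the float LSTM language model against its dynamically quantized copy, which is where the roughly 2x speedup comes from. A minimal sketch of the conversion step, with a placeholder model standing in for the tutorial's word-language model:

```python
import torch
import torch.nn as nn

class TinyLM(nn.Module):
    """Placeholder stand-in for the tutorial's LSTM word-language model."""
    def __init__(self, vocab=1000, hidden=256):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True)
        self.decoder = nn.Linear(hidden, vocab)

    def forward(self, tokens):
        out, _ = self.lstm(self.embed(tokens))
        return self.decoder(out)

float_model = TinyLM()

# nn.LSTM / nn.Linear weights are converted to int8; activations are quantized
# on the fly at inference time, which produces the elapsed-time drop logged above.
quantized_model = torch.quantization.quantize_dynamic(
    float_model, {nn.LSTM, nn.Linear}, dtype=torch.qint8
)
```

Dynamic quantization needs no calibration data, which is why it is the lightest-weight option benchmarked in this tutorial.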

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 33 additions & 34 deletions
@@ -410,33 +410,32 @@ network to evaluation mode using ``.eval()``.

 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

   0%| | 0.00/548M [00:00<?, ?B/s]
-  4%|3 | 20.4M/548M [00:00<00:02, 214MB/s]
-  8%|7 | 41.4M/548M [00:00<00:02, 217MB/s]
- 11%|#1 | 62.4M/548M [00:00<00:02, 218MB/s]
- 15%|#5 | 83.4M/548M [00:00<00:02, 219MB/s]
- 19%|#9 | 104M/548M [00:00<00:02, 219MB/s]
- 23%|##2 | 125M/548M [00:00<00:02, 219MB/s]
- 27%|##6 | 146M/548M [00:00<00:01, 219MB/s]
- 31%|### | 167M/548M [00:00<00:01, 220MB/s]
- 34%|###4 | 188M/548M [00:00<00:01, 220MB/s]
- 38%|###8 | 210M/548M [00:01<00:01, 220MB/s]
- 42%|####2 | 230M/548M [00:01<00:01, 220MB/s]
- 46%|####5 | 252M/548M [00:01<00:01, 220MB/s]
- 50%|####9 | 272M/548M [00:01<00:01, 220MB/s]
- 54%|#####3 | 294M/548M [00:01<00:01, 220MB/s]
- 57%|#####7 | 314M/548M [00:01<00:01, 220MB/s]
- 61%|######1 | 336M/548M [00:01<00:01, 220MB/s]
- 65%|######5 | 357M/548M [00:01<00:00, 220MB/s]
- 69%|######8 | 378M/548M [00:01<00:00, 220MB/s]
- 73%|#######2 | 399M/548M [00:01<00:00, 220MB/s]
- 77%|#######6 | 420M/548M [00:02<00:00, 220MB/s]
- 80%|######## | 441M/548M [00:02<00:00, 220MB/s]
- 84%|########4 | 462M/548M [00:02<00:00, 220MB/s]
- 88%|########8 | 483M/548M [00:02<00:00, 220MB/s]
- 92%|#########2| 504M/548M [00:02<00:00, 220MB/s]
- 96%|#########5| 526M/548M [00:02<00:00, 221MB/s]
- 100%|#########9| 547M/548M [00:02<00:00, 221MB/s]
- 100%|##########| 548M/548M [00:02<00:00, 220MB/s]
+  4%|3 | 20.6M/548M [00:00<00:02, 216MB/s]
+  8%|7 | 41.8M/548M [00:00<00:02, 219MB/s]
+ 11%|#1 | 62.9M/548M [00:00<00:02, 220MB/s]
+ 15%|#5 | 84.0M/548M [00:00<00:02, 220MB/s]
+ 19%|#9 | 105M/548M [00:00<00:02, 221MB/s]
+ 23%|##3 | 126M/548M [00:00<00:02, 221MB/s]
+ 27%|##6 | 147M/548M [00:00<00:01, 221MB/s]
+ 31%|### | 168M/548M [00:00<00:01, 221MB/s]
+ 35%|###4 | 190M/548M [00:00<00:01, 221MB/s]
+ 38%|###8 | 211M/548M [00:01<00:01, 221MB/s]
+ 42%|####2 | 232M/548M [00:01<00:01, 221MB/s]
+ 46%|####6 | 253M/548M [00:01<00:01, 221MB/s]
+ 50%|##### | 274M/548M [00:01<00:01, 221MB/s]
+ 54%|#####3 | 295M/548M [00:01<00:01, 221MB/s]
+ 58%|#####7 | 316M/548M [00:01<00:01, 220MB/s]
+ 62%|######1 | 338M/548M [00:01<00:01, 220MB/s]
+ 65%|######5 | 359M/548M [00:01<00:00, 220MB/s]
+ 69%|######9 | 380M/548M [00:01<00:00, 220MB/s]
+ 73%|#######3 | 401M/548M [00:01<00:00, 220MB/s]
+ 77%|#######6 | 422M/548M [00:02<00:00, 220MB/s]
+ 81%|######## | 443M/548M [00:02<00:00, 220MB/s]
+ 85%|########4 | 464M/548M [00:02<00:00, 220MB/s]
+ 89%|########8 | 485M/548M [00:02<00:00, 220MB/s]
+ 92%|#########2| 506M/548M [00:02<00:00, 221MB/s]
+ 96%|#########6| 527M/548M [00:02<00:00, 221MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 221MB/s]
@@ -757,22 +756,22 @@ Finally, we can run the algorithm.

 Optimizing..
 run [50]:
-Style Loss : 4.036047 Content Loss: 4.145061
+Style Loss : 4.118675 Content Loss: 4.164120

 run [100]:
-Style Loss : 1.121276 Content Loss: 3.036377
+Style Loss : 1.165700 Content Loss: 3.054597

 run [150]:
-Style Loss : 0.717448 Content Loss: 2.649953
+Style Loss : 0.726226 Content Loss: 2.663800

 run [200]:
-Style Loss : 0.482685 Content Loss: 2.490292
+Style Loss : 0.479455 Content Loss: 2.492070

 run [250]:
-Style Loss : 0.349416 Content Loss: 2.405339
+Style Loss : 0.348070 Content Loss: 2.404342

 run [300]:
-Style Loss : 0.269185 Content Loss: 2.352262
+Style Loss : 0.266881 Content Loss: 2.352220
@@ -781,7 +780,7 @@ Finally, we can run the algorithm.

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 38.651 seconds)
+**Total running time of the script:** ( 0 minutes 38.475 seconds)


 .. _sphx_glr_download_advanced_neural_style_tutorial.py:
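The `Style Loss` / `Content Loss` values above are the Gram-matrix style term and the feature-space content term that the tutorial optimizes over VGG19 activations. A minimal sketch of how such losses are typically computed; the helper names here are illustrative, not the tutorial's own module names.

```python
import torch
import torch.nn.functional as F

def gram_matrix(features):
    # features: (batch, channels, height, width) activations from a VGG layer.
    b, c, h, w = features.size()
    flat = features.view(b * c, h * w)
    # Normalize so the style loss is insensitive to feature-map size.
    return flat @ flat.t() / (b * c * h * w)

def style_loss(input_feat, target_feat):
    # MSE between Gram matrices of the generated and style images.
    return F.mse_loss(gram_matrix(input_feat), gram_matrix(target_feat).detach())

def content_loss(input_feat, target_feat):
    # MSE directly in feature space against the content image.
    return F.mse_loss(input_feat, target_feat.detach())
```

The optimizer (L-BFGS in the tutorial) updates the input image itself to drive both terms down, which is what the `run [50]` ... `run [300]` log lines track.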

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.624 seconds)
+**Total running time of the script:** ( 0 minutes 0.580 seconds)


 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
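The hunk context refers to the backward pass of the tutorial's numpy-backed extension. A minimal sketch of the general pattern, using a trivial scaling op rather than the tutorial's FFT-based example:

```python
import torch
from torch.autograd import Function

class NumpyScale(Function):
    """Illustrative numpy-backed op: forward and backward both drop to numpy."""

    @staticmethod
    def forward(ctx, input, scale):
        ctx.scale = scale
        # Leave autograd, compute in numpy, and wrap the result back in a tensor.
        result = input.detach().numpy() * scale
        return torch.from_numpy(result)

    @staticmethod
    def backward(ctx, grad_output):
        # Gradient w.r.t. the input; the scale argument is treated as a constant.
        grad_input = grad_output.detach().numpy() * ctx.scale
        return torch.from_numpy(grad_input), None

x = torch.randn(4, dtype=torch.float64, requires_grad=True)
y = NumpyScale.apply(x, 2.0).sum()
y.backward()  # x.grad is now filled via the numpy backward above
```

Because both passes run in numpy on the CPU, the whole script is fast, which is why only a few hundredths of a second separate the two timings in this diff.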
