
Commit 1db3f30

Automated tutorials push
1 parent a5f974b commit 1db3f30

File tree: 204 files changed (15,213 additions, 12,852 deletions)


_images/sphx_glr_coding_ddpg_001.png and other regenerated tutorial images changed (binary diffs; reported size deltas: 2.24 KB, 2.2 KB, -322 B, -192 B, +30 B, -376 B, -631 B, +697 B, -98 B, +261 B, +72 B, +539 B).

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.
 
 
 0%| | 0/10000 [00:00<?, ?it/s]
-8%|8 | 800/10000 [00:00<00:05, 1722.07it/s]
-16%|#6 | 1600/10000 [00:02<00:17, 481.23it/s]
-24%|##4 | 2400/10000 [00:03<00:10, 692.11it/s]
-32%|###2 | 3200/10000 [00:04<00:07, 889.74it/s]
-40%|#### | 4000/10000 [00:04<00:05, 1050.97it/s]
-48%|####8 | 4800/10000 [00:05<00:04, 1176.81it/s]
-56%|#####6 | 5600/10000 [00:05<00:03, 1298.20it/s]
-reward: -2.92 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.66/6.18, grad norm= 64.98, loss_value= 354.58, loss_actor= 15.12, target value: -16.19: 56%|#####6 | 5600/10000 [00:06<00:03, 1298.20it/s]
-reward: -2.92 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.66/6.18, grad norm= 64.98, loss_value= 354.58, loss_actor= 15.12, target value: -16.19: 64%|######4 | 6400/10000 [00:06<00:03, 906.92it/s]
-reward: -0.16 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.09/5.62, grad norm= 124.82, loss_value= 219.50, loss_actor= 14.16, target value: -13.41: 64%|######4 | 6400/10000 [00:07<00:03, 906.92it/s]
-reward: -0.16 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.09/5.62, grad norm= 124.82, loss_value= 219.50, loss_actor= 14.16, target value: -13.41: 72%|#######2 | 7200/10000 [00:08<00:03, 750.54it/s]
-reward: -2.73 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.40/5.86, grad norm= 125.51, loss_value= 239.26, loss_actor= 12.55, target value: -15.01: 72%|#######2 | 7200/10000 [00:09<00:03, 750.54it/s]
-reward: -2.73 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.40/5.86, grad norm= 125.51, loss_value= 239.26, loss_actor= 12.55, target value: -15.01: 80%|######## | 8000/10000 [00:09<00:02, 671.44it/s]
-reward: -4.45 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.48/5.26, grad norm= 133.72, loss_value= 212.55, loss_actor= 17.56, target value: -16.44: 80%|######## | 8000/10000 [00:10<00:02, 671.44it/s]
-reward: -4.45 (r0 = -2.79), reward eval: reward: 0.00, reward normalized=-2.48/5.26, grad norm= 133.72, loss_value= 212.55, loss_actor= 17.56, target value: -16.44: 88%|########8 | 8800/10000 [00:11<00:01, 614.51it/s]
-reward: -5.41 (r0 = -2.79), reward eval: reward: -5.53, reward normalized=-3.08/5.18, grad norm= 130.58, loss_value= 202.59, loss_actor= 19.66, target value: -20.80: 88%|########8 | 8800/10000 [00:14<00:01, 614.51it/s]
-reward: -5.41 (r0 = -2.79), reward eval: reward: -5.53, reward normalized=-3.08/5.18, grad norm= 130.58, loss_value= 202.59, loss_actor= 19.66, target value: -20.80: 96%|#########6| 9600/10000 [00:14<00:00, 406.37it/s]
-reward: -5.40 (r0 = -2.79), reward eval: reward: -5.53, reward normalized=-3.54/5.03, grad norm= 329.46, loss_value= 245.21, loss_actor= 19.56, target value: -25.54: 96%|#########6| 9600/10000 [00:15<00:00, 406.37it/s]
-reward: -5.40 (r0 = -2.79), reward eval: reward: -5.53, reward normalized=-3.54/5.03, grad norm= 329.46, loss_value= 245.21, loss_actor= 19.56, target value: -25.54: : 10400it [00:17, 371.25it/s]
-reward: -5.22 (r0 = -2.79), reward eval: reward: -5.53, reward normalized=-4.00/4.30, grad norm= 107.23, loss_value= 199.22, loss_actor= 24.13, target value: -27.65: : 10400it [00:18, 371.25it/s]
+8%|8 | 800/10000 [00:00<00:05, 1695.61it/s]
+16%|#6 | 1600/10000 [00:02<00:17, 482.88it/s]
+24%|##4 | 2400/10000 [00:03<00:11, 687.66it/s]
+32%|###2 | 3200/10000 [00:04<00:07, 889.93it/s]
+40%|#### | 4000/10000 [00:04<00:05, 1057.38it/s]
+48%|####8 | 4800/10000 [00:05<00:04, 1193.59it/s]
+56%|#####6 | 5600/10000 [00:05<00:03, 1290.37it/s]
+reward: -2.84 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-2.43/6.31, grad norm= 71.04, loss_value= 396.71, loss_actor= 14.53, target value: -14.87: 56%|#####6 | 5600/10000 [00:06<00:03, 1290.37it/s]
+reward: -2.84 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-2.43/6.31, grad norm= 71.04, loss_value= 396.71, loss_actor= 14.53, target value: -14.87: 64%|######4 | 6400/10000 [00:07<00:04, 851.50it/s]
+reward: -0.15 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-1.76/5.98, grad norm= 118.29, loss_value= 304.53, loss_actor= 12.29, target value: -10.82: 64%|######4 | 6400/10000 [00:07<00:04, 851.50it/s]
+reward: -0.15 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-1.76/5.98, grad norm= 118.29, loss_value= 304.53, loss_actor= 12.29, target value: -10.82: 72%|#######2 | 7200/10000 [00:08<00:04, 692.07it/s]
+reward: -2.41 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-1.76/6.10, grad norm= 100.06, loss_value= 349.30, loss_actor= 11.72, target value: -12.19: 72%|#######2 | 7200/10000 [00:09<00:04, 692.07it/s]
+reward: -2.41 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-1.76/6.10, grad norm= 100.06, loss_value= 349.30, loss_actor= 11.72, target value: -12.19: 80%|######## | 8000/10000 [00:10<00:03, 623.08it/s]
+reward: -4.83 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-2.22/4.89, grad norm= 68.99, loss_value= 204.73, loss_actor= 13.44, target value: -14.30: 80%|######## | 8000/10000 [00:11<00:03, 623.08it/s]
+reward: -4.83 (r0 = -3.17), reward eval: reward: -0.01, reward normalized=-2.22/4.89, grad norm= 68.99, loss_value= 204.73, loss_actor= 13.44, target value: -14.30: 88%|########8 | 8800/10000 [00:11<00:02, 578.83it/s]
+reward: -4.41 (r0 = -3.17), reward eval: reward: -5.62, reward normalized=-2.91/4.96, grad norm= 81.74, loss_value= 161.88, loss_actor= 13.33, target value: -19.36: 88%|########8 | 8800/10000 [00:14<00:02, 578.83it/s]
+reward: -4.41 (r0 = -3.17), reward eval: reward: -5.62, reward normalized=-2.91/4.96, grad norm= 81.74, loss_value= 161.88, loss_actor= 13.33, target value: -19.36: 96%|#########6| 9600/10000 [00:15<00:01, 389.80it/s]
+reward: -5.39 (r0 = -3.17), reward eval: reward: -5.62, reward normalized=-3.06/5.23, grad norm= 193.62, loss_value= 258.27, loss_actor= 14.26, target value: -22.43: 96%|#########6| 9600/10000 [00:16<00:01, 389.80it/s]
+reward: -5.39 (r0 = -3.17), reward eval: reward: -5.62, reward normalized=-3.06/5.23, grad norm= 193.62, loss_value= 258.27, loss_actor= 14.26, target value: -22.43: : 10400it [00:18, 357.06it/s]
+reward: -4.73 (r0 = -3.17), reward eval: reward: -5.62, reward normalized=-3.74/4.05, grad norm= 76.70, loss_value= 159.92, loss_actor= 23.20, target value: -26.13: : 10400it [00:19, 357.06it/s]
 
 
 
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 28.302 seconds)
+**Total running time of the script:** ( 0 minutes 29.164 seconds)
 
 
 .. _sphx_glr_download_advanced_coding_ddpg.py:
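Note on the DDPG hunks above: only the logged numbers change between CI runs. For readers skimming the diff, the quantities being logged, loss_value for the critic and loss_actor for the policy, come from the standard DDPG update. The snippet below is a minimal plain-PyTorch sketch of those two losses, with made-up network sizes and random tensors standing in for a replay-buffer sample; the tutorial itself builds these losses with torchrl, and a full DDPG also keeps a target actor and soft-updates both target networks, which is omitted here.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    obs_dim, act_dim, batch, gamma = 11, 1, 256, 0.99  # illustrative sizes

    actor = nn.Sequential(nn.Linear(obs_dim, 64), nn.Tanh(), nn.Linear(64, act_dim), nn.Tanh())
    critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
    target_critic = nn.Sequential(nn.Linear(obs_dim + act_dim, 64), nn.Tanh(), nn.Linear(64, 1))
    target_critic.load_state_dict(critic.state_dict())

    actor_opt = torch.optim.Adam(actor.parameters(), lr=1e-4)
    critic_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)

    # Random data stands in for a batch of collected transitions.
    obs, next_obs = torch.randn(batch, obs_dim), torch.randn(batch, obs_dim)
    act, rew, done = torch.randn(batch, act_dim), torch.randn(batch, 1), torch.zeros(batch, 1)

    # Critic update ("loss_value"): regress Q(s, a) onto the bootstrapped TD target.
    with torch.no_grad():
        next_q = target_critic(torch.cat([next_obs, actor(next_obs)], dim=-1))
        target = rew + gamma * (1.0 - done) * next_q
    loss_value = F.mse_loss(critic(torch.cat([obs, act], dim=-1)), target)
    critic_opt.zero_grad()
    loss_value.backward()
    critic_opt.step()

    # Actor update ("loss_actor"): push the policy toward actions the critic scores highly.
    loss_actor = -critic(torch.cat([obs, actor(obs)], dim=-1)).mean()
    actor_opt.zero_grad()
    loss_actor.backward()
    actor_opt.step()

    print(f"loss_value={loss_value.item():.2f}, loss_actor={loss_actor.item():.2f}")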

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
 .. code-block:: none
 
 loss: 5.167
-elapsed time (seconds): 203.3
+elapsed time (seconds): 201.1
 loss: 5.168
-elapsed time (seconds): 115.5
+elapsed time (seconds): 113.3
 
 
 
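The two timings above compare the same single-threaded evaluation loop before and after post-training dynamic quantization: the loss is essentially unchanged (5.167 vs. 5.168) while the elapsed time drops from about 201 s to about 113 s. Below is a hedged sketch of the API involved, using a small stand-in model rather than the tutorial's LSTM word-language model; absolute speedups depend on how much of the compute sits in the quantized module types.

.. code-block:: python

    import time

    import torch
    import torch.nn as nn

    torch.set_num_threads(1)  # both models are benchmarked single threaded

    def elapsed(model, inputs):
        start = time.time()
        with torch.no_grad():
            for x in inputs:
                model(x)
        return time.time() - start

    # Stand-in float model; the tutorial quantizes its LSTM language model instead.
    float_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 256))

    # Dynamic quantization: weights of the listed module types are stored as int8,
    # activations are quantized on the fly at inference time.
    quantized_model = torch.quantization.quantize_dynamic(
        float_model, {nn.Linear}, dtype=torch.qint8
    )

    inputs = [torch.randn(64, 512) for _ in range(200)]
    print("fp32 elapsed (seconds):", elapsed(float_model, inputs))
    print("int8 elapsed (seconds):", elapsed(quantized_model, inputs))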
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 5 minutes 29.492 seconds)
+**Total running time of the script:** ( 5 minutes 25.145 seconds)
 
 
 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 34 additions & 33 deletions
@@ -410,32 +410,33 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
 
 0%| | 0.00/548M [00:00<?, ?B/s]
-4%|3 | 20.6M/548M [00:00<00:02, 216MB/s]
-8%|7 | 41.5M/548M [00:00<00:02, 217MB/s]
-11%|#1 | 62.5M/548M [00:00<00:02, 219MB/s]
-15%|#5 | 83.8M/548M [00:00<00:02, 220MB/s]
-19%|#9 | 105M/548M [00:00<00:02, 221MB/s]
-23%|##3 | 126M/548M [00:00<00:01, 221MB/s]
-27%|##6 | 148M/548M [00:00<00:01, 222MB/s]
-31%|### | 169M/548M [00:00<00:01, 222MB/s]
-35%|###4 | 190M/548M [00:00<00:01, 222MB/s]
-39%|###8 | 211M/548M [00:01<00:01, 222MB/s]
-42%|####2 | 233M/548M [00:01<00:01, 222MB/s]
-46%|####6 | 254M/548M [00:01<00:01, 222MB/s]
-50%|##### | 275M/548M [00:01<00:01, 222MB/s]
-54%|#####4 | 296M/548M [00:01<00:01, 222MB/s]
-58%|#####7 | 318M/548M [00:01<00:01, 222MB/s]
-62%|######1 | 339M/548M [00:01<00:00, 222MB/s]
-66%|######5 | 360M/548M [00:01<00:00, 222MB/s]
-70%|######9 | 382M/548M [00:01<00:00, 223MB/s]
-74%|#######3 | 403M/548M [00:01<00:00, 223MB/s]
-77%|#######7 | 424M/548M [00:02<00:00, 223MB/s]
-81%|########1 | 446M/548M [00:02<00:00, 223MB/s]
-85%|########5 | 467M/548M [00:02<00:00, 222MB/s]
-89%|########9 | 488M/548M [00:02<00:00, 223MB/s]
-93%|#########2| 509M/548M [00:02<00:00, 222MB/s]
-97%|#########6| 531M/548M [00:02<00:00, 223MB/s]
-100%|##########| 548M/548M [00:02<00:00, 222MB/s]
+4%|3 | 20.1M/548M [00:00<00:02, 211MB/s]
+7%|7 | 40.8M/548M [00:00<00:02, 213MB/s]
+11%|#1 | 61.5M/548M [00:00<00:02, 215MB/s]
+15%|#5 | 82.2M/548M [00:00<00:02, 216MB/s]
+19%|#8 | 103M/548M [00:00<00:02, 216MB/s]
+23%|##2 | 124M/548M [00:00<00:02, 216MB/s]
+26%|##6 | 144M/548M [00:00<00:01, 217MB/s]
+30%|### | 165M/548M [00:00<00:01, 217MB/s]
+34%|###3 | 186M/548M [00:00<00:01, 217MB/s]
+38%|###7 | 207M/548M [00:01<00:01, 217MB/s]
+42%|####1 | 228M/548M [00:01<00:01, 217MB/s]
+45%|####5 | 248M/548M [00:01<00:01, 217MB/s]
+49%|####9 | 269M/548M [00:01<00:01, 217MB/s]
+53%|#####2 | 290M/548M [00:01<00:01, 217MB/s]
+57%|#####6 | 311M/548M [00:01<00:01, 217MB/s]
+60%|###### | 332M/548M [00:01<00:01, 217MB/s]
+64%|######4 | 352M/548M [00:01<00:00, 217MB/s]
+68%|######8 | 373M/548M [00:01<00:00, 217MB/s]
+72%|#######1 | 394M/548M [00:01<00:00, 217MB/s]
+76%|#######5 | 415M/548M [00:02<00:00, 217MB/s]
+79%|#######9 | 436M/548M [00:02<00:00, 217MB/s]
+83%|########3 | 456M/548M [00:02<00:00, 217MB/s]
+87%|########7 | 477M/548M [00:02<00:00, 217MB/s]
+91%|######### | 498M/548M [00:02<00:00, 217MB/s]
+95%|#########4| 519M/548M [00:02<00:00, 217MB/s]
+98%|#########8| 539M/548M [00:02<00:00, 217MB/s]
+100%|##########| 548M/548M [00:02<00:00, 217MB/s]
 
 
 
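The only change in this hunk is the progress of the 548 MB pretrained VGG19 weight download, which runs once and is cached under .cache/torch/hub/checkpoints. As a minimal sketch of the call that triggers it (assuming a recent torchvision; older releases used pretrained=True instead of the weights= argument):

.. code-block:: python

    import torchvision.models as models

    # Instantiating the pretrained model fetches vgg19-dcbb9e9d.pth on first use.
    # Only the convolutional feature extractor is kept, switched to eval mode,
    # as the tutorial does before building the style/content losses.
    cnn = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()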
@@ -756,22 +757,22 @@ Finally, we can run the algorithm.
 
 Optimizing..
 run [50]:
-Style Loss : 4.065745 Content Loss: 4.157114
+Style Loss : 4.161686 Content Loss: 4.189911
 
 run [100]:
-Style Loss : 1.119953 Content Loss: 3.009066
+Style Loss : 1.140328 Content Loss: 3.020304
 
 run [150]:
-Style Loss : 0.703355 Content Loss: 2.644294
+Style Loss : 0.716041 Content Loss: 2.644831
 
 run [200]:
-Style Loss : 0.471478 Content Loss: 2.488785
+Style Loss : 0.480255 Content Loss: 2.490241
 
 run [250]:
-Style Loss : 0.341830 Content Loss: 2.400148
+Style Loss : 0.348056 Content Loss: 2.404358
 
 run [300]:
-Style Loss : 0.261067 Content Loss: 2.345300
+Style Loss : 0.265763 Content Loss: 2.351621
 
 
 
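The "Style Loss" / "Content Loss" pairs above are logged every 50 steps while the optimizer (LBFGS in the tutorial) updates the input image, so run-to-run drift in these numbers is expected. As a reminder of what the two numbers measure, here is a minimal sketch with random tensors standing in for the VGG19 activations the tutorial extracts: content loss is an MSE between feature maps, style loss an MSE between their Gram matrices.

.. code-block:: python

    import torch
    import torch.nn.functional as F

    def gram_matrix(feat):
        # feat: (batch, channels, height, width) feature map
        b, c, h, w = feat.shape
        flat = feat.view(b * c, h * w)
        return flat @ flat.t() / (b * c * h * w)  # normalized Gram matrix

    gen_feat = torch.randn(1, 64, 128, 128)      # features of the image being optimized
    content_feat = torch.randn(1, 64, 128, 128)  # features of the content image
    style_feat = torch.randn(1, 64, 128, 128)    # features of the style image

    content_loss = F.mse_loss(gen_feat, content_feat)
    style_loss = F.mse_loss(gram_matrix(gen_feat), gram_matrix(style_feat))
    print(content_loss.item(), style_loss.item())

In the tutorial these losses are attached after selected VGG19 layers and combined with style and content weights before each optimizer step.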
@@ -780,7 +781,7 @@ Finally, we can run the algorithm.
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 38.628 seconds)
+**Total running time of the script:** ( 0 minutes 38.500 seconds)
 
 
 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 0.595 seconds)
+**Total running time of the script:** ( 0 minutes 0.598 seconds)
 
 
 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
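The hunk above only touches the script timing, but its context line ("The backward pass computes the gradient wrt the input and the gradient wrt the filter") refers to the tutorial's custom torch.autograd.Function whose forward and backward are written with NumPy/SciPy. A simplified sketch of that pattern, using an element-wise sine instead of the tutorial's parameterized convolution:

.. code-block:: python

    import numpy as np
    import torch

    class NumpySin(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            ctx.save_for_backward(x)
            # Forward computed in NumPy, result wrapped back into a tensor.
            return torch.from_numpy(np.sin(x.detach().numpy()))

        @staticmethod
        def backward(ctx, grad_output):
            (x,) = ctx.saved_tensors
            # Chain rule: d(sin x)/dx = cos x, also computed in NumPy.
            return grad_output * torch.from_numpy(np.cos(x.detach().numpy()))

    x = torch.randn(5, dtype=torch.float64, requires_grad=True)
    NumpySin.apply(x).sum().backward()
    print(torch.allclose(x.grad, torch.cos(x).detach()))  # True

A Function with several learnable inputs, like the tutorial's convolution with its filter and bias, must return one gradient per forward argument from backward; the single-input version above keeps the pattern visible.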
