Commit fed6e56

Automated tutorials push
1 parent 3ae1e8b commit fed6e56

391 files changed: +29,275 −18,935 lines


_images/sphx_glr_coding_ddpg_001.png: -132 Bytes

Further regenerated tutorial images (filenames not captured in this view) changed in size by 275 Bytes, 51 Bytes, -90 Bytes, -222 Bytes, 65 Bytes, 518 Bytes, -4.88 KB, -1.33 KB, -208 Bytes, -161 Bytes, and 3.33 KB.

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.
 0%| | 0/10000 [00:00<?, ?it/s]
- 8%|8 | 800/10000 [00:00<00:05, 1703.98it/s]
- 16%|#6 | 1600/10000 [00:02<00:17, 484.88it/s]
- 24%|##4 | 2400/10000 [00:03<00:10, 699.40it/s]
- 32%|###2 | 3200/10000 [00:03<00:07, 901.42it/s]
- 40%|#### | 4000/10000 [00:04<00:05, 1070.69it/s]
- 48%|####8 | 4800/10000 [00:04<00:04, 1219.46it/s]
- 56%|#####6 | 5600/10000 [00:05<00:03, 1336.30it/s]
- reward: -2.08 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.60/6.39, grad norm= 100.00, loss_value= 431.35, loss_actor= 16.48, target value: -15.57: 56%|#####6 | 5600/10000 [00:06<00:03, 1336.30it/s]
- reward: -2.08 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.60/6.39, grad norm= 100.00, loss_value= 431.35, loss_actor= 16.48, target value: -15.57: 64%|######4 | 6400/10000 [00:06<00:03, 901.23it/s]
- reward: -0.20 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-3.16/6.03, grad norm= 379.46, loss_value= 354.63, loss_actor= 14.92, target value: -19.87: 64%|######4 | 6400/10000 [00:07<00:03, 901.23it/s]
- reward: -0.20 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-3.16/6.03, grad norm= 379.46, loss_value= 354.63, loss_actor= 14.92, target value: -19.87: 72%|#######2 | 7200/10000 [00:08<00:03, 740.79it/s]
- reward: -3.19 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-1.95/6.12, grad norm= 78.80, loss_value= 296.14, loss_actor= 11.28, target value: -11.78: 72%|#######2 | 7200/10000 [00:09<00:03, 740.79it/s]
- reward: -3.19 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-1.95/6.12, grad norm= 78.80, loss_value= 296.14, loss_actor= 11.28, target value: -11.78: 80%|######## | 8000/10000 [00:10<00:03, 652.77it/s]
- reward: -4.73 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.86/5.35, grad norm= 79.71, loss_value= 225.52, loss_actor= 19.22, target value: -18.85: 80%|######## | 8000/10000 [00:10<00:03, 652.77it/s]
- reward: -4.73 (r0 = -3.42), reward eval: reward: 0.00, reward normalized=-2.86/5.35, grad norm= 79.71, loss_value= 225.52, loss_actor= 19.22, target value: -18.85: 88%|########8 | 8800/10000 [00:11<00:01, 609.61it/s]
- reward: -5.48 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-3.25/5.19, grad norm= 207.90, loss_value= 237.13, loss_actor= 21.20, target value: -22.06: 88%|########8 | 8800/10000 [00:14<00:01, 609.61it/s]
- reward: -5.48 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-3.25/5.19, grad norm= 207.90, loss_value= 237.13, loss_actor= 21.20, target value: -22.06: 96%|#########6| 9600/10000 [00:14<00:00, 406.16it/s]
- reward: -5.29 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-2.87/4.98, grad norm= 54.30, loss_value= 193.69, loss_actor= 20.55, target value: -20.42: 96%|#########6| 9600/10000 [00:15<00:00, 406.16it/s]
- reward: -5.29 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-2.87/4.98, grad norm= 54.30, loss_value= 193.69, loss_actor= 20.55, target value: -20.42: : 10400it [00:17, 363.51it/s]
- reward: -4.67 (r0 = -3.42), reward eval: reward: -5.46, reward normalized=-3.58/4.46, grad norm= 70.11, loss_value= 183.36, loss_actor= 23.07, target value: -25.11: : 10400it [00:18, 363.51it/s]
+ 8%|8 | 800/10000 [00:00<00:05, 1660.50it/s]
+ 16%|#6 | 1600/10000 [00:02<00:17, 477.55it/s]
+ 24%|##4 | 2400/10000 [00:03<00:11, 674.49it/s]
+ 32%|###2 | 3200/10000 [00:04<00:07, 850.24it/s]
+ 40%|#### | 4000/10000 [00:04<00:06, 995.16it/s]
+ 48%|####8 | 4800/10000 [00:05<00:04, 1116.81it/s]
+ 56%|#####6 | 5600/10000 [00:05<00:03, 1208.77it/s]
+ reward: -2.34 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.59/6.17, grad norm= 147.92, loss_value= 295.62, loss_actor= 13.96, target value: -15.91: 56%|#####6 | 5600/10000 [00:06<00:03, 1208.77it/s]
+ reward: -2.34 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.59/6.17, grad norm= 147.92, loss_value= 295.62, loss_actor= 13.96, target value: -15.91: 64%|######4 | 6400/10000 [00:07<00:04, 841.61it/s]
+ reward: -0.11 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-1.53/5.46, grad norm= 118.19, loss_value= 194.76, loss_actor= 10.63, target value: -10.14: 64%|######4 | 6400/10000 [00:08<00:04, 841.61it/s]
+ reward: -0.11 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-1.53/5.46, grad norm= 118.19, loss_value= 194.76, loss_actor= 10.63, target value: -10.14: 72%|#######2 | 7200/10000 [00:08<00:03, 700.19it/s]
+ reward: -2.33 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.42/5.58, grad norm= 182.04, loss_value= 220.44, loss_actor= 13.69, target value: -16.09: 72%|#######2 | 7200/10000 [00:09<00:03, 700.19it/s]
+ reward: -2.33 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.42/5.58, grad norm= 182.04, loss_value= 220.44, loss_actor= 13.69, target value: -16.09: 80%|######## | 8000/10000 [00:10<00:03, 623.68it/s]
+ reward: -4.44 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.44/4.89, grad norm= 111.35, loss_value= 211.11, loss_actor= 15.74, target value: -15.42: 80%|######## | 8000/10000 [00:11<00:03, 623.68it/s]
+ reward: -4.44 (r0 = -2.00), reward eval: reward: -0.00, reward normalized=-2.44/4.89, grad norm= 111.35, loss_value= 211.11, loss_actor= 15.74, target value: -15.42: 88%|########8 | 8800/10000 [00:12<00:02, 588.63it/s]
+ reward: -4.96 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-2.32/4.85, grad norm= 54.44, loss_value= 165.27, loss_actor= 16.38, target value: -16.11: 88%|########8 | 8800/10000 [00:14<00:02, 588.63it/s]
+ reward: -4.96 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-2.32/4.85, grad norm= 54.44, loss_value= 165.27, loss_actor= 16.38, target value: -16.11: 96%|#########6| 9600/10000 [00:15<00:01, 399.47it/s]
+ reward: -4.86 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-3.02/4.89, grad norm= 173.10, loss_value= 234.27, loss_actor= 13.70, target value: -21.43: 96%|#########6| 9600/10000 [00:16<00:01, 399.47it/s]
+ reward: -4.86 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-3.02/4.89, grad norm= 173.10, loss_value= 234.27, loss_actor= 13.70, target value: -21.43: : 10400it [00:18, 364.88it/s]
+ reward: -4.93 (r0 = -2.00), reward eval: reward: -5.98, reward normalized=-3.38/3.91, grad norm= 120.25, loss_value= 129.44, loss_actor= 15.23, target value: -23.98: : 10400it [00:18, 364.88it/s]
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 28.464 seconds)
+ **Total running time of the script:** ( 0 minutes 29.005 seconds)


.. _sphx_glr_download_advanced_coding_ddpg.py:
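The coding_ddpg changes above are only re-run training logs; the quantities tracked (reward, loss_value, loss_actor, target value) are the same. For orientation, here is a minimal sketch of the two standard DDPG objectives that such logs report; the actor/critic call signatures and the batch layout are illustrative assumptions, not the tutorial's actual TensorDict-based modules.

.. code-block:: python

    import torch
    import torch.nn as nn

    def ddpg_losses(actor, critic, target_actor, target_critic, batch, gamma=0.99):
        """Sketch of the two DDPG objectives behind ``loss_value`` and ``loss_actor``."""
        obs, action, reward, next_obs, done = batch  # assumed batch layout, tensors of shape [B, ...]

        # Value (critic) loss: regress Q(s, a) onto the bootstrapped TD target
        # computed with the *target* networks.
        with torch.no_grad():
            next_action = target_actor(next_obs)
            target_q = reward + gamma * (1.0 - done) * target_critic(next_obs, next_action)
        loss_value = nn.functional.mse_loss(critic(obs, action), target_q)

        # Actor loss: deterministic policy gradient, i.e. maximize Q(s, pi(s)).
        loss_actor = -critic(obs, actor(obs)).mean()
        return loss_value, loss_actor

In the tutorial these objectives are packaged into a single loss module that operates on TensorDict batches rather than plain tensors.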

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
.. code-block:: none

loss: 5.167
- elapsed time (seconds): 208.6
+ elapsed time (seconds): 208.2
loss: 5.168
- elapsed time (seconds): 115.2
+ elapsed time (seconds): 115.8
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 5 minutes 34.642 seconds)
+ **Total running time of the script:** ( 5 minutes 34.880 seconds)


.. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:
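Only the elapsed-time figures moved between runs; the comparison itself is unchanged: the dynamically quantized model reaches essentially the same loss (5.168 vs 5.167) in roughly half the wall-clock time (about 115 s vs about 208 s). As a reminder, a minimal sketch of how dynamic quantization is applied; the small stand-in model here is an assumption for brevity, whereas the tutorial quantizes an LSTM word-language model.

.. code-block:: python

    import torch
    import torch.nn as nn

    # Stand-in float model; the tutorial's actual model is an LSTM word-language model.
    float_model = nn.Sequential(nn.Linear(256, 256), nn.ReLU(), nn.Linear(256, 10))

    # Dynamic quantization: weights are converted to int8 once, activations are
    # quantized on the fly at inference time. No calibration data is needed.
    quantized_model = torch.quantization.quantize_dynamic(
        float_model, {nn.Linear}, dtype=torch.qint8
    )

    with torch.no_grad():
        out = quantized_model(torch.randn(1, 256))
    print(out.shape)  # same interface and output shape as the float model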

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 33 additions & 33 deletions
@@ -410,32 +410,32 @@ network to evaluation mode using ``.eval()``.
Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

 0%| | 0.00/548M [00:00<?, ?B/s]
- 4%|3 | 20.6M/548M [00:00<00:02, 216MB/s]
- 8%|7 | 41.9M/548M [00:00<00:02, 219MB/s]
- 12%|#1 | 63.1M/548M [00:00<00:02, 221MB/s]
- 15%|#5 | 84.6M/548M [00:00<00:02, 222MB/s]
- 19%|#9 | 106M/548M [00:00<00:02, 223MB/s]
- 23%|##3 | 128M/548M [00:00<00:01, 223MB/s]
- 27%|##7 | 149M/548M [00:00<00:01, 224MB/s]
- 31%|###1 | 170M/548M [00:00<00:01, 224MB/s]
- 35%|###5 | 192M/548M [00:00<00:01, 224MB/s]
- 39%|###8 | 213M/548M [00:01<00:01, 224MB/s]
- 43%|####2 | 235M/548M [00:01<00:01, 225MB/s]
- 47%|####6 | 256M/548M [00:01<00:01, 225MB/s]
- 51%|##### | 278M/548M [00:01<00:01, 225MB/s]
- 55%|#####4 | 299M/548M [00:01<00:01, 225MB/s]
- 59%|#####8 | 321M/548M [00:01<00:01, 225MB/s]
- 62%|######2 | 342M/548M [00:01<00:00, 225MB/s]
- 66%|######6 | 364M/548M [00:01<00:00, 225MB/s]
- 70%|####### | 385M/548M [00:01<00:00, 225MB/s]
- 74%|#######4 | 407M/548M [00:01<00:00, 225MB/s]
- 78%|#######8 | 428M/548M [00:02<00:00, 225MB/s]
- 82%|########2 | 450M/548M [00:02<00:00, 224MB/s]
- 86%|########6 | 471M/548M [00:02<00:00, 225MB/s]
- 90%|########9 | 493M/548M [00:02<00:00, 225MB/s]
- 94%|#########3| 514M/548M [00:02<00:00, 225MB/s]
- 98%|#########7| 536M/548M [00:02<00:00, 225MB/s]
- 100%|##########| 548M/548M [00:02<00:00, 224MB/s]
+ 4%|3 | 20.5M/548M [00:00<00:02, 215MB/s]
+ 8%|7 | 41.5M/548M [00:00<00:02, 217MB/s]
+ 11%|#1 | 62.6M/548M [00:00<00:02, 219MB/s]
+ 15%|#5 | 83.9M/548M [00:00<00:02, 220MB/s]
+ 19%|#9 | 105M/548M [00:00<00:02, 221MB/s]
+ 23%|##3 | 126M/548M [00:00<00:02, 221MB/s]
+ 27%|##6 | 148M/548M [00:00<00:01, 221MB/s]
+ 31%|### | 169M/548M [00:00<00:01, 221MB/s]
+ 35%|###4 | 190M/548M [00:00<00:01, 221MB/s]
+ 39%|###8 | 211M/548M [00:01<00:01, 221MB/s]
+ 42%|####2 | 232M/548M [00:01<00:01, 221MB/s]
+ 46%|####6 | 254M/548M [00:01<00:01, 222MB/s]
+ 50%|##### | 275M/548M [00:01<00:01, 222MB/s]
+ 54%|#####4 | 296M/548M [00:01<00:01, 221MB/s]
+ 58%|#####7 | 317M/548M [00:01<00:01, 221MB/s]
+ 62%|######1 | 338M/548M [00:01<00:00, 221MB/s]
+ 66%|######5 | 360M/548M [00:01<00:00, 221MB/s]
+ 69%|######9 | 381M/548M [00:01<00:00, 221MB/s]
+ 73%|#######3 | 402M/548M [00:01<00:00, 221MB/s]
+ 77%|#######7 | 423M/548M [00:02<00:00, 221MB/s]
+ 81%|########1 | 444M/548M [00:02<00:00, 221MB/s]
+ 85%|########4 | 465M/548M [00:02<00:00, 221MB/s]
+ 89%|########8 | 486M/548M [00:02<00:00, 221MB/s]
+ 93%|#########2| 507M/548M [00:02<00:00, 221MB/s]
+ 96%|#########6| 528M/548M [00:02<00:00, 221MB/s]
+ 100%|##########| 548M/548M [00:02<00:00, 221MB/s]
@@ -756,22 +756,22 @@ Finally, we can run the algorithm.
Optimizing..
run [50]:
- Style Loss : 4.103480 Content Loss: 4.095845
+ Style Loss : 4.243999 Content Loss: 4.230177

run [100]:
- Style Loss : 1.120694 Content Loss: 3.009445
+ Style Loss : 1.153091 Content Loss: 3.027826

run [150]:
- Style Loss : 0.707350 Content Loss: 2.644475
+ Style Loss : 0.714814 Content Loss: 2.653670

run [200]:
- Style Loss : 0.476449 Content Loss: 2.486541
+ Style Loss : 0.479303 Content Loss: 2.491420

run [250]:
- Style Loss : 0.344099 Content Loss: 2.402043
+ Style Loss : 0.347053 Content Loss: 2.402259

run [300]:
- Style Loss : 0.262806 Content Loss: 2.347914
+ Style Loss : 0.262986 Content Loss: 2.348989
@@ -780,7 +780,7 @@ Finally, we can run the algorithm.

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 38.521 seconds)
+ **Total running time of the script:** ( 0 minutes 38.572 seconds)


.. _sphx_glr_download_advanced_neural_style_tutorial.py:
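The style and content losses shift slightly because the optimization was re-executed, but the trajectory is the same: both fall steadily from run [50] to run [300]. Below is a minimal sketch of the closure-based L-BFGS loop that produces those ``run [N]`` log lines; the ``model(input_img)`` call returning a (style_score, content_score) pair is an illustrative assumption, whereas the tutorial accumulates those scores from loss modules inserted into a VGG-19 feature extractor.

.. code-block:: python

    import torch
    import torch.optim as optim

    def run_style_transfer(model, input_img, num_steps=300,
                           style_weight=1_000_000, content_weight=1):
        """Closure-based L-BFGS loop behind the ``run [N]`` log lines (sketch)."""
        input_img.requires_grad_(True)
        optimizer = optim.LBFGS([input_img])  # optimize the image, not the network
        run = [0]
        while run[0] <= num_steps:
            def closure():
                with torch.no_grad():
                    input_img.clamp_(0, 1)  # keep pixel values in a displayable range
                optimizer.zero_grad()
                style_score, content_score = model(input_img)  # assumed interface
                loss = style_weight * style_score + content_weight * content_score
                loss.backward()
                run[0] += 1
                if run[0] % 50 == 0:
                    print(f"run [{run[0]}]:")
                    print(f"Style Loss : {style_score.item():4f} Content Loss: {content_score.item():4f}")
                return loss
            optimizer.step(closure)
        with torch.no_grad():
            input_img.clamp_(0, 1)
        return input_img.detach()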

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt

.. rst-class:: sphx-glr-timing

- **Total running time of the script:** ( 0 minutes 0.582 seconds)
+ **Total running time of the script:** ( 0 minutes 0.602 seconds)


.. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
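Only the script timing changed here. The tutorial itself is about wrapping NumPy code in a custom autograd ``Function`` whose backward pass returns the gradient with respect to its input; a minimal sketch in that spirit (the FFT round-trip is deliberately simplistic and for illustration only):

.. code-block:: python

    import numpy as np
    import torch
    from torch.autograd import Function

    class NumpyFFT(Function):
        # Forward and backward both round-trip through NumPy; autograd only sees
        # the tensors going in and the gradients coming back out.
        @staticmethod
        def forward(ctx, input):
            result = abs(np.fft.rfft2(input.detach().numpy()))
            return input.new(result)

        @staticmethod
        def backward(ctx, grad_output):
            result = np.fft.irfft2(grad_output.detach().numpy())
            return grad_output.new(result)

    x = torch.randn(8, 8, requires_grad=True)
    y = NumpyFFT.apply(x)
    y.backward(torch.randn_like(y))
    print(x.grad.shape)  # torch.Size([8, 8]) -- gradient w.r.t. the input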
