
Commit 05efd43

Automated tutorials push
1 parent 97ebc8d commit 05efd43

File tree

195 files changed (+13477, -13515 lines)


_downloads/562d6bd0e2a429f010fcf8007f6a7cac/pinmem_nonblock.py

Lines changed: 1 addition & 1 deletion
@@ -547,7 +547,7 @@ def pin_copy_to_device_nonblocking(*tensors):
 
 i = -1
 for i in range(100):
-    # Create a tensor in pin-memory
+    # Create a tensor in pageable memory
     cpu_tensor = torch.ones(1024, 1024)
     torch.cuda.synchronize()
     # Send the tensor to CUDA
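
The one-word fix is substantive: `torch.ones` allocates ordinary pageable host memory, so the old comment mislabeled the tensor. Below is a minimal sketch of the distinction the corrected comment draws, assuming a CUDA-capable machine (variable names are illustrative, not part of the tutorial):

    import torch

    # Pageable host memory: a plain CPU allocation, as in the tutorial's loop.
    pageable = torch.ones(1024, 1024)

    # Page-locked (pinned) host memory: eligible for truly asynchronous copies.
    pinned = torch.ones(1024, 1024).pin_memory()

    # With pageable memory, non_blocking=True cannot fully overlap the transfer;
    # with pinned memory, the copy can run concurrently with host code.
    a = pageable.to("cuda", non_blocking=True)
    b = pinned.to("cuda", non_blocking=True)
    torch.cuda.synchronize()  # wait for any in-flight transfers before timing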

_downloads/6a760a243fcbf87fb3368be3d4d860ee/pinmem_nonblock.ipynb

Lines changed: 1 addition & 1 deletion
@@ -758,7 +758,7 @@
 "source": [
 "i = -1\n",
 "for i in range(100):\n",
-"    # Create a tensor in pin-memory\n",
+"    # Create a tensor in pageable memory\n",
 "    cpu_tensor = torch.ones(1024, 1024)\n",
 "    torch.cuda.synchronize()\n",
 "    # Send the tensor to CUDA\n",

_images/sphx_glr_coding_ddpg_001.png

Binary file changed (+2.12 KB), along with a dozen other regenerated tutorial images (binary deltas between -7.9 KB and +5.89 KB).

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.
 
 
   0%| | 0/10000 [00:00<?, ?it/s]
-  8%|8 | 800/10000 [00:00<00:05, 1706.86it/s]
- 16%|#6 | 1600/10000 [00:02<00:16, 494.56it/s]
- 24%|##4 | 2400/10000 [00:03<00:10, 714.14it/s]
- 32%|###2 | 3200/10000 [00:03<00:07, 927.36it/s]
- 40%|#### | 4000/10000 [00:04<00:05, 1108.64it/s]
- 48%|####8 | 4800/10000 [00:04<00:04, 1263.34it/s]
- 56%|#####6 | 5600/10000 [00:05<00:03, 1386.67it/s]
- reward: -2.53 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-2.87/6.26, grad norm= 172.44, loss_value= 372.48, loss_actor= 13.77, target value: -17.13: 56%|#####6 | 5600/10000 [00:06<00:03, 1386.67it/s]
- reward: -2.53 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-2.87/6.26, grad norm= 172.44, loss_value= 372.48, loss_actor= 13.77, target value: -17.13: 64%|######4 | 6400/10000 [00:06<00:03, 914.00it/s]
- reward: -0.10 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-1.77/5.75, grad norm= 83.74, loss_value= 243.26, loss_actor= 12.18, target value: -11.92: 64%|######4 | 6400/10000 [00:07<00:03, 914.00it/s]
- reward: -0.10 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-1.77/5.75, grad norm= 83.74, loss_value= 243.26, loss_actor= 12.18, target value: -11.92: 72%|#######2 | 7200/10000 [00:08<00:03, 732.98it/s]
- reward: -3.06 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-2.10/6.37, grad norm= 175.25, loss_value= 410.52, loss_actor= 14.70, target value: -14.73: 72%|#######2 | 7200/10000 [00:09<00:03, 732.98it/s]
- reward: -3.06 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-2.10/6.37, grad norm= 175.25, loss_value= 410.52, loss_actor= 14.70, target value: -14.73: 80%|######## | 8000/10000 [00:09<00:03, 648.25it/s]
- reward: -5.12 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-2.76/5.11, grad norm= 204.29, loss_value= 256.34, loss_actor= 13.80, target value: -18.21: 80%|######## | 8000/10000 [00:10<00:03, 648.25it/s]
- reward: -5.12 (r0 = -2.38), reward eval: reward: 0.00, reward normalized=-2.76/5.11, grad norm= 204.29, loss_value= 256.34, loss_actor= 13.80, target value: -18.21: 88%|########8 | 8800/10000 [00:11<00:01, 603.43it/s]
- reward: -4.34 (r0 = -2.38), reward eval: reward: -3.72, reward normalized=-2.41/4.96, grad norm= 83.94, loss_value= 184.70, loss_actor= 12.14, target value: -16.55: 88%|########8 | 8800/10000 [00:14<00:01, 603.43it/s]
- reward: -4.34 (r0 = -2.38), reward eval: reward: -3.72, reward normalized=-2.41/4.96, grad norm= 83.94, loss_value= 184.70, loss_actor= 12.14, target value: -16.55: 96%|#########6| 9600/10000 [00:14<00:00, 408.81it/s]
- reward: -12.30 (r0 = -2.38), reward eval: reward: -3.72, reward normalized=-3.23/6.55, grad norm= 181.45, loss_value= 321.95, loss_actor= 15.06, target value: -22.91: 96%|#########6| 9600/10000 [00:15<00:00, 408.81it/s]
- reward: -12.30 (r0 = -2.38), reward eval: reward: -3.72, reward normalized=-3.23/6.55, grad norm= 181.45, loss_value= 321.95, loss_actor= 15.06, target value: -22.91: : 10400it [00:17, 371.73it/s]
- reward: -3.25 (r0 = -2.38), reward eval: reward: -3.72, reward normalized=-4.02/6.12, grad norm= 131.26, loss_value= 237.20, loss_actor= 23.33, target value: -27.38: : 10400it [00:18, 371.73it/s]
+  8%|8 | 800/10000 [00:00<00:05, 1686.02it/s]
+ 16%|#6 | 1600/10000 [00:03<00:17, 474.11it/s]
+ 24%|##4 | 2400/10000 [00:03<00:11, 671.52it/s]
+ 32%|###2 | 3200/10000 [00:04<00:07, 855.29it/s]
+ 40%|#### | 4000/10000 [00:04<00:05, 1017.39it/s]
+ 48%|####8 | 4800/10000 [00:05<00:04, 1163.51it/s]
+ 56%|#####6 | 5600/10000 [00:05<00:03, 1272.13it/s]
+ reward: -2.71 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.89/6.36, grad norm= 139.40, loss_value= 452.21, loss_actor= 15.36, target value: -17.24: 56%|#####6 | 5600/10000 [00:06<00:03, 1272.13it/s]
+ reward: -2.71 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.89/6.36, grad norm= 139.40, loss_value= 452.21, loss_actor= 15.36, target value: -17.24: 64%|######4 | 6400/10000 [00:07<00:04, 870.02it/s]
+ reward: -0.14 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.44/5.99, grad norm= 161.20, loss_value= 324.10, loss_actor= 12.66, target value: -15.55: 64%|######4 | 6400/10000 [00:07<00:04, 870.02it/s]
+ reward: -0.14 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.44/5.99, grad norm= 161.20, loss_value= 324.10, loss_actor= 12.66, target value: -15.55: 72%|#######2 | 7200/10000 [00:08<00:03, 703.58it/s]
+ reward: -2.45 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.42/5.98, grad norm= 130.59, loss_value= 312.04, loss_actor= 12.01, target value: -15.33: 72%|#######2 | 7200/10000 [00:09<00:03, 703.58it/s]
+ reward: -2.45 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.42/5.98, grad norm= 130.59, loss_value= 312.04, loss_actor= 12.01, target value: -15.33: 80%|######## | 8000/10000 [00:10<00:03, 624.50it/s]
+ reward: -4.12 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.73/5.00, grad norm= 205.76, loss_value= 217.68, loss_actor= 16.33, target value: -18.26: 80%|######## | 8000/10000 [00:11<00:03, 624.50it/s]
+ reward: -4.12 (r0 = -3.45), reward eval: reward: 0.01, reward normalized=-2.73/5.00, grad norm= 205.76, loss_value= 217.68, loss_actor= 16.33, target value: -18.26: 88%|########8 | 8800/10000 [00:12<00:02, 581.06it/s]
+ reward: -5.10 (r0 = -3.45), reward eval: reward: -7.09, reward normalized=-2.55/5.13, grad norm= 123.25, loss_value= 197.78, loss_actor= 19.07, target value: -17.02: 88%|########8 | 8800/10000 [00:14<00:02, 581.06it/s]
+ reward: -5.10 (r0 = -3.45), reward eval: reward: -7.09, reward normalized=-2.55/5.13, grad norm= 123.25, loss_value= 197.78, loss_actor= 19.07, target value: -17.02: 96%|#########6| 9600/10000 [00:15<00:01, 391.08it/s]
+ reward: -5.12 (r0 = -3.45), reward eval: reward: -7.09, reward normalized=-3.09/5.14, grad norm= 269.24, loss_value= 261.51, loss_actor= 18.04, target value: -22.19: 96%|#########6| 9600/10000 [00:16<00:01, 391.08it/s]
+ reward: -5.12 (r0 = -3.45), reward eval: reward: -7.09, reward normalized=-3.09/5.14, grad norm= 269.24, loss_value= 261.51, loss_actor= 18.04, target value: -22.19: : 10400it [00:18, 354.47it/s]
+ reward: -5.27 (r0 = -3.45), reward eval: reward: -7.09, reward normalized=-3.08/5.03, grad norm= 66.19, loss_value= 194.93, loss_actor= 20.81, target value: -22.46: : 10400it [00:19, 354.47it/s]
 
 
 
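For readers decoding these logs: `loss_value` and `loss_actor` are the critic and actor objectives of DDPG. A hedged sketch of the standard formulation (the helper below is illustrative, not TorchRL's actual loss module):

    import torch
    import torch.nn.functional as F

    def ddpg_losses(critic, actor, target_critic, target_actor,
                    obs, action, reward, next_obs, done, gamma=0.99):
        # Critic target: one-step TD estimate built from the target networks.
        with torch.no_grad():
            next_action = target_actor(next_obs)
            td_target = reward + gamma * (1.0 - done) * target_critic(next_obs, next_action)
        # loss_value: regress Q(s, a) toward the TD target.
        loss_value = F.mse_loss(critic(obs, action), td_target)
        # loss_actor: ascend the critic's value of the actor's own action.
        loss_actor = -critic(obs, actor(obs)).mean()
        return loss_value, loss_actor
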
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 28.061 seconds)
+   **Total running time of the script:** ( 0 minutes 29.218 seconds)
 
 
 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
 .. code-block:: none
 
     loss: 5.167
-    elapsed time (seconds): 205.9
+    elapsed time (seconds): 199.5
     loss: 5.168
-    elapsed time (seconds): 117.1
+    elapsed time (seconds): 118.4
 
 
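The two `elapsed time` lines compare one evaluation pass of the float32 model against the dynamically quantized one. A sketch of the API that produces that comparison, using an illustrative feed-forward model rather than the tutorial's LSTM:

    import torch
    import torch.nn as nn

    float_model = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 512))

    # Dynamically quantize the Linear weights to int8; activations stay float
    # and are quantized on the fly at inference time.
    quantized_model = torch.quantization.quantize_dynamic(
        float_model, {nn.Linear}, dtype=torch.qint8
    )

    x = torch.randn(32, 512)
    with torch.no_grad():
        _ = float_model(x)      # the slower, float32 timing
        _ = quantized_model(x)  # the faster, dynamically quantized timing
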
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 5 minutes 31.533 seconds)
+   **Total running time of the script:** ( 5 minutes 26.291 seconds)
 
 
 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 34 additions & 34 deletions
@@ -410,33 +410,33 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth
 
   0%| | 0.00/548M [00:00<?, ?B/s]
-  3%|3 | 17.5M/548M [00:00<00:03, 182MB/s]
-  6%|6 | 34.9M/548M [00:00<00:03, 150MB/s]
- 10%|# | 55.5M/548M [00:00<00:02, 177MB/s]
- 14%|#3 | 76.1M/548M [00:00<00:02, 191MB/s]
- 18%|#7 | 96.8M/548M [00:00<00:02, 200MB/s]
- 21%|##1 | 117M/548M [00:00<00:02, 205MB/s]
- 25%|##5 | 138M/548M [00:00<00:02, 208MB/s]
- 29%|##8 | 159M/548M [00:00<00:01, 211MB/s]
- 33%|###2 | 179M/548M [00:00<00:01, 212MB/s]
- 36%|###6 | 200M/548M [00:01<00:01, 214MB/s]
- 40%|#### | 221M/548M [00:01<00:01, 214MB/s]
- 44%|####4 | 241M/548M [00:01<00:01, 215MB/s]
- 48%|####7 | 262M/548M [00:01<00:01, 215MB/s]
- 52%|#####1 | 282M/548M [00:01<00:01, 215MB/s]
- 55%|#####5 | 303M/548M [00:01<00:01, 215MB/s]
- 59%|#####9 | 324M/548M [00:01<00:01, 215MB/s]
- 63%|######2 | 344M/548M [00:01<00:00, 215MB/s]
- 67%|######6 | 365M/548M [00:01<00:00, 216MB/s]
- 70%|####### | 386M/548M [00:01<00:00, 216MB/s]
- 74%|#######4 | 406M/548M [00:02<00:00, 216MB/s]
- 78%|#######7 | 427M/548M [00:02<00:00, 216MB/s]
- 82%|########1 | 448M/548M [00:02<00:00, 216MB/s]
- 85%|########5 | 468M/548M [00:02<00:00, 216MB/s]
- 89%|########9 | 489M/548M [00:02<00:00, 216MB/s]
- 93%|#########3| 510M/548M [00:02<00:00, 216MB/s]
- 97%|#########6| 531M/548M [00:02<00:00, 216MB/s]
-100%|##########| 548M/548M [00:02<00:00, 211MB/s]
+  4%|3 | 20.0M/548M [00:00<00:02, 209MB/s]
+  7%|7 | 40.4M/548M [00:00<00:02, 211MB/s]
+ 11%|#1 | 60.9M/548M [00:00<00:02, 213MB/s]
+ 15%|#4 | 81.5M/548M [00:00<00:02, 214MB/s]
+ 19%|#8 | 102M/548M [00:00<00:02, 214MB/s]
+ 22%|##2 | 122M/548M [00:00<00:02, 214MB/s]
+ 26%|##6 | 143M/548M [00:00<00:01, 215MB/s]
+ 30%|##9 | 164M/548M [00:00<00:01, 215MB/s]
+ 34%|###3 | 184M/548M [00:00<00:01, 215MB/s]
+ 37%|###7 | 205M/548M [00:01<00:01, 215MB/s]
+ 41%|####1 | 225M/548M [00:01<00:01, 215MB/s]
+ 45%|####4 | 246M/548M [00:01<00:01, 215MB/s]
+ 49%|####8 | 267M/548M [00:01<00:01, 215MB/s]
+ 52%|#####2 | 287M/548M [00:01<00:01, 215MB/s]
+ 56%|#####6 | 308M/548M [00:01<00:01, 215MB/s]
+ 60%|#####9 | 328M/548M [00:01<00:01, 215MB/s]
+ 64%|######3 | 349M/548M [00:01<00:00, 215MB/s]
+ 67%|######7 | 370M/548M [00:01<00:00, 215MB/s]
+ 71%|#######1 | 390M/548M [00:01<00:00, 215MB/s]
+ 75%|#######4 | 411M/548M [00:02<00:00, 215MB/s]
+ 79%|#######8 | 431M/548M [00:02<00:00, 215MB/s]
+ 82%|########2 | 452M/548M [00:02<00:00, 215MB/s]
+ 86%|########6 | 473M/548M [00:02<00:00, 215MB/s]
+ 90%|########9 | 493M/548M [00:02<00:00, 215MB/s]
+ 94%|#########3| 514M/548M [00:02<00:00, 215MB/s]
+ 98%|#########7| 534M/548M [00:02<00:00, 215MB/s]
+100%|##########| 548M/548M [00:02<00:00, 215MB/s]
 
 
 
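The download log is torchvision fetching the pretrained VGG-19 checkpoint the tutorial uses as its feature extractor. Roughly what triggers it, assuming a recent torchvision with the `weights` enum API:

    import torchvision.models as models

    # The first call downloads vgg19-dcbb9e9d.pth into the torch hub cache;
    # .features keeps only the convolutional layers used for style transfer,
    # and .eval() switches off training-mode behavior.
    cnn = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1).features.eval()
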
@@ -757,22 +757,22 @@ Finally, we can run the algorithm.
 
 Optimizing..
 run [50]:
-Style Loss : 4.183072 Content Loss: 4.165503
+Style Loss : 3.933726 Content Loss: 4.069746
 
 run [100]:
-Style Loss : 1.164476 Content Loss: 3.053042
+Style Loss : 1.115454 Content Loss: 3.014786
 
 run [150]:
-Style Loss : 0.727555 Content Loss: 2.657095
+Style Loss : 0.703395 Content Loss: 2.641813
 
 run [200]:
-Style Loss : 0.487632 Content Loss: 2.498033
+Style Loss : 0.472471 Content Loss: 2.486287
 
 run [250]:
-Style Loss : 0.352444 Content Loss: 2.405909
+Style Loss : 0.340303 Content Loss: 2.399467
 
 run [300]:
-Style Loss : 0.266894 Content Loss: 2.351026
+Style Loss : 0.262586 Content Loss: 2.348407
 
 
 
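The `Style Loss` and `Content Loss` values printed at each run follow the standard Gatys formulation the tutorial implements: content loss is an MSE between feature maps, style loss an MSE between their Gram matrices. A condensed sketch (helper names are illustrative):

    import torch
    import torch.nn.functional as F

    def gram_matrix(feat):
        # feat: (batch, channels, height, width) activation from a VGG layer
        b, c, h, w = feat.size()
        flat = feat.view(b * c, h * w)
        # Normalize so the loss scale is independent of the feature map size.
        return (flat @ flat.t()) / (b * c * h * w)

    def style_loss(input_feat, target_feat):
        return F.mse_loss(gram_matrix(input_feat), gram_matrix(target_feat))

    def content_loss(input_feat, target_feat):
        return F.mse_loss(input_feat, target_feat)
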
@@ -781,7 +781,7 @@ Finally, we can run the algorithm.
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 34.609 seconds)
+   **Total running time of the script:** ( 0 minutes 36.551 seconds)
 
 
 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
 
 .. rst-class:: sphx-glr-timing
 
-   **Total running time of the script:** ( 0 minutes 0.592 seconds)
+   **Total running time of the script:** ( 0 minutes 0.613 seconds)
 
 
 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
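
The hunk's context line refers to the tutorial's custom autograd `Function` objects whose forward and backward passes are written in NumPy. A minimal sketch of that pattern, using a toy exp op rather than the tutorial's examples:

    import numpy as np
    import torch

    class NumpyExp(torch.autograd.Function):
        @staticmethod
        def forward(ctx, x):
            # Drop to NumPy for the actual computation.
            result = np.exp(x.detach().numpy())
            out = torch.from_numpy(result)
            ctx.save_for_backward(out)
            return out

        @staticmethod
        def backward(ctx, grad_output):
            # d/dx exp(x) = exp(x), so reuse the saved forward output.
            (out,) = ctx.saved_tensors
            return grad_output * out

    # Usage: gradients flow through the NumPy computation.
    x = torch.randn(5, requires_grad=True)
    y = NumpyExp.apply(x).sum()
    y.backward()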
