Commit ff7c860: Automated tutorials push
1 parent 0cfccf2

364 files changed (+11591, -10500 lines)


_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "d3c44a3b",
+"id": "fe12b090",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "cadbfc27",
+"id": "43856939",
 "metadata": {},
 "source": [
 "\n",

_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "545968d6",
+"id": "e8d8e64a",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -47,7 +47,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "b9c43220",
+"id": "1327c996",
 "metadata": {},
 "source": [
 "\n",

_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "dfe3ad0c",
+"id": "50219ef6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "54349c43",
+"id": "47571eb9",
 "metadata": {},
 "source": [
 "\n",

_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "08989ae5",
+"id": "65d56eb6",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "56db2c22",
+"id": "cc945e9a",
 "metadata": {},
 "source": [
 "\n",

_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "d51cb2b4",
+"id": "8df026d7",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "99578330",
+"id": "6b78ab0a",
 "metadata": {},
 "source": [
 "\n",

_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "e8ab2e66",
+"id": "d1891a06",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "821eef43",
+"id": "05475c80",
 "metadata": {},
 "source": [
 "\n",

_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "2fc67b48",
+"id": "a654fa7e",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "e948beaa",
+"id": "02664d5a",
 "metadata": {},
 "source": [
 "\n",

_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "109c2d90",
+"id": "e39ca173",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "655c25fd",
+"id": "86edf5df",
 "metadata": {},
 "source": [
 "\n",

_images/fsdp_tp.png (binary, 250 KB)

_images/loss_parallel.png (binary, 290 KB)

_images/megatron_lm.png (binary, 774 KB)

_images/sphx_glr_coding_ddpg_001.png (binary, -1.93 KB)

Further regenerated plot images (filenames not shown): +1.6 KB, -199 Bytes, -5.31 KB, -902 Bytes

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1632,26 +1632,26 @@ modules we need.


   0%| | 0/10000 [00:00<?, ?it/s]
-  8%|8 | 800/10000 [00:00<00:03, 2743.96it/s]
- 16%|#6 | 1600/10000 [00:01<00:11, 763.17it/s]
- 24%|##4 | 2400/10000 [00:02<00:06, 1135.11it/s]
- 32%|###2 | 3200/10000 [00:02<00:04, 1499.19it/s]
- 40%|#### | 4000/10000 [00:02<00:03, 1815.64it/s]
- 48%|####8 | 4800/10000 [00:02<00:02, 2086.49it/s]
- 56%|#####6 | 5600/10000 [00:03<00:01, 2296.08it/s]
- reward: -1.88 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-1.03/6.27, grad norm= 311.07, loss_value= 446.30, loss_actor= 12.66, target value: -6.03: 56%|#####6 | 5600/10000 [00:05<00:01, 2296.08it/s]
- reward: -1.88 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-1.03/6.27, grad norm= 311.07, loss_value= 446.30, loss_actor= 12.66, target value: -6.03: 64%|######4 | 6400/10000 [00:05<00:04, 746.08it/s]
- reward: -2.00 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.21/5.59, grad norm= 95.63, loss_value= 265.29, loss_actor= 13.03, target value: -14.48: 64%|######4 | 6400/10000 [00:07<00:04, 746.08it/s]
- reward: -2.00 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.21/5.59, grad norm= 95.63, loss_value= 265.29, loss_actor= 13.03, target value: -14.48: 72%|#######2 | 7200/10000 [00:08<00:05, 493.59it/s]
- reward: -4.82 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.49/5.17, grad norm= 53.90, loss_value= 204.50, loss_actor= 14.36, target value: -15.45: 72%|#######2 | 7200/10000 [00:10<00:05, 493.59it/s]
- reward: -4.82 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.49/5.17, grad norm= 53.90, loss_value= 204.50, loss_actor= 14.36, target value: -15.45: 80%|######## | 8000/10000 [00:11<00:04, 401.43it/s]
- reward: -5.29 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.87/5.49, grad norm= 188.91, loss_value= 289.90, loss_actor= 17.97, target value: -19.30: 80%|######## | 8000/10000 [00:13<00:04, 401.43it/s]
- reward: -5.29 (r0 = -2.36), reward eval: reward: -0.00, reward normalized=-2.87/5.49, grad norm= 188.91, loss_value= 289.90, loss_actor= 17.97, target value: -19.30: 88%|########8 | 8800/10000 [00:14<00:03, 356.31it/s]
- reward: -3.86 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-2.50/5.87, grad norm= 115.25, loss_value= 303.35, loss_actor= 18.57, target value: -17.39: 88%|########8 | 8800/10000 [00:17<00:03, 356.31it/s]
- reward: -3.86 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-2.50/5.87, grad norm= 115.25, loss_value= 303.35, loss_actor= 18.57, target value: -17.39: 96%|#########6| 9600/10000 [00:18<00:01, 287.48it/s]
- reward: -4.78 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-3.09/4.74, grad norm= 56.83, loss_value= 192.36, loss_actor= 19.61, target value: -21.98: 96%|#########6| 9600/10000 [00:20<00:01, 287.48it/s]
- reward: -4.78 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-3.09/4.74, grad norm= 56.83, loss_value= 192.36, loss_actor= 19.61, target value: -21.98: : 10400it [00:22, 254.71it/s]
- reward: -5.00 (r0 = -2.36), reward eval: reward: -5.69, reward normalized=-3.24/4.47, grad norm= 111.75, loss_value= 225.29, loss_actor= 17.31, target value: -21.10: : 10400it [00:24, 254.71it/s]
+  8%|8 | 800/10000 [00:00<00:03, 2681.65it/s]
+ 16%|#6 | 1600/10000 [00:01<00:11, 750.04it/s]
+ 24%|##4 | 2400/10000 [00:02<00:06, 1119.80it/s]
+ 32%|###2 | 3200/10000 [00:02<00:04, 1481.12it/s]
+ 40%|#### | 4000/10000 [00:02<00:03, 1791.45it/s]
+ 48%|####8 | 4800/10000 [00:03<00:02, 2062.16it/s]
+ 56%|#####6 | 5600/10000 [00:03<00:01, 2276.26it/s]
+ reward: -2.31 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.09/6.10, grad norm= 161.75, loss_value= 288.41, loss_actor= 15.07, target value: -12.50: 56%|#####6 | 5600/10000 [00:05<00:01, 2276.26it/s]
+ reward: -2.31 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.09/6.10, grad norm= 161.75, loss_value= 288.41, loss_actor= 15.07, target value: -12.50: 64%|######4 | 6400/10000 [00:06<00:04, 720.79it/s]
+ reward: -2.40 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.59/5.25, grad norm= 45.96, loss_value= 241.49, loss_actor= 16.61, target value: -16.18: 64%|######4 | 6400/10000 [00:07<00:04, 720.79it/s]
+ reward: -2.40 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.59/5.25, grad norm= 45.96, loss_value= 241.49, loss_actor= 16.61, target value: -16.18: 72%|#######2 | 7200/10000 [00:08<00:05, 484.05it/s]
+ reward: -3.86 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-3.02/5.04, grad norm= 102.12, loss_value= 216.35, loss_actor= 15.81, target value: -18.89: 72%|#######2 | 7200/10000 [00:10<00:05, 484.05it/s]
+ reward: -3.86 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-3.02/5.04, grad norm= 102.12, loss_value= 216.35, loss_actor= 15.81, target value: -18.89: 80%|######## | 8000/10000 [00:11<00:05, 396.38it/s]
+ reward: -3.80 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.58/5.09, grad norm= 66.95, loss_value= 269.20, loss_actor= 17.29, target value: -16.75: 80%|######## | 8000/10000 [00:13<00:05, 396.38it/s]
+ reward: -3.80 (r0 = -2.48), reward eval: reward: -0.00, reward normalized=-2.58/5.09, grad norm= 66.95, loss_value= 269.20, loss_actor= 17.29, target value: -16.75: 88%|########8 | 8800/10000 [00:14<00:03, 352.65it/s]
+ reward: -5.27 (r0 = -2.48), reward eval: reward: -3.84, reward normalized=-2.31/4.99, grad norm= 155.37, loss_value= 189.80, loss_actor= 17.74, target value: -15.70: 88%|########8 | 8800/10000 [00:17<00:03, 352.65it/s]
+ reward: -5.27 (r0 = -2.48), reward eval: reward: -3.84, reward normalized=-2.31/4.99, grad norm= 155.37, loss_value= 189.80, loss_actor= 17.74, target value: -15.70: 96%|#########6| 9600/10000 [00:18<00:01, 284.22it/s]
+ reward: -3.22 (r0 = -2.48), reward eval: reward: -3.84, reward normalized=-2.48/4.87, grad norm= 103.23, loss_value= 260.72, loss_actor= 17.05, target value: -18.01: 96%|#########6| 9600/10000 [00:20<00:01, 284.22it/s]
+ reward: -3.22 (r0 = -2.48), reward eval: reward: -3.84, reward normalized=-2.48/4.87, grad norm= 103.23, loss_value= 260.72, loss_actor= 17.05, target value: -18.01: : 10400it [00:22, 252.54it/s]
+ reward: -3.46 (r0 = -2.48), reward eval: reward: -3.84, reward normalized=-2.95/4.08, grad norm= 169.45, loss_value= 217.22, loss_actor= 21.39, target value: -19.89: : 10400it [00:24, 252.54it/s]



@@ -1721,7 +1721,7 @@ To iterate further on this loss module we might consider:

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 28.685 seconds)
+**Total running time of the script:** ( 0 minutes 28.929 seconds)


 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -516,9 +516,9 @@ models run single threaded.
 .. code-block:: none

 loss: 5.167
-elapsed time (seconds): 210.7
+elapsed time (seconds): 211.1
 loss: 5.168
-elapsed time (seconds): 120.7
+elapsed time (seconds): 121.5


@@ -540,7 +540,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 5 minutes 40.322 seconds)
+**Total running time of the script:** ( 5 minutes 41.459 seconds)


 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 38 additions & 38 deletions
@@ -410,37 +410,37 @@ network to evaluation mode using ``.eval()``.
 Downloading: "https://download.pytorch.org/models/vgg19-dcbb9e9d.pth" to /var/lib/ci-user/.cache/torch/hub/checkpoints/vgg19-dcbb9e9d.pth

  0%| | 0.00/548M [00:00<?, ?B/s]
-  3%|3 | 17.6M/548M [00:00<00:03, 184MB/s]
-  7%|6 | 35.6M/548M [00:00<00:02, 186MB/s]
- 10%|9 | 53.8M/548M [00:00<00:02, 188MB/s]
- 13%|#3 | 72.0M/548M [00:00<00:02, 189MB/s]
- 16%|#6 | 90.1M/548M [00:00<00:02, 189MB/s]
- 20%|#9 | 108M/548M [00:00<00:02, 189MB/s]
- 23%|##3 | 126M/548M [00:00<00:02, 189MB/s]
- 26%|##6 | 144M/548M [00:00<00:02, 189MB/s]
- 30%|##9 | 163M/548M [00:00<00:02, 190MB/s]
- 33%|###3 | 181M/548M [00:01<00:02, 190MB/s]
- 36%|###6 | 199M/548M [00:01<00:01, 190MB/s]
- 40%|###9 | 217M/548M [00:01<00:01, 190MB/s]
- 43%|####2 | 236M/548M [00:01<00:01, 190MB/s]
- 46%|####6 | 254M/548M [00:01<00:01, 190MB/s]
- 50%|####9 | 272M/548M [00:01<00:01, 190MB/s]
- 53%|#####2 | 290M/548M [00:01<00:01, 190MB/s]
- 56%|#####6 | 309M/548M [00:01<00:01, 190MB/s]
- 60%|#####9 | 327M/548M [00:01<00:01, 190MB/s]
- 63%|######2 | 345M/548M [00:01<00:01, 190MB/s]
- 66%|######6 | 363M/548M [00:02<00:01, 190MB/s]
- 70%|######9 | 382M/548M [00:02<00:00, 190MB/s]
- 73%|#######2 | 400M/548M [00:02<00:00, 190MB/s]
- 76%|#######6 | 418M/548M [00:02<00:00, 190MB/s]
- 80%|#######9 | 436M/548M [00:02<00:00, 190MB/s]
- 83%|########2 | 454M/548M [00:02<00:00, 189MB/s]
- 86%|########6 | 472M/548M [00:02<00:00, 189MB/s]
- 89%|########9 | 490M/548M [00:02<00:00, 188MB/s]
- 93%|#########2| 508M/548M [00:02<00:00, 189MB/s]
- 96%|#########6| 527M/548M [00:02<00:00, 189MB/s]
- 99%|#########9| 545M/548M [00:03<00:00, 189MB/s]
-100%|##########| 548M/548M [00:03<00:00, 189MB/s]
+  3%|3 | 17.1M/548M [00:00<00:03, 178MB/s]
+  6%|6 | 35.0M/548M [00:00<00:02, 183MB/s]
+ 10%|9 | 52.9M/548M [00:00<00:02, 185MB/s]
+ 13%|#2 | 70.8M/548M [00:00<00:02, 185MB/s]
+ 16%|#6 | 88.5M/548M [00:00<00:02, 186MB/s]
+ 19%|#9 | 106M/548M [00:00<00:02, 186MB/s]
+ 23%|##2 | 124M/548M [00:00<00:02, 186MB/s]
+ 26%|##5 | 142M/548M [00:00<00:02, 186MB/s]
+ 29%|##9 | 160M/548M [00:00<00:02, 186MB/s]
+ 32%|###2 | 178M/548M [00:01<00:02, 187MB/s]
+ 36%|###5 | 196M/548M [00:01<00:01, 187MB/s]
+ 39%|###8 | 214M/548M [00:01<00:01, 187MB/s]
+ 42%|####2 | 232M/548M [00:01<00:01, 187MB/s]
+ 46%|####5 | 249M/548M [00:01<00:01, 182MB/s]
+ 49%|####8 | 267M/548M [00:01<00:01, 183MB/s]
+ 52%|#####2 | 285M/548M [00:01<00:01, 184MB/s]
+ 55%|#####5 | 303M/548M [00:01<00:01, 185MB/s]
+ 59%|#####8 | 321M/548M [00:01<00:01, 185MB/s]
+ 62%|######1 | 339M/548M [00:01<00:01, 185MB/s]
+ 65%|######5 | 356M/548M [00:02<00:01, 186MB/s]
+ 68%|######8 | 374M/548M [00:02<00:00, 186MB/s]
+ 72%|#######1 | 392M/548M [00:02<00:00, 186MB/s]
+ 75%|#######4 | 410M/548M [00:02<00:00, 186MB/s]
+ 78%|#######8 | 428M/548M [00:02<00:00, 186MB/s]
+ 81%|########1 | 446M/548M [00:02<00:00, 186MB/s]
+ 85%|########4 | 464M/548M [00:02<00:00, 186MB/s]
+ 88%|########7 | 482M/548M [00:02<00:00, 186MB/s]
+ 91%|#########1| 500M/548M [00:02<00:00, 186MB/s]
+ 94%|#########4| 517M/548M [00:02<00:00, 186MB/s]
+ 98%|#########7| 535M/548M [00:03<00:00, 179MB/s]
+100%|##########| 548M/548M [00:03<00:00, 185MB/s]


@@ -761,22 +761,22 @@ Finally, we can run the algorithm.

 Optimizing..
 run [50]:
-Style Loss : 4.251205 Content Loss: 4.215905
+Style Loss : 3.993423 Content Loss: 4.128504

 run [100]:
-Style Loss : 1.166493 Content Loss: 3.052896
+Style Loss : 1.125479 Content Loss: 3.029766

 run [150]:
-Style Loss : 0.716793 Content Loss: 2.662554
+Style Loss : 0.713655 Content Loss: 2.653280

 run [200]:
-Style Loss : 0.478618 Content Loss: 2.496649
+Style Loss : 0.492075 Content Loss: 2.497542

 run [250]:
-Style Loss : 0.348233 Content Loss: 2.406508
+Style Loss : 0.352248 Content Loss: 2.405735

 run [300]:
-Style Loss : 0.266141 Content Loss: 2.351221
+Style Loss : 0.269499 Content Loss: 2.351811


@@ -785,7 +785,7 @@ Finally, we can run the algorithm.

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 36.865 seconds)
+**Total running time of the script:** ( 0 minutes 36.880 seconds)


 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt

 .. rst-class:: sphx-glr-timing

-**Total running time of the script:** ( 0 minutes 0.591 seconds)
+**Total running time of the script:** ( 0 minutes 0.601 seconds)


 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:

0 commit comments