Commit 2093d74

Automated tutorials push
1 parent 64c25e5 commit 2093d74

401 files changed: +12709 / -18600 lines changed


_downloads/3195443a0ced3cabc0ad643537bdb5cd/introyt1_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "d138a7e2",
+"id": "aa160397",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "c99a9b4c",
+"id": "190f9950",
 "metadata": {},
 "source": [
 "\n",

_downloads/4355e2cef7d17548f1e25f97a62828c4/template_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -31,7 +31,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "71f07c09",
+"id": "48d2c3c7",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -47,7 +47,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "89a9c73d",
+"id": "21d0685d",
 "metadata": {},
 "source": [
 "\n",

_downloads/63a0f0fc7b3ffb15d3a5ac8db3d521ee/tensors_deeper_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "31e0c762",
+"id": "c1aebcb0",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "95ad8926",
+"id": "0e2aaa9c",
 "metadata": {},
 "source": [
 "\n",

_downloads/6b019e0b5f84b568fcca1120bd28e230/torch_compile_tutorial.py

Lines changed: 4 additions & 0 deletions
@@ -30,6 +30,10 @@
 # - ``numpy``
 # - ``scipy``
 # - ``tabulate``
+#
+# **System Requirements**
+# - A C++ compiler, such as ``g++``
+# - Python development package (``python-devel``/``python-dev``)
 
 ######################################################################
 # NOTE: a modern NVIDIA GPU (H100, A100, or V100) is recommended for this tutorial in
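A minimal sketch of how the newly documented system requirements could be verified before running the tutorial, assuming a Unix-like system and CPython; it uses only standard-library calls (``shutil.which``, ``sysconfig.get_paths``) and is illustrative rather than part of the tutorial itself:

# Illustrative prerequisite check (assumption: Unix-like system, CPython).
import os
import shutil
import sysconfig

# The tutorial's system requirements list a C++ compiler such as g++;
# check whether one is discoverable on PATH.
cxx = shutil.which("g++") or shutil.which("c++")
print("C++ compiler:", cxx if cxx else "not found - install g++")

# The Python development package (python-devel/python-dev) provides Python.h
# in the include directory reported by sysconfig.
python_h = os.path.join(sysconfig.get_paths()["include"], "Python.h")
print("Python.h:", python_h if os.path.exists(python_h)
      else "not found - install python-devel/python-dev")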

_downloads/770632dd3941d2a51b831c52ded57aa2/trainingyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "c0fa3ab2",
+"id": "c38505ba",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "e3be9147",
+"id": "78c5656f",
 "metadata": {},
 "source": [
 "\n",

_downloads/96ad88eb476f41a5403dcdade086afb8/torch_compile_tutorial.ipynb

Lines changed: 4 additions & 1 deletion
@@ -48,7 +48,10 @@
 "- `torchvision`\n",
 "- `numpy`\n",
 "- `scipy`\n",
-"- `tabulate`\n"
+"- `tabulate`\n",
+"\n",
+"**System Requirements** - A C++ compiler, such as `g++` - Python\n",
+"development package (`python-devel`/`python-dev`)\n"
 ]
 },
 {

_downloads/c28f42852d456daf9af72da6c6909556/captumyt.ipynb

Lines changed: 2 additions & 2 deletions
@@ -37,7 +37,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "06a5e4e8",
+"id": "6a17d7ee",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -53,7 +53,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "a1ad12c3",
+"id": "1a17e29f",
 "metadata": {},
 "source": [
 "\n",

_downloads/e2e556f6b4693c2cef716dd7f40caaf6/tensorboardyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -35,7 +35,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "43435335",
+"id": "84ecc7cc",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -51,7 +51,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "a8314615",
+"id": "580590a2",
 "metadata": {},
 "source": [
 "\n",

_downloads/ed9d4f94afb79f7dada6742a06c486a5/autogradyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "0d66ae34",
+"id": "c18fa265",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "90e25f71",
+"id": "77d9b723",
 "metadata": {},
 "source": [
 "\n",

_downloads/fe726e041160526cf828806536922cf6/modelsyt_tutorial.ipynb

Lines changed: 2 additions & 2 deletions
@@ -34,7 +34,7 @@
 {
 "cell_type": "code",
 "execution_count": null,
-"id": "89c30003",
+"id": "277d217e",
 "metadata": {},
 "outputs": [],
 "source": [
@@ -50,7 +50,7 @@
 },
 {
 "cell_type": "markdown",
-"id": "96d0efc3",
+"id": "c51520d0",
 "metadata": {},
 "source": [
 "\n",
_images/sphx_glr_coding_ddpg_001.png

390 Bytes

_sources/advanced/coding_ddpg.rst.txt

Lines changed: 21 additions & 21 deletions
@@ -1634,26 +1634,26 @@ modules we need.
 
 
 0%| | 0/10000 [00:00<?, ?it/s]
-8%|8 | 800/10000 [00:00<00:06, 1436.54it/s]
-16%|#6 | 1600/10000 [00:03<00:20, 402.61it/s]
-24%|##4 | 2400/10000 [00:04<00:12, 592.23it/s]
-32%|###2 | 3200/10000 [00:04<00:08, 781.90it/s]
-40%|#### | 4000/10000 [00:05<00:06, 944.12it/s]
-48%|####8 | 4800/10000 [00:05<00:04, 1086.72it/s]
-56%|#####6 | 5600/10000 [00:06<00:03, 1188.73it/s]
-reward: -2.10 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-2.03/6.49, grad norm= 118.91, loss_value= 421.56, loss_actor= 13.52, target value: -12.78: 56%|#####6 | 5600/10000 [00:07<00:03, 1188.73it/s]
-reward: -2.10 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-2.03/6.49, grad norm= 118.91, loss_value= 421.56, loss_actor= 13.52, target value: -12.78: 64%|######4 | 6400/10000 [00:07<00:04, 818.74it/s]
-reward: -0.18 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-1.85/5.98, grad norm= 64.71, loss_value= 317.24, loss_actor= 12.01, target value: -11.96: 64%|######4 | 6400/10000 [00:08<00:04, 818.74it/s]
-reward: -0.18 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-1.85/5.98, grad norm= 64.71, loss_value= 317.24, loss_actor= 12.01, target value: -11.96: 72%|#######2 | 7200/10000 [00:09<00:04, 685.27it/s]
-reward: -1.72 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-1.86/5.98, grad norm= 67.66, loss_value= 282.10, loss_actor= 10.29, target value: -11.38: 72%|#######2 | 7200/10000 [00:10<00:04, 685.27it/s]
-reward: -1.72 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-1.86/5.98, grad norm= 67.66, loss_value= 282.10, loss_actor= 10.29, target value: -11.38: 80%|######## | 8000/10000 [00:11<00:03, 614.48it/s]
-reward: -4.75 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-2.61/4.96, grad norm= 89.98, loss_value= 182.59, loss_actor= 17.27, target value: -17.35: 80%|######## | 8000/10000 [00:11<00:03, 614.48it/s]
-reward: -4.75 (r0 = -2.77), reward eval: reward: -0.00, reward normalized=-2.61/4.96, grad norm= 89.98, loss_value= 182.59, loss_actor= 17.27, target value: -17.35: 88%|########8 | 8800/10000 [00:12<00:02, 578.12it/s]
-reward: -5.26 (r0 = -2.77), reward eval: reward: -5.32, reward normalized=-2.15/5.30, grad norm= 81.40, loss_value= 181.11, loss_actor= 13.41, target value: -14.11: 88%|########8 | 8800/10000 [00:15<00:02, 578.12it/s]
-reward: -5.26 (r0 = -2.77), reward eval: reward: -5.32, reward normalized=-2.15/5.30, grad norm= 81.40, loss_value= 181.11, loss_actor= 13.41, target value: -14.11: 96%|#########6| 9600/10000 [00:16<00:01, 368.51it/s]
-reward: -4.78 (r0 = -2.77), reward eval: reward: -5.32, reward normalized=-2.96/5.24, grad norm= 126.27, loss_value= 280.88, loss_actor= 15.18, target value: -21.11: 96%|#########6| 9600/10000 [00:17<00:01, 368.51it/s]
-reward: -4.78 (r0 = -2.77), reward eval: reward: -5.32, reward normalized=-2.96/5.24, grad norm= 126.27, loss_value= 280.88, loss_actor= 15.18, target value: -21.11: : 10400it [00:19, 346.55it/s]
-reward: -4.34 (r0 = -2.77), reward eval: reward: -5.32, reward normalized=-3.35/4.36, grad norm= 87.44, loss_value= 220.61, loss_actor= 17.77, target value: -24.09: : 10400it [00:20, 346.55it/s]
+8%|8 | 800/10000 [00:00<00:06, 1436.09it/s]
+16%|#6 | 1600/10000 [00:03<00:20, 400.47it/s]
+24%|##4 | 2400/10000 [00:04<00:12, 590.15it/s]
+32%|###2 | 3200/10000 [00:04<00:08, 773.73it/s]
+40%|#### | 4000/10000 [00:05<00:06, 936.46it/s]
+48%|####8 | 4800/10000 [00:05<00:04, 1074.74it/s]
+56%|#####6 | 5600/10000 [00:06<00:03, 1180.52it/s]
+reward: -2.48 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.54/6.46, grad norm= 94.40, loss_value= 408.21, loss_actor= 14.01, target value: -15.93: 56%|#####6 | 5600/10000 [00:07<00:03, 1180.52it/s]
+reward: -2.48 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.54/6.46, grad norm= 94.40, loss_value= 408.21, loss_actor= 14.01, target value: -15.93: 64%|######4 | 6400/10000 [00:07<00:04, 813.08it/s]
+reward: -0.12 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.23/5.55, grad norm= 59.75, loss_value= 249.44, loss_actor= 13.52, target value: -14.12: 64%|######4 | 6400/10000 [00:08<00:04, 813.08it/s]
+reward: -0.12 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.23/5.55, grad norm= 59.75, loss_value= 249.44, loss_actor= 13.52, target value: -14.12: 72%|#######2 | 7200/10000 [00:09<00:04, 678.33it/s]
+reward: -3.47 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.69/6.34, grad norm= 119.62, loss_value= 349.01, loss_actor= 15.25, target value: -16.23: 72%|#######2 | 7200/10000 [00:10<00:04, 678.33it/s]
+reward: -3.47 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.69/6.34, grad norm= 119.62, loss_value= 349.01, loss_actor= 15.25, target value: -16.23: 80%|######## | 8000/10000 [00:11<00:03, 606.90it/s]
+reward: -4.16 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.91/5.23, grad norm= 265.29, loss_value= 256.97, loss_actor= 13.80, target value: -19.01: 80%|######## | 8000/10000 [00:11<00:03, 606.90it/s]
+reward: -4.16 (r0 = -2.61), reward eval: reward: -0.01, reward normalized=-2.91/5.23, grad norm= 265.29, loss_value= 256.97, loss_actor= 13.80, target value: -19.01: 88%|########8 | 8800/10000 [00:12<00:02, 564.64it/s]
+reward: -4.98 (r0 = -2.61), reward eval: reward: -5.96, reward normalized=-2.89/5.09, grad norm= 36.41, loss_value= 230.77, loss_actor= 17.59, target value: -19.00: 88%|########8 | 8800/10000 [00:15<00:02, 564.64it/s]
+reward: -4.98 (r0 = -2.61), reward eval: reward: -5.96, reward normalized=-2.89/5.09, grad norm= 36.41, loss_value= 230.77, loss_actor= 17.59, target value: -19.00: 96%|#########6| 9600/10000 [00:16<00:01, 365.72it/s]
+reward: -4.28 (r0 = -2.61), reward eval: reward: -5.96, reward normalized=-2.86/5.27, grad norm= 124.44, loss_value= 230.24, loss_actor= 17.83, target value: -20.33: 96%|#########6| 9600/10000 [00:17<00:01, 365.72it/s]
+reward: -4.28 (r0 = -2.61), reward eval: reward: -5.96, reward normalized=-2.86/5.27, grad norm= 124.44, loss_value= 230.24, loss_actor= 17.83, target value: -20.33: : 10400it [00:19, 343.33it/s]
+reward: -4.70 (r0 = -2.61), reward eval: reward: -5.96, reward normalized=-3.60/4.12, grad norm= 92.27, loss_value= 170.00, loss_actor= 22.34, target value: -25.35: : 10400it [00:20, 343.33it/s]
 
 
 
@@ -1723,7 +1723,7 @@ To iterate further on this loss module we might consider:
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 31.426 seconds)
+**Total running time of the script:** ( 0 minutes 31.807 seconds)
 
 
 .. _sphx_glr_download_advanced_coding_ddpg.py:

_sources/advanced/dynamic_quantization_tutorial.rst.txt

Lines changed: 3 additions & 3 deletions
@@ -517,9 +517,9 @@ models run single threaded.
 .. code-block:: none
 
 loss: 5.167
-elapsed time (seconds): 210.8
+elapsed time (seconds): 207.8
 loss: 5.168
-elapsed time (seconds): 117.2
+elapsed time (seconds): 118.5
 
 
 
@@ -541,7 +541,7 @@ Thanks for reading! As always, we welcome any feedback, so please create an issu
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 5 minutes 36.731 seconds)
+**Total running time of the script:** ( 5 minutes 34.789 seconds)
 
 
 .. _sphx_glr_download_advanced_dynamic_quantization_tutorial.py:

_sources/advanced/neural_style_tutorial.rst.txt

Lines changed: 32 additions & 32 deletions
@@ -411,31 +411,31 @@ network to evaluation mode using ``.eval()``.
 
 0%| | 0.00/548M [00:00<?, ?B/s]
 4%|3 | 20.9M/548M [00:00<00:02, 218MB/s]
-8%|7 | 42.1M/548M [00:00<00:02, 220MB/s]
-12%|#1 | 63.2M/548M [00:00<00:02, 221MB/s]
-15%|#5 | 84.5M/548M [00:00<00:02, 221MB/s]
-19%|#9 | 106M/548M [00:00<00:02, 221MB/s]
-23%|##3 | 127M/548M [00:00<00:01, 222MB/s]
-27%|##7 | 148M/548M [00:00<00:01, 222MB/s]
-31%|### | 170M/548M [00:00<00:01, 222MB/s]
-35%|###4 | 191M/548M [00:00<00:01, 222MB/s]
-39%|###8 | 212M/548M [00:01<00:01, 222MB/s]
-43%|####2 | 233M/548M [00:01<00:01, 222MB/s]
-46%|####6 | 254M/548M [00:01<00:01, 222MB/s]
-50%|##### | 276M/548M [00:01<00:01, 222MB/s]
-54%|#####4 | 297M/548M [00:01<00:01, 222MB/s]
-58%|#####8 | 318M/548M [00:01<00:01, 222MB/s]
-62%|######1 | 340M/548M [00:01<00:00, 222MB/s]
-66%|######5 | 361M/548M [00:01<00:00, 222MB/s]
-70%|######9 | 382M/548M [00:01<00:00, 222MB/s]
-74%|#######3 | 403M/548M [00:01<00:00, 222MB/s]
-77%|#######7 | 424M/548M [00:02<00:00, 222MB/s]
-81%|########1 | 446M/548M [00:02<00:00, 222MB/s]
-85%|########5 | 467M/548M [00:02<00:00, 222MB/s]
-89%|########9 | 488M/548M [00:02<00:00, 223MB/s]
-93%|#########2| 510M/548M [00:02<00:00, 223MB/s]
-97%|#########6| 531M/548M [00:02<00:00, 223MB/s]
-100%|##########| 548M/548M [00:02<00:00, 222MB/s]
+8%|7 | 42.2M/548M [00:00<00:02, 221MB/s]
+12%|#1 | 63.5M/548M [00:00<00:02, 222MB/s]
+15%|#5 | 84.9M/548M [00:00<00:02, 223MB/s]
+19%|#9 | 106M/548M [00:00<00:02, 223MB/s]
+23%|##3 | 128M/548M [00:00<00:01, 223MB/s]
+27%|##7 | 149M/548M [00:00<00:01, 223MB/s]
+31%|###1 | 170M/548M [00:00<00:01, 223MB/s]
+35%|###4 | 192M/548M [00:00<00:01, 224MB/s]
+39%|###8 | 213M/548M [00:01<00:01, 224MB/s]
+43%|####2 | 235M/548M [00:01<00:01, 224MB/s]
+47%|####6 | 256M/548M [00:01<00:01, 224MB/s]
+51%|##### | 277M/548M [00:01<00:01, 224MB/s]
+55%|#####4 | 299M/548M [00:01<00:01, 224MB/s]
+58%|#####8 | 320M/548M [00:01<00:01, 224MB/s]
+62%|######2 | 342M/548M [00:01<00:00, 224MB/s]
+66%|######6 | 363M/548M [00:01<00:00, 224MB/s]
+70%|####### | 385M/548M [00:01<00:00, 224MB/s]
+74%|#######4 | 406M/548M [00:01<00:00, 224MB/s]
+78%|#######7 | 427M/548M [00:02<00:00, 223MB/s]
+82%|########1 | 449M/548M [00:02<00:00, 223MB/s]
+86%|########5 | 470M/548M [00:02<00:00, 223MB/s]
+90%|########9 | 492M/548M [00:02<00:00, 223MB/s]
+94%|#########3| 513M/548M [00:02<00:00, 223MB/s]
+97%|#########7| 534M/548M [00:02<00:00, 223MB/s]
+100%|##########| 548M/548M [00:02<00:00, 223MB/s]
 
 
 
@@ -756,22 +756,22 @@ Finally, we can run the algorithm.
 
 Optimizing..
 run [50]:
-Style Loss : 3.818673 Content Loss: 4.082992
+Style Loss : 4.072629 Content Loss: 4.158342
 
 run [100]:
-Style Loss : 1.115698 Content Loss: 3.015260
+Style Loss : 1.157763 Content Loss: 3.050233
 
 run [150]:
-Style Loss : 0.697345 Content Loss: 2.641728
+Style Loss : 0.714200 Content Loss: 2.651302
 
 run [200]:
-Style Loss : 0.465102 Content Loss: 2.482987
+Style Loss : 0.489686 Content Loss: 2.496655
 
 run [250]:
-Style Loss : 0.338386 Content Loss: 2.395916
+Style Loss : 0.354790 Content Loss: 2.407404
 
 run [300]:
-Style Loss : 0.258478 Content Loss: 2.345674
+Style Loss : 0.268439 Content Loss: 2.352382
 
 
 
@@ -780,7 +780,7 @@ Finally, we can run the algorithm.
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 36.060 seconds)
+**Total running time of the script:** ( 0 minutes 36.042 seconds)
 
 
 .. _sphx_glr_download_advanced_neural_style_tutorial.py:

_sources/advanced/numpy_extensions_tutorial.rst.txt

Lines changed: 1 addition & 1 deletion
@@ -303,7 +303,7 @@ The backward pass computes the gradient ``wrt`` the input and the gradient ``wrt
 
 .. rst-class:: sphx-glr-timing
 
-**Total running time of the script:** ( 0 minutes 0.613 seconds)
+**Total running time of the script:** ( 0 minutes 0.591 seconds)
 
 
 .. _sphx_glr_download_advanced_numpy_extensions_tutorial.py:
