
Commit 450b74a

Update README.md
Remove references to github.io docs that died with repo move
1 parent 10627bb commit 450b74a

File tree

1 file changed: +7 additions, -11 deletions


README.md

Lines changed: 7 additions & 11 deletions
```diff
@@ -439,9 +439,7 @@ The work of many others is present here. I've tried to make sure all source mate
 
 ## Models
 
-All model architecture families include variants with pretrained weights. There are specific model variants without any weights, it is NOT a bug. Help training new or better weights is always appreciated. Here are some example [training hparams](https://rwightman.github.io/pytorch-image-models/training_hparam_examples) to get you started.
-
-A full version of the list below with source links can be found in the [documentation](https://rwightman.github.io/pytorch-image-models/models/).
+All model architecture families include variants with pretrained weights. There are specific model variants without any weights, it is NOT a bug. Help training new or better weights is always appreciated.
 
 * Aggregating Nested Transformers - https://arxiv.org/abs/2105.12723
 * BEiT - https://arxiv.org/abs/2106.08254
```
```diff
@@ -542,15 +540,15 @@ Several (less common) features that I often utilize in my projects are included.
 
 * All models have a common default configuration interface and API for
   * accessing/changing the classifier - `get_classifier` and `reset_classifier`
-  * doing a forward pass on just the features - `forward_features` (see [documentation](https://rwightman.github.io/pytorch-image-models/feature_extraction/))
+  * doing a forward pass on just the features - `forward_features` (see [documentation](https://huggingface.co/docs/timm/feature_extraction))
   * these makes it easy to write consistent network wrappers that work with any of the models
-* All models support multi-scale feature map extraction (feature pyramids) via create_model (see [documentation](https://rwightman.github.io/pytorch-image-models/feature_extraction/))
+* All models support multi-scale feature map extraction (feature pyramids) via create_model (see [documentation](https://huggingface.co/docs/timm/feature_extraction))
   * `create_model(name, features_only=True, out_indices=..., output_stride=...)`
   * `out_indices` creation arg specifies which feature maps to return, these indices are 0 based and generally correspond to the `C(i + 1)` feature level.
   * `output_stride` creation arg controls output stride of the network by using dilated convolutions. Most networks are stride 32 by default. Not all networks support this.
   * feature map channel counts, reduction level (stride) can be queried AFTER model creation via the `.feature_info` member
 * All models have a consistent pretrained weight loader that adapts last linear if necessary, and from 3 to 1 channel input if desired
-* High performance [reference training, validation, and inference scripts](https://rwightman.github.io/pytorch-image-models/scripts/) that work in several process/GPU modes:
+* High performance [reference training, validation, and inference scripts](https://huggingface.co/docs/timm/training_script) that work in several process/GPU modes:
   * NVIDIA DDP w/ a single GPU per process, multiple processes with APEX present (AMP mixed-precision optional)
   * PyTorch DistributedDataParallel w/ multi-gpu, single process (AMP disabled as it crashes when enabled)
   * PyTorch w/ single GPU single process (AMP optional)
```
```diff
@@ -604,19 +602,17 @@ Model validation results can be found in the [results tables](results/README.md)
 
 ## Getting Started (Documentation)
 
-My current [documentation](https://rwightman.github.io/pytorch-image-models/) for `timm` covers the basics.
-
-Hugging Face [`timm` docs](https://huggingface.co/docs/hub/timm) will be the documentation focus going forward and will eventually replace the `github.io` docs above.
+The official documentation can be found at https://huggingface.co/docs/hub/timm. Documentation contributions are welcome.
 
 [Getting Started with PyTorch Image Models (timm): A Practitioner’s Guide](https://towardsdatascience.com/getting-started-with-pytorch-image-models-timm-a-practitioners-guide-4e77b4bf9055) by [Chris Hughes](https://github.com/Chris-hughes10) is an extensive blog post covering many aspects of `timm` in detail.
 
-[timmdocs](http://timm.fast.ai/) is quickly becoming a much more comprehensive set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.
+[timmdocs](http://timm.fast.ai/) is an alternate set of documentation for `timm`. A big thanks to [Aman Arora](https://github.com/amaarora) for his efforts creating timmdocs.
 
 [paperswithcode](https://paperswithcode.com/lib/timm) is a good resource for browsing the models within `timm`.
 
 ## Train, Validation, Inference Scripts
 
-The root folder of the repository contains reference train, validation, and inference scripts that work with the included models and other features of this repository. They are adaptable for other datasets and use cases with a little hacking. See [documentation](https://rwightman.github.io/pytorch-image-models/scripts/) for some basics and [training hparams](https://rwightman.github.io/pytorch-image-models/training_hparam_examples) for some train examples that produce SOTA ImageNet results.
+The root folder of the repository contains reference train, validation, and inference scripts that work with the included models and other features of this repository. They are adaptable for other datasets and use cases with a little hacking. See the [documentation](https://huggingface.co/docs/timm/training_script).
 
 ## Awesome PyTorch Resources
 
```