
Commit 871c206

unit 1 readme typo fixes
1 parent 7aed88d commit 871c206

File tree

1 file changed: +6 −7 lines changed


unit1/README.md

Lines changed: 6 additions & 7 deletions
@@ -1,6 +1,6 @@
 # Unit 1: An Introduction to Diffusion Models

-Welcome to Unit 1 of the Hugging Face Diffusion Models Course! In this unit you will learn the basics of how diffusion
+Welcome to Unit 1 of the Hugging Face Diffusion Models Course! In this unit, you will learn the basics of how diffusion
 models work and how to create your own using the 🤗 Diffusers library.

 ## Start this Unit :rocket:
@@ -11,22 +11,22 @@ Here are the steps for this unit:
 - Read through the introductory material below as well as any of the additional resources that sound interesting
 - Check out the _**Introduction to Diffusers**_ notebook below to put theory into practice with the 🤗 Diffusers library
 - Train and share your own diffusion model using the notebook or the linked training script
-- (Optional) Dive deeper with the _**Diffusion Models from Scratch**_ notebook if you're interested seeing a minimal from-scratch implementation and exploring the different design decisions involved
+- (Optional) Dive deeper with the _**Diffusion Models from Scratch**_ notebook if you're interested in seeing a minimal from-scratch implementation and exploring the different design decisions involved


 :loudspeaker: Don't forget to join the [Discord](https://huggingface.co/join/discord), where you can discuss the material and share what you've made in the `#diffusion-models-class` channel.

 ## What Are Diffusion Models?

-Diffusion models are a relatively recent addition to a group of algorithms known as 'generative models'. The goal of generative modelling is to learn to **generate** data, such as images or audio, given a number of training examples. A good generative model will create a **diverse** set of outputs that resemble the training data without being exact copies. How do diffusion models achieve this? Let's focus on the image generation case for illustrative purposes.
+Diffusion models are a relatively recent addition to a group of algorithms known as 'generative models'. The goal of generative modeling is to learn to **generate** data, such as images or audio, given a number of training examples. A good generative model will create a **diverse** set of outputs that resemble the training data without being exact copies. How do diffusion models achieve this? Let's focus on the image generation case for illustrative purposes.

 <p align="center">
 <img src="https://user-images.githubusercontent.com/10695622/174349667-04e9e485-793b-429a-affe-096e8199ad5b.png" width="800"/>
 <br>
 <em> Figure from DDPM paper (https://arxiv.org/abs/2006.11239). </em>
 <p>

-The secret to diffusion models' success is the iterative nature of the diffusion process. Generation begins with random noise, but this is gradually refined over a number of steps until an output image emerges. At each step, the model estimates how we could go from the current input to a completely denoised version. However, since we only make a small change at every step, any errors in this estimate at early stages (where predicting the final output is extremely difficult) can be corrected in later updates.
+The secret to diffusion models' success is the iterative nature of the diffusion process. Generation begins with random noise, but this is gradually refined over a number of steps until an output image emerges. At each step, the model estimates how we could go from the current input to a completely denoised version. However, since we only make a small change at every step, any errors in this estimate at the early stages (where predicting the final output is extremely difficult) can be corrected in later updates.

 Training the model is relatively straightforward compared to some other types of generative model. We repeatedly
 1) Load in some images from the training data
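To make those training steps concrete, here is a minimal sketch of a single training update using 🤗 Diffusers building blocks. The UNet configuration, scheduler settings and the random stand-in batch are illustrative assumptions, not the recipe used in the course notebooks.

```python
import torch
import torch.nn.functional as F
from diffusers import DDPMScheduler, UNet2DModel

# A UNet with default architecture settings and a standard DDPM noise schedule,
# chosen only so the example runs end to end
model = UNet2DModel(sample_size=32, in_channels=3, out_channels=3)
scheduler = DDPMScheduler(num_train_timesteps=1000)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)

clean_images = torch.randn(8, 3, 32, 32)  # stand-in for a batch loaded from the training data

# Add random amounts of noise to the clean images
noise = torch.randn_like(clean_images)
timesteps = torch.randint(0, scheduler.config.num_train_timesteps, (clean_images.shape[0],))
noisy_images = scheduler.add_noise(clean_images, noise, timesteps)

# Ask the model to predict the noise that was added, compare, and take a gradient step
noise_pred = model(noisy_images, timesteps).sample
loss = F.mse_loss(noise_pred, noise)
loss.backward()
optimizer.step()
optimizer.zero_grad()
```

In practice, this step would sit inside a loop over a real dataloader and run for many epochs.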
@@ -50,7 +50,7 @@ At this point, you know enough to get started with the accompanying notebooks! T

 In _**Introduction to Diffusers**_, we show the different steps described above using building blocks from the diffusers library. You'll quickly see how to create, train and sample your own diffusion models on whatever data you choose. By the end of the notebook, you'll be able to read and modify the example training script to train diffusion models and share them with the world! This notebook also introduces the main exercise associated with this unit, where we will collectively attempt to figure out good 'training recipes' for diffusion models at different scales - see the next section for more info.

-In _**Diffusion Models from Scratch**_ we show those same steps (adding noise to data, creating a model, training and sampling) but implemented from scratch in PyTorch as simply as possible. Then we compare this 'toy example' with the diffusers version, noting how the two differ and where improvements have been made. The goal here is to gain familiarity with the different components and the design decisions that go into them, so that when you look at a new implementation you can quickly identify the key ideas.
+In _**Diffusion Models from Scratch**_, we show those same steps (adding noise to data, creating a model, training and sampling) but implemented from scratch in PyTorch as simply as possible. Then we compare this 'toy example' with the diffusers version, noting how the two differ and where improvements have been made. The goal here is to gain familiarity with the different components and the design decisions that go into them so that when you look at a new implementation you can quickly identify the key ideas.

 ## Project Time
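The sampling side of those same building blocks can be sketched just as briefly. The checkpoint name and step count below are assumptions for illustration; any trained DDPM-style UNet with a matching noise schedule would do.

```python
import torch
from diffusers import DDPMScheduler, UNet2DModel

# Assumed pretrained checkpoint, used only to make the loop concrete
model = UNet2DModel.from_pretrained("google/ddpm-celebahq-256")
scheduler = DDPMScheduler(num_train_timesteps=1000)
scheduler.set_timesteps(1000)

sample = torch.randn(1, 3, 256, 256)  # begin with pure random noise

for t in scheduler.timesteps:  # refine gradually over many small steps
    with torch.no_grad():
        noise_pred = model(sample, t).sample  # the model's estimate of the noise
    # take one small step towards the fully denoised image
    sample = scheduler.step(noise_pred, t, sample).prev_sample
```

Because each step only changes the sample slightly, early mistakes in the noise estimate can still be corrected by later updates.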

@@ -61,8 +61,7 @@ Now that you've got the basics down, have a go at training one or more diffusion
 [The Annotated Diffusion Model](https://huggingface.co/blog/annotated-diffusion) is a very in-depth walk-through of the code and theory behind DDPMs with
 maths and code showing all the different components. It also links to a number of papers for further reading.

-Hugging Face documentation on [Unconditional Image-Generation
-](https://huggingface.co/docs/diffusers/training/unconditional_training) for some examples of how to train diffusion models using the official training example script, including code showing how to create your own dataset.
+Hugging Face documentation on [Unconditional Image-Generation](https://huggingface.co/docs/diffusers/training/unconditional_training) for some examples of how to train diffusion models using the official training example script, including code showing how to create your own dataset.

 AI Coffee Break video on Diffusion Models: https://www.youtube.com/watch?v=344w5h24-h8
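If you want to point that training script at your own images, one common route is the `imagefolder` loader from the 🤗 Datasets library; the folder path below is a placeholder.

```python
from datasets import load_dataset

# Placeholder path: any local folder of images works with the "imagefolder" loader
dataset = load_dataset("imagefolder", data_dir="path/to/your/images", split="train")
print(dataset[0]["image"])  # each example exposes a PIL image under the "image" key
```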
