
Commit b267aba

Add multilingual configuration

1 parent a556a69 commit b267aba


69 files changed, +91936 -0 lines changed

units/en/_toctree.yml

Lines changed: 51 additions & 0 deletions
@@ -0,0 +1,51 @@
- title: Course introduction
  sections:
  - local: unit0/1
    title: Introduction

- title: 1. Introduction to diffusion models
  sections:
  - local: unit1/1
    title: Overview
  - local: unit1/2
    title: Implementation with 🤗 Diffusers
  - local: unit1/3
    title: Implementation from scratch

- title: 2. Fine-Tuning, Guidance and Conditioning
  sections:
  - local: unit2/1
    title: Overview
  - local: unit2/2
    title: Fine-Tuning and guidance
  - local: unit2/3
    title: Class-conditioned Diffusion Model

- title: 3. Stable Diffusion
  sections:
  - local: unit3/1
    title: Overview
  - local: unit3/2
    title: Introduction to Stable Diffusion
  - local: unit3/3
    title: Deep dive into Stable Diffusion

- title: 4. Going Further with Diffusion Models
  sections:
  - local: unit4/1
    title: Overview
  - local: unit4/2
    title: Inverse Denoising Diffusion Implicit Models
  - local: unit4/3
    title: Diffusion for audio

- title: Events related to the course
  sections:
  - local: events/launch
    title: Diffusion Models Live Event
  - local: events/dreambooth
    title: Dreambooth Hackathon
  - local: events/3
    title: Keras Dreambooth event
  - local: events/4
    title: JAX/Diffusers community sprint

units/en/events/1.mdx

Lines changed: 35 additions & 0 deletions
@@ -0,0 +1,35 @@
# Diffusion Models Live Event

To go with the course's release, we are organising a **live community event on November 30th 2022** to which you are invited! The program includes exciting talks from the creators of Stable Diffusion, researchers at Stability AI and Meta, and more!

The talks will focus on a high-level presentation of diffusion models and the tools we can use to build applications with them.

**Collective Intelligence and Creative AI** by **David Ha**
David Ha is the Head of Strategy at Stability AI. He previously worked as a Research Scientist at Google, working in the Brain team in Japan. His research interests include complex systems, self-organization, and creative applications of machine learning. Prior to joining Google, he worked at Goldman Sachs as a Managing Director, where he co-ran the fixed-income trading business in Japan. He obtained undergraduate and master's degrees from the University of Toronto, and a PhD from the University of Tokyo.
You can find him on [Twitter](https://twitter.com/hardmaru) or on his personal [website](https://otoro.net/ml/).
<Youtube id="00GKzGyWFEs" />

**AI for Augmenting Human Creativity** by **Devi Parikh**
Devi Parikh is a Research Director at the Fundamental AI Research (FAIR) lab at Meta, and an Associate Professor in the School of Interactive Computing at Georgia Tech. She has held visiting positions at Cornell University, University of Texas at Austin, Microsoft Research, MIT, Carnegie Mellon University, and Facebook AI Research. She received her M.S. and Ph.D. degrees from the Electrical and Computer Engineering department at Carnegie Mellon University in 2007 and 2009 respectively. Her research interests are in computer vision, natural language processing, embodied AI, human-AI collaboration, and AI for creativity.
You can find her on [Twitter](https://twitter.com/deviparikh) or on her personal [website](https://faculty.cc.gatech.edu/~parikh/).
<Youtube id="bucUO6_0FGU" />

**Food for Diffusion** by **Patrick Esser**
Patrick Esser is a Principal Research Scientist at Runway, leading applied research efforts including the core model behind Stable Diffusion, otherwise known as High-Resolution Image Synthesis with Latent Diffusion Models.
You can find him on [Twitter](https://twitter.com/pess_r).
<Youtube id="g6tIUrMvOec" />

**Beyond Text - Giving Stable Diffusion New Abilities** by **Justin Pinkney**
Justin is a Senior Machine Learning Researcher at Lambda Labs working on image generation and editing, particularly for artistic and creative applications. He loves to play with and tweak pre-trained models to add new capabilities to them, and is probably best known for models like Toonify, Stable Diffusion Image Variations, and Text-to-Pokemon.
You can find him on [Twitter](https://twitter.com/Buntworthy) or on his personal [website](https://www.justinpinkney.com).
<Youtube id="mpMGwQa7J1w" />

**Diffusion Models are Cool - But What Comes After the Hype?** by **Apolinário Passos**
Apolinário Passos is a Machine Learning Art Engineer at Hugging Face and an artist who focuses on generative art and generative media. He founded the platform multimodal.art and the corresponding Twitter account, and works on the organization, aggregation, and platformization of open-source generative media machine learning models.
You can find him on [Twitter](https://twitter.com/multimodalart).
<Youtube id="eqOSQeQNqaw" />

**Stable Diffusion & Friends: High-Resolution Image Synthesis via Two-Stage Generative Models** by **Robin Rombach**
Robin is a research scientist at Stability AI. After studying physics at the University of Heidelberg from 2013-2020, he started a PhD in computer science in the Computer Vision group in Heidelberg in 2020 under the supervision of Björn Ommer and moved to LMU Munich with the research group in 2021. His research focuses on generative deep learning models, in particular text-to-image systems. During his PhD, Robin was instrumental in the development and publication of several now widely used projects, such as VQGAN and Taming Transformers, and Latent Diffusion Models. In collaboration with Stability AI, Robin scaled the latent diffusion approach and published a series of models now known as Stable Diffusion, which have been widely adopted by the community.
You can find him on [Twitter](https://twitter.com/robrombach).
<Youtube id="eqOSQeQNqaw" />

units/en/events/2.mdx

Lines changed: 89 additions & 0 deletions
@@ -0,0 +1,89 @@
# DreamBooth Hackathon 🏆

📣 **The hackathon is now over and the winners have been announced on Discord. You are still welcome to train models and submit them to the leaderboard, but we won't be offering prizes or certificates at this point in time.**

Welcome to the DreamBooth Hackathon! This is a community event where you'll **personalise a Stable Diffusion model by fine-tuning it on a handful of your own images.** To do so, you'll use a powerful technique called [_DreamBooth_](https://arxiv.org/abs/2208.12242), which allows one to implant a subject (e.g. your pet or favourite dish) into the output domain of the model such that it can be synthesized with a _unique identifier_ in the prompt.
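
To make the idea concrete, here is a minimal 🤗 Diffusers sketch (not part of the official event materials) of what generating with a DreamBooth-tuned checkpoint looks like once fine-tuning is done; the repo id and the identifier token `ccorgi` are illustrative placeholders:

```python
import torch
from diffusers import StableDiffusionPipeline

# Hypothetical repo id - replace with your own fine-tuned DreamBooth checkpoint.
model_id = "your-username/your-dreambooth-model"

pipe = StableDiffusionPipeline.from_pretrained(model_id, torch_dtype=torch.float16)
pipe = pipe.to("cuda")

# The unique identifier learned during fine-tuning (here "ccorgi") lets the model
# place your subject in whatever scene the rest of the prompt describes.
prompt = "a photo of ccorgi dog swimming in the Acropolis"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("dreambooth_sample.png")
```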

This competition is composed of 5 _themes_, where each theme will collect models belonging to the following categories:

* **Animal 🐨:** Use this theme to generate images of your pet or favourite animal hanging out in the Acropolis, swimming, or flying in space.
* **Science 🔬:** Use this theme to generate cool synthetic images of galaxies, proteins, or any domain of the natural and medical sciences.
* **Food 🍔:** Use this theme to tune Stable Diffusion on your favourite dish or cuisine.
* **Landscape 🏔:** Use this theme to generate beautiful landscapes of your favourite mountain, lake, or garden.
* **Wildcard 🔥:** Use this theme to go wild and create Stable Diffusion models for any category of your choosing!

We'll be **giving out prizes to the top 3 most liked models per theme**, and you're encouraged to submit as many models as you want!

## Getting started

Follow the steps below to take part in this event:

1. Join the [Hugging Face Discord server](https://huggingface.co/join/discord) and check out the `#dreambooth-hackathon` channel to stay up to date with the event.
2. Launch and run the [DreamBooth notebook](https://github.com/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb) to train your models by clicking on one of the links below. Make sure you select the GPU runtime on each platform so that your models train quickly!

| Notebook | Colab | Kaggle | Gradient | Studio Lab |
|:---------|:------|:-------|:---------|:-----------|
| DreamBooth Training | [![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb) | [![Kaggle](https://kaggle.com/static/images/open-in-kaggle.svg)](https://kaggle.com/kernels/welcome?src=https://github.com/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb) | [![Gradient](https://assets.paperspace.io/img/gradient-badge.svg)](https://console.paperspace.com/github/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb) | [![Open In SageMaker Studio Lab](https://studiolab.sagemaker.aws/studiolab.svg)](https://studiolab.sagemaker.aws/import/github/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb) |

**Note 👋:** The DreamBooth notebook uses the [`CompVis/stable-diffusion-v1-4`](https://huggingface.co/CompVis/stable-diffusion-v1-4) checkpoint as the Stable Diffusion model to fine-tune. However, you are totally free to use any Stable Diffusion checkpoint you want - you'll just have to adjust the code to load the appropriate components and the safety checker (if it exists), as sketched after the list below. Some interesting models to fine-tune include:

* [`runwayml/stable-diffusion-v1-5`](https://huggingface.co/runwayml/stable-diffusion-v1-5)
* [`prompthero/openjourney`](https://huggingface.co/prompthero/openjourney)
* [`stabilityai/stable-diffusion-2`](https://huggingface.co/stabilityai/stable-diffusion-2)
* [`hakurei/waifu-diffusion`](https://huggingface.co/hakurei/waifu-diffusion)
* [`stabilityai/stable-diffusion-2-1`](https://huggingface.co/stabilityai/stable-diffusion-2-1)
* [`nitrosocke/elden-ring-diffusion`](https://huggingface.co/nitrosocke/elden-ring-diffusion)
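
As a rough sketch of what "adjusting the code" can involve (the exact loading code in the DreamBooth notebook may differ), Stable Diffusion repos on the Hub follow a common layout, so each component can be loaded from its own subfolder:

```python
from transformers import CLIPTextModel, CLIPTokenizer
from diffusers import AutoencoderKL, UNet2DConditionModel
from diffusers.pipelines.stable_diffusion import StableDiffusionSafetyChecker

# Pick whichever checkpoint you want to fine-tune, e.g. one from the list above.
model_id = "runwayml/stable-diffusion-v1-5"

# Load the individual components used during fine-tuning; the subfolder names
# follow the standard Stable Diffusion repository layout on the Hub.
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")

# Some checkpoints also ship a safety checker - load it only if the repo includes one.
safety_checker = StableDiffusionSafetyChecker.from_pretrained(model_id, subfolder="safety_checker")
```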

## Evaluation & Leaderboard

To be in the running for the prizes, push one or more DreamBooth models to the Hub with the `dreambooth-hackathon` tag in the model card ([example](https://huggingface.co/lewtun/ccorgi-dog/blob/main/README.md#L9)). This tag is added automatically by the [DreamBooth notebook](https://github.com/huggingface/diffusion-models-class/blob/main/hackathon/dreambooth.ipynb), but you'll need to add it yourself if you're running your own scripts.
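
If you do use your own scripts, one way to add the tag afterwards - a sketch, assuming the `metadata_update` helper from `huggingface_hub` is available in your environment - is to update the model card metadata programmatically:

```python
from huggingface_hub import metadata_update

# Hypothetical repo id - replace with the model you pushed to the Hub.
repo_id = "your-username/your-dreambooth-model"

# Merge the hackathon tag into the model card's YAML metadata block.
metadata_update(repo_id, {"tags": ["dreambooth-hackathon"]})
```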

Models are evaluated according to the number of likes they have, and you can track your model's ranking on the hackathon's leaderboard:

* [DreamBooth Leaderboard](https://huggingface.co/spaces/dreambooth-hackathon/leaderboard)
45+
46+
## Timeline
47+
48+
* **December 21, 2022** - Start date
49+
* **December 31, 2022** - Colab Pro registration deadline
50+
* **January 22, 2023** - Final submissions deadline (closing of the leaderboard)
51+
* **January 23-27, 2023** - Announce winners of each theme
52+
53+
All deadlines are at 11:59 PM UTC on the corresponding day unless otherwise noted.
54+

## Prizes

We will be awarding 3 prizes per theme, where **winners are determined by the models with the most likes** on the leaderboard:

**1st place winner**

* [Hugging Face Pro subscription](https://huggingface.co/pricing) for 1 year or a $100 voucher for the [Hugging Face merch store](https://store.huggingface.co/)

**2nd place winner**

* A copy of the [_NLP with Transformers_](https://transformersbook.com/) book or a $50 voucher for the [Hugging Face merch store](https://store.huggingface.co/)

**3rd place winner**

* [Hugging Face Pro subscription](https://huggingface.co/pricing) for 1 month or a $15 voucher for the [Hugging Face merch store](https://store.huggingface.co/)

We will also provide a **certificate of completion** to all the participants that submit at least 1 DreamBooth model to the hackathon 🔥.

## Compute

Google Colab will be sponsoring this event by providing free Colab Pro credits to 100 participants (selected randomly). We'll be giving out the credits in January 2023, and you have until December 31 to register. To register for these credits, please fill out [this form](https://docs.google.com/forms/d/e/1FAIpQLSeE_js5bxq_a_nFTglbZbQqjd6KNDD9r4YRg42kDFGSb5aoYQ/viewform).

![](https://lh3.googleusercontent.com/-l6dUgmPOKMM/X7w3nNn3OpI/AAAAAAAALAg/74fTRiPqikMURTD_Dn4PzAVADey2_6lLwCNcBGAsYHQ/s400/colab-logo-128x128.png)
## FAQ
81+
82+
### What data is allowed for fine-tuning?
83+
84+
You can use any images that belong to you or for which a permissive license allows for. If you'd like to submit a model trained on faces (e.g. as a Wilcard submission), we recommend using your own likeness. Ideally, use your own data where you can - we'd love to see your pets or favorite local landscape features, and we suspect the likes and prizes will tend to go to those who do something nice and personal 😁
85+
86+
### Are other fine-tuning techniques like textual inversion allowed?
87+
88+
Absolutely! Although this hackathon is focused on DreamBooth, you're welcome (and encouraged) to experiment with other fine-tuning techniques. This also means you can use whatever frameworks, code, or services that help you make delightful models for the community to enjoy 🥰
89+
