# Implicit Reparametrization Trick

<div align="center">
    <img src="images/implicit.webp" width="500px" />
</div>

<table>
    <tr>
        <td align="left"> <b> Title </b> </td>
        <td> Implicit Reparametrization Trick for BMM </td>
    </tr>
    <tr>
        <td align="left"> <b> Authors </b> </td>
        <td> Matvei Kreinin, Maria Nikitina, Petr Babkin, Iryna Zabarianska </td>
    </tr>
    <tr>
        <td align="left"> <b> Consultant </b> </td>
        <td> Oleg Bakhteev, PhD </td>
    </tr>
</table>
## Description

This repository contains an educational project for the Bayesian Multimodeling course. It provides algorithms for sampling from various distributions using the implicit reparameterization trick.
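
The key identity behind the trick: for a sample z with CDF F(z; φ) and density p(z; φ), the gradient of the sample with respect to the distribution parameters is dz/dφ = -(∂F(z; φ)/∂φ) / p(z; φ), so no explicit standardization function is needed. Below is a minimal sketch of this computation for a univariate Gaussian, using only standard PyTorch; it is an illustration of the idea, not our library's API.

```python
import torch

mu = torch.tensor(0.5, requires_grad=True)
sigma = torch.tensor(2.0, requires_grad=True)
dist = torch.distributions.Normal(mu, sigma)

z = dist.sample()  # a draw with no gradient attached

# Implicit gradient: dz/dphi = -(dF/dphi) / p(z)
grad_mu, grad_sigma = torch.autograd.grad(dist.cdf(z), (mu, sigma))
density = dist.log_prob(z).exp()
dz_dmu = -grad_mu / density        # analytically equals 1
dz_dsigma = -grad_sigma / density  # analytically equals (z - mu) / sigma
```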

## Scope

We plan to implement the following distributions in our library:
- [x] Normal (Gaussian) distribution (\*)
- [x] Dirichlet distribution (including the Beta distribution) (\*)
- [x] Mixture of distributions from the same family (\*\*)
- [x] Student's t-distribution (\*\*) (\*)
- [x] Von Mises distribution (\*\*\*)
- [ ] Sampling from an arbitrary factorized distribution (\*\*\*)

(\*) - this distribution is already implemented in PyTorch via the explicit reparameterization trick; we implement it for comparison.

(\*\*) - these distributions are added as a backup; their inclusion is questionable.

(\*\*\*) - it is not yet clear how to implement these distributions; their inclusion is questionable.

## Stack

We plan to inherit from the `torch.distributions.Distribution` class, so we need to implement all of the methods that this class expects, such as `rsample` and `log_prob`.
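
The sketch below shows what such a subclass could look like: a hypothetical `ImplicitNormal` that draws a sample without tracking gradients and then attaches the implicit gradient through a zero-valued surrogate. The class name and internals are our assumptions for illustration, not the final interface.

```python
import math
import torch
from torch.distributions import Distribution, Normal, constraints

class ImplicitNormal(Distribution):
    arg_constraints = {"loc": constraints.real, "scale": constraints.positive}
    support = constraints.real
    has_rsample = True

    def __init__(self, loc, scale, validate_args=None):
        self.loc, self.scale = loc, scale
        super().__init__(batch_shape=loc.shape, validate_args=validate_args)

    def log_prob(self, value):
        # Log-density of a Gaussian with parameters (loc, scale)
        return (
            -((value - self.loc) ** 2) / (2 * self.scale ** 2)
            - torch.log(self.scale)
            - 0.5 * math.log(2 * math.pi)
        )

    def rsample(self, sample_shape=torch.Size()):
        # Sample without tracking gradients, then add a surrogate that is
        # zero in value but carries the implicit gradient -(dF/dphi) / p(z).
        shape = torch.Size(sample_shape) + self.loc.shape
        with torch.no_grad():
            z = self.loc + self.scale * torch.randn(shape)
        cdf = Normal(self.loc, self.scale).cdf(z)
        surrogate = -cdf / self.log_prob(z).exp().detach()
        return z + surrogate - surrogate.detach()
```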

## Usage

In this example, we demonstrate how to apply our library in a Variational Autoencoder (VAE) model, where the latent variable is sampled from a normal distribution.
```python
>>> import torch.distributions.implicit as irt
>>> params = Encoder(inputs)
>>> gauss = irt.Normal(*params)
>>> deviated = gauss.rsample()
>>> outputs = Decoder(deviated)
```
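
Because `rsample` attaches implicit reparameterization gradients to the sample, a loss computed on `outputs` can be backpropagated through `deviated` into the parameters of `Encoder`.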

In this example, we demonstrate sampling from a mixture of distributions using our library.
```python
>>> import irt
>>> params = Encoder(inputs)
>>> mix = irt.Mixture([irt.Normal(*params), irt.Dirichlet(*params)])
>>> deviated = mix.rsample()
>>> outputs = Decoder(deviated)
```
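
Mixtures are a natural use case for the implicit trick: a mixture has no invertible standardization function, so the explicit reparameterization trick does not apply, while implicit reparameterization, which only needs the CDF, still yields gradients through `rsample`.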

## Links

- [LinkReview](https://github.com/intsystems/implitic-reparametrization-trick/blob/main/linkreview.md)
- [Plan of project](https://github.com/intsystems/implitic-reparametrization-trick/blob/main/planning.md)
- [BlogPost](blogpost/Blog_post_sketch.pdf)
- [Documentation](https://intsystems.github.io/implicit-reparameterization-trick/)
- [Matvei Kreinin](https://github.com/kreininmv), [Maria Nikitina](https://github.com/NikitinaMaria), [Petr Babkin](https://github.com/petr-parker), [Iryna Zabarianska](https://github.com/Akshiira)