
Releases: chairc/Integrated-Design-Diffusion-Model

IDDM v1.1.9

01 Jun 17:03
7604b98

What's Changed

  • refactor: Refactor and add some old code. by @chairc in #129
  • docs: Add new docs dir. by @chairc in #130
  • docs: Update new README. by @chairc in #131
  • chore: Update installation scripts and documentation. by @chairc in #132
  • feat: Test new network and self-attention. by @chairc in #133
  • docs: Update running locally in README. by @chairc in #134
  • chore: Update the new version 1.1.9. by @chairc in #136
  • chore: Update the pip upload. by @chairc in #138

Full Changelog: v1.1.8-beta.3...v1.1.9

Weights

Note: Each weight file includes the model, the EMA model (ema_model) and the optimizer state; a loading sketch follows the list below.

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120 (celebahq-120-weight.pt)
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120 (animate-ganyu-120-weight.pt)
  • neu-cls-64-weight.pt: Trained on a dataset of 7,226 defect images; image size is 64 (neu-cls-64-weight.pt)
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120 (neu-120-weight.pt)
  • cifar-64-weight.pt: Trained on a dataset of 60,000 images; image size is 64 (cifar10-64-weight.pt)
  • animate-face-64-weight.pt: Trained on a dataset of 63,565 anime faces; image size is 64 (animate-face-64-weight.pt)
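
The exact layout of these .pt files is not documented in these notes; assuming each checkpoint is a dictionary holding the model, EMA model and optimizer state as the note above says, loading one might look like the following sketch (the key names are assumptions, not documented API):

```python
import torch

# Hypothetical loading sketch: the release notes state that each weight file
# bundles the model, the EMA model and the optimizer state. The key names
# "model", "ema_model" and "optimizer" below are assumed for illustration.
checkpoint = torch.load("celebahq-120-weight.pt", map_location="cpu")

model_state = checkpoint.get("model")          # diffusion network weights
ema_model_state = checkpoint.get("ema_model")  # exponential-moving-average weights
optimizer_state = checkpoint.get("optimizer")  # optimizer state, useful for resuming training

# Restore into your own instances, e.g.:
# net.load_state_dict(ema_model_state)        # EMA weights usually sample more cleanly
# optimizer.load_state_dict(optimizer_state)
```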

IDDM v1.1.8-beta.3

08 Mar 15:36
858313f

What's Changed

Full Changelog: v1.1.8-beta.2...v1.1.8-beta.3

Weights

Note: Each weight file includes the model, the EMA model (ema_model) and the optimizer state.

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120 (celebahq-120-weight.pt)
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120 (animate-ganyu-120-weight.pt)
  • neu-cls-64-weight.pt: Trained on a dataset of 7,226 defect images; image size is 64 (neu-cls-64-weight.pt)
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120 (neu-120-weight.pt)
  • cifar-64-weight.pt: Trained on a dataset of 60,000 images; image size is 64 (cifar10-64-weight.pt)
  • animate-face-64-weight.pt: Trained on a dataset of 63,565 anime faces; image size is 64 (animate-face-64-weight.pt)

IDDM v1.1.8-beta.2

07 Mar 08:40
358301d

What's Changed

  • fix: Patch the interface's image POST handling so that MEAN and STD take effect. by @bestl1fe in #104
  • fix: Fix MEAN and STD bug and update organization logo. by @chairc in #105
  • chore: MEAN and STD params setting by @chairc in #108
  • feat: Parameters decomposed into methods; Added PSNR and SSIM calculators; Update requirements.txt. by @chairc in #109 (a minimal PSNR sketch follows this list)
  • chore: Update model list by @chairc in #110
  • chore: Add use_gpu params. by @chairc in #111
  • refactor: Refactor trainer and update README by @chairc in #113
  • fix: Patch pip and server by @BestChenA in #114
  • fix: Fix import package safety alerts; Fix the bug that the Flask API could only be called once in the server mode by @chairc in #115
  • fix: Fix the bug where image pixels exceed 255 or fall below 0. by @chairc in #117
  • feat: Update the short name trigger parameter. by @chairc in #119
  • docs: Fix neu-120-weight.pt pre-training model download link. by @BestChenA in #121
  • docs: Update README. by @chairc in #122
  • docs: Update username. by @BestChenA in #123
  • docs: Update username. by @chairc in #124
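
The notes do not show how the PSNR calculator from #109 is implemented; as a rough illustration of the metric (and of why the 0-255 clamping fix in #117 matters), a minimal NumPy version under those assumptions is:

```python
import numpy as np

def psnr(generated: np.ndarray, reference: np.ndarray, max_value: float = 255.0) -> float:
    """Peak signal-to-noise ratio between two images in the 0-255 range (illustrative only)."""
    # Clamp to the valid pixel range first; #117 fixed exactly this kind of
    # out-of-range pixel problem in generated images.
    generated = np.clip(generated.astype(np.float64), 0.0, max_value)
    reference = np.clip(reference.astype(np.float64), 0.0, max_value)
    mse = np.mean((generated - reference) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10.0 * np.log10((max_value ** 2) / mse)
```

SSIM is more involved; libraries such as torchmetrics or scikit-image provide standard implementations if you do not use the project's own calculator.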

New Contributors

  • @BestChenA made their first contribution in #114

Full Changelog: v1.1.7...v1.1.8-beta.2

Weights

Note: Each weight file includes the model, the EMA model (ema_model) and the optimizer state.

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120 (celebahq-120-weight.pt)
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120 (animate-ganyu-120-weight.pt)
  • neu-cls-64-weight.pt: Trained on a dataset of 7,226 defect images; image size is 64 (neu-cls-64-weight.pt)
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120 (neu-120-weight.pt)
  • cifar-64-weight.pt: Trained on a dataset of 60,000 images; image size is 64 (cifar10-64-weight.pt)
  • animate-face-64-weight.pt: Trained on a dataset of 63,565 anime faces; image size is 64 (animate-face-64-weight.pt)

IDDM v1.1.7

14 Nov 10:45
d1d80b7

What's Changed

  • Dev: Modify the README in datasets; Add the Chinese README in datasets. by @chairc in #88
  • Add: Add the NaN check method. by @chairc in #89
  • Add date, author and site description; Update required packages. by @chairc in #90
  • Remove two imshow() duplicate functions; Remove magic transforms and eval function; Replace images.shape[0] to batch_size. by @chairc in #91
  • Add banner and version information. by @chairc in #93
  • About magic value and pytorch (version >=2.0.0) notice. by @chairc in #95
  • Fix: Fix the reference parameter name error by @bestl1fe in #96
  • Dev: Major Update in 20241112. About argparse, fix error, decouple generate.py and image processing method. by @chairc in #98
  • Add: Added deploy support (server and socket). by @chairc in #99
  • Add: Added deploy README. by @chairc in #100
  • Dev: Pre-release preparation. by @chairc in #102

Full Changelog: v1.1.6...v1.1.7

Weights

Note: Each weight file includes the model, the EMA model (ema_model) and the optimizer state.

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120 (celebahq-120-weight.pt)
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120 (animate-ganyu-120-weight.pt)
  • neu-cls-64-weight.pt: Trained on a dataset of 7,226 defect images; image size is 64 (neu-cls-64-weight.pt)
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120 (neu-120-weight.pt)
  • cifar-64-weight.pt: Trained on a dataset of 60,000 images; image size is 64 (cifar10-64-weight.pt)
  • animate-face-64-weight.pt: Trained on a dataset of 63,565 anime faces; image size is 64 (animate-face-64-weight.pt)

IDDM v1.1.6

08 Aug 12:05
7b9ef6c

What's Changed

  • Update: Refactor web.py, and add the generate page. by @chairc in #75
  • Modify custom image length and width input; other code enhancement changes by @chairc in #76
  • Add custom parameter settings; Modify the method import path and eliminate the magic value by @chairc in #77
  • Update: Modify the format and delete useless import by @bestl1fe in #78
  • Update about loss function. by @chairc in #79
  • Update: Update the README. by @chairc in #80
  • Modify check_and_create_dir function; Modify comment. by @chairc in #82
  • Fix: Fix the problem of missing parameters when inputting --image_size. by @chairc in #83
  • Add: Add v1.1.6 tag. by @chairc in #86

Full Changelog: v1.1.5...v1.1.6

Weights

Note: Each weight file includes the model, the EMA model (ema_model) and the optimizer state.

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120 (celebahq-120-weight.pt)
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120 (animate-ganyu-120-weight.pt)
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120 (neu-120-weight.pt)
  • cifar-64-weight.pt: Trained on a dataset of 60,000 images; image size is 64 (cifar10-64-weight.pt)
  • animate-face-64-weight.pt: Trained on a dataset of 63,565 anime faces; image size is 64 (animate-face-64-weight.pt)

IDDM v1.1.5

31 May 09:07
5e4f669

What's Changed

  • FID calculator used for evaluating generated images. by @egoist945402376 in #64
  • Update: Add Pytorch_fid in requirements.txt by @EdwardTj in #65
  • Update: Refactor train.py format. by @chairc in #66
  • Separate get dataset methods into data files; Add initialization operation. by @chairc in #67
  • Update: Modify repository structure, parameter explanation and citation. by @chairc in #68
  • Add: Add better FID calculator to verify image quality. by @chairc in #69 (a usage sketch follows this list)
  • Update: Modify the citation link. by @chairc in #70
  • Add: Add Evaluation in README. by @chairc in #71
  • Update: Modify type description error in README. by @chairc in #72
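
Since pytorch_fid is added to requirements.txt in #65, scoring a folder of generated samples against a folder of real images could look like the sketch below (the directory paths are placeholders; IDDM may wrap this differently in its own FID calculator):

```python
import torch
from pytorch_fid import fid_score

# Placeholder paths: one folder of real images, one folder of generated samples.
real_dir = "datasets/real_images"
fake_dir = "results/generated_images"

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# dims=2048 selects the standard Inception-v3 pool3 features; lower FID is better.
fid = fid_score.calculate_fid_given_paths(
    [real_dir, fake_dir], batch_size=50, device=device, dims=2048
)
print(f"FID: {fid:.2f}")
```

The same package also exposes a command-line entry point (python -m pytorch_fid real_dir fake_dir) if you prefer not to call it from code.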

Weights

Note: Each weight file includes the model, the EMA model (ema_model) and the optimizer state.

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120
  • cifar-64-weight.pt: Trained on a dataset of 60,000 images; image size is 64
  • animate-face-64-weight.pt: Trained on a dataset of 63,565 anime faces; image size is 64

Full Changelog: v1.1.4...v1.1.5

IDDM v1.1.4

20 Apr 15:00
87a2227

What's Changed

  • Add PLMS sampler; Add PLMS sample initializer. by @chairc in #51
  • Removed forced checking of samplers, now free choice of sample generation; Add output image format; Add choices.py; Modify the choices attribute in parsers. by @chairc in #52
  • Update README.md by @chairc in #53
  • Add version.py. by @chairc in #54
  • Add the --save_model_interval_epochs parameter to save the model at an interval of every X epochs. by @chairc in #55
  • Modify the mean, std and random resized crop settings in torchvision.transforms.Compose; Modify the choices in parser and adjust the order of parts. by @chairc in #56
  • Fix: Fix unable to load unconditional model. by @chairc in #57
  • Modify the config in repository structure; Added "--noise_schedule" training parameters, this method is a model noise adding method. by @chairc in #59
  • Added unetv2.py, replace nn.Upsample with nn.ConvTranspose2d; Add DDIM code comments; Fix the problem that bool in parser cannot be correctly recognized by the console. by @chairc in #61
  • Modify the get_dataset; Add check.py; Add classes_initializer function by @chairc in #62

Weights

  • celebahq-120-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120
  • animate-ganyu-120-weight.pt: Trained on a dataset of 500 anime Ganyu faces; image size is 120
  • neu-120-weight.pt: Trained on a dataset of 1,800 defect images; image size is 120

Full Changelog: v1.1.3...v1.1.4

IDDM v1.1.3

10 Mar 09:56
670c532

What's Changed

  • Add the setting to run GPU commands; Add the tag and update acknowledgements. by @chairc in #37
  • Add: Add Visual webui. by @chairc in #38
  • Modify the GPU settings. CPU training is not supported after version 1.1.2; Modify automatic mixed precision setting, optimize GradScaler. by @chairc in #39
  • Modify information prompt problems; Modify beta_end; Add sqrt_linear and sqrt schedules. by @chairc in #41 (a schedule sketch follows this list)
  • Add: Add super-resolution model (an implementation of RDN) to upscale low-resolution images to high resolution. by @chairc in #43
  • Generation image format by @egoist945402376 in #45
  • Add: Add option to load EMA model. by @chairc in #46
  • Update: Modify Chinese introduction errors and add --image_format parameter introduction. by @chairc in #48
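
The notes do not define the linear, sqrt_linear and sqrt schedules added in #41; one common convention used by open diffusion codebases is sketched below as an assumption, and IDDM's exact definitions may differ:

```python
import torch

def make_beta_schedule(schedule: str, n_timestep: int,
                       beta_start: float = 1e-4, beta_end: float = 2e-2) -> torch.Tensor:
    """Illustrative beta (noise) schedules; not necessarily IDDM's definitions."""
    if schedule == "linear":
        # Linear in sqrt(beta), then squared back.
        return torch.linspace(beta_start ** 0.5, beta_end ** 0.5, n_timestep) ** 2
    if schedule == "sqrt_linear":
        # Plain linear ramp from beta_start to beta_end.
        return torch.linspace(beta_start, beta_end, n_timestep)
    if schedule == "sqrt":
        # Square root of the linear ramp.
        return torch.linspace(beta_start, beta_end, n_timestep) ** 0.5
    raise ValueError(f"Unknown schedule: {schedule}")
```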

Full Changelog: v1.1.2...v1.1.3

Weights

  • celebahq-weight.pt: Trained on a dataset of 30,000 human faces; image size is 120.

IDDM v1.1.2-stable

12 Jan 09:48
bf24b96

What's Changed

  • Update: Modify automatic mixed precision training name; reconstruct automatic mixed precision training structure. by @chairc in #28
  • Update: Modify project's name. by @chairc in #29
  • Add: Add current epoch average loss log. by @chairc in #30
  • Update: Update the introduction in README. by @chairc in #31
  • Major update about generating images by @chairc in #33
  • Modify some parameters and formats; Generate images in new folder. by @chairc in #34
  • Modify comment; Modify README, add issue link and correct some text. by @chairc in #35

Note

Due to changes in the model saving structure, the following generate.py parameters no longer need to be entered from this version onward; a sketch of reading them back from a saved checkpoint follows the list.

  • --sample (str, sampling method): Set the sampling method type, currently supporting DDPM and DDIM. (No need to set for models after version 1.1.1)
  • --network (str, training network): Set the training network, currently supporting UNet and CSPDarkUNet. (No need to set for models after version 1.1.1)
  • --act (str, activation function): Activation function selection. Currently supports gelu, silu, relu, relu6 and lrelu. If you do not set the same activation function as the model, a mosaic phenomenon will occur. (No need to set for models after version 1.1.1)
  • --num_classes (int, number of classes): Number of classes for classification. (No need to set for models after version 1.1.1)
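
With the new saving structure these settings travel inside the checkpoint rather than on the command line. The key names in the sketch below are assumptions for illustration only; the notes only state that the saving structure changed:

```python
import torch

# Hypothetical sketch: read generation settings back from a checkpoint saved by
# version 1.1.2 or later. The keys "sample", "network", "act" and "num_classes"
# are assumed names, not documented API.
checkpoint = torch.load("model_weight.pt", map_location="cpu")

sample = checkpoint.get("sample", "ddpm")       # e.g. ddpm or ddim
network = checkpoint.get("network", "unet")     # e.g. unet or cspdarkunet
act = checkpoint.get("act", "gelu")             # activation used at training time
num_classes = checkpoint.get("num_classes", 0)  # 0 here would mean unconditional

print(sample, network, act, num_classes)
```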

Full Changelog: v1.1.1...v1.1.2

IDDM v1.1.1

20 Dec 15:50
efdf1e3

What's Changed

  • Update: Update README. by @chairc in #20
  • Add: Add separate checkpoint weights function. by @chairc in #21
  • Add: Add pretrained checkpoint method. by @chairc in #22
  • Update: The utils.initializer encapsulation method is used in distributed training. by @chairc in #23
  • Update: Add "Base on the 64×64 model to generate every size images". by @chairc in #24
  • Add: Add 160×160 NEU-DET generate images. by @chairc in #25

Full Changelog: v1.1.0...v1.1.1