
Different results when I train my own model & bugs #19

@ChlaegerIO

Description


Thank you for the nice work. Evaluation and the demo with your weights and configuration work as reported in the paper. I installed the library versions listed in requirements.txt.

My problem is that I want to adapt the model (e.g., shrink it) to fit my purpose, so I have to train my own models. I therefore tried to train the entire DELTAR model as provided. I use 25k NYU images. I tweaked the code here and there to fit my data, and I applied the two changes from the additional remarks below. With this setup I trained for 50 epochs, and I have also trained for 25 epochs with the settings provided in the configs for NYU. I then get the following validation results:
[image]

and training loss:
[image]

Images from the training set look good, though:
[image]

But the model does not generalize that well:
[image]

My questions now are:

Additional remarks:

  • I found that the PointNet parameters were not trained in the network before, as they are not added to the optimizer's parameter groups in
    def get_10x_lr_params(self):  # lr learning rate
        modules = [self.decoder, self.depth_head, self.conv_out]
  • In the NYU dataloader, in "train" mode the image should be normalized to [0, 1] (image = np.array(image, dtype=np.float32) / 255.0) before the random_crop(...) and train_preprocess(...) augmentations, since the image is clipped to [0, 1] there.
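To make the two remarks concrete, here is a minimal sketch of both fixes. This is not the repository's actual code: the attribute name `self.pointnet` and the commented augmentation calls (`random_crop`, `train_preprocess`) are assumptions standing in for the real identifiers.

```python
import numpy as np


class Model:
    """Toy stand-in for the DELTAR model; sub-modules are placeholder strings."""

    def __init__(self):
        self.decoder = "decoder"
        self.depth_head = "depth_head"
        self.conv_out = "conv_out"
        self.pointnet = "pointnet"  # the branch that was missing from the optimizer

    def get_10x_lr_params(self):  # lr learning rate
        # Fix 1: include the PointNet branch so its parameters are passed to
        # the optimizer and actually receive updates during training.
        modules = [self.decoder, self.depth_head, self.conv_out, self.pointnet]
        for m in modules:
            yield m


def load_train_image(raw_uint8):
    # Fix 2: normalize to [0, 1] *before* the augmentations, because the
    # augmentation pipeline clips pixel values to [0, 1]; normalizing
    # afterwards would first clip the raw 0-255 values down to 1.
    image = np.array(raw_uint8, dtype=np.float32) / 255.0
    # image = random_crop(image, ...)   # augmentation runs after normalization
    # image = train_preprocess(image)
    return image
```

With this ordering, a raw pixel value of 255 maps to 1.0 before any clipping, instead of being clipped from 255.0 down to 1.0 and flattening the image.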
