ValueError when using swin_upernet models; a solution is described in ZFTurbo/Music-Source-Separation-Training/issues/6.

ValueError: Make sure that the channel dimension of the pixel values match with the one set in the configuration.
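The error comes from the channel check in the Swin patch-embedding layer: the model's configuration declares one channel count, but the tensor passed in has another. A minimal sketch of how it is triggered, assuming `transformers` and `torch` are installed; `SwinModel` stands in here for the swin_upernet backbone, and the shapes are illustrative, not taken from the issue:

```python
# Minimal repro sketch: config expects 3 channels, input has 2.
import torch
from transformers import SwinConfig, SwinModel

config = SwinConfig(num_channels=3)  # the configuration expects 3-channel input
model = SwinModel(config)

# A 2-channel input (e.g. a stereo spectrogram) fails the channel check in
# SwinPatchEmbeddings.forward and raises:
#   ValueError: Make sure that the channel dimension of the pixel values
#   match with the one set in the configuration.
pixel_values = torch.randn(1, 2, 224, 224)
outputs = model(pixel_values)
```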
To work around the problem, edit the forward method of SwinPatchEmbeddings in site-packages\transformers\models\swin\modeling_swin.py so that a mismatched channel count is accepted instead of raising the ValueError:
```python
def forward(self, pixel_values: Optional[torch.FloatTensor]) -> Tuple[torch.Tensor, Tuple[int]]:
    _, num_channels, height, width = pixel_values.shape
    if num_channels != self.num_channels:
        self.num_channels = num_channels
    # pad the input to be divisible by self.patch_size, if needed
    pixel_values = self.maybe_pad(pixel_values, height, width)
    embeddings = self.projection(pixel_values)
    _, _, height, width = embeddings.shape
    output_dimensions = (height, width)
    embeddings = embeddings.flatten(2).transpose(1, 2)
    return embeddings, output_dimensions
```
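If you would rather not edit the installed package, the same change can be applied at runtime by monkey-patching SwinPatchEmbeddings before the model is built. This is a sketch of that alternative, not part of the original fix; the helper name `_tolerant_forward` is ours:

```python
# Runtime alternative to editing site-packages: replace the method on the
# class before constructing the model. A sketch only; the fix above edits
# modeling_swin.py directly.
from typing import Optional, Tuple

import torch
from transformers.models.swin.modeling_swin import SwinPatchEmbeddings

def _tolerant_forward(
    self, pixel_values: Optional[torch.FloatTensor]
) -> Tuple[torch.Tensor, Tuple[int]]:
    _, num_channels, height, width = pixel_values.shape
    if num_channels != self.num_channels:
        # accept the incoming channel count instead of raising a ValueError
        self.num_channels = num_channels
    # pad the input to be divisible by self.patch_size, if needed
    pixel_values = self.maybe_pad(pixel_values, height, width)
    embeddings = self.projection(pixel_values)
    _, _, height, width = embeddings.shape
    output_dimensions = (height, width)
    embeddings = embeddings.flatten(2).transpose(1, 2)
    return embeddings, output_dimensions

SwinPatchEmbeddings.forward = _tolerant_forward
```

Note that either variant only relaxes the configuration check; the projection layer's weights must still match the actual input channel count, or the convolution itself will fail.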
Referenced in commit b4ca1ee ("bug fix and feat #24") by SUC-DriverOld.