# LatentDiffSep
Master Thesis Project at University of Cambridge.
**Contributor:** Eduard Burlacu
**Abstract:**

**What's implemented:** Source code used for producing the results in the _____ paper.

**Datasets:** Libri2Mix, WSJ0-2mix

**Hardware Setup:** These experiments were run on ___
To construct the Python environment, follow these steps:

```bash
# Set up the source separation environment
conda env create -f env/environment.yaml
```
We use Stability AI's stable-audio-tools to train an OobleckVAE designed specifically for source separation, capable of encoding and reconstructing multi-speaker audio samples.
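The role the VAE plays can be sketched with a toy stand-in (plain Python, no audio libraries). This is not the OobleckVAE itself; it only illustrates the encode → latent → decode round trip that latent-domain separation relies on, assuming nothing beyond the codec compressing audio into a shorter latent sequence:

```python
# Toy stand-in for the autoencoder used in latent-domain separation.
# NOT the OobleckVAE: just a minimal illustration of the
# encode -> latent -> decode round trip on a two-speaker mixture.

def encode(samples, hop=2):
    """Toy 'encoder': average non-overlapping groups of `hop` samples."""
    return [sum(samples[i:i + hop]) / hop for i in range(0, len(samples), hop)]

def decode(latents, hop=2):
    """Toy 'decoder': repeat each latent value to restore the length."""
    out = []
    for z in latents:
        out.extend([z] * hop)
    return out

# A 'mixture' of two constant-amplitude 'speakers' (purely illustrative).
speaker_a = [0.5] * 8
speaker_b = [0.25] * 8
mixture = [a + b for a, b in zip(speaker_a, speaker_b)]

z = encode(mixture)          # compact latent sequence (half the length)
reconstruction = decode(z)   # back to the sample domain

print(len(z), reconstruction[:4])  # → 4 [0.75, 0.75, 0.75, 0.75]
```

In the actual pipeline the separation model operates on the latent sequence `z` rather than on raw waveforms, which is what makes a reconstruction-faithful VAE a prerequisite.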
**Useful for these tasks:** Blind Source Separation, Speech Enhancement, Target Speaker Extraction
**Datasets:** The settings are as follows:

| Dataset | # speakers | Target method | SI-SDR (dB) |
|---|---|---|---|
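Since results are reported in SI-SDR, here is a minimal sketch of how the metric is typically computed (scale-invariant signal-to-distortion ratio; plain Python, assuming equal-length target and estimate signals):

```python
import math

def si_sdr(target, estimate):
    """Scale-invariant SDR in dB between a target and an estimated signal.

    The estimate's projection onto the target is treated as the 'signal';
    the residual is treated as distortion/noise.
    """
    dot = sum(t * e for t, e in zip(target, estimate))
    target_energy = sum(t * t for t in target)
    alpha = dot / target_energy                 # optimal scaling factor
    s_target = [alpha * t for t in target]      # scaled target component
    e_noise = [e - s for e, s in zip(estimate, s_target)]
    signal = sum(s * s for s in s_target)
    noise = sum(n * n for n in e_noise)
    return 10 * math.log10(signal / noise)

s = [1.0, 2.0, 3.0, 4.0]
e = [1.0, 2.0, 3.0, 5.0]   # estimate close to the target, with a small error
print(round(si_sdr(s, e), 2))  # → 19.17
```

Because the metric rescales the target to best match the estimate, it is invariant to the overall gain of the estimate, which is why it is the standard metric on Libri2Mix and WSJ0-2mix.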