This repository contains the files and instructions needed to generate lyrics with an LSTM network and music with Google's Magenta library.
- Install the required Python libraries:
- BeautifulSoup4
- Requests
- TensorFlow
- Magenta
- Create a Python script to scrape song lyrics from a lyrics website (a minimal sketch follows this list).
- Use Requests to fetch the page and BeautifulSoup to extract the lyrics from the HTML content.
- Save the extracted lyrics to a text file.
- Create a Python script to preprocess the scraped lyrics (see the second sketch below).
- Read the lyrics from the text file and apply the necessary preprocessing steps, such as lowercasing and tokenization.
- Save the preprocessed lyrics to a new text file.
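For the scraping step, here is a minimal sketch. The URL, the `div` class, and the file name are placeholders of my own (real lyrics sites differ in markup, and many restrict scraping in their terms of service):

```python
# scrape_lyrics.py -- minimal scraping sketch; URL and CSS class are placeholders.
import requests
from bs4 import BeautifulSoup

URL = "https://example.com/some-song-lyrics"  # hypothetical lyrics page

response = requests.get(URL, timeout=10)
response.raise_for_status()

soup = BeautifulSoup(response.text, "html.parser")
# Lyrics sites usually wrap the text in a dedicated container; replace the
# class name below with the one the real page uses.
container = soup.find("div", class_="lyrics")
lyrics = container.get_text(separator="\n") if container else ""

with open("lyrics.txt", "w", encoding="utf-8") as f:
    f.write(lyrics)
```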
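For the preprocessing step, a matching sketch (lowercasing plus plain whitespace tokenization, so it needs no extra dependencies; the file names are the same placeholders as above):

```python
# preprocess_lyrics.py -- lowercase and tokenize the scraped lyrics.
with open("lyrics.txt", "r", encoding="utf-8") as f:
    text = f.read()

# Lowercase everything, then split on whitespace as a simple tokenizer.
tokens = text.lower().split()

with open("lyrics_preprocessed.txt", "w", encoding="utf-8") as f:
    f.write(" ".join(tokens))
```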
If you have multiple MIDI files and want to merge them into a single file for training, a short script can do it; see the sketch below.
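The repository's own merge command is not shown here, so as a stand-in, here is a minimal Python sketch built on the `pretty_midi` package (an assumption; any MIDI library would work). The glob pattern and output name are placeholders:

```python
# merge_midi.py -- merge several MIDI files into one (hypothetical helper,
# assumes the pretty_midi package is installed: pip install pretty_midi).
import glob

import pretty_midi

def merge_midi_files(pattern, output_path):
    """Copy every instrument track from each matching file into one MIDI file."""
    merged = pretty_midi.PrettyMIDI()
    for path in sorted(glob.glob(pattern)):
        midi = pretty_midi.PrettyMIDI(path)
        # Instruments keep their absolute note timings; tempo metadata of the
        # individual files is not reconciled in this sketch.
        merged.instruments.extend(midi.instruments)
    merged.write(output_path)

if __name__ == "__main__":
    merge_midi_files("./path_to_your_MIDI_files/*.mid", "./merged.mid")
```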
The remainder of this README covers training a polyphonic music generation model with Google's Magenta library, specifically the Polyphony RNN model.
- Install the Magenta library by following its official installation instructions (a pip-based sketch follows this list).
- Make sure you have a collection of MIDI files to use for training.
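If you work in a pip-based environment, the usual route is simply:

```bash
pip install magenta
```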
Follow these steps to train the Polyphony RNN model:
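One note before the first step: in Magenta's documented workflow, `polyphony_rnn_create_dataset` reads a TFRecord of NoteSequences rather than raw MIDI, so the MIDI collection is first converted with `convert_dir_to_note_sequences` (the paths below are this README's placeholders):

```bash
convert_dir_to_note_sequences --input_dir=./path_to_your_MIDI_files --output_file=./notesequences.tfrecord --recursive
```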
- Build the training dataset from the NoteSequence TFRecord by running the following command:

```bash
polyphony_rnn_create_dataset --input=./notesequences.tfrecord --output_dir=./polyphony_rnn_training_data --eval_ratio=0.10
```
This will generate `.tfrecord` files of SequenceExamples in the `./polyphony_rnn_training_data` directory, split into training and evaluation sets according to `--eval_ratio`.
- Create a directory to store the model checkpoints:

```bash
mkdir songs_MIDI_polyphonic_checkpoints
```
- Train the Polyphony RNN model by running the following command:

```bash
polyphony_rnn_train --config=polyphony --run_dir=./songs_MIDI_polyphonic_checkpoints --sequence_example_file=./polyphony_rnn_training_data/training_poly_tracks.tfrecord --hparams="batch_size=8,rnn_layer_sizes=[128,128,128]" --num_training_steps=20000
```
You can stop the training process at any time with `Ctrl+C`. To resume training from the latest checkpoint, simply run the same command again. You can also adjust the `num_training_steps` argument as needed.
After training the model, you can use it to generate new music. Detailed instructions on how to do this will be added soon.
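Until then, a typical invocation looks roughly like the sketch below; the flags shown follow Magenta's Polyphony RNN documentation, but the values are illustrative assumptions, so check `polyphony_rnn_generate --help` before relying on them:

```bash
polyphony_rnn_generate --run_dir=./songs_MIDI_polyphonic_checkpoints --hparams="batch_size=8,rnn_layer_sizes=[128,128,128]" --output_dir=./generated --num_outputs=10 --num_steps=128
```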
This project is released under the MIT License.