Name:
Description: Generates MIDI notes that are likely to follow an input drum beat or melody, extending a specified MIDI clip by up to 32 measures. This is helpful for adding variation to a drum beat or for creating new material for a melodic track. The model typically picks up on features such as note durations, key signature, and timing; increasing the temperature produces more random output. Ready to use as a Max for Live device; a minimal usage sketch is given in the Guide below. To train the model on your own data or to try other pre-trained models provided by the Magenta team, follow the instructions on the team's GitHub page: https://github.com/magenta/magenta/tree/main/magenta/models/melody_rnn
Year:
Website:
Input types: MIDI
Output types: MIDI
Output length: Up to 32 measures beyond the input clip
Technology: LSTM-based Recurrent Neural Network (RNN)
Dataset:
License type:
Has real time inference: Not known
Is free: Yes
Is open source: Yes
Are checkpoints available: Yes
Can finetune: Not known
Can train from scratch: Yes
Tags: MIDI open-source free checkpoints
Guide:
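Below is a minimal sketch of the continuation workflow described above, using the Magenta Python library rather than the Max for Live device: load a pre-trained bundle, prime it with an existing MIDI clip, and raise the temperature for more random output. The bundle name (`attention_rnn.mag`), the file paths, and the 16-second continuation length are placeholder assumptions, not settings of the device itself; the `magenta` and `note-seq` packages are assumed to be installed.

```python
# Sketch: continue an existing MIDI clip with a pre-trained Melody RNN bundle.
# File names and the continuation length are placeholders.
import note_seq
from note_seq.protobuf import generator_pb2
from magenta.models.melody_rnn import melody_rnn_sequence_generator
from magenta.models.shared import sequence_generator_bundle

# Load a pre-trained bundle (downloadable from the Magenta project) and
# build the matching sequence generator.
bundle = sequence_generator_bundle.read_bundle_file('attention_rnn.mag')
generator = melody_rnn_sequence_generator.get_generator_map()['attention_rnn'](
    checkpoint=None, bundle=bundle)
generator.initialize()

# Use an existing MIDI clip as the primer to be extended.
primer = note_seq.midi_file_to_note_sequence('primer_clip.mid')
primer_end = primer.total_time

# Ask the model to continue the primer for a fixed span of time.
# Higher temperature -> more random output; 1.0 is the default.
options = generator_pb2.GeneratorOptions()
options.args['temperature'].float_value = 1.1
options.generate_sections.add(start_time=primer_end, end_time=primer_end + 16.0)

continuation = generator.generate(primer, options)
note_seq.sequence_proto_to_midi_file(continuation, 'continuation.mid')
```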