Name: Music FaderNets
Description: Music FaderNets is a controllable MIDI generation framework that models high-level musical qualities such as emotional arousal. Drawing inspiration from sliding faders on a mixing console, the model offers intuitive, continuous control over these characteristics. Given an input MIDI file, Music FaderNets can produce multiple variations with different levels of arousal, adjusted according to the fader position.
Year: 2020
Website:
Input types: MIDI
Output types: MIDI
Output length:
Technology: VAE
Dataset:
License type:
Has real time inference: Yes No Not known
Is free: Yes
Is open source: Yes
Are checkpoints available: Yes No Not known
Can finetune: Yes No Not known
Can train from scratch: Yes No Not known
Tags: MIDI open-source free
Guide: Code accompanying the ISMIR 2020 paper "Music FaderNets: Controllable Music Generation Based On High-Level Features via Low-Level Feature Modelling" is available on GitHub: [https://github.com/gudgud96/music-fader-nets](https://github.com/gudgud96/music-fader-nets)
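For actual training and inference, refer to the scripts in the repository. As a purely illustrative aside, the toy sketch below shows the "fader" idea of shifting a latent code along a learned attribute direction; the `encode`/`decode` functions and `arousal_direction` here are hypothetical stand-ins, not the repository's API or the paper's architecture.

```python
# Conceptual sketch of fader-style control: continuous variation is obtained
# by moving a latent code along an attribute direction. Everything below is a
# toy linear model for illustration only (assumption), not Music FaderNets.
import numpy as np

rng = np.random.default_rng(0)
LATENT_DIM = 8

# Hypothetical learned "arousal" axis in latent space.
arousal_direction = rng.normal(size=LATENT_DIM)
arousal_direction /= np.linalg.norm(arousal_direction)

def encode(note_density: float) -> np.ndarray:
    """Toy encoder: embed a low-level feature value into a latent code."""
    noise = rng.normal(scale=0.1, size=LATENT_DIM)
    return noise + note_density * arousal_direction

def decode(z: np.ndarray) -> float:
    """Toy decoder: read the controlled feature back from the latent code."""
    return float(z @ arousal_direction)

# "Slide the fader": shift the latent code along the arousal direction and
# observe the decoded feature change continuously.
z = encode(note_density=0.3)
for fader in (-1.0, 0.0, 1.0):
    z_shifted = z + fader * arousal_direction
    print(f"fader={fader:+.1f} -> decoded feature={decode(z_shifted):.2f}")
```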