One of the most common problems people have with NAM is that the standard WaveNet architecture (the default in the simplified trainers, and therefore the most common) is too compute-hungry to work in many of the applications where audio plugins have traditionally run.
The existing options for people who want a lighter model are to...
use a "lite"-, "feather"-, or "nano"-sized architecture, or
use the CLI trainer to customize the architectures I've shared.
Of these, using a nano-sized model (presumably with the WaveNet architecture) seems to be the most common choice. However, the quality of those models isn't always great, and they're still not small enough for some applications. The nano LSTM is plenty small, but it often just sounds bad.
To make NAM usable in more applications, I've been playing around with a different architecture that seems to work better in the "low-compute" regime, and I'm interested in open-sourcing it. First, though, I want to make sure it will actually address the low-compute need.
I'm seeking feedback to understand what typical compute constraints are in this "low-compute" regime. To do that, I'm sharing a set of LSTM models that are supported by the current NeuralAmpModelerCore.
If you've been interested in building with NAM but have struggled with compute constraints, please download these models, try them out in your application, and contact me to let me know which model is the largest you're willing to use. (Note: some of the models will sound very different from one another; the intent here is to focus on compute usage, not sound quality.) This will help me make sure that the new model architecture will work for you.
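If it helps to sanity-check the compute cost outside of a full plugin, here's a rough sketch of timing one of these models with NeuralAmpModelerCore. The `nam::get_dsp()` and `DSP::process()` calls below reflect my reading of the current core's headers; the exact signatures, header names, and any reset/finalize steps have changed between releases, so treat this as a starting point rather than a drop-in snippet. The model filename is just a placeholder.

```cpp
// Rough sketch: estimate how much of real time one model uses.
// Assumes the nam:: namespaced API; older releases declared get_dsp()
// in "NAM/dsp.h" and used a different process() signature.
#include <chrono>
#include <iostream>
#include <memory>
#include <vector>

#include "NAM/dsp.h"
#include "NAM/get_dsp.h"

int main()
{
  // Load one of the shared LSTM models (placeholder filename).
  std::unique_ptr<nam::DSP> model = nam::get_dsp("lstm_model.nam");
  // Depending on the core version, you may need to call a reset/prewarm
  // method here before processing.

  const int blockSize = 64;                 // a typical small real-time buffer
  const int numBlocks = 48000 / blockSize;  // roughly one second at 48 kHz
  std::vector<NAM_SAMPLE> input(blockSize, 0.0), output(blockSize, 0.0);

  const auto start = std::chrono::steady_clock::now();
  for (int i = 0; i < numBlocks; i++)
    model->process(input.data(), output.data(), blockSize);
  const auto stop = std::chrono::steady_clock::now();

  // One second of audio rendered in `seconds` of wall-clock time,
  // so the ratio (times 100) is the rough CPU usage in percent.
  const double seconds = std::chrono::duration<double>(stop - start).count();
  std::cout << "Rendered ~1 s of audio in " << seconds << " s ("
            << 100.0 * seconds << "% of real time)\n";
  return 0;
}
```

Numbers like this won't match your host exactly (denormals, cache effects, and buffer sizes all matter), but they're usually enough to tell whether a model is in the right ballpark for your application.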
Thanks!