
NAM Trainer version 0.6.0 is released

I've been doing some cleaning up around how users interact with NAM's trainer. For one, I've streamlined the installation process around the latest released version, so users can get a more stable experience. If you haven't seen how to install it, check out the YouTube video guides! (Windows, macOS).
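
If you're installing for the first time, the latest release comes straight from PyPI:

pip install neural-amp-modeler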


To update, type

pip install --upgrade neural-amp-modeler

in the terminal, and you'll be good to go!


What's new?

Quite a few things, this time!


Cab modeling

There's a checkbox in the GUI trainer and a text option in Colab. Turning it on adjusts some training settings under the hood so that rigs that include a cabinet are better represented by the resulting model. I'll have a blog post with more on that at a later date.
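
For folks who script the simplified trainer from Python instead of using the GUI or Colab, here's a rough sketch of what enabling it looks like. Treat the fit_cab name and the argument order as illustrative and check the docstring of the version you have installed rather than copying this verbatim.

from nam.train import core  # the simplified trainer behind the GUI and Colab

core.train(
    "input.wav",    # the reamp file you played through your rig
    "output.wav",   # your rig's recorded response
    "./train_dir",  # where checkpoints and the exported model land
    fit_cab=True,   # illustrative name for the new cab-modeling switch
)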


A new reamping file

I've made a new file, v2_0_0.wav, that should help models come out a little better. I've spaced out the blips to allow more settling time for rigs that take longer to return to zero (like rigs with cabs!), and I've also repeated several parts of the file. This lets me get a baseline for how accurate the model is likely to turn out, and it lets me check for some hard-to-detect problems that can happen during training, like drift in the rig's tone (think: starting to record before the tubes have warmed all the way up) or drift in the timing of the recording interface. The latter can be particularly sneaky: if your interface falls behind by even a single sample over the course of reamping, the quality of the model can be severely degraded because of how sharp the peaks in a guitar amp's tone can be.
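
To give a sense of why the repeated segments help (this is an illustration of the idea, not the trainer's actual code): if the interface drifts, two recordings of the same segment no longer line up sample-for-sample, and the peak of their cross-correlation moves away from zero lag. A minimal numpy sketch:

import numpy as np

def estimated_lag(take_a: np.ndarray, take_b: np.ndarray) -> int:
    """Estimate the lag (in samples) that best aligns two takes of the
    same segment; a nonzero result hints at timing drift."""
    # Full cross-correlation between the two (mean-removed) takes
    corr = np.correlate(take_a - take_a.mean(), take_b - take_b.mean(), mode="full")
    # Re-center so that 0 means "perfectly aligned"
    return int(np.argmax(corr) - (len(take_b) - 1))

A lag of even one sample between the first and last copies of a repeated segment is exactly the kind of quiet problem described above.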


The problem with doing these checks is that sometimes they fail, and that can be frustrating. So, if the quality checks fail and you want to try your luck anyway, I've added an option to disregard the warnings and keep training.
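
If you're scripting the trainer, this would presumably be one more flag on the same call sketched above; the name below is illustrative, so check your installed version for the exact spelling.

core.train(
    "v2_0_0.wav", "output.wav", "./train_dir",
    ignore_checks=True,  # illustrative name: proceed even if the data-quality checks complain
)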


Optimized training for CPU-only computers

Since NAM's trainer uses deep learning, it thrives on computers with an NVIDIA GPU or an Apple Silicon processor with MPS. If your computer doesn't have an advanced graphics chip, training a model is a hard climb (about 20-40x slower, depending on the details of your computer). To soften the blow, I took some time to optimize parameters like the batch size so that CPU-only computers can make the most of their compute when training. The result should train about 5x faster than it used to, bringing the gap down to only about 5-10x slower. As a benchmark, I've been able to get decently good models with a training time of about 30 minutes on my 2020 MacBook Pro with an Intel i5. I hope that this helps make NAM a little more accessible to folks who want to get in on the fun but don't have the fanciest computer around.
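
The gist of this kind of tuning is simply to pick training settings based on what hardware PyTorch can see. As a rough illustration (the batch sizes below are placeholders, not the trainer's actual defaults):

import torch
from typing import Tuple

def pick_device_and_batch_size() -> Tuple[str, int]:
    """Pick a compute device and a batch size suited to it.
    The batch sizes here are illustrative placeholders."""
    if torch.cuda.is_available():
        return "cuda", 16  # NVIDIA GPU: plenty of parallel headroom
    if hasattr(torch.backends, "mps") and torch.backends.mps.is_available():
        return "mps", 16   # Apple Silicon GPU via MPS
    return "cpu", 32       # CPU-only: a batch size retuned for throughput

device, batch_size = pick_device_and_batch_size()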


Some other minor tweaks

I'm always on the lookout for ways to tune some of the trainer's "knobs" so that things work a little more reliably for folks, and this release is no different.


[Breaking] Change to how the loss function configuration dictates which calculations are performed during training's forward pass

The loss function has a number of tweakable parameters, and power users who configure the loss in a custom way should know that setting the weight of certain loss terms like MRSTFT to zero will no longer bypass calculating them. Instead, use None to get the previous behavior. I've also changed the defaults so that the previous default behavior is preserved, but folks who have customized this for themselves should check their settings accordingly.
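
Concretely, in a custom configuration the difference looks something like this (the key name is illustrative; match it to whatever your existing config already uses):

loss_config = {
    # None skips the MRSTFT term entirely (the old behavior of a zero weight)
    "mrstft_weight": None,
    # 0.0 now means "compute it during the forward pass, but give it zero weight"
    # "mrstft_weight": 0.0,
}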


Enjoy!
