
NAM Trainer v0.7.0 is released

I've just released a minor version bump of the NAM trainer. As always, you can update your local install from the terminal by typing:

pip install --upgrade neural-amp-modeler

What's new?

  1. The "Nano" preset architecture. In response to requests for a more CPU-conscious offering than the "feather" configuration, this release makes official a pair of "nano" configurations for the WaveNet and LSTM architectures. Based on tests on devices more CPU-bound than my laptop, the nano WaveNet is about half as CPU-intensive as the feather configuration, and the nano LSTM is about 2x lighter still.

  2. Models know their expected sample rate. One noteworthy limitation of the pre-built NAM offerings has been the need to stick to a 48 kHz sample rate. This release explicitly records that requirement in the model artifacts. The intent is that, by making the sample rate an explicitly-tracked piece of information in the models, the door opens to running models at different native sample rates in the future. (A sketch of how to inspect this field follows the list below.) This is a separate matter from resampling within the plugin so that e.g. a DAW session running at 44.1 kHz can correctly use a 48 kHz NAM model. (That's under development; sit tight!)

  3. [breaking] Removal of the "advanced" Colab Jupyter notebook. In order to streamline the many ways that models can be trained, I've removed the more full-featured "colab.ipynb" from the repo. The more popular "easy_colab.ipynb" is still there, and the vast majority of users who train models in their browser will be unaffected by this removal. Power users may either move to the Python script bin/train/main.py, which will continue to support the most complete set of functionality (a rough invocation example follows below), or keep their own copy of the notebook and maintain it themselves.
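On item 2: a .nam model artifact is plain JSON under the hood, so the new metadata is easy to inspect yourself. Here's a minimal Python sketch; the top-level "sample_rate" key is my assumption about where the field lives, so check an exported file to confirm:

import json

# A .nam artifact is a JSON document; load it like any other.
with open("model.nam", "r") as fp:
    model = json.load(fp)

# "sample_rate" is an assumed key name; inspect your own file to confirm.
print(f"Expected sample rate: {model.get('sample_rate')} Hz")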
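On item 3: for power users moving to the script, invocation looks roughly like the line below. The positional arguments (data config, model config, learning config, and an output directory) reflect my reading of the script's interface and may differ from the current repo, so treat this as a sketch and check the repo's documentation for authoritative usage:

python bin/train/main.py data_config.json model_config.json learning_config.json outputs/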

Enjoy!
