NeuralAmpModelerCore v0.2.0 is released

There's a new version of the core DSP library for NAM: version 0.2.0. This makes a few breaking changes but also cleans some things up and adds some fun new functionality that I hope folks building with NAM will be excited to integrate.


What's new?


Input and output analog level calibration

This feature standardizes how to make use of the "input_level_dbu" and "output_level_dbu" fields in the metadata that were introduced in trainer v0.10.0. If your model has them, then you can now use them in this version of the core to make sure that the digital signals going into and out of the model are accurate to the analog gear that was recorded. This not only paves the way for automatic input gain compensation in plugins but also enables accurate chaining of models (e.g. an overdrive pedal feeding into an amp). I plan to demonstrate this in NeuralAmpModelerPlugin in its next release.
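As a rough illustration of the math involved, here's a minimal sketch of turning a level difference in dBu into a linear pre-model gain. The function names and the assumption that the metadata fields describe the analog level corresponding to digital full scale are mine, not the library's API; treat this as a sketch of the idea, not the core's implementation.

```cpp
#include <cmath>
#include <cstdio>

// Convert a level difference in dB to a linear gain factor.
static double DBToLinear(double dB) { return std::pow(10.0, dB / 20.0); }

// Illustrative only: given the analog level (dBu) your interface delivers at
// digital full scale and the "input_level_dbu" stored in the model's metadata,
// compute the gain to apply before the model so the digital signal lines up
// with the analog level the model was captured at.
static double InputCalibrationGain(double interfaceInputDBu, double modelInputDBu)
{
  return DBToLinear(interfaceInputDBu - modelInputDBu);
}

int main()
{
  // Hypothetical numbers: interface hits full scale at +12.0 dBu,
  // model metadata says it was captured at +18.5 dBu.
  const double gain = InputCalibrationGain(12.0, 18.5);
  std::printf("Pre-model gain: %f (%.1f dB)\n", gain, 20.0 * std::log10(gain));
  return 0;
}
```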


Reset() method

NAMs now have a Reset() method, which serves as the interface through which a plugin host can communicate sample rate and maximum buffer size to the model. This will be used in the near future to clean up memory handling, which should help with performance.
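A natural place for a plug-in to call this is its prepare-to-play hook. The sketch below uses a stand-in interface, since the real class name and the exact Reset() signature in the core may differ from what's shown here; only the idea of passing the sample rate and maximum buffer size comes from the release notes.

```cpp
#include <memory>

// Stand-in for the NAM model class; the real class lives in
// NeuralAmpModelerCore and its exact name/signature may differ.
struct NamModel
{
  virtual ~NamModel() = default;
  virtual void Reset(double sampleRate, int maxBufferSize) = 0;
};

// A plug-in's prepare-to-play hook is a natural place to forward the host's
// sample rate and largest expected buffer size to the model so it can get
// its memory handling sorted out before real-time processing starts.
class PluginProcessor
{
public:
  void PrepareToPlay(double sampleRate, int maxBufferSize)
  {
    if (mModel)
      mModel->Reset(sampleRate, maxBufferSize);
  }

private:
  std::unique_ptr<NamModel> mModel;
};
```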


Single-call real-time processing [breaking]

The heart of a DSP (NAM) class is .process(). In v0.1.0, there was a second method, .finalize_(), that needed to be called next in order to let a model prepare itself for the next buffer of samples to process. That's been simplified: the code that used to live in .finalize_() is now part of .process(), so processing is a little simpler and less error-prone. You can see how this works by looking at NeuralAmpModelerPlugin.
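Here's a small sketch of what a per-buffer render callback looks like before and after the change. The exact process() signature (sample type, argument order) is an assumption on my part; only the "one call instead of two" shape comes from the release notes.

```cpp
// Sketch of a per-buffer render callback, assuming a model object with the
// single-call process() described above.
template <typename Model>
void RenderBlock(Model& model, float* input, float* output, int numFrames)
{
  // v0.1.0 required two calls per buffer (roughly):
  //   model.process(input, output, numFrames);
  //   model.finalize_(numFrames);
  // v0.2.0 folds the finalize step into process(), so one call is enough:
  model.process(input, output, numFrames);
}
```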


Support for leaky ReLU activations

If you want to use the leaky ReLU in your models, you can now run them in real-time applications! I don't currently intend to make this the new default in the standardized (GUI & Google Colab) trainers that I maintain, but I heard some interest in using it, so I'm happy to oblige and add support. Note: only PyTorch's default negative slope of 0.01 is supported in v0.2.0. If you're interested in other values, get in touch.
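For reference, the activation itself is simple; this standalone snippet shows the math with the 0.01 negative slope, and isn't meant to reflect how the core implements it internally.

```cpp
#include <cstdio>

// Leaky ReLU with PyTorch's default negative slope of 0.01, the only value
// supported in v0.2.0: pass positive inputs through, scale negative inputs.
static float LeakyReLU(float x)
{
  return x >= 0.0f ? x : 0.01f * x;
}

int main()
{
  std::printf("%f %f\n", LeakyReLU(0.5f), LeakyReLU(-0.5f)); // 0.5, -0.005
  return 0;
}
```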


Gated activations in WaveNets work again [bug fix]

There was a bug with WaveNet models where gated activations weren't working correctly. The standardized WaveNet architectures aren't configured to use gating, so the problem didn't affect many people as far as I could tell. Thanks to a bug report, I was able to pinpoint the cause and fix it. If you're interested in using models that have gated activations (again, not a default in the standardized trainers), you should now be able to do so.
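For anyone unfamiliar with gating, here's an elementwise sketch in the general WaveNet style: the layer's pre-activation is split into a "filter" half and a "gate" half, and the output is tanh(filter) * sigmoid(gate). This mirrors the published WaveNet formulation, not necessarily NAM's exact layer code.

```cpp
#include <cmath>
#include <vector>

// Elementwise gated activation: tanh of the filter path multiplied by the
// sigmoid of the gate path, as in the original WaveNet paper.
static std::vector<float> GatedActivation(const std::vector<float>& filter,
                                          const std::vector<float>& gate)
{
  std::vector<float> out(filter.size());
  for (size_t i = 0; i < filter.size(); i++)
    out[i] = std::tanh(filter[i]) * (1.0f / (1.0f + std::exp(-gate[i])));
  return out;
}
```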


Enjoy!
