Trainer v0.12.2 is released
- Steve
- 2 hours ago
- 2 min read
I recently released a new version of the Python package for training models.
As usual, the newest version is automatically used when training with Colab; folks making models locally on their own computers can install the update by typing
pip install --upgrade neural-amp-modeler
Full release notes are available on GitHub.
What's new?
Faster training on Apple Silicon. This is the only change that affects the standardized trainer. I've optimized the code so that everything runs on the MPS accelerator; before, one line had to run on the CPU, which slowed things down quite a bit!
And for folks working with the software directly, here are a few changes that make advanced workflows easier:
PyTorch models can be instantiated from .nam files. I've added a couple of factories that let you (where possible) go from an exported .nam file back to a PyTorch model. Useful if you don't have the .ckpt files anymore.
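To give a feel for the idea, here's a minimal sketch of such a factory. The registry and function names below are illustrative, not the package's actual API; the only assumption taken from the format itself is that an exported .nam file is a JSON document describing the model's architecture, configuration, and weights.

```python
import json

# Hypothetical registry mapping architecture names to constructors.
_FACTORIES = {}


def register_factory(architecture, fn):
    """Register a constructor for a given architecture name."""
    _FACTORIES[architecture] = fn


def model_from_nam(path):
    """Rebuild a model from an exported .nam file (a JSON document)."""
    with open(path) as f:
        data = json.load(f)
    arch = data["architecture"]
    if arch not in _FACTORIES:
        raise ValueError(f"No factory registered for architecture {arch!r}")
    # Delegate to the registered constructor for this architecture.
    return _FACTORIES[arch](data["config"], data["weights"])
```

The "where possible" caveat maps naturally onto the registry: only architectures with a registered constructor can be rebuilt.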
Added a teardown hook for datasets. If you need to execute some custom code when datasets are torn down at the end of training, you can add it to these hooks.
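As a sketch of how such a hook might look (the class and method names here are hypothetical, not the trainer's actual API), the pattern is simply a list of callables that the dataset runs once when it is torn down:

```python
class Dataset:
    """Toy dataset illustrating a teardown-hook pattern."""

    def __init__(self, name):
        self.name = name
        self._teardown_hooks = []

    def register_teardown_hook(self, fn):
        """Add a callable to run when the dataset is torn down."""
        self._teardown_hooks.append(fn)

    def teardown(self):
        # Called once at the end of training; runs hooks in order.
        for fn in self._teardown_hooks:
            fn(self)
```

Typical uses would be closing file handles or deleting temporary resampled audio that the dataset created for itself.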
Handshaking between data and models. If you've made special model classes that require special dataset classes to go with them, this handshaking can be implemented so that you can validate that the configurations of both are compatible. I've just found this handy for validating configurations, and it's totally optional (the default is to assume that things will work).
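In spirit, the handshake is just a validation method the model can override; by default it accepts anything, matching the "assume things will work" default mentioned above. The names below are illustrative, not the trainer's actual API:

```python
class Model:
    """Base model: the default handshake accepts any dataset."""

    def validate_data(self, dataset):
        return None  # default: assume compatible


class ReceptiveFieldModel(Model):
    """Hypothetical model that needs segments at least as long as its
    receptive field."""

    def __init__(self, receptive_field):
        self.receptive_field = receptive_field

    def validate_data(self, dataset):
        # Reject datasets whose segments are too short for this model.
        if dataset.segment_length < self.receptive_field:
            raise ValueError(
                f"Segments of {dataset.segment_length} samples are shorter "
                f"than the model's receptive field ({self.receptive_field})"
            )


class DatasetConfig:
    def __init__(self, segment_length):
        self.segment_length = segment_length
```

Failing fast here turns a confusing mid-training error into a clear configuration message before training starts.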
Custom losses. I've added the ability to use arbitrary loss functions that take the predicted and target audio as inputs. Feel free to make use of this for more exotic ways of training models.
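For illustration, any callable taking predicted and target audio and returning a scalar fits this shape. Here's a sketch of the error-to-signal ratio (ESR), a loss commonly used for amp modeling, written on plain Python lists to stay dependency-free; in the trainer it would operate on audio tensors, and how you register the loss is up to the trainer's configuration:

```python
def esr_loss(preds, targets):
    """Error-to-signal ratio: squared error normalized by target energy."""
    num = sum((p - t) ** 2 for p, t in zip(preds, targets))
    den = sum(t ** 2 for t in targets)
    return num / den
```

A perfect prediction gives 0; predicting silence against a nonzero target gives 1.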
Enjoy!
Note: If you're counting, you'll notice that v0.12.1 seems to be skipped. It was released on GitHub but isn't available on PyPI--see Issue 591 if you're interested.