Learning to Sound like a Fender
Written by Harry Fairhead   
Tuesday, 06 November 2018

Bassman 56F vacuum tube amplifier, that is. Yes, neural networks go where no network has gone before. It is now officially amazing what you can think up for a neural network to do.

We really don't need to get into an argument about audiophile tendencies to agree that classic vacuum tube amplifiers sound different. It's probably not that they are so good, more that they are bad in the right sort of way. If you want to sound like them, what's the best way to do the job?

You could build the hardware, as so many do, at great expense and bother. Let's face it, vacuum tubes were never easy to use and putting a finger in the wrong place was a shocking experience. The most obvious thing to do is model the response, either by building a simulator to match the output characteristics or by analyzing the circuit and implementing some digital filters to do the job. However, the complex non-linear dynamics of a typical tube circuit are difficult to model.
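To see why, here is a minimal sketch of the conventional signal-processing route - a static tanh waveshaper followed by a simple one-pole low-pass filter. The names and coefficients are purely illustrative, not taken from any real amp model. It gives you the broad strokes of tube-style clipping, but because the nonlinearity is memoryless it misses exactly the dynamic, state-dependent behavior that makes the real circuit hard to model:

import numpy as np

def tube_waveshaper(x, drive=4.0, alpha=0.2):
    """Crude static tube-style distortion: tanh soft clipping followed
    by a one-pole low-pass filter to tame the added harmonics.
    Illustrative only - a real tube stage's response depends on its
    internal state, which a memoryless curve cannot reproduce."""
    y = np.tanh(drive * x)            # soft clipping adds harmonics
    out = np.empty_like(y)
    state = 0.0
    for i, sample in enumerate(y):    # one-pole IIR low-pass filter
        state += alpha * (sample - state)
        out[i] = state
    return out

# A 440 Hz test tone at 44.1 kHz
fs = 44100
t = np.arange(fs) / fs
clean = 0.8 * np.sin(2 * np.pi * 440 * t)
distorted = tube_waveshaper(clean)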

So why not get a neural network to learn how it sounds?

[Image: Inside a Fender Bassman 56F. Credit: Technical University Berlin]

All we need is the transfer function, which maps the raw input signal to the output that sounds like the amp. This is something that a neural network should be able to learn, and it is what Eero-Pekka Damskägg, Lauri Juvela, Etienne Thuillier, and Vesa Välimäki at Aalto University, Finland, decided to try.

The network was based on the WaveNet model with modifications:

[Diagram: the modified WaveNet architecture]

The network was trained to predict the current output sample given a set of past samples. The training data was, almost ironically, generated by a SPICE model of the Fender amp. Of course, the SPICE model couldn't generate signals in real time.
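To give a flavor of the idea, here is a minimal sketch in PyTorch. This is not the authors' exact architecture - the layer counts, channel widths and loss function are all assumptions - but it shows the core trick: a stack of dilated causal 1D convolutions, so each output sample is predicted from a window of past input samples, trained against clean/amp signal pairs such as those the SPICE model provides:

import torch
import torch.nn as nn
import torch.nn.functional as F

class CausalConv1d(nn.Conv1d):
    """1D convolution that only sees current and past samples."""
    def __init__(self, in_ch, out_ch, kernel_size, dilation=1):
        super().__init__(in_ch, out_ch, kernel_size, dilation=dilation)
        self.left_pad = (kernel_size - 1) * dilation

    def forward(self, x):
        # Pad on the left only, so output at time t depends on inputs <= t
        return super().forward(F.pad(x, (self.left_pad, 0)))

class TinyWaveNet(nn.Module):
    """Minimal WaveNet-flavoured stack. Dilations double each layer,
    so 8 layers of kernel size 3 see roughly 500 past samples."""
    def __init__(self, channels=16, layers=8):
        super().__init__()
        self.inp = CausalConv1d(1, channels, kernel_size=1)
        self.stack = nn.ModuleList(
            CausalConv1d(channels, channels, kernel_size=3, dilation=2 ** i)
            for i in range(layers)
        )
        self.out = nn.Conv1d(channels, 1, kernel_size=1)

    def forward(self, x):                 # x: (batch, 1, samples), the dry guitar signal
        h = torch.tanh(self.inp(x))
        for conv in self.stack:
            h = h + torch.tanh(conv(h))   # residual connections help deep stacks train
        return self.out(h)                # predicted amp output, sample for sample

# One training step; real clean/target pairs would come from the SPICE simulation
model = TinyWaveNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
clean = torch.randn(1, 1, 4096)       # stand-in for a clip of dry guitar
target = torch.tanh(3.0 * clean)      # stand-in for the SPICE-modelled amp output
loss = F.mse_loss(model(clean), target)
loss.backward()
opt.step()

Doubling the dilation at each layer is what makes this practical: eight layers with kernel size 3 already cover around 500 past samples, while a plain convolution stack of the same depth would see only 17.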

So did it work?

Can you reduce some hot triode tubes to a set of neural network coefficients?

It seems you can.

  • Our tests suggest that the deep neural network can be run in real time at a typical audio sample rate. Our listening test results show that the proposed deep convolutional architecture outperforms a state-of-the-art black-box model.

So you probably don't need expensive tubes to get the real sounds.

It seems almost a shame.

I wonder if they turned the coefficients of the network up to eleven?


More Information

Deep Learning for Tube Amplifier Emulation

Related Articles

Audio Super Resolution

AI Plays The Instrument From The Music

Nao Plays Music Like A Human

The World's Ugliest Music - More than Random

How the Music Flows from Place to Place

Google Mines Music


Last Updated ( Tuesday, 06 November 2018 )