This project has been a few months in the making and so much fun to do. It’s a big progressive epic in the style of Pink Floyd. It’s based on the short story “There Will Come Soft Rains” by Ray Bradbury, a poignant tale about the last day in the life of an automated house that has outlived its owners after a nuclear war. Cheery stuff, but still beautiful. I only hope my song does the story justice. I decided to let the house tell the story instead of singing it with my own voice.
If you are interested in the production of it, here are some details of what I used. I record all of my audio through a Universal Audio Apollo 16 into Logic Pro X. I have just a handful of preamps I can use if I’m not recording at line level. For this project, the only thing I actually recorded was my vocal for the vocoder. I used an SM58 into a Focusrite ISA Two. I limited the crap out of the vocal so the vocoder would take on the more monotone, constant level you’d expect from an automated voice. I know text-to-speech is much better now, but I really wanted a vintage vibe for the house, since I was going for some of that ’50s-era feel.
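If you’ve never squashed a vocal that hard, here’s a toy sketch of what heavy limiting does to the level. This is not my actual plugin chain, just a hard-clip illustration with made-up numbers, but it shows why the vocoder ends up sounding so flat and robotic:

```python
# Toy illustration of heavy limiting: loud peaks get pinned to a
# ceiling, then makeup gain brings the quiet parts up, so everything
# ends up at a nearly constant level -- exactly what a vocoder wants.
# The ceiling and makeup values here are arbitrary.

def hard_limit(samples, ceiling=0.2, makeup=4.0):
    """Clamp each sample to +/- ceiling, then apply makeup gain."""
    limited = [max(-ceiling, min(ceiling, s)) for s in samples]
    return [s * makeup for s in limited]

vocal = [0.05, 0.6, -0.9, 0.15, -0.3]  # wildly varying levels
print(hard_limit(vocal))
# every loud peak is now pinned to +/- 0.8; the quiet 0.05 comes up to 0.2
```

A real limiter has attack and release behavior instead of this brick-wall clamp, but the end effect on dynamics is the same idea.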
Drums are all Superior Drummer, plus some loops from Damage and Evolve in the Komplete Ultimate library. I also added a few orchestral percussion instruments to get the gong and claves. There are quite a few other synth loops and droning synth basslines, from Omnisphere and Massive. The bass is a Fender Jazz V Elite played into a SansAmp RBI.
Guitar is all a Fender USA Strat into either a KSR Ares into a Two-Notes Torpedo Live for the dirty tones, or a Fractal AxeFX2 for cleans. There are multiple solos, all with unique tones, some droning parts and the big chorus guitars. Actually a small amount of guitar for me.
I worked out the vocoder part with the built-in Logic plugin but wasn’t at all happy with its synth abilities, so I bought iZotope’s VocalSynth plugin, which worked very well.
There are surprisingly few effects going on overall. Here’s a photo of the session:
Starting with the master buss, it’s pretty simple actually. NLS is there since I’m using it on all the tracks. Waves SSL Buss Compressor for a little glue, Slate Virtual Tape Machine for some vibe, Waves L3 for some limiting and Universal Audio’s Manley Massive Passive EQ to tweak a little.
I’ll go through what I did to various tracks, but assume Waves NLS is first on every track. Drums got Waves C4 to squash the buss a bit, and I used the Logic EQ to cut off the extreme lows and highs. I also did that in Superior Drummer for the kick and toms. The kit just had so much bass, I had to get rid of a lot of it.
The bass got some compression via the Universal Audio LA2A and some sweetening with the Manley Massive Passive EQ. You’ll see almost all the synth stuff got severely shelved. That bass just adds up, so I started cutting it at the source. That cleaned up the mix a ton.
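For the curious, the low-cut idea can be sketched as a toy one-pole high-pass filter. The real cuts were done with plugin EQs and shelves, and the coefficient here is made up; this just shows the principle that low-frequency (slow-moving) content gets bled away while fast transients pass through:

```python
# Toy one-pole high-pass filter: y[n] = alpha * (y[n-1] + x[n] - x[n-1]).
# Higher alpha = lower cutoff frequency. A crude stand-in for the
# low-cut EQ moves described above -- not what I actually used.

def one_pole_highpass(samples, alpha=0.95):
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        prev_y = alpha * (prev_y + x - prev_x)
        prev_x = x
        out.append(prev_y)
    return out

# A constant (DC / deep-bass-like) signal decays away over time:
print(one_pole_highpass([1.0] * 8))  # each sample is 0.95x the last
```

Stacking cuts like this on every bass-heavy source is why the low end of the mix cleaned up so dramatically.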
The guitars got lots of subtle tweaks, but nearly all of them got Waves X-Noise. The Strat hum was just too much for me. I’ve since replaced the pickups to solve that problem, but these tracks were recorded before that, so I used a plugin to remove some of the noise.
Reverb is all from Waves R-Verb.
Oh, and TONS of automation.
An oldie, but a goodie….
I was discussing this post with a friend this morning and realized I had wiped it from the internet. I found it on Wayback so I could repost it. Enjoy!
Recently I have been involved in a lively debate on the future of digital amp simulations on gearslutz.com’s forums. I use the term debate lightly because the conversation was more like, “they aren’t real so they suck,” than something useful. I’d like to spell out what I feel is the current reality and future of computer modeling analog audio gear. I say analog audio gear generically because this technology is not limited to guitar amplifiers.
In today’s professional studios it’s incredibly common to use simulations of amplifiers, but also compressors, equalizers, reverb and delay units, tape and tube simulation, and pretty much any piece of analog gear that’s ever had a reputation for anything useful.
Most people credit Line6’s POD device with truly igniting the popularity of amp simulations. Building on the AxSys 212, Line6 exploded in the early 2000s with the success of the original POD digital amp modeler. While platinum-selling artists have access to million-dollar recording facilities and can draw from hundreds of thousands of studio-owned or rented amplifiers to record their albums, home and project studio owners traditionally had one or two amplifiers to use when making music. In addition to the small number of options, the outlook for making a great guitar recording grew worse once you factored in microphone and preamp quality and room acoustics. A professional recording studio is going to have dozens of microphone options, some costing well north of $3,000. A professional studio is also going to have specially designed and acoustically treated rooms that not only capture the guitar amp’s sound accurately, but often enhance it with a natural ambiance that could not possibly be reproduced in a home studio environment.
With the original Line6 POD, a home studio enthusiast now had 16 amp options. With classic and rare amps going for sometimes north of $30,000, it’s very easy to see why someone might want to outfit their home studio with tens of thousands of dollars worth of amp options that remove pesky things like microphones and rooms from the equation.
Let’s be totally clear and transparent: although the technology has improved dramatically, this little magic red box was not going to give you an exact replication of the sounds of those amps. It’s clearly more along the “inspired by the sound of” route, and there were some aspects of the sound that weren’t incredibly pleasant to a guitar tone enthusiast; myself included. I owned the original Line6 POD, and while it was a blast to jam around on, the amp tones themselves were definitely a little flat and two-dimensional. That being said, if my other option was a Crate solid state half-stack being recorded with a Radio Shack microphone, I’d be taking the POD thank you very much.
Believe it or not, the original POD did find some success in professional studios. It was by no means a standard, but it’s not too difficult to find articles mentioning its use on professional recordings. Add in the POD’s solid collection of studio effects, and it was a clear sign of the future.
The Growth, Competition and The War
In the past decade, nearly every manufacturer has toyed with modeling in some way or another. Marshall has the JMD line of amplifiers that uses tube power to amplify digitally modeled preamps. Vox introduced the Valvetronix series, a hybrid of tube and modeling that reproduces dozens of non-Vox styled tones. Fender has a G-series modeling amp. The list goes on, and when you consider software modelers as well, several manufacturers have obviously bet their companies’ futures on modeling. Digitech, Johnson, Korg, Boss, and others have extensive lines of modeling products.
The proliferation of these devices and software simulations has not been limited to bedroom musicians. The real “debate” began when respected professional musicians began admitting to using them in live and/or studio situations.
The Line6 Amp Farm product for TDM versions of ProTools was likely the first amp modeling software to get significant professional studio use. With it came a host of high-end amp modeling solutions like Digidesign’s own Eleven. The quality of the simulations improved dramatically, and the debate was further fueled by shootouts in which professionals had trouble telling which was real and which wasn’t.
Waves released the promotional video above (switch to HD for the best sound) for the making of GTR3, which has the legendary Paul Reed Smith praising the quality of the simulations. This is where detractors will question the authenticity of the quotes because there is an endorsement involved. Apparently it’s not possible to get paid for something and be honest about it at the same time. In the video Paul brings in some of his favorite personally owned amplifiers and works with the engineer to get them mic’d up and sounding the way he likes. After that, they model his amp and bring him back to compare the actual amp and Waves’ model back to back. If you are a reasonably objective person, you can listen to him play both back to back, and you would have to seriously analyze the results to find a difference.
I can’t imagine not being able to use these tones on a CD, it’s just gorgeous. They sound the same, in some ways better than the amp because there’s no noise, which is really cool. – Paul Reed Smith
Personally I agree with him, but a war wouldn’t be very interesting without the other side of the argument. Let’s quickly enumerate reasons commonly given to argue against digital amp simulations:
- Don’t feel like real amps
- Don’t sound like real amps
- Latency throws the player off
- Sound ok solo, but don’t sit in the mix well
- Don’t “inspire” the player like a real amp
And then of course there are a few dogmatic arguments that are worth mentioning, but of little real impact on the argument:
- Engineers are going to grow up not knowing how to record real amps
- Modelers look amateur
- If they could afford the real amp they would
- Using modeled amps is lazy, the easy way out or will result in a mediocre song
There is an interesting lack of objective reasoning in this particular argument that often plays out in very humorous circular logic. I’ve actually had someone respond to a thread and in the same post make these two arguments for tube amps and against simulations:
- For Tube Amps: People use modelers because getting a good sound out of a tube amplifier takes work. You have to tweak it to get it to sound good in the room, get the mic position just right, put it in the right place in the room, select the right mic, the right preamp, and make sure the player is using the right guitar and pickups to bring the best out of the amp. It’s not plug and play, it takes effort to get a really great guitar sound from a tube amp.
- Against Models: The problem with amp models is that they just don’t sound that great. You have to do too much tweaking with the settings to get it to sound like the real amp. If I have to tweak all of the settings like crazy to get a good sound, why not just use a real amp?
Did you catch that? It’s ok to put a lot of tedious effort into getting a great guitar tone with a real amp, but any effort required to optimize a model is a waste of your time. I thought that was pretty funny, but also interesting.
Have you ever gone to Best Buy to look at televisions? Manufacturers regularly ship a vibrant default setting so their TVs have a ton of punch and really grab the consumer’s attention on the display floor. Amp modelers are no different. The presets that come included on these devices are designed to give you instant gratification. Who would reasonably expect that the single preset used to demonstrate the sound of a particular amplifier would exactly match every individual’s memory of that amp’s sound?
If a real amplifier has literally millions of combinations of settings that all produce different results, compounded exponentially by factors like room, mic, position of the amp, position of the listener, stomp boxes, guitar choice, cable brand, pickup type… well, how could one preset possibly be designed to replicate what everyone expects to hear?
The reality is that it can’t. In any situation, with any piece of gear, you’ll need to do some futzing with the controls to get a sound you like.
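The “millions of combinations” bit isn’t hyperbole, either. Here’s a rough, entirely hypothetical count: assume a six-knob amp (gain, bass, mid, treble, presence, master) where you can only hear ten distinct positions per knob, and you’re already at a million settings before you even touch mics, rooms or pickups:

```python
# Hypothetical combinatorics: six knobs, ten audibly distinct
# positions each. Real amps have continuous pots, switches and
# channels, so the true number is far larger.
knobs = 6
audible_positions = 10

combinations = audible_positions ** knobs
print(combinations)  # 1000000 -- a million settings from six coarse knobs
```

Add a second channel, a bright switch, or just finer knob resolution and the count multiplies again, which is the whole point: no single preset can cover that space.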
Latency is another very interesting issue. In a neutral environment, sound travels 3 feet in about 2.66 milliseconds. According to Fractal Audio, the processing latency of the AxeFX II is 1ms. That means if I’m standing in my control room 5’ from my studio monitors playing through an AxeFX II, the unit’s processing adds the same delay I’d get by stepping about one foot further back from a real amplifier. Given today’s stage environments and large studio live rooms, I’d say 1ms is an insignificant amount of latency.
But the really interesting thing about latency is that all too often musicians are already dealing with latency at much more drastic levels. ProTools HD can have latency of 5ms or more depending on the situation, and even in “low-latency mode” it only drops to 1.6ms. Many engineers don’t use software monitoring, which makes these figures a non-factor, but software monitoring is becoming more and more common.
Is 1ms going to be enough to distract a guitar player from his or her performance? Let’s step away from the guitar for a moment to answer that question. You don’t often see a band put a real piano on stage; it’s much more convenient and flexible to use a synthesizer. Today’s high-end workstations carry from 1.5 to nearly 4ms of latency.
I am sure there will be a distracting argument about how latency feels different on a keyboard than on a guitar, but strictly by the numbers, key-press to sound on a Roland Fantom-SR takes twice as long as pluck to sound on an AxeFX II.
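For anyone who wants to sanity-check the math, here’s the arithmetic from the paragraphs above as a quick sketch. The 1ms figure is Fractal’s; the 5-foot monitor distance is just my example:

```python
# Back-of-the-envelope latency math. The post's figure of sound
# traveling 3 feet in 2.66 ms works out to roughly 1.13 ft per ms.

SPEED_OF_SOUND_FT_PER_MS = 3 / 2.66  # ~1.13 feet per millisecond

def acoustic_delay_ms(distance_ft):
    """Time for sound to cross distance_ft of air."""
    return distance_ft / SPEED_OF_SOUND_FT_PER_MS

def extra_distance_ft(latency_ms):
    """How much farther from an amp you'd stand to hear the same delay."""
    return latency_ms * SPEED_OF_SOUND_FT_PER_MS

print(round(acoustic_delay_ms(5), 2))    # monitors 5 ft away: ~4.43 ms of air
print(round(extra_distance_ft(1.0), 2))  # AxeFX II's 1 ms ~= 1.13 ft of air
```

In other words, the box’s entire processing delay is smaller than the delay you get from the air between you and a 4x12 across the room.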
Don’t Sound Like Real Amps
I’ve learned one thing in this journey of understanding: posting audio samples comparing one sound to another will get you nowhere. Just as the human brain will let someone find meaning that never existed in the lyricist’s intent, many people cannot objectively compare sounds and put aside personal opinions. To debate this side of the story, let’s talk about the words good, bad, different, equal, close and similar. These words strike really interesting chords in people.
I propose that it is possible that sounding different does not equal sounding bad. I am a big fan of metaphors, so try this one on for size:
I’m a huge fan of Pixar. Its ridiculous success is a great example of this particular phenomenon. Disney had been pumping out hand drawn animation for decades. We all grew up with a collection of beloved Disney characters. When digital animation first surfaced, the consensus was that it was sterile, lifeless, lacked charm, etc. Sound familiar?
When a few guys were allowed to truly explore their passions and the limits of the technology, guided by Steve Jobs of course, they were able to turn the wave of sentiment dramatically.
When the technology was first introduced, the public compared these two images:
In the above example, if duplicating hand drawn animation is what they were going for, it’s different and bad. I mean, for computer animation it’s good, but compared to something like the Lion King, it’s really bad.
With the passing of time, technology moved on and, of course, improved:
What about this example? The technology has obviously improved drastically. It’s absolutely different, but bad? Hardly!
The similarity I see is that if you compare a Dumble to the original POD’s simulation of a Dumble, it’s going to be like the first set of images. If you compare a Rectifier to the AxeFX II’s simulation of a Rectifier it’s probably going to be closer to the second set. Much less noise, no need to reach for stomp boxes to get rid of the flubby low end. You can create what you think a Rectifier should sound like.
Being of course aware of the significant impact of taste, some will still prefer the classic Disney animation style and animatedly vilify the Pixar example with a number of derisions, but one speaks to me more than any other in this debate. There is a strong nostalgic contingent that is upset that the craft and job market of hand animators has suffered at the hands of digital animation.
There’s no debating that. I wouldn’t exactly call it a unique phenomenon though. Hurry, go find me someone who hand knits socks. Bring me a cobbler to make me some shoes. I think that trend started with the beginning of the industrial revolution and will continue long after we’re all dead. Technology begins to crudely replicate things that we’ve learned to craft with skill and experience and eventually, and I stress eventually, is able to achieve a level of quality and consistency that no hand can achieve.
It absolutely applies to audio gear. Even in the world of high end boutique gear you’ll regularly hear “I got a good one” or “I got a bad one.” If those are the laws of the land, are we saying that a bad tube amp is still better than a great modeler? If Waves modeled a Marshall amp that Paul Reed Smith spent a lifetime finding, and they modeled it with reasonable accuracy, is that model still worse than an average Marshall? We’re in some muddy waters here.
Let’s talk more about consistency. These days it’s incredibly common for artists to be forced by schedules to record in different studios. It’s entirely possible that they might not be able to carry an amp with them everywhere they go. If not all amps are the same, and of course the rooms, mics and positions are not exactly the same, is consistency not a benefit to using a high quality model? I’d suggest that a single consistent guitar track will sound better than a patchwork of rooms, amps and mics. It’s not as uncommon as you think.
The Art Itself
This last section is the one that I have the most passion for. I’ve made this statement before and am constantly surprised at the response:
If I traveled back to 1973 and stole all of Jimmy Page’s gear, but left him with an AxeFX II, Led Zeppelin III would still be Led Zeppelin III.
I’ve had an engineer flat out tell me I was crazy. These are responses to some of my points in a thread:
Originally Posted by philoking
Are you suggesting that David Gilmour could not have made a great Pink Floyd record with an AxeFX?
Yes. I am. I have not heard a Pink Floyd type sound from a modeler that comes close to that sound. IMHO.
Unless this guy entirely missed the point, I was not saying that an AxeFX can exactly replicate David Gilmour’s classic tone. I’m saying that David Gilmour could find an amazing tone with an AxeFX. He could create amazing music with one. It truly pains me that some people will give the gear more credit than the artist.
I like to think that the players we respect are respected for their skill and creativity. So many classic albums we love were recorded with whatever they had around at the time. It’s just difficult for me to imagine that anyone could believe so completely that the ability to create compelling music is so entirely dependent on an amplifier.
Joe Bonamassa’s Live at the Royal Albert Hall DVD has a special feature where they are interviewing him on a bus. On stage he uses a monstrous guitar rig with 4 tube heads, several guitar cabinets and literally dozens of pedals. On the bus he was plugged directly into a $400 Marshall Class 5 5w amp with no effects. It sounded just like Joe Bonamassa.
The artist, the talent and most importantly, the song is not intrinsically tethered to the technology that was used to create it. The premise that the artist could not have created something as compelling with different gear is as un-provable as it is unrealistic. The artists from our past that we draw inspiration from were themselves pushing the limits of the technology they had at the time. I am willing to bet that if a forum had existed in the 60s they would have been slamming using a small box of transistors to get distortion instead of taking a razor blade to your speakers.
A Better Mousetrap
I know this is now by far the longest blog post ever written, but there is one last logical point to consider. We’ve seen computer simulation in animation. We’ve discussed its use in audio at length. Computers have also been able to simulate incredibly complex quantum physics experiments, nuclear explosions, the impact of time and stress on bridges and buildings, the results of car accidents, space travel, DNA sequence mutation and even some of the complex workings of our own brain. It’s beyond improbable to think that a computer will never be able to accurately simulate something as simple as the physics of electrons in a vacuum tube. The most sought after amp you can imagine is still only a collection of some of the most simple electrical components in existence: transistors, vacuum tubes, capacitors, transformers, potentiometers, resistors and some copper wire.
If we can all agree that the technology exists to accurately simulate these components, even if not to acceptable standards in devices currently available to consumers, then we are still agreeing that the future of guitar tone is amp simulation.
That’s where the rubber meets the road. A collection of individuals involved in this debate on Gearslutz.com truly believe that the technology’s current inability to reproduce their favorite tube amp’s sound to a degree of accuracy they cannot discern (even if that judgment is emotional rather than logical) is the final word on this technology, and that a great amplifier sound should be reserved only for those who can afford to buy that amp.
The technology will continue to improve in both sound quality and performance. The scientists who are developing the formulas that allow software to model real world physics are not finished. The product managers who are developing these products are not deaf to the concerns of potential customers. Whatever your gripe with amp modeling technology is, I can guarantee you that it’s being worked on as we speak.
A whole new group of musicians who are inspired by classic tube amp sounds will grow up with the ability to reproduce the core tones and take them into wild new directions that were not remotely possible with hardware.
With technology like the AxeFX, they’ll make decisions like “I want to put the preamp from my Mesa in front of the power amp from a Plexi.” They’ll be able to change capacitors with the flick of a mouse in an editor. They’ll be able to eventually improve on everything they don’t like about these revered amplifiers.
If you think about it in that way, amplifiers are incredibly limited devices. You spend thousands of dollars on an amplifier that can really only sound like that one amplifier. Why should a musician need thousands of dollars to have access to a sound that they hear in their head? Not to go into hyperbole, but that amounts to class warfare.
It ends up being a circular problem: if you aren’t successful you can’t afford great gear, and if you can’t afford great gear you can’t be successful. I don’t believe either of those arguments for a single second. The pretense that the classic guitar tones we covet could not have been improved on, in the eyes of their creators at the time, had the technology been available, is ridiculous.
David Gilmour would have gone absolutely nuts with an AxeFX in the 70s. It’s not even debatable. The guy is a tone chaser and not afraid of technology. He had a massive rack before anyone had massive racks. He used digital devices and was on the bleeding edge of guitar technology.
I’ll leave you with the final word on this argument which I think is the true problem. I got an email from one of the people on the thread that sparked this tome of a blog post. It started with a paragraph that sums up the whole problem.
There are many studios that have hundreds of thousands to millions in gear that are going out of business because of products like axefx, plugs that “immolate” real pieces of hardware, and the lack of musicians taking pride in what they do. Today vs. 20-30 years ago people move more and more towards the “cheaper easy way out”. Things today sound so digital its a joke. Bands like animals as leaders will literally record in a bedroom studio, and when they do so, the bands that look up to them end up wanting to do they same thing that they do. This gives any kid the option of becoming an engineer or producer with almost no investment, and almost no knowledge. Real studios lose work every day and go bankrupt everyday because of this. The only thing that us studio owners can do is fight it the best we can. You have to understand, THOUSANDS of people read that thread that we were just posting in and will actually utilize the information that was posted.
The line that sticks out to me the most is “The only thing us studio owners can do is fight it the best way we can.” If you step away from the guitar and think about music generally, people are protecting their jobs and their investments. If you had a half a million dollars worth of amplifiers in storage for your clients to use, would you be comfortable with the idea that a $2,600 box could reduce the value of that investment?
So that’s the real question I leave you with. Do musicians have the right to be able to create their own music? Is the job of the engineer sacred? If the technology existed for a musician to allow the listener to hear what they hear in their head without the involvement and influence of the engineer, does it make it less valid?
That’s the part I get from most of the conversation I read on that thread. One poster went as far as to say “I don’t mind modelers as long as you aren’t using them to get a sound from an amp you don’t have, get the amp.”
Music is about the artist; music is about the end result. I’ll get a ridiculous amount of hate mail for this statement, but the consumer of music does not care about how you made the music. The consumer of music does not care what you used. The consumer of music is not bothered by what they cannot hear. As important as the producer and engineer are to the creation of music, if technology ever provided a way to remove the knowledge and experience from recording, mixing, mastering…at least the truly technical aspects of them, then what is left is a matter of taste.
Show me one artist that loves their latest album and I’ll show you one that thinks the mixer ruined it. What does that mean? It means that if the artist creating the music loves the guitar tone he or she is getting from whatever modeling device they have decided to use, for whatever reason, then its perceived quality compared to whatever amp it’s attempting to simulate is rendered irrelevant. In the end music is all about taste, and the point of it all is not to achieve a duplicate of any sound but the one that sounds good to whomever is making it.
Thanks for reading, this was a lot of fun to write and think about.