This project has been a few months in the making and so much fun to do. It’s a big progressive epic in the style of Pink Floyd. It’s based on the story “There Will Come Soft Rains” by Ray Bradbury, a poignant tale about the last day in the life of an automated house that has outlived its owners due to nuclear war. Cheery stuff, but still beautiful. I only hope my song does the story justice. I decided to let the house tell the story instead of singing it with my own voice.
If you are interested in the production of it, here are some details of what I used. I record all of my audio through a Universal Audio Apollo 16 into Logic Pro X. I have just a handful of preamps that I can use if I’m not recording at line level. For this project, the only thing I actually recorded was my vocal for the vocoder: an SM58 into a Focusrite ISA Two. I limited the crap out of the vocal so the vocoder would take on the monotone, constant level you would expect from an automated voice. I know text-to-speech is much better now, but I really wanted a vintage vibe for the house, since I was going for some of that ’50s-era feel.
Drums are all Superior Drummer, plus some loops from Damage and Evolve in the Komplete Ultimate library. I also added a few instruments from the orchestral percussion to get the gong and claves. There are quite a few other synth loops and synths droning the basslines; they are from Omnisphere and Massive. The bass is a Fender Jazz V Elite played into a SansAmp RBI.
Guitar is all a Fender USA Strat into either a KSR Ares feeding a Two-Notes Torpedo Live for the dirty tones or a Fractal AxeFX II for the cleans. There are multiple solos, all with unique tones, some droning parts and the big chorus guitars. Actually a small amount of guitar for me.
I worked out the vocoder part with the built-in Logic plugin but wasn’t at all happy with its synth abilities, so I bought iZotope’s VocalSynth plugin, which worked very well.
There are surprisingly few effects going on overall. Here’s a photo of the session:
Starting with the master buss, it’s pretty simple actually. NLS is there since I’m using it on all the tracks. Waves SSL Buss Compressor for a little glue, Slate Virtual Tape Machine for some vibe, Waves L3 for some limiting and Universal Audio’s Manley Massive Passive EQ to tweak a little.
I’ll go through what I did to various tracks, but assume Waves NLS is first on every track. Drums got Waves C4 to squash the buss a bit and I used the Logic EQ to cut off the extreme lows and highs. I’ve also done that in Superior Drummer for the kick and toms. The kit just had so much bass, I had to get rid of a lot of it.
The bass got some compression via the LA-2A from Universal Audio and some sweetening with the Manley Massive Passive EQ. You’ll see almost all the synth stuff got severely shelved. That bass just adds up, so I started cutting it at the source. That cleaned up the mix a ton.
The guitars got lots of subtle tweaks, but nearly all of them got Waves X-Noise. The Strat hum was just too much for me. I’ve since replaced the pickups to solve that problem, but these tracks were recorded before that, so I used the plugin to remove some of the noise.
Reverb is all from Waves R-Verb.
Oh, and TONS of automation.
Today I took my dog for a walk in the sun. I have been fighting some nausea, and fresh air always gets me in a creative frame of mind. While walking, I was listening to the new Dream Theater album, The Astonishing, for the umpteenth time. The album is about a dystopian society, and today, while also stewing over my frustration with the news, a thread of a thought occurred to me. I kept pulling on it until I felt convinced of three things.
- Hackers will be the most reliable source of news in the future, and they will become corporations in their own right for democratizing and monetizing information verification.
- If the government was truly worried about the people’s ability to defend themselves, they would take our computers, not our guns.
- Artificial intelligence technology that already exists today can transform the news, how we apply context from personal to global and everywhere in between, and even politics and how our government works.
This is a really long rabbit hole, but stick with me; I promise it will make sense.
Framing the Problem
I am probably one of the few people who read four different news sites every day. I don’t do this because I expect different stories. I do this to understand how the stories are spun according to each news corporation’s political leanings. The problem with not doing this is that your news agency can lead you down the primrose path to misunderstanding without ever legally telling a lie. That’s how Donald Trump becomes a legitimate contender for president, and that’s how the world slumps into dystopia. Thankfully, that is not going to happen.
Getting to the Source
If you have watched or read the news lately, you have probably noticed that it has become almost entirely reliant on social media. This makes sense, of course, because social media gets news to people faster than the networks can route it through themselves. This isn’t a problem they can ever solve, so they have adapted, using social media to replace the traditional news wires. That feed is then, of course, filtered and editorialized to condition it for the intended spin.
As Facebook, LinkedIn, Twitter, YouTube, Instagram and their many competitors and would-be competitors meticulously capture every event and serve it to whoever they think cares about it, one byproduct emerges: in almost all cases, an article can be traced to its genesis. Today it would require access to people’s computers, computer forensics and warrants to pull this off, but as the news moves faster, the digital fingerprint gets clearer. There is still one bottleneck though: humans.
Spin aside, people are not capable of reading all of the news, and groups of people are not capable of reading the news and applying a consistent interpretation to it, but computers can. Imagine for a minute a search engine indexing system similar to what Google or Bing use to match content to what we are searching for, tuned instead to index all of the connections between not just the news, but the actual source material that inspired a blog post or even a tweet. Privacy issues notwithstanding, the technology isn’t even remotely futuristic. Every part of that scenario is technically possible with today’s software and hardware.
But that’s not all. Now imagine that this news search engine aggregated stories based on an entirely dynamic dictionary of topics. Using semantic analysis, a computer could meticulously tag content and aggregate those tags at any level: personal, your apartment building, your neighborhood, your city, your county, your state, your voting region, climate area, time zone; basically any dimension that could be inferred from where your connection originated, the cookies in your browser, the social media accounts you are currently logged into, or the language your browser is using. The sky is the limit.
With that type of aggregation ability, a handful of “man my water tastes like shit” tweets can cause our intelligent news software to bubble up a tweet-sized news article to people in your city saying, “This morning hundreds of people are complaining about their water quality in Flint, Michigan.”
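That bubbling-up step can be sketched in a few lines. This is purely illustrative: the keyword table, threshold, and post format are all invented here, and a real system would use proper semantic analysis rather than keyword matching.

```python
from collections import Counter

# Hypothetical sketch of topic tagging plus regional aggregation.
# The keyword map, threshold, and post shape are invented for illustration.
TOPIC_KEYWORDS = {"water": "water-quality", "tastes": "water-quality"}
THRESHOLD = 3  # complaints needed before a regional story bubbles up

def tag(text):
    """Map a post's words to a set of topic tags."""
    return {TOPIC_KEYWORDS[w] for w in text.lower().split() if w in TOPIC_KEYWORDS}

def bubble_up(posts):
    """Count (region, topic) pairs and emit a headline for any over threshold."""
    counts = Counter()
    for post in posts:
        for topic in tag(post["text"]):
            counts[(post["region"], topic)] += 1
    return [f"{n} people in {region} are reporting a {topic} issue"
            for (region, topic), n in counts.items() if n >= THRESHOLD]

posts = [{"region": "Flint, MI", "text": "man my water tastes like shit"}] * 4
print(bubble_up(posts))
# → ['4 people in Flint, MI are reporting a water-quality issue']
```

Scale the counting layer up and swap the keyword lookup for real semantic tagging, and you have the skeleton of the aggregator described above.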
This type of system could detect a food quality issue at a global scale and predict other areas that could have similar issues. Imagine if the news were capable of not being sensationalistic: if Chipotle’s current issue had been caught at several local levels, it could have been reported with the appropriate priority in the other markets identified by a system that knew where all the stores were located.
But What About That Spin?
Yeah, about that. News wouldn’t be able to sell ads if it were just a bunch of dashboards showing numbers, and a line-item list of events that should be contextually important to you would make knowing what’s going on in the world feel like trying to get to the bottom of your inbox, wouldn’t it?
Of course editorial content is important. Computers can definitely solve a lot of the problems we are facing, but it’s still the human mind that can find creative ways to correlate events that have never been correlated before. Editorial content is not going anywhere, but can this type of system help you trust it? Absolutely.
Today, when you log into Amazon.com from your browser, you see that little lock. It means the connection is encrypted and your personal information is being handled securely. That lock is what is supposed to make you trust the site with your data. But what about trusting that site’s data?
This system we are imagining would have a monetized API service, probably priced on an audience scale so blogs and small businesses could use it too. The API would sift its incredible database of content and run its algorithms at mind-numbing speeds. This part isn’t quite possible with today’s technology, but it’s a problem of horsepower more than software. Imagine that every time you loaded something, the browser of your choice gave you an indication of that content’s alignment with the global understanding of truth. SSL for honesty.
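As a toy illustration of the “SSL for honesty” idea, imagine the browser receives a 0-to-1 alignment score from that imagined API and maps it to a badge next to the content. The thresholds and badge names below are entirely invented; the point is only that the display logic is trivial once the score exists.

```python
# Toy illustration of the "SSL for honesty" idea: the imagined verification
# API returns a 0.0-1.0 alignment score, and the browser maps it to a badge.
# Thresholds and badge names are entirely invented for this sketch.
def honesty_badge(score):
    """Map an alignment score to the indicator a browser might display."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must be between 0 and 1")
    if score >= 0.8:
        return "verified"     # broadly consistent with traced sources
    if score >= 0.5:
        return "disputed"     # partially supported; show context
    return "unsupported"      # contradicted by the source record

print(honesty_badge(0.92))  # → verified
print(honesty_badge(0.30))  # → unsupported
```

The hard part, of course, is everything the sketch assumes away: producing a trustworthy score in the first place.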
Creating Real Change
Now that we have a way to wade through truth down to the lowest level, and we have a system that can aggregate and validate that instantly, let’s change politics.
Today, politicians have little fear of lying, because nobody fact-checks them in the moment. Our hacker news network has democratized the collection of news and, via open-source systems that can be scrutinized down to the code level, created a way to calculate an honesty score on content.
Our API for honesty has made our hacker non-profit one of the wealthiest organizations on the planet. With a dogmatic mission to keep the transmission of information honest and auditable, our fictional group of hackers has decided to launch an actual news network to compete with all the major news networks.
The only difference with this new network is that the news content is purely a feed of video clips, like YouTube, curated for you based on either your personal preferences or things with a potential impact on you: your location, gender, employer, investments, your family’s locations; it’s really an infinite list. With no interest in selling you ads, and an open, public-service approach to identity and privacy, the sky is the limit.
What this gets us is this: as you watch a politician speak, or see a celebrity tweet something, it’s being analyzed against a common understanding of the facts in real time, and appropriate context is being displayed in a meaningful way. Imagine if Donald Trump (my personal political punching bag, if I am being transparent) decided to make one of his statements about the number of rapists coming across the border. While you hear him make the statement, the display is showing you actual crime statistics, sliced appropriately for the context of the claim being made. It’s digital liar-liar-pants-on-fire. That’s awesome.
I would assume that if politicians knew that any time the public saw them speak, they would be reminded of their voting records, a statistical expansion of the data they cite, and how their opinions align with the inferred political agendas of the corporations that donate to their campaigns… they would probably either say nothing, or get a lot more honest.
This Is Just The Beginning
I get that to a lot of people, artificial intelligence is scary. There are concerns from all sorts of perspectives, many of them well founded. That being said, when I look at the current election, there is one fear that strikes me much harder than the worry that a computer will kill me to save someone else because they are richer.
That fear is based in reality. Anyone who knows me knows that I do not subscribe to conspiracies and have never owned a tin foil hat. Still, the current election is proof that the nation is very much divided, and the actions of our government have proven that disagreement has finally blossomed into gridlock.
If technology existed today to give you confidence that information was accurate, and to clearly point out when it isn’t, and if we had tools that could make sure the people we elect are truly representing our agendas and not their own, we would be fools to be afraid of them.
Fun to think about.
An oldie, but a goodie…
I was discussing this post with a friend this morning and realized I had wiped it from the internet. I found it on Wayback so I could repost it. Enjoy!
Recently I have been involved in a lively debate on the future of digital amp simulations on gearslutz.com’s forums. I use the term debate lightly because the conversation was more “they aren’t real so they suck” than anything useful. I’d like to spell out what I feel is the current reality and future of computer modeling of analog audio gear. I say analog audio gear generically because this technology is not limited to guitar amplifiers.
In today’s professional studios it’s incredibly common to use simulations of amplifiers, but also compressors, equalizers, reverb and delay units, tape and tube simulation, and pretty much any piece of analog gear that’s ever had a reputation for anything useful.
Most people credit Line6’s POD device with truly igniting the popularity of amp simulations. Building on the AxSys 212, Line6 exploded in the early 2000s with the success of the original POD digital amp modeler. While platinum-selling artists have access to million-dollar recording facilities and can draw from hundreds of thousands of studio-owned or rented amplifiers to record their albums, home and project studio owners traditionally had one or two amplifiers to use when making music. Beyond the small number of options, the outlook for making a great guitar recording grew worse when you factor in microphone and preamp quality and room acoustics. A professional recording studio is going to have dozens of microphone options, some costing well north of $3,000. A professional studio is also going to have specially designed, acoustically treated rooms that not only capture the guitar amp’s sound accurately, but often enhance it with a natural ambiance that could not possibly be reproduced in a home studio environment.
With the original Line6 POD, a home studio enthusiast suddenly had 16 amp options. With classic and rare amps sometimes going for north of $30,000, it’s very easy to see why someone might want to outfit their home studio with the sounds of tens of thousands of dollars worth of amps, while removing pesky things like microphones and rooms from the equation.
Let’s be totally clear and transparent: although the technology has improved dramatically, this little magic red box was not going to give you an exact replication of the sounds of those amps. It’s clearly more along the “inspired by the sound of” route, and there were some aspects of the sound that weren’t incredibly pleasant to a guitar tone enthusiast, myself included. I owned the original Line6 POD, and while it was a blast to jam around on, the amp tones themselves were definitely a little flat and two-dimensional. That being said, if my other option was a Crate solid-state half-stack recorded with a Radio Shack microphone, I’d be taking the POD, thank you very much.
Believe it or not, the original POD did find some success in professional studios. It was by no means a standard, but it’s not too difficult to find articles mentioning its use on professional recordings. Add in the POD’s solid collection of studio effects, and it was a clear sign of the future.
The Growth, Competition and The War
In the past decade, nearly every manufacturer has toyed with modeling in some way or another. Marshall has the JMD line of amplifiers, which uses tube power to amplify digitally modeled preamps. Vox introduced the Valvetronix series, a hybrid of tube and modeling that reproduces dozens of non-Vox-styled tones. Fender has a G-series modeling amp. The list goes on, and when you consider software modelers as well, several manufacturers have clearly bet their company’s future on modeling. Digitech, Johnson, Korg, Boss and others have extensive lines of modeling products.
The proliferation of these devices and software simulations have not been limited to bedroom musicians. The real “debate” began when respected professional musicians began admitting to using them in live and/or studio situations.
The Line6 Amp Farm product for TDM versions of ProTools was likely the first amp modeling software to get significant professional studio use. Along with it came a host of high-end amp modeling solutions like Digidesign’s own Eleven. The quality of the simulations improved dramatically, and the debate was further fueled by shootouts that left professionals struggling to determine which tone was real and which wasn’t.
Waves released the promotional video above (switch to HD for the best sound) about the making of GTR3, which has the legendary Paul Reed Smith praising the quality of the simulations. This is where detractors will question the authenticity of the quotes because there is an endorsement involved; apparently it’s not possible to get paid for something and be honest about it at the same time. In the video, Paul brings in some of his favorite personally owned amplifiers and works with the engineer to get them mic’d up and sounding the way he likes. After that, they model his amp and bring him back to compare the actual amp and Waves’ model back to back. If you are a reasonably objective person, you can listen to him play both and you would have to seriously analyze the results to find a difference.
I can’t imagine not being able to use these tones on a CD, it’s just gorgeous. They sound the same, in some ways better than the amp because there’s no noise, which is really cool. – Paul Reed Smith
Personally I agree with him, but a war wouldn’t be very interesting without the other side of the argument. Let’s quickly enumerate reasons commonly given to argue against digital amp simulations:
- Don’t feel like real amps
- Don’t sound like real amps
- Latency throws the player off
- Sound ok solo, but don’t sit in the mix well
- Don’t “inspire” the player like a real amp
And then of course there are a few dogmatic arguments that are worth mentioning, but of little real impact on the argument:
- Engineers are going to grow up not knowing how to record real amps
- Modelers look amateur
- If they could afford the real amp they would
- Using modeled amps is lazy, the easy way out or will result in a mediocre song
There is an interesting lack of objective reasoning in this particular argument that often plays out in very humorous circular logic. I’ve actually had someone respond to a thread and in the same post make these two arguments for tube amps and against simulations:
- For Tube Amps: People use modelers because getting a good sound out of a tube amplifier takes work. You have to tweak it to get it to sound good in the room, get the mic position just right, put it in the right place in the room, select the right mic, the right preamp, and make sure the player is using the right guitar and pickups to bring the best out of the amp. It’s not plug and play, it takes effort to get a really great guitar sound from a tube amp.
- Against Models: The problem with amp models is that they just don’t sound that great. You have to do too much tweaking with the settings to get it to sound like the real amp. If I have to tweak all of the settings like crazy to get a good sound, why not just use a real amp?
Did you catch that? It’s ok to put a lot of tedious effort into getting a great guitar tone with a real amp, but any effort required to optimize a model is a waste of your time. I thought that was pretty funny, but also interesting.
Have you ever gone to Best Buy to look at televisions? Manufacturers regularly use a vibrant picture setting by default so their TVs have a ton of punch and really grab the consumer’s attention on the display floor. Amp modelers are no different. The presets that come included on these devices are designed to give you instant gratification. Who would reasonably expect that the single preset used to demonstrate the sound of a particular amplifier would exactly match every individual’s memory of that amp’s sound?
If a real amplifier has literally millions of combinations of settings that all produce different results, compounded exponentially by factors like room, mic, position of the amp, position of the listener, stomp boxes, guitar choice, cable brand, pickup type… well, how could one preset possibly be designed to replicate what everyone expects to hear?
The reality is that it can’t. In any situation, with any piece of gear, you’ll need to do some futzing with the controls to get a sound you like.
Latency is another very interesting issue. In a neutral environment, sound travels 3 feet in 2.66 milliseconds. According to Fractal Audio, the processing latency of the AxeFX II is 1 ms. That means that if I am standing in my control room playing through an AxeFX II, 5 feet from my studio monitors, the extra delay I hear is roughly equivalent to standing one foot further away from a real amplifier. Given today’s stage environments and large studio live rooms, 1 ms is an insignificant amount of latency once you account for the environment.
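The arithmetic behind that claim, using the figures above (sound covering 3 feet in 2.66 ms, and Fractal’s quoted 1 ms of AxeFX II processing), works out like this:

```python
# Sound covers 3 feet in 2.66 ms, so 1 ms of processing latency corresponds
# to roughly 1.1 feet of extra listening distance. Figures are from the post.
FT_PER_MS = 3 / 2.66  # ~1.13 feet of sound travel per millisecond

def acoustic_delay_ms(distance_ft):
    """Time for sound to travel distance_ft feet, in milliseconds."""
    return distance_ft / FT_PER_MS

monitor_delay = acoustic_delay_ms(5)   # ~4.43 ms standing 5' from the monitors
total = monitor_delay + 1.0            # plus the AxeFX II's 1 ms of processing
equivalent_ft = total * FT_PER_MS      # real-amp distance with the same delay

print(f"total delay: {total:.2f} ms, like a real amp at {equivalent_ft:.1f} ft")
# → total delay: 5.43 ms, like a real amp at 6.1 ft
```

In other words, the modeler at 5 feet sounds as “late” as a real amp at about 6 feet, a difference no player would notice.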
But the really interesting thing about latency is that musicians all too often deal with latency at much more drastic levels. ProTools HD can have latency of 5 ms or more depending on the situation, and even in “low-latency mode” it only drops to 1.6 ms. Many engineers do not use software monitoring, making these figures a non-factor, but software monitoring is becoming more and more common.
Is 1 ms going to be enough to distract a guitar player from his or her performance? Let’s step away from the guitar for a moment to answer that question. You rarely see a band put a real piano on stage; it’s much more convenient and flexible to use a synthesizer. Today’s high-end workstations carry from 1.5 to nearly 4 ms of latency.
I am sure there will be a distracting argument about how latency feels different on a keyboard than on a guitar, but strictly in terms of timing, the gap from key press to sound on a Roland Fantom-SR can be more than twice the gap from pluck to sound on an AxeFX II.
Don’t Sound Like Real Amps
I’ve learned one thing in this journey of understanding: posting audio samples comparing one sound to another will get you nowhere. Just as the human brain will let someone find meaning that never existed in the lyricist’s intent, many people cannot objectively compare sounds and put aside personal opinions. To debate this side of the story, let’s talk about the words good, bad, different, equal, close and similar. These words strike really interesting chords in people.
I propose that it is possible that sounding different does not equal sounding bad. I am a big fan of metaphors, so try this one on for size:
I’m a huge fan of Pixar. Its ridiculous success is a great example of this particular phenomenon. Disney had been pumping out hand-drawn animation for decades; we all grew up with a collection of beloved Disney characters. When digital animation first surfaced, the common consensus was that it was sterile, lifeless, lacked charm, etc. Sound familiar?
When a few guys were allowed to truly explore their passions and the limits of the technology, guided by Steve Jobs of course, they were able to turn the wave of sentiment dramatically.
When the technology first appeared, the public compared these two images:
In the above example, if duplicating hand drawn animation is what they were going for, it’s different and bad. I mean, for computer animation it’s good, but compared to something like the Lion King, it’s really bad.
With the passing of time, technology moved on, and of course improved:
What about this example? The technology has obviously improved drastically. It’s absolutely different, but bad? Hardly!
The similarity I see is that if you compare a Dumble to the original POD’s simulation of a Dumble, it’s going to be like the first set of images. If you compare a Rectifier to the AxeFX II’s simulation of a Rectifier it’s probably going to be closer to the second set. Much less noise, no need to reach for stomp boxes to get rid of the flubby low end. You can create what you think a Rectifier should sound like.
Being aware, of course, of the significant impact of taste, some will still prefer the classic Disney animation style and animatedly vilify the Pixar example with a number of derisions, but one speaks to me more than any other in this debate: a strong nostalgic contingent is upset that the craft and job market of hand animators has suffered at the hands of digital animation.
There’s no debating that. I wouldn’t exactly call it a unique phenomenon though. Hurry, go find me someone who hand knits socks. Bring me a cobbler to make me some shoes. I think that trend started with the beginning of the industrial revolution and will continue long after we’re all dead. Technology begins to crudely replicate things that we’ve learned to craft with skill and experience and eventually, and I stress eventually, is able to achieve a level of quality and consistency that no hand can achieve.
It absolutely applies to audio gear. Even in the world of high-end boutique gear you’ll regularly hear “I got a good one” or “I got a bad one.” If that’s the law of the land, are we saying that a bad tube amp is still better than a great modeler? If Waves modeled a Marshall amp that Paul Reed Smith spent his lifetime finding, and they are able to model it with reasonable accuracy, is that still worse than an average one? We’re in some muddy waters here.
Let’s talk more about consistency. These days it’s incredibly common for artists to be forced by schedules to record in different studios, and it’s entirely possible that they can’t carry an amp with them everywhere they go. If no two amps are the same, and of course the rooms, mics and positions are not exactly the same either, is consistency not a benefit of using a high-quality model? I’d suggest that a single consistent guitar tone will sound better than a patchwork of rooms, amps and mics. It’s not as uncommon a situation as you think.
The Art Itself
This last section is the one that I have the most passion for. I’ve made this statement before and am constantly surprised at the response:
If I traveled back to 1973 and stole all of Jimmy Page’s gear, but left him with an AxeFX II, Led Zeppelin III would still be Led Zeppelin III.
I’ve had an engineer flat out tell me I was crazy. These are responses to some of my points in a thread:
Originally Posted by philoking
Are you suggesting that David Gilmour could not have made a great Pink Floyd record with an AxeFX?
Yes. I am. I have not heard a Pink Floyd type sound from a modeler that comes close to that sound. IMHO.
Unless this guy entirely missed the point, I was not saying that an AxeFX can exactly replicate David Gilmour’s classic tone. I’m saying that David Gilmour could find an amazing tone with an AxeFX. He could create amazing music with one. It truly pains me that some people will give the gear more credit than the artist.
I like to think that the players we respect are respected for their skill and creativity. So many classic albums we love were recorded with whatever they had around at the time. It’s just difficult for me to imagine that anyone could believe so completely that the ability to create compelling music is so entirely dependent on an amplifier.
Joe Bonamassa’s Live at the Royal Hall DVD has a special feature where they are interviewing him on a bus. On stage he uses a monstrous guitar rig with 4 tube heads, several guitar cabinets and literally dozens of pedals. On the bus he was plugged directly into a $400 Marshall Class 5 5w amp with no effects. It sounded just like Joe Bonamassa.
The artist, the talent and, most importantly, the song are not intrinsically tethered to the technology used to create them. The premise that the artist could not have created something as compelling with different gear is as unprovable as it is unrealistic. The artists from our past that we draw inspiration from were themselves pushing the limits of the technology they had at the time. I am willing to bet that if forums had existed in the ’60s, people would have been slamming the use of a small box of transistors to get distortion instead of taking a razor blade to your speakers.
A Better Mousetrap
I know this is now by far the longest blog post ever written, but there is one last logical point to consider. We’ve seen computer simulation in animation. We’ve discussed its use in audio at length. Computers have also simulated incredibly complex quantum physics experiments, nuclear explosions, the impact of time and stress on bridges and buildings, the results of car accidents, space travel, DNA sequence mutation and even some of the complex workings of our own brains. It’s beyond improbable to think that a computer will never be able to accurately simulate something as simple as the physics of electrons in a vacuum tube. The most sought-after amp you can imagine is still only a collection of some of the simplest electrical components in existence: transistors, vacuum tubes, capacitors, transformers, potentiometers, resistors and some copper wire.
If we can all agree that the technology exists to accurately simulate these components, even if not to acceptable standards in devices currently available to consumers, then we are still agreeing that the future of guitar tone is amp simulation.
That’s where the rubber meets the road. A collection of individuals involved in this debate on Gearslutz.com truly believe that the technology’s current inability to reproduce their favorite tube amp’s sound to a degree of accuracy they cannot discern (even if that judgment is emotional rather than logical) is the final word on this technology, and that a great amplifier sound should be reserved for those who can afford to buy the amp.
The technology will continue to improve in both sound quality and performance. The scientists who are developing the formulas that allow software to model real world physics are not finished. The product managers who are developing these products are not deaf to the concerns of potential customers. Whatever your gripe with amp modeling technology is, I can guarantee you that it’s being worked on as we speak.
A whole new group of musicians who are inspired by classic tube amp sounds will grow up with the ability to reproduce the core tones and take them into wild new directions that were not remotely possible with hardware.
With technology like the AxeFX, they’ll make decisions like “I want to put the preamp from my Mesa in front of the power amp from a Plexi.” They’ll be able to change capacitors with the flick of a mouse in an editor. They’ll be able to eventually improve on everything they don’t like about these revered amplifiers.
If you think about it in that way, amplifiers are incredibly limited devices. You spend thousands of dollars on an amplifier that can really only sound like that one amplifier. Why should a musician need thousands of dollars to have access to a sound that they hear in their head? Not to go into hyperbole, but that amounts to class warfare.
It ends up being a circular problem: if you aren’t successful you can’t afford great gear, and if you can’t afford great gear you can’t be successful. I don’t believe either of those arguments for a single second. The pretense that none of the classic guitar tones we covet could have been improved on, in the eyes of their creators at the time, had the technology been available, is ridiculous.
David Gilmour would have gone absolutely nuts with an AxeFX in the ’70s. It’s not even debatable. The guy is a tone chaser and not afraid of technology. He had a massive rack before anyone had massive racks. He used digital devices and was on the bleeding edge of guitar technology.
I’ll leave you with what I think is the true heart of this argument. I got an email from one of the people on the thread that sparked this tome of a blog post, and it started with a paragraph that sums up the whole problem.
There are many studios that have hundreds of thousands to millions in gear that are going out of business because of products like axefx, plugs that “immolate” real pieces of hardware, and the lack of musicians taking pride in what they do. Today vs. 20-30 years ago people move more and more towards the “cheaper easy way out”. Things today sound so digital its a joke. Bands like animals as leaders will literally record in a bedroom studio, and when they do so, the bands that look up to them end up wanting to do they same thing that they do. This gives any kid the option of becoming an engineer or producer with almost no investment, and almost no knowledge. Real studios lose work every day and go bankrupt everyday because of this. The only thing that us studio owners can do is fight it the best we can. You have to understand, THOUSANDS of people read that thread that we were just posting in and will actually utilize the information that was posted.
The line that sticks out to me the most is “The only thing that us studio owners can do is fight it the best we can.” If you step away from the guitar and think about music generally, people are protecting their jobs and their investments. If you had half a million dollars worth of amplifiers in storage for your clients to use, would you be comfortable with the idea that a $2,600 box could reduce the value of that investment?
So that’s the real question I leave you with. Do musicians have the right to be able to create their own music? Is the job of the engineer sacred? If the technology existed for a musician to allow the listener to hear what they hear in their head without the involvement and influence of the engineer, does it make it less valid?
That’s the part I get from most of the conversation I read on that thread. One poster went as far as to say “I don’t mind modelers as long as you aren’t using them to get a sound from an amp you don’t have, get the amp.”
Music is about the artist; music is about the end result. I’ll get a ridiculous amount of hate mail for this statement, but the consumer of music does not care how you made the music. The consumer of music does not care what you used. The consumer of music is not bothered by what they cannot hear. As important as the producer and engineer are to the creation of music, if technology ever provided a way to remove the knowledge and experience from recording, mixing and mastering, at least the truly technical aspects of them, then what is left is a matter of taste.
Show me one artist who loves their latest album and I’ll show you one who thinks the mixer ruined it. What does that mean? It means that if the artist creating the music loves the guitar tone he or she is getting from whatever modeling device they have decided to use, its perceived quality compared to whatever amp it’s attempting to simulate is irrelevant. In the end, music is all about taste, and the point is not to achieve a duplicate of any sound but to find the one that sounds good to whoever is making it.
Thanks for reading, this was a lot of fun to write and think about.