Sunday, October 31, 2010

How to Sell Your Music on iTunes and MSN

This ebook describes in detail how to take a professional approach to setting up a legal entity for your music project, and how to get your music distributed worldwide through all the major online music stores, including iTunes, Virgin, Sony, Yahoo, MSN, and Napster.


Check it out!

Read Music Notes Easily - For Children

How Your Child Or Student Can Read Music Notes -- Easily And Quickly!


Check it out!

Saturday, October 30, 2010

Make It In Music

A complete guide on how to make it in the music industry. Written by professional musician Dane Espinoza, this eBook explains how to get on tour, promote your music, land a record deal, get on the radio, and much more. Great step-by-step affiliate program!


Check it out!

Friday, October 29, 2010

Promote Music Artist ThirdTemple

Promote independent electronic artist ThirdTemple. ThirdTemple pushes the boundaries of the IDM, techno, and trance genres, introducing instruments and samples not commonly used in modern electronica. Clients can listen before they buy.


Check it out!

The A To Z Of Music Licensing

Comprehensive program from a Berklee alumnus explains in detail how to license music for use in TV and film. This program converts extremely well. 50% commission on all sales of a $40.00 product. One of a kind program.


Check it out!

Wednesday, October 27, 2010

Music Theory Made Easy

Music Theory Made Easy videos show you how to internalize, know, and do music theory in your head. Free videos reveal the 4 secrets to knowing all music theory! In Video 1, Part 1: after viewing this free video, you'll know all the sharp major scales.


Check it out!

Tuesday, October 26, 2010

Music, Model, Entertainer, and Photographer Internet Marketing Books & Videos

Internet marketing books (4) and videos (5) for music artists, models, entertainers, photographers, and actors. Videos: SEO, Traffic Generation, Insider Traffic, Viral Video Marketing. Books: Email, PPC, Social Media, and the Trent Partridge Music/Model Marketing Book.


Check it out!

New Music Economy - The Music Marketing System

Music marketing system that teaches musicians how to work their business like an Internet marketer. As used by artists on Cash Money, Warner Bros, etc. Pays 50%. http://genyrockstars.com/newmusiceconomy/affiliates


Check it out!

Monday, October 25, 2010

Play Popular Music With Ease

Play popular music - easily and quickly!


Check it out!

Record Label Business Plan 2.0 + Music And Entertainment Contracts!

Start your own successful music company and get funding from investors with a professional record label business plan template. Free bonus offers included, such as the Musicians Upload Directory plus a big music and entertainment business contracts package.


Check it out!

Sunday, October 24, 2010

Music Theory Course and Workshops

Music Principles and Theory Course, fundamentals to intermediate, for all music students. 4 workshops and two course options. Great value for beginner and intermediate music students.


Check it out!

Music Scores.com

Download Classical Sheet Music. Originals And Arrangements For All Abilities And Instruments.


Check it out!

Saturday, October 23, 2010

Music Marketing Manifesto

Advanced strategies, tactics and tips for selling your music on the internet, from major label recording artist John Oszajca.


Check it out!

How to Read Music...In One Evening!

Why take months to learn how to read music when you can do it in 3 or 4 hours? When you learn the 3 elements of music -- melody, rhythm, & harmony -- then combine them into a song, it all makes sense. You won't be great right away, but you'll be rolling.


Check it out!

Friday, October 22, 2010

Improve Your Recordings and Mixes, on the Cheap

Some of the easiest ways to improve your recordings are also the cheapest. In fact, the most effective techniques require no money at all.


Here’s a collection of tips you might find helpful the next time a pricey piece of gear stands between you and great recordings.


Have a friend perform: Home recording, especially for singer/songwriters and electronic musicians, often involves a single musician writing and recording all the music. But artists in this situation can find themselves too close to the song at mix time to critique it objectively.


Working with other musicians might initially complicate recording and mixing. However, creating a great mix depends, in part, on your ability to remove unnecessary details, and most of us are more comfortable objectively critiquing someone else’s work. So asking a friend (or some professionals) to perform a track or two will ultimately make mixing easier, and more effective.


Get more ears on the mix: With any task requiring attention to detail, it’s easy to lose the forest for the trees. And so it goes with mixing. A second or third opinion can draw your attention back to details you’ve glossed over.


And outside opinions needn’t come from other musicians and engineers. (Although the homerecording.com MP3 mixing clinic is a great source for free advice.) Often, regular listeners give the best feedback because they don’t think in technical terms about the production, and instead form their thoughts on how the song makes them feel. And some of the best mix feedback I’ve gotten has come from children, who are unconditioned by musical convention.


Listen on multiple systems: Hearing a mix through different speakers is a little like getting a second opinion, and professional mixing engineers rely on this technique. Chris Lord-Alge, for example, keeps a portable radio near his console for checking mixes:



[E]very client who comes in here wants to hear their mixes on it. If it doesn’t sound good through 2-inch speakers on your little boom box, what’s the point? It’s got to sound big on a small speaker.


Avoid dogma: Our hobby (or profession, if you’re lucky) is plagued with religious arguments, like “tube gear sounds better,” and “analog sounds warmer than digital.” Regardless of each argument’s merit, these dogmatic issues over-complicate the recording process, and distract us from the importance of technique – which, of course, costs nothing!


Cut. Ruthlessly: As musicians, our egos push us to put everything we’ve got into every part we record. But virtuoso performances and great recordings don’t necessarily go together. The whole, as they say, is often greater than the sum of the parts.


In most song arrangements, over-instrumentation usually just leads to clutter. And along with being more difficult to mix, clutter rarely sounds good.

The so-called “car test,” checking a mix through car speakers, helps gauge the overall balance of a mix rather than the translation of small details. So instead of burning a CD of every mix you want to check, transfer the mixes to a cheap MP3 player. You may lose tiny details with the MP3 compression, but you’ll still be able to judge if the bass is too loud or the vocals are too quiet, and you’ll save time and money in the long run.

Make every part do work: Ensure that every part competing for the listener’s attention is supposed to compete for the listener’s attention.


Practice your performance before hitting record: The benefits of practice should be obvious to all musicians, but home recording fosters a “write as you record” approach to song creation.


Practice takes time. But it needn’t hamper the creative process; and in most cases it will ultimately save time. Though the tracks may take longer to record, it’s far easier – and quicker – to mix a set of well-performed, polished performances.


Not only do the performances themselves benefit from practice, but the final mix will sound more professional.


Use reference CDs: No single technique will do more to improve the quality of your mixes. Working with a reference mix is, in some ways, like getting a free lesson on mixing from a professional engineer.


Practice mixing when you’re not in the studio: Every mixing engineer should spend time listening critically to professional mixes. Set aside some time every day, say 10 minutes, to immerse yourself in a mix someone else has done. Consider the panning, which instruments take your focus, and how the focus changes as the song evolves. Try to determine the effects in use, and why they were chosen. In modern pop and rock mixes, the interplay between the lead vocal and the snare drum is particularly important, as is the bass guitar/kick drum relationship, so spend some time analyzing these parts in detail.

See Also: Create more professional home recordings


For more home recording tips,
Subscribe to the Hometracked feed, or receive email updates.

Tags: arrangement, mixing, professional-engineers


View the original article here

Thursday, October 21, 2010

Quick Home Studio Monitor Tests

I keep a collection of audio samples designed to help check my monitor setup. Test tones, essentially, that I use after I’ve moved my speakers or desk, to ensure the speakers still behave as they should.


I’ve included 4 of the samples below, and I hope you find them useful – and possibly enlightening. Each tests a facet of the two most common monitoring problems in home studios: Uneven bass response, and poor stereo imaging.


Contents: A sine wave sweeping from 40Hz to 300Hz.
Use this to test for: Bass response, sympathetic vibrations.

Unless you’re outdoors, or listening on headphones, you’ll notice the volume rising and falling as the audio plays. That’s normal, although the level doesn’t actually change. (Open the MP3 in your DAW to confirm this.) Rather, you’re exposing the acoustic response of your room.


Use this test as a rough gauge of how extreme the acoustic issues are in your space. (You can flatten the response somewhat, but acoustic treatment is a topic unto itself. For some more information, check the quick backgrounder on home studio acoustics.)


Additionally, the sweep can expose low-frequency dependent rattles, buzzes, or other sympathetic vibrations happening in the area around you. With this test, I once discovered the casing on an overhead light shook at exactly 140Hz, after puzzling with a mix for 15 minutes, unable to isolate the odd rattling sound.


Contents: Consecutive semitones from G1 (46.2Hz) to F3 (174.6Hz)
Use this to test for: Bass response, specific problem notes.

Here, the tone ascends through a chromatic scale. Certain notes will jump out or disappear, for the same reasons as above. Remember these notes, as they’re important to the character of your mixing space. Specifically, when you know that, for example, the B at 61Hz drops in volume in your space, you can reconsider when you find yourself reaching for the fader every time the bass guitar plays B.
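Each of those semitones is just the previous frequency multiplied by the twelfth root of 2, so the whole test series can be computed from the starting pitch. A quick sketch (the frequencies follow the article; note that under standard A440 naming, 46.25Hz is usually labelled F#1 rather than G1):

```python
START = 46.25             # Hz -- the article's "G1"
SEMITONE = 2 ** (1 / 12)  # equal-temperament step, ~1.0595 per semitone

# 24 consecutive notes (23 semitone steps): 46.2Hz up to 174.6Hz
notes = [START * SEMITONE ** i for i in range(24)]
```

Generating a short tone at each of these frequencies, in order, reproduces the ascending chromatic test described above.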


Contents: 5 bursts of white noise at different pan positions.
Use this to test for: Coarse panning issues.

This file plays sound at the center, hard left, hard right, half left, and half right. If you don’t hear 5 separate panning locations, you’ve got stereo issues!
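Building a file like this yourself is straightforward. The sketch below writes 5 half-second noise bursts at the positions the article lists, using a constant-power pan law (the pan law and the burst/gap timings are my own assumptions; the article doesn't specify them):

```python
import math
import random
import struct
import wave

def pan_gains(pos):
    """Constant-power pan law: pos -1.0 (hard left) .. +1.0 (hard right)."""
    angle = (pos + 1) * math.pi / 4          # maps to 0 .. pi/2
    return math.cos(angle), math.sin(angle)  # (left gain, right gain)

def write_pan_test(path, positions=(0.0, -1.0, 1.0, -0.5, 0.5),
                   burst=0.5, gap=0.25, rate=44100):
    """Write a stereo WAV of white-noise bursts at the given pan positions."""
    frames = bytearray()
    silence = struct.pack("<hh", 0, 0) * int(gap * rate)
    for pos in positions:
        gl, gr = pan_gains(pos)
        for _ in range(int(burst * rate)):
            s = random.uniform(-0.5, 0.5) * 32767  # white noise, -6dBfs
            frames += struct.pack("<hh", int(s * gl), int(s * gr))
        frames += silence
    with wave.open(path, "wb") as w:
        w.setnchannels(2)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(bytes(frames))

write_pan_test("pan_test.wav")
```

Each burst should sit at a clearly distinct point between your speakers: center, hard left, hard right, half left, half right.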


Most stereo imaging problems are caused by incorrect speaker configuration (i.e. the speakers aren’t equal distances from your ears) or by poor room acoustics.


Contents: White noise at 3 different pan positions.
Use this to test for: Fine panning issues.

This file plays a sound at 50% left, then hard right, then 25% left. (The jump to the right distracts your ear so it can’t track the sound moving from 50% to 25%.) The 3 sounds then repeat on the other side.


Most listeners can reliably distinguish 5 to 7 distinct pan positions. So if your stereo imaging is clear across 9 points, i.e. 25% increments, you’re in good shape (for mixing in a home studio, at any rate.)


On the other hand, if the difference from 50% to 25% isn’t clear in your monitors, or is more defined on one side, you might want to consider using headphones to verify your important panning decisions.


Note: Since these tests don’t require high fidelity, MP3s should be fine for checking your setup. However, here are links to WAV versions of the tests:


Sine Wave Sweep – 40Hz – 300Hz
Consecutive semitones from G1 (46.2Hz) to F3 (174.6Hz)
White noise at 5 pan positions
White noise at 3 pan positions


For more home recording tips,
Subscribe to the Hometracked feed, or receive email updates.

Tags: acoustics, monitors, stereo


View the original article here

7 Questions from Amateur Mix Engineers

Over time, I’ve noted several questions that arise repeatedly on the web’s home recording forums. Each question reads as though it should have a simple answer, but none of them do. And indeed, the questions themselves betray their askers’ lack of experience with the subject.


In effect, posing one of these questions tells the world you’re an amateur. But I hope that by explaining why the questions don’t have the simple answers a rookie expects, you’ll appreciate how an experienced engineer thinks about each problem, and be better equipped to identify gaps in your own knowledge.


1. What are the best EQ settings for guitar?
Or its many variants: “What are the best compressor settings for vocals,” “what reverb settings should I use for mastering,” and so on.


This question has a straightforward answer: The best settings are the ones that sound right. But for most beginners, who haven’t yet learned critical listening skills, this advice seems trite.


Unfortunately, any other answer is meaningless. Every track, in every song, has its own unique requirements. And the best settings, for EQ or compression or any effect, are dictated solely by the requirements of the song. (See the Rule of Mixing for more.)


2. Which is the best microphone?
We’d all love to own a U87 or a C12. But engineers covet those mics because they’re reliable and versatile, not because either is inherently superior. In fact, there are as many ways to define “best” (and for that matter “worst”) as there are sounds to record. As with the question above, what’s best ultimately depends on what fits the song.


3. How do I record my song to sound like The Foo Fighters?
This question stems from the misconception that The Foo Fighters, or any band, sound the way they do because of their equipment. Acquire the same instruments and mics, the thinking goes, and you can duplicate their recordings.


Most professional recordings have deceptive clarity. They sound, at least to listeners unfamiliar with the process, as though they should be easy to reproduce. But the question above has only one honest answer. To sound like The Foo Fighters:

Buy quality instruments, and learn how to play them well.
Write songs suitable for the genre.
Arrange those songs to support Foo Fighters-style production.
Practice. Lots.
Record in a great live room.
Spend time on microphone selection and placement.
Play every part till you get it right.

In other words, there are no shortcuts, and it’s not easy. Great recordings take time and talent.


4. What vocal chain does Paul Simon use?
Also commonly worded as “I want to sound like John Mayer. Which microphones and settings should I use?”


Beginners ask this question assuming that we can recreate a track by knowing how it was recorded. Unfortunately, even if you bought Paul Simon’s complete signal chain, you’d have little success matching his recordings. His voice, and John Mayer’s voice, and of course the voice of any famous musician, is unique, as are his performances.


To sound like Paul Simon, in short, you need to have him sing your vocal.


5. How do I remove the room’s ambiance from a recording?
Conceptually, it makes sense that since we use reverb to add depth, there must be some way to reverse the process.


There isn’t. If you don’t notice until you’re mixing that a guitar track has too much room sound, you have 2 options: Live with the sound, or re-record.


6. Is this mix finished?
Rookie engineers like to think there’s a golden standard sound to which they aspire, and once they’ve attained that sound, their mixes will thereafter be perfect.


We should be so lucky! In truth, our learning never stops. We continue (hopefully) to improve, but none of us is ever done acquiring knowledge, as true of recording and mixing as it is of life. But this is OK. Learning, after all, is the fun part!


To the question: As a general guideline, a mix is finished when it best represents the song. Of course, “best” is open to interpretation here as it is everywhere in recording. You need to use your ears and your gut, and make the call when it feels right. In other words, only you know when the mix is finished.


Unless someone has paid you, in which case the mix is done when the deadline arrives.


Finally, a surefire question to signal your newbie status to the world:
7. How do I use this $1,200 plugin that I just happen to have installed on my machine?
Answer: You read the manual, which comes with the software when you buy it legally.


You’ll out yourself as a novice by asking these questions of an experienced engineer. But really, there’s nothing wrong with that. In some senses, we’re all amateurs.


Take the colleague of my friend Paul, who once asked him, “what does a compressor do?” The question seems innocent enough until you learn that this colleague has been a film industry sound engineer for over 20 years, and has worked on dozens of major motion pictures. Of course, Paul now has difficulty taking his colleague seriously as an audio professional. But the guy still works on movies as a sound engineer, so the anecdote should be comforting for the rest of us amateurs!

See Also: Tips for more professional recordings


For more home recording tips,
Subscribe to the Hometracked feed, or receive email updates.

Tags: EQ, FAQs, microphones, miking, mixing, professional-engineers


View the original article here

Wednesday, October 20, 2010

EQ – “Cut narrow, boost wide” explained

This tip arises in most discussions of good equalizer technique: “Use narrow adjustments when cutting frequencies, and wide adjustments when boosting.”


There are some great reasons to heed this advice. But they’re not immediately obvious, especially if you’re unfamiliar or uncomfortable with parametric EQs, and they’re rarely fully explained. I’ll explain and demonstrate below, and you can use the information to improve your EQ adjustments, and in turn your mixes.


In brief, equalizers alter the tonal quality of audio by applying gain to a specific frequency range. (For something a little less brief, Sound On Sound’s Equalisers Explained is the best EQ primer I’ve read.)


Every EQ filter has 3 settings: Frequency, Gain, and Bandwidth.


Frequency determines where in the tonal spectrum an adjustment occurs. Low frequencies correspond to bass sounds, high frequencies to treble.


Gain determines the magnitude of the adjustment. Positive values increase the signal level at the specified frequency, and we call this a “boost.” Negative gain values decrease the signal level, and we call this a “cut.”


Bandwidth allows us to choose the range of neighbouring frequencies that our adjustment affects. Bandwidth is usually called “Q” (for esoteric reasons from filter theory.) Higher Q values affect fewer frequencies, and we refer to this as a “narrow” filter. Low Q values, on the other hand, yield “wide” filters that affect many frequencies.


This is easier to understand as a visual:

[Diagram: narrow and wide EQ cuts and boosts]

The diagram above shows 4 key combinations. From left to right:
#1 – A narrow cut – Note the high Q value, and negative gain.
#2 – A narrow boost – Note the positive gain.
#3 – A wide cut – Note the low Q value.
#4 – A wide boost.


Your EQ plugin may not look the same (for comparison here’s the above illustration using Reaper’s EQ) but all parametric equalizers support the same 3 basic options: Frequency, Q, and gain. And using these options, we can “cut narrow, and boost wide.”
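To make the three parameters concrete, here's how a typical parametric (peaking) EQ band turns frequency, gain, and Q into digital filter coefficients. This is a sketch based on the widely published Audio EQ Cookbook formulas (Robert Bristow-Johnson), not any particular plugin's implementation; the 2060Hz/-14dB/Q=8 values echo the narrow-cut example used later in this article.

```python
import math

def peaking_eq(freq, gain_db, q, rate=44100):
    """Biquad coefficients for one peaking EQ band (RBJ cookbook).

    freq    -- center Frequency in Hz
    gain_db -- Gain: boost (+) or cut (-) in dB
    q       -- Bandwidth: higher Q = narrower filter
    """
    A = 10 ** (gain_db / 40)
    w0 = 2 * math.pi * freq / rate
    alpha = math.sin(w0) / (2 * q)
    b0 = 1 + alpha * A
    b1 = -2 * math.cos(w0)
    b2 = 1 - alpha * A
    a0 = 1 + alpha / A
    a1 = -2 * math.cos(w0)
    a2 = 1 - alpha / A
    # Normalize so the leading denominator coefficient is 1
    return [b0 / a0, b1 / a0, b2 / a0], [1.0, a1 / a0, a2 / a0]

# Diagram position #1, a narrow cut: high Q, negative gain.
b, a = peaking_eq(2060, -14, q=8)
```

A wide boost (diagram position #4) would just flip the gain sign and lower the Q, e.g. `peaking_eq(2060, 14, q=0.7)` -- those Q values are illustrative, not prescribed by the article.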


In practice, wide EQ cuts remove more signal, and therefore more of a sound’s defining characteristics. Remove too much signal, and the audio you’re treating no longer sounds like itself. This can certainly produce interesting effects, but it won’t yield accurate mixes.


Narrow surgical cuts, on the other hand, remove only specific frequencies, and as such leave the signal largely unchanged. The narrowest cuts can be practically inaudible, as they remove so little from the sound. Often, we use narrow cuts to remove only “problem frequencies,” such as ringing overtones from a drum or boomy resonance from an acoustic guitar, without affecting the overall character of the sound.


It might seem the same should be true of boosting – that narrow boosts are the least audible. But in fact, because of how our ears work, narrow EQ boosts usually sound unnatural and jarring, where wide boosts are much less obvious. (The reasons behind this involve science a little beyond the scope of this article. Summarized: Human brains evolved an innate understanding of the harmonic series, and narrow EQ boosts affect specific harmonics, producing timbres that we sense can’t possibly have occurred naturally.)
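One way to see how much more signal a wide filter touches: the span of frequencies a peaking filter affects is roughly its center frequency divided by Q. Using the 2060Hz center from the examples below (the specific Q values here are typical "narrow" and "wide" settings of my choosing):

```python
f0 = 2060.0  # center frequency (Hz)

narrow_bw = f0 / 8.0  # Q = 8: touches roughly a 258Hz band
wide_bw = f0 / 0.7    # Q = 0.7: touches roughly a 2900Hz band
```

A tenfold difference in affected bandwidth, from nothing but the Q knob.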


The effect should be clear in the examples below. These 5 audio files illustrate the various extreme EQ adjustments. First, an untreated track:

In the next sample, I’ve used a narrow boost at 2060Hz. [diagram]

The ringing is immediately apparent, and sounds unnatural and distracting. (Your ears and brain sense, based on the other frequencies, that there shouldn’t be a loud harmonic at that frequency.)


Now, here’s a wide boost at 2060Hz. [diagram] Broad EQ cut

While the sound might not be great, the ringing effect introduced above isn’t apparent, because the boost affects so many other frequencies:

The next example illustrates a wide cut at 2060Hz. [diagram]

Notice how much of the guitar’s character disappears:

Finally, in this example the narrow cut is barely audible at 2060Hz. [diagram]

All we’ve done is remove the ringing frequency, though since it wasn’t readily apparent in the original sample, its removal is hard to hear.

These examples were contrived to illustrate an effect. (i.e. You’d never actually apply a 14dB boost at 2060Hz to an acoustic guitar track.) However, the principle applies regardless of the audio with which you’re working.


Note, too, that this technique is relevant only to adjustments made with parametric equalizers. Graphic EQs have a fixed bandwidth at each frequency, so “narrow” vs. “wide” cuts aren’t possible.


Finally, and perhaps most importantly, the advice is generally useful but NOT a set-in-stone rule. Sometimes, a ringing effect or hollowed-out sound is exactly what a mix requires. As with everything in audio engineering, let your ears be the final judge of what works best.

See Also: The Rule Of Mixing, General EQ Guidelines


For more home recording tips,
Subscribe to the Hometracked feed, or receive email updates.

Tags: EQ


View the original article here

Auto-Tune Abuse in Pop Music – 10 Examples

Pitch correction software has applications from restoration and mix-rescue to outright distortion of a voice or instrument. I’ll discuss some of the more tasteful uses of these auto-tune tools (whether the original from Antares, or a variant like the free GSnap) below. But first I thought I’d highlight their misuse to illustrate the effects we usually try to avoid.


So, listen here to 10 of pop music’s most blatant auto-tune abuses:

If you’re unfamiliar with Auto-tune, and especially if you listen to much pop and rock, you might not hear it initially. When overdone, the effect yields an unnatural yodel or warble in a singer’s voice. But the sound is so commonplace in modern mainstream music that your ears may have tuned out the auto-tune!


The songs in this clip, in order, and the phrases most affected by auto-tuning to help you spot them:


Dixie Chicks – The Long Way Around – Noticeable on “parents” and “but I.”


T-Pain – I’m Sprung – Especially obvious on “homies” and “lady.”


Avril Lavigne – Complicated – Listen to “way,” “when,” “driving,” “you’re.”


Uncle Kracker – Follow Me
The whole vocal sounds strained, but especially the word “goodbye.”


Maroon 5 – She Will Be Loved – Listen for “rain” and “smile.”


Natasha Bedingfield – Love Like This – “Apart” and “life.”


Sean Kingston – Beautiful girls – “OoooOver” doesn’t sound human.


JoJo – Too Little Too Late – Appropriately, “problem” stands out.


Rascal Flatts – Life is a Highway
Every vocal, foreground and background, is treated, but “drive” in particular.


New Found Glory – Hit or Miss – “Thriller”, and every time Jordan sings “I.”


When used noticeably, an auto-tuner produces what most call “The Cher Effect”, named for her trademark sound in the song Believe. (In essence, we named the effect like scientists naming a new disease after its first victim.) Treated this heavily, a vocal track sounds synthetic, and obviously processed.


But not all auto-tuning is so blatant. In the sample above, it’s harder to hear the pitch correction on Uncle Kracker and Avril than on T-Pain and Bedingfield.


As with any tool, a little care can yield great results. Some simple things to keep in mind about pitch correction tools:

Performance: Most importantly, an auto-tuner isn’t a shortcut to a perfect performance. If you can’t sing the song properly, no amount of post-processing will make it sound like you did. So when your pitch matters, and you don’t want to correct it with an effect, you’ll need to work on your performance until it’s right.
Less is more: The fewer notes you correct, the less obvious your use of an auto-tuner will be. Consider automating the plugin so it acts only when most needed.
Graphical mode: If your pitch correction software offers a graphical mode (like Antares Auto-Tune and Melodyne,) learn how to work with it. The default “auto” modes are OK for basic corrections, but often produce noticeable yodeling.
Backing vocals: In general, you can get away with more pitch correction on backing vocals than lead vocals.
Outdated: Obvious vocoder-style autotuning is dated, and borders on kitschy. The synthetic warbling vocal sound marks songs as having come from a specific era, the same way gated-reverb on drums instantly places a song in the 1980’s. Remember: If you make the auto-tuner obvious, people will say your song uses “the Cher effect.” Let this be a guideline.

Two songs have auto tuners on my mind today: Snoop’s Sensual Seduction (because of Anil Dash’s ruminations on the death of the analog vocoder,) and Natasha Bedingfield’s Love Like This, which I heard on the radio. In the former, the auto tuner is clearly a gimmick. But every time I hear Bedingfield’s song, I’m struck by the same question: Why do that to her voice?


She’s a fantastic singer, and once you’ve heard the song without the cheesy auto tuner effect, it’s hard to take the radio single seriously.


And there’s a lesson in that for home recordists, (even those of us who don’t write pop music,) which echoes the rule of mixing: If an effect significantly changes the sound of a track, especially one so important as the lead vocal, be sure that change improves the song before committing it to the mix.

See Also: The Rule of Mixing


For more home recording tips,
Subscribe to the Hometracked feed, or receive email updates.

Tags: freeplugins, mixing


View the original article here

Tuesday, October 19, 2010

10 Myths About Normalization

The process of normalization often confuses newcomers to digital audio production. The word itself, “normalize,” has various meanings, and this certainly contributes to the confusion. However, beginners and experts alike are also tripped up by the myths and misinformation that abound on the topic.


I address the 10 most common myths, and the truth behind each, below.


First, some background: While “normalize” can mean several things (see below), the myths below primarily involve peak normalization.


Peak normalization is an automated process that changes the level of each sample in a digital audio signal by the same amount, such that the loudest sample reaches a specified level. Traditionally, the process is used to ensure that the signal peaks at 0dBfs, the loudest level allowed in a digital system.


Normalizing is indistinguishable from moving a volume knob or fader. The entire signal changes by the same fixed amount, up or down, as required. But the process is automated: The digital audio system scans the entire signal to find the loudest peak, then adjusts each sample accordingly.
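The whole process fits in a few lines. Here's a sketch of peak normalization over floating-point samples (the function name and the 0dBfs default target are mine, but the scan-then-scale logic is exactly the process described above):

```python
def peak_normalize(samples, target_dbfs=0.0):
    """Scale every sample by one fixed factor so the loudest peak
    lands at target_dbfs (0.0 = digital full scale).

    samples: floats in the range -1.0 .. 1.0
    """
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return list(samples)  # silence: nothing to scale
    gain = 10 ** (target_dbfs / 20) / peak
    return [s * gain for s in samples]

quiet = [0.0, 0.25, -0.5, 0.1]  # loudest peak sits at -6dBfs
loud = peak_normalize(quiet)    # same shape, peak now at 0dBfs
```

Note that every sample moves by the same gain, which is why the result is indistinguishable from a fader move.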


Some of the myths below reflect nothing more than a misunderstanding of this process. As usual with common misconceptions, though, some of the myths also stem from a more fundamental misunderstanding – in this case, about sound, mixing, and digital audio.


Myth #1: Normalizing makes each track the same volume
Normalizing a set of tracks to a common level ensures only that the loudest peak in each track is the same. However, our perception of loudness depends on many factors, including sound intensity, duration, and frequency. While the peak signal level is important, it has no consistent relationship to the overall loudness of a track – think of the cannon blasts in the 1812 Overture.


Myth #2: Normalizing makes a track as loud as it can be
Consider these two mp3 files, each normalized to -3dB:


The second is, by any subjective standard, “louder” than the first. And while the normalized level of the first file obviously depends on a single peak, the snare drum hit at 0:04, this serves to better illustrate the point: Our perception of loudness is largely unrelated to the peaks in a track, and much more dependent on the average level throughout the track.
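You can verify this numerically. Both synthetic signals below peak at exactly 0dBfs, yet their RMS levels, a crude but far better proxy for perceived loudness than peak level, sit nearly 30dB apart. (These are contrived stand-ins of my own, not the article's mp3 files.)

```python
import math

def rms_db(samples):
    """RMS level in dB relative to full scale -- a rough loudness proxy."""
    mean_square = sum(s * s for s in samples) / len(samples)
    return 20 * math.log10(math.sqrt(mean_square))

transient = [1.0] + [0.01] * 999               # one spike, then near-silence
sustained = [(-1.0) ** i for i in range(1000)] # full-scale square wave

# Identical 0dBfs peaks; rms_db(transient) is about -29.6dB,
# while rms_db(sustained) is 0dB.
```

Normalize both to the same peak and the sustained signal will still sound vastly louder, just like the snare-hit example above.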


Myth #3: Normalizing makes mixing easier
I suspect this myth stems from a desire to remove some mystery from the mixing process. Especially for beginners, the challenge of learning to mix can seem insurmountable, and the promise of a “trick” to simplify the process is compelling.


In this case, unfortunately, there are no short cuts. A track’s level pre-fader has no bearing on how that track will sit in a mix. With the audio files above, for example, the guitar must come down in level at least 12dB to mix properly with the drums.


Simply put, there is no “correct” track volume – let alone a correct track peak level.


Myth #4: Normalizing increases (or decreases) the dynamic range
A normalized track can sound as though it has more punch. However, this is an illusion dependent on our tendency to mistake “louder” for “better.”


By definition, the dynamic range of a recording is the difference between the loudest and softest parts. Peak normalization affects these equally, and as such leaves the difference between them unchanged. You can affect a recording’s dynamics with fader moves & volume automation, or with processors like compressors and limiters. But a simple volume change that moves everything up or down in level by the same amount doesn’t alter the dynamic range.
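Because normalization multiplies every sample by the same factor, the gap between loudest and softest cannot move. A two-sample sketch makes the arithmetic plain:

```python
import math

def db(x):
    return 20 * math.log10(x)

peak, soft = 0.5, 0.05            # -6dBfs and -26dBfs
span = db(peak) - db(soft)        # 20dB of dynamic range

gain = 1.0 / peak                 # normalize the peak up to 0dBfs
new_span = db(peak * gain) - db(soft * gain)

# Both levels rose by the same +6dB, so the span is untouched.
```

Any fixed gain, up or down, cancels out of the difference.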


Myth #5: Normalized tracks “use all the bits”
Because of the relationship between bit depth and dynamic range, each bit in a digital audio sample represents about 6dB of dynamic range. An 8-bit sample can capture a maximum range of 48dB between silence and the loudest sound, where a 16-bit sample can capture a 96dB range.


In a 16-bit system, a signal peaking at -36dBfs has a maximum dynamic range of 60dB. So in effect, this signal doesn’t use the top 6 bits of each sample*. The thinking goes, then, that by normalizing the signal peak to 0dBfs, we “reclaim” those bits and make use of the full 96dB dynamic range.


But as shown above, normalization doesn’t affect the dynamic range of a recording. Normalizing may increase the range of sample values used, but the actual dynamic range of the encoded audio doesn’t change. To the extent it even makes sense to think of a signal in these terms*, normalization only changes which bits are used to represent the signal.


*NOTE: This myth also rests on a fundamental misunderstanding of digital audio, and perhaps binary numbering. Every sample in a digital (PCM) audio stream uses all the bits, all the time. Some bits may be set to 0, or “turned off,” but they still carry information.
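The numbers behind this myth are easy to check: each extra bit doubles the number of representable levels, which corresponds to 20·log10(2) ≈ 6.02dB of range.

```python
import math

DB_PER_BIT = 20 * math.log10(2)  # ~6.02dB of dynamic range per bit

range_16bit = 16 * DB_PER_BIT    # ~96dB, the 16-bit figure quoted above
unused = 36 / DB_PER_BIT         # a -36dBfs peak "wastes" ~6 bits' worth
```

The arithmetic is real; the conclusion drawn from it ("so normalize to reclaim the bits") is what the myth gets wrong.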


Myth #6: Normalizing can’t hurt the audio, so why not just do it?
Best mixing practices dictate that you never apply processing “just because.” But even setting that aside, there are at least 3 reasons NOT to normalize:

1) Normalizing raises the signal level, but also raises the noise level. Louder tracks inevitably mean louder noise. You can turn the level of a normalized track down to lower the noise, of course, but then why normalize in the first place?
2) Louder tracks leave less headroom before clipping occurs. Tracks that peak near 0dBfs are more likely to clip when processed with EQ and effects.
3) Normalizing to near 0dBfs can introduce inter-sample peaks.
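The noise point is easy to verify: a volume change scales the noise floor by exactly the same factor as the signal, so the signal-to-noise ratio never improves. A Python sketch with illustrative level values:

```python
import math

def db(x):
    return 20 * math.log10(x)

signal_peak = 0.25       # signal peaks at roughly -12dBfs
noise_floor = 0.001      # noise floor at -60dBfs
snr = db(signal_peak) - db(noise_floor)

# Peak-normalize to 0dBfs: every sample, noise included, gets the same gain
gain = 1.0 / signal_peak
snr_after = db(signal_peak * gain) - db(noise_floor * gain)

# The SNR is unchanged -- the noise is simply louder too
assert abs(snr - snr_after) < 1e-9
```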

Myth #7: One should always normalize
As mixing and recording engineers, “always” and “never” are the closest we have to dirty words. Every mixing decision depends on the mix itself, and since every mix is different, no single technique will be correct 100% of the time.


And so it goes with normalization. Normalizing has valid applications, but you should decide on a track-by-track basis whether or not the process is required.


Myth #8: Normalizing is a complete waste of time
There are at least 2 instances when your DAW’s ‘normalize’ feature is a great tool:

1) When a track’s level is so low that you can’t use gain and volume faders to make the track loud enough for your mix. This points to an issue with the recording, and ideally you’d re-record the track at a more appropriate level. But at times when that’s not possible, normalizing can salvage an otherwise unusable take.
2) When you explicitly need to set a track’s peak level without regard to its perceived loudness – for example, when working with test tones, white noise, and other non-musical content. You can set the peak level manually – play through the track once, note the peak, and raise the track’s level accordingly – but the normalize feature does the work for you.
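For reference, peak normalization itself is a simple operation. Here is a minimal sketch of what a DAW’s ‘normalize’ button does (the `peak_normalize` helper is illustrative; real implementations also handle integer sample formats and target levels below 0dBfs):

```python
def peak_normalize(samples, target_peak=1.0):
    """Scale float samples so the largest magnitude hits target_peak."""
    peak = max(abs(s) for s in samples)
    if peak == 0:
        return samples[:]          # silence: nothing to scale
    gain = target_peak / peak
    return [s * gain for s in samples]

# A too-quiet take, rescued: the loudest sample now sits at 0dBfs
quiet_take = [0.01, -0.02, 0.015, -0.005]
loud_take = peak_normalize(quiet_take)
assert max(abs(s) for s in loud_take) == 1.0
```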

Myth #9: Normalizing ensures a track won’t clip
A single track normalized to 0dBfs won’t clip. However, that track may be processed or filtered (e.g. with an EQ boost), causing it to clip. And if the track is part of a mix that includes other tracks, all normalized to 0dBfs, it’s virtually guaranteed that the sum of all the tracks will exceed the loudest peak in any single track. In other words, normalizing only protects you against clipping in the simplest possible case.
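The summing problem is easy to demonstrate with toy values. Two tracks each normalized to a peak of 1.0 (0dBfs) will exceed full scale wherever their loud passages coincide:

```python
# Two tracks, each peak-normalized so their loudest sample is 1.0 (0dBfs)
track_a = [0.25, 1.0, -0.5]
track_b = [0.5, 0.75, -1.0]

# Summing them on a mix bus: any point where loud samples line up clips
mix = [a + b for a, b in zip(track_a, track_b)]
clipped = [s for s in mix if abs(s) > 1.0]
print(clipped)   # [1.75, -1.5] -- both samples exceed full scale
```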


Myth #10: Normalizing requires an extra dithering step
(Note: Please read Adam’s comment below for a great description of how I oversimplified this myth.) This last myth is a little esoteric, but it pops up sporadically in online recording discussions. It usually takes the form of a claim like “it’s OK to normalize in 24 bits but not in 16 bits, because …” followed by an explanation that betrays a misunderstanding of digital audio.


Simply put: a digital system dithers when changing bit depth (e.g. when converting from 24-bit to 16-bit). Normalizing operates independently of bit depth, changing only the level of each sample. Since no bit-depth conversion takes place, no dithering is required.


Normalizing can mean a few other things. In the context of mastering an album, engineers often normalize the album’s tracks to the same level. This refers to the perceived level, though, as judged by the mastering engineer, and bears no relationship to the peak level of each track.


Some systems (e.g. Sound Forge) also offer “RMS Normalization,” designed to adjust a track based on its average, rather than peak, level. This approach more closely matches how we perceive loudness. However, as with peak normalization, it ultimately still requires human judgment to confirm that the change works as intended.
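The idea behind RMS normalization can be sketched in a few lines of Python (the `rms_normalize` helper is illustrative; real tools also guard against the resulting peaks exceeding 0dBfs):

```python
import math

def rms(samples):
    """Root-mean-square: the average level of the signal."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def rms_normalize(samples, target_rms=0.1):
    """Scale samples so their average (RMS) level hits target_rms."""
    current = rms(samples)
    if current == 0:
        return samples[:]
    gain = target_rms / current
    return [s * gain for s in samples]

out = rms_normalize([0.02, -0.03, 0.025, -0.01])
assert abs(rms(out) - 0.1) < 1e-9   # average level, not peak, is matched
```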

Tags: mixing, myths


View the original article here

Saturday, October 16, 2010

Vocal EQ Tips

Here are some tips and techniques for treating vocal tracks with EQ while mixing.

Most importantly: Every voice is different, and every song is different. That advice bears remembering, even if you’ve heard it dozens of times. When you find yourself approaching a vocal mix on auto-pilot, applying effects “because they worked last time,” consider disabling the EQ altogether to gauge just how badly the adjustments are needed.

Reasons to EQ: The 3 main reasons to filter a vocal with EQ are
1) to help the voice sit better in the mix,
2) to correct a specific problem, and
3) to create a deliberate effect, like “A.M. radio voice.”

If you’ve EQ’d a vocal track for some other reason, be sure the result is improving the mix.

Gentle boosts: The “cut narrow, boost wide” guideline applies to vocals perhaps more than any instrument. Our ears have evolved remarkable sensitivity to the sound of human speech. (Consider how easily we pick up a single conversation in a crowded noisy room.) So we’re immediately, instinctively aware when a voice has been processed unnaturally.

High-pass: Most vocals – though of course not all – benefit from a low cut filter. The average fundamental frequency in an adult male voice is 125Hz, and often you can roll off up to 180Hz without affecting the sound. (If your mic or preamp has a low-cut filter, consider engaging it when recording vocals, as most subsonic audio in a vocal track consists of mic-stand noise, breath rumble, popping, and other undesirable sounds.)
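To illustrate the idea, here is a minimal first-order high-pass filter in Python (an illustrative sketch only; the low-cut filters in mics, preamps, and EQ plugins are typically steeper, 12dB/octave or more):

```python
import math

def high_pass(samples, cutoff_hz, sample_rate=44100):
    """First-order (6dB/octave) high-pass filter.

    Attenuates content below cutoff_hz while passing higher
    frequencies largely untouched.
    """
    rc = 1.0 / (2 * math.pi * cutoff_hz)
    dt = 1.0 / sample_rate
    alpha = rc / (rc + dt)
    out, prev_x, prev_y = [], 0.0, 0.0
    for x in samples:
        y = alpha * (prev_y + x - prev_x)
        out.append(y)
        prev_x, prev_y = x, y
    return out

# A constant offset (0Hz "rumble") decays away over time
settled = high_pass([1.0] * 10000, cutoff_hz=100)[-1]
print(settled)  # effectively zero: the DC component is filtered out
```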

Bypass: Especially with high-pass filters, it’s easy to remove too much body from a vocal, as our ears adjust so quickly to new sounds when mixing. If your EQ has a bypass option, use it periodically to make sure you haven’t gone too far with an adjustment.

Common fixes:

To reduce a nasal sound, try dipping a few dB around 1kHz, moving the center frequency slightly up or down to find the most effective point.
To treat popping P’s and T’s, cut everything below 80Hz.
For a little extra clarity and presence, try gently boosting the “vocal presence range” between 4kHz and 6kHz.

Reasons NOT to EQ: EQ can’t make your voice sound like someone else’s.

See Also: Better vocals improve your recordings, Great free vocal plugins

For more home recording tips,
Subscribe to the Hometracked feed, or receive email updates.

Tags: EQ, mixing, vocals


View the original article here