Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Tuesday, July 31, 2012

How to compress the snare drum and kick drum

Everyone knows that you should compress the snare and kick drum. But why should you do it, and how should you do it?

By David Mellor, Course Director of Audio Masterclass

It has become part of recording folklore that you should compress the snare and kick drum. But first, you have to know why you are doing it. If you do not know why, then you're never going to get a good result - the sound you achieve will be no more than the work of random chance.

Compressing individual drums vs. the whole drum set
There are two ways you can approach compressing the drum set. One is to compress individual drums; the other is to compress the drum set as a whole. These will produce entirely different results. You can do both if you wish, but here we shall concentrate on compressing individual drums, principally the snare and kick, but also the toms.

The sound of drums without compression
A while ago an experiment was carried out where a snare drum was recorded and the recording played back through a PA system. The sound of both the drum itself and the PA were fed to an audio analyzer. Apparently, to reproduce the sound of the drum accurately and maintain the transient (the initial strike) properly, it took 1000 watts of amplifier power.

The reason for this is that the transient, the very first few milliseconds, is VERY loud. The sound dies away quickly after that. So to reproduce the transient accurately, a lot of power is needed. In recording, then, the level must be set so that the transient does not exceed 0 dBFS - the full-scale level of the system, beyond which the red light comes on.

Why drums need compression
The problem now is that the transient is much louder than the 'body' of the sound, as the strike dies away. But the transient is short and does not fully register with the ear. So the drum is actually a lot louder than it sounds. Yes, a drum played live sounds loud, but any other instrument played continuously at the level of the peak of the transient would be truly ear-splitting.

If the transient can therefore be made quieter relative to the body of the sound, overall the strike will sound subjectively louder. Actually, 'louder' is probably not quite the right word for the subjective experience. 'Fuller' or 'more powerful' would be better.

How to set the compressor to make the snare and kick sound fuller and more powerful
Every compressor - every decent one - has a control labeled 'attack'. This is confusing. Anyone new to compressors would think that more attack means a more attacking sound. In fact this control sets the speed at which the compressor responds to a sound. If you set a long attack time, say 100 milliseconds (a tenth of a second), then the transient of the drum would get through before the compressor had time to respond. So to lower the level of the transient, you should set a very short attack time, as low perhaps as just one millisecond.

When compressing individual drums, the attack time is the most important control. The compression ratio can be set to around 4:1 and the release time to 100 milliseconds. Naturally you should experiment with all of these settings.
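For readers who like to see the moving parts, here is a minimal sketch of a feed-forward peak compressor in Python, assuming mono floating-point audio in a numpy array; the threshold, ratio and time constants simply mirror the illustrative settings above.

```python
import numpy as np

def compress(x, sr, threshold_db=-20.0, ratio=4.0,
             attack_ms=1.0, release_ms=100.0):
    """Peak-compress mono audio x (floats in -1..1), returning a new array."""
    # One-pole smoothing coefficients derived from the attack/release times.
    att = np.exp(-1.0 / (sr * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (sr * release_ms / 1000.0))
    env = 0.0
    out = np.empty_like(x)
    for i, s in enumerate(x):
        level = abs(s)
        # The envelope rises at the attack rate and falls at the release rate.
        coeff = att if level > env else rel
        env = coeff * env + (1.0 - coeff) * level
        env_db = 20.0 * np.log10(max(env, 1e-9))
        # Apply gain reduction only to the part of the level above threshold.
        over_db = max(0.0, env_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)
        out[i] = s * 10.0 ** (gain_db / 20.0)
    return out
```

With attack_ms at 1, the transient is caught and pulled down; stretch it toward 100 and the hit slips through before the gain reduction engages, which is exactly the experiment suggested above.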

Problems with a short attack time
One thing is very much for sure: you have to experiment with the attack time. Setting an attack time that is too short will result in a 'flattening' of the sound of the drum. It just doesn't sound natural any more. So you should pay a lot of attention to very small movements of the attack control, because these small movements will make a lot of difference.

Differences between the snare drum and the kick drum
The main difference between the snare drum and the kick drum is that the snare is always a very attacking sound with a sharp transient. The kick is always less attacking, but the degree of attack can vary. If a hard beater is used, then the sound will be attacking. Sometimes a piece of hard plastic is attached to the drum head to emphasize this. But if a soft beater is used, then the sound will not have such an aggressive transient. Either way, the sound can still benefit from compression. But you have to use your ears and fine-tune the settings to get the best results.

Compressing the toms
Toms can also benefit from this type of compression. However, the body of the tom sound is louder relative to the transient than it is in the snare and kick drums. So effectively, the sound is already compressed in comparison to the snare and kick. Therefore, although this style of compression is certainly applicable, generally less compression will be used than for the snare and kick.

Summary and further considerations
What we have learned here is how to reduce the level of the transient compared to the body of the drum sound to make the overall effect fuller and subjectively louder. There are occasions, not covered here, where you might want to emphasize the transient. There is also a significant difference, not covered here, in the way you would approach compression of drums in digital and in analog recording.
Publication date: Monday April 20, 2009
Author: David Mellor, Course Director of Audio Masterclass

Haydn: Symphony No. 92 "Oxford" / Rattle · Berliner Philharmoniker

Monday, July 30, 2012

How to record a 'Symphony Band'? Do you need sixty microphones?

 An RP visitor asks how he would record a symphony orchestra. Does it need sixty microphones? Or could you do it with two?

By David Mellor, Course Director of Audio Masterclass

I want to ask your advice about how to record a Symphony Band, and where to set my mics to capture a good ambience from the instruments... How many mics do you suggest, and where should I position them to capture the liveness of the Symphony Band?

I will appreciate your help...

Tony Barragán From Tampico Mexico

RecordProducer.com replies...

Recording a symphony orchestra of sixty or more players for the first time would be a daunting prospect for any engineer, so how exactly do you go about giving it your 'best shot'?

The first thing to remember is that in popular music we usually don't care what the instrument actually sounds like; the priority is to get a good sound in the context of the recording. So we close-mic each individual instrument, because that is what gives us the sound we need.

Not so with an orchestra though. The priority with an orchestra is to get a recording that sounds like an orchestra, with no 'improvements'. And the best way to achieve this is to start by thinking about how people ideally listen to an orchestra, from a high-price ticket in the front few rows of a concert hall with excellent acoustics.

You could put a pair of microphones in this position and record the orchestra as though the mics were an audience member's ears.

Unfortunately this doesn't work. The human brain has the ability to process the information supplied by the ears, focusing on the sounds it wants to hear and ignoring those it doesn't.

In this case, the reflections from the walls of the auditorium are too loud. The brain doesn't mind, but microphones do - they pick up everything within their coverage angle and the recording would be hopelessly reverberant.

So you have to move the mics closer. The problem now is that the mics are much closer to the front rows of instruments than to the rear rows, and the imbalance gets worse the closer you move in.

The answer to this is to raise the microphones. I have often been heard to say, "You can't get too high". Well I suppose you can, but heights up to four meters are certainly useful. This gives an overview of the orchestra that will pick up every instrument clearly.

Some people find this difficult to believe, but you can indeed record an orchestra very successfully with just two microphones, provided you experiment extensively with microphone positioning.

The only problem left is that most classical music CDs are not recorded like this, but with additional mics, and to a certain extent it is necessary to make a recording that sounds the way people expect.

The drawback to the two mic technique is that the orchestra does not have the upfront, exciting sound that most listeners prefer.

The solution is to set up additional mics, one or two per section of instruments. You could quite easily provide enough coverage with a dozen mics.

The purpose of these mics is to add a little 'presence' to the instruments. The way to set the levels is to listen to the output of the main pair of microphones. Then bring up each sectional mic to the level where it just makes a difference, but a difference that is hardly audible. That is usually enough.

And you know...? It isn't rocket science. In an auditorium with good acoustics it is surprisingly easy to get a good recording of a symphony orchestra. In a school hall it might be another question... for another day.
Publication date: Thursday January 14, 2010
Author: David Mellor, Course Director of Audio Masterclass

Groove Mapping In ACID Pro Part 2

Sunday, July 29, 2012

How do you define the term 'sound'?

This might seem like an odd question, but what do you mean by 'sound'?

By David Mellor, Course Director of Audio Masterclass

Yes it is an odd question... Or is it?

I can immediately think of four scenarios where this question is perfectly valid.

Firstly, anyone who is totally deaf and has never experienced sound must indeed wonder what the sensation is like, and what sound is useful for.

I am not even going to attempt an answer to this as it probably could not be answered satisfactorily in 10,000 words, let alone a couple of paragraphs.

Let's move on to something more closely related to music production and sound engineering...

Sound traveling in air

Sound in this context is a wave motion that travels in air; the actual sound that we hear naturally.

Sound also travels through liquids and solids, but sound traveling in a liquid is only rarely of interest to us. Sound traveling through solids is very important in the context of soundproofing, even though we don't directly hear it in this medium.

One common point of confusion, however, is that sound does NOT travel through electrical wires. What travels there is an electrical signal that represents sound, though it would not be unusual to call it a sound signal.

Sound vs. music

My third meaning for 'sound' is an interesting one. Musicians can produce sound using nothing more than acoustic instruments and voices. This is sound.

Yet broadcasting organizations, theatres and other enterprises often have a Sound Department and a Head of Sound. These people don't make music; they work with microphones, mixing consoles, amplifiers and so on.

In this context therefore, 'sound' is short for 'sound engineering'.

The sound of a microphone, preamplifier or mixing console

Finally there is the use of the word 'sound' to mean the way a piece of audio equipment colors the actual sound or the signal it is handling. So a vintage tube microphone, for instance, has a 'sound' even when it is in its box. The engineer will choose it because he knows from experience that that particular microphone's sound will suit the instrument or voice he is about to record.

Microphone preamplifiers and mixing consoles also often have a 'sound'. The term 'sound' in this sense is only commonly used in connection with analog equipment. Digital equipment is hardly ever described as having a sound, unless the commentator is criticizing digital audio in general.

So, four meanings of the term 'sound'. It wasn't such an odd question after all.

Publication date: Thursday June 24, 2010
Author: David Mellor, Course Director of Audio Masterclass

Groove Mapping In ACID Pro Part 1

Saturday, July 28, 2012

Q: How can I get the 'recorded' sound in my live shows?

 An RP reader asks, "In a crowded place, when we play an audio CD through the PA system, most of the time it sounds great because the audio is probably highly compressed. But when we play live music with our band through the same PA system, it sounds dull, no matter how much we equalize. My question is: Is it possible to find a compressor that will allow us to obtain the same punch in a live concert situation?"

By David Mellor, Course Director of Audio Masterclass
I understand this situation very well, and have done for a long time. I was 16 years old when my band played its first gig at the school dance. I thought the first half of the show went well, but then during the interval the DJ put on Roll Over Beethoven from The Beatles' album With The Beatles.

Oh dear, oh dear, oh dear... it wasn't any louder than the band, and indeed the band had more amplification. But it sounded so much better than we did. OK, it was The Beatles versus a 16-year-old's band's first gig. But so much of the difference was in the sound.

In the 1960s there was a lot of competition among producers and record labels to create an exciting sound. Get that sound onto the record and people would buy it. That was the theory. These days we are more aware of hi-fi, but in the 60s people wanted an exciting sound coming from the tiny speaker of their Dansette portable.

And the exciting sounds that the musicians and producers of the day achieved were a combination of the performance, the instruments, the noise and distortion of the recording and manufacturing processes at the time, and of course compression. In his autobiography, George Martin specifically mentioned recording The Beatles onto two tracks so that he could compress the tracks together for the final mono version, giving what he described as a 'harder' sound.

These days we have to work a little harder, indeed, to achieve an exciting sound in a recording. The equipment and software we use are hi-fi as standard, so we have to use all kinds of grunging-up techniques to put the excitement back.

But in the end, a finished recording can encapsulate excitement in a way that is a challenge for a live band to match up to, other than visually of course.

Compression

One's first thought, naturally enough, might be to consider compression. It works in the studio, so why shouldn't it work live?

Well the problem is that in live performance, the specter of feedback is forever lurking in the wings. Live sound engineers learn how to spot oncoming feedback before the audience is even aware of it. They know the point on the faders that marks the line in the sand between good sound and horrible howling.

And compression unfortunately reduces the margin of error. For example, if you used 10 dB of compression on peaks, which would be a reasonable amount in the studio, then your safety margin before feedback would be reduced by 10 dB. That is if you had as much as a 10 dB margin to begin with!

So although you can use compression in live sound, you typically can't use as much of it as you would use in the studio. Bigger venues are better in this respect, so at least that is something to aspire to.

What you can do however is take advantage of vacuum tube processors to add a frisson of distortion. So if you have a tube compressor, it will add a certain amount of warmth and excitement to your sound, even on a very low compression setting.

Overall balance

Having mentioned compression first, I might be giving the impression that it is the most important element in achieving an exciting live sound.

It isn't. You have to add excitement at every opportunity. The lead singer needs to have an exciting voice with an exciting performance style. The vocal mic needs to have the 'edge' that sounds good live, but might be a little out of place in the studio. The instruments and backline need to sound good, and settings chosen that sound good on stage, which are not necessarily the same as those that work in the rehearsal room.

And of course the front-of-house engineer needs to create a mix that allows each voice and instrument to come through clearly, but also blend into a tight, punchy overall sound. It helps to have your own engineer rather than using different people from gig to gig.

Dirty tricks

It can take days or even weeks to record, mix and master a song. And that's what you are competing with when you play live.

So perhaps you need to consider leveling the playing field.

If there is a DJ who is in control of the pre-show and interval music, and he or she is playing through the same PA as you, then it will be natural for them to play their records in a way that will please and excite the crowd. The word 'loud' comes to mind.

But suppose when you took on the booking, you also offered to provide the DJ, who of course will be a friend of the band?

That way, you can make sure that the recorded music is quieter in level, and perhaps even dulled down a little with EQ, so that when the band comes on, it sounds mega-exciting in comparison.

This might seem a little unfair, but the point of the show is to please the audience, and this in fact will please them more than if the band sounds lackluster compared to the pre-show and interval music.

Does anyone have any other tips for achieving a punchy, exciting sound in live performance? Post them below please...
Publication date: Wednesday December 29, 2010
Author: David Mellor, Course Director of Audio Masterclass

Quick Tip Video: Chopper In ACID Pro

Friday, July 27, 2012

Why do we have faders? Why don't we just have knobs?

There is nothing more iconic in sound engineering than faders. Like the keys of a piano, the faders of the mixing console are the gateway to the wonderful sounds within.

By David Mellor, Course Director of Audio Masterclass

When recording was first invented, musicians would cluster around the microphone (actually a mechanical horn). Lead instruments and quiet instruments were closer, loud instruments and background instruments were further away.

It was a primitive form of mixing, and it worked to an extent. But as soon as electronic recording became possible it was realized that the outputs of several microphones could be combined electronically.

And to do that the mixing consoles of the day had...

Knobs.

Yes, rotary level controls. Generally quite large ones that you could really get your hands around. But you have to wonder why knobs then, and why faders now?

Linear controllers were also available during the early days of sound engineering. I have seen examples that were used as lighting dimmers. Rheostats they were called, and they were really big. And then some bright spark had the idea of curling them round into a circle to save space.

The whole thing was then scaled down since audio doesn't use anywhere near as much voltage and current as lighting. In audio, we call the rheostat a potentiometer. And since the potentiometers that controlled the levels were circular, it made absolute sense to control them with circular knobs.

So what's the problem with knobs?

Simple: controlling level is the most important aspect of mixing. There is nothing else that even comes close. And in mixing that is done live, for radio or TV broadcast for instance, the engineer needs to know where the levels are at all times.

But you can't see that easily with a knob. Yes, the knob has an indicator line or pointer, but you have to look closely. Every half-second taken adds up. And of course there are several or perhaps many knobs to handle.

Another problem with the knob is that it is almost impossible to turn two at the same rate. And you can't even attempt to turn more than two at the same time.

So a solution was sought. And it was to...

Turn the whole thing around by 90 degrees so that the resistive track of the potentiometer was vertical. It was controlled by what we would now call a 'quadrant fader'. It's like a modern fader, but it moves around the arc of a circle rather than on a flat surface.

Now, not only could the engineer see at a glance where ALL the levels were, he could move two faders at the same time, at the same rate. And in fact move several faders at the same time.

I don't know exactly when or why this first evolved into the linear fader, but I imagine that it seemed like a neater solution. And it is, although if quadrant faders had stuck around they would have developed further, I have no doubt.

But the quadrant fader has one advantage that the linear fader does not - you can tell where it is purely by feel.

This is very important for the TV or film sound mixer. Also when mixing music it is better to keep one's head level because the frequency response of the ear changes when the head is tilted.

I'm not suggesting a return to quadrant faders as clearly they have had their day. But one has to wonder whether the modern fader is the ultimate in design yet.

If you look at early keyboard instruments such as harpsichords, clavichords and organs, you will see that the keys are all different shapes and sizes from one instrument to another. The same basic layout, but no standardization.

The keys of all keyboard instruments, except some intended as toys, are the same size now. And they fit the hand so well that you can't imagine that any improvement would be possible.

We haven't quite reached such a degree of standardization in faders yet. Before that happens, maybe it would be worth a little thought and experimentation to see if a better design is possible.

Don't tell me your ideas though - get down to the patent office first. If you can come up with something to replace the fader then you will soon be worth millions!
 
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Haydn: Symphony No. 88 / Fischer · Berliner Philharmoniker

Thursday, July 26, 2012

Why add noise to your digital recording?

 Digital systems always add noise, known as dither, to the signal. Why is this necessary? What will it sound like if you switch the dither off?

By David Mellor, Course Director of Audio Masterclass

When an analog signal is converted to digital, or a digital signal is generated directly in a digital synthesizer, the result is a sequence of numbers. A CD-quality 16-bit signal has 65,536 possible different numbers to describe the signal.

An analog signal changes smoothly, but a digital signal jumps from one allowable number to the next. There are no 'in between' numbers. The only way to get closer to these intermediate positions is to increase the bit count to 20-bit or 24-bit. But even then, the digital signal must make that jump from one number to the next; the steps are just closer together.

The result is that there is always a slight inaccuracy in the digital signal. If it were truly accurate, it would follow the analog signal precisely. Instead it follows it closely, but in a 'stair-step' fashion.

The inaccuracy in a digital signal is heard as noise. In a complex signal, such as a music signal, that noise is mostly innocuous, and it is anyway very low in level.

But there are situations that can occur, even in a music signal, where the noise takes on a harsh, grainy quality. And this can happen often. An example is in a reverb tail. As the tail decays, eventually it will get to the point where it crosses the boundary between the smallest possible digital number and absolute zero. Analog signals being as they are, the reverb tail will probably cross and recross this boundary many times.

The result is a chain of digital pulses at random intervals. Although this is very low in level, it is very harsh and therefore easily heard.

To get round this problem, digital systems add 'dither' noise to the signal. This can be simple white noise, or it can be cleverly designed to optimize its effect. Dither is added in analog-to-digital converters, in systems that generate digital signals directly, and again after any processing has been carried out, since processing negates the effect of any previously added dither.

Dither has the effect of randomizing the transitions between digital numbers and thus concealing the step from one to another. It works very well.
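Here is a minimal sketch of this effect in Python, assuming plain 16-bit quantization and TPDF (triangular) dither; the tone level of 0.6 LSB stands in for the dying reverb tail described above.

```python
import numpy as np

sr = 44100
t = np.arange(sr) / sr
lsb = 1.0 / 32768.0                       # one 16-bit quantization step
x = 0.6 * lsb * np.sin(2 * np.pi * 1000 * t)

def quantize(signal, dither=False):
    if dither:
        # TPDF dither: two uniform randoms sum to a triangular spread of +/-1 LSB.
        signal = signal + (np.random.uniform(-0.5, 0.5, signal.size) +
                           np.random.uniform(-0.5, 0.5, signal.size)) * lsb
    return np.round(signal / lsb) * lsb

print(np.unique(quantize(x)) / lsb)       # only -1, 0, 1: a harsh pulse train
# With dither, the quantization error becomes benign noise, and averaging many
# dithered passes converges back on the original sub-LSB tone:
avg = np.mean([quantize(x, True) for _ in range(200)], axis=0)
print(np.max(np.abs(avg - x)) / lsb)      # a fraction of an LSB: tone preserved
```

Without dither the tone collapses into a sparse train of hard pulses; with dither the tone survives, riding on the noise, just as described below.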

Dither has the additional advantage that it allows signals that are lower in level than the lowest possible level that can be described digitally to be heard.

It seems odd, but it is the random nature of noise that allows this. When the signal is extremely low in level, it can 'ride on the back' of the white noise and pretend to be bigger than it is. Even though it is lower in level than the dither signal, it can still be heard.

In digital audio workstations, dither is not added after every stage of processing, otherwise it would build up in level. Instead it is added right at the end of the chain, at the output.

Or not.

In some systems, you have to enable dither manually. If you don't, you will get the digital harshness buzzing along at the lowest levels of your audio. This is something that is definitely to be avoided. Fortunately, if your accidentally dither-free recording gets as far as a mastering studio, the mastering engineer will spot this and add dither at this late stage.
 
Publication date: Saturday March 19, 2005
Author: David Mellor, Course Director of Audio Masterclass

Quick Tip Video: Routing To Soft Synth In ACID Music Studio

Wednesday, July 25, 2012

How does MP3 reduce an audio file's size to one-eleventh?

A typical three-minute song takes up about 30 megabytes of data. Convert it to MP3 and the file size could be just 2.7 megabytes. What happens to the audio if 27.3 megabytes are lost?

By David Mellor, Course Director of Audio Masterclass

I first heard about data compression through perceptual coding more than ten years ago, having lunch in a pub just outside of the SSL factory, which I was visiting for the day.

The guy from SSL was telling me 'off the record' that they had linked up with a company that was able to encode audio into just four bits, when CD-quality is sixteen.

CD-quality is pretty good at 16-bit resolution. Some of the early samplers worked at 12-bit and even 8-bit resolution, and they were pretty grungy. So 4-bit resolution - how could that possibly work?

That part was kept secret from me at the time. But the technology did work. And further developments led to MP3, which can reduce an audio file's size to one-eleventh or less and still sound pretty good.

And the rest is history (as will MP3 be, when people realize that AAC - Advanced Audio Coding - is much better!).

But how does it work?

Simple - MP3 discards any aspect of the audio that we are not likely to notice. And it's amazing how much we do ignore.

For instance, if there is a particularly high level at a certain frequency at any moment, the ear won't notice another nearby frequency that is at a lower level. So it might as well be discarded.

Same with time. If a loud sound occurs at a certain time, a quieter sound that occurs just before or just after will not be noticed, so it can again be discarded.
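As a toy illustration of the frequency-masking idea, here is a crude rule in Python: discard any spectral line more than 30 dB below a louder line within a few neighboring bins. Real MP3 encoders use far subtler psychoacoustic models; this only shows the principle.

```python
import numpy as np

def crude_mask(levels_db, span=3, depth_db=30.0):
    """Return which spectral lines survive the masking rule."""
    keep = []
    for i, level in enumerate(levels_db):
        lo, hi = max(0, i - span), min(len(levels_db), i + span + 1)
        # A line is kept only if no nearby line towers more than depth_db over it.
        keep.append(level >= max(levels_db[lo:hi]) - depth_db)
    return keep

# A loud line with a much quieter near neighbor, and two other lines.
lines_db = np.array([-6.0, -50.0, -20.0, -90.0, -12.0])
print(crude_mask(lines_db))  # [True, False, True, False, True]
```

The two quiet lines sitting in the shadow of louder neighbors are discarded; the listener would not have heard them anyway.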

Of course, it depends how far you want to take all of this. For most people, reducing the data rate to 128 kilobits/second is far enough. I would put that on a par with the cassette format in terms of the degree of degradation. MP3 can encode to even lower data rates, but the side effects will show.
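As a quick check on those figures, here is the arithmetic in Python; real MP3 streams vary frame by frame, so this is only the headline data-rate comparison.

```python
# CD audio versus a typical 128 kbps MP3.
sr, channels, bits = 44100, 2, 16
cd_rate = sr * channels * bits       # 1,411,200 bits/s for CD-quality stereo
mp3_rate = 128_000                   # a typical MP3 data rate
print(cd_rate / mp3_rate)            # ~11.0 -> the 'one-eleventh' figure

song_bytes = cd_rate / 8 * 180       # a three-minute song
print(song_bytes / 1e6)              # ~31.8 MB, roughly the 30 MB quoted above
```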

One thing puzzles me though. What I would really love to be able to hear is what the audio that is thrown away actually sounds like! Now that would be interesting indeed.

Anyone know how that can be done?
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

New for ACID Music Studio 8: TruePianos™ Amber Lite soft synth

Tuesday, July 24, 2012

Why owning a studio like this will mark the downfall of your musical career

 What will you do when you achieve success? Buy a state-of-the-art recording studio? No - that will be the worst decision you make in your entire life...

By David Mellor, Course Director of Audio Masterclass

Take a look at this studio. It is someone's own, personal home studio. Wow! It has all the good stuff, doesn't it?

It has an SSL SL4048G+ Special Edition mixing console, one of only ten similar models made, one of which is owned by Bob Clearmountain.

The studio acoustics were designed by Andy Munro, who also designed the prestigious Air Studios (famed haunt of Beatles' producer George Martin). Munro made the studio sufficiently soundproof that the huge Dynaudio M3A main monitor speakers can be used a full twenty-four hours a day.

The studio comes with a house in a fashionable area of London, with all the usual features and facilities, plus a fitness room. The studio has never been rented out and has at all times been non-smoking.

The price for all this... £995,000, equivalent to around $1.8m US. Sounds like a bargain.

Now I have to say that I don't know who the current owner is so my next comments may not be relevant in this case. However the evolutionary history of popular music has shown this cycle many, many times...
  1. Musician works for years in studios.
  2. Musician has hit record and makes a lot of money quickly.
  3. Musician thinks that spending all of his money, plus a substantial loan, on a state-of-the-art recording studio is a wise idea.
  4. Musician puts in a lot of time and effort supervising the project. Becomes sick and tired of the whole thing.
  5. Makes a few recordings in the studio, none of which are successful.
  6. Thinks about renting out the studio but is stymied by planning/zoning restrictions.
  7. Stops using it and twiddles thumbs for five years.
  8. Finally decides to sell up.
  9. Uses money from sale to pay off remaining finance.
  10. Lives and dies in poverty and obscurity.
As we can see, investing in an expensive home studio was not a good idea. Let's suppose the musician is you and go back to square one (you should be so lucky!). This is what you should do...
  1. Work for years in studios.
  2. Have a hit record and make a lot of money quickly.
  3. Realize that you, like so many people, may be a 'one hit wonder'.
  4. Invest all of the money you made in property (real estate) or your pension fund. Do not buy a Ferrari.
  5. Continue working and build on your success so that you have another hit record.
  6. Invest all of the money you make in an established non-music related business that is rock solid and will earn you money and retain its value for decades. Perhaps buy a low-end Porsche.
  7. Continue working and have another hit record.
  8. Realize that you are on a roll and enjoy your fame and fortune.
  9. Have the sense also to realize that it will not last forever.
  10. Have fun while it lasts. When it's over, relax, live off your investments and play golf.
This really does make sense - if you have a hit record, then you already have what it takes to have a hit record. You have just proven it. So to stand the best chance of having another hit you need to do pretty much the same things in pretty much the same way. Taking a totally different track and buying a state-of-the-art studio, and putting in the energy it needs to set it up properly, just has to be the wrong thing to do.

I'm hoping that a few people reading this might recognize themselves from earlier years and share their story. If you have blown the fruits of your earlier musical success, please tell us about it. We all would like to know - david.mellor@record-producer.com

Details on this studio for sale are available at MJQ (of course bear in mind that by the time you read this article, the property may have been sold).

 
Publication date: Friday September 30, 2005
Author: David Mellor, Course Director of Audio Masterclass

New for ACID Music Studio 8: élastique timestretching

Monday, July 23, 2012

Warmth - what is it? How do you get it? Analog tape warmth

Analogue tape is also well known for its 'warmth', even when the electronics are purely transistor. The problem with analogue magnetic tape - or what was seen as a problem before we had a digital alternative - is that tape is even more non-linear than valve circuitry, and it wasn't until the discovery of alternating bias that it became feasible to use tape to record music...

By David Mellor, Course Director of Audio Masterclass

Analogue tape is also well known for its 'warmth', even when the electronics are purely transistor. The problem with analogue magnetic tape - or what was seen as a problem before we had a digital alternative - is that tape is even more non-linear than valve circuitry, and it wasn't until the discovery of alternating bias that it became feasible to use tape to record music.

There is a further disadvantage however as far as accuracy is concerned: in any system other than a recording device, the input and output are both available simultaneously. Therefore the output can be compared with the input, and any dissimilarities - i.e. distortion - corrected. This design technique is known as negative feedback and it works amazingly well.

So well in fact that no-one in their right mind would construct an audio device without it (but are all designers in their right mind in the hifi world?). In any recording system however, by the time the recording is played back, the original input signal has long since vanished and there is no point of comparison. It is a matter of faith, and good design, that the output is similar in any way to the input.

Analogue tape produces harmonic distortion in a way not too dissimilar to valves, and the result is the addition of some even-order harmonics, more odd-order harmonics, and a spattering of intermodulation products too. The bonus feature of tape's soft saturation certainly produces warmth, but this has been well covered elsewhere. But there is more to tape than this, and there are some effects that have no parallels in valve equipment, or any purely electronic equipment for that matter.

In electronic equipment there is little that can affect the timing of signals, other than in the extremely short phase-shift domain. In a tape recorder, however, there are three motors, a capstan and pinch wheel, several bearings, guides, tensioners and rollers, all of which affect timing. And let us not forget those curiously static erase, record and playback heads - newcomers to the industry will marvel at magnetic heads that don't move, from which you can prize off a particularly troublesome speck of grime with a fingernail.

It's amazing that it works at all, particularly when there is no timing reference on the tape itself. (Modern design engineers who have only experienced digital technology probably won't believe that one!) Analogue tape recorders are, considering the unpromising premise of their design, amazingly free from wow and flutter. Wow is insignificant for most people, but it is flutter - short-term speed variation - that contributes much to the characteristic sound of analogue tape, regardless of any distortion.
Publication date: Thursday January 01, 2004
Author: David Mellor, Course Director of Audio Masterclass

Saturday, July 21, 2012

If you use compression on stage, will your sound sparkle, or will unholy feedback ensue?

The compressor is a great studio tool. But does it work for live sound? Is there a hidden danger that will keep the engineer on his toes?

By David Mellor, Course Director of Audio Masterclass
 
The compressor is a very useful studio tool. It has the ability to control dynamic range, although these days that is often better handled by automated mixing. But it also has the remarkable ability to add fullness and sparkle to a sound.

For instance, a vocal recorded without compression will sound flat and lifeless, even with EQ. But apply suitable compression and it will leap out from the speakers and give you a great big sloppy kiss.

Compression is also extremely useful on drums and bass, for added punch and fullness, and on acoustic guitar for a lovely shimmery effect on high frequencies. It isn't so useful for electric guitar since all the compression you could ever want is provided by distortion in the amplifier and speaker. But go ahead and try it if you like. You never know what might happen, which I guess is one of the joys of recording.

Compression is therefore an invaluable studio tool. But what about taking the compressor on stage, or at least to the front-of-house processor rack, and using it in live sound?

All of the benefits of the compressor in the studio work just as effectively in live sound. So yes, go ahead and do it. But there is one problem...

Compression reduces the margin before feedback.

So suppose that, without compression, you have 6 dB of gain in hand before the system will feed back - a figure you know from experimenting before the sound check. Now put the compressor in the signal path, and adjust the levels so that the peak level is the same as it was before. You will now find that your margin before feedback has diminished. Perhaps just an additional couple of dB will set the system ringing.

The reason for this is that compression works by lowering peak levels. So when the signal goes above a level set on the threshold control, the level of the signal is brought down. Signals that are lower in level than the threshold are not affected.

But this would make the signal quieter overall because the peaks have been lowered. So 'make up' gain is provided, usually by a control in the compressor, to bring the peaks back to where they were.

But the low level signals have now been brought up too, and this is precisely where potential feedback lurks. If you use 6 dB worth of compression, and 6 dB make up gain, then you have reduced your margin before feedback by, yes, 6 dB. And for a system that is already quite close to the edge, that would be scary.
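The arithmetic is worth seeing on its own, so here is a minimal sketch in Python with illustrative figures; make-up gain subtracts directly from the feedback margin.

```python
# Make-up gain eats directly into the feedback margin.
margin_db = 6.0          # gain in hand before feedback, no compressor
compression_db = 6.0     # gain reduction applied to the peaks
makeup_db = 6.0          # gain added to restore the peak level

# Peaks end up where they were, but everything below the threshold,
# including the spill that drives feedback, is now 6 dB hotter.
new_margin_db = margin_db - makeup_db
print(new_margin_db)     # 0.0 -> the system is on the verge of howling
```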

So a live sound engineer who uses compression has to be very much on top of his or her profession. It makes the sound better, but it makes feedback harder to control.

Worse still, since feedback is always more of a problem in small venues, it is the sound engineer who is still working up the ranks that has the most difficulty. Still, no-one said live sound would be easy. Perhaps being on the edge is what makes it fun!
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Kronos Music Workstation - Set List Mode - In The Studio With Korg

Friday, July 20, 2012

How should you clean a mixing console?


Mixing consoles are known to attract dust and dirt, and they are difficult to keep clean. Will the dust and dirt affect audio quality? How can they be cleaned?

By David Mellor, Course Director of Audio Masterclass

Mixing consoles attract dirt, there's no doubt of that. And with all those knobs and buttons, they are very difficult to clean. There are three kinds of dirt that the mixing console is prone to collect - dust, smoke and finger marks.

Dust is all around so you can't do much about that. Well you can - you can live outside of the big city and away from diesel engines in particular. The diesel engine is without doubt the dirtiest thing known to mankind. It is said that a modern petrol engined car following a diesel bus or truck actually makes the air cleaner by sucking up and processing the muck the diesel produces. Back to the point...

Cigarette smoke is a known killer of consoles. I once met a studio manager who knew this first hand as he had two identical consoles in similar control rooms. One was regularly operated by a non-smoker, the other by a smoker. The 'smoky' console was close to being a wreck after a couple of years.

The problem with dust and smoke is that it enters the moving components - the faders, potentiometers and switches. These are devices that rely on good electrical contact through surfaces that merely touch. Separate them by a dust particle and you get the familiar scratchy sound of old equipment. The better varieties of these components are sealed more effectively, but then they cost more.

Grease from sticky fingers isn't a problem in itself, but it makes the dust cling. Stickiness from spilt drinks is even worse, although that can be avoided by keeping drinks trays below the level of the console, so they only spill on the floor. Consoles benefit the most from regular cleaning so dust doesn't have time to form a stuck-down layer.

So, to clean a console you need something that will get in the crevices, and a vacuum cleaner is ideal, with a very small nozzle. Use this daily and your console will be spotless and your faders and potentiometers smooth and not scratchy. Let some dust collect and stick and you will need something more aggressive - a paintbrush is good, about 25 mm in width. Obviously one that has never been used for painting!

When dusting the console with a brush, try not to brush the dust into the components but away from them, particularly the faders.

If you are ever lucky enough to work in a studio as an assistant, your console cleaning abilities will earn you brownie points with the manager that you can trade for free use of downtime!
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Korg Kronos Music Workstation - Voicing Session with Tom Coster

Thursday, July 19, 2012

To eliminate feedback is it good to reduce the gain and raise the fader? (Part 2)

An RP reader has feedback problems. But will clever manipulation of the gain control and fader provide the cure? In Part 2 of this two-part article, we look at the relationship between these two controls.

By David Mellor, Course Director of Audio Masterclass 
Part 1 of this article explored some important concepts of feedback (howlround), so preferably that should be read first.

Now the question is whether carefully balancing the gain and fader (of the same channel) will improve the situation regarding howlround.

Quick question... What does the gain control do?

Answer: It boosts the level of the signal.

Another question... What does the fader do?

Answer: It lowers the level of the signal.

Clearly both of these controls have an effect on the loop gain of the system and can therefore affect feedback.

But if you raise the gain by 6 dB, the output from the loudspeakers goes up by 6 dB (assuming no compression). If you then lower the fader by 6 dB, then the output goes down to what it was before.

In fact, however many decibels you change the gain, if you move the fader by an equal number of decibels but opposite in direction, then the output level and loop gain will stay the same.

So the short answer is that you won't improve anything however much you play about with the relative gain and level, assuming that you always use one to exactly compensate the other.
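To see the arithmetic plainly, here is a minimal sketch in Python; the figures for the acoustic path and console settings are illustrative. Decibels around the loop simply add, so equal and opposite moves cancel.

```python
def loop_gain_db(mic_to_speaker_db, gain_db, fader_db):
    # Hypothetical acoustic path plus the two console controls, all in dB.
    return mic_to_speaker_db + gain_db + fader_db

before = loop_gain_db(-10.0, gain_db=30.0, fader_db=-25.0)
after = loop_gain_db(-10.0, gain_db=36.0, fader_db=-31.0)  # +6 gain, -6 fader
print(before, after)  # both -5.0 dB: the loop gain is unchanged
```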

But there is a 'but'...

The exception is if you have your gain set so high that the signal is on the point of clipping. (If you do this you will almost certainly have your faders set very low to compensate.)

Distortion induced by clipping adds an uncertainty into the feedback equation by changing the balance of frequencies.

The result is not going to be good. In general, distortion adds energy to the higher frequencies. Howlround is always unpleasant but high frequency howlround can be ear splitting.

So set the gain correctly using the normal methods and concentrate on the factors that really can reduce howlround...

Firstly if you can get the microphone closer to the sound source, you can get a greater proportion of the sound source you want to pick up, in comparison to the sound coming from the speakers.

Secondly, as much as possible place the speakers so they don't fire sound directly at the microphone. And from the opposite point of view, position the microphone as carefully as you can so that it doesn't point at the loudspeakers.

There are more weapons in the anti-howlround arsenal, but these two are the biggest guns.

Publication date: Monday January 03, 2011
Author: David Mellor, Course Director of Audio Masterclass

Korg Kronos X Music Workstation -- Official Product Introduction

Wednesday, July 18, 2012

To eliminate feedback is it good to reduce the gain and raise the fader? (Part 1)

An RP reader has feedback problems. But will clever manipulation of the gain control and fader provide the cure? In Part 1 of a two-part article, we explore the concept of loop gain.

By David Mellor, Course Director of Audio Masterclass

In a live sound system there is a loop from microphone through mixing console through power amplifier through loudspeakers back to the microphone.

It's a complete circle. The microphone will always pick up some sound from the speakers, which will be re-amplified and travel the loop again and again.

But this isn't feedback, or what we commonly call feedback, yet.

What is needed for howlround, which is a slightly better word, is for the 'loop gain' to be greater than 1 (or 0 dB, it means the same).

Suppose that the system is active yet completely silent. Now click your fingers in front of the microphone. The signal will pass through the system almost instantaneously.

Here comes an important point...

If the sound of the click from the speakers is louder at the microphone position than the original real-life finger click, then the loop gain is greater than 1.

If howlround hasn't happened yet, the finger click would probably have been enough to set it off.

But if the click from the speakers at the microphone position is quieter than the original click, then the loop gain is less than one.

There will be no howlround.

While it is true to say that when the loop gain is greater than 1, howlround is a near-certainty, it would not be equally true to say that a loop gain of less than 1 is entirely safe.

When the loop gain is just under 1, then you will not hear howlround but you will hear 'ringing'. Any input to the microphone causes the system to 'ring' at the frequency where the loop gain, taking into account the acoustics of the room, is highest.
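Here is a minimal sketch of that behavior in Python, assuming a click of amplitude 1.0 recirculating through the mic, console and speaker once per trip; the loop-gain figures are illustrative.

```python
def recirculate(loop_gain, trips=10):
    """Level of the click after each trip around the loop."""
    level = 1.0
    levels = []
    for _ in range(trips):
        level *= loop_gain        # each trip multiplies by the loop gain
        levels.append(round(level, 4))
    return levels

print(recirculate(1.1))   # > 1: each pass grows -> howlround
print(recirculate(0.95))  # just under 1: slow decay -> audible ringing
print(recirculate(0.5))   # well under 1: the click dies away immediately
```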

Ringing is unpleasant and a wise sound operator will take suitable action to bring the loop gain well under 1.

Good advice... Have the microphone close to the sound source; point the loudspeakers away from the microphone and the microphone away from the speakers.

P.S. Phase is an issue, but not as big an issue as level because level is very much more controllable.

Publication date: Monday January 03, 2011
Author: David Mellor, Course Director of Audio Masterclass

Saturday, July 14, 2012

Does your studio need a ducker? A Neve ducker?

Everyone wants the famous Neve sound. But can you find it in a ducker?

By David Mellor, Course Director of Audio Masterclass
 
Here's an interesting device that is currently up for auction on eBay. It is a ducker module, made by the famous Neve company. There are two reasons why you might want to buy it...
  1. Because you want a ducker.
  2. You want the Neve sound and this is a cheap (£495 'buy-it-now') way to buy into it.
So firstly, what is a ducker?

A ducker is something that is very handy to have for live TV or radio sound. Suppose you have a radio phone-in program, for instance. You parallel the presenter's signal into the control input of the ducker, and pass the caller's signal through the regular input and output. The caller gets his or her chance to speak, but as soon as the presenter chimes in, the caller's signal level goes down. Call this giving the presenter an unfair advantage if you like, but it keeps the show flowing smoothly.

Another example might be background atmosphere in a sports stadium. This can be ducked under the commentator's voice whenever he speaks.

Such a gadget might have a musical use too - you could subgroup the backing instruments of a song and duck them a little whenever the lead vocal is active. I have to say that I've tried this and I haven't personally had a great deal of satisfaction from it, but it is certainly worth giving it a go and it might work for you.

 

As you can see, the unit has controls for threshold, which is the level of the control input at which the ducker will kick in. Attenuation depth governs how much the signal under control will be lowered in level. Recovery sets the time to return to normal.
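To make those three controls concrete, here is a minimal sketch of a ducker in Python, assuming mono numpy arrays for the program and control signals; the threshold, depth and recovery values are illustrative, not taken from the Neve unit.

```python
import numpy as np

def duck(signal, control, sr, threshold_db=-30.0, depth_db=12.0,
         recovery_ms=300.0):
    """Lower 'signal' whenever 'control' exceeds the threshold."""
    rel = np.exp(-1.0 / (sr * recovery_ms / 1000.0))
    duck_gain = 10.0 ** (-depth_db / 20.0)      # attenuation depth
    threshold = 10.0 ** (threshold_db / 20.0)
    gain = 1.0
    out = np.empty_like(signal)
    for i in range(signal.size):
        if abs(control[i]) > threshold:
            gain = duck_gain                     # presenter speaks: duck fast
        else:
            gain = rel * gain + (1.0 - rel)      # recover gently toward unity
        out[i] = signal[i] * gain
    return out
```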

The famous Neve sound?

Now, as to whether you can get the famous Neve sound just by putting a signal through this unit and not actually using it as a ducker, I don't know. Looking at the internal shots there are transformers for both input and output (and I am guessing that the signal path is mono). Transformers are often thought to have a 'sound'.

And of course, when the unit was blessed by Rupert himself as it came off the production line, then the Neve sound was sealed in for sure. (Kidding, BTW.)

Practicalities

Hopefully you will see that this is just a module. It would need a power supply and proper connectors to the outside world to get it working. For someone with the necessary know-how and a sense of adventure, it should be a fun project.

Oh, and if any reader of Record-Producer.com does buy this item, let us know how you get on (and send us some audio)!
Publication date: Monday July 02, 2012
Author: David Mellor, Course Director of Audio Masterclass

Rudess OASYS Video 3

Friday, July 13, 2012

Record on a workstation - export to your computer. Could this be the best of both recording worlds?

 An RP reader asks whether he can record on his workstation and then export tracks to his computer for editing and mixing. Why would he want to do that...?

By David Mellor, Course Director of Audio Masterclass

A RecordProducer.com reader with a standalone recording workstation wonders whether he can export tracks to his computer for mixing and editing, and how it can be done.

But why work in this seemingly convoluted way?

If you have a very long memory, one that predates the use of computers in the studio, you may remember what it was like in the old-style studios of the day.

Since musicians are notoriously likely to play wrong notes and miss their timing, in comparison with super-perfect computers, listening to playbacks was a vital part of recording technique. You would spend almost as much time listening to playbacks as you would recording.

And when the tape-op (the studio junior, appointed to sit by the analog multitrack recorder) pressed the play button, people would defocus their eyes, absorb the music, nod their heads and tap their feet to the beat.

These days, eyes are firmly glued to the on-screen waveform sliding smoothly across the display. Yes, you could choose not to look at it, and concentrate on the music, but can you?

While recognizing that computer displays definitely have their uses, distracting you from the music shouldn't be one of them.

So for the ultimate in modern-day non-distraction, you could choose to record onto a standalone digital audio workstation. The smaller display and reduced function set actually make it easier to get the musical performance you want captured onto disk.

But the problem with standalone workstations, in general, is that they don't offer the same ease, flexibility and precision of editing as computer software such as Pro Tools, Sonar, Logic etc.

So to get the best of both worlds, you could record onto your standalone workstation with the computer switched off (and silent for once!). Then you can transfer the tracks to the computer for editing and mixing.

But how do you do that?

Some standalone workstations make it easy and offer a CD burning or USB export feature. Others don't. And since they generally don't have an output for each track, how can you transfer the material across?

Well, it is perfectly possible to make the transfer one track at a time. Time-consuming perhaps, but do-able, and well worth it if that's what you want.

Fortunately, digital signals are very reliable in their timing. If you can connect digitally between the workstation and computer, then timing should be 100% sample-accurate. Even if you use an analog connection, over the course of a four to five minute song you should not notice any drift between the channels. (If it's a thirty minute symphony you might not be so lucky though.)

But you still have to align the tracks. This is often more difficult than you might think. Particularly with non-sequenced tracks such as vocals and guitar, you could spend a lot of time wondering whether they should be a little earlier or a little later.

But one way to ensure quick and easy sync is to record a percussive sound across all of the tracks, like a movie clapperboard. You could do this at the beginning of the track, but it's actually easier at the end, when all the music has finished. You will find the waveform very easy to line up on screen. Use as high a magnification as you need.
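If you'd rather not line the clap up by eye, a cross-correlation can find the offset for you. Here is a minimal sketch in Python, assuming both tracks are already in memory as mono numpy float arrays; the helper name is hypothetical.

```python
import numpy as np

def samples_late(reference, track):
    """Return how many samples 'track' lags 'reference' (negative = early)."""
    corr = np.correlate(track, reference, mode="full")
    # Re-center the peak index so that 0 means the tracks already line up.
    return int(np.argmax(corr)) - (len(reference) - 1)

# Example: a copy of a noise burst that arrives 100 samples later.
a = np.random.randn(1000)
b = np.concatenate([np.zeros(100), a])[:1000]
print(samples_late(a, b))   # 100 -> shift b 100 samples earlier to line up
```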

And there you have it. Ease of recording, flexibility and precision of editing, and all the plug-ins you want for the mix.

Job done.
Publication date: Monday June 01, 2009
Author: David Mellor, Course Director of Audio Masterclass

Korg Pa3X Video Manual Part 8- Global & Media

Thursday, July 12, 2012

The shocking truth about working in pro recording studios

 An RP reader successfully lands an internship in a major recording studio. But the kind of work he is asked to do isn't quite what he expected...

By David Mellor, Course Director of Audio Masterclass

Here is a message received from a RecordProducer.com reader who landed an internship (work experience) in a major recording studio in London. I won't name the studio, but it has Neve and SSL consoles, so clearly it is way outside of bedroom-studio land...
"I had an extremely disappointing time at the studio, I was made to be a gofer for 3 days - fetching tea and coffee, and food for the staff.
On the 4th day, Thursday morning, I was told by the studio manager (after the studio engineer childishly spoke directly to her instead of myself), 'You're not allowed to sit in sessions anymore, and we wish for you not to come back tomorrow (Friday).'
The reason for this was I had spoken in the session when they were not recording, saying, 'I wish I had brought my laptop with me as I've loads of synth plug-ins you could have used on the track.'
Thus, I was extremely upset about this as I had done nothing wrong but try to help. They expected me to sit there for 9 hours, not speak or ask questions, fetch them food, etc...
This wasn't what I would call work experience at all. It was a complete waste."
Now the first thing I'll say is that the person who said this isn't whinging. He genuinely feels as though he has been treated badly.

However, if he had known in advance what life in the studio is like, he would have been ready for this entirely normal treatment.

The first thing to remember is that the pro recording studio is a special place, and only special people get to work there. These are people with the immense dedication needed to produce and record to the highest standards. And they set certain rules...
  • Newcomers have to prove themselves over a significant period of time. Some studios won't even let a new starter into the control room for several months.
  • Anyone who is in the room has to be helpful. A new starter who doesn't know anything about professional recording can do little more than fetch coffee and empty the ash trays.
  • Just to be in the room is an extraordinary privilege, available to very few indeed. There is an immense amount that can be learned simply by observation.
  • The most junior person in the room must NEVER SPEAK! He or she doesn't have anything useful to say to professionals who have worked in studios 14 hours a day, 7 days a week for years or decades.
There are other rules too. If you are interested you can read more in 'An Insider's Guide to Working in a Pro Recording Studio'. All of the information in this guide came directly from studio managers, producers, engineers and musicians.

But what do you think? Let us know your opinions on this, whether you already work in a pro studio, or whether you aspire to. Was this person's treatment fair?
Publication date: Monday March 29, 2010
Author: David Mellor, Course Director of Audio Masterclass

Korg Pa3X Video Manual Part 7- Recording

Wednesday, July 11, 2012

MIDI OUT - what use is that?

Your MIDI equipment has IN, OUT and THRU connectors round the back. But on some equipment you hardly ever use the OUT connector. Why is that?

By David Mellor, Course Director of Audio Masterclass

Your MIDI keyboard has a MIDI OUT socket, and you use it all the time. The data generated by playing the keys flows through the MIDI OUT to your sequencer or sound modules.

Your sequencer, or the interface you use with it, has a MIDI OUT too. This sends data to your sound modules.

But if you look round the back of your MIDI sound modules, samplers and effects units, you will find that they all have MIDI OUT sockets too. And you probably never use them.

The fact that MIDI has been around so long is testament to how clever it is. And amazingly it is still on version 1.0!

MIDI has three types of output sockets - OUT, THRU and MERGE. If you would like to call yourself a music technology expert, you would need to understand all of these.

MIDI THRU is widely used. The MIDI THRU socket outputs an exact copy of the data present at the MIDI IN socket, with no delay (in correctly designed equipment). It adds nothing and takes nothing away. The MIDI THRU socket is used to connect a 'daisy chain' of equipment where each item responds to data on its own MIDI channel.
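
To picture what THRU does, here is a minimal sketch in Python. The byte values are standard MIDI; the functions themselves are illustrative, not any real port API.

    # A THRU output is a byte-for-byte copy of the input - nothing
    # added, nothing taken away.
    def midi_thru(incoming_bytes):
        return list(incoming_bytes)

    # Each unit in the daisy chain checks whether a channel voice
    # message is addressed to it: the channel number lives in the low
    # nibble of the status byte (0 = channel 1, 15 = channel 16).
    def is_for_me(status_byte, my_channel):
        return (status_byte & 0x0F) == my_channel - 1

    note_on = [0x90, 0x3C, 0x64]  # note-on, channel 1, middle C, velocity 100
    assert midi_thru(note_on) == note_on
    assert is_for_me(note_on[0], my_channel=1)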

The MIDI OUT connector is completely different. It outputs data generated by the unit. For a keyboard or sequencer this is obvious. But for a keyboardless sound module, where is the data to be generated?

The answer is that you might want to perform a System Exclusive dump to store the unit's internal settings, probably into a sequencer. MIDI effects units can do this too, so you can store all your favorite settings as part of a sequence, to be restored automatically when the sequence plays. There might also be the rare occasion where a module outputs trigger or clock data, but this is unusual.
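
To make that concrete, a System Exclusive dump is simply a block of bytes framed by 0xF0 (start) and 0xF7 (end) - everything in between is manufacturer-defined. The ID and payload below are placeholders, not any real device's format:

    # SysEx framing: start byte, manufacturer ID, data, end byte.
    SOX, EOX = 0xF0, 0xF7
    MANUFACTURER_ID = 0x7D           # 0x7D is reserved for non-commercial use
    patch_data = [0x01, 0x02, 0x03]  # hypothetical internal settings
    dump = [SOX, MANUFACTURER_ID] + patch_data + [EOX]
    print(dump)                      # this is what leaves the MIDI OUT socket

A sequencer records this like any other MIDI data; sending it back to the unit's MIDI IN restores the settings.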

The reason why MIDI OUT isn't used much in this way is that it involves re-plugging your system. A MIDI cable carries data in one direction only - it can't take data from a sound module back to the sequencer unless you re-plug.

MIDI MERGE is used where you want to combine one MIDI data stream with another. An example would be where you are synchronizing two pieces of equipment using MIDI Clock or MIDI Timecode, and you need to incorporate this with note data from a keyboard.
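
Note that a merge unit has to interleave complete messages, not raw bytes - splicing two streams together mid-message would corrupt both. A minimal sketch, assuming each message has already been parsed and tagged with a timestamp:

    import heapq

    # Merge two time-stamped message streams into one, in time order.
    # Each entry is (time_in_seconds, message_bytes).
    def midi_merge(stream_a, stream_b):
        return list(heapq.merge(stream_a, stream_b))

    clock = [(0.0, [0xF8]), (0.5, [0xF8])]  # MIDI Clock ticks
    notes = [(0.2, [0x90, 0x3C, 0x64])]     # note-on from a keyboard
    print(midi_merge(clock, notes))         # ticks and notes interleaved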

In summary...
  • MIDI IN accepts data into the unit
  • MIDI OUT outputs data generated by the unit
  • MIDI THRU outputs an exact copy of the data present on the MIDI IN socket, with no delay
  • MIDI MERGE combines two MIDI data streams
Publication date: Tuesday November 30, 1999
Author: David Mellor, Course Director of Audio Masterclass

Tuesday, July 10, 2012

How to Protect Your Ears

By Shure Notes

The facts speak for themselves. According to the CDC (Centers for Disease Control and Prevention), 12.5% of children and adolescents and 17% of adults – more than 31 million Americans – have suffered permanent damage to their hearing from excessive exposure to noise.

Think this can’t happen to you? Consider this: Just 15 minutes of exposure to high-decibel noise or music can cause permanent hearing loss. That’s right. Permanent. Research indicates that 30% of rock musicians have a measurable hearing loss. Classical musicians fare even worse – with up to 52% experiencing hearing impairment. The good news? Hearing loss can be prevented — so listen up while you still can.

Your ears process audio frequencies between 20 Hz and 20,000 Hz. The range between 500 Hz and 4,000 Hz is the one we associate with speech.

Hearing loss is classified according to which part of the auditory system is affected. There are three types of hearing loss: conductive, sensorineural (including the Noise-Induced type we’re talking about here) and mixed. We’re going to skip the conductive and mixed types and go straight to the noise-induced cause of sensorineural damage.

Sensorineural Hearing Loss is sometimes called Nerve Deafness and happens when inner ear nerves become damaged and do not properly transmit signals to the brain. Sensorineural hearing loss is the result of:

•  Illness or injury
•  Heredity
•  Excessive noise exposure

It’s the most common type of hearing loss among adults and is rarely treatable medically or surgically. Most sensorineural hearing loss can only be treated with hearing aids. Be aware of this: hearing aids can only amplify the frequencies you can still hear – they will not replace the high frequencies that you’ve lost. More bad news: a single digital hearing aid from a leading manufacturer will set you back around $1,500 – and it’s not covered by insurance.

Noise Induced Hearing Loss

Excessive sound exposure damages hearing by over-stimulating the tiny hair cells within the inner ear. There are between 15,000 and 20,000 of these microscopic sensory receptors. When they’re damaged, they no longer transmit sound to the brain.

Sounds are muffled. Human speech is difficult to understand. The damage that occurs slowly over years of continuous exposure to loud noise is accompanied by changes in the structure of the hair cells, and it results in hearing loss and tinnitus – a ringing, buzzing or roaring in the ears or head that may or may not subside over time. Tinnitus may be experienced in one or both ears, and may continue constantly or intermittently throughout a lifetime.

Turn it Down!

A typical rock concert can average between 110 and 120 dB SPL (Sound Pressure Level), even in locations with local noise ordinances.

According to the organization H.E.A.R. (Hearing Education and Awareness for Rockers), “At rock shows, the dB level can be as great as 140 dB SPL in front of the speakers and about 120 dB SPL at the back which is still very loud and dangerous.” Probably the best-known example of alleged rock-related hearing damage is a Smashing Pumpkins concert in 2000, at which loudness levels reached 125 decibels. The resulting litigation against the concert hall, the band, the promoters and the record label is probably still keeping lawyers busy.

Loud music isn’t the only problem. According to music writer Bernard Sherman, “Such stereotypical guy-toys as guns, motorcycles, chainsaws and snowmobiles can punish your ears just as badly – so can leaf blowers; so can some digital movie theater soundtracks.” About 30 million Americans – more than one in ten – are exposed every day to dangerously loud levels of noise.

Are You at Risk?

Given the statistics, it appears that if you were born after 1946, the answer is a loud “yes”. According to the House Ear Institute (HEI), “Advances in the electronics industry have made possible clean sound production at higher sound pressure levels. This has resulted in an average sound increase of 10-15 dB in the work environments of musicians, audio engineers, record and movie/television producers, post-production mixers, dancers and other entertainment professionals.”

Self-Quiz

1. Do you have trouble understanding certain words or parts of words?
2. Do you often ask others to repeat themselves?
3. Do you have difficulties on the telephone?
4. Do others complain about television or radio volumes?
5. Do you have more trouble understanding people in noisy environments?
6. Do sounds seem muffled?
7. Do you experience ear discomfort like ringing or buzzing in the ears?

If you’ve experienced some or all of these indicators, you may be prone to Noise-Induced Hearing Loss (NIHL). It’s time for a visit to the audiologist.

Keep this in mind – even if you have experienced a degree of loss — it is NOT TOO LATE to preserve your hearing. NIHL is not a degenerative condition unless you ignore it.

Ten Things You Can Do to Preserve Your Hearing

Here are some general tips for diminishing potential damage to your hearing:
  1. Limit the amount of time you spend in a loud environment.
  2. Wear hearing protection when involved in a loud activity. Forget about tissue or cotton – these homemade devices only reduce noise by about 7 dB. They’re not effective.
  3. Be alert to noise levels in your environment.
  4. If you know a gig will be longer than usual, decrease the intensity level.
  5. Increase distance between you and the sound source – this means standing at an angle from the source – not in front of it.
  6. Take breaks during long sessions to give your ears a rest.
  7. Be aware of the symptoms of hearing loss – listen to your own ears.
  8. Keep the volume at moderate levels when you’re using headphones or earphones.
  9. Have your hearing checked by an audiologist. (There are retail ‘hearing aid dispensers’ who are in the business of selling hearing devices and there are audiologists who are trained to evaluate and improve your hearing.  Have your doctor recommend an audiologist.)  Make an audiologist appointment an annual event if you’re at risk or if a loss has been detected.
  10. If you think you’re risking your hearing as a result of prolonged exposure (for instance, sounds in excess of 85 dB SPL), buy a sound pressure level meter and measure SPL against OSHA requirements – see the quick calculation sketched below. A variety of types and models are available for around $50.
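
To give those meter readings some meaning, OSHA's limit works out at 8 hours per day at 90 dB, with the permissible time halving for every 5 dB above that (the 5 dB 'exchange rate'). A quick back-of-envelope sketch in Python:

    # OSHA permissible exposure time: 8 hours at 90 dB,
    # halved for every additional 5 dB.
    def osha_hours(level_db):
        return 8 / 2 ** ((level_db - 90) / 5)

    print(osha_hours(90))   # 8.0 hours
    print(osha_hours(100))  # 2.0 hours
    print(osha_hours(115))  # 0.25 hours - just 15 minutes at rock-show levels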
Want more information?

H.E.A.R.

Non-profit hearing information source for musicians
www.hearnet.com

OSHA Permissible Noise Standards

Find exposure standards here
http://www.osha.gov/dts/osta/otm/noise/standards_more.html

American Academy of Audiologists

More information on hearing loss, audiologist locator
http://www.howsyourhearing.org/

Sensaphonics

Hearing-protection products – sound-isolating earphones and custom earplugs – for the music industry
www.sensaphonics.com

Korg Pa3X Video Manual Part 6 - SongBook

Monday, July 9, 2012

Do you curse at your computer?

 Computer rage is a well-known phenomenon of the times. But apparently 61% of people don't suffer from it at all.

By David Mellor, Course Director of Audio Masterclass
Have you ever shouted at your computer? Which part did you shout at - the system unit? The screen? The keyboard? The mouse? Or maybe it was at the software, although it's difficult to know which way to direct your venom.

Computers are a common cause of frustration. My own personal bête noire is printing. At home I have four computers connected through the network to one printer. Some days it will print, some days it won't. Just the other day it decided not to print and showed an error message I had never seen before... "Can't print". Yes, really.

But according to a recent survey, 61% of computer users have never cursed at their computer, nor even shouted at it. I'm not the only one who finds this hard to believe.

Where this links to audio is that the people who never have cause to complain about their computer - and there are some - are the people who only use it lightly, and for the common tasks that everyone else does. If all you do is browse the Internet, send e-mail, word process and play with the occasional spreadsheet, then there's little to go wrong. As long as you don't want to print, of course ;-)

But audio stresses the computer much more than that, and compared with all the other things people use computers for, audio is very much a minority activity. So you can expect problems - and we hear about people's many and varied problems all the time here at RecordProducer.com.

So if you are not having problems with your computer, congratulations to you.

Wait...

If you're not having problems with your computer, that means you're not stressing it, and that means you're probably not being sufficiently creative.

Yes, being caused to shout and curse at your computer is a GOOD THING! It shows you're doing good original work.

Just remember that next time it happens.
Publication date: Wednesday February 09, 2011
Author: David Mellor, Course Director of Audio Masterclass

Korg Pa3X Video Manual Part 5 - Vocal Processor

Saturday, July 7, 2012

What is the difference between gain and level?

Gain... Level... Are you confused? And does it make any difference if you are?

By David Mellor, Course Director of Audio Masterclass

Having thought carefully, I really can't imagine any scenario in practical audio operations where it would matter if someone confused gain and level. If anyone can think of such a situation, then I'd love to know. But it doesn't hurt to have these things clear in one's mind, so here is a simple statement that should help...

Gain = change in level

So a signal has a certain level, whether it be sound pressure, voltage or digits. If you do something that changes the level, you have applied gain. So if the level of a signal is -26 dBFS and you apply 6 dB of gain, the signal level rises to -20 dBFS.

The word 'gain' of course implies more of something. Like 'profit' means more money. The opposite of 'profit' is 'loss'. The opposite of 'gain' is 'attenuation'. So if we want to make a signal lower in level, then we have to apply an attenuation. If the level of a signal is -6 dBFS and you apply 12 dB of attenuation, the signal level drops to -18 dBFS. With the magic of negative numbers, which have been with us for more than 2000 years now, we can indeed talk about negative gain just as easily as attenuation. So once again if the level of a signal is -6 dBFS and you apply -12 dB of gain, the signal level drops to -18 dBFS.
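
Here it is as a small sketch: levels in decibels simply add, and the equivalent linear factor is 10 raised to the power of dB/20.

    # Gain = change in level: adding gain in dB shifts the level,
    # which multiplies the signal by 10^(dB/20).
    def apply_gain(level_dbfs, gain_db):
        return level_dbfs + gain_db

    def linear_factor(gain_db):
        return 10 ** (gain_db / 20)

    print(apply_gain(-26, 6))            # -20 dBFS, as in the first example
    print(apply_gain(-6, -12))           # -18 dBFS: negative gain = attenuation
    print(round(linear_factor(-12), 3))  # 0.251 - about a quarter of the original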

But...

In electronic audio, there is a way of thinking that positive gain is provided by active devices - devices that take electricity from a power source and use it to boost the signal. Level in a downwards direction - attenuation - can be controlled by passive devices that need no power source - a couple of resistors will do nicely.

So an electronic engineer may think in terms of controlling gain with active devices and controlling level with a passive device such as a fader.

But how can a fader provide +10 dB of gain at the top of its scale? Simple - by actively applying that 10 dB of gain before the fader. When the fader is set to 0 dB, it attenuates the already-boosted signal by 10 dB.
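
Here is that gain structure as a sketch. The +10 dB figure follows the example above; real console designs vary.

    # A fixed +10 dB active boost sits ahead of the fader; the fader
    # itself only ever attenuates. At its '+10' mark it passes the
    # boosted signal untouched; at its '0 dB' mark it attenuates by 10 dB.
    PRE_BOOST_DB = 10

    def fader_output(level_db, fader_mark_db):
        attenuation_db = fader_mark_db - PRE_BOOST_DB  # always 0 or below
        return level_db + PRE_BOOST_DB + attenuation_db

    print(fader_output(-20, 0))   # -20 dB: unity gain at the 0 dB mark
    print(fader_output(-20, 10))  # -10 dB: the full +10 dB of gain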

And...

I do feel a mild sense of irritation when, for example, I see a 'gain reduction' meter on a compressor. It should be 'level reduction' or 'attenuation'. Or it could just be labeled 'gain' and calibrated in negative values of decibels.

At the end of the day however it isn't that much to worry about in practical audio operations. Just keep in mind that 'gain=change in level' and you'll be fine!
 
Publication date: Saturday July 07, 2012
Author: David Mellor, Course Director of Audio Masterclass

Korg Pa3X Video Manual Part 4 - Song Play

Friday, July 6, 2012

How much would you like to play an amazing keyboard instrument like this?

You don't often see keyboards like this one. A delight to the eye as well as the ear...

By David Mellor, Course Director of Audio Masterclass
 
My role as an occasional composer of cheap and cheerful television music gives me a great excuse to watch daytime TV if I feel like it. And today I spent a few relaxing minutes watching a little of the BBC's Bargain Hunt, in which two teams of contestants buy junk at boot sales and sell it on as 'antiques' at auction. The winning team is the one that makes the smallest loss! On the odd occasion, they might even make a small profit and get to keep it.

In the middle of the program there is normally a slot where presenter Tim Wonnacott enthuses over some piece of genuine antiquery. And today's didn't look very promising to start with. But it turned out to be quite amazing. Here's the video...

[Video embedded in the original post]



Wow! Well, it probably doesn't work fully and, even if it did, it would take a lot of maintenance to keep it going. But it's an amazing instrument with so much character. And I want one like it.

Now... who can I see about a MIDI retrofit?

Publication date: Friday July 06, 2012
Author: David Mellor, Course Director of Audio Masterclass

Korg Pa3X Video Manual Part 3 - Styles

Thursday, July 5, 2012

Windows 8 brings performance improvements to Sonar

Lower latency, better CPU load balancing, reduced memory usage, better disk performance... Something to look forward to?

By David Mellor, Course Director of Audio Masterclass

Here's an interesting article written by someone who knows about the inner workings of DAW software. That someone is Noel Borthwick, CTO of Cakewalk Inc., so he should know quite a lot. Really, quite a lot. These are the inner workings that users don't need to think about on a day-to-day basis. But a little knowledge or awareness of how a DAW works 'under the hood' must have a certain value.

It is also a useful reminder that OS developers don't care that much about pro audio because it is a very small part of their market. As Borthwick says, "Call me cynical but in the multimedia/DAW industry we're bottom feeders. Most operating systems vendors don't really care about high performance audio, so most benefits we see tend to be 'happy accidents' or side effects of other more commercially viable features."

As you might have heard already, Windows 8 has two modes of operation - Metro and Desktop. Metro, so it seems to me, is Microsoft's answer to Apple's iOS, as found in the iPhone, iPod and iPad. Although it is almost inevitable that audio and music apps will be created for Metro, that is going to be something for the future rather than right now. So Desktop mode will be our playground for some time to come.

When it comes to upgrading something as important as an operating system, my inclination is always to play safe. I have working DAW systems and I want to keep them working. Upgrading always carries a risk, so I will only upgrade a test system, not a production system. When the test system is proven to work properly, then I will put it into production, but I will always have a backup running the old software until I am absolutely sure.

So Windows 8 doesn't fill me with excitement in that sense. On the other hand, one cannot ignore the relentless march of progress. Sooner or later, upgrading becomes essential. And if Windows 8 offers advantages in any way, I want to have those advantages.

At this point you might like to read Borthwick's article in its entirety. Or perhaps just skip to the conclusions.
Borthwick's conclusions are encouraging. To quote...



SONAR CPU gains were observed when using Windows 8 for low-latency performance tests. These gains mean you can run bigger loads in Win8 at low latency without audio glitching.
  • Low latency plugins… 15.5% CPU reduction
  • Input monitoring… 8% CPU reduction
  • High track count… 23% CPU reduction
  • High bandwidth audio… 6.2% CPU reduction
Workloads for cores are more evenly balanced at low latencies on Windows 8. Better balanced core workloads translate to more efficient use of multiple CPU core hardware and thereby better workload scaling for large projects.
  • Low latency plugins… 23% improvement
  • Input monitoring… 31.7% improvement
  • High track count… 30.6% improvement
  • High bandwidth audio… 17.5% improvement
A 7.9% reduction in memory use under Win8 was observed when loading a large real world SONAR project (Cori Yarkin project from SONAR sample content) under identical system configuration. Reduced memory load can be observed in most of the tests.

A 78% improvement under Win8 was observed in disk read/write performance while reading large buffer sizes. Improvements were more moderate at smaller buffer sizes.

An 85% reduction in system calls was observed under Windows 8 in the input monitoring case and more moderate gains in the other cases. Fewer system calls translate to improved CPU load as well as fewer user mode to kernel transitions which mean fewer audio glitches.

A 25-50% reduction in kernel use can be observed in some of the tests with Win8. Lower kernel use results in fewer audio glitches since it leaves more headroom for audio drivers.



From these data, the future for Windows 8 in Desktop mode as a DAW platform looks very bright. Basically, you get more performance from your existing DAW software. Of course, this test only applies to Sonar, but hopefully similar improvements can be achieved in other Windows DAWs.

Oh, and by the way - I like the $39.99 upgrade pricing a lot! This isn't what we normally expect from Microsoft.

And I like the new Metro look too. My iPhone looks a mess with all those jazzy icons which, compared to the clean lines of Metro, now all seem rather dated.
 
Publication date: Thursday July 05, 2012
Author: David Mellor, Course Director of Audio Masterclass