Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers, and we offer customized service.

Tuesday, October 31, 2017

Q. Can you explain what a submix is?

I've come across the term 'submix' a few times recently. I can guess at what it means, but would like to know for sure. Can you explain?

Tony Quayle via email

SOS Reviews Editor Matt Houghton replies: A submix is simply mixing tracks down to 'stems', or sending them to group buses. For example, you can route all your separate drum mics to a group bus so that you can process them together. You'd call that your drum bus, and if you bounced that down to a stereo file, that would be a drum submix. Using buses in this way is very common indeed, whether for drums, backing vocals, guitars or whatever, because it means that you can easily gain control over a large, unwieldy mix with only a few faders.

A submix is simply a way of sending several tracks to a group bus. This enables you to process the tracks all together. In the example shown here, a compressor is being used on a side-chain input, causing the whole submix to be ducked by the kick drum.

These days, there's rather less call for submixes, particularly now that you have the full recall of a DAW project. However, they can still be useful in a few situations, such as providing material to remixers, or allowing you to perform 'vocal up' and 'vocal down' mixes if you're asked to. Bear in mind, though, that if you're using any processing on your master bus (for example, mix compression), you can't simply bounce each group down on its own and expect to add them all back together to create your mix; the bus compressor will react according to the input signal. You'd have to bypass bus processing when bouncing the submix, and re-do any such processing when summing the submixes back together.
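To see why bounced submixes don't simply add back up to the original mix once bus processing is involved, here is a minimal Python/NumPy sketch. It isn't any particular plug-in's algorithm: compress() is just a crude static gain-reduction curve, and the two sine waves stand in for real stems.

import numpy as np

def compress(x, threshold=0.5, ratio=4.0):
    # Crude static compressor: anything over the threshold is scaled back by the ratio.
    y = x.copy()
    over = np.abs(x) > threshold
    y[over] = np.sign(x[over]) * (threshold + (np.abs(x[over]) - threshold) / ratio)
    return y

t = np.linspace(0, 1, 44100, endpoint=False)
drums = 0.6 * np.sin(2 * np.pi * 100 * t)      # stand-in for a bounced drum submix
vocals = 0.6 * np.sin(2 * np.pi * 440 * t)     # stand-in for a bounced vocal submix

# 'Mix bus compression' applied to the full mix...
full_mix = compress(drums + vocals)
# ...is not the same as compressing each submix on its own and summing afterwards:
summed_stems = compress(drums) + compress(vocals)
print("max difference:", np.max(np.abs(full_mix - summed_stems)))

The printed difference is non-zero because a compressor reacts to the level of whatever it is fed, so processing the complete mix is not the same as processing each submix separately and summing afterwards.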



Published July 2011

Saturday, October 28, 2017

Q. Is a 'reflection filter' worth the money?

I've been thinking about trying out an SE Reflexion Filter or similar device. So far, however, I've been hearing mixed reviews and seeing a lot of DIY stuff when looking them up. Some folks say that they're not worth the money, but the DIY options — hanging duvets and hooking up foam contraptions — look so complicated that I figure it must be, to a certain extent. My studio is treated, but I'd still like to tighten up on the vocal side of things. I've been going back and forth looking at different options, but I just don't know whether it's worth it and don't know anyone who has one I can try. Can you give me some advice?

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: All of these products are useful to some extent, but they aren't a magical cure‑all and can't instantly turn a bad‑sounding room into a good one. Most people use cardioid‑pattern mics for recording vocals and, if you think about the physics of the situation, the mic is therefore most sensitive in the direction facing the performer, and only slightly less sensitive to the sides. So it's going to pick up any sound reflected from rear and side walls that bounces back over the shoulders and around the performer.

It should be obvious, then, that the single most important area to treat with sound-absorption material is directly behind and to the sides of the performer. This is why we champion the SOS mantra of hanging a duvet (or similar) behind the vocalist: it really does make a massive difference to any recording sessions (unless you are lucky enough to have a properly treated studio).
As a vocal mic is so sensitive in the direction of the singer, it will pick up reflections from any walls or surfaces behind and to the sides of the singer, so whether you decide to use a reflection filter or not, acoustic treatment in these areas is a priority. A Reflexion Filter or similar device is designed to absorb some of the sound that would hit the rear of the mic, so if you're thinking of buying one, it makes sense to use it in conjunction with acoustic treatment to the rear of the performer, rather than simply using one or the other.

The idea of the reflection filter type of product is to provide some helpful absorption of sounds that would otherwise reach the rear-facing sides of the mic, and also to catch and absorb some of the direct sound from the vocalist. The latter helps to minimise the amount of energy that gets out into the room in the first place, thus reducing the amount that subsequently bounces around to get back into the mic.

The differences between the various alternative filter designs really come down to the usual compromises of size, weight, cost and the efficiency of their low‑frequency absorption. Bigger is generally better, as is thicker: both extend the absorption down to lower frequencies.

Whereas most products use a simple acoustic foam panel, the SE design uses a clever multi‑layer panel construction, which is designed to extend the LF performance without making the unit excessively heavy or thick. However, simple DIY filter constructions can be virtually as effective as commercial versions, and if you have an experimental nature I'd certainly recommend having some fun with a foam panel to see whether the idea is useful to your specific recording situation or not!

However, start with the absorbers behind the performer — the ubiquitous duvet — because that will make much more of an improvement.


Published August 2011

Thursday, October 26, 2017

Q. Does my shotgun mic have any uses in the studio?

I've recently inherited a shotgun mic that seems to be in pretty good condition. However, I never do any kind of video or broadcast work, so I can't see myself using it for its intended purpose. I'm loath to get rid of something if I can make use of it, so are there any uses for a shotgun mic in the studio?

James Gately, via email

SOS Technical Editor Hugh Robjohns replies: You can always find a use for a decent mic in a studio, but shotgun — or rifle — mics aren't the easiest to use because their particular blend of properties doesn't really work well in enclosed spaces.

The shotgun mic gets its name from the long slotted tube — the 'interference tube' — affixed in front of a (usually) hypercardioid capsule. The idea of the tube is to enhance the rejection of off‑axis sound sources, and thus make the polar pattern more directional, although it relies on on‑axis sounds not being picked up off‑axis (and vice versa), and that means it doesn't work so well in an enclosed and reverberant space.

Though a shotgun mic may appear to have obvious uses in the studio — rejecting, as it does, off‑axis sound effectively — it actually captures highly coloured spill and is, therefore, very difficult to use in the studio context.

In normal use, the sound wavefront from an on‑axis source travels down the length of the tube unimpeded, to strike the capsule diaphragm in the usual way, and so generates the expected output. However, sound wavefronts from an off‑axis sound source enter the tube through the side slots. The numerous different path lengths from each slot to the capsule itself mean that multiple off‑axis sound waves actually arrive at the diaphragm at the same time and with a multitude of different relative phase shifts. Consequently, this multiplicity of sound waves partially cancel one another out, and so sound sources to the sides of the microphone are attenuated relative to those directly in front. The polar pattern essentially becomes elongated and narrower in the forward axis, and the microphone is said to have more 'reach' or 'suck'.

Sadly, though, there's no such thing as a free lunch, and in this case the down side is that the interference‑tube phase cancellation varies dramatically with frequency (because the phase‑cancellation effects relate to signal wavelength as a proportion of the interference‑tube slot distances). If you examine the polar plot at different frequencies of a real interference‑tube microphone, you'll see that it resembles a squashed spider: deep nulls and sharp peaks in the polar pattern appear all around the sides and rear of the mic. What this means, in practice, is that off‑axis sounds are captured with a great deal of frequency coloration, and if they move relative to the mic, they will be heard with a distinctly phasey quality.
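To get a feel for how strongly that coloration depends on frequency and angle, here is a highly idealised sketch in Python/NumPy. It treats the interference tube as a row of equally spaced, equally sensitive openings and ignores damping and diffraction; the tube length, slot count and test frequencies are arbitrary assumptions rather than the specification of any real microphone.

import numpy as np

c = 343.0                                   # speed of sound in m/s
tube_length = 0.25                          # assumed 25cm interference tube
n_slots = 20                                # assumed number of side slots
slot_pos = np.linspace(0.0, tube_length, n_slots)   # slot distances from the capsule
freqs = np.array([250.0, 1000.0, 4000.0, 8000.0])   # test frequencies in Hz

def response_db(theta_deg):
    # For an off-axis arrival, each slot adds an extra path of slot_pos * (1 - cos(theta)),
    # so the capsule sees many copies of the wave with different delays. Summing their
    # phasors shows how much they cancel at each test frequency.
    theta = np.radians(theta_deg)
    delays = slot_pos * (1.0 - np.cos(theta)) / c
    phasors = np.exp(-2j * np.pi * np.outer(freqs, delays))
    magnitude = np.abs(phasors.sum(axis=1)) / n_slots
    return 20 * np.log10(magnitude + 1e-12)

for angle in (0, 30, 60, 90):
    print(f"{angle:3d} deg:", np.round(response_db(angle), 1), "dB at", freqs, "Hz")

Even in this toy model the rejection is strongly frequency-dependent: low frequencies are barely attenuated at all off-axis, while higher frequencies show the deep, angle-dependent dips that make real-world spill sound so coloured.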

So while it might seem that a shotgun mic could afford greater separation in a studio context, in reality the severe off‑axis colouration makes it far less advantageous than it appears: the strongly coloured spill does more damage than good, making it almost impossible to get a sweet‑sounding mix.

Shotgun mics really only provide a useful advantage out of doors (or in very large and well‑damped enclosed spaces), and where no other, better‑sounding alternative is viable. My advice would be to sell the mic to someone who is involved with film, video or external sound effects work, and use the funds to buy something more useful for your studio applications!


Published August 2011

Tuesday, October 24, 2017

Q. How could I get the most from a Korg Monotron?

I have a fairly basic setup that I've so far been using for some simple audio work. However, I'd like to introduce some more interesting sounds and thought that a Korg Monotron might be an inexpensive way to start experimenting. However, being a beginner, I'm not entirely sure of the extent of the Monotron's capabilities. How could I get the most from it? Do you have any interesting tips or tricks?

Craig Varney via email

SOS contributor Paul Nagle replies: OK, without knowing about your setup I'll opt for a generic sort of reply. As you know, the Monotron is a tiny synthesizer with just five knobs and a short ribbon. Its strength is in having genuine analogue sound generation rather than massive versatility or playability. But, in my opinion, it possesses that 'certain something' that stands out in a recording.

Being old and hairy, I use mine primarily for the kind of weebly sound effects heard on Hawkwind or Klaus Schulze albums. Add a dash of spring reverb for atmosphere and its electronic tones get closer to my EMS Synthi than a pile of posh digital synths! Through studio monitors (or a large PA), the bass end is quite impressive and the filter screams like a possessed kettle, its resonance breaking up in that distinctive 'Korg MS' way. On stage or in the studio, I'd always recommend extra distortion, courtesy of as many guitar pedals as you can get your hands on.
Though the Monotron looks like a simple piece of kit, it has surprising potential when used in inventive ways.

An easy way to experiment with the sounds available from your Korg Monotron is to pile on the effects with different guitar pedals.

But let's not get too carried away. We're still talking about a basic monophonic synthesizer with an on/off envelope and just one waveform (a sawtooth). If it's tunes you're hoping for, that's going to take some work, and preferably external help, such as a sampler. Personally, I ignore the keyboard markings on the ribbon, finding the correct pitch entirely by ear. The ribbon's range is only slightly above one octave, so to squeeze out a fraction more, turn the tiny screw at the rear as far as it will go. On my Monotron, this gives a range of about an octave and a half: roughly comparable to your typical X‑Factor contestant.

As with X‑Factor contestants, there's no universally adopted gripping technique, but I mostly sweep the pitch with my right thumb whilst adjusting the knobs with my left hand. I also find a Nintendo DS stylus works fairly well for melodies, à la the Stylophone.

When your thumb gets tired, you should try the Monotron's second trick: being an audio processor. In a typical loop‑harvesting session, I'll run a few drum loops through it while playing with the filter cutoff and resonance. Once I've recorded a chunk of that, I go back through the results, slicing out shorter loops that contain something appealing, discarding the rest. Often when the filter is on the edge of oscillation, or is modulated by the LFO cranked to near maximum speed, loops acquire that broken, lo‑fi quality that magically enhances plush modern mixes (I expect that this effect is due to our ears becoming acclimatised to sanitised filter sweeps and in‑the‑box perfection). This is a fun (and cheap) way to compile an array of unique loops to grace any song, and you can process other signals too, of course. The results can get a little noisy, though, so you will need to address that, perhaps with additional filtering, EQ or gating. Alternatively, you can make a feature of the hiss, using some tasteful reverb or more distortion.

I have a pal who takes his Monotron into the park with a pocket solid‑state multitracker and acoustic guitar – the joys of battery power! When multitracking in the studio, you might be skilled enough to eventually achieve tracks like those seen on YouTube. Or, if you have a sampler (hardware or computer‑based), and take the time to sample many individual notes, the Monotron can spawn a polyphonic beast that sends expensive modelled analogues scurrying into the undergrowth. Some of the dirty filter noises, when transposed down a few octaves, can be unsettlingly strange and powerful.

I don't know if your setup includes digital audio workstation software, but if so, its built‑in effects and editing can do marvellous tricks with even the simplest analogue synthesizer. Later down the line, you will discover more sophisticated programs — such as Ableton Live and its Lite versions — offering mind‑boggling ways to warp audio, shunting pitch and timing around with a freedom I'd have killed for when I started out.
Anyone handy with a soldering iron should check out the raft of mods kicking around: Google 'Monotron mods' to see what I mean. Lastly, if the Monotron is your first real analogue synth, beware: it might be the inexpensive start to a long and hopeless addiction. Oh, and my final tip is very predictable to any who know me: delay, delay and more delay.

For a full review of the Korg Monotron go to /sos/aug10/articles/korg‑monotron.htm.


Published May 2011

Saturday, October 21, 2017

How We Hear Pitch

By Emmanuel Deruty

When two sounds happen very close together, we hear them as one. This surprising phenomenon is the basis of musical pitch — and there are lots of ways to exploit it in sound design.

How We Hear Pitch
Films and television programmes consist of a series of individual still images, but we don't see them as such. Instead, we experience a continuous flow of visual information: a moving picture. Images that appear in rapid succession are merged in our perception because of what's called 'persistence of vision'. Any image we see persists on the retina for a short period of time — generally stated as approximately 40ms, or 1/25th of a second.

A comparable phenomenon is fundamental to human hearing, and has huge consequences for how we perceive sound and music. In this article, we'll explain how it works, and how we can exploit it through practical production tricks. The article is accompanied by a number of audio examples, which can be downloaded as a Zip archive at /sos/apr11/articles/perceptionaudio.htm. The audio examples are all numbered, so I'll refer to them simply by their number in the text.

Perceptual Integration

The ear requires time to process information, and has trouble distinguishing audio events that are very close to one another. With reference to the term used in signal processing, we'll call this phenomenon 'perceptual integration', and start by pointing out that there are two ways in which it manifests itself. In some cases, the merging of close occurrences is not complete. They're perceived as a whole, but each occurrence can still be heard if one pays close attention. In others, it becomes completely impossible to distinguish between the original occurrences, which merge to form a new and different audio object. Which of the two happens depends on how far apart the two events are.

Take two short audio samples and play them one second apart. You will hear two samples. Play them 20 or 30ms apart, and you will hear a single, compound sound: the two original samples can still be distinguished, but they appear as one entity. Play the two sounds less than 10ms apart, and you won't hear two samples any more, just a single event. We are dealing with two distinct thresholds, each one of a specific nature. The first kind of merging seems to be a psychological phenomenon: the two samples can still be discerned, but the brain spontaneously makes a single object out of them. In the second case, the problem seems to be ear‑based: there is absolutely no way we can hear two samples. The information just doesn't get through.
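If you'd like to reproduce this experiment yourself rather than rely on the downloadable examples, a short NumPy sketch along these lines will generate pairs of clicks at whatever spacing you choose (the click length and the list of gaps below are arbitrary choices, and the files are written with SciPy purely for auditioning).

import numpy as np
from scipy.io import wavfile

sr = 44100
click = np.zeros(200, dtype=np.float32)
click[0] = 1.0                               # a single-sample impulse

def two_clicks(gap_ms):
    # Two impulses separated by gap_ms, plus half a second of silence after.
    gap = int(sr * gap_ms / 1000)
    out = np.zeros(gap + len(click) + sr // 2, dtype=np.float32)
    out[:len(click)] += click
    out[gap:gap + len(click)] += click
    return out

# Listen to these in order: the pair fuses into one compound event somewhere
# around 20-30ms, and into a single click below roughly 10ms.
for gap_ms in (1000, 100, 50, 30, 20, 10, 5, 2):
    wavfile.write(f"clicks_{gap_ms}ms.wav", sr, two_clicks(gap_ms))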

These two thresholds are no mere curiosities. Without them, EQs would be heard as reverberation or echoes, compression would never be transparent, and AM and FM synthesis would not exist. Worse still, we would not be able to hear pitch! In fact, it's no exaggeration to state that without perceptual integration, music would sound completely different — if, indeed, music could exist at all. In audio production, an awareness of these perceptual thresholds will enable you to optimise a variety of production techniques you might otherwise never have thought about in this way. The table to the right lists some situations in which perceptual integration plays an important role.

Changing a single parameter can radically change the nature of sound(s).

Two Become One

Let's think in more detail about the way in which two short samples, such as impulses, are merged first into a compound sample and then into a single impulse. You can refer to audio examples 1 through 15 to hear the two transitions for yourself, and real‑world illustrations of the phenomenon are plentiful. Think about the syllable 'ta', for instance. It's really a compound object ('t' and 'a'), as can easily be confirmed if you record it and look at the waveform. But the amount of time that separates both sounds lies below the upper threshold, and we hear 't' and 'a' as a single object. Indeed, without perceptual integration, we wouldn't understand compound syllables the way we do. Nor would we be able to understand percussive sounds. Take an acoustic kick‑drum sample, for instance. The attack of such a sample is very different from its resonance: it's a high, noisy sound, whereas the resonance is a low, harmonic sound. Yet because the two sounds happen so close to each other, we hear a single compound object we identify as a 'kick drum'.

In audio production, there are lots of situations where you can take advantage of this merging. A straightforward example would be attack replacement: cut the attack from a snare drum and put it at the beginning of a cymbal sample. The two sounds will be perceptually merged, and you will get a nice hybrid. Refer to audio examples 16 to 18 to listen to the original snare, the original cymbal, and then the hybrid sample. This is but an example, and many other applications of this simple phenomenon can be imagined. A very well-made compound sound-object of this kind can be found in the Britney Spears song 'Piece Of Me': the snare sound used is a complex aggregate of many samples spread through time, and though it's easy to tell it's a compound sound, we really perceive it as a single object.
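Here is a rough sketch of the attack-replacement idea mentioned above, in Python. The file names are placeholders for your own mono snare and cymbal samples, and the 15ms crossfade is simply one reasonable choice that keeps the splice comfortably inside the integration window.

import numpy as np
from scipy.io import wavfile

# Load a snare and a cymbal hit (mono WAV files of your own; the names are placeholders)
sr, snare = wavfile.read("snare.wav")
sr2, cymbal = wavfile.read("cymbal.wav")
assert sr == sr2

attack_len = int(0.015 * sr)                 # roughly the first 15ms of the snare
fade = np.linspace(1.0, 0.0, attack_len)     # fade the snare attack out...
rise = 1.0 - fade                            # ...while the cymbal fades in underneath

hybrid = cymbal.astype(float).copy()         # assumes the cymbal is longer than 15ms
hybrid[:attack_len] = snare[:attack_len] * fade + cymbal[:attack_len] * rise

wavfile.write("hybrid.wav", sr, (hybrid / np.max(np.abs(hybrid))).astype(np.float32))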

Creating Pitch

Let's try to repeat our original experiment, but this time with several impulses instead of only two. With the impulses one second apart, we hear a series of impulses — no surprise there. However, reducing the time between the impulses brings a truly spectacular change of perception: at around a 50ms spacing, we pass the upper threshold and begin to hear a granular, pitched sound. As we near 10ms and cross the lower threshold, we begin to hear a smooth, pitched waveform, and it's quite hard to remember that what you are actually hearing is a sequence of impulses. Refer to audio examples 19 to 33 in this order to witness for yourself this impressive phenomenon. Hints of pitch can also be progressively heard in examples 1 through 15, for the same reasons.

This points to a fundamental property of hearing: without perceptual time integration, we would have no sense of pitch. Notice how, in this series of examples, we begin to hear pitch when the spacing between impulses falls to around 50ms. It's no coincidence that the lowest pitch frequency humans can hear — 20Hz — corresponds to a period of 50ms.
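A short sketch makes the period-to-pitch relationship concrete: each impulse train below repeats every period_ms milliseconds, so its repetition rate is simply 1000/period_ms Hz. The particular periods chosen are arbitrary.

import numpy as np
from scipy.io import wavfile

sr = 44100

def impulse_train(period_ms, seconds=2.0):
    # A train of single-sample impulses spaced period_ms apart.
    out = np.zeros(int(sr * seconds), dtype=np.float32)
    step = int(sr * period_ms / 1000)
    out[::step] = 1.0
    return out

# 1000ms: separate clicks; 50ms: a 20Hz flutter on the edge of pitch; 10ms: a clear 100Hz tone
for period_ms in (1000, 100, 50, 20, 10, 5):
    wavfile.write(f"train_{period_ms}ms.wav", sr, impulse_train(period_ms))
    print(f"{period_ms}ms period -> {1000 / period_ms:.0f}Hz repetition rate")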

In fact, we're often told that humans are not able to hear anything below 20Hz, but referring to our little experiment, you can see that this is misleading. Below 20Hz, we can indeed hear everything that's going on — just not as pitch. Think about it: we hear clocks ticking perfectly well, even though they tick at 1Hz; we're just not able to derive pitch information from the ticking. Again, compare hearing with vision: obviously, we can see pictures presented at fewer than 10 frames per second; we just see them as… pictures, not as a continuous stream of information in the manner of a film.

You don't need this article to be aware of the existence of pitch, so let's get a bit more practical. In audio production, reducing the interval between consecutive samples to below perceptual time thresholds can be of real interest. A good example can be found in a piece called 'Gantz Graf' by British band Autechre. In this piece, between 0'56” and 1'05”, you can witness a spectacular example of a snare‑drum loop being turned into pitch, then back into another loop. More generally, most musical sequences in this track are made from repetitions of short samples, with a repetition period always close to the time thresholds. Apparently, Autechre enjoy playing with the integration zone.

This track being admittedly a bit extreme, it's worth mentioning that the same phenomenon can also be used in more mainstream music. In modern R&B, for instance, you can easily imagine a transition between two parts of a song based on the usual removal of the kick drum and the harmonic layer, with parts of the lead vocal track being locally looped near the integration zone. This would create a hybrid vocal‑cum‑synthesizer‑like sound that could work perfectly in this kind of music.

AM Synthesis: Tremolo Becomes Timbre

The idea that simply changing the level of a sound could alter its timbre might sound odd, but this is actually a quite well‑known technique, dating back at least to the '60s. Amplitude modulation, or AM for short, was made famous by Bob Moog as a way to create sounds. It's an audio synthesis method that relies on the ear's integration time. When levels change at a rate that approaches the ear's time thresholds, they are no longer perceived as tremolo, but as additional harmonics that enrich the original waveform.

AM synthesis converts level changes into changes in timbre.

AM synthesis uses two waveforms. The first one is called the carrier, and level changes are applied to this in a way that is governed by a second waveform. To put it another way, this second waveform modulates the carrier's level or amplitude, hence the name Amplitude Modulation. The diagram on the previous page illustrates this principle with two sine waves. When we modulate the carrier with a sine wave that has a period of one second, the timbre of the carrier appears unchanged, but we hear it fading in and out. Now let's reduce the modulation period. When it gets close to 50ms — the upper threshold of perceptual integration — the level changes are not perceived as such any more. Instead, the original waveform now exhibits a complex, granular aspect. As the lower threshold is approached, from 15ms downwards, the granular aspect disappears, and the carrier is apparently replaced by a completely different sound. Refer to audio examples 34 through 48, in this order, to hear the transition from level modulation through granular effects to timbre change.

In audio production, you can apply these principles to create interesting‑sounding samples with a real economy of means. For instance, you can use a previously recorded sample instead of a continuous carrier wave. Modulating its amplitude using an LFO that has a cycle length around the integration threshold often brings interesting results. This can be done with any number of tools: modular programming environments such as Pure Data, Max MSP and Reaktor, software synths such as Arturia's ARP 2600, and hardware analogue synths such as the Moog Voyager. If you like even simpler solutions, any DAW is capable of modulating levels using volume automation. The screen above shows basic amplitude modulation of pre‑recorded samples in Pro Tools using volume automation (and there's even a pen tool mode that draws the triangles for us).

You can use Pro Tools volume automation to process samples with amplitude modulation techniques.
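If you would rather experiment numerically than draw automation, the sketch below applies the same idea directly: a sine modulator raises and lowers the level of a 440Hz carrier at periods from one second down to 10ms. The carrier frequency and the set of modulation periods are my own arbitrary choices, not values taken from the article's audio examples.

import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(sr * 2) / sr
carrier = np.sin(2 * np.pi * 440 * t)        # 440Hz sine carrier

def am(mod_period_ms):
    # Modulate the carrier's amplitude with a sine whose period is mod_period_ms.
    mod_freq = 1000.0 / mod_period_ms
    modulator = 0.5 * (1 + np.sin(2 * np.pi * mod_freq * t))   # 0..1 level envelope
    return (carrier * modulator).astype(np.float32)

# 1000ms: audible fade in/out; ~50ms: granular; 10ms (100Hz): sidebands at 340 and 540Hz
for period_ms in (1000, 200, 50, 20, 10):
    wavfile.write(f"am_{period_ms}ms.wav", sr, am(period_ms))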

FM Synthesis: Vibrato Becomes Timbre

FM synthesis is, to some extent, similar to AM synthesis. It also uses a base waveform called a carrier, but it is modulated in frequency rather than in amplitude. The diagram to the right illustrates this principle with two sine waves. The FM technique was invented by John Chowning at Stanford University near the end of the '60s, then sold to Yamaha during the '70s, the outcome being the world‑famous DX7 synth.

FM synthesis converts frequency changes into changes in timbre.

Suppose a carrier is modulated in frequency by a waveform whose period is one second: we hear regular changes of pitch, or vibrato. Now let's reduce the modulation period. Near 50ms we begin to have trouble hearing the pitch changes and experience a strange, granular sound. Near 10ms the result loses its granularity, and a new timbre is created. Audio examples 49 to 63 in this order show the transition from frequency modulation to timbre change.

In practice, dedicated FM synths, such as the Native Instruments FM8 plug‑in, are generally not designed to function with modulation frequencies as low as this, which makes it difficult to play with the integration zone. It's often easier to use a conventional subtractive synth in which you can control pitch with an LFO — which, in practice, is most of them!
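As with the AM example, a few lines of NumPy will let you sweep the modulation period across the integration zone. This sketch generates the carrier by integrating an instantaneous frequency, which is one standard way of implementing FM; the carrier frequency and modulation depth are assumptions.

import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(sr * 2) / sr

def fm(mod_period_ms, carrier_freq=440.0, depth=50.0):
    # Sine carrier whose instantaneous frequency is swept +/- depth Hz by a sine LFO.
    mod_freq = 1000.0 / mod_period_ms
    inst_freq = carrier_freq + depth * np.sin(2 * np.pi * mod_freq * t)
    phase = 2 * np.pi * np.cumsum(inst_freq) / sr    # integrate frequency to get phase
    return np.sin(phase).astype(np.float32)

# 1000ms: slow vibrato; ~50ms: warbling and granular; 10ms (100Hz): a new, static timbre
for period_ms in (1000, 200, 50, 20, 10):
    wavfile.write(f"fm_{period_ms}ms.wav", sr, fm(period_ms))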

Panning, Delay & Stereo Width

A well‑known trick that takes advantage of integration time is the use of delay to create the impression of stereo width. As illustrated in the top screen overleaf, we take a mono file and put it on two distinct tracks. Pan the first track hard left and the second hard right. Then delay one of the tracks. With a one‑second delay, we can clearly hear two distinct occurrences of the same sample. If we reduce the delay to 50ms, the two occurrences are merged, and we hear only one sample spread between the left and right speakers: the sound appears to come from both speakers simultaneously, but has a sense of 'width'. As we continue downwards, this impression of width remains until 20ms, after which the stereo image gets narrower and narrower. Refer to audio examples 64 to 78 to hear the transition in action.

An easy way to create a stereo impression.
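Here is a minimal version of that left/right delay trick, assuming a mono WAV file of your own (the file name is a placeholder). It simply pads one channel by the chosen delay and writes out a stereo file for each setting.

import numpy as np
from scipy.io import wavfile

sr, mono = wavfile.read("guitar.wav")            # placeholder: any mono source file
mono = mono.astype(np.float32)
mono /= np.max(np.abs(mono))                     # normalise to +/-1 for float WAV output

def widen(delay_ms):
    # Dry signal hard left, delayed copy hard right.
    d = int(sr * delay_ms / 1000)
    left = np.concatenate([mono, np.zeros(d, dtype=np.float32)])
    right = np.concatenate([np.zeros(d, dtype=np.float32), mono])
    return np.stack([left, right], axis=1)       # shape (samples, 2) = stereo

# 1000ms: an obvious echo; 50ms: one 'wide' image; below 20ms: progressively narrower
for delay_ms in (1000, 100, 50, 20, 10, 5):
    wavfile.write(f"wide_{delay_ms}ms.wav", sr, widen(delay_ms))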

This is a simple way to create stereo width, but as many producers and engineers have found, it is often better to 'double track' instrumental or vocal parts. Panning one take hard left, and the other hard right, makes the left and right channels very similar to each other, but the natural variation between the performances will mean that small details are slightly different, and, in particular, that notes won't be played at exactly the same times. Double‑tracking thus produces an effect akin to a short, random delay between the parts, making the technique a variant of the simple L/R delay, though it's more sophisticated and yields better results. Judge for yourself by listening to audio example 79. It's based on the same sample as audio examples 64 to 78, but uses two distinct guitar parts panned L/R. Compare this with audio example 69, the one that features a 50ms L/R delay.

Double‑tracking is an extremely well‑known production trick that has been used and abused in metal music. Urban and R&B music also makes extensive use of it on vocal parts, sometimes to great effect. Put your headphones on and listen to the vocal part from the song 'Bad Girl' by Danity Kane. This song is practically a showcase of vocal re‑recording and panning techniques (see http://1-1-1-1.net/IDS/?p=349 for more analysis). To return to 'Piece Of Me' by Britney Spears: not only does the snare sound take advantage of the merging effect described earlier, but it also generates a stereo width impression by using short delays between left and right channels.

Delay Becomes Comb Filtering

Take a mono file, put it on two tracks and delay one of the tracks, but this time don't pan anything to the left or right. If we set the delay at one second, we hear the same sample played twice. As we reduce the delay time to 15ms or so (the lower threshold of perceptual integration, in this case), the delay disappears and is replaced by a comb filter. This works even better using a multi‑tap delay. With a delay value set at 1s, we hear the original sample superimposed with itself over and over again, which is to be expected. At a delay value approaching 40‑50ms (the upper threshold in this case), we can still distinguish the different delay occurrences, but the overall effect is of some kind of reverb that recalls an untreated corridor. Getting nearer 10ms (the lower threshold in this case), we only hear a comb filter. Refer to audio examples 80 through 94 to listen to the transition between multi‑tap delay, weird reverb and finally comb filter.
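The unpanned version of the experiment is just as easy to sketch: add a single delayed copy of the signal to itself and listen to how the character changes as the delay shrinks. The file name below is a placeholder, and the notch frequencies noted in the comments follow from the response of a single-tap feedforward comb.

import numpy as np
from scipy.io import wavfile

sr, x = wavfile.read("loop.wav")                 # placeholder: a mono drum loop
x = x.astype(np.float32)
x /= np.max(np.abs(x))

def delay_mix(delay_ms):
    # Mix the signal with a single delayed copy of itself, no panning this time.
    d = int(sr * delay_ms / 1000)
    y = np.concatenate([x, np.zeros(d, dtype=np.float32)])
    y[d:] += x
    return 0.5 * y

# 1000ms: a distinct echo; ~40ms: a cheap 'corridor'; 10ms and below: a comb filter
# whose notches sit at odd multiples of 1/(2 * delay), i.e. 50Hz, 150Hz, 250Hz... at 10ms.
for delay_ms in (1000, 200, 40, 10, 5, 1):
    wavfile.write(f"delayed_{delay_ms}ms.wav", sr, delay_mix(delay_ms))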

The ear's ability to convert a multi‑tap delay into a comb filter is exploited in the GRM Comb Filter plug‑in from GRM Tools. GRM Tools seldom make harmless plug‑ins, and this one is no exception, containing a bank of five flexible comb filters that can be used as filters, delays or anything in between. If you happen to get your hands on it, try setting the 'filter time' near the integration zone: fancy results guaranteed.
Likewise, very short reverbs are not heard as reverb but as filters. Conversely, filters can be thought of in some ways as extremely short reverbs — the screen below shows the impulse response not of a reverb, but of a filter. This particular subject was discussed in detail in SOS September 2010 (/sos/sep10/articles/convolution.htm): see especially the section headed 'The Continuum Between Reverb And Filtering', and refer to the article's corresponding audio examples (/sos/sep10/articles/convolutionaudio.htm) 16 to 27 to hear a transition between a reverb and a filter. In the same article, I also explained how discrete echoes gradually turn into a continuous 'diffuse field' of sound when the spacing between them becomes short enough to cross the upper threshold of perceptual integration — see the section called 'Discrete Or Diffuse' and audio examples 3 to 15.

An impulse response from a filter.

Dynamics & Distortion

Dynamic compression involves levelling signal amplitude: any part of the signal that goes over a given level threshold will be attenuated. Consider a signal that's fed into a compressor. Suddenly, a peak appears that's above the level threshold: compression kicks in, and negative gain is applied. However, the gain can't be applied instantaneously. If it was, the signal would simply be clipped, generating harmonic distortion. This can be interesting in certain cases, but the basic purpose of compression remains compression, not distortion. As a consequence, gain‑reduction has to be applied gradually. The amount of reduction should be 0dB at the moment the signal goes over the threshold, and then reach its full value after a small amount of time; the Attack time setting on a compressor determines exactly how much time.
Attack time is an important parameter of dynamic compression.

The screen to the right shows the results of feeding a square wave through a compressor using a variety of attack times. In this screenshot, the attenuation applied by the compressor (the Digidesign Dynamics III plug‑in) is clearly visible. When the attack time is set at 300ms, the action of the gain reduction can clearly be heard as a gradual change in level. When we reduce the attack time to 10ms (the lower time threshold in this case), it's no longer possible to hear this as a level change. Instead, we perceive the envelope change almost as a transient — an 'attack' that now introduces the sound. Refer to audio examples 95 to 103 to hear this effect. For comparison purposes, audio example 104 contains the original square wave without progressive attenuation.
Of course, there is much more to dynamic compression than the attack time: other factors, such as the shape of the attack envelope, the release time, and the release envelope shape, all have the potential to affect our perception of the source sound. In music production, compressors are often set up with time constants that fall below the threshold of perceptual integration, and this is one reason why we think of compressors as having a 'sound' of their own, rather than simply turning the level of the source material up or down.
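The effect of attack time described above is easy to audition with a toy model. The sketch below is not the Dynamics III algorithm, nor any real compressor: it simply fades in a fixed 12dB of gain reduction over the chosen attack time, applied to a square wave, which is enough to hear the perception flip from 'level ride' to 'transient'.

import numpy as np
from scipy.io import wavfile

sr = 44100
t = np.arange(sr) / sr
square = 0.8 * np.sign(np.sin(2 * np.pi * 110 * t)).astype(np.float32)

def compress_attack(x, attack_ms, gain_reduction_db=-12.0):
    # Apply a fixed amount of gain reduction, faded in over the attack time,
    # starting the moment the signal appears (a deliberately simplified model).
    attack = int(sr * attack_ms / 1000)
    target = 10 ** (gain_reduction_db / 20)
    gain = np.ones_like(x)
    gain[:attack] = np.linspace(1.0, target, attack)
    gain[attack:] = target
    return x * gain

# 300ms: an audible fade-down; 10ms: perceived as a transient 'attack' on the note
for attack_ms in (300, 100, 50, 10, 1):
    wavfile.write(f"attack_{attack_ms}ms.wav", sr, compress_attack(square, attack_ms))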

A Starting Point

This article covers many situations in which perceptual time integration can bring unexpected and spectacular results from basic modifications of the audio signal. Think about it: since when are simple level changes supposed to add harmonics to a sample? Yet that's the principle underlying AM synthesis. And how on earth can a simple delay actually be a comb filter? Yet it takes only a few seconds in any DAW to build a comb filter without any plug‑ins. There are many other examples that this article doesn't cover. For instance, what happens if you automate the centre frequency of a shelf EQ to make it move very quickly? Or automate panning of a mono track so it switches rapidly from left to right? Try these experiments and more for yourself, and you might discover effects you never thought could exist.

Why Perceptual Integration Exists

Perceptual integration is an interesting phenomenon and one that's very important for music production. But why does it exist? As I explained in the main text, the upper threshold of perceptual integration lies between 30 and 60 milliseconds, depending on the situation. This seems to be a cognitive phenomenon that is based in the brain, and is not fully understood. On the other hand, the lower threshold, which lies between 10 and 20 ms, depending on the circumstances, originates in the physics of the ear, and is easier to understand.
The key idea here is inertia. Put a heavy book on a table, and try to move it very quickly from one point to another: no matter what you do, the book will resist the acceleration you apply to it. With regard to the movement you want it to make, the book acts like a transducer — like a reverb or an EQ, in fact, except that it's mechanical instead of being electrical or digital. The input of the 'book transducer' is the movement you try to apply to it, and its output is the movement it actually makes. Now, as we saw in March's SOS (/sos/mar11/articles/how-the-ear-works.htm), our ears are also mechanical transducers, which means that they also display inertia. There is a difference between the signal that goes into the ear, and the signal that reaches the cilia cells.
Mechanical inertia of the ear prevents us from distinguishing samples that are too close to each other.

The illustration to the right schematically shows the difference between those two signals, when the input is a short impulse. Because it is made from mechanical parts, the ear resists the movement the impulse is trying to force it to make: this explains the slowly increasing aspect of the response's first part. Then, without stimulus, the ear parts fall back to their original position. Naturally, if the two input impulses are very close to each other, the 'ear transducer' doesn't have time to complete its response to the first impulse before the second one arrives. As a consequence, the two impulses are merged. This exactly corresponds to what is described at the beginning of this article, when we were merging two sound objects into a single one: as far as we can tell, the two impulses are joined into a single one.
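For readers who like to see this numerically, here is a toy model: a single damped resonator standing in for the 'ear transducer'. The resonant frequency and Q are arbitrary, and it is in no sense a model of the real cochlea; it simply shows how the ring-downs from two closely spaced impulses overlap into something that looks like the response to a single event.

import numpy as np

sr = 44100
f0, q = 1000.0, 5.0                 # arbitrary resonance and Q for the toy 'ear transducer'
omega = 2 * np.pi * f0
decay = omega / (2 * q)

t = np.arange(int(0.05 * sr)) / sr                       # 50ms of ring-down
ringdown = np.exp(-decay * t) * np.sin(omega * t)        # impulse response of the resonator

def response_to_pair(gap_ms):
    # Superpose the resonator's responses to two impulses gap_ms apart.
    gap = int(sr * gap_ms / 1000)
    out = np.zeros(len(ringdown) + gap)
    out[:len(ringdown)] += ringdown
    out[gap:gap + len(ringdown)] += ringdown
    return out

# With a wide gap the two ring-downs are clearly separate; at 1ms they overlap so much
# that the result looks like the response to a single (louder) impulse.
for gap_ms in (20, 5, 1):
    r = response_to_pair(gap_ms)
    print(f"{gap_ms}ms gap: peak {np.max(np.abs(r)):.2f}")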

(Readers with a scientific background who specialise in psychoacoustics may be wondering what my proof is for the claim that the upper and lower time thresholds I keep referring to originate respectively from the brain and the ear. To the best of my abilities, I think this is a safe and reasonable assertion, but I can't prove it. Still, in a comparable field, it has been proven that persistence of vision is eye‑centred, whereas perception of movement is brain‑centred. I'm eager to see similar research concerning hearing.)

Published April 2011

Thursday, October 19, 2017

Q. Can I use an SM58 as a kick-drum mic?

By Mike Senior

I'll be doing a session with lots of mics and I'm going to be running out of gear choices without hiring, begging or stealing! For the kit, I don't really have all the right mics, so will need to compromise. Is it wise to use a Shure SM58 on kick drum? What can I expect?
The SM58 is better known as a vocal, guitar and snare mic than anything else — but can it be pressed into service as a kick-drum mic?

If you have to use a kick‑drum close‑mic that lacks low end, the neatest mix fix is usually to employ some kind of sample‑triggering plug‑in to supplement the sound, such as Wavemachine Labs' Drumagog, SPL's DrumXchanger or Slate Digital's Trigger.

Via SOS web site

SOS contributor Mike Senior replies: The first thing to say is that, although this mic (and, indeed, its SM57 cousin) is much better known for vocal, guitar and snare miking, there is also a good deal to recommend it for kick‑drum applications: its physical ruggedness; its ability to deal with high SPLs; and its presence-frequency emphasis, which can, in many situations, help the drum 'click' to cut through the mix, even when it's played back on small speakers. The biggest potential problem will be the low‑frequency response, which has been tailored to compensate for proximity effect in close‑miking situations and so falls off pretty steeply below 100Hz. However, there are several reasons why this needn't actually be a disaster in practice.
The first reason is that your microphone placement may well compensate for this somewhat, especially if you're planning to use the mic inside the casing of the drum, where small changes in positioning can make an enormous difference to the amount of captured low end. It's also worth bearing in mind that lots of low end may not actually be very desirable at all, especially if the song you happen to be recording features detailed kick‑drum patterns that could lose definition in the presence of bloated lows. I often find myself filtering out sub‑bass frequencies at mixdown, in fact, as this can make the drum feel a lot tighter, as well as leaving more mix headroom for the bass part.

However, even if you do get an undesirably lightweight kick‑drum close‑mic sound, it's comparatively easy to supplement that at the mix: this is usually one of the simpler mix salvage tasks you're likely to encounter, in fact. One approach is to create some kind of low‑frequency synth tone (typically a sine wave, but it might be something more complex if you need more low‑end support) and then gate that in time with the kick‑drum hits. You can do this in most DAW systems now, using the built‑in dynamics side‑chaining system. I've done this in the past, but I tend to prefer the other common tactic: triggering a sample alongside the live kick‑drum using a sample‑triggering program (see our feature in last month's issue). There are now loads of these on the market, including the examples shown in the screens above.
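As a rough illustration of the gated-sine approach (not of how Drumagog, DrumXchanger or Trigger actually work), the sketch below detects hits on a close-mic track with a simple level threshold and tucks a decaying 50Hz burst under each one. The file name, threshold and blend level are placeholders to be adjusted by ear, and a mono track is assumed.

import numpy as np
from scipy.io import wavfile

sr, kick = wavfile.read("kick_close_mic.wav")    # placeholder: a mono close-mic track
kick = kick.astype(np.float32)
kick /= np.max(np.abs(kick))

# Crude hit detection: flag the points where a smoothed level estimate crosses a threshold
env = np.convolve(np.abs(kick), np.ones(64) / 64, mode="same")
threshold = 0.3                                   # arbitrary; adjust for your material
hits = np.flatnonzero((env[1:] > threshold) & (env[:-1] <= threshold)) + 1

# A decaying 50Hz sine burst to tuck under each hit
burst_len = int(0.15 * sr)
tb = np.arange(burst_len) / sr
burst = np.sin(2 * np.pi * 50 * tb) * np.exp(-tb / 0.05)

sub = np.zeros_like(kick)
for h in hits:
    end = min(h + burst_len, len(sub))
    sub[h:end] += burst[:end - h]

out = kick + 0.5 * sub                            # blend to taste
wavfile.write("kick_with_sub.wav", sr, (out / np.max(np.abs(out))).astype(np.float32))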


Published April 2011

Wednesday, October 18, 2017

Q. What are the characteristics of vintage mics?

By Hugh Robjohns

I've been browsing a vintage microphone site and it got me thinking: what kind of characteristics are actually offered by vintage mics? Can the same sound be achieved with modern mics and EQ? Isn't most of the 'vintage sound' due to tape and valves rather than mics?
The sought-after sound of the classic vintage mics is partly down to the fact that microphones used in professional studios many years ago would have been of particularly high quality to start with — and quality tends to age well.

Via SOS web site

SOS Technical Editor Hugh Robjohns replies: A good vintage capacitor mic sounds much the same as a good modern equivalent, and the same goes for ribbons and moving coils. Having said that, there has been a tendency over the last decade or two to make modern mics sound brighter, partly because the technology has improved to allow that, and partly because of aural fashion.

Also, professional mics that are now considered vintage were usually pretty expensive in their day — studios and broadcasters bought very high‑quality products — and that high‑end quality generally persists despite the age of the microphones.

Most of the vintage mics you'll find on those kinds of sites, though, are either valve capacitor mics or ribbons, and they both have inherent characteristics of their own that a lot of people revere. Ribbons have a delightfully smooth and natural top end, while high‑quality valve capacitor mics often have mid‑range clarity and low‑end warmth. These qualities can still be found in some modern equivalents if you choose carefully.

Some of the vintage character is certainly attributable to recording on tape, replaying from vinyl, and the use of valves and transformers. But some is also down to the construction of the microphone capsules and the materials used, not all of which are still available in commercial products today.


Published January 2011

Monday, October 16, 2017

Q. If speakers have to be 'anchored', why don't mics?

By Hugh Robjohns & Mike Senior

As I understand it, loudspeakers create sound and momentum, which needs to be absorbed in order for the sound quality to be accurate, so we ensure they are braced or fixed to their stands and not wobbling about too much. So surely a mic diaphragm, which is moved by incoming sound, will less accurately represent the sound if the mic casing is not sufficiently anchored. Given that we hang these things from cables, or put them in elastic shockmounts, can you explain to me why this principle doesn't apply?
Is it just to do with acceptable tolerances or is it a trade‑off between picking up vibrations from the stand and capturing the intended sound?

Paul Hammond, via email

SOS Technical Editor Hugh Robjohns replies: In a perfect world, both the loudspeaker and the microphone would be held rigidly in space to deliver optimal performance. However, we don't live in a perfect world. Sometimes a shelf is the most appropriate position for a speaker, but the inevitable down side, then, is that the vibrations inherently generated by the speaker's drive units wobbling back and forth will set up sympathetic resonances and rattles in the shelf, adding unwanted acoustic contributions to the direct sound from the speaker, and thus messing up the sound.
 
We 'decouple' speakers with foam to prevent annoying low‑end frequencies leaving the speakers from reaching the surface they sit on. In the case of mics, we want to stop problem frequencies from reaching them, so we support them in shockmounts.

 
The obvious solution is, therefore, to 'decouple' the speaker from the shelf with some kind of damped mass‑spring arrangement optimised to prevent the most troubling and annoying frequencies (generally the bottom end) from reaching the shelf. This is often achieved, in practice, using a foam pad or similar.

With microphones, we are trying to control energy going the other way. We want to stop mechanical vibrations from reaching the mic, whereas we were trying to stop mechanical vibrations leaving the speaker.

Again, in a perfect world the mic would be held rigidly in space, using some kind of tripod, much like the ones photographers use for their cameras. However, in practice, we tend to place mics at the ends of long, undamped boom arms on relatively floppy mic stands which are, themselves, placed on objects that pick up mechanical vibrations (foot tapping, perhaps) and then pass them along the metalwork straight to the mic.

The obvious result is that the mic body moves in space, and in so doing forces the diaphragm back and forth through the air. This results in a varying air pressure impinging on the diaphragm that the mic can't differentiate from the wanted sound waves coming through the air, and so the mic indirectly captures the 'sound' of its physical movement as well as the wanted music.

The solution is to support the mic in a well‑designed shockmount so that the troublesome (low end, again) vibrations that travel up through the mic stand are trapped by another damped mass‑spring arrangement and thus are prevented from reaching the mic. If the shockmount works well, the mic stays still while the stand wobbles about around it, much like the interior of a car moving smoothly while the wheels below are crashing in and out of potholes!

The only potential problem with the microphone shockmount is that it can easily be bypassed by the microphone cable. If the cable is relatively stiff and is wrapped around the mic stand, the vibrations can travel along the mic cable and reach the mic that way, neatly circumventing the shockmount. The solution is to use a very lightweight cable from the mic to the stand, properly secured at the stand to trap unwanted vibrations.


Published February 2011

Friday, October 13, 2017

Q. Where should I put my overhead mics?

By Hugh Robjohns
When recording drums, I really want to get the kick, snare and hi‑hat in the middle of the image, but with a wide spread of cymbals. The snare is placed off to the left of the kick (from the drummer's point of view). I know I need to set my drum overhead mics so that there are no phasing issues with the kick and snare mics, but how do I know where to point the OH mics? For example, if I have two cardioid-pattern mics, should they be pointing straight down, at the snare, or somewhere between the kick and snare — or somewhere else entirely?

Adrian Cairns via email

SOS Technical Editor Hugh Robjohns replies: This is an interesting one because what you are trying to do is distort the stereo imaging of the recording, compared with the reality of the kit setup. And the only way you can do that is by maximising the separation of what each mic hears. That's easy enough with the kick, snare and hi‑hat mics because of their proximity to the sources and the effectiveness of bracketing EQ. The overheads, however, remain more of an issue, because they are naturally going to pick up significant spill from the snare and hi‑hat (you can use bracketing EQ to minimise the kick drum spill, of course).

To achieve your desire of keeping the snare and hi‑hat central in the image you will have to ensure that the overhead mics are equally spaced from those two sources, so that the level and time of arrival of snare and hi‑hat sounds are equal in both mics. With that as a primary requirement, you can then experiment with moving the mics (and/or cymbals) around to achieve the required spread of cymbal sound. Angling the mics, to assist with the rejection of as much snare and hat spill as possible while capturing the wanted cymbals, is also a useful tool, providing you maintain the equal distance so that whatever spill is captured remains central in the stereo image.
To get a particular section of your drum kit central in the stereo image, it is important to set up your overhead mics such that they are equidistant from the relevant sources.
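If you prefer to sanity-check the geometry with numbers rather than a tape measure, a few lines of Python will do it. The positions below are invented purely for illustration; substitute measurements from your own kit and mic placement.

import numpy as np

c = 343.0    # speed of sound in m/s

# Hypothetical positions in metres (x, y, z), invented for illustration only
snare = np.array([-0.20, 0.30, 0.60])
hihat = np.array([-0.45, 0.55, 0.75])
overhead_left = np.array([-0.60, 0.40, 1.60])
overhead_right = np.array([0.55, 0.40, 1.60])

def arrival_ms(source, mic):
    # Time of flight from source to mic in milliseconds
    return 1000 * np.linalg.norm(source - mic) / c

for name, src in (("snare", snare), ("hi-hat", hihat)):
    left = arrival_ms(src, overhead_left)
    right = arrival_ms(src, overhead_right)
    print(f"{name}: L {left:.2f}ms, R {right:.2f}ms, skew {abs(left - right):.2f}ms")

# Aim for a near-zero skew on both sources: even 0.5ms (roughly 17cm of extra path)
# is enough to pull the snare or hi-hat noticeably off centre in the overheads.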

There are also some less conventional techniques you might like to consider, using fig‑8 mics where you can aim the deep null to minimise snare and hat pickup in a useful way.


Published March 2011

Saturday, October 7, 2017

Q. Where should I place my monitors in a small room?

By Paul White

I recently built my own home studio by converting an old garage into a well‑isolated music room of 410 x 215 x 275cm. The isolation is great, but I'm now moving on to phase two — acoustics — and bass is a problem, especially on the notes of A, B‑flat and B, which are kind of booming.
So I am wondering how to position my Dynaudio BM6As? At first I put them along the short wall, but a lot of bass was built up, probably because of the proximity of the corners. I've already tried to put the speakers backwards, but noticed no change.

I've now got them along the long wall, which I think sounds more balanced, even though there's still some resonance on certain notes. Also, this tends to differ a lot depending on whether I sit in the exact 'sweet spot' or not. The further forward I go with my head, the more bass I get; the further back I go, the less bass I get.
In your books and in Sound On Sound, I've seen you advocate placing speakers on both the shortest wall, and the longest wall, depending on the room. So, what would you recommend for a room of my size and dimensions? Also, are the BM6As too much for my room?

Paul Stanhope via email

SOS Editor In Chief Paul White replies: In large studio rooms, which include many commercial studios, putting the speakers along the longest wall is quite common and has the benefit of getting those reflective side walls further away. However, in the smaller rooms many of us have to deal with, it is invariably best to have the speakers facing down the longest axis of the room. If you work across the room, the reflective wall behind you is too close and the physical size of the desk means you're almost certainly sitting mid‑way between the wall in front and the wall behind, which causes a big bass cancellation in the exact centre and, as you've noticed, causes the bass end to change if you move your position even slightly. In a room the size of yours, working lengthways will give the most consistent results. Your room is a slightly unfortunate size for bass response as the length is almost twice the width, so any resonant modes will tend to congregate at the same frequencies.
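A quick back-of-envelope check bears this out. The sketch below lists the first few axial mode frequencies (f = n x c / 2L) for the room dimensions given in the question; real rooms also have tangential and oblique modes, so treat it as a rough guide only.

import numpy as np

c = 343.0                                                  # speed of sound in m/s
dims = {"length": 4.10, "width": 2.15, "height": 2.75}     # room from the question, metres

# First three axial mode frequencies for each dimension: f = n * c / (2 * L)
for name, length in dims.items():
    modes = [round(n * c / (2 * length), 1) for n in (1, 2, 3)]
    print(f"{name} ({length}m): {modes} Hz")

# length: [41.8, 83.7, 125.5] Hz; width: [79.8, 159.5, 239.3] Hz; height: [62.4, 124.7, 187.1] Hz
# Because the length is almost exactly twice the width, the second length mode (~84Hz)
# lands right next to the first width mode (~80Hz), and the third length mode (~126Hz)
# sits beside the second height mode (~125Hz), close to the note B (123.5Hz). The first
# height mode (~62Hz) also falls near B-flat/B an octave lower, which fits the boomy
# notes described in the question.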
In a small room such as this, which is about twice as long as it is wide, it's usually best to position monitors of this size along the shortest wall. Working the other way — across the room — would create a bass cancellation in the centre of the room, where you'll most likely be sitting. Moving around even slightly would create variable results, as the space is so small. Positioning them as shown in the bottom image will give more consistent results, though you will still need to treat the room accordingly.

You can often change the bass behaviour by moving the speakers forward or backwards slightly, but try to keep them out of the corners, as that just adds more unevenness to the bass end. Corner bass traps of the type you're making may help, but if they don't do enough, you could try one of the automatic EQ systems designed for improving monitoring. I don't normally like to EQ monitors but, in difficult situations, using EQ to cut only the boomy frequencies can really help.

As for your monitors, the BM6As should be fine in that room. Just make sure they're perched on something solid, as standing them directly on a desk or shelf can also cause bass resonances. Either solid metal stands or foam speaker pads with something solid on top work best and can really tighten up the bass end. You can buy the Primacoustic or Silent Peaks pads, which have a steel plate on top, use Auralex MoPads or similar with a heavy floor tile stuck on top, or make your own from furniture foam with ceramic floor tiles or granite table mats stuck on top. A layer of non‑slip matting under the speakers will keep them in place.

For the mid‑range, foam or mineral wool absorbers placed at the mirror points in the usual way should be adequate, but try to put something on the rear wall that will help to scatter the sound, such as shelving or unused gear.


Published March 2011

Thursday, October 5, 2017

Q. How should I record an upright piano?

I have a pretty basic recording setup and, up until now, have just been making vocal and guitar recordings using an Audio‑Technica AT2035 and an Edirol FA66 audio interface with Reaper. However, I've been playing the piano a lot lately and would like to incorporate that. I have access to an old upright that's in the corner of my mum's living room. How can I achieve the best recording of the piano? Will I need different equipment?

Fiona McKay, via email

SOS Editor In Chief Paul White replies: There are many different ways to mic the upright piano, but in a domestic room a pair of cardioid capacitor mics would probably be the best option, as they would exclude much of the room reflection that might otherwise adversely colour the sound. Aim each mic at an imaginary point about a quarter of the piano's width in from each end, as that helps keep the string balance even. If the piano sounds good to the player, you can use a spaced pair of mics either side of the player's head, but it is also common practice to open the lid and, often, to remove the upper front cover above the keyboard as well. With the strings exposed in this way, you have more options to position the spaced pair either in front of or above the instrument, and I'd go for a 600 to 800 mm spacing between the mics, adjusting the mic distances as necessary to get an even level balance between the bass and treble strings.

If a piano sounds good to the player, it's worth trying the recording from just either side of their position, placing the microphones 600 to 800 mm apart. However, it's also common practice to open the lid of the piano and place the mics above the exposed strings at that same distance apart.

If you're lucky enough to have a great‑sounding room, you can increase the mic distance to let in more room sound or switch to omnis. But in a typical domestic room I'd be inclined to start with the mics around that 600 to 800 mm distance apart. Also listen out for excessive pedal noise on your recording and, if necessary, wrap some cloth around the pedals to damp the sound.

SOS contributor Mike Senior explored this subject in some detail back in April of 2009. It's probably worth going to /sos/apr09/articles/uprightpianos.htm and giving it a read.



Published October 2010