Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We do customized service.

Tuesday, February 27, 2018

Q. How do I mike up woodwind instruments?

By Hugh Robjohns

I'm going to be adding some woodwind parts to a track, recording flute, clarinet, alto and tenor saxophones one at a time. Can you offer any advice on how to mike them up?

Martin Brady
Berlin Woodwinds' flute ensemble. 

Technical Editor Hugh Robjohns replies: Unlike brass instruments, which emit sound solely from the bell and are therefore highly directional, the woodwind family emit sound from the entire length of the instrument, as the player covers or uncovers the finger holes, effectively lengthening or shortening the vibrating column of air within the instrument and changing the pitch of the note being played. As a general rule, therefore, you need to place the mic in such a way that it can 'see' the whole thing. This will usually mean a position about 18 inches away and about halfway down the length of the instrument. As always though, experiment and listen to find the optimal microphone position and angle for that combination of instrument, player and room. A condenser mic should give you the best results.

So for the clarinet, start about 18 inches away and at a height which puts the microphone about half to two-thirds of the way down the body of the clarinet. Try to stop the musician from swaying about, and adjust the distance, height and angle of the mic to optimise the tone and body of clarinet and minimise the key action and wind noises.

The flute presents a slightly different case, as it is held horizontally rather than vertically. Start by positioning the mic level with the flute's horizontal plane, halfway along its length and 18 inches away. Classical flute playing is often recorded with the mic placed slightly above the horizontal and at a distance of three or more feet away, while for more modern styles, the mic is sometimes placed less than a foot away for a more breathy, jazzy feel. Again, you should experiment to find the sound you're looking for. If you're recording in a small room with poor acoustics, putting the mic more than two feet away may mean you pick up more of the room sound than you want.

For the saxes, try a large-diaphragm condenser placed about 18 inches away and 'looking' roughly just below the halfway point of the body of the sax. You could also use a large dynamic mic for a slightly fuller, softer sound. Angling the mic downwards towards the upper rim of the bell can help to reduce how much key noise is picked up — some consider that noise an intrinsic part of the instrument's sound, while others find it unsatisfactory.


Published November 2003

Saturday, February 24, 2018

Q. What does diatonic mean?

By Len Sasso

I know that the white keys on a keyboard form a diatonic scale, but what does diatonic really mean?

Rob Fowler
Finger on piano keyboard. 

SOS Contributor Len Sasso replies: To understand the meaning of diatonic, it helps to think of a scale not as a collection of notes, but rather as a series of intervals. The definition of a diatonic scale is that there are five whole-tone and two semitone intervals in the series and that the semitones must always be separated by at least two whole-tones. Using '2' to symbolize the whole-tone steps and '1' for the semitone steps, the major diatonic scale corresponds to the interval series 2212221. No matter what note you start on, following this prescription yields a major diatonic scale — the white keys starting on C is one example. It turns out that all possible diatonic scales are constructed by starting somewhere in the major diatonic scale and continuing until you reach the same note you started on. Those are generally referred to as the church modes: Dorian for 2122212, Phrygian for 1222122, Lydian for 2221221, and so on.
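The rotation described above is simple enough to sketch in a few lines of Python (an illustrative sketch; the names MAJOR and mode are arbitrary, not standard music-library functions):

```python
# Interval series of the major diatonic scale: 2 = whole tone, 1 = semitone
MAJOR = [2, 2, 1, 2, 2, 2, 1]

def mode(series, degree):
    """Start the interval series on a different scale degree by rotating it."""
    return series[degree:] + series[:degree]

print(mode(MAJOR, 1))  # Dorian:   [2, 1, 2, 2, 2, 1, 2]
print(mode(MAJOR, 2))  # Phrygian: [1, 2, 2, 2, 1, 2, 2]
print(mode(MAJOR, 3))  # Lydian:   [2, 2, 2, 1, 2, 2, 1]
```

Every rotation sums to 12 semitones (one octave) and preserves the rule that the two semitone steps stay at least two whole tones apart.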

While the preceding definition is correct and functionally useful, it might leave you a little cold, as it does nothing to explain why those intervals are used or why the seven notes in a diatonic scale are chosen over the other notes in the 12-tone equal-tempered scale.

For reasons deriving from the physics and maths of sound, the strongest harmonic relationship aside from the octave is the perfect fifth, which makes G the closest relative of C, for example. Since C stands in the same relationship to F as G does to C, it makes sense that a scale centered around C should contain both G (called the dominant) and F (called the subdominant). The next closest harmonic interval is the major third. Together, the root, major third, and perfect fifth constitute a major triad, and it's not too big a stretch to imagine that you might want to construct a major triad on the three notes C, F, and G. Do that and you have the seven notes in the C diatonic scale.
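That construction is easy to verify by counting pitches in semitones from C (a minimal Python sketch; the function name is arbitrary):

```python
def major_triad(root):
    """Pitch classes of a major triad: root, major third (+4 semitones),
    perfect fifth (+7 semitones), folded into one octave."""
    return {(root + i) % 12 for i in (0, 4, 7)}

C, F, G = 0, 5, 7  # semitone offsets from C
scale = sorted(major_triad(F) | major_triad(C) | major_triad(G))
print(scale)  # [0, 2, 4, 5, 7, 9, 11] -> C D E F G A B
```

The union of the three triads yields exactly the seven pitch classes of the C major diatonic scale, as the text describes.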

There's still the question of why there are five other notes in the 12-tone equal-tempered scale, and the answer contains a hidden but important compromise. You can make music, which is naturally called diatonic music, with just the seven notes of the diatonic scale. And if you did that, they would in fact be slightly different notes from the ones you find in the equal-tempered scale. If you want to expand the system to accommodate diatonic scales in other keys, one natural way is to iterate the process of adding perfect fifths. This produces what is commonly called the 'cycle of fifths', but is actually a spiral of fifths that never really comes full circle. But if you make the perfect fifths just slightly flat, they do come full circle after 12 steps. Miraculously, you also wind up with notes that are close to the major thirds — they're a little sharp and a little more out of tune than the fifths, but still usable.
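The arithmetic behind the spiral is easy to check (a short Python sketch; variable names are arbitrary):

```python
import math

# Stack twelve pure 3:2 fifths, then fold back down seven octaves:
pure_cycle = (3 / 2) ** 12 / 2 ** 7            # ~1.01364: the spiral misses closing
comma_cents = 1200 * math.log2(pure_cycle)     # ~23.5 cents (the Pythagorean comma)

# Flatten each fifth slightly, to 2**(7/12) (~1.49831 instead of 1.5):
tempered_cycle = (2 ** (7 / 12)) ** 12 / 2 ** 7  # 1.0: the circle now closes

# The equal-tempered major third vs the pure 5:4 third -- a little sharp:
third_error_cents = 1200 * math.log2(2 ** (4 / 12) / (5 / 4))  # ~ +13.7 cents
```

The twelve pure fifths overshoot by about a quarter of a semitone, and the tempered thirds come out roughly 14 cents sharp: "a little more out of tune than the fifths, but still usable", exactly as described.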

This compromise gives us the 12-tone equal-tempered scale (equal-tempered meaning all the intervals are the same). Relative to C, the extra five notes turn out to be where you find the black keys on the piano keyboard, and that's why the intervallic definition we started with works.


Published September 2003

Thursday, February 22, 2018

Q. Should I believe my meters?

By Hugh Robjohns
I read recently that the level meters in most DAWs don't give a true indication of when audio is peaking, and that audio which looks like it's below 0dBFS may actually be peaking above it, causing distortion when it's played back. Is this true? I want my mixes to be as loud as possible and use compression to push them as 'hot' as I can. How do I tell when and if these invisible overloads are occurring, and how do I avoid them?

SOS Forum Post

Technical Editor Hugh Robjohns replies: The problem you are referring to is a very real and widely recognised one. Simple digital meters register the amplitude of the individual samples within the digital domain and not the waveform which is reconstructed from them by the D-A converter. Even if the samples stay just beneath 0dBFS, the reconstructed waveform, which is, after all, a smooth curve, is likely to exceed full-scale at certain points, potentially causing overmodulation and digital distortion.

Top: A perfectly legal digital signal with no samples higher than 0dBFS. However, this signal will overmodulate a typical oversampling digital filter in a D-A converter.
Bottom: An oversampling meter will reveal the overload.

This overloading generally happens in the integrated digital filters employed in most consumer and budget D-A converter designs. The state-of-the-art converters used in professional environments tend to be far less prone to this kind of problem, and consequently, a mastering engineer may not be aware of a problem which is glaringly obvious when the track is replayed over cheaper D-As. The problem is likely to be worst in heavily compressed material.
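You can demonstrate these inter-sample overs numerically. The following Python sketch (using numpy; an illustration, not a test of any particular converter) builds a perfectly 'legal' signal whose samples never exceed 0dBFS, then reconstructs the underlying waveform by bandlimited (sinc) interpolation:

```python
import numpy as np

# A sine at exactly fs/4, phase-shifted 45 degrees, lands every sample
# at about 0.707 of the waveform's true peak.
n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)
x /= np.abs(x).max()  # normalise so the *samples* peak at exactly 0dBFS

# Bandlimited reconstruction on a 16x-oversampled grid, evaluated away
# from the block edges to avoid truncation effects.
t = np.arange(16, 48, 1 / 16)
recon = np.array([np.sum(x * np.sinc(ti - n)) for ti in t])

sample_peak_db = 20 * np.log10(np.abs(x).max())    # 0.0 dBFS: looks 'legal'
true_peak_db = 20 * np.log10(np.abs(recon).max())  # ~ +3 dBFS: overmodulation
```

A sample-based meter reads 0dBFS, while the reconstructed curve peaks roughly 3dB over full scale, which is exactly what an oversampling meter is designed to catch.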

It's important to understand what the different types of meter are actually measuring. VU meters read averaged signal levels and don't give any indication of peak values whatsoever. The VU meter was designed to provide a crude indication of perceived volume (hence 'volume units'), originally in telecommunications circuits, and so served its original purpose perfectly well. It was only when the VU meter was adopted by the recording industry that its limitations became significant.

Subsequently, the PPM or peak programme meter was developed. This has complex analogue circuitry designed to register peak signal levels, so that the sound operator can better control the peak modulation of recordings and radio transmissions. However, the international standards defining the various versions of PPM all include a short integration period of between 5 and 10ms. This means that, in fact, the meter deliberately ignores short transients. True peak levels are typically 4 to 6dB higher than a standard PPM would indicate. This deliberate 'fiddling' with the meter's accuracy was done to optimise the modulation of analogue transmitters and recorders, safe in the knowledge that the short-term harmonic distortion caused by a small amount of overmodulation of analogue systems was inaudible to the majority of listeners.

Now we come to digital meters. These have to show true peak levels because any overloads in the digital domain cause aliasing distortions — distortions which are anything but harmonically pleasing and extremely audible. However, the inherent difficulty in achieving true peak readings from raw sample amplitudes, as described above, is one reason why it is advisable to engineer in a degree of headroom when working in the digital domain. Oversampling digital meters, which are far more accurate in terms of displaying the true peak levels, have been available in professional systems for a long time, and Trillium Lane Labs have recently produced an oversampling meter plug-in for Pro Tools TDM systems running on Mac OS X, called Master Meter.

You can circumvent the problems of inaccurate metering and the resulting potential for overmodulation by working at 96kHz. Even simple sample-based metering at this sample rate is essentially oversampled as far as the bulk of energy in the audible frequency range is concerned.

But the simplest solution, as ever, is to turn away from the notion that a track has to be louder than loud, and to leave a small but credible headroom margin. If you really want to overcompress particular genres of music, that's fine, but remember to leave a decent amount of headroom. There's really no need for recordings to hit 0dBFS, nor for recording musicians to misuse the digital format, and CD in particular. If the end user wants the music louder, there is always the volume control on the hi-fi!



Published October 2003

Wednesday, February 21, 2018

Q. What is optical compression?

By Paul White
Focusrite Trak Master.

The Focusrite Trak Master and Behringer Composer Pro are two affordable compressors which use optical gain control elements.

The Samson S*Com, however, uses a VCA.

Lately, there seem to be numerous affordable hardware compressors on the market, and I've noticed that many of them (the Platinum Focusrites and the Joemeeks, for example) are described as optical compressors. What's the difference between optical compressors and other types of compressor, such as VCA, FET and valve compressors? Are there any relative merits to these different types of compressor and are they suited to any particular applications?

Luke Ritchie

Editor In Chief Paul White replies: After microphones, nothing stirs up a group of music professionals so much as a discussion about compressors. Essentially, compressors are gain-riding devices that monitor the level of the incoming signal and then apply gain reduction in accordance with the user's control settings. Given this simplistic explanation, shouldn't all compressors sound exactly the same, in the same way that faders tend to?

Clearly compressors don't all sound the same, and there are a few good technical reasons why. Perhaps of less importance than some people might imagine is the gain control element itself, which can be a tube, a FET (field effect transistor), a VCA (voltage-controlled amplifier), an optical photocell arrangement (a light source and a light detector) or even a digital processor. Certainly all these devices add their own colorations and distortions to a greater or lesser extent, but what influences the sound most is the way the ratio and envelope characteristics deviate from theoretically perfect behaviour.

In an imaginary, perfect compressor, nothing happens to the signal until it reaches a threshold set by the user, after which a fixed compression ratio is applied. For example, if the compression ratio is set at 4:1, for every 4dB the signal rises above the threshold, the output rises by only 1dB. A modification to this is the soft-knee compressor where the ratio increases progressively as the signal approaches the threshold, the end result being a less assertive, less obtrusive form of compression.
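Expressed as arithmetic, the hard-knee curve of that imaginary perfect compressor looks like this (a minimal Python sketch; the function name and the -20dB threshold are arbitrary illustrative choices):

```python
def compress_db(level_db, threshold_db=-20.0, ratio=4.0):
    """Static hard-knee compression curve: below the threshold the signal
    passes untouched; above it, every `ratio` dB of input rise yields
    only 1 dB of output rise."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

print(compress_db(-30.0))  # -30.0: below threshold, untouched
print(compress_db(-12.0))  # -18.0: 8 dB over threshold becomes 2 dB over
```

A soft-knee design would replace the abrupt switch at the threshold with a ratio that increases progressively as the signal approaches it.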

In practice, however, many classic designs don't act like this perfect compressor, as their compression ratio may vary with the input signal level. For example, some compressors work like a perfect soft-knee device until the signal has risen some way above the threshold, then the compression ratio reduces so that those higher level signals are compressed to a lesser degree than signals just above the threshold. The reason for this change in ratio is simply that many early gain-reduction circuits don't behave linearly, especially those using optical circuitry as the variable gain element. The components themselves are non-linear so when, for example, you combine a non-linear light source with a non-linear light detector, the composite behaviour can be quite complex and unpredictable — however, history has buried those optical circuits that didn't sound good, so we're now left with those that happened to sound musical.

The other very important factor governing the sound of a compressor is the shape of the attack and release curves. While a modern VCA compressor can be made to behave in an almost theoretically perfect way with a constant ratio and predictable attack/release curves, many of the older designs had very strange attack and release characteristics, and, in the case of optical compressors, this was originally due to the relatively slow response of a light and photocell compared with a VCA.

For example, the now legendary Universal Audio 1176 combined a fairly fast attack time with a multi-stage release envelope. Conversely, the Teletronix's LA2A's rather primitive optical components resulted in a slower and quite non-linear attack combined with a release characteristic that slowed as the release progressed. Indeed, perhaps the reason the traditional opto compressor has so much character is that there are so many places in the circuitry that non-linearities can creep in.
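The sluggish, programme-dependent detection described above can be caricatured with a one-pole envelope follower (a deliberately simplified Python sketch, not a model of any particular unit; the coefficient values are arbitrary):

```python
def envelope_follower(levels, attack=0.05, release=0.005):
    """One-pole envelope follower. `attack` and `release` are per-sample
    smoothing coefficients (larger = faster). A slow attack lets the front
    of a transient sneak past the gain reduction, as in early opto designs."""
    env, out = 0.0, []
    for x in levels:
        coeff = attack if x > env else release
        env += coeff * (x - env)
        out.append(env)
    return out

# A sudden burst: the detector takes many samples to 'see' the full level,
# so the first instants of the transient pass with little gain reduction.
burst = [0.0] * 10 + [1.0] * 100
env = envelope_follower(burst)
```

Making `release` itself depend on `env` would mimic the LA2A-style release that slows as it progresses.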

Having said that, some modern optical compressors use specialised integrated circuits that combine the necessary LED light source (which has largely taken over from the filament lamps and electroluminescent devices used in early designs) and detector element in a single package, together with feedback circuitry to speed up the response time and linearise the gain-control performance. Indeed, some of these are so well behaved that they can sound almost like VCAs, but with clever design it should be possible to recreate the old sounds as well as the new using contemporary electronic devices, or imaginative software design come to that.

It's harder when it comes to saying what type of compressor is best for which job, but in very general terms, a well-designed VCA compressor will provide the most transparent gain reduction, which is ideal for controlling levels without changing the character too much. However, a compressor that allows high-level transients to sneak through with less compression can also sound kinder to material than one that controls transients too assertively, which is why some of the older, less linear designs sound good. That's not to say modern designs can't sound good too though — Drawmer pioneered the trick of leaking high frequencies past the compressor to maintain transient clarity while other manufacturers, such as Behringer, use built-in transient enhancers or resort to equally ingenious design tricks.

Optical compressors, especially those that don't use super-well-behaved integrated optical circuits (or those that use them imaginatively) usually impose more of their own character on the material being treated, making it sound larger than life. In this context, the compressor is as much an effect as a gain-control device, and such compressors are popular for treating vocals, drums and basses. The Joemeek and TFPro compressors fit this 'compression as an effect' category as they use discrete LEDs and photocells in a deliberately non-linear topology that's really a refinement of that used in some vintage designs.

Digital compressors and plug-ins can reproduce the characteristics of vintage classics, but only if the designers successfully identify those technical aspects of the original design that make it sound unique. If they don't, you end up with an approximation or caricature rather than a true emulation.



Published September 2003

Monday, February 19, 2018

Q. What do Solo, PFL and AFL do?

By Hugh Robjohns
PFL button on a mixer. 
The Solo, PFL and AFL options on well-specified mixers allow the engineer to hear what's happening at different points in the channel's signal path.

Please can you explain the difference between 'soloing' a channel and using the other buttons marked 'PFL' and 'AFL' to listen to it? They seem to do very similar but different things. Enlighten me!

Will Robinson

Technical Editor Hugh Robjohns replies: The PFL, AFL and Solo buttons found on the channel strips of professional mixing desks can be confusing if you're unfamiliar with their uses, not least because different manufacturers have different names for, and different ways of arranging these functions.

PFL stands for Pre-Fade Listen. It allows you to monitor the signal level of the channel in question at a point immediately prior to the channel fader, and will therefore include any EQ or dynamics that might have been applied on that channel. Thus when setting up a channel's input gain using PFL, it's important to bypass any EQ and dynamics processing, otherwise you won't know what the actual headroom is at the front end. On mono channels, PFL is mono. On stereo channels PFL should be stereo, but some cheap desks derive a mono PFL signal for both mono and stereo channels.

AFL, which stands for After-Fade Listen, is similar to PFL in function, but takes its signal from a point immediately after the channel fader, showing the level of the channel's contribution to the mix. AFL is also mono on mono channels.

Solo, more correctly known as Solo-in-Place (SIP), is an after-fade listen taken from after the pan control as well as the channel fader. It is therefore a stereo signal even on mono channels. The idea is to allow the monitoring of a channel signal when panned to its appropriate position in the stereo image. SIP is usually achieved by monitoring the main mix buss and muting all the channels other than the one you pressed the SIP button on. However, this means that you can't use SIP while mixing because it destroys the mix on the mix buss, muting aux channels as well as main channels. (PFL and AFL only affect the signal routed to the monitor outputs.) That's why SIP is often described as 'destructive solo monitoring'. Usually, you'll want to solo a channel and hear it with any associated effects returns, so selected channels can usually be made 'safe' from the SIP function, so that they continue to contribute to the mix when all the other channels are muted. A lot of desks have a single 'solo' button somewhere near the fader which can be configured to provide any or all of these functions.


Published September 2003

Friday, February 16, 2018

Q. Why is my vocal clipping?


By Mike Senior
The Dbx 386 hybrid valve/solid-state preamp features the Dbx Type IV A-D converter, supposedly impossible to clip...

I've been recording vocals using a Neumann TLM103 mic going through a Dbx 386 tube preamp, and using the Dbx's converters to send a digital signal into a Roland VS1680 multitracker. I understood the Dbx was virtually impossible to clip, but experience proves otherwise! Firstly, it's impossible to use the Dbx's 'Drive' tube emulation above its lowest setting without getting obvious red light peaking and distortion for any louder transients during a vocal take (I like to sing fairly close to the mic). Does this mean I'm not getting any tube warmth from the unit? Generally, due to this problem, I always use the 20dB pad which enables me to crank up the Drive dial a little, but not much. What is the purpose of its higher incremental notches if you can't really use them? Even with Drive set all the way down, and the digital metering on the output stage peaking between 12 and 16dBu but avoiding the red light district, there are still obvious frequencies in my voice which cut through the supposed soft limiting facilities of the Dbx type IV converters to produce distortion. Sometimes I have to do drop-ins of single vowels, vainly trying to grab a clean one at a comparable level to its neighbouring words. What am I doing wrong?

Phil Godfrey

Reviews Editor Mike Senior replies: I own a Dbx 376 and use it for all my vocal recording, and I'd suggest that you definitely don't want to be lighting that input Peak LED — that lights when the input is clipping, and clipping is quite a different thing to valve warmth. Given that your TLM103 has a fairly high output level of 21mV/Pa, if you're giving your performance a bit of welly close up to the mic then you may well find that you have to have the input gain all the way down.

I also work very close to the mic — like you, I have the Drive control all the way down for most of my louder numbers. This isn't a problem, though — you're still driving the valve, simply by dint of the raw level coming from the mic, it's just that you don't have to add any gain on the Drive control to do it. The valve 'sound' for recording purposes is very understated in quality equipment, and you don't need to try too hard to get the benefits of the valve — you'll get all the warmth on offer just by running the valve comfortably within its normal working range. You don't need to overdrive the valve, as you would in a guitar amp.

You also asked what use the upper notches of the control were if you always sang too loud for them. The reason for having them is so that low-output mics, such as dynamics and ribbons, can also be boosted into the optimum operating range for the valve. Think of the Drive control more like an input gain control, and that should clarify things a bit. I'd also be tempted to leave the Pad out unless it's absolutely necessary — it'll just be adding extra components into the signal path, and that's not necessarily desirable.

So, if you're setting up your Drive control right, there remains the question of the gain management in the rest of the chain. The first thing to realise is that it is possible to get nasty distortion out of the Dbx Type IV compression if you push it too hard, even if you don't theoretically get digital clipping. The best tactic, in my opinion, is to treat the converter just as you would any other and leave plenty of headroom. In this case, without compression, the majority of the signal will probably be hitting the -16dBFS mark, although this depends on your own performance dynamics. The most important thing is that you try to avoid making the -4dBFS light come on at all. Set the channel up while rehearsing so that only the -8dBFS light ever comes on. Because of the way in which the Type IV conversion process works, the moment the -4dBFS light comes on, the converter is effectively limiting the signal, so if (once you've set things up) you cook things a little hot in the middle of a take and the -4dBFS light comes on, you'll only be limiting the spikiest peaks.

Type IV is great at peak limiting, but that's all it should be used for — use a compressor to reduce the dynamic range if necessary. Your description of your metering levels ("the digital metering on the output stage peaking between 12 and 16dBu but avoiding the red light district") shows me that you're running the output too hot: the 12dBu and 16dBu lights correspond to the -8dBFS and -4dBFS lights when the meter is switched to read the digital level, so if these are coming on most of the time then you've strayed too far into the danger zone. Also, bear in mind that even the digital output metering in the Dbx 386 is analogue, so the real peaks in your audio signal will probably extend beyond the meter reading. And because of the Type IV process, the output meter will only hit the 0dBFS light if it's seriously abused, so just avoiding the red light does not necessarily guarantee clean audio.

If you're getting distortion through the Roland VS1680 even on unclipped material, double-check that Dbx's sample rate is set correctly and that you're clocking the VS1680 from it — if the Roland is set to run from its own internal master clock then you may encounter a variety of strange spits and pops.

When digital and analogue gear is used in the same system, setting up the gain sensibly throughout the recording chain can be a bit of a minefield. However, it's worth taking the time to get it right, because otherwise all your recordings will suffer. You certainly shouldn't have to be dropping in words to avoid clipping — that's something you should be doing for artistic reasons to get the best possible performance.


Published September 2003

Tuesday, February 13, 2018

Q. Why does 'Class' matter in an amplifier?

By Hugh Robjohns

I saw a mic preamp advertised as 'Class A' and 'Transformerless'. What do these terms mean and why exactly are they a good thing?

SOS Forum Post

Technical Editor Hugh Robjohns replies: The 'class' of an amplifier refers to the circuit topology used, and is independent of whether the circuit uses valves, transistors or FETs as the active devices.

Buzz Audio MA2.2 preamp.
In a Class-A circuit the output device is arranged to pass the entire audio waveform — both the upper and lower halves of the signal waveform. This provides the cleanest, most transparent sound, but the necessary biasing arrangements make this kind of circuit power-hungry, and it tends to generate a lot of heat as a result.

Focusrite ISA220 features Class-A circuitry.
A more efficient circuit design is the Class B, which uses two output devices, one to handle only the upper portion of the sound waveform and another to handle the lower half. The benefit is that only one device is working at any time, and when there is no input, both are switched off, allowing huge savings in power consumption and heat generation. The drawback is that at the zero crossover point between the positive and negative halves of the waveform, one device might have switched off before the other has come on, and that results in 'crossover distortion' — which isn't a good thing in high-quality audio circuits.
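The crossover dead band is easy to visualise numerically. Here's an idealised Python sketch (the 0.1 turn-on voltage is an arbitrary illustrative figure, not a real device parameter):

```python
import numpy as np

def class_b(x, v_on=0.1):
    """Idealised push-pull Class-B stage: each device conducts only once
    the signal exceeds its turn-on voltage `v_on`, leaving a dead band
    around zero -- the source of crossover distortion."""
    upper = np.maximum(x - v_on, 0.0)   # upper device: positive half only
    lower = np.minimum(x + v_on, 0.0)   # lower device: negative half only
    return upper + lower

t = np.linspace(0.0, 1.0, 1000, endpoint=False)
x = np.sin(2 * np.pi * 5 * t)
y = class_b(x)
# Wherever |x| < 0.1 the output is stuck at zero: a notch at every
# zero crossing of the sine wave.
```

Class-AB biasing keeps both devices conducting through the region around zero, which is what removes the notch at the cost of some extra quiescent power.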

The Buzz Audio MA2.2 (top), Focusrite ISA220 (middle) and TL Audio VP1 (above) all feature Class A circuitry, sacrificing efficiency for superior sound quality.

The compromise solution is a combination of both topologies (Class A and Class B), and it's called... Class AB. This also employs separate devices to handle the upper and lower portions of the sound waveform, but they are biased in such a way that both are operating when the signal is close to the zero crossover region, and thus crossover distortion is much less of a problem.
These basic circuit topologies can be employed in any amplifier design, whether it's a power amplifier to drive loudspeakers, a microphone preamplifier, or a line driving amplifier, as well as in discrete-component or integrated (IC) circuits. However, Class A remains the best choice for audio systems where the power consumption can be tolerated.

The term 'transformerless' refers to the absence of a transformer within the circuit. Transformers can be useful things in audio systems, providing 'galvanic' isolation between circuits and systems, or impedance-matching and the balancing (or unbalancing) of audio circuits, or even providing a 'free' voltage gain, depending on the application. However, transformers also have disadvantages, such as large size and weight in audio applications, and the introduction of large phase shifts, which can become audible under some circumstances and are therefore undesirable.

Many modern electronic circuits have been developed to replicate some of the desirable characteristics of transformers, without their associated disadvantages, and this is often championed as an overall advantage. Hence the 'transformerless' term is generally seen as a good thing, along with Class A. However, there are some circumstances where transformers still provide the best solution, and the inherent sonic qualities are often deliberately sought.


Published October 2003

Saturday, February 10, 2018

Q. Should I use my PC's ACPI mode?

By Martin Walker
MOTU 828 MkI Firewire audio interface.
I've just bought a PC laptop and a MOTU 828. I installed the software and have been using Emagic Logic 5.5. However, when I turn off the computer's Advanced Configuration Power Interface (ACPI) it no longer recognises that the 828 is there. This is a common PC/Windows tweak for making audio apps run more efficiently, so I was hoping that the 828, or any other Firewire hardware for that matter, would be able to run with ACPI turned off. Can anyone enlighten me on this subject?

SOS Forum Post

SOS PC Notes columnist Martin Walker replies: It's strange that your MOTU 828 is no longer recognised, since Plug and Play will still detect exactly the same set of devices when you boot your PC, including the host controllers for both your USB and Firewire ports. It's then up to Windows to detect the devices plugged into their ports, and this ought not to be any different when running under ACPI or Standard mode.

MOTU have reported some problems running their 828 interface with Dell Inspiron laptops, due to IRQ sharing between the Firewire and graphics, and this could possibly be cured by changing to Standard Mode, but there's a much wider issue here. While switching from ACPI to Standard Mode has solved quite a few problems for some musicians in the past (those with M-Audio soundcards have certainly benefitted), it shouldn't be used as a general-purpose cure-all, and particularly not with Windows XP, which, I suspect, is installed on your recently bought PC laptop.

Since Windows XP needs very little tweaking compared with the Windows 9x platform, I would recommend all PC musicians leave their machines running in ACPI mode on XP. I'd only suggest turning it off if there is some unresolved problem such as occasional audio clicks and pops that won't go away with any other OS tweaks, no matter how you set the soundcard's buffer size, or the inability to run with an audio latency lower than about 12ms without glitching.

Some modern motherboards now offer an Advanced Programmable Interrupt Controller (APIC), which provides 24 interrupts under Windows XP in ACPI mode rather than the 16 available in Standard mode, so if yours has this feature you should always stick to ACPI. Apparently it's also faster at task-switching, leaving a tiny amount more CPU for running applications. The latest hyperthreading processors also require ACPI to be enabled to use this technology, since each logical processor has its own local APIC. Moreover, laptops benefit from ACPI far more than desktop PCs, since it's integrated with various power-management features that extend battery life. If I were you, I'd revert to ACPI.


Published October 2003

Thursday, February 8, 2018

Q. What's the difference between a talk box and a vocoder?


By Craig Anderton
In addition to its built-in microphone, the Korg MS2000B's vocoder accepts external line inputs for both the carrier and modulator signals.

I've heard various 'talking instrument' effects which some people attribute to a processor called a vocoder, while others describe it as a 'talk box'. Are these the same devices? I've also seen references in some of Craig Anderton's articles about using vocoders to do 'drumcoding'. How is this different from vocoding, and does it produce talking instrument sounds?

James Hoskins

SOS Contributor Craig Anderton replies: A 'talk box' is an electromechanical device that produces talking instrument sounds. It was a popular effect in the '70s and was used by Peter Frampton, Joe Walsh and Stevie Wonder, amongst others. It works by amplifying the instrument you want to make 'talk' (often a guitar), and then sending the amplified signal to a horn-type driver, whose output goes to a short, flexible piece of tubing. This terminates in the performer's mouth, which is positioned close to a mic feeding a PA or other sound system. As the performer mouths words, the mouth acts like a mechanical filter for the acoustic signal coming in from the tube, and the mic picks up the resulting, filtered sound. Thanks to the recent upsurge of interest in vintage effects, several companies have begun producing talk boxes again, including Dunlop (the reissued Heil Talk Box) and Danelectro, whose Free Speech talk box doesn't require an external mic, processing the signal directly instead.

The vocoder, however, is an entirely different animal. The forerunner to today's vocoder was invented in the 1930s for telecommunications applications by an engineer named Homer Dudley; modern versions create 'talking instrument' effects through purely electronic means. A vocoder has two inputs: one for an instrument (the carrier input), and one for a microphone or other signal source (the modulator input, sometimes called the analysed input). Talking into the microphone superimposes vocal effects on whatever is plugged into the instrument input.

The principle of operation is that the microphone feeds several paralleled filters, each of which covers a narrow frequency band. This is electronically similar to a graphic equaliser. We need to separate the mic input into these different filter sections because in human speech, different sounds are associated with different parts of the frequency spectrum.

For example, an 'S' sound contains lots of high frequencies. So, when you speak an 'S' into the mic, the higher-frequency filters fed by the mic will have an output, while there will be no output from the lower-frequency filters. On the other hand, plosive sounds (such as 'P' and 'B') contain lots of low-frequency energy. Speaking one of these sounds into the microphone will give an output from the low-frequency filters. Vowel sounds produce outputs at the various mid-range filters.

But this is only half the picture. The instrument channel, like the mic channel, also splits into several different filters and these are tuned to the same frequencies as the filters used with the mic input. However, these filters include DCAs or VCAs (digitally controlled or voltage-controlled amplifiers) at their outputs. These amplifiers respond to the signals generated by the mic channel filters; more signal going through a particular mic channel filter raises the amp's gain.

Now consider what happens when you play a note into the instrument input while speaking into the mic input. If an output occurs from the mic's lowest-frequency filter, then that output controls the amplifier of the instrument's lowest filter, and allows the corresponding frequencies from the instrument input to pass. If an output occurs from the mic's highest-frequency filter, then that output controls the instrument input's highest-frequency filter, and passes any instrument signals present at that frequency.

As you speak, the various mic filters produce output signals that correspond to the energies present at different frequencies in your voice. By controlling a set of equivalent filters connected to the instrument, you superimpose a replica of the voice's energy patterns on to the sound of the instrument plugged into the instrument input. This produces accurate, intelligible vocal effects.
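The filter-bank-plus-amplifier scheme described above can be sketched in a few lines of Python. This is only a toy illustration: it uses a crude DFT-based band split in place of real analogue filters and envelope followers, and every function name here is invented for the example rather than taken from any real product.

```python
import math

def dft(x):
    """Naive discrete Fourier transform (fine for short test signals)."""
    N = len(x)
    return [sum(x[n] * complex(math.cos(-2 * math.pi * k * n / N),
                               math.sin(-2 * math.pi * k * n / N))
                for n in range(N))
            for k in range(N)]

def idft(X):
    """Inverse DFT, returning the real part of each sample."""
    N = len(X)
    return [sum(X[k] * complex(math.cos(2 * math.pi * k * n / N),
                               math.sin(2 * math.pi * k * n / N))
                for k in range(N)).real / N
            for n in range(N)]

def vocode(modulator, carrier, n_bands=8):
    """Toy channel vocoder: measure the modulator's energy in each
    frequency band, then scale the carrier band-by-band to match."""
    N = len(modulator)
    M, C = dft(modulator), dft(carrier)
    out = list(C)
    width = (N // 2) // n_bands
    for b in range(n_bands):
        lo, hi = b * width, (b + 1) * width
        # 'Envelope follower': average modulator magnitude in this band.
        env = sum(abs(M[k]) for k in range(lo, hi)) / width
        for k in range(lo, hi):
            out[k] = C[k] * env              # scale the carrier's band...
            if k:
                out[N - k] = C[N - k] * env  # ...and its mirror bin
    return idft(out)
```

Feeding a modulator with only low-frequency content through this sketch suppresses the carrier's high-frequency bands, just as speaking a plosive into a real vocoder's mic opens only the low-frequency channels.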

Vocoders can be used for much more than talking instrument effects. For example, you can play drums into the microphone input instead of voice, and use this to control a keyboard (I've called this 'drumcoding' in previous articles). When you hit the snare drum, that will activate some of the mid-range vocoder filters. Hitting the bass drum will activate the lower vocoder filters, and hitting the cymbals will cause responses in the upper frequency vocoder filters. So, the keyboard will be accented by the drums in a highly rhythmic way. This also works well for accenting bass and guitar parts with drums.

Note that for best results, the instrument signal should have plenty of harmonics, or the filters won't have much to work on.


Published October 2003

Wednesday, February 7, 2018

Q. What's the difference between morphing and crossfading?

By Len Sasso

Is there any real difference between morphing from one sound to another and crossfading? In many cases, the two sound very similar.

Anna Silman

SOS Contributor Len Sasso replies: Morphing and crossfading are really two entirely different processes and apply to different situations. Crossfading takes place between two audio files, typically non-destructively in a sequencing environment or destructively in a sample editor. The effect, of course, is that one sound fades out as the other fades in.

Crossfading between two different sounds.

Morphing takes place between two groups of settings for an audio device, either hardware or software. In that case, one sound also dissolves into another, but the intermediate sounds are not simply a mix of the starting and ending sounds.

If you have sequencing software and a synth plug-in that can be automated, here's an experiment to quickly convince yourself that there really is a difference.

Set up a basic oscillator and lowpass-filter patch without any envelope applied to the filter cutoff. Record rather long clips with the filter wide open, then with it relatively closed (but with the oscillator still audible). Now crossfade between the clips over a fairly long period. Next, use automation to slowly sweep the filter cutoff across the same range. Compare the crossfade with the automated filter sweep, which amounts to morphing between the open and closed states. Of course, morphing usually involves many more parameters, and the results are correspondingly more complex and interesting.
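The same point can be demonstrated numerically. In the Python sketch below, a one-pole lowpass stands in for the synth filter (the names are invented for the example): the midpoint of a crossfade between 'open' and 'closed' clips is compared with the output of the filter actually set halfway between the two states.

```python
import math

def one_pole_lowpass(x, a):
    """Simple one-pole lowpass: higher 'a' (0..1) means a more open filter."""
    y, out = 0.0, []
    for sample in x:
        y = a * sample + (1.0 - a) * y
        out.append(y)
    return out

# A test signal, recorded through the filter 'wide open' and 'relatively closed'.
signal = [math.sin(2 * math.pi * 0.1 * n) for n in range(200)]
open_clip = one_pole_lowpass(signal, 0.9)
closed_clip = one_pole_lowpass(signal, 0.1)

# Crossfade midpoint: an equal mix of the two recorded clips.
crossfade_mid = [0.5 * o + 0.5 * c for o, c in zip(open_clip, closed_clip)]

# Morph midpoint: the filter genuinely set halfway between the two states.
morph_mid = one_pole_lowpass(signal, 0.5)

difference = max(abs(a - b) for a, b in zip(crossfade_mid, morph_mid))
```

The two midpoints differ because the filter's response is not a linear function of its cutoff setting, which is exactly why a morph sounds different from a crossfade.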


Published September 2003

Monday, February 5, 2018

Q. What is optical compression?

By Paul White
Focusrite Trak Master.

The Focusrite Trak Master and Behringer Composer Pro are two affordable compressors which use optical gain control elements.

The Samson S*Com, however, uses a VCA.

Lately, there seem to be numerous affordable hardware compressors on the market, and I've noticed that many of them (the Platinum Focusrites and the Joemeeks, for example) are described as optical compressors. What's the difference between optical compressors and other types of compressor, such as VCA, FET and valve compressors? Are there any relative merits to these different types of compressor and are they suited to any particular applications?

Luke Ritchie

Editor In Chief Paul White replies: After microphones, nothing stirs up a group of music professionals so much as a discussion about compressors. Essentially, compressors are gain-riding devices that monitor the level of the incoming signal and then apply gain reduction in accordance with the user's control settings. Given this simplistic explanation, shouldn't all compressors sound exactly the same, in the same way that faders tend to?

Clearly compressors don't all sound the same, and there are a few good technical reasons why. Perhaps of less importance than some people might imagine is the gain control element itself, which can be a tube, a FET (field effect transistor), a VCA (voltage-controlled amplifier), an optical photocell arrangement (a light source and a light detector) or even a digital processor. Certainly all these devices add their own colorations and distortions to a greater or lesser extent, but what influences the sound most is the way the ratio and envelope characteristics deviate from theoretically perfect behaviour.

In an imaginary, perfect compressor, nothing happens to the signal until it reaches a threshold set by the user, after which a fixed compression ratio is applied. For example, if the compression ratio is set at 4:1, for every 4dB the signal rises above the threshold, the output rises by only 1dB. A modification to this is the soft-knee compressor where the ratio increases progressively as the signal approaches the threshold, the end result being a less assertive, less obtrusive form of compression.
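The arithmetic of that 4:1 example can be captured in a couple of lines. This sketch covers only the static hard-knee transfer curve (no attack or release behaviour), and the function name is invented for the illustration.

```python
def compressed_level_db(input_db, threshold_db=-20.0, ratio=4.0):
    """Static hard-knee compressor curve: below the threshold the signal
    passes unchanged; above it, each 'ratio' dB of input rise yields
    only 1dB of output rise."""
    if input_db <= threshold_db:
        return input_db
    return threshold_db + (input_db - threshold_db) / ratio
```

With a -20dB threshold and a 4:1 ratio, an input at -12dB (8dB over the threshold) comes out at -18dB (only 2dB over).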

In practice, however, many classic designs don't act like this perfect compressor, as their compression ratio may vary with the input signal level. For example, some compressors work like a perfect soft-knee device until the signal has risen some way above the threshold, after which the compression ratio reduces so that those higher-level signals are compressed to a lesser degree than signals just above the threshold. The reason for this change in ratio is simply that many early gain-reduction circuits don't behave linearly, especially those using optical circuitry as the variable gain element. The components themselves are non-linear, so when, for example, you combine a non-linear light source with a non-linear light detector, the composite behaviour can be quite complex and unpredictable. However, history has buried those optical circuits that didn't sound good, so we're now left with those that happened to sound musical.

The other very important factor governing the sound of a compressor is the shape of the attack and release curves. While a modern VCA compressor can be made to behave in an almost theoretically perfect way with a constant ratio and predictable attack/release curves, many of the older designs had very strange attack and release characteristics, and, in the case of optical compressors, this was originally due to the relatively slow response of a light and photocell compared with a VCA.

For example, the now legendary Universal Audio 1176 combined a fairly fast attack time with a multi-stage release envelope. Conversely, the Teletronix LA2A's rather primitive optical components resulted in a slower and quite non-linear attack combined with a release characteristic that slowed as the release progressed. Indeed, perhaps the reason the traditional opto compressor has so much character is that there are so many places in the circuitry where non-linearities can creep in.

Having said that, some modern optical compressors use specialised integrated circuits that combine the necessary LED light source (which has largely taken over from the filament lamps and electroluminescent devices used in early designs) and detector element in a single package, along with feedback circuitry to speed up the response time and linearise the gain-control performance. Indeed, some of these are so well behaved that they can sound almost like VCAs, but with clever design it should be possible to recreate the old sounds as well as the new using contemporary electronic devices, or imaginative software design come to that.

It's harder when it comes to saying what type of compressor is best for which job, but in very general terms, a well-designed VCA compressor will provide the most transparent gain reduction, which is ideal for controlling levels without changing the character too much. However, a compressor that allows high-level transients to sneak through with less compression can also sound kinder to material than one that controls transients too assertively, which is why some of the older, less linear designs sound good. That's not to say modern designs can't sound good too though — Drawmer pioneered the trick of leaking high frequencies past the compressor to maintain transient clarity while other manufacturers, such as Behringer, use built-in transient enhancers or resort to equally ingenious design tricks.

Optical compressors, especially those that don't use super-well-behaved integrated optical circuits (or those that use them imaginatively), usually impose more of their own character on the material being treated, making it sound larger than life. In this context, the compressor is as much an effect as a gain-control device, and such compressors are popular for treating vocals, drums and basses. The Joemeek and TFPro compressors fit this 'compression as an effect' category, as they use discrete LEDs and photocells in a deliberately non-linear topology that's really a refinement of that used in some vintage designs.

Digital compressors and plug-ins can reproduce the characteristics of vintage classics, but only if the designers successfully identify those technical aspects of the original design that make it sound unique. If they don't, you end up with an approximation or caricature rather than a true emulation.


Published September 2003

Friday, February 2, 2018

Q. Do I need balanced patchbays?

By Mike Senior

I am currently setting up a home studio, which I'm hoping to eventually turn into a professional facility, based around a Soundtracs Topaz desk, three Egosys Wamirack soundcards and a Pentium 4 PC, with numerous synths, samplers, effects and other outboard gear. I'm now looking to wire everything together using patchbays. Bearing in mind that my console does not accommodate balanced outputs and insert points (the only balanced connections on the console are at the input stages of all channels and the effects returns), can I use unbalanced patchbays, thereby simplifying the patch lead requirements? If you are going to suggest a balanced patchbay setup, could you describe where to connect and disconnect the ground/screen connections to avoid ground loops?

SOS Forum post
Installing balanced patchbays (as opposed to unbalanced ones) makes dealing with hum much, much easier.

Reviews Editor Mike Senior replies: It sounds like you've already invested a good deal of money in the gear, and there's certainly enough there to produce high-quality audio. However, if you're going to retain audio fidelity with so many pieces of equipment working together, I would try to balance as many of your analogue audio connections as possible. Even in my more modest home setup, mains hum and induced noise are problems (which have taken upgrading to balanced connections to sort out), so if you're ever hoping to use your studio professionally you don't really have a choice. Even in commercial studios a lot of time can be spent dealing with hum, so it's worth planning for it now, in my opinion. Unbalanced connections are fine for a smaller setup than yours, but, at the stage you're at, I reckon they're a recipe for disaster.

The great thing about balanced connections is that lifting the earth connections between equipment to break earth loops is comparatively easy — just disconnect the earth wire at one end of the signal cable — but with unbalanced gear the same trick very rarely works in practice and will often make things worse. If you're wondering how to decide where to make this disconnection in your system, Mallory Nicholls suggested that his preferred method was "to connect cable shields at equipment outputs and not at equipment inputs" in his Studio Installation Workshops in SOS September 2002 and November 2002. So, disconnect the shield just before it reaches the equipment inputs. If you're using any moulded cables, then you might have to perform some modification on the patchbay, but this is not usually too difficult to work out — it's what I did, and it's worked very well so far!

To incorporate any unbalanced devices within the balanced system, you have two main choices: unbalance at the input to the unbalanced device — connect one of the balanced signal wires to the jack sleeve, along with the earth wire, and don't disconnect the earth wire elsewhere — or use a balancing transformer to do the interfacing. The second solution is more costly, but may be the only way to solve any hum problems which the first solution may create. Maybe you'll be lucky and not get any appreciable hum using the first system, but if you do get hum then have a look at the Ebtech Hum Eliminators — there's an eight-channel one for £295 which would probably isolate enough connections to sort remaining hum problems out. I've only needed to use a two-channel one to sort out a persistent hum in my system, but yours is much more complex, and all of it will be connecting to the central desk, which multiplies the potential for hum.


Published September 2003