Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers. We offer customized service.

Saturday, November 30, 2013

Making The Most Of Plug-in Presets

Tips & Techniques


Technique : Effects / Processing
If you know what you’re doing, plug‑ins are best set by ear — but if you lack experience and take care to avoid rookie mistakes, dialling in an off‑the‑peg preset can still prove effective...
Paul White
To really understand the finer points of a Digital Audio Workstation, you need to know how everything works in a traditional recording studio, including processors such as reverb, echo, compression and EQ that now exist as plug‑ins inside DAWs. Knowledge comes with time and experience, though, so ‘first contact’ with a DAW can be intimidating. To make life easier, manufacturers offer plug‑in presets that you can use in typical mixing situations. Preset sounds for plug‑in instruments can often be used with very little tweaking, but preset processor and effects settings are more problematic, as how well they work for you will depend on things like the level, frequency content and dynamic range of the signal, or the project tempo. So how can you ensure they do the job with the minimum of intervention?

Crossing The Threshold

Preset names can look very appealing — but how could the patch designers know what material you planned to put through them?
During our Studio SOS visits and Mix Rescue sessions, we often see DAW projects using a compressor preset that’s not compressing at all! The reason is obvious if you know about compressors, but for the rest of you, here’s the story. A compressor is part of a family of processors called ‘dynamics’ processors. These all respond differently according to the level of the signal being fed to the processor — and a preset designer can’t possibly know what level of signal you’re sending to the processor! Furthermore, the greater the difference between the quietest and loudest parts of the signal, the greater its dynamic range, so a singer with good mic technique will probably control their dynamic range to achieve a more consistent level than a less experienced singer, and thus require less compression to keep the level even in the mix. Again, the programmer of the compressor preset can’t know how much compression your singer needs.
Level‑dependent ‘dynamics’ processors include compressors, limiters, expanders, gates and de‑essers, and all have a threshold level that determines when processing kicks in. Compressors, limiters and de‑essers are designed to reduce the gain when the input signal exceeds the threshold level (they act only on the loud bits). Gates and expanders reduce the level of the signal when it falls below the threshold, usually to cut down the amount of noise or spill in between the wanted portions of the audio signal. In most cases, there will be a control called ‘threshold’, but in some compressor designs the threshold is built into a single ‘Compression Amount’ control.
Compressors: Typical compressors have lots of parameters that you can adjust, but if you were to pick, say, a ‘Male Vocal’ compressor preset, the ratio, attack and release times would probably be appropriate for vocals: all you really need to adjust is the threshold, so that the gain‑reduction meter shows the right amount of activity. (It’s still important to listen, to check you’re getting the desired result!). As you lower the threshold, more of the signal exceeds it, so more compression is applied. Gentle compression, to even out the level, typically shows a maximum gain reduction of about 5dB, but more assertive compression (as you might use on rock or urban vocals) can require gain reduction of around 10dB.
Compressors work by reducing the level of the loudest parts of the signal, so the compressor output may need to be ‘turned up’ to get the signal back up to its original peak level. To this end, most compressors include an output gain control, often called ‘make-up gain’. Some include an ‘automatic’ option for this, but different plug-ins seem to handle this differently, and it’s easy enough to set the make‑up gain yourself: adjust the level while keeping an eye on the track or plug-in’s level meter, so that you end up with a healthy signal level, while still leaving some headroom. Making something louder will make it sound more impressive, but that’s no different from raising the fader — so try to keep the compressed and uncompressed signals at the same subjective level.
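If it helps to see the arithmetic behind the threshold, ratio and make-up gain controls, here's a minimal sketch in Python of the static gain curve a simple compressor follows. All of the numbers are made up for illustration, and real plug-ins add attack and release smoothing on top of this:

def compressor_gain_db(level_db, threshold_db=-20.0, ratio=4.0, makeup_db=6.0):
    """Static compressor curve: level above the threshold is scaled down by the ratio."""
    if level_db <= threshold_db:
        gain_reduction = 0.0                            # below threshold: no compression
    else:
        overshoot = level_db - threshold_db             # how far the signal exceeds the threshold
        gain_reduction = overshoot - overshoot / ratio  # 4:1 lets 1dB out for every 4dB in
    return -gain_reduction + makeup_db                  # negative values attenuate; make-up gain restores level

# Lowering the threshold means more of the signal exceeds it, so more gain reduction:
for level in (-30, -15, -5):
    print(level, "dBFS in ->", round(level + compressor_gain_db(level), 2), "dBFS out")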

Spot the difference! The compressor on the left has a threshold control, which you adjust to suit the level of the source. The one on the right has a fixed threshold: the input control raises the level of the source above the threshold, so the compression kicks in, and the output control compensates for any perceived changes in level.
Limiters: Rather than a variable threshold, plug‑in limiters often have their threshold fixed at 0dBFS, and there’s an input level control that can be used to boost the signal if you want to limit the signal peaks for the purpose of maximising loudness. Essentially, you only need to adjust the input level so that the limiter’s gain-reduction meter shows a dB or two of activity on the loudest peaks. If you want harder limiting, as you might on an individual drum, just turn the gain up a little more; let your ears decide when enough is enough.
Gates & Expanders: Like compressors, gate and expander presets will have their attack and release times tailored to specific applications, such as drums, vocals, guitar and so on. Again, though, you’ll need to adjust the threshold to get them to work correctly for your track. The simplest way to set them up is to set the track to loop around a section that contains both wanted signal and pauses; then, starting with the threshold at minimum, increase it until the noise in the pauses just disappears. Don’t set the threshold any higher than necessary to mute the noise, though, or you may start to hear the wanted signal being affected, especially where the signal has a wide dynamic range, or long, decaying tails. If you really want to minimise the chance of adverse side-effects when tweaking a gate, reduce the amount of attenuation (most gates, though not all, include a control for this) so that instead of muting the pauses completely, you simply reduce the noise to an acceptable level. Attenuating by between 10 and 20dB is often enough to keep the track sounding clean.
De‑essers: De-essers vary in design, but most have a variable sensitivity control, which essentially sets the threshold above which de‑essing takes place. A de‑esser monitors the frequency band most likely to contain sibilant ‘S’ and ‘T’ sounds, and then applies gain reduction to that part of the audio spectrum when loud sibilants are detected. Often there’s no gain‑reduction meter, so you simply have to adjust the sensitivity control by ear. What you’re after is a setting that reduces the ‘spitty’ character of those ‘S’ and ‘T’ sounds, but that doesn’t go so far as to make the vocal sound lispy.
Some specialist plug‑ins that reshape the envelope of a sound, or those that create swept‑filter effects, also rely on a threshold setting to adjust how they respond. The main exception is the SPL Transient Designer (and its many imitators), which intelligently varies its own threshold according to the input source, and works on material recorded at pretty much any level.

Equalisation

How much EQ boost and cut you apply depends entirely on the source material and the track. To make use of EQ presets, it’s best to keep the frequency bands fixed, but tweak the amount of cut and boost. Be careful not to end up simply boosting all frequencies though!
When it comes to EQ, the preset designers will once again have had to make assumptions about the source signal — so the presets won’t be a perfect match for your track. A Male Voice EQ preset, say, might not work in every case: for example, if there’s already plenty of 2.5kHz attitude in the voice, you’re not going to want to boost that band further! That said, the preset designer will at least have carefully chosen the frequencies that are likely to have the greatest impact on a typical male vocal — so usually the simplest way to tweak such a preset is to focus on changing the gain setting for each EQ band, without changing the frequency or Q. Gain or attenuation in these bands should make plenty of difference, even with reasonably small cut or boost settings.
Keep comparing the result with the clean (bypassed) sound, as it’s very easy to fool your ears into thinking louder and brighter means better! As a rule, use as little EQ as you can get away with to do the job. Using EQ effectively is a very necessary skill for any recording engineer, so also try to wean yourself off presets as soon as possible. There are lots of articles covering EQ on the SOS web site, for example at www.soundonsound.com/sos/dec08/articles/eq.htm.
After experimenting with the gain settings, the next step in easing yourself into parametric EQ is to vary the frequencies of the various bands and listen to the audible effect that produces. When you feel comfortable adjusting gain and frequency, move on to adjusting the Q or bandwidth, which controls the width of the cut or boost region.

Delay

As with compressors and most other dynamics processors, a gate (such as the Sonalksis one pictured here) requires you to set the threshold to make the preset settings work on your material.
There are three main adjustments you might want to make to a delay plug‑in: how much, how long and how many repeats. If you’re using a delay as an insert effect, the mix control determines how much of the delay you hear relative to the dry sound. If the delay is set up in a send/return loop, the mix control should be set to 100 percent wet. ‘Time’ determines the delay time (in milliseconds or in musical measures), and ‘feedback’ controls how much of the delayed signal is fed back to the input to produce further repeats. Many delays also include a tempo‑sync feature, which is only relevant if you recorded your song to the tempo grid, but it’s obviously helpful if you’re programming and experimenting with different tempos.
Many engineers record live bands without using the tempo grid, and though this makes editing a little harder, it can free the music from the ‘tyranny’ of the click track, which, in turn, can help the song ‘breathe’ through natural small tempo changes. In such cases, you’ll need to turn off tempo sync and set the delay time by ear.

Reverb
Reverb comes in so many different flavours now, but back in the day, the plate reverb was about the only game in town, and I find that it still sounds right on almost anything. The main parameters that control the amount of reverb are the mix setting and the decay time. As with delay, set the mix to 100 percent wet if using in the send/return loop.
If you’re confused by the reverb options available, try starting your project with a vocal and a drum plate preset on two different effect sends, and then adjust each by varying only the reverb decay time to suit your track. You could set up more reverbs, but that’s a great place to start.
Reverb is such a taste‑driven thing that any judgement is purely subjective, but as a very general rule, there’s a trend on modern records to use less obvious reverb treatments than on those made in the ’70s and ’80s. A useful tip is to try mixing the different reverbs you’ve set up (as described above), because the chances are they’ll have different decay times — and that means that you can achieve useful ‘in between’ combinations by blending them.

Enhancers & Distortion

It’s no coincidence that the biggest dial, smack in the centre of this 112dB Redline Reverb, controls decay time: that’s probably the first control you should reach for when tweaking a reverb preset to suit your mix.
Typical harmonic enhancers synthesize new high‑frequency harmonics to brighten a sound that has little or no natural top end. If you choose a preset, the filter frequency that determines what part of the audio spectrum will be enhanced will already be set, so to make adjustments you need only vary the harmonics ‘amount’ control.
Distortion plug‑ins are popular for applications ranging from adding a little warmth to a voice or synth to making an electric guitar sound like a blender full of barnacles! Chances are that your DAW will offer you several different types, with presets based on each — so try them all, to get an idea of their tonal differences, and then adjust the ‘drive’ setting to add more or less distortion. Many types of distortion are also dependent on signal level, so the lower your recording level, the higher the drive setting is likely to need to be to get the same result.

Modulation
Modulation effects include chorus, flanging, tremolo, vibrato, rotary‑speaker effects, and so on. While rotary‑speaker plug‑ins tend to have fast, slow and stop settings, the others are often more adjustable, and the general rule is that if you speed up the effect, you should also reduce the depth setting — especially when dealing with chorus and flanging. If you simply turn up the speed of a chorus or flanger without also reducing the depth, you’ll tend to end up with a nausea‑inducing warble — but then I suppose that might be what you’re after, so feel free to experiment! Some tremolo and panner plug‑ins have a tempo‑sync option, which is worth trying if it isn’t already engaged when you open the preset.

Channel Presets
Some DAWs allow you to save all the plug‑ins used in a channel strip as a kind of ‘combo’, or ‘channel preset’. These can be called up for use on another channel, or in another project, and can save you lots of time when working with similar material. Although they can be a great starting point in a new project, do bear in mind that the key parameters for each plug-in, such as a compressor’s threshold, will probably require tweaking for every track to which the preset is applied.

The Future Of Presets?
Though not directly related to this article, some plug‑ins, such as the Waves Producer series and the Izotope Nectar Vocal Suite, make life easy by hiding away the controls that you’re not likely to need to adjust. That way, you only have to deal with the important controls — which is an approach that’s a bit like the preset editing advice given here. I find this a good halfway house between tweaking everything and simply calling up presets, and because of the work that has gone into designing them, the results can be extremely good. Maybe we’re moving towards an era where the software treats us more like musicians and producers, and less like engineers? 

Auralex Mudguard - NAB 2011

Friday, November 29, 2013

Creating & Using Custom Delay Effects


Technique : Effects / Processing

Delay forms the basis for a wide range of effects that can transform your tracks from dull and pedestrian to polished and professional.

Geoff Smith

Creating & Using Custom Delay Effects

If I had to pick a single desert island effect, it would be delay. Why? Well, delay isn't only an effect in itself; it's also one of the basic building blocks for many other effects, including reverb, chorus and flanging — and that makes it massively versatile. The aim of this article is to focus on the practical applications of delays, by looking at some ways they're typically used on vocals, guitars and synths. I'll be writing primarily about the sound of different delay configurations, and to make things easier to follow, I've created an illustrative series of audio examples (see 'Audio Examples' box for details). Note that I've used exaggerated effects levels in the examples, to highlight the sound of the delay.

First, let's run through a quick overview to get any newbies up to speed. As the name suggests, the way in which delay processors work is quite simple: the programme material passes through a memory buffer and it is then recalled from the buffer a short time later. We refer to this time difference as the delay time. (In an analogue delay, you could think of the electronics or the tape loop performing the same function as the memory in a digital delay.) Multiple echoes or 'repeats' of the programme material are produced by feeding a percentage of the delayed material back from the output of the delay buffer into the input. We refer to this as feedback. You should be aware when adjusting the feedback parameter that high settings can result in the level of the processed signal increasing rapidly with each repeat, so if your monitoring levels are high, it pays to be careful!
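For anyone who finds a few lines of code clearer than a block diagram, here's a minimal Python sketch of the buffer-plus-feedback idea just described. It processes a short list of samples rather than a real-time audio stream, and all of the parameter values are purely illustrative:

def feedback_delay(samples, delay_samples, feedback=0.5, mix=0.5):
    """Delay line: each input is recalled delay_samples later, and a proportion
    (feedback) of the delayed output is fed back into the buffer for more repeats."""
    buffer = [0.0] * delay_samples                   # the 'memory' the signal passes through
    out = []
    write_pos = 0
    for x in samples:
        delayed = buffer[write_pos]                  # programme material recalled a short time later
        buffer[write_pos] = x + delayed * feedback   # feed some of the delayed signal back in
        out.append(x * (1.0 - mix) + delayed * mix)  # blend dry and delayed signals
        write_pos = (write_pos + 1) % delay_samples  # circular buffer
    return out

# A single impulse produces a train of repeats, each one quieter than the last:
print(feedback_delay([1.0] + [0.0] * 9, delay_samples=3, feedback=0.5, mix=1.0))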

1. A simple delay

2. A simple delay with feedback

As with reverb, delay is most often applied as a send effect, rather than as an insert. This approach not only conserves processing power when applying the same delay to multiple sources, but also allows you to treat the delayed part of the signal separately from the original, with extra processing such as EQ or distortion, which gives you more creative freedom.

That's it in a nutshell, but the combination of those few parameters and the flexibility afforded by using delays as send effects open up a world of production effects and tricks. With the basics explained, let's work through some useful techniques.

 

Vocal Treatments


During the mix process, vocals are often treated with either reverb or delay, or possibly a combination of both, so in the following section I'll work through some typical vocal delay treatments. To get the most from these, call up a vocal part of your own in your DAW and use that as raw material. For my examples, I've taken a heavily 'tuned' vocal with a tempo of 130bpm (audio examples 1 and 1b).

1. Create a send from your vocal track and on the send, call up a delay plug-in. It doesn't need to be a sophisticated one; in fact, the simpler the plug-in is, the easier this will be to follow. I'm working in Logic Pro here, and I've used Logic's Tape Delay.

2. Set the delay time to a quarter note and the feedback to zero. If you're using a delay that doesn't have tempo-sync options, you can still get things in sync by remembering the following equation: ms = 60,000 ÷ bpm
Note that 'ms' is the quarter-note delay time in milliseconds, 60,000 is the number of milliseconds in a minute, and 'bpm' is the tempo in beats per minute. From there, you can divide the result as necessary to get eighth notes, sixteenth notes, and so on (see the short calculation sketch after this list).

3. With the vocal playing, gradually raise the send amount and you'll hear a single quarter-note echo that, when set quietly, can add a useful sense of ambience to a vocal.

4. Next, gradually raise the amount of feedback until the delay fills up the gaps between the vocal phrases, but doesn't cloud over the original (audio example 2).
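As a quick illustration of the formula given in step 2, this little Python sketch (assuming a 130bpm song, as in my audio examples) derives the common note-value delay times from the tempo:

def delay_times_ms(bpm):
    """Tempo-synced delay times: a quarter note lasts 60,000 / bpm milliseconds."""
    quarter = 60000.0 / bpm
    return {
        "quarter": quarter,
        "eighth": quarter / 2,
        "sixteenth": quarter / 4,
        "dotted eighth (3/16)": quarter * 0.75,   # the 'Edge'-style delay discussed later
    }

for note, ms in delay_times_ms(130).items():
    print(f"{note}: {ms:.1f}ms")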

A simple quarter note delay in Logic Pro's Tape Delay plug-in.

 

EQ'ed & Distorted Delay


With the quarter-note delay in place, it's common to try to create a unique 'space' for the delay in the mix, and this can often be accomplished by applying EQ before or after the delay (it doesn't normally matter which, though there will be subtle differences in, for example, the response of a tape delay to a signal with the low end present and the low end filtered away). As Robert Orton explained when discussing his mix of Lady Gaga's 'Just Dance' in SOS March 2009, the EQ settings you require will vary from mix to mix: "Sometimes you want quite a dark delay that's hidden behind the vocals just to give it more body; at other times [there's] a word that clearly repeats, in which case the delay has to sound up-front and clear”.
Many plug-in delays have in-built high- and low-pass filters, but don't worry if yours doesn't, because you can use a separate EQ — as in this example:

1. Find an EQ plug-in that includes high- and low-pass filters and place an instance of it after your delay.

2. Start by rolling off the high frequencies of the delayed sound, using a low-pass filter. This will soften any transients, and help to push the delay back in the mix behind the main vocal sound. When used at low levels, this creates a really subtle sense of ambience.

3. Now bypass the low-pass filter and try high-pass filtering the delay, to remove some of its bottom end. This can help to stop the delay clouding up the low end of your mix.

4. Finally, combine the high- and low-pass filters to create a 'telephone' EQ. This effect (see image 2) is much less obvious when used on the delay signal than on the source vocal track, and it's a tactic that can reduce the amount of space taken up by the delay, leaving space for other elements. It also helps to make the delay more distinct from the original vocal (audio example 3).

You can find another example of this last effect in Peter Mokran's mix of the Pussycat Dolls track 'Jai Ho!', where he inserted the telephonic EQ before the delay (for more details, see the interview in SOS August 2009's Inside Track feature, /sos/aug09/articles/it_0809.htm).

One reason for using a delay instead of a reverb for these treatments is that quarter- or eighth-note delays are heard distinctly from the original vocal, whereas with a reverb, the early reflections can easily combine with the original vocal part and thus change its perceived timbre. This means that, for example, adding a bright quarter-note delay doesn't brighten the original vocal in the way that adding a bright reverb might do.

3. 'Telephone EQ' can be applied before or after the delay effect to differentiate the echo from the original vocal.

Another way to make the delayed signal stand apart from the original vocal is to run the delay through an amp and/or speaker simulation plug-in. The plug-in will not only provide a unique equalisation curve, but the sound of the speaker and the distortion from the amp will add a compression effect that can help to even out the level between repeats. This adds a greater sense of sustain to the delayed signal, which you can hear in audio example 4. It's worth experimenting with different amp models and cabinets, as you'll find plenty of different timbres. There really are no hard and fast rules as to what works best here.

 

Repeat After Me...


Now let's set up a second send, this time with an eighth-note delay. Again, there are plenty of plug-ins you could use, and I'm going to choose Soundtoys Echoboy, which is a really versatile plug-in. It has low- and high-pass filters and, in the Style Edit section, further equalisation and output-modelling options. This allows you to do all of the tonal shaping we've just discussed inside a single plug-in, and this means that you can quickly compare radically different delay, equalisation and distortion settings simply by changing presets. Echoboy also has a Decay EQ section, which is an EQ that's placed in the delay's feedback loop, and thus allows you to make each successive repeat duller or brighter, making the delay evolve in timbre with each repeat (audio example 5). In simpler terms, this feature controls how the tone of the echo changes over time.

4. The comprehensive EQ and output-modelling section in Sound Toys' EchoBoy allows the user to compare drastically different delays simply by changing presets.

There are plenty of other plug-ins that offer this sort of feature, and it's well worth having one in your mix toolkit, as it's often tricky setting up an EQ'ed feedback signal using separate delay and EQ plug-ins, partly because not all DAWs have sufficiently flexible routing.

 

Ducked Delay



With conventional delay setups, it can sometimes be difficult to find a constant send level that allows the delay to be sufficiently audible when the singer finishes a word, but doesn't cloud the vocal when he or she is singing. To counter this problem, you could use automation to ride either the send or return level of the delay on the offending passages (better to automate the send if you're sharing the delay with other sources). This has the advantage of giving you absolute control over the level of the delay, but it can be fiddly, particularly if you need to go back and change this automation as your mix evolves.

A less fussy means of achieving a similar result is to set up a compressor to duck the delay when the vocal is present. To set this up, you'll need a compressor and DAW that allow you to use an external side-chain input. Here's how to do it:

1. Add the compressor after the delay — and any subsequent processors in the send chain — on your send channel (picture 5).

2. Next, go to the compressor's side-chain input and set it so that the compressor is triggered by the lead vocal track. How you do this varies from DAW to DAW, so if you're unsure, check the manual.

3. With the track playing, lower the threshold and raise the ratio to apply the appropriate amount of gain reduction to the delayed signal (audio examples 6 and 7).
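If you're curious about what the compressor is actually doing in this setup, here's a rough Python sketch of the idea: the vocal (the side-chain) drives the gain reduction, but that reduction is applied to the delay return rather than to the vocal. The threshold and ratio values are made up, and a real compressor would smooth the detector with attack and release times rather than working sample by sample:

def duck_delay(delay_return, vocal_sidechain, threshold=0.2, ratio=4.0):
    """Reduce the delay return level whenever the vocal (side-chain) is loud."""
    out = []
    for d, v in zip(delay_return, vocal_sidechain):
        level = abs(v)                               # crude side-chain level detector (no smoothing)
        if level > threshold:
            over = level / threshold                 # how far the vocal exceeds the threshold
            gain = over ** (1.0 / ratio) / over      # compress the overshoot by the ratio
        else:
            gain = 1.0                               # vocal quiet: let the delay through untouched
        out.append(d * gain)                         # gain reduction hits the delay, not the vocal
    return out

# While the vocal is loud the delay is pushed down; in the pauses it comes back up:
print(duck_delay([0.5, 0.5, 0.5, 0.5], vocal_sidechain=[0.8, 0.8, 0.0, 0.0]))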

This is a common but effective trick, which has been used on plenty of hit records. By way of example, Marcella Araica used this on Timbaland's 'The Way I Are': listen to that track, and you'll hear the delay become prominent after the last lines of the verses.

5. The setup in Logic for creating a ducking delay. I've used Logic's Compressor plug-in to provide the ducking, but I could just as easily have used the included Noise Gate or Ducker plug-ins, shown bypassed.

 

Delay Versus Reverb


When you're mixing, one of the advantages of using a simple quarter-note delay on a vocal is that, compared to reverb, it takes up such a small amount of space in a mix, and this is even more the case when you use a mono delay. By combining a single quarter-note mono delay with a lead vocal that's also mono, you can preserve the positional integrity of that source. This can be helpful in busy arrangements, where space is at a premium.

Mixing isn't always about saving space for other elements, though: in a spacious mix, you might actually want the vocal effects to take up more room, and in this scenario it's a good idea to look at creating a sound using both your delay and reverb plug-ins. Running a delay into a stereo reverb will spread the delay out across the stereo field, making it seem broader (audio example 8a). One trick that's well worth exploring is running your delay through an early-reflection patch from a reverb, as this will 'stereoize' the delay without it sounding so obviously effected (audio example 8e).

You can also use both your mono delay and stereo reverb to deliberately create contrast between song sections. For example, you could use a quarter-note delay on a mono verse vocal to add a sense of ambience while keeping things dead centre, but follow this in the chorus by moving to ultra-wide stereo vocals, with stereo delay and reverb effects. It's this sort of contrast that's so essential in keeping the listener's ear engaged. When processing a lead vocal, then, consider in all of the different song sections whether you want both the vocal track and its send delay and reverb effects to sound more mono or more stereo.

A delay can also be used to change the character of a reverb, by acting as an ultra-configurable pre-delay. If you listen to audio example 8c, you'll hear that the vocal is treated with a small amount of hall reverb. Then listen to example 8d, where I've taken the same reverb but inserted an eighth-note delay (with feedback set to zero) before the hall reverb: the delayed reverb seems louder and more obvious in the mix. Even though many reverb plug-ins have a pre-delay control, they're often not tempo-syncable, and that's why I prefer to use a dedicated delay plug-in to provide the pre-delay: most delay plug-ins offer tempo-sync'ed delay times at the click of a button. Thus, I'm able to quickly compare, say, a sixteenth-note or eighth-note pre-delay.

 

Ping-pong Delay


Ping-pong delay is a type of dual delay where the first echo appears in the 'ping' channel (usually the left), delayed by the ping amount, and the second appears in the opposite 'pong' channel, delayed by the ping time plus the pong time. For example, if you set the ping to 200ms and the pong to 400ms, you'd first hear the ping 200ms after the programme material out of the left channel, and the pong 600ms after the programme material out of the right channel. This process will then repeat, assuming the feedback values are higher than zero. In audio example 9, Echoboy is set to a ping-pong delay with the Transmitter output style, to give a telephonic effect.
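To make that timing concrete, here's a small Python sketch using the 200ms/400ms figures above, listing where and when each repeat lands. Exactly how the feedback path is wired varies from plug-in to plug-in, so treat this as an illustration of the general idea rather than a description of any particular delay:

def ping_pong_schedule(ping_ms, pong_ms, feedback=0.5, repeats=4):
    """List the arrival time, channel and relative level of each ping-pong repeat."""
    events, t, level, channel = [], 0.0, 1.0, "left (ping)"
    for _ in range(repeats):
        t += ping_ms if channel == "left (ping)" else pong_ms
        events.append((round(t), channel, round(level, 2)))
        channel = "right (pong)" if channel == "left (ping)" else "left (ping)"
        level *= feedback                  # each pass through the feedback path gets quieter
    return events

for ms, side, level in ping_pong_schedule(200, 400):
    print(f"{ms}ms  {side}  relative level {level}")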

On his mix of U2's 'No Line On The Horizon', Declan Gaffney used the SoundToys Echoboy plug-in to add a subtle amount of ambience to the lead vocal using a ping-pong setting: "The echo is just a kind of warm ping-pong sound. There's no reverb on the track, there's not even any feedback on the delay, it's all about the dry sound with a little bit of space on the side, provided by the delay… I'm not a huge fan of reverb anyway; it's better to do the same thing with delays.” (SOS June 2009, /sos/jun09/articles/itu2.htm). Audio example 9b uses a similar setting.

6. A ping-pong delay

 

Slap Happy


I'm not sure if there's an official definition of 'slapback delay' as an artificial effect, but it is generally used to describe a single echo of 60-180ms that creates a sort of thickening effect. Famous fans of slapback on their voice include John Lennon and Elvis Presley. 'Slap delay' like this is also a standard treatment for hip-hop vocals because, as Jaycen Joshua put it in SOS August 2010, "reverb is the kiss of death on rap vocals” (/sos/aug10/articles/it-0810.htm). Audio example 10a demonstrates how slapback delay can be used to enliven a vocal part. The delay here is set to 80ms.

Mix engineer Jaycen Joshua, who explains that "reverb is the kiss of death on rap vocals,” but still uses delay to enliven the sound.

Slapback delays were originally created with tape machines, so you may want to try rolling off a little of the top end and adding a small amount of modulation of the delay time to approximate some of the inconsistencies of tape. It's also fun to try a stereo slapback delay using a dual delay with slightly different delay times for each side, as this will create a wider stereo image. Audio example 10b is a slapback delay with the left side set to 80ms and the right side set to 120ms: listen out for the extra stereo width this gives the vocal.

I'll mention one last slap-related trick before we move on: take a short slapback delay and then gradually increase the feedback. Because of the relatively short delay time, this will begin to sound reminiscent of a spring reverb, such as you often find in guitar amps.

 

Stereoising A Delay


The German scientist Helmut Haas observed that when one of two identical signals, each played through a separate speaker, is delayed relative to the other by anything from 1-30ms, the listener hears a broadening of the primary sound source rather than a perceptible echo. This effect, often referred to as the 'Haas effect', can be created using delay plug-ins or track offsets, and can be used to add stereo width to a vocal part. It also serves as a reasonable foundation for creating fake double-tracking effects.

Let's look at some settings. If you take a delay plug-in and increase the right channel's delay time to 10ms more than the left, you should notice that the delay creates a stereo effect rather than a perceptible echo: the delay simply broadens the vocal, adding stereo width (see audio examples 11a and 11b).
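Here's a minimal Python sketch of that Haas-style offset, assuming a 44.1kHz sample rate: the right channel is simply a copy of the mono source pushed 10ms later in time. It also hints at the mono-compatibility problem discussed below, because summing the two channels back to mono mixes the signal with a delayed copy of itself, which is exactly what causes comb filtering:

def haas_widen(mono, delay_ms=10.0, sample_rate=44100):
    """Make a pseudo-stereo pair: left is the dry signal, right is a delayed copy."""
    offset = int(sample_rate * delay_ms / 1000.0)         # 10ms is 441 samples at 44.1kHz
    left = list(mono)
    right = ([0.0] * offset + list(mono))[: len(left)]    # same signal, offset in time
    return left, right

left, right = haas_widen([1.0] * 1000)
# Summing back to mono combines the signal with its delayed copy (comb filtering):
mono_sum = [(l + r) * 0.5 for l, r in zip(left, right)]
print(mono_sum[0], mono_sum[440], mono_sum[441])          # 0.5 until the delayed copy arrives, then 1.0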

Using a plug-in like Logic's Sample Delay can create extra stereo width, but often at the expense of mono compatibility.

Jaycen Joshua used this type of processing to create a stereo effect for the crash cymbal on Justin Bieber's 'Baby': "There's a medium delay on the crash,” he remarks, "because it was originally a mono track, and I wanted it in stereo. So I set the delay to 21ms and mixed the original 100 percent to the left, and the delay to the right…. With 21ms, you get enough separation between left and right and it's a bit dramatic and not so phasey.... The sound becomes like a drummer hitting two cymbals, left and right, at the same time.”
Unfortunately, though, this approach can create problematic side-effects. First, simply delaying one side in relation to the other will result in unpleasant comb filtering when the left and right channels are summed to mono. Example 11c shows the effect of making the delayed vocal mono. Contrast that with example 11a and you'll hear why this sort of processing is risky if you want to ensure mono compatibility! A second problem with this treatment is that the undelayed side can be perceived as louder than the delayed side, causing potential balance problems.

To take advantage of the widening effect that Haas delays offer, while deftly dodging the mono-compatibility trap, there are some useful tricks. One of the most common widening treatments is created using two short delays, with a small amount of pitch-shifting applied to either side. To achieve this effect yourself:

1. Set a mono delay to 11ms with zero feedback, then pan it hard right.

2. Then, add a pitch-shift plug-in to take it up seven cents. Repeat this with a second delay, but this time pan it hard left, and pitch-shift it down by seven cents.

For a demonstration of this effect, see audio examples 11d and 11e. To create these, I've used Logic's Delay Designer plug-in (picture 7), as it allows you to set up multiple delays and pitch-shifting all in the same plug-in, but it's easy enough to do by combining delays and pitch-shifters separately. If you want to hear the effect used in context, there are countless commercial tracks I could offer by way of example, one of which is Arcade Fire's album The Suburbs. Mix engineer Craig Silvey explained in SOS November 2010 that he used "two short delays from an AMS with a little pitch-shift, one up and one down, which thickened the vocals like a doubler” (/sos/nov10/articles/it-1110.htm). There are also plenty of examples of this tactic being used to good effect in our regular Mix Rescue articles.




7. The classic delay/pitch shifted widener: an incredibly useful effect, heard on many chart hits.

A nice variation on this effect is provided by Waves' Doubler plug-in, which provides the same widening effect but with a less obviously effected sound. In the words of Grammy-winning mix engineer Dave Pensado: "Doubler has four delays that also help to make the vocal sound bigger, wider and more powerful” (SOS Jan 2007, /sos/jan07/articles/insidetrack_0107.htm). Audio example 11f provides an example of this setting.

Very short ping-pong delays can also make excellent wideners, as you can hear if you listen to audio example 11g, which shows a 30ms ping-pong delay built with NI Reaktor, where the output of the ping delay is polarity inverted before going into the pong delay. This technique is great for transparent widening and has excellent mono compatibility.

 

Multi-tap Delay


A multi-tap delay is a delay line where multiple 'taps' or outputs are taken from a delay buffer at different points, and the taps are then summed with the original. Multi-tap delays are great for creating rhythmic delay patterns, but they can also be used to create sound fields of such density that they start to take on some of the qualities we'd more usually associate with reverb. Favourite plug-ins for the job include Waves Supertap, PSP Audioware's PSP608 and Echoboy, using its Pattern mode, but there are many more available, and you might even have suitable toys bundled with your DAW (I often use Logic's Delay Designer, for example).

It can be great fun using a multi-tap delay to design reverb-like effects that, while they might not compete with a proper reverb algorithm in terms of realism, can produce some wonderful, unique-sounding results. A simple method for creating a reverb-like setting is to take a multi-tap delay and create a series of delay taps starting at 30ms and increasing in time. With the preset shown in Picture 8, I've gradually increased the delay tap times in Logic's Delay Designer at random up to the last taps, which are in time with the sequencer tempo, the last tap being a quarter-note delay. To build on this starting point, experiment with panning successive delay taps left and right, filtering the delay taps so that each tap becomes duller, and changing each tap's volume. In audio example 12, the taps swell in volume toward the middle of the delays and then fade out again in the later taps.
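As a rough sketch of the multi-tap idea (not of Delay Designer itself), the Python function below sums several delayed copies of the signal, each with its own time, level and pan position; all of the tap values are invented purely for illustration:

def multi_tap_delay(mono, taps, sample_rate=44100):
    """Sum several delayed copies ('taps') of a mono signal, each with its own level and pan."""
    length = len(mono)
    left, right = [0.0] * length, [0.0] * length
    for delay_ms, level, pan in taps:                   # pan: 0.0 = hard left, 1.0 = hard right
        offset = int(sample_rate * delay_ms / 1000.0)
        for i in range(length - offset):
            left[i + offset] += mono[i] * level * (1.0 - pan)
            right[i + offset] += mono[i] * level * pan
    return left, right

# Hypothetical taps: times increase, levels swell and then fade, pans alternate left and right.
taps = [(30, 0.3, 0.2), (75, 0.5, 0.8), (140, 0.7, 0.2), (230, 0.5, 0.8), (461, 0.3, 0.5)]
left, right = multi_tap_delay([1.0] + [0.0] * 25000, taps)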

8. Consecutive delay taps panned left and then right to create an effect similar to reverb.

9. In this screen, the delay taps are being increasingly high- and low-pass filtered the longer they get.

Dave Pensado describes how he used the Waves Supertap plug-in on the lead vocal for the track 'Beep' by the Pussycat Dolls: "The delays on Supertap are all very short, and what it also allowed me to do is spread the vocal wide across the stereo spectrum. In other words, instead of occupying a small spot in the middle of the mix, I could fill the whole spectrum between the speakers. The 149, 298 and 587 ms are sixteenth, eighth and half-note delays, and they spread and get louder from left to right." If you have Supertap in your plug-in collection, you can find a similar setting to the one used for 'Beep' in the preset menu under Dave Pensado/Pensado Tap Vocal — and if not, you can still hear the effect in audio example 13.

 

Automation

 

10. Waves Supertap plug-in, a good example of a very tweakable multi-tap delay.

Of course, you don't have to use just one delay effect on a sound. It's perfectly possible to set up a range of different delays on your aux send channels, and then experiment with automating different delays on different words. It's this level of sophistication you might need to reach for if you want to compete with contemporary pop tracks.

Robert Orton, who used extensive automation of delays on Lady Gaga's vocal for the smash hit 'Just Dance'.

For example, in SOS March 2009 (/sos/mar09/articles/it_0309.htm), Robert Orton described how he automated the delays on Lady Gaga's hit 'Just Dance': "When soloing the vocals, I added half-, quarter- and eighth-note delays, and I think there's also a dotted eighth-note delay, all using the Sound Toys Echoboy... The eighth-note delay is panned to the right, and comes in the choruses and some words in the verses. The send is automated. The half-note delay is panned to the left, and captures certain words; for instance, in the chorus each time the word 'dance' occurs at the end of a line. The quarter-note delay is also panned to the right, and is automated to happen on certain words. All the delays catch words differently, to keep it interesting. They're also set to different styles on the Echoboy — TubeTape, Analogue, etc — to get different textures.”

 

Axe Effects


All of the techniques from the vocal section can also be applied to other instruments, including the guitar. For example, quarter- and eighth-note delays can be used for ambience and delay/pitch wideners to add stereo width. But there are also plenty of effects that work particularly well on guitar.

In this section, then, let's work through an example of using delay on guitar, to show how it can transform even the simplest of parts. We'll try to emulate one of the most famous and distinctive-sounding delayed guitar sounds: that of U2's The Edge, which you can hear on many records, including The Joshua Tree. The delay effect that's most often associated with the Edge has a delay time of three-sixteenths or a dotted eighth-note of the song tempo.

U2 guitarist The Edge forged his signature sound using delay, which you can hear on classic albums such as The Joshua Tree. Find out how to mimic this sound here!

1. It pays to start with a really simple, repetitive guitar part, so for my audio examples I've created a simple eighth-note guitar line (audio example 14a).

2. Now, place a 3/16 note delay against this, to create a much more complex pattern (audio example 14b). For the example, I used a Korg SDD2000 hardware delay unit (from the same time period as The Joshua Tree), but you can get good results with a plug-in delay too.

3. Next, experiment with using a very slow LFO to modulate the delay time. Used subtly, this creates a pleasant amount of movement in the delay. Some plug-ins include an LFO, but if yours doesn't, you can create a similar effect by routing a separate MIDI LFO plug-in to control the delay-time parameter, or draw the modulation in using automation.

4. Finally, try experimenting with panning the delay into a different position from the dry signal. To demonstrate this, I created audio example 14c, in which I've left the guitar in the centre and panned the delay hard right. More interesting stereo delays can often be achieved by choosing two different delay timbres, with one panned hard left and the other hard right, and with slightly different modulation rates (see audio example 14d).

 

Synths

 

Heavily compressing a synth sound after the addition of delay and reverb can give the part an interesting sense of movement, as the effects are effectively ducked when the synth is played. Here, I've used Logic's own plug-ins to apply this effect to a simple plucked-string patch to create a much more complex, and richer-sounding result.


Up until now, we've looked mainly at using delays as send effects, but now let's consider an example of where you might use them as inserts. When using a synth, there are some advantages to processing the delayed signal along with the dry, and to understand why this is the case, let's work through another example:

1. Take a simple plucked synth and program a repetitive pattern based around eighth notes (audio example 15a).

2. Now add a stereo delay and slightly offset the right and left delay times to increase the stereo width of the effect (audio example 15b).

3. This sounds quite boring at the moment, so now let's make it more interesting. Add a small amount of a medium-sized reverb (audio example 15c).

4. Next, insert a compressor after the delay and reverb. Set this so that it's applying about 10dB of gain reduction, and adjust the attack and release time so that the compressor clamps down on the original pluck and then releases from compression in time for the next delay.

Hopefully, what you can hear is a transformation from a boring, predictable patch to one with some movement. Notice that the delay and reverb elements are also brought forward in the mix (audio example 15d). Essentially, what's happening is that the compressor applies 10dB of gain reduction to the whole effects chain whenever the plucked synth plays, effectively ducking the delay and reverb by 10dB every time the synth is played. This creates a really satisfying pumping effect and gives the part a wonderful sense of rhythmic movement that simply increasing the effect level couldn't match. There are ways to achieve this with sends, using side-chain compression (remember the ducked delay?) or sending both the source and delay tracks to a bus, but it's much simpler to use delay as an insert in this case!

We don't need to stop here, though: we can create more rhythmic complexity in the same part by stealing ideas from the Edge, so adjust the left delay time of your stereo delay to 3/16th notes while keeping the right delay to 1/8th note (audio example 15e.) Again, listen to how the compressor effectively ducks the effect each time the synth plays.

Now we'll use delay to take the synth part to a climactic peak. When you turn up the feedback on a delay, it will begin to self-oscillate, and the delay will get louder and louder. Dance musicians have managed to take advantage of the excitement this effect creates while keeping it under control, by using a limiter to pin down the delay's output level, so that high feedback amounts that would normally result in ear-bleeding pain are kept manageable.

1. Use the delay from the previous example and then insert a limiter at the end of the effects chain. Set the limiter so that the maximum output is something bearable (mine is set to -7.5dB), and set the threshold so that you can see a small amount of gain reduction on the meters under normal circumstances.

2. Now, slowly bring up the feedback of the delay. As the delay moves into self oscillation, you should find that the limiter pins the delay to a bearable level instead of allowing it to increase in volume like a crazed animal.
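To see why the limiter matters here, consider this small Python sketch (with made-up numbers): with feedback above 1.0 each repeat comes back louder than the last, and a hard limiter at the end of the chain is what pins the output to a bearable level. The 0.42 ceiling corresponds roughly to the -7.5dB figure mentioned in step 1:

def limited_feedback_repeats(feedback=1.2, ceiling=0.42, repeats=10):
    """Show repeat levels for a self-oscillating delay, with and without a limiter."""
    level, raw, limited = 0.1, [], []
    for _ in range(repeats):
        raw.append(round(level, 3))
        limited.append(round(min(level, ceiling), 3))   # hard limiter: never exceed the ceiling
        level *= feedback                               # feedback above 1.0: each repeat grows louder
    return raw, limited

raw, limited = limited_feedback_repeats()
print("without limiter:", raw)       # climbs without bound
print("with limiter:   ", limited)   # pinned at the ceiling once it gets there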

This article presents only a few ideas for using delay in your productions, and there are whole, beautiful worlds that are swamped in delay effects and waiting to be discovered. Have fun exploring them!


Line 6 Pod HD - Musikmesse 2011

Getting Better Results From Izotope's RX2

Article Preview :: Clean Up Your Acts


Technique : Effects / Processing


Restoration software such as Izotope's RX2 can breathe new life into damaged audio — with the right moves from the user!

Mike Thornton

Izotope's RX2 software makes available some of the most powerful restoration tools around, at an affordable price, and as a result has become a very popular package. In this article, I'm going to share some power-user hints and tips that will help you get the best from it.

If you're new to RX, a good place to start would be checking out Sound On Sound's original review in July 2008 (/sos/jul08/articles/izotoperx.htm). Since that review was published, Izotope have released a new and significantly expanded version called RX Advanced, in addition to the basic RX. RX Advanced has a number of extra modules, and some of the modules that appear in both versions have extra features in the Advanced release. Both variants can be run as stand-alone applications or as plug-ins for your favourite Mac OS or Windows DAW: Audio Units, VST, RTAS and AAX Native plug-in formats are supported. However, running RX as a plug-in means that its processes have to operate in real time. This makes some features unavailable and limits the effectiveness of others, so I tend to export files that need processing to the stand-alone version of RX and re-import them into my DAW once processed. (For this reason, my number one feature request would be for Izotope to improve the links between DAWs and the stand-alone application, perhaps in the same way as Synchro Arts have done with Revoice Pro.)

The basic version of RX has five main restoration modules. Declip is for repairing clipped and distorted audio, while the Declick & Decrackle module is intended for restoring recordings from vinyl records, although it is also good for dealing with digital clicks. Remove Hum can eliminate low-frequency noise such as mains hum, along with up to seven harmonics. Denoise removes broadband noise that is relatively static in profile; it is effective both on electrical noise such as hiss, and acoustic noise such as air-conditioning. Finally, Spectral Repair is designed to remove occasional random sounds that have interrupted a recording, whether these come from the instrument being recorded or from external sources. Beeps, car horns, mic pops and mouth clicks are all grist to its mill.

Supplementing these are a selection of 'fix it' modules such as Gain & Fades and Channel Ops, which can help with all kinds of routing and phase-related problems. There is also a Spectrum Analyser module to help track down exactly where the problems are. In RX Advanced, there are more 'fix it' modules you wouldn't necessarily expect to find in a restoration software package, such as Resample for downsampling audio files, Dither for reducing the word length of audio files, and pitch-shifting and time correction using Izotope's Radius technology. The Advanced version also operates as a VST and AU plug-in host in stand-alone mode, and boasts an intriguing Deconstruct module, where you can separate and adjust the tonal (pitched) and broadband (unpitched) elements of an audio file.

To illustrate some of the techniques involved in getting the best from the various RX modules, I'll work through some examples of audio files that have needed some work. I will describe how things happen in the stand-alone application, but most of what I am covering could be undertaken in the plug-ins within the limits of real-time processing.

 

Peak Distortion


Digital audio is not forgiving when it comes to peak distortion: if your signal exceeds 0dBFS, you will experience clipping, with anharmonic distortion that is usually very obvious and unpleasant.

1: The stand-alone RX Advanced. A stereo audio file is loaded that has severe clipping distortion.

My first screenshot shows an audio file where the tops of the waveform have been chopped off, and the challenge is to restore this audio to its former glory using the Declip module. Either adjust the Clipping threshold by eye, so that the red lines on the audio waveform are below the clipped-off peaks, or click the Compute button in the module window and adjust the Clipping threshold control until the red line is just before the white line in the module window display. Next, adjust the Makeup gain. This is actually an attenuation control, and it is needed because once RX has reconstructed the missing peaks in the audio, its new peak level will be around 6dB higher — hence the default setting of -6dB. If you are already close to 0dBFS, you might want to consider reducing this further.

2: RX's Declip module. 'Makeup gain' is a misnomer, as it's actually attenuation that is required following the de-clipping process.


Thursday, November 28, 2013

Sennheiser MK4 - NAB 2011

Roland Jupiter 80

Performance Synthesizer


Reviews : Keyboard

The Jupiter 8 looms large in synthesizer history, and any synth bearing the name has a lot to live up to. Is the Jupiter 80 destined for the same legendary status? Find out in our world‑exclusive review...

Gordon Reid

Roland Jupiter 80

I have friends who have been waiting nearly three decades for a successor to the Roland Jupiter 8. Their hearts went all a‑flutter when 1991's JD800 was announced but, while this is now a minor classic in its own right, it wasn't what they had envisaged. They went through the same set of emotions when the JP8000 appeared in 1997, only to be disappointed again. But now there's a synth that says to the world, "Let there be no confusion; I am a Roland Jupiter”. Launched amid a flurry of speculation, praise and diatribes in equal measure from people who had never been within 100 miles of one, let alone heard one, it's the Jupiter 80.

Physically, it's somewhat larger and heavier than Roland's most recent and now discontinued 76‑note workstation, the Fantom G7. Its colourful control panel is reminiscent of a Jupiter 8, but only in a superficial way, and it's clear even before switching it on that most of the action is going to take place on the 800 x 480-pixel touchscreen that dominates its control panel. The touchscreen is good news; I've lost track of the number of times I've poked a Fantom's display in the expectation that something will happen.

The Jupiter 80 generates its sounds using the Supernatural technology first heard on the ARX boards introduced for the Fantom G series, married to a significantly cut‑down version of the APS (Articulative Phrase Synthesis) technology found in the V‑Synth GT. However, despite the justified clamour from Fantom owners, there are only three ARX boards (one each for drums, electric pianos and brass), and the set of polyphonic APS sounds in the Jupiter 80 does not overlap fully with the APS sources and Phrase Models in the GT, so it's clear from the outset that the new synth is not simply a mélange of existing engines presented in a colourful new box.

What's even more apparent is that the Jupiter 80 is not based on any conventional synth architecture, because it eschews the conventional patch and performance structures that have dominated synth architectures for the past 20 years or so. The lowest level (or so Roland claims) is the 'Tone', and there are two distinct Tone generators: Supernatural Acoustic (which, confusingly, also contains the APS sounds) and Supernatural Synth. The next level up is the 'Live Set', which can comprise up to four Tones in 'Layers'. The top level is the 'Registration', which comprises four 'Parts': a single Tone in the Perc Part, a Live Set in the Lower Part, another Live Set in the Upper Part, and another Tone in the Solo Part. Confused? I'm not surprised; so was I. But I have to admit that, by the end of the review, I found it simple to use, if rather unusual.
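If the terminology is easier to digest as a structure, here's a rough Python sketch of how the levels nest. It's purely an illustration of the hierarchy described above, not of Roland's actual implementation, and the example names are invented:

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Tone:                       # lowest level: Supernatural Acoustic or Supernatural Synth
    name: str
    engine: str                   # "Acoustic" or "Synth"

@dataclass
class LiveSet:                    # up to four Tones stacked in Layers
    name: str
    layers: List[Tone] = field(default_factory=list)

@dataclass
class Registration:               # top level: the four Parts
    perc: Optional[Tone] = None
    lower: Optional[LiveSet] = None
    upper: Optional[LiveSet] = None
    solo: Optional[Tone] = None

# A hypothetical Registration: a piano/strings Live Set in the Upper Part, a synth lead in the Solo Part.
reg = Registration(
    upper=LiveSet("Piano + Strings", [Tone("Concert Grand", "Acoustic"), Tone("Full Strings", "Acoustic")]),
    solo=Tone("Saw Lead", "Synth"),
)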

 

Tones

 

Resemblance to its ancestor aside, the Jupiter 80's large, colourful buttons and simple control panel make it especially useful for live performance.

Let's start at the lowest level and look at the Supernatural Acoustic Tones. There are 117 of these, divided into categories such as pianos, basses, strings, guitars, and so on. A closer look shows that some sounds are presented in two versions, with the second prefixed by the letters 'APS'. Consequently, you have (for example) 0062:Oboe and 0104:APS Oboe, both of which sound like oboes but nevertheless sound quite different from one another. There's nothing to worry about here, although Roland seem to have got themselves into a bit of a semantic tangle when combining the Supernatural and APS technologies, because the literature also talks about something called Behavior Modeling Technology (the company's spellings, not mine) which, like APS, also claims to emulate the behaviour of a given instrument when you play its physical model. Are APS and BMT components of Supernatural Acoustic, or is Supernatural Acoustic the initial sound generator and are APS and BMT independent performance modifiers? Or is APS a component of BMT? Who shot JFK? Damned if I know!

So let's turn to Supernatural Synth, where things should be simpler. Except that they're not. Indeed, I found the Synth mightily confusing until I realised that Roland's claim that the Tone is the lowest level of sound creation is wrong. I initially approached Supernatural Synth on that basis, but I got myself into a tangle because I wasn't differentiating between the controls that affect a Supernatural Synth Tone as a whole, and those that programme the three miniature synthesizers ('Partials') that comprise it.

A Partial is a powerful synthesizer in its own right. Its oscillator appears to offer eight waveforms, but the six analogue‑type waves each have three variants, and pulse width and PWM are programmable where appropriate. The depth of the Super Saw (the seventh option) is also programmable, while the eighth option allows you to select any one of 380 PCMs that include many of the underlying waveforms from earlier generations of Roland's digital synths. There's also a dedicated AD pitch envelope, a ring modulator between Partials 1 and 2, and a waveshaper that can act upon any of the resulting sounds, whether Virtual Analogue or PCM digital. Similar flexibility is apparent when you turn to the multimode (low-pass, high‑pass, band-pass and peaking) filter with its 12dB/oct and 24dB/oct slopes, and to the amplifier. There are even two LFOs, one for conventional duties, and a second dedicated to the modulation joystick. This is all good stuff, and a polysynth built on three of these (i.e. a single Supernatural Synth Tone) would be a very powerful instrument indeed.
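
For orientation rather than accuracy, a single Partial might be summarised along the following lines, again in Python and again with invented parameter names; the feature list is taken from the paragraph above, not from any Roland specification.

from dataclasses import dataclass

@dataclass
class Partial:
    """One of the three mini-synths inside a Supernatural Synth Tone (illustrative only)."""
    osc_wave: str = "saw"          # six analogue-style waves (three variants each), Super Saw, or PCM
    super_saw_depth: float = 0.0   # meaningful only when the Super Saw is selected
    pcm_number: int = 0            # 1-380 when a PCM waveform is selected
    pitch_attack: float = 0.0      # dedicated AD pitch envelope
    pitch_decay: float = 0.0
    filter_type: str = "low-pass"  # low-pass, high-pass, band-pass or peaking
    filter_slope_db_oct: int = 24  # 12 or 24
    cutoff: float = 1.0
    resonance: float = 0.0
    amp_level: float = 1.0
    lfo1_rate: float = 0.0         # general-purpose LFO
    lfo2_rate: float = 0.0         # second LFO, dedicated to the modulation joystick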

 

Live Sets

 

Largely eschewing the knobs and sliders of its forebear, the Jupiter 80's synth engine is accessed mostly via the 800 x 480-pixel touchscreen.

Moving up a level, you can select any four Tones — whether generated by Supernatural Acoustic or Supernatural Synth — to insert into the four Layers in a Live Set. Strangely, this is also the level at which Tone editing (as opposed to Synth Partial editing) takes place, so before I can tell you about the facilities provided by the Live Set itself, I need to tell you about what it lets you do to the Tones that comprise it.

Let's say that you want to layer a couple of Tones — one generated by Supernatural Acoustic and the other generated by Supernatural Synth — within a Live Set. To do so, you place the first in, say, Layer 1, and the second in, say, Layer 2 of the Set. (You have to do this when the Live Set is inserted within either the Upper or Lower Part of a Registration, but let's ignore that for the moment.) Now let's say that you want to edit the first of these Tones. Punching the appropriate Edit button reveals a handful of parameters that are relevant to the instrument in question. So, for example, the piano model offers control over string resonance, key-off resonance, hammer noise, stereo width, nuance and tone character; the flutes offer noise level, growl sensitivity and 'variation'; and the electric pianos offer just a single parameter: key-off noise. In contrast, the most complex of the models, the electric organ, is based on a Hammond generator with all nine drawbars, percussion, keyclick, leakage, and even subtleties such as the unusual behaviour of the 1‑foot drawbar correctly implemented. Frustratingly, there's no user memory for edited Acoustic sounds, so you can only save the modified Tone within the Live Set in which you edited it. I can see that this saves memory, but if, for example, I want to create and store a range of Honky Tonk pianos, I can't do so except by using up Layers and Live Sets. Perhaps the reasoning was that there are so few parameters in a Supernatural Acoustic sound that any edits could be recreated elsewhere without too much hassle.

Moving across to Supernatural Synth, inserting a Tone into a Live Set provides what looks like a complete set of synthesis parameters in addition to those found at the Partial level. However, these are not absolute; they are modifiers that override some of the values in the Partials (such as switching the filter type to a new setting) or provide offsets that affect the Partials' values. This is a weird architecture. The system is, in effect, treating the Partials as an 'expert' level, allowing novice users to regard Synth Tones as immutable building blocks and to tweak them into shape using the controls provided at the Live Set level. The advantage, though, is that a Synth Tone moulded into a new shape in one Live Set is not affected when inserted into another, which is not a trivial benefit.
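
One way to picture this behaviour is that a Live Set never rewrites the Tone itself; it stores its own overrides and offsets and applies them only when the sound is produced. The fragment below is my own sketch of such a non-destructive scheme, with made-up numbers, not a description of Roland's code.

from typing import Optional

def effective_value(partial_value: float, offset: float = 0.0,
                    override: Optional[float] = None) -> float:
    """A Live Set edit either replaces a Partial's setting outright (override)
    or nudges it by a relative amount (offset); the stored Tone is untouched."""
    if override is not None:
        return override
    return max(0.0, min(1.0, partial_value + offset))

# The same Synth Tone (cutoff stored as 0.5) can sound different in two Live Sets:
tone_cutoff = 0.5
in_live_set_a = effective_value(tone_cutoff, offset=0.3)     # 0.8
in_live_set_b = effective_value(tone_cutoff, override=0.1)   # 0.1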

Once you have inserted and, if desired, edited the Tones within a Live Set, you can apply effects and other facilities to them. On the surface, the effects section within a Live Set looks good, with four assignable MFXs (multi‑effects) offering 76 effect types, plus a global reverb. Unfortunately, the routing of the MFXs is fixed. 

While you can determine the level of the signal sent from each Layer to each MFX, to the reverb, and to the outside world, the four MFXs are permanently arranged in parallel, so there is no way to pass signal from one to the next. This means that you cannot send sounds sequentially through them to create (for example) an organ's effects path of chorus, reverb, overdrive and rotary speaker, or a common guitar effects path such as compression, overdrive, EQ, chorus and delay. I discussed this with Roland, whose engineer's response was, "we could make the routing more flexible, but this might have ramifications in other areas, so we should consider this carefully”. In other words: don't hold your breath. Notwithstanding this, the quality of the effects themselves is up to Roland's usual high standards, which is hardly surprising, since many of them are augmented versions of the Fantom G effects, enhanced by the welcome addition of a three‑band parametric EQ.
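
For readers who think in signal-flow terms, the difference between the fixed parallel arrangement and the serial chain it rules out can be sketched like this. The effect functions here are mere placeholders, and nothing in the snippet reflects Roland's actual DSP.

from typing import Callable, List

Effect = Callable[[List[float]], List[float]]  # a block of samples in, a block out

def parallel_routing(dry: List[float], mfx: List[Effect], sends: List[float]) -> List[float]:
    """Jupiter 80 style: each MFX processes its own send of the dry signal
    and the results are summed, so no effect ever feeds another."""
    out = list(dry)
    for fx, send in zip(mfx, sends):
        wet = fx([sample * send for sample in dry])
        out = [o + w for o, w in zip(out, wet)]
    return out

def serial_routing(dry: List[float], chain: List[Effect]) -> List[float]:
    """What the fixed routing rules out: passing the signal through each effect
    in turn, as in a chorus, reverb, overdrive and rotary-speaker chain."""
    signal = list(dry)
    for fx in chain:
        signal = fx(signal)
    return signal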

A Live Set also offers a variation on the concept of morphing from one patch to another. Called 'Tone Blending', this allows you to move from one state to another, with multiple parameters such as level, filter cutoff, resonance and effect send levels being affected simultaneously by a single knob or the onboard D‑Beam controller. You can define the start and end points of the transition for each of the Layers in the Live Set, which makes it possible to do things as simple as introducing a sound into an existing mix, as interesting as morphing from an Acoustic sound to a Synth sound and back again, or as experimental as turning civilised patches into over‑effected sonic mayhem. Then, if you stumble across something that you like anywhere within the blending range, you can save this as another Live Set, which is an innovative way to generate new sounds.
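
In essence, Tone Blending is a one-knob crossfade between two stored parameter snapshots for each Layer, which you could picture as the little interpolation below; the snapshot contents are invented for the example.

def blend(start: dict, end: dict, amount: float) -> dict:
    """Interpolate every shared parameter between a start and an end snapshot;
    amount = 0.0 gives the start state, 1.0 the end state."""
    return {key: start[key] + amount * (end[key] - start[key]) for key in start}

# One Layer's start and end points for the blend knob or D-Beam:
start = {"level": 0.0, "cutoff": 0.2, "resonance": 0.1, "reverb_send": 0.0}
end = {"level": 1.0, "cutoff": 0.9, "resonance": 0.6, "reverb_send": 0.4}
halfway = blend(start, end, 0.5)  # roughly {'level': 0.5, 'cutoff': 0.55, 'resonance': 0.35, 'reverb_send': 0.2}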

 

Registrations

 

The Jupiter 80's rear panel hosts all the connections you'd expect to see, and a couple more besides. A full list can be found in the 'Abridged Specification' box.

We now come to the top level, the Registration. There are 32 of these (A‑1 to D‑8) in a 'Set', and eight Sets (called [01] to [08]), giving a total of 256 Registrations, each of which comprises the aforementioned four Parts — Perc, Lower, Upper and Solo — that can be played in isolation, layered in a variety of ways, or split into a maximum of four regions across the keyboard. If you want to change the whole setup at the touch of a button, this is the level at which it's done, using the buttons under the keyboard.

Given that the Solo Part comprises just a single Tone, you simply insert the one that you want, determine the keyboard range over which it will play, and set up things such as its level, pitch and pan. The modifying parameters aren't as extensive as those provided by a Live Set, and the Solo Part's effects structure — which offers just compression, EQ and delay in series — is quite different from a Live Set's, so a given Tone can sound quite different when inserted and played here.

If anything, the Perc Part is the weirdest of them all. With its Manual Percussion option selected, this sets aside the bottom 15 notes of the keyboard for a selection of percussion sounds. There are eight sound sets provided but, given the limited number of notes available, these are not laid out in conventional GM fashion. Alternatively, you can select the Drums/SFX option, which gives you access to 16 drum kits, either across the entire keyboard or limited to the region defined by the Lower Part. You can also insert any Supernatural Tone here, again accessible across the whole keyboard if no splits are on, or in the same range as the Lower Part. This can be very useful, although you have to remember that the Perc Part's editing and effects have the same structure as the Solo Part's.

With more than a nod toward live performance, the Jupiter 80 offers two additional tools at the Registration level that can be applied to the Live Sets inserted into the Upper and Lower Parts. The first is a powerful arpeggiator. This can generate traditional patterns, with the usual parameters, such as octave range, rate and shuffle, but when you start to experiment with its Styles, Variations and Motifs, you'll find that it offers many more possibilities, encompassing everything from simple patterns to guitar licks and strums, walking bass patterns and more. You can even create up to 128 new 'User' styles by importing and saving Standard MIDI Files of up to 500 notes. The second tool is called Harmony Intelligence, and this adds a harmony to the topmost note that you play in the Upper Part, calculated from the notes that you play in the Lower. There are 17 types of 'intelligence', and these determine the nature of the harmonies that are generated. Names such as Big Band, Strings, Hymn, Country and Gospel tell you exactly what Roland's engineers had in mind, but while these would be appropriate for a domestic instrument, they seem a touch incongruous here.

The Jupiter 80's MIDI capabilities are as extensive as you would expect, with independent input and output channels for each of the Parts, a separate control channel to change Registrations, MIDI sync, extensive MIDI CC capabilities, and the ability to transmit parameter changes as SysEx data. There's also an extensive menu for controlling external sounds, and this allows you to set up things such as velocity ranges and key zones for each of the 16 channels. In addition to this, the Jupiter 80 offers Roland's proprietary V‑Link protocol and is compatible with the new MIDI Visual Control specification, both of which allow players to control still images and video clips using MIDI note numbers and CCs. I would love to have access to this technology for my band's stage shows, but as I don't own a MIDI‑controlled video presenter or projector, I can only assume that this works as it should.

Finally, the Jupiter 80 incorporates a USB‑based song player and recorder. Copy a suitable file (or files) to a USB drive and stick it in the USB slot on the control panel, press the Song button and you're ready to go. 

In playback mode, you can fast‑forward and rewind, loop within a file, chain files, alter the sound using the dedicated four‑band EQ, and perform karaoke‑style centre cancellation. You can also alter the playback speed and pitch, and although the algorithms for these functions are not state‑of‑the‑art, they are adequate. 

You can even record audio back onto the memory stick as 44.1kHz, 16‑bit WAV files, mixing your own performance with any audio being presented to the USB or analogue audio inputs. I like this player; it's simple and intuitive, and the independent speed and pitch controls will allow you to work out all those fiddly Emerson Lake & Palmer piano solos that have been bothering you for years [he's not kidding — Ed]. Nonetheless, I'm unimpressed by the repeated exhortation that one should, "never insert or remove a USB memory stick when the power is on”. Given that the same port is used for saving and backing up the Jupiter 80's memories, you would think that an 'eject' command and hot‑swapping would be taken for granted, not precluded with dire warnings of the sky falling in!

 

In Use


The Jupiter 80's keyboard is very pleasant to play: a good compromise given the range of keyboard duties — from grand pianos, to organs, to orchestral imitations, to synth solos — that it will be asked to perform.
I suspect that this is in part because the size and weight of the Jupiter 80 lend the keyboard a reassuring solidity, while the instrument as a whole still remains more manageable than most synths based on 88‑note piano‑weighted keybeds. This has to be a good thing for an instrument designed to spend much of its life on the road. What's more, the fact that the Jupiter 80 boots up markedly faster than a V‑Synth or a Fantom makes it more suitable for live use in at least one other sense: there must be nothing worse than standing on stage and telling the audience you'll start playing again in a few minutes once the keyboards have rebooted! Other things suggest that Roland have thought carefully about on‑stage use, and the area in which this is most obvious is, perhaps, that of patch selection. The 27 large, colourful buttons running behind the keyboard allow you to punch your choices from 54 predetermined Tones and Live Sets into the Parts of the current Registration, and the Tone Remain function holds any existing sound(s) until you release those keys, so there are no glitches on changeover. Better still, you can specify the Tones and Live Sets that are attached to each button, and you're not constrained to sounds that conform to their names, so the system can be very flexible.

So now we come to the sound of the Jupiter 80. Starting with the Supernatural Acoustic sounds, I have strong suspicions that, rather than being a pure 'physical modelling' synth in the way that I would historically have used that term, Supernatural Acoustic is similar to Roland's Structured Adaptive Synthesis (SAS), which built its piano and electric piano sounds using parametric models derived from sample analysis. Given that a similar technology was recently used for the V‑Piano, I emailed Roland to ask whether I was correct and whether the piano sounds in the Jupiter 80 were the same as those found on other Supernatural pianos such as the RD700 and FP7F. I hit a corporate brick wall. The official response was, "we are unwilling to share this information”, and when I asked for a list of the behaviours that comprise Articulative Phrase Synthesis and Behavior Modeling Technology, I obtained the same answer. Nonetheless, the Jupiter 80 is, as Roland claim, a remarkably playable and expressive synthesizer, and many of its Supernatural Acoustic sounds — such as the superb Clavinets, the accordions and the excellent acoustic bass — are impressive, and the subtle but sometimes important performance benefits of BMT and APS shouldn't be overlooked. 

Let's take an example. If you select one of the acoustic guitars — say, 0035:SteelStr Guitar — and isolate this from all the other sounds, you'll find that you can play it conventionally and it will sound much like equivalent patches from elsewhere. You could also switch on the 'Strum Mode', so that chords are strummed in a realistic fashion. But, again, this is nothing new. However, BMT becomes apparent when you play two notes, either one or two semitones apart, quite hard and almost simultaneously. You then hear a realistic glissando from the first note to the second. If the interval is any greater, you obtain a strum or a picked chord, because BMT assumes that any interval above two semitones is fretted separately or played on a different string. Other easily audible examples of APS/BMT can be heard when you play rapid trills or slurs on brass sounds, but there are some Tones for which the effects are either so subtle that I'm missing them or simply not implemented. Unfortunately, given Roland's reticence to discuss the matter, I can't be any more informative.
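
Reduced to a rule of thumb, the steel-string behaviour I heard could be written out as below; this is my interpretation of what the model appears to do, not Roland's specification.

def steel_string_response(interval_semitones: int, near_simultaneous: bool, played_hard: bool) -> str:
    """My reading of the Behavior Modeling on the steel-string guitar Tone."""
    if near_simultaneous and played_hard and 0 < interval_semitones <= 2:
        return "glissando from the first note to the second"
    return "strum or picked chord (notes treated as separately fretted)"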

Turning now to Supernatural Synth, I can only restate how powerful the Jupiter 80 is in this department. You can stack three Partials in a Tone and up to 10 Tones in a Registration to create some monstrous patches combining analogue waveforms, Super Saw and PCMs! Of course, it's much more sensible to create useful Tones and then layer them in Live Sets to create some luscious sounds, but I suspect that the main question on everyone's lips is, 'does it sound like a Jupiter 8?'. As you might expect in a synth called the Jupiter 80, there are numerous Tones, Live Sets and Registrations called 'Jupiter 8 something or other', and a cynic might expect these to be nothing more than an appeal to the gullible. However, notwithstanding a touch of aliasing at the highest pitches, I found the Jupiter 80 to be capable of some remarkably good imitations of the old lady. What's more, these comparisons weren't against dim recollections of how a JP8 sounded when I heard one in a shop in Middlesbrough on a soggy afternoon in 1982... I placed my Jupiter 8 next to the Jupiter 80 and compared them directly. Of course there were differences, but in a blind test of some brass and string patches, I couldn't tell which synth was generating which. This was not what I had expected! Nonetheless, there is, in my view, one significant shortcoming in Supernatural Synth: while you can affect the loudness and brightness of a sound using aftertouch, you can't introduce vibrato, tremolo or growl. For a synth that prides itself on its performance capabilities, this seems a lamentable oversight.

 

Conclusions


You may have expected that something bearing the Jupiter name would offer fistfuls of knobs and sliders, and a signal path harkening back to the heyday of analogue synthesis. The Jupiter 80 does neither. So should we conclude that its name and colour scheme are no more than a cynical marketing exercise designed to drag cash out of the wallets of the unwary? Certainly, it bears no more relation to a Jupiter 8 than a Juno Stage bears to a Juno 60, so it's easy to leap to this conclusion. But you must also remember that the Jupiter name only assumed its current cachet some time after the original series had been superseded by the JX8P and the Super JX10. In 1981, the Jupiter 8 was merely Roland's interpretation of the current state of the art, designed to compete head‑to‑head with the Prophet 5 and Oberheim OBX, so in the sense that the Jupiter 80 is a performance synth based on the latest technology, its name is not inappropriate. Nonetheless, it's going to continue to annoy a lot of people.

Such issues aside, I'm relieved that Roland let me have the pre‑release Jupiter 80 for such an extended period, because at the start of this review I couldn't understand why they had designed such a strange synth. But, as I learned how to approach its Tones, Live Sets and Registrations, and as I began to work with what its effects structure could do rather than complain about what it could not, I started to discover what a remarkably expressive musical instrument the Jupiter 80 is. I also began to realise that, had it been manufactured elsewhere, somebody in the marketing department would have been bouncing up and down and proclaiming loudly the multiple physical models that comprise Supernatural Acoustic, whereas Roland have been commendably conservative in their lack of hyperbole.

Of course, the Jupiter 80 is not for everybody, and if you need a workstation capable of providing a dozen splits with multitimbral effects assignments, you're looking at the wrong instrument. But if you're after something that provides some top‑notch piano and orchestral sounds, a remarkably powerful VA synth that can imitate the best of the real thing, and the ability to build these into complex, involving sonic structures, the Jupiter 80 has a lot going for it. Sitting somewhere between a preset stage piano/organ/synth and a fully featured workstation, it's a brave design, and — like me — potential purchasers need to take the time to overcome their preconceptions of what it should be, and begin to appreciate it for what it really is.