Welcome to No Limit Sound Productions

Company Founded
2005
Overview

Our services include Sound Engineering, Audio Post-Production, System Upgrades and Equipment Consulting.
Mission
Our mission is to provide excellent quality and service to our customers, including customised service.

Wednesday, November 30, 2016

Q. Is it worth recording at a higher sample rate?

By Hugh Robjohns
CD burning.
I've recorded several songs in 24-bit/48kHz. When I went to burn the CD, the application I'm using for burning (Nero) did not recognise the files. I had to convert them to 16-bit/44.1kHz first. So does it make a difference in the audio quality of the final CD tracks to record at the higher rates when it's then converted back to 16-bit/44.1kHz? Maybe it's not worth getting a fancy audio interface after all?

SOS Forum Post

Technical Editor Hugh Robjohns replies: If your end format is destined for a 44.1kHz sample rate — the standard audio CD format (the 'Red Book' standard) is 16-bit/44.1kHz — there is no point in recording at 48kHz in the first place. There is no significant quality gain involved in using a fractionally higher sample rate (48kHz is only eight percent higher than 44.1kHz), and the technical losses and time involved in sample-rate conversion aren't very constructive either.

I would recommend recording your material at 24-bit/44.1kHz and then truncating and re-dithering the finished tracks to 16-bit as the final stage before burning the CD. There is a useful advantage in recording your original material with 24-bit resolution, as this increases the dynamic range available to you. This translates into greater headroom and a reduced risk of overloads and transient clipping.
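
For those who like to see that last step spelled out, here is a minimal Python sketch of the truncate-and-dither stage, assuming the mix is held as floating-point samples in the -1 to +1 range. It uses plain TPDF dither; commercial mastering tools may also offer noise-shaped variants.

```python
import numpy as np

def to_16_bit(audio_float, rng=None):
    """Reduce a float (-1.0 to +1.0) signal to 16-bit with TPDF dither.
    A minimal sketch -- real mastering tools may add noise shaping."""
    rng = rng or np.random.default_rng(0)
    lsb = 1.0 / 32768.0                      # one 16-bit quantisation step
    # Triangular (TPDF) dither of +/-1 LSB, made from two uniform sources
    dither = (rng.uniform(-0.5, 0.5, audio_float.shape) +
              rng.uniform(-0.5, 0.5, audio_float.shape)) * lsb
    # Quantise to the nearest 16-bit value and clip to the legal range
    samples = np.clip(np.round((audio_float + dither) * 32767.0), -32768, 32767)
    return samples.astype(np.int16)

# Example: a quiet 1kHz tone at 44.1kHz, as it might come off a 24-bit mix bus
t = np.arange(44100) / 44100.0
cd_ready = to_16_bit(0.001 * np.sin(2 * np.pi * 1000 * t))
```

Adding the dither before rounding turns low-level truncation distortion into a benign noise floor, which is why this should be the very last process before the CD is burned.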

If you want to start with higher resolution source recordings, possibly with an eye to releasing the material on high-resolution formats in the future, then you have a choice of sample-rate options. Obviously, sample-rate conversion will be needed for a CD release, and many argue in favour of recording at 88.2kHz as this is double 44.1, making the down-sampling relatively simple.

In the early days of sample-rate conversion this made a significant difference to the resulting sound quality, but modern sample-rate converters appear to handle non-integer conversions with no loss of quality at all, and a 96kHz sample rate is more widely used in high-resolution formats. In my opinion, there is very little to be gained in going to higher sample rates, so I would use 24-bit/44.1kHz for a CD-only release (reducing to 16-bit/44.1kHz at the last possible stage), and 24-bit/96kHz for everything else.

Published June 2005

Monday, November 28, 2016

Q. What kind of ear plugs should I wear at gigs?

By Hugh Robjohns
Some generic attenuating ear plugs manufactured by Sensorcom.

I've been coming home from gigs recently with my ears ringing and I'm worried about damaging my hearing. I think it's definitely time to invest in some kind of (preferably unobtrusive) ear protection, but what kind of ear plugs should I be looking at? I still want to be able to hear what's going on but keep my ears out of danger at the same time. I guess I can't wear earplugs when I'm actually performing, but at least I can reduce the chances of permanent damage when I'm watching the other bands. What's your advice?

Patrick Bailey

Technical Editor Hugh Robjohns replies: Hearing damage is directly related to both sound level and length of exposure. So, even if you don't want to wear ear plugs when you're performing, consider wearing them when you're rehearsing, as well as at gigs — it has been suggested that musicians often do more damage to their ears during the many hours of rehearsal than in the comparatively short time they spend on stage.

I would recommend investigating good-quality ear plugs that reduce the overall level of sound but maintain an even spectral balance, so that you can still hear everything clearly. Disposable solid-foam ear plugs won't give you this even balance and will adversely affect your enjoyment of the music. You can often find suitable generic ear plugs in good musical instrument and equipment retailers, sold as 'musicians' earplugs' and available in different strengths (amounts of attenuation). Obviously, the greater the attenuation in dB, the better overall protection they offer.

However, for a really comfortable and long-lasting solution, I would recommend making an appointment with a good audiologist who will be able to take ear moulds and make earplugs to your precise specifications that will be comfortable to wear for long periods and easy to clean and look after. Custom-made earplugs will cost more, but considering that hearing damage is irreversible, if you value your ears the cost should be irrelevant!

More information and advice is available from the RNID (www.rnid.org.uk). The web site of their ongoing 'Don't Lose The Music' campaign (www.dontlosethemusic.com) is aimed specifically at musicians, DJs, clubbers and concert-goers and is linked with two hearing protection specialists — Advanced Communication Solutions, or ACS for short (www.hearingprotection.co.uk), and Sensorcom (www.sensorcom.com) — who can produce custom-fitted ear plugs.
Some custom-moulded ear plugs, manufactured by Sensorcom.





Published June 2005


Friday, November 25, 2016

Pop Shields: Why You Need Them

By Paul White
Pop shields are essential for most modern studio productions, but what are they and why are they so important?

Everyone has heard microphone announcements ruined by loud popping and banging noises, but we never hear these noises when people speak normally. That begs the question, 'What are microphones hearing that we're not?' If these noises are inconvenient during live announcements, they can be disastrous in studio recordings, so how do we go about avoiding them?

How Vocal Pops Occur

It turns out that these pops and thumps occur mainly on what are known as 'plosive' sounds, prime examples being words that start with the letter 'B' or 'P'. If you were to hold a lighted candle in front of your lips while speaking or singing 'plosives', you'd see the flame flicker, because we tend to expel a blast of air when making these sounds. By contrast, if you sing a sustained 'Ahh' sound, the candle will barely flicker at all, because you're mainly just producing sound vibrations with your vocal cords and expelling very little air in the process.

The problem is made considerably worse if the mouth is very close to a microphone. The plosive air blast is obviously strongest close to the mouth, and when it slams into the microphone diaphragm it produces a very large asymmetrical output signal. This may be so large that it can saturate the microphone's output transformer (if present) or overload the mic preamp, making the sound even worse. Engaging the low-cut filter on the mic (ideally) or preamp may ease the overloading problem, but the basic cause of the popping will still remain.

The problem is made even worse because all directional microphones suffer from the 'proximity effect', a bass tip-up which makes the microphone considerably more sensitive to low-frequency sounds from very close sources. A plosive blast is essentially low-frequency energy, and hence it translates into a loud, low-frequency thumping sound.

Capacitor mics of the type we use in the studio tend to be particularly susceptible to popping, because their diaphragms are very light, so some form of effective pop shield (or pop screen) is essential. Dynamic mics are a little more tolerant because of their more massive diaphragm assemblies, but they are by no means immune.

What Is A Pop Shield?

Even the best controlled singers (who naturally turn to one side or back off from the mic when singing loudly or plosively) tend to get microphone popping on occasions, so in most studios you'll see circular nylon-mesh screens that clip to the mic stand and sit a couple of inches in front of the mic. You can see how effective these are by trying the candle trick again. A good loud plosive with a pop screen between the mouth and the candle should barely disturb the flame.

There's nothing special about the construction of most pop shields — in fact, you can even make one for yourself out of a pair of old nylon tights and a wire coat hanger.

There's nothing magic about these screens, and you can use ordinary stocking nylon to make one for yourself if you can devise some way to support it. At one time commercial pop shields were quite expensive, but competition has caused prices to drop to the point where making your own really isn't worth the effort.

The way the pop screen works is simple — sound passes through the fine mesh with just a little high-frequency reduction, but plosives are stopped dead. As the puff of air from the mouth hits the mesh, it breaks up, becomes turbulent, and loses its coherence, so what starts off as an organised mass of air ends up being randomised so that the air molecules are no longer all pushing in the same direction.
It's simple, but it works!

To make the screens even more effective, many designs incorporate two layers of mesh a short distance apart, so that anything that gets by the first layer is mopped up by the second. Such a screen will tame even the worst plosives. However, it is crucial that the windshield is spaced a couple of inches in front of the mic capsule — there has to be a volume of still air between the pop shield and mic capsule.

High-frequency Losses

Although the amount of high-end loss is generally very small, some engineers still feel that nylon-mesh pop shields have too much effect on the sound. Fortunately there's an alternative, which is to use a slightly more widely spaced mesh made from woven or perforated metal. The larger holes have less impact on high frequencies, but the hole spacing is still small enough to convert blasts of air to harmless turbulence. Even a metal kitchen sieve will work, though its looks leave something to be desired!
Nylon-mesh pop shields can cause a slight dulling of the sound at high frequencies, but metal-mesh designs such as this one don't suffer as much.

The reason the wire basket covering the capsule of a typical mic doesn't usually prevent popping is that it sits too close to the capsule to be effective, though some hand-held capacitor models have the capsule set further back to make the mesh more useful — effective though pop screens are, they're too visually intrusive for most types of live performance.

The bottom line is that you should always use an effective pop shield when recording close-up vocals. You don't need one for most instruments (though they can be useful near hi-hats, which expel gusts of air when closing), and you don't need them for recording vocals at a distance, such as choirs, but for typical studio recordings where the singer is only a few inches from the mic, they are absolutely essential.

Wind Shield Or Pop Shield?


Some microphones come with foam wind shields that fit over the microphone grille, but in practice they tend to be ineffective against anything more than a gentle breeze, and they are no match for a full-on plosive. Furthermore, the thickness of foam invariably absorbs some high frequencies, causing the sound to become noticeably duller than it should be. Wind shields can be handy in live performance to stop the mic filling with drool, but they have a very limited effect on popping.


Published May 2005



Tuesday, November 22, 2016

Q. Can I use a Mono compressor for Stereo compression?

By Hugh Robjohns
The Focusrite ISA430 MkII can be linked to a second unit for stereo compression.

I was wondering if it's possible to use a single mono compressor that can be stereo-linked (the Focusrite ISA430, for example) as a stereo compressor, using the following method. Play a mono mix of the stereo signal through the unit and record the Link Out signal; play the left channel through the unit at the same time as the recorded Link Out signal is fed into the Link In input, and record the output; do the same thing with the right channel; combine the two mono recordings back into stereo. I guess I'd have to be pretty careful about compensating for any delays. Also, the ISA430 (for one) doesn't specify what levels the link signals work at. But might this work?

SOS Forum Post

Technical Editor Hugh Robjohns replies: In a word, no! There are a number of problems with the process you are proposing.

Firstly, you are assuming that the signal used to link two units together for stereo operation is a normal audio signal. It might be, but equally, it might be a DC-referenced control signal, and the DC reference would be lost if you were recording the signal into a DAW. Similarly, any gain changes that occur anywhere in the recording and replaying of the link buss signal will upset the compression settings.
Next, assuming that you can record and replay the link signal, there is the danger of disturbing the delicate phase relationships between the left and right channels when each is processed separately and re-recorded. This will upset the stereo imaging. Remember that both audio channels are going through two A-D/D-A stages, both subject to random jitter effects controlled by different clocks at different times. Furthermore, the link buss signal is going through another two conversion stages, twice.

Then there's the delay introduced by the A-D/D-A conversion process to take into account. Remember that the side-chain control signal will have to pass through an A-D stage on recording, and then a D-A stage on replay to control the compressor. The left or right channel passes through a similar pair of D-A and A-D stages. But in order to create the side-chain control signal, the mono sum track used also passes through a D-A stage. Hence, the control signal will be one converter delay out of sync with the original audio, and you risk transient compression errors.

When you also factor in the practical difficulty of optimising the compressor settings, plus the huge amount of time and effort this process will take, it appears to be a futile triumph of technology over sense! Why bother trying to benefit from the sonic quality of a unit like the Focusrite ISA430 when you are inherently trashing it by using the process you describe? Given that pretty much everything you produce will need a pass through a compressor sooner or later, why not simply buy or hire a decent stereo compressor?



Published May 2005

Saturday, November 19, 2016

Q. Is it possible to record in surround on only two tracks?

By Hugh Robjohns
This simplified polar pattern diagram shows how a figure-of-eight mic (red) and a coincident omnidirectional mic (blue) can be combined to produce a directional cardioid pickup pattern (green). Flipping the polarity of the omni reverses the direction of the cardioid pickup.

Would it be possible to use two figure-of-eight mics to create a surround sound recording on a two-track recorder, which could be decoded and mixed later on? The two mics would be placed at 90 degrees capturing left and right and front and back respectively. I'm after a quick and portable way to make surround recordings in the field, and portable recorders with more than two inputs are expensive! As far as I can see this could work, unless there's something I'm missing about figure-of-eight decoding?

SOS Forum Post

Technical Editor Hugh Robjohns replies: There is something you are missing — a third mic to resolve the ambiguity inherent in a figure-of-eight mic over which side of the diaphragm the sound strikes. You cannot derive front-to-back directional information with just two crossed figure-of-eight mics.
Imagine I'm in a studio and you are in a control room, without sight lines between the two. There is a single figure-of-eight mic in the studio and I'm talking into it. How could you tell whether I was talking into the front or back of the mic just by listening? The answer is that you couldn't — a figure-of-eight mic provides signals of identical level from front and rear sources. Yes, the polarity of the signal is inverted between front and rear sources, but without a reference to know which polarity was which, you are no better off.

If you think about it, when you combine the two coincident figure-of-eight mics mounted at 90 degrees, you effectively end up with another 'virtual' figure-of-eight pointing midway between the two original mics. You can use that to provide left-right discrimination — which allows the arrangement to be used for stereo — but you will get the same level of output from a source behind and to the right of the array as you would from one to the left front. That doesn't matter in stereo: in fact it is often very useful that rearward sounds are folded back onto the front. However, it is obviously a problem in surround because we need to be able to have rearward sounds coming out of the rear speakers!


So, what we need to be able to do is create virtual polar patterns that have front-back discrimination, and that basically means creating 'virtual' cardioid patterns. These can be derived by combining an omnidirectional mic's polar response with a figure-of-eight.

If you have two coincident figure-of-eights, facing left/right and front/back, and you add to that a coincident omnidirectional mic, you can combine them in various ways to create virtual cardioid patterns pointing in almost any direction you like. Let me explain how...

Consider what happens if you mix together the output of an omnidirectional mic and figure-of-eight, the two capsules being coincident. The omni mic picks up sound from all directions equally. The figure-of-eight picks up sound only from the front and back, rejecting sound sources to the sides, and its rearward pickup is in the opposite polarity to the front.

If we arrange the polarity of the omni to be the same as the front of the figure-of-eight mic, then when you mix the two mic outputs together their contributions will add together for frontal sound sources. So the resulting 'virtual mic' will be very sensitive to frontal sound sources.
Sources to the side are not picked up at all by the figure-of-eight mic, but the omni still hears them. So the 'virtual mic' created by the combination of the two is not as sensitive to sounds from the sides as it was from the front, but it does still hear them.

The four coincident capsules of a B-format Soundfield mic, and a diagram showing the intersecting pickup patterns of the three figure-of-eight capsules (X, Y and Z) and one omni-directional capsule (W), courtesy of Soundfield.

Sources to the rear are heard by both the figure-of-eight and omni, but the figure-of-eight's output is of the opposite polarity to the omni. Hence, when the two outputs are combined they will cancel each other out. Thus the virtual mic hears nothing at all from the rear, and if you draw this polar pattern out accurately you'll discover that we have just created a cardioid microphone (facing forwards). You can also create other first-order polar patterns (sub-cardioid, hypercardioid and so on) by varying the ratio of the omni and figure-of-eight contributions (in other words changing the gain of each).
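
If you prefer numbers to polar diagrams, this short Python sketch models the idealised responses described above (omni = 1 everywhere, figure-of-eight = cos θ) and mixes them in equal proportion; the angles and the 0.5 scaling are purely illustrative, and changing the ratio of the two gives the other first-order patterns mentioned.

```python
import numpy as np

theta = np.radians(np.arange(0, 361, 45))   # angles around the coincident pair
omni = np.ones_like(theta)                  # hears all directions equally
fig8 = np.cos(theta)                        # +1 front, -1 rear, 0 at the sides

cardioid = 0.5 * (omni + fig8)              # equal mix: forward-facing cardioid
rear_cardioid = 0.5 * (omni - fig8)         # flip the polarity: faces backwards

for angle, sens in zip(np.degrees(theta).astype(int), cardioid):
    print(f"{angle:3d} deg: {sens:.2f}")    # 1.0 at the front, 0.5 sides, 0.0 rear
```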

So you see that by introducing an omni mic into the array, you can resolve the inherent front/rear ambiguity of the figure-of-eight pickup patterns, by converting them into cardioid patterns, and by adjusting the ratios and polarities of the signals from the two figure-of-eights, you can make those cardioids face in pretty much any direction you like. You can now generate any number of virtual microphone outputs which 'hear' only the sources in front of them — front left, centre front, front right, rear left and rear right, for example.

This is the basis of horizontal Ambisonic encoding — the 'B-format' — as used by the Soundfield mic. The omni component is called W, the front/back figure-eight is called X and the left-right figure-eight is called Y. The Soundfield mic also adds a third figure-eight element for the up/down axis (called Z), although this isn't really needed in most surround applications. You can read more about the Ambisonic approach in SOS October 2001.

Another (arguably more practical) way of recording horizontal surround sound is the MSM format. This uses the same basic concepts, but is constructed from a pair of matched cardioids facing to the front and rear, plus a sideways-facing figure-of-eight. Again, all three should be coincident. The front cardioid and the sideways figure-of-eight are decoded as a conventional M&S pair for the frontal sound stage, while the rear cardioid and the same figure-of-eight are decoded as another M&S pair for the rear channels.
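
As a rough illustration of that decode, here is a sketch of the two back-to-back M&S sums, assuming the sideways figure-of-eight's positive lobe faces left and that all three capsules are gain-matched; a real decoder may scale the Side signal differently to adjust the image width.

```python
def decode_msm(m_front, m_rear, side, width=1.0):
    """Decode an MSM recording (front cardioid, rear cardioid, sideways
    figure-of-eight) into four speaker feeds, as two M&S pairs.
    Assumes the figure-of-eight's positive lobe faces left; 'width' scales
    the Side contribution, as in conventional M&S decoding."""
    s = width * side
    front_left, front_right = m_front + s, m_front - s
    rear_left, rear_right = m_rear + s, m_rear - s
    return front_left, front_right, rear_left, rear_right
```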

However, whether you adopt the crossed figure-of-eights plus omni approach (WXY format), or the front/rear cardioids plus figure-of-eight approach (MSM format) you do need to be able to record at least three channels, and there is no getting away from that!

If you really want to encode surround onto a two-track machine, you have to use some form of phase/amplitude matrix system like Dolby Pro Logic or RSP Circle Surround. However, while these formats are acceptable for final mixes delivered to the end user, they are far too restrictive for source recordings because you can't easily manipulate the signals to alter the spatial surround characteristics later on, as you can with either of the three-channel surround recording techniques described above.


Published December 2005

Thursday, November 17, 2016

Power & Electrical Safety On Stage

By Mike Crofts
PA Basics
Staying safe on stage is more than a matter of simply making sure that willing hands are available before taking a dive. Knowing how to properly handle the mains power we all need is also crucial to performance health...

Whatever the size, complexity or cost of your live sound rig, one of the first — if not the first — question on your mind when you get to a venue will usually be "where do I plug it in?" Depending on the venue, the answer can vary from a wall-socket behind a plant pot to a dedicated and professionally-installed supply that is reserved for your exclusive use, fully tested and certificated, and for which (with any luck) you'll have brought an appropriate connector. Whatever you encounter, you'll need to know some basic rules. When it comes to portable live-sound systems, this means firstly, using a suitable electrical supply; secondly, using suitable equipment; and, thirdly, connecting and using that equipment safely.

How much power will the average band's gear actually need? The only way to know for sure is to add up the power requirements of each individual item. Photo: Mike Crofts

How Much Mains Power?

What constitutes a suitable supply will depend, of course, on what you need to plug into it: if it's your own equipment you'll presumably know what supply capacity is required, but there may be other factors to consider if additional gear needs to be connected to the same supply. Such gear might include a visiting disco, a lighting rig, or other event equipment — for example, fridges at summer events.

A good first step, then, is working out what current your equipment will draw from the mains. The power rating of each piece of gear should be stated on a panel fixed close to the mains connector, or where a fixed mains lead enters the equipment. The power rating may be expressed as a current (in Amps) or as a power figure in Watts. It's generally best to work out the total current your gear will draw, adding up all the individual figures to find the total load you'll be connecting to the mains. To convert Watts to Amps, divide the Wattage figure by 230 (mains voltage). As an example, a piece of equipment with a mains power rating of 100 Watts (not 100W of audio power) will draw a little under half an Amp. In a small venue that is only offering 13-Amp sockets of the normal domestic type, you can then work out how you need to wire up. If the total connected load of your system — including the backline equipment — is comfortably within the rating of a single or double 13-Amp socket, it's perfectly alright to connect it all from a single point. After all, that's what they're designed for! Try to avoid too many connections between this point and your equipment. It's much better to have a single power lead of the required length than two shorter ones joined together: less to go wrong!

A professionally made distribution box with meters to indicate AC mains voltage and current. Photo: Mike Crofts

Working It Out

One common mistake is assuming that audio output power is the same as the mains power required to operate the gear. If an amplifier were 100 percent efficient, you could, in theory, use all the mains power as audio output power, but this is not the case in practice, as some of the power used by the amplifier is dissipated as heat. A typical full-range 'active' speaker with built-in amp modules, rated at 240 Watts audio output, would have a mains power rating somewhere around 350 Watts. A useful rule of thumb (if you don't have the manufacturer's stated figures) is to multiply the audio output power by 1.4 to get an idea of how much mains power would be needed, then divide by 230 to find out the current consumption.

The table below gives a rough guide to the supply current likely to be required by a band with three backline amps and a vocal PA (based on UK voltage). Bear in mind that equipment may demand a much bigger supply current when it is first switched on, so don't be tempted to turn everything on from a single switched socket — you wouldn't want to do this anyway, for many other reasons, such as risking a huge pop through your speakers! Also consider that the power that you can safely run your system on may not be enough to realise its full performance capability. Any system capable of delivering good bass power will need to draw a hefty current from the mains, and if, in the above example, we were to replace our typical small speakers with, say, a pair of Mackie SA1521s, the makers recommend that each speaker's mains supply is capable of providing seven Amps at 230 Volts! This is, of course, not a constant current requirement, but it does illustrate how important a good power source is for getting the best from your gear.

Equipment                    | Mains power needed       | Mains current needed
2 x 240W active PA speakers  | 480W x 1.4 = 672W        | 672 / 230 = 2.92 Amps
Mixer                        | 100W (stated)            | 100 / 230 = 0.44 Amps
2 x rack processors          | 20W each (stated) = 40W  | 40 / 230 = 0.17 Amps
3 x 100W backline amps       | 300W audio x 1.4 = 420W  | 420 / 230 = 1.83 Amps
TOTAL = 5.36 Amps; ROUNDED UP = 5.5 Amps
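
If you prefer to script the arithmetic, the short Python sketch below reproduces the table's totals using the x1.4 rule of thumb and the 230-Volt UK mains figure; the wattages are the example values from the table, not measured data.

```python
MAINS_VOLTS = 230.0

def mains_watts(audio_watts):
    """Rule of thumb: mains draw of an amp or active speaker is roughly audio power x 1.4."""
    return audio_watts * 1.4

rig = [
    ("2 x 240W active PA speakers", mains_watts(2 * 240)),  # 672W
    ("Mixer",                        100.0),                 # stated rating
    ("2 x rack processors",          2 * 20.0),              # stated ratings
    ("3 x 100W backline amps",       mains_watts(3 * 100)),  # 420W
]

for name, watts in rig:
    print(f"{name:30s} {watts:5.0f}W  {watts / MAINS_VOLTS:5.2f}A")
print(f"TOTAL: {sum(w for _, w in rig) / MAINS_VOLTS:.2f}A (round up for safety)")
```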

User Beware

While we're talking in Amperes, it's worth remembering that electrical current is a dangerous animal; a current of only 50 Milliamps (0.05 Amps) can be fatal, and our typical small rig above is drawing over a hundred times more current than this. Safety is thus a huge consideration, and the use of a suitably rated supply is only the beginning. The best way to stay safe is to use only well-maintained equipment (including cables and connectors) that is properly designed for the task in hand, and to make sure that it is used as the manufacturers intended.

A professionally made distribution box with four 16-Amp outputs, all RCD protected, for limited outdoor use. Photo: Mike Crofts

If the venue in question is unfamiliar to you and you are responsible for providing and operating the PA, always check that the supply you're asked to use is suitable. Just because it's a 13-Amp socket doesn't mean that it's capable of supplying 13 Amps: it may have been DIY-installed as a spur from a domestic ring main, originally to light a garden shed or run a fountain or something! If you're operating in any kind of business or commercial premises, they should have an up-to-date electrical safety certificate. A quick look at the distribution board or consumer unit should show the overall current rating of the circuit you'll be using, and you can also see if it uses old-style wired fuses or the more modern MCBs (Miniature Circuit Breakers), which react more quickly if the rated current is exceeded.

Fuses and MCBs do not protect you from electric shock, so always make sure that your system is fed via a residual current device (RCD). This could be at the main board/box, on the socket itself, or at the point where a separate spur is fed. If you're not sure that this is the case, use your own RCD, either as a plug type or one of the RCD plug-in adaptors readily available for a few quid from any electrical retailer. The RCD should be as far 'upstream' as possible so that it protects as much as possible, and wherever it is, make sure you test it before use, by using the built-in test button. If it doesn't seem to work, find another!

A final word on RCDs: they are there as a backup in case anything goes wrong, not as a substitute for poorly-maintained, faulty or unsuitable equipment.

Going Through A Phase

Most small venues are likely to have a single-phase supply, as will normal domestic premises, and for the purposes of this basic article we won't be looking at the whys and wherefores of 'three-phase' systems, other than to point out that all the sound equipment should be connected to the same phase. Any other electrical equipment, such as lighting, should also share the sound-system phase if it is possible for a person to come into physical contact with both systems — for example, to touch the lights and a guitar at the same time. It's a given that the venue's technical staff should supervise any connection to a three-phase supply.

Distribution Deal

A damaged mains plug recently discovered at the bottom of the cable trunk — to be disposed of straight away! Photo: Mike Crofts

Having found a suitable supply point, you now have to feed it to all your equipment. For all gigs where a 'proper' supply is available, I use a professionally made portable distribution box, which has a single 32-Amp inlet and 32-Amp breaker, feeding four 16-Amp outlets, all via separate combined RCD/MCBs. Although, on the face of it, I've got four 16-Amp outlets, giving a total of 64 amps, I can only use 32 Amps overall, with each outlet limited to 16 Amps. I run my front-of-house speakers from two of these feeds, the monitors and desk from the third, and the stage backline from the fourth. This splits up the load and ensures that each feed is fully RCD protected. As mentioned earlier, it is always best to have an RCD as far upstream as possible, and I would ensure that my original 32-Amp source incorporated suitable protection if available.

For smaller indoor events, a single 13-Amp fused RCD plug feeding into a multi-way distribution board (four or six sockets) is fine, as the total current can't exceed the 13-Amp fuse rating in your RCD plug. From this distribution board you should try, where possible, to connect direct to equipment, or feed the equipment in logical groups. Normally, you can take the initial feed from a socket at the back of the stage and run all your backline straight from this, with one feed going off to the PA system. If you need to use more than one socket in a small venue, ensure that all your signal connections are balanced, and never, ever remove an earth connection to get rid of hum or noise. Also take care when using those 'flying saucer' extension reels. They are very useful and neat, but remember that their maximum current-carrying rating only applies when the cable is fully unwound.

Care and Maintenance

All leads, connectors and equipment should always be checked before use, even if this is a quick visual check for any obvious signs of damage. If it's your own gear, you'll know it's all correctly fused, but it's best to check if you're not sure. Cables should be undamaged along their entire length and plugs should be securely clamped on, with no inner conductors visible. Cables with moulded plugs are a common sight nowadays, but these plugs cannot ever be re-used, and if damaged or removed for any reason they must be thrown away — preferably after destroying them so that an unaware person can't find one and plug it in. If anything looks faulty, then it probably is. Remove it from service and make sure it can't be used again until it has been repaired and tested.

All electrical equipment, including cables and connectors, should be stored and used in dry conditions unless it is designed for outdoor wet-weather use and carries an appropriate IP rating (for mains connectors this will usually mean industrial 'Ceeform' types — coloured blue — rated either IP44, which is splashproof, or IP67, which is waterproof).

Never be tempted to 'lift' the earth wire for any reason. If you find a plug like this one, with the earth disconnected, don't use it. Photo: Mike Crofts

Summing Up

We've covered the basics of finding a suitable supply and connecting the gear to it, but there are other things to consider when rigging. Cable runs need to be thought out to avoid or minimise trip hazards, and a generally neat cabling job will be much easier to troubleshoot than a spaghetti surprise. Don't forget the rule 'signal before mains'. Connect the power leads last and switch on after everything has been connected in the signal path (with the master levels down, of course). Turn on your power amps last of all, and switch them off first when powering down the system.

In this article I've taken a very basic and superficial look at the power side of live sound. There's a lot of additional good advice to be found, and it's well worth taking a professional approach and discovering as much as you can. After all, if you were going to jump out of an aeroplane you would, presumably, want to know that your parachute was (a) of the correct type; (b) correctly installed on your person; and (c) recently tested! Electrical power is a serious business, so if in doubt, ask a qualified electrician. If you don't know one personally, someone you know will, or you can look one up in the phone book.

There are also plenty of useful pages on the Internet. The UK's Health and Safety Executive web site, for example, has a lot of relevant information and links to some very good guidance publications. Check out www.hse.gov.uk.

Safety Checks

All portable electrical equipment should be periodically tested for electrical safety, and 'PAT' (Portable Appliance Testing) records kept. Some venues will not allow you to use anything which hasn't been properly tested. Get a quote from your local electrician for testing; it's not expensive and is well worth it for the peace of mind. Photo: Mike Crofts
Your visual examination, before connecting any equipment, every time you're about to use it, should include checks for:
  • Damage to cables or plugs, including cuts, cracks, abrasions, bent or missing pins.
  • Previous repairs or modifications, including exposed or taped-up cable joins and unsuitable connectors.
  • Exposed inner conductors where the cable enters the mains plug.
  • Signs of damage to casing and covers.
  • Obvious signs of previous problems; for example, signs of water, moisture or heat damage.
A visual check on a regular basis (by a competent person, such as a qualified electrician or someone with appropriate training) should include taking the cover off each mains plug and checking that:
  • All wires are firmly attached (screws nice and tight) to the correct terminals, with no bare wires showing.
  • The cable outer sheath is firmly gripped by the cord grip.
  • There is no debris or signs of damage internally.
Electrical testing on a regular basis (by a professionally-qualified and suitably trained person) normally includes all of the above, plus:
  • Additional testing of earth integrity and insulation.
  • Test results recorded and appropriate labels attached to the equipment.
  • Failed equipment identified for disposal or repair.


Published December 2005

Wednesday, November 16, 2016

Q. What is the difference between mono with one speaker and mono with two?

By Hugh Robjohns
I read recently that when top engineers check their mixes in mono, they don't just hit a mono switch, but instead route the mix through a single speaker to hear it in true mono. What's the difference between the two?
A single speaker in a sealed enclosure is the classic means of monitoring in mono.

SOS Forum Post

Technical Editor Hugh Robjohns replies: It's important to check the derived mono signal from a stereo mix to ensure that nothing unexpected or unacceptable will be heard by anyone listening in mono, as could be the case in poor FM radio reception areas, on portable radios, in clubs, on the Internet and so on. Mono compatibility, as it's called, is very important for commercial releases — the artist, producer and record company want the record to sound as good as possible in these less-than-ideal circumstances.

In addition to simply checking the finished product, mixing in mono — or regularly switching the monitoring to mono while mixing — is very useful and a good habit to get into. Summing to mono removes any misleading phasing between the left and right signals that can make a stereo mix sound artificially 'big'.
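
If you work with audio files offline, one quick way to quantify this is to compare the level of the mono sum with the individual channels, as in the Python sketch below; the function name and the tiny safety constant are illustrative only. A large drop flags out-of-phase content that will cancel when summed.

```python
import numpy as np

def mono_compatibility_db(left, right):
    """Level of the mono sum relative to the average channel level.
    Strongly negative values mean content is cancelling in mono."""
    rms = lambda x: np.sqrt(np.mean(x ** 2) + 1e-20)
    mono = 0.5 * (left + right)
    return 20 * np.log10(rms(mono) / (0.5 * (rms(left) + rms(right))))

t = np.arange(44100) / 44100.0
sig = np.sin(2 * np.pi * 220 * t)
print(mono_compatibility_db(sig, sig))    # ~0 dB: fully mono-compatible
print(mono_compatibility_db(sig, -sig))   # hugely negative: vanishes in mono
```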

The crucial difference between auditioning the summed mono signal on a single speaker, as compared to a 'phantom' mono image between two speakers, relates to the perceived balance of the bass end of the frequency spectrum. When you listen to a mono signal on two speakers, you hear a false or 'phantom' image which seems to float midway between the speakers, but because both speakers are contributing to the sound, the impression is of a slightly over-inflated level of bass. Listening to mono via one speaker — the way everyone else will hear it — reveals the material in its true form!

Checking the derived mono is always best done in the monitoring section of the mixer or with a dedicated monitor controller. Although a mono signal can be derived in the output sections of a mixer (real or virtual), this is potentially dangerous — if you should forget to cancel the mono mixing, you'll end up with a very mono final mix. It does happen, believe me! Sadly, very few monitor controllers outside of broadcast desks and related equipment provide facilities to check mono on a single speaker. Most provide a phantom mono image, which is fine for checking imaging accuracy and phasing issues, but no good for checking the mono balance.


Published November 2005

Monday, November 14, 2016

Q. What is that 'robot voice' effect?

By Steve Howell
Modern software vocoders like Native Instruments' Vokator are far more sophisticated than their hardware forebears.

In the recent TV ad campaign for Marks & Spencer, they use the Electric Light Orchestra track 'Mr Blue Sky'. There's a distinctive robotic vocal sound in it that I am curious about. How was it made? I was thinking at first that it was something like Auto-Tune (as on the annoying Cher single!) but the ELO record was made years before that. Or is it a remix? (I'm not old enough to remember the original!)

Danny Finn

SOS contributor Steve Howell replies: The effect is created using a device known as a vocoder, which is short for voice encoder, though it was also briefly known as a 'voder'. Like so many things in this business, the vocoder dates back many decades and, again like so many things in this business, is derived from telephonic communications technology!

It was originally developed by Homer Dudley of Bell Labs in the '40s as a means to compress audio for transmission down copper telephone lines. Later, one Werner Meyer-Eppler of Bonn University saw the potential for the vocoder in the then-emerging genre of electronic music.
Basically, a vocoder has two inputs: a modulator and a carrier. The modulator is usually fed by a microphone, typically with sung or spoken words, and the carrier will take a bright, sustained synth sound. Chords are played into the carrier input and words are spoken (or sung) into the modulator. The spoken/sung words are electronically imposed on the carrier signal, to create the effect of the synth speaking or singing. So how does this magic work?

The carrier signal is split into different frequencies, using very tight band-pass filters (not unlike those in a graphic equaliser), and each of these feeds a voltage-controlled amplifier or, more recently, a digitally controlled amp. The modulator input is similarly split into different frequencies, and on the output of each of the modulator's band-pass filters is an envelope follower that opens and closes the corresponding amplifier on the carrier path (see diagram). Thus, if you were to say 'ooaaah' into the modulator, the lower filters on the modulator would activate and open the lower filters on the carrier's signal; as the modulating signal moved into 'aaaah', the modulator's higher filters would be activated, in turn opening the carrier's upper filters and creating the illusion of vocals.
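
The same signal flow can be sketched in a few lines of Python with SciPy — a deliberately crude, illustrative channel vocoder rather than any particular commercial design: matched band-pass filter banks on the carrier and modulator, a rectify-and-low-pass envelope follower per band, and band-by-band gain control of the carrier.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def vocode(modulator, carrier, sr=44100, n_bands=10, lo=80.0, hi=8000.0):
    """Toy channel vocoder: the modulator's band envelopes gate the
    matching carrier bands. Real designs add noise for sibilants."""
    edges = np.geomspace(lo, hi, n_bands + 1)          # log-spaced band edges
    env_lp = butter(2, 50.0, btype="lowpass", fs=sr, output="sos")
    out = np.zeros_like(carrier)
    for low, high in zip(edges[:-1], edges[1:]):
        band = butter(4, [low, high], btype="bandpass", fs=sr, output="sos")
        envelope = sosfilt(env_lp, np.abs(sosfilt(band, modulator)))
        out += sosfilt(band, carrier) * envelope       # envelope opens the band
    return out / (np.max(np.abs(out)) + 1e-12)         # normalise

# Example: 'sing' a square-wave chord (carrier) with a gated noise burst (modulator)
sr = 44100
t = np.arange(2 * sr) / sr
carrier = sum(np.sign(np.sin(2 * np.pi * f * t)) for f in (110, 165, 220))
modulator = np.random.default_rng(0).normal(size=t.size) * (np.sin(2 * np.pi * 2 * t) > 0)
robot = vocode(modulator, carrier, sr)
```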

The number of filter bands the vocoder has is crucial. In the early days of analogue vocoders, for technical reasons (and reasons of cost) they typically only had around 10 bands, making speech somewhat unintelligible. More recent developments using modern DSP allow vocoder designers to include almost any number of filters, meaning that intelligibility is greatly improved, although they still sound like vocoders. Of course, these filters only really deal with the vowel components of a sound; to cater for sibilants and fricatives, such as 's', 'b' and 'p', noise generators are sometimes used, which are triggered when the modulator detects them. While they help, they are still not convincing.

Vocoders were grossly over-used to the point of cliché in the '70s ('Mr Blue Sky' being a prime example!) and they subsequently fell from grace. However, they can be responsible for some stunning sounds, and one only has to listen to Herbie Hancock's use of his Sennheiser vocoder in his brief foray into dance/disco music in the '70s and '80s to confirm this. Feeding the vocoder with an impeccably phrased Minimoog, Hancock created perfectly realistic and fluid lead vocals but with a curious robotic quality. He multitracked these to create harmonies and backing vocals to stunning effect.

More recently, the vocoder has become much more than just a 'speaking synth' effect, and people routinely now feed drum loops into the modulator to rhythmically chop the carrier signal.
Prominent vocoder manufacturers of old were EMS (whose products are still on sale), Moog (who I believe based their design on the vocoder Wendy Carlos constructed out of discrete modules on her giant Moog modular), Sennheiser, Korg and, of course, Roland, with their famous Vocoder Plus.
More recently, there has been a veritable glut of software vocoders on the market, some free, some shareware and some payware. Notable examples are Akai's DC Vocoder and Native Instruments' Vokator, both of which offer outstanding intelligibility and flexibility.


Published November 2005

Friday, November 11, 2016

Q. What is the best way to mike up a clarinet?

By Hugh Robjohns
DPA clarinet mic.

I'm in need of some help and advice. I was working with a group at the weekend who have a wind player in their line-up. She alternates between flute and clarinet. I had no trouble miking up the flute, but when it came to the clarinet I just could not get a signal that would cut through the mix, despite using a second mic. The mics I was using were a JTS small-diaphragm condenser (cheap but usually effective) and a Shure SM58 pointed towards the bell of the instrument, but slightly off-axis. I have spoken to another engineer who has also worked with the group and he confirms that he also had problems with the clarinet.

Any suggestions?

SOS Forum Post

Technical Editor Hugh Robjohns replies: The clarinet has the widest dynamic range of any orchestral instrument, so it's no wonder you had problems! First of all, don't try placing the microphone anywhere near the bell. Most of the sound of the clarinet comes from the unstopped finger holes, so if you focus on the bell you'll be missing 70 percent of the instrument's harmonic range. Instead, you need to place a mic where it can 'see' the entire body of the clarinet.

Try placing the small-diaphragm condenser mic about a foot away and aiming more or less at the lower hand position. You can experiment with the height, angle and distance of the mic until you find a balance you're happy with, but it may also be worth having a quiet word with the player to explain the problem, in order to extract a more appropriate performance in terms of dynamics and consistent positioning relative to the mic.

In a live situation, miking may be more difficult because of spill, feedback or over-exuberant performers, amongst other things. If so, you might find it easier to use a clip-on instrument mic. There are lots on the market that will hold a small mic in a reasonable position and keep it there regardless of the gyrations of the player.

Published November 2005



Saturday, November 5, 2016

Q. How can I make the most of a small studio space?

By Tom Flint
When space is limited, you need to think carefully about how to lay out your studio.

I'm just about to set up a studio in a room of my house and I was wondering if you have any tips on where to put everything. I have quite a few keyboards, sound and effects modules, guitars and a computer workstation, and I also record vocals, so I need to provide for all those things. The room I have is quite small and I know it'll never be perfect in terms of acoustics, but can you offer any advice on general layout?

Tony Mendelle

SOS contributor Tom Flint replies: If your workspace doesn't feel right, you'll probably find it difficult to commit yourself to doing any work. Get it right, on the other hand, and hopefully inspirational recordings will follow!

The way I see it, a typical home studio has three important spots where most of the activity takes place, which we'll call the engineering seat, the performance seat and the listening seat, and it's a good idea to start by thinking about where these might be best situated.

The engineering seat is where you'll be when working at a computer, hardware multitracker or stand-alone sequencer. This is where the majority of editing, mixing and programming takes place, and is therefore somewhere you are likely to be for long periods of time. The performance seat is the place where music is created and played, perhaps using a workstation keyboard, a guitar or a simple MIDI controller. It can even be the same place as the engineer's seat if you use a desk with a sliding keyboard shelf.

The last significant spot is the listening seat. Even the smallest studio needs one, in my opinion. It is where you can relax with a cup of tea, listen to your music and ponder. Without one, your studio will merely be a place of work and you'll be tempted to leave it whenever you feel jaded. Naturally, this position should be angled towards the studio's speakers, but you won't be doing any really critical listening from here so you needn't worry too much.

For efficient working you'll want to have the most frequently used bits of kit within reach from the key working positions. If, for example, you are using hardware multi-effects processors patched into the send/return loop of a workstation, you'll want to be able to easily reach the effects parameter controls during mixing. Many pro studios tend to have racks of gear under the desk, but I avoid this arrangement at all cost. If, like me, you find touching your toes a struggle, you'll be much more comfortable having everything up at eye level. Not only does this leave leg room below, it also makes the programming of effects units or rack synths a more appealing prospect, allowing you to get the most from your equipment without having to move from your playing or mixing position. The space under the desk need not be wasted though. I use mine to store boxes and unused bits of kit, and it is also home to my two guitar amps, which I don't need daily.

If you are recording your own vocals and use hardware preamps, it is also vitally important to have them close to hand when standing near the mic, so that adjustments can be made while singing. In my setup, for example, I have a 10U rack of gear seated on top of my desk so that all my effects and preamps are between three and five feet high, which is ideal for checking settings and making adjustments. If the preamps are feeding a multitracker or computer workstation, it is also useful to be able to see the input meters on the display screen from the vocal position, at least while levels are being established. Once again, if everything is to hand, excessive levels can be curbed by simply reaching for the preamp output level control. It's often possible to use footpedals to remotely adjust transport controls when performing, but it is well worth arranging things so that recording devices can be manually operated too.

A comfy chair or two can make your studio a much nicer place to be.

Having the right desk surface can make all the difference, and often the most effective option is to build your own so that it fits the room, your equipment and your own physical stature. I built my desktop at a height that allows me to comfortably play my master keyboard either when I'm standing up or when seated on a stool. In fact, I have everything, including my multitracker, placed on the same desk so that it can be operated from both positions. You'll save money by building your own desk, too. My desk, which is nine feet long and two feet deep, has a frame built from thick planed pine and an MDF top, and it probably cost less than £40 in materials.

Putting up some shelves will give you somewhere to put boxes of leads and adaptors, old copies of SOS and other general studio clutter. The side benefit is that by placing them carefully they can also be used as a crude, but often effective, form of acoustic treatment. You might also consider hanging your guitars on the wall to free up floor space and mounting monitors on wall brackets to free up the desk.
One of the most limiting factors governing your studio layout will be the interconnectivity of the gear. Every studio requires power, audio I/O and, in most cases, a network of MIDI connections. For mains power, I like to wire up my own plug-boards so that I can cut the cable to the required length and run it around the edge of the room, well away from doorways. In fact, it's a good idea to make sure that there aren't going to be any cables running in front of doorways, cupboards and so on, otherwise sooner or later they'll get hooked on a fast-moving foot and will do some damage to you or your equipment.

Furthermore, a patchwork of wires makes vacuuming difficult. Being able to clean a studio effectively is an important consideration, because dust can seriously damage audio equipment such as mics or faders.
MIDI leads are best kept as short as possible, because the data stream doesn't include any error correction and errors become more likely the longer the cable is, so it's worth planning the MIDI layout early on. If MIDI Merge or Thru boxes are used in the system, these can be carefully placed to act as hubs for the network.

It's possible to buy both audio and MIDI leads in a variety of colours and this is well worth doing, because it makes it far easier to distinguish each cable from the rest. Otherwise it's a good idea to label each lead using stickers wrapped around either end. Labels can be used for mains plugs too.

Before building your studio, it's also worth remembering that bad posture and poor ergonomics can cause repetitive strain injury (RSI) and other uncomfortable conditions. For more on the subject, check out the feature SOS ran in the January 2002 issue (www.soundonsound.com/sos/Jan02/articles/studioergonomics.asp).

Published November 2005

Wednesday, November 2, 2016


Q. Why is the signal louder when it is panned to the centre?

By Hugh Robjohns
Different mixers employ different panning laws.

When I plug my guitar into my 16-track and send the same signal to two channels, if I pan both channels to the middle it sounds louder than if I pan one all the way left and one all the way right. Surely it should sound the same — if they are both in the middle, the signal is coming through both speakers, and if one is panned left and one right, it's still coming through both speakers. Can you explain what's going on?

SOS Forum Post

Technical Editor Hugh Robjohns replies: Panning laws vary between products, depending on whether they are designed to maintain constant voltage, constant power, or a compromise between the two. The compromise version is probably the most common these days, with pan pots designed to provide something like a 4.5dB attenuation when at the centre. Constant power gives 3dB of centre attenuation, while constant voltage gives 6dB.

If you pan identical signals fully left and right, you have full-level signals in each output channel. However, if you pan the signal to the centre, the left and right outputs will be attenuated by (in the case of the common 'compromise' panning law) 4.5dB. But because you have panned both input channels to the centre, each output channel is receiving two lots of signal, each 4.5dB lower than the level of a single channel panned fully left or right. Since your two signals are identical, they will sum together and the level will rise by 6dB. So we go up 6dB from -4.5dB and find that each output channel is now carrying a summed mix of +1.5dB. Hence, each output channel is now carrying a signal that is 1.5dB higher than it was when you panned the channels individually left and right, so it will sound slightly louder.

For the record, if the mixing desk employed the constant power law, with 3dB central attenuation, the two channels panned centrally would produce an output of +3dB in each channel, while a desk with the constant voltage law would produce an output that was exactly the same level as the fully panned channels (in terms of signal voltage, at least).
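
The arithmetic for all three laws is easy to check; this short Python sketch (illustrative values only) prints the summed centre level relative to a single hard-panned channel.

```python
import numpy as np

def summed_centre_level_db(centre_attenuation_db):
    """Bus level when two identical channels are both panned to the centre,
    relative to one channel panned hard left or right."""
    gain = 10 ** (-centre_attenuation_db / 20.0)   # per-channel centre gain
    return 20 * np.log10(2 * gain)                 # identical signals sum to double

for law, att_db in [("constant voltage", 6.0), ("compromise", 4.5), ("constant power", 3.0)]:
    print(f"{law:16s}: {summed_centre_level_db(att_db):+.1f} dB")
# constant voltage: +0.0dB, compromise: +1.5dB, constant power: +3.0dB
```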

The constant power panning law is used where you want a panned signal to remain at more or less the same perceived volume regardless of where you pan it. However, this panning law looks wrong on the desk meters, which only show a constant level if you use the constant voltage law! Hence the halfway-house compromise law, which tries to satisfy the demands of both situations reasonably well.



Published November 2005

Tuesday, November 1, 2016

Q. Can I connect an AES output to an S/PDIF input?


The Liquid Channel offers AES digital in/out on XLRs.

I own a Focusrite Liquid Channel (which is great!) and would like to connect it to my MOTU 828's digital input. However, on the digital side of things the Liquid Channel only has an AES-EBU input and output, and the MOTU 828 only has S/PDIF and ADAT digital connections. How can I connect them together?

Bernhard Wagner

Technical Editor Hugh Robjohns replies: The proper way to connect AES to S/PDIF is to use a dedicated digital format converter, of which there are plenty around (although some older designs only pass 16 bits rather than the full 24, so check before buying). The M Audio CO3 and the Behringer Ultramatch and Ultramatch Pro probably represent the most affordable options.

However, although it is a rather makeshift solution (a 'bodge', to use the technical term), if you only need the signal to travel a fairly short distance — say, no more than two metres — you can get an AES output to feed an S/PDIF input with just a simple XLR-to-phono cable. Wire pin 2 of the XLR to the tip of the RCA phono jack, and pin 3 of the XLR to the sleeve of the phono. Pin 1 of the XLR should remain wired to the cable screen at the XLR end, but leave it disconnected and insulated at the phono end.

Strictly speaking, the Channel Status and other subcode data is formatted differently between AES and S/PDIF, but very little equipment bothers to send or read the full subcode data set anyway, so it is rarely a problem.

Published November 2005
