The A to Z of computer music: B
Computer Music goes under the bonnet in this month's lexical supplement, demystifying yet more digital audio bits and bobs.
Adjusting the balance of a stereo signal means changing the amplitude of the left and right signals. It can be used to change the 'position' of an audio signal in the stereo field, though it takes its name from the corrective use of giving each channel equal strength, to 'balance' and centralise the signal.
Officially, balance differs from panning in that a balance control turned hard-left will silence the right channel entirely, while a pan control turned hard-left will sum both channels together into the left. In fact, the pan knobs in most DAWs and plugins actually function like balance controls!
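The difference can be sketched in a few lines of Python. This is an illustrative model, not any particular DAW's implementation: the balance control only ever attenuates one channel, while the (constant-power) pan control blends a mono signal between the two outputs.

```python
import math

def balance(left, right, pos):
    """Balance control: pos runs from -1 (hard-left) to +1 (hard-right).
    Hard-left silences the right channel; the channels are never mixed."""
    l_gain = min(1.0, 1.0 - pos)   # attenuate left as pos moves right
    r_gain = min(1.0, 1.0 + pos)   # attenuate right as pos moves left
    return left * l_gain, right * r_gain

def true_pan(mono, pos):
    """Constant-power pan of a mono signal: pos runs from -1 to +1.
    Hard-left places the whole signal in the left channel."""
    angle = (pos + 1) * math.pi / 4        # 0 .. pi/2 across the field
    return mono * math.cos(angle), mono * math.sin(angle)
```

With `pos` hard-left (-1), `balance` passes the left channel untouched and mutes the right, whereas `true_pan` routes all of its mono input to the left output.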
A system using balanced connections (cables, equipment, input/output connections, etc) will be less susceptible to unwanted interference such as mains hum and can maintain a clear audio signal over longer distances than its unbalanced counterpart.
A range of frequencies, eg, 120–200Hz. Some plugins divide the incoming signal into multiple bands for processing separately before recombining them. The term 'band' is also used to refer to the filters in an equaliser (EQ), whether they be bell/peak, low shelf, high shelf, etc. An EQ is often specified as having a number of bands, indicating its flexibility.
A type of filter used to let through frequencies within a certain band, rejecting those outside it, similar to a low-pass and high-pass in series. There is a gradual roll-off of frequencies outside of the passed area, rather than an abrupt cutoff. Band-reject filters have the opposite frequency response, cutting out a band of frequencies.
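The "low-pass and high-pass in series" idea can be demonstrated with a deliberately crude sketch in Python, using simple one-pole filters (real EQ bands would use steeper designs; the function names here are our own):

```python
import math

def one_pole_coeff(cutoff_hz, sample_rate):
    # Feedback coefficient for a basic one-pole smoothing filter
    return math.exp(-2.0 * math.pi * cutoff_hz / sample_rate)

def band_pass(signal, low_hz, high_hz, sample_rate=44100):
    """Crude band-pass: a one-pole high-pass (cutoff low_hz) feeding a
    one-pole low-pass (cutoff high_hz). The roll-off outside the band
    is gentle (6dB/octave each side), not an abrupt cutoff."""
    a_hp = one_pole_coeff(low_hz, sample_rate)
    a_lp = one_pole_coeff(high_hz, sample_rate)
    hp_state = lp_state = 0.0
    out = []
    for x in signal:
        hp_state = a_hp * hp_state + (1 - a_hp) * x   # low-pass...
        hp = x - hp_state                             # ...subtracted = high-pass
        lp_state = a_lp * lp_state + (1 - a_lp) * hp  # then low-pass that
        out.append(lp_state)
    return out
```

Feeding in a constant (0Hz) signal shows the high-pass half doing its job: the output settles towards silence, since DC sits well below the passband.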
Most plugins have a selection of presets: this 'bank' contains ready-made settings that can be good starting points from which to create a customised sound. For VST plugins, a preset is usually stored as an FXP file, and banks of presets are stored in the FXB format (though plugins may use their own custom formats too.) If you're short on inspiration, preset banks can be bought in formats like FXB.
The lower end of the frequency spectrum; that is, the lowest sounds that we are capable of hearing. At their very lowest, bass sounds are more felt than they are heard, and these frequencies are termed sub-bass. As a rule of thumb, bass is anything up to about 250Hz, while sub-bass resides at 20–80Hz.
As an instrument, both a standard bass guitar and a standard double bass can play the notes E1–G4. The intricacies of bass were explored in Computer Music issue 186's Bass cover feature.
In music theory: One whole unit of musical time. In 4/4 time, represented by a quarter-note (aka crotchet.) A beat can be subdivided into two eighth-notes, four sixteenth-notes, etc.
In hip-hop: This is the instrumental track/music sans vocals, created by a hip-hop producer, sometimes known as a beatmaker.
In drum-speak: An arrangement of percussive hits into one repeating pattern.
In acoustics: If two signals are close enough in frequency, we will perceive a 'throbbing' - or beating - sensation in amplitude as they fall in and out of phase. For example, signals of 328Hz and 332Hz will sound like one signal of 330Hz, rising and falling in level four times per second (332-328=4Hz). These regular 'pulses' in level are called beats. Beating is used creatively in unison synth sounds such as Reese basses, achieved by using the synthesiser's unison detune features or simply detuning oscillators manually.
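As a rough illustration, here's a Python sketch (our own, for demonstration only) that sums two equal-amplitude detuned sines, as a synth's unison detune would. The beat rate is simply the difference between the two frequencies:

```python
import math

def mix_detuned(f1, f2, sample_rate=44100, seconds=1.0):
    """Sum two equal-amplitude sine waves. When f1 and f2 are close,
    the result sounds like one tone at their average frequency whose
    level 'beats' (rises and falls) at |f1 - f2| Hz."""
    n = int(sample_rate * seconds)
    return [math.sin(2 * math.pi * f1 * t / sample_rate) +
            math.sin(2 * math.pi * f2 * t / sample_rate)
            for t in range(n)]

beat_rate = abs(332 - 328)   # four beats per second, as in the example
```

At the envelope's peaks the two sines reinforce each other, so the combined level momentarily approaches twice that of either sine alone; at the nulls they cancel.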
Taking a rhythmic piece of audio, then 'slicing' it up for rearrangement. The sliced beat can be processed to change its tempo, arrangement, and sounds of its parts.
Traditionally, beat-slicing is done to drum or percussion loops, but it can be applied to guitar or synth parts, vocals, or anything with enough of a rhythmic quality.
While beat-slicing can be done manually within a DAW, there are also methods for doing it automatically. Ready-sliced material is available via the REX format, and some timestretching and pitchshifting algorithms are essentially behind-the-scenes beat-slicers.
Digital data is composed of 1s and 0s. Each one is a bit, and a group of eight is called a byte. As an example, in MIDI, a Note On message is sent using three bytes. The first byte is split into two: four bits to say 'Note on' and four to show the channel number; the second byte is used to communicate which note; and the final byte describes the MIDI velocity.
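The three-byte Note On layout described above can be built directly. This follows the MIDI spec's byte structure; the helper function name is ours:

```python
def note_on(channel, note, velocity):
    """Build a three-byte MIDI Note On message.
    Status byte: 0x9 in the top four bits ('Note On'), with the channel
    number (0-15) in the bottom four; then the note number (0-127) and
    the velocity (0-127), one byte each."""
    status = 0x90 | (channel & 0x0F)
    return bytes([status, note & 0x7F, velocity & 0x7F])
```

So a middle C (note 60) at velocity 100 on channel 1 (numbered 0 internally) comes out as the bytes 0x90, 0x3C, 0x64.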
A measurement of the amplitude resolution of a digital audio signal. Lower bit depths mean there are fewer possible amplitude values for each sample, and the mapping of desired sample values to the nearest possible destination values can result in gritty, grungy distortion known as quantisation noise. This can be alleviated via a process called dither, which, in simple terms, converts the distortion into (less obnoxious) background noise. A higher bit depth means greater usable dynamic range and quieter quantisation noise (and/or dither.)
An audio signal with a high bit depth (pictured left) is a closer representation of the desired waveform (green line) than a lower bit depth one (right.)
CD-quality audio is 16-bit. 24-bit audio is classed as 'professional standard', and any potential quantisation/dither noise is practically inaudible (and indeed, is often quieter than the softest sounds that an audio interface can actually reproduce.) DAWs and plugins generally use 32- or 64-bit floating point calculations internally, to preserve audio fidelity throughout repeated complex calculations.
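The mapping of sample values to the nearest available level can be modelled in a couple of lines of Python (an idealised converter, without dither), showing the quantisation error shrinking as bit depth rises:

```python
def quantise(sample, bits):
    """Round a sample in the range -1.0..1.0 to the nearest level
    available at the given bit depth, as an ideal converter would."""
    levels = 2 ** (bits - 1)            # signed amplitude steps
    return round(sample * levels) / levels

# The error (quantisation noise) is far smaller at higher bit depths:
err_8  = abs(quantise(0.3333, 8)  - 0.3333)
err_16 = abs(quantise(0.3333, 16) - 0.3333)
```

At 8 bits the error is audible grit; at 16 bits it is already hundreds of times smaller, which is why quantisation noise at 24 bits is practically inaudible.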
An effect that reduces bit depth and/or sample rate of a signal in order to create distortion. Here, degradation of the signal is the goal, so no restorative functions are applied. What's left of the signal on output is a low-bit, under-sampled (read: poorer quality) version, reminiscent of the sounds of '80s computers and music gear.
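A toy bitcrusher is simple to sketch in Python: hold every Nth sample (crude sample-rate reduction, with no anti-alias filtering, since degradation is the goal) and quantise what's held to a low bit depth. This is a bare-bones illustration, not any particular plugin's algorithm:

```python
def bitcrush(signal, bits=8, downsample=4):
    """Toy bitcrusher: sample-and-hold every Nth sample (crude rate
    reduction) and quantise it to 'bits' bits. Nothing restorative is
    applied - the artefacts are the point."""
    levels = 2 ** (bits - 1)
    out, held = [], 0.0
    for i, x in enumerate(signal):
        if i % downsample == 0:
            held = round(x * levels) / levels   # bit reduction
        out.append(held)                        # hold between samples
    return out
```

Run on a smooth ramp, the output turns into coarse stair-steps, which is exactly the gritty, lo-fi character the effect is used for.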
The opposite of attenuation in an equaliser. Boosting frequencies will increase their levels.
Like a 'render' or 'export' command, this DAW function will play a selected region or entire project, and record the results to a fresh wave file. The resulting audio is often placed back into the project on a new track.
In the early days of audio engineering, the limitations of recording devices with only a few tracks made bouncing a very necessary part of the production process, summing a six-track mix down to a stereo pair, for example, to free up the original six tracks for further recording.
Nowadays, this limitation is irrelevant, but bouncing is still used to:
- Export a mixed track for collaboration or demoing purposes, as well as to produce a final export of a completed mix.
- Consolidate multiple takes across multiple tracks into one master take.
- Take a snapshot of an effect, whether for a collaborator who doesn't have the effect installed, or for a fixed copy of an instrument/effect that generates randomly.
- Reduce CPU load.
Beats Per Minute, a measure of tempo. 120BPM equates to two beats per second.
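This tempo arithmetic is handy for calculating delay times and note lengths, and amounts to one line of Python (the function is our own convenience wrapper):

```python
def note_length_ms(bpm, fraction=1.0):
    """Length of a note in milliseconds at a given tempo. 'fraction' is
    relative to one beat: 1.0 = quarter-note in 4/4, 0.5 = eighth-note,
    and so on. 60,000ms per minute divided by beats per minute gives
    milliseconds per beat."""
    return 60000.0 / bpm * fraction
```

At 120BPM a beat lasts 500ms, so setting a delay to 250ms gives repeats in time with eighth-notes.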
A drum beat, typically sampled from a vintage funk or rock track during a 'break' in which the other instruments cease playing, leaving the drummer to strut his funky stuff. Breakbeats are a staple of many electronic genres, and they can be sampled, processed and reprogrammed for use as new beats.
A part of the song with sparser instrumentation, intended as a respite from the intensity of the main track, offering a different mood. For instance, the drums and bass could be cut out for a more musical, atmospheric section.
Traditionally, a brickwall limiter is one with a high ratio (of at least 20:1, up to ∞:1) and a minimal attack time. Once an amplitude threshold is set, any parts of our signal that are louder than this point will, very quickly, be reduced to that amplitude. The brickwall limiter is intended to do its best to prevent the signal exceeding the threshold, swiftly pulling it back down the instant that it does. This is often used to prevent overloading of subsequent processes, and to catch sudden, loud bursts of sound that could damage speakers and ears.
In the world of digital audio, brickwall limiters are more literal: to qualify, they must not pass any signal at all above the threshold. Such limiters are often used to reduce the peaks of a mix so the signal can then be turned up louder without the peaks causing harsh digital clipping.
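The digital 'no sample passes the threshold' guarantee can be sketched in Python. This is a deliberately simplified look-ahead design of our own: real limiters smooth the gain changes over time, but the principle, scaling the signal down so upcoming peaks cannot exceed the threshold, is the same:

```python
def brickwall_limit(signal, threshold=0.9, lookahead=32):
    """Sketch of a look-ahead brickwall limiter: for each sample, find
    the loudest peak within the look-ahead window and reduce the gain
    so that peak cannot exceed the threshold. Real limiters smooth the
    gain curve to avoid distortion; this only shows the guarantee."""
    out = []
    for i, x in enumerate(signal):
        window = signal[i:i + lookahead]
        peak = max(abs(s) for s in window)
        gain = min(1.0, threshold / peak) if peak > 0 else 1.0
        out.append(x * gain)
    return out
```

Because the look-ahead window always includes the current sample, no output sample can ever exceed the threshold, unlike an analogue limiter, whose finite attack time lets brief overshoots through.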
Digital audio systems use buffers to split the processing of audio signals into manageable chunks, helping to maintain a constant signal that's free of glitches and dropouts. High (large) buffer sizes mean higher latency; that is, the delay between input and output, manifested as an audible delay/echo between playing a note and hearing it back through the speakers.
When recording MIDI or audio and monitoring through the DAW, it is important to use a low buffer size to ensure low latency. Buffer size is set using your DAW's preferences or in your audio interface's settings. The number and complexity of plugins used and the processing power of your system are two key factors (though not the only ones) affecting the size of the buffer required for good audio performance.
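The latency a buffer adds is straightforward to calculate: buffer size in samples divided by sample rate gives the delay in seconds. A one-line Python helper (our own) makes the trade-off concrete:

```python
def buffer_latency_ms(buffer_samples, sample_rate=44100):
    """One-way latency added by an audio buffer, in milliseconds:
    the time it takes to fill the buffer at the given sample rate."""
    return 1000.0 * buffer_samples / sample_rate
```

A 512-sample buffer at 44.1kHz adds roughly 11.6ms each way, noticeable when playing a soft synth live, whereas 128 samples adds under 3ms, at the cost of a heavier CPU load.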
Part of a song that builds up to something (usually 'the drop' or breakdown), adding intensity with elements such as gradually opening filters, drum rolls, risers and FX.
An area on a mixer (real or virtual) that signals can be sent to. One use is that of auxiliary send/return buses. Signals sent to these can then be sent through effects processors - handy if you want to apply the same effect (reverb or delay, usually) to a number of tracks, then control the level of this effect via the auxiliary bus fader.
Tracks can also be bused directly to create a group bus (say, the tracks of a drum kit) that can then be treated and processed as one.
The final destination for tracks in a mix is the master bus, where you can apply effects to the whole mix (eg, for mastering.)
When you've got an effect on a track, you can engage its bypass switch to hear what the track would sound like without the effect.