BY: LIC. EZEQUIEL MORFI | TITANIO

CONCLUSION

DITHERING and its siblings NOISE-SHAPING and AUTO-BLANKING are the subject of extensive though often misinformed discussion, both in formal literature and in casual forum debates, and they are procedures that need to be understood and mastered by the audio engineer, most especially in our current times of digitally tracked, edited, mixed, mastered and delivered recordings. The internal workings of a dither plug-in are not solely the concern of the software developer but also a matter of importance to the engineer himself/herself, who has to know more than just “when to use it”.

As a general rule, there is no doubt that dithering must be applied only when a situation of bit-reduction occurs; however, it is of vital importance for the inexperienced engineer to master his/her signal flow and recognize exactly when that reduction of word length is actually taking place.

Some real-world examples can be given to illustrate such occasions: a dither plug-in placed at the end of the master-fader processing chain, allowing the operator to properly monitor a 64-bit floating-point mix coming out of the DAW and going into a 24-bit fixed-point digital-to-analog converter without the added truncation distortion of that one bit-reduction, seems an undisputed necessity and an obvious thing to do; yet not everybody works this way, most likely out of a lack of consideration.
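To make that scenario concrete, here is a minimal sketch (in Python with NumPy, purely illustrative and not the internal code of any actual plug-in) of reducing a floating-point mix buffer to 24-bit fixed point, once by plain truncation and once with TPDF dither of ±1 LSB:

```python
# Sketch only: taking a floating-point mix bus down to 24-bit fixed point,
# with and without TPDF dither. Hypothetical example, not any plug-in's code.
import numpy as np

rng = np.random.default_rng(0)
fs = 48000
t = np.arange(fs) / fs
mix = 0.5 * np.sin(2 * np.pi * 1000 * t)   # 64-bit float "mix bus" signal

FULL_SCALE = 2 ** 23                        # 24-bit signed range: -2^23 .. 2^23 - 1

def truncate_to_24bit(x):
    """Plain truncation: simply drop the bits below the 24-bit LSB."""
    return np.floor(x * FULL_SCALE).astype(np.int32)

def dither_to_24bit(x):
    """TPDF dither: add two independent uniform noises of +/-0.5 LSB, then round."""
    tpdf = rng.uniform(-0.5, 0.5, x.shape) + rng.uniform(-0.5, 0.5, x.shape)
    return np.round(x * FULL_SCALE + tpdf).astype(np.int32)

truncated = truncate_to_24bit(mix)
dithered = dither_to_24bit(mix)

# The dithered error behaves like benign, signal-independent noise; the
# truncation error is correlated with the program, i.e. distortion.
err_trunc = truncated / FULL_SCALE - mix
err_dith = dithered / FULL_SCALE - mix
print("truncation error RMS:", np.sqrt(np.mean(err_trunc ** 2)))
print("dithered error RMS:  ", np.sqrt(np.mean(err_dith ** 2)))
```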

When the operator has realized the moments where bit-reduction is taking place inside his/her potentially hybrid (analog/digital) processing chain and the reasons for dithering under such circumstances, he/she can begin to enjoy the creativity of having control over this procedure and experiment with the various dithering types (Gaussian, Triangular, Rectangular) and algorithms currently available on the market, all yielding different sonic results, or even with extended techniques such as using rounding instead of dithering or letting truncation itself occur.
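Purely as an illustration of what those three noise types look like (a hypothetical sketch in Python with NumPy; amplitudes are expressed in LSBs of the target word length, a common though not universal convention), they could be generated as follows:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 48000  # one second of dither noise at 48 kHz, amplitudes in LSBs

# Rectangular PDF (RPDF): one uniform source, +/-0.5 LSB peak.
rpdf = rng.uniform(-0.5, 0.5, n)

# Triangular PDF (TPDF): sum of two independent uniform sources, +/-1 LSB peak.
tpdf = rng.uniform(-0.5, 0.5, n) + rng.uniform(-0.5, 0.5, n)

# Gaussian PDF (GPDF): normally distributed noise, here ~0.5 LSB RMS.
gpdf = rng.normal(0.0, 0.5, n)

for name, noise in (("RPDF", rpdf), ("TPDF", tpdf), ("GPDF", gpdf)):
    print(name, "RMS (LSB):", np.sqrt(np.mean(noise ** 2)))
```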

This is all possible and can be beneficial under certain circumstances (“sonic demands”), but it still calls for a complete understanding of the dithering subject before the engineer can “break the rules”.

Conversion from floating-point to fixed-point (as in the example above) always constitutes bit-reduction (even in the outdated and extremely unlikely case of going from a 32-bit floating-point audio program to a 32-bit fixed-point file) and will require a dithering process as usual.
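A quick way to convince oneself of this (again a hypothetical Python/NumPy check, with arbitrary values) is to push a very low-level 32-bit float sample into a 32-bit fixed-point container and back; the mantissa bits that fall below the fixed-point LSB are simply discarded:

```python
import numpy as np

FULL_SCALE = 2 ** 31  # 32-bit signed fixed point

# A roughly -120 dBFS sample stored as a 32-bit float: the mantissa still
# carries 24 significant bits at this level.
x = np.float32(1e-6) * np.float32(1.2345678)

# Convert to 32-bit fixed point and back.
fixed = np.int32(np.round(float(x) * FULL_SCALE))
back = np.float32(fixed / FULL_SCALE)

print("original :", x)
print("restored :", back)
print("lost     :", float(x) - float(back))  # non-zero: information was discarded
```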

Conversion between 32-bit and 64-bit floating-point, a regular scenario for audio clips being bussed by the DAW in and out of various third-party plug-ins, is a transparent process that need not be dithered in any case and produces no audible truncation distortion whatsoever, even when going “down” from 64 bits to 32 bits. The scaling nature of floating-point keeps the quantization error proportional to the signal level rather than to full scale, so relative accuracy is maintained at every possible sample value.
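A simple sanity check (hypothetical Python/NumPy, illustrative only) shows both sides of this: every 32-bit float value survives a trip through 64 bits untouched, and taking a 64-bit signal down to 32 bits leaves a rounding error that scales with the signal and sits far below any 24-bit noise floor:

```python
import numpy as np

rng = np.random.default_rng(2)

# Any 32-bit float is exactly representable in 64 bits, so the round trip
# 32 -> 64 -> 32 is bit-exact.
x32 = rng.uniform(-1.0, 1.0, 48000).astype(np.float32)
assert np.array_equal(x32, x32.astype(np.float64).astype(np.float32))

# Going "down" from 64 to 32 bits rounds the mantissa, but the error scales
# with the signal and stays well below the noise floor of any 24-bit path.
x64 = rng.uniform(-1.0, 1.0, 48000)          # 64-bit float "mix"
err = x64 - x64.astype(np.float32).astype(np.float64)
print("worst-case error (dBFS):", 20 * np.log10(np.max(np.abs(err))))
```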

As stated above, the benefits of using a floating-point architecture inside our DAW and plug-ins are clear and straightforward: it offers even greater accuracy and a consistently lower noise floor than a 24-bit fixed-point signal. Much more importantly, floating-point architecture prevents the cumulative rounding errors in the calculations of our DAW and plug-ins from becoming the cause of continuously degrading sound quality in complex mixes or heavily processed material.
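A rough demonstration of that cumulative effect (hypothetical Python/NumPy, with an arbitrary gain value and pass count) is to run the same gain-down/gain-up chain many times, once re-quantized to 24-bit fixed point at every stage and once kept entirely in 64-bit floating point:

```python
import numpy as np

rng = np.random.default_rng(3)
signal = rng.uniform(-0.5, 0.5, 48000)   # 64-bit float source material
GAIN = 0.7                               # an arbitrary, non-power-of-two gain

def gain_chain_fixed24(x, passes=100):
    """Gain down / gain up repeatedly, re-quantizing to 24-bit fixed point each time."""
    y = x.copy()
    for _ in range(passes):
        y = np.round(y * GAIN * 2 ** 23) / 2 ** 23
        y = np.round(y / GAIN * 2 ** 23) / 2 ** 23
    return y

def gain_chain_float64(x, passes=100):
    """The same chain kept entirely in 64-bit floating point."""
    y = x.copy()
    for _ in range(passes):
        y = (y * GAIN) / GAIN
    return y

for name, y in (("24-bit fixed path", gain_chain_fixed24(signal)),
                ("64-bit float path", gain_chain_float64(signal))):
    err = y - signal
    print(name, "accumulated error RMS (dB):",
          20 * np.log10(np.sqrt(np.mean(err ** 2))))
```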

The degree of accuracy, so-called “transparency” and overall fidelity of a 64-bit floating-point calculation is unsurpassed and completely unmatched in the fixed-point realm and even in the purest analog system. The careless investigator can still come across serious though outdated publications from the late 1990s by authors who favor the long-gone 48-bit fixed-point format, called “double-precision”, over the floating-point architecture.

While the noble 48-bit format was indeed capable of dealing with most of the “single-precision” 24-bit fixed-point limitations and also avoided certain problems present in the floating-point architecture at the time (computing and storage issues, inaccuracies in certain low-frequency filter designs, etc.), this comparison has been rendered obsolete by the introduction of the IEEE 754-2008 standard.

All in all, software developers and programmers are left to their own virtuous choice of making their code 32-, 64- and/or 80-bit floating-point internally, as may be more suitable to their processors, but engineers should know to process, edit, consolidate and generally handle audio files in the 32-bit floating-point format, not 64-bit, and to introduce dither in their processing chain when a situation of true bit-reduction occurs, and only then.

LIC. EZEQUIEL MORFI | TITANIO morfi@titanioesarte.com

Check pt-1 & pt-2