Building a Room for Immersive Mixing

Presented by

James D. (jj) Johnston, AES Life Fellow

Produced By

AES PNW Committee

jj discussing immersive listening rooms

PDF of Zoom Meeting Chat

jj’s Powerpoint Deck

Video Recording of Zoom Session

On December 3, 2024, James D. (jj) Johnston gave the PNW Section's November presentation, about immersive mixing rooms, albeit in December. The talk was originally scheduled for November 20, but severe weather left hundreds of thousands without power and roads blocked by trees and power lines. The lecture was rescheduled for December 3, held on Zoom and in person at the DigiPen Institute of Technology, Redmond, WA. Some 10 people attended at DigiPen and 45 on Zoom; an estimated 38 were AES members.

 

PNW Chair Mortensen noted that the actual December PNW meeting would be on Zoom December 11, about audio tape simulations.

 

jj (lowercase nickname) Powerpointed his way through his observations on how to make an immersive mixing room, some of which were gleaned from his time as Chief Scientist at Immersion Networks.

 

Comparisons were drawn between listening and mixing/production rooms in immersive, stereo, and other formats. Live End/Dead End (LEDE, a former trademark of Syn-Aud-Con) is less desirable for a good immersive room, although it is often liked for other formats. Precursor multi-channel techniques were discussed and compared to the present fashions of immersive. Since few can make the "perfect immersive room," jj described what is practical: overlaying the playback room with the intended space of the immersive content, avoiding added room coloration, and controlling the playback room's reverb. Many details were discussed about early reflections, matching speakers with delay and gain calibration, and FIRE-RATED room treatments. The use of subwoofers and Low Frequency Effects (LFE), and their effect on immersive accuracy, was also noted.
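To make the delay and gain calibration concrete, here is a minimal Python sketch of one common distance-based approach: time-align every speaker to the farthest one and trim its level for the 1/r distance loss at the mix position. This is an illustration of the general technique, not jj's specific procedure; the speaker names and distances below are made up.

```python
# Distance-based speaker alignment sketch (illustrative, not from the talk).
import math

SPEED_OF_SOUND = 343.0  # m/s in air at roughly 20 degrees C

def align_speakers(distances_m):
    """Return per-speaker (delay_ms, gain_db) trims so that all arrivals
    match the farthest speaker in both time and level at the mix position."""
    farthest = max(distances_m.values())
    trims = {}
    for name, d in distances_m.items():
        # Closer speakers get extra delay so all wavefronts arrive together.
        delay_ms = (farthest - d) / SPEED_OF_SOUND * 1000.0
        # Closer speakers are louder (1/r), so attenuate them to match.
        gain_db = 20.0 * math.log10(d / farthest)
        trims[name] = (round(delay_ms, 2), round(gain_db, 2))
    return trims

# Example: part of an immersive layout with hypothetical distances (meters).
layout = {"L": 2.1, "R": 2.1, "C": 2.0, "Ls": 1.8, "Rs": 1.8, "TopL": 2.4}
for spk, (delay, gain) in align_speakers(layout).items():
    print(f"{spk:>5}: delay {delay:5.2f} ms, gain {gain:5.2f} dB")
```

In practice these trims are usually measured with a calibration microphone and applied in the monitor controller or room-correction DSP rather than computed from tape-measure distances alone.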

 

Questions and comments were made both live and via Zoom.

 

About our presenter:

James D. (jj) Johnston was formerly Chief Scientist of Immersion Networks. He has a long and distinguished career in electrical engineering, audio science, and digital signal processing. His research and product invention spans hearing and psychoacoustics, perceptual encoding, and spatial audio methodologies.

 

He was one of the first investigators in the field of perceptual audio coding, and one of the inventors and standardizers of MPEG-1/2 Audio Layer 3 and MPEG-2 AAC. Most recently, he has been working in the area of auditory perception and ways to expand the limited sense of realism available in standard audio playback for both captured and synthetic performances.

 

Johnston worked for AT&T Bell Labs and its successor, AT&T Labs Research, for two and a half decades. He later worked at Microsoft and then at Neural Audio and its successors before joining Immersion. He is an IEEE Fellow, an AES Fellow, a New Jersey Inventor of the Year, an AT&T Technical Medalist and Standards Awardee, and a co-recipient of the IEEE Donald Fink Paper Award. In 2006, he received the James L. Flanagan Signal Processing Award from the IEEE Signal Processing Society, and he presented the 2012 Heyser Lecture at the AES 133rd Convention: "Audio, Radio, Acoustics and Signal Processing: the Way Forward."

 

In 2021, along with two colleagues, Johnston was awarded the Industrial Innovation Award by the Signal Processing Society “for contributions to the standardization of audio coding technology.”

 

jj received the BSEE and MSEE degrees from Carnegie-Mellon University, Pittsburgh, PA, in 1975 and 1976, respectively.

 

Reported by Gary Louie, PNW Secretary.