Hi!
Thanks for the interesting work!
How can I use the BINAMIX repository to recreate the binaurally rendered signals from ambisonics (https://github.com/QxLabIreland/Ambiqual/tree/main/validation/audiofiles), as used in the listening test of the AMBIQUAL/BINAQUAL paper?
In other words:
Section 4.4 in the BINAQUAL paper describes a codec compression dataset (“…512, 384, 256, 128, 96, 64, and 32 kbps to produce a range of conditions. They were then rendered binaurally for presentation.”).
My understanding is:
MUSHRA scores are here: https://github.com/QxLabIreland/Ambiqual/tree/main/validation
But where is the corresponding binaural content used in the MUSHRA test?
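For context, here is a rough sketch of the pipeline I assume was used (first-order AmbiX → virtual-loudspeaker decode → HRIR convolution and sum). Everything here is a placeholder assumption on my part: the cube speaker layout, the random stand-in HRIRs (in practice presumably the SADIE II set that BINAMIX wraps), and the synthetic input instead of the actual dataset files.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000

# Hypothetical virtual-loudspeaker layout: cube corners, (azimuth, elevation) in radians.
speakers = [(np.deg2rad(az), np.deg2rad(el))
            for az in (45, 135, 225, 315) for el in (-35, 35)]

def foa_encode(mono, az, el):
    """Encode a mono source to first-order AmbiX (ACN order W, Y, Z, X; SN3D)."""
    w = mono
    y = mono * np.sin(az) * np.cos(el)
    z = mono * np.sin(el)
    x = mono * np.cos(az) * np.cos(el)
    return np.stack([w, y, z, x])

def foa_decode_sampling(foa, az, el):
    """Basic (sampling) decoder feed for one virtual speaker direction."""
    w, y, z, x = foa
    return 0.5 * (w
                  + x * np.cos(az) * np.cos(el)
                  + y * np.sin(az) * np.cos(el)
                  + z * np.sin(el))

# Synthetic 1-second test signal at 30° azimuth (stand-in for the dataset audio).
mono = np.random.randn(fs)
foa = foa_encode(mono, np.deg2rad(30), 0.0)

# Placeholder 256-tap HRIR pairs per speaker (NOT real measurements).
rng = np.random.default_rng(0)
hrirs = {spk: (rng.standard_normal(256), rng.standard_normal(256)) for spk in speakers}

# Decode to each virtual speaker, convolve with its HRIR pair, and sum.
left = np.zeros(len(mono) + 255)
right = np.zeros(len(mono) + 255)
for spk in speakers:
    feed = foa_decode_sampling(foa, *spk)
    hl, hr = hrirs[spk]
    left += fftconvolve(feed, hl)
    right += fftconvolve(feed, hr)

binaural = np.stack([left, right])
print(binaural.shape)
```

Is this roughly the rendering chain behind the test stimuli, and if so, which decoder, layout, and HRIR subject from SADIE II were used?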