
Binaural rendering from Ambisonics / recreating listening tests in AMBIQUAL/BINAQUAL papers #1

@listener17


Hi!

Thanks for the interesting work!

How can I use the BINAMIX repository to recreate the binaurally rendered signals from Ambisonics (https://github.com/QxLabIreland/Ambiqual/tree/main/validation/audiofiles), as used in the listening tests of the AMBIQUAL/BINAQUAL papers?

In other words: Section 4.4 of the BINAQUAL paper describes a codec compression dataset ("...512, 384, 256, 128, 96, 64, and 32 kbps to produce a range of conditions. They were then rendered binaurally for presentation.").

My understanding is that the MUSHRA scores are here: https://github.com/QxLabIreland/Ambiqual/tree/main/validation
But where is the corresponding binaural content used in the MUSHRA test, and which rendering settings (HRTF set, decoder order, speaker layout) would I need to reproduce it?
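For context on what I am trying to reproduce: my understanding is that binaural rendering from Ambisonics is typically done via the virtual-loudspeaker approach (decode the Ambisonic channels to a set of virtual speaker feeds, then convolve each feed with the HRIRs for that speaker direction and sum per ear). Below is a minimal first-order sketch of that idea in NumPy/SciPy. It is not BINAMIX's API and not the authors' pipeline; the HRIRs here are crude placeholders (a real rendering would use measured HRIRs, e.g. a SADIE II subject), and the encoding convention (ACN/SN3D) and square speaker layout are my assumptions.

```python
import numpy as np
from scipy.signal import fftconvolve

fs = 48000
n = fs  # 1 second of audio

# Encode a mono source to first-order Ambisonics (ACN order W, Y, Z, X; SN3D),
# placed at azimuth 45 deg, elevation 0 — assumed convention, not from the paper.
az = np.deg2rad(45.0)
src = 0.1 * np.random.randn(n)
amb = np.stack([src,                 # W
                src * np.sin(az),    # Y
                np.zeros(n),         # Z (horizontal-only source)
                src * np.cos(az)])   # X  -> shape (4, n)

# Four virtual loudspeakers on the horizontal plane (square layout, assumed).
spk_az = np.deg2rad([45.0, 135.0, 225.0, 315.0])

# Basic sampling decoder: evaluate the same spherical-harmonic terms at each
# speaker direction; rows map Ambisonic channels to one speaker feed.
D = np.stack([np.ones_like(spk_az),
              np.sin(spk_az),
              np.zeros_like(spk_az),
              np.cos(spk_az)], axis=1) / len(spk_az)   # (n_spk, 4)
spk_signals = D @ amb                                  # (n_spk, n)

# Placeholder HRIRs: a single-tap impulse per ear with a crude level difference
# (nearer ear louder). Purely illustrative — substitute measured HRIRs here.
hrir_len = 64
hrirs = np.zeros((len(spk_az), 2, hrir_len))
for i, a in enumerate(spk_az):
    hrirs[i, 0, 0] = 0.5 + 0.5 * np.sin(a)   # left ear
    hrirs[i, 1, 0] = 0.5 - 0.5 * np.sin(a)   # right ear

# Convolve each speaker feed with its per-ear HRIR and sum into two channels.
binaural = np.zeros((2, n + hrir_len - 1))
for i in range(len(spk_az)):
    for ear in range(2):
        binaural[ear] += fftconvolve(spk_signals[i], hrirs[i, ear])
```

If the repository exposes the actual HRTF database and decoder configuration used for the listening-test stimuli, I would swap those in place of the placeholders above.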
