Consider hooking up sound source nodes in the API somehow #390

Open
@cwilso

Description

There have been requests to add sound to the scope of the WebXR API. There are two aspects to this. The first is that we should manage the audio inputs/outputs associated with an XR device; this is already covered by #98. The second is enabling developers to easily position sound sources in the virtual space and use an HRTF (Head-Related Transfer Function) or multi-speaker setup to properly "position" the sound.

It is relatively straightforward to use Web Audio's PannerNode to connect a posed sound source to the head pose; in fact, three.js does exactly this with its PositionalAudio object. The problem lies in keeping the head pose (and the sound source pose) updated at a high enough frequency, ideally by letting the audio thread get head-pose information directly, or the like. A rough sketch of the current main-thread approach follows.
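
For reference, here is a minimal sketch of that main-thread approach (without three.js), using the AudioParam-based attributes on AudioListener and PannerNode from the Web Audio spec. It assumes an existing AudioContext `audioCtx`, an XRReferenceSpace `xrRefSpace` obtained at session start, and an app-maintained `soundPosition` for the source; those names are placeholders, not anything from this spec.

```js
// A PannerNode spatializes one source; route the source through it.
const panner = new PannerNode(audioCtx, {
  panningModel: 'HRTF',
  distanceModel: 'inverse',
});
// someSourceNode.connect(panner).connect(audioCtx.destination);

function cross(a, b) {
  return [
    a[1] * b[2] - a[2] * b[1],
    a[2] * b[0] - a[0] * b[2],
    a[0] * b[1] - a[1] * b[0],
  ];
}

// Rotate a vector by a unit quaternion: v' = v + w*t + (q_vec x t),
// where t = 2 * (q_vec x v).
function rotateByQuaternion(q, v) {
  const qv = [q.x, q.y, q.z];
  const t = cross(qv, v).map((c) => 2 * c);
  const u = cross(qv, t);
  return [v[0] + q.w * t[0] + u[0],
          v[1] + q.w * t[1] + u[1],
          v[2] + q.w * t[2] + u[2]];
}

function onXRFrame(time, frame) {
  frame.session.requestAnimationFrame(onXRFrame);

  const pose = frame.getViewerPose(xrRefSpace);
  if (!pose) return;

  const { position: p, orientation: q } = pose.transform;
  const listener = audioCtx.listener;
  const now = audioCtx.currentTime;

  // Listener follows the head pose; forward is -Z and up is +Y in head space.
  const fwd = rotateByQuaternion(q, [0, 0, -1]);
  const up = rotateByQuaternion(q, [0, 1, 0]);
  listener.positionX.setValueAtTime(p.x, now);
  listener.positionY.setValueAtTime(p.y, now);
  listener.positionZ.setValueAtTime(p.z, now);
  listener.forwardX.setValueAtTime(fwd[0], now);
  listener.forwardY.setValueAtTime(fwd[1], now);
  listener.forwardZ.setValueAtTime(fwd[2], now);
  listener.upX.setValueAtTime(up[0], now);
  listener.upY.setValueAtTime(up[1], now);
  listener.upZ.setValueAtTime(up[2], now);

  // Source position in the same reference space (app state, updated elsewhere).
  panner.positionX.setValueAtTime(soundPosition.x, now);
  panner.positionY.setValueAtTime(soundPosition.y, now);
  panner.positionZ.setValueAtTime(soundPosition.z, now);
}
```

Note that this only pushes pose updates at the XR animation-frame rate from the main thread, which is exactly the frequency/latency limitation described above.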

(Note that I don't consider this a high priority today; Issue #98 is more important, and even that is a future enhancement. I just wanted to capture it.)
