Description
The naming of the "focus"
and "blur"
events on XRSession
are somewhat confusing to me.
I hadn't thought about those events for a bit, and when I saw them today in the spec, I presumed they were telling the app when it should or should not expect to receive XR input updates such as pose updates, select events, and other input source button presses. Basically, I presumed these events to be the equivalent of "focus"/"blur" in the core DOM. However, these events actually tell the app when its pixels are invisible or visible, allowing it to save effort by pausing its rendering.
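To make that concrete, here is a minimal sketch of the render-pausing pattern the current spec text implies, assuming the draft's session-level "blur"/"focus" events; the `rendering` flag and `onXRFrame` callback are my own illustration, not spec text:

```js
// Sketch only: assumes the draft's session-level "blur"/"focus" events,
// whose stated purpose is visibility, not input.
let rendering = true;

xrSession.addEventListener('blur', () => {
  // Per the spec's description, our pixels are no longer visible,
  // so we stop scheduling frames to save effort.
  rendering = false;
});

xrSession.addEventListener('focus', () => {
  // Pixels are visible again; resume the frame loop.
  rendering = true;
  xrSession.requestAnimationFrame(onXRFrame);
});

function onXRFrame(time, frame) {
  if (!rendering) return;
  xrSession.requestAnimationFrame(onXRFrame);
  // ...render the scene for this frame...
}
```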
While it is likely that XR input will also be removed from the app during the example situation of a sensitive overlay being displayed, that seems like a secondary effect. Specifically, if XR input is removed from the page temporarily due to a non-sensitive partial overlay, the UA would expect the page to continue rendering. However, it's unlikely that a UA could raise the "blur" event to communicate the loss of input, since we'd have already suggested that pages stop rendering when they receive that event.
This seems more similar to the Page Visibility API and its "visibilitychange" event, which serves an equivalent purpose for 2D content: it lets the page know when the user can't see it, so the page can stop expensive rendering or pause content. Perhaps a single "visibilitychange" event better fits our intended developer mental model here?
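For reference, the established 2D pattern looks like this (`pauseRendering`/`resumeRendering` are hypothetical app hooks, not platform API):

```js
// The existing 2D analogue: one event and one state to check,
// rather than a paired "focus"/"blur" whose meaning is overloaded.
document.addEventListener('visibilitychange', () => {
  if (document.visibilityState === 'hidden') {
    pauseRendering();   // hypothetical app hook: stop expensive work
  } else {
    resumeRendering();  // hypothetical app hook: resume the frame loop
  }
});
```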