Meeting minutes
<bajones> When attempting to join the Zoom call I'm getting an "Incorrect Passcode" error?
<yonet> oh no
<yonet> The passcode is the year, right bajones?
<bajones> I'm getting the link from the meeting page for today's session off the W3C events pages
<bajones> Confirmed it's the right date
<yonet> It seems to be working for me, but Alex is sending it to you just in case
<bajones> Okay, got it!
NeilT: SIGGRAPH Asia, Dec, Tokyo
… BoFs, 90 min each; one of the BoFs is on 3D standards for the Web, an opportunity to represent how the W3C work fits together
atsushi: I will be there and can help
NeilT: Khronos is open to feedback and is offering to help.. it's bidirectional..
ada: model element is a good example where we can work together
song: we plan to join the XR Summit
https://github.com/immersive-web/WebXR-WebGPU-Binding/issues/6
MikeW: presentation
<bajones> :)
MikeW: WebGPU + WebXR, motivations.. Thanks to bajones for getting this work started
… WebKit, initial implementation based on bajones's explainer
… WebGPU improves on WebGL: it supports compute shaders, atomic operations, and a more uniform API
… lot of support for WebGPU already from JS frameworks
… open issues: issue #7 - mixing WebGPU and WebGL
… issue #8 - depth direction
… issue #9 - lifetime of XR subimage
… issue #10 - texture formats
… issue #12 - projection matrices proposal aligning with WebGPU
bajones: chromium demo, first triangle with WebGPU + WebXR
cabanier: only projection layers, not other layers?
MikeW: bajones's current proposal only projection layers
bajones: we're trying to catch up on the layers implementation..
… the layers spec only requires projection layers, so that's kind of an MVP/phase 1.. later we want to introduce other layer types
MikeW: this was our thinking as well
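As a rough sketch, rendering with the proposed binding could look like this (XRGPUBinding, createProjectionLayer, and getViewSubImage are taken from bajones's explainer as currently proposed and may well change):

  // Sketch only: assumes gpuDevice is a GPUDevice and xrSession is an
  // immersive session created with the proposed WebGPU opt-in.
  const binding = new XRGPUBinding(xrSession, gpuDevice);
  const layer = binding.createProjectionLayer({ colorFormat: 'bgra8unorm' });
  xrSession.updateRenderState({ layers: [layer] });

  function onXRFrame(time, frame) {
    frame.session.requestAnimationFrame(onXRFrame);
    const pose = frame.getViewerPose(xrRefSpace);
    if (!pose) return;
    for (const view of pose.views) {
      const subImage = binding.getViewSubImage(layer, view);
      // Encode an ordinary WebGPU render pass targeting
      // subImage.colorTexture here.
    }
  }
  xrSession.requestAnimationFrame(onXRFrame);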
https://github.com/immersive-web/WebXR-WebGPU-Binding/issues/6
MikeW: the issue is strictly on the WebGPU side.. adding a boolean for XR
bajones: need to make a dictionary change.. probably not in the WebXR spec.. need to approach the WebGPU spec
… maybe less necessary now.. usually there is only a single GPU
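The shape under discussion might look like the following (the xrCompatible member on GPURequestAdapterOptions is the proposal tracked in issue #6, mirroring WebGL's flag of the same name; it is not part of WebGPU today):

  // Hypothetical: tell the UA up front that the adapter must be usable
  // with WebXR, so it can pick the GPU driving the headset.
  const adapter = await navigator.gpu.requestAdapter({ xrCompatible: true });
  const device = await adapter.requestDevice();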
ada: should we schedule the joint meeting with WebGPU team
bajones: MikeW and I are regular attendees of the WebGPU group; we can take the action to bring it up there
… the current plan is to bring it up with the WebGPU team at the late-October F2F meeting
https://github.com/immersive-web/WebXR-WebGPU-Binding/issues/7
MikeW: WebGL frames mixing with WebGPU frames within one XR session.. do we allow it? what about existing JS frameworks?
… seems most frameworks are moving to WebGPU already
bajones: intermixing content is feasible in browsers.. feels like a heavy burden to mandate it.. cleaner break not to attempt it
… what would the API shape look like?
bajones: .. it should be decided at session creation time
<Zakim> Brandel, you wanted to ask if this is exclusively about rendering _to XR_ vs. simple use, and the bidirectionality of it
Brandel: is this mixing only for WebXR?
bajones: if I opt into a WebGPU-based XR session then WebGL no longer functions
… it is not that they cannot intermix, it is more about the session's presentation frames
bajones: benefit is e.g. make it easier to reason about projection matrices
<Zakim> alcooper, you wanted to ask if this is a system or headset type feature
MikeW: method of selection - function call or flag.. no hard opinion
alcooper: is this a feature of the computer, the headset, or the browser?
bajones: browser.. e.g. visionOS only accepts Metal.. the browser deals with formats
… make it optional.. most content will not use it
… most existing content loads resources before XR session creation
alcooper: do we need more entry points - not just a WebGPU flag, but more enum states?
bajones: we can also use session type if we need to later..
alcooper: should we consider session type
bajones: we would expose more privacy bits
bialpio: (discussion on #7) extend XR dictionary instead of adding to required / optional features?
bajones: suggest new enum if we extend the dictionary
bajones: place comments on the issue regarding preference, easy for implementations to change
MikeW: no real preference on which option, easy to change
alcooper: slight preference against new method, any other one is fine
ada: feature is fine
alcooper: add issue to the spec to revisit
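For reference, the two dictionary-based shapes discussed could look like this (the 'webgpu' feature string and the renderingApi member are both illustrative only; a separate method was the third option):

  // Option A: a feature descriptor at session creation
  const session = await navigator.xr.requestSession('immersive-vr', {
    requiredFeatures: ['webgpu'],
  });

  // Option B: a new XRSessionInit member carrying an enum
  const session2 = await navigator.xr.requestSession('immersive-vr', {
    renderingApi: 'webgpu',  // hypothetical enum: 'webgl' | 'webgpu'
  });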
trevorPicoXR: rough timeline on standardization?
bajones: prototyping stage, hopeful to become close to shipping by end of 2025
MikeW: should the implementation conform to WebGL conventions or WebGPU conventions when WebGPU is used?
bajones: pretty clear we should adopt API conventions of WebGPU when WebGPU is the rendering option for WebXR
bajones: developers are more likely to trip up if we perform conversions back to WebGL / current WebXR specification conventions
alcooper: sticking with WebGPU's API conventions would give us the best developer experience
<alcooper> A
<bajones> A
<yonet> A
ada: (a) is for WebGPU conventions for WebGPU, (b) is for keeping WebGL conventions for WebGPU
A
<Brandel> A
<bialpio> A
<ada> A
<song> A
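One concrete difference behind this poll: WebGL clip-space depth spans [-1, 1] while WebGPU's spans [0, 1]. If the projection matrices kept WebGL conventions, every WebGPU app would need a correction along these lines (column-major, matching WebXR's matrix layout):

  // Remaps clip-space z from WebGL's [-1, 1] to WebGPU's [0, 1]:
  // z' = 0.5 * z + 0.5 * w
  const glToGpuDepth = new Float32Array([
    1, 0, 0,   0,
    0, 1, 0,   0,
    0, 0, 0.5, 0,
    0, 0, 0.5, 1,
  ]);
  // gpuProjection = glToGpuDepth * glProjection (left-multiply)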
cabanier: it seems counterproductive for implementors to go outside of the group to decide things
<Zakim> bajones, you wanted to bring up one more WebGPU topic, about better guarantees for resource needs
bajones: it is not our intent to do anything outside the group. Specifically regarding the Layers API: there is a carve-out for XRProjectionLayer in the Layers API specification today, and the optional 'layers' feature requires support for all the layer types
bajones: focused just on getting pixels on the screen; whether or not we intend to ship before the full Layers API is supported, we want to turn the feature on for developers first
https://github.com/immersive-web/WebXR-WebGPU-Binding/issues/10
bajones: formats accepted by the compositor (issue #10). In WebGPU we have rgba8unorm, bgra8unorm, and rgba16float; should we follow that or only allow the preferred format?
MikeW: stick with the existing WebGPU conventions of rgba8unorm, bgra8unorm, and rgba16float; open to change if performance is an issue
bajones: ok with that, may mean a copy is needed on some platforms
bajones: is rgba16float necessary or not initially?
MikeW: keep rgba16float for now, HDR becoming more prevalent
bajones: no problem with that
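A sketch of how the preferred-format path might look, mirroring XRWebGLBinding conventions (getPreferredColorFormat appears in the explainer as currently written, but the final API may differ):

  // Reusing the binding from the earlier sketch: ask which format avoids a
  // compositor copy on this platform and use it for the projection layer.
  const preferred = XRGPUBinding.getPreferredColorFormat();  // e.g. 'bgra8unorm'
  const projLayer = binding.createProjectionLayer({ colorFormat: preferred });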
bajones: consider this the plan of record (POR) for prototypes unless there are objections or serious problems?
Brandel: does it have bearing on display characteristics?
bajones: it is somewhat independent - it's canvas configuration; if we want to support HDR rendering then we need to discuss it with the group
MikeW: gives a brief explanation of WebGPU and how it handles HDR / SDR
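For context, on the canvas side WebGPU picks SDR vs. HDR output at configure time; the recently added (and still maturing) tone-mapping option looks like this:

  // 'extended' tone mapping lets values outside the SDR range pass through
  // on HDR-capable displays; 'standard' clamps to SDR.
  const context = canvas.getContext('webgpu');
  context.configure({
    device,
    format: 'rgba16float',
    toneMapping: { mode: 'extended' },
  });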
bajones: This is probably closed up already as it's pretty trivial. In the part of the spec where we describe fingerprinting considerations of "isSessionSupported":
https://github.com/immersive-web/webxr-input-profiles/issues/267
bajones: The recommendation was that, while it is currently an entire section, it should be folded into the subsection above it.
ada: I think we got resolution on the MX ink issue...?
cabanier: Yes, I think I was going to make a PR to the spec to indicate that the ray should originate from the tip of the stylus
alcooper: Yes, I think we thought it was implied but we can make it more explicit
ada: I hoard the minutes which could be useful for retrieval
Unconference
ada: We are now in Unconference time!
… we only have one issue identified, but now is a good time to bring up anything else you want to bring up
https://github.com/immersive-web/proposals/issues/15
bajones: I'd like to talk about something regarding WebGPU:
… There are certain areas that the WebXR spec is fairly loose about, like the number of views returned, that WebGPU would very much like to have crisper definitions around
<ada> ack
ada: is this an instance where we should agree on standard numbers, to avoid fingerprinting risk, or allow device-specific values?
bajones: In many cases the values will be simple, like "two" - it's not necessarily about how many you _do_ get, but how many you _could_ get.
… in the event where you're doing something like rendering for Looking Glass and requiring ~40 views, there's a good chance you'll be fingerprintable anyway.
bajones: This is beneficial for setting ceilings on how much infrastructure to request while establishing the pipelines that WebGPU is built around
bajones: WebGL didn't have this rigidity, but WebGPU wants a clearer scope for the worst case it'll be expected to handle
bajones: There may be benefits for WebGL as well. I don't know if there are other attributes that would benefit from this specificity, but being able to proactively anticipate maximums is a useful exercise
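A sketch of why a guaranteed ceiling helps: with a known maximum view count, the color attachment array and the pipelines that render into it can be sized once, up front (maxViews is a hypothetical guaranteed value here, not an existing API):

  // Allocate one texture array layer per view the session could ever return.
  const maxViews = 2;  // hypothetical platform-guaranteed ceiling
  const colorTexture = device.createTexture({
    size: { width: 2048, height: 2048, depthOrArrayLayers: maxViews },
    format: 'bgra8unorm',
    usage: GPUTextureUsage.RENDER_ATTACHMENT,
  });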
MikeW: tentatively, keeping the maximum at two seems reasonable. We are still very early in this implementation, but I don't see where people may need more
bajones: Varjo has some devices with multiple views and multiple levels of pixel density - however, they usually have modes that render only two at a time.
… we do have ways to ask for more - using "secondary views"
<Zakim> ada, you wanted to mention using secondary views for accessibility
bajones: as such, it may only matter if folks are using secondary views. Maybe we only allow more views when that feature is in use.
ada: I had been experimenting with secondary views to generate accessibility information buffers
<Zakim> Brandel, you wanted to talk about autostereoscopic monitors and junk
Brandel: Some display devices generate a large number of views, autostereoscopic monitors etc - but they generally require heavily-customized UAs to run them
<Zakim> ada, you wanted to ask about WebGPU
ada: Would these values need to be established before the XRSession?
bajones: I would not suggest doing this outside of a session. It may be nice if you wanted to set up ahead of the session, but the buffers aren't generally going to be so huge that you'd need to do it before launching. I'd be worried about exposing this before that.
ada: Is adding additional content to a scene expensive as well, would it be helpful to know how many controllers a user might present etc?
bajones: Possibly! It may be helpful to know the maximum number of input devices. Being able to bucket to not-exact numbers, e.g. "four", may be useful. I hadn't thought about that.
cabanier: We never support more than two input sources at the same time - that's why we introduced `trackedSources`
ada: Apple hasn't put inputs into trackedSources yet
cabanier: trackedSources should be unlimited, no? That seems to be the point
bajones: Theoretically - if you need to show new input sources, you may need to create new resources for that display - that's why I say we may need deeper thinking on it.
https://github.com/immersive-web/proposals/issues/15
bajones: Even if the ceiling is "absurdly" large, like 64, it's unlikely to be harmful - it would mostly be about the maximum length of arrays. To ada's point, it would likely pertain to both inputSources and trackedSources together
trevorPicoXR: This is an old issue I created a while ago - to support multiple WebXR apps running concurrently
trevorPicoXR: since then, operating systems like visionOS have made "shared-space" apps that resolve some of these concepts, and Meta's recent announcement may include solutions
… And things like model tag may resolve some of the issues as well
ada: Vision Pro does have a context where a shared display can be achieved via the "Mac Virtual Display" of your connected machine
ada: We don't have any other apps that do that, I'm not aware of any plans for furthering that
cabanier: re: the Spatial SDK, that's not multi-app. It's like a browser in some ways, that can start in 2D and then "go immersive" - composing things with layers and planes etc.
trevorPicoXR: I see. Some of the desires this was related to are solved by model and the ability to see multiple objects, but some are related to other tasks like movie theatres and Google Docs etc.
… There is also the idea of having more than model to support this, with things like Exokit
bajones: Overall, it feels like we're coming at this concept from two potential ends of a spectrum. It's not clear which end gets us to this place first.
bajones: As you indicate, the model tag could get you into a place like that, but we're a long way away from it.
bajones: it seems like we're likely years and years from that being able to facilitate a system like that. OTOH, WebXR is a deep, rich, single-experience way of approaching this.
bajones: Looking at visionOS today, it seems like anything in a shared environment has to take a very different tack with respect to rendering - becoming much more like the model tag
bajones: it seems like you'd have to crunch things down into a shape that's appropriate to render in something like visionOS. It's a great goal, but I can't make a call on which side is the right one to approach the problem from.
cabanier: I do think it could be possible to have "cooperative stereo buffers" that could play together - I think maybe Apple proposed something to that effect
cabanier: Potentially by constraining interaction or views to a specific area, it should be possible
bajones: Based on what Rik was saying, there may be platform capabilities that I wasn't aware of that could support that. I would like to hear more about that!
<Zakim> Brandel, you wanted to talk about the different parts
Brandel: talked about how who does the rendering is the really important part
AOB
ada: Thinks maybe we can close up!
atsushi: We re-chartered today! Please ask to rejoin the WG!
song: I am from China Mobile, where we are going to start 6G work under "IMT-2030" - immersive communication is a top priority for our industry
… I would like to find an occasion for immersive web topics. I had thought that the CG might be a place to discuss things, but I see that it's not very active. Is there a place for a usage-focused discussion?
ada: This is more focused on the infrastructure that builds this, but entities like the Metaverse Standards forum may be another place to look
NeilT: MSF is very professional! I will add that it is not itself a standards body
… it's a neutral territory where everyone can collaborate
song: I work with the W3C on this, and on WebVR and WebAR in my company
ada: We do have a community group, but it acts mainly as an incubation group for the WG.
atsushi: There is a proposal repository that contains the new features that people would like to start. it's in the CG rather than the WG. Once there is a mature understanding of the specification, we bring it to the WG.
<alcooper> https://
<alcooper> I think
<bajones> Apologies, but I need to drop off now. Thanks all for a productive TPAC!