
ARVR END

Tuesday, December 5, 2023 9:38 PM

WHY IS 6DOF REQUIRED FOR VR?

6 Degrees of Freedom (6DoF) is crucial for VR because it allows users to move around
freely within the virtual environment, creating a more immersive and realistic
experience. With 6DoF, users can not only rotate their heads to look around but also
physically move their bodies, enabling them to walk, run, crouch, and even jump in the
virtual world. This level of movement mimics our real-world interactions, making the VR
experience more natural and believable.


3DoF VR, on the other hand, only tracks rotational movements, limiting users to turning
their heads to navigate the virtual space. While this provides some level of immersion, it
doesn't allow for the same level of exploration and interaction as 6DoF VR.

Here's a breakdown of the benefits of 6DoF VR:

1. Enhanced Immersion: 6DoF VR creates a more immersive experience by enabling users to move around freely within the virtual environment. This physical interaction with the virtual world makes it feel more real and believable.

2. Improved Interaction: 6DoF VR allows for more natural and intuitive interactions with
virtual objects. Users can physically reach out and grab objects, manipulate them, and
even walk around them, just like they would in the real world.

3. Increased Realism: 6DoF VR enhances the realism of VR experiences by allowing users to move in a way that is consistent with their real-world movements. This can be particularly important for experiences that involve physical activities, such as sports simulations or action-adventure games.

4. Reduced Motion Sickness: Some users experience motion sickness when using 3DoF
VR due to the mismatch between their visual movements and their physical
movements. 6DoF VR can help to reduce motion sickness by providing a more
physically accurate representation of the user's movements.



5. Expanded VR Applications: 6DoF VR opens up a wider range of VR applications,
including virtual training, education, and collaboration. The ability to move freely within
the virtual world makes these applications more effective and engaging.

Overall, 6DoF VR is essential for creating truly immersive and realistic VR experiences.
It allows users to move around freely, interact with virtual objects naturally, and
experience the virtual world in a way that is consistent with their real-world movements.
As VR technology continues to evolve, 6DoF will become even more important for
creating engaging and impactful VR experiences.
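To make the distinction concrete, here is a minimal sketch of the state each tracking mode reports, assuming a simple Euler-angle representation (real headsets typically report orientation as a quaternion):

from dataclasses import dataclass

@dataclass
class Pose3DoF:
    # Rotation only: the headset reports where you are looking.
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class Pose6DoF(Pose3DoF):
    # Adds translation: where your head is in tracking space (meters),
    # which is what lets you walk, crouch, and lean in the virtual world.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0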

From <https://bard.google.com/chat/76eb33eab7a22b28>

What is Google Cardboard? How does it fit into the VR world?

Google Cardboard is a low-cost, DIY virtual reality (VR) headset made of cardboard and other
simple materials. It was developed by Google in 2014 as a way to introduce people to VR
without the need for expensive hardware.


Google Cardboard works by using the smartphone's display as the VR screen. The smartphone
is placed inside the headset, and lenses magnify the display to create a stereoscopic image.
The headset also has sensors that track the user's head movements, which allows the VR
experience to be interactive.
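As a rough illustration of the rotation-only head tracking involved, here is a minimal sketch that integrates gyroscope readings into a head-orientation matrix; the angular velocity omega (rad/s) is an assumed input, and production trackers use quaternions plus sensor fusion (gyro + accelerometer) to limit drift:

import numpy as np

def update_orientation(R, omega, dt):
    # Skew-symmetric matrix of the angular velocity vector.
    wx, wy, wz = omega
    W = np.array([[0.0, -wz,  wy],
                  [ wz, 0.0, -wx],
                  [-wy,  wx, 0.0]])
    # First-order update: R <- R (I + W dt). The resulting matrix is
    # used to rotate the rendered scene opposite to the head motion.
    return R @ (np.eye(3) + W * dt)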



Google Cardboard has been very successful in introducing people to VR. It is estimated that
over 20 million Cardboard viewers have been shipped since its launch. The headset has been
used for a variety of purposes, including gaming, education, and entertainment.

Google Cardboard is a good option for people who are new to VR and want to try it out without
spending a lot of money. However, it is important to note that Cardboard VR is not as immersive
as high-end VR headsets, such as the Oculus Rift and HTC Vive.

Here are some of the pros and cons of Google Cardboard:

Pros:

• Affordable

• Easy to set up

• Wide range of apps available

• Good for introductory VR experiences

Cons:

• Not as immersive as high-end VR headsets

• Can be uncomfortable to wear for long periods of time

• Not suitable for all types of VR experiences



Overall, Google Cardboard is a great way to introduce people to the world of VR. It is a fun,
affordable, and easy-to-use headset that can provide a variety of immersive experiences.

From <https://bard.google.com/chat/c07d025c2f578b23>

Which transformation helps achieve a zooming effect on the screen?

Scaling is the transformation that achieves a zooming effect on the screen. Scaling is the process of enlarging or reducing the size of an object. When you zoom in on an object, you are scaling it up, making it appear larger. When you zoom out, you are scaling it down, making it appear smaller.

Scaling can be applied to both 2D and 3D objects. In 2D graphics, scaling is often used to
resize images or text. In 3D graphics, scaling is used to resize objects in a scene or to change
the size of the camera's view.
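For example, zooming in about a fixed point (say, the cursor position) is a translate-scale-translate composition. A minimal sketch in homogeneous 2D coordinates, with illustrative screen coordinates:

import numpy as np

def zoom_about(cx, cy, s):
    # Move (cx, cy) to the origin, scale uniformly by s, move back.
    T_to   = np.array([[1, 0, -cx], [0, 1, -cy], [0, 0, 1]], float)
    S      = np.array([[s, 0, 0],   [0, s, 0],   [0, 0, 1]], float)
    T_back = np.array([[1, 0, cx],  [0, 1, cy],  [0, 0, 1]], float)
    return T_back @ S @ T_to

M = zoom_about(400, 300, 2.0)        # zoom in 2x about (400, 300)
print(M @ np.array([500, 300, 1]))   # -> [600. 300. 1.]: points move apart

A factor s greater than 1 zooms in; a factor between 0 and 1 zooms out.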

There are three types of scaling:

• Uniform scaling: This type of scaling scales an object in all directions equally. This means that
the object's width, height, and depth will all be multiplied by the same factor.

• Non-uniform scaling: This type of scaling scales an object in different directions by different
amounts. This means that the object's width, height, and depth will all be multiplied by different
factors.

• Anisotropic scaling: Strictly speaking this is another form of non-uniform scaling, and the term comes up most often in texture mapping: anisotropic filtering samples a texture at different rates along different directions so that it stays sharp, rather than becoming blurry, when it is stretched at oblique viewing angles.

Scaling is a powerful transformation that can be used to create a variety of effects in both 2D
and 3D graphics. It is a versatile tool that can be used to resize objects, change the camera's
view, and create illusions.

From <https://bard.google.com/chat/c07d025c2f578b23>

Explain the 2D viewing pipeline.



The 2D viewing pipeline is the series of steps that transforms 2D objects from their modeling coordinates, through world and viewing coordinates, to device coordinates. World coordinates describe the objects in the scene being composed, while device coordinates are the coordinates of the objects on the display device. The viewing pipeline ensures that the objects are displayed correctly on the screen, regardless of the size or position of the display device.

Here are the steps involved in the 2D viewing pipeline:

1. Modeling transformation: This step converts the objects from modeling coordinates to world coordinates. Modeling coordinates are the coordinates of the objects relative to a local reference frame, which can be any coordinate system that is convenient for modeling the object; for example, it could be centered at the origin of the object or at some other point on the object.

2. World-to-view transformation: This step converts the objects from world coordinates to viewing coordinates. Viewing coordinates are the coordinates of the objects relative to the viewing coordinate system, which defines the position and orientation of the camera. The world-to-view transformation is typically performed using a combination of translation and rotation.

3. Normalization: This step maps the viewing coordinates into the range [-1, 1]. Normalization decouples the scene from any particular display: because each display device has its own finite resolution, the normalized coordinates can be mapped to the correct pixels on any screen.

4. Viewport transformation: This step converts the normalized coordinates to viewport coordinates. Viewport coordinates are the coordinates of the objects relative to the viewport, the area on the display device where the objects will be displayed. The viewport transformation is typically performed using a combination of translation and scaling.

5. Device transformation: This step converts the viewport coordinates to device coordinates, the coordinates of the pixels on the display device. The device transformation is typically performed by the graphics hardware.

The 2D viewing pipeline is a complex process, but it is necessary to ensure that 2D objects are
displayed correctly on the screen. The viewing pipeline is used in a variety of applications,
including video games, computer-aided design (CAD) software, and virtual reality (VR)
applications.



Here is a diagram that illustrates the 2D viewing pipeline:

Modeling coordinates -> World coordinates -> Viewing coordinates -> Normalized coordinates -> Viewport coordinates -> Device coordinates
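In practice the normalization and viewport steps are often collapsed into a single window-to-viewport matrix. A minimal sketch (the window and viewport bounds here are illustrative):

import numpy as np

def window_to_viewport(xw_min, yw_min, xw_max, yw_max,
                       xv_min, yv_min, xv_max, yv_max):
    # Scale factors between the world window and the device viewport.
    sx = (xv_max - xv_min) / (xw_max - xw_min)
    sy = (yv_max - yv_min) / (yw_max - yw_min)
    # Translate the window corner to the origin, scale, then translate
    # to the viewport corner, composed into one homogeneous matrix.
    return np.array([[sx, 0.0, xv_min - sx * xw_min],
                     [0.0, sy, yv_min - sy * yw_min],
                     [0.0, 0.0, 1.0]])

M = window_to_viewport(0, 0, 10, 10, 0, 0, 640, 480)
print(M @ np.array([5, 5, 1]))   # window center -> [320. 240. 1.]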

From <https://bard.google.com/chat/c07d025c2f578b23>

How is gesture recognition-based interaction used in the virtual environment?

Gesture recognition-based interaction is an increasingly popular way to interact with virtual environments (VEs). It offers a more natural and intuitive way to interact with VEs than traditional methods such as keyboards and mice.

How does gesture recognition work in VEs?

Gesture recognition systems use a variety of techniques to track and recognize a user's
gestures. These techniques can be broadly divided into two categories:

• Vision-based: Vision-based gesture recognition systems use cameras to capture images or video of the user's body. These images are then processed to identify and track the user's gestures.

• Sensor-based: Sensor-based gesture recognition systems use sensors to track the user's
movements. These sensors can be worn on the user's body or embedded in the VE
environment.
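As a toy example of the recognition step itself, here is a minimal sketch that classifies a pinch gesture from tracked hand landmarks. The (21, 3) landmark array and the fingertip indices follow MediaPipe's hand-landmark convention, but any tracker that reports fingertip positions would work the same way:

import numpy as np

THUMB_TIP, INDEX_TIP = 4, 8   # landmark indices (MediaPipe convention)

def is_pinching(landmarks, threshold=0.05):
    # landmarks: (21, 3) array of normalized (x, y, z) positions
    # reported by a vision-based hand tracker for one hand.
    d = np.linalg.norm(landmarks[THUMB_TIP] - landmarks[INDEX_TIP])
    return d < threshold   # thumb and index tips close together = pinch

A VE would typically map a detected pinch to grabbing the nearest virtual object.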

Benefits of gesture recognition in VEs

Gesture recognition offers a number of benefits for interacting with VEs, including:



• Natural and intuitive: Gestures are a natural way of communicating, so users can interact with
VEs in a way that feels comfortable and intuitive.

• Immersive: Gesture recognition can make the VE experience more immersive by allowing users
to interact with objects in the VE using their own bodies.

• Hands-free: Gesture recognition can free up the user's hands, which can be especially useful
for tasks that require a lot of dexterity.

Applications of gesture recognition in VEs

Gesture recognition is being used in a wide variety of VE applications, including:

• Gaming: Gesture recognition is being used to control games and interact with virtual
objects. For example, users can use gestures to throw punches, cast spells, or drive cars.

• Education: Gesture recognition is being used to create interactive learning experiences. For
example, students can use gestures to answer questions, manipulate virtual objects, and
collaborate with other students.

• Training: Gesture recognition is being used to create immersive training simulations. For
example, surgeons can use gesture recognition to practice surgical procedures on virtual
patients.

• Design: Gesture recognition is being used to create more natural and intuitive design tools. For
example, designers can use gestures to sketch, rotate, and scale objects.

Future of gesture recognition in VEs

As gesture recognition technology continues to develop, we can expect to see even more
innovative and immersive VE applications. Gesture recognition has the potential to revolutionize
the way we interact with computers and with the world around us.

Here are some examples of how gesture recognition is being used in VEs today:
• In the game "The Gallery," users can use gestures to manipulate objects in the game world. For
example, they can use gestures to open doors, pick up objects, and throw objects.

• In the educational app "Anatomy Next," students can use gestures to interact with virtual models
of the human body. For example, they can use gestures to rotate the models, zoom in on
different parts of the body, and identify different organs and muscles.

• In the training simulator "Immersive Surgical," surgeons can use gestures to practice surgical
procedures on virtual patients. For example, they can use gestures to make incisions, suture
wounds, and remove tumors.

• In the design tool "Tilt Brush," designers can use gestures to sketch, rotate, and scale objects in
3D space.

These are just a few examples of the many ways that gesture recognition is being used in VEs
today. As the technology continues to develop, we can expect to see even more innovative and
immersive applications.

From <https://bard.google.com/chat/c07d025c2f578b23>

What design considerations should be followed when developing good haptic feedback interfaces?

Designing effective haptic feedback interfaces requires careful consideration of various factors
to provide users with an intuitive, engaging, and non-intrusive sensory experience. Here are
some key design considerations to follow:

1. Relevance and Meaningfulness: Ensure that haptic feedback aligns with the context and action of the user interaction. It should provide clear and consistent sensory cues that reinforce the user's understanding of the virtual environment and their actions within it.

2. Timeliness and Accuracy: Haptic feedback should be delivered promptly and synchronized with the corresponding visual or auditory feedback. This temporal correlation enhances the user's perception of the event and strengthens the overall sensory experience.

3. Intensity and Duration: Carefully calibrate the intensity and duration of haptic feedback to match the significance of the event or action. Avoid overpowering or distracting users with overly strong or prolonged feedback.

4. Variety and Differentiation: Employ a diverse range of haptic effects to convey different types of feedback, such as textures, vibrations, or forces. This variety helps users distinguish between different actions, events, and objects in the virtual environment.

5. Personalization and User Preferences: Allow users to customize haptic feedback settings to suit their individual preferences and needs. This could include adjusting intensity, duration, or specific effects.

6. Device Compatibility and Limitations: Consider the capabilities and limitations of the haptic feedback device being used. Design feedback patterns that are compatible with the device's capabilities and avoid overwhelming its capacity.

7. Accessibility and Inclusiveness: Ensure that haptic feedback design does not exclude or disadvantage users with sensory impairments or disabilities. Consider alternative feedback modalities or provide customization options for accessibility.

8. Safety and Comfort: Prioritize the safety and comfort of users by avoiding overly intense or jarring haptic feedback that could cause discomfort or physical strain.

9. Energy Efficiency: Optimize haptic feedback design to minimize power consumption, especially for portable devices. This consideration extends battery life and promotes sustainability.

10. User Testing and Iteration: Continuously test and refine haptic feedback implementations through user testing to gather feedback and identify areas for improvement. Iterative design ensures that the feedback is effective, enjoyable, and well-received by users.

By carefully considering these design principles, developers can create haptic feedback
interfaces that enhance user engagement, improve interaction efficiency, and provide a more
immersive and intuitive virtual experience.
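As an illustration of consideration 3 (and the comfort ceiling of consideration 8), here is a minimal sketch of one way to map an event's significance to a vibration pulse; the amplitude and duration limits are made-up placeholder values, not specifications of any real device:

def haptic_pulse(significance, max_amplitude=0.8, max_duration_ms=150):
    # Clamp significance to [0, 1] so malformed events cannot produce
    # an overpowering pulse.
    s = min(max(significance, 0.0), 1.0)
    amplitude = s * max_amplitude                    # stays below a comfort ceiling
    duration_ms = 20 + s * (max_duration_ms - 20)    # minor events stay short
    return amplitude, duration_ms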

From <https://bard.google.com/chat/c07d025c2f578b23>

Explain how AR is useful in the Google Lens app.

Augmented reality (AR) is a technology that superimposes a computer-generated image on a user's view of the real world, thus providing a composite view. AR is used in the Google Lens app in a number of ways, including:



• Object recognition: Google Lens can use AR to identify objects in the real world and then
provide additional information about those objects. For example, if you point your phone at a
flower, Google Lens can use AR to identify the type of flower and then provide you with
information about its name, genus, species, and other characteristics.

• Translation: Google Lens can use AR to translate text in the real world. For example, if you
point your phone at a foreign language sign, Google Lens can use AR to overlay the translation
of the sign on top of the original text.

• Shopping: Google Lens can use AR to help you shop. For example, if you see an item you like
in a store, you can point your phone at the item and Google Lens can use AR to show you
where you can buy it online.

• Navigation: Google Lens can use AR to help you navigate. For example, if you are looking for a
specific location, you can point your phone at the street and Google Lens can use AR to show
you directions to that location.

AR is still a relatively new technology, but it is already being used in a number of innovative
ways in the Google Lens app. As AR technology continues to develop, we can expect to see
even more creative and useful applications of AR in the future.

From <https://bard.google.com/chat/8217bb1cbcf9b465>

Explain how you can develop a flight simulator with the help of VR technology. This simulator puts you in the cockpit of an airplane, and you can fly anywhere in the world. Explain in detail the design steps, implementation details, and the components required to develop the simulator. (Use the tool of your choice.)
Certainly, here's a detailed explanation of how to develop a flight simulator using VR
technology:



Design Steps

1. Conceptual Design: a. Define the purpose and scope of the flight simulator. b. Identify the target audience and their level of expertise. c. Determine the types of aircraft to be simulated. d. Specify the desired level of realism and immersion.

2. Technical Design: a. Select a suitable VR development platform, such as Unity or Unreal Engine. b. Design the 3D cockpit environment and aircraft models. c. Develop the flight dynamics model to simulate realistic aircraft behavior. d. Implement the VR interaction system to control the aircraft.

3. Content Design: a. Create a variety of scenery and weather conditions. b. Design training scenarios and missions for different skill levels. c. Develop tutorials and documentation to guide users.

Implementation Details

1. 3D Modeling: a. Use 3D modeling software to create high-quality models of cockpits, aircraft, and scenery. b. Ensure the models are optimized for VR performance. c. Use textures to add realism and detail to the models.

2. Physics Engine: a. Implement a physics engine to simulate realistic aircraft movement. b. Consider factors like aerodynamics, gravity, and engine thrust. c. Adjust the physics parameters to achieve the desired level of realism (a toy sketch of such an update step follows this list).

3. VR Interaction: a. Develop a system for users to interact with the cockpit controls. b. Use hand tracking, motion controllers, or VR joysticks for input. c. Provide visual feedback to confirm user actions.

4. Sound Design: a. Record or create realistic sounds for aircraft engines, instruments, and ambient effects. b. Use spatial audio to enhance the immersive experience. c. Synchronize sound effects with visual cues.

5. User Interface: a. Design a user-friendly interface for navigation and settings. b. Provide clear instructions and feedback to users. c. Allow for customization of the VR experience.
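Here is the toy physics sketch referenced in the Physics Engine step: a point-mass update with thrust, lift, quadratic drag, and gravity, using made-up parameter values. A production simulator would model full 6DoF rigid-body aerodynamics instead:

import numpy as np

def step(pos, vel, thrust, lift, dt, mass=1000.0, drag_coeff=0.02):
    # Gravity pulls down; thrust acts along the current direction of
    # travel; lift acts straight up; drag grows with the square of speed.
    g = np.array([0.0, -9.81, 0.0])
    speed = np.linalg.norm(vel)
    forward = vel / speed if speed > 0 else np.array([1.0, 0.0, 0.0])
    drag = -drag_coeff * speed * vel
    accel = g + (thrust * forward + np.array([0.0, lift, 0.0]) + drag) / mass
    # Semi-implicit Euler integration: update velocity, then position.
    vel = vel + accel * dt
    pos = pos + vel * dt
    return pos, vel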

Components Required

Hardware:

• Powerful computer with a dedicated graphics card.

• VR headset, such as Oculus Rift, HTC Vive, or Valve Index.

• Motion controllers or VR joysticks (optional).

Software:

• VR development platform, such as Unity or Unreal Engine.

• 3D modeling software, such as Blender or Autodesk Maya.

• Physics engine middleware, such as PhysX or Bullet Physics.

• Audio editing software, such as Audacity or Adobe Audition.

Additional Considerations:

1. Performance Optimization: a. Optimize the 3D models and textures for VR performance. b. Use efficient rendering techniques and algorithms. c. Balance visual fidelity with frame rate to avoid motion sickness.

2. Testing and Debugging: a. Conduct thorough testing to ensure the simulator's stability and performance. b. Gather feedback from users to identify and address issues. c. Continuously refine and improve the simulator based on testing and feedback.

3. Deployment and Distribution: a. Choose an appropriate platform for deployment, such as PC or standalone VR devices. b. Create installer and distribution packages for easy installation. c. Provide ongoing support and updates for the simulator.


From <https://bard.google.com/chat/8217bb1cbcf9b465>

In what aspects are the Metaverse and virtual reality technologies interlinked? Explain in detail.
Sure, here is a detailed explanation of how the Metaverse and virtual reality technologies are
interlinked:

The Metaverse

The Metaverse is a term used to describe a shared virtual world that is persistent,
interconnected, and immersive. It is a concept that has been around for many years, but it has
only recently become possible to create a Metaverse thanks to advances in virtual reality (VR)
technology.

The Metaverse is still in its early stages of development, but it has the potential to revolutionize
the way we interact with each other and the world around us. It could be used for a variety of
purposes, including:

• Socializing and gaming: The Metaverse could be used to create virtual worlds where people can
meet up with friends, play games, and attend events.

• Education and training: The Metaverse could be used to create immersive learning experiences
that could help people learn new skills and knowledge.

• Business and collaboration: The Metaverse could be used to create virtual offices and meeting
rooms where people can collaborate on projects and attend meetings.

• Retail and shopping: The Metaverse could be used to create virtual stores where people can
browse and purchase products.

Virtual Reality (VR)

VR is a technology that allows users to experience a simulated environment that is visually
realistic and immersive. VR headsets are worn on the head and display a computer-generated
image that surrounds the user's field of view. VR headsets also often have tracking sensors that
can detect the user's head movements and body movements, allowing them to interact with the
virtual environment in a natural way.

VR is a key technology that will enable the Metaverse to be realized. VR headsets will allow
users to fully immerse themselves in the Metaverse and feel like they are actually present in the
virtual world.

How Metaverse and VR are Interlinked

The Metaverse and VR are interlinked in several ways:

• VR is a key technology that will enable the Metaverse to be realized. Without VR, it would not
be possible to create a shared virtual world that is both immersive and persistent.

• The Metaverse provides a platform for VR applications. The Metaverse is a vast and open
space that can be used to create a variety of VR applications, such as games, social
spaces, and training simulations.

• VR can be used to enhance the Metaverse experience. VR can make the Metaverse experience
more immersive and realistic by providing users with a full field of view and the ability to interact
with the virtual world in a natural way.

In conclusion, the Metaverse and VR are two technologies that are closely intertwined. VR is a
key technology that will enable the Metaverse to be realized, and the Metaverse provides a
platform for VR applications. Together, these two technologies have the potential to
revolutionize the way we interact with each other and the world around us.

From <https://bard.google.com/chat/8217bb1cbcf9b465>

What are fiducial markers? How are they used in AR?

Fiducial markers are visual patterns that are used to track the position and orientation of objects
in augmented reality (AR) applications. They are typically simple patterns, such as squares,
circles, or QR codes, that are easy for a computer to recognize. Fiducial markers are placed on
real-world objects, and then an AR camera tracks the markers and overlays virtual content onto
the real-world view.
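For instance, OpenCV's ArUco module implements this detect-and-track step. A minimal sketch (requires the opencv-contrib-python package; the aruco API differs slightly between OpenCV versions, and "scene.jpg" is a placeholder input):

import cv2

dictionary = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

frame = cv2.imread("scene.jpg")                  # placeholder camera frame
gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
corners, ids, _ = cv2.aruco.detectMarkers(gray, dictionary)

if ids is not None:
    # Each detected marker yields four image-space corners; combined
    # with the camera intrinsics these give the marker's pose, which is
    # where the AR system anchors its virtual content.
    cv2.aruco.drawDetectedMarkers(frame, corners, ids)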


Fiducial markers are used in a wide variety of AR applications, including:

• Games: Fiducial markers can be used to create AR games that are more interactive and
immersive. For example, a game could use fiducial markers to track the position of a player's
head and body, and then overlay virtual objects onto the real-world view.

• Education: Fiducial markers can be used to create interactive learning experiences. For
example, a student could use a fiducial marker to scan a textbook page, and then see an
animated video or 3D model of the topic being studied.

• Retail: Fiducial markers can be used to create AR shopping experiences. For example, a
customer could use a fiducial marker to scan a product in a store, and then see reviews and
additional information about the product.

• Navigation: Fiducial markers can be used to create AR navigation systems. For example, a user
could use a fiducial marker to scan a map, and then see directions overlaid onto the real-world
view.

Fiducial markers are a powerful tool for creating AR applications that are more interactive and
engaging. They are relatively easy to use and can be implemented in a variety of ways. As AR
technology continues to develop, we can expect to see even more creative and innovative uses
for fiducial markers.



From <https://bard.google.com/chat/8217bb1cbcf9b465>

Obtain a transformation matrix for shearing an object with respect to the Y-axis about a specified pivot point (xf, yf).

Sure, here is the transformation matrix for shearing an object with respect to the Y-axis about a specified pivot point (xf, yf). A y-direction shear with factor sy maps each point as x' = x, y' = y + sy(x - xf), shifting y in proportion to the point's distance from the reference line x = xf. In homogeneous 2D coordinates the composite matrix is:

[1   0   0       ]
[sy  1   -sy * xf]
[0   0   1       ]

where sy is the shear factor, which determines the amount of shear applied to the object. A positive shear factor tilts points to the right of the pivot upward, while a negative shear factor tilts them downward.

This matrix is obtained by composing three steps: first translate the object so that the pivot point moves to the origin, apply the basic shear, and then translate back. The translation matrix used is:

[1  0  tx]
[0  1  ty]
[0  0  1 ]

where tx and ty are the x and y translations, respectively (tx = -xf, ty = -yf for the first translation, then tx = xf, ty = yf to move the object back to its original position).

Here is an example of composing the transformation matrices to shear an object (a minimal Python sketch):

import numpy as np

def shear_y_about(sy, xf, yf):
    # Translate the pivot to the origin, shear in y, translate back.
    T1 = np.array([[1, 0, -xf], [0, 1, -yf], [0, 0, 1]], float)
    Sh = np.array([[1, 0, 0], [sy, 1, 0], [0, 0, 1]], float)
    T2 = np.array([[1, 0, xf], [0, 1, yf], [0, 0, 1]], float)
    return T2 @ Sh @ T1

M = shear_y_about(0.5, 2.0, 3.0)   # shear factor 0.5 about pivot (2, 3)

This shears the object with respect to the Y-axis about the pivot point (xf, yf) by a factor of 0.5. Note that for a pure y-direction shear the yf terms cancel in the composite, so only xf affects the result.

I hope this helps!

From <https://bard.google.com/chat/8217bb1cbcf9b465>

A rectangular parallelepiped has edge lengths along the x-axis, y-axis, and z-axis of 2, 3, and 1 respectively. What is the effect of scaling with scaling factors sx = 1/2, sy = 1/3, and sz = 1?
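Applying the scaling matrix S = diag(sx, sy, sz) = diag(1/2, 1/3, 1), each edge length is multiplied by the corresponding factor: the x edge becomes 2 × 1/2 = 1, the y edge becomes 3 × 1/3 = 1, and the z edge stays 1 × 1 = 1. The parallelepiped is therefore compressed into a unit cube, with the z dimension unchanged.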

From <https://bard.google.com/chat/8217bb1cbcf9b465>

Check whether a 3D rotation about the x-axis followed by an equal-angle rotation about the y-axis is equal to a rotation about the y-axis followed by an equal rotation about the x-axis. Justify your answer.

No, 3D rotations about the x-axis followed by an equal angle of rotation about the y-axis are not
equal to rotation about the y-axis followed by equal rotation about the x-axis. This is because 3D
rotations are not commutative, meaning that the order in which rotations are performed matters.

To see why this is the case, consider a simple example using standard right-handed rotations. Rotate the point (1, 0, 0) by 90 degrees about the x-axis; since the point lies on the x-axis, it is unchanged. A further 90-degree rotation about the y-axis then carries it to (0, 0, -1). If instead we rotate 90 degrees about the y-axis first, the point moves to (0, 0, -1), and a further 90-degree rotation about the x-axis carries it to (0, 1, 0). The two orders produce different results.
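A quick numerical check of this example, using the standard rotation matrices for 90 degrees about each axis:

import numpy as np

Rx = np.array([[1, 0,  0],     # 90 degrees about the x-axis
               [0, 0, -1],
               [0, 1,  0]])
Ry = np.array([[ 0, 0, 1],     # 90 degrees about the y-axis
               [ 0, 1, 0],
               [-1, 0, 0]])
p = np.array([1, 0, 0])

print(Ry @ Rx @ p)   # x-axis first, then y-axis -> [ 0  0 -1]
print(Rx @ Ry @ p)   # y-axis first, then x-axis -> [0 1 0]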

This is because the first rotation changes the point's position relative to the second rotation axis: after the y-axis rotation, the point no longer lies on the x-axis, so the subsequent x-axis rotation moves it to a position the opposite order never reaches.

The non-commutativity of finite 3D rotations contrasts with infinitesimal rotations. Rotations through very small angles can be approximated by linear (first-order) transformations, and to that order they do commute. Finite rotations, such as 90-degree rotations, cannot be approximated this way, and so the order in which they are applied matters.

In general, the order of 3D rotations matters, and the resulting rotation will be different
depending on the order in which they are performed. This is an important property of 3D
rotations that must be taken into account when working with 3D objects in computer graphics
and other applications.

From <https://bard.google.com/chat/8217bb1cbcf9b465>
