Visual Effects Unit-1 Final
There are many tools created by developers to make animation, such as Blender, Maya, and others. Animation can be of various types: 2D animation, 3D animation, paper animation, traditional animation, puppet animation, and more.
The term "animation" covers a range of topics in today's society, which is full of creativity and visualization. Most people immediately conjure up images of cartoons and various Disney productions when they hear the word, and children love animated films and shows such as Disney features and Doraemon. All cartoons and animated images are a form of animation created by combining thousands of individual images and playing them back in a predetermined order.
Thinking back a few decades, all animation was produced by hand or by painting, and puppet-like structures were built to act out the animation. These, however, are real-world, physical forms of animation; in today's technological era, it is digital animation that continues to advance.
There are numerous animation styles that we may observe on TV, as well as numerous productions and images that differ considerably from live-action productions and films.
• Traditional - also known as cel animation, hand-drawn and 2D. This is the
original method of animation, dating back to the 19th century.
• Stop motion - involves physically moving objects, often made with clay, one
frame at a time.
• Motion graphics - animated graphic design that brings text and images to
life.
Visual effects (VFX) can transport audiences to new worlds and show off mind-bending animations and CGI creations one can only dream of. But before audiences can experience a brave new world, a VFX pipeline needs to be in place to make it a reality. Just what is the VFX pipeline, and what does it entail? You can lean on this handy VFX guide when developing your own customized VFX plan.
The VFX pipeline breaks down the steps of a visual effects workflow for film, television, and digital media projects. It keeps the entire VFX process organized and lets everyone know their role and how it fits into the production timeline, from storyboarding and reference imagery all the way through modeling, rotoscoping, compositing, and lighting (just to name a few).
In smaller productions, one VFX artist may handle the entire workflow, but most
productions use teams of specialized artists. The pipeline brings a level of sanity to a
process that is usually not completely linear. Members of the VFX process often get
involved during the pre-production, production, and post-production stages. To do their
best work (and to ensure they’re not asked to redo various steps), artists should
understand and appreciate each step of the VFX workflow pipeline.
(Figure: VFX pipeline diagram)
The VFX Workflow :
When asking the question "What does VFX mean in editing?", it's important to understand that no two real-world VFX pipeline workflows are identical. We've divided these steps into the pre-production, production, and post-production phases in our VFX roadmap below, but many of the steps often occur in parallel throughout the project.
PRE-PRODUCTION :
1. Research and Development (R&D)
R&D on a video project primarily involves Technical Directors (TDs) who work
with VFX supervisors to plan the technical approach and determine which shots and effects
are technically feasible. Extremely VFX-heavy projects may also involve outside
scientists, engineers, or mathematicians for further guidance.
TDs must ensure that all software and files used throughout the VFX pipeline are
compatible and sometimes create custom software and plug-ins to improve VFX
pipeline efficiency.
Most of the R&D stage is ongoing throughout a project’s lifetime as the material is
tweaked and concepts evolve.
2. Storyboarding
Storyboarding is where the VFX artist team creates visual representations of all the
actions within the script. Character motions and story settings are analyzed and basic
drawings are created to illustrate the desired framing on a shot-by-shot basis.
Like most planning elements, however, storyboarding isn’t final. It’s more about
planting a stake in the ground and giving the VFX artists a solid idea of what the
editorial team wants.
3. Pre-Visualization
Pre-visualization (previs) creates rough animated mock-ups of planned shots so the team can test framing, timing, and effects before committing to expensive production work.
PRODUCTION :
This is when the VFX workflow really gets cracking: most of the shooting takes place, raw video files are created, and VFX dailies are submitted. But plenty of VFX tasks can be done in tandem with the production process.
4. 3D Modeling
3D modeling takes place throughout all three production phases, but in the production
phase, artists transform storyboard art or low-poly 3D models into lifelike
representations. Most 3D modeling is devoted to creating assets such as vehicles or
buildings that either aren’t practical or cost-effective to bring on set, but 3D models are
also used to create characters (to illustrate non-humans or stand in as digital doubles)
and other props.
While 3D models of impractical things like spaceships or the Batmobile are the most
visually interesting, 3D modelers also replace or complement physical objects shot on
set that need improvements in lighting, shadowing, or texture.
5. Matte Painting
Matte painting is one of the oldest VFX techniques in existence and involves
creating visual backgrounds that don’t exist. These days such backgrounds are often created
digitally using LED panels and game engines, often as entire 3D sets for virtual
production, or by chroma keying using a green or blue screen.
Years ago, matte painting was exclusively done using photographs and painted glass
panels (matte paint was used because it doesn’t reflect light).
But matte painting is still used in many productions — including the Harry
Potter films, Game of Thrones, and The Witcher — partly because it can save money.
But it’s pretty limited in what it can do: Matte paintings can’t change their lighting,
camera angles, or other elements.
6. Reference Photography
Throughout the entire production phase, members of the VFX team hang out on set
to take reference photos of actors, scenes, props, and anything else important. These
photos are then used to rig, animate, and add texture to 3D models.
POST-PRODUCTION
7. Rigging
Rigging teams often rely on reference photographs, but motion capture cameras or suits are also often used to capture movement data to aid the rigging and animation process. Rigging teams often get so granular that their jobs can include calculating skin weights and adding digital skeletons and muscles within 3D characters to replicate natural movement.
8. FX and Simulation
FX artists are responsible for creating concepts and scenes that move and react
according to the laws of physics, such as a long shot of a raging battle at sea or in
space — complete with fiery explosions, which in reality can’t exist in space, but
whatever — they look cool. FX artists often work with elements such as fire, smoke,
liquids, and even particles.
FX artists work alongside animators to ensure these simulated elements don’t stick
out (in a bad way) while looking and feeling as natural as possible.
9. Motion Tracking
Motion tracking, also known as match moving, allows VFX artists (in this context,
referred to as match move artists) to insert effects into moving scenes and live-action
footage without the entire thing looking bad. After all, inserting VFX elements into a
static shot is relatively easy, all things considered — but adding the same elements to
a camera move involves many more variables. That’s why motion tracking accounts
for the positioning, orientation, scale, and how the object moves within the shot,
including replicating physical camera moves using virtual cameras in their motion
tracking software.
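To make the idea concrete, here is a minimal sketch of single-point tracking using OpenCV's Lucas-Kanade optical flow, one of the building blocks behind match moving. The clip name and the starting pixel are placeholders, and production trackers are far more robust than this:

```python
import cv2
import numpy as np

# Track one 2D point across a clip with Lucas-Kanade optical flow.
cap = cv2.VideoCapture("shot.mp4")                    # placeholder input clip
ok, prev = cap.read()
prev_gray = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
pts = np.array([[[320.0, 240.0]]], dtype=np.float32)  # placeholder feature

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Estimate where the point moved between the previous and current frame.
    pts, status, err = cv2.calcOpticalFlowPyrLK(prev_gray, gray, pts, None)
    if status[0][0] == 1:                             # 1 = point found this frame
        print("tracked position:", pts[0][0])
    prev_gray = gray
```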
10. Texturing
The texturing process is pretty much as it sounds: It adds textures to the surfaces of
3D models. Texture can include anything from surface color to scaly skin on a reptile,
to reflections in water, to a metallic shine or scratches on a car door. This ensures
models look as realistic as possible.
11. Rotoscoping
Rotoscoping involves artists drawing around and cutting out objects or characters
from frames in the original footage, to use the cutout images against a different background
or context. Rotoscoping has typically been a relatively painful and manual process,
especially in the days before computerized VFX.
12. Lighting
Lighting is typically dealt with once the texture artists have done their thing. It's the last element applied before the effect or computer-generated image (CGI) is complete. Adding and adjusting virtual lighting and shadows on computer-generated scenes or characters to match static or live-action footage, like texturing, helps make them look more realistic while enhancing aspects of the original shot such as color and intensity.
Just like real-life lighting, however, virtual lights must be placed strategically within a
scene.
Lighting artists use tools such as shader settings and lighting maps to achieve
this by positioning spot, area, and directional lights to match the angles and shadows
of the original footage. Once lighting has been applied, the entire scene is handed off to
compositing.
13. Compositing
Compositing, sometimes called stitching, is the final step of the post-production VFX
workflow. While it is the final step in the VFX roadmap, it is also the most
important because it integrates all the various VFX elements with real-life footage to
create a finalized shot or scene.
A bad compositing job can ruin all your otherwise great VFX work up to this point —
so it’s crucial to get it right. The process involves a compositor gathering all the content —
including live-action footage, renders, VFX plates, and matte paintings — and layering them
together in preparation for the next step in the post-production pipeline (typically coloring).
Some shots may require combining just a couple of elements, but others may need to layer
dozens while finalizing lighting, reflections, shadowing, and atmospherics to create a
seamless look and feel.
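To show what "layering" means numerically, here is a minimal sketch of the standard "over" operator in Python with NumPy; the array sizes and values are illustrative, not any particular package's API:

```python
import numpy as np

def over(fg, alpha, bg):
    """Composite a foreground over a background using an alpha matte.

    fg, bg: float arrays of shape (H, W, 3), values in [0, 1]
    alpha:  float array of shape (H, W, 1), 1 = fully opaque foreground
    """
    return fg * alpha + bg * (1.0 - alpha)

bg = np.full((4, 4, 3), 0.2)      # dark background plate
fg = np.ones((4, 4, 3))           # white VFX element
alpha = np.full((4, 4, 1), 0.5)   # semi-transparent matte
print(over(fg, alpha, bg)[0, 0])  # -> [0.6 0.6 0.6]
```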
After Effects
After Effects is regarded as one of the best, if not the best, VFX software around.
That’s partly because it integrates seamlessly with Adobe’s Premiere Pro video editing
software and collaboration tools such as Frame.io, but also because it’s damn good
at what it does. After Effects also has plenty of third-party, customizable VFX templates you
can download to help scale your project.
Fusion :
Blackmagic Design’s Fusion is a great tool for creating immersive 360 or VR video;
stereoscopic 3D effects; and the compositing of 3D models and real-life, live-action
footage. It comes as part of video editing software DaVinci Resolve, and has been
used to create VFX scenes in films such as Guardians of the Galaxy and The Hunger Games, and even cinematics for video games such as Halo 5.
Nuke :
Foundry’s Nuke is VFX and film editing software used by major industry players such
as Walt Disney Animation Studios, Blizzard Entertainment, Sony Pictures Animation,
and DreamWorks Animation, and has been used on productions from The
Crown to Boardwalk Empire. It offers seamless review workflows and the ability to add
VFX elements to dynamic editorial timelines.
Houdini :
Houdini by SideFX is used in the R&D process and at other junctures to come up with
customized effects. It integrates animation design, effects rendering, and character
modeling and provides a host of VFX simulation modules for fluids, crowds, grains,
and other elements including destruction and pyro FX. Houdini also integrates with
other software such as Maya.
Maya :
Autodesk Maya is an industry-standard 3D application used for modeling, rigging, animation, simulation, and rendering, and it sits at the core of many studio animation and VFX pipelines.
HitFilm Pro:
HitFilm Pro (and its free consumer market version, HitFilm Express) is an all-in-one
VFX and video editing tool that allows VFX artists to apply effects directly on the NLE timeline (instead of layering them). The app comes loaded with nearly 1,000 VFX templates and
presets, with features such as masking, 2D and 3D motion tracking, green screen
keying, and particle simulators.
Blender:
Blender is a free, open-source 3D creation suite covering modeling, rigging, animation, simulation, compositing, and motion tracking, which makes it a popular no-cost entry point into VFX work.
VFX Workflow Tips :
• Know the various VFX systems (tools, processes, software, etc.) of the
entire VFX workflow inside and out before embarking on a months-long
project.
• In virtual production, VFX artists must be aware that what looks good on
workstations may not on a gigantic LED wall. Unreal Engine recommends
their In-Camera VFX Production Test to see recommended configurations
for virtual stages.
• Replace physical objects (fires, potholes, tire tracks, etc.) with VFX
renderings to save a ton of money over replicating the same thing in the
physical world, or having to go back and re-shoot certain scenes to get
things perfect.
• Netflix recommends VFX artists working from home use virtual desktop
solutions such as HP RGS or Teradici, a single monitor at 1920×1200 as a
baseline resolution, and dedicated bandwidth and low-latency connections.
Ollie Johnston and Frank Thomas were the men behind the principles. The duo were
two of Disney’s famous Nine Old Men (even Walt Disney himself would call them this).
This group were the studio’s core group of animators. In 1981, Johnston and Thomas
released a book called The Illusion of Life: Disney Animation. The Nine Old Men had been
using the principles for decades, but this was the first time the outside world were made
aware.
These principles of animation are important because combining all 12 helps ground
animation in the real world. The sky is the limit when it comes to using your imagination,
but you also need to consider gravity and other laws of physics. Failure to do so will make
animation much less believable and your audience won’t care about what happens to your
characters, whether hand-drawn or 3D.
The twelve principles they introduced have since become widely recognized as a theoretical bedrock for all animators, whether they are working on animated entertainment, commercials, or web-based explainers.
1) Squash and Stretch:
Squash and stretch is debatably the most fundamental principle. Look at what
happens when a ball hits the ground. The force of the motion squashes the ball flat,
but because an object needs to maintain its volume, it also widens on impact. This is what's called squash and stretch.
This effect gives animation an elastic life-like quality because although it may not seem like
it, squash and stretch is all around you. All shapes are distorted in some way or
another when acted upon by an outside force; it's just harder to see in real life. Squash and stretch imitates that and exaggerates it to create some fun.
When the letters spring from the ground, they elongate to show the impression of
speed. Conversely, the letters squash horizontally when they come into contact with
the ground. This conveys a sense of weight in each letter.
2) Anticipation:
Imagine you’re about to kick a soccer ball. What’s the first thing you do? Do you
swing your foot back to wind up? Steady yourself with your arms? That’s anticipation.
Anticipation is the preparation for the main action. The player striking the soccer ball
would be the main action, and the follow-through of the leg is well… the follow through.
Notice how the progression of action operates in this scene. We first see the woman
as she’s standing on the box. She then bends her knees in anticipation of what’s about
to happen and springs into action by leaping from the ground up into the air.
3) Staging:
When filming a scene, where do you put the camera? Where do the actors go? What do
you have them do? The combination of all these choices is what we call staging.
Staging is one of the most overlooked principles. It directs the audience’s attention
toward the most important elements in a scene in a way that effectively advances the
story. It builds from problem to realization to shared understanding, to the beginning of a
solution, all in a visual telling.
4) Straight Ahead Action and Pose to Pose:
Pose-to-pose gives you more control over the action. You can see early on where your
character is going to be at the beginning and end instead of hoping you’re getting the
timing right. By doing the main poses first, it allows you to catch any major mistakes
early. The problem with it is that sometimes it comes off as too neat and perfect.
Straight-ahead action is less planned, and therefore more fresh and surprising. The
problem with it is that it’s like running blindfolded… you can’t figure out where you’re
supposed to be at any one time.
Mastering both techniques and combining them is the best approach to being a
successful animator because then you can get both structure and spontaneity. And
incidentally, this distinction is just as important in computer animation, where molding
a pose at each keyframe is the equivalent of making a drawing.
5) Follow Through and Overlapping Action:
When a moving object such as a person comes to a stop, parts might continue to
move in the same direction because of the force of forward momentum. These parts might
be hair, clothing, jowls, or jiggling flesh of an overweight person. This is where you can
see follow-through and overlapping action. The secondary elements (hair, clothing,
fat) are following-through on the primary element, and overlapping its action.
Follow-through can also describe the movement of the primary element though. If you
land in a crouch after a jump, before standing up straight, that’s follow-through.
Take a look at an example from a video we did for ViewBoost. Watch the sleeves of
the “Cheese Jedi’s” cloak when he swings his lightsaber. They move with the
momentum of the action, but when it’s over, the sleeves continue to go before settling
to a stop.
6) Slow In and Slow Out:
When you start your car, you don't get up to 60 mph right away. It takes a little
while to accelerate and reach a steady speed. In animation speak, we would call this
an Ease Out.
Likewise, if you brake, you’re not going to come to a full stop right away. (Unless you
crash into a tree or something.) You step on the pedal and decelerate over a few
seconds until you are at a standstill. Animators call this an Ease In.
Carefully controlling the changing speeds of objects creates animation that is more realistic and has more personality.
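In code terms, easing is just a remapping of time. Here is a minimal sketch in Python using the common "smoothstep" curve; real animation tools expose many such curves:

```python
def ease_in_out(t):
    """Map linear time t in [0, 1] to an eased value: slow start, slow stop."""
    return t * t * (3.0 - 2.0 * t)

frames = 10
for f in range(frames + 1):
    t = f / frames
    print(f"frame {f:2d}: linear={t:.2f}  eased={ease_in_out(t):.2f}")
```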
7) Arcs
Life doesn’t move in straight lines, and neither should animation. Most living beings –
including humans – move in circular paths called arcs.
Arcs operate along a curved trajectory that adds the illusion of life to an animated
object in action. Without arcs, your animation would be stiff and mechanical.
The speed and timing of an arc are crucial. Sometimes an arc is so fast that it blurs
beyond recognition. This is called an animation smear – but that’s a topic for another
time.
8) Secondary Action
Secondary actions are gestures that support the main action to add more dimension
to character animation. They can give more personality and insight to what the
character is doing or thinking.
9) Timing
Timing is about where on a timeline you put each frame of action. To see what this
means in action, let’s look at the classic animator’s exercise: the bouncing ball that we
saw earlier when we were talking about squash and stretch. (The reason this is a
popular assignment is that there is a lot of wisdom to be gained from it!)
Notice that at the top of each bounce, the balls are packed closer together. That is
because the ball is slowing down as it reaches the peak of the bounce. As the ball falls from its peak and accelerates, the spacing becomes wider.
Notice also how many drawings there are in each bounce. As the momentum of the ball diminishes, the bounces become shorter and more frequent (i.e., the number of frames in each bounce decreases). In practice, the success of your animation is going to depend on your sense of timing.
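You can see this spacing behavior with a few lines of Python: sampling a ball's height once per frame under constant gravity shows the positions bunching up near the top of the arc (the numbers here are illustrative):

```python
fps = 24
g = -9.8    # gravity in m/s^2
v0 = 4.9    # launch speed in m/s; the peak is reached at 0.5 s (frame 12)

for f in range(13):
    t = f / fps
    y = v0 * t + 0.5 * g * t * t
    print(f"frame {f:2d}: y = {y:.3f} m")   # spacing shrinks toward the peak
```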
10) Exaggeration
Exaggeration pushes poses, actions, and expressions beyond strict realism so they read more clearly and carry more energy, while still staying true to the motion they are based on.
11) Solid Drawing
Solid drawing is all about making sure that animated forms feel like they're in three-dimensional space.
12) Appeal
Enlarging the most defining feature of a character can go a long way to giving the
character personality. Strive for a good balance between detail and simplicity.
What Are Keyframes In Animation?
Keyframes in animation are specific points that denote the start and end of a transition.
They define the precise moments when movements or transformations begin and finish,
allowing animators to map out the animation’s timing and motion path.
What is a keyframe?
A keyframe in animation is a specific reference point in an animation where a change or
adjustment is made to an object's state or property.
Usually, keyframe-based animation tools use keyframes to change an object's state through "animators" (animatable properties) such as:
• Position
• Scale
• Rotation
• Opacity
For example, if you would like to create an animated element that moves from the left to the right over a duration of 3 seconds, you should add a Position keyframe for the starting (left) position at 0 seconds and another for the ending (right) position at the 3-second mark; the software fills in every frame in between.
Disney pioneered keyframe animation in the 1930s: lead artists set up the main poses of a movement, and the in-between frames were drawn by less experienced colleagues (and, much later in the industry's history, generated by machines).
The company was the first to set up the principles of animation and influenced other studios
to adopt their techniques.
Computer animation arose in the 1970s as a new technique for producing animations. It followed keyframe animation principles and adapted them to digital image generation using mathematical models and algorithms.
What is the difference between a frame and a keyframe?
The difference between a keyframe and a frame is that a frame is a single component from
a sequence of frames, while a keyframe is a reference point that marks how the object or
element transitions, or changes to that particular frame.
What is a frame?
A frame is a single image within a sequence of images. It is the building block of any
video, film, or animation. Each frame is flashed on the screen for a fraction of a second and
human persistence of vision blends them together, producing the illusion of movement.
The number of frames displayed within a second is measured in FPS (frames per second). The standard frame rate for video is 24 FPS; higher frame rates produce even smoother motion.
How are keyframes used in keyframe animation software?
Every keyframe animation software follows the same logic and can be used by following the
next steps:
The state of the object that you change must match the animator you are adding keyframes to. Take, for example, the Rotation animator: you will only change the object's rotation (between 0 and 359 degrees from the center). Changing the object's position, scale, or any state other than rotation won't result in any animated effect.
In SVGator, the first keyframe will be added along the animator right where the playhead is
positioned on the timeline. By dragging the playhead on a different second and making the
adjustments to the element, another keyframe will be automatically added to mark the end
of the transition. The adjustments should match the chosen animator, so if you chose the
Rotate animator, you can only adjust the element’s rotation.
Pro Tip: You can also reuse keyframes on the timeline by simply copying and pasting them
along the timeline in order to repeat a certain transition for the same element.
You can also copy them to a different element that you want to animate in the same way.
Additionally, you can make more adjustments to the keyframes that will change the timing
or the behavior of the animation.
There are a large number of changes you can make with keyframes on an object. For
example, in SVGator, you have the following options:
• Timing between keyframes: This dictates the speed of the transition between two keyframes. You can change it by increasing or decreasing the distance between the keyframes on the timeline.
• Position of the keyframes: By manipulating keyframe positions you can reverse an animation: select its keyframes, right-click, and choose "Reverse keyframes." This action simply interchanges the position of two or more keyframes on the timeline.
• Keyframe easing effects: Select at least one keyframe, then apply an easing effect from the Easing panel. The easing applies to the transition from the selected keyframe toward the following one.
• Skipping transitions between keyframes: You can eliminate the transition between two or more keyframes by choosing the Step End or Step Start easing functions. Also known as hold keyframes in other animation tools, these easing functions simply remove the transition and make the element jump between steps.
💡
Note: Step keyframes support step numbers as well. You can set a certain number of steps between two step keyframes. A step keyframe is easy to distinguish in the timeline, as its shape changes to a square instead of a rhombus.
The 3 Main Types of Keyframes
• Linear: Linear interpolation creates a uniform and consistent change of values from the beginning to the end, at a constant speed.
• Bézier: This is a more complex interpolation that makes it possible to specify the object's velocity and motion path between two points.
• Hold: This maintains the object in a particular pose. It is used to freeze or block a certain keyframe in a static phase. It is also known as a stop-motion keyframe.
Interpolation in the context of keyframes is the process of filling data between two
keyframes. The changes made to property values can be calculated in different ways based
on what type of keyframes are set.
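As a minimal sketch of the first type, here is linear interpolation between two position keyframes in plain Python (the frame numbers and values are illustrative; 72 frames is 3 seconds at 24 FPS):

```python
keyframes = {0: 0.0, 72: 300.0}   # frame -> x position

def value_at(frame, k0=0, k1=72):
    """Linearly interpolate the property between the two keyframes."""
    t = (frame - k0) / (k1 - k0)
    t = max(0.0, min(1.0, t))     # clamp outside the keyed range
    return keyframes[k0] + t * (keyframes[k1] - keyframes[k0])

print(value_at(36))   # halfway through -> 150.0
```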
The biggest advantage of using keyframes in animation is that they make the creation
process far quicker without losing quality.
The animator has to set up only a few important reference points instead of creating
hundreds of individual frames.
Another advantage of keyframes is that the final work will retain the artist’s personal charm
and specific hand-drawing style together with sleek movements and a professional finish.
Later changes are also easier with keyframes because the editor has to modify only their
main values or features instead of going through all of the frames.
Keyframes have some disadvantages when it comes to producing and handling realistic, complex, and natural movements; these are easier to achieve with motion capture, another technology for recording movement. In addition:
• It is difficult to keep track of them when you have a lot of keyframes set on the timeline.
Video animations are great for explaining complicated processes and entertaining viewers,
but they are not so efficient when it comes to expressing feelings and pushing people to
action.
The main use cases for keyframes are video production and animation.
https://www.svgator.com/blog/what-are-keyframe-animations/
I. KINEMATICS
What is Kinematics?
Kinematics is the study of motion without regard to the forces that cause it. In visual
effects (VFX), it is used to create realistic and believable motion for objects and characters.
Kinematics deals with aspects of motion such as position, velocity, and acceleration.
Forward kinematics: This is the process of computing the positions of an object's joints from specified joint angles, working outward from the root of the chain (e.g., shoulder to elbow to wrist).
Inverse kinematics: This is the reverse process of calculating the positions and rotations of an object's joints to achieve a desired position for the end effector (e.g., the hand of a character). It is often used to create animations where the end effector needs to follow a specific path.
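As a minimal sketch of inverse kinematics, here is the classic closed-form solution for a planar two-link "arm" in Python (the link lengths and target point are illustrative; real rigs solve much larger joint chains numerically):

```python
import math

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Return (shoulder, elbow) angles in radians that place the end
    effector of a two-link planar arm at (x, y), or None if out of reach."""
    d2 = x * x + y * y
    # Law of cosines gives the elbow bend.
    cos_elbow = (d2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= cos_elbow <= 1.0:
        return None                     # target unreachable
    elbow = math.acos(cos_elbow)
    # Shoulder angle: direction to the target, corrected for the bent elbow.
    shoulder = math.atan2(y, x) - math.atan2(
        l2 * math.sin(elbow), l1 + l2 * math.cos(elbow))
    return shoulder, elbow

print(two_link_ik(1.0, 1.0))   # reach for a point one unit out, one unit up
```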
Motion capture: This is the process of recording the motion of an actor or object using
sensors and then using that data to animate a character or object in a computer. Motion
capture data can be used to drive forward kinematics or inverse kinematics calculations.
Procedural animation: This is a type of animation where the motion of an object is defined
by a set of rules or algorithms. Kinematic principles can be used to create procedural
animations, such as the animation of cloth or hair.
Rigid body dynamics: This is a type of physics simulation that treats objects as if they are
made up of rigid bodies that cannot deform. Kinematic principles can be used to constrain
the motion of rigid bodies in a simulation.
Character animation: Kinematics is used to create realistic and believable motion for
characters, such as walking, running, and jumping.
Vehicle animation: Kinematics is used to animate the motion of vehicles, such as cars,
airplanes, and spaceships.
Kinematics is a powerful tool that can be used to create realistic and believable
motion in VFX. By understanding the principles of kinematics, VFX artists can create
stunning and immersive visual effects.
II. FULL ANIMATION
Full Animation:
Full animation is the style in which characters and scenes are animated with a high count of unique frames, producing fluid, detailed, lifelike motion (in contrast to the limited animation covered later). Here are some subtopics that explore the details of full animation in VFX:
3D animation: This is the most common type of full animation, where objects are created
and animated in three-dimensional space. Popular software programs for 3D animation
include Maya, Houdini, and Blender.
2D animation: This type of animation uses flat, two-dimensional images that are
manipulated to create the illusion of movement. Traditional 2D animation is drawn by hand,
while modern 2D animation is often created using digital software like Adobe Animate.
The animation pipeline is the process of creating a full animation, from the
initial concept to the final rendered image. Here are some of the key steps:
Modeling: This is the process of creating the 3D models of the characters, objects, and
environment.
Rigging: This involves adding a "skeleton" to the models that allows them to be animated.
Animation: This is the process of bringing the models to life by moving them and creating
facial expressions.
Lighting and rendering: This involves adding lighting and other effects to the animation to
make it look realistic.
Compositing: This is the final step, where all of the different elements of the animation are
put together into a single image.
Video games: Full animation is used to create the characters, objects, and
environments in video games.
ii. The use of artificial intelligence (AI): AI is being used to automate some of the
more tedious tasks in the animation process, such as lip-syncing and character rigging.
iii. The use of virtual reality (VR) and augmented reality (AR): VR and AR are
being used to create new and immersive animation experiences.
While limited animation isn't typically used directly in visual effects (VFX),
its principles and techniques can definitely find application and influence the approach.
Here's a breakdown of how limited animation and VFX intersect:
Limited frame rates: fewer new drawings per second than fully animated works (often 12 or fewer drawings per second, versus the full 24 of traditional animation).
While it's true that limited animation doesn't directly apply to most VFX, its
principles and techniques can influence the approach in subtle and profound ways.
This deeper dive explores how limited animation's impact goes beyond mere
stylization.
Drawbacks:
Can appear less fluid and detailed compared to fully animated works.
Requires careful planning and execution to avoid looking choppy or unnatural.
Influence on VFX:
Efficiency: The "less is more" approach of limited animation translates well to VFX
workflows. Optimizing rendering times, using procedural techniques, and
strategically placing VFX elements can save resources without sacrificing impact.
Technical Considerations:
Frame Rates: While limited animation typically uses lower frame rates (12-24 FPS),
VFX can leverage this concept through:
Selective frame drops: Intentionally dropping frames in specific scenes for stylistic
or dramatic effect (e.g., action sequences).
Frame-by-frame animation: Utilizing hand-drawn or meticulously crafted
individual frames for specific elements (e.g., character expressions).
Animation Cycles: Reusing and subtly modifying animation cycles can be applied in VFX for repeating elements, such as background crowds or looping environmental motion.
ii. Exaggerated effects: The over-the-top fire and explosions in films like "Who
Framed Roger Rabbit?" or "Pirates of the Caribbean" draw inspiration from limited
animation's use of exaggeration for comedic or dramatic effect.
iii. Minimalist aesthetics: Some VFX-heavy films like "Sin City" or "300" utilize a
limited color palette and simplified backgrounds, reminiscent of limited animation's
focus on key elements.
iv. Speed Ramping: Intentionally altering playback speed to create a stylized effect,
reminiscent of limited animation's use of timing for comedic or dramatic emphasis.
v. Particle Systems: Utilizing procedural techniques to create complex effects like fire,
smoke, or explosions efficiently, echoing the focus on key elements in limited
animation.
What is Rotoscoping?
Rotoscoping is an animation technique in which animators trace over live-action footage, frame by frame, to reproduce realistic movement. Instead of drawing freehand, animators projected the reference live-action footage onto glass panels and traced over the image frame by frame. It was a tedious and time-consuming process, but it was faster than drawing from scratch, and it resulted in more realistic animations, an enhanced artistic style, and more emphasis on dramatic scenes.
Today, most rotoscoping is done digitally as a special effect on live-action footage to create
an animated film version or as a visual effect to composite the footage on a different
background. Before diving into digital rotoscoping, let’s take a look at the history of
rotoscoping.
Rotoscoping originated in 1915 when animator Max Fleischer created the rotoscoping
technique to produce his Out of the Inkwell series. Fleischer wanted to create a realistic
animation where cartoons moved and looked more like real people; therefore, he decided
to film his brother Dave dressed as a clown to breathe life into his cartoon character Koko
the Clown, the first rotoscoped cartoon character.
From that moment, Fleischer and his rotoscoping technique revolutionized the film
animation industry, bringing other iconic cartoon characters such as Betty Boop, Popeye,
and Superman to the screen.
Back then, everyone wanted to try this new technique for their animations. When the
Fleischer Process patent expired, other studios could use the rotoscope process. The first
full-length animated feature films using rotoscoping were Disney's Snow White and the
Seven Dwarfs (1937) and Gulliver's Travels (1939) by Fleischer Studios.
Rotoscoping became popular and remained largely unchanged until the mid-90s, when veteran computer scientist Bob Sabiston developed the interpolated rotoscoping process
and created Rotoshop, an advanced computer software for hand-tracing frame-by-frame
over layers of frames. Rotoshop allowed shifting the rotoscoping technique to a computer,
though as of today, it’s a technique only the company Flat Black Films can use.
Director Richard Linklater was the first filmmaker to use digital rotoscoping to make a live-
action full-feature film. Linklater's full-length movies Waking Life (2001), A Scanner
Darkly (2006), and more recently, Apollo 10½: A Space Age Childhood (2022) used
rotoscoping to animate the live-action footage of the actors while keeping the animation
extremely realistic.
Types of Rotoscoping
The film industry uses rotoscoping techniques extensively and for multiple purposes.
Here are some types of rotoscoping that you can do to add creativity to a dramatic scene, to
add visual effects, or to make an animation from scratch using real-life footage.
• Traditional Rotoscoping
Let’s start with the most traditional technique. As mentioned before, rotoscoping
starts with live-action footage. Let’s say you want to create an animation about
basketball players for an animated feature film. You can draw them by hand, but it'll
be difficult to replicate the movements of the player's body.
The best option is to first record players to capture their actions to make it more
realistic as if you were creating motion picture footage. Then, using a movie
projector, play the movie through glass or use a lightbox to trace over the footage.
• Reference Rotoscoping
Filmmakers have used rotoscoping in various ways. Walt Disney used reference films
to define a character's movement from a live movie reference and animate Snow
White and the Seven Dwarfs accordingly. Having a reference film allowed Disney to
reuse many of their motion scenes: you can find the same motion in many Disney
films, like the dancing scene from Snow White and Robin Hood, and other
rotoscoping movements across movies like The Jungle Book, Winnie the Pooh, 101
Dalmatians, Pinocchio, The Sword in the Stone, Bambi, and many more.
This type of rotoscoping allows you to use your animator skills to draw your
characters on top of the reference film instead of tracing directly from the footage
and to reuse the reference for future projects and different animated characters.
One recent use of reference rotoscoping was in James Gunn’s Guardians of the
Galaxy (2014). Gunn used rotoscoping with a real-life raccoon to keep the animal
features and movements for Rocket, the raccoon.
• Digital Rotoscoping
Rotoscoping allows you to add effects such as glow, color grading, flickers, and more.
One of the most popular uses of rotoscoping as a visual effect is in the original Star
Wars trilogy. The Jedi lightsabers were recorded using sticks; then, the VFX team
rotoscoped the sticks on every frame and added the characteristic glow of the
lightsabers.
Also, in Hitchcock's movie The Birds (1963), animator Ub Iwerks created the bird
scenes using rotoscoping.
• Photorealistic Rotoscoping
Rotoscoping has proven to be a fantastic creative tool outside of animated films too.
If you want to get into the rotoscoping animation world, you will need animation software. While you can certainly do it the traditional way, why not use the technology available when it saves you time and money?
The ones below are the most popular software for rotoscoping.
• Silhouette
Silhouette is a refined rotoscoping tool by Boris FX. It allows you to create complex
mattes and masks using B-Spline, X-Spline, and Magnetic Freehand shapes.
Silhouette integrates point tracking, planar tracking, and Mocha Pro planar tracking.
It has been the tool for Academy Award-winning films such as Black Panther:
Wakanda Forever, Top Gun: Maverick, Dune, and The Mandalorian.
• Mocha Pro
Mocha Pro is a plug-in for planar tracking and rotoscoping from Boris FX. You can use
it on other video editing software like DaVinci Resolve, After Effects, Premiere Pro,
and Vegas Pro. Mocha Pro allows you to rotoscope with fewer keyframes and speed
up the rotoscope process with the X-Splines and Bezier Splines with magnetic edge-
snapping assistance.
• Adobe After Effects
Adobe After Effects is professional software for motion graphics and animation. It's
popular among video editors and graphic designers to create eye-catching motion
graphics and visual effects. Adobe After Effects is available as part of the Creative
Cloud bundle subscription. Additionally, After Effects includes a limited version of
Mocha with rotoscoping features from the Pro version of the Boris FX plug-in.
• Blackmagic’s DaVinci Resolve Fusion
Fusion is built into DaVinci Resolve, and it’s your tool for all visual effects and motion
graphic-related work. It features advanced mask and rotoscope tools with B-Spline
and Bezier shapes. Just switch to the Fusion page on your DaVinci Resolve project to
start using rotoscoping to animate characters and objects.
Rotoscoping Examples
Here is a list of the most notable movies produced with rotoscoping techniques. It also includes video games, TV shows, and music videos for you to explore and analyze the rotoscoping technique in depth.
• Movies
o Alice in Wonderland
o Star Wars Trilogy
o Fantasia
o Gulliver's Travels
o Lord of the Rings (1978)
o Fire & Ice
o Waking Life
o A Scanner Darkly
o Apollo 10½: A Space Age Childhood
• Video Games
o Prince of Persia
o Another World
o Flashback
• Music Videos
o A-Ha - Take On Me
o INXS - What You Need
o A-Ha - Train Of Thought
o Paula Abdul - Opposites Attract
o Incubus - Drive
o Linkin Park - Breaking the Habit
o Kanye West - Heartless
• TV Shows and Series
o Jem and the Holograms
o The Simpsons
o Family Guy
o The Flowers of Evil
o Undone
Final Words
Nowadays, rotoscoping is more commonly used as a visual effect rather than an animation
technique, thanks to the rise of 2D and 3D computer graphics. Nonetheless, some
filmmakers still appreciate rotoscoping for its unique aesthetic qualities in both animation
and live-action films. As such, we can expect to see filmmakers continue to find new and
innovative ways to incorporate the rotoscoping process into their movies, series, and other
creative projects.
https://medium.com/@uxgayatri/mastering-rotoscoping-techniques-for-achieving-seamless-results-
1c902fa51233
ROTOSCOPING
Maybe you have heard about this process, but do not know how to use it. Perhaps you have
an idea of how this technique works, but do not know where to begin. This section will help
you get started.
• What is Rotoscoping?
• Selecting a Video
• Tracing the Character
• Painting the Animation
What is Rotoscoping?
Rotoscoping is an animation technique where the animator traces over each frame of a live-
action movie to reproduce a realistic movement. This technique was invented by Max
Fleischer in 1915. The movie was projected frame by frame onto a piece of glass that the
animator could trace over. The piece of equipment used in this process is called a
rotoscope. Today, the rotoscope has been replaced by the computer.
Reasons for Rotoscoping
Some reasons you may choose to use this process:
You can also superimpose your character design and only use the motion but not the actual
object, person or animal from the video.
Because rotoscoping is so realistic, it leaves little room for exaggeration, movement, squash
and stretch, or a very cartoony look. If you use this technique, make sure it suits your
project.
Selecting a Video
• Film the actions you need to animate yourself. For example, you could film a dog playing with a ball or a pe…
• Find a free movie clip on the web, or
• Purchase a royalty-free movie clip from a website.
Your movie clip should be in one of these formats:
• AVI (*.avi)
• QuickTime (*.mov)
• MPEG (*.mpg)
• iPod (*.m4v)
If you find a movie clip that is not in any of these formats, you can easily convert it using editing software.
Your clip does not need to have a very high resolution; however, the higher the resolution, the more detail you will see. A minimum resolution of 300x200 is recommended.
Importing a Video
When you create your Toon Boom Studio project, you can avoid having too many drawings
to trace over by creating the project with a rate of 12 frames per second instead of 24.
4. Click OK.
Tracing
When you trace over your imported movie, concentrate on one element at a time. For
example, if there is a boy running with a balloon, trace the boy first and then the balloon.
This helps when trying to create separate movements. The boy moves differently than a
balloon, even if they are moving at the same speed and in the same direction. If the two objects or characters are interacting, it is best to draw them on the same layer.
Remember, when you trace over the character, try to close your zones for fast and efficient
painting later on.
5. In the Timeline view, select the second cell and trace the second image.
6. If necessary, enable Onion Skin to see your previous drawings.
7. Repeat the process until the animation is entirely traced.
To learn more about closing gaps, see Adding Colours and to learn more about making
invisible strokes, see Drawing and Design.
If you want your final project to be lighter once you are done tracing, select your lines and
flatten them.
1. In the Drawing Tools toolbar, click the Select tool.
2. In the Timeline view, select the first frame of the tracing layer.
3. In the Camera/Drawing view, select your entire drawing.
To create your colour palette, use Toon Boom Studio's special dropper to pick colours from
your live-action movie and paint your animation in the same colours as the clip.
(Figure: four traced frames, Frame 1 through Frame 4, shown alongside the resulting animated sequence.)
You see stop motion animation all the time—in commercials,
music videos, television shows and feature films—even if you don’t
realize it. While it is common for people to think of stop motion as
just one specific style, such as clay animation, the reality is that stop
motion techniques can be used to create a wide range of film styles:
Camera
To capture the image, you can use a smartphone or a digital camera like a
DSLR.
Tripod
A stand or holster to keep your camera steady.
Editing Software
To edit the frames together in an animation.
Materials/Objects
Inanimate objects become your subject of animation.
1. Position Your Camera
The first step when you wish to create a stop motion animation is to establish where you can place your camera. Fill your frame with the location or backdrop, and make sure not to capture anything beyond the edges of your frame, to maintain consistency.
2. Keep Your Camera Steady
You need to limit camera shake to have a good setup for your stop-motion video. For this purpose, you can use a tripod or a stand to keep your camera in a stable position.
3. Use A Remote Trigger Or Timer
4. Use Manual Camera Settings
When you shoot with your camera in auto mode, the settings adjust themselves for every image you take, resulting in a flickering effect. Setting a uniform shutter speed, ISO, aperture, and white balance helps overcome this issue.
5. Mind Your Lighting
Too much lighting can cause shadows and minor flickering that may not suit your animation. Hence, always be mindful of windows and maintain only the essential lighting needed to see your objects.
6. Frame Rate
8. Audio
Once you are done shooting the silent stop-motion animation, you can add audio to your video to make it more enjoyable. For this purpose, opt for dedicated stop motion software or an app.
OBJECT ANIMATION :
o Object animation is often combined with other forms of animation for more
realism.
o For example, a toy car might be animated using object animation, while a
character (often in puppet or model animation style) is seen driving the car.
Pixilation Animation :
Pixilation is a filmmaking technique where live actors and objects are shot
frame-by-frame to simulate movement. This results in an animated-looking
movie, where a human, and the things around them, move without being
touched. The actual can often appear jerky or smooth, depending on gaps of
motion between in each frame.
The name seems to come from the word “pixilated,” which itself is a reference
to someone being under the influence of pixies (yes, the small magical flying
ones). Due to pixilation often representing human beings seemingly moving
around on their own, it makes some amount of sense.
Stop-Motion Vs Pixilation
Pixilation is often used as a tool for creating a unique and comical movie,
and has its origins dating as far back as the 1900s. In some movies, like Hôtel
électrique (1908), objects around the character are manipulated so that they appear to affect the character without any person's touch.
RIGGING
But before a computer can take an artist’s rendering of a character and bring it
to life with motion, it has to go through an important phase: 3D rigging.
It’s part art and part science. Here’s a look at how it works.
But before 3D animation is done, animators must first create rigs. Rigging
involves creating bones or a digital skeleton that makes it possible to control the
movement of characters and objects. For example, animators can control how
characters run, how their hair, arms, legs, and other body parts move, and even
their facial expressions.
When the 3D mesh (the polygonal surface that forms the character's skin) is placed over the rig or skeleton, it aligns perfectly with the underlying bone structure. This harmonious interaction between the mesh and
the rig enables the character to move in a lifelike and cohesive manner, with
each polygon adjusting to mimic realistic movements.
For example, when a character lifts a hand, the skin mesh should also follow
along—generating an illusion of movement and flexibility. A rigging artist can
also apply different colors, textures, and lighting effects to a 3D mesh to achieve
different goals.
Designers can also deform and manipulate the skin mesh for characters to
perform actions like laughing, smiling, and other expressions.
On the other hand, muscles are mainly mimicked when creating the skin mesh.
They are connected to the underlying skeleton to allow them to move as
realistically as possible while obeying the laws of physics.
From initial modeling to weight painting, here are key steps in a rigging process.
Before a 3D model can be animated, it has to get a rig. Let’s talk about this by
thinking of a 3D character as a hand-sculpted clay model.
Once a model has been created by an artist, it’s inanimate, stuck in its original
position until you manually bend an arm or turn its head. You can imagine that
creating motion by hand for a feature-length film would be extremely tedious.
3D rigging creates a skeleton for a 3D model—all the bones and joints inside a
character that give animation software vertices it can recognize.
For example, the bones can rotate, bend in certain directions, and even control
the motion of other bones. Bones can be weighted so that they have more
influence over other bones. A “master bone” can be set to control the center
point of how a character moves.
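Since Blender is scriptable in Python, a minimal sketch of this idea looks like the following (assuming Blender's bpy API; the names and bone positions are illustrative):

```python
import bpy

# Build a two-bone "arm" armature that could later be skinned to a mesh.
bpy.ops.object.armature_add(enter_editmode=True, location=(0, 0, 0))
arm = bpy.context.object
ebones = arm.data.edit_bones

upper = ebones[0]            # the default bone created by armature_add
upper.name = "upper_arm"
upper.head = (0, 0, 0)
upper.tail = (0, 0, 1)

lower = ebones.new("lower_arm")
lower.head = upper.tail      # chain the second bone onto the first
lower.tail = (0, 0, 2)
lower.parent = upper
lower.use_connect = True     # rotating upper_arm now carries lower_arm along

bpy.ops.object.mode_set(mode='OBJECT')
```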
With software platforms like Unity and Blender, experienced animators can use
drivers, morphs, kinematics, and weight painting, among other tools, to control
nearly anything on a character—say, raising the left eyebrow for a curious look
or raising both for a surprised look.
AI-powered tools can perform repetitive and time-consuming tasks, allowing you
to focus more on creative tasks.
AI also enhances the motion capture process, making it easier to imitate the
movement of real human actors and map it to 3D rigs, allowing animation
characters to move or act in a natural or human-like way.
3D Rigging Software
From Autodesk Maya to Cinema 4D, here is some of the top rigging software that you can integrate into your animation workflow.
Autodesk Maya
Features:
• Ability to create complex skeleton structures for different characters
• Enhanced skinning tools
• Transfer rigs from one character to another with similar skeleton
structure
• Polygonal modeling
Blender
Blender is open-source software that helps artists and animators create complex
characters, graphics, vectors, and visual effects. Apart from its intuitive
interface, experienced creators can also write Python scripts and use them
in 3D modeling, character rigging, and animation. Blender has a huge
community support network, allowing you to access valuable learning resources
and tutorials.
SHAPE KEYS
Introduction
Shape keys are used to deform objects into new shapes for animation. In other
terminology, shape keys may be called “morph targets” or “blend shapes”.
The most popular use cases for shape keys are in character facial animation and in
tweaking and refining a skeletal rig. They are particularly useful for modeling organic soft parts and muscles, where there is a need for more control over the resulting shape than can be achieved with a combination of rotation and scale.
Shape keys can be applied on object types with vertices like mesh, curve, surface and
lattice.
Workflow
Shape keys are authored in the Shape Keys panel which is accessed in the Object Data tab of
the Properties (e.g. the Mesh tab for mesh objects).
A shape key is modified by first selecting a shape key in the panel, and then moving the
object’s vertices to a new position in the 3D Viewport.
The panel has controls for affecting the current Value (influence, weight) of a shape. It is
possible to see a shape in isolation or how it combines with others.
Adding and Removing Vertices
It is not possible to add or remove vertices in a shape key. The number of vertices and how
they connect is specified by the mesh, curve, surface or lattice. A shape key merely records a
position for each vertex and therefore shapes always contain all the object’s vertices.
When adding a vertex, all shape keys will record it with the position in which it is created.
Workflow-wise, adding and deleting vertices after creating shape keys is possible, but it is
best to leave the creation of shape keys for when the mesh is finished or its topology is stable.
When adding a new shape key with the + button next to the list, the new shape will be a
copy of the Basis shape, independently of the current result visible in the 3D Viewport.
When adding a new shape key from Specials ‣ New Shape from Mix, the shape will start off with the vertex configuration that is visible at that moment.
When doing facial animation with relative shape keys, it can be useful to first create a shape
key with a complex extreme pose (e.g. anger or surprise), and then break this complex shape
into components by applying a temporary vertex group to the complex shape and creating a
copy with New Shape from Mix. This technique helps reduce conflicts between different shape keys that would otherwise produce a double effect.
A mesh (curve, surface or lattice) has a stack of shape keys. The stack may be
of Relative or Absolute type.
Relative
Each shape is defined relative to the Basis or to another specified shape key.
The resulting effect visible in the 3D Viewport, also called Mix, is the cumulative
effect of each shape with its current value. Starting with the Basis shape, the result is
obtained by adding each shape’s weighted relative offset to its reference key.
Value
Represents the weight of the blend between a shape key and its reference key.
A value of 0.0 denotes 100% influence of the reference key and 1.0 of the shape key.
Blender can extrapolate the blend between the two shapes above 1.0 and below 0.0.
Basis
Basis is the name given to the first (top-most) key in the stack.
The Basis shape represents the state of the object’s vertices in their original position.
It has no weight value and it is not keyable. This is the default Reference Key when
creating other shapes.
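Before moving on to absolute keys, here is a minimal numeric sketch (Python with NumPy) of the relative mix described above; the vertex data and the shape name are made up for illustration:

```python
import numpy as np

# Two vertices, in Basis (rest) position and in one shape key.
basis = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
smile = np.array([[0.0, 0.5, 0.0], [1.0, 0.5, 0.0]])   # hypothetical key
value = 0.5                                            # the key's Value

# The mix adds the shape's weighted offset from its reference key (Basis).
mix = basis + value * (smile - basis)
print(mix)   # each vertex moved halfway toward the smile shape
```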
Absolute
Mainly used to deform the objects into different shapes over time.
Each shape defines how the object’s shape will be at Evaluation Time specified in
its Value.
The resulting shape, or Mix, is the interpolation of the previous and next shape given
the current Evaluation Time.
Value
Represents the Evaluation Time at which that shape key will be active.
Basis
Basis is the name given to the first (topmost) key in the stack.
The Basis shape represents the state of the object’s vertices in their original position.
The shape keys are managed in a list view with the following properties and controls:
Value/Frame (number)
In Relative mode: Value is the current influence of the shape key used for
blending between the shape (value=1.0) and its reference key (value=0.0). The
reference key is usually the Basis shape. The weight of the blend can be
extrapolated above 1.0 and below 0.0.
In Absolute mode: Value is the Evaluation Time at which the shape will have
maximum influence.
Mute (check mark)
If unchecked, the shape key will not be taken into consideration when mixing
the shape key stack into the result visible in the 3D Viewport.
New Shape from Mix
Add a new shape key with the current deformed shape of the object. This
differs from the + button of the list, as that one always copies the Basis shape
independently of the current mix.
Mirror Shape Key
If your mesh is symmetrical, in Object Mode, you can mirror the shape keys on
the X axis. This will not work unless the mesh vertices are perfectly
symmetrical. Use the Mesh ‣ Symmetrize tool in Edit Mode.
Mirror Shape Key (Topology)
Same as Mirror Shape Key though it detects the mirrored vertices based on the
topology of the mesh. The mesh vertices do not have to be perfectly
symmetrical for this action to work.
Join as Shapes (Transfer Mix)
Select the object to copy, then the object to copy into. Use this action and a
new shape key will be added to the active object with the current mix of the
first object.
Transfer Shape Key
Transfer the active shape key from a different object regardless of its current
influence.
Select the object to copy, then the object to copy into. Use this action and a
new shape key will be added to the active object with the active shape of the
first object.
Delete All Shape Keys
Removes all Shape Keys and any effect that they had on the mesh.
Apply All Shape Keys
Saves the current visible shape to the mesh data and deletes all Shape Keys.
Relative
Set the shape keys to Relative or Absolute. See Relative or Absolute Shape
Keys.
Shape Key Lock (pin icon)
Show the active shape in the 3D Viewport without blending. Shape Key
Lock gets automatically enabled while the object is in Edit Mode.
Shape Key Edit Mode (edit mode icon)
If enabled, when entering Edit Mode the active shape key will not take
maximum influence as is default. Instead, the current blend of shape keys will
be visible and can be edited from that state.
Add Rest Position
Adds a "rest_position" attribute that stores the mesh vertices' positions before shape keys and modifiers are evaluated.
Relative Shape Keys
With relative shape keys, the value shown for each shape in the list represents the current weight or influence of that shape in the current Mix.
Clear Shape Keys
Set all influence values, or weights, to zero. Useful to quickly guarantee that the result shown in the 3D Viewport is not affected by shapes.
Value
The weight of the blend between the shape key and its reference key (usually
the Basis shape).
A value of 0.0 denotes 100% influence of the reference key and 1.0 of the
shape key.
Range
Minimum and maximum range for the influence value of the active shape key.
Blender can extrapolate results when the Value goes lower than 0.0 or above
1.0.
Vertex Group
Limit the active shape key deformation to a vertex group. Useful to break
down a complex shape into components by assigning temporary vertex groups
to the complex shape and copying the result into new simpler shapes.
Relative To
Select the shape key to deform from. This is called the Reference Key for that
shape.
Note
Rather than storing offsets directly, internally relative keys are stored as snapshots of
the mesh shape. The relative deformation offsets are computed by
subtracting Reference Key from that snapshot.
Therefore, replacing the Reference Key has the effect of subtracting the difference
between the new and old reference from the relative deformation of the current key.
Absolute Shape Keys
With absolute shape keys, the value shown for each shape in the list represents the Evaluation Time at which that shape key will be active.
Reset Timing
Absolute shape keys are timed, by order in the list, at a constant interval. This button resets the timing for the keys. Useful if keys were removed or re-ordered.
Interpolation
The red line represents interpolated values between keys (black dots).
Evaluation Time
Controls the shape key influence. Scrub to see the effect of the current
configuration. Typically, this property is keyed for animation or rigged with a
driver.
Workflow
1. In Object Mode, add a new shape key via the Shape Key panel with the + button.
2. “Basis” is the rest shape. “Key 1”, “Key 2”, etc. will be the new shapes.
3. Switch to Edit Mode, select “Key 1” in the Shape Key panel.
4. Deform mesh as you want (do not remove or add vertices).
5. Select “Key 2”, the mesh will be changed to the rest shape.
6. Transform “Key 2” and keep going for other shape keys.
7. Switch back to Object Mode.
8. Set the Value for “Key 1”, “Key 2”, etc. to see the transformation between the
shape keys.
The figure below shows, from left to right: "Basis", "Key 1", "Key 2", and the mix ("Key 1" at 1.0 and "Key 2" at 0.8) shape keys in Object Mode.
Relative shape keys example.
1. Add sequence of shape keys as described above for relative shape keys.
2. Uncheck the Relative checkbox.
3. Click the Reset Timing button.
4. Switch to Object Mode.
5. Drag Evaluation Time to see how the shapes succeed one to the next.
By adding a driver or setting keyframes to Evaluation Time you can create an animation.
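Both workflows can be scripted. Here is a minimal sketch using Blender's bpy API that creates a relative shape key and animates its Value, assuming a mesh object is active (the key names, vertex index, and frame numbers are illustrative):

```python
import bpy

obj = bpy.context.object                       # e.g. the default cube
basis = obj.shape_key_add(name="Basis")        # rest shape, top of the stack
key1 = obj.shape_key_add(name="Key 1", from_mix=False)

# Deform the new shape: move one vertex up; Basis stays untouched.
key1.data[0].co.z += 1.0

# Animate the influence (Value) from 0.0 to 1.0 over one second at 24 fps.
key1.value = 0.0
key1.keyframe_insert("value", frame=1)
key1.value = 1.0
key1.keyframe_insert("value", frame=24)
```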
MOTION PATHS
Motion paths are a fundamental tool in visual effects (VFX) for animating objects
along predefined paths. They offer precise control over movement, enabling realistic and
visually appealing animations. Let's explore the details, subtopics, and principles:
Types of Motion Paths:
i. Predefined: Software provides sets of built-in paths, like circles, spirals, or loops.
ii. Custom: Users draw their own paths using Bézier curves for complete control.
iii. Spline-based: Uses smooth, interconnected curves for organic movements.
iv. Motion capture: Records real-world movement and translates it into a path.
Keyframe Animation:
Motion paths work alongside keyframe animation, defining the object's location at
specific frames. Interpolation fills the gaps between keyframes, animating the movement
along the path.
Speed and Timing:
Control the animation speed by adjusting the time it takes for the object to travel the path. Use acceleration, deceleration, or ease-in/ease-out effects for natural-looking motion.
Rotation and Orientation:
i. Objects can rotate and adjust their orientation while following the path.
ii. This creates realistic 3D movement, like a car turning while traveling on a road.
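A minimal sketch in plain Python ties these ideas together: sampling a custom cubic Bézier path for position and using the path tangent to orient the object (the control points are illustrative):

```python
import math

def bezier_point(p0, p1, p2, p3, t):
    """Position on a cubic Bezier curve at parameter t in [0, 1]."""
    u = 1.0 - t
    return tuple(u**3*a + 3*u**2*t*b + 3*u*t**2*c + t**3*d
                 for a, b, c, d in zip(p0, p1, p2, p3))

def bezier_tangent(p0, p1, p2, p3, t):
    """First derivative of the curve; its direction is the heading."""
    u = 1.0 - t
    return tuple(3*u**2*(b - a) + 6*u*t*(c - b) + 3*t**2*(d - c)
                 for a, b, c, d in zip(p0, p1, p2, p3))

P = [(0, 0), (2, 4), (6, 4), (8, 0)]   # start, two handles, end

for f in range(0, 25, 6):              # sample a few of 24 frames
    t = f / 24
    x, y = bezier_point(*P, t)
    tx, ty = bezier_tangent(*P, t)
    heading = math.degrees(math.atan2(ty, tx))
    print(f"frame {f:2d}: pos=({x:.2f}, {y:.2f})  heading={heading:.1f} deg")
```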
Additional Effects:
i. Combine motion paths with other animation techniques like scaling, opacity changes,
or particle effects.
ii. This adds depth and complexity to your animations.
Software-Specific Features:
i. Different VFX software offers various tools and options for motion paths.
ii. Explore features like path editing, mirroring, looping, and path deformation.
Applications:
Motion paths are used in diverse VFX scenarios, including:
i. Character animation (walking, running, jumping)
ii. Vehicle movement (cars, spaceships, planes)
iii. Projectile trajectories (weapons fire, magic effects)
iv. Camera movements (tracking shots, pans, zooms)
v. Abstract animations (geometric shapes, particle systems)
Principles for Effective Motion Paths:
Anticipation & Follow-through: Use subtle movements before and after the path for
realism.
Variation & Asymmetry: Avoid perfect symmetry for more natural dynamics.
Timing & Rhythm: Adjust speed and timing to match the object's mass, physics, and
scene context.
Ease & Flow: Create smooth transitions between path segments for visually pleasing
motion.