
CCS373 - VISUAL EFFECTS

UNIT-1 ANIMATION BASICS


What is Animation?
Animation is the process of creating the illusion of motion through the rapid display of a sequence of pictures. When we hear the word animation, we think of cartoons like Doraemon and Shin-chan. In earlier times, animation was done by photographing hand-drawn pictures of characters and scenes, or by moving puppets, frame by frame. Nowadays, with the help of many tools, it is possible to create the characters and scenes in 2D or 3D and make the animation digitally.

Developers have created a lot of tools to make animation, like Blender, Maya, etc. Animation can be of various types, such as 2D animation, 3D animation, paper animation, traditional animation, puppet animation, and more.
The term "animation" covers a range of work in today's society, which is full of creativity and visualization. Many people immediately conjure up images of cartoons and Disney productions when they hear the word, and children love animated films and characters like Doraemon. All cartoons and animated images are a sort of animation created by combining thousands of individual images and playing them out in a predetermined order.

When we think back a few decades, all animation was produced by hand drawing or painting, and certain puppet-like structures were made to perform the animation. These types of animation are physical, real-world animations; in today's technological era, digital animation continues to advance.

Today there are numerous animation styles that we can observe on TV, in productions and imagery that range from stylized cartoons to work nearly indistinguishable from live-action film.

What are the types of animation?


There are five main types of animation:

• 3D - computer generated imagery (CGI) is used to create characters and the worlds they inhabit. This is the most common method in modern animation.

• Traditional - also known as cel animation, hand-drawn and 2D. This is the original method of animation, dating back to the late 19th and early 20th centuries.

• Stop motion - involves physically moving objects, often made with clay, one
frame at a time.

• Motion graphics - animated graphic design that brings text and images to
life.

• Vector - a more modern version of traditional animation, using 2D vector graphics.

VFX PRODUCTION PIPELINE :

Visual effects (VFX) can transport audiences to new worlds and show off mind-bending animations and CGI creations one can only dream of. But before audiences
can experience a brave new world, a VFX pipeline needs to be in place to make it a
reality. But just what is the VFX pipeline, and what does it entail? You can lean on
this handy VFX guide when developing your own customized VFX plan.

What is the VFX Pipeline?

The VFX pipeline breaks down the steps of a visual effects workflow for film, television, and digital media projects. It keeps the entire VFX process organized and allows everyone to know their role and how it fits into the production timeline, from storyboarding and reference imagery all the way through modelling, rotoscoping, compositing, and lighting (just to name a few).

In smaller productions, one VFX artist may handle the entire workflow, but most
productions use teams of specialized artists. The pipeline brings a level of sanity to a
process that is usually not completely linear. Members of the VFX process often get
involved during the pre-production, production, and post-production stages. To do their
best work (and to ensure they’re not asked to redo various steps), artists should
understand and appreciate each step of the VFX workflow pipeline.
The VFX Workflow :

When asking “What does VFX mean in editing?”, it’s important to understand that no two real-world VFX pipeline workflows are identical. We’ve divided these steps into the pre-production, production, and post-production phases in our VFX roadmap below, but many of the steps often occur in parallel throughout the project.

PRE-PRODUCTION :

Much of the visual effects workflow in pre-production is planning-related, which helps keep crews informed while preparing for any technical requirements or potential execution issues. VFX artists in the planning stage can download VFX shot list templates to save time when planning which shots and scenes need visual effects.

1. Research & Development (R&D)

R&D on a video project primarily involves Technical Directors (TDs) who work
with VFX supervisors to plan the technical approach and determine which shots and effects
are technically feasible. Extremely VFX-heavy projects may also involve outside
scientists, engineers, or mathematicians for further guidance.
TDs must ensure that all software and files used throughout the VFX pipeline are
compatible and sometimes create custom software and plug-ins to improve VFX
pipeline efficiency.
Most of the R&D stage is ongoing throughout a project’s lifetime as the material is
tweaked and concepts evolve.

2. Storyboarding and Animatics

Storyboarding is where the VFX artist team creates visual representations of all the
actions within the script. Character motions and story settings are analyzed and basic
drawings are created to illustrate the desired framing on a shot-by-shot basis.
Like most planning elements, however, storyboarding isn’t final. It’s more about
planting a stake in the ground and giving the VFX artists a solid idea of what the
editorial team wants.

3. Pre-Visualization

Also known as previs, pre-visualization uses storyboards to create low-poly 3D models, wireframes, and scene representations to function as stand-ins for the visual effects to come. Previs typically takes place alongside other members of the creative team to determine camera angles, choose shoot locations, and plan complex scenes.

Other VFX steps in the pre-production stage include concept and art design, which further refines artistic concepts and produces full images to define characters, settings, and props. Layout (also known as production design), as the name implies, defines what final sets should look like and provides guidance to the creators of physical or digital sets.
PRODUCTION :

This is when the VFX workflow really gets cracking because it’s when most of the
shooting takes place, raw video files are created, and VFX dailies are submitted. But
plenty of VFX tasks can be done in tandem with the production process.

4. 3D Modeling

3D modeling takes place throughout all three production phases, but in the production
phase, artists transform storyboard art or low-poly 3D models into lifelike
representations. Most 3D modeling is devoted to creating assets such as vehicles or
buildings that either aren’t practical or cost-effective to bring on set, but 3D models are
also used to create characters (to illustrate non-humans or stand in as digital doubles)
and other props.

While 3D models of impractical things like spaceships or the Batmobile are the most
visually interesting, 3D modelers also replace or complement physical objects shot on
set that need improvements in lighting, shadowing, or texture.

3D modeling is one of the most time-consuming and labor-intensive elements of the VFX pipeline and often depends on reference photos taken during production. The 3D
models that move must be rigged and animated, but more on this later.
5. Matte Painting

Matte painting is one of the oldest VFX techniques in existence and involves
creating visual backgrounds that don’t exist. These days such backgrounds are often created digitally using LED panels and game engines (sometimes as entire 3D sets for virtual production) or by chroma keying using a green or blue screen.

Years ago, matte painting was exclusively done using photographs and painted glass
panels (matte paint was used because it doesn’t reflect light).
But matte painting is still used in many productions — including the Harry
Potter films, Game of Thrones, and The Witcher — partly because it can save money.
But it’s pretty limited in what it can do: Matte paintings can’t change their lighting,
camera angles, or other elements.

6. Reference Photography

Throughout the entire production phase, members of the VFX team hang out on set
to take reference photos of actors, scenes, props, and anything else important. These
photos are then used to rig, animate, and add texture to 3D models.

POST-PRODUCTION

Post-production brings all the elements of a video production together — VFX, footage, music, and sound — into the finished product. While, as we’ve seen, VFX artists are busy throughout the production cycle, post-production is the busiest phase of the entire process for the VFX team.

7. Rigging and Animating


Imagine what happens when a puppeteer pulls the strings on a marionette, and
you’ve already got a pretty good idea of what rigging and animating are all about — only at
a digital level. Rigging and animation breathe life into 3D models by building a system
of controls that animators can use to manipulate these objects.

Rigging teams often rely on reference photographs, but motion capture cameras or suits are also often used to capture movement data to aid the rigging and animation process.

Rigging teams often get so granular that their jobs can include calculating skin weights and adding digital skeletons and muscles within 3D characters to replicate natural movement.

8. FX and Simulation

FX artists are responsible for creating concepts and scenes that move and react
according to the laws of physics, such as a long shot of a raging battle at sea or in
space — complete with fiery explosions, which in reality can’t exist in space, but
whatever — they look cool. FX artists often work with elements such as fire, smoke,
liquids, and even particles.
FX artists work alongside animators to ensure these simulated elements don’t stick
out (in a bad way) while looking and feeling as natural as possible.

9. Motion Tracking/Match Moving

Motion tracking, also known as match moving, allows VFX artists (in this context,
referred to as match move artists) to insert effects into moving scenes and live-action
footage without the entire thing looking bad. After all, inserting VFX elements into a
static shot is relatively easy, all things considered — but adding the same elements to
a camera move involves many more variables. That’s why motion tracking accounts for the positioning, orientation, scale, and movement of the object within the shot, including replicating physical camera moves using virtual cameras in motion tracking software.

10. Texturing

The texturing process is pretty much as it sounds: It adds textures to the surfaces of
3D models. Texture can include anything from surface color to scaly skin on a reptile,
to reflections in water, to a metallic shine or scratches on a car door. This ensures
models look as realistic as possible.

11. Rotoscoping and Masking

Rotoscoping involves artists drawing around and cutting out objects or characters
from frames in the original footage, to use the cutout images against a different background
or context. Rotoscoping has typically been a relatively painful and manual process,
especially in the days before computerized VFX.

“One legendary example of manual rotoscoping occurred during Alfred Hitchcock’s The Birds, when crews filmed birds in nature and rotoscoped them into each shot. It took three months to rotoscope hundreds of birds, one by one, into a single shot.”
Nowadays VFX artists still perform manual rotoscoping, but new tools such as
Runway, which use machine learning, have helped to accelerate the process
dramatically.

The entire rotoscoping and masking process can be avoided through chroma keying, which, as we mentioned, is the process of shooting foreground subjects against an easily removable background (like a green screen). But in many cases rotoscoping is also required to create a perfect cutout.
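To make the idea concrete, here is a minimal, illustrative chroma-key sketch in Python using NumPy. The function name and threshold are invented for this example, and real keyers in tools like Nuke or After Effects are far more sophisticated (spill suppression, edge refinement, etc.):

import numpy as np

def simple_green_key(frame, threshold=0.15):
    # frame: RGB image as a float array with values in [0, 1]
    r, g, b = frame[..., 0], frame[..., 1], frame[..., 2]
    greenness = g - np.maximum(r, b)          # how strongly green dominates
    # alpha = 0 on pure green screen, 1 on clearly non-green foreground
    alpha = np.clip(1.0 - greenness / threshold, 0.0, 1.0)
    return alpha

# Composite the keyed foreground over a new background with the matte:
# comp = alpha[..., None] * fg + (1.0 - alpha[..., None]) * bg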

12. Lighting and Rendering

Lighting is typically dealt with once the texture artists have done their thing. It’s the last
element applied before the effect or computer-generated image (CGI) is complete.
Adding and adjusting virtual lighting and shadows to match computer-generated scenes or characters to static or live-action footage, like texturing, helps make them look more realistic while enhancing aspects of the original shot such as color and intensity.
Just like real-life lighting, however, virtual lights must be placed strategically within a
scene.
Lighting artists use tools such as shader settings and lighting maps to achieve
this by positioning spot, area, and directional lights to match the angles and shadows
of the original footage. Once lighting has been applied, the entire scene is handed off to
compositing.

13. Compositing

Compositing, sometimes called stitching, is the final step of the post-production VFX
workflow. While it is the final step in the VFX roadmap, it is also the most
important because it integrates all the various VFX elements with real-life footage to
create a finalized shot or scene.

A bad compositing job can ruin all your otherwise great VFX work up to this point —
so it’s crucial to get it right. The process involves a compositor gathering all the content —
including live-action footage, renders, VFX plates, and matte paintings — and layering them
together in preparation for the next step in the post-production pipeline (typically coloring).
Some shots may require combining just a couple of elements, but others may need to layer
dozens while finalizing lighting, reflections, shadowing, and atmospherics to create a
seamless look and feel.

VFX Tools and Software

VFX is only growing in importance in the modern video production industry — indeed, the first season of Amazon’s The Rings of Power used more than 1,500 VFX artists
from 20 studios. But what kind of software and VFX systems do industry-leading VFX
artists use to weave their magic?
Here are a few examples.

After Effects

After Effects is regarded as one of the best, if not the best, VFX software around.
That’s partly because it integrates seamlessly with Adobe’s Premiere Pro video editing
software and collaboration tools such as Frame.io, but also because it’s damn good
at what it does. After Effects also has plenty of third-party, customizable VFX templates you
can download to help scale your project.

Fusion :

Blackmagic Design’s Fusion is a great tool for creating immersive 360 or VR video;
stereoscopic 3D effects; and the compositing of 3D models and real-life, live-action
footage. It comes as part of the video editing software DaVinci Resolve, and has been used to create VFX scenes in films such as Guardians of the Galaxy and The Hunger Games, and even cinematics for video games such as Halo 5.

Nuke :

Foundry’s Nuke is VFX and film editing software used by major industry players such
as Walt Disney Animation Studios, Blizzard Entertainment, Sony Pictures Animation,
and DreamWorks Animation, and has been used on productions from The
Crown to Boardwalk Empire. It offers seamless review workflows and the ability to add
VFX elements to dynamic editorial timelines.

Houdini :

Houdini by SideFX is used in the R&D process and at other junctures to come up with
customized effects. It integrates animation design, effects rendering, and character
modeling and provides a host of VFX simulation modules for fluids, crowds, grains,
and other elements including destruction and pyro FX. Houdini also integrates with
other software such as Maya.
Maya :

Autodesk’s Maya provides 3D animation and modeling, simulations, and ultra-realistic rendering. It is often used by lighting artists to create and place virtual lighting, and by 3D modelers to create and rig animated characters or other objects. Although it has been described as difficult to use by some reviewers, the software covers various VFX pipeline elements, including dynamic simulations, texturing, and animation.

HitFilm Pro:

HitFilm Pro (and its free consumer market version, HitFilm Express) is an all-in-one VFX and video editing tool that allows VFX artists to apply effects directly on the NLE timeline (instead of layering them). The app comes loaded with nearly 1,000 VFX templates and
presets, with features such as masking, 2D and 3D motion tracking, green screen
keying, and particle simulators.

Blender:

Blender by Blender Foundation is powerful open-source freeware, perfect for those starting out in VFX who want to learn the craft without shelling out thousands of dollars on the alternatives (many mentioned above). Its 3D animation tools include a camera and object tracker that offers manual or automated tracking, camera reconstruction, and real-time previews.

Tips and Best Practices for High-Quality Visuals

• Engage in constant and ongoing communication between various creative teams; not just other VFX artists but also video editors, colorists, the on-location production team, the director, etc.

• Know the various VFX systems (tools, processes, software, etc.) of the
entire VFX workflow inside and out before embarking on a months-long
project.

• In virtual production, VFX artists must be aware that what looks good on
workstations may not on a gigantic LED wall. Unreal Engine recommends
their In-Camera VFX Production Test to see recommended configurations
for virtual stages.

• Replace physical objects (fires, potholes, tire tracks, etc.) with VFX
renderings to save a ton of money over replicating the same thing in the
physical world, or having to go back and re-shoot certain scenes to get
things perfect.

• Netflix recommends VFX artists working from home use virtual desktop
solutions such as HP RGS or Teradici, a single monitor at 1920×1200 as a
baseline resolution, and dedicated bandwidth and low-latency connections.

• Always follow proper file naming conventions.

PRINCIPLES OF ANIMATION

Who invented the 12 principles of animation?

Ollie Johnston and Frank Thomas were the men behind the principles. The duo were
two of Disney’s famous Nine Old Men (even Walt Disney himself would call them this).

This group was the studio’s core group of animators. In 1981, Johnston and Thomas released a book called The Illusion of Life: Disney Animation. The Nine Old Men had been using the principles for decades, but this was the first time the outside world was made aware of them.

Why are the 12 principles of animation important?

These principles of animation are important because combining all 12 helps ground
animation in the real world. The sky is the limit when it comes to using your imagination,
but you also need to consider gravity and other laws of physics. Failure to do so will make the animation much less believable, and your audience won’t care about what happens to your characters, whether hand-drawn or 3D.

The 12 Principles of Animation (With Examples)

In their 1981 book, The Illusion of Life, Disney animators Ollie Johnston and Frank
Thomas introduced the twelve principles of animation. The pair were part of Disney’s
“Nine Old Men,” the core group of animators at Disney who were instrumental in
developing the art of traditional animation. The twelve principles have now become
widely recognized as a theoretical bedrock for all animators, whether they are working
on animated entertainment, commercials, or web-based explainers.

In order, they consist of:


• Squash and Stretch
• Anticipation
• Staging
• Straight Ahead Action and Pose-to-Pose
• Follow Through and Overlapping Action
• Ease In, Ease Out
• Arcs
• Secondary Action
• Timing
• Exaggeration
• Solid Drawing
• Appeal
Each principle is vital to the animation process, so let’s dig deeper into each one.
1) Squash and Stretch:

Squash and stretch is debatably the most fundamental principle. Look at what happens when a ball hits the ground. The force of the motion squashes the ball flat, but because an object needs to maintain its volume, it also widens on impact. This is what’s called squash and stretch.

This effect gives animation an elastic, life-like quality because, although it may not seem like it, squash and stretch is all around you. All shapes are distorted in some way or another when acted upon by an outside force; it’s just harder to see in real life. Squash and stretch imitates that and exaggerates it to create some fun.

When the letters spring from the ground, they elongate to give the impression of speed. Conversely, the letters squash down when they come into contact with the ground. This conveys a sense of weight in each letter.
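As a quick numeric illustration of the volume-preservation idea, here is a minimal Python sketch (the helper name is hypothetical): stretching a 2D shape vertically by a factor s while scaling its width by 1/s keeps its area constant.

def squash_stretch_scale(s):
    # s > 1 stretches the shape taller; s < 1 squashes it flatter.
    # Scaling width by 1/s keeps the 2D area (the "volume") constant.
    return (1.0 / s, s)       # (x_scale, y_scale)

print(squash_stretch_scale(1.5))   # falling: taller and narrower
print(squash_stretch_scale(0.6))   # impact: flatter and wider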

2) Anticipation:

Imagine you’re about to kick a soccer ball. What’s the first thing you do? Do you
swing your foot back to wind up? Steady yourself with your arms? That’s anticipation.
Anticipation is the preparation for the main action. The player striking the soccer ball
would be the main action, and the follow-through of the leg is well… the follow through.
Notice how the progression of action operates in this scene. We first see the woman
as she’s standing on the box. She then bends her knees in anticipation of what’s about
to happen and springs into action by leaping from the ground up into the air.
3) Staging:

When filming a scene, where do you put the camera? Where do the actors go? What do
you have them do? The combination of all these choices is what we call staging.
Staging is one of the most overlooked principles. It directs the audience’s attention toward the most important elements in a scene in a way that effectively advances the story. Good staging builds from problem to realization to shared understanding, to the beginning of a solution, all in a visual telling.

4) Straight-Ahead Action and Pose-to-Pose

These are two ways of drawing animation. Straight-ahead action is where you draw each frame of an action one after another as you go along. With pose-to-pose, you draw the extremes – that is, the beginning and end drawings of the action – then you go on to the middle frame, and start to fill in the frames in-between.

Pose-to-pose gives you more control over the action. You can see early on where your character is going to be at the beginning and end instead of hoping you’re getting the timing right. Doing the main poses first allows you to catch any major mistakes early. The problem with it is that sometimes it comes off as too neat and perfect.

Straight-ahead action is less planned, and therefore more fresh and surprising. The problem with it is that it’s like running blindfolded… you can’t figure out where you’re supposed to be at any one time.
Mastering both techniques and combining them is the best approach to being a
successful animator because then you can get both structure and spontaneity. And
incidentally, this distinction is just as important in computer animation, where molding
a pose at each keyframe is the equivalent of making a drawing.

5) Follow-Through and Overlapping Action

When a moving object such as a person comes to a stop, parts might continue to
move in the same direction because of the force of forward momentum. These parts might
be hair, clothing, jowls, or jiggling flesh of an overweight person. This is where you can
see follow-through and overlapping action. The secondary elements (hair, clothing,
fat) are following-through on the primary element, and overlapping its action.

Follow-through can also describe the movement of the primary element though. If you
land in a crouch after a jump, before standing up straight, that’s follow-through.
Take a look at an example from a video we did for ViewBoost. Watch the sleeves of
the “Cheese Jedi’s” cloak when he swings his lightsaber. They move with the
momentum of the action, but when it’s over, the sleeves continue to go before settling
to a stop.

6) Ease In, Ease Out

When you start your car, you don’t get up to 60 mph right away. It takes a little
while to accelerate and reach a steady speed. In animation speak, we would call this
an Ease Out.
Likewise, if you brake, you’re not going to come to a full stop right away. (Unless you
crash into a tree or something.) You step on the pedal and decelerate over a few
seconds until you are at a stand-still. Animators call this an Ease In.

Carefully controlling the changing speeds of objects creates an animation that is more realistic and has more personality.
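Here is a small, hedged Python sketch of how an ease can be implemented in practice: instead of sampling the motion linearly, remap normalized time with a smoothstep curve so the object accelerates out of the start (ease out) and decelerates into the stop (ease in). The function names are illustrative, not any tool's API.

def smoothstep(t):
    # Cubic ease: zero velocity at t = 0 and t = 1, fastest in the middle.
    return t * t * (3.0 - 2.0 * t)

start_x, end_x = 0.0, 100.0
frames = 24                              # one second at 24 fps
for frame in range(frames + 1):
    t = frame / frames                   # normalized time in [0, 1]
    x = start_x + (end_x - start_x) * smoothstep(t)
    # Sampled positions cluster near the start (ease out) and the stop (ease in).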

7) Arcs

Life doesn’t move in straight lines, and neither should animation. Most living beings –
including humans – move in circular paths called arcs.

Arcs operate along a curved trajectory that adds the illusion of life to an animated
object in action. Without arcs, your animation would be stiff and mechanical.
The speed and timing of an arc are crucial. Sometimes an arc is so fast that it blurs
beyond recognition. This is called an animation smear – but that’s a topic for another
time.

8) Secondary Action

Secondary actions are gestures that support the main action to add more dimension
to character animation. They can give more personality and insight to what the
character is doing or thinking.
9) Timing

Timing is about where on a timeline you put each frame of action. To see what this
means in action, let’s look at the classic animator’s exercise: the bouncing ball that we
saw earlier when we were talking about squash and stretch. (The reason this is a
popular assignment is that there is a lot of wisdom to be gained from it!)
Notice that at the top of each bounce, the ball positions are packed closer together. That is because the ball is slowing down as it reaches the peak of the bounce. As the ball falls from its peak and accelerates, the spacing becomes wider.

Notice also how many drawings there are in each bounce. As the momentum of the ball diminishes, the bounces become shorter and more frequent (i.e., the number of frames in each bounce decreases).

In practice, the success of your animation is going to depend on your sense of timing.
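The spacing pattern is easy to verify numerically. This small Python sketch (illustrative only, with made-up constants) samples a ball's height at 24 fps under gravity; the frame-to-frame spacing shrinks as the ball approaches its peak:

g, v0, fps = 9.8, 5.0, 24.0      # gravity (m/s^2), launch speed (m/s), frame rate
prev_y = 0.0
for frame in range(1, 13):       # roughly the rise to the peak (~0.51 s)
    t = frame / fps
    y = v0 * t - 0.5 * g * t * t          # height of the ball at this frame
    print(f"frame {frame:2d}  y = {y:5.2f}  spacing = {y - prev_y:5.2f}")
    prev_y = y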

10) Exaggeration

Sometimes more is more. Exaggeration presents a character’s features and actions in an extreme form for comedic or dramatic effect. This can include distortions in facial features, body types, and expressions, but also the character’s movement. Exaggeration is a great way for an animator to increase the appeal of a character and enhance the storytelling.
11) Solid Drawing

Solid drawing is all about making sure that animated forms feel like they occupy three-dimensional space.

12) Appeal

People remember real, interesting, and engaging characters. Animated characters should be pleasing to look at and have a charismatic aspect to them; this even applies to the antagonists of the story. Appeal can be hard to quantify because everyone has a different standard. That said, you can give your character a better chance of being appealing by making them attractive to look at. Play around with different shapes and proportions of characters to keep things fresh.

Enlarging the most defining feature of a character can go a long way to giving the
character personality. Strive for a good balance between detail and simplicity.
What Are Keyframes In Animation?

Keyframes in animation are specific points that denote the start and end of a transition.

They define the precise moments when movements or transformations begin and finish,
allowing animators to map out the animation’s timing and motion path.

Table of Contents:

• What is a keyframe?
• Where does the word keyframe come from?
• What is the difference between a frame and a keyframe?
• What is a frame?
• How are keyframes used in keyframe animation software?
• What changes can you make with keyframes on an object?
• What changes can you make to keyframes?
• What are the main types of keyframes?
• What are the advantages of keyframes?
• What are the disadvantages of keyframes?
• What are the use cases for keyframes?
• How can you use keyframes in SVGator?

What is a keyframe?
A keyframe in animation is a specific reference point in an animation where a change or
adjustment is made to an object's state or property.
Most keyframe-based animation tools use keyframes to change an object's states through properties ("animators") such as:
• Position

• Scale

• Rotation

• Opacity

• And many others


This list includes any other transition that takes place between the predefined starting and
ending points. Keyframes are essential for precise control over animation effects and timing
in creating motion graphics.

For example, if you would like to create an animated element that moves from the left to
the right over the duration of 3 seconds, you should:

1. Set a keyframe at the starting position (A)
2. Set another keyframe at the ending position (B)

The animation software will automatically generate the in-between positions, creating a smooth transition between point (A) and point (B).
The speed of the transition is determined by the distance between the two keyframes in the
timeline. A longer distance will mean a slower speed for the element to get from (A) to (B).

Keyframe Animation Example - Made by SVGator
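A minimal sketch of what the software computes between those two keyframes, assuming simple linear interpolation (all names here are illustrative, not any particular tool's API):

def interpolate_position(key_a, key_b, t):
    # key_a, key_b: (time_in_seconds, x_position); t: current time in seconds
    (t0, x0), (t1, x1) = key_a, key_b
    u = (t - t0) / (t1 - t0)              # normalized progress: 0 at A, 1 at B
    return x0 + (x1 - x0) * u

key_a = (0.0, 0.0)                        # keyframe at the starting position (A)
key_b = (3.0, 400.0)                      # keyframe at the ending position (B)
for frame in range(73):                   # 3 seconds at 24 fps
    x = interpolate_position(key_a, key_b, frame / 24.0)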

Where does the word "keyframe" come from?

The word "keyframe" comes from the early days of animation, when each frame was drawn by hand, which was a very time-consuming and difficult task.

Disney pioneered keyframe animation in the 1930s: the main poses of a movement were drawn by lead artists, and the in-between frames were created by less experienced colleagues (and, in later eras, by machines).

The company was the first to set up the principles of animation and influenced other studios
to adopt their techniques.
Computer animation arose in the 1970s as a new technique for producing animations. It followed the keyframe animation principles and adapted them to digital image generation using mathematical models and algorithms.
What is the difference between a frame and a keyframe?

The difference between a keyframe and a frame is that a frame is a single component of a sequence of frames, while a keyframe is a reference point that marks the change or transition assigned to a particular frame.

• Frame: one single component of a sequence of frames
• Keyframe: marks the changes/transitions assigned to a particular frame

What is a frame?
A frame is a single image within a sequence of images. It is the building block of any
video, film, or animation. Each frame is flashed on the screen for a fraction of a second, and human persistence of vision blends them together, producing the illusion of movement. The number of frames displayed within a second is measured in FPS (frames per second). The standard frame rate for video is 24 FPS; higher frame rates produce even smoother motion.
How are keyframes used in keyframe animation software?
Every keyframe animation software follows the same logic and can be used by following these steps:

1. Mark the initial state of an object with a keyframe.
2. Choose whether to leave the initial state as it is or apply more changes to it, which will then represent the new initial state of the animation.
3. Define how long the animation will be by adding a second keyframe on the timeline at a certain second. This will mark the ending point of the animation.
4. Change the state of the object at the time marked by the second keyframe, so that it is different from the state of the object at the first keyframe.
5. Hit play and see a smooth transition between the two states of the object (a code sketch of these steps follows below).
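As one concrete example, the same five steps can be scripted in Blender's Python API (bpy). This is a hedged sketch for one tool; other keyframe-based packages differ in syntax but follow the same logic:

import bpy

obj = bpy.context.object                     # the currently selected object

# Steps 1-2: set the initial state and mark it with a keyframe
obj.location = (0.0, 0.0, 0.0)
obj.keyframe_insert(data_path="location", frame=1)

# Steps 3-4: move later on the timeline and change the state
obj.location = (5.0, 0.0, 0.0)
obj.keyframe_insert(data_path="location", frame=72)   # about 3 s at 24 fps

# Step 5: playback now interpolates smoothly between the two keyframes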

The state of the object that you are changing should match the animator that you are adding keyframes to.

Take, for example, the Rotation animator. You will only change the degrees of the object (between 0 and 359 degrees around its center). Changing the object’s position, scale level, or any other state except its degrees of rotation won’t result in any animated effect.
In SVGator, the first keyframe will be added along the animator right where the playhead is
positioned on the timeline. By dragging the playhead on a different second and making the
adjustments to the element, another keyframe will be automatically added to mark the end
of the transition. The adjustments should match the chosen animator, so if you chose the
Rotate animator, you can only adjust the element’s rotation.

Pro Tip: You can also reuse keyframes on the timeline by simply copying and pasting them
along the timeline in order to repeat a certain transition for the same element.

You can also copy them to a different element that you want to animate in the same way.
Additionally, you can make more adjustments to the keyframes that will change the timing
or the behavior of the animation.

Using keyframe animation software - Made by SVGator

What changes can you make with keyframes on an object?

There are a large number of changes you can make with keyframes on an object. For
example, in SVGator, you have the following options:

Changes made with keyframes to an object:

Position - Changes the object’s location
Origin - Changes the object’s origin (center) point
Scale - Makes the object bigger or smaller
Rotate - Moves the object in a circle around a fixed point
Skew - Makes the object oblique, asymmetrical
Opacity - Changes the degree to which an object appears to be transparent
Fill Color - Changes the object’s fill color
Fill Opacity - Changes the object’s fill opacity
Stroke Color - Changes a stroke’s color
Stroke Opacity - Changes a stroke’s opacity
Stroke Width - Changes a stroke’s width
Stroke Offset - Changes the location of a dash along a path
Stroke Dashes - Changes the dash-gap pattern of a stroke
Filters - Adds filters to the object

What changes can you make to keyframes?


The changes you can make to keyframes are the following:

• Timing between keyframes: Timing between keyframes dictates the speed of the
transition between the two keyframes. You can change the timing between two
keyframes by increasing or decreasing the distance between them on the timeline.
• Position of the keyframes: By manipulating the position of the keyframes you can
reverse an animation by selecting its keyframes, right-clicking, and choosing
"Reverse keyframes." This action will simply interchange the position of two or
more keyframes on the timeline.
• Keyframe easing effects: Keyframe easing effects imply selecting at least one
keyframe, to which you can then apply an easing effect from the Easing panel. The
easing will apply on the transition from the selected keyframe toward the
second/following one.
• Skipping transitions between keyframes: Skipping transitions between keyframes
means that you can also eliminate the transition between two or more keyframes
by choosing the Step End or Step Start easing functions. Also known as hold
keyframes in other animation tools, these easing functions will simply remove the
transition and make jumps between the steps of the element.
💡 Note: Step keyframes support step numbers as well. You can set a certain number of steps between two step keyframes. A step keyframe is easy to distinguish in the timeline, as its shape changes to a square instead of a rhombus.

Example of changes made to keyframes - Made by SVGator

What are the main types of keyframes?

There are 3 main types of keyframes used in animation software:

• Linear Interpolation Keyframe: creates a uniform and consistent change of values from the beginning to the end, at a constant speed.
• Bézier Interpolation Keyframe: a more complex interpolation that makes it possible to specify the object's velocity and motion path between two points.
• Hold Interpolation Keyframe: maintains the object in a particular pose. It is used to freeze or block a certain keyframe in a static phase. It is also known as a stop-motion keyframe.


What is interpolation in the context of keyframes?

Interpolation in the context of keyframes is the process of filling in data between two keyframes. The changes made to property values can be calculated in different ways based on what type of keyframes are set. More generally, interpolation in animation is a mathematical method used to fill in the unknown values in between two or more specified points.
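The three keyframe types described above correspond to three interpolation functions. Here is a compact Python sketch (illustrative only), with u the normalized time between two keyframes holding values v0 and v1:

def linear(v0, v1, u):
    return v0 + (v1 - v0) * u                 # constant speed

def hold(v0, v1, u):
    return v0 if u < 1.0 else v1              # freeze on v0, then jump to v1

def cubic_bezier(v0, p1, p2, v1, u):
    # p1 and p2 are control values that shape the velocity over time.
    s = 1.0 - u
    return s**3 * v0 + 3 * s**2 * u * p1 + 3 * s * u**2 * p2 + u**3 * v1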

What are the advantages of keyframes?

The advantages of keyframes are:


• They speed up the animation process
• They let animators create any kind of movement with ease
• They create smooth transitions
• They make later changes easy to make
• They can be reused for other elements because they are easy to copy and paste

The biggest advantage of using keyframes in animation is that they make the creation
process far quicker without losing quality.

The animator has to set up only a few important reference points instead of creating
hundreds of individual frames.

Keyframe animation software makes a huge range of different animation movements achievable at an advanced level and in a reasonably short time.

Another advantage of keyframes is that the final work will retain the artist’s personal charm
and specific hand-drawing style together with sleek movements and a professional finish.
Later changes are also easier with keyframes because the editor has to modify only their
main values or features instead of going through all of the frames.

What are the disadvantages of keyframes?

The disadvantages of keyframes are:

• It can be time-consuming to manually set up and adjust each keyframe

• Complex movements are challenging to create

• It is difficult to keep track of them when you have a lot of keyframes set on the
timeline
Keyframes have some disadvantages when it comes to producing and handling realistic,
complex, and natural movements. These are easier to achieve with motion capture, another
technology to record movement.

Video animations are great for explaining complicated processes and entertaining viewers,
but they are not so efficient when it comes to expressing feelings and pushing people to
action.

What are the use cases for keyframes?

The main use cases for keyframes are video production and animation:

1. Post-production: Post-production is the last stage of video-making, when color correction, special effects, sound design, and much other editing work takes place. In this stage, creators can add animated filters, graphics, and various animation effects, whether they are making a simple YouTube video or a Hollywood blockbuster.
2. Creating animations: Animations can be created from scratch using graphic
software to draw and animate characters, showcase products, present a process,
or just entertain the viewers. Besides traditional motion art production, there is an
increasing number of businesses that choose to have animated elements on their
websites for better engagement and higher conversions.

https://www.svgator.com/blog/what-are-keyframe-animations/
I. KINEMATICS

What is Kinematics?

Kinematics is the study of motion without regard to the forces that cause it. In visual
effects (VFX), it is used to create realistic and believable motion for objects and characters.
Kinematics deals with the following aspects of motion:

i. Position: The location of an object at a given time.
ii. Velocity: The rate of change of position over time.
iii. Acceleration: The rate of change of velocity over time.
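In code, the three quantities chain together naturally. This toy Python sketch (illustrative only, with made-up constants) steps a falling object forward one frame at a time, accumulating acceleration into velocity and velocity into position:

dt = 1.0 / 24.0                    # seconds per frame at 24 fps
position, velocity = 0.0, 0.0
acceleration = 9.8                 # constant, e.g. gravity in m/s^2

for frame in range(24):            # simulate one second of motion
    velocity += acceleration * dt  # acceleration: rate of change of velocity
    position += velocity * dt      # velocity: rate of change of position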

Why is Kinematics Important in VFX?

Kinematics is essential for creating realistic and believable motion in VFX for several reasons:

i. It provides a foundation for animation: By understanding the principles of kinematics, animators can create motion that is physically accurate and looks natural.

ii. It enables realistic physics simulations: Kinematic principles can be used to create simulations of objects interacting with their environment, such as cloth, hair, and fluids.

iii. It allows for efficient animation: Kinematic tools can automate some of the animation process, freeing up animators to focus on more creative aspects.

There are several subtopics of kinematics that are relevant to VFX:
Forward kinematics: This is the process of calculating the position of an object based on
the positions and rotations of its joints. It is often used to animate characters and robots.
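A minimal forward-kinematics sketch for a two-joint planar arm, in Python (a standard textbook formulation, not any particular package's API): given the joint angles and bone lengths, it computes where the end of the chain lands.

import math

def forward_kinematics(theta1, theta2, l1, l2):
    # theta1: shoulder angle, theta2: elbow angle (radians); l1, l2: bone lengths
    elbow_x = l1 * math.cos(theta1)
    elbow_y = l1 * math.sin(theta1)
    end_x = elbow_x + l2 * math.cos(theta1 + theta2)
    end_y = elbow_y + l2 * math.sin(theta1 + theta2)
    return end_x, end_y

print(forward_kinematics(math.radians(45), math.radians(30), 1.0, 1.0))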

Inverse kinematics: This is the process of calculating the positions and rotations of an
object's joints to achieve a desired position for the end effector (e.g., the hand of a character).
It is often used to create animations where the end effector needs to follow a specific path.
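The matching inverse problem can be solved analytically for the same two-joint arm. This is the classic "two-bone IK" formulation, sketched in Python (production rigs add joint limits, damping, and preferred angles):

import math

def inverse_kinematics(x, y, l1, l2):
    # Return (theta1, theta2) so the end effector reaches (x, y).
    d2 = x * x + y * y
    cos_t2 = (d2 - l1 * l1 - l2 * l2) / (2.0 * l1 * l2)   # law of cosines
    cos_t2 = max(-1.0, min(1.0, cos_t2))   # clamp handles unreachable targets
    theta2 = math.acos(cos_t2)             # elbow angle
    theta1 = math.atan2(y, x) - math.atan2(l2 * math.sin(theta2),
                                           l1 + l2 * math.cos(theta2))
    return theta1, theta2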

Motion capture: This is the process of recording the motion of an actor or object using
sensors and then using that data to animate a character or object in a computer. Motion
capture data can be used to drive forward kinematics or inverse kinematics calculations.


Procedural animation: This is a type of animation where the motion of an object is defined
by a set of rules or algorithms. Kinematic principles can be used to create procedural
animations, such as the animation of cloth or hair.

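A toy example of a procedural rule in Python (names and constants are invented for illustration): a floating object whose bob and sway are generated every frame from sine functions rather than hand-set keyframes.

import math

def float_motion(t, bob_height=0.2, sway=0.1):
    # Position offset at time t (seconds), defined purely by rules.
    y = bob_height * math.sin(2.0 * math.pi * 0.5 * t)   # one bob every 2 s
    x = sway * math.sin(2.0 * math.pi * 0.23 * t)        # slower, unsynced sway
    return x, y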

Rigid body dynamics: This is a type of physics simulation that treats objects as if they are
made up of rigid bodies that cannot deform. Kinematic principles can be used to constrain
the motion of rigid bodies in a simulation.


Kinematics in Action: Kinematics is used in a wide variety of VFX applications, including:

Character animation: Kinematics is used to create realistic and believable motion for characters, such as walking, running, and jumping.

Vehicle animation: Kinematics is used to animate the motion of vehicles, such as cars, airplanes, and spaceships.

Crowd simulation: Kinematics is used to simulate the motion of large crowds of people.

Destruction effects: Kinematics is used to create simulations of objects being destroyed, such as buildings collapsing or explosions.

Kinematics is a powerful tool that can be used to create realistic and believable
motion in VFX. By understanding the principles of kinematics, VFX artists can create
stunning and immersive visual effects.
II. FULL ANIMATION

Full Animation:

Full animation in visual effects (VFX) is the process of creating moving images entirely through digital means. It's used in a wide range of productions, from movies and TV shows to video games and commercials.

Here are some subtopics that explore the details of full animation in VFX:

Types of full animation:

3D animation: This is the most common type of full animation, where objects are created and animated in three-dimensional space. Popular software programs for 3D animation include Maya, Houdini, and Blender.

2D animation: This type of animation uses flat, two-dimensional images that are manipulated to create the illusion of movement. Traditional 2D animation is drawn by hand, while modern 2D animation is often created using digital software like Adobe Animate.

Stop-motion animation: This technique involves physically moving objects in small increments between frames, creating the illusion of movement when played back at normal speed. Popular stop-motion materials include clay, puppets, and paper cutouts.

The animation pipeline:

The animation pipeline is the process of creating a full animation, from the
initial concept to the final rendered image. Here are some of the key steps:

Pre-production: This involves developing the story, characters, and environment for the animation. It also includes creating storyboards, animatics, and concept art.

Modeling: This is the process of creating the 3D models of the characters, objects, and environment.

Rigging: This involves adding a "skeleton" to the models that allows them to be animated.

Animation: This is the process of bringing the models to life by moving them and creating facial expressions.

Lighting and rendering: This involves adding lighting and other effects to the animation to make it look realistic.

Compositing: This is the final step, where all of the different elements of the animation are put together into a single image.

Applications of full animation in VFX:

Full animation is used in a wide range of VFX applications, including:

Movies and TV shows: Full animation is used to create everything from cartoon characters to photorealistic creatures and environments.

Video games: Full animation is used to create the characters, objects, and environments in video games.

Commercials: Full animation is used to create eye-catching and memorable commercials.

Architectural visualization: Full animation is used to create realistic visualizations of proposed buildings and landscapes.

The future of full animation:

Full animation is a rapidly evolving field, with new technologies and techniques emerging all the time. Some of the trends that are shaping the future of full animation include:
i. The use of real-time rendering: This allows animators to see their work in real-time,
which can make the animation process more efficient and interactive.

ii. The use of artificial intelligence (AI): AI is being used to automate some of the
more tedious tasks in the animation process, such as lip-syncing and character rigging.

iii. The use of virtual reality (VR) and augmented reality (AR): VR and AR are
being used to create new and immersive animation experiences.

III. LIMITED ANIMATION

While limited animation isn't typically used directly in visual effects (VFX),
its principles and techniques can definitely find application and influence the approach.
Here's a breakdown of how limited animation and VFX intersect:

Limited Animation Basics:

Definition: An animation technique that intentionally reduces the number of drawings and movements to save time and resources. It often employs:

Stylized drawings: Simplified designs focusing on key features.

Limited frame rates: Fewer unique drawings per second than fully animated works (e.g., animating "on twos" or "on threes," roughly 8-12 drawings per second, instead of a new drawing on every frame at 24 FPS).

Reused animation: Repeating cycles of movement for efficiency.

Limited backgrounds: Static or minimally animated backdrops.

Historical Use:

 Pioneered by studios like Hanna-Barbera and UPA in the mid-20th century.
 Often utilized in television cartoons due to budget and production constraints.
 Notable examples: "The Flintstones," "Scooby-Doo," "South Park."

Benefits:

 Cost-effective and time-saving.
 Allows for stylistic expression and exaggeration.
 Can lend a unique charm and comedic timing.
Drawbacks:

 Can appear less fluid and detailed compared to fully animated works.
 Requires careful planning and execution to avoid looking choppy or unnatural.

Influence on VFX:

 Stylization: Limited animation inspires stylized VFX choices, focusing on specific details and effects rather than hyper-realism. Think of stylized creatures, exaggerated explosions, or cartoon-like elements within a live-action scene.

 Efficiency: The "less is more" approach of limited animation translates well to VFX
workflows. Optimizing rendering times, using procedural techniques, and
strategically placing VFX elements can save resources without sacrificing impact.

 Storytelling: Similar to limited animation's use of exaggeration and simplified visuals, VFX artists can use these techniques to emphasize emotions, actions, and key information within a scene.

Technical Considerations:

 Frame Rates: While limited animation typically uses lower frame rates (12-24 FPS),
VFX can leverage this concept through:

 Selective frame drops: Intentionally dropping frames in specific scenes for stylistic
or dramatic effect (e.g., action sequences).
 Frame-by-frame animation: Utilizing hand-drawn or meticulously crafted
individual frames for specific elements (e.g., character expressions).

 Animation Cycles: Reusing and subtly modifying animation cycles can be applied in VFX for:

 Background elements: Repeating cycles of clouds, smoke, or water movement for efficiency.

 Crowd animation: Creating variations on a base animation cycle to populate a scene with diverse-looking characters (see the sketch below).
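A hedged sketch of cycle reuse in Python (illustrative only): sample a stored walk cycle with modulo arithmetic, and give each crowd member a phase offset and speed multiplier so copies don't read as clones.

CYCLE_FRAMES = 32                  # length of the stored walk cycle

def cycle_frame(global_frame, phase_offset=0, speed=1.0):
    # Which frame of the base cycle this character shows right now.
    return int(global_frame * speed + phase_offset) % CYCLE_FRAMES

# Three background walkers sharing one cycle, offset so they don't match:
for f in range(4):
    print(cycle_frame(f), cycle_frame(f, phase_offset=11), cycle_frame(f, speed=1.25))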

Examples of Limited Animation in VFX:

i. Stylized creatures: Think of the fantastical creatures in "Harry Potter" or the Muppets in "Muppet Treasure Island." Their movements and designs might not be hyper-realistic, but they effectively convey their emotions and character.

ii. Exaggerated effects: The over-the-top fire and explosions in films like "Who
Framed Roger Rabbit?" or "Pirates of the Caribbean" draw inspiration from limited
animation's use of exaggeration for comedic or dramatic effect.

iii. Minimalist aesthetics: Some VFX-heavy films like "Sin City" or "300" utilize a
limited color palette and simplified backgrounds, reminiscent of limited animation's
focus on key elements.

iv. Speed Ramping: Intentionally altering playback speed to create a stylized effect,
reminiscent of limited animation's use of timing for comedic or dramatic emphasis.

v. Particle Systems: Utilizing procedural techniques to create complex effects like fire,
smoke, or explosions efficiently, echoing the focus on key elements in limited
animation.

Key Features of Limited Animation :

 Simplified Character Designs: Characters in limited animation have simplified and streamlined designs to facilitate faster animation production.

 Symbolic Movement: Limited animation employs symbolic movement, where characters convey actions with minimal frame changes, implying movement rather than fully illustrating it.

 Looped Sequences: Animators use looped sequences, repeating short segments of animation to depict continuous movement efficiently.

 Cost and Time Efficiency: Limited animation’s reduced frame count and simplified designs contribute to quicker production times and cost savings.

Use Cases and Advantages :

Limited animation finds application in various scenarios:

Television Series: Limited animation is commonly used in TV series due to its efficiency and suitability for tight broadcasting schedules.

Commercials: It is favored for commercials that require quick turnaround times to meet marketing campaign deadlines.

Online Content: Limited animation is ideal for web-based animations, short videos, and online advertisements.

Limited animation's influence on VFX goes beyond surface-level stylization. It's a philosophy of efficiency, prioritization, and effective communication through simplified visuals. By understanding and applying its principles, VFX artists can create impactful and memorable visuals within budget and resource constraints.
ROTOSCOPING

What is Rotoscoping?

Rotoscoping is an animation technique that consists of drawing or tracing over a photo or live-action footage frame by frame to create more accurate and smoother animations. The result is having the live-action footage as a reference to produce realistic movements in the animation. Rotoscoping was used for intricate dance movements, walking, running, jumping, and other smooth motions, such as facial expressions, that were difficult to replicate in the hand-drawn animation process.

Instead of drawing by hand from scratch, animators projected the reference live-action footage onto glass panels and traced over the image frame by frame. It was a tedious and time-consuming process, but it was faster than drawing every frame unaided, and it resulted in more realistic animations, an enhanced artistic style, and more emphasis in dramatic scenes.

Today, most rotoscoping is done digitally as a special effect on live-action footage to create
an animated film version or as a visual effect to composite the footage on a different
background. Before diving into digital rotoscoping, let’s take a look at the history of
rotoscoping.

The History of Rotoscoping

Rotoscoping originated in 1915 when animator Max Fleischer created the rotoscoping
technique to produce his Out of the Inkwell series. Fleischer wanted to create a realistic
animation where cartoons moved and looked more like real people; therefore, he decided
to film his brother Dave dressed as a clown to breathe life into his cartoon character Koko
the Clown, the first rotoscoped cartoon character.
From that moment, Fleischer and his rotoscoping technique revolutionized the film
animation industry, bringing other iconic cartoon characters such as Betty Boop, Popeye,
and Superman to the screen.

Back then, everyone wanted to try this new technique for their animations. When the
Fleischer Process patent expired, other studios could use the rotoscope process. The first
full-length animated feature films using rotoscoping were Disney's Snow White and the
Seven Dwarfs (1937) and Gulliver's Travels (1939) by Fleischer Studios.

Rotoscoping became popular and mainly stayed the same until the mid-90s when a
computer scientist veteran, Bob Sabiston, developed the interpolated rotoscoping process
and created Rotoshop, an advanced computer software for hand-tracing frame-by-frame
over layers of frames. Rotoshop allowed shifting the rotoscoping technique to a computer,
though as of today, it’s a technique only the company Flat Black Films can use.

Director Richard Linklater was the first filmmaker to use digital rotoscoping to make a live-
action full-feature film. Linklater's full-length movies Waking Life (2001), A Scanner
Darkly (2006), and more recently, Apollo 10½: A Space Age Childhood (2022) used
rotoscoping to animate the live-action footage of the actors while keeping the animation
extremely realistic.

Types of Rotoscoping

The film industry makes considerable use of rotoscoping techniques for multiple purposes.
Here are some types of rotoscoping that you can do to add creativity to a dramatic scene, to
add visual effects, or to make an animation from scratch using real-life footage.

• Traditional Rotoscoping

Let’s start with the most traditional technique. As mentioned before, rotoscoping
starts with live-action footage. Let’s say you want to create an animation about
basketball players for an animated feature film. You can draw them by hand, but it'll
be difficult to replicate the movements of the player's body.

The best option is to first record the players, capturing their actions as if you were shooting motion picture footage. Then, using a movie projector, play the footage through glass, or use a lightbox, and trace over it frame by frame.

• Reference Film Rotoscoping

Filmmakers have used rotoscoping in various ways. Walt Disney used reference films
to define a character's movement from a live movie reference and animate Snow
White and the Seven Dwarfs accordingly. Having a reference film allowed Disney to
reuse many of their motion scenes: you can find the same motion in many Disney
films, like the dancing scene from Snow White and Robin Hood, and other
rotoscoping movements across movies like The Jungle Book, Winnie the Pooh, 101
Dalmatians, Pinocchio, The Sword in the Stone, Bambi, and many more.

This type of rotoscoping allows you to use your animator skills to draw your
characters on top of the reference film instead of tracing directly from the footage
and to reuse the reference for future projects and different animated characters.

One recent use of reference rotoscoping was in James Gunn’s Guardians of the Galaxy (2014). Gunn used rotoscoping with a real-life raccoon to preserve the animal's features and movements for Rocket the raccoon.

• Digital Rotoscoping

In the digital realm, rotoscoping opens up other animation opportunities, such as motion tracking and motion capture: live-action footage is captured and then rotoscoped in computer software. Animators trace directly in the rotoscoping software using tablets and other digital hardware.

Digital rotoscoping streamlines the traditional rotoscope process to create mattes used to move subjects and objects into scenarios impossible to shoot in live-action films. However, it still involves tracing and is still a time-consuming process.
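To make the digital workflow concrete, here is a minimal Python sketch, assuming the OpenCV package (opencv-python) and a hypothetical clip named footage.mp4, that exports every frame as a numbered still ready to be traced in any roto or paint tool:

```python
import os
import cv2  # OpenCV: pip install opencv-python

VIDEO_PATH = "footage.mp4"   # hypothetical input clip
OUT_DIR = "roto_frames"      # folder for the stills to trace
os.makedirs(OUT_DIR, exist_ok=True)

cap = cv2.VideoCapture(VIDEO_PATH)
frame_index = 0
while True:
    ok, frame = cap.read()   # grab the next live-action frame
    if not ok:
        break                # end of the clip
    # Saving each frame as a numbered still is the digital analogue of
    # projecting one film frame onto the rotoscope glass.
    cv2.imwrite(os.path.join(OUT_DIR, f"frame_{frame_index:04d}.png"), frame)
    frame_index += 1
cap.release()
print(f"Exported {frame_index} frames for tracing.")
```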
• Rotoscoping for Visual Effects

Rotoscoping allows you to add effects such as glow, color grading, flickers, and more.
One of the most popular uses of rotoscoping as a visual effect is in the original Star
Wars trilogy. The Jedi lightsabers were recorded using sticks; then, the VFX team
rotoscoped the sticks on every frame and added the characteristic glow of the
lightsabers.

Also, in Hitchcock's movie The Birds (1963), animator Ub Iwerks created the bird
scenes using rotoscoping.

• Photorealistic Rotoscoping

Rotoscoping has proven to be a fantastic creative tool outside of animated films too.

Director Richard Linklater pioneered photorealistic rotoscoping with the movie A


Scanner Darkly, where most features from the real actors were kept to create a
unique visual experience. Linklater used the same proprietary rotoscoping process
for his other movie, Waking Life. You can find another recent example of using the
rotoscope technique to create facial expressions in Mark Ruffalo’s Hulk.

Rotoscoping Animation Software

If you want to get into the rotoscoping animation world, you will need an animation
software. While you can definitely do it in the traditional way, why not use the help of the
technology available when it saves you time and money?

The ones below are the most popular software for rotoscoping.

• Silhouette

Silhouette is a refined rotoscoping tool by Boris FX. It allows you to create complex mattes and masks using B-Spline, X-Spline, and Magnetic Freehand shapes. Silhouette integrates point tracking, planar tracking, and Mocha Pro planar tracking. It has been used on award-winning films and series such as Black Panther: Wakanda Forever, Top Gun: Maverick, Dune, and The Mandalorian.


• Mocha Pro

Mocha Pro is a plug-in for planar tracking and rotoscoping from Boris FX. You can use
it on other video editing software like DaVinci Resolve, After Effects, Premiere Pro,
and Vegas Pro. Mocha Pro allows you to rotoscope with fewer keyframes and speed
up the rotoscope process with the X-Splines and Bezier Splines with magnetic edge-
snapping assistance.

• Adobe After Effects

Adobe After Effects is a professional software for motion graphics and animation. It’s
popular among video editors and graphic designers to create eye-catching motion
graphics and visual effects. Adobe After Effects is available as part of the Creative
Cloud bundle subscription. Additionally, After Effects includes a limited version of
Mocha with rotoscoping features from the Pro version of the Boris FX plug-in.
• Blackmagic’s DaVinci Resolve Fusion

Fusion is built into DaVinci Resolve, and it’s your tool for all visual effects and motion
graphic-related work. It features advanced mask and rotoscope tools with B-Spline
and Bezier shapes. Just switch to the Fusion page on your DaVinci Resolve project to
start using rotoscoping to animate characters and objects.

Rotoscoping Examples

Here is a list of the most notable works produced with rotoscope techniques. It includes movies, video games, TV shows, and music videos for you to explore and analyze the rotoscoping technique in depth.

• Movies
o Alice in Wonderland
o Star Wars Trilogy
o Fantasia
o Gulliver's Travels
o Lord of the Rings (1978)
o Fire & Ice
o Waking Life
o A Scanner Darkly
o Apollo 10½: A Space Age Childhood
• Video Games
o Prince of Persia
o Another World
o Flashback
• Music Videos
o A-Ha - Take On Me
o INXS - What You Need
o A-Ha - Train of Thought
o Paula Abdul - Opposites Attract
o Incubus - Drive
o Linkin Park - Breaking the Habit
o Kanye West - Heartless
• TV Shows and Series
o Jem and the Holograms
o The Simpsons
o Family Guy
o The Flowers of Evil
o Undone

Final Words
Nowadays, rotoscoping is more commonly used as a visual effect rather than an animation
technique, thanks to the rise of 2D and 3D computer graphics. Nonetheless, some
filmmakers still appreciate rotoscoping for its unique aesthetic qualities in both animation
and live-action films. As such, we can expect to see filmmakers continue to find new and
innovative ways to incorporate the rotoscoping process into their movies, series, and other
creative projects.

For more detail visit

https://medium.com/@uxgayatri/mastering-rotoscoping-techniques-for-achieving-seamless-results-
1c902fa51233

ROTOSCOPING
Maybe you have heard about this process, but do not know how to use it. Perhaps you have
an idea of how this technique works, but do not know where to begin. This section will help
you get started.

• What is Rotoscoping?
• Selecting a Video
• Tracing the Character
• Painting the Animation

What is Rotoscoping?

Rotoscoping is an animation technique where the animator traces over each frame of a live-
action movie to reproduce a realistic movement. This technique was invented by Max
Fleischer in 1915. The movie was projected frame by frame onto a piece of glass that the
animator could trace over. The piece of equipment used in this process is called a
rotoscope. Today, the rotoscope has been replaced by the computer.
Reasons for Rotoscoping
Some reasons you may choose to use this process:

• The motion is very realistic.


• The timing is accurate.
• The characters or other elements retain their proportions and volume.
• It helps you learn how to animate.
• It helps you understand how to break down a movement.
• It teaches you how to animate very subtle motions, like a slight head turn or a slow raise of the hand.

You can also superimpose your character design and only use the motion but not the actual
object, person or animal from the video.

Because rotoscoping is so realistic, it leaves little room for exaggeration, broad movement, squash and stretch, or a very cartoony look. If you use this technique, make sure it suits your project.

Selecting a Video

Selecting a video to rotoscope is simple. You can:

• Film the actions you need to animate yourself. For example, you could film a dog playing with a ball or a person performing the motion you need,
• Find a free movie clip on the web, or
• Purchase a royalty-free movie clip from a website.

Your movie format can be any of the following:

• AVI (*.avi)
• QuickTime (*.mov)
• MPEG (*.mpg)
• iPod (*.m4v)

If you find a movie clip that is not in any of these formats, you can easily convert it using editing software.

Your clip does not need to have a very high resolution; however, the higher the resolution, the more detail you will see. A minimum resolution of 300x200 is recommended.

Importing a Video
When you create your Toon Boom Studio project, you can avoid having too many drawings
to trace over by creating the project with a rate of 12 frames per second instead of 24.

To import your video:

1. Select File > Import File.


2. In the Open dialog box, browse for your clip and click Open.
3. In the Import Options dialog box, do NOT set an opacity value.

4. Click OK.

Tracing the Character


Before you start tracing your movie, set up your brush with a low smoothness and a lively
tracing colour. This is so your lines will be more faithful to the video still and you will be able
to see your lines clearly.

Tracing
When you trace over your imported movie, concentrate on one element at a time. For example, if there is a boy running with a balloon, trace the boy first and then the balloon. This helps when trying to create separate movements. The boy moves differently than a balloon, even if they are moving at the same speed and in the same direction. If the two objects or characters are interacting, it is best to draw them on the same layer.

Remember, when you trace over the character, try to close your zones for fast and efficient
painting later on.

To trace your animation:

1. In the Timeline view, add a new layer to trace your animation.
2. If you work in the Drawing view, enable the Light Table to see the live-action frames.
3. In the Camera/Drawing view, zoom in on your image to see the details better.
4. Trace your first frame.
5. In the Timeline view, select the second cell and trace the second image.
6. If necessary, enable Onion Skin to see your previous drawings.
7. Repeat the process until the animation is entirely traced.

Fine Tuning the Animation


Once your animation is traced, it is a good idea to deselect the live-action clip layer or turn
off the Light Table, and then go over your animation to fix the little details, such as open
zones and uncompleted lines.

To learn more about closing gaps, see Adding Colours and to learn more about making
invisible strokes, see Drawing and Design.

If you want your final project to be lighter once you are done tracing, select your lines and
flatten them.

To flatten your drawings:

1. In the Drawing Tools toolbar, click the Select tool.
2. In the Timeline view, select the first frame of the tracing layer.
3. In the Camera/Drawing view, select your entire drawing.
4. Select Tools > Flatten.
5. Repeat this process for all your drawings.

Painting the Animation

To create your colour palette, use Toon Boom Studio's special dropper to pick colours from
your live-action movie and paint your animation in the same colours as the clip.

To create your colour palette:

1. In the Colour view, create and rename your new palette.
2. In the Colour list section, add a new colour swatch by clicking the Add Colour button.
3. In the Camera view, display your live-action movie.
4. Double-click on the new colour swatch to display the Colour Picker window.
5. In the Colour Picker window, click the Eye Dropper button.
6. In the Camera view, pick a colour.
7. Repeat this process until the colour palette is complete.

Your animation is now ready for painting.


STOP MOTION ANIMATION

What is Stop Motion Animation?


Stop motion animation (also called stop frame animation) is animation that is captured one frame at a time, with physical objects that are moved between frames. When you play back the sequence of images rapidly, it creates the illusion of movement. If you understand how 2D drawn animation (early Disney) works, stop motion is similar, except it uses physical objects instead of drawings.

[Figure: four individual stills, Frame 1 through Frame 4, played back in sequence to produce the animated result.]
You see stop motion animation all the time—in commercials, music videos, television shows and feature films—even if you don’t realize it. While it is common for people to think of stop motion as just one specific style, such as clay animation, the reality is that stop motion techniques can be used to create a wide range of film styles.

Tools Required To Make Stop Motion Animation


If you want to try stop-motion animation at home, you can do it with
simple tools. They include:

Camera
To capture the image, you can use a smartphone or a digital camera like a
DSLR.

Tripod
A stand or holster to keep your camera steady.

Editing Software
To edit the frames together in an animation.

Materials/Objects
Inanimate objects become your subject of animation.

How To Make a Stop Motion Animation

1. Find Your Setting

The first step in stop motion animation is to establish where you will place your camera. Fill your frame with the location or backdrop, and make sure the edges of your set stay outside the frame to maintain consistency.

2. Set Your Camera Right

You need to limit camera shake to get a good setup for your stop-motion video. For this purpose, use a tripod or a stand to keep your camera in a stable position.
3. Use A Remote Trigger Or Timer

You will get a cleaner stop-motion animation if you avoid pressing the camera button by hand for every shot. Trigger your camera using a remote, or set a timer to take a picture every few seconds.
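As a rough illustration of the timer idea, here is a minimal Python sketch, assuming OpenCV and a webcam at device index 0 as a stand-in for a real stop-motion camera (the interval and shot count are made up):

```python
import time
import cv2  # OpenCV: pip install opencv-python

INTERVAL_SECONDS = 3   # arbitrary pause between shots
NUM_SHOTS = 24         # arbitrary number of stills

cap = cv2.VideoCapture(0)          # webcam at device index 0
for i in range(NUM_SHOTS):
    ok, frame = cap.read()         # take one still, hands-free
    if not ok:
        break
    cv2.imwrite(f"shot_{i:04d}.png", frame)
    time.sleep(INTERVAL_SECONDS)   # wait, reposition your objects, repeat
cap.release()
```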

4. Shoot With Manual Settings

When you shoot with your camera in auto mode, the camera re-adjusts its settings for every image you take, resulting in a flickering effect. Setting a uniform shutter speed, ISO, aperture, and white balance overcomes this issue.

5. Control Your Lighting

Uncontrolled lighting can cause shadows and minor flicker that may not suit your animation. Be mindful of windows, since daylight changes over time, and keep only the essential lighting needed to see your objects.

6. Frame Rate

As a beginner, it is enough to know that one second of video is typically made up of 12 frames. If you use fewer frames per second than this, your video can become jittery.
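To see what the frame rate means in practice, here is a small sketch, again assuming Python with OpenCV and the numbered stills from the capture sketch above, that assembles the shots into a 12 fps video:

```python
import glob
import cv2  # OpenCV: pip install opencv-python

FPS = 12  # 12 stills shown per second of finished video
stills = sorted(glob.glob("shot_*.png"))

first = cv2.imread(stills[0])
height, width = first.shape[:2]
writer = cv2.VideoWriter("stopmotion.mp4",
                         cv2.VideoWriter_fourcc(*"mp4v"),
                         FPS, (width, height))
for path in stills:
    writer.write(cv2.imread(path))  # each still becomes one video frame
writer.release()
# At 12 fps, 24 stills yield 2 seconds of animation.
```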

7. Move In Small Increments

Move your objects in small, consistent increments to create a smooth animation. Conversely, if you want an object to appear to move faster on screen, move it in larger increments between shots.

8. Audio

Once you are done shooting the silent stop-motion animation, you can
add some audio to your video to make it enjoyable. For this purpose, opt for a
dedicated stop motion software or app.
OBJECT ANIMATION

Object animation is a fascinating form of stop motion animation that brings everyday items to life. Here are the key points about object animation:

1. Definition and Technique:

o Object animation involves animating non-drawn objects such as toys, blocks, dolls, and similar items.
o Unlike plasticine (clay) or wax, these objects are not fully malleable and are not designed to resemble recognizable human or animal characters.
o Animators physically manipulate these objects, capturing each movement frame by frame. When played back, the sequence creates the illusion of motion.
2. Distinct from Model and Puppet Animation:

o Model Animation: Uses recognizable characters (such as clay figures or puppets) as subjects.
o Puppet Animation: Features characters with articulated joints.
o Object animation works with pre-existing objects like static toy soldiers, LEGO bricks, or construction toys. These objects are not inherently designed as characters.
3. Combining Techniques:

o Object animation is often combined with other forms of animation for more
realism.
o For example, a toy car might be animated using object animation, while a
character (often in puppet or model animation style) is seen driving the car.
Pixilation Animation:

Before jumping into a definition of pixilation, we want to stress that we are not misspelling the word. "Pixelation" with an "e" is a different thing altogether; that’s when you zoom in on an image and can make out the individual blocks (pixels) that it’s made of. In other words, it has nothing to do with "pixilation" with an "i," which we will get into below.

Pixilation is a filmmaking technique where live actors and objects are shot frame by frame to simulate movement. This results in an animated-looking movie, where a human, and the things around them, move without being touched. The motion can appear jerky or smooth, depending on the gaps in motion between each frame.

The name seems to come from the word “pixilated,” which itself is a reference
to someone being under the influence of pixies (yes, the small magical flying
ones). Due to pixilation often representing human beings seemingly moving
around on their own, it makes some amount of sense.

Stop-Motion Vs Pixilation

No doubt pixilation will remind you of stop-motion animation, and that’s mainly because they’re almost the same thing. The key difference is that stop-motion animation involves models, along with sets, that are 100% manipulated by a director/animator. Compare that with pixilation, where a human being and their surroundings are manipulated, but that’s all. In both cases, everything is shot frame by frame.

Characteristics of Pixilation include:

 Frame-by-frame filmmaking process


 Jerky and unnatural looking movement
 Surreal and fantastical subject matter
 Usually only reserved for specific moments and VFX shots in full-length
movies

Pixilation is often used as a tool for creating a unique and comical movie, and its origins date as far back as the early 1900s. In some movies, like Hôtel électrique (1908), objects around the character appear to move and affect them without anyone touching them.
RIGGING

What Is RIGGING in 3D Animation? Basics and How It Works


Three-dimensional (3D) computer animation can seem a lot like magic. How
does a graphic artist’s sketch of a character get transformed into a lifelike, 3D
animation that’s able to walk, crouch, jump, and use its limbs and hands as
naturally as you or I can?


With 2D animation, motion is created frame by frame. Computers have since revolutionized animation, replacing hand-drawn frames with computer simulations that control how everything on screen moves: cloth, leaves on trees, and even hair.

But before a computer can take an artist’s rendering of a character and bring it
to life with motion, it has to go through an important phase: 3D rigging.

It’s part art and part science. Here’s a look at how it works.

What are 3D animation and rigging?


3D animation is simply the process of creating characters, objects, and even
scenes or environments in a three-dimensional space. With 3D animation,
designers can add more depth and realism to their creations than with 2D
animation. It also makes it easier to achieve complex interactions like the
natural movement of water, fire, and wind—making final products more visually
pleasing. In addition, 3D animations make it possible to abide by the natural
laws of physics, texture, and lighting.

But before 3D animation is done, animators must first create rigs. Rigging
involves creating bones or a digital skeleton that makes it possible to control the
movement of characters and objects. For example, animators can control how
characters run, how their hair, arms, legs, and other body parts move, and even
their facial expressions.

Keep reading to understand the intricacies around rigging in 3D animation.

Key components of 3D rigging

Creating a 3D mesh, designing a skeleton, and finally incorporating the motion simulation and manipulation are the key components of 3D rigging.

The 3D mesh (skin)

In 3D animation, a mesh or skin is typically crafted using polygonal modeling, a technique where artists construct the character's form using interconnected polygons, usually triangles or quadrilaterals. This results in a wireframe structure, a kind of skeletal framework, that outlines the character's shape.

When this 3D mesh is placed over the rig or skeleton, it aligns perfectly with the
underlying bone structure. This harmonious interaction between the mesh and
the rig enables the character to move in a lifelike and cohesive manner, with
each polygon adjusting to mimic realistic movements.

For example, when a character lifts a hand, the skin mesh should also follow
along—generating an illusion of movement and flexibility. A rigging artist can
also apply different colors, textures, and lighting effects to a 3D mesh to achieve
different goals.

Designers can also deform and manipulate the skin mesh for characters to
perform actions like laughing, smiling, and other expressions.

The skeleton: bones, joints, and muscles

Before you animate a character, it needs to be rigged. Using interconnected bones, muscles, and joints, you use the skeleton or rig to control how characters or objects move. A skeleton can feature a few simple control points, or it can quickly grow and become complex, depending on the character.

In 3D animation, the skeleton can be represented by lines or shapes that are interconnected via joints. In this context, joints are places where skeleton bones meet—and they control how different body limbs move. For example, a knee is an example of a joint when creating a rig for a human character.

On the other hand, muscles are mainly mimicked when creating the skin mesh.
They are connected to the underlying skeleton to allow them to move as
realistically as possible while obeying the laws of physics.

Motion simulation and vertex manipulation

Character movements are simulated by a computer based on the properties of the internal skeleton.

In 3D animation, each bone in a skeleton is connected to specific vertices on a 3D skin mesh. This means that when the bone moves, the skin, clothing, or even facial expressions are also affected. Animators can assign skin weights to different body parts—which will determine how much deformation occurs.
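This vertex-weighting idea is known as linear blend skinning. The toy NumPy sketch below, a 2D example with made-up bones, vertices, and weights rather than any particular package's rigging API, shows how each vertex's deformed position is the weighted sum of where every bone's transform would carry it:

```python
import numpy as np

def rotation(theta):
    """A 2x2 rotation matrix standing in for a bone's transform."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s], [s, c]])

# Two bone transforms (pure rotations, for simplicity).
bones = [rotation(0.0), rotation(np.pi / 6)]

# Rest-pose vertices of a tiny strip of "skin" (toy data).
vertices = np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 0.0]])

# Skin weights: one row per vertex, one column per bone; rows sum to 1.
weights = np.array([
    [1.0, 0.0],   # follows bone 0 completely
    [0.5, 0.5],   # the blended, "weight painted" region
    [0.0, 1.0],   # follows bone 1 completely
])

# Linear blend skinning: v_i' = sum over j of w_ij * (M_j @ v_i)
deformed = np.zeros_like(vertices)
for i, v in enumerate(vertices):
    for j, M in enumerate(bones):
        deformed[i] += weights[i, j] * (M @ v)
print(deformed)
```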

Animators also control character movement through forward and inverse kinematics. In forward kinematics, each bone in a skeleton can be manipulated independently to achieve different actions and poses. Inverse kinematics makes it easier to move a character’s limbs, like legs and arms, realistically to specific predetermined positions.
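A minimal planar sketch of both ideas follows, in plain Python with NumPy and a hypothetical two-bone arm with made-up lengths: forward kinematics turns joint angles into a hand position, while the textbook two-bone analytic inverse kinematics (law of cosines) recovers angles that reach a target.

```python
import numpy as np

L1, L2 = 2.0, 1.5  # upper-arm and forearm lengths (toy values)

def forward_kinematics(theta1, theta2):
    """FK: joint angles in, elbow and hand positions out."""
    elbow = np.array([L1 * np.cos(theta1), L1 * np.sin(theta1)])
    hand = elbow + np.array([L2 * np.cos(theta1 + theta2),
                             L2 * np.sin(theta1 + theta2)])
    return elbow, hand

def inverse_kinematics(target):
    """Analytic two-bone IK: target position in, joint angles out."""
    x, y = target
    d2 = x * x + y * y
    # Elbow angle from the law of cosines, clipped for reachability.
    cos_t2 = np.clip((d2 - L1**2 - L2**2) / (2 * L1 * L2), -1.0, 1.0)
    theta2 = np.arccos(cos_t2)
    theta1 = np.arctan2(y, x) - np.arctan2(L2 * np.sin(theta2),
                                           L1 + L2 * np.cos(theta2))
    return theta1, theta2

# Round trip: pose the arm with IK, then verify with FK.
t1, t2 = inverse_kinematics((2.5, 1.0))
_, hand = forward_kinematics(t1, t2)
print(hand)  # approximately (2.5, 1.0)
```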

The rigging process: step by step


Rigging is a highly complex but necessary step in the animation process. It
allows a character’s body to be articulated in a structured way. Without rigging,
trying to animate a character would result in a very distorted, deformed mesh.

From initial modeling to weight painting, here are key steps in a rigging process.

Initial modeling and skeleton creation

Before a 3D model can be animated, it has to get a rig. Let’s talk about this by
thinking of a 3D character as a hand-sculpted clay model.
Once a model has been created by an artist, it’s inanimate, stuck in its original
position until you manually bend an arm or turn its head. You can imagine that
creating motion by hand for a feature-length film would be extremely tedious.

To automate the process, computer animation programs allow animators to assign motions. For that to happen, animators have to transform characters from clay models into marionettes that can be manipulated. That’s where 3D rigging comes in.

3D rigging creates a skeleton for a 3D model—all the bones and joints inside a
character that give animation software vertices it can recognize.

Assigning bones and creating the rig

Each bone in a character skeleton is assigned properties and constraints, just like bones in a human skeleton.

For example, the bones can rotate, bend in certain directions, and even control
the motion of other bones. Bones can be weighted so that they have more
influence over other bones. A “master bone” can be set to control the center
point of how a character moves.

Weight painting and vertex assignment

With software platforms like Unity and Blender, experienced animators can use
drivers, morphs, kinematics, and weight painting, among other tools, to control
nearly anything on a character—say, raising the left eyebrow for a curious look
or raising both for a surprised look.

Through weight painting, animators can assign different values or weights to vertices on a skin mesh. This will influence their level of deformation from nearby bone structures, allowing 3D objects to move naturally or in any desired way. For example, Blender has the Weight Paint mode you can use to assign weights and visualize different values.
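For instance, the weights that Weight Paint mode visualizes can also be assigned through Blender's Python API. Here is a minimal sketch meant to be run in Blender's scripting workspace with a mesh object active; the group name and vertex indices are arbitrary:

```python
import bpy  # only available inside Blender

obj = bpy.context.object  # the active mesh object

# Create a vertex group; an armature bone of the same name would deform it.
group = obj.vertex_groups.new(name="UpperArm")

# Assign full influence to some vertices and partial influence to others
# (indices 0-4 are arbitrary and must exist in the mesh).
group.add([0, 1, 2], 1.0, 'REPLACE')   # weight 1.0: follows the bone fully
group.add([3, 4], 0.4, 'REPLACE')      # weight 0.4: softer deformation
```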

AI and machine learning in 3D rigging

Artificial intelligence (AI) is transforming the animation industry, enabling stakeholders to be more creative and productive. We discuss how AI can fit into your 3D rigging workflow.
Automating the rigging process with AI

AI-powered tools can perform repetitive and time-consuming tasks, allowing you
to focus more on creative tasks.

In 3D animation, AI tools can analyze models and identify appropriate places to include skeletons or bone structures. These platforms can also assist in weight painting—specifically, in setting and adjusting skin weights—to ensure different parts move as intended.

AI also enhances the motion capture process, making it easier to imitate the
movement of real human actors and map it to 3D rigs, allowing animation
characters to move or act in a natural or human-like way.

Today’s AI tools have been trained on vast datasets, including rigging best practices. As a result, they can identify errors or incorrect logic in your project and provide tips to enhance your rigging process.

Enhancing realism: AI in muscle and skin simulation

AI tools contribute to realism in animation by simulating realistic muscle and skin movements. From their massive training datasets, AI platforms can simulate how muscles stretch and contract or affect other body limbs, and apply these fundamentals to rigs, adding a more realistic touch to animations.

Examples of 3D rigging

Rigging systems are used to create lifelike movements in animated creatures.


Rigging is a widely used concept in different forums, including video games,
films, marketing ads, and more. Below are some popular areas where 3D rigging
has been exceptionally used.

Image source: Media Division

• Avatar. This film used a performance capture technique to map a wide range of movements and actions of real actors to animated characters. As a result, the characters moved and behaved in a realistic and believable way.

• Monsters Inc. Pixar's movies, such as "Monsters Inc.," use advanced 3D rigging techniques to bring their characters to life. The rigging process in these films allows characters like Sulley or Mike Wazowski to move in expressive and emotionally engaging ways.

• Virtual reality (VR) experiences. In VR environments, 3D rigging is crucial for creating immersive and interactive experiences. Characters or avatars in VR games and simulations are rigged to respond to player movements and actions, enhancing the sense of presence in the virtual world.

• Educational software and simulations. 3D rigging is used in educational software to create realistic models of the human body, animals, or machinery. These models help in understanding complex concepts through interactive visualizations.

• Marketing and advertising mascots. Many brands use animated mascots in their advertising campaigns. These mascots are often 3D models that have been rigged to perform various actions, like dancing or interacting with products, making them appealing and memorable to consumers.
Top rigging software tools

From Autodesk Maya to Cinema 4D, we discuss the top rigging software that
you can integrate into your animation workflow.

Autodesk Maya

Autodesk Maya is a comprehensive platform used for complex rigging, rendering, facial animation, character modeling, skeletal animation, creating 3D effects, and simulations. It’s used by special effects (also called FX) artists, riggers, animators, and 3D modelers in a wide variety of industries, spanning from gaming to film.

Features:
• Ability to create complex skeleton structures for different characters
• Enhanced skinning tools
• Transfer rigs from one character to another with similar skeleton
structure
• Polygonal modeling

Pricing:

You can purchase Maya via subscription or through flexible payments:

Subscription:

• $235 per month


• $1,875 per year
• $5,625 paid every three years

Flexible payments:

• 100 tokens for $300


• 500 tokens for $1,500
Blender

Blender is open-source software that helps artists and animators create complex
characters, graphics, vectors, and visual effects. Apart from its intuitive
interface, experienced creators can also write Python scripts and use them
in 3D modeling, character rigging, and animation. Blender has a huge
community support network, allowing you to access valuable learning resources
and tutorials.

Features:

• A robust animation toolset


• A simple weight scale and painting tool for assigning different weights
to digital bones
• Automatic skinning—which leads to realistic movements
• Bone layers and colored groups facilitating better organization
• Control objects easily by setting constraints
• Motion paths allow you to control character movement easily

Pricing:

• Available for free, donations accepted


Cinema 4D

Cinema 4D is a professional tool that performs numerous tasks in the 3D animation pipeline, including sculpting, character design, modeling, and rigging. Apart from built-in features, Cinema 4D also supports integrations with popular apps like Adobe After Effects and Photoshop.

Features:

• Parametric, volume, and polygonal modeling


• Body paint 3D for adding textures to models
• Car rig templates
• Toon rig to animate cartoon characters quickly
• Provides a huge library of resources you can import and use in a project

Pricing:

• Maxon One for $99.91 per month or $1,199 billed annually


• C4D +Redshift for $81.91 per month or $983 billed annually
Shape Keys

Introduction

Shape keys are used to deform objects into new shapes for animation. In other
terminology, shape keys may be called “morph targets” or “blend shapes”.

The most popular use cases for shape keys are in character facial animation and in
tweaking and refining a skeletal rig. They are particularly useful for modeling organic
soft parts and muscles where there is a need for more control over the resulting
shape than what can be achieved with combination of rotation and scale.

Shape keys can be applied on object types with vertices like mesh, curve, surface and
lattice.

Example of a mesh with different shape keys applied.

Workflow

Shape keys are authored in the Shape Keys panel which is accessed in the Object Data tab of
the Properties (e.g. the Mesh tab for mesh objects).

A shape key is modified by first selecting a shape key in the panel, and then moving the
object’s vertices to a new position in the 3D Viewport.

The panel has controls for affecting the current Value (influence, weight) of a shape. It is
possible to see a shape in isolation or how it combines with others.
Adding and Removing Vertices

It is not possible to add or remove vertices in a shape key. The number of vertices and how
they connect is specified by the mesh, curve, surface or lattice. A shape key merely records a
position for each vertex and therefore shapes always contain all the object’s vertices.

When adding a vertex, all shape keys will record it with the position in which it is created.
Workflow-wise, adding and deleting vertices after creating shape keys is possible, but it is
best to leave the creation of shape keys for when the mesh is finished or its topology is stable.

Adding Shape Keys

When adding a new shape key with the + button next to the list, the new shape will be a
copy of the Basis shape, independently of the current result visible in the 3D Viewport.

When adding a new shape key from Specials ‣ New Shape from Mix, the shape will start off with the vertex configuration that is visible at that moment.

When doing facial animation with relative shape keys, it can be useful to first create a shape key with a complex extreme pose (e.g. anger or surprise), and then break this complex shape into components by applying a temporary vertex group to the complex shape and creating a copy with New Shape from Mix. This technique helps reduce conflicts between different shape keys that would otherwise produce a double effect.
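The same operations are exposed in Blender's Python API. Here is a minimal sketch, assuming it runs on an active mesh object (the key names and the vertex index are arbitrary), mirroring the + button and New Shape from Mix:

```python
import bpy  # only available inside Blender

obj = bpy.context.object

basis = obj.shape_key_add(name="Basis")  # the rest shape
# from_mix=False copies the Basis shape, like the panel's + button.
smile = obj.shape_key_add(name="Smile", from_mix=False)

# Edit the new shape by moving vertices (index 12 is arbitrary).
smile.data[12].co.z += 0.1
smile.value = 0.7  # this key's current influence in the mix

# from_mix=True snapshots the current result, like New Shape from Mix.
snapshot = obj.shape_key_add(name="FromMix", from_mix=True)
```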

Relative or Absolute Shape Keys

A mesh (curve, surface or lattice) has a stack of shape keys. The stack may be
of Relative or Absolute type.

Relative

Mainly used for muscles, limb joints, and facial animation.

Each shape is defined relative to the Basis or to another specified shape key.

The resulting effect visible in the 3D Viewport, also called Mix, is the cumulative
effect of each shape with its current value. Starting with the Basis shape, the result is
obtained by adding each shape’s weighted relative offset to its reference key.

Value

Represents the weight of the blend between a shape key and its reference key.
A value of 0.0 denotes 100% influence of the reference key and 1.0 of the shape key.
Blender can extrapolate the blend between the two shapes above 1.0 and below 0.0.
Basis

Basis is the name given to the first (top-most) key in the stack.

The Basis shape represents the state of the object’s vertices in their original position.
It has no weight value and it is not keyable. This is the default Reference Key when
creating other shapes.
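Putting these definitions together, and simplifying by taking the Basis as the reference key of every shape, the mix is Basis plus the sum of value × (shape − Basis) over all keys. The toy NumPy sketch below, with made-up vertex data, makes the arithmetic explicit, including extrapolation outside the 0.0–1.0 range:

```python
import numpy as np

basis = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])   # rest positions
smile = np.array([[0.0, 0.2, 0.0], [1.0, 0.1, 0.0]])   # one shape key
frown = np.array([[0.0, -0.1, 0.0], [1.0, 0.0, 0.0]])  # another shape key

def mix(shapes_and_values):
    """Relative mix: Basis plus each key's weighted offset from Basis."""
    result = basis.copy()
    for shape, value in shapes_and_values:
        result += value * (shape - basis)
    return result

print(mix([(smile, 0.7), (frown, 0.3)]))
print(mix([(smile, 1.5)]))  # a value above 1.0 extrapolates past the key
```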
Absolute

Mainly used to deform the objects into different shapes over time.

Each shape defines how the object’s shape will be at Evaluation Time specified in
its Value.

The resulting shape, or Mix, is the interpolation of the previous and next shape given
the current Evaluation Time.

Value

Represents the Evaluation Time at which that shape key will be active.
Basis

Basis is the name given to the first (topmost) key in the stack.

The Basis shape represents the state of the object’s vertices in their original position.

Shape Keys Panel

Shape Keys panel.

The Shape Keys panel is used for authoring shape keys.


Active Shape Key Index

A List View.

Value/Frame (number)

In Relative mode: Value is the current influence of the shape key used for
blending between the shape (value=1.0) and its reference key (value=0.0). The
reference key is usually the Basis shape. The weight of the blend can be
extrapolated above 1.0 and below 0.0.

In Absolute mode: Value is the Evaluation Time at which the shape will have
maximum influence.
Mute (check mark)

If unchecked, the shape key will not be taken into consideration when mixing
the shape key stack into the result visible in the 3D Viewport.

Shape Key Specials

New Shape from Mix

Add a new shape key with the current deformed shape of the object. This
differs from the + button of the list, as that one always copies the Basis shape
independently of the current mix.
Mirror Shape Key

If your mesh is symmetrical, in Object Mode, you can mirror the shape keys on
the X axis. This will not work unless the mesh vertices are perfectly
symmetrical. Use the Mesh ‣ Symmetrize tool in Edit Mode.
Mirror Shape Key (Topology)

Same as Mirror Shape Key though it detects the mirrored vertices based on the
topology of the mesh. The mesh vertices do not have to be perfectly
symmetrical for this action to work.
Join as Shapes (Transfer Mix)

Transfer the current resulting shape from a different object.

Select the object to copy, then the object to copy into. Use this action and a
new shape key will be added to the active object with the current mix of the
first object.
Transfer Shape Key

Transfer the active shape key from a different object regardless of its current
influence.

Select the object to copy, then the object to copy into. Use this action and a
new shape key will be added to the active object with the active shape of the
first object.
Delete All Shape Keys

Removes all Shape Keys and any effect that they had on the mesh.
Apply All Shape Keys

Saves the current visible shape to the mesh data and deletes all Shape Keys.
Relative

Set the shape keys to Relative or Absolute. See Relative or Absolute Shape
Keys.
Shape Key Lock (pin icon)

Show the active shape in the 3D Viewport without blending. Shape Key
Lock gets automatically enabled while the object is in Edit Mode.
Shape Key Edit Mode (edit mode icon)

If enabled, when entering Edit Mode the active shape key will not take
maximum influence as is default. Instead, the current blend of shape keys will
be visible and can be edited from that state.
Add Rest Position

Creates an Attribute in the vertex domain called rest_position, which is a copy of the position attribute before shape keys and modifiers are evaluated. Only mesh objects support this option.
Relative Shape Keys

Relative Shape Keys options.

With relative shape keys, the value shown for each shape in the list represents the
current weight or influence of that shape in the current Mix.

Clear Shape Keys X

Set all influence values, or weights, to zero. Useful to quickly guarantee that
the result shown in the 3D Viewport is not affected by shapes.
Value

The weight of the blend between the shape key and its reference key (usually
the Basis shape).

A value of 0.0 denotes 100% influence of the reference key and 1.0 of the
shape key.
Range

Minimum and maximum range for the influence value of the active shape key.
Blender can extrapolate results when the Value goes lower than 0.0 or above
1.0.
Vertex Group

Limit the active shape key deformation to a vertex group. Useful to break
down a complex shape into components by assigning temporary vertex groups
to the complex shape and copying the result into new simpler shapes.
Relative To
Select the shape key to deform from. This is called the Reference Key for that
shape.
Note

Rather than storing offsets directly, internally relative keys are stored as snapshots of
the mesh shape. The relative deformation offsets are computed by
subtracting Reference Key from that snapshot.

Therefore, replacing the Reference Key has the effect of subtracting the difference
between the new and old reference from the relative deformation of the current key.

Absolute Shape Keys

Absolute Shape Keys options.

With absolute shape keys, the value shown for each shape in the list represents
the Evaluation Time at which that shape key will be active.

Re-Time Shape Keys (clock icon)

Absolute shape keys are timed, by order in the list, at a constant interval. This
button resets the timing for the keys. Useful if keys were removed or re-
ordered.
Interpolation

Controls the interpolation between shape keys.

Linear, Cardinal, Catmull-Rom, B-Spline: the different types of interpolation. (In the manual's figure, the red line represents interpolated values between keys, shown as black dots.)
Evaluation Time

Controls the shape key influence. Scrub to see the effect of the current
configuration. Typically, this property is keyed for animation or rigged with a
driver.
Workflow

Relative Shape Keys

1. In Object Mode, add a new shape key via the Shape Key panel with the + button.
2. “Basis” is the rest shape. “Key 1”, “Key 2”, etc. will be the new shapes.
3. Switch to Edit Mode, select “Key 1” in the Shape Key panel.
4. Deform mesh as you want (do not remove or add vertices).
5. Select “Key 2”, the mesh will be changed to the rest shape.
6. Transform “Key 2” and keep going for other shape keys.
7. Switch back to Object Mode.
8. Set the Value for “Key 1”, “Key 2”, etc. to see the transformation between the
shape keys.
The figure below shows, from left to right: the “Basis”, “Key 1”, “Key 2”, and mix (“Key 1” at 1.0 and “Key 2” at 0.8) shape keys in Object Mode.

Relative shape keys example.

Absolute Shape Keys

1. Add sequence of shape keys as described above for relative shape keys.
2. Uncheck the Relative checkbox.
3. Click the Reset Timing button.
4. Switch to Object Mode.
5. Drag Evaluation Time to see how the shapes succeed one another.

Absolute shape keys workflow.

By adding a driver or setting keyframes to Evaluation Time you can create an animation.
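As a minimal Blender Python sketch of that last step, assuming a mesh that already has an absolute shape key stack (the frame numbers and evaluation times are arbitrary):

```python
import bpy  # only available inside Blender

key = bpy.context.object.data.shape_keys
key.use_relative = False  # switch the stack to Absolute

# Keyframe Evaluation Time so the shapes play back over 48 frames.
# (The useful eval_time range depends on how the keys are timed.)
key.eval_time = 0.0
key.keyframe_insert("eval_time", frame=1)
key.eval_time = 100.0
key.keyframe_insert("eval_time", frame=48)
```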
MOTION PATHS

Motion Paths in Visual Effects: A Deep Dive

Motion paths are a fundamental tool in visual effects (VFX) for animating objects
along predefined paths. They offer precise control over movement, enabling realistic and
visually appealing animations. Let's explore the details, subtopics, and principles:

What are Motion Paths?

A motion path is a sequence of interconnected points that guides an object's movement in animation software. These points can create straight lines, curves, or complex shapes, offering flexibility in movement design. Objects "follow" the path, animating their position frame by frame according to set timings.

Types of Motion Paths:

i. Predefined: Software provides sets of built-in paths, like circles, spirals, or loops.
ii. Custom: Users draw their own paths using Bézier curves for complete control.
iii. Spline-based: Uses smooth, interconnected curves for organic movements.
iv. Motion capture: Records real-world movement and translates it into a path.
Keyframe Animation:

Motion paths work alongside keyframe animation, defining the object's location at
specific frames. Interpolation fills the gaps between keyframes, animating the movement
along the path.

Timing & Speed:

Control the animation speed by adjusting the time it takes for the object to travel the path. Use acceleration, deceleration, or ease-in/ease-out effects for natural-looking motion.
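To illustrate, here is a small sketch in plain Python with NumPy (the control points and frame count are made up) that samples an object's position along a cubic Bézier motion path while an ease-in/ease-out curve remaps the timing:

```python
import numpy as np

# Four control points of a cubic Bezier motion path (toy values).
P0, P1, P2, P3 = (np.array(p, dtype=float) for p in
                  [(0, 0), (2, 5), (6, 5), (8, 0)])

def bezier(t):
    """Position on the cubic Bezier path at parameter t in [0, 1]."""
    return ((1 - t) ** 3 * P0 + 3 * (1 - t) ** 2 * t * P1
            + 3 * (1 - t) * t ** 2 * P2 + t ** 3 * P3)

def ease_in_out(t):
    """Smoothstep easing: slow start, faster middle, slow stop."""
    return t * t * (3 - 2 * t)

FRAMES = 24
for frame in range(FRAMES + 1):
    t = ease_in_out(frame / FRAMES)  # remap time for natural motion
    x, y = bezier(t)
    print(f"frame {frame:02d}: position ({x:.2f}, {y:.2f})")
```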

Rotation & Orientation:

i. Objects can rotate and adjust their orientation while following the path.
ii. This creates realistic 3D movement, like a car turning while traveling on a road.

Additional Effects:

i. Combine motion paths with other animation techniques like scaling, opacity changes,
or particle effects.
ii. This adds depth and complexity to your animations.

Software-Specific Features:

i. Different VFX software offers various tools and options for motion paths.
ii. Explore features like path editing, mirroring, looping, and path deformation.
Applications:
Motion paths are used in diverse VFX scenarios, including:
i. Character animation (walking, running, jumping)
ii. Vehicle movement (cars, spaceships, planes)
iii. Projectile trajectories (weapons fire, magic effects)
iv. Camera movements (tracking shots, pans, zooms)
v. Abstract animations (geometric shapes, particle systems)
Principles for Effective Motion Paths:

Clarity & Purpose: Define the path's objective (e.g., natural movement, stylized effect).

Anticipation & Follow-through: Use subtle movements before and after the path for
realism.

Variation & Asymmetry: Avoid perfect symmetry for more natural dynamics.

Timing & Rhythm: Adjust speed and timing to match the object's mass, physics, and
scene context.

Ease & Flow: Create smooth transitions between path segments for visually pleasing
motion.

Online communities and forums offer valuable insights and discussions on motion paths. Experimentation and practice are key to mastering the art of creating effective motion paths!
