dmslecture5
DISTRIBUTED MULTIMEDIA SYSTEMS
the other, to seemingly blend together into a visual illusion of movement. The following
shows a few cells or frames of a rotating logo. When the images are progressively and
rapidly changed, the arrow of the compass is perceived to be spinning.
Television video builds entire frames or pictures many times every second; the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move (translate) it a few pixels for each frame.
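The per-frame translation described above can be sketched in a few lines. This is an illustrative sketch only; the step size and function names are made up for the example, not taken from any animation tool.

```python
# Sketch: moving an object a few pixels per frame to create motion.
# SPEED (pixels per frame) is an illustrative value, not a standard one.
SPEED = 4

def animate(start_x, end_x):
    """Return the object's x position for each successive frame."""
    frames = []
    x = start_x
    while x < end_x:
        frames.append(x)
        x += SPEED
    frames.append(end_x)  # land exactly on the final position
    return frames

positions = animate(0, 20)
# The object appears at x = 0, 4, 8, 12, 16, 20 over six frames.
```

Played back quickly enough, the eye blends these discrete positions into continuous motion, exactly as with the cells of the rotating logo.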
Animation Techniques
When you create an animation, organize its execution into a series of logical steps. First,
gather up in your mind all the activities you wish to provide in the animation; if it is
complicated, you may wish to create a written script with a list of activities and required
objects. Choose the animation tool best suited for the job. Then build and tweak your
sequences; experiment with lighting effects. Allow plenty of time for this phase when
you are experimenting and testing. Finally, post-process your animation, doing any
special rendering and adding sound effects.
Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each
frame, which have been replaced today by acetate or plastic. Cels of famous animated
cartoons have become sought-after, suitable-for-framing collector’s items. Cel
animation artwork begins with keyframes (the first and last frame of an action). For
example, when an animated figure of a man walks across the screen, he balances the
weight of his entire body on one foot and then the other in a series of falls and
recoveries, with the opposite foot and leg catching up to support the body.
Computer Animation
Computer animation programs typically employ the same logic and procedural concepts as
cel animation, using layer, keyframe, and tweening techniques, and even borrowing from
the vocabulary of classic animators. On the computer, paint is most often filled or drawn
with tools using features such as gradients and antialiasing. The word inks, in computer animation terminology, usually refers to special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects.
The primary difference between animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software.
In 2D animation, the animator creates an object and describes a path for the object to follow. The software takes over, actually creating the animation on the fly as the program is being viewed by the user.
In 3D animation, the animator puts his effort into creating models of individual objects and designing the characteristics of their shapes and surfaces.
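The keyframe-and-tweening idea can be sketched as simple linear interpolation: the animator supplies only the two keyframes, and the software generates the in-between frames. The function below is an illustrative sketch, not the algorithm of any particular product.

```python
# Illustrative tweening sketch: generate in-between frames from two
# keyframe positions by linear interpolation.

def tween(key_a, key_b, n_frames):
    """Return n_frames (x, y) positions from key_a to key_b, inclusive."""
    frames = []
    for i in range(n_frames):
        t = i / (n_frames - 1)            # 0.0 at the first key, 1.0 at the last
        x = key_a[0] + t * (key_b[0] - key_a[0])
        y = key_a[1] + t * (key_b[1] - key_a[1])
        frames.append((x, y))
    return frames

# Two keyframes, five frames total: the animator draws only the endpoints.
print(tween((0, 0), (100, 40), 5))
# [(0.0, 0.0), (25.0, 10.0), (50.0, 20.0), (75.0, 30.0), (100.0, 40.0)]
```

Real tools also tween shape, color, and rotation, and often use easing curves rather than a straight line, but the principle is the same.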
Kinematics
Kinematics is the study of the movement and motion of structures that have joints, such as a walking man.
Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform a transition not only between still images but often between moving images as well.
The morphed images were built at a rate of 8 frames per second, with
each transition taking a total of 4 seconds.
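The timing figures above imply 32 morph frames per transition, with each frame blending a little more of the target image into the source. The cross-dissolve below is an illustrative sketch of that blending for a single grayscale pixel; a real morph also warps the geometry of the images.

```python
# Sketch of the timing above: 8 frames per second over a 4-second
# transition gives 32 morph frames. Each frame mixes source and target
# pixel values by a weight ramping from 0 to 1.

FPS = 8
DURATION = 4                      # seconds per transition
N = FPS * DURATION                # 32 frames in the transition

def blend_weight(frame):
    """Fraction of the target image visible at a given frame (0-based)."""
    return frame / (N - 1)

def blend_pixel(src, dst, frame):
    """Cross-dissolve one grayscale pixel value for the given frame."""
    w = blend_weight(frame)
    return round((1 - w) * src + w * dst)

# Frame 0 shows the source pixel, frame 31 the target; midway frames mix.
print(blend_pixel(0, 255, 0), blend_pixel(0, 255, 31))  # 0 255
```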
Some products that offer morphing features are as follows:
Black Belt’s EasyMorph and WinImages,
Human Software’s Squizz
Valis Group's Flo, MetaFlo, and MovieFlo.
Popular animation tools include 3D Studio Max, Flash, and AnimationPro.
Video
Analog versus Digital
Digital video has supplanted analog video as the method of choice for making video for
multimedia use. While broadcast stations and professional production and postproduction
houses remain greatly invested in analog video hardware (according to Sony, there are
more than 350,000 Betacam SP devices in use today), digital video gear produces
excellent finished products at a fraction of the cost of analog. A digital camcorder
directly connected to a computer workstation eliminates the image-degrading analog-to-
digital conversion step typically performed by expensive video capture cards, and brings
the power of nonlinear video editing and production to everyday users.
NTSC
The United States, Japan, and many other countries use a system for broadcasting
and displaying video that is based upon the specifications set forth by the 1952
National Television Standards Committee. These standards define a method for
encoding information into the electronic signal that ultimately creates a television
picture. As specified by the NTSC standard, a single frame of video is made up
of 525 horizontal scan lines drawn onto the inside face of a phosphor-coated
picture tube every 1/30th of a second by a fast-moving electron beam.
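A quick back-of-the-envelope check of these NTSC figures (an illustrative calculation, not part of the standard's text):

```python
# 525 scan lines per frame, redrawn 30 times per second, means the
# electron beam sweeps roughly 15,750 lines every second.
LINES_PER_FRAME = 525
FRAMES_PER_SECOND = 30

lines_per_second = LINES_PER_FRAME * FRAMES_PER_SECOND
print(lines_per_second)  # 15750
```

(Color NTSC actually runs very slightly slower, at about 29.97 frames per second, so the true horizontal rate is marginally below this round number.)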
PAL
The Phase Alternating Line (PAL) system is used in the United Kingdom, Europe, Australia, and South Africa. PAL is an integrated method of adding color to a black-and-white television signal that paints 625 lines at a frame rate of 25 frames per second.
SECAM
The Sequential Color with Memory (SECAM) system is used in France, Russia, and a few other countries. Although SECAM is a 625-line, 50 Hz system, it
differs greatly from both the NTSC and the PAL color systems in its basic
technology and broadcast method.
HDTV
High Definition Television (HDTV) provides high resolution in a 16:9 aspect
ratio (see following Figure). This aspect ratio allows the viewing of Cinemascope
and Panavision movies. There is contention between the broadcast and computer
industries about whether to use interlacing or progressive-scan technologies.
Video Tips
A useful tool easily implemented in most digital video editing applications is "blue screen," "Ultimatte," or "chroma key" editing. Blue screen is a popular technique for
making multimedia titles because expensive sets are not required. Incredible backgrounds
can be generated using 3-D modeling and graphic software, and one or more actors,
vehicles, or other objects can be neatly layered onto that background. Applications such
as VideoShop, Premiere, Final Cut Pro, and iMovie provide this capability.
Recording Formats
S-VHS video
In S-VHS video, color and luminance information are kept on two separate tracks. The result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your ultimate goal is to have your project accepted by broadcast stations, this would not be the best choice.
Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax. Panasonic developed its own standard based on a similar technology, called "MII." Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called "Digital Betacam."
Digital Video
Full integration of motion video on computers eliminates the analog television form of video
from the multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-
ROM, or other mass-storage device, that clip can be played back on the computer’s monitor
without overlay boards, videodisk players, or second monitors. This playback of digital video is accomplished using software architecture such as QuickTime or AVI. As a multimedia producer or developer, you may need to convert video source material from its still-common analog form (videotape) to a digital form manageable by the end user's computer system, so an understanding of analog video and some special hardware must remain in your multimedia toolbox. Analog-to-digital conversion of video can be accomplished using the video overlay hardware described above, or it can be delivered direct to disk using FireWire cables. Repetitively digitizing a full-screen color video image every 1/30 second and storing it to disk or RAM severely taxes both the processing and storage capabilities of a computer.
Cartoons are composed of individual drawings that, when played in succession at a fast enough rate, give the appearance of movement. The same is true of video. Frames per second (fps) affects the size and quality of your video file; the fewer the frames that make up a video segment, the smaller the size of the file. So to save file size, why not take a 30 fps video down to 3 fps? Because this will result in a video that is extremely choppy. To put this in time terms, consider a watch with a second hand; a 1 fps video means you will see the video updated once every time the hand moves. For 3 fps, try subdividing the second into thirds by counting to 3 for each second.
What exactly is a desirable number of frames per second (fps) for a video? Anything below 10 fps will be very choppy to the end user. You should have 15 fps or greater, but do not exceed 30 fps because the benefits are usually negligible. A general rule of thumb is that if you halve the number of frames per second, you halve the file size (depending on the compression); however, this comes at the cost of video quality.
The resolution of a video means its size in terms of width and height, which is measured
in pixels. A pixel is the smallest light-emitting speck on your screen; this can be seen if
you lean in close to your monitor. The smaller the resolution of the video, the smaller the
file size, which is beneficial to people with low-bandwidth connections. However, too small a video size can be frustrating to individuals with vision problems, and the video becomes blocky if stretched. Common video resolutions include:
640 by 480,
320 by 240,
160 by 120.
Note: All these resolutions are divisible by the number 16 except the last, 160 by 120 (120 does not divide evenly by 16). Resolutions that are not divisible by 16 can cause problems for some video players and should be avoided.
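The divisibility rule is easy to check in code. This is an illustrative helper (many codecs compress the picture in 16 by 16 pixel blocks, which is where the rule comes from):

```python
# Check whether a resolution divides evenly into 16x16 blocks.
def safe_resolution(width, height):
    return width % 16 == 0 and height % 16 == 0

print(safe_resolution(640, 480))   # True
print(safe_resolution(320, 240))   # True
print(safe_resolution(160, 120))   # False -- 120 is not divisible by 16
```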
Another consideration when working with video is the color depth, which means the
number of bits that represent each color per pixel on the screen. The higher the number of
bits per pixel, the more colors that can be displayed at any one time. One bit per pixel
gives 2 colors - monochrome. Two bits per pixel gives 4 colors. Usually, you will have a
minimum of 8 bit color (256 colors), but more commonly you will use 16 bits (65,536
colors) or 24 bits (16,777,216 colors). Sometimes you may hear of 32-bit color, where the extra 8 bits are used for the transparency or visibility (sometimes called alpha) of the pixel. However, more than 24 bits of color is not really necessary, since the human eye cannot distinguish more than the roughly 16 million colors that 24 bits provide.
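All the color counts quoted above follow from one formula: n bits per pixel can encode 2 to the power n distinct colors.

```python
# Number of distinct colors representable with a given color depth.
def colors(bits_per_pixel):
    return 2 ** bits_per_pixel

print(colors(1))    # 2 (monochrome)
print(colors(2))    # 4
print(colors(8))    # 256
print(colors(16))   # 65536
print(colors(24))   # 16777216
```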
To get a better understanding of the file size for a video, calculate the figures associated with the resolution, color depth, and frames per second (fps) of a video. For this example, the video has 640 by 480 resolution, 16-bit color depth (or 2 bytes per pixel), and is set to play at 30 frames per second.
1. Multiply the resolution of the screen to get the total number of pixels on the
screen. For this example, 640 times 480 gives us 307,200 pixels.
2. Multiply the total number of pixels by the number of bytes for color depth to get the amount of information per frame. For this example, 307,200 pixels times 2 bytes (16-bit color) gives us 614,400 bytes (about 614 KB) per frame.
3. Multiply the amount of information per frame by the number of frames per second to get the amount of information per second. For this example, 614,400 bytes times 30 frames per second gives us 18,432,000 bytes (about 18 MB) of information per second.
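The three steps above can be wrapped into a single function. These are raw (uncompressed) figures; real video files are far smaller once a codec is applied.

```python
# Raw (uncompressed) video data rate, following the three steps above.
def raw_video_rate(width, height, bytes_per_pixel, fps):
    """Uncompressed video data rate in bytes per second."""
    pixels = width * height                  # step 1: pixels per frame
    frame_bytes = pixels * bytes_per_pixel   # step 2: bytes per frame
    return frame_bytes * fps                 # step 3: bytes per second

rate = raw_video_rate(640, 480, 2, 30)
print(rate)  # 18432000 -- about 18 MB of data every second
```

Halving the fps, resolution, or color depth each halves this figure, which is the rule of thumb mentioned earlier.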
Video Compression
To digitize and store a 10-second clip of full-motion video in your computer requires
transfer of an enormous amount of data in a very short amount of time. Reproducing just one frame of digital component video at 24 bits requires almost 1 MB of computer data; 30 seconds of video will fill a gigabyte hard disk. Full-size, full-motion video
requires that the computer deliver data at about 30MB per second. This overwhelming
technological bottleneck is overcome using digital video compression schemes or codecs
(coders/decoders). A codec is the algorithm used to compress a video for delivery and
then decode it in real-time for fast playback. Real-time video compression algorithms
such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak, Sorenson, ClearVideo, RealVideo, and
VDOwave are available to compress digital video information. Compression schemes use
Discrete Cosine Transform (DCT), an encoding algorithm that quantifies the human eye’s
ability to detect color and image distortion. All of these codecs employ lossy compression
algorithms. In addition to compressing video data, streaming technologies are being
implemented to provide reasonable quality low-bandwidth video on the Web. Microsoft,
RealNetworks, VXtreme, VDOnet, Xing, Precept, Cubic, Motorola, Viva, Vosaic, and
Oracle are actively pursuing the commercialization of streaming technology on the Web.
QuickTime, Apple’s software-based architecture for seamlessly integrating sound,
animation, text, and video (data that changes over time), is often thought of as a
compression standard, but it is really much more than that.
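A minimal sketch of the Discrete Cosine Transform (DCT-II) mentioned above shows why it is useful for compression: for smooth image data, it concentrates most of the signal's energy into a few low-frequency coefficients, so the remaining coefficients can be coarsely quantized or dropped. This is an illustrative one-dimensional version; real codecs apply a 2D DCT to 8 by 8 pixel blocks.

```python
import math

# One-dimensional DCT-II (unnormalized), the transform at the heart of
# JPEG- and MPEG-style compression.
def dct(signal):
    N = len(signal)
    return [sum(signal[n] * math.cos(math.pi / N * (n + 0.5) * k)
                for n in range(N))
            for k in range(N)]

# A slowly varying 8-sample block: nearly all the energy lands in the
# first (DC) coefficient, and the higher-frequency terms are small --
# which is exactly what makes the block easy to compress.
coeffs = dct([10, 11, 12, 13, 13, 12, 11, 10])
print(round(coeffs[0], 3))  # 92.0
```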
MPEG
The MPEG standard has been developed by the Moving Picture Experts Group, a working group convened by the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) to create standards for the digital
representation of moving pictures and associated audio and other data. MPEG1 and
MPEG2 are the current standards. Using MPEG1, you can deliver 1.2 Mbps of video
and 250 Kbps of two-channel stereo audio using CD-ROM technology. MPEG2, a
completely different system from MPEG1, requires higher data rates (3 to 15 Mbps) but
delivers higher image resolution, picture quality, interlaced video formats,
multiresolution scalability, and multichannel audio features.
DVI/Indeo
DVI is a proprietary, programmable compression/decompression technology based on the Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integration) chips to separate the image processing and display functions. Two levels of
compression and decompression are provided by DVI: Production Level Video (PLV)
and Real Time Video (RTV). PLV and RTV both use variable compression rates. DVI’s
algorithms can compress video images at ratios between 80:1 and 160:1. DVI will play
back video in full-frame size and in full color at 30 frames per second.
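Rough arithmetic puts the quoted compression ratios in perspective. The 640 by 480, 24-bit, 30 fps figures below are illustrative assumptions for a full-frame, full-color source, not values stated by the DVI documentation:

```python
# Uncompressed 640x480, 24-bit (3-byte), 30 fps video:
RAW = 640 * 480 * 3 * 30          # bytes per second, about 27.6 MB/s

# At DVI's quoted compression ratios the rate drops dramatically:
for ratio in (80, 160):
    print(ratio, RAW // ratio)    # 80 -> 345600 B/s, 160 -> 172800 B/s
```

At 160:1 the stream fits comfortably within the transfer rates of early-1990s CD-ROM drives, which is what made full-motion playback from disc feasible.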