Multimedia Systems - SPCA211
MASTER OF
COMPUTER APPLICATIONS
SECOND YEAR
FOURTH SEMESTER
CORE PAPER - XX
MULTIMEDIA SYSTEMS
WELCOME
Warm Greetings.
I invite you to join the Choice Based Credit System (CBCS) under the semester system and gain rich knowledge at your own pace, at your will and wish. Choose the right courses at the right time to raise your flag of success. We always encourage and enlighten you to excel, and we empower you. We are the cross-bearers who will make you a torch-bearer with a bright future.
DIRECTOR
MASTER OF COMPUTER APPLICATIONS
CORE PAPER - XX
MULTIMEDIA SYSTEMS
SECOND YEAR - FOURTH SEMESTER
COURSE WRITER
Dr. R. Latha
Professor & Head,
Department of Computer Science and Applications,
St.Peter's Institute of Higher Education & Research,
Avadi, Chennai-600 054.
Dr. R. Parameswari
Associate Professor,
Department of Computer Science,
School of Computing Sciences,
Vels Institute of Science, Technology & Advanced Studies,
Pallavaram, Chennai - 600 117.
Dr. S. Sasikala
Associate Professor in Computer Science
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.
MASTER OF COMPUTER APPLICATIONS
SECOND YEAR
FOURTH SEMESTER
Core Paper - XX
MULTIMEDIA SYSTEMS
SYLLABUS
Objective of the course

Unit 2: Introduction to making multimedia – the requirements to make good multimedia, Multimedia skills and training, Training opportunities in Multimedia, Motivation for multimedia usage, Frequency domain analysis, Application Domain.
Unit 3: Multimedia – making it work – multimedia building blocks – Text, Sound, Images, Animation and Video, Digitization of Audio and Video objects, Data Compression: Different algorithms concerning text, audio, video, images, etc., Working Exposure on Tools like
Unit 4: Multimedia and the Internet: History, Internetworking, Connections, Internet
Services, The World Wide Web, Tools for the WWW – Web Servers, Web Browsers,
Web page makers and editors, Plug-Ins and Delivery Vehicles, HTML, VRML, Designing for
the WWW – Working on the Web, Multimedia Applications – Media Communication, Media
Unit 5: Multimedia – looking towards the future: Digital Communication and New Media, Interactive Television, Digital Broadcasting, Digital Radio, Multimedia Conferencing, Assembling and delivering a project – planning and costing, Designing and Producing, content
Recommended Texts:
2. T. Vaughan, 1999, Multimedia: Making it work, 4th Edition, Tata McGraw Hill, New
Delhi.
3. K. Andleigh and K. Thakkar, 2000, Multimedia System Design, PHI, New Delhi.
Reference Books
1) http://www.cikon.de/Text_EN/Multimed.html
MASTER OF COMPUTER APPLICATIONS
SECOND YEAR
FOURTH SEMESTER
Core Paper - XX
MULTIMEDIA SYSTEMS
SCHEME OF LESSONS
LESSON 1
AN OVERVIEW OF MULTIMEDIA
Structure
1.1 Introduction
1.3 Multimedia
1.10 Summary
1.1 Introduction
Multimedia has become an inevitable part of any presentation. It has found a variety of
applications right from entertainment to education. The evolution of internet has also increased
the demand for multimedia content.
As the name suggests, multimedia is a set of more than one media element used to produce a concrete and more structured way of communication. In other words, multimedia is the simultaneous use of data from different sources. These sources in multimedia are known as media elements. With rapidly growing and changing information technology, multimedia has become a crucial part of the computing world. Its importance has been realized in almost all walks of life, be it education, cinema, advertising or fashion. Throughout the 1960s, 1970s and 1980s, computers were restricted to dealing with two main types of data: words and numbers. But the cutting edge of information technology introduced faster systems capable of handling graphics, audio, animation and video, and the entire world was taken aback by the power of multimedia.
In this lesson, the preliminary concepts of multimedia and its various benefits and applications are discussed. After going through this lesson, the reader will be able to:
i) Define Multimedia
1.3 Multimedia
Fig.1.1 Multimedia
Storage Media: Media in computer storage: floppy, CD, DVD, HD, USB
Definitions
Multimedia is media that uses multiple forms of information content and information processing.
Multimedia means that computer information can be represented through audio, video,
and animation in addition to traditional media (i.e., text, graphics/drawings, and images).
Multimedia is the field concerned with the computer controlled integration of text, graphics,
drawings, still and moving images (Video), animation, audio, and any other media where,
every type of information can be represented, stored, transmitted and processed digitally.
Multimedia: refers to various information forms such as text, image, audio, video, graphics,
and animation in a variety of application environments.
1. Digitized Computing: All media including audio/video are represented in digital format.
3. Interactive Computing: It is possible to affect the information received, and to send one's own information, in a non-trivial way beyond start, stop and fast forward.
4. Integrated Computing: The media are treated in a uniform way, presented in an organized
way, but are possible to manipulate independently.
Hypertext is text which contains links to other texts. The term was coined by Ted Nelson around 1965 (Fig. 1.2).
Hypermedia is not constrained to be text-based (Fig. 1.3). It can include other media,
e.g., graphics, images, and especially continuous media - sound and video.
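As a small illustration of the idea, a hypertext collection can be modeled as a directed graph of pages, where each page carries links to other pages. The following Python sketch is purely illustrative; the page names and link structure are invented for the example.

    # A minimal sketch of hypertext as a directed graph of pages.
    # Page names and links are invented for illustration.
    pages = {
        "home": ["history", "definitions"],
        "history": ["home"],
        "definitions": ["home", "history"],
    }

    def follow(page):
        """Return the pages reachable in one click from 'page'."""
        return pages.get(page, [])

    print(follow("home"))  # ['history', 'definitions']

Hypermedia generalizes this picture: the nodes need not be text but can be images, sound or video.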
4. Adobe Flash
Multimedia means that computer information can be represented through audio, graphics,
image, video and animation in addition to traditional media (text and graphics). Hypermedia
can be considered as one type of multimedia application.
1. Text
2. Graphics
3. Animation
4. Video
5. Audio
Text
— It contains alphanumeric and some other special characters (Fig. 1.4). A keyboard is usually used for input of text; however, many applications also provide internal (inbuilt) features to include such text.
Graphics
Animation
- Flipping through a series of still images. It is a series of graphics that create an illusion of
motion (Fig. 1.6).
Audio
- This technology records, synthesizes, and plays audio (sound) (Fig. 1.7). There are many learning courses and different instructions that can be delivered through this medium appropriately.
Video
— This technology records, synthesizes, and displays images (known as frames) in sequence at a fixed speed so that the result appears to move as a fully developed video. In order to watch a video without any interruption, the video device must display 25 to 30 frames/second.
— Photographic images that are played back at speeds of 15 to 30 frames per second and
that provide the appearance of full motion (Fig. 1.8).
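These frame rates translate directly into data volume. The following rough sketch (assuming an uncompressed 640 x 480 frame with 24-bit colour; the figures are illustrative, not a standard) estimates the raw data rate of such video:

    # Rough estimate of the raw (uncompressed) data rate of video.
    # Resolution, colour depth and frame rate are illustrative assumptions.
    width, height = 640, 480        # pixels per frame
    bytes_per_pixel = 3             # 24-bit colour
    fps = 25                        # lower end of the 25-30 frames/second range

    frame_size = width * height * bytes_per_pixel   # bytes per frame
    data_rate = frame_size * fps                    # bytes per second
    print(frame_size / 1e6, "MB per frame")         # ~0.92 MB
    print(data_rate / 1e6, "MB per second")         # ~23.04 MB/s

The size of such raw streams is one reason the data compression techniques mentioned in the syllabus matter so much for video.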
Categories of Multimedia
Linear content progresses without any navigation control for the viewer, such as a cinema presentation.
Non-linear content offers user interactivity to control progress as used with a computer
game or used in self-paced computer based training.
It has a fast Central Processing Unit (CPU) to process large amounts of data.
It has a huge storage capacity and large memory that helps in running data-heavy programs.
It has a high capacity graphic card that helps in displaying graphics, animation, video, etc.
With all these features (discussed above), a computer system is known as a high-end multimedia computer system.
However, not all the features listed above are essential for every multimedia computer system; rather, the features of a multimedia computer system are configured as per the needs of the respective user.
Media are divided into two types with respect to time in their representation space:
Discrete (time-independent) media: Information is expressed only in its individual value. E.g.: text, image, etc.
Continuous (time-dependent) media: Information is expressed not only in its individual value, but also by the time of its occurrence. E.g.: sound and video.
A multimedia system is defined by the computer-controlled, integrated production, manipulation, presentation, storage and communication of independent information, which is encoded through at least one continuous and one discrete medium.
1. The Perception Media
2. The Representation Media
3. The Presentation Media
4. The Storage Media
5. The Transmission Media
Perception Media
Perception media help humans to sense their environment. The central question is how humans perceive information in a computer environment. The answer is through seeing and hearing.
Seeing:
o For the perception of information through seeing, the usual media such as text, image and video are used.
Hearing:
o For the perception of information through hearing, media such as music, noise and speech are used.
Representation Media
ii. Graphics are coded according to the CEPT or CAPTAIN videotext standard.
iv. Audio/video sequences can be coded in different TV standard formats (PAL, NTSC, SECAM) and stored in the computer in MPEG format.
Presentation Media
Presentation media refers to the tools and devices for the input and output of information, through which information is delivered by the computer (output) and introduced to the computer (input).
Storage Media
Storage media refers to the data carriers which enable the storage of information. The information can be stored on a hard disk, CD-ROM, etc.
Transmission Media
Transmission media are the different information carriers that enable continuous data transmission. The information can be transmitted over co-axial cable and fiber optics, as well as through the air.
Not every arbitrary combination of media justifies the usage of the term multimedia.
Combination of media:
The computer is an ideal tool for multimedia applications since it integrates many devices together.
Independence:
An important aspect of different media is their level of independence from each other. In general there is a requirement for independence of different media, but multimedia may require several levels of independence. E.g., a computer-controlled video recorder stores audio and video information; there is an inherently tight connection between the two types of media, since both are coupled together through the common storage medium of the tape. On the other hand, for the purpose of presentation, the combination of DAT (digital audio tape recorder) signals and computer text satisfies the requirement for media independence.
1. Application domain
2. System domain
3. Device domain
1. Application domain provides functions to the user to develop and present multimedia projects. This includes software tools and a multimedia project development methodology.
2. System domain includes all support for using the functions of the device domain, e.g. operating systems, communication systems (networking) and database systems.
3. Device domain provides basic concepts and skills for processing various multimedia elements and for handling physical devices.
a) MIDI
b) Hyperlink
c) WYSIWYG
d) Multimedia
a) Frames
b) Signals
c) Packets
d) Slots
3. Images that are available without copyright restrictions are called ____________
4. In a multimedia project, a storyboard details the text, graphics, audio, video, animation, interactivity, and other elements that should be used in each screen of the project: Say TRUE or FALSE?
a) GIF animation.
b) JPG animation.
c) TIF animation.
d) Tweening.
One of the earliest and best-known examples of multimedia was the video game Pong.
Developed in 1972 by Nolan Bushnell (the founder of a then new company called Atari), the
game consisted of two simple paddles that batted a square “ball” back and forth across the
screen, like tennis. It started as an arcade game, and eventually ended up in many homes. In 1976, another revolution was about to start as friends Steve Jobs and Steve Wozniak founded a startup company called Apple Computer. A year later they unveiled the Apple II, the first computer to use color graphics.
The computer revolution moved quickly: 1981 saw IBM's first PC, and in 1984 Apple released the Macintosh, the first commercially successful computer system to use a Graphical User Interface (GUI). The Macintosh also popularized the mouse, which would forever change the way people interact with computers. In 1985, Microsoft released the first version of its Windows operating system. That same year, Commodore released the Amiga, a machine which many experts consider to be the first multimedia computer due to its advanced graphics processing power and innovative user interface. The Amiga did not fare well over the years, though, and Windows has become the standard for desktop computing. Both the Windows and Macintosh operating systems paved the way for the lightning-fast developments in multimedia that were to come. Since both Windows and Mac OS handle graphics and sound – something that was previously handled by individual software applications – developers are able to create programs that use multimedia to more powerful effect.
One company that has played an important role in multimedia from its very inception is Macromedia (formerly called Macromind). In 1988, Macromedia released its landmark Director
program, which allowed everyday computer users to create stunning, interactive multimedia
presentations. Today, Macromedia Flash drives most of the animation and multimedia you see
on the Internet, while Director is still used to craft high-end interactive productions. Each new
development of each passing year is absorbed into next year’s technology, making the
multimedia experience, better, faster, and more interesting.
It can contain unique mixes of images, sounds, text, video, and animations controlled by
an authoring system to provide unlimited user interaction.
Digital Versatile Disc (DVD) technology has come into use, offering increased capacity over the CD-ROM.
Now that telecommunications are global, information can be received online as distributed resources on a data highway, where payment is made to acquire and use multimedia-based information.
1. Education
2. Training
3. Entertainment
4. Advertisement
5. Presentation
6. Business Communication
Multimedia in business
Multimedia in Schools
Multimedia provides radical changes in the teaching process, as smart students discover
they can go beyond the limits of traditional teaching methods.
Teachers may become more like guides and mentors along a learning path, not the
primary providers of information and understanding.
It provides physicians with over 100 case presentations and gives cardiologist, radiologist,
medical students, and fellows an opportunity for in-depth learning of new clinical techniques.
Multimedia at home
From gardening to cooking to home design, re-modeling multimedia has entered the
home.
In hotels, train stations, shopping malls, museums and grocery stores, multimedia will
become available at stand-alone terminals or kiosks to provide information and help.
Such installations reduce demand on traditional information booths and personnel, and they can work round the clock, even when live help is off duty.
Hotel kiosks list nearby restaurants, maps of the city, airline schedules, and provide
guest services such as automated checkouts.
2. Teachers can use multimedia presentations to make lessons more interesting by using
animations to highlight or demonstrate key points.
3. A multimedia presentation can also make it easier for pupils to read text rather than trying
to read a teacher’s writing on the board.
4. Programs which show pictures and text whilst children are reading a story can help them
learn to read; these too are a form of multimedia presentation.
6. Some businesses use multimedia for training where CDROMs or on-line tutorials allow
staff to learn at their own speed, and at a suitable time to the staff and the company.
7. Another benefit is that the company does not have to pay the additional expenses of an employee attending a course away from the workplace.
8. People use the Internet for a wide range of reasons, including shopping and finding out
about their hobbies.
9. The Internet has many multimedia elements embedded in web pages and web browsers
support a variety of multimedia formats.
10. Many computer games use sound tracks, 3D graphics and video clips.
Let us now see the different fields where multimedia is applied. The fields are described in brief below:
1. Presentation
2. E-book
3. Digital Library
4. E-learning
Today, most of the institutions (public as well as private both) are using such technology
to educate people.
5. Movie making
Most of the special effects seen in movies are possible only because of multimedia technology.
6. Video games
Video games are one of the most interesting creations of multimedia technology. Video
games fascinate not only the children but adults too.
7. Animated films
Along with video games, animated film is another great source of entertainment for
children.
8. Multimedia conferencing
People can arrange personal as well as business meetings online with the help of
multimedia conferencing technology.
9. E-shopping
2. It is multi-sensorial. It uses many of the user's senses while making use of multimedia, for example hearing, seeing and talking.
3. It is integrated and interactive. All the different mediums are integrated through the
digitization process. Interactivity is heightened by the possibility of easy feedback.
4. It is flexible. Being digital, this media can easily be changed to fit different situations and
audiences.
5. It can be used for a wide variety of audiences, ranging from one person to a whole group.
2. It takes time to compile. Even though it is flexible, it takes time to put the original draft
together.
4. Too much makes it impractical. Large files like video and audio have an effect on the time it takes for your presentation to load. Adding too much can mean that you have to use a larger computer to store the files.
a) high storage
a) cost
b) adaptability
c) usability
d) relativity
8. A graphic image file name is tree.eps. This file is a bitmap image: Say TRUE or FALSE.
9. Multimedia files stored on a remote server are delivered to a client across the network
using a technique known as :
a) Download
b) Streaming
c) Flowing
d) Leaking
1.10 Summary
Multimedia is simply multiple forms of media integrated together. Media can be text,
graphics, audio, animation, video, data, etc.
Multimedia application development is the creation of exciting and innovative multimedia systems that communicate information customized to the user in a non-linear, interactive format.
2. a) Frames
3. Clipart
4. True
5. GIF Animation
7. a) Cost
8. False
9. b) Streaming
LESSON 2
INTRODUCTION TO MAKING MULTIMEDIA
Structure
2.1 Introduction
2.9 Summary
2.1 Introduction
The basic stages of a multimedia project are planning and costing, design and production,
testing and delivery. Knowledge of hardware and software, as well as creativity and organizational
skills are essential for creating a high-quality multimedia project. In any project, including
multimedia, team building activities improve productivity by fostering communication and a
work culture that helps its members work together. Motivation is one of the primary factors that
influence the effectiveness of instruction. Multimedia provides an opportunity to incorporate many motivational factors. Motivating a student means the student is excited and will maintain interest in the activity or subject. Frequency domain analysis replaces the measured signal
with a group of sinusoids which, when added together, produce a waveform equivalent to the
original. The relative amplitudes, frequencies, and phases of the sinusoids are examined.
These stages are sequential. Before beginning any work, everybody involved in the project should agree on what is to be done and why. Lack of agreement can create misunderstandings which can have grim effects in the production process. Initial agreements therefore give a reference point for subsequent decisions and assessments. After the "why" is clarified, what the multimedia product has to do in order to fulfill its purpose is decided. The "why" and "what" determine all the "how" decisions, including storyboards, flowcharts, media content, etc.
Pre-Production
Idea or Motivation
During the initial "why" phase of production, the first question the production team asks is: why do you want to develop a multimedia project?
Is multimedia the best option, or would a print product be more effective?
It takes several brainstorming sessions to come up with an idea. Then the production
team decides what the product needs to accomplish in the market. It should keep in account
what information and function they need to provide to meet desired goals. Activities such as
developing a planning document, interviewing the client and building specifications for production
help in doing so.
Target Audience
The production team thinks about target age groups, and how it affects the nature of the
product. It is imperative to consider the background of target customers and the types of
references that will be fully understood. It is also important to think about any special interest
groups to which the project might be targeted towards, and the sort of information those groups
might find important.
The production team decides the medium through which the information reaches the
audience. The information medium can be determined on the basis of what types of equipment
the audience have and what obstacles must be overcome. Web, DVDs and CD-ROMs are
some of the common delivery mediums. The production team also ascertains what authoring tools should be used in the project, and what media elements (graphics, audio, video, text, animation, etc.) it will contain.
Planning
Planning is the key to the success of most business endeavors, and this is definitely true
in multimedia. This is because a lack of planning in the early stages of a multimedia project can prove costly later. The production team works together and plans how the project will appear and how far it
will be successful in delivering the desired information. There is a saying, “If you fail to plan,
you are planning to fail.”
Group discussions take place for strategic planning and the common points of discussions
are given below:
Planning also includes creating and finalizing flowchart and resource organization in
which the product’s content is arranged into groups. It also includes timeline, content list,
storyboard, finalizing the functional specifications and work assignments. Detailed timelines
are created and major milestones are established for the difficult phases of the project. The
work is then distributed among various roles such as designers, graphic artists, programmers,
animators, audiographers, videographers, and permission specialists.
Production
In the production stage all components of planning come into effect. If pre-production
was done properly, all individuals will carry out their assigned work according to the plan.
During this phase graphic artists, instructional designers, animators, audiographers and
videographers begin to create artwork, animation, scripts, video, audio and interface. The
production phase runs smoothly if the project manager has distributed responsibilities to the right individuals and created a practical and achievable production schedule. Given below are some
of the things that people involved in production have to do:
Scriptwriting
The scripts for the text, transitions, audio narrations, voice-overs and video are written.
Existing material also needs to be rewritten and reorganized for an electronic medium. Then
the written material is edited for readability, grammar and consistency.
Art
Illustrations, graphics, buttons, and icons are created using the prototype screens as a
guide. Existing photographs, illustrations, and graphics are digitized for use in an electronic
medium. Electronically generated art as well as digitized art must be prepped for use; number
of colors, palettes, resolution, format, and size are addressed.
The 3D artwork is created, rendered, and then prepared for use in the authoring tool. The
3D animations require their own storyboards and schedules.
Authoring
All the pieces come together in the authoring tool. Functionality is programmed, and 2D
animation is developed. From here, the final working product is created. Every word on the
screen is proofread and checked for consistency of formatting. In addition, the proofreader
reviews all video and audio against the edited scripts.
The edited scripts are used to plan the budget, performers and time schedules, and then the shoot is scheduled, followed by recording.
Quality Control
Quality control goes on throughout the process. The storyboards are helpful for checking
the sequencing. In the final step, checks should be done for the overall content, functionality and usability of the product. The main goal of production is to make the next stage, post-production, run smoothly and flawlessly.
Post-Production
Testing
Mastering
The original files, including audio, video, and the native software formats, are archived
for future upgrades or revisions. The duplicates are created from the original and packaged
accordingly.
Marketing is significant to the success of a product. The survival of a company and its
products depends greatly on the product reaching the maximum number of audience. Then
comes the final step in the process which is distribution of the multimedia project.
Good Multimedia
Many multimedia systems are too passive: users click and watch.
For fully interactive systems, designers need a clear picture of what happens as the user interacts.
All through the creation and development of a multimedia project, the team members
must communicate with each other on a constant basis. They must also share the same goals and maintain consistency in the design of the end product.
Depending upon the size of a project, one specialist might be required to play more than
one role, or the roles might be extended to different departments. Every specialist team member
is not only required to have an extensive background in their fields but also be a fast learner
capable of picking up new skills. Knowledge and experience in other fields might be an added
advantage.
Every team member plays a significant role in the design, development and production
of a multimedia project.
Team members
A multimedia team consists of the following members:
Project manager
Multimedia designer
Interface designer
Multimedia programmer
Computer programmers
Writer
Audio specialist
Video specialist
Permission specialist
Project Manager
• Make schedules.
Multimedia designer
– This team consists of graphics designers, illustrators, animators, and image processing
specialists, who deal with visuals, thereby making the project appealing and aesthetic.
This team is responsible for:
Instructional designers, who make sure that the subject matter is presented clearly
for the target audience.
Interface designers, who devise the navigational pathways and content maps.
Interface Designer
An interface designer is responsible for:
Creating a software device that organizes content. It allows users to access or modify
content, and presents that content on the screen.
Multimedia Writer
A multimedia writer is responsible for:
Video Specialist
A video specialist needs to understand:
How to edit footage down to a final product using a digital non-linear editing system (NLE).
Audio Specialist
An audio specialist is responsible for:
Multimedia Programmer
A multimedia programmer is responsible for:
Integrating all the multimedia elements into a seamless project, using an authoring system or a programming language.
Advertising: Imaginative and attractive advertisements can be made with the combination
of text, pictures, audio and video. Multimedia designers have a big role in creation of
advertisements. A product is well received by a customer if it is supported by a good
multimedia advertisement campaign.
Gaming and Graphic Design: They are perhaps making the maximum use of multimedia. No computer game is complete without elaborate computer graphics, be it an arcade game, a strategy-based game or a sports game. A computer game with good graphics is more enticing to play than a game with poor graphics. Multimedia designers have a great role in making a game successful.
Product Design: Multimedia can be used effectively for designing a product. First its
prototype can be made before actually making the product. Multimedia programmers
can be employed in this work.
Education and Training: This is perhaps the need of the hour for multimedia. Topics which are difficult to understand by reading text can be made simple with the help of multimedia. The time is coming when multimedia lessons will take the place of classroom teaching. Students can repeat a lesson as many times as needed until they understand the concept. Multimedia designers have a big role in all this work.
Leisure: Multimedia can also be used for entertainment. Most of the cartoon films are
made with the help of multimedia. It is used in scientific movies to give special effects like
animation, morphing etc. Actors created by combining different frames with the help of
multimedia can replace the actual actors.
With this Multimedia, students will gain creative skills and technological knowledge leading
to many exciting career opportunities including in the fields of electronic publishing, web design,
Since one of the major goals of providing multimedia instruction is to motivate students, there is a need to examine motivational elements. There are four major motivation theories: expectancy-value theory, self-efficacy, goal-setting and task motivation, and self-determination theory.
Human-based system (teacher instructor, tutor, role-plays, group activities, field trips,
etc.)
Visual-based system (books, job aids, charts, graphs, maps, figures, transparencies, slides,
etc.)
2. Model-Facilitated Learning Environments: How students will act and learn in a particular environment depends on how the instructional designer creates the environment that maximizes their learning potential, considering the interrelationships between the learning experience, the technology, cognition, and other related issues of the learner.
3. Self-Regulated Learning (SRL): SRL competence has been promoted through reflection on cognitive, meta-cognitive, emotional and motivational aspects of learning, as well as through modeling teaching practices that tend to shift the locus of control from trainers to trainees.
7. E-Learning with Wikis, Weblogs and Discussion Forums: This explores how social software tools can offer support for innovative learning methods and instructional design in general, and those related to self-organized learning in an academic context in particular.
8. Emerging Edtech: Design principles are universal and may be translated onto the newest trends and emergent technologies. They are used to guide evaluation, instructional design efforts, or best practice models for exemplary use of educational technologies in the classroom.
10. Learning Activities Model: The design of learning is probably more accurately described as the design of learning activities, as it is the activities that are designable, in contrast to learning, which is the desired outcome of the activities.
a. Network
c. Good Idea
d. Programing Knowledge
2. The people who weave multimedia into meaningful tapestries are called _____.
a. Programmers
b. Multimedia Developers
c. Software Engineers
d. Hardware Engineers
3. When the viewer of a multimedia project is able to control what elements are delivered and when, it is called _____
4. The software vehicle, the messages, and the content presented on a computer, television
screen PDA or cell phone together constitute a _______________
5. The most precious asset you can bring to the multimedia workshop is your ____.
a. Creativity
b. Programming Skill
c. Musical Ability
6. Before beginning a multimedia project, you must first develop a sense of its ____.
b. programming knowledge
c. implementing skills
The analogue, discrete and digital signals can be referred to as the baseband signal.
‘Baseband’ is used to describe the band of frequencies representing the signal of interest
as delivered by the source of information.
Any periodic signal can be synthesized by combining a series of cosine and sine signals of different harmonics. By summing suitable amplitudes of the 1st, 3rd, 5th and subsequent odd harmonics of a sine signal, an odd-functioned square wave can be synthesized.
If the period T of the synthesized signal becomes infinitely large then the difference in
frequency between the nth and the (n+1)th frequency components becomes infinitely small
and a continuous frequency domain representation is obtained.
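A minimal sketch of this synthesis (in Python with NumPy; the number of harmonics kept is an arbitrary choice for illustration) sums odd sine harmonics towards a square wave:

    import numpy as np

    # Synthesize an approximate square wave from odd sine harmonics
    # (1st, 3rd, 5th, ...). The Fourier series of a unit square wave is
    # (4/pi) * sum over odd n of sin(2*pi*n*f*t) / n.
    f = 1.0                             # fundamental frequency (Hz)
    t = np.linspace(0.0, 2.0, 1000)     # two periods of the fundamental
    square = np.zeros_like(t)
    for k in range(1, 20):              # 19 odd harmonics (arbitrary cut-off)
        n = 2 * k - 1
        square += (4 / np.pi) * np.sin(2 * np.pi * n * f * t) / n

    # The sum oscillates around +1 and -1, approaching a square wave
    # as more harmonics are added.
    print(square.min(), square.max())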
In the spatial domain, we deal with images as they are: the values of the pixels of the image change with respect to the scene. In the frequency domain, by contrast, we deal with the rate at which the pixel values are changing in the spatial domain.
In the simple spatial domain, we deal directly with the image matrix, whereas in the frequency domain we deal with the image's transform.
Frequency Domain
We first transform the image to its frequency distribution. Then our black box system performs whatever processing it has to perform, and the output of the black box in this case is not an image, but a transformation. After performing the inverse transformation, the result is converted into an image which can then be viewed in the spatial domain.
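A minimal sketch of this round trip, assuming NumPy's 2-D FFT as the transform and a small synthetic array standing in for a real image:

    import numpy as np

    # A synthetic 8x8 "image" of alternating columns; a real image
    # array would be used in practice.
    image = np.tile([0.0, 1.0], (8, 4))

    # Transform the image to its frequency distribution.
    spectrum = np.fft.fft2(image)

    # The "black box" would process 'spectrum' here (e.g. filtering).
    # The inverse transform converts the result back into an image
    # that can be viewed in the spatial domain.
    restored = np.fft.ifft2(spectrum).real
    print(np.allclose(image, restored))  # True: the round trip is lossless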
Frequency components
Any image in the spatial domain can be represented in the frequency domain. But what do these frequencies actually mean?
Application domain — provides functions to the user to develop and present multimedia projects. This includes software tools and a multimedia project development methodology.
System domain — includes all support for using the functions of the device domain, e.g., operating systems, communication systems (networking) and database systems.
Device domain — provides basic concepts and skills for processing various multimedia elements and for handling physical devices.
2.9 Summary
Every team member should perform their own responsibilities, as well as those of others if the need arises.
The diverse skills required to create a multimedia project are called the multimedia skillset.
Team building refers to activities that help a group and its members function at optimum
levels of performance.
Roles and responsibilities are assigned to each team member in a multimedia project.
2. b. Multimedia Developers
3. Interactive Multimedia
4. Multimedia Project
5. a. Creativity
LESSON 3
MULTIMEDIA HARDWARE AND SOFTWARE
Structure
3.1 Introduction
3.6 Summary
3.1 Introduction
Multimedia requires a variety of input devices to transmit data and instructions to a system
unit for processing and storage. Keyboards and pointing devices, such as trackballs, touch
pads, and touch screens, are central to interacting with graphical user interface (GUI) applications
and operating system software. Other devices are necessary to input sound, video, and a wide
array of images for multimedia applications. Some of these, such as microphones, are built
into the system. Others, such as scanners, cameras, sound recorders, and graphics tablets,
are plugged into USB or FireWire interface ports. Output devices include screen displays,
audio speakers or headsets, and hard copy. The quality of output for display, sound, and print
is dependent on the performance features of these devices.
This lesson aims at introducing the multimedia hardware used for providing interactivity
between the user and the multimedia software.
Know common input devices and their roles in getting different types of information into the computer.
Know output devices and the way they make computers more useful.
Input devices are under direct control by a human user, who uses them to communicate
commands or other information to be processed by the computer, which may then transmit
feedback to the user through an output device. Input and output devices together make up the
hardware interface between a computer and the user or external world. Typical examples of
input devices include keyboards and mice. However, there are others which provide many
more degrees of freedom. In general, any sensor which monitors, scans for and accepts
information from the external world can be considered an input device, whether or not the
information is under the direct control of a user.
The modality of input (e.g. mechanical motion, audio, visual, etc.)
whether the input is discrete (e.g. key presses) or continuous (e.g. a mouse’s position,
though digitized into a discrete quantity, is high-resolution enough to be thought of as
continuous)
the number of degrees of freedom involved (e.g. many mice allow 2D positional input, but some devices allow 3D input, such as the Logitech Magellan Space Mouse).
Pointing devices, which are input devices used to specify a position in space, can further be classified according to:
Whether the input is direct or indirect. With direct input, the input space coincides with
the display space, i.e. pointing is done in the space where visual feedback or the cursor
appears. Touchscreens and light pens involve direct input. Examples involving indirect
input include the mouse and trackball.
Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g.
with a mouse that can be lifted and repositioned)
Note that direct input is almost necessarily absolute, but indirect input may be either
absolute or relative. For example, digitizing graphics tablets that do not have an embedded
screen involve indirect input, and sense absolute positions and are run in an absolute
input mode, but they may also be set up to simulate a relative input mode where the
stylus or puck can be lifted and repositioned.
(i) Keyboard
A keyboard is the most common method of interaction with a computer. Keyboards provide
various tactile responses (from firm to mushy) and have various layouts depending upon your
computer system and keyboard model. Keyboards are typically rated for at least 50 million
cycles (the number of times a key can be pressed before it might suffer breakdown).
The most common keyboard for PCs is the 101 style (which provides 101 keys), although many styles are available with more or fewer special keys, LEDs, and other features, such as a plastic membrane cover for industrial or food-service applications or flexible "ergonomic" styles. Macintosh keyboards connect to the Apple Desktop Bus (ADB), which manages all forms of user input, from digitizing tablets to mice.
Computer keyboard
Keyer
Chorded keyboard
LPFK
While the most common pointing device by far is the mouse, many more devices have
been developed. However, the mouse is commonly used as a metaphor for devices that move the cursor. A mouse is the standard tool for interacting with a graphical user interface (GUI). All Macintosh computers require a mouse; on PCs, mice are not required but recommended. Even though the Windows environment accepts keyboard entry in lieu of mouse point-and-click actions, your multimedia project should typically be designed with the mouse or touchscreen in mind. The buttons on the mouse provide additional user input, such as pointing and double-clicking to open a document, or the click-and-drag operation, in which the mouse button is pressed and held down to drag (move) an object, or to move to and select an item on a pull-down menu, or to access context-sensitive help.
The Apple mouse has one button; other mice may have as many as three.
mouse
trackball
touchpad
touchscreen
light pen
light gun
yoke (aircraft)
isometric joysticks - where the user controls the stick by varying the amount of force they
push with, and the position of the stick remains more or less constant
(iii) Scanners
Scanners capture text or images using a light-sensing device. Popular types of scanners
include flatbed, sheet fed, and handheld, all of which operate in a similar fashion: a light passes
over the text or image, and the light reflects back to a CCD (Charge-Coupled Device).
A CCD is an electronic device that captures images as a set of analog voltages. The
analog readings are then converted to a digital code by another device called an ADC (Analog-
to-Digital Converter) and transferred through the interface connection (usually USB) to RAM.
The quality of a scan depends on two main performance factors. The first is spatial
resolution. This measures the number of dots per inch (dpi) captured by the CCD. Consumer
scanners have spatial resolutions ranging from 1200 dpi to 4800 dpi. High-end production
scanners can capture as much as 12,500 dpi.
Once the dots of the original image have been converted and saved to digital form, they
are known as pixels. A pixel is a digital picture element. The second performance factor is color
resolution, or the amount of color information about each captured pixel. Color resolution is
determined by bit depth, the number of bits used to record the color of a pixel. A 1-bit scanner
only records values of 0 or 1 for each “dot” captured. This limits scans to just two colors,
usually black and white.
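Spatial resolution and bit depth together determine the size of the captured file. A rough worked sketch (the page size and scanner settings below are illustrative, not recommendations):

    # Approximate uncompressed size of a scanned image.
    # Page size, dpi and bit depth are illustrative assumptions.
    width_in, height_in = 8.5, 11.0   # letter-size original, in inches
    dpi = 300                         # spatial resolution (dots per inch)
    bit_depth = 24                    # color resolution (bits per pixel)

    pixels = (width_in * dpi) * (height_in * dpi)
    size_bytes = pixels * bit_depth / 8
    print(round(pixels), "pixels")                         # 8415000 pixels
    print(round(size_bytes / 1e6, 1), "MB uncompressed")   # about 25.2 MB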
Scanners work with specific software and drivers that manage scanner settings. Spatial
resolutions and bit depth can be altered for each scan. These settings should reflect the purpose
of an image. For example, if an image is a black and white photo for a website, the scanning
software can be adjusted to capture gray scale color depth (8 bit) at 72 dpi. This produces an
image suitable for most computer monitors that display either 72 or 96 pixels per inch. Scanner
software also has settings to scale an image and perform basic adjustments for tonal quality
(amount of brightness and contrast).
Optical Character Recognition (OCR) OCR is the process of converting printed text to
a digital file that can be edited in a word processor. The same scanners that capture
images are used to perform OCR. However, a special software application is necessary
to convert a picture of the character into an ASCII-based letter. This OCR software
recognizes the picture of the letter C, for example, and stores it on the computer using its
ASCII code (01000011). These characters are then edited and reformatted in a word
processing application.
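The ASCII mapping mentioned above is easy to verify. A tiny Python sketch confirms that the letter C is stored as the binary code 01000011:

    # The letter 'C' has ASCII code 67, i.e. 01000011 in binary, which
    # is what OCR software stores once the picture of the character
    # has been recognized.
    code = ord("C")
    print(code)                 # 67
    print(format(code, "08b"))  # 01000011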
on the other hand, creates a specialized file format that can only be managed by Adobe Acrobat
software.
Flatbed scanners are configured to meet a variety of uses. The scanner bed varies to
handle standard letter- to legal-size image sources. Multi-format holders are available
for 35mm filmstrips and slides. Some scanners have an optional sheet-feed device. For
small production, these adapters to a flatbed scanner may suffice. For larger projects,
more specialized scanners should be considered. Slide and film scanners are specifically
calibrated to capture high spatial resolution, some at 4000 dpi.
Sheet-fed scanners are built to automatically capture large print jobs and process 15 or
more pages per minute. In selecting a scanner for multimedia development there are
many considerations. Image or text sources, quality of scan capture, ease of use, and
cost all factor into choosing the right scanner.
Digital cameras are a popular input source for multimedia developers. These cameras
eliminate the need to develop or scan a photo or slide. Camera images are immediately available
to review and reshoot if necessary, and the quality of the digital image is as good as a scanned
image. Digital capture is similar to the scanning process. When the camera shutter is opened
to capture an image, light passes through the camera lens. The image is focused onto a CCD,
which generates an analog signal. This analog signal is converted to digital form by an ADC
and then sent to a digital signal processor (DSP) chip that adjusts the quality of the image and
stores it in the camera’s built-in memory or on a memory card.
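The CCD-to-ADC step in both scanners and digital cameras is a quantization: a continuous voltage is mapped onto a fixed number of digital levels. A minimal sketch (an 8-bit converter with an invented reference voltage and sample readings):

    # Simulate an 8-bit analog-to-digital converter (ADC): an analog
    # voltage in the range 0..v_ref maps to an integer level 0..255.
    # The reference voltage and samples are invented for illustration.
    v_ref = 5.0
    levels = 2 ** 8                  # 8-bit resolution -> 256 levels
    step = v_ref / levels

    def adc(voltage):
        """Quantize a voltage to the nearest lower digital level."""
        return min(int(voltage / step), levels - 1)

    for v in (0.0, 1.25, 2.5, 4.99):
        print(v, "V ->", adc(v))     # 0, 64, 128, 255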
(v) Touchscreens
Touchscreens are monitors that usually have a textured coating across the glass face.
This coating is sensitive to pressure and registers the location of the user’s finger when it
touches the screen. The Touch Mate System, which has no coating, actually measures the
pitch, roll, and yaw rotation of the monitor when pressed by a finger, and determines how much
force was exerted and the location where the force was applied. Other touchscreens use
invisible beams of infrared light that crisscross the front of the monitor to calculate where a
finger was pressed. Pressing twice on the screen in quick succession simulates a double-click; dragging the finger, without lifting it, to another location simulates a mouse click-and-drag. A keyboard is sometimes simulated
using an onscreen representation so users can input names, numbers, and other text by pressing
“keys”.
Touchscreens are not recommended for day-to-day computer work, but are excellent for multimedia applications in a kiosk, at a trade show, or in a museum delivery system, or for anything involving
public input and simple tasks. When your project is designed to use a touchscreen, the monitor
is the only input device required, so you can secure all other system hardware behind locked
doors to prevent theft or tampering.
Computer output devices present processed data in a useful form. Output devices include
screen displays, audio speakers or headsets, and hard copy. The quality of output for display,
sound, and print is dependent on the performance features of these devices.
1. Display Devices
Display devices share their heritage with either Cathode Ray Tube (CRT) technology
used in analog televisions or Liquid Crystal Displays (LCD) first used in calculators and watches.
Both CRT and LCD technologies produce an image on a screen through a series of individual
picture elements (pixels). As in scanners and digital cameras, the quality of a display image is
largely determined by spatial resolution (the number of pixels) and color resolution (the bit
depth of each pixel).
CRT monitors use raster scanning to generate a display. In this process an electronic
signal from the video card controls an electron gun that scans the back of a screen with
an electronic beam. The monitor’s back surface is coated with a phosphor material that
illuminates as electronic beams make contact. The electronic signal scans horizontal
rows from the top to the bottom of the screen. The number of available pixels that can be
illuminated determines the spatial resolution of the monitor. For example, a CRT with
1024 X 768 spatial resolution can display well over 700,000 pixels. CRT technology has now been replaced with smaller, lighter-weight, fully digital displays that use a different technique to create pixels.
LCD screen is a sandwich of two plastic sheets with a liquid crystal material in the
middle. Tiny transistors control rod-shaped molecules of liquid crystal. When voltage is
applied to the transistor, the molecule is repositioned to let light shine through. Pixels
display light as long as the voltage is applied. Laptops borrowed this technology and
improved its resolution, color capability, and brightness to make LCDs suitable for computer
display. Resolution and brightness impact the quality of LCD output. LCD screens have
specific resolutions controlled by the size of the screen and the manufacturer. This fixed-pixel format is referred to as the native resolution of the LCD screen. A 15-inch LCD screen has a native resolution of 1024 X 768 pixels: there are exactly 1024 pixels in each horizontal line and 768 pixels in each vertical line, for a total of 786,432 pixels (see the worked example after this list).
LED (Light-Emitting Diode) displays have moved from large TV screens to mobile
phones, tablets, laptops, and desktop screens. These displays use the same TFT display
technology as the LCDs. A major distinction is in the manner of providing the light source
to illuminate the pixels on the screen. LED screens use a single row of light-emitting
diodes to make a brighter backlight that significantly improves the quality of the monitor
display.
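The native-resolution arithmetic quoted for the LCD above generalizes to any display, and it also fixes the amount of memory needed to hold one frame. A worked sketch (the 24-bit colour depth is an assumption for the example):

    # Pixel count and frame-buffer size for a display of a given
    # resolution. The 24-bit colour depth is an illustrative assumption.
    h_pixels, v_pixels = 1024, 768
    bits_per_pixel = 24

    total_pixels = h_pixels * v_pixels
    framebuffer_bytes = total_pixels * bits_per_pixel / 8
    print(total_pixels, "pixels")                   # 786432
    print(framebuffer_bytes / 1e6, "MB per frame")  # ~2.36 MB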
2. Sound Devices
Sound output devices are speakers or headsets. They are plugged into the soundboard
where digital data is converted to analog sound waves. Soundboards can be a part of the
system board or added to a computer’s expansion slots. Soundboard circuitry performs four
basic processes: it converts digital sound data into analog form using a digital-to-analog
converter, or DAC; records sound in digital form using an ADC; amplifies the signal for delivery through speakers; and creates digital sounds using a synthesizer. A synthesizer is an output device that creates sounds electronically.
Sound quality depends on the range of digital signals the soundboard can process. These signals are measured by sample size and sample rate.
Sample size is the resolution of the sound measured in bits per sample. Most soundboards
support 16-bit sound, the current CD-quality resolution.
Sample rate measures the frequency at which samples are recorded when digitizing a sound. Modern boards accommodate the 48 kHz sample rate found in professional audio and DVD systems. Soundboards control both sound input and output functions. Input functions are especially important for developers because they need to capture and create high-quality sounds.
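Sample size and sample rate together determine the data rate of uncompressed digital audio. A short worked sketch (stereo, i.e. two channels, is an assumption here):

    # Data rate of uncompressed digital audio:
    # sample rate x sample size x number of channels.
    sample_rate = 48_000   # samples per second (DVD-quality rate)
    sample_size = 16       # bits per sample (CD-quality resolution)
    channels = 2           # stereo is an assumption

    bits_per_second = sample_rate * sample_size * channels
    print(bits_per_second / 1e3, "kbit/s")   # 1536.0 kbit/s
    print(bits_per_second / 8 / 1e6, "MB/s") # 0.192 MB/s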
3. Print Devices
Printers remain an important multimedia peripheral device, despite the fact that multimedia
applications are primarily designed for display.
1. Impact Printers
2. Non-Impact Printers
1. Impact Printers
Impact printers print the characters by striking them on the ribbon, which is then pressed
on the paper.
Very noisy
Character printers are the printers which print one character at a time. These are of two types:
a. Dot Matrix
b. Daisy Wheel
a. Dot Matrix
In the market, one of the most popular printers is the Dot Matrix Printer. These printers are popular because of their ease of printing and economical price. Each character printed is in the form of a pattern of dots, and the head consists of a matrix of pins of size 5*7, 7*9, 9*7 or 9*9 which come out to form a character, which is why it is called a Dot Matrix Printer.
Advantages
Inexpensive
Widely Used
Disadvantages
Slow Speed
Poor Quality
b. Daisy Wheel
The head lies on a wheel, and pins corresponding to characters are like the petals of a daisy (flower), which is why it is called a Daisy Wheel Printer. These printers are generally used for word processing in offices that require a few letters to be sent here and there with very good quality.
Advantages
Better quality
Disadvantages
Noisy
Line printers are the printers which print one line at a time.
a. Drum Printer
b. Chain Printer
a. Drum Printer
This printer is shaped like a drum, hence it is called a drum printer. The surface of the drum is divided into a number of tracks; the total number of tracks equals the width of the paper, i.e., for a paper width of 132 characters, the drum will have 132 tracks. A character set is embossed on each track. Different character sets available in the market are the 48-character set, the 64-character set and the 96-character set. One rotation of the drum prints one line. Drum printers are fast and can print 300 to 2000 lines per minute.
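Those line-per-minute figures translate into page throughput. A small sketch (assuming 66 lines per page, an illustrative figure for continuous stationery):

    # Pages per minute for a line printer, assuming 66 lines per page
    # (an illustrative figure for continuous stationery).
    lines_per_page = 66
    for lpm in (300, 2000):   # slow and fast ends of the quoted range
        print(lpm, "lines/min ->", round(lpm / lines_per_page, 1), "pages/min")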
Advantages
Disadvantages
Very expensive
b. Chain Printer
In this printer, a chain of character sets is used; hence it is called Chain Printer. A standard
character set may have 48, 64, or 96 characters.
Advantages
Disadvantages
Noisy
2. Non-impact Printers
Non-impact printers print the characters without using a ribbon. These printers print a complete page at a time, so they are also called Page Printers.
a. Laser Printers
b. Inkjet Printers
High quality
a. Laser Printers
These are non-impact page printers. They use laser lights to produce the dots needed to
form the characters to be printed on a page.
Advantages
Disadvantages
Expensive
b. Inkjet Printers
Inkjet printers are non-impact character printers based on a relatively new technology.
They print characters by spraying small drops of ink onto paper. Inkjet printers produce high
quality output with presentable features.
They make less noise because no hammering is done and these have many styles of
printing modes available. Color printing is also possible. Some models of inkjet printers can also produce multiple copies of a printout.
Advantages
More reliable
Disadvantages
4. Sharing peripheral resources such as file servers, printers, scanners, and network routers
is made possible by a __________.
a. ATA
b. IDE
c. LAN
d. GPS
5. With __________ and a scanner, you can convert paper documents into a word processing
document on your computer without retyping or rekeying.
a. QR codes
b. CRT projectors
c. GLV technology
d. OCR software
a. Adobe Captivate
b. Go! Animate
c. Easy generator
d. FileMaker Pro
8. Which one of the following resource is not necessarily required on a file server?
a. secondary storage
b. processor
c. network
d. monitor
The two types of desktop computer used for multimedia development are the Apple Mac
and the Microsoft Windows based personal computer or PC. Both platforms share these common
components as do most types of computer:
Processor: The processor or central processing unit is the key component and controls
the rest of the computer and executes programs.
Cache: Cache is a small amount of very high speed memory built into the processor for
doing immediate calculations.
RAM memory: RAM (random access memory) is the working memory where the current
application program resides.
System bus: The system bus connects all the necessary devices to the processor. There
are other buses that connect to the system bus like SCSI for hard drives.
Motherboard: The processor, cache, RAM and system bus all reside on a main printed
circuit board called the motherboard.
Operating system: The operating system manages the loading and unloading of
applications and files and the communication with other peripheral devices like printers.
Storage devices: Application programs and working files are saved longer term on different
kinds of storage device. Storage devices include hard disk drives, CD-ROMs and floppy
drives.
Input/output devices: Connected to the system bus are a number of other devices that control the other essential components of a desktop computer, including the monitor, mouse, keyboard, speakers, printer, and scanner.
Expansion bus: Most desktops include 'slots' into which other non-standard devices can be installed.
The latest specification Macs and PCs are capable of running the application tools
necessary for developing standard multimedia applications. The standard applications are image,
sound and video editing, animation and multimedia integration. Comparisons of the performance
of the latest generation of PCs and Macs are hotly contested but in general they are now
roughly the same with each type of computer performing better on some tasks than others.
Apple Macs have, in the past, been more associated with the multimedia industry, however
PCs are increasingly being used since they are now capable of undertaking the same processor
intensive tasks like video compression equally well. High specification computers are required
to undertake some of the tasks required in multimedia development.
Today’s computer users live in a veritable golden age when it comes to choosing computing
devices. In truth, there's no clear winner in the Mac vs. PC contest. Instead, both devices have seen significant developments. Both platforms can now come equipped with Intel® Core™ processors that deliver impressive performance. In addition, both Mac and PC offer increased memory, larger hard drive space, and better stability and availability than four years ago.
However, differences remain: the PC and Ultrabook™ are widely available with touchscreens,
but Apple has yet to release a Mac or MacBook* with integrated touchscreen technology.
Retina display, which greatly reduces glare and reflection, is a standard feature on the new
iMac*, but is less common on PCs.
Compatibility: While the main operating system for Apple is OSX*, and PCs operate on
Microsoft Windows*, only Macs have the capability to run both. Naturally, both systems
continue to develop faster and more powerful versions of these operating systems that
are increasingly user-friendly and more compatible with handheld devices.
Reliability: When it comes to reliability, the Mac vs. PC debate has had some interesting
developments of late. Though the majority of PC users know their devices are vulnerable
to malware and viruses, Mac users this past year have certainly awoken to the fact that
Macs are also vulnerable to sophisticated attacks. Ultimately, both PC and Mac users are
safer after installing up-to-date antivirus software designed to protect their devices from
malicious hits. Even when it comes to repairs, both operating systems have made great
strides. Though it’s still advised to take a broken Mac to an Apple Genius Bar* in an
authorized Apple dealership, there are more locations than there were a few years ago.
PC users enjoy a broader range of choices, from their local electronics dealer to a repair center at a major department store, though it remains their own responsibility to choose a repair service that's up to their PC manufacturer's standards. Since PCs and
Macs hit the market, the debate has existed over which is best. Depending upon who
you’re talking to, the PC vs. Mac debate is even hotter than politics or religion. While you
have many who are die-hard Microsoft PC users, another group exists that is just as
dedicated to Apple’s Mac*. A final group exists in the undecided computer category.
Cost: For many users, cost is key. You want to get the absolute most for your money. In
years past, PCs dominated the budget-friendly market, with Macs ranging anywhere from
$100 to $500 more than a comparable PC. Now this price gap has lessened significantly.
However, you will notice a few key areas where Macs tend to offer less in order to hold down the price: memory and hard drive space.
Memory: Most PCs have anywhere from 2 GB to 8 GB of RAM in laptops and desktops,
while Macs usually have only 1 GB to 4 GB. Keep in mind, this is for standard models, not
custom orders.
Hard Drive Space: Macs typically have smaller hard drives than PCs. This could be
because some Mac files and applications are slightly smaller than their PC counterparts.
On average, you will still see price gaps of several hundred dollars between comparable
Macs and PCs. For computing on a budget, PCs win. There are a few things to take into
consideration that may actually make Macs more cost effective: stability and compatibility.
Stability: In years past, PCs were known to crash, and users would get the “blue screen,”
but Microsoft has made their operating systems more reliable in recent years. On the
other hand, Mac hardware and software have tended to be stable, and crashes occur
infrequently.
Compatibility: Unlike with a PC, a Mac can also run Windows. If you want to have a
combination Mac and PC, a Mac is your best option.
Availability: Macs are exclusive to Apple. This means for the most part, prices and features
are the same no matter where you shop. This limits Mac availability. However, with the
new Apple stores, it’s even easier to buy Macs and Mac accessories. Any upgrades or
repairs can only be done by an authorized Apple support center. PCs, on the other hand,
are available from a wide range of retailers and manufacturers. This means more customization, a wider price range for all budgets, and repairs and upgrades available at most electronics retailers. It also makes it easier for home users to perform upgrades and repairs themselves, as parts are easy to find.
Web Design: 95 per cent of the people surfing the Web use Windows on PCs. If you want to be able to design in an environment where you see pretty much what that 95 per cent sees, then Windows just plain makes sense. Secondly, though many technologies are available for the Mac, some Windows-only technologies are not, and much of the Web uses them. If you want to take advantage of .NET technology or ASP, it is just far easier to implement from a Windows platform.
Software: The final Mac vs. PC comparison comes down to software. For the most part,
the two are neck and neck. Microsoft has even released Microsoft Office specifically for
Mac, proving Apple and Microsoft can get along. All in all, Macs are arguably more software compatible, since PCs support only Windows-friendly software while a Mac can also run Windows. Both systems support most open-source software. Software for both systems is user friendly and easy to learn.
All Macintoshes can record and play sound. Many include hardware and software for
digitizing and editing video and producing DVD discs. High-quality graphics capability is available
“out of the box.” Unlike the Windows environment, where users can operate any application
with keyboard input, the Macintosh requires a mouse.
The Macintosh computer you will need for developing a project depends entirely upon
the project’s delivery requirements, its content, and the tools you will need for production.
Unlike the Apple Macintosh computer, a Windows computer is not a computer per se, but
rather a collection of parts that are tied together by the requirements of the Windows operating
system. Power supplies, processors, hard disks, CD-ROM players, video and audio components,
monitors, keyboards, and mice: it doesn’t matter where they come from or who makes them. Made in Texas, Taiwan, Indonesia, Ireland, Mexico, or Malaysia by widely known or little-known manufacturers, these components are assembled and branded by Dell, IBM, Gateway, and others into computers that run Windows.
In the early days, Microsoft organized the major PC hardware manufacturers into the
Multimedia PC Marketing Council to develop a set of specifications that would allow Windows
to deliver a dependable multimedia experience.
LANs allow direct communication and sharing of peripheral resources such as file servers,
printers, scanners, and network modems. They use a variety of networking technologies, most
commonly Ethernet or Token Ring, to perform the connections.
Multimedia software allows users to create and play audio and video media. Audio converters, burners, players, and video encoders and decoders are some examples. RealPlayer and Media Player are examples of this kind of software.
Multimedia software can be entertaining as well as useful. The user can play music on
the computer, listen to the sound an animal makes while browsing a disk about the zoo, hear
actual recordings of famous speeches, view a video clip of a historic event, watch an animation
about how a car engine works, hear the correct pronunciation of a word or phrase, view full
color photographs of famous works of art or scenes from nature, listen to the sounds of different
musical instruments, hear works of music by renowned composers, or watch a movie on your
computer.
There is a large selection of multimedia software available for the person’s enjoyment.
Multimedia subjects include children’s learning, the arts, reference works, health and medicine,
science, history, geography, hobbies and sports, games, and much more. Because of the large
storage requirements of this type of media, most multimedia software comes on a compact
disk (CD-ROM) format.
To use multimedia software, the end user system must meet certain minimum requirements
set forth by the Multimedia Personal Computer (MPC) Marketing Council. These requirements
include
a CD-ROM drive
Most new computers far exceed these specifications. A microphone is optional if the user
wants to record their own sounds. While these are suggested minimum requirements, many
multimedia programs would run better on a computer equipped with a Pentium 4 or AMD Athlon
CPU and 512 or more megabytes of RAM.
Since much of the software purchased today contains multimedia content, we are now
referring to multimedia software as the software used to create multimedia content. Examples
include authoring software, which is used to create interactive multimedia courseware which is
distributed on CD or available over the Internet. A teacher could use such a program to create
interesting interactive lessons for the students which are viewed on the computer. A business
could create programs to teach job skills or orient new employees.
Another category of multimedia involves the recreational use of music. Songs can be
copied from CDs or downloaded from the Internet and stored on the hard drive. The music can
then be burned onto a CD or transferred to a Walkman-like device called an MP3 player or a
“jukebox.” There is also software for the creation, arranging, performance and recording of
music and video. Through the use of a MIDI (Musical Instrument Digital Interface) connector
installed in the computer, the computer can be connected to musical instruments such as
electronic keyboards. A music student or musician could then create a multiple track recording,
arrange it, play it back, change the key or tempo, and print out the sheet music. Another type
of software which has recently gained popularity is digital audio recording software, which allows the computer to be connected to a digital audio mixer, usually through USB or FireWire connectors, and record live music onto the hard drive. The “tracks” can then be mixed, effects
added, and music CDs can then be made from the master recording.
Also available are special cameras that allow the person to record pictures and movies to
the person’s hard disk drive so they can easily be transferred into a multimedia presentation or
edited and recorded back to videotape to create their own “home movie.” These cameras range from very inexpensive models, wired to the computer and sitting on a small stand near the monitor, to full digital camcorders. The inexpensive type is sometimes referred to as an “Internet camera” or a “video chat” camera, and sometimes called a “golf ball” camera because many of them are shaped like a golf ball. These cameras can also be used to send a live video feed over the
Internet, such as a video “chat” or “teleconference” call.
Digital video cameras allow the person to record movies and watch them with amazing
clarity and resolution, or to transfer the video to the computer for editing using the included
software. The user can delete unwanted scenes, add titles and effects, fade in and out, create
a sequence of scenes from smaller video files, and even add a musical sound track. Once the
editing is complete, the movie can be recorded back to the videotape, or “burned” onto a DVD
if the person’s computer is equipped with a DVD-R or DVD-RW drive.
3.6 Summary
Hardware elements such as hard disks and networked peripherals must be connected
together.
Input and output devices such as microphones, recorders, speakers, and monitors are
required when working with multimedia elements.
The Graphics Card and a GPU (Graphical Processing Unit) are needed to generate the
highest quality output images on a monitor.
CD-ROMs (Compact Disk-Read Only Memory), HD-DVDs (High Density Digital Versatile
Disc), and BDs (Blu-ray Disc) are the best choices for saving and distributing multimedia
data and video.
Multimedia software tools can be divided into graphics and image editing, audio and
sound editing, video editing, and animation authoring tools.
Answers to Check Your Progress
2. True
3. Office Suite
4. c. LAN
5. d. OCR Software
6. d. FileMaker Pro
8. d. monitor
LESSON 4
HARDWARE PERIPHERALS IN
MULTIMEDIA SYSTEM
Structure
4.1 Introduction
4.4 Connections
4.8 Summary
4.1 Introduction
The hardware required for multimedia PC depends on the personal preference, budget,
project delivery requirements and the type of material and content in the project. Multimedia
production was much smoother and easy in Macintosh than in Windows. But Multimedia content
production in windows has been made easy with additional storage and less computing cost.
Right selection of multimedia hardware results in good quality multimedia presentation.
Peripheral devices are hardware used for input, auxiliary storage, display, and
communication. These are attached to the system unit through a hardware interface that carries
digital data to and from main memory and processors. The functions and performance
characteristics of peripherals are important considerations both for multimedia users, who may
want the best display device for a video game, and for developers, who seek high-performance
data capture and access.
Multimedia Hardware
The hardware required for multimedia can be classified into five. They are
1. Connecting Devices
3. Communicating devices.
Among the many hardware items in use (computers, monitors, disk drives, video projectors, light valves, players, VCRs, mixers, and speakers) there are plenty of wires connecting the devices. The data transfer speed that these connecting devices provide determines how quickly multimedia content can be delivered.
1. SCSI
SCSI (Small Computer System Interface) is a set of standards for physically connecting
and transferring data between computers and peripheral devices. The SCSI standards define
commands, protocols, electrical and optical interfaces. SCSI is most commonly used for hard
disks and tape drives, but it can connect a wide range of other devices, including scanners,
and optical drives (CD, DVD, etc.). SCSI is most commonly pronounced “scuzzy”. Since
its standardization in 1986, SCSI has been commonly used in the Apple Macintosh and Sun
Microsystems computer lines and PC server systems. SCSI has never been popular in the low-
priced IBM PC world, owing to the lower cost and adequate performance of its ATA hard disk
standard. SCSI drives and even SCSI RAIDs became common in PC workstations for video or
audio production, but the appearance of large cheap SATA drives means that SATA is rapidly
taking over this market. Currently, SCSI is popular on high-performance workstations and
servers. RAIDs on servers almost always use SCSI hard disks, though a number of
manufacturers offer SATA-based RAID systems as a cheaper option. Desktop computers and
notebooks more typically use the ATA/IDE or the newer SATA interfaces for hard disks, and
USB and FireWire connections for external devices.
SCSI interfaces
SCSI is available in a variety of interfaces. The first, still very common, was parallel SCSI
(also called SPI). It uses a parallel electrical bus design. The traditional SPI design is making
a transition to Serial Attached SCSI, which switches to a serial point-to-point design but retains
other aspects of the technology.
iSCSI drops physical implementation entirely, and instead uses TCP/IP as a transport
mechanism. Finally, many other interfaces which do not rely on complete SCSI standards still
implement the SCSI command protocol.
SCSI cabling
Internal SCSI cables are usually ribbon cables that have multiple 68 pin or 50 pin
connectors. External cables are shielded and only have connectors on the ends.
iSCSI
iSCSI preserves the basic SCSI paradigm, especially the command set, almost
unchanged. iSCSI advocates project the iSCSI standard, an embedding of SCSI-3 over TCP/
IP, as displacing Fibre Channel in the long run, arguing that Ethernet data rates are currently
increasing faster than data rates for Fibre Channel and similar disk-attachment technologies.
iSCSI could thus address both the low-end and high-end markets with a single commodity-
based technology.
Serial SCSI
Four recent versions of SCSI (SSA, FC-AL, FireWire, and Serial Attached SCSI, or SAS) break from the traditional parallel SCSI standards and perform data transfer via serial
communications. Although much of the documentation of SCSI talks about the parallel interface,
most contemporary development effort is on serial SCSI. Serial SCSI has a number of
advantages over parallel SCSI—faster data rates, hot swapping, and improved fault isolation.
The primary reason for the shift to serial interfaces is the clock skew issue of high speed
parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems
caused by cabling and termination. Serial SCSI devices are more expensive than the equivalent
parallel SCSI devices.
In addition to many different hardware implementations, the SCSI standards also include
a complex set of command protocol definitions. The SCSI command architecture was originally
defined for parallel SCSI buses but has been carried forward with minimal change for use with
iSCSI and serial SCSI. Other technologies which use the SCSI command set include the ATA
Packet Interface, USB Mass Storage class and FireWire SBP-2.
In SCSI terminology, communication takes place between an initiator and a target. The
initiator sends a command to the target which then responds. SCSI commands are sent in a
Command Descriptor Block (CDB). The CDB consists of a one byte operation code followed
by five or more bytes containing command-specific parameters. At the end of the command
sequence the target returns a Status Code byte which is usually 00h for success, 02h for an
error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in
response to a command, the initiator usually then issues a SCSI Request Sense command in
order to obtain a Key Code Qualifier (KCQ) from the target. The Check Condition and Request
Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition.
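This status-and-sense exchange can be expressed mechanically in code. Below is a minimal sketch in Python, using the status values quoted above; the issue_command transport callback is a hypothetical placeholder, not a real driver interface.

    # Status codes returned by a SCSI target, as described above.
    STATUS_GOOD = 0x00             # command completed successfully
    STATUS_CHECK_CONDITION = 0x02  # an error occurred; sense data available
    STATUS_BUSY = 0x08             # target is busy; retry later

    def handle_status(status, issue_command):
        """Decode a SCSI status byte and react as an initiator would.

        issue_command is a hypothetical transport callback that sends a
        CDB to the target and returns (status, data).
        """
        if status == STATUS_GOOD:
            return "ok"
        if status == STATUS_CHECK_CONDITION:
            # Issue REQUEST SENSE (opcode 0x03) to fetch the Key Code
            # Qualifier explaining the error (Contingent Allegiance).
            request_sense_cdb = bytes([0x03, 0, 0, 0, 252, 0])
            _, sense = issue_command(request_sense_cdb)
            return f"check condition, sense key 0x{sense[2] & 0x0F:02x}"
        if status == STATUS_BUSY:
            return "busy, retry later"
        return f"unexpected status 0x{status:02x}"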
There are 4 categories of SCSI commands: N (non-data), W (writing data from initiator to
target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in
total, with the most common being:
Test unit ready: Queries device to see if it is ready for data transfers (disk spun up,
media loaded, etc.).
Inquiry: Returns basic device information, also used to “ping” the device since it does not
modify sense data.
Request sense: Returns any error codes from the previous command that returned an
error status.
Send Diagnostic and Receive Diagnostic Results: runs a simple self-test or a specialized
test defined in a diagnostic page.
Format unit: Sets all sectors to all zeroes, also allocates logical blocks avoiding defective
sectors.
Each device on the SCSI bus is assigned at least one Logical Unit Number (LUN). Simple
devices have just one LUN, more complex devices may have multiple LUNs. A “direct access”
(i.e. disk type) storage device consists of a number of logical blocks, usually referred to by the
term Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of
LBAs has evolved over time and so four different command variants are provided for reading
and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The
Read(10), Read(12), Read Long, Write(10), Write(12), and Write Long commands all contain
a 32-bit LBA address plus various other parameter options.
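As an illustration of the CDB layout, the following Python sketch packs a Read(10) command; the field layout (opcode 0x28, a flags byte, a 32-bit big-endian LBA, a group byte, a 16-bit transfer length in blocks, and a control byte) follows the standard 10-byte format.

    import struct

    READ_10 = 0x28  # operation code byte for the Read(10) command

    def build_read10_cdb(lba: int, num_blocks: int) -> bytes:
        """Pack a 10-byte Read(10) CDB: opcode, flags, 32-bit big-endian
        LBA, group number, 16-bit transfer length (blocks), control."""
        return struct.pack(">BBIBHB", READ_10, 0, lba, 0, num_blocks, 0)

    # Read 8 blocks (8 x 512 = 4096 bytes on a typical disk) from LBA 2048.
    cdb = build_read10_cdb(2048, 8)
    assert len(cdb) == 10
    print(cdb.hex())  # 28000000080000000800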
A “sequential access” (i.e. tape-type) device does not have a specific capacity because it
typically depends on the length of the tape, which is not known exactly. Reads and writes on a
sequential access device happen at the current position, not at a specific LBA. The block size
on sequential access devices can either be fixed or variable, depending on the specific device.
(Earlier devices, such as 9-track tape, tended to be fixed block, while later types, such as DAT,
almost always supported variable block sizes.)
On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a “SCSI
ID”, which is a number in the range 0-7 on a narrow bus and in the range 0–15 on a wide bus.
On earlier models a physical jumper or switch controls the SCSI ID of the initiator (host adapter).
On modern host adapters (since about 1997), doing I/O to the adapter sets the SCSI ID; for
example, the adapter contains a BIOS program that runs when the computer boots up and that
program has menus that let the operator choose the SCSI ID of the host adapter. Alternatively,
the host adapter may come with software that must be installed on the host computer to configure
the SCSI ID. The traditional SCSI ID for a host adapter is 7, as that ID has the highest priority
during bus arbitration (even on a 16 bit bus).
The SCSI ID of a device in a drive enclosure that has a backplane is set either by jumpers
or by the slot in the enclosure the device is installed into, depending on the model of the
enclosure. In the latter case, each slot on the enclosure’s back plane delivers control signals to
the drive to select a unique SCSI ID. A SCSI enclosure without a backplane has a switch for
each drive to choose the drive’s SCSI ID. The enclosure is packaged with connectors that
must be plugged into the drive where the jumpers are typically located; the switch emulates the
necessary jumpers. While there is no standard that makes this work, drive designers typically set up their jumper headers in a consistent format that matches the way these switches are implemented.
Note that a SCSI target device (which can be called a “physical unit”) is divided into
smaller “logical units.” For example, a high-end disk subsystem may be a single SCSI device
but contain dozens of individual disk drives, each of which is a logical unit (more commonly, it
is not that simple—virtual disk devices are generated by the subsystem based on the storage
in those physical drives, and each virtual disk device is a logical unit). The SCSI ID, WWNN,
etc. in this case identifies the whole subsystem, and a second number, the logical unit number
(LUN) identifies a disk device within the subsystem.
It is quite common, though incorrect, to refer to the logical unit itself as a “LUN.” Accordingly,
the actual LUN may be called a “LUN number” or “LUN id”. Setting the bootable (or first) hard
disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside
for the floppy drive, while SCSI ID 3 is typically for a CD-ROM.
SCSI Enclosure Services
In larger SCSI servers, the disk-drive devices are housed in an intelligent enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics.
The Media Control Interface, MCI in short, is an aging API for controlling multimedia
peripherals connected to a Microsoft Windows or OS/2 computer. MCI makes it very simple to
write a program which can play a wide variety of media files and even to record sound by just
passing commands as strings. It uses relations described in Windows registries or in the [MCI]
section of the file SYSTEM.INI.
The MCI interface is a high-level API developed by Microsoft and IBM for controlling
multimedia devices, such as CD-ROM players and audio controllers. The advantage is that
MCI commands can be transmitted both from the programming language and from the scripting
language (open script, lingo). For a number of years, the MCI interface has been phased out in
favor of the DirectX APIs.
MCI Devices
The Media Control Interface consists of 4 parts:
AVIVideo
CDAudio
Sequencer
WaveAudio
Each of these so-called MCI devices can play a certain type of file; e.g., AVIVideo plays .avi files and CDAudio plays CD tracks. Other MCI devices have also been made available over time.
To play a type of media, it needs to be initialized correctly using MCI commands. These
commands are subdivided into categories:
System Commands
Required Commands
Basic Commands
Extended Commands
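Because MCI is driven by plain command strings, it can be scripted from almost any language. A minimal Windows-only sketch in Python, calling the mciSendString entry point exported by winmm.dll (the clip.wav file name is a placeholder):

    # Windows-only: drive MCI by passing command strings to winmm.dll.
    import ctypes

    winmm = ctypes.windll.winmm

    def mci(command: str) -> str:
        """Send one MCI command string and return the textual reply."""
        buf = ctypes.create_unicode_buffer(256)
        err = winmm.mciSendStringW(command, buf, len(buf), None)
        if err:
            raise RuntimeError(f"MCI error {err} for: {command}")
        return buf.value

    # Open, play, and close a WAV file ("clip.wav" is a placeholder name).
    mci("open clip.wav type waveaudio alias clip")
    print(mci("status clip length"))  # length in milliseconds
    mci("play clip wait")             # 'wait' blocks until playback ends
    mci("close clip")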
Usually storage devices connect to the computer through an Integrated Drive Electronics
(IDE) interface. Essentially, an IDE interface is a standard way for a storage device to connect
to a computer. IDE is actually not the true technical name for the interface standard. The
original name, AT Attachment (ATA), signified that the interface was initially developed for the
IBM AT computer.
IDE was created as a way to standardize the use of hard drives in computers. The basic
concept behind IDE is that the hard drive and the controller should be combined. The controller
is a small circuit board with chips that provide guidance as to exactly how the hard drive stores
and accesses data. Most controllers also include some memory that acts as a buffer to enhance
hard drive performance. Before IDE, controllers and hard drives were separate and often
proprietary. In other words, a controller from one manufacturer might not work with a hard drive
from another manufacturer. The distance between the controller and the hard drive could result
in poor signal quality and affect performance. Obviously, this caused much frustration for
computer users.
IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of the
wires laid flat next to each other instead of bunched or wrapped together in a bundle. IDE
ribbon cables have either 40 or 80 wires. There is a connector at each end of the cable and
another one about two-thirds of the distance from the motherboard connector. This cable cannot
exceed 18 inches (46 cm) in total length (12 inches from first to second connector, and 6
inches from second to third) to maintain signal integrity.
The three connectors are typically different colors and attach to specific items:
Enhanced IDE (EIDE) — an extension to the original ATA standard again developed by
Western Digital — allowed the support of drives having a storage capacity larger than 504
MiBs (528 MB), up to 7.8 GiBs (8.4 GB). Although these new names originated in branding
convention and not as an official standard, the terms IDE and EIDE appear as if interchangeable
with ATA. This may be attributed to the two technologies being introduced with the same
consumable devices — these “new” ATA hard drives.
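The capacity figures quoted above follow from cylinder/head/sector (CHS) addressing arithmetic. A quick check in Python, assuming the classic limits of 1,024 cylinders, 16 heads, and 63 sectors of 512 bytes, and EIDE's extended translation to 256 heads:

    # The classic CHS addressing limits behind the figures quoted above.
    cylinders, heads, sectors, sector_bytes = 1024, 16, 63, 512

    ata_limit = cylinders * heads * sectors * sector_bytes
    print(ata_limit)          # 528482304 bytes
    print(ata_limit / 10**6)  # ~528 MB (decimal megabytes)
    print(ata_limit / 2**20)  # 504.0 MiB (binary mebibytes)

    # EIDE's extended translation allows up to 256 heads:
    eide_limit = cylinders * 256 * sectors * sector_bytes
    print(eide_limit / 10**9, eide_limit / 2**30)  # ~8.4 GB, ~7.8 GiB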
With the introduction of Serial ATA around 2003, conventional ATA was retroactively
renamed to Parallel ATA (P-ATA), referring to the method in which data travels over wires in
this interface.
Universal Serial Bus (USB) is a serial bus standard to interface devices. A major
component in the legacy-free PC, USB was designed to allow peripherals to be connected
using a single standardized interface socket and to improve plug-and-play capabilities by allowing
devices to be connected and disconnected without rebooting the computer (hot swapping).
Other convenient features include providing power to low-consumption devices without the
need for an external power supply and allowing many devices to be used without requiring
manufacturer specific, individual device drivers to be installed.
USB is intended to help retire all legacy varieties of serial and parallel ports. USB can
connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads and
joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For
many of those devices USB has become the standard connection method. USB is also used
extensively to connect non-networked printers; USB simplifies connecting several printers to
one computer. USB was originally designed for personal computers, but it has become
commonplace on other devices such as PDAs and video game consoles.
The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry
standards body incorporating leading companies from the computer and electronics industries.
Notable members have included Apple Inc., Hewlett-Packard, NEC, Microsoft, and Intel.
USB devices are linked in series through hubs. There always exists one hub known as
the root hub, which is built into the host controller. So-called “sharing hubs” also exist, allowing
multiple computers to access the same peripheral device(s), either switching access between
PCs automatically or manually. They are popular in small office environments. In network terms
they converge rather than diverge branches.
A single physical USB device may consist of several logical sub-devices that are referred
to as device functions, because each individual device may provide several functions, such as
a webcam (video device function) with a built-in microphone (audio device function).
FireWire was introduced by Apple in the late 1980s, and in 1995 it became an industry
standard (IEEE 1394) supporting high-bandwidth serial data transfer, particularly for digital
video and mass storage. Like USB, the standard supports hot-swapping and plug-and-play,
but it is faster, and while USB devices can only be attached to one computer at a time, FireWire
can connect multiple computers and peripheral devices (peer-to-peer). Both the Mac OS and
Windows offer IEEE 1394 support. Because the standard has been endorsed by the Electronics
Industries Association and the Advanced Television Systems Committee (ATSC), it has become
a common method for connecting and interconnecting professional digital video gear, from
cameras to recorders and edit suites. Sony calls this standard i.LINK. FireWire has replaced
Parallel SCSI in many applications because it’s cheaper and because it has a simpler, adaptive
cabling system.
2. The type of memory that is not erased when power is shut off to it is called
_______________.
a. Volatile memory
b. Non-Volatile Memory
c. Backup Memory
d. Impact Memory
Type of backup storage in which data is read in a sequence is classified as Serial Access.
b. CD-ROM drive
d. Optical Drive
7. Which of the following items is not used in Local Area Networks (LANs)?
a. Computer Modem
b. Cable
c. Modem
d. Interface card
a. Virus
b. Website
c. Application
d. CorelDraw
A data storage device is a device for recording (storing) information (data). Recording
can be done using virtually any form of energy. A storage device may hold information, process
information, or both. A device that only holds information is a recording medium. Devices that
process information (data storage equipment) may use either a separate portable (removable) recording medium or a permanent component to store and retrieve information.
Electronic Data Storage is storage which requires electrical power to store and retrieve
that data. Most storage devices that do not require visual optics to read data fall into this
category. Electronic data may be stored in either an analog or digital signal format. This type of
data is considered to be electronically encoded data, whether or not it is electronically stored.
Most electronic data storage media (including some forms of computer storage) are considered permanent (non-volatile) storage; that is, the data will remain stored when power is removed from the device. In contrast, data held in volatile memory, such as RAM, is lost when power is removed.
As more memory and storage space is added to a computer, computing needs and habits tend to keep pace, filling the new capacity. To estimate the storage requirements of a multimedia project (the space required on a floppy disk, hard disk, or CD-ROM, not the random access memory used while the computer is running), you need a sense of the project’s content and scope.
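One way to get that sense is to total up the sizes of the raw media elements. A back-of-the-envelope sketch using standard uncompressed rates (CD-quality stereo audio, 24-bit screen-sized images; the project mix is invented for illustration):

    # Rough storage estimate for a hypothetical small multimedia project.
    def audio_bytes(seconds, rate=44100, bits=16, channels=2):
        """Uncompressed PCM audio size in bytes."""
        return seconds * rate * (bits // 8) * channels

    def image_bytes(width, height, bits=24):
        """Uncompressed bitmap size in bytes."""
        return width * height * bits // 8

    one_minute_audio = audio_bytes(60)    # 10,584,000 bytes (~10 MB)
    screen_image = image_bytes(640, 480)  # 921,600 bytes (~0.9 MB)

    project = 5 * one_minute_audio + 40 * screen_image
    print(f"{project / 2**20:.1f} MiB")   # ~85.6 MiB before compression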
RAM is the main memory where the operating system is initially loaded and application programs are loaded at a later stage. RAM is volatile in nature, and every program that is quit or exited is removed from RAM. The greater the RAM capacity, the faster the processing.
On the Macintosh, the minimum RAM configuration for serious multimedia production is
about 32MB; but even 64MB and 256MB systems are becoming common, because while
digitizing audio or video, you can store much more data much more quickly in RAM. And when
you’re using some software, you can quickly chew up available RAM – for example, Photoshop
(16MB minimum, 20MB recommended); After Effects (32MB required), Director (8MB minimum,
20MB better); PageMaker (24MB recommended); Illustrator (16MB recommended); Microsoft
Office (12MB recommended).
In spite of all the marketing hype about processor speed, this speed is ineffective if not
accompanied by sufficient RAM. A fast processor without enough RAM may waste processor
cycles while it swaps needed portions of program code into and out of memory.
In some cases, increasing available RAM may show more performance improvement on your system than upgrading the processor chip. On an MPC platform, multimedia authoring can also consume a great deal of memory. You may need to open many large graphics and audio files, as well as your authoring system, all at the same time to facilitate faster copying/
pasting and then testing in your authoring software. Although 8MB is the minimum under the MPC standard, much more is required in practice.
Read-only memory (ROM) is not volatile: unlike RAM, when you turn off the power to a ROM chip, it will not forget, or lose its memory. ROM is typically used in computers to hold the small BIOS program that initially boots up the computer, and it is used in printers to hold built-in fonts. Programmable ROMs (called EPROMs) allow changes to be made that are not forgotten.
Adequate storage space for the production environment can be provided by large-capacity
hard disks; a server-mounted disk on a network; Zip, Jaz, or SyQuest removable cartridges;
optical media; CD-R (compact disc-recordable) discs; tape; floppy disks; banks of special
memory devices; or any combination of the above. Removable media (floppy disks, compact
or optical discs, and cartridges) typically fit into a letter-sized mailer for overnight courier service.
One or many disks may be required for storage and archiving each project, and it is necessary
to plan for backups kept off-site.
Floppy disks and hard disks are mass-storage devices for binary data, that is, data that can be easily read by a computer. Hard disks can contain much more information than floppy disks and can operate at far greater data transfer rates. In the scale of things, floppies are, however,
no longer “mass-storage” devices. A floppy disk is made of flexible Mylar plastic coated with a
very thin layer of special magnetic material. A hard disk is actually a stack of hard metal platters
coated with magnetically sensitive material, with a series of recording heads or sensors that
hover a hairbreadth above the fast-spinning surface, magnetizing or demagnetizing spots along
formatted tracks using technology similar to that used by floppy disks and audio and video tape
recording. Hard disks are the most common mass-storage device used on computers, and for
making multimedia, it is necessary to have one or more large-capacity hard disk drives.
SyQuest’s 44MB removable cartridges have been the most widely used portable medium
among multimedia developers and professionals, but Iomega’s inexpensive Zip drives with
their likewise inexpensive 100MB cartridges have significantly penetrated SyQuest’s market
share for removable media. Iomega’s Jaz cartridges provide a gigabyte of removable storage
media and have fast enough transfer rates for audio and video development. Pinnacle Micro,
Yamaha, Sony, Philips, and others offer CD-R “burners” for making write-once compact discs,
and some double as quad-speed players. As blank CD-R discs become available for less than
a dollar each, this write-once media competes as a distribution vehicle. CD-R is described in
greater detail a little later in the chapter. Magneto-optical (MO) drives use a high-power laser to
heat tiny spots on the
metal oxide coating of the disk. While the spot is hot, a magnet aligns the oxides to
provide a 0 or 1 (on or off) orientation. Like SyQuests and other Winchester hard disks, this is
rewritable technology, because the spots can be repeatedly heated and aligned. Moreover,
this media is normally not affected by stray magnetism (it needs both heat and magnetism to
make changes), so these disks are particularly suitable for archiving data. The data transfer
rate is, however, slow compared to Zip, Jaz, and SyQuest technologies. One of the most
popular formats uses a 128MB-capacity disk-about the size of a 3.5-inch floppy. Larger-format
magneto-optical drives with 5.25-inch cartridges offering 650MB to 1.3GB of storage are also
available.
In December 1995, nine major electronics companies (Toshiba, Matsushita, Sony, Philips,
Time Warner, Pioneer, JVC, Hitachi, and Mitsubishi Electric) agreed to promote a new optical
disc technology for distribution of multimedia and feature-length movies called DVD.
With this new medium capable not only of gigabyte storage capacity but also full-motion video (MPEG-2) and high-quality audio in surround sound, the bar has again risen for multimedia developers. Commercial multimedia projects will become more expensive to produce as consumers’ performance expectations rise. There are two types of DVD, DVD-Video and DVD-ROM; these reflect marketing channels, not the technology.
CD-ROM Players
Compact Disc Read-Only Memory (CD-ROM) players have become an integral part of
the multimedia development workstation and are an important delivery vehicle for large, mass-
produced projects. A wide variety of developer utilities, graphic backgrounds, stock photography
and sounds, applications, games, reference texts, and educational software are available only
on this medium.
CD-ROM players have typically been very slow to access and transmit data (150KB per second, which is the speed required of consumer Red Book Audio CDs), but new developments have led to double-, triple-, and quadruple-speed and even 24x drives designed specifically for
computer (not Red Book Audio) use. These faster drives spool up like washing machines on
the spin cycle and can be somewhat noisy, especially if the inserted compact disc is not evenly
balanced.
CD Recorders
With a compact disc recorder, you can make your own CDs using special CD-Recordable
(CD-R) blank optical discs to create a CD in most formats of CD-ROM and CD-Audio. The
machines are made by Sony, Philips, Ricoh, Kodak, JVC, Yamaha, and Pinnacle. Software,
such as Adaptec’s Toast for Macintosh or Easy CD Creator for Windows, lets you organize files
on your hard disk(s) into a “virtual” structure, then writes them to the CD in that order. CD-R
discs are made differently than normal CDs but can play in any CD-Audio or CD-ROM player.
They are available in either a “63 minute” or “74 minute” capacity; the former holds about 560MB, and the latter about 650MB. These write-once CDs make excellent high-
capacity file archives and are used extensively by multimedia developers for premastering and
testing CD-ROM projects and titles.
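Those capacity figures follow from the disc’s sector arithmetic: audio CDs play 75 sectors per second, and a CD-ROM Mode 1 sector carries 2,048 bytes of user data. A quick check:

    # Why a "74 minute" CD-R holds about 650MB of data (Mode 1 sectors).
    SECTORS_PER_SECOND = 75   # Red Book playback rate
    MODE1_USER_BYTES = 2048   # user data per CD-ROM Mode 1 sector

    def cdr_capacity(minutes):
        return minutes * 60 * SECTORS_PER_SECOND * MODE1_USER_BYTES

    print(cdr_capacity(74) / 2**20)  # ~650 MiB
    print(cdr_capacity(63) / 2**20)  # ~554 MiB, roughly the quoted 560MB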
Videodisc Players
Videodisc players (commercial, not consumer quality) can be used in conjunction with
the computer to deliver multimedia applications. You can control the videodisc player from your
authoring software with X-Commands (XCMDs) on the Macintosh and with MCI commands in
Windows. The output of the videodisc player is an analog television signal, so you must set up
a television separate from your computer monitor or use a video digitizing board to “window”
the analog signal on your monitor.
A communication device is a hardware device capable of transmitting an analog or digital
signal over the telephone, other communication wire, or wirelessly. The best example of a
communication device is a computer Modem, which is capable of sending and receiving a
signal to allow computers to talk to other computers over the telephone.
Other examples of communication devices include a NIC (network interface card), Wi-
Fi devices, and access points.
Bluetooth
Bluetooth is a computing and telecommunications industry specification that describes
how devices can communicate with each other. Devices that use Bluetooth include computers,
a computer keyboard and mouse, personal digital assistants, and smartphones.
Bluetooth is an RF technology that operates at 2.4 GHz, has an effective range of about 33 feet (10 meters; this range can change depending on the power class), and has a transfer rate of 1 Mbps and throughput of 721 Kbps.
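The difference between the 1 Mbps raw rate and the 721 Kbps usable throughput matters when estimating transfer times; for example:

    # Time to move a 5 MiB file over Bluetooth at the quoted throughput.
    file_bytes = 5 * 2**20      # 5 MiB
    throughput_bps = 721_000    # 721 Kbps usable throughput

    seconds = file_bytes * 8 / throughput_bps
    print(f"{seconds:.0f} s (~{seconds / 60:.1f} min)")  # ~58 s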
Network Interface Card (NIC)
The NIC is also referred to as an Ethernet card or network adapter. It is an expansion card that enables a computer to connect to a network, such as a home network or the Internet, using an Ethernet cable with an RJ-45 connector.
Smartphone
Smartphones use a touch screen to allow users to interact with them. There are thousands
of smartphone apps including games, personal-use, and business-use programs that can all
run on the phone. Example: Apple iPhone
Wi-Fi
Wi-Fi is a wireless network technology that utilizes one of the IEEE 802.11 wireless standards to achieve a wireless connection to a network. A home wireless network uses a wireless access point or router, broadcasting a signal with WPA or WEP encryption, to send and receive signals from wireless devices on the network. A wireless access point with two antennas is an example of how most home users connect to the Internet using a wireless device.
For the creation of multimedia on the PC there are hundreds of software packages available from manufacturers all over the world.
These software packages can cost anything from absolutely free (such software is normally called freeware or shareware) to many hundreds of dollars.
Adobe CS4
Adobe CS4 is a collection of graphic design, video editing, and web development applications made by Adobe Systems, many of which are industry standards. The collection includes:
Adobe Dreamweaver
Although a hybrid WYSIWYG and code-based web design and development application,
Dreamweaver’s WYSIWYG mode can hide the HTML code details of pages from the user,
making it possible for non-coders to create web pages and sites. WYSIWYG (What You See Is What You Get) web development software allows users to create websites without hand-coding HTML; everything can be done visually.
Adobe Fireworks
A graphics package that lets users create and edit bitmap and vector graphics, with features such as slices and the ability to add hotspots, for rapidly creating website prototypes and application interfaces.
Gimp
GIMP is a free alternative to Photoshop: cheaper, but not quite as capable.
Google SketchUp
Microsoft FrontPage
As a WYSIWYG editor, FrontPage is designed to hide the details of pages’ HTML code
from the user, making it possible for novices to easily create web pages and sites.
Apple QuickTime
Photoshop Pro
Microsoft PowerPoint
PowerPoint presentations are generally made up of slides that may contain text, graphics, movies, and other objects, which may be arranged freely on the slide.
Adobe Flash (formerly Macromedia Flash) is a multimedia platform that is popular for
adding animation and interactivity to web pages. Originally acquired by Macromedia, Flash was
introduced in 1996, and is currently developed and distributed by Adobe Systems.
Flash is commonly used to create animation, advertisements, and various web page Flash
components, to integrate video into web pages, and more recently, to develop rich Internet
applications.
Adobe Shockwave
4.8 Summary
SCSI (Small Computer System Interface) is a set of standards for physically connecting
and transferring data between computers and peripheral devices.
On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a “SCSI
ID”, which is a number in the range 0-7 on a narrow bus and in the range 0–15 on a wide
bus.
The Media Control Interface, MCI for short, is an aging API for controlling multimedia peripherals connected to a Microsoft Windows or OS/2 computer.
Memory and storage devices include Hard Drives, Random Access Memory (RAM), Read-
Only Memory (ROM), Flash Memory and Thumb Drives, and CD-ROM, DVD, and Blu-ray
discs.
A communication device is a hardware device capable of transmitting an analog or digital
signal over the telephone, other communication wire, or wirelessly.
Wi-Fi is a wireless network that utilizes one of the IEEE 802.11 wireless standards to achieve
a wireless connection to a network.
Answers to Check Your Progress
3. Non-Volatile Memory
4. a.True
5. Frame buffer
7. Modem
8. Radio Waves
10. Application
LESSON 5
BASIC SOFTWARE TOOLS FOR
MULTIMEDIA OBJECTS
Structure
5.1 Introduction
5.7 Summary
5.1 Introduction
The basic tool set for building a multimedia project contains one or more authoring systems and various editing applications for text, images, sound, and motion video. A few additional applications are also useful for capturing images from the screen, translating file formats, and making multimedia production easier.
Understand common software programs used to handle text, graphics, audio, video, and
animation in multimedia projects and discuss their capabilities.
Learn the hardware most used in making multimedia and choose an appropriate platform
for a project.
Determine which multimedia authoring system is most appropriate for any given
project.
A word processor is usually the first software tool computer users rely upon for creating text. The word processor is often bundled with an office suite. Word processors such as Microsoft Word and WordPerfect are powerful applications that include spellcheckers, table formatters, thesauruses, and prebuilt templates for letters, resumes, purchase orders, and other common documents.
OCR Software
There will be multimedia content and other texts to be incorporated into a multimedia
project, but no electronic text file. With optical character recognition (OCR) software, a flat-bed
scanner, and a computer, it is possible to save many hours of rekeying printed words, and get
the job done faster and more accurately than a roomful of typists.
OCR software turns bitmapped characters into electronically recognizable ASCII text. A
scanner is typically used to create the bitmap. Then the software breaks the bitmap into chunks
according to whether it contains text or graphics, by examining the texture and density of areas
of the bitmap and by detecting edges. The text areas of the image are then converted to ASCII
characters using probability and expert system algorithms.
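The same bitmap-to-text pipeline can be scripted today. A hedged sketch using the third-party Pillow and pytesseract packages, which wrap the Tesseract OCR engine (the scanned_page.png file name is a placeholder):

    # OCR a scanned page: bitmap in, plain text out.
    # Requires: pip install pillow pytesseract, plus the Tesseract engine.
    from PIL import Image
    import pytesseract

    scan = Image.open("scanned_page.png")  # bitmap produced by the scanner
    scan = scan.convert("L")               # grayscale often improves accuracy

    text = pytesseract.image_to_string(scan)
    print(text)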
Image-Editing Tools
Image-editing application is a specialized and powerful tool for enhancing and re-touching
the existing bitmapped images. These applications also provide many of the feature and tools
of painting and drawing programs and can be used to create images from scratch as well as
images digitized from scanners, video frame-grabbers, digital cameras, clip art files, or original
artwork files created with a painting or drawing package.
Multiple windows that provide views of more than one image at a time
Employment of a virtual memory scheme that uses hard disk space as RAM for images
that require large amounts of memory
Capable selection tools, such as rectangles, lassos, and magic wands, to select portions
of a bitmap
Image and balance controls for brightness, contrast, and color balance
Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and tinting
Geometric transformation such as flip, skew, rotate, and distort and perspective changes
24-bit color, 8- or 4-bit indexed color, 8-bit gray-scale, black-and-white, and customizable color palettes
Ability to create images from scratch, using line, rectangle, square, circle, ellipse, polygon,
airbrush, paintbrush, pencil, and eraser tools, with customizable brush shapes and user-
definable bucket and gradient fills
Multiple typefaces, styles, and sizes, and type manipulation and masking routines
Filters for special effects, such as crystallize, dry brush, emboss, facet, fresco, graphic
pen, mosaic, pixelize, poster, ripple, smooth, splatter, stucco, twirl, watercolor, wave, and
wind
Plug-Ins
Image-editing programs usually support powerful plug-in modules, available from third-party developers, that allow you to wrap, twist, shadow, cut, diffuse, and otherwise “filter” your images for special visual effects.
Painting and drawing tools, as well as 3-D modelers, are perhaps the most important
items in the toolkit because, of all the multimedia elements, the graphical impact of the project
will likely have the greatest influence on the end user. If the artwork is amateurish, or flat and
uninteresting, both the creator and the users will be disappointed.
Some software applications combine drawing and painting capabilities, but many authoring
systems can import only bitmapped images. Typically, bitmapped images provide the greatest
choice and power to the artist for rendering fine detail and effects, and today bitmaps are used
in multimedia more than drawn objects. Some vector based packages such as Macromedia’s
Flash are aimed at reducing file download times on the Web, and may contain both bitmaps
and drawn art.
An intuitive graphical user interface with pull-down menus, status bars, palette control,
and dialog boxes for quick, logical selection
Scalable dimensions, so you can resize, stretch, and distort both large and small bitmaps
Paint tools to create geometric shapes, from squares to circles and from curves to complex
polygons
Auto trace tool that turns bitmap shapes into vector-based outlines
Painting features such as smoothing coarse-edged objects into the background with anti-
aliasing, airbrushing in variable sizes, shapes, densities, and patterns; washing colors in
gradients; blending; and masking
Object and layering capabilities that allows to treat separate elements independently
All common color depths: 1-, 4-, 8-, 16-, 24-, or 32-bit color, and gray scale
Good color management and dithering capability among color depths using various color
models such as RGB, HSB, and CMYK
Good file importing and exporting capability for image formats such as PIC, GIF, and others
Sound editing tools for both digitized and MIDI sound lets us hear the music as well as
create it. By drawing a representation of a sound in fine increments, whether a score or a
waveform, it is possible to cut, copy, paste and otherwise edit segments of it with great precision.
System sounds are shipped with both Macintosh and Windows systems, and they are available as soon as the operating system is installed. For MIDI sound, a MIDI synthesizer is required to play and record sounds from musical instruments. For ordinary sound there is a variety of software, such as SoundEdit, MP3 cutters, and WaveStudio.
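Cutting a segment out of a digitized sound amounts to slicing sample frames, which Python’s standard wave module can do directly; the file names and cut points below are placeholders:

    # Cut a segment out of a WAV file by slicing its sample frames.
    import wave

    with wave.open("speech.wav", "rb") as src:
        params = src.getparams()  # channels, sample width, rate, frames...
        rate = src.getframerate()
        frames = src.readframes(src.getnframes())

    bytes_per_frame = params.nchannels * params.sampwidth
    start = int(2.0 * rate) * bytes_per_frame  # keep from 2.0 s...
    end = int(5.5 * rate) * bytes_per_frame    # ...to 5.5 s

    with wave.open("clip.wav", "wb") as dst:
        dst.setparams(params)
        dst.writeframes(frames[start:end])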
Animation and digital movies are sequences of bitmapped graphic scenes, or frames, which are rapidly played back. Most authoring tools adopt either a frame-based or an object-oriented approach to animation.
Moviemaking tools typically take advantage of QuickTime for Macintosh and Microsoft Video for Windows and let content developers create, edit, and present digitized motion video segments.
Video formats
A video format describes how one device sends video pictures to another device, such
as the way that a DVD player sends pictures to a television or a computer to a monitor. More
formally, the video format describes the sequence and structure of frames that create the
moving video image.
Video formats are commonly known in the domain of commercial broadcast and consumer
devices; most notably to date, these are the analog video formats of NTSC, PAL, and SECAM.
However, video formats also describe the digital equivalents of the commercial formats, the
aging custom military uses of analog video (such as RS-170 and RS-343), the increasingly
important video formats used with computers, and even such offbeat formats such as color
field sequential.
Video formats were originally designed for display devices such as CRTs (Cathode Ray Tubes). However, other kinds of displays share common source material, and because video formats enjoy wide adoption and have convenient organization, they are a common means of describing the structure of displayed visual information for a variety of graphical output devices.
A frame can consist of two or more fields, sent sequentially, that are displayed over time
to form a complete frame. This kind of assembly is known as interlace.
An interlaced video frame is distinguished from a progressive scan frame, where the
entire frame is sent as a single intact entity.
A frame consists of a series of lines, known as scan lines. Scan lines have a regular and
consistent length in order to produce a rectangular image. This is because in analog
formats, a line lasts for a given period of time; in digital formats, the line consists of a
number of pixels. When a device sends a frame, the video format specifies that each line
is sent independently by the device from any others and that all lines are sent in top-to-
bottom order.
As above, a frame may be split into fields: odd and even (by line “numbers”) or upper and lower, respectively. In NTSC (National Television System Committee), the lower field comes first, then the upper field, and together they make the whole frame. The basics of a format are aspect ratio, frame rate, and interlacing with field order if applicable. Video formats use a sequence of frames in a specified order. In some formats, a single frame is independent of any other (such as those used in computer video formats), so the sequence is only one frame. In other video formats, frames have an ordered position.
PAL
SECAM
ATSC Standards
DVB
ISDB
These describe strictly the format of the video itself, not the modulation used for transmission.
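The frame structure described above fixes the raw data rate a format implies. A sketch using common NTSC-derived digital video figures (720x480 at 29.97 frames per second with 4:2:2 sampling; the numbers are illustrative assumptions):

    # Raw (uncompressed) data rate implied by a frame structure.
    width, height = 720, 480  # active picture, NTSC-derived digital video
    fps = 29.97               # NTSC frame rate (two interlaced fields)
    bytes_per_pixel = 2       # 4:2:2 chroma subsampling, 8-bit samples

    bytes_per_frame = width * height * bytes_per_pixel
    rate = bytes_per_frame * fps
    print(f"{bytes_per_frame / 1000:.0f} KB/frame, {rate / 10**6:.1f} MB/s")
    # ~691 KB per frame, ~20.7 MB/s before any compression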
QuickTime
2. The QuickTime framework, which provides a common set of APIs for encoding and
decoding audio and video.
QuickTime players
QuickTime is distributed free of charge, and includes the QuickTime Player application.
Some other free player applications that rely on the QuickTime framework provide features not
available in the basic QuickTime Player. For example:
iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless.
QuickTime framework
Encoding and transcoding video and audio from one format to another.
Decoding video and audio, and then sending the decoded stream to the graphics or audio
subsystem for playback. In Mac OS X, QuickTime sends video playback to the Quartz
Extreme (OpenGL) Compositor.
The framework supports the following file types and codecs natively:
Audio
Apple Lossless
Digital Audio: Audio CD - 16-bit (CDDA), 24-bit, 32-bit integer & floating point
MIDI
Sun AU Audio
Video
MPEG-1, MPEG-2, and MPEG-4 Video file formats and associated codecs
(such as AVC)
Other video codecs: Apple Video, Cinepak, Component Video, Graphics, and
Planar RGB
The QuickTime (.mov) file format functions as a multimedia container file that contains
one or more tracks, each of which stores a particular type of data: audio, video, effects, or text
(for subtitles, for example). Other file formats that QuickTime supports natively (to varying
degrees) include AIFF, WAV, DV, MP3, and MPEG-1. With additional QuickTime Extensions, it
can also support Ogg, ASF, FLV, MKV, DivX Media Format, and others.
If your current software can do what you need, then there is no need to obtain a dedicated multimedia authoring package, because:
Most PCs sold today provide the necessary elements to produce at least sound and animation. Popular software for word processing, spreadsheets, DBMSs, graphing, drawing, and presentation has added capabilities for sound, image, and animation to its products. Nowadays you can:
2. Call a voice annotation, picture, or QuickTime/AVI movie from most word processing
applications.
The presentation will no longer be just a simple slide show; you can easily generate interesting titles, visual effects, and animated illustrations using your presentation software.
2. Import them from collections of clip art media (they provide quick and simple multimedia
productions).
3. License rights to use resources or content such as pictures, songs, music, and video
from their owners. Some simple multimedia projects can be produced in such a way that you cram all the organizing, planning, rendering, and testing stages into a single effort, making instant multimedia.
Apple Events,
Word Processors
Word
WordPerfect
Word Pro
Spreadsheets
Lotus 1-2-3
Excel
Databases
FileMaker Pro
Access
Presentation Tools
PowerPoint
2. Digital Audio
4. Video Editing
5. Animation
6. Multimedia Authoring
– The term sequencer comes from older devices that stored sequences of notes (“events”,
in MIDI).
– It is also possible to insert WAV files and Windows MCI commands (for animation and
video) into music tracks (MCI is a ubiquitous component of the Windows API.)
o Macromedia SoundEdit: a mature program for creating audio for multimedia projects
and the web that integrates well with other Macromedia products such as Flash and Director.
2. Digital Audio
Digital Audio tools deal with accessing and editing the actual sampled sounds that make
up audio:
Cool Edit: a very powerful and popular digital audio toolkit; emulates a professional
audio studio — multi-track productions and sound file editing including digital signal
processing effects.
Sound Forge: a sophisticated PC-based program for editing audio WAV files.
Pro Tools: a high-end integrated audio production and editing environment — MIDI
creation and manipulation; powerful audio mixing, recording, and editing software.
Adobe Illustrator: A powerful publishing tool from Adobe. Uses vector graphics; graphics
can be exported to Web.
– Allows layers of images, graphics, and text that can be separately manipulated for
maximum flexibility.
Macromedia Fireworks: Software for making graphics specifically for the web.
Macromedia Freehand: A text and web graphics editing tool that supports many bitmap
formats such as GIF, PNG, and JPEG.
4. Video Editing
Adobe Premiere: An intuitive, simple video editing tool for nonlinear editing, i.e., putting
video clips into any order:
o Provides a large number of video and audio tracks, superimpositions and virtual clips.
o A large library of built-in transitions, filters, and motions for clips enables effective multimedia productions with little effort.
Adobe After Effects: a powerful video editing tool that enables users to add and change
existing movies. Can add many effects: lighting, shadows, motion blurring; layers.
5. Animation
Multimedia APIs
o Java3D: API used by Java to construct and render 3D graphics, similar to the way in
which the Java Media Framework is used for handling media files.
1. Provides a basic set of object primitives (cube, splines, etc.) for building scenes.
2. It is an abstraction layer built on top of OpenGL or DirectX (the user can select which).
o DirectX: a Windows API that supports video, images, audio and 3-D animation
Rendering Tools:
o 3D Studio Max: rendering tool that includes a number of very high-end professional tools
for character animation, game development, and visual effects production.
o Softimage XSI: a powerful modeling, animation, and rendering package used for animation
and special effects in films and games.
6. Multimedia Authoring
Tools that provide the capability for creating a complete multimedia presentation, including
interactive user control, are called authoring tools/programs.
Macromedia Flash: allows users to create interactive movies by using the score metaphor,
i.e., a timeline arranged in parallel event sequences.
Quest: similar to Authorware in many ways, uses a type of flowcharting metaphor.
However, the flowchart nodes can encapsulate information in a more abstract way (called
frames) than simply subroutine levels.
In multimedia authoring systems, multimedia elements and events are regarded as objects.
On receiving messages, objects perform tasks depending on their properties and modifiers.
1. Interactivity
2. Playback
3. Editing
4. Programming / Scripting
5. Cross Platform
6. Internet Playability
7. Delivery/Distribution
8. Project organization
1. Editing and organizing features
Authoring systems include editing tools to create, edit, and convert multimedia elements
such as animation and video clips.
The organization, design, and production process for multimedia involves storyboarding
and flowcharting.
A visual flowcharting or overview facility illustrates project structure at a macro level.
2. Programming features
Visual programming with icons or objects is the simplest and easiest authoring process.
Visual authoring tools such as Authorware and IconAuthor are suitable for slide shows
and presentations.
3. Interactivity features
Interactivity gives the end user control over the content and flow of information in a
project.
o Card and page based authoring systems provide a simple and easily understood metaphor
for organizing multimedia elements.
o It contains media objects such as buttons, text fields, and graphic objects.
HyperCard (Mac)
o Icon based, event driven tools provide a visual programming approach to organize and
present multimedia.
o Flowchart can be built by dragging appropriate icons from a library, and then adding the
content.
o Authorware (Mac/Windows)
o IconAuthor (Windows)
Time-based tools are best suited for messages with a beginning and an end.
Image processing environments are used for tasks such as image enhancement and medical
imaging.
b. Painting programs
d. Resolution programs
2. Software that stores lines and shapes rather than individual pixels is known as:
d. Resolution software
4. ____________ software can rotate, stretch, and combine images with other model objects.
II. painting software B. can create pixels on the screen with a pointing device
III. photo management software C. can eliminate "red eye" and brush away blemishes
IV. drawing software D. can create objects or models that can be rotated or stretched
V. 3-D modeling software E. simplify and automate capturing, organizing, and editing digital images
VI. video editing software F. automates the creation of visual aids for lectures
System Standards
In addition to the various so called standards for general computer systems which tend
to be set by the manufacturers and are really proprietary or possibly de facto standards, there
are some developments specific to multimedia systems. These include the MPC standard, a
base specification for a multimedia PC, and interface standards such as MCI (Media Control
Interface), HCI (Human Computer Interface ISO 9241 under development), and API (Application
Programming Interface). Some 'standards' are beginning to develop for software, and for the
development of systems, including multimedia systems; these may eventually become ISO
standards. The IMA (Interactive Multimedia Association) is industry led, producing recommended
practices and is currently working on multimedia system services, data exchange and scripting
languages.
For scanning quality control and OCR (Optical Character Recognition) procedures and
preparation various national standards exist, such as North American ANSI standards.
ODA (Office Document Architecture) and SGML (Standard Generalized Markup Language,
ISO 8879) are standards for describing electronic documents, for document interchange. They
can also be used for hypertext, which is used in multimedia applications. These two standards
have been developed to define formats for presentation of multimedia and hypermedia
information, and are also necessary for editing and manipulating, and for facilitating interchange
of such data between applications. MHEG (Coded Representation of Multimedia and Hypermedia
Information Objects, draft ISO CD13522) is extending the standards for text to include other
data and media. HyTime (Hypermedia/Time-Based Document Structuring Language, ISO 10744)
extends the markup of single documents using SGML to multiple data objects or documents.
For data compression encoding, various standards exist for different media, e.g. JPEG
(Joint Photographic Experts Group) for the digital coding of still images, and MPEG (Motion
Picture Experts Group) for motion pictures and associated audio. For data encoding, many
standards exist which are really de facto standards rather than being formally accepted as
standards. These include the file formats mentioned earlier. The widely used TIFF (Tagged
Image File Format) is one. However, there are different versions of TIFF files which may not
be compatible.
CD-ROM is the most standardized and widely used of optical media. Standards exist for
the physical and optical characteristics of optical discs. The major disc sizes have different
format standards, with some national and ISO standards in place. The CD-ROM format is ISO
10149 for the recording format, with ISO 9660 for the ‘logical format’, i.e. the file structure. All
Photo CD disc formats conform to this. It should be noted that there are offshoots from the
main ISO 9660 standard.
Many WORM media and drives use proprietary standards. 5.25" WORM format discs
have three different incompatible standards; ISO 9171 covers formats A and B. Larger optical
discs have some draft standards. 5.25" and 3.5" rewritable optical discs have ISO standards
which are adhered to, but some imaging systems use nonstandard discs.
Volume and file structure standards enable operating systems to understand and access
files. The Yellow Book standard for CD-ROM and CD-ROM XA covers the use of audio and
video with computer data. The Green Book standard covers CD-I with its better audio and
video image quality. A new White Book covers the CD standard for Digital Video. An Orange
Book standard covers CD-R discs. For display standards see the earlier section on content
formats. Note should be taken that Apple machines and PCs differ in the way they store data
for screen display and files in the same format may not translate from one to the other.
Where ISO or national standards relevant to multimedia exist, they should be specified in
any project requirement, along with an instruction to state which standards are adopted for
particular aspects of a product or system where there is a choice.
5.7 Summary
A word processor is a regularly used tool in designing and building a multimedia project.
Image-editing software: bitmapped images provide the greatest choice and power to the
artist for rendering fine detail and effects.
Animations and digital video movies are sequences of bitmapped graphic scenes or
frames, rapidly played back.
With proper editing software, you can digitize video, edit, add special effects and titles,
mix sound tracks, and save the clip.
Three metaphors are used by authoring tools that make multimedia: card- and page-
based, icon- and object-based, and time-based.
For data compression encoding, various standards exist for different media, e.g. JPEG
(Joint Photographic Experts Group) for the digital coding of still images and MPEG (Motion
Picture Experts Group) for motion pictures and associated audio.
3. a. True
4. 3-D modeling
6. Authoring Tools
LESSON 6
MULTIMEDIA ELEMENTS – TEXT AND SOUND
Structure
6.1 Introduction
6.6 Summary
6.1 Introduction
Multimedia is the media that uses multiple forms of information content and information
processing (e.g. text, audio, graphics, animation, and video, interactivity) to inform or entertain
the user. Multimedia also refers to the use of electronic media to store and experience multimedia
content. Multimedia is a combination of various elements, such as text, images, video, sound,
and animation. Interactive multimedia allows the user to control what and when the elements
are delivered. The multimedia application definition using the building blocks defined as
components is a general approach that can easily integrate existing development tools.
Almost all multimedia content includes text in some form. Even a menu is text, accompanied
by a single action such as a mouse click, keystroke, or finger press on the monitor (in the
case of a touch screen). Text in multimedia is used to communicate information to the user.
Proper use of text and words in a multimedia presentation will help the content developer
communicate the idea and message to the user.
Many multimedia developers take advantage of this sense by incorporating sound into
their multimedia products. Sound enhances a multimedia application by supplementing
presentations, images, animation, and video. In the past, only those who could afford expensive
sound recording equipment and facilities could produce high-quality, digital sound. Today,
computers and synthesizers make it possible for the average person to produce comparable
sound and music. Sound is the term used for the analogue form; the digitized form of sound
is called audio. A sound is a waveform. It is produced when waves of varying pressure travel
through a medium, usually air. It is inherently an analogue phenomenon, meaning that the
changes in air pressure can vary continuously over a range of values.
In this lesson we will learn the different multimedia building blocks. Later we will learn the
significant features of text.
Describe the characteristics and attributes of text, graphic, sound, animation and video
elements that make up multimedia
Understand the various file formats used for each of these elements
1. Text: Text and symbols are very important for communication in any medium. With the
recent explosion of the Internet and the World Wide Web, text has become more important
than ever. The Web is built on HTML (Hypertext Markup Language), originally designed to
display simple text documents on computer screens, with occasional graphic images thrown
in as illustrations.
2. Audio: Sound is perhaps the most sensuous element of multimedia. It can provide the
listening pleasure of music, the startling accent of special effects or the ambience of a
mood-setting background.
3. Images: Images, whether analog or digital, play a vital role in multimedia. An image may
take the form of a still picture, a painting or a photograph taken with a digital camera.
4. Animation: Animation gives motion to static images by displaying a sequence of images
in rapid succession.
5. Video: Digital video has supplanted analog video as the method of choice for making
video for multimedia use. Video in multimedia is used to portray real-time moving pictures
in a multimedia project.
Words and symbols in any form, spoken or written, are the most common system of
communication. They deliver the most widely understood meaning to the greatest number of
people. Most academic text, such as journals and e-magazines, is available in Web-browser-
readable form.
Text is a collection of characters that carries a specific meaning the user can readily
understand. Text is the basic medium for communicating information: what you are trying to
say is given as text. A text element in multimedia thus plays a vital role.
Definition: It is a printed or written version of speech, and also it gives the main facts
about the subjects.
Typeface: A typeface is a family of graphic characters that usually includes many type
sizes and styles.
Font: A font is a collection of characters of a single size and style belonging to a particular
typeface family. Typical font styles are bold face and italic.
Type sizes are usually expressed in points; one point is .0138 inches or about 1/ 72 of an
inch.
The font’s size is the distance from the top of the capital letter to the bottom of the descends
in letters such as g and y.
A font’s size does not exactly describe the height and width of its characters. This is
because the x-height (the height of the lower case letter x) of two fonts may vary, while
the height of the capital letters of those fonts may be the same.
Computer fonts automatically add space below the descender to provide appropriate line
spacing, or leading (pronounced “ledding”).
Leading can be adjusted in most programs on both the Macintosh and in Windows.
Lowercase letters vary in their ascenders and descenders, whereas uppercase letters
do not.
High-resolution monitors and printers can make more attractive-looking and varied
characters because there are more fine little squares or dots per inch (dpi).
The same letter can look very different when you use different fonts and faces:
Cases: A font is always stored in two cases: capital letters (uppercase) and small
letters (lowercase).
Typefaces of fonts can be described in many ways, but the most common characterization
of a typeface is serif and sans serif. The serif is the little decoration at the end of a letter
stroke.
Example: Times, Times New Roman, Bookman are some fonts which come under serif
category. Arial, Optima, Verdana are some examples of sans serif font. Serif fonts are generally
used for body of the text for better readability and sans serif fonts are generally used for
headings.
Installation of Fonts
Fonts can be installed on the computer by opening the fonts folder through Windows
Explorer. Go to C:\WINDOWS\FONTS or C:\WINNT\FONTS. When the folder opens, select the fonts
you want to install from an alternate folder and copy and paste them into the fonts folder. The
second option is to go to Start > Settings > Control Panel > Fonts, then go to File > Install New
Font.
Usage of Fonts
After installing a font, you can apply it to text in any text-editing program. A user can
also use the installed font in HTML documents, but the document
can be viewed by only those users who have the same font installed on their computers.
Always remember the name of the font and keep in mind that the name of the font is not the
same as the file name of the .ttf file. If a user does not remember the font name then he can
find it by going through the font list or by visiting the .ttf file.
In large size headlines, the kerning (spacing between the letters) can be adjusted
In text blocks, the leading for the most pleasing line can be adjusted.
Drop caps and initial caps can be used to accent the words.
The different effects and colors of a font can be chosen in order to make the text look in
a distinct manner.
For special attention to the text the words can be wrapped onto a sphere or bent like a
wave.
Meaningful words and phrases can be used for links and menu items.
In case of text links (anchors) on web pages the messages can be accented.
The most important text in a web page, such as menus, can be put in the top 320 pixels.
The basic element of multimedia is text. However, text should be kept to a minimum to
avoid overcrowding, unless the application contains a lot of reference material. Short text can
be read easily and quickly, unlike longer passages, which can be time-consuming and tiring.
Putting a lot of text in a multimedia presentation is not the best way to transfer information
to a wide range of audiences. Combining other elements such as pictures, graphics, diagrams,
etc., can help reduce the amount of text needed to provide information.
From a design point of view, text should fill less than half the screen. Text can be used
in multimedia in the following ways:
in text messaging
in advertisements
in a website
Interactive Buttons
Users can create their own buttons from bitmaps and graphics.
The design and labeling of the buttons should be treated as an industrial art project.
Reading a hard copy is easier and faster than reading from the computer screen.
HTML Documents
HTML stands for Hypertext Markup Language which is the standard document format
used for Web pages.
An advanced form of HTML is DHTML that stands for Dynamic Hypertext Markup
Language. It uses Cascading Style Sheets (CSS).
Symbols are concentrated text in the form of stand-alone graphic constructs used to
convey meaningful messages; symbols that convey human emotions are called emoticons.
Text Layout
While creating a multimedia presentation, the presenter should plan the text layout to let
a reader read it with ease. One of the first things to be kept in mind is the length of the text. It
should be neither too long nor too short. For a printed document, a line containing 13 to 17 words
is sufficient. A line having more than 17 words would be too long to fit on a screen and would
be difficult to follow. On the other hand, a very short line would not look good on screen.
Therefore, for better presentation, a line of around 8 to 15 words should be used.
Using text in websites attracts a visitor's attention as well as helps him in understanding
the webpage better. It is far better than the use of meaningless graphics and images, which do
not contribute to an understanding of the page.
Website loading speed is one of the important factors that influence conversion, as
visitors start to leave a page if it takes more than eight seconds to load. A website that
contains mostly text loads faster than websites that contain the following:
Internal code (not placed in external CSS, JS, etc. files and linked to)
JavaScript (for menus, including various stat-tracking scripts, such as Google Analytics)
Audio and video clips on the page (especially without transcripts, which hurts accessibility;
if you do use audio/video, do not auto-launch it and have a button to turn it on/off)
Table-based layouts, which are twice as large in file size as ones built in CSS
Most films start with titles and end with credits. The text is shown over either a plain
background or a colored background. Typography looks different in different formats, such as
in film subtitles, on websites, posters, essays, etc. To include text in multimedia, a designer
has to keep in mind the points given below:
The format of the project (video, website, blog, slideshow, etc.).
Before adding subtitles to a film, people working on the film need to look into different
font styles, spacing, font color and size. Some fonts work well on a website, while others
work well in print.
Since text ads are keyword oriented, they draw more attention than banner advertising.
Text ads are inexpensive, making them affordable and effective for your business.
There are a few websites that offer a flat-fee rental service to place your text-based
advertisements.
A few websites request a one-time payment to place your text ads.
The foremost benefit of having text based advertisements is that it helps in improving
your search engine ranking.
Since it creates more visibility and draws more traffic to your site, your page rank will be
improved.
Thus, text ads will help in making your business a successful venture.
The American Standard Code for Information Interchange (ASCII) is the 7-bit character
coding system most commonly used by computer systems in the United States and abroad.
ASCII assigns numeric values to 128 characters, including both lower- and uppercase
letters, punctuation marks, Arabic numerals and math symbols. 32 control characters are also
included.
These control characters are used for device control messages, such as carriage return,
line feed, tab and form feed.
A byte, which consists of 8 bits, is the most commonly used building block for computer
processing. ASCII uses only 7 bits to code its 128 characters; the 8th bit of the byte is unused.
This extra bit allows another 128 characters to be encoded before the byte is used up, and
computer systems today use these extra 128 values for an extended character set. The extended
character set is commonly filled with ANSI (American National Standards Institute) standard
characters, including frequently used symbols.
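As a quick illustration (a minimal Python sketch, not part of the original text), the
character-to-number mapping described above can be inspected directly:

    # Minimal sketch: inspecting 7-bit ASCII values.
    for ch in ["A", "a", "0"]:
        print(ch, "->", ord(ch))      # e.g. 'A' -> 65, 'a' -> 97, '0' -> 48

    # Control characters occupy the first 32 values,
    # e.g. tab (9), line feed (10), carriage return (13).
    print(ord("\t"), ord("\n"), ord("\r"))

    # 7 bits give 2**7 = 128 values; the unused 8th bit doubles this
    # to 256, which is where extended (e.g. ANSI) characters live.
    print(2**7, 2**8)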
Unicode
Unicode makes use of a 16-bit architecture for multilingual text and character encoding.
Unicode can represent about 65,000 characters from all known languages and alphabets in the world.
Several languages share a set of symbols that have a historically related derivation; the
shared symbols of each language are unified into collections of symbols (Called scripts). A
single script can work for tens or even hundreds of languages.
Microsoft, Apple, Sun, Netscape, IBM, Xerox and Novell are participating in the
development of this standard and Microsoft and Apple have incorporated Unicode into their
operating system.
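A small Python sketch (an illustration, not from the original text) shows code points
from different scripts and a 16-bit encoding of them:

    # Minimal sketch: Unicode code points and a 16-bit encoding.
    for ch in ["A", "é", "அ"]:        # Latin, accented Latin, Tamil
        print(ch, hex(ord(ch)))

    # UTF-16 stores these characters in 16 bits (2 bytes) each;
    # the first two bytes of the output are the byte-order mark.
    print("Aé".encode("utf-16"))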
With font-editing and design tools such as the following, professional typographers create
distinct text and display faces.
(i) ResEdit
ResEdit is a resource editor available from Apple that is useful for creating and changing
graphic resources such as cursors, icons, dialog boxes, patterns, keyboard maps, and
bitmapped fonts on the Macintosh.
It can be used to edit or create new font resources for storing the bitmaps of screen fonts.
(ii) Fontographer
Fontographer, a powerful font editor supplied by Macromedia, is a specialized graphics
editor for both Macintosh and Windows platforms.
You can use it to develop PostScript, TrueType and bitmapped fonts for Macintosh,
Windows, DOS, NeXT, and Sun workstations.
Designers can also modify existing typefaces, incorporate PostScript artwork, automatically
trace scanned images, and create designs from scratch.
Fontographer’s features include a freehand drawing tool to create professional and precise
in-line and outline drawings of calligraphic and script characters.
Fontographer allows the creation of multiple font designs from two existing typefaces,
and you can design lighter or heavier fonts by modifying the weight of an entire typeface.
Fonts can be condensed, expanded, scaled, rotated, and skewed to create new unique
typefaces.
A metric window provides complete control over character width, spacing, offset, and
kerning.
Type-Designer
Type-Designer for Windows from DS Design is a font editor that lets you create, convert,
and manipulate PostScript Type1 and TrueType fonts as well as EPS file format illustrations.
An extensive palette of editing tools allows you to make changes to a font’s outline.
With Type-Designer you can open up to eight typefaces simultaneously and cut and
paste characters between them.
Font Monger
Font Monger from Ares Software offers a proprietary hinting technology to ensure that
your fonts will look good regardless of size.
To create new fonts or to manipulate existing ones, Font Monger includes a freehand
drawing tool, a scissors tool, and a gizmo tool that rotates, slants, and skews character
outlines. Font Monger converts Macintosh or PC fonts to either platform as well as in any
direction between PostScript Type 1, Type 3, and True Type formats.
It allows you to edit fonts and expand them with small caps, oblique, subscript or
superscript characters.
Font Monger will also save the previous original characters in the PostScript font so you
can modify it further in the future, or, if you wish to save on disk space, compress the font
and it will remove the extra information.
Font Monger does not allow editing of the actual outlines of a font but it allows many other
functions such as the ability to copy characters between fonts, perform various
transformations to any or all characters of a font, and create a variety of composite
characters such as fractions and accented characters.
Cool 3D Text
Cool 3D Production Studio is a program for creating and animating 3D text and graphics,
for videos and other multimedia products. This software runs on Windows 98SE/ ME/2000/XP.
With this program, a user can create 3D graphics and animations for videos. It includes new
modeling tools, animation plug-ins, and new features for animation and video.
Font Chameleon
Font Chameleon from Ares software for both Macintosh and Windows platforms builds
millions of different fonts from a single master font outline.
The program provides a number of pre-set font descriptors, which you build into a font.
With slide bars you can manipulate various aspects of the font, including its weight, width,
x-height, ascenders and descenders, and the blend of the serifs.
The fonts you do build from the master outline can be used on the Macintosh, Windows,
or OS/2 platforms.
Most designers find it easier to make pretty type starting with ready-made fonts, but
some will create their own custom fonts using font-editing and design tools such as
Fontographer.
Hypermedia information spaces are connected by non-linear links which a user may
follow in any order. Multimedia information spaces are arranged sequentially, with only one
path through the information provided.
Hypertext is different from normal text in that it is nonlinear. The reader need not read a
document from beginning to end, but can jump around within the document by clicking on hot
spots (or hyperlinks) in the text.
On the other hand, hypermedia involves more than simply hyperlinked text. It also
incorporates images, sounds, and video into the document. This allows for a more graphical
interface to information. Most web pages should be considered hypermedia instead of simply
hypertext.
The function of hypertext is to build links and generate an index of words. The index
helps to find and group words as per user’s search criteria. Hypertext systems are very useful
in multimedia interactive education courseware. Hypertext systems provide both unidirectional
and bi-directional navigation. Navigation can be through buttons or through simple, plain text.
The simplest and easiest navigation is through linear hypertext, where information is organized
in a linear fashion. Nonlinear hypertext, however, is the ultimate goal of effective navigation.
Individual chunks of information are usually referred to as documents or nodes, and the
connections between them as links or hyperlinks the so-called node-link hypermedia model.
The entire set of nodes and links forms a graph network. A distinct set of nodes and links which
constitutes a logical entity or work is called a hyper document – a distinct subset of hyperlinks
is called a hyper web. A source anchor is the starting point of a hyperlink and specifies the part
of a document from which an outgoing link can be activated. Typically, the user is given visual
cues as to where source anchors are located in a document (for example, a highlighted phrase
in a text document). A destination anchor is the endpoint of a hyperlink and determines what
part of a document should be on view upon arrival at that node (for example, a text might be
scrolled to a specific paragraph). If an entire document is specified as the destination, viewing
commences at some default location within the document (for example, the start of a text).
Software robots visit Web pages and index entire Web sites.
Information management and hypertext programs present electronic text, images, and
other elements in a database fashion.
Categorical search
Adjacency
Word relationship
Alternates
Frequency
Association
Truncation
Negation
Intermediate words
Hypermedia Structures
Links
Nodes
Anchors
Links
Links are connections between conceptual elements and are known as navigation
pathways and menus.
Anchors
Anchor is defined as the reference from one document to another document, image,
sound, or file on the Web.
Hypertext Tools
Two functions common to most hypermedia text management systems are building
(authoring) and reading.
o Creating links
Technical documentation
Electronic catalogues
Interactive kiosks
Educational courseware
Sometimes a physical web page behaves like two or more separate chunks of content.
The page is not the essential unit of content in websites built with Flash (an animation technology
from Macromedia) and in many non-web hypertext systems. Hence, the term node is used as
the fundamental unit of hypertext content. Links are the pathways between nodes. When a
user clicks links a succession of web pages appear and it seems that a user is navigating the
website. For a user, exploring a website is much like finding the way through a complex physical
environment such as a city. The user chooses the most promising route and, if lost, may
backtrack to familiar territory or even return to the home page to start over. A limitation of the
navigation metaphor is that it does not correspond to the full range of user behaviour. The
majority of users click the most promising links they see, which has forced web designers to
create links that attract users.
Information Structures
Designers of websites and other hypertexts must work hard to decide which nodes will be
linked to which other nodes. There are familiar arrangements of nodes and links that guide
designers as they work. These are called information structures. Hierarchy, web-like and
multi-path structures are three of the most important.
Hierarchical Structure
The hierarchy is the most important structure because it is the basis of almost all websites
and most other hypertexts. Hierarchies are orderly (so users can grasp them) and yet they
provide plenty of navigational freedom. Users start at the home page, descend the branch that
most interests them, and make further choices as the branch divides. At each level, the
information on the nodes becomes more specific. Notice that branches may also converge.
When designing larger hypertexts, website designers must choose between making the
hierarchy broader (putting more nodes on each level) or deeper (adding more levels). One
well-established design principle is that users more easily navigate a wide hierarchy (in which
nodes have as many as 32 links to their child nodes) than a deep hierarchy.
Web-like Structures
Nodes can be linked to one another in web-like structures. There are no specific designs
to follow but web designers must take care in deciding which links will be most helpful to users.
Many web-like structures become tangled and cause trouble for users navigating them.
Multi-path Structures
It is possible to build a sequence of nodes that is in large part linear but offers various
alternative pathways. This is called multi-path structure. Users find multi-path structures within
hierarchical websites. For instance, a corporate website may have a historical section with a
page for each decade of the company’s existence. Every page has optional digressions, which
allow the user to discover events of that decade in more depth. Many web-like hypertexts
are short stories and other works of fiction, in which artistic considerations may override
the desire for efficient navigation.
a. typeface
b. font
c. point
d. link
2. Which of the following is a term that applies to the spacing between characters of text?
a. Leading
b. Kerning
c. Tracking
d. Dithering
a. Compiling
b. Anti-aliasing
c. Hyperlinking
d. Authoring
a. High Quality
b. Lower Quality
c. Same Quality
d. Bad Quality
The multimedia application user can use sound right off the bat on both the Macintosh
and on a multimedia PC running Windows, because beeps and warning sounds are available
as soon as the operating system is installed. On the Macintosh you can choose one of several
sounds for the system alert. In Windows, system sounds are WAV files, and they reside in the
Windows\Media subdirectory.
There are still more choices of audio if Microsoft Office is installed. Windows makes use
of WAV files as the default file format for audio and Macintosh systems use SND as default file
format for audio.
Digital Audio
The sound recorded on an audio tape through a microphone or from other sources is in
an analogue (continuous) form. The analogue format must be converted to a digital format for
storage in a computer. This process is called digitizing. The method used for digitizing sound is
called sampling.
Digital audio represents a sound stored in thousands of numbers or samples. The quality
of a digital recording depends upon how the samples are taken. Digital data represents the
loudness at discrete slices of time. It is not device dependent and should sound the same each
time it is played. It is used for music CDs.
Preparing digital audio files is fairly straightforward. If you have analog source materials
(music or sound effects recorded on analog media such as cassette tapes), the first steps are:
- Balancing the need for sound quality against your available RAM and hard disk
resources.
- Digitizing the analogue material by recording it onto computer-readable digital media.
The sampling rate determines the frequency at which samples will be drawn for the
recording.
The number of times the analogue sound is sampled during each period, and transformed
into digital information, is called the sampling rate. Sampling rates are measured in hertz (Hz)
or kilohertz (kHz). The most common sampling rates used in multimedia applications are
44.1 kHz, 22.05 kHz and 11.025 kHz. Sampling at higher rates more accurately captures the
high-frequency content of the sound. A higher sampling rate means higher sound quality.
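A minimal Python sketch (illustrative only, with an assumed 440 Hz test tone) shows how
the sampling rate determines how many samples represent the same slice of sound:

    # Minimal sketch: sampling a 440 Hz sine tone at two common rates.
    import math

    def sample_tone(freq_hz, rate_hz, seconds=0.01):
        # One sample per 1/rate_hz seconds.
        n = int(rate_hz * seconds)
        return [math.sin(2 * math.pi * freq_hz * i / rate_hz) for i in range(n)]

    cd = sample_tone(440, 44100)      # 44.1 kHz: 441 samples per 10 ms
    low = sample_tone(440, 11025)     # 11.025 kHz: 110 samples per 10 ms
    print(len(cd), len(low))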
Sampling rate and sound bit depth are the audio equivalents of the resolution and color depth
of a graphic image. Bit depth is the amount of space, in bits, used for storing each piece of
audio information. The higher the number of bits, the higher the quality of the sound. Multimedia
sound comes in 8-bit, 16-bit, 32-bit and 64-bit formats. An 8-bit sample has 2^8, or 256,
possible values.
A single bit depth and a single sampling rate are recommended throughout a work. An
audio file's size can be calculated with the simple formula:
File size on disk = (length in seconds) × (sample rate) × (bit depth ÷ 8 bits per byte) ×
(number of channels: 1 for mono, 2 for stereo)
Bit Rate refers to the amount of data, specifically bits, transmitted or received per second.
It is comparable to the sample rate but refers to the digital encoding of the sound. It refers
specifically to how many digital 1s and 0s are used each second to represent the sound signal.
This means the higher the bit rate, the higher the quality and size of your recording. For instance,
an MP3 file might be described as having a bit rate of 320 kb/s, or 320,000 bits per second.
This indicates the amount of compressed data needed to store one second of music.
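The formula and the bit-rate example above can be checked with a short Python sketch
(an illustration, not part of the original text):

    # Minimal sketch: uncompressed (PCM) file size vs. MP3 bit rate.
    def pcm_file_size(seconds, sample_rate, bit_depth, channels=1):
        # channels = 1 for mono, 2 for stereo.
        return seconds * sample_rate * (bit_depth / 8) * channels

    # One minute of CD-quality stereo audio (44.1 kHz, 16-bit):
    print(pcm_file_size(60, 44100, 16, channels=2) / 1e6, "MB")  # ~10.6 MB

    # The same minute as a 320 kb/s MP3: bit rate / 8 bytes per second.
    print(320000 / 8 * 60 / 1e6, "MB")                           # 2.4 MB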
Mono or Stereo
Mono sounds are flat and unrealistic compared to stereo sounds, which are much more
dynamic and lifelike. However, stereo sound files require twice the storage capacity of mono
sound files. Therefore, if storage and transfer are concerns, mono sound files may be the more
appropriate choice.
The sample size is the amount of information stored per sample. This is also called the bit resolution.
There are many different types of digital audio file formats that have resulted from working
with different computer platforms and software. Some of the better known formats include:
WAV
WAV is the Waveform format. It is the most commonly used and supported format on the
Windows platform. Developed by Microsoft, the Wave format is a subset of RIFF. RIFF is
capable of sample sizes of 8 and 16 bits. With Wave, there are several different encoding
methods to choose from, including Wave or PCM format. Therefore, when developing sound
for the Internet, it is important to make sure you use an encoding method that the player you
are recommending supports.
AU
AU is the Sun Audio format. It was developed by Sun Microsystems to be used on UNIX,
NeXT and Sun Sparc workstations. It is a 16-bit compressed audio format that is fairly prevalent
on the Web. This is probably because it plays on the widest number of platforms.
RA
RA is the RealAudio format developed by RealNetworks, a streaming audio format designed
for delivering sound in real time over the Internet, including over low-bandwidth connections.
AIFF
AIFF or AIF is Apple's Audio Interchange File Format. This is the Macintosh waveform
format. It is also supported on IBM compatibles and Silicon Graphics machines. The AIFF
format supports a large number of sampling rates and sample sizes up to 32 bits.
MPEG
MPEG and MPEG2 are the Motion Picture Experts Group formats. They are a compressed
audio and video format. Some Web sites use these formats for their audio because their
compression capabilities offer ratios of at least 14:1. These formats will probably become
quite widespread as the price of hardware based MPEG decoders continues to go down and
as software decoders and faster processors become more mainstream. In addition, MPEG is
a standard format.
MIDI
MIDI (MID, MDI, MFF) is an internationally accepted file format used to store Musical
Instrument Digital Interface (MIDI) data. It is a format used to represent electronic music produced
by a MIDI device (such as a synthesizer or electronic keyboard). This format provides instructions
on how to replay music, but it does not actually record the waveform. For this reason, MIDI files
are small and efficient, which is why they are often used on the Web.
SND
SND is the Sound file format developed by Apple. It is used mainly within the operating
system and has a limited sample size of eight bits.
For a multimedia application to work on both PCs and Macs, save it using either the
Musical Instrument Digital Interface (MIDI) or the Audio Interchange File Format (AIFF) file
format. It is recommended to use AIFF format if sound is a part of the application. AIFF is a
cross-platform format, and it can also reside outside the multimedia application, so the file
occupies less space and plays faster. Moreover, if a user wants to burn the multimedia application
onto a CD, AIFF format can be used.
Digital Recordings
In digital recording, digital sound can be recorded through a microphone, keyboard or DAT
(Digital Audio Tape). Recording directly through a microphone connected to a sound card is
usually avoided because of problems with sound amplification and recording consistency.
Recording on a tape recorder first, making all the changes, and then recording through the
sound card is recommended.
Once a recording has been made, it will almost certainly need to be edited. The basic
sound editing operations that most multimedia producers need are described in the
paragraphs that follow.
1. Multiple Tasks: Able to edit and combine multiple tracks and then merge the tracks and
export them in a final mix to a single audio file.
2. Trimming: Removing dead air or blank space from the front of a recording, and any
unnecessary extra time off the end, is your first sound editing task.
3. Splicing and Assembly: Using the same tools mentioned for trimming, you will probably
want to remove the extraneous noises that inevitably creep into recording.
4. Volume Adjustments: If you are trying to assemble ten different recordings into a single
track, there is little chance that all the segments will have the same volume.
5. Format Conversion: In some cases your digital audio editing software might read a
format different from that read by your presentation or authoring program.
6. Resampling or down sampling: If you have recorded and edited your sounds at 16 bit
sampling rates but are using lower rates you must resample or down sample the file.
7. Equalization: Some programs offer digital equalization capabilities that allow you to modify
a recording's frequency content so that it sounds brighter or darker.
8. Digital Signal Processing: Some programs allow you to process the signal with
reverberation, multitap delay, and other special effects using DSP routines.
10. Time Stretching: Advanced programs let you alter the length of a sound file without
changing its pitch. This feature can be very useful but watch out: most time stretching
algorithms will severely degrade the audio quality.
The MIDI (Musical Instrument Digital Interface) is a connectivity standard that musicians
use to hook together musical instruments (such as keyboards and synthesizers) and computer
equipment. Using MIDI, a musician can easily create and edit digital music tracks. The MIDI
system records the notes played, the length of the notes, the dynamics (volume alterations),
the tempo, the instrument being played, and hundreds of other parameters, called control
changes. Because MIDI records each note digitally, editing a track of MIDI music is much
easier and more accurate than editing a track of audio. The musician can change the notes,
dynamics, tempo, and even the instrument being played with the click of a button. Also, MIDI
files are small, compact data files, so they take up very little disk space. The only catch is that
you need MIDI-compatible hardware or software to record and play back MIDI files. MIDI provides
a protocol for passing detailed descriptions of musical scores, such as the notes, the sequences
of notes, and the instrument that will play these notes.
A MIDI file is very small, as small as 10 KB for a 1-minute playback (a .wav file of the same
duration requires 5 to 10 MB of disk space). This is because it doesn’t contain audio waves like
audio file formats do, but instructions on how to recreate the music. Another advantage of the
file containing instructions is that it is quite easy to change the performance by changing,
adding or removing one or more of the instructions – like note, pitch, tempo, and so on – thus
creating a completely new performance. This is the main reason for the file to be extremely
popular in creating, learning, and playing music.
MIDI actually consists of three distinctly different parts – the physical connector, the
message format, and the storage format. The physical connector connects and transports
data between devices; the message format (considered to be the most important part of MIDI)
controls the stored data and the connected devices; and the storage format stores all the data
and information. Today, MIDI is seen more as a way to make music than as a format or a
protocol. This is why phrases like "composing in MIDI" and "creating MIDI" are quite
commonly used by musicians.
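To make the message format concrete, here is a minimal Python sketch (an illustration;
a real project would use a MIDI library) that builds a raw three-byte "note on" channel
message:

    # Minimal sketch: a MIDI channel-voice message is three bytes:
    # a status byte, then two data bytes.
    def note_on(channel, note, velocity):
        # channel 0-15, note 0-127 (60 = middle C), velocity 0-127.
        status = 0x90 | channel       # 0x9n means "note on, channel n"
        return bytes([status, note, velocity])

    msg = note_on(channel=0, note=60, velocity=100)
    print(msg.hex())                  # 903c64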
MIDI files may be converted to MP3, WAV, WMA, FLAC, OGG, AAC, or MPC on any Windows
platform using a converter such as Total Audio Converter.
Advantages of MIDI
Since they are small, MIDI files embedded in web pages load and play promptly.
Length of a MIDI file can be changed without affecting the pitch of the music or degrading
audio quality
MIDI files will be 200 to 1000 times smaller than CD-quality digital audio files. Therefore,
MIDI files are much smaller than digitized audio.
MIDI files do not take up as much RAM, disk space, or CPU resources as digital audio files.
A single MIDI link can carry up to sixteen channels of information, each of which can be
routed to a separate device.
A file format determines the application that is to be used for opening a file.
Following is the list of different file formats and the software that can be used for opening
a specific file.
Software such as Toast and CD-Creator from Adaptec can translate the digital files of Red
Book audio format on consumer compact discs directly into a digital sound editing file, or
decompress MP3 files into CD-Audio. There are several tools available for recording audio.
Following is the list of different software that can be used for recording and editing audio;
SoundEdit 16
MIDI does not have consistent playback quality while digital audio provides consistent
audio quality.
One requires knowledge of music theory in order to run MIDI while digital audio does not
have this requirement.
MIDI files sound better than digital audio files when played on a high quality MIDI device.
MIDI data are completely editable, right down to the level of an individual note. You can
manipulate the smallest detail of a MIDI composition in ways that are impossible with
digital audio.
Audio CD Playback
Audio compact discs come in the standard format of Compact Disc Digital Audio (CD-DA,
often written CDDA). The standard is defined in the Red Book, which contains the technical
specifications for
all CD formats. The largest entity on a CD is called a track. A CD can contain up to 99 tracks
(including data track for mixed mode discs).
The best part is that you can sort the order in which you want to listen the tracks and
continue playing without interruption. If the Auto Insert Notification option is disabled or
unavailable, audio compact discs (CDs) are not played automatically. Instead, you must start
CD Player and then click Play.
To cause an audio CD to be played as soon as you start CD Player, follow these steps:
1. Insert the CD you want to play into the drive. (Optional) If the CD doesn't start playing, or
if you want to select a disc that is already inserted, click the arrow below the Now Playing
tab, and then click the drive that contains the disc.
2. Double-click the Programs folder; double-click the Accessories folder, and then double
click the Multimedia folder.
4. On the Shortcut tab, change the entry in the Target box to read: C:\Windows\Cdplayer.exe /PLAY
5. Click OK.
6. Use the “Stop,” “Pause,” “Skip next track” and “Previous track” buttons to set your
preferences while playing the CD.
7. Select Edit Play List from the Disc menu in order to change the sequence of the tracks.
8. Adjust the volume by clicking on the “Speakers” icon on the task bar and to adjust the
bass, treble and other options go to Equalizer.
1. To skip a song, click the Next button while the song is playing.
2. The song will be skipped. If repeat play is turned on, the song will not play again during
that playback session.
3. If you accidentally skip a song you’d like to hear, double-click the song in the playlist.
Audio Recording
In digital recording, we start with an analogue audio signal and convert it to digital data to
be stored. Changes in electrical voltage are encoded as discrete samples. On playback we
retrieve the digital data and convert it back to an analogue signal. Here, fidelity depends on
the quality of the Analogue-to-Digital (A-to-D) and Digital-to-Analogue (D-to-A) converters.
Once an audio signal is stored as digital data, the storage medium has no effect on the
quality of the sound.
At the heart of hard-disk recording and editing is digital audio. When we record digitally,
sound is converted to an electrical signal by a microphone. That signal is coded into numbers
by an analogue-to-digital converter (ADC). The numbers are stored in memory, then played
back upon demand by sending the numbers to a digital-to-analogue converter (DAC). The
resulting signal is sent through an amplifier and speakers, so we hear a reproduction of the
original sound.
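The round trip can be simulated in a few lines of Python (a minimal sketch, assuming a
440 Hz input signal and 8-bit samples):

    # Minimal sketch of the A-to-D / D-to-A chain described above.
    import math

    RATE = 8000                        # samples per second
    LEVELS = 2 ** 8                    # 256 levels for 8-bit samples

    def adc(t):
        analogue = math.sin(2 * math.pi * 440 * t)       # value in [-1, 1]
        return round((analogue + 1) / 2 * (LEVELS - 1))  # integer 0..255

    def dac(sample):
        return sample / (LEVELS - 1) * 2 - 1             # back to [-1, 1]

    stored = [adc(i / RATE) for i in range(8)]           # numbers in memory
    print(stored)
    print([round(dac(s), 3) for s in stored])            # reconstructed signal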
Types of Storage
Devices used to capture, store and access sound will fall into some combination of the
following categories:
Analogue or digital
1. Input – Microphone
3. Output – Loudspeaker
These recommendations are intended to produce the best possible audio recordings. A
good audio recording dramatically improves the transcription quality. It lets transcriptionists
focus on the finer details, such as researching difficult words and ensuring correct punctuation,
rather than trying to discern what was said. This is particularly important in cases with multiple
speakers, background noise, complicated vocabulary, or heavy accents. Use a high-quality
microphone, either a headset microphone or a mounted directional microphone. As a rule of
thumb, spending at least $50 on a microphone is a good investment.
If a headset microphone is used, be sure that the transducer is at least 1" to 3" away from the
face and slightly below the lower lip. A standing microphone should be placed 9" to 15" directly
in front of the speaker. This provides the best trade-off between clarity and risk of over-saturation.
If there are multiple speakers, providing a separate microphone to each speaker is best. A bi-
directional microphone works well for two speakers sitting across from each other at the
appropriate distance away from the microphone. The multiple microphone signals should be
mixed into a single channel. The speaker should not hold or wear the microphone. It should
either be a headset mic or be mounted on a stable structure in front of them. This reduces the
likelihood that the microphone will move around during recording.
Use a microphone “pop” filter, either one that comes with the microphone or a separately
purchased standing one. Calibrate the input level. If your recording device has a VU meter,
have the speaker talk naturally – at the appropriate distance from the mic – and make sure the
levels are in a good range. If a VU meter is not available, try a sample recording and listen back
to make sure the level sounds good.
Minimize background noise. If you can notice the noise just standing and listening, it will
be much worse on the recording. Making a sample recording and listening to it is a good way
to discover the noise level. Placing soft materials between the microphone and air vents or
machinery will block most of the noise.
Try to eliminate background talking or music; these are a frequent source of poor-quality
audio. Avoid recording in rooms that have a discernible echo. This is especially important if
the microphone placement cannot be optimal (i.e. if the microphones are distant from the
speaker or speakers). Listening to a recording sample is a good way to tell if echo is a
problem. An echoey room will produce "hollow-sounding" speech, as if the speaker were at
the other end of a tube.
If the audio input to your recording device allows you to select the sampling rate, choose
16 kHz or higher. If it allows you to select the digital audio sample resolution, choose 16 bits
or higher.
If the audio input to your recording device supports “automatic gain control” (“AGC”) or
“voice activity detection” (“VAD”), disable this feature. Coach your speakers to “speak naturally
into the microphone”. Do not instruct speakers to over articulate words.
Media Formats
With digital formats becoming more popular, certain MP3 players have the ability to record
audio directly into a digital audio format. These devices are small, reliable, and can store
massive amounts of audio without the need to switch tapes. Mini disc recorders and discs are
compact, easily portable, sturdy and high quality. Using the mini disc recorder for lectures or
interviews with an appropriate microphone attachment works well.
Video cameras are not built specifically for audio recording; however, they nonetheless
can record good audio given an appropriate microphone attachment. The advantage of recording
directly to a computer, of course, is that there is no intermediary media to deal with and you
save time. This would most commonly be a choice if you have a laptop or a controlled location
like a sound studio.
Monitor Recordings
It is a good idea to always bring headphones with you to monitor the audio. If the equipment
you are using has the ability to monitor the recording, as the Marantz tape decks and
higher-end video cameras do, use it.
Voice recognition and voice response promise to be the easiest method of providing a
user interface for data entry and conversational computing, since speech is the easiest, most
natural means of human communication. Voice input and output of data have now become
technologically and economically feasible for a variety of applications.
6.6 Summary
Multimedia is a combination of various elements, such as text, images, video, sound, and
animation.
Interactive multimedia allows the user to control what and when the elements are delivered.
Unicode makes use of 16-bit architecture for multilingual text and character encoding.
Digital audio is created when a sound wave is converted into numbers – a process referred
to as digitizing.
Software such as Toast and CD-Creator from Adaptec can translate the digital files of red
book Audio format on consumer compact discs directly into a digital sound editing file, or
decompress MP3 files into CD-Audio.
2. b. Kerning
3. WYSIWYG
4. b. Anti-aliasing
5. a. Higher Quality
6. Byte
LESSON 7
MULTIMEDIA ELEMENTS –
IMAGES, ANIMATION AND VIDEO
Structure
7.1 Introduction
7.3 Images
7.4 Animation
7.5 Video
7.7 Summary
7.1 Introduction
Video is a combination of image and audio. It consists of a set of still images, called
frames, displayed to the user one after another at a specific speed known as the frame rate,
measured in frames per second (fps). If displayed fast enough, our eye cannot distinguish
the individual frames; because of persistence of vision, it merges the individual frames with
each other, thereby creating an illusion of motion. The frame rate should range between 20
and 30 fps for perceiving smooth, realistic motion.
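A short Python sketch (illustrative figures only) shows why uncompressed digital video
demands compression:

    # Minimal sketch: uncompressed video data rate =
    # width x height x bytes per pixel x frames per second.
    width, height = 640, 480
    bytes_per_pixel = 3               # 24-bit color
    fps = 30

    rate = width * height * bytes_per_pixel * fps
    print(rate / 1e6, "MB per second")          # ~27.6 MB/s
    print(rate * 60 / 1e9, "GB per minute")     # ~1.66 GB/min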
Computer animation or CGI animation is the process used for generating animated images
by using computer graphics. The more general term computer-generated imagery encompasses
both static scenes and dynamic images, while computer animation only refers to moving images.
Modern computer animation usually uses 3D computer graphics, although 2D computer graphics
are still used for stylistic, low-bandwidth, and faster real-time renderings. Sometimes the
target of the animation is the computer itself, but sometimes the target is another medium,
such as film.
Understand image file formats such as the Macintosh and Windows image formats, and
analyze cross-platform formats
Still images are an important element of a multimedia project or a web site. In order to
make a multimedia presentation look elegant and complete, it is necessary to spend an ample
amount of time designing the graphics and the layouts. Competent, computer-literate skills in
graphic art and design are vital to the success of a multimedia project.
Digital Image
The points at which an image is sampled are known as picture elements, commonly
abbreviated as pixels. The pixel values of intensity images are called gray scale levels (we
encode here the “color” of the image). The intensity at each pixel is represented by an integer
and is determined from the continuous image by averaging over a small neighborhood around
the pixel location. If there are just two intensity values, for example black and white, they are
represented by the numbers 0 and 1; such images are called binary-valued images. If 8-bit
integers are used to store each pixel value, the gray levels range from 0 (black) to 255 (white).
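As a small illustration (a Python sketch, not part of the original text), an 8-bit grayscale
image is just such a matrix of integers:

    # Minimal sketch: a tiny 8-bit grayscale image as a matrix,
    # 0 = black, 255 = white.
    width, height = 4, 3
    image = [[0] * width for _ in range(height)]  # all-black image

    image[1][2] = 255                 # a white pixel at row 1, column 2
    image[0][0] = 128                 # a mid-gray pixel

    for row in image:
        print(row)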
There are different kinds of image formats in the literature. We distinguish the format in
which an image comes out of an image frame grabber, i.e., the captured image format, from
the format in which images are stored, i.e., the stored image format.
The image format is specified by two main parameters: spatial resolution, which is specified
in pixels (e.g. 640 × 480), and color encoding, which is specified in bits per pixel. Both
parameter values depend on the hardware and software used for input/output of images.
Bitmaps
A bitmap is a simple information matrix describing the individual dots that are the smallest
elements of resolution on a computer screen or other display or printing device.
A one-dimensional matrix is required for monochrome (black and white); greater depth
(more bits of information) is required to describe the more than 16 million colors the picture
elements may have. The state of all the pixels on a computer screen
make up the image seen by the viewer, whether in combinations of black and white or colored
pixels in a line of text, a photograph-like picture, or a simple background pattern.
Grab a bitmap from an active computer screen with a screen capture program, and then
paste it into a paint program or your application.
Capture a bitmap from a photo, artwork, or a television image using a scanner or video
capture device that digitizes the image.
Once made, a bitmap can be copied, altered, e-mailed, and otherwise used in many
creative ways.
Color Depth
Color depth describes the amount of storage per pixel.
When a bitmap image is constructed, the color of each point or pixel in the image is
coded into a numeric value. This value represents the color of the pixel, its hue and intensity.
When the image is displayed on the screen, these values are transformed into intensities of
red, green and blue for the electron guns inside the monitor, which then create the picture on
the phosphor lining of the picture tube. In fact, the screen itself is mapped out in the computer’s
memory, stored as a bitmap from which the computer hardware drives the monitor.
These color values have to be finite numbers, and the range of colors that can be stored
is known as the color depth. The range is described either by the number of colors that can be
distinguished, or more commonly by the number of bits used to store the color value. Thus, a
pure black and white image (i.e. no greys) would be described as a 1-bit or 2-colour image,
since every pixel is either black (0) or white (1). Common color depths include 8-bit (256 colors)
and 24-bit (16 million colors). It’s not usually necessary to use more than 24-bit color, since the
human eye is not able to distinguish that many colors, though broader color depths may be
used for archiving or other high quality work.
There are a number of interesting attributes of such a color indexing system. If there are fewer than 256 colors in the image, then this bitmap will be of the same quality as a 24-bit bitmap but can be stored with one third of the data. Interesting coloring and animation effects can be achieved by simply modifying the palette; this immediately changes the appearance of the bitmap and, with careful design, can lead to intentional changes in its visual appearance.
A common operation that reduces the size of large 24 bit bitmaps is to convert them to
indexed color with an optimized palette, that is, a palette which best represents the colors
available in the bitmap.
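As a concrete sketch of this conversion, the snippet below uses the Pillow imaging library (an assumed dependency; the file names are hypothetical) to reduce a 24-bit image to 8-bit indexed color with an optimized palette:

    from PIL import Image

    # Load a 24-bit RGB image (the file name is hypothetical).
    img = Image.open("photo.jpg").convert("RGB")

    # quantize() builds an optimized palette of at most 256 entries and
    # maps every pixel to its closest palette index (mode "P").
    indexed = img.quantize(colors=256)
    indexed.save("photo_indexed.png")

    # One byte per pixel plus a small palette table, versus three bytes
    # per pixel for the 24-bit original: roughly one third of the data.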
Resolution
Resolution is a measure of how finely a device displays graphics in pixels. It applies to printers, scanners, monitors (TV, computer), mobile devices and cameras.
Resolution is measured as the number of pixels or dots per inch (dpi). Printers and scanners work at higher resolutions than computer monitors: current desktop printers support 300 dpi and above, and flatbed scanners range from 100 to 3600 dpi and beyond, while computer monitors support 72-130 dpi. This is also known as "image resolution".
Resolution also describes the size of the frame (as in video) on a monitor. For instance, the video frame size used for British television is 768 × 576, whereas American TVs use 640 × 480.
Still images may be small or large, or even full screen. Whatever their form, still images
are generated by the computer in two ways:
Bitmaps are used for photo-realistic images and for complex drawings requiring fine detail.
Vector-drawn objects are used for lines, boxes, circles, polygons, and other graphic
shapes that can be mathematically expressed in angles, coordinates, and distances. A drawn
object can be filled with color and patterns, and you can select it as a single object.
Typically, image files are compressed to save memory and disk space; many image
formats already use compression within the file itself – for example, GIF, JPEG, and PNG.
Still images may be the most important element of your multimedia project. If you are
designing multimedia by yourself, put yourself in the role of graphic artist and layout designer.
Bitmap Software
Bitmap is derived from the words “bit”, which means the simplest element in which only
two digits are used, and “map”, which is a two-dimensional matrix of these bits. A bitmap is a
data matrix describing the individual dots of an image.
A bitmap is a simple information matrix describing the dots or pixels which make up an image. Bitmaps are used for:
Photo-realistic images.
Complex drawings.
Scanned images.
Clip Art
A clip art collection may contain a random assortment of images, or it may contain a
series of graphics, photographs, sound, and video related to a single topic. For example, Corel,
Micrografx, and Fractal Design bundle extensive clip art collection with their image-editing
software.
Clip arts are a popular alternative for users who do not want to create their own images.
Bitmap Software
The industry-standard bitmap painting and editing programs include:
Macromedia's Fireworks.
Corel's Painter.
CorelDraw.
QuarkXPress.
Director includes a powerful image editor with advanced tools such as onion-skinning and image filtering.
Adobe Photoshop and Fractal Design's Painter are more sophisticated painting and editing tools.
Note:
Use a paint program for cartoons, text, icons, symbols, buttons, or graphics.
For photo-realistic images, first scan a picture, then use a paint or image-editing program to refine or modify the bitmap.
The image seen on a computer monitor is a digital bitmap stored in video memory, updated about every 1/60 of a second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia project, you may need to capture and store an image directly from the screen. The Print Screen (PrtScr) key on the keyboard can be used to capture an image.
Scanning Images
After scanning through countless clip art collections, you may still be unable to find the unusual background you want for a screen about gardening. Sometimes when you search too hard for something, you don't realize that it's right in front of your face. Open a scan in an image-editing program and experiment with different filters, the contrast, and various special effects. Be creative, and don't be afraid to try strange combinations; sometimes mistakes yield the most intriguing results.
Vector Drawing
Most multimedia authoring systems provide for use of vector-drawn objects such as
lines, rectangles, ovals, polygons, and text. Computer-aided design (CAD) programs have
traditionally used vector-drawn object systems for creating the highly complex and geometric
rendering needed by architects and engineers.
Graphic artists designing for print media use vector-drawn objects because the same
mathematics that put a rectangle on your screen can also place that rectangle on paper without
jaggies. This requires the higher resolution of the printer, using a page description language
such as PostScript.
Programs for 3-D animation also use vector-drawn graphics. For example, the various changes of position, rotation, and shading of light required to spin an extruded shape are calculated mathematically.
Vector-drawn objects are described and drawn to the computer screen using a fraction
of the memory space required to describe and store the same object in bitmap form. A vector
is a line that is described by the location of its two endpoints. A simple rectangle, for example,
might be defined as follows:
RECT 0,0,200,200,RED,BLUE
This rectangle starts at 0,0 and extends 200 pixels horizontally and 200 pixels downward from that corner (a square), with a red border, filled with blue.
The same square as a 1-bit bitmapped image would take 5,000 bytes to describe ((200 × 200) / 8), and using 256 colors (8 bits per pixel) it would require about 40K as a bitmap ((200 × 200 × 8) / 8).
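The arithmetic above is easy to reproduce. A minimal sketch, where the byte count of the vector description is simply its character length (ignoring any real file-format overhead):

    # Storage needed to describe the 200 x 200 square discussed above.
    width = height = 200
    mono_bytes = width * height // 8         # 1 bit per pixel  -> 5,000 bytes
    indexed_bytes = width * height * 8 // 8  # 8 bits per pixel -> 40,000 bytes

    # The vector description is just one short line of text:
    vector_bytes = len("RECT 0,0,200,200,RED,BLUE")  # 25 bytes

    print(mono_bytes, indexed_bytes, vector_bytes)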
Sometimes a single bitmap gives better performance than the many vector images required to make the same image.
Converting bitmaps to drawn objects is more difficult and is called autotracing: it computes the bounds of an object and its colors, and derives the polygon that most nearly describes it.
Software helps to render (or represent) the image in visual form, but these programs
have a steep learning curve.
Objects in 3-D space carry many properties (shape, color, texture, location, and so on), and a scene contains many objects.
3-D Drawing
3-D software usually offers:
Directional lighting
Motion
Different perspectives
Popular 3-D applications include Specular Infini-D and form*Z.
• You can draw a 2-D object and extrude or lathe it into the third dimension
• A lathed shape is rotated around a defined axis to create the 3-D object.
Image files are identified by their format and file extension; for example, the JPEG format uses the .jpg extension.
7.4 Animation
Introduction
Animation makes static presentations come alive. It is visual change over time and can
add great power to our multimedia projects. Carefully planned, well-executed video clips can
make a dramatic difference in a multimedia project. Animation is created from drawn pictures
and video is created using real time visuals.
Principles of Animation
Animation is the rapid display of a sequence of images of 2-D artwork or model positions
in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon
of persistence of vision, and can be created and demonstrated in a number of ways. The most
common method of presenting animation is as a motion picture or video program, although
several other forms of presenting animation also exist.
Television video builds 30 entire frames or pictures every second; the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move, or translate, it a few pixels for each frame.
Animation Techniques
When you create an animation, organize its execution into a series of logical steps. First,
gather up in your mind all the activities you wish to provide in the animation; if it is complicated,
you may wish to create a written script with a list of activities and required objects. Choose the
animation tool best suited for the job. Then build and tweak your sequences; experiment with
lighting effects. Allow plenty of time for this phase when you are experimenting and testing.
Finally, post-process your animation, doing any special rendering and adding sound effects.
Cel Animation
The term cel derives from the clear celluloid sheets that were used for drawing each
frame, which have been replaced today by acetate or plastic. Cels of famous animated cartoons
have become sought-after, suitable-for-framing collector’s items.
Cel animation artwork begins with keyframes (the first and last frame of an action). For
example, when an animated figure of a man walks across the screen, he balances the weight
of his entire body on one foot and then the other in a series of falls and recoveries, with the
opposite foot and leg catching up to support the body.
The animation techniques made famous by Disney use a series of progressively different graphics on each frame of movie film, which plays at 24 frames per second.
Computer Animation
Computer animation programs typically employ the same logic and procedural concepts
as cel animation, using layer, key frame, and tweening techniques, and even borrowing from
the vocabulary of classic animators. On the computer, paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.
The word inks, in computer animation terminology, usually means special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects.
Computer animation follows the same logic and procedural concepts as cel animation and uses the vocabulary of classic cel animation, with terms such as layer, keyframe, and tweening.
The primary difference between animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software.
In 2-D animation the animator creates an object and describes a path for the object to follow. The software takes over, actually creating the animation on the fly as the program is being viewed by your user.
In 3-D animation the animator puts his effort into creating the models of individual objects and designing the characteristics of their shapes and surfaces.
Paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.
Kinematics
It is the study of the movement and motion of structures that have joints, such as a
walking man.
Inverse kinematics, found in high-end 3-D programs, is the process by which you link objects such as hands to arms and define their relationships and limits. Once those relationships are set, you can drag these parts around and let the computer calculate the result.
Morphing
Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform transitions not only between still images, but between moving images as well.
The morphed images were built at a rate of 8 frames per second, with each transition
taking a total of 4 seconds.
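A full morph warps the geometry of the two images between corresponding control points; the sketch below, using NumPy (an assumed dependency), shows only the cross-dissolve component of such a transition, generating the 32 frames implied by the figures above:

    import numpy as np

    def dissolve_frames(src, dst, n_frames):
        # Linear cross-dissolve between two equally sized RGB frames.
        # A real morph also warps geometry; this shows only the blend.
        frames = []
        for i in range(n_frames):
            t = i / (n_frames - 1)             # progress from 0.0 to 1.0
            blend = (1.0 - t) * src + t * dst  # per-pixel weighted average
            frames.append(blend.astype(np.uint8))
        return frames

    # 8 frames per second for 4 seconds gives 32 transition frames.
    a = np.zeros((120, 160, 3))                # placeholder "from" image
    b = np.full((120, 160, 3), 255.0)          # placeholder "to" image
    print(len(dissolve_frames(a, b, 8 * 4)))   # 32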
Some file formats are designed specifically to contain animations, and they can be ported among applications and platforms with the proper translators. One example is the CompuServe GIF format (*.gif).
The following is a list of a few software packages used for computerized animation:
3D Studio Max
Flash
AnimationPro
Meta Graphics
Meta graphics can be termed hybrid graphics, as they are a combination of bitmap and vector graphics. They aren't as widely used as bitmaps and vectors, and aren't as widely supported. An example of a meta graphic would be a map consisting of a photo showing an aerial view of a town, where the landmarks are highlighted using vector text and graphics, e.g., arrows.
Animated Graphics
Animated graphics are 'moving graphics' that consist of more than one graphic. Vector graphics are usually the basis of animations; think of cartoons such as The Simpsons and Family Guy. Effects generated with bitmaps can be added, and bitmaps themselves can also be animated.
Typical frame rates are 12-24 fps for animations used in multimedia; 12 fps is recommended for web-based animations, and 25 fps is used for TV in the UK.
2. The type of image used for photo-realistic images and for complex drawings requiring
fine detail is the _______________.
4. _______________ is a process whereby the color value of each pixel is changed to the
closest matching color value in the target palette, using a mathematical algorithm.
c. Morphing
d. Tweening
Movies on film are typically shot at a shutter rate of 24 frames per second.
9. The file format that is most widely supported for web animations is_____________
7.5 Video
Video is a combination of images and audio. It consists of a set of still images called frames, displayed to the user one after another at a specific speed known as the frame rate, measured in frames per second (fps). If the frames are displayed fast enough, our eyes cannot distinguish the individual frames; persistence of vision merges them with each other, creating an illusion of motion. The frame rate should range between 20 and 30 frames per second for smooth, realistic motion. Audio is added and synchronized
with the apparent movement of images. The recording and editing of sound has long been in
the domain of the PC. Doing so with motion video has only recently gained acceptance. This is
because of the enormous file size required by video.
Digital video has supplanted analog video as the method of choice for making video for
multimedia use. While broadcast stations and professional production and postproduction
houses remain greatly invested in analog video hardware (according to Sony, there are more
than 350,000 Betacam SP devices in use today), digital video gear produces excellent finished
products at a fraction of the cost of analog. A digital camcorder directly connected to a computer
workstation eliminates the image-degrading analog-to-digital conversion step typically performed
by expensive video capture cards, and brings the power of nonlinear video editing and production
to everyday users.
Four broadcast and video standards and recording formats are commonly in use around
the world: NTSC, PAL, SECAM, and HDTV. Because these standards and formats are not
easily interchangeable, it is important to know where your multimedia project will be used.
NTSC
The United States, Japan, and many other countries use a system for broadcasting and
displaying video that is based upon the specifications set forth by the 1952 National Television
Standards Committee. These standards define a method for encoding information into the
electronic signal that ultimately creates a television picture. As specified by the NTSC standard,
a single frame of video is made up of 525 horizontal scan lines drawn onto the inside face of a
phosphor-coated picture tube every 1/30th of a second by a fast-moving electron beam.
PAL
The Phase Alternate Line (PAL) system is used in the United Kingdom, Europe, Australia,
and South Africa. PAL is an integrated method of adding color to a black-and-white television
signal that paints 625 lines at a frame rate of 25 frames per second.
SECAM
The Sequential Color and Memory (SECAM) system is used in France, Russia, and a few
other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the
NTSC and the PAL color systems in its basic technology and broadcast method.
HDTV
High Definition Television (HDTV) provides high resolution in a 16:9 aspect ratio. This aspect ratio allows the viewing of Cinemascope and Panavision movies.
There is contention between the broadcast and computer industries about whether to use
interlacing or progressive-scan technologies.
DTV Transition
The switch from analogue to digital broadcast television is referred to as the Digital TV
(DTV) Transition. In 1996, the U.S. Congress authorized the distribution of an additional
broadcast channel to each broadcast TV station so that they could start a digital broadcast
channel while simultaneously continuing their analogue broadcast channel.
Later, Congress set June 12, 2009 as the deadline for full power television stations to
stop broadcasting analogue signals. Since June 13, 2009, all full-power U.S. television stations
have broadcast over-the-air signals in digital only.
Digital Video
Full integration of motion video on computers eliminates the analog television form of
video from the multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-
ROM, or other mass-storage device, that clip can be played back on the computer’s monitor
without overlay boards, videodisk players, or second monitors. This playback of digital video is
accomplished using software architectures such as QuickTime or AVI. As a multimedia producer or developer, you may need to convert video source material from its still common analog form (videotape) to a digital form manageable by the end user's computer system. So an
understanding of analog video and some special hardware must remain in your multimedia
toolbox.
Analog to digital conversion of video can be accomplished using the video overlay hardware
described above, or it can be delivered direct to disk using FireWire cables. To repetitively
digitize a full-screen color video image every 1/30 second and store it to disk or RAM severely
taxes both Macintosh and PC processing capabilities–special hardware, compression firmware,
and massive amounts of digital storage space are required.
The Advanced Television Standards Committee (ATSC) has set voluntary standards for
digital television. These standards include how sound and video are encoded and transmitted.
They also provide guidelines for different levels of quality. All of the digital standards are better
in quality than analogue signals. HDTV standards are the top tier of all the digital signals.
The ATSC has created 18 commonly used digital broadcast formats for video. The lowest
quality digital format is about the same as the highest quality an analogue TV can display.
(i) Aspect ratio: Standard television has a 4:3 aspect ratio—it is four units wide by three
units high. HDTV has a 16:9 aspect ratio, more like a movie screen.
(ii) Resolution: The lowest standard resolution (SDTV) will be about the same as analogue
TV and will go up to 704 x 480 pixels. The highest HDTV resolution is 1920 x 1080 pixels.
HDTV can display about ten times as many pixels as an analogue TV set.
(iii) Frame rate: A set’s frame rate describes how many times it creates a complete picture on
the screen every second. DTV frame rates usually end in “i” or “p” to denote whether they
are interlaced or progressive. DTV frame rates range from 24p (24 frames per second,
progressive) to 60p (60 frames per second, progressive).
Many of these standards have exactly the same aspect ratio and resolution — their
frame rates differentiate them from one another. When you hear someone mention a “1080i”
HDTV set, they’re talking about one that has a native resolution of 1920 x 1080 pixels and can
display 60 frames per second, interlaced.
Before going into the details of digitizing video and playback of video on a personal
computer, let us first have a look at the existing digital video standards for transmission and
playback.
ATSC (Advanced Television Systems Committee) is the name of the technical standard
that defines the digital TV (DTV) that the FCC has chosen for terrestrial TV stations. ATSC
employs MPEG-2, a data compression standard. MPEG-2 typically achieves a 50-to-1 reduction
in data.
It achieves this by not retransmitting areas of the screen that have not changed since the
previous frame. Digital cable TV systems and DBS systems like DirecTV have devised their
own standards that differ somewhat from ATSC. Their high-def set top boxes (STBs) conform
to ATSC at their output connectors. Those systems use MPEG-2 or MPEG-4.
ATSC has 18 different formats. All TVs must be able to receive all of these formats and
display them. The broadcaster chooses the format. Most TV sets will display only 1 or 2 of
these formats, but will convert the other formats into these.
ISDB is maintained by the Japanese organization ARIB. The standards can be obtained
for free at the Japanese organization DiBEG website and at ARIB. The core standards of ISDB
are ISDB-S (satellite television), ISDB-T (terrestrial), ISDB-C (cable) and 2.6 GHz band mobile
broadcasting which are all based on MPEG-2 or MPEG-4 standard for multiplexing with transport
stream structure and video and audio coding (MPEG-2 or H.264), and are capable of high
definition television (HDTV) and standard definition television. ISDB-T and ISDB-Tsb are for
mobile reception in TV bands. 1seg is the name of an ISDB-T service for reception on cell
phones, laptop computers and vehicles.
The concept was named for its similarity to ISDN, because both allow multiple channels
of data to be transmitted together (a process called multiplexing). This is also much like another
digital radio system, Eureka 147, which calls each group of stations on a transmitter an ensemble;
this is very much like the multi-channel digital TV standard DVB-T. ISDB-T operates on unused
TV channels, an approach taken by other countries for TV but never before for radio.
Interaction: Besides audio and video transmission, ISDB also defines data connections
(Data broadcasting) with the internet as a return channel over several media (10Base-T/
100Base-T, Telephone line modem, Mobile phone, Wireless LAN (IEEE 802.11) etc.) and with
different protocols. This is used, for example, for interactive interfaces like data broadcasting (ARIB STD-B24) and electronic program guides (EPG).
Receiver: There are two types of ISDB receiver: Television and set-top box. The aspect
ratio of an ISDB-receiving television set is 16:9; televisions fulfilling these specs are called Hi-
Vision TV.
There are three TV types: Cathode ray tube (CRT), plasma display panel (PDP) and
liquid crystal display (LCD), with LCD being the most popular Hi-Vision TV on the Japanese
market nowadays.
These are conventional systems modified to offer improved vertical and horizontal resolutions. One of the systems emerging in the US and Europe is known as Improved Definition Television (IDTV). IDTV is an attempt to improve the NTSC image by using digital memory to double the scanning lines from 525 to 1050. The pictures are only slightly more detailed than NTSC images because the signal does not contain any new information. By separating the chrominance and luminance parts of the video signal, IDTV prevents cross-interference between the two. The Double Multiplexed Analogue Components (D2-MAC) standard is designed as an intermediate standard for the transition from the current European analogue standard to the HDTV standard.
To add full-screen, full-motion video to your multimedia project, you will need to invest in
specialized hardware and software or purchase the services of a professional video production
studio. In many cases, a professional studio will also provide editing tools and post-production
capabilities that you cannot duplicate with your Macintosh or PC.
Video Tips
A useful tool easily implemented in most digital video editing applications is "blue screen," "Ultimatte," or "chroma key" editing. Blue screen is a popular technique for making multimedia
titles because expensive sets are not required. Incredible backgrounds can be generated using
3-D modeling and graphic software, and one or more actors, vehicles, or other objects can be
neatly layered onto that background. Applications such as VideoShop, Premiere, Final Cut
Pro, and iMovie provide this capability.
Recording Formats
S-VHS video
In S-VHS video, color and luminance information are kept on two separate tracks. The
result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your
ultimate goal is to have your project accepted by broadcast stations, this would not be the best
choice.
Component (YUV)
In the early 1980s, Sony began to experiment with a new portable professional video
format based on Betamax. Panasonic has developed its own standard based on a similar technology, called "MII." Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called "Digital Betacam."
Digitization of Audio
Sound and other analog data are generally represented as a waveform and can be converted to digital form by a process called sampling. The two important aspects of sampling are sampling size and sampling rate.
Sampling size refers to the number of bits used to store each sample from the analog wave. For example, an 8-bit sample can represent 256 (2^8 = 256) possible levels in a particular sample.
A higher sampling size results in increased accuracy but higher data storage requirements. Sampling rate refers to the number of samples, or slices, taken of the analog wave in one second. The higher the sampling rate, the better the representation of the initial analog signal.
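The two parameters are easy to see in code. A minimal sketch using NumPy (an assumed dependency), sampling a 440 Hz sine wave and quantizing each sample to 8 bits:

    import numpy as np

    sample_rate = 8000   # samples per second (the sampling rate)
    bits = 8             # sampling size: 2**8 = 256 possible levels
    duration = 0.01      # seconds of "analog" signal

    # Sample the continuous wave at discrete instants...
    t = np.arange(0, duration, 1.0 / sample_rate)
    analog = np.sin(2 * np.pi * 440 * t)          # values in [-1.0, 1.0]

    # ...then quantize each sample to one of 256 integer levels (0..255).
    levels = 2 ** bits
    digital = np.round((analog + 1.0) / 2.0 * (levels - 1)).astype(np.uint8)

    print(len(digital), digital.min(), digital.max())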
Capturing full motion video requires a video capture card to digitize the signal (unless
using a digital video recorder, in which case it is already digitized) before storing on disk for
later editing.
The standard PAL (Phase Alternate Line) video signal used in India displays a frame rate of 25 frames per second. One frame of medium resolution at 16-bit color requires approximately 1 MB of storage space; this translates to 25 MB per second of video, or a staggering 1,500 MB per minute.
Current personal computers cannot sustain a transfer rate between secondary and primary storage of 1,500 MB per minute, so a number of solutions are applied, including the following (see the sketch after this list):
Decreasing the color depth to fewer colors, or even to black and white shades, which requires significantly less memory.
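These data rates follow directly from the frame size and color depth. A minimal calculation, using the round figures quoted above:

    # Uncompressed PAL video data rates, using the figures quoted above.
    fps = 25
    frame_bytes_16bit = 1_000_000            # ~1 MB per medium-resolution frame

    per_second = fps * frame_bytes_16bit     # 25 MB per second
    per_minute = per_second * 60             # 1,500 MB per minute

    # Decreasing the color depth scales the requirement proportionally:
    frame_bytes_8bit = frame_bytes_16bit // 2    # 8-bit color: half the data
    frame_bytes_1bit = frame_bytes_16bit // 16   # black and white: 1/16 of it

    print(per_second, per_minute, fps * frame_bytes_8bit, fps * frame_bytes_1bit)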
7.7 Summary
A digital image is represented by a matrix of numeric values each representing a quantized
intensity value.
Bitmaps are used for photo-realistic images and for complex drawings requiring fine detail.
Vector-drawn objects are used for lines, boxes, circles, polygons, and other graphic shapes
that can be mathematically expressed in angles, coordinates, and distances.
Rendering is the process of generating an image from a model (or models in what
collectively could be called a scene file), by means of computer programs.
Color palette is a subset of all possible colors a monitor can display that is being used to
display the current document.
A color generator or color scheme selector is a tool for anyone in need of a color scheme.
Animation is the rapid display of a sequence of images of 2-D artwork or model positions
in order to create an illusion of movement.
Four broadcast and video standards and recording formats are commonly in use around
the world: NTSC, PAL, SECAM, and HDTV.
Animation catches the eye and makes things noticeable. But, like sound, animation quickly
becomes trite if it is improperly applied.
Video standards and formats are still being refined as transport, storage, compression,
and display technologies take shape in laboratories and in the marketplace and while
equipment and post-processing evolves from its analog beginnings to become fully digital,
from capture to display.
DVD Authoring software is used to create digital video disks which can be played on a
DVD player.
Media player is a term typically used to describe computer software for playing back
multimedia files. While many media players can play both audio and video, others focus
only on one media type or the other.
2. Bitmaps
4. Dithering
5. True
8. True
9. GIF89a
10. 16:9
2. Define Bitmaps.
4. Define Resolution.
LESSON 8
COMPRESSION TECHNIQUES IN
MULTIMEDIA SYSTEMS
Structure
8.1 Introduction
8.7 Summary
8.1 Introduction
Data compression is the process of encoding data using a representation that reduces
the overall size of data. This reduction is possible when the original dataset contains some type
of redundancy. Data compression, also called compaction, is the process of reducing the amount of data needed for the storage or transmission of a given piece of information, typically by the use of encoding techniques. Multimedia compression employs tools and techniques to reduce the file sizes of various media formats.
In this lesson you will learn methods for compressing various kinds of data, such as text, images, video and audio.
Compression is the way of making files take up less space. In multimedia systems, in order to manage large multimedia data objects efficiently, these data objects need to be compressed to reduce the file size for storage.
For example, if a black pixel is followed by 20 white pixels, there is no need to store all 20
white pixels. A coding mechanism can be used so that only the count of the white pixels is
stored. Once such redundancies are removed, the data object requires less time for transmission
over a network. This in turn significantly reduces storage and transmission costs.
Types of Compression
Sometimes removing redundancies is not sufficient to reduce the size of the data object to manageable levels. In such cases, some real information is also removed. The primary criterion is that removal of the real information should not perceptibly affect the quality of the result. In the case of video, compression causes some information to be lost; some information at a detailed level is considered not essential for a reasonable reproduction of the scene.
This type of compression is called lossy compression. Compression in which no information is lost, on the other hand, is called lossless compression.
In lossy compression, some loss of information occurs when information objects are compressed. The idea behind lossy compression is that, in the case of video, the human eye fills in the missing information.
An important consideration, however, is how much information can be lost without perceptibly affecting the result. For example, in a gray-scale image, if several bits are missing, the information is still perceived in an acceptable manner, as the eye fills in the gaps in the shading gradient.
Lossy compression techniques can be used alone or in combination with other compression methods in a multimedia object consisting of audio, color images, and video, as well as other specialized data types. Lossy techniques include, for example:
Intel DVI
Fractals.
The schemes are used primarily for documents that do not contain any continuous-tone
information or where the continuous-tone information can be captured in a black and white
mode to serve the desired purpose.
The schemes are applicable in office/business documents, handwritten text, line graphics,
engineering drawings, and so on. Let us view the scanning process. A scanner scans a document
as sequential scan lines, starting from the top of the page.
A scan line is a complete line of pixels, of height equal to one pixel, running across the page. The scanner scans the first line of pixels (scan line), then scans the second line, and works its way down to the last scan line of the page. Each scan line is scanned from left to right, generating black and white pixels for that scan line.
This uncompressed image consists of a single bit per pixel containing black and white
pixels. Binary 1 represents a black pixel, binary 0 a white pixel. Several schemes have been
standardized and used to achieve various levels of compressions.
In some cases, one bit of each byte is used to represent the pixel value and the other seven bits represent the run length.
Example:
Run-length encoding is used when the source information comprises long substrings of the same character or binary digit, for example:
000000011111111110000011
If the data is binary and we know the first bit is 0, then the code becomes: 7, 10, 5, 2.
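A minimal sketch of this scheme, with the first symbol assumed to be 0 so that only the run lengths of alternating symbols need be stored:

    def rle_encode(bits, first="0"):
        # Knowing the first symbol, only run lengths need be stored;
        # the colors alternate with every run.
        runs, current, count = [], first, 0
        for b in bits:
            if b == current:
                count += 1
            else:
                runs.append(count)
                current, count = b, 1
        runs.append(count)
        return runs

    def rle_decode(runs, first="0"):
        out, symbol = [], first
        for n in runs:
            out.append(symbol * n)
            symbol = "1" if symbol == "0" else "0"
        return "".join(out)

    data = "000000011111111110000011"
    codes = rle_encode(data)           # [7, 10, 5, 2], as in the text
    assert rle_decode(codes) == data
    print(codes)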
This scheme is based on run-length encoding and assumes that a typical scanline has
long runs of the same color.
This scheme was designed for black and white images only, not for gray scale or color
images. The primary application of this scheme is in facsimile and early document imaging
system.
Huffman Encoding
It is used in many software-based document imaging systems, and for encoding the pixel run lengths in CCITT Group 3 1-D and Group 4.
It is a variable-length encoding scheme: it generates the shortest codes for frequently occurring run lengths and longer codes for less frequently occurring run lengths.
The table below shows the CCITT Group 3 codes for white run lengths and black run lengths.
The codes for runs greater than 1792 pixels are identical for black and white pixels. A new code indicates a reversal of color; that is, the pixel color code is relative to the color of the previous pixel sequence.
Table 8.3 shows the codes for pixel sequences larger than 1792 pixels.
CCITT Group 3 compression utilizes Huffman coding to generate a set of make-up codes
and a set of terminating codes for a given bit stream. Make-up codes are used to represent run
length in multiples of 64 pixels. Terminating codes are used to represent run lengths of less
than 64 pixels.
The make-up code for 128 white pixels is 10010, and the terminating code for 4 white pixels is 1011. The compressed bit stream for 132 white pixels is therefore 100101011, a total of nine bits. The compression ratio is about 14.7: the total number of bits coded (132) divided by the number of bits used to code them (9).
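A minimal sketch of how such a variable-length table can be built. The run-length frequencies below are hypothetical; the point is that, as in the CCITT tables, the most frequent symbols receive the shortest codes:

    import heapq

    def huffman_codes(frequencies):
        # Build a Huffman code table from {symbol: frequency}.
        # Each heap entry is [total_freq, tiebreak, symbol, symbol, ...].
        heap = [[f, i, s] for i, (s, f) in enumerate(frequencies.items())]
        heapq.heapify(heap)
        codes = {s: "" for s in frequencies}
        while len(heap) > 1:
            lo = heapq.heappop(heap)   # least frequent group
            hi = heapq.heappop(heap)   # next least frequent group
            for s in lo[2:]:
                codes[s] = "0" + codes[s]   # extend codes in the "0" branch
            for s in hi[2:]:
                codes[s] = "1" + codes[s]   # extend codes in the "1" branch
            heapq.heappush(heap, [lo[0] + hi[0], lo[1]] + lo[2:] + hi[2:])
        return codes

    # Hypothetical run-length frequencies from a scanned page:
    freqs = {"run-2": 40, "run-3": 25, "run-4": 20, "run-128": 10, "run-1000": 5}
    print(huffman_codes(freqs))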
CCITT Group 3 uses a very simple data format. This consists of sequential blocks of
data for each scan line, as shown in Table 8.4.
Note that the file is terminated by a number of EOLs (End of Line) if there is no change in the line from the previous line (for example, white space).
It is a worldwide standard for facsimile and is accepted for document imaging applications, allowing such applications to incorporate fax documents easily.
CCITT Group 3 compression utilizes Huffman coding to generate a set of make-up codes and a set of terminating codes for a given bit stream.
CCITT Group 3 uses a very simple data format, consisting of sequential blocks of data for each scan line.
It is also known as modified run-length encoding and is used in software-based imaging systems and facsimile. It is easier to decompress in software than CCITT Group 4. The CCITT Group 3 2D scheme uses a "k" factor, whereby the image is divided into several groups of k lines. The scheme exploits the statistical nature of images: the image data across adjacent scanlines is redundant.
If black-to-white transitions occur on a given scanline, chances are that the same transitions will occur within plus or minus 3 pixels in the next scanline.
Necessity of k factor
The 2D scheme uses a combination of additional codes called vertical code, pass code,
and horizontal code to encode every line in the group of k lines.
The steps of the pseudocode to encode the coding line are:
i) Parse the coding line and look for a change in the pixel value. (The change is found at the a1 location.)
ii) Parse the reference line and look for a change in the pixel value. (The change is found at the b1 location.)
Disadvantage
It does not provide dense compression.
CCITT Group 4 compression is the two-dimensional coding scheme without the k-factor.
In this method, the first reference line is an imaginary all-white line above the top of the
image. The first group of pixels (scanline) is encoded utilizing the imaginary white line as the
reference line.
The newly coded line becomes the reference line for the next scan line. The k-factor in this case is the entire page of lines. In this method, there are no end-of-line (EOL) markers before the start of the compressed data.
The LZW algorithm is a very common compression technique. This algorithm is typically
used in GIF and optionally in PDF and TIFF. On Unix-like operating systems,
the compress command compresses a file so that it becomes smaller. The compressed file’s
name is given the extension .Z. It is lossless, meaning no data is lost when compressing. The
algorithm is simple to implement and has the potential for very high throughput in hardware
implementations. It is the algorithm of the widely used UNIX file compression utility compress,
and is used in the GIF image format.
The idea relies on recurring patterns to save data space. LZW is a foremost technique for general-purpose data compression due to its simplicity and versatility. It is the basis of many PC utilities that claim to "double the capacity of your hard drive".
LZW compression works by reading a sequence of symbols, grouping the symbols into
strings, and converting the strings into codes. Because the codes take up less space than the
strings they replace, we get compression.
LZW compression uses a code table, with 4096 as a common choice for the number of
table entries. Codes 0-255 in the code table are always assigned to represent single
bytes from the input file.
When encoding begins the code table contains only the first 256 entries, with the remainder
of the table being blanks. Compression is achieved by using codes 256 through 4095 to
represent sequences of bytes.
As the encoding continues, LZW identifies repeated sequences in the data, and adds
them to the code table.
Decoding is achieved by taking each code from the compressed file and translating it
through the code table to find what character or characters it represents.
Example 8.1: Use the LZW algorithm to compress the string: BABAABAAA
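The worked solution is not reproduced here; the following minimal sketch of an LZW encoder (single characters standing in for single bytes) shows the result for this string:

    def lzw_compress(text):
        # Codes 0-255 represent single bytes; new string codes are
        # assigned from 256 upward as repeated sequences are found.
        table = {chr(i): i for i in range(256)}
        next_code = 256
        w, out = "", []
        for c in text:
            if w + c in table:
                w = w + c                 # grow the current string
            else:
                out.append(table[w])      # emit code for the longest match
                table[w + c] = next_code  # learn the new sequence
                next_code += 1
                w = c
        if w:
            out.append(table[w])
        return out

    # Example 8.1: BABAABAAA compresses from 9 symbols to 6 codes.
    print(lzw_compress("BABAABAAA"))
    # -> [66, 65, 256, 257, 65, 260]  (B, A, "BA", "AB", A, "AA")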
Color:
Color is a part of life we take for granted. It adds another dimension to objects and helps make things stand out. Color adds depth to images, enhances them, and helps set objects apart from the background.
Let us review the physics of color. Visible light is a form of electromagnetic radiation or
radiant energy, as are radio frequencies or x-rays. The radiant energy spectrum contains audio
frequencies, radio frequencies, infrared, visible light, ultraviolet rays, x-rays and gamma rays.
Since all electromagnetic waves travel through space at the velocity of light, i.e., 3 × 10^8 meters/second, the wavelength is calculated by
λ = c / f
where λ is the wavelength in meters, c is the velocity of light in meters per second, and f is the frequency of the radiation in hertz.
Color Characteristics
We typically define a color by its brightness, its hue, and its saturation (the depth of the color).
Luminance or Brightness
This is the measure of the brightness of the light emitted or reflected by an object; it depends on the radiant energy of the color band.
Hue
This is the color sensation produced in an observer due to the presence of certain
wavelengths of color. Each wavelength represents a different hue.
Saturation
This is a measure of color intensity, for example, the difference between red and pink.
Color Models
Chromaticity Model
It is a three-dimensional model with two dimensions, x and y, defining the color, and the third dimension defining the luminance. It is an additive model, since x and y are added to generate different colors.
RGB Model
RGB stands for Red, Green, Blue. This model implements additive color theory, in that different intensities of red, green and blue are added to generate various colors.
HSI Model
The Hue Saturation and Intensity (HSI) model represents an artist’s impression of tint,
shade and tone. This model has proved suitable for image processing for filtering and smoothing
images.
CMYK Model
The Cyan, Magenta, Yellow and Black color model is used in desktop publishing printing
devices. It is a color-subtractive model and is best used in color printing devices only.
YUV Representation
The NTSC developed the YUV three-dimensional color model, in which Y is the luminance component and U and V are the chrominance components.
The luminance component contains the black and white, or gray-scale, information. The chrominance components contain the color information, where U is red minus cyan and V is magenta minus green.
The first stage converts the signal from the RGB color space to the YUV color space; a discrete cosine transform then takes the image from the spatial domain to the frequency domain. This process allows the luminance, or gray-scale, components to be separated from the chrominance components of the image.
The ISO and CCITT working committees joined together and formed the Joint Photographic Experts Group (JPEG), which is focused exclusively on still image compression.
Another joint committee, known as the Motion Picture Experts Group (MPEG), is
concerned with full motion video standards.
JPEG is a compression standard for still color images and grayscale images, otherwise
known as continuous tone images.
Part 1 specifies the modes of operation, the interchange formats, and the encoder/decoder specifications for these modes, along with substantial implementation guidelines.
It should be scalable from completely lossless to lossy ranges so that it can adapt to different applications. It should provide sequential encoding.
The compression standard should provide the option of lossless encoding so that images
can be guaranteed to provide full detail at the selected resolution when decompressed.
* Baseline system
* Extended system
* Special lossless function.
The base line system must reasonably decompress color images, maintain a high
compression ratio, and handle from 4 bits/pixel to 16 bits/pixel.
The extended system covers the various encoding aspects such as variable-length
encoding, progressive encoding, and the hierarchical mode of encoding.
The special lossless function is also known as predictive lossless coding. It ensures that, at the selected resolution, no detail that was in the original source image is lost when the image is decompressed.
(i) Baseline Sequential Codec
(ii) DCT Progressive Mode
(iii) DCT Hierarchical Mode
(iv) Lossless Mode
The baseline sequential codec defines a rich compression scheme; the other three modes describe enhancements to this baseline scheme for achieving different results.
The baseline sequential codec uses Huffman coding. Arithmetic coding is another type of entropy encoding.
The DCT is closely related to the Fourier transform, which is used to represent a signal by its component frequencies.
The DCT uses a similar concept to reduce the gray-scale or color signal amplitudes to equations that require very few points to locate each amplitude, with the Y-axis locating amplitude and the X-axis locating frequency.
o DCT Coefficients
The output amplitudes of the set of 64 orthogonal basis signals are called DCT coefficients.
o Quantization
Each DCT coefficient is divided by its quantization step size and rounded to the nearest integer; this many-to-one mapping is the lossy step of the scheme.
o De-quantization
This process is the reverse of quantization. Note that since quantization used a many-to-one mapping, the information lost in that mapping cannot be fully recovered.
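A minimal sketch of this round trip on one 8 × 8 block, using SciPy's DCT routines (an assumed dependency) and a single uniform step size in place of JPEG's per-frequency quantization table:

    import numpy as np
    from scipy.fft import dctn, idctn

    # An 8 x 8 block of gray-scale samples (a smooth horizontal ramp).
    block = np.tile(np.arange(0, 160, 20), (8, 1)).astype(float)

    # Forward DCT: 64 coefficients, energy concentrated in the corner.
    coeffs = dctn(block, norm="ortho")

    # Quantization: divide by a step size and round (many-to-one, lossy).
    step = 16.0
    quantized = np.round(coeffs / step)

    # De-quantization: multiply back; the rounding error is not recoverable.
    restored = idctn(quantized * step, norm="ortho")
    print(np.abs(block - restored).max())   # small, but not zero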
Huffman Coding
Huffman coding requires that one or more sets of Huffman code tables be specified by the application, for encoding as well as decoding. The Huffman tables may be predefined and used within an application as defaults, or computed specifically for a given image.
The key steps of forming the DCT coefficients and quantizing them are the same as for the baseline sequential codec. The key difference is that each image component is coded in multiple scans instead of a single scan.
The hierarchical mode provides a means of carrying multiple resolutions. Each successive
encoding of the image is reduced by a factor of two, in either the horizontal or vertical dimension.
JPEG Methodology
The JPEG compression scheme is lossy and utilizes a forward discrete cosine transform (forward DCT), a uniform quantizer, and entropy encoding. The DCT function removes data redundancy by transforming data from the spatial domain to the frequency domain; the quantizer quantizes the DCT coefficients with weighting functions to generate quantized coefficients optimized for the human eye; and the entropy encoder minimizes the entropy of the quantized DCT coefficients.
The JPEG method is a symmetric algorithm. Here, decompression is the exact reverse
process of compression.
Quantization
The baseline JPEG algorithm supports four color quantization tables and two Huffman tables each for the DC and AC DCT coefficients. The quantized coefficient is described by the following equation:
F_Q(u, v) = round(F(u, v) / Q(u, v))
where F(u, v) is a DCT coefficient and Q(u, v) is the corresponding quantization table entry.
Zig-Zag Sequence
Further empirical work proved that the length of zero-value runs can be increased, giving a further increase in compression, by reordering the runs. JPEG therefore orders the quantized DCT coefficients in a zig-zag sequence.
Entropy Encoding
Entropy is a term used in thermodynamics for the study of heat and work. Entropy, as used in data compression, is the measure of the information content of a message in number of bits. It is represented as
H = -Σ p_i log2(p_i)
where p_i is the probability of occurrence of symbol i.
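A minimal sketch computing this quantity for a short message:

    import math
    from collections import Counter

    def entropy_bits(message):
        # Shannon entropy H = -sum(p_i * log2(p_i)), in bits per symbol.
        counts = Counter(message)
        n = len(message)
        return -sum((c / n) * math.log2(c / n) for c in counts.values())

    msg = "AAAABBC"
    h = entropy_bits(msg)       # about 1.38 bits per symbol
    print(h, h * len(msg))      # total information content in bits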
a. Two
b. Three
c. Four
3. ————— audio video refers to an on-demand request for compressed audio video
files.
a. Streaming Live
b. Streaming Stored
c. Interactive
4. —————— audio video refers to broadcasting of radio and tv programs on the internet.
a. Interactive
b. Streaming Stored
c. Streaming Live
5. —————— audio video refers to the use of the internet for interactive audio/ video
applications.
6. In ————— encoding the difference between the samples are encoded instead of
encoding all sample values.
7. What is the process that condenses files to be stored in less space and therefore, sent
faster over the Internet?
a. Data condensation
b. Data compression
c. Zipping
d. Defragmentation
To digitize and store a 10-second clip of full-motion video in your computer requires
transfer of an enormous amount of data in a very short amount of time. Reproducing just one
frame of digital video component video at 24 bits requires almost 1MB of computer data; 30
seconds of video will fill a gigabyte hard disk. Full-size, full-motion video requires that the
computer deliver data at about 30MB per second. This overwhelming technological bottleneck
is overcome using digital video compression schemes or codecs (coders/decoders). A codec
is the algorithm used to compress a video for delivery and then decode it in real-time for fast
playback.
Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak,
Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress digital video
information. Compression schemes use Discrete Cosine Transform (DCT), an encoding
algorithm that quantifies the human eye’s ability to detect color and image distortion. All of
these codecs employ lossy compression algorithms.
The development of digital video technology has made it possible to use digital video
compression for a variety of telecommunications applications. Standardization of compression
algorithms for video was first initiated by CCITT for teleconferencing and video telephony.
Applications using MPEG standards can be symmetric or asymmetric. Symmetric
applications are applications that require essentially equal use of compression and
decompression. Asymmetric applications require frequent decompression.
Symmetric applications require on-line input devices such as video cameras, scanners
and microphones for digitized sound. In addition to video and audio compression, this standards
activity is concerned with a number of other issues concerning the playback of video clips and sound clips. The MPEG standard has identified a number of such issues that have been
addressed by the standards activity.
Random Access
Multimedia systems are expected to provide the ability to play a sound or video clip from any frame within that clip, irrespective of what kind of media the information is stored on.
VCR paradigm
The VCR paradigm consists of the control functions typically found on a VCR, such as play, fast forward, rewind, and search forward and backward.
The linear quantizer uses a step algorithm that can be adjusted based on picture quality and coding efficiency. H.261 is a standard that uses a hybrid of DCT and DPCM (Differential Pulse Code Modulation) schemes with motion estimation.
It also defines the data format. Each MB (macro block) contains the DCT coefficients (TCOEFF) of a block, followed by an EOB (a fixed-length end-of-block marker). Each MB consists of an MB header followed by block data; a GOB (Group of Blocks) consists of a GOB header followed by MBs; and the picture layer consists of a picture header followed by GOBs. H.261 is designed for dynamic use and provides a fully contained organization and a high level of interactive control.
The MPEG-2 suite of standards consists of standards for MPEG-2 Video, MPEG-2 Audio and MPEG-2 Systems. It is also defined at different levels, called profiles.
The main profile is designed to cover the largest number of applications. It supports digital video compression in the range of 2 to 15 Mbits/sec. It also provides a generic solution for television worldwide, including cable, direct broadcast satellite, fiber optic media, and optical storage media (including digital VCRs).
The above requirements can be met in part by incremental coding of successive frames, which is known as interframe coding. Accessing information randomly by frame, however, requires coding confined to a specific frame, which is known as intraframe coding.
The MPEG standard addresses these two requirements by providing a balance between interframe coding and intraframe coding. The MPEG standard also provides for recursive and non-recursive temporal redundancy reduction.
The MPEG video compression standard provides two basic schemes: discrete-transform-
based compression for the reduction of spatial redundancy and block-based motion
compensation for the reduction of temporal (motion) redundancy. During the initial stages of
DCT compression, both the full motion MPEG and still image JPEG algorithms are essentially
identical. First an image is converted to the YUV color space (a luminance/chrominance color
space similar to that used for color television). The pixel data is then fed into a discrete cosine
transform, which creates a scalar quantization (a two-dimensional array representing various
frequency ranges represented in the image) of the pixel data.
The MPEG algorithm for spatial reduction is lossy and is defined as a hybrid which employs motion compensation, a forward discrete cosine transform (DCT), a uniform quantizer, and
Huffman coding. Block-based motion compensation is utilized for reducing temporal redundancy
(i.e. to reduce the amount of data needed to represent each picture in a video sequence).
Motion-compensated reduction is a key feature of MPEG.
Moving pictures consist of sequences of video pictures or frames that are played back at a fixed number of frames per second. To achieve the requirement of random access, a set of
pictures can be defined to form a group of pictures (GOP) consisting of one or more of the
following three types of pictures.
A GOP consists of consecutive pictures that begin with an intra-picture. The intra-picture is coded without any reference to any other picture in the group.
Predicted pictures are coded with a reference to a past picture, either an intra-picture or a unidirectionally predicted picture. Bi-directionally predicted pictures are never used as references.
Motion Compensation for Coding MPEG
Let us review the concept of macro blocks and understand the role they play in compression.
MACRO BLOCKS
For the video coding algorithm recommended by the CCITT, CIF and QCIF frames are divided into a hierarchical block structure consisting of pictures, groups of blocks (GOBs), macro blocks (MBs), and blocks. Each picture frame is divided into 16 × 16 blocks. Each macro block is composed of four 8 × 8 luminance (Y) blocks and two 8 × 8 chrominance (Cb and Cr) blocks.
This set of six blocks, called a macro block, is the basic hierarchical component used for achieving a high level of compression.
Motion compensation
Motion compensation is the basis for most compression algorithms for visual telephony
and full-motion video. Motion compensation assumes that the current picture is some translation
of a previous picture. This creates the opportunity for using prediction and interpolation.
Prediction requires only the current frame and the reference frame.
Based on the motion vector values generated, the prediction approach attempts to find the relative new position of the object and confirms it by comparing blocks exhaustively. In the interpolation approach, the motion vectors are generated in relation to two reference frames, one from the past and one from the next predicted frame.
The best-matching blocks in both reference frames are searched, and the average is
taken as the position of the block in the current frame. The motion vectors for the two reference,
frames are averaged.
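A minimal sketch of the exhaustive block search that underlies motion estimation, using NumPy (an assumed dependency): for one 16 × 16 block of the current frame, it finds the displacement in the reference frame with the smallest sum of absolute differences (SAD):

    import numpy as np

    def best_match(reference, block, top, left, search=4):
        # Find where `block` from the current frame best matches the
        # reference frame, within +/- `search` pixels of (top, left).
        n = block.shape[0]
        best, best_vec = np.inf, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = top + dy, left + dx
                if y < 0 or x < 0 or y + n > reference.shape[0] or x + n > reference.shape[1]:
                    continue  # candidate block falls outside the frame
                candidate = reference[y:y + n, x:x + n]
                sad = np.abs(candidate.astype(int) - block.astype(int)).sum()
                if sad < best:
                    best, best_vec = sad, (dy, dx)
        return best_vec  # the motion vector

    # A frame whose content shifts 3 pixels right between the two frames:
    ref = np.random.randint(0, 256, (64, 64), dtype=np.uint8)
    cur = np.roll(ref, 3, axis=1)
    print(best_match(ref, cur[16:32, 16:32], 16, 16))   # -> (0, -3)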
MPEG Encoder
Figure 8.7 below shows the architecture of an MPEG encoder. It contains a DCT quantizer, a Huffman coder and motion compensation. These represent the key modules in the encoder.
The pixel data is then fed into a DCT, which creates a scalar quantization of the pixel
data.
MPEG -2
The MPEG-2 Standard Supports:
* Multichannel sound.
3. Multiplexing: MPEG-2 definitions
It consists of the following companies: AT&T, MIT, Philips, Sarnoff Labs, General Instrument (GI), Thomson, and Zenith.
The MPEG-2 committee and the FCC formed this alliance. Together, these companies have defined an advanced digital television system that includes the US and European HDTV systems. The outline of the advanced digital television system is as follows:
Vector Quantization
In image compression, source samples such as pixels are blocked into vectors so that
each vector describes a small segment or sub block of the original image.
Developed by Intel Architecture Labs, Indeo Video is a software technology that reduces the size of uncompressed digital video files by a factor of five to ten.
Indeo technology uses multiple types of 'lossy' and 'lossless' compression techniques.
DVI/Indeo
CD-ROMs provide an excellent distribution medium for computer-based video: they are
inexpensive to mass produce, and they can store great quantities of information. CD-ROM players offer slow data transfer rates, but adequate video transfer can be achieved by taking
care to properly prepare your digital video files.
Limit the amount of synchronization required between the video and audio. With Microsoft’s
AVI files, the audio and video data are already interleaved, so this is not a necessity, but
with QuickTime files, you should “flatten” your movie.
Flattening means you interleave the audio and video segments together.
Use regularly spaced key frames, 10 to 15 frames apart, and temporal compression can
correct for seek time delays. Seek time is how long it takes the CD-ROM player to locate
specific data on the CD-ROM disc. Even fast 56x drives must spin up, causing some
delay (and occasionally substantial noise).
The size of the video window and the frame rate you specify dramatically affect
performance. In QuickTime, 20 frames per second played in a 160X120-pixel window is
equivalent to playing 10 frames per second in a 320X240 window. The more data that has
to be decompressed and transferred from the CD-ROM to the screen, the slower the
playback.
Audio consists of analog signals of varying frequencies. The audio signals are converted
to digital form and then processed, stored and transmitted. Schemes such as linear predictive
coding and Adaptive Differential Pulse Code Modulation (ADPCM) are utilized for compression
to achieve 40-80% compression.
Audio compression is a form of data compression designed to reduce the size of audio
data files.
(i) Audio Data Compression - in which the amount of data in a recorded waveform is reduced
for transmission. This is used in MP3 encoding, internet radio, and the like.
(ii) Audio level compression - in which the dynamic range (difference between loud and
quiet) of an audio waveform is reduced. This is used in guitar effects racks, recording
studios, etc.
The compressed bit stream can support one or two audio channels in one of four possible modes:
A monophonic mode for a single audio channel,
A dual-monophonic mode for two independent audio channels (for example, a bilingual programme),
A stereo mode for stereo channels with a sharing of bits between the channels, but no joint-stereo coding, and
A joint-stereo mode that takes advantage of either the correlations between the stereo channels or the irrelevancy of the phase difference between channels, or both.
The compressed bit stream can have one of several predefined fixed bit rates ranging
from 32 to 224 kbits/sec per channel. Depending on the audio sampling rate, this translates
to compression factors ranging from 2.7 to 24. In addition, the standard provides a “free”
bit rate mode to support fixed bit rates other than the predefined rates.
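As a worked example of these compression factors (assuming a 16-bit PCM source; the exact factor depends on the audio sampling rate, which is why the document quotes a range):

# Compression factor = PCM source rate / compressed rate.
def compression_factor(sample_rate_hz, bits_per_sample, compressed_kbps):
    source_kbps = sample_rate_hz * bits_per_sample / 1000
    return source_kbps / compressed_kbps

print(compression_factor(48000, 16, 32))   # 768 kbit/s -> 32 kbit/s: factor 24.0
print(compression_factor(48000, 16, 224))  # 768 kbit/s -> 224 kbit/s: factor ~3.4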
Layer I is the simplest and is best suited for bit rates above 128 kbits/sec per channel.
For example, Philips’ Digital Compact Cassette (DCC)[5] uses Layer I compression at 192
kbits/s per channel.
Layer II has an intermediate complexity and is targeted for bit rates around 128 kbits/s per channel. Possible applications for this layer include the coding of audio for Digital Audio Broadcasting (DAB), the storage of synchronized video-and-audio sequences on CD-ROM, and the full-motion extension of CD-Interactive, Video CD.
Layer III is the most complex but offers the best audio quality, particularly for bit rates
around 64 kbits/s per channel.
All three layers are simple enough to allow single-chip, real-time decoder implementations.
The coded bit stream supports an optional Cyclic Redundancy Check (CRC) error detection
code.
MPEG/audio provides a means of including ancillary data within the bit stream.
In addition, the MPEG/audio bit stream makes features such as random access, audio
fast forwarding, and audio reverse possible.
8.8 Summary
Compression is a way of making files take up less space.
Compression methods, otherwise known as algorithms, are the calculations used to compress files.
2. b. Three
3. b. Streaming Stored
4. c. Streaming Live
5. Interactive
6. Predictive
7. Data Compression
10. Explain the MPEG architecture and the different kinds of pictures used, with a neat sketch of the frames.
LESSON 9
WORKING EXPOSURE ON TOOLS
Structure
9.1 Introduction
9.4 Flash
9.5 Photoshop
9.6 Summary
9.1 Introduction
Dreamweaver allows users to preview websites in locally installed web browsers. It also has site management tools such as FTP/SFTP and WebDAV file transfer and synchronization features, the ability to find and replace lines of text or code by search terms and regular expressions across the entire site, and a templating feature that allows single-source update of shared code and layout across entire sites without server-side includes or scripting.
Adobe Flash (formerly Macromedia Flash) is a multimedia platform originally acquired
by Macromedia and currently developed and distributed by Adobe Systems.
Flash has become a popular method for adding animation and interactivity to web pages.
Flash is commonly used to create animation, advertisements, and various web page Flash
components, to integrate video into web pages, and more recently, to develop rich Internet
applications.
Adobe Photoshop, or simply Photoshop, is a powerful graphics editing program developed and published by Adobe Systems.
It is the current market leader for commercial bitmap and image manipulation software, and is
the flagship product of Adobe Systems.
Incorporate text, images, animation, sound, and video into Web pages
Create an accessible and full-featured Website with popular multimedia authoring tools, such as Adobe Dreamweaver, Flash, and Photoshop
Learn how to design and develop multimedia for real-time applications.
Adobe Dreamweaver is a software program for designing web pages, essentially a fully featured HTML web and programming editor. The program provides a WYSIWYG (what you see is what you get) interface to create and edit web pages. Dreamweaver supports many web languages, including HTML, XML, CSS, and JavaScript.
Purpose of Dreamweaver
Adobe Dreamweaver CC is a web design and development application that uses both a visual design surface known as Live View and a code editor with standard features such as syntax highlighting, code completion, and code collapsing, as well as more advanced features such as real-time syntax checking and code introspection.
Dreamweaver Features
In addition to the editing features described above, the code editor generates code hints to assist the user in writing code. Combined with an array of site management tools, Dreamweaver allows its users to design, code, and manage websites as well as mobile content. Dreamweaver is an Integrated Development Environment (IDE) tool: you can live-preview changes on the front end while you work. Dreamweaver is positioned as a versatile web design and development tool that enables visualization of web content while coding.
Dreamweaver, like other HTML editors, edits files locally then uploads them to the remote
web server using FTP, SFTP, or WebDAV. Dreamweaver CS4 now supports the Subversion
(SVN) version control system.
Since version 5, Dreamweaver supports syntax highlighting for the following languages
out of the box:
ActionScript
C#
ColdFusion
EDML
Java
JavaScript
PHP
Support for Active Server Pages (ASP) and JavaServer Pages was dropped in version CS5.
Users can add their own language syntax highlighting. In addition, code completion is available for many of these languages. The main features of Dreamweaver to consider are the following.
The root folder for our local site will become a “mirror” of the site that will be published online
Dreamweaver uses site information to track links and updates to your files
Select File > New Site; we’ll create a site called bookstore
To create a folder called Sites, with bookstore (and other sites) as subfolders
You can also establish a connection between local site and remote server, at site http://
Cache option can improve the speed of link and site management tasks
Refresh button automatically refreshes local site from remote site (but this takes time)
You can use the Site window to create a Site map (under Window pulldown menu)
OK, we can close this window for now and return to Dreamweaver workspace
On the left is the Object Palette, a set of toolbars, analogous to the Authorware toolbar
Objects are HTML elements that Dreamweaver will insert into your documents
Starts with Common tools; more available by clicking on the pulldown arrow at the top
On the upper right is Launcher—launches other programs (Site, Library, HTML Source
etc.)
At bottom is Page Properties, which lets you modify font, alignment, etc.
This dialog box lets you edit global properties of the page, such as background
Type Bookstore into Title (this will appear in browser’s title bar, bookmarks & favorites)
Design tip: descriptive titles make it easier for search engines to find your page
Click on the square next to Background Color and select a color (or import image)
You can also change the Text, Link, Visited Links, and Active Links colors
Left Margin and Top Margin specify page margins—in Microsoft IE, not Netscape
Click OK, then save your page by selecting File > Save As (as index.htm)
Type in “Mythical Bookstore” then highlight (and keep cursor in highlight area)
Use the right button to select Heading 1, change the font to Arial and the alignment to center
Our title looks OK, so place the cursor after the heading, hit Enter
Open the Property Inspector by selecting Window Properties (may need to hide
windows)
Property inspector will change when you highlight different elements on a page
Here we see the HTML source code that Dreamweaver has generated
You can use the HTML Source window to edit text, if you prefer this to WYSIWYG!
The Object palette (you can reopen this by selecting Window > Objects)
Holding the cursor over each icon on the palette opens a caption box explaining it
This toolbar lets you insert images, tables, horizontal rules, Java applets, Flash movies,
etc.
Use the mouse to place the cursor just after the headline and hit Enter
Click on the Image icon (a tree) on the Object palette to choose an image
Browse to the folder “images” and click on books.jpg, click select, then OK
Note: now the Property Inspector refers to how this image is embedded on the page
Click on the Align menu in the Property Inspector, then select Align Center
Now click on the image—now the Property Inspector refers to the image itself
(here, Align is for aligning a picture next to text, not on the page as a whole)
Let’s try another object from the object palette: Horizontal Rule
Place the cursor after the address, then select the Horizontal Rule button on Object palette
Use the Property Inspector to change the width to 75% (% via menu) and alignment to
center
Use Notepad to open books.txt, then paste it into the Dreamweaver document
Notice that any formatting in books.txt is now lost, including paragraph breaks: why?
Let’s use Dreamweaver to insert paragraph breaks, Heading 2 formats, alignment, etc.
Enter inserts a paragraph break (double-spaced break), Shift-Enter inserts a line break
Use HTML inspector to take a look at the code: <p> vs. <br>
Creating lists—you can create ordered lists (set off by numbers or letters)
Dreamweaver lets you create lists as you type, or you can highlight text and apply a list format
Let’s do it: click the Numbered List button in the Property Inspector (below I) (or choose Text > List from the menu bar)
Mouse-select all the text between “This Month’s Specials” and next horizontal rule
Demonstrate undo from Edit menu (note Ctrl-Z short-cut) and Redo (Ctrl-Y)
Mouse-select all the text between “On the CoffeeTable” and “This Month’s Specials”
Now we’ll see how to position images in relation to text (this is tricky in HTML!)
Place the cursor below the headline “On the CoffeeTable” then click Insert Image
tool on the Object Palette. Find books.jpg again
In Image Property Inspector, go to Align pulldown menu and select Absolute Middle
In Alt text box, type “books” – What does this do? Why is it useful?
Select File > Open to open page called “arica.htm” in the “catalog” folder
Click on the arrow in lower right corner of Property Inspector to bring up more options:
In the H Space text box in bottom left of Property Inspector, enter “10”
Note how the text now wraps on the right of the image
OK, we’ve got enough content, let’s create hyperlinks to other pages
Go back to coftable.htm window and select the words “The Arica Conundrum”
In Property Inspector, click on the Folder icon to right of Link text area
The text changes to indicate a link (un-highlight the text to see the actual link color)
Use HTML Source inspector to see what this action has created in HTML
One more time: create another link from “Varoom” in coftable.htm to catalog\varoom.htm
Note: links can have absolute paths (starting with “http:”) or relative paths (starting with /)
Use an absolute path to link to another web site, relative path to link to local site
What’s an advantage of relative paths? makes it easier to move entire web site
Place cursor next to “This month’s specials”, then choose Insert > Named Anchor
Now switch to arica.htm and insert cursor to the right of the book’s price
In the Link box, to the right of the filename, type “#specials”—thus specifying an
anchor
If you wanted to link to a Named Anchor in same page, you wouldn’t need file name
OK, let’s preview our work so far, in a browser, by selecting File > Preview (or pressing
F12)
In coftable.htm, click on graphic of books, then view the Map options in Property
Inspector
Recall, it’s in lower part of Property Inspector for images, accessible via arrow on
lower right
This part of the Image Property Inspector is the Image Map Editor:
The Image Map Editor can create hot spots of different shapes: rectangle, circle, polygon
Arrow on left is a selection tool that will let us move or resize the image map
Select the rectangle tool, create a rectangular shape covering the book Varoom
Hot spots have links to other pages: click on the Folder icon next to Link text field
Dreamweaver Pros
Dreamweaver is an intuitive and flexible tool that does a lot of things right. Here are
some of the biggest advantages this software can offer you:
With Dreamweaver, you’ll have an instant preview option that lets you test and see how
your website will look across any device. A lot of other tools have this feature. But, with
Dreamweaver, all it takes is a single click to preview and adjust your site on the fly.
When you’re tweaking your code or writing it from scratch, errors will accumulate over time. With Dreamweaver, you can quickly find and fix these errors. Instead of having to guess and troubleshoot your site for errors, you’ll know what’s wrong and how to fix it.
Designing your site can be a lot of fun. Especially when you start tweaking things like
color, layout, font choice, and more. Dreamweaver has a massive font selection built right into
the software. This makes it easy to find the perfect font for your headers and body text.
If you’ve ever published anything online you know how long the stock photo search can
take. Instead of having to search across a variety of stock photo websites you can do it right
within the tool. There’s a massive selection and you’ll be able to find the perfect photo for your
needs.
When you first start using Dreamweaver you might be overwhelmed with all the different
tools and options available. But, you can actually streamline the appearance and use of the
site builder by changing the preferences. Once you know what you use and what you don’t you
can craft the appearance of the builder to suit your needs.
Dreamweaver Cons
Still, Dreamweaver isn’t perfect. If you’re not willing to put in the work of learning how this
software works, you might be better off with a different solution. Here are some of the biggest
drawbacks of Dreamweaver:
There are other site builder solutions, like Squarespace, Wix, and WordPress, that make it incredibly easy to build out your first website. Creating a basic site and getting it online with Dreamweaver isn’t too difficult, but creating a site that can do exactly what you want will take some time.
Since you’re starting with a blank canvas the end result will depend upon your own
creativity and skills. Some users prefer this, but others would prefer a simpler solution that
requires absolutely no coding skills. You can do a lot with Dreamweaver, and a lot of experienced
developers prefer using this software for the flexibility it provides.
1. How many Sites can you define with one copy of Dreamweaver installed on your
computer?
a. unlimited
b. 2
c. 10
d. 999
2. What do you add to a template in order to control where page content goes?
a. Text Frames
b. HTML Controllers
c. Editable Regions
d. Page Content Controllers
a. Linked
b. Embedded
c. Inline
d. Orthogonal
a. CSS
b. Layers
c. Frames
9.4 Flash 5
Flash is one of the most popular technologies on the internet, with thousands of websites using it for introductions, animations and advertisements. Although many people feel that these animations are sometimes unnecessary, Flash has created a way of including multimedia on web pages which will run over a standard internet connection. The recent release, Flash 5, brought many changes to the creation of Flash animations. Many of the techniques covered in this lesson also apply to past versions of Flash, as well as to Flash MX, the very latest version. To view Flash animations in a browser you will need the Flash plug-in.
Flash is one of the best multimedia formats on the internet today for several reasons.
Firstly, the Flash plugin (required to view the animations) is installed on nearly every computer
connected to the internet. All the major browsers come with it installed by default and, for those
who don’t have it, the download is very small. Secondly, Flash is a ‘vector based’ program,
which means the animations and graphics created by it have much smaller file sizes than a
video or streaming media version of the same animation would be. You can also include sound,
graphics and dynamically created information in your animation.
When you first open Flash you will find an interface that looks something like this:
In the center is the large white ‘Stage’. This is the actual movie where you will place all
the objects you want to include in it. Across the top of the screen is the timeline. This is where
you insert all the actions that happen in your movie so that they happen at the correct times. It
is split up into frames. Down the left hand side of the screen is the ‘Tools’ palette. This is where
you will find all the tools for inserting objects and text into your animation.
There are also four floating palettes on the screen. The ‘Mixer’ palette allows you to
choose the colors you will be using in your animation. It will change the colors of the currently
selected object. The ‘Info’ palette will allow you to find out a bit of information about the object
you have selected and will allow you to make changes to the properties of a tool you are using.
The ‘Character’ palette contains all the text formatting tools. Finally the ‘Instance’ palette
contains all the tools for changing objects when you are animating them, including sound and
several other tools for making changes to your animation.
Each of the parts of the Flash window does many different things. Instead of going through
each tool explaining what it does, I will show you examples and explain how to create them,
showing you how to use each tool while doing so.
The first thing you need to learn how to do is to draw basic shapes in Flash. We will start
with the most basic shape, the circle/oval. Before you start you might want to move some of the
floating palettes so that you can see enough of the stage to work on.
Firstly, choose the Oval tool from the Tools bar on the left:
Then, draw the oval or circle you want on the stage (just as you would in a normal
graphics program).
Tool and Purpose
Rectangle tool: unlike the oval, it has some options which can be set.
Line tool: draws straight lines.
Fills: when filling a shape with a single color, you can also use Flash’s premade fills.
Dropper tool: picks a color off one part of the screen and uses it as the fill or line color.
Paintbrush tool: paints lines for as long as you have the mouse button held down.
Character palette: contains all the text formatting tools.
Symbols
In order to animate something in Flash it must first be changed into a Symbol. There are
three types of symbol: Graphic, Button and Movie.
To start, draw a filled circle in the middle of the screen, a few centimeters high. Choose
the arrow tool and double click on the circle to select it and the line around it. Then press F8 on
the keyboard. You will get a window called Symbol Properties. In this window you can give a
name to your symbol so that you can refer to it later. Type ‘Circle’ (without the quotes) in the
box and then select Graphic and click OK.
You will now notice that the circle appears with a blue line around it. The next thing you
will want to do is to animate this circle.
The Timeline
The timeline window shows all the frames that make up your animation and all the layers
(which will be covered later). Each small box in the timeline is a frame. The animation runs at
12 frames per second (shown at the bottom) as standard but this can be changed. As you can
see above, there is a black dot in the first frame. This signifies that it is a key frame.
Key frames
Key frames are very important in flash as they are used whenever something is changed.
For instance if you wanted the circle to appear in another position later in the movie you would
create a key frame in the frame where you want it to change and then you could move the
circle without affecting the rest of the movie. That is exactly what you are going to do now.
Right click in frame 50 on the timeline and choose Insert Key frame. This will insert a new
key frame into the animation at frame 50 and it will contain the same information as the previous
key frame. You could have also chosen Blank Key frame which will make a new blank key
frame but you want the circle to be in both key frames in your movie.
Now, click in frame one and press Enter to play the movie. As you can see you now have
a 4.1 second long movie of a circle in the middle of the screen - not very interesting.
To make something happen you will need to change the second key frame. Click on it
(frame 50) and the symbol of the circle will be selected. Now, with the arrow tool, click and drag
the circle to the upper left hand corner of the stage. Then click in frame one again and press
Enter to play the movie.
Animation
The movie you have created now has a circle which moves on the screen but, as you will have noticed, it stays in the same place and then suddenly moves in the last frame. Animations, like television and film, are made up of many frames, each of which has a slight change from the last one. As they are played very fast (12 frames per second in Flash) the object looks like it is moving. Luckily, Flash has been built so that you don’t have to do all of this manually.
Actually, animating the circle on the screen is amazingly easy because of the Flash
feature called Motion Tweening. This feature will automatically create all the frames to go
232
between two key frames to animate an object which you have moved (in this case the circle).
All you have to do is right click in any frame between your two key frames and choose Create
Motion Tween.
Once you have done this, the frames will change from being grey to being blue with an arrow across them. This signifies a motion tween. Click in frame one and press Enter to view your movie. As you can see, Flash has now made your circle move smoothly across the screen and, if you click in the frames between your key frames, you will see that it has created all the frames in between.
Scaling
Motion Tweens can be used for other things as well as moving objects. You can also
change their size. For this you will use the Scale tool. Right click in frame 80 and create a new key frame. Your circle will be selected. Now choose the Scale tool from the Options section on the tools palette (if it is not available make sure you have the black pointer tool selected).
This tool allows you to change the size of objects. 6 white boxes will appear at the edges
of the circle, just like in any other image application. Use the bottom right hand one to drag the
circle size until it is considerably larger. You will also notice that the circle grows equally around
its center point. Now, as before, right click in between frames 50 and 80 and choose Create
Motion Tween.
Rotation
Resizing a symbol is not the only thing you can do to it. Symbols can also be rotated. To
do this create a movie and draw a large red square in the middle. Then, select the square and
make it a symbol (F8). Give it a name and choose Graphic as the type. Then go to frame 30
233
and insert a key frame. In this new key frame choose the black arrow from the Tools menu and
then click on the Rotation option:
which is found next to the Scale option under the Options section for the arrow. Then
click on one of the white handles that appear round the rectangle and drag the mouse until the
rectangle rotates to about 90 degrees (or any rotation). Then all you have to do is right click
between frames 1 and 30 and choose Create Motion Tween to animate your rotation.
Animating Text
Text, like images, can be made into symbols and animated in exactly the same way as images can. The technique is exactly the same except for one difference: even when it is a symbol you can still edit text by double clicking on it. Apart from this you can rotate it, scale it, move it and perform any other changes that you normally could.
Importing Images
You can import any graphic into Flash and then use it as if it had been
created in Flash. This is especially useful for pictures such as photos which could not be
created using Flash’s graphics tools. To import an image is very simple: just go to File then
Import... find the image on your hard drive and then click Open. The image will then appear on
the stage. You can now resize it and make it a symbol if you want to.
Multiple Animation
You don’t have to change only one thing at a time when you animate a symbol. For example, create a symbol of a square. Create a key frame at frame 30. Then click on the symbol in frame 30. Use the Scale tool to make it much bigger. Then use the Rotate tool to turn it to a different angle. Finally, use the Effect palette to set the Alpha at 100%. Now go back to frame 1 and click on the same square. Go to the Effect palette and set the Alpha to 0. Then create a motion tween between frames 1 and 30 and play your movie. You now have a square which rotates and
grows while fading in.
Layers
One major feature of Flash is that, like several image editing programs, everything you
do is put into layers on the screen. Layers are like pieces of transparent plastic. You can put pictures, text and animation on them. Layers higher up have their content over the top of layers lower down. So far, though, everything you have done has been contained in one layer.
Layers are controlled through the timeline, which you have seen before:
As you can see, there is only one layer in this animation, Layer 1. The first thing you
should do is to rename this layer. Although your animation will work with it being called Layer 1, it is much easier to understand what you are doing if you use proper names for your layers. Double-click on the name and type in Scrolling Text.
Now you will want to put in some content for this layer. Create a piece of text, make it a symbol, and move it right off the left edge of the stage. Insert a key frame
at frame 120 and in that frame move the text to off the other side of the stage. Now make a
motion tween. Your text should ‘scroll’ across the screen.
Now you can add another layer. Click the little icon on the timeline with a + sign on it. This will add a new layer above the one you are currently using. Rename this layer to Circle.
In this layer make a circle which is very small, make it a symbol and then animate it to
grow much bigger in 120 frames.
This should show you that when you create a second layer it is completely independent of the other layers, but that layers above another layer overlap it.
Inserting Actions
In the last part I showed you how to use an action with a button so that it was triggered
when the button was clicked. Actions can also be added to frames and to other mouse events
on the button. Firstly I will cover the buttons. If you haven’t done so already make a simple
button for your animation and right click on it and choose Actions. The actions window (which
you first used last week) will appear. It has two windows. The one on the right contains the
hundreds of actions you can add. The one on the left contains the code (like programming
code). Choose an event (like Go To) and double click it to add it to the code. This is as far as
you did in the last part.
What you didn’t learn last week was that you can change what triggers the action. There
are several options for this. To access them click on the part of the code which says:
on (release) {
A new section will now appear at the bottom of the box with the options for this part of the
code (in Flash you write code by selecting options). You can choose what triggers the action.
As you can see it is currently set as Release so when the mouse button is released the action
will happen. This is fine for clicks but you may want to use some of the other triggers. To
change the trigger just deselect the old one and select a new one.
You can also trigger actions when a frame loads. Right click in any key frame and choose
Actions. This is exactly the same as the button Actions box except when you add an action
there will be no:
on() {
The Play and Stop actions have no parameters. One plays the movie from the current
point and the other stops it (although it remains at its current position).
Toggle High Quality will switch the movie between high and low quality. Stop All Sounds
will stop all the sounds currently playing in the movie. Neither of these have any parameters
which can be set.
Get URL
This can be used for both frames and buttons. Basically, when clicked, it will point the
browser to the specified URL. The URL is specified in the parameters for the action. You can
also choose the window for the new page you are opening. This is the same as target in HTML.
_blank will open in a new window and you can specify the name of a frame in here (if you are
using them). The Variables option allows you to send the variables set in a form in your movie
just like an HTML form (this is good for Submit buttons). You can choose between the standard
POST and GET options.
If Frame Is Loaded
If Frame Is Loaded is a complex but very useful command. It is used to make the ‘loading’ parts at the beginning of some Flash movies. It is used like an IF statement in a program. Double click the If Frame Is Loaded action to add it to the code, then double click the Go To action. You now have a small IF block.
Firstly you should set the parameters for If Frame Is Loaded. Click on this part of the
code. You will see that this is very similar to the Go To parameters. Here you select the frame
you want to use. When this code is run it will check whether the specified frame has loaded yet,
if it has then it will execute the code within the { and }.
Many Flash animations on the internet, especially ones with a lot of sound and images, will not start playing smoothly while they are still loading. For these, most people add a ‘loading’ part to their movie. This is actually a few frames which will repeat until the movie is loaded. They are quite easy to make.
Firstly choose how many frames you will want for your ‘loading’ section. Something like
10 frames is about right. Create the part of the animation you want to loop in these frames. In
the last frame of the ‘loading’ section, right click to access the Actions menu. Double click on If Frame Is Loaded and then immediately afterwards double click on Go To. Then click on the final } in the code and double click the Go To action again. You should now have the following code:
ifFrameLoaded (1) {
    gotoAndPlay (1);
}
gotoAndPlay (1);
This is the code which will do the ‘loading’ part. First, click on ifFrameLoaded(1) and, in the parameters, choose the frame you want to wait for. Click on the first gotoAndPlay(1) and choose frame 11 (if you used 10 frames for your ‘loading’ sequence).
Finally, for the last gotoAndPlay(1) choose the first frame in your animation.
This is actually a very basic program, showing how easily complex programs can be
written using Flash’s actions. What the code actually does is to check if the specified frame is
loaded. If it is it goes to the first frame of the actual animation. Otherwise, it returns to the
beginning and plays the ‘loading’ sequence again.
Importing Sounds
Before sounds can be used in your animation they must be first made available for the
software to use. To do this you must use the standard Import menu. To access this go to File,
Import. From here you can select each file you want to import (just as you did in an earlier part
with images). The only confusing thing about this is that once you have imported your sound
you will see nothing special on the screen. This is because the sound has been added to the
library.
The Library
The library is not only for sound, but is actually one of the most useful parts of Flash
when you start to create large movies with many elements. Basically, the library contains all the
objects you have in your movie, the three types of symbol (movie, button and graphic) and all
sounds. This can be very useful as, to add something to the movie from the library you just
drag it to the position you want it on the stage.
You can also preview all the objects here, graphics will appear in the top window and if
you select a button, sound or movie clip you can watch or listen to it by clicking the little play
button which appears in the preview window. You should be able to see and preview any
sounds you have added here.
Adding Sound
Sound is added using the sound palette. This is in the same palette as Instance, Effect
and Frame. If it is not on the screen go to Window and choose Panels, Sound. The sound
palette will probably be ‘greyed out’ at first. Insert a key frame into your movie and click in it to
make all the options available.
In the first ‘Sound’ box you can select the sound you want to play. If no sounds appear
here, you have not yet imported any into your movie. Check the Library to see if any appear.
Now the effect box will be available. This is not particularly important just now. The next
box is the Sync box. You can choose Event, Start, Stop and Stream. The only ones you really
want to learn about at the moment are Event and Stream. They each have their own features.
Stream
Streaming sounds work like streaming audio on the internet. The sound does not need to
be fully loaded before it starts playing, it will load as it plays. Streaming sounds will only play for
the number of frames that are available for it (until the next keyframe). This is fine for background
sounds but it will not work very well for a button.
Event
Event sounds are mainly used for when something happens in your movie. Once they
have started playing they will continue until they end, whatever else happens in the movie. This
makes them excellent for buttons (where the button moves on to another frame as soon as it is
clicked). The problem with Event sounds, though, is that they must fully load before they can
play.
It is much easier to manage your sounds if you put them on a separate layer. Insert a new
layer and place a key frame at the beginning. Using the sounds palette select the sound you
want to play and select Stream from the Sync. If you want the sound to loop change the value
in the Loops box (for some reason a value of 0 (default) will cause the sound to play once).
Now you must insert some frames to give the sound time to play. Insert a frame (or key
frame) at about frame 500 in the movie (this will give most sounds time to play). When working
out how many frames are needed remember that the movie will play at 12 frames per second.
A graphical representation of the sound will appear in the frames it will be playing during so that
you can see how much space it takes up. Press CTRL + Enter to preview your movie.
With the sound on a separate layer you can have your movie running on other layers
while the sound plays in its own layer.
Adding an event sound to a button is nearly as easy as adding a streaming sound. Either
create a button or use a pre-made one, right click on it and choose Edit. This will allow you to see the 4 different states of the button (as you learned about in part 6). Usually sounds are placed in the Over or Down frames. To make a sound play when the mouse moves over the button choose Over, and to hear it when the button is clicked choose Down.
Insert a new frame and then put a key frame for the button state you want to use. Click in
the frame and use the sounds palette to add an Event sound. You don’t need to put in any extra
frames as an event sound will play until it finishes. Now return to the movie and use CTRL +
Enter to test it with the button.
Effects
The effects option allows you to add a variety of effects to the sound as it plays. The preset ones are quite self-explanatory, but you can also use the Edit button to create your own.
This will open a window with the waveform representation of the sound (left speaker at the top,
right at the bottom). On the top of this is a line which is the volume control (the top is full volume
(the volume the sound was recorded at) and the bottom is mute). By clicking in the window you
can insert little squares. The line goes between these squares. You can also drag them around
the screen. By doing this you can change the volume of the sound at different points throughout
its playing time, and make it different for each speaker.
a. Tweening
b. Shape Tween
c. Motion Tween
d. Transition
a. Text Box
b. Text Tool
c. HTML
d. Key frames
a. For Graphics
b. For Animation
c. For Programming
d. For Typing
a. Bmp
b. Tiff
c. Psd
d. Txt
a. Page maker
b. Painting
c. Photoshop
d. All of these
9.5 PHOTOSHOP 7
History
In 1987, Thomas Knoll, a PhD student at the University of Michigan began writing a
program on his Macintosh Plus to display grayscale images on a monochrome display. This
program, called Display, caught the attention of his brother John Knoll, an Industrial Light &
Magic employee, who recommended that Thomas turn it into a full-fledged image editing
program. Thomas took a six-month break from his studies in 1988 to collaborate with his
brother on the program. Thomas renamed the program ImagePro, but the name was already
taken. Later that year, Thomas renamed his program Photoshop and worked out a short-term
deal with scanner manufacturer Barneyscan to distribute copies of the program with a slide
scanner; a “total of about 200 copies of Photoshop were shipped” this way.
During this time, John traveled to Silicon Valley and gave a demonstration of the program
to engineers at Apple and Russell Brown, art director at Adobe. Both showings were successful,
and Adobe decided to purchase the license to distribute in September 1988.[8] While John
worked on plug-ins in California, Thomas remained in Ann Arbor writing code. Photoshop 1.0
was released in 1990 for Macintosh exclusively.
File format
Photoshop files have the default file extension .PSD, which stands for “Photoshop Document.” A PSD file stores an image with support for most imaging options available in Photoshop. These include layers with masks, color spaces, ICC profiles, CMYK mode (used for commercial printing), transparency, text, alpha channels and spot colors, clipping paths, and duotone settings. This is in contrast to many other file formats (e.g. .JPG or .GIF) that restrict content to provide streamlined, predictable functionality. A PSD file has a maximum height and width of 30,000 pixels and a file size limit of 2 gigabytes.
PHOTOSHOP DESKTOP
2. Menu Bar with several layers of drop down menus & dialogues
4. Navigator/Info/Color palettes allow zooming in and out, show information about the cursor point, and allow selection of colors
Zoom tool: found on the TOOLBOX - Used to zoom in and out on the image
To increase: -click on the ZOOM tool on the TOOLBOX -click on the image
To decrease: -click on the ZOOM OUT button on the OPTIONS BAR -click on the image
Rectangular Marquee:
- Click on the RECTANGULAR MARQUEE tool on the TOOLBOX -click and drag
diagonally inside the image window
-to select more than one rectangle at a time, hold down the SHIFT key while using the tool
Elliptical Marquee
-click and hold the RECTANGULAR MARQUEE tool on the TOOLBOX
-from the box that appears, select the ELLIPTICAL MARQUEE tool -click and drag
diagonally inside the image window
-to select more than one ellipse at a time, hold down the SHIFT key while using the tool
Lasso Tool
-click and drag to draw a selection until you get to the beginning and then release the
mouse
-from the box that appears, select the POLYGONAL LASSO tool
-click multiple times until you get to the beginning to create a border for the area selected
-from the box that appears, select the MAGNETIC LASSO tool -click on the edge of the
object you want to select, then continue dragging/clicking around it
-to adjust sensitivity, go to the options bar and change width, edge contrast, and frequency
Magic Wand
-type a number from 0-255 in the TOLERANCE field on the OPTIONS BAR -click the area/color to be selected
-to select more than one area at a time, hold down the SHIFT key while using the tool
Layers
• Layers work as several images, layered on top of one another. Each layer has pixels that
can be independently edited.
• Most Photoshop commands/tools work only on the layer you have selected.
• You can combine, duplicate, and hide layers in an image. You can also shuffle the order
in which the layers are stacked.
• Layers can have transparent areas, so that you can see the layers underneath. When
you cut or erase, the affected pixels become transparent. Also, you can change the
opacity of a layer.
• You MUST save files as a .PSD or a .TIFF to continue to work with the images. These are large file formats. When you are completely done editing your image, you can FLATTEN the layers into a single layer and save the file as a .JPG, .BMP, or .GIF.
If you can’t make additions to a layer you probably need to uncheck ‘Preserve
Transparency’.
A highlighted layer with a paintbrush is an active layer. It is the only layer that can be
altered.
At the bottom are the Effects Button, Layer Mask Button, Layers Folder Button, Adjustment
Layers Button, New Layer button and the Delete Layer button.
To Create a Layer:
-click WINDOWS-> LAYERS to show the LAYERS palette -click on the layer above which
you want to add the new layer -click on the NEW LAYER button
To Hide a Layer:
-click WINDOWS-> LAYERS to show the LAYERS palette -click a layer. click the EYE
icon for the layer. -the layer and the EYE icon will be hidden.
To Duplicate a Layer
-click on the layer you want to copy and drag the layer to the NEW LAYER button
To Delete a Layer
-click on the layer you want to delete. click on the DELETE LAYER button
Moving/Copying/Pasting
Moving a Selection
-use the MOVE tool to move the selection to another part of the layer
-using a selection tool, select where you want to paste the copied element -Click EDIT ->
PASTE in the menu bar
-the image is copied onto a new layer that can be moved independently of the original
image
-you can also copy selections from one image file to another one. Just copy in the old
window and then paste in the new window
Delete a Selection
Resizing an Image/Canvas/Selection
-the IMAGE SIZE DIALOG BOX opens, listing the current height, width, and resolution
of the image
-type a size for a dimension. If you want it to stay in the same proportion, make sure the Constrain Proportions option is checked
- the CANVAS SIZE DIALOG BOX opens, listing the current height and width of the image
- modify the direction that the program changes the canvas size by selecting an anchor
point
- click OK
-click and drag a corner handle on the selection to scale it on the horizontal and vertical axes
Rotate/Skew/Distort A Selection
To Rotate a Selection
-click and drag a corner handle on the selection to rotate the selection
To Skew a Selection
-click and drag a corner handle on the selection to skew along either the horizontal or vertical axis
To Distort a Selection
-click and drag a corner handle on the selection to distort along both the horizontal and vertical axes
9.6 Summary
There is also limited integration with some other applications. For example, you can export
an InDesign file as XHTML and continue working on it in Dreamweaver.
Flash is very popular in web design because you can create fantastic animations while still keeping file sizes low, so that sites load fast.
Adobe Photoshop CS6 software includes automated tools that slash the time needed for
selecting and compositing and live filters that boost the comprehensive, nondestructive
editing toolset of Photoshop.
Integrate Dreamweaver with Flash and Photoshop tools to simplify your web design workflow.
2. c. Editable Regions
3. d. Orthogonal
6. a. Tweening
7. b. Text Tool
8. a. True
9. a. For Graphics
10. c. .Psd
11. Layer
12. c. Photoshop
2. Explain the step-by-step procedure for setting up the working environment in Dreamweaver.
4. Define Flash.
7. Define Photoshop.
LESSON 10
THE INTERNET AND MULTIMEDIA
Structure
10.1 Introduction
10.5 Connections
10.8 Summary
10.10 Model Questions
10.1 Introduction
This lesson is designed to give you an overview of the Internet while describing particular features that may be useful to you as a developer of multimedia for the World Wide Web. URLs and other pointers are also included here to lead you to information for obtaining, installing, and using these applications and utilities.
Learn what a computer network is and how Internet domains, addresses, and
interconnections work
Understand the current state of multimedia on the Internet and tools for the World Wide
Web
The internet began as a research network funded by the Advanced Research Projects
Agency (ARPA) of the U.S. Defense Department, when the first node of the ARPANET was
installed at the University of California at Los Angeles in September 1969.
In 1985, the National Science Foundation (NSF) arranged with ARPA to support a
collaboration of supercomputing centers and computer science researchers across the
ARPANET. The NSF also funded a program for improving the backbone of the ARPANET,
increasing its bandwidth from 56 Kbps and branching out with links to international sites in
Europe and the Far East.
In 1989, responsibility and management for the ARPANET was officially passed from military interests to the academically oriented NSF, and research organizations and universities (professors and students alike) became increasingly heavy users of this ever-growing “Internet.”
Much of the Internet’s etiquette and rules for behavior (such as for sending e-mail and posting to newsgroups) was established during this time. More and more private companies and organizations linked up to the Internet, and by the mid-1990s, the Internet included connections to more than 60 countries and more than 2 million host computers with more than 15 million users worldwide.
Commercial and business use of the Internet was not permitted until 1992, but businesses have since become its driving force. By 2001 there were 109,574,429 domain hosts and 407.1 million users of the Internet, representing 6.71 percent of the world’s population. By the beginning of 2010, about one out of every four people around the world (26.6 percent) had access to the Internet, and more than 51 million domain names had been registered as “dot coms.”
Networking basics
In its simplest form, a network is a cluster of computers, with one computer acting as a
server to provide network services such as file transfer, e-mail, and document printing to
the client computers of that network.
Using gateways and routers, a local area network (LAN) can be connected to other LANs
to form a wide area network (WAN).
These LANs and WANs can also be connected to the Internet through a server that
provides both the necessary software for the Internet and the physical data connection.
Individual computers not permanently part of a network can dial up to one of these Internet
servers and, with proper identification and onboard client software, obtain an IP address
on the Internet.
Internet Addresses
Address Syntax
Internet addresses (URLs) use the following syntax:
protocol://server.domain/path/filename
For example, http://www.ycce.edu gives just the protocol and the server name; the server directory path and file name are left off. Other protocols used in URLs include mailto (for e-mail links) and news (for newsgroups).
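A short Python sketch shows how such an address breaks into its parts (the path here is hypothetical, added only for illustration):

# Picking apart the parts of an Internet address with Python's standard library.
from urllib.parse import urlparse

url = urlparse("http://www.ycce.edu/dept/index.html")  # hypothetical path
print(url.scheme)  # http              (the protocol)
print(url.netloc)  # www.ycce.edu      (the server and domain)
print(url.path)    # /dept/index.html  (directory path and file name)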
In 1983 the Domain Name System (DNS) was established to assign names and addresses
to computers which were linked to the internet.
Top-level domains were established as categories to accommodate all users of the Internet:
gov (U.S. federal government agencies; state and local agencies register in the country domain)
In 1998, the Internet Corporation for Assigned Names and Numbers (ICANN) was set up to oversee the DNS.
aero (air transport)
info (unrestricted use)
pro (accountants, lawyers, and other professionals)
biz (business)
museum (museums)
coop (cooperatives)
name (for individuals)
Many second-level domains contain huge numbers of computers and user accounts
representing local, regional, and even international branches as well as various internal business
and management functions. So the Internet addressing scheme provides for subdomains that
can contain even more subdomains. Within the education (.edu) domain containing hundreds
of universities and colleges, for example, is a second-level domain for Yale University called
yale. At that university are many schools and departments (medicine, engineering, law, business,
computer science, and so on), and each of these entities in turn has departments and possibly sub-departments and many users. These departments operate one or even several servers for managing IP traffic to and from the many computers in their group and to the outside world. At Yale, the server for the Computing and Information Systems Department is named cis. It manages about 11,000 departmental accounts—so many accounts that a cluster of three subsidiary servers was installed to deal efficiently with the demand.
These subsidiary servers are named minerva, morpheus, and mercury. Thus, minerva
lives in the cis domain, which lives in the yale domain, which lives in the edu domain. Real
people’s computers are networked to minerva. Other real people are connected to the morpheus
and mercury servers. To make things easy (exactly what computers are for), the mail system database at Yale maintains a master list of all its people.
So, as far as the outside world is concerned, a professor’s e-mail address can be simply [email protected]; the database knows he or she is really connected to minerva, so the mail is forwarded to that correct final address. In detailed e-mail headers, you may see the complete destination address listed, as well as the names of the computers through which the mail message may have been routed.
E-mail accounts are said to be “at” a domain (written with the @ sign). There are never any blank spaces in an Internet e-mail address, and while addresses on the Internet are normally case insensitive, conventional use dictates all lowercase: the Internet will find the same address written in lowercase, uppercase, or mixed case to be identical.
When a stream of data is sent over the Internet by the computer, it is first broken down
into packets by the Transmission Control Protocol (TCP).
Each packet includes the address of the receiving computer, a sequence number (“this is
packet #5”), error correction information, and a small piece of the data.
After a packet is created by TCP, the Internet Protocol (IP) then takes over and actually
sends the packet to its destination along a route that may include many other computers
acting as forwarders.
The 32-bit address included in a data packet, the IP address, is the “real” Internet address.
It is made up of four numbers separated by periods, for example, 140.174.162.10. Some of
these numbers are assigned by Internet authorities, and some may be dynamically assigned
by an Internet Service Provider (ISP) when a computer logs on using a subscriber’s account.
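The packetizing idea can be illustrated with a toy Python sketch (this is not the real TCP implementation, and real packets also carry error-correction information):

# Toy illustration of packetizing a data stream: each packet carries the
# destination address, a sequence number, and a small piece of the data.
def packetize(data, dest_ip, chunk_size=8):
    packets = []
    for seq, start in enumerate(range(0, len(data), chunk_size)):
        packets.append({
            "dest": dest_ip,
            "seq": seq,  # lets the receiver reassemble packets in order
            "payload": data[start:start + chunk_size],
        })
    return packets

for p in packetize(b"Hello, multimedia!", "140.174.162.10"):
    print(p)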
a. 192.168.1.1
b. www.apple.com
d. http://www.pages.net/index.html
a. HMT extension
b. HTML extension
c. THM extension
d. None of these
When a stream of data is sent over the Internet by your computer, it is first broken down
into packets by the Transmission Control Protocol (TCP). a) True b) False
10.5 Connections
If your computer is connected to an existing network at an office or school, it is possible
you are already connected to the Internet.
Check with your system administrator about procedures for connecting to the Internet
services such as World Wide Web; necessary browser software may already be installed
on your machine.
If you are an individual working from home, you will need a dial-up account to your office
network or to an Internet Service Provider or an online service.
You will also need a modem, an available telephone line, and software.
TCP/IP software
o Operating system may need to be configured to connect to the server and use TCP/IP
software.
o E-MAIL PROGRAMS
o WEB BROWSERS
o FTP SOFTWARE
o NEWS READERS
Bandwidth Bottleneck
Bandwidth is how much data, expressed in bits per second, you can send from one
computer to another in a given amount of time.
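A quick worked example makes bandwidth concrete (transfer time = file size in bits divided by bandwidth in bits per second):

# Transfer time for a file at a given bandwidth.
def transfer_seconds(file_bytes, bandwidth_bps):
    return file_bytes * 8 / bandwidth_bps

one_mb = 1_000_000
print(transfer_seconds(one_mb, 56_000))      # 56 kbps modem: ~143 s
print(transfer_seconds(one_mb, 10_000_000))  # 10 Mbps link:  ~0.8 s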
The faster your transmissions, the less time you will spend waiting for text, images,
sounds, and animated illustrations to upload or download from computer to computer, and the
more satisfaction you will have with your Internet experience.
Available bandwidth greatly affects how a person can use the internet.
Users with slow connections will have a difficult time using multimedia over the internet.
Require users to download data only once, and then store the data in a local hard disk
cache (this is automatically managed by most WWW browsers).
Design each multimedia element to be efficiently compact – don’t use a greater color
depth than is absolutely necessary.
Implement streaming methods that allow data to be transferred and displayed incrementally
(without waiting for the complete dataset to arrive).
To many users, the Internet means the World Wide Web. But the World Wide Web is
only the latest and most popular of services available today on the Internet.
E-mail; file transfer; discussion groups and newsgroups; real-time chatting by text, voice, and video; and the capability to log into remote computers are common as well. Internet services include the following:
include the following:
In the case of the Internet, daemons support protocols such as the Hypertext Transfer
Protocol (HTTP) for the World Wide Web, the Post Office Protocol (POP) for e-mail, or the File
Transfer Protocol (FTP) for exchanging files. The first few letters of a Uniform Resource Locator
(URL)—for example, http://www.timestream.com/index.html—notify a server as to which daemon
to bring into play to satisfy a request.
In many cases, the daemons for the Web, mail, news, and FTP may run on completely
different servers, each isolated by a security firewall from other servers on a network.
MIME (Multipurpose Internet Mail Extensions) media types were originally devised so that e-mails could include information other than plain text. MIME media types indicate the following things:
How different parts of a message, such as text and attachments, are combined into the message.
The way different items are encoded for transmission so that even software that was designed to work only with ASCII text can process the message.
Now MIME types are not just for use with e-mail; they have been adopted by web servers as a way to tell web browsers what type of material is being sent to them so that they can handle that kind of message correctly.
A main type
A sub-type
The main type is separated from the subtype by a forward slash character. For example,
text/html for HTML.
text
image
multipart
audio
video
message
model
application
For example, the text main type contains types of plain text files, such as text/plain and text/html.
MIME types are officially supposed to be assigned and listed by the Internet Assigned
Numbers Authority (IANA).
Many of the popular MIME types in this list (all those beginning with “x-”) are not assigned by the IANA and do not have official status. You can see the list of official MIME types at http://www.iana.org/assignments/media-types/. Those whose subtypes begin with “vnd.” are vendor-specific.
When specifying the MIME type of a content-type field you can also indicate the character
set for the text being used. If you do not specify a character set, the default is US-ASCII. For
example – content-type:text/plain;charset=iso-8859-1
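Python’s standard mimetypes module, for instance, maps file names to MIME types in exactly this main-type/sub-type form:

# Mapping file names to MIME types with Python's standard library.
import mimetypes

print(mimetypes.guess_type("index.html"))  # ('text/html', None)
print(mimetypes.guess_type("photo.jpg"))   # ('image/jpeg', None)
print(mimetypes.guess_type("clip.mp4"))    # ('video/mp4', None)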
Web History
Tim Berners-Lee of CERN (the European particle physics laboratory) developed the web’s
hypertext system in 1989.
The Hypertext Transfer Protocol (HTTP) was designed as a means for sharing documents
over the internet.
The Hypertext Markup Language (HTML) is the markup language of the web.
HTTP
The Hypertext Transfer Protocol (HTTP) provided rules for a simple transaction:
Establishing a connection
Requesting a document
Sending a document
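That transaction can be sketched with Python’s standard http.client module (the host and path here are hypothetical placeholders):

# A minimal HTTP transaction: connect, request a document, read the response.
import http.client

conn = http.client.HTTPConnection("www.example.com")  # establish a connection
conn.request("GET", "/index.html")                    # request a document
response = conn.getresponse()                         # server sends the document
print(response.status, response.reason)
conn.close()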
HTML
The HTTP protocol also required a simple document format called HTML (hypertext markup
language) for presenting text and graphics.
The HTML document can contain hotlinks which a user can click to jump to another
location.
HTML is fine for building and delivering simple static web pages. Other tools and programming know-how are needed to deliver dynamic pages that are built on the fly from text, graphics, animations, and information contained in databases or documents. JavaScript and programs written in Java may be inserted into HTML pages to perform special functions and tasks beyond the common abilities of HTML—for mouse rollovers, window control, and custom animations.
Cold Fusion and PHP are applications running side by side with a web server like Apache;
they scan an outgoing web page for special commands and directives, usually embedded in
special tags.
Application servers such as Oracle, Sybase, and MySQL offer software to manage Structured
Query Language (SQL) databases that may contain not only text but also graphics and multimedia
resources like sounds and video clips. In concert with HTML, these tools provide the power to
do real work and perform real tasks within the context of the World Wide Web.
Flash animations, Director applications, and RunRev stacks can also be called from within
HTML pages. These multimedia mini-applications, programmed by Web developers, use a
browser plug-in to display the action and perform tasks such as playing a sound, showing a
video, or calculating a date. As with Cold Fusion and PHP, both use underlying programming
languages. With the introduction of HTML5, browsers can play multimedia elements such
as sound, animations, and video without requiring special plug-ins or software.
XML (Extensible Markup Language) goes beyond HTML; it is the next evolutionary step
in the development of the Internet for formatting and delivering web pages using styles. Unlike
HTML, you can create your own tags in XML to describe exactly what the data means, and you
can get that data from anywhere on the Web. In XML, you can build a set of tags like
<fruit>
<type>Tomato</type>
<source>California</source>
<price>$.64</price>
</fruit>
An application reading this XML document will, according to the style instructions, find the
information and put it into the proper place on the web page in the formatting style assigned.
For example, with XML styles, you can declare that all items within the <price> tag will be
displayed in boldface Helvetica type.
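As a minimal sketch of this idea (the file name fruit.css is hypothetical), the XML document can reference a CSS stylesheet that applies exactly that formatting to the <price> element:

<?xml version="1.0"?>
<!-- The processing instruction below attaches a stylesheet to the XML data;
fruit.css is a hypothetical file name. -->
<?xml-stylesheet type="text/css" href="fruit.css"?>
<fruit>
<type>Tomato</type>
<source>California</source>
<price>$.64</price>
</fruit>

Here fruit.css could contain a rule such as price { font-family: Helvetica; font-weight: bold; display: block; } to render every price in bold Helvetica.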
In today's world the web plays a vital role in diversifying the multimedia experience. It has
become a broadcast medium offering various online facilities like live TV, pre-recorded videos,
photos, animations, etc. During the coming years most multimedia experiences on the Internet
will occur on the WWW [World Wide Web]. Programmes contain HTML [Hyper Text Mark-up
Language] pages, which are supplemented by XML [Extensible Mark-up Language] pages.
Along with these, JavaScript is also used.
Plug-ins and media players are software programmes that allow us to experience
multimedia on the web. File formats requiring this software are known as MIME [Multipurpose
Internet Mail Extension] types. To embed a media file, just copy the source code and paste it
into the user's webpage. It is as simple as that.
Plug-ins are software programmes that work with the web browser to display multimedia. When
the web browser encounters a multimedia file, it hands off the data to the plug-in to play or display
the file. Multimedia players are also software programmes that can play audio and video files
both on and off the web. The concept of streaming media is important for understanding how
media can be delivered on the web.
10.8 Summary
A network is a cluster of computers, with one computer acting as a server to provide
services such as file transfer, e-mail, and document printing to the client computers.
The Domain Name System (DNS) manages the names and addresses of computers
linked to the Internet.
Multimedia elements are typically saved and transmitted on the Internet in the appropriate
MIME-type (for Multipurpose Internet Mail Extensions) format and are named with the
proper extension for that type.
Hypertext Transfer Protocol (HTTP) provides rules for contacting, requesting, and sending
documents encoded with the Hypertext Markup Language (HTML).
HTML documents are simple ASCII text files. HTML currently includes about 50 tags.
XML (Extensible Markup Language) allows you to create your own tags and import data
from anywhere on the Web.
2. a. 192.168.1.1
3. b. HTML extension
4. Daemon
5. a. True
6. Define XML.
7. What is HTML?
LESSON 11
WORLD WIDE WEB (WWW)
Structure
11.1 Introduction
11.9 HTML
11.10 VRML
11.11 Summary
11.1 Introduction
Web pages are primarily text documents formatted and annotated with Hypertext Markup
Language (HTML). In addition to formatted text, web pages may contain images, video,
and software components that are rendered in the user's web browser as coherent pages
of multimedia content.
The terms Internet and World Wide Web are used without much distinction. However, the
two are not the same.
The Internet is a global system of inter connected computer networks. In contrast, the
World Wide Web is one of the services transferred over these networks. It is a collection
of text documents and other resources, linked by hyperlinks and URLs, usually accessed
by web browsers, from webservers.
There are several applications called Web browsers that make it easy to access the
World Wide Web; for example: Firefox, Microsoft's Internet Explorer, Chrome, etc.
Users access the World Wide Web facilities via a client called a browser, which provides
transparent access to the WWW servers.
History of WWW
Tim Berners-Lee, in 1980, was investigating how computers could store information with
random links. In 1989, while working at the European Particle Physics Laboratory, he proposed
the idea of a global hypertext space in which any network-accessible information could be referred
to by a single "Universal Document Identifier". After that, in 1990, this idea was expanded with
further work and became known as the World Wide Web.
The Internet, linking your computer to other computers around the world, is a way of
transporting content. The Web is software that lets you use that content… or contribute your
own. The Web, running on the mostly invisible Internet, is what you see and click on in your
computer's browser.
The World Wide Web, or simply Web, is a way of accessing information over the medium
of the Internet. It is an information-sharing model that is built on top of the Internet.
The Web uses the HTTP protocol, only one of the languages spoken over the Internet, to
transmit data. The Web also utilizes browsers, such as Internet Explorer or Firefox, to access
Web documents called Web pages that are linked to each other via hyperlinks. Web documents
also contain graphics, sounds, text and video.
The Web is a portion of the Internet. The Web is just one of the ways that information
can be disseminated over the Internet. The Internet, not the Web, is also used for e-mail, which
relies on SMTP, as well as Usenet newsgroups, instant messaging and FTP. So the Web is just
a portion of the Internet.
Errors on the Internet can be quite frustrating, especially if you do not know the difference
between a 404 error and a 502 error. These error messages, also called HTTP status
codes, are response codes given by Web servers and help identify the cause of the problem.
For example, "404 File Not Found" is a common HTTP status code. It means the Web
server cannot find the file you requested. The file (the webpage or other document you
tried to load in your Web browser) has either been moved or deleted, or you entered the
wrong URL or document name.
HTTP is a stateless protocol, meaning the HTTP server does not maintain contextual
information about the clients communicating with it; hence we need to maintain sessions
ourselves if our Web applications require that feature.
HTTP header fields provide required information about the request or response, or about
the object sent in the message body. There are four types of HTTP message headers:
• General-header: These header fields have general applicability for both request and response messages.
• Request-header: These header fields apply only to request messages.
• Response-header: These header fields apply only to response messages.
• Entity-header: These header fields describe the entity in the message body, such as its length and type.
As mentioned, whenever you enter a URL in the address box of the browser, the browser
translates the URL into a request message according to the specified protocol, and sends
the request message to the server.
The step-by-step communication between client and server is shown in the following figure.
In the late 1990s, multimedia plug-ins and commercial tools aimed at the Web entered
the marketplace at a furious pace, each competing for visibility and developer/user mind share
in an increasingly noisy venue.
In the few years since the birth of the first line-driven HTTP daemon in Switzerland,
millions of web surfers had become hungry for “cool” enhancements to entertaining sites. Web
site and page developers needed creative tools to feed the surfers, while surfers needed
browsers and the plug-ins and players to make these cool multimedia enhancements work.
A combination of the explosion of these tools and user demand for performance stresses
the orderly development of the core HTML standard. Unable to evolve fast enough to satisfy
the demand for features (there are committees, international meetings, rational debates,
comment periods, and votes in the standards process), the HTML language is constantly being
extended de facto by commercial interests. These companies regularly release new versions
of web browsers containing tags (HTML formatting elements) and features not yet formally
approved.
Browsers provide a method for third-party developers to “plug in” special tools that take
over certain computational and display activities. They also support the Java and JavaScript
languages by which programmers can create bits of programming script and Java applets to
extend and customize a browser’s basic HTML capabilities, especially into the multimedia
realm.
Java and JavaScript are related only by name. Java is a programming language much
like C++ whose code must be compiled (to bytecode for the Java virtual machine) before it can
be executed. JavaScript is a "scripting language" whose commands are executed at runtime by the
browser itself. JavaScript code can be placed directly into HTML using <script> tags or referenced
from a file with the “.js” extension.
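A minimal hedged sketch of both placements (the file name myscript.js is hypothetical):

<html>
<head>
<!-- External script file, fetched and run by the browser;
myscript.js is a placeholder name -->
<script src="myscript.js"></script>
</head>
<body>
<!-- Inline script placed directly into the HTML page -->
<script>
document.write("Hello from JavaScript");
</script>
</body>
</html>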
Thus, while browsers provide the orchestrated foundation of HTML, third-party players
and even nonprogrammers can create their own cadenzas to enhance browser performance
or perform special tasks. It is often through these plug-ins and applets that multimedia reaches
end users. Many of these tools are available as freeware and shareware while others, particularly
server software packages, are expensive, though most any tool can be downloaded from the
Internet in a trial version.
Developers need to understand how to create and edit elements of multimedia and also
how to deliver it for HTML browsers and plug-in/player vehicles.
The number of new users of the web will create a greater need for high quality, compelling
content, and reasonably quick presentations.
A web browser is a client program, software or tool through which we send HTTP requests
to a web server. The main purpose of a web browser is to locate content on the World Wide
Web and display it in the form of a web page, image, audio or video.
We can also call it a client program because it contacts the web server for desired information.
If the requested data is available on the web server, then it sends the requested
information back via the web browser.
Microsoft Internet Explorer, Mozilla Firefox, Safari, Opera and Google Chrome are
examples of web browsers, and they are more advanced than earlier web browsers because they
are capable of understanding HTML, JavaScript, AJAX, etc. Nowadays, web browsers for mobiles
are also available, which are called microbrowsers.
A web server is a computer system which provides web pages via HTTP (Hypertext
Transfer Protocol). An IP address and a domain name are essential for every web server.
Whenever you insert a URL or web address into your web browser, it sends a request to
the web address whose domain name is registered there. The server then collects
all the information for your web page and sends it to the browser, which you see in the form
of a web page on your browser.
A lot of web server software is available in the market, such as NCSA, Apache, Microsoft
and Netscape servers. Storing, processing and delivering web pages to clients is its main function.
All communication between client (web browser) and server takes place via HTTP.
The concepts of web browser and web server can be easily understood from the following
figure.
Search Engines
Individualized personal search engines are available that can search the entire public
Web, while enterprise search engines can search intranets, and mobile search engines can
search PDAs and even cell phones.
Learn HTML
Although site-building tools seem to remove the need to learn HTML, some knowledge is
still important.
Various tools help you create web pages in a WYSIWYG (What You See Is What You Get)
editing environment.
They provide more power and more features specifically geared to exploiting HTML. Examples include:
Adobe GoLive
Macromedia Dreamweaver
Microsoft FrontPage
Myrmidon
Netscape Composer
HTML translators
These are built into many word processing programs, so we can export a word-processed
document with its text styles and layout converted to HTML tags for headers, bolding,
underlining, indenting and so on.
Plug-ins add the power of multimedia to web browsers by allowing users to view and
interact with new types of documents and images.
If your content requires a plug-in, don't forget that users must have the plug-in installed.
Image plug-ins (such as Macromedia Shockwave) allow the display of vector graphics.
Sound
Plug-ins such as RealPlayer, QuickTime, and Windows Media Player can play music.
RealPlayer, QuickTime, and Windows Media Player also play animations and video.
(i) Text
Text and document plug-ins such as the popular Adobe Acrobat Reader get you past
the display limitations of HTML and web browsers, where fonts are dependent on end users'
preferences and page layout is primitive. In file formats provided by Adobe Acrobat, for example,
special fonts and graphic images are embedded as data into the file and travel with it, so what
you see when you view that file is precisely what the document's maker intended.
(ii) Images
Browsers enabled for HTML5 will read and display bitmapped JPEG, GIF, and PNG
image files as well as Scalable Vector Graphics (SVG) files. Vector files are a mathematical
description of the lines, curves, fills, and patterns needed to draw a picture, and while they
typically do not provide the rich detail found in bitmaps, they are smaller and can be scaled
without image degradation. Plug-ins to enable viewing of vector formats (such as Flash) are
useful, particularly when some provide high-octane compression schemes to dramatically shrink
file size and shorten the time spent downloading and displaying them. File size and
compression are a recurring theme on the Internet, where data-rich images, movies, and
sounds may take many seconds, minutes, or even longer to reach the end user.
Vector graphics are also device-independent, in that the image is always displayed at
the correct size and with the maximum number of colors supported by the computer. Unlike
bitmapped files, a single vector file can be downloaded, cached, and then displayed multiple
times at different scaled sizes on the same or a different web page.
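As a small sketch of this behaviour, an HTML5 browser can draw a vector shape from a few lines of inline SVG, with no plug-in required; the dimensions and color here are arbitrary examples:

<!-- Inline SVG in an HTML5 page: the browser draws the circle from its
mathematical description, so it scales cleanly to any size -->
<svg width="100" height="100">
<circle cx="50" cy="50" r="40" fill="red" />
</svg>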
(iii) Sound
Sound over the Web is managed in a few different ways. Digitized sound files in various
common formats such as MP3, WAV, AIF, or AU may be sent to your computer and then
played, either as they are being received (streaming playback) or once they are fully downloaded
(using a player). MIDI files may also be received and played; these files are more compact, but
they depend upon your computer’s MIDI setup for quality. Speech files can be specially encoded
into a token language (a “shorthand” description of the speech components) and sent at great
speed to another computer to be un-tokenized and played back in a variety of voices. Sounds
may be embedded into QuickTime, Windows Media, and MPEG movie files. Some sounds can
be multicast (using the multicast IP protocols for the Internet specified in RFC 1112), so
multiple users can simultaneously listen to the same data streams without duplication of data
across the Internet. Web-based (VoIP, or Voice over Internet Protocol) telephones also transmit
data packets containing sound information.
The most data-intense multimedia elements to travel the Internet are video streams
containing both images and synchronized sound, and commonly packaged as Apple’s
QuickTime, Microsoft’s Video for Windows (AVI), and MPEG files. Also data rich are the files
for proprietary formats such as Keynote, Microsoft PowerPoint, and other presentation
applications. In all cases, the trade-offs between bandwidth and quality are constantly in your
face when designing, developing, and delivering animations or motion video for the Web.
The markup tags tell the Web browser how to display the page. An HTML file must have
a .htm or .html file extension.
<html>
<head>
<title>Title of page</title>
</head>
<body>
This is my first homepage. <b>This text is bold</b>
</body>
</html>
Start your Internet browser. Select "Open" (or "Open Page") in the File menu of your
browser. A dialog box will appear. Select "Browse" (or "Choose File") and locate the HTML file
you just created, "mypage.htm", select it and click "Open". Now you should see an address in
the dialog box, for example "C:\MyDocuments\mypage.htm". Click OK, and the browser will
display the page.
Example Explained
The first tag in your HTML document is <html>. This tag tells your browser that this is the
start of an HTML document. The last tag in your document is </html>. This tag tells your
browser that this is the end of the HTML document.
The text between the <head> tag and the </head> tag is header information.
The text between the <title> tags is the title of your document. The title is displayed in
your browser’s caption.
The text between the <body> tags is the text that will be displayed in your browser.
The text between the <b> and </b> tags will be displayed in a bold font.
You can easily edit HTML files using a WYSIWYG (what you see is what you get) editor
like FrontPage, Claris Home Page, or Adobe PageMill instead of writing your markup tags in a
plain text file.
But if you want to be a skillful Web developer, we strongly recommend that you use a
plain text editor to learn the basics of HTML.
HTML Elements
HTML Tags
HTML tags are used to mark up HTML elements.
HTML tags are surrounded by the two characters < and >. The surrounding characters are
called angle brackets.
The first tag in a pair is the start tag, the second tag is the end tag. The text between the
start and end tags is the element content.
HTML tags are not case sensitive; <b> means the same as <B>.
Sometimes you need to add music or video to your web page. The easiest way to add
video or sound to your web site is to include the special HTML tag called <embed>. This tag
causes the browser itself to include controls for the multimedia automatically, provided the
browser supports the <embed> tag and the given media type.
You can also include a <noembed> tag for browsers which don't recognize the
<embed> tag. You could, for example, use <embed> to display a movie of your choice,
and <noembed> to display a single JPG image if the browser does not support the <embed> tag.
Example
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<!-- The file names below are placeholders -->
<embed src = "/html/yourfile.mid" width = "100%" height = "60">
<noembed><img src = "yourimage.jpg" alt = "Alternative Media"></noembed>
</embed>
</body>
</html>
Following is the list of important attributes which can be used with the <embed> tag.
Note: The align and autostart attributes are deprecated in HTML5. Do not use these attributes.
1. align - Determines how to align the object. It can be set to center, left or right.
2. autostart - This Boolean attribute indicates if the media should start automatically.
You can set it to either true or false.
3. loop - Specifies if the sound should be played continuously (set loop to true), a certain
number of times (a positive value) or not at all (false).
4. playcount - Specifies the number of times to play the sound. This is an alternative
to loop if you are using IE.
5. hidden - Specifies if the multimedia object should be shown on the page. A false
value means no and a true value means yes.
10. volume - Controls the volume of the sound. Can be from 0 (off) to 100 (full volume).
You can use various media types like Flash movies (.swf), AVIs (.avi), and MOVs (.mov)
inside the embed tag.
.swf files are the file types created by Macromedia's Flash program.
.wmv files are Microsoft's Windows Media Video file types.
.mov files are Apple's QuickTime Movie format.
.mpeg files are movie files created by the Moving Pictures Expert Group.
Background Audio
You can use the HTML <bgsound> tag to play a soundtrack in the background of your
webpage. This tag is supported by Internet Explorer only; most other browsers
ignore it. It downloads and plays an audio file when the host document is first downloaded
by the user and displayed. The background sound file will also replay whenever the user
refreshes the browser.
Note: The bgsound tag is deprecated and is supposed to be removed in a future version
of HTML, so it should not be used; it is suggested to use the HTML5 <audio> tag for adding
sound instead (a small sketch follows the example below). But still, for learning purposes, this
section explains the bgsound tag in detail.
This tag has only two attributes, loop and src. Both of these attributes have the same
meaning as explained above.
<!DOCTYPE html>
<html>
<head>
</head>
<body>
<bgsound src = "/html/yourfile.mid">
</bgsound>
</body>
</html>
This will produce a blank screen. This tag does not display any component and remains
hidden.
Internet Explorer can also handle only three different sound format files: WAV, the native
format for PCs; AU, the native format for most Unix workstations; and MIDI, a universal music-
encoding scheme.
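For comparison, here is a minimal sketch of the HTML5 <audio> alternative recommended above. The file name is only a placeholder, and note that HTML5 audio expects formats such as MP3 or Ogg rather than MIDI:

<!DOCTYPE html>
<html>
<body>
<!-- HTML5 background-style audio: autoplay and loop stand in for the old
bgsound behaviour; /html/yourfile.mp3 is a placeholder name -->
<audio src = "/html/yourfile.mp3" autoplay loop>
Your browser does not support the audio element.
</audio>
</body>
</html>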
a. downloading.
b. RealAudio.
c. MIDI.
d. AAC.
The W3C is an organization dedicated to helping evolve the Web in positive directions.
a. True
b. False
a. .DLL
b. .SO
c. .DSO
d. PPC
a. HTML
b. VRML
c. XML
d. UML
Purpose: The Virtual Reality Modeling Language is a file format for describing interactive
3D objects and worlds. VRML is designed to be used on the Internet, intranets, and local client
systems. VRML is also intended to be a universal interchange format for integrated 3D graphics
and multimedia.
Use: VRML may be used in a variety of application areas such as engineering and scientific
visualization, multimedia presentations, entertainment and educational titles, web pages, and
shared virtual worlds.
History
Design
Compatibility: Provide the ability to use and combine dynamic 3D objects within a VRML
world and thus allow re-usability (an object-oriented approach).
Extensibility: Provide the ability to add new object types not explicitly defined in VRML,
e.g. sound.
Characteristics of VRML
VRML is capable of representing static and animated 3D and multimedia objects
with hyperlinks to other media such as text, sounds, movies, and images.
VRML browsers, as well as authoring tools for the creation of VRML files, are widely
available for many different platforms.
VRML Basics
Some nodes are container nodes or grouping nodes, which contain other nodes
Nodes are arranged in hierarchical structures called scene graphs. Scene graphs are
more than just a collection of nodes; the scene graph defines an ordering for the nodes.
The scene graph has a notion of state, i.e. nodes earlier in the world can affect nodes that
appear later in the world.
VRML Shapes
VRML contains basic geometric shapes that can be combined to create more complex
objects. The figure displays some of these shapes.
The Material node specifies the surface properties of an object. It can control what color the
object is by specifying the red, green and blue values of the object.
There are three kinds of texture nodes that can be used to map textures onto any object:
1. Image Texture: The most common one that can take an external JPEG or PNG image
file and map it onto the shape.
2. Movie Texture: Allows the mapping of a movie onto an object; can only use MPEG movies.
3. Pixel Texture: Simply means creating an image to use with Image Texture within VRML.
1. Directional Light node shines a light across the whole world in a certain direction.
2. Point Light shines a light from all directions from a certain point in space.
The background of the VRML world can also be specified using the Background node.
A Panorama node can map a texture to the sides of the world. A panorama is mapped
onto a large cube surrounding the VRML world.
VRML Specifics
Some VRML Specifics:
(b) VRML97 files need to include the line #VRML V2.0 UTF8 as the first line of the VRML file.
(c) VRML nodes are case sensitive and are usually built in a hierarchical manner.
(d) All nodes begin with "{" and end with "}" and most can contain nodes inside of
nodes.
(e) Special nodes called group nodes can cluster together multiple nodes.
(f) Nodes can be named using DEF and be used again later by using the keyword USE.
This allows for the creation of complex objects using many simple objects.
• A simple VRML example to create a box in VRML: one can accomplish this by typing:
Shape {
geometry Box {}
}
The Box defaults to a 2-meter-long cube in the center of the screen. Putting it into a
Transform node can move this box to a different part of the scene. We can also give the shape
a different color, such as red, or a different geometry, such as a cylinder:
Shape {
appearance Appearance {
material Material {
diffuseColor 1 0 0
}
}
geometry Cylinder {
radius 3
height 6
}
}
· IndexedFaceSet Node
IndexedFaceSet {
coord Coordinate {
point [ ... ]
}
coordIndex [ 3 0 5 1 -1,
2 0 1 4 5 -1,
3 1 5 -1 ]
}
· Texture Node
Shape {
appearance Appearance { ... }
geometry IndexedFaceSet {
coord Coordinate { ... }
}
}
· Transformation Node
Transform {
rotation x y z angle
translation x y z
scale x y z
}
· Viewpoint Node
Viewpoint {
position 0 0 10
orientation 0 0 1 0
}
11.11 Summary
The World Wide Web (WWW) is a collection of text pages, digital photographs, music files,
videos, and animations you can access over the Internet.
A web browser is a client program, software or tool through which we send HTTP requests
to a web server.
A web server is a computer system which provides web pages via HTTP (Hypertext
Transfer Protocol). An IP address and a domain name are essential for every web server.
Plug-ins add the power of multimedia to web browsers by allowing users to view and
interact with new types of documents and images.
Various media types like Flash movies (.swf), AVIs (.avi), and MOVs (.mov) can be used
inside the embed tag.
The Virtual Reality Modeling Language is a file format for describing interactive 3D objects
and worlds.
2. Tim Berners-Lee
3. A.True
4. a. .DLL
5. b. VRML
6. Path
8. What is VRML?
9. Define HTML?
LESSON 12
DESIGNING FOR THE WWW
Structure
12.1 Introduction
12.10 Summary
12.12 Model Questions
12.1 Introduction
Multimedia is one of the most fascinating and fastest growing areas in the field of
information technology. The capability of computers to handle different types of media makes
them suitable for a wide range of applications. A Multimedia application is an application which
uses a collection of multiple media sources e.g. text, images, sound/audio, animation and/or
video on a single platform for a defined purpose. Multimedia can be seen at each and every
aspect of our daily life in different forms. However, entertainment and education are the fields
where multimedia has its dominance.
Define the multimedia facilities needed by business and distributed learning environments.
The World Wide Web (WWW) is a global information medium which users can read and
write via computer connected to the internet.
The Web, or World Wide Web, is basically a system of Internet servers that support
specially formatted documents. The documents are formatted in a markup language called
HTML (HyperText Markup Language) that supports links to other documents, as well as
graphics, audio, and video files.
Web Browsers
A web browser, or simply “browser,” is an application used to access and view websites.
Common web browsers include Microsoft Internet Explorer, Google Chrome, Mozilla Firefox,
and Apple Safari.
The primary function of a web browser is to render HTML, the code used to design or
“markup” webpages. Each time a browser loads a web page, it processes the HTML, which
may include text, links, and references to images and other items, such as cascading style
sheets and JavaScript functions. The browser processes these items, then renders them in
the browser window.
Early web browsers, such as Mosaic and Netscape Navigator, were simple applications
that rendered HTML, processed form input, and supported bookmarks. As websites have
evolved, so have web browser requirements. Today’s browsers are far more advanced,
supporting multiple types of HTML (such as XHTML and HTML 5), dynamic JavaScript,
and encryption used by secure websites.
The capabilities of modern web browsers allow web developers to create highly interactive
websites. For example, Ajax enables a browser to dynamically update information on a webpage
without the need to reload the page. Advances in CSS allow browsers to display responsive
website layouts and a wide array of visual effects. Cookies allow browsers to remember your
settings for specific websites.
While web browser technology has come a long way since Netscape, browser compatibility
issues remain a problem. Since browsers use different rendering engines, websites may not
appear the same across multiple browsers. In some cases, a website may work fine in one
browser, but not function properly in another. Therefore, it is smart to install multiple browsers
on your computer so you can use an alternate browser if necessary.
Web Sites
Information on the Web is displayed in pages. These pages are written in a standard
language called HTML (HyperText Markup Language) which describes how the information
should be displayed regardless of the browser used or the type of computer. Pages also include
hypertext links which allow users to jump to other related information. Hypertext is usually
underlined and in a different color and can include individual words, sentences, or even graphics.
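For instance, here is a hedged sketch of a text hyperlink and a graphic used as a link; the URLs and image file name are placeholders:

<!-- A hypertext link on a phrase and on a graphic; addresses are placeholders -->
<a href="http://www.example.com/related.html">related information</a>
<a href="http://www.example.com/"><img src="logo.gif" alt="Example site"></a>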
A Web site is a collection of related Web pages with a common Web address.
Web Addresses
Web sites and the pages they contain each have a unique worldwide address. This
address is called a Uniform Resource Locator, or URL, in Internet jargon. The address for
Microsoft is www.microsoft.com. For most sites, this is all you need to specify and it defaults to the main
page (or home page) for the site. In some cases, you may also need or want to specify the path
and file name such as www.microsoft.com/office97. Note the extension .com after microsoft.
There are six extensions that help to divide the computers on the Internet into understandable
groups or domains. These six domains include: .com = commercial, .gov = government, .edu
= education, .org = organizations, .net = networks, .mil = military. There are also extensions for
sites outside of the U.S. including: .jp = Japan, .uk = United Kingdom, .fr = France, and so on.
Enter a Web site address in the “Location” box and hit the return key. You will jump to the
home page of the site. If you are not looking for a particular site, a good place to start is
Netscape’s “What’s Cool” page which can be found by pressing the “What’s Cool” button
located under the address location box on Netscape browsers.
Mouse click on any words on the page that are underlined and highlighted. These words
are hypertext links which jump you to other related information located on the page, on the site,
or other sites. As you jump from page to page and site to site, remember that you can always
hit the “Back” arrow button to return to any page. The browser automatically saves all the Web
pages to your hard drive (the disk cache) so you can immediately go back without having to
reload the pages.
In most cases, you will start out surfing a particular site or topic and through numerous
hypertext links find yourself somewhere completely unrelated but interesting. Now you’re surfing!
There are basically three major search services available for handling different tasks:
Directories, Search Engines, and Meta Search Engines. Directories are sites that, like a gigantic
phone book, provide a listing of the sites on the web. Sites are typically categorized and you
can search by descriptive keywords. Directories do not include all of the sites on the Web, but
generally include all of the major sites and companies. Yahoo is a great directory. Search
Engines read the entire text of all sites on the Web and create an index based on the occurrence
of key words for each site. AltaVista and Infoseek are powerful search engines. Meta Search
Engines submit your query to both directory and search engines. Metacrawler is a popular
Meta search engine.
Web pages are written using different HTML tags and viewed in a browser window.
The different browsers and their versions greatly affect the way a page is rendered, as different
browsers sometimes interpret the same HTML tag in a different way.
Different versions of HTML also support different sets of tags.
The support for different tags also varies across the different browsers and their versions.
The same browser may work slightly differently on different operating systems and hardware platforms.
To make a web page portable, test it on different browsers on different operating systems.
Users have different connection speeds, i.e. bandwidth, to access the Web sites.
Connection speed plays an important role in designing web pages; if the user has a low-bandwidth
connection and a web page contains too many images, it takes more time to download.
Generally, users have no patience to wait for longer than 10-15 seconds and move to another site
without looking at the contents of your web page.
The browser provides temporary memory called a cache to store the graphics.
When the user gives the URL of the web page for the first time, the HTML file together with all the
graphics files referred to in the page is downloaded and displayed.
Display Resolution
Display or screen resolution is measured in terms of pixels, and common resolutions are 800 x 600
and 1024 x 768.
We have three choices for Web page design:
o Design a web page with a fixed resolution.
o Make a flexible design using an HTML table to fit into different resolutions (see the sketch below).
o If the page is displayed on a monitor with a higher resolution, the page is displayed on the left-
hand side and some part on the right-hand side remains blank. We can use a centered
design to display the page properly.
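A minimal sketch of the second choice: a table whose percentage widths stretch or shrink with the visitor's display resolution (the two-column split is only an example):

<!-- Flexible page layout: percentage widths adapt to any screen resolution -->
<table width="100%">
<tr>
<td width="20%">Navigation links</td>
<td width="80%">Main content of the page</td>
</tr>
</table>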
The look and feel of the website decides the overall appearance of the website.
It includes all the design aspects such as:
Web site theme
Web typography
Graphics
Visual structure
Navigation etc.
A website consists of individual web pages that are linked together using various navigational links.
Page layout allows the designer to distribute the contents on a page such that a visitor can view it
easily and find necessary details.
Locating Information
A webpage is viewed on a computer screen, and the screen can be divided into five major areas:
center, top, right, bottom and left.
The first major area of importance in terms of the user's viewing pattern is the center, then top,
right, bottom and left, in this particular order.
It is very difficult for any Web designer to predict the exact behavior of the Web site's users.
However, an idea of the general behavior of the common user helps in making the design of the
Web site user-centric.
Users either scan the information on the web page to find the section of their interest or read the
information to get details.
Sitemap
Many times Web sites are too complex, as there are a large number of sections and each section
contains many pages.
It becomes difficult for visitors to quickly move from one part to another.
Once the user selects a particular section and pages in that section, the user gets confused
about where he/she is and where to go from there.
To make it simple, keep your hierarchy of information to a few levels or provide a navigation bar on
each page to jump directly to a particular section.
Navigation links are either text based, i.e. a word or a phrase is used as a link, or graphical, i.e. an
image such as an icon or a logo is used as a link.
Navigation links should be clear and meaningful.
They should be consistent.
Organize the links such that contents are grouped logically.
Provide a search link, if necessary, usually at the top of the page. Use common links such as
'About us' or 'Contact us'.
Provide a way to return to the first page.
Provide the user with information regarding location.
A horizontal navigation bar can be provided on each page to directly jump to any section.
There are so many applications of multimedia in this web world. Let us consider a few.
They are as follows:
Software engineers may use multimedia in computers for everything from entertainment
to training, such as military or industrial training and designing digital games; it can be used as
a learning process. This multimedia software is created by professionals and software engineers.
Edutainment is nothing but educational entertainment. Many computer games with a focus
on education are now available. A simple example in this case is an educational game which
plays various rhymes for little kids. In addition to playing rhymes, the child can paint pictures,
increase or reduce the size of various objects, and so on. Similarly, many other edutainment
packages, which provide a lot of detailed information to kids, are available. Microsoft has produced
many such CD-based multimedia titles, as have Sierra, Knowledge Adventure and others, which
in addition to play provide some sort of learning component. The latest in this series is a package
which teaches about the computer through game playing. There are many more companies which
have specialized in the entertainment sector; you may explore the list of such companies on the net.
Business Communications: Multimedia is a very powerful tool for enhancing the quality of
business communications. Business communications such as employee-related
communications, product promotions, customer information, and reports for investors can be
presented in multimedia form. All these business communications need to be structured
so that a formal level of content structure exists in the communication. Another common business
application involving multimedia requires access to a database of multimedia information about
a company. The multimedia technology of today can easily support this application, as natural
language enquiry systems do exist for making queries.
(v) E-learning
Electronic learning has become a very good communication [interaction] medium between
students and teachers. Several lines of research have evolved, and the possibilities for learning
and instruction are nearly endless. There are two categories of media which link students and
teachers: one, those which can be used to convey the subject content, such as print materials,
video tapes and audio tapes, television, computer-based courseware, CD-ROM, etc.; the other,
those which permit communication between teacher and students, such as audio and video
conferencing, tele-conferencing and the internet.
This kind of application involves transmission of a piece of information with the maximum
impact, that is, the transfer of information in such a fashion that it facilitates retention. This
application is meant for both academia and business. In academia, knowledge transfer is
used as the building block, whereas in business it is the effective transfer of information which
might be essential for the survival of the business. Multimedia-based teaching is gaining momentum
as powerful teaching aids become quite common. Multimedia is one of the best ways to provide
short-term training to workers in business houses.
Public access is an area of application where many multimedia applications will soon be
available. One such application may be the tourist information system, where a person who
wants to go sightseeing may have a glimpse of the places he has selected for
visiting.
For example, for a very simple public information system, that is, the railway timetable enquiry,
a multimedia-based system may not only display the trains and times but also the route map to
the destination from the source you have desired.
Multimedia Representation
1. Form of representation
- In applications that involve just a single type of media, the basic form of representation of
the particular media type is required.
- No streaming is required.
- The amount of data used to represent the signal is measured in bits per second (bps).
Two requirements follow: (i) the resulting bit rate must be reduced to a level a network can
support, and (ii) the time delay between a request being made for some information and the
information becoming available must be kept acceptable.
Multimedia Networks
There are five types of communication network that are used to provide multimedia
communication services.
Characteristics:
- The first three types were initially designed to provide just a single type of service.
Media types
The information flow associated with the different applications can be either continuous
or block mode.
The source stream can be generated at a constant bit rate (CBR) or a variable bit rate
(VBR).
The source information comprises a single block of information that is created in a time-
independent way, e.g. text or an image.
The delay between the request being made and the contents of the block being output
at the destination is called the round-trip delay. (It should be less than a few seconds.)
Communication Modes
The transfer of the information streams associated with an application can be in one of five
modes:
(1-to-1 transmission)
Network types
There are 2 types of communications channel associated with the various network types:
circuit-mode & packet- mode.
1. Channels in circuit-mode: operate in a time-dependent way.
2. Channels in packet-mode: operate in a time-varying way.
1. Circuit-Mode:
· This type of network is also known as a circuit switched network.
· Prior to sending any information, the source must first set up a connection through the
network.
· The messages associated with the setting up and clearing of a connection are known as
signaling messages.
2. Packet Mode
There are two types of packet-mode networks: connection-oriented (CO) and connectionless
(CL).
Prior to sending any information, a connection is first set up through the network.
The connection utilizes only a variable portion of the bandwidth of each link and hence it’s
known as a virtual connection or a virtual circuit (VC).
Each PSE (packet-switching exchange) has a routing table which defines which output link a
packet arriving on a given input link will be delivered to.
Each packet must carry the full source and destination addresses in its header in order
for each PSE to route the packet onto the appropriate outgoing link.
Example: Internet
Kiosks – Airport, train station, bank assistant, cinema information, real-estate catalogue,
university, museum showcase etc. – Fast response is necessary
Tele-shopping
a. Compiler
b. Server
c. Web Browser
d. Interpreter
2. Engineer design cars before producing them using a multimedia applications called
__________________
a. True b. False
a. Text
b. Joystick
c. Voiceover
d. Encyclopedia
a. Training
b. Education
c. Transferring
d. Examination
The entertainment industry has used this technology the most to create real-life-like
games. Several developers have used the graphics, sound and animation of multimedia to create
a variety of games. Special technologies such as virtual reality have made these games just
like experiences of real life. An example is the flight simulator, which creates real-life imaging.
Many multimedia games are now available on computers. Children can enjoy these
experiences; for example, they can drive cars of different varieties, fly aircraft, play any musical
instrument, play golf, etc. Multimedia productions are also used in the creation of many movies,
where the multimedia components are mixed with real-life pictures to create a powerful
entertainment atmosphere.
One of the most exciting applications of multimedia is games. Nowadays, live Internet
pay-to-play gaming with multiple players has become popular. Actually, the first application of
multimedia systems was in the field of entertainment, and that too in the video game industry.
The integrated audio and video effects make various types of games more entertaining.
Generally, most video games need joystick play.
Digital online multimedia can be downloaded or streamed; streaming multimedia can be
on-demand or live. Multimedia games and simulations may be used with exclusive effects in a
physical environment, in an online network with diversified users; they can also be used in
offline mode, on a game system.
Electronic games, 3D adventure games, sporting games and interactive movies are
extremely popular forms of multimedia applications. The key to their popularity lies in their
interactive nature. The new generations of games provide ingenious levels of interactivity and
realism to captivate the user of the product. The attraction of this type of application is realism,
fast action and user input through peripherals such as mouse, track-pad, keyboard and joystick.
Computer-based games have led to many developments in interactive computing. This type of
application requires a high level of graphics computing power and hence the impetus to develop
more efficient algorithms for display movement and more powerful graphics cards.
The forms of multimedia entertainment delivered through interactive TV or the World Wide Web
are listed below:
Interactive entertainment
Digital Audio
Video on demand
Cable TV and telephone companies, dot-com companies, the publishing industry, etc. are the
main infrastructure providers for these facilities. Networking technology, along with
improved computing and compression technologies, is delivering interactive services profitably.
Companies in the entertainment, cable, telephone, and Internet-based industries are trying to
design a wide variety of such multimedia services.
Today, personal computers are the tools that promote collaboration. They are essential
to any multimedia workstation. Many high-speed networks are in place that allow multimedia
conferencing, or electronic conferencing. Such facilities are even available today through the
Internet. Today, we depend on our telephone to link us with others, whether it is a phone
call, a group audio conference or a dialup Internet connection. However, tomorrow it will be
computer-based links that connect us with others. A computer-based multimedia conference allows us
to exchange audio, text, image, and even video information. It also facilitates group development
of documents and other information products. Let us discuss these concepts in
greater detail.
Benefits of Multimedia
1. Addresses multiple learning styles
7. Improves retention
12.10 Summary
The World Wide Web (WWW) is a global information medium which users can read and write
via computers connected to the internet.
Information on the Web is displayed in pages. These pages are written in a standard
language called HTML (HyperText Markup Language) which describes how the information
should be displayed regardless of the browser used or the type of computer
A Web site is a collection of related Web pages with a common Web address.
Web sites and the pages they contain each have a unique worldwide address.
Users have different connection speeds, i.e. bandwidth, to access the Web sites.
Page layout defines the visual structure of the page and divides the page area into different parts to
present information of varying importance.
In applications that involve just a single type of media, the basic form of representation of
the particular media type is required. Otherwise, different media types should be integrated
together in a digital form.
Examples of typical multimedia applications include: digital video editing and production
systems; electronic newspapers and magazines; the World Wide Web; online
referenceworks, such as encyclopedias; games; groupware; home shopping; interactive
TV; multimediacourseware; video conferencing; video-on-demand; and interactive movies.
3. a) True
4. d) Encyclopedia
5. b) Education
6. Web-based
LESSON 13
MULTIMEDIA IN FUTURE
Structure
13.1 Introduction
13.6 Summary
13.1 Introduction
The feature scope of the multimedia is to gain the place to desire the people’s need and
also understands their expressions. The feature trends of the multimedia use technology to
understand the expression of the human begins and respond to them in the right manner.
Multimedia provides information by sources like web engine, online news and social media.
The feature trends of multimedia influence many in an effective manner. Many sectors apply
multimedia technologies to enhance sector-related factors.
The future of multimedia will be decided by a few factors, such as people's comfort with
the automation features of multimedia technologies. Many multimedia companies, like
Sony and Panasonic, use multimedia technologies to a great extent. Examples of such
multimedia technology include record players, Bluetooth headphones, radios, light bulb speakers
with baby cry detectors, and earbuds.
There are many future directions for multimedia; indeed, the multimedia explosion has
probably only just begun.
Developments in hypermedia models, which we mentioned briefly in the last lesson, with
RMM for example.
3. Interactive Television
Digital Library
A big step forward from traditional database search which is largely based on simple
attributes.
Serve as a browsing tool - analogous to the current web search
o Can’t handle nonspecific queries such as “Find a scenic photo of Lake Tahoe”
The three differences are: an increase in picture resolution, 16:9 widescreen as standard,
and the ability to support multi-channel audio such as Dolby Digital. The most important aspect
of HDTV, and the one which gives it its name, is the increased resolution. Standard definition
NTSC broadcasts have 525 horizontal lines, and PAL broadcasts are slightly better at 625
lines. In both these systems, however, the actual number of lines used to display the picture,
known as the active lines, is fewer than that. In addition, both PAL and NTSC systems are
interlaced; that is, each frame is split into two fields, one field being the odd-numbered lines and
the other the even lines.
Each field is displayed alternately, and our brain puts them together to create a complete
image of each frame. This has an adverse effect on picture quality. HDTV is broadcast in one
of two formats: 720p and 1080i. The numbers refer to the number of lines of vertical resolution
and the letters refer to whether the signal is progressive scan, ‘p’, or interlaced, ‘i’. Progressive
scan means that each frame is shown in its entirety, rather than being split into fields. Both
systems are significantly better quality than either PAL or NTSC broadcasts. The first is 720p
("p" stands for progressive), which is an image of 1280 pixels horizontally by 720 lines
vertically. It shows the whole image in a single frame, that is, progressively. The
second is 1080i, which measures 1920 x 1080 and is displayed as two fields that are
interlaced.
A high-res screen with at least 720 lines will show both formats, but only a 1080-line
screen will show 1080i footage at its best, i.e. in an un-scaled form. The 1080p format, which
is the absolute best form of HD, is not used by broadcasters. Movies made in 1080p (e.g. the
last three Star Wars films) might appear in Blu-ray and/or HD DVD format. Sony's PlayStation
3 produces 1080p output. More and more 'Full HD' screens (capable of displaying
1080p) are appearing. A 1080p screen can de-interlace a 1080i signal. With very few 1080p sources
available, the main benefit of a Full HD screen is its ability to map a source such as Sky TV
(1080i) pixel for pixel to the screen's resolution (i.e. 1920 x 1080). HDTV uses 16:9 widescreen
as its aspect ratio, so widescreen pictures are transmitted properly and not letterboxed or
panned.
Dolby Digital multichannel sound can be broadcast as part of an HDTV signal, so if you
have a surround sound speaker set-up you can use it to listen to TV rather than just DVDs. To
receive an HDTV broadcast you need either a TV with a built-in HDTV tuner or an HDTV receiver
which can pick up off-the-air HDTV channels, or a cable or satellite HDTV service. You also need
to live where HDTV channels are broadcast or distributed by cable or satellite. Currently
HDTV is widespread in Japan and is becoming commonplace in the US, with most major
networks distributing HDTV versions of their popular content. The situation in Europe is not so
bright.
There is only one company broadcasting HDTV in the whole of Europe, Euro1080, and
it has only two HDTV channels, both in the 1080i format. Euro1080HDe shows major cultural
and sporting events to cinemas and clubs around Europe, while HD1 broadcasts sports, opera,
rock music, and lifestyle programs via satellite to homes in Europe. UK satellite broadcaster,
Sky, which is owned by Fox proprietor Rupert Murdoch, has announced plans to broadcast
some HDTV content in 2006. The BBC has also made noises about broadcasting HDTV
programs (it already films some programs in HD format).
i) The prices of multimedia products are dropping rapidly; this increases the demand for them as
they become more affordable.
ii) MMX Technologies: Enabled computer systems to interact fully with audio and
video elements and the compact disc drive more effectively.
iii) Development of DVD Technology: DVD technology has replaced VHS technology
and the laser disk in the production of digital videos or films, because DVD pictures are
clearer and faster, with higher quality, higher capacity and lower price.
iv) Erasable Compact Discs (CD-E): Since it is re-writable, it enables us to change data,
to archive large volumes of data and also to back up copies of data stored in the hard disk.
vii) Increased usage of Computers: Previously, computers were used for just word
processing; with the development of multimedia technology, text is not the only
main medium used to disseminate information, but also graphics, audio, video, animation
and interactivity. Hence, the computer's role has diversified, and it now acts as a source for
education, publication, entertainment, games and many others.
a. Single Layer
b. Dual Layer
c. Multi-Layer
d. Assigned Layer
3. When the viewer of a multimedia project controls what elements are delivered and when,
it is called _____.
a. interactive multimedia
b. selective multimedia
c. onscreen multimedia
d. portable multimedia
Online gaming sites are a fast and efficient way for companies to promote their products.
a. True b. False
Interactive multimedia allows the viewer of the multimedia presentation to control what
elements of multimedia are delivered and in what sequence.
a. True b. False
New media is a catch-all term used for various kinds of electronic communications that
are conceivable due to innovation in computer technology. In contrast to “old” media, which
includes newspapers, magazines, books, television and other such non-interactive media, new
media is comprised of websites, online video/audio streams, email, online social platforms,
online communities, online forums, blogs, Internet telephony, Web advertisements, online
education and much more.
Traditional media methods include mostly non-digital advertising and marketing methods.
Traditional media includes:
Television advertisements
Radio advertising
Print advertising
Cold calling
Door-to-door sales
Banner ads
New media, also called digital media, consists of methods that are mostly online or involve
the Internet in some sense. These methods include:
Pay-per-click advertising
Content marketing
Social media
Email marketing
Interactive Media:
Examples include interactive encyclopedias and travel guides. Interactive media shift the user's
role from observer to participant and are considered the next generation of electronic information
systems.
The most common media machine consists of a PC with a digital speaker unit and a CD-
ROM (compact disc read-only memory) drive, which optically retrieves data and instructions
from a CD-ROM. Many systems also integrate a handheld tool (e.g., a control pad
or joystick) that is used to communicate with the computer. Such systems permit users to
read and rearrange sequences of text, animated images, and sound that are stored on
high-capacity CD-ROMs. Systems with CD write-once read-many (WORM) units allow
users to create and store sounds and images as well. Some PC-based media devices
integrate television and radio as well.
Among the interactive media systems under commercial development by the mid-1990s
were cable television services with computer interfaces that enable viewers to interact
with television programs; high-speed interactive audiovisual communications systems
that rely on digital data from fiber-optic lines or digitized wireless transmissions; and virtual
reality systems that create small-scale artificial sensory environments.
Interactive entertainment
Digital Audio
324
Video on demand
Cable TV and telephone companies, dot com companies, publishing industry etc. arethe
main infrastructure providers for these facilities. The networking technologyalongwith the
improved compiling and compression technologies are delivering interactiveservices profitably.
The entertainment cable, telephone, and Internet passed industriesCompanies are trying to
design wide variety of such multimedia services.
Today, Personal Computers are the tools that promote collaboration. They are essential
to any multimedia workstation. Many high-speed networks are in place that allow multimedia
conferencing, or electronic conferencing. Such facilities are even available today through the
Internet. Today, we depend on our telephone to link us with others, whether it is a phone
call, a group audio conference or a dialup Internet connection. Tomorrow, however, computer-based
links will connect us with others. A computer-based multimedia conference allows us to
exchange audio, text, image, and even video information. It also facilitates group development
of documents and other information products.
Interactive Television (iTV) is the integration of traditional television technology and data
services. It is a two-way cable system that allows users to interact with it via commands and
feedback information. A set-top box is an integral part of an interactive television system. It can
be used by the viewer to select the shows they want to watch, view show schedules and
use advanced options like ordering products shown in ads, as well as accessing email and
the Internet.
Interactive TV services
Interactive TV is similar to converged TV services, but should not be confused with them.
Interactive TV is delivered through pay-TV set-top boxes, whereas converged TV services are
delivered using Internet connectivity and Web-based services with the help of over-the-top
boxes like Roku or gaming consoles.
The return path is the channel that is used by viewers to send information back to the
broadcaster. This path can be established using a cable, telephone lines or any data
communications technology. The most commonly used return path is a broadband IP connection.
However, when iTV is delivered through a terrestrial aerial, there is no return path, and
hence data cannot be sent back to the broadcaster. But in this case, interactivity can still be
made possible with the help of an appropriate application downloaded onto the set-top box.
Basics of Interactive TV
Besides the normal services provided by the current telephone and cable
services, Interactive TV will provide a variety of new services to homes, such as:
video-on-demand
interactive entertainment
User Experience
The viewer must be able to alter the viewing experience (e.g. choose which angle to
watch a football match), or return information to the broadcaster.
This “return path,” return channel or “back channel” can be by telephone, mobile SMS
(text messages), radio, digital subscriber lines (ADSL) or cable.
Cable TV viewers receive their programs via a cable, and in the integrated cable return
path enabled platforms, they use the same cable as a return path.
Satellite viewers (mostly) return information to the broadcaster via their regular telephone
lines. They are charged for this service on their regular telephone bill. An Internet connection
via ADSL, or other, data communications technology, is also being increasingly used.
Interactive TV can also be delivered via a terrestrial aerial (Digital Terrestrial TV, such as
'Freeview' in the UK). In this case, there is no 'return path' as such - so data cannot be sent
back to the broadcaster (so you could not, for instance, vote on a TV show, or order a product
sample). However, interactivity is still possible as there is still the opportunity to interact with an
application which is broadcast and downloaded to the set-top box.
Increasingly, the return path is becoming a broadband IP connection, and some hybrid
receivers are now capable of displaying video from either the IP connection or from traditional
tuners. Some devices are now dedicated to displaying video only from the IP channel, which
has given rise to IPTV - Internet Protocol Television. The rise of the “broadband return path”
has given new relevance to Interactive TV, as it opens up the need to interact with Video on
Demand servers, advertisers, and website operators.
How it works
Telephone Network:
Advantage: High availability, security. Good support for interactive/two way traffic
Cable Network:
Analog video down the wire
Disadvantage: Little infrastructure for long-distance, low security, harder for two way traffic.
2. Wireless Cable
Remotely, signals are transmitted via satellites at 4 GHz; regionally, from mountain-top towers
in the 2.1-2.7 GHz microwave band, with a total of 33 analog 6 MHz channels.
Optical fiber runs to each residential neighborhood, terminating in an ONU (Optical Network Unit).
Each ONU supports up to 16 copper local loops that can run full-duplex T1 or T2 for
MPEG-1 and MPEG-2 respectively.
Cable Modem: capable of providing 10-100 Mbps, e.g. the Motorola CyberSURFR with
10 Mbps per user downstream and 768 kbps upstream on the return path.
“500-Channel” Scenario:
Forms of Interaction
The term "interactive television" is used to refer to a variety of rather different kinds of
interactivity (both as to usage and as to technology), and this can lead to considerable
misunderstanding. At least three very different levels are important (see also the instructional
video literature, which has described levels of interactivity in computer-based instruction
that will look very much like tomorrow's interactive television). The forms of interaction are
of three types: interactivity with a TV set, interactivity with normal TV program content, and
interactivity with TV-related content.
The simplest form, interactivity with a TV set, is already very common, starting with the use
of the remote control to enable channel-surfing behaviors, and evolving to include video-on-
demand, VCR-like pause, rewind, and fast forward, and DVR features such as commercial
skipping. It does not change any content or its inherent linearity, only how users control the
viewing of that content. DVRs allow users to time-shift content in a way that is impractical with VHS.
Though this form of interactive TV is not insignificant, critics claim that saying that using a
remote control to turn TV sets on and off makes television interactive is like saying turning the
pages of a book makes the book interactive.
In the not-too-distant future, it will be difficult to pin down what counts as real interaction
with the TV. Panasonic has already implemented face recognition technology in its prototype
Panasonic Life Wall. The Life Wall is literally a wall in your house that doubles as a screen.
Panasonic uses its face recognition technology to follow the viewer around the room, adjusting
the screen size according to the viewer's distance from the wall. Its goal is to give the viewer
the best seat in the house, regardless of location. The concept was unveiled at the Consumer
Electronics Show in 2008. Its anticipated release date is unknown, but it can be assumed
technology like this will not remain hidden for long.
In its deepest sense, interactivity with normal TV program content is what is truly
"interactive TV", but it is also the most challenging to produce. This is the idea that the program
itself might change based on viewer input. Advanced forms, which still have uncertain prospects
for becoming mainstream, include dramas where viewers get to choose or influence plot details
and endings.
As an example, in Accidental Lovers viewers can send mobile text messages to the
broadcast and the plot transforms on the basis of the keywords picked from the messages.
Global Television Network offers a multi-monitor interactive game for Big Brother 8 (US),
"In The House", which allows viewers to predict who will win each competition and who is
going home, as well as answer trivia questions and instant-recall challenges throughout
the live show. Viewers log in to the Global website to play, with no downloads required.
Another example of interactive content is the Hugo game on television, where
viewers called the production studio and were allowed to control the game character in
real time using their telephone buttons, relayed by studio personnel, similar to The Price Is Right.
Another example is the Click vision Interactive Perception Panel used on news programmes
in Britain, a kind of instant clap-o-meter run over the telephone.
Commercial broadcasters and other content providers serving the US market are
constrained from adopting advanced interactive technologies because they must serve the
desires of their customers, earn a level of return on investment for their investors, and are
dependent on the penetration of interactive technology into viewers' homes. Constraining
factors include:
requirements for backward compatibility of TV content formats, form factors and Customer
Premises Equipment (CPE)
the ‘cable monopoly’ laws that are in force in many communities served by cable TV
operators
consumer acceptance of the pricing structure for new TV-delivered services; over-the-air
(broadcast) TV is free in the US, with no taxes or usage fees
proprietary coding of set top boxes by cable operators and box manufacturers
the ability to implement ‘return path’ interaction in rural areas that have low, or no technology
infrastructure
the competition from Internet-based content and service providers for the consumers’
attention and budget
The least understood form, interactivity with TV-related content, may have the most promise to
alter how we watch TV over the next decade. Examples include getting more information about
what is on the TV: weather, sports, movies, news, or the like.
Similarly (and most likely to pay the bills), getting more information about what is being
advertised, along with the ability to buy it, is called "t-commerce" (short for "television
commerce"). Partial steps in this direction are already becoming a mass
phenomenon, as Web sites and mobile phone services coordinate with TV programs (note:
this type of interactive TV is currently being called “participation TV” and GSN and TBS are
proponents of it). This kind of multitasking is already happening on a large scale, but compared
to other forms of interactive TV there is currently little or no automated support for relating
that secondary interaction to what is on the TV. In the coming months and years, there will be no
need to have both a computer and a TV set for interactive television as the interactive content
will be built into the system via the next generation of set-top boxes. However, set-top-boxes
have yet to get a strong foothold in American households as price (pay per service pricing
model) and lack of interactive content have failed to justify their cost.
Many think of interactive TV primarily in terms of “one-screen” forms that involve interaction
on the TV screen, using the remote control, but there is another significant form of interactive
TV that makes use of Two-Screen Solutions, such as NanoGaming. In this case, the second
screen is typically a PC (personal computer) connected to a Web site application. Web
applications may be synchronized with the TV broadcast, or be regular websites that provide
supplementary content to the live broadcast, either in the form of information, or as interactive
game or program. Some two-screen applications allow for interaction from a mobile device
(phone or PDA) that runs "in sync" with the show.
Such services are sometimes called “Enhanced TV,” but this term is in decline, being
seen as anachronistic and misused occasionally. (Note: “Enhanced TV” originated in the mid-
late 1990s as a term that some hoped would replace the umbrella term of “interactive TV” due
to the negative associations “interactive TV” carried because of the way companies and the
news media over-hyped its potential in the early 90’s.)
Notable Two-Screen Solutions have been offered for specific popular programs by many
US broadcast TV networks. Today, two-screen interactive TV is called either 2-screen (for
short) or “Synchronized TV” and is widely deployed around the US by national broadcasters
with the help of technology offerings from certain companies. The first such application was
Chat Television™ (ChatTV.com), originally developed in 1996. The system synchronized online
services with television broadcasts, grouping users by time-zone and program so that all real-
time viewers could participate in a chat or interactive gathering during the show’s airing.
One-screen interactive TV generally requires special support in the set-top box, but Two-
Screen Solutions (synchronized interactive TV applications) generally do not, relying instead on
Internet or mobile phone servers to coordinate with the TV, and are usually free to the user.
Developments from 2006 onwards indicate that the mobile phone can be used for seamless
authentication through Bluetooth, or for explicit authentication through Near Field Communication.
Through such authentication it will be possible to provide personalized services to the mobile
phone.
Interactive TV services
Notable interactive TV services are:
T-commerce - a commerce transaction carried out through the set-top box return path connection.
ATVEF - ‘Advanced Television Enhancement Forum’ is a group of companies that are set
up to create HTML based TV products and services. ATVEF’s work has resulted in an
Enhanced Content Specification which makes it possible for developers to create their
content once and have it display properly on any compliant receiver.
Philips Net TV - a solution for viewing Internet content designed for TV, integrated directly
inside the TV set. No extra subscription or hardware costs are involved.
An Interactive TV purchasing system was introduced in 1994 in France. The system used
a regular TV set connected to a regular antenna, with the Internet used for feedback.
A demo showed the possibility of immediate, interactive purchasing of displayed contents.
QUBE - A very early example of this concept, it was introduced experimentally by Warner
Cable (later Time Warner Cable, now part of Charter Spectrum) in Columbus, Ohio in
1977. Its most notable feature was five buttons that could allow the viewers to, among
other things, participate in interactive game shows, and answer survey questions. While
successful, going on to expand to a few other cities, the service eventually proved to be
too expensive to run, and was discontinued by 1984, although the special boxes would
continue to be serviced well into the 1990s.
13.6 Summary
High-definition television (HDTV) is a digital television broadcasting system with
greater resolution than traditional television systems (NTSC, SECAM, PAL).
HDTV is digitally broadcast because digital television (DTV) requires less bandwidth if
sufficient video compression is used.
There are three key differences between HDTV and what has become known as standard-
definition TV, i.e. regular NTSC, PAL or SECAM.
New media, also called digital media, consists of methods that are mostly online or involve
the Internet in some sense.
Interactive Television (iTV) is the integration of traditional television technology and data
services.
3. a. interactive multimedia
4. anonymous
5. a. True
6. a. True
4. Define ADSL.
LESSON 14
MULTIMEDIA TECHNOLOGIES
Structure
14.1 Introduction
14.6 Summary
14.1 Introduction
The technology used in televisions has improved dramatically. With the introduction of
digital broadcasting, users now have a wide array of options when it comes to methods of
receiving television signals. It also allows you to play or stream videos in different resolutions.
Traditional televisions receive data through analog waveforms that assign radio frequencies or
broadcasts to television channels. In digital broadcasting, however, digital data is used.
Know the practices of using digital signals rather than analogue signals for broadcasting
over radio frequency bands.
Learn the benefits of digital radio: higher-quality sound than current AM and FM radio
broadcasts, delivered to fixed, portable and mobile receivers.
Understand the different types of multimedia conferencing and online multimedia tools
etc.
Digital broadcasting is the practice of using digital signals rather than analogue signals
for broadcasting over radio frequency bands. Digital Television (DTV) broadcasting (especially
satellite television) is widespread. Content providers can provide more services or a higher-
quality signal than was previously available.
Digital Television is more advanced than the older analog technology. Unlike analog
television, which uses a continuously variable signal, a digital broadcast converts the
programming into a stream of binary on/off bits - sequences of 0s and 1s. Over-the-air digital
signals do not weaken over distance the way analog signals do.
Digital channels
Digital Television is the transmission of television signals, including the sound channel,
using digital encoding, in contrast to the earlier television technology, analog television, in
which the video and audio are carried by analog signals.
Digitization Process
A digitization process is used to convert analog data, such as media, sound, image, and
text, into a numerical representation through two discrete steps:
(i) Sampling
(ii) Quantization
(i) Sampling
In the first step, data is sampled at regular intervals, such as the grid of pixels used to
represent a digital image. The frequency of sampling is referred to as the resolution of the
image. Sampling turns continuous (analog) data into discrete (digital) data - data occurring in
distinct units: people, pages of a book, pixels.
In the second step, each sample is quantified, i.e. assigned a numerical value drawn from a
defined range (such as 0-255 in the case of an 8-bit grayscale image).
(ii) Quantization
Any image or audio signal, such as a color, can be treated as a continuous waveform; the
signal can be pictured as a graph such as y = sin(x). This mathematical representation becomes
digitized when sampled by a computer. The digital representation can change depending on the
selected resolution: the higher the resolution, the more accurately the digital representation
will measure the signal.
Sampling refers to considering the image only at a finite number of points and quantization
refers to the representation of the color value (in RGB format) at each sampled point using a
finite number of bits. In this case, each image sample is called a pixel and every pixel has one
and only one color value. Any typical desktop image scanner performs both sampling and
quantization. Usually, in scanning a printed image, the first steps concern the sampling area
and rate, and the later steps deal with the quantization parameters, such as resolution and file size.
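To make the two steps concrete, here is a minimal Python sketch (an illustration only; the
constants are assumptions, not tied to any particular device) that samples the continuous
signal y = sin(x) at regular intervals and quantizes each sample to an 8-bit value in the
range 0-255:

    import math

    SAMPLE_RATE = 8          # samples per unit of x - the "resolution"
    DURATION = 2 * math.pi   # digitize one full period of the sine wave
    LEVELS = 256             # 8-bit quantization -> integer values 0..255

    def sample_and_quantize():
        n_samples = int(SAMPLE_RATE * DURATION)
        digital = []
        for i in range(n_samples):
            x = i / SAMPLE_RATE      # step 1: sampling at regular intervals
            y = math.sin(x)          # continuous value in [-1.0, 1.0]
            # step 2: quantization - map [-1.0, 1.0] onto the integers 0..255
            digital.append(round((y + 1.0) / 2.0 * (LEVELS - 1)))
        return digital

    samples = sample_and_quantize()
    print(samples[:4])   # the first few 8-bit samples of the digitized wave

Raising SAMPLE_RATE (more samples) or LEVELS (finer quantization) makes the digital version
track the continuous signal more closely, at the cost of more data.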
Digitization should not be seen only as a technical process, because it also has an important
semiological and cultural significance. While some old media, such as photography and sculpture,
are truly continuous, most involve a combination of continuous and discrete coding. One example
is motion picture film: each frame is a continuous photograph, but time is broken into a number
of samples (frames).
Video goes one step further by sampling the frame along the vertical dimension (scan
lines). Similarly, a photograph printed using a halftone process combines discrete and continuous
representations. Such photographs consist of a number of orderly dots (i.e., samples),
however, the diameters and areas of the dots vary continuously. As this last example demonstrates,
while old media contain level(s) of discrete representation, the samples were never quantified.
This quantification of samples is the crucial step accomplished by digitization.
Digital TV
Digital Television (DTV) is the transmission of television signals using digital rather than
conventional analog methods.
Digital Television is not the same thing as HDTV (High-Definition Television). HDTV
describes a new television format (including a new aspect ratio and pixel density), but not how
the format will be transmitted. Digital Television can be either standard or high definition.
Digital TV Standards
Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing
(OFDM) modulation and supports hierarchical transmission. This standard has been
adopted in Europe, Africa, Asia and Australia - about 60 countries in total.
Advanced Television System Committee (ATSC) uses eight-level vestigial sideband (8VSB)
for terrestrial broadcasting. This standard has been adopted by 6 countries: United States,
Canada, Mexico, South Korea, Dominican Republic and Honduras.
Integrated Services Digital Broadcasting (ISDB-T) has been adopted in Japan and the
Philippines. ISDB-T International is an adaptation of this standard using H.264/MPEG-4 AVC
that has been adopted in most of South America and is also being embraced by Portuguese-speaking
African countries.
1. Better Bandwidth
One of the main advantages of digital signals is that they are more efficient in bandwidth
usage than analog transmission. The image quality delivered by digital signals is also better;
in fact, high-definition televisions can only display images with the use of digital data. The
digital signals are divided into five signal patterns, which can accommodate various aspect
ratios. This in turn improves the quality of the images displayed on your television.
2. Automatic Tuning
Digital signals can be tuned automatically, and the suitable resolution for your digital
television is selected automatically. This in turn allows your television to display clearer and
more detailed images. It also gives you the assurance that your television will work regardless
of its bandwidth capability.
Digital broadcasting also allows your television to receive television signals through various
methods. One of the most common methods is a cable connection, also known as digital cable.
Televisions can also receive digital signals with the use of a satellite dish.
Because of advancements in technology, digital broadcasts can now be run over DSL
connections. This improvement also makes it possible for mobile phones to receive digital
signals, and allows you to set up a computer-to-television system, which is great for
entertainment.
Some systems have USB ports that can be connected to a telephone line, allowing you
to contact your service provider, as well as do other electronic transactions.
It also allows you to record television programs so that you can view them at your own
convenient time. With this feature, you can have the assurance that you will never miss an
episode of your favorite TV series.
If you want to have a great experience watching television programs at home during your
leisure time, it is advisable to incorporate a digital broadcasting system in your home. It offers
many benefits: it improves the quality of the images your television displays, and the digital
format is compatible with any resolution, giving you the assurance that your broadcasting will
work regardless of the size of your television screen.
The United States made the switch to digital television broadcasting in 2009, which meant
that individuals using standard analog televisions had to convert. This meant either purchasing
an entirely new digital television set or—the less expensive option—purchasing an external
converter box, which you can attach to an analog television much like a cable box.
Although this was likely an inconvenience for many, the government offered coupons
prior to the conversion to help cover the costs of these converter boxes. According to nhk.or.jp,
broadcasters also had to adjust to the conversion, and needed to invest in new production,
transmission and operating equipment as well as new devices for video and audio encoding.
2. Scanning Channels
According to kmos.ucmo.edu, when you first set up a digital converter box or turn on
your digital TV, you will not have instant access to channels as with an analog system. This is
due to a delay between when your digital device receives a transmission and when it can
display it. So before you can start watching, your television needs to complete a channel scan
or memorization. This will take approximately 30 to 60 seconds per channel.
A long-term problem that will occur with digital broadcasting is that more and more
frequencies will eventually be needed to make room for more digital programming. According
to nhk.or.jp, this means that the frequencies usually reserved for analog broadcasting, such as
those used by traditional radio stations, will eventually need to be appropriated. Otherwise,
digital TV will only be able to broadcast a limited amount of programming.
Digital radio is the transmission and reception of sound processed into patterns of numbers,
or “digits” – hence the term “digital radio.” In contrast, traditional analog radios process sounds
into patterns of electrical signals that resemble sound waves.
Digital radio reception is more resistant to interference and eliminates many imperfections
of analog radio transmission and reception. There may be some interference to digital radio
signals, however, in areas that are distant from a station’s transmitter. FM digital radio can
provide clear sound comparable in quality to CDs, and AM digital radio can provide sound
quality equivalent to that of standard analog FM.
The radio station creates a digital signal at the same time as it creates the analog signal.
The digital signal is compressed and then broadcast along with the analog signal. The nice
thing about high-definition receivers is that they can filter out the interference caused by
signals reflecting off buildings.
The term broadcasting means the transmission of audio or video content using radio-
frequency waves. With the recent advancements in digital technology, radio broadcasting now
applies to many different types of content distribution.
Analog Radio
Analog radio consists of two main types: AM (amplitude modulation) and FM (frequency
modulation). An analog radio station frequently feeds only one transmitter and is referred to
as an AM station or an FM station in the U.S. But it is quite possible for a station to feed both
kinds of transmitter in the same area, or to feed more than one transmitter covering different
areas. In either case, AM or FM refers only to a particular transmitter and not to the entire
station. The latter arrangement is becoming widespread throughout the U.S.
AM radio uses the long-wave band in some nations. The long-wave band uses frequencies
considerably lower than the FM band and has slightly different transmission characteristics,
better suited to broadcasting over long distances. Both AM and FM are used to broadcast
audio signals to home, car, and portable receivers.
MPT1327: Perhaps the most widely used analog trunking technology today is MPT
1327, named after the UK Ministry of Posts and Telecommunications that created this particular
open standard. A number of different manufacturers support this trunking technology.
Tetra: As the world becomes more digital, a number of digital radio technologies have
emerged. One of these is Tetra, developed in Europe in the late eighties. It is very similar
to the GSM technology used in modern digital cellphones. Tetra is a 4-slot TDMA technology
that works in 25 kHz (wideband) channel spacing. It is very popular amongst large public-safety
agencies, is used in airports, and has strong data applications. Tetra operates in
specific bands: 380 to 420 MHz and in the 700/800 MHz range.
P25: Another major open standard for digital radio technology is APCO Project 25 or P25
for short which was developed specifically for public safety agencies in the United States.
P25 Phase 1 differs from Tetra by being an FDMA technology and also supporting
conventional, trunked, and simulcast operation (or a combination of all three of these).
P25 can be used in any licensed frequency that a public safety agency has whether it be
VHF, UHF, 700, 800, even 900 MHz. It can be employed by non-public safety users as
well. P25 actually comes in two phases. Phase 1 is an FDMA technology operating in
12.5 kHz channel spacing. Phase 2 is a more recent development and is available only in
trunked operation. It is TDMA and offers two time slots in a single 12.5 kHz channel spacing,
giving the equivalent of a 6.25 kHz channel per voice path.
DMR: One of the newest open radio standards is digital mobile radio, or DMR for
short. It is a TDMA technology that uses two time slots and operates in 12.5 kHz
channel spacing, available in any licensed frequency. Tier 2 DMR offers conventional
operation and Tier 3 DMR offers trunked operation. DMR is increasingly used by businesses
such as mining, utilities and transport throughout the world.
NXDN: NXDN is an FDMA technology, similar to DMR, which operates in a 6.25 kHz channel
spacing. It is not limited to any particular frequency band, and it also supports conventional
and trunked operation. (A short sketch comparing these channel plans follows.)
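The channel-plan figures quoted above can be compared directly. The short Python sketch
below (the names and numbers are taken from the descriptions above) divides each
technology's channel spacing by its number of time slots:

    technologies = {
        # name: (channel spacing in kHz, time slots per channel)
        "Tetra":       (25.0, 4),   # TDMA, 4 slots in a 25 kHz channel
        "P25 Phase 1": (12.5, 1),   # FDMA, one voice path per channel
        "P25 Phase 2": (12.5, 2),   # TDMA, 2 slots per 12.5 kHz channel
        "DMR":         (12.5, 2),   # TDMA, 2 slots per 12.5 kHz channel
        "NXDN":        (6.25, 1),   # FDMA, one voice path per channel
    }

    for name, (spacing_khz, slots) in technologies.items():
        print(f"{name}: {spacing_khz / slots} kHz per voice path")

Most entries work out to 6.25 kHz per voice path, which is why P25 Phase 2 and DMR are
described as giving the equivalent of a 6.25 kHz channel; P25 Phase 1, an FDMA design with
one voice path in 12.5 kHz, is the outlier.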
Four standards for digital radio systems exist worldwide: IBOC (In-Band On-Channel),
DAB (Digital Audio Broadcasting), ISDB-TSB (Integrated Services Digital Broadcasting-
Terrestrial Sound Broadcasting), and DRM (Digital Radio Mondiale). All are different from each
other in several respects.
IBOC
IBOC (In-Band On-Channel) allows a station to broadcast a digital signal on the same channel
as its existing analog AM or FM signal, so no new spectrum is required; it is the approach
adopted in the U.S.
DAB
Also known as Eureka 147 in the U.S. and as Digital Radio in the U.K., DAB comes with
a number of advantages similar to IBOC, but it is fundamentally different in its design. Unlike
IBOC, DAB cannot share a channel with an analog transmission, so it needs a new, dedicated
band. Each DAB broadcast also needs much more bandwidth, as it consists of multi-program
services (typically 6 to 10, depending on quality and the amount of data carried). This makes it
unusable by a typical local radio station. It is generally implemented with the cooperation of
several broadcasters, or by a third-party aggregator that acts as a service operator for broadcasters.
Recently, improved versions of DAB, known as DAB+ and DAB-IP, have been developed.
These developments increase the range of the DAB signal. Today, almost 40 countries worldwide
have DAB services on air (mostly in Europe), and others are considering the adoption of DAB
or one of its variants.
ISDB-TSB
Specifically developed for Japan in 2003, ISDB-TSB is the digital radio system used for
multi-program services. It currently uses transmission frequencies in the VHF band. A
unique feature of ISDB-TSB is that the digital radio channels are intermingled with ISDB digital
TV channels in the same broadcast.
DRM
DRM (Digital Radio Mondiale) is an open digital system designed primarily for the bands
currently used by analog AM broadcasting (long, medium and short wave), allowing digital
sound in the same channels.
Sirius XM: Sirius XM is the combination of two similar but competing satellite radio
services: XM Satellite Radio and Sirius Satellite Radio. XM and Sirius, which still operate
separately at the retail level, are subscription services. They broadcast more than 150
digital audio channels intended for reception by car, portable, and fixed receivers. These
provide coverage of the complete continental United States, much of Canada, and parts
of Mexico.
Internet Radio
Many radio stations now use online streaming audio services to provide a simulcast
of their over-the-air signals to web listeners. A broadcaster may also offer additional
online audio streams that are re-purposed, time-shifted, or completely different from their on-
air services. Because no scarcity of bandwidth or obligation for licensing of online services
exists, broadcasters may offer as many services as they wish. Unlike over-the-air broadcasting,
web distribution is delivered to end-users by the third-party telecommunication providers on a
nationwide or worldwide basis.
Traditional radio stations simulcast their programs using one of the compatible audio
formats that internet radio uses such as MP3, OGG, WMA, RA, AAC Plus and others. Most up-
to-date software media players can play streaming audio using these popular formats.
Traditional radio stations are limited by the power of their station’s transmitter and the
available broadcast options. They might be heard for 100 miles, but not much further, and they
may have to share the airwaves with other local radio stations.
Internet radio stations don’t have these limitations, so you can listen to any internet radio
station anywhere you can get online. In addition, internet radio stations are not limited to audio
transmissions. They have the option to share graphics, photos, and links with their listeners
and to form chat rooms or message boards.
Digital radio is able to offer generally higher quality sound than current AM and FM radio
broadcasts to fixed, portable and mobile receivers. The sound quality can relate to the bandwidth
and the data rates used.
Listeners benefit from an increased variety of radio programs because each broadcaster
is permitted to transmit multiple program streams. This means that broadcasters may provide
numerous new digital radio stations instead of a single analog radio station.
The technology also enables a number of additional audio, image and text services,
including:
Program information such as the station name, song title and artist’s name
a) Interactive
b) Streaming live
c) Streaming stored
2. We can divide audio and video services into _______ broad categories.
a) Audio Conferencing
b) Video Conferencing
c) Computer Conferencing
Broadcast leads usually withhold much important information because listeners do not
hear the first two or three words of a story. a) True b) False
Public radio stations typically schedule shorter and less frequent news programs than do
commercial radio stations. a) True b) False
a) Interpolator
b) Decimator
c) Equalizer
d) Filter
a) ISDN
b) Modems
c) Classical telephony
14.5 Multimedia Conferencing
Video is used in technical discussions to display view-graphs and to indicate how many
users are still physically present at a conference. For visual support, workstations, PCs or
video walls can be used.
For conferences with more than three or four participants, the screen resources on a PC
or workstation run out quickly, particularly if other applications, such as shared editors or drawing
spaces, are used. Hence, mechanisms which quickly resize individual images should be used.
Closing a conference.
Adding new users and removing users who leave the conference.
Conference states can be stored (located) either on a central machine (centralized control),
where a central application acts as the repository for all information related to the conference,
or in a distributed fashion.
Multimedia Conferencing
The use of continuous media, such as voice and video, in distributed systems implies the
need for continuous data transfers over relatively long periods of time; for example, the playout
of video from a remote conferencing camera. Furthermore, the timeliness of such media
transmissions must be maintained as an ongoing commitment for the duration of the continuous
media presentation.
There are several aspects to group support for multimedia. Firstly, it is necessary to
provide a programming model for multiparty communications (supporting both discrete and
continuous media types). Facilities should also be provided to enable management of such
groups: for example, providing support for joining and leaving of groups at run-time. Secondly,
it is important to ensure that the underlying system provides the right level of support for such
communications, particularly for continuous media types. Thirdly, with multimedia, it is necessary
to cater for multicast communications where receivers may require different qualities of service.
This adds some complexity to quality of service management. Fourthly, it is important to be
able to support a variety of policies for ordering and reliability of data delivery.
1. Meet-Me Conference
2. Ad-Hoc Conference
3. Interactive-Broadcast Conference
Meet-Me Conference
The conference is pre-arranged.
A conference bridge determines the video signal to be sent to the participants (in the case of
an audio/video conference).
2. Ad-Hoc Conference
Ad-Hoc Conference
– A talks to B
– A puts B on hold
– A calls C
The user originating the conference call must be able to provide the necessary bridge
functionality.
3. Interactive-Broadcast Conference
Asymmetric conference
– Terminals have a much simpler back channel to the master (e.g. just signaling or a plain
text stream)
Interactive-Broadcast Conference
Adobe Connect is one of the tools used to broadcast events interactively to the web.
Polycom video conferencing systems support meetings with peers all over the world.
Interactive conferencing involves certain gadgets, such as audio speakers and LCD projectors.
BEING THERE
3-D worlds
Apple QuickTime
Java
14.6 Summary
Digital broadcasting is a way of transmitting audio and video information through an
encoded signal composed of 1s and 0s.
It represents the latest in mainstream television broadcasting, having replaced the analog
system.
Digital radio broadcasting is significantly more spectrum-efficient than analog FM radio.
Individuals using video conferencing can share data with each other and transfer
information such as photos or documents.
2. Three
4. a) True
5. b) False
6. a) Interpolator
LESSON 15
STAGES OF MULTIMEDIA APPLICATION
DEVELOPMENT
Structure
15.1 Introduction
15.6 Delivering
15.8 Summary
15.10 Model Questions
15.1 Introduction
Even though we may have all the required elements of multimedia to start and finish a full-
fledged multimedia project, a project also requires a plan of action for project handling that
includes planning, budgeting, analysis, provisioning, etc. This lesson gives a brief introduction
to the stages of handling a multimedia project.
The four basic stages in a multimedia project are:
1. Planning and costing
2. Designing and producing
3. Testing
4. Delivering
The first stage of multimedia application development begins with an idea or a need. This
idea can be further refined by outlining its messages and objectives. Before starting to
develop the multimedia project, it is necessary to plan what writing skills, graphic art, music,
video and other multimedia expertise will be required.
It is also necessary to estimate the time needed to prepare all elements of multimedia
and prepare a budget accordingly. After preparing a budget, a prototype or proof of concept
can be developed.
The needs of a project are analyzed by outlining its messages and objectives.
(ii) Scheduling.
(iii) Estimating.
Idea analysis.
Pre-testing.
Task planning.
Development.
Delivery
Before beginning a multimedia project, it is necessary to determine its scope and content.
The aim is to generate a plan of action that will become the road map for production.
It is necessary to continually weigh the purpose or goal against the feasibility and the cost
of production and delivery.
Additive process involves starting with minimal capabilities and gradually adding elements.
Idea Analysis:
CPM - Project management software typically provides Critical Path Method (CPM)
scheduling functions to calculate the total duration of a project based upon each identified
task, showing prerequisites. (A minimal sketch of the idea follows the tool list below.)
Such tools include:
Microsoft Project.
Designer’s Edge.
Outlining programs.
Spreadsheets
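As a rough illustration of the CPM idea, the following Python sketch (the task names and
durations are hypothetical, not from any real project) computes a project's total duration as
the longest chain of prerequisite tasks:

    from functools import lru_cache

    # Hypothetical tasks: name -> (duration in days, prerequisite tasks).
    tasks = {
        "storyboard":   (5,  []),
        "record_audio": (3,  ["storyboard"]),
        "shoot_video":  (7,  ["storyboard"]),
        "edit":         (10, ["record_audio", "shoot_video"]),
        "test":         (4,  ["edit"]),
    }

    @lru_cache(maxsize=None)
    def earliest_finish(task):
        # A task can finish no earlier than its own duration plus the
        # latest earliest-finish among its prerequisites.
        duration, prereqs = tasks[task]
        return duration + max((earliest_finish(p) for p in prereqs), default=0)

    # The total project duration is the longest (critical) path: 26 days here.
    print(max(earliest_finish(t) for t in tasks))

Tasks that lie on this longest path are the critical ones: delaying any of them delays the
whole project, which is exactly what CPM-based scheduling tools highlight.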
Pre-Testing:
Involves defining project goals in fine detail and spelling out what it will take in terms of
skills, content, and money to meet these goals.
Work up a prototype of the project on paper to help you relate your ideas to the real world.
Task Planning:
Building a prototype, producing audio and video, testing the functionality, and delivering
the final product.
Development
Prototype development
Involves testing of the initial implementation of ideas, building mock-up interfaces, and
exercising the hardware platform.
A written report and an analysis of budgets allow the client some flexibility and also provide
a reality check for developers.
Alpha development – At this stage, the investment of effort increases and becomes
more focused. More people get involved.
Beta development – At this stage, most of the features of a project are functional.
Testing is done by a wider group of testers.
Delivery
The concerns shift towards the scalability of the project in the marketplace.
(ii) Scheduling
Milestones are decided at this stage.
The time required for each deliverable, that is the work products delivered to the client, is
estimated and allocated.
Scheduling is also difficult because computer hardware and software technology are in
constant flux.
At this stage, clients need to approve or sign off on the work created.
A change order stipulates that the additional cost of revising previously approved material
should be borne by the client.
(iii) Estimating
Cost estimation is done by analyzing the tasks involved in a project and the people who
build it.
The hidden costs of administration and management are also included in the cost estimates.
A contingency rate of 10 to 15 percent of the total cost should be added to the estimated
costs (a small worked example appears after this list).
Time, money, and people are the three elements that can vary in project estimates.
The time at which payments are to be made is determined and are usually made in three
stages.
Contractors and consultants can be hired, but they should be billed at a lower rate.
Ensure that contractors perform the majority of their work off-site and use their own
equipment, to avoid having them classified as employees.
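A quick worked example of the estimating arithmetic, with entirely hypothetical figures,
might look like this in Python:

    # Hypothetical line-item estimates, in some currency unit.
    estimates = {"salaries": 40_000, "content_acquisition": 8_000, "travel": 2_000}

    CONTINGENCY = 0.15                      # the upper end of the 10-15% range

    subtotal = sum(estimates.values())      # 50,000
    total = round(subtotal * (1 + CONTINGENCY))
    print(subtotal, total)                  # 50000 57500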
Salaries.
Client meetings.
Acquisition of content.
Communication.
Travel.
Research.
Overheads.
Authoring costs.
These include:
Salaries.
Facility rental.
Printing costs.
Editing.
Beta program.
Salaries
Documentation
Packaging
Manufacturing
Marketing
Advertising
Shipping
Hardware:
Hardware is the most common limiting factor for realizing a multimedia idea.
The most common delivery platforms require a monitor resolution of 800x600 pixels and
at least 16-bit color depth.
RFPs (Requests for Proposals) provide information about the scope of work and the bidding process.
Bid proposals:
The backbone of the proposal is the estimate and project plan, which describes the scope
of the work.
The cost estimates for each phase or deliverable milestone and the payment schedules
should also be included.
The terms of a contract should include a description of the billing rates, invoicing policy,
third-party licensing fees, and a disclaimer for liability and damages.
A proposal should appear plain and simple, yet businesslike. A table of contents or an
index is a straightforward way to present the elements of a proposal in a condensed overview.
The needs analysis and description section describes the reasons the project is being put forward.
Creative strategy – This section describes the look and feel of a project. This is
useful if the reviewing executives were not present for the preliminary discussions.
The next stage is to execute each of the planned tasks and create a finished product.
The product is revised, based on the continuous feedback received from the client.
Feedback loops and good communication between the design and production effort are
critical to the success of a project.
A developer can either describe the project in minute detail, or build a less-detailed
storyboard and spend more effort on actually rendering the project.
The method chosen depends upon the scope of a project, the size and style of the team,
and whether the same people will do design and development.
If the design team is separate from the development team, it is best to produce a detailed
design first.
· The manner in which project material is organized has just as great an impact on the
viewer as the content itself.
· Mapping the structure of a project should be done early in the planning phase.
Navigation:
Navigation maps are also known as site maps.
Navigation maps provide a hierarchical table of contents and a chart of the logical flow of
the interactive interface.
(i) Linear - Users navigate sequentially, from one frame of information to another.
(ii) Hierarchical - Users navigate along the branches of a tree structure that is shaped by the
natural logic of the content. It is also called linear with branching.
(iii) Non-linear - Users navigate freely through the content, unbound by predetermined routes.
(iv) Composite - Users may navigate non-linearly, but are occasionally constrained to linear
presentations. (A minimal sketch of these four structures follows.)
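The four structures are easy to picture as data. Below is a minimal Python sketch (the screen
names are hypothetical) that captures each structure as an adjacency list, where each key is
a screen and each value lists the screens a viewer can move to from it:

    linear = {"intro": ["part1"], "part1": ["part2"], "part2": []}

    hierarchical = {
        "home": ["chapter1", "chapter2"],
        "chapter1": ["section1.1", "section1.2"],
        "chapter2": ["section2.1"],
    }

    # Non-linear: every screen links freely to every other screen.
    screens = ["a", "b", "c"]
    non_linear = {s: [t for t in screens if t != s] for s in screens}

    # Composite: mostly free movement, but movie -> credits is a forced,
    # linear presentation.
    composite = {"menu": ["gallery", "movie"], "gallery": ["menu"],
                 "movie": ["credits"], "credits": ["menu"]}

    def reachable(structure, start):
        # Depth-first traversal: which screens can a viewer eventually reach?
        seen, stack = set(), [start]
        while stack:
            node = stack.pop()
            if node not in seen:
                seen.add(node)
                stack.extend(structure.get(node, []))
        return seen

    print(sorted(reachable(hierarchical, "home")))

A traversal like reachable() is essentially what a navigation map documents: it lets the
designer verify that no screen is orphaned and that viewers retain free choice.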
The navigation system should be designed in such a manner that viewers are given free
choice.
The architectural drawings for a multimedia project are storyboards and navigation maps.
Storyboards are linked to navigation maps during the design process, and help to visualize
the information architecture.
Structural Depth:
A user can design their product using two types of structures:
(i) Depth structure - Represents the complete navigation map and describes all the links
between all the components of the project.
Depth Structure
(ii) Surface structure - Represents the structures actually realized by a user while navigating
the depth structure.
Surface Structure
Hotspots:
Add interactivity to a multimedia project.
The three categories of hotspots are text, graphic, and icon. The simplest hotspots on
the Web are the text anchors that link a document to other documents.
Hyperlinks
A hotspot that connects a viewer to another part of the same document, a different
document, or another Web site is called a hyperlink.
Image maps - Larger images that are sectioned into hot areas with associated links are
called image maps (a minimal hit-test sketch appears below this list).
Small JPEG or GIF images that are themselves anchor links can also serve as buttons on
the Web.
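The following minimal Python sketch (the regions and URLs are hypothetical) shows the idea
behind an image map: rectangular hot areas, each carrying a link, and a hit test that resolves
a click to its target:

    # (x, y, width, height, target_url) for each hot area of the image.
    hot_areas = [
        (0,   0, 100, 50, "intro.html"),
        (100, 0, 100, 50, "gallery.html"),
    ]

    def resolve_click(x, y):
        for ax, ay, w, h, url in hot_areas:
            if ax <= x < ax + w and ay <= y < ay + h:
                return url
        return None   # the click landed outside every hot area

    print(resolve_click(120, 25))   # -> "gallery.html"

A browser performs the same lookup when a viewer clicks inside an HTML image map, jumping
to whichever link is associated with the region that was hit.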
It is essential to follow accepted conventions for button design and grouping, visual and
audio feedback, and navigation structure.
The user interface of a project is a blend of its graphic elements and its navigation system.
The simplest solution for handling varied levels of user expertise is to provide a modal
interface.
In a modal interface, the viewer can simply click a Novice/Expert button and change the
approach of the whole interface.
The solution is to build a project that can contain plenty of navigational power, which
provides access to content and tasks for users at all levels.
GUIs offer built-in help systems, and provide standard patterns of activity that produce
the standard expected results.
- Gradients.
- Shadows.
- Eye-grabbers.
- Clashes of color.
- Busy screens.
Audio interface:
A multimedia user interface can include sound elements.
Sounds can be background music, special effects for button clicks, voice-overs, effects
synced to animation.
The production stage requires good organization and detailed management oversight
during the entire construction process.
It is important to check the development hardware and software and review the
organizational and administrative setup. Potential problems can be avoided by answering
these questions:
– Develop a scheme that specifies the number and duration of client approval cycles.
- Provide a mechanism for change orders when changes are requested after sign-off
– The most cost-effective and time-saving methods of transportation are CD-R or DVD-
ROMs.
Tracking:
– Organize a method for tracking the receipt of material to be incorporated in a project.
– To address cross-platform issues, develop a file identification system that uses the DOS
file-naming convention of eight characters plus a three-character extension.
– Insert a copyright statement in the project that legally designates the code as the creator’s
intellectual property.
– Copyright and ownership statements are embedded in <meta> tags at the top of an HTML
page (one possible automation of this step is sketched below).
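As one possible way to automate this, the Python sketch below (the folder name and copyright
text are hypothetical) stamps a copyright <meta> tag into the head of every HTML page in a
project before delivery:

    from pathlib import Path

    # Hypothetical statement; replace with the project's real copyright line.
    COPYRIGHT = '<meta name="copyright" content="Copyright Example Studio">'

    def stamp(page):
        html = page.read_text(encoding="utf-8")
        if COPYRIGHT not in html:             # avoid stamping a page twice
            # Assumes each page contains a literal lowercase <head> tag.
            html = html.replace("<head>", "<head>\n  " + COPYRIGHT, 1)
            page.write_text(html, encoding="utf-8")

    for page in Path("project").glob("**/*.html"):   # hypothetical folder
        stamp(page)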
Testing
Testing a project ensures that the product is free from bugs. Apart from bug elimination,
another aspect of testing is to ensure that the multimedia application meets the objectives of
the project. The program is tested to confirm that it meets those objectives, works properly on
the intended delivery platforms, and meets the needs of the clients.
Delivering
The final stage of multimedia application development is to package the project and
deliver the completed product to the end user. This stage has several steps, such as
implementation, maintenance, shipping and marketing the product.
Multimedia projects are complex; they involve the skills and efforts of multiple teams or
people. During the development process, a project moves through the specialized parts of the
team, from story creation to technical editing, with regular collective review sessions. Each
stage is designed to refine the project with attention to the client's needs, technical requirements
and audience preferences.
A project typically begins with a meeting bringing together the team. During the meeting, the
project manager communicates the major goals and lays out the milestones. The meeting may
include a discussion of the target audience and how each division can help support the
overarching goal.
Most multimedia projects have a story behind them. After the initial meeting, the people
in charge of the background story write a script, creative brief or outline. The text hits the main
points of the project and uses language that appeals to the audience in jargon, tone and style.
A multimedia project usually includes multiple pieces: audio, video, imagery, and text for
voiceovers and on-screen titles. Storyboarding ties everything together; a storyboard panel
for a scene includes a sketch of the visual elements, the voiceover or title text, and any production
notes. It guides the process, keeps everyone in check and gives structure to the project.
During the design stage, designers take over the visual aspects of the project to determine
how it looks and feels. Using the notes from the storyboard, they create graphics, design the
navigation and give direction to photographers and videographers regarding the correct shots.
Depending on the project, the design stage might include graphic design, web design, information
design, photography or image collection. Design is always done with an eye toward the audience.
Editing is one of the most involved and complex stages of the multimedia development
process. The people responsible for editing the project turn the various pieces into a cohesive
product, taking into consideration the time constraints, story line and creative specifications.
Depending on the scope of the project, pieces of the project may be edited separately.
For projects with a large amount of video, editing is the longest stage of the process; a
minute of final video can take hours of editing. The editing stage usually involves internal
review iterations and may also include rounds of client review and editing.
The production stage is when all the parts of a multimedia project come together. The
production staff gathers all of the edited assets in one place and puts them together in a logical
sequence, using the story board as a guide. The rough draft is then put through rounds of
review and final edits, both internally and with the client. To ensure that a project has the
desired impact on the target audience, a company may engage in user testing as part of
production.
During this stage, test members of the audience use the multimedia piece while team
members observe. Depending on the goals of the project, the staff might observe users’ reactions
or have them answer questions to see if the project hits the right marks. After user testing,
there are usually further adjustments to the project. Once the team and clients are satisfied,
the project goes out for distribution.
Perl
Java
JavaScript
PHP
(a) CD-ROM
A Compact Disc or CD is an optical disc used to store digital data, originally developed
for storing digital audio. The CD, available on the market since late 1982, remains the standard
playback medium for commercial audio recordings to the present day, though it has lost ground
in recent years to MP3 players.
An audio CD consists of one or more stereo tracks stored using 16-bit PCM coding at a
sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can hold approximately
80 minutes of audio. There are also 80 mm discs, sometimes used for CD singles, which hold
approximately 20 minutes of audio. The technology was later adapted for use as a data storage
device, known as a CD-ROM, and to include record-once and re-writable media (CD-R and
CD-RW respectively).
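The 80-minute figure follows directly from those PCM parameters; here is a quick
back-of-the-envelope check in Python:

    SAMPLE_RATE = 44_100     # samples per second, per channel
    BYTES_PER_SAMPLE = 2     # 16-bit PCM
    CHANNELS = 2             # stereo
    MINUTES = 80

    bytes_per_second = SAMPLE_RATE * BYTES_PER_SAMPLE * CHANNELS  # 176,400 B/s
    total_bytes = bytes_per_second * MINUTES * 60
    print(total_bytes / 1_000_000)   # -> 846.72, roughly 847 MB of raw audio

That roughly 847 MB of raw audio is larger than the 650-700 MB usually quoted for a data
CD, because data discs spend extra space on additional error correction.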
CD-ROMs and CD-Rs remain widely used technologies in the computer industry as of
2007. The CD and its extensions have been extremely successful: in 2004, the worldwide
sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs. By 2007, 200 billion
CDs had been sold worldwide.
(b) DVD
DVD (also known as “Digital Versatile Disc” or “Digital Video Disc”) is a popular optical
disc storage media format. Its main uses are video and data storage. Most DVDs are of the
same dimensions as compact discs (CDs) but store more than 6 times the data. Variations of
the term DVD describe the way data is stored on the discs:
DVD-ROM holds data that can only be read and not written; DVD-R can be written once
and then functions as a DVD-ROM; and DVD-RAM or DVD-RW holds data that can be re-
written multiple times.
DVD-Video and DVD-Audio discs respectively refer to properly formatted and structured
video and audio content. Other types of DVD discs, including those with video content, may be
referred to as DVD-Data discs. The term “DVD” is commonly misused to refer to high density
optical disc formats in general, such as Blu-ray and HD DVD. “DVD” was originally used as
initials for the unofficial term "digital video disc". It was reported in 1995, at the time of the
specification finalization, that the letters officially stood for "digital versatile disc" (reflecting
its non-video applications); however, the text of the press release announcing the
specification finalization only refers to the technology as "DVD", making no mention of what (if
anything) the letters stood for. Usage in the present day varies, with "DVD", "Digital Video
Disc", and "Digital Versatile Disc" all being common.
A USB flash drive is a data storage device that includes flash memory with an integrated
Universal Serial Bus (USB) interface. USB flash drives are typically removable and rewritable,
and physically much smaller than a floppy disk. Most weigh less than 30 g. As of January 2012,
drives of 1 terabyte (TB) were available, and storage capacities as large as 2 terabytes were
planned, with steady improvements in size and price per capacity expected. Some allow up to
100,000 write/erase cycles (depending on the exact type of memory chip used) and 10 years
of shelf storage time.
USB flash drives are used for the same purposes for which floppy disks or CD-ROMs
were used. They are smaller, faster, have thousands of times more capacity, and are more
durable and reliable because they have no moving parts. Until approximately 2005, most desktop
and laptop computers were supplied with floppy disk drives, but floppy disk drives have been
abandoned in favor of USB ports.
USB flash drives use the USB mass storage standard, supported natively by modern
operating systems such as Linux, Mac OS X, Windows, and other Unix-like systems, as well
as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer
it faster than much larger optical disc drives like CD-RW or DVD-RW drives, and can be read
by many other systems such as the Xbox 360, PlayStation 3, DVD players and some mobile
smartphones.
The Internet is a global system of interconnected computer networks that use the standard
Internet protocol suite (TCP/IP) to serve billions of users worldwide. It is a network of networks
that consists of millions of private, public, academic, business, and government networks, of
local to global scope, that are linked by a broad array of electronic, wireless and optical
networking technologies. The Internet carries an extensive range of information resources and
services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and
the infrastructure to support email.
15.6 Delivering
Multimedia can be delivered using
Optical Disks
The most cost-effective method of delivery for multimedia materials.
The devices are used to store large amounts of some combination of text, graphics,
sound, and moving video.
Optical Disks
Media Storage
Distributed Network
Suitable for web-based content, e.g. a website.
Files need to be compressed before transfer (see the sketch below).
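As a minimal sketch of that preparation step (the folder and archive names are hypothetical),
Python's standard zipfile module can bundle and compress a project's files before transfer:

    import zipfile
    from pathlib import Path

    def pack_project(src_dir, archive):
        # Walk the project folder and add every file to a compressed archive.
        with zipfile.ZipFile(archive, "w",
                             compression=zipfile.ZIP_DEFLATED) as zf:
            for path in Path(src_dir).rglob("*"):
                if path.is_file():
                    zf.write(path, path.relative_to(src_dir))

    pack_project("project_media", "project_media.zip")  # hypothetical names

Note that already-compressed media formats (JPEG, MP3, MPEG video) shrink very little under
general-purpose compression; the archive mainly helps by bundling many files into a single
transfer.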
Distributed Network
Web-based: limited in picture size and low video resolution.
CD-based: can store high-end multimedia elements.
2. By the time you reach the __________ stage of multimedia project development, you are
producing the final product.
3. __________ are passed through several levels of a company so that managers and
directors can evaluate projects’ quality and price.
4. The end of each phase of the development of a multimedia project is a natural place to
set __________.
a. prerequisites
b. scopes
c. contingencies
d. milestones
5. According to the text, during which phase of development should you build a skills matrix?
a. Idea analysis
b. Pretesting
c. Building a team
d. Scheduling
6. Large corporations that are “outsourcing” their multimedia development work often create Requests for Proposals (RFPs), which provide background information, the scope of work, and information about the bidding process to potential contractors.
a. True b. False
7. __________ provide you with a table of contents, as well as a chart of the logical flow of the interactive interface.
9. __________ structure describes all the links between all the components of your project.
10. __________ interfaces provide the simplest solutions for handling varied levels of user expertise.
15.7 Optical Storage Devices
Optical storage devices have become the order of the day. Their high storage capacity has made them a popular choice for storing multimedia content, and they also offer high data transfer rates.
15.7.1 CD-ROM
A Compact Disc or CD is an optical disc used to store digital data, originally developed
for storing digital audio. The CD, available on the market since late 1982, remains the standard
playback medium for commercial audio recordings to the present day, though it has lost ground
in recent years to MP3 players.
An audio CD consists of one or more stereo tracks stored using 16-bit PCM coding at a
sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can hold approximately
80 minutes of audio. There are also 80 mm discs, sometimes used for CD singles, which hold
approximately 20 minutes of audio. The technology was later adapted for use as a data storage
device, known as a CD-ROM, and to include record-once and re-writable media (CD-R and CD-RW respectively). CD-ROMs and CD-Rs remain widely used technologies in the computer
industry as of 2007. The CD and its extensions have been extremely successful: in 2004, the
worldwide sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs. By 2007,
200 billion CDs had been sold worldwide.
CD-ROM History
In 1979, Philips and Sony set up a joint task force of engineers to design a new digital
audio disc. The CD was originally thought of as an evolution of the gramophone record, rather
than primarily as a data storage medium. Only later did the concept of an “audio file” arise, and
the generalizing of this to any data file. From its origins as a music format, Compact Disc has
grown to encompass other applications. In June 1985, the CD-ROM (read-only memory) and,
in 1990, CD-Recordable were introduced, also developed by Sony and Philips.
A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate plastic
and weighs approximately 16 grams. A thin layer of aluminum (or, more rarely, gold, used for
its longevity, such as in some limited-edition audiophile CDs) is applied to the surface to make
it reflective, and is protected by a film of lacquer. CD data is stored as a series of tiny indentations
(pits), encoded in a tightly packed spiral track molded into the top of the polycarbonate layer.
The areas between pits are known as “lands”. Each pit is approximately 100 nm deep by 500
nm wide, and varies from 850 nm to 3.5 µm in length.
The spacing between the tracks, the pitch, is 1.6 µm. A CD is read by focusing a 780 nm
wavelength semiconductor laser through the bottom of the polycarbonate layer.
While CDs are significantly more durable than earlier audio formats, they are susceptible
to damage from daily usage and environmental factors. Pits are much closer to the label side
of a disc, so that defects and dirt on the clear side can be out of focus during playback. Discs
consequently suffer more damage because of defects such as scratches on the label side,
whereas clear-side scratches can be repaired by refilling them with plastic of similar index of
refraction, or by careful polishing.
The digital data on a CD begins at the center of the disc and proceeds outwards to the
edge, which allows adaptation to the different size formats available. Standard CDs are available
in two sizes. By far the most common is 120 mm in diameter, with a 74 or 80-minute audio
capacity and a 650 or 700 MB data capacity. 80 mm discs (“Mini CDs”) were originally designed
for CD singles and can hold up to 21 minutes of music or 184 MB of data but never really
became popular. Today nearly all singles are released on 120 mm CDs, which are called Maxi singles.
Audio CD
The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is
described in a document produced in 1980 by the format’s joint creators, Sony and Philips. The
document is known colloquially as the “Red Book” after the color of its cover. The format is a
two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate. Four-channel sound is an
allowed option within the Red Book format, but has never been implemented.
The selection of the sample rate was primarily based on the need to reproduce the
audible frequency range of 20 Hz to 20 kHz. The Nyquist–Shannon sampling theorem states that
a sampling rate of double the maximum frequency to be recorded is needed, resulting in a 40
kHz rate. The exact sampling rate of 44.1 kHz was inherited from a method of converting
digital audio into an analog video signal for storage on video tape, which was the most affordable
way to transfer data from the recording studio to the CD manufacturer at the time the CD
specification was being developed. The device that turns an analog audio signal into PCM
audio, which in turn is changed into an analog video signal is called a PCM adaptor.
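To make this arithmetic concrete, the short Python sketch below reproduces the calculation. Note that the NTSC and PAL field and line counts used for the PCM-adaptor derivation are the commonly cited figures and are assumptions, not values given in this lesson.

    # Why 44.1 kHz? A quick check of the numbers discussed above.
    # NOTE: the NTSC/PAL field and line counts below are the commonly
    # cited figures for the PCM adaptor method; they are assumptions,
    # not values taken from this lesson.

    max_audible_hz = 20_000            # upper limit of human hearing
    nyquist_rate = 2 * max_audible_hz  # minimum sampling rate: 40 kHz

    # PCM adaptor: 3 audio samples stored per usable video line
    ntsc_rate = 60 * 245 * 3   # 60 fields/s x 245 usable lines/field
    pal_rate = 50 * 294 * 3    # 50 fields/s x 294 usable lines/field

    print(nyquist_rate)  # 40000 -- the theoretical minimum
    print(ntsc_rate)     # 44100 -- the same value from either system
    print(pal_rate)      # 44100

Both video systems yield exactly 44,100 samples per second, which is why that otherwise odd-looking rate was adopted.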
The main parameters of the CD (taken from the September 1983 issue of the audio CD specification) include a scanning velocity of 1.2 m/s, a track pitch of 1.6 µm, and a program area of 86.05 cm², as used in the following calculation.
The program area is 86.05 cm² and the length of the recordable spiral is 86.05 cm² / 1.6
µm = 5.38 km. With a scanning speed of 1.2 m/s, the playing time is 74 minutes, or around 650
MB of data on a CD-ROM. If the disc diameter were only 115 mm, the maximum playing time would have been 68 minutes, i.e., six minutes less.
A disc with data packed slightly more densely is tolerated by most players (though some
old ones fail). Using a linear velocity of 1.2 m/s and a track pitch of 1.5 µm leads to a playing
time of 80 minutes, or a capacity of 700 MB. Even higher capacities on non-standard discs (up
to 99 minutes) are available at least as recordable, but generally the tighter the tracks are
squeezed the worse the compatibility.
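These capacity figures can be verified directly. The following minimal Python sketch reproduces the calculation using only the numbers given in the text (program area, track pitch, and scanning speed):

    # Reproducing the CD capacity arithmetic from the text.
    PROGRAM_AREA_M2 = 86.05e-4   # 86.05 cm^2 expressed in m^2
    SCAN_SPEED = 1.2             # linear velocity in m/s

    def playing_time_minutes(track_pitch_m):
        # spiral length = program area / track pitch
        spiral_length = PROGRAM_AREA_M2 / track_pitch_m  # metres
        return spiral_length / SCAN_SPEED / 60

    print(playing_time_minutes(1.6e-6))  # ~74.7 min (standard 650 MB disc)
    print(playing_time_minutes(1.5e-6))  # ~79.7 min (tighter 700 MB disc)

The 1.6 µm pitch gives a 5.38 km spiral and roughly 74 minutes; squeezing the pitch to 1.5 µm yields the 80-minute, 700 MB capacity described above.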
15.7.3 Data Structure
The smallest entity in a CD is called a frame. A frame consists of 33 bytes and contains
six complete 16-bit stereo samples (2 bytes × 2 channels × six samples equals 24 bytes). The
other nine bytes consist of eight Cross-Interleaved Reed-Solomon Coding error correction
bytes and one subcode byte, used for control and display. Each byte is translated into a 14-bit word using Eight-to-Fourteen Modulation (EFM), which alternates with 3-bit merging words. In total we have 33 × (14 + 3) = 561 bits. A 27-bit unique synchronization word is added, so that the
number of bits in a frame totals 588 (of which only 192 bits are music). These 588-bit frames
are in turn grouped into sectors. Each sector contains 98 frames, totaling 98 × 24 = 2352 bytes
of music.
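A short Python sketch, using only the figures above, confirms the frame and sector arithmetic:

    # Frame and sector arithmetic for Red Book audio, as described above.
    bytes_per_frame = 24 + 8 + 1          # audio + CIRC parity + subcode = 33
    bits_per_frame = 33 * (14 + 3) + 27   # EFM words + merging bits + sync
    frames_per_sector = 98
    audio_bytes_per_sector = frames_per_sector * 24  # music bytes only

    print(bytes_per_frame)          # 33
    print(bits_per_frame)           # 588 (only 192 of these are music)
    print(audio_bytes_per_sector)   # 2352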
The CD is played at a speed of 75 sectors per second, which results in 176,400 bytes per second. Divided by 2 channels and 2 bytes per sample, this gives a sample rate of 44,100 samples per second. For the Red Book stereo audio CD, the time format is commonly measured in minutes, seconds, and frames (mm:ss:ff), where one frame corresponds to one sector, or 1/75th of a second of stereo sound. Note that in this context the term “frame” is applied loosely in editing applications and does not denote the physical frame described above. In editing and extracting, the frame is the smallest addressable time interval for an audio CD, meaning that track start and end positions can only be defined in 1/75 second steps.
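Since one mm:ss:ff frame equals one sector, converting a time position to a sector count is simple arithmetic. The helper below is a minimal illustrative sketch; the function name is ours, not part of any standard library.

    # mm:ss:ff addressing: one frame = one sector = 1/75 s of stereo audio.
    SECTORS_PER_SECOND = 75

    def msf_to_sectors(mm, ss, ff):
        """Convert a Red Book mm:ss:ff position to a sector count."""
        return (mm * 60 + ss) * SECTORS_PER_SECOND + ff

    print(msf_to_sectors(3, 2, 15))   # position 03:02:15 -> 13665 sectors
    print(75 * 2352)                  # 176,400 audio bytes per second
    print(176_400 // (2 * 2))         # 44,100 samples/s per channel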
Logical structure
CD-Text
CD-Text is an extension of the Red Book specification for audio CD that allows for storage
of additional text information (e.g., album name, song name, and artist) on a standards-compliant
audio CD. The information is stored either in the lead-in area of the CD, where roughly five kilobytes of space are available, or in the subcode channels R to W on the disc, which can store about 31 megabytes.
CD + Graphics
Compact Disc + Graphics (CD+G) is a special audio compact disc that contains graphics
data in addition to the audio data on the disc. The disc can be played on a regular audio CD
player, but when played on a special CD+G player, can output a graphics signal (typically, the
CD+G player is hooked up to a television set or a computer monitor); these graphics are
almost exclusively used to display lyrics on a television set for karaoke performers to sing
along with.

CD + Extended Graphics
Compact Disc + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant of the Compact Disc + Graphics (CD+G) format. Like CD+G, CD+EG utilizes basic CD-ROM features to display text and video information in addition to the music being played. This extra data is stored in subcode channels R-W.

CD-MIDI
Compact Disc MIDI or CD-MIDI is a type of audio CD where sound is recorded in MIDI format, rather than the PCM format of Red Book audio CD. This provides much greater capacity in terms of playback duration, but MIDI playback is typically less realistic than PCM playback.

Video CD
Video CD (also known as VCD, View CD, or Compact Disc digital video) is a standard digital format for storing video on a Compact Disc. VCDs are playable in dedicated VCD players, most modern DVD-Video players, and some video game consoles. The VCD standard was created in 1993 by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard. Overall picture quality is intended to be comparable to VHS video, though VHS has twice as many scanlines (approximately 480 NTSC and 580 PAL) and therefore double the vertical resolution. Poorly compressed VCD video tends to be of lower quality than VHS video, but VCD exhibits block artifacts rather than analog noise.
Super Video CD
Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video
on standard compact discs. SVCD was intended as a successor to Video CD and an alternative
to DVD-Video, and falls somewhere between both in terms of technical capability and picture
quality. SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD.
One CD-R disc can hold up to 60 minutes of standard quality SVCD-format video. While
no specific limit on SVCD video length is mandated by the specification, one must lower the
video bitrate, and therefore quality, in order to accommodate very long videos. It is usually
difficult to fit much more than 100 minutes of video onto one SVCD without incurring significant
quality loss, and many hardware players are unable to play video with an instantaneous bitrate
lower than 300 to 600 kilobits per second.
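The trade-off between video length and bitrate can be estimated with the sketch below. The 700 MB usable-capacity figure is an assumption for illustration; actual SVCDs use Mode 2 sectors, so the usable space differs somewhat.

    # Illustrating the SVCD length/bitrate trade-off described above.
    # ASSUMPTION (not from the text): ~700 MB of usable disc capacity.
    DISC_BYTES = 700 * 1000 * 1000

    def max_bitrate_kbps(minutes):
        """Average total bitrate that fits `minutes` of video on the disc."""
        seconds = minutes * 60
        return DISC_BYTES * 8 / seconds / 1000

    print(round(max_bitrate_kbps(60)))   # ~1556 kbit/s -- standard quality
    print(round(max_bitrate_kbps(100)))  # ~933 kbit/s  -- noticeably lower

As the numbers show, stretching a disc from 60 to 100 minutes forces the average bitrate down by roughly 40 percent, which is why long SVCDs suffer visible quality loss.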
Photo CD
Photo CD is a system designed by Kodak for digitizing and storing photos on a CD. Launched in 1992, the discs were designed to hold nearly 100 high-quality images, scanned prints, and slides using a special proprietary encoding. Photo CD discs are defined in the Beige Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended to play on CD-i players, Photo CD players, and any computer with suitable software, irrespective of the operating system. The images can also be printed on photographic paper using dedicated Kodak equipment.

Enhanced CD
Enhanced CD, also known as CD Extra and CD Plus, is a certification mark of the Recording Industry Association of America for various technologies that combine audio and computer data for use in both compact disc and CD-ROM players.
The primary data formats for Enhanced CD discs are mixed mode (Yellow Book/Red Book), CD-i, hidden track, and multisession (Blue Book).

Recordable CD
Recordable compact discs, CD-Rs, are injection molded with a “blank” data spiral. A photosensitive dye is then
applied, after which the discs are metalized and lacquer coated. The write laser of the CD
recorder changes the color of the dye to allow the read laser of a standard CD player to see the
data as it would an injection molded compact disc. The resulting discs can be read by most (but
not all) CD-ROM drives and played in most (but not all) audio CD players.
CD-R recordings are designed to be permanent. Over time, however, the dye’s physical characteristics may change, causing read errors and data loss, until eventually the reading device can no longer recover the data with error-correction methods. The design life is from 20 to 100 years depending
on the quality of the discs, the quality of the writing drive, and storage conditions. However,
testing has demonstrated such degradation of some discs in as little as 18 months under
normal storage conditions. This process is known as CD rot. CD-Rs follow the Orange Book
standard.
Recordable Audio CD
The Recordable Audio CD is designed for use in consumer audio CD recorders, which employ the Serial Copy Management System (SCMS), an early form of digital rights management (DRM), to conform to the AHRA (Audio Home Recording Act). The Recordable Audio CD is typically somewhat more expensive than CD-R due to (a) lower volume and (b) a 3% AHRA royalty used to compensate the music industry for the making of a copy.
Re-Writable CD
CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write
laser in this case is used to heat and alter the properties (amorphous vs. crystalline) of the
alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in reflectivity
as a pressed CD or a CD-R, and so many earlier CD audio players cannot read CD-RW discs,
although later CD audio players and stand-alone DVD players can. CD-RWs follow the Orange
Book standard.
The chief advantage of these optical media is their large storage capacity.
15.8 Summary
The basic stages of a multimedia project are planning and costing, design and production,
testing and delivery.
Knowledge of hardware and software, as well as creativity and organizational skills are
essential for creating a high-quality multimedia project.
The process of making multimedia involves idea analysis, pre-testing, task planning,
development, and delivery.
Feedback loops and good communication between the design and the production efforts
are critical to the success of a project.
The four fundamental organizing structures are linear, non-linear, hierarchical, and
composite.
The user interface should be simple, user-friendly, and easy to navigate.
Answers to Check Your Progress Questions
2. Task Planning
3. Estimations
4. Milestones
5. Building a team
6. a. True
7. A Navigation Map
8. Production
9. Depth
10. Modal
PART B - (5 x 6 = 30 Marks)
Write short notes on any FIVE of the following, in about 250 words each.
PART C - (3 x 10 = 30 Marks)
Write an essay on any THREE of the following, in about 750 words each.
23. Explain the step-by-step procedure for setting up the working environment in Dreamweaver.