
SPCA 211

MASTER OF
COMPUTER APPLICATIONS

SECOND YEAR
FOURTH SEMESTER

CORE PAPER - XX

MULTIMEDIA SYSTEMS

INSTITUTE OF DISTANCE EDUCATION


UNIVERSITY OF MADRAS
MASTER OF COMPUTER APPLICATIONS
SECOND YEAR - FOURTH SEMESTER
CORE PAPER - XX : MULTIMEDIA SYSTEMS

WELCOME
Warm Greetings.

It is with great pleasure that we welcome you as a student of the Institute of Distance
Education, University of Madras. It is a proud moment for the Institute of Distance Education
as you are entering into a cafeteria system of learning process as envisaged by the University
Grants Commission. Yes, we have framed and introduced the Choice Based Credit
System (CBCS) in semester pattern from the academic year 2018-19. You are free to
choose courses, as per the Regulations, to attain the target of the total number of credits set
for each course and also each degree programme. What is a credit? To earn one credit in
a semester you have to spend 30 hours in the learning process. Each course has a weightage
in terms of credits. Credits are assigned by taking into account the level of its subject content.
For instance, if one particular course or paper has 4 credits, then you have to spend 120
hours of self-learning in a semester. You are advised to plan a strategy to devote hours of
self-study to the learning process. You will be assessed periodically by means of tests,
assignments and quizzes, either in the classroom, laboratory or field work. In the case of PG
(UG) programmes, Continuous Internal Assessment accounts for 20 (25) per cent and the End
Semester University Examination for 80 (75) per cent of the maximum score for a course / paper.
The theory paper in the end semester examination will bring out your various skills: namely basic
knowledge of the subject, memory recall, application, analysis, comprehension and
descriptive writing. While training you in conducting experiments, analysing your performance
during laboratory work, and observing the outcomes to bring out the truth from the experiment,
we keep these skills in mind, and we measure them in the end semester examination. You
will be guided by well experienced faculty.

I invite you to join the CBCS in Semester System to gain rich knowledge leisurely at
your will and wish. Choose the right courses at the right times so as to erect your flag of
success. We always encourage and enlighten you to excel and empower. We are the cross
bearers to make you a torch bearer with a bright future.

With best wishes from mind and heart,

DIRECTOR


COURSE WRITER

Dr. R. Latha
Professor & Head,
Department of Computer Science and Applications,
St.Peter's Institute of Higher Education & Research,
Avadi, Chennai-600 054.

Dr. R. Parameswari
Associate Professor,
Department of Computer Science,
School of Computing Sciences,
Vels Institute of Science, Technology & Advanced Studies,
Pallavaram, Chennai - 600 117.

EDITING AND CO-ORDINATION

Dr. S. Sasikala
Associate Professor in Computer Science
Institute of Distance Education
University of Madras
Chepauk, Chennai - 600 005.

© UNIVERSITY OF MADRAS, CHENNAI 600 005.

MASTER OF COMPUTER APPLICATIONS

SECOND YEAR

FOURTH SEMESTER

Core Paper - XX

MULTIMEDIA SYSTEMS

SYLLABUS
Objective of the course

This course introduces the basic concepts of Multimedia Systems.

Unit 1: Introductory Concepts: Multimedia – Definitions, CD-ROM and the Multimedia
Highway, Uses of Multimedia, Introduction to making multimedia – The stages of a project,
the requirements to make good multimedia, Multimedia skills and training, Training
opportunities in Multimedia. Motivation for multimedia usage, Frequency domain analysis,
Application Domain.

Unit 2: Multimedia Hardware and Software: Multimedia Hardware – Macintosh and
Windows production platforms, Hardware peripherals – Connections, Memory and storage
devices, Media software – Basic tools, Making instant multimedia, Multimedia software
and Authoring tools, Production standards.

Unit 3: Multimedia – Making It Work – Multimedia building blocks – Text, Sound, Images,
Animation and Video, Digitization of audio and video objects, Data compression: Different
algorithms for text, audio, video and images, Working exposure on tools like
Dreamweaver, Flash, Photoshop, etc.

Unit 4: Multimedia and the Internet: History, Internetworking, Connections, Internet
Services, The World Wide Web, Tools for the WWW – Web servers, Web browsers,
Web page makers and editors, Plug-ins and delivery vehicles, HTML, VRML, Designing for
the WWW – Working on the Web, Multimedia Applications – Media communication, Media
consumption, Media entertainment, Media games.

Unit 5: Multimedia – Looking towards the Future: Digital communication and new media,
Interactive television, Digital broadcasting, Digital radio, Multimedia conferencing,
Assembling and delivering a project – planning and costing, Designing and producing, Content
and talent, Delivering, CD-ROM technology.

Recommended Texts:

1. S. Heath, 1999, Multimedia & Communication Systems, Focal Press, UK.

2. T. Vaughan, 1999, Multimedia: Making It Work, 4th Edition, Tata McGraw Hill, New Delhi.

3. K. Andleigh and K. Thakkar, 2000, Multimedia System Design, PHI, New Delhi.

Reference Books

1) Keyes, 2000, Multimedia Handbook, TMH, New Delhi.

2) R. Steinmetz and K. Nahrstedt, 2001, Multimedia: Computing, Communications &
Applications, Pearson, Delhi.

3) S. Rimmer, 2000, Advanced Multimedia Programming, PHI, New Delhi.

Website and e-Learning Source:

1) http://www.cikon.de/Text_EN/Multimed.html

MASTER OF COMPUTER APPLICATIONS

SECOND YEAR

FOURTH SEMESTER

Core Paper - XX

MULTIMEDIA SYSTEMS

SCHEME OF LESSONS

Sl. No. Title

1. An Overview of Multimedia
2. Introduction to Making Multimedia
3. Multimedia Hardware and Software
4. Hardware Peripherals in Multimedia System
5. Basic Software Tools for Multimedia Objects
6. Multimedia Elements – Text and Sound
7. Multimedia Elements – Images, Animation and Video
8. Compression Techniques in Multimedia Systems
9. Working Exposure on Tools
10. The Internet and Multimedia
11. World Wide Web (WWW)
12. Designing for the WWW
13. Multimedia in Future
14. Multimedia Technologies
15. Stages of Multimedia Application Development

LESSON 1
AN OVERVIEW OF MULTIMEDIA

Structure
1.1 Introduction

1.2 Learning Objectives

1.3 Multimedia

1.4 History of Multimedia

1.5 CD-ROM and Multimedia Highway

1.6 Uses of Multimedia

1.7 Multimedia Applications

1.8 Advantages of using Multimedia

1.9 Disadvantages of using Multimedia

1.10 Summary

1.11 Check Your Answers

1.12 Model Questions

1.1 Introduction

Multimedia has become an inevitable part of any presentation. It has found a variety of
applications right from entertainment to education. The evolution of internet has also increased
the demand for multimedia content.

As the name suggests, multimedia is a set of more than one media element used to
produce a concrete and more structured way of communication. In other words, multimedia is
a simultaneous usage of data from different sources. These sources in multimedia are known
as media elements. With growing and very fast changing information technology, Multimedia
has become a crucial part of computer world. Its importance has been realized in almost all
walks of life, be it education, cinema, advertising, fashion and what not. Throughout the
1960s, 1970s and 1980s, computers were largely restricted to dealing with two main types of
data - words and numbers. But the cutting edge of information technology introduced faster
systems capable of handling graphics, audio, animation and video. And the entire world was
taken aback by the power of multimedia.

1.2 Learning Objectives

In this lesson, the preliminary concepts of Multimedia and its various benefits and
applications are discussed. After going through this chapter, the reader
will be able to:

i) Define Multimedia

ii) List the Elements of Multimedia

iii) Enumerate the Different Applications of Multimedia

iv) Uses of Multimedia, Advantages and Disadvantages of Multimedia

1.3 Multimedia

Fig.1.1 Multimedia

 Multi: more than one

 Medium (singular): middle, intermediary, mean

 Media (plural): means for conveying information



 Mass Media: Media in the press, newspaper, radio and TV context.

 Transmission Media: Media in communications: cables, satellite, network

 Storage Media: Media in computer storage: floppy, CD, DVD, HD, USB

 Interactive Media: Media in HCI context: text, image, audio, video, CG

Definitions

 Multimedia is a media that uses multiple forms of information contents and information
processing.

 Multimedia means that computer information can be represented through audio, video,
and animation in addition to traditional media (i.e., text, graphics/drawings, and images).

 Multimedia is the field concerned with the computer controlled integration of text, graphics,
drawings, still and moving images (Video), animation, audio, and any other media where,
every type of information can be represented, stored, transmitted and processed digitally.

 Multimedia: refers to various information forms such as text, image, audio, video, graphics,
and animation in a variety of application environments.

 Multimedia can be referred to as a product, application, technology, platform, board, device,
network computer, system, classroom, school, etc. The word “multimedia” is widely
used to mean many different things.

Multimedia in terms of Computing

Multimedia (Fig. 1.1), as represented in computer-based technologies and applications,
has four fundamental attributes:

1. Digitized Computing: All media, including audio/video, are represented in digital format.

2. Distributed Computing: The information conveyed is remote, either pre-produced and
stored or produced in real-time, and distributed over networks.

3. Interactive Computing: It is possible to affect the information received, and to send one's
own information, in a non-trivial way beyond start, stop and fast forward.

4. Integrated Computing: The media are treated in a uniform way, presented in an organized
way, but are possible to manipulate independently.

Hyper Text and Hyper Media

Hypertext is text which contains links to other texts. The term was coined by Ted
Nelson around 1965 (Fig. 1.2).

Fig. 1.2 Hyper Text

Hypermedia is not constrained to be text-based (Fig. 1.3). It can include other media,
e.g., graphics, images, and especially continuous media - sound and video.

Fig. 1.3 Hyper Media



Examples of Hypermedia Applications


1. The World Wide Web (WWW) is a clear example of a hypermedia application.

2. Power Point Presentation

3. Adobe Acrobat (or other PDF software)

4. Adobe Flash

1.3.1 Elements of Multimedia System

Multimedia means that computer information can be represented through audio, graphics,
image, video and animation in addition to traditional media (text and graphics). Hypermedia
can be considered as one type of multimedia application.

1. Text

2. Graphics

3. Animation

4. Video

5. Audio

Following are the major components of a multimedia computer system:

 Text

— It contains alphanumeric and some other special characters (Fig. 1.4). Keyboard is usually
used for input of text; however, there are some internal (inbuilt) features to include such
text.

— Characters are used to create words, sentences, and paragraphs.



Fig. 1.4 Text

 Graphics

— It is technology to generate, represent, process, manipulate, and display pictures (Fig.


1.5). It is one of the most important components of multimedia application. The development
of graphics is supported by different software.

— A digital representation of non-text information, such as a drawing, chart, or photograph.

Fig. 1.5 Graphics

 Animation

- Computer animation is a modern technology, which helps in creating, developing,


sequencing, and displaying a set of images (technically known as ‘frames’). Animation
gives visual effects or motion very similar to that of a video file.

- Flipping through a series of still images. It is a series of graphics that create an illusion of
motion (Fig. 1.6).

Fig. 1.6 Animation

 Audio

- This technology records, synthesizes, and plays audio (sound) (Fig. 1.7). There are many
learning courses and different instructions that can be delivered through this medium
appropriately.

- Music, speech, or any other sound.

Fig. 1.7 Audio

 Video

— This technology records, synthesizes, and displays images (known as frames) in a
sequence, at a fixed speed, that makes the creation appear as a moving picture. In order to
watch a video without any interruption, the video device must
display 25 to 30 frames/second.

— Photographic images that are played back at speeds of 15 to 30 frames per second and
that provide the appearance of full motion (Fig. 1.8).

Fig. 1.8 Video
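To see why these frame rates matter, here is a rough back-of-the-envelope calculation, in Python, of the raw data rate of uncompressed digital video. It is only a sketch: the frame size (640 × 480), colour depth (24-bit) and frame rate (30 fps) are assumed example values, not figures fixed by the text.

# Raw (uncompressed) data rate of digital video, with assumed example figures.
width, height = 640, 480        # frame size in pixels (assumed)
bytes_per_pixel = 3             # 24-bit RGB colour (assumed)
frames_per_second = 30          # upper end of the 25-30 fps range above

bytes_per_frame = width * height * bytes_per_pixel       # 921,600 bytes
bytes_per_second = bytes_per_frame * frames_per_second   # ~27.6 million bytes

print(f"One frame : {bytes_per_frame / 1024:.0f} KB")       # 900 KB
print(f"One second: {bytes_per_second / 2**20:.1f} MB")     # ~26.4 MB

Roughly 26 MB for every second of raw video is why the compression techniques discussed later (Lesson 8) are essential for storing and delivering video.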

Categories of Multimedia

Multimedia may be broadly divided into linear and non-linear.

 Linear content progresses without any navigation control for the viewer, such as a
cinema presentation.

 Non-linear content offers user interactivity to control progress as used with a computer
game or used in self-paced computer based training.

 Non-linear content is also known as hypermedia content. Multimedia presentations can


be live or recorded. A recorded presentation may allow interactivity via a navigation system.
A live multimedia presentation may allow interactivity via interaction with the presenter or
performer.

Significant Features of Multimedia Computer System


Following are the major features of a multimedia computer system:-

 It has a fast Central Processing Unit (CPU), to process large amount of data.

 It has a huge storage capacity and a huge memory power that helps in running heavy
data programs.

 It has a high capacity graphic card that helps in displaying graphics, animation, video, etc.

 The sound system makes it easy to listen to audio.

With all these features (discussed above), a computer system is known as a high-end
multimedia computer system.

However, all the features listed above are not essentially required for every multimedia
computer system, but rather the features of a multimedia computer system are configured as
per the needs of the respective user.

Representative Dimensions of media

Media are divided into two types in respect to time in their representation space:

1. Time independent (discrete):

Information is expressed only by its individual value. E.g.: text, image, etc.

2. Time dependent (continuous):

Information is expressed not only by its individual value, but also by the time of its
occurrence. E.g.: sound and video. A multimedia system is defined by the computer-controlled,
integrated production, manipulation, presentation, storage and communication of independent
information, which is encoded through at least one continuous and one discrete medium.

1.3.2 Classifications of Media


1. The Perception media

2. The Representation Media

3. The Presentation Media

4. The Storage media

5. The Transmission media

6. The Information Exchange media



 Perception Media

 Perception media help humans to sense their environment. The central question is
how humans perceive information in a computer environment. The answer is through
seeing and hearing.

 Seeing:

o For the perception of information through seeing, the usual visual media, such as text,
image and video, are used.

 Hearing:

o For the perception of information through hearing, media such as music, noise and speech
are used.

 Representation Media

 Representation media are defined by the internal computer representation of information.
The central question is: how is the information coded in the computer? The answer is that
various formats are used to represent media information in the computer:

i. Text: characters are coded in ASCII code.

ii. Graphics: coded according to the CEPT or CAPTAIN videotex standard.

iii. Images: can be coded in JPEG format.

iv. Audio/video sequences: can be coded in different TV standard formats (PAL, NTSC,
SECAM) and stored in the computer in MPEG format.
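As a small illustration of representation media, the following Python sketch shows how text is held internally as ASCII codes; the particular characters are arbitrary examples:

# Text as representation media: each character is stored as its ASCII code.
ch = 'A'
print(ord(ch))                 # 65 - the ASCII code of 'A'

raw = 'Hi'.encode('ascii')     # the two bytes 72 ('H') and 105 ('i')
print(list(raw))               # [72, 105]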

 Presentation Media

 Presentation media refers to the tools and devices for the input and output of the
information, through which the information is delivered by the computer and is introduced
to the computer.

 Output media: Paper, Screen and Speaker



 Input Media: Keyboard, Mouse, Camera, Microphone.

 Storage Media

Storage Media refers to the data carrier which enables storage of information. The
information will be stored in hard disk, CD-ROM etc.

 Transmission Media

Transmission media are the different information carriers that enable continuous data
transmission. The information can be transmitted through co-axial cable, fibre optics, as
well as over the air.

 Information Exchange Media

 Information exchange media include all information carriers for transmission,
i.e. all storage and transmission media. The information exchange between different
places is carried out by both storage and transmission media. E.g. an electronic
mailing system.

1.3.3 Properties of Multimedia System

Not every arbitrary combination of media justifies the use of the term multimedia.

Combination of media:

A simple text processing program with an incorporated image combines text and image.
However, one should talk of multimedia only when both continuous and discrete media are
utilized; therefore a text processing program with incorporated images is not, by itself, a
multimedia application.

 Computer support integrated

The computer is an ideal tool for multimedia applications, as it integrates many devices together.

 Independence:

An important aspect of different media is their level of independence from each other. In
general there is a request for independence of different media, but multimedia may require
several levels of independence. E.g. a computer-controlled video recorder stores audio
and video information. There is an inherently tight connection between the two types of media:
both media are coupled together through the common storage medium of the tape. On the other
hand, for the purpose of presentation, the combination of DAT (digital audio tape recorder)
signals and computer text satisfies the request for media independence.

Global structure of Multimedia System:


1. Application domain

2. System domain

3. Device domain

1. Application domain provides functions to the user to develop and present multimedia
projects. This includes software tools, and multimedia projects development methodology.

2. System Domain includes all supports for using the function of the device domain, e.g.
operating system, communication systems (networking) and database systems.

3. Device domain provides basic concepts and skills for processing various multimedia
elements and for handling physical device.

Check your Progress


1. The term ___________ generally means using some combination of text, graphics,
animation, video, music, voice, and sound effects to communicate.

a) MIDI

b) Hyperlink

c) WYSIWYG

d) Multimedia

2. Video consists of a sequence of

a) Frames

b) Signals

c) Packets

d) Slots

3. Images that are available without copyright restrictions are called ____________

4. In a multimedia project, a storyboard details the text, graphics, audio, video, animation,
interactivity, and other that should be used in each screen of the project: Say TRUE or
FALSE?

5. Many bitmapped images in a sequence is known as

a) GIF animation.

b) JPG animation.

c) TIF animation.

d) Tweening.

1.4 History of Multimedia

One of the earliest and best-known examples of multimedia was the video game Pong.
Developed in 1972 by Nolan Bushnell (the founder of a then new company called Atari), the
game consisted of two simple paddles that batted a square “ball” back and forth across the
screen, like tennis. It started as an arcade game, and eventually ended up in many homes.

A New Revolution

In 1976, another revolution was about to start as friends Steve Jobs and Steve
Wozniak founded a startup company called Apple Computer. A year later they unveiled the
Apple II, the first computer to use color graphics.

The computer revolution moved quickly: 1981 saw IBM’s first PC, and in 1984 Apple
released the Macintosh, the first computer system to use a Graphical User Interface (GUI).
The Macintosh also bore the first mouse, which would forever change the way people interact
with computers. In 1985, Microsoft released the first version of its Windows operating system.
That same year, Commodore released the Amiga, a machine which many experts consider to
be the first multimedia computer due to its advanced graphics processing power and innovative
user interface. The Amiga did not fare well over the years, though, and Windows has become
the standard for desktop computing.

Innovations

Both the Windows and the Macintosh operating
systems paved the way for the lightning-fast developments in multimedia that were to come.
Since both Windows and Mac OS handle graphics and sound – something that was previously
handled by individual software applications – developers are able to create programs that use
multimedia to more powerful effect.

One company that has played an important role in multimedia from its very inception is
Macromedia (formerly called MacroMind). In 1988, Macromedia released its landmark Director
program, which allowed everyday computer users to create stunning, interactive multimedia
presentations. Today, Macromedia Flash drives most of the animation and multimedia you see
on the Internet, while Director is still used to craft high-end interactive productions. Each new
development of each passing year is absorbed into next year’s technology, making the
multimedia experience, better, faster, and more interesting.

1.5 CD-ROM and Multimedia Highway


 Compact Disc-Read Only Memory (CD-ROM) is a cost effective distribution medium for
multimedia projects.

 It can contain unique mixes of images, sounds, text, video, and animations controlled by
an authoring system to provide unlimited user interaction.

 Digital Versatile Disc (DVD) technology has come into usage which has increased capacity
than the CD-ROM.

 Now that telecommunications are global, information can be received online as distributed
resources on a data highway, where payment is there to acquire and use multimedia
based information.

1.6 Where to use multimedia?


Usages of Multimedia Applications:

1. Education

2. Training

3. Entertainment

4. Advertisement

5. Presentation

6. Business Communication

7. Web page Design



 Multimedia in business

 Business applications for multimedia include presentations, training, marketing, advertising,


product demos, databases, catalogues and networked communications.

 Multimedia is used in voice mail and video conferencing.

 Multimedia is used in training programs.

 Mechanics learn to repair engines through simulation.

 Salespeople learn about the products online.

 Pilots practice before spooling up for the real thing.

 Multimedia in Schools

 Multimedia provides radical changes in the teaching process, as smart students discover
they can go beyond the limits of traditional teaching methods.

 Teachers may become more like guides and mentors along a learning path, not the
primary providers of information and understanding.

 Medical multimedia applications provide physicians with over 100 case presentations and
give cardiologists, radiologists, medical students, and fellows an opportunity for in-depth
learning of new clinical techniques.

 Adults and children learn well by exploration and discovery.

 Multimedia at home

 From gardening to cooking to home design and remodeling, multimedia has entered the
home.

 Multimedia in public places

 In hotels, train stations, shopping malls, museums and grocery stores, multimedia will
become available at stand-alone terminals or kiosks to provide information and help.

 Such installations reduce demand on traditional information booths and personnel and
they can work round the clock, when live help is off duty.

 Supermarket kiosks provide services ranging from meal planning to coupons.



 Hotel kiosks list nearby restaurants, maps of the city, airline schedules, and provide
guest services such as automated checkouts.

1.6.1 Usage of Multimedia


1. In education, multimedia can be used as a source of information. Students can search
encyclopedias such as Encarta, which provide facts on a variety of different topics using
multimedia presentations.

2. Teachers can use multimedia presentations to make lessons more interesting by using
animations to highlight or demonstrate key points.

3. A multimedia presentation can also make it easier for pupils to read text rather than trying
to read a teacher’s writing on the board.

4. Programs which show pictures and text whilst children are reading a story can help them
learn to read; these too are a form of multimedia presentation.

5. Multimedia is used for advertising and selling products on the Internet.

6. Some businesses use multimedia for training where CDROMs or on-line tutorials allow
staff to learn at their own speed, and at a suitable time to the staff and the company.

7. Another benefit is that the company does not have to pay the additional expenses of an
employee attending a course away from the workplace.

8. People use the Internet for a wide range of reasons, including shopping and finding out
about their hobbies.

9. The Internet has many multimedia elements embedded in web pages and web browsers
support a variety of multimedia formats.

10. Many computer games use sound tracks, 3D graphics and video clips.

1.7 Multimedia Applications

Let us now see the different fields where multimedia is applied. The fields are described
in brief below:

1. Presentation

With the help of multimedia, presentation can be made effective.

2. E-book

Today, books are digitized and easily available on the Internet.

3. Digital Library

The need to be physically present at a library is no more necessary. Libraries can be


accessed from the Internet also. Digitization has helped libraries to come to this level of
development.

4. E-learning

Today, most of the institutions (public as well as private both) are using such technology
to educate people.

5. Movie making

Most of the special effects that are seen in movies are possible only because of multimedia
technology.

6. Video games

Video games are one of the most interesting creations of multimedia technology. Video
games fascinate not only the children but adults too.

7. Animated films

Along with video games, animated film is another great source of entertainment for
children.

8. Multimedia conferencing

People can arrange personal as well as business meetings online with the help of
multimedia conferencing technology.

9. E-shopping

Multimedia technology has created a virtual arena for the e-commerce.

1.8 Advantages of using Multimedia


1. It is very user-friendly. It doesn’t take much energy out of the user, in the sense that the
user can sit and watch the presentation; the user can read the text and hear the audio.

2. It is multi sensorial. It uses a lot of the user’s senses while making use of multimedia, for
example hearing, seeing and talking.

3. It is integrated and interactive. All the different mediums are integrated through the
digitization process. Interactivity is heightened by the possibility of easy feedback.

4. It is flexible. Being digital, this media can easily be changed to fit different situations and
audiences.

5. It can be used for a wide variety of audiences, ranging from one person to a whole group.

1.9 Disadvantages of using Multimedia


1. Information overload. Because it is so easy to use, it can contain too much information at
once.

2. It takes time to compile. Even though it is flexible, it takes time to put the original draft
together.

3. It can be expensive. Multimedia makes use of a wide range of resources, which can
cost a large amount of money.

4. Too much makes it impractical. Large files like video and audio have an effect on the
time it takes for your presentation to load. Adding too much can mean that you have to use a
larger computer to store the files.

Check your Progress


6. Which one of the following is the characteristic of a multimedia system?

a) high storage

b) high data rates



c) both high storage and high data rates

d) none of the above

7. One of the disadvantages of multimedia is:

a) cost

b) adaptability

c) usability

d) relativity

8. A graphic image file name is tree.eps. This file is a bitmap image: Say TRUE or FALSE.

9. Multimedia files stored on a remote server are delivered to a client across the network
using a technique known as :

a) Download

b) Streaming

c) Flowing

d) Leaking

10. Which of these is not likely to be the responsibility of a multimedia project?

(a) Create interfaces

(b) Ensure the visual consistency of the project

(c) Structure content

(d) Create budgets and timelines for the project

(e) Select media types for content.

1.10 Summary
 Multimedia is simply multiple forms of media integrated together. Media can be text,
graphics, audio, animation, video, data, etc.

 Hypermedia can be considered as one of the multimedia applications.



 Multimedia consists of five different elements.

 Applications of multimedia are in different fields.

 Multimedia application development is the creation of exciting and innovative multimedia
systems that communicate information customized to the user in a non-linear interactive format.

 Multimedia is mostly used in the entertainment industry, especially to develop special


effects in movies and animation of cartoon characters.

1.11 Check Your Answers


1. a) Multimedia

2. a) Frames

3. Clipart

4. True

5. a) GIF animation

6. c) both high storage and high data rates

7. a) Cost

8. False

9. b) Streaming

10. d) Create budgets and timelines for the project

1.12 Model Questions


1. What is multimedia?

2. List the basic elements of multimedia.

3. What are the types of multimedia? List it.

4. Define hypertext and hypermedia.

5. What is multimedia Highway?

6. List out the applications of multimedia.

7. Describe about the multimedia applications in different fields.



LESSON 2
INTRODUCTION TO MAKING MULTIMEDIA

Structure
2.1 Introduction

2.2 Learning Objectives

2.3 Stages of Multimedia Project

2.4 Multimedia Skills and Training

2.5 Training Opportunities in Multimedia

2.6 Motivation for Multimedia usage

2.7 Frequency Domain Analysis

2.8 Application Domain

2.9 Summary

2.10 Check Your Answers

2.11 Model Questions

2.1 Introduction

The basic stages of a multimedia project are planning and costing, design and production,
testing and delivery. Knowledge of hardware and software, as well as creativity and organizational
skills are essential for creating a high-quality multimedia project. In any project, including
multimedia, team building activities improve productivity by fostering communication and a
work culture that helps its members work together. Motivation is one of the primary factors that
influence the effectiveness of instruction. Motivation provides an opportunity to incorporate
many motivational factors. Motivating a student means the students is excited and will maintain
the interest in the activity or subject. Frequency domain analysis replaces the measured signal
with a group of sinusoids which, when added together, produce a waveform equivalent to the
original. The relative amplitudes, frequencies, and phases of the sinusoids are examined.

2.2 Learning Objectives


At the end of the lesson, the learner will be able to

 Know the phases of Multimedia production

 Know the team members in Multimedia development

 Understand the training opportunities in multimedia

 Learn motivation for multimedia usage of different applications

 Understand the concept of frequency domain analysis

 Learn the global structure of multimedia

2.3 Stages of Multimedia Project

A multimedia program should go through various multimedia production phases. There


are three main stages of a multimedia project:

1. Pre-production: The process before producing the multimedia project.

2. Production: The process in which the multimedia project is produced.

3. Post-production: The process after the production of the multimedia project.

These stages are sequential. Before beginning any work, everybody involved in the project
should agree on what is to be done and why. Lack of agreement can create misunderstandings,
which can have grim effects on the production process. Initial agreements therefore give a
reference point for subsequent decisions and assessments. After the clarification of why, what
the multimedia product has to do in order to fulfill its purpose is decided. The “why” and “what”
determine all of the “how” decisions, including storyboards, flow charts, media content, etc.

 Pre-Production

Idea or Motivation

During the initial “why” phase of production, the first question the production team asks is
why you want to develop a multimedia project:

 Is the idea marketable and profitable?

 Is multimedia the best option, or would a print product be more effective?

Product Concept and Project Goals

It takes several brainstorming sessions to come up with an idea. Then the production
team decides what the product needs to accomplish in the market. It should take into account
what information and functions need to be provided to meet the desired goals. Activities such as
developing a planning document, interviewing the client and building specifications for production
help in doing so.

Target Audience

The production team thinks about target age groups, and how it affects the nature of the
product. It is imperative to consider the background of target customers and the types of
references that will be fully understood. It is also important to think about any special interest
groups to which the project might be targeted towards, and the sort of information those groups
might find important.

Delivery Medium and Authoring Tools

The production team decides the medium through which the information reaches the
audience. The information medium can be determined on the basis of what types of equipment
the audience have and what obstacles must be overcome. Web, DVDs and CD-ROMs are
some of the common delivery mediums. The production team also ascertains what authoring
tools should be used in the project, and which media elements – graphics, audio, video, text,
animation, etc. – those tools must handle.

Planning

Planning is the key to the success of most business endeavors, and this is definitely true
in multimedia, because a lack of planning in the early processes of multimedia can prove costly
later. The production team works together and plans how the project will appear and how far it
will be successful in delivering the desired information. There is a saying, “If you fail to plan,
you are planning to fail.”

Group discussions take place for strategic planning and the common points of discussions
are given below:

 What do you require for the multimedia project?

 How long will each task take?

 Who is going to do the work?

 How much will the product cost?

Planning also includes creating and finalizing the flowchart and resource organization, in
which the product’s content is arranged into groups. It also includes the timeline, content list,
storyboard, finalizing the functional specifications and work assignments. Detailed timelines
are created and major milestones are established for the difficult phases of the project. The
work is then distributed among various roles such as designers, graphic artists, programmers,
animators, audiographers, videographers, and permission specialists.

 Production

In the production stage all components of planning come into effect. If pre-production
was done properly, all individuals will carry out their assigned work according to the plan.
During this phase graphic artists, instructional designers, animators, audiographers and
videographers begin to create artwork, animation, scripts, video, audio and interface. The
production phase runs easily if the project manager has distributed responsibilities to the right
individuals and created a practical and achievable production schedule. Given below are some
of the things that people involved in production have to do:

Scriptwriting

The scripts for the text, transitions, audio narrations, voice-overs and video are written.
Existing material also needs to be rewritten and reorganized for an electronic medium. Then
the written material is edited for readability, grammar and consistency.

Art

Illustrations, graphics, buttons, and icons are created using the prototype screens as a
guide. Existing photographs, illustrations, and graphics are digitized for use in an electronic

medium. Electronically generated art as well as digitized art must be prepped for use; number
of colors, palettes, resolution, format, and size are addressed.

3D Modeling and Animation

The 3D artwork is created, rendered, and then prepared for use in the authoring tool. The
3D animations require their own storyboards and schedules.

Authoring

All the pieces come together in the authoring tool. Functionality is programmed, and 2D
animation is developed. From here, the final working product is created. Every word on the
screen is proofread and checked for consistency of formatting. In addition, the proofreader
reviews all video and audio against the edited scripts.

Shooting, Recording and Digitizing Video

The edited scripts are used to plan the budget, performers and time schedules, and
then the shoot is scheduled, followed by recording.

Quality Control

Quality control goes on throughout the process. The storyboards are helpful for checking
the sequencing. The final step checks should be done for the overall content functionality and
usability of the product. The main goal of production is to make the next stage, post production,
run smoothly and flawlessly.

 Post-Production

After the production of the multimedia project, post-production technicalities should be


addressed to produce a perfect and error free project. It is one of the most fundamental of all
stages of production.

The stage of post-production involves:



Testing

The product is tested on multiple computers and monitors. It is imperative to evaluate,
test and revise the product to ensure its quality and success.

Mastering

Mastering can be as simple as writing a CD-ROM or floppy disk. Or it can be as complex


as sending the files to a service that will create a pre-master from which the master is made.

Archiving and Duplication

The original files, including audio, video, and the native software formats, are archived
for future upgrades or revisions. The duplicates are created from the original and packaged
accordingly.

Marketing and Distribution

Marketing is significant to the success of a product. The survival of a company and its
products depends greatly on the product reaching the maximum number of audience. Then
comes the final step in the process which is distribution of the multimedia project.

Good Multimedia
 Many multimedia systems are too passive- users click and watch

 For fully interactive systems, designers need clear picture of what happens as user interacts

 Adaptive systems modify themselves based on user input (intelligent tutors)

2.4 Multimedia Skills and Training

All through the creation and development of a multimedia project, the team members
must communicate with each other on a constant basis. They must also share same goals and
consistency in the design of the end product.

Depending upon the size of a project, one specialist might be required to play more than
one role, or the roles might be extended to different departments. Every specialist team member

is not only required to have an extensive background in their fields but also be a fast learner
capable of picking up new skills. Knowledge and experience in other fields might be an added
advantage.

Every team member plays a significant role in the design, development and production
of a multimedia project.

Team members
A multimedia team consists of the following members:

 Project manager

 Multimedia designer

 Interface designer

 Multimedia programmer

 Computer programmers

 Writer

 Subject matter expert

 Audio specialist

 Video specialist

 Producer for the Web

 Permission specialist

Project Manager

The project manager is responsible for:

 The overall development, implementation, and day-to-day operations of the project.

 The design and management of the project.

 Understanding the strengths and limitations of hardware and software.

 Making schedules.

 Deciding the budget of the project.

 Interacting with the team and clients.

 Providing resolutions to development and production problems.

 Motivating people; the manager should also be detail oriented.

Multimedia designer
– This team consists of graphics designers, illustrators, animators, and image processing
specialists, who deal with visuals, thereby making the project appealing and aesthetic.
The team also includes:

 Instructional designers, who make sure that the subject matter is presented clearly
for the target audience.

 Interface designers, who devise the navigational pathways and content maps.

 Information designers, who structure content, determine user pathways and
feedback, and select presentation media.

Interface Designer
An interface designer is responsible for:

 Creating a software device that organizes content. It allows users to access or modify
content, and presents that content on the screen.

 Building a user-friendly interface.

Multimedia Writer
A multimedia writer is responsible for:

 Creating characters, actions, point of view, and interactivity.

 Writing proposals and test screens.

 Scripting voice-overs and actors’ narrations.



Video Specialist
A video specialist needs to understand:

 The delivery of video files on CD, DVD, or the Web.

 How to shoot quality video.

 How to transfer the video footage to a computer.

 How to edit the footage down to a final product using digital non-linear editing system
(NLE).

Audio Specialist
An audio specialist is responsible for:

 Locating and selecting suitable music talent.

 Scheduling recording sessions.

 Digitizing and editing recorded material into computer files.

Multimedia Programmer
A multimedia programmer is responsible for:

 Locating audio/video resources.

 Selecting suitable audio/video clips.

 Creating audio/video clips.

 Interacting with project managers and instructional designers.

 Participating in the design process.

 Working on storyboard and uses it as a guideline.

 Finding out problems, solving them and fixing bugs.

 Writing understandable, simple and reusable code.

 Liaising with designers

 Integrating all the multimedia elements into a seamless project, using authoring systems
or a programming language.

 Managing timings, transitions and record keeping.



2.5 Training Opportunities in Multimedia

 Business: Multimedia designers can be used in business application in a variety of ways


like presentations, training, marketing, advertising, product demos, databases, catalogues
and networked communications. They can be successfully engaged in video conferencing
also.

 Advertising: Imaginative and attractive advertisements can be made with the combination
of text, pictures, audio and video. Multimedia designers have a big role in creation of
advertisements. A product is well received by a customer if it is supported by a good
multimedia advertisement campaign.

 Gaming and Graphic Design: These perhaps make the maximum use of multimedia.
No computer game is complete without elaborate computer graphics, be it an arcade
game, a strategy-based game or a sports game. A computer game with good graphics is
more enticing to play than a game with less or poor graphics. Multimedia designers have
a great role in making a game successful.

 Product Design: Multimedia can be used effectively for designing a product. First its
prototype can be made before actually making the product. Multimedia programmers
can be employed in this work.

 Education and Training: This is perhaps the need of the hour for multimedia.
Topics which are difficult to understand by reading text can be made simple with the
help of multimedia. A time is coming when multimedia lessons will take the place of
classroom teaching. Students can repeat a lesson as many times as needed until they
understand the concept. Multimedia designers have a big role in all this work.

 Leisure: Multimedia can also be used for entertainment. Most of the cartoon films are
made with the help of multimedia. It is used in scientific movies to give special effects like
animation, morphing etc. Actors created by combining different frames with the help of
multimedia can replace the actual actors.

With multimedia, students will gain creative skills and technological knowledge leading
to many exciting career opportunities, including in the fields of electronic publishing, web design,
information architecture, human-computer interface design, multimedia design and production,
3-D animation, computer games, exhibition design, scientific and medical visualization and
special effects for film and television. Escalating demand for these skills by the creative industries
provides students with exciting options for professional placement and eventual employment
nationally and internationally.

2.6 Motivation for Multimedia usage

Since a major goal of providing multimedia instruction is to motivate students, there is a need
to examine motivational elements. There are four major motivation theories: expectancy-value
theory, self-efficacy, goal-setting and task motivation, and self-determination theory.

A classification scheme for “media” is based on attributes by which learning technologies
are grouped into five “systems”:

 Human-based system (teacher instructor, tutor, role-plays, group activities, field trips,
etc.)

 Print-based system (books, manuals, workbooks, job aids, handouts, etc.)

 Visual-based system (books, job aids, charts, graphs, maps, figures, transparencies, slides,
etc.)

 Audiovisual-based system (video, film, slide-tape programs, live television, etc.)

 Computer-based system (computer-based instruction, computer-based interactive video,


hypertext, etc.)

Some of the methods applied in Multimedia for the motivation

1. Preparing Teachers to Teach Online: presents a brief background on the use of


technology in education, research on approaches to professional development, and specific
information on the competencies required to be an effective online teacher

2. Model-Facilitated Learning Environments How students will act and learn in a particular
environment depends on how the instructional designer creates the environment that
maximizes their learning potential, considering the interrelationships between the learning
experience, the technology, cognition, and other related issues of the learner.

3. Self-Regulated Learning (SRL) SRL competence has been promoted through reflection
on cognitive, meta-cognitive, emotional and motivational aspects of learning, as well as
through modeling teaching practices that tend to shift the locus of control from trainers to
trainees.

4. Individualized Web-Based Instructional Design Adaptive (Individualized) Web-based


instruction provides mechanisms to individualize instruction for learners based on their
individual needs.

Adaptive Web-based instruction, paying particular attention to (a) the implications of


individual differences to Web-based instruction, (b) the adaptive methods that are available
to designers and developers, and (c) the considerations for instruction design and
development with adaptive Web-based instruction.

5. Development of Game-Based Training Systems Improved understanding is needed


on how to best embed instruction in a game and how to best use gaming features to
support different types of instruction. In addition, the field is inherently inter-disciplinary,
requiring instructional system designers, software developers, game designers and more,
yet there are no established development methodologies to ensure effective coordination
and integration across these disciplines.

6. iPods as Mobile Multimedia Learning Environments  iPods are being used across a


variety of content areas, educational levels and geographic locations, involving a variety
of pedagogies.

7. E-Learning with Wikis, Weblogs and Discussion Forums explores how social software
tools can offer support for innovative learning methods and instructional design in general,
and for self-organized learning in an academic context in particular.

8. Emerging EdTech: Design principles are universal and may be translated onto the newest
trends and emergent technologies. They can be used to guide evaluation, instructional design
efforts, or best-practice models for exemplary use of educational technologies in the classroom.

9. Harnessing the Emotional Potential of Video Games highlights the importance of
acknowledging users’ personalities, learning styles, and emotions in the design of
educational games.

10. Learning Activities Model The design of learning is probably more accurately described
as the design of learning activities as it is the activities that are designable compared to
learning which is the desired outcome of the activities.

Check your Progress


1. You need hardware, software and ______to make multimedia

a. Network

b. Compact Disk Drive

c. Good Idea

d. Programing Knowledge

2. The people who weave multimedia into meaningful tapestries are called _____.

a. Programmers

b. Multimedia Developers

c. Software Engineers

d. Hardware Engineers

3. When the viewer of a multimedia project controls what elements are delivered and when,
it is called _____

4. The software vehicle, the messages, and the content presented on a computer, television
screen, PDA or cell phone together constitute a _______________

5. The most precious asset you can bring to the multimedia workshop is your ____.

a. Creativity

b. Programming Skill

c. Musical Ability

d. Film and Video Production Talent

6. Before beginning a multimedia project, you must first develop a sense of its ____.

a. scope and content

b. programming knowledge

c. implementing skills

d. planning and editing

2.7 Frequency Domain Analysis

2.7.1 Signal Fundamentals: Analogue, Discrete and Digital Signals


 An analogue signal is an electrical waveform with continuously varying possible amplitudes
of a quantity such as voltage or current. It is uniquely defined for all t.

 A discrete signal is one that exists at discrete times. It is characterised by a sequence of


numbers at each time kT, where k is an integer and T is a fixed time interval. It is sometimes
referred to as a Pulse Amplitude Modulated (PAM) signal because the amplitude of a
pulse stream is modulated (varied) by the amplitude of an analogue signal.

 A digital signal is one that has a finite set of possible amplitudes

Fig. 2.1 Analogue, Discrete and Digital Signals
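The relationship between the three signal types can be sketched in a few lines of Python (NumPy assumed; the amplitude, frequency and sampling interval are arbitrary example values). An analogue sine wave is sampled at times t = kT to give a discrete (PAM) sequence, which is then quantized to a finite set of amplitudes to give a digital signal:

import numpy as np

A, f, T = 1.0, 5.0, 0.01      # amplitude, frequency (Hz), sampling interval (s)
k = np.arange(20)             # sample indices

# Discrete signal: the analogue signal A*sin(2*pi*f*t) evaluated only at t = kT.
discrete = A * np.sin(2 * np.pi * f * k * T)

# Digital signal: restrict the amplitudes to a finite set of 8 levels.
levels = 8
idx = np.round((discrete + A) / (2 * A) * (levels - 1))   # nearest level, 0..7
digital = idx / (levels - 1) * 2 * A - A                  # back to the range [-A, A]

print(discrete[:4])   # continuously-valued samples
print(digital[:4])    # same samples, limited to 8 possible amplitudes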

2.7.2 Signal Fundamentals: Digital Waveforms


 A digital waveform conveys digital information even though its representation is sinusoidal
and consequentially has an analogue appearance.

 The analogue, discrete and digital signals can be referred to as the baseband signal.
‘Baseband’ is used to describe the band of frequencies representing the signal of interest
as delivered by the source of information.

Fig. 2.2 Digital Waveforms

2.7.3 Signal Fundamentals: Even and Odd Functions

If a function v(t) = v(-t) then it is defined as an even function. An example of an even
function is v = A cos ωt, where ω = 2πf.

Fig. 2.3 Signal –Even Function



 If a function v(t) = -v(-t) then it is defined as an odd function.

 An example of an odd function is v = A sin ωt, where ω = 2πf.

Fig. 2.4 Signal –Odd Function
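Both definitions are easy to check numerically; a short sketch (Python with NumPy assumed) samples each function at instants symmetric about t = 0:

import numpy as np

t = np.linspace(-1, 1, 11)     # sample instants symmetric about t = 0
w = 2 * np.pi * 1.0            # omega = 2*pi*f with f = 1 Hz (example value)

even = np.cos(w * t)           # cos(wt) satisfies v(t) = v(-t)
odd = np.sin(w * t)            # sin(wt) satisfies v(t) = -v(-t)

print(np.allclose(even, even[::-1]))   # True: even function
print(np.allclose(odd, -odd[::-1]))    # True: odd function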

2.7.4 Signal Fundamentals: Synthesis of Signals and the Frequency Domain

Any periodic signal can be synthesized by combining a series of cosine and sine signals
of different harmonics. By summing suitable amplitudes of the 1st, 3rd, 5th, and so on,
harmonics of a sine signal, an odd-function square wave can be synthesized.

Fig. 2.5 Synthesis of Signals and the Frequency Domain
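For a square wave of amplitude A this series is v(t) = (4A/π)[sin(ωt) + sin(3ωt)/3 + sin(5ωt)/5 + ...], containing only odd harmonics whose amplitudes fall off as 1/n. A brief Python sketch (NumPy assumed) that sums the first five odd harmonics:

import numpy as np

A, f = 1.0, 1.0                     # amplitude and fundamental frequency (Hz)
w = 2 * np.pi * f
t = np.linspace(0, 2 / f, 1000)     # two periods of the fundamental

square = np.zeros_like(t)
for n in (1, 3, 5, 7, 9):           # odd harmonics only
    square += (4 * A / np.pi) * np.sin(n * w * t) / n

# 'square' now approximates an odd-function square wave; including more
# odd harmonics makes the edges (the high-frequency content) sharper.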



The synthesized signal can be represented as a function of frequency against amplitude


of a sine signal for the different harmonics. This is known as the frequency domain
representation.

Fig. 2.6 Synthesized Signal – Frequency domain

If the period T of the synthesized signal becomes infinitely large then the difference in
frequency between the nth and the (n+1)th frequency components becomes infinitely small
and a continuous frequency domain representation is obtained.

Fig. 2.7 Synthesized Signal – Continuous Frequency

2.7.5 Difference between spatial domain and frequency domain

In spatial domain, we deal with images as it is. The value of the pixels of the image
changes with respect to scene. Whereas in frequency domain, it is dealt with the rate at which
the pixel values are changing in spatial domain.

Fig. 2.8 Image Processing

In the simple spatial domain, we deal directly with the image matrix. In the frequency
domain, we deal with the image as a distribution of frequencies, as described below.

Frequency Domain

We first transform the image to its frequency distribution. Then our black box system
performs whatever processing it has to perform, and the output of the black box in this case
is not an image, but a transformation. After performing the inverse transformation, the result
is converted back into an image, which is then viewed in the spatial domain.

It can be pictorially viewed as

Fig. 2.9 Picture Transformation
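A minimal sketch of this pipeline in Python (NumPy assumed; the 64 × 64 random matrix stands in for a real greyscale image):

import numpy as np

image = np.random.rand(64, 64)                 # stand-in for an image matrix

spectrum = np.fft.fft2(image)                  # spatial -> frequency domain
processed = spectrum                           # the "black box" works here
restored = np.real(np.fft.ifft2(processed))    # frequency -> spatial domain

print(np.allclose(image, restored))            # True: round trip recovers the image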

Frequency components

Any image in the spatial domain can be represented in the frequency domain. But what do
these frequencies actually mean?

The frequency components are divided into two major components.

 High frequency components

High frequency components correspond to edges in an image.



 Low frequency components

Low frequency components in an image correspond to smooth regions.
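Continuing the sketch above, a simple low-pass mask in the frequency domain keeps the smooth regions and discards edge detail; the cut-off radius is an arbitrary example value:

import numpy as np

image = np.random.rand(64, 64)                      # stand-in image
spectrum = np.fft.fftshift(np.fft.fft2(image))      # zero frequency at the centre

rows, cols = image.shape
r, c = np.ogrid[:rows, :cols]
mask = (r - rows // 2) ** 2 + (c - cols // 2) ** 2 <= 8 ** 2   # low frequencies only

smooth = np.real(np.fft.ifft2(np.fft.ifftshift(spectrum * mask)))
# 'smooth' holds the low-frequency (smooth) content; the difference
# image - smooth is dominated by high-frequency content such as edges.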

2.8 Multimedia Global Structure

Multimedia is an inter-disciplinary subject because it involves a variety of different theories


and skills: these include computer technology, hardware and software; arts and design, literature,
presentation skills; application domain knowledge.

Fig. 2.10 Multimedia Global Structure

 Application domain — provides functions to the user to develop and present multimedia
projects. This includes Software tools, and multimedia projects development methodology.

 System domain — including all supports for using the functions of the device domain,
e.g., operating systems, communication systems (networking) and database systems.

 Device domain — basic concepts and skill for processing various multimedia elements
and for handling physical device.

2.9 Summary

 There are three stages of multimedia – pre-production, production and post-


production.

 The team involved in creating a multimedia project should be knowledgeable,


experienced and efficient.

 Every team member should perform their own responsibilities, as well as others’ if the need arises.

 The diverse skills required to create a multimedia project are called the multimedia skillset.

 Team building refers to activities that help a group and its members function at optimum
levels of performance.

 Roles and responsibilities are assigned to each team member in a multimedia project.

2.10 Check Your Answers


1. c. Good Idea

2. b. Multimedia Developers

3. Interactive Multimedia

4. Multimedia Project

5. a. Creativity

6. a. Scope and Content

2.11 Model Questions


1. What are the stages of multimedia project?

2. Explain in detail various steps involved in the process of production.

3. What is a multimedia designer? Explain in brief.

4. Why is team spirit essential for people working in a team?

5. What is the first stage of a multimedia project?

6. Write short notes on multimedia skills and training.

7. Discuss in detail about training opportunity.

8. Write short notes on motivation for multimedia usage.

9. Explain in detail about frequency domain analysis.

10. Describe multimedia global structure in detail.



LESSON 3
MULTIMEDIA HARDWARE AND SOFTWARE

Structure
3.1 Introduction

3.2 Learning Objectives

3.3 Multimedia Hardware

3.4 Macintosh and Windows Production Platform

3.5 Multimedia Software

3.6 Summary

3.7 Check Your Answers

3.8 Model Questions

3.1 Introduction

Multimedia requires a variety of input devices to transmit data and instructions to a system
unit for processing and storage. Keyboards and pointing devices, such as trackballs, touch
pads, and touch screens, are central to interacting with graphical user interface (GUI) applications
and operating system software. Other devices are necessary to input sound, video, and a wide
array of images for multimedia applications. Some of these, such as microphones, are built
into the system. Others, such as scanners, cameras, sound recorders, and graphics tablets,
are plugged into USB or FireWire interface ports. Output devices include screen displays,
audio speakers or headsets, and hard copy. The quality of output for display, sound, and print
is dependent on the performance features of these devices.

3.2 Learning Objectives

This lesson aims at introducing the multimedia hardware used for providing interactivity
between the user and the multimedia software.

At the end of this lesson the learner will be able to

 Know common input devices and their roles in getting different types of information

 Know output devices and the way they make computers more useful

 Understand the concept of Macintosh and Windows Production Platform

 List and understand different multimedia software

3.3 Multimedia Hardware

An input device is a hardware mechanism that transforms information in the external


world for consumption by a computer.

An output device is a hardware used to communicate the result of data processing


carried out by the user or CPU.

3.3.1 Input devices for Multimedia Computers

Input devices are under direct control by a human user, who uses them to communicate
commands or other information to be processed by the computer, which may then transmit
feedback to the user through an output device. Input and output devices together make up the
hardware interface between a computer and the user or external world. Typical examples of
input devices include keyboards and mice. However, there are others which provide many
more degrees of freedom. In general, any sensor which monitors, scans for and accepts
information from the external world can be considered an input device, whether or not the
information is under the direct control of a user.

Classification of Input Devices

Input devices can be classified according to:

 The modality of input (e.g. mechanical motion, audio, visual, sound, etc.)

 whether the input is discrete (e.g. key presses) or continuous (e.g. a mouse’s position,
though digitized into a discrete quantity, is high-resolution enough to be thought of as
continuous)

 the number of degrees of freedom involved (e.g. many mice allow 2D positional input, but some devices allow 3D input, such as the Logitech Magellan Space Mouse)

Pointing devices, which are input devices used to specify a position in space, can further be classified according to:

 Whether the input is direct or indirect. With direct input, the input space coincides with
the display space, i.e. pointing is done in the space where visual feedback or the cursor
appears. Touchscreens and light pens involve direct input. Examples involving indirect
input include the mouse and trackball.

 Whether the positional information is absolute (e.g. on a touch screen) or relative (e.g.
with a mouse that can be lifted and repositioned)

 Note that direct input is almost necessarily absolute, but indirect input may be either
absolute or relative. For example, digitizing graphics tablets that do not have an embedded
screen involve indirect input, and sense absolute positions and are run in an absolute
input mode, but they may also be set up to simulate a relative input mode where the
stylus or puck can be lifted and repositioned.

(i) Keyboard

A keyboard is the most common method of interaction with a computer. Keyboards provide
various tactile responses (from firm to mushy) and have various layouts depending upon your
computer system and keyboard model. Keyboards are typically rated for at least 50 million
cycles (the number of times a key can be pressed before it might suffer breakdown).

The most common keyboard for PCs is the 101 style (which provides 101 keys), although many styles are available with more or fewer special keys, LEDs, and other features, such as a plastic membrane cover for industrial or food-service applications or flexible “ergonomic” styles. Macintosh keyboards connect to the Apple Desktop Bus (ADB), which manages all forms of user input, from digitizing tablets to mice.

Examples of types of keyboards include

 Computer keyboard

 Keyer

 Chorded keyboard

 LPFK

(ii) Pointing Devices

A Pointing Device is any computer hardware component (specifically human interface


device) that allows a user to input spatial (i.e., continuous and multi-dimensional) data to a
computer. CAD systems and graphical user interfaces (GUI) allow the user to control and
provide data to the computer using physical gestures - point, click, and drag - typically by
moving a hand-held mouse across the surface of the physical desktop and activating switches
on the mouse.

While the most common pointing device by far is the mouse, many more devices have been developed. However, the mouse is commonly used as a metaphor for devices that move the cursor. A mouse is the standard tool for interacting with a graphical user interface (GUI). All Macintosh computers require a mouse; on PCs, mice are not required but recommended. Even though the Windows environment accepts keyboard entry in lieu of mouse point-and-click actions, your multimedia project should typically be designed with the mouse or touchscreen in mind. The buttons on the mouse provide additional user input, such as pointing and double-clicking to open a document, or the click-and-drag operation, in which the mouse button is pressed and held down to drag (move) an object, to move to and select an item on a pull-down menu, or to access context-sensitive help.

The Apple mouse has one button; other mice may have as many as three.

Examples of common pointing devices include

 mouse

 trackball

 touchpad

 space ball - a 6-degrees-of-freedom controller

 touchscreen

 graphics tablets (or digitizing tablet) that use a stylus

 light pen

 light gun

 eye tracking devices

 steering wheel - can be thought of as a 1D pointing device

 yoke (aircraft)

 jog dial - another 1D pointing device

 isometric joysticks - where the user controls the stick by varying the amount of force they
push with, and the position of the stick remains more or less constant

 discrete pointing devices

 directional pad - a very simple keyboard

 Dance pad - used to point at gross locations in space with feet

(iii) Scanners

Scanners capture text or images using a light-sensing device. Popular types of scanners
include flatbed, sheet fed, and handheld, all of which operate in a similar fashion: a light passes
over the text or image, and the light reflects back to a CCD (Charge-Coupled Device).

A CCD is an electronic device that captures images as a set of analog voltages. The
analog readings are then converted to a digital code by another device called an ADC (Analog-
to-Digital Converter) and transferred through the interface connection (usually USB) to RAM.

The quality of a scan depends on two main performance factors. The first is spatial
resolution. This measures the number of dots per inch (dpi) captured by the CCD. Consumer
scanners have spatial resolutions ranging from 1200 dpi to 4800 dpi. High-end production
scanners can capture as much as 12,500 dpi.

Once the dots of the original image have been converted and saved to digital form, they
are known as pixels. A pixel is a digital picture element. The second performance factor is color
resolution, or the amount of color information about each captured pixel. Color resolution is
determined by bit depth, the number of bits used to record the color of a pixel. A 1-bit scanner
only records values of 0 or 1 for each “dot” captured. This limits scans to just two colors,
usually black and white.

Scanners work with specific software and drivers that manage scanner settings. Spatial
resolutions and bit depth can be altered for each scan. These settings should reflect the purpose
of an image. For example, if an image is a black and white photo for a website, the scanning
software can be adjusted to capture gray scale color depth (8 bit) at 72 dpi. This produces an
image suitable for most computer monitors that display either 72 or 96 pixels per inch. Scanner
software also has settings to scale an image and perform basic adjustments for tonal quality
(amount of brightness and contrast).
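Spatial resolution and color resolution together determine how much data a scan produces; the number of available colors is 2 raised to the bit depth. A back-of-the-envelope sketch (the function name is ours for illustration, not a scanner API):

    def scan_size_bytes(width_in, height_in, dpi, bit_depth):
        """Uncompressed scan size: pixel count times bits per pixel, over 8."""
        pixels = (width_in * dpi) * (height_in * dpi)
        return pixels * bit_depth / 8

    # An 8.5 x 11 inch page at 300 dpi in 24-bit color (about 16.7 million colors):
    print(scan_size_bytes(8.5, 11, 300, 24) / 2**20)   # roughly 24 MiB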

 Optical Character Recognition (OCR) OCR is the process of converting printed text to
a digital file that can be edited in a word processor. The same scanners that capture
images are used to perform OCR. However, a special software application is necessary
to convert a picture of the character into an ASCII-based letter. This OCR software
recognizes the picture of the letter C, for example, and stores it on the computer using its
ASCII code (01000011). These characters are then edited and reformatted in a word
processing application.
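The ASCII mapping mentioned above is easy to verify; in Python, ord() returns a character's code and format() renders it as bits:

    >>> format(ord('C'), '08b')    # the letter C as an 8-bit ASCII pattern
    '01000011'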

Specialized applications, such as OmniPage or Readiris Pro, are optimized to deliver


high-speed, accurate OCR results. The final success of any OCR conversion depends on the
quality of the source material and the particular fonts used on the page. Small print on wrinkled,
thin paper will not deliver good OCR results. OCR scanning is one method of capturing text
documents. Scanners are also used to create a PDF (Portable Document Format) file. The
scanner captures a specialized image of the page and saves it as a .pdf file. Adobe Acrobat
Reader is necessary to view the contents of a .pdf file. This file format is cross-platform
compatible, so it is particularly suitable for distributing highly formatted documents over a network.
OCR scanning creates a file that can be edited in any word processing application. PDF scanning,

on the other hand, creates a specialized file format that can only be managed by Adobe Acrobat
software.

 Flatbed scanners are configured to meet a variety of uses. The scanner bed varies to
handle standard letter- to legal-size image sources. Multi-format holders are available
for 35mm filmstrips and slides. Some scanners have an optional sheet-feed device. For
small production, these adapters to a flatbed scanner may suffice. For larger projects,
more specialized scanners should be considered. Slide and film scanners are specifically
calibrated to capture high spatial resolution, some at 4000 dpi.

 Sheet-fed scanners are built to automatically capture large print jobs and process 15 or
more pages per minute. In selecting a scanner for multimedia development there are
many considerations. Image or text sources, quality of scan capture, ease of use, and
cost all factor into choosing the right scanner.

Fig. 3.1 Slide and Flatbed Scanner

(iv) Digital Cameras

Digital cameras are a popular input source for multimedia developers. These cameras
eliminate the need to develop or scan a photo or slide. Camera images are immediately available
to review and reshoot if necessary, and the quality of the digital image is as good as a scanned
image. Digital capture is similar to the scanning process. When the camera shutter is opened
to capture an image, light passes through the camera lens. The image is focused onto a CCD,
which generates an analog signal. This analog signal is converted to digital form by an ADC
and then sent to a digital signal processor (DSP) chip that adjusts the quality of the image and
stores it in the camera’s built-in memory or on a memory card.

Fig. 3.2 Digital Cameras

(v) Touchscreens

Touchscreens are monitors that usually have a textured coating across the glass face.
This coating is sensitive to pressure and registers the location of the user’s finger when it
touches the screen. The Touch Mate System, which has no coating, actually measures the
pitch, roll, and yaw rotation of the monitor when pressed by a finger, and determines how much
force was exerted and the location where the force was applied. Other touchscreens use
invisible beams of infrared light that crisscross the front of the monitor to calculate where a
finger was pressed. Pressing twice on the screen in quick succession simulates a mouse double-click, and dragging the finger, without lifting it, to another location simulates a click-and-drag. A keyboard is sometimes simulated using an onscreen representation so users can input names, numbers, and other text by pressing “keys”.

Touchscreens are not recommended for day-to-day computer work, but are excellent for multimedia applications in a kiosk, at a trade show, or in a museum delivery system: anything involving public input and simple tasks. When your project is designed to use a touchscreen, the monitor is the only input device required, so you can secure all other system hardware behind locked doors to prevent theft or tampering.

3.3.2 Output Devices for Multimedia Computers

Computer output devices present processed data in a useful form. Output devices include
screen displays, audio speakers or headsets, and hard copy. The quality of output for display,
sound, and print is dependent on the performance features of these devices.

1. Display Devices

Display devices share their heritage with either Cathode Ray Tube (CRT) technology
used in analog televisions or Liquid Crystal Displays (LCD) first used in calculators and watches.
Both CRT and LCD technologies produce an image on a screen through a series of individual
picture elements (pixels). As in scanners and digital cameras, the quality of a display image is
largely determined by spatial resolution (the number of pixels) and color resolution (the bit
depth of each pixel).

 CRT monitors use raster scanning to generate a display. In this process an electronic
signal from the video card controls an electron gun that scans the back of a screen with
an electronic beam. The monitor’s back surface is coated with a phosphor material that
illuminates as electronic beams make contact. The electronic signal scans horizontal
rows from the top to the bottom of the screen. The number of available pixels that can be
illuminated determines the spatial resolution of the monitor. For example, a CRT with
1024 X 768 spatial resolution can display well over 700,000 pixels. CRT technology has now been replaced by smaller, lighter-weight, fully digital displays that use a different technique to create pixels.

 LCD screen is a sandwich of two plastic sheets with a liquid crystal material in the
middle. Tiny transistors control rod-shaped molecules of liquid crystal. When voltage is
applied to the transistor, the molecule is repositioned to let light shine through. Pixels
display light as long as the voltage is applied. Laptops borrowed this technology and
improved its resolution, color capability, and brightness to make LCDs suitable for computer
display. Resolution and brightness impact the quality of LCD output. LCD screens have
specific resolutions controlled by the size of the screen and the manufacturer. This fixed-
pixel format is referred to as the native resolution of the LCD screen. A 15-inch LCD
screen has a native resolution of 1024 X 768 pixels: there are exactly 1024 pixels in each horizontal line and 768 pixels in each vertical line, for a total of 786,432 pixels (a quick calculation of what this implies follows this list).

 LED (Light-Emitting Diode) displays have moved from large TV screens to mobile
phones, tablets, laptops, and desktop screens. These displays use the same TFT display
technology as the LCDs. A major distinction is in the manner of providing the light source

to illuminate the pixels on the screen. LED screens use a single row of light-emitting
diodes to make a brighter backlight that significantly improves the quality of the monitor
display.
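The calculation promised above is simple arithmetic: the native resolution fixes the pixel count, and multiplying by the bit depth gives the minimum frame buffer the video hardware must refresh (24-bit color is an assumption made for the example):

    width, height = 1024, 768
    pixels = width * height                    # 786,432 pixels on this LCD
    bits_per_pixel = 24                        # assumed 24-bit color resolution
    framebuffer = pixels * bits_per_pixel // 8
    print(pixels, framebuffer / 2**20)         # 786432 pixels, 2.25 MiB per frame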

2. Sound Devices

Sound output devices are speakers or headsets. They are plugged into the soundboard
where digital data is converted to analog sound waves. Soundboards can be a part of the
system board or added to a computer’s expansion slots. Soundboard circuitry performs four
basic processes: it converts digital sound data into analog form using a digital-to-analog
converter, or DAC; records sound in digital form using an ADC; amplifies the signal for delivery
through speakers; and creates digital sounds using a synthesizer. A synthesizer is an output
device that creates sounds electronically.

Sound quality depends on the range of digital signals the soundboard can process. These signals are measured as sample size and sample rate.

 Sample size is the resolution of the sound measured in bits per sample. Most soundboards
support 16-bit sound, the current CD-quality resolution.

 Sample rate measures the frequency at which bits are recorded in digitizing a sound.

Modern boards accommodate the 48 KHz sample rate found in professional audio and
DVD systems. Soundboards control both sound input and output functions. Input functions are
especially important for developers because they need to capture and create high-quality sounds.
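Sample size and sample rate together fix the data rate of uncompressed digital audio. A small sketch (the function name is ours) using the 16-bit resolution and 48 kHz rate mentioned above, plus the 44.1 kHz rate used on audio CDs:

    def pcm_bytes_per_second(sample_rate_hz, bits_per_sample, channels):
        """Uncompressed PCM data rate: samples/s x bits/sample x channels / 8."""
        return sample_rate_hz * bits_per_sample * channels // 8

    print(pcm_bytes_per_second(44_100, 16, 2))   # CD quality: 176,400 bytes/s
    print(pcm_bytes_per_second(48_000, 16, 2))   # DVD quality: 192,000 bytes/s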

3. Print Devices

Printers remain an important multimedia peripheral device, despite the fact that multimedia
applications are primarily designed for display.

Printer is an output device, which is used to print information on paper.

There are two types of printers –

1. Impact Printers

2. Non-Impact Printers

1. Impact Printers

Impact printers print the characters by striking them on the ribbon, which is then pressed
on the paper.

Characteristics of Impact Printers are the following:

 Very low consumable costs

 Very noisy

 Useful for bulk printing due to low cost

 There is physical contact with the paper to produce an image

These printers are of two types:

(i) Character printers

(ii) Line printers

(i) Character Printers

Character printers are the printers which print one character at a time.

These are further divided into two types:

a. Dot Matrix Printer(DMP)

b. Daisy Wheel

a. Dot Matrix Printer

In the market, one of the most popular printers is the Dot Matrix Printer. These printers are popular because of their ease of printing and economical price. Each character is printed as a pattern of dots; the print head consists of a matrix of pins of size 5x7, 7x9, 9x7 or 9x9 which strike out to form a character, which is why it is called a Dot Matrix Printer.

Fig. 3.3 Dot Matrix Printer

Advantages

 Inexpensive

 Widely Used

 Other language characters can also be printed

Disadvantages

 Slow Speed

 Poor Quality

b. Daisy Wheel

The print head lies on a wheel, and the pins corresponding to characters are arranged like the petals of a daisy flower, which is why it is called a Daisy Wheel Printer. These printers are generally used for word processing in offices that require a few letters to be sent here and there with very good quality.

Fig. 3.4 Daisy Wheel

Advantages

 More reliable than DMP

 Better quality

 Fonts of character can be easily changed

Disadvantages

 Slower than DMP

 Noisy

 More expensive than DMP

(ii) Line Printers

Line printers are the printers which print one line at a time.

Fig. 3.5 Line Printers



These are of two types:

a. Drum Printer

b. Chain Printer

a. Drum Printer

This printer is like a drum in shape, hence it is called a drum printer. The surface of the drum is divided into a number of tracks. The total number of tracks equals the width of the paper in characters; for a paper width of 132 characters, the drum will have 132 tracks. A character set is embossed on each track. Character sets available in the market include 48-, 64- and 96-character sets. One rotation of the drum prints one line. Drum printers are fast and can print 300 to 2000 lines per minute.

Advantages

 Very high speed

Disadvantages

 Very expensive

 Character fonts cannot be changed

b. Chain Printer

In this printer, a chain of character sets is used; hence it is called Chain Printer. A standard
character set may have 48, 64, or 96 characters.

Advantages

 Character fonts can easily be changed.

 Different languages can be used with the same printer.



Disadvantages

 Noisy

2. Non-impact Printers

Non-impact printers print the characters without using the ribbon. These printers print a complete page at a time, so they are also called Page Printers.

These printers are of two types:

a. Laser Printers

b. Inkjet Printers

Characteristics of Non-impact Printers

 Faster than impact printers

 They are not noisy

 High quality

 Supports many fonts and different character sizes

a. Laser Printers

These are non-impact page printers. They use laser lights to produce the dots needed to
form the characters to be printed on a page.

Fig. 3.6 Laser Printer



Advantages

 Very high speed

 Very high quality output

 Good graphics quality

 Supports many fonts and different character sizes

Disadvantages

 Expensive

 Cannot be used to produce multiple copies of a document in a single printing

b. Inkjet Printers

Inkjet printers are non-impact character printers based on a relatively new technology.
They print characters by spraying small drops of ink onto paper. Inkjet printers produce high
quality output with presentable features.

Fig. 3.7 Inkjet Printers

They make less noise because no hammering is done, and many styles of printing modes are available. Color printing is also possible. Some models of inkjet printers can also produce multiple copies of a printout.

Advantages

 High quality printing

 More reliable

Disadvantages

 Expensive as the cost per page is high

 Slow as compared to laser printer

Check your Progress


1. A _______________ file requires no cross-platform conversion.

2. Say True or False : FAQ stands for Frequently Asked Questions

3. A package of software applications that might include a spreadsheet, database, e-mail,


web browser, and presentation applications is called a _______________

4. Sharing peripheral resources such as file servers, printers, scanners, and network routers
is made possible by a __________.

a. ATA

b. IDE

c. LAN

d. GPS

5. With __________ and a scanner, you can convert paper documents into a word processing
document on your computer without retyping or rekeying.

a. QR codes

b. CRT projectors

c. GLV technology

d. OCR software

6. Which of the following is not a tool designed for creating e-learning?

a. Adobe Captivate

b. Go! Animate

c. Easy generator

d. FileMaker Pro

7. DPI stands for ____________.

8. Which one of the following resource is not necessarily required on a file server?

a. secondary storage

b. processor

c. network

d. monitor

3.4 Macintosh versus Windows

The two types of desktop computer used for multimedia development are the Apple Mac
and the Microsoft Windows based personal computer or PC. Both platforms share these common
components as do most types of computer:

 Processor: The processor or central processing unit is the key component and controls
the rest of the computer and executes programs.

 Cache: Cache is a small amount of very high speed memory built into the processor for
doing immediate calculations.

 RAM memory: RAM (random access memory) is the working memory where the current
application program resides.

 System bus: The system bus connects all the necessary devices to the processor. There
are other buses that connect to the system bus like SCSI for hard drives.

 Motherboard: The processor, cache, RAM and system bus all reside on a main printed
circuit board called the motherboard.

 Operating system: The operating system manages the loading and unloading of
applications and files and the communication with other peripheral devices like printers.

 Storage devices: Application programs and working files are saved longer term on different
kinds of storage device. Storage devices include hard disk drives, CD-ROMs and floppy
drives.

 Input/output devices: Connected to the system bus are a number of other devices that
control the other essential components of a desk top computer including the monitor,
mouse, keyboard, speakers, printer, and scanner.

 Expansion bus: Most desktops should include ‘slots’ into which other non-standard devices
can be installed.

The latest specification Macs and PCs are capable of running the application tools
necessary for developing standard multimedia applications. The standard applications are image,
sound and video editing, animation and multimedia integration. Comparisons of the performance
of the latest generation of PCs and Macs are hotly contested but in general they are now
roughly the same with each type of computer performing better on some tasks than others.
Apple Macs have, in the past, been more associated with the multimedia industry, however
PCs are increasingly being used since they are now capable of undertaking the same processor
intensive tasks like video compression equally well. High specification computers are required
to undertake some of the tasks required in multimedia development.

Today’s computer users live in a veritable golden age when it comes to choosing computing devices. In truth, there’s no clear winner in the Mac vs. PC contest. Instead, both platforms have seen significant developments. Both now come equipped with Intel® Core™ processors that deliver impressive performance. In addition, both Mac and PC offer increased memory, larger hard drive space, better stability and more availability than four years ago. However, differences remain: the PC and Ultrabook™ are widely available with touchscreens, but Apple has yet to release a Mac or MacBook* with integrated touchscreen technology. Retina display, which greatly reduces glare and reflection, is a standard feature on the new iMac*, but is less common on PCs.

 Compatibility: While the main operating system for Apple is OSX*, and PCs operate on
Microsoft Windows*, only Macs have the capability to run both. Naturally, both systems
continue to develop faster and more powerful versions of these operating systems that
are increasingly user-friendly and more compatible with handheld devices.

 Reliability: When it comes to reliability, the Mac vs. PC debate has had some interesting
developments of late. Though the majority of PC users know their devices are vulnerable

to malware and viruses, Mac users this past year have certainly awoken to the fact that
Macs are also vulnerable to sophisticated attacks. Ultimately, both PC and Mac users are
safer after installing up-to-date antivirus software designed to protect their devices from
malicious hits. Even when it comes to repairs, both operating systems have made great
strides. Though it’s still advised to take a broken Mac to an Apple Genius Bar* in an
authorized Apple dealership, there are more locations than there were a few years ago.
PC users enjoy a broader range of choices, from their local electronics dealer to a
repair center at a major department store, though it remains their own responsibility to
choose a repair service that’s up to their PC manufacturer’s standards. Since PCs and
Macs hit the market, the debate has existed over which is best. Depending upon who
you’re talking to, the PC vs. Mac debate is even hotter than politics or religion. While you
have many who are die-hard Microsoft PC users, another group exists that is just as
dedicated to Apple’s Mac*. A final group exists in the undecided computer category.

 Cost: For many users, cost is key. You want to get the absolute most for your money. In
years past, PCs dominated the budget-friendly market, with Macs ranging anywhere from
$100 to $500 more than a comparable PC. Now this price gap has lessened significantly.
However, you will notice a few key features that Macs tend to lack in order to provide a
lower price: memory and hard drive space.

 Memory: Most PCs have anywhere from 2 GB to 8 GB of RAM in laptops and desktops,
while Macs usually have only 1 GB to 4 GB. Keep in mind, this is for standard models, not
custom orders.

 Hard Drive Space: Macs typically have smaller hard drives than PCs. This could be
because some Mac files and applications are slightly smaller than their PC counterparts.
On average, you will still see price gaps of several hundred dollars between comparable
Macs and PCs. For computing on a budget, PCs win. There are a few things to take into
consideration that may actually make Macs more cost effective: stability and compatibility.

 Stability: In years past, PCs were known to crash, and users would get the “blue screen,”
but Microsoft has made their operating systems more reliable in recent years. On the
other hand, Mac hardware and software have tended to be stable, and crashes occur
infrequently.

 Compatibility: Unlike with a PC, a Mac can also run Windows. If you want to have a
combination Mac and PC, a Mac is your best option.

 Availability: Macs are exclusive to Apple. This means for the most part, prices and features
are the same no matter where you shop. This limits Mac availability. However, with the
new Apple stores, it’s even easier to buy Macs and Mac accessories. Any upgrades or
repairs can only be done by an authorized Apple support center. PCs, on the other hand,
are available from a wide range of retailers and manufacturers. This means more
customization, a wider price range for all budgets, repairs, and upgrades available at
most electronics retailers and manufacturers. It also makes it easier for the home user to
perform upgrades and repairs themselves as parts are easy to find.

 Web Design: 95 per cent of the people surfing the Web use Windows on PCs. If you want to be able to design in an atmosphere where you see pretty much what that 95 per cent sees, then Windows just plain makes sense. Secondly, though many technologies are available for the Mac, some Windows-only technology is not, and much of the Web uses that technology. If you want to take advantage of .NET technology or ASP, it is just far easier to implement from a Windows platform.

 Software: The final Mac vs. PC comparison comes down to software. For the most part, the two are neck and neck. Microsoft has even released Microsoft Office specifically for the Mac, proving Apple and Microsoft can get along. All in all, Macs are more software compatible, as PCs only support Windows-friendly software. Both systems support most open-source software. Software for both systems is user friendly and easy to learn.

3.4.1 The Macintosh Platform

All Macintoshes can record and play sound. Many include hardware and software for
digitizing and editing video and producing DVD discs. High-quality graphics capability is available
“out of the box.” Unlike the Windows environment, where users can operate any application
with keyboard input, the Macintosh requires a mouse.

The Macintosh computer you will need for developing a project depends entirely upon
the project’s delivery requirements, its content, and the tools you will need for production.

3.4.2 The Windows Platform

Unlike the Apple Macintosh computer, a Windows computer is not a computer per se, but rather a collection of parts that are tied together by the requirements of the Windows operating system. Power supplies, processors, hard disks, CD-ROM players, video and audio components, monitors, keyboards and mice: it doesn’t matter where they come from or who makes them. Made in Texas, Taiwan, Indonesia, Ireland, Mexico, or Malaysia by widely known or little-known manufacturers, these components are assembled and branded by Dell, IBM, Gateway, and others into computers that run Windows.

In the early days, Microsoft organized the major PC hardware manufacturers into the Multimedia PC Marketing Council to develop a set of specifications that would allow Windows to deliver a dependable multimedia experience.

3.4.3 Networking Macintosh and Windows Computers

When you work in a multimedia development environment consisting of a mixture of Macintosh and Windows computers, you will want them to communicate with each other. It
may also be necessary to share other resources among them, such as printers. Local area
networks (LANs) and wide area networks (WANs) can connect the members of a workgroup.
In a LAN, workstations are usually located within a short distance of one another, on the same
floor of a building, for example. WANs are communication systems spanning great distances,
typically set up and managed by large corporations and institutions for their own use, or to share
with other users.

LANs allow direct communication and sharing of peripheral resources such as file servers,
printers, scanners, and network modems. They use a variety of proprietary technologies, most
commonly Ethernet or Token Ring, to perform the connections.

3.5 Multimedia Software

Multimedia software allows users to create and play audio and video media. Audio converters, burners, players, and video encoders and decoders are some of these tools. RealPlayer and Windows Media Player are examples of this software.

Multimedia software can be entertaining as well as useful. The user can play music on
the computer, listen to the sound an animal makes while browsing a disk about the zoo, hear
actual recordings of famous speeches, view a video clip of a historic event, watch an animation
about how a car engine works, hear the correct pronunciation of a word or phrase, view full
color photographs of famous works of art or scenes from nature, listen to the sounds of different
musical instruments, hear works of music by renowned composers, or watch a movie on your
computer.

There is a large selection of multimedia software available for the person’s enjoyment.
Multimedia subjects include children’s learning, the arts, reference works, health and medicine,
science, history, geography, hobbies and sports, games, and much more. Because of the large
storage requirements of this type of media, most multimedia software comes on a compact
disk (CD-ROM) format.

To use multimedia software, the end user system must meet certain minimum requirements
set forth by the Multimedia Personal Computer (MPC) Marketing Council. These requirements
include

 a CD-ROM drive

 hard disk drive with ample storage capacity

 a 486 or better central processing unit (CPU)

 at least 4 to 8 megabytes of RAM (memory)

 a 256 color or better video adapter

 a sound card with speakers or headphones.

Most new computers far exceed these specifications. A microphone is optional if the user wants to record their own sounds. While these are suggested minimum requirements, many multimedia programs run better on a computer equipped with a Pentium 4 or AMD Athlon CPU and 512 or more megabytes of RAM.

Since much of the software purchased today contains multimedia content, we are now
referring to multimedia software as the software used to create multimedia content. Examples

include authoring software, which is used to create interactive multimedia courseware which is
distributed on CD or available over the Internet. A teacher could use such a program to create
interesting interactive lessons for the students which are viewed on the computer. A business
could create programs to teach job skills or orient new employees.

Another category of multimedia involves the recreational use of music. Songs can be
copied from CDs or downloaded from the Internet and stored on the hard drive. The music can
then be burned onto a CD or transferred to a Walkman-like device called an MP3 player or a
“jukebox.” There is also software for the creation, arranging, performance and recording of
music and video. Through the use of a MIDI (Musical Instrument Digital Interface) connector
installed in the computer, the computer can be connected to musical instruments such as
electronic keyboards. A music student or musician could then create a multiple track recording,
arrange it, play it back, change the key or tempo, and print out the sheet music. Another type
of software which is recently gaining popularity is digital audio recording software, which allows
the computer to be connected to a digital audio mixer, usually through USB or FireWire
connectors, and record live music onto the hard drive. The “tracks” can then be mixed, effects
added, and music CDs can then be made from the master recording.

Also available are special cameras that allow the person to record pictures and movies to
the person’s hard disk drive so they can easily be transferred into a multimedia presentation or
edited and recorded back to video tape to create their own “home movie.” These cameras
range from the very inexpensive type that are wired to the computer and sit on a small stand
near the monitor. This type of camera is sometimes referred to as an “Internet camera” or a
“video chat” camera, and sometimes called a “golf ball” camera because many of them are in
the shape of a golf ball. These cameras can also be used to send a live video feed over the
Internet, such as a video “chat” or “teleconference” call.

Digital video cameras allow the person to record movies and watch them with amazing
clarity and resolution, or to transfer the video to the computer for editing using the included
software. The user can delete unwanted scenes, add titles and effects, fade in and out, create
a sequence of scenes from smaller video files, and even add a musical sound track. Once the
editing is complete, the movie can be recorded back to the videotape, or “burned” onto a DVD
if the person’s computer is equipped with a DVD-R or DVD-RW drive.

3.6 Summary
 Hardware elements such as hard disks and networked peripherals must be connected
together.

 Input and output devices such as microphones, recorders, speakers, and monitors are
required when working with multimedia elements.

 Windows and Macintosh are the two most used computer platforms.

 The Graphics Card and a GPU (Graphical Processing Unit) are needed to generate the
highest quality output images on a monitor.

 CD-ROMs (Compact Disc-Read Only Memory), HD DVDs (High-Definition Digital Versatile Disc), and BDs (Blu-ray Disc) are the best choices for saving and distributing multimedia data and video.

 Multimedia software tools can be divided into graphics and image editing, audio and
sound editing, video editing, and animation authoring tools.

3.7 Check Your Answers


1. binary compatible

2. True

3. Office Suite

4. c. LAN

5. d. OCR Software

6. d. FileMaker Pro

7. dots per inch

8. d. monitor

3.8 Model Questions


1. Explain in details about Input Devices with an example.

2. Discuss in detail about Output Devices with an example.



3. What is multimedia software?

4. Write short notes on Macintosh and Windows Production Platform.

5. Explain about multimedia Software in detail.

6. Compare and contrast Macintosh and Window platform.

7. What is keyboard and pointing devices?

8. What are Flat-Bed scanners?

9. What are Touch screens?

10. Write short notes on printer and its types.



LESSON 4
HARDWARE PERIPHERALS IN
MULTIMEDIA SYSTEM

Structure
4.1 Introduction

4.2 Learning Objectives

4.3 Hardware Peripherals

4.4 Connecting Devices

4.5 Memory and Storage Devices

4.6 Communication Devices

4.7 Media Software

4.8 Summary

4.9 Check Your Answers

4.10 Model Questions

4.1 Introduction

The hardware required for a multimedia PC depends on personal preference, budget, project delivery requirements, and the type of material and content in the project. Multimedia production was once much smoother and easier on the Macintosh than on Windows, but multimedia content production on Windows has since been made easy by additional storage and lower computing cost. The right selection of multimedia hardware results in a good quality multimedia presentation.

4.2 Learning Objectives


At the end of this lesson, the learner will be able to

 Learn the hardware peripherals with connecting devices

 Know the functionality of different types of memory and storage devices



 Understand the ways the components of a computer fits together

 List and understand different communication devices

 Know the current media software packages used in multimedia system

4.3 Hardware Peripherals

Peripheral devices are hardware used for input, auxiliary storage, display, and
communication. These are attached to the system unit through a hardware interface that carries
digital data to and from main memory and processors. The functions and performance
characteristics of peripherals are important considerations both for multimedia users, who may
want the best display device for a video game, and for developers, who seek high-performance
data capture and access.

Multimedia Hardware

The hardware required for multimedia can be classified into three categories. They are

1. Connecting devices

2. Memory and storage devices

3. Communication devices

4.4 Connecting Devices

Among the many pieces of hardware (computers, monitors, disk drives, video projectors, light valves, players, VCRs, mixers, and loudspeakers) there are many wires that connect these devices. The data transfer speed that the connecting devices provide determines how quickly the multimedia content can be delivered.

The most popularly used connecting devices are:

1. Small Computer System Interface (SCSI)

2. Media Control Interface (MCI)

3. Integrated Drive Electronics (IDE)

4. Universal Serial Bus (USB)

5. FireWire and i.LINK (IEEE 1394)



1. SCSI

SCSI (Small Computer System Interface) is a set of standards for physically connecting
and transferring data between computers and peripheral devices. The SCSI standards define
commands, protocols, electrical and optical interfaces. SCSI is most commonly used for hard
disks and tape drives, but it can connect a wide range of other devices, including scanners,
and optical drives (CD, DVD, etc.). SCSI is most commonly pronounced “scuzzy”. Since
its standardization in 1986, SCSI has been commonly used in the Apple Macintosh and Sun
Microsystems computer lines and PC server systems. SCSI has never been popular in the low-
priced IBM PC world, owing to the lower cost and adequate performance of its ATA hard disk
standard. SCSI drives and even SCSI RAIDs became common in PC workstations for video or
audio production, but the appearance of large cheap SATA drives means that SATA is rapidly
taking over this market. Currently, SCSI is popular on high-performance workstations and
servers. RAIDs on servers almost always use SCSI hard disks, though a number of
manufacturers offer SATA-based RAID systems as a cheaper option. Desktop computers and
notebooks more typically use the ATA/IDE or the newer SATA interfaces for hard disks, and
USB and FireWire connections for external devices.

SCSI interfaces

SCSI is available in a variety of interfaces. The first, still very common, was parallel SCSI
(also called SPI). It uses a parallel electrical bus design. The traditional SPI design is making
a transition to Serial Attached SCSI, which switches to a serial point-to-point design but retains
other aspects of the technology.

iSCSI drops physical implementation entirely, and instead uses TCP/IP as a transport
mechanism. Finally, many other interfaces which do not rely on complete SCSI standards still
implement the SCSI command protocol.

The following table compares the different types of SCSI.



Table 4.1 Types of SCSI

SCSI cabling

Internal SCSI cables are usually ribbon cables that have multiple 68 pin or 50 pin
connectors. External cables are shielded and only have connectors on the ends.

iSCSI

iSCSI preserves the basic SCSI paradigm, especially the command set, almost
unchanged. iSCSI advocates project the iSCSI standard, an embedding of SCSI-3 over TCP/
IP, as displacing Fibre Channel in the long run, arguing that Ethernet data rates are currently
increasing faster than data rates for Fibre Channel and similar disk-attachment technologies.
iSCSI could thus address both the low-end and high-end markets with a single commodity-
based technology.

Serial SCSI

Four recent versions of SCSI, SSA, FC-AL, FireWire, and Serial Attached SCSI (SAS)
break from the traditional parallel SCSI standards and perform data transfer via serial
communications. Although much of the documentation of SCSI talks about the parallel interface,
most contemporary development effort is on serial SCSI. Serial SCSI has a number of
advantages over parallel SCSI—faster data rates, hot swapping, and improved fault isolation.
The primary reason for the shift to serial interfaces is the clock skew issue of high speed

parallel interfaces, which makes the faster variants of parallel SCSI susceptible to problems
caused by cabling and termination. Serial SCSI devices are more expensive than the equivalent
parallel SCSI devices.

SCSI command protocol

In addition to many different hardware implementations, the SCSI standards also include
a complex set of command protocol definitions. The SCSI command architecture was originally
defined for parallel SCSI buses but has been carried forward with minimal change for use with
iSCSI and serial SCSI. Other technologies which use the SCSI command set include the ATA
Packet Interface, USB Mass Storage class and FireWire SBP-2.

In SCSI terminology, communication takes place between an initiator and a target. The
initiator sends a command to the target which then responds. SCSI commands are sent in a
Command Descriptor Block (CDB). The CDB consists of a one byte operation code followed
by five or more bytes containing command-specific parameters. At the end of the command
sequence the target returns a Status Code byte which is usually 00h for success, 02h for an
error (called a Check Condition), or 08h for busy. When the target returns a Check Condition in
response to a command, the initiator usually then issues a SCSI Request Sense command in
order to obtain a Key Code Qualifier (KCQ) from the target. The Check Condition and Request
Sense sequence involves a special SCSI protocol called a Contingent Allegiance Condition.

There are 4 categories of SCSI commands: N (non-data), W (writing data from initiator to
target), R (reading data), and B (bidirectional). There are about 60 different SCSI commands in
total, with the most common being:

 Test unit ready: Queries device to see if it is ready for data transfers (disk spun up,
media loaded, etc.).

 Inquiry: Returns basic device information, also used to “ping” the device since it does not
modify sense data.

 Request sense: Returns any error codes from the previous command that returned an
error status.

 Send Diagnostic and Receive Diagnostic Results: runs a simple self-test or a specialized test defined in a diagnostic page.

 Start/Stop unit: Spins disks up and down, load/unload media.

 Read capacity: Returns storage capacity.

 Format unit: Sets all sectors to all zeroes, also allocates logical blocks avoiding defective
sectors.

 Read Format Capacities: Read the capacity of the sectors.

 Read (four variants): Reads data from a device.

 Write (four variants): Writes data to a device.

 Log sense: Returns current information from log pages.

 Mode sense: Returns current device parameters from mode pages.

 Mode select: Sets device parameters in a mode page.

Each device on the SCSI bus is assigned at least one Logical Unit Number (LUN). Simple
devices have just one LUN, more complex devices may have multiple LUNs. A “direct access”
(i.e. disk type) storage device consists of a number of logical blocks, usually referred to by the
term Logical Block Address (LBA). A typical LBA equates to 512 bytes of storage. The usage of
LBAs has evolved over time and so four different command variants are provided for reading
and writing data. The Read(6) and Write(6) commands contain a 21-bit LBA address. The
Read(10), Read(12), Read Long, Write(10), Write(12), and Write Long commands all contain
a 32-bit LBA address plus various other parameter options.
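As an illustration of the CDB layout just described, the following sketch packs a 10-byte Read(10) command: a one-byte operation code (28h) followed by command-specific parameters, including the 32-bit LBA and a 16-bit transfer length. Leaving the flag, group, and control bytes at zero is an assumption made for this example:

    import struct

    def read10_cdb(lba, num_blocks):
        """Build a 10-byte READ(10) Command Descriptor Block (big-endian)."""
        return struct.pack(">BBIBHB",
                           0x28,         # operation code: READ(10)
                           0x00,         # flag bits, left clear in this sketch
                           lba,          # 32-bit Logical Block Address
                           0x00,         # group number
                           num_blocks,   # transfer length in logical blocks
                           0x00)         # control byte

    cdb = read10_cdb(lba=0, num_blocks=8)   # read 8 blocks starting at LBA 0
    print(cdb.hex())                        # 28000000000000000800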

A “sequential access” (i.e. tape-type) device does not have a specific capacity because it
typically depends on the length of the tape, which is not known exactly. Reads and writes on a
sequential access device happen at the current position, not at a specific LBA. The block size
on sequential access devices can either be fixed or variable, depending on the specific device.
(Earlier devices, such as 9-track tape, tended to be fixed block, while later types, such as DAT,
almost always supported variable block sizes.)

SCSI device identification

In the modern SCSI transport protocols, there is an automated process of “discovery” of


the IDs. SSA initiators “walk the loop” to determine what devices are there and then assign
each one a 7-bit “hop-count” value. FC-AL initiators use the LIP (Loop Initialization Protocol) to
interrogate each device port for its WWN (World Wide Name). For iSCSI, because of the
unlimited scope of the (IP) network, the process is quite complicated. These discovery processes
occur at power-on/initialization time and also if the bus topology changes later, for example if
an extra device is added.

On a parallel SCSI bus, a device (e.g. host adapter, disk drive) is identified by a “SCSI
ID”, which is a number in the range 0-7 on a narrow bus and in the range 0–15 on a wide bus.
On earlier models a physical jumper or switch controls the SCSI ID of the initiator (host adapter).
On modern host adapters (since about 1997), doing I/O to the adapter sets the SCSI ID; for
example, the adapter contains a BIOS program that runs when the computer boots up and that
program has menus that let the operator choose the SCSI ID of the host adapter. Alternatively,
the host adapter may come with software that must be installed on the host computer to configure
the SCSI ID. The traditional SCSI ID for a host adapter is 7, as that ID has the highest priority
during bus arbitration (even on a 16 bit bus).

The SCSI ID of a device in a drive enclosure that has a backplane is set either by jumpers
or by the slot in the enclosure the device is installed into, depending on the model of the
enclosure. In the latter case, each slot on the enclosure’s back plane delivers control signals to
the drive to select a unique SCSI ID. A SCSI enclosure without a backplane has a switch for
each drive to choose the drive’s SCSI ID. The enclosure is packaged with connectors that
must be plugged into the drive where the jumpers are typically located; the switch emulates the
necessary jumpers. While there is no standard that makes this work, drive designers typically set up their jumper headers in a consistent format that matches the way these switches operate.

Note that a SCSI target device (which can be called a “physical unit”) is divided into
smaller “logical units.” For example, a high-end disk subsystem may be a single SCSI device
but contain dozens of individual disk drives, each of which is a logical unit (more commonly, it

is not that simple—virtual disk devices are generated by the subsystem based on the storage
in those physical drives, and each virtual disk device is a logical unit). The SCSI ID, WWNN,
etc. in this case identifies the whole subsystem, and a second number, the logical unit number
(LUN) identifies a disk device within the subsystem.

It is quite common, though incorrect, to refer to the logical unit itself as a “LUN.” Accordingly,
the actual LUN may be called a “LUN number” or “LUN id”. Setting the bootable (or first) hard
disk to SCSI ID 0 is an accepted IT community recommendation. SCSI ID 2 is usually set aside
for the Floppy drive while SCSI ID 3 is typically for a CD ROM.

SCSI enclosure services

In larger SCSI servers, the disk-drive devices are housed in an intelligent enclosure that supports SCSI Enclosure Services (SES). The initiator can communicate with the enclosure using a specialized set of SCSI commands to access power, cooling, and other non-data characteristics.

2. Media Control Interface (MCI)

The Media Control Interface, MCI in short, is an aging API for controlling multimedia
peripherals connected to a Microsoft Windows or OS/2 computer. MCI makes it very simple to
write a program which can play a wide variety of media files and even to record sound by just
passing commands as strings. It uses relations described in Windows registries or in the [MCI]
section of the file SYSTEM.INI.

The MCI interface is a high-level API developed by Microsoft and IBM for controlling
multimedia devices, such as CD-ROM players and audio controllers. The advantage is that
MCI commands can be transmitted both from the programming language and from the scripting
language (open script, lingo). For a number of years, the MCI interface has been phased out in
favor of the DirectX APIs.

MCI Devices
The Media Control Interface consists of 4 parts:

 AVIVideo

 CDAudio

 Sequencer

 WaveAudio

Each of these so-called MCI devices can play a certain type of file; e.g., AVIVideo plays .avi files and CDAudio plays CD tracks, among others. Other MCI devices have also been made available over time.

Playing media through the MCI interface

To play a type of media, it needs to be initialized correctly using MCI commands. These
commands are subdivided into categories:

 System Commands

 Required Commands

 Basic Commands

 Extended Commands
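Because MCI commands are plain strings, a program can drive a device with a handful of calls to the Windows mciSendString function. A minimal Windows-only sketch via Python's ctypes (the file name clip.wav is hypothetical):

    import ctypes

    _mci = ctypes.windll.winmm.mciSendStringW     # Windows multimedia API

    def mci(command):
        """Send one MCI command string and return its textual reply."""
        buf = ctypes.create_unicode_buffer(255)
        if _mci(command, buf, 254, 0):
            raise RuntimeError("MCI command failed: " + command)
        return buf.value

    mci('open "clip.wav" type waveaudio alias clip')   # initialize the device
    mci('play clip wait')                              # a Basic Command
    mci('close clip')                                  # release the device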

3. Integrated Drive Electronics (IDE)

Usually storage devices connect to the computer through an Integrated Drive Electronics
(IDE) interface. Essentially, an IDE interface is a standard way for a storage device to connect
to a computer. IDE is actually not the true technical name for the interface standard. The
original name, AT Attachment (ATA), signified that the interface was initially developed for the
IBM AT computer.

IDE was created as a way to standardize the use of hard drives in computers. The basic
concept behind IDE is that the hard drive and the controller should be combined. The controller
is a small circuit board with chips that provide guidance as to exactly how the hard drive stores
and accesses data. Most controllers also include some memory that acts as a buffer to enhance
hard drive performance. Before IDE, controllers and hard drives were separate and often
proprietary. In other words, a controller from one manufacturer might not work with a hard drive
from another manufacturer. The distance between the controller and the hard drive could result
in poor signal quality and affect performance. Obviously, this caused much frustration for
computer users.

IDE devices use a ribbon cable to connect to each other. Ribbon cables have all of the
wires laid flat next to each other instead of bunched or wrapped together in a bundle. IDE
ribbon cables have either 40 or 80 wires. There is a connector at each end of the cable and
another one about two-thirds of the distance from the motherboard connector. This cable cannot
exceed 18 inches (46 cm) in total length (12 inches from first to second connector, and 6
inches from second to third) to maintain signal integrity.

The three connectors are typically different colors and attach to specific items:

 The blue connector attaches to the motherboard.

 The black connector attaches to the primary (master) drive.

 The grey connector attaches to the secondary (slave) drive.

Enhanced IDE (EIDE) — an extension to the original ATA standard again developed by
Western Digital — allowed the support of drives having a storage capacity larger than 504
MiBs (528 MB), up to 7.8 GiBs (8.4 GB). Although these new names originated in branding
convention and not as an official standard, the terms IDE and EIDE appear as if interchangeable
with ATA. This may be attributed to the two technologies being introduced with the same
consumable devices — these “new” ATA hard drives.

With the introduction of Serial ATA around 2003, conventional ATA was retroactively
renamed to Parallel ATA (P-ATA), referring to the method in which data travels over wires in
this interface.

4. Universal Serial Bus (USB)

Universal Serial Bus (USB) is a serial bus standard to interface devices. A major
component in the legacy-free PC, USB was designed to allow peripherals to be connected
using a single standardized interface socket and to improve plug-and-play capabilities by allowing
devices to be connected and disconnected without rebooting the computer (hot swapping).
Other convenient features include providing power to low-consumption devices without the
need for an external power supply and allowing many devices to be used without requiring
manufacturer specific, individual device drivers to be installed.

USB is intended to help retire all legacy varieties of serial and parallel ports. USB can
connect computer peripherals such as mouse devices, keyboards, PDAs, gamepads and
joysticks, scanners, digital cameras, printers, personal media players, and flash drives. For
many of those devices USB has become the standard connection method. USB is also used
extensively to connect non-networked printers; USB simplifies connecting several printers to
one computer. USB was originally designed for personal computers, but it has become
commonplace on other devices such as PDAs and video game consoles.

The design of USB is standardized by the USB Implementers Forum (USB-IF), an industry
standards body incorporating leading companies from the computer and electronics industries.
Notable members have included Apple Inc., Hewlett-Packard, NEC, Microsoft, and Intel.

A USB system has an asymmetric design, consisting of a host, a multitude of downstream
USB ports, and multiple peripheral devices connected in a tiered-star topology. Additional USB
hubs may be included in the tiers, allowing branching into a tree structure, subject to a limit of
five levels of tiers. A USB host may have multiple host controllers, and each host controller may
provide one or more USB ports. Up to 127 devices, including the hub devices, may be connected
to a single host controller.

USB devices are linked in series through hubs. There always exists one hub, known as
the root hub, which is built into the host controller. So-called “sharing hubs” also exist, allowing
multiple computers to access the same peripheral device(s), either switching access between
PCs automatically or manually. They are popular in small office environments. In network terms
they converge rather than diverge branches.
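
The topology rules above can be made concrete with a small model. The sketch below is
purely illustrative — it encodes only the tier and device limits, not a real USB stack, and all
names in it are invented.

# Illustrative model of the USB tiered-star limits (not a real USB stack):
# at most 5 tiers and 127 devices per host controller.
MAX_TIERS = 5
MAX_DEVICES = 127

class UsbNode:
    def __init__(self, name, tier, devices):
        if tier > MAX_TIERS:
            raise ValueError("exceeded the 5-tier limit")
        if len(devices) >= MAX_DEVICES:
            raise ValueError("exceeded 127 devices on this host controller")
        self.name, self.tier, self._devices = name, tier, devices
        devices.append(self)

    def attach(self, name):
        # Attaching through a node adds one tier, branching the tree.
        return UsbNode(name, self.tier + 1, self._devices)

devices = []
root_hub = UsbNode("root hub", 1, devices)  # built into the host controller
hub = root_hub.attach("external hub")       # tier 2
hub.attach("webcam")                        # tier 3 peripheral
print(len(devices), "devices enumerated")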

A single physical USB device may consist of several logical sub-devices that are referred
to as device functions, because each individual device may provide several functions, such as
a webcam (video device function) with a built-in microphone (audio device function).

5. FireWire and i.LINK (IEEE 1394)

FireWire was introduced by Apple in the late 1980s, and in 1995 it became an industry
standard (IEEE 1394) supporting high-bandwidth serial data transfer, particularly for digital
video and mass storage. Like USB, the standard supports hot-swapping and plug-and-play,
but it is faster, and while USB devices can only be attached to one computer at a time, FireWire
can connect multiple computers and peripheral devices (peer-to-peer). Both the Mac OS and
Windows offer IEEE 1394 support. Because the standard has been endorsed by the Electronics
Industries Association and the Advanced Television Systems Committee (ATSC), it has become
a common method for connecting and interconnecting professional digital video gear, from
cameras to recorders and edit suites. Sony calls this standard i.LINK. FireWire has replaced
Parallel SCSI in many applications because it is cheaper and because it has a simpler, adaptive
cabling system.

Check your Progress


1. The type of memory used by a computer to run several programs at the same time is
called _______________

2. The type of memory that is not erased when power is shut off to it is called
_______________.

3. Secondary storage memory is basically

a. Volatile memory

b. Non-Volatile Memory

c. Backup Memory

d. Impact Memory

4. Say True or False:

A type of backup storage in which data is read in sequence is classified as serial access.

5. In a graphical system, the hardware used to store a bitmap is _______________

6. Examples of magnetic storage devices include:

a. Flash Memory Drive

b. CD-ROM drive

c. Hard Disk Drive

d. Optical Drive

7. Which of the following items is not used in Local Area Networks (LANs)?

a. Computer

b. Cable

c. Modem

d. Interface card

8. Wi-Fi uses ________________

9. DVD is an acronym for _________________

10. A specific instance of software is called an:

a. Virus

b. Website

c. Application

d. CorelDraw

4.5 Memory and Storage Devices

A data storage device is a device for recording (storing) information (data). Recording
can be done using virtually any form of energy. A storage device may hold information, process
information, or both. A device that only holds information is a recording medium. Devices that
process information (data storage equipment) may use either a separate portable (removable)
recording medium or a permanent component to store and retrieve information.

Electronic data storage is storage that requires electrical power to store and retrieve
data. Most storage devices that do not require visual optics to read data fall into this
category. Electronic data may be stored in either an analog or a digital signal format. Such
data is considered to be electronically encoded, whether or not it is electronically stored.
Most electronic data storage media (including some forms of computer storage) are considered
permanent (non-volatile) storage; that is, the data will remain stored when power is removed
from the device. In contrast, information held in most types of computer memory (such as RAM) is volatile.

As more memory and storage space are added to a computer, computing needs and
habits tend to keep pace, filling the new capacity. To estimate the memory requirements of a
multimedia project — the space required on a floppy disk, hard disk, or CD-ROM, not the random
access memory (RAM) used while the computer is running — you need a sense of the project’s
content and scope.

Random Access Memory (RAM)

RAM is the main memory, where the operating system is initially loaded and application
programs are loaded at a later stage. RAM is volatile in nature, and every program that is quit
or exited is removed from RAM. The greater the RAM capacity, the higher the processing speed.

If there is a budget constraint, it is certainly possible to produce a multimedia project on a
slower or limited-memory computer. On the other hand, it is profoundly frustrating to face
memory (RAM) shortages time after time when you’re attempting to keep multiple applications
and files open simultaneously. It is also frustrating to wait the extra seconds required on each
editing step when working with multimedia material on a slow processor.

On the Macintosh, the minimum RAM configuration for serious multimedia production is
about 32MB, but 64MB and even 256MB systems are becoming common, because while
digitizing audio or video you can store much more data much more quickly in RAM. And when
you’re using some software, you can quickly chew up available RAM — for example, Photoshop
(16MB minimum, 20MB recommended), After Effects (32MB required), Director (8MB minimum,
20MB better), PageMaker (24MB recommended), Illustrator (16MB recommended), and Microsoft
Office (12MB recommended).

In spite of all the marketing hype about processor speed, this speed is ineffective if not
accompanied by sufficient RAM. A fast processor without enough RAM may waste processor
cycles while it swaps needed portions of program code into and out of memory.

In some cases, increasing available RAM may show more performance improvement on
your system than upgrading the processor chip. On an MPC platform, multimedia authoring can
also consume a great deal of memory. You may need to open many large graphics and
audio files, as well as your authoring system, all at the same time to facilitate faster copying/
pasting and then testing in your authoring software. Although 8MB is the minimum under the
MPC standard, much more is required in practice today.

Read-Only Memory (ROM)

Read-only memory is not volatile. Unlike RAM, when you turn off the power to a ROM
chip, it will not forget, or lose, its memory. ROM is typically used in computers to hold the small
BIOS program that initially boots up the computer, and it is used in printers to hold built-in
fonts. Programmable ROMs (called EPROMs) allow changes to be made that are not forgotten
when power is removed.

A new and inexpensive technology, optical read-only memory (OROM), is provided in
proprietary data cards using patented holographic storage. Typically, OROMs offer 128MB of
storage, have no moving parts, and use only about 200 milliwatts of power, making them ideal
for handheld, battery-operated devices.

Floppy and Hard Disks

Adequate storage space for the production environment can be provided by large-capacity
hard disks; a server-mounted disk on a network; Zip, Jaz, or SyQuest removable cartridges;
optical media; CD-R (compact disc-recordable) discs; tape; floppy disks; banks of special
memory devices; or any combination of the above. Removable media (floppy disks, compact
or optical discs, and cartridges) typically fit into a letter-sized mailer for overnight courier service.
One or many disks may be required for storage and archiving of each project, and it is necessary
to plan for backups kept off-site.

Floppy disks and hard disks are mass-storage devices for binary data — data that can be
easily read by a computer. Hard disks can contain much more information than floppy disks
and can operate at far greater data transfer rates. In the scale of things, floppies are, however,
no longer “mass-storage” devices. A floppy disk is made of flexible Mylar plastic coated with a
very thin layer of special magnetic material. A hard disk is actually a stack of hard metal platters
coated with magnetically sensitive material, with a series of recording heads or sensors that
hover a hairbreadth above the fast-spinning surface, magnetizing or demagnetizing spots along
formatted tracks using technology similar to that used by floppy disks and audio and video tape
recording. Hard disks are the most common mass-storage device used on computers, and for
making multimedia it is necessary to have one or more large-capacity hard disk drives.

Fig. 4.1 a) Floppy Disk b) Hard Disk

Zip, Jaz, SyQuest, and Optical Storage Devices

SyQuest’s 44MB removable cartridges have been the most widely used portable medium
among multimedia developers and professionals, but Iomega’s inexpensive Zip drives with
their likewise inexpensive 100MB cartridges have significantly penetrated SyQuest’s market
share for removable media. Iomega’s Jaz cartridges provide a gigabyte of removable storage
media and have fast enough transfer rates for audio and video development. Pinnacle Micro,
Yamaha, Sony, Philips, and others offer CD-R “burners” for making write-once compact discs,
and some double as quad-speed players. As blank CD-R discs become available for less than
a dollar each, this write-once media competes as a distribution vehicle. CD-R is described in
greater detail a little later in the chapter.

Magneto-optical (MO) drives use a high-power laser to heat tiny spots on the metal oxide
coating of the disk. While the spot is hot, a magnet aligns the oxides to provide a 0 or 1 (on or
off) orientation. Like SyQuests and other Winchester hard disks, this is rewritable technology,
because the spots can be repeatedly heated and aligned. Moreover, this media is normally not
affected by stray magnetism (it needs both heat and magnetism to make changes), so these
disks are particularly suitable for archiving data. The data transfer rate is, however, slow
compared to Zip, Jaz, and SyQuest technologies. One of the most popular formats uses a
128MB-capacity disk about the size of a 3.5-inch floppy. Larger-format magneto-optical drives
with 5.25-inch cartridges offering 650MB to 1.3GB of storage are also available.

Fig. 4.2 Optical Storage

Digital Versatile Disc (DVD)

In December 1995, nine major electronics companies (Toshiba, Matsushita, Sony, Philips,
Time Warner, Pioneer, JVC, Hitachi, and Mitsubishi Electric) agreed to promote a new optical
disc technology for distribution of multimedia and feature-length movies called DVD.

With this new medium capable not only of gigabyte storage capacity but also full-motion
video (MPEG2) and high-quality audio in surround sound, the bar has again risen for multimedia
developers. Commercial multimedia projects will become more expensive to produce as
consumers’ performance expectations rise. There are two types of DVD — DVD-Video and DVD-
ROM; these reflect marketing channels, not the technology.

Fig. 4.3 Digital Versatile Disc

CD-ROM Players

Compact Disc Read-Only Memory (CD-ROM) players have become an integral part of
the multimedia development workstation and are an important delivery vehicle for large, mass-
produced projects. A wide variety of developer utilities, graphic backgrounds, stock photography
and sounds, applications, games, reference texts, and educational software are available only
on this medium.

Fig. 4.4 CD-ROM Players

CD-ROM players have typically been very slow to access and transmit data (150KB per
second, which is the speed required of consumer Red Book Audio CDs), but new developments
have led to double-, triple-, quadruple-speed and even 24x drives designed specifically for
computer (not Red Book Audio) use. These faster drives spool up like washing machines on
the spin cycle and can be somewhat noisy, especially if the inserted compact disc is not evenly
balanced.
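
Because the drive multiplier is defined against that 150KB-per-second Red Book base rate,
the nominal transfer rate of an Nx drive is simply N × 150KB/s. A quick sketch of the arithmetic
(the drive speeds listed are chosen for illustration):

# Nominal CD-ROM transfer rates derived from the 1x Red Book rate.
RED_BOOK_KB_PER_SEC = 150  # 1x: the rate consumer audio CDs require

for speed in (1, 2, 4, 24):
    rate = speed * RED_BOOK_KB_PER_SEC
    print(f"{speed:>2}x drive: {rate:,} KB/s ({rate / 1024:.2f} MB/s)")

A 24x drive, for example, moves about 3,600KB per second — roughly 3.5MB/s.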

CD Recorders

With a compact disc recorder, you can make your own CDs, using special CD-Recordable
(CD-R) blank optical discs to create a CD in most formats of CD-ROM and CD-Audio. The
machines are made by Sony, Philips, Ricoh, Kodak, JVC, Yamaha, and Pinnacle. Software,
such as Adaptec’s Toast for Macintosh or Easy CD Creator for Windows, lets you organize files
on your hard disk(s) into a “virtual” structure, then writes them to the CD in that order. CD-R
discs are made differently than normal CDs but can play in any CD-Audio or CD-ROM player.
They are available in either a “63-minute” or “74-minute” capacity; the former holds about
560MB and the latter about 650MB (74 minutes × 60 seconds × 150KB/s ≈ 650MB). These
write-once CDs make excellent high-capacity file archives and are used extensively by
multimedia developers for premastering and testing CD-ROM projects and titles.

Videodisc Players

Videodisc players (commercial, not consumer quality) can be used in conjunction with
the computer to deliver multimedia applications. You can control the videodisc player from your
authoring software with X-Commands (XCMDs) on the Macintosh and with MCI commands in
Windows. The output of the videodisc player is an analog television signal, so you must set up
a television separate from your computer monitor or use a video digitizing board to “window”
the analog signal on your monitor.

Fig. 4.5 VideoDisc players

4.6 Communication Devices

A communication device is a hardware device capable of transmitting an analog or digital
signal over the telephone, other communication wire, or wirelessly. The best example of a
communication device is a computer Modem, which is capable of sending and receiving a
signal to allow computers to talk to other computers over the telephone.

Other examples of communication devices include a NIC (network interface card), Wi-
Fi devices, and access points.

Communication device examples

 Bluetooth

Bluetooth is a computing and telecommunications industry specification that describes
how devices can communicate with each other. Devices that use Bluetooth include computers,
a computer keyboard and mouse, personal digital assistants, and smartphones.

Fig. 4.6 Bluetooth

Bluetooth is an RF technology that operates at 2.4 GHz, has an effective range of 32 feet
(10 meters) — this range can vary depending on the power class — and has a transfer rate of
1 Mbps and a throughput of 721 Kbps.

 Modem

A modem (modulator/demodulator) is a hardware device that allows a computer to send
and receive information over telephone lines. When sending a signal, the device converts
(“modulates”) digital data to an analog audio signal and transmits it over a telephone line.
Similarly, when an analog signal is received, the modem converts it back (“demodulates” it) to
a digital signal.

Fig. 4.7 Modem

 Network Interface Card

The NIC is also referred to as an Ethernet card or network adapter. It is an expansion
card that enables a computer to connect to a network — such as a home network or the Internet —
using an Ethernet cable with an RJ-45 connector.

Fig. 4.8 Network Interface-Ethernet card

 Smartphone

Smartphones use a touch screen to allow users to interact with them. There are thousands
of smartphone apps including games, personal-use, and business-use programs that can all
run on the phone. Example: Apple iPhone

Fig. 4.9 Smartphone

 Wi-Fi

Wi-Fi is a wireless network technology that utilizes one of the IEEE 802.11 wireless standards to
achieve a wireless connection to a network. A home wireless network uses a wireless access
point or router to broadcast a signal, using WPA or WEP encryption to send and receive signals
from wireless devices on the network. A wireless access point with two antennas is an example
of how most home users connect to the Internet using a wireless device.

Fig. 4.10 a) & b) WAP with two antennas

4.7 Media Software

For the creation of multimedia on the PC there are hundreds of software packages
available from manufacturers all over the world.

These packages range in cost from absolutely free (such software is normally called
freeware or shareware) to many thousands of dollars.

 Adobe CS4

Adobe CS4 is a collection of graphic design, video editing, and web development
applications made by Adobe Systems, many of which are industry standards. It includes:

 Adobe Dreamweaver

Although a hybrid WYSIWYG (What You See Is What You Get) and code-based web design
and development application, Dreamweaver’s WYSIWYG mode can hide the HTML code details
of pages from the user, making it possible for non-coders to create web pages and sites
visually, without writing HTML.

 Adobe Fireworks

A bitmap and vector graphics editor with features such as slices and the ability to add
hotspots, for rapidly creating website prototypes and application interfaces.

 GIMP

GIMP is a free alternative to Photoshop; cheaper, but not quite as full-featured.

 Google SketchUp

SketchUp is a 3D modeling program designed for architects, civil engineers, filmmakers,
game developers, and related professions.

 Microsoft FrontPage

As a WYSIWYG editor, FrontPage is designed to hide the details of pages’ HTML code
from the user, making it possible for novices to easily create web pages and sites.

 Apple QuickTime

QuickTime is an extensible proprietary framework developed by Apple, capable of handling
various formats of digital video, 3D models, sound, text, animation, music, panoramic images,
and interactivity.

 Adobe Photoshop

Adobe Photoshop, or simply Photoshop, is a program developed and published by Adobe
Systems. It is the current market leader for commercial bitmap and image manipulation software,
and is the flagship product of Adobe Systems. It has been described as “an industry standard
for graphics professionals”.

 Microsoft PowerPoint

PowerPoint presentations are generally made up of slides that may contain text, graphics,
movies, and other objects, which may be arranged freely on the slide.

 Adobe Flash Player

Adobe Flash (formerly Macromedia Flash) is a multimedia platform that is popular for
adding animation and interactivity to web pages. Originally a Macromedia product (acquired
in 1996), Flash is currently developed and distributed by Adobe Systems.

Flash is commonly used to create animation, advertisements, and various web page
components, to integrate video into web pages, and, more recently, to develop rich Internet
applications.

 Adobe Shockwave

Adobe Shockwave (formerly Macromedia Shockwave) is a multimedia player program,
first developed by Macromedia and acquired by Adobe Systems in 2005. It allows Adobe
Director applications to be published on the Internet and viewed in a web browser on any
computer which has the Shockwave plug-in installed.

4.8 Summary
 SCSI (Small Computer System Interface) is a set of standards for physically connecting
and transferring data between computers and peripheral devices.

 On a parallel SCSI bus, a device (e.g., host adapter, disk drive) is identified by a “SCSI
ID”, which is a number in the range 0–7 on a narrow bus and 0–15 on a wide
bus.

 The Media Control Interface, MCI in short, is an aging API for controlling multimedia
peripherals connected to a Microsoft Windows computer.

 Memory and storage devices include Hard Drives, Random Access Memory (RAM), Read-
Only Memory (ROM), Flash Memory and Thumb Drives, and CD-ROM, DVD, and Blu-ray
discs.

 A communication device is a hardware device capable of transmitting an analog or digital
signal over the telephone, other communication wire, or wirelessly.

 Wi-Fi is a wireless network technology that utilizes one of the IEEE 802.11 wireless standards to achieve
a wireless connection to a network.

4.9 Check Your Answers


1. Random-Access Memory (RAM)

2. Read-Only Memory (ROM)



3. Non-Volatile Memory

4. True

5. Frame buffer

6. Hard Disk Drive

7. Modem

8. Radio Waves

9. Digital Versatile Disc (DVD)

10. Application

4.10 Model Questions


1. Define Peripheral devices.

2. Explain in detail about connecting devices in multimedia.

3. Describe about memory devices in detail.

4. Discuss about storage devices with an example.

5. Explain communication devices in multimedia in detail.

6. Write short notes on Small Computer System Interface (SCSI).

7. List out the multimedia software and its uses.

8. Define Modems and ISDN.

9. List down the name of communication devices.

10. Define USB.



LESSON 5
BASIC SOFTWARE TOOLS FOR
MULTIMEDIA OBJECTS

Structure
5.1 Introduction

5.2 Learning Objectives

5.3 Basic Tools

5.4 Making Instant Multimedia

5.5 Multimedia Software and Authoring Tools

5.6 Production Standards

5.7 Summary

5.8 Check Your Answers

5.9 Model Questions

5.1 Introduction

The basic tool set for building a multimedia project contains one or more authoring systems
and various editing applications for text, images, sound, and motion video. A few additional
applications are also useful for capturing images from the screen, translating file formats, and
making multimedia production easier.

5.2 Learning Objectives


At the end of this lesson, the learner will be able to

 Understand common software programs used to handle text, graphics, audio, video, and
animation in multimedia projects and discuss their capabilities.

 Learn the hardware most used in making multimedia and choose an appropriate platform
for a project.

 Determine which multimedia authoring system is most appropriate for any given
project.

5.3 Basic Tools

Text Editing and Word Processing Tools

A word processor is usually the first software tool computer users rely upon for creating
text, and it is often bundled with an office suite. Word processors such as Microsoft Word
and WordPerfect are powerful applications that include spellcheckers, table formatters,
thesauruses, and prebuilt templates for letters, resumes, purchase orders, and other common
documents.

OCR Software

Often there will be printed matter and other text to be incorporated into a multimedia
project, but no electronic text file. With optical character recognition (OCR) software, a flat-bed
scanner, and a computer, it is possible to save many hours of rekeying printed words, and get
the job done faster and more accurately than a roomful of typists.

OCR software turns bitmapped characters into electronically recognizable ASCII text. A
scanner is typically used to create the bitmap. Then the software breaks the bitmap into chunks
according to whether it contains text or graphics, by examining the texture and density of areas
of the bitmap and by detecting edges. The text areas of the image are then converted to ASCII
character using probability and expert system algorithms.
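
A minimal sketch of this pipeline, assuming the open-source Tesseract OCR engine and its
Python wrapper pytesseract are installed (the file name page.png is hypothetical):

from PIL import Image   # Pillow, to load the scanned bitmap
import pytesseract      # Python wrapper around the Tesseract OCR engine

# Load the bitmap a flat-bed scanner would produce, then convert its
# text regions into plain, editable text.
bitmap = Image.open("page.png")
text = pytesseract.image_to_string(bitmap)
print(text)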

Image-Editing Tools

Image-editing applications are specialized and powerful tools for enhancing and retouching
existing bitmapped images. These applications also provide many of the features and tools
of painting and drawing programs, and can be used to create images from scratch as well as
images digitized from scanners, video frame-grabbers, digital cameras, clip art files, or original
artwork files created with a painting or drawing package.

Typical features of image-editing applications for multimedia developers are listed below (a brief code sketch follows the list):



 Multiple windows that provide views of more than one image at a time

 Conversion of major image-data types and industry-standard file formats

 Direct inputs of images from scanner and video sources

 Employment of a virtual memory scheme that uses hard disk space as RAM for images
that require large amounts of memory

 Capable selection tools, such as rectangles, lassos, and magic wands, to select portions
of a bitmap

 Image and balance controls for brightness, contrast, and color balance

 Good masking features

 Multiple undo and restore features

 Anti-aliasing capability, and sharpening and smoothing controls

 Color-mapping controls for precise adjustment of color balance

 Tools for retouching, blurring, sharpening, lightening, darkening, smudging, and tinting

 Geometric transformation such as flip, skew, rotate, and distort and perspective changes

 Ability to resample and resize an image

 24-bit color, 8- or 4-bit indexed color, 8-bit gray-scale, black-and-white, and customizable
color palettes

 Ability to create images from scratch, using line, rectangle, square, circle, ellipse, polygon,
airbrush, paintbrush, pencil, and eraser tools, with customizable brush shapes and user-
definable bucket and gradient fills

 Multiple typefaces, styles, and sizes, and type manipulation and masking routines

 Filters for special effects, such as crystallize, dry brush, emboss, facet, fresco, graphic
pen, mosaic, pixelize, poster, ripple, smooth, splatter, stucco, twirl, watercolor, wave, and
wind

 Support for third-party special effect plug-ins

 Ability to design in layers that can be combined, hidden, and reordered
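
Several of the features above — resizing, flipping, rotating, and converting between file
formats — can be sketched in a few lines. This is an illustration only, assuming the Pillow
imaging library; the file names are hypothetical.

from PIL import Image  # Pillow imaging library

# A few of the image-editing operations listed above, applied in sequence.
img = Image.open("photo.jpg")                         # read an existing bitmap
img = img.rotate(90, expand=True)                     # geometric transformation
img = img.resize((img.width // 2, img.height // 2))   # resample and resize
img = img.transpose(Image.Transpose.FLIP_LEFT_RIGHT)  # flip
img.save("photo_edited.png")                          # convert the file format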



Plug-Ins

Image-editing programs usually support powerful plug-in modules, available from third-
party developers, that allow you to wrap, twist, shadow, cut, diffuse, and otherwise “filter” your
images for special visual effects.

Painting and Drawing Tools

Painting and drawing tools, as well as 3-D modelers, are perhaps the most important
items in the toolkit because, of all the multimedia elements, the graphical impact of the project
will likely have the greatest influence on the end user. If the artwork is amateurish, or flat and
uninteresting, both the creator and the users will be disappointed.

Painting software, such as Photoshop, Fireworks, and Painter, is dedicated to producing
crafted bitmap images. Drawing software, such as CorelDraw, FreeHand, Illustrator, Designer,
and Canvas, is dedicated to producing vector-based line art easily printed to paper at high
resolution.

Some software applications combine drawing and painting capabilities, but many authoring
systems can import only bitmapped images. Typically, bitmapped images provide the greatest
choice and power to the artist for rendering fine detail and effects, and today bitmaps are used
in multimedia more than drawn objects. Some vector-based packages such as Macromedia’s
Flash are aimed at reducing file download times on the Web, and may contain both bitmaps
and drawn art.

Look for these features in a drawing or painting package:

 An intuitive graphical user interface with pull-down menus, status bars, palette control,
and dialog boxes for quick, logical selection

 Scalable dimensions, so you can resize, stretch, and distort both large and small bitmaps

 Paint tools to create geometric shapes, from squares to circles and from curves to complex
polygons

 Ability to pour a color, pattern, or gradient into any area



 Ability to paint with patterns and clip art

 Customizable pen and brush shapes and sizes

 Eyedropper tool that samples colors

 Auto trace tool that turns bitmap shapes into vector-based outlines

 Support for scalable text fonts and drop shadows

 Multiple undo capabilities, to let you try again

 Painting features such as smoothing coarse-edged objects into the background with anti-
aliasing, airbrushing in variable sizes, shapes, densities, and patterns; washing colors in
gradients; blending; and masking

 Support for third-party special effect plug-ins

 Object and layering capabilities that allows to treat separate elements independently

 Zooming, for magnified pixel editing

 All common color depths: 1-, 4-, 8-, 16-, 24-, or 32-bit color, and gray scale

 Good color management and dithering capability among color depths using various color
models such as RGB, HSB, and CMYK

 Good palette management when in 8-bit mode

 Good file importing and exporting capability for image formats such as PIC, GIF,
TGA, TIF, WMF, JPG, PCX, EPS, PTN, and BMP

Sound Editing Tools

Sound editing tools for both digitized and MIDI sound let us hear the music as well as
create it. By drawing a representation of a sound in fine increments, whether a score or a
waveform, it is possible to cut, copy, paste, and otherwise edit segments of it with great precision.

System sounds ship with both Macintosh and Windows systems and are available as
soon as the operating system is installed. For MIDI sound, a MIDI synthesizer is required to
play and record sounds from musical instruments. For ordinary sound there are a variety of
software packages, such as SoundEdit, MP3 Cutter, and WaveStudio.

Animation, Video and Digital Movie Tools

Animation and digital movies are sequences of bitmapped graphic scenes, or frames,
which are rapidly played back. Most authoring tools adopt either a frame- or object-oriented
approach to animation.

Moviemaking tools typically take advantage of QuickTime for Macintosh and Microsoft
Video for Windows, and let content developers create, edit, and present digitized motion
video segments.

Video formats

A video format describes how one device sends video pictures to another device, such
as the way that a DVD player sends pictures to a television or a computer to a monitor. More
formally, the video format describes the sequence and structure of frames that create the
moving video image.

Video formats are commonly known in the domain of commercial broadcast and consumer
devices; most notably to date, these are the analog video formats of NTSC, PAL, and SECAM.
However, video formats also describe the digital equivalents of the commercial formats, the
aging custom military uses of analog video (such as RS-170 and RS-343), the increasingly
important video formats used with computers, and even such offbeat formats as color
field sequential.

Video formats were originally designed for display devices such as CRTs (Cathode Ray
Tubes). However, because other kinds of displays share common source material, video formats
enjoy wide adoption and have convenient organization; they are a common means of describing
the structure of displayed visual information for a variety of graphical output devices.

Common Organization of Video Formats

A video format describes a rectangular image carried within an envelope containing
information about the image. Although video formats vary greatly in organization, there is a
common taxonomy:

 A frame can consist of two or more fields, sent sequentially, that are displayed over time
to form a complete frame. This kind of assembly is known as interlace.

 An interlaced video frame is distinguished from a progressive scan frame, where the
entire frame is sent as a single intact entity.

 A frame consists of a series of lines, known as scan lines. Scan lines have a regular and
consistent length in order to produce a rectangular image. This is because in analog
formats, a line lasts for a given period of time; in digital formats, the line consists of a
number of pixels. When a device sends a frame, the video format specifies that each line
is sent independently by the device from any others and that all lines are sent in top-to-
bottom order.

 As above, a frame may be split into fields — odd and even (by line “numbers”) or upper
and lower, respectively. In NTSC (National Television System Committee), the lower field
comes first, then the upper field, and together they make up the whole frame. The basics of a
format are aspect ratio, frame rate, and interlacing with field order if applicable. Video formats use
a sequence of frames in a specified order. In some formats, a single frame is independent
of any other (such as those used in computer video formats), so the sequence is only one
frame. In other video formats, frames have an ordered position.

Individual frames within a sequence typically have similar construction. However,
depending on its position in the sequence, a frame may vary small elements within it to
represent additional information. For example, MPEG-2 compression may eliminate the
information that is redundant frame-to-frame in order to reduce the data size, preserving only the
information relating to changes between frames.
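
To make the frame/scan-line/pixel taxonomy concrete, the sketch below computes the raw
(uncompressed) data rate of one digital format; the particular numbers — 720×480 pixels,
29.97 frames per second, two bytes per pixel — are chosen only for illustration.

# Raw data rate of an uncompressed digital video stream.
width, height = 720, 480   # pixels per scan line, scan lines per frame
fps = 29.97                # frames per second (an NTSC-derived rate)
bytes_per_pixel = 2        # e.g. 8-bit samples with 4:2:2 chroma subsampling

bytes_per_frame = width * height * bytes_per_pixel
rate = bytes_per_frame * fps            # bytes per second
print(f"{rate / 1_000_000:.1f} MB/s")   # about 20.7 MB/s before compression

Rates of this magnitude are why compression schemes such as MPEG-2 are essential for
broadcast and distribution.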

Analog video formats


 NTSC

 PAL

 SECAM

Digital Video Formats



These are MPEG-2 based terrestrial broadcast video formats:

 ATSC Standards

 DVB

 ISDB

These describe strictly the format of the video itself, not the modulation used for
transmission.

Table 5.1 Broadcast video formats

QuickTime

QuickTime is a multimedia framework developed by Apple Inc., capable of handling various
formats of digital video, media clips, sound, text, animation, music, and several types of
interactive panoramic images. Available for Classic Mac OS, Mac OS X, and Microsoft Windows
operating systems, it provides essential support for software packages including iTunes,
QuickTime Player (which can also serve as a helper application for web browsers to play
media files that might otherwise fail to open), and Safari.

The QuickTime technology consists of the following:

1. The QuickTime Player application created by Apple, which is a media player.

2. The QuickTime framework, which provides a common set of APIs for encoding and
decoding audio and video.

3. The QuickTime Movie (.mov) file format, an openly-documented media container.


QuickTime is integral to Mac OS X, as it was with earlier versions of Mac OS. All Apple
systems ship with QuickTime already installed, as it represents the core media framework
for Mac OS X. QuickTime is optional for Windows systems, although many software
applications require it. Apple bundles it with each iTunes for Windows download, but it is
also available as a stand-alone installation.

QuickTime players

QuickTime is distributed free of charge, and includes the QuickTime Player application.
Some other free player applications that rely on the QuickTime framework provide features not
available in the basic QuickTime Player. For example:

 iTunes can export audio in WAV, AIFF, MP3, AAC, and Apple Lossless.

 In Mac OS X, a simple AppleScript can be used to play a movie in full-screen mode.


However, since version 7.2 the QuickTime Player also supports full-screen viewing
in the non-pro version.

QuickTime framework

The QuickTime framework provides the following:

 Encoding and transcoding video and audio from one format to another.

 Decoding video and audio, and then sending the decoded stream to the graphics or audio
subsystem for playback. In Mac OS X, QuickTime sends video playback to the Quartz
Extreme (OpenGL) Compositor.

 A plug-in architecture for supporting additional codecs (such as DivX).

 The framework supports the following file types and codecs natively:

Audio

 Apple Lossless

 Audio Interchange (AIFF)

 Digital Audio: Audio CD — 16-bit (CDDA), 24-bit, 32-bit integer & floating point, and
64-bit floating point

 MIDI

 MPEG-1 Layer 3 Audio (.mp3)

 MPEG-4 AAC Audio (.m4a, .m4b, .m4p)

 Sun AU Audio

 ULAW and ALAW Audio

 Waveform Audio (WAV)

Video

 3GPP & 3GPP2 file formats

 AVI file format

 Bitmap (BMP) codec and file format

 DV file (DV NTSC/PAL and DVC Pro NTSC/PAL codecs)

 Flash & FlashPix files

 GIF and Animated GIF files

 H.261, H.263, and H.264 codecs

 JPEG, Photo JPEG, and JPEG-2000 codecs and file formats



 MPEG-1, MPEG-2, and MPEG-4 video file formats and associated codecs (such as AVC)

 QuickTime Movie (.mov) and QTVR movies

 Other video codecs: Apple Video, Cinepak, Component Video, Graphics, and Planar RGB

 Other still image formats: PNG, TIFF, and TGA

Specification for QuickTime file format

Table 5.2 Specification for QuickTime file format

The QuickTime (.mov) file format functions as a multimedia container file that contains
one or more tracks, each of which stores a particular type of data: audio, video, effects, or text
(for subtitles, for example). Other file formats that QuickTime supports natively (to varying
degrees) include AIFF, WAV, DV, MP3, and MPEG-1. With additional QuickTime Extensions, it
can also support Ogg, ASF, FLV, MKV, DivX Media Format, and others.

5.4 Making Instant Multimedia

If your current software can do what you need, then there is no need to obtain a dedicated
multimedia authoring package, because:

1. It can save you money.

2. You are already familiar with the tools.

3. There is no arduous and lengthy learning curve.

Most PCs sold today provide the necessary elements to produce at least sound and
animation. Popular software for word processing, spreadsheets, DBMS, graphing, drawing,
and presentation has added capabilities for sound, image, and animation. Nowadays you can:

1. Add multimedia elements to your word processing documents, spreadsheets, and HTML
documents.

2. Call a voice annotation, picture or QuickTime/AVI movie from most word processing
applications.

3. Click a spreadsheet cell to call up graphic images, sounds and animations.

4. Include pictures, audio clips and movies in your database.

A presentation need no longer be just a simple slide show; you can easily generate
interesting titles, visual effects, and animated illustrations using your presentation software.

Where do you get all these multimedia elements?


1. Make them from scratch. (If you decide to start from scratch or edit existing material, you
need special hardware and software tools; that may produce more spectacular and lively
products.)

2. Import them from collections of clip art media. (They provide quick and simple multimedia
productions.)

3. License rights to use resources or content such as pictures, songs, music, and video
from their owners.

Some simple multimedia projects can be produced in such a way that you cram all the
organizing, planning, rendering, and testing stages into a single effort, making instant multimedia.

 Linking Multimedia Objects

 Apple Events,

 DDE and OLE

 Word Processors

 Word

 WordPerfect

 Word Pro

 Spreadsheets

 Lotus 1-2-3

 Excel

 Databases

 FileMaker Pro

 Access

 Presentation Tools

 PowerPoint

5.5 Overview of Multimedia Software and Authoring Tools


The categories of software tools briefly examined here are:

1. Music Sequencing and Notation

2. Digital Audio

3. Graphics and Image Editing

4. Video Editing

5. Animation

6. Multimedia Authoring

1. Music Sequencing and Notation

o Cakewalk: now called Pro Audio.

– The term sequencer comes from older devices that stored sequences of notes (“events”,
in MIDI).

– It is also possible to insert WAV files and Windows MCI commands (for animation and
video) into music tracks (MCI is a ubiquitous component of the Windows API.)

o Cubase: another sequencing/editing program, with capabilities similar to those of
Cakewalk. It includes some digital audio editing tools.

o Macromedia SoundEdit: mature program for creating audio for multimedia projects
and the web that integrates well with other Macromedia products such as Flash and Director.

2. Digital Audio

Digital Audio tools deal with accessing and editing the actual sampled sounds that make
up audio:

 Cool Edit: a very powerful and popular digital audio toolkit; emulates a professional
audio studio — multi-track productions and sound file editing including digital signal
processing effects.

 Sound Forge: a sophisticated PC-based program for editing audio WAV files.

 Pro Tools: a high-end integrated audio production and editing environment — MIDI
creation and manipulation; powerful audio mixing, recording, and editing software.

3. Graphics and Image Editing

 Adobe Illustrator: A powerful publishing tool from Adobe. Uses vector graphics; graphics
can be exported to Web.

 Adobe Photoshop: The standard graphics, image-processing, and manipulation tool.

– Allows layers of images, graphics, and text that can be separately manipulated for
maximum flexibility.

– Filter factory permits creation of sophisticated lighting-effects filters.

 Macromedia Fireworks: Software for making graphics specifically for the web.

 Macromedia Freehand: A text and web graphics editing tool that supports many bitmap
formats such as GIF, PNG, and JPEG.

4. Video Editing

 Adobe Premiere: An intuitive, simple video editing tool for nonlinear editing, i.e., putting
video clips into any order:

o Video and audio are arranged in “tracks”.

o Provides a large number of video and audio tracks, superimpositions and virtual clips.

o A large library of built-in transitions, filters, and motions for clips enables effective
multimedia productions with little effort.

 Adobe After Effects: a powerful video editing tool that enables users to add and change
existing movies. Can add many effects: lighting, shadows, motion blurring; layers.

 Final Cut Pro: a video editing tool by Apple; Macintosh only.

5. Animation

 Multimedia APIs

o Java3D: API used by Java to construct and render 3D graphics, similar to the way in
which the Java Media Framework is used for handling media files.

1. Provides a basic set of object primitives (cube, splines, etc.) for building scenes.

2. It is an abstraction layer built on top of OpenGL or DirectX (the user can select which).

o DirectX: Windows API that supports video, images, audio, and 3-D animation

o OpenGL: the highly portable, most popular 3-D API.

 Rendering Tools:

o 3D Studio Max: rendering tool that includes a number of very high-end professional tools
for character animation, game development, and visual effects production.

o Softimage XSI: a powerful modeling, animation, and rendering package used for animation
and special effects in films and games.

o Maya: competing product to Softimage; as well, it is a complete modeling package.

o RenderMan: rendering package created by Pixar.

 GIF Animation Packages:

A simpler approach to animation that allows very quick development of effective small
animations for the web.

6. Multimedia Authoring

Multimedia Authoring: Tools that provide the capability for creating a complete multimedia
presentation, including interactive user control, are called authoring tools/programs.

 Macromedia Flash: allows users to create interactive movies by using the score metaphor,
i.e., a timeline arranged in parallel event sequences.

 Macromedia Director: uses a movie metaphor to create interactive presentations. Very
powerful, it includes a built-in scripting language, Lingo, which allows creation of complex
interactive movies.

 Authorware: a mature, well-supported authoring product based on the Iconic/Flow-control
metaphor.

 Quest: similar to Authorware in many ways, uses a type of flowcharting metaphor.
However, the flowchart nodes can encapsulate information in a more abstract way (called
frames) than simply subroutine levels.

(i) Authoring system in multimedia

 In multimedia authoring systems, multimedia elements and events are regarded as objects.

 Objects exist in a hierarchical order of parent and child relationships

 Each object is assigned properties and modifiers.

 On receiving messages, objects perform tasks depending on their properties and modifiers.

(ii) Authoring Tools Capability

Authoring tools should possess the following capabilities:

1. Interactivity

2. Playback

3. Editing

4. Programming / Scripting

5. Cross Platform

6. Internet Playability

7. Delivery/Distribution

8. Project organization

(iii) Features of Authoring Tools

1. Editing and organizing features.

2. Programming features.

3. Interactivity features.

4. Performance tuning and playback features.

5. Delivery, cross platform, and Internet Playability features.



1. Editing and organizing features

 Authoring systems include editing tools to create, edit, and convert multimedia elements
such as animation and video clips.

 The organization, design, and production process for multimedia involves storyboarding
and flowcharting.

 A visual flowcharting or overview facility illustrates project structure at a macro level.

2. Programming features

 Visual programming with icons or objects is the simplest and easiest authoring process.

 Visual authoring tools such as Authorware and IconAuthor are suitable for slide shows
and presentations.

3. Interactivity features

 Interactivity gives the end user control over the content and flow of information in a
project.

 Simple branching is the ability to go to another section of the multimedia production.

 Conditional branching is an activity based on the results of IF-THEN decisions or events (see the sketch after this list).

 Structured language supports complex programming logic, subroutines, event tracking,
and message passing among objects and elements.
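
The sketch below contrasts the two kinds of branching. It is purely illustrative: the function
and section names are invented and do not belong to any real authoring tool’s API.

# Illustrative contrast between simple and conditional branching.
def go_to(section):
    print(f"Jumping to {section}")

def on_menu_click(target_section):
    go_to(target_section)            # simple branching: jump directly

def on_quiz_complete(score):
    if score >= 80:                  # conditional branching: IF-THEN decision
        go_to("advanced_lesson")
    else:
        go_to("review_lesson")

on_menu_click("chapter_2")
on_quiz_complete(65)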

(iv) Types of Authoring Tools

 Card and page based tools.

 Icon based, event driven tools.

 Time based tools

 Card and page based authoring systems

o Card and page based authoring systems provide a simple and easily understood metaphor
for organizing multimedia elements.

o It contains media objects such as buttons, text fields, and graphic objects.

o It provides a facility for linking objects to pages or cards.

Example of authoring tools

 HyperCard (Mac)

 ToolBook (Mac / Windows)

 Icon-based, event-driven tools.

o Icon based, event driven tools provide a visual programming approach to organize and
present multimedia.

o Multimedia elements and interaction cues are organized as objects in a flowchart.

o Flowchart can be built by dragging appropriate icons from a library, and then adding the
content.

o Examples of authoring tools

o Authorware (Mac/Windows)

o IconAuthor (Windows)

 Time based authoring tools

 Time-based tools are best suited for messages with a beginning and an end.

 Some time-based tools facilitate navigation and interactive control.

 Macromedia’s Director and Flash are time-based development environments.

 Example: Macromedia Director / Flash (Mac/Windows)



(v) Applications of Authoring Tools

 Image Processing

 Image Enhancement

 Medical Imaging

Check your Progress


1. Simple pictures or maps are created by:

a. Bitmapped graphics programs

b. Painting programs.

c. Vector graphics programs.

d. Resolution programs

2. Software that stores lines and shapes rather than individual pixels is known as:

a. Vector graphics software.

b. Raster graphics software.

c. Photo database software.

d. Resolution software

3. Say True or False

Many bitmapped images in a sequence are known as a GIF animation.

4. ____________ Software can rotate, stretch, and combine images with other model objects.

5. Match the following software programs with their capabilities:

I. image processing software A. stores a picture as a collection of lines and shapes

II. painting software B. can create pixels on the screen with a pointing device

III. photo management software C. can eliminate “red eye” and brush away blemishes

IV. drawing software D. can create objects or models that can be rotated or stretched

V. 3-D modeling software E. simplify and automate capturing, organizing, and editing
digital images

VI. video editing software F. automates the creation of visual aids for lectures

6. Multimedia elements are typically sewn together into a project using _________________

7. CD-XA allows the storage of

a. Digital Audio, Text, Graphics and Video

b. Only Audio Data

c. Only Text Data

d. Only Video Data

5.6 Production Standards

MHEG (Coded Representation of Multimedia and Hypermedia Information Objects, ISO
CD 13522), from the Multimedia and Hypermedia information coding Expert Group, a draft standard,
is the nearest to an overall standard for multimedia at a high level. Reference needs to be
made to various other areas of standards within this, or in addition to this, such as those for the
various mono-media elements contributing to multimedia. Many of the relevant standards are
important to data interchange and are specified by the CIMI Standards Framework.

System Standards

In addition to the various so-called standards for general computer systems, which tend
to be set by the manufacturers and are really proprietary or possibly de facto standards, there
are some developments specific to multimedia systems. These include the MPC standard, a
base specification for a multimedia PC, and interface standards such as MCI (Media Control
Interface), HCI (Human Computer Interface, ISO 9241, under development), and API (Application
Programming Interface). Some ‘standards’ are beginning to develop for software, and standards
for the development of systems, including multimedia systems, may eventually become ISO
standards. The IMA (Interactive Multimedia Association) is industry led, producing recommended
practices, and is currently working on multimedia system services, data exchange, and scripting
languages.

Capture and Encoding Standards

For scanning quality control and OCR (Optical Character Recognition) procedures and
preparation various national standards exist, such as North American ANSI standards.

ODA (Office Document Architecture) and SGML (Standard Generalized Markup Language,
ISO 8879) are standards for describing electronic documents, for document interchange. They
can also be used for hypertext, which is used in multimedia applications. These two standards
have been developed to define formats for presentation of multimedia and hypermedia
information, and are also necessary for editing and manipulating, and for facilitating interchange
of such data between applications. MHEG (Coded Representation of Multimedia and Hypermedia
Information Objects, draft ISO CD13522) is extending the standards for text to include other
data and media. Hytime (Hypermedia/Time-Based Document Structuring Language, ISO 10744
extends the markup of single documents using SGML to multiple data objects or documents.

For data compression encoding, various standards exist for different media, e.g., JPEG
(Joint Photographic Experts Group) for the digital coding of still images and MPEG (Motion Picture
Experts Group) for motion pictures and associated audio. For data encoding, many standards
exist which are really de facto standards rather than being formally accepted as standards.
These include the file formats mentioned earlier. The widely used TIFF (Tagged Image File Format) is
one. However, there are different versions of TIFF files, which may not be compatible.

Storage and Retrieval Standards

CD-ROM is the most standardized and widely used of optical media. Standards exist for
the physical and optical characteristics of optical discs. The major disc sizes have different
format standards, with some national and ISO standards in place. The CD-ROM format is ISO
10149 for the recording format, with ISO 9660 for the ‘logical format’, i.e. the file structure. All
Photo CD disc formats conform to this. It should be noted that there are offshoots from the
main ISO 9660.

Many WORM media and drives use proprietary standards. 5.25" WORM format discs
have 3 different incompatible standards. ISO 9171 covers both formats A and B. Larger optical
discs have some draft standards. 5.25" and 3.5" rewritable optical discs have ISO standards
which are adhered to, but some imaging systems use nonstandard discs.

Volume and file structure standards enable operating systems to understand and access
files. The Yellow Book standard for CD-ROM and CD-ROM XA covers the use of audio and
video with computer data. The Green Book standard covers CD-I with its better audio and
video image quality. A new White Book covers the CD standard for Digital Video. An Orange
Book standard covers CD-R discs. For display standards, see the earlier section on content
formats. Note should be taken that Apple machines and PCs differ in the way they store data
for screen display, and files in the same format may not translate from one to the other.

Where ISO or national standards relevant to multimedia exist, they should be specified in
any project requirement, along with the instruction to specify what standards are adopted for
particular aspects of a product or system where there may be a choice.

5.7 Summary

 A word processor is a regularly used tool in designing and building a multimedia project.

 Image-editing software: bitmapped images provide the greatest choice and power to the
artist for rendering fine detail and effects.

 Animations and digital video movies are sequences of bitmapped graphic scenes or
frames, rapidly played back.

 With proper editing software, you can digitize video, edit, add special effects and titles,
mix sound tracks, and save the clip.

 Three metaphors are used by authoring tools that make multimedia: card- and page-
based, icon- and object-based, and time-based.

 When choosing an authoring system, consider its editing, organizing, programming,
interactivity, performance, playback, cross-platform, and delivery features.

 MHEG (Coded Representation of Multimedia and Hypermedia Information Objects, ISO
CD 13522), from the Multimedia and Hypermedia information coding Expert Group, a draft
standard, is the nearest to an overall standard for multimedia at a high level.

 For data compression encoding, various standards exist for different media, e.g., JPEG
(Joint Photographic Experts Group) for the digital coding of still images and MPEG (Motion
Picture Experts Group) for motion pictures and associated audio.

5.8 Check Your Answers


1. a. Bitmapped graphics programs

2. a. Vector graphics software

3. a. True

4. 3-D modeling

5. I-C, II-B, III-E, IV-A, V-D, VI-F

6. Authoring Tools

7. a. Digital Audio, Text, Graphics and Video

5.9 Model Questions


1. List the software tools of multimedia.

2. Categorize multimedia tools.

3. What is Multimedia Authoring?

4. Describe multimedia tools in detail.

5. Explain the step-by-step procedure to create instant multimedia.

6. List the multimedia production standards.

7. List the multimedia software available in the market.

8. Explain the multimedia software.

9. What are authoring tools? Explain in detail.

10. Describe briefly about the broadcast video standards.



LESSON 6
MULTIMEDIA ELEMENTS – TEXT AND SOUND

Structure
6.1 Introduction

6.2 Learning Objectives

6.3 Multimedia Building Blocks

6.4 Text in Multimedia

6.5 Sound in Multimedia

6.6 Summary

6.7 Check Your Answers

6.8 Model Questions

6.1 Introduction

Multimedia is media that uses multiple forms of information content and information
processing (e.g., text, audio, graphics, animation, video, and interactivity) to inform or entertain
the user. Multimedia also refers to the use of electronic media to store and experience multimedia
content. Multimedia is a combination of various elements, such as text, images, video, sound,
and animation. Interactive multimedia allows the user to control what elements are delivered
and when. Defining a multimedia application using building blocks as components is a general
approach that can easily integrate existing development tools.

All multimedia content includes text in some form. Even a menu item is text accompanied by a single action such as a mouse click, a keystroke or a finger press on the monitor (in the case of a touch screen). Text in multimedia is used to communicate information to the user. Proper use of text and words in a multimedia presentation helps the content developer communicate ideas and messages to the user.

Many multimedia developers take advantage of this sense by incorporating sound into their multimedia products. Sound enhances a multimedia application by supplementing presentations, images, animation, and video. In the past, only those who could afford expensive sound recording equipment and facilities could produce high-quality digital sound. Today, computers and synthesizers make it possible for the average person to produce comparable sound and music. Sound is the term used for the analog form; the digitized form of sound is called audio. A sound is a waveform, produced when waves of varying pressure travel through a medium, usually air. It is inherently an analog phenomenon, meaning that the changes in air pressure can vary continuously over a range of values.

6.2 Learning Objectives

In this lesson we will learn the different multimedia building blocks. Later we will learn the
significant features of text.

At the end of the lesson, the learner will be able to

 List the different multimedia building blocks

 Describe the characteristics and attributes of text, graphic, sound, animation and video
elements that make up multimedia

 Understand the various file format used for each of these elements

 Understand the role of sound as a multimedia element

6.3 Multimedia Building Blocks


Any multimedia application consists of any or all of the following components:

1. Text: Text and symbols are very important for communication in any medium. With the recent explosion of the Internet and World Wide Web, text has become more important than ever. The language of the Web is HTML (Hypertext Markup Language), originally designed to display simple text documents on computer screens, with occasional graphic images thrown in as illustrations.

2. Audio: Sound is perhaps the most important element of multimedia. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background.

3. Images: Images, whether analog or digital, play a vital role in multimedia. They are expressed in the form of still pictures, paintings or photographs taken with a digital camera.

4. Animation: Animation is the rapid display of a sequence of images of 2-D artwork or model positions in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon of persistence of vision, and can be created and demonstrated in a number of ways.

5. Video: Digital video has supplanted analog video as the method of choice for making video for multimedia use. Video in multimedia is used to portray real-time moving pictures in a multimedia project.

6.4 Text in Multimedia

Words and symbols in any form, spoken or written, are the most common system of
communication. They deliver the most widely understood meaning to the greatest number of
people. Most academic text, such as journals and e-magazines, is available in web-browser-readable form.

Text is a collection of characters that carries a specific meaning the user can understand easily. Text is used for communication: the information you are trying to convey is presented as text. Text thus plays a vital role in multimedia.

Definition: Text is the printed or written version of speech; it gives the main facts about a subject.

About Fonts and Faces

Typeface: A typeface is a family of graphic characters that usually includes many type sizes and styles.

Font: A font is a collection of characters of a single size and style belonging to a particular
typeface family. Typical font styles are bold face and italic.

Type sizes are usually expressed in points; one point is 0.0138 inch, or about 1/72 of an inch.
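Since these unit relationships are purely arithmetic, a small sketch can make them concrete. The function names and the 96-dpi display value below are illustrative assumptions, not part of any standard:

POINTS_PER_INCH = 72

def points_to_inches(points):
    return points / POINTS_PER_INCH

def points_to_pixels(points, dpi):
    # dpi = dots per inch of the output device
    return points * dpi / POINTS_PER_INCH

print(points_to_inches(12))      # 0.1666..., close to 12 x 0.0138
print(points_to_pixels(12, 96))  # 16.0 pixels on a 96-dpi display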

 The font’s size is the distance from the top of the capital letters to the bottom of the descenders in letters such as g and y.

 A font’s size does not exactly describe the height and width of its characters. This is
because the x-height (the height of the lower case letter x) of two fonts may vary, while
the height of the capital letters of those fonts may be the same.

 Computer fonts automatically add space below the descender to provide appropriate line
spacing, or leading (pronounced “ledding”).

 Leading can be adjusted in most programs on both Macintosh and Windows. Lowercase letters have ascenders and descenders that vary the vertical extent of a line; uppercase letters do not.

Fig. 6.1 Measurement Type

Character Metrics: These are the general measurements applied to individual characters.

Kerning: It is the spacing between character pairs.

Fig. 6.2 Kerning



When the computer converts a letter from a mathematical representation to a recognizable symbol displayed on the screen or in printed output (a process called rasterizing), it must know how to represent the letter using tiny square pixels (picture elements), or dots.

High-resolution monitors and printers can make more attractive-looking and varied
characters because there are more fine little squares or dots per inch (dpi).

The same letter can look very different when you use different fonts and faces:

Fig. 6.3 Various Fonts

Cases: A font is always stored in two cases: capital letters (uppercase) and small letters (lowercase).

Serif vs. Sans Serif

Typefaces of fonts can be described in many ways, but the most common characterization
of a typeface is serif and sans serif. The serif is the little decoration at the end of a letter
stroke.

Example: Times, Times New Roman and Bookman are fonts in the serif category; Arial, Optima and Verdana are examples of sans serif fonts. Serif fonts are generally used for the body of the text for better readability, and sans serif fonts are generally used for headings.

The following fonts show a few categories of serif and sans serif fonts.

Fig. 6.4 Font Face



Installation of Fonts

Fonts can be installed on the computer by opening the fonts folder through Windows
Explorer. Go to C:\WINDOWS or C:\WINNT\FONTS. When the folder opens, select the fonts
you want to install from an alternate folder and copy and paste them into the fonts folder. The
second option is to go to Start > Settings > Control Panel > Fonts, then go to File > Install New
Font.
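As a quick illustration of the folder-based installation described above, the short Python sketch below lists the font files already installed; the path assumes a standard Windows installation and may differ on other systems:

import os

# List installed font files (assumes the standard Windows fonts folder).
fonts_dir = os.path.join(os.environ.get("WINDIR", r"C:\Windows"), "Fonts")
for name in sorted(os.listdir(fonts_dir)):
    if name.lower().endswith((".ttf", ".otf")):
        print(name)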

Usage of Fonts

After installing a font, you can apply it to existing text in any text-editing program. A user can also use the installed font in HTML documents, but the document can be viewed only by users who have the same font installed on their computers. Always remember the name of the font, and keep in mind that the name of the font is not the same as the file name of the .ttf file. If a user does not remember the font name, he can find it by going through the font list or by opening the .ttf file.

Selecting Text fonts

It is a very difficult process to choose the fonts to be used in a multimedia presentation. Following are a few guidelines which help in choosing a font for a multimedia presentation.

 Any number of typefaces can be used in a single presentation; this concept of using many fonts on a single page is called ransom-note typography.

 For small type, it is advisable to use the most legible font.

 In large size headlines, the kerning (spacing between the letters) can be adjusted

 In text blocks, the leading for the most pleasing line can be adjusted.

 Drop caps and initial caps can be used to accent the words.

 The different effects and colors of a font can be chosen in order to make the text look in
a distinct manner.

 Anti-aliasing can be used to make text look gentle and blended.

 For special attention to the text the words can be wrapped onto a sphere or bent like a
wave.

 Meaningful words and phrases can be used for links and menu items.

 In case of text links (anchors) on web pages the messages can be accented.

 The most important text in a web page, such as a menu, can be put in the top 320 pixels.

Using Text in Multimedia

The basic element of multimedia is text. However, text should be kept to a minimum to avoid overcrowding, unless the application contains a lot of reference material. Less text can be read easily and quickly, unlike longer text passages, which can be time-consuming and tiring. A large amount of text is not the ideal way to transfer information to a wide range of audience in a multimedia presentation. Combining other elements such as pictures, graphics, diagrams, etc., can help reduce the amount of text needed to provide information.

From a design point of view, text should fill less than half the screen. There are the following ways in which text can be used in multimedia:

 in text messaging

 in advertisements

 in a website

 in films such as titles and credits

 as subtitles in a film or documentary that provide a translation

Using Text Elements in a Multimedia Presentation

The text elements used in multimedia are given below:

Menus for Navigation

 A user navigates through content using a menu.

 A simple menu consists of a text list of topics.



Interactive Buttons

 A button is a clickable object that executes a command when activated.

 Users can create their own buttons from bitmaps and graphics.

 The design and labeling of the buttons should be treated as an industrial art project.

Fields for Reading

 Reading a hard copy is easier and faster than reading from the computer screen.

 A document can be printed in one of two orientations - portrait or landscape.

 The taller-than-wide orientation used for printing documents is called portrait.

 The wider-than-tall orientation that is normal to monitors is called landscape.

HTML Documents

 HTML stands for Hypertext Markup Language which is the standard document format
used for Web pages.

 HTML documents are marked using tags.

 An advanced form of HTML is DHTML that stands for Dynamic Hypertext Markup
Language. It uses Cascading Style Sheets (CSS).

 Some of the commonly used tags are:

o The <B> tag for making text bold faced.

o The <OL> tag for creating an ordered list.

o The <IMG> tag for inserting images.

Symbols and Icons

 Symbols are concentrated text in the form of stand-alone graphic constructs, used to convey meaningful messages; symbols that convey human emotions are called emoticons.

 Icons are symbolic representations of objects and processes.



Text Layout

While creating a multimedia presentation, the presenter should plan the text layout to let a reader read it with ease. One of the first things to keep in mind is the length of a line of text. It should be neither too long nor too short. For a printed document, a line containing 13 to 17 words is sufficient. A line having more than 17 words would be too long to fit on a screen and would be difficult to follow. On the other hand, a very short line would not look good on screen. Therefore, for better presentation, a line of around 8 to 15 words should be used.

Use of Text in Webs

Using text in websites attracts a visitor’s attention as well as helps him in understanding the webpage better. It is far better than the use of meaningless graphics and images which do not contribute to an understanding of the page.

Website Loading Speed

Website loading speed is one of the important factors that influence conversion, as visitors start to leave a page if it takes more than eight seconds to load. A website which contains mostly text loads faster than websites that contain the following:

 Internal code (not placed in external CSS, JS, etc. files and linked to)

 A lot of images and graphics

 JavaScript (for menus, including various stat tracking scripts, such as Google Analytics).

 Audio and video clips on the page (especially without transcripts, which hurts accessibility
if you do use audio/video, do not auto-launch it and have a button to turn it on/off).

 Table-based layouts, which can be twice as large in file size as ones built in CSS.

Text in Films such as Titles and Credits

Most films start with titles and end with credits. The text is shown over either a plain background or a colored background. Typography looks different in different formats, such as film subtitles, websites, posters, essays, etc. To include text in multimedia, a designer has to keep in mind the points given below:

 The theme or look of the multimedia product.

 The amount of text needed.

 The placement of the text (heading, body text or logo).

 The format of the project (video, website, blog, slideshow, etc.).

 The content of the information.

Text in Subtitles in a Film or Documentary

Before adding subtitles to a film, people working on the film need to look into different font styles, spacing, font color and size. Some fonts work well on a website, while others work well in print.

Significance of Text Based Advertising

 Since text ads are keyword oriented, they draw more attention than banner advertising.

 The text ads are inexpensive, thus making it affordable and effective for your business.

 There are a few websites which offer flat-fee rental services to place your text-based advertisements.

 A few websites request a one-time payment to place your text ads.

 The foremost benefit of having text based advertisements is that it helps in improving
your search engine ranking.

 Since it creates more visibility and draws more traffic to your site, your page rank will be
improved.

Thus, text ads will help in making your business a successful venture.

Character set and alphabets

 ASCII Character set

The American Standard Code for Information Interchange (ASCII) is the 7-bit character coding system most commonly used by computer systems in the United States and abroad.

ASCII assigns numeric values to 128 characters, including both lower- and uppercase letters, punctuation marks, Arabic numerals and math symbols. 32 control characters are also included.

These control characters are used for device control messages, such as carriage return,
line feed, tab and form feed.

The Extended Character set

A byte, which consists of 8 bits, is the most commonly used building block for computer processing. ASCII uses only 7 bits to code its 128 characters; the 8th bit of the byte is unused. This extra bit allows another 128 characters to be encoded before the byte is used up, and computer systems today use these extra 128 values for an extended character set. The extended character set is commonly filled with ANSI (American National Standards Institute) standard characters, including frequently used symbols.

 Unicode

Unicode makes use of a 16-bit architecture for multilingual text and character encoding. Unicode can represent about 65,000 characters from all known languages and alphabets in the world.

Several languages share sets of symbols that have a historically related derivation; the shared symbols of each language are unified into collections of symbols (called scripts). A single script can work for tens or even hundreds of languages.

Microsoft, Apple, Sun, Netscape, IBM, Xerox and Novell have participated in the development of this standard, and Microsoft and Apple have incorporated Unicode into their operating systems.
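The difference between the 7-bit ASCII range, the extended 8-bit range and 16-bit Unicode can be illustrated with Python's built-in string functions; the sample word below is an arbitrary choice:

print(ord("A"))        # 65: a code value within the 7-bit ASCII range
print(chr(233))        # 'é': a value from the extended (8-bit) range

text = "multimédia"    # arbitrary sample containing a non-ASCII letter
utf16 = text.encode("utf-16-le")   # a 16-bit encoding, as Unicode uses
print(len(text), "characters ->", len(utf16), "bytes")  # 2 bytes per character here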

Font Editing and Design Tools

A font editor is a class of application software specifically designed to create or modify font files. Font editors differ greatly depending on whether they are designed to edit bitmap fonts or outline fonts. Most modern font editors deal with outline fonts. Special font editing tools can be used to make your own type, so you can communicate an idea or graphic feeling exactly.

With these tools, professional typographers create distinct text and display faces.

(i) ResEdit
 ResEdit is a resource editor available from Apple that is useful for creating and changing graphic resources such as cursors, icons, dialog boxes, patterns, keyboard maps, and bitmapped fonts on the Macintosh.

 It can be used to edit or create new font resources for storing the bitmaps of screen fonts.

(ii) Fontographer
 Fontographer, a powerful font editor supplied by Macromedia, is a specialized graphics editor for both Macintosh and Windows platforms.

 You can use it to develop PostScript, TrueType and bitmapped fonts for Macintosh,
Windows, DOS, NeXT, and Sun workstations.

 Designers can also modify existing typefaces, incorporate PostScript artwork, automatically
trace scanned images, and create designs from scratch.

 Fontographer’s features include a freehand drawing tool to create professional and precise
in-line and outline drawings of calligraphic and script characters.

 Fontographer allows the creation of multiple font designs from two existing typefaces,
and you can design lighter or heavier fonts by modifying the weight of an entire typeface.

 Fonts can be condensed, expanded, scaled, rotated, and skewed to create new unique
typefaces.

 A metric window provides complete control over character width, spacing, offset, and
kerning.

Type-Designer
 Type-Designer for Windows from DS Design is a font editor that lets you create, convert, and manipulate PostScript Type 1 and TrueType fonts as well as EPS file format illustrations.

 An extensive palette of editing tools allows you to make changes to a font’s outline.

 With Type-Designer you can open up to eight typefaces simultaneously and cut and
paste characters between them.

Font Monger
 Font Monger from Ares Software offers a proprietary hinting technology to ensure that
your fonts will look good regardless of size.

 To create new fonts or to manipulate existing ones, Font Monger includes a freehand
drawing tool, a scissors tool, and a gizmo tool that rotates, slants, and skews character
outlines. Font Monger converts Macintosh or PC fonts to either platform as well as in any
direction between PostScript Type 1, Type 3, and True Type formats.

 It allows you to edit and expand the font of small caps, oblique, subscript or superscript
characters.

 Font Monger will also save the previous original characters in the PostScript font so you
can modify it further in the future, or, if you wish to save on disk space, compress the font
and it will remove the extra information.

 Font Monger does not allow editing of the actual outlines of a font but it allows many other
functions such as the ability to copy characters between fonts, perform various
transformations to any or all characters of a font, and create a variety of composite
characters such as fractions and accented characters.

Cool 3D Text

Cool 3D Production Studio is a program for creating and animating 3D text and graphics,
for videos and other multimedia products. This software runs on Windows 98SE/ ME/2000/XP.

With this program, a user can create 3D graphics and animations for videos. It includes new modeling tools, animation plug-ins, and new features for animation and video.

Font Chameleon
 Font Chameleon from Ares software for both Macintosh and Windows platforms builds
millions of different fonts from a single master font outline.

 The program provides a number of preset font descriptors, which you build into a PostScript Type 1 or TrueType font.



 With slide bars you can manipulate various aspects of the font, including its weight, width,
x-height, ascenders and descenders, and the blend of the serifs.

 The fonts you do build from the master outline can be used on the Macintosh, Windows,
or OS/2 platforms.

Making Pretty Text


 To make your text look pretty, you need a toolbox of fonts and special graphics applications
that can stretch, shade, shadow, color, and anti-alias your words into real artwork.

 Most designers find it easier to make pretty type starting with ready-made fonts, but some will create their own custom fonts using font-editing and design tools such as Fontographer, Type-Designer, and Font Monger.

Hypermedia and Hypertext

Hypermedia information spaces are connected by non-linear links which a user may
follow in any order. Multimedia information spaces are arranged sequentially, with only one
path through the information provided.

Example: Educational television tends to be the prime example of multimedia information.

Hypertext is different from normal text in that it is nonlinear. The reader need not read a
document from beginning to end, but can jump around within the document by clicking on hot
spots (or hyperlinks) in the text.

On the other hand, hypermedia involves more than simply hyperlinked text. It also
incorporates images, sounds, and video into the document. This allows for a more graphical
interface to information. Most web pages should be considered hypermedia instead of simply
hypertext.

The function of hypertext is to build links and generate an index of words. The index helps to find and group words as per the user’s search criteria. Hypertext systems are very useful in multimedia interactive education courseware. Hypertext systems provide both unidirectional and bidirectional navigation. Navigation can be through buttons or through simple, plain text. The simplest and easiest navigation is through linear hypertext, where information is organized in a linear fashion. Nonlinear hypertext, however, is the ultimate goal of effective navigation.

Individual chunks of information are usually referred to as documents or nodes, and the connections between them as links or hyperlinks (the so-called node-link hypermedia model). The entire set of nodes and links forms a graph network. A distinct set of nodes and links which constitutes a logical entity or work is called a hyperdocument; a distinct subset of hyperlinks is called a hyperweb. A source anchor is the starting point of a hyperlink and specifies the part of a document from which an outgoing link can be activated. Typically, the user is given visual cues as to where source anchors are located in a document (for example, a highlighted phrase in a text document). A destination anchor is the endpoint of a hyperlink and determines what part of a document should be on view upon arrival at that node (for example, a text might be scrolled to a specific paragraph). If an entire document is specified as the destination, viewing commences at some default location within the document (for example, the start of the text).

Figure 6.5 illustrates these concepts graphically.

Fig. 6.5 Hyper Media and Hyper Text
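The node-link model lends itself to a simple data-structure sketch. The Python fragment below only illustrates the concepts of Figure 6.5; the node names, record fields and helper function are hypothetical:

# Nodes hold content; links pair a source anchor with a destination anchor.
nodes = {
    "home":    "Welcome page text ...",
    "history": "Company history ...",
    "contact": "Contact details ...",
}

links = [
    {"source": ("home", "our history"),  "dest": ("history", "top")},
    {"source": ("home", "get in touch"), "dest": ("contact", "top")},
    {"source": ("history", "reach us"),  "dest": ("contact", "phone")},
]

def follow(current_node, phrase):
    # Activate the link whose source anchor matches the highlighted phrase.
    for link in links:
        node, anchor_text = link["source"]
        if node == current_node and anchor_text == phrase:
            return link["dest"]
    return None

print(follow("home", "our history"))   # ('history', 'top')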



Referential and Organizational Links

Some authors distinguish between referential and organizational hyperlinks. Referential links are the cross-references distinctive of hypermedia. Organizational links are special links which establish explicit structure by connecting a parent node with its children, forming a tree within the overall node-link graph.

Using Hypertext Systems


 Information management and hypertext programs present electronic text, images, and
other elements in a database fashion.

 Software robots visit Web pages and index entire Web sites.

 Hypertext databases make use of proprietary indexing systems.

 Server-based hypertext and database engines are widely available.


Searching for Words

Typical methods for word searching in hypermedia systems are as follows:

 Categorical search

 Adjacency

 Word relationship

 Alternates

 Frequency

 Association

 Truncation

 Negation

 Intermediate words

Hypermedia Structures

 Links

 Nodes

 Anchors

 Navigating hypermedia structures

Nodes

 Nodes are accessible topics, documents, messages and content elements.

 Nodes and links form the backbone of a knowledge access system.

 Links are connections between conceptual elements and are known as navigation
pathways and menus.

Anchors

 Anchor is defined as the reference from one document to another document, image,
sound, or file on the Web.

 The destination node linked to the anchor is referred to as a link end.

 The source node linked to the anchor is referred to as a link anchor.

Navigating Hypermedia Structures

 Location markers must be provided to make navigation user-friendly.

 The simplest way to navigate hypermedia structures is via buttons.

Hypertext Tools

 Two functions common to most hypermedia text management systems are building
(authoring) and reading.

 The functions of the ‘builder’ are:

o Generating an index of words

o Identifying nodes

o Creating links

Hypertext systems are used for:

 Technical documentation

 Electronic catalogues

 Interactive kiosks

 Electronic publishing and reference works

 Educational courseware

Nodes, Links and Navigation

Sometimes a physical web page behaves like two or more separate chunks of content. The page is not the essential unit of content in websites built with Flash (an animation technology from Macromedia) or in many non-web hypertext systems. Hence, the term node is used for the fundamental unit of hypertext content. Links are the pathways between nodes. When a user clicks links, a succession of web pages appears, and it seems that the user is navigating the website. For a user, exploring a website is much like finding the way through a complex physical environment such as a city. The user chooses the most promising route and, if lost, may backtrack to familiar territory or even return to the home page to start over. A limitation of the navigation metaphor is that it does not correspond to the full range of user behaviour. The majority of users click the most promising links they see, which has forced web designers to create links that attract users.

Information Structures

Website designers and other hypertext authors must work hard to decide which nodes will be linked to which other nodes. There are familiar arrangements of nodes and links that guide designers as they work; these are called information structures. Hierarchy, web-like and multi-path are three of the most important of these structures.

Hierarchical Structure

The hierarchy is the most important structure because it is the basis of almost all websites
and most other hypertexts. Hierarchies are orderly (so users can grasp them) and yet they
provide plenty of navigational freedom. Users start at the home page, descend the branch that
most interests them, and make further choices as the branch divides. At each level, the
information on the nodes becomes more specific. Notice that branches may also converge.

When designing larger hypertexts, website designers must choose between making the
hierarchy broader (putting more nodes on each level) or deeper (adding more levels). One
well-established design principle is that users more easily navigate a wide hierarchy (in which
nodes have as many as 32 links to their child nodes) than a deep hierarchy.
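The breadth-versus-depth trade-off can be made concrete with a little arithmetic: with b links per node, a uniform hierarchy d levels deep can reach roughly b to the power d pages. The figures below are illustrative:

def reachable_pages(branching, depth):
    # Pages reachable at the bottom level of a uniform hierarchy.
    return branching ** depth

print(reachable_pages(32, 2))   # 1024 pages within two clicks (wide hierarchy)
print(reachable_pages(4, 5))    # 1024 pages, but five clicks deep (deep hierarchy)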

Fig. 6.6 Hierarchy structure



Web-like Structures

Nodes can be linked to one another in web-like structures. There are no specific designs to follow, but web designers must take care in deciding which links will be most helpful to users. Many web-like structures become tangled and cause trouble for users navigating them.

Multi-path Structures

It is possible to build a sequence of nodes that is in large part linear but offers various
alternative pathways. This is called multi-path structure. Users find multi-path structures within
hierarchical websites. For instance, a corporate website may have a historical section with a
page for each decade of the company’s existence. Every page has optional digressions, which
allows the user to discover events of that decade’s -like websites and non-web hypertexts are
made. Many web-like hypertexts are short stories and other works of fiction, in which artistic
considerations may override the desire for efficient navigation.

Check your Progress


1. A family of graphic characters that usually includes many type sizes and styles is called a

a. typeface

b. font

c. point

d. link

2. Which of the following is a term that applies to the spacing between characters of text?

a. Leading

b. Kerning

c. Tracking

d. Dithering

3. “What you see is what you get” is known by which acronym?



4. __________ text and graphics creates “smooth” boundaries between colors.

a. Compiling

b. Anti-aliasing

c. Hyperlinking

d. Authoring

5. To receive signal, a translator is needed to decode signal and encode it again at a

a. Higher Quality

b. Lower Quality

c. Same Quality

d. Bad Quality

6. Each individual measurement of a sound that is stored as digital information is called a


________________

7. MIDI stands for _______________

6.5 Sounds in Multimedia

Sound is perhaps the most important element of multimedia. It is meaningful “speech” in any language, from a whisper to a scream. It can provide the listening pleasure of music, the startling accent of special effects or the ambience of a mood-setting background. Sound is the term used for the analog form, and the digitized form of sound is called audio.

Multimedia Sound Systems

The multimedia application user can use sound right off the bat on both the Macintosh and on a multimedia PC running Windows, because beeps and warning sounds are available as soon as the operating system is installed. On the Macintosh you can choose one of several sounds for the system alert. In Windows, system sounds are WAV files, and they reside in the Windows\Media subdirectory.

There are still more choices of audio if Microsoft Office is installed. Windows makes use
of WAV files as the default file format for audio and Macintosh systems use SND as default file
format for audio.

Digital Audio

The sound recorded on an audio tape through a microphone or from other sources is in
an analogue (continuous) form. The analogue format must be converted to a digital format for
storage in a computer. This process is called digitizing. The method used for digitizing sound is
called sampling.

Digital audio represents a sound stored in thousands of numbers or samples. The quality
of a digital recording depends upon how the samples are taken. Digital data represents the
loudness at discrete slices of time. It is not device dependent and should sound the same each
time it is played. It is used for music CDs.

Preparing Digital Audio Files

Preparing digital audio files is fairly straightforward if you have analog source material: music or sound effects that you have recorded on analog media such as cassette tapes.

 The first step is to digitize the analog material by recording it onto computer-readable digital media.

 It is then necessary to focus on two crucial aspects of preparing digital audio files:

- Balancing the need for sound quality against your available RAM and hard disk resources.

- Setting proper recording levels to get a good, clean recording.

The sampling rate determines the frequency at which samples will be drawn for the recording. The number of times the analogue sound is sampled during each period and transformed into digital information is called the sampling rate. Sampling rates are measured in hertz (Hz or kHz). The most common sampling rates used in multimedia applications are 44.1 kHz, 22.05 kHz and 11.025 kHz. Sampling at higher rates more accurately captures the high-frequency content of the sound; a higher sampling rate means higher quality of sound.
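A minimal sketch of sampling, assuming a pure 440 Hz tone (the tone, duration and rate below are arbitrary choices): each sample simply measures the waveform's amplitude at one discrete instant.

import math

sample_rate = 44100   # samples per second (44.1 kHz, the CD rate above)
frequency = 440.0     # Hz; an arbitrary test tone
duration = 0.001      # seconds; 1 ms yields about 44 samples

num_samples = int(sample_rate * duration)
samples = [math.sin(2 * math.pi * frequency * n / sample_rate)
           for n in range(num_samples)]
print(len(samples), "samples; first three:",
      [round(s, 3) for s in samples[:3]])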

Sound Bit Depth

Sampling rate and sound bit depth are the audio equivalents of the resolution and color depth of a graphic image. Bit depth is the amount of space, in bits, used for storing each piece of audio information; the more bits, the higher the quality of the sound. Multimedia sound comes in 8-bit, 16-bit, 32-bit and 64-bit formats. An 8-bit sample has 2^8, or 256, possible values.

A single bit rate and a single sampling rate are recommended throughout a work. An audio file's size can be calculated with the simple formula:

File Size on Disk = (length in seconds) × (sample rate) × (bit depth ÷ 8 bits per byte)
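In code, the formula reads as follows. Note that the formula as given assumes one channel; a channels factor is added here as an assumption, consistent with the later remark that stereo doubles storage.

def audio_file_size(seconds, sample_rate, bit_depth, channels=1):
    # bytes = seconds x samples/second x bytes/sample x channels
    return seconds * sample_rate * (bit_depth / 8) * channels

print(audio_file_size(10, 22050, 16))      # 441000.0 bytes for 10 s of 16-bit mono
print(audio_file_size(10, 22050, 16, 2))   # 882000.0 bytes for the stereo version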

Fig. 6.7 Waveforms

Bit rate refers to the amount of data, specifically bits, transmitted or received per second. It is comparable to the sample rate but refers to the digital encoding of the sound: specifically, how many digital 1s and 0s are used each second to represent the sound signal. The higher the bit rate, the higher the quality and the size of your recording. For instance, an MP3 file might be described as having a bit rate of 320 kb/s, or 320,000 b/s. This indicates the amount of compressed data needed to store one second of music.

Bit Rate = (Sample Rate) × (Bit Depth) × (Number of Channels)
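As a worked example of this formula, CD-quality audio uses a 44.1 kHz sample rate, 16-bit samples and two channels:

sample_rate = 44100   # Hz
bit_depth = 16        # bits per sample
channels = 2          # stereo

bit_rate = sample_rate * bit_depth * channels
print(bit_rate)                  # 1411200 bits per second (~1.4 Mb/s)
print(bit_rate / 8 * 60 / 1e6)   # ~10.6 MB per minute, before compression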

Mono or Stereo

Mono sounds are flat and unrealistic compared to stereo sounds, which are much more
dynamic and lifelike. However, stereo sound files require twice the storage capacity of mono
sound files. Therefore, if storage and transfer are concerns, mono sound files may be the more
appropriate choice.

 The sampling rate is how often the samples are taken.

 The sample size is the amount of information stored per sample. This is called the bit resolution.

 The number of channels is 2 for stereo and 1 for monophonic.

 The time span of the recording is measured in seconds.

Types of Digital Audio File Formats

There are many different types of digital audio file formats that have resulted from working
with different computer platforms and software. Some of the better known formats include:

WAV

WAV is the Waveform format. It is the most commonly used and supported format on the Windows platform. Developed by Microsoft, the Wave format is a subset of RIFF. RIFF is capable of sample resolutions of 8 and 16 bits. With Wave, there are several different encoding methods to choose from, including Wave or PCM format. Therefore, when developing sound for the Internet, it is important to make sure you use an encoding method that the player you are recommending supports.
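Python's standard wave module can read these encoding parameters back out of a Wave file; the file name below is a placeholder:

import wave

with wave.open("sample.wav", "rb") as w:   # placeholder file name
    print("Channels:    ", w.getnchannels())           # 1 = mono, 2 = stereo
    print("Sample width:", w.getsampwidth(), "bytes")  # 2 bytes = 16-bit
    print("Sample rate: ", w.getframerate(), "Hz")
    print("Duration:    ", w.getnframes() / w.getframerate(), "seconds")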

AU

AU is the Sun Audio format. It was developed by Sun Microsystems to be used on UNIX,
NeXT and Sun Sparc workstations. It is a 16-bit compressed audio format that is fairly prevalent
on the Web. This is probably because it plays on the widest number of platforms.

RA

RA is Progressive Networks' RealAudio format. It is very popular for streaming audio on the Internet because it offers good compression, up to a factor of 18. Streaming technology enables a sound file to begin playing before the entire file has been downloaded.

AIFF

AIFF or AIF is Apple’s Audio Interchange File Format. This is the Macintosh waveform format. It is also supported on IBM compatibles and Silicon Graphics machines. The AIFF format supports a large number of sampling rates and resolutions up to 32 bits.

MPEG

MPEG and MPEG-2 are the Moving Picture Experts Group formats. They are compressed audio and video formats. Some Web sites use these formats for their audio because their compression capabilities offer factors of at least 14:1. These formats will probably become quite widespread as the price of hardware-based MPEG decoders continues to go down and as software decoders and faster processors become more mainstream. In addition, MPEG is a standard format.

MIDI

MIDI (MID, MDI, MFF) is an internationally accepted file format used to store Musical Instrument Digital Interface (MIDI) data. It is a format used to represent electronic music produced by a MIDI device (such as a synthesizer or electronic keyboard). This format provides instructions on how to replay music, but it does not actually record the waveform. For this reason, MIDI files are small and efficient, which is why they are often used on the Web.

SND

SND is the Sound file format developed by Apple. It is used mainly within the operating
system and has a limited sampling rate of eight bits.

For a multimedia application to work on both PCs and Macs, save it using either the Musical Instrument Digital Interface (MIDI) or the Audio Interchange File Format (AIFF) file format. It is recommended to use the AIFF format if sound is a part of the application. AIFF is a cross-platform format, and it can also reside outside the multimedia application, so the file occupies less space and plays faster. Moreover, if a user wants to burn the multimedia application onto a CD, the AIFF format can be used.

Digital Recordings

In digital recording, digital sound can be recorded through a microphone, a keyboard or DAT (Digital Audio Tape). Recording with a microphone connected directly to a sound card is avoided because of problems with sound amplification and recording consistency. Recording on a tape recorder, making all the changes there, and then transferring the result through the sound card is recommended.

Editing Digital Recordings

Once a recording has been made, it will almost certainly need to be edited. The basic sound editing operations that most multimedia producers need are described in the paragraphs that follow.

1. Multiple Tracks: Being able to edit and combine multiple tracks, and then merge the tracks and export them in a final mix to a single audio file.

2. Trimming: Removing dead air or blank space from the front of a recording and any unnecessary extra time off the end is your first sound editing task.

3. Splicing and Assembly: Using the same tools mentioned for trimming, you will probably want to remove the extraneous noises that inevitably creep into a recording.

4. Volume Adjustments: If you are trying to assemble ten different recordings into a single track, there is little chance that all the segments will have the same volume.

5. Format Conversion: In some cases your digital audio editing software might read a
format different from that read by your presentation or authoring program.

6. Resampling or downsampling: If you have recorded and edited your sounds at a 16-bit sampling rate but are delivering at lower rates, you must resample or downsample the file.

7. Equalization: Some programs offer digital equalization capabilities that allow you to modify a recording's frequency content so that it sounds brighter or darker.

8. Digital Signal Processing: Some programs allow you to process the signal with
reverberation, multitap delay, and other special effects using DSP routines.

9. Reversing Sounds: Another simple manipulation is to reverse all or a portion of a digital audio recording. Sounds can produce a surreal, otherworldly effect when played backward.

10. Time Stretching: Advanced programs let you alter the length of a sound file without
changing its pitch. This feature can be very useful but watch out: most time stretching
algorithms will severely degrade the audio quality.

Making MIDI Audio

Fig. 6.8 MIDI Audio

The MIDI (Musical Instrument Digital Interface) is a connectivity standard that musicians
use to hook together musical instruments (such as keyboards and synthesizers) and computer
equipment. Using MIDI, a musician can easily create and edit digital music tracks. The MIDI
system records the notes played, the length of the notes, the dynamics (volume alterations),
the tempo, the instrument being played, and hundreds of other parameters, called control
changes. Because MIDI records each note digitally, editing a track of MIDI music is much
easier and more accurate than editing a track of audio. The musician can change the notes,
dynamics, tempo, and even the instrument being played with the click of a button. Also, MIDI files are basically text documents, so they take up very little disk space. The only catch is that you need MIDI-compatible hardware or software to record and play back MIDI files. MIDI provides a protocol for passing detailed descriptions of musical scores, such as the notes, the sequences of notes, and which instrument will play these notes.

A MIDI file is very small, as little as 10 KB for a 1-minute playback (a .wav file of the same duration requires 5 to 10 MB of disk space). This is because it doesn't contain audio waves like
audio file formats do, but instructions on how to recreate the music. Another advantage of the
file containing instructions is that it is quite easy to change the performance by changing,
adding or removing one or more of the instructions – like note, pitch, tempo, and so on – thus
creating a completely new performance. This is the main reason for the file to be extremely
popular in creating, learning, and playing music.

MIDI actually consists of three distinctly different parts – the physical connector, the
message format, and the storage format. The physical connector connects and transports
data between devices; the message format (considered to be the most important part of MIDI)
controls the stored data and the connected devices; and the storage format stores all the data
and information. Today, MIDI is seen more of a way to accomplish music, rather than a format
or a protocol. This is why phrases like “composing in MIDI” and “creating MIDI” are quite
commonly used by musicians.

MIDI files may be converted to MP3, WAV, WMA, FLAC, OGG, AAC or MPC on any Windows platform using a tool such as Total Audio Converter.

Advantages of MIDI
 Since they are small, MIDI files embedded in web pages load and play promptly.

 Length of a MIDI file can be changed without affecting the pitch of the music or degrading
audio quality

 MIDI files will be 200 to 1000 times smaller than CD-quality digital audio files. Therefore,
MIDI files are much smaller than digitized audio.

 MIDI files do not take up as much RAM, disk space, or CPU resources.

 A single MIDI link can carry up to sixteen channels of information, each of which can be
routed to a separate device.

Audio File Formats

A file format determines the application that is to be used for opening a file.

Following is the list of different file formats and the software that can be used for opening
a specific file.

1. *.AIF, *.SDII in Macintosh Systems

2. *.SND for Macintosh Systems

3. *.WAV for Windows Systems

4. MIDI files – used by both Macintosh and Windows

5. *.WMA – Windows Media Audio

6. *.MP3 – MP3 audio

7. *.RA – Real Player

8. *.VOC – VOC Sound

9. AIFF sound format for Macintosh sound files

10. *.OGG – Ogg Vorbis

Software used for Audio

Software such as Toast and CD-Creator from Adaptec can translate the digital files of the Red Book audio format on consumer compact discs directly into a digital sound editing file, or decompress MP3 files into CD-Audio. There are several tools available for recording audio. Following is a list of different software that can be used for recording and editing audio:

 Sound recorder from Microsoft

 Apple’s QuickTime Player pro

 Sonic Foundry’s SoundForge for Windows

 SoundEdit 16

MIDI versus Digital Audio


 With MIDI, it is difficult to playback spoken dialog, while digitized audio can do so with
ease.

 MIDI does not have consistent playback quality while digital audio provides consistent
audio quality.

 One requires knowledge of music theory in order to run MIDI while digital audio does not
have this requirement.

 MIDI files sound better than digital audio files when played on a high quality MIDI device.

 MIDI data are completely editable, right down to the level of an individual note. You can manipulate the smallest detail of a MIDI composition in ways that are impossible with digital audio.

Audio CD Playback

Audio compact discs come in the standard format of Compact Disc Digital Audio (CD-DA). The standard is defined in the Red Book, which contains the technical specifications for all CD formats. The largest entity on a CD is called a track. A CD can contain up to 99 tracks (including the data track for mixed-mode discs).

The best part is that you can sort the order in which you want to listen to the tracks and continue playing without interruption. If the Auto Insert Notification option is disabled or unavailable, audio compact discs (CDs) are not played automatically. Instead, you must start CD Player and then click Play.

To cause an audio CD to be played as soon as you start CD Player, follow these steps:

1. Insert the CD you want to play into the drive. (Optional) If the CD doesn't start playing, or if you want to select a disc that is already inserted, click the arrow below the Now Playing tab, and then click the drive that contains the disc.

Another method to launch an audio CD involves the following steps:

1. Right-click the Start button, and then click Explore.



2. Double-click the Programs folder; double-click the Accessories folder, and then double
click the Multimedia folder.

3. Right-click the CD player icon, and then click Properties.

4. On the Shortcut tab, change the entry in the Target box to read: C:\Windows\Cdplayer.exe /PLAY

5. Click OK.

6. Use the “Stop,” “Pause,” “Skip next track” and “Previous track” buttons to set your
preferences while playing the CD.

7. Select Edit Play List from the Disc menu in order to change the sequence of the tracks.

8. Adjust the volume by clicking on the “Speakers” icon on the task bar and to adjust the
bass, treble and other options go to Equalizer.

To skip songs when playing a CD:

1. To skip a song, click the Next button while the song is playing.

2. The song will be skipped. If repeat play is turned on, the song will not play again during
that playback session.

3. If you accidentally skip a song you’d like to hear, double-click the song in the playlist.

It will be played immediately and won’t be skipped anymore.

Audio Recording

In digital recording, we start with an analogue audio signal and convert it to digital data to be stored. Changes in electrical voltage are encoded as discrete samples. On playback we retrieve the digital data and convert it back to an analogue signal. Here, fidelity is dependent on the quality and function of the Analogue-to-Digital (A-to-D) and the Digital-to-Analogue (D-to-A) converters. Once an audio signal is stored as digital data, the storage medium has no effect on the quality of the sound.

At the heart of hard-disk recording and editing is digital audio. When we record digitally,
sound is converted to an electrical signal by a microphone. That signal is coded into numbers
by an analogue-to-digital converter (ADC). The numbers are stored in memory, then played
back upon demand by sending the numbers to a digital-to-analogue converter (DAC). The
resulting signal is sent through an amplifier and speakers so we hear a reproduction of the
original sound. This is illustrated by the figure below:

Fig. 6.9 Audio Recording

Types of Storage

Devices used to capture, store and access sound will fall into some combination of the
following categories:

 Analogue or digital

 Linear or Random Access, also called non-linear

Example: Some examples of the following:

 Cassette Tape – Linear, Analogue

 Hard Disk Recording – Random Access, Digital

 DAT Tape – Linear, Digital

 CD – Random Access, Digital

 LP (Long-playing Record) – Random Access, Analogue



Any type of audio recording system has 3 major components:

1. Input – Microphone

2. Storage/retrieval – Audio Recorder

3. Output – Loudspeaker

Audio Recording Guidelines

These recommendations are intended to produce the best possible audio recordings. A
good audio recording dramatically improves the transcription quality. It lets transcriptionists
focus on the finer details, such as researching difficult words and ensuring correct punctuation,
rather than trying to discern what was said. This is particularly important in cases with multiple
speakers, background noise, complicated vocabulary, or heavy accents. Use a high-quality
microphone, either a headset microphone or a mounted directional microphone. As a rule of
thumb, spending at least $50 on a microphone is a good investment.

If a headset microphone is used, be sure that the transducer is at least 1" away from the face and slightly below the lower lip. A standing microphone should be placed 9"–15" directly in front of the speaker. This provides the best trade-off between clarity and risk of over-saturation.
If there are multiple speakers, providing a separate microphone to each speaker is best. A bi-
directional microphone works well for two speakers sitting across from each other at the
appropriate distance away from the microphone. The multiple microphone signals should be
mixed into a single channel. The speaker should not hold or wear the microphone. It should
either be a headset mic or be mounted on a stable structure in front of them. This reduces the
likelihood that the microphone will move around during recording.

Use a microphone “pop” filter, either one that comes with the microphone or a separately
purchased standing one. Calibrate the input level. If your recording device has a VU meter,
have the speaker talk naturally – at the appropriate distance from the mic – and make sure the
levels are in a good range. If a VU meter is not available, try a sample recording and listen back
to make sure the level sounds good.

Minimize background noise. If you can notice the noise just standing and listening, it will
be much worse on the recording. Making a sample recording and listening to it is a good way

to discover the noise level. Placing soft materials between the microphone and air vents and
machinery will block most of the noise.

Try to eliminate background talking or music. This is a frequent source of poor quality audio. Avoid recording in rooms that have a discernible ‘echo’. This is especially important if the microphone placement cannot be optimal (i.e. if the microphones are distant from the speaker or speakers). Listening to a recording sample is a good way to see if echo is a problem. An echoey room will produce “hollow sounding” speech, as if the speaker is at the other end of a tube.

If the audio input to your recording device allows you to select the sampling rate, choose 16 kHz or higher. If it allows you to select the digital audio sample resolution, choose 16 bits or higher.

If the audio input to your recording device supports “automatic gain control” (“AGC”) or
“voice activity detection” (“VAD”), disable this feature. Coach your speakers to “speak naturally
into the microphone”. Do not instruct speakers to over-articulate words.

Media Formats

With digital formats becoming more popular, certain MP3 players have the ability to record
audio directly into a digital audio format. These devices are small, reliable, and can store
massive amounts of audio without the need to switch tapes. Mini disc recorders and discs are
compact, easily portable, sturdy and high quality. Using the mini disc recorder for lectures or
interviews with an appropriate microphone attachment works well.

Video cameras are not built specifically for audio recording; however, they nonetheless
can record good audio given an appropriate microphone attachment. The advantage of recording
directly to a computer, of course, is that there is no intermediary media to deal with and you
save time. This would most commonly be a choice if you have a laptop or a controlled location
like a sound studio.

Monitor Recordings

It is a good idea to always bring headphones with you to monitor the audio. If the equipment you are using has the ability to monitor the recording, as the Marantz tape decks or higher-end video cameras do, use it.

Voice Recognition and Response

Voice recognition and voice response promise to be the easiest method of providing a
user interface for data entry and conversational computing, since speech is the easiest, most
natural means of human communication. Voice input and output of data have now become
technologically and economically feasible for a variety of applications.

6.6 Summary
 Multimedia is a combination of various elements, such as text, images, video, sound, and
animation.

 Interactive multimedia allows the user to control what and when the elements are delivered.

 Unicode makes use of 16-bit architecture for multilingual text and character encoding.

 Digital audio is created when a sound wave is converted into numbers – a process referred
to as digitizing.

 MIDI (Musical Instrument Digital Interface) is a communication standard developed for electronic musical instruments and computers.

 Software such as Toast and CD-Creator from Adaptec can translate the digital files of red
book Audio format on consumer compact discs directly into a digital sound editing file, or
decompress MP3 files into CD-Audio.

6.7 Check Your Answers


1. a. typeface

2. b. Kerning

3. WYSIWYG

4. b. Anti-aliasing

5. a. Higher Quality

6. Sample

7. Musical Instrument Digital Interface

6.8 Model Questions


1. Describe the working of multimedia.

2. What are multimedia building blocks?

3. How to digitize audio and video blocks?

4. Describe multimedia tools in detail.

5. Explain multimedia text formats.

6. Explain in detail about multimedia audio formats.



LESSON 7
MULTIMEDIA ELEMENTS –
IMAGES, ANIMATION AND VIDEO

Structure
7.1 Introduction

7.2 Learning Objectives

7.3 Images

7.4 Animation

7.5 Video

7.6 Digitization of Audio and Video Objects

7.7 Summary

7.8 Check Your Answers

7.9 Model Questions

7.1 Introduction

Video is a combination of image and audio. It consists of a set of still images, called frames, displayed to the user one after another at a specific speed, known as the frame rate, measured in number of frames per second (fps). If the frames are displayed fast enough, our eye cannot distinguish the individual frames; because of persistence of vision, it merges the individual frames with each other, thereby creating an illusion of motion. The frame rate should range between 20 and 30 fps for perceiving smooth, realistic motion.
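The arithmetic behind the frame rate is simple; the clip length below is an arbitrary example:

frame_rate = 25    # frames per second, within the 20-30 fps range above
duration = 10      # seconds

print(frame_rate * duration)   # 250 still images make up a 10-second clip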

Computer animation or CGI animation is the process used for generating animated images
by using computer graphics. The more general term computer-generated imagery encompasses
both static scenes and dynamic images, while computer animation only refers to moving images.
Modern computer animation usually uses 3D computer graphics, although 2D computer graphics
are still used for stylistic, low bandwidth, and faster real-time renderings. Sometimes the target

of the animation is the computer itself, but sometimes the target is another medium, such as
film.

7.2 Learning Objectives


At the end of the lesson, the learner will be able to

 Describe the bitmap images and analyses the vector drawing

 Enumerate 3D drawing and rendering

 Explain natural lights, colors, computerized colors and color palettes

 Understand the image file formats such as Macintosh image format, windows imaging file
format and analyze the cross-platform formats

 Know the concept of Animation

 Describe how video works

 Understand broadcast video standards

7.3 Images in Multimedia

Still images are an important element of a multimedia project or a web site. In order to make a multimedia presentation look elegant and complete, it is necessary to spend ample time designing the graphics and the layouts. Competent, computer-literate skills in graphic art and design are vital to the success of a multimedia project.

Digital Image

A digital image is represented by a matrix of numeric values, each representing a quantized intensity value. When I is a two-dimensional matrix, then I(r,c) is the intensity value at the position corresponding to row r and column c of the matrix.

The points at which an image is sampled are known as picture elements, commonly
abbreviated as pixels. The pixel values of intensity images are called gray scale levels (we
encode here the “color” of the image). The intensity at each pixel is represented by an integer
and is determined from the continuous image by averaging over a small neighborhood around

the pixel location. If there are just two intensity values, for example, black, and white, they are
represented by the numbers 0 and 1; such images are called binary-valued images. If 8-bit
integers are used to store each pixel value, the gray levels range from 0 (black) to 255 (white).
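A minimal sketch of this matrix view, using plain nested lists (a library such as NumPy would normally be used instead); the values are arbitrary:

# A 3x3 8-bit grayscale image: 0 = black, 255 = white.
I = [
    [0,   64,  128],
    [64,  128, 192],
    [128, 192, 255],
]

r, c = 1, 2
print(I[r][c])   # intensity at row 1, column 2 -> 192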

Digital Image Format

There are different kinds of image formats in the literature. The image format are comes
out of an image frame grabber, i.e., the captured image format, and the format when images
are stored, i.e., the stored image format.

 Captured Image Format

The image format is specified by two main parameters: spatial resolution, which is specified in pixels (e.g., 640×480), and color encoding, which is specified in bits per pixel. Both parameter values depend on the hardware and software for input/output of images.

 Stored Image Format

When we store an image, we are storing a two-dimensional array of values, in which each value represents the data associated with a pixel in the image. For a bitmap, this value is a binary digit.

Bitmaps

A bitmap is a simple information matrix describing the individual dots that are the smallest
elements of resolution on a computer screen or other display or printing device.

A matrix with a depth of one bit is sufficient for monochrome (black and white) images; greater depth (more bits of information) is required to describe the more than 16 million colors the picture elements may have, as illustrated in the following figure. The state of all the pixels on a computer screen makes up the image seen by the viewer, whether in combinations of black and white or colored pixels in a line of text, a photograph-like picture, or a simple background pattern.

Fig. 7.1 Bitmaps Matrix Formats

Where do bitmaps come from? How are they made?

 Make a bitmap from scratch with paint or drawing program.

 Grab a bitmap from an active computer screen with a screen capture program, and then
paste into a paint program or your application.

 Capture a bitmap from a photo, artwork, or a television image using a scanner or video
capture device that digitizes the image.

 Once made, a bitmap can be copied, altered, e-mailed, and otherwise used in many
creative ways.

Color Depth
 Describes the amount of storage per pixel

 Also indicates the number of colors available

 Higher color depths require greater compression

When a bitmap image is constructed, the color of each point or pixel in the image is
coded into a numeric value. This value represents the color of the pixel, its hue and intensity.
When the image is displayed on the screen, these values are transformed into intensities of
red, green and blue for the electron guns inside the monitor, which then create the picture on
the phosphor lining of the picture tube. In fact, the screen itself is mapped out in the computer’s
memory, stored as a bitmap from which the computer hardware drives the monitor.

These color values have to be finite numbers, and the range of colors that can be stored
is known as the color depth. The range is described either by the number of colors that can be
distinguished, or more commonly by the number of bits used to store the color value. Thus, a
pure black and white image (i.e. no greys) would be described as a 1-bit or 2-colour image,
since every pixel is either black (0) or white (1). Common color depths include 8-bit (256 colors)
and 24-bit (16 million colors). It’s not usually necessary to use more than 24-bit color, since the
human eye is not able to distinguish that many colors, though broader color depths may be
used for archiving or other high quality work.

Fig. 7.2 Colour pixels

There are a number of interesting attributes of such a color indexing system. If there are fewer than 256 colors in the image, then this bitmap will be the same quality as a 24-bit bitmap, yet it can be stored with one third the data. Interesting coloring and animation effects can be achieved by simply modifying the palette; this immediately changes the appearance of the bitmap and, with careful design, can lead to intentional changes in its visual appearance.

A common operation that reduces the size of large 24 bit bitmaps is to convert them to
indexed color with an optimized palette, that is, a palette which best represents the colors
available in the bitmap.
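The palette conversion itself can be sketched in a few lines of Python. The fragment below is a minimal illustration (the four-entry palette and the use of NumPy are assumptions for the example, not the code of any particular graphics package): it maps each 24-bit pixel to the index of its nearest palette color, so each pixel is stored as a one-byte index rather than three bytes of RGB.

    import numpy as np

    # A tiny stand-in palette; an optimized palette would hold up to 256
    # colors chosen to best represent the colors present in the bitmap.
    palette = np.array([[0, 0, 0], [255, 0, 0], [0, 0, 255], [255, 255, 255]])

    def to_indexed(image_rgb):
        """Map each RGB pixel to the index of its nearest palette color."""
        pixels = image_rgb.reshape(-1, 1, 3).astype(int)
        dist = ((pixels - palette) ** 2).sum(axis=2)   # distance to each entry
        return dist.argmin(axis=1).reshape(image_rgb.shape[:2])

    img = np.random.randint(0, 256, (4, 4, 3))   # stand-in for a real image
    indexed = to_indexed(img)                    # one byte per pixel, not three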

Resolution

Resolution is a measure of how finely a device displays graphics with pixels. It is used by
printers, scanners, monitors (TV, computer), mobile devices and cameras.

There are two ways of measuring resolution:

 by density, in dots per inch (dpi)

 by size, in terms of pixels

The number of pixels or dots per inch (dpi) is used to measure density. Printers and scanners work with higher resolutions than computer monitors. Current desktop printers can support 300 dpi and above, and flatbed scanners from 100 to 3600 dpi and above. In comparison, computer monitors support 72-130 dpi. This is also known as "image resolution".

The second measure is the size of the frame (as in video) or monitor, in pixels. For instance, the size of the video frame used for British televisions is 768 × 576, whereas American TVs use 640 × 480.

Making Still Images

Still images may be small or large, or even full screen. Whatever their form, still images
are generated by the computer in two ways:

 as bitmap (or paint graphics)

 as vector-drawn (or just plain drawn) graphics.

Bitmaps are used for photo-realistic images and for complex drawing requiring fine
detail.

Vector-drawn objects are used for lines, boxes, circles, polygons, and other graphic
shapes that can be mathematically expressed in angles, coordinates, and distances. A drawn
object can be filled with color and patterns, and you can select it as a single object.

Typically, image files are compressed to save memory and disk space; many image
formats already use compression within the file itself – for example, GIF, JPEG, and PNG.

Still images may be the most important element of your multimedia project. If you are
designing multimedia by yourself, put yourself in the role of graphic artist and layout designer.

Bitmap Software

Bitmap is derived from the words “bit”, which means the simplest element in which only
two digits are used, and “map”, which is a two-dimensional matrix of these bits. A bitmap is a
data matrix describing the individual dots of an image.

 A simple information matrix describing the dots or pixels which make up the image

 Make it with paint or drawing program

 Grab it and (save it), then paste it into your application

 Scan or digitize an image

Fig. 7.3 Digital Image

Bitmaps are an image format suited for creation of:

 Photo-realistic images.

 Complex drawings.

 Images that require fine detail.



Bitmapped images are known as paint graphics.

 A bitmap is made up of individual dots or picture elements known as pixels or pels.

 Bitmapped images can have varying bit and color depths.

Fig. 7.4 Bit and Color Depth

Bitmaps can be inserted by:

 Using clip art galleries.

 Using bitmap software.

 Capturing and editing images.

 Scanning images.

 Clip Art

A clip art collection may contain a random assortment of images, or it may contain a series of graphics, photographs, sound, and video related to a single topic. For example, Corel, Micrografx, and Fractal Design bundle extensive clip art collections with their image-editing software.

Fig. 7.5 Clip Art



Clip Art Features


 Available from many sources on the web or on CD (such as PHOTODISC)

 Included with packages such as CorelDraw, Office, etc.

 Can manipulate some properties such as brightness, color, size

 Can paste it into an application

 A clip art gallery is an assortment of graphics, photographs, sound, and video.

 Clip arts are a popular alternative for users who do not want to create their own images.

 Clip arts are available on CD-ROMs and on the Internet.

 Bitmap Software

The industry standards for bitmap painting and editing programs are:

 Adobe’s Photoshop and Illustrator.

 Macromedia’s Fireworks.

 Corel’s Painter.

 CorelDraw.

 Quark Express.

 Primitive paint programs are included with Windows and the Mac

 Director includes a powerful image editor with advanced tools such as onion-skinning and image filtering

 Adobe Photoshop and Fractal Design's Painter are more sophisticated painting and editing tools

Note:

 Use paint program for cartoon, text, icons, symbols, buttons, or graphics.

 For photo-realistic images first scan a picture, then use a paint or image editing program
to refine or modify those Bitmaps

 Capturing and Editing Images

The image seen on a computer monitor is a digital bitmap stored in video memory, updated about every 1/60 second or faster, depending upon the monitor's scan rate. When images are assembled for a multimedia project, it may be necessary to capture and store an image directly from the screen. The Prt Scr key on the keyboard can be used to capture an image.

 Scanning Images

Even after scanning through countless clip art collections, it may not be possible to find the unusual background you want for a screen about gardening; scanning an image of your own is an alternative. Sometimes when you search for something too hard, you don't realize that it's right in front of your face. Open the scan in an image-editing program and experiment with different filters, the contrast, and various special effects. Be creative, and don't be afraid to try strange combinations – sometimes mistakes yield the most intriguing results.

Vector Drawing

Most multimedia authoring systems provide for use of vector-drawn objects such as
lines, rectangles, ovals, polygons, and text. Computer-aided design (CAD) programs have
traditionally used vector-drawn object systems for creating the highly complex and geometric
rendering needed by architects and engineers.

Graphic artists designing for print media use vector-drawn objects because the same
mathematics that put a rectangle on your screen can also place that rectangle on paper without
jaggies. This requires the higher resolution of the printer, using a page description language
such as PostScript.

Programs for 3-D animation also use vector-drawn graphics. For example, the various changes of position, rotation, and shading of light required to spin an extruded shape are computed mathematically.

How Vector Drawing Works

Vector-drawn objects are described and drawn to the computer screen using a fraction
of the memory space required to describe and store the same object in bitmap form. A vector
is a line that is described by the location of its two endpoints. A simple rectangle, for example,
might be defined as follows:

RECT 0,0,200,200,RED,BLUE

Fig. 7.6 Vector Drawing

 A rectangle might be described as:

–RECT, 0, 0, 200, 200

–This starts at 0,0 and extends 200 pixels horizontally and 200 pixels downward from the corner (a square)

–RECT, 0, 0, 200, 200, red, blue

–This is the same square with a red border, filled with blue

 The colored square described as a vector contains fewer than 30 bytes of data

 The same square as a 1-bit bitmapped image would take 5,000 bytes to describe ((200 × 200) / 8), and using 256 colors (8 bits per pixel) would require 40,000 bytes ((200 × 200) / 8 × 8), as the sketch after this list shows

 Vector objects are easily scalable

 Sometimes a single bitmap gives better performance than many vector images required
to make the same image
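The byte counts quoted above can be checked with a quick sketch (plain Python; the RECT string is the example description from the text):

    # Storage for a 200 x 200 pixel square, per the figures in the text.
    width = height = 200

    vector_bytes = len("RECT 0,0,200,200,RED,BLUE")      # 25 bytes, < 30
    bitmap_1bit_bytes = width * height // 8              # 5,000 bytes
    bitmap_256color_bytes = width * height * 8 // 8      # 40,000 bytes (8 bpp)

    print(vector_bytes, bitmap_1bit_bytes, bitmap_256color_bytes)
    # 25 5000 40000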

Converting Between Bitmaps and Vectors


 Most drawing programs offer several file formats for saving and converting images.

 Converting bitmaps to drawn objects is more difficult and is called autotracing

 It computes the bounds of an object and its colors and derives the polygon that most nearly describes it

 It is available in some programs such as Adobe Streamline

Vectors vs. Bitmaps


 Vector drawings are easily scaled

 Vector files are usually smaller

 Calculation time to render vector images can drain resources

 Bitmaps cannot easily be converted to vectors

 Vector drawings require plug-ins

3-D Drawing and Rendering


 Drawing in 3-D on a 2-D surface or screen takes practice and skill

 Software helps to render (or represent) the image in visual form, but these programs
have a steep learning curve.

 Objects in 3-D space carry many properties, such as shape, color, texture, and location, and a scene contains many objects

3-D Drawing
3-D software usually offers:

 Directional lighting

 Motion

 Different perspectives

3-D creation tools include:

 Ray Dream Designer

 Caligari True Space 2

 Specular Infini-D

 form*Z

Fig. 7.6 3-D Drawing

Modeling 3-D objects

• Start with a shape (block, cylinder, sphere, …)

• You can draw a 2-D object and extrude or lathe it into the third dimension

• Extrude – extends the shape perpendicular to the shape's outline

• A lathed shape is rotated around a defined axis to create the 3-D object (see the sketch below).
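As a rough illustration of lathing, the short sketch below (an assumed, generic implementation, not the code of any of the tools named above) sweeps a 2-D profile of (radius, height) points around the vertical axis to produce the points of a 3-D surface:

    import math

    # A 2-D profile to be lathed: (radius, height) pairs, e.g. a vase outline.
    profile = [(1.0, 0.0), (1.5, 1.0), (0.8, 2.0)]

    def lathe(profile, steps=16):
        """Rotate the profile around the z-axis, returning 3-D surface points."""
        points = []
        for i in range(steps):
            theta = 2 * math.pi * i / steps
            for r, z in profile:
                points.append((r * math.cos(theta), r * math.sin(theta), z))
        return points

    surface = lathe(profile)   # 16 copies of the profile swept around the axis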

Image File Formats

Table 7.1 File Formats

Format Extension

Microsoft Windows DIB .bmp .dib .rle

Microsoft Palette .pal

AutoCAD format 2D .dxf

JPEG .jpg

Windows Metafile .wmf

Portable Network Graphics .png

CompuServe GIF .gif

Apple Macintosh .pict .pic .pct



7.4 Animation

Introduction

Animation makes static presentations come alive. It is visual change over time and can
add great power to our multimedia projects. Carefully planned, well-executed video clips can
make a dramatic difference in a multimedia project. Animation is created from drawn pictures
and video is created using real time visuals.

Principles of Animation

Animation is the rapid display of a sequence of images of 2-D artwork or model positions
in order to create an illusion of movement. It is an optical illusion of motion due to the phenomenon
of persistence of vision, and can be created and demonstrated in a number of ways. The most
common method of presenting animation is as a motion picture or video program, although
several other forms of presenting animation also exist.

Animation is possible because of a biological phenomenon known as persistence of vision and a psychological phenomenon called phi. An object seen by the human eye remains
chemically mapped on the eye’s retina for a brief time after viewing. Combined with the human
mind’s need to conceptually complete a perceived action, this makes it possible for a series of
images that are changed very slightly and very rapidly, one after the other, to seemingly blend
together into a visual illusion of movement. The following shows a few cells or frames of a
rotating logo. When the images are progressively and rapidly changed, the arrow of the compass
is perceived to be spinning.

Television video builds 25 or 30 entire frames or pictures every second, depending on the standard; the speed with which each frame is replaced by the next one makes the images appear to blend smoothly into movement. To make an object travel across the screen while it changes its shape, just change the shape and also move or translate it a few pixels for each frame.

Animation Techniques

When you create an animation, organize its execution into a series of logical steps. First,
gather up in your mind all the activities you wish to provide in the animation; if it is complicated,
you may wish to create a written script with a list of activities and required objects. Choose the
animation tool best suited for the job. Then build and tweak your sequences; experiment with
lighting effects. Allow plenty of time for this phase when you are experimenting and testing.
Finally, post-process your animation, doing any special rendering and adding sound effects.

Cel Animation

The term cel derives from the clear celluloid sheets that were used for drawing each
frame, which have been replaced today by acetate or plastic. Cels of famous animated cartoons
have become sought-after, suitable-for-framing collector’s items.

Cel animation artwork begins with keyframes (the first and last frame of an action). For
example, when an animated figure of a man walks across the screen, he balances the weight
of his entire body on one foot and then the other in a series of falls and recoveries, with the
opposite foot and leg catching up to support the body.

 The animation techniques made famous by Disney use a series of progressively different graphics on each frame of movie film, which plays at 24 frames per second.

 A minute of animation may thus require as many as 1,440 separate frames.

 The term cel derives from the clear celluloid sheets that were used for drawing each frame, which have been replaced today by acetate or plastic.

 Cel animation artwork begins with key frames.

Computer Animation

Computer animation programs typically employ the same logic and procedural concepts as cel animation, using layer, keyframe, and tweening techniques, and even borrowing from the vocabulary of classic animators. On the computer, paint is most often filled or drawn with tools using features such as gradients and antialiasing.

The word inks, in computer animation terminology, usually means special methods for computing RGB pixel values, providing edge detection, and layering so that images can blend or otherwise mix their colors to produce special transparencies, inversions, and effects.

 Computer animation uses the same logic and procedural concepts as cel animation, including the vocabulary of classic cel animation – terms such as layer, keyframe, and tweening.

 The primary difference between animation software programs is in how much must be drawn by the animator and how much is automatically generated by the software.

 In 2D animation the animator creates an object and describes a path for the object to follow. The software takes over, actually creating the animation on the fly as the program is being viewed by your user.

 In 3D animation the animator puts his effort into creating the models of individual objects and designing the characteristics of their shapes and surfaces.

 Paint is most often filled or drawn with tools using features such as gradients and anti-aliasing.

Kinematics
 It is the study of the movement and motion of structures that have joints, such as a
walking man.

 Inverse Kinematics is in high-end 3D programs, it is the process by which you link objects
such as hands to arms and define their relationships and limits.

 Once those relationships are set you can drag these parts around and let the computer
calculate the result.

Morphing
 Morphing is a popular effect in which one image transforms into another. Morphing applications and other modeling tools that offer this effect can perform the transition not only between still images, but between moving images as well (a simplified blending sketch follows the product list below).

 The morphed images were built at a rate of 8 frames per second, with each transition taking a total of 4 seconds.

 Some products that offer morphing features are as follows:

 Black Belt's EasyMorph and WinImages,



 Human Software’s Squizz

 Valis Group’s Flo , MetaFlo, and MovieFlo.

Animation File Formats

Some file formats are designed specifically to contain animations, and they can be ported among applications and platforms with the proper translators.

 Director *.dir, *.dcr

 Animator Pro *.fli, *.flc

 3D Studio Max *.max

 SuperCard and Director *.pics

 CompuServe *.gif

 Flash *.fla, *.swf

The following is a list of a few software packages used for computerized animation:

 3D Studio Max

 Flash

 Animator Pro

Meta Graphics

Meta graphics can be termed hybrid graphics, as they are a combination of bitmap and vector graphics. They aren't as widely used as bitmaps and vectors, and aren't as widely supported. An example of a meta graphic would be a map consisting of a photo showing an aerial view of a town, where the landmarks are highlighted using vector text and graphics, e.g., arrows.

Animated Graphics

Animated graphics are 'moving graphics' that consist of more than one graphic. Vector graphics are mainly the basis of animations. Think of cartoons such as the Simpsons and Family Guy. Effects generated by bitmaps can be added, and bitmaps themselves can also be animated.

The illusion of movement is created by playing a series of graphics at a certain speed.


Too slow, and it will look like a number of static graphics to us. Too fast and we won’t be able to
make out the graphics at all, they’ll just look like a blur. Animation started back in the mid-
1800s. Early animators experimented with the speed of playback of their drawings to determine
the correct setting to create the illusion of movement. Early filmmakers also experimented with
this. The term applied to the playback setting is the frame rate. The frame rate is determined by the number of frames per second (fps) that are displayed. Each frame consists of a change in the image. Early animators and filmmakers found that when images were played back at anything below 12 fps, the individual static images could be seen, which also resulted in the movement being jerky. The accepted frame rates used today are:

 12-24 fps for animations used in multimedia; 12 fps is recommended for web-based animations

 25 fps for TV in the UK

 30 fps for TV in the USA

 24 fps for film

Check your Progress


1. Match The Following:

I.TIFF A. Moving Pictures Experts Group

II. BMP B. Tag Image File Format

III. JPEG C. Bitmap Image

IV. MPEG D. Joint Photographic Experts Group

2. The type of image used for photo-realistic images and for complex drawings requiring
fine detail is the _______________.

3. TIFF stands for ____________________________

4. _______________ is a process whereby the color value of each pixel is changed to the
closest matching color value in the target palette, using a mathematical algorithm.

5. Say True or False

The picture elements that make up a bitmap are called pixels

6. DSP stands for:

a. Dynamic Sound Programming

b. Data Structuring Parameters

c. Direct Splicing and Partitioning

d. Digital Signal Processing

7. Most authoring packages include visual effects such as:

a. Panning, Zooming, and Tilting

b. Wipes, Fades, Zooms, and Dissolves

c. Morphing

d. Tweening

8. Say True or False

Movies on film are typically shot at a shutter rate of 24 frames per second.

9. The file format that is most widely supported for web animations is_____________

10. High-Definition Television (HDTV) is displayed in a(n) ________________ aspect ratio.

7.5 Video

Video is a combination of image and audio. It consists of a set of still images, called frames, displayed to the user one after another at a specific speed, known as the frame rate, measured in frames per second (fps). If the frames are displayed fast enough, our eye cannot distinguish the individual frames; because of persistence of vision, it merges the individual frames with each other, thereby creating an illusion of motion. The frame rate should range between 20 and 30 fps for smooth, realistic motion to be perceived. Audio is added and synchronized with the apparent movement of images. The recording and editing of sound has long been in the domain of the PC; doing so with motion video has only recently gained acceptance. This is because of the enormous file size required by video.

Analog versus Digital

Digital video has supplanted analog video as the method of choice for making video for
multimedia use. While broadcast stations and professional production and postproduction
houses remain greatly invested in analog video hardware (according to Sony, there are more
than 350,000 Betacam SP devices in use today), digital video gear produces excellent finished
products at a fraction of the cost of analog. A digital camcorder directly connected to a computer
workstation eliminates the image-degrading analog-to-digital conversion step typically performed
by expensive video capture cards, and brings the power of nonlinear video editing and production
to everyday users.

Broadcast Video Standards

Four broadcast and video standards and recording formats are commonly in use around
the world: NTSC, PAL, SECAM, and HDTV. Because these standards and formats are not
easily interchangeable, it is important to know where your multimedia project will be used.

 NTSC

The United States, Japan, and many other countries use a system for broadcasting and
displaying video that is based upon the specifications set forth by the 1952 National Television
Standards Committee. These standards define a method for encoding information into the
electronic signal that ultimately creates a television picture. As specified by the NTSC standard,
a single frame of video is made up of 525 horizontal scan lines drawn onto the inside face of a
phosphor-coated picture tube every 1/30th of a second by a fast-moving electron beam.

 PAL

The Phase Alternate Line (PAL) system is used in the United Kingdom, Europe, Australia, and South Africa. PAL is an integrated method of adding color to a black-and-white television signal that paints 625 lines at a frame rate of 25 frames per second.

 SECAM

The Sequential Color and Memory (SECAM) system is used in France, Russia, and few
other countries. Although SECAM is a 625-line, 50 Hz system, it differs greatly from both the
NTSC and the PAL color systems in its basic technology and broadcast method.

 HDTV

High Definition Television (HDTV) provides high resolution in a 16:9 aspect ratio (see
following Figure). This aspect ratio allows the viewing of Cinemascope and Panavision movies.
There is contention between the broadcast and computer industries about whether to use
interlacing or progressive-scan technologies.

Fig. 7.7 High Definition Televisions (HDTV)

Digital Television (DTV)

Digital Television (DTV) is an advanced broadcasting technology that has transformed your television viewing experience. DTV has enabled broadcasters to offer television with better picture and sound quality. It also offers multiple programming choices, called multi-casting, and interactive capabilities.

DTV Transition

The switch from analogue to digital broadcast television is referred to as the Digital TV
(DTV) Transition. In 1996, the U.S. Congress authorized the distribution of an additional
broadcast channel to each broadcast TV station so that they could start a digital broadcast
channel while simultaneously continuing their analogue broadcast channel.

Later, Congress set June 12, 2009 as the deadline for full power television stations to
stop broadcasting analogue signals. Since June 13, 2009, all full-power U.S. television stations
have broadcast over-the-air signals in digital only.

Digital Video

Full integration of motion video on computers eliminates the analog television form of
video from the multimedia delivery platform. If a video clip is stored as data on a hard disk, CD-
ROM, or other mass-storage device, that clip can be played back on the computer’s monitor
without overlay boards, videodisk players, or second monitors. This playback of digital video is accomplished using software architectures such as QuickTime or AVI. As a multimedia producer or developer, you may need to convert video source material from its still-common analog form (videotape) to a digital form manageable by the end user's computer system. So an understanding of analog video and some special hardware must remain in your multimedia toolbox.

Analog to digital conversion of video can be accomplished using the video overlay hardware
described above, or it can be delivered direct to disk using FireWire cables. To repetitively
digitize a full-screen color video image every 1/30 second and store it to disk or RAM severely
taxes both Macintosh and PC processing capabilities–special hardware, compression firmware,
and massive amounts of digital storage space are required.

DTV versus HDTV

The Advanced Television Standards Committee (ATSC) has set voluntary standards for
digital television. These standards include how sound and video are encoded and transmitted.
They also provide guidelines for different levels of quality. All of the digital standards are better
in quality than analogue signals. HDTV standards are the top tier of all the digital signals.

The ATSC has created 18 commonly used digital broadcast formats for video. The lowest
quality digital format is about the same as the highest quality an analogue TV can display.

The 18 formats cover differences in:

(i) Aspect ratio: Standard television has a 4:3 aspect ratio—it is four units wide by three
units high. HDTV has a 16:9 aspect ratio, more like a movie screen.

(ii) Resolution: The lowest standard resolution (SDTV) will be about the same as analogue
TV and will go up to 704 x 480 pixels. The highest HDTV resolution is 1920 x 1080 pixels.
HDTV can display about ten times as many pixels as an analogue TV set.

(iii) Frame rate: A set’s frame rate describes how many times it creates a complete picture on
the screen every second. DTV frame rates usually end in “i” or “p” to denote whether they
are interlaced or progressive. DTV frame rates range from 24p (24 frames per second,
progressive) to 60p (60 frames per second, progressive).

Many of these standards have exactly the same aspect ratio and resolution — their
frame rates differentiate them from one another. When you hear someone mention a “1080i”
HDTV set, they’re talking about one that has a native resolution of 1920 x 1080 pixels and can
display 60 frames per second, interlaced.

Digital Video Standards—ATSC, ISDB, EDTV

Before going into the details of digitizing video and playback of video on a personal
computer, let us first have a look at the existing digital video standards for transmission and
playback.

 Advanced Television Systems Committee (ATSC)

ATSC (Advanced Television Systems Committee) is the name of the technical standard
that defines the digital TV (DTV) that the FCC has chosen for terrestrial TV stations. ATSC
employs MPEG-2, a data compression standard. MPEG-2 typically achieves a 50-to-1 reduction
in data.

It achieves this by not retransmitting areas of the screen that have not changed since the
previous frame. Digital cable TV systems and DBS systems like DirecTV have devised their
own standards that differ somewhat from ATSC. Their high-def set top boxes (STBs) conform
to ATSC at their output connectors. Those systems use MPEG-2 or MPEG-4.

ATSC has 18 different formats. All TVs must be able to receive all of these formats and
display them. The broadcaster chooses the format. Most TV sets will display only 1 or 2 of
these formats, but will convert the other formats into these.

 Integrated Services Digital Broadcasting (ISDB)

ISDB is maintained by the Japanese organization ARIB. The standards can be obtained
for free at the Japanese organization DiBEG website and at ARIB. The core standards of ISDB
are ISDB-S (satellite television), ISDB-T (terrestrial), ISDB-C (cable) and 2.6 GHz band mobile
broadcasting which are all based on MPEG-2 or MPEG-4 standard for multiplexing with transport
stream structure and video and audio coding (MPEG-2 or H.264), and are capable of high
definition television (HDTV) and standard definition television. ISDB-T and ISDB-Tsb are for
mobile reception in TV bands. 1seg is the name of an ISDB-T service for reception on cell
phones, laptop computers and vehicles.

The concept was named for its similarity to ISDN, because both allow multiple channels
of data to be transmitted together (a process called multiplexing). This is also much like another
digital radio system, Eureka 147, which calls each group of stations on a transmitter an ensemble;
this is very much like the multi-channel digital TV standard DVB-T. ISDB-T operates on unused
TV channels, an approach taken by other countries for TV but never before for radio.

Interaction: Besides audio and video transmission, ISDB also defines data connections
(Data broadcasting) with the internet as a return channel over several media (10Base-T/
100Base-T, Telephone line modem, Mobile phone, Wireless LAN (IEEE 802.11) etc.) and with
different protocols. This is used, for example, for interactive interfaces like data broadcasting
(ARIB STD-B24) and electronic program guides (EPG).

Receiver: There are two types of ISDB receiver: Television and set-top box. The aspect
ratio of an ISDB-receiving television set is 16:9; televisions fulfilling these specs are called Hi-
Vision TV.

There are three TV types: Cathode ray tube (CRT), plasma display panel (PDP) and
liquid crystal display (LCD), with LCD being the most popular Hi-Vision TV on the Japanese
market nowadays.

 Enhanced Definition Television Systems (EDTV)

These are conventional systems modified to offer improved vertical and horizontal resolutions. One of the systems emerging in the US and Europe is known as Improved Definition Television (IDTV). IDTV is an attempt to improve the NTSC image by using digital memory to double the scanning lines from 525 to 1050. The pictures are only slightly more detailed than NTSC images because the signal does not contain any new information. By separating the chrominance and luminance parts of the video signal, IDTV prevents cross-interference between the two. The Double Multiplexed Analogue Components (D2-MAC) standard is designed as an intermediate standard for the transition from the current European analogue standard to the HDTV standard.

Shooting and Editing Video

To add full-screen, full-motion video to your multimedia project, you will need to invest in
specialized hardware and software or purchase the services of a professional video production
studio. In many cases, a professional studio will also provide editing tools and post-production
capabilities that you cannot duplicate with your Macintosh or PC.

 Video Tips

A useful tool easily implemented in most digital video editing applications is "blue screen," "Ultimatte," or "chroma key" editing. Blue screen is a popular technique for making multimedia
titles because expensive sets are not required. Incredible backgrounds can be generated using
3-D modeling and graphic software, and one or more actors, vehicles, or other objects can be
neatly layered onto that background. Applications such as VideoShop, Premiere, Final Cut
Pro, and iMovie provide this capability.

Recording Formats

 S-VHS video

In S-VHS video, color and luminance information are kept on two separate tracks. The
result is a definite improvement in picture quality. This standard is also used in Hi-8. Still, if your
ultimate goal is to have your project accepted by broadcast stations, this would not be the best
choice.

 Component (YUV)

In the early 1980s, Sony began to experiment with a new portable professional video format based on Betamax. Panasonic then developed its own standard based on a similar technology, called "MII." Betacam SP has become the industry standard for professional video field recording. This format may soon be eclipsed by a new digital version called "Digital Betacam."

7.6 Digitization of Audio and Video Objects

 Digitization of Audio

Sound and other analog data is generally represented as a transverse wave, and can be
converted to digital form by a process called sampling. The two important aspects of sampling
are sampling size and sampling rate.

Sampling size refers to the number of bits used to store each sample from the analog wave. For example, an 8-bit sample can represent 256 (2^8 = 256) possible levels in a particular sample.

A higher sampling size will result in increased accuracy, but higher data storage requirements. Sampling rate refers to the number of samples or slices taken of the analog wave in 1 second. The higher the sampling rate, the better will be the representation of the initial analog signal.
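Both parameters can be seen in a small sketch (assuming NumPy; the 8000 Hz rate, 8-bit size and 440 Hz tone are arbitrary choices for illustration):

    import numpy as np

    rate = 8000        # sampling rate: samples taken per second
    bits = 8           # sampling size: bits per sample -> 2^8 = 256 levels

    t = np.arange(rate) / rate               # one second of sample instants
    wave = np.sin(2 * np.pi * 440 * t)       # a 440 Hz analog-style tone

    levels = 2 ** bits
    # Quantize the -1..1 wave into 256 discrete integer levels (0..255).
    samples = np.round((wave + 1) / 2 * (levels - 1)).astype(np.uint8)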

 Methods of Digitizing / Capturing Video Images

Capturing full motion video requires a video capture card to digitize the signal (unless
using a digital video recorder, in which case it is already digitized) before storing on disk for
later editing.

The standard PAL (Phase Alternate Line) video signal used in India displays a frame rate
of 25 frames per second. One frame of medium resolution and 16-bit color requires approximately
1 Mb of storage space per frame. This translates to 25 Mb per second of video, or a staggering
1,500 Mb per minute.
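The arithmetic behind those figures, as a sketch (plain Python, taking the text's rough value of 1 MB per frame):

    frame_bytes = 1_000_000   # ~1 MB per medium-resolution 16-bit color frame
    fps = 25                  # PAL frame rate

    per_second = frame_bytes * fps    # 25,000,000 bytes, ~25 MB per second
    per_minute = per_second * 60      # 1,500,000,000 bytes, ~1,500 MB per minute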

Current personal computers cannot sustain a transfer rate between secondary and primary
storage of 1,500 Mb per minute, so a number of solutions are applied including:

 Video data will be compressed during recording, using a codec.

 Decreased color depth to fewer colors or even black and white shades requires significantly
less memory.

 Decreased resolution reduces number of pixels to describe in each frame.

7.7 Summary
 A digital image is represented by a matrix of numeric values each representing a quantized
intensity value.

 Bitmaps are used for photo-realistic images and for complex drawing requiring fine detail.

 Vector-drawn objects are used for lines, boxes, circles, polygons, and other graphic shapes
that can be mathematically expressed in angles, coordinates, and distances.

 Rendering is the process of generating an image from a model (or models in what
collectively could be called a scene file), by means of computer programs.

 Color palette is a subset of all possible colors a monitor can display that is being used to
display the current document.

 A color generator or color scheme selector is a tool for anyone in need of a color scheme.

 Animation is the rapid display of a sequence of images of 2-D artwork or model positions
in order to create an illusion of movement.

 Four broadcast and video standards and recording formats are commonly in use around
the world: NTSC, PAL, SECAM, and HDTV.

 Animation catches the eye and makes things noticeable. But, like sound, animation quickly
becomes trite if it is improperly applied.

 Video standards and formats are still being refined as transport, storage, compression,
and display technologies take shape in laboratories and in the marketplace and while
equipment and post-processing evolves from its analog beginnings to become fully digital,
from capture to display.

 DVD Authoring software is used to create digital video disks which can be played on a
DVD player.

 Media player is a term typically used to describe computer software for playing back
multimedia files. While many media players can play both audio and video, others focus
only on one media type or the other.

7.8 Check Your Answers


1. I-B, II-C, III-D, IV-A

2. Bitmaps

3. Tagged Interchange File Format

4. Dithering

5. True

6. d. Digital Signal Processing

7. b. Wipes, Fades, Zooms, and Dissolves

8. True

9. GIF89a

10. 16:9

7.9 Model Questions


1. Define digital image. List out the formats of digital images.

2. Define Bitmaps.

3. What is color depth?

4. Define Resolution.

5. Write short notes on Bitmap software.

6. List out the image file formats

7. Discuss in detail about animation techniques.



8. Explain in detail about images with an example.

9. Write short notes on broadcast standards.

10. Explain in detail about digitization of audio and video objects.



LESSON 8
COMPRESSION TECHNIQUES IN
MULTIMEDIA SYSTEMS

Structure
8.1 Introduction

8.2 Learning Objectives

8.3 Compression and Decompression

8.4 Text Compression

8.5 Image Compression

8.6 Video Compression

8.7 Audio Compression

8.8 Summary

8.9 Check Your Answers

8.10 Model Questions

8.1 Introduction

Data compression is the process of encoding data using a representation that reduces
the overall size of data. This reduction is possible when the original dataset contains some type
of redundancy. Data compression, also called compaction, is the process of reducing the amount of data needed for the storage or transmission of a given piece of information, typically by the use of encoding techniques. Multimedia compression means employing tools and techniques in
order to reduce the file size of various media formats.

8.2 Learning Objectives


At the end of this lesson, the learner will be able to

 Learn methods for compressing various kinds of data such as text, images, video and audio

 Understand data compression techniques for multimedia and other applications

 Understand different multimedia compression standards

 Design and develop multimedia systems according to the requirements of multimedia applications

8.3 Compression and Decompression

Compression is a way of making files take up less space. In multimedia systems, in order to manage large multimedia data objects efficiently, these data objects need to be compressed to reduce the file size for storage of these objects.

Compression tries to eliminate redundancies in the pattern of data.

For example, if a black pixel is followed by 20 white pixels, there is no need to store all 20
white pixels. A coding mechanism can be used so that only the count of the white pixels is
stored. Once such redundancies are removed, the data object requires less time for transmission
over a network. This in turn significantly reduces storage and transmission costs.

Types of Compression

Compression and decompression techniques are utilized for a number of applications, such as facsimile systems, printer systems, document storage and retrieval systems, video teleconferencing systems, and electronic multimedia messaging systems. An important standardization of compression algorithms was achieved by the Consultative Committee for International Telephony and Telegraphy (CCITT) when it specified Group 2 compression for facsimile systems.

When information is compressed, the redundancies are removed.

Sometimes removing redundancies is not sufficient to reduce the size of the data object to manageable levels. In such cases, some real information is also removed. The primary criterion is that the removal of real information should not perceptibly affect the quality of the result. In the case of video, compression causes some information to be lost; information at a level of detail considered not essential for a reasonable reproduction of the scene is discarded.

This type of compression is called lossy compression. Text compression, on the other hand, must not lose any information; it is called lossless compression.

(i) Lossless Compression

In lossless compression, data is not altered or lost in the process of compression or decompression. Decompression generates an exact replica of the original object. Text compression is a good example of lossless compression. The repetitive nature of text, sound and graphic images allows replacement of repeated strings of characters or bits by codes. Lossless compression techniques are good for text data and for repetitive data in images, such as binary images and gray-scale images.

Some of the commonly accepted lossless standards are given below:

 PackBits encoding (Run-length encoding)

 CCITT Group 3 1-D Compression

 CCITT Group 3 2-D Compression

 CCITT Group 4 2-D Compression

 Lempel-Ziv and Welch algorithm (LZW)

(ii) Lossy Compression

Lossy compression means that some loss occurs while compressing information objects. Lossy compression is used for compressing audio, gray-scale or color images, and video objects in which absolute data accuracy is not necessary.

The idea behind lossy compression is that the human eye fills in the missing information in the case of video.

An important consideration, however, is how much information can be lost without affecting the result. For example, in a gray-scale image, if several bits are missing, the information is still perceived in an acceptable manner as the eye fills in the gaps in the shading gradient.

Lossy compression is applicable in medical screening systems, video tele-conferencing, and multimedia electronic messaging systems.

Lossy compression techniques can be used alone or in combination with other compression methods in a multimedia object consisting of audio, color images, and video, as well as other specialized data types.

The following lists some of the lossy compression mechanisms:

 Joint Photographic Experts Group (JPEG)

 Moving Picture Experts Group (MPEG)

 Intel DVI

 CCITT H.261 (P * 24) Video Coding Algorithm

 Fractals.

8.4 Text Compression

Binary Image compression schemes

Binary Image Compression Scheme is a scheme by which a binary image containing black and white pixels is generated when a document is scanned in a binary mode.

The schemes are used primarily for documents that do not contain any continuous-tone information or where the continuous-tone information can be captured in a black and white mode to serve the desired purpose.

The schemes are applicable in office/business documents, handwritten text, line graphics, engineering drawings, and so on. Let us view the scanning process. A scanner scans a document as sequential scan lines, starting from the top of the page.

A scan line is a complete line of pixels, of height equal to one pixel, running across the page. The scanner scans the first line of pixels, then the second line, and works its way down to the last scan line of the page. Each scan line is scanned from left to right, generating black and white pixels for that scan line.

This uncompressed image consists of a single bit per pixel containing black and white
pixels. Binary 1 represents a black pixel, binary 0 a white pixel. Several schemes have been
standardized and used to achieve various levels of compressions.

PackBits Encoding (Run-Length Encoding)

It is a scheme in which a consecutive repeated string of characters is replaced by two bytes. It is the simplest and one of the earliest data compression schemes developed, and it does not need a formal standard. It is used to compress black and white (binary) images. Of the two replacement bytes, the first contains a number representing the number of times the character is repeated, and the second contains the character itself.

In some cases, one bit is used to represent the pixel value and the other seven bits to represent the run length.

Example (a sketch follows this list):

 Run-length encoding is used when the source information comprises long substrings of the same character or binary digit

 000000011111111110000011

 is represented as: 0,7 1,10 0,5 1,2

 If the data is binary and we know the first bit is 0, then the code becomes: 7, 10, 5, 2

 7 “zeros” followed by 10 “ones” followed by 5 “zeros” followed by 2 “ones”, etc.
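A minimal run-length encoder along these lines (a sketch in Python; the (value, count) pairs mirror the example above):

    def rle_encode(bits):
        """Collapse runs of identical symbols into (symbol, run_length) pairs."""
        runs = []
        for b in bits:
            if runs and runs[-1][0] == b:
                runs[-1][1] += 1          # extend the current run
            else:
                runs.append([b, 1])       # start a new run
        return [(s, n) for s, n in runs]

    print(rle_encode("000000011111111110000011"))
    # [('0', 7), ('1', 10), ('0', 5), ('1', 2)]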

CCITT Group 3 1-D Compression

This scheme is based on run-length encoding and assumes that a typical scanline has
long runs of the same color.

This scheme was designed for black and white images only, not for gray scale or color
images. The primary application of this scheme is in facsimile and early document imaging
system.

 Huffman Encoding

A modified version of run-length encoding is Huffman encoding. It is used for many software-based document imaging systems and for encoding the pixel run lengths in CCITT Group 3 1-D and Group 4.

It is variable-length encoding: it generates the shortest code for frequently occurring run lengths and longer codes for less frequently occurring run lengths.

Mathematical Algorithm for Huffman encoding:

The Huffman encoding scheme is based on a coding tree, which is constructed from the probability of occurrence of white pixels or black pixels in the run length or bit stream.
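The construction can be sketched with a priority queue. The fragment below is a generic Huffman code builder over symbol probabilities; the three run-length symbols and their probabilities are made-up illustrations, not the actual CCITT statistics:

    import heapq

    def huffman_codes(freq):
        """Build a Huffman code table from {symbol: probability}."""
        # Heap entries: (probability, tiebreaker, {symbol: code-so-far}).
        heap = [(p, i, {sym: ""}) for i, (sym, p) in enumerate(freq.items())]
        heapq.heapify(heap)
        count = len(heap)
        while len(heap) > 1:
            p0, _, left = heapq.heappop(heap)    # two least probable subtrees
            p1, _, right = heapq.heappop(heap)
            merged = {s: "0" + c for s, c in left.items()}
            merged.update({s: "1" + c for s, c in right.items()})
            heapq.heappush(heap, (p0 + p1, count, merged))
            count += 1
        return heap[0][2]

    # Frequent run lengths receive short codes, rare ones long codes.
    print(huffman_codes({"16 white": 0.5, "16 black": 0.1, "other": 0.4}))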

Table 8.1 below shows the CCITT Group 3 codes for white run lengths and black run lengths.

Table 8.1: Run-Length code – 16 pixels



For example, from Table 8.1, the run-length code for 16 white pixels is 101010, and for 16 black pixels 0000010111. Statistically, the occurrence of 16 white pixels is more frequent than the occurrence of 16 black pixels. Hence, the code generated for 16 white pixels is much shorter. This allows for quicker decoding. For this example, the coding tree structure of Fig. 8.1 can be constructed.

Table 8.2: Run-Length code-1792 pixels

The codes for runs greater than 1792 pixels are identical for black and white pixels. A new code indicates a reversal of color; that is, the pixel color code is relative to the color of the previous pixel sequence.

Table 8.3 shows the codes for pixel sequences larger than 1792 pixels.

Table 8.3: Run-Length code-1792 pixels



CCITT Group 3 compression utilizes Huffman coding to generate a set of make-up codes
and a set of terminating codes for a given bit stream. Make-up codes are used to represent run
length in multiples of 64 pixels. Terminating codes are used to represent run lengths of less
than 64 pixels.

As shown in Table 8.1, run-length codes for black pixels are different from the run-length codes for white pixels. For example, the run-length code for 64 white pixels is 11011, while the run-length code for 64 black pixels is 0000001111. Consequently, the run length of 132 white pixels is encoded by the following two codes:

Make-up code for 128 white pixels - 10010

Terminating code for 4 white pixels - 1011

The compressed bit stream for 132 white pixels is therefore 100101011, a total of nine bits. The compression ratio is about 14.7: the total number of bits (132) divided by the number of bits used to code them (9).
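That two-table lookup can be sketched as follows (the two codes are the ones quoted above; a real Group 3 coder would carry the complete CCITT make-up and terminating tables):

    # Hypothetical mini-tables holding only the codes used in this example.
    makeup_white = {128: "10010"}
    terminating_white = {4: "1011"}

    def encode_white_run(n):
        """Encode a white run: make-up code (multiple of 64) + terminator."""
        out = ""
        if n >= 64:
            base = (n // 64) * 64
            out += makeup_white[base]
            n -= base
        return out + terminating_white[n]

    bits = encode_white_run(132)     # "100101011", nine bits
    print(bits, 132 / len(bits))     # compression ratio of about 14.7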

CCITT Group 3 uses a very simple data format. This consists of sequential blocks of
data for each scan line, as shown in Table 8.4.

Fig. 8.1 Coding tree for 16 white pixels

Note that the file is terminated by a number of EOLs (End of Line) if there is no change in the line from the previous line (for example, white space).

Table 8.4: CCITT Group 3- 1D File Format

Advantages of CCITT Group 3- 1D Compression

CCITT Group 3 compression has been used extensively due to the following two advantages:

 It is simple to implement in both hardware and software.

 It is a worldwide standard for facsimile which is accepted for document imaging applications. This allows document imaging applications to incorporate fax documents easily.

 CCITT Group 3 compression utilizes Huffman coding to generate a set of make-up codes and a set of terminating codes for a given bit stream.

 CCITT Group 3 uses a very simple data format, consisting of sequential blocks of data for each scan line.

CCITT Group 3- 2D Compression

It is also known as modified run-length encoding. It is used for software-based imaging systems and facsimile. It is easier to decompress in software than CCITT Group 4. The CCITT Group 3 2D scheme uses a "k" factor where the image is divided into several groups of k lines. This scheme is based on the statistical nature of images: the image data across adjacent scanlines is redundant.

If a black and white transition occurs on a given scanline, chances are the same transition will occur within + or - 3 pixels in the next scanline.

Necessity of k factor

When CCITT Group 3- 2D compression is used, the algorithm embeds Group 3- 1D coding between every k groups of Group 3- 2D coding, allowing the Group 3- 1D coding to act as the synchronizing line in the event of a transmission error. Therefore, when a transmission error occurs due to a bad communication link, the Group 3- 1D line can be used to resynchronize and correct the error.

Data formatting for CCITT Group 3- 2D

The 2D scheme uses a combination of additional codes called vertical code, pass code,
and horizontal code to encode every line in the group of k lines.

The steps of the pseudo code to encode the coding line are:

i) Parse the coding line and look for the change in the pixel value. (Change is found at location a1.)

ii) Parse the reference line and look for the change in the pixel value. (Change is found at location b1.)

iii) Find the difference in location between b1 and a1: delta = b1 - a1

Advantages of CCITT Group 3- 2D

 The implementation of the k factor allows error-free transmission.

 The compression ratio achieved is better than CCITT Group 3 1D.

 It is accepted for document imaging applications.

Disadvantage
 It doesn’t provide dense compression

CCITT Group 4 -2D compression

CCITT Group 4 compression is a two-dimensional coding scheme without the k-factor.

In this method, the first reference line is an imaginary all-white line above the top of the image. The first group of pixels (scanline) is encoded utilizing the imaginary white line as the reference line.

The newly coded line becomes the reference line for the next scan line. The k-factor in this case is the entire page of lines. In this method, there are no end-of-line (EOL) markers before the start of the compressed data.

Lempel-Ziv and Welch algorithm (LZW)

The LZW algorithm is a very common compression technique. This algorithm is typically
used in GIF and optionally in PDF and TIFF. On Unix-like operating systems,
the compress command compresses a file so that it becomes smaller. The compressed file’s
name is given the extension .Z. It is lossless, meaning no data is lost when compressing. The
algorithm is simple to implement and has the potential for very high throughput in hardware
implementations. It is the algorithm of the widely used UNIX file compression utility compress,
and is used in the GIF image format.

The idea relies on recurring patterns to save data space. LZW is a foremost technique for general-purpose data compression due to its simplicity and versatility. It is the basis of many PC utilities that claim to "double the capacity of your hard drive".

LZW compression works by reading a sequence of symbols, grouping the symbols into
strings, and converting the strings into codes. Because the codes take up less space than the
strings they replace, we get compression.

Characteristic features of LZW includes,

 LZW compression uses a code table, with 4096 as a common choice for the number of
table entries. Codes 0-255 in the code table are always assigned to represent single
bytes from the input file.

 When encoding begins the code table contains only the first 256 entries, with the remainder
of the table being blanks. Compression is achieved by using codes 256 through 4095 to
represent sequences of bytes.

 As the encoding continues, LZW identifies repeated sequences in the data, and adds
them to the code table.

 Decoding is achieved by taking each code from the compressed file and translating it
through the code table to find what character or characters it represents.

Compression using LZW

Example 8.1: Use the LZW algorithm to compress the string: BABAABAAA

The steps involved are systematically shown in the diagram below.

Fig. 8.2 String Compression
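In place of the missing diagram, the steps can be reproduced with a compact LZW encoder sketch (the dictionary is seeded with single characters at codes 0-255, as described above):

    def lzw_encode(data):
        """LZW: grow a dictionary of seen strings, emitting one code each."""
        table = {chr(i): i for i in range(256)}   # codes 0-255: single bytes
        next_code = 256
        s, out = "", []
        for ch in data:
            if s + ch in table:
                s += ch                      # keep extending the match
            else:
                out.append(table[s])         # emit code for longest match
                table[s + ch] = next_code    # add the new string to the table
                next_code += 1
                s = ch
        out.append(table[s])
        return out

    print(lzw_encode("BABAABAAA"))   # [66, 65, 256, 257, 65, 260]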

Advantages of LZW over Huffman:

 LZW requires no prior information about the input data stream.

 LZW can compress the input stream in one single pass.

 Another advantage of LZW is its simplicity, allowing fast execution.

8.5 Image Compression

Color, Gray Scale and Still-Video Image Compression

 Color:

Color is a part of life we take for granted. Color adds another dimension to objects and helps in making things stand out. Color adds depth to images, enhances images, and helps set objects apart from the background.

Let us review the physics of color. Visible light is a form of electromagnetic radiation or
radiant energy, as are radio frequencies or x-rays. The radiant energy spectrum contains audio
frequencies, radio frequencies, infrared, visible light, ultraviolet rays, x-rays and gamma rays.

Radiant energy is measured in terms of frequency or wavelength. The relationship between the two is

λ = c / f

where λ is the wavelength in meters, c is the velocity of light in meters per second, and f is the frequency of the radiation in hertz.

Since all electromagnetic waves travel through space at the velocity of light, i.e. 3 × 10^8 meters/second, the wavelength is calculated by

λ = (3 × 10^8) / f

 Color Characteristics

We typically define color by its brightness, the hue and depth of the color.

 Luminance or Brightness

This is the measure of the brightness of the light emitted or reflected by an object; it depends on the radiant energy of the color band.

 Hue 

This is the color sensation produced in an observer due to the presence of certain
wavelengths of color. Each wavelength represents a different hue.

 Saturation 

This is a measure of color intensity, for example, the difference between red and pink. 

 Color Models

Several calm’ models have been developed to represent color mathematically. 

 Chromaticity Model

 It is a three-dimensional model with two dimensions, x and y, defining the color, and the third dimension defining the luminance. It is an additive model, since x and y are added to generate different colors.

 RGB Model

RGB means Red Green Blue. This model implements additive theory in that
different intensities of red, green and blue are added to generate various colors.

 HSI Model

The Hue Saturation and Intensity (HSI) model represents an artist’s impression of tint,
shade and tone. This model has proved suitable for image processing for filtering and smoothing
images.

 CMYK Model

The Cyan, Magenta, Yellow and Black color model is used in desktop publishing printing
devices. It is a color-subtractive model and is best used in color printing devices only.

 YUV Representation

The NTSC developed the YUV three-dimensional color model, where Y is the luminance component and U, V are the chrominance components.

The luminance component contains the black and white, or gray-scale, information. The chrominance components contain the color information, where U is red minus cyan and V is magenta minus green.
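One widely used set of conversion weights can illustrate the model (these are the ITU-R BT.601 luma weights, shown as a sketch rather than the NTSC's exact historical definition):

    def rgb_to_yuv(r, g, b):
        """Convert one RGB pixel (0-255) to Y (luma) and U, V (chroma)."""
        y = 0.299 * r + 0.587 * g + 0.114 * b   # gray-scale information
        u = 0.492 * (b - y)                     # blue-difference chrominance
        v = 0.877 * (r - y)                     # red-difference chrominance
        return y, u, v

    print(rgb_to_yuv(255, 0, 0))   # pure red: large positive V, negative U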

Joint Photographic Experts Group Compression (JPEG)

The first stage converts the signal from the spatial RGB domain to the YUV domain and then, by performing a discrete cosine transform, to the frequency domain. This process allows separating the luminance or gray-scale components from the chrominance components of the image.

A joint working committee of ISO and CCITT formed the Joint Photographic Experts
Group. It focused exclusively on still image compression.

 Another  joint  committee,  known  as  the  Motion  Picture  Experts  Group  (MPEG),  is
concerned with full motion video standards.

JPEG is a compression standard for still color images and grayscale images, otherwise
known as continuous tone images.

JPEG has been released as an ISO standard in two parts

 Part 1 specifies the modes of operation, the interchange formats, and the encoder/decoder
specifications for these modes, along with substantial implementation guidelines.

 Part 2 describes compliance tests which determine whether the implementation of an
encoder or decoder conforms to the standard specification of Part 1, to ensure interoperability
of systems compliant with the JPEG standard.

Requirements addressed by JPEG

 The design should address image quality.

 The compression standard should be applicable to practically any kind of continuous-tone
digital source image.

 It should be scalable from completely lossless to lossy ranges of compression.

 It should provide sequential encoding.

 It should provide for progressive encoding.

 It should also provide for hierarchical encoding.

 The compression standard should provide the option of lossless encoding so that images
can be guaranteed to provide full detail at the selected resolution when decompressed.

 Definitions in the JPEG Standard

The JPEG Standards have three levels of definition as follows:

* Base line system

* Extended system

* Special lossless function.

The base line system must reasonably decompress color images, maintain a high
compression ratio, and handle from 4 bits/pixel to 16 bits/pixel.

The extended system covers the various encoding aspects such as variable-length
encoding, progressive encoding, and the hierarchical mode of encoding.

The special lossless function is also known as predictive lossless coding. It ensures that,
at the resolution at which the image is decompressed, there is no loss of any detail that was in
the original source image.

 Overview of JPEG Components

JPEG Standard components are:

(i) Baseline Sequential Codec

(ii) DCT Progressive Mode

(iii) Predictive Lossless Encoding

(iv) Hierarchical Mode.

These four components describe four different levels of JPEG compression.

The baseline sequential codec defines a rich compression scheme; the other three modes
describe enhancements to this baseline scheme for achieving different results.

(i)  Baseline Sequential codec

It consists of three steps: formation of DCT coefficients, quantization, and entropy
encoding. It is a rich compression scheme.

The baseline sequential codec uses Huffman coding. Arithmetic coding is another type
of entropy encoding.

 Discrete Cosine Transform (DCT)

DCT is closely related to the Fourier transform, which can represent a signal such as
sound as a combination of frequencies. DCT uses a similar concept to reduce the gray-scale or
color signal amplitudes to equations that require very few points to locate each amplitude: the
Y-axis locates the amplitude and the X-axis locates the frequency.

o DCT Coefficients

The output amplitudes of the set of 64 orthogonal basis signals are called DCT
coefficients; an 8 x 8 block of pixels yields 64 such coefficients.
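A minimal sketch of computing the 64 coefficients of one 8 x 8 block, using SciPy's DCT-II routine (the random sample block is an arbitrary illustration):

    import numpy as np
    from scipy.fftpack import dct

    block = np.random.randint(0, 256, (8, 8)).astype(float) - 128  # level-shifted pixels
    # A 2-D DCT is the 1-D transform applied along rows, then along columns.
    coeffs = dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')
    print(coeffs[0, 0])   # the DC coefficient; the remaining 63 are AC coefficients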

o Quantization 

This is a process that attempts to determine what information can be safely
discarded without a significant loss in visual fidelity. It operates on the DCT coefficients and
provides a many-to-one mapping. The quantization process is fundamentally lossy due to this
many-to-one mapping.

o  De Quantization

This process is the reverse of quantization. Note that since quantization used a many-to-
one mapping, the information lost in that mapping cannot be fully recovered.
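A toy sketch of the two steps makes the loss visible (the quantum value 16 is an arbitrary choice for illustration, not a value from the JPEG tables):

    q = 16                            # quantum (step size) for one coefficient
    coeff = 83.0                      # a DCT coefficient
    quantized = round(coeff / q)      # 5 -- nearby coefficients map to the same integer
    dequantized = quantized * q       # 80, not the original 83
    print(quantized, dequantized)     # the 3 units lost in the mapping are gone for good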

 Huffman Coding

Huffman coding requires that one or more sets of Huffman code tables be specified by
the application for encoding as well as decoding. The Huffman tables may be pre-defined and
used within an application as defaults, or computed specifically for a given image.
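As a rough sketch of how such a table can be computed for a given set of symbols (an illustration only; real JPEG encoders build canonical tables in the exact format the standard defines), the following builds Huffman codes from symbol frequencies:

    import heapq
    from collections import Counter

    def huffman_codes(symbols):
        # Each heap entry: [total frequency, tie-breaker, {symbol: code-so-far}].
        freq = Counter(symbols)
        heap = [[f, i, {s: ""}] for i, (s, f) in enumerate(freq.items())]
        heapq.heapify(heap)
        tick = len(heap)
        while len(heap) > 1:
            lo, hi = heapq.heappop(heap), heapq.heappop(heap)
            # Prefix the codes in the two cheapest subtrees with 0 and 1.
            merged = {s: "0" + c for s, c in lo[2].items()}
            merged.update({s: "1" + c for s, c in hi[2].items()})
            heapq.heappush(heap, [lo[0] + hi[0], tick, merged])
            tick += 1
        return heap[0][2]

    print(huffman_codes("AAAABBBCCD"))  # frequent symbols receive the shortest codes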

 Entropy Encoder / Decoder

Entropy is defined as a measure of randomness, disorder, or chaos, as well as a measure
of a system’s ability to undergo spontaneous change. The entropy encoder compresses the
quantized DCT coefficients more compactly based on their spatial characteristics.

(ii) DCT Progressive Mode

The key steps of formation of DCT coefficients and quantization are the same as
for the baseline sequential codec. The key difference is that each image component is coded
in multiple scans instead of a single scan.

(iii) Predictive Lossless Encoding

Its aim is to define a means of approaching lossless continuous-tone compression. A predictor
combines sample areas and predicts neighboring areas on the basis of the sample areas. The
predicted areas are checked against the fully lossless sample for each area.

The difference is encoded losslessly using Huffman or arithmetic entropy encoding.

(iv) Hierarchical Mode

The hierarchical mode provides a means of carrying multiple resolutions. Each successive
encoding of the image is reduced by a factor of two, in either the horizontal or vertical dimension.

 JPEG Methodology

The JPEG compression scheme is lossy, and utilizes a forward discrete cosine transform
(forward DCT), a uniform quantizer, and entropy encoding. The DCT function removes data
redundancy by transforming data from a spatial domain to a frequency domain; the quantizer
quantizes DCT coefficients with weighting functions to generate quantized DCT coefficients
optimized for the human eye; and the entropy encoder minimizes the entropy of the quantized
DCT coefficients.


The JPEG method is a symmetric algorithm. Here, decompression is the exact reverse
process of compression.

Figure 8.3 below describes a typical DCT-based encoder and decoder, illustrating the
symmetric operation of a DCT-based codec.

Fig. 8.3 DCT based Encoder and Decoder

Figure 8.4 below shows the components and sequence of the DCT-based encoder steps.

Fig. 8.4 DCT based Encoder steps



Quantization

Quantization is a process of reducing the precision of an integer, thereby reducing the
number of bits required to store it.

The baseline JPEG algorithm supports four quantization tables and two Huffman
tables each for DC and AC DCT coefficients. The quantized coefficient is described by the
following equation:

    Quantized coefficient (u, v) = round( DCT coefficient (u, v) / Quantum (u, v) )

Zig-Zag Sequence

Run-length encoding generates a code to represent the count of zero-value DCT
coefficients. This process of run-length encoding gives an excellent compression of a block
consisting mostly of zero values.

Further empirical work proved that the length of zero values in a run can be increased,
giving a further increase in compression, by reordering the runs. JPEG came up with ordering
the quantized DCT coefficients in a zig-zag sequence.
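A sketch of generating the zig-zag order for an 8 x 8 block, followed by a simplified run-length pass (JPEG's real run/size symbol format is more involved than this):

    def zigzag_indices(n=8):
        # Walk the anti-diagonals, alternating direction on each one.
        order = []
        for s in range(2 * n - 1):
            diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
            order.extend(diag if s % 2 else diag[::-1])
        return order

    def run_length(values):
        # Collapse each run of zeros into a (zero_run, value) pair.
        out, run = [], 0
        for v in values:
            if v == 0:
                run += 1
            else:
                out.append((run, v))
                run = 0
        out.append("EOB")   # end of block: nothing but zeros remains
        return out

    # Given a quantized 8 x 8 block q (a list of lists), encode it with:
    # run_length([q[i][j] for i, j in zigzag_indices()])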

Entropy Encoding

Entropy is a term used in thermodynamics for the study of heat and work. Entropy, as
used in data compression, is the measure of the information content of a message in number
of bits. It is represented as

Entropy in number of bits = -log2 (probability of the symbol)
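For example, a symbol with probability 1/8 carries -log2(1/8) = 3 bits of information, and the average over a whole message can be computed directly (a simple check in Python):

    import math
    from collections import Counter

    msg = "BABAABAAA"
    counts, n = Counter(msg), len(msg)
    # Average information content (entropy) in bits per symbol.
    H = -sum((c / n) * math.log2(c / n) for c in counts.values())
    print(H)   # about 0.92 bits/symbol: this message is highly compressible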

Check your Progress


1. When using a ___________ compression system, a file can be compressed and
decompressed without loss of data.

2. We can divide the audio-video services into————— broad categories.

a. Two

b. Three

c. Four

d. None of the above

3. ————— audio video refers to an on-demand request for compressed audio video
files.

a. Streaming Live

b. Streaming Stored

c. Interactive

d. None of the above

4. —————— audio video refers to broadcasting of radio and tv programs on the internet.

a. Interactive

b. Streaming Stored

c. Streaming Live

d. None of the above

5. —————— audio video refers to the use of the internet for interactive audio/ video
applications.

6. In ————— encoding the difference between the samples are encoded instead of
encoding all sample values.

7. What is the process that condenses files to be stored in less space and therefore, sent
faster over the Internet?

a. Data condensation

b. Data compression

c. Zipping

d. Defragmentation

8. Expansion of LZW Coding is ___________________



8.6 Video Compression

To digitize and store a 10-second clip of full-motion video in your computer requires the
transfer of an enormous amount of data in a very short amount of time. Reproducing just one
frame of 24-bit digital component video requires almost 1 MB of computer data; 30
seconds of video will fill a gigabyte hard disk. Full-size, full-motion video requires that the
computer deliver data at about 30 MB per second. This overwhelming technological bottleneck
is overcome using digital video compression schemes or codecs (coders/decoders). A codec
is the algorithm used to compress a video for delivery and then decode it in real time for fast
playback.
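The arithmetic behind these figures is easy to check; the sketch below assumes a 640 x 480 frame at 24 bits per pixel and 30 frames per second, the usual figures behind such estimates:

    frame_bytes = 640 * 480 * 3          # 24 bits = 3 bytes per pixel: ~0.9 MB per frame
    per_second = frame_bytes * 30        # ~27.6 MB of data every second
    thirty_seconds = per_second * 30     # ~830 MB: roughly a gigabyte disk
    print(frame_bytes, per_second, thirty_seconds)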

Real-time video compression algorithms such as MPEG, P*64, DVI/Indeo, JPEG, Cinepak,
Sorenson, ClearVideo, RealVideo, and VDOwave are available to compress digital video
information. Compression schemes use Discrete Cosine Transform (DCT), an encoding
algorithm that quantifies the human eye’s ability to detect color and image distortion. All of
these codecs employ lossy compression algorithms.

In addition to compressing video data, streaming technologies are being implemented to
provide reasonable-quality low-bandwidth video on the Web. Microsoft, RealNetworks, VXtreme,
VDOnet, Xing, Precept, Cubic, Motorola, Viva, Vosaic, and Oracle are actively pursuing the
commercialization of streaming technology on the Web.

QuickTime, Apple’s software-based architecture for seamlessly integrating sound,
animation, text, and video (data that changes over time), is thought of as a compression standard,
but it is really much more than that.

The development of digital video technology has made it possible to use digital video
compression for a variety of telecommunications applications. Standardization of compression
algorithms for video was first initiated by CCITT for teleconferencing and video telephony.

Multimedia standards for Video:



Fig. 8.5 Multimedia Standards

Requirements for full-motion Video Compression

 Applications using MPEG standards can be symmetric or asymmetric. Symmetric
applications require essentially equal use of compression and decompression. Asymmetric
applications compress once but decompress frequently.

Symmetric applications require on-line input devices such as video cameras, scanners
and microphones for digitized sound. In addition to video and audio compression, this standards
activity is concerned with a number of other issues related to the playback of video clips and
sound clips. The MPEG standard has identified a number of such issues that have been
addressed by the standards activity.

Let us review these issues.

 Random Access

Multimedia systems are expected to be able to play a sound or video clip starting from
any frame within that clip, irrespective of the kind of media on which the information is stored.

 VCR paradigm

The VCR paradigm consists of the control functions typically found on a VCR, such as
play, fast forward, rewind, forward search, and reverse search.

 Multiplexing Multiple Compressed Audio and Video Bit Streams

Most multimedia applications require multiplexing of multiple compressed audio and
video bit streams, which may be retrieved from different storage centers on a network. Playback
has to be achieved in a smooth manner to avoid the appearance of a jumpy screen.

 CCITT H.261 Video Coding Algorithms (P x 64)

The linear quantizer uses a step algorithm that can be adjusted based on picture quality
and coding efficiency. H.261 is a standard that uses a hybrid of DCT and DPCM (Differential
Pulse Code Modulation) schemes with motion estimation.

It also defines the data format. Each MB contains the DCT coefficients (TCOEFF) of a
block followed by an EOB (a fixed-length end-of-block marker). Each MB consists of an MB
header followed by block data. A GOB (Group of Blocks) consists of a GOB header followed by
MBs. The picture layer consists of a picture header followed by GOBs. H.261 is designed for
dynamic use and provides a fully contained organization and a high level of interactive control.

Moving Picture Experts Group Compression (MPEG)

The MPEG standards consist of a number of different standards.

The MPEG-2 suite of standards consists of standards for MPEG-2 Video, MPEG-2 Audio
and MPEG-2 Systems. It is also defined at different levels, called profiles.

 The main profile is designed to cover the largest number of applications. It supports
digital video compression in the range of 2 to 15 Mbits/sec. It also provides a generic solution
for television worldwide, including cable, direct broadcast satellite, fiber optic media, and optical
storage media (including digital VCRs).

 MPEG Coding Methodology

The above requirements can be achieved only by incremental coding of successive
frames, known as interframe coding. Random access to information by frame requires coding
confined to a specific frame, known as intraframe coding.

The MPEG standard addresses these two requirements by providing a balance between
interframe coding and intraframe coding. The MPEG standard also provides for recursive and
non-recursive temporal redundancy reduction.

The MPEG video compression standard provides two basic schemes: discrete-transform-
based compression for the reduction of spatial redundancy, and block-based motion
compensation for the reduction of temporal (motion) redundancy. During the initial stages of
DCT compression, both the full-motion MPEG and still-image JPEG algorithms are essentially
identical. First an image is converted to the YUV color space (a luminance/chrominance color
space similar to that used for color television). The pixel data is then fed into a discrete cosine
transform, which creates a scalar quantization (a two-dimensional array representing various
frequency ranges represented in the image) of the pixel data.

Following quantization, a number of compression algorithms are applied, including run-
length and Huffman encoding. For full-motion video (MPEG-1 and 2), several more levels of
block-based motion-compensated techniques are applied to reduce temporal redundancy, with
both causal and non-causal coding to further reduce spatial redundancy.

The MPEG algorithm for spatial reduction is lossy and is defined as a hybrid which employs
motion compensation, a forward discrete cosine transform (FDCT), a uniform quantizer, and
Huffman coding. Block-based motion compensation is utilized for reducing temporal redundancy
(i.e. to reduce the amount of data needed to represent each picture in a video sequence).
Motion-compensated reduction is a key feature of MPEG.

 Moving Picture Types

Moving pictures consist of sequences of video pictures or frames that are played back at
a fixed number of frames per second. To achieve the requirement of random access, a set of
pictures can be defined to form a group of pictures (GOP), consisting of one or more of the
following three types of pictures.

1. Intra pictures (I)

2. Uni-directionally predicted pictures (P)

3. Bi- directionally predicted pictures (B)

 A GOP consists of consecutive pictures that begin with an intra-picture. The intra-picture
is coded without any reference to any other picture in the group.

 Predicted pictures are coded with a reference to a past picture, either an intra-picture or
a uni-directionally predicted picture. Bi-directionally predicted pictures are never used as
references.

Motion Compensation for Coding MPEG

Fig. 8.6 Moving Picture

Let us review the concept of macroblocks and understand the role they play in compression.

MACRO BLOCKS

For the video coding algorithm recommended by CCITT, CIF and QCIF are divided into
a hierarchical block structure consisting of pictures, groups of blocks (GOBs), Macro Blocks
(MBs), and blocks. Each picture frame is divided into 16 x 16 macroblocks. Each macroblock is
composed of four 8 x 8 luminance (Y) blocks and two 8 x 8 chrominance (Cb and Cr) blocks.

This set of six blocks, called a macroblock, is the basic hierarchical component used for
achieving a high level of compression.

 Motion compensation

Motion compensation is the basis for most compression algorithms for visual telephony
and full-motion video. Motion compensation assumes that the current picture is some translation
of a previous picture. This creates the opportunity for using prediction and interpolation.
Prediction requires only the current frame and the reference frame.

Based on the motion vector values generated, the prediction approach attempts to find the
relative new position of the object and confirms it by comparing blocks exhaustively. In the
interpolation approach, the motion vectors are generated in relation to two reference frames,
one from the past and one from the next predicted frame.

The best-matching blocks in both reference frames are searched, and the average is
taken as the position of the block in the current frame. The motion vectors for the two reference
frames are averaged.
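A bare-bones sketch of the block-matching idea behind prediction (a simplified illustration, not MPEG's actual encoder; the function name, the 16 x 16 block size and the small search window are assumptions for the example):

    import numpy as np

    def find_motion_vector(ref, cur, by, bx, block=16, search=8):
        # Exhaustively search a small window of the reference frame for the
        # best match to the current block, using sum of absolute differences.
        target = cur[by:by + block, bx:bx + block].astype(int)
        best_sad, best_mv = None, (0, 0)
        for dy in range(-search, search + 1):
            for dx in range(-search, search + 1):
                y, x = by + dy, bx + dx
                if y < 0 or x < 0 or y + block > ref.shape[0] or x + block > ref.shape[1]:
                    continue
                sad = np.abs(ref[y:y + block, x:x + block].astype(int) - target).sum()
                if best_sad is None or sad < best_sad:
                    best_sad, best_mv = sad, (dy, dx)
        return best_mv   # the displacement to encode instead of the raw block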

 Picture Coding Method

In this coding method, motion compensation is applied bi-directionally. In MPEG
terminology, the motion-compensated units are called macroblocks (MBs).

MBs are 16 x 16 blocks that contain a number of 8 x 8 luminance and chrominance
blocks. Each 16 x 16 macroblock can be of type intra-picture, forward-predicted, backward-
predicted, or average.

 MPEG Encoder

Figure 8.7 below shows the architecture of an MPEG encoder. It contains a DCT, a quantizer,
a Huffman coder and motion compensation. These represent the key modules in the encoder.

Fig. 8.7 Architecture of MPEG Encoder

The Sequence of events for MPEG

First an image is converted to the YUV color space.

The pixel data is then fed into a DCT, which creates a scalar quantization of the pixel
data.

Following quantization, a number of compression algorithms are applied, including run-
length and Huffman encoding. For full-motion video, several more levels of motion compensation
compression and coding are applied.

 MPEG -2

It is defined to include current television broadcasting compression and decompression
needs, and attempts to include hooks for HDTV broadcasting.

 The MPEG-2 Standard Supports:

1. Video Coding: * MPEG-2 profiles and levels.

2. Audio Coding: * MPEG-1 audio standard for backward compatibility.

* Layer-2 audio definitions for MPEG-2 and stereo sound.

* Multichannel sound.

3. Multiplexing: MPEG-2 definitions

MPEG-2, “The Grand Alliance”

It consists of the following companies: AT&T, MIT, Philips, Sarnoff Labs, GI, Thomson,
and Zenith.

The MPEG-2 committee and the FCC formed this alliance. These companies together have
defined the advanced digital television system that includes the US and European HDTV systems.
The outline of the advanced digital television system is as follows:

1. Format: 1080/2:1/60 (interlaced) or 720/1:1/60 (progressive)

2. Video coding: MPEG-2 main profile and high level

3. Audio coding: Dolby AC3

4. Multiplexor: As defined in MPEG-2

5. Modulation: 8- VSB for terrestrial and 64-QAM for cable.

 Vector Quantization

Vector quantization provides a multidimensional representation of information stored in
look-up tables. It is an efficient pattern-matching algorithm in which an image is decomposed
into two or more vectors, each representing particular features of the image that are matched
to a code book of vectors.

These are coded to indicate the best fit.

In image compression, source samples such as pixels are blocked into vectors so that
each vector describes a small segment or sub-block of the original image.

The image is then encoded by quantizing each vector separately.
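A bare-bones sketch of the encoding step (the four-entry code book of 2-pixel vectors is an arbitrary illustration):

    import numpy as np

    codebook = np.array([[0, 0], [0, 255], [255, 0], [255, 255]], float)

    def vq_encode(vectors):
        # Replace each vector by the index of its nearest code book entry.
        d = ((vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        return d.argmin(axis=1)

    pixels = np.array([[10, 20], [250, 240], [5, 200]], float)
    print(vq_encode(pixels))   # [0 3 1]: indices into the code book, not pixels, are stored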

 Intel’s Indeo Technology

Developed by Intel Architecture Labs, Indeo Video is a software technology that reduces
the size of uncompressed digital video files by a factor of five to ten.

 Indeo technology uses multiple types of ‘lossy’ and ‘lossless’ compression techniques.

 DVI/Indeo

DVI is a proprietary, programmable compression/decompression technology based on the
Intel i750 chip set. This hardware consists of two VLSI (Very Large Scale Integration) chips to
separate the image processing and display functions. Two levels of compression and
decompression are provided by DVI: Production Level Video (PLV) and Real Time Video (RTV).
PLV and RTV both use variable compression rates. DVI’s algorithms can compress video
images at ratios between 80:1 and 160:1. DVI will play back video in full-frame size and in full
color at 30 frames per second.

 Optimizing Video Files for CD-ROM

CD-ROMs provide an excellent distribution medium for computer-based video: they are
inexpensive to mass-produce, and they can store great quantities of information. CD-ROM
players offer slow data transfer rates, but adequate video transfer can be achieved by taking
care to properly prepare your digital video files.

 Limit the amount of synchronization required between the video and audio. With Microsoft’s
AVI files, the audio and video data are already interleaved, so this is not a necessity, but
with QuickTime files, you should “flatten” your movie.

 Flattening means you interleave the audio and video segments together.

 Use regularly spaced key frames, 10 to 15 frames apart, and temporal compression can
correct for seek time delays. Seek time is how long it takes the CD-ROM player to locate
specific data on the CD-ROM disc. Even fast 56x drives must spin up, causing some
delay (and occasionally substantial noise).

 The size of the video window and the frame rate you specify dramatically affect
performance. In QuickTime, playing 20 frames per second in a 160 x 120-pixel window moves
about the same amount of data as playing 5 frames per second in a 320 x 240 window, since
the larger window contains four times as many pixels. The more data that has to be
decompressed and transferred from the CD-ROM to the screen, the slower the playback.

8.7 Audio Compression


Audio consists of analog signals of varying frequencies. The audio signals are converted
to digital form and then processed, stored and transmitted. Schemes such as linear predictive
coding and Adaptive Differential Pulse Code Modulation (ADPCM) are utilized for compression
to achieve 40-80% compression.
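The idea behind such differential (predictive) schemes can be shown in a few lines; the sketch below is plain DPCM, the simpler ancestor of ADPCM, which additionally adapts its quantizer step size:

    def dpcm_encode(samples):
        # Encode each sample as its difference from the previous sample.
        prev, diffs = 0, []
        for s in samples:
            diffs.append(s - prev)
            prev = s
        return diffs

    def dpcm_decode(diffs):
        prev, out = 0, []
        for d in diffs:
            prev += d
            out.append(prev)
        return out

    signal = [100, 102, 105, 105, 103]
    print(dpcm_encode(signal))   # [100, 2, 3, 0, -2]: small differences need fewer bits
    assert dpcm_decode(dpcm_encode(signal)) == signal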

Audio compression is a form of data compression designed to reduce the size of audio
data files.

Audio compression can mean two things:

 Audio Data Compression

 Audio Level Compression

(i) Audio Data Compression - in which the amount of data in a recorded waveform is reduced
for transmission. This is used in MP3 encoding, internet radio, and the like.

(ii) Audio level compression - in which the dynamic range (difference between loud and
quiet) of an audio waveform is reduced. This is used in guitar effects racks, recording
studios, etc.

MPEG Audio Compression

MPEG audio compression takes advantage of psychoacoustic models, constructing a
large multi-dimensional lookup table to transmit masked frequency components using fewer
bits.

 MPEG Audio Overview

MPEG/audio is a generic audio compression standard. Unlike vocal-tract-model coders
specially tuned for speech signals, the MPEG/audio coder gets its compression without making
assumptions about the nature of the audio source. Instead, the coder exploits the perceptual
limitations of the human auditory system. Much of the compression results from the removal of
perceptually irrelevant parts of the audio signal. Removal of such parts results in inaudible
distortions, thus MPEG/audio can compress any signal meant to be heard by the human ear. In
keeping with its generic nature, MPEG/audio offers a diverse assortment of compression modes:

 The audio sampling rate can be 32, 44.1, or 48 kHz.

 The compressed bit stream can support one or two audio channels in one of 4 possible
modes:

 A monophonic mode for a single audio channel,

 A dual-monophonic mode for two independent audio channels (this is functionally
identical to the stereo mode),

 A stereo mode for stereo channels with a sharing of bits between the channels, but
no joint-stereo coding, and

 A joint-stereo mode that either takes advantage of the correlations between the
stereo channels or the irrelevancy of the phase difference between channels, or
both.

 The compressed bit stream can have one of several predefined fixed bit rates ranging
from 32 to 224 kbits/sec per channel. Depending on the audio sampling rate, this translates
to compression factors ranging from 2.7 to 24. In addition, the standard provides a “free”
bit rate mode to support fixed bit rates other than the predefined rates.
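These factors follow directly from the uncompressed PCM rate; the sketch below assumes 16-bit samples (the exact figures quoted depend on which sampling rate is paired with which bit rate):

    for sample_rate in (32000, 44100, 48000):       # Hz
        pcm_rate = sample_rate * 16                 # uncompressed bits/s per channel
        for target in (32000, 224000):              # compressed bits/s per channel
            print(sample_rate, target, round(pcm_rate / target, 1))
    # e.g. 48 kHz PCM is 768 kbits/s, so a 32 kbits/s stream is a factor of 24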

 MPEG/audio offers a choice of three independent layers of compression. This provides a
wide range of tradeoffs between codec complexity and compressed audio quality:

Layer I is the simplest and is best suited for bit rates above 128 kbits/sec per channel.
For example, Philips’ Digital Compact Cassette (DCC) uses Layer I compression at 192
kbits/s per channel.

Layer II has an intermediate complexity and is targeted for bit rates around 128 kbits/s
per channel. Possible applications for this layer include the coding of audio for Digital Audio
Broadcasting (DAB), the storage of synchronized video-and-audio sequences on CD-ROM,
and the full-motion extension of CD-Interactive, Video CD.

Layer III is the most complex but offers the best audio quality, particularly for bit rates
around 64 kbits/s per channel.

This layer is well suited for audio transmission over ISDN.

All three layers are simple enough to allow single-chip, real-time decoder implementations.

 The coded bit stream supports an optional Cyclic Redundancy Check (CRC) error detection
code.

 MPEG/audio provides a means of including ancillary data within the bit stream.

In addition, the MPEG/audio bit stream makes features such as random access, audio
fast forwarding, and audio reverse possible.

 Digital Audio Compression

– Removal of redundant or otherwise irrelevant information from audio signal

– Audio compression algorithms are referred to as “audio encoders”

 Applications

– Reduces required storage space

– Reduces required transmission bandwidth

8.8 Summary
 Compression is a way of making files take up less space.

 Compression tries to eliminate redundancies in the pattern of data.

 There are two categories of compression techniques used with digital graphics: lossy
and lossless.

 Lossy compression methods include DCT (Discrete Cosine Transform) based coding and
Vector Quantization.

 Lossless compression methods include RLE (Run Length Encoding), string-table
compression, LZW (Lempel-Ziv-Welch), Huffman coding and zlib.

 Compression methods are otherwise known as algorithms, which are calculations that
are used to compress files.

 A CODEC (compressor/de-compressor) is used to carry out the algorithm to save a file in
a compressed format and to open a compressed file.

8.9 Check Your Answers


1. Lossless

2. b. Three

3. b. Streaming Stored

4. c. Streaming Live

5. Interactive

6. Predictive

7. Data Compression

8. Lempel-Ziv-Welch

8.10 Model Questions


1. Categorize compression techniques. Explain briefly.

2. Describe data compression techniques in detail.

3. Define lossy and lossless compression.

4. Compare and contrast any two image compression techniques.

5. Explain the algorithms used in media content with an example.

6. Draw and explain sequential encoding JPEG image compression technique.

7. Explain audio and video compression in detail.

8. Explain text Compression techniques in detail.

9. Explain image compression techniques in detail.

10. Explain MPEG architecture and different kind of picture used with neat sketch of frames.

LESSON 9
WORKING EXPOSURE ON TOOLS

Structure
9.1 Introduction

9.2 Learning Objectives

9.3 Dream Weaver

9.4 Flash

9.5 Photoshop

9.6 Summary

9.7 Check Your Answers

9.8 Model Questions

9.1 Introduction

Dreamweaver allows users to preview websites in locally-installed web browsers. It also
has site management tools such as FTP/SFTP and WebDAV file transfer and synchronization
features, the ability to find and replace lines of text or code by search terms and regular
expressions across the entire site, and a templating feature that allows single-source update of
shared code and layout across entire sites without server-side includes or scripting.

Adobe Flash (formerly Macromedia Flash) is a multimedia platform originally acquired
by Macromedia and currently developed and distributed by Adobe Systems. 

Flash has become a popular method for adding animation and interactivity to web pages.
Flash is commonly used to create animation, advertisements, and various web page Flash
components, to integrate video into web pages, and more recently, to develop rich Internet
applications.

Adobe Photoshop, or Photoshop, is a powerful graphics editing program developed and
published by Adobe Systems. It is the current market leader for commercial bitmap and image
manipulation software, and is the flagship product of Adobe Systems.

9.2 Learning Objectives


At the end of the lesson, the learner will be able to

 Understand the detailed design plan required to create a successful Web site that
considers audience needs, accessibility features, and various technical issues

 Incorporate text, images, animation, sound, and video into Web pages

 Create an accessible and full-featured Web site with popular multimedia authoring tools,
such as Adobe Dreamweaver, Flash, and Photoshop

 Learn how to design and develop multimedia for real time applications.

9.3 Web Site Development with Dreamweaver

Dream Weaver - Definition

Adobe Dreamweaver is a software program for designing web pages, essentially a more
fully featured HTML web and programming editor. The program provides a WYSIWYG (what
you see is what you get) interface to create and edit web pages. Dreamweaver supports many
markup languages, including HTML, XML, CSS, and JavaScript

Purpose of Dreamweaver

Adobe Dreamweaver CC is a web design and development application that uses both a
visual design surface known as Live View and a code editor with standard features such as
syntax highlighting, code completion, and code collapsing, as well as more advanced features
such as real-time syntax checking and code introspection.

Dreamweaver Features

In addition to the features above, Dreamweaver’s code introspection generates code hints
to assist the user in writing code. Combined with an array of site management tools, Dreamweaver
allows its users to design, code and manage websites as well as mobile content. Dreamweaver
is an Integrated Development Environment (IDE) tool, with a live preview of changes to the
front end. It is positioned as a versatile web design and development tool that enables
visualization of web content while coding.

Dreamweaver, like other HTML editors, edits files locally then uploads them to the remote
web server using FTP, SFTP, or WebDAV. Dreamweaver CS4 now supports the Subversion
(SVN) version control system.

Since version 5, Dreamweaver supports syntax highlighting for the following languages
out of the box:

 Action Script

 Active Server Pages (ASP).

 C#

 Cascading Style Sheets (CSS)

 ColdFusion

 EDML

 Extensible Hyper Text Markup Language (XHTML)

 Extensible Markup Language (XML)

 Extensible Style sheet Language Transformations (XSLT)

 Hyper Text Markup Language (HTML)

 Java

 JavaScript

 PHP

 Visual Basic (VB)



 Visual Basic Script Edition (VBScript)

 Wireless Markup Language (WML)

Support for Active Server Pages (ASP) and Java Server Pages was dropped in version
CS5.

Users can add their own language syntax highlighting. In addition, code completion is
available for many of these languages. The main features of Dreamweaver to be considered are:

 Easy to use visual interface.

 Built-in Code editor.

 Part of the creative cloud suite.

9.3.1 Working with Dreamweaver


To start creating a web site in Dreamweaver, you first need to decide where you’ll store asset files

 The root folder for our local site will become a “mirror” of what will be installed online

 Dreamweaver uses site information to track links and updates to your files

 Select File > New Site; we’ll create a site called bookstore

 To create a folder called Sites, with bookstore (and other sites) as subfolders

 You can also establish a connection between local site and remote server, at site http://

 Dreamweaver will verify links to absolute URLs on the remote site

 Cache option can improve the speed of link and site management tasks

 Refresh button automatically refreshes local site from remote site (but this takes time)

 Click OK and Dreamweaver will set up a site for our bookstore

 You can use the Site window to create a Site map (under Window pulldown menu)

 OK, we can close this window for now and return to Dreamweaver workspace

Work to create content in the Document Window of the Dreamweaver workspace

 On the left is the Object Palette, a set of toolbars, analogous to the Authorware toolbar

 Objects are HTML elements that Dreamweaver will insert into your documents

 Starts with Common tools; more available by clicking on the pulldown arrow at the top

 On the upper right is Launcher—launches other programs (Site, Library, HTML Source
etc.)

 Opens other palette/toolbars and shows which ones are open

 Click on the Site icon, then close the Site window

Right click on document to bring up a menu with lots of editing options!

 At bottom is Page Properties, which lets you modify font, alignment, etc.

 This dialog box lets you edit global properties of the page, such as background

 Type Bookstore into Title (this will appear in browser’s title bar, bookmarks & favorites)

 Design tip: descriptive titles make it easier for search engines to find your page

 Click on the square next to Background Color and select a color (or import image)

 You can also change the Text, Link, Visited Links, and Active Links colors

 Left Margin and Top Margin specifies page margins—in Microsoft IE, not Netscape

 Margin Width and Height is for Netscape, not Microsoft IE!

 Click OK, then save your page by selecting File > Save As (as index.htm)

Let’s make a new file: coftable.htm and edit some text:

 Type in “Mythical Bookstore” then highlight (and keep cursor in highlight area)

 Use the right button to select Heading 1, change the font to Arial and alignment to center

 Our title looks OK, so place the cursor after the heading, hit Enter

 Then type in a mythical address



Properties also appear at lower left corner of Document Window:

 E.g., <body><h1><font>: Click on <h1> to show what this tag includes

 Right click to edit the tag, e.g., edit “center” to “right”

Open the Property Inspector by selecting Window > Properties (you may need to hide
other windows)

 Edit properties of current HTML element

 Change its font size to 5 and change its color

 Property inspector will change when you highlight different elements on a page

Clicking on Launcher’s HTML Source editor (or press F10)

 Here we see the HTML source code that Dreamweaver has generated

 Most of you are familiar with HTML, right?

 HTML uses tags to describe properties of a page and individual elements

 E.g., <h1 align=”center”> is the H1 (Heading1) tag, with an align attribute

 You can use the HTML Source window to edit text, if you prefer this to WYSIWYG!

 Or, you can also choose your own External Editor

The Object palette (you can reopen this by selecting Window > Objects)

 Holding the cursor over each icon on the palette opens a caption box explaining it

 This toolbar lets you insert images, tables, horizontal rules, Java applets, Flash movies,
etc.

Let’s insert an image between our headline and address

 Use the mouse to place the cursor just after the headline and hit Enter

 Click on the Image icon (a tree) on the Object palette to choose an image

 Browse to the folder “images” and click on books.jpg, click select, then OK

Note: now the Property Inspector refers to how this image is embedded on the page

 Click on the Align menu in the Property Inspector, then select Align Center

Now click on the image—now the Property Inspector refers to the image itself

 We see its dimensions and location

 (here, Align is for aligning a picture next to text, not on the page as a whole)

Let’s try another object from the object palette: Horizontal Rule

 Place the cursor after the address, then select the Horizontal Rule button on Object palette

 Use the Property Inspector to change the width to 75% (% via menu) and alignment to
center

Importing text using copy and paste

 Use Notepad to open books.txt, the paste it into the Dreamweaver document

 Notice that any formatting in books.txt is now lost, including paragraph breaks: why?

 Let’s use Dreamweaver to insert paragraph breaks, Heading2 formats, alignment, etc.

 Enter inserts a paragraph break (double-spaced break), Shift-Enter inserts a line break

 Use HTML inspector to take a look at the code: <p> vs. <br>

Creating lists—you can created ordered lists (set off by numbers or letters)

 Unordered lists (preceded by bullets) and definition lists (simply indented)

 Dreamweaver lets you create lists as you type, or you can highlight existing text and apply a list format

 Let’s do it: click the Numbered List button in the Property Inspector (below I) (or choose
Text > List from the menu bar)

 Enter the list items, pressing Enter after each item:

“LoveDogs,” “Spacebopping,” and “Purp L. Elephant”

 Suppose you want to see bullets instead of numbers?

 Mouse-select all the text between “This Month’s Specials” and next horizontal rule

 Click on Bullet list button in Property Inspector—creates an unordered list

 Why is it called “unordered”? — Bullets instead of numbers or letters

Demonstrate undo from Edit menu (note Ctrl-Z short-cut) and Redo (Ctrl-Y)

Mouse-select all the text between “On the CoffeeTable” and “This Month’s Specials”

 Click on the Text Indent icon in the Property Inspector

 Dreamweaver uses the definition list format to create an indented block

Image format options—you’ve learned how to position an embedded image on a page

 Now we’ll see how to position images in relation to text (this is tricky in HTML!)

 Place the cursor below the headline “On the CoffeeTable” then click Insert Image
tool on the Object Palette. Find books.jpg again

 In Image Property Inspector, go to Align pulldown menu and select Absolute Middle

 In Alt text box, type “books” – What does this do? Why is it useful?

Select File > Open to open page called “arica.htm” in the “catalog” folder

 Click Insert Image tool again, then select “arica.gif”

 Use Align pull-down menu to left align the image

 Click on the arrow in lower right corner of Property Inspector to bring up more options:

 In the H Space text box in bottom left of Property Inspector, enter “10”

 This places 10 pixels of horizontal space on either side of the image

 Note how the text now wraps on the right of the image

OK, we’ve got enough content, let’s create hyperlinks to other pages

 Go back to coftable.htm window and select the words “The Arica Conundrum”

 In Property Inspector, click on the Folder icon to right of Link text area

 Create a link to “arica.htm” in the “catalog” folder by double-clicking it



 The text changes to indicate a link (un-highlight the text to see the actual link color)

 Use HTML Source inspector to see what this action has created in HTML

 One more time: create another link from “Varoom” in coftable.htm to catalog\varoom.htm

Note: links can have absolute paths (starting with “http:”) or relative paths (specified relative to the current page or site)

 Use an absolute path to link to another web site, relative path to link to local site

 What’s an advantage of relative paths? makes it easier to move entire web site

Set an internal link to an anchor:

 Place cursor next to “This month’s specials”, then choose Insert > Named Anchor

 Enter name “specials” to represent this anchor

 Now switch to arica.htm and insert cursor to the right of the book’s price

 Enter the text “This month’s special,” select these words

 Use Property Inspector link folder icon to browse back to coftable.htm

 In the Link box, to the right of the filename, type “#specials”—thus specifying an
anchor

 If you wanted to link to a Named Anchor in same page, you wouldn’t need file name

OK, let’s preview our work so far, in a browser, by selecting File > Preview (or pressing
F12)

 Why is it a good idea to preview work in more than one browser?

Image maps are analogous to hot spots on graphics in Authorware

 In coftable.htm, click on graphic of books, then view the Map options in Property
Inspector

 Recall, it’s in lower part of Property Inspector for images, accessible via arrow on
lower right

 This part of the Image Property Inspector is the Image Map Editor:

 Insert a name for the Map: “booksMap”



 (HTML needs to have a map name, so this step is important.)

 The Image Editor can create hot spots of different shapes: rectangle, circle, polygon

 Arrow on left is a selection tool that will let us move or resize the image map

 Select the rectangle tool, create a rectangular shape covering the book Varoom

 Hot spots have links to other pages: click on the Folder icon next to Link text field

 Browse to find catalog\varoom.htm and select it.

9.3.2 Pros and Cons of Dreamweaver

Dreamweaver offers you a ton of possibilities. It would be nearly impossible to break
down all of its features here. But here are a few of the unique advantages that Dreamweaver
offers, as well as a few reasons it might not be the right fit.

Dreamweaver Pros

Dreamweaver is an intuitive and flexible tool that does a lot of things right. Here are
some of the biggest advantages this software can offer you:

(i) Device Testing

With Dreamweaver, you’ll have an instant preview option that lets you test and see how
your website will look across any device. A lot of other tools have this feature. But, with
Dreamweaver, all it takes is a single click to preview and adjust your site on the fly.

(ii) Easy Code Error Testing

When you’re tweaking your code or writing it from scratch, errors are going to accumulate
over time. With Dreamweaver, you can quickly find and fix these errors. Instead of having to
guess and troubleshoot your site for errors, you’ll know what’s wrong and how to fix it.

(iii) Included Font Selection

Designing your site can be a lot of fun. Especially when you start tweaking things like
color, layout, font choice, and more. Dreamweaver has a massive font selection built right into
the software. This makes it easy to find the perfect font for your headers and body text.

(iv) Bundled Stock Photos

If you’ve ever published anything online you know how long the stock photo search can
take. Instead of having to search across a variety of stock photo websites you can do it right
within the tool. There’s a massive selection and you’ll be able to find the perfect photo for your
needs.

(v) Interface Personalization

When you first start using Dreamweaver you might be overwhelmed with all the different
tools and options available. But, you can actually streamline the appearance and use of the
site builder by changing the preferences. Once you know what you use and what you don’t you
can craft the appearance of the builder to suit your needs.

Dreamweaver Cons

Still, Dreamweaver isn’t perfect. If you’re not willing to put in the work of learning how this
software works, you might be better off with a different solution. Here are some of the biggest
drawbacks of Dreamweaver:

(i) Steeper Learning Curve

There are other site builder solutions like Squarespace, Wix, and WordPress that
make it incredibly easy to build out your first website. Creating a basic site and getting it online
with Dreamweaver isn’t too difficult, but creating a site that can do exactly what you want will
take some time.

Since you’re starting with a blank canvas the end result will depend upon your own
creativity and skills. Some users prefer this, but others would prefer a simpler solution that
requires absolutely no coding skills. You can do a lot with Dreamweaver, and a lot of experienced
developers prefer using this software for the flexibility it provides.
Check your Progress

1. How many Sites can you define with one copy of Dreamweaver installed on your
computer? 

a. unlimited

b.  2

c.  10

d.  999 

2. What do you add to a template in order to control where page content goes?

a.  Text Frames

b.  HTML Controllers

c.  Editable Regions

d.  Page Content Controllers

3. Which of the following is NOT a Style?

a. Linked

b. Embedded

c. Inline

d. Orthogonal

4. Which of the following is NOT a Hotspot tool?

a. Orthogonal Hotspot Tool

b. Rectangular Hotspot Tool

c. Oval Hotspot Tool

d. Polygon Hotspot Tool

5. Which of the following is not supported by older browsers?

a. CSS

b. Layers

c. Frames

d. All of the above



9.4 Flash 5

Flash is one of the most popular technologies on the internet, with thousands of websites
using it for introductions, animations and advertisements. Although many people feel that these
animations are sometimes unnecessary, Flash has created a way of including multimedia on
web pages, which will run over a standard internet connection. The recent release, Flash 5,
has brought many changes to the creation of Flash animations. Many of the techniques covered
in this tutorial will also apply to past versions of Flash, as well as Flash MX, the very latest
version, though. Viewing the animations requires the Flash Plugin, which is a free download if
your browser does not already have it.

Why Use Flash?

Flash is one of the best multimedia formats on the internet today for several reasons.
Firstly, the Flash plugin (required to view the animations) is installed on nearly every computer
connected to the internet. All the major browsers come with it installed by default and, for those
who don’t have it, the download is very small. Secondly, Flash is a ‘vector based’ program,
which means the animations and graphics created by it have much smaller file sizes than a
video or streaming media version of the same animation would be. You can also include sound,
graphics and dynamically created information in your animation.

9.4.1 The Flash Interface

When you first open Flash you will find an interface that looks something like this:

Fig. 9.1 Flash Interface



In the center is the large white ‘Stage’. This is the actual movie where you will place all
the objects you want to include in it. Across the top of the screen is the timeline. This is where
you insert all the actions that happen in your movie so that they happen at the correct times. It
is split up into frames. Down the left hand side of the screen is the ‘Tools’ palette. This is where
you will find all the tools for inserting objects and text into your animation.

There are also four floating palettes on the screen. The ‘Mixer’ palette allows you to
choose the colors you will be using in your animation. It will change the colors of the currently
selected object. The ‘Info’ palette will allow you to find out a bit of information about the object
you have selected and will allow you to make changes to the properties of a tool you are using.

The ‘Character’ palette contains all the text formatting tools. Finally the ‘Instance’ palette
contains all the tools for changing objects when you are animating them, including sound and
several other tools for making changes to your animation.

Each of the parts of the Flash window does many different things. Instead of going through
each tool explaining what it does, I will show you examples and explain how to create them,
showing you how to use each tool while doing so.

9.4.2 Basic Drawing

The first thing you need to learn how to do is to draw basic shapes in Flash. We will start
with the most basic shape, the circle/oval. Before you start you might want to move some of the
floating palettes so that you can see enough of the stage to work on.

Firstly, choose the Oval tool from the Tools bar on the left:

Then, draw the oval or circle you want on the stage (just as you would in a normal
graphics program).

Table 9.1 Flash Tools

Tool – Purpose

Oval Tool – draws ovals; holding down Shift will force the object drawn to be a circle. Fill and
line colors are chosen in the colors section of the Tools bar.

Rectangle Tool – draws rectangles; unlike the oval, it has some options which can be set, such
as Round Rectangle Radius, which creates rectangles with rounded corners.

Line Tool – draws straight lines.

Paint Bucket Tool – fills a shape with a single color; you can also use Flash’s premade fills.

Ink Bottle Tool – adds a line round the edge of a shape.

Dropper Tool – picks a color off one part of the screen and uses it as the fill or line color.

Eraser Tool – rubs things out on the stage.

Paintbrush Tool – paints lines all the time you have the mouse button held down.

Pencil Tool – draws lines on the screen.

Text formatting is handled through the Character palette.

Symbols

In order to animate something in Flash it must first be changed into a Symbol. There are
three types of symbol: Graphic, Button and Movie.

To start, draw a filled circle in the middle of the screen, a few centimeters high. Choose
the arrow tool and double click on the circle to select it and the line around it. Then press F8 on
the keyboard. You will get a window called Symbol Properties. In this window you can give a
name to your symbol so that you can refer to it later. Type ‘Circle’ (without the quotes) in the
box and then select Graphic and click OK.

You will now notice that the circle appears with a blue line around it. The next thing you
will want to do is to animate this circle.

The Timeline

To create animation in flash you must use the timeline:

Fig. 9.2 Flash Timeline



The timeline window shows all the frames that make up your animation and all the layers
(which will be covered later). Each small box in the timeline is a frame. The animation runs at
12 frames per second (shown at the bottom) as standard but this can be changed. As you can
see above, there is a black dot in the first frame. This signifies that it is a key frame.

Key frames

Key frames are very important in flash as they are used whenever something is changed.
For instance if you wanted the circle to appear in another position later in the movie you would
create a key frame in the frame where you want it to change and then you could move the
circle without affecting the rest of the movie. That is exactly what you are going to do now.

Right click in frame 50 on the timeline and choose Insert Key frame. This will insert a new
key frame into the animation at frame 50 and it will contain the same information as the previous
key frame. You could have also chosen Blank Key frame which will make a new blank key
frame but you want the circle to be in both key frames in your movie.

Now, click in frame one and press Enter to play the movie. As you can see you now have
a 4.1 second long movie of a circle in the middle of the screen - not very interesting.

To make something happen you will need to change the second key frame. Click on it
(frame 50) and the symbol of the circle will be selected. Now, with the arrow tool, click and drag
the circle to the upper left hand corner of the stage. Then click in frame one again and press
Enter to play the movie.

Animation

The movie you have created now has a circle which moves on the screen but, as you will
have noticed, it stays in the same place and then suddenly moves in the last frame. Animations,
like television and film, are made up of many frames, each of which has a slight change from the
last one. As they are played very fast (12 frames per second in Flash) the object looks like it is
moving. Luckily, Flash has been built so that you don’t have to do all of this manually.

Actually, animating the circle on the screen is amazingly easy because of the Flash
feature called Motion Tweening. This feature will automatically create all the frames to go
between two key frames to animate an object which you have moved (in this case the circle).
All you have to do is right click in any frame between your two key frames and choose Create
Motion Tween.

Once you have done this the frames will change from being grey to being blue with an
arrow across them. This signifies a motion tween. Click in frame one and press Enter to view
your movie. As you can see, Flash has now made your circle move smoothly across the screen
and, if you click in the frames between your key frames, you will see that it has created all the
frames in between.

Scaling

Motion Tweens can be used for other things as well as moving objects. You can also
change an object’s size. For this you will use the scale tool. Right click in frame 80 and create a new
key frame. Your circle will be selected. Now choose the Scale tool from the Options section on
the tools palette (if it is not available make sure you have the black pointer tool selected).

Fig. 9.3 Scale Tool

This tool allows you to change the size of objects. 6 white boxes will appear at the edges
of the circle, just like in any other image application. Use the bottom right hand one to drag the
circle size until it is considerably larger. You will also notice that the circle grows equally around
its center point. Now, as before, right click in between frames 50 and 80 and choose Create
Motion Tween.

Rotation

Resizing a symbol is not the only thing you can do to it. Symbols can also be rotated. To
do this create a movie and draw a large red square in the middle. Then, select the square and
make it a symbol (F8). Give it a name and choose Graphic as the type. Then go to frame 30
and insert a key frame. In this new key frame choose the black arrow from the Tools menu and
then click on the Rotation option:

Fig. 9.4 Rotation option

which is found next to the Scale option under the Options section for the arrow. Then
click on one of the white handles that appear round the rectangle and drag the mouse until the
rectangle rotates to about 90 degrees (or any rotation). Then all you have to do is right click
between frames 1 and 30 and choose Create Motion Tween to animate your rotation.

Animating Text

Text, like images can be made into symbols and animated in exactly the same way as
images can. The technique is exactly the same except for one difference: even when it is a
symbol you can still edit text by double clicking on it. Apart from this you can rotate it, scale it,
move it and perform any other changes that you normally could.

Importing Images

You can import any graphic into Flash and then use it as you would as if it had been
created in Flash. This is especially useful for pictures such as photos which could not be
created using Flash’s graphics tools. To import an image is very simple: just go to File then
Import... find the image on your hard drive and then click Open. The image will then appear on
the stage. You can now resize it and make it a symbol if you want to.
234

Multiple Animation

You don’t only have to change one thing at once when you animate a symbol. For example,
create a symbol of a square. Create a key frame at frame 30. Then click on the symbol in frame
30. Use the scale tool to make it much bigger. Then use the rotate tool to turn it to a different
angle. Finally use the Effects palette to set the Alpha at 100%. Now go back to frame 1 and click
on the same square. Go to the Effects palette and set the Alpha to 0. Then create a motion tween
between frames 1 and 30 and play your movie. You now have a square which rotates and
grows while fading in.

Layers

One major feature of Flash is that, like several image editing programs, everything you
do is put into layers on the screen. Layers are like pieces of transparent plastic. You can put
pictures, text and animation on them. Layers higher up have their content appear over the top of
layers lower down. So far, everything you have done has been contained in one layer.

Layers are controlled through the timeline, which you have seen before:

Fig. 9.5 Layer 1 animation

As you can see, there is only one layer in this animation, Layer 1. The first thing you
should do is to rename this layer. Although your animation will work with it being called Layer 1,
it is much easier to understand what you are doing if you use proper names for your layers.
Double-click on the name and type in Scrolling Text.

Now you will want to put in some content for this layer. Create some text on the stage:

This is my Flash Animation

and make it a symbol. Now move it right off the left edge of the stage. Insert a key frame
at frame 120 and in that frame move the text off the other side of the stage. Now make a
motion tween. Your text should ‘scroll’ across the screen.

Now you can add another layer. Click the little icon on the timeline with a + sign on it.
This will add a new layer above the one you are currently using. Rename this layer to Circle.

In this layer make a circle which is very small, make it a symbol and then animate it to
grow much bigger in 120 frames.

This should show you that when you create a second layer it is completely independent
of the other layers, but that layers higher in the list overlap those below them.

Inserting Actions

In the last part I showed you how to use an action with a button so that it was triggered
when the button was clicked. Actions can also be added to frames and to other mouse events
on the button. Firstly I will cover the buttons. If you haven’t done so already, make a simple
button for your animation, right click on it and choose Actions. The Actions window (which
you first used in the last part) will appear. It has two windows. The one on the right contains the
hundreds of actions you can add. The one on the left contains the code (like programming
code). Choose an action (like Go To) and double click it to add it to the code. This is as far as
you went in the last part.

What you didn’t learn in the last part was that you can change what triggers the action. There
are several options for this. To access them, click on the part of the code which says:

on (release) {

A new section will now appear at the bottom of the box with the options for this part of the
code (in Flash you write code by selecting options). You can choose what triggers the action.
As you can see it is currently set as Release so when the mouse button is released the action
will happen. This is fine for clicks but you may want to use some of the other triggers. To
change the trigger just deselect the old one and select a new one.

You can also trigger actions when a frame loads. Right click in any key frame and choose
Actions. This is exactly the same as the button Actions box except when you add an action
there will be no:

on() {

code as the actions are executed when the frame is played.

Play and Stop

The Play and Stop actions have no parameters. One plays the movie from the current
point and the other stops it (although it remains at its current position).

Toggle High Quality and Stop All Sounds

Toggle High Quality will switch the movie between high and low quality. Stop All Sounds
will stop all the sounds currently playing in the movie. Neither of these has any parameters
which can be set.

Get URL

This can be used for both frames and buttons. Basically, when clicked, it will point the
browser to the specified URL. The URL is specified in the parameters for the action. You can
also choose the window for the new page you are opening. This is the same as target in HTML.
_blank will open in a new window and you can specify the name of a frame in here (if you are
using them). The Variables option allows you to send the variables set in a form in your movie
just like an HTML form (this is good for Submit buttons). You can choose between the standard
POST and GET options.

If Frame Is Loaded

If Frame Is Loaded is quite a complex but very useful command. It is used to make
the ‘loading’ parts at the beginning of some Flash movies. It works like an IF statement in a
program. Double click the If Frame Is Loaded action to add it to the code, then double click the
Go To action. You now have a small IF block.

Firstly you should set the parameters for If Frame Is Loaded. Click on this part of the
code. You will see that this is very similar to the Go To parameters. Here you select the frame
you want to use. When this code is run it will check whether the specified frame has loaded yet;
if it has, then it will execute the code within the { and }.

Creating A ‘Loading’ Sequence

Many Flash animations on the internet, especially ones with a lot of sound and images,
will not just start playing smoothly while they are still loading. For these, most people add a
‘loading’ part to their movie. This is actually a few frames which will repeat until the movie is
loaded. They are actually quite easy to make.

Firstly, choose how many frames you will want for your ‘loading’ section. Something like
10 frames is about right. Create the part of the animation you want to loop in these frames. In
the last frame of the ‘loading’ section, right click to access the Actions menu. Double click on If
Frame Is Loaded and then immediately afterwards double click on Go To. Then click on the
final } in the animation and double click the Go To action again. You should now have the
following code:

ifFrameLoaded (1) {

    gotoAndPlay (1);

}

gotoAndPlay (1);

This is the code which will do the ‘loading’ part. Firstly, click on ifFrameLoaded(1) and,
from the parameters, choose the frame you want to wait for. Click on the first
gotoAndPlay(1) and choose frame 11 (if you used 10 frames for your ‘loading’ sequence).
Finally, for the last gotoAndPlay(1) choose the first frame in your animation.

This is actually a very basic program, showing how easily complex programs can be
written using Flash’s actions. What the code actually does is check whether the specified frame is
loaded. If it is, it goes to the first frame of the actual animation. Otherwise, it returns to the
beginning and plays the ‘loading’ sequence again.

Importing Sounds

Before sounds can be used in your animation they must first be made available for the
software to use. To do this you must use the standard Import menu. To access this go to File,
Import. From here you can select each file you want to import (just as you did in an earlier part
with images). The only confusing thing about this is that once you have imported your sound
you will see nothing special on the screen. This is because the sound has been added to the
library.

The Library

The library is not only for sound; it is actually one of the most useful parts of Flash
when you start to create large movies with many elements. Basically, the library contains all the
objects you have in your movie: the three types of symbol (movie, button and graphic) and all
sounds. This can be very useful because, to add something to the movie from the library, you just
drag it to the position you want it on the stage.

You can also preview all the objects here: graphics will appear in the top window, and if
you select a button, sound or movie clip you can watch or listen to it by clicking the little play
button which appears in the preview window. You should be able to see and preview any
sounds you have added here.

Adding Sound

Sound is added using the sound palette. This is in the same palette as Instance, Effect
and Frame. If it is not on the screen go to Window and choose Panels, Sound. The sound
palette will probably be ‘greyed out’ at first. Insert a key frame into your movie and click in it to
make all the options available.

Fig. 9.6 Adding Sound

In the first ‘Sound’ box you can select the sound you want to play. If no sounds appear
here, you have not yet imported any into your movie. Check the Library to see if any appear.

Now the effect box will be available. This is not particularly important just now. The next
box is the Sync box. You can choose Event, Start, Stop and Stream. The only ones you really
want to learn about at the moment are Event and Stream. They each have their own features.

 Stream

Streaming sounds work like streaming audio on the internet. The sound does not need to
be fully loaded before it starts playing, it will load as it plays. Streaming sounds will only play for
the number of frames that are available for it (until the next keyframe). This is fine for background
sounds but it will not work very well for a button.

 Event

Event sounds are mainly used for when something happens in your movie. Once they
have started playing they will continue until they end, whatever else happens in the movie. This
makes them excellent for buttons (where the button moves on to another frame as soon as it is
clicked). The problem with Event sounds, though, is that they must fully load before they can
play.

 Adding A Streaming Sound

It is much easier to manage your sounds if you put them on a separate layer. Insert a new
layer and place a key frame at the beginning. Using the Sound palette, select the sound you
want to play and select Stream from the Sync box. If you want the sound to loop, change the value
in the Loops box (for some reason the default value of 0 will cause the sound to play once).

Now you must insert some frames to give the sound time to play. Insert a frame (or key
frame) at about frame 500 in the movie (this will give most sounds time to play). When working
out how many frames are needed, remember that the movie will play at 12 frames per second,
so 500 frames gives roughly 42 seconds of playing time. A graphical representation of the sound
will appear in the frames during which it will play, so that you can see how much space it takes
up. Press CTRL + Enter to preview your movie.

With the sound on a separate layer you can have your movie running on other layers
while the sound plays in its own layer.

 Adding an Event Sound to a Button

Adding an event sound to a button is nearly as easy as adding a streaming sound. Either
create a button or use a pre-made one, then right click it and choose Edit. This will allow you to
see the four different states of the button (as you learned about in an earlier part). Usually sounds
are placed in the Over or Down frames. To make a sound play when the mouse moves over the
button choose Over, and to hear it when the button is clicked choose Down.

Insert a new frame and then put a key frame for the button state you want to use. Click in
the frame and use the sounds palette to add an Event sound. You don’t need to put in any extra
frames as an event sound will play until it finishes. Now return to the movie and use CTRL +
Enter to test it with the button.

 Effects

The Effects option allows you to add a variety of effects to the sound as it plays. The
preset ones are quite self-explanatory, but you can also use the Edit button to create your own.
This will open a window with the waveform representation of the sound (left speaker at the top,
right at the bottom). On top of this is a line which is the volume control (the top is full volume,
the volume the sound was recorded at, and the bottom is mute). By clicking in the window you
can insert little squares. The line goes between these squares. You can also drag them around
the screen. By doing this you can change the volume of the sound at different points throughout
its playing time, and make it different for each speaker.

Check your Progress


1. This deals with the rotation and movement of an object from one point to another in
specific frames.

a. Tweening

b. Shape Tween

c. Motion Tween

d. Transition

2. It allows you to insert text within your Flash stage.

a. Text Box

b. Text Tool

c. HTML

d. Key frames

3. Say True or False

FLA is the shortcut key for adding a key frame.

4. For what work is Photoshop used?

a. For Graphics

b. For Animation

c. For Programming

d. For Typing

5. What is the default file extension in Photoshop?

a. Bmp

b. Tiff

c. Psd

d. Txt

6. _________________ menu contains the duplicate layer option in Photoshop.

7. Which of these software packages uses the Gradient tool?

a. Page maker

b. Painting

c. Photoshop

d. All of these

9.5 PHOTOSHOP 7

History

In 1987, Thomas Knoll, a PhD student at the University of Michigan began writing a
program on his Macintosh Plus to display grayscale images on a monochrome display. This
program, called Display, caught the attention of his brother John Knoll, an Industrial Light &
Magic employee, who recommended that Thomas turn it into a full-fledged image editing
program. Thomas took a six-month break from his studies in 1988 to collaborate with his
brother on the program. Thomas renamed the program ImagePro, but the name was already
taken. Later that year, Thomas renamed his program Photoshop and worked out a short-term
deal with scanner manufacturer Barneyscan to distribute copies of the program with a slide
scanner; a “total of about 200 copies of Photoshop were shipped” this way.

During this time, John traveled to Silicon Valley and gave a demonstration of the program
to engineers at Apple and Russell Brown, art director at Adobe. Both showings were successful,
and Adobe decided to purchase the license to distribute in September 1988. While John
worked on plug-ins in California, Thomas remained in Ann Arbor writing code. Photoshop 1.0
was released in 1990 for Macintosh exclusively.

File format

Photoshop files have the default file extension .PSD, which stands for “Photoshop
Document.” A PSD file stores an image with support for most imaging options available in
Photoshop. These include layers with masks, color spaces, ICC profiles, CMYK mode (used
for commercial printing), transparency, text, alpha channels and spot colors, clipping paths,
and duotone settings. This is in contrast to many other file formats (e.g. .JPG or .GIF) that
restrict content to provide streamlined, predictable functionality. A PSD file has a maximum
height and width of 30,000 pixels, and a file size limit of 2 gigabytes.

PHOTOSHOP DESKTOP

Fig 9.7 Photoshop Tool

1. Toolbox, full of selection tools, brushes, erasers, and other tools

2. Menu Bar with several layers of drop-down menus & dialogues

3. Options Bar, which is context sensitive and allows customization of tools

4. Navigator/Info/Color palettes, which allow zooming in and out, show information about
the cursor point, and allow selection of colors

5. History/Actions/Layers palettes, which allow multiple backward steps, automation of tasks and
manipulation of layers

6. Image Window, which contains your image

 Zoom tool: found on the TOOLBOX. Used to zoom in and out on the image.

To increase: click on the ZOOM tool on the TOOLBOX, then click on the image.

To decrease: click on the ZOOM OUT button on the OPTIONS BAR, then click on the image.

 Selection tools: found on the TOOLBOX

 Used to select areas of the image

Rectangular Marquee:

-click on the RECTANGULAR MARQUEE tool on the TOOLBOX

-click and drag diagonally inside the image window

-to select more than one rectangle at a time, hold down the SHIFT key while using the tool

Elliptical Marquee

-click and hold on the RECTANGULAR MARQUEE tool on the TOOLBOX

-from the box that appears, select the ELLIPTICAL MARQUEE tool

-click and drag diagonally inside the image window

-to select more than one ellipse at a time, hold down the SHIFT key while using the tool

Lasso Tool

-click on the LASSO tool on the TOOLBOX

-click and drag to draw a selection until you get to the beginning and then release the
mouse

Polygonal Lasso Tool

-click and hold on the LASSO tool on the TOOLBOX

-from the box that appears, select the POLYGONAL LASSO tool

-click multiple times until you get to the beginning to create a border for the area selected

Magnetic Lasso Tool

-click and hold on the LASSO tool on the TOOLBOX

-from the box that appears, select the MAGNETIC LASSO tool

-click on the edge of the object you want to select, then continue dragging/clicking around it

-to adjust sensitivity, go to the options bar and change width, edge contrast, and frequency

Magic Wand

-click the MAGIC WAND tool on the TOOLBOX

-type a number from 0-255 in the TOLERANCE field on the OPTIONS BAR

-click the area/color to be selected

-to select more than one area at a time, hold down the SHIFT key while using the tool

Layers

• Layers work as several images, layered on top of one another. Each layer has pixels that
can be independently edited.

• Most Photoshop commands/tools work only on the layer you have selected.

• You can combine, duplicate, and hide layers in an image. You can also shuffle the order
in which the layers are stacked.

• Layers can have transparent areas, so that you can see the layers underneath. When
you cut or erase, the affected pixels become transparent. Also, you can change the
opacity of a layer.

• You MUST save files as a .PSD or a .TIFF to continue to work with the images.

These are large file formats. When you are completely done editing your image, you can
FLATTEN the layers into a single layer and save the file as a .JPG, .BMP, or .GIF.

The Layers Palette

Fig. 9.8 Layers Palette

 If you can’t make additions to a layer you probably need to uncheck ‘Preserve
Transparency’.

 The Eyeball denotes that a layer is visible.

 A highlighted layer with a paintbrush is an active layer. It is the only layer that can be
altered.

 At the bottom are the Effects Button, Layer Mask Button, Layers Folder Button, Adjustment
Layers Button, New Layer button and the Delete Layer button.

 To Create a Layer:

-click WINDOW -> LAYERS to show the LAYERS palette

-click on the layer above which you want to add the new layer

-click on the NEW LAYER button

 To Hide a Layer:

-click WINDOW -> LAYERS to show the LAYERS palette

-click a layer, then click the EYE icon for the layer

-the layer and the EYE icon will be hidden

 To Duplicate a Layer

-click WINDOW -> LAYERS to show the LAYERS palette

-click on the layer you want to copy and drag the layer to the NEW LAYER button

 To Delete a Layer

-click WINDOW -> LAYERS to show the LAYERS palette

-click on the layer you want to delete, then click on the DELETE LAYER button

 Moving/Copying/Pasting

 Moving a Selection

-Make a selection with a selection tool

-use the MOVE tool to move the selection to another part of the layer

 Copy and Paste a Selection

-make a selection with a selection tool

-Click EDIT -> COPY in the menu bar

-using a selection tool, select where you want to paste the copied element

-click EDIT -> PASTE in the menu bar

-the image is copied onto a new layer that can be moved independently of the original
image

-you can also copy selections from one image file to another one. Just copy in the old
window and then paste in the new window

 Delete a Selection

-make a selection with a selection tool

-press DELETE on the keyboard

Resizing an Image/Canvas/Selection

Fig. 9.9 Image Size

To Change Image Size

-click on IMAGE->IMAGE SIZE on the menu bar

-the IMAGE SIZE DIALOG BOX opens, listing the current height, width, and resolution
of the image

-type a size for a dimension. If you want it to stay the same proportion, make sure the
CONSTRAIN PROPORTIONS box is checked. Enter the correct resolution.

-click OK

Fig. 9.10 Canvas Size



To Change Canvas Size


- click IMAGE -> CANVAS SIZE on the menu bar

- the CANVAS SIZE DIALOG BOX opens, listing the current height and width of the image

- type the new canvas dimensions

- modify the direction that the program changes the canvas size by selecting an anchor
point

- click OK

To Change Selection Size

-make a selection with a selection tool

-click EDIT -> TRANSFORM -> SCALE

-click and drag a corner handle on the selection to scale along the horizontal and vertical axes

Rotate/Skew/Distort A Selection

To Rotate a Selection

-make a selection with a selection tool

-click EDIT -> TRANSFORM -> ROTATE

-click and drag a corner handle on the selection to rotate the selection

To Skew a Selection

-make a selection with a selection tool

-click EDIT -> TRANSFORM -> SKEW

-click and drag a corner handle on the selection to skew along either the horizontal or vertical
axis

To Distort a Selection

-make a selection with a selection tool

-click EDIT -> TRANSFORM -> DISTORT

-click and drag a corner handle on the selection to distort along both the horizontal and vertical
axes

9.6 Summary
 There is also limited integration with some other applications. For example, you can export
an InDesign file as XHTML and continue working on it in Dreamweaver.

 Flash is very popular in web design because you can create fantastic animations while still
keeping the file size low, so that sites can load fast.

 Adobe’s Photoshop CS6 is a top-quality professional photo editing tool that creates
fantastic effects. This design software is ideal for photographers, graphic designers, and
seasoned web designers.

 Adobe Photoshop CS6 software includes automated tools that slash the time needed for
selecting and compositing and live filters that boost the comprehensive, nondestructive
editing toolset of Photoshop.

 Integrate Dreamweaver with flash, Photoshop tools to simplify your web design workflow.

9.7 Check Your Answers


1. a. unlimited

2. c. Editable Regions

3. d. Orthogonal

4. a. Orthogonal Hotspot Tool

5. d. All of the above

6. a. Tweening

7. b. Text Tool

8. a. True

9. a. For Graphics

10. c. .Psd

11. Layer

12. c. Photoshop

9.8 Model Questions


1. Describe the working experience of multimedia.

2. Explain about step by step procedure to set the working environment in Dreamweaver.

3. Create a webpage of your own interest using Dreamweaver.

4. Define flash.

5. Explain step by step process to setup flash environment.

6. How is animation done in Flash?

7. Define Photoshop.

8. Point out the significance of Photoshop.

9. Explain the adjacent layers in detail.

10. Explain Photoshop controls in detail.

11. Describe layer functions in detail.



LESSON 10
THE INTERNET AND MULTIMEDIA

Structure
10.1 Introduction

10.2 Learning Objectives

10.3 Internet History

10.4 Internet Working

10.5 Connections

10.6 Internet Services

10.7 The World Wide Web and HTML

10.8 Summary

10.9 Check your Answers

10.10 Model Questions

10.1 Introduction

This lesson is designed to give you an overview of the Internet while describing particular
features that may be useful to you as a developer of multimedia for the World Wide Web.
URLs and other pointers are also included here to lead you to information for obtaining, installing,
and using these applications and utilities.

10.2 Learning Objectives


At the end of the lesson, the learner will be able to

 Know the origins of the Internet

 Learn what a computer network is and how Internet domains, addresses, and
interconnections work

 Understand the current state of multimedia on the Internet and tools for the World Wide
Web

10.3 Internet History

The internet began as a research network funded by the Advanced Research Projects
Agency (ARPA) of the U.S. Defense Department, when the first node of the ARPANET was
installed at the University of California at Los Angeles in September 1969.

By the mid-1970s, the ARPANET “internetwork” embraced more than 30 universities,
military sites, and government contractors, and its user base expanded to include the
larger computer science research community. By 1983, the network still consisted of only several
hundred computers on only a few local area networks.

In 1985, the National Science Foundation (NSF) arranged with ARPA to support a
collaboration of supercomputing centers and computer science researchers across the
ARPANET. The NSF also funded a program for improving the backbone of the ARPANET,
increasing its bandwidth from 56 Kbps and branching out with links to international sites in
Europe and the Far East.

In 1989, responsibility and management for the ARPANET was officially passed from
military interests to the academically oriented NSF, and research organizations and universities
(professors and students alike) became increasingly heavy users of this ever-growing “Internet.”

Much of the Internet’s etiquette and rules for behavior (such as for sending e-mail and
posting to newsgroups) was established during this time. More and more private companies
and organizations linked up to the Internet, and by the mid-1990s, the Internet included
connections to more than 60 countries and more than 2 million host computers with more than
15 million users worldwide.

Commercial and business use of the Internet was not permitted until 1992, but businesses
have since become its driving force. By 2001 there were 109,574,429 domain hosts and 407.1
million users of the Internet, representing 6.71 percent of the world’s population. By the beginning
of 2010, about one out of every four people around the world (26.6 percent) had access to the
Internet, and more than 51 million domain names had been registered as “dot coms.”

10.4 Internet Working

Networking basics
 In its simplest form, a network is a cluster of computers, with one computer acting as a
server to provide network services such as file transfer, e-mail, and document printing to
the client computers of that network.

 Using gateways and routers, a local area network (LAN) can be connected to other LANs
to form a wide area network (WAN).

 These LANs and WANs can also be connected to the Internet through a server that
provides both the necessary software for the Internet and the physical data connection.

 Individual computers not permanently part of a network can dial up to one of these Internet
servers and, with proper identification and onboard client software, obtain an IP address
on the Internet.

 A server is permanently connected to the internet through a high-bandwidth physical
connection.

Internet Addresses

Address Syntax
 Internet addresses use the following syntax:

[PROTOCOL]://[DOMAIN NAME]/[PATH]/[FILE NAME]

(HTTP://WWW.YCCE.EDU)

For example, in the address above:

 The server directory path and file name are left off.

 The protocol usually does not need to be typed.

 The protocol may also be hidden, as in mailto and news addresses.
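To make these parts concrete, here is a minimal sketch in Python (using the standard urllib.parse module) that splits an address into protocol, domain name, and path/file name. The URL shown is only an example.

from urllib.parse import urlparse

parts = urlparse("http://www.ycce.edu/docs/index.html")
print(parts.scheme)   # 'http' - the protocol
print(parts.netloc)   # 'www.ycce.edu' - the domain name
print(parts.path)     # '/docs/index.html' - the path and file name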

Domain Name System (DNS)


TCP/IP is the protocol used for communicating on the internet

 TCP is Transmission Control Protocol

 IP is the Internet Protocol

In 1983 the Domain Name System (DNS) was established to assign names and addresses
to computers which were linked to the internet.

(i) Top-Level Domains

Top-level domains were established as categories to accommodate all users of the Internet:

Com     Commercial entities

Edu     Four-year degree-granting colleges and universities (schools and two-year
colleges register in the country domain)

Gov     U.S. federal government agencies (state and local agencies register in the
country domain)

Int     Organizations established by international treaties and international databases

Mil     U.S. military

Net     Computers belonging to network providers

Org     Miscellaneous and nongovernment organizations

Two-letter country codes, e.g. uk, in, sg

 In 1998 the Internet Corporation for Assigned Names and Numbers (ICANN) was set up to
oversee the DNS.

 In 2000, ICANN approved seven new TLDs:



 aero (air transport)

 info (unrestricted use)

 pro (accountants, lawyers)

 biz (business)

 museum (museums)

 coop (cooperatives)

 name (for individuals)

(ii) Second-Level Domains

Many second-level domains contain huge numbers of computers and user accounts
representing local, regional, and even international branches as well as various internal business
and management functions. So the Internet addressing scheme provides for subdomains that
can contain even more subdomains. Within the education (.edu) domain containing hundreds
of universities and colleges, for example, is a second-level domain for Yale University called
yale. At that university are many schools and departments (medicine, engineering, law, business,
computer science, and so on), and each of these entities in turn has departments and possibly
sub-departments and many users. These departments operate one or even several servers for
managing traffic to and from the many computers in their group and to the outside world. At
Yale, the server for the Computing and Information Systems Department is named cis. It
manages about 11,000 departmental accounts, so many that a cluster of three
subsidiary servers was installed to deal efficiently with the demand.

These subsidiary servers are named minerva, morpheus, and mercury. Thus, minerva
lives in the cis domain, which lives in the yale domain, which lives in the edu domain. Real
people’s computers are networked to minerva. Other real people are connected to the morpheus
and mercury servers. To make things easy (exactly what computers are for), the mail system
database at Yale maintains a master list of all its people.

So, as far as the outside world is concerned, a professor’s e-mail address can be simply
professor@yale.edu; the database knows he or she is really connected to minerva,
so the mail is forwarded to that correct final address. In detailed e-mail headers, you may see the
complete destination address listed as well as names of the computers through which the mail
message may have been routed.

E-mail accounts are said to be “at” a domain (written with the @ sign). There are never
any blank spaces in an Internet e-mail address, and while addresses on the Internet are normally
case insensitive, conventional use dictates using all lowercase: the Internet will find
professor@yale.edu, Professor@Yale.edu, and PROFESSOR@YALE.EDU to be the same
address.

Addresses and Data Packets

 When a stream of data is sent over the Internet by the computer, it is first broken down
into packets by the Transmission Control Protocol (TCP).

 Each packet includes the address of the receiving computer, a sequence number (“this is
packet #5”), error correction information, and a small piece of the data.

 After a packet is created by TCP, the Internet Protocol (IP) then takes over and actually
sends the packet to its destination along a route that may include many other computers
acting as forwarders.
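The snippet below is only a conceptual sketch in Python of the fields just listed; it is not the real TCP header layout, and the field names are illustrative.

from dataclasses import dataclass

@dataclass
class Packet:
    destination: str   # address of the receiving computer
    sequence: int      # the sequence number ("this is packet #5")
    checksum: int      # error-correction information
    payload: bytes     # a small piece of the data

pkt = Packet(destination="140.174.162.10", sequence=5, checksum=0, payload=b"some data")
print(pkt.sequence, pkt.destination)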

The 32-bit address included in a data packet, the IP address, is the “real” Internet address.
It is made up of four numbers separated by periods, for example, 140.174.162.10. Some of
these numbers are assigned by Internet authorities, and some may be dynamically assigned
by an Internet Service Provider (ISP) when a computer logs on using a subscriber’s account.

Every time you connect to http://www.google.com or send e-mail to someone, the domain
name server is consulted and the destination address is converted to numbers.
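A minimal sketch of this lookup, using Python’s standard socket module: gethostbyname consults DNS to turn a name into a dotted-quad address, and inet_aton packs that address into its underlying 32-bit (4-byte) form. The host name is only an example, and the address returned will vary.

import socket

ip = socket.gethostbyname("www.google.com")  # consult DNS, e.g. '142.250.70.68'
raw = socket.inet_aton(ip)                   # pack the dotted quad into 4 bytes
print(ip, len(raw))                          # the address and its 4-byte length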

Check your Progress


1. DNS stands for:

a. Distributed Numbering System

b. Device Nomenclature System



c. Data Networking System

d. Domain Name System

2. Which of the following is a valid IP address?

a. 192.168.1.1

b. www.apple.com

c. [email protected]

d. http://www.pages.net/index.html

3. The MIME text file is saved with

a. HMT extension

b. HTML extension

c. THM extension

d. None of these

4. Each Internet service is implemented on an Internet server by dedicated software known
as a(n) _______________.

5. Say True or False

When a stream of data is sent over the Internet by your computer, it is first broken down
into packets by the Transmission Control Protocol (TCP). a) True b) False

6. MIME Acronym ______________________________

7. One of the greatest benefits of XML is that:

a. it allows you to create animated rollovers

b. it compresses audio and video files, allowing larger files to be sent

c. it connects local area networks with wide area networks

d. it allows you to create your own tags for data



10.5 Connections
 If your computer is connected to an existing network at an office or school, it is possible
you are already connected to the Internet.

 Check with your system administrator about procedures for connecting to the Internet
services such as World Wide Web; necessary browser software may already be installed
on your machine.

 If you are an individual working from home, you will need a dial-up account to your office
network or to an Internet Service Provider or an online service.

 You will also need a modem, an available telephone line, and software.

 To connect to the internet, a computer or network needs

 TCP/IP software

o Operating system may need to be configured to connect to the server and use TCP/IP
software.

 Internet software includes

o E-MAIL PROGRAMS

o WEB BROWSERS

o FTP SOFTWARE

o NEWS READERS

 ISP (INTERNET SERVICE PROVIDER) SOFTWARE

o PPP (POINT-TO-POINT PROTOCOL) for dialing up

o TCP/IP for sending and receiving

 POP (POINT OF PRESENCE) - a local telephone number

Bandwidth Bottleneck

Bandwidth is how much data, expressed in bits per second, you can send from one
computer to another in a given amount of time.

The faster your transmissions, the less time you will spend waiting for text, images,
sounds, and animated illustrations to upload or download from computer to computer, and the
more satisfaction you will have with your Internet experience.
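As a rough worked example (the file size and connection speed below are only illustrative), download time is simply size in bits divided by bandwidth in bits per second:

size_bytes = 700 * 1024        # a 700 KB image
bandwidth_bps = 56_000         # a 56 Kbps modem connection
seconds = (size_bytes * 8) / bandwidth_bps
print(round(seconds))          # about 102 seconds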

The bandwidth bottleneck


 Bandwidth is measured in bits per second (bps).

 Available bandwidth greatly affects how a person can use the internet.

 Users with slow connections will have a difficult time using multimedia over the internet.

Multimedia developers on the Internet should consider the following

 Compress data as tightly as possible before transmitting.

 Require users to download data only once, and then store the data in a local hard disk
cache (this is automatically managed by most WWW browsers).

 Design each multimedia element to be efficiently compact – don’t use a greater color
depth than is absolutely necessary.

 Design alternate low-bandwidth and high-bandwidth navigation paths to accommodate


all users.

 Implement streaming methods that allow data to be transferred and displayed incrementally
(without waiting for the complete dataset to arrive).

10.6 INTERNET SERVICES

To many users, the Internet means the World Wide Web. But the World Wide Web is
only the latest and most popular of services available today on the Internet.

E-mail; file transfer; discussion groups and newsgroups; real-time chatting by text, voice,
and video; and the capability to log into remote computers are common as well. Internet services
include the following:

Internet Services and its Purpose

Each Internet service is implemented on an Internet server by dedicated software known
as a daemon. Daemons are agent programs that run in the background, waiting to act on
requests from the outside.

In the case of the Internet, daemons support protocols such as the Hypertext Transfer
Protocol (HTTP) for the World Wide Web, the Post Office Protocol (POP) for e-mail, or the File
Transfer Protocol (FTP) for exchanging files. The first few letters of a Uniform Resource Locator
(URL)—for example, http://www.timestream.com/index.html—notify a server as to which daemon
to bring into play to satisfy a request.

In many cases, the daemons for the Web, mail, news, and FTP may run on completely
different servers, each isolated by a security firewall from other servers on a network.

MIME Media Types

MIME (Multipurpose Internet Mail Extension) media types were originally devised so that
e-mails could include information other than plain text. MIME media types indicate the following
things

 How different parts of a message, such as text and attachments, are combined into the
message.

 The way in which each part of the message is specified.

 The way different items are encoded for transmission so that even software that was
designed to work only with ASCII text can process the message.

Now MIME types are not just for use with e-mail; they have been adopted by Web servers
as a way to tell Web browsers what type of material is being sent to them so that they can
handle that kind of message correctly.

MIME content types consist of two parts:

 A main type

 A sub-type

The main type is separated from the subtype by a forward slash character. For example,
text/html for HTML.

The main types are:

 text

 image

 multipart

 audio

 video

 message

 model

 application

For example, the text main type contains types of plain text files, such as:

 text/plain for plain text files

 text/html for HTML files

 text/rtf for text files using rich text formatting



MIME types are officially supposed to be assigned and listed by the Internet Assigned
Numbers Authority (IANA).

Many of the popular MIME types in this list (all those beginning with “x-”) are not assigned by
the IANA and do not have official status. You can see the list of official MIME types at
http://www.iana.org/assignments/media-types/. Those preceded with “vnd.” are vendor-specific.

When specifying the MIME type in a content-type field you can also indicate the character
set for the text being used. If you do not specify a character set, the default is US-ASCII. For
example: Content-Type: text/plain; charset=iso-8859-1
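Python’s standard mimetypes module maps file names to a main type/sub-type pair in exactly this form; a minimal sketch (the file names are only examples):

import mimetypes

for name in ("report.html", "notes.txt", "photo.jpg", "song.mp3"):
    mime_type, encoding = mimetypes.guess_type(name)
    print(name, "->", mime_type)
# report.html -> text/html, notes.txt -> text/plain,
# photo.jpg -> image/jpeg, song.mp3 -> audio/mpeg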

10.7 The World Wide Web and HTML

 Web History

 Tim Berners-Lee of CERN (the European particle physics laboratory) developed the web’s
hypertext system in 1989.

 The Hypertext Transfer Protocol (HTTP) was designed as a means for sharing documents
over the internet.

 The Hypertext Markup Language (HTML) is the markup language of the web.

 Cross-platform compatibility was a design goal.

 HTTP

 The Hypertext Transfer Protocol (HTTP) provided rules for a simple transaction:

 Establishing a connection

 Requesting that a document be sent

 Sending a document

 Closing the connection

 HTML

 The HTTP protocol also required a simple document format called HTML (hypertext markup
language) for presenting text and graphics.

 The HTML document can contain hotlinks which a user can click to jump to another
location.

 Dynamic Web Pages and XML

HTML is fine for building and delivering simple static web pages. Other tools and
programming know-how are needed to deliver dynamic pages that are built on the fly from text,
graphics, animations, and information contained in databases or documents. JavaScript and
programs written in Java may be inserted into HTML pages to perform special functions and
tasks that go beyond the common abilities of HTML, such as mouse rollovers, window control,
and custom animations.

Cold Fusion and PHP are applications running side by side with a web server like Apache;
they scan an outgoing web page for special commands and directives, usually embedded in
special tags.

Application servers such as Oracle, Sybase, and MySQL offer software to manage Structured
Query Language (SQL) databases that may contain not only text but also graphics and multimedia
resources like sounds and video clips. In concert with HTML, these tools provide the power to
do real work and perform real tasks within the context of the World Wide Web.

Flash animations, Director applications, and RunRev stacks can also be called from within
HTML pages. These multimedia mini-applications, programmed by Web developers, use a
browser plug-in to display the action and perform tasks such as playing a sound, showing a
video, or calculating a date. As with Cold Fusion and PHP, both use underlying programming
languages. With the introduction of HTML5, browsers can play multimedia elements such as
sound, animations, and video without requiring special plug-ins or software.

1. Advanced tools can be used to make a web page Dynamic.

 Dynamic Technologies include

 Cold Fusion (CFM)

 Hypertext Preprocessor (PHP)

 Active Server Pages (ASP)



 Java Script and Java Applets

2. Dynamic pages work in conjunction with database applications to look up data.

 Extensible Markup Language (XML)

XML (Extensible Markup Language) goes beyond HTML; it is the next evolutionary step
in the development of the Internet for formatting and delivering web pages using styles. Unlike
HTML, you can create your own tags in XML to describe exactly what the data means, and you
can get that data from anywhere on the Web. In XML, you can build a set of tags like

<fruit>

<type>Tomato</type>

<source>California</source>

<price>$.64</price>

</fruit>

An application reading this XML document will, according to its instructions, find the
information and put it into the proper place on the web page, in the formatting style you assign.
For example, with XML styles, you can declare that all items within the <price> tag will be
displayed in boldface Helvetica type.
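To show that such tags really are machine-readable data, here is a minimal sketch using Python’s standard xml.etree.ElementTree module to pull values out of the fruit document above:

import xml.etree.ElementTree as ET

doc = "<fruit><type>Tomato</type><source>California</source><price>$.64</price></fruit>"
fruit = ET.fromstring(doc)
print(fruit.find("type").text)    # Tomato
print(fruit.find("source").text)  # California
print(fruit.find("price").text)   # $.64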

In development as a technique to deliver more pleasing web experiences, AJAX
(Asynchronous JavaScript and XML) uses a combination of XML, CSS (Cascading Style Sheets,
for marking up and styling information), and JavaScript to generate dynamic displays and allow
user interaction within a web browser.

 Multimedia on the Web

In today’s world the Web plays a vital role in diversifying the multimedia experience. It has
become a broadcast medium offering various online facilities like live TV, pre-recorded videos,
photos, animations, etc. In the coming years most multimedia applications will be experienced
on the Internet and occur on the WWW [World Wide Web]. Pages are written in HTML [Hypertext
Markup Language] and increasingly supplemented by XML [Extensible Markup Language], along
with JavaScript.

Plug-ins and media players are software programs that allow us to experience
multimedia on the web. File formats requiring this software are known as MIME [Multipurpose
Internet Mail Extension] types. To embed a media file, just copy the source code and paste it
into the user’s webpage. It is as simple as that.

Plug-ins are software programs that work with the web browser to display multimedia. When
the web browser encounters a multimedia file it hands off the data to the plug-in to play or display
the file. Multimedia players are also software programs that can play audio and video files
both on and off the web. The concept of streaming media is important to understanding how
media can be delivered on the web.

10.8 Summary
 A network is a cluster of computers, with one computer acting as a server to provide
services such as file transfer, e-mail, and document printing to the client computers.

 The Domain Name System (DNS) manages the names and addresses of computers
linked to the Internet.

 Multimedia elements are typically saved and transmitted on the Internet in the appropriate
MIME-type (for Multipurpose Internet Mail Extensions) format and are named with the
proper extension for that type.

 Hypertext Transfer Protocol (HTTP) provides rules for contacting, requesting, and sending
documents encoded with the Hypertext Markup Language (HTML).

 HTML documents are simple ASCII text files. HTML currently includes about 50 tags.

 XML (Extensible Markup Language) allows you to create your own tags and import data
from anywhere on the Web.

10.9 Check your Answers


1. d. Domain Name System

2. a. 192.168.1.1

3. b. HTML extension

4. Daemon

5. a. True

6. Multipurpose Internet Mail Extension

7. d. it allows you to create your own tags for data

10.10 Model Questions


1. Write short notes on history of Internet.

2. Discuss about Domain Name System in detail.

3. Define bandwidth Bottleneck.

4. Explain in detail about internet services.

5. Describe in detail about World Wide Web and HTML.

6. Define XML.

7. What is HTML?

8. Discuss about Dynamic Web Pages and XML in detail.



LESSON 11
WORLD WIDE WEB (WWW)

Structure
11.1 Introduction

11.2 Learning Objectives

11.3 World Wide Web

11.4 Tools for the WWW

11.5 Web Browsers

11.6 Web Servers

11.7 Web Page Makers and Editors

11.8 Plug-Ins and Delivery Vehicles

11.9 HTML

11.10 VRML

11.11 Summary

11.12 Check Your Answer

11.13 Model Questions

11.1 Introduction

The Internet is a worldwide network of computers that use common communication
standards and interfaces to provide the physical backbone for a number of interesting
applications. One of the most utilized of these Internet applications is the World Wide Web.
What sets the Web apart is an easy-to-use interface to a complex network of computers and
data. Web server and web browser are terms commonly used in connection with websites.
The basic purpose of both is to provide a platform for the Internet’s web directory, so that any
user can at any time access any kind of website. The major difference between them is in their
functions and how they perform those functions.

11.2 Learning Objectives


At the end of the lesson, the learner will be able to

 Learn how information is transmitted on the Internet.

 Understand how computers are connected on the Internet.

 Know the way a web page gets to your computer.

 Learn some services available on the Internet and their protocols.

11.3 World Wide Web


 World Wide Web (WWW) is a collection of text pages, digital photographs, music files,
videos, and animations you can access over the Internet.

 Web pages are primarily text documents formatted and annotated with Hypertext Markup
Language (HTML). In addition to formatted text, web pages may contain images, video,
and software components that are rendered in the user’s web browser as coherent pages
of multimedia content.

 The terms Internet and World Wide Web are often used without much distinction. However,
the two are not the same.

 The Internet is a global system of inter connected computer networks. In contrast, the
World Wide Web is one of the services transferred over these networks. It is a collection
of text documents and other resources, linked by hyperlinks and URLs, usually accessed
by web browsers, from webservers.

 There are several applications called Web browsers that make it easy to access the
World Wide Web; for example: Firefox, Microsoft’s Internet Explorer, and Chrome.

 Users access the World Wide Web facilities via a client called a browser, which provides
transparent access to the WWW servers.

History of WWW

Tim Berners-Lee, in 1980, was investigating how computers could store information with
random links. In 1989, while working at the European Particle Physics Laboratory, he proposed
the idea of a global hypertext space in which any network-accessible information could be referred
to by a single “Universal Document Identifier”. In 1990, this idea was expanded with further
programming and became known as the World Wide Web.

Internet and WWW

The Internet, linking your computer to other computers around the world, is a way of
transporting content. The Web is software that lets you use that content… or contribute your
own. The Web, running on the mostly invisible Internet, is what you see and click on in your
computer’s browser.

What is The Internet?

The Internet is a massive network of networks, a networking infrastructure. It connects
millions of computers together globally, forming a network in which any computer can
communicate with any other computer as long as they are both connected to the Internet.
Information that travels over the Internet does so via a variety of languages known as
protocols. So we can say that the Internet is a network of computers which connect together,
and any computer can communicate with any other computer.

What is The Web (World Wide Web)?

The World Wide Web, or simply Web, is a way of accessing information over the medium
of the Internet. It is an information-sharing model that is built on top of the Internet.

The Web uses the HTTP protocol, only one of the languages spoken over the Internet, to
transmit data. The Web also utilizes browsers, such as Internet Explorer or Firefox, to access
Web documents called Web pages that are linked to each other via hyperlinks. Web documents
also contain graphics, sounds, text and video.

Different between Internet and WWW

The Web is a portion of the Internet. The Web is just one of the ways that information
can be disseminated over the Internet. The Internet, not the Web, is also used for e-mail (which
relies on SMTP), Usenet news groups, instant messaging, and FTP. So the Web is just a portion
of the Internet.

HTTP Protocol: Request and Response


 HTTP stands for Hypertext Transfer Protocol.

 HTTP is based on the client-server architecture model and a stateless request/response


protocol that operates by exchanging messages across a reliable TCP/IP connection.

 An HTTP “client” is a program (Web browser) that establishes a connection to a server


for the purpose of sending one or more HTTP request messages. An HTTP “server” is a
program (generally a web server like Apache Web Server) that accepts connections in
order to serve HTTP requests by sending HTTP response messages.

 Errors on the Internet can be quite frustrating — especially if you do not know the difference
between a 404 error and a 502 error. These error messages, also called HTTP status
codes are response codes given by Web servers and help identify the cause of the problem.

 For example, “404 File Not Found” is a common HTTP status code. It means the Web
server cannot find the file you requested. The file — the webpage or other document you
try to load in your Web browser has either been moved or deleted, or you entered the
wrong URL or document name.

 HTTP is a stateless protocol, meaning the HTTP server doesn’t maintain contextual
information about the clients communicating with it; hence we need to maintain sessions
ourselves if our Web applications need that feature.

 HTTP header fields provide required information about the request or response, or about
the object sent in the message body. There are four types of HTTP message headers:

• General-header:

These header fields have general applicability for both request and response messages.

• Request-header:

These header fields have applicability only for request messages.

• Response-header:

These header fields have applicability only for response messages.



• Entity-header:

These header fields define Meta information about the entity-body

 As mentioned, whenever you enter a URL in the address box of the browser, the browser
translates the URL into a request message according to the specified protocol and sends
the request message to the server.

 For example, the browser translated the URL http://www.test101.com/docs/index.html into
the following request message:

GET /docs/index.html HTTP/1.1
Host: www.test101.com
Accept: image/gif, image/jpeg, */*
Accept-Language: en-us
Accept-Encoding: gzip, deflate
User-Agent: Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)

Here, the step-by-step communication between client and server is shown in the following figure.

Fig 11.1 : Communication between HTTP Client and HTTP Server
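The same request/response exchange can be driven from a few lines of Python using the standard http.client module; this is a minimal sketch, and the host name is only an example.

import http.client

conn = http.client.HTTPConnection("www.example.com")
conn.request("GET", "/index.html")        # send the request message
response = conn.getresponse()             # read the response message
print(response.status, response.reason)   # e.g. 200 OK, or 404 Not Found
body = response.read()                    # the returned HTML document (bytes)
conn.close()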



11.4 Tools for World Wide Web (WWW)

In the late 1990s, multimedia plug-ins and commercial tools aimed at the Web entered
the marketplace at a furious pace, each competing for visibility and developer/user mind share
in an increasingly noisy venue.

In the few years since the birth of the first line-driven HTTP daemon in Switzerland,
millions of web surfers had become hungry for “cool” enhancements to entertaining sites. Web
site and page developers needed creative tools to feed the surfers, while surfers needed
browsers and the plug-ins and players to make these cool multimedia enhancements work.

A combination of the explosion of these tools and user demand for performance stresses
the orderly development of the core HTML standard. Unable to evolve fast enough to satisfy
the demand for features (there are committees, international meetings, rational debates,
comment periods, and votes in the standards process), the HTML language is constantly being
extended de facto by commercial interests. These companies regularly release new versions
of web browsers containing tags (HTML formatting elements) and features not yet formally
approved.

Browsers provide a method for third-party developers to “plug in” special tools that take
over certain computational and display activities. They also support the Java and JavaScript
languages by which programmers can create bits of programming script and Java applets to
extend and customize a browser’s basic HTML capabilities, especially into the multimedia
realm.

Java and JavaScript are only related by name. Java is a programming language much
like C++ that must be compiled before it can be executed. JavaScript is a “scripting language”
whose commands are executed at runtime by the browser itself. JavaScript code can be placed
directly into HTML using <script> tags or referenced from a file with the “.js” extension.

Thus, while browsers provide the orchestrated foundation of HTML, third-party players
and even nonprogrammers can create their own cadenzas to enhance browser performance
or perform special tasks. It is often through these plug-ins and applets that multimedia reaches

end users. Many of these tools are available as freeware and shareware while others, particularly
server software packages, are expensive, though most any tool can be downloaded from the
Internet in a trial version.

Multimedia on the Web:


To design and make effective multimedia for the environment

 Developers need to understand how to create and edit the elements of multimedia and also
how to deliver it for HTML browsers and plug-in/player vehicles.

 The number of new users of the web will create a greater need for high quality, compelling
content, and reasonably quick presentations.

11.5 Web Browsers

Web browser is a client program, software or tool through which we send HTTP requests
to a web server. The main purpose of a web browser is to locate content on the World Wide
Web and display it in the form of a web page, image, audio or video.

We can also call it a client because it contacts the web server for the desired information.
If the requested data is available on the web server, then the server will send the requested
information back via the web browser.

Microsoft Internet Explorer, Mozilla Firefox, Safari, Opera and Google Chrome are
examples of web browsers, and they are more advanced than earlier web browsers because they
are capable of understanding HTML, JavaScript, AJAX, etc. Nowadays, web browsers for mobiles
are also available, which are called microbrowsers.

11.6 Web Servers

Web server is a computer system which provides web pages via HTTP (Hypertext
Transfer Protocol). An IP address and a domain name are essential for every web server.

Whenever you enter a URL or web address into your web browser, the browser sends a
request to the server where the domain name of your URL is hosted. The server then collects
all the information for your web page and sends it to the browser, which displays it in the form
of a web page.

A lot of web server software is available in the market, such as NCSA, Apache, Microsoft
and Netscape servers. Storing, processing and delivering web pages to clients are its main
functions. All communication between client (web browser) and server takes place via HTTP.

Here, we can easily understand concept of web browser and web server by following
figure.

Fig 11.2 : Communication between web Browser and Web Server
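A minimal sketch of a working web server, using Python’s standard http.server module: it simply serves the files in the current directory over HTTP, so that a browser pointed at http://localhost:8000/ can request them. The port number is only an example.

from http.server import HTTPServer, SimpleHTTPRequestHandler

server = HTTPServer(("", 8000), SimpleHTTPRequestHandler)  # listen on port 8000
server.serve_forever()                                     # answer HTTP requests until stopped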

Search Engines

Individualized personal search engines are available that can search the entire public
Web, while enterprise search engines can search intranets, and mobile search engines can
search PDAs and even cell phones.

11.7 Web Page Makers and Site Builders

 Learn HTML

 Although site building tools seem to remove the need to learn HTML, some knowledge is
still important.

 An HTML document can be created or edited using only a text editor.



 Site building tools

 Various tools help you create web pages in a WYSIWYG (What You See Is What You Get)
editing environment.

 They provide more power and more features specifically geared to exploiting HTML.

 The markup created by editors is complicated and bloated.

 In spite of this, these tools can be timesavers

Common site building tools include

 Adobe GoLive

 Macromedia Dreamweaver

 Microsoft FrontPage

 Myrmidon

 Netscape Composer

 HTML translators

 Many programs such as word processors incorporate HTML translators.

 These are built into many word processing programs, so we can export a word-processed
document with its text styles and layout converted to HTML tags for headers, bolding,
underlining, indenting and so on.

 The markup created by translators is bloated and proprietary.

11.8 Plug-Ins and Delivery Vehicles

Plug-ins adds the power of multimedia to web browsers by allowing users to view and
interact with new types of documents and images.

Helper applications, or players, also provide multimedia power by displaying or running files downloaded from the Internet by your browser, but helpers are not seamlessly integrated into the operation of the browser itself.

When an unrecognized embedded MIME type that can't be displayed within your browser is called from an HTML document (sounds, movies, unusual text or image files), most browsers will automatically launch a helper application (if it is specified in the browser's preferences) to view or run it. However, this helper starts up and runs separately from the browser.

 If your content requires a plug-in, don't forget that users must have the plug-in installed.

 Provide a link to help the user obtain the plug-in.

 Decide whether requiring a plug-in is worthwhile.
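As a hedged illustration of the last two points, the sketch below embeds plug-in content with an <object> element whose fallback body offers a download link; the file name, MIME type and link target are placeholders, not from the original text:

<object data="movie.swf" type="application/x-shockwave-flash"
        width="400" height="300">
  <!-- Fallback content: shown only when the plug-in is missing -->
  <p>This movie requires the Flash plug-in.
     <a href="plugin-download-page.html">Get the plug-in here</a>.</p>
</object>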

 Types of plug-ins include

 Text (such as Adobe Acrobat Reader)

 Images (such as Macromedia Shockwave) which allows the display of vector graphics.

 Sound

 Plug-ins such as RealPlayer, QuickTime, and Windows Media Player can play music.

 Animation, video, and presentation

 RealPlayer, QuickTime, and Windows Media Player also play animations and video.

 Flash and Shockwave are used for animation and presentation.

 Microsoft PowerPoint can be used for online presentations

(i) Text

Text and document plug-ins such as the popular Adobe Acrobat Reader get you past the display limitations of HTML and web browsers, where fonts are dependent on end users' preferences and page layout is primitive. In file formats provided by Adobe Acrobat, for example, special fonts and graphic images are embedded as data into the file and travel with it, so what you see when you view that file is precisely what the document's maker intended.

(ii) Images

Browsers enabled for HTML5 will read and display bitmapped JPEG, GIF, and PNG image files as well as Scalable Vector Graphics (SVG) files. Vector files are a mathematical description of the lines, curves, fills, and patterns needed to draw a picture, and while they typically do not provide the rich detail found in bitmaps, they are smaller and can be scaled without image degradation. Plug-ins to enable viewing of vector formats (such as Flash) are useful, particularly when some provide high-octane compression schemes to dramatically shrink file size and shorten the time spent downloading and displaying them. File size and compression are a recurring theme on the Internet, where data-rich images, movies, and sounds may take many seconds, minutes, or even longer to reach the end user.

Vector graphics are also device-independent, in that the image is always displayed at
the correct size and with the maximum number of colors supported by the computer. Unlike
bitmapped files, a single vector file can be downloaded, cached, and then displayed multiple
times at different scaled sizes on the same or a different web page.

(iii) Sound

Sound over the Web is managed in a few different ways. Digitized sound files in various
common formats such as MP3, WAV, AIF, or AU may be sent to your computer and then
played, either as they are being received (streaming playback) or once they are fully downloaded
(using a player). MIDI files may also be received and played; these files are more compact, but
they depend upon your computer's MIDI setup for quality. Speech files can be specially encoded into a token language (a "shorthand" description of the speech components) and sent at great speed to another computer to be un-tokenized and played back in a variety of voices. Sounds
may be embedded into QuickTime, Windows Media, and MPEG movie files. Some sounds can
be multicast (using the multicast IP protocols for the Internet specified in RFC 1112), so
multiple users can simultaneously listen to the same data streams without duplication of data
across the Internet. Web-based (VoIP, or Voice over Internet Protocol) telephones also transmit
data packets containing sound information.

(iv) Animation, Video, and Presentation

The most data-intense multimedia elements to travel the Internet are video streams
containing both images and synchronized sound, and commonly packaged as Apple’s
QuickTime, Microsoft’s Video for Windows (AVI), and MPEG files. Also data rich are the files
for proprietary formats such as Keynote, Microsoft PowerPoint, and other presentation
applications. In all cases, the trade-offs between bandwidth and quality are constantly in your
face when designing, developing, and delivering animations or motion video for the Web.

11.9 HTML (HyperText Markup Language)


 HTML stands for HyperText Markup Language.

 An HTML file is a text file containing small markup tags.

 The markup tags tell the Web browser how to display the page. An HTML file must have an .htm or .html file extension.

 An HTML file can be created using a simple text editor.

If you are running Windows, start Notepad.

Type in the following text:

<html>

<head>

<title>Title of page</title>

</head>

<body>

This is my first homepage. <b>This text is bold</b>

</body>

</html>

Save the file as “mypage.htm”.

Start your Internet browser. Select "Open" (or "Open Page") in the File menu of your browser. A dialog box will appear. Select "Browse" (or "Choose File") and locate the HTML file you just created - "mypage.htm" - select it and click "Open". Now you should see an address in the dialog box, for example "C:\MyDocuments\mypage.htm". Click OK, and the browser will display the page.

Example Explained

The first tag in your HTML document is <html>. This tag tells your browser that this is the
start of an HTML document. The last tag in your document is </html>. This tag tells your
browser that this is the end of the HTML document.

The text between the <head> tag and the </head> tag is header information.

Header information is not displayed in the browser window.

The text between the <title> tags is the title of your document. The title is displayed in
your browser’s caption.

The text between the <body> tags is the text that will be displayed in your browser.

The text between the <b> and </b> tags will be displayed in a bold font.

Note on HTML Editors

You can easily edit HTML files using a WYSIWYG (what you see is what you get) editor
like FrontPage, Claris Home Page, or Adobe PageMill instead of writing your markup tags in a
plain text file.

But if you want to be a skillful Web developer, we strongly recommend that you use a
plain text editor to learn your primer HTML.

 HTML Elements

 HTML documents are text files made up of HTML elements.

 HTML elements are defined using HTML tags



HTML Tags
 HTML tags are used to mark up HTML elements.

 HTML tags are surrounded by the two characters < and >. The surrounding characters are called angle brackets.

 HTML tags normally come in pairs like <b> and </b>.

 The first tag in a pair is the start tag; the second tag is the end tag. The text between the start and end tags is the element content.

 HTML tags are not case sensitive; <b> means the same as <B>.

HTML - Embed Multimedia

Sometimes you need to add music or video to your web page. The easiest way to add video or sound to your web site is to include the special HTML tag called <embed>. This tag causes the browser itself to include controls for the multimedia automatically, provided the browser supports the <embed> tag and the given media type.

You can also include a <noembed> tag for the browsers which don't recognize the <embed> tag. You could, for example, use <embed> to display a movie of your choice, and <noembed> to display a single JPG image if the browser does not support the <embed> tag.

Example

<!DOCTYPE html>

<html>

<head>

<title>HTML embed Tag</title>

</head>

<body>

<embed src = "/html/yourfile.mid" width = "100%" height = "60">

<noembed><img src = "yourimage.gif" alt = "Alternative Media"></noembed>

</body>

</html>

The <embed> Tag Attributes

Following is a list of important attributes which can be used with the <embed> tag.

Note: The align and autostart attributes are deprecated in HTML5. Do not use these attributes.

S.No Attribute & Description

1 align - Determines how to align the object. It can be set to either center, left or right.

2 autostart - This Boolean attribute indicates if the media should start automatically. You can set it to either true or false.

3 loop - Specifies if the sound should be played continuously (set loop to true), a certain number of times (a positive value) or not at all (false).

4 playcount - Specifies the number of times to play the sound. This is an alternate option to loop if you are using IE.

5 hidden - Specifies if the multimedia object should be shown on the page. A false value means no and a true value means yes.

6 width - Width of the object in pixels.

7 height - Height of the object in pixels.

8 name - A name used to reference the object.

9 src - URL of the object to be embedded.

10 volume - Controls the volume of the sound. Can be from 0 (off) to 100 (full volume).

Supported Video Types

You can use various media types like Flash movies (.swf), AVIs (.avi), and MOVs (.mov) inside the embed tag.

 .swf files - the file type created by Macromedia's Flash program.

 .wmv files - Microsoft's Windows Media Video file type.

 .mov files - Apple's QuickTime Movie format.

 .mpeg files - movie files created by the Moving Pictures Expert Group.
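Since <embed> is a legacy mechanism, modern pages would normally use the HTML5 <video> element instead. The following is a minimal sketch, not from the original text; the file names are placeholders:

<!DOCTYPE html>
<html>
<body>
<!-- The browser plays the first source format it supports;
     the text inside <video> is the fallback for old browsers -->
<video width="320" height="240" controls>
  <source src="yourmovie.mp4" type="video/mp4">
  <source src="yourmovie.ogv" type="video/ogg">
  Your browser does not support the video tag.
</video>
</body>
</html>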

Background Audio

You can use the HTML <bgsound> tag to play a soundtrack in the background of your webpage. This tag is supported by Internet Explorer only; most other browsers ignore it. It downloads and plays an audio file when the host document is first downloaded by the user and displayed. The background sound file will also replay whenever the user refreshes the browser.

Note: The bgsound tag is deprecated and is supposed to be removed in a future version of HTML, so it should not be used; rather, it is suggested to use the HTML5 audio tag for adding sound. But still, for learning purposes, this chapter explains the bgsound tag in detail.

This tag has only two attributes, loop and src. Both these attributes have the same meaning as explained above.

<!DOCTYPE html>

<html>

<head>

<title>HTML bgsound Tag</title>

</head>

<body>

<bgsound src = "/html/yourfile.mid">

<noembed><img src = "yourimage.gif"></noembed>

</body>

</html>

This will produce a blank screen. This tag does not display any component and remains hidden.

Internet Explorer can handle only three different sound file formats: WAV, the native format for PCs; AU, the native format for most Unix workstations; and MIDI, a universal music-encoding scheme.
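As noted above, the HTML5 <audio> tag is the recommended replacement for <bgsound>. A minimal sketch (not from the original text; the file names are placeholders) looks like this:

<!DOCTYPE html>
<html>
<body>
<!-- "controls" shows play/pause buttons; "loop" repeats the track.
     The browser plays the first source format it supports. -->
<audio controls loop>
  <source src="yourfile.mp3" type="audio/mpeg">
  <source src="yourfile.ogg" type="audio/ogg">
  Your browser does not support the audio tag.
</audio>
</body>
</html>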

Check your Progress


1. A standard interface that is used to send commands to instruments and sound sources is:

a. downloading.

b. RealAudio.

c. MIDI.

d. AAC.

2. ____________ was the developer of HTML and the Web.

3. Say True or False

The W3C is an organization dedicated to helping evolve the Web in positive directions.

a. True

b. False

4. Which of the following plug-in file type is used in MS Windows?

a. .DLL

b. .SO

c. .DSO

d. PPC

5. Which language in WWW specifies a web’s way by describing three-dimensional objects?

a. HTML

b. VRML

c. XML

d. UML

6. The _________________ field of a cookie in WWW represents the server's directory structure by identifying the utilized part of the server's file tree.

11.10 Virtual Reality Modeling Language (VRML)

VRML is more than an extension of HTML (HyperText Markup Language).

Purpose: The Virtual Reality Modeling Language is a file format for describing interactive
3D objects and worlds. VRML is designed to be used on the Internet, intranets, and local client
systems. VRML is also intended to be a universal interchange format for integrated 3D graphics
and multimedia.

Use: VRML may be used in a variety of application areas such as engineering and scientific
visualization, multimedia presentations, entertainment and educational titles, web pages, and
shared virtual worlds.

History

• VRML 1.0 Specification 1995

• VRML 97 / 2.0 Specification 1997

• X3D (and Java3D) Specification (in development)

Design
 Compatibility: Provide the ability to use and combine dynamic 3D objects within a VRML world and thus allow re-usability (object-oriented approach)

 Extensibility: Provide the ability to add new object types not explicitly defined in VRML, e.g. sound

 Performance: Emphasize scalable, interactive performance on a wide variety of computing platforms (platform independent)

Characteristics of VRML

 VRML is capable of representing static and animated dynamic 3D and multimedia objects
with hyperlinks to other media such as text, sounds, movies, and images.

 VRML browsers, as well as authoring tools for the creation of VRML files, are widely
available for many different platforms.

 Other formats: OpenGL, Inventor (not designed for WWW use)

VRML Basics
 Some nodes are container nodes or grouping nodes, which contain other nodes

 Nodes are arranged in hierarchical structures called scene graphs. Scene graphs are
more than just a collection of nodes; the scene graph defines an ordering for the nodes.
The scene graph has a notion of state, i.e. nodes earlier in the world can affect nodes that
appear later in the world.

VRML Shapes

VRML contains basic geometric shapes that can be combined to create more complex objects:

 Shape node is a generic node for all objects in VRML.

 Material node specifies the surface properties of an object. It can control what color the object is by specifying the red, green and blue values of the object.

There are three kinds of texture nodes that can be used to map textures onto any object:

1. Image Texture: The most common one that can take an external JPEG or PNG image
file and map it onto the shape.

2. Movie Texture: Allows the mapping of a movie onto an object; it can only use MPEG movies.

3. Pixel Texture: Simply means creating an image to use with Image Texture within VRML.

 Three types of light nodes can be used in a VRML world:

1. Directional Light node shines a light across the whole world in a certain direction.

2. Point Light shines a light in all directions from a certain point in space.

3. Spot Light shines a light in a certain direction from a point.

(RenderMan, a rendering package created by Pixar, is a renderer rather than a VRML light type.)

 The background of the VRML world can also be specified using the Background node.

 A Panorama node can map a texture to the sides of the world. A panorama is mapped
onto a large cube surrounding the VRML world.

VRML Specifics
 Some VRML Specifics:

(a) A VRML file is simply a text file with a “.wrl” extension.

(b) A VRML97 file needs to include the line #VRML V2.0 utf8 as its first line; this tells the VRML client what version of VRML to use.



(c) VRML nodes are case sensitive and are usually built in a hierarchical manner.

(d) All nodes begin with "{" and end with "}", and most can contain nodes inside of nodes.

(e) Special nodes called group nodes can cluster together multiple nodes and use the keyword "children" followed by "[ ... ]".

(f) Nodes can be named using DEF and be used again later by using the keyword USE.

This allows for the creation of complex objects using many simple objects.

• A simple VRML example to create a box in VRML: one can accomplish this by typing:

Shape {
  geometry Box { }
}

The Box defaults to a 2-meter long cube in the center of the screen. Putting it into a
Transform node can move this box to a different part of the scene. We can also give the box a
different color, such as red.

First VRML – File

#VRML V2.0 utf8

Shape {
  appearance Appearance {
    material Material {
      diffuseColor 1 0 0
    }
  }
  geometry Cylinder {
    radius 3
    height 6
  }
}

First File – Scene Graph

Basic Shape Nodes

· A lot of simple shapes are given: Cylinder, Box, Sphere, etc.

· More "general" shapes are needed for realistic worlds - IndexedFaceSet

 IndexedFaceSet Node

Defines 3D polyhedrons from a collection of 2D polygons, e.g. triangles



· IndexedFaceSet Node

IndexedFaceSet {
  coord Coordinate {
    point [ x1 y1 z1, x2 y2 z2, ... ]
  }
  coordIndex [ 3 0 5 1 -1,
               2 0 1 4 5 -1,
               3 1 5 -1 ]
}

· Texture Node

Map the 2D image texture onto the 2D Polygons in the scene

Texture Node

Shape {
  appearance Appearance {
    texture ImageTexture { url "image.jpg" }
  }
  geometry IndexedFaceSet {
    coord Coordinate {
      point [ x1 y1 z1, x2 y2 z2, ... ]
    }
    coordIndex [ 1 0 3 -1, 2 3 4 -1, ... ]
    texCoord TextureCoordinate {
      point [ x1 y1, x2 y2, ... ]
    }
  }
}

· Transformation Node

Is a grouping node of the form:

Transform {
  rotation x y z angle
  translation x y z
  scale x y z
  children [ Shape { ... }, Shape { ... }, ... ]
}

• Different Coordinate Systems for groups of Objects



· Viewpoint Node

Specify the position of the camera; default:

Viewpoint {
  position 0 0 10
  orientation 0 0 1 0
  fieldOfView 0.78   # 45 degrees is the default
}

• Multiple Viewpoints possible

11.11 Summary
 The World Wide Web (WWW) is a collection of text pages, digital photographs, music files, videos, and animations you can access over the Internet.

 A web browser is a client program, software or tool through which we send HTTP requests to a web server.

 A web server is a computer system which provides web pages via HTTP (Hypertext Transfer Protocol). An IP address and a domain name are essential for every web server.

 Plug-ins add the power of multimedia to web browsers by allowing users to view and interact with new types of documents and images.

 Plug-ins add capabilities to the web browser.

 Plug-ins are also sometimes called helper applications.

 Various media types like Flash movies (.swf), AVIs (.avi), and MOVs (.mov) can be used inside the embed tag.

 The Virtual Reality Modeling Language is a file format for describing interactive 3D objects
and worlds

11.12 Check Your Answer


1. C. MIDI

2. Tim Berners-Lee

3. a. True

4. a. .DLL

5. b. VRML

6. Path

11.13 Model Questions


1. Define WWW.

2. List the tools of WWW.

3. Define web server.

4. Define Internet Service Provider.

5. Define web page markers.

6. Define web editors.

7. What are the plugins on the internet?

8. What is VRML?

9. Define HTML?

10. What are HTML tags?

11. Create a web page using HTML?

12. Explain in detail about VRML.



LESSON 12
DESIGNING FOR THE WWW

Structure
12.1 Introduction

12.2 Learning Objectives

12.3 Working on the Web

12.4 Multimedia Applications

12.5 Media Communication

12.6 Media Consumption

12.7 Media Entertainment

12.8 Media games

12.9 Multimedia Services

12.10 Summary

12.11 Check Your Answer

12.12 Model Questions

12.1 Introduction

Multimedia is one of the most fascinating and fastest growing areas in the field of
information technology. The capability of computers to handle different types of media makes
them suitable for a wide range of applications. A Multimedia application is an application which
uses a collection of multiple media sources e.g. text, images, sound/audio, animation and/or
video on a single platform for a defined purpose. Multimedia can be seen at each and every
aspect of our daily life in different forms. However, entertainment and education are the fields
where multimedia has its dominance.

12.2 Learning Objective


At the end of the lesson, the learner will be able to

 Designing for the World Wide Web

 Define various utilities provided on networked multimedia system

 Define the multimedia facilities needed by business and distributed learning Environments

 Propose new multimedia applications based on the examples presented.

12.3 Working on the Web


 WWW stands for World Wide Web.

 The World Wide Web (WWW) is a global information medium which users can read and write via computers connected to the Internet.

 The Web, or World Wide Web, is basically a system of Internet servers that support specially formatted documents. The documents are formatted in a markup language called HTML (HyperText Markup Language) that supports links to other documents, as well as graphics, audio, and video files.

Web Browsers

A web browser, or simply “browser,” is an application used to access and view websites.
Common web browsers include Microsoft Internet Explorer, Google Chrome, Mozilla Firefox,
and Apple Safari.

The primary function of a web browser is to render HTML, the code used to design or
“markup” webpages. Each time a browser loads a web page, it processes the HTML, which
may include text, links, and references to images and other items, such as cascading style
sheets and JavaScript functions. The browser processes these items, then renders them in
the browser window.

Early web browsers, such as Mosaic and Netscape Navigator, were simple applications
that rendered HTML, processed form input, and supported bookmarks. As websites have
evolved, so have web browser requirements. Today’s browsers are far more advanced,

supporting multiple types of HTML (such as XHTML and HTML 5), dynamic JavaScript,
and encryption used by secure websites.

The capabilities of modern web browsers allow web developers to create highly interactive websites. For example, Ajax enables a browser to dynamically update information on a webpage without the need to reload the page. Advances in CSS allow browsers to display responsive website layouts and a wide array of visual effects. Cookies allow browsers to remember your settings for specific websites.

While web browser technology has come a long way since Netscape, browser compatibility
issues remain a problem. Since browsers use different rendering engines, websites may not
appear the same across multiple browsers. In some cases, a website may work fine in one
browser, but not function properly in another. Therefore, it is smart to install multiple browsers
on your computer so you can use an alternate browser if necessary.

(Figure: a brief overview of the most commonly used features of a browser.)

Web Sites

Information on the Web is displayed in pages. These pages are written in a standard
language called HTML (HyperText Markup Language) which describes how the information
should be displayed regardless of the browser used or the type of computer. Pages also include
hypertext links which allow users to jump to other related information. Hypertext is usually
underlined and in a different color and can include individual words, sentences, or even graphics.
A Web site is a collection of related Web pages with a common Web address.

Web Addresses

Web sites and the pages they contain each have a unique worldwide address, known as a Uniform Resource Locator (URL) in Internet jargon. The address for Microsoft is www.microsoft.com. For most sites, this is all you need to specify, and it defaults to the main page (or home page) for the site. In some cases, you may also need or want to specify the path and file name, such as www.microsoft.com/office97. Note the extension .com after microsoft. There are six extensions that help to divide the computers on the Internet into understandable groups or domains. These six domains include: .com = commercial, .gov = government, .edu = education, .org = organizations, .net = networks, .mil = military. There are also extensions for sites outside of the U.S. including: .jp = Japan, .uk = United Kingdom, .fr = France, and so on.
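As an illustration using the document's own example, the URL www.microsoft.com/office97 breaks into two parts: www.microsoft.com is the domain name identifying the web server, and /office97 is the path to a page within that site. When the address is typed into a browser, the http:// prefix (the protocol used to fetch the page) is usually added automatically.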

 How to “Surf” the Web

Enter a Web site address in the “Location” box and hit the return key. You will jump to the
home page of the site. If you are not looking for a particular site, a good place to start is
Netscape’s “What’s Cool” page which can be found by pressing the “What’s Cool” button
located under the address location box on Netscape browsers.

Mouse-click on any words on the page that are underlined and highlighted. These words
are hypertext links which jump you to other related information located on the page, on the site,
or other sites. As you jump from page to page and site to site, remember that you can always
hit the “Back” arrow button to return to any page. The browser automatically saves all the Web
pages to your hard drive (the disk cache) so you can immediately go back without having to
reload the pages.

In most cases, you will start out surfing a particular site or topic and through numerous
hypertext links find yourself somewhere completely unrelated but interesting. Now you’re surfing!

 How to Search the Web

There are basically three major search services available for handling different tasks:
Directories, Search Engines, and Meta Search Engines. Directories are sites that, like a gigantic
phone book, provide a listing of the sites on the web. Sites are typically categorized and you
can search by descriptive keywords. Directories do not include all of the sites on the Web, but
generally include all of the major sites and companies. Yahoo is a great directory. Search
Engines read the entire text of all sites on the Web and create an index based on the occurrence
of key words for each site. AltaVista and Infoseek are powerful search engines. Meta Search
Engines submit your query to both directory and search engines. Metacrawler is a popular
Meta search engine.

12.3.1 Web Design Issues

 Browser & Operating Systems

 Web pages are written using different HTML tags and viewed in a browser window.

 The different browsers and their versions greatly affect the way a page is rendered, as different browsers sometimes interpret the same HTML tag in a different way.

 Different versions of HTML also support different sets of tags.

 The support for different tags also varies across the different browsers and their versions.

 The same browser may work slightly differently on different operating systems and hardware platforms.

 To make a web page portable, test it on different browsers on different operating systems.

 Bandwidth and Cache

 Users have different connection speeds, i.e. bandwidth, to access the Web sites.

 Connection speed plays an important role in designing web pages; if the user has a low-bandwidth connection and a web page contains too many images, it takes more time to download.

 Generally, users have no patience to wait longer than 10-15 seconds and move to another site without looking at the contents of your web page.

 The browser provides temporary memory called cache to store the graphics.

 When the user gives the URL of the web page for the first time, the HTML file together with all the graphics files referred to in the page is downloaded and displayed.

 Display Resolution

 Display resolution is another important factor affecting the Web page design, as we do not have any control over the display resolution of the monitors on which users view our pages.

 Display or screen resolution is measured in terms of pixels, and common resolutions are 800 x 600 and 1024 x 768.

 We have three choices for Web page design (see the sketch after this list):

o Design a web page with fixed resolution.

o Make a flexible design using an HTML table to fit into different resolutions.

o If the page is displayed on a monitor with a higher resolution, the page is displayed on the left-hand side and some part on the right-hand side remains blank. We can use a centered design to display the page properly.
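A minimal sketch of such a flexible, centered design is shown below; it is an illustration only (the widths and the class name are assumptions, not from the original text):

<html>
<head>
<style>
/* Center the content and let its width adapt to the screen */
.wrapper {
  width: 80%;          /* flexible: scales with the resolution */
  max-width: 960px;    /* stops stretching on very wide screens */
  margin: 0 auto;      /* centers the page on high resolutions */
}
</style>
</head>
<body>
<div class="wrapper">
  <h1>Page title</h1>
  <p>Content that adapts to different display resolutions.</p>
</div>
</body>
</html>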

 Look & Feel

 Look and feel of the web site decides the overall appearance of the web site.

 It includes all the design aspects such as:

 Web site theme

 Web typography

 Graphics

 Visual structure

 Navigation etc.

 Page Layout and Linking

 A web site consists of individual web pages that are linked together using various navigational links.

 Page layout defines the visual structure of the page and divides the page area into different parts to present information of varying importance.

 Page layout allows the designer to distribute the contents on a page such that a visitor can view it easily and find the necessary details.

 Locating Information

 A web page is viewed on a computer screen, and the screen can be divided into five major areas: center, top, right, bottom and left.

 The first major area of importance in terms of users' viewing pattern is the center, then top, right, bottom and left, in this particular order.

 Making Design User-Centric

 It is very difficult for any Web designer to predict the exact behavior of the Web site users.

 However, an idea of the general behavior of the common user helps in making the design of the Web site user-centric.

 Users either scan the information on the web page to find the section of their interest or read the information to get details.

 Sitemap

 Many a time Web sites are too complex, as there are a large number of sections and each section contains many pages.

 It becomes difficult for visitors to quickly move from one part to another.

 Once the user selects a particular section and pages in that section, the user gets confused about where he/she is and where to go from there.

 To make it simple, keep your hierarchy of information to a few levels or provide a navigation bar on each page to jump directly to a particular section.

 Tips for Effective Navigation

 Navigation links are either text based, i.e. a word or a phrase is used as a link, or graphical, i.e. an image such as an icon or a logo is used as a link.

 Navigation links should be clear and meaningful.

 They should be consistent.

 Links should be understandable.

 Organize the links such that contents are grouped logically.

 Provide a search link, if necessary, usually on top of the page. Use common links such as 'About us' or 'Contact us'.

 Provide a way to return to the first page.

 Provide the user with information regarding location.

 A horizontal navigation bar can be provided on each page to directly jump to any section; a sketch of such a bar follows.
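As a hedged illustration (the page names are placeholders, not from the original text), a simple horizontal navigation bar can be marked up as a list of links:

<!-- A horizontal navigation bar: each item links to a section.
     The inline style places the items side by side. -->
<ul>
  <li style="display:inline; margin-right:1em"><a href="index.html">Home</a></li>
  <li style="display:inline; margin-right:1em"><a href="about.html">About us</a></li>
  <li style="display:inline; margin-right:1em"><a href="contact.html">Contact us</a></li>
</ul>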

12.4 Application Areas of Multimedia

There are so many applications of multimedia in this web world. Let us consider a few. They are as follows:

(i) Multimedia in Business

Training, informational and promotional materials, sales presentations, and point-of-sale displays that allow for consumer interaction, as well as communication within and outside the organization, are all common applications of multimedia in the business world. Multimedia presentations for many applications can be highly portable, particularly in the case of CD-ROMs, DVD-ROMs and video tapes. The equipment required to produce these presentations is relatively commonplace or otherwise easy to access. Presentations that grab and keep attention are used in advertising. Business-to-business and inter-office communications are developed by creative services firms. For advanced multimedia presentations beyond simple slide shows, to sell ideas or liven up training, a commercial multimedia developer may be hired; such developers may design government services and nonprofit services applications as well.

(ii) Multimedia in software

Software engineers may use multimedia in computing applications ranging from entertainment to training, such as military or industrial training and digital game design; it can also be used as a learning aid. Such multimedia software is created by professionals and software engineers.

(iii) Multimedia in Education and Training

In education, multimedia is used to produce computer-based training and to provide reference books like encyclopedias and almanacs. Computer-based training lets users go through a CD presenting text about a particular topic and associated information in various formats. The combination of education and entertainment gives us edutainment [i.e., education with entertainment and entertainment with education]. The idea of media convergence is also becoming a major factor in the field of higher education. Separate technologies such as voice, data and video now share resources and interact with each other synergistically, creating new efficiencies. Media convergence is rapidly changing the curriculum in universities all over the world. Along with all these things, it is also changing the availability of jobs and the required technological skills.

Edutainment is nothing but educational entertainment. Many computer games with a focus on education are now available. A simple example in this case is an educational game which plays various rhymes for little kids. In addition to playing rhymes, the child can paint pictures, increase or reduce the size of various objects, and so on. Similarly, many other edutainment packages, which provide a lot of detailed information to kids, are available. Microsoft has produced many such CD-based multimedia titles, such as Sierra and Knowledge Adventure, which in addition to play provide some sort of learning component. The latest in this series is a package which teaches about the computer using game playing. There are many more companies which have specialized in the entertainment sector; you may explore the list of such companies on the net.

(iv) Business Communications

Multimedia is a very powerful tool for enhancing the quality of business communications. Business communications such as employee-related communications, product promotions, customer information, and reports for investors can be presented in multimedia form. All these business communications are required to be structured such that a formal level of content structure exists in the communication. Other common business applications involving multimedia require access to a database of multimedia information about a company. The multimedia technology of today can easily support this application, as natural language enquiry systems do exist for making queries.

(v) E-learning

Electronic learning has become a very good communication [interaction] medium between students and teachers. Several lines of research have evolved, and the possibilities for learning and instruction are nearly endless. There are two categories of media which link the students and teachers: one, those which can be used to convey the subject content, such as print materials, video tapes and audio tapes, television, computer-based courseware, CD-ROM etc.; the other, those which permit communication between teacher and students, such as audio/video conferencing, tele-conferencing and the Internet.

(vi) Knowledge Transfer

This kind of application involves transmission of a piece of information with the maximum impact, that is, the transfer of information in such a fashion that it facilitates retention. This application is meant for both academia and business. In academia, knowledge transfer is used as the building block, whereas in business it is the effective transfer of information which might be essential for the survival of the business. Multimedia-based teaching is gaining momentum, as powerful teaching aids are quite common. Multimedia is one of the best ways to provide short-term training to the workers in business houses.

(vii) Public Access

Public access is an area of application where many multimedia applications will soon be available. One such application may be a tourist information system, where a person who wants to go sightseeing may have a glimpse of the places he has selected for visiting.

For example, for a very simple public information application, such as a railway timetable enquiry, a multimedia-based system may not only display the trains and times but also the route map to the destination from the source you have desired.

12.5 Media Communication

Multimedia Representation

1. Form of representation

- In applications that involve just a single type of media, the basic form of representation of
the particular media type is required.

- Otherwise, different media types should be integrated together in a digital form.

2. In applications involving text and images:


- They comprise blocks of digital data, each of which is represented by a fixed bit pattern known as a code word.

- The duration of the overall transaction is relatively short.

- No streaming is required.

3. In applications involving audio and video:

- The signals vary continuously with time.

- The duration of the application can be relatively long. Streaming is required.

- The amount of data used to represent the signal is measured in bits per second (bps).

Compression is generally applied to digitized signals to reduce

(i) The resulting bit rate to a level a network can support and

(ii) The time delay between a request being made for some information and the information
becoming available.
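As a rough worked example (illustrative figures, not from the original text): uncompressed CD-quality stereo audio requires 44,100 samples per second x 16 bits x 2 channels, or about 1.4 Mbps, while a typical MP3 stream of the same music runs at 128 kbps - a compression of roughly 11:1 - bringing the bit rate down to a level that even a modest network connection can support.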

Multimedia Networks

There are five types of communication network that are used to provide multimedia communication services:

(i) Telephone networks

(ii) Data networks

(iii) Broadcast television networks

(iv) Integrated services digital networks (ISDN)

(v) Broadband multiservice networks

Characteristics:
- The first 3 types were initially designed to provide just a single type of service.

- The last 2 types were designed to provide multiple services.

Media types
 The information flow associated with the different applications can be either continuous
or block mode.

i. In the case of continuous media:

Mode of operation: streaming

The information stream is generated by the source continuously in a time-dependent way and played out directly as it is received at the destination, e.g. audio, video.

Continuous media is called real-time media, as it is generated in a time-dependent way.

The source stream can be generated at a constant bit rate (CBR) or a variable bit rate (VBR).

ii. In the case of block-mode media:

Mode of operation: downloading

The source information comprises a single block of information that is created in a time-independent way, e.g. text, image.

The delay between the request being made and the contents of the block being output at the destination is called the round-trip delay. (It should be less than a few seconds.)

Communication Modes

The transfer of the information streams associated with an application can be 1 of the 5
modes:

Simplex: 1 direction only

Half-duplex: flows in both directions, but alternately

Full-duplex: flows in both directions simultaneously (1-to-1 transmission)

Broadcast: 1-to-all transmission

Multicast: 1-to-many transmission

Network types

There are 2 types of communications channel associated with the various network types: circuit-mode and packet-mode.

1. Channels in circuit-mode:
 Operate in a time-dependent way

 Also known as a synchronous communications channel, since it provides a constant bit rate service.

2. Channels in packet-mode:
· Operate in a time-varying way

· Also known as an asynchronous communications channel, since it provides a variable bit rate service.

1. Circuit-Mode:
· This type of network is also known as a circuit-switched network.

· A circuit-mode network comprises an interconnected set of switching offices/exchanges to which the subscribers/computers are connected.

· Prior to sending any information, the source must first set up a connection through the
network.

· The bit rate associated with the connection is fixed.

· The messages associated with the setting up and clearing of a connection are known as
signaling messages.

· There is a call/connection setup delay.

Example: PSTN, ISDN

2. Packet Mode
 There are 2 types of packet-mode networks: connection-oriented (CO) and connectionless
(CL)

 This type of network is also known as a packet switched network.

(i) Connection Oriented Network


 A connection-oriented network comprises an interconnected set of packet-switching
exchanges (PSEs).

 Prior to sending any information, a connection is first set up through the network.

 The connection utilizes only a variable portion of the bandwidth of each link and hence it’s
known as a virtual connection or a virtual circuit (VC).

 Each PSE has a routing table which defines a packet coming from which input link will be
delivered to which output link.

 Examples: X.25, ATM network

(ii) Connectionless Network


 The establishment of a connection is not required and the two communicating terminals/
computers can communicate and exchange information as and when they wish.

 Each packet must carry the full source and destination addresses in its header in order
for each PSE to route the packet onto the appropriate outgoing link.

 The term router is normally used rather than PSE.

 Example: Internet

12.6 Media Consumption


 Browsing, navigation, displaying, annotation

 Books, proceedings, newspapers – Customized access possible

 Kiosks – Airport, train station, bank assistant, cinema information, real-estate catalogue,
university, museum showcase etc. – Fast response is necessary

 Tele-shopping

Check your Progress


1. HTML web pages can be read and rendered by _________.

a. Compiler

b. Server

c. Web Browser

d. Interpreter

2. Engineers design cars before producing them using a multimedia application called __________________

3. Say True or False

Multimedia element that makes object move is called animation.

a. True b. False

4. A multimedia directory and an _________________ are examples of multimedia applications for finding information

a. Text

b. Joystick

c. Voiceover

d. Encyclopedia

5. A combination of _________________ and entertainment makes learning enjoyable in schools

a. Training

b. Education

c. Transferring

d. Examination

6. ___________________ multimedia is a combination of multimedia technology and Internet technology.

12.7 Media Entertainment

Multimedia entertainment applications aim at diverting users, that is, engaging them in some activity or other. The activities include listening to music, watching a video, playing games, participating in an interactive story, meeting people in a virtual environment, etc. Higher interactivity, mobility and content awareness are major roles played by multimedia application software. Multimedia is especially used in movie making and animation. Multimedia games are a popular pastime, and software programs are available either on CD-ROMs or online. A few video games also use multimedia features. A multimedia application that allows users to actively participate is called interactive multimedia. Digital recording material may be just as durable and instantly reproducible, with perfect copies every time.

The entertainment industry has used this technology the most to create real-life-like games. Several developers have used the graphics, sound and animation of multimedia to create a variety of games. Special technologies such as virtual reality have made these games just like real-life experiences. An example is the flight simulator, which creates real-life imagery. Many multimedia games are now available on computers. Children can enjoy these experiences; for example, they can drive cars of different varieties, fly aircraft, play any musical instrument, play golf, etc. Multimedia productions are also used in the creation of many movies, where the multimedia components are mixed with real-life pictures to create a powerful entertainment atmosphere.

In addition, multimedia is heavily used in the entertainment industry, especially to develop


special effects in movies and animations. Multimedia games are a popular pastime and are
software programs available either as CD-ROMs or online. Some video games also use
multimedia features. Multimedia applications that allow users to actively participate instead of
just sitting by as passive recipients of information are called Interactive Multimedia.

12.8 Media Games

One of the most exciting applications of multimedia is games. Nowadays, live Internet pay-to-play gaming with multiple players has become popular. Actually, the first application of multimedia systems was in the field of entertainment, and that too in the video game industry. The integrated audio and video effects make various types of games more entertaining. Generally, most video games need a joystick to play.

Digital online multimedia can be downloaded or streamed; streaming multimedia can be on-demand or live. Multimedia games and simulations may be used with exclusive effects in a physical environment or in an online network with diversified users; they can also be used in offline mode or on a game system.

Electronic games, 3D adventure games, sporting games and interactive movies are
extremely popular forms of multimedia applications. The key to their popularity lies in their
interactive nature. The new generations of games provide ingenious levels of interactivity and
realism to captivate the user of the product. The attraction of this type of application is realism,

fast action and user input through peripherals such as mouse, track-pad, keyboard and joystick.
Computer-based games have led to many developments in interactive computing. This type of
application requires a high level of graphics computing power and hence the impetus to develop
more efficient algorithms for display movement and more powerful graphics cards.

12.9 Multimedia Services

The advances of computing and communication and the creation of relevant standards have led to the beginning of an era where people are getting multimedia facilities at home, in the form of interactive TV or through the World Wide Web.

These services may include:

 Basic Television Services

 Interactive entertainment

 Digital Audio

 Video on demand

 Home shopping through e-mail

 Financial transactions using e-commerce

 Interactive single and multiuser games

 Digital multimedia libraries

 Electronic versions of newspapers, magazines etc.

Cable TV and telephone companies, dot-com companies, the publishing industry, etc. are the main infrastructure providers for these facilities. The networking technology, along with improved computing and compression technologies, is delivering interactive services profitably. Companies in the entertainment, cable, telephone, and Internet industries are trying to design a wide variety of such multimedia services.

Today, personal computers are the tools that promote collaboration. They are essential to any multimedia workstation. Many high-speed networks are in place that allow multimedia conferencing, or electronic conferencing. Such facilities are even available today through the Internet. Today, we have to depend on our telephone to link us with others, whether it is a phone call, a group audio conference or a dialup Internet connection. However, tomorrow it will be computer-based links that connect us with others. A computer-based multimedia conference allows us to exchange audio, text, image, and even video information. It also facilitates group development of documents and other information products. Let us discuss these concepts in greater detail.

Benefits of Multimedia
1. Addresses multiple learning styles

2. Provides an excellent way to convey content

3. Uses a variety of media elements to reinforce one idea.

4. Activates multiple senses creating rich experiences

5. Gives life to flat information

6. Enhances user enjoyment

7. Improves retention

8. Enables users to control web experience

12.10 Summary
 The World Wide Web (WWW) is a global information medium which users can read and write via computers connected to the Internet.

 Information on the Web is displayed in pages. These pages are written in a standard
language called HTML (HyperText Markup Language) which describes how the information
should be displayed regardless of the browser used or the type of computer

 A Web site is a collection of related Web pages with a common Web address.

 Web sites and the pages they contain each have a unique worldwide address.

 Users have different connection speeds, i.e. bandwidth, to access the Web sites.

 Page layout defines the visual structure of the page and divides the page area into different parts to present information of varying importance.

 In applications that involve just a single type of media, the basic form of representation of
the particular media type is required.Otherwise, different media types should be integrated
together in a digital form.

 Examples of typical multimedia applications include: digital video editing and production systems; electronic newspapers and magazines; the World Wide Web; online reference works, such as encyclopedias; games; groupware; home shopping; interactive TV; multimedia courseware; video conferencing; video-on-demand; and interactive movies.

12.11 Check Your Answer


1. c) Web Browser

2. Computer Aided Design

3. a) True

4. d) Encyclopedia

5. b) Education

6. Web-based

12.12 Model Questions


1. Discuss how the multimedia is used in business and education field.

2. Explain in detail about working on the web.

3. List out the applications area of multimedia.

4. Write short notes on web design issues.

5. Describe in detail multimedia communication.

6. What are the benefits of multimedia?

7. List out the communication technologies and multimedia services.



LESSON 13
MULTIMEDIA IN FUTURE

Structure
13.1 Introduction

13.2 Learning Objectives

13.3 Multimedia-Looking towards Future

13.4 Digital Communication and New Media

13.5 Interactive Television

13.6 Summary

13.7 Check Your Answer

13.8 Model Questions

13.1 Introduction

The future scope of multimedia is to gain a place in meeting people's needs and also understanding their expressions. The future trends of multimedia use technology to understand the expressions of human beings and respond to them in the right manner. Multimedia provides information through sources like web search engines, online news and social media. The future trends of multimedia influence many fields in an effective manner. Many sectors apply multimedia technologies to enhance sector-related factors.

13.2 Learning Objectives


At the end of the lesson, the learner will be able to

 The future scope of multimedia.

 Learn the application of multimedia in several fields.

 Understand the extent of multimedia influence on future life.

 Latest innovations of multimedia.



13.3 Multimedia-Looking towards Future

The future of multimedia will be decided by a few factors, such as the comfort people gain from the automation features of multimedia technologies. Many multimedia companies, like Sony and Panasonic, use multimedia technologies to a great extent. Examples of such multimedia technology include record players, Bluetooth headphones, light bulb speakers with baby cry detectors, and earbuds.

There are many future directions for multimedia; indeed, the multimedia explosion has probably only just begun.

There are many trends that could be mentioned here, for example:

 developments in multimedia system’s hardware

 developments in multimedia system’s networking and distributed environments

 developments in multimedia system’s design

 Developments in Hypermedia Models which we mentioned in the last chapter briefly with
RMM for example.

 Developments in multimedia standards, formats and compression, etc.

However, here we concentrate on three emerging technologies/applications likely to have a major impact on multimedia in the coming years:

1. Digital Libraries, which need knowledge-based multimedia systems to perform content-based retrieval or indexing of multimedia data

2. High Definition TeleVision (HDTV)

3. Interactive Television

 Digital Library

 An evolution from small databases, to image databases, ..., to Digital Library

 Tremendous potentials and challenges to effective multimedia information retrieval



Content-Based Retrieval (CBR)


 Contents contained in digital text, sound, music, image, video, etc.

A big step forward from traditional database search which is largely based on simple
attributes.

 Serve as a browsing tool - analogous to the current web search

 Keyword indexing is fast and easy to implement. However, it has limitations:

o Can’t handle nonspecific queries such as “Find a scenic photo of Lake Tahoe”

o Misspelling is frequent and difficult, e.g., “azalia” for “azalea”

o Descriptions are inaccurate and incomplete

 High Definition TV (HDTV)

High-definition television (HDTV) is a digital television broadcasting system with greater resolution than traditional television systems (NTSC, SECAM, PAL). HDTV is digitally broadcast because digital television (DTV) requires less bandwidth if sufficient video compression is used. There are three key differences between HDTV and what's become known as standard-definition TV, i.e. regular NTSC, PAL or SECAM.

The three differences are: an increase in picture resolution, 16:9 widescreen as standard, and the ability to support multi-channel audio such as Dolby Digital. The most important aspect
of HDTV, and the one which gives it its name is the increased resolution. Standard definition
NTSC broadcasts have 525 horizontal lines, and PAL broadcasts are slightly better at 625
lines. In both these systems however, the actual number of lines used to display the picture,
known as the active lines, is fewer than that. In addition, both PAL and NTSC systems are
interlaced, that is, each frame is split into two fields, one field is the odd-numbered lines and
the other is the even lines.

Each frame is displayed alternately and our brain puts them together to create a complete
image of each frame. This has an adverse effect on picture quality. HDTV is broadcast in one of two formats: 720p and 1080i. The numbers refer to the number of lines of vertical resolution

and the letters refer to whether the signal is progressive scan, ‘p’, or interlaced, ‘i’. Progressive
scan means that each frame is shown in its entirety, rather than being split into fields. Both
systems are significantly better quality than either PAL or NTSC broadcasts. The first is 720p
("p" stands for progressive), which is an image of 1280 pixels along the horizontal by 720 vertical lines. It shows the whole image in a single frame, that is, progressively. The second is 1080i, which measures 1920 x 1080 and is displayed as two fields that are interlaced.
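To put these formats in perspective (a back-of-the-envelope illustration, not from the original text): a 720p frame contains 1280 x 720 = 921,600 pixels and a 1080-line frame contains 1920 x 1080 = 2,073,600 pixels, while a standard-definition frame of roughly 720 x 480 holds only about 345,600 pixels, so an HDTV frame carries between roughly 2.5 and 6 times as much picture information.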

A high-res screen with at least 720 lines will show both formats but only a 1080-line
screen will show 1080i footage at its best, i.e. in an un-scaled form. The 1080p format, which
is the absolute best form of HD is not used by broadcasters. Movies made in 1080p (e.g. the
last three Star Wars films) might appear in Blu-ray and/or HD DVD format. Sony’s PlayStation
3 produces 1080p output. There are more and more ‘Full HD’ screens (capable of displaying
1080p) appearing. A 1080p screen can de-interlace a 1080i signal. With very few 1080p sources
available, the main benefit of a Full HD screen is its ability to map a source such as Sky TV
(1080i) pixel for pixel to the screens resolution (ie 1920 x 1080). HDTV uses 16:9 widescreen
as is its aspect ratio so widescreen pictures are transmitted properly and not letterboxed or
panned.

Dolby Digital multichannel sound can be broadcast as part of an HDTV signal, so if you
have a surround sound speaker set-up you can use it to listen to TV rather than just DVDs. To
receive an HDTV broadcast you need either a TV with a built-in HDTV tuner or a HDTV receiver
which can pick up off-the-air HDTV channels, or cable or satellite HDTV. You also need to live where HDTV channels are broadcast or distributed by cable or satellite. Currently
HDTV is widespread in Japan and is becoming commonplace in the US, with most major
networks distributing HDTV versions of their popular content. The situation in Europe is not so
bright.

There is only one company broadcasting HDTV in the whole of Europe, Euro1080, and
it has only two HDTV channels, both in the 1080i format. Euro1080's HDe shows major cultural and sporting events to cinemas and clubs around Europe, while HD1 broadcasts sports, opera, rock music, and lifestyle programs via satellite to homes in Europe. UK satellite broadcaster,

Sky, which is owned by Fox proprietor Rupert Murdoch, has announced plans to broadcast
some HDTV content in 2006. The BBC has also made noises about broadcasting HDTV
programs (it already films some programs in HD format).

Fig 13.1 : Resolutions of HDTV

Development and Future of Multimedia Technology


a) Factors Contributing towards the development of Multimedia Technology:

i) Price: The drop in the prices of multimedia components assures us that multimedia technological development will be more rapid in the future. Today the prices of multimedia products are dropping rapidly; this increases the demand for them as they become more affordable.

ii) MMX Technologies: Enabled computer systems to interact fully with the audio and video elements and the compact disc drive more effectively.

iii) Development of DVD Technology: DVD technology has replaced VHS technology and laser disks in the production of digital videos or films, because DVD pictures are clearer, faster, higher quality, higher capacity and lower priced.

iv) Erasable Compact Discs (CD-E): Since it is rewritable, it enables us to change data, to archive large volumes of data and also to back up copies of data stored in the hard disk.

(v) Software Development: Software applications for education, games and entertainment became easier to use with the various additional elements in the MMX technologies. As visual programming was introduced, multimedia software development became easier and faster, and increased rapidly.

vi) Internet: Brought dramatic changes in the distribution of multimedia materials.

vii) Increased usage of Computers: Previously, computers were used just for word
processing. With the development of multimedia technology, text is no longer the only
medium used to disseminate information; graphics, audio, video, animation and
interactivity are used as well. Hence, the computer’s role has diversified, and it now acts
as a source for education, publication, entertainment, games and many others.

Check Your Progress


1. ____ recording allows DVD-R and DVD+R discs to store more data, up to 8.5 GB per
disc

a. Single Layer

b. Dual Layer

c. Multi-Layer

d. Assigned Layer

2. HDTV stands for ____

3. Allowing the viewer of a multimedia project to control what elements are delivered and
when is called _____.

a. interactive multimedia

b. selective multimedia

c. onscreen multimedia

d. portable multimedia

4. Mass media suggests communication to a large, ________, and unknown audience

5. Say True or False

Online gaming sites are a fast and efficient way for companies to promote their products

a. True b. False

6. Say True or False

Interactive multimedia allows the viewer of the multimedia presentation to control what
elements of multimedia are delivered and in what sequence.

a. True b. False

7. How can multimedia be displayed?

a. Magazines, television and books

b. Computers, T.V’s and Websites

c. Computers, newspapers and Websites

d. Computers, newspapers and Websites

13.4 Digital Communication and New Media

New media is a catch-all term used for various kinds of electronic communications that
are conceivable due to innovation in computer technology. In contrast to “old” media, which
includes newspapers, magazines, books, television and other such non-interactive media, new
media is comprised of websites, online video/audio streams, email, online social platforms,
online communities, online forums, blogs, Internet telephony, Web advertisements, online
education and much more.

Traditional media methods include mostly non-digital advertising and marketing methods.
Traditional media is:

 Television advertisements

 Radio advertising

 Print advertising

 Direct mail advertisements

 Billboards and off-site signs

 Cold calling

 Door-to-door sales

 Banner ads

New media, also called digital media, consists of methods that are mostly online or involve
the Internet in some sense. These methods include:

 Search engine optimization

 Pay-per-click advertising

 Content marketing

 Social media

 Email marketing

Interactive Media:

Interactive media, also called interactive multimedia, is any computer-delivered electronic


system that allows the user to control, combine, and manipulate different types of media, such
as text, sound, video, computer graphics, and animation. Interactive media integrate computer,
memory storage, digital (binary) data, telephone, television, and other information technologies.
Their most common applications include training programs, video games, electronic
encyclopedias, and travel guides. Interactive media shift the user’s role from observer to
participant and are considered the next generation of electronic information systems.

 A personal computer (PC) system with conventional magnetic-disk memory storage


technically qualifies as a type of interactive media. More advanced interactive systems
have been in use since the development of the computer in the mid-20th century—as flight
simulators in the aerospace industry, for example. The term was popularized in the early
1990s, however, to describe PCs that incorporate high-capacity optical (laser) memory
devices and digital sound systems.

 The most common media machine consists of a PC with a digital speaker unit and a CD-
ROM (compact disc read-only memory) drive, which optically retrieves data and instructions
from a CD-ROM. Many systems also integrate a handheld tool (e.g., a control pad
or joystick) that is used to communicate with the computer. Such systems permit users to
read and rearrange sequences of text, animated images, and sound that are stored on
high-capacity CD-ROMs. Systems with CD write-once read-many (WORM) units allow
users to create and store sounds and images as well. Some PC-based media devices
integrate television and radio as well.

 Among the interactive media systems under commercial development by the mid-1990s
were cable television services with computer interfaces that enable viewers to interact
with television programs; high-speed interactive audiovisual communications systems
that rely on digital data from fiber-optic lines or digitized wireless transmissions; and virtual
reality systems that create small-scale artificial sensory environments.

Communication Technology and Multimedia Services

Advances in computing and communication, and the creation of relevant standards, have
led to the beginning of an era where people get multimedia facilities at home.

These services may include:

 Basic Television Services

 Interactive entertainment

 Digital Audio

 Video on demand

 Home shopping through e-mail

 Financial transactions using ecommerce

 Interactive single and multiuser games

 Digital multimedia libraries

 Electronic versions of newspapers, magazines etc.

Cable TV and telephone companies, dot-com companies, the publishing industry, etc. are
the main infrastructure providers for these facilities. Networking technology, along with
improved computing and compression technologies, is delivering interactive services profitably.
Companies in the entertainment, cable, telephone, and Internet industries are trying to
design a wide variety of such multimedia services.

Today, personal computers are the tools that promote collaboration. They are essential
to any multimedia workstation. Many high-speed networks are in place that allow multimedia
conferencing, or electronic conferencing. Such facilities are even available today through the
Internet. Today, we depend on the telephone to link us with others, whether through a phone
call, a group audio conference or a dial-up Internet connection. However, tomorrow it will be
computer-based links that connect us with others. A computer-based multimedia conference
allows us to exchange audio, text, image, and even video information. It also facilitates group
development of documents and other information products.

13.5 Interactive Television(iTV)

Interactive Television (iTV) is the integration of traditional television technology and data
services. It is a two-way cable system that allows users to interact with it via commands and
feedback information. A set-top box is an integral part of an interactive television system. It can
be used by the viewer to select the shows that they want to watch, view show schedules and
use advanced options like ordering products shown in ads, as well as access email and the
Internet.

Interactive television is also known simply as interactive TV.

Interactive television refers to technology where traditional TV services are combined


with data services. The major aim of interactive TV is to provide an engaging experience to the
viewer.

Interactive TV allows various forms of interaction, such as:

 Interacting with the TV set

 Interacting with the program content

 Interacting with TV-related content

 Interactive TV services

 Closed-circuit interactive television

Interactive TV is similar to converged TV services, but should not be confused with them.
Interactive TV is delivered through pay-tv set-top boxes, whereas converged TV services are
delivered using Internet connectivity and Web-based services with the help of over-the-top
boxes like Roku or gaming consoles.

Interactive TV increases engagement levels by allowing user participation and feedback.


It can also become part of a connected living room and be controlled using devices other than
the remote control, like mobile phones and tablets.

The return path is the channel that is used by viewers to send information back to the
broadcaster. This path can be established using a cable, telephone lines or any data
communications technology. The most commonly used return path is a broadband IP connection.

However, when iTV is delivered through a terrestrial aerial, there is no return path, and
hence data cannot be sent back to the broadcaster. But in this case, interactivity can be made
possible with the help of an appropriate application downloaded onto the set-top box.

Basics of Interactive TV

Besides the normal services provided by the current telephone and cable
services, Interactive TV will provide a variety of new services to homes, such as:

 video-on-demand

 home shopping, banking

 interactive single/multiple player games

 interactive entertainment

 digital multimedia libraries

 Electronic newspapers, magazines, yellow pages, etc.

User Experience

The viewer must be able to alter the viewing experience (e.g. choose which angle to
watch a football match), or return information to the broadcaster.

This “return path,” return channel or “back channel” can be by telephone, mobile SMS
(text messages), radio, digital subscriber lines (ADSL) or cable.

Cable TV viewers receive their programs via a cable, and in the integrated cable return
path enabled platforms, they use the same cable as a return path.

Satellite viewers (mostly) return information to the broadcaster via their regular telephone
lines. They are charged for this service on their regular telephone bill. An Internet connection
via ADSL, or other, data communications technology, is also being increasingly used.

Interactive TV can also be delivered via a terrestrial aerial (Digital Terrestrial TV such as
‘Freeview’ in the UK). In this case, there is no ‘return path’ as such - so data cannot be sent
back to the broadcaster (so you could not, for instance, vote on a TV show, or order a product
sample). However, interactivity is still possible as there is still the opportunity to interact with an
application which is broadcast and downloaded to the set-top box.

Increasingly the return path is becoming a broadband IP connection, and some hybrid
receivers are now capable of displaying video from either the IP connection or from traditional
tuners. Some devices are now dedicated to displaying video only from the IP channel, which
has given rise to IPTV - Internet Protocol Television. The rise of the “broadband return path”
has given new relevance to Interactive TV, as it opens up the need to interact with Video on
Demand servers, advertisers, and website operators.

How it works

Several technologies implement interactive television; together they enable two-way
communication between the viewer and the cable operator or program service. Some systems
also provide security by displaying real-time pop-up alert messages, for example on malware
detection.

There are a few delivery mechanisms available for the medium:

Telephone Network:
 Advantage: High availability, security. Good support for interactive/two-way traffic

 Disadvantage: Low bandwidth

Cable Network:
 Analog video down the wire

 Advantage: High bandwidth

 Disadvantage: Little infrastructure for long-distance, low security, harder for two-way traffic.

Telephone Company Solutions


1. The ADSL (Asymmetric Digital Subscriber Line) by Bell Atlantic:

2. Wireless Cable

 Remotely, signals are transmitted via satellites at 4 GHz; regionally, from mountain-top towers
in the 2.1-2.7 GHz microwave band, with a total of 33 analog 6 MHz channels.

 By itself this addresses the bandwidth problem, but it is not interactive

3. FTTC (Fiber To The Curb)

 Optical fiber to each residential neighborhood, terminating in ONU (Optical Network Unit).

 Each ONU supports up to 16 copper local loops that can run full-duplex T1 or T2 for
MPEG-1 and MPEG-2, respectively.

 FTTH (Fiber To The Home)

 BERLU (Broadband Enhanced Remote Line Unit) - SONET-based VOD developed by


GTE in Cerritos, California

 High capacity broadband switches - multiples of 51.84 Mbps

Cable Company Solutions

HFC (Hybrid fiber coax)

o Analog signal. Digital transmission is via QAM (quadrature amplitude modulation)



o Backbone is digital network (SONET with ATM)

o forward path: 50-750 MHz, reverse path: 5-30 MHz

 Cable Modem

o Capable of providing 10-100 Mbps, e.g. Motorola CyberSURFR - 10 Mbps per user
downstream and 768 kbps return upstream

 “500-Channel” Scenario:

o 70 analog 6-MHz channels (total of 450 MHz), and

o 430-plus digital channels for compressed MPEG-2 movies

(Transmitted through 50 analog 6-MHz channels, each channel is capable of sending


eight to ten 3.35 Mbps MPEG-2 movies via QAM.)
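
A quick back-of-envelope check of this arithmetic, sketched below. The QAM payload figures
of roughly 27 Mbps for 64-QAM and 38.8 Mbps for 256-QAM in a 6 MHz channel are common
ballpark values, not figures from the text:

```python
MOVIE_MBPS = 3.35   # MPEG-2 movie bitrate quoted above

# Assumed usable payloads of a 6 MHz cable channel (illustrative figures).
for qam, payload in [("64-QAM", 27.0), ("256-QAM", 38.8)]:
    print(f"{qam}: {int(payload // MOVIE_MBPS)} movies per 6 MHz channel")
# 64-QAM: 8, 256-QAM: 11 -- consistent with "eight to ten" above

digital = 50 * 8    # 50 digital channels x 8 movies each (lower bound)
print("Total program channels:", 70 + digital)   # 70 analog + 400 = 470
```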

Forms of Interaction

The term “interactive television” is used to refer to a variety of rather different kinds of
interactivity (both as to usage and as to technology), and this can lead to considerable
misunderstanding. At least three very different levels are important (see also the instructional
video literature which has described levels of interactivity in computer-based instruction which
will look very much like tomorrow’s interactive television):

The forms of interaction are of three types: interactivity with a TV set, interactivity with TV
program content, and interactivity with TV-related content.

Interactivity with a TV set

The simplest, Interactivity with a TV set is already very common, starting with the use
of the remote control to enable channel surfing behaviors, and evolving to include video-on-
demand, VCR-like pause, rewind, and fast forward, and DVRs, commercial skipping and the
like. It does not change any content or its inherent linearity, only how users control the viewing
of that content. DVRs allow users to time shift content in a way that is impractical with VHS.
Though this form of interactive TV is not insignificant, critics claim that saying that using a
remote control to turn TV sets on and off makes television interactive is like saying turning the
pages of a book makes the book interactive.

In the not too distant future, the question of what counts as real interaction with the TV will
become difficult to answer. Panasonic has already implemented face recognition technology in
its prototype Life Wall. The Life Wall is literally a wall in your house that doubles as a screen.
Panasonic uses its face recognition technology to follow the viewer around the room, adjusting
its screen size according to the viewer’s distance from the wall. Its goal is to give the viewer the
best seat in the house, regardless of location. The concept was unveiled at the Consumer
Electronics Show in 2008. Its release date is unknown, but it can be assumed that technology
like this will not remain hidden for long.

Interactivity with TV program content

In its deepest sense, interactivity with normal TV program content is what “interactive TV”
really means, but it is also the most challenging form to produce. This is the idea that the program,
itself, might change based on viewer input. Advanced forms, which still have uncertain prospect
for becoming mainstream, include dramas where viewers get to choose or influence plot details
and endings.

 As an example, in Accidental Lovers viewers can send mobile text messages to the
broadcast and the plot transforms on the basis of the keywords picked from the messages.

 Global Television Network offers a multi-monitor interactive game for Big Brother 8 (US),
“In The House”, which allows viewers to predict who will win each competition, who’s
going home, as well as answering trivia questions and instant recall challenges throughout
the live show. Viewers login to the Global website to play, with no downloads required.

 Another example of interactive content is the Hugo game show on television, where
viewers called the production studio and were allowed by studio personnel to control the game
character in real time using their telephone buttons, similar to The Price Is Right.

 Another example is the Click vision Interactive Perception Panel used on news programmes
in Britain, a kind of instant clap-o-meter run over the telephone.

Commercial broadcasters and other content providers serving the US market are
constrained from adopting advanced interactive technologies because they must serve the
desires of their customers, earn a level of return on investment for their investors, and are
dependent on the penetration of interactive technology into viewers’ homes. They are also
constrained by many other factors, such as:

 requirements for backward compatibility of TV content formats, form factors and Customer
Premises Equipment (CPE)

 the ‘cable monopoly’ laws that are in force in many communities served by cable TV
operators

 consumer acceptance of the pricing structure for new TV-delivered services. Over-the-air
(broadcast) TV is free in the US, free of taxes or usage fees.

 proprietary coding of set top boxes by cable operators and box manufacturers

 the ability to implement ‘return path’ interaction in rural areas that have low, or no technology
infrastructure

 the competition from Internet-based content and service providers for the consumers’
attention and budget

 and many other technical and business roadblocks

Interactivity with TV-related content

The least understood, interactivity with TV-related content, may have the most promise to
alter how we watch TV over the next decade. Examples include getting more information about
what is on the TV, weather, sports, movies, news, or the like.

Similarly (and most likely to pay the bills), there is getting more information about what is being
advertised, and the ability to buy it (once futuristic innovators make it possible); this is called
“t-commerce” (short for “television commerce”). Partial steps in this direction are already becoming a mass
phenomenon, as Web sites and mobile phone services coordinate with TV programs (note:
this type of interactive TV is currently being called “participation TV” and GSN and TBS are
proponents of it). This kind of multitasking is already happening on a large scale, but there is
currently little or no automated support for relating that secondary interaction to what is on the
TV compared to other forms of interactive TV. In the coming months and years, there will be no
need to have both a computer and a TV set for interactive television as the interactive content
will be built into the system via the next generation of set-top boxes. However, set-top-boxes
have yet to get a strong foothold in American households as price (pay per service pricing
model) and lack of interactive content have failed to justify their cost.

Many think of interactive TV primarily in terms of “one-screen” forms that involve interaction
on the TV screen, using the remote control, but there is another significant form of interactive
TV that makes use of Two-Screen Solutions, such as NanoGaming. In this case, the second
screen is typically a PC (personal computer) connected to a Web site application. Web
applications may be synchronized with the TV broadcast, or be regular websites that provide
supplementary content to the live broadcast, either in the form of information, or as interactive
game or program. Some two-screen applications allow for interaction from a mobile device
(phone or PDA), that run “in synch” with the show.

Such services are sometimes called “Enhanced TV,” but this term is in decline, being
seen as anachronistic and misused occasionally. (Note: “Enhanced TV” originated in the mid-
late 1990s as a term that some hoped would replace the umbrella term of “interactive TV” due
to the negative associations “interactive TV” carried because of the way companies and the
news media over-hyped its potential in the early 90’s.)

Notable Two-Screen Solutions have been offered for specific popular programs by many
US broadcast TV networks. Today, two-screen interactive TV is called either 2-screen (for
short) or “Synchronized TV” and is widely deployed around the US by national broadcasters
with the help of technology offerings from certain companies. The first such application was
Chat Television™ (ChatTV.com), originally developed in 1996. The system synchronized online
services with television broadcasts, grouping users by time-zone and program so that all real-
time viewers could participate in a chat or interactive gathering during the show’s airing.

One-screen interactive TV generally requires special support in the set-top box, but Two-
Screen Solutions, synchronized interactive TV applications generally do not, relying instead on
Internet or mobile phone servers to coordinate with the TV, and are mostly free to the user.
Developments from 2006 onwards indicate that the mobile phone can be used for seamless
authentication through Bluetooth, or for explicit authentication through Near Field Communication.
Through such an authentication it will be possible to provide personalized services to the mobile
phone.

Interactive TV services
Notable interactive TV services are:

 Active Video (formerly known as ICTV) - Pioneers in interactive TV and creators of


CloudTV™: A cloud-based interactive TV platform built on current web and television
standards. The network-centric approach provides for the bulk of application and video
processing to be done in the cloud, and delivers a standard MPEG stream to virtually any
digital set-top box, web-connected TV or media device.

 T-commerce - A commerce transaction carried out through the set-top box return path connection.

 BBC Red Button

 ATVEF - ‘Advanced Television Enhancement Forum’ is a group of companies that was set
up to create HTML-based TV products and services. ATVEF’s work has resulted in an
Enhanced Content Specification which makes it possible for developers to create their
content once and have it display properly on any compliant receiver.

 MSN TV - A former service originally introduced as WebTV. It supplied computer-less


Internet access. It required a set-top box that sold for $100 to $200, with a monthly access
fee. The service was discontinued in 2013, although customer service remained available
until 2014.

 Philips Net TV - solution to view Internet content designed for TV; directly integrated
inside the TV set. No extra subscription costs or hardware costs involved.

 An interactive TV purchasing system was introduced in 1994 in France. The system used
a regular TV set connected to a regular antenna, with the Internet for feedback. A demo
showed the possibility of immediate, interactive purchasing from the displayed content.

 QUBE - A very early example of this concept, it was introduced experimentally by Warner
Cable (later Time Warner Cable, now part of Charter Spectrum) in Columbus, Ohio in
1977. Its most notable feature was five buttons that could allow the viewers to, among
other things, participate in interactive game shows, and answer survey questions. While
successful, going on to expand to a few other cities, the service eventually proved to be
too expensive to run, and was discontinued by 1984, although the special boxes would
continue to be serviced well into the 1990s.

13.6 Summary
 High-definition television (HDTV) is a digital television broadcasting system with
greater resolution than traditional television systems (NTSC, SECAM, PAL).

 HDTV is digitally broadcast because digital television (DTV) requires less bandwidth if
sufficient video compression is used.

 There are three key differences between HDTV and what’s become known as standard
definition TV i.e. regular NTSC, PAL or SECAM.

 New media, also called digital media, consists of methods that are mostly online or involve
the Internet in some sense.

 Interactive media, also called interactive multimedia, any computer-delivered electronic


system that allows the user to control, combine, and manipulate different types of media,
such as text, sound, video, computer graphics, and animation.

 Interactive Television (iTV) is the integration of traditional television technology and data
services.

13.7 Check Your Answer


1. b. Dual Layer

2. High Definition Television

3. a. interactive multimedia

4. anonymous

5. a. True

6. a. True

7. b. Computers, T.V’s and Websites

13.8 Model Questions


1. Define Content Based Retrieval (CBR).

2. What is interactive media?

3. Define Telephone and cable network.

4. Define ADSL.

5. List out the multimedia services.

6. Write shorts notes on High Definition Television (HDTV).

7. Discuss in detail about Development and Future of Multimedia Technology.

8. Explain digital communications and new media in detail.

9. Explain in detail about interactive TV.

10. Write short notes on interactive TV Services.



LESSON 14
MULTIMEDIA TECHNOLOGIES

Structure
14.1 Introduction

14.2 Learning Objectives

14.3 Digital Broadcasting

14.4 Digital Radio

14.5 Multimedia Conferencing

14.6 Summary

14.7 Check Your Answer

14.8 Model Questions

14. 1 Introduction

The technology used in televisions has improved dramatically. With the introduction of
digital broadcasting, users now have a wide array of options when it comes to methods of
receiving television signals. It also allows you to play or stream videos in different resolutions.
Traditional televisions receive data through analog waveforms to assign radio frequencies or
broadcasts to television channels. However, in digital broadcast, digital data is used.

14.2 Learning Objectives


At the end of this lesson the reader will be able to:

 Understand the concepts of broadcasting, the transmission of audio or video content


using radio-frequency waves.

 Know the practices of using digital signals rather than analogue signals for broadcasting
over radio frequency bands.

 Learn benefits of digital radio, higher quality sound than current AM and FM radio
broadcasts to fixed, portable and mobile receivers.

 Understand the different types of multimedia conferencing and online multimedia tools
etc.

14.3 Digital Broadcasting

Digital broadcasting is the practice of using digital signals rather than analogue signals
for broadcasting over radio frequency bands. Digital Television (DTV) broadcasting (especially
satellite television) is widespread. Content providers can provide more services or a higher-
quality signal than was previously available.

Functions of digital broadcast

Digital Television is more advanced than the older analog technology. Unlike analog
television, which uses a continuously variable signal, a digital broadcast converts the
programming into a stream of binary on/off bits—sequences of 0s and 1s. Over-the-air digital
signals don’t weaken over distance the way analog signals do.

Digital channels

Digital Television is the transmission of television signals, including the sound channel,
using digital encoding, in contrast to the earlier television technology, analog television, in which
the video and audio are carried by analog signals.

Digitization Process

A digitization process is used to convert analog data, such as media, sound, image, and
text, into a numerical representation through two discrete steps:

(i) Sampling

(ii) Quantization

(i) Sampling

In the first step, data is sampled at regular intervals, such as the grid of pixels used to
represent a digital image. The frequency of sampling is referred to as the resolution of the image.
Sampling turns continuous (analog) data into discrete (digital) data, i.e. data occurring in
distinct units: people, pages of a book, pixels.

In the second step, each sample is quantized, i.e. assigned a numerical value drawn from a
defined range (such as 0-255 in the case of an 8-bit grayscale image).

(ii) Quantization

Any image or audio source, such as color or sound, can be described as a signal and
represented mathematically as a waveform such as y = sin(x). This continuous representation
becomes digital when sampled by a computer. The digital representation changes depending on
the selected resolution: the higher the resolution, the more accurately the digital representation
will measure the signal.
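
A minimal sketch of these two steps, applied to one second of the y = sin(x) example above;
the sample rate and 8-bit range are illustrative values, not figures from the text:

```python
import math

SAMPLE_RATE = 8     # samples per second (the "resolution"; illustrative)
LEVELS = 256        # 8-bit quantization: numerical values 0..255

samples = []
for n in range(SAMPLE_RATE):
    t = n / SAMPLE_RATE                    # sampling: pick discrete instants
    y = math.sin(2 * math.pi * t)          # continuous signal value in [-1, 1]
    q = round((y + 1) / 2 * (LEVELS - 1))  # quantization: map to 0..255
    samples.append(q)

print(samples)   # [128, 218, 255, 218, 128, 37, 0, 37]
```

Raising SAMPLE_RATE (the resolution) or LEVELS makes the list of numbers track the original
waveform more closely, which is exactly the trade-off described above.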

Sampling refers to considering the image only at a finite number of points and quantization
refers to the representation of the color value (in RGB format) at each sampled point using a
finite number of bits. In this case, each image sample is called a pixel and every pixel has one
and only one color value. Any typical desktop image scanner performs both sampling and quantization.
Usually, in scanning a printed image, the first steps are about the sampling area and rate and
the later steps deal with the quantization parameters, such as resolution and file size.

Digitization should not be seen only as a technical process, because it also has an important
semiological and cultural significance. While some old media such as photography and sculpture
are truly continuous, most involve a combination of continuous and discrete coding. One example
is motion picture film: each frame is a continuous photograph, but time is broken into a number
of samples (frames).

Video goes one step further by sampling the frame along the vertical dimension (scan
lines). Similarly, a photograph printed using a halftone process combines discrete and continuous
representations. Such photographs consist of a number of orderly dots (i.e., samples);
however, the diameters and areas of the dots vary continuously. As this last example demonstrates,
while old media contain level(s) of discrete representation, the samples were never quantified.
This quantification of samples is the crucial step accomplished by digitization.

Digital TV

Digital Television (DTV) is the transmission of television signals using digital rather than
conventional analog methods.

Digital Television is not the same thing as HDTV (High-Definition Television). HDTV
describes a new television format (including a new aspect ratio and pixel density), but not how
the format will be transmitted. Digital Television can be either standard or high definition.

Digital TV Standards
 Digital Video Broadcasting (DVB) uses coded orthogonal frequency-division multiplexing
(OFDM) modulation and supports hierarchical transmission. This standard has been
adopted in Europe, Africa, Asia, Australia, total about 60 countries.

 Advanced Television System Committee (ATSC) uses eight-level vestigial sideband (8VSB)
for terrestrial broadcasting. This standard has been adopted by 6 countries: United States,
Canada, Mexico, South Korea, Dominican Republic and Honduras.

 Integrated Services Digital Broadcasting (ISDB) is a system designed to provide good


reception to fixed receivers and also portable or mobile receivers. It utilizes OFDM and
two-dimensional interleaving. It supports hierarchical transmission of up to three layers
and uses video and Advanced Audio Coding.

This standard has been adopted in Japan and the Philippines. ISDB-T International is an
adaptation of this standard using H.264/MPEG-4 AVC that has been adopted in most of South
America and is also being embraced by Portuguese-speaking African countries.

 Digital Terrestrial Multimedia Broadcasting (DTMB) adopts Time-Domain Synchronous


(TDS) OFDM technology with a pseudo-random signal frame to serve as the guard interval
(GI) of the OFDM block and the training symbol.

 Digital Multimedia Broadcasting (DMB) is a digital radio transmission technology developed


in South Korea as part of the national IT project for sending multimedia such as TV, radio
and data casting to mobile devices such as mobile phones, laptops and GPS navigation
systems.

Advantages of Digital Broadcasting

1. Better Bandwidth

One of the main advantages of digital signals is that they use bandwidth more efficiently
than analog transmission. They also deliver better image quality; in fact, high-definition televisions
can only display images with the use of digital data. The digital signals are divided into five
signal patterns, which can accommodate various aspect ratios. This in turn improves the quality
of the images displayed on your television.

2. Automatic Tuning

Digital signals can be tuned automatically, and the suitable resolution is auto-selected for
your digital television. This in turn allows your television to display clearer and more detailed
images. It also gives you the assurance that your television will work regardless of its bandwidth
capability.

3. Multiple Reception Outlets

Digital broadcasting also allows your television to receive television signals through various
methods. One of the most common methods used is through a cable connection, which is also
known as digital cable. It also allows televisions to receive digital signals with the use of
satellite dish.

Because of the advancements in technology, digital broadcasts can now be run through
DSL connections. This improvement also makes it possible for mobile phones to receive
digital signals. This also allows you to set up a computer to television system, which is great for
entertainment.

Some systems have USB ports that can be connected to a telephone line, allowing you
to contact your service provider, as well as do other electronic transactions.

4. Capability to Record Programs

It also allows you to record television programs, so that you can view them at your own
convenient time. With this feature, you can have the assurance that you will never miss an
episode of favorite TV series.

In case you want to have a great experience watching television programs at your home
during your leisure time, it is advisable that you incorporate digital broadcasting system in your
home. It offers many benefits. It improves the quality of images that your television displays.
The digital format is also compatible with any resolutions, giving you the assurance that your
broadcasting will work regardless of the size of your television screen.

Disadvantages of Digital Broadcasting

1. Making the Conversion

The United States made the switch to digital television broadcasting in 2009, which meant
that individuals using standard analog televisions had to convert. This meant either purchasing
an entirely new digital television set or—the less expensive option—purchasing an external
converter box, which you can attach to an analog television much like a cable box.

Although this was likely an inconvenience for many, the government offered coupons
prior to the conversion to help cover the costs of these converter boxes. According to nhk.or.jp,
broadcasters also had to adjust to the conversion, and needed to invest in new production,
transmission and operating equipment as well as new devices for video and audio encoding.

2. Scanning Channels

According to kmos.ucmo.edu, when you first set up a digital converter box or turn on
your digital TV, you will not have instant access to channels as with an analog system. This is
due to a delay between when your digital device receives a transmission and when it can
display it. So before you can start watching, your television needs to complete a channel scan
or memorization. This will take approximately 30 to 60 seconds per channel.

3. The Cliff Effect

While analog broadcasting provides a continuous although distorted feed when


transmission is interfered with, digital broadcasting will suddenly cut out if not enough information
is received.

4. Finding New Frequencies

A long-term problem that will occur with digital broadcasting is that more and more
frequencies will eventually be needed to make room for more digital programming. According
to nhk.or.jp, this means that the frequencies usually reserved for analog broadcasting, such as
those used by traditional radio stations, will eventually need to be appropriated. Otherwise,
digital TV will only be able to broadcast a limited amount of programming.

14.4 Digital Radio

Digital radio is the transmission and reception of sound processed into patterns of numbers,
or “digits” – hence the term “digital radio.” In contrast, traditional analog radios process sounds
into patterns of electrical signals that resemble sound waves.

Digital radio reception is more resistant to interference and eliminates many imperfections
of analog radio transmission and reception. There may be some interference to digital radio
signals, however, in areas that are distant from a station’s transmitter. FM digital radio can
provide clear sound comparable in quality to CDs, and AM digital radio can provide sound
quality equivalent to that of standard analog FM.

Digital Radio Function

The radio station creates a digital signal at the same time they create the analog signal.
The digital signal is compressed and then broadcast along with the analog signal. The nice
thing about high-definition receivers is that they can filter out the interference caused by
signals reflecting off buildings.

Types of radio and radio broadcasting

The term broadcasting means the transmission of audio or video content using radio-
frequency waves. With the recent advancements in digital technology, radio broadcasting now
applies to many different types of content distribution.

Analog Radio

Analog radio consists of two main types: AM (amplitude modulation) and FM (frequency
modulation). An analog radio station frequently feeds only one transmitter and is referred to as an
AM station or an FM station in the U.S. But it is quite possible for a station to feed both kinds of
transmitters in the same area, or to feed more than one transmitter covering different areas. In
either case, AM or FM refers only to a particular transmitter and not to the entire station. The latter
arrangement is becoming widespread throughout the U.S.

AM radio uses the long-wave band in some nations. This long-wave band has frequencies
considerably lower than the FM band and slightly different transmission characteristics, better
suited to broadcasting over long distances. Both AM and FM are used to broadcast audio signals
to home, car, and portable receivers.

Digital Radio Types



 Conventional FM: As previously mentioned, conventional FM is a popular technology in


analog radio. Almost every major manufacturer in the world supports some form of
conventional FM technology.

 MPT1327: Perhaps the most widely used analog trunking technology today is called MPT
1327. It is named after the UK Ministry of Post and Telegraph that invented this particular
open standard. A number of different manufacturers support this trunking technology.

 Tetra: As the world becomes more digital, a number of digital radio technologies have
emerged. One of these is Tetra, developed in Europe in the late eighties. It’s very similar
to GSM used in modern digital cellphones. Tetra is a 4-slot TDMA technology that works
in 25 kHz (wideband) channel spacing. It’s very popular amongst large public safety
agencies, is used in airports, and has strong data applications. Tetra operates in
specific bands: 380 to 420 MHz and in the 700/800 MHz system.

 P25: Another major open standard for digital radio technology is APCO Project 25 or P25
for short which was developed specifically for public safety agencies in the United States.
P25 Phase 1 differs from Tetra by being an FDMA technology and also supporting
conventional, trunked, and simulcast operation (or a combination of all three of these).

P25 can be used in any licensed frequency that a public safety agency has whether it be
VHF, UHF, 700, 800, even 900 MHz. It can be employed by non-public safety users as
well. P25 actually comes in two phases. Phase 1 is an FDMA technology operating in the
12.5 kHz channel spacing. Phase 2 is a more recent development and is only available in
trunked mode. It is also a TDMA technology and offers two time slots in a single 12.5 kHz channel,
giving the equivalent of a 6.25 kHz channel per slot.

 DMR: One of the newest open radio standards is called digital mobile radio or DMR for
short. It’s a TDMA technology which uses two time slots and operates in the 12.5 kHz
channel spacing, available in any licensed frequency. Tier 2 DMR offers conventional
operation and Tier 3 DMR offers trunked operation. DMR is increasingly used by businesses
such as mining, utilities and transport throughout the world.

 NXDN: NXDN is an FDMA technology, similar to DMR, which operates in a 6.25 kHz channel
spacing. It’s not limited to any particular frequency band and it also supports conventional
and trunked operation.

Digital Radio Standards

Four standards for digital radio systems exist worldwide: IBOC (In-Band On-Channel),
DAB (Digital Audio Broadcasting), ISDB-TSB (Integrated Services Digital Broadcasting-
Terrestrial Sound Broadcasting), and DRM (Digital Radio Mondiale). All are different from each
other in several respects.

 IBOC

IBOC was developed, and continues to be managed, by a company named iBiquity Digital
Corporation under the trademarked name HD Radio. Introduced for regular use in 2003, it is now
in frequent use in the U.S. More than 2,000 U.S. AM and FM stations are using IBOC digital radio
services today. The majority of U.S. HD Radio stations use the FM band, and most of those
now offer one or more multicast services. Today, IBOC stations broadcast two versions of their
primary content, analog and digital, so they serve both legacy and new receivers using the
same broadcast channel.

 DAB

Also known as Eureka 147 in the U.S. and as Digital Radio in the U.K., DAB comes with
a number of advantages similar to IBOC. But it is fundamentally different in its design. Unlike
IBOC, DAB cannot share a channel with an analog transmitter, so it needs a new, dedicated
band. Each DAB broadcast also needs much more bandwidth, as it consists of multi-program services
(typically 6 to 10, depending on quality and the amount of data carried). This makes it unusable
by a typical local radio station. It is generally implemented with the cooperation of several
broadcasters, or by a third-party aggregator that acts as a service operator for broadcasters.

Recently, improved versions of DAB, known as DAB+ and DAB-IP, have been developed.
These developments increase the range of the DAB signal. Today, almost 40 countries worldwide
have DAB services on air (mostly in Europe), and others are thinking about the adoption of it or
one of its variants.

 ISDB-TSB

Specifically developed for Japan in 2003, ISDB-TSB is the digital radio system used for
multi-program services. It currently uses transmission frequencies in the VHF band. A
unique feature of ISDB-TSB is that the digital radio channels are intermingled with ISDB digital
TV channels in the same broadcast.

 DRM

DRM is a system developed primarily as a direct substitute for AM international


broadcasting in the short-wave band. DRM uses the same channel plan as the analog services,
and, with some limitations and changes to the analog service, a DRM broadcast can share the
same channel with an analog station. When used with existing channel allocations, DRM is a
single-audio-channel system. An enhanced version, DRM+, was introduced in 2007 for the VHF
band. This improvement provides two-channel and surround-sound capability.

 Sirius XM: Sirius XM is the combination of two similar but competing satellite radio
services: XM Satellite Radio and Sirius Satellite Radio. XM and Sirius, which still operate
separately at the retail level, are subscription services. They broadcast more than 150
digital audio channels intended for reception by car, portable, and fixed receivers. These
provide coverage of the complete continental United States, much of Canada, and parts
of Mexico.

Internet Radio

Many radio stations are now using online streaming audio services to provide a simulcast
of their over-the-air signals to web listeners. A broadcaster may also offer additional
online audio streams that are re-purposed, time-shifted, or completely different from its on-
air services. Because there is no scarcity of bandwidth or obligation to license online services,
broadcasters may offer as many services as they wish. Unlike over-the-air broadcasting,
web distribution is delivered to end-users by third-party telecommunication providers on a
nationwide or worldwide basis.

Transmitting Internet Radio

Traditional radio stations simulcast their programs using one of the compatible audio
formats that internet radio uses such as MP3, OGG, WMA, RA, AAC Plus and others. Most up-
to-date software media players can play streaming audio using these popular formats.

Traditional radio stations are limited by the power of their station’s transmitter and the
available broadcast options. They might be heard for 100 miles, but not much further, and they
may have to share the airwaves with other local radio stations.

Internet radio stations don’t have these limitations, so you can listen to any internet radio
station anywhere you can get online. In addition, internet radio stations are not limited to audio
transmissions. They have the option to share graphics, photos, and links with their listeners
and to form chat rooms or message boards.
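
As a rough illustration of how a player consumes such a stream, the sketch below reads
fixed-size chunks from an HTTP audio stream using the third-party requests library; the URL
and the decoder stub are hypothetical placeholders, not a real station or codec:

```python
import requests  # third-party HTTP library (pip install requests)

STREAM_URL = "http://example.com/live.mp3"   # hypothetical stream address

def feed_to_decoder(chunk: bytes) -> None:
    """Placeholder for handing compressed audio to an MP3/AAC decoder."""
    print(f"received {len(chunk)} bytes")

# Open the stream without downloading it all at once, then read it in
# small chunks the way a player feeds its audio decoder.
with requests.get(STREAM_URL, stream=True, timeout=10) as response:
    response.raise_for_status()
    for chunk in response.iter_content(chunk_size=4096):
        feed_to_decoder(chunk)
```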

Benefits of Digital Radio

Digital radio is able to offer generally higher quality sound than current AM and FM radio
broadcasts to fixed, portable and mobile receivers. The sound quality depends on the bandwidth
and the data rates used.

Listeners benefit from an increased variety of radio programs because each broadcaster
is permitted to transmit multiple program streams. This means that broadcasters may provide
numerous new digital radio stations instead of a single analog radio station.

The technology also enables a number of additional audio, image and text services,
including:

 Program information such as the station name, song title and artist’s name

 Traffic information, news and weather

 Additional services such as paging and global satellite positioning

 The ability to pause and rewind services.



Check your Progress


1. ___________ audio/video refers to the broadcasting of radio and TV programs through
the Internet.

a) Interactive

b) Streaming live

c) Streaming stored

d) None of the above

2. We can divide audio and video services into _______ broad categories.

3. Which of the following is NOT a common use of Teleconferencing?

a) Audio Conferencing

b) Video Conferencing

c) Computer Conferencing

d) Virtual Reality Conferencing

4. Say True or False

Broadcast leads usually withhold much important information because listeners do not
hear the first two or three words of a story. a) True b) False

5. Say True or False

Public radio stations typically schedule shorter and less frequent news programs than do
commercial radio stations. a) True b) False

6. Which corrects the sampling time problem in a digital system?

a) Interpolator

b) Decimator

c) Equalizer

d) Filter

7. Which of these are examples of digital communication?

a) ISDN

b) Modems

c) Classical telephony

d) All of the mentioned

8. DMB acronym _______________________

14.5 Multimedia Conferencing

Conferencing supports collaborative computing and is also called synchronous tele-


collaboration. Conferencing is a management service that controls the communication among
multiple users via multiple media, such as video and audio, to achieve simultaneous face-to-
face communication. More precisely, video and audio have the following purposes in a tele-
conferencing system:

 Video is used in technical discussions to display view-graphs and to indicate how many
users are still physically present at a conference. For visual support, workstations, PCs or
video walls can be used.

For conferences with more than three or four participants, the screen resources on a PC
or workstation run out quickly, particularly if other applications, such as shared editors or drawing
spaces, are used. Hence, mechanisms which quickly resize individual images should be used.

Conferencing services control a conference (i.e., a collection of shared state information


such as who is participating in the conference, conference name, start of the conference,
policies associated with the conference, etc.). Conference control includes several functions:

 Establishing a conference, where the conference participants agree upon a common


state, such as identity of a chairman (moderator), access rights (floor control) and audio
encoding. Conference systems may perform registration, admission, and negotiation
services during the conference establishment phase, but they must be flexible and allow
participants to join and leave individual media sessions or the whole conference. The
flexibility depends on the control model.

 Closing a conference.

 Adding new users and removing users who leave the conference.

Conference states can be stored (located) either on a central machine (centralized control),
where a central application acts as the repository for all information related to the conference,
or in a distributed fashion.
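
A minimal sketch of centralized conference control as described above: one object acts as the
repository for the shared state (conference name, moderator, negotiated audio encoding,
participants), and join/leave/close update it. All names and the codec value are illustrative:

```python
from dataclasses import dataclass, field

@dataclass
class Conference:
    name: str
    moderator: str
    audio_codec: str = "G.711"                 # example negotiated encoding
    participants: set = field(default_factory=set)

    def join(self, user: str) -> None:
        # Real systems would perform registration/admission/negotiation here.
        self.participants.add(user)

    def leave(self, user: str) -> None:
        self.participants.discard(user)

    def close(self) -> None:
        self.participants.clear()

conf = Conference(name="design-review", moderator="alice")
for user in ("alice", "bob", "carol"):
    conf.join(user)
conf.leave("bob")
print(sorted(conf.participants))   # ['alice', 'carol']
```

In a distributed design, the same state would instead be replicated at every site and kept
consistent by the control protocol.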

Multimedia Conferencing

A multimedia conferencing system is an on-line real-time system where the multimedia


information is generated, transmitted, and presented in real-time. As the number of participants
and locations of the conference increase, the resource demands will also increase. The system
primarily deals with creating digitized video, digitized voice, data, images, and graphics and
transmitting such information across a communication network so that it reaches the
destination(s) in real-time.

The challenge of multimedia for video conferencing

(i) Support for continuous media

The use of continuous media, such as voice, video, in distributed systems implies the
need for continuous data transfers over relatively long periods of time; for example, play out of
video from a remote conferencing camera. Furthermore, the timeliness of such media
transmissions must be maintained as an ongoing commitment for the duration of the continuous
media presentation.

(ii) Real-time synchronization

Synchronization refers to the maintenance of real-time constraints across the continuous


media connection. Usually in video conferencing, more than one media type needs to be
maintained. Examples of inter-media constraints include lip synchronization between audio
and video channels or synchronization of text subtitles and video sequences. Synchronization
mechanisms must operate correctly in a distributed environment, potentially involving both
local and wide area networks.

(iii) Multiparty communications

There are several aspects to group support for multimedia. Firstly, it is necessary to
provide a programming model for multiparty communications (supporting both discrete and
continuous media types). Facilities should also be provided to enable management of such
groups: for example, providing support for joining and leaving of groups at run-time. Secondly,
it is important to ensure that the underlying system provides the right level of support for such
communications, particularly for continuous media types. Thirdly, with multimedia, it is necessary
to cater for multicast communications where receivers may require different qualities of service.
This adds some complexity to quality of service management. Fourthly, it is important to be
able to support a variety of policies for ordering and reliability of data delivery.

Types of Multi-Point Conferences


1. Meet-Me Conference

2. Ad-Hoc Conference

3. Interactive-Broadcast Conference

1. Meet-Me Conference

Meet-Me Conference

 Conference is pre-arranged

 Time and address of bridge are known to participants

 Participants call the bridge to enter the conference

 Bridge may also call out to participants

 Central conference bridge is a resource owned by a network or service provider

 Mixes and distributes audio and video signals



 Examples: Telephone conference services, Skype conference call

Multi-Point Control Unit (MCU)


 Traditional name for conference bridges in telephone/ISDN networks

 Mixes the voice signals coming from participants

 One consistent joint signal distributed to all partners

 Partner may be silenced until sound level exceeds some threshold

 Determines the video signal to be sent to the participants (in the case of an audio/video conference)

 Video source of the participant with the highest voice energy is chosen (see the sketch below)
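The sketch below illustrates, under simplifying assumptions, what these bullets describe: each
participant contributes a frame of PCM audio samples, sources below a silence threshold are
dropped, the rest are summed into one joint signal, and the loudest participant is chosen as the
video source. All numbers are illustrative:

```python
SILENCE_THRESHOLD = 500   # mean absolute sample level (illustrative)

def energy(frame):
    return sum(abs(s) for s in frame) / len(frame)

def mcu_mix(frames_by_user):
    # Silence participants whose sound level is below the threshold.
    active = {u: f for u, f in frames_by_user.items()
              if energy(f) >= SILENCE_THRESHOLD}
    length = len(next(iter(frames_by_user.values())))
    # One consistent joint signal distributed to all partners.
    mixed = [sum(f[i] for f in active.values()) for i in range(length)]
    # Video follows the participant with the highest voice energy.
    loudest = max(active, key=lambda u: energy(active[u])) if active else None
    return mixed, loudest

frames = {"alice": [900, -800, 700],   # speaking loudly
          "bob":   [10, -20, 5],       # background noise -> silenced
          "carol": [400, 600, -500]}
audio, video_source = mcu_mix(frames)
print(audio, video_source)   # [1300, -200, 200] alice
```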

2. Ad-Hoc Conference

Ad-Hoc Conference

 Conference starts as a point-to-point conversation

 Grows to a multi-point conference when participants invite other people by calling their


terminals

 Conference is usually not pre-arranged

 Example: Three-way call in ISDN/private telephone exchanges

– A talks to B

– A puts B on hold

– A calls C

– A joins B and C into a three-way call



 User originating the conference call must be able to provide the necessary bridge
functionality

– Bridge outside the public network, e.g. in a private network

– Capacity limited (e.g. in number of participants)

3. Interactive-Broadcast Conference

 Asymmetric conference

– Master distributes media and signaling to many terminals

– Terminals have a much simpler back channel to the master (e.g. just signaling or a plain
text stream)

 Scales to thousands of terminals

 Typical applications: Tele-teaching, business TV

Multimedia conferencing lets people interact with others across the world.


 It uses certain tools like cameras, computers and internet.

 Adobe Connect is one of the tools used to broadcast events interactively to the web.

 Polycom video conferencing system supports meetings with peers all over the world.

 The interactive conferencing includes certain gadgets like audio speakers, LCD Projectors.

 Adobe Connect needs to be configured to access the multimedia.

 Polycom is one of the video conferencing tools.



You can interact online with conferencing tools such as

 BEING THERE

 CU-SEEME from White Pine

 LIVEMEDIA from Netscape

 NETMEETING from Microsoft

 3-D worlds

 Virtual Reality Modeling Language (VRML)

 Macromedia Director/Shockwave Player

 Apple QuickTime

 Java

14.6 Summary
 Digital broadcasting is a way of transmitting audio and video information through an
encoded signal that is comprised of 1s and 0s.

 It represents the latest in mainstream television broadcasting, having replaced the analog
system.

 Digital radio broadcasting is a method of assembling, broadcasting and receiving


communications services using digital technology.

 Digital radio broadcasting is significantly more spectrum efficient than analog FM radio.

 Video Conferencing improves productivity and reduces travel time.

 Individuals using video conferencing can share data with each other and transfer
information such as photo or documents.

14.7 Check Your Answer


1. b) Streaming Live

2. Three

3. d) Virtual Reality Conferencing

4. a) True

5. b) False

6. a) Interpolator

7. d) All of the mentioned

8. Digital Multimedia Broadcasting

14.8 Model Questions


1. Define Digital Broadcasting.

2. What is the sampling process?

3. Define Sampling and Quantization.

4. Describe briefly about the broadcast video standards.

5. Write short notes on Digital Radio.

6. Define Conferencing and its types.

7. Explain in detail about Multimedia Conferencing with an example.



LESSON 15
STAGES OF MULTIMEDIA APPLICATION
DEVELOPMENT

Structure
15.1 Introduction

15.2 Learning Objectives

15.3 Stages of Multimedia applications

15.4 Six Stages of Production in Multimedia

15.5 Content and Talent

15.6 Delivering

15.7 CD-ROM Technology

15.8 Summary

15.9 Check Your Answers

15.10 Model Questions

15.1 Introduction

Even though we may have all the required elements of multimedia to start and finish a full-
fledged multimedia project, a project also requires a plan of action for project handling that
includes planning, budgeting, analysis, provisioning, etc. This lesson gives a brief introduction
to the stages of multimedia project handling.

15.2 Learning Objectives


At the end of the lesson, the learner will be able to

 Learn different stages of multimedia applications

 Know the six stages of production in multimedia

 Understand content and delivering methods in multimedia

 Learn about the CD_ROM technology



15.3 Stages of Multimedia Applications

A multimedia application is developed in stages, as all other software is.


In multimedia application development, a few stages have to be completed before other stages
begin, and some stages may be skipped or combined with other stages.

Following are the four basic stages of multimedia project development:

1. Planning and Costing

2. Designing and Producing

3. Testing

4. Delivering

 Planning and Costing

This stage of multimedia application is the first stage which begins with an idea or need.
This idea can be further refined by outlining its messages and objectives. Before starting to
develop the multimedia project, it is necessary to plan what writing skills, graphic art, music,
video and other multimedia expertise will be required.

It is also necessary to estimate the time needed to prepare all elements of multimedia
and prepare a budget accordingly. After preparing a budget, a prototype or proof of concept
can be developed.

 The needs of a project are analyzed by outlining its messages and objectives.

 A plan that outlines the required multimedia expertise is prepared.

 A graphic template, the structure, and navigational system are developed.

 A time estimate and a budget are prepared.

 A short prototype or proof-of-concept is prepared.

Planning and Costing


(i) The process of making multimedia.

(ii) Scheduling.

(iii) Estimating.

(iv) RFPs and bid proposals.

(i) The process of making multimedia

 Idea analysis.

 Pre-testing.

 Task planning.

 Development.

 Delivery

Before beginning a multimedia project, it is necessary to determine its scope and content.

 Balance is the key principle in idea analysis.

 The aim is to generate a plan of action that will become the road map for production.

 It is necessary to continually weigh the purpose or goal against the feasibility and the cost
of production and delivery.

 This can be done dynamically by adding elements to or subtracting elements from a


project.

 Additive process involves starting with minimal capabilities and gradually adding elements.

 Subtractive process involves discarding unnecessary elements from a fully developed


project.

 Idea Analysis:

 CPM - Project management software typically provides Critical Path Method (CPM)
scheduling functions to calculate the total duration of a project based upon each identified
task, showing prerequisites.

 PERT - Program Evaluation Review Technique (PERT) charts provide graphic


representations of task relationships.

 Gantt charts - depict all the tasks along a timeline.
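
To make the CPM idea concrete, here is a minimal Python sketch; the task names, durations
(in days) and prerequisites are invented for illustration, not taken from the text. It computes
the total project duration the way CPM scheduling functions do:

    # Hypothetical task table: name -> (duration in days, prerequisites).
    tasks = {
        "script":     (5,  []),
        "storyboard": (7,  ["script"]),
        "graphics":   (10, ["storyboard"]),
        "audio":      (6,  ["script"]),
        "authoring":  (8,  ["graphics", "audio"]),
        "testing":    (4,  ["authoring"]),
    }

    memo = {}

    def finish(task):
        # Earliest finish = own duration + latest finish among prerequisites.
        if task not in memo:
            duration, prereqs = tasks[task]
            memo[task] = duration + max((finish(p) for p in prereqs), default=0)
        return memo[task]

    print(max(finish(t) for t in tasks))  # total project duration: 34 days

The same table could feed a PERT chart (the prerequisite links) or a Gantt chart (the computed
start and finish times laid along a timeline).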



Idea analysis involves finding answers to questions like:

 Who is the intended audience? What are their needs?

 What multimedia elements will best deliver the message?

 What hardware, software, and storage capacity would be required?

 How much time, effort, and money would be needed?

 How will the final product be distributed?

Idea Analysis – Project management software includes:

 Microsoft Project.

 Designer’s Edge.

 Screenplay System’s Screenwriter and Story View.

 Outlining programs.

 Spreadsheets

 Pre-Testing:

 Involves defining project goals in fine detail and spelling out what it will take in terms of
skills, content, and money to meet these goals.

 Work up a prototype of the project on paper to help you relate your ideas to the real world.

 Task Planning:

Task planning involves:

 Designing the instructional framework.

 Holding creative idea sessions.

 Determining the delivery platform and authoring platform.

 Assembling the team.

 Building a prototype, producing audio and video, testing the functionality, and delivering
the final product.

 Development

Prototype development

 Also known as a proof-of-concept or feasibility study.

 Involves testing of the initial implementation of ideas, building mock-up interfaces, and
exercising the hardware platform.

 Trial calculations are possible after prototyping.

 A written report and an analysis of budgets allow the client some flexibility and also provide
a reality check for developers.

 Alpha development – At this stage, the investment of effort increases and becomes
more focused. More people get involved.

 Beta development – At this stage, most of the features of a project are functional.
Testing is done by a wider arena of testers.

 Delivery

 In the delivery stage, the project is said to be “going gold.”

 The concerns shift towards the scalability of the project in the marketplace.

(ii) Scheduling
 Milestones are decided at this stage.

 The time required for each deliverable (that is, the work products delivered to the client) is
estimated and allocated.

 Scheduling is difficult for multimedia projects because multimedia creation is basically


artistic trial and error.

 Scheduling is also difficult because computer hardware and software technology are in
constant flux.

 At this stage, clients need to approve or sign off on the work created.

 Any revisions of previously approved material would require a change order.



 A change order stipulates that the additional cost of revising previously approved material
should be borne by the client.

 When negotiating with a client, limit the number of revisions allowed.

(iii) Estimating
 Cost estimation is done by analyzing the tasks involved in a project and the people who
build it.

 The hidden costs of administration and management are also included in the cost estimates.

 A contingency rate of 10 to 15 percent of the total cost should be added to the estimated
costs.

 Time, money, and people are the three elements that can vary in project estimates.

 The time at which payments are to be made is determined; payments are usually made in
three stages.


 Contractors and consultants can be hired, but they should be billed at a lower rate.

 Ensure that contractors perform the majority of their work off-site and use their own
equipment to avoid classifying them as employees.
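
As a toy illustration of how an estimate is assembled, the following Python sketch applies the
contingency rate mentioned above to a set of cost categories; all line items and amounts are
invented, and the categories mirror the expense list that follows:

    # Invented line items, grouped by the expense categories listed below.
    line_items = {
        "project development": 12000,
        "production":          30000,
        "testing":              5000,
        "distribution":         8000,
    }

    subtotal = sum(line_items.values())
    contingency = subtotal * 0.15              # upper end of the 10-15% range
    print(subtotal, subtotal + contingency)    # 55000 -> 63250.0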

The categories of expenses incurred for producing multimedia are

i. Project development costs.

ii. Production costs.

iii. Testing costs.

iv. Distribution costs.



i. Project Development Cost


These include:

 Salaries.

 Client meetings.

 Acquisition of content.

 Communication.

 Travel.

 Research.

 Proposal and contract prep.

 Overheads.

ii. Production Cost

Production costs can further be classified as:


 Management costs.

 Content acquisition costs.

 Content creation costs.

 Graphics production costs.

 Audio production costs.

 Video production costs.

 Authoring costs.

iii. Testing Cost

These include:
 Salaries.

 Facility rental.

 Printing costs.

 Food and incentives.

 Coop fees (payment for participation).

 Editing.

 Beta program.

iv. Distribution Cost:


These include:

 Salaries

 Documentation

 Packaging

 Manufacturing

 Marketing

 Advertising

 Shipping

Hardware:

 Hardware is the most common limiting factor for realizing a multimedia idea.

 List the hardware capabilities of the end-user’s platform.

 Examine the cost of enhancing the delivery platform.

The most common delivery platforms require a monitor resolution of 800×600 pixels and
at least 16-bit color depth.
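
A quick back-of-the-envelope check in Python shows what that specification implies for
display memory (the figures below are illustrative, not a requirement from any standard):

    width, height = 800, 600
    bits_per_pixel = 16                                # 16-bit color depth
    frame_bytes = width * height * bits_per_pixel // 8
    print(frame_bytes, round(frame_bytes / 2**20, 2))  # 960000 bytes, ~0.92 MB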

(iv) RFPs and Bid Proposals

Request for Proposals (RFPs):


 These are formal and detailed documents from large corporations who are “outsourcing”
their multimedia development work.

 They provide information about the scope of work and the bidding process.

 In practice, however, many RFPs are not very detailed or specific.



Bid proposals:

 Should contain an executive summary or an overview.

 The backbone of the proposal is the estimate and project plan, which describes the scope
of the work.

 The cost estimates for each phase or deliverable milestone and the payment schedules
should also be included.

 Should contain the graphic and interactive goals of the project.

 Prepare a brief synopsis if a project is complicated.

 Lists the terms and conditions of the contract.

 The terms of a contract should include a description of the billing rates, invoicing policy,
third-party licensing fees, and a disclaimer for liability and damages.

 Design the proposal according to a client’s expectations.

 A proposal should appear plain and simple, yet businesslike.

 A table of contents or an index is a straightforward way to present the elements of a
proposal in a condensed overview.

 Need analysis and description describes the reasons the project is being put forward.

 It is necessary to describe the target audience and the target platform.

 Creative strategy – This section describes the look and feel of a project. This is
useful if the reviewing executives were not present for the preliminary discussions.

 Project implementation – This section contains a detailed calendar, PERT and


Gantt charts, and lists of specific tasks with associated completion dates,
deliverables, and work hours.

 Designing and Producing

The next stage is to execute each of the planned tasks and create a finished product.

 The planned tasks are performed to create a finished product.

 The product is revised, based on the continuous feedback received from the client.

Strategies for Creating Interactive Multimedia


 Designing and building multimedia projects go hand-in-hand.

 Balance proposed changes against their cost.

 Feedback loops and good communication between the design and production effort are
critical to the success of a project.

 A user can either describe the project in minute detail, or can build a less-detailed
storyboard and spend more effort in actually rendering the project.

 The method chosen depends upon the scope of a project, the size and style of the team,
and whether the same people will do design and development.

 If the design team is separate from the development team, it is best to produce a detailed
design first.

Designing a Multimedia Project:


 Designing a multimedia project requires knowledge and skill with computers, talent in
graphics, arts, video, and music, and the ability to conceptualize logical pathways.

 Designing involves thinking, choosing, making, and doing.

1. Designing the structure

2. Designing the user interface.

 Designing the structure

 The manner in which project material is organized has just as great an impact on the
viewer as the content itself.

 Mapping the structure of a project should be done early in the planning phase.

Navigation:
 Navigation maps are also known as site maps.

 They help organize the content and messages.

 Navigation maps provide a hierarchical table of contents and a chart of the logical flow of
the interactive interface.

 Navigation maps are essentially non-linear.

There are four fundamental organizing structures:

(i) Linear - Users navigate sequentially, from one frame of information to another.

(ii) Hierarchical - Users navigate along the branches of a tree structure that is shaped by the
natural logic of the content. It is also called linear with branching.

(iii) Non-linear - Users navigate freely through the content, unbound by predetermined routes.

(iv) Composite - Users may navigate non-linearly, but are occasionally constrained to linear
presentations.
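
A navigation map for the hierarchical case can be sketched as a simple adjacency list; the
Python below is illustrative only, and the screen names are invented:

    # Hierarchical site map: each screen lists the screens it branches to.
    site_map = {
        "home":      ["products", "tutorials", "about"],
        "products":  ["product-a", "product-b"],
        "tutorials": ["lesson-1", "lesson-2"],
        "about": [], "product-a": [], "product-b": [],
        "lesson-1": [], "lesson-2": [],
    }

    def print_toc(node, depth=0):
        # Depth-first walk printing the hierarchical table of contents.
        print("  " * depth + node)
        for child in site_map[node]:
            print_toc(child, depth + 1)

    print_toc("home")

A linear structure would reduce each list to a single "next" entry, while a non-linear or
composite structure would add cross-links between the branches.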

Navigational structure used in Multimedia

 The navigation system should be designed in such a manner that viewers are given free
choice.

 The architectural drawings for a multimedia project are storyboards and navigation maps.

 Storyboards are linked to navigation maps during the design process, and help to visualize
the information architecture.

Structural Depth:
 A user can design their product using two types of structures:

(i) Depth structure - Represents the complete navigation map and describes all the links
between all the components of the project.

Depth Structure

(ii) Surface structure - Represents the structures actually realized by a user while navigating
the depth structure.

Surface Structure

Hotspots:
 Add interactivity to a multimedia project.

 The three categories of hotspots are text, graphic, and icon. The simplest hotspots on
the Web are the text anchors that link a document to other documents.

Hyperlinks
 A hotspot that connects a viewer to another part of the same document, a different
document, or another Web site is called a hyperlink.

 Image maps - Larger images that are sectioned into hot areas with associated links are
called image maps.

 Icons - Icons are fundamental graphic objects symbolic of an activity or concept.

 Buttons - A graphic image that is a hotspot is called a button.

 Plug-ins such as Flash, Shockwave, or JavaScript enable users to create plain or animated
buttons.

 Small JPEG or GIF images that are themselves anchor links can also serve as buttons on
the Web.

 Highlighting a button is the most common method of distinguishing it.

 It is essential to follow accepted conventions for button design and grouping, visual and
audio feedback, and navigation structure.

 Avoid hidden commands and unusual keystroke/mouse click combinations.

 Designing the User Interface

 The user interface of a project is a blend of its graphic elements and its navigation system.

 The simplest solution for handling varied levels of user expertise is to provide a modal
interface.

 In a modal interface, the viewer can simply click a Novice/Expert button and change the
approach of the whole interface.

 Modal interfaces are not suitable for multimedia projects.

 The solution is to build a project that can contain plenty of navigational power, which
provides access to content and tasks for users at all levels.

 The interface should be simple and user-friendly.

Graphical user interface (GUI):


 The GUIs of Macintosh and Windows are successful due to their simplicity, consistency,
and ease of use.

 GUIs offer built-in help systems, and provide standard patterns of activity that produce
the standard expected results.

Graphical approaches that work:

- Plenty of “non-information areas,” or white space in the screens.

- Neatly executed contrasts.

- Gradients.

- Shadows.

- Eye-grabbers.

Graphical approaches to avoid:

- Clashes of color.

- Busy screens.

- Requiring more than two button clicks to quit.

- Too many numbers and words.

- Too many substantive elements presented too quickly.

Audio interface:
 A multimedia user interface can include sound elements.

 Sounds can be background music, special effects for button clicks, voice-overs, effects
synced to animation.

 Always provide a toggle switch to disable sound.

Producing a Multimedia Project


 In the development or the production phase, the project plan becomes the systematic
instruction manual for building the project.

 The production stage requires good organization and detailed management oversight
during the entire construction process.

 A good time-accounting system for everyone working on a project is required to keep


track of the time spent on individual tasks.

 It is important to check the development hardware and software and review the
organizational and administrative setup. Potential problems can be avoided by answering
these questions:

o Is there sufficient disk storage space for all files?

o Is the expertise available for all stages of the project?



o Is there a system for backing up critical files?

o Are the financial arrangements secured?

o Are the communications pathways open with clients?

Working with clients:


– Have a system in place for good communication between the client and the people actually
building the project.

– Control the client review process to avoid endless feedback loops.

– Develop a scheme that specifies the number and duration of client approval cycles.

– Provide a mechanism for change orders when changes are requested after sign-off.

Data storage media and transportation:


– This is necessary so that a client is easily able to review the work.

– There needs to be a matching data transfer system and media.

– Access to the Internet at high bandwidth is preferred.

– The most cost-effective and time-saving methods of transportation are CD-Rs or DVD-
ROMs.

Tracking:
– Organize a method for tracking the receipt of material to be incorporated in a project.

– Develop a file-naming convention specific to your project’s structure.

– Store the files in directories or folders with logical names.

– To address cross-platform issues, develop a file identification system that uses the DOS
file-naming convention of eight characters plus a three-character extension.
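
A hypothetical helper for such a convention might look like this in Python (the function name
and rules are illustrative, not part of any standard tool):

    import re

    def to_dos_name(name, ext):
        # Strip characters DOS disallows and clip to eight-plus-three.
        base = re.sub(r"[^A-Za-z0-9]", "", name).upper()[:8]
        return f"{base}.{ext.upper()[:3]}"

    print(to_dos_name("opening-scene_video", "mpeg"))  # OPENINGS.MPE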

Tracking and copyrighting:


– Version control of your files is very important, especially in large projects.

– If storage space allows, archive all file iterations.



– Insert a copyright statement in the project that legally designates the code as the creator’s
intellectual property.

– Copyright and ownership statements are embedded in <meta> tags at the top of an HTML
page.

 Testing

Testing a project ensures that the product is free from bugs. Apart from bug elimination,
another aspect of testing is to ensure that the multimedia application meets the objectives of
the project. It is also necessary to test whether the multimedia project works properly on the
intended delivery platforms and meets the needs of the clients.

The program is tested to ensure that it meets the objectives of the project, works on the
proposed delivery platforms, and meets the client requirements.

 Delivering

The final stage of the multimedia application development is to pack the project and
deliver the completed project to the end user. This stage has several steps such as
implementation, maintenance, shipping and marketing the product.

The final project is packaged and delivered to the end user.

15.4 Six Stages of Production in Multimedia

Multimedia projects are complex; they involve the skills and efforts of multiple teams or
people. During the development process, a project moves through the specialized parts of the
team, from story creation to technical editing, with regular collective review sessions. Each
stage is designed to refine the project with attention to the client’s needs, technical requirements
and audience preferences.

(i) Planning Meeting to Start the Process

A planning meeting is a crucial part of the multimedia development process; it creates a


shared vision for everyone working on the project. The meeting usually kicks off a project,

bringing together the team. During the meeting, the project manager communicates the major
goals and lays out the milestones. The meeting may include a discussion of the target audience
and how each division can help support the overarching goal.

(ii) Creative Brief and Script Writing

Most multimedia projects have a story behind them. After the initial meeting, the people
in charge of the background story write a script, creative brief or outline. The text hits the main
points of the project and uses language that appeals to the audience in jargon, tone and style.

(iii) Story Boarding to Tie the Elements Together

A multimedia project usually includes multiple pieces: audio, video, imagery, text for
voiceovers and on-screen titles. Story boarding ties everything together; a story board panel
for a scene includes a sketch of the visual elements, the voiceover or title text, and any production
notes. It guides the process, keeps everyone in check and gives structure to the project.

(iv) Designing the Visual Aspects

During the design stage, designers take over the visual aspects of the project to determine
how it looks and feels. Using the notes from the storyboard, they create graphics, design the
navigation and give direction to photographers and videographers regarding the correct shots.
Depending on the project, the design stage might include graphic design, web design, information
design, photography or image collection. Design is always done with an eye toward the audience.

(v) Review and Editing

Editing is one of the most involved and complex stages of the multimedia development
process. The people responsible for editing the project turn the various pieces into a cohesive
product, taking into consideration the time constraints, story line and creative specifications.
Depending on the scope of the project, pieces of the project may be edited separately.

For projects with a large amount of video, editing is the longest stage of the process; a
minute of final video can take hours of editing. The editing stage usually involves internal
review iterations and may also include rounds of client review and editing.

(vi) Production and User Testing

The production stage is when all the parts of a multimedia project come together. The
production staff gathers all of the edited assets in one place and puts them together in a logical
sequence, using the story board as a guide. The rough draft is then put through rounds of
review and final edits, both internally and with the client. To ensure that a project has the
desired impact on the target audience, a company may engage in user testing as part of
production.

During this stage, test members of the audience use the multimedia piece while team
members observe. Depending on the goals of the project, the staff might observe users’ reactions
or have them answer questions to see if the project hits the right marks. After user testing,
there are usually further adjustments to the project. Once the team and clients are satisfied,
the project goes out for distribution.

 Programming technologies can be used for online content delivery, such as

 COMMON GATEWAY INTERFACE (CGI) PROGRAMMING

 PERL

 JAVA

 JAVASCRIPT

 PHP

15.5 Content and Talent

Delivering of Multimedia Content

(a) CD-ROM

A Compact Disc or CD is an optical disc used to store digital data, originally developed
for storing digital audio. The CD, available on the market since late 1982, remains the standard
playback medium for commercial audio recordings to the present day, though it has lost ground
in recent years to MP3 players.

An audio CD consists of one or more stereo tracks stored using 16-bit PCM coding at a
sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can hold approximately
80 minutes of audio. There are also 80 mm discs, sometimes used for CD singles, which hold
approximately 20 minutes of audio. The technology was later adapted for use as a data storage
device, known as a CD-ROM, and to include record once and re-writable media (CD-R and
CD-RW respectively).

CD-ROMs and CD-Rs remain widely used technologies in the computer industry as of
2007. The CD and its extensions have been extremely successful: in 2004, the worldwide
sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs. By 2007, 200 billion
CDs had been sold worldwide.

(b) DVD

DVD (also known as “Digital Versatile Disc” or “Digital Video Disc”) is a popular optical
disc storage media format. Its main uses are video and data storage. Most DVDs are of the
same dimensions as compact discs (CDs) but store more than 6 times the data. Variations of
the term DVD describe the way data is stored on the discs:

DVD-ROM has data which can only be read and not written, DVD-R can be written once
and then functions as a DVD-ROM, and DVD-RAM or DVD-RW holds data that can be re-
written multiple times.

DVD-Video and DVD-Audio discs respectively refer to properly formatted and structured
video and audio content. Other types of DVD discs, including those with video content, may be
referred to as DVD-Data discs. The term “DVD” is commonly misused to refer to high density
optical disc formats in general, such as Blu-ray and HD DVD. “DVD” was originally used as
initials for the unofficial term “digital video disc”. It was reported in 1995, at the time of the
specification finalization, that the letters officially stood for “digital versatile disc” (due to non-
video applications); however, the text of the press release announcing the
specification finalization only refers to the technology as “DVD”, making no mention of what (if
anything) the letters stood for. Usage in the present day varies, with “DVD”, “Digital Video
Disc”, and “Digital Versatile Disc” all being common.

(c) About Flash Drives

A USB flash drive is a data storage device that includes flash memory with an integrated
Universal Serial Bus (USB) interface. USB flash drives are typically removable and rewritable,
and physically much smaller than a floppy disk. Most weigh less than 30 g. As of January 2012
drives of 1 terabyte (TB) are available and storage capacities as large as 2 terabytes are
planned, with steady improvements in size and price per capacity expected. Some allow up to
100,000 write/erase cycles (depending on the exact type of memory chip used) and 10 years
shelf storage time.

USB flash drives are used for the same purposes for which floppy disks or CD-ROMs
were used. They are smaller, faster, have thousands of times more capacity, and are more
durable and reliable because they have no moving parts. Until approximately 2005, most desktop
and laptop computers were supplied with floppy disk drives, but floppy disk drives have been
abandoned in favor of USB ports.

USB flash drives use the USB mass storage standard, supported natively by modern
operating systems such as Linux, Mac OS X, Windows, and other Unix-like systems, as well
as many BIOS boot ROMs. USB drives with USB 2.0 support can store more data and transfer
faster than much larger optical disc drives like CD-RW or DVD-RW drives and can be read by
many other systems such as the Xbox 360, PlayStation 3, DVD players and in some upcoming
mobile smart phones.

(d) About Internet

The Internet is a global system of interconnected computer networks that use the standard
Internet protocol suite (TCP/IP) to serve billions of users worldwide. It is a network of networks
that consists of millions of private, public, academic, business, and government networks, of
local to global scope, that are linked by a broad array of electronic, wireless and optical
networking technologies. The Internet carries an extensive range of information resources and
services, such as the inter-linked hypertext documents of the World Wide Web (WWW) and
the infrastructure to support email.

15.6 Delivering
Multimedia can be delivered using

 Optical disk (CD-based)

 Over a distributed network (Web-based)

Optical Disks
 The most cost-effective method of delivery for multimedia materials.

 The devices are used to store large amounts of some combination of text, graphics,
sound, and moving video.

Optical Disks

Media                             Storage

Compact Disc (CD)                 650 MB

Digital Versatile Disc (DVD)      4.7 GB

Blu-ray Disc (BD)                 27 GB

Distributed Network

 Suitable for web-based content, e.g. a website

 Files need to be compressed before transfer

Distributed Network

Web-based                                     CD-based

Limited in picture size and low video         Can store high-end multimedia
resolution                                    elements

Can be changed, damaged or deleted            Can be permanently stored and is
by irresponsible individuals                  not changeable

Information can be updated easily             Information can quickly become
and cheaply                                   outdated
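
The trade-off can be quantified roughly. The Python sketch below (sizes and bandwidths are
assumed, not measured) estimates how long one CD's worth of content takes to transfer over
a network, which is why compression matters for web delivery:

    size_mb = 650                          # one CD's worth of content
    for mbps in (2, 20, 100):              # assumed download bandwidths
        seconds = size_mb * 8 / mbps
        print(mbps, "Mbps:", round(seconds / 60, 1), "minutes")
    # 2 Mbps: 43.3 min, 20 Mbps: 4.3 min, 100 Mbps: 0.9 min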

Check your Progress


1. In the __________ section of the project proposal, you might find a detailed calendar or
Gantt chart.

2. By the time you reach the __________ stage of multimedia project development, you are
producing the final product.

3. __________ are passed through several levels of a company so that managers and
directors can evaluate projects’ quality and price.

4. The end of each phase of the development of a multimedia project is a natural place to
set __________.

a. prerequisites

b. scopes

c. contingencies

d. milestones

5. According to the text, during which phase of development should you build a skills matrix?

a. Idea analysis

b. Pretesting

c. Building a team

d. Scheduling

6. Say True or False

Large corporations that are “outsourcing” their multimedia development work often create
Requests for Proposals (RFPs), which provide background information, the scope of
work, and information about the bidding process to potential contractors.

a. True b. False

7. __________ provide you with a table of contents, as well as a chart of the logical flow of
the interactive interface

8. __________ is the phase when your multimedia project is fully rendered.

9. __________ structure describes all the links between all the components of your project.

10. __________ interfaces provide the simplest solutions for handling varied levels of user
expertise

15.7 CD-ROM Technology

Introduction

Optical storage devices have become the order of the day. Their high storage capacity
has made them a natural choice for storing multimedia content. Apart from the high storage
capacity, optical storage devices also offer a higher data transfer rate.

15.7.1 CD-ROM

A Compact Disc or CD is an optical disc used to store digital data, originally developed
for storing digital audio. The CD, available on the market since late 1982, remains the standard
playback medium for commercial audio recordings to the present day, though it has lost ground
in recent years to MP3 players.

An audio CD consists of one or more stereo tracks stored using 16-bit PCM coding at a
sampling rate of 44.1 kHz. Standard CDs have a diameter of 120 mm and can hold approximately
80 minutes of audio. There are also 80 mm discs, sometimes used for CD singles, which hold
approximately 20 minutes of audio. The technology was later adapted for use as a data storage
device, known as a CD-ROM, and to include record-once and re-writable media (CD-R and CD-
RW respectively). CD-ROMs and CD-Rs remain widely used technologies in the computer
industry as of 2007. The CD and its extensions have been extremely successful: in 2004, the
worldwide sales of CD audio, CD-ROM, and CD-R reached about 30 billion discs. By 2007,
200 billion CDs had been sold worldwide.

CD-ROM History

In 1979, Philips and Sony set up a joint task force of engineers to design a new digital
audio disc. The CD was originally thought of as an evolution of the gramophone record, rather
than primarily as a data storage medium. Only later did the concept of an “audio file” arise, and
the generalizing of this to any data file. From its origins as a music format, Compact Disc has
grown to encompass other applications. In June 1985, the CD-ROM (read-only memory) and,
in 1990, CD-Recordable were introduced, also developed by Sony and Philips.

Physical details of CD-ROM

A Compact Disc is made from a 1.2 mm thick disc of almost pure polycarbonate plastic
and weighs approximately 16 grams. A thin layer of aluminum (or, more rarely, gold, used for
its longevity, such as in some limited-edition audiophile CDs) is applied to the surface to make
it reflective, and is protected by a film of lacquer. CD data is stored as a series of tiny indentations
(pits), encoded in a tightly packed spiral track molded into the top of the polycarbonate layer.
The areas between pits are known as “lands”. Each pit is approximately 100 nm deep by 500
nm wide, and varies from 850 nm to 3.5 µm in length.

The spacing between the tracks, the pitch, is 1.6 µm. A CD is read by focusing a 780 nm
wavelength semiconductor laser through the bottom of the polycarbonate layer.

While CDs are significantly more durable than earlier audio formats, they are susceptible
to damage from daily usage and environmental factors. Pits are much closer to the label side
of a disc, so that defects and dirt on the clear side can be out of focus during playback. Discs
consequently suffer more damage because of defects such as scratches on the label side,
whereas clear-side scratches can be repaired by refilling them with plastic of similar index of
refraction, or by careful polishing.

Disc shapes and diameters

The digital data on a CD begins at the center of the disc and proceeds outwards to the
edge, which allows adaptation to the different size formats available. Standard CDs are available
in two sizes. By far the most common is 120 mm in diameter, with a 74 or 80-minute audio
capacity and a 650 or 700 MB data capacity. 80 mm discs (“Mini CDs”) were originally designed
for CD singles and can hold up to 21 minutes of music or 184 MB of data but never really
became popular. Today nearly all singles are released on 120 mm CDs, which is called a Maxi
single.

15.7.2 Logical formats of CD-ROM

Audio CD

The logical format of an audio CD (officially Compact Disc Digital Audio or CD-DA) is
described in a document produced in 1980 by the format’s joint creators, Sony and Philips. The
document is known colloquially as the “Red Book” after the color of its cover. The format is a
two-channel 16-bit PCM encoding at a 44.1 kHz sampling rate. Four-channel sound is an
allowed option within the Red Book format, but has never been implemented.

The selection of the sample rate was primarily based on the need to reproduce the
audible frequency range of 20Hz - 20kHz. The Nyquist–Shannon sampling theorem states that
a sampling rate of double the maximum frequency to be recorded is needed, resulting in a 40
kHz rate. The exact sampling rate of 44.1 kHz was inherited from a method of converting
digital audio into an analog video signal for storage on video tape, which was the most affordable
way to transfer data from the recording studio to the CD manufacturer at the time the CD
specification was being developed. The device that turns an analog audio signal into PCM
audio, which in turn is changed into an analog video signal is called a PCM adaptor.

Main physical parameters

The main parameters of the CD (taken from the September 1983 issue of the audio CD
specification) are as follows:

 Scanning velocity: 1.2–1.4 m/s (constant linear velocity) – equivalent to approximately
500 rpm at the inside of the disc, and approximately 200 rpm at the outside edge. (A disc
played from beginning to end slows down during playback.)

 Track pitch: 1.6 µm

 Disc diameter 120 mm

 Disc thickness: 1.2 mm

 Inner radius program area: 25 mm

 Outer radius program area: 58 mm

 Center spindle hole diameter: 15 mm

The program area is 86.05 cm² and the length of the recordable spiral is 86.05 cm² / 1.6
µm = 5.38 km. With a scanning speed of 1.2 m/s, the playing time is 74 minutes, or around 650
MB of data on a CD-ROM. If the disc diameter were only 115 mm, the maximum playing time
would have been 68 minutes, i.e., six minutes shorter.

A disc with data packed slightly more densely is tolerated by most players (though some
old ones fail). Using a linear velocity of 1.2 m/s and a track pitch of 1.5 µm leads to a playing
time of 80 minutes, or a capacity of 700 MB. Even higher capacities on non-standard discs (up
to 99 minutes) are available at least as recordable, but generally the tighter the tracks are
squeezed, the worse the compatibility.
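
These capacity figures follow directly from the geometry. The short Python sketch below is a
back-of-the-envelope check using the parameters quoted above; it reproduces both the
74-minute/650 MB and the 80-minute/700 MB cases (the "MB" figures are binary megabytes,
as in the text):

    PROGRAM_AREA_CM2 = 86.05            # usable program area of a 120 mm disc
    SECTORS_PER_SECOND = 75             # Red Book playback rate
    MODE1_BYTES_PER_SECTOR = 2048       # CD-ROM Mode 1 data payload

    def cd_capacity(track_pitch_um, velocity_m_s):
        spiral_m = (PROGRAM_AREA_CM2 * 1e-4) / (track_pitch_um * 1e-6)
        seconds = spiral_m / velocity_m_s
        data_mb = seconds * SECTORS_PER_SECOND * MODE1_BYTES_PER_SECTOR / 2**20
        return round(seconds / 60, 1), round(data_mb)

    print(cd_capacity(1.6, 1.2))        # (74.7, 656)  -> ~74 min, ~650 MB
    print(cd_capacity(1.5, 1.2))        # (79.7, 700)  -> ~80 min, ~700 MB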

15.7.3 Data structure

The smallest entity in a CD is called a frame. A frame consists of 33 bytes and contains
six complete 16-bit stereo samples (2 bytes × 2 channels × six samples equals 24 bytes). The
other nine bytes consist of eight Cross-Interleaved Reed-Solomon Coding error correction
bytes and one subcode byte, used for control and display. Each byte is translated into a 14-bit
word using Eight-to-Fourteen Modulation (EFM), which alternates with 3-bit merging words. In total we
have 33 × (14 + 3) = 561 bits. A 27-bit unique synchronization word is added, so that the
number of bits in a frame totals 588 (of which only 192 bits are music). These 588-bit frames
are in turn grouped into sectors. Each sector contains 98 frames, totaling 98 × 24 = 2352 bytes
of music.

The CD is played at a speed of 75 sectors per second, which results in 176,400 bytes per
second. Divided by 2 channels and 2 bytes per sample, this results in a sample rate of 44,100
samples per second. For the Red Book stereo audio CD, the time format is commonly
measured in minutes, seconds and frames (mm:ss:ff), where one frame corresponds to one
sector, or 1/75th of a second of stereo sound. Note that in this context, the term frame is
erroneously applied in editing applications and does not denote the physical frame described
above. In editing and extracting, the frame is the smallest addressable time interval for an
audio CD, meaning that track start and end positions can only be defined in 1/75 second steps.
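
The frame and sector arithmetic above can be verified with a few lines of Python; all constants
are the Red Book values quoted in this section:

    BYTES_PER_FRAME = 33          # 24 music + 8 CIRC + 1 subcode
    MUSIC_BYTES_PER_FRAME = 24
    FRAMES_PER_SECTOR = 98
    SECTORS_PER_SECOND = 75

    channel_bits = BYTES_PER_FRAME * (14 + 3) + 27   # EFM + merging + sync
    print(channel_bits)                              # 588 bits per frame

    sector_bytes = FRAMES_PER_SECTOR * MUSIC_BYTES_PER_FRAME
    print(sector_bytes)                              # 2352 bytes of music

    rate = sector_bytes * SECTORS_PER_SECOND
    print(rate, rate // (2 * 2))                     # 176400 B/s, 44100 samples/s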

Logical structure

The largest entity on a CD is called a track. A CD can contain up to 99 tracks (including


a data track for mixed mode discs). Each track can in turn have up to 100 indexes, though
players which handle this feature are rarely found outside of pro audio, particularly radio
broadcasting. The vast majority of songs are recorded under index 1, with the pre-gap being
index 0. Sometimes hidden tracks are placed at the end of the last track of the disc, using
index 2 or 3. This is also the case with some discs offering “101 sound effects”, with 100 and
101 being index 2 and 3 on track 99. The index, if used, is occasionally put on the track listing
as a decimal part of the track number, such as 99.2 or 99.3.

CD-Text

CD-Text is an extension of the Red Book specification for audio CD that allows for storage
of additional text information (e.g., album name, song name, and artist) on a standards-compliant
audio CD. The information is stored either in the lead-in area of the CD, where there is roughly
five kilobytes of space available, or in the subcode channels R to W on the disc, which can
store about 31 megabytes.

CD + Graphics

Compact Disc + Graphics (CD+G) is a special audio compact disc that contains graphics
data in addition to the audio data on the disc. The disc can be played on a regular audio CD
player, but when played on a special CD+G player, can output a graphics signal (typically, the
CD+G player is hooked up to a television set or a computer monitor); these graphics are

almost exclusively used to display lyrics on a television set for karaoke performers to sing
along with.

CD + Extended Graphics

Compact Disc + Extended Graphics (CD+EG, also known as CD+XG) is an improved variant
of the Compact Disc + Graphics (CD+G) format. Like CD+G, CD+EG utilizes basic CD-ROM
features to display text and video information in addition to the music being played. This extra
data is stored in subcode channels R-W.

CD-MIDI

Compact Disc MIDI or CD-MIDI is a type of audio CD where sound is recorded in MIDI format,
rather than the PCM format of Red Book audio CD. This provides much greater capacity in
terms of playback duration, but MIDI playback is typically less realistic than PCM playback.

Video CD

Video CD (aka VCD, View CD, Compact Disc digital video) is a standard digital format for
storing video on a Compact Disc. VCDs are playable in dedicated VCD players, most modern
DVD-Video players, and some video game consoles. The VCD standard was created in 1993
by Sony, Philips, Matsushita, and JVC and is referred to as the White Book standard. Overall
picture quality is intended to be comparable to VHS video, though VHS has twice as many
scanlines (approximately 480 NTSC and 580 PAL) and therefore double the vertical resolution.
Poorly compressed video in VCD tends to be of lower quality than VHS video, but VCD exhibits
block artifacts rather than analog noise.

Super Video CD

Super Video CD (Super Video Compact Disc or SVCD) is a format used for storing video
on standard compact discs. SVCD was intended as a successor to Video CD and an alternative
to DVD-Video, and falls somewhere between both in terms of technical capability and picture
quality. SVCD has two-thirds the resolution of DVD, and over 2.7 times the resolution of VCD.

One CD-R disc can hold up to 60 minutes of standard quality SVCD-format video. While
no specific limit on SVCD video length is mandated by the specification, one must lower the
video bitrate, and therefore quality, in order to accommodate very long videos. It is usually
difficult to fit much more than 100 minutes of video onto one SVCD without incurring significant
quality loss, and many hardware players are unable to play video with an instantaneous bitrate
lower than 300 to 600 kilobits per second.
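
The length/quality trade-off is simple arithmetic. The sketch below (assuming a 700 MB disc
and ignoring audio and container overhead) shows why long programs force the video bitrate
down:

    disc_bytes = 700 * 10**6               # assumed CD-R capacity
    for minutes in (60, 100):
        kbps = disc_bytes * 8 / (minutes * 60) / 1000
        print(minutes, "min ->", round(kbps), "kbps")
    # 60 min -> 1556 kbps, 100 min -> 933 kbps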

Photo CD

Photo CD is a system designed by Kodak for digitizing and storing photos in a CD.
Launched in 1992, the discs were designed to hold nearly 100 high quality images, scanned
prints and slides using special proprietary encoding. Photo CD discs are defined in the Beige
Book and conform to the CD-ROM XA and CD-i Bridge specifications as well. They are intended
to play on CD-i players, Photo CD players and any computer with the suitable software
irrespective of the operating system.

Enhanced CD

Enhanced CD, also known as CD Extra and CD Plus, is a certification mark of the Recording
Industry Association of America for various technologies that combine audio and computer
data for use in both compact disc and CD-ROM players.

The primary data formats for Enhanced CD discs are mixed mode (Yellow Book/Red
Book), CD-i, hidden track, and multisession (Blue Book).

Recordable CD

Recordable compact
discs, CD-Rs, are injection molded with a “blank” data spiral. A photosensitive dye is then
applied, after which the discs are metalized and lacquer coated. The write laser of the CD
recorder changes the color of the dye to allow the read laser of a standard CD player to see the
data as it would an injection molded compact disc. The resulting discs can be read by most (but
not all) CD-ROM drives and played in most (but not all) audio CD players.

CD-R recordings are designed to be permanent. Over time the dye’s physical
characteristics may change, however, causing read errors and data loss until the reading device
cannot recover with error correction methods. The design life is from 20 to 100 years depending
on the quality of the discs, the quality of the writing drive, and storage conditions. However,
testing has demonstrated such degradation of some discs in as little as 18 months under
normal storage conditions. This process is known as CD rot. CD-Rs follow the Orange Book
standard.

Recordable Audio CD

The Recordable Audio CD is designed to be used in a consumer audio CD recorder,


which won’t (without modification) accept standard CD-R discs. These consumer audio CD
recorders use SCMS (Serial Copy Management System), an early form of digital rights

management (DRM), to conform to the AHRA (Audio Home Recording Act). The Recordable
Audio CD is typically somewhat more expensive than CD-R due to (a) lower volume and (b) a
3% AHRA royalty used to compensate the music industry for the making of a copy.

Re-Writable CD

CD-RW is a re-recordable medium that uses a metallic alloy instead of a dye. The write
laser in this case is used to heat and alter the properties (amorphous vs. crystalline) of the
alloy, and hence change its reflectivity. A CD-RW does not have as great a difference in reflectivity
as a pressed CD or a CD-R, and so many earlier CD audio players cannot read CD-RW discs,
although later CD audio players and stand-alone DVD players can. CD-RWs follow the Orange
Book standard.

Features of Compact Disc Technologies:

 Can be used for all kind of storage

 Wide application area

 Large capacity

 Base is CD-DA technology (except CD-MO)

 Sequential specification of the different CD technologies

Disadvantages:

 Long average access time

 Incompatibility of CD-MO

Future:

 CD with enhanced storage space and data retrieval rate

 Smaller optical disc with similar capacity

15.8 Summary
 The basic stages of a multimedia project are planning and costing, design and production,
testing and delivery.

 Knowledge of hardware and software, as well as creativity and organizational skills are
essential for creating a high-quality multimedia project.

 Before beginning a project, determine its scope and content.

 The process of making multimedia involves idea analysis, pre-testing, task planning,
development, and delivery.

 Costs related to multimedia creation are categorized as project development costs,


production costs, testing costs, and distribution costs.

 Feedback loops and good communication between the design and the production efforts
are critical to the success of a project.

 The four fundamental organizing structures are linear, non-linear, hierarchical, and
composite.

 The user interface should be simple, user-friendly, and easy to navigate.

 The three categories of hotspots are text, graphic, and icon.

 A multimedia project is actually rendered in the production stage.

15.9 Check Your Answers


1. Idea Analysis

2. Task Planning

3. Estimations

4. Milestones

5. Building a team

6. a.True

7. A Navigation Map

8. Production

9. Depth

10. Modal

15.10 Model Questions


1. List out the stages of multimedia project development.

2. Define Idea analysis.

3. What are alpha and beta development?

4. Define Request for Proposals (RFPs).

5. Define structural Depth.

6. Explain in detail about stages of multimedia project development.

7. Discuss in detail about designing and producing of multimedia project development.

8. Write short notes on content and Talent.

9. Discuss in detail six stages of production in multimedia.

10. Explain in detail about CD-ROM Technology.



MODEL QUESTION PAPER


M.C.A–COMPUTER APPLICATION
SECOND YEAR - FOURTH SEMESTER
CORE PAPER - XX
MULTIMEDIA SYSTEMS
TIME: 3 hrs Marks: 80

PART A - (10 x 2 = 20 Marks)


Answer any TEN of the following in about fifty words each

1. What is multimedia? List the basic elements of multimedia.

2. Define hypertext and hypermedia.

3. What are the stages of multimedia project?

4. What is the first stage of a multimedia project?

5. What is multimedia software?

6. What is keyboard and pointing devices?

7. Define Modems and ISDN.

8. List the software tools of multimedia.

9. Define digital image. List out the formats of digital images.

10. Define lossy and lossless compression.

11. Define Photoshop.

12. Define Sampling and Quantization.



PART B - (5 x 6 = 30 Marks)
Write Short Notes on any SIX of the following, in about 250 words each

13. Describe about the multimedia applications in different fields.

14. Write short notes on multimedia skills and training.

15. Write short notes on Macintosh and Windows Production Platform.

16. What are authoring tools? Explain them in detail.

17. Describe multimedia tools in detail.

18. Discuss in detail about animation techniques.

19. Explain text Compression techniques in detail.

20. Discuss about Domain Name System in detail.

PART C - (3 x 10 = 30 Marks)
Write Essay on any THREE of the following, in about 750 words each

21. Explain in detail about connecting devices in multimedia.

22. Discuss in detail about images with an example.

23. Explain the step-by-step procedure to set up the working environment in
Dreamweaver.

24. Describe in detail about Multimedia Conferencing with an example.
