Multimedia Technologies: Module 1
Introduction to multimedia
Objectives
State the different media modalities and give examples of how a piece of
information can be represented using different modalities
Make distinctions between different definitions of the terms "media" and
"multimedia", describe what motivates the different perspectives, and discuss the
difficulties in arriving at a comprehensive definition of multimedia
Discuss Packer's characterization of multimedia and how it differs from other
definitions of multimedia
Construct your definition of multimedia
(Note: Although "media" is strictly speaking the plural form of medium, media can be
used as a singular noun.)
Media modalities
Since this isn't a media studies course but rather a course that deals with problems and
strategies specific to educational material creation and evaluation, we are interested in
specific aspects of media related to design and production. Table 2 lists areas of media
design and production that are relevant to this course. Note that with the exception
of media encoding type, each area references one or more definitions of media given
in Table 1. Each of these areas play an important role in the design and evaluation of
multimedia educational material.
(From Table 1, for example, the Encyclopedic Dictionary of Semiotics, Media, and
Communications (2000) offers Definition 4: media as "any means, agency, or instrument
of communication".)
Keep Table 2 in mind as we turn our attention to the term "multimedia" and what it could
mean.
Defining multimedia
Arriving at a definition of multimedia is not easy. As an initial try, let us look at the topics
being studied by people who identify themselves as multimedia experts. We look for
people who say, "I am a multimedia expert, and if you want to know what multimedia is,
look at what I'm studying." For instance, the peer-reviewed journal, Journal of Multimedia,
considers articles written on the following topics:
This extensive list helps us understand the broad range of academic and industry
interests in multimedia, and we can see that disciplines such as engineering, computing
science, psychology, law, and (yes) education all have a stake in multimedia. What that
list allows us to do is look at definitions proposed by those who identify themselves as
authorities in multimedia. Consider the following passage from Steve
Heath’s Multimedia and Communications Technology (1999):
If there is a term or phrase that has appeared in more diverse publications than
any other over the last few years, it must be multimedia. The number of
definitions for it are as numerous as the number of companies working on it. If
this is the case, what is multimedia?
The reality is somewhere between the extremes. Undoubtedly, with the ever-
improving ability of the PC to provide TV quality audio and video, the television
and PC are becoming very close. Add the ability to provide graphical overlays
and the difference is very small indeed. With cable TV companies providing
telephone connections and the increasing combination of PC with a modem to
access the Internet and thus provide an intelligent telephone, the forecasts for
the universal widget are a logical progression. (Heath, 1999)
Compare that passage with this excerpt from Ralf Steinmetz & Clara Nahrstedt’s
book, Multimedia Systems (2004):
Multimedia is probably one of the most overused terms of the 90s... The field is
at the crossroads of several major industries: computing, telecommunications,
publishing, consumer audio-video electronics, and television/movie/broadcasting.
Multimedia not only brings new industrial players to the game, but adds a new
dimension to the potential market… Similarly, not only the segment of
professional audio-video is concerned, but also the consumer audio-video
market, and the associated TV, movie, and broadcasting sectors.
Finally, consider this passage from Ze-Nian Li & Mark Drew’s book, Fundamentals of
Multimedia (2005):
People who use the term "multimedia" often seem to have quite different, even
opposing, viewpoints. A PC vendor would like us to think of multimedia as a PC
that has sound capability, a DVD-ROM drive, and perhaps the superiority of
multimedia-enabled microprocessors that understand additional multimedia
instructions. A consumer entertainment vendor may think of multimedia as
interactive cable TV with hundreds of digital channels, or a cable-TV-like service
delivered over a high-speed internet connection.
1. Based on the preceding excerpts, can you summarize why it has been
difficult to arrive at a consensus on what multimedia actually is?
2. On which points do these three sets of authors agree?
There are at least two points to note from the preceding excerpts. First, all three sets of
authors suggest that “multimedia” was mobilized as a buzzword in the 1990s by
technology and entertainment industries to push their respective (and largely profit-
driven) agendas. The authors seem to suggest that there was no clear agreement of
what properly lies under the umbrella of “multimedia”, since the stakeholders in the term
differed in their approaches to integrating and innovating on mass media types, media
transmission strategies, and media storage technologies.
However, we also do see a common thread among all three excerpts that is particularly
useful for our attempt to understand multimedia from an instructional design
perspective. Consider some selected excerpts from these passages, presented in Table
4.
(Heath, 1999): "... the use or presentation of data in two or more forms."
(Steinmetz & Nahrstedt, 2004): "... the combination of two or more continuous media, that
is, media that have to be played during some well-defined time interval, usually with some
user interaction. In practice, the two media are normally audio and video, that is, sound
plus moving pictures."
(Li & Drew, 2005): "... a more application-oriented view of what multimedia consists of:
applications that use multiple modalities to their advantage, including text, images,
drawings (graphics), animation, video, sound (including speech), and, most likely,
interactivity of some kind."
Table 4 draws out what these authors consider as multimedia when they consider
media modalities. From this admittedly small selection of expert opinions, we can see
that multimedia can be characterized by at least two general features:
(Note that Steinmetz and Nahrstedt's definition requires "continuous media" that have to
be "played during some well-defined time interval", suggesting that multimedia also has
to be time-based. For the purposes of this course, we can safely ignore this restriction.)
Experts in the area of instructional media use the term multimodal to describe the
simultaneous use of two or more media modalities. Multimodal media is not
necessarily interactive; this is important to keep in mind.
To see multimodality in action, consider the subject of Newton's Laws of Motion. The
ideas behind these laws can be explained in any number of ways:
Video: We could watch a recording of Professor Walter Lewin at the Massachusetts
Institute of Technology give a lecture in front of a class of undergraduate
students.
Text: We could read the transcript of Prof. Lewin's lecture. Or we could pick up
any college physics textbook and read about Newton's Laws there.
Animation: We could watch a video that shows simulated interactions between
physical objects. We could even play with an interactive web application that
shows how Newton's Second Law works.
Audio: We could buy an audio CD or download an MP3 and listen to a physics
expert discuss Newton's Laws while we're cooking or walking down the street.
What can you say about the structure of the website? Does it follow a
particular structural pattern? Is it linear? Radial? Hierarchical?
How does the structure of the website facilitate, hinder, or otherwise affect
your understanding of the history of multimedia?
After reading Packer's article, you will notice that in addition to interactivity and the
simultaneous presentation of media modalities, Packer proposes that multimedia is
characterized by immersion, an integration of
disciplines, narrativity, and hypertextuality/hyperlinkedness. Immersion and disciplinary
integration work together to envelop a user in the multi-sensory experience of a
multimedia environment. Creating a sense of a narrative (or perhaps several parallel
ones) in the multimedia environment invites the user to create meaning and therefore
process the content on cognitive and affective levels. Hyperlinkedness allows users (at
least in theory) to find their way through the environment using a navigational logic that
suits them.
In the introduction of the course, I cited the four stages identified by JISC at which
design decisions need to be made in planning, creating, and delivering a course. At
each of these stages, technology can play an important role in one or both of two ways:
By these observations, we can easily conclude that the PowerPoint program is a tool for
creating multimedia. However, it is also not unreasonable to consider the PowerPoint
program to be a multimedia product in itself:
Can you see yet another reason why it is not easy to get a firm grasp of what
multimedia is? The line between the tools of production and the products themselves is
a blurred one. However, this slippage is also a useful one, since it allows you to build
(digital) tools for enabling learners to build (digital) products.
When multimedia first started gaining currency, it was inconceivable to transmit and
receive over the Internet the quantities and types of data that we now do today. Barfield
recounts the history of the term and how the Internet changed the way information was
delivered:
'Multimedia' used to mean the design of systems authored with tools such as
Macromedia Director and distributed using cd-roms as a carrier medium. In the mid-
1990s the developments surrounding the [I]nternet and the [W]eb... meant that the
focus of multimedia development shifted from the static physical carriers like the cd-rom
to dynamic and updatable delivery methods on the web. Even now this shift is getting
more and more pronounced, pushed along by the developments on the web, the
increase in bandwidth available and the explosion in access by the public. Distribution
based on cd-roms will always have a niche in the market, but the main focus of
multimedia will be online content on the web... Many classical multimedia courses are
introducing their students to the Web and including Web design and construction as part
of the curriculum. This trend will gather pace as courses restructure to follow
developments on the Internet. [emphasis added]
The term new media is now commonly applied to media delivered through the Internet,
while rich media refers specifically to bandwidth-heavy content such as audio and video.
It is, however, also true that access to the Web is not equal, and if you look to the most
marginalized populations of the global society, you will find increasingly spotty Internet
penetration. However, it is often now claimed that mobile phones are revolutionizing the
way people and communities are linked together in the developing world, allowing more
people to have some kind of access (no matter how indirectly) to data located on the
Web. Nowadays, it is becoming increasingly inconceivable and unrealistic
to not consider the capability of students to access (in one way or another) the World
Wide Web when you plan a multimedia-based instructional intervention.
Summary
In this module, we looked at various experts' attempts to define multimedia, a term that
is still very contested. First we looked at what we meant by media. Then we looked at how
multimedia is not merely the same as multiple media. Throughout the discussion, I hope
you got a glimpse of how personal, professional, academic, and corporate interests
shape the field of study of multimedia. We then highlighted media modalities as salient
to our study of educational multimedia design and evaluation, and spent some time
looking at Randall Packer's ideas around multimedia, which I claim are useful for our
study of educational multimedia design and evaluation. Finally, we looked at a particular
example of a slippage in definitions: multimedia can be both a digital product and a tool
that is used to create a digital product.
Throughout this course, you will find that I will be using the terms "multimedia", "digital
tools", "technology", and "software" more or less interchangeably. If I do, I mostly do it
for reasons of style. I do not want to leave you with the impression that these terms are
identical. Of these terms, "technology" is the broadest. Technology can be seen as a
way of doing something coupled with the tools needed to actually do it. "Digital tools"
can refer to both hardware (such as desktop computers, laptops, mobile devices, and
other tangible electronic objects) and software (which are made out of code and run on
hardware) that rely on data to be encoded in binary form.
I hope that the previous discussion helps provide you with a more workable road map
that you can use to understand what we mean when we talk about multimedia. Note
that the previous discussion really has said nothing about how to design and evaluate
multimedia for instructional purposes. The taxonomies I have outlined here have
nothing to say about the pedagogical value of multimedia. We postpone that discussion for
Module 3, when we classify digital tools and products in a way that should be more
useful for us. Before that, Module 2 looks at issues specific to each media modality as
well as some issues around interactivity.
Digital Media
Objectives
At the end of this module, you should be aware of the technical possibilities and limitations
of each of the media modalities described in the module. You should be familiar with the
basic principles that underlie the way all digital media is created so that as standards
and conventions change, you are able to make sense of these changes. You should be
familiar with which applications you can use for creating and editing these media
modalities.
Introduction
In Module 1, I made the case that media modalities (which we defined to be the term
that collectively refers to text, audio, video, graphics, and animation) are salient features
of multimedia that we need to pay attention to. In this module, we'll be taking a look at
each modality. The treatment will necessarily be brief and incomplete, and I encourage
you to check out the bibliography at the end of this module if you are interested in
digging deeper into any of the issues that are raised.
Before I present to you the sections for each of the digital media modalities, I wanted to
point out some issues or features shared by all of these modalities.
Bits are assembled in groups of eight. Eight bits form a byte. 1024 bytes form a
kilobyte. 1024 kilobytes form a megabyte. To learn more about units for measuring
digital data, refer to section 12.2 of the UK Open University's course, Introducing ICT
Systems: http://openlearn.open.ac.uk/mod/resource/view.php?id=182500.
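To make the arithmetic concrete, here is a small Python sketch (not part of the Open University material) that converts a bit count into the 1024-based units described above; the audio bit rate in the example is purely hypothetical.

```python
def describe_size(num_bits: int) -> str:
    """Express a size given in bits as bytes, kilobytes, and megabytes."""
    num_bytes = num_bits / 8         # eight bits form a byte
    kilobytes = num_bytes / 1024     # 1024 bytes form a kilobyte
    megabytes = kilobytes / 1024     # 1024 kilobytes form a megabyte
    return f"{num_bytes:,.0f} bytes = {kilobytes:,.1f} KB = {megabytes:.2f} MB"

# Hypothetical example: a 3-minute audio clip encoded at 128 kilobits per second.
print(describe_size(128_000 * 180))   # 2,880,000 bytes = 2,812.5 KB = 2.75 MB
```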
File sizes and bandwidth restrictions can be problematic; compression can help
Media files can take up a lot of disk space. For example, an hour-long video can take up
as much as 100 megabytes of disk space. Sometimes, you cannot but work with large
files. If you do, consider the following:
If you are transmitting media across a local network or over the Internet,
transmission time will suffer with large files.
You should always be backing up your files, and backing up large files can take
time (especially on older and slower machines)
Editing large files can eat up your computer's resources (CPU time and memory),
especially on older computers.
Most digital media can be compressed to save disk space and bandwidth. There are
two general types of compression mechanisms. Lossy compression discards data
irretrievably. This leads to some loss of quality, but depending on how the compression
is performed, this loss may not be noticeable. Lossless compression reduces file sizes
without discarding essential information. This is possible because the way that data is
recorded as bits is not always efficient. Generally, lossy compression can produce more
dramatic file size reductions. Save your media in file formats that use lossless
compression whenever possible, but also remember that lossy compression (if
managed well) can provide significant benefits without affecting the experience of the
user.
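To illustrate how lossless compression can shrink a file without discarding anything, here is a minimal Python sketch of run-length encoding. This is not the scheme used by any particular file format discussed in this module; it only shows the principle that redundancy in the way data is recorded can be removed and then fully recovered.

```python
def rle_encode(data: str) -> list[tuple[str, int]]:
    """Collapse runs of repeated characters into (character, count) pairs."""
    encoded: list[tuple[str, int]] = []
    for ch in data:
        if encoded and encoded[-1][0] == ch:
            encoded[-1] = (ch, encoded[-1][1] + 1)
        else:
            encoded.append((ch, 1))
    return encoded

def rle_decode(encoded: list[tuple[str, int]]) -> str:
    """Reverse the encoding exactly; no information is discarded."""
    return "".join(ch * count for ch, count in encoded)

row = "WWWWWWWWWWBBBWWWWWWWW"        # e.g., one row of a black-and-white image
packed = rle_encode(row)
assert rle_decode(packed) == row     # lossless: the original is fully recovered
print(packed)                        # [('W', 10), ('B', 3), ('W', 8)]
```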
Let's talk about digital artifacts now. A common artifact in JPEG images is the presence
of blocky areas, such as shown in Figure 2.2. JPG is a lossy compression scheme and
it throws away fine details. When those details are discarded, you get blockiness.
Artifacts in digital sound take various forms, but a common one involves a metallic
quality in the sound of highly compressed MP3 files. Video exhibits a large number of
possible artifacts; see Basith & Done (1996) for a detailed discussion of them.
Figure 2.2. Digital artifacts in JPEG compression. The top image is uncompressed while
the bottom image is compressed.
Images are from the Wikimedia Commons and are licensed under
a Creative Commons Attribution ShareAlike 3.0 license
Let's now take a look at each digital media modality. We'll also look at hypermedia and
interactivity as part of this discussion.
After reading the sections that individually dealt with each media modality, the next
question you might be asking yourself is: how do I put it all together?
As a browser-based hypertext document on the Web
Consider this course, EDDE 221. This course is presented to you as a series of
webpages written in XHTML, or Extensible Hypertext Markup Language. In order to
quickly put together these pages, I used an online service provided by Google called
Google Sites. Using Google Sites, I was able to embed videos and images alongside
written text. In effect, I am doing all my writing and editing online. The advantage is that
I don't need to keep my offline document synchronized with the version online. The
downside is that when I don't have access to the Web, I don't have access to my
document.
You can also edit your hypertext document offline and then upload it to your webhosting
server using FTP, which stands for File Transfer Protocol. A good (and free) FTP
program is Filezilla, while you can get free webhosting from a number of sites. I keep a
list of sites that provide free hosting on http://delicious.com/dmaranan/webhosting. You
have more control over the look and feel of your hypertext document if you go this route,
but you will need to be more proficient with authoring skills, such as HTML authoring,
formatting using CSS, and using scripting languages such as JavaScript.
Note that you can transform practically any document into a PDF file. Transforming a
Microsoft PowerPoint or OpenOffice Impress presentation into a PDF file helps
guarantee that your intended audience will be able to view your document, even if you
are not sure that they have PowerPoint or Impress installed on their computers.
In the sections focusing on specific media modalities, you will notice that I place an
emphasis on free and open source software in this module, but these tools may not be able to
cover all your needs. Choosing the right software is part of technology selection, which
is an important part of the design process; we discuss technology selection in Module 4.
"Hybrid" forms
The previous discussions around each type of media modality should not deter you from
experimenting with what I loosely called "hybrid" forms. For example, consider Video
2.1. This style of video tutorial was popularized by Common Craft
(www.commoncraft.com). It's video, but it feels a lot like an animation. Implementing this
tutorial using the usual digital animation and graphics tools (like Flash and Photoshop)
would have taken far more time than it did to generate it in this "low-tech" way. If you
already know how to digitally edit video footage, you can create animations with
Common Craft's technique.
Another kind of "hybrid" form is the audio book: instead of relying on vision-centric
methods to present your content to the user, you could choose spoken text as the
primary delivery mechanism for your target audience, particularly if you've ascertained
through a needs-assessment process that this method is optimal for your target
audience. Unlike written text, which allows your audience to quickly scan a document
and figure out where they left off, audio is received by the user in a more linear fashion.
Cross-referencing between learning modules is more difficult to do. However, employing
an audio-centric strategy might allow your user to work on other tasks while listening to
educational material, though whether they can do both without compromising either
depends on factors that may be beyond your control.
Anglin, G. J., Vaez, H., & Cunningham, K. L. (2004). Visual representations and
learning: The role of static and animated graphics. In D. H. Jonassen
(Ed.), Handbook of research on educational communications and
technology (2nd ed., pp. 865–916). Lawrence Erlbaum Associates. Retrieved
from http://institute.nsta.org/scipack_research/AECT_Animated_Graphics_33.pdf
"Sam and Anita". (n.d.). Comic Sans. Vimeo. Retrieved June 11, 2009,
from http://vimeo.com/1994310
Barfield, L. (2004). Design for New Media: Interaction Design for Multimedia and
the Web (1st ed.). Addison Wesley.
Barron, A. E. (2004). Auditory instruction. In Handbook of research on
educational communications and technology (Vol. 2, pp. 949–978). Retrieved
from http://jan.ucc.nau.edu/~etc-c/etc667/2006/readings/Barron-2004-
AuditoryInstruction.pdf
Bartram, L. (2009). IAT 814: Knowledge Visualization Lecture Slides. Simon
Fraser University.
Basith, S. A. (1996, May 24). Digital Video : An Introduction. Information
Systems Engineering Department of Computing and Department of Electrical
and Electronic Engineering, Imperial College of London. Retrieved January 30,
2010, from http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol1/sab/article1.html
Basith, S. A., & Done, S. R. (1996, June 14). Digital Video, MPEG and
Associated Artifacts. Information Systems Engineering Department of Computing
and Department of Electrical and Electronic Engineering, Imperial College of
London. Retrieved February 2, 2010,
from http://www.doc.ic.ac.uk/~nd/surprise_96/journal/vol4/sab/report.html
Brief introduction to typography. (n.d.). Western Illinois University. Retrieved
June 11, 2009, from http://www.wiu.edu/art/courses/handouts/type.htm
Gillespie, J. (2000). Typography. Web Page Design for Designers. Retrieved
November 12, 2008, from http://www.wpdfd.com/issues/23/typography/
Levin, G., Nigam, K., & Feinberg, J. (2006, February 14). The
Dumpster. Retrieved May 3, 2008,
from http://www.tate.org.uk/netart/bvs/thedumpster.htm
Hede, A. (2002). Integrated Model of Multimedia Effects on Learning. Journal of
Educational Multimedia and Hypermedia, 11(2), 177.
Hornak, E. (n.d.). Introduction to Typography. Introduction to Desktop Publishing
- Rochester Institute of Technology. Retrieved June 11, 2009,
Huang, J. (2010). Digitization of Sound. CMPT 365, Simon Fraser University,
Spring Semester. Retrieved from http://www.sfu.ca/~jha48/notes/6_1.pdf
Hundhausen, C., & Douglas, S. (2000). Using visualizations to learn algorithms:
should students construct their own, or view an expert's? In Visual Languages,
2000. Proceedings. 2000 IEEE International Symposium on (pp. 21-28).
Presented at the Visual Languages, 2000. Proceedings. 2000 IEEE International
Symposium on. doi:10.1109/VL.2000.874346
Johnson, R. (2009, January 3). The Gutenburg Diagram in Design. Web Design
Marketing Podcast & Blog. Retrieved June 11, 2009,
from http://www.3point7designs.com/blog/2009/01/03/the-gutenburg-diagram-in-
design/
Kenney, A. R., Rieger, O. Y., & Entlich, R. (2003, February 20). Moving Theory
Into Practice : Digital Imaging Tutorial. Cornell University Library/Research
Department. Retrieved June 30, 2009,
from http://www.library.cornell.edu/preservation/tutorial/intro/intro-01.html
Long, B., & Schenk, S. (2002). The digital filmmaking handbook. Cengage
Learning.
Lowe, R. (2004). Interrogation of a dynamic visualization during
learning. Learning and Instruction, 14(3), 257-274.
doi:10.1016/j.learninstruc.2004.06.003
Lynch, P. J., & Horton, S. (2004, March 5). Typography. Web Style Guide. Retrieved
November 12, 2008, from http://webstyleguide.com/type/index.html
Lynch, P. J., & Horton, S. (2009). Web Style Guide. Yale University Press.
Retrieved from http://www.webstyleguide.com/wsg3/8-typography/index.html
MIT News Office. (n.d.). Using the inverse pyramid structure, from "Writing News:
A Quick Primer". Retrieved June 11, 2009,
from http://web.mit.edu/newsoffice/write-news.html#4
Nelson, T. H., Smith, R. A., & Mallicoat, M. (2007). Back to the future: hypertext
the way it used to be. In Proceedings of the eighteenth conference on Hypertext
and hypermedia (p. 228). Retrieved
from http://xanadu.com/XanaduSpace/btf.htm
Nelson, T. H. (1965). Complex information processing. In Proceedings of the
1965 20th national conference on - (pp. 84-100). Presented at the the 1965 20th
national conference, Cleveland, Ohio, United States. doi:10.1145/800197.806036
Poynton, C. (1996). Chapter 1, Basic Principles. In A technical introduction to
digital video. New York: J. Wiley. Retrieved
from http://www.poynton.com/PDFs/TIDV/Basic_principles.pdf
Robinson, D. H., & Schraw, G. (2008). Recent innovations in educational
technology that facilitate student learning. IAP. Retrieved from
http://books.google.com/books?id=6DukxLc-8qcC&lr=&source=gbs_navlinks_s
Ryan, T. A., & Schwartz, C. B. (1956). Speed of perception as a function of mode
of representation. The American Journal of Psychology, 69(1), 60–69.
Savage, T. M., & Vogel, K. (2008a). Text. In An Introduction to Digital
Multimedia (1st ed.). Jones & Bartlett Publishers. Retrieved
from http://sites.google.com/a/upou.edu.ph/edde-221/files/Savage
%26Vogel_Intro.Digital.Multimedia.Text.PWD.pdf?attredirects=0&d=1
Savage, T. M., & Vogel, K. (2008b). Animation. In An Introduction to Digital
Multimedia (1st ed., pp. 199-208). Jones & Bartlett Publishers. Retrieved
from http://sites.google.com/a/upou.edu.ph/edde-221/files/Savage
%26Vogel_Intro.Digital.Multimedia.Text.PWD.pdf?attredirects=0&d=1
Schnotz, W., & Rasch, T. (2002). Enabling, Facilitating, and Inhibiting Effects in
Learning from Animated Pictures. Proceedings of the International Workshop on
Dynamic Visualizations and Learning. Retrieved from http://www.iwm-
kmrc.de/workshops/visualization/schnotz.pdf
Sweller, J. (2002). Visualisation and Instructional Design. Proceedings of the
International Workshop on Dynamic Visualizations and Learning, 1501-1510.
Tam, K. (2006). Digital typography : a primer. Retrieved
from http://keithtam.net/documents/keithtam_digital_type_primer.pdf
Tuovinen, J. E. (2001). Cognition research basis for instructional multimedia.
In Design and management of multimedia information systems (pp. 323-335). IGI
Publishing. Retrieved from http://books.google.com/books?id=-XF--
zCwhiEC&lpg=PP1&pg=PA146#v=onepage&q=&f=false
United States Library of Congress. (2009). Alphabetical List of Audio Formats.
Retrieved January 29, 2010,
from http://www.digitalpreservation.gov/formats/fdd/browse_list.shtml
Uttal, D. H., & O'Doherty, K. (2008). Comprehending and Learning from
'Visualizations': A Developmental Perspective. In J. K. Gilbert, M. Reiner, & M.
Nakleh (Eds.), Visualization: Theory and Practice in Science Education, Models
and Modeling in Science Education (Vol. 3, pp. 53-72). Springer. Retrieved
from http://books.google.com/books?id=35Ik6jgavIIC&source=gbs_navlinks_s
Warde, B. (1956). The crystal goblet, or printing should be invisible. The Crystal
Goblet: Sixteen Essays on Typography. New York. Retrieved
from http://www.d.umn.edu/~jjacobs1/PhD/papers/Warde/THE%20CRYSTAL
%20GOBLET.pdf
Uses of Multimedia
Introduction
In this module, we enumerate some multimedia tools and multimedia products and
classify them according to some useful typologies. Before we do, recall how the course
introduction emphasized that EDDE 221 would concentrate on design and evaluation of
multimedia for learning objects instead of the role that multimedia plays in the three
other stages of planning and delivering instructional material. However, you should keep
in the back of your mind that multimedia does play a role along multiple levels, and
many of the tools and products discussed in this module are significant in other stages
of design and delivery of instructional material.
The most obvious classification scheme organizes tools and products according to the
various types of media that they use. Table 1 does precisely this and lists the digital
tools and products mentioned in Appendix 3 of Beetham & Sharpe (2007), which we
discuss in more length in the next section. See Table 1 before proceeding (clicking the
link will open a new window).
Most of these tools should be familiar to you. Integrated learning systems as Roblyer
has defined them, however, may not be. ILS are similar to course management
software such as Moodle, but broader in scope. They can come prepackaged with
instructional content, including instructional objectives, lessons integrated into standard
curricula, educational software for each grade level, and a management system (Bailey
and Lumley, 1991, cited in Roblyer, 2005). A look through the PLATO website, for
example, tells you that PLATO software solutions tailor the ILS they deliver to their
USA-based clients to match state and national standards, including standardized tests.
Roblyer also gives a detailed overview in Chapter 4 of her book of how to use what she
calls the “basic three” software (spreadsheet software, word processing software, and
database software) for instructional purposes. In Chapter 5, she discusses software that
is not directly used in instruction but can be used to support teaching and learning in
other ways, including the following:
You can visit the companion website of Roblyer's book by going to the URL listed in the
Bibliography section. The website contains many additional resources and links that you
might find useful for integrating technology in your practice.
Classifying tools and products by the learning experiences they support (Laurillard's
media forms)
In her 2003 book, Rethinking University Teaching: A Conversational Framework for the
Effective Use of Learning Technologies, Diana Laurillard discusses how educational
technology can support learning through the lens of what she calls a conversational
framework. Under this framework, learning is refined through a continuous process of
dialogue, which can take place between teacher and student or through "the student's
own internal dialogue" (Laurillard, 2003: 88). A diagram from the book, representing the
activities embedded in the framework, is reprinted as Figure 1.
Table 3. Five principal media forms with the learning experiences they support and the
methods used to deliver them (from Laurillard, 2003: 90)
There are several points to be made about Laurillard's classification scheme and
Beetham & Sharpe's use of it in their own typology. Of all the media forms they discuss,
Beetham & Sharpe's summary of the advantages, risks, and examples of
communicative media forms is the most detailed, comprehensive, and straightforward.
The other forms, however, seem to constantly be on the verge of slipping into each
other's areas of responsibility, so to speak, a slippage that is caused mostly by
slippages in ever-changing definitions of interaction and information.
Narrative media and productive media
In Laurillard's original formulation, narrative media is not only linear (which is what
makes the media narrative) and often time-based, but the information flows only in the
direction from the teacher to the student. (Laurillard is highly critical of narrative media
that prevents users from controlling the speed and manner of learning: live lectures are
the worst, in her analysis, because the student is forced to sit through them.) Beetham
& Sharpe have relaxed Laurillard's definition of narrative media, and merely require the
media to be representative of something. They have also added an additional
distinction within narrative media: narrative media for reception (where information flows
from teacher to student) and narrative media for production (where information flows
from student to teacher). "Production", in the sense of narrative media, pertains to the
ability to produce media, and should not be confused with Laurillard's productive media
form, which Beetham & Sharpe have instead applied to refer to systems that manipulate
data. In fact, "productive media" seems to be a misnomer for what could be more simply
called data manipulation media or technologies.
Interactive media and adaptive media
Laurillard's original definition of interactive media is predicated on interactivity as the
ability to navigate hypermedia and hyperlinked documents. The obvious challenge of
using hypermedia is avoiding "information overload" by presenting the student with too
many links. (Beetham & Sharpe 2007, Parlangeli 1999). But the more subtle danger
hinges on the student's reliance on received structures in hypermedia and hyperlinked
documents. Laurillard observes pointedly:
"[W]e do not typically create the links. We follow the links created for us... The
presentational qualities of hypermedia are better suited to the focused, goal-oriented
gathering of information and ideas by the student who has their own narrative in mind...
[W]ithin an educational experience provided by a non-linear narrative medium, such as
hypermedia, we must take care to help learners maintain their own narrative line"
(Laurillard, 2003)
However, Beetham & Sharpe have taken interactivity to be closer to the idea
of information retrieval and have located navigation as a concern in narrative media
forms. This makes some sense because navigation through an information space, after
all, is always a critical issue regardless of whether the information space is linear,
hierarchical, or heterogeneously connected (i.e., rhizomatic qua Deleuze and Guattari).
This is why Beetham & Sharpe have classified webpages (which are almost always
hyperlinked in nature) as narrative instead of interactive in form.
Though Beetham & Sharpe define interaction in a broader sense than does Laurillard---
interaction "return[s] information based on user input" (Beetham & Sharpe, 2007)---they
both rely on a specific definition of information, in that the information returned, in
essence, has to be text- or graphics-based, and satisfies some kind of problem or
query. But information does not have to be restricted as such. To illustrate, consider
virtual-reality-based surgery simulations through haptic interfaces by Arbabtafti et al
(2008), among others. A haptic interface provides force feedback to a user; in this case,
a user learns to wield a specially-equipped "surgery blade" that responds to a virtual
model of a bone and that provides the user with the appropriate amount of resistance as
she cuts through virtual skeletal structures. Using this highly-specialized learning tool,
surgeons learn to perform bone surgery in a safe and realistic manner.
This simulation (which is what this tool is) would be classified as an example of adaptive
media. But if you think about it, the system certainly returns information (in this case,
force feedback) in response to user input (user movement). However, this tool cannot
be considered narrative or representational. It should be noted that in the fields of user
interface design and computing science, what Laurillard and Beetham & Sharpe call
adaptive media can be collectively referred to as immersive (or virtual) environments.
"An adaptive item is an item that adapts either its appearance, its scoring (Response
Processing) or both in response to each of the candidate's attempts. For example, an
adaptive item may start by prompting the candidate with a box for free-text entry but, on
receiving an unsatisfactory answer, present a simple choice interaction instead and
award fewer marks for subsequently identifying the correct response. Adaptivity allows
authors to create items for use in formative situations which both help to guide
candidates through a given task while also providing an outcome that takes into
consideration their path."(Ibid.)
Systems that support adaptive quizzes cannot easily be placed in the same category
as virtual worlds and simulations. But they are clearly more sophisticated than simple
computer-assisted assessment systems.
A renaming of types
The previous discussion suggests an alternative set of nomenclature, listed in Table 4,
to Laurillard's classification scheme as applied specifically to Beetham & Sharpe's
typology. Laurillard's original nomenclature is confusing because the terms alternately
describe what the technologies do (adapt to users, produce new data, interact with user
queries), what the technologies produce in the act of using them (narratives), and what
the users can do with the technologies (communicate with other people).
Partly for fun, but also partly to illustrate a particularly interesting tool, the information in
Table 1 has been incorporated in an interactive, hyperlinked, loosely-hierarchical
mindmap. It is recommended that you explore this mindmap and note for yourself the
advantages and risks associated with organizing information in such a manner, from the
standpoint of someone creating a mindmap and reading someone else's mindmap.
Now that this module has provided you a typology (Beetham & Sharpe's application
of Laurillard's media forms) and a set of examples (listed in both Table 1 and
Appendix 1)...
... can you apply another existing typology or create a new typology that can
be used to classify the set of examples provided?
... can you add more digital tools and products to the typology discussed in
this module?
A color look-up table (LUT) is a mechanism used to transform a range of input colors
into another range of colors. A color look-up table converts the logical color numbers
stored in each pixel of video memory into physical colors, represented as RGB triplets,
which can be displayed on a computer monitor. Each pixel of the image stores only an
index value, or logical color number; for example, if a pixel stores the value 30, that
means "go to row 30 in the color look-up table." The LUT is often called a palette.
The characteristics of a LUT are as follows:
The number of entries in the palette determines the maximum number of colors
which can appear on screen simultaneously.
The width of each entry in the palette determines the number of colors that the
full palette can represent.
A common example is a palette of 256 colors: the number of entries is 256, so each
entry is addressed by an 8-bit pixel value. Each color can be chosen from a full palette
of 16.7 million colors, since each entry is 24 bits wide (8 bits per channel), giving 256
levels for each of the red, green, and blue components: 256 x 256 x 256 = 16,777,216 colors.
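The following Python sketch illustrates the indexing mechanism just described. The palette contents and pixel values are invented for illustration, and no particular graphics system is assumed.

```python
# Build a 256-entry palette of 24-bit (R, G, B) triplets. Entry 30 is set to an
# arbitrary color so that the example pixel value 30 used above maps to something visible.
palette = [(0, 0, 0)] * 256
palette[30] = (255, 128, 0)

# Each pixel of the indexed image stores only a logical color number (0-255).
indexed_pixels = [30, 30, 0, 30]

# Displaying the image means looking each index up in the LUT to get a physical color.
rgb_pixels = [palette[index] for index in indexed_pixels]
print(rgb_pixels)   # [(255, 128, 0), (255, 128, 0), (0, 0, 0), (255, 128, 0)]
```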
Sources of Videos
Types of Video Signals
Video signals can be organized in three different ways: component video, composite
video, and S-video.
Component Video
Component video is a video signal that has been split into two or more component
channels. In popular use, it refers to a type of component analog video (CAV)
information that is transmitted or stored as three separate signals. Component video
can be contrasted with composite video (NTSC, PAL, or SECAM), in which all the video
information is combined into a single line-level signal that is used in analog television.
Like composite video, component-video cables do not carry audio and are often paired with
audio cables.
When used without any other qualifications, the term component video generally refers
to analog YPbPr component video with sync on luma.
Composite Video
Composite video (1 channel) is an analog video transmission (no audio) that carries
standard-definition video, typically at 480i or 576i resolution. Video information is
encoded on one channel, in contrast with slightly higher quality S-video (2 channels)
and even higher quality component video (3 channels).
Composite video is usually in standard formats such as NTSC, PAL, and SECAM and is
often designated by the CVBS initialism, meaning "Color, Video, Blanking, and Sync."
S-Video
Separate Video (2 channels), more commonly known as S-Video or Y/C, is an
analog video transmission (no audio) that carries standard-definition video, typically at
480i or 576i resolution. Video information is encoded on two channels: luma (luminance,
intensity, "Y") and chroma (colour, "C"). This separation is in contrast with slightly lower
quality composite video (1 channel) and higher quality component video (3 channels).
It is often referred to by JVC (who introduced the DIN connector) as both an S-VHS
connector and as Super Video.
The four-pin mini-DIN connector is the most common of several S-Video connector
types. Other connector variants include seven-pin locking "dub" connectors used on
many professional S-VHS machines, and dual "Y" and "C" BNC connectors, often used
for S-Video patch panels. Early Y/C video monitors often used phono (RCA) connectors
that were switchable between Y/C and composite video input. Though the connectors
are different, the Y/C signals for all types are compatible.
Since voltage is one-dimensional (it is simply a signal that varies with time), how do
we know when a new video line begins? That is, what part of an electrical signal tells us
that we have to restart at the left side of the screen?
The solution used in analog video is a small voltage offset from zero to indicate black
and another value, such as zero, to indicate the start of a line. Namely, we could use a
"blacker-than-black" zero signal to indicate the beginning of a line.
The following figure shows a typical electronic signal for one scan line of NTSC
composite video. 'White' has a peak value of 0.714 V; 'Black' is slightly above zero at
0.055 V; whereas 'Blank' is at zero volts. As shown, the time duration for blanking pulses
in the signal is used for synchronization as well, with the tip of the sync signal at
approximately -0.286 V. In fact, the problem of reliable synchronization is so important
that special signals to control sync take up about 30% of the signal!
Electronic signal for one NTSC scan line
The vertical retrace and sync ideas are similar to the horizontal one, except that they
happen only once per field.
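As a rough illustration of the voltage levels quoted above, the Python sketch below linearly maps a normalized luminance value onto the black-to-white range of an NTSC composite signal. The linear mapping and the function name are assumptions made for illustration only; sync pulses and blanking intervals are not modeled.

```python
BLACK_LEVEL = 0.055   # volts, "slightly above zero"
WHITE_LEVEL = 0.714   # volts, peak white
SYNC_TIP = -0.286     # volts, the "blacker-than-black" sync level (for reference)

def luma_to_voltage(luma: float) -> float:
    """Map normalized luminance (0.0 = black, 1.0 = white) onto the active-video range."""
    luma = max(0.0, min(1.0, luma))   # clamp to the legal range
    return BLACK_LEVEL + luma * (WHITE_LEVEL - BLACK_LEVEL)

print(round(luma_to_voltage(0.0), 3))   # 0.055 (black)
print(round(luma_to_voltage(1.0), 3))   # 0.714 (white)
```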
NTSC Video
NTSC, named for the National Television System Committee, is the analog television
system that is used in most of North America, parts of South America (except Brazil,
Argentina, Uruguay, and French Guiana), Myanmar, South Korea, Taiwan, Japan, the
Philippines, and some Pacific island nations and territories.
Most countries using the NTSC standard, as well as those using other analog television
standards, are switching to newer digital television standards, of which at least four
different ones are in use around the world. North America, parts of Central America, and
South Korea are adopting the ATSC standards, while other countries are adopting or
have adopted other standards.
The first NTSC standard was developed in 1941 and had no provision for color
television. In 1953 a second modified version of the NTSC standard was adopted, which
allowed color television broadcasting compatible with the existing stock of black-and-white
receivers. NTSC was the first widely adopted broadcast color system and
remained dominant where it had been adopted until the first decade of the 21st century,
when it was replaced with digital ATSC. After nearly 70 years of use, the vast majority of
over-the-air NTSC transmissions in the United States were turned off on June 12,
2009, and on August 31, 2011 in Canada and most other NTSC markets.
Digital broadcasting permits higher-resolution television, but digital standard definition
television in these countries continues to use the frame rate and number of lines of
resolution established by the analog NTSC standard; systems using the NTSC frame
rate and resolution (such as DVDs) are still referred to informally as "NTSC". NTSC
baseband video signals are also still often used in video playback (typically of
recordings from existing libraries using existing equipment) and in CCTV and
surveillance video systems.
Video raster, including retrace and sync data
Samples per line for various analog video formats
Different video formats provide different numbers of samples per line, as listed in the
above table. Laser disks have about the same resolution as Hi-8. (In comparison, mini-DV
1/4-inch tapes for digital video are 480 lines by 720 samples per line.)
Interleaving Y and C signals in the NTSC spectrum
PAL Video
PAL (Phase Alternating Line) is a TV standard originally invented by German scientists.
It uses 625 scan lines per frame, at 25 frames per second (or 40 ms per frame), with a
4:3 aspect ratio and interlaced fields. Its broadcast TV signals are also used in
composite video. This important standard is widely used in Western Europe, China,
India and many other parts of the world.
PAL uses the YUV color model with an 8 MHz channel, allocating a bandwidth of 5.5
MHz to Y and 1.8 MHz each to U and V. The color subcarrier frequency is fsc ≈ 4.43
MHz. To improve picture quality, chroma signals have alternate signs (e.g., +U and -U)
in successive scan lines; hence the name "Phase Alternating Line". This facilitates the
use of a (line-rate) comb filter at the receiver: the signals in consecutive lines are
averaged so as to cancel the chroma signals (which always carry opposite signs) for
separating Y and C and obtaining high-quality Y signals.
SECAM Video
SECAM, which was invented by the French, is the third major broadcast TV standard.
SECAM stands for Systeme Electronique Couleur Avec Memoire. SECAM also uses
625 scan lines per frame, at 25 frames per second, with a 4:3 aspect ratio and
interlaced fields. The original design called for a higher number of scan lines (over 800),
but the final version settled for 625.
SECAM and PAL are similar, differing slightly in their color coding scheme. In SECAM,
U and V signals are modulated using separate color subcarriers at 4.25 MHz and 4.41
MHz, respectively. They are sent on alternate lines; that is, only one of the U or V
signals will be sent on each scan line.
Table Comparison of the analog broadcast TV systems.
DIGITAL VIDEO
Digital video comprises a series of orthogonal bitmap digital images displayed in rapid
succession at a constant rate. In the context of video these images are called frames.
We measure the rate at which frames are displayed in frames per second (FPS).
Since every frame is an orthogonal bitmap digital image, it comprises a raster of pixels. If
it has a width of W pixels and a height of H pixels, we say that the frame size is W x H.
Pixels have only one property: their color. The color of a pixel is represented by a fixed
number of bits. The more bits, the more subtle the variations of color that can be
reproduced. This is called the color depth (CD) of the video.
An example video might have a duration (T) of 1 hour (3,600 sec), a frame size of 640 x
480 (W x H) at a color depth of 24 bits, and a frame rate of 25 fps. This example video
has the following properties:
pixels per frame = 640 x 480 = 307,200
bits per frame = 307,200 x 24 = 7,372,800 ≈ 7.37 Mbits
bit rate (BR) = 7.37 Mbits x 25 = 184.25 Mbits/sec
video size (VS) = 184 Mbits/sec x 3,600 sec = 662,400 Mbits = 82,800 Mbytes =
82.8 Gbytes
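The same arithmetic can be reproduced with a few lines of Python, which makes it easy to recompute the figures for other frame sizes, color depths, frame rates, and durations. This is only a restatement of the calculation above, not a formula taken from any particular tool.

```python
def uncompressed_video_size(width, height, color_depth_bits, fps, duration_s):
    """Return (bits per frame, bit rate in bits/s, total size in bits) for raw video."""
    bits_per_frame = width * height * color_depth_bits
    bit_rate = bits_per_frame * fps
    total_bits = bit_rate * duration_s
    return bits_per_frame, bit_rate, total_bits

bits_per_frame, bit_rate, total_bits = uncompressed_video_size(640, 480, 24, 25, 3600)
print(bits_per_frame / 1e6)    # 7.3728  (about 7.37 Mbits per frame)
print(bit_rate / 1e6)          # 184.32  (about 184 Mbits per second)
print(total_bits / 8 / 1e9)    # 82.944  (the 82.8 Gbytes above reflects intermediate rounding)
```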
The advantages of digital representation for video are many. It permits
Storing video on digital devices or in memory, ready to be processed (noise
removal, cut and paste, and so on) and integrated into various multimedia
applications
Direct access, which makes nonlinear video editing simple
Repeated recording without degradation of image quality
Ease of encryption and better tolerance to channel noise
In earlier Sony or Panasonic recorders, digital video was in the form of composite video.
Modern digital video generally uses component video, although RGB signals are first
converted into a certain type of color opponent space, such as YUV. The usual color
space is YCbCr.
Chroma Subsampling
Since humans see color with much less spatial resolution than black and white, it makes
sense to decimate the chrominance signal. Interesting but not necessarily informative
names have arisen to label the different schemes used. To begin with, numbers are
given stating how many pixel values, per four original pixels, are actually sent. Thus the
chroma subsampling scheme "4:4:4" indicates that no chroma subsampling is used.
Each pixel's Y, Cb, and Cr values are transmitted, four for each of Y, Cb, and Cr.
The scheme "4:2:2" indicates horizontal subsampling of the Cb and Cr signals by a
factor of 2. That is, of four pixels horizontally labeled 0 to 3, all four 7s are sent, and
every two Cbs and two Crs are sent, as {CbO, Y0)(Cr0, Yl)(Cb2, Y2)(Cr2, Y3)(Cb4,
Y4), and so on.
The scheme "4:1:1" subsamples horizontally by a factor of 4. The scheme "4:2:0"
subsamples in both the horizontal and vertical dimensions by a factor of 2.
Theoretically, an average chroma pixel is positioned between the rows and columns, as
shown in the below figure. We can see that the scheme 4:2:0 is in fact another kind of
4:1:1 sampling, in the sense that we send 4, 1, and 1 values per 4 pixels. Therefore, the
labeling scheme is not a very reliable mnemonic!
Scheme 4:2:0, along with others, is commonly used in JPEG and MPEG.
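The Python sketch below shows the idea behind 4:2:0 subsampling: every Y value is kept, while Cb and Cr are each reduced to a single averaged value per 2x2 block of pixels, so 4 + 1 + 1 values are stored per four pixels. Real codecs position and filter the chroma samples more carefully; this is only a toy illustration.

```python
def subsample_420(y, cb, cr):
    """y, cb, cr are 2D lists with equal, even dimensions; returns (y, cb', cr')."""
    h, w = len(cb), len(cb[0])

    def average_blocks(plane):
        # Average each 2x2 block into one value (integer division keeps the 8-bit range).
        return [[(plane[r][c] + plane[r][c + 1] + plane[r + 1][c] + plane[r + 1][c + 1]) // 4
                 for c in range(0, w, 2)]
                for r in range(0, h, 2)]

    return y, average_blocks(cb), average_blocks(cr)

y  = [[16, 18], [20, 22]]          # luma: all four samples are kept
cb = [[128, 130], [126, 132]]      # chroma blue-difference
cr = [[120, 120], [124, 124]]      # chroma red-difference
_, cb_sub, cr_sub = subsample_420(y, cb, cr)
print(cb_sub, cr_sub)              # [[129]] [[122]]
```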
CCIR Standards for Digital Video
The CCIR is the Consultative Committee for International Radio. One of the most
important standards it has produced is CCIR - 601, for component digital video. This
standard has since become standard ITU - R - 601, an international standard for
professional video applications. It is adopted by certain digital video formats, including
the popular DV video.
The NTSC version has 525 scan lines, each having 858 pixels (with 720 of them visible,
not in the blanking period). Because the NTSC version uses 4:2:2, each pixel can be
represented with two bytes (8 bits for Y and 8 bits alternating between Cb and Cr). The
CCIR 601 (NTSC) data rate (including blanking and sync but excluding audio) is thus
approximately 216 Mbps (megabits per second):
525 x 858 x 30 x 2 bytes x 8 bits/byte ≈ 216 Mbps
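The figure can be checked with a few lines of Python; this is simply a restatement of the multiplication above.

```python
lines, samples_per_line, fps, bytes_per_pixel = 525, 858, 30, 2
data_rate_bps = lines * samples_per_line * fps * bytes_per_pixel * 8
print(data_rate_bps / 1e6)   # 216.216, i.e. approximately 216 Mbps
```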
During blanking, digital video systems may make use of the extra data capacity to carry
audio signals, translations into foreign languages, or error - correction information.
The following table shows some of the digital video specifications, all with an aspect
ratio of 4:3. The CCIR 601 standard uses an interlaced scan, so each field has only half
as much vertical resolution (e.g., 240 lines in NTSC).
Table Digital video specifications
CIF stands for Common Intermediate Format, specified by the International Telegraph
and Telephone Consultative Committee (CCITT), now superseded by the International
Telecommunication Union, which oversees both telecommunications (ITU-T) and radio
frequency matters (ITU-R) under one United Nations body. The idea of CIF, which is
about the same as VHS quality, is to specify a format for lower bitrates. CIF uses a
progressive (noninterlaced) scan. QCIF stands for Quarter-CIF and is for even lower
bitrates. All the CIF/QCIF resolutions are evenly divisible by 8, and all except 88 are
divisible by 16; this is convenient for block-based video coding in H.261 and H.263.
CIF is a compromise between NTSC and PAL, in that it adopts the NTSC frame rate
and half the number of active lines in PAL. When played on existing TV sets, NTSC TV
will first need to convert the number of lines, whereas PAL TV will require frame-rate
conversion.
High Definition TV (HDTV)
The introduction of wide-screen movies brought the discovery that viewers seated near
the screen enjoyed a level of participation (sensation of immersion) not experienced
with conventional movies. Apparently the exposure to a greater field of view, especially
the involvement of peripheral vision, contributes to the sense of "being there". The main
thrust of High Definition TV (HDTV) is not to increase the "definition" in each unit area,
but rather to increase the visual field, especially its width.
First-generation HDTV was based on an analog technology developed by Sony and
NHK in Japan in the late 1970s. HDTV successfully broadcast the 1984 Los Angeles
Olympic Games in Japan. Multiple sub-Nyquist Sampling Encoding (MUSE) was an
improved NHK HDTV with hybrid analog/digital technologies that was put in use in the
1990s. It has 1,125 scan lines, interlaced (60 fields per second), and a 16:9 aspect
ratio. It uses satellite broadcasting, quite appropriate for Japan, which can be covered
with one or two satellites.
The Direct Broadcast Satellite (DBS) channels used have a bandwidth of 24 MHz. In
general, terrestrial broadcast, satellite broadcast, cable, and broadband networks are all
feasible means for transmitting HDTV as well as conventional TV. Since uncompressed
HDTV will easily demand more than 20 MHz of bandwidth, which will not fit in the current 6
MHz or 8 MHz channels, various compression techniques are being investigated. It is
also anticipated that high-quality HDTV signals will be transmitted using more than one
channel, even after compression.
Table Advanced Digital TV Formats Supported by ATSC
In 1987, the FCC decided that HDTV standards must be compatible with the existing
NTSC standard and must be confined to the existing Very High Frequency (VHF) and
Ultra High Frequency (UHF) bands. This prompted a number of proposals in North
America by the end of 1988, all of them analog or mixed analog/digital.
In 1990, the FCC announced a different initiative — its preference for full - resolution
HDTV. They decided that HDTV would be simultaneously broadcast with existing NTSC
TV and eventually replace it. The development of digital HDTV immediately took off in
North America.
Witnessing a boom of proposals for digital HDTV, the FCC made a key decision to go
all digital in 1993. A "grand alliance" was formed that included four main proposals, by
General Instruments, MIT, Zenith, and AT&T, and by Thomson, Philips, Sarnoff and
others. This eventually led to the formation of the Advanced Television Systems
Committee (ATSC), which was responsible for the standard for TV broadcasting of
HDTV. In 1995, the U.S. FCC Advisory Committee on Advanced Television Service
recommended that the ATSC digital television standard be adopted.
The standard supports the video scanning formats shown in the table. In the table, "I" means
interlaced scan and "P" means progressive (noninterlaced) scan. The frame rates
supported are both integer rates and the NTSC rates — that is, 60.00 or 59.94, 30.00 or
29.97, 24.00 or 23.98 fps.
For video, MPEG-2 is chosen as the compression standard. As will be seen in a later
chapter, it uses Main Level to High Level of the Main Profile of MPEG-2. For audio,
AC-3 is the standard. It supports the so-called 5.1-channel Dolby surround sound:
five surround channels plus a subwoofer channel.
The salient difference between conventional TV and HDTV [4, 6] is that the latter has a
much wider aspect ratio of 16:9 instead of 4:3. (Actually, it works out to be exactly
one-third wider than current TV.) Another feature of HDTV is its move toward progressive
(noninterlaced) scan. The rationale is that interlacing introduces serrated edges to
moving objects and flickers along horizontal edges.
The FCC has planned to replace all analog broadcast services with digital TV
broadcasting by the year 2006. Consumers with analog TV sets will still be able to
receive signals via an 8-VSB (8-level vestigial sideband) demodulation box. The
services provided will include:
Standard Definition TV (SDTV): the current NTSC TV or higher
Enhanced Definition TV (EDTV): 480 active lines or higher (the third and fourth
rows of the table)
High Definition TV (HDTV): 720 active lines or higher. So far, the popular
choices are 720P (720 lines, progressive, 30 fps) and 1080I (1,080 lines,
interlaced, 30 fps or 60 fields per second). The latter provides slightly better
picture quality but requires much higher bandwidth.
Page-based tools organize elements as pages of a book. These tools are used when
the content of the project consists of elements that can be viewed individually, and they
organize those elements in a user-defined sequential form. Icon-based authoring tools
organize elements as objects; these tools display flow diagrams of activities along with
branching paths. Time-based authoring tools organize the elements along a timeline;
these tools play back the sequentially organized graphic frames at user-set speed and
time. Object-oriented tools organize the elements in a hierarchical order as related
"objects" and make these objects perform according to properties assigned to them. We
will give here a brief description of two such tools: Authorware (icon-based) and
Macromedia Director (time-based).
Macromedia Authorware has a visual interface, in which one simply drags and drops
icons to create an application. You do not need to be a programmer to use this software
as it has an interactive design. Authorware provides direct support for graphics and
animations made in Flash. Authorware can capture and integrate animations and video
made in different programmes like Flash and QuickTime. It can integrate sound into
your project in order to enhance the effect. It has an antialiasing feature which
smooths out the edges of text and graphics. Authorware has built-in templates which
give you flexibility and convenience while developing your project. You can learn about
basic authoring, editing and publishing ways with the help of a multimedia tutorial which
is built-in with this software.
Macromedia Director is a multimedia authoring application capable of producing
animations, presentations and movies. It provides a wide range of possibilities for
integrating different multimedia elements. It supports inputs from programs like
Shockwave, Photoshop, and Premiere. It has applications in building professional
multimedia presentations. You can also integrate RealAudio and RealVideo in Director
projects. Compatibility of Director with other packages means that you can use your
favorite tools and software to create content for your project and then bring that content
into Director for authoring.