Human Computer Interaction:
Concepts, Methodologies,
Tools, and Applications
Panayiotis Zaphiris
City University of London, UK
Chee Siang Ang
City University of London, UK
Volume I
Acquisitions Editor: Kristin Klinger
Development Editor: Kristin Roth
Senior Managing Editor: Jennifer Neidig
Managing Editor: Jamie Snavely
Typesetter: Michael Brehm, Jeff Ash, Carole Coulson, Elizabeth Duke, Jennifer Henderson, Chris Hrobak, Sean Woznicki
Cover Design: Lisa Tosheff
Printed at: Yurchak Printing Inc.
Copyright © 2009 by IGI Global. All rights reserved. No part of this publication may be reproduced, stored or distributed in any form or by
any means, electronic or mechanical, including photocopying, without written permission from the publisher.
Product or company names used in this set are for identification purposes only. Inclusion of the names of the products or companies does not
indicate a claim of ownership by IGI Global of the trademark or registered trademark.
Human computer interaction : concepts, methodologies, tools, and applications / Panayiotis Zaphiris and Chee Siang Ang, editors.
p. cm.
Includes bibliographical references and index.
Summary: "This reference book penetrates the human computer interaction (HCI) field a wide variety of comprehensive research papers
aimed at expanding the knowledge of HCI"--Provided by publisher.
ISBN 978-1-60566-052-3 (hardcover) -- ISBN 978-1-60566-053-0 (ebook)
1. Human-computer interaction. I. Zaphiris, Panayiotis. II. Ang, Chee Siang.
QA76.9.H85H8563 2008
004.01'9--dc22
2008035756
Editor-in-Chief
Associate Editors
Steve Clarke
University of Hull, UK
Murray E. Jennex
San Diego State University, USA
Annie Becker
Florida Institute of Technology, USA
Ari-Veikko Anttiroiko
University of Tampere, Finland
List of Contributors
Contents by Volume
Volume I.
Volume II.
Chapter 2.18 A Graphical User Interface (GUI) Testing Methodology /
Zafar Singhera, ZAF Consulting, USA; Ellis Horowitz, University of Southern California, USA;
and Abad Shah, R & D Center of Computer Science, Pakistan.......................................................... 659
Chapter 2.19 Socio-Cognitive Engineering /
Mike Sharples, University of Nottingham, Jubilee Campus, UK......................................................... 677
Chapter 2.20 On the Cognitive Processes of Human Perception
with Emotions, Motivations, and Attitudes / Yingxu Wang, University of Calgary, Canada.............. 685
Chapter 2.21 Sociotechnical System Design for Learning: Bridging
the Digital Divide with CompILE / Benjamin E. Erlandson, Arizona State University, USA............. 698
Chapter 2.22 Problem Frames for Sociotechnical Systems / Jon G. Hall, The Open University, UK;
and Lucia Rapanotti, The Open University, UK.................................................................................. 713
Chapter 2.23 Integrating Semantic Knowledge with Web Usage Mining for Personalization /
Honghua Dai, DePaul University, USA; and Bamshad Mobasher, DePaul University, USA............ 732
Chapter 2.24 Modeling Variant User Interfaces for Web-Based Software Product Lines /
Suet Chun Lee, BUSINEX, Inc., USA................................................................................................... 760
Chapter 2.25 A User-Centered Approach to the Retrieval of Information in an Adaptive Web Site /
Cristina Gena, Università di Torino, Italy; and Liliana Ardissono, Università di Torino, Italy......... 791
Chapter 2.26 Auto-Personalization Web Pages /
Jon T. S. Quah, Nanyang Technological University, Singapore;
Winnie C. H. Leow, Singapore Polytechnic, Singapore;
K. L. Yong, Nanyang Technological University, Singapore................................................................. 807
Chapter 2.27 A Qualitative Study in Users' Information-Seeking Behaviors on Web Sites:
A User-Centered Approach to Web Site Development /
Napawan Sawasdichai, King Mongkut's Institute of Technology, Thailand........................................ 816
Chapter 2.28 Developing and Validating a Measure of Web Personalization Strategy /
Haiyan Fan, Texas A&M University, USA; and Liqiong Deng, University of West Georgia, USA..... 850
Chapter 2.29 User Interface Formalization in Visual Data Mining /
Tiziana Catarci, University of Rome La Sapienza, Italy;
Stephen Kimani, University of Rome La Sapienza, Italy;
and Stefano Lodi, University of Bologna, Italy.................................................................................... 872
Chapter 2.30 Understanding the Nature of Task Analysis in Web Design /
Rod Farmer, The University of Melbourne, Australia;
and Paul Gruba, The University of Melbourne, Australia.................................................................. 899
Chapter 2.31 Designing for Tasks in Ubiquitous Computing: Challenges and Considerations /
Stephen Kimani, Jomo Kenyatta University of Agriculture and Technology, Kenya;
Silvia Gabrielli, University of Rome La Sapienza, Italy;
Tiziana Catarci, University of Rome La Sapienza, Italy;
and Alan Dix, Lancaster University, UK............................................................................................. 928
Chapter 2.32 Task Ontology-Based Human-Computer Interaction /
Kazuhisa Seta, Osaka Prefecture University, Japan............................................................................ 950
Chapter 2.33 Social Networking Theories and Tools to Support Connectivist Learning Activities /
M. C. Pettenati, University of Florence (IT), Italy;
and M. E. Cigognini, University of Florence (IT), Italy...................................................................... 961
Chapter 2.34 User-Centered Design Principles for Online Learning Communities:
A Sociotechnical Approach for the Design of a Distributed Community of Practice /
Ben K. Daniel, University of Saskatchewan, Canada;
David O'Brien, University of Saskatchewan, Canada;
and Asit Sarkar, University of Saskatchewan, Canada........................................................................ 979
Chapter 3.16 Plagiarism, Instruction, and Blogs / Michael Hanrahan, Bates College, USA............ 1251
Chapter 3.17 Twin Wiki Wonders? Wikipedia and Wikibooks as Powerful Tools
for Online Collaborative Writing / Meng-Fen Grace Lin, University of Houston, USA;
Curtis J. Bonk, Indiana University, USA; and Suthiporn Sajjapanroj, Indiana University, USA..... 1262
Chapter 3.18 Wikis as Tools for Collaboration / Jane Klobas, Bocconi University, Italy &
University of Western Australia, Australia........................................................................................ 1283
Chapter 3.19 Academic Weblogs as Tools for E-Collaboration Among Researchers /
María José Luzón, University of Zaragoza, Spain............................................................................ 1291
Chapter 3.20 Assessing Weblogs as Education Portals / Ian Weber, Texas A&M University,
USA.................................................................................................................................................... 1298
Chapter 3.21 Innovative Technologies for Education and Learning: Education
and Knowledge-Oriented Applications of Blogs, Wikis, Podcasts, and More /
Jeffrey Hsu, Fairleigh Dickinson University, USA............................................................................ 1308
Chapter 3.22 Ubiquitous Computing Technologies in Education /
Gwo-Jen Hwang, National University of Tainan, Taiwan;
Ting-Ting Wu, National University of Tainan, Taiwan;
and Yen-Jung Chen, National University of Tainan, Taiwan............................................................. 1330
Chapter 3.23 Utilizing Web Tools for Computer-Mediated Communication to Enhance
Team-Based Learning / Elizabeth Avery Gomez, New Jersey Institute of Technology, USA;
Dezhi Wu, Southern Utah University, USA; Katia Passerini, New Jersey Institute of Technology,
USA; and Michael Bieber, New Jersey Institute of Technology, USA................................................ 1334
Volume III.
Chapter 3.24 How Technology Can Support Culture and Learning /
David Luigi Fuschi, GIUNTI Labs S.r.l., Italy;
Bee Ong, University of Leeds, UK;
and David Crombie, DEDICON S.r.l., The Netherlands................................................................... 1350
Chapter 3.25 Tangible User Interfaces as Mediating Tools within
Adaptive Educational Environments / Daria Loi, RMIT University, Australia................................. 1388
Chapter 3.26 Facilitating E-Learning with Social Software: Attitudes and Usage from
the Students' Point of View / Reinhard Bernsteiner, University for Health Sciences,
Medical Informatics and Technology, Austria; Herwig Ostermann, University for
Health Sciences, Medical Informatics and Technology, Austria; and Roland Staudinger,
University for Health Sciences, Medical Informatics and Technology, Austria.....................................1402
...differences in the adoption of and reaction to IT, while later contributions offer an extensive analysis of differing social and cultural dimensions of technology adoption. The inquiries and methods presented in
this section offer insight into the implications of human-computer interaction at both a personal and
organizational level, while also emphasizing potential areas of study within the discipline.
Chapter 5.1 Gender, Race, Social Class and Information Technology /
Myungsook Klassen, California Lutheran University, USA;
and Russell Stockard, California Lutheran University, USA............................................................. 1729
Chapter 5.2 Gender and the Culture of Computing in Applied IT Education /
Susan C. Herring, Indiana University, USA; Christine Ogan, Indiana University, USA;
Manju Ahuja, Indiana University, USA; and Jean C. Robinson, Indiana University, USA.............. 1736
Chapter 5.3 Gender Equalization in Computer-Mediated Communication /
Rosalie J. Ocker, The Pennsylvania State University, USA............................................................... 1745
Chapter 5.4 The Cross-Cultural Dimension of Gender and Information Technology /
Haiyan Huang, The Pennsylvania State University, USA.................................................................. 1753
Chapter 5.5 Enhancing E-Collaboration Through Culturally Appropriate User Interfaces /
Dianne Cyr, Simon Fraser University, Canada................................................................................. 1761
Chapter 5.6 Cultural Barriers of Human-Computer Interaction /
Deborah Sater Carstens, Florida Institute of Technology, USA........................................................ 1769
Chapter 5.7 Technology and Culture: Indian Experiences /
Ramesh C. Sharma, Indira Gandhi National Open University, India;
and Sanjaya Mishra, Indira Gandhi National Open University, India.............................................. 1777
Chapter 5.8 Intercultural Computer-Mediated Communication Between
Chinese and U.S. College Students / Yun Xia, Rider University, USA.............................................. 1786
Chapter 5.9 Global Culture and Computer Mediated Communication /
Susan R. Fussell, Carnegie Mellon University, USA; Qiping Zhang, Long Island University, USA;
and Leslie D. Setlock, Carnegie Mellon University, USA.................................................................. 1801
Chapter 5.10 Linguistics of Computer-Mediated Communication:
Approaching the Metaphor / Rosanna Tarsiero, Gionnethics, Italy.................................................. 1817
Chapter 5.11 Impression Formation in Computer-Mediated Communication
and Making a Good (Virtual) Impression / Jamie S. Switzer, Colorado State University, USA........ 1837
Chapter 5.12 Group Decision Making in Computer-Mediated Communication
as Networked Communication: Understanding the Technology and Implications /
Bolanle A. Olaniran, Texas Tech University, USA............................................................................. 1849
Chapter 5.13 Effects of Computer-Mediated Communication /
Stuart S. Gold, DeVry University, USA.............................................................................................. 1864
Volume IV.
Chapter 6.4 Social Networking and Knowledge Transfer in Collaborative Product Development /
Katariina Ala-Rämi, University of Oulu, Finland............................................................................. 2037
Chapter 6.5 Reframing Information System Design as Learning Across Communities of Practice /
Kevin Gallagher, Northern Kentucky University, USA;
and Robert M. Mason, University of Washington, USA..................................................................... 2052
Chapter 6.6 Grounding Business Interaction Models: Socio-Instrumental Pragmatism
as a Theoretical Foundation / Göran Goldkuhl, Linköping University
& Jönköping International Business School, Sweden; and Mikael Lind, University
College of Borås, Sweden, & Linköping University, Sweden............................................................ 2071
Chapter 6.7 Managing Socio-Technical Integration in Iterative Information System
Development Projects / Bendik Bygstad, Norwegian School of Information Technology, Norway.. 2090
Chapter 6.8 Human Factors for Networked and Virtual Organizations /
Vincent E. Lasnik, Independent Knowledge Architect, USA.............................................................. 2106
Chapter 6.9 Trust in Computer-Mediated Communications: Implications
for Individuals and Organizations / Susan K. Lippert, Drexel University, USA................................ 2118
Chapter 6.10 Understanding an ERP System Implementation in a Higher
Education Institution: A Grounded Theory Approach / Jose Esteves, Universitat Politècnica
de Catalunya, Barcelona, Spain; and Joan Pastor, Universitat Internacional de Catalunya,
Barcelona, Spain................................................................................................................................ 2132
Chapter 6.11 Ontologies for Scalable Services-Based Ubiquitous Computing /
Daniel Oberle, SAP Research, CEC Karlsruhe, Germany;
Christof Bornhövd, SAP Research, Research Center Palo Alto, USA;
and Michael Altenhofen, SAP Research, CEC Karlsruhe, Germany................................................. 2144
Chapter 6.12 Web Personalization for E-Marketing Intelligence /
Penelope Markellou, University of Patras, Greece & Research Academic Computer Technology
Institute, Greece; Maria Rigou, University of Patras, Greece & Research Academic Computer
Technology Institute, Greece; and Spiros Sirmakessis, Technological Educational Institution of
Messolongi, Greece & Research Academic Computer Technology Institute, Greece........................ 2164
Chapter 6.13 Knowledge Blogs in Firm Internal Use /
Miia Kosonen, Lappeenranta University of Technology, Finland;
Preface
The use of, engagement with, and study of technology have become the basis of modern life. As such, the
field of human-computer interaction has emerged as an essential, multidisciplinary field that seeks to
determine how humans interact with different interfaces and how these interfaces can be better designed,
evaluated, and implemented to minimize barriers between human thought and computer behavior. As
we continue to develop new and innovative technologies, we must also strive to more fully understand
how these technologies impact humanity, both on a societal and individual level, and to remain aware
of the latest in human-computer interaction research and exploration.
In recent years, the areas of study related to the interaction between people and technology have
become innumerable. As a result, researchers, practitioners, and educators have devised a variety of
techniques and methodologies to develop, deliver, and, at the same time, evaluate the effectiveness of
the interfaces implemented and used in modern society. The explosion of methodologies in the field has
created an abundance of new, state-of-the-art literature related to all aspects of this expanding discipline.
This body of work allows researchers to learn about the fundamental theories, latest discoveries, and
forthcoming trends in the field of human-computer interaction.
Constant technological and theoretical innovation challenges researchers to remain informed of and
continue to develop and deliver methodologies and techniques utilizing the discipline's latest advancements. In order to provide the most comprehensive, in-depth, and current coverage of all related topics
and their applications, as well as to offer a single reference source on all conceptual, methodological,
technical, and managerial issues in human-computer interaction, Information Science Reference is pleased
to offer a four-volume reference collection on this rapidly growing discipline. This collection aims to
empower researchers, practitioners, and students by facilitating their comprehensive understanding of
the most critical areas within this field of study.
This collection, entitled Human Computer Interaction: Concepts, Methodologies, Tools, and
Applications, is organized into eight distinct sections which are as follows: 1) Fundamental Concepts
and Theories, 2) Development and Design Methodologies, 3) Tools and Technologies, 4) Utilization and
Application, 5) Organizational and Social Implications, 6) Managerial Impact, 7) Critical Issues, and 8)
Emerging Trends. The following paragraphs provide a summary of what is covered in each section of
this multi-volume reference collection.
Section One, Fundamental Concepts and Theories, serves as a foundation for this exhaustive
reference tool by addressing crucial theories essential to understanding human-computer interaction.
Specific issues in human-computer interaction, such as ubiquitous computing, communities of practice,
and online social networking are discussed in selections such as Ubiquitous Computing and the Concept
of Context by Antti Oulasvirta and Antti Salovaara, Sociotechnical Theory and Communities of Practice by Andrew Wenn, and Online Social Networking for New Research Opportunities by Lionel
Mew. Within the selection Personalization Techniques and Their Application, authors Juergen Anke
and David Sundaram provide an overview of the personalization of information systems and the impact
such an approach has on the usability of everything from e-learning environments to mobile devices. The
selections within this comprehensive, foundational section allow readers to learn from expert research
on the elemental theories underscoring human-computer interaction.
Section Two, Development and Design Methodologies, contains in-depth coverage of conceptual
architectures and frameworks, providing the reader with a comprehensive understanding of emerging
theoretical and conceptual developments within the development and utilization of tools and environments that promote interaction between people and technology. Beginning this section is the contribution
Measuring the Human Element in Complex Technologies by Niamh McNamara and Jurek Kirakowski,
which analyzes the role that user satisfaction has in shaping the design of products and the impact this
has on developers. Similarly, Design Methods for Experience Design by Marie Jefsioutine and John
Knight describes the authors' framework for Web site design: the Experience Design Framework.
Other selections, such as Content Personalization for Mobile Interfaces by Spiridoula Koukia, Maria
Rigou, and Spiros Sirmakessis and Kinetic User Interfaces: Physical Embodied Interaction with Mobile
Ubiquitous Computing Systems by Vincenzo Pallotta, Pascal Bruegger, and Béat Hirsbrunner offer
insight into particular methodologies for the design, development, and personalization of mobile user
interfaces. From basic designs to abstract development, chapters such as Designing and Evaluating In-Car User-Interfaces by Gary Burnett and Integrating Usability, Semiotic, and Software Engineering
into a Method for Evaluating User Interfaces by Kenia Sousa, Albert Schilling, and Elizabeth Furtado
serve to expand the reaches of development and design techniques within the field of human-computer
interaction.
Section Three, Tools and Technologies, presents extensive coverage of various tools and technologies
that individuals interact, collaborate, and engage with every day. The emergence of social networking
tools such as blogs and wikis is explored at length in selections such as Plagiarism, Instruction,
and Blogs by Michael Hanrahan, Wikis as Tools for Collaboration by Jane Klobas, and Assessing
Weblogs as Education Portals by Ian Weber. Further discussions of the role of technology in learning
are explored in chapters such as How Technology Can Support Culture and Learning by David Luigi
Fuschi, Bee Ong, and David Crombie and Facilitating E-Learning with Social Software: Attitudes and
Usage from the Students' Point of View by Reinhard Bernsteiner, Herwig Ostermann, and Roland
Staudinger. The latter of these two chapters discusses both the theoretical basis for the implementation
of technology in an educational setting and the impact this implementation has on students. Ultimately,
the authors conclude that online social networking has the potential to emerge as a useful tool for both
students and teachers. The rigorously researched chapters contained within this section offer readers
countless examples of the dynamic interaction between humans and technology.
Section Four, Utilization and Application, provides an in-depth analysis of the practical side of
human computer interaction, focusing specifically on environments in which the relationship between
humans and technology has been significant. Mobile device usage is highlighted in the selections A
De-Construction of Wireless Device Usage by Mary R. Lind, Localized User Interface for Improving
Cell Phone Users' Device Competency by Lucia D. Krisnawati and Restyandito, and Mobile Phone
Use Across Cultures: A Comparison Between the United Kingdom and Sudan by Ishraga Khattab and
Steve Love. These selections offer conceptualization and analysis of the factors impacting wireless device
usage, ultimately determining that many factors, such as culture, familiarity with the device itself, and
ease of use, influence an individual's wireless device usage. Further contributions explore how human
factors impact the successful use of technology in areas such as trend detection, veterinary medicine,
mobile commerce, and web browsing. From established applications to forthcoming innovations,
contributions in this section provide excellent coverage of today's global community and demonstrate
how the interaction between humans and technology impacts the social, economic, and political fabric
of our present-day global village.
Section Five, Organizational and Social Implications, includes a wide range of research pertaining
to the organizational and cultural implications of humans reaction to technology. This section begins
with a thorough analysis of the intersection of gender and technology in contributions such as Gender,
Race, Social Class, and Information Technology by Myungsook Klassen and Russell Stockard, Jr.,
Gender and the Culture of Computing in Applied IT Education by Susan C. Herring, Christine Ogan,
Manju Ahuja, and Jean C. Robinson, and Gender Equalization in Computer-Mediated Communication
by Rosalie J. Ocker. Other issues that are surveyed within this section include the implication of cultural
differences within Deborah Sater Carstens' Cultural Barriers of Human-Computer Interaction, computer-
mediated communication in Bolanle A. Olaniran's Group Decision Making in Computer-Mediated
Communication as Networked Communication: Understanding the Technology and Implications, and
community telecommunication networks in Sylvie Albert and Rolland LeBrasseur's Collaboration
Challenges in Community Telecommunication Networks. Overall, the discussions presented in this
section offer insight into the implications of human computer interaction in both organizational and
social settings, as well as provide solutions for existing problems and shortcomings.
Section Six, Managerial Impact, presents contemporary coverage of the managerial applications and
implications of human computer interaction. This collection of research opens with Social Impact of
Virtual Networking by Hakikur Rahman, which documents the emergence of the virtual enterprise and
the impact such a structure has on social communication. Similarly, within the article Human Factors
for Networked and Virtual Organizations, Vincent E. Lasnik emphasizes the role that human factors
engineering must play in the future design of virtual and networked environments. Later contributions,
such as Knowledge Blogs in Firm Internal Use, investigate knowledge transfer within and among
organizations. Within this selection, authors Miia Kosonen, Kaisa Henttonen, and Kirsimarja Blomqvist
identify factors for the implementation of knowledge blogs within organizations and explain why these
tools can be used to encourage institutional memory, create identities, and to inform the organization
itself. The comprehensive research in this section offers an overview of the major issues that practitioners, managers, end users and even consumers must address in order to remain informed about the latest
managerial changes in the field of human computer interaction.
Section Seven, Critical Issues, presents readers with an in-depth analysis of the more theoretical and
conceptual issues within this growing field of study by addressing topics such as security, ethics, and gender
differences in technology adoption and use. Specifically, these topics are discussed in selections such as
Trusting Computers Through Trusting Humans: Software Verification in a Safety-Critical Information
System by Alison Adam and Paul Spedding and Global Information Ethics: The Importance of Being
Environmentally Earnest by Luciano Floridi. Later selections, which include Emotional Digitalization
as Technology of the Postmodern: A Reflexive Examination from the View of the Industry, review more
novel issues, such as how the digitalization of emotions helps to bridge the gap between technology and
humanity. Specifically, within this chapter, author Claus Hohmann identifies the emotion inherent in the
design of new technologies and investigates how this has and will continue to impact human reaction. In
all, the theoretical and abstract issues presented and analyzed within this collection form the backbone
of revolutionary research in and evaluation of human-computer interaction.
The concluding section of this authoritative reference tool, Emerging Trends, highlights research
potential within the field of human-computer interaction while exploring uncharted areas of study for the
advancement of the discipline. Communicating in the Information Society: New Tools for New Practices by Lorenzo Cantoni and Stefano Tardini presents a framework for the latest digital communication
tools and their implementation, while the evolution of the semantic web is analyzed in Cristian Peraboni
and Laura A. Ripamonti's Socio-Semantic Web for Sharing Knowledge. The nature of podcasts and
their role in encouraging collaboration and acting as an educational tool is explored in Stay Tuned for
Podcast U and the Data on M-Learning by Deborah Vess and Michael Gass and Podcastia: Imagining Communities of Pod-People by Jonathan Cohn. Other new trends, such as voice enabled interfaces
for mobile devices, programmable ubiquitous computing environments, and intelligent user interfaces
for mobile and ubiquitous computing are discussed in this collection. This final section demonstrates
that humanity's interaction with technology will continue to grow and evolve, shaping every facet of
modern life.
Although the contents of this multi-volume book are organized within the preceding eight sections
which offer a progression of coverage of important concepts, methodologies, technologies, applications,
social issues, and emerging trends, the reader can also identify specific contents by utilizing the extensive
indexing system listed at the end of each volume. Furthermore, to ensure that the scholar, researcher,
and educator have access to the entire contents of this multi-volume set, as well as additional coverage
that could not be included in the print version of this publication, the publisher will provide unlimited,
multi-user electronic access to the online aggregated database of this collection for the life of the edition free of charge when a library purchases a print copy. In addition to providing content not included
within the print version, this aggregated database is also continually updated to ensure that the most
current research is available to those interested in human-computer interaction.
As technology continues its rapid advancement, the study of how to successfully design, implement
and, ultimately, evaluate human reactions to and interactions with the modern world becomes increasingly critical. Innovations in the design of mobile devices, educational environments, and web sites have
all been made possible through a more thorough understanding of how humans react to and engage with
technological interfaces. Continued evolution in our understanding of the human-computer dynamic will
encourage the development of more usable interfaces and models that aim to more thoroughly understand
the theories of successful human-computer interaction.
The diverse and comprehensive coverage of human-computer interaction in this four-volume, authoritative publication will contribute to a better understanding of all topics, research, and discoveries in
this developing, significant field of study. Furthermore, the contributions included in this multi-volume
collection series will be instrumental in the expansion of the body of knowledge in this enormous field,
resulting in a greater understanding of the fundamentals while also fueling the research initiatives in
emerging fields. We at Information Science Reference, along with the editor of this collection, hope
that this multi-volume collection will become instrumental in the expansion of the discipline and will
promote the continued growth of human-computer interaction.
In the United Kingdom, The Disability Discrimination Act (DDA) began to come into effect in
December 1996 and brought in measures to prevent discrimination against people on the basis of disability. Part III of the Act aims to ensure that disabled people have equal access to products and services.
Under Part III of the Act, businesses that provide goods, facilities, and services to the general public
(whether paid for or free) need to make reasonable adjustments for disabled people to ensure they do
not discriminate by:
There is a legal obligation on service providers to ensure that disabled people have equal access to
Web-based products and services. Section 19(1) (c) of the Act makes it unlawful for a service provider
to discriminate against a disabled person in the standard of service which it provides to the disabled
person or the manner in which it provides it.
An important provision here is that education is not covered by the DDA, but by separate legislation,
titled the Special Educational Needs and Disability Act 2001 (SENDA). This Act introduces the right for
disabled students not to be discriminated against in education, training, and any services provided wholly
or mainly for students, and for those enrolled in courses provided by responsible bodies, including
further and higher education institutions and sixth form colleges. Student services covered by the Act
can include a wide range of educational and non-educational services, such as field trips, examinations
and assessments, short courses, arrangements for work placements and libraries, and learning resources.
In a similar wording to the DDA, SENDA requires responsible bodies to make reasonable adjustments
so that people with disabilities are not at a substantial disadvantage.
Activity 2: Analysis
Evaluate current findings and identify issues not yet addressed
Activity 4: Analysis
Establish key problems and assess if any areas of the service have not been covered by user evaluations
Activity 6: Analysis
Analyze all data identifying key issues that need to be addressed in the redesign of the service.
Establish new usability and accessibility goals for the design
We now describe some of the key methods that are associated with the above mentioned framework.
Interviewing
This query-based process elicits knowledge from users on a given information topic, based on their expertise
in the domain in question. It is useful for obtaining behavioral reasoning and background knowledge.
Interviews can be categorized as structured or unstructured. Structured interviews elicit limited responses
from users, by using a series of closed questions that have to be answered based on given solutions.
This enables the user data to be analyzed more quickly, but it is not necessarily as informative as unstructured
(open-ended) interviews.
Preece et al. (1994) suggest that interviews are most effective when semi-structured: based on
a series of fixed questions that gradually lead into a more in-depth understanding of user needs and requirements, while allowing open-ended responses to create new, dynamic questions based on
prior structured responses (Macaulay, 1996). On-site stakeholder interviews allow researchers to build
a vivid mental model of how users work with existing systems and how new systems can support
them (Mander and Smith, 2002).
Interviews are useful when combined with surveys or questionnaires, as they can be used to improve
the validity of data by clarifying specific issues that were raised in the survey or questionnaire.
Surveys
In conducting surveys, three things are necessary: a) the set of questions, b) a way to collect responses,
and c) access to the demographic group you wish to test. There are several widely reported templates
for acquiring different types of user data, such as the Questionnaire for User Interaction Satisfaction (QUIS) by
Chin et al. (1988) and the Computer System Usability Questionnaire (CSUQ) developed at IBM
(Lewis, 1995).
Surveys can similarly be based on open and closed questions, but they also allow us to elicit scalar ratings
that indicate strength of agreement with positive and negative statements. Self-administered surveys are time efficient
to deploy, and results from closed questions can be fast to analyze.
Open questions tend to elicit unanticipated information which can be very useful for early design.
Existing survey administration techniques include face-to-face, paper-and-pencil-based, and telephone surveys
in which the researcher fills in the results (which becomes more of an interview style), but there is
growing interest in computer-assisted and Web-based surveying techniques.
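As a minimal illustration of how closed-question, scalar survey data can be summarized, the following Python sketch computes per-item means and an overall score. It assumes a CSUQ-style 7-point rating scale; the item keys and responses shown are hypothetical.

    # Minimal sketch of aggregating closed-question survey responses.
    # Assumes a CSUQ-style instrument with 7-point ratings; the item keys
    # and response values are hypothetical illustrations.
    from statistics import mean

    responses = [
        {"q1": 6, "q2": 5, "q3": 7},   # one participant's ratings
        {"q1": 4, "q2": 5, "q3": 6},
        {"q1": 7, "q2": 6, "q3": 6},
    ]

    def item_means(responses):
        """Mean rating per questionnaire item across all participants."""
        items = responses[0].keys()
        return {item: mean(r[item] for r in responses) for item in items}

    def overall_score(responses):
        """Overall score: mean of all ratings from all participants."""
        return mean(v for r in responses for v in r.values())

    print(item_means(responses))    # per-item means, e.g. q1 is roughly 5.7
    print(overall_score(responses)) # single overall satisfaction indicator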
Focus Groups
This activity is useful for eliciting cross-representative domains of knowledge from several stakeholders/users in an open discussion format. Sessions are often moderated and tend to be informal in nature,
centering on the creation of new topics from open questions.
Evidence shows that the optimal number for a mixed-experience focus group is between five
and eight participants, with group size being inversely related to the degree of participation (Millward,
1995).
Observation
Observation methods elicit user knowledge from the way users interact with a prototype or a final product.
Observation can be direct, whereby a researcher is present and can steer users to particular points in an interaction.
This tends to utilize video camera equipment and note taking to capture the timeline of user
actions; that is, getting from point A to point D may require steps B or C.
The other model of observation is indirect, whereby all user actions are captured electronically.
The researcher has to maintain co-operation with users and should only pose questions if clarification is needed.
Paper Prototyping
There are several approaches to paper prototyping, enabling users to create quick, partial designs of
their concepts. It is often used in the early stages of the design process. Though the methodology lacks
standardization, Rettig (1994) distinguishes between high-tech and low-tech views, and the more commonly modeled categories are low-, medium-, and high-fidelity prototypes (Greenberg, 1998). Rudd
et al. (1996) also distinguish between horizontal and vertical prototypes, with vertical prototypes representing deep functionality of a limited view of the final output, and horizontal prototypes giving a wide
overview of the full functionality of the system but with less depth of understanding. Hall (2001)
discusses the benefits of using prototypes at various levels of fidelity.
Cognitive Walkthrough
Cognitive Walkthrough is an expert-based evaluation technique that steps through a scenario/task by
focusing on the user's knowledge and goals. The expert evaluator first starts with descriptions of: the
prototype interface, the task(s) from the user's perspective, the correct sequence of actions needed to
complete the task using the prototype, and any assumptions about the characteristics of the user.
Then the evaluator walks through the tasks using the system, reviewing the actions that are necessary
and attempting to predict how the users will behave.
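The following Python sketch illustrates one way an evaluator might record findings for each action step. The four questions listed are those commonly asked in the cognitive walkthrough literature rather than a prescription from this chapter, and the task and answers shown are hypothetical.

    # Sketch of a record an evaluator might keep during a cognitive walkthrough.
    # The four questions follow common walkthrough practice; the example task
    # and answers are hypothetical.
    from dataclasses import dataclass, field

    QUESTIONS = (
        "Will the user try to achieve the right effect?",
        "Will the user notice that the correct action is available?",
        "Will the user associate the correct action with the desired effect?",
        "If the correct action is performed, will the user see progress being made?",
    )

    @dataclass
    class ActionStep:
        description: str                              # the correct action at this step
        answers: dict = field(default_factory=dict)   # question -> (yes/no, note)

        def failures(self):
            """Questions answered 'no', i.e. likely usability problems."""
            return [q for q, (ok, _) in self.answers.items() if not ok]

    step = ActionStep("Select 'Export' from the File menu")
    step.answers[QUESTIONS[1]] = (False, "Export is hidden in a submenu")
    print(step.failures())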
Heuristic Evaluation
Heuristic Evaluation is an expert review technique where experts inspect the interface to judge compliance with established usability principles (the heuristics).
Heuristic Evaluation is usually conducted in a series of four steps:
Prepare: create a prototype to evaluate; select evaluators; prepare coding sheets to record problems
Determine approach: either set typical user tasks (probably the most useful approach), allow
evaluators to establish their own tasks, or conduct an exhaustive inspection of the entire interface
Conduct the evaluation: evaluators inspect the interface individually to identify all violations of
heuristics (the usability problems); record the problem (feature and location), severity (based on
frequency, impact, and criticality/cost), and heuristic violated
Aggregate and analyze results: group similar problems; reassess severity; determine possible
fixes
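As an illustration of the aggregation step, the Python sketch below groups individually reported problems by location and heuristic and recomputes severity. The field names and the 0-4 severity scale are assumptions made for the example.

    # Minimal sketch of aggregating heuristic evaluation reports:
    # group similar problems and reassess severity across evaluators.
    from collections import defaultdict
    from statistics import mean

    problems = [
        # (evaluator, feature/location, heuristic violated, severity 0-4)
        ("E1", "search page: no feedback after submit", "Visibility of system status", 3),
        ("E2", "search page: no feedback after submit", "Visibility of system status", 4),
        ("E1", "jargon in error message",               "Match with the real world",   2),
    ]

    grouped = defaultdict(list)
    for evaluator, location, heuristic, severity in problems:
        grouped[(location, heuristic)].append(severity)

    for (location, heuristic), severities in grouped.items():
        print(f"{location} [{heuristic}]: reported by {len(severities)} "
              f"evaluator(s), mean severity {mean(severities):.1f}")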
Eye Tracking
Eye tracking has been used for a very long time in psychology, focusing on recording eye movements
while reading. However, in the 1980s researchers began to incorporate eye tracking into issues of human
computer interaction. As technological tools such as the Internet, e-mail, and videoconferencing evolved
into viable means of communication and information sharing during the 1990s and beyond, researchers
started using eye tracking in order to answer questions about usability. Eye tracking technologies have
been widely used as a proxy for users' attention, and the eye movement data gathered helps to understand
where people focus attention, and in what order, before they make a selection in their interactions with
computer interfaces.
Goldberg and Kotval (1999) made a convincing argument for the use of eye-movement data in interface evaluation and usability testing. They claim that "interface evaluation and usability testing are
expensive, time-intensive exercises, often done with poorly documented standards and objectives. They
are frequently qualitative, with poor reliability and sensitivity." The motivating goal for their research
work assessing eye movements as an indicator of interface usability was the provision of an improved
tool for rapid and effective evaluation of graphical user interfaces.
They performed an analysis of eye movements (using interface designers and typical users) in order
to assess the usability of an interface for a simple drawing tool. Comparing a good interface with well-organized tool buttons to a poor interface with a randomly organized set of tool buttons, the authors
could show that the good interface resulted in shorter scan paths that cover smaller areas. The chief merit
of this study was the introduction of a systematic classification of different measures for evaluating the
usability of user interfaces based on eye movement data. These measures were grouped as follows: Measures of Processing (Number of fixations, Fixation Duration, Fixation / Saccade ratio, Scanpath Length,
Scanpath Duration, Convex Hull Area, Spatial Density, Transition Matrix, Number of Saccades, Saccadic
Amplitude), and Other Measures (Backtrack, On-Target/All-Target Fixations, Post-Target Fixations).
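As a minimal illustration, the Python sketch below computes a few of these measures from fixation data. It assumes each fixation is recorded as an (x, y, duration) triple; the sample values are hypothetical.

    # Sketch of computing simple eye-movement measures from fixation data.
    # Each fixation is assumed to be (x, y, duration_ms); values are hypothetical.
    from math import hypot

    fixations = [(120, 80, 210), (300, 95, 180), (310, 240, 260), (150, 230, 150)]

    def scanpath_length(fixations):
        """Sum of Euclidean distances between consecutive fixation points."""
        return sum(hypot(x2 - x1, y2 - y1)
                   for (x1, y1, _), (x2, y2, _) in zip(fixations, fixations[1:]))

    def mean_fixation_duration(fixations):
        return sum(d for _, _, d in fixations) / len(fixations)

    print(len(fixations))                     # number of fixations
    print(scanpath_length(fixations))         # shorter paths suggest better organized layouts
    print(mean_fixation_duration(fixations))  # longer fixations can indicate processing difficulty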
screen capture tools, microphones, and so forth. One of the main challenges lies in the integration of the
media in various formats in a coherent way. Together with cheap computer hardware, usability testing
software overcomes this problem by providing a coherent solution in which webcam, microphone, and
screen capture software operate concurrently under one system managed by the usability software. The
output is a seamless presentation of all media files, which can be edited and annotated.
Typically, usability testing tools (hardware and software) consist of the following functionalities:
Analysis
Although still limited to basic visualization, usability tools can support data analysis by calculating aggregate usability metrics for users or for tasks. For instance, we are able to quickly identify tasks with
low usability metrics and thus focus on improving the design relevant to the tasks.
This reduces the amount of work and time significantly and makes usability testing less costly.
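The following Python sketch illustrates the kind of per-task aggregation such tools might compute so that tasks with low usability metrics can be identified quickly. The logged trial data and the success-rate threshold are hypothetical.

    # Sketch of aggregating usability metrics per task from logged trials.
    from collections import defaultdict
    from statistics import mean

    trials = [
        # (participant, task, completed, time_seconds) - hypothetical data
        ("P1", "find product", True, 45), ("P2", "find product", True, 60),
        ("P1", "checkout",     False, 180), ("P2", "checkout",    True, 150),
    ]

    by_task = defaultdict(list)
    for _, task, completed, seconds in trials:
        by_task[task].append((completed, seconds))

    for task, results in by_task.items():
        success_rate = sum(ok for ok, _ in results) / len(results)
        avg_time = mean(t for _, t in results)
        flag = "  <-- review design" if success_rate < 0.75 else ""
        print(f"{task}: success {success_rate:.0%}, mean time {avg_time:.0f}s{flag}")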
Computer-Augmented Environments
One application area in which HCI plays an important role is computer-augmented environments,
more commonly known as augmented reality or mixed reality. The term refers to the combination of the real world
and computer-generated data visualization. In other words, it is an environment which consists of both
the real world and virtual reality. For instance, a surgeon might wear goggles with computer-generated medical data projected onto them. The goggles are said to augment the information the surgeon can see
in the real world through computer visualization. Therefore, it is not difficult to see the connection of
augmented reality with ubiquitous computing and wearable computers.
Since its inception, augmented reality has had an impact on various application domains. The most
common use is probably the support of complex tasks in which users need to perform a series of complicated actions while having access to a large amount of information at the same time, such as surgery,
assembly and navigation. Apart from these, augmented reality is also used for learning and training,
such as flight and driving simulations.
Augmented reality implementation usually requires additional devices for input and output in order
to integrate computer-generated data into the real world:
A Cave Automatic Virtual Environment (CAVE) is a multi-user, room-sized, high-resolution, 3D video and audio
immersive environment in which the virtual reality environment is projected onto the walls. A user
wearing a location sensor can move within the display boundaries, and the image will move with and
surround the user.
A head-up display (HUD) is a transparent display that presents data without obstructing the user's
view. It is usually implemented in vehicles, in which important information is projected directly into the
driver's viewing field. Thus the user does not need to shift attention between what is going on in the real
world and the instrument panel.
A head-mounted display is a display device, worn on the head or as part of a helmet, that has a small
display optic in front of one or both eyes.
Some of these devices have become commercially available and increasingly affordable. The challenge for HCI lies in the design of information visualization which is not obtrusive to the user's tasks.
Computer-Based Learning
A lot of effort has been put into coupling learning and technology to design effective and enjoyable learning. Various areas, namely e-learning, computer-based learning, serious games, and so on, have emerged, hoping
to utilize the interactive power of computers to enhance the teaching and learning experience. A myriad of
design strategies have been proposed, implemented, and evaluated; these include the early use of computers
in presentation, drill and practice (the behaviorist paradigm), tutorials (the cognitivist paradigm), games,
storytelling, simulations (the constructivist paradigm), and so forth. As we progress from behaviorist to
constructivist, we notice an explosion of user interface complexity. For instance, drill and practice programs usually consist of a couple of buttons (next and previous buttons, buttons for multiple choice, etc.)
while simulations could involve sophisticated visualization (outputs) and various user interface elements
for manipulating parameters (input). Recently, computer-based learning has moved from single-user offline environments to online networked spaces in which a massive number of users can interact with each
other and form a virtual learner community. This social constructivist learning paradigm requires not
only traditional usability treatment, but also sociability design, in which the system includes not only the
learning tools but also other sociability elements such as rules and division of labor.
Information Visualization
Information visualization is an area in HCI which can be related to many other areas, such as the augmented
reality just described. Most modern computer applications deal with visual outputs. Graphical user
interfaces have almost entirely replaced command-based interaction in many domains. Information visualization can be defined as the use of computer-supported, interactive, visual representations of abstract
data to amplify cognition (Shneiderman, 1992). To amplify cognition means that visualization shifts
cognitive loads to the perceptual system, thus expanding working memory and information storage.
Visualization provides a more perceptually intuitive way of viewing raw data, thus allowing users
to identify relevant patterns which would not have been identified in raw data.
Therefore, it has a huge impact on many applications domains, ranging from engineering, education,
various fields in science, and so forth.
In HCI, the most obvious application of visualization is in the design of graphical user
interfaces that allow more intuitive interaction between humans and computers. Various innovative
interaction styles have been developed, such as WIMP (window, icon, menu, pointing device), which is
so familiar in today's software. Three-dimensional graphics are also emerging, although currently they
are mostly used in computer games and computer-aided design. One recent example of a 3D graphical
interface is the window navigation and management feature known as Windows Flip 3D in Windows Vista,
which allows the user to easily identify and switch to another open window by displaying 3D snapshot
thumbnail previews of all open windows in a stack.
Today, information visualization is not only about creating graphical displays of complex information
structures. It contributes to a broader range of social and collaborative activities. Recently, visualization
techniques have been applied to social data to support social interaction, particularly in CMC. This area
is known as social visualization (Donath, Karahalios, & Viégas, 1999). Other techniques, such as social
network analysis, have also become increasingly important in visualizing social data.
Other areas where HCI plays an important role include: Intelligent and agent systems; Interaction
design; Interaction through wireless communication networks; Interfaces for distributed environments;
Multimedia design; Nonverbal interfaces; Speech and natural language interfaces; Support for creativity; Tangible user interfaces; User interface development environments and User support systems.
[online] communities are social aggregations that emerge from the Net when enough people carry on
those public discussions long enough, with sufficient human feeling, to form Webs of personal relationships in cyberspace. (Rheingold, 1993, 5)
Online communities are also often referred to as cyber societies, cyber communities, Web groups, virtual
communities, Web communities, virtual social networks and e-communities among several others.
Cyberspace is the new frontier in social relationships, and people are using the Internet to make
friends, colleagues, lovers, as well as enemies (Suler, 2004). As Korzeny pointed out, even as early as
1978, online communities are formed around interests and not physical proximity (Korzeny, 1978).
In general, what brings people together in an online community is common interests such as hobbies,
ethnicity, education, and beliefs. As Wallace (1999) points out, meeting in online communities eliminates
prejudging based on someone's appearance, and thus people with similar attitudes and ideas are attracted
to each other.
It is estimated that as of September 2002 there were over 600 million people online (Nua Internet
Surveys, 2004). The emergence of the so-called global village was predicted years ago (McLuhan,
1964) as a result of television and satellite technologies. However, it is argued by Fortner (1993) that
global metropolis is a more representative term (Choi & Danowski, 2002). If one takes into account
that the estimated world population of 2002 was 6.2 billion (U.S. Census Bureau, 2004), then the online
population is nearly 10% of the world population, a significant percentage which must be taken into
account when analyzing online communities. In most online communities, time, distance, and availability
are no longer limiting factors. Given that the same individual may be part of several different and
numerous online communities, it is obvious why online communities keep increasing in numbers, size
and popularity.
Preece et al. (2002) state that an online community consists of people, a shared purpose, policies,
and computer systems. They identify the following member roles:
CMC has its benefits as well as its limitations. For instance, CMC discussions are often potentially
richer than face-to-face discussions. However, users with poor writing skills may be at a disadvantage
when using text-based CMC (Scotcit, 2003).
Are the users spread across time zones? Can all participants meet at the same time?
Do the users have access to the necessary equipment?
What is the role of CMC in the course?
Are the users good readers/writers?
Are the activities time independent?
How much control is allowed to the students?
Sharing/Comparing of Information
The Discovery and Exploration of Dissonance or Inconsistency among Ideas, Concepts or Statements
Negotiation of Meaning/Co-Construction of Knowledge
Testing and Modification of Proposed Synthesis or Co-Construction
Agreement Statement(s)/Applications of Newly Constructed Meaning
In this section we provide a description of some of the most commonly used online community
evaluation techniques as well as their weaknesses and strengths.
Studying CMC
All of the already mentioned methods can be used to evaluate the usability and accessibility of the online
community. However, apart from usability and accessibility, we often also want to evaluate the user experience
and the sociability of the interface. We describe below some of the methods that can assist us in this:
Personas
Findings from interviews and questionnaires can be further used as a basis for developing user profiles
using personas. A persona is "a precise description of the user of a system, and of what he/she wishes
to accomplish" (Cooper, 1999). The specific purpose of a persona is to serve as a tool for software and
product design. Although personas are not real people, they represent them throughout the design stage
and are best based on real data collected through query-based techniques.
Personas are rich in details, include name, social history and goals, and are synthesized from findings
through the use of query based techniques with real people (Cooper, 1999). The technique takes user
characteristics into account and creates a concrete profile of the typical user (Cooper, 1999).
For online communities, personas can be used to better understand the participants of the community
and their background. Personas can also be used as a supplement to Social Network Analysis (described
later in this chapter) to get a greater overview of the characteristics of key participants of a community.
Using personas, Web developers gain a more complete picture of their prospective and/or current users
and are able to design the interfaces and functionality of their systems to be more personalized and
suited for the communication of the members of their online communities.
Advantages of personas include: they can be used to create user scenarios; they can be anonymous, protecting
user privacy; and they represent user stereotypes and characteristics.
Disadvantages of personas include: if not enough personas are used, users are forced to fall into a
certain persona type which might not accurately represent them; and they are time-consuming to develop.
Log Analysis
A log, also referred to as a Web log, server log, or log file, is in the form of a text file and is used to track
the user's interactions with the computer system they are using. The types of interaction recorded include
key presses, device movements, and other information about the user's activities. The data is collected
and analyzed using specialized software tools, and the range of data collected depends on the log settings. Logs are also time stamped and can be used to calculate how long a user spends on a particular
task or how long a user lingered in a certain part of the Web site (Preece, Rogers & Sharp, 2002). In
addition, an analysis of the server logs can help us find out: when people visited the site, the areas they
navigated, the length of their visit, the frequency of their visits, their navigation patterns, from where
they connected, and details about the computer they are using.
Log analysis is a useful and easy-to-use tool when analyzing online communities. For example,
someone can use log analysis to answer, more accurately, questions such as student attendance in an online
learning community. Furthermore, logs can identify the Web pages users spend more time viewing, and
also the paths that they used. This helps identify the navigation problems of the Web site, but also gives
a visualization of the users' activities in the virtual communities. For instance, in the case of e-learning
communities, the log files will show which students are active in the CMC postings even if they are not
active participants (posting little themselves), but just observing the conversations. Preece (2003) notes
that data logging does not interrupt the community, while at the same time it can be used to examine mass
interaction.
Advantages of Logs (Preece et al., 2002): helps evaluators analyze users' behavior; helps evaluators
understand how users worked on specific tasks; it is unobtrusive; large volumes of data can be logged
automatically.
Disadvantages of Logs (Preece et al., 2002): powerful tools are needed to explore and analyze the
data quantitatively and qualitatively; user privacy issues.
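As a minimal illustration of log analysis, the Python sketch below counts page views and estimates time spent on the site per visitor. It assumes a simplified log in which each line holds a timestamp, a user identifier, and the requested page; real server logs (for example, Apache's combined format) require a fuller parser.

    # Minimal sketch of log analysis over a simplified, hypothetical log format.
    from collections import Counter, defaultdict
    from datetime import datetime

    log_lines = [
        "2008-03-01T10:00:05 user42 /forum/index.html",
        "2008-03-01T10:03:40 user42 /forum/thread/17",
        "2008-03-01T10:01:10 user07 /wiki/start",
    ]

    page_views = Counter()
    sessions = defaultdict(list)
    for line in log_lines:
        stamp, user, page = line.split()
        page_views[page] += 1
        sessions[user].append(datetime.fromisoformat(stamp))

    print(page_views.most_common(3))          # most visited pages
    for user, times in sessions.items():
        duration = max(times) - min(times)    # rough time spent on the site
        print(user, len(times), "requests over", duration)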
The aim of social network analysis is to describe why people communicate individually or in groups
(Preece, 2000, p. 183), while the goals of SNA are (Dekker, 2002):
It is also worth pointing out that network analysis is concerned with dyadic attributes between pairs
of actors (like kinship, roles, and actions), while social science is concerned with monadic attributes of
the actor (like age, sex, and income).
There are two approaches to SNA:
Ego-centered analysis: Focuses on the individual as opposed to the whole network, and only a
random sample of network population is normally involved (Zaphiris, Zacharia, & Rajasekaran,
2003). The data collected can be analyzed using standard computer packages for statistical analysis
like SAS and SPSS (Garton, Haythornthwaite, & Wellman, 1997).
Whole network analysis: The whole population of the network is surveyed and this facilitates
conceptualization of the complete network (Zaphiris et al., 2003). The data collected can be analyzed using microcomputer programs like UCINET and Krackplot (Garton et al., 1997).
The following are important units of analysis and concepts of SNA (Garton et al., 1997; Wellman,
1982; Hanneman, 2001; Zaphiris et al., 2003; Wellman, 1992):
Social Network Analysis is a very valuable technique when it comes to analyzing online communities,
as it can provide a visual presentation of the community and, more importantly, it can provide us with
qualitative and quantitative measures of the dynamics of the community.
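As a programmatic illustration of whole network analysis, the following Python sketch uses the networkx library (an alternative to the packages mentioned above, and an assumption of this example rather than a tool discussed in the chapter) to compute density and centrality measures for a hypothetical "who replied to whom" graph.

    # Whole-network sketch using networkx (must be installed separately).
    # The reply edges are hypothetical forum data.
    import networkx as nx

    replies = [("Ann", "Bob"), ("Bob", "Ann"), ("Cat", "Ann"),
               ("Dan", "Ann"), ("Cat", "Bob")]

    g = nx.Graph()
    g.add_edges_from(replies)

    print(nx.density(g))                 # how interconnected the community is
    print(nx.degree_centrality(g))       # who communicates with the most people
    print(nx.betweenness_centrality(g))  # who bridges otherwise separate members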
Some effort has also been made to incorporate game elements into productive activities. For instance,
a project has been carried out to use games to label the contents of images meaningfully. Others have
used games for education and training. An area of research, known as serious games, is expanding
quickly to study productive games.
Recently, playability design has undergone a major transformation as games are becoming increasingly
collaborative with the emergence of massively multiplayer online role-playing games (MMORPGs).
These games are becoming one of the most interesting interactive media of computer-mediated communication and networked activity environments (Taylor, 2002). Understanding the pattern of participation in these game communities is crucial, as these virtual communities function as a major mechanism
of socialization of the players.
Some usability studies have shown that MMORPG design should incorporate what is known as sociability. For instance, research has found that game locations can be designed to encourage different styles of social interactions.
new artefacts and locations utilising simple modelling tools and scripting languages. Whilst some 3D
virtual worlds are designed with game-like goal structures that impose obstacles or challenges, some
are completely open, meaning that the users are free to do as they please.
Although sociability issues of conventional CMC are well studied and documented, we have very little understanding of social interactions in 3D CMC. Therefore, it is worth investigating user activities in such environments in order to cast some light on the group formation process and other sociability issues.
Researching 3D CMC is potentially more challenging, as communication takes place both through text and through other virtual actions users can perform with their 3D avatars. Unlike the avatar in conventional CMC, which is often a static graphical or animated representation of the user, in 3D CMC the avatar can interact with other avatars directly in the virtual space. A 3D avatar can perform a wide range of actions on the 3D world and other avatars. For instance, it is not uncommon that avatars can hug, kiss or wave to each other. There is also research on the facial expressions of 3D avatars (Clarkson et al., 2001), implemented in the 3D CMC context with the intention to enhance non-verbal communication. Moreover, given the fantasy theme of some 3D CMC environments, new sets of rules for virtual communication which are completely different from physical communication might arise. This is worth investigating, as groups often operate within the boundary of norms and rules that emerge through user interaction.
Ubiquitous Computing
Another exciting future trend in HCI is the emergence of ubiquitous computing, in which information processing is thoroughly diffused into objects and experiences of everyday life. In other words, computers are disappearing. Users are no longer consciously engaged in using the computers. Instead, they are operating devices that are so well integrated into the artifacts of everyday activities that they are not aware of using computers at all.
Perhaps the most obvious example is the mobile phone, and indeed mobile computing has witnessed an explosion of research interest within and beyond the HCI community.
Other less obvious examples include computerized refrigerators which are able to detect their contents, plan and recommend a variety of recipes, and automatically shop according to the user's needs.
The focus of ubiquitous computing from the point of view of HCI suggests a shift from tool-focused design to activity-focused design. The primary objective is thus to design tools which can seamlessly mediate everyday activities without interfering with users' tasks. One such area is wearable computing, which has the potential to support human cognition and facilitate creativity and communication. Unlike mobile devices, wearable computers are attached to the human body, thus reducing the possibility of interruption or displacement.
REFERENCES
Archer, W., Garrison, R. D., Anderson, T., & Rourke, L. (2001). A framework for analyzing critical thinking in computer conferences. European Conference on Computer-Supported Collaborative Learning, Maastricht, Netherlands.
Bates, A. W. (1995). Technology, open learning and distance education. London: Routledge.
Beidernikl, G., & Paier, D. (2003, July). Network analysis as a tool for assessing employment policy. In
Proceedings of the Evidence-Based Policies and Indicator Systems Conference 03. London.
Borgatti, S. (2000). What is Social Network Analysis. Retrieved on November 9, 2004 from http://www.
analytictech.com/networks/whatis.htm
Burge, E. L., & Roberts, J. M. (1993). Classrooms with a difference: A practical guide to the use of
conferencing technologies. Ontario: University of Toronto Press.
Burge, J. E. (2001). Knowledge Elicitation Tool Classification. PhD Thesis, Worcester Polytechnic
Institute.
CAP, University of Warwick. (2004). E-Guide: Using computer Mediated Communication in Learning
and Teaching. Retrieved November 8, 2004 from http://www2.warwick.ac.uk/services/cap/resources/
eguides/cmc/cmclearning/
Chin, J. P., Diehl, V. A., & Norman, K. L. (1988). Development of an Instrument Measuring User Satisfaction of the Human-Computer Interface. ACM CHI '88 Proceedings, (pp. 213-218).
Choi, J. H., & Danowski, J. (2002). Cultural communities on the net - Global village or global metropolis?: A network analysis of Usenet newsgroups. Journal of Computer-Mediated Communication, 7(3).
Clarkson, M. J., Rueckert, D., Hill, D. L. G., & Hawkes, D. J. (2001). Using photo-consistency to register
2d optical images of the human face to a 3d surface model. IEEE Transactions on Pattern Analysis and
Machine Intelligence, 23(11), 1266-1281.
Cook, D., & Ralston, J. (2003). Sharpening the Focus: Methodological issues in analyzing on-line conferences. Technology, Pedagogy and Education, 12(3), 361-376.
Cooke, N. J. (1994). Varieties of knowledge elicitation techniques. International Journal of HumanComputer Studies, 41, 801-849.
Cooper, A. (1999). The Inmates are Running the Asylum. Indianapolis IN: SAMS, a division of Macmillan Computer Publishing.
December, J. (1997). Notes on defining of computer-mediated communication. Computer-Mediated
Communication Magazine, 3(1).
Dekker, A. H. (2002). A Category-Theoretic Approach to Social Network Analysis. Proceedings of Computing: The Australasian Theory Symposium (CATS), Melbourne, Australia, 28 Jan to 1 Feb 2002.
Donath, J., Karahalios, K., & Viégas, F. (1999). Visualizing conversation. Paper presented at the Proceedings of the 32nd Annual Hawaii International Conference on System Sciences.
Fahy, P. J. (2003). Indicators of support in online interaction. International Review of Research in Open
and Distance Learning, 4(1).
Fahy, P. J., Crawford, G., & Ally, M., (2001). Patterns of interaction in a computer conference transcript.
International Review of Research in Open and Distance Learning, 2(1).
Ferris, P. (1997) What is CMC? An Overview of Scholarly Definitions. Computer-Mediated Communication Magazine, 4(1).
Fortner, R. S. (1993). International communication: History, conflict, and control of the global metropolis. Belmont, CA: Wadsworth.
Garton, L., Haythorthwaite, C., & Wellman, B. (1997). Studying On-line Social Networks. In Jones, S.
(Eds.), Doing Internet Research. Thousand Oaks CA: Sage.
Greenberg, S. (1998). Prototyping for design and evaluation. Retrieved on November 30, 2004 at http://
pages.cpsc.ucalgary.ca/~saul/681/1998/prototyping/survey.html
Gunawardena, C., Lowe, C., & Anderson, T. (1997). Analysis of a Global Online Debate and the Development of an Interaction Analysis Model for Examining Social Construction of Knowledge in Computer
Conferencing. Journal of Educational Computing Research, 17(4), 397-431.
Hall, R. R. (2001). Prototyping for usability of new technology. International Journal of Human-Computer Studies, 55(4), 485-502.
Hanneman, R. A. (2001). Introduction to Social Network Methods. Retrieved on November 9, 2004 from
http://faculty.ucr.edu/~hanneman/SOC157/TEXT/TextIndex.html
Heeren, E. (1996). Technology support for collaborative distance learning. Doctoral dissertation, University of Twente, Enschede.
Henri, F. (1992). Computer Conferencing and Content Analysis, In A. R. Kaye (Ed), Collaborative learning through computer conferencing: The Najaden Papers, (pp. 117-136). Berlin: Springer-Verlag.
John, B. E., & Marks, S. J. (1997) Tracking the Effectiveness of Usability Evaluation Methods. Behaviour and Information Technology, 16(4/5), 188-202.
Jones, S. (1995). Computer-Mediated Communication and Community: Introduction. Computer-Mediated Communication Magazine, 2(3).
King, N., Ma, T.H.Y., Zaphiris, P., Petrie, H., Fraser, H. (2004). An incremental usability and accessibility
evaluation framework for digital libraries. In Brophy, P., Fisher S., Craven, J. (2004) Libraries without
Walls 5: The distributed delivery of library and information services. London, UK: Facet Publishing.
Korzenny, F. (1978). A theory of electronic propinquity: Mediated communication in organizations.
Communication Research, 5, 3-23.
Krebs, V. (2004). An Introduction to Social Network Analysis. Retrieved November 9, 2004 from http://
www.orgnet.com/sna.html
Kuniavksy, M. (2003). Observing the User Experience. Morgan Kaufmann Publishers.
Lewis, J. R. (1995). IBM Computer Usability Satisfaction Questionnaires: Psychometric Evaluation and
Instructions for Use. International Journal of Human-Computer Interaction, 7(1), 57-78.
Linden Lab. (2003). Second life. Last retrieved 21 June 2007.
Macaulay, L. A. (1996). Requirements engineering. Springer Verlag Series on Applied Computing.
Madden, M., & Rainie, L. (2003). Pew Internet & American Life Project Surveys. Pew Internet &
American Life Project, Washington, DC.
Maiden, N. A. M. Mistry, P., & Sutcliffe, A. G. (1995). How people categorise requirements for reuse:
a natural approach. Proc. of the 2nd IEEE Symposium on Requirements Engineering, (pp. 148-155).
Mander, R., & Smith, B. (2002). Web usability for dummies. New York: Hungry Minds.
Mason, R. (1991). Analyzing Computer Conferencing Interactions. Computers in Adult Education and
Training, 2(3), 161-173.
McLuhan, M. (1964). Understanding media: The extensions of man. New York: McGraw Hill.
Metcalfe, B. (1992). Internet fogies to reminisce and argue at Interop Conference. InfoWorld.
Millward, L. J. (1995). In G. M. Breakwell, S. Hammond and C. Fife-Shaw (Eds.), Research Methods
in Psychology. Sage.
NUA Internet Surveys (2004). Retrieved October 20, 2004, from http://www.nua.ie/surveys/how_many_
online/index.html
Pfeil, U. (2007). Empathy in online communities for older people: An ethnographic study. City University London.
Preece, J. (2000) Online Communities: Designing Usability, Supporting Sociability. Chichester, UK:
John Wiley and Sons.
Preece, J., & Maloney-Krichmar, D. (2003). Online communities: Focusing on sociability and usability.
London. In J. A. Jacko & A. Sears (Eds.), Handbook of human-computer interaction. Lawrence Erlbaum
Associates Inc.
Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction Design: Beyond Human-Computer Interaction.
New York, NY: John Wiley & Sons.
Preece, J., Rogers, Y., Sharp, H., Benyon, D., Holland, S., & Carey, T. (1994). Human-Computer Interaction. Addison Wesley.
Reiser, R. A., & Gagné, R. M. (1983). Selecting media for instruction. Englewood Cliffs: Educational
Technology Publications.
Rettig, G. (1994). Prototyping for tiny fingers. Communications of the ACM, 37, (4), 21-27.
Rheingold, H. (1993). The Virtual Community: Homesteading on the Electronic Frontier. Reading: Addison-Wesley.
Rice, R. (1994). Network analysis and computer mediated communication systems. In S. W. J. Galaskiewkz
(Ed.), Advances in Social Network Analysis. Newbury Park, CA: Sage.
Rice, R. E., Grant, A. E., Schmitz, J., & Torobin, J. (1990). Individual and network influences on the
adoption and perceived outcomes of electronic messaging. Social Networks, 12, 17-55.
Rudd, J., Stern, K., & Isensee, S. (1996). Low vs. high fidelity prototyping debate. Interactions, 3(1),
76-85. ACM Press.
SCOTCIT. (2003). Enabling large-scale institutional implementation of communications and information technology (ELICIT). Using Computer Mediated Conferencing. Retrieved November 2, 2004 from
http://www.elicit.scotcit.ac.uk/modules/cmc1/welcome.htm
Scott, J. P. (2000). Social network analysis: A handbook. Sage Publications Ltd.
Shneiderman, B. (1992). Designing the user interface: Strategies for effective human-computer interaction. 2nd edition.Reading: Addison-Wesley.
Suler, J. (2004). The Final Showdown Between In-Person and Cyberspace Relationships. Retrieved
November 3, 2004 from http://www1.rider.edu/~suler/psycyber/showdown.html
Taylor, T. L. (2002). Whose game is this anyway? Negotiating corporate ownership in a virtual world.
Paper presented at the Computer Games and Digital Cultures Conference Proceedings., Tampere: Tampere University Press.
Turkle, S. (1995). Life on the screen: Identity in the age of the Internet. New York: Simon & Schuster.
U.S Census Bureau (2004). Global Population Profile 2002. Retrieved October 20, 2004 from http://
www.census.gov/ipc/www/wp02.html
Usability Net. (2003). UsabilityNet. Retrieved on December 3, 2004 from http://www.usabilitynet.
org/
Wallace, P. (1999). The Psychology of the Internet. Cambridge: Cambridge University Press.
Walther, J. B. (1995). Relational aspects of computer-mediated communication: Experimental observations over time. Organization Science, 6(2), 186-203.
Wellman, B. (1992). Which types of ties and networks give what kinds of social support? Advances in
Group Processes, 9, 207-235.
Wellman, B. (1982). Studying personal communities. In P. M. N Lin (Ed.), Social Structure and Network
Analysis. Beverly Hills, CA: Sage.
Wellman, B., & Gulia, M. (1999). Net surfers don't ride alone: Virtual communities as communities. In
B. Wellman (Ed.), Networks in the global village (pp. 331-366.). Boulder, CO: Westview Press.
Zaphiris, P., Kurniawan, S. H. (2001). Using Card Sorting Technique to Define the Best Web Hierarchy
for Seniors. Proc. of CHI 2001 Conference on Human Factors in Computing Systems. ACM Press.
Zaphiris, P., Zacharia, G., & Rajasekaran, M. (2003). Distributed
Panayiotis Zaphiris is a reader in human-computer interaction at the Centre for HCI Design. He got his B.Sc. and M.Sc. from the University of Maryland, College Park (USA). Panayiotis got his PhD (in human-computer interaction) from Wayne State University, where he was also a research assistant at the Institute of Gerontology. Panayiotis's research interests lie in HCI with an emphasis on inclusive design and social aspects of computing. He is also interested in Internet-related research (Web usability, mathematical modelling of browsing behaviour in hierarchical online information systems, online communities, e-learning, Web-based digital libraries and finally social network analysis of online human-to-human interactions). He has authored over 110 papers and four books in this area, including a recently released book titled User-Centered Design of Online Learning Communities (Lambropoulos and Zaphiris, 2006). He is principal investigator for the AW-Model EPSRC/BBSRC project funded under the SPARC scheme. Panayiotis was the principal investigator on the JISC-funded Usability Studies for JISC Services and Information Environment and Information Visualisation Foundation Study projects, which looked at usability and information visualisation in the context of online learning environments. Finally, he was the co-principal investigator on the largest ever conducted Web accessibility study (the DRC-funded An in-depth study of the current state of Web accessibility project), which assessed the accessibility of 1000 UK Web sites. More information about Panayiotis's research can be found at: http://www.zaphiris.org
Chee Siang Ang is a research fellow in the Centre for Human-Computer Interaction (HCI) Design. He is interested in human interactions and social tendencies in the virtual world, particularly Second Life, from a sociological, psychological and HCI perspective. His main research interests include the psychology and sociology of computer games, virtual worlds or 3D Computer-Mediated Communication (CMC), learning theories particularly in gaming, and digital media such as interactive narrative and simulation.
Section I
Fundamental Concepts
and Theories
This section serves as the foundation for this exhaustive reference tool by addressing crucial theories essential to the understanding of human-computer interaction. Chapters found within these pages provide
an excellent framework in which to position human-computer interaction within the field of information
science and technology. Individual contributions provide overviews of ubiquitous computing, cognitive
informatics, and sociotechnical theory, while also exploring critical stumbling blocks of this field. Within
this introductory section, the reader can learn and choose from a compendium of expert research on the
elemental theories underscoring the research and application of human-computer interaction.
Chapter 1.1
Introduction to Ubiquitous
Computing
Max Mühlhäuser
Technische Universität Darmstadt, Germany
Iryna Gurevych
Technische Universität Darmstadt, Germany
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Weiser's Vision of UC
Mark Weiser's ideas were first exposed to a large worldwide audience by way of his famous article "The Computer of the 21st Century," published in Scientific American in 1991. A preprint version of this article is publicly available at: http://www.ubiq.com/hypertext/weiser/SciAmDraft3.html.
Maybe the most frequently cited quotation from this article reads as follows: "The most profound technologies are those that disappear. They weave themselves into the fabric of everyday life until they are indistinguishable from it." This was Mark's vision for the final step in a development away from standard PCs, towards a proliferation and diversification of interconnected computer-based devices. A deeper understanding of Mark Weiser's visions can be drawn from his position towards three dominant, maybe overhyped trends in computer science at his time: virtual reality, artificial intelligence, and user agents. With a good sense for how to raise public attention, Mark criticized these three trends as leading in the wrong direction and positioned UC as a kind of opposite trend. We will follow Mark's arguments for a short while and take a less dramatic view afterwards.
History Revised
The preceding paragraphs are important to know for a deeper understanding of the mindset and roots of UC. However, about 15 years after the time when the corresponding arguments were exchanged, it is important to review them critically in the light of what has happened since. We will first revise the three "religious disputes" that Mark Weiser conducted against AI, VR, and UAs. To put the bottom line first, the word "versus" should rather be replaced by "and" today, meaning that the scientific disciplines mentioned should be (and have, mostly) reconciled:
As to UC and VR, specialized nodes in a global UC network can only contribute to a meaningful holistic purpose if models exist that help to cooperatively process the many specialist purposes of the UC nodes. In other words, we need the computer embedded into the world and the world embedded in the computer. Real Time Enterprises are a good example of very complex models (in this case, of enterprises) for which the large-scale deployment of UC technology provides online connectivity to the computers embedded into the world, that is, specialized nodes (appliances, smart labels, etc.). In this case, the complex models are usually not considered VR models, but they play the same role as VR models in Mark Weiser's arguments. The progress made in the area of augmented reality is another excellent example of the benefit of reconciliation between UC and VR: in corresponding applications, real-world vision and virtual (graphical) worlds are tightly synchronized and overlaid.
As to UC and AI, Mark Weiser had not addressed the issue of how interconnected, smart,
that is, modest, specialized nodes would be
integrated into a sophisticated holistic solution. If
the difference between AI and the functionality of
a single smart UC node (e.g., temperature sensor)
was comparable to the difference between a brain
and a few neurons, then how can the equivalent
of the transition (evolution) from five pounds of
Post-PC era: The root of this term is obvious; it describes the era that comes after the second, that is, the PC era. We suggest avoiding this term since it points at what it
The reader must be aware that all terms arranged in the taxonomy are not settled yet for a
common understanding. For instance, one might
argue whether a sensor network that computes
context information for networked appliances and
users should be considered a set of smart items
(as we defined it) or a part of the smart environment. Nevertheless, we find it useful to associate
a well-defined meaning with these terms and to
apply it throughout the book (see Figure 2).
In addition, it should be noted that smart environments (with integrated smart items) constitute a particularly important research area, maybe because they permit researchers and project leaders to implement self-contained little UC worlds without a need for multiparty agreements about interoperability standards. In particular, smart homes were among the first subjects of investigation in the young history of UC. Prestigious projects in the smart home area were and are conducted by industry (Microsoft eHome, Philips Ambient Intelligence initiative, etc.) and academia (Georgia Tech AwareHome, MIT House, etc.). HP made an early attempt to overcome the isolation of such incompatible islands by emphasizing standard middleware in the Cooltown project. Quite a number of projects about smart homes terminated without exciting results, not least due to insufficient business impact (note our argument in favor of Real Time Enterprises as a more promising subject). More recently, smart homes projects have focused on issues considered to be particularly promising, as was discussed in the preface to this book. Important areas comprise home security, energy conservation, home entertainment, and particu-
Before we introduce concrete reference architectures, it is worth recalling the two complementary
flavors:
References
Aitenbichler, E., Kangasharju, J., & Mühlhäuser, M. (2004). Talking assistant headset: A smart digital identity for ubiquitous computing. Advances in pervasive computing (pp. 279-284). Austrian Computer Society.
Bond, A. (2001). ODSI: Enterprise service co-ordination. In Proceedings of the 3rd International
Symposium on Distributed Objects and Applications DOA01 (pp. 156-164). IEEE Press.
Additional Reading
Aarts, E., & Encarnação, J. L. (Eds.). (2006). True visions: The emergence of ambient intelligence. Berlin, Germany: Springer.
Adelstein, F., Gupta, S. K. S. et al. (2004). Fundamentals of mobile and pervasive computing. New
York: McGraw-Hill Professional Publishing.
Antoniou, G., & van Harmelen, F. (2004). A semantic web primer. Massachusetts: MIT Press.
Hansmann, U., Merk, L. et al. (2003). Pervasive
computing handbook. The mobile world. Berlin,
Germany: Springer.
Hedgepeth, W. O. (2006). RFID metrics: Decision-making tools for today's supply chains. University of Alaska, Anchorage, USA.
This work was previously published in Handbook of Research on Ubiquitous Computing Technology for Real Time Enterprises, edited by M. Mühlhäuser and I. Gurevych, pp. 1-20, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.2
INTRODUCTION
Mark Weiser (1991) envisioned in the beginning
of the 1990s that ubiquitous computing, intelligent
small-scale technology embedded in the physical
environment, would provide useful services in the
everyday context of people without disturbing the
natural flow of their activities.
From the technological point of view, this
vision is based on recent advances in hardware
and software technologies. Processors, memories,
wireless networking, sensors, actuators, power,
packing and integration, optoelectronics, and biomaterials have seen rapid increases in efficiency
with simultaneous decreases in size. Moore's law, with the capacity of microchips doubling every 18 months and growing an order of magnitude every five years, has been more or less accurate for the last three decades. Similarly, fixed network transfer capacity grows an order of magnitude every three years, and wireless network transfer capacity every
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
BACKGROUND
Ubiquitous Computing Transforms
Human-Computer Interaction
Human-computer interaction currently is shifting its focus from desktop-based interaction to
interaction with ubiquitous computing beyond
the desktop. Context-aware services and user
interface adaptation are the two main application classes for context awareness. Many recent
prototypes have demonstrated how context-aware
devices could be used in homes, lecture halls,
gardens, schools, city streets, cars, buses, trams,
shops, malls, and so forth.
With the emergence of so many different ways
of making use of situational data, the question of
what context is and how it should be acted upon
has received a lot of attention from researchers
in HCI and computer science. The answer to this
question, as will be argued later, has wide ramifications for the design of interaction and innovation
of use purposes for ubiquitous computing.
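As a hedged illustration only (the context fields, thresholds and adaptations below are invented and not drawn from the chapter), a context-aware service of the kind discussed here can be thought of as a mapping from sensed situational data to interface adaptations:

```python
def adapt_interface(context):
    """Toy mapping from sensed context to user-interface adaptations.

    `context` is assumed to be a dict such as
    {"location": "lecture_hall", "ambient_noise_db": 70, "walking": True}.
    """
    adaptations = []
    if context.get("location") == "lecture_hall":
        adaptations.append("silence ringer")
    if context.get("ambient_noise_db", 0) > 65:
        adaptations.append("switch alerts to vibration")
    if context.get("walking"):
        adaptations.append("enlarge fonts and buttons")
    return adaptations

print(adapt_interface({"location": "lecture_hall",
                       "ambient_noise_db": 70,
                       "walking": False}))
# ['silence ringer', 'switch alerts to vibration']
```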
HISTORY
Context as Location
In Weiser's (1991) proposal, ubiquitous computing was realized through small computers distributed throughout the office. Tabs, pads, and boards
helped office workers to access virtual information
associated to physical places as well as to collaborate over disconnected locations and to share
information using interfaces that take locational
constraints sensitively into account. Although
Weiser (1991) never intended to confine context to
mean merely location, the following five years of
research mostly focused on location-based adaptation. Want et al. (1992) described the ActiveBadge,
a wearable badge for office workers that could
be used to find and notify people in an office.
Weiser (1993) continued by exploring systems for
FUTURE TRENDS
CONCLUSION
REFERENCES
Dey, A. K., & Abowd, G. D. (1999). Towards a better understanding of context and context-awareness [technical report]. Atlanta: Georgia Institute of Technology.
Schilit, B., Adams, N., & Want, R. (1994). Context-aware computing applications. Proceedings of the IEEE Workshop on Mobile Computing Systems and Applications.
KEY TERMS
Attentive User Interfaces: AUIs are based on the idea that modeling the deployment of user attention and task preferences is the key to minimizing the disruptive effects of interruptions. By monitoring the user's physical proximity, body orientation, eye fixations, and the like, AUIs can determine what device, person, or task the user is attending to. Knowing the focus of attention makes it possible in some situations to avoid interrupting users in tasks that are more important or time-critical than the interrupting one.
Peripheral Computing: The interface attempts to provide attentionally peripheral awareness of people and events. Ambient channels
provide a steady flow of auditory cues (i.e., a
sound like rain) or gradually changing lighting
conditions.
Pervasive Computing: Technology that provides easy access to information and other people
anytime and anywhere through a mobile and
scalable information access infrastructure.
This work was previously published in Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 630-633, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.3
INTRODUCTION
Although gender differences in a technological
world are receiving significant research attention,
much of the research and practice has aimed at
how society and education can impact the successes and retention of female computer science
professionals. The possibility of gender issues
within software, however, has received almost no
attention, nor has the population of female end
users. However, there is relevant foundational
research suggesting that gender-related factors
within a software environment that supports
end-user computing may have a strong impact
on how effective male and female end users can
be in that environment. Thus, in this article, we
summarize theory-establishing results from
other domains that point toward the formation of
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure 1. A spreadsheet calculating the average of three homework scores. Assertions about the ranges and values are shown above each cell's value. For example, on HomeWork1 there is a user-entered assertion (noted by the stick figure) of 0 to 50. The other three cells have assertions guessed by the Surprise-Explain-Reward strategy. Since the value in HomeWork1 is outside of the range of the assertion, a red circle notifies the user of the violation. A tooltip (lower right) shows the explanation for one of the guessed assertions.
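The assertion mechanism in Figure 1 can be sketched roughly as follows; the class and field names are illustrative only and do not reproduce the actual system's implementation.

```python
class Cell:
    """A spreadsheet cell with an optional assertion about its valid range."""

    def __init__(self, value, assertion=None, user_entered=False):
        self.value = value
        self.assertion = assertion          # (low, high) or None
        self.user_entered = user_entered    # stick-figure vs. guessed assertion

    def violates_assertion(self):
        """Return True when the value falls outside the asserted range."""
        if self.assertion is None:
            return False
        low, high = self.assertion
        return not (low <= self.value <= high)

homework1 = Cell(value=55, assertion=(0, 50), user_entered=True)
if homework1.violates_assertion():
    # In the environment of Figure 1, this is where the red circle and the
    # explanatory tooltip of the Surprise-Explain-Reward strategy would appear.
    print("HomeWork1 is outside its asserted range 0-50")
```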
CONFIDENCE
This document uses the term confidence for the
interrelated concepts of self-confidence, self-efficacy, overconfidence, and perceived risk.
From the field of computer science, there is
substantial evidence of low confidence levels as
computer science females compare themselves
to the males (Margolis, Fisher, & Miller, 1999).
Of particular pertinence to end-user computing,
however, is the evidence showing that low confidence relating to technology is not confined to
computer science females (Busch, 1995; Huff,
2002; Torkzadeh & Van, 2002).
As a measure of confidence, researchers often use self-efficacy, as was done in the Busch
study. Self-efficacy is belief in one's capabilities
to perform a certain task (Bandura, 1994). There
is specific evidence that low self-efficacy impacts
SUPPORT
We will use the term support to mean built-in
aspects of the software, such as on-line help systems and Figure 1's tooltips, that help users learn or understand the environment.
The system's approach to helping users achieve mastery in remembering the software's devices may depend on a user's learning style. One survey of university students found that students
with an abstract random learning style were
significantly more likely to be female and, as
a result, could find computer-based instruction
ineffective for learning (Ames, 2003). Other researchers have also found gender differences in
learning styles (Heffler, 2001; Severiens & ten
Dam, 1997). One implication of these findings
is that end-user computing may need to support
several learning styles, especially if some users are
easily dissuaded by support devices not sensitive
to their learning style.
Problem-solving style also shows gender differences, at least for computer games (Kafai, 1998).
Researchers found that, unlike boys, rather than
working in a linear fashion through the game,
girls prefer to explore and move freely about a
game (Gorriz & Medina, 2000). In another difference in problem-solving style, boys' games
MOTIVATION
Research has shown that computer science females
are motivated by how technology can help other
people, whereas males tend to enjoy technology
for its own sake (Margolis, Fisher, & Miller,
1999). These differences are also found with other
Table 1. Summary of gender differences in fantasizing about technology (Brunner, Bennett, & Honey,
1998). Reprinted with permission.
   Women                                                          Men
1  fantasize about it as a MEDIUM                                 fantasize about it as a PRODUCT
2  see it as a TOOL                                               see it as a WEAPON
3  want to use it for COMMUNICATION                               want to use it for CONTROL
4  are impressed with its potential for CREATION                  are impressed with its potential for POWER
5  see it as EXPRESSIVE                                           see it as INSTRUMENTAL
6  ask it for FLEXIBILITY                                         ask it for SPEED
7  are concerned with its EFFECTIVENESS                           are concerned with its EFFICIENCY
8  like its ability to facilitate SHARING                         like its ability to facilitate AUTONOMY
9  are concerned with INTEGRATING it into their personal lives    are intent on CONSUMING it
10 talk about wanting to EXPLORE worlds                           talk about using it to EXPLOIT resources and potentialities
11 are EMPOWERED by it                                            want TRANSCENDENCE
ACKNOWLEDGMENT
This work was supported in part by Microsoft
Research and by the EUSES Consortium via NSF
grants ITR 0325273 and CNS 0420533.
REFERENCES
Ames, P. (2003). Gender and learning styles
interactions in students' computer attitudes.
Journal of Educational Computing Research,
28(3), 231-244.
Bandura, A. (1994). Self-efficacy. In V. S. Ramachaudran (Ed.), Encyclopedia of human behavior
(Vol. 4, pp. 71-81). New York: Academic Press.
Beckwith, L., & Burnett, M. (2004, September
26-29). Gender: An important factor in end-user
programming environments? IEEE Symp. Visual
Languages and Human-Centric Computing (pp.
107-114), Rome, Italy.
Beckwith, L., Burnett, M., Wiedenbeck, S., Cook,
C., Sorte, S., & Hastings, M. (2005, April 2-7).
Effectiveness of end-user debugging software
features: Are there gender issues? ACM Confer-
KEY TERMS
End User: Users who are not trained programmers.
End-User Computing: Computer-supported
problem solving by end users, using systems such
as spreadsheets, multimedia authoring tools,
and graphical languages for demonstrating the
desired behavior.
End-User Programming: A term synonymous with end-user computing.
Gender HCI: Human-computer interaction
(HCI) work that takes gender differences into
account.
Overconfidence: Higher self-efficacy than is warranted by a user's abilities.
Self-Efficacy: Belief in one's capabilities to perform a certain task.
Under Confidence: Lower self-efficacy than is warranted by a user's abilities.
This work was previously published in Encyclopedia of Gender and Information Technology, edited by E. Trauth, pp. 398-404,
copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global)
Chapter 1.4
Abstract
Cognitive Informatics (CI) is a transdisciplinary enquiry of the internal information processing mechanisms and processes of the brain and natural intelligence shared by almost all science and engineering disciplines. This article presents
Introduction
The development of classical and contemporary informatics, and the cross-fertilization between computer science, systems science, cybernetics, computer/software engineering, cognitive science, knowledge engineering, and neuropsychology, has led to an extremely interesting new research field known as Cognitive Informatics (Wang, 2002a, 2003a, b, 2006b; Wang, Johnston & Smith, 2002; Wang & Kinsner, 2006).
Informatics is the science of information that
studies the nature of information; its processing,
and ways of transformation between information,
matter, and energy.
Definition 1. Cognitive Informatics (CI) is a
transdisciplinary enquiry of cognitive and information sciences that investigates the internal in-
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
[Figure: The theoretical framework of CI, comprising CI theories (T1 the IME model, T2 the LRMB model, T3 the OAR model, T4 the CI model of the brain, T5 natural intelligence, T6 neural informatics, T7 CI laws of software, T8 perception processes, T9 inference processes, T10 the knowledge system), descriptive mathematics for CI (M1 concept algebra, M2 RTPA, M3 system algebra), and CI applications (A1 future generation computers, A2 capacity of human memory, A3 autonomic computing, A4 cognitive properties of knowledge, A5 simulation of cognitive behaviors, A6 agent systems, A7 CI foundations of software engineering, A8 deductive semantics of software, A9 cognitive complexity of software).]
The Information-Matter-Energy
Model
Information is recognized as the third essence of the natural world, supplementing matter and energy (Wang, 2003b), because the primary function of the human brain is information processing.
Theorem 1. A generic worldview, the IME model, states that the natural world (NW) that forms the context of human beings is a dual world: one aspect of it is the physical or the concrete world (PW), and the other is the abstract or the perceptive world (AW), where matter (M) and energy (E) are used to model the former, and information (I) the latter, that is:

NW ≜ PW || AW
   = p(M, E) || a(I)
   = n(I, M, E)                (1)

where || denotes a parallel relation, and p, a, and n are functions that determine a certain PW, AW, or NW, respectively, as illustrated in Figure 2.
According to the IME model, information
plays a vital role in connecting the physical world
with the abstract world. Models of the natural
world have been well studied in physics and other
natural sciences. However, the modeling of the
abstract world is still a fundamental issue yet to
be explored in cognitive informatics, computing,
software science, cognitive science, brain sciences, and knowledge engineering. Especially
the relationships between I-M-E and their transformations are deemed as one of the fundamental
questions in CI.
Corollary 1. The natural world NW(I, M, E),
particularly part of the abstract world, AW(I), is
cognized and perceived differently by individuals because of the uniqueness of perceptions and
mental contexts among people.
Corollary 1 indicates that although the physical world PW(M, E) is the same to everybody, the
natural world NW(I, M, E) is unique to different
individuals because the abstract world AW(I), as a
part of it, is subjective depending on the information an individual obtains and perceives.
Corollary 2. The principle of transformability between I-M-E states that, according to the IME model, the three essences of the world are predicated to be transformable between each other as described by the following generic functions f1 to f6:

I = f1(M)                  (2.1)
M = f2(I) ≜ f1⁻¹(I)        (2.2)
I = f3(E)                  (2.3)
E = f4(I) ≜ f3⁻¹(I)        (2.4)
E = f5(M)                  (2.5)
M = f6(E) ≜ f5⁻¹(E)        (2.6)
Ik = f: X → Sk = logk X                (3)

[Figure: The Layered Reference Model of the Brain (LRMB): Layer 1 Sensation, Layer 2 Memory, Layer 3 Perception, and Layer 4 Action form the subconscious cognitive processes; Layer 5 Meta-cognitive functions and Layer 6 Higher cognitive functions form the conscious cognitive processes.]

Ib = f: X → Sb = logb X                (4)
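Read as a measure of information, equations (3) and (4) say that the number of digits needed to represent X distinguishable states in a base-k (or base-b) system grows logarithmically with X. A minimal numeric check, assuming the usual rounding up to whole digits:

```python
import math

def information(x, base=2):
    """Digits needed in a base-`base` system to distinguish x states."""
    return math.ceil(math.log(x, base))

print(information(1000, base=2))    # 10 binary digits
print(information(1000, base=10))   # 3 decimal digits
```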
informatics. Fundamental
cognitive mechanisms
of the brain, such as the architecture of the thinking engine, internal knowledge representation,
long-term memory establishment, and roles of
sleep in long-term memory development have
been investigated (Wang & Wang, 2006).
association and premotor cortex in the frontal lobe, the temporal lobe,
sensory cortex in the frontal lobe, visual cortex
in the occipital lobe, primary motor cortex in the
frontal lobe, supplementary motor area in the
frontal lobe, and procedural memory in cerebellum (Wang & Wang, 2006).
The CMM model and the mapping of the four
types of human memory onto the physiological
organs in the brain reveal a set of fundamental
mechanisms of NeI. The OAR model of information/knowledge representation described in
the OAR model of information representation in
the brain section provides a generic description
of information/knowledge representation in the
brain (Wang, 2006h; Wang et al., 2003).
The theories of CI and NeI explain a number
of important questions in the study of NI. Enlightening conclusions derived in CI and NeI include: (a) LTM establishment is a subconscious
process; (b) The long-term memory is established
during sleeping; (c) The major mechanism for
LTM establishment is by sleeping; (d) The general
acquisition cycle of LTM is equal to or longer than
24 hours; (e) The mechanism of LTM establishment is to update the entire memory of information
represented as an OAR model in the brain; and
(f) Eye movement and dreams play an important
Abstraction
Generality
Cumulativeness
Dependency on cognition
Three-dimensional behavior space known
as the object (O), space (S), and time (T)
Sharability
Dimensionless
Weightless
Transformability between I-M-E
Multiple representation forms
Multiple carrying media
Multiple transmission forms
Dependency on media
Dependency on energy
Wearless and time dependency
Conservation of entropy
Quality attributes of informatics
Susceptible to distortion
Scarcity
Mechanisms of Human
Perception Processes
Definition 7. Perception is a set of interpretive
cognitive processes of the brain at the subconscious cognitive function layers that detects,
relates, interprets, and searches internal cognitive
information in the mind.
Perception may be considered as the sixth
sense of human beings, which almost all cognitive life functions rely on. Perception is also an
important cognitive function at the subconscious
layers that determines personality. In other words,
personality is a faculty of all subconscious life
functions and experience cumulated via conscious
life functions.
According to LRMB, the main cognitive
processes at the perception layer are emotion,
motivation, and attitude (Wang, 2005a). The relationship between the internal emotion, motivation,
attitude, and the embodied external behaviors can
be formally and quantitatively described by the
motivation/attitude-driven behavioral (MADB)
model (Wang & Wang, 2006), which demonstrates
that complicated psychological and cognitive
mental processes may be formally modeled and
rigorously described by mathematical means
(Wang, 2002b, 2003d, 2005c).
2005c). All formal logical inferences and reasonings can only be carried out on the basis of
abstract properties shared by a given set of objects
under study.
Definition 8. Abstraction is a process to elicit a
subset of objects that shares a common property
from a given set of objects and to use the property
to identify and distinguish the subset from the
whole in order to facilitate reasoning.
Abstraction is a gifted capability of human
beings. Abstraction is a basic cognitive process
of the brain at the metacognitive layer according
to LRMB (Wang et al., 2006). Only by abstraction can important theorems and laws about the
objects under study be elicited and discovered
from a great variety of phenomena and empirical
observations in an area of inquiry.
Definition 9. An inference is a formal cognitive process that reasons a possible causality from given premises, based on known causal relations between a pair of cause and effect proven true by empirical arguments, theoretical inferences, or statistical regulations.
Formal inferences may be classified into the deductive, inductive, abductive, and analogical categories (Wang, 2005c). Deduction is a cognitive process by which a specific conclusion necessarily follows from a set of general premises. Induction is a cognitive process by which a general conclusion is drawn from a set of specific premises based on three designated samples in reasoning or experimental evidences. Abduction is a cognitive process by which an inference is made to the best explanation or most likely reason for an observation or event. Analogy is a cognitive process by which it is inferred that the same relations hold between different domains or systems, and/or which examines whether, if two things agree in certain respects, they probably agree in others. A summary of the formal
Denotational Mathematics
for CI
The history of sciences and engineering shows that new problems require new forms of mathematics.
[Table: Formal descriptions of the inference techniques (abstraction, deduction, induction, abduction, and analogy) in primitive and composite forms, with their usage; for example, analogy is used to predict a similar phenomenon or consequence based on a known observation.]
[Figure: The framework of formal knowledge, relating a discipline and its doctrines to definitions, propositions, hypotheses, theories, concepts, factors, lemmas, corollaries, truths, phenomena, theorems, empirical verifications, formal proofs, laws, arguments, principles, instances, rules, models, case studies, statistical norms, methodologies, and algorithms.]
Concept Algebra
A concept is a cognitive unit (Ganter & Wille, 1999; Quillian, 1968; Wang, 2006e) by which the meanings and semantics of a real-world or an abstract entity may be represented and embodied based on the OAR model.
Definition 10. An abstract concept c is a 5-tuple, that is:

c ≜ (O, A, R^c, R^i, R^o)                (7)

where

O is a nonempty set of objects of the concept, O = {o1, o2, ..., om} ⊆ ℘U, where ℘U denotes a power set of U;
A is a nonempty set of attributes, A = {a1, a2, ..., an} ⊆ ℘M.
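A concept in the sense of Definition 10 can be mocked up as a small data structure. The sketch below is only an illustration: the relation sets R^c, R^i, and R^o are collapsed into plain Python sets, and the inheritance test is a deliberate simplification rather than the formal operation of concept algebra.

```python
from dataclasses import dataclass, field

@dataclass
class Concept:
    """A 5-tuple c = (O, A, Rc, Ri, Ro) in the spirit of Definition 10."""
    O: set = field(default_factory=set)    # objects of the concept
    A: set = field(default_factory=set)    # attributes of the concept
    Rc: set = field(default_factory=set)   # internal (object, attribute) relations
    Ri: set = field(default_factory=set)   # input relations from other concepts
    Ro: set = field(default_factory=set)   # output relations to other concepts

    def inherits(self, other: "Concept") -> bool:
        """Crude inheritance test: self carries every attribute of `other`."""
        return other.A <= self.A

bird = Concept(O={"robin", "swan"}, A={"has_wings", "lays_eggs"})
swan = Concept(O={"swan"}, A={"has_wings", "lays_eggs", "white"})
print(swan.inherits(bird))   # True
```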
Definition 11. Concept algebra is a new mathematical structure for the formal treatment of
abstract concepts and their algebraic relations,
operations, and associative rules for composing complex concepts and knowledge (Wang,
2006e).
Concept algebra deals with the algebraic relations and associational rules of abstract concepts.
The associations of concepts form a foundation
to denote complicated relations between concepts
in knowledge representation. The associations
among concepts can be classified into nine categories, such as inheritance, extension, tailoring, substitute, composition, decomposition, aggregation, specification, and instantiation.
[Figure: The nine concept association operations of concept algebra (inheritance, extension, tailoring, substitute, instantiation, composition, decomposition, aggregation, and specification), illustrated between two concepts c1 = (O1, A1, R1) and c2 = (O2, A2, R2).]
CN ≜ Rk: XCi → XCj                (9)

where Rk ∈ R.
Because the relations between concepts are
transitive, the generic topology of knowledge is
a hierarchical concept network. The advantages
of the hierarchical knowledge architecture K
in the form of concept networks are as follows:
(a) Dynamic: The knowledge networks may be
updated dynamically along with information
acquisition and learning without destructing the
existing concept nodes and relational links. (b)
Evolvable: The knowledge networks may grow
adaptively without changing the overall and existing structure of the hierarchical network.
A summary of the algebraic relations and operations of concepts defined in CA are provided
in Table 2.
[Equations (11) to (14): a program is defined as a dispatch of a set of processes P_k triggered by events e_k; each process is a composition of metaprocesses p_i(k) connected by process relations r_ij(k), j = i + 1; and equation (14) enumerates the set P of RTPA metaprocesses.]
The definitions, syntaxes, and formal semantics of each of the metaprocesses and process
relations may be referred to RTPA (Wang, 2002b,
2006f). A complex process and a program can
be derived from the metaprocesses by the set of
algebraic process relations. Therefore, a program
is a set of embedded relational processes as described in Theorem 5.
A summary of the metaprocesses and their algebraic operations in RTPA is provided in Table 2.
Definition 18. A process relation is a composing rule for constructing complex processes by using the metaprocesses. The process relations R of RTPA are a set of 17 composing operations and rules to build larger architectural components and complex system behaviors using the metaprocesses.
[Figure and Equation (16): the composition of a supersystem S from two subsystems S1 and S2, each with components C1, C2, behaviors B1, B2, and input, output, and internal relations R^i, R^o, R^c.]
There was a myth on an ideal system in conventional systems theory that supposes the work done by the ideal system, W(S), may be greater than the sum of the work done by all its components W(ei), that is:

W(S) ≥ Σ (i = 1 to n) W(ei)
Theorem 7. Wang's first law of system science, system fusion, states that system conjunction
or composition between two systems S1 and S2
creates new relations R12 and/or new behaviors
(functions) B12 that are solely a property of the
new supersystem S determined by the sizes of
the two intersected component sets #(C1) and
#(C2), that is:
R12 = #(R) - (#(R1) + #(R2))
    = (#(C1 + C2))² - ((#(C1))² + (#(C2))²)
    = 2 (#(C1) × #(C2))                (18)

W(S) ≥ Σ (i = 1 to n) W(ei), ei ∈ S                (19)
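A quick numeric check of equation (18), with made-up component counts: for #(C1) = 3 and #(C2) = 2, fusion creates (3 + 2)² - (3² + 2²) = 25 - 13 = 12 new relations, which is 2 × 3 × 2.

```python
def new_relations(c1, c2):
    """Relations created by fusing two systems with c1 and c2 components (eq. 18)."""
    return (c1 + c2) ** 2 - (c1 ** 2 + c2 ** 2)   # equals 2 * c1 * c2

print(new_relations(3, 2))   # 12
```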
Applications of CI
[Table 2. The relational and compositional operations of concept algebra, system algebra, and RTPA, including (among others) super/sub relation, related/independent, overlapped, equivalent, consistent, conjunction, elicitation, comparison, definition, difference, inheritance, extension, tailoring, substitute, composition, decomposition, aggregation/generalization, specification, and instantiation, together with the RTPA metaprocesses (assignment, evaluation, addressing, memory allocation, memory release, read, write, input, output, timing, duration, increase, decrease, exception detection, skip, stop, system) and process relations (sequence, jump, branch, switch, while-loop, repeat-loop, for-loop, recursion, procedure call, parallel, concurrence, interleave, pipeline, interrupt, and time-, event-, and interrupt-driven dispatch).]
2006). The fundamental research in CI also creates an enriched set of contemporary denotational
mathematics (Wang, 2006c), for dealing with
the extremely complicated objects and problems
in natural intelligence, neural informatics, and
knowledge manipulation.
The theory and philosophy behind the next
generation computers and computing methodologies are CI (Wang, 2003b, 2004). It is commonly
[Figure: The architecture of the Cognitive Machine (CM), consisting of an inference engine (IE) and a perception engine (PE). The IE comprises the knowledge manipulation unit (KMU), behavior manipulation unit (BMU), experience manipulation unit (EMU), and skill manipulation unit (SMU), which operate between the LTM and ABM memories in response to enquiries and produce knowledge, behaviors, experience, and skills. The PE comprises the behavior perception unit (BPU) and experience perception unit (EPU), which operate on the SBM in response to external stimuli and interactions.]

CM ≜ IE || PE                (20)

   = ( KMU    // The knowledge manipulation unit
     || BMU   // The behavior manipulation unit
     || EMU   // The experience manipulation unit
     || SMU   // The skill manipulation unit
     )
   || ( BPU   // The behavior perception unit
     || EPU   // The experience perception unit
     )                (21)
Autonomic Computing
The approaches to implement intelligent systems
can be classified into those of biological organisms,
silicon automata, and computing systems. Based
on CI studies, autonomic computing (Wang, 2004)
is proposed as a new and advanced computing
technique built upon the routine, algorithmic, and
adaptive systems as shown in Table 3.
The approaches to computing can be classified into two categories known as imperative and
autonomic computing. Corresponding to these,
[Table 3. Classification of computing behaviors by the type of event (I) and behavior (O): a constant event with a constant behavior yields routine behavior; a constant event with a variable behavior yields adaptive behavior; a variable event with a constant behavior yields algorithmic behavior; and a variable event with a variable behavior yields autonomic behavior. Routine and algorithmic behaviors are deterministic, while adaptive and autonomic behaviors are nondeterministic.]

[Table: Ways of acquisition of cognitive entities: abstract concepts (knowledge) can be acquired directly or indirectly, whereas empirical actions, experience, and skills can be acquired directly only.]
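Purely as a restatement of the classification in Table 3 above (the function and argument names are invented), the four behavior types follow from whether the event and the behavior are constant or variable:

```python
def behavior_type(event_variable: bool, behavior_variable: bool) -> str:
    """Classify computing behavior by whether the event (I) and behavior (O) vary."""
    table = {
        (False, False): "routine",
        (False, True):  "adaptive",
        (True,  False): "algorithmic",
        (True,  True):  "autonomic",
    }
    return table[(event_variable, behavior_variable)]

print(behavior_type(event_variable=True, behavior_variable=True))   # autonomic
```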
Agent Systems
Definition 29. A software agent is an intelligent
software system that autonomously carries out
robotic and interactive applications based on
goal-driven mechanisms (Wang, 2003c).
Because a software agent may be perceived as
an application-specific virtual brain (see Theorem
3), the behaviors of an agent mirror human behaviors. The fundamental characteristics of agent-based systems are autonomic computing, goal-driven action generation, and knowledge-based machine learning. In recent CI research, perceptivity is recognized as the sixth sense that serves
the brain as the thinking engine and the kernel of
the natural intelligence. Perceptivity implements
self-consciousness inside the abstract memories of
the brain. Almost all cognitive life functions rely
on perceptivity such as consciousness, memory
searching, motivation, willingness, goal setting,
emotion, sense of spatiality, and sense of motion.
The brain may be stimulated by external and
internal information, which can be classified as
willingness-driven (internal events such as goals,
motivation, and emotions), event-driven (external
events), and time-driven (mainly external events
triggered by an external clock). Unlike a computer,
the brain works in two approaches: the internal
willingness-driven processes, and the external
event- and time-driven processes. The external
information and events are the major sources
that drive the brain, particularly for conscious
life functions.
Recent research in CI reveals that the foundations of agent technologies and autonomic
computing are CI, particularly goal-driven action generation techniques (Wang, 2003c). The
LRMB model (Wang et al., 2006) described in
the Layered Reference Model of the Brain section
may be used as a reference model for agent-based
technologies. This is a fundamental view toward
the formal description and modeling of architectures and behaviors of agent systems, which are
created to do something repeatable in context,
to extend human capability, reachability, and/or
memory capacity. It is found that both human
and software behaviors can be described by a
3-dimensional representative model comprising action, time, and space. For agent system
behaviors, the three dimensions are known as
mathematical operations, event/process timing,
and memory manipulation (Wang, 2006g). The
3-D behavioral space of agents can be formally
described by RTPA that serves as an expressive
mathematical means for describing thoughts and
notions of dynamic system behaviors as a series
of actions and cognitive processes.
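As a loose sketch only (the class and method names are invented and do not come from RTPA or the LRMB model), the three driving sources described above map naturally onto an agent whose actions are dispatched by internal goals, external events, and an external clock:

```python
class ToyAgent:
    """Illustrative agent driven by the three stimulus sources named above."""

    def __init__(self):
        self.goals = ["tidy inbox"]           # willingness-driven (internal goals)

    def on_event(self, event):                # event-driven (external events)
        return f"react to {event}"

    def on_tick(self, clock):                 # time-driven (external clock)
        return f"run scheduled task at {clock}"

    def pursue_goal(self):                    # willingness-driven behaviour
        return f"work towards: {self.goals[0]}" if self.goals else "idle"

agent = ToyAgent()
print(agent.on_event("new message"))
print(agent.on_tick("09:00"))
print(agent.pursue_goal())
```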
CI Foundations of
Software Engineering
Software is an intellectual artifact and a kind of
instructive information that provides a solution for
a repeatable computer application, which enables
existing tasks to be done easier, faster, and smarter,
or which provides innovative applications for the
industries and daily life. Large-scale software
systems are highly complicated systems that have
never been handled or experienced before by mankind.
The fundamental cognitive characteristics
of software engineering have been identified as
follows (Wang, 2006g):
[Equation (22): the semantic function of a program p, fθ(p), is defined over its sets of executing times T(p) and variables S(p), yielding a matrix of values v_p(t_i, s_j) of each variable s_1, s_2, ..., s_m over each time interval (t_0, t_1], (t_1, t_2], and so on.]
[Table: RTPA notations for the basic control structures (sequence, branch, switch, for-loop, repeat-loop, while-loop, function call, recursion, parallel, interrupt) and their cognitive weights. Equation (24): the cognitive complexity of a software system combines a weighted sum w(k, i) over its basic control structures with the number of objects in its component logical models and components, OBJ(CLM_k) + OBJ(C_k), measured in function-objects [FO].]
[Table: A comparison of complexity measures for four sample programs (IBS (a), IBS (b), MaxFinder, SIS_Sort): time complexity (Ct [OP], up to O(n), O(n), and O(m+n) for the latter three), cyclomatic complexity (Cm [-]), symbolic complexity (Cs [LOC]), operational complexity (Cop [F]: 13, 34, 115, 163), architectural complexity (Ca [O]: 5, 5, 7, 11), and cognitive complexity (Cc [FO]: 65, 170, 805, 1,793).]
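The cognitive complexity figures in the table are consistent with treating Cc as the product of the operational and architectural complexities, which a one-line check confirms (the pairing of values with programs follows the reconstruction above):

```python
operational   = [13, 34, 115, 163]   # Cop [F]
architectural = [5, 5, 7, 11]        # Ca [O]
print([cop * ca for cop, ca in zip(operational, architectural)])
# [65, 170, 805, 1793]
```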
Conclusions
This article has presented an intensive survey of
the recent advances and groundbreaking studies in cognitive informatics, particularly its theoretical
framework, denotational mathematics, and main
application areas. CI has been described as a new
discipline that studies the natural intelligence and
internal information processing mechanisms of
the brain, as well as processes involved in perception and cognition. CI is a new frontier across
disciplines of computing, software engineering,
cognitive sciences, neuropsychology, brain sciences, and philosophy in recent years. It has
been recognized that many fundamental issues
in knowledge and software engineering are based
on the deeper understanding of the mechanisms
Acknowledgment
The author would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) for its support to this work. The author
would like to thank the anonymous reviewers for
their valuable comments and suggestions.
References
Bell, D. A. (1953). Information theory. London:
Pitman.
Ganter, B., & Wille, R. (1999). Formal concept
analysis (pp. 1-5). Springer.
Hoare, C. A. R. (1985). Communicating sequential
processes. Prentice Hall.
Jordan, D. W., & Smith, P. (1997). Mathematical
von Neumann, J. (1946). The principles of large-scale computing machines. Reprinted in Annals
of History of Comp., 3(3), 263-273.
informatics: A new
transdisciplinary research field. Brain and Mind:
A Transdisciplinary Journal of Neuroscience and
This work was previously published in International Journal of Cognitive Informatics and Natural Intelligence, Vol. 1, Issue
1, edited by Y. Wang, pp. 1-27, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of
IGI Global).
Chapter 1.5
Introduction
News reports do not frequently mention many
problems or accidents caused by human error.
The specialty of human factors seeks to avoid
human error by making certain that computers
and all other equipment are designed to be easy to
understand and use; costly human errors are thus
minimised. This article provides a basic overview
of the subject of human factors as it pertains to
problem and error avoidance in computerised public information systems. When computer/system
design does not adequately consider human capability, the performance of the computer/system
and the user will be below desired levels.
used did not consider human performance limitations; for example, some designs presented too
much information at the same time or in the wrong
order for humans to be able to successfully operate
controls. Or, the arrangement of controls made
them difficult to reach quickly and easily. From
this discovery came the concept that human users, their work activities, and the contexts of the
activities had to be thought of as different parts
of a whole system and that each depends upon the
other for successful operation (Bailey, 1996).
After WWII the discipline of human factors
became a specialised knowledge area as it became
apparent that the human element of any system
had to be considered if the capabilities of new
technologies were to be efficiently exploited.
The older strategy of modifying designs over a
long period of time through user experiences was
inadequate; rates of change had become so rapid
that products were obsolete before improvements
could be added. Now, the strategy often used
by successful design environments is to include
human factors in design and development. When
properly managed, products or systems that use
human factors knowledge are more efficient,
safer, and more pleasing to use because they are
designed to accommodate human performance
capabilities (Norman, 1988).
Human factors is an extremely broad technical and scientific discipline; founders of the
first national and international human factors
organisations came from such diverse fields as
engineering, design, education, computer technology, psychology, and medicine. Through its
diversity human factors is able to draw upon and
combine knowledge from any area when working with human and system performance issues.
Due to the complexity of human behaviour, human factors specialists emphasise in their work
an iterative empirical approach. First, an initial
recommendation or interface design is made and
then laboratory or field studies are conducted to
test this initial design (the prototype). When deficits are identified, modifications are made and further testing is performed. This process continues until significant
problems are no longer found. Finally, validation
is achieved through observation in the field after
system deployment.
This emphasis on empirical work tends to shape
how human factors specialists perform their roles.
Irrespective of the specific methodology chosen
for gathering data about tasks, users, and the use
of products, human factors work tends to result
in product improvements likely to be economical,
easy, and efficient to use from the beginning of use;
the cost of and need to go back and fix problems
when human factors is not used is avoided.
Human factors can also be called ergonomics.
As the term human factors is in more common usage in the computer field, it is used for
this article.
User-Centred Design
User-centred design (UCD) refers to the design
of interaction between users and the system,
called interaction design (Preece, Rogers, &
Sharp, 2002). It models a system from a user
perspective and focuses on the usability of an
interactive software system. The core objective
is to effectively support users in executing their
tasks (Earthy, 2001).
Usability is recognised as one of the most
important quality characteristics of software
intensive systems and products. Usability gives many benefits, including increased productivity, enhanced quality of work, reductions in support and training costs, and improved user satisfaction (ISO13407, 1999).
The prevailing paradigm is that usable products and systems are created through processes of user-centred design (UCD). The UCD process model is
illustrated in Figure 1. Achieving quality in use
requires this type of user-centred design process
and the use of appropriate usability evaluation
techniques. Usability is defined as a high level
quality objective: to achieve effectiveness, efficiency, and satisfaction. This requires not only
ease of use but also appropriate functionality,
reliability, computer performance, satisfaction,
comfort, and so on.
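As a rough illustration of how effectiveness, efficiency, and satisfaction might be quantified in a usability test, the short Python sketch below computes the three measures from invented session data; the records, names, and formulas are illustrative assumptions rather than anything prescribed by ISO 13407.

# Hypothetical usability-test records: (task completed?, time in seconds, satisfaction rating 1-7).
# All values are invented for illustration.
sessions = [
    (True, 95, 6), (True, 120, 5), (False, 300, 2),
    (True, 80, 7), (True, 140, 5), (False, 240, 3),
]

completed = [s for s in sessions if s[0]]

effectiveness = len(completed) / len(sessions)                   # proportion of tasks completed
efficiency = sum(t for _, t, _ in completed) / len(completed)    # mean time per completed task (s)
satisfaction = sum(r for _, _, r in sessions) / len(sessions)    # mean rating on a 1-7 scale

print(f"Effectiveness: {effectiveness:.0%}")
print(f"Efficiency:    {efficiency:.0f} s per completed task")
print(f"Satisfaction:  {satisfaction:.1f} / 7")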
Usability Facilities
Usability laboratories have become an important
tool. These laboratories, together with appropriate
field-kit evaluations, enable us to meet the growing
number of requests for usability support, chiefly
user-based evaluations of development prototypes
and competitors' products. Together with the
appropriate development processes, laboratory
studies help predict and deal with specific usability
problems during product development. Such a
laboratory also provides an extremely powerful
tool to identify and publicise general usability issues that affect overall product use. Rubin (1994)
describes usability testing in detail and gives
examples of typical usability laboratories.
Implementation Issues
Although improved technology will always have
a critical role in system design, it is often the case
that human factors considerations will take up the
majority of the time of IT designers and managers. Much is known about the nature of human
error, the conditions that encourage error, and
hardware/software designs that are error resistant
(Norman, 1990), and much of this is now well
recognised in IT training literature (Salvendy,
1997). While human factors is recognised and
used by many information system developers and
designers, there are those who take the perspective
that users who want access must accommodate
themselves to an information system.
The attitude of the employees responsible for administering a public information system is also critical for success. New users or those with problems may need to ask an employee (a human-human interface) for help. Employees who have
a negative attitude will too often give the user a
negative attitude towards the system (Cialdini,
1993). Including employees in processes will help
to avoid the problem of employees undermining
the operation of a public information system; following are some points to use in the avoidance of
employee resistance.
Cognitive Perception
In IT projects there are four common cognitive perceptions among employees that must be overcome:
(1) automation is mandated arbitrarily; (2) the
new computer system will be unreliable; (3) the
new system will increase rather than decrease
the work burden; and (4) they (employees) will
neither understand a system nor be able to operate it. Free and open communication throughout
an IT implementation process is an important
avenue for reducing employee resistance coming
from these issues. Negative perceptions regarding the introduction of IT may be overcome to a
significant extent through the encouragement of
employee decision making during an IT introduction process.
Attitudes
Although user attitudes tend to be against change,
once new technology is seen as bringing desired
benefits, attitudes will begin to adapt. In particular,
acceptance is much more likely when computing
is presented as complementary to human skills,
enhancing rather than replacing them (Petheram,
1989; Rosenbrock, 1981). Changing employee
attitudes is all the more important since word of
mouth is a critical aspect of the process by which
technological innovations are spread (Czepiel,
1974). Dickson, a pioneer in the study of human
relations and IT, noted that, when it comes to
Future Trends
When considering the future of human factors
in public information systems, it is useful to
first recognise that an astonishingly rapid rate of
technological change has been and is the norm for
computing systems. Whole buildings and highly
trained teams were once needed for a single computer; in contrast, more powerful systems can now
be found on the top of the desk of the least-skilled
person in an organisation. Given this rapid and
dramatic pace of change and development, it is
challenging to predict future directions.
Despite this dramatic pace, which defies the imagination, it is possible to observe that, whatever technology may bring, the core emphasis of human factors will remain unchanged. For any
technological change to be effective, it must be
readily understandable and usable by an average
user. For example, while nearly instantaneous
processing and presentation of data is becoming
the norm, those data are of no value if they are confusing or overwhelming.
Any future system, regardless of any technological advance it may achieve, must include in its
design consideration how humans comprehend,
manage, and process information. For example,
Miller (1956) reported that the average person
is unable to remember more than seven pieces of information.
Conclusion
New technologies are often seen as an ultimate
solution. While this is essentially correct, it is also
necessary to recognise that improved technology
does not change human capability. Computers and
computing systems, as with public information
systems, can only be as effective as users are
capable. Designs that expect users to learn to perform beyond usual capabilities ignore the reality
that humans are limited and, unlike technology,
cannot be improved or upgraded. For this reason
it is essential that the concept of user-centred
design be part of any public information system
design process.
As implied, the introduction of user-centred design procedures into IT system development ensures that success stories associated with
usability engineering will continue. Advanced
development projects that examine psychological factors underlying consumer reactions and
expectations include user-centred evaluations of
prototypes and match human capabilities with
system designs. These are most likely to result in
public information systems that are accepted by
both the public and governmental employees.
The inclusion of human factors into public
information system development and design
References
Bailey, R. W. (1996). Human performance
engineering (3rd ed.). Upper Saddle River, NJ:
Prentice Hall.
Cialdini, R. (1993). Influence. New York: Quill.
Cooper, A. (1999). The inmates are running the
asylum. Indianapolis, IN: SAMS.
Czepiel, J. A. (1974). Word of mouth processes in
the diffusion of a major technological innovation.
Journal of Marketing Research, II, 172-180.
Davis, G. B., & Olsen, N. (1985). Management
information systems. New York: McGraw-Hill.
Dickson, G. W. (1968). Management information decision systems. Business Horizons, II(6),
17-26.
Dix, A., Finlay, J., Abowd, G., & Beale, R. (1998).
Human-computer interaction (2nd ed.). London:
Prentice Hall.
Earthy, J. (2001). The improvement of human-centered processes facing the challenge and reaping
the benefit of ISO 13407. International Journal
of Human-Computer Studies, 55, 553-585.
Helander, M. (1997). Handbook of human-computer interaction. Amsterdam: Elsevier Science
Publishers B.V.
Key Terms
Cognitive Engineering: Understanding and
predicting how changes in a task environment
will influence task performance.
Human-Computer Interaction (HCI):
The study of people, computer technology, and
the ways these influence each other (Dix et al.,
1998).
Human Factors (or Ergonomics): The scientific discipline concerned with the understanding
of interactions among humans and other elements
of a system, and the profession that applies theory,
principles, data, and methods to design in order
to optimise human well-being and overall system
performance (IEA, 2005).
This work was previously published in Encyclopedia of Digital Government, edited by A. Anttiroiko and M. Malkia, pp. 940-946,
copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.6
INTRODUCTION
The history of task analysis is nearly a century
old, with its roots in the work of Gilbreth (1911)
and Taylor (1912). Taylor's scientific management
provided the theoretical basis for production-line
manufacturing. The ancient manufacturing approach using craft skill involved an individual,
or a small group, undertaking, from start to finish, many different operations so as to produce a
single or small number of manufactured objects.
Indeed, the craftsperson often made his or her
own tools with which to make end products. Of
course, with the growth of civilisation came specialisation, so that the carpenter did not fell the
trees or the potter actually dig the clay, but still
each craft involved many different operations by
each person. Scientific management's novelty was
the degree of specialisation it engendered: each
person doing the same small number of things
repeatedly.
Taylorism thus involved some large operation,
subsequently called a task, that could be broken
down into smaller operations, called subtasks.
Task analysis came into being as the method
BACKGROUND
Stanton (2004) suggests that "[s]implistically, most task analysis involves (1) identifying tasks, (2) collecting task data, (3) analyzing this data so that the tasks are understood, and then (4) producing a documented representation of the analyzed tasks (5) suitable for some engineering purpose."
While there are many similar such simplistic
descriptions, Stanton's five-item list provides an
adequate description of the stages involved in
task analysis, although the third and fourth are,
in practice, usually combined. The following
four subsections deal with them in more detail,
but with two provisos. First, one should always
start with Stanton's final item of establishing the
purpose of undertaking a task analysis. Second,
an iterative approach is always desirable because
how tasks are performed is complicated.
Identifying Tasks
In the context of task scenarios, which Diaper
(2002a, 2002b) describes as low fidelity task
simulations, Carroll (2000) rightly points out that
there is an infinity of possible usage scenarios.
Thus, only a sample of tasks can be analysed. The
tasks chosen will depend on the task analysis
purpose. For new systems, one usually starts
with typical tasks. For existing systems and well-developed prototypes, one is more likely to be
concerned with complex and difficult tasks, and
important and critical ones, and, when a system
he says parenthetically, "N.B. slightly more accurately perhaps, from a solipsistic position, it is the mapping between one model of the assumed real world and another."
At one end of the task-fidelity spectrum there
is careful, detailed task observation, and at the
other, when using scenarios of novel future systems, task data may exist only in the task analyst's imagination. Between these extremes, there is virtually every
possible way of collecting data: by interviews,
questionnaires, classification methods such as card
sorting, ethnography, participative design, and so
forth. Cordingley (1989) provides a reasonable
summary of many such methods. The primary
constraint on such methods is one of perspective, maintaining a focus on task performance.
For example, Diaper (1990) describes the use of
task-focused interviews as an appropriate source
of data for a requirements analysis of a new generation of specialised computer systems that were
some years away from development.
Table 1. An abridged classification of some task analysis methods (based on Limbourg & Vanderdonkt, 2004)

Method    Origin
HTA       Cognitive analysis
GOMS      Cognitive analysis
MAD*      Psychology
GTA       Computer-supported cooperative work
MUSE      Software engineering & human factors
TKS       Cognitive analysis & software engineering
CTT       Software engineering
Dianne+   Software engineering & process control
TOOD      Process control

(The original table further characterises each method by its planning constructs, hierarchy, leaves, operational level, and operationalisation, e.g., plans, goals and constructors, unit tasks, basic tasks, scenarios, procedures, pre- and postconditions, operators, actions, and input/output transitions.)
FUTURE TRENDS
While recognising the difficulty, perhaps impossibility, of reliably predicting the future, Diaper and
Stanton (2004b) suggest that one can reasonably
predict possible futures, plural. They propose that "[f]our clusters of simulated future scenarios for task analysis organized post hoc by whether an agreed theory, vocabulary, etc., for task analysis emerges and whether task analysis methods become more integrated in the future." While not predicting which future or combination will occur, or when, they are, however, confident that "[p]eople will always be interested in task analysis, for task analysis is about the performance of work," even though they admit that "[l]ess certain is whether it will be called task analysis in the future."
Probably because of its long history, there is
an undoubted need for the theoretical basics that
underpin the task concept and task analysis to be
revisited, as Diaper (2004) attempts to do for the
development of STA. Diaper and Stanton (2004b)
also suggest that some metamethod of task analysis needs to be developed and that more attention
needs to be placed on a wide range of types of
validation, theory, methods, and content, and also
on methods' predictive capability to support design and for other engineering purposes (Annett,
2002; Stanton, 2002; Stanton & Young, 1999). At
least two other areas need to be addressed in the
future: first, how work is defined, and second, the
currently ubiquitous concept of goals.
Task analysis has always been concerned with
the achievement of work. The work concept,
however, has previously been primarily concerned
with employment of some sort. What is needed, as
Karat, Karat, and Vergo (2004) argue, is a broader
definition of work. Their proposals are consistent
CONCLUSION
Two handbooks (although at about 700 pages
each, neither is particularly handy) on task
analysis have recently become available: Diaper
and Stanton (2004a) and Hollnagel (2003a). Both
are highly recommended and, while naturally
the author prefers the former because of his personal involvement, he also prefers the Diaper and
Stanton tome because it provides more introductory material, is better indexed and the chapters
more thoroughly cross-referenced, comes with a
CD-ROM of the entire book, and, in paperback,
is substantially cheaper than Hollnagels book.
No apology is made for citing the Diaper and
Stanton book frequently in this article, or for the
number of references below, although they are a
fraction of the vast literature explicitly about task
analysis. Moreover, as task analysis is at the heart
of virtually all HCI because it is fundamentally
about the performance of systems, then whether
called task analysis or not, nearly all the published
HCI literature is concerned in some way with the
concept of tasks and their analysis.
REFERENCES
Anderson, R., Carroll, J., Grudin, J., McGrew,
J., & Scapin, D. (1990). Task analysis: The oft
missed step in the development of computer-human interfaces. Its desirable nature, value and
role. Human-Computer Interaction: Interact90,
1051-1054.
Annett, J. (2002). A note on the validity and reliability of ergonomic methods. Theoretical Issues
in Ergonomics Science, 3, 228-232.
Annett, J. (2003). Hierarchical task analysis. In
E. Hollnagel (Ed.), Handbook of cognitive task
design (pp. 17-36). Mahwah, NJ: Lawrence Erlbaum Associates.
Diaper, D. (1989b). Task observation for humancomputer interaction. In D. Diaper (Ed.), Task
analysis for human-computer interaction (pp.
210-237). West Sussex, UK: Ellis Horwood.
Coronado, J., & Casey, B. (2004). A multicultural approach to task analysis: Capturing user
requirements for a global software application. In
D. Diaper & N. A. Stanton (Eds.), The handbook
of task analysis for human-computer interaction
(pp. 179-192). Mahwah, NJ: Lawrence Erlbaum
Associates.
(Eds.), The handbook of task analysis for humancomputer interaction (pp. 603-619). Mahwah, NJ:
Lawrence Erlbaum Associates.
Dix, A., Ramduny-Ellis, D., & Wilkinson, J.
(2004). Trigger analysis: Understanding broken
tasks. In D. Diaper & N. A. Stanton (Eds.), The
handbook of task analysis for human-computer
interaction (pp. 381-400). Mahwah, NJ: Lawrence
Erlbaum Associates.
Dowell, J., & Long, J. (1989). Towards a conception
for an engineering discipline of human factors.
Ergonomics, 32(11), 1513-1535.
Gilbreth, F. B. (1911). Motion study. Princeton,
NJ: Van Nostrand.
Greenberg, S. (2004). Working through task-centred system design. In D. Diaper & N. A. Stanton
(Eds.), The handbook of task analysis for humancomputer interaction (pp. 49-68). Mahwah, NJ:
Lawrence Erlbaum Associates.
Hollnagel, E. (2003a). Handbook of cognitive
task design. Mahwah, NJ: Lawrence Erlbaum
Associates.
Hollnagel, E. (2003b). Prolegomenon to cognitive
task design. In E. Hollnagel (Ed.), Handbook of
cognitive task design (pp. 3-15). Mahwah, NJ:
Lawrence Erlbaum Associates.
John, B. E., & Kieras, D. E. (1996). Using GOMS
for user interface design and evaluation: Which
technique? ACM Transactions on Computer-Human Interaction, 3, 320-351.
Johnson, H., & Johnson, P. (1987). The development of task analysis as a design tool: A method
for carrying out task analysis (ICL Report).
Unpublished manuscript.
Johnson, P., Diaper, D., & Long, J. (1984). Tasks,
skills and knowledge: Task analysis for knowledge based descriptions. Interact84: First IFIP
Conference on Human-Computer Interaction,
1, 23-27.
Karat, J., Karat, C.-M., & Vergo, J. (2004). Experiences people value: The new frontier for task
analysis. In D. Diaper & N. A. Stanton (Eds.), The
handbook of task analysis for human-computer
interaction (pp. 585-602). Mahwah, NJ: Lawrence
Erlbaum Associates.
Kieras, D. (2004). GOMS models for task analysis.
In D. Diaper & N. A. Stanton (Eds.), The handbook
of task analysis for human-computer interaction
(pp. 83-116). Lawrence Erlbaum Associates.
Levinson, S. C. (1983). Pragmatics. Cambridge,
MA: Cambridge University Press.
Limbourg, Q., & Vanderdonkt, J. (2004). Comparing task models for user interface design. In
D. Diaper & N. A. Stanton (Eds.), The handbook
of task analysis for human-computer interaction
(pp. 135-154). Mahwah, NJ: Lawrence Erlbaum
Associates.
Long, J. (1997). Research and the design of human-computer interactions or What happened
to validation? In H. Thimbleby, B. O'Conaill, &
P. Thomas (Eds.), People and computers XII (pp.
223-243). New York: Springer.
Ormerod, T. C., & Shepherd, A. (2004). Using task
analysis for information requirements specification: The sub-goal template (SGT) method. In D.
Diaper & N. A. Stanton (Eds.), The handbook of
task analysis for human-computer interaction (pp.
347-366). Lawrence Erlbaum Associates.
Shepherd, A. (2001). Hierarchical task analysis.
London: Taylor and Francis.
Stanton, N. A. (2002). Developing and validating theory in ergonomics. Theoretical Issues in
Ergonomics Science, 3, 111-114.
Stanton, N. A. (2004). The psychology of task
analysis today. In D. Diaper & N. A. Stanton
(Eds.), The handbook of task analysis for humancomputer interaction (pp. 569-584). Mahwah, NJ:
Lawrence Erlbaum Associates.
KEY TERMS
Application Domain: That part of the assumed real world that is changed by a work system
to achieve the work system's goals.
Goal: A specification of the desired changes
a work system attempts to achieve in an application domain.
Performance: The quality, with respect to
both errors and time, of work.
Subtask: A discrete part of a task.
Task: The mechanism by which an application
domain is changed by a work system to achieve
the work system's goals.
Work: The change to an application domain
by a work system to achieve the work system's goals.
Work System: That part of the assumed real
world that attempts to change an application domain to achieve the work systems goals.
This work was previously published in Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 579-587, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.7
Evaluating Mobile
Human-Computer Interaction
Chris Baber
The University of Birmingham, UK
Abstract
In this chapter the evaluation of human computer
interaction (HCI) with mobile technologies is
considered. The ISO 9241 notion of context of
use helps to define evaluation in terms of the
fitness-for-purpose of a given device to perform
given tasks by given users in given environments.
It is suggested that conventional notions of usability can be useful for considering some aspects
of the design of displays and interaction devices,
but that additional approaches are needed to fully
understand the use of mobile technologies. These
additional approaches involve dual-task studies in
which the device is used whilst performing some
other activity, and subjective evaluation on the
impact of the technology on the person.
INTRODUCTION
This chapter assumes that usability is not a feature of a product, that is, it does not make sense
to call a product itself usable. Rather, usability
Table: usability terms and their descriptions (functional, consistent, minimal memorization, feedback, user help, user control).
Table: an example evaluation plan listing methods and metrics against target, best, worst, and current values: critical path analysis (CPA) of task time; user trials comparing 1st vs. 3rd trial (time and % change); the SUS1 and SUMI2 questionnaires (scale 0-100); and heuristic evaluation (scale 0-10).
somewhat different while sitting on a train versus walking down a busy street. This change in
environmental setting will have a marked effect
on usability of the device. This does not necessarily result from the design of the device itself
but rather from the interactions between design,
use, and environment. As Johnson (1998) pointed
out, HCI methods, models and techniques will
need to be reconsidered if they are to address the
concerns of interaction on the move. (Johnson,
1998). The question for this chapter, therefore,
is how best to address the relationship between
user, activity, product, and environment in order
to evaluate the usability of mobile technology.
Related to this question is how evaluation might
capture and measure this relationship, and then
what designers can do to improve usability. This
latter point is particularly problematic if one
assumes that design is about creating a product
rather than about creating an interaction.
Before considering these questions, it is worth
rehearsing why one might wish to conduct evaluation. Baber (2005) notes that the primary reason
for conducting evaluation, in HCI, is to influence
design (ideally, to improve the product). This implies that evaluation ought never to be a one-off
activity to be conducted at the end of the design
lifecycle in order to allow a design to be signed-off
(Gould & Lewis, 1985; Johnson, 1992). Rather, it
means the following:
A final point to note is that evaluation is a process of comparing the product against something
else, for example, other products, design targets,
requirements, standards. Thus, evaluation requires a referent model (Baber, 2005). It is naïve
to believe that one can evaluate something in
a vacuum, that is, to think that one can take a
single product and evaluate it only in terms of
itself. In many ways this is akin to the concept of
a control condition in experimental design; one
might be able to measure performance, but without
knowing what would constitute a baseline for the
measure, it is not possible to determine whether
it is good or bad.
SUBJECTIVE EVALUATION OF
TECHNOLOGY
"[U]ltimately it is the users of a software system [or any product] who decide how easy its user interface is to manipulate" (Holcomb & Tharp, 1991). Thus, one might feel that asking people
about the product would be the obvious and most
useful approach to take. However, there are several
DESIGNING AN EVALUATION
PROTOCOL
In addition to eliciting opinions from users regarding the device, researchers are also keen to obtain
reactions of some of the consequences of using
the device. By way of analogy, if we consider the
virtual reality research community, we can see
efforts to elicit reaction to either the physical effects of using virtual reality, for example, Cobb
et al.'s (1999) Virtual Reality Induced Symptoms
and Effects (VRISE) or the measurement of
presence (Slater et al., 1994; Witmer & Singer,
1998). In the domain of wearable computers, physical effects have been evaluated using self-report
on a comfort rating scale (Knight et al., 2002).
In terms of performing an activity, researchers
often make use of the NASA-TLX (Hart & Staveland, 1988), which measures subjective responses to
workload. The basic notion is that activities make
different demands on people in terms of time pressure
or mental effort, and can lead to different responses
Figure: evaluation framework relating the user, activity, goal, tasks, environment, the product, and other (referent) products to the usability metrics of effectiveness, efficiency, and satisfaction as the outcome.
Prototyping
During prototyping different versions of the
product are developed and tested. The prototype
need not be a fully-functioning product. Indeed,
Nilsson et al. (2000) show how very simple
models can be used to elicit user responses and
behaviors. Their study involved the development
of a handheld device (the "pucketizer") for use in water treatment plants, and the initial studies
had operators walking around the plant with a
non-functioning object to simulate the device.
From this experience, the design team went on to
implement a functioning prototype based on an
8-bit microcontroller, wireless communications,
and a host computer running a Java application.
This work is interesting because it illustrates
how embedding the evaluation process in the
environment and incorporating representative
end-users lead to insights for the design team.
Taking this idea further, it is feasible for very
early prototyping to be based on paper versions.
For example, one might take the form factor of the
intended device (say a piece of wood measuring
5 x 3 x, which is approximately the size of a Personal Digital Assistant) and then placing 3 x 2 paper overlays to represent different screen states; changing the screens is then a matter of
the user interacting with buttons on the product
and the evaluator making appropriate responses.
Of course, this could be done just as easily using
an application in WinCE (or through the use of
a slideshow on the device), but the point is that
initial concepts can be explored well before any
code is written or any hardware built.
CONCLUSION
While the concept of usability as multi-faceted
might seem straightforward, it raises difficult
problems for the design team. The design team
focuses its attention on the device, but the concept
of usability used in this chapter implies that the
device is only part of the equation and that other
factors relating to the user and environment can
play significant roles. The problem with this, of
course, is that these factors lie outside the remit
of the design team. One irony of this is that a
well-designed device can fail as the result of
unanticipated activity, user characteristics, and
environmental features.
The issue raised in this chapter is that evaluating
mobile technology involves a clear appreciation of
the concept of usability, in line with ISO standard
definitions. The ISO9241 concept of usability emphasizes the need to clearly articulate the context
of use of the device, through consideration of user,
REFERENCES
Baber, C. (2005). Evaluation of human-computer
interaction. In J. R. Wilson & E. N. Corlett (Eds.),
Evaluation of human work (pp. 357-388). London:
Taylor and Francis.
Baber, C., Arvanitis, T. N., Haniff, D. J., & Buckley, R. (1999). A wearable computer for paramedics: Studies in model-based, user-centered and
industrial design. In M. A. Sasse & C. Johnson
(Eds.), Interact99 (pp. 126-132). Amsterdam:
IOS Press.
Baber, C., Haniff, D. J., Knight, J., Cooper, L., &
Mellor, B. A. (1998). Preliminary investigations
into the use of wearable computers. In R. Winder
(Ed.), People and computers XIII (pp. 313-326).
Berlin: Springer-Verlag.
Baber, C., Haniff, D. J., & Woolley, S. I. (1999).
Contrasting paradigms for the development of
Key Terms
Endnotes
1 SUS: Software Usability Scale (Brooke, 1996)
2 SUMI: Software Usability Metrics Inventory (Kirakowski & Corbett, 1993)
This work was previously published in Handbook of Research on User Interface Design and Evaluation for Mobile Technology, edited by J. Lumsden, pp. 731-744, copyright 2008 by Information Science Reference, formerly known as Idea Group
Reference (an imprint of IGI Global).
Chapter 1.8
An Overview of Multimodal
Interaction Techniques and
Applications
Marie-Luce Bourguet
Queen Mary University of London, UK
INTRODUCTION
Desktop multimedia (multimedia personal computers) dates from the early 1970s. At that time,
the enabling force behind multimedia was the
emergence of the new digital technologies in
the form of digital text, sound, animation, photography, and, more recently, video. Nowadays,
multimedia systems mostly are concerned with
the compression and transmission of data over
networks, large capacity and miniaturized storage devices, and quality of services; however,
what fundamentally characterizes a multimedia
application is that it does not understand the data
(sound, graphics, video, etc.) that it manipulates.
In contrast, intelligent multimedia systems at the
crossing of the artificial intelligence and multimedia disciplines gradually have gained the ability
to understand, interpret, and generate data with
respect to content.
BACKGROUND
In this section, we briefly review the different
types of modality combinations, the user benefits brought by multimodality, and multimodal
software architectures.
Combinations of Modalities
Multimodality does not consist in the mere
juxtaposition of several modalities in the user
interface; it enables the synergistic use of different
combinations of modalities. Modality combinations can take several forms (e.g., redundancy and
complementarity) and fulfill several roles (e.g.,
disambiguation, support, and modulation).
Two modalities are said to be redundant when
they convey the same information. Redundancy is
well illustrated by speech and lip movements. The
redundancy of signals can be used to increase the
accuracy of signal recognition and the overall robustness of the interaction (Duchnowski, 1994).
Two modalities are said to be complementary
when each of them conveys only part of a message but their integration results in a complete
User Benefits
It is widely recognized that multimodal interfaces,
when carefully designed and implemented, have
the potential to greatly improve human-computer
interaction, because they can be more intuitive,
natural, efficient, and robust.
Flexibility is obtained when users can use
the modality of their choice, which presupposes
that the different modalities are equivalent (i.e.,
they can convey the same information). Increased
robustness can result from the integration of
redundant, complementary, or disambiguating
inputs. A good example is that of visual speech
recognition, where audio signals and visual signals
are combined to increase the accuracy of speech
Software Architectures
In order to enable modality combinations in the
user interface, adapted software architectures
are needed. There are two fundamental types
of multimodal software architectures, depending on the types of modalities. In feature level
architectures, the integration of modalities is performed during the recognition process, whereas
in semantic level architectures, each modality
is processed or recognized independently of the
others (Figure 1).
Feature-level architectures generally are
considered appropriate for tightly related and
synchronized modalities, such as speech and lip
movements (Duchnowski et al., 1994). In this
type of architecture, connectionist models can
be used for processing modalities because of
their good performance as pattern classifiers and
because they easily can integrate heterogeneous
features. However, a truly multimodal connectionist approach is dependent on the availability of
multimodal training data, and such data currently
is not available.
When the interdependency between modalities
implies complementarity or disambiguation (e.g.,
speech and gesture inputs), information typically
is integrated into semantic-level architectures (Nigay et al., 1995). In this type of architecture, the
main approach for modality integration is based
on the use of data structures called frames. Frames
are used to represent meaning and knowledge and
to merge information that results from different
modality streams.
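As a minimal sketch of the frame idea, assuming a hypothetical "put that there"-style command (the slot names and values below are invented for illustration, and this is not the architecture of any system cited above), each modality contributes a partial frame and fusion merges their slots, failing when two modalities conflict:

def merge_frames(*frames):
    """Merge partial frames from different modality streams into one command frame.

    Raises ValueError if two modalities disagree about the same slot.
    """
    merged = {}
    for frame in frames:
        for slot, value in frame.items():
            if value is None:
                continue  # unfilled slot: this modality contributes nothing here
            if slot in merged and merged[slot] != value:
                raise ValueError(f"conflict on slot '{slot}': {merged[slot]!r} vs {value!r}")
            merged[slot] = value
    return merged

# "Put that there": speech supplies the action, pointing gestures supply the referents.
speech_frame  = {"action": "move", "object": None, "location": None}
gesture_frame = {"object": "lamp_3", "location": (120, 45)}

command = merge_frames(speech_frame, gesture_frame)
print(command)  # {'action': 'move', 'object': 'lamp_3', 'location': (120, 45)}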
Implementing Multimodality
Developers still face major technical challenges for
the implementation of multimodality, as indeed,
the multimodal dimension of a user interface raises
numerous challenges that are not present in more
traditional interfaces (Bourguet, 2004). These
challenges include the need to process inputs from
different and heterogeneous streams; the coordination and integration of several communication
channels (input modalities) that operate in parallel
(modality fusion); the partition of information
sets across several output modalities for the
generation of efficient multimodal presentations
(modality fission); dealing with uncertainty and
recognition errors; and implementing distributed
interfaces over networks (e.g., when speech and
gesture recognition are performed on different
processors). There is a general lack of appropriate
tools to guide the design and implementation of
multimodal interfaces.
Bourguet (2003a, 2003b) has proposed a simple
framework, based on the finite state machine
formalism, for describing multimodal interaction
designs and for combining sets of user inputs of
different modalities. The proposed framework can
help designers in reasoning about synchronization patterns problems and testing interaction
robustness.
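In the same spirit, a minimal finite-state-machine sketch (illustrative only, not a reproduction of Bourguet's framework or toolkit) shows how an interaction design can be described as states and transitions over inputs of different modalities, so that missing or out-of-order inputs are detected explicitly:

class MultimodalCommandFSM:
    """Accepts the input sequence: speech(verb) -> point(object) -> point(location)."""

    def __init__(self):
        self.state = "start"
        self.slots = {}

    def on_input(self, modality, value):
        # (current state, input modality) -> (next state, slot to fill)
        transitions = {
            ("start",          "speech"): ("await_object",   "action"),
            ("await_object",   "point"):  ("await_location", "object"),
            ("await_location", "point"):  ("done",           "location"),
        }
        key = (self.state, modality)
        if key not in transitions:
            raise ValueError(f"unexpected {modality!r} input in state {self.state!r}")
        self.state, slot = transitions[key]
        self.slots[slot] = value
        return self.slots if self.state == "done" else None

fsm = MultimodalCommandFSM()
fsm.on_input("speech", "move")
fsm.on_input("point", "lamp_3")
print(fsm.on_input("point", (120, 45)))  # full command once the machine reaches 'done'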
APPLICATIONS
Two applications of multimodal interaction are
described.
Augmented Reality
Augmented reality is a new form of multimodal
interface in which the user interacts with real-world objects and, at the same time, is given
supplementary visual information about these
objects (e.g., via a head mounted display). This
supplementary information is context-dependent
(i.e., it is drawn from the real objects and fitted to
them). The virtual world is intended to complement
the real world on which it is overlaid. Augmented
reality makes use of the latest computer vision
techniques and sensor technologies, cameras, and
head-mounted displays. It has been demonstrated,
for example, in a prototype to enhance medical
surgery (Dubois, 1999).
Tangible Interfaces
People are good at sensing and manipulating
physical objects, but these skills seldom are
used in human-computer interaction. Tangible
interfaces are multimodal interfaces that exploit
the tactile modalities by giving physical form to
digital information (Ishii, 1997). They implement physical objects, surfaces, and textures as
tangible embodiments of digital information. The
tangible query interface, for example, proposes
a new means for querying relational databases
through the manipulation of physical tokens on
a series of sliding racks.
FUTURE TRENDS
Ubiquitous Computing
Ubiquitous computing describes a world from
which the personal computer has disappeared
and has been replaced by a multitude of wireless,
small computing devices embodied in everyday
objects (e.g., watches, clothes, or refrigerators).
The emergence of these new devices has brought
new challenges for human-computer interaction.
A fundamentally new class of modalities has
emerged, the so-called passive modalities, which correspond to information that is automatically
captured by the multimodal interface without any
voluntary action from the user. Passive modalities
complement the active modalities such as voice
command or pen gestures.
Compared with desktop computers, the screens
of ubiquitous computing devices are small or
non-existent; small keyboards and touch panels
are hard to use when on the move, and processing
powers are limited. In response to this interaction
challenge, new modalities of interaction (e.g.,
non-speech sounds) (Brewster, 1998) have been
proposed, and the multimodal interaction research
community has started to adapt traditional multimodal interaction techniques to the constraints
CONCLUSION
Multimodal interfaces are a class of intelligent
multimedia systems that extends the sensorymotor capabilities of computer systems to better
match the natural communication means of human beings. As recognition-based technologies
such as speech recognition and computer vision
techniques continue to improve, multimodal interaction should become widespread and eventually
may replace traditional styles of human-computer
interaction (e.g., keyboard and mice). However,
much research still is needed to better understand
users' multimodal behaviors in order to help designers and developers to build natural and robust
multimodal interfaces. In particular, ubiquitous
computing is a new important trend in computing
that will necessitate the design of innovative and
robust multimodal interfaces that will allow users
to interact naturally with a multitude of embedded
and invisible computing devices.
REFERENCES
Bolt, R. A. (1980). Put-that-there: Voice and gesture at the graphics interface. Proceedings of the
7th Annual Conference on Computer Graphics and
Interactive Techniques. Seattle, Washington.
Bourguet, M. L. (2003a). Designing and prototyping multimodal commands. Proceedings
of the IFIP TC13 International Conference on
Human-Computer Interaction, INTERACT03,
Zurich, Switzerland.
Bourguet, M. L. (2003b). How finite state machines can be used to build error free multimodal
interaction systems. Proceedings of the 17th British
HCI Group Annual Conference, Bath, UK.
KEY TERMS
This work was previously published in Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 451-456, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.9
Abstract
This chapter introduces the concepts of multimodal and federated interaction. Because multimodality means, simply, the combination of
multiple modalities (or types of input and output),
the authors first introduce some of the various
modalities available for computer interaction. The
chapter then discusses how multimodality can be
used both in desktop and mobile computing environments. The goal of the chapter is to familiarize
scholars and researchers with the range of topics
covered under the heading "multimodality" and
suggest new areas of research around the combination of modalities, as well as the combination
of mobile and stationary computing devices to
improve usability.
Different Forms
The basic definition of multimodality is the use
of more than one modality within a single interface. The availability of both keyboard and voice
input is one of the most common examples of
multimodality, as is the use of both visual (text
or graphical) and audio output. Most of the five
classical human senses (sight, hearing, touch,
smell, and taste) can be used for both the input
and output sides. Each sense allows a broad range
of possibilities.
Table 1 gives a brief list of the types of input
and output that are associated with the senses.
The most common use of the sense of sight
is in the visual presentation (output) of information on small and large displays. Sight can also
be used for input: eye tracking can be used for
selection or to gauge interest in a particular area
of a screen, and retinal scanning can be used to
identify the user.
Input options based on the sense of hearing include speech, for entering text or giving
commands, speaker identification (to identify or
authenticate the user), and even humming (Ghias
et al., 1995). Audio output can be used for presenting written text (using text-to-speech), recorded
audio files, document and interface structures,
and sonifications of graph data [see the chapter
"Mobile Speech Recognition" and James (1998)
for an overview]. Speech input and audio-based
output are useful in a variety of contexts, including mobile and vehicle-based scenarios, as well
as accessibility.
The sense of touch is already commonly found
in computer inputs today, through the use of
keyboards, pointing devices, and touch screens.
In addition to detecting simply that a key, button,
or screen area has been clicked or touched, more
advanced devices (such as game controllers and
track pads) can also detect the amount of pressure
exerted by the user. Handwriting and gesture are
also gaining in popularity within certain contexts,
along with the use of fingerprints for user identification.
Table 1. Input and output types associated with the senses

Sense           Input Types                                     Output Types
Sight           Eye tracking; retinal scan                      Visual display (small and large screens)
Hearing         Speech; speaker identification; humming         Text-to-speech; recorded audio; sonification
Touch/Gesture   Keyboard; mouse/pointing device; handwriting    Tactile display; Braille display
Smell           Chemical odor-detecting sensors                 Olfactory output
Taste           (currently none)                                Gustatory output
Different Purposes
Applications are designed to use multimodal interactions for a wide variety of reasons. Having
several modalities available can allow designers
to create applications that are easier or more
natural to use, or that can be used by more users
within more contexts. Table 2 shows the different
purposes of using multimodality, categorized by
the way the modalities are combined in the interaction. In many cases, the purposes are valid both for mobile and non-mobile (i.e., desktop) devices; in other cases, there is special value in using multimodality for specific device categories.

Table 2. Purposes of using multimodality, categorized by the way the modalities are combined: complementary modalities (multimodal fusion), replacement modalities, and redundant modalities; the purposes listed include accessibility, improved efficiency, compensating for hands- or eyes-busy tasks (mobile), and compensating for the environment (mobile).
The most common usage of complementary
modalities (or multimodal fusion) is to attempt
to make the interaction more natural, where modalities are chosen that are the best fit to different
parts of the task. For example, pointing and other
gestures are appropriate for selection, while voice
may be more efficient for entering text. Complementary modalities can also be used to spread
the input or output across different modalities,
reducing the load on any one channel. One example here is on the output side, where the most
important or urgent information is presented in
audio while supporting information is presented
visually. The benefit is that the user can get both
the key and supporting information simultaneously by listening to the audio and attending to
the visuals at the same time.
Replacement modalities, as the name implies,
replace one modality for another. Replacement
can produce a more natural interaction, when,
for example, keyboard entry of a password is replaced by speaker identification in a system that
already allows voice input. On a mobile device,
replacement can also be used to compensate for
the limited capabilities of the device; long por-
Multimodality on the
Desktop
Because desktop computers are still the primary
computing environment for most users, we begin
our discussion of multimodality on this platform.
People frequently see multimodality as unnecessary or even inadvisable on desktop computers: "I don't want to talk to my computer, or have it talk back to me, in my office; someone might hear some of my private information!"
This section will show that there are many cases
Speech is perhaps the most commonly available input modality after keyboards and pointing
devices (although as Oviatt [1999] points out, it
should by no means be considered the primary
information carrying modality within a multimodal system). Speech recognition systems use
grammars to define the commands or phrases
that can be accepted by the system. Speech recognition grammars can range from types found
in standard computer theory (regular grammars,
context-free grammars) to statistical grammars
(e.g., n-gram grammars, where the system uses
the preceding n-1 words spoken by the user to
help identify the nth word).
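As a small illustration of the statistical-grammar idea, the Python sketch below builds a bigram (n = 2) model from a toy command corpus and uses the preceding word to rank candidates for the next word; the corpus and function names are invented for illustration:

from collections import Counter, defaultdict

# Toy corpus of spoken commands; a real recogniser would train on far more data.
corpus = [
    "open the file", "open the folder", "close the file",
    "print the file", "open the window",
]

bigrams = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for prev, nxt in zip(words, words[1:]):
        bigrams[prev][nxt] += 1   # count how often `nxt` follows `prev`

def predict_next(prev_word, k=3):
    """Rank the k most likely words to follow `prev_word` (bigram model, n = 2)."""
    return bigrams[prev_word].most_common(k)

print(predict_next("the"))   # [('file', 3), ('folder', 1), ('window', 1)]
print(predict_next("open"))  # [('the', 3)]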
Replacement Modalities
Within the desktop context, speech input can be
used to make an interaction more natural and
also more visually appealing, by eliminating the
need to display complicated tool bars and menus.
A striking example of this is the CommandTalk
system, which enabled users to create, control,
and modify battlefield missions and control the
map display of the ModSAF battlefield system
using spoken language and mouse input (Moore
et al., 1997). Through the addition of the CommandTalk speech interface, the researchers were
able to reduce the screen clutter of ModSAF,
eliminating almost all of the tool bars that had
previously been required for specifying details
of checkpoints and ordinances, controlling the
display, and specifying mission movements.
The architecture of CommandTalk included
many natural language processing components
to create an intuitive command language. The
speech recognition grammar was designed to
allow the same commands and object referents as
would be used in a human-to-human interaction.
Complications regarding the resolution of noun
phrases ("the M1 platoon") to specific units shown
on the display, and the mapping of predicates
to interface actions ("move", which can map to
one of two different commands depending on
Complementary Modalities
Input modalities can also be used in a complementary way, where the inputs from several modalities
are fused to produce a single input command.
This is commonly known as multimodal fusion.
Multimodal fusion allows users to mix and match
modalities as they choose, and the system attempts
to find a coherent meaning for the set of inputs.
For example, when speech and gesture input
et al.
(1993) describe a system
that integrates speech, gesture, and eye gaze input
to query objects on a map. Their focus is on the
interpretation of multimodal commands from the
three modalities, using reasoning to combine the
modalities and determine the objects and actions
requested. The Multimodal Maps application
designed by Adam Cheyer and Luc Julia (Cheyer
Table: approaches to providing multimodal access to existing applications, compared by example (e.g., Mac OS X VoiceOver), benefits (increased accessibility of existing content; no changes required to the existing application; original application unchanged), and drawbacks (e.g., the original manufacturer must decide to make the product change).
Accessibility and design for users with disabilities may seem like a specialized area of
work, with a relatively small target population.
However, as pointed out in Perry et al. (1997)
and other sources, users can be handicapped, or
unable to accomplish a task, merely because of
the circumstances in which they try to perform
it. So-called temporary disabilities, such as the
inability of a user to direct his visual attention
to a computer monitor, can be mitigated through
the use of alternative modalities.
Device Federation
An emerging area of research in human-computer
interaction involves the combination of small,
portable devices with ambient computing and
interaction resources in the users environment.
This concept, which is cropping up in industrial
and academic research projects at various locations, is an attempt to balance the dual problems
of portability and usability through a new model
for mobile interaction, which we will call here
device federation.
The idea of device federation is to augment
small, portable devices such as smart phones
(called personal devices) with ambient computing resources, such as large displays, printers,
computers, PDAs, and keyboards. The personal
device can be used to establish the users identity
(see the discussion above related to improving
security by combining something the user carries with what the user knows and who the user
is), run applications, or connect to back-end
databases and servers, while ambient resources
are leveraged to provide usable input and output.
Conversely, personal devices can be connected
to external sensors that have minimal or no user
interfaces of their own, allowing the user to view
or manipulate data that would otherwise be hidden
within the environment.
Device federation is a broad concept, which
covers a wide range of federation types. An obvious example is federating large displays with small
mobile devices. Other types include federating
sensors and other resources with limited human
interaction capabilities with mobile devices, federating portable user devices with audio output,
and federating portable input devices with ambient
computers to provide accessibility.
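One possible, purely illustrative reading of this idea is sketched below in Python; all class and method names are invented here rather than drawn from any cited project. A personal device holds the user's identity and application state, discovers an ambient display, and borrows it for output:

class AmbientDisplay:
    """An ambient output resource discovered in the environment."""
    def __init__(self, name, resolution):
        self.name, self.resolution = name, resolution

    def render(self, content):
        print(f"[{self.name} @ {self.resolution}] {content}")


class PersonalDevice:
    """A small personal device that carries the user's identity and application state."""
    def __init__(self, owner):
        self.owner = owner
        self.state = {"document": "quarterly_report.txt"}

    def authenticate(self):
        # In a real federation the device would prove the owner's identity
        # (something carried + known + inherent); here we simply return it.
        return self.owner

    def federate_with(self, display):
        user = self.authenticate()
        display.render(f"{user} is viewing {self.state['document']}")


phone = PersonalDevice(owner="alice")
wall_display = AmbientDisplay(name="lobby-display", resolution="3840x2160")
phone.federate_with(wall_display)  # output appears on the ambient display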
Summary
This chapter has described some of the basic
principles of multimodality. We began with a
description of some of the ways that the human
senses can be used to interact with a computer,
and discussed the idea that modalities can be used
redundantly, where more than one modality is used
to either present or gather the same information,
or complementarily, where information from
more than one modality must be combined to
produce the whole input or output message. Each
of these methods can be used for either desktop
or mobile interactions, although generally for
different reasons.
Multimodal interfaces to desktop computers
strive to provide natural mappings between the
information to be input or output and the modality
used. This more frequently leads to interfaces that
use complementary modalities, or multimodal fusion. Desktop applications also use multimodality
to support users with disabilities or to provide more
usable security. Another interesting research area
for multimodality is around the use of redundant
output modalities to support users who move from
desktop to mobile computers during the course
of an interaction.
Mobile applications use multimodality to
improve the input and output capabilities of the
devices. Because mobile devices must be small
to be portable, they often have very small visual
displays and keypads. Voice and audio are obvious
choices here, and have been used widely. Finally,
because mobile devices are often used in contexts
where the user is carrying out another (more
primary) task, such as walking or driving, it is
important to design interfaces that do not require
too much of the user's attention or overload any
of the senses.
Next, we described an emerging area of human-computer interaction that seeks to combine
the portability of mobile devices with the interaction capabilities of larger ambient devices, called
device federation. Input and output federation can,
References
Bakst, S. (1988). The future in security methods.
The Office, 108(19-20).
Bezerianos, A., & Balakrishnan, R. (2005).
The vacuum: Facilitating the Manipulation of
Distant Objects In Proceedings of the SIGCHI
Conference on Human Factors in Computing
Systems, Portland, OR (pp. 361-370). New York:
ACM Press.
Bolt, R.A. (1980). Put-that-there: Voice and
gesture at the graphics interface. In Proceedings
of the Seventh Annual Conference on Computer
Graphics and Interactive Techniques (pp. 262-270). New York: ACM Press.
Oviatt, S. (1999). Ten myths of multimodal interaction. Communications of the ACM, 42(11), 74-81.
Perry, J., Macken, E., Scott, N., & McKinley, J.L.
(1997). Disability, inability and cyberspace. In B.
Friedman (Ed.), Human values and the design of
computer technology (Number 72 Ed., pp. 65-89).
Stanford, CA: CSLI Publications.
Weinshall, D., & Kirkpatrick, S. (2004). Passwords you'll never forget, but can't recall. In CHI '04 Extended Abstracts on Human Factors in Computing Systems, Vienna, Austria (pp. 1399-1402). New York: ACM Press.
additional reading
Arons, B. (1991). Hyperspeech: Navigating in
Speech-Only Hypermedia. In Proceedings of
the Third Annual ACM Conference on Hypertext, San Antonio, TX (pp. 133-146). New York:
ACM Press.
Blattner, M.M. (1992). Metawidgets: Towards a
theory of multimodal interface design. In Proceedings of the Sixteenth Annual International
Computer Software and Applications Conference,
Chicago, IL (pp. 115-120). IEEE Press.
Bly, S. (1982). Presenting information in sound.
In Human factors in computer systems (pp. 371-375). Gaithersburg, MD.
Buxton, W.A.S. (1994). The three mirrors of interaction: A holistic approach to user interfaces.
In L.W. MacDonald & J. Vince (Eds.), Interacting
with virtual environments. New York: Wiley.
Edwards, W.K., Mynatt, E.D., & Stockton, K.
(1994). Providing access to graphical user interfaces, not graphical screens. In Proceedings of
ASSETS 94: The First Annual ACM Conference
on Assistive Technologies, Marina del Rey, CA
(pp. 47-54). New York: ACM Press.
Flowers, J.H., Buhman, D.C., & Turnage, K.D.
(1996). Data sonification from the desktop: Should
sound be part of standard data analysis software?
ACM Transactions on Applied Perception, 2(4),
467-472.
Gaver, W.W. (1990). The Sonicfinder: An interface
that uses auditory icons. In E.P. Glinert (Ed.),
Visual programming environments: Applications
121
This work was previously published in Handbook of Research on Ubiquitous Computing Technology for Real Time Enterprises,
edited by M. Mühlhäuser and I. Gurevych, pp. 487-507, copyright 2008 by Information Science Reference, formerly known as
Idea Group Reference (an imprint of IGI Global).
Chapter 1.10
The Arab World, Culture and Information Technology

Introduction
In 801, Harun Rashid offered Charlemagne a water clock, the like of which was nonexistent in all of Europe at that time; the King's court thought
that a little devil was hidden inside the clock. In
the 1930s, King Abdulaziz of Saudi Arabia had
to convince his people that the radio was not the
making of the devil and that it could in fact be used
to broadcast and spread the Quran. In 2003, the
Arab region is found to be still lagging in modern
technologies adoption (UNDP, 2003). Thus, in
a little more than 11 centuries, the Arabs were
transformed from leaders to adopters, then to late
adopters as far as technologies are concerned.
The Arab world is taken to mean the 22 members of the Arab League, accounting for more than
300 million people with an economy of 700 billion
dollars. Although most Arabs practice Islam, they
represent less than one third of all Muslims. The
Arab world is often thought of as economically
prosperous due to its oil resources; yet its total
GDP is lower than that of Spain (UNDP, 2003).
Figure 1. The MENA region is under-performing in terms of IT spending even when compared with other developing regions (Source: Aberdeen, 2001). [Chart: IT spending, 2001-2005, for Latin America, MENA, and Sub-Saharan Africa]
Reasons for the Lag

The lag can partly be explained by the delay with which technologies have traditionally reached Arab countries.1 Davison et al. (2000) suggested several other reasons: a perceived incompatibility between local cultures and imported technologies, a preference for autonomy and independence with respect to technology, and a lack of economic resources to acquire technology. The first two are plausible, as it is often the case that IT stumbling blocks occur not because of technical reasons but rather because of human and social obstructions. The third reason can be excluded for the six Gulf countries, which claim per capita revenues of nearly five times the average of the rest of the Arab countries. The rate of adoption of the Internet for these countries is up
Other factors also explain the low rate of Internet penetration in Arab nations as compared
to the rest of the world. In these nations, the rate
of penetration is essentially measured based on
only one part of society: men.
Information and IT
When Arab countries invest in IT, they do so mainly in hardware. While this may be a characteristic of developing countries, it may also be viewed as Arabs' distrust of anything immaterial. Software, on the other hand, is associated with innovation, creativity, and the free flow of information and knowledge, qualities that the Arabs have been found lacking (UNDP, 2003). Thus, not only are Arabs increasing their dependence on the West by being consumers of hardware, but they also seem to be passive users of the software and intelligence produced elsewhere.
This issue leads to the tight relationship between information (and not IT, let alone hardware) and democracy and freedom. If Arab countries are truly "information shy" (Henry, 1998), then what information is to be shared and circulated by IT? The Arab therefore does not see what use he could make of IT and would not consider it an instrument of development.
... pulling) could be used to lower the level of uncertainty during business transactions. Even when more direct means are available, Arabs prefer wasta because of the human contact it offers. Wasta is a way of life that builds upon human interactions, a major part of an Arab's life, which she or he may not be willing to sacrifice to technology.
[Chart: the split of Arab IT spending between hardware and software + IT services, 2001-2005, shown as shares from 0% to 100% (two panels); caption not recoverable]

Culture
Conclusion
Significant contradictions may exist between
how Arabs perceive themselves and how they are
perceived by others. For example, Hill et al. (1998)
argue that the adoption of technologies is rarely
in quest of imitating the West. But Ibn Khaldun
maintains that imitation is characteristic of Arabs. Abbas and Al-Shakti (1985) even suggested that Arab executives are the product of cultural values that tend to produce more followers than leaders. Yet, imitation is sometimes suspicious in the Muslim's eye, as some believe that imitation of the non-Muslims is haram (sinful). While it
is also believed that imitating non-Muslims is
permissible, the average Arab Muslim sometimes
wonders when imitation stops being haram and
starts being hallal (allowed)12. How to reconcile
these points of view?
Two theories seem promising in that they may
complement the research models we reviewed
here. Social Identity Theory recognizes that cultural layers exist that describe different levels of
programming (social, national, regional, religious,
contextual, organizational, etc.).
Abdul-Gader and Kozar (1995) borrowed
the construct of technological alienation from
psychosociology to explain certain purchase and
use decisions of IT. They showed that alienated
individuals resist any kind of technology adoption. More generally, Value-Expectancy Theory
(Feather, 1995) promises to enrich the debate on
IT adoption by Arabs since it addresses the issue
of the value attributed to things by individuals and
their expectations, founded or not, such as their
resistance to the possible danger of technological
and cultural dependence. This is all the more valid
that Arabs view IT as a technology, not as a medium of knowledge and of accessing knowledge,
something they need direly as evidenced by the
conclusions of the UNDP (2003).
References
Abbas, A. (1987). Value systems as predictors of
work satisfactions of Arab executives. International Journal of Manpower, 8(2), 3-7.
Abbas, A., & Al-Shakti, M. (1985). Managerial
value systems for working in Saudi Arabia: An
empirical investigation. Group & Organizational
Studies, 10(2), 135-152.
Abdalla, I.A., & Al-Homoud, M.A. (2001). Exploring the implicit leadership theory in the Arabian Gulf states. Applied Psychology: An International Review, 50(4), 506-531.
Hofstede, G.J. (2001b). Adoption of communication technologies and national culture. Systèmes d'Information et Management, 6(3), 55-74.
UNDP. (2003, October 20). Arab Human Development Report 2003. United Nations Development Programme.
Key Terms
Arab World: The Arab world is taken to
include all 22 countries members of the Arab
League.
Culture: According to Hofstede (1991), it is "[the] collective programming of the mind that distinguishes the members of one group of people from those of another." For the 14th century Arab scholar Ibn Khaldun, man is "son to his habits and his environment, not to his nature and his moods." In all the literature about culture, there is a common understanding that culture is an abstraction from concrete behaviour but is not behaviour itself. Hofstede's typology includes five cultural dimensions:
Information Technology: Hardware, software, network and services related to the use and
operation of equipment with the aim of processing and communication of analogue and digital
data, information, and knowledge. These include
computers and computer applications such as the
Internet, Intranets, Extranets, Electronic Data
Interchange, electronic commerce, mobile and
fixed lines, etc.
Endnotes
1
com/maktoob/press1998/press1998-1.html).
See more on Kaitlin Duck Sherwood's site,
http://www.webfoot.com/advice/WrittenArabic.html.
Nielsen//NetRatings (May 2003), www.journaldunet.com/cc/01_internautes/inter_profil_eu.shtml, accessed August 31, 2003.
Ibn Khaldun, The Muqaddimah, An Introduction to History, Translated from French
by F. Rosenthal, Princeton, 1958; 1967.
For Muslim Arabs, this may be explained
historically and religiously by the fact that
when the Divine message was delivered, the
Angel Gabriel dictated the Quranic verses to
the Prophet Muhammad. In all pre-modern
times, documents were not copied; they were
memorized, where there was no other way
to preserve them.
In his book, The Saddam Years (Fayard,
2003), Saman Abdul Majid, personal interpreter to the deposed dictator, explains
how, in 1993, President Clinton sent a secret
agent to Iraq to suggest that a new leaf be
turned over and that discussions be resumed.
Saddam did not immediately answer, an act
that Clinton took as a refusal. That file was
then closed. In fact, Saddam was expecting a
more solid and thought-out proposition to be
put forward, and was surprised that Clinton
did not come through with one. This miscommunication between two men of very different cultures has had the now all-too-well-known consequences.
It is assumed that most readers are familiar with Hofstede's work. Due to space
limitations, details of his work will not be
elaborated here. For more information, the
reader is referred to Hofstede (2001b).
Also of interest is the GLOBE (Global Leadership and Organizational Behavior Effectiveness)
project which seeks to determine the relationship between leadership and societal
culture (House et al., 2002). GLOBE uses
This work was previously published in Encyclopedia of Developing Regional Communities with Information and Communication Technology, edited by S. Marshall, W. Taylor, and X. Yu, pp. 21-27, copyright 2006 by Information Science Reference,
formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.11
Information Technology
Acceptance across Cultures
Amel Ben Zakour
FSJEGJ, Tunisia
Abstract
Introduction
dimension (ranked 43rd). Even though there is accumulated evidence pointing to national culture
as a factor influencing IT adoption and acceptance, few studies have handled this issue at the
individual level. Therefore, the main objective of
this chapter is to provide a conceptual framework
that examines the influence of national culture,
though a macro-level construct, on IT adoption
by integrating specific cultural value dimensions into the technology acceptance model (TAM) (Davis,
Bagozzi, & Warshaw, 1989). This framework is
aimed at being used at the individual level since
cultural dimensions characterizing national
culture (i.e., individualism/collectivism, power
distance, masculinity/femininity, uncertainty
avoidance, high/low context of communication
and polychronism/monochronism) are value
dimensions considered as individual psychological dispositions and TAM is designed to capture
acceptance at the individual level.
We have divided the chapter into four parts. The
first part is devoted to a review of comparative and
international studies in the IS field. The second
part presents an overview on culture theory, which
is found in the literature of various disciplines
such as anthropology or cross-cultural psychology. The third part focuses on the presentation of
TAM (Davis et al., 1989), judged to be the most
parsimonious model of information technology
usage, as well as TAM-relevant extensions. The
fourth part is aimed at presenting the rationale
sustaining the conceptual framework developed
to explain how national culture could influence
IT usage.
Table 1. Cross-cultural studies in the IS field (authors; IS research area; countries studied). The key results column, and the row alignment of some middle entries, could not be recovered from the source.

- Igbaria and Zviran (1996): end-user computing (EUC); USA, Israel, Taiwan
- Straub, Keil, and Brenner (1997): IT individual acceptance; USA, Japan, Switzerland
- Leidner and Carlsson (1998): Executive Support System (ESS) adoption; Sweden, USA, Mexico
- Hill et al. (1998): IT transfer; Jordan, Egypt, Saudi Arabia, Lebanon, Sudan
- Gamble and Gibson (1999); Rowe and Struck (1999): Management Information System (MIS) adoption and use; media choice; IT diffusion; countries include China, France, Japan, and the USA (row assignment unclear)
- Rice, D'Ambra, and More (1998): Hong Kong, Singapore, Australia, USA
- Tan, Wei, Watson, Clapper, and McLean (1998): computer-mediated communication; Singapore, USA
- Hofstede (2000): IT adoption; 56 countries
- Further rows whose authors are not recoverable: IT adoption in West Africa, the Middle East, and Australia; IT adoption across 30 countries
Culture Theory
Even though originally rooted in anthropology,
a population-level discipline, culture has been
defined and researched by many other disciplines
such as cross-cultural psychology. Culture has
been defined according to several perspectives.
Definitions go from the most complex and the most
comprehensive (Kluckhohn, 1962) to the simplest
(Hofstede, 1997; Triandis, 1972). According to Kluckhohn (1962), "Culture consists of patterns, explicit and implicit, of and for behavior acquired and transmitted by symbols, constituting the distinctive achievement of human groups, including their embodiments in artifacts; the essential core of culture consists of traditional (i.e., historically derived and selected) ideas and especially their attached values; culture systems may, on the one hand, be considered as products of action, on the other hand, as conditioning influences upon further action" (p. 73). Hofstede (1997) defines culture as "the collective programming of the mind which distinguishes the members of one group or category of people from another" (p. 5).
Researchers in comparative and intercultural
management most of the time use the concept of
organizational culture or national culture. Nevertheless, they omit the fact that individual behaviors
and attitudes in an organizational context could
be influenced by other kinds of cultures. Indeed,
culture is a multi-level phenomenon that could be
approached according to different levels such as
region, ethnic group, religion, language, nation,
profession, firm, gender, social class (Hofstede,
1997; Karahanna, Evaristo, & Srite, 2005; Schneider & Barsoux, 2003). Culture also could
be defined according to continental or political
belonging (Lévi-Strauss, 1985). Furthermore,
these different cultures interact with each other.
For example, several ethnic groups are found in
India.
In the present study, we are interested in
national culture since it has been shown to influence management and organizations (Hernandez,
Table 2. Cultural value dimensions and the frameworks that proposed them.

- Hofstede (1997): power distance; individualism/collectivism; masculinity/femininity; uncertainty avoidance; long-term orientation
- Schwartz (1994): conservatism; intellectual autonomy; affective autonomy; hierarchy; egalitarianism; mastery; harmony
- Trompenaars and Hampden-Turner (1998): universalism/particularism; individualism/communitarianism; neutral/emotional; specific/diffuse; achievement/ascription; attitudes to time; attitudes to environment
- Hall (1989): communication context; perception of space; monochronic/polychronic time
Table 2. Continued. Corresponding dimensions across the frameworks of Hofstede (1997), Schwartz (1994), Trompenaars and Hampden-Turner (1998), the Chinese Culture Connection (1987), Hall (1989), and Kluckhohn and Strodtbeck (1961).

- Power distance (Hofstede); hierarchy/egalitarianism (Schwartz)
- Individualism/collectivism (Hofstede); autonomy (Schwartz); individualism/communitarianism (Trompenaars and Hampden-Turner); relational orientation (Kluckhohn and Strodtbeck)
- Masculinity/femininity (Hofstede); mastery/harmony (Schwartz); achievement/ascription and inner-directed/outer-directed (Trompenaars and Hampden-Turner); man-nature orientation (Kluckhohn and Strodtbeck)
- Uncertainty avoidance (Hofstede)
- Long-term orientation (Hofstede); conservatism (Schwartz); attitudes to time (Trompenaars and Hampden-Turner); Confucian work dynamism (Chinese Culture Connection); time perception (Hall); time orientation (Kluckhohn and Strodtbeck)
- Specific/diffuse (Trompenaars and Hampden-Turner); space, personal space and territory (Hall); space orientation, public/private (Kluckhohn and Strodtbeck)
- High/low context (Hall)

Row labels that could not be placed: nature of people; conception of space.
since Hofstede's work (The Chinese Culture Connection, 1987; Schwartz, 1994; Trompenaars & Hampden-Turner, 1998) exploring national culture through values have sustained and amplified his findings rather than contradicted them (Smith & Bond, 1999). The two dimensions related to time orientation and communication context are based upon Hall's well-established studies on intercultural communications. Indeed, cross-cultural studies of styles of communication, briefly reviewed by Smith and Bond (1999), reveal a divergence between societies in several aspects of communication and provide evidence sustaining Hall's contention about high/low context
(Gudykunst & Ting-Toomey, 1988).
In the following paragraphs, we are going to
give more explanations about the origins of these
cultural dimensions.
Individualism/Collectivism (I/C)
The concept of individualism takes its roots in the Western world. Indeed, in Britain, the ideas of
Hobbes and Adam Smith about the primacy of
the self-interested individual sustain this concept.
In contrast, Confucianism in the Eastern world,
emphasizing virtue, loyalty, reciprocity in human
relations, righteousness, and filial piety, underlies
a collectivistic view of the world. Even though
some societies are characterized as individualistic
and others as collectivistic, they have to deal with
both individualistic and collectivistic orientations.
These orientations co-exist, and what makes the
difference among societies is the extent to which
they emphasize individualistic and collectivistic
values.
Individualism/collectivism has been criticized
by several disciplines and especially psychology
(for a review, see Kagitcibasi, 1997). Nevertheless, it is still the focal dimension in cross-cultural studies and has been used most often as an
explanatory variable (Schwartz, 1994).
Hui and Triandis (1986) define collectivism as "a cluster of attitudes, beliefs, and behaviors toward a wide variety of people" (p. 240). Seven aspects of collectivism have been shown to be relevant in characterizing it: the consideration of the implications of our decisions for other people; the sharing of material resources; the sharing of nonmaterial resources (e.g., affection or fun); the susceptibility to social influence; self-presentation and facework; the sharing of outcomes; and
Masculinity/Femininity (MAS)
The MAS dimension was derived empirically from Hofstede's work on work-related values
across countries. Hofstede distinguishes between
cultures according to their emphasis on achievement or on interpersonal harmony. In labeling this
dimension according to gender, Hofstede refers to
the social and culturally determined roles associated with men vs. women and not to the biological
distinction. Several other studies have identified
almost the same dimension labeled as mastery/harmony (Schwartz, 1994) or achievement/ascription
(Trompenaars & Hampden-Turner, 1998). All the
dimensions show an obvious overlapping, since
they are driven by almost the same set of opposing
values. Empirically, a cross-cultural study conducted by Schwartz (1994) shows a positive correlation between MAS and mastery. At one pole, the underlying values are assertiveness, material success, control or mastery, and competition. This is the pole of masculinity (mastery, achievement). At the pole of femininity, the dominant values are
modesty, caring for others, warm relationships,
solidarity, and the quality of work life.
Uncertainty Avoidance
This dimension has been inferred from Hofstede's
survey pertaining to the theme of work stress
when he addressed his questionnaire to IBM
employees. According to Hofstede, the extent to
which individuals tend to avoid uncertainty can
differentiate among countries. Indeed, the feeling
of uncertainty is something that could be acquired
and learned in the diverse institutions of a society such as family, school, or state. Each society
will have its own behavioral model toward this
feeling of uncertainty. Hofstede argues that uncertainty is strongly linked to anxiety. The latter
could be overcome through technology, laws, and
religion (Hofstede, 1997). In strong uncertainty
avoidance societies like South America, people
have rule-oriented behaviors. On the contrary, in
societies where uncertainty avoidance is weak,
people do not need formal rules to adopt a specific
behavior. In the same vein, Triandis (1989) makes
the difference between loose and tight cultures.
He sustains that loose cultures encourage freedom and deviation from norms, whereas in tight
cultures, norms are promoted, and deviation from
those norms is punished. Shuper, Sorrentino,
Otsubo, Hodson, and Walker (2004) also have
confirmed that countries do differ in uncertainty
orientation. Indeed, they have found that Canada
is an uncertainty-oriented society that copes with
uncertainty by attaining clarity and finding out
new information about the self and the environment, and Japan is a certainty-oriented society
that copes with uncertainty by maintaining clarity
and adhering to what is already known (Shuper
et al., 2004).
Time Orientation
The most overlapping dimensions are attitudes
to time (Trompenaars & Hampden-Turner, 1998),
time perception (Hall, 1989), and time orientation
(Kluckhohn & Strodtbeck, 1961), which highlight
two main dimensions describing the concept of
time: the structure of time (discrete vs. continuous) and the horizon of time reference (reference
to the past, present, or future). According to Hall
and Hall (1987), a monochronic person runs one
activity at a time and associates to each activity
a precise time, while a polychronic person can
perform many activities simultaneously without
preparing an exact schedule, or if it exists, it
can be ignored. Studies conducted in this sense
(Schramm-Nielsen, 2000) have shown that polychronic time is very specific to the Mediterranean
gestures and voice intonation can make the difference between messages of the same content. On
the contrary, low-context communications rely
more on the transmitted part of the message and
less on the context. Americans, Germans, Swiss,
and Scandinavians are found to be low-context.
In such cultures, characterized by Hall (1989) as
fragmented cultures, people have more elaborate
codes in communication because of their lack of
shared assumptions about communication rules.
All the meaning they try to transmit is reflected
in the information contained in the literal message. In this case, it seems obvious that there is no
need to use different kinds of signals as a guide
to interpretation.
Technology Acceptance
Model
The technology acceptance model (TAM) is a tool designed to understand and measure the individual determinants of IT use. The most comprehensive TAM is that of Davis et al. (1989). It is based on two key concepts: perceived usefulness (PU), defined as "the prospective user's subjective probability that using a specific application system will increase his or her job performance within an organizational context" (p. 985); and perceived ease of use (PEOU), defined as "the degree to which the prospective user expects the target system to be free of effort" (p. 985). According to this model, an individual who intends to use a system will have two types of beliefs (PU and PEOU), which influence behavioral intention through attitudes. In addition, PU will have a direct effect on behavioral intention. Finally, external variables will have a direct effect on the two beliefs.
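Read purely as a schematic illustration of the relationships just described (the path coefficients and the external-variables term X are placeholders, not estimates or notation from Davis et al.), the TAM structure can be written as:

```latex
\begin{aligned}
\mathit{PEOU} &= \gamma_{1}\,X                                      && \text{(external variables shape ease of use)}\\
\mathit{PU}   &= \gamma_{2}\,X                                      && \text{(external variables shape usefulness)}\\
A             &= \beta_{1}\,\mathit{PU} + \beta_{2}\,\mathit{PEOU} && \text{(beliefs drive attitude toward use)}\\
\mathit{BI}   &= \beta_{3}\,A + \beta_{4}\,\mathit{PU}             && \text{(attitude and usefulness drive intention)}
\end{aligned}
```

where A denotes the attitude toward using the system and BI the behavioral intention that precedes actual usage.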
In order to study individual adoption of IT,
the present research will be using the TAM.
The use of the latter is motivated by theoretical,
empirical, and practical considerations. From a
theoretical point of view, TAM takes its roots
[Table] TAM extension studies: authors, IT tested, constructs added to TAM, and research methodology. The row-by-row layout of this table could not be recovered from the source, so the cells are grouped by column.

Authors: Igbaria (1995); Jackson, Chow, and Leitch (1997); Karahanna and Straub (1999); Venkatesh (1999); Dishaw and Strong (1999); Venkatesh and Morris (2000); Venkatesh and Davis (2000); Agarwal and Karahanna (2000).

IT tested: hardware and software in a computing resource center; computers; different kinds of IS; electronic mail; a virtual workplace system; maintenance support software tools; broker workstations; a scheduling information system; a Windows-based environment; a Windows-based customer account management system.

Constructs added: subjective norms; perceived behavioral control; normative beliefs and motivation to comply with the referent group; computer anxiety; computer knowledge; direction support; info center support; organizational politics; organizational use of the system (colleagues, subordinates, and CEOs); situational involvement; intrinsic involvement; prior use; social influence; social presence; perceived accessibility; availability of user training and support; game-based training; gender; experience; personal innovativeness; playfulness; self-efficacy; cognitive absorption; image; job relevance; output quality; result demonstrability; voluntariness; perceived playfulness.

Research methodology: field studies using questionnaires addressed to 1,000 students from a business school; 471 managers and professionals working in 54 North-American firms; 585 employees from organizations developing or revising their IS; 100 users working in a worldwide American firm in the transportation sector; 250 students; 152 students.
Individualism/collectivism: Defined as
the degree to which people in a society are
integrated into strong cohesive ingroups.
Masculinity/femininity: Defined as the extent to which a society attributes qualities such as assertiveness and material success to men, and modesty and concern about the quality of life to women.
Uncertainty avoidance: Defined as the
extent to which the members of a culture
feel threatened by uncertain and unknown
situations.
Power distance: Defined as the extent to
which the less powerful members of institutions and organization within a country
expect and accept that power is distributed
unequally.
Moreover, we rely upon Hall's work on intercultural
key cultural dimensions:
These dimensions are hypothesized to have direct effects or moderator ones on TAM constructs
and relationships. Integrating cultural dimensions to TAM is an attempt to better understand
the genuine influence of national culture on IT
adoption at the individual level. Indeed, cross-cultural studies on IS (see Table 1) have focused on IS phenomena at the organizational level, such as IT transfer. Moreover, none of the authors (except Srite & Karahanna, 2006) conceptualizes national culture at the individual level. They
have just studied IT-related behaviors in different
countries, supposing that each country is characterized by a different set of cultural values. Their
[Figure: the proposed research model. Cultural value dimensions (power distance, high/low context of communication, polychronism/monochronism, individualism/collectivism, masculinity/femininity, and uncertainty avoidance) are linked through propositions P1-P8 to the TAM constructs perceived usefulness, perceived ease of use, subjective norms, and system usage.]
her (and having the power of reward and punishment) wants him or her to behave in a specific
manner. Therefore, conforming to referent others by adopting IT will weaken the uncertainty
pertaining to the nonadoption of IT.
Practical Implications
The framework we provide is an attempt to highlight the importance of culture when dealing with
IT adoption in different national cultural settings.
It should improve IT designers' awareness of the
importance of cultural values for IT acceptance
in foreign countries. Actually, the framework
we offer does not suggest prescriptions to be
adopted by IT designers. Rather, it encourages
the latter to take into account cultural values
when designing IS and/or preparing IT implementation procedures for companies around the
world. Actually, IT designers should be aware that
implementation failure could be caused by cultural
incompatibility. As a result, they should adapt
their implementation tactics so that they will be
more congruent with end users' cultural systems.
For example, in high power distance cultures, end
users may be reluctant to use computer-mediated
communication tools, because this use may lead
them to contradict their superiors. Therefore, IT
designers should adjust IT features to the social
needs of end users.
Furthermore, the conceptual model we have
proposed suggests that the key belief in TAM,
which is the perceived usefulness of IT, has cultural antecedents. Indeed, the perceived usefulness of IT depends on the type of the context of
communication (high or low) and on the type of
time adopted (monochronic or polychronic). In
addition, the well-established positive relationship between perceived usefulness and system
usage is shown to be moderated by two cultural
values; namely, power distance and masculinity/
femininity. Finally, the intensity of the influence
of subjective norms on system usage has been
Conclusion
In this chapter, we offer a theoretical perspective
on the effect of national culture on IT adoption
and use. Through the conceptual model we have
presented, we wanted to move beyond models
considering culture as a black box and offering an ad hoc cultural explanation of IT-related
behaviors. First, we have identified pertinent
cultural values that could offer a fruitful explanation of differences in IT- related behaviors and
beliefs across countries. Then, we have provided
propositions linking these cultural values to TAM
constructs and relationships. We contend that
high context/low context of communication and
polychronism/monochronism have a direct effect
on perceived usefulness of IT; power distance and masculinity/femininity have moderator effects on the relationship between perceived usefulness and system usage; and individualism/collectivism, uncertainty avoidance, masculinity/femininity, and power distance have moderator effects on the relationship between subjective norms and system usage.
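As a purely illustrative sketch of how such moderator propositions could be examined empirically, and not a procedure taken from this chapter, a moderated regression adds an interaction term between the TAM predictor and the espoused cultural value; the variable names, the synthetic data, and the use of statsmodels below are assumptions made only for the example.

```python
# Hypothetical moderated-regression check of a "power distance weakens the
# PU -> usage link" style proposition; data and names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 200
df = pd.DataFrame({
    "pu": rng.normal(size=n),               # perceived usefulness (survey score)
    "sn": rng.normal(size=n),               # subjective norms
    "power_distance": rng.normal(size=n),   # espoused cultural value, individual level
})
# Simulate usage so that power distance dampens the effect of PU.
df["usage"] = (0.6 * df.pu
               - 0.3 * df.pu * df.power_distance
               + 0.4 * df.sn
               + rng.normal(scale=0.5, size=n))

# "pu * power_distance" expands to both main effects plus their interaction;
# a significant negative interaction coefficient would be consistent with the
# proposition that power distance moderates the PU -> usage relationship.
model = smf.ols("usage ~ pu * power_distance + sn", data=df).fit()
print(model.params)
```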
In the current context of internationalization,
taking into account the societal environment in
the understanding of IT acceptance behavior becomes more relevant in comparison with the past.
Indeed, information technologies are so ubiquitous
and pervasive that one should explore more their
relationship with different societal contexts varying across countries. Actually, IT that is accepted
in a specific cultural context may not be accepted
in the same way in another culture.
The framework presented here is a first step
in better understanding the relationship between
References
Agarwal, R. (1999). Individual acceptance of
information technologies. Retrieved October
2003, from http://www.pinnaflex.com/pdf/framing/CH06.pdf
Agarwal, R., & Karahanna, E. (2000). Time flies
when you're having fun: Cognitive absorption and
beliefs about information technology usage. MIS
Quarterly, 24(4), 665-694.
Agarwal, R., & Prasad, J. (1999). Are individual
differences germane to the acceptance of new
information technologies? Decision Sciences,
30(2), 361-391.
Ajzen, I. (1988). Attitudes, personality, and behavior. Chicago: Dorsey Press.
Ajzen, I., & Fishbein, M. (1980). Understanding
attitudes and predicting social behavior. Englewood Cliffs, NJ: Prentice-Hall.
Chinese Culture Connection. (1987). Chinese
values and the search for culture free dimensions
of culture. Journal of Cross-Cultural Psychology,
18(2), 143-164.
Danowitz, A. K., Nassef, Y., & Goodman, S. E.
(1995). Cyberspace across the Sahara: Computing
in North Africa. Communications of the ACM,
38(12), 23-28.
Davis, F. D., Bagozzi, R. P., & Warshaw, P. R.
(1989). User acceptance of computer technology:
A comparison of two theoretical models. Management Science, 35(8), 982-1003.
Hill, C. E., Loch, K. D., Straub, D. W., & El-Sheshai, K. (1998). A qualitative assessment of
Arab culture and information technology transfer.
Journal of Global Information Management,
6(3), 29-38.
Hofstede, G. (1980). Culture's consequences:
International differences in work-related values.
Beverly Hills, CA: Sage.
Hofstede, G. (1985). The interaction between national and organizational value systems. Journal
of Management Studies, 22(4), 347-357.
Hofstede, G. (1997). Cultures and organizations:
Software of the mind. London: McGraw-Hill.
Hofstede, G. J. (2000). The information age across
countries. Proceedings of the 5ème Colloque de l'AIM: Systèmes d'Information et Changement
Organisationnel, Montpellier, France.
Goodman, S. E., & Green, J. D. (1992). Computing in the Middle East. Communications of the
ACM, 35(8), 21-25.
Hofstede, G., & Bond, M. H. (1988). The Confucius connection: From cultural roots to economic
growth. Organizational Dynamics, 16(4), 4-21.
This work was previously published in Information Resources Management: Global Challenges, edited by W. Law, pp. 25-53,
copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 1.12
Is Information Ethics
Culture-Relative?
Philip Brey
University of Twente, The Netherlands
Abstract
Introduction
The Descriptive
Cultural Relativity of
Information-Related
Values
In this section, I will investigate the descriptive
cultural relativity of three values that are the topic
of many studies in information ethics: privacy,
intellectual property, and freedom of information.
Arguments have been made that these values
are distinctly Western, and are not universally
accepted across different cultures. In what follows I will investigate whether these claims seem
warranted by empirical evidence. I will also relate
the outcome of my investigations to discussions
of more general differences between Western and
non-Western systems of morality.
How can it be determined that cultures have
fundamentally different value systems regarding
notions like privacy and intellectual property?
I propose that three kinds of evidence are relevant:
1.
2.
3.
Privacy
It has been claimed that in Asian cultures like
China and Japan, no genuine concept or value of
privacy exists. These cultures have been held to
value the collective over the individual. Privacy
is an individual right, and such a right may not
be recognized in a culture where collective interests tend to take priority over individual interests. Using the three criteria outlined above, and
drawing from studies of privacy in Japan, China
and Thailand, I will now consider whether this
conclusion is warranted.
At the conceptual level, there are words in
Japanese, Chinese, and Thai that refer to a private
sphere, but these words seem to have substantially
different meanings than the English word for privacy. Mizutani et al. (2004) have argued that there
is no word for privacy in traditional Japanese.
Modern Japanese, they claim, sometimes adopt a Japanese translation for the Western word for privacy, which sounds like "puraibashii" and is written in katakana. Katakana is the Japanese phonetic syllabary that is mostly used for words of foreign origin. According to Nakada and Tamura (2005), Japanese does include a word for private, "Watakusi," which means "partial, secret and selfish." It is opposed to "Ohyake," which means "public." Things that are Watakusi are considered less worthy than things that are Ohyake. Mizutani
et al. (2004) point out, in addition, that there are
certainly behavioral customs in Japan that amount
Freedom of Information
Freedom of information is often held to comprise
two principles: freedom of speech (the freedom
to express ones opinions or ideas, in speech or
in writing) and freedom of access to information.
Sometimes, freedom of the press (the freedom to
express oneself through publication and dissemination) is distinguished as a third principle. In
Western countries, freedom of information is often
defined as a constitutional and inalienable right.
Laws protective of freedom of information are often especially designed to ensure that individuals
can exercise this freedom without governmental
interference or constraint. Government censorship
or interference is only permitted in extreme situations, pertaining to such things as hate speech,
libel, copyright violations, and information that
could undermine national security.
In many non-Western countries, freedom of
information is not a guiding principle. There
are few institutionalized protections of freedom
of information; there are many practices that
interfere with freedom of information, and a
concept of freedom of information is not part
of the established discourse in society. In such
societies, the national interest takes precedence,
and an independent right to freedom of information
either is not recognized or is made so subordinate
to national interests that it hardly resembles the
Western right to freedom of information. These
are countries in which practices of state censorship
are widespread; mass media are largely or wholly
government-controlled, the Internet, databases,
and libraries are censored, and messages that do
not conform to the party line are cracked down
upon.
Let us, as an example, consider the extent to
which freedom of information can be said to be
a value in Chinese society. Until the 1980s, the
Rights-Centered and
Virtue-Centered Morality
A recurring theme in the above three discussions
has been the absence of a strong tradition of individual rights in the cultures that were discussed (those of China, Japan, and Thailand) and the priority that is given to collective and state interests. Only very recently have China, Japan, and
Thailand introduced comprehensive human rights
legislation, which has occurred mainly through
Western influence, and there is still considerable
tension in these societies, especially in China
and Thailand, between values that prioritize the
collective and the state and values that prioritize
the individual.
Various authors have attempted to explain the
worldview that underlies the value system of these
countries. In Japan and Thailand, and to a lesser
extent China, Buddhism is key to an understanding
of attitudes towards individual rights. Buddhism
holds a conception of the self that is antithetical
to the Western conception of an autonomous
self which aspires to self-realization. Buddhism
holds that the self does not exist and that human
desires are delusional. The highest state that humans can reach is Nirvana, a state of peace and
contentment in which all suffering has ended. To
reach Nirvana, humans have to become detached
from their desires, and realize that the notion of
an integrated and permanent self is an illusion. In
Buddhism, the self is defined as fluid, situationdependent, and ever-changing. As Mizutani et al.
and Kitiyadisai have noted, such a notion of the
self is at odds with a Western notion of privacy
and of human rights in general, notions which
presuppose a situation-independent, autonomous
self which pursues its own self-interests and which
has inalienable rights that have to be defended
against external threats.
In part through Buddhism, but also through
the influence of other systems of belief such as
Confucianism, Taoism, and Maoism, societies
Conclusion
The discussion of privacy, intellectual property
rights, and freedom of information has shown
that a good case can be made for the descriptive
cultural relativity of these values. These values
are central in information ethics, as it has been developed in the West. Moreover, it was argued that
the uncovered cultural differences in the appraisal
of these values can be placed in the context of a
dichotomy between two fundamentally different
kinds of value systems that exist in different societies: rights-centered and virtue-centered systems
of value. Information ethics, as it has developed
in the West, has a strong emphasis on rights, and
little attention is paid to the kinds of moral concerns that may exist in virtue-centered systems
of morality. In sum, it seems that the values that
are of central concern in Western information
ethics are not the values that are central in many
non-Western systems of morality. The conclusion
therefore seems warranted that descriptive moral
relativism is true for information ethics.
Metaethical Moral
Relativism and
Information Ethics
In the first section, it was argued that descriptive moral relativism is a necessary condition for
metaethical moral relativism, but is not sufficient
to prove this doctrine. However, several moral
arguments exist that use the truth of descriptive
relativism, together with additional premises, to
argue for metaethical relativism. I will start with
a consideration of two standard arguments of this
Conclusion
I have argued, pace metaethical relativism, that it
is difficult if not impossible to provide compelling
arguments for the superiority of different notions
of the Good that are central in different moral
systems, and by implication, that it is difficult to
present conclusive arguments for the universal
truth of particular moral principles and beliefs. I
have also argued, pace metaethical absolutism,
that it is nevertheless possible to develop rational
arguments for and against particular moral values
and overarching conceptions of the Good across
moral systems, even if such arguments do not
result in proofs of the superiority of one particular
moral system or moral principle over another.
From these two metaethical claims, a normative position can be derived concerning the
way in which cross-cultural ethics ought to take
place. It follows, first of all, that it is only justified for proponents of a particular moral value
or principle to claim that it ought to be accepted
in another culture if they make this claim on the
basis of a thorough understanding of the moral
system operative in this other culture. The proponent would have to understand how this moral
system functions and what notion of the Good it
services, and would have to have strong arguments
that either the exogenous value would be a good
addition to the moral system in helping to bring
about the Good serviced in that moral system,
or that the notion of the Good serviced in that
culture is flawed and requires revisions. In the
next section, I will consider implications of this
position for the practice of information ethics in
cross-cultural settings.
Information Ethics in a
Cross-Cultural Context
It is an outcome of the preceding sections that
significant differences exist between moral systems of different cultures, that these differences
have important implications for moral attitudes
towards uses of information and information
technology, and that there are good reasons to
take such differences seriously in normative studies in information ethics. In this section, I will
argue, following Rafael Capurro, that we need an
intercultural information ethics that studies and
evaluates cultural differences in moral attitudes
towards information and information technology.
I will also critically evaluate the claim that the Internet will enable a new global ethic that provides
a unified moral framework for all cultures.
Conclusion
It was found in this essay that very different moral
attitudes exist in Western and non-Western countries regarding three key issues in information ethics: privacy, intellectual property, and freedom of
information. In non-Western countries like China,
Japan, and Thailand, there is no strong recognition
of individual rights in relation to these three issues.
These differences were analyzed in the context
of a difference, proposed by philosopher David
Wong, between rights-centered moralities that
dominate in the West and virtue-centered moralities that prevail in traditional cultures, including
those in South and East Asia. It was then argued
that cross-cultural normative ethics cannot be
practiced without a thorough understanding of
the prevailing moral system in the culture that is
being addressed. When such an understanding
has been attained, scholars can proceed to engage
in moral criticism of practices in the culture and
propose standards and solutions to moral problems. It was argued, following Rafael Capurro,
that we need an intercultural information ethics
that engages in interpretive, comparative, and
normative studies of moral problems and issues
in information ethics in different cultures. It is
to be hoped that researchers in both Western and
non-Western countries will take up this challenge
and engage in collaborative studies and dialogue
on an issue that may be of key importance to
future international relations.
References
Bao, X., & Xiang, Y. (2006). Digitalization and
global ethics. Ethics and Information Technology, 8, 41-47.
Capurro, R. (2005). Privacy: An intercultural
perspective. Ethics and Information Technology,
7(1), 37-47.
Capurro, R. (in press). Intercultural information
ethics. In R. Capurro, J. Frühbauer, & T. Hausmanninger (Eds.), Localizing the Internet. Ethical issues in intercultural perspective. Munich:
Fink Verlag. Retrieved January 25, 2007, from
http://www.capurro.de/iie.html
De George, R. (2006). Information technology,
globalization and ethics. Ethics and Information
Technology, 8, 29-40.
Ess, C. (2002).
Computer-mediated colonization,
the renaissance, and educational imperatives for
an intercultural global village. Ethics and Information Technology, 4(1), 11-22.
Gorniak-Kocikowska, K. (1996). The computer
revolution and the problem of global ethics. Science and Engineering Ethics, 2, 177-190.
Harman, G. (1996). Moral relativism. In G. Harman & J. J. Thompson (Eds.), Moral relativism
and moral objectivity (pp. 3-64). Cambridge, MA:
Blackwell Publishers.
Harman, G. (2000). Is there a single true morality?
In G. Harman (Ed.), Explaining value: And other
essays in moral philosophy (pp. 77-99). Oxford:
Clarendon Press.
Hofstede, G. (2001). Culture's consequences.
Thousand Oaks, CA: Sage.
Hongladarom, S. (2001). Global culture, local
cultures and the Internet: The Thai example. In
C. Ess (Ed.), Culture, technology, communication: Towards an intercultural global village (pp.
Endnotes
By information ethics I mean the study of ethical
issues in the use of information and information
technology. Contemporary information ethics is
a result of the digital revolution (or information
revolution) and focuses mainly on ethical issues
in the production, use, and dissemination of
digital information and information technologies.
It encompasses the field of computer ethics (Johnson,
2000) as well as concerns that belong to classical
This work was previously published in International Journal of Technology and Human Interaction, Vol. 3, Issue 3, edited by B.
C. Stahl, pp. 12-24, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 1.13
Introduction
Personalization is an approach to increase the
usability of complex information systems and
present the user with a comprehensible interface
that is tailored to his or her needs and interests.
In this article, we examine general techniques
that are employed to achieve the personalization
of Web sites. This is followed by a presentation
of real-world examples. It will be shown how different levels of personalization can be achieved
by employing the discussed techniques. This
leads finally to a summary of the current state in
personalization technologies and the issues connected with them. The article closes with some
ideas on further research and development, and
a conclusion.
In general, the concept of personalization refers
to the ability of tailoring standardized items to the
needs of individual people. It is originally derived
from the ideas of Pine (1993) who proposed that
companies should move from the paradigms of
Personalization Techniques
The core idea of personalization is to customize
the presentation of information specifically to the
user to make user interfaces more intuitive and
easier to understand, and to reduce information
overload.
The main areas of tailoring presentation to
individual users are content and navigation. Content refers to the information being displayed, and
navigation refers to the structure of the links that
allow the user to move from one page to another.
Personalized navigation can help the user to easily
find what he or she is looking for or to discover
new information. For example, a system discussed
by Belkin (2000) assists users in refining search
queries by giving recommendations on related
or similar terms.
In contrast, adaptive methods change the presentation implicitly by using secondary data. This
data can be obtained from a variety of sources,
for example, from the user's actions, from the
behaviour of other users on that site, or based on
the currently displayed content. Methods that use
this data as input are discussed in detail below.
The most distinctive characteristic of adaptive
methods is that they are constantly monitoring
the users activities to adjust the arrangement and
selection of relevant information.
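The sketch below illustrates, under assumed names and a simple decay heuristic (InterestProfile, record_click, and rank are not from the article), how an adaptive method might turn such monitoring into an implicit interest profile that reorders content:

```python
# Illustrative implicit profiling: observe page views, decay old interests,
# and reorder candidate items by overlap with the learned profile.
from collections import defaultdict

class InterestProfile:
    def __init__(self, decay=0.9):
        self.decay = decay
        self.weights = defaultdict(float)   # topic -> interest weight

    def record_click(self, topics):
        """Update the profile implicitly from one observed page view."""
        for topic in self.weights:
            self.weights[topic] *= self.decay      # older interests fade
        for topic in topics:
            self.weights[topic] += 1.0             # reinforce what was just viewed

    def rank(self, items):
        """Reorder (title, topics) pairs by their match with the profile."""
        score = lambda topics: sum(self.weights.get(t, 0.0) for t in topics)
        return sorted(items, key=lambda item: score(item[1]), reverse=True)

profile = InterestProfile()
profile.record_click({"sports", "football"})
profile.record_click({"finance"})
items = [("Match report", {"sports"}), ("Stock update", {"finance"}), ("Gardening tips", {"home"})]
print([title for title, _ in profile.rank(items)])   # -> ['Stock update', 'Match report', 'Gardening tips']
```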
Adaptive methods or machine-learning algorithms are huge steps toward automated customization. Current static interfaces suffer from the
fact that the designer has to anticipate the needs,
interests, and previous knowledge of the users in
advance. As these preferences change over time,
customization that requires human interaction
for collecting and identifying preferences leads
quickly to outdated user profiles.
Table 1 shows how adaptive and adaptable
methods can be applied to customize content and
navigation. The examples given are intended to
be generic; more concrete examples are examined
in the case studies below.
Degree of Personalization
Another important criterion for classification is
the degree of personalization. Systems can have
transient or persistent personalization, or be nonpersonalized. With transient personalization, the
customization remains temporary and is largely
based on a combination of the user's navigation
and an item-to-item correlation. For example, if an
item is selected, the system attaches similar items
as recommendations to it whereby the content of
the shopping cart is taken into consideration.
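A minimal sketch of such transient, item-to-item personalization is shown below; the toy purchase data, the cosine measure over co-purchase sets, and the function name recommend_for_cart are illustrative assumptions rather than a description of any particular site's algorithm.

```python
# Illustrative item-to-item recommendation: score catalogue items by their
# co-purchase similarity to whatever is currently in the shopping cart.
from collections import defaultdict
from math import sqrt

PURCHASES = {                     # user -> set of items previously bought
    "alice": {"book_a", "book_b", "cd_x"},
    "bob":   {"book_a", "cd_x"},
    "carol": {"book_b", "dvd_y"},
}

def item_buyers(purchases):
    buyers = defaultdict(set)     # item -> set of users who bought it
    for user, items in purchases.items():
        for item in items:
            buyers[item].add(user)
    return buyers

def cosine(a, b):
    return len(a & b) / (sqrt(len(a)) * sqrt(len(b))) if a and b else 0.0

def recommend_for_cart(cart, purchases, top_n=3):
    buyers = item_buyers(purchases)
    scores = defaultdict(float)
    for in_cart in cart:
        cart_users = buyers.get(in_cart, set())
        for item, users in buyers.items():
            if item not in cart:
                scores[item] += cosine(cart_users, users)
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [item for item in ranked if scores[item] > 0][:top_n]

print(recommend_for_cart({"book_a"}, PURCHASES))   # -> ['cd_x', 'book_b']
```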
Persistent personalization systems maintain a
permanent user account for every user to preserve
his or her settings and preferences across separate
sessions. Although this raises privacy issues and
is the most difficult to implement, it offers the
greatest benefit. These systems can make use
Goal
As the Amazon.com product catalogue contains
more than 2 million products, users can easily get
frustrated if they do not find what they are looking for. Thus, one of the main goals is to tailor
the product catalogue as much as possible to the
needs and interests of the user.
Aside from easy navigation, the site offers a
seamlessly integrated recommendation system. It
is intended to offer customers products that are
either related to their interests or to the product
that is currently displayed to exploit cross-selling
potentials.
Personalization Techniques
Amazon.com is a highly developed online shopping site and incorporates a combination of
Goal
The goal of Yahoo! is to bind its users by differentiating from other Web catalogues and search
engines, and to provide a fully customizable and
integrated portal. As the customer structure of
Yahoo! is very heterogeneous, it is a good idea
to offer personalization and let users construct an
individual start page. Yahoo!'s service is free for its users; money is mainly earned through advertising and revenue provisions from shopping partners.
Thus, the second goal is to make advertising as
effective as possible. This can be achieved by
selecting banner ads that are likely to be of interest for the user.
Personalization Techniques
Yahoo! offers an adaptable system that requires
the user to explicitly provide information for
personalization. The user profile is kept on a
server between different visits, thus Yahoo! offers
persistent personalization.
My Yahoo! enables registered users to build
their own Yahoo! pages. The content is selected as
so-called modules. Among the available modules
are ones for weather, news, sports results, stock
quotes, horoscopes, movie reviews, personal news
filters, and many more. Further configuration can
[Figure: a personalized My Yahoo! page, with callouts for (a) a headline module and (c) controls to delete or configure a module]
Conclusion
Future Outlook
Personalization technologies have found their way
out of experimental systems of researchers into
commercial applications. They are a powerful
means to handle information overload, to make
complex information systems more usable for a
REFERENCES
Alpert, S. R., Karat, J., Karat, C., Brodie, C., &
Vergo, J. G. (2003). User attitudes regarding a
user-adaptive ecommerce Web site. User Modeling and User-Adapted Interaction, 13(4), 1-2.
Belkin, N. J. (2000). Helping people find what
they dont know. Communications of the ACM,
43(8), 58-61.
Hirsh, H., Basu, C., & Davison, B. D. (2000).
Learning to personalize. Communications of the
ACM, 43(8), 102-106.
Kantor, P. B., Boros, E., Melamed, B., Mekov,
V., Shapira, B., & Neu, D. J. (2000). Capturing
human intelligence in the Net. Communications
of the ACM, 43(8), 112-115.
Karat, C. M., Brodie, C., Karat, J., Vergo, J. G.,
& Alpert, S. R. (2003). Personalizing the user
experience on ibm.com. IBM Systems Journal,
42(4), 686-701.
Manber, U., Patel, A., & Robinson, J. (2000).
Experience with personalization on Yahoo! Communications of the ACM, 43(8), 35-39.
McCarthy, J. (2000). Phenomenal data mining.
Communications of the ACM, 43(8), 75-79.
Mobasher, B., Cooley, R., & Srivastava, J. (2000).
Automatic personalization based on Web usage
mining. Communications of the ACM, 43(8),
142-151.
KEY TERMS
Adaptable Personalization Systems: Systems that can be customized by the user in an
explicit manner; that is, the user can change the
content, layout, appearance, and so forth to suit his or her needs.
This work was previously published in Encyclopedia of E-Commerce, E-Government, and Mobile Commerce, edited by M.
Khosrow-Pour, pp. 919-925, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 1.14
User-Centered Evaluation of
Personalized Web Sites:
What's Unique?
Sherman R. Alpert
IBM T. J. Watson Research Center, USA
John G. Vergo
IBM T. J. Watson Research Center, USA
Abstract
In addition to traditional usability issues, evaluation studies for personalized Web sites and
applications must consider concerns specific
to these systems. In the general case, usability
studies for computer-based applications attempt
to determine whether the software, in actual use,
meets users' needs; whether users can accomplish
their goals in using the software; whether users
can understand and use the application (whether
they comprehend what they can do and how); the
rate, frequency, and severity of user errors; the
rate of and time duration for task completion; and
so on. But in the case of user-centered evaluations
of personalized Web sites, there are additional
questions and issues that must be addressed. In
this paper, we present some of these, based on our
INTRODUCTION
Personalized Web sites attempt to adapt and
tailor the user experience to a particular user's
preferences, needs, goals, interests, knowledge,
or interaction history. A personalized site adapts
its content, content structure, the presentation of
information, the inclusion of hyperlinks, or the
availability of functionality to each individual
user's characteristics and/or usage behavior. Such
a site may place specific information, which
it thinks you will be interested in, at a distinguished or obvious location on a Web page.
Another personalized site may choose to add or
considered a step moving toward the full realization of the issues and factors that must be addressed
in personalized Web site evaluations.
USER-CENTERED
PERSONALIZATION MEASURES
As mentioned previously, there are many proposals in the adaptive systems literature aimed at
evaluating whether the application or Web site
in question performs its adaptations correctly or
accurately. And usability studies of (nonpersonalized) Web sites have shown that ease of use can
increase the number of revisits and purchases
(e.g., Nielsen, 2003). We touch only gently on
general usability issues here. Instead, this discussion focuses on the users' views and opinions of
personalized adaptations, not only whether they
work as intended, but even if they do so, whether
users want the Web site to be making and using
inferences and data about the user to influence
or direct an adaptive presentation to the user.
Evaluations of adaptive and personalized applications must ultimately address the question, do
the adaptive features actually improve the user's
experience when using the site? (see also Chin,
2001). Designers, researchers, and developers
may have many ideas for personalized functionality for a Web site that they think would provide
users with some benefit. But actual users when
confronted with such features may find them
useless or, worse, objectionable. The intuitions of
the builders of (personalized and all) interactive
software must be confirmed by actual potential
users of that software.
Here we introduce some of the questions and
issues that must be addressed when performing
user-centered evaluations of personalized Web
sites. These must be addressed in addition to
traditional usability concerns, which we will not
discuss but that, of course, must be incorporated
into the user-centered evaluation. For example,
fundamental to any evaluation is whether the site
(and its adaptive behaviors) supports users in accomplishing their goals. Or, even more specifically
related to personalization, does the inclusion of
personalized features make the user more efficient and decrease time-to-goal completion?
These are important, and ignoring such issues
would be foolish. In this paper, we only touch
on these more traditional concerns but go further
in discussing issues that are of a more subjective
nature and relate to the overall user experience,
not simply quantitative efficiency measures such
as time-to-task completion.
Consideration of Context
Some personalization features are based on the
behavior, attitudes, likes and dislikes, navigation,
or purchases of other users. For example, many
existing e-commerce sites recommend additional
related products to the user when a purchase
transaction is about to be completed: "People who bought this product also bought products X, Y, and Z." This is a simple form of collaborative filtering or recommender technologies (e.g.,
Burke, 1999; Resnick & Varian, 1997; Schafer,
Konstan, & Riedl, 2001). When applied in the
context of e-commerce, these technologies use the
buying behavior of prior customers to attempt to
relevant to the ongoing dialog and to be as informative as required but not more so. This result
coincides with the experimental work of Reeves
and Nass (1999). After conducting numerous
studies, Reeves and Nass concluded that users'
expectations regarding their interactions with
computers and other media are based in large part
on social interactions in real life; that is, users
expect computers to obey rules that come from
the world of interpersonal interaction. When a site
provides more information or, in some manner,
other information than what is expected, the
user may not immediately know what to make
of the extra information. In discussing many
features, we heard from our users that they want
to understand, without difficulty, why and how
the computer side of the ongoing bilateral conversation, represented by the content the Web site
displays, chose to say what it has displayed.
Further, does a user's need to understand what
is happening vary with the type of personalized
site? If personalized adaptation is based on, say,
previous navigation, the resultant application
behavior may not always be clear to the user.
For example, the simple collaborative filtering
mechanism implemented on the Amazon site is
explained as "Customers who bought [this book] also bought...". Thus the attempt is made to have
users understand this simple personalization. On
the other hand, when personalized results for a
particular search query result in differing output
at different times, does the user understand why
the results differ? During usability evaluation, we
must also be sure that when a site does explain
its behavior, the explanation is satisfactorily informative from the user's perspective.
Andersen, Andersen, and Hansen (2001) assert
that users of adaptive e-commerce sites wish to
be surprised by the site, but in a very different
manner than what is being discussed here. They
mean pleasantly surprised in terms of value-added
services as a reward for using the site. First, they
state that users wish to see recommended products
related to what they are already purchasing. We
Who is in Control?
In our previous studies regarding personalized
Web sites, the most important issue to users was
their fervent desire to be, or at least feel like they are, in control. The feeling of security experienced by a user of an interactive system is determined by the user's feeling of control of the
from a user-centered perspective, including asking not only is the site usable, but does it provide
functionality and affordances users actually desire, that provide obvious value to users, and that
users will choose to use. The evaluation process
must address these many questions posed previously. The pragmatics of evaluating these issues
is where things are the same as other thorough
and comprehensive user-centered assessments.
The techniques and approach are the same; the focus and issues addressed are expanded.
As for other types of Web sites, user-centered
evaluations of personalized sites certainly ought
to involve a system prototype that users can
view and perhaps minimally interact with. Upon
deciding what personalized features you wish to
potentially incorporate into the site, a prototype
that reifies those features should be constructed.
Evaluation studies should be performed early in
the design process using site prototypes; this can
be a money- and effort-saving technique if designers' initial intuitions regarding adaptive features and functionality do not match users' opinions of
them. This savings may be greater in the case of
adaptive functionality than traditional software
because the former may involve techniques that
are difficult and time consuming to implement.
As Paramythis et al. (2001) suggest, "Eliciting user feedback regarding the modeling process requires that at least a prototype of the system exists." Taking this a step further, initial user testing
should begin when only a prototype exists, that
is, before the site is fully implemented, so that
many design and implementation decisions are
made prior to the expense and time of building
the full site.
A reasonable method of putting the site and its
personalization behaviors in front of users without
the expense of actually building the system (including its adaptive functionality, which is often
complex to implement) is to use paper prototypes,
a technique often used in formative usability
evaluations. Here, drawings of potential screens
Of course, this implies that the evaluation should be driven by realistic scenarios of use of the Web site. Each scenario should involve a story
in which the user must accomplish a particular
goal by using the Web site. The full corpus of
scenarios must include ones that exercise the personalization features the site offers. For example,
a scenario of use for a computer-purchasing site
might involve the goal of purchasing a memory
card that is compatible with a particular computer
(Alpert et al., 2003; Karat et al., 2003).
One approach might be to contrast performing
a particular scenario using two different prototype
sites, with and without specific personalization
features that might or might not support the user
in accomplishing specific goals. Experimenters
can walk users through interaction with mock-site
prototypes, or users can use, hands on, a minimally interactive prototype, to accomplish this
same scenario goal with the two prototypes. Or
the walk-through or interaction may occur with
only a single prototype, one that incorporates and
demonstrates a particular personalization feature.
Then, experimenter-led discussions with focus
groups and with individual study participants,
written and oral questionnaires, time-to-task
completion, number of clicks, and keyboard
interactions (the materials and procedures typically incorporated into user evaluations of interactive systems) would also be employed, as in the personalization evaluation studies reported in Alpert et al. (2003) and Karat et al. (2003).
For example, questionnaires might list features such as "The site will conduct constrained searches for accessories and upgrades, searching only among those that are compatible with the products you already own," and study participants would be asked to rate each feature in terms of its value to the participant using a Likert scale ranging from "Highly valuable" to "Not at all valuable." Evaluations must interrogate whether
the personalization features actually helped the
user accomplish his/her goals. Questionnaires
might include assertions, including such general
CONCLUSION
Design of personalization or user-adaptive systems (or any software technology) cannot occur in
a vacuum; specifically, it cannot usefully proceed
without assessing the value and usefulness to
users of the concepts proposed and implemented
by researchers and developers. Is the overall user
experience enhanced due to the inclusion of personalized features and functionality?
We can see that many of the presented user-centered measures interact and overlap: User
trust and confidence in a Web site will plainly be
affected by the quality of the system's inferences
regarding what the user wants to see; if a user
cannot understand why particular information is
REFERENCES
This work was previously published in Human Computer Interaction Research in Web Design and Evaluation, edited by P.
Zaphiris, pp. 257-272, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 1.15
The Usability Engineering Behind User-Centered Processes for Web Site Development Lifecycles
Abstract
Usability is integral to software quality. Software
developers increasingly acknowledge the importance of user-centered Web site development. The
value of usability engineering and the role of the
usability engineer (UE) are less understood. A
common assumption is that the UE's role is only
to be a user advocate. To this role, we add the
responsibility of addressing concerns of other
stakeholders in Web site design and development.
We discuss usability engineering and the processes
that it encompasses, such as project planning,
requirements definition, user-centered design
(UCD) and evaluation/testing within the context
of traditional software engineering lifecycles. We
Introduction
People use the Web in a variety of ways. Their
interaction with the Web can be self-motivated
or externally motivated; their proficiency novice
or expert; their needs and expectations simple or
complex. To engineer a successful and satisfactory user experience with a Web site, we need
to understand issues such as why people go to
a Web site; what they expect and intend to accomplish at the site; and everything impacting
on their experience.
A Web site is the result of a set of processes,
usually iterative, beginning with conceptualization, planning and requirements definition, then
going on to design, version production, and testing/
evaluation, before culminating in the site launch.
For usability engineering to be fully integrated into
Web site development, its practices must be fully
integrated into software development lifecycles
(Addelston & O'Connell, 2004, 2005). Lifecycles
are structured frameworks for software development activities. For example, Figure 1 incorporates
elements that iterative lifecycles typically include.
In practice, the sequence and frequency of activities can vary. Research and experience show
that including usability in software engineering
lifecycles is critical (Mayhew, 1992).
Developing a Web site is a team effort. Each
team member has roles and responsibilities. The
roles of the usability engineer (UE) are integral
to these processes and to the team implement-
Figure 1. A generic, variable sequence, iterative Web site development lifecycle illustrates points where
usability engineering is most beneficial
[Figure 1 depicts the lifecycle as a cycle of outer activities (product conceptualization, project planning, requirements definition, design, version production, evaluation/testing, and software launch and site maintenance) surrounding usability engineering, software engineering, and users.]
Note: With the exception of version production, each of the activities in the outer ovals includes both usability
engineering and software engineering processes. In practice, the sequence and frequency of activities can vary.
Usability
Users
User interface
Usability engineering
Software engineering
Integrating usability engineering into software engineering lifecycles
Lifecycle activities
Usability
People outside the field of usability engineering
sometimes consider usability to be obvious, but
vague and unstructured, something common
sense can recognize and accomplish. Sometimes
they are surprised to learn that the field has its own
definitions and established processes. Although
those people are happy to define usability as "I know it when I see it," for UEs, a strict definition underlies our focus on users' needs and our
goal of meeting those needs through usability
engineering processes. This chapter discusses
users needs in the context of human-computer
interaction (HCI), specifically as users interact
with Web sites.
The International Organization for Standardization (ISO) defines usability through the
attributes of users' interactions with software
products in specific contexts of use: efficiency,
effectiveness, and user satisfaction. We boil
these attributes down to two outcomes: (1) user
success and (2) user satisfaction (ISO, 1998). The ISO
definition implies that usable software must be
accessible.
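As a hypothetical sketch of how these two outcomes might be quantified in a usability test (the session records are invented and are not part of the ISO definition), effectiveness can be summarized as a completion rate, efficiency as time on task for completed attempts, and satisfaction as a mean questionnaire score:

```python
# Hypothetical sketch: summarizing a usability test in terms of user success
# and user satisfaction. The session records below are invented.
sessions = [
    {"user": "p1", "completed": True,  "seconds": 95,  "satisfaction": 4},
    {"user": "p2", "completed": True,  "seconds": 140, "satisfaction": 5},
    {"user": "p3", "completed": False, "seconds": 300, "satisfaction": 2},
]

def summarize(sessions):
    """Success as completion rate, efficiency as mean time for completed
    attempts, and satisfaction as the mean questionnaire score."""
    completed = [s for s in sessions if s["completed"]]
    return {
        "success_rate": len(completed) / len(sessions),
        "mean_seconds_to_complete": (
            sum(s["seconds"] for s in completed) / len(completed) if completed else None
        ),
        "mean_satisfaction": sum(s["satisfaction"] for s in sessions) / len(sessions),
    }

print(summarize(sessions))
# e.g., {'success_rate': 0.67, 'mean_seconds_to_complete': 117.5, 'mean_satisfaction': 3.67}
```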
Throughout this chapter, we draw illustrations
from our work in a specialized branch of usability
called accessibility. Accessibility enables people
with disabilities to experience success and satisfaction with software to a degree comparable to
that enjoyed by people without disabilities. Although some authors treat accessibility as distinct
from usability, we consider it to be a subdomain
of usability in which the users are people with
physical and/or cognitive disabilities (Hix &
O'Connell, 2005).
Users
In the context of this chapter, users are people who
interact with Web sites. In the sense in which we
use the term, users are also known as end users,
the people who visit Web sites and interact with
their contents. The term "user" excludes people
employed in a Web site project, for example, the
UEs. It excludes the site's providers and others
who have any stake in the Web site. People close to
the project can be too technical or too expert in a
domain to represent a user who does not have the
same training, goals or background. Those close
to the project run a high risk of unintentionally
clouding their view of users needs with their own
commitment to achieving the project's goals.
Many variables are inherent to users (Bias &
Karat, 2005). Some are intrinsic, for example, age;
gender; experience with technology; intellectual
or aesthetic preferences; interaction styles; and the
presence or absence of physical or cognitive disabilities. Other variables, such as employer goals
and working environment, are extrinsic but affect
the user experience. Many user attributes can
decline with age, for example, memory and perception of color contrast (O'Connell, in press).
Each user brings a cluster of capabilities and
limitations to any interaction with a Web site.
These are the well-documented human capabilities
and limitations in perception, manual dexterity,
memory, problem solving, and decision making
(e.g., Baddeley, 1990; Brown & Deffenbacher,
1979; Mayer, 1992). For example, the limitations
of working memory are well known: seven plus
User Interface
In one sense, a user interface (UI) is software
that people use to interact with technology. For
UEs it is a matter of layers. Above the underlying
Usability Engineering
Usability engineering is a set of defined, user-centered processes, grounded in research and
experience-based principles. The purpose of
usability engineering in Web development is to
raise the potential for users' success and satisfaction and, thereby, to support Web site providers' goals. The UE must understand the complex set
of variables residing in any user group and apply
this understanding to promote users' success and
satisfaction. This understanding is what makes
usability engineering critical to achieving Web site
usability. Because people's styles of interacting
with technology change as technology progresses,
usability engineering is a continually evolving
field informed by applied research in human
interaction with technology.
The UE applies expertise not usually found
in other software development team members to
make an essential contribution to the quality of
a Web site. As noted by Bias and Karat, "good usability is not standard for most Web sites" (2005, p. 2). When usability engineering is not
part of Web site development, the team faces a
high risk that, at the least, the site will not promote users' success and satisfaction; at worst, it
Stakeholders
Software Engineering
Integrating Usability Engineering into Software Engineering Lifecycles
Software engineering lifecycles are hospitable to
usability engineering. Commonalities between
software engineering and usability engineering
facilitate this compatibility. The two professions
share tools such as use cases, although they sometimes employ them differently. They have the
common goal of delivering quality Web sites, on
time and on budget, to satisfied users, customers,
and other stakeholders. They share terminology,
but sometimes with different meanings or connotations. For example, in software engineering,
the word interface primarily means a connection
between two components of a software system,
whereas, to a UE, interface first and foremost
denotes the human-computer interface.
Software engineering and usability engineering processes can occur in parallel because their
activities and outputs are compatible. Sometimes
these processes are rigid, but the constraints of
developing Web sites in real time against tight
schedules and tighter budgets drive a trend toward
adaptability. This trend emphasizes the need for a
UE. In this fast-paced environment, users on the
development team can be few and their involvement infrequent. In such a case, a UE draws on
knowledge of the field, for example, usability
principles and knowledge of users, to aid in the
development of usable Web sites.
Lifecycle Activities
Usability engineering has corresponding activities
for most software engineering activities. Not all
lifecycles incorporate the same activities. Activity
sequence and frequency can vary. Some activities can be simultaneous. Each activity has goals,
inputs, processes, and products.
In Figure 1, we present a high-level view of
a user-centered software engineering lifecycle.
Project Planning
A Web site project starts as an idea. This can be
a new idea encapsulating the site provider's vision. More often, at the end of a lifecycle, a team
returns to product conceptualization when they
evaluate an existing site and begin to plan future
versions. In either case, the team needs a blueprint
for the steps between the original concept and the
insertion of the final product into the workplace.
This blueprint is called the project plan.
Formal software engineering lifecycles start
with project planning. Successful project planning
Table 1. Usability engineering activities and processes during a software development lifecycle (Partially
based on Addelston & O'Connell, 2005; Mayhew, 1992)
[Table 1 pairs the lifecycle activities (product conceptualization, project planning, requirements definition, design, version production, and evaluation/testing across versions) with their corresponding software and usability engineering processes.]
Requirements Definition
From the software engineering perspective,
requirements definition is a set of processes that
identify and document a Web site's goals in terms of how the site will fulfill its provider's vision
by delivering information and/or functionality.
It focuses on user needs assessment. Usability
engineering also looks at the site from the users' perspectives to verify that the users' needs
and expectations are being met. It addresses user
requirements.
Web sites have functional, system performance, and usability requirements. Functional
requirements define what a Web site is supposed
to do. For example, a functional requirement for
an e-commerce site that sells printers stipulates
that the site must display photos of the printers.
System performance is a measure of how well
the Web site does what it is supposed to do. In
our example, a system performance requirement stipulates that the site will deliver a specified photo over a 56K modem in less than two
seconds. Usability requirements are sometimes
called non-functional requirements. This term is
misleading, however, because it diminishes the
importance of usability.
Usability is a measure of users success and
satisfaction with their experience at the site. But
usability must also address the goals of the site's providers. For example, the providers of an e-commerce site benefit from having customers
Design
Incorporating users input from requirements
definition, the UE participates in developing the
sites information architecture. Information architecture is like a road map; it sets out the paths
that users follow to their destinations on a Web
site. It is at the heart of design. Impacting more
than site navigation, the information architecture
impacts a page's content and layout. The UE's
role is to assure that the information architecture
facilitates navigation and makes finding information natural for users.
Important UCD processes, collectively called
interaction design, consider the ways that real
users attempt to accomplish goals at a Web site.
UEs base interaction design on all that they have
learned about the users, for example, their age-based capabilities, mental models, and expectations, within the context of the goals of the site's
providers. Usability principles provide UEs with
rules of thumb that inform UCD decisions. Consider a site intended for senior citizens who expect
a prominent link to articles about leisure activities
for seniors. The UE considers usability principles
on legibility for older users with decreased visual
acuity. These principles recommend a large font
and a strong contrast between the font and background colors (e.g., Czaja & Lee, 2003).
User-Centered Design
Best practices in usability engineering include
UCD, a set of usability engineering processes that
focus on understanding users, their goals, their
strengths and limitations, their work processes: all user attributes that impact how users will
Use Cases
A use case is a formal description of ways a product
can be used. It consists of a statement of goals
with a description of the users and the processes
the designers expect them to perform to achieve
those goals. Sometimes, a use case is expressed
in a sketch. Use cases first come into play in task
analysis activities during requirements definition.
They are referred to during design.
Use cases provide an example of how a UE
can prevent a well-intentioned practice from misrepresenting users. Use cases are the product of
a process analysis technique to develop a simple,
high-level statement of users' goals and processes.
Use cases are common to the tool kits of both
software engineers and usability engineers.
Basing Web design on use cases has strengths
and weaknesses. For each module of a system,
common processes are written up with the prerequisites for each process, the steps to take for
the users and the system, and the changes that
will be true after the process is completed. Use
cases help to ensure that frequent processes are
supported by the system, that they are relatively
Evaluation/Testing
Verification and validation (V&V) are software
engineering terms for testing. Verification is iterative testing against requirements. Validation is
the final testing against requirements at the end of
the lifecycle. Usability evaluation is a set of V&V
processes that occurs in conjunction with other
V&V activities and is an integral component of an
overall V&V approach. In addition to checking for
conformance to usability requirements, usability
evaluation has the added goal of assessing a wide
range of users' experiences at the site. The UE
keeps the door open for new requirements based
on the way real users interact with the site. New
requirements become input to the next cycle. At
project's end, they become input for new product
conceptualization.
Key user-centered, usability evaluation processes entail observing users interacting with a
Web site. Activities for formal user observation
include writing a test plan; identifying participant
users; working with site providers to set usability goals for each user group and task; defining
tasks; writing statements of goals that never tell
users how to achieve those goals; preparing a
user satisfaction survey; preparing ancillary
materials such as consent forms; carrying out
the observations; analyzing data; and writing a
report (Lazar, Murphy, & O'Connell, 2004). These
formal processes entail structuring evaluation
activities to reflect the tasks identified during
requirements definition.
Evaluation draws on the products of all earlier
activities. For example, usability goals are based
Summary
We maintain that usability engineering is rigorous and process-based, and that it addresses the needs of stakeholders, such as site providers, as well as users.
We have set out a typical software engineering
process and discussed key usability engineering
contributions. We have demonstrated simultaneous, complementary activities whose products
benefit later activities without adversely affecting schedules. We have shown what would be
lacking without usability engineering and how
the potential of users success and satisfaction
increases with usability engineering. We stress
that usability engineering is the means to providing successful and satisfactory experiences for
Web site users while fulfilling the goals of the
site's providers. The UE's contribution is integral
to Web site development.
References
Addelston, J. D., & O'Connell, T. A. (2004). Usability and the agile project management process
framework. Cutter Consortium Agile Project
Management Executive Report, 5(10).
Addelston, J. D., & O'Connell, T. A. (2005). Integrating accessibility into the spiral model of the
software development lifecycle. In Proceedings
of the 11th International Conference on Human-Computer Interaction (Volume 8, Universal
Access in HCI: Exploring New Dimensions of
Diversity). Las Vegas, NV: Mira Digital Publishers. CD-ROM.
Andre, A. D., & Wickens, C. D. (1995). When users want what's NOT best for them. Ergonomics
in Design, 4, 10-14.
Baddeley, A. (1990). Human memory. Boston:
Allyn & Bacon.
Bias, R. G., & Karat, C. M. (2005). Justifying cost-justifying usability. In R. G. Bias & D. J.
Mayhew (Eds.), Cost justifying usability: An update for the Internet age (2nd ed., pp. 2-16). San
Francisco: Elsevier.
Bolt, R. A. (1984). The human interface: Where
people and computers meet. Toronto, Canada:
Wadsworth.
Brown, E. L., & Deffenbacher, K. (1979). Perception and the senses. New York: Oxford University
Press.
Caldwell, B., Chisholm, W., Slatin, J., & Vanderheiden, G. (Eds.). (2005, June 30). Web content
accessibility guidelines 2.0 (WCAG 2.0). Working
Van Der Veer, G. C., & Del Carmen Puerta Melguizo, M. (2003). Mental models. In J. A. Jacko &
A. Sears (Eds.), The human-computer interaction
handbook: Fundamentals, evolving technologies,
and emerging applications (pp. 52-80). Mahwah,
NJ: Erlbaum.
Vicente, K. J. (1999). Cognitive work analysis:
Toward safe, productive, and healthy computer-based work. Mahwah, NJ: Erlbaum.
Web accessibility initiative (WAI). Retrieved July
16, 2005, from http://www.w3.org/WAI/
Whiteside, J., Bennett, J., & Holtzblatt, K. (1990).
Usability engineering: Our experience and evolution. In M. Helander (Ed.), Handbook of human-computer interaction (pp. 791-817). New York:
Elsevier B.V.
World Wide Web Consortium (W3C). (2004).
Endnote
1
This work was previously published in Human Computer Interaction Research in Web Design and Evaluation, edited by P.
Zaphiris, pp. 1-21, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 1.16
Personalization Systems and Their Deployment as Web Site Interface Design Decisions
Abstract
Introduction
This chapter reviews the different types of personalization systems commonly employed by Web
sites and argues that their deployment as Web
site interface design decisions may have as big
an impact as the personalization systems themselves. To accomplish this, this chapter makes a
case for treating Human-Computer Interaction
(HCI) issues seriously. It also argues that Web
site interface design decisions made by organizations, such as the type and level of personalization
employed by a Web site, have a direct impact on
the communication capability of that Web site.
This chapter also explores the impact of the
deployment of personalization systems on users'
loyalty towards the Web site, thus underscoring the
practical relevance of these design decisions.
Background: Personalization
Systems
Most of the technologies and tools that companies
use to manage their relationship with their customers usually fall under the banner of Customer
Relationship Management (CRM) systems. Even
though personalization is just one piece of the
CRM pie, it is a very crucial piece as effective
personalization significantly enhances the ability
of the organization to initiate a discourse with
[Figure: the personalization processes draw on implicit profiling, explicit profiling, legacy data, and business rules.]
manage content. Most of the advanced personalization might require sophisticated data mining
techniques and the ability to display dynamic
content without seriously compromising system
resources (dynamic display of content will usually
mean increased download time).
There are a few well-known techniques for
personalization. Rules-based personalization
modifies the content of a page based on a specific
set of business rules. Cross-selling is a classic
example of this type of personalization. The key
limitation of this technique is that these rules must
be specified in advance. Personalization that uses
simple filtering techniques determines the content that would be displayed based on predefined
groups or classes of visitors and is very similar to
personalization based on rules-based techniques.
Personalization based on content-based filtering
analyzes the contents of the objects to form a
representation of the visitor's interest (Chiu,
2000). This would work well for products with
a set of key attributes. For example, a Web site
can identify the key attributes of movies (VHS,
DVD) such as drama, humor, violence, etc., and
can recommend movies to its visitors based on
similar content. Personalization based on collaborative filtering offers recommendations to
a user based on the preferences of like-minded
peers. To determine the set of users who have
similar tastes, this method collects users' opinions on a set of products using either explicit or
implicit ratings (Chiu, 2000). Please see Figure
1 for an illustration of how a Web site could use
all three personalization methods to best serve
the customer.
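To make collaborative filtering concrete, the sketch below is a hypothetical illustration (not taken from Chiu (2000) or any particular site; the users, products, and ratings are invented): it weights peers by the similarity of their explicit ratings to the current user's and recommends products those like-minded peers rated highly.

```python
# Hypothetical sketch of user-based collaborative filtering with explicit ratings.
from math import sqrt

ratings = {
    "alice": {"book_a": 5, "book_b": 3, "dvd_x": 4},
    "bob":   {"book_a": 4, "book_b": 3, "dvd_y": 5},
    "carol": {"book_b": 2, "dvd_x": 5, "dvd_y": 1},
}

def similarity(u, v):
    """Cosine similarity over the products both users have rated."""
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    dot = sum(ratings[u][p] * ratings[v][p] for p in common)
    norm_u = sqrt(sum(ratings[u][p] ** 2 for p in common))
    norm_v = sqrt(sum(ratings[v][p] ** 2 for p in common))
    return dot / (norm_u * norm_v)

def recommend(user, top_n=2):
    """Score products the user has not rated by the similarity-weighted
    ratings of the other users (the 'like-minded peers')."""
    scores = {}
    for peer in ratings:
        if peer == user:
            continue
        w = similarity(user, peer)
        for product, r in ratings[peer].items():
            if product not in ratings[user]:
                scores[product] = scores.get(product, 0.0) + w * r
    return sorted(scores, key=scores.get, reverse=True)[:top_n]

print(recommend("alice"))   # e.g., ['dvd_y']
```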
Deployment of
Personalization Systems
Profiling
An intelligent way to make the Web site adaptive is to use not only the information provided
Personalization as an Interactive
Dialogue Between a Web Site and Its
Users
The level and extent of personalization offered
by the Web site will have an effect on the communication characteristics of the media. This
research argues that different levels of support
provided for personalization will specifically impact on the adaptiveness [similar to contingency
used by (Burgoon et al., 2000)] of the Web site.
This is best illustrated by discussing a real life
example using Amazon.com. Appendices 1 to 3
include three screen shots that show the different ways Amazon.com attempts to personalize
the experience of the customer. When the user
enters the Web site, he or she is invited to log
in if desired. Once the user logs in, Appendix 1
shows the Web page that is dynamically created
by Amazon.com. This page recommends products to the user based on past purchase history
and on the explicit ratings provided by the user
to a set of select items. Appendix 2 shows the
product page for a book the user is interested in.
The column on the left hand side of this page
shows the associated related content about the
product that is displayed on this page. Appendix
3 shows the page tailor-made for the user based
on his recent browsing history and past purchase
history. Of course, the scenario described above
assumes that the user logged into the Web site at
the outset. An intelligent Web site can still adapt
its content in its product page by assuming that
Deployment of Personalization as
Web Site Design Decisions
Reeves, Nass and their colleagues at the Center
for the Study of Language and Information at
Stanford have shown that even experienced users
tend to respond to computers as social entities
(Nass, Lombard, Henriksen, & Steur, 1995; Nass,
Moon, Fogg, Reeves, & Dryer, 1995; Nass & Steur,
1994). These studies indicate that computer users
follow social rules concerning gender stereotypes
and politeness, and that these social responses
are to the computer as a social entity and not to
the programmer. When explicitly asked by the
researchers, most users consistently said that
social responses to computers were illogical and
inappropriate. Yet, under appropriate manipulation, they responded to the computer as though it
were a social entity. This, in fact, is the essence
of the Theory of Social Response (Moon, 2000;
Reeves et al., 1997). Thus I argue that there is
value in conceptualizing the Web site as a social
actor and that the Web site can be equated to the
agents mentioned above in terms of source
orientation. There are several points-of-contact
between a Web site and its users that will result
Conclusion
In practice, it is advantageous for the Web sites to
offer some form of support for personalization or
virtual community, as this leads the Web site to be perceived as more adaptive. This will facilitate better communication between the Web site
and the shoppers, thus leading to higher levels
of customer loyalty. Companies do understand
that in practical terms it takes a lot more money
and effort to acquire a new customer than to keep
an existing customer, and the arguments presented in this chapter throw new light on the role of support for personalization and consumer reviews
in increasing customer loyalty.
Good personalization systems can be very
expensive to set up. Enabling an e-commerce
Web site with the necessary tools to build a vibrant
community costs little (especially when compared
References
Andre, E., & Rist, T. (2002). From adaptive hypertext to personalized Web companions. Communications of the ACM, 45(5), 43-46.
Billsus, D., Brunk, C. A., Evans, C., Gladish, B.,
& Pazzani, M. (2002). Adaptive interfaces for
ubiquitous Web access. Communications of the
ACM, 45(5), 34-38.
Brown, S., Tilton, A., & Woodside, D. (2002).
Online communities pay. McKinsey Quarterly,
19(1), 17.
Burgoon, J. K., Bonito, J. A., Bengtsson, B., Cederberg, C., Lundeberg, M., & Allspach, L. (2000).
Testing the interactivity model: Communication
processes, partner assessments, and the quality
of collaborative work. Journal of Management
Information Systems, 16(3), 33-56.
Chiu, W. (2000). Web site personalization. IBM
High-Volume Web Site Team, WebSphere Software Platform.
Appendix 1
Appendix 2
Appendix 3
This work was previously published in Web Systems Design and Online Consumer Behavior, edited by Y. Gao, pp. 147-155,
copyright 2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 1.17
Communication +
Dynamic Interface =
Better User Experience
Simon Polovina
Sheffield Hallam University, UK
Will Pearson
Sheffield Hallam University, UK
Introduction
Traditionally, programming code that is used
to construct software user interfaces has been
intertwined with the code used to construct the
logic of that application's processing operations
(e.g., the business logic involved in transferring
funds in a banking application). This tight coupling
of user-interface code with processing code has
meant that there is a static link between the result
of logic operations (e.g., a number produced as the
result of an addition operation) and the physical
form chosen to present the result of the operation to the user (e.g., how the resulting number
is displayed on the screen). This static linkage
is, however, not found in instances of natural
human-to-human communication.
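By way of contrast, the following sketch is hypothetical (only the banking example above is borrowed from the text): it separates the processing logic, which returns plain content, from presenter objects that encode that content into a physical form only at the moment it is shown to the user.

```python
# Hypothetical sketch: the processing logic returns plain content, and the
# choice of physical form is deferred to a separate presenter object.
from abc import ABC, abstractmethod

def transfer_funds(balance, amount):
    """Business logic only: it knows nothing about screens or sound."""
    return balance - amount

class Presenter(ABC):
    @abstractmethod
    def present(self, content): ...

class ScreenPresenter(Presenter):
    def present(self, content):
        return f"New balance: {content}"                          # visual form

class SpeechPresenter(Presenter):
    def present(self, content):
        return f"<speak>Your new balance is {content}</speak>"    # audible form

def show(result, presenter: Presenter):
    # Encoding into a physical form happens only at "transmission" time.
    return presenter.present(result)

result = transfer_funds(1000, 250)
print(show(result, ScreenPresenter()))
print(show(result, SpeechPresenter()))
```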
Humans naturally separate the content and
meaning that is to be communicated from how
Background
This section accordingly reviews certain theories
of communication from different disciplines and
how they relate to separating the meaning being
communicated from the physical form used to
convey the meaning.
The Human-Computer Interaction Benefits
The human race rarely uses fixed associations
between content or meaning and its physical
representation. Instead, people encode the meaning into a form appropriate for the situation and
purpose of the communication. Communication
can be encoded using different ontologies such
as different languages and terminology. Communication is thus able to take different physical
channels (e.g., sound through the air, or writing
on paper), all of which attempt to ensure that the
content or meaning is communicated between the
parties in the most accurate and efficient manner available for the specific characteristics of
Future Trends
One emerging trend is the use of explicit user
modeling to modify the behaviour and presentation of systems based on a user's historic use of
that system (Fischer, 2001). Explicit user modeling
involves tracking the preferences and activities of
a user over time, and building a model representing that behaviour and associated preferences.
This, coupled with the concept of presenting the
content and meaning in the form most suitable for
the user, holds the ability to tailor the content to a
specific individual's needs. By monitoring how a
user receives different types of information over
time, a historic pattern can be developed that can
subsequently be used to present the content and
meaning based on an individual's actual requirements, not on a generalized set of requirements
from a specific group of users.
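A minimal sketch of this idea, under the assumption that the system simply counts which presentation form a user chooses for each kind of content (the content types and forms here are invented), might look like this:

```python
# Hypothetical sketch of explicit user modeling: record which presentation
# form a user chooses for each kind of content, then propose the historically
# preferred form the next time that kind of content is delivered.
from collections import defaultdict, Counter

class UserModel:
    def __init__(self):
        # content type -> counts of the presentation forms the user chose
        self.history = defaultdict(Counter)

    def record(self, content_type, chosen_form):
        """Track one observed preference (e.g., 'news' consumed as 'audio')."""
        self.history[content_type][chosen_form] += 1

    def preferred_form(self, content_type, default="text"):
        counts = self.history.get(content_type)
        if not counts:
            return default          # no history yet: fall back to a default
        return counts.most_common(1)[0][0]

model = UserModel()
model.record("news", "audio")
model.record("news", "audio")
model.record("news", "text")
print(model.preferred_form("news"))    # 'audio'
print(model.preferred_form("sports"))  # 'text' (no history yet)
```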
Conclusion
Currently, by entwining the association between
content and meaning and the physical form used to
represent it, software user interfaces do not mimic
natural human-to-human communication. Within
natural communication, the content and meaning
that is to be conveyed is detached from its physical form, and it is only encoded into a physical
form at the time of transmission. This timing of
the point at which the content and meaning are
encoded is important. It gives the flexibility to
encode the content and meaning in a form that
References
Booch, G. (1990). Object oriented design with
applications. Redwood City, CA: Benjamin-Cummings Publishing Co., Inc.
Bruno, F. (2002). Psychology: A self-teaching
guide. Hoboken, NJ: John Wiley & Sons.
Ferraiolo, J., Jun, F., & Jackson, D. (Eds.). (2003).
SVG 1.1 recommendation. The World-Wide Web
Consortium (W3C). Retrieved October 4, 2005
from http://www.w3.org/TR/SVG
Fischer, G. (2001). User modeling in human-computer interaction. User Modeling and User-Adapted Interaction, 11(1-2), 65-86.
Fowler, M. (2003). Patterns of enterprise application architecture. Boston: Addison Wesley
Press.
French, T., Polovina, S., Vile, A., & Park, J.
(2003). Shared meanings requirements elicitation
(SMRE): Towards intelligent, semiotic, evolving architectures for stakeholder e-mediated
Key Terms
Accessibility: The measure of whether a
person can perform an interaction, access information, or do anything else. It does not measure
how well he or she can do it, though.
Content: The information, such as thoughts,
ideas, and so forth, that someone wishes to communicate. Examples of content could be the ideas
and concepts conveyed through this article, the
fact that you must stop when a traffic light is
red, and so on. Importantly, content is what is
to be communicated but not how it is to be communicated.
Encoding: Encoding is the process by which
the content and meaning that is to be communicated is transformed into a physical form suitable
for communication. It involves transforming
thoughts and ideas into words, images, actions,
and so forth, and then further transforming the
words or images into their physical form.
Object Orientation: A view of the world
based on the notion that it is made up of objects
classified by a hierarchical superclass-subclass
structure under the most generic superclass (or
root) known as an object. For example, a car is
a (subclass of) vehicle, a vehicle is a moving
object, and a moving object is an object. Hence,
a car is an object as the relationship is transitive
and, accordingly, a subclass must at least have the
attributes and functionality of its superclass(es).
Thus, if we provide a generic user-presentation
object with a standard interface, then any of its
subclasses will conform to that standard interface.
This enables the plug and play of any desired
subclass according to the user's encoding and
decoding needs.
Physical Form: The actual physical means
by which thoughts, meaning, concepts, and so
forth are conveyed. This, therefore, can take the
a standard interface between the tiers. This enables the easy swapping in and out of presentation
components, thus enabling information to be
encoded into the most appropriate physical form
for a given user at any given time.
This work was previously published in the Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 85-91,
copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.18
Pervasive Computing:
What Is It Anyway?
Emerson Loureiro
Federal University of Campina Grande, Brazil
Glauber Ferreira
Federal University of Campina Grande, Brazil
Hyggo Almeida
Federal University of Campina Grande, Brazil
Angelo Perkusich
Federal University of Campina Grande, Brazil
Abstract
Inside Chapter
In this chapter, we introduce the key ideas related to the paradigm of pervasive computing.
We discuss its concepts, challenges, and current
solutions by dividing it into four research areas.
Such a division is how we were able to understand
what really is involved in pervasive computing
at different levels. Our intent is to provide readers with introductory theoretical support in the
selected research areas to aid them in their studies
of pervasive computing. Within this context, we
hope the chapter can be helpful for researchers of
pervasive computing, mainly for the beginners,
and for students and professors in their academic
activities.
Introduction
Today, computing is facing a significant revolution. There is a clear migration from the traditional desktop-based computing to the ubiquitous
Table 1. The introductory literature that has been used and their main contributions
[The table pairs each reference, among them Weiser (1991) and Weiser (1993), with its main contribution.]
Pervasive Networking
Pervasive networking is about the plumbing involved in the communication of devices in pervasive computing environments. Therefore, studies
within this area range from the design and energy
consumption techniques of wireless interfaces to
the development of high level protocols, such as
routing and transport ones. At this high level, mobility and host discovery play fundamental roles
as enablers of pervasive environments. Research
in these areas has considerably advanced, and as a
result, some practical solutions are already available today. Therefore, in this section we present a
review of the concepts associated with mobility
and host discovery.
Mobility
You probably receive your mail at your residence,
right? Now, consider that you are moving to a
new house. Among other concerns, you would
probably want to change the mailing address associated with correspondences like your credit
card bill. In this case, you must notify your credit
card company that you have just moved, and that
consequently your mailing address has changed.
Either you do this or your credit card bill will
be delivered to the old address, which is not a
desirable situation.
A scenario similar to the above one is basically what happens in computing environments
enhanced with mobility. In other words, mobility
must allow a device to change its physical location
and still be capable of receiving network packets from the other hosts. Note that, by physical
location, we are referring to the network a device
is connected to. Therefore, moving through different networks is what requires a node to have
its address changed.
adjacent to a set of others. All cells have a central element, which provides connectivity for the
nodes within them. Therefore, a mobile device
can move through different cells and maintain a
connection with their central element, becoming
thus accessible even when moving. Current mobile
telephony networks are an example of this kind
of communication, where the base stations act
as the central elements. Finally, pervasive communication can be mainly characterized by the
lack of a fixed network infrastructure, in contrast to the two previous ones. Therefore, nodes
should establish connections directly with each
other whenever they come close enough. These
features are what characterize the so-called ad
hoc networks (Chlamtac, Conti, & Liu, 2003), and
will be of great importance in the deployment of
pervasive computing environments.
Among the current solutions for mobility, we
could cite Mobile IP (Perkins, 1997), GPRS (General Packet Radio Service), and Bluetooth. Basically, Mobile IP and GPRS are mobility solutions
respectively for IP and mobile telephony networks.
Bluetooth, on the other hand, is a standard for
short-range and low-cost wireless communication
in an ad hoc way. Further description concerning
these technologies can be found on the Web sites
listed in the Useful URLs section.
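The fragment below is only a conceptual sketch of the idea behind Mobile IP, in which a home agent keeps a binding from a node's permanent address to its current care-of address and forwards traffic accordingly; the addresses and the dictionary-based "packets" are invented for illustration.

```python
# Hypothetical sketch of home-agent forwarding, the core idea of Mobile IP.
class HomeAgent:
    def __init__(self):
        self.bindings = {}                  # permanent address -> care-of address

    def register(self, home_address, care_of_address):
        """Called by the mobile node whenever it attaches to a new network."""
        self.bindings[home_address] = care_of_address

    def deliver(self, packet):
        """Forward a packet addressed to a permanent address to the node's
        current location (or leave it untouched if the node has not moved)."""
        destination = self.bindings.get(packet["to"], packet["to"])
        return {**packet, "forwarded_to": destination}

agent = HomeAgent()
agent.register("10.0.0.7", "192.168.5.23")      # node moved to a visited network
print(agent.deliver({"to": "10.0.0.7", "payload": "hello"}))
# {'to': '10.0.0.7', 'payload': 'hello', 'forwarded_to': '192.168.5.23'}
```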
Host Discovery
Putting it simply, host discovery is about finding other hosts in the network, and also being
found by them. This apparently simple concept
is of great importance for pervasive computing
environments, and can be found in technologies
such as Bluetooth and UPnP (Universal Plug and
Play). As an example of its usage, consider the
acquisition of context information in decentralized
environments, such as the available services. By
using host discovery, a device can, for example,
find the available hosts in the environment and
query them for the services they provide. This
Context in Pervasive
Computing
A fundamental functionality of pervasive computing applications is to present users with relevant
information or services at the right place and in
the right time, in a seamless way. Such information can be, for instance, a landmark for tourists
to visit based on their preferences. In this process, two key inputs are involved: the needs and
interests of the user and the information available
both in the environment and in their devices. The
former allows the applications to define what sort
of information would be relevant to the user. The
latter is the source from where such information
will be retrieved. Let us get back to our first example, the one presented in the First Glance on
Pervasive Computing section. In that case, your
desire for a cappuccino and the book you wanted
to buy were your needs and interests. Whereas the
former could be acquired, for instance, by keep-
A Definition of Context
The preceding discussion should provide at least
a first impression of what context really means. In
pervasive computing literature, context has been
defined in a number of ways. Some researchers
have defined context by categorizing the different
information associated with it. Gwizdka (2000),
for example, identifies two types of context:
internal and external. Internal context provides
information about the state of the users, such as
their current emotional state. External context,
on the other hand, describes the environment on
which a user is immersed, for example, informing about the current noise or temperature level.
In the work of Petrelli, Not, Strapparava, Stock,
and Zancanaro (2000), two types of context are
identified: material and social. Material context
is associated with location (e.g., at home), devices
(e.g., a handheld, a cellular phone) or the available infrastructure (e.g., available networks).
Social context, on the other hand, encapsulates
the information about the current social state of
the user, for example, in a meeting or a movie
theater. Another work, by Schilit and Theimer
(1994), defines three categories for grouping
context information: computing context, user
context, and physical context. A refinement of
these categories is presented by Chen and Kotz
(2000), through the addition of a fourth category,
time context. The information associated with
each category is presented as follows.
Development of Pervasive
Computing Systems
Based on our discussion until now, it is easy to
realize that the intrinsic features of pervasive
computing have an impact on the way software
is designed and developed. For example, adaptability, customization, and context sensitivity are
some of the characteristics that are constantly
associated with pervasive computing systems
(Raatikainen, Christensen, & Nakajima, 2002).
Different software engineering techniques have
been used when dealing with them. In this way, we
will now review some of these techniques, as well
as how they can be applied in pervasive computing
systems. More precisely, we will discuss how the
component and plugin-based approaches can be
used to provide such systems with adaptability
and customization. In addition, we show how they
can be aware of changes in the context, through
the generation and notification of events.
Plug-in-Based Architectures
Applications based on the plug-in approach are
characterized by having a functional core with
well-defined hooks where extensions (i.e., plugins) can be dynamically plugged (see Figure
5) (Mayer, Melzer, & Schweiggert, 2002). The
functional core contains only the minimum set of
functionalities the application needs to run. Plugins, on the other hand, are intended to enhance
the application by adding features to it. Therefore,
plug-in-based applications can be executed even
when no extensions have been installed. Moreover, features that are not in use can be safely removed by unplugging the associated plug-in.
A more revolutionary view of plug-in-based
architectures is to consider everything as a plug-in. In this new form of plug-in architecture, the application becomes a runtime engine for managing each plug-in's life cycle. As a consequence,
end user functionalities are entirely provided by
means of plug-ins. For such kinds of application,
the extension of plug-ins through other plug-ins is
thus a fundamental feature (Birsan, 2005).
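As a rough illustration of this style (the names below are hypothetical and not taken from any system discussed in this chapter), the following Java sketch shows a functional core exposing a hook through which plug-ins can be dynamically plugged in and out:

```java
import java.util.ArrayList;
import java.util.List;

// Extension point: every plug-in implements this hook interface.
interface Plugin {
    String name();
    void execute();            // feature contributed by the plug-in
}

// Functional core: runs with or without plug-ins installed.
class CoreApplication {
    private final List<Plugin> plugins = new ArrayList<>();

    void plugIn(Plugin p)  { plugins.add(p); }      // dynamically add a feature
    void plugOut(Plugin p) { plugins.remove(p); }   // safely remove a feature

    void run() {
        System.out.println("Core functionality running");
        for (Plugin p : plugins) {
            System.out.println("Running extension: " + p.name());
            p.execute();
        }
    }
}

public class PluginDemo {
    public static void main(String[] args) {
        CoreApplication app = new CoreApplication();
        app.run();                                   // works even with no extensions
        app.plugIn(new Plugin() {
            public String name()  { return "tourist-guide"; }
            public void execute() { System.out.println("Suggesting landmarks"); }
        });
        app.run();                                   // now enhanced by the plug-in
    }
}
```

Because the core never depends on any concrete plug-in, it can start with an empty extension list and gain features only as plug-ins are installed.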
Plug-in-Based Architectures in Pervasive Computing
The application of the plug-in approach in pervasive computing systems provides them with
the needed characteristic of customization. Starting from a minimal but functional piece of software, users can gradually download the plug-ins that best support their daily activities. Take as an example an envi-
Event-Based Systems
An event-based system is one in which the
communication among some of its components is
performed by generating and receiving events. In
this process, initially a component fires an event,
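Although the details vary between systems, the general shape of event-based interaction can be sketched as follows; this is a minimal, illustrative Java example (all names are invented) in which components subscribe to a channel and a producer fires context events that are delivered to every subscriber:

```java
import java.util.List;
import java.util.concurrent.CopyOnWriteArrayList;

// A context-change event carried from producers to interested components.
class ContextEvent {
    final String type, value;
    ContextEvent(String type, String value) { this.type = type; this.value = value; }
}

interface ContextListener {
    void onEvent(ContextEvent event);   // called when an event is fired
}

// Minimal event channel: components register interest and producers fire events.
class EventChannel {
    private final List<ContextListener> listeners = new CopyOnWriteArrayList<>();

    void subscribe(ContextListener l) { listeners.add(l); }

    void fire(ContextEvent e) {         // notify every registered component
        for (ContextListener l : listeners) l.onEvent(e);
    }
}

public class EventDemo {
    public static void main(String[] args) {
        EventChannel channel = new EventChannel();
        channel.subscribe(e -> System.out.println("Adapting to " + e.type + " = " + e.value));
        channel.fire(new ContextEvent("location", "coffee shop"));
    }
}
```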
Jini
Jini is a middleware focused on service discovery
and advertisement. Jini services are advertised in
OSGi
The Open Services Gateway initiative (OSGi) is a
specification supporting the development of Java
service-based applications through the deployment of components known as bundles (Lee,
Nordstedt, & Helal, 2003). A major advantage of
the OSGi specification is that such bundles can
be installed, uninstalled, and updated without the
need to stop and restart the Java applications. In
the scope of pervasive computing, this is a fundamental feature, as it enables pervasive computing systems to adapt themselves in a completely
transparent way to their users.
The main idea behind OSGi is the sharing and
discovery of services. In other words, bundles are
able to advertise and discover services. When
advertising a service, a bundle can define a set
of key-value pairs representing the service's properties. Such properties can thus be useful for
other bundles, in order to discover the services
they need. These advertisement and discovery
processes are both performed through a registry,
managed by the OSGi implementation. In this
way, it is able to keep track of all services cur-
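A minimal sketch of this advertise-and-discover cycle, using the standard OSGi BundleContext API, might look as follows; the PrinterService interface, its properties, and the filter are illustrative assumptions rather than examples taken from the specification:

```java
import java.util.Hashtable;
import org.osgi.framework.BundleActivator;
import org.osgi.framework.BundleContext;
import org.osgi.framework.ServiceReference;

// Illustrative service interface advertised by one bundle and used by another.
interface PrinterService { void print(String document); }

public class PrinterBundle implements BundleActivator {

    @Override
    public void start(BundleContext context) throws Exception {
        // Advertise the service together with key-value properties
        // that other bundles can use when discovering it.
        Hashtable<String, Object> props = new Hashtable<>();
        props.put("location", "room-235");
        props.put("color", Boolean.TRUE);
        context.registerService(PrinterService.class.getName(),
                (PrinterService) doc -> System.out.println("Printing " + doc), props);

        // Discover a service through the registry, filtering on its properties.
        ServiceReference<?>[] refs = context.getServiceReferences(
                PrinterService.class.getName(), "(color=true)");
        if (refs != null) {
            PrinterService printer = (PrinterService) context.getService(refs[0]);
            printer.print("meeting-notes.pdf");
            context.ungetService(refs[0]);
        }
    }

    @Override
    public void stop(BundleContext context) { /* registration is released automatically */ }
}
```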
RCSM
Reconfigurable context-sensitive middleware
(RCSM) (Yau, Karim, Yu, Bin, & Gupta, 2002)
addresses context-awareness issues through an
object-based framework. The context-independent
information of an application is implemented in
programming languages such as C++ and Java.
The context-sensitive information is implemented
as an interface, using the context-aware interface
definition language (CA-IDL). This interface has
a mapping of what actions should be executed
according to each activated context. In this way,
the application logic is isolated from the context
specification.
SOCAM
Service-oriented context-aware middleware
(SOCAM) (Gu, Pung, & Zhang, 2004b) supports
the development of context-aware services where
ontologies are used for representing context
information. There are two types of ontologies: high-level ontologies describe generic, domain-independent concepts, such as person, activity, location, and device, while domain-specific ontologies define concepts concerning particular domains, such as the vehicle and home domains.
Aura
The main focus of Aura is to minimize the distraction of users by providing an environment in
which adaptation is guided by the user's context and needs. This project, developed at Carnegie Mellon University, has been applied in the
implementation of various applications, such as
a wireless bandwidth advisor, a WaveLan-based
people locator, an application which captures the
for this purpose, with the goal of enabling knowledge to be acquired, created, and disseminated
by transforming it from tacit to explicit forms
and vice-versa. One of the great advantages of
IT, in this case, is the possibility to transpose
the barriers of time and space. Based on this, it
is not hard to realize how pervasive computing
can aid in further transposing such barriers. With
pervasive computing technology, members of
an organization are able to establish meetings,
anytime and anywhere, with the goal of sharing their knowledge. In this scenario, video and
audio over the Internet could be used to give the
impression that the members are in a real meeting.
Therefore, people from geographically distributed organizations would not need to travel to meet
each other. Another use of pervasive computing
technology is for the generation of reports with
the intent of disseminating someone's knowledge.
With the possibility of having Internet connection
whenever needed, members of an organization
could prepare reports about a specific topic and
then distribute them to the others, no matter where
they are. Within this process, any needed document, either internal or external to the organization, could also be accessed, enabling knowledge
to be acquired, created, and disseminated in a
ubiquitous way.
Due to this possibility of extending the limits
of knowledge acquisition, creation, and dissemination, it is not surprising to see the first solutions
trying to combine pervasive computing and
knowledge management. An example is the work
of Will, Lech, and Klein (2004), which proposes
a tamagotchi-based solution for supporting mobile workers in finding relevant information for
the work they perform. Basically, this solution
works by enabling a mobile device to interact
with information suppliers in a seamless way,
through a continuous and proactive matching
between the information they provide and the one
needed by the mobile workers. Another work in
this category is the p-learning Grid, also known
Conclusion
In this chapter, we have discussed some concepts
surrounding pervasive computing. We have provided the reader with a high level understanding
of the pervasive computing paradigm at different
levels, and thus we have ranged from networking to software engineering issues. By using this
approach, we presented a broad vision related
to pervasive computing to aid researchers, students, and professors in research and teaching
activities.
Based on the concepts we have discussed,
it is possible to conclude that the application of pervasive computing in the real world is still in its infancy. Much effort is still required to bring the original vision of pervasiveness to real life. One could ask whether such a vision will ever be fully realized. This is, for now, an open question. Answering it will
require deep theoretical and, mainly, practical
studies. This is what researchers should focus
on, and thus, this work has been an introductory
theoretical contribution to this end. We believe
the concepts presented here will be built upon by other works, and will also be useful when developing
References
Adelstein, F., Gupta, S.K.S., Richard, G.G., &
Schwiebert, L. (2005). Fundamentals of mobile
and pervasive computing. McGraw-Hill.
Amann, P., Bright, D., Quirchmayr, G., & Thomas,
B. (2003). Supporting knowledge management
in context aware and pervasive environments
using event-based co-ordination. In Proceedings
of the 14th International Workshop on Database
and Expert Systems Applications, Prague, Czech
Republic.
Apostolopoulos, T.K., & Kefala, A. (2003). A configurable middleware architecture for deploying
e-learning services over diverse communication
networks. In V. Uskov (Ed.), Proceedings of the
2003 IASTED International Conference on Computers and Advanced Technology in Education
(pp. 235-240). Rhodes, Greece. Acta Press.
Bachman, F., Bass, L., Buhman, C., Dorda, S.C.,
Long, F., Robert, J., Seacord, R., & Wallnau, K.
(2000). Technical concepts of component-based
software engineering (vol. 2). Pittsburgh, PA, USA: Carnegie Mellon Software Engineering
Institute. Retrieved October 10, 2006, from
http://www.sei.cmu.edu/pub/documents/00.reports/pdf/00tr008.pdf
Bardram, J.E., & Christensen, H.B. (2001). Middleware for pervasive healthcare: A white paper.
In G. Banavar (Ed.), Advanced topic workshop:
Middleware for mobile computing, Heidelberg,
Germany.
Becker, C., Handte, M., Schiele, G., & Rothermel, K. (2004). PCOM: A component system for
pervasive computing. In Proceedings of the 2nd
IEEE International Conference on Pervasive
Garlan, D., Siewiorek, D., Smailagic, A., & Steenkiste, P. (2002). Project Aura: Toward distraction-free pervasive computing. IEEE Pervasive
Computing, 1(2), 22-31.
Will, O.M., Lech, C.T., & Klein, B. (2004). Pervasive knowledge discovery: Continuous lifelong
learning by matching needs, requirements, and
resources. In Proc. of the 4th International Conf.
on Knowledge Management, Graz, Austria.
Yau, S.S., Karim, F., Yu, W., Bin, W., & Gupta,
S.K.S. (2002). Reconfigurable context-sensitive
middleware for pervasive computing. IEEE Pervasive Computing, 1(3), 33-40.
Endnote
1
Interaction
Select one of the foothill projects presented on the
Ubiquitous Computing Grand Challenge Web site
and prepare one of the following items:
1.
2.
Questions
1.
2.
3.
UbiSoft pervasive software. Retrieved October 10, 2006, from http://www.it.lut.fi/kurssit/0405/010651000/Luennot/T2238.pdf
Kindberg, T., & Fox, A. (2002). System software
for ubiquitous computing. IEEE Pervasive Computing, 1(1), 70-81.
Landay, J.A., & Borriello, G. (2003). Design
This work was previously published in Ubiquitous and Pervasive Knowledge and Learning Management: Semantics, Social
Networking and New Media to Their Full Potential, edited by M. Lytras and A. Naeve, pp. 1-34, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 1.19
INTRODUCTION
Nowadays, we are experiencing an increasing use
of mobile and embedded devices. These devices,
aided by the emergence of new wireless technologies and software paradigms, among other technological advances, are providing the means to accomplish the vision of a new era in computer science. In this vision, the way we create and use computational systems changes drastically toward a model where computers lose their traditional appearance: their size is reduced, cables are replaced by wireless connections, and they become part of everyday objects, such as clothes, automobiles, and domestic appliances.
Initially called ubiquitous computing, this paradigm of computation is also known as pervasive
computing (Weiser, 1991). It is mainly characterized by the use of portable devices that interact
with other portable devices and resources from
wired networks to offer personalized services to
the users. While leveraging pervasive computing,
these portable devices also bring new challenges
to the research in this area. The major problems
arise from the limitations of the devices.
At the same time that pervasive computing was
gaining ground within the research community,
the field of grid computing (Foster, Kesselman,
& Tuecke, 2001) was also gaining visibility and
growing in maturity and importance. More than
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
restrict user mobility in the sense that users can interact with the system only where the infrastructure has been deployed.
Mobile/Wireless Grids
The main argument for mobile grids is to carry grids where cables do not reach (Chu & Humphrey, 2004; Kurkovsky & Bhagyavati, 2003; Park, Ko & Kim, 2003). The basic idea is to provide high performance where the wired infrastructure is not available, with clustering mobile devices as the last resort. In this context, software programs
are ready to use resources of wired grids as well
as switch to an unplugged mode, where mobile
devices collaborate to achieve better throughput
for distributed processing.
restrictions about the kind of data to be monitored. If large amounts of data need to be transferred to mobile devices, then this approach may be inadequate owing to reasons already discussed in this chapter. This approach may also be unsuitable in scenarios where users want real-time access to monitoring data. Depending on the size of the data, displaying a continuous stream may overwhelm mobile devices. The work of Taylor et al. (2003) also relates to this approach; they use a grid to simulate
the formation of galaxies. Mobile devices may
be used to monitor the workflow of grid services
involved in the process.
CONCLUSION
In this chapter we discussed various approaches
for combining grid and pervasive computing. The
approaches were grouped into categories that were
exemplified and criticized. The list is not exhaustive, but we believe it is sufficiently representative
of how research groups are attempting to blend
the technologies.
Resource constraints of mobile devices still
are the bottleneck for this integration. In the perspective of pervasive computing, massive search
applications (e.g., cryptographic key breakers)
are the best example of a class of applications with all the features needed for a successful integration between grids and pervasive computing: small pieces of data requiring intensive processing that produce more small pieces of data.
All efforts presented here are valid and, in well-defined scenarios, they show the technologies can be successfully integrated. The approaches discussed in this chapter have their own application niche and, hence, a space where they can evolve. Nevertheless, as long as small mobile devices remain so resource constrained, researchers will hardly have both the freedom and the tools they need to develop their ideas.
On the other hand, there are several problems
REFERENCES
Akogrimo. (2006). Akogrimo: Access to knowledge through the grid in a mobile world. Retrieved
April 8, 2006 from www.mobilegrids.org
Barratt, C., Brogni, A., Chalmers, M., Cobern,
W.R., Crowe, J., Cruickshank et al. (2003). Extending the grid to support remote medical monitoring, Proceedings of the 2nd UK e-Science All
Hands Meeting (AHM03), Nottingham, United
Kingdom.
Chu, D.C., & Humphrey, M. (2004). Mobile
OGSI.NET: Grid computing on mobile devices.
Proceedings of the 5th IEEE/ACM International
Workshop on Grid Computing (GRID04), (pp.
182-191). Pittsburgh, PA, USA.
Clarke, B., & Humphrey, M. (2002). Beyond the
device as portal: Meeting the requirements of
wireless and mobile devices in the legion grid
computing system. In Proceedings of the 2nd
International Workshop on Parallel and Distributed Computing Issues in Wireless Networks and
Mobile Computing (WPIM02), (pp. 192-199). Ft.
Lauderdale, FL, USA.
Davies, N., Friday, A., & Storz, O. (2004). Exploring the grid's potential for ubiquitous computing.
IEEE Pervasive Computing, 3(2), 74-75.
Phan, T., Hung, L., & Dulan, C. (2002). Challenge: Integrating mobile wireless devices into
the computational grid. In Proceedings of the
8th Annual International Conference on Mobile
Computing and Networking (MobiCom02), (pp.
271-278). Atlanta, GA, USA.
Srinivasan, S.H. (2005). Pervasive wireless grid
architecture. In Proceedings of the 2nd Annual
Conference on Wireless On-demand Network
Systems and Services (WONS05), (pp. 83-88).
St. Moritz, Switzerland
Taylor, I., Shields, M., Wang, I., & Philp, R. (2003).
Distributed P2P Computing within Triana: A galaxy visualization test case. In Proceedings of the
17th International Parallel and Distributed Processing Symposium (IPDPS03), Nice, France
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 66-75.
Key Terms
Context Information: Any relevant information regarding the environment and its users.
Smart spaces can use context information to
deliver personalized services to users.
Embedded Devices: An embedded device
is a special-purpose computer system, which is
completely encapsulated by the device it controls.
Examples of embedded devices include home
automation products, like thermostats, sprinklers
and security monitoring systems.
Grid Services: A kind of web service. Grid
services extend the notion of web services through
Endnotes
1
2
This work was previously published in the Encyclopedia of Networked and Virtual Organizations, edited by G.D. Putnik and
M.M. Cunha, pp. 1223-1229, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 1.20
Abstract
The advancement of technologies to connect
people and objects anywhere has provided many
opportunities for enterprises. This chapter will
review the different wireless networking technologies and mobile devices that have been developed,
and discuss how they can help organizations
better bridge the gap between their employees
or customers and the information they need. The
chapter will also discuss the promising application areas and human-computer interaction modes
in the pervasive computing world, and propose
a service-oriented architecture to better support
such applications and interactions.
Introduction
With the advancement of computing and communications technologies, people do not have to
sit in front of Internet-ready computers to enjoy
the benefit of information access and processing.
Pervasive computing, or ubiquitous computing,
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Mobile Communication Networks
Mobile communication technologies range from
personal area networks (PANs; a range of about 10
meters) and local area networks (a range of about
100 meters) to wide area networks (WANs; a few
kilometers). From a network-topology perspective,
most networks are based on a client-server model.
A few are based on the peer-to-peer model.
Wireless PANs
A wireless personal area network allows the
different devices that a person uses around a
cubicle, room, or house to be connected wirelessly. Such devices may include the computer,
personal digital assistants (PDAs), cell phone,
printer, and so forth.
Bluetooth is a global de facto standard for
wireless connectivity (Bluetooth SIG, 2005). The
technology is named after the 10th-century Danish
King Harald, who united Denmark and Norway
and traveled extensively.
Technology   Radio Frequency   Maximum Distance   Data Capacity
Bluetooth    2.4 GHz           10 meters          721 Kbps
HomeRF       2.4 GHz           50 meters          0.4-10 Mbps, depending on distance
ZigBee       2.4 GHz           75 meters          220 Kbps
create new requirements for the enterprise architecture to extend access to applications. However,
they do require security measures to make sure
the device that is receiving information is a recognized device. It also creates an opportunity for
the computing infrastructure to potentially know
where a particular device, and most likely the associated user, is located. How these are handled
will be discussed later in the description of the
proposed service-oriented architecture.
Wireless LANs
The set of technical specifications for wireless
local area networks (WLANs), labeled 802.11
by IEEE, has led to systems that have exploded
in popularity, usability, and affordability. Now
wireless LAN can be found in many organizations
and public places.
With a wireless LAN, a user's device is connected to the network through wireless access points (APs). APs are inexpensive (many are available for less than $100) and will usually work
perfectly with little or no manual configuration.
Wireless LANs use a standard, called IEEE
802.11, that provides a framework for manufacturers to develop new wireless devices. The first
two standards released for wireless LANs were
802.11b and 802.11a. The 802.11b standard was
used in most wireless devices in the early adoption
of wireless LAN. A new standard, called 802.11g,
combines data-transfer rates equal to 802.11a with
the range of an 802.11b network (Geier, 2002). It
uses access points that are backward compatible
with 802.11b devices.
Wireless technology has become so popular
that many new devices, especially laptop computers, have built-in wireless LAN capabilities.
Windows XP, Mac OS, and Linux operating
systems automatically configure wireless settings,
and software such as NetStumbler and Boingo
provides automatic connections to whatever
WLANs they encounter. What is more, community-based groups have furthered neighborhood
Standard   Radio Frequency   Maximum Distance   Data Capacity
802.11a    5 GHz             20 meters          54 Mbps
802.11b    2.4 GHz           100 meters         11 Mbps
802.11g    2.4 GHz           100 meters         54 Mbps

The table also includes an entry with varying radio frequency and distance and a data capacity above 100 Mbps, along with a security standard for encryption on wireless LANs and a standard security protocol for user authentication on wireless LANs.
Wireless MANs
A wireless metropolitan area network (MAN;
also referred to as broadband wireless access,
or WiMAX) can wirelessly connect business to
business within the boundary of a city. It is becoming a cost-effective way to meet escalating
business demands for rapid Internet connection
and integrated data, voice, and video services.
Wireless MANs can extend existing fixed
networks and provide more capacity than cable
networks or digital subscriber lines (DSLs). One
of the most compelling aspects of the wireless
MAN technology is that networks can be created
quickly by deploying a small number of fixed-base
stations on buildings or poles to create high-capacity wireless access systems.
In the wireless MAN area, IEEE has developed
the 802.16 standard (IEEE, 2005b), which was
published in April 2002, and has the following
features.
It addresses the first mile-last mile connection in wireless metropolitan area networks.
It focuses on the efficient use of bandwidth
between 10 and 66 GHz.
NLOS uses self-configuring end points that connect to a PC (personal computer). The end point
has small attached antennas and can be mounted
anywhere without the need to be oriented like
satellite antennas. Two major vendors are Navini
Networks and Nokia.
With the wireless MAN technology, enterprises can quickly set up a network to provide
wireless access to people in a certain area. It is
very useful in situations such as an off-site working session or meeting.
Wireless NANs
Wireless neighborhood area networks are community-owned networks that provide wireless
broadband Internet access to users in public areas (Schwartz, 2001). To set up a wireless NAN,
community group members lend out access to the
Internet by linking wireless LAN connections
to high-speed digital subscriber lines or cable
modems. These wireless LAN connections create
network access points that transmit data for up to
a 1-kilometer radius. Anyone possessing a laptop
or PDA device equipped with a wireless network
card can connect to the Internet via one of these
community-established access points.
Wireless NANs have been established in more
than 25 cities across the United States. Community-based networks differ from mobile ISPs
(Internet service providers) such as MobileStar
and Wayport that offer subscribers wireless access to the Internet from hotels, airports, and
coffee shops. Wireless NANs extend access to
consumers in indoor as well as outdoor areas,
and the access is typically offered at no charge.
For instance, NYC Wireless (http://www.nycwireless.net) provides Internet access to outdoor
public areas in New York City. In addition, this
organization is negotiating with Amtrak to bring
wireless Internet access to Penn Station.
Enterprises could leverage the existing wireless NANs and equip employees with the right
Wireless WANs
Wireless wide area networks are commonly
known as cellular networks. They refer to the
wireless networks used by cell phones.
People characterize the evolution of wireless
WAN technology by generation. First generation
(1G) started in the late 1970s and was characterized by analog systems. The second generation of
wireless technology (2G) started in the 1990s. It
is characterized by digital systems with multiple
standards and is what most people use today.
2.5G and 3G are expected to be widely available
1 to 3 years from now. 4G is being developed in
research labs and is expected to launch as early
as 2006.
Wireless WAN originally only offered voice
channels. Starting from 2G, people have used
modems to transmit data information over the
voice network. More recent generations offer
both voice and data channels on the same cellular network.
One of the major differentiating factors among
the wireless generations is the data transmission
speed in which the wireless device can communicate with the Internet. The table below is a comparison of the data transmission rates of the 2G,
2.5G, 3G, and 4G technologies (3Gtoday, 2005).
Both 2G and 2.5G include different technologies
with different data transmission rates. Global
Systems for Mobile Communications (GSM)
and Code Division Multiple Access (CDMA) are
2G technologies. General Packet Radio Service
(GPRS), CDMA 1x, and Enhanced Data for GSM
Environment (EDGE) are 2.5G technologies.
In the United States, cellular carriers Verizon
and Sprint use CDMA technology. Cingular uses
GSM, GPRS, and EDGE technologies. Both Verizon and Sprint have rolled out their CDMA 1x
Technology       Maximum
GSM (2G)         9.6 Kbps
CDMA (2G)        14.4 Kbps
GPRS (2.5G)      115 Kbps
CDMA 1x (2.5G)   144 Kbps
EDGE (2.5G)      384 Kbps
3G               2 Mbps
4G               20 Mbps
Ultrawideband (UWB)
Traditional radio-frequency technologies send
and receive information on particular frequencies,
usually licensed from the government. Ultrawideband technology sends signals across the entire
radio spectrum in a series of rapid bursts.
Ultrawideband wireless technology can transmit data at over 50 Mbps. A handheld device
using this technology consumes 0.05 milliwatts
of power as compared to hundreds of milliwatts
for today's cell phones. Ultrawideband signals
appear to be background noise for receivers of
other radio signals. Therefore it does not interfere
with other radio signals. Ultrawideband is ideal
for delivering very high-speed wireless-network
data exchange rates (up to 800 Mbps) across
relatively short distances (less than 10 meters)
with a low-power source.
(Table, continued: initial rates of < 28 Kbps, 32 Kbps, 64 Kbps, and < 128 Kbps, with typical rates of 28-56 Kbps, 32-64 Kbps, 64-128 Kbps, and 128-384 Kbps, respectively; the remaining initial and typical rates are listed as TBD.)
Sensor Networks
Motes (also called sensor networks or Smart Dusts;
Culler & Hong, 2004) are small sensing and communication devices. They can be used as wireless
sensors replacing smoke detectors, thermostats,
lighting-level controls, personal-entry switches,
and so forth. Motes are built using currently
•  A scanner that can scan and measure information on temperature, light intensity, vibrations, velocity, or pressure changes
•  A microcontroller that determines the tasks performed by the mote and controls power across the mote to conserve energy
•  A power supply that can be small solar cells or large off-the-shelf batteries
•  TinyOS, an open-source software platform for the motes. TinyOS enables motes to self-organize into wireless sensor networks.
•  TinyDB, a small database that stores the information on a mote. With the help of TinyOS, the mote can process the data and send filtered information to a receiver.
Pervasive Devices
Figure 1. Voice gateway connects the phone network with the data network (the voice gateway links the phone network, the Internet, and the enterprise network).
IP Phones
IP phones are telephones that use a TCP/IP (transmission-control protocol/Internet protocol) network for transmitting voice information. Since IP
phones are attached to the data network, makers of
such devices often make the screens larger so that
the phones can also be used to access data. What
makes IP phones pervasive devices is that a user
who is away from his or her own desk can come
to any IP phone on the same corporate network,
log in to the phone, and make the phone work as
his or her own phone. The reason for this is
that an IP phone is identified on the network by
an IP address. The mapping between a telephone
number and an IP address can be easily changed
to make the phone belong to a different user.
In terms of the information-access capability,
Cisco (http://www.cisco.com) makes IP phones
that can access information encoded in a special
XML format. Example applications on the phone
include retrieving stock quotes, flight departure
and arrival information, news, and so forth.
Pingtel (http://www.pingtel.com) developed
a phone that runs a Java Virtual Machine. This
makes the phone almost as powerful as a computer.
Mitel (http://www.mitel.com) made an IP
phone that allows a user to dock a PDA. With
this capability, users can go to any such IP phone,
Orbs operate via wireless pager networks under the command of a server. This server gathers
pertinent information from sources, including the
Web, condenses it to a simple value, and periodically sends the information to the orbs.
Orbs are currently available from several
retailers. The wireless service costs about $5 per
month per device. Ambient Devices (http://www.ambientdevices.com) sells orbs and provides the
communications service.
The information displayed by orbs is configurable. There are currently available data feeds for
stock-market movement and weather forecasts.
Application Scenarios
From an enterprise's perspective, the following application areas are where pervasive computing
brings business value.
Communication: Unified Communication and Instant Communication
With cell phones and pagers, it is not very hard
to keep mobile users in touch. But some pervasive communication technologies have reached a
higher level. Let us look at two such technologies:
unified communication and instant communication.
Unified communications refers to technologies
that allow users access to all their phone calls,
voice mails, e-mails, faxes, and instant messages
as long as they have access to either a phone or
a computer. With a computer, a software phone
allows the user to make or receive phone calls.
Voice-mail messages can be forwarded to the
e-mail box as audio files and played on the computer. Fax can be delivered to the e-mail box as
images. With a phone, a user can listen to e-mail
messages that the system would read using the
text-to-speech technology. A user can request a
fax to be forwarded to a nearby fax machine.
Unified communications services are offered by most traditional telecommunications
technology providers such as Cisco, Avaya, and
Nortel.
Sales-Force Automation
Salespeople are often on the road. It is important for them to have access to critical business
information anywhere at any time. Pervasive access to information increases their productivity
by using their downtime during travel to review
information about clients and prospects, about
the new products and services they are going to
sell, or to recap what has just happened during
a sales event when everything is still fresh in
their memory. Being able to use smart phones or
wireless PDAs to conduct these activities is much
more convenient for salespeople as opposed to
having to carry a laptop PC.
Dashboard or Project-Portfolio Management
For busy executives, it is very valuable for them to
be able to keep up to date on the dashboard while
they are away from the office and to take actions
when necessary. It is also very helpful for them
to be able to look at the portfolio of projects they
are watching, update information they have just
received during a meeting or conversation, and
take notes or actions about a specific project.
with the work, and get the next work order without
having to come back to the office. Mobile access
also reduces the amount of bookkeeping, which
requires a lot of manual intervention, and thus
reduces the chance of human errors.
Location-Based Services
A special type of pervasive application is location-based service. With wireless LANs, when a
mobile user is in the vicinity of an access point,
User-Interaction Models
In the context of pervasive computing, it is usually
inconvenient, if not impossible, for the user to
enter text using a regular keyboard. Sometimes,
it is also inconvenient for the user to read text.
Therefore, other input and output mechanisms
have to be employed.
Nontraditional input mechanisms include
speech recognition, gesture, touch screen, eye
gazing, software keyboard, and projection keyboard. Among these, a combination of speech recognition and pen-based touch-screen input is
most natural for most situations. This is also what
PDAs and tablet PCs typically offer.
Nontraditional output mechanisms include
converting text to speech and using sound,
blinking, and vibration to convey information
(as in ambient computing described earlier in
this chapter).
Multimodal interaction allows a user to choose
among different modes of input and output. For
mobile users, speech is typically the most convenient way for input, while visual means may still
be the most powerful way of seeing the output
(especially when the output includes pictures or
diagrams).
Kirusa (http://www.kirusa.com) has developed
technologies to support multiple levels of multimodal interaction. SMS multimodality allows users to ask a question in voice and have the answers
delivered to their mobile devices in the form of
an SMS message. Sequential multimodality allows users to use the interaction mode deemed
most appropriate for each step of the process.
Simultaneous multimodality lets users combine
different input and output modes at the same time.
For example, for driving directions, a user can
A Service-Oriented Architecture to Support Pervasive Computing
For an enterprise to leverage pervasive computing,
instead of deploying various point solutions, the
better way is to build an architecture that is well
positioned to support pervasive devices and usage.
In order to provide mobile users with maximum
access to enterprise information and applications
with customized interaction methods and work
flow, and at the same time minimize the extra cost
in supporting pervasive access, a service-oriented
architecture should be established.
The following picture shows a service-oriented
architecture that supports pervasive computing.
Let us look at this architecture from the top to
the bottom.
At the top of the architecture are the various users: smart phone, regular phone, wireless PDA, WebTV, and laptop users. Beneath them sits a layer of pervasive-specific engines: an access control engine, a content transformation engine, a location determination engine, and a session persistence engine, supported by security services, management and control services, and other technical services. At the bottom are the business services, including CICS services, stored procedure services, third-party services, and other business services.
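To make the role of this middle layer concrete, the following Java sketch (all class and method names are hypothetical) shows how a content transformation engine might render the same business content differently for each class of device:

```java
// Hypothetical sketch of the content transformation layer: the same business
// content is rendered differently depending on the requesting device.
enum DeviceType { SMART_PHONE, REGULAR_PHONE, WIRELESS_PDA, WEB_TV, LAPTOP }

class BusinessContent {
    final String title, body;
    BusinessContent(String title, String body) { this.title = title; this.body = body; }
}

interface ContentTransformationEngine {
    String render(BusinessContent content, DeviceType device);
}

class SimpleTransformationEngine implements ContentTransformationEngine {
    @Override
    public String render(BusinessContent c, DeviceType device) {
        switch (device) {
            case REGULAR_PHONE:               // voice gateway: text-to-speech prompt
                return "SPEAK: " + c.title + ". " + c.body;
            case SMART_PHONE:
            case WIRELESS_PDA:                // small screens: trimmed markup
                return "<card title='" + c.title + "'>" + c.body + "</card>";
            default:                          // WebTV and laptops: full HTML page
                return "<html><h1>" + c.title + "</h1><p>" + c.body + "</p></html>";
        }
    }
}

public class SoaDemo {
    public static void main(String[] args) {
        ContentTransformationEngine engine = new SimpleTransformationEngine();
        BusinessContent quote = new BusinessContent("Stock Alert", "ACME up 3%");
        System.out.println(engine.render(quote, DeviceType.SMART_PHONE));
        System.out.println(engine.render(quote, DeviceType.REGULAR_PHONE));
    }
}
```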
Future Directions
Moving forward, there needs to be much research
and development work on building a system
infrastructure that can use different sources of
information to judge where the user is, and what
devices and interaction modes are available to the
user during a pervasive session. This will enable
smarter location-based information push to better
serve the user.
A related research topic is how to smoothly
transition an interaction to a new device and interaction mode as the user changes locations and
devices. Some initial work on this subject, referred
to as seamless mobility, is being conducted at
IBM and other organizations.
Another area that deserves much attention is
the proactive delivery of information that users
will need based on their profiles and information
such as activities on their calendars or to-do lists.
This relates to previous research efforts on intelligent personal assistants with integration into the
pervasive computing environment.
References
3Gtoday. (2005). Retrieved November 5, 2005,
from http://www.3gtoday.com
Blackwell, G. (2002, January 25). Mesh networks:
Disruptive technology? Wi-Fi Planet. Retrieved
October 25, 2005, from http://www.wi-fiplanet.com/columns/article.php/961951
Bluetooth SIG. (2005). The official Bluetooth
wireless info site. Retrieved November 3, 2005,
from http://www.bluetooth.com
This work was previously published in Enterprise Service Computing: From Concept to Deployment, edited by R. Qiu, pp.
261-284, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 1.21
INTRODUCTION
The fast development of microelectronics has promoted an increase in the computational power of hardware components. At the same time, we are seeing significant improvements in energy consumption as well as a reduction in the physical size of such components. These improvements and the emergence of wireless networking technologies are enabling the development of small and powerful mobile devices. Due to this scenario,
the so-called pervasive computing paradigm, in-
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
SERVICE-ORIENTED COMPUTING
The service-oriented computing (SOC) paradigm
has been considered as the next step in distributed
computing (Papazoglou, 2003). In a general way,
this paradigm can be viewed as the development
of applications through the runtime integration of
software pieces known as services (McGovern,
Tyagi, Stevens, & Mathew, 2003). In this process,
PERVASIVE COMPUTING
The field of pervasive computing has its origins at
the Xerox Palo Alto Research Center. The pioneer
work that has been led there has culminated in
the novel article of Mark Weiser in 1991 (Weiser,
1991), where he describes the fist ideas of pervasive
computing. Weisers vision is at the same time
revolutionary and simple: a world where computing is embedded in everyday objects, like cars,
televisions, and air conditionings, all seamlessly
integrated into our lives and performing tasks for
us (Turban, Rainer, & Potter, 2005). When Weiser
talked about seamless integration, he meant that
applications running in these objects should
act proactively on our behalf. They should, for
SERVICE PROVISION APPROACHES IN PERVASIVE COMPUTING
When provisioning services in a pervasive environment, one aspect to be considered is the way it
is organized; that is, whether the environment is
based on a wired network infrastructure, whether
it is formed in an ad hoc way, or both. This is
necessary for dealing with the particularities of
each environment, and within this scope, we can
say that there are two major ways of performing
service provision (Nickull, 2005): the push-based
and the pull-based approach. In the next sections
we will outline each of these approaches as well
as describe how they fit into pervasive computing
environments.
Centralized Provision
The centralized service provision consists of scattering service registries across specific servers (i.e.,
registry servers) of the network. Therefore, for
advertising a service, the provider must initially
find which of these servers are available in the
Distributed Provision
In the distributed service provision, services are
advertised in registries located in each host. Undoubtedly, in this approach the advertising task is
easier to perform than in the other approaches, as it does not involve sending advertisements to central servers or directly to the other hosts. However, service discovery is more complicated, as it must be performed by querying each available host for the needed services. As no centralizing hosts
are necessary for advertising services, discovery
is possible whenever a client and a provider are
present in the network. An example of the distributed service provision approach is illustrated
in Figure 4. Initially, each host advertises its
services (Step 1). Once a client needs to perform
service discovery, in our example host A, it asks
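The following Java sketch illustrates the idea with in-memory objects rather than real network messages (all names are invented): each host keeps its own registry, and discovery simply inquires each reachable host until a provider of the needed service is found.

```java
import java.util.*;

// Each host advertises its services in its own local registry;
// discovery then queries every reachable host for the needed service.
class Host {
    final String name;
    private final Set<String> localRegistry = new HashSet<>();
    Host(String name) { this.name = name; }

    void advertise(String service)   { localRegistry.add(service); }   // local advertisement
    boolean provides(String service) { return localRegistry.contains(service); }
}

public class DistributedDiscoveryDemo {
    // Inquire each available host in turn until a provider is found.
    static Optional<Host> discover(String service, List<Host> reachableHosts) {
        return reachableHosts.stream().filter(h -> h.provides(service)).findFirst();
    }

    public static void main(String[] args) {
        Host a = new Host("A"), b = new Host("B"), c = new Host("C");
        b.advertise("printing");
        c.advertise("map-service");

        discover("printing", List.of(a, b, c))
                .ifPresent(h -> System.out.println("printing found on host " + h.name));
    }
}
```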
SERVICE ORIENTED TECHNOLOGIES
In this section we present the main technologies
related to the provision of services in pervasive
computing environments.
Jini
Jini is a service-oriented Java technology based on
a centralized pull-based approach (Waldo, 1999).
Therefore, service advertisements are stored in
central servers, which are named lookup servers.
Jini uses the RMI (http://java.sun.com/products/
jdk/rmi - Remote Method Invocation) protocol
for all interactions involved in the advertisement,
discovery, and invocation of services. When a
client discovers and binds to a service, the service is incorporated into the client by downloading the code of a proxy to the required service, named the remote
control object.
The Jini platform uses the concept of lease
for controlling the access to the services. A lease
is a sort of warrant that a client has for using a
service during a specific period of time. When
the lease expires the client needs to renew it
with the provider if it wishes to continue using
the service.
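A rough sketch of the client side of this interaction, using the core Jini lookup classes, is shown below; the lookup host name and the PrinterService interface are placeholders, and the lease-renewal machinery is omitted.

```java
import net.jini.core.discovery.LookupLocator;
import net.jini.core.lookup.ServiceRegistrar;
import net.jini.core.lookup.ServiceTemplate;

// Illustrative client-side lookup against a known Jini lookup server.
// PrinterService stands in for whatever remote service interface a provider
// registered; the proxy code is downloaded transparently to the client.
public class JiniClient {
    public static void main(String[] args) throws Exception {
        // Contact the lookup server where providers advertise their services.
        LookupLocator locator = new LookupLocator("jini://lookup-host");
        ServiceRegistrar registrar = locator.getRegistrar();

        // Ask for any service implementing the desired interface.
        ServiceTemplate template =
                new ServiceTemplate(null, new Class[] { PrinterService.class }, null);
        PrinterService printer = (PrinterService) registrar.lookup(template);

        // The returned object is the downloaded proxy (remote control object);
        // using it remains subject to the lease granted by the provider.
        printer.print("boarding-pass.pdf");
    }
}

interface PrinterService { void print(String document); }
```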
Bluetooth
Bluetooth is a standard for wireless communication among small devices within short distances
(Johansson, Kazantzidis, Kapoor, & Gerla, 2001),
defining higher-level protocols for both host and
service discovery (http://www.bluetooth.org). The
discovery of services in the Bluetooth standard is
defined by the service discovery protocol (SDP),
CONCLUSION
The service-oriented paradigm has proved to be
an important element in pervasive computing
systems, in order to provide anytime and anywhere
access to services. Its dynamic binding feature enables building applications with on-demand extensibility and adaptability, two important
elements of any pervasive system.
Given this trend, in this chapter we have
tried to present an overview of service provision
in pervasive computing environments. More
precisely, we have presented an introduction to the
main characteristics, challenges, and solutions
concerning the way that services are advertised,
discovered, and used in pervasive environments.
Although we presented concepts at an introductory level, we believe they may serve as a good
source of knowledge, helping both students and
researchers involved with these fields.
REFERENCES
Bellur, U., & Narendra, N. C. (2005). Towards service orientation in pervasive computing systems. In International Conference on Information Technology: Coding and Computing (Vol. II, pp. 289-295). Las Vegas, NV.
Costa, P., Coulson, G., Mascolo, C., Picco, G. P., & Zachariadis, S. (2005). The RUNES middleware: A reconfigurable component-based approach to networked embedded systems. In Proceedings of the 16th IEEE International Symposium on Personal Indoor and Mobile Radio Communications. Berlin, Germany: IEEE Communications Society.
Huhns, M. N., & Singh, M. P. (2005). Service
oriented computing: Key concepts and principles.
IEEE Internet Computing, 9(1), 75-81.
Johansson, P., Kazantzidis, M., Kapoor, R., &
Gerla, M. (2001). Bluetooth: An enabler for
Symposium on Distributed Objects and Applications (Vol. 3760, pp. 732-749). Agia Napa, Cyprus:
Springer Verlag.
Turban, E., Rainer, R. K., & Potter, R. (2005).
Mobile, wireless, and pervasive computing. Information Technology for Management: Transforming Organizations in the Digital Economy (pp.
167-206). New York: John Wiley & Sons.
Waldo, J. (1999). The Jini architecture for network-centric computing. Communications of the
ACM, 42(7), 76-82.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94-104.
KEY TERMS
Chapter 1.22
Abstract
Computer mediated communication (CMC)
provides a way of incorporating participant interaction into online environments. Use of such
features as discussion forums and chats enhance
collaborative work and learning. For many,
however, CMC may be an unfamiliar medium.
To ensure a successful CMC event, it is essential
to adequately prepare participants for CMC. A
proposed four-step model prepares participants for
CMC. The four steps include conducting a needs
and population analysis, providing an orientation
before the event and shortly after the event begins,
and providing continuing support.
Introduction
Computer mediated communication (CMC) provides interaction during online events, whether
synchronous or asynchronous. The modera-
Background
Moore and Kearsley (2005) cite one common misconception held by participants new to online learning: that learning online is less demanding than face-to-face learning. In fact, the
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Needs assessment
Population analysis
Continuing Support
Participants should be supported throughout the
course of the event. It is suggested that a discussion
forum be established for questions that the participants might have. Moderators should be vigilant
in monitoring participation and be prepared to
intervene if participants fail to contribute. It is
possible that a lack of participation may indicate
that participants are experiencing technical or
other difficulties.
Case Study
The techniques described above were used in
delivering an online training based on CMC
Continuing Support
The "Ask the Moderators" discussion forum was
included in all four weeks of activities. The mod-
Future Trends
As more organizations look to contain costs,
it is expected that even more of them will turn
to some form of CMC. As CMC draws more
participants, there is potential for problems if
these new users are not fully prepared for what
is required in a CMC environment. With more
people using Internet technology, including CMC,
it can be expected that less intensive preparation
may be required in the future. The nature of the
preparation may also change with the evolution
of technology and in the way CMC is delivered.
For instance, audio-based discussion boards now
exist allowing participants to record their postings
and responses to posts, rather than responding
by typing text.
Another future trend is the emergence of more
sophisticated modes of online communication.
One popular social online community is called
Second Life, a virtual reality gaming environment
that allows participants to create alternate identities and lifestyles through avatars. Community
members can buy and sell virtual products, services, and even real estate using virtual dollars
that can be exchanged for real money (Newitz,
2006). The success of these virtual communities
serves as a model for creating and sustaining an
online community. Companies such as Wal-Mart,
American Express, and Intel have been studying
interaction in Second Life in conjunction with
their corporate training objectives (Newitz, 2006).
Ohio University has been incorporating Second
Life into its formal online learning programs (Dewitt, 2007). Virtual communities such as Second
Life provide a more life-like form of interaction
than what is generally seen today in CMC.
The closer developers can get to replicating
face-to-face interaction, the more likely that CMC
Conclusion
Organizations are turning to CMC as a way of
meeting, collaborating, and learning. More people
are being introduced to CMC, including users who
are unfamiliar with the style of communication or
the technology involved. Participant acceptance
of CMC will grow if they are prepared for what
is encountered in a CMC environment.
A four-step process may be employed to prepare participants for CMC. These steps include
conducting a needs and population analysis on
the target audience, providing a participant orientation before the CMC event begins and during
the first week, and providing continuing support throughout the event. A systematic process
ensures that CMC developers and moderators
will understand their audience's strengths and
weaknesses and will enable them to develop an
effective deployment strategy. Communication
techniques found in CMC offer participants an
environment that more closely replicates face-to-face interaction with which they may be most
comfortable. Preparing participants for CMC is
critical in making participants comfortable with
communicating online.
References
Chou, C. (2001, Summer-Fall). Formative evaluation of synchronous CMC systems for a learnercentered online course. Journal of Interactive
Learning Research. Retrieved July, 31 2006, from
Infotrac College Edition.
Dewitt, D. (2007, January 29). Virtual-reality
software creates parallel campus, enhances educa-
Key Terms
Asynchronous: Online communication that does not occur at the same time.
Avatar: A computer-generated representation
of an online participant commonly found in virtual
reality environments.
Computer Mediated Communication
(CMC): Design that incorporates synchronous
and asynchronous online communication that
promotes interaction among participants. Such
features include discussion forums and online
chat features.
Needs Assessment: An analysis of the needs
and desires of a target audience. Methods of
conducting a needs assessment include survey
questionnaires, focus groups, and personal interviews.
Open Source: Programs that are developed
with an open license and distributed without
cost. Upgrades to programs and support are often
provided by the user community.
Population Analysis: An analysis of a targeted
audience focusing on its attributes, abilities, and
feelings.
Synchronous: Online communication that
occurs among participants simultaneously.
Chapter 1.23
Computer-Mediated Communication Research
J. D. Wallace
Lubbock Christian University, USA
Abstract
This chapter asks, "What is meant by computer-mediated communication research?" Numerous
databases were examined concerning business,
education, psychology, sociology, and social sciences from 1966 through 2005. A survey of the literature produced close to two thousand scholarly
journal articles, and bibliometric techniques were
used to establish core areas. Specifically, journals,
authors, and concepts were identified. Then, more
prevalent features within the dataset were targeted,
and a fine-grained analysis was conducted on
research-affiliated terms and concepts clustering
around those terms. What was found was an area
of scholarly communication, heavily popularized
in education-related journals. Likewise, topics
under investigation tended to be education and
Internet affiliated. The distribution of first authors
was overwhelmingly populated by one-time authorship. The most prominent research methodology
emerging was case studies. Other specific research
methodologies tended to be textually related, such
as content and discourse analysis. This study was
Introduction
Computer-mediated communication (CMC) involves a wide number of characteristics involving
human communication. It also includes systems,
methods, and techniques that are typical of online
environments. Therefore, one would rightfully
expect definitional difficulties both technological
and methodological. Wallace (1999) extensively
surveyed the literature concerning CMC and
found relatively few definitions. While differences abounded in the definitions found, the one
constant was the use of the computer as an intermediary device. The centrality of the computer
and communication layers human characteristics
and technological issues.
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Levinson (1990) suggests that in order to understand a device or a technique, not only should
we take a microscopic view through research
and examination, but we should also take a more
macroscopic view. A survey of the scholarly
communication might help provide a different
perspective. Hopefully, it would reveal some of
the larger areas of inquiry concerning online
research in general, and computer-mediated communication specifically. This macroscopic survey
would enable researchers and scholars to more
efficiently coordinate their own activities with
outlets and concepts that have the most pressing
need for their contributions. It has the additional
benefit of documenting CMC's developmental
features that can be compared with future research
or differing methodologies.
Similar to other such studies, the purpose of
this chapter is to provide an overview of the CMC
scholarly literature, and to identify its component features in providing a tangible means of
identification (Dick & Blazek, 1995, p. 291).
Likewise, it is not to determine the magnitude
that CMC occupies as a discipline, field, specialty,
or subspecialty area. For purposes of literary
description, the term field is not a cataloguing
designate, but rather a convenient moniker under
which CMC scholarship resides. CMC is often
described in the literature as a field. However,
designates of specialty, or subfield are probably
more accurate.
Simply put, the statement of the problem is:
what are trends in computer-mediated communication research? Definitions and descriptions
of current literature on the subject reflect views
that are selective and often disparate. Rather than
revisit debatable definitional issues, an arguably
more objective approach will be used as the focus
of this inquiry. Specifically, what authors, journals,
concepts, and research issues possibly populate
the CMC domain?
Certainly, a number of conceptual problems
would be introduced with any kind of predictive
examination (Hargittai, 2004). Therefore, ex-
Method
In order to produce a literature survey that spans
almost 40 years, two studies were combined. A
more recent analysis of CMC research extended
a previous CMC literature study (Wallace, 1999).
The previous study's data was collected in June
Data Analysis
Journals. The data file was examined in terms
of journal frequency. From this, the identity of CMC journals could be determined, along with how
they were positioned in terms of the overall literature as defined by this chapter. Subsequently,
Bradford-type partitions were derived to identify
the core journals.
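As a minimal sketch of the kind of Bradford-type partitioning involved (the journal names and counts below are made up), journals can be ranked by productivity and split into zones that each account for roughly one third of the articles:

```java
import java.util.*;

// Rough sketch (hypothetical data) of deriving Bradford-type partitions:
// journals are ranked by productivity and split into three zones that each
// account for roughly one third of the articles.
public class BradfordZones {
    public static void main(String[] args) {
        Map<String, Integer> articlesPerJournal = new HashMap<>();
        articlesPerJournal.put("Journal A", 120);
        articlesPerJournal.put("Journal B", 60);
        articlesPerJournal.put("Journal C", 35);
        articlesPerJournal.put("Journal D", 15);
        articlesPerJournal.put("Journal E", 10);
        // ... many more journals contributing a handful of articles each

        List<Map.Entry<String, Integer>> ranked = new ArrayList<>(articlesPerJournal.entrySet());
        ranked.sort((x, y) -> y.getValue() - x.getValue());      // most productive first

        int total = ranked.stream().mapToInt(Map.Entry::getValue).sum();
        int perZone = total / 3, running = 0, zone = 1;

        for (Map.Entry<String, Integer> e : ranked) {
            System.out.printf("Zone %d: %s (%d articles)%n", zone, e.getKey(), e.getValue());
            running += e.getValue();
            if (running >= zone * perZone && zone < 3) zone++;   // move to the next partition
        }
    }
}
```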
Results
The 1997 CMC study generated 611 unique article
references for 1997 and prior. Four hundred and
fifty-nine, or about 75%, tended to be education
related as indicated by their ERIC affiliation.
The current examination started from a more
mature 1997 dataset through 2005. It generated
Zones   Early Articles   Recent Journals   Recent Articles
1       195              25*               430
2       196              101               427
3       195              360               426
*11 or more articles
1997-2005
British Journal of Educational Technology
Business Communication Quarterly
CALICO Journal
CyberPsychology & Behavior
Distance Education
Distance Education Report
Indian Journal of Open Learning
Instructional Science
Internet and Higher Education
Journal of Adolescent & Adult Literacy
Journal of Computer Assisted Learning
Journal of Educational Technology Systems
Journal of Instruction Delivery Systems
Journal of the American Society for Information
Science
New Media & Society
Quarterly Review of Distance Education
Small Group Research
1997-2005 Authors
Abrams, Zsuzsanna Ittzes
Li, Qing
Arbaugh, J. B
MacDonald, Lucy
Baron, Naomi S
Riva, Giuseppe
Belz, Julie A
Rourke, Liam
Benbunan-Fich, Raquel
Saba, Farhad
Caverly, David C.
Savicki, Victor
Fahy, Patrick J
Selwyn, Neil
Flanagin, Andrew J.
Trentin, Guglielmo
Guguen, Nicolas
Tu, Chih-Hsiung
Hampton, Keith N.
Vrooman, Steven S.
Haythornthwaite, Caroline
Walther, Joseph B.
Herring, Susan C
Warnick, Barbara
Johnson, E. Marcia
Wellman, Barry
Kling, Rob
Wilson, E. Vance
Kock, Ned
Wolfe, Joanna
Lee, Lina
Term                              Difference   Freq 2005   Rank 2005   Freq 1997   Rank 1997
Computer-Mediated Communication   210%         960         1           457         1
Higher Education                  250%         510         2           204         2
Internet                          206%         287         3           139         3
Distance Education                322%         245         4           76          8
Computer-Assisted Instruction     198%         212         5           107         6
Foreign Countries                 268%         153         7           57          14
Educational Technology            317%         152         8           48          18
Computer Uses in Education        219%         151         9           69          10
Online Systems                    332%         146         10          44          19
Electronic Mail                   78%          107         12          137         4
Information Technology            139%         71          19          51          17
Computer Networks                 51%          70          20          136         5
Teleconferencing                  111%         69          21          62          12
Term                               Freq 2005   Rank 2005   Freq 1997   Rank 1997
Telecommunications                 175         6           100         7
Information Networks               142         11          72          9
Interpersonal Communication        93          13          67          11
Computer Applications              90          14          57          13
Adulthood                          83          15          53          15
Computers                          82          16          52          16
Experimental Theoretical           81          17          42          20
Group Dynamics                     78          18          42          21
[Table: frequencies of selected indexical terms in the 1997 and 2005 datasets. Terms: Adulthood, Communication, Collaborative learning, Distance education, Foreign countries, Instructional effectiveness, Interaction, Internet, Online systems, Research methodology, Student attitudes, Student evaluation. 1997 frequencies: 29, 38, 21, 17, 14, 5, 3, 7, 11, 9, 5, 12, 14, 21. 2005 frequencies: 37, 36, 32, 14, 28, 23, 11, 31, 15, 11, 14, 11, 17, 14.]
Case studies. Both studies included the indexical terms higher education, Internet, distance
education, foreign countries, computer-assisted
instruction, and electronic mail. The 2005 study
also had World Wide Web, instructional effectiveness, online systems, information technology, college students, literacy, and teacher role. The 1997
study was more writing oriented with computer
networks, teaching methods, writing research,
student attitudes, collaborative writing, technical
writing, and writing instruction.
Evaluation methods. Only higher education
and computer-assisted instruction appeared
prominently in both studies when evaluation methods was used as a filter. However, no other terms, except for the previously mentioned higher education, reached a level of predominance in the 1997 study. The 2005 conceptual set was relatively extensive, including the following terms:
Teaching methods
Teleconferencing
World Wide Web
Case studies
Case studies
Literacy
group discussion
computer literacy
graduate study
Tutoring
writing instruction
Communication research
Communication research
communication behavior
Organizational communication
Discourse analysis
interpersonal communication
classroom communication
Computers
Comparative analysis
Discourse analysis
microcomputers
interpersonal communication
Content analysis
Computer literacy
Research methodology
Evaluation methods*
Comparative analysis
learner controlled instruction
Survey
electronic mail
Content analysis
ethics
Research methodology
research needs
Use studies
computer use
Survey
undergraduates
Teleconferencing
technology utilization
foreign countries
Use studies
Literature reviews
Community
Literature reviews
tables data
Questionnaires.
*=not enough for 1997 analysis
Discussion
This chapter analyzed what is meant by computer-mediated communication research. While
a number of suitable answers exist for this question, it chose to let the field define itself as it
exists in the extant literature concerning business,
education, psychology, and the social sciences. In
this regard, almost 40 years of archival data was
surveyed. Wallace's study of 1997 and prior literature was combined with this 2005 examination
to offer a look at how CMC research is viewed
through a database analytic lens. Because of the
interdisciplinary nature of computer-mediated
communication (CMC), a parallel analysis of
multiple databases from different perspectives was
used (Ingwersen & Christensen, 1997; McLaughlin, 1994; Spasser, 1997). It should be noted that
as database sophistication increases regarding
education, business, and social life in general,
this kind of lens may become more crucial.
The descriptive nature of this study necessarily dictated a balance between rigor and latitude.
Several limitations must be considered in this
respect. These include generalization, design
limitations, and theoretical assumptions. First,
operationalization of the domain should clearly
impact the use of the findings. Therefore, results
from this analysis do not claim to identify characteristics of CMC beyond the domain examined.
Conclusion
This survey of computer-mediated communication literature revealed three interesting trends.
The first trend is a paradoxical turbulence and
stability common in literature surveys. This pattern was somewhat present for articles, journals,
authors, concepts, and research affiliations. The
total articles doubled while the number of core
journals remained relatively constant. Both the overall production of articles and the production of articles by the core journals increased by 217%. Despite this
increase, the number of core journals producing
those articles only advanced by 3, from 22 to
25.
The total number of core authors producing
more than two articles also had relatively little
growth, while there was a virtual turnover in
actual author names. Joe Walther was the only
author to emerge in both surveys. Core authorship increased by an anemic 30%, from 21 to 31.
When considering total articles produced, core
author production actually shrank from 13% in
the 1997 survey to 8.5% in the 2005 survey.
The top 21 terms in both studies accounted for
30% of the total indexical terms. Eight of those
terms were not reciprocal. Most of the unique
terms can be attributed to shifts in the more turbulent bottom half of the distribution. However,
Future research
Clearly, this study detailed CMC research as an
area that tended to be education and Internet
affiliated. Furthermore, the computer-mediated
communication literature prominently used a
number of textual analysis techniques, such as
content and discourse analysis. Noticeably absent
were articles focused on possible experimental
techniques, ethnographies, and focus groups.
This does not mean that these were not tools used
in the literature, merely that they were not the
focus of the more prominently presented research
articles (Schneider & Foot, 2004). Surveys also
had a surprisingly diminutive presence. Certainly,
there are both specific and extensive treatments
of online survey methodology (e.g., Andrews,
Nonnecke, & Preece 2003; Katz & Rice, 2002).
However, the current examination suggests a need
for this and other volumes to specifically localize
methodological issues relevant to computer-mediated and online communication.
With the exception of Joe Walther, core authorship had completely turned over. While Nicholls
concedes the robustness of the straight count, an
exhaustive identification of authorship might more
References
Borgman, C. L., & Rice, R. E. (1992). The convergence of information science and communication:
A bibliometric analysis. Journal of the American
Society for Information Science, 43(6), 397-411.
Burnham, J. F, Shearer, B. S., & Wall, B. C. (1992).
Combining new technologies for effective collection development: A bibliometric study using
CD-ROM and a database management program.
Bulletin of the Medical Library Association, 80,
150-156.
Callon, M., Courtial, J. P., & Laville, F. (1991). Coword analysis as a tool for describing the network
of interactions between basic and technological
research: The case of polymer chemistry. Scientometrics, 22(1), 155-205.
Cambrosio, A., Limoges, C., Courtial, J. P., &
Laville, F. (1993). Historical scientometrics?
Mapping over 70 years of biological safety research with co-word analysis. Scientometrics,
27(2), 119-14.
Courtial, J. P. (1994). A co-word analysis of scientometrics. Scientometrics, 31(3), 251-260.
Courtial, J. P., Callon, M., & Sigogneau, M. (1984).
Is indexing trustworthy? Classification of articles
through co-word analyses. Journal of Information
Science, 9, 47-56.
Cronin, B., & Overfelt, K. (1994). Citation-based
auditing of academic performance. Journal of
the American Society for Information Science,
45(2), 61-72.
Wallace, J. D. (1999). An examination of computer-mediated communication's scholarly communication. Unpublished doctoral dissertation,
University of Oklahoma.
key terms
Artifacts: Artifacts are any number of forms
of scholarly communication. Conceptually, they
could range from working papers to books.
Bibliographic Coupling: Bibliographic coupling is where two documents each have citations
to one or more of the same publication, but do not
have to necessarily cite each other.
Bibliometrics: The mathematical and statistical analysis of patterns that arise in the publication
and use of documents.
Bradford Partitions: These partitions are
used in library and information science to establish core journals. The process ranks journals
from most to least prolific in terms of number of
articles produced concerning a subject. They are
then divided into three or more zones that have
roughly the same number of articles. The zone containing the fewest, most prolific journals constitutes the core.
Appendix: 1997-2005
[Table: journals associated with each methodological term, with article counts. Terms include case studies, evaluation methods, communication research, literature reviews, comparative analysis, pilot project, content analysis, discourse analysis, survey, use studies, research methodology, and educational research; journals include Instructional Science, Journal of Educational Computing Research, Distance Education, Performance Improvement, ARIST, Written Communication, Internet Research, Educational Media International, CALICO Journal, The Journal of the American Society for Information Science, and British Journal of Educational Technology.]
This work was previously published in the Handbook of Research on Electronic Surveys and Measurements, edited by R.
Reynolds, R. Woods, and J. Baker, pp. 207-222, copyright 2007 by Information Science Reference, formerly known as Idea
Group Reference (an imprint of IGI Global).
Chapter 1.24
Computer-Mediated
Communication in Virtual
Learning Communities
Lisa Link
Flensburg University of Applied Sciences, Germany
Daniela Wagner
University of Hildesheim, Germany
CHARACTERISTICS OF CMC
CMC can be classified into two major groups:
asynchronous and synchronous CMC. The main
difference between these two types is temporal:
asynchronous CMC is time independent, that is, it
does not require that the communication partners
be simultaneously online, whereas synchronous
CMC takes place in real time or quasi real time,
requiring the telepresence of the communication
partners. E-mail, mailing lists, and discussion
forums are examples of asynchronous forms.
Chat rooms and shared whiteboards represent
synchronous forms of CMC.
A further classification of CMC is whether
it represents a one-to-one (1:1), one-to-many (1:n), or many-to-many (n:n) communication form.
Depending on their use, the different types of
CMC can fall into more than one category, for
example, e-mail and chat can represent both 1:1
and n:n communication. A topic of interest in this
context is the double function CMC can have: it can be used for individual communication but also for mass communication. This double function is particularly interesting in a learning setting, for example, in higher education. E-mail messages and discussion forum postings can
simultaneously fulfill two successive functions:
(1) interpersonal communication between two or
more participants and subsequently (2) serve as
an information pool for other participants. Chats
that have a protocol option can also be used as
an information pool for passive students. Fritsch
(1998) coined the term witness learning to describe
the indirect learning possibilities of learners who
do not actively take part in interactions, but learn
from witnessing the interactions of others. In
virtual learning environments, participants have
ranked witnessing (i.e., reading) the interactions
CMC THEORIES
For the effective use of CMC in educational
contexts, a variety of computer-mediated communication theories can provide insights into
selecting appropriate CMC tools as well as understanding their limitations. Prevalent theories can
be categorized into three large groups (Döring, 2003, p. 128):
In these courses, students are offered information on the theory and characteristics of CMC,
for example, in Web-based learning modules. The
virtual teams are free to choose which communication tools to use for the various tasks and phases
of their virtual teamwork. This has resulted in all
CMC tools being utilised. This combination of
theory, task-based teamwork, and joint reflection
phase has been rapidly accepted by the students
and the reflection phase, in particular, is seen by
them as a vital component of this concept.
For a successful transfer of this concept to
other courses, it is important to consider the
competencies required of students and instructors
in a virtual learning community.
CONCLUSION
In this article we presented a definition and classification of CMC. CMC is a phenomenon that is
studied by a wide variety of disciplines: linguists,
social psychologists, and computer scientists
have proposed approaches to help understand
the particularities and impacts of CMC. In addition to a discussion of selected CMC theories,
we presented a sensibilization concept for CMC
in higher education with the aim of helping students and instructors attain the key competencies
required of members of a virtual (learning) community. The dissemination of distributed virtual
REFERENCES
Crystal, D. (2001). Language and the Internet.
Cambridge, UK: Cambridge University Press.
Culnan, M. J., & Markus, M. L. (1987). Information technologies. In F. M. Jablin, L. L. Putnam,
K. H. Roberts, & L. W. Porter (Eds.), Handbook
of organizational communication: An interdisciplinary perspective (pp. 420-443). Newbury
Park, CA: Sage.
Daft, R., & Lengel, R. (1984). Information richness: A new approach to managerial behavior
and organization design. In B. Shaw & L. L.
Cummings (Eds.), Research in organizational
behavior, Vol. 6 (pp. 191-233). Greenwich, CT:
JAI Press.
Daft, R., & Lengel, R. (1986). Organizational
information requirements, media richness, and
structural design. Management Science, 32,
554-570.
Döring, N. (2003). Sozialpsychologie des Internet. Göttingen, Germany: Hogrefe.
Fritsch, H. (1998). Witness-learning. Pedagogical
implications for net-based teaching and learning.
In M. Hauff (Ed.), [email protected]?
Entwicklung, Gestaltung, Evaluation neuer Medien (pp. 123-152). Münster, Germany: Waxmann.
Fulk, J., Schmitz, J., & Steinfeld, C. (1990). A
social influence model of technology in use. In
J. Fulk & C. Steinfeld (Eds.), Organizations and
Key Terms
Blended Learning: Learning design that
combines various activities such as face-to-face
meetings, Internet-based learning modules, and
virtual learning communities.
Computer-Mediated Communication
(CMC): Communication between humans using
the computer as a medium.
Emoticons: A combination of punctuation
marks and other special characters from the
keyboard used to convey the tone of a computermediated communication message. For example,
the combination :-) depicts smiling.
Learning Platform: Software systems that
are used to deliver and support online teaching
and learning. Learning platforms manage access to
the platform and to learning materials and usually
include various communication tools.
Netiquette: Standard rules of courtesy and
correct behaviour on the Internet.
Witness Learning: A term coined by Dr.
Helmut Fritsch, senior researcher at the FernUniversität in Hagen, Germany, that refers to the
indirect learning possibilities of learners who do
not actively take part in interactions, but learn
from witnessing the interactions of others, for
example, in online discussion forums.
This work was previously published in the Encyclopedia of Virtual Communities and Technologies, edited by S. Dasgupta, pp. 4953, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.25
Human-Computer Interaction
and Security
Kai Richter
Computer Graphics Centre (ZGDV), Germany
Volker Roth
OGM Laboratory LLC, USA
INTRODUCTION
Historically, computer security has its roots in the
military domain with its hierarchical structures
and clear and normative rules that are expected to
be obeyed (Adams & Sasse, 1999). The technical
expertise necessary to administer most security
tools stems back to the time where security was
the matter of trained system administrators and
expert users. A considerable amount of money and
expertise is invested by companies and institutions to set up and maintain powerful security
infrastructures. However, in many cases, it is
the user's behavior that enables security breaches
rather than shortcomings of the technology. This
has led to the notion of the user as the weakest
link in the chain (Schneier, 2000), implying that
the user was to blame instead of technology. The
engineer's attitude toward the fallible human and the ignorance of the fact that technology's primary
BACKGROUND
With the spread of online work and networked
collaboration, the economic damage caused by
security-related problems has increased considerably (Sacha, Brostoff, & Sasse, 2000). Also, the
increasing application of personal computers,
personal networks, and mobile devices with their
support of individual security configuration can
be seen as one reason for the increasing problems
with security (e.g., virus attacks from personal
notebooks, leaks in the network due to personal
wireless LANs, etc.) (Kent, 1997). During the
past decade, the security research community
has begun to acknowledge the importance of the
human factor and has started to take research on
USER ATTITUDE
The security of a system is determined not only by its technical aspects but also by the attitude of the users of such a system. Dourish
et al. (2003) distinguish between theoretical
security (e.g., what is technologically possible)
and effective security (e.g., what is practically
achievable). Theoretical security to their terms
can be considered as the upper bound of effective
security. In order to improve effective security, the
everyday usage of security has to be improved. In
two field studies, Weirich and Sasse (2001) and
Dourish et al. (2003) explored users' attitudes to
security in working practice. The findings of both
studies can be summarized under the following
categories: perception of security, perception of
threat, attitude toward security-related issues, and
the social context of security.
Perception of security frequently is very inaccurate. Security mechanisms often are perceived
as holistic tools that provide protection against
threats, without any detailed knowledge about the
Users should be able to access security settings easily and as an integral part of the
actions, not in the separated fashion as it is
today; therefore, security issues should be
integrated in the development of applications
(Brostoff & Sasse, 2001; Gerd tom Markotten, 2002).
It is necessary that people can monitor and
understand the potential consequences of
their actions (Irvine & Levin, 2000) and that
they understand the security mechanisms
employed by the organization.
Security should be embedded into working
practice and organizational arrangement,
and visible and accessible in everyday physical and social environment (Ackerman &
Cranor, 1999).
Security should be part of the positive values
in an organization. So-called social marketing could be used to establish a security
culture in a company.
The personal responsibility and the danger
of personal embarrassment could increase
the feeling of personal liability.
The importance of security-aware acting
should be made clear by emphasizing the
relevance to the organization's reputation
and financial dangers.
As has been shown, the design and implementation of security mechanisms are closely
interlinked to the psychological and sociological
aspects of the users' attitude and compliance
toward the system. Any security system is in
danger of becoming inefficient or even obsolete if
it fails to provide adequate support and motivate
users for its proper usage. The following sections
discuss these findings in the context of the main
application domains of computer security.
AUTHENTICATION
Information technology extends our ability to
communicate, to store and retrieve information,
and to process information. With this technology
comes the need to control access to its applications for reasons of privacy and confidentiality,
national security, or auditing and billing, to name
a few. Access control in an IT system typically
involves the identification of a subject, his or her
subsequent authentication, and, upon success, his
or her authorization to the IT system.
The crucial authentication step generally is
carried out based on something the subject knows,
has, or is. By far the most widespread means of
authentication is based on what a subject has (e.g.,
a key). Keys unlock doors and provide access to
cars, apartments, and contents of a chest in the
attic. Keys are genuinely usable; four-year-olds
can handle them. In the world of IT, something
the subject knows (e.g., a password or a secret
personal identification number [PIN]) is the
prominent mechanism.
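As a generic illustration of how such knowledge-based authentication is commonly backed on the verifier's side (this is not a mechanism described in this article), the following Python sketch enrolls and verifies a password using salted, iterated hashing; the iteration count, salt length, and sample passwords are illustrative assumptions.

# Sketch: salted, iterated password hashing for knowledge-based authentication.
# A generic illustration; parameters and passwords are hypothetical.
import hashlib, hmac, os

def enroll(password: str):
    salt = os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return salt, digest          # store both; the password itself is discarded

def verify(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 600_000)
    return hmac.compare_digest(candidate, digest)   # constant-time comparison

salt, digest = enroll("correct horse battery staple")
print(verify("correct horse battery staple", salt, digest))  # True
print(verify("12345", salt, digest))                          # False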
The exclusiveness of access to an IT system
protected by a password rests on the security of
the password against guessing, leaving aside other
technical means by which it may or may not be
broken. From an information theoretic standpoint,
a uniformly and randomly chosen sequence of
letters and other symbols principally provides
the greatest security. However, such a random
sequence of unrelated symbols also is hard to
E-MAIL SECURITY
Before the middle of the 1970s, cryptography was
built entirely on symmetric ciphers. This meant
that in order for enciphered communication to
take place, a secret key needed to be exchanged
beforehand over a secure out-of-band channel.
One way of doing that was to send a trusted
courier to the party with whom one intended to
communicate securely. This procedure addressed
two important issues: the secret key exchange
and the implicit authentication of the exchanged
keys. Once established, the keys could be used to
secure communication against passive and active
attacks until the key was expected to become or
became compromised.
When asymmetric cryptography (Diffie &
Hellman, 1976; Rivest, Shamir, & Adleman,
1978) was invented in the 1970s, it tremendously
simplified the task of key exchange, and gave
birth to the concept of digital signatures. Asymmetric cryptography did not solve the problem
of authenticating keys per se. Although we now
can exchange keys securely in the clear, how
could one be certain that a key actually belonged
to the alleged sender? Toward a solution to this
problem, Loren Kohnfelder (1978) invented the
public key certificate, which is a public key and
an identity, signed together in a clever way with
the private key of a key introducer whom the
communicating parties need to trust. This idea
gave rise to the notion of a public key infrastructure (PKI). Some existing models of public key
infrastructures are the OpenPGP Web of Trust
model (RFC 2440) and the increasingly complex
ITU Recommendation X.509-based PKIX model
(RFC 3280) (Davis, 1996; Ellison, 1996, 1997;
Ellison & Schneier, 2000).
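The key-exchange step that asymmetric techniques simplify can be sketched in a few lines; the following hypothetical example uses the pyca/cryptography library to perform a Diffie-Hellman-style agreement with X25519. It deliberately omits the authentication of the exchanged public keys (certificates, PKI), which the surrounding text identifies as the real difficulty for users.

# Sketch: Diffie-Hellman-style key agreement with X25519 (pyca/cryptography).
# Only the key-exchange step is shown; certificate and trust handling is omitted.
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives import hashes

alice_priv = X25519PrivateKey.generate()
bob_priv = X25519PrivateKey.generate()

# Each party combines its private key with the other's public key.
alice_shared = alice_priv.exchange(bob_priv.public_key())
bob_shared = bob_priv.exchange(alice_priv.public_key())
assert alice_shared == bob_shared

# Derive a symmetric session key from the raw shared secret.
session_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"mail session").derive(alice_shared)
print(session_key.hex())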
In applications such as electronic mail, building trust in certificates, exchanging keys, and
managing keys account for the majority of the
interactions and decisions that interfere with the
goal-oriented tasks of a user and that the user has
difficulty understanding (Davis, 1996; Gutmann,
SYSTEM SECURITY
Computer systems progressed from single user
systems and multi-user batch processing systems
to multi-user time-sharing systems, which brought
the requirement to share the system resources
and at the same time to tightly control the resource
allocation as well as the information flow within
the system. The principal approach to solving this
is to establish a verified supervisor software also
called the reference monitor (Anderson, 1972),
which controls all security-relevant aspects in
the system.
However, the Internet tremendously accelerated the production and distribution of software,
some of which may be of dubious origin. Additionally, the increasing amounts of so-called malware
that thrives on security flaws and programming
errors lead to a situation where the granularity
of access control in multi-user resource-sharing
systems is no longer sufficient to cope with the
imminent threats. Rather than separating user
domains, applications themselves increasingly
must be separated, even if they run on behalf of
the same user. A flaw in a Web browser should
not lead to a potential compromise of other applications and application data such as the user's
e-mail client or word processor. Despite efforts
to provide solutions to such problems (Goldberg
et al., 1996) as well as the availability of off-the-shelf environments in different flavors of Unix,
fine-grained application separation has not yet
been included as a standard feature of a COTS
operating system.
Even if such separation were available, malicious software may delude the user into believing,
for example, that a graphical user interface (GUI)
component of the malware belongs to a different
trusted application. One means of achieving this
is to mimic the visual appearance and responses
of the genuine application. One typical example
would be a fake login screen or window. Assurance that a certain GUI component actually
belongs to a particular application or the operating system component requires a trusted path
between the user and the system. For instance,
a secure attention key that cannot be intercepted
by the malware may switch to a secure login
window. While this functionality is available in
some COTS operating systems, current GUIs still
provide ample opportunity for disguise, a problem
that also is eminent on the Web (Felten, Balfanz,
Dean, & Wallach, 1997). One approach to solving
this problem for GUIs is to appropriately mark
windows so that they can be associated with their
parent application (Yee, 2002). One instance of a
research prototype windowing system designed
with such threats in mind is the EROS Trusted
Window System (Shapiro, Vanderburgh, Northup,
& Chizmadia, 2003).
FUTURE TRENDS
Mobile computing and the emergence of context-aware services are progressively merging into
new and powerful services that hold the promise
of making life easier and safer. Contextual data
CONCLUSION
The view of the user as the weakest link and
potential security danger finally has turned out
to be an obsolescent model. Security engineers
and perhaps, more importantly, those people who
are responsible for IT security have noticed that
working against the user will not do, and instead,
they have decided to work with and for the user.
During the past years, an increasing amount of
research has focused on the issue of making security usable, addressing the traditional fields of
authentication, communication, and e-mail and
system security. This article has given a brief
overview of some of the work done so far. In order
to make information technology more secure,
the user is the central instance. The user must
be able to properly use the security mechanisms
provided. Therefore, understanding users' needs
REFERENCES
Anderson, J. P. (1972). Computer security technology planning study (No. ESD-TR-73-51). Bedford,
MA: AFSC.
Brostoff, S., & Sasse, M. A. (2000). Are passfaces
more usable than passwords? A field trial investigation. In S. McDonald, Y. Waern, & G. Cockton
(Eds.), People and computers XIV - Usability
or else! Proceedings of HCI 2000, Sunderland,
UK.
Brostoff, S., & Sasse, M. A. (2001). Safe and
sound: A safety-critical design approach to security. Proceedings of the New Security Paradigms
Workshop, Cloudcroft, New Mexico.
Colville, J. (2003). ATM scam netted 620,000
Australian. Risks Digest, 22, 85.
Irvine, C., & Levin, T. (2000). Towards quality of secure service in a resource management
system benefit function. Proceedings of the 9th
Heterogeneous Computing Workshop.
KEY TERMS
Asymmetric Cryptography: A data encryption system that uses two separate but related
encryption keys. The private key is known only to
its owner, while the public key is made available in
a key repository or as part of a digital certificate.
Asymmetric cryptography is the basis of digital
signature systems.
Public Key Infrastructure (PKI): The public
infrastructure that administers, distributes, and
certifies electronic keys and certificates that are
used to authenticate identity and encrypt information. Generally speaking, PKI is a system of
digital certificates, certification authorities, and
registration authorities that authenticate and verify
the validity of the parties involved in electronic
transactions.
Shoulder Surfing: The practice of observing
persons while entering secret authentication information in order to obtain illegal access to money
or services. This often occurs in the context of
PIN numbers and banking transactions, where
shoulder surfing occurs together with the stealthy
duplication of credit or banking cards.
Social Engineering: The technique of exploiting the weakness of users rather than software by
convincing users to disclose secrets or passwords
by pretending to be authorized staff, network
administrator, or the like.
Spoofing: The technique of assuming or mimicking a false identity in the network. This can be used to pretend to be a trustworthy Web site and motivate users to enter banking information, to pretend to be an authorized instance that requests the user's password, or to make users accept information that is believed to come from a trusted instance.
Types of Authentication: Authentication
generally can be based on three types of informa-
This work was previously published in the Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 287-294,
copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 1.26
Abstract
Taking typical ubiquitous computing settings as a
starting point, this chapter motivates the need for
security. The reader will learn what makes security
challenging and what the risks predominant in
ubiquitous computing are. The major part of this
chapter is dedicated to the description of sample
solutions in order to illustrate the wealth of protection mechanisms. A background in IT security
is not required as this chapter is self-contained. A
brief introduction to the subject is given as well
as an overview of cryptographic tools.
Introduction
Mark Weiser's vision of ubiquitous computing
(UC) raises many new security issues. Consider a
situation where a large number of UC peers interact
in a spontaneous and autonomous manner, without
Four UC Settings
In order to pave the way for the development
of a systematic view on UC characteristics and
limitations in the section A Taxonomy of UC
Security, we sketch four representative settings.
Each setting exhibits one or more security-related
properties. They are termed mobile computing,
ad hoc interaction, smart spaces, and real-time
enterprises.
Mobile Computing
Mobile computing supports mobile users with
connectivity and access to services and backend
systems while being on the move. A synonymous
term is nomadic computing, emphasizing the
goal of providing a working environment more
or less equivalent to that of a desktop user. The
widespread availability of cellular networks and
802.11 WiFi allows a field worker to connect
to an arbitrary service on the Internet or to the
company's backend at almost any place and at
any time.
Mobile computing relies on a given infrastructure managed by a provider, for example, a cellular
network company. This fact has implications for
security: In order to access a service, a user needs
to register with a provider. Thus, the user group
is closed and the provider controls access to the
infrastructure. In addition, users are not able to
act in an anonymous manner.
Mobile devices can easily get lost, for example
left behind in the proverbial taxi (see http://www.
laptopical.com/laptops-lost-in-taxi.html). In case
of theft, an attacker might be able to impersonate
the legitimate device owner or learn her private
data like business contacts or personal email. This
physical threat is given whenever mobile devices
are considered.
Ad Hoc Interaction
In contrast to mobile computing, the second setting
does not rely on an infrastructure provider. Instead
of that, UC devices build the infrastructure on
their own by establishing temporary, wireless, and
ad hoc communication links between them. On
the application layer, they expose a spontaneous
interaction behavior. A typical characteristic is
the lack of a central instance allowing or restricting participation. A priori, there are no given or
managed user groups; all devices are free to join.
Plus, users and devices might act anonymously.
Here we illustrate a collaboration scenario
based on spontaneous interaction. This type of
communication is typical for Opportunistic Networks which are discussed in depth in the chapter
Opportunistic Networks.
(Straub & Heinemann, 2004), while being co-located. After an initial configuration, devices interact autonomously and without users' attention. Information dissemination is controlled by profiles stored on the users' devices. Such a profile expresses a user's interest in and knowledge about
some pieces of information to share.
The particularities of ad hoc interaction pose
numerous security challenges. On the one hand,
devices do not already know each other when they
start to communicate. On the other hand, personal
data is kept on the devices and exchanged with
strangers. As a consequence, privacy is inherently
at risk if systems are not designed carefully.
Smart Spaces
Smart spaces, which form our third UC setting,
emphasize user-friendliness and user empowerment as well as support for human interactions.
Interaction within a smart space happens in an
unobtrusive way. The use of contextual information (see chapter Context Models and Context
Awareness) also plays an important role here.
Sometimes, it is assumed that users carry some
type of digital identification and/or other devices
with or on them.
Due to the sensing and tracking capabilities
of a smart space, user privacy is at stake. Location privacy is an important field of UC research;
an overview is given by Görlach, Heinemann, Terpstra, and Mühlhäuser (2005). In addition, due
to the volatile nature of smart spaces, concepts
like trust (see Chapter 15) and reputation play
an important part in these kinds of applications.
We take patient monitoring in a hospital as an
example to illustrate a smart space.
Real-Time Enterprises
Real-time enterprises, which are defined in the
preface of this book, are an effort to leverage UC
technology and methods within enterprises. A
driving force behind these efforts is the goal of having immediate access to comprehensive and up-to-date information about processes and procedures
within an enterprise. This allows management to
react very flexibly to variances in the market and
to increase customer support and satisfaction. For
example, an enterprise with detailed real-time
information on all production steps, including
delivery statuses of subcontractors, can provide
a customer with very accurate information on
when an order will be delivered.
A Taxonomy of UC Security
The beginning of this section provides a compact
introduction to IT security by explaining the
common objectives and the threats to them. We
then formulate two views on UC security issues.
In section First View: UC Characteristics and
These objectives can be achieved by cryptography as we will see below. In this respect, the
next objective is different as it typically requires
noncryptographic efforts as well.
Threat Modeling
Having discussed basic terminology and objectives, we now turn to a common abstract network
model to describe security threats: Two parties
exchange messages over a channel to which an
attacker has access, too. This notion captures
any kind of attackers (computer systems, individuals, organizations) and is independent of the
data transport medium itself. There is no need
to differentiate between the transmission and
the storage of data as the latter can be seen as a
special case of the model.
We follow the security community's convention in using the names Alice and Bob for the
legitimate actors (instead of simply numbering
them serially) and in calling the attacker Mallory
(malicious). Mallory may change data Alice
sends to Bob, may generate her own messages
under the name of another person, or simply
eavesdrop on their connection. An attack of the
latter kind is called passive while the other two are
called active. A passive attacker can compromise
confidentiality at best, but an active one, who is also called a man-in-the-middle (MITM), targets all CIAA goals. Acting as a MITM, Mallory sits in the middle of the communication link, making Alice believe she is Bob and spoofing Bob into believing she is Alice.
Attacks are typically directed toward more
than one of the before-mentioned security objectives. For instance, Mallory may launch a DoS
attack in order to paralyze the systems defense
mechanisms.
The risk that a system may become compromised is proportional to both its vulnerabilities
and the threats acting upon it. Risks have to be
identified and rated in the light of the corresponding asset's value. Threat perception can be very
subjective: While one individual cares about data
emitted by her UC device, which might be linked
back to her, another does not. Furthermore, not all
objectives are equally important in practice. For
instance, a bank primarily has a vital interest in
keeping its account data's integrity, but a research lab emphasizes confidentiality. The choice of algorithms and security parameters also takes into consideration the (assumed) attacker's strategy and resources,
especially computational power.
hoc manner as illustrated in the ad hoc interaction setting; other applications might ask for a
wireless multi-hop communication (see sensor
networks in the chapter Wireless and Mobile
Communication). Wireless communication
makes eavesdropping very easy as radio signals are usually emitted in all directions. They can be received by anyone in the sender's vicinity
without her noticing it. MITM attacks are feasible
in the case of multi-hop wireless communication and they do not even require the attacker
to be physically close to the victim. In addition,
wireless ad hoc communication bears the risk of
impersonation, that is, an attacker might be able
to steal a peer's credential by eavesdropping and
use it to access a certain service.
The pervasive nature of UC introduces even
more risks. Sensor nodes or RFID tags, for example, are physically exposed, unmanaged, and
unsupervised. This bears the risks of device
and/or data theft as well as device manipulation.
As a consequence, access to the data stored on
the device must be carefully protected in order to
prevent identity theft (Eckert, 2005). The BlackBerry PDA, for instance, is a centrally manageable system that supports a remote kill command to erase data on a stolen device (Greiner, 2006).
UC devices are often battery-powered, which
allows for a DoS attack called sleep deprivation
torture (Stajano & Anderson, 1999): By constantly sending requests to a device, an attacker
can quickly drain its battery, thus rendering the
device useless. Last but not least, the UC settings
offer the capability of tracing objects or humans.
This feature is useful in many UC applications like
the real-time enterprise setting, but may violate
users privacy. Networked sensors, for instance,
may gather a good deal of personal information
that can be used to build user profiles.
The characteristics in UC and their corresponding risks are summarized in Table 1. In the
section Overview of Cryptographic Tools we
cover appropriate countermeasures.
Table 1. UC characteristics and their corresponding risks

Characteristic        Risk
wireless              eavesdropping
ad hoc                impersonation
multi-hop             man-in-the-middle attacks
physical exposure     device/data theft, manipulation
battery-powered       sleep deprivation torture
traceability          privacy violation
Overview of Cryptographic
Tools
This section provides a compact overview of
cryptographic tools which are suitable to enforce
the security objectives introduced in the section
A Taxonomy of UC Security. It serves as a basis for the following discussion of the solutions
addressing UC characteristics and challenges.
Cryptographic primitives, namely symmetric and
public key cryptosystems, hash functions, and
authentication schemes, are the building blocks of
more complex techniques and protocols. We also
introduce some general rules of cryptography and
reason about security parameters. At the end of this
section we turn to the potentials and limitations
of cryptography in a UC setting. Readers with
previous knowledge in the area of cryptography
may directly proceed to section Potential and
Limitations of Cryptography in UC.
Symmetric Cryptosystems
Messages can be encrypted to prevent disclosure
of their content and ensure confidentiality. Encryption is a transformation that renders plaintext, that
is, the original message in its readable form, into
ciphertext which is unintelligible to an outsider.
The reverse transformation is called decryption.
In order to give Alice and Bob a competitive
edge, there must be some information Mallory
[Table fragment: corresponding challenges include entity authentication and policy decision, algorithm implementation and protocol design, and a trusted path.]
On this occasion, we exhibit a problem common to all symmetric ciphers: The use of shared
keys implies a secure key distribution step before
the actual communication takes place. Key distribution is often a problem as it needs out-of-band
mechanisms, that is, an additional communication channel not accessible to the attacker. We
will come back to this issue in the context of UC
in the section Out-of-Band Channels. Another
downside of shared keys is that the risk grows in proportion to the number of group members. If one device falls into Mallory's hands, she will be able to read all messages of the group. A real-world example for the use of shared keys is 802.11 WEP (Wired Equivalent Privacy; see, for example, Eckert, 2006).
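A minimal sketch of symmetric encryption with a previously shared key, here using AES in an authenticated mode via the pyca/cryptography library rather than any cipher named in the text; how Alice and Bob obtained the shared key out-of-band is exactly the key-distribution problem discussed above.

# Sketch: symmetric encryption with a previously shared key (AES-GCM).
# The key is assumed to have been distributed out-of-band beforehand.
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

shared_key = AESGCM.generate_key(bit_length=128)   # distributed beforehand
nonce = os.urandom(12)                              # must never repeat per key

ciphertext = AESGCM(shared_key).encrypt(nonce, b"meet at noon", None)
plaintext = AESGCM(shared_key).decrypt(nonce, ciphertext, None)
assert plaintext == b"meet at noon"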
Hash Functions
In order to ensure data integrity, some kind of
redundancy has to be added to the payload. A
modification detection code (MDC) is a hash
algorithm, that is, a function that compresses
bitstrings of arbitrary finite length to bitstrings
of fixed length. State-of-the-art algorithms like
RIPEMD-160 (Dobbertin, Bosselaers, & Preneel, 1996) or SHA-1 (Eastlake & Jones, 2001) produce
outputs with a length of 160 bit, which should be
considered the lower bound due to security reasons. To be useful for integrity protection, a hash
function has to be 2nd preimage resistant: Given
an input x that hashes to h(x), an attacker must not
be able to find a value y ≠ x such that h(y) = h(x).
Such a pair of different inputs that result in the
same hash value is called a collision. Collisions
always exist due to the pigeon-hole principle since
the co-domain is finite while the domain is infinite.
Cryptographic hash functions must be collisionresistant, that is, finding concrete collisions must
be computationally infeasible. This allows us to
protect data x of arbitrary length against modifications by storing h(x), a small piece of information,
in a safe place. A widespread application of this
mechanism is integrity-protected software distribution via download mirror sites on the Internet.
Flipping a single bit in x results in a hash value
that differs in about half of the bits from h(x) if h
is one of the common hash functions.
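A minimal sketch of this integrity-protection idea, using SHA-256 and invented data in place of a real download; the small value h(x) stands for the digest stored in a safe place, and the last lines illustrate how flipping a single bit changes the digest completely.

# Sketch: protecting data of arbitrary length with a small hash value h(x)
# kept in a safe place, as in integrity-protected software downloads.
# SHA-256 is used here; the data is a hypothetical stand-in for a download.
import hashlib

original = b"firmware image v1.2"           # data published by the vendor
h_x = hashlib.sha256(original).hexdigest()  # small value stored safely

downloaded = b"firmware image v1.2"         # what actually arrived
ok = hashlib.sha256(downloaded).hexdigest() == h_x
print("integrity verified" if ok else "file was modified in transit")

# Flipping a single bit yields a completely different digest.
tampered = bytes([original[0] ^ 0x01]) + original[1:]
print(hashlib.sha256(tampered).hexdigest())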
Message authentication codes (MACs) are
hash functions that are additionally parameterized
with a secret key. Assume that Alice and Bob share
a key k, which for instance has been established in
a key exchange scheme as explained below. Alice
adds a MAC h_k(x) to her message x to Bob. From the pair (x, h_k(x)) Bob can tell that the message came from her since it is infeasible for Mallory to create a pair (y, h_k(y)) without knowing k. Remember that an MDC does not have this property, since Mallory may change x to y and compute the (unkeyed) hash function h(y). MACs in turn not only provide data origin authentication, but also integrity, as Bob could detect a modification x′ ≠ x because the received h_k(x) would not match the value h_k(x′) he computed himself. Each MDC h
can be extended to a MAC in the following way:
On input x, compute h(k || p1 || h(k || p2 || x)) where
k is the key, p1,p2 are constant padding strings,
and || denotes concatenation. This generic construction is called HMAC (hash-based MAC),
see Krawczyk, Bellare,and Canetti, 1997 for
implementation and security details.
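The HMAC construction just described is available directly in Python's standard library; a minimal sketch with a hypothetical key and message:

# Sketch: message authentication with HMAC, the keyed construction
# h(k || p1 || h(k || p2 || x)) described above, instantiated with SHA-256.
import hmac, hashlib, os

k = os.urandom(32)                 # shared key established beforehand
x = b"transfer 100 EUR to Bob"

tag = hmac.new(k, x, hashlib.sha256).digest()   # Alice sends (x, tag)

# Bob recomputes the MAC over the received message and compares.
ok = hmac.compare_digest(tag, hmac.new(k, x, hashlib.sha256).digest())
print("authentic and unmodified" if ok else "reject message")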
Digital Signatures
MACs are useful for guaranteeing the authenticity
of a communication link. However, they do not
provide a transferable proof of authorship: As the
parties at both ends of the channel share knowledge of the key, a MACed message (x, h_k(x)) could originate from either of them. As a matter of fact, a third party cannot deduce its author. Transferable proofs of authorship are required to model digital workflows, for example, orders in
electronic commerce that must be verifiable by
multiple parties (possibly including a judge in
order to arbitrate in a dispute). Digital signature
schemes are a means to provide such evidence.
They are also implemented with public key
cryptography. Signature verification is a one-way
function with trapdoor, referring to the ability to
create signatures.
The RSA signature scheme is based on the
same mathematics as the RSA cryptosystem.
Signing corresponds to decryption while verification corresponds to encryption. Digital Signature
Algorithm (DSA) and its EC-based variant ECDSA are alternative schemes which are not yet
as prevalent as RSA. EC-DSA is more efficiently
computable than RSA and uses shorter keys for
comparable strength.
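A hedged sketch of signature generation and verification with RSA via the pyca/cryptography library; the key size, padding scheme, and sample message are chosen for illustration and are not prescribed by the chapter. The library hashes the message internally, which matches the point made in the next paragraph that signatures are computed over h(x).

# Sketch: RSA signature generation and verification (pyca/cryptography).
# Key size, padding, and message are illustrative assumptions.
from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes
from cryptography.exceptions import InvalidSignature

signer = rsa.generate_private_key(public_exponent=65537, key_size=2048)
message = b"order #4711: 20 units"

pss = padding.PSS(mgf=padding.MGF1(hashes.SHA256()),
                  salt_length=padding.PSS.MAX_LENGTH)
signature = signer.sign(message, pss, hashes.SHA256())

# Any third party holding the public key can verify authorship.
try:
    signer.public_key().verify(signature, message, pss, hashes.SHA256())
    print("signature valid")
except InvalidSignature:
    print("signature invalid")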
Signatures are, in fact, computed over the hash
value h(x), not the message x itself. One reason
is the reduction of computational costs associated with public key cryptography. Secondly, the
one-way property of the hash function prevents
so-called existential forgery of signatures (see
e.g., Buchmann, 2004). However, h must be collision-resistant: in the case where h(x) = h(x′), a signature of x is always a signature of x′, too.
Messages that comprise declarations of intent
should be legally binding for the originator. On the
one hand, this implies that the originator commits
himself to the content in a way that he cannot
deny his intent later. On the other hand, no one
other than the legitimate person should be able to
Energy costs of cryptographic operations (setup and current costs):

Hash Functions         Hashing (µJ/Byte)
MD5                    0.59
SHA-1                  0.75
HMAC                   1.16

Symmetric Encryption   Encryption (µJ/Byte)
AES (128 bit)          1.62
AES (192 bit)          2.08
AES (256 bit)          2.29

Digital Signatures     Sign (mJ)   Verify (mJ)
RSA (1024 bit)         546.5       15.97
DSA (1024 bit)         313.6       338.02
EC-DSA (1024 bit)      134.2       196.23

Key Agreement
DH (1024 bit)
EC-DH (163 bit)
DH (512 bit)
responses) shared among the nodes. Interestingly, this statement holds in an unconditional
sense, that is, for an adversary with unlimited
computational power.
Sample Solutions
In this section five example solutions for secure
UC systems are described. References to the
literature and UC applications as well as links
to our scenarios are given in the text.
First, we present two strategies of privacy-enhancing technologies: the technical concept of
anonymity and a policy-driven approach. While
anonymity is favorable from a privacy perspective,
it nevertheless leads to new threats as malicious
parties may benefit from it, too. One such threat
is the so-called Sybil attack which is discussed in
the section Fighting Sybil and DoS Attacks. We
explain the technique of proof-of-work as a countermeasure which can also be used as a defense
against DoS. A typical situation in UC is the lack
of a priori trust relations among the participants.
In order to cope with such a setting, a secure
communication channel has to be bootstrapped
somehow. The section Bootstrapping Secure
Communication treats this aspect from a theoretical point of view. Above all, the resurrecting
duckling security policy model is explained here.
The following section gives several real-world
examples for out-of-band channels used to set
up secure UC communication. Finally we touch
on the hot topic of RFID security and discuss
in particular privacy concerns and protection
mechanisms of electronic travel documents.
We deliberately omitted reputation systems
from the set of examples, as this topic is the
subject of the following chapter. Trusted Computing is also an interesting direction of current
UC research, but would go beyond the scope of
this chapter. We refer the interested reader to the
work of Hohl and Zugenmaier (2005).
Privacy-Enhancing Technologies
Blurring Data
A large number of UC environments rely on sensing and tracking technologies of users and devices
in order to carry out their tasks. For example, in
order to provide location-based services, a user's
position has to be determined beforehand. But even
without a given UC service in place, the fact that
most communication takes place over a wireless
link opens the door for attacks based on traffic
analysis. All communication patterns that can be
successfully linked to a human jeopardize a user's
privacy. In this sense, the notion of confidentiality
has to be extended to message source confidentiality and/or message destination confidentiality.
This property of hiding the fact that particular
communication relationships exist at all is also
called sender or recipient anonymity, respectively. Anonymity is significantly more difficult
to achieve than message content confidentiality,
as it requires careful design of communication
protocols. Examples in the Internet setting include
mix networks (Chaum, 1981), onion routing (see
http://tor.eff.org for a popular implementation), or
anonymous re-mailers. Those protocols however
cannot be directly adapted to the UC world, since
they typically make assumptions about connectivity and resources. A first step in this direction is the
idea of Mist Routing (Al-Muhtadi, Campbell, Kapadia, Mickunas, & Yi, 2002). Mix Zones are a variant of the concept of mix networks applied to locations (see Beresford & Stajano, 2004).
As mentioned before, anonymity is of particular interest in UC when it comes to achieving
location privacy, that is, to prevent people bearing
devices emitting radio signals from being tracked
without noticing it. We discuss two different
strategies to preserve a user's privacy. First, a number of technical means are presented in order to blur data that could later be used to identify a person/subject. Second, we present the policy-based approach to user privacy proposed by Langheinrich (2002).
Gruteser and Grunwald (2003) propose a middleware-based approach to blur location information from clients before passing the location
information onwards to a location based service
provider. They assume that clients communicate
their position as very precise location information to a location server. Position is determined
on the client itself, for example, via GPS or by
the wireless service provider through signal
triangulation. Location-based service providers
access location information through the location
server (see Figure 1).
A client specifies that he wants to be indistinguishable from at least k - 1 other clients within
a given area and time frame. In other words,
the clients want to stay k-anonymous. To reach
k-anonymity, their algorithm, an adaptive form
of a quadtree-based algorithm, adjusts the resolution of location information along spatial and
temporal dimensions in order to meet the specified
anonymity constraints, say k.
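The resolution-reduction idea can be sketched with a simple grid instead of the quadtree algorithm used by Gruteser and Grunwald; in the following hypothetical Python example, the reported cell is coarsened step by step until it contains the client and at least k - 1 other clients. All coordinates and cell sizes are invented for illustration.

# Sketch of k-anonymous location blurring: coarsen the client's reported
# position until at least k clients fall into the same grid cell.
# A simple grid doubling stands in for the quadtree-based algorithm.
def blur(position, others, k, start_cell=0.001, max_cell=1.0):
    """Return the smallest grid cell (origin and size in degrees) containing
    the client's position and at least k-1 other clients, or None."""
    cell = start_cell
    while cell <= max_cell:
        cx, cy = int(position[0] / cell), int(position[1] / cell)
        inside = sum(1 for (x, y) in others
                     if int(x / cell) == cx and int(y / cell) == cy)
        if inside + 1 >= k:                      # the client itself counts, too
            return (cx * cell, cy * cell, cell)
        cell *= 2                                # halve the spatial resolution
    return None                                  # constraint not satisfiable

me = (49.8728, 8.6512)                           # hypothetical positions
nearby = [(49.8731, 8.6509), (49.8604, 8.6580), (49.8750, 8.6621)]
print(blur(me, nearby, k=3))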
Providing anonymity requires careful system
design as identification may happen on each
network layer (Avoine & Oechslin, 2005). As a
practical consequence, user-controlled pseudonyms on the application layer are pretty useless
if the devices themselves can be re-identified. This is the case when static IP or Media Access Control addresses are used. A proposal for temporary addresses for anonymity on the data link layer is made in Orava, Haverinen, and Honkanen (2002). UC environments like scenario 2 are particularly
suitable for addresses that are picked at random,
since the number of devices that are connected
to each other at the same time is by magnitudes
smaller than in an Internet setting. This guarantees, with a high probability, that device addresses
do not coincide.
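A minimal sketch of such temporary link-layer addresses: a fresh, locally administered address is drawn at random for every session. This is an illustration only; real implementations live in the device driver or operating system.

# Sketch: picking a temporary link-layer (MAC) address at random per session
# so the device cannot be re-identified across encounters.
import secrets

def random_link_layer_address() -> str:
    addr = bytearray(secrets.token_bytes(6))
    addr[0] = (addr[0] | 0x02) & 0xFE   # locally administered, unicast
    return ":".join(f"{b:02x}" for b in addr)

print(random_link_layer_address())      # different on every call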
Private Authentication
The following example illustrates how cryptographic protocols can be modified to provide
location privacy on the application layer. In typical
schemes for mutual authentication like the one
used in TLS/SSL (Dierks & Rescorla, 2006),
the parties reveal their identity during the handshake. As a consequence, Mallory may pretend
to be a legitimate party and start a handshake in
order to see who is in her vicinity. She may also
eavesdrop while Alice and Bob are authenticating
each other and learn about their communication
relation. Private authentication tackles this problem. The approach of Abadi (2002) allows Alice
to prove her identity to some self-chosen set S of
peers in order to establish a private and mutually
authenticated channel. Entities outside of S cannot detect Alice's presence while entities inside
S do not learn more than their own membership.
Without loss of generality, we restrict ourselves to the case where Bob is the only member
of S. If the set S contains more than one element,
Alice simply goes through parallel executions of
the protocol for each element. In order to start a
communication with Bob, Alice broadcasts the
plaintext hello accompanied with
(1)
Here K_B is Bob's public key used for encryption, (K_A, K_A^-1) denotes her own key pair, K a session key, and t is a timestamp. As Bob is the only entity who knows the private key K_B^-1, he is able to extract from c the sender's identity (which is unforgeable due to the digital signature). Assuming that Alice is on his whitelist of peers, he answers with a message protected by K. Bob's presence in turn cannot be detected provided that the cryptosystem is which-key concealing (i.e., K_B cannot be deduced from c) and Bob ensures t's recency. Otherwise Mallory could mount a replay
attack by sending c and checking whether there
is a corresponding answer. Abadi (2002) also
describes a second protocol without timestamps
at the price of additional communication rounds.
This variant might be useful in cases where one
cannot assume synchronized clocks.
A Policy-Based Mechanism
Langheinrich (2002) proposes pawS, a system that
provides users with a privacy enabling technology. This approach is based on the Platform for
Privacy Preferences Project (P3P, see http://www.w3.org/TR/P3P/), a framework which enables the encoding of privacy policies into machine-readable XML. Making use of a trusted device,
the so-called privacy assistant (PA), the user
negotiates his privacy preferences with the UC
environment. For this purpose, the PA is able
to detect a privacy beacon upon entering a UC
environment, for example, a smart space. The
privacy beacon announces the available services,
for example, a printer, or a video camera, with
a reference to their data collection capabilities
and policies. The PA in turn contacts the users
personal privacy proxy located on the Internet,
which contacts the corresponding service proxies
Proof-of-Work
So-called proof-of-work (PoW) techniques treat
the computational resources of each user of a resource or service as valuable. To prevent arbitrarily
high usage of a common resource by a single user,
each user has to prove that she has made some
effort, that is, spent computing resources, before
she is allowed to use the service. This helps to
prevent, for example, users from sending millions of spam emails, or from mounting denial
of service attacks.
The idea of PoW is to require the sender of a
message or service request to provide the answer
to a computational challenge along with the actual
message. If the verification of the proof fails, the
recipient discards the whole request without further processing. Obviously, the costs of creating
such a proof must be some order of magnitude
higher than for system setup and proof verification. A challenge in a PoW scheme may either
be generated by the recipient or calculated based
on implicit parameters that cannot be controlled
by the sender (in order to prevent the re-usage of
a proof). The second variant is very well suited
for UC as no infrastructure or multi-round communication is required.
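As a concrete illustration, the following is a minimal hashcash-style sketch (not taken from this chapter): the sender searches for a nonce such that the SHA-256 hash of the request plus nonce begins with a given number of zero bits, while the recipient verifies the proof with a single hash computation. The difficulty value and helper names are illustrative assumptions.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Minimal hashcash-style proof-of-work sketch (illustrative only).
public final class ProofOfWork {

    // Count the leading zero bits of SHA-256(message + ":" + nonce).
    private static int leadingZeroBits(String message, long nonce) throws Exception {
        MessageDigest sha256 = MessageDigest.getInstance("SHA-256");
        byte[] hash = sha256.digest((message + ":" + nonce).getBytes(StandardCharsets.UTF_8));
        int bits = 0;
        for (byte b : hash) {
            if (b == 0) { bits += 8; continue; }
            bits += Integer.numberOfLeadingZeros(b & 0xFF) - 24;
            break;
        }
        return bits;
    }

    // Expensive step performed by the sender: search for a suitable nonce.
    public static long createProof(String message, int difficultyBits) throws Exception {
        long nonce = 0;
        while (leadingZeroBits(message, nonce) < difficultyBits) {
            nonce++;
        }
        return nonce;
    }

    // Cheap step performed by the recipient: a single hash computation.
    public static boolean verifyProof(String message, long nonce, int difficultyBits) throws Exception {
        return leadingZeroBits(message, nonce) >= difficultyBits;
    }

    public static void main(String[] args) throws Exception {
        String request = "service-request from Alice";
        int difficulty = 20;   // illustrative difficulty setting
        long nonce = createProof(request, difficulty);
        System.out.println("nonce = " + nonce + ", valid = " + verifyProof(request, nonce, difficulty));
    }
}
```

In a UC setting, the challenge string would be derived from implicit parameters of the request (for instance, the recipient's identity and a coarse timestamp) so that a proof cannot be reused.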
Bootstrapping Secure
Communication
Consider the situation where two UC devices want
to establish a secure communication channel between each other in the presence of an adversary.
Here secure may have the meaning of confidentiality and/or authenticity. As confidentiality is
Out-of-Band Channels

Many communication technologies are found in the UC world that are suitable for cryptographic out-of-band transmission. We now give a non-exhaustive list of examples that have already been used in practice. Usually switching to a secondary channel requires some human intervention, but it may also be triggered automatically by an appropriate protocol command on the primary channel. McCune, Perrig, and Reiter (2005) provide
specified reading range is 3 meters or more. E-passports use so-called proximity tags operating at a frequency of 13.56 MHz and a reading range of at most 15 cm. Note that fraudulent RFID readers do not necessarily adhere to standards and may therefore exceed reading ranges significantly (see, e.g., Kirschenbaum & Wool, 2006).
There are several non-destructive countermeasures against skimming and tracking; for a comparison see, for example, Weis, Sarma, Rivest, and Engels (2003) or Juels, Rivest, and Szydlo (2003). For instance, RFID chips in identity cards can be shielded in a Faraday cage made of metal foil which is integrated in a wallet. Clipped tags (Karjoth & Moskowitz, 2005) provide a user-friendly way to deactivate a tag by manually separating chip and antenna. Such a method provides immediate visual feedback about the tag's state. Instead of completely deactivating the tag, the antenna may be transformed in a way that reduces the reading range considerably. Blocker tags (Juels et al., 2003) are devices that fool readers by simulating a pile of tags in order to prevent them from singling out and communicating with a single one. As the EPC addressing scheme is hierarchical, a blocker tag may restrict the range of IDs selectively.
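Selective blocking exploits this hierarchical structure: the blocker only simulates tags whose identifiers fall under a designated prefix (a privacy zone), so readers can still inventory all other tags. The sketch below illustrates the prefix test; the zone prefix and method names are illustrative assumptions rather than part of the EPC standard.

```java
// Illustrative sketch of a blocker tag's selective-blocking decision.
// EPC identifiers are treated as bit strings; the blocker protects one
// subtree of the hierarchical ID space, identified by a privacy-zone prefix.
public final class SelectiveBlocker {

    private final String privacyZonePrefix;   // e.g. "01" for a hypothetical privacy zone

    public SelectiveBlocker(String privacyZonePrefix) {
        this.privacyZonePrefix = privacyZonePrefix;
    }

    // During the reader's tree-walking singulation, the blocker answers
    // (simulating the presence of all tags) only if the queried prefix and
    // the protected zone overlap, i.e., one is a prefix of the other.
    public boolean shouldBlock(String queriedIdPrefix) {
        return queriedIdPrefix.startsWith(privacyZonePrefix)
                || privacyZonePrefix.startsWith(queriedIdPrefix);
    }

    public static void main(String[] args) {
        SelectiveBlocker blocker = new SelectiveBlocker("01");
        System.out.println(blocker.shouldBlock("0110"));  // true: inside the privacy zone
        System.out.println(blocker.shouldBlock("00"));    // false: reader may single these tags out
    }
}
```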
Research Outlook
This chapter gave an overview of the field of UC
security taking into account UC characteristics
and limitations. By means of representative settings, ranging from mobile work to RFID-based
real-time enterprises, common security challenges
were exhibited. At the heart of the chapter was
the presentation of a number of concrete solutions
to such challenges. This list is not claimed to be
exhaustive, but it is an initial step toward a taxonomy. The solutions stand pars pro toto for UC
security, as they are generic and scalable enough
to be adaptable to a large number of scenarios.
Social acceptance of UC technology strongly
depends on the security of UC systems. On the one
hand, the growing use of UC in everyday life must
not lead to security breaches as demonstrated,
for instance, in a recent attack on RFID-enabled
credit cards (Heydt-Benjamin et al., 2007). It is
also a safe bet that conflicts of interest will again
occur between privacy and law enforcement as
we have already seen in the aftermath of 9/11, for
example, concerning the traceability capabilities
of UC. We hope that this chapter motivates the
References
Abadi, M. (2002). Private authentication. In Proceedings of the Workshop on Privacy-Enhancing Technologies (PET).
Aitenbichler, E., Kangasharju, J., & Mühlhäuser, M. (2004). Talking assistant: A smart digital identity for ubiquitous computing. In Advances in pervasive computing (pp. 279-284). Austrian Computer Society (OCG).
Al-Muhtadi, J., Campbell, R., Kapadia, A., Mickunas, M., & Yi, S. (2002). Routing through the mist: Privacy preserving communication in ubiquitous computing environments. In Proceedings of the IEEE International Conference on Distributed Computing Systems (ICDCS) (pp. 65-74).
Arkko, J., & Nikander, P. (2002). Weak authentication: How to authenticate unknown principals without trusted parties. In Proceedings of the IWSP: International Workshop on Security Protocols.
Avoine, G., & Oechslin, P. (2005). RFID traceability: A multilayer problem. In A. S. Patrick & M. Yung (Eds.), Financial cryptography and data
Juels, A., Molnar, D., & Wagner, D. (2005, September). Security and privacy issues in e-passports. In Proceedings of the Conference on Security and
Stajano, F., & Anderson, R. J. (1999). The resurrecting duckling: Security issues for ad-hoc wireless networks. In Security Protocols: 7th International Workshop, Cambridge, UK, April 19-21, 1999 (pp. 172-194).
Stallings, W. (2005). Cryptography and network security. Prentice Hall.
Stallings, W. (2006). Network security essentials. Prentice Hall.
Straub, T., Hartl, M., & Ruppert, M. (2006).
Additional Reading
Eckert, C. (2006). IT-Sicherheit: Konzepte, Verfahren, Protokolle. Oldenbourg.
Stajano, F. (2002). Security for ubiquitous computing. John Wiley & Sons.
Stallings, W. (2006). Network security essentials. Prentice Hall.
Storz, O., Friday, A., & Davies, N. (2003). Towards ubiquitous ubiquitous computing: An alliance with the grid. In Proceedings of the System Support for Ubiquitous Computing Workshop (UbiComp Workshop 2003).
This work was previously published in the Handbook of Research on Ubiquitous Computing Technology for Real Time Enterprises, edited by M. Mühlhäuser and I. Gurevych, pp. 337-362, copyright 2008 by Information Science Reference, formerly
known as Idea Group Reference (an imprint of IGI Global).
Section II
Chapter 2.1
Abstract
Measuring satisfaction can provide developers with valuable insight into the usability of a
product as perceived by the user. Although such
measures are typically included in usability
evaluations, it is clear that the concept itself is
under-developed. The literature reveals a lack of
cumulative, systematic research and consequently
the field is in disarray (Hornbæk, 2006). Clearly,
the area needs a strong theoretical foundation
on which to base research. This paper reviews
the literature on user satisfaction and proposes a
conceptualisation and definition of the concept
that will aid researchers in the development of
valid measures.
USER SATISFACTION AS
A USABILITY PARAMETER
The ISO 9241-11 standard suggests assessing
usability in terms of performance by measur-
inclusion of new dimensions. The operationalisations proposed by these researchers are very different to those already discussed. For example, Lindgaard and Dudek (2003, p. 430) define satisfaction as an expression of affect and suggest it consists of four dimensions: Aesthetics, Likeability, Expectation, and Emotion. Similarly, Yun et al. (2003) define satisfaction as the user's subjective impression of the device, which is informed by performance and emotional aspects. The dimensions of satisfaction they propose include Luxuriousness, Simplicity, Attractiveness, Colourfulness, Texture, Delicacy, Harmoniousness, Salience, Ruggedness, and Overall Satisfaction. Finally, Han and Hong (2003) define satisfaction as meaning that products should provide pleasure and/or attractiveness (p. 1441). The corresponding
dimensions they propose include Volume, Shape,
Elegance, Harmoniousness, Simplicity, Comfort,
Attractiveness, and Overall Satisfaction. Clearly,
these conceptualisations of satisfaction have little
to do with perceived usability; rather they refer
to aesthetic concerns.
The second approach taken by researchers
involves a critique of the relevance of satisfaction.
It is argued that the construct should be replaced
(Jordan, 1998, 2000; Dillon, 2001). Just like the
call to go beyond usability (Logan, 1994), Dillon (2001) called for the creation of a construct
that goes beyond satisfaction. This is because
he believed that the concept of satisfaction as it
had been understood up to then was inadequate
for application to new technology. He proposed
the Affect construct as a replacement which is
predominantly concerned with user emotions.
Finally, recent approaches to satisfaction are
moving from a view of the construct as a characteristic of the interaction to one that conceptualises
satisfaction as an intrinsic product attribute. Even
though satisfaction has been shown not to be a
product characteristic (Kirakowski, 1999), new
conceptualisations of satisfaction proposed by Yun
et al. (2003) and Han and Hong (2003) take an ergonomic approach and conceptualise satisfaction
366
What is Satisfaction?
It is clear from this review of the user satisfaction
literature that there are three theoretical challenges
facing researchers.
First, little insight is given into the nature of
the construct. Some only offer broad definitions
of satisfaction and little else (e.g., ISO 9241-11).
Others offer both a definition and model of the
construct. But in many cases the model does not
seem to correspond to the definition. For example,
Yun et al. (2003) define satisfaction in terms of
both performance and affective elements, yet their
operationalisation of the construct does not include
any dimensions relating to user performance.
Some view the user response as exclusively cognitive (e.g., Chin et al., 1988), others as exclusively emotional (e.g., Han & Hong, 2003; Lindgaard & Dudek, 2003), and still others take a combined approach, including both cognitive and affective elements in their models of satisfaction (e.g., Porteous et al., 1995; Kirakowski et al., 1998).
Second, some researchers view satisfaction
as a characteristic of the interaction between the
user and the product (e.g. Porteous et al., 1995;
Kirakowski et al., 1998) which is in keeping with
the quality in use approach to usability proposed in
ISO 9241-11 and widely accepted by the majority
of usability professionals. However, others view
satisfaction as a product attribute (e.g., Chin et
al., 1988; Han & Hong, 2003; Yun et al., 2003)
indicating a return to the generally discredited
quality of features approach to measurement.
Finally, it appears to be taken for granted that
satisfaction is linked to usage. Many mention
this link (e.g., Lund, 2001; Kirakowski, 2002),
but there is little evidence of any investigations
(in the HCI literature at least) into the nature of
this relationship.
USER SATISFACTION AS AN
ATTITUDE
The study of attitudes, their structure, formation, and their relationships with other constructs
(most notably behaviour) has occupied a central
position within Social Psychology almost since
its inception (Palmerino, Langer, & McGillis,
1984; Ajzen, 2001).
Although there does not exist a standard definition of attitude (Fishbein & Ajzen, 1975; Dawes
& Smith, 1985; Olson & Zanna, 1992), it is clear
that evaluation is an important component. This is
illustrated in the following four definitions of attitude taken from classics in the attitude literature:
(1) "(a term) used to refer to a relatively enduring tendency to respond to someone or something in a way that reflects a positive or negative evaluation of that person or thing" (Manstead, 1996, p. 3); (2) "(a tendency) to evaluate an entity with some degree of favour or disfavour, ordinarily expressed in cognitive, affective, and behavioural responses" (Eagly & Chaiken, 1993, p. 155); (3) "a person's feelings toward and evaluation of some object, person, issue, or event" (Fishbein & Ajzen, 1975, p. 12); and finally, (4) "a summary evaluation of a psychological object captured in such attribute dimensions as good-bad, harmful-beneficial, pleasant-unpleasant, and likable-unlikable" (Ajzen, 2001, p. 28).
These definitions demonstrate that theorists
define attitudes in affective (e.g., Fishbein &
Ajzen, 1975), behavioural (e.g., Manstead, 1996),
and cognitive (e.g., Ajzen, 2001) terms. The
definition offered by Eagly and Chaiken (1993)
where attitudes are defined in terms of all three
components (affect, behaviour, and cognition)
as a property of the interaction between the individual and the device under evaluation.
This discussion suggests that viewing satisfaction as a product characteristic is inadequate
(which is what is implied by Yun et al., 2003, and
Han & Hong, 2003). Attitude research studies
provide the necessary theoretical basis for proposing that satisfaction is a characteristic of the
interaction, and support an approach such as that
followed by Porteous et al. (1995) which focuses
on user reactions to the product rather than referring to specific product features.
CONCLUSION
The preceding discussion illustrates that Melone
(1990) was correct in asserting that a broad conceptualisation of satisfaction as an attitude retains
the construct's essential elements while enabling
REFERENCES
Ajzen, I. (2001). Nature and operation of attitudes.
Annual Review of Psychology, 52, 27-58.
Bailey, J. E., & Pearson, S. W. (1983). Development
of a tool for measuring and analysing computer
user satisfaction. Management Science, 29(5),
530-545.
Bandura, A. (1977). Self-efficacy: Toward a unifying theory of behavioural change. Psychological
Review, 84(2), 191-215.
Bem, D. (1972). Self-perception theory. Advances
in Experimental Social Psychology, 6, 1-62.
Bevan, N., & Azuma, M. (1997). Quality in use:
Incorporating human factors into the software
engineering life cycle. In Proceedings of the 3rd
International Conference on Software Engineering Standards. Retrieved from http://doi.ieeecomputersociety.org/10.1109/SESS.1997.595963
Bevan, N., Kirakowski, J., & Maissel, J. (1991).
What is usability? In Proceedings of the 4th International Conference on HCI, Stuttgart.
Bevan, N., & Macleod, M. (1994). Usability measurement in context. Behaviour & Information
Technology, 13, 132-145.
Brooke, J. (1996). SUS: A quick and dirty usability scale. In P. W. Jordan, B. Thomas, B. A.
Weerdmeester, & I. L. McClelland, (Eds.), Usability evaluation in industry. London: Taylor
& Francis.
Chin, J., Diehl, V., & Norman, K. (1988). Development of an instrument measuring user satisfaction
Giese, J. L., & Cote, J. A. (2000). Defining consumer satisfaction. Academy of Marketing Science
Review. Retrieved from http://www.amsreview.
org/articles/giese01-2000.pdf
Han, S. H., & Hong, S. W. (2003). A systematic approach for coupling user satisfaction with product
design. Ergonomics, 46(13-14), 1441-1461.
Hornbæk, K. (2006). Current practice in measuring usability: Challenges to usability studies and
research. International Journal of Human-Computer Studies, 64, 79-102.
Hassenzahl, M., Beu, A., & Burmester, M. (2001).
Engineering joy. Retrieved on June 26, 2006,
from http://www.iib.bauing.tu-darmstadt.de/it@
iib/9/s1has_lo.pdf
ISO (1998). ISO 9241: Ergonomic requirements
for office work with visual display terminals, Part
11: Guidance on usability.
Ives, B. S., Olson, M., & Baroudi, J. (1983). The
measurement of user information satisfaction.
Communications of the ACM, 26, 530-545.
Jordan, P. W. (1998). Human factors for pleasure in
product use. Applied Ergonomics, 29(1), 25-33.
Jordan, P. W. (2000). Designing pleasurable products: An introduction to the new human factors.
London: Taylor & Francis.
Judd, C. M., & Johnson, J. T. (1984). The polarising effects of affective intensity. In J.R. Eiser
(Ed.), Attitudinal judgment. New York: Springer-Verlag.
Keinonen, T. (1998). One-dimensional usability:
Influence of usability on consumers product
preference. Netherlands: University of Art &
Design.
Kirakowski, J. (1996). The software usability
measurement inventory: Background and usage.
In P. W. Jordan, B. Thomas, B. A. Weerdmeester,
& I. L. McClelland (Eds.), Usability evaluation
373
374
Zajonc, R. B. (1980). Feeling and thinking: Preferences need no inferences. American Psychologist,
35, 151-175.
This work was previously published in International Journal of Technology and Human Interaction, Vol. 4, Issue 1, edited by B.C.
Stahl, pp. 1-15, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.2
Human-Centric Evolutionary
Systems in Design and
Decision-Making
I. C. Parmee
University of the West of England, UK
J. R. Abraham
University of the West of England, UK
A. Machwe
University of the West of England, UK
Abstract
The chapter introduces the concept of user-centric
evolutionary design and decision-support systems, and positions them in terms of interactive
evolutionary computing. Current research results
provide two examples that illustrate differing
degrees of user interaction in terms of subjective
criteria evaluation; the extraction, processing,
and presentation of high-quality information; and
the associated improvement of machine-based
problem representation. The first example relates
to the inclusion of subjective aesthetic criteria to
complement quantitative evaluation in the conceptual design of bridge structures. The second
relates to the succinct graphical presentation
Introduction
Uncertainty and poor problem definition are
inherent features during the early stages of design and decision-making processes. Immediate
requirements for relevant information to improve
Interactive Evolutionary
Computation (IEC)
Interactive evolutionary computing (Takagi, 1996)
mainly relates to partial or complete human evaluation of the fitness of solutions generated from
evolutionary search. This has been introduced
where quantitative evaluation is difficult if not
impossible to achieve. Examples of application
support the identification of high-performance solutions where qualitative as well as quantitative objectives play a major role; and modify and refine design problem representation.

The first example relates to aesthetic judgement of EC-generated designs and is closer to the more traditional explicit interaction where user subjective evaluation is evident. However, this subjective evaluation complements detailed, machine-based quantitative evaluation. The second is current IEDS research relating to problem definition and the iterative interactive improvement of machine-based design representations that sits further toward the implicit end of the spectrum.

This example brings together agent-based machine learning, evolutionary computing, and subjective evaluation in search for aesthetically pleasing, structurally feasible designs during the conceptual design process. Although significant theoretical work is evident with respect to the inclusion of aesthetics in computer-based design, application-based research has received less attention (Moore, Miles, & Evans, 1996a, 1996b; Saunders, 2001).

Figure 1 illustrates the main components of the system and the manner in which they interact. The user defines initial design requirements and aesthetically evaluates the designs generated by the Evolutionary Search, Exploration, and Optimisation System (ESEO) during the initial generations. The agents have multiple tasks, which include the creation of the initial population based on design requirements, the monitoring of designs for feasibility during the ESEO processes, and evaluation of machine-based aesthetic criteria. The ESEO identifies design solutions that can be considered high performance in terms of Structural Feasibility and Stability, Materials Cost, and Rule-Based Aesthetics.
Figure 1. The user-centric system

[The figure shows the user, a construction and repair agent (C.A.R.A.), the evolutionary search, exploration and optimisation system, the aesthetics agents, and a machine learning sub-system, connected by flows of designs, user evaluations, aesthetic preferences, and fitness values.]
[Figure: the object-based design representation. A population comprises designs; each design consists of span and support collections of elements; each element carries X pos, Y pos, Length, and Height integer attributes.]
Design: Span {B1, B2}, Support {B3}, where
B1 = {x1, y1, l, h}
B2 = {x2, y2, l, h}
B3 = {x3, y3, l, h}
Introduction of Agency
Initial testing of the representation involved
freeform assembly and evolution of simple bridge
structures using GA, EP, and agent-based approaches. Evolving feasible structures proved a
difficult and lengthy process, whereas rule-based
agent assembly was straightforward and rapid.
Thus a combined approach has been developed
where agents create the initial population of
structures and provide a continuous repair
1 / (1 + Fi)    (1)

Stability = 1 / (1 + fi)    (2)

π² EI / H²    (3)

To ensure the overall integrity of the structure, the above equations are used. It is evident that the closer the dimensions of the span elements are to the ideal ratio (R), the lower the

The system is run for 100 generations. A few members from the initial population are shown in Figure 6. The initial population consists of three different kinds of designs: a simply supported

Figure 6. Sample of mixed initial population of bridge shapes

Aesthetic_Fitness = Σ (i = 1 to n) wi Ai    (4)

where wi are weights applied to the individual aesthetic rules Ai.
1. User stipulates the frequency of user interaction (e.g., once every 10 generations).
2. User aesthetically evaluates a preset number of population members from the initial population (usually the top 10 members, i.e., those with highest fitness regarding stability, material usage, and explicitly defined aesthetic criteria).
3. The EP system runs.
4. Population members are aesthetically evaluated by the user every n generations.
5. Repeat steps 3 and 4 until the user terminates the evolutionary process (a simple sketch of this loop is given below).
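The following minimal sketch (not taken from the chapter) illustrates the interaction pattern described above: the evolutionary process runs autonomously and, every n generations, pauses to collect the user's aesthetic ratings for the current best designs. The class and method names (Design, evolveOneGeneration, askUserRating, and so on) are illustrative assumptions.

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;
import java.util.Random;
import java.util.Scanner;

// Illustrative interactive evolutionary loop: machine-based evolution with
// periodic subjective (user) evaluation of the fittest designs.
public class InteractiveEvolutionSketch {

    static class Design {
        double machineFitness;   // stability, material usage, rule-based aesthetics
        double userRating;       // subjective aesthetic rating supplied by the user
    }

    public static void main(String[] args) {
        int interactionFrequency = 10;             // step 1: e.g., once every 10 generations
        List<Design> population = createInitialPopulation(50);
        Scanner in = new Scanner(System.in);

        for (int generation = 1; generation <= 100 && !userWantsToStop(in); generation++) {
            evolveOneGeneration(population);       // step 3: the EP system runs
            if (generation % interactionFrequency == 0) {
                // step 4: present the current top designs for aesthetic evaluation
                population.sort(Comparator.comparingDouble((Design d) -> d.machineFitness).reversed());
                for (Design d : population.subList(0, Math.min(10, population.size()))) {
                    d.userRating = askUserRating(in, d);
                }
            }
        }
    }

    // --- Placeholder helpers (assumptions made for this sketch) ---
    static List<Design> createInitialPopulation(int size) {
        List<Design> pop = new ArrayList<>();
        Random rnd = new Random();
        for (int i = 0; i < size; i++) {
            Design d = new Design();
            d.machineFitness = rnd.nextDouble();
            pop.add(d);
        }
        return pop;
    }

    static void evolveOneGeneration(List<Design> population) {
        // selection, variation and repair-agent checks would go here
    }

    static double askUserRating(Scanner in, Design d) {
        System.out.print("Rate this design (0-10): ");
        return in.nextDouble();
    }

    static boolean userWantsToStop(Scanner in) {
        return false;   // step 5: in a real system the user decides when to terminate
    }
}
```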
Box 1.
Fitness = (w1 × Stability) + (w2 / Material_Usage) + (w3 × Aesthetic_Fitness) + (w4 × Ufit)    (5)
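A direct reading of Box 1 as code might look like the sketch below; it is illustrative only, the weight values are assumptions, and Ufit stands for the user-assigned fitness as in the chapter.

```java
// Illustrative implementation of the weighted fitness of Box 1 (Equation 5).
public final class DesignFitness {

    // Example weights; in practice these would be tuned or set by the designer.
    static final double W1 = 1.0;   // stability
    static final double W2 = 1.0;   // material usage (inverted: less material is better)
    static final double W3 = 1.0;   // rule-based aesthetic fitness
    static final double W4 = 1.0;   // user-assigned (subjective) fitness

    static double fitness(double stability, double materialUsage,
                          double aestheticFitness, double userFitness) {
        return (W1 * stability)
                + (W2 / materialUsage)
                + (W3 * aestheticFitness)
                + (W4 * userFitness);
    }

    public static void main(String[] args) {
        // e.g., a design with good stability, moderate material usage and a high user rating
        System.out.println(fitness(0.9, 2.5, 0.7, 8.0));
    }
}
```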
beam spans and unsupported beam spans. Based on this model a fuzzy rule generator has been implemented. Initial results have been encouraging.

Evolving the Problem Space through Interactive Evolutionary Processes
The research utilises the BAE Systems MiniCAPS model, a simplified version of a suite of preliminary design models for the early stages of military aircraft airframe design, initially developed for research relating to the development of the IED concept. The model comprises nine continuous input variables and 12 continuous output parameters relating to criteria such as performance, wing geometry, propulsion, fuel capacity, structural integrity, and so forth. Input variables are:
Identifying High-Performance
Regions Relating to Differing
Objectives
Figures 9(a), (b), and (c) show HP regions comprising COGA-generated solutions relating to three of
the 12 MiniCAPS objectives (Ferry Range [FR], Attained Turn Rate [ATR1], and Specific Excess Power [SEP1]) projected onto a variable hyperplane relating to two of the nine variables utilised
in the search process. This projection allows the
designer to visualise the
HP regions, identify their bounds, and subsequently reduce the variable ranges as described
in previously referenced papers. These papers
also introduce the projection of these differing
objective HP regions onto the same variable
hyperplane as shown in Figure 10 from which
the degree of objective conflict immediately becomes apparent to the designer. The emergence
of a mutually inclusive region of HP solutions
relating to the ATR1 and FR objectives indicates
a low degree of conflict, whereas the HP region relating to SEP1 is remote (in variable space) to both the ATR1 and FR regions, indicating a higher degree of conflict.

Figure 9. COGA-generated high-performance regions relating to three differing objectives: (a) FR (Ferry Range); (b) ATR1 (Attained Turn Rate); (c) SEP1 (Specific Excess Power). All projected onto the GWPA (Gross Wing Plan Area)/WAR (Wing Aspect Ratio) variable hyperplane. (N.B. Colour versions of figures can be found at: http://www.ad-comtech.co.uk/cogaplots.htm)

Figure 10. All HP regions projected onto the GWPA/WAR variable hyperplane

There is much information contained in the HP regions relating to appropriate variable ranges for single objectives, degree of conflict between multiple objectives, and the emergence and definition of mutually inclusive (common) HP regions. This graphical representation provides an excellent spatial indication of the degree of objective conflict. However, searching through all possible two-dimensional variable hyperplanes to visualise such information is not a feasible approach. Recent research has resulted in single graphical representations that can present all variable and objective data whilst providing links to other visual perspectives. The parallel coordinate box plot (PCBP) representation shown in Figure 11 is one such graphic that provides a central repository containing much single and multiple-objective solution information.

Parallel Coordinate Box Plots

Parallel coordinate plots (Inselberg, 1985) appeared to offer potential in terms of providing a single graphic illustrating complex relationships between variable and objective space. The parallel coordinate representation displays each variable dimension vertically, parallel to each other. Points corresponding to a solution's value of that variable can then be plotted on each vertical variable axis. It is thus possible to show the distribution of solutions in all variable dimensions and the correlation between different dimensions. The disadvantage of the technique when attempting to include multiple objectives is that the density of the information presented hinders perception (Parmee & Abraham, 2004, 2005). To overcome the data density problem, three modifications to the standard parallel coordinate representation have been included:

1. additional vertical axes for each variable so that each objective can be represented,
2. an indication of the degree of HP region solution cover across each variable range, and
3. the introduction of box plots to indicate skewness of solutions across each variable range.

This PCBP provides a much clearer graphic (see Figure 11). The vertical axis of each variable is scaled between the minimum and maximum value of the variable in the HP region solutions of each objective. The length of the axis represents the normalised ranges of variable values present in an HP region. Where an HP solution set does not fully extend across the variable range, the axis is terminated by a whisker at the maximum or minimum value of the variable. The colour-coded box plots relate to each objective (i.e., SEP1, ATR1, and FR). The median is marked within the box, and the box extends between the lower and upper quartile values within the variable set. The PCBP clearly visualises the skewness of solution distribution relating to each objective in each variable dimension, which provides an indication of the degree of conflict between objectives. For instance, it is apparent that all three objective boxes overlap in the case of variables 1, 2, 3, 6, and 9. However, significant differences in the distribution of the boxes are evident in terms of at least one objective where variables 4, 5, 7, and 8 are concerned. Variables 4 and 5 are Gross Wing Plan Area and Wing Aspect Ratio.

Figure 11. Parallel box plot of solution distribution of each objective across each variable dimension

Figure 12. Projection of results onto the variable 1/variable 2 hyperplane for the Attained Turn Rate (ATR1) objective

Projection of COGA Output onto Objective Space

The HP region solutions for ATR1 and FR can be projected onto objective space as shown in Figure 13. A relationship between the HP region solutions and a Pareto-frontier emerges along the outer edge of the plot (Parmee & Abraham, 2004) despite the fact that the working principle of COGA is very different to that of evolutionary multi-objective algorithms (Deb, 2001), which tend to use a non-dominance approach. For comparative purposes, Figure 14 illustrates the distribution of COGA output and SPEA-II (Zitzler et al., 2002) Pareto-front output in objective space. Using a standard multi-objective GA (MOGA), it is possible to obtain solutions lying along the Pareto-front, but difficult to explore the relationship between variable and objective space. However, it is likely that the designer is also interested in solutions that lie around particular sections of the Pareto-front.

Figure 13. Distribution of FR and ATR1 solutions in objective space

Figure 14. Distribution of ATR1 and FR solutions against the SPEA-II Pareto front

SUMMARY

The aesthetics work reveals a significant potential in terms of the development of systems that include criteria ranging from purely quantitative through to purely subjective. Ultimately the system will be required to give a comparative indication in terms of aesthetically pleasing design and likely cost whilst indicating structural feasibility. The introduction of such an interactive process also poses many questions, such as: How many evaluations can a user be expected to perform before becoming fatigued? These questions have been repeatedly posed, but seldom successfully addressed within the interactive evolutionary computing (IEC) community. Our continuing research is addressing these issues and, in particular, the user-fatigue aspect.
ACKNOWLEDGMENT
The authors wish to acknowledge the contribution
to the bridge aesthetics work of Professor John
Miles of the Institute of Machines and Structures
at the University of Cardiff.
REFERENCES
Bentley, P. J. (2000, April 26-28). Exploring
component-based representations: The secret
of creativity by evolution? In I. C. Parmee (Ed.),
Proceedings of the 4th International Conference
on Adaptive Computing in Design and Manufacture (ACDM 2000) (pp. 161-172). University of
Plymouth, UK.
Deb, K. (2001). Multi-objective optimisation
using evolutionary algorithms. New York: John
Wiley & Sons.
Carnahan, B., & Dorris, N. (2004). Identifying
relevant symbol design criteria using interactive
evolutionary computation. Proceedings of the
Genetic and Evolutionary Computing Conference
(GECCO), Seattle, USA (pp. 265-273).
Cramer, N. L. (1985). A representation for the
adaptive generation of simple sequential programs. Proceedings of the International Conference on Genetic Algorithms and Their Application, Pittsburgh (pp. 183-187).
Fogel, D. B. (1988). An evolutionary approach
to the travelling salesman problem. Biological
Cybernetics, 60(2), 139-144.
Goldberg, D. E. (1989). Genetic algorithms in
search, optimization, and machine learning.
Reading, MA: Addison-Wesley.
Herdy, M. (1997). Evolutionary optimisation based
on subjective selection: Evolving blends of coffee. Proceedings of the 5th European Congress
on Intelligent Techniques and Soft Computing
genetic algorithms in virtual library design. Proceedings of the IEEE Congress on Evolutionary
Computation, Edinburgh, UK (pp. 668-675).
Takagi, H. (1998). Interactive evolutionary
computation: Cooperation of computational intelligence and human KANSEI. Proceedings of
5th International Conference on Soft Computing
and Information/Intelligent Systems, Japan (pp.
41-50).
This work was previously published in Handbook of Research on Nature-Inspired Computing for Economics and Management,
edited by J. Rennard, pp. 395-411, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference
(an imprint of IGI Global).
Chapter 2.3
Adaptation and Personalization of User Interface and Content
Abstract
INTRODUCTION
The limited resources of mobile computing infrastructure (cellular networks and end-user devices) impose strict requirements on the transmission and presentation of multimedia. These constraints
elevate the importance of additional mechanisms,
capable of handling economically and efficiently
the multimedia content. Flexible techniques are
needed to model multimedia data adaptively for
multiple heterogeneous networks and devices
with varying capabilities. Context conditions
(the implicit information about the environment,
situation and surrounding of a particular communication) are of great importance.
Adaptive services based on context-awareness
are indeed a precious benefit of mobile applications: in order to improve their provided service,
mobile applications can actually take advantage
of the context to adjust their behaviors. An effective adaptation has to be based on certain context
criteria: presence and availability mechanisms
enable the system to decide when the user is in
a certain locale and whether the user is available
to engage in certain actions. Hence, mobile applications aim to adapt the multimedia content to
the different end user devices.
However, typically each and every person
receives the same information under the same
context conditions. What is even more challenging is a personalization of the user interface (UI)
to the interests and preferences of the individual
user and the characteristics of the user end device.
The goal of mobile applications is to increasingly
make their service offerings more personalized
toward their users. Personalization has the ability to adapt (customize) resources (products,
information, or services) to better fit the needs of
each user. Personalization in mobile applications
enables advanced customized services such as
alerts, targeted advertising, games, and improved,
push-based mobile messaging. In particular,
multimedia personalization is concerned with the
building of an adaptive multimedia system that
can customize the representation of multimedia
content to the needs of a user.
Multimedia personalization enlarges the application's complexity, since every individual's options have to be considered and implemented. It results in a massive number of variant possibilities: target groups, output formats, mobile end devices, languages, locations, and so on. Thus, manual selection and composition of multimedia content is not practical. A personalization engine is needed to dynamically create the context-dependent personalized multimedia content. General solution approaches concerning the personalization engine include personalization by transformation (using XML-based transformations to produce personalized multimedia documents), adaptive multimedia documents (using SMIL-like presentation-defined alternatives), personalization by constraints (treating personalization as an optimization or constraint-solving problem), personalization by algebraic operators (an algebra to select media elements and merge them into a coherent multimedia presentation), or broader software engineering approaches.
Mobile multimedia (M3) personalization es-
BACKGROUND
Adaptation Objectives
The diversity of end device and network capabilities in mobile applications, along with the known multimedia challenges (namely, the efficient management of the size, time, and semantics parameters of multimedia), demands that media content and services be flexibly modeled to provide easy-to-use and fast multimedia information. Multimedia adaptation is being researched to merge the creation of the services so that only one service is needed to cover the heterogeneous environments (Forstadius, Ala-Kurikka, Koivisto, & Sauvola, 2001). Even though adaptation effects could be realized in a variety of ways, the major multimedia adaptation technologies are adaptive content selection and adaptive presentation. Examples of adaptation include down-scaling the multimedia objects and changing the style of multimedia presentation according to the user's context conditions.
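As an illustration of adaptive content selection (not taken from the chapter), the sketch below picks a media variant whose resolution and bit rate fit the current device and network context; the context fields, variant names, and thresholds are assumptions made for the example.

```java
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

// Illustrative adaptive content selection: choose the richest media variant
// that still fits the device screen and the currently available bandwidth.
public class ContentAdaptationSketch {

    record MediaVariant(String uri, int width, int height, int bitrateKbps) {}
    record Context(int screenWidth, int screenHeight, int bandwidthKbps) {}

    static Optional<MediaVariant> selectVariant(List<MediaVariant> variants, Context ctx) {
        return variants.stream()
                .filter(v -> v.width() <= ctx.screenWidth()
                          && v.height() <= ctx.screenHeight()
                          && v.bitrateKbps() <= ctx.bandwidthKbps())
                .max(Comparator.comparingInt(MediaVariant::bitrateKbps));
    }

    public static void main(String[] args) {
        List<MediaVariant> video = List.of(
                new MediaVariant("clip_240p.mp4", 426, 240, 300),
                new MediaVariant("clip_480p.mp4", 854, 480, 1000),
                new MediaVariant("clip_720p.mp4", 1280, 720, 2500));

        // e.g., a phone on a constrained cellular link
        Context phoneOnCellular = new Context(854, 480, 1200);
        System.out.println(selectVariant(video, phoneOnCellular));   // -> clip_480p.mp4
    }
}
```

A full personalization engine would additionally weigh user-profile preferences and presence/availability information, but the same selection principle applies.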
In general, adaptive hypermedia and adaptive
Web systems belong to the class of user-adaptive
systems. A user model (the explicit representation of all relevant aspects of a user's preferences, intentions, etc.) forms the foundation of all
adaptive systems (Bauer, 2004). The user model
is used to provide an adaptation effect, which is
tailoring interaction to different users. The first
two generations (pre-Web and Web) of adaptive systems explored mainly adaptive content
selection and adaptive recommendation based
on modeling user interests. Nowadays, the third
(mobile) generation extends the basis of the adaptation by adding models of context (location,
Figure 1. Analyzing mobile setting

[The figure relates the mobility dimensions in user interactions (temporal and contextual mobility) and the characteristics of mobile applications (personal character, context-sensitivity, instant connectivity, constraints of devices and infrastructure) to adaptation and personalization, viewed from an environmental and a system perspective.]
CONTENT ADAPTATION AND PERSONALIZED USER INTERFACE

Analyzing Mobile Setting

Users' perspective characteristics must be regarded rather differently, because they are to a certain degree consequences of the system, and the envisioned concept of mobility influences on them.
Spatial mobility denotes mainly the most immediate dimension of mobility, the extensive geographical movement of users. As users carry their mobile devices anywhere they go, spatiality includes the mobility of both the user and the device.

Temporal mobility refers to the ability of users to browse on the move while engaged in a peripheral task.

Contextual mobility signifies the character of the dynamic conditions in which users employ mobile devices. Users' actions are intrinsically situated in a particular context that frames, and is framed by, the performance of their actions recursively.

Because of their mobility (and in correspondence with its dimensions), we distinguish three attributes regarding mobile device usage:
Figure 2. Mobile multimedia adaptation/personalization categories

[The figure groups the adaptation/personalization categories into M3 content (perception of quality, usage patterns, presence and availability, personal profile), M3 presentation (quality of multimedia, selection of multimedia items, personal profile, aesthetic elements, limited attention, limitations of screen space, presence and availability, operational elements), and M3 communication (personal profile, perception of quality, usage patterns, presence and availability, between users, between site and users).]
FUTURE TRENDS
CONCLUSION
REFERENCES
Bauer, M. (2004). Transparent user modeling for
a mobile personal assistant. Working Notes of the
Annual Workshop of the SIG on Adaptivity and
User Modeling in Interactive Software Systems
of the GI (pp. 3-8).
Billsus, D., Brunk, C. A., Evans, C., Gladish, B.,
& Pazzani, M. (2002). Adaptive interfaces for
ubiquitous Web access. Communications of the
ACM, 45(5), 34-38.
Brusilovsky, P., & Maybury, M. T. (2002). From
adaptive hypermedia to adaptive Web. Communications of the ACM, 45(5), 31-33.
Chae, M., & Kim, J. (2003). Whats so different
about the mobile Internet? Communications of
the ACM, 46(12), 240-247.
Cingil, I., Dogac, A., & Azgin, A. (2000). A
broader approach to personalization. Communications of the ACM, 43(8), 136-141.
Dunkley, L. (2003). Multimedia databases. Harlow, UK: Addison-Wesley/Pearson Education.
Elliot, G., & Phillips, N. (2004). Mobile commerce
and wireless computing systems. Harlow, UK:
Addison-Wesley/Pearson Education.
Forstadius, J., Ala-Kurikka, J., Koivisto, A., &
Sauvola, J. (2001). Model for adaptive multimedia
services. Proceedings SPIE, Multimedia Systems,
and Applications IV (Vol. 4518).
Ghinea, G., & Angelides, M. C. (2004). A user
perspective of quality of service in m-commerce.
Multimedia Tools and Applications, 22(2), 187-206. Kluwer Academic.
KEY TERMS
Content Adaptation: The alteration of the
multimedia content to an alternative form to meet
current usage and resource constraints.
MMDBMS: Multimedia database management system is a DBMS able to handle diverse
kinds of multimedia and to provide sophisticated
mechanisms for querying, processing, retrieving,
inserting, deleting, and updating multimedia.
Multimedia database storage and content-based
search is supported in a standardized way.
Personalization: The automatic adjustment of
information content, structure and presentation
tailored to an individual user.
This work was previously published in Handbook of Research on Mobile Multimedia, edited by I. Ibrahim, pp. 266-277, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 2.4
User Interaction and Interface Design with UML
Abstract
This chapter will show you how to use and specialise UML diagrams for describing the user
interfaces of a software system. In order to accomplish the description of user interfaces, the
proposed technique considers three specialised
UML diagrams called user-interaction, user-interface, and GUI-class diagrams, which will be built
following a model-driven development (MDD)
perspective. These diagrams can be seen as the
UML-based UI models of the system. In addition,
this chapter is concerned with code-generation
to implement the user interfaces of the system
by using GUI-class diagrams and user-interaction diagrams. A case study of an Internet book
shopping system is introduced in this chapter to
prove and illustrate the proposed user interaction
and interface design technique.
INTRODUCTION
1. Rapid prototyping of the developed software: Software modellers would find it useful to quickly generate user interfaces from high-level descriptions of the system.
2. Model validation and refinement: Prototyping can detect failures in design; the model can be refined and validated by testing user interfaces against user requirements.
3. Model-based code generation: Generated code would fit with the developed models.
4. Starting point for implementers: Prototypes can be refined until final implementation.
BACKGROUND
In the literature there are some works dealing with
the problem of user interfaces in UML.
With regard to previous works on code generation, from the UI models we can obtain prototypes of the user interface of our application. Through a mapping between UML and Java, we are able to generate low-level Java code directly from the user-interaction diagram. This code generation is adapted to the special case of user interfaces, which is user-event-based and handles input and output data by means of special kinds of UI components.
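To give a flavour of the kind of low-level Java Swing code such a mapping could emit, here is a minimal hand-written sketch for a hypothetical "query catalogue" window of the IBS case study; the class name, widget labels, and event handling are illustrative assumptions, not the chapter's actual generated code.

```java
import javax.swing.*;
import java.awt.BorderLayout;

// Illustrative sketch of Swing code that could be generated for a
// "query catalogue" user-interaction diagram: a JFrame container, input
// components (text field, button) and an output component (result list).
public class QueryCatalogueFrame extends JFrame {

    private final JTextField searchField = new JTextField(20);   // input component
    private final JButton searchButton = new JButton("Search");  // user event source
    private final DefaultListModel<String> results = new DefaultListModel<>();

    public QueryCatalogueFrame() {
        super("Query catalogue");
        JPanel top = new JPanel();
        top.add(new JLabel("Search criteria:"));                  // output component
        top.add(searchField);
        top.add(searchButton);

        add(top, BorderLayout.NORTH);
        add(new JScrollPane(new JList<>(results)), BorderLayout.CENTER);

        // Transition of the user-interaction diagram: the "Search" event
        // triggers the data-output state that shows the matching items.
        searchButton.addActionListener(e -> showResults(searchField.getText()));

        setDefaultCloseOperation(DISPOSE_ON_CLOSE);
        pack();
    }

    private void showResults(String criteria) {
        results.clear();
        // Business logic is omitted; a generated class would delegate to it here.
        results.addElement("No catalogue back-end wired in this sketch (query: " + criteria + ")");
    }

    public static void main(String[] args) {
        SwingUtilities.invokeLater(() -> new QueryCatalogueFrame().setVisible(true));
    }
}
```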
MODEL-DRIVEN DEVELOPMENT FOR USER INTERFACES

Use Case Diagrams
Use case diagrams are used as a starting point for user-interface design. Use cases are also a way of specifying required usages of a system, and they are typically used for capturing the requirements of a system (that is, what a system is supposed to do). The key concepts associated with the use case model are actors and use cases. The users and systems that may interact with the system are represented by actors. Actors always model entities that are outside the system. Use cases represent the set of tasks that the actors carry out. In addition, the use cases can be decomposed by means of include relationships, and they can also be related by means of generalisation/specialisation relationships that compare more general and particular tasks.

In order to design a prototype of the user interface, the use case diagram should include the system actors and the set of (main) tasks for each one in which he or she takes part. From a point of view of user interface modelling, the use case diagram can be seen as a high-level description of the main windows of the system.

To illustrate the functionality of the MDD-based technique we will explain a simple Internet book shopping (IBS) model. In the IBS example (Figure 1), three actors appear: the customer, the ordering manager, and the administrator. A customer directly makes

User Interaction Diagrams
The second modelling technique in our framework
Our activity diagrams include states and transitions. The states represent data output actions, that
is, how the system responds to user interactions
showing data (or requesting them). Then the user
can introduce data and the corresponding event
is handled and specified by means of transitions. Transitions can be conditioned, that is, the handled event is controlled by means of a condition, which can refer to data/business logic or to a previous user interaction. In other words, more than one transition from a state is possible, and which of them will run depends on data/business logic or the previous user choices. We call this kind of activity diagram, used for describing user interaction, a user-interaction diagram.
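As a rough illustration of how such a user-interaction diagram can be represented at run time (this is an illustrative sketch, not the chapter's metamodel), each state names a data-output action and each transition names a user event plus an optional guard condition:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.function.BooleanSupplier;

// Illustrative run-time representation of a user-interaction diagram:
// states are data output/request actions, transitions are user events
// optionally guarded by conditions on data/business logic or earlier choices.
public class UserInteractionDiagramSketch {

    static class State {
        final String dataOutputAction;                 // e.g. "show search form"
        final List<Transition> transitions = new ArrayList<>();
        State(String dataOutputAction) { this.dataOutputAction = dataOutputAction; }
    }

    static class Transition {
        final String userEvent;                        // e.g. "Search button pressed"
        final BooleanSupplier guard;                   // condition; always true if unconditioned
        final State target;
        Transition(String userEvent, BooleanSupplier guard, State target) {
            this.userEvent = userEvent; this.guard = guard; this.target = target;
        }
    }

    // Fire a user event: follow the first outgoing transition whose guard holds.
    static State handle(State current, String event) {
        for (Transition t : current.transitions) {
            if (t.userEvent.equals(event) && t.guard.getAsBoolean()) {
                return t.target;
            }
        }
        return current;                                // no enabled transition: stay in the state
    }

    public static void main(String[] args) {
        State queryForm = new State("show search criteria form");
        State resultList = new State("show matching catalogue items");
        boolean[] criteriaEntered = {true};            // stand-in for business-logic data

        queryForm.transitions.add(
                new Transition("search", () -> criteriaEntered[0], resultList));

        State next = handle(queryForm, "search");
        System.out.println(next.dataOutputAction);     // -> show matching catalogue items
    }
}
```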
Now, it is supposed that each use case in the use case diagram is described by means of a user-interaction diagram. However, from a practical point of view, it is convenient to use more than one user-interaction diagram for describing a use case. This is so because the logic of a use case is usually too complex. For this reason, a user-interaction diagram can be split into several user-interaction diagrams, where a part of the main logic is separately described. Consequently, user-interaction diagrams can include states that do not correspond to data output but rather represent subdiagrams. Now, it
is desirable in some cases to be able to combine
the logic of the subdiagrams and the main logic.
For this reason, we will use in the main diagram
transition conditions that refer to the logic of the subdiagrams.
Activity diagrams describe input and output user interactions. Given that we have decided to implement our user interface by means of the Java Swing package, we will consider the JFrame class as a container class that opens new frame windows (if needed). In addition, graphical components can be classified as input components (a text field, a button, etc.) and output components (a label, a list, etc.). Input/output components are associated with terminal states and transitions by using the ap-
Include Relationships
Figure 4. A piece of the user interface diagram of the administrator side
Figure 5. A piece of the user-interface diagram
Figure
first case
is the
catalogue
and
the The
querying
process
by query
adding the
searched items
purchase
case
. InIt this
case, the
to the shopping
cart.
is specified
bypurchase
adding the
user-interaction
diagram
contains
a state
Add to cart button as transition from
(and(use
to)
query
catalogue
in
the
case)
that
specialises
query catalogue. Therefore we can identify
catalogue
user
following
sense.relationship
The querybetween
purchase
a specialisation
interaction
describes
howsupposed
to query that
the
catalogue
. It is also
and query diagram
catalogue
of
the
IBS
by
introducing
the
searching
there will be a window for query catalogue
criteria
and purchase
showing the
results. However, the
inherits.
from which
purchase
user
interaction
can interrupt
The second case is the diagram
relationship
between
the
querying
process
by
adding
the
searched
items
query catalogue and query catalogue
by
to
the
shopping
cart.
It
is
specified
by
adding
the
administrator. In this case, the administrator
Add
to cart
button
as transition
(and to)
is supposed
to have
higher
privilegesfrom
for querying
query
catalogue
.
Therefore
we
can
identify
the catalogue and therefore the user interaction
purchase
adiagram
specialisation
relationship
betweenby
catalogue
adminof the query
and
query
catalogue
.
It
is
also
supposed
istrator (see Figures 5 and 6) specialises inthat
the
query
catalogue
there
will
be
a
window
for
query catalogue user interaction diagram
from which purchase inherits.
Figure 6. The user-interaction diagram for the
Figure
6. The user-interaction
diagram for the
query catalogue
by administrator
query catalogue by administrator
Thefollowing
second case
is the
in the
sense.
Therelationship
states of thebetween
query
query catalogue and query catalogue by
catalogue by administrator corresponding
administrator. In this case, the administrator
412
classtodiagram
isGuI
connected
more than one use case, it can be
considered a window by the actor that invokes (or
The nexteach
stepwindow
of ourofmodel-driven
embeds)
each use case.technique
consists
of
the
building
of a class
for
Therefore, the actor window
candiagram
be a menu
GUI components.
The in
user-interface
diagrams
window.
In addition,
the user-interaction
obtained
in
the
previous
state
give
us
the have
main
diagrams obtained from use cases, we
windows.
Eachinput
use and
caseoutput
connected
to an actor
also
described
components
for
can
be
converted
into
a
window,
and
if
an
actor
data output and request and user events. It gives
is connected
to more thanfor
oneeach
use case,
it can
us
the GUI components
window.
If be
a
considered
a
window
by
the
actor
that
invokes
(or
user-interaction diagram has a state described
embeds)
windowuser-interaction
of each use case.
by
meanseach
of another
diagram,
Therefore,
canthebeuse
a menu
we can
supposethe
thatactor
the window of
case
window.
In
addition,
in
the
user-interaction
could also contain a separate window for this
diagramstask.
obtained
fromnow,
use we
cases,
have
separate
However,
havewe
to take
also
described
input
and
output
components
for
into account the user-interface diagram and the
datacase
output
and requestInand
It gives
use
relationships.
theuser
caseevents.
of inclusion,
us
the
GUI
components
for
each
window.
the logic is also separate, and it is possibleIftoa
user-interaction
diagram
has a state
described
consider
a new window.
However,
in the
case of
by
means
of
another
user-interaction
diagram,
generalisation/specialisation, the window corwe can suppose
the windowand
of the
use case
responds
with a that
specialisation,
therefore
it
could
also
contain
a
separate
window
for
this
is better to consider new window by using the
separate task.
However, now, we have to take
inheritance
relation.
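To illustrate this mapping, the following is a minimal sketch (not taken from the chapter) of how the window classes of the IBS example could look in Java Swing. It assumes the class names used later in the text (querycatalogue, purchase, Customer) and a hypothetical menu entry; it is only meant to show how generalisation becomes inheritance and how an actor window becomes a menu window that invokes the use-case windows.

import java.awt.event.*;
import javax.swing.*;

// Window obtained from the query catalogue use case.
class querycatalogue extends JFrame { }

// purchase specialises query catalogue, so its window is obtained
// through the inheritance relation.
class purchase extends querycatalogue { }

// The Customer actor is connected to several use cases, so its window
// behaves as a menu window that invokes (or embeds) each use-case window.
class Customer extends JFrame {
    Customer() {
        JMenuBar bar = new JMenuBar();
        JMenu useCases = new JMenu("Use cases");
        JMenuItem purchaseItem = new JMenuItem("purchase");
        purchaseItem.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) {
                new purchase().setVisible(true);
            }
        });
        useCases.add(purchaseItem);
        bar.add(useCases);
        setJMenuBar(bar);
    }
}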
Figure 8. Purchase class obtained from user-interface diagrams and user-interaction diagrams

Figure 9. Query catalogue class obtained from user-interface diagrams and user-interaction diagrams

Figure 10. Shopping cart class obtained from user-interface diagrams and user-interaction diagrams
Figure 11. Confirm proceed class obtained from user-interface diagrams and user-interaction diagrams
Figure 12. Confirm remove article class obtained from user-interface diagrams and user-interaction diagrams
… into a JTextField class in the GUI-class diagram. Something similar happens with the rest of stereotyped states and transitions. Figures 8 to 12 show the main classes of the GUI-class diagram of the customer's side. As it can be seen in these figures, the stereotyped states and transitions in the user-interaction diagrams are translated into Java classes in the GUI-class diagram. The stereotype name of a transition or state is translated into the appropriate Java Swing class. For example, the <<JButton>> stereotype of the Proceed transition that appears in the purchase user-interaction diagram (see Figure 2a) is translated into a JButton class.
GUI Prototypes

Figure 13. The applet windows of the customer's side
Finally, rapid GUI prototypes could be obtained
from the GUI-class diagram. Figure 13 shows
a first visual result of the purchase window.
Note how the purchase window is very similar
to the query catalogue window, except that
the second one includes three buttons more than
the first window. This similarity between windows
was revealed in the user interface diagram as a
generalisation relationship between use cases:
between the query catalogue and purchase
use cases. In the IBS design, the customer will always work on a purchase window opened from
the Customer window, and never on a query
catalogue window, though the former inherits
the behaviour of the latter (i.e., by the relation of
generalisation). Let us remark that purchase
inherits from query catalogue, and that the five windows (GUI) are the five use cases of the client side (see Figure 3).
The shopping cart window (Figure 13c) appears when the show cart button is pressed
on the purchase window (Figure 13b). Note in
the user interface diagram, shown in Figure 7,
how the button is associated with the window
by means of an inclusion relation between use
cases. On the other hand, the two information
windows (Figure 13d) are also associated with
two buttons: the remove article button in the
shopping cart window and the proceed button
in the purchase window. Note again how these
windows are also described as inclusion relations
between use cases.
Moreover, observe the user-interaction diagrams shown in Figure 2 to better track the behaviour of the example. To develop the example,
we have used the Rational Rose for Java tool.
For space reasons, we have only included a part
of the GUI project developed for the case study.
A complete version of the project is available at
http://indalog.ual.es/mdd/purchase.
If the user confirms the elimination (in the window of slide 8) he or she would get to the last level of the window shown in slide 9; otherwise, he or she would get back to slide 7.
Figure 15. A summary for the user-interface and user-interaction diagram of the purchase function
… of lines of Java code. The most general structure of our method is the Frame classes for the windows of the system.
To explain how to make this relationship between diagrams and code, we are focussing on the purchase user-interaction diagram. Figure 15 shows three perspectives of the purchase:
Table 1. Use cases and generated file names
purchase: purchase.java
query catalogue: querycatalogue.java
shopping cart: shoppingcart.java
confirm proceed: confirmproceed.java
confirm remove article: confirmremovearticle.java
Table 2. Diagram and code generation (excerpt for query catalogue)
import java.awt.*;
import javax.swing.*;
with its equivalent in the user-interface perspective, including the type of relationship between
windows (in order to clarify the dependences
between them).
In this example, these five use cases lead to
five Java files of frame type. Their names are obtained from the use case name (which will be the
same as the one for the user-interaction diagram,
one per use case). If compound nouns exist then
a simple name is created with the initials of each
word in capital letters (Table 1).
Classes extend the JFrame class and present a general structure of implementation based on frames (see Table 2). The basic structure of the frame prototype created in the diagrams-to-code transition is composed of four sections: a heading, a class constructor, an initiator of the frame (jbInit), and a basic prototype of the implementation of the detected methods (related to the presentation logic). In the heading (labelled with /** @GUIcomponents */), the attributes are the graphical components that have been detected in the corresponding user-interaction diagram. The class constructor is the same for all generated code files (except for the name of the constructor). The initiator of the frame (jbInit) contains the code lines that model the graphical content of the window. The designer should establish the position and size of the graphical components of the frame later on, as the generated code is just a prototype of the window. The fourth section of the frame contains the basic methods related to the user's interaction with the graphic elements he or she can interact with: buttons, lists, and text fields.
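As a hedged illustration of this four-section structure, a generated frame for the purchase window might look roughly as follows. The show cart button comes from the case study, but the component names, the constructor body, and the method body are assumptions rather than actual tool output (the real patterns are listed in Appendix A).

import java.awt.event.*;
import javax.swing.*;

public class purchase extends JFrame {   // the generated class actually extends querycatalogue

    /** @GUIcomponents */                 // (1) heading: components detected in the user-interaction diagram
    public JButton showcart_Button = new JButton();

    public purchase() {                   // (2) class constructor, identical in every generated file
        try { jbInit(); } catch (Exception e) { e.printStackTrace(); }
    }

    private void jbInit() throws Exception {   // (3) initiator of the frame
        /** @Button */
        showcart_Button.setText("show cart");
        showcart_Button.addActionListener(new ActionListener() {
            public void actionPerformed(ActionEvent e) { showcart_Button_purchase_GUI(); }
        });
        this.getContentPane().add(showcart_Button, null);
    }

    /** @Button */                        // (4) basic prototype of the detected methods
    void showcart_Button_purchase_GUI() { /* presentation logic to be completed by the designer */ }
}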
GUI Components
In order to analyze how the diagram components are translated to code inside the frame, we will again use the purchase example in Figure 15.
The behaviour of the purchase window begins with the initial state query catalogue (represented

<<JButton>> [ selected article ] / Add to cart

The transition [exit] in the initial query catalogue state corresponds with the operation of clicking on the exit button inside the behaviour diagram of the window (frame): query catalogue (window inherited by purchase). The same happens with the other two transitions that reach this initial state from the state Confirm proceed (window to confirm the purchase).
For this example, there are four modelled graphical components: labels, text fields, lists, and buttons. Table 3 shows the correspondence between graphic components of the user-interaction diagram (stereotypes) and the code generation.
In Table 3, the column stereotype represents the stereotypes used in the states/transitions of the user-interaction diagram. The columns attribute and class represent the name of the attribute and the base class instantiated in the generated code. The criterion followed to establish the name generated in the code is name_Type. That is, the same name indicated in the interaction diagram is used, followed by an underscore (i.e., _) and finished by the base type with a capital letter (Label, Button, List, etc.). If the original name has blanks
Table 3. Stereotypes of the user-interaction diagram and their code generation (columns: stereotype, attribute(s), class, in/out, listener, state/transition, markup)

<<JLabel>> name: name_Label; JLabel; Out; no listener; State; /** @Label */
<<JTextField>> name: name_Label, name_TextField; JTextField; In/Out; no listener; State; /** @TextField */
<<JList>> name: name_Label, name_ScrollPane, name_List; JList; In/Out; listener maybe; State; /** @List */
<<JButton>> name: name_Button; JButton; In; listener yes; Trans.; /** @Button */
[condition]: no attribute; no class; In/Out; no listener; Trans.; no markup
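For instance, applying the name_Type criterion of Table 3 to the results list state of query catalogue (Figure 18) would presumably produce heading attributes such as the following; this instantiation is not listed in the chapter and simply follows the Appendix A patterns.

/** @GUIcomponents */
public JLabel results_Label = new JLabel();
public JScrollPane results_ScrollPane = new JScrollPane();
public JList results_List = new JList();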
Figure 16. The confirm remove article-label state in query catalogue

Figure 17. The searching criteria text field state in query catalogue

Figure 18. The results list state in query catalogue
Note that the purchase class inherits querycatalogue in the code. The two include relationships are included in the code in the heading section (one of the four sections that the frame has). On that code line, a @GUI mark is inserted to trace the inclusion of the window in the translation process. Moreover, the criterion for the name of the GUI variable is to take the original name of the included window followed by _GUI.
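Under this rule, the heading section of the purchase class would presumably declare one attribute per included window. The exact generated line is not shown in the chapter, so the two declarations below (for the shopping cart and confirm proceed windows that purchase includes) are an assumption consistent with the stated naming criterion.

public class purchase extends querycatalogue {
    /** @GUIcomponents */
    /** @GUI */ public shoppingcart shoppingcart_GUI = new shoppingcart();
    /** @GUI */ public confirmproceed confirmproceed_GUI = new confirmproceed();
}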
Terminal States
Terminal states (label, list, textfield) are translated
into code lines in the heading, initiation of Frame
( jbInit) and implementation of methods sections
(only in list). Appendix A contains the Java code
patterns translated from the graphical components
of the user-interaction diagram. Let's see each of them separately.
For a label case, three basic code lines are generated: one in the heading to define the type (JLabel) and two in the initiation (jbInit) to establish the component (see Figure 16).
Figure 19. The purchase window
ACKNOWLEDGMENT
This work has been partially supported by
the Spanish project of the Ministry of Science
and Technology TIN2005-09207-C03-02 and
TIN2006-06698 under FEDER funds.
Code Generation
import java.awt.*;
import java.awt.event.*;
import javax.swing.*;

public class className extends JFrame {
/** @GUIcomponents */
public JButton name_Button = new JButton();

private void jbInit() throws Exception {
/** @Button */
name_Button.setText(name);
name_Button.addActionListener(
new java.awt.event.ActionListener() {
public void actionPerformed(ActionEvent e) {
name_Button_className_GUI(); } });
/** @Panel */
this.getContentPane().add(name_Button, null);
} // end jbInit()

/***
* Method THIS
*/
/** @Button */
void name_Button_className_GUI() { ... }
}
Code Generation
import javax.swing.*;

public class className extends JFrame {
/** @GUIcomponents */
public JLabel name_Label = new JLabel();

private void jbInit() throws Exception {
/** @Label */
name_Label.setText(name);
/** @Panel */
this.getContentPane().add(name_Label, null);
} // end jbInit()
}
Code Generation
import javax.swing.*;

public class className extends JFrame {
/** @GUIcomponents */
public JLabel name_Label = new JLabel();
public JTextField name_TextField = new JTextField();

private void jbInit() throws Exception {
/** @Label */
name_Label.setText(name);
/** @Panel */
this.getContentPane().add(name_Label, null);
this.getContentPane().add(name_TextField, null);
} // end jbInit()
}
Code Generation
import javax.swing.*;
import javax.swing.event.*;

public class className extends JFrame {
/** @GUIcomponents */
public JLabel name_Label = new JLabel();
public JScrollPane name_ScrollPane = new JScrollPane();
public JList name_List = new JList();

private void jbInit() throws Exception {
/** @Label */
name_Label.setText(name);
/** @List */
name_ScrollPane.getViewport().add(name_List, null);
name_List.addListSelectionListener(
new javax.swing.event.ListSelectionListener() {
public void valueChanged(ListSelectionEvent e) {
selected_articles_List_ShoppingCart_GUI(); } });
/** @Panel */
this.getContentPane().add(name_Label, null);
this.getContentPane().add(name_ScrollPane, null);
} // end jbInit()

/***
* Method THIS
*/
/** @List */
void selected_articles_List_ShoppingCart_GUI() { ... }
}
/** @Button */
void accept_Button_ConfirmRemoveArticle_GUI() { this.setEnabled(true); }
This work was previously published in Visual Languages for Interactive Computing: Definitions and Formalizations, edited
by F. Ferri, pp. 328-356 , copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 2.5
ABSTRACT
INTRODUCTION
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Qualities
Each product will have its own specific user
requirements. The EDF proposes that four fundamental qualities underlie these requirements.
Usable
To go beyond usability does not mean that it is
no longer necessary. The benefits of usability are
well documented (e.g., Bevan, 2000). Usability is
defined as the extent to which a product can be
used by specified users to achieve specified goals
with effectiveness, efficiency and satisfaction in a
specified context of use (ISO, 1998) and the capability of the software product to be understood,
learned, used and attractive to the user, when used
under specified conditions (ISO, 2000). Indeed,
in long-term use of a product, there is evidence
Accessible
The Web relies on a level of standardisation
such that pages can be accessed by anyone, on
whatever browser or device they use, be it a PC,
Mac, or mobile phone. As the Web has grown,
so have the standards and so have the number of
inaccessible Web sites. Despite the publication of
guidelines for accessible web design by the World
Wide Web Consortium (W3C, 1999), a survey of
Web designers found that difficulty interpreting
guidelines was a major barrier to implementing
them (Knight & Jefsioutine, 2003). Furthermore,
the UK's Disability Rights Commission (2004) found that nearly half of the usability and accessibility problems were not violations of any of the WCAG's checkpoints. DiBlas, Paolini, Speroni, and Capodieci (2004) argue that W3C guidelines "are not sufficient to ensure an efficient - even less satisfactory - Web experience" (p. 89). It is
important that accessibility is seen as part of the
user experience rather than a series of technical
checkpoints to cover.
Engaging
Shedroff (2001, p. 4) suggests an experience
comprises of an attraction, an engagement, and
a conclusion. What attracts someone to a product
could be a need to perform a task, an aesthetic
quality, or an affordance. The engagement is then
sustained over a period of time, beyond the initial
attraction. Csikszentmihalyi (1991) describes
the experience of optimal experience and flow:
Concentration is so intense that there is no attention left over to think about anything irrelevant,
or to worry about problems. Self-consciousness
disappears, and the sense of timing becomes
distorted (p. 71).
Although this level of engagement may not be
appropriate for all products, it is useful to consider
the properties of an experience that make it engaging. Fiore (2003) suggests that an experience
includes a number of dimensions: it is educative
and memorable; whole, unique and nonreproducible; historical; meaningful/aesthetic; contextual;
physical/sensual/embodied; and situated in time
and space. Jones, Valdez, Nowakowski, and Rasmussen (1994) suggest that engaged learning tasks
are challenging, authentic, and multidisciplinary.
Such tasks are typically complex and involve
sustained amounts of time and are authentic.
Quinn (1997) suggests that engagement in learning applications comes from two factors, interactivity and embeddedness, where the user perceives
that they have some control over the system, and
it is relevant and meaningful to them.
Beneficial
According to Csikszentmihalyi (1991), an optimal
experience is so gratifying that people are willing
to do it for its own sake, with little concern for
what they will get out of it (p. 71). One might
assume that what they are getting is some degree
of pleasure. Jordan (2000) identifies pleasure as
the ultimate quality of the user experience, and
DeJean (2002) points out that pleasure is a complex
Dimensions of Experiencing
McDonagh-Philp and Lebbon (2000) suggest that
emphasis must change from hard functionality to soft functionality (p. 38). Rather than focussing on what a product does and how it does it,
the focus is on less tangible aspects like emotional
Research Contexts
This section describes a set of key questions that
can be asked throughout the design process to develop an understanding of the contexts of product
use. The questions are derived from a number
of models including Rothstein's (2002) model consisting of the four As of activity, artefacts, atmosphere and actors (p. 3), and Ortony, Clore, and Collins' (1988) cognitive model comprising events, agents and objects (p. 63). The EDF
advocates four key contexts: (1) who and why
(users and stakeholders and their motivations); (2)
what and how (content, tasks, task flow, actions,
functionality); (3) when and where (situation,
frequency, environment); and (4) with what (tools,
knowledge, and skills).
With What
This refers to objects, artefacts, or tools that are
being used or are influencing use (such as software,
browsers, input devices, assistive technologies),
and to knowledge, expertise, and skills that are
Exploration
These methods are about discovery and can be
drawn from demography, ethnography, market
research, psychology, and HCI. They include
surveys, interviews, questionnaires, focus groups,
task analysis, field observation, user testing, affinity diagramming, laddering, and experience
diaries. A key area of exploration is in understanding users' mental models of a domain. This can be explored by, for example, in-depth elicitation techniques and card sorts.
Contextual interviews are conducted during the activity or in the environment in which
the product will be used. Users are able to refer
Communicating
Design involves communication with a wide
range of people, from users to software engineers.
Design teams need to accommodate different
viewpoints and share a common language. Curtis
(2002) emphasises the importance of actively
listening to a client and finding out the story
behind a product. Hammer and Reymen (2004)
stress the importance of designers expressing
their emotional as well as rational reflections on
design decisions. Communication methods serve
to clarify and share the goals of stakeholders, the
exploratory research data, design requirements,
and ideas to a multi-disciplinary team who may
not have a common vocabulary. It is important
to ensure good communication throughout the
process to ensure the product itself communicates
the design goals effectively. Methods include story
telling; user profiles and personas; use cases or task
scenarios; scenario-based design; mood boards;
written briefs and specifications; storyboarding;
and prototypes.
User profiles are generated from demographic
data and lifestyle surveys. They can include textual
and visual descriptions of key user groups. They
are used to think through design solutions and for
recruiting users for research. Mood boards are
normally collages of photographic information
that aim to generate a visual personality for the
product or service. Mood boards can be created
by designers or users or be the result of collaboration between the two. Mood boards are useful
because they work at a nonverbal level where
Empathy
These methods represent an approach aimed at
gaining a deeper understanding and empathy
for users. They include focus groups, diaries,
workshops, participatory design, and immersion.
Diaries can be used to record users interaction
with a product over time while workshops and
participatory design involve users in the development team either directly or through a user
advocate that champions their perspective. Participant observation, or eat your own dog food,
involves taking part in the activity or culture being
observed, where the designer becomes a user.
Molotch (2003) describes a method used by Ford
designers in which they test products dressed in
what they call a third age suit, with glasses and
gloves, to simulate having the body and eyesight
of a 70-year old (p. 49).
Crossley (2004) describes a technique used
to help designers develop an awareness of the
target audience for male grooming products. It
involved:
rapidly immersing the design team into the lives,
hearts and minds of people in a short space of time.
The challenge for this project was to get young men
inspired to tell us their own stories and express
their emotions about a mundane functional activity … Character modelling [was used] where
the team and sometimes the user has a kit with
questions, cameras and collages, [that enabled
them] to frame and understand the lifestyle of the
person they are creating for. (pp. 38-39)
Speculation
In addition to understanding users wants and
needs, designers also need to speculate about new
solutions and future trends. The Sony Walkman,
for example, introduced an entirely novel mode of
behaviour that no users had asked for. The decision to add text messaging to mobile phones was
based on a speculation of how that functionality
might be needed and by whom. The success of
text messaging, however, was based on its uptake
by an entirely different user group with its own
needs and method of use. Designers may need
to predict how users will adopt and adapt to a
new product.
Evaluation
Evaluation methods include auditing, standards
compliance, and user testing. Evaluating may
also use similar methods to exploration, with
a shift in emphasis from discovery to checking
outcomes against intentions. Does the product
meet the design goals and/or the user expectations?
Evaluation can be formative, conducted during
the development of a product, or summative, conducted when a product is complete and is being
used. Summative testing of an earlier version or
a similar product can be useful to identify design
goals, while summative testing of the product at
the end of its design lifecycle is usually done for
auditing and verification purposes. Feedback at
this stage is of little use to the designer: the deadline has passed and the money is spent. Clearly
formative testing is most helpful to a designer/
developer. Gould and Lewis (1985) stress the
importance of empirically testing design iterations throughout the design process. Evaluative
tools such as heuristics are often used, although
evidence suggests that they are no substitute for
testing real users (e.g., Lee, Whalen, McEwen, &
Latremouille, 1984). The EDF broadens the test
and evaluative criteria from the traditional focus
on cognitive and behavioural measures. Bonapace
(2002) describes a method aimed at tapping into
the four pleasures described by Jordan (2000),
called the Sensorial Quality Assessment Method
(SEQUAM) applied to the design of physical
products in car manufacturing. User testing can
combine empirical methods of behavioural observation with techniques such as co-discovery,
think aloud and empathic interviewing, to tap into
Visioning Workshops
Visioning workshops usually take place at the
beginning of the design process, preferably before
the brief is finalised. The workshop is structured
around the experience design framework and fulfils a number of functions. Techniques are adapted
to improve communication; to build literacy and
a common language to describe the medium; and
to build an empathic understanding of users and
other contexts of use. Individual and group activities are facilitated by a researcher and recorded
by a scribe. Activities include the following:
Contextual Interviews
Contextual interviews take place prior to detailed
requirements specifications and may follow a
visioning workshop. Firstly, stakeholders are
identified and a representative sample recruited.
The aim is to survey a large sample and to iteratively develop knowledge about the design
problem. In this context, as many as 50 users may
be involved and the focus is to gain as full a set
of requirements as is possible in a short space of
time. The interviews are semi-structured over
approximately 30 minutes and are conducted
within the context of use. Their exact format is
dependent on whether the product is a new one or
a refinement of an existing one. In the former case,
the researcher works with low-fidelity prototypes
and in the latter case with the existing product.
Activities include the following:
The results of interviews inform the production of anonymous personas and use scenarios,
which are used to communicate the requirements
to the development team and build their empathy
with the users.
Participatory Prototyping
Participatory prototyping combines the skills
of the development team with user feedback.
Prototypes are developed of content structures,
interaction flow, and layouts. Initial prototypes
are developed and users are asked to critique
or adapt the prototype with specific reference
to the qualities of the EDF and the dimensions
of experience. Designers interpret this feedback
and develop further prototypes, with the focus on
interaction and structure, and later on look and
feel issues. Activities include the following:
By the end of the process a complete low-fidelity prototype has been developed that has been
iterated around the qualities of the EDF.
stakeholder groups. The prototype then undergoes a group critique that tests the solution
against the initial requirements and evaluative
criteria gathered by the visioning workshop and
contextual interviews. As well as ensuring that
the prototype is suitable the workshops gauge
barriers and opportunities to take up and adoption of a new product or design. In addition, it
provides the development team with a rationale
and evidence of the suitability of the final design
solution. Activities include the following:
User diaries
Online discussion groups and surveys
Focus groups and ongoing user testing
References
Bevan, N. (2000). Esprit Project 28015 Trump:
Cost benefit analysis. Retrieved April 26, 2002,
from http://www.usability.serco.com/trump/documents/D3.0_Cost%20benefit_v1.1.doc
Beyer, H., & Holtzblatt, K. (1998). Contextual
design: Designing customer-centred systems. In
S. Card, J. Grudin, M. Linton, J. Nielson, & T.
Skelley (series Eds.). Morgan Kaufmann series in
interactive technologies. San Francisco: Morgan
Kaufmann.
Bonapace, L. (2002). Linking product properties
to pleasure: The sensorial quality assessment
Campbell-Kelly, M., & Aspray, W. (1996). Computer: A history of the information machine. New
York: Basic Books.
Carmichael, A., Newell, A. F., Morgan, M., Dickinson, A., & Mival, O. (2005, April 5-8). Using
theatre and film to represent user requirements.
Paper presented at Include 2005, Royal College
of Art, London. Retrieved from http://www.hhrc.
rca.ac.uk/programmes/include/2005/proceedings/pdf/carmichaelalex.pdf
Cayol, A., & Bonhoure, P. (2004). Prospective
design oriented towards customer pleasure. In
D. McDonagh, P. Hekkert, J. van Erp, & D. Gyi
(Eds.), Design and emotion: The experience of
everyday things (pp. 104-108). London: Taylor
& Francis.
Cooper, A. (1999). The inmates are running the
asylum. Why high-tech products drive us crazy
and how to restore the sanity. Indianapolis, IN:
SAMS.
Cross, N. (2001). Designerly ways of knowing:
Design discipline versus design science. Design
Issues, 17(3), 49-55.
Crossley, L. (2004). Bridging the emotional gap.
In D. McDonagh, P. Hekkert, J. van Erp, & D.
Gyi (Eds.), Design and emotion: The experience
of everyday things (pp. 37-42). London: Taylor
& Francis.
Csikszentmihalyi, M. (1991). Flow: The psychology of optimal experience. New York: Harper
Collins.
Curtis, H. (2002). MTIV: Process, inspiration and
practice for the new media designer. Indianapolis,
Maguire, M. (2004). Does usability = attractiveness? In D. McDonagh, P. Hekkert, J. van Erp, &
D. Gyi (Eds.), Design and emotion: The experience of everyday things (pp. 303-307). London:
Taylor & Francis.
Maslow, A. (1970). Motivation and personality
(2nd ed.). New York: Harper & Row.
McDonagh-Philp, D., & Lebbon, C. (2000). The
emotional domain in product design. The Design
Journal, 3(1), 31-43.
Molotch, H. (2003). Where stuff comes from:
How toasters, toilets, cars, computers and many
other things come to be as they are. London; New
York: Routledge.
Norman, D. (2004). Emotional design: Why we
love [or hate] everyday things. New York: Basic
Books.
Ortony, A., Clore, G. L., & Collins, A. (1988). The
cognitive structure of emotions. Cambridge; New
York; Melbourne: Cambridge University Press.
Quinn, C. N. (1997). Engaging Learning. Instructional Technology Forum (ITFORUM@UGA.
CC.UGA.EDU), Invited Presenter. Retrieved June
19, 2003, from http://itech1.coe.uga.edu/itforum/
paper18/paper18.html
Reeves, B., & Nass, C. (2002). The media equation:
How people treat computers, television, and new
media like real people and places. Stanford, CA:
Center for the Study of Language and Information Publications.
Rittel, H., & Webber, M. (1973). Dilemmas in a general theory of planning. Policy Sciences, 4, 155-169.
This work was previously published in Human Computer Interaction Research in Web Design and Evaluation, edited by P.
Zaphiris, pp. 130-147, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 2.6
Abstract
Introduction
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Integrating Usability, Semiotic, and Software Engineering into a Method for Evaluating User Interfaces
User-Interface Evaluation
In this section, we present concepts and evaluation
techniques from usability engineering, software
engineering, and semiotic engineering.
Usability Engineering
Usability engineering is a set of activities that
ideally take place throughout the lifecycle of
Software Engineering
Semiotic Engineering
Semiotic engineering is an HCI theory that emphasizes aspects related to the metacommunication from designer to user(s) via user-system communication,
which passes through the UIs of interactive
applications. The system is considered to be
the deputy or a representative of the system
designer (Souza, Barbosa, & Silva, 2001). The
content of messages is the application usability
model. Its expression is formed by the set of all
interaction messages sent through the UI during
the interaction process. The user plays a double
role: interacting with the system and interpreting
messages sent by the designer.
Semiotic engineering is essentially involved in
test procedures with final users (empiric evaluation), aiming at system communicability analysis
based on qualitative evaluation, in which there are four phases: test preparation; labeling; interpretation; and formatting and elaboration of the semiotic profile of the application to be evaluated. The techniques used in these phases are: system-user observations, questionnaires, (summative-evaluation) inspections, interviews, filming, and so on.
Semiotic engineering is essentially present in
tests with final users (e.g., empiric evaluation),
aiming at analyzing the system communicability.
A UI has a good level of communicability when it
is able to successfully transmit the designer message to the user, allowing him or her to understand
the system goal, the advantages of using it, how it
works, and the basic UI interaction principles.
Evaluation
After studying evaluation techniques, artifacts, and approaches from software, usability, and semiotic engineering, we are able to conclude that
an evaluation process can be seen under various
perspectives.
Concerning software engineering, we noticed
the importance of software quality concerning
The Process
UPi can serve as a guide, providing useful steps
and artifacts that can be tailored and customized
when organizations intend to develop usable
interactive systems. One of the best advantages
of UPi is the idea to focus on activities, artifacts
and guidelines that add value to the UI generation.
With this approach, it can be integrated with any
other process and inherit activities that are vital
to the entire process, but that are better defined
and solidified in other processes. For instance,
project management, configuration and change
management, implementation, and deployment
activities are very well detailed in the RUP
(Kruchten, Ahlqvist, & Bylund, 2001). Besides
the RUP, UPi can also be applied in conjunction
with ISO 13407 (ISO 13407, 1999), which already
has other activities defined and validated, such as
project planning, testing, and so on.
UPi is composed of activities that aim at designing UIs. These activities are based on RUP
activities, but they follow different guidelines that
take into consideration usability aspects.
In this work, we are integrating UPi with
UPi-Test (to be presented in the next section) in
order to guide professionals that are developing
interactive systems to evaluate them throughout
the entire development process.
Phase I: Inception
The main goal in this phase is to elicit requirements from users in order to develop an interactive system that best suits their needs through the
execution of some activities (presented as follows).
These requirements are documented through
certain artifacts: use-case models, task models,
usability requirements, and paper sketches.
Use-case models represent a well-established
manner to define the system functionality, while
task models can be used to detail use cases by
breaking them down into tasks. Usability requirements represent users preferences or constraints
that can be part of a usable interactive system.
Paper sketches focus on the interaction, UI
components, and on the overall system structure,
keeping the style guide secondary, without being
too abstract.
The purpose of the Elicit Stakeholder Needs
activity is to understand users, their personal
characteristics, and information on the environment where they are located that have a direct
influence on the system definition, and to collect special nonfunctional requirements that the
system must fulfill, such as performance, cost,
and device requests.
The purpose of the Find Actors and Use Cases
and Structure the Use-case Model activities is
to define the actors (users or other systems) that
will interact with the system and the functionality of the system that directly attend to users
needs and support the execution of their work
productively.
Phase I: Inception
The inception phase is important in guaranteeing
that the following phases achieve results to attend
to users and customers usability goals. This phase
has the constant participation of users in order
to understand their requests, which are verified
and validated according to usability engineering
to allow the development of these requests. The
description of the activities and artifacts in this
phase is presented as follows.
Figure 1. Evaluation process of UIs
(if new functionality is requested) and new proposals of prototypes (if changes in the navigation
are requested). This process is repeated until the
generated prototype attends users preferences
and needs.
This activity early in the process provides
flexibility for users and customers to evaluate the
evolution of the system, therefore, designers and
users feel more confident with the UI design.
After the conclusion of the inception phase,
the resulting artifacts are verified and validated
as paper sketches.
Use Observations/Checklist/
Questionnaires
Observations, questionnaires, and checklists are
artifacts and techniques proposed by the semiotic
engineering in order to verify the user-system
interactivity and communicability. Experts and
observers will use these artifacts during the
tests, which result in the definition of the quality
of the interactive system. These results can lead
to change requests for developers to correct the
detected mistakes.
Users comments and the actual execution of
the tests will be recorded to help in the analysis
of the results of the questionnaires and of users
observations.
Make Corrections
In this activity, developers make corrections proposed by experts after the tests. After the changes
are made, users validate the product.
Elaborate Reports
The results obtained will be used as a basis for
the elaboration of evaluation reports, which
propose adaptations in the used patterns and in
the creation of new patterns that can be used in
future iterations.
Case Study
In this chapter, we describe the case study of this
research work, which is concerned with the evaluation of UIs for the SBTVD project, focusing on
the applications: electronic portal, insertion of
texts, and help.
Introduction
The digital TV represents digital and social inclusion for a great part of the Brazilian population,
especially for people less privileged, who do not
have access to computers, and therefore, cannot
access the Internet.
The SBTVD must be adapted to the socioeconomic conditions of the country, as well as allow
the use of conventional TV sets already in large
use in the country in order to decrease risks and
costs for the society.
The digital TV creates various possibilities of
interaction between the TV and the user, such as:
exchange of text or voice messages, virtual chats,
searches for a favorite show, access to information about the government, and so on. These
possibilities are different from the characteristics
of the conventional TV, in which the user plays
a passive role.
Phase I: Inception
Talk with Users and/or Customers
This activity was difficult to perform in this project
because there is an almost unlimited number of
users and/or customers. Fortunately, there were
specific and clear specifications, established by the
Brazilian government. Such specifications were
used as requests from users and customers.
To define the scope of the application under
our responsibility (access portal), we had meetings with representatives of the government and
with other institutions that are participating in
the project. These meetings were supported with
brainstorming and the resulting decisions were
analyzed and were shared with all the institutions
through e-mails and discussion lists.
After these meetings, we decided that the portal
will consist of a main application that allows the access to all other applications in the SBTVD, which
can be: electronic mail, electronic commerce, EPG,
help, electronic government, and so on.
[Prototype screen options: Watch TV, Color, Font, Help]
Test case: Personalization
Test items:
Pre-conditions: The user must select a type of personalization: Font color
Inputs: The user selects green
Expected results:
Conclusion
In this chapter, we present a new method, which
focuses on integrating different techniques of
usability, semiotic and software engineering. The
aims are to design usable UIs following a modelbased UI design process and to facilitate the test
process by using the evaluated HCI models. In this
manner, we hope to contribute to the development
of interactive systems that are easy for users to
learn and use, and to help testers in performing
their usability tests in an efficient manner.
As main contributions, we focus on evaluating the usability of UIs with the constant participation of users and customers. Besides that,
the integration of various approaches results in
positive outcomes for the prototypes, as well as
for multidisciplinary team members, who are
better integrated and can have their knowledge
References
Baranauskas, M. C. C., & Rocha, H. V. da. (2003). Design e Avaliação de Interfaces Humano-Computador. NIED-Núcleo de Informática Aplicada à Educação, UNICAMP-Universidade Estadual de Campinas.
Coyette, A., Faulkner, S., Kolp, M., Limbourg,
Q., & Vanderdonckt, J. (2004). SketchiXML:
Towards a multi-agent design tool for sketching
user interfaces based on UsiXML. In P. Palanque,
P. Slavik, & M. Winckler (Eds.), 3rd Int. Workshop
on Task Models and Diagrams for User Interface
Design (pp. 75-82). New York: ACM Press.
Gould, J. D., & Lewis, C. (1985). Designing for
usability: Key principles and what designers think.
Communications of the ACM, 28(3), 300-311.
ISO 13407. (1999). Human-centred design processes for interactive system teams.
Kruchten, P., Ahlqvist, S., & Bylund, S. (2001).
User interface design in the rational unified process. In M. Van Harmelen (Ed.), Object modeling
and user interface design (pp. 45-56). New York:
Addison-Wesley.
Lewis, C. (1982). Using the thinking-aloud
method in cognitive interface design (IBM
Research Rep. No. RC9265, #40713). Yorktown
Heights, NY: IBM Thomas J. Watson Research
Center.
Myers, G. J. (2004). The art of software testing.
New York: John Wiley & Sons.
Nielsen, J. (1993). Usability engineering. Boston:
Academic Press.
Pressman, R. S. (1995). Engenharia de Software. São Paulo: Makron Books.
This work was previously published in Verification, Validation and Testing in Software Engineering, edited by A. Dasso, pp.
55-81, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.7
Abstract
How can we design technology that suits human cognitive needs? In this chapter, we review
research on the effects of externalizing information on the interface versus requiring people to
internalize it. We discuss the advantages and
disadvantages of externalizing information.
Further, we discuss some of our own research
investigating how externalizing or not externalizing information in program interfaces influences problem-solving performance. In general,
externalization provides information relevant to
immediate task execution visibly or audibly in the
interface. Thus, remembering certain task-related
knowledge becomes unnecessary, which relieves
working memory. Examples are visual feedback
aids such as graying out nonapplicable menu
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction
Humans interact with information in the world
around them by taking it in, processing it, and
outputting reactions. To process information, they
use cognitive skills such as thinking, learning,
reasoning, recognizing, and recalling, as well as
metacognitive skills, which entail thinking about
cognition (for instance, planning, strategizing,
or choosing between reasoning or calculation
types). Cognitive science studies these domains
of human thought. Much research in this field is
done through the analysis of subject reactions
to presented information. This makes cognitive
science a source of knowledge that could (and does) guide interface and system designers toward a more effective presentation of information
in computer systems. We believe that utilizing
cognitive findings to enhance the effectiveness
of software can make a valuable contribution.
Increasingly humans exchange information with
the aid of computers, for instance, in education,
entertainment, office tasks, information search, email, and many other domains. Advances in computer and multimedia technology ensure that the
format of this information is increasingly diverse
using multiple media. Moreover, programs can
have hundreds of functions. However, progression
becomes difficult with this complexity of choices
and representations. Harnessing this complexity
to make it manageable for humans gave rise to the
domain of usability. Soon, among other things,
the importance of minimizing user memory load
became apparent. This resulted in recommendations to simplify the interface, restricting available options to those needed to carry out the task
action at hand, and to keep options visible on the
interface so users could interact on the basis of
recognition rather than recall (Nielsen, 1994). In
other words, the aim was just-in-time delivery of
just the right information, obviating the need for
memorization and extensive search in memory.
Our research does not aim at uncovering
more principles that make systems even more
usable, intuitive, or appealing. It goes beyond
plain usability and focuses on how to shape interfaces that induce a user to learn cognitive and
metacognitive skills, and thereby learn about the
domain underlying the interface. We would like
to find patterns of human behavior occurring with
computer use, to find out what kind of behavior
certain decisions in interface design provoke,
not only during interaction, but also after delays
and in transfer situations. In this, we feel that one
continually has to consider the real purpose of
the system. If a system ought to teach material to
students or children, or needs to make sure that
users do not mindlessly follow interface cues
because the task to perform is of a certain crucial
nature, then we should know what it is about an
interface design that induces people to think and
learn. In this chapter, the focus is on internalization and externalization of information, and how
this may lead to different behavior on the users
part. In the following sections, we explain the
different terms used in this context. After this,
we will look at the pros and cons of externalizing
and internalizing information, and some effects
of varying interface elements on learning and
metacognitive processes. In the next sections we
discuss an experiment on users behavior that two
interface styles (internalization and externalization) provoke, and more specifically, the amount
of planning and learning from the users side. In
the concluding section, we discuss our findings
and lay out our future plans.
Externalization vs. Internalization of Information: Related Fields
Visualization of task-specific information, thus
minimizing user memory load, is often called
externalizing the information (Zhang & Norman, 1994). Externalization of information can
be contrasted with internalization of information,
whereby certain task-related information is less
directly available and needs to be internalized
(inferred and stored in memory).
Early work of Simon (1975) examining advanced chess skills and strategies to solve the
Tower of Hanoi puzzle had noticed the interdependence of external memory (perception, cueing
recognition memory), and internal memory (recall
memory). Norman (1988) argued for the need of a
similar analysis for human-computer interaction.
Tabachneck-Schijf, Leonardo, and Simon (1997)
created a model in which distributed cognition
was reified. For an example of distributed cognition, showing the interaction between internal
and external memory in creating a Microsoft
PowerPoint page, see Tabachneck-Schijf (2004).
The question is, how much information should be
internalized and how much should be externalized
for an optimal task execution? Some research
shows that the more information is externalized,
the easier executing a task becomes (e.g., Zhang
& Norman, 1994). Other research shows that
externalizing all needed information seduces
the user into mindlessly following the interface
cues and learning or planning little (e.g., Mayes,
Draper, McGregor, & Oatley, 1988). Yet other
research shows that giving users incentives to
plan and learn (i.e., internalizing information)
pays off in better performance (e.g., OHara &
Payne, 1999). Internalizing and externalizing
information are related to, but not the same as,
internal and external memory, internal and external cognition, and plan-based and display-based
problem solving. All relate to having information in the head, be it in long-term or in working
Figure 2. Hypotheses
Experiment Session 1
Hypotheses
1. Externalization will initially yield better task
performance than internalization. Knowledge is yet to be acquired, so guidance by
indicating the legality of moves will help
externalization subjects.
2. Internalization yields better task performance later, especially after a severe distraction.
After a while, subjects will have learned to
solve the puzzle. Internalization subjects had
no guidance and had to acquire the solving
skill by themselves. They will have stored
the rules in long-term memory more solidly,
and have the needed information more readily available and thus perform better later,
especially after an interruption erased working memory. Because of the guiding nature
of the interface, externalization subjects will
plan and think less than the internalization
subjects, therefore work more on the basis
of trial and error and consequently display
worse performance.
3. Internalization yields better knowledge. Not
having externalized information available
will motivate a subject to start planning on
the basis of self-acquired rules. After the
Materials
Our problem-solving task, Balls & Boxes
(B&B), is an isomorph of the classic puzzle
Missionaries and Cannibals (M&C). Five
missionaries and five cannibals are standing
on a riverbank, and all have to reach the other
bank by boat. Constraints are that the boat only
holds three creatures, and the minimum to sail
is one, because the boat cannot move by itself.
Furthermore, the cannibals can never outnumber
the missionaries at any place (except when there
are zero missionaries), or the missionaries will be eaten.
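The following is a minimal sketch (ours, not part of the original study materials) of the legality rule just described, assuming a bank is characterised simply by its missionary and cannibal counts:

final class McState {
    static final int TOTAL_MISSIONARIES = 5;
    static final int TOTAL_CANNIBALS = 5;

    // A bank is safe when missionaries are absent or at least as numerous as cannibals.
    static boolean bankIsSafe(int missionaries, int cannibals) {
        return missionaries == 0 || missionaries >= cannibals;
    }

    // missionariesLeft and cannibalsLeft count the creatures on the starting bank;
    // the remaining creatures are on the other bank.
    static boolean stateIsSafe(int missionariesLeft, int cannibalsLeft) {
        return bankIsSafe(missionariesLeft, cannibalsLeft)
            && bankIsSafe(TOTAL_MISSIONARIES - missionariesLeft,
                          TOTAL_CANNIBALS - cannibalsLeft);
    }
}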
Procedure
The experiment was conducted in the usability
lab at the Center for Content and Knowledge
Engineering, Utrecht University. We informed
the subjects of the course of the experiment, and
gave a brief oral explanation of the interface and a
short demonstration. The experiment consisted of
nine puzzle trials, divided into three equal phases,
and a 10-minute distraction task between phase 2
and 3. The maximum time for each trial was set
at 7 minutes. Slightly different starting situations
of the puzzle were used to avoid subjects simply
repeating actions (states A, B, and C in Figure
4). Also, in the second phase, the playing direction of the puzzle was reversed to right to left.
In the third phase, the playing direction was set
to left to right again. After the last trial, subjects
filled out a knowledge test (score 0-8) consisting
[Figure: trials solved per phase, by interface style]

Dead-End States

Knowledge Test

[Figure: number of dead-end states reached per phase, by interface style]
Experiment Session 2
We were curious to see how stable this better
knowledge provoked by internalization was, and
therefore decided to conduct a second session
after a delay of 8 months. We invited the same
subjects of experiment 1; 14 of the 30 were available. There were two reasons for this rerun. First,
to see whether the better knowledge measured
in the internalization subjects had endured, we
asked subjects to solve B&B five times (experiment 1 showed three to four puzzles suffice for all
subjects to be able to solve the puzzle within the
allotted time). Second, to see whether the better
knowledge might result in better performance on
a transfer task, we also confronted subjects with
a transfer problem. Transfer problems require
subjects to apply acquired skill on a different
task of the same nature. To be able to measure
differences in performance between the two initial groups (internalization and externalization)
on the same material, we presented all subjects
with the same material this time, one interface
style, namely, externalization. Note that the internalization subjects had to make a change in
interface style.
Hypotheses
Materials
To test knowledge retention, all subjects first
solved the B&B puzzle in the externalized version five times. To test transfer performance, we
used another puzzle of the same M&C family,
but with varying characteristics. We first used a
quite literal version of M&C, which was further
away in terms of transfer.
1.
2.
Figure 10. The initial state of the Missionaries & Cannibals puzzle
Future Trends
There are still many challenges in human-computer interaction and many issues that need to be
explored. Understanding how users will react to
interface information (on the basis of cognitive
findings) is one important issue in attuning software to its purpose, thereby allowing it to achieve
its goal. In the future, we will further investigate
this issue by exploring behavior in different types
of more realistic planning-related tasks. As a more
realistic planning task, we think of, for example,
spreadsheet or drawing applications where actions
are less repetitive, more complex, and could be
part of a real job. We are currently designing such
an environment.
References
Carroll, J. M., & Rosson, M. B. (1987). The
paradox of the active user. In J. M. Carroll (Ed.),
Interfacing thought: Cognitive aspects of human-computer interaction (pp. 80-111). Cambridge,
MA: MIT Press.
Greeno, J. G., & Simon, H. A. (1974). Processes of sequence production. Psychological Review, 81, 87-98.
Larkin, J. H. (1989). Display-based problem solving. In D. Klahr & K. Kotovsky (Eds.), Complex information processing: The impact of Herbert A. Simon (pp. 319-341). Hillsdale, NJ: Lawrence Erlbaum.
Lewis, C. H., & Polson, P. G. (1990). Theory-based design for easily learned interfaces. Special issue: Foundations of human-computer interaction. Human-Computer Interaction, 5, 191-220.
Mayes, J. T., Draper, S. W., McGregor, A. M., & Oatley, K. (1988). Information flow in a user interface: The effect of experience and context on the recall of MacWrite screens. Proceedings of the Fourth Conference of the British Computer Society on People and Computers IV (pp. 275-289).
Nielsen, J. (1994). Usability engineering. San
Francisco: Morgan Kaufmann.
Norman, D. A. (1988). The psychology of everyday
things. New York: Basic Books.
O'Hara, K., & Payne, S. J. (1998). The effects of operator implementation cost on planfulness of problem solving and learning. Cognitive Psychology, 35, 34-70.
O'Hara, K. P., & Payne, S. J. (1999). Planning the user interface: The effects of lockout time and error recovery cost. International Journal of Human-Computer Studies, 50, 41-59.
Payne, S. J., Howes, A., & Reader, W. R. (2001). Adaptively distributing cognition: A decision-making perspective on human-computer interaction. Behaviour & Information Technology, 20(5), 339-346.
Scaife, M., & Rogers, Y. (1996). External cognition: How do graphical representations work? International Journal of Human-Computer Studies, 45, 185-213.
Simon, H. A. (1975). The functional equivalence of problem solving skills. Cognitive Psychology, 7, 268-288.
Trudel, C. I., & Payne, S. J. (1996). Self-monitoring during exploration of an interactive device. International Journal of Human-Computer Studies, 45, 723-747.
Van Joolingen, W. R. (1999). Cognitive tools for discovery learning. International Journal of Artificial Intelligence in Education, 10, 385-397.
Van Oostendorp, H., & De Mul, S. (1999). Learning by exploration: Thinking aloud while exploring an information system. Instructional Science, 27, 269-284.
Zhang, J. (1997). The nature of external representations in problem solving. Cognitive Science, 21(2), 179-217.
Zhang, J., & Norman, D. (1994). Representations in distributed cognitive tasks. Cognitive Science, 18, 87-122.
This work was previously published in Cognitively Informed Systems: Utilizing Practical Approaches to Enrich Information Presentation and Transfer, edited by E. M. Alkhalifa, pp. 74-101, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.8
Abstract
Introduction
Figure 1. The user interface for medical equipment should be straightforward, friendly, and intuitive.
Also, rarely is the operating manual available to the end user, which makes the labeling of the controls
especially important. Consider then the user controls shown for this operating-room table in use at
the authors' facility. The top and bottom left-hand controls lower the head and feet, respectively, while
the right-hand controls raise the head and feet. But what if the entire operating table is to be raised or
lowered, which is by far the most common request from the surgeon? It turns out that the entire table is
raised by pushing both right-hand buttons, while the entire table is lowered by pushing both left-hand
buttons. This arrangement makes sense if one thinks about it for a while, but an intuitive interface should
not require a lot of thinking. Furthermore, there is plenty of space available on the control panel to add
two extra buttons.
Design Guidelines
The U.S. Food and Drug Administration has
offered a number of guidelines to help with the
design of medical equipment, such as the following (adapted from http://www.fda.gov):
Make the design consistent with user expectations; both the user's prior experience
with medical devices and well-established
conventions are important.
Design workstations, controls, and displays
around the basic capabilities of the user,
such as strength, dexterity, memory, reach,
vision, and hearing.
Design well-organized and uncluttered
control and display arrangements. Keys,
switches, and control knobs should be sufficiently apart for easy manipulation and
placed in a way that reduces the chance of
inadvertent activation.
Ensure that the association between controls
and displays is obvious. This facilitates
proper identification and reduces the user's
memory load.
Ensure that the intensity and pitch of auditory
signals and the brightness of visual signals
allow them to be perceived by users working
under real-world conditions.
Make labels and displays so that they can
be easily read from typical viewing angles
and distances.
Use color and shape coding to facilitate the
identification of controls and displays. Colors
and codes should not conflict with industry
conventions.
References
KEY TERMS
Cognitive Task Analysis (CTA): A family of
methods and tools for understanding the mental
processes central to observable behavior, especially those cognitive processes fundamental to
task performance in complex settings. Methods
used in CTA may include knowledge elicitation
(the process of obtaining information through
in-depth interviews and by other means) and
knowledge representation (the process of concisely
displaying data, depicting relationships, etc.).
Ecological Interface Design (EID): A conceptual framework for designing human-machine
interfaces for complex systems such as computer-based medical equipment. The primary goal of
EID is to aid operators, especially knowledge
workers, in handling novel or unanticipated
situations.
Ergonomics: A synonym for human-factors
engineering, especially in the European literature.
This work was previously published in Handbook of Research on Informatics in Healthcare and Biomedicine, edited by A.
A. Lazakidou, pp. 390-395, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 2.9
Development Of E-Government
Services For Cultural Heritage:
Examining the Key Dimensions
Andrea Carugati
Catholic University of Lille, France
Elias Hadzilias
Catholic University of Lille, France
Abstract
This article is aimed at defining a framework for
the design of e-government services on cultural
heritage. Starting from an analysis of three cases
on digitization of different types of cultural objects, we highlight the problems existing in the
creation of e-services on cultural heritage. These
cases show the existence of four key issues in the
development of this kind of information systems:
digitization, requirement engineering, standardization, and interoperability. The proposed framework addresses these issues, focusing on the user
requirements on one side, and the cultural object
representation (which is the key to interoperability) on the other. In the cultural domain, the EU
Lisbon strategy pushes for the compatibility of
shared content across multiple, locally generated
contents. Dynamic content exchange requires the
use of a prescriptive framework for the develop-
Introduction
The Lisbon strategy for eEurope (EU Report,
2002) and the following eEurope 2002, eEurope
2005, eEurope 2010, drafted as results of the
activities of the European Council, are aimed at making the European Union the most competitive
and dynamic knowledge-based economy with
improved economy and social cohesion by 2010.
In concrete terms, this means broadband and
high-level Internet-based services for the entire
population of the European Union. The means
envisioned to achieve this goal are largely based
Despite the above-mentioned issues, in Europe as well as elsewhere, there are a number of
interesting examples and initiatives, on various
scales, of successful economic promotion of the
cultural heritage (some of these mentioned above).
Unfortunately, they have often only a local or regional visibility, and their positive (and negative)
experiences cannot be fully exploited and shared
by other projects.
To resolve these issues, the article proposes
an integrated framework for developing e-government services on cultural heritage. The
framework emerges from the study of multiple
cultural heritage electronic initiatives and the in
depth investigation of three case studies. This
framework is represented using the activity modelling method IDEF0, where all necessary steps,
inputs, outputs, rules, and roles are described in
a hierarchical manner.
The article is organized in the following way:
first, the research methodology is explained; secondly, we present the case studies and analyse them to highlight key issues; we then describe the IDEF0 modelling technique and present the development framework, focusing on user segmentation and interoperability. Finally,
we conclude with a discussion of the proposed
framework, highlighting the need for a systemic
approach to the activities.
Research Methodology
The research was carried out following the constructive paradigm of the case study approach
(Yin, 2002). In this context, the research used
a practice-based lens to identify processes and
problems in the development of e-government
services on cultural heritage. The process was
comprised of three steps:
1.
2.
3.
Case Studies
According to the selection process, we consider
the following case studies: the Electronic Beowulf
Project at the British Library (UK), the Piero
della Francesca Project in Arezzo (Italy), and the
digitization of the incunabula manuscripts in the
Herzog August Library in Cologne (Germany).
Development Process
Since the Beowulf represents one of the first attempts to digitize a cultural heritage object, many
difficulties were encountered in different aspects
of the project, as the work in preparing the digital
edition was very complex in the period considered. Technical difficulties concerned scanning
technologies, storing media, and managing the
transfer across large distances of large quantities
of data or moving 24-bit colour images across
different technical platforms. In some cases, it
was necessary to devise innovative technical
solutions to achieve the desired end result. The
following is a quote from Prof. Kiernan's2 Web
site, highlighting the technical difficulties encountered in 1993:
The equipment we are using to capture the images is the Roche/Kontron ProgRes 3012 digital
camera, which can scan any text, from a letter or
a word to an entire page, at 2000 x 3000 pixels in
24-bit color. The resulting images at this maximum resolution are enormous, about 21-25 MB,
and tax the capabilities of the biggest machines.
Three or four images (three or four letters or words, if that is what we are scanning) will fill
up an 88 MB hard disk, and we have found that
no single image of this size can be processed in
real time without at least 64 MB of RAM. In our
first experiments in June with the camera and its
dedicated hardware, we transmitted a half-dozen
both pictures and transcribed text. The CD contains multiple transcriptions of the Beowulf from
different accredited authors. Different search
criteria are available to query Line (Edition), Folio
(Edition), FolioLine (Transcript), Fitt (Edition or
Transcript), Scribe (Edition or Transcript). Images
can be zoomed to a very high detail in order to be
studied by paleographers and calligraphers (see
Figure 1). Alternatively, the layout of the screen
can show both the original manuscript and the
transcription, which is intended to be used by
researchers interested in the content of the poem
(see Figure 2).
The electronic version enables readers to place
the original manuscript's leaves side by side, to
examine the color and texture of the vellum leaves
by magnifying the images and to explore the work
of the early scribes (see Figure 3). Building on
this material, the CD features multiple versions of
restoration of which the most recent one was made
possible by the different scanning techniques.
increase access
to its collections by use of imaging and network
technology. As such, the Beowulf project was
not directly geared towards providing content,
services, or functionalities to other institutions
and remains a stand-alone product.
One of the collaborations that emerged from
this project was the inclusion of Thorkelin's
transcript of the Beowulf catalogued in the Royal
Library of Copenhagen.
Development Process
Originally, this project was carried out to provide
professional restorers with new tools for restoring
the artworks of Piero della Francesca. For this
reason, the initial requirements were set by the
restorers and as a result, the quality and detail of
the digitized material is very high. For example,
they integrated text, vectorial information (CAD
Figure 5. Data capturing and sample images in the project of Piero della Francesca
One of the system's capabilities is that it provides a chronological reconstruction of the artist's production.
Connected to this, the system
allows the study of the relationship between the
saline deposits and the presence of fixatives added
during earlier restorations or, alternatively, to view
the deposits in relation to the various painting
techniques used by the artist.
mappings.
The Web site proposes two games: one that consists
of adding colours to a black and white painting
and the other where parts of the paintings have
been divided (electronically) into pieces that the
player is supposed to reassemble like a puzzle.
Usability Testing
The usability testing was carried out by five
people: two laymen, one professional restorer,
and two children aged seven and eight years old.
The adults got the task of finding the painting of
La Madonna del Parto, starting from the home
page. The children had the task of solving the
two games. All subjects reported poor navigation
Development Process
In order to deal with the vast amount of information
to be digitized, a standard development process
was followed, which is depicted in Figure 10.
The first stage of the development was the
raw digitization of the single pages, which was
conducted using a Nikon DXM 1200 and a camera
server that stored the images with total capacity
of 80 GB. A book cradle for photos at an angle
of 45 degrees (see Figure 11) allowed treating the
incunabula carefully while simultaneously saving
time and money. The pages were scanned in a 24-bit color scheme at a resolution of 3800x3072 pixels, leading to a data file size of about 34 MB per page. With such a digitization procedure, more than 80% of the incunabula were digitized, leading to about 12 TB of raw data.
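As a rough sanity check on these figures, the short Python sketch below recomputes the per-page size from the stated resolution and colour depth and scales it up to the stated raw volume; the page count used is an assumed round number for illustration, not a figure taken from the project.

```python
# Rough sanity check of the digitization figures quoted above.
# Assumptions: 24-bit colour = 3 bytes per pixel, uncompressed raw captures;
# the page count below is an illustrative guess, not a figure from the project.

WIDTH, HEIGHT = 3800, 3072          # scan resolution in pixels
BYTES_PER_PIXEL = 3                 # 24-bit colour

bytes_per_page = WIDTH * HEIGHT * BYTES_PER_PIXEL
mb_per_page = bytes_per_page / 1024 ** 2
print(f"raw size per page: {mb_per_page:.1f} MB")   # ~33.4 MB, i.e. the ~34 MB/page quoted

pages_digitized = 375_000           # hypothetical page count
total_tb = pages_digitized * bytes_per_page / 1024 ** 4
print(f"total raw volume: {total_tb:.1f} TB")       # ~12 TB, the order of magnitude quoted
```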
At the second step, the digitized data was
transferred through a local area network to a preprocessing server with 250 GB storage capacity.
This server was responsible for the transformation
of the output files to the size that is efficient for
Web site access and for the different users.
Finally, the files were forwarded to the Web
server environment and the digitized files were
available on the Internet, after passing a quality
control at the Metadata Cache and being transformed to DACOs. These versions of the original
files were not more than 10% of the original data
size, so that they can be accessed quickly from the
Internet. During this phase, the data supervision
and the production of the copy-referred development data took place. The archive server offers a
storage area for disaster recovery purposes.
The entire architecture was based on cost-efficient Linux servers that perform well for hierarchical storage organization.
Obviously, beyond tackling the technical challenges, cost played a major role in pushing such
[Figure: storage architecture. Camera Server, 80 GB (raw digitization); Pre-processing Server, 250 GB (file downsizing); Metadata Cache, 250 GB (quality control); Web server, 1 TB (WWW interface); Archive Server, 1000 TB (long-term backup).]
Usability Testing
Given the content of this site, we had to carry
out a careful selection of the five users. Finally,
we selected three librarians and two history researchers. Even though the test subjects did not
research specifically the incunabula, they were
selected for their knowledge in the research and
management of historical documents. They were
given two tasks: to find the document Mensa philosophica, and the complete production of Gerson. The users completed the first task in less
than two minutes and with 10/14 mouse clicks
(compared to an experienced user that requires
seven mouse clicks). They completed the second
task, restarting from the main page, with seven
mouse clicks in less than one minute. The users
reported the site to be easy to navigate, even
though it was in German language. However,
they evidenced two major failures in the database
operation: the field tables were not linked and
the navigation functions in the index, instead
Case Analysis
After thoroughly examining the three case studies of e-services in the cultural sector, and with knowledge of the other existing initiatives, we observed that the greatest challenge lies in the management of four dimensions: digitization,
the requirements were set initially by the most demanding groups, namely the professionals working with cultural objects in their daily working
life. Researchers were in contact with the technical
developers in order to discuss and define the initial
set of requirements. This helped to set a very high
quality standard for what later became material
available to laymen. The process of creating the
Web sites available to the public was carried out
following a top down approach that provided the
digitized content to the public without examining
the specific needs of the different categories of
users. A notable exception is the functionality
offered in the Piero della Francesca project where
the pedagogic games addressed the needs of the
specific target group of children. However, this
effort followed the same push strategy as the other
two and our usability tests proved the limitation
of this approach.
As far as standardization is concerned, there
seems to be a trend towards finding a common way
[Table (cross-case comparison, only partially recoverable): dimensions are digitization, requirements engineering, standardization, and interoperability; cases include Beowulf (1993) and Incunabula (2001). Recoverable cells: digitization of flat documents in different resolutions, major issue cost; top-down approach from the professionals' perspective (two cases); ad-hoc approach, with standards applied to the image format; use of the XML standard, with the DACO typology a candidate for becoming a European standard; content shared with other institutions thanks to specially designed XML objects (DACO).]
[Figure: IDEF0 hierarchical decomposition, from the more general A-0 and A0 diagrams down to more detailed child diagrams such as A4 and A42.]
2. Users. This mechanism includes art specialists (like the restorers in the Piero della
Francesca case), historians, researchers, and
the public in general. In Figure 15, it is shown
that the mechanism users (M1) supports
the collection of users' requirements (A1),
the organization of cultural content (A4),
and the development of e-service (A5). The
participation of users in these activities ensures the user-centered evaluation that the usability testing showed to be essential for avoiding critical errors.
3. Technical project team. It includes the project
manager, the system architects, the system
analysts, the database designers, and the
developers, including the Web designers.
In figure 15, it is shown that the mechanism
technical project team (M2) supports the
collection of users' requirements (A1), the
digitization of cultural content (A2), the
design of system interoperability (A3), and
the development of e-service (A5). The
participation of the technical project team
in these activities ensures that the users'
needs can be supported by the technology
and the resources available.
[Figure 15 (IDEF0 node A0, "cultural heritage e-service"): activities A1 Collect Users' Requirements, A2 Digitize Cultural Content, A3 Design System Interoperability, A4 Organize Cultural Content, A5 Develop e-service; input: cultural heritage e-service plan (I1); controls: user segmentation (C1), testing protocol (C2), system specifications (C3), ISO 21127 standard (C4), ICOM-CIDOC standards (C5), ISO 9126 standards (C6), MINERVA-DigiCult best practices; mechanisms: users (M1), technical project team (M2), cultural organization (M3); intermediate flows include system specifications and the digital fax-simile; output: cultural heritage e-service (O1).]
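To make the structure of the A0 diagram easier to follow, the sketch below encodes the five activities and their ICOM arrows (inputs, controls, outputs, mechanisms) as plain Python data. The class and field names are our own convention; the mechanism assignments follow the description of M1 and M2 given above, and control or output assignments marked "indicative" are plausible readings of the diagram rather than statements from the text.

```python
from dataclasses import dataclass, field
from typing import List

# Minimal, illustrative encoding of an IDEF0 activity box and its ICOM arrows.
# Class and field names are our own; they are not part of IDEF0 or of the chapter.
@dataclass
class Activity:
    node: str
    title: str
    inputs: List[str] = field(default_factory=list)
    controls: List[str] = field(default_factory=list)
    outputs: List[str] = field(default_factory=list)
    mechanisms: List[str] = field(default_factory=list)

a0 = [
    Activity("A1", "Collect users' requirements",
             inputs=["cultural heritage e-service plan (I1)"],
             controls=["user segmentation (C1)", "testing protocol (C2)"],          # indicative
             outputs=["system specifications"],
             mechanisms=["users (M1)", "technical project team (M2)"]),
    Activity("A2", "Digitize cultural content",
             controls=["MINERVA-DigiCult best practices"],
             outputs=["digital fax-simile", "revised system specifications"],
             mechanisms=["technical project team (M2)", "cultural organization (M3)"]),
    Activity("A3", "Design system interoperability",
             controls=["ISO 21127 standard (C4)"],                                  # indicative
             mechanisms=["technical project team (M2)"]),
    Activity("A4", "Organize cultural content",
             controls=["ICOM-CIDOC standards (C5)"],
             outputs=["final requirement document"],
             mechanisms=["users (M1)"]),
    Activity("A5", "Develop e-service",
             controls=["ISO 9126 standards (C6)", "system specifications (C3)"],    # indicative
             outputs=["cultural heritage e-service (O1)"],
             mechanisms=["users (M1)", "technical project team (M2)"]),
]

for a in a0:
    print(f"{a.node} {a.title} | controls: {', '.join(a.controls)}")
```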
the cultural organization collaborate in this activity which, as seen in all three cases, is very
demanding.
The control is provided by the MINERVA
guidelines. MINERVA, MInisterial NEtwoRk for
Valorising Activities in digitization (http://www.
minervaeurope.org/), is a resource for guidelines
concerning the digitization of cultural content.
MINERVA was created by a network of European
Union (EU) Member States Ministries to discuss,
correlate, and harmonize activities carried out in
digitization of cultural and scientific content for
creating an agreed European common platform,
recommendations, and guidelines about digitization, metadata, long-term accessibility, and
preservation. Due to the high level of commitment
assured by the involvement of EU governments,
it aims to coordinate national programs, and its
approach is strongly based on the principle of
embeddedness in national digitization activities.
The use of the MINERVA guidelines ensures contact with other European countries, international organizations, associations, networks, and international and national projects involved in this sector.
The output of this activity is the digitized
content. This content can exist independently
from the creation of the e-service that uses it (like
a repository) or will be used in the development
activity. The other output is the revised version of
the specification document, in accordance with the
problems and opportunities that the digitization
phase might bring to the surface.
of digital products and processes. The meta-data that concern the processes of digitization, identification, quality, and thematic content of the digitized material belong to this category.
Preservation information of the cultural
content. This information refers to meta-
Considering these activities and the specification document, we recommend, as control, the use of the ICOM/CIDOC standards (Abramson
& Means, 2001; Aquarelle, 1999; Baskerville &
Pries-Heje, 2004). The ICOM/CIDOC standards
are the output of a joint effort across museums,
archaeological sites, and other cultural initiatives to create internationally accepted guidelines for museum object information. Since the
ICOM/CIDOC standards have been agreed by
the cultural organizations, their use should be
straightforward.
The output of this activity is the final requirement document containing functional, structural,
and process specifications.
E-Service Development
Once the requirements have been collected, the
material digitized, and the standards set, the time
for the actual e-service development arrives. We
envision the e-service to be, as a starting point,
of the kind presented in the incunabula project:
a complete, widely accessible, shared content of
precious cultural material.
The e-service development can be regarded as
a problem of information system development. In
the particular case of cultural heritage e-services,
there are particular conditions that make them
different in relation to the needs of traditional
Web sites treated in the literature (Baskerville and
Pries-Heje, 2004). The needs for quick adaptability
and flexibility (ibid) do not apply in most cultural
heritage cases where the data is stable and the
Conclusion
The study of the existing initiatives of e-government on cultural heritage shows that the different
member states are pursuing the Lisbon strategy
for eEurope, but in different ways and at different
paces. While some countries are more advanced
than others, even for the leading ones, it is difficult to speak of concerted action to create a one-stop site for cultural heritage, be it at subject level (archaeology, graphical art, sculpture, and so forth), local level, national level, or
international level. While cooperation obviously
exists among museums, ministries, universities,
and other institutions, this cooperation only very
recently has begun to be addressed seriously.
While cultural organizations, research institutions, and standard organizations have begun to
create usable standards, one of the main issues that
we have observed in an initial survey is the lack
of a methodology that systematically integrates
and adopts these standards.
The in depth examination of three case studies
of e-services has pointed out four dimensions that
are important to address when developing these
References
Abramson, M. A., & Means, G. E. (2001). E-Government 2001. IBM Center for the Business
of Government Series, Lanham, MD: Rowman
and Littlefield.
Aquarelle (1999). Sharing Cultural Heritage
through Multimedia Telematics. DGIII, ESPRIT
Project 01/01/1996 - 31/12/1998.
Baskerville, R., & Pries-Heje, J. (2004). Short cycle time systems development. Information Systems Journal, 14(3), 237-265.
Carroll, J. M. (2001). Community computing
as human-computer interaction. Behaviour &
Information Technology, 20(5), 307-314.
Coursey, D., & Killingsworth, J. (2000). Managing government web services in Florida: Issues
and lessons. In D. Garson (Ed.), Handbook of
public information systems. New York: Marcel
Dekker.
EU Report (2002). eEurope 2005 executive summary. Retrieved from http://europa.eu.int/information_society/eeurope/2005/index_en.htm
Fountain, J. (2001). Building the virtual state:
Information technology and institutional change.
Washington: Brookings Institution.
Fountain, J. (2003). Electronic government and
electronic civics. In B. Wellman (Ed.), Encyclopedia of community. Berkshire: Great Barrington,
436-441.
Ho, A. T.-K. (2002). Reinventing local governments and the e-government initiative. Public
Administration Review, 62(4), 434-445.
Kiernan, K. S. (1981). Beowulf and the Beowulf
manuscript. New Brunswick, NJ: Rutgers University Press.
Klawe, M., & Shneiderman, B. (2005). Crisis and
opportunity in computer science. Communications of the ACM, November, 48(11), 27-28.
Loebbecke, C., & Thaller, M. (2005, May). Preserving Europe's cultural heritage in the digital
world. Proceedings of the European Conference
on Information Systems (ECIS). Regensburg,
Germany.
Malhotra, R., & Jayaraman, S. (1992). An integrated framework for enterprise modeling. Journal of
Manufacturing Systems, 11(6), 426-441.
Marca, D. A. & McGowan, C. L. (1988). SADT:
Structured analysis and design technique. New
York: McGraw-Hill.
Molich, R. (1999). Bruger-venlige edb-systemer.
(in Danish) Copenhagen: Teknisk Forlag.
Moon, M. J. (2002). The evolution of e-government
among municipalities: Reality or rhetoric? Public
Administration Review, 62(4), 424-433.
Nielsen, J., & Landauer, T. K. (1993). A mathematical model of the finding of usability problems. In
Proceedings of ACM INTERCHI'93 Conference
(pp. 206-213). Amsterdam, The Netherlands.
webart.nationalmuseum.se/
new face of
government: Citizen-initiated contacts in the era of
e-government. Journal of Public Administration
Research and Theory, 13(1), 83-101.
Endnotes
This work was previously published in the International Journal of Technology and Human Interaction, Vol. 3, Issue 2, edited
by B. C. Stahl, pp. 45-70, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI
Global).
Chapter 2.10
An Activity-Oriented Approach
to Designing a User Interface
for Digital Television
Shang Hwa Hsu
National Chiao Tung University, Taiwan
Ming-Hui Weng
National Chiao Tung University, Taiwan
Cha-Hoang Lee
National Chiao Tung University, Taiwan
Abstract
This chapter proposes an activity-oriented approach to digital television (DTV) user interface
design. Our approach addresses DTV usefulness
and usability issues and entails two phases. A
user activity analysis is conducted in phase one,
and activities and their social/cultural context are
identified. DTV service functions are then conceived to support user activities and their context.
DTV service usefulness can be ensured as a result.
The user interface design considers both activity
requirements and user requirements, such as users'
related product experience, mental model, and
preferences in phase two. Consequently, DTV us-
Introduction
DTV has several advantages over conventional
analogue television: better picture and sound quality, more channels, interactivity, and accessibility.
requirements. The user interface considers compatibility between users' information-processing
limitations and task demands, to ensure usability.
Since the user-centered approach recognizes
that the design will not be right the first time, it
suggests that an iterative design be incorporated
into the product development process. By using
a product prototype, the design can be refined
according to user feedback. The user-centered
design approach has been successfully applied
to many computer products.
However, the user-centered approach has
some drawbacks and limitations. Norman (2005)
warned against using it in designing everyday
products. He argued that the user-centered approach provides a limited design view. It is suitable
for products targeted for a particular market and for
specific user-task support, but everyday products
are designed for everyone and support a variety
of tasks. These tasks are typically coordinated
and integrated into higher-level activity units.
For everyday product design, Norman proposed
an activity-oriented design approach. He asserted
that a higher-level activity focus enables designers
to take a broader view, yielding an activity supportive design. In a nutshell, the activity-oriented
approach focuses on user activity understanding,
and its design fits activity requirements.
In line with Norman's (2005) notion of activity-centered design, Kuutti (1996) proposed
an activity theory application framework to the
human-computer interaction design (HCI). According to activity theory (Engeström, 1987), an
activity is the way a subject acts toward an object.
An activity may vary as the object changes, and the
relationship between the subject and object is tool
mediated. Tools enable the subject to transform an
object into an outcome. Furthermore, an activity
is conducted in an environment that has social and
cultural context. Two new relationships (subject-community and community-object) were added
to the subject-object model. The community is a
stakeholder group in a particular activity or those
who share the same activity objective. Rules and
Video-on-demand
Music-on-demand
Games
Travel
Horoscope
TV shopping
Ticket Purchase
News-on-demand
Voting
Transportation
Meteorology
Transportation
Travel
Financial
Employment
Horoscope
TV banking
Movie
News-on-demand
Games
Employment
e-Library
Games
e-Learning
e-Newspaper
Transactions
Financial services
TV banking
TV shopping
Reserving
Ticket Purchase
Lottery
e-Health management
Commodity
Daily living
Appliance control
Voting
e-Health care
Video phones
Lottery
Home security
e-Health management
Commodity
context and provide suitable activity support options. The functions are displayed in a menu bar
and numbered in terms of their appropriateness
(Figure 4).
Usability Evaluation
The purpose of usability evaluation is twofold:
(1) assessing user interface design usability, and
(2) evaluating usefulness of each user interface
design feature from the user's point of view.
Participants
28
program guide, (2) browsing a movie from a video-on-demand (VoD) service, and (3) scheduling a
program recording. The information-browsing
scenario illustrated daily information-related
activities such as checking weather, stock information, and traffic information. Finally, the TV
shopping task simulated TV-shopping behavior
including searching for products, browsing product information, comparing different products,
and engaging in online transactions.
When performing these tasks, subjects were
asked to think aloud through every task step. After
performing the tasks, performance data (task error rate and verbal protocol) were recorded. Upon
each task completion, subjects rated the perceived
usefulness (five-point Likert scale: 1-non-useful,
5-very useful) and perceived usability (five-point
Likert scale: 1-unusable, 5-easy to use) of each
user interface feature.
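For readers who want to reproduce this style of analysis, the sketch below shows one way the per-operation error rate and the two Likert ratings could be aggregated; all numbers in it are invented for illustration and are not the study's data.

```python
from statistics import mean, stdev

# Illustrative aggregation of the measures described above. All values are
# invented examples, not data from the study.
errors_per_subject   = [0, 1, 0, 0, 1, 0]     # errors on one operation, one entry per subject
attempts_per_subject = [1, 1, 1, 1, 1, 1]
error_rate = sum(errors_per_subject) / sum(attempts_per_subject)

usefulness_ratings = [4, 5, 3, 4, 4, 5]       # 1 = non-useful ... 5 = very useful
usability_ratings  = [4, 4, 5, 3, 4, 4]       # 1 = unusable  ... 5 = easy to use

print(f"task error rate: {error_rate:.2f}")
print(f"perceived usefulness: mean {mean(usefulness_ratings):.2f}, sd {stdev(usefulness_ratings):.2f}")
print(f"perceived usability:  mean {mean(usability_ratings):.2f}, sd {stdev(usability_ratings):.2f}")
```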
Results
Task Performance
Scenario 1: Searching for News/Movie
Programs
In the searching for news/movie programs
scenario, almost all operations were error free,
except that a few subjects made an error at the
Sub tasks and operations, by scenario:

Searching for a News/Movie program
selecting a TV program
browsing a movie from video-on-demand: (1) power on; (2) main menu [shallow menu structure]; (3) select VOD; (4) select program type; (5) select program; (6) start movie
scheduling a program recording: (1) power on; (2) key in target program number; (3) press enter; (4) press function key [context-sensitive functions]; (5) select function; (6) start recording

Browsing Information
checking weather: (1) power on; (2) main menu [shallow menu structure]; (3) select information services; (4) select the item weather. (In TV viewing state, use multiple display): (1) press short-cut bar; (2) select item weather information [adaptive information presentation], [multiple viewing & display management]
stock information: (1) power on; (2) main menu [shallow menu structure]; (3) select information services; (4) select the item stock; (5) select a target stock. (In TV viewing state, use multiple display): (1) press short-cut bar; (2) select item my stock [adaptive information presentation], [multiple viewing & display management]
traffic information: (1) power on, main menu [shallow menu structure]; (2) select information services; (3) select the item traffic. (In TV viewing state, use multiple display): (1) press short-cut bar; (2) select the item traffic information [adaptive information presentation], [multiple viewing & display management]

Shopping on TV
TV shopping: (1) power on; (2) main menu; (3) select purchase; (4) TV-shopping; (5) select shopping store; (6) select target products; (7) compare products; (8) buy the product. [shallow menu structure], [activity-oriented UI flow]
Sub tasks, operations, and error rates (Searching for News/Movie programs):

selecting a TV program: (1) power on (error rate 0.05); (2) key in target program number (0.00); (3) press enter (0.00)
browsing a movie from video-on-demand: (1) main menu (0.00); (2) select Video On Demand (0.00); (3) select program type (0.05); (4) select program (0.00); (6) start movie (0.00)
scheduling a program recording: (1) key in target program number (0.00); (2) press enter (0.00); (3) press function key (0.40); (4) select function (0.00); (5) start recording (0.00)
Browsing information. Sub task: access services. Operations: (1) power on; (2) main menu [shallow menu structure]; (3) select information services; (4) select the information item. Error rates (in table order): 0.00, 0.00, 0.00, 0.15, 0.00, 0.00.
Scenario 3: Shopping on TV
All subjects not only understood how to perform the TV shopping task but were also able to
complete it within a short time. In the debriefing interview, we asked subjects why they were
so skilled. Subjects indicated that they felt the
TV shopping functions were similar to real life
shopping situations. In addition, the function
arrangement in the menu structure was easy to
understand and navigate.
Shopping on TV. Sub task: TV shopping. Operations and error rates: (1) power on (0.00); (2) main menu (0.05); (3) select purchase (0.05); (4) TV-shopping (0.00); (5) select shopping store (0.00); (6) select target products (0.00); (7) buy the product (0.00).
Context-Sensitive Functions
All subjects gave high marks to this design feature. Students and housewives expressed that this
design feature was helpful (mean of perceived
usefulness = 3.80, Std. = 1.01) and usable (mean
of perceived easy to use = 4.00, Std. = 1.08) be-
Conclusion
This study attempts to apply an activity-oriented
design approach to digital TV. This approach
analyzes activities and identifies requirements
for each type of activity. Activity-oriented user
interface flow can help users complete their
activities and reduce their workload. The evaluation results support this notion, and services are
considered to be accessible to users and useful in
supporting activities.
This study also explores good design features
of related products used by users for the same
activities. Incorporating good design features
of related products into the new design not only
eases the learning process for the new product,
but also helps establish a familiar and comfortable
feeling for first-time users. This helps users gain
out-of-the-box experience (Ketola, 2005).
We also found that different user groups have
different user interface expectations. For example,
technicians and students prefer multiple viewing
and multiple tasks. In contrast, older adults
prefer a simple TV environment. Therefore, the
proposed design concept is flexible enough to
accommodate these two types of interaction
methods in order to meet two separate user requirements.
Acknowledgment
This research is supported by MediaTek Research Center, National Chiao Tung University,
Taiwan.
References
Bødker, S. (1996). Applying activity theory to
video analysis: How to make sense of video data
This work was previously published in Interactive Digital Television: Technologies and Applications, edited by G. Lekakos, K.
Chorianopoulos, and G. Doukidis, pp. 148-168, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing
(an imprint of IGI Global).
Factor loadings per activity, in the order Information, Education, Entertainment, Daily living, Transaction:

Internet: .788, .191, .173, .243, .413
Ball gaming: .002, .450, .535, .011, .272
Watching movie: .704, .454, .385, .064, .488
Body-building: .297, .357, .503, .082, .476
Newspaper: .673, .200, .270, .309, .335
Dinner party: .259, .241, .492, .035, .173
Bookstore shopping: .616, .257, .203, .193, .268
Camping: .355, .437, .481, .446, .364
Shopping: .502, .175, .289, .142, .029
(unlabeled activity): .136, .250, .478, .308, .283
Magazine: .440, .228, .301, -.147, .368
Singing (Karaoke): .236, .338, .476, .145, .185
Information showing: .372, .665, .500, .015, .475
Religious activity: .009, .375, .433, .008, .127
Social service: .127, .659, .451, .062, .310
Housekeeping: .305, .170, .104, .661, .243
Art exhibition: .470, .611, .490, -.007, .483
Computer gaming: .362, .145, .082, .615, .285
Accomplishment course: .294, .594, .242, .262, .398
Pet breeding: -.035, .356, .377, .527, .312
School: .239, .587, .370, .267, .252
Gardening: .021, .370, .262, .441, .400
Refresher course: .312, .565, .294, .153, .378
Cooking: .217, .328, .279, .412, .129
Photography: .395, .547, .489, .210, .414
Commodity buying: .346, .341, .158, .360, .670
Tai-Chi chuan: -.108, .471, .250, .178, .187
TV / e-shopping: .300, .301, .338, .384, .638
Handicraft making: .214, .445, .361, .263, .106
Investment: .300, .527, .343, .303, .538
Traveling: .422, .462, .611, .063, .397
Financial news: .444, .360, .407, .227, .498
Joy riding: .413, .311, .559, .325, .341
Lottery: .194, .387, .375, .001, .460
Chapter 2.11
Abstract
INTRODUCTION
The introduction of computing and communications technologies within cars raises a range of
novel human-computer interaction (HCI) issues.
In particular, it is critical to understand how userinterfaces within cars can best be designed to
account for the severe physical, perceptual and
cognitive constraints placed on users by the driving context. This chapter introduces the driving
situation and explains the range of computing
systems being introduced within cars and their
associated user-interfaces. The overall human-focused factors that designers must consider
for this technology are raised. Furthermore, the
range of methods (e.g., use of simulators, instrumented vehicles) available to designers of in-car
user-interfaces are compared and contrasted.
Specific guidance for one key system, vehicle
navigation, is provided in a case study discussion.
To conclude, overall trends in the development
of in-car user-interfaces are discussed and the
research challenges are raised.
Overload: Many of these systems (particularly those providing novel types of information and/or interactions) lead to situations in
which a driver must divide their attention
between core driving tasks (e.g., watching
out for hazards) and secondary system tasks
Users
As with many other consumer products, there will
be a large variability in user characteristics (e.g.,
in perceptual and cognitive abilities, computer
experience, anthropometry) to consider when
designing in-car computing systems. Car manufacturers may have particular socio-economic
groups in mind when designing a vehicle, but the
user base may still be extremely large.
One fundamental individual difference factor
often addressed in research is driver age; drivers
can be as young as 16 (in certain countries) and
as old as 90. In this respect, younger drivers may
be particularly skilled in the use of computing
technology, in comparison with the population
at large, but are especially prone to risk taking
(Green, 2003). Moreover, studies have shown a
limited ability to divide attention and prioritize
sources of information, largely due to lack of
driving experience (Wikman, Nieminen, &
Summala, 1998). Subsequently, system block outs,
which prevent the use of complex functions in
inappropriate driving situations, are likely to be
of particular benefit for these individuals.
In contrast, older drivers often suffer from
a range of visual impairments that can lead to a
range of problems with in-vehicle displays. For
instance, presbyopia (loss of elasticity in the lens
of the eye) is extremely common amongst older
people, as is reduced contrast sensitivity. Studies
consistently show that older drivers can take 1.5
to 2 times longer to read information from an
in-vehicle display compared to younger drivers
(Green, 2003). Given that drivers have a limited
ability to change the distance between themselves
Tasks
A key task-related issue is that the use of an in-car
computing system is likely to be discretionary.
Drivers do not necessarily have to use the system to achieve their goals and alternatives will
be available (e.g., a paper map, using the brake
themselves). As a result, the perceived utility
of the device is critical. Furthermore, drivers
affective requirements may be particularly important. In certain cases, this requirement may
conflict with safety-related needs, for instance,
for a simple, rather than flashy or overly engaging
user-interface.
The factor that most differentiates the driving
context from traditional user-interface design is
the multiple-task nature of system use, and in this
respect, there are two critical issues that designers
must take into consideration. The first concerns
the relationship between primary driving tasks and
secondary system tasks, as drivers seek to divide
their attention between competing sources of information. Driving is largely a performance and
time-critical visual-manual task with significant
spatial components (e.g., estimating distances).
Consequently, secondary tasks must not be overly
time-consuming to achieve or require attentional
resources that are largely visual, manual, and
spatial in nature, if they are to avoid having a
significant impact on primary driving.
A second fundamental issue is the amount
of information processing or decision making
required for successful task performance, known
as mental workload (Wickens et al., 2004). Novel
in-car computing systems may provide functionality of utility to a driver or passengers, but
interaction with the technology will inevitably
increase (or in some cases decrease) overall workload. Context is very important here, as driving
is a task in which workload varies considerably
Equipment
The driving situation necessitates the use of input and output devices which are familiar to the
majority of user-interface designers (pushbuttons,
rockers, rotaries, LCDs, touchscreens, digitized
or synthesized speech), together with equipment
which is perhaps less known. For instance, there
is a considerable research literature regarding the
use of Head-Up Displays (HUDs) within vehicles.
A HUD uses projection technology to provide
virtual images which can be seen in the driver's
line of sight through the front windscreen (see
Figure 1). They are widely used within the aviation
and military fields, and are now beginning to be
implemented on a large-scale within road-based
vehicles. HUDs will potentially allow drivers to
continue attending to the road ahead whilst taking
in secondary information more quickly (Ward &
Parkes, 1994). As a consequence, they may be
most applicable to situations in which the visual
modality is highly loaded (e.g., urban driving),
and for older drivers who experience difficulties
in rapidly changing accommodation between near
and far objects (Burns, 1999).
From a human-focused perspective, there are
clear dangers in simply translating a technology
from one context to another, given that vehicle-based HUDs will be used by people of varying
perceptual and cognitive capabilities within an
Environments
The physical environment is also a specific area
that designers need to be aware of. In particular,
the light, sound, thermal and vibration environment within a car can be highly variable. A range
of design requirements will emerge from a consideration of these factors, for instance, potential
for glare, problems with speech interfaces, use
with gloves, and so on.
From anthropometric and biomechanical perspectives, the vehicle cabin environment provides
many challenges for designers. This is an area in
which designers make considerable use of CAD
modeling to analyze different locations for displays and controls, ultimately aiming to ensure
good fit for the design population. However, drivers sit in a constrained posture, often for several
hours and have limited physical mobility (e.g., to
comfortably view displays or reach controls). Consequently, there is limited space within a vehicle
for the placement of a physical user-interface, a
2.
3.
Figure 2. Environments for evaluation of in-car computing devices and the relationship between validity and control (from real-road field trials, which give increasing confidence that data correspond to real phenomena, towards settings that give increasing control of variables and replication)
Field Trials
Participants are given a car fitted with an operational system for several months for use in
everyday activities. This method tends to look
at broad issues relating to the long-term use of a
system, for example, drivers' acceptance of the
technology, and whether any behavioral adaptation effects arise. Objective data can be measured
using on-board instrumentation (e.g., cameras,
speed sensors) whereas subjective data is often
captured using survey or interview-based approaches. Clearly, such a method provides an ecologically valid test of a system, and is particularly
Table 1. Overview of methods used to evaluate the user-interface for in-car computing systems

Field trials. Task manipulations: multi-task (according to driver motivation). Measures: primary/secondary task performance/behavior, user opinions, etc. Primary advantages: ecological validity, can assess behavioral adaptation. Primary disadvantages: resource intensive, ethical/liability issues to consider.

Road trials. Task manipulations: multi-task (commonly, evaluator-manipulated). Measures: primary/secondary task performance/behavior, user opinions, etc. Primary advantages: balance of ecological validity with control. Primary disadvantages: resource intensive, ethical/liability issues to consider.

Simulator trials. Environment: virtual driving environment (varying in fidelity). Task manipulations: multi-task (commonly, evaluator-manipulated). Measures: primary/secondary task performance/behavior, user opinions, etc. Primary advantages: control over variables, safe environment, cost-effective. Primary disadvantages: validity of driver behavior, simulator sickness.

Occlusion. Environment: laboratory/statically in car. Task manipulations: secondary task achieved in controlled visual experience. Measures: visual demand of user-interface. Primary advantages: standardized approach, control over variables.

Peripheral detection. Environment: road/virtual driving environment. Task manipulations: multi-task (although commonly, evaluator-manipulated). Measures: visual/cognitive workload. Primary advantages: assesses cognitive, as well as visual, demand. Primary disadvantages: can be resource intensive, range of approaches.

Lane change task. Task manipulations: multi-task motorway driving scenario. Primary advantages: standardized approach, control over variables. Primary disadvantages: difficult to relate results to interface characteristics.

15 second rule. Task manipulations: secondary task achieved without presence of driving task. Primary advantages: simple approach. Primary disadvantages: only relates to certain aspects of visual demand.

Keystroke-Level Model (KLM). Environment: modeling exercise. Primary advantages: quick/cheap, analysis explains results. Primary disadvantages: only relates to certain aspects of visual demand.

Extended KLM. Environment: modeling exercise. Measures: visual demand of user-interface. Primary disadvantages: requires reliability assessments.
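Since the table mentions the Keystroke-Level Model, a minimal sketch of how a KLM estimate is formed is given below. The operator times are the commonly cited approximate values from Card, Moran, and Newell, used here as assumptions, and the operator sequence is an invented example for a hypothetical in-car destination-entry step rather than a task from this chapter.

```python
# Minimal Keystroke-Level Model (KLM) estimate for a hypothetical in-car task
# (entering a 5-character destination name). Operator times are commonly cited
# approximate values; treat them as assumptions.
OPERATOR_TIMES_S = {
    "K": 0.20,   # keystroke / button press
    "P": 1.10,   # point to a target (e.g., touchscreen)
    "H": 0.40,   # home hand between controls
    "M": 1.35,   # mental preparation
}

# Invented operator sequence: home to the panel, think, point-and-press
# five characters, then confirm.
sequence = ["H", "M"] + ["P", "K"] * 5 + ["M", "P", "K"]

estimate_s = sum(OPERATOR_TIMES_S[op] for op in sequence)
print(f"predicted static task time: {estimate_s:.1f} s")
```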
Road Trials
Drivers take part in a short-term (normally less
than one day) focused study using a system in
an instrumented car on public roads (occasionally on test tracks). For such trials, a wide range
of variables may be measured and analyzed
(e.g., visual behavior, workload, vehicle control,
subjective preference) depending on the aims of
the study. Road trials enable more experimental
control than field trials, but are still potentially
affected by a wide range of confounding variables
(e.g., traffic conditions, weather). Furthermore,
such a method remains costly to implement and
requires robust protocols to ensure the safety of
all concerned. Many road trials are reported in
the literature, particularly concerning information
and entertainment/productivity oriented systems.
For instance, Burnett and Joyner (1997) describe
a study which evaluated two different user-interfaces for vehicle navigation systems.
Simulator Trials
Drivers take part in a short-term (normally less
than one day) focused study using a system fitted
or mocked up within a driving simulator. The
faithfulness that a simulator represents the driving
task (known as its fidelity) can vary considerably,
and configurations range from those with single
computer screens and game controller configurations, through to real car cabins with multiple
projections and motion systems. An example of
a medium fidelity driving simulator is shown in
Figure 3.
Driving simulators have become increasingly
popular in recent years as a result of reduced
hardware and software costs, and potentially offer
an extremely cost-effective way of investigating
many different design and evaluation issues in a
safe and controlled environment (Reed & Green,
1999). Nevertheless, there are two key research
issues concerning the use of driving simulators.
Firstly, it is well known that individuals can experience symptoms of sickness in driving simulators,
manifested as feelings of nausea, dizziness, and
headaches. There has been considerable research
regarding such sickness in virtual environments,
and whilst there is still debate regarding the
Occlusion
This is a laboratory-based method which focuses
on the visual demand of in-vehicle systems. Participants carry out tasks with an in-vehicle system
(stationary within a vehicle or vehicle mock up)
whilst wearing computer-controlled goggles with
LCDs as lenses which can open and shut in a
precise manner (see Figure 4). Consequently, by
stipulating a cycle of vision for a short period of
time (e.g., 1.5 seconds), followed by an occlusion
interval (e.g., 1.5 seconds), glancing behaviour
is mimicked in a controlled fashion. Occlusion
offers a relatively simple method of predicting
visual demand, but it has been pointed out that
its emphasis on user trials and performance data
means that it requires a robust prototype and is
therefore of limited use early in the design process
(Pettitt et al., 2006).
Figure 4. The occlusion method with participant wearing occlusion goggles, with shutters open (left)
and closed (right)
Following considerable research, the occlusion method has recently been formalized as an
international standard (ISO, 2005). In particular,
guidance is given on how many participants are
required, how much training to give, how many
task variations to set, data analysis procedures, and
so on. Moreover, two key metrics are stipulated:
total shutter open time (the total time required
to carry out tasks when vision is available); and
resumability (the ratio of total shutter open time
to task time when full vision is provided). For
resumability, there is considerable debate regarding the merit of the measure. Advocates believe
the metric provides an indication of the ease by
which a task can be resumed following a period
without vision (Baumann et al., 2004). Critics
point out that the metric is also influenced by the
degree to which participants are able to achieve
tasks during occluded (non-vision) periods (Pettitt et al., 2006). Consequently, it can be difficult
for a design team to interpret the results of an
occlusion trial.
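To make the two ISO metrics concrete, the sketch below computes them from an invented occlusion log for a single task: the open-shutter intervals actually used and the total task time measured once with full vision. The numbers are illustrative, and no acceptance threshold is applied since none is quoted here.

```python
# Illustrative computation of the two occlusion metrics described above,
# using invented numbers for a single task.
shutter_open_intervals = [1.5, 1.5, 1.5, 0.9]   # seconds of vision used per open period;
                                                # the task ends part-way through the last one
total_shutter_open_time = sum(shutter_open_intervals)                  # TSOT

total_task_time_full_vision = 4.2               # the same task with unrestricted vision (s)
resumability = total_shutter_open_time / total_task_time_full_vision   # R ratio

print(f"TSOT = {total_shutter_open_time:.1f} s")
print(f"R    = {resumability:.2f}")
```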
workload and distraction associated with secondary tasks (Young et al., 2003). The advantage of
this method over occlusion is that it offers an assessment of cognitive, as well as visual demand (of
relevance to the assessment of speech interfaces,
for instance). The primary disadvantage is that the
method still requires some form of driving task.
Moreover, in contrast with occlusion, the method
has not been fully standardized, and the ability to
make cross study comparisons is severely limited
by the specific choice of driving task scenarios
(affecting task load and the conspicuity of the
peripheral stimuli). It has also been noted that it
is very difficult to discern between the level of
cognitive demand and the visual demand for a
given user-interface (Young et al., 2003).
An interesting recent development addresses
some of these limitations. Engström, Åberg, and
Johansson (2005) considered the potential for
the use of a haptic peripheral detection task,
where drivers respond to vibro-tactile stimulation through the wrist whilst interacting with an
in-vehicle system. Clearly, such a variation of
peripheral detection is not affected by variations
in lighting conditions. Furthermore, the authors
argue on the basis of their validation work that
this method provides "a pure measure of cognitive load not mixed up with the effect of simply looking away" (p. 233).
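As an illustration of how peripheral detection data are typically reduced, the sketch below pairs each stimulus onset with the first response inside a response window and reports the miss rate and mean reaction time; the 2-second window and all timestamps are assumptions made for the example, not values from the studies cited.

```python
# Illustrative reduction of peripheral detection task (PDT) logs. The response
# window and all timestamps are invented for this example.
stimulus_onsets = [2.0, 7.5, 13.1, 19.4, 25.0]     # seconds
response_times  = [2.6, 8.1, 20.3, 25.9]           # seconds

WINDOW_S = 2.0                                     # assumed response window
hits, reaction_times = 0, []
for onset in stimulus_onsets:
    candidates = [r - onset for r in response_times if 0 < r - onset <= WINDOW_S]
    if candidates:                                 # first response inside the window counts as a hit
        hits += 1
        reaction_times.append(min(candidates))

miss_rate = 1 - hits / len(stimulus_onsets)
mean_rt_ms = 1000 * sum(reaction_times) / len(reaction_times)
print(f"miss rate = {miss_rate:.2f}, mean RT = {mean_rt_ms:.0f} ms")
```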
15 Second Rule
Participants carry out tasks with an in-car computing system whilst stationary within a vehicle
or mock up (i.e., with no driving task) and with
full vision. The mean time to undertake a task is
considered to be a basic measure of how visually demanding it is likely to be when driving (Green,
1999). A cut-off of 15 seconds has been set by
the Society for Automotive Engineers (SAE). If
the task on average takes longer than 15 seconds
to achieve when stationary, it should not be allowed in a moving vehicle. The method is simple
to implement and has the key advantage that it
has been formalized in an SAE statement of best
practice (SAE, 2000).
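A minimal sketch of how the criterion might be applied is given below; the static task times are invented, and the only rule encoded is the 15-second cut-off on the mean static task time described above.

```python
from statistics import mean

# Invented static task times (seconds) for one in-car task, measured while
# stationary with full vision, one value per participant.
static_task_times = [11.2, 14.8, 17.5, 12.9, 16.0, 13.4]

SAE_LIMIT_S = 15.0                  # cut-off from the SAE best-practice statement
mean_time = mean(static_task_times)

verdict = "acceptable" if mean_time <= SAE_LIMIT_S else "should not be allowed in a moving vehicle"
print(f"mean static task time = {mean_time:.1f} s -> {verdict}")
```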
Research by Green (1999) and other research
teams (e.g., Pettitt et al., 2006) has shown strong
correlations between static task times and the
total amount of time spent looking away from the
road at displays/controls, both in simulator and
road studies. However, the correlation between
static task times and the duration of single glances
REFERENCES
Baumann, M., Keinath, A., Krems, J.F., & Bengler, K. (2004). Evaluation of in-vehicle HMI
using occlusion techniques: Experimental results
and practical implications. Applied Ergonomics,
35(3), 197-205.
Bishop, R. (2005). Intelligent vehicle technology
and trends. London: Artech House Publishers.
Blaauw, G.J. (1982). Driving experience and
task demands in simulator and instrumented
car: A validation study. Human Factors, 24(4),
473-486.
Burnett, G.E. (2000). Turn right at the traffic
lights. The requirement for landmarks in vehicle
navigation systems. The Journal of Navigation,
53(3), 499-510.
Burnett, G.E. (2003). A road-based evaluation
of a head-up display for presenting navigation
information. In Proceedings of HCI International
to the use of in-vehicle information and communication systems. Draft International Standard
ISO/DIS 16673. ISO/TC 22/SC 13.
KEY TERMS
Driver Distraction: Occurs when there is a
delay by the driver in the recognition of information necessary to safely maintain the lateral and
longitudinal control of the vehicle. Distraction may
arise due to some event, activity, object or person,
within or outside the vehicle that compels or tends
to induce the driver's shifting attention away
from fundamental driving tasks. Distraction may
compromise the driver's auditory, biomechanical,
cognitive or visual faculties, or combinations
thereof (Pettitt & Burnett, 2005).
Driving Simulators: Provide a safe, controlled
and cost-effective virtual environment in which
research and training issues related to driving can
be considered. Simulators vary considerably in
This work was previously published in Handbook of Research on User Interface Design and Evaluation for Mobile Technology,
edited by J. Lumsden, pp. 218-236, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference
(an imprint of IGI Global).
Chapter 2.12
Abstract
This chapter presents digital habitats, a conceptual
and methodological framework for analyzing
and designing smart appliances in the context
of pervasive computing. The concrete topic is
a project in pervasive gaming for children. The
framework consists of a set of theoretical concepts supplemented by diagrams for representing
semiformal models. We give a short overview
of selected theories of play and gaming and apply the framework to an implemented simple
pervasive game. Finally, we use the framework
in a constructive manner to produce a concrete
design of a new game. The result is discussed
and compared to other approaches. The main
points are the following: (a) it can describe communicative as well as material acts plus the way
they hang together; (b) it provides an explicit
link between human activities and their spatial
Introduction
In this chapter, we will present an approach to
analysis and design of computing systems that
transcends the boundaries of traditional office
computer systems such as PCs and laptops. These
transcending systems are called ambient, ubiquitous, or pervasive computing systems, and they
pose new challenges to the way we understand,
analyze, and design information technology.
With such systems, computing power spreads
from dedicated computing hardware into other
artifacts and places, both at the workplace and
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
On the Concept of
Intelligence
The first issue that must be discussed is what
"smart" and "intelligent" mean.
Intelligence
Since there is no universally accepted definition
of intelligence (Roth & Dicke, 2005), we accept
Gilbert Ryle's (1970) claim that these words denote
(a) the manner in which an action is performed
and (b) a prediction about the way other actions
are performed. In this case, intelligent does not
denote a special mental process that is the cause
of the action but rather a disposition generally to
act in a certain manner. What is intelligent depends upon the circumstances but often involves
features such as: the action achieves its goal, it
does not contain superfluous steps, it economizes
with resources, it does not destroy or hurt participants, it is an innovative way of solving a difficult
problem, and so forth.
This definition is similar to a prevalent view
in contemporary cognitive science, that "mental or behavioral flexibility is a good measure
of intelligence, resulting in the appearance of
novel solutions that are not part of the animal's
normal repertoire" (Roth & Dicke, 2005, p. 250).
We choose to focus on the behavioral aspect and,
thus, preclude ourselves from making inferences
about neuroanatomy or mental mechanisms. On
the other hand, this choice to focus strictly on
behavior allows us to use the word about humans
as well as artifacts without committing ourselves
to philosophical doctrines about the nature of the
mind (see Dennett, 1991).
Networks of Stupidity
This analysis is easier to verify with negative
predicates such as negligence, stupidity, and inability. For example, accident reports must point
to a participant that is guilty of the accident in
order to suggest future remedies against the type
of accident and because of insurance issues. Although accidents are mostly caused by a particular
configuration of participants in the network, the
report must point out the weak link in the chain.
However, this is often difficult and a matter of
interpretation.
Here is an example, due to PhD student Thomas
Koester: a ferry was fitted with a system that
compensated the heeling of the boat by moving
water in the ballast tanks. The system had a manual
and an automatic mode, but sometimes it would
unexpectedly go from automatic to manual mode.
The mode change was displayed on the bridge
and in a closed locker on the deck. The accident
occurred when the deck crew was emptying the
deck of its cargo of trucks and cars. When cargo
is removed from one side of the ship, it will heel,
and the system is supposed to compensate; in
this case, it had switched to manual. The result
was that the heeling was more than six degrees,
and the ramp was damaged. Who was to blame?
The crew? They knew about the fault; should they
Intelligent Technology
If we are to single out one participant of an intelligently conducted activity, it must be because its
contribution is particularly conspicuous. How do
we decide this? One way is a simple substitution:
if we keep the chain constant and replace one
participant, does the performance of the chain
become more or less intelligent? If the chain
performs less intelligently, we may tentatively
attribute intelligence to this part of the chain.
From these arguments follows a definition of
smart/intelligent technology:
1.
4.
5.
The contribution of the present chapter is threefold: (a) it explores the usefulness of the habitat
concept in the domain of pervasive games; (b) it
presents a framework for describing communicative and material activities and the way they
interact; and (c) it presents a notion of intelligent
or smart technology that is consistent with the
framework. The chapter mostly draws on semiotic
and linguistic theory, and our basic understanding
of the habitat concept is well-captured in Peirce's
semiotic triangle (Figure 1). The space itself and
its manufactured representations (e.g., signposts
and electronic displays) are representamens; the
interpretant of these signs is the activities associated to the habitat, and the object is the phenomena
inside the reference area (i.e., things or events that
are relevant to the activities).
Activities
In order to use definition (1) for design purposes,
we need a general functional definition of activi-
[Figure 1 (diagram): the semiotic triangle applied to habitats. Representamen: space and representations in the access area. Object: phenomena in the reference area. Interpretant: the activities associated with the habitat.]
7.
8.
9.
Applied to mechanical agents, comprehensibility (6) means that they should only form plans that
are understandable to their human and nonhuman
colleagues. Understandable algorithms have a
higher priority than cunning ones, but there is a
tradeoff between intelligibility and efficiency.
The intentions of the agent should be signaled
to the other participants in a way that allows them
to reliably infer intentions from behavior (accessibility, 7). In other words, there is a mutual
commitment to signal the truth, the whole truth,
and nothing but the truth in the least complicated
manner (equivalent to the so-called Gricean
maxims [Grice, 1975] from the pragmatics of
natural language). On the other hand, once the
deliberate representation of intentions has been
introduced, it is possible to misuse it to lie about
intentions as well.
Attention is intentionally directed perception
(Gibson & Rader, 1979; Tomasello, 1995). Joint or
shared attention (8) is necessary for cooperation:
If I cannot see what my colleague is concentrating
on, I cannot collaborate with him or her. However,
shared attention is more than just looking at the
same object or detecting gaze direction; it is the
mutual understanding between two intentional
agents that they are, indeed, intentional agents
(Tomasello, 1995) (see Dennett's, 1987, The
Intentional Stance). In order for me to share a
goal-directed activity with somebody, I not only
must be sure that the other participants have their
minds focused on the objects or topics I think of
myself, but all participants also must share the
fundamental assumption that activities can be
goal-directed and that they can be coordinated
by detecting and manipulating attention. In fact,
shared attention is a constituent factor in cultural
An example of a game that supports intermittent participation is LEGO Star Wars (www.
lego.com/starwars), which allows players to join
and leave a game in progress so that a parent can
step in and help a child but leave again without
having to play the entire game. When a player
leaves, the character continues as an independent
agent controlled by the software (at least one
player must be controlled by a human, however;
otherwise, the game ends). Massively Multiplayer
Online Games (MMOGs) (e.g., World of Warcraft
[www.worldofwarcraft.com]) are another example
of games built around intermittent participation. Mobile MMOGs exist, too, and are called
3MOGs (e.g., Undercover 2: Merc Wars [www.
undercover2.com]). Often, children can be seen
not only as mobile but also as nomadic, because
they are dependent on the resources and partners
offered by the environment (see the following
section). In this case, the proximate environment
has a marked influence on their patterns of play
(Brynskov, Christensen, Ludvigsen, Collins, &
Grønbæk, 2005).
The problem of sharing attention between humans and machines is that machines do not have
the ability to read attention, let alone intentional
stance (see, however, Kaplan & Hafner, 2004). In
the ferry incident, one of the problems was also
the lack of joint attention. The automatic pump
system switched to manual operation without
monitoring whether this important change was
brought to the attention of the crew. The crew, on
the other hand, could not monitor the attention of
the pump system and, therefore, failed to realize
that they were supposed to operate the pumps
manually. Thus, intermittent participation seems
to require joint attention.
People use language to a large extent to signal
their focus of attention. With machines in the
loop, this is not possible, not because computers
cannot produce or parse speech at all (they can
to some extent), but because their underlying
representations of attention and states do not map
easily onto those of their human partners. This is because
Door Openers
In this section, we illustrate the concepts used
to model activities by a widespread technology;
namely door openers. In addition, we offer some
simple diagramming techniques. Door openers
are good examples because they involve networks
of humans and nonhumans, and nonhumans play
an Agent role in the sense that they initiate and
monitor activities.
The activity concept is the one described
previously. We use a diagram consisting of actions, and we highlight the relations between the
actions by graphical means. Two actions can be
connected by arrows signifying dependencies
Figure 3. The activity of door-opening focusing on the door participant. Obl = obligation, abil = ability.
+ means increases, - means decreases
[Diagram content: participant Door; prevent: heat evaporates through door; achieve: persons pass through door; the actions "person walks towards door" and "door opens" are linked by edges annotated with [+obl], [+abil], and [-abil] effects.]
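To make the notation concrete, here is a minimal, invented sketch (ours, not the authors' tooling) of how the dependencies in Figure 3 could be represented: actions carry obligation and ability values, and executing an action propagates its + and - effects to the actions it influences.

from dataclasses import dataclass, field

@dataclass
class Action:
    name: str
    obligation: float = 0.0   # how obliged the responsible participant is to perform it
    ability: float = 0.0      # how able the responsible participant is to perform it
    effects: list = field(default_factory=list)   # (target_action, attribute, delta)

walks = Action("Person walks towards door")
opens = Action("Door opens")
passes = Action("Persons pass through door")

# A person approaching both obliges and enables the door to open;
# an open door enables passage (the heat-loss branch is left out of this sketch).
walks.effects += [(opens, "obligation", +1.0), (opens, "ability", +1.0)]
opens.effects += [(passes, "ability", +1.0)]

def execute(action):
    # Propagate the executed action's effects to the actions it influences.
    for target, attribute, delta in action.effects:
        setattr(target, attribute, getattr(target, attribute) + delta)

execute(walks)
execute(opens)
print(opens.obligation, passes.ability)   # 1.0 1.0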
physical space, if the space plays the role of location in the activity.
Habitats
In pervasive computing, the physical space and
its relations to the informational space become
important. Therefore, we need a diagram that
highlights the spatial properties of activities and
networks, in opposition to the previous section
in which we highlighted the relations between
actions.
Figure 5 shows a diagram that codes the spatial
participants graphically and tones down relations
between actions. We have selected the spatial
participants (i.e., spaces that participate in the
Location role of the activities); spatial participants
filling the Location role are called habitats. We
have shown two habitats: the shopping mall and the
entrance. To each habitat is associated the actions
that can be performed there: selling and paying
in the shopping mall, and walking through doors
at the entrance. In addition, we have represented
the signs involved in terms of (a) the area from
where they can be accessed (the access area) and
(b) the area containing the object denoted (the
reference area). Both areas are shown by means
[Diagram (caption lost in extraction): the door-opening actions "person walks towards door", "door opens", and "persons pass through door"; prevent: heat evaporates through door; achieve: persons pass through door.]
Figure 5. Shopping mall and entrance. #door means an instance of the class of doors
[Diagram content. Shopping mall. Roles: customer, clerk. Actions: clerk sells commodity to customer; customer pays price to clerk. Entrance. Roles: #door, pedestrian. Actions: #door opens; #door closes; pedestrian walks through #door.]
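A minimal sketch (names invented) of the structures behind Figure 5: each habitat records the roles and actions located in it, and each sign records the access area from which it can be perceived and the reference area it denotes.

from dataclasses import dataclass, field

@dataclass
class Sign:
    name: str
    access_area: str      # where the sign can be perceived
    reference_area: str   # where the phenomena it denotes are located

@dataclass
class Habitat:
    name: str
    roles: list
    actions: list
    signs: list = field(default_factory=list)

mall = Habitat(
    name="Shopping mall",
    roles=["customer", "clerk"],
    actions=["Clerk sells commodity to customer", "Customer pays price to clerk"],
)
entrance = Habitat(
    name="Entrance",
    roles=["#door", "pedestrian"],
    actions=["#door opens", "#door closes", "Pedestrian walks through #door"],
    signs=[Sign("door status display", access_area="entrance", reference_area="entrance")],
)

for habitat in (mall, entrance):
    print(habitat.name, "->", "; ".join(habitat.actions))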
Johan Huizinga's "free activity ... outside ordinary life"; and Jesper Juul's (2005):
A game is (1) a rule-based formal system with (2)
a variable and quantifiable outcome, where (3) the
different outcomes are assigned different values,
(4) the player exerts effort in order to influence
the outcome, (5) the player feels attached to the
outcome, and (6) the consequences of the activity
are optional and negotiable. (Juul, 2005, p. 3)
We distinguish between two types of playful
activities: playing and gaming. A game has fixed
rules and a quantifiable outcome (a clear goal),
whereas play has ad hoc negotiable rules and a
fuzzy outcome. A game can be embedded in play
(see Juul's point about the optional and negotiable
consequences of a game). Both types of activities
should be fun.
An important distinction is that between
emergent games and games of progression (Juul,
2005). In the former, the game is defined by a
small number of simple rules governing a possibly
large number of human or nonhuman participants.
An emergent game can be played many times,
since each round is different. Emergent games
invite planning and strategic thinking. Games of
progression are defined by a more or less fixed
sequence of events and often tell an elaborate
story. Like novels and films, they seldom can be
played more than once.
Our two games are games of emergence, but
in Bogeyman, we have tried to add narrative
traits.
The Technology
In the Nomadic Play project, we try to invent new
ways of having fun or being cool using digital
media, specifically pervasive and ubiquitous computing devices. Those devices are characterized by
being small enough to be carried around as other
small objects that surround children (e.g., toys,
phones, clothes, pencils, books, stones), or they
may be embedded in the environment like other
resources we know: in playgrounds, computers,
blackboards, and furniture. In essence, these new
digital artefacts and environments should be
designed in a way that allows them to become a
natural part of playful activities.
Compared to traditional artefacts, these devices present new opportunities. They can process
information, sense the environment via sensors,
interact with it physically through actuators, and
communicate with other devices through wireless networks (for early thoughts on ubiquitous
gaming in the context of construction kits, see
Sargent, Resnick, Martin, and Silverman's (1996)
list of "Twenty Things to Do With a Programmable Brick"). Apart from actually making the
technology work, one of the biggest challenges
is to find out how activities involving children,
artefacts, hardware, and software can be brought
to play in a way that feels natural and fun. The
digital media should not be a separate distraction
from the play, not in the sense that it should be
invisible, but rather that it should be an integrated
part of the activities. In order to test our ideas,
we have begun designing playful activities for
children involving pervasive computing.
Figure 6. (a) GPS unit and mobile phone; (b) phone interface with team (red dot) and star; (c) Loser Star screen presented to losing team
Figure 7. Diagram of the activity Playing StarCatcher (simple version) with each of the actions and
their relations seen from both teams
[Diagram content (including a second, extended diagram whose caption was lost in extraction): for each team A and B, the actions "is home", "walks", "catches S", "walks with S", "is home with S", and "wins"; the extended version adds "meets" the other team, "drops S", and "walks home".]
Table 1. The action walks with necessary (agent and action) and possible roles filled (goal and sociative); the action has no glue, since it is not a role filled by a participant

Role     Agent   Action   Goal     Sociative     Glue
Filler   A       walks    [home]   [with star]   (none)

(Legend: A = the child; O = the star. Further labels recovered from the adjacent diagram: catch star, walk home, walk with star, drop star, win, meet the other team.)
The Bogeyman
If all we wanted to do was to model locative
games as simple as StarCatcher, there would not
be much need for a framework. But with the expanding opportunities of pervasive gaming (due
to technological evolution and commoditization
of pervasive computing systems), a whole range
of complex issues arise that make formalized support for development of such systems more and
more difficult. We can design more sophisticated
games using the dynamic interplay of aspects at
different levels (e.g., physical, technological and
social) but at the cost of complexity. Addressing
this complexity at appropriate levels is an important motivation for developing our framework.
StarCatcher actually was implemented and
tested; Bogeyman is an extension of StarCatcher
and has not been implemented yet, although we
do present a core algorithm of the play. The purpose is to test our framework as a design support
tool; does it help produce interesting ideas for an
extended game?
[Diagram content (caption lost in extraction): actions of the Bogeyman game grouped by participant. Mice: enter/leave sewer, multiply, eat candy. Cats: catch/chase/eat mice, eat fish, run away. Dogs: chase/catch cats, chase/catch bogeyman, eat sausage. Children: eat/take candy, take fish, give fish to cats, take/give sausage to dogs, run away. Bogeyman: catches children, runs away.]
[Table partially recoverable from extraction. Columns: Mode, Scale, Use, Interaction, Technology. Recovered rows include: Proximity / 1:1 / Navigate / Walking / GPS; Global / 1:10 / ... / Cursor / GPS; Scan / 10:1 / ... / Menu / Bluetooth; further cells: Global, NPc, Proximity, Scan.]
Figure 12. Basic setup with two players, the bogeyman, some candy, and the home bases
[Diagram content: markers labelled s, b, and M for the candy, the bogeyman, the players, and the home bases (exact assignment unclear in extraction). A second diagram (caption lost) plots a player's real motion over time against a delayed avatar and an avatar on autopilot.]
[Diagrams (captions lost in extraction): two attitude triangles linking A: George, B: Al Qaeda, and C: Taleban via a relation z.]
In all of these very real examples, the following two simple rules have been used (≡ is logical
equivalence):
18. zt = (xt-1 ≡ yt-1)
19. zt = (xt-1 ≡ ¬yt-1)
z at time t is calculated as the equivalence
of the x and y (or ¬y) relations at time t-1. At the
same time, the rules provide explanations for the
attitudes: z holds because x and y hold. George
opposes the Taleban because they promote Al-Qaeda, and he opposes Al-Qaeda.
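The two rules are simple enough to state directly in code. The sketch below is ours, not the authors' implementation; True stands for a positive relation and False for a negative one.

def rule_18(x_prev, y_prev):
    # z_t = (x_{t-1} <=> y_{t-1})
    return x_prev == y_prev

def rule_19(x_prev, y_prev):
    # z_t = (x_{t-1} <=> not y_{t-1})
    return x_prev == (not y_prev)

# George (A) opposes Al-Qaeda (B); the Taleban (C) promote Al-Qaeda.
x = False   # A's relation to B is negative
y = True    # C's relation to B is positive
z = rule_18(x, y)
print(z)    # False: A's relation to C is negative, i.e., George opposes the Taleban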
There is one problem, however. As we have
already seen, we cannot assume that such networks of actors and actions are consistent in good
narratives. Therefore, we often will encounter
the phenomenon that the same relation will be
calculated differently if we use two different
triangles that share one side. This is the problem
of overwriting. George Bush endorses an occupation that leads to casualties; therefore, George
ought to be in favor of the casualties (rightmost
triangle in Figure 17). However, he also likes his
voters, who are opposed to casualties. Therefore,
George ought to be opposed to casualties (leftmost
triangle). If the leftmost triangle in Figure 17 wins,
George may change his attitude to the occupation
in order to avoid dissonance (e.g., begin opposing
it). If the rightmost triangle wins, he has to support casualties (which is the strategy of his op-
[Diagram (probably Figure 17; caption lost in extraction): triangles linking George, voters, casualties, and occupation.]
"PPA")
--- relations where a is an agent
and z = 0
--- relations where a is an action
set relation a/c to equiv(x, y)
end choose
end if
if y 0 and chance() then
--- here we know the inverse of y
if (thecase = "PAP" or thecase = "PPA"
or thecase ="PPP" or thecase ="PAA")
of b/a +
and Chance() then
578
end if
end if
end repeat
end if
end repeat
end repeat
end RemoveDissonance
[Diagram (caption lost in extraction): triangles linking George, casualties, and occupation.]
[Diagram (caption lost in extraction): the attitude network among mice, cats, children, and the bogeyman.]
Future Developments
The simulation does not deal with the balance
between reasoning about actions and executing them, although the conditions for executing
actions are well-defined (see (10) in the section
Activities).
The next step, therefore, is to orchestrate the
percolation of desires in the network and the
execution of these desires. One convention could
be that an action is not allowed to influence other
actions before it is executed; only when an action
is realized may it have effect on other actions. An
even stronger restriction is that only participants
who have experienced the execution of the action
are allowed to change their attitudes because of it.
In this way, the network changes would be much
easier to follow, and both solutions will reduce
the part of the network that the algorithm must
search and thereby decrease the complexity of the
algorithm, which regrettably is O(n³).
However, the two conventions are probably too
strong. On the one hand, communication probably should count as a kind of execution: when
the Bogeyman tells the mice that he doesn't like
them to eat the candy, this should make the mice
reconsider their positive attitudes toward the bogeyman with the same force as if he had actually
prevented them from eating candy. Similarly, a
promise from the children to give sausage to dogs
should make the dogs side with the children as if
they had actually given sausages to the dogs. On
the other hand, some degree of secret reasoning is
a good literary trick: it invites the reader/player
to reconstruct the possible reasons for unexpected
behavior.
But the whole play is, after all, staged for the
benefit of the children. In order for them to create
their tactics, it seems a good idea to inform the
children of the interactions between the software
Related Work
In this concluding section, we compare our
approach to a number of related approaches in
which notions like communicative acts, activity,
semantic roles, context, and pervasive games are
central:
Conclusion
Based on the comparisons in the previous section,
we can identify six areas in which the present
approach seems to present advantages:
Acknowledgment
The authors wish to thank the anonymous reviewers for their detailed critical remarks and good
suggestions. Part of this work has been supported
by Center for Interactive Spaces, ISIS Katrinebjerg
(Project #122). Thanks to our colleagues in the
research group Frameworks for Understanding
Software-Based Systems, which has been a great
inspiration for this chapter.
References
Andersen, P. B. (2004a). Habitats: Staging life and art. In Proceedings of COSIGN 2004: Computational Semiotics, Croatia.
Andersen, P. B. (2004b). Diagramming complex activities. In Proceedings of ALOIS 2004,
Linköping, Sweden. Retrieved from http://www.
vits.org/konferenser/alois2004/proceedings.asp
Andersen, P. B. (2004c). Anticipated activities
in maritime work, process control, and business
processes. In K. Liu (Ed.), Virtual, distributed
and flexible organizations: Studies in organizational semiotics (pp. 35-60). Dordrecht: Kluwer
Academic Publishers.
Andersen, P. B. (2005). Things considered harmful. In Proceedings of ALOIS 2005, Limerick.
Andersen, P. B., & Nowack, P. (2002). Tangible
objects: Connecting informational and physical
space. In L. Qvortrup (Ed.), Virtual space: Spatiality of virtual inhabited 3D worlds (pp. 190-210).
London: Springer Publishers.
Andersen, P. B., & Nowack, P. (2004). Modeling
moving machines. In P. B. Andersen & L. Qvortrup (Eds.), Virtual applications: Applications
Russell, S., & Norvig, P. (2003). Artificial intelligence: A modern approach. Upper Saddle River,
NJ: Prentice Hall.
Ryan, M. L. (1991). Possible worlds, artificial
intelligence and narrative theory. Bloomington:
Indiana University Press.
Ryle, G. (1970). The concept of mind. London:
Hutchinson.
Salen, K., & Zimmerman, E. (2004). Rules of
play. Cambridge, MA: MIT Press.
Sargent, R., Resnick, M., Martin, F., & Silverman,
B. (1996). Building and learning with programmable bricks. In Y. Kafai & M. Resnick (Eds.),
Constructionism in practice: Designing, thinking,
and learning in a digital world. Hillsdale, NJ:
Lawrence Erlbaum.
Steels, L. (2003). Evolving grounded communication for robots. Trends in Cognitive Science,
7(7), 308-312.
This work was previously published in Semiotics and Intelligent Systems Development, edited by R. Gudwin, J. Queiroz, pp.
211-255, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.13
ABSTRACT
In urban residential environments in Australia
and other developed countries, Internet access
is on the verge of becoming a ubiquitous utility
like gas or electricity. From an urban sociology
and community informatics perspective, this
article discusses new emerging social formations
of urban residents that are based on networked
individualism and the potential of Internet-based
systems to support them. It proposes that one of
the main reasons for the disappearance or nonexistence of urban residential communities is a lack
of appropriate opportunities and instruments to
encourage and support local interaction in urban
neighborhoods. The article challenges the view
Analyzing the Factors Influencing the Successful Design and Uptake of Interactive Systems
Introduction
The area of technology and human interaction
is cross-disciplinary and requires many different academic fields and design practices to work
together effectively in order to generate a better
understanding of the social context and human
factors in technology design, development, and
usage. This article focuses on the social communication aspects of this field and hopes to
establish a greater awareness of the contribution
that community media and communication studies can deliver to
the field of human computer interaction. It seeks
to build a theoretical foundation for an analysis
of two interrelated issues, which are discussed
in turn.
First, the importance of place and the continued
purpose and relevance of urban neighborhoods are
established. New media and networked information and communication technologies have not led
to the diminishment of local place and proximity. However, they have given rise to new types
of social interaction and to new emerging social
formations. Understanding the nature and quality
of interaction in these new social formations can
inform the successful animation of neighborhood
community and sociability.
Second, appropriate opportunities and instruments to encourage and support local interaction
in urban neighborhood networks are not limited
to technology, but technology can be a key facilitator. Thus, system designers and engineers are
crucial allies to social scientists in the search for
hybrid methodologies that integrate community
development approaches with technology design.
The article questions whether it is sufficient to
appropriate tools originally designed for dispersed online (that is, virtual) communities in
the context of community networks (Schuler,
1996) for urban neighborhoods. Purpose-built
tools and instruments are required that afford
are opportunities to design and develop purpose-built systems from the ground up, which, instead
of merely trying to make ends meet, take into
account the unique requirements of the social- and
place-based context in which they are used. Tools
to animate and network urban neighborhoods
require a consideration and treatment of notions
of sociability, place, privacy, and proximity in
order to take full advantage of the communicative opportunities that this environment offers its
inhabitants and the wider society.
Place matters:
Communication and
interaction in urban
neighborhoods
Tönnies' (1887) idea of community as Gemeinschaft implies a well-connected, place-based,
collective, village-like community. However,
this notion of community represents an overly
romanticized image of community and ignores
more contemporary forms of community that have
been explored by recent sociological studies (Wellman, 2001, 2002). Gemeinschaft might resemble
Hobbiton in the Shire described by Tolkien (1966).
This communitarian notion (de Tocqueville, 2000;
Etzioni, 1995) is still referred to frequently in the
community development literature, although the
homogeneous, egalitarian, and all-encompassing
nature of Gemeinschaft is a utopian ideal that
is less and less compatible with contemporary
characteristics of community as social networks
in today's network society.
Before the advent of modern information and
communication technology, human interaction
was limited by the reach of the physical presence of self or the representations of self (e.g.,
letters and photographs) and available means
of transportation. The need to socialize and to
communicate was usually satisfied with family
members in the same household, with friends and
peers nearby, at work, or within the vicinity of the
example, use their mobile phones to arrange meeting places on the spot; this could be the local café,
the shopping mall, or someone's home (Satchell,
2003). This emerging behavior introduces challenges to conventional understandings of place
and public places and opens up opportunities for
residential architecture, town planning, and urban
design (Castells, 2004; Florida, 2003; Grosz, 2001;
Horan, 2000; Mitchell, 2003; Oldenburg, 2001;
Walmsley, 2000).
In a lively online discussion about the continued purpose and relevance of neighborhood
communities, one participant (eric_brissette,
2004) illustrates the point that having less exposure to neighbors (as opposed to coworkers or
friends) does not mean that it is less likely that
there are, in fact, prospective friends living in
the neighborhood:
I guess it all depends on where you live. I live in
a rural town of about 10,000. Most people say
"hello" or "good morning" to you as you pass
them on the sidewalk. I can't say I've known all
of my neighbors well, but I have at least spoken
with them enough to know a bit about who they
are. Visiting larger cities like Boston or New York
makes me feel weird. Nobody looks you in the eye,
and everyone seems constantly pissed off, almost
like everyone is scared of everyone else ... yet this
all seems perfectly normal to them. ... Chances are
good that there are people in your neighborhood
that share your [interests] or are at least [compatible] at the personality level who you wouldn't
normally interact with on a daily basis.
In today's networked society, it is questionable to project the image of the rural village and
use it as a best practice urban village model for
a city because of inherent differences between
both places and their inhabitants. Yet, the specific
characteristics of a city can give rise to a different model of urban village that acknowledges
the potential opportunities that this particular
environment offers its residents. For example,
Community networks in
urban neighborhoods
Arnold (2003) states that, for the ordinary citizen,
social interaction is "the killer application" of the
Internet (p. 83). This development has sparked
an increased interest among researchers from a
range of disciplines to investigate online communication and online communities (Preece, 2000).
Yet, the majority of the work undertaken so far in
this research field focuses on globally dispersed
online (virtual) communities and not on the use
of information and communication technology for
communities of place (Papadakis, 2004).
There is a small but growing body of literature
that reports on the use of information and communication technology for community development in place-based contexts, mostly within the
emerging discipline that Gurstein (2000, 2001)
terms community informatics. However, most of
these accounts investigate communities that are
in one way or another deprived (e.g., telecenters
or community access centers in rural and remote
locations; ICT for development and poverty reduction in developing countries). The transferability
of these studies to urban settings is questionable.
Urban dwellers may think of themselves as being
quite well-off and may lack common disadvantages, such as low income or unemployment. Such
instances of deprivation could contribute to shared
agony, which ultimately may help to establish a
collective need for change (Foth, 2004b) and, thus,
a reason to make use of technology for action
and change. In its absence, however, alternative
motivations to form neighborhood community
need to be found.
Today, the value of door-to-door and place-to-place relationships in urban neighborhoods seems
to be on the decline. Researchers and practitioners
References
Arnold, M. (2003). Intranets, community, and
social capital: The case of Williams Bay. Bulletin
of Science, Technology & Society, 23(2), 78-87.
Arnold, M., Gibbs, M. R., & Wright, P. (2003).
Intranets and local community: Yes, an intranet
is all very well, but do we still get free beer and
a barbeque? In M. Huysman, E. Wenger, & V.
Wulf (Eds.), Proceedings of the First International
Conference on Communities and Technologies
(pp. 185-204). Amsterdam, NL: Kluwer Academic
Publishers.
Barabási, A.-L. (2003). Linked: How everything
is connected to everything else and what it means
for business, science, and everyday life. New
York: Plume.
Butler, B. S. (2001). Membership size, communication activity, and sustainability. Information
Systems Research, 12(4), 346-362.
Carroll, J. M., & Rosson, M. B. (2003). A trajectory for community networks. The Information
Society, 19(5), 381-394.
Castells, M. (2001). The Internet galaxy: Reflections on the Internet, business, and society.
Oxford: Oxford University Press.
Foth, M., & Brereton, M. (2004, November 2024). Enabling local interaction and personalised
networking in residential communities through
action research and participatory design. In P.
Hyland, & L. Vrazalic (Eds.), Proceedings of
OZCHI 2004: Supporting Community Interaction.
Wollongong, NSW: University of Wollongong.
Gilchrist, A. (2000). The well-connected community: Networking to the edge of chaos. Community Development Journal, 35(3), 264-275.
Hampton, K. N. (2003). Grieving for a lost network: Collective action in a wired suburb. The
Information Society, 19(5), 1-13.
Hampton, K. N., & Wellman, B. (2003). Neighboring in netville: How the Internet supports
community and social capital in a wired suburb.
City and Community, 2(4), 277-311.
Gillespie, A., & Richardson, R. (2004). Teleworking and the city: Myths of workplace transcendence and travel reduction. In S. Graham (Ed.),
The cybercities reader (pp. 212-218). London:
Routledge.
Horrigan, J. B. (2001). Cities online: Urban development and the Internet. Washington, DC: Pew
Internet & American Life Project.
Horrigan, J. B., Rainie, L., & Fox, S. (2001).
Online communities: Networks that nurture long-distance relationships and local ties. Washington,
DC: Pew Internet & American Life Project.
Huysman, M., & Wulf, V. (Eds.). (2004). Social
capital and information technology. Cambridge,
MA: MIT Press.
Jankowski, N. W., Van Selm, M., & Hollander, E.
(2001). On crafting a study of digital community
networks: Theoretical and methodological considerations. In L. Keeble, & B. D. Loader (Eds.),
Community informatics: Shaping computer-mediated social relations (pp. 101-117). New York:
Routledge.
Putnam, R. D. (2000). Bowling alone: The collapse and revival of American community. New
York: Simon & Schuster.
Quan-Haase, A., Wellman, B., Witte, J. C., &
Hampton, K. N. (2002). Capitalizing on the Net:
Social contact, civic engagement, and sense of
community. In B. Wellman, & C. A. Haythornthwaite (Eds.), The Internet in everyday life (pp.
291-324). Oxford: Blackwell.
Rheingold, H. (2002). Smart mobs: The next social
revolution. Cambridge, MA: Perseus.
Satchell, C. (2003, November 26-28). The swarm:
Facilitating fluidity and control in young people's
use of mobile phones. In S. A. Viller, & P. Wyeth
(Eds.), Proceedings of OZCHI 2003: New directions in interaction, information environments,
media and technology. Brisbane, QLD: University
of Queensland.
Schuler, D. (1996). New community networks:
Wired for change. New York: ACM Press.
Schuler, D., & Namioka, A. (Eds.). (1993). Participatory design: Principles and practices. Hillsdale,
NJ: Lawrence Erlbaum Associates.
Tolkien, J. R. R. (1966). The lord of the rings (2nd
ed.). London: Allen & Unwin.
Tönnies, F. (1887). Gemeinschaft und Gesellschaft
(3rd ed.). Darmstadt, Germany: Wissenschaftliche
Buchgesellschaft.
Walmsley, D. J. (2000). Community, place and
cyberspace. Australian Geographer, 31(1), 5-19.
Watters, E. (2003). Urban tribes: Are friends the
new family? London: Bloomsbury.
Watts, D. J. (2003). Six degrees: The science of a
connected age. New York: Norton.
Wellman, B. (2001). Physical place and cyberplace:
The rise of personalized networking. International
Journal of Urban and Regional Research, 25(2),
227-252.
This work was previously published in International Journal of Technology and Human Interaction 2(2), edited by B. Stahl, pp.
65-82, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.14
ABSTRACT
This article describes a study clarifying information systems (IS) designers' conceptions of human
users of IS by drawing on in-depth interviews
with 20 designers. The designers' lived experiences in their work build up a continuum of
levels of thought from more limited conceptions
to more comprehensive ones reflecting variations
of the designers situated knowledge related to
human-centred design. The resulting forms of
thought indicate three different but associated
levels in conceptualising users. The separatist
form of thought provides designers predominantly
with technical perspectives and a capability for
objectifying things. The functional form of
thought focuses on external task information and
task productivity, nevertheless, with the help of
positive emotions. The holistic form of thought
provides designers with competence of human-
[Figure (caption lost in extraction): labels recovered are "external horizon", "referential aspect", and "internal horizon".]
goal of constructing computer interfaces with human-like features: the interaction between people
and computers is then envisaged as enriched with
dialogues conveying both the rational and emotional meanings of the information in question
(e.g., Nakazawa, Mukai, Watanuki & Miyoshi,
2001). Respectively, the depictions of various human features in technology reveal understandings
suggesting that human features built into technology
make the interaction between users and IS
resemble the interplay of cognitive, emotional,
and social aspects that occurs between humans:
R: What kind of user interface do you think that
people would want to use?
D4: I strongly believe that 3D interfaces are coming. They could offer kind of human-like facial
features as agents, which would bring a human
sense to the systems. The third dimension could
also be utilised so that interfaces become tangible
and accessible.
Further, the context-centred conception of
the human being as an organisational learner,
which highlights people as organisations which
learn about their own work processes, refers indirectly to learning, which stresses both cognitive
and social human features. Collective cognitive
features are referred to as an organisation's ability
to form new insights into its work processes and
to guide the deployment of IS effectively (Robey,
Boudreau & Rose, 2000). A social dimension is
also implied when it is assumed that people learn
as an organisation:
D8: Needs are prone to change rapidly, especially
after the implementation of the system, because
they teach an organisation a lot about itself, and
an organisation's self-knowledge increases and
usually needs change in a more clever direction.
Then there very quickly happens a sort of learning leap, which is often experienced as if the
system is not valid at all although it is a question
References
Avison, D.E., & Fitzgerald, G. (1994). Information
systems development. In W. Currie & R. Galliers (Eds.), Rethinking management information
systems: An interdisciplinary perspective (pp.
250-278). Oxford: Oxford University Press.
Gummerus.
Orlikowski, W.J. (1992). The duality of technology: Rethinking the concept of technology
This work was previously published in International Journal of Technology and Human Interaction 3(1), edited by B. Stahl, pp.
30-48, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.15
INTRODUCTION
Daily use of computer systems often has been
hampered by poorly designed user interfaces.
Since the functionality of a computer system is
made available through its user interface, its design has a huge influence on the usability of these
systems (Carroll, 2002; Preece, 2002). From the
users perspective, the user interface is the only
visible and, hence, most important part of the
computer system; thus, it receives high priority
in designing computer systems.
A plea for human-oriented design in which the
potentials of computer systems are tuned to the
intended user in the context of their utilization
has been made (Rosson & Carroll, 2002).
An analysis of the strategies that humans use
in performing tasks that are to be computer-supported is a key issue in human-oriented design
of user interfaces. Good interface design thus
requires a deep understanding of how humans
perform a task that finally will be computer-supported. These insights then may be used to
design a user interface that directly refers to their
information processing activities. A variety of
BACKGROUND
In this section, we describe how the think aloud
method can be used to analyze a user's task
behavior in daily life situations or in interaction
with a computer system and how these insights
may be used to improve the design of computer
systems. Thereafter, we will go into the pros and
cons of the think aloud method.
is that the subject performs the task at hand, possibly supported by a computer, and says out loud
what comes to mind.
A typical instruction would be, "I will give
you a task. Please keep talking out loud while
performing the task." Although most people do
not have much difficulty rendering their thoughts,
they should be given an opportunity to practice
talking aloud while performing an example task.
Example tasks should not be too different from the
target task. As soon as the subject is working on the
task, the role of the instructor is a restrained one.
Interference should occur only when the subject
stops talking. Then, the instructor should prompt
the subject by the following instruction: "Keep
on talking" (Ericsson & Simon, 1993).
Full audiotaping and/or videorecording of
the subject's concurrent utterances during task
performance and, if relevant, videorecording of
the computer screens are required to capture all
the verbal data and user/computer interactions
in detail. After the session has been recorded, it
has to be transcribed. Typing out complete verbal
protocols is necessary to be able to analyze the data
in detail (Dix et al., 1998). Videorecordings may
be viewed informally, or they may be analyzed
formally to understand fully the way the subject
performed the task or to detect the type and number
of user-computer interaction problems.
The use of computer-supported tools that are
able to link the verbal transcriptions to the cor-
Figure 1. Excerpt from a coded verbal protocol for analyzing humans' task behavior
[Table content partially lost in extraction. Codes recovered: NPSCR04, MBT012 (explanation: "Meaning of button012"), RTACT002, VSSACT006, MSSACT009; the remaining explanations are missing.]
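As a small illustration of what such a coded protocol can feed into, the sketch below tallies code frequencies over a transcribed and segmented protocol; the utterances are invented, while the code labels are taken from Figure 1.

from collections import Counter

# (utterance segment, assigned code) pairs from a hypothetical transcript
coded_protocol = [
    ("okay, I open the next screen",      "NPSCR04"),
    ("what does this button mean again",  "MBT012"),   # meaning of button 012
    ("now I carry out the action",        "RTACT002"),
    ("what does this button mean",        "MBT012"),
]

code_counts = Counter(code for _, code in coded_protocol)
for code, count in code_counts.most_common():
    print(f"{code}: {count}")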
FUTURE TRENDS
The think aloud method is propagated and far more
often used as a method for system usability testing
than as a user requirements eliciting method. In
evaluating (prototype) computer systems, thinking aloud is used to gain insight into end users'
usability problems in interaction with a system
to better the design of these systems. The use of
think aloud and video analyses, however, may be
helpful not merely in evaluating the usability of
(prototype) computer systems but also in analyzing in detail how end users tackle tasks in daily
life that in the end will be computer-supported.
The outcomes of these kinds of analyses may
be used to develop a first version of a computer
system that directly and fully supports users in
performing these kinds of tasks. Such an approach
may reduce the time spent in iterative design of
the system, as the manner in which potential
end users process tasks is taken into account in
building the system.
Although a deep understanding of users' task
behaviors in daily settings is indispensable in
designing intuitive systems, we should keep in
mind that the implementation of computer applications in real-life settings may change and
may have unforeseen consequences for work
practices. So, besides involving potential user
groups in an early phase of system design and in
usability testing, it is crucial to gain insight into
how these systems may change these work practices to evaluate whether and how these systems
are being used. This adds to our understanding
of why systems may or may not be adopted into
routine practice.
Today, a plea for qualitative studies for studying a variety of human and contextual factors that
likewise may influence system appraisal is made
in literature (Aarts et al., 2004; Ammenwerth et
al., 2003; Berg et al., 1998; Orlikowski, 2000; Patton, 2002). In this context, sociotechnical system
design approaches are promoted (Aarts et al., 2004;
Berg et al., 1998; Orlikowski, 2000). Sociotechnical system design approaches are concerned not
only with human/computer interaction aspects of
system design but also take psychological, social,
technical, and organizational aspects of system
design into consideration. These approaches
References
CONCLUSION
KEY TERMS
Cognitive Task Model: A model representing the cognitive behavior of people performing
a certain task.
Sociotechnical System Design Approach:
System design approach that focuses on a sociological understanding of the complex practices in
which a computer system is to function.
Think Aloud Method: A method that requires
subjects to talk aloud while solving a problem or
performing a task.
User Profile: A description of the range of relevant skills of potential end users of a system.
Verbal Protocol: Transcription of the verbal
utterances of a test person performing a certain
task.
Verbal Protocol Analysis: Systematic
analysis of the transcribed verbal utterances to
develop a model of the subject's task behavior
that then may be used as input to system design
specifications.
Video Analysis: Analysis of videorecordings of the user/computer interactions with the
aim to detect usability problems of the computer
system.
This work was previously published in Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 597-602, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 2.16
Lessons Learned in
Designing Ubiquitous
Augmented Reality
User Interfaces
Christian Sandor
Technische Universität München, Germany
Gudrun Klinker
Technische Universität München, Germany
Introduction
In recent years, a number of prototypical demonstrators have shown that augmented reality has the
potential to improve manual work processes as
much as desktop computers and office tools have
improved administrative work (Azuma et al., 2001;
Ong & Nee, 2004). Yet, it seems that the classical
concept of augmented reality is not enough (see
also http://www.ismar05.org/IAR). Stakeholders
in industry and medicine are reluctant to adopt
it wholeheartedly due to current limitations of
head-mounted display technology and due to
the overall dangers involved in overwhelming a
user's view of the real world with virtual information. It is more likely that moderate amounts
of augmented reality will be integrated into a
Background
In this section, we provide an overview of the
current use of UAR-related interaction techniques
and general approaches toward systematizing the
exploration of design options.
PAARTI
In the PAARTI project (practical applications of
augmented reality in technical integration), we
have developed an intelligent welding gun with
BMW that is now being used on a regular basis
to weld studs in the prototype production of cars
(Echtler et al., 2003). It exemplifies the systematic exploitation of constraints between task and
system criteria.
The task was to assist welders in positioning
the tip of a welding gun with very high precision
at some hundred predefined welding locations on
a car body. The main system design issue was to
find an immersive solution with maximal precision. An AR-based system would need a display
(D), a tracking sensor (S), and some markers (M)
FataMorgana
In the FataMorgana project, we have developed
an AR-based prototypical demonstrator for
designers at BMW, helping them compare real
mock-ups of new car designs with virtual models
(Klinker et al., 2002). The rationale for building
this system was that, although the importance of
digital car models is increasing, designers have
not yet committed wholeheartedly to a VR-based
approach but rather prefer relying on physical
Figure 2. Part of the design space for the intelligent welding gun (Reprinted with permission of Springer
Science and Business Media, Echtler et al., 2003, Figures 17.6, 17.7, and 17.8)
[Diagram content: feasibility ratings (values 3 and 4 recoverable) over the placement of the tracking sensors (none, room, user, tool) and of the markers (none, room, user, tool).]
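A small sketch (scores and names invented; the project's actual ratings are the ones in Figure 2) of how such a constrained design space can be enumerated: sensor and marker placements are combined, infeasible pairs are filtered out, and the remainder is ranked.

PLACEMENTS = ["none", "room", "user", "tool"]

def feasibility(sensor, marker):
    """Purely illustrative scores; the project weighed criteria such as precision."""
    if sensor == "none" or marker == "none":
        return 0          # nothing to track against
    if sensor == marker:
        return 1          # sensor and marker on the same entity give no relative pose
    return 3

candidates = sorted(
    ((feasibility(s, m), s, m) for s in PLACEMENTS for m in PLACEMENTS),
    reverse=True,
)
for score, sensor, marker in candidates:
    if score > 0:
        print(f"sensors: {sensor:5s}  markers: {marker:5s}  feasibility: {score}")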
Figure 3. Five different tasks resulting in different user head motions (© 2002 IEEE, Klinker et al.,
2002)
(a) Turning
(b) Overview
(c) Detail
(d) Discuss
(e) Compare
Overview of Tools
To support Jam Sessions, we have created a toolbox
of lightweight and flexible tools. They form the
basic building blocks which user interface development teams can use to generate, experience, and
test their novel interaction techniques.
The tools use AR, TUI, and WIMP interaction
paradigms and are designed to support a number
of tasks. The first task focuses on monitoring the
user. The second task involves the configuration
of dataflow networks. UAR systems need to
Figure 4. Classification of implemented tools. Development tasks are addressed with tools that use different user interface paradigms
[Table content: user interface paradigms (augmented reality, WIMP, tangible) crossed with development tasks (monitoring the user, configuring dataflow networks, specifying dialog control, creating context-aware animations).]
CAR
CAR is an industry-sponsored multi-disciplinary project to investigate issues pertaining to
the design of augmented reality user interfaces
in cars. CAR has used most of the tools T1-T6 to
investigate several user interface questions.
Motivation
In CAR, we have investigated a variety of questions: How can information be presented efficiently across several displays that can be found
in a modern car (e.g., the dashboard, the board
computer, and heads-up displays (HUDs))? How
can we prevent information displayed in a
HUD from blocking the driver's view in crucial
situations? Since a wealth of input modalities
Physical Setup
We have set up a simulator for studying car
navigation metaphors in traffic scenes (Figure
5). It consists of two separate areas: a simulation
control area (large table with a tracked toy car)
and a simulation experience area (person sitting
at the small table with a movable computer monitor in the front and a stationary large projection
screen in the back). In the simulation control area,
members of the design team can move one or
more toy cars on the city map to simulate traffic
situations, thereby controlling a traffic simulator
via a tangible object. The simulation experience
area represents the cockpit of a car and the driver.
The picture projected on the large screen in the
front displays the view a driver would have when
sitting in the toy car. The monitor in front of the
driver provides a mock-up for the visualizations
to be displayed in a HUD. Further monitors can
be added at run-time, if more than one view is
needed.
The room is equipped with an outside-in
optical tracking system (http://www.ar-tracking.
Figure 5. Physical setup
Figure 6. Discussion of user interface options for car navigation in a design team
Real-Time Development
Environment for Interaction Design
The tools presented in earlier sections were geared
toward the immediate use by non-programming
user interface experts. They mainly address the
customization of a set of functionalities and filters,
linking context measurements to information
presentation schemes. In order to add new functionality to a system, the development team must
also be able to modify the underlying network of
components, and its dataflow scheme. Tools T3
and T5 provide such support.
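As an illustration of what such a dataflow scheme involves, the short Python sketch below wires components together through typed events so that reconfiguring the network at run time only means changing these connections. It is a schematic stand-in only and does not reproduce the DWARF API or its service descriptions.

# Illustrative dataflow sketch (not the DWARF API): components declare the event types
# they produce and consume, and a registry wires matching components together.
from collections import defaultdict
from typing import Callable

class DataflowNetwork:
    def __init__(self):
        self._consumers = defaultdict(list)  # event type -> list of handler callables

    def connect(self, event_type: str, handler: Callable[[dict], None]) -> None:
        """Subscribe a component's handler to one event type."""
        self._consumers[event_type].append(handler)

    def disconnect(self, event_type: str, handler: Callable[[dict], None]) -> None:
        self._consumers[event_type].remove(handler)

    def publish(self, event_type: str, payload: dict) -> None:
        """Deliver an event to every component currently consuming this type."""
        for handler in list(self._consumers[event_type]):
            handler(payload)

# Example wiring: a (hypothetical) head tracker feeds a HUD filter component.
net = DataflowNetwork()
net.connect("HeadPose", lambda e: print("HUD filter received pose:", e))
net.publish("HeadPose", {"yaw": 0.1, "pitch": -0.05})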
All system configuration tools are based on
DWARF (distributed wearable augmented reality
framework) (Bauer et al., 2001) and AVANT-
Figure 7. Tangible interaction for adjustment of a three-dimensional map
Figure 8. Sketching the context visualization function: (a) staircase function; (b) linear function
Figure 9. Attentive user interface, visualizing a driver's eye and head motions: (a) DWARF's Interface Visualization Environment for managing distributed components; (b) the User Interface Controller for specifying dialog control
DWARF's interactive visualization environment (MacWilliams et al., 2003) (tool T3, Figure 10[a]) enables developers to monitor and modify the dataflow network of distributed components. However, since this requires substantial knowl-
Figure 10. Tools for programmers used in CAR: (a) DWARF's interactive visualization environment for managing distributed components; (b) the user interface controller for specifying dialog control
Conclusion
In PAARTI and FataMorgana, we have learned
that the reduction of criteria through inter-class
constraints is a valuable approach for designing
References
Azuma, R., Baillot, Y., Behringer, R., Feiner, S., Julier, S., & MacIntyre, B. (2001). Recent advances in augmented reality. IEEE Computer Graphics and Applications, 21(6), 34-47.
Bauer, M., Brügge, B., Klinker, G., MacWilliams, A., Reicher, T., Riss, S., et al. (2001). Design of a component-based augmented reality framework. In ISAR '01: Proceedings of the International Symposium on Augmented Reality (pp. 45-54). New York.
Bell, B., Höllerer, T., & Feiner, S. (2002). An
outdoor augmented reality worlds. PhD thesis, University of South Australia.
Poupyrev, I., Tan, D. S., Billinghurst, M., Kato, H., Regenbrecht, H., & Tetsutani, N. (2001). Tiles: A mixed reality authoring interface. In INTERACT '01: The 7th Conference on Human-Computer Interaction, Tokyo, Japan (pp. 334-341).
Pustka, D. (2003). Visualizing distributed systems of dynamically cooperating services. Unpublished master's thesis, Technische Universität München.
Sandor, C. (2005). A software toolkit and authoring tools for user interfaces in ubiquitous augmented reality. PhD thesis, Technische Universität München, Germany.
Sandor, C., & Klinker, G. (2005). A rapid prototyping software infrastructure for user interfaces in ubiquitous augmented reality. Personal and Ubiquitous Computing, 9(3), 169-185.
Sandor, C., Olwal, A., Bell, B., & Feiner, S. (2005). Immersive mixed-reality configuration of hybrid user interfaces. In ISMAR '05: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, Vienna, Austria.
Tönnis, M., Sandor, C., Klinker, G., Lange, C., & Bubb, H. (2005). Experimental evaluation of an augmented reality visualization for directing a car driver's attention. In ISMAR '05: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, Vienna, Austria.
Vertegaal, R. (2003). Attentive user interfaces. Communications of the ACM, Special Issue on Attentive User Interfaces, 46(3).
Zauner, J., Haller, M., Brandl, A., & Hartmann, W. (2003). Authoring of a mixed reality assembly instructor for hierarchical structures. In ISMAR '03: Proceedings of the IEEE and ACM International Symposium on Mixed and Augmented Reality, Tokyo, Japan (pp. 237-246).
This work was previously published in Emerging Technologies of Augmented Reality: Interfaces and Design, edited by M.
Haller, B. Thomas, and M. Billinghurst, pp. 218-235, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.17
Abstract
INTRODUCTION
LITERATURE REVIEW
The phenomenon of OSS development has attracted considerable attention from both practitioners
and researchers in diverse fields, such as computer
science, social psychology, organization, and
management. Because of the multifaceted nature
of OSS, researchers have investigated the OSS phenomenon from varied perspectives. For example, focusing on the technical perspective, researchers have studied issues such as OSS development method-
Figure 1. Four phases of social structures (from Krebs and Holley 2004): (1) scattered clusters, (2) single hub-and-spoke, (3) multihub small-world network, and (4) core/periphery
THEORETICAL FOUNDATION
Social Structure and Social
Interaction
Social structure, as suggested by Schaefer and
Lamm (1998), refers to the way in which society
is organized into predictable relationships. Social
structure can be considered in terms of three aspects: actors, their actions, and their interactions. The social actor is a relatively static concept
addressing issues such as roles, positions, and
statuses. Individual actors are embedded in the
social environment and, therefore, their actions
are largely influenced by the connections between
each other. Social interaction is generally regarded
as the way in which people respond to one another.
These interaction patterns are to some extent independent of individuals. They exert a force that
shapes both behavior (i.e., actions) and identity
(i.e., actors) (Schaefer & Lamm, 1998).
Research on social interaction focuses on how
individuals actually communicate with each other
in group settings. These studies address issues
such as the interaction patterns, the underlying
rules guiding interaction, the reasons accounting
for the way people interact, and the impacts of
the interaction patterns on the individual behavior
and the group performance. These issues begin by questioning what the interaction pattern in a specific social setting might be, and that addresses our research question: understanding the social interaction of OSS project teams.
RESEARCH METHODOLOGY
Social Network Analysis
Social network analysis is used in our study to
investigate the interaction pattern of the OSS
development process. Social network analysis
Longitudinal Data
Because we are interested in studying how the
interaction pattern of OSS projects evolves over
time, cross-sectional observations of interaction networks are not sufficient. Cross-sectional
observations of social networks are snapshots of
interactions at a point in time and cannot provide
traceable history, thus limiting the usefulness of
the results. On the other hand, longitudinal observations offer more promise for understanding
Case Selection
OSS projects were selected from SourceForge,1 which is the world's largest Web site hosting OSS
projects. SourceForge provides free tools and services to facilitate OSS development. At the time of
the study, it hosted a total of 99,730 OSS projects
and involved 1,066,589 registered users (This data
was retrieved on May 4, 2005). Although a few
big OSS projects have their own Web sites (such
as Linux), SourceForge serves as the most popular
data resource for OSS researchers.
Following the idea of theoretical sampling
(Glaser & Strauss, 1967), three OSS projects
were selected from SourceForge in terms of their
similarities and differences. Theoretical sampling
requires theoretical relevance and purposes
(Orlikowski, 1993). In terms of relevance, the
selection ensures that the interaction pattern of
OSS projects over time is kept similar. Therefore,
the projects that are selected have to satisfy two
requirements. First, the projects must have considerable interaction among members during the
development process. All three projects had more
than 10 developers, and the number of bugs reported was more than 1,000. Second, since we are
interested in the interaction over time, the projects
must have a relatively long life. In our case, all
three projects were at least three years old.
In addition to similarities, differences are
sought among cases because the study aims
to study interaction patterns of various OSS
projects. Therefore, the three projects differ on
several project characteristics, such as project
size, project type, and intended audience. These
differences enable us to make useful contrasts
during data analysis.
Table 1 summarizes the three projects
with a brief description.
Table 1. The three selected projects (Net-SNMP, Compiere, and JBoss)

Similarities:
  Bug reports (more than 1,000 bugs): 1,361 / 1,695 / 2,296
  Development duration (more than 3 years): 55 months (registered on 10/2000) / 47 months (registered on 6/2001) / 50 months (registered on 3/2001)

Differences:
  Software type: Internet, network management / Enterprise: ERP+CRM / J2EE-based middleware
  Group size (number of developers): Small (14) / Median (44) / Large (75)
  Intended audience: Developers, system administrators / Business / Developers, system administrators

Description (Compiere): Compiere is a smart ERP+CRM solution covering all major business areas, especially for small-medium enterprises.
Labels: cmsavage dteixeira jcbowman jsber-bnl m-a rapr rtprince sf-robot svenn xbursam ydirson
Data: [11 x 11 matrix of reply counts among the developers listed above]
Jcbowman replied to
ydirson 6 times
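As an illustration of how an interaction matrix of this kind can be analyzed, the Python sketch below builds a directed "who replied to whom" network and computes two of the measures reported in Table 2: density and degree-based group centralization. The library choice (networkx) and all reply counts except the jcbowman-to-ydirson example are our own assumptions, not the authors' tooling or data.

# Illustrative sketch only: build a directed reply network and report two SNA measures.
import networkx as nx

replies = [                      # (replier, original poster, count); mostly toy values
    ("jcbowman", "ydirson", 6),
    ("jcbowman", "cmsavage", 5),
    ("m-a", "dteixeira", 3),
]

G = nx.DiGraph()
G.add_nodes_from({"cmsavage", "dteixeira", "jcbowman", "jsber-bnl", "m-a", "rapr",
                  "rtprince", "sf-robot", "svenn", "xbursam", "ydirson"})
for src, dst, n in replies:
    G.add_edge(src, dst, weight=n)

density = nx.density(G)          # proportion of possible directed ties that are present

# Freeman-style in-degree group centralization: how strongly interaction concentrates
# on a single member, normalized by the maximum possible concentration.
n = G.number_of_nodes()
indeg = dict(G.in_degree())
max_indeg = max(indeg.values())
centralization = sum(max_indeg - d for d in indeg.values()) / ((n - 1) ** 2)

print(f"density={density:.4f}, in-degree centralization={centralization:.4f}")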
Table 2. Network measures of the three projects (1st, 2nd, and 3rd snapshots)

Measure                     Snapshot   Net-SNMP   Compiere   JBoss
Group centralization (%)    1st        9.420      15.624     4.931
                            2nd        3.071      2.294      4.45
                            3rd        2.316      1.288      4.12
Core/periphery fitness      1st        0.674      0.774      0.485
                            2nd        0.654      0.796      0.477
                            3rd        0.651      0.765      0.501
Density                     1st        0.0235     0.0584     0.0073
                            2nd        0.0109     0.0610     0.0039
                            3rd        0.0072     0.0571     0.0026
Average distance            1st        2.546      2.711      3.438
                            2nd        2.794      2.302      3.281
                            3rd        2.917      2.278      3.239
Distance-based cohesion     1st        0.181      0.198      0.118
                            2nd        0.143      0.253      0.147
                            3rd        0.141      0.279      0.136
RESEARCH RESULTS
Snapshots of the Three Projects
Monthly data were extracted from the bug tracking system of each project. To illustrate the trend
of interaction pattern, we provide three snapshots
for each project (see Figures 3-5)2.
Table 2 summarizes the relevant network
characteristics of each project. In addition to the
[Network snapshot plots for two of the projects, taken at April 2002 / October 2003 / April 2005 and at Sep. 2002 / Jan. 2004 / April 2005]
structure that has a core (a group of core developers) together with several hangers-on (periphery).
Intense interactions exist within the core (among
several core developers) and between each core
member and his/her periphery. However, only
[Network snapshot plots for the third project, taken at July 2002 / November 2003 / April 2005]
[Chart: core/periphery fitness over time for Net-SNMP, Compiere, and JBoss]
DISCUSSION
This research uses the longitudinal data of three
OSS projects selected from SourceForge to study
the social network structures of OSS teams. The
purpose of this study is to investigate the evolution of interaction patterns of OSS project teams.
The research results suggest a decrease of group
centralization over time and a tendency of core/
periphery structure in OSS project teams.
The network plots (as shown in Figures 3-5)
indicate a layered structure instead of the flat one as
suggested by earlier literature. The interaction
pattern evolves from a single hub to a core/periphery structure. As the number of participants
increases, a core with only one person (who may
be the starter/initiator of the project) cannot satisfy
the increasing requirements of development and
communication. Therefore, other developers or
active users join the core to serve as key members of the project. This results in a more stable
structure, and the project is less dependent on a
single leader.
With the growth of a software project, more
people are attracted to the project. The original
leader may not be able to solve all the technical
problems encountered in the development process. Each key member has his/her specialty, is
responsible for solving relevant problems, and
has his/her own periphery in the network plot.
Although there are multiple peripheries in the
project, collaboration among key members in the
project is vital. This phenomenon of distribution
and collaboration can be viewed as a critical
success factor of OSS development. This evolution is vividly demonstrated in our social
network analysis.
In a way, the social structure of OSS projects is
both centralized and decentralized. On one hand,
it is centralized in the sense that there is a core that
consists of key members. These key members are
responsible for various issues encountered during
the development process. On the other hand, it
is decentralized in the sense that the decision or
IMPLICATIONS AND
CONCLUSION
This paper examines the interaction patterns of
OSS teams. The research findings suggest that
the interaction structure starts from a single hub
and evolves to a core/periphery model. We argue
that the social structure of OSS teams is both
centralized and decentralized. It is centralized in
the sense that there exists a relatively stable core
that consists of a group of key developers. It is
also decentralized because of distributed decision
making among key developers and the broad collaboration between developers and users as well
as among developers themselves.
The paper presents the evolution of the social
structure of OSS projects from a longitudinal
perspective. It also provides empirical evidence
of the change of interaction patterns from a
single hub to a core/periphery model. Moreover,
the paper utilizes social network analysis as the
research method. This approach has been shown
in this research as an effective tool in analyzing
the social structure in OSS teams.
Social structure is an important variable for
understanding social phenomena. Open source
software, with its open and unique nature, attracts researchers to ask a series of questions.
For example, how do participants of OSS projects
interact and collaborate with each other? What
factors facilitate the interaction and the collaboration? And further, how does the collaboration
affect project performance of OSS teams? Social
network analysis is a good approach to investigate these questions. This research represents a
pioneering effort in this direction.
References
Ahuja, M., & Carley, K. (1999). Network structure
in virtual organizations. Organization Science,
10(6), 741-747.
Krebs, V., & Holley, J. (2004). Building sustainable communities through network building.
Retrieved from http://www.orgnet.com/BuildingNetworks.pdf
Lee, G. K., & Cole, R. E. (2003). From a firm-based
to a community-based model of knowledge creation: The case of the Linux kernel development.
Organization Science, 14(6), 633-649.
Madey, G., Freeh, V., & Tynan, R. (2002). The
open source software development phenomenon:
An analysis based on social network theory (AMCIS2002). Dallas, TX.
Mockus, A., Fielding, R. T., & Herbsleb, J. D.
(2000). A case study of open source software
development: The Apache server. ICSE 2000.
Mockus, A., Fielding, R. T., & Herbsleb, J. D.
(2002). Two case studies of open source software
development: Apache and Mozilla. ACM Transactions on Software Engineering and Methodology,
11(3), 309-346.
Moon, J. Y., & Sproull, L. (2000). Essence of
distributed work: The case of Linux kernel. First
Monday, 5(11).
Orlikowski, W. J. (1993). CASE tools as organizational change: investigating incremental and
radical changes in systems development. MIS
Quarterly, 17(3), 309-340.
Raymond, E. S. (1998). The cathedral and the
bazaar. First Monday, 3(3). Retrieved January, 2005, from http://www.catb.org/~esr/writings/cathedral-bazaar/cathedral-bazaar/
Sagers, G. W. (2004). The influence of network
governance factors on success in open source
software development projects. In Twenty-Fifth
International Conference on Information Systems
(pp. 427-438). Washington, DC:
Schaefer, R. T., & Lamm, R. P. (1998). Sociology
(6th ed.). McGraw-Hill.
Scott, J. (2000). Social network analysis. A handbook (2nd ed.). London: SAGE Publications.
Siau, K., & Cao, Q. (2001). Unified modeling
language: A complexity analysis. Journal of
Database Management, 12(1), 26-34.
Siau, K., Erickson, J., & Lee, L. (2005). Theoretical versus practical complexity: The case of
UML. Journal of Database Management, 16(3),
40-57.
Siau, K., & Loo, P. (2006). Identifying difficulties
in learning UML. Information Systems Management, 23(3), 43-51.
Stamelos, I., Angelis, L., Oikonomu, A., & Bleris,
G. L. (2002). Code quality analysis in open-source
software development. Information Systems
Journal, 12(1), 43-60.
Von Hippel, E., & Von Krogh, G. (2003). Open
source software and the private-collective innovation model: Issues for organization science.
Organization Science, 14, 209-223.
Von Krogh, G., Spaeth, S., & Lakhani, K. R.
(2003). Community, joining, and specialization
in open source software innovation: A case study.
Research Policy, 32(7), 1217-1241.
Wasserman, S., & Faust, K. (1994). Social network analysis: Methods and applications.
New York: Cambridge University Press.
Endnote
This work was previously published in Journal of Database Management 18(2), edited by K. Siau, pp. 25-40, copyright 2007
by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.18
ABSTRACT
Software testing in general and graphical user
interface (GUI) testing in particular is one of the
major challenges in the lifecycle of any software
system. GUI testing is inherently more difficult
than the traditional and command-line interface
testing. Some of the factors that make GUI testing
different from the traditional software testing and
significantly more difficult are: a large number of
objects, different look and feel of objects, many
parameters associated with each object, progressive disclosure, complex inputs from multiple
sources, and graphical outputs. The existing testing techniques for the creation and management of
test suites need to be adapted/enhanced for GUIs,
and new testing techniques are desired to make
the creation and management of test suites more
efficient and effective. In this article, a methodol-
Introduction
Graphical user interfaces (GUI) are an important
part of any end-user software application today
and can consume significant design, development, and testing effort. As much as half
of the source code of a typical user-interaction
intensive application can be related to user inter-
GUI Testing:
Best Practices and
Recommendations
In this section, we highlight some of the sought-after features and well-known best practices and recommendations for planning a testing activity for a
graphical user interface.
Step 1: Initialization
Make basic decisions about the testing activity.
Some of the most important decisions, which must
be taken at this point, are:
Step 2: Initialization
An Introduction to Xman
In this section, we introduce the application
Xman, which is used to demonstrate the effectiveness of the methodology, discussed in
the previous section. Xman is a small application which is distributed with the standard X
release. It provides a graphical user interface
to the UNIX man utility. It has been developed
using the Athena widget set and some of its
windows are shown in Figure 1. The following
paragraphs briefly describe the functionality
of Xman.
When Xman is started, it displays its main
window, called Xman, by default. This main
window contains three buttons: Help, Manual
Page, and Quit. Clicking on the Manual Page button displays a window called Manual Page. A Help window is displayed when the Help button
is clicked. The quit button is used to exit from
Xman.
The manual page window is organized into
various sections. A bar at the top of the window contains two menu buttons, Options and Sections, in the left half and a message area in the right. The
rest of the area below the bar, called text area, is
used to display the names of the available manual
pages in the currently selected section, and the
contents of the currently selected manual page.
Both the names and contents portions of the text
area are resizable and have vertical scrollbars
on the left.
Initialization
[Figure: organization of the test suite into three levels and the Xman, Help, and Search directories. Level-1 scripts: All.scr, DspXmanWin.scr, XmanMain.scr, RemoveXmanWin.scr, HelpButton.scr, QuitButton.scr, MainPageButton.scr. Level-2 scripts: All.scr, MainWindow.scr, XmanOrder.scr. Level-3 scripts: All.scr, Help.scr, ManualPage.scr, Quit.scr.]
First Level
Suppose that the initial object list is built in such a way that the Xman Main Window is at the top of the list. We select it as the object under test and create a new directory, called ~/XmanTest/Level-1/Xman, to hold the test scripts related to first-level testing. The scripts related to the Xman Main Window object display the window on the screen, exercise all the window manager operations on it, and finally pop down the window. The DspXmanWin.scr script pops up the Xman window and verifies that it looks right; RemoveXmanWin.scr pops down the Xman window. The script XmanMain.scr uses DspXmanWin.scr in its entering section to display the Xman window. It verifies the window manager operations in its core section and then uses the RemoveXmanWin.scr script in its leaving section to pop down the Xman window. As soon as the Xman window pops up on the screen, we see that it contains four objects, i.e., Manual
Second Level
The object list of the second level contains all the top-level windows of Xman. As we are considering only the Main Window in this discussion, we assume that it is at the top of the list and is selected for testing. There is not much interaction going on among the objects which belong to the Xman Main Window. The only interaction is the disappearance of the Main Window in response to a click on the Quit button. So there will be only one script related to the Main Window, which will verify that a click on the Quit button actually destroys the Main Window of Xman. This script is called MainWindow.scr and is located in ~/XmanTest/Level-2/. This script also uses the DspXmanWin.scr and RemoveXmanWin.scr scripts to display and remove the Main Window from the screen. Another potential script, let us call it XmanOrder.scr, related to the Main Window verifies that the order in which the Help and Manual Page buttons are pressed is insignificant. No matter whether the Help button is pressed before or after the Manual Page button, it displays the Help window properly. The same is also true for the Manual Page button.
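The entering/core/leaving structure and the reuse of scripts described above can be sketched schematically as follows. This is illustrative Python, not XTester's script language; each script's behavior is reduced to a print statement, and only the composition pattern is shown.

# Schematic sketch of hierarchical test-script reuse (not the .scr syntax).
class TestScript:
    def __init__(self, name, entering=None, core=lambda: None, leaving=None):
        self.name = name
        self.entering = entering or []   # scripts replayed to reach the start state
        self.core = core                 # the checks this script itself performs
        self.leaving = leaving or []     # scripts replayed to restore the screen

    def run(self):
        for s in self.entering:
            s.run()
        print(f"[{self.name}] core checks")
        self.core()
        for s in self.leaving:
            s.run()

dsp_xman  = TestScript("DspXmanWin.scr", core=lambda: print("  pop up Xman window, verify it"))
rm_xman   = TestScript("RemoveXmanWin.scr", core=lambda: print("  pop down Xman window"))
xman_main = TestScript("XmanMain.scr",
                       entering=[dsp_xman],
                       core=lambda: print("  exercise window-manager operations"),
                       leaving=[rm_xman])
xman_main.run()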
Third Level
The object list of the third level includes only the root object, and tests of any interactions among the top-level windows of Xman can be performed here. Such interactions involving the Main Window of Xman include the display of the Help window and the Manual Page window in response to mouse clicks on the Help and Manual Page buttons, respectively. They also include the disappearance of all the windows related to Xman in response to a click on the Quit button. The three scripts provided at this level, that is, Help.scr, ManualPage.scr, and Quit.scr, verify the behavior related to the corresponding button mentioned above. This level might also include scripts which verify application behavior, like multiple clicks
[Figure: the object hierarchy of Xman, covering the MainWin, Help, Manual Page, Search, StandBy, and LikeToSav windows and their child widgets (Form, Label, MenuBar, Option, Section, Message, Apropos, Cancel, Quit, Scrollbar, Text, and scrolled-window widgets); legend: parent/child, mouse click, text entry]
model for Xman. The following subsection illustrates each step of the methodology, described
in Section 2.2.
Initialization
All the decisions and actions taken at the initialization step, that is, the number of testing levels and their organization, the location of the test suite, and the application default resources, are kept the same as for testing without a formal specification, as described in Section 4.1.
Capturing Scripts
A tool can be used for capturing user sessions for
building a test suite, in exactly the same way as for
capturing without a formal model, as mentioned
in Section 4.3. However, the presence of a formal
model makes the generated scripts more robust
and easy to follow and debug. A formal model
also provides the flexibility that the captured
scripts can be executed in any order without any
conflict in window names. It becomes easier to
modify the test suite in case of a modification to
the application under test. In fact in most cases
the modification can be reflected in the specification file and the test suite remains valid without
making any changes.
Coverage Analysis
No testing activity is useful unless it provides
some coverage measures/analysis. This coverage measure reflects the quality of the testing
activity. The UIG provides us with a framework
to determine such coverage. During capture
or replay of scripts, XTester keeps track of the
user actions and their effects on individual objects. This information is available in a .stt file
at the end of the testing activity. Currently, the
information captured in .stt about a particular
object includes the number of times it was created, mapped, unmapped, and destroyed. It also
accumulates the number of times a mouse button
or keyboard key was pressed or released over the
object. This information helps the user to locate
any particular objects in the application which have
not been created, destroyed, mapped, unmapped,
or received a user event. These statistics can also
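Since the excerpt does not show the exact layout of the .stt file, the following Python sketch assumes a simple whitespace-separated format purely to illustrate the coverage check described above: flagging objects that never received a button press or key event. The object names and column order are hypothetical.

# Sketch under an assumed format: "object created mapped unmapped destroyed presses releases".
from io import StringIO

sample_stt = StringIO("""\
XmanMain   1 1 1 1 12 12
HelpButton 1 1 1 1  0  0
QuitButton 1 1 1 1  2  2
""")

def unexercised_objects(stt_file):
    """Return object names whose press/release counters are all zero."""
    missed = []
    for line in stt_file:
        fields = line.split()
        name, counts = fields[0], list(map(int, fields[1:]))
        presses, releases = counts[4], counts[5]
        if presses == 0 and releases == 0:
            missed.append(name)
    return missed

print(unexercised_objects(sample_stt))   # -> ['HelpButton']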
This work was previously published in International Journal of Information Technology and Web Engineering 3(2), edited by G. Alkhatib and
D. Rine, pp. 1-18, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.19
Socio-Cognitive Engineering
Mike Sharples
University of Nottingham, Jubilee Campus, UK
Introduction
Socio-cognitive engineering is a framework for
the systematic design of socio-technical systems
(people and their interaction with technology),
based on the study and analysis of how people
think, learn, perceive, work, and interact. The
framework has been applied to the design of a
broad range of human-centered technologies,
including a Writer's Assistant (Sharples, Goodlet,
& Pemberton, 1992), a training system for neuroradiologists (Sharples et al., 2000), and a mobile
learning device for children (Sharples, Corlett, &
Westmancott, 2002). It has been adopted by the
European MOBIlearn project (www.mobilearn.
org) to develop mobile technology for learning.
It also has been taught to undergraduate and
postgraduate students to guide their interactive
systems projects. An overview of the framework
can be found in Sharples et al. (2002).
Background
The approach of socio-cognitive engineering
is similar to user-centered design (Norman &
Draper, 1986) in that it builds on studies of po-
Socio-Cognitive Engineering Framework
Figure 1 gives a picture of the flow and main
products of the design process. It is in two main
parts: a phase of activity analysis to interpret
how people work and interact with their current
tools and technologies, and a phase of systems
design to build and implement new interactive
technology. The bridge between the two is the
relationship between the Task Model and the
Design Concept. Each phase comprises stages of
analysis and design that are implemented through
specific methods. The framework does not prescribe which methods to use; the choice depends
on the type and scale of the project.
It is important to note that the process is not a
simple sequence but involves a dialogue between
the stages. Earlier decisions and outcomes may
need to be revised in order to take account of later
findings. When the system is deployed, it will enable and support new activities, requiring another
cycle of analysis, revision of the Task Model, and
further opportunities for design.
The elements of socio-cognitive engineering
are as follows:
Figure 1. Overview of the flow and main products of the design process
Level 1
Outcome: Description of the existing organizational and workplace structures; identification of significant events.

Level 2: Significant events
Purpose: To discover how activities, communication, and social interaction are conducted in practice.
Outcome: A description and analysis of events that might be important to system design; identification of mismatches between how activity has been scheduled and how it has been observed to happen.

Level 3
Activity: Conduct interviews with participants to discuss areas of activity needing support, breakdowns, issues, differences in conception.
Purpose: To determine people's differing conceptions of their activity; uncover issues of concern in relation to new technology; explore mismatches between what is perceived to happen and what has been observed.
Outcome: Issues in everyday life and interactions with existing technology that could be addressed by new technology and working practices.

Level 4: Determining designs
Outcome: in their everyday lives, the limitations of existing practices, and ways in which they could be improved by new technology.
The outcomes of these two studies are synthesized into a Task Model. This is a synthesis of
theory and practice related to how people perform
relevant activities with their existing technologies.
It is the least intuitive aspect of socio-cognitive
engineering; it is tempting to reduce it to a set of
bullet-point issues, yet it provides a foundation
for the systems design. It could indicate:
[Table 2. The building-block diagram of the systems design process. Its four pillars are software engineering, task engineering, knowledge engineering, and organizational engineering; its stages, from the bottom up, are Survey (existing systems; conventional task methods; knowledge: concepts, skills; domain knowledge; workplace: practices, interactions; organizational structures; general requirements), Analyze (requirements), Interpret (task model), Design (algorithms and heuristics; human-computer interaction; knowledge representation; socio-technical system model), Implement (prototype system: prototypes, interfaces, communications, documentation; cognitive tools; network resources), Integrate, Evaluate (debugging; usability; organizational change), and Maintain (installed system; augmented knowledge; new organizational structure; organizational development).]
Although these stages are based on a conventional process of interactive systems design (see
Preece, Rogers, & Sharp [2002] for an overview),
they give equal emphasis to cognitive and organizational factors as well as to task and software
specifications. The stages shown in Figure 1 are
an aid to project planning but are not sufficiently
detailed to show all the design activities. Nor does
the figure make clear that to construct a successful integrated system requires the designers to
integrate software engineering with design for
human cognition, social interaction, and organizational management. The building-block
diagram in Table 2 gives a more detailed picture
of the systems design process.
The four pillars indicate the main processes
of software, task, knowledge, and organizational
engineering. Each brick in the diagram shows
Future Trends
The computer and communications industries are
starting to recognize the importance of adopting
a human-centered approach to the design of new
socio-technical systems. They are merging their
existing engineering, business, industrial design,
and marketing methods into an integrated process,
underpinned by rigorous techniques to capture
requirements, define goals, predict costs, plan
activities, specify designs, and evaluate outcomes.
IBM, for example, has developed the method of
User Engineering to design for the total user experience (IBM, 2004). As Web-based technology
becomes embedded into everyday life, it increasingly will be important to understand and design
distributed systems for which there are no clear
boundaries between people and technology.
Conclusion
Socio-cognitive engineering forms part of an
historic progression from user-centered design and
soft systems analysis toward a comprehensive and
rigorous process of socio-technical systems design
and evaluation. It has been applied through a broad
range of projects for innovative human technology
and is still being developed, most recently as part
of the European MOBIlearn project.
References

Checkland, P., & Scholes, J. (1990). Soft systems methodology in action. Chichester, UK: John Wiley & Sons.

Greenbaum, J., & Kyng, M. (Eds.). (1991). Design at work: Cooperative design of computer systems. Hillsdale, NJ: Lawrence Erlbaum Associates.

IBM. (2004). User engineering. Retrieved August 12, 2004, from http://www-306.ibm.com/ibm/easy/eou_ext.nsf/publish/1996

Meek, J., & Sharples, M. (2001). A lifecycle approach to the evaluation of learning technology. Proceedings of the CAL 2001 Conference, Warwick, UK.

Mumford, E. (1995). Effective systems design and requirements analysis: The ETHICS approach. Basingstoke, Hampshire, UK: Macmillan.

Norman, D.A. (1986). Cognitive engineering. In D.A. Norman & S.W. Draper (Eds.), User centred system design. Hillsdale, NJ: Lawrence Erlbaum.

Preece, J., Rogers, Y., & Sharp, H. (2002). Interaction design: Beyond human-computer interaction. New York: John Wiley & Sons.

Sachs, P. (1995). Transforming work: Collaboration, learning and design. Communications of the ACM, 38(9), 36-44.

Sharples, M., Corlett, D., & Westmancott, O. (2002). The design and implementation of a mobile learning resource. Personal and Ubiquitous Computing, 6, 220-234.

KEY TERMS

Activity System: The assembly and interaction of people and artefacts considered as a holistic system that performs purposeful activities. See http://www.edu.helsinki.fi/activity/pages/chatanddwr/activitysystem/

Socio-Technical System: A system comprising people and their interactions with technology (e.g., the World Wide Web).

System Image: A term coined by Don Norman (1986) to describe the guiding metaphor or model of the system that a designer presents to users (e.g., the desktop metaphor or the telephone as a speaking tube). The designer should aim to create a system image that is consistent and
This work was previously published in the Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 542-547,
copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 2.20
Abstract
An interactive motivation-attitude theory is developed based on the Layered Reference Model
of the Brain (LRMB) and the object-attributerelation (OAR) model. This paper presents a
rigorous model of human perceptual processes
such as emotions, motivations, and attitudes. A
set of mathematical models and formal cognitive
processes of perception is developed. Interactions
and relationships between motivation and attitude
are formally described in real-time process algebra (RTPA). Applications of the mathematical
models of motivations and attitudes in software
engineering are demonstrated. This work is a
part of the formalization of LRMB, which provides a comprehensive model for explaining the
fundamental cognitive processes of the brain and
their interactions. This work demonstrates that
the complicated human emotional and perceptual phenomena can be rigorously modeled and
INTRODUCTION
A variety of life functions and cognitive processes has been identified in cognitive informatics
(Wang, 2002a, 2003a, 2003b, 2007b) and cognitive
psychology (Payne & Wenger, 1998; Pinel, 1997;
Smith, 1993; Westen, 1999; Wilson & Keil, 1999).
In order to formally and rigorously describe a
comprehensive and coherent set of mental processes and their relationships, an LRMB has been
developed (Wang & Wang, 2006; Wang, Wang,
Patel, & Patel, 2006) that explains the functional
mechanisms and cognitive processes of the brain
and the natural intelligence. LRMB encompasses
39 cognitive processes at six layers known as the
sensation, memory, perception, action, meta and
higher cognitive layers from the bottom up.
Definition 1: Perception is a set of internal sensational cognitive processes of the brain at the
subconscious cognitive function layer that detects,
relates, interprets, and searches internal cognitive
information in the mind.
Perception may be considered as the sixth
sense of human beings since almost all cognitive life functions rely on it. Perception is also an
important cognitive function at the subconscious
layers that determines personality. In other words,
personality is a faculty of all subconscious life
functions and experience cumulated via conscious
life functions. It is recognized that a crucial
component of the future generation computers
known as the cognitive computers is the perceptual engine that mimics the natural intelligence
(Wang, 2006a, 2007c).
The main cognitive processes at the perception layer of LRMB are emotion, motivation, and
attitude (Wang et al., 2006). This article presents
a formal treatment of the three perceptual processes, their interrelationships, and interactions.
It demonstrates that complicated psychological
and cognitive mental processes may be formally
modeled and rigorously described. Mathematical
models of the psychological and cognitive processes of emotions, motivations, and attitudes are
developed in the following three sections. Then,
interactions and relationships between emotions,
motivations, and attitudes are analyzed. Based
on the integrated models of the three perception
processes, the formal description of the cognitive processes of motivations and attitudes will
be presented using RTPA (Wang, 2002b, 2003c,
2006b, 2007a). Applications of the formal models of emotions, motivations, and attitudes will
be demonstrated in a case study on maximizing
strengths of individual motivations in software
engineering.
[Table: taxonomy of emotions. Super level: positive vs. negative (unpleasant). Basic level: joy, love, anger, sadness, fear. Sub-category level: bliss, pride, contentment (joy); fondness, infatuation (love); annoyance, hostility, contempt, jealousy (anger); horror, worry (fear).]
(1)

Te : ES IS BL   (2)

[Table: levels of emotional strength (no emotion, weak emotion, moderate emotion, strong emotion, strongest emotion), with example emotions described as comfort, fear, joy, sadness, pleasure, anger, love, and hate.]

(3)

M = 2.5 |Em| (E - S) / C   (4)
to people, objects, events, and other stimuli. Attitudes may be formally defined as follows.
Definition 8. The mode of an attitude A is determined by both an objective judgment of its conformance to the social norm N and a subjective
judgment of its empirical feasibility F, that is:
A = {1, N ∧ F = T; 0, N ∧ F = F}   (5)
INTERACTIONS BETWEEN
MOTIVATION AND ATTITUDE
This section discusses the relationship between
the set of interlinked perceptual psychological
processes such as emotions, motivations, attitudes,
decisions, and behaviors as formally modeled in the preceding sections. A motivation/attitude-driven behavioral model will be developed for
formally describing the cognitive processes of
motivations and attitudes.
It is observed that motivation and attitude have
considerable impact on behavior and influence the
ways a person thinks and feels (Westen, 1999). A
reasoned action model is proposed by Fishbein
and Ajzen (1975) that suggests human behavior
is directly generated by behavioral intensions,
which are controlled by the attitude and social
norms. An initial motivation before the judgment
by an attitude is only a temporal idea; with the
689
Mr = M · A = (2.5 |Em| (E - S) / C) · A   (6)

D = {1, T ∧ R ∧ P = T; 0, T ∧ R ∧ P = F}   (7)

B = {T, Mr ∧ D = T; F, otherwise}, where Mr · D = 2.5 |Em| (E - S) · A · D / C   (8)
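Read together, Equations (4) through (8) suggest the following small Python sketch of the motivation-to-behavior chain. The variable names, the conjunctive gating, and the threshold reading of a "rational" motivation (Mr greater than 1, as in the process listings below) are our interpretation of the surviving fragments rather than the author's code.

# Sketch of the motivation/attitude-driven chain as read from Equations (4)-(8).
def motivation_strength(em, expectancy, status, cost):
    return 2.5 * abs(em) * (expectancy - status) / cost          # Eq. (4)

def attitude(conforms_to_norm, feasible):
    return 1 if (conforms_to_norm and feasible) else 0           # Eq. (5)

def decision(time_ok, resources_ok, energy_ok):
    return 1 if (time_ok and resources_ok and energy_ok) else 0  # Eq. (7)

def behaviour_triggered(em, expectancy, status, cost,
                        conforms_to_norm, feasible,
                        time_ok, resources_ok, energy_ok):
    m = motivation_strength(em, expectancy, status, cost)
    mr = m * attitude(conforms_to_norm, feasible)                # rational motivation, Eq. (6)
    return mr > 1 and decision(time_ok, resources_ok, energy_ok) == 1   # Eq. (8), as read here

# A motivation that is strong, socially acceptable, feasible, and resourced leads to action.
print(behaviour_triggered(4, 8, 5, 3, True, True, True, True, True))   # True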
FORMAL DESCRIPTION OF
COGNITIVE PROCESSES OF
MOTIVATION AND ATTITUDE
The formal models of emotion, motivation, and
attitude have been developed in previous sections.
This section extends the models and their relationship into detailed cognitive processes based on
the OAR model (Wang, 2007d) and using RTPA
(Wang, 2002b, 2003c, 2006b, 2007a), which enable
more rigorous treatment and computer simulations
of the MADB model.
[Figure: the motivation/attitude-driven behavior (MADB) model. Stimuli and emotion strengthen or weaken motivation; the attitude (perceptual feasibility, based on values/social norms N and experience) yields the rational motivation Mr; the decision D (physical feasibility, based on the availability of time, resources, and energy, T/R/P) leads to behavior and outcome; the model distinguishes internal and external processes.]
MAXIMIZING STRENGTHS OF
INDIVIDUAL MOTIVATIONS
Studies in sociopsychology provide a rich theoretical basis for perceiving new insights into
the organization of software engineering. It is
noteworthy that in a software organization, according to Theorem 2, the strength of a motivation of individuals M is proportional to both the
strength of emotion and the difference between
the expectancy and the current status of a person.
At the same time, it is inversely proportional to
the cost to accomplish the expected motivation C.
The job of management at different levels of an
organization tree is to encourage and improve Em
and E, and to help employees to reduce C.
Example 1: In software engineering project organization, the manager and programmers may be
motivated to the improvement of software quality
to a different extent. Assume the following factors
as shown in Table 3 are collected from a project
on the strengths of motivations to improve the
quality of a software system, analyze how the
factors influence the strengths of motivations of
the manager and the programmers.
According to Theorem 2, the strengths of
motivations of the manager M1 and the programmers M2 can be estimated using Equation 4,
respectively:
M1(manager) = 2.5 |Em| (E - S) / C = 2.5 × 4 × (8 - 5) / 3 = 10.0

and

M2(programmer) = 2.5 |Em| (E - S) / C = 2.5 × 3.6 × (8 - 6) / 8 = 2.3
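As a quick arithmetic check of Example 1, the two motivation strengths can be recomputed with the same formula; this is only a verification sketch, not part of the original analysis.

# Recompute Example 1 with M = 2.5 * |Em| * (E - S) / C.
def motivation(em, e, s, c):
    return 2.5 * abs(em) * (e - s) / c

m_manager    = motivation(4.0, 8, 5, 3)   # 2.5 * 4 * 3 / 3
m_programmer = motivation(3.6, 8, 6, 8)   # 2.5 * 3.6 * 2 / 8

print(m_manager, m_programmer)            # 10.0 and 2.25 (reported as 2.3 in the text)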
Quantify (E(o)N)
Quantify (C(o)N)
M (o)N
M(o) N > 1
M(o)BL = T
// Positive motivation
M(o)BL = F
// Negative motivation
)
III. Check the mode of attitude A(o)N
// Refer to the Attitude process
IV. Form rational motivation Mr(o)
Mr(o)N := M(o)N A(o)N
(
Mr(o)N > 1
|
Mr(o)BL = T
// Rational motivation
Mr(o)BL = F
// Irrational motivation
)
V. Determine physical availability D(o)N
// Refer to the Attitude process
VI. Stimulate behavior for Mr(o)
(
D(o)N = 1
// Implement motivation o
GenerateAction (Mr(o))
ExecuteAction (Mr(o))
R := R <o, Mr(o)>
|
// Give up motivation o
~
D(o)N := 0
o :=
R :=
)
OAR ST = <O o, A A , R R > // Form new OAR model
Memorization (OAR ST)
}
N(o)BL F(o)BL = T
A(o)N := 1
A(o)N := 0
Qualify (P(o)BL)
D(o)N := 1
// Confirmed motivation
D(o)N := 0
// Infeasible motivation
// Implement motivation o
GenerateAction (Mr(o))
ExecuteAction (Mr(o))
R := R <o, Mr(o)>
|
// Give up motivation o
D(o)N := 0
o :=
R :=
)
OAR ST = <O o, A A , R R > // Form new OAR model
Memorization (OAR ST)
}
Quantify (S(o)N)
Quantify (E(o)N)
Quantify (C(o)N)
M (o) N
M(o)N > 1
M(o)BL = T
// Positive motivation
M(o)BL = F
// Negative motivation
)
III. Check the mode of attitude A(o)N
// Perceptual feasibility
Qualify (N(o)BL)
// The social norm
Qualify (F(o)BL)
N(o)BL F(o)BL = T
A(o)N := 1
~
A(o)N := 0
Mr(o)BL = T
// Rational motivation
Mr(o)BL = F
// Irrational motivation
)
V. Determine physical availability D(o)N
Qualify (T(o)BL)
// The time availability
Qualify (R(o)BL)
Qualify (P(o)BL)
D(o)N := 1
// Confirmed motivation
D(o)N := 0
// Infeasible motivation
// Implement motivation o
GenerateAction (Mr(o))
ExecuteAction (Mr(o))
R := R <o, Mr(o)>
|
// Give up motivation o
D(o)N := 0
o :=
R :=
)
OARST = <O o, A A, R R> // Form new OAR model
Memorization (OARST)
[Figure: motivation and attitude drive behavior, which contributes to organizational objectives, productivity, and quality.]
CONCLUSION
REFERENCES
ACKNOWLEDGMENT
The author would like to acknowledge the Natural Sciences and Engineering Research Council of Canada (NSERC) for its support of this work. We would
like to thank the anonymous reviewers for their
valuable comments and suggestions.
theoretical framework of cognitive informatics. The International Journal of Cognitive Informatics and Natural Intelligence, 1(1), 1-27.
This work was previously published in the International Journal of Cognitive Informatics and Natural Intelligence, Vol. 1,
Issue 4, edited by Y. Wang, pp. 1-13, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint
of IGI Global).
Chapter 2.21
Abstract
CompILE is a sociotechnical comprehensive interactive learning environment system for personal
knowledge management and visualization that represents the growing collective knowledge an individual
gathers throughout his or her lifespan. A network of intelligent agents connects the user and his or her
inhabited knowledge space to external information sources and a multitude of fellow users. Following a
brief perspective on educational technology, concepts of human-computer interaction, and a description
of CompILE, this chapter will introduce CompILE as a sociotechnical system supported by an enriched
design process. From an educational perspective, CompILE can bridge the digital divide by creating
community, embracing culture, and promoting a learning society.
Introduction
This chapter begins with a brief perspective on
educational technology, concepts of human-computer interaction, and a description of the
Comprehensive Interactive Learning Environment (CompILE) as a knowledge management
system controlled using a network of intelligent
software agents. Strategies for bridging the digital
Background
One trait of humans that makes us advanced
social creatures is our ability to create artificial
devices or artifacts that expand our capabilities
(Norman, 1993). Throughout history, many of
these artifacts have been perceived as the final
piece to the ultimate puzzle of technological advancement: the last thing humans would create
to meet all their future needs. The automobile,
the airplane, the telephone, the television, the
microwave, the Internet; the list goes on. While
no single invention can actually be that final
puzzle piece, major technological advancements
cannot be denied as a driving force of society.
Norman (1993) agrees, noting the essential nature
of technology for growth of human knowledge
and mental capabilities.
An important consideration to be made when
designing any new interface is the anticipated
impact it will have on human activity. In fact,
most significant consequences of design are
Human-computer interaction
There are four main sources of knowledge underlying information systems design: (1) systematic
empirical evidence of user-interface relations,
(2) models and theories of human cognition and
artificial systems, (3) field studies of how humans
cope with tasks in real-world settings, and (4)
specific work domain knowledge derived from
textbooks, interviews with experts, and analytical methods such as task analysis (Rasmussen,
Anderson, & Bernsen, 1991). The diversity of
these sources underscores the interdisciplinary
nature of the studies and practices involved with
human-computer interaction (HCI), including but
not limited to the natural sciences, engineering,
the social sciences, and humanities.
One might assume that the foundations of HCI
are purely objective, due to the binary nature of
information systems. Computers are objective
machines based on logic, but humans are interacting with them; this completely unbalances the
equation. HCI research and development must take
into account human knowledge, decision-making
and planning, current mental representations of
the user based on causal and intentional relationships, and the fact that interactive behavior is
influenced by the environment in which it takes
place (Bechtel, 1997; Gifford, 1997; Sommer,
1969).
It is increasingly inadequate for system designers to consider the modules of the information
system in isolation from each other (Rasmussen
et al, 1991). Increasingly numerous applications
working in seamless integration on the same
computer (and across networks) becomes more
feasible with each new cycle in computer processing technology. Modules can no longer be
designed around the requirements of a narrow
group of users (Rasmussen et al, 1991). As greater
Comprehensive interactive
learning environments
The workplace of most interest to those designing interfaces for educational technology would
be the transitioning traditional classroom, which
can be propelled beyond the 21st century with
CompILE. CompILE relies heavily upon provision of access to information to all students at
all stages of the educational process, as well as
continual facilitation of mediated communication
between humans and machines. This is accom-
Self-direction of learners
Mobile access to distributed information
Heterogeneity of learners
Necessity for the maintenance of individual
records of growth in competence
Usability
There are five distinct attributes of usability:
learnability, efficiency, memorability, errors,
and satisfaction. An interface should be simple
enough to learn that users can quickly become
productive with the application. Efficiency is also
essential, as a user should be able to maintain
high levels of productivity with the application
upon learning the interface. The interface should
be easy to remember, so those users that are not
constantly in contact with the application can
easily reorient themselves when necessary. User
error should be kept to a minimum, and errors
that are unavoidable should have easy avenues of
recovery. The interface should be enjoyable for
the user, creating a sense of satisfaction with the
application (Nielsen, 1993).
Scenario-based design
While most engineering methods seek to control
the complexity and fluidity of design using filtering and decomposition techniques, scenario-based
design techniques seek to exploit the complexity
Creating and using scenarios pushes designers beyond simple static answers. The emphasis
on raising questions makes it easier for designers to integrate reflection and action into their
own design practice. The process creates constant integration between the designers and the
constituents of the problem domain by evoking
reflection, contrasting the simultaneously concrete and flexible nature of scenarios, promoting
work orientation, and melding abstraction with
categorization. Scenarios also allow analysts
and designers to visually sketch interactions in
order to probe relationships (Carroll, 2000). It is
precisely the nuances revealed through these questions and sketches that hold the keys to progress
within the design of a system like CompILE. In
addition, much of the reflection/action paradox
is resolved by scenarios, as they provide a language for action that invokes reflection (Carroll,
2000). For example, a designer concerned with
the implementation of stand-alone Web-browsing
functionality within the global navigation menu of
the CompILE interface would have a much easier
time making a primary decision if he or she knew
the current classroom dynamic between students,
teachers, and the integration of Web content as a
daily part of the curriculum.
Scenarios of use reconcile concreteness and
flexibility by embodying concrete design actions
that evoke concrete move testing by designers,
and by facilitating open-ended exploration of
design requirements and possibilities (Carroll,
2000). This approach is perfect for a project such
as CompILE, because many decisions would be
cut-and-dried, based on preceding projects in the same vein, but the specific needs of any school system would call for the refreshment of open-ended exploration. Exploration of this nature would help avoid overapplication of technology, as would the practical approach to interface use and analysis: heuristic evaluation.
Heuristic evaluation
Heuristic evaluation is the process of a person
viewing an interface and making value judgments
based on the 10 heuristics of usability, using his
or her own common sense or intuition. Nielsen
(1993) provides the following 10 heuristics of
usability:
One of the best ways to find mistakes and problems in interfaces is to use the interface and look
for them. Evaluators should work individually,
only communicating after completion, including
written or recorded reports following inspection
(Nielsen, 1993). This lowers the probability of
biased evaluations. For CompILE, individualized
evaluation might not work, since students would
be using the interface in close proximity to each
other within the classroom environment. Even
though the results would likely be biased for these
group evaluation sessions, the bias itself would be
an integral part of the feedback loop in designing
CompILE. Examining off-line communication
between students would be essential for creating seamless computerized intercommunication
functionality.
Observers are a necessity for evaluating CompILE, since the evaluators would include young
children, incapable of successfully completing
comprehensive written evaluations. However, the
observer method would still have its shortcomings, as much of the responsibility for interface
content analysis still falls upon the evaluator. Due
Roles of participation
How can these agents specifically benefit the
current and future individuals involved in the
educational system: students, teachers, parents,
administrators, researchers, and designers? Stu-
dents could benefit from research agents, tutoring agents, archival agents, and communication
agents. Research agents could constantly scour
the infinite resources of the World Wide Web,
reporting back to the student (or another agent)
whenever any relative information is discovered.
Tutoring agents would be highly customizable
personal agents that grow with the student. Much
like the aforementioned product brokering agents,
these personal tutoring agents would remember the
learning history of the student, keeping track of
patterns of learning. This would enable the tutor to
create a custom-built learning package that would
cater directly to the child's learning style.
Archival agents could track the student
throughout the day, digesting every idea communicated (and piece of work created) by the
student into a digital archive. In the CompILE
environment, every interaction in the classroom
would be recorded in digital format. These records
would be readily available, produced quickly and
quietly by the archival agents. If a student cannot
remember exactly what a teacher said or wrote
at the beginning of the day, he or she would no
longer need to worry, with everything available
on a permanent basis. Issues of storage would
need to be considered, but a half-sufficient remedy could be the use of a single storage facility
(accessible by all the students in the class) for all
in-class communication. Communication agents
representing each student could queue up for the
teacher, and he or she could answer the questions
in the order in which they were received. This
would work much like the spooling process
for a networked printer. A student could place
the requests and continue working on some other
task while his or her communication agent representative waited to be served.
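To make the spooling analogy concrete, the short sketch below shows one way such a first-come, first-served question queue could behave; the class and method names are invented for illustration and are not part of CompILE.

from collections import deque

class QuestionQueue:
    """FIFO queue of questions submitted by students' communication agents."""

    def __init__(self):
        self._pending = deque()

    def submit(self, student, question):
        # The student's agent enqueues the question; the student keeps working.
        self._pending.append((student, question))

    def next_question(self):
        # The teacher (or a teaching assistant agent) serves questions in arrival order.
        return self._pending.popleft() if self._pending else None

queue = QuestionQueue()
queue.submit("student_a", "What did the teacher write about fractions this morning?")
queue.submit("student_b", "Can the archival agent replay the science demo?")
print(queue.next_question())  # served first, just like a job in a printer spooler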
The teacher could employ several intelligent
teaching assistant agents to deal with the overload
of student questions. Many students would likely
repeat the same question, so one of these assistant
agents could be created specifically to tackle this
issue, essentially replicating the teacher's original
helping the children finish their homework efficiently and effectively. The homework agents
could also serve as tutoring assistance agents,
gathering information to assist the parent who
might need a refresher on the fundamentals to
be learned in the homework. These agents could
report back to the teacher on the following day,
giving a brief evaluation of the previous evening's
proceedings.
Administrators would have many uses for
agents. In addition to agents already mentioned,
the administration could employ agents to monitor
teachers, creating periodic evaluations that indicate the progress of the students, one indication of the teacher's ability to teach. Financial agents
could help the administration cope with more
political issues, such as which departments need
more funding for equipment or textbooks, with
research agents in turn scouring the resources of
the Internet for the best options. Communication
agents could serve the administration by contacting other school systems with similar demographic
issues and comparing notes.
Researchers could employ the assistance of
statistical agents that would perform data collection, sorting, and statistical analysis. Observation
agents could continually observe the actions of
different types of participants, searching for
recognizable patterns of use of the CompILE
system. As with administrators, communication
agents could assist researchers by linking to the
research taking place at other school systems with
similar and different circumstances, performing
trend comparisons and facilitating the collaboration of researchers on pertinent issues.
Design teams could rely heavily upon testing
agents for the initial alpha testing of upgrades
and new modules to be added to the CompILE
application tool set, as the purpose of alpha testing
is to find all possible ways to break software,
and agents are the perfect digital drones for simulating endless combinations of user actions in
an organized fashion. Role-playing agents could
assist designers by demonstrating alternative
Conclusion
As a sociotechnical system, CompILE must
manifest as a hybrid system in at least two ways.
First, its users, in all forms of participation, would
in fact be de facto designers. Second, CompILE
would combine reality and virtual environments
in a manner conducive to constructive learning
practices, taking full advantages of the added
dimensionality provided by this combination.
Imagine a learner interested in Greek civilization,
standing in modern-day Greece, able to augment
his or her current visual field with images of
classic Greek structures overlaid precisely upon
those structures in their current form. Upon the
request of the user, these augmenting images
could animate, showing the degradation of the
structures over time, highlighting access to information about specific historic events that had
direct impact on the process. This is just one
example of how CompILE could embody the
careful integration of ICT as a tool fundamental
to the learning process.
The agents of CompILE can help promote
higher levels of information literacy amongst
its users, which in time should lead to a more
informed consumer society, which will provide
References
Adler, M. J. (1981). Six great ideas. New York:
Simon & Schuster.
Bechtel, R. B. (1997). Environment & behavior:
An introduction. Thousand Oaks, CA: Sage.
Brown, J. S., & Duguid, P. (2000). The social
life of information. Boston: Harvard Business
School Press.
Carroll, J. M. (1991). Designing interaction: Psychology at the human-computer interface. New
York: Cambridge University Press.
Carroll, J. M. (2000). Making use: Scenario-based design of human-computer interactions.
Cambridge: MIT Press.
Crick, R. D. (2005). Being a learner: A virtue for
the 21st century. British Journal of Educational
Studies, 53(3).
Dondis, D. A. (1973). A primer of visual literacy.
Cambridge: MIT Press.
Faure, E., Herrera, F., Kaddoura, A.-R., Lopes,
H., Petrovsky, A. V., Rahnema, M., & Ward, F.
C. (1972). Learning to be: The world of education
today and tomorrow. Paris: UNESCO.
Fischer, G. (n.d.). Introduction to L3D. Retrieved
December, 2006, from http://l3d.cs.colorado.
edu/introduction.html
Fischer, G. (2001). Lifelong learning and its support with new media. In N. J. Smelser & P. B.
Baltes (Eds), International encyclopedia of social
and behavioral sciences. Amsterdam: Elsevier.
Geelan, D. (2006). Undead theories: Constructivism, eclecticism and research in education.
Rotterdam: Sense Publishers.
Gelernter, D. (1998). Machine beauty: Elegance
and the heart of technology. New York: Basic
Books.
Gifford, R. (1997). Environmental psychology:
Principles and practice (2nd ed.). Boston: Allyn
and Bacon.
Howard, P. N. (2002). Network ethnography and
the hypermedia organization: New media, new
organizations, new methods. New Media and
Society, 4.
Kling, R. (2000). Learning about information
technologies and social change: The contribution
of social informatics. The Information Society,
16(3).
Koper, E. J. R., & Sloep, P. (2003). Learning
networks: Connecting people, organizations,
autonomous agents and learning resources to
establish the emergence of effective lifelong
learning. Heerlen: The Open University of the
Netherlands.
Koper, E. J. R., & Tattersall, C. (2004). New
directions for lifelong learning using network
technologies. British Journal of Educational
Technology, 35(6).
Lambeir, B. (2005). Education as liberation: The
politics and techniques of lifelong learning. Educational Philosophy and Theory, 37(3).
Marks, W., & Dulaney, C. L. (1998). Visual information processing on the World Wide Web.
In C. Forsythe, E. Grose, & J. Ratner (Eds.),
Human factors and Web development. Mahwah,
NJ: Lawrence Erlbaum.
Smith, J., & Spurling, A. (1999). Lifelong learning: Riding the tiger. London: Cassell.
Snowdon, D. N., Churchill, E. F., & Frécon, E.
(2004). Inhabited information spaces: Living with
your data. New York: Springer.
Sommer, R. (1969). Personal spaces: The behavioral basis of design. Englewood Cliffs, NJ:
Prentice Hall.
van Dijk, J. A. G. M., (2005). The deepening divide:
Inequality in the information society. Thousand
Oaks, CA: Sage.
Wenger, E. (2000). Communities of practice:
The key to knowledge strategy. In E. Lesser, M.
Fontaine, & J. Slusher, (Eds.), Knowledge and
communities (pp. 3-20). Oxford: Butterworth-Heinemann.
This work was previously published in Social Information Technology: Connecting Society and Cultural Issues, edited by T.
Kidd and I.L. Chen, pp. 348-362, copyright 2008 by Information Science Publishing (an imprint of IGI Global).
Chapter 2.22
Abstract
Introduction
Chapter Overview
Problem Representation
Figure 2. The chemical reactor schematic (adapted from Dieste & Silva, 2000)
of a problem: it records the characteristic descriptions and interconnections of the parts (or
domains) of the world the problem affects; it
places the requirements in proper relationship
to the problem components; it allows a record of
concerns and difficulties that may arise in finding its solution.
For the chemical reactor, there are a number
of domains, including those that appear in the
schematic of Figure 2. Also the operator will
play an important role in issuing commands to
control the catalyst and cooling systems. Placing
all of these domains in their correct relationship
to each other leads to the problem diagram shown
in Figure 3.
The components are:
Operation machine: The machine domain,
that is, the software system and its underlying hardware. The focus of the problem is
to build the Operation machine.
a: {open_catalyst, close_catalyst}
b: {catalyst_status, water_level}
c: {open_catalyst, close_catalyst}
d: {is_open, is_closed}
e: {increase_water, decrease_water}
f: {water_level}
g: {oil_level}
Problem Classification
Programs
Embedded Controllers
For a problem class to be purposefully analyzed
in PFs, some given domain(s) of interest must exist outside the machine. An interesting problem
class is identified in Jackson (2001) by introducing a single causal domain. The problem frame
is known as the Required Behaviour Frame, and
its characterizing problem is that of building a
machine that controls the behaviour of some part
of the physical world so that it satisfies certain
conditions. Some software engineers may find it
easy to identify this problem with that of building
an embedded controller (although it does apply
also to more general software problems).
The Required Behaviour Frame is illustrated in
Figure 6. The frame has a topology that is captured
by a frame diagram (that of the illustration).
The frame diagram resembles a problem diagram, but it also includes some further annotation.
Figure 7. Regulating the water level in the cooling system as a required behaviour problem
e:{increase_water,decrease_water}
f:{water_level}
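Purely as an illustration of the required behaviour problem (the thresholds and function name below are invented, not taken from the chapter), the machine's job can be pictured as a small control loop over the shared phenomena e and f:

# Hypothetical safe band for the cooling-system water level (units unspecified).
LOW, HIGH = 40, 60

def regulate(water_level):
    """Given the sensed phenomenon water_level (interface f), decide which
    command phenomenon (interface e) the machine should cause."""
    if water_level < LOW:
        return "increase_water"
    if water_level > HIGH:
        return "decrease_water"
    return None  # the required behaviour already holds

print(regulate(35))  # increase_water
print(regulate(72))  # decrease_water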
User Interaction
Another interesting class of problems can be
obtained by including a single biddable domain
outside the machine, which represents the user of
the system. We call the resulting problem frame
the User Interaction Frame, and its characterizing problem is that of building a machine that
enforces some rule-based interaction with the
user. The frame diagram for the User Interaction
Frame is given in Figure 9.
The Interaction machine is the machine to be
built. The User is a biddable domain representing
the user who wants to interact with the machine.
The requirement gives the rules that establish
legal user/machine interactions.
The manifestation of the user/machine interaction is through exchanges of causal phenomena
a: {open_catalyst, close_catalyst}
b: {catalyst_status, water_level}
1. Given this set of machine phenomena, when the user causes this phenomena (it may or may not be sensible or viable)...
2. if sensible or viable the machine will accept it...
3. resulting in this set of machine phenomena...
4. thus achieving the required interaction in every case.
Figure 13. Controlling the catalyst as an instance of a user commanded behaviour problem
a: {open_catalyst, close_catalyst}
c: {open_catalyst, close_catalyst}
In the chemical reactor problem, we can apply the User Commanded Behaviour Frame to
analyze how the catalyst is controlled by the
operator. The corresponding problem diagram
is given in Figure 13.
A possible description of the interaction rules
could be as follows. The machine shall allow the
user to control the catalyst under the following
constraints:
1. catalyst_status is a faithful representation
of the state of the catalyst
2. the initial state of the catalyst is catalyst_
closed
3. possible user commands are open_catalyst
or close_catalyst
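A minimal sketch of how a machine might enforce these interaction rules, assuming a simple two-state catalyst; the class below is illustrative only and is not the chapter's specification of the Operation machine.

class CatalystController:
    """Keeps catalyst_status faithful to the catalyst state (rule 1), starts closed
    (rule 2), and accepts only the two user commands (rule 3)."""

    def __init__(self):
        self.catalyst_status = "catalyst_closed"

    def command(self, cmd):
        if cmd == "open_catalyst":
            self.catalyst_status = "catalyst_open"
        elif cmd == "close_catalyst":
            self.catalyst_status = "catalyst_closed"
        # Commands that are not sensible or viable are simply ignored.
        return self.catalyst_status

ctrl = CatalystController()
print(ctrl.command("open_catalyst"))   # catalyst_open
print(ctrl.command("close_catalyst"))  # catalyst_closed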
b: {catalyst_status}
d: {is_open, is_closed}
Problem Decomposition
Most real problems are too complex to fit basic
problem frames. They require, rather, the structur-
Figure 15. The frame concern for the User Commanded Behaviour Frame
1. Given a choice of commands in the current state, when the user issues this command (it may or may not be sensible)...
2. if sensible or viable, the machine will cause these events...
3. resulting in this state or behaviour...
4. which satisfies the requirement...
5. and which the machine will relate to the user...
6. thus satisfying the requirement in every case.
Classical Decomposition
In classical PF decomposition a problem is decomposed into simpler constituent sub-problems that
Figure 16. Raising the alarm as an instance of the information display problem
h:{ring_bell}
i:{bell_ringing}
g:{oil_level}
a: {open_catalyst, close_catalyst}
c: {open_catalyst, close_catalyst}
b: {catalyst_status}
d: {is_open, is_closed}
AFrames
AFrame decomposition complements classical decomposition by providing guidance as well as decomposition and recomposition rules. The rationale behind AFrames is the recognition that solution
structures can be usefully employed to inform
problem analysis.
AFrames characterise the combination of
a problem class and an architectural class. An
AFrame should be regarded as a problem frame for
which a standard sub-problem decomposition
(that implied by an architecture or architectural
style) exists. AFrames are a practical tool for subproblem decomposition that allow the PF practitioner to separate and address, in a systematic
fashion, the concerns arising from the intertwining
of problems and solutions, as has been observed
to take place in industrial software development
(Nuseibeh, 2001). Further motivation for, and other
examples of, AFrames can be found in Rapanotti,
Hall, Jackson, & Nuseibeh (2004).
a: {open_catalyst, close_catalyst}
c: {open_catalyst, close_catalyst}
e: {open, closed}
b: {catalyst_status}
d: {is_open, is_closed}
A Requirements Analysis
Model for Sociotechnical
Systems
The consideration of more sophisticated human-machine relationships is our next concern. To be specific, we now wish to look at users' behaviour
as being the subject of requirements statements,
W, S, I, UI ⊢ R
Conclusion
In their classical form problem frames happily represent interactions between a user and
a machine, as might be characteristic of simple
sociotechnical systems. In this chapter we have
presented an enrichment of the PF framework
to allow the representation and analysis of more
complex sociotechnical systems. To do this we
have introduced two new problem frames, the
User Interaction and User Commanded Behaviour
Frames. Although not exhaustive in their treatment of socio-technological interaction problems,
they hopefully will provide a sound basis for a
richer taxonomy of user interaction within the
PF framework.
One of the as-yet under-developed areas within
the PF framework is the treatment of problem decomposition, in particular from the perspective of
how to do it in practice. We are currently exploring
the development and use of AFrames. An AFrame
offers guidance in problem decomposition on the
basis of solution space structures. In this chapter
we have shown how basic sociotechnical interaction problems can be decomposed when the target
architecture is to be the MVC style.
Although both these enrichments are new in
the PF framework, they do not move outside of
its original conceptual basis in the two-ellipse
model of requirements analysis. In contrast we
have seen in this chapter that the development of
general sociotechnical systems raises challenges
for the PF framework. We have suggested solutions to these challenges in the reification of the
two-ellipse model to a three-ellipse version, in
which social sub-systems (individuals, groups, and organisations) can also be considered as the
focus of the design process. With the introduction of the knowledge domain, the manifestation
of this extension in problem frames, we aim to
Acknowledgment
We acknowledge the kind support of our colleagues, especially Michael Jackson and Bashar
Nuseibeh in the Department of Computing at the
Open University.
References
Bass, L., Clements, P., & Kazman, R. (1998).
Software architecture in practice. SEI Series in
Software Engineering. Addison Wesley.
Dieste, O., & Silva, A. (2000). Requirements:
Closing the gap between domain and computing
knowledge. Proceedings of SCI2000, II.
Gamma, E., Helm, R., Johnson, R., & Vlissides,
J. (1995). Design patterns. Addison-Wesley.
Gunter, C., Gunter, E., Jackson, M., & Zave, P.
(2000). A reference model for requirements and
specifications. IEEE Software, 3(17), 37-43.
Hall, J.G., & Rapanotti, L. (2003). A reference
model for requirements engineering. Proceedings
of the 11th International Conference of Requirements Engineering, 181-187.
Jackson, M. (1995). Software requirements &
specifications: A lexicon of practice, principles,
and prejudices. Addison-Wesley.
Jackson, M. (1997). Principles of program design.
Academic Press.
Jackson, M. (2001). Problem frames. Addison-Wesley.
Jackson, M.A. (1998). Problem analysis using
small problem frames [Special issue]. South African Computer Journal, 22, 47-60.
This work was previously published in Requirements Engineering for Sociotechnical Systems, edited by J. L. Mate & A. Silva,
pp. 318-339, copyright 2005 by Information Science Publishing (an imprint of IGI Global).
Chapter 2.23
Integrating Semantic
Knowledge with Web Usage
Mining for Personalization
Honghua Dai
DePaul University, USA
Bamshad Mobasher
DePaul University, USA
Abstract
Web usage mining has been used effectively as
an approach to automatic personalization and as
a way to overcome deficiencies of traditional approaches such as collaborative filtering. Despite
their success, such systems, like more traditional ones, do not take into account the semantic knowledge about the underlying domain. Without such
semantic knowledge, personalization systems
cannot recommend different types of complex
objects based on their underlying properties and
attributes. Nor can these systems possess the
ability to automatically explain or reason about
the user models or user recommendations. The
integration of semantic knowledge is, in fact,
the primary challenge for the next generation
of personalization systems. In this chapter we
provide an overview of approaches for incorporating semantic knowledge into Web usage mining
and personalization processes. In particular, we
Introduction
With the continued growth and proliferation of
e-commerce, Web services, and Web-based information systems, personalization has emerged
as a critical application that is essential to the
success of a Website. It is now common for Web
users to encounter sites that provide dynamic
recommendations for products and services, tar-
used for semantic Web usage mining and personalization. Finally, we present a framework for more
systematically integrating full-fledged domain
ontologies in the personalization process.
Background
Semantic Web Mining
Web mining is the process of discovering and extracting useful knowledge from the content, usage,
and structure of one or more Web sites. Semantic
Web mining (Berendt, Hotho & Stumme, 2002)
involves the integration of domain knowledge
into the Web mining process.
For the most part the research in semantic
Web mining has been focused in application
areas such as Web content and structure mining. In this section, we provide a brief overview
and some examples of related work in this area.
Few studies have focused on the use of domain
knowledge in Web usage mining. Our goal in this
chapter is to provide a road map for the integration
of semantic and ontological knowledge into the
process of Web usage mining, and particularly,
in its application to Web personalization and
recommender systems.
Domain knowledge can be integrated into the
Web mining process in many ways. This includes
leveraging explicit domain ontologies or implicit
domain semantics extracted from the content or
the structure of documents or Website. In general, however, this process may involve one or
more of three critical activities: domain ontology
acquisition, knowledge base construction, and
knowledge-enhanced pattern discovery.
Common representation approaches are vector-space models (Loh et al., 2000), descriptive logics
(such as DAML+OIL) (Giugno & Lukasiewicz,
2002; Horrocks & Sattler, 2001), first order logic
(Craven et al., 2000), relational models (Dai &
Mobasher, 2002), probabilistic relational models
(Getoor, Friedman, Koller & Taskar, 2001), and
probabilistic Markov models (Anderson, Domingos & Weld, 2002).
Figure 1. General Framework for Web Personalization Based on Web Usage Mining: The Offline Pattern Discovery Component
Figure 2. General Framework for Web Personalization Based on Web Usage Mining: The Online Personalization Component
Usage data: The log data collected automatically by the Web and application servers represents the fine-grained navigational
behavior of visitors. Depending on the goals
of the analysis, this data needs to be transformed and aggregated at different levels
of abstraction. In Web usage mining, the
most basic level of data abstraction is that
of a pageview. Physically, a pageview is an
aggregate representation of a collection of
Web objects contributing to the display on
Representation of Domain
Knowledge
Representing Domain Knowledge as
Content Features
One direct source of semantic knowledge that can
be integrated into mining and personalization processes is the textual content of Web site pages. The
semantics of a Web site are, in part, represented
by the content features associated with items or
objects on the Web site. These features include
keywords, phrases, category names, or other
textual content embedded as meta-information.
Content preprocessing involves the extraction of
relevant features from text and meta-data.
During the preprocessing, usually different
weights are associated with features. For features
extracted from meta-data, feature weights are
usually provided as part of the domain knowledge specified by the analyst. Such weights may
reflect the relative importance of certain concepts.
For features extracted from text, weights can
normally be derived automatically, for example
as a function of the term frequency and inverse
document frequency (tf.idf) which is commonly
used in information retrieval.
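As a small illustration of the tf.idf weighting mentioned above (a standard textbook formulation, not code from this chapter):

import math
from collections import Counter

def tfidf(documents):
    """documents: list of token lists. Returns a dict of term weights per document,
    using term frequency times the log inverse document frequency."""
    n = len(documents)
    df = Counter(term for doc in documents for term in set(doc))
    result = []
    for doc in documents:
        tf = Counter(doc)
        result.append({term: tf[term] * math.log(n / df[term]) for term in tf})
    return result

pages = [["action", "movie", "sequel"], ["movie", "director", "interview"], ["comedy", "review"]]
print(tfidf(pages)[0])  # terms unique to the first page receive the highest weights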
Further preprocessing on content features can
be performed by applying text mining techniques.
This would provide the ability to filter the input
to, or the output from, other mining algorithms.
For example, classification of content features
based on a concept hierarchy can be used to limit
the discovered patterns from Web usage mining
to those containing pageviews about a certain
subject or class of products. Similarly, performing
learning algorithms such as, clustering, formal
concept analysis, or association rule mining on
the feature space can lead to composite features
representing concept categories or hierarchies
(Clerkin, Cunningham, & Hayes, 2001; Stumme
et al., 2000).
The integration of content features with usage-based personalization is desirable when we
Levels of Abstraction
Capturing semantic knowledge at different levels
of abstraction provides more flexibility both in the
mining phase and in the recommendation phase.
For example, focusing on higher-level concepts in
a concept hierarchy would allow certain patterns
to emerge which otherwise may be missed due
to low support. On the other hand, the ability to
drill-down into the discovered patterns based
on finer-grained subconcepts would provide the
ability to give more focused and useful recommendations.
Domain knowledge with attributes and relations requires the management of a great deal more
data than is necessary in traditional approaches
to Web usage mining. Thus, it becomes essential
to prune unnecessary attributes or relations. For
example, it may be possible to examine the number
of distinct values of each attribute and generalize the attributes if there is a concept hierarchy
over the attribute values. In Han & Fu (1995) a
multiple-level association rule mining algorithm
is proposed that utilizes concept hierarchies. For
example, the usage data in our hypothetical movie
site may not provide enough support for an association rule: Spiderman, Xmen → Xmen2, but mining at a higher level may result in obtaining a rule: Sci-Fi&Action, Xmen → Xmen2. In
Anderson et al. (2002) relational Markov models
are built by performing shrinkage (McCallum et
al., 1998) between the estimates of parameters
at all levels of abstractions relative to a concept
hierarchy. If a pre-specified concept hierarchy does
not exist, it is possible to automatically create such
hierarchies through a variety of machine learning
techniques, such as hierarchical agglomerative
clustering (Stumme et al., 2000).
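One simple way to lift usage transactions to a higher level of abstraction before mining, assuming a hand-specified mapping from items to concepts, is sketched below; the movie titles and genres echo the chapter's hypothetical example, and full multiple-level mining (as in Han & Fu) would keep items at several levels simultaneously rather than only augmenting them.

# Hypothetical mapping from item-level pageviews to concepts in the genre hierarchy.
concept_of = {
    "Spiderman": "Sci-Fi&Action",
    "Xmen": "Sci-Fi&Action",
    "Xmen2": "Sci-Fi&Action",
    "About a Boy": "Romantic Comedy",
}

def add_concept_level(transaction):
    """Augment a session with the concepts its items belong to, so association rules
    can emerge at the concept level even when item-level support is too low."""
    concepts = sorted({concept_of[item] for item in transaction if item in concept_of})
    return list(transaction) + concepts

sessions = [["Spiderman", "Xmen", "Xmen2"], ["Xmen", "About a Boy"]]
print([add_concept_level(s) for s in sessions])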
Preprocessing Phase
The main task of data preprocessing is to prune
noisy and irrelevant data, and to reduce data volume for the pattern discovery phase. In Mobasher,
Dai, Luo, & Nakagawa (2002), it was shown
that applying appropriate data preprocessing
techniques on usage data could improve the effectiveness of Web personalization. The concept
level mappings from the pageview-level data to
concepts can also be performed in this phase.
This results in a transformed transaction data
to which various data mining algorithms can
be applied. Specifically, the transaction vector t
given previously can be transformed into a vector
t' = ⟨w^t_{o_1}, w^t_{o_2}, …, w^t_{o_n}⟩, where each o_j is a semantic object appearing in one of the pageviews contained in the transaction, and w^t_{o_j} is a weight associated with that object in the transaction. These semantic objects may be concepts appearing in the concept hierarchy or finer-grained objects representing instances of these concepts.
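A sketch of this transformation, assuming each pageview has already been mapped to weighted semantic objects; the names and the simple weight-times-degree aggregation are illustrative choices, not the chapter's prescription.

from collections import defaultdict

def to_semantic_transaction(pageview_weights, objects_in_page):
    """pageview_weights: {pageview: weight of the pageview in transaction t}.
    objects_in_page: {pageview: {semantic object: degree of association}}.
    Returns t', a weight vector over semantic objects."""
    t_prime = defaultdict(float)
    for page, w in pageview_weights.items():
        for obj, degree in objects_in_page.get(page, {}).items():
            t_prime[obj] += w * degree
    return dict(t_prime)

t = {"movie123.html": 1.0, "actor_hgrant.html": 0.5}
mapping = {
    "movie123.html": {"About a Boy": 1.0, "Romantic Comedy": 0.7},
    "actor_hgrant.html": {"H. Grant": 1.0},
}
print(to_semantic_transaction(t, mapping))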
Post-Processing Phase
Exploiting domain knowledge in this phase can
be used to further explain usage patterns or to
filter out irrelevant patterns. One possibility is to
first perform traditional usage mining tasks on
the item-level usage data obtained in the preprocessing phase, and then use domain knowledge to
interpret or transform the item level user profiles
into domain-level usage profiles (Mobasher &
Dai, 2002) involving concepts and relations in
the ontology. The advantage of this approach is
that we can avoid the scalability issues that can
be endemic in the pattern discovery phase. The
disadvantage is that some important structural
relationships may not be used during the mining
phase resulting in lower quality patterns.
Sim(A, B) = (Σ_{a∈A} Σ_{b∈B} SemSim(a, b)) / (|A| · |B|)

where the object-level semantic similarity SemSim(i, j) between two movie items i and j combines attribute-level similarities such as CastSim(i, j) + DirectorSim(i, j) + GenreSim(i, j) + ...
where the significance weight, weight(p, pr_{c_l}), of the page p within the usage (respectively, content) profile pr_{c_l} is given by:

weight(p, pr_{c_l}) = (1 / |c_l|) Σ_{s∈c_l} w(p, s)
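A direct transcription of this weight into code (a sketch; w is assumed to be a precomputed mapping from (page, session) pairs to page weights):

def significance_weight(page, cluster_sessions, w):
    """Mean weight of `page` over the sessions (or documents) in the cluster c_l."""
    return sum(w.get((page, s), 0.0) for s in cluster_sessions) / len(cluster_sessions)

c_l = ["s1", "s2", "s3"]
w = {("about_a_boy.html", "s1"): 1.0, ("about_a_boy.html", "s3"): 0.5}
print(significance_weight("about_a_boy.html", c_l, w))  # (1.0 + 0.0 + 0.5) / 3 = 0.5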
Knowledge Representation
General ontology representation languages such
as DAML+OIL (Horrocks, 2002) provide formal syntax and semantics for representing and
reasoning with various elements of an ontology.
These elements include individuals (or objects),
concepts (which represent sets of individuals),
and roles (which specify object properties).
In DAML+OIL, the notion of a concept is quite
general and may encompass a heterogeneous set
of objects with different properties (roles) and
structures. We, on the other hand, are mainly
The type of an attribute in the above definition may be a concrete datatype (such as string
or integer) or it may be a set of objects (individuals) belonging to another class.
In the context of data mining, comparing and
aggregating values are essential tasks. Therefore,
ordering relations among values are necessary
properties for attributes. We associate an ordering relation ≤_a with elements in D_a for each attribute a. The ordering relation ≤_a can be null (if no ordering is specified in the domain of values), or it can define a partial or a total order among the domain values. For standard types such as values from a continuous range, we assume the usual ordering. In cases when an attribute a represents a concept hierarchy, the domain values of a are a set of labels, and ≤_a is a partial order representing the is-a relation.
Furthermore, we associate a data mining operator, called the combination function Ψ_a, with each
attribute a. This combination function defines an
aggregation operation among the corresponding
attribute values of a set of objects belonging to
the same class. This function is essentially a generalization of the mean or average function
applied to corresponding dimension values of a
set of vectors when computing the centroid vector.
In this context, we assume that the combination
function is specified as part of the domain ontology for each attribute of a class. An interesting
extension would be to automatically learn the
combination function for each attribute based on
a set of positive and negative examples.
Classes in the ontology define the structural
and semantic properties of objects in the domain
which are instances of that class. Specifically,
each object o in the domain is also characterized
by a set of attributes Ao corresponding to the
attributes of a class in the ontology. In order to
more precisely define the notion of an object as
an instance of a class, we first define the notion
of an instance of an attribute.
Ψ_a({⟨a_{o_1}, w_1⟩, ⟨a_{o_2}, w_2⟩, …, ⟨a_{o_m}, w_m⟩}) = ⟨a_agg, w_agg⟩
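As an illustration of one possible combination function for a set-valued attribute (for example, the Actor attribute of aggregated Movie instances); the weighted-union behaviour below is an assumption, since the chapter leaves the concrete function to be specified in the domain ontology.

from collections import defaultdict

def combine_set_attribute(instances):
    """instances: list of (value_set, weight) pairs, where value_set maps each
    attribute value to its own weight. Returns the aggregated pair (a_agg, w_agg)."""
    aggregated = defaultdict(float)
    w_agg = 0.0
    for values, weight in instances:
        w_agg += weight
        for value, value_weight in values.items():
            aggregated[value] += weight * value_weight
    # Normalise by the total instance weight, a mean-like aggregation.
    a_agg = {value: vw / w_agg for value, vw in aggregated.items()}
    return a_agg, w_agg

movies = [({"H. Grant": 0.6, "Toni Collette": 0.4}, 1.0), ({"H. Grant": 1.0}, 0.5)]
print(combine_set_attribute(movies))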
Ontology Preprocessing
The ontology preprocessing phase takes as input
domain information (such as database schema
and metadata, if any) as well as Web pages, and
An Example
As an example, let us revisit our hypothetical
movie Web site. The Web site includes collections of pages containing information about
movies, actors, directors, etc. A collection of
pages describing a specific movie might include
information such as the movie title, genre, starring actors, director, etc. An actor's or director's information may include name, filmography (a set
of movies), gender, nationality, etc. The portion
of domain ontology for this site, as described,
contains the classes Movie, Actor and Director
(Figure 4). The collection of Web pages in the
site represents a group of embedded objects that
are the instances of these classes.
In our example, the class Movie has attributes
such as Year, Actor (representing the relation
[Figure 4: portion of the domain ontology for the movie site, with classes Movie, Actor, and Director, attributes such as Name, Year, Genre, and Nationality, and a Genre-All concept hierarchy covering Romance, Action, Comedy, Romantic Comedy, Black Comedy, and Kid & Family.]
[Figure: an instance of the Movie class for "About a Boy" (from http://www.reel.com/movie.asp?MID=134706), with Year 2002, Genre Comedy/Romantic Comedy and Kid & Family, and Starring {H. Grant: 0.6; Toni Collette: 0.4}.]
Pattern Discovery
As depicted in Figure 3 domain ontologies can be
incorporated into usage preprocessing to generate
semantic user transactions, or they can be integrated into pattern discovery phase to generate
semantic usage patterns. In the following example,
we will focus on the latter approach.
Given a discovered usage profile (for example,
a set of pageview-weight pairs obtained by clustering user transactions), we can transform it into
a domain-level aggregate representation of the
underlying objects (Dai & Mobasher, 2002). To
distinguish between the representations we call
the original discovered pattern an item-level
usage profile, and we call the new profile based on
the domain ontology a domain-level aggregate
[Figure: in the online recommendation process, aggregate semantic usage patterns are matched against the extended user profile and instantiated to real Web objects to produce recommendations of items.]
Conclusion
We have explored various approaches, requirements, and issues for integrating semantic
knowledge into the personalization process based
on Web usage mining. We have considered approaches based on the extraction of semantic
features from the textual content contained in a
site and their integration with Web usage mining
tasks and personalization both in the pre-mining
and the post-mining phases of the process. We
have also presented a framework for Web personalization based on full integration of domain
ontologies and usage patterns. The examples
provided throughout this chapter reveal how such
a framework can provide insightful patterns and
smarter personalization services.
We leave some interesting research problems
for open discussion and future work. Most important among these are techniques for computing
similarity between domain objects and aggregate
domain-level patterns, as well as learning techniques to automatically determine appropriate
combination functions used in the aggregation
process.
More generally, the challenges lie in the successful integration of ontological knowledge at
every stage of the knowledge discovery process.
In the preprocessing phase, the challenges are in
automatic methods for the extraction and learning of the ontologies and in the mapping of users' activities at the clickstream level to more abstract concepts and classes. For the data mining phase,
the primary goal is to develop new approaches that
take into account complex semantic relationships
such as those present in relational databases with
multiple relations. Indeed, in recent years, there
has been more focus on techniques such as those based on relational data mining. Finally, in the personalization stage, the challenge is in developing
techniques that can successfully and efficiently
measure semantic similarities among complex
objects (possibly from different ontologies).
References

topic-specific Web resource discovery. Proceedings of the 3rd World Wide Web Conference, Toronto.
Ghani, R., & Fano, A. (2002). Building recommender systems using a knowledge base of
product semantics. Proceedings of the Workshop
on Recommendation and Personalization in
E-Commerce, 2nd International Conference on
Adaptive Hypermedia and Adaptive Web Based
Systems, Malaga, Spain.
This work was previously published in Web Mining: Applications and Techniques, edited by A. Scime, pp. 276-306, copyright
2005 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.24
ABSTRACT
Software product line (SPL) is a software engineering paradigm for software development. SPL
is important in promoting software reuse, leading
to higher productivity and quality. A software
product within a product line often has specific
functionalities that are not common to all other
products within the product line. Those specific
functionalities are termed variant features in a
product line. The SPL paradigm involves the modeling
of variant features. However, little work in SPL
investigates and addresses the modeling of variant features specific to UI. UML is the de facto
modeling language for object-oriented software
systems. It is known that UML needs better support in modeling UIs. Thus, much research developed UML extensions to improve UML support
in modeling UIs. Yet little of this work is related
to developing such extensions for modeling UIs
for SPLs in which variant features specific to user
interfaces (UI) modeling must be addressed. This
Introduction
Software product line (SPL) (Chastek, Donohoe,
Kang, & Thiel, 2001; Clements & Northrop, 2002;
SEI, 2005a) is a software engineering paradigm
to develop software products. One important
step in the SPL paradigm is the modeling of the
functional features of software products across
the product line. The features are called common
core. An even more important step in the SPL
paradigm is the modeling of the specific functional
features within a particular member product in a
product line. These specific functional features
are called variant features because they are the
features that differentiate member products in the
product line. Then based on the model, a product
is assembled by reusing the common core and
selected variant features.
Unified Modeling Language (UML) (OMG,
2003b, 2004; Rumbaugh, Jacobson, & Booch,
2005) is a standard object-oriented modeling language. UML includes multiple views and diagram
types to capture software functionalities from
the user's perspective. However, UML seems to have
not been developed for modeling user interface
specific issues (Kovacevic, 1998; Silva & Paton,
2003). One of the usages of user interface models
is that, in model-based user interface management
systems (MB-UIMSs) (Griffiths et al., 2001; Szekely, Sukaviriya, Castells, Muthukumarasamy,
& Salcher, 1996), user interface models can be
used to generate user interface codes. There are
extensions of UML (Blankenhorn & Jeckle, 2004;
Nunes, 2003; Silva, 2002) to make UML better
support user interface modeling. Yet, these extensions often assume the modeling of a single system
instead of a SPL. On the other hand, although
standard UML (OMG, 2003b, 2004) seems to
have not been developed to support the modeling
of SPLs, there are works (Gomaa, 2004; Gomaa
& Gianturco, 2002; Ziadi, Hélouët, & Jézéquel,
2004) on extending UML to improve UML
supports in modeling SPLs. Yet, these works do
not focus on user interface modeling. Currently,
Background and
related work
Unified Modeling Language
Unified Modeling Language (UML) (Booch,
Rumbaugh, & Jacobson, 1999; OMG, 2003b, 2004;
Scott, 2004) is a graphical language for specifying
software systems. UML is a standard of the Object
Management Group (OMG; see http://www.omg.
org). The most current version for UML is UML
2.0 (OMG, 2003b, 2004). This research considers
UML in UML 2.0 context.
UML is a standardized notation for object-oriented development. UML consists of views,
diagrams, model elements, and general mechanisms. Views are used to present different aspects
of complex systems from both the system in the
[Figure: WUIML stereotypes for Web user interface modeling, including Dimension, Position, StyleDecorator, StyleStrategy, Style, WUITemplate, WUIComposite, and WUIElement, with WUIElement specialisations such as Table, Upload, PasswordInput, Command, Range, TextArea, SingleSelect, MultiSelect, Submit, Hyperlink, Input, Output, and Content.]
[Figure: the WUIElement class, with attributes id, label, value, tabIndex, and image, and operations such as getId(), setId(), getLabel(), setLabel(), getValue(), setValue(), activate(), focusIn(), focusOut(), getHelpMessage(), getHintMessage(), notifyValueChanged(), notifyControlValid(), notifyControlInvalid(), notifyReadOnly(), notifyReadWrite(), notifyRequired(), notifyOptional(), notifyEnabled(), notifyDisabled(), showMessage(), setFocus(), resetValue(), loadWindow(), setIndex(), getIndex(), getImage(), and setImage(); the Command class adds the attributes action and method and the operation send().]
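Purely to illustrate what the WUIElement and Command abstractions capture (this is not code generated by or mandated by WUIML), a concrete element class might look like this:

class WUIElement:
    """A Web UI element with the attributes shown in the WUIML stereotype."""

    def __init__(self, element_id, label="", value=None, tab_index=0, image=None):
        self.element_id = element_id
        self.label = label
        self.value = value
        self.tab_index = tab_index
        self.image = image

    def reset_value(self):
        self.value = None

class Command(WUIElement):
    """A command element (for example, a submit button) with an action and a method."""

    def __init__(self, element_id, label, action, method="POST"):
        super().__init__(element_id, label)
        self.action = action
        self.method = method

    def send(self):
        # In a real interface this would issue the HTTP request described by action/method.
        return f"{self.method} {self.action}"

login = Command("login", "Login", action="/medical/profile/login")
print(login.send())  # POST /medical/profile/login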
[Figure: metamodel extension for SPL activity modeling: SPLAction and VariantAction are introduced alongside Action, ExecutableNode, and ActivityNode, within Activity (from BasicBehaviors) and Element (from Kernel).]
card order. In another case, corporate purchasing often submits purchase orders, and then pays the bill after receiving the invoice. In this case, a "submit purchase order" variant may be desired. Yet another variant is in the case where a customer wants to receive the purchase and check it first before making a payment; these customers may want a "submit Cash-On-Delivery (COD) order" variant. Yet some merchants may offer a combination of these payment methods. There are a number of variants in the "submit order" activity, and we need to be able to model the variants. This paper extends the UML activity diagram to model the requirements as follows.
Figure 5 shows a graphical symbol for the
SPLAction. The symbol exposes the base action
and its four variants actions. The shape of the base
submit order action is filled, while the variants
are not. The base Submit Order action defines
the common logics that are applicable across all
of the variants. Each variant differs from other
variants by some features. Our concern is on
user interface modeling; it is clear that the user
interface to collect information on a credit card
order is different from the user interface to collect
a COD order in some way.
Sometimes, there are variant logics and behaviors within a variant member product that capture
different execution scenarios. Those logics and
behaviors can be modeled using standard UML
activity diagrams. For example, suppose there
is a WUI that provides user interface elements
[Figure: WUIML relationship stereotypes (WUIDerive, WUIExtend, WUIAggregation, WUIGeneralization, and OptionalElement, with XOR and OR constraints) extending UML Dependency, Delegation, Use, TemplateInstantiation, Parameterization, Configuration, Generalization, and Class, together with the metaclasses Expression, Constraint, and Property, illustrated with a Check out example.]
Figure 10. A product WUIML model for the online patient registration systems
Validation
The goal of this research is to improve UML support in modeling user interfaces for Web-based
SPLs. The improvement goal is for the new WUIML method to decrease the effort needed by increasing effectiveness and efficiency. The improvement will reduce the SPL engineers' effort needed for developing user interfaces. To
exhibit that this improvement goal is met, a case
study research validation method is applied to
supply supporting evidence for the thesis (research
hypothesis).
Study Propositions
Study propositions are derived from the study's research questions (Yin, 2003) but are more specific than the research questions themselves. Study propositions quantify the quality variables (indirect metrics) in a study's research question into
directly measurable quantitative metrics (direct
metrics or indicators). For example, in this multiple-case study, the study's research question is
decomposed into two propositions:
1.
2.
Units of Analysis
Units of analysis are materials such as documents
or other resources that the subject matter experts
(SMEs) use as inputs or materials to apply the
method or tools being validated. In this study,
the method under investigation is WUIML. The
units of analysis are the Web user interface requirements for a medical SPL.
In the first case study, the requirements for
the Pediatric Medical Profile Login WUI, the
Adolescent Medical Profile Login WUI, and the
Adult Medical Profile Login WUI are provided.
These WUIs are each from a different member
product (Pediatric Medical Management System,
Adolescent Medical Management System, and
Adult Medical Management System) of a medical product line (Medical Management Systems).
The WUIs to model are extracted from three
Web-based medical products under development
in BUSINEX Inc.
The WUIs for the medical Web software
products are based on actual medical forms from
health-care providers in the United States. For
the first case study, three member products of the
product line are considered for WUI modeling.
In particular, this case study requires the SMEs
to model the Medical Profile Login WUI and a
related activity across three member products of
the product line. This WUI is chosen because it
allows one to exercise the modeling of commonality and variability found in product lines.
Table 1. Required modeling items for Pediatric Medical Profile Login WUI
Required modeling items for Pediatric Medical Profile Login WUI
Page title: Pediatric Medical Profile Login
Label 1: Profile ID:
A textbox for label 1.
Label 2: Password:
A textbox for label 2.
Label 3: Role:
A radio button.
The radio button must default to be checked.
A label for the radio button: Parent/Guardian. (Only a parent or legal guardian who is previously registered with the health provider can log in for the child.)
A submit button with name Login.
The Page title must be placed at the top of the page.
The Profile ID and its related textbox must be immediately next to each other.
The Password and its related textbox must be immediately next to each other.
The Role and its related radioButton must not be immediately next to each other.
Profile ID must be placed on top of the Password and its related textbox.
The Role and the radio button must be on a line that is on top of the Login button.
The Login button must be placed at the lower left hand side of the page.
The activity diagram should include an action node: Pediatric Medical Profile Login.
The activity diagram should include an action node: Login Error.
The activity diagram should include an action node: Parent Welcome.
The activity diagram should include an action node: Customer Service.
The activity diagram should include a start node.
The activity diagram should include an end node.
The activity diagram should include a decision node: whether validation is successful.
The activity diagram should indicate the condition that Profile ID, Password, and Role values are available.
Figure 12. Case study worksheet for modeling Pediatric Medical Profile Login WUI using WUIML
Case Study Worksheet
Using WUIML to model the Pediatric Medical Profile Login WUI.
SME ID: _____________________
Please answer the following questions:
1. What tool did you use to perform the modeling? Pediatric_WUIML_modelingTool
2. What is the filename of the resulted model(s)? Pediatric_WUIML_modelsFilename
3. Please list below the model elements (e.g., classes and activity nodes, etc.) in your resulted models. For each element, first provide its name and then provide a brief description of its purpose. List as many as you wish in additional pages (if needed). Pediatric_WUIML_required_modeling_itemi (i = 1 to the total number of modeled elements in the resulted models for the WUI.)
4. Among the items listed in 3, which of the model elements are developed via reuse? List them below. Pediatric_WUIML_reusei (i = 1 to the total number of modeled elements in the resulted models for the WUI.)
5. How many person-hours did you spend to complete the modeling of this WUI? Pediatric_WUIML_personHours
be analyzed to support (or reject) the propositions. Thus, the most concrete criteria metrics
are measured terms found within the questions
on the case study worksheets. Figure 12 shows
a case study worksheet used in the first case
study for modeling the Pediatric Medical Profile
Login WUI. In Figure 12, each concrete criteria metric in a question that links to propositions is identified by a name formed of three sections.
For example, Pediatric_WUIML_personHours
Table 2. Evidence captured and the propositions to support/reject
Pediatric_WUIML_modelingTool: Proposition 1 and Proposition 2
Pediatric_WUIML_modelsFilename: Proposition 1 and Proposition 2
Pediatric_WUIML_required_modeling_itemi: Proposition 1
Pediatric_WUIML_reusei: Proposition 2
Pediatric_WUIML_personHours: Proposition 1 and Proposition 2
Pediatric_UML_modelingTool: Proposition 1 and Proposition 2
Pediatric_UML_modelsFilename: Proposition 1 and Proposition 2
Pediatric_UML_required_modeling_itemi: Proposition 1
Pediatric_UML_reusei: Proposition 2
Pediatric_UML_personHours: Proposition 1 and Proposition 2
Adolescent_WUIML_modelingTool: Proposition 1 and Proposition 2
Adolescent_WUIML_modelsFilename: Proposition 1 and Proposition 2
Adolescent_WUIML_required_modeling_itemi: Proposition 1
Adolescent_WUIML_reusei: Proposition 2
Adolescent_WUIML_personHours: Proposition 1 and Proposition 2
Adolescent_UML_modelingTool: Proposition 1 and Proposition 2
Adolescent_UML_modelsFilename: Proposition 1 and Proposition 2
Adolescent_UML_required_modeling_itemi: Proposition 1
Adolescent_UML_reusei: Proposition 2
Adolescent_UML_personHours: Proposition 1 and Proposition 2
Adult_WUIML_modelingTool: Proposition 1 and Proposition 2
Adult_WUIML_modelsFilename: Proposition 1 and Proposition 2
Adult_WUIML_required_modeling_itemi: Proposition 1
Adult_WUIML_reusei: Proposition 2
Adult_WUIML_personHours: Proposition 1 and Proposition 2
Adult_UML_modelingTool: Proposition 1 and Proposition 2
Adult_UML_modelsFilename: Proposition 1 and Proposition 2
Adult_UML_required_modeling_itemi: Proposition 1
Adult_UML_reusei: Proposition 2
Adult_UML_personHours: Proposition 1 and Proposition 2
the Web user interface requirements of a Web-based medical SPL) respectively. The results,
that is, the resulting models and the completed
use case worksheets, are analyzed to find out the
following:
D1. How many of the required modeling items
are correctly modeled when the modeling
was done in WUIML?
D2. How many of the required modeling items
are correctly modeled when the modeling
was done in standard UML?
D3. How many of the required modeling items
are correctly modeled via reuse when the
modeling was done in WUIML? To model
via reuse is to create new models by re-using previously created models or model
elements. For example, suppose one previously created a model that includes a class
representing fruit. Now one can reuse the
fruit class to create a class that represents a
specific fruit, such as an apple, by extending the fruit class. Both standard UML and
WUIML allow SMEs to model via reuse.
D4. How many of the required modeling items
are correctly modeled via reuse when the
modeling was done in standard UML?
D5. How many person-hours were spent to generate the WUIML models?
D6. How many person-hours were spent to generate the standard UML models?
D7. The total number of required modeling items.
D1, D2, D5, D6, and D7 link to proposition
1. D3, D4, D5, D6, and D7 link to proposition 2.
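The per-person-hour figures reported later in Tables 4 through 7 follow directly from these data items; the small sketch below reproduces that arithmetic for the Pediatric WUI (the function names are invented here, and the sample values are those reported in the tables).

def effectiveness(correct_items, total_items):
    """Ratio of correctly modeled required items (D1 or D2 over D7)."""
    return correct_items / total_items

def efficiency(correct_items, person_hours):
    """Correctly modeled required items per person-hour (D1/D5 or D2/D6)."""
    return correct_items / person_hours

# Pediatric Medical Profile Login WUI, standard UML vs. WUIML.
print(effectiveness(18, 25), efficiency(18, 8))  # 0.72 and 2.25 for standard UML
print(effectiveness(24, 25), efficiency(24, 7))  # 0.96 and about 3.43 for WUIML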
Table 3. Results for modeling the Pediatric Medical Profile Login WUI (required modeling item; modeled correctly in standard UML; modeled correctly in WUIML)
[Summary of the table: the basic interface and activity-diagram items (page title, labels, textboxes, the radio button, the Login button, and the activity nodes, start/end nodes, and decision condition) are marked Yes under both notations, while several of the layout-placement constraints on the Profile ID, Password, Role, radio button, and Login button carry only a single Yes. Totals: 18/25 required items modeled correctly in standard UML; 24/25 in WUIML.]
Table 4. Ratio of correctly modeled required modeling items to the total number of required modeling items
WUI; standard UML (person-hours); WUIML (person-hours)
Pediatric; 18/25 (8); 24/25 (7)
Adolescent; 23/30 (10); 29/30 (4)
Adult; 18/25 (7); 24/25 (2)
Table 5. Number of correctly modeled required modeling items per person-hour
WUI; standard UML model; WUIML model
Pediatric; 2.25; 3.4
Adolescent; 2.3; 7.25
Adult; 2.57; 12
Average; 2.37; 9.08
Table 6. Ratio of required modeling items correctly modeled via reuse
WUI; standard UML (person-hours); WUIML (person-hours)
Pediatric; 0/10 (8); 10/11 (3+4)
Adolescent; 8/10 (10); 10/11 (4)
Adult; 8/10 (7); 10/11 (2)
Table 7. Number of correctly modeled required modeling items via reuse (or product line reuse) per person-hour
WUI; standard UML model; WUIML model
Pediatric; 0; 1.43
Adolescent; 0.8; 2.5
Adult; 1.14; 5
Average; 0.65; 2.98
References
Altheim, M., Boumphrey, F., McCarron, S.,
Schnitzenbaumer, S., & Wugofski, T. (Eds.).
(2001). Modularization of XHTML. World Wide
Web Consortium. Retrieved April 1, 2005, from
http://www.w3.org/TR/xhtml-modularization/
xhtml-modularization.html
Meyer, E. A. (2003). Eric Meyer on CSS: Mastering the language of Web design (2nd ed.).
Indianapolis, IN: New Riders Publishing.
Sauers, M., & Wyke, R. A. (2001). XHTML essentials. New York: Wiley Computer Publishing.
Schengili-Roberts, K. (2004). Core CSS: Cascading style sheets (2nd ed.). Upper Saddle River, NJ:
Prentice Hall PTR.
Schmitt, C. (2003). Designing CSS Web pages (2nd
ed.). Indianapolis, IN: New Riders Publishing.
Scogings, C., & Phillips, C. (2001). A method for
the early stages of interactive system design using
UML and Lean Cuisine+. In Proceedings Second
Australasian User Interface Conference, 2001,
AUIC 2001, Gold Coast, Queensland, Australia
(pp. 69 -76). IEEE Computer Society.
Scott, K. (2004). Fast track UML 2.0: UML 2.0
reference guide. Apress.
SEI. (2005a). A framework for software product
line practice, version 4.2. Software Engineering
Institute. Retrieved on March 23, 2005, from http://
www.sei.cmu.edu/productlines/framework.html
SEI. (2005b). Software product lines. Retrieved
on March 23, 2005, from http://www.sei.cmu.
edu/productlines/
Shin, E. M. (2002). Evolution in multiple-view
models of software product families. Fairfax, VA:
George Mason University.
Silva, P. P. d. (2002). Object modelling of interactive systems: The UMLi approach. Unpublished
doctoral dissertation, University of Manchester,
UK.
Silva, P. P. d., & Paton, N. W. (2003, July/August). User interface modeling in UMLi. IEEE
Software, 62-69.
W3C. (2002). XHTML 1.0: The Extensible Hypertext Markup Language (2nd edition).
This work was previously published in the International Journal of Information Technology and Web Engineering, Vol. 1, Issue
1, edited by G. Alkhatib and D. Rine, pp. 1-34, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing
(an imprint of IGI Global).
Chapter 2.25
A User-Centered Approach
to the Retrieval of Information
in an Adaptive Web Site
Cristina Gena
Università di Torino, Italy
Liliana Ardissono
Università di Torino, Italy
Abstract
This chapter describes the user-centered design
approach we adopted in the development and
evaluation of an adaptive Web site. The development of usable Web sites, offering easy and
efficient services to heterogeneous users, is a
hot topic and a challenging issue for adaptive
hypermedia and human-computer interaction.
User-centered design promises to facilitate this
task by guiding system designers in making decisions that take the users' needs seriously into account. Within a recent project funded by the Italian
Public Administration, we developed a prototype
information system supporting the online search
of data about water resources. As the system was
targeted to different types of users, including
generic citizens and specialized technicians, we
adopted a user-centered approach to identify their
Introduction
The development of a Web-based information
system targeted to different types of users challenges the Web designer because heterogeneous
requirements, information needs, and operation
modes have to be considered. As pointed out by
Nielsen (1999) and Norman and Draper (1986),
the users' mental model and expectations have
to be seriously taken into account to prevent
her/him from being frustrated and rejecting the
services offered by a Web site. Indeed, this issue is particularly relevant to Web sites offering
task-oriented services, because most target users
utilize them out of their leisure time, if not at work.
Being under pressure, these users demand ease of
use as well as efficient support to the execution
of activities.
The positive aspect of a technical Web site is,
however, the fact that the users can be precisely
identified and modeled; moreover, their information needs, representing strong requirements,
can be elicited by means of a suitable domain
analysis. Therefore, utilities, such as data search
and retrieval, can be developed to comply with
different goals and backgrounds. Of course, users' involvement and testing have to be carried
out also in this case because they support the
development of effective and usable services (see
Dumas & Redish, 1999; Keppel, 1991).
In our recent work, we faced these issues in
the development of ACQUA, a prototype Web-based information system for the Italian Public
Administration presenting information about
water resources (a demo is available at http://acqua.di.unito.it). During the system design phase,
we put in practice traditional usability principles
and adaptive hypermedia best practices and we
derived general guidelines for the development
of usable Web-based systems for technical users
(see Brusilovsky, 1996, 2001; Fink, Kobsa, & Nill,
1999; Maybury & Brusilovsky, 2002). The system
described in the rest of this chapter is targeted to
two main classes of users:
Background
Several researchers have suggested addressing usability issues by developing adaptive systems.
For instance, Benyon (1993) proposed adaptivity
as a solution, because a single interface cannot
be designed to meet the usability requirements
of all the groups of users of a system. However,
it is possible to prove that adaptivity enhances
the usability of a system only if it can be shown
that, without the adaptive capability, the system
performs less effectively. Benyon identifies five
interdependent activities to be considered when
designing an adaptive system:
Figure 1. Searching quantitative data (continuous hydrometric and chemical-physical parameters) about
Po River in the Torino-Murazzi observation point
Adaptive Features
The information about water resources concerns
rivers, lakes, and underground waters and includes
the following:
Descriptive data about resources and observation points: for example, maps of the
points, charts representing environmental
changes, pictures, documents, publications,
Figure 2. Portion of the page describing the Torino observation point on Po River
Association Rules
In order to define the association rules to be applied for anticipating the users' information needs,
we analyzed a repository of requests, which real
Figure 3. Annotated link for the suggested information and descriptions of the monitoring stations
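The rule format itself is not reproduced in this excerpt. Purely as an illustration of the idea (not the ACQUA rule set), the following minimal Python sketch maps a requested class of data to related data that could be proposed through annotated suggestion links; every rule, resource type, and data label in it is an invented assumption.

```python
# Illustrative only: hypothetical rules mapping a requested data class to
# related data proposed as an annotated "We also recommend you ..." link.
ASSOCIATION_RULES = [
    # (requested resource type, requested data) -> suggested data
    (("river", "hydrometric level"), "chemical-physical parameters"),
    (("river", "chemical-physical parameters"), "description of the monitoring station"),
    (("lake", "water quality"), "charts of environmental changes"),
]

def suggest(resource_type, requested_data):
    """Return the data items to recommend alongside the query results."""
    return [suggestion
            for (r_type, data), suggestion in ASSOCIATION_RULES
            if r_type == resource_type and data == requested_data]

# Example: a technician asks for hydrometric levels of a river observation point.
print(suggest("river", "hydrometric level"))
# -> ['chemical-physical parameters']
```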
Evaluation of ACQUA
We first evaluated the ACQUA prototype in a
usability test by involving external users who were not collaborating on the project (see Dumas
& Redish, 1999, for methodological details). The
evaluation highlighted some usability problems
concerning the presentation of basic information,
such as the choice between simple and advanced
search and the background color of the menus.
After having solved those problems, we tested
the final prototype with real end users representative of the users the Web site is devoted to. In
particular, we involved technicians working at
the Water Resources Division in different fields
(rivers, lakes, underground rivers, etc.) and not
collaborating in the design of the project. We
carried out both an experimental evaluation and
a qualitative session to assess the suitability of
the adaptive features offered by the system.
Subjects. We evaluated 10 potential users of
the ACQUA system, four females and six males,
aged 30 to 50. All the users worked in the water
resource area and none of them was involved in
the project.
Procedure. The subjects were randomly split into two groups of five subjects each. The experimental group
had to solve some tasks using the adaptive Web
site, which applies the association rules described
in Section Adaptive Features to compute the
p<0.01;
Task 7. ANOVA: F(1,8) = 9.23, p<0.05; ω² = 0.45; n = 4.86
It should be noticed that all the results are significant and have a large estimate of the magnitude
of the treatment effect. In addition, by exploiting
a power of 0.80 and the corresponding ω² for each
task we could determine the requested sampled
size, which fits our sample size (n=5) (for details
about statistics, see Keppel, 1991).
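The per-task statistics follow Keppel's (1991) treatment of ANOVA, effect size, and power. As a rough, hypothetical illustration of the arithmetic only (not the authors' data), the sketch below runs a one-way ANOVA on two invented groups of five task-completion times and derives an eta-squared-style effect size from F; Keppel's power tables instead use omega squared.

```python
from scipy import stats

# Hypothetical task-completion times (seconds) for 5 adaptive-site users and
# 5 control users; these numbers are invented for illustration only.
adaptive = [42, 51, 38, 47, 44]
control  = [63, 70, 58, 66, 61]

f_value, p_value = stats.f_oneway(adaptive, control)   # one-way ANOVA, df = (1, 8)

# Effect size derived from F and its degrees of freedom
# (proportion of variance explained by the group factor).
df_between, df_within = 1, len(adaptive) + len(control) - 2
eta_squared = (f_value * df_between) / (f_value * df_between + df_within)

print(f"F({df_between},{df_within}) = {f_value:.2f}, p = {p_value:.4f}, "
      f"eta^2 = {eta_squared:.2f}")
```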
Post-task walk-through. During any post-task
walk-through, test subjects are asked to think about
the event and comment on their actions. Thus,
after each test we talked to the subjects to collect
their impression and to discuss their performance
and the problems encountered during the test. In
this session, we also aimed at retrieving useful
feedback for a qualitative evaluation of the site.
In fact, although our experimental evaluation
reported significant results supporting our hypothesis, the actual user behavior could be different. As recently pointed out by Nielsen (2004),
statistical analyses are often false, misleading,
and narrow; in contrast, insights and qualitative
studies are not affected by these problems because
they strictly rely on the users' observed behavior
and reactions.
In most cases, the interviewed users were satisfied with the site. Most of them encountered some
problems in the execution of the starting query of
task 2, thus we modified the interface form.
All the users of the experimental group followed the adaptive suggestion link provided
by the system but they did not realize that
it represented a personalization feature.
When we explained the adaptations, they
noticed the particularity of the suggestion
("We also recommend you ..."). Anyway, they were attracted by the suggestions and
they appreciated the possibility of skipping
the execution of a new query. The adaptive
suggestions were considered visible and not
intrusive.
The users of the control group reported
similar considerations when we described
the adaptive features offered by the Web site.
Even if they did not receive any suggestions
during the execution of tasks, they explored
the result pages in order to find a shortcut to
proceed in the task execution. After having
followed some links, they went back to the
previous query page or to the home page
by clicking on the Back button of the
browser.
Future Trends
It is worth mentioning that the manual definition
of the first set of association rules supporting the
Conclusion
We presented our experience in the design and
development of ACQUA, an interactive prototype
Web site for the Public Administration. The system
presents information about water resources and
supports the user in the search for generic information, as well as technical information about the
rivers, lakes, and underground waters.
The usability and functional requirements that
emerged during the design of the ACQUA system
were very interesting and challenging, as they
imposed the development of functions supporting
the efficient retrieval of data by means of a simple
user interface. We found out that the introduction
of basic adaptivity features, aimed at understanding the users' information needs in detail, was
very helpful to meet these requirements.
We were asked to develop a system having a
simple user interface, designed to meet usability
and predictability requirements. This fact limited
our freedom to add advanced interaction features,
desirable in a Web site visited by heterogeneous
users; however, it challenged us to find a compromise between functionality and simplicity.
In order to address this issue, we developed two
interactive features enabling the user to create a
personal view on the information space:
Acknowledgments
This work was funded by Regione Piemonte,
Direzione Risorse Idriche. We thank Giovanni Negro, Giuseppe Amadore, Silvia Grisello, Alessia
Giannetta, Maria Governa, Ezio Quinto, Matteo
References
Benyon, D. (1993). Adaptive systems: A solution to usability problems. International Journal of User Modeling and User-Adapted Interaction, 3, 65-87.
Billsus, D., & Pazzani, M. (1999). A personal news agent that talks, learns and explains. In Proceedings of the 3rd International Conference on Autonomous Agents (pp. 268-275).
Brusilovsky, P. (1996). Methods and techniques of adaptive hypermedia. International Journal of User Modeling and User-Adapted Interaction, 6(2-3), 87-129.
Brusilovsky, P. (2001). Adaptive hypermedia. International Journal of User Modeling and User-Adapted Interaction, 11(1-2), 87-110.
Bunt, A., & Conati, C. (2003). Probabilistic student modelling to improve exploratory behaviour. International Journal of User Modeling and User-Adapted Interaction, 13(3), 269-309.
Chin, D. N. (2001). Empirical evaluation of user models and user-adapted systems. International Journal of User Modeling and User-Adapted Interaction, 11(1-2), 181-194.
Cotter, P., & Smyth, B. (2000). WAPing the Web: Content personalization for WAP-enabled devices. In Proceedings of the International Conference on Adaptive Hypermedia and Adaptive Web-Based Systems (pp. 98-108).
Dix, A., Finlay, J., Abowd, G., & Beale, R. (1998). Human computer interaction (2nd ed.). Prentice Hall.
This work was previously published in Cognitively Informed Systems: Utilizing Practical Approaches to Enrich Information
Presentation and Transfer, edited by E. M. Alkhafia, pp. 142-166, copyright 2006 by IGI Publishing, formerly known as Idea
Group Publishing (an imprint of IGI Global).
Chapter 2.26
Auto-Personalization
Web Pages
Jon T. S. Quah
Nanyang Technological University, Singapore
Winnie C. H. Leow
Singapore Polytechnic, Singapore
K. L. Yong
Nanyang Technological University, Singapore
Introduction
This project experiments with the designing of a
Web site that has the self-adaptive feature of generating and adapting the site contents dynamically
to match visitors' tastes based on their activities
on the site. No explicit inputs are required from
visitors. Instead, a visitor's clickstream on the
site will be implicitly monitored, logged, and
analyzed. Based on the information gathered,
the Web site would then generate Web contents
that contain items that have certain relatedness to
items that were previously browsed by the visitor.
The relatedness rules will have multidimensional
aspects in order to produce cross-mapping between items.
The Internet has become a place where a vast
amount of information can be deposited and also
retrieved by hundreds of millions of people scattered around the globe. With such an ability to
reach out to this large pool of people, we have seen an explosion of companies plunging into conducting business over the Internet (e-commerce). This has made the competition for consumers' dollars fiercely stiff. It is now insufficient to just
place information of products onto the Internet
and expect customers to browse through the Web
pages. Instead, e-commerce Web site designing is
undergoing a significant revolution. It has become
an important strategy to design Web sites that are
able to generate contents that are matched to the
customer's taste or preference. In fact, a survey
done in 1998 (GVU, 1998) shows that around 23%
of online shoppers actually reported a dissatisfying experience with Web sites that are confusing
or disorganized. Personalization features on the
Currently, most Web personalization or adaptive features employ data mining or collaborative
filtering techniques (Herlocker, Konstan, Borchers, & Riedl, 1999; Mobasher, Cooley, & Srivastava, 1999; Mobasher, Jain, Han, & Srivastava,
1997; Spiliopoulou, Faulstich, & Winkler, 1999)
which often use past historical (static) data (e.g.,
previous purchases or server logs). The deployment of data mining often involves significant
resources (large storage space and computing
power) and complicated rules or algorithms. A
vast amount of data is required in order to be able
to form recommendations that made sense and
are meaningful in general (Claypool et al., 1999;
Basu, Hirsh, & Cohen, 1998).
While the main idea of Web personalization
is to increase the stickiness of a portal, with the
proven presumption that the number of times a
shopper returns to a shop has a direct relationship
to the likelihood of resulting in business transactions, the method of achieving the goal varies.
The methods range from user clustering and time
framed navigation sessions analysis (Kim et al.,
2005; Wang & Shao, 2004), analyzing relationship
between customers and products (Wang, Chuang,
Hsu, & Keh, 2004), performing collaborative filtering and data mining on transaction data (Cho
Description of System
In this article, we seek to provide an adaptive
feature using a fast and cost-effective means.
The aim is to provide adaptiveness in the sense
that when a visitor selects the next link or a new
page, the contents of the page generated will
have relatedness to the previous pages' contents.
This adaptive feature will be immediate and will
not experience delay or repetitive computational
filtering problems, as compared to using mining
or collaborative filtering (Claypool et al., 1999;
Basu et al., 1998).
The rules-based technique offers an excellent and flexible mechanism to specify rules that
map categories that exhibit relatedness among
themselves (IBM, 2000). Adding new product
[Figure: two tree structures used by the mapping rules; one with nodes A1, A2, ..., An and leaf entries Aa1...Aam, Ab1...Abm, ..., Ax1...Axm, and one with nodes K1, K2, ..., Kn and leaf entries Ka1...Kam, Kb1...Kbm, ..., Kx1...Kxm.]
In order to give priority to activities that occur more often and more recently, a point system is used
whereby the current session log is given heavier
weight than those retrieved from the cookies so
that the current activities will be more likely to
be nominated into the visitor's latest profile. The
activities log tracked by the session object will
be given higher consideration during profiling in
order for the system adaptiveness to be reflected
accordingly and immediately to the changes in
the visitor's browsing behavior. In this design, a
total of three activities logs are used (two from
cookies if available, and the remaining one is
from the current session object that tracks the
current activities).
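As a concrete illustration of such a point system (the weights, categories, and logs below are invented assumptions, not values from the article), the current session log can simply be given a heavier weight than the cookie-based logs when category hits are scored:

```python
from collections import Counter

# Hypothetical weights: the current session log counts more than older cookie logs.
SESSION_WEIGHT, COOKIE_WEIGHT = 3, 1

def build_profile(session_log, cookie_logs):
    """Score browsed categories; higher scores are nominated into the profile."""
    scores = Counter()
    for category in session_log:
        scores[category] += SESSION_WEIGHT
    for log in cookie_logs:                 # up to two logs retrieved from cookies
        for category in log:
            scores[category] += COOKIE_WEIGHT
    return scores.most_common()

# Example: the visitor browsed mostly dog items this session, fish in earlier visits.
print(build_profile(["dog", "dog", "dog food"],
                    [["fish", "fish"], ["fish", "aquarium"]]))
```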
In order to demonstrate the adaptiveness feature, the Web site domain should consist of items or
products that have the following characteristics:
1. Products that can be easily categorized.
2. Selection of items or products should be able
to reflect the taste or preference of the visitor.
3. Certain form of relatedness between the
products.
[Figure: product categories of the demonstration Web site: pets, books on pets, accessories for pets, and food for pets.]
1. Relatedness between categories across products (e.g., pet dog and dog food and books on dogs);
2. Relatedness between categories in the same
product (e.g., books on how to care for dogs
and books on diseases of dogs and books on
nutrition for healthy dogs); and
3. Relatedness at items level across products
(e.g., food and accessory items from the
same company/brand).
Items (1) and (2) are achieved based on the
rule-based technique, while content-based filtering is used for item (3).
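A minimal sketch of how the two mechanisms could work together, under invented rules, items, and attributes: a rule base expands the profiled category into related categories across and within products (relatedness types 1 and 2), and a content-based step then orders the candidate items by a shared attribute such as brand (type 3).

```python
# Hypothetical rule base: profiled category -> related categories (types 1 and 2).
RELATEDNESS_RULES = {
    "pet:dog": ["food:dog", "book:dog care", "book:dog diseases"],
    "book:dog care": ["book:dog nutrition", "accessory:dog"],
}

# Hypothetical catalogue with item attributes used by the content-based step (type 3).
CATALOGUE = {
    "food:dog": [{"name": "Chow A", "brand": "Acme"}, {"name": "Chow B", "brand": "Best"}],
    "book:dog care": [{"name": "Caring for Dogs", "brand": "PetPress"}],
}

def recommend(profile_category, preferred_brand=None):
    """Expand the profile via the rules, then order candidates content-based."""
    candidates = []
    for category in RELATEDNESS_RULES.get(profile_category, []):
        candidates.extend(CATALOGUE.get(category, []))
    # Content-based ordering: items sharing the visitor's preferred brand come first.
    return sorted(candidates, key=lambda item: item["brand"] != preferred_brand)

print(recommend("pet:dog", preferred_brand="Acme"))
```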
The server software architecture is decomposed into independent modules that have very
similar structure. Each product has its own module
that is responsible for presenting and generating
its dynamic Web page contents. The development
platform chosen is Java Server Pages (JSP), which
offers object-oriented components deployment.
By developing the whole Web program in JSP
[Figure: server architecture. The visitor's PC connects over the Internet to a Java Web server; the JSP pages call a Logs Processor (profiler), a rule-based Recommendation Engine, and a content-based Item Sorter.]
JSP Page
Logs Processor (JavaBean)
Recommendation Engine (JavaBean)
Item Sorter (JavaBean)
Impact of System
The prototype system is implemented using a Java
Server from JRUN which provides the container
for running JSP and JavaBeans classes. The database stores all external data inputs. The Web
server PC is installed with JRUN and the Web
application. Customers can access the Web site
using any Web browser.
To illustrate the effectiveness of this system in
providing dynamic content generation, a series of
user inputs and the subsequent system responses
are tabulated.
Conclusion
We have developed a system that can adapt its
Web contents based on visitors' activities on the Web site by combining rule-based with content-based filtering techniques, resulting in an implementation that is both flexible and able to adjust its recommendations rapidly. Rule-based structure
offers cross-product mapping. Content-based
filtering takes the items attributes into account
when generating recommendations. The system
transparently and seamlessly tracks the visitor
on the server side and does not require explicit
inputs (ratings or purchases or login account) to
determine the visitor's profile dynamically.
My-Pets system design utilizes the concept of
category management, which is widely practiced
in brick-and-mortar shop fronts and maps product
References
Basu, C., Hirsh, H., & Cohen, W. (1998). Recommendation as classification: Using social and
content-based information recommendation.
Proceedings of the 1998 Workshop on Recommender Systems (pp. 11-15).
Cho, Y. B., Cho, Y. H., & Kim, S. H. (2005).
Mining changes in customer buying behavior for
collaborative recommendations. Expert Systems
with Applications, 28(2), 359-369.
Cho, Y. H., & Kim, J. K. (2004). Application of Web
usage mining and product taxonomy to collaborative recommendations in e-commerce. Expert
Systems with Applications, 26(2), 233-246.
Cho, Y. H., Kim, J. K., & Kim, S. H. (2002). A
personalized recommender system based on Web
usage mining and decision tree induction. Expert
Systems with Applications, 23(3), 329-342.
Claypool, M., Gokhale, A., Miranda, T., Murnikov,
P., Netes, D., & Sartin, M. (1999). Combining
content-based and collaborative filters in an online
newspaper. Proceedings of the ACM SIGIR Workshop on Recommender Systems.
Cooley, R., Mobasher, B., & Srivastava, J. (1999).
Data preparation for mining World Wide Web
Key Terms
Category Management: Classify and manage
items based on some predetermined categories.
Clickstream: A sequence of mouse clicks.
Collaborative Filtering: Unveiling general patterns by sniffing through users' past activities.
Personalization: Customization to individual users' preferences and needs.
Portal: The entry node/point of navigation semantic unit for a Web site.
Profiling: Capturing individual users' interests and needs.
Self-Adaptive: Ability of a Web portal to automatically adjust its presentations to perceived users' preferences.
Session Object: Information items that capture characteristics and activities of a user during
a Web session.
This work was previously published in the Encyclopedia of E-Commerce, E-Government, and Mobile Commerce, edited by
M. Khosrow-Pour, pp. 20-25, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 2.27
A Qualitative Study in
Users Information-Seeking
Behaviors on Web Sites:
A User-Centered Approach to
Web Site Development
Napawan Sawasdichai
King Mongkut's Institute of Technology, Ladkrabang, Thailand
ABSTRACT
This chapter introduces a qualitative study of
users' information-seeking tasks on Web-based media, by investigating users' cognitive behaviors
when they are searching for particular information
on various kinds of Web sites. The experiment,
which is a major part of the recently completed
doctoral research at the Institute of Design-IIT,
particularly studies cognitive factors including
user goals and modes of searching in order to
investigate if these factors significantly affect
users' information-seeking behaviors. The main
objective is to identify the corresponding impact
of these factors on their needs and behaviors in
relation to Web site design. By taking a user-based qualitative approach, the author hopes that this study will open the door to a careful consideration of actual user needs and behaviors
INTRODUCTION
When visiting a Web site, each user has a specific
goal that relates to a pattern of needs, expectations, and search behaviors. They also approach
with different modes of searching based on varied
knowledge, experience, and search sophistication.
This leads to differences in information-seeking
strategies and searching behaviors. Since information on Web sites is traditionally structured and
presented based on Web sites' goals and contents,
Figure 2. An example page from logs file analysis example: Top entry pages
consideration of user goals, user modes of searching and their search behaviors by taking a user-based qualitative approach. The study expects
to expand understanding within the area of user
studies, and accordingly investigates how these
user cognitive factors contribute to differences
in user information needs and their information-seeking behaviors on Web-based media,
particularly in view of user search strategies and
user search methods. Understanding within this
area will contribute to the further development of
information architecture and interface design.
Figure 3. Primary focuses of the research: the design of Web information based on a user-centered approach, particularly with regard to usefulness and suitability
User Goals
Each user has a specific goal when visiting a Web
site. Different goals suggest different kinds of
needs, expectations, and search behaviors, which
are factors in Web usage and success. Further, users may access the same Web site with different
goals at different times; moreover, they often link
several goals and explore them sequentially. User
goals may be categorized as follows:
in this study. Moreover, entertainment and community Web sites are quite different from other
Web sites because of their unique goals, contents,
and functions.
By simultaneously considering the three
important factors of information design on
Figure 5. The research analytic frame: Generating 10 different study cases for the research
RESEARCH QUESTIONS
The study is specifically conducted within these
selected 10 cases generated from the research
analytic frame shown in Figure 5 in order to find
the answers to these research questions:
The research findings that answer these questions will be analyzed to identify the relationships
existing among user goals and user modes of
searching with their information needs, search
strategies, and search methods. These results
will help establish the classification of cognitive
factors, as well as provide an analysis framework
for information design on Web sites.
RESEARCH METHODOLOGY
Research Methods
A qualitative research method is used in this
study to explore the similarities and differences of
user search patterns. Users' information-seeking
behaviors are observed through controlled observation, through video observation combined with
protocol analysis. User profiles are also collected
through a series of questionnaires. Ten scenarios
are designed to create the 10 study cases originating from the research analytic frame to help the
participants enter the situation and the tasks they
needed to accomplish. Each scenario is embedded
with a particular mode of searching, and a different search goal resulting in the performance of a
task, ranging from open-ended to very specific
purpose and search.
Analysis Methods
Since the research data collected from participants is qualitative in nature, several methods of
qualitative analysis are used in this research to
carefully analyze various aspects of the data, in
order to obtain integrated research findings that
answer the related but different research questions
on which this research focuses. Each analysis
method used in the study delivers distinctive
analytical results answering a specific research
question. The analytical results obtained from
these different analysis methods are also crossexamined in order to accumulate further findings.
This collective analysis process helps to uncover
the pattern of relationship that exists among various user cognitive factors, as well as to identify
their substantial impact on user search behaviors
and information needs.
Thematic analysis (Boyatzis, 1998), the process used for encoding qualitative information,
is performed in this study to analyze the overall
user search behaviors including user task list and
process. In order to uncover the differences and
similarities in user search behaviors, a thematic
analysis framework with a coding scheme is
Table 3. An example analysis of thematic analysis by using the pre-designed coding scheme
[Table layout: for each visited page (page 1: homepage, with menu bar, table of contents, search field, and recommended features; page 2: result page, with table of contents and recommended products shown as a small image plus a short description; page 3: result page, with small images and short descriptions), the coding scheme records the user's key actions, the user's key speech/thoughts, and the user's cognitive and physical behaviors. The coded cell contents are not reproduced here.]
Figure 7. An example use of time-ordered matrix used for further analyzing and generalizing the encoded information gained from the earlier thematic analysis by presenting user search behaviors in the
time-ordered sequences
Figure 8. An example of procedural analysis used for presenting the process of user search patterns
Chernoff Faces coding system

Information-collecting states:
- The intended information is found. Users' primary information needs are fulfilled, but users are interested in finding other relevant or related information.
- The intended information is found. Users' primary information needs are fulfilled. Users are ready to use the information they found to take further action(s).
- Users record the information they found.

Struggling states:
- Users keep searching by changing search strategy.

Decision-making states:
- Users make a positive decision (to proceed) about something according to the information they found.
- Users go back to the previously selected or recorded (bookmarked) pages or results, and/or compare the selected pages or results side by side in case there is more than one page or result selected.

Satisfactory states:
- Users are satisfied. All users' needs are fulfilled. Users are able to accomplish their goals based on the information they found.
- Users are somewhat satisfied, but not completely satisfied. Users' primary needs are fulfilled, and users are able to accomplish their goals based on the information they found. However, users still need more information to fulfill all their needs completely.
- Users are not satisfied. Users' needs are not fulfilled. Users are unable to accomplish their goal(s).

(The Chernoff face glyphs assigned to each state are not reproduced here.)
Figure 9. Examples of Chernoff Face Analysis used to visually identify various types of user search
behaviors regarding the frequency of each behavior
Table 5. An example analysis of user search strategies and user search methods by using the Checklist
and Sequence Record
Checklist and Sequence Record: Showing the frequency and sequence of use of different search tools
Scenario 2: commercial Web site + making decision goal + existence searching mode
[Table body: one row per participant (2/1 to 2/10), recording the frequency and order in which each search tool was used (index, shortcut, site map, Back button, Next button, search field / simple search function, related items/topics, list of items/topics, advertising, feature items/topics, table of contents, menus, exploring/browsing), together with a central tendency per tool. The individual cell values are not reliably recoverable from this reproduction.]
Table 6. An example analysis of user information needs by using the Checklist Record
Scenario 2
Commercial Web site, making decision goal, existence searching mode
[Table body: one row per participant (2/1 to 2/10) plus the central tendency (mean), recording counts for the types of information requested (e.g., news and reports; opinions, reviews, recommendations), the presentation methods and formats of information display (e.g., diagrams and maps; matrices and tables; textual descriptions; abstracts and summaries), the characteristics of information, and remarks on the users' information needs. The individual counts are not reliably recoverable from this reproduction; the recorded remark notes that most users want to see comparison information or want a comparison tool.]
[Table (truth table): for each configuration of the causal conditions (A: users have prior knowledge and/or experience in the content; B: users have visited the Web site before; C: users utilize different kinds of search tools; D: users read text or detailed information thoroughly), the table records the total instances among the 100 cases, the output code for the presence/absence of the instance (P), and the output code for achieving the goal / search success (S). The individual rows are not reliably recoverable from this reproduction.]
Figure 11. An example construction of the confusion matrix showing the scores of agreement and disagreement between two observers in their judgment when coding the events
Figure 12. An example calculation of the proportion of agreement, the proportion expected by chance, and the Cohen's Kappa score to measure the extent of inter-observer agreement
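Figure 12 itself is not reproduced here, but the calculation it illustrates is the standard one; the sketch below shows it on a hypothetical 2x2 confusion matrix of two observers' codes (the counts are invented).

```python
def cohens_kappa(confusion):
    """Cohen's kappa from a square confusion matrix of two observers' codes."""
    total = sum(sum(row) for row in confusion)
    observed = sum(confusion[i][i] for i in range(len(confusion))) / total
    expected = sum(
        (sum(confusion[i]) / total) * (sum(row[i] for row in confusion) / total)
        for i in range(len(confusion))
    )
    return observed, expected, (observed - expected) / (1 - expected)

# Hypothetical agreement counts between observer 1 (rows) and observer 2 (columns).
matrix = [[40, 5],
          [10, 45]]
p_observed, p_chance, kappa = cohens_kappa(matrix)
print(f"agreement = {p_observed:.2f}, chance = {p_chance:.2f}, kappa = {kappa:.2f}")
```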
Figure 13. User goals and user modes of searching, the main factors regulating user search behaviors and the resulting search patterns
Table 8. Boolean minimization process applied to the primitive expressions from truth table 1

Minimization: Step 1
ABCD combines with ABCd to produce ABC
ABCD combines with ABcD to produce ABD
ABCD combines with AbCD to produce ACD
ABCd combines with AbCd to produce ACd
ABcD combines with aBcD to produce BcD
ABcD combines with AbcD to produce AcD
AbCD combines with AbCd to produce AbC
AbCD combines with abCD to produce bCD
AbcD combines with abcD to produce bcD
AbcD combines with Abcd to produce Abc
abCD combines with abCd to produce abC
AbCd combines with Abcd to produce Abd
AbCd combines with abCd to produce bCd
aBcD combines with abcD to produce acD

Minimization: Step 2
ABC combines with AbC to produce AC
ACD combines with ACd to produce AC
ACD combines with AcD to produce AD
AbC combines with Abc to produce Ab
AbC combines with abC to produce bC
AcD combines with acD to produce cD
BcD combines with bcD to produce cD
bCD combines with bcD to produce bD
bCD combines with bCd to produce bC
Table 9. Prime implicant chart showing coverage of original terms by prime implicants
[Chart layout: the prime implicants AC, AD, Ab, bC, bD, cD, ABD, Abd, and Acd are checked against the 11 primitive expressions (ABCD, ABCd, ABcD, ...); an X marks each primitive expression covered by a prime implicant. The full pattern of X marks is not reliably recoverable from this reproduction.]
expressions derived from truth table 1 as demonstrated in Table 8. With the Boolean minimization
process applied, the reduced expressions (prime
implicants) on user search success (S) from truth
table 1 can be represented in the simpler equation
as follows:
S = AC + AD + Ab + bC + bD + cD + ABD + Abd + Acd
Then, the final step of Boolean minimization
is conducted by using the prime implicant chart
(see Table 9) to map the links between nine prime
implicants (see the second equation previously
shown) and 11 primitive expressions (see the
first equation). This process helps to eliminate
redundant prime implicants in order to produce
a logically minimal number of prime implicants
which cover as many of the primitive Boolean
expressions as possible.
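The pairwise-combination step of this minimization can be mechanized. The sketch below uses the chapter's notation (upper case for a present condition, lower case for an absent one) and applies one reduction pass to the three-variable terms produced by Step 1 of Table 8; it is a simplified illustration of the idea, not the full procedure, and it omits the prime implicant chart.

```python
from itertools import combinations

def combine(t1, t2):
    """If two product terms differ only in the polarity (case) of one variable,
    return the term with that variable dropped; otherwise return None."""
    if len(t1) != len(t2):
        return None
    diffs = [i for i, (a, b) in enumerate(zip(t1, t2)) if a != b]
    if len(diffs) == 1 and t1[diffs[0]].lower() == t2[diffs[0]].lower():
        i = diffs[0]
        return t1[:i] + t1[i + 1:]
    return None

def reduction_step(terms):
    """One pass over all pairs: reduced terms plus terms that could not combine."""
    reduced, used = set(), set()
    for a, b in combinations(terms, 2):
        r = combine(a, b)
        if r is not None:
            reduced.add(r)
            used.update({a, b})
    return reduced | (set(terms) - used)

# The three-variable terms produced by Step 1 in Table 8 (chapter notation).
step1_terms = ["ABC", "ABD", "ACD", "ACd", "BcD", "AcD", "AbC",
               "bCD", "bcD", "Abc", "abC", "Abd", "bCd", "acD"]
print(sorted(reduction_step(step1_terms)))
# -> the six reduced terms AC, AD, Ab, bC, bD, cD plus the uncombined terms ABD and Abd
```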
Eventually, with the final process of Boolean
minimization applied, the final equation (S) from
truth table 1 demonstrates six combinations of
causal conditions that produce the positive outcome (user search success) as follows:
S = AC + AD + Ab + bC + bD + cD
This final equation significantly demonstrates
the result showing that causal condition A (users
have prior knowledge and/or experience in the
content), condition C (users utilize different
kinds of search tools), and condition D (users
read text or detailed information thoroughly) are
the important variables that help users to achieve
their goals.
Contrary to the traditional view on user experience with Web site interface (first-time versus
return users), the result shows that causal condition B (users have visited the Web site before)
is not the primary factor contributing to users' accomplishment in their search.
[Table: binary (0/1) codes per case for the causal conditions and the output codes: total instances among the 100 cases, presence/absence of the instance (P), achieving the goal / search success (S), and whether the original goal was achieved. The individual values are not reliably recoverable from this reproduction.]
CONCLUSION
This investigation demonstrates that a user-centered approach can improve information design
on Web-based media through study of various
factors, especially user cognitive factors including user goals and modes of searching, to identify
the corresponding impact of these factors on information and functional needs in terms of user
behaviors. As an attempt to solve the problems of
information-seeking tasks in Web-based media,
the research is successful in providing a new
perspective on Web site design considerations
by strongly taking a user-centered approach to
incorporate a careful consideration of actual user
needs and behaviors together with requirements
from a Web site.
By conducting extensive qualitative research
on user study in relation to search needs and
behaviors on Web sites as well as employing
various analytical methods to uncover different aspects of the research data, the study
answers the research questions. The common
patterns of user information-seeking behavior,
user search strategies and methods, as well as
user information needs presented in different
cases are revealed. These valuable findings will
be further synthesized to develop frameworks
and classifications.
Deeper understanding of these various
factors, especially user cognitive factors, may
complement the use of existing analytical
or design methods such as task analysis and
scenario-based design, by helping Web developers to recognize the important factors that
may be subtle or previously unidentified yet
substantially affect user task performances. By
recognizing these elements, Web developers
can identify the useful and appropriate functions and/or information to include in each
particular case, in order to support user needs
and task performances and eventually promote
their satisfaction.
This work was previously published in Human Computer Interaction Research in Web Design and Evaluation, edited by P.
Zaphiris, pp. 42-77, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 2.28
Abstract
An important theoretical undertaking in personalization research is to identify the structure of
the multidimensional construct of personalization
and to operationalize it in measurable terms.
The purpose of this study was to develop and
validate measurement scales for personalization
by identifying four distinctive personalization
archetypes and hypothesizing their respective
relationships with different cognitive and affective
outcomes. This effort was successful in several
respects. New scales for measuring personalization strategies were developed based on the definitions of personalization archetypes (architectural,
instrumental, social and commercial), which
were in turn derived from an extensive review of
multidisciplinary studies. A lab experiment with
229 student subjects was conducted to explore the
Introduction
In e-commerce and mobile commerce, personalization has been recognized as an important
element in customer relationship and Web strategies. Personalization is commonly treated as an
independent variable that influences Web usage
outcomes such as customer experience (Novak,
Hoffman, & Yung, 2000), Web site usability
(Agarwal & Venkatesh, 2002; Palmer, 2002), and
Four Archetypes of
Personalization Strategy
Different schools of thought can be discerned
within the diverse personalization literature.
To capture the characteristic features of these
logically consistent approaches to thinking about
personalization, we conducted a multiparadigm
review of the literature on personalization. During
the review process, we utilized two metatriangulation techniques discussed by Lewis and Grimes
(1999) to uncover paradigmatic disparity and complementarity. Metatriangulation is a theory-building strategy for exploring the assumptions of
divergent theoretical views and gaining insights
into the multiple paradigms (Lewis & Grimes,
1999). First, we used the paradigm bracketing
technique to differentiate and articulate various
sets of assumptions of alternative paradigms of
personalization. Second, we employed the technique of paradigm bridging to identify transition
zones (Lewis & Grimes, 1999), where paradigmatic boundaries become fuzzy and new views
permeating across paradigms are synthesized.
The sample for the multiparadigm review
includes 86 journal articles, 35 books or book
sections, 13 conference papers, and 8 Web references, obtained from the electronic library
situated needs. Instrumental personalization focuses on the functionality of the system. Its goal
is to support users in accomplishing their goals.
Unlike architectural personalization, in which
function and form balance each other, instrumental personalization emphasizes functionality
and usability and treats aesthetics as a secondary
consideration to be addressed once instrumental
standards are met.
Social personalization can be defined as the
personalization via the mediation of interpersonal
relationships and utilization of relational resources
to facilitate social interactions by providing a convenient platform for people to interact with others
in a way that is compatible with the individual's desired level of communality and privacy. The motivation behind social personalization is to personalize by fulfilling the user's needs for
socialization and a sense of belonging. The goal
is two-fold: (1) to enhance the effectiveness of
interpersonal interactions, and (2) to help generate social capital (Wellman, 2002) by providing new opportunities for strengthening social
relationships and maintaining social networks.
Applications amenable to relational personalization vary greatly in size and complexity. They can
be as simple as providing an "e-mail to a friend" button to notify others of one's flight schedule
after booking tickets online or as complicated as
a conglomeration of online information portal and
activity center in a Digital City that engages
residents or visitors (Toru, 2002).
Commercial personalization is the differentiation of product, service, and information to
increase sales and to enhance customer loyalty
by segmenting customers in a way that efficiently
and knowledgeably addresses each user or group
of users' needs and goals in a given context. One of the most important human activities is the consumption of goods and services. The motivation of commercial personalization is to fulfill users' material needs and thus contribute to their
psychic welfare (Douglas & Isherwood, 1979). It
primarily focuses on the content of the system and
[Table: the four personalization archetypes and their focal concerns (Architectural: form and function; Instrumental: functionality and usability; Social: meaning; Commercial: content), arranged along the dimensions individual vs. interactional and intrinsic vs. extrinsic motivation.]
Effects of Personalization
The personalization archetype scheme implies that
no single standard or approach to personalization
is the best. Each archetype employs different
criteria for evaluating how well the system succeeds in delivering the desired effect. Based on
our paradigm bridging of the four personalization
archetypes, the effects of different personalization archetypes can be differentiated in terms of
the extrinsic or intrinsic motivation they intend
to support.
Commercial and instrumental personalization,
predominantly used for information retrieval,
transaction processing, and content management,
belong to the class of productivity/utility applications. They are utilitarian-oriented, the goal of
which is to achieve specific objectives or tasks.
The idea behind instrumental personalization is
that users will find systems that are designed and
Research Method
The goal of this research is to develop and validate
measures for different dimensions of personalization corresponding to the four distilled personalization archetypes. We first distilled the four
personalization archetypes stated above from an
extensive literature review of five general areas
in which personalization has been studied: marketing/e-commerce, computer science/cognitive
science, architecture/environmental psychology,
information science, and social sciences, including sociology, anthropology and communication.
Next, we generated and pretested 20 candidate
measurement items for all four archetypes of
personalization (see Table 2).
Experimental Design
A total of 229 undergraduate students (94 male,
135 female) enrolled in an introductory IS course
were recruited to participate in the study for the
benefit of extra course credit. Because this course
is designed for non-IS majors, the sampling frame
represents a wider range of disciplinary backgrounds than IS or business courses. Majors of
subjects included chemistry, agriculture, liberal
arts, and life sciences. This type of heterogeneity is advantageous in testing the reliability and
generalizability of the measurement instrument
(Shadish, Cook, & Campbell, 2001).
During the experiment, we showed the movie
clips of someone using the personalization features
of Web sites to the participants rather than having
them actually visit the site. We did this for two
reasons. First, having the participants view the
movie clips ensures that they are only exposed to
the personalization features that they are assigned
to. Second, this approach reduces or eliminates
the differences in participants' responses resulting
from irrelevant factors such as different browsing
paths participants traversed.
To approximate actual Web browsing in the
movie as closely as possible, we used Camtasia
Code      Questionnaire Item

Architectural
ARCH1     Personalizing the site creates a delightful Web experience that is unique to me.
ARCH2     I am able to tailor the look and feel of the site to my own personal taste and style.
ARCH3     Personalizing the site creates a Web environment that is aesthetically pleasing to me.
ARCH4     I have a sense of control and mastery in creating my own space online.
ARCH5     It feels like decorating my own house when I'm personalizing this site.

Instrumental
INSTR1    Personalizing the site makes it more useful and usable for me.
INSTR2    I found personalizing the site helps me to obtain my goal for using the site more efficiently.
INSTR3    I would like to see other sites that I frequently view have the functionality this site provides.
INSTR4    Personalizing the site makes the interaction with the site easier for me.
INSTR5    The site provides many functions that I can configure to suit my own needs.
INSTR6    Personalizing the site helps me locate the right information/product/service I need.

Social
SOCIA1    Personalizing the site helps to connect me to a community that is potentially interesting to me.
SOCIA2    Personalizing the site helps to fulfill my needs for socialization and communication with others.
SOCIA3    Personalizing the site gives me a sense of community while maintaining my own self-identity.
SOCIA4    Personalizing the site helps to create a congenial social environment for me.

Commercial
COMM1     Personalizing the site enables faster checkout, transaction, or information scanning.
COMM2     My concern for privacy is a major factor in determining whether I would personalize the site.
COMM3     I'm made aware of new products or useful information regarding sales and promotion that I didn't know before by personalizing the site.
COMM4     Personalizing the site helps the businesses know me better so that they can serve me better.
COMM5     I felt that this site is knowledgeable about my needs and wants in terms of what they can offer to me.
Web site describing the typical context and purpose of using the personalization features. It is
important to make the usage context and purpose
explicit to the subjects, as the conceptualization
and operationalization of the four personalization strategies are based on the user motivations
and objectives underlying personalization. To
ensure the best audio quality, we tested several
volunteers and chose one with the most pleasant
voice in speaking standard American English. A
total of five AVI files were generated, including
instructions for the participants, and one for each
site. The resolution of the movie clip on screen
is 1080x720.
To optimize the effect of movie viewing, we
conducted the experiment in a state-of-the-art
instructional lab, in which participants viewed
the movie on a personal computer via a 19-inch
flat screen LCD monitor using a headphone. For
practical purposes, balanced incomplete block
design (BIBD) was adopted for the experiment.
To reduce judging fatigue and increase reliability
of evaluations, each participant was randomly
assigned to view only two of the four Web site
stimuli, with order counterbalanced across the
participants. Among the 458 data points collected
from the experiment, 390 were useable after
discarding the questionnaires with incomplete
responses and inconsistent answers to the same
question being asked twice.
[Table: for each sample Web site, its operationalization, the personalization strategy it represents, and the underlying personalization motivation. One surviving motivation cell reads: fulfilling the user's need for a personalized Web space that meets his/her information need and reflects his/her aesthetic taste and style.]
Reliability
Table 3 presents the reliability coefficients in the
right-most column, which are calculated from
the EFA analysis of the data across four sites.
Table 4. Construct loading for personalization strategy instrument from exploratory factor analysis

Construct (reliability coefficient)    Scale item    Loading on its latent construct
Instrumental (0.8118)                  INSTR1        0.613
                                       INSTR2        0.886
                                       INSTR4        0.517
                                       INSTR6        0.520
Social (0.8183)                        SOCIA2        0.724
                                       SOCIA3        0.828
                                       SOCIA4        0.748
Architectural (0.8129)                 ARCH2         0.867
                                       ARCH3         0.728
                                       ARCH4         0.708
                                       ARCH5         0.743
Commercial (0.6334)                    COMM1         0.559
                                       COMM3         0.573
                                       COMM4         0.660
                                       COMM5         0.458
Eigenvalues: Instrumental 5.337; Social 2.519; Architectural 1.641; Commercial 1.109
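The reliability coefficients reported in the table are internal-consistency estimates for each construct's retained items. As a generic illustration with invented ratings (not the study's data), Cronbach's alpha for one such scale can be computed as follows:

```python
import numpy as np

def cronbach_alpha(item_scores):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    X = np.asarray(item_scores, dtype=float)
    k = X.shape[1]                               # number of items in the scale
    item_variances = X.var(axis=0, ddof=1)       # variance of each item
    total_variance = X.sum(axis=1).var(ddof=1)   # variance of the summed scale
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)

# Hypothetical 7-point ratings by five respondents on four items of one scale.
ratings = [[6, 5, 6, 5],
           [4, 4, 5, 4],
           [7, 6, 6, 7],
           [3, 3, 4, 3],
           [5, 5, 5, 6]]
print(round(cronbach_alpha(ratings), 3))
```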
Convergent Validity
As our theory predicted, four components had
eigenvalues greater than 1. The four columns in
the middle of Table 3 present the factor loadings
for the four constructs of personalization from
the EFA analysis based on data pooled from all
four sites. All 15 items converge well on their
corresponding constructs, with high loadings on
the constructs they are intended to measure and
low loadings on others (factor loadings that are
less than 0.5 are not shown in the table).
Independent CFA analysis (see Table 5) on
each site showed varied indices of goodness of
fit for the measurement model. Among the four
sites, Amazon seems to demonstrate the best
fit (GFI=0.8582, CFI=0.9454, RMSEA=0.0588,
chi-square/DF < 2). Yahoo!Group and MyMSN
Discriminant Validity
Table 6 presents the factor correlations and average
variances extracted (AVE) for the four personalization constructs from the EFA analysis. The
Table 5. Summary of goodness-of-fit of personalization strategy model for the four Web sites

Web site        RMSEA     CFI       NNI       NFI
Amazon          0.0588    0.9454    0.9317    0.8138
Lands End       0.1112    0.7857    0.7322    0.6700
My MSN          0.0882    0.8281    0.7852    0.6768
Yahoo!Group     0.0602    0.8981    0.8726    0.6848
Table 6. Factor correlations among the four personalization constructs, with the average variance extracted (AVE) on the diagonal

                  Instrumental    Social     Architectural    Commercial
Instrumental      0.634
Social            0.195           0.7667
Architectural     -0.443          -0.141     0.7615
Commercial        0.404           -0.207     -0.133           0.5625
Table 7. Association

Dependent variable   R²       F           Beta: Commercial   Instrumental   Architectural   Social
1. Usefulness        0.395    31.062**    0.061              0.396**        0.039           0.336**
2. Ease of Use       0.230    14.158**    0.116              0.279**        0.194*          0.063
3. Enjoyment         0.463    40.909**    0.168*             0.259**        0.425**         0.064
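Beta coefficients of this kind are standardized regression weights obtained by regressing each outcome on the four personalization scores. The sketch below shows the computation on synthetic data; all values and variable roles are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic predictor scores for 200 hypothetical respondents:
# commercial, instrumental, architectural, social personalization.
X = rng.standard_normal((200, 4))
# A synthetic outcome (e.g., perceived enjoyment) driven mostly by the
# architectural and instrumental scores, plus noise.
y = 0.4 * X[:, 2] + 0.25 * X[:, 1] + rng.standard_normal(200) * 0.8

def standardized_betas(X, y):
    """Standardize predictors and outcome, then fit least squares without intercept."""
    Xz = (X - X.mean(axis=0)) / X.std(axis=0, ddof=1)
    yz = (y - y.mean()) / y.std(ddof=1)
    betas, *_ = np.linalg.lstsq(Xz, yz, rcond=None)
    return betas

print(np.round(standardized_betas(X, y), 3))
```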
Discussion
The purpose of this study was to develop and
validate measurement scales for personalization
on each site separately. The study empirically supported our theoretical model of the structure of
personalization construct, which consists of four
distinct dimensions: architectural, instrumental,
social and commercial.
Because of the multidimensional nature of
the personalization construct, we should not use
only a single yardstick to measure the effectiveness of personalization strategies. We identified
two major categories of use outcomes, associated with extrinsic motivation and intrinsic motivation, respectively. To test the relationship between
personalization and extrinsic motivation, we
hypothesized that instrumental and commercial
personalization would positively influence two
TAM constructs: perceived usefulness and ease
of use. To test the relationship between personalization and intrinsic motivation, we hypothesized
that social and architectural personalization would
positively influence perceived enjoyment and ease
of use. Our hypotheses were partially supported
by the empirical results which revealed additional
interesting findings.
Among significant findings, instrumental personalization was found to have a positive influence
not only on perceived usefulness and ease of use
but also on perceived enjoyment. Instrumental
personalization facilitates perceived ease of
Contrast   Personalization Type   Amazon   Lands End   My MSN   Yahoo!Group   T Value
1          Commercial             3        -1          -1       -1            6.40**
2          Instrumental           -1       3           -1       -1            0.73
3          Architectural          -1       -1          3        -1            11.81**
4          Social                 -1       -1          -1       3             17.53**
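The contrast t values follow the usual planned-comparison formula: the weighted sum of the four site means divided by its standard error based on the within-cell mean square. The sketch below shows the arithmetic with invented means, cell sizes, and mean square; none of the numbers are from the study.

```python
import math

def contrast_t(means, ns, weights, ms_within):
    """t value for a planned contrast: sum(c_i * m_i) / sqrt(MSw * sum(c_i^2 / n_i))."""
    psi = sum(c * m for c, m in zip(weights, means))
    se = math.sqrt(ms_within * sum(c * c / n for c, n in zip(weights, ns)))
    return psi / se

# Hypothetical mean commercial-personalization ratings of the four stimulus sites
# (Amazon, Lands End, My MSN, Yahoo!Group), cell sizes, and within-cell mean square.
means = [5.1, 3.9, 3.7, 3.8]
ns = [98, 97, 97, 98]
print(round(contrast_t(means, ns, [3, -1, -1, -1], ms_within=1.4), 2))
```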
four dimensions. The instrument exhibited sufficient psychometric properties on the calibrated
Web site that are representative of respective
dimensions.
In practice, this instrument can be utilized in
at least two ways. Firstly, the instrument we developed in this research can be used as a guideline
for developing Web personalization strategies
because it provided four basic means to personalize Web sites; that is, by providing functionalities
and information specifically needed by the user
so as to enhance efficiency and productivity, by
tailoring the interface to the user's own taste and style, and by enabling interaction and connectivity specifically for the user's social network. For
productivity-oriented personalization systems,
key usability issues to consider would be ease
of use, clarity, consistency, free from ambiguity, and error. The aspect of ease of use includes
both the use of the application itself and the setup
and configuration to make personalized features
functional. Consistency helps users better orient
themselves to the site and alleviates cognitive
effort. The enjoyment or entertainment-oriented
personalization applications capitalize on the
process and experience of using the systems.
They are designed to stimulate thinking and to
invoke feelings. The results are not tangible, but
the process itself is critical in creating an engaging, fulfilling user experience. The principle of
consistency may not be sufficient to invoke feelings or engage users on the site for an extended
amount of time. In addition, the instrument can
measure users' perception of different personalization strategies and be used as a criterion for
evaluating the effectiveness of the implementation of personalization strategy. For example, at
the design and testing stage, Web designers and
researchers can evaluate the performance of the
site in terms of its ability to cater to users' personal
needs by having the users rate the site using the
instrument. Weak scores would indicate potential
areas for improvement.
Secondly, we empirically tested the hypothesis that different personalization strategies lead
to different cognitive and affective outcomes of
usage. As extant literature shows, measuring the
effectiveness of Web site personalization using a
monolithic method such as Return-On-Investment
or click-to-buy ratio is not sufficient to gauge the
multidimensional nature of personalization. As
Web users come to interact with the site with
different motivations, which largely dictate their
usage expectation and online behavior (Davis et
al., 1992; Venkatesh, 2000), understanding the
underlying motivations for using Web personalization features and how different motivations relate
to respective cognitive and affective outcome is
crucial for realizing the potential of Web personalization. This study is our initial attempt along this
line of research. Drawing on the existing literature
on the distinction between extrinsic and intrinsic
motivations, we empirically tested different usage
outcomes by using different Web personalization
features. Specifically, we found significant and
strong correlations between architectural personalization and perceived enjoyment and ease
of use, and between instrumental personalization
and perceived usefulness and ease of use. The empirical data also suggested significant, small-sized
correlation between social personalization and
perceived usefulness, and between commercial
personalization and perceived enjoyment. Our
initial hypotheses were partially supported and
new insights were obtained.
Limitations
Several limitations should be considered when
interpreting the results of this study. First, the
data were collected from a convenience sample of
students, which may restrict the generalizability
of the results. Although college students are representative of many young Web users and online
consumers in the real world, the participants of this
may not represent a full insight into the participants' view of Web personalization features.
Finally, another limitation of this study is
concerned with the four criteria we developed to
check how well each stimulus represents its corresponding personalization archetype. Because
the focus of this study is to differentiate the four
personalization archetypes and their respective
cognitive and affective effects, the criteria examine each stimulus's representativeness of its
corresponding personalization archetype relative
to the other three stimuli. Therefore, a criterion
will be supported if the stimulus of focus is shown
to represent its corresponding personalization
archetype better than the other three (i.e., the
stimulus has a higher score on its corresponding
personalization archetype than the other three).
However, meeting such a criterion may not be
sufficient to establish the stimulus of focus as a
successful operationalization of its corresponding
personalization archetype, because the criterion
evaluates the performance of stimulus using its
relative score rather than its absolute score on the
corresponding personalization archetype.
Despite the abovementioned limitations of this
study, we do believe our results provide valuable
insights on different dimensions of personalization and their influences on users' cognitive and
affective responses. This research is the beginning of a rich stream of research investigating the
multidimensional nature of personalization. We
call for further studies along this line of research
using the validated instrument of personalization,
and testing it in various settings.
Future Directions
There are several possible ways of continuing this
research. First, the current study suggested ways of
improving the instrument. Revising the items for
the instrumental personalization by focusing on
the design aspects of instrumental personalization,
such as providing, enabling and delivering useful
References
Agarwal, R., & Venkatesh, V. (2002). Assessing
a firms Web presence: A heuristic evaluation
procedure for the measurement of usability. Information Systems Research, 13(2), 168-186.
Argyle, M. (1996). The social psychology of
leisure. London: Penguin.
Argyle, M., & Lu, L. (1990). The happiness of
extraverts. Personality and Individual Differences, 11, 1011-1017.
Atkinson, M., & Kydd, C. (1997). Individual characteristics associated with World Wide Web use:
An empirical study of playfulness and motivation.
Fischer, E., Bristor, J., & Gainer, B. (1996). Creating or escaping community: An exploratory study
of Internet consumers behaviors. Advances in
Consumer Research, 23, 178-182.
This work was previously published in the International Journal of Technology and Human Interaction, edited by B. C. Stahl,
Volume 4, Issue 4, pp. 1-28, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI
Global).
Chapter 2.29
Abstract
Despite the existence of various data mining efforts that deal with user interface aspects, very
few provide a formal specification of the syntax
of the interface and the corresponding semantics.
A formal specification facilitates the description
of the system properties without being concerned
about implementation details and enables the
detection of fundamental design issues before
they manifest themselves in the implementation.
In visual data mining, a formal specification can
enable users to decide which interaction/operation
to apply to get a desired result; help users to predict
the results of their interactions/operations with the
system; and enable the development of a general
interaction model that designers/developers can
use to understand the relationships between user
interactions and their compositions. In this work,
INTRODUCTION
In this day and age, data still present formidable
challenges to effective and efficient discovery of
knowledge. It should be acknowledged that a lot
of research work has been and is being done with
respect to knowledge discovery (KD). Much of
the work has concentrated on the development
and the optimization of data mining algorithms
using techniques from other fields such as artificial intelligence, statistics, and high performance
computing (Fayyad, Piatetsky-Shapiro, & Smyth,
1996b). Besides various glaring issues (such as
BACKGROUND
Knowledge Discovery
Knowledge discovery (KD) may be defined as
the process of identifying valid, novel, potentially
useful, and ultimately understandable models
and/or patterns in data (Fayyad, Piatetsky-Shapiro, Smyth, & Uthurusamy, 1996a; Fayyad et al.,
1996b). On the whole, the knowledge discovery
process may be defined as an interactive and
iterative non-trivial process that entails various
phases as seen in Figure 1.
Data Mining
Data mining is a core step in the knowledge
discovery process that, under acceptable computational efficiency limitations, enumerates
models and patterns over the data (Fayyad et
Rules
Metaqueries
Metaquerying (Mitbander, Ong, Shen, & Zaniolo, 1996) is a data mining technique that
is especially useful in mining relational and
deductive databases. Metaqueries (or metapatterns) provide a generic description of a class of
patterns that the user may want to discover from
the underlying dataset. With metaqueries, it is
possible to mine patterns that link several tables
in the target dataset. Metaquery specification
can be carried out manually (for instance by an
expert user). Alternatively, the specification can
be automated by exploiting the schema of the
underlying dataset.
Let U be a countable domain of constants. A database DB is (D, R1, ..., Rn) where D ⊆ U is finite, and each Ri is a relation of fixed arity a(Ri) such that Ri ⊆ D^a(Ri).
A metaquery is a second-order template of the form (Angiulli, Ben-Eliyahu-Zohary, Ianni, & Palopoli, 2000):

Equation 1
T ← L1, ..., Lm
where T and Li are literal schemes. Each literal
scheme T or Li is of the form Q(Y1, ..., Yn) where Q
is either a predicate (second-order) variable or a
relation symbol, and each Yj (1 ≤ j ≤ n) is an ordinary
(first-order) variable. If Q is a predicate variable,
then Q(Y1, ..., Yn) is called a relation pattern of
arity n, otherwise it is called an atom of arity n.
The left-hand side T is called the consequent or
the head of the metaquery. The right-hand side
L1, ...,Lm is called the antecedent or the body of
the metaquery. Consider the relations CustCent, ClustOut1 and ServCent with the following attributes: CustCent.CustID, CustCent.CentID, ClustOut1.CustID, ClustOut1.ServID, ServCent.ServID and ServCent.CentID. The following is an example of a corresponding metaquery:

Equation 2
CustCent(CustID, CentID) ← ClustOut1(CustID, ServID), ServCent(ServID, CentID)
Intuitively, given a database instance DB,
answering a metaquery MQ on DB amounts to
finding all substitutions θ of relation patterns appearing in MQ by atoms having as predicate names relations in DB such that the Horn rule θ(MQ) (which is obtained by applying θ to MQ)
encodes a dependency between the atoms in its
head and body. The Horn rule is supposed to hold
in DB with a certain level of plausibility/relevance.
The level of plausibility is based on measures of
interestingness such as support and confidence.
The measures of support and confidence are
described in Section Support and Confidence.
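To make the notion of answering a metaquery concrete, the following is a minimal sketch, not taken from any system cited here, of instantiating the metaquery of Equation 2 over a toy database and scoring the resulting Horn rule by confidence; all relation contents are invented for illustration.

# A toy database: each relation is a set of tuples.
ClustOut1 = {("c1", "s1"), ("c2", "s2")}          # (CustID, ServID)
ServCent = {("s1", "z1"), ("s2", "z2")}           # (ServID, CentID)
CustCent = {("c1", "z1"), ("c2", "z9")}           # (CustID, CentID)

# Body of the instantiated Horn rule:
# CustCent(CustID, CentID) <- ClustOut1(CustID, ServID), ServCent(ServID, CentID)
body = {(cust, cent)
        for (cust, serv) in ClustOut1
        for (serv2, cent) in ServCent
        if serv2 == serv}

# Confidence: fraction of tuples derived by the body that also appear in the head relation.
confidence = len(body & CustCent) / len(body) if body else 0.0
print(body, confidence)                            # {('c1', 'z1'), ('c2', 'z2')} 0.5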
Metaqueries have been applied in the telecommunication industry, in a common-sense knowledge base, and in the chemical industry (Leng & Shen, 1996). Metaqueries have also been applied in analyzing time sequence data for semiconductor
Association Rules
Association rules were introduced in Agrawal,
Imielinski, and Swami (1993). Association rules
represent a data mining technique that is used to
discover implications between sets of items in
the database.
Let I = {I1, I2, ..., Im} be a set of data items or literals and D a set (or database) of transactions, in which each transaction T is a set of items from I (i.e., T ⊆ I). Each transaction (T) is assigned some unique identifier, TID.
Let X ⊂ I and Y ⊂ I. A transaction T is said to contain X if X ⊆ T. An association rule is an implication of the form:

Equation 3
Y ← X

where X ∩ Y = ∅. The left-hand side, Y, is the consequent or the head of the association rule, whereas the right-hand side, X, is the antecedent or the body of the association rule.
The problem of mining association rules is
to generate all association rules with a degree of
relevance/interestingness that is greater than a
certain minimum (such as user-specified) value.
The problem of discovering all association rules
can be decomposed into two sub-problems
(Agrawal et al., 1993):
1. Find all itemsets whose support in D is at least the user-specified minimum support (the large, or frequent, itemsets).
2. Use the large itemsets to generate rules whose confidence is at least the user-specified minimum confidence.
The association rule in Equation 3 has support supp in D if supp% of transactions in D contain X ∪ Y. The confidence of a rule in D is the fraction or percentage of tuples in D containing the antecedent that also contain the consequent. For instance, the association rule in Equation 3 has confidence conf in D if conf% of transactions in D that contain X also contain Y.
Intuitively, support indicates how frequently the
items in the rule occur together in the transactions of the database, and therefore represents the
utility of the rule, whereas confidence indicates
the strength of the implication represented by
the rule.
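As an illustration of these two measures, the following minimal sketch computes the support and confidence of a rule over a handful of made-up transactions; the item names are invented.

transactions = [
    {"bread", "milk"},
    {"bread", "butter"},
    {"bread", "milk", "butter"},
    {"milk"},
]

def support(X, Y, D):
    # Fraction of transactions containing every item of X and Y together.
    return sum(1 for t in D if X | Y <= t) / len(D)

def confidence(X, Y, D):
    # Of the transactions containing X, the fraction that also contain Y.
    containing_x = [t for t in D if X <= t]
    return sum(1 for t in containing_x if Y <= t) / len(containing_x)

X, Y = {"bread"}, {"milk"}
print(support(X, Y, transactions))      # 0.5 (2 of 4 transactions contain bread and milk)
print(confidence(X, Y, transactions))   # 0.666... (2 of the 3 bread transactions contain milk)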
Clustering
Clustering is a process through which the target
dataset is divided into groups of similar objects,
such that the objects in a particular group are
dissimilar to objects in other groups. Each such
group is referred to as a cluster. Clustering is
applicable in many arenas, such as analyzing astronomical data, demographics, insurance, urban planning, and Web applications.
Hierarchical Clustering
Hierarchical methods produce a sequence of
nested partitions. A compact way to represent
nested partitions is by a dendrogram, i.e., a tree
having single objects as leaves, showing the hierarchical relationships among the clusters. It is
therefore possible to explore the underlying dataset at various levels of granularity. Hierarchical
methods are further subdivided into agglomerative
and divisive (Jain & Dubes, 1988; Kaufman &
Rousseeuw, 1990).
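A brief sketch of agglomerative hierarchical clustering follows, assuming SciPy is available; the points, linkage method, and cut level are arbitrary illustrative choices rather than anything prescribed by the chapter.

import numpy as np
from scipy.cluster.hierarchy import dendrogram, fcluster, linkage

points = np.array([[0.0, 0.0], [0.1, 0.2], [5.0, 5.1], [5.2, 4.9], [9.0, 0.1]])

Z = linkage(points, method="average")            # sequence of nested merges
labels = fcluster(Z, t=3, criterion="maxclust")  # cut the tree into three clusters
print(labels)                                    # e.g. [1 1 2 2 3]
# dendrogram(Z) draws the tree of nested partitions (requires matplotlib).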
Partitional Clustering
Partitional methods attempt to identify clusters
directly either by iteratively relocating points
between subsets, or by associating clusters with
the areas that are densely populated with data.
Consequently, partitional methods fall into two
categories: relocation methods and density-based
methods.
Relocation methods focus on how well points
fit into their clusters. Such methods intend to
ensure that the built clusters have the proper
shapes. Relocation methods are further subdivided
into probabilistic, k-medoids, and k-means. The
probabilistic clustering model is based on the
assumption that data has been independently
drawn from a mixture model of several probability distributions. The results of probabilistic
clustering are often easy to interpret. Probabilistic
clustering algorithms include SNOB (Wallace
& Dowe, 1994), AUTOCLASS (Cheeseman &
Miscellaneous
There exist many other clustering techniques that
do not fit well in one of the foregoing categories.
1997), STING (statistical information grid-based method) (Wang, Yang, & Muntz, 1997), and WaveCluster (Sheikholeslami, Chatterjee, & Zhang, 1998). On the other hand, the idea behind grid-based methods is exploited by other types of clustering algorithms (such as CLIQUE (clustering in quest) (Agrawal, Gehrke, Gunopulos, & Raghavan, 1998) and MAFIA (merging of adaptive finite intervals) (Goil, Nagesh, & Choudhary, 1999; Nagesh, Goil, & Choudhary, 2001)) as an intermediate phase in their processing.
Co-occurrence techniques are meant to handle special requirements when it comes to clustering categorical data. Algorithms in this category include ROCK (Guha, Rastogi, & Shim, 1999), SNN (shared nearest neighbors) (Ertz, Steinbach, & Kumar, 2003), and CACTUS (clustering categorical data using summaries) (Ganti, Gehrke, & Ramakrishnan, 1999).
Figure 2. An illustration of the overall outlook of the interface. Reproduced from Kimani, Lodi, Catarci,
Santucci, and Sartori (2004) with the permission of Elsevier B. V.
Clustering
The clustering environment in VidaMine provides
various visual widgets for specifying or selecting
parameters characterizing a clustering task. The
parameters include a fixed number of clusters or a
measure (of homogeneity, separation, or density);
Table 1. Definitions of the main sets and functions

Dataset: S = {Oi | i = 1, 2, ..., N}
Symmetric dissimilarity function: diss : S × S → R+
Classification: a classification C of S is a subset of a partition of S
Accuracy function: m is a function from the set of all classifications of S to R+
1.
2.
3.
Equation 5
Ho(P) = { Q_{j : Oj ∈ C(Oi)} diss(Oi, Oj) | Q ∈ {max, Σ, avg} }

Equation 6
Hc(P) = { Q_{i : Oi ∈ C} h(Oi) | h ∈ Ho(P), Q ∈ {min, max, Σ, avg} }

Equation 7
H = { Q_{C ∈ P} h(C) | h ∈ Hc(P), Q ∈ {max, Σ, avg} }

and S by

Equation 8
So(P) = { Q_{j : Oj ∉ C(Oi)} diss(Oi, Oj) | Q ∈ {min, Σ, avg} }

Equation 9
Sc(P) = { Q_{i : Oi ∈ C} s(Oi) | s ∈ So(P), Q ∈ {min, max, Σ, avg} }

Equation 10
S = { Q_{C ∈ P} s(C) | s ∈ Sc(P), Q ∈ {min, max, Σ, avg} }

where C(Oi) is the cluster containing object Oi.
Equation 5 defines a family of pointwise homogeneity functions, expressing that the homogeneity of object Oi can be defined as either the maximum (i.e., worst-case) dissimilarity to other objects in the same cluster, or the sum or average of all such dissimilarities. Likewise, Equation 6 defines a family of clusterwise homogeneity functions, expressing that the homogeneity of a cluster can be defined as the maximum, minimum, sum, or average of the pointwise homogeneity values of its objects. Equation 7 defines a family of partitionwise homogeneity functions; the homogeneity of a partition can be defined as either the maximum (i.e., worst-case) clusterwise homogeneity over all its clusters, or the sum or average of all such homogeneities. Equations 8-10 provide analogous definitions for the separation function. Note, however, that the quantifier expressing worst-case pointwise or partitionwise separation is the minimum instead of the maximum, and that the quantifiers defining the separation of Oi extend to every object not in its cluster.
Equations 5-10 induce a simple taxonomy with four levels into which functions m are classified. At the first level, homogeneity is separated from separation. Then, classes at lower levels in the taxonomy are separated according to the objectwise, clusterwise, or partitionwise quantifier.
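The following small sketch, on made-up one-dimensional data, illustrates the pointwise, clusterwise, and partitionwise homogeneity families of Equations 5-7 with the worst-case (max) quantifier chosen at every level; the data and quantifier choices are illustrative assumptions only.

points = {"a": 0.0, "b": 0.3, "c": 5.0, "d": 5.4}
partition = [{"a", "b"}, {"c", "d"}]
diss = lambda x, y: abs(points[x] - points[y])     # symmetric dissimilarity

def pointwise_hom(o, cluster, q=max):
    # Equation 5: aggregate dissimilarities of o to the other objects in its cluster.
    others = [diss(o, other) for other in cluster if other != o]
    return q(others) if others else 0.0

def clusterwise_hom(cluster, q=max):
    # Equation 6: aggregate pointwise homogeneities over the cluster.
    return q(pointwise_hom(o, cluster) for o in cluster)

def partitionwise_hom(part, q=max):
    # Equation 7: aggregate clusterwise homogeneities over the partition.
    return q(clusterwise_hom(c) for c in part)

print(partitionwise_hom(partition))                # 0.4: the worst within-cluster dissimilarity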
Clustering Based on Density Estimation
In this section, diss is assumed to be a distance
function and S to be a subset of a metric space
(X, diss). By elementary intuition, clusters can
be regarded as regions of the object space where
objects are located most frequently. Such simple
analogy leads to approaches to clustering based on
statistical techniques of non-parametric density
estimation (Ankerst et al., 1999; Ester et al., 1996;
Hinneburg et al., 1998; Schikuta, 1996; Silverman,
1986). The goal of density estimation is to fit to a data set S a density function of type X → R+. The implemented system supports clustering based on an important family of estimates, known as kernel estimators (Silverman, 1986). Functions in such a family are defined modulo two parameters, the window width h and the kernel function K. The value of the estimate at x ∈ X is obtained by summing, over all data objects, a quantity modeling the influence of the object. Influence is computed by transforming distance, scaled by a factor 1/h, using K:
Equation 11
φ_{h,K}(x) = (1 / (N h)) Σ_{i=1}^{N} K( diss(Oi, x) / h )
Equation 13
P = {C : (∃ xo ∈ S) C = {x ∈ S : xo ~ x}}
The method can be further enhanced by
introducing a notion of noise objects (Ester et
al., 1996): Density at a noise object is less than a specified threshold parameter ξ. Noise objects are not part of any cluster, thus the method generates a classification instead of a partition:

Equation 14
C = {C : (∃ xo ∈ S) C = {x ∈ S : xo ~ x ∧ φ_{h,K}(x) ≥ ξ}}
Problems 1 and 2 can be considered meaningful defining m(C) as the minimum density of
an object, over all objects in all clusters of C.
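A minimal sketch of the kernel estimate of Equation 11 follows, using a Gaussian kernel on a one-dimensional toy dataset; the kernel, window width, and data are illustrative assumptions rather than VidaMine's actual choices.

import math

objects = [0.5, 1.0, 1.2, 4.0, 4.1]                # toy dataset S
diss = lambda a, b: abs(a - b)                     # dissimilarity as a distance

def K(u):                                          # Gaussian kernel
    return math.exp(-u * u / 2.0) / math.sqrt(2.0 * math.pi)

h = 0.5                                            # window width

def density(x):
    # Equation 11: sum the kernel-transformed, 1/h-scaled distances over all objects.
    N = len(objects)
    return sum(K(diss(o, x) / h) for o in objects) / (N * h)

print(round(density(1.0), 3))    # relatively high: x lies inside the 0.5-1.2 group
print(round(density(2.5), 3))    # relatively low: x lies between the two groups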
Abstract Syntax and Semantics of
Clustering
The visual language is defined by the following:
Equation 15

Equation 12
diss(ν_{h,K}(Oi), Oi) = min_{Oj ∈ A(Oi)} { diss(Oi, Oj) : φ_{h,K}(Oj) > φ_{h,K}(Oi) }

Equation 16
Equation 17
V = {ProbRad, NClusSpin, HomSlid, SepSlid,
DenSlid, AccurRad, HomObjwQCom, HomCluswQCom, HomPartwQCom, SepObjwQCom,
SepCluswQCom, SepPartwQCom, KernCom,
SmoothSlid}
Finally, let for brevity hpq = v(HomPartwQCom), hcq = v(HomCluswQCom), hoq = v(HomObjwQCom), spq = v(SepPartwQCom), scq = v(SepCluswQCom), soq = v(SepObjwQCom). The classification C of the dataset S is defined by:
Equation 18
C = Phom(S) if v(AccurRad) = Homogeneity,
    Psep(S) if v(AccurRad) = Separation,
    Cden(S) if v(AccurRad) = Density

and the following hold.

If v(ProbRad) = 1:

Equation 20
mhom(Phom(S)) ≥ v(HomSlid)
msep(Psep(S)) ≥ v(SepSlid)
mden(Cden(S)) ≥ v(DenSlid)

Equation 22
∀P : mhom(P) ≥ v(HomSlid) ⇒ |P| ≥ |Phom(S)|
∀P : msep(P) ≥ v(SepSlid) ⇒ |P| ≥ |Psep(S)|
∀C : mden(C) ≥ v(DenSlid) ⇒ |C| ≥ |Cden(S)|

If v(ProbRad) = 2:

Equation 25
|Phom(S)| = v(NClusSpin)
|Psep(S)| = v(NClusSpin)
|Cden(S)| = v(NClusSpin)

Equation 21
∀P : |P| = v(NClusSpin) ⇒ mhom(P) ≤ mhom(Phom(S))
∀P : |P| = v(NClusSpin) ⇒ msep(P) ≤ msep(Psep(S))
∀C : |C| = v(NClusSpin) ⇒ mden(C) ≤ mden(Cden(S))

where

Equation 23
mhom(P) = hpq_{C ∈ P} hcq_{i : Oi ∈ C} hoq_{j : Oj ∈ C(Oi)} diss(Oi, Oj)

Equation 24
msep(P) = spq_{C ∈ P} scq_{i : Oi ∈ C} soq_{j : Oj ∉ C(Oi)} diss(Oi, Oj)

Equation 19
mden(C) = min_{C ∈ C} min_{i : Oi ∈ C} φ_{v(SmoothSlid), v(KernCom)}(Oi)
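To illustrate how the widget valuation v(...) of Equation 17 could drive a clustering task, here is a purely hypothetical sketch; it assumes, for illustration only, that AccurRad selects the accuracy family as in Equation 18 and that ProbRad switches between a fixed-number-of-clusters mode and a quality-threshold mode, as suggested by Equations 20-25. None of this is VidaMine code.

widgets = {
    "AccurRad": "Homogeneity",   # which accuracy family drives the clustering (Equation 18)
    "ProbRad": 2,                # assumed: 2 = number of clusters given, 1 = threshold given
    "NClusSpin": 4,
    "HomSlid": 0.35,
    "HomPartwQCom": "max", "HomCluswQCom": "max", "HomObjwQCom": "max",
}

def v(name):
    # The valuation function: reads the current value of a visual widget.
    return widgets[name]

def clustering_request():
    request = {"criterion": v("AccurRad"),
               "quantifiers": (v("HomPartwQCom"), v("HomCluswQCom"), v("HomObjwQCom"))}
    if v("ProbRad") == 2:                       # fixed number of clusters (cf. Equation 25)
        request["n_clusters"] = v("NClusSpin")
    else:                                       # quality threshold (cf. Equation 20)
        request["threshold"] = v("HomSlid")
    return request

print(clustering_request())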
{h ← b : h ∈ L, b ⊆ L \ {h}}

Equation 32
L = {P(X1, ..., Xm) : (∃n) P = pred(n) ∧ isrel(n) ∧ (∀i ≤ m)(∃n') Xi = var(n') ∧ inschema(n', n)}

where

Equation 33
isadj(n, n') ⇔ (∃e) l(e) = (n, n') ∧ type(e) = adjacent

Equation 34
intersects(n, n') ⇔ (∃e) l(e) = (n, n') ∧ type(e) = intersecting

Equation 35
pred(n) = a distinct predicate variable, if v(n) = "X"; v(n), otherwise

Equation 36

Equation 37

Equation 40

Equation 38
I1, ..., Im ∈ I

isconn = isadj^e

Equation 39
inschema(n, n') ⇔ (islink(n) ∧ (∃n'') isconn(n', n'') ∧ intersects(n, n'')) ∨ (¬islink(n) ∧ isconn(n', n))
and isadje is the equivalence relation generated
by isadj , that is, the smallest equivalence relation
containing isadj. Therefore, an equivalence class
contains nodes corresponding to frames gathered
in one relation scheme in the target space.
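The closure step can be pictured with a small sketch: given the pairs related by isadj, the equivalence classes of isadj^e are simply the connected components of the resulting graph. The frame names below are invented for illustration.

from collections import defaultdict

adjacent_pairs = [("Customer", "CustID"), ("CustID", "ServID")]   # isadj pairs (illustrative)

neighbours = defaultdict(set)
for a, b in adjacent_pairs:
    neighbours[a].add(b)
    neighbours[b].add(a)

def equivalence_class(start):
    # Reflexive-symmetric-transitive closure of isadj, computed by graph search.
    seen, stack = {start}, [start]
    while stack:
        for nxt in neighbours[stack.pop()]:
            if nxt not in seen:
                seen.add(nxt)
                stack.append(nxt)
    return seen

print(equivalence_class("Customer"))   # {'Customer', 'CustID', 'ServID'}
print(equivalence_class("Service"))    # {'Service'}: not adjacent to anything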
L is the set of literals defined by the visual
configuration (Equation 32).
In each literal, P is the relation name enclosed
in a frame, or a distinct predicate variable, if the
name is X (Equation 35 and Equation 37). Every
variable corresponds to an attribute frame that
is connected to the relation frame, which names
the literal, or corresponds to a link that intersects
such an attribute frame (Equation 39). The set MQ
of metaqueries is obtained from L by generating
Equation 41
conf(r) = |{t ∈ T : {I} ∪ t' ⊆ t}| / |{t ∈ T : t' ⊆ t}|

Equation 42
supp(r) = |{t ∈ T : {I} ∪ t' ⊆ t}| / |T|
In the association rule environment of VidaMine, there is the provision of market baskets.
As seen in Figure 5, the association rule environment offers two baskets, the IF basket and the
THEN basket. The IF basket represents items in
the antecedent part of an association rule and the
THEN basket represents items in the consequent
part of the rule. Users may drag and drop items
from the target dataset into the relevant baskets.
CONCLUSION
Despite the existence of many data mining efforts that deal with user interface aspects, there
is relatively little work going on or that has been
published on the specification of the syntax of such
user interfaces and the corresponding semantics.
In general, a formal specification can bring about
many benefits such as facilitating the description
of the system properties without having to be concerned about implementation details and enabling
the detection of fundamental design issues before
they manifest themselves in the implementation.
In visual data mining, where visualization is often
a key ingredient, a formal specification can be
rewarding. For instance, it can enable users to
decide which interaction/operation to apply to get
a particular output, it can help users to predict the
output of their interactions/operations with the
system, and it can facilitate the development of
interaction models. In this work, we have proposed and described an approach for constructing such a formal specification in the process of developing a visual data mining system, VidaMine.
References
Agrawal, R., & Srikant, R. (1994). Fast algorithms
for mining association rules in large databases.
In J. B. Bocca, M. Jarke, & C. Zaniolo (Ed.), Proceedings of the 20th International Conference on Very Large Data Bases (VLDB). San Francisco: Morgan Kaufmann Publishers.
Agrawal, R., Gehrke, J., Gunopulos, D., &
Raghavan, P. (1998). Automatic subspace clustering of high dimensional data for data mining
applications. In A.
operator interaction framework for visualization systems. In G. Wills & J. Dill (Ed.), Proceedings IEEE Symposium on Information Visualization (InfoVis 98) (pp. 63-70). Los Alamitos, CA: IEEE Computer Society.
Chuah, M. C., & Roth, S. F. (1996). On the semantics of interactive visualizations. Proceedings
IEEE Symposium on Information Visualization 96
(pp. 29-36). Los Alamitos, CA: IEEE Computer Society.
DMG (The Data Mining Group). PMML: Predictive Model Markup Language. Retrieved March
15, 2006, from http://www.dmg.org
Ertz, L., Steinbach, M., & Kumar, V. (2003).
Finding clusters of different sizes, shapes, and
densities in noisy, high dimensional data. In
D. Barbará & C. Kamath (Ed.), Proceedings of the 3rd SIAM International Conference on Data Mining. SIAM. Retrieved from http://www.siam.org/meetings/sdm03/index.htm
Erwig, M. (1998). Abstract syntax and semantics
of visual languages. Journal of Visual Languages
and Computing, 9(5), 461-483.
Ester, M., Kriegel, H. P., Sander, J., & Xu, X.
(1996). A density-based algorithm for discovering clusters in large spatial databases with noise.
In E. Simoudis, J. Han, & U. M. Fayyad (Ed.),
Proceedings of the 2nd International Conference
on Knowledge Discovery and Data Mining (KDD96) (pp. 226-231). AAAI Press.
Fayyad, U., Piatetsky-Shapiro, G., Smyth, P., & Uthurusamy, R. (Ed.). (1996a). Advances in knowledge discovery and data mining. AAAI Press.
An efficient clustering algorithm for large databases. Proceedings of the 1998 ACM SIGMOD International Conference on Management of Data (pp. 73-84). New York: ACM Press.
Hansen, P., & Jaumard, B. (1997). Cluster analysis
and mathematical programming. Mathematical
Programming 79, 191-215.
Hartigan, J. A. (1975). Clustering algorithms.
New York: John Wiley & Sons.
Hartigan, J. A., & Wong, M. (1979). Algorithm
AS136: A k-means clustering algorithm. Applied
Statistics 28, 100-108.
Hinneburg, A., & Keim, D. A. (1998). An efficient approach to clustering in large multimedia
databases with noise. In R. Agrawal, P. Stolorz,
& G. Piatetsky-Shapiro (Ed.), Proceedings of
the 4th International Conference on Knowledge
Discovery and Data Mining (pp. 58-65). Menlo
Park, CA: AAAI Press.
Jain, A. K., & Dubes, R. C. (1988). Algorithms
Zhang, J. Debenham, & D. Lukose (Ed.), Proceedings of the Australian Joint Conference on Artificial Intelligence (pp. 37-44). World Scientific.
Wang, J. Z., Wiederhold, G., Firschein, O., &
Wei, S. X. (1998). Content-based image indexing
and searching using Daubechies wavelets. International Journal on Digital Libraries (IJODL),
1(4), 311-328.
Wang, W., Yang, J., & Muntz, R. R. (1997).
STING: A statistical information grid approach
to spatial data mining. In M.
Jarke, M. J. Carey,
K. R. Dittrich, F. H. Lochovsky, P. Loucopoulos,
& M. A. Jeusfeld (Ed.), Proceedings of the 23rd
International Conference on Very Large Data Bases (VLDB) (pp. 186-195). San Francisco: Morgan Kaufmann Publishers.
Ward, J. H. (1963). Hierarchical grouping to optimize an objective function. Journal of the American Statistical Association 58(301), 236-244.
Xu, X., Ester, M., Kriegel, H. P., & Sander, J.
(1998). A distribution-based clustering algorithm
for mining in large spatial databases. Proceedings of the International Conference on Data Engineering (ICDE) (pp. 324-331). Los Alamitos, CA: IEEE Computer Society.
Endnote
This work was previously published in Visual Languages for Interactive Computing: Definitions and Formalizations, edited
by F. Ferri, pp. 247-272, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 2.30
Abstract
Introduction
of human activity that are directly related to understanding and improving human performance.
Traditionally, this area of research has focused
on two aspects of work: operator awareness
and capability modelling; and human-system
simulation environments. Operator modelling
is for the most part a behavioural activity, using
techniques such as time analysis and operator
function models (Kirwan & Ainsworth, 1992).
Human-system simulation environments, on the
other hand, require sophisticated computational task simulation tools and theories to evaluate human decision making, including recognition-primed decisions and naturalistic decision making
(Zsambok & Klein, 1997).
Critical incident analysis focuses on developing causal models of relationships within
complex systems to prevent accidents or error
states in safety-critical environments (Shrayne,
Westerman, Crawshaw, Hockey, & Sauer, 1998).
Table 1. Phases of the development lifecycle (Design; Development and Deployment; Evaluation and prediction) and their purposes

To elicit, analyse, and specify functional and non-functional stakeholder requirements within the context of use and the existing limitations and constraints upon the activity
To generate high-level, coarse-grained descriptions of the main tasks and objectives that are relevant to the user(s)
Traditionally, the role of this phase has been to develop the user-interface design; however, it should also include high-level system architecture specification, documentation and additional resource design. These activities should all occur concurrently.
To define and model the generic functionalities of the system, especially consistency and learnability (affordance, usability) when the system is to be deployed across several platforms
To analyse the implemented functionality of the system in terms of efficiency
To automatically generate parts of the architecture and subsystem components related to the functional stakeholder requirements
Through the use of specific task modelling notations, to produce a series of design tests which evaluate user performance and ensure that the final product is fit for purpose
What is a Task?
Figure 1 depicts a traditional structural view of
task in HCI where it is seen as a conscious act
(Kirwan & Ainsworth, 1992; Watts & Monk,
1998). Task-as-activity is comprised of some goal
that must be achieved through mediated interaction via agents and artefacts of the environment
(Flor & Hutchins, 1991; Hutchins, 1995). Planning
is implied as goals reflect the system state to be
achieved through effecting some combination of
actions (events). Unfortunately, here we run into
Purpose:
To inform users and designers about potential problems. Common themes involve needs analysis, usability, and affordance.
To establish the impact of new or existing tools upon task performance in some work practice
To provide a detailed and structured conceptual model of the task, including behavioural (time, errors, feedback, etc.) and structural modelling (functionality, visibility, etc.)
To help develop prototypes, user interfaces, and elements of the system architecture
2.
3.
4.
foundations lie in system theory and information processing, seeing task performance as the
interaction between human and machine, the latter becoming increasingly complex as computers
and automation develop[ed] (Annett, 2004, p.
68). With relation to task performance, the term
analysis in HTA refers to the process of problem
identification and structuring. HTA is an effective process for proposing empirical solutions to
existing specified problems (Annett, 2004).
In HTA, goal refers to an expected outcome,
or system state. Goals and subgoals may be active
or latent, arising when the need presents itself to
achieve some expected outcome. HTA frequently
models this goal-oriented behaviour as tree structures, such as decision trees. Goals are established
and acquired through an information processing
cycle similar to that in our discussion of GOMS
(Annett & Duncan, 1967; Kieras, 2004). As with
goals, actions in HTA may also be decomposed
into nested actions, each maintaining their direct
relationship to an established system state. Being
nested, actions are available at both the current
node of activity, and at their super node (parent
task). Therefore, according to HTA, user behaviour can be reduced to a cycle of monitoring for
new input, deciding upon available alternatives
and controlling subsequent behaviour.
Although an attractive framework, there are a
number of problems HTA practitioners typically
encounter. Firstly, when a parent goal becomes
active, all subgoals (and their related actions) become active as well. However, it is seldom the case
that a user is simultaneously aware of current and
future goals. Rather, user behaviour is typically
more anticipatory and reflexive (Endsley, 2000;
Suchman, 1987). According to HTA, subgoals
and their actions are maintained as conscious
constructs available to the user. Again, this is
unlikely to be the case, as many procedural actions
essential for effective task completion are habituated or unconscious even when the associated goal
is being actioned (Whittaker et al., 2000).
Summary of Approaches
To determine which approach may best be used
within a particular Web design project, Table 4
provides a brief overview of the cognitive approaches previously mentioned and demonstrates
their shared and unique attributes.
Table 4. Summary of the cognitive approaches

Criteria                     GOMS     HTA       GTA       CTA    CWA
Expressive power             LOW      LOW       MEDIUM    HIGH   HIGH
Complexity                   LOW      LOW       MEDIUM    HIGH   HIGH
Collaboration                NO       NO        YES       NO     YES
Timing                       YES      YES       NO        NO     NO
Roles and responsibilities   NO       NO        YES       NO     YES
Evaluation                   YES      YES       YES       YES    YES
Requires Training            LOW      LOW-MED   MED-HIGH  HIGH   HIGH
Scalable                     NO       NO        NO        YES    YES
Social Orientation           NO       NO        NO        NO     YES
The need for integrative frameworks is especially relevant to Web designers who must not
only support existing social practices through a
novel medium, but also the cognitive demands
imposed by semiotic constraints when delivering
information via the Web (Smart, Rice, & Wood,
2000). Table 5 outlines the required properties of
an integrative framework.
Table 5. Required properties of an integrative framework

Formalised notation system: Common, accessible notation that facilitates conceptual modelling and communication of knowledge between interested parties (Balbo et al., 2004; Erickson & Kellogg, 2001)
Methodological flexibility: The ability to adopt existing methodologies rather than constantly reinvent the wheel (Whittaker et al., 2000)
Cost-effective practices: Cost-effective practices that encourage the use of task analysis during system design, thus increasing return on investment (Stanton, 2004)
Reuse: The ability to support the reuse of HCI claims (Carroll, 1996; Sutcliffe, 2000)
Ecological validity: Research findings and technology must support existing social processes and cultural practices (Nardi, 1996)
1.
2.
3.
Distributed Cognition
Distributed cognition is concerned with how
knowledge is propagated and transformed by
agents within activity. An agent is any cognitive
artefact of the system, be it human, machine, or
other work product. The unit of analysis is the
cognitive system. Distributed cognition relaxes the
assumption that the individual is the best or only
useful unit of analysis and thus extends the reach
of what is considered cognitive to both systems
that are smaller and larger than the individual
(Hollan et al., 2001; Hutchins, 1995). The cognitive system in distributed cognition is thus more
akin to the term complex cognitive system.
Goals, according to distributed cognition,
are not merely maintained within the mind of
the subject (individual or group), but rather embedded within the cognitive system. Distributed
cognition posits that artefacts may themselves
possess goals. The cognitive system can only be
understood when we know the contributions of
individual agents, their shared contributions and
collaboration strategies, and the nature of agent behaviour in the environment. In contrast to situated
action, distributed cognition incorporates culture,
context and history, but from within an embedded
perspective (Hollan et al., 2001).
There are striking similarities between distributed cognition and activity theory (Nardi,
1996). Both are activity-centric: they recognise
activity as a hierarchical, goal-oriented structure;
align physical, verbal, and nonverbal actions
with specific goals; and they distinguish between
conscious and unconscious actions. Additionally, neither framework prescribes a particular
set of methodological practices. Nevertheless,
there are two notable differences between the
approaches:
Activity Theory
Activity theory is a descriptive conceptual
framework that has emerged primarily from contributions by Vygotsky (1986), Leont'ev (1978), and Engeström (1987). Activity theory serves to
describe the different forms of human praxis and
developmental processes involved in HCI. Activ-
Scenario-Based Design
Scenario-based design (SBD) promotes the use
of scenarios (or structured narratives) in HCI as a
way of understanding human activity and creating
computer systems (Carroll, 2000). Today, they are
widely used across disciplines. In an attempt to
render software development more social, Carroll
(1996, 1997) argues that activity theory can be
applied effectively to SBD. While Carroll does
not describe how this can be accomplished, suggestions are found elsewhere (Carroll, 2000; Go
& Carroll, 2004; Kazman et al., 1996; Rosson
& Carroll, 2002). Admittedly, natural language
narratives are not the most scientific notation used
within HCI or software engineering; however, it is
often by these means that technical information is
conveyed throughout the development effort (Carroll, 1996). Moreover, scenarios provide a flexible
mechanism for integrating real life accounts of
activity with the more empirically discrete views
employed within software engineering.
During discovery and definition (see Table
1), scenarios can be used speculatively at the
start of the SDLC to document expected user
behaviour or to describe hypothetical activities.
In the absence of an existing system, this process
improves ecological validity during requirements
engineering (Stanton, 2004). To assist with requirements engineering, Kaptelinin et al. (1999)
suggest that Activity Checklists can assist practitioners to focus on salient aspects of an activity,
thereby constraining the process of requirements
engineering during SBD.
Scenarios represent purposeful interaction
within a system, and are inherently goal-oriented.
Scenarios are always particular to a specific situation, and thus situated within a particular frame
of research. Because scenarios are actor-driven,
describing interaction from the perspective of
at least one individual, they can be effectively
integrated with an activity theory framework.
Finally, as scenarios use a common notation, natural language, all stakeholders in the development
process can easily communicate requirements,
experiences, and other opinions/beliefs without
requiring extensive training. Consequently,
scenarios serve as an excellent lingua franca for
communication between all project stakeholders; a
primary goal of any good task analysis framework
(Balbo et al., 2004).
Table 6. Summary of the post-cognitive approaches

Criteria                     Situated Action   Distributed Cognition   Activity Theory   Scenario-Based Design
Expressive power             LOW               LOW                     MEDIUM            HIGH
Complexity                   LOW               LOW                     MEDIUM            HIGH
Collaboration                NO                NO                      YES               NO
Timing                       YES               YES                     NO                NO
Roles and responsibilities   NO                NO                      YES               NO
Evaluation                   YES               YES                     YES               YES
Requires Training            LOW               LOW-MED                 MED-HIGH          HIGH
Scalable                     NO                NO                      NO                YES
Social Orientation           NO                NO                      NO                NO
Summary of Approaches
Table 6 provides a brief overview of the post-cognitive approaches described previously and demonstrates their shared and unique attributes.
Conclusion
Web design is a complex activity at the best of
times. Not only do designers frequently encounter technological limitations imposed by a novel
communication medium, but they are also highly
isolated from their users. Moreover, arguably
more than with any other technological medium,
the Web application target audience is extremely
heterogeneous. Therefore, the ambiguous and
diverse nature of Web application use imposes
critical limitations on Web design practices.
Shadowed by the importance of developing
Acknowledgments
We would like to thank Baden Hughes and Sandrine Balbo for their collaboration and comments
on related work. Special thanks go to Sandrine
Balbo for her presentation on task models in
HCI, which spawned an insightful and lively
afternoon of discussions that stimulated aspects
of our discussion here.
References
Albers, M. J. (1998). Goal-driven task analysis:
Improving situation awareness for complex
problem solving. In Proceedings of the 16th
Annual International Conference on Computer Documentation (pp. 234-242). Quebec, Canada: ACM Publishing.
Carroll, J. M. (2000). Five reasons for scenariobased design. Interacting with Computers, 13,
43-60.
Carroll, J. M. (2002). Making use is more than
a matter of task analysis. Interacting with
Computers, 14, 619-627.
Checkland, P. (1999). Soft systems methodology
in action. New York: John Wiley & Sons Ltd.
Corbel, C., Gruba, P., & Enright, H. (2002). Taking
the Web to task. Sydney, Australia: National Centre
for English Language Teaching and Research,
Macquarie University.
Czerwinski, M. P., & Larson, K. (2002). Cognition
and the Web: Moving from theory to Web
design. In J. Ratner (Ed.), Human factors and Web development (pp. 147-165). Mahwah, NJ: Erlbaum.
Diaper, D. (2004). Understanding task analysis for
human-computer interaction. In D. Diaper & N.
Stanton (Eds.), The handbook of task analysis for
human-computer interaction (pp. 5-49). Mahwah,
NJ: Lawrence Erlbaum Associates.
Dix, A. (2005). Human-computer interaction
and Web design. In R.W. Proctor & K.-P. L. Vu
(Eds.), Handbook of human factors in Web design
(pp. 28-47). Mahwah, NJ: Lawrence Erlbaum
Associates.
Endsley, M. R. (2000). Theoretical underpinnings
of situation awareness: A critical review. In M.
R. Endsley & D. J. Garland (Eds.), Situation
awareness analysis and measurement (pp. 3-33).
Mahwah, NJ: Lawrence Erlbaum Associates.
Engeström, Y. (1987). Learning by expanding. Helsinki, Finland: Orienta-Konsultit.
Erickson, T., & Kellogg, W. A. (2001). Social
Farmer, R. A. (2005). Multimodal speech recognition errors and second language acquisition: An
activity theoretic account. In Proceedings of the
6th Conference on Using Technology in Foreign
Language Teaching. Compiègne, France: Université de Technologie de Compiègne.
Farmer, R. A. (2006). Situated task analysis in
learner-centered CALL. In P. Zaphiris & G.
Zacharia (Eds.), User-centered computer-assisted
language learning (pp. 43-73). Hershey, PA: Idea
Group Publishing.
Farmer, R. A., Gruba, P., & Hughes, B. (2004).
Towards principles for CALL software quality
improvement. In J. Colpaert, W. Decoo, M.
Simons, & S. Van Beuren (Eds.), CALL and
Research Methodologies: Proceedings of the 11th
International Conference on CALL (pp. 103-113).
Belgium: Universiteit Antwerpen.
Farmer, R. A., & Hughes, B. (2005a). A situated
learning perspective on learning object design. In
P. Goodyear, D. G. Sampson, D. Yang, Kinshuk,
T. Okamoto, R. Hartley, et al. (Eds.), Proceedings
of the 5th International Conference on Advanced
Learning Technologies, Kaohsiung, Taiwan (pp.
72-74). CA: IEEE Computer Society Press.
Farmer, R. A., & Hughes, B. (2005b). A classification-based framework for learning object
assembly. In P. Goodyear, D. G. Sampson, D.
Yang, Kinshuk, T. Okamoto, R. Hartley et al.
(Eds.), Proceedings of the 5th International Conference on Advanced Learning Technologies (pp.
4-6). Kaohsiung, Taiwan. CA: IEEE Computer
Society Press.
Farmer, R. A., & Hughes, B. (2005c). CASE: A
framework for evaluating learner-computer interaction in Computer-Assisted Language Learning.
In B. Plimmer & R. Amor (Eds.), Proceedings
of CHINZ 2005 Making CHI Natural, Auckland, New Zealand (pp. 67-74). New York: ACM
Press.
Schach, S. R. (1999). Classical and objectoriented software engineering (4th ed.). Singapore:
McGraw-Hill International.
Shepherd, A. (2001). Hierarchical task analysis.
London: Taylor & Francis.
Shrayne, N. M., Westerman, S. J., Crawshaw, C.,
Hockey, G. R. J., & Sauer, J. (1998). Task analysis
for the investigation of human error in safetycritical software design: A convergent methods
approach. Ergonomics, 41(11), 1719-1736.
Smart, K. L., Rice, J. C., & Wood, L. E. (2000).
Meeting the needs of users: Toward a semiotics
of the web. In Proceedings of the 18th Annual
ACM International Conference on Computer
Documentation: Technology & Teamwork (pp.
593-605). Piscataway, NJ: IEEE Educational
Activities Department.
Sommerville, I. (2004). Software engineering
(7th ed.). New York: Addison-Wesley.
Stanton, N. (2004). The psychology of task analysis
today. In D. Diaper & N. Stanton (Eds.), The
handbook of task analysis for human-computer
interaction (pp. 569-584). Mahwah, NJ: Lawrence
Erlbaum Associates.
Suchman, L. A. (1987). Plans and situated actions:
The problem of human-computer communication.
New York: Cambridge University Press.
Sutcliffe, A. (2000). On the effective use and
reuse of HCI knowledge. ACM Transactions on
Computer-Human Interaction, 7(2), 197-221.
Tatnall, A. (2002). Actor-network theory as a
socio-technical approach to information systems
research. In S. Clarke, E. Coakes, M. G. Hunter,
& A. Wenn (Eds.), Socio-technical and human
cognition elements of information systems (pp.
266-283). Hershey, PA: Idea Group.
Turner, P., Turner, S., & Horton, J. (1999).
From description to requirements: An activity
Appendix
Task analysis matrix adapted from a survey conducted by Bonaceto and Burns (2003). The matrix rates each method's applicability (key: High, Medium, Low) across the categories Concept Definition, Requirements Analysis, Task Design, Interface Development, Workload Estimation, Training Development, and Problem Investigation.
This work was previously published in Human Computer Interaction Research in Web Design and Evaluation, edited by P.
Zaphiris, pp. 78-107, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 2.31
Abstract
The traditional desktop computing paradigm has
had major successes. It also should be noted that we
are in a day and age where many good computer and device users increasingly find themselves required to perform their activities not at office desktops but in real-world settings.
Ubiquitous computing can make possible in the
real-world setting what would have otherwise
been impossible through desktop computing.
However, there is a world of difference between
the real-world and the desktop settings. The move
Introduction
It is worth acknowledging that the traditional desktop computing paradigm has had major successes.
On the same note, it should be observed that we
are in a day and age where many people have become good computer and device users. However,
these users increasingly find themselves performing, or being required to perform, their activities not in offices at desktops but in real-world settings. In describing the
situation, Kristoffersen and Ljungberg indicate
that the hands of such users are often used to
manipulate physical objects, as opposed to users
in the traditional office setting, whose hands are
safely and ergonomically placed on the keyboard.
(Kristoffersen & Ljungberg, 1999). It is interesting
to observe how ubiquitous computing can come
in handy toward making possible in the natural
setting what would have otherwise been impossible through the desktop computing paradigm.
It is therefore not uncommon to encounter a user
who carries out one or many parallel activities from virtually anywhere at any time while interacting with other user(s) and/or device(s) (Bertini et al., 2003).
However, it is worth noting that there is a world
of difference between the real world setting and
the desktop setting. As we consider the move
from desktop computing (fixed user interfaces)
to the real world settings, various issues and
demands arise when we consider the nature of
tasks the ubiquitous devices/applications (and thus
ubiquitous user interfaces) would be expected to
support and the real world context in which they
will be used.
Consequently, it does turn out that a careful
study of the nature of tasks in ubiquitous computing can make some requirements in the design and
evaluation of ubiquitous applications become more
evident, which forms the basis of this chapter. In
particular, we will describe the nature of tasks
in ubiquitous computing, and then propose and
Background Knowledge
In this section, we describe some of the key concepts relevant to the chapter. In particular, we
describe ubiquitous computing. It should be noted
that in the history of computing, the requirement
to take into consideration the real world context
has arguably never been more critical and pressing
than in this day and age of ubiquitous computing.
After describing ubiquitous computing, we then
focus the description on the concept of context.
Ubiquitous Computing
Weiser coined the term ubiquitous computing (ubicomp) and gave a vision of people and
environments augmented with computational
resources that provide information and services
when and where desired (Weiser, 1991). Dix et al.
define ubicomp as: Any computing activity that
permits human interaction away from a single
workstation (Dix et al., 2004). Since then, there
have been tremendous advances in mobile and
wireless technologies toward supporting the envisioned ubiquitous and continuous computation
and, consequently, ubiquitous applications that
are intended to exploit the foregoing technologies
have emerged and are constantly pervading our
life. Abowd et al. in (Abowd et al., 2000) observe
Context
Context has been defined as any information
that can be used to characterize the situation of
an entity. (Dey, 2000), where an entity refers
to a person, place, or object that is considered
relevant to the interaction between a user and an
application, including the user and applications
themselves. (Dey, 2000). Context entails aspects
such as location, infrastructure, user, environment, entities, and time. The infrastructure could
include technical resources such as server and network capabilities and connections, applications,
and so forth. User includes user data/profile, usage
patterns, and so forth. The environment refers
to the physical condition of the setting and could
include light, temperature, and so on. Entities refer
to people, devices and objects. Time could include
date, time of the day, season, and so on. Abowd
et al. provide in (Abowd et al., 2000) a review of
ubicomp research and summarize context in the
form of five Ws:
Who: As human beings, we tailor our activities and recall events from the past based
on the presence of other people.
What: Perceiving and interpreting human
activity is a difficult problem. Nevertheless,
interaction with continuously worn, context-driven devices will likely need to incorporate
interpretations of human activity to be able
to provide useful information.
Where: In many ways, the where component of context has been explored more than
the others. Of particular interest is coupling
notions of where with other contextual
information, such as when.
When: With the exception of using time as an
index into a captured record or summarizing
how long a person has been at a particular
location, most context-driven applications
are unaware of the passage of time. Of
particular interest is the understanding of
relative changes in time as an aid for interpreting human activity. Additionally, when
a baseline of behavior can be established,
action that violates a perceived pattern would
be of particular interest.
Why: Even more challenging than perceiving what a person is doing is understanding
why that person is doing it.
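As a minimal illustrative sketch (not from the chapter), the five Ws can be carried around as a simple record that a context-aware application consults; all field names and values below are invented assumptions.

from dataclasses import dataclass
from datetime import datetime

@dataclass
class Context:
    who: str            # people present, or the current user
    what: str           # the human activity being performed
    where: str          # location
    when: datetime      # time-related information
    why: str = ""       # inferred intent: usually the hardest aspect to capture

ctx = Context(who="visitor-42", what="viewing an exhibit", where="museum hall B",
              when=datetime(2008, 5, 20, 14, 30), why="guided tour")
print(ctx)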
Nature of Tasks
The interaction of the user with the ubiquitous
device/application can be viewed in at least two
dimensions:
User-ubiquitous application interaction dimension entails tasks in which the user is primarily
interacting with the ubiquitous application and
the device I/O modalities in order to access services such as support services (e.g., emergencies,
help/service), information services (e.g., gathering/recording information, accessing/retrieving
information, sharing information, communicating) and entertainment services (e.g., games,
music, videos). User-ubiquitous device dimension
categorizes tasks that entail the actual handling
of the device (such as holding the device, wearing
the device, attending to the device).
There are situations whereby interaction with
the ubiquitous application, though important, is
not the primary task but rather a secondary/supplementary task. In such a case, such ubiquitous
devices/applications would be used to provide
support/assistance and gather/make available
some resources (such as information) on behalf
of a user who is engaged in another primary task
in the real environment/setting. In fact, the tension between the primary tasks of a ubiquitous
user and the user's interaction with the ubiquitous
device/application can be seen in the literature
(e.g., Pascoe et al., 2000). Notwithstanding the
foregoing, there also are situations in which
interaction with the ubiquitous application is the
primary contributor to the user's accomplishment
of the primary task; interacting with the ubiquitous device/application can be viewed as directly
carrying out the primary task. In this case, the
use of the ubiquitous device/application tends to
be more intimately connected with what the user
is really doing (or intends to achieve) in his/her
embodied/physical self. For instance, where the
user is using the ubiquitous application to inform
him/her about the location he/she is in. However,
it is also worth pointing out that at different time
granularities, the primary task and secondary task
may swap in ubiquitous computing. The foregoing
situations raise challenges that would need to
be taken into consideration when designing and
evaluating (developing) ubiquitous application
user interfaces.
Design Considerations
It is worth noting that the user of a ubiquitous
device often has to focus on more than one task
because s/he might have to interact with the
device itself (which is itself a task) while probably performing another task in the real world
setting (where this could be the primary task or
the secondary task). On one hand, interaction
with the ubiquitous device/application to some
extent requires the user's innate resources (such as attention). On the other hand, the latter task often also requires the user's physical, visual, and cognitive involvement/resources (such as hands, visual attention, mental focus). The user's physical, visual, and cognitive involvement/resources are therefore likely to become constrained. Ideally, the
ubiquitous application (including interactions with
the device) should support the user in carrying out the primary task without tampering with that task. We should minimize distracting the user
from the primary task or disrupting the users
primary task, unless the disruption/distraction
is of genuine (and great) value or of critical importance. In the words of Holland and Morse: It
is important that the critical focus of the user's
attention be directed towards the primary task
at hand (Holland & Morse, 2001). In adopting
ways to meet the requirement, it is also critical
to consider the status of a user's attention in the
timing of the tasks on the ubiquitous device. Borrowing from a research effort on guidelines for
using agents and direct manipulation (Horvitz,
1999), it is important to consider the costs and
benefits of deferring action to a time when action
will be less distracting. Where necessary, the
ubiquitous application should enable/allow the
user to temporarily halt a task on the device and
to resume the interrupted task.
One of the challenges with a new or innovative
technology/application is that its users may try
to use it in situations or ways the designers and
Activity Theory
The activity theory model provides a broad conceptual framework for describing the structure,
development, and context of computer-supported
activities. It was developed by the Russian psychologists Vygotsky, Rubinshtein, Leont'ev and others (Kaptelinin et al., 1995; Leont'ev, 1978).
Activity theory is comprised of a set of basic
design of a context-aware mobile learning application. Pinto and José (2006) propose ActivitySpot,
a ubicomp framework for localized activities
such as activities that are strongly related to a
specific physical environment and that only can
be achieved there. The framework defines a conceptual model that has been inspired by activity
theory model. In their attempt to develop a context
model for ubiquitous computing, Kaenampornpan
and O'Neill (2004) have relied extensively on
activity theory. They give the following three
reasons for using activity theory:
Situated Action
The situated action model emphasizes the emergent, contingent nature of human activity, that
is, the way activity grows directly out of the
particularities of a given situation. The focus is
situated activity or practice. The situated action
model does not underestimate the importance
of artifacts or social relations or knowledge or
values, but rather its true locus of inquiry is the
everyday activity of persons acting in [a] setting (Lave, 1988). The world of computing has
always faced contextual issues. However, the
current wide adoption and usage of ubiquitous
computing (e.g., cellphones, personal digital assistants, etc.) have made contextual issues arguably more prominent than during any other time
in history of computing. The main reason is that
the ubiquitous devices and applications primarily
are used in real settings and therefore, there is a
need for the ubiquitous devices and applications
to support situated activities. The basic unit of
analysis in situated action models is the activity
of persons-acting in setting (Lave, 1988). The
unit of analysis is thus neither the individual, nor
the environment, but rather a relation between the
two. The situated action model stresses responsiveness to the environment and the improvisatory
nature of human activity. Users, under the influence of the environment, may use or attempt to
use ubiquitous technologies/applications in new
ways that even the designers had not anticipated.
The situated action model, therefore, can be suitable for capturing and accommodating such user
improvisations. On the same note, the situated
Distributed Cognition
Flor et al. in (Flor et al., 1991) describe distributed
cognition as a new branch of cognitive science
devoted to the study of: the representation of
knowledge both inside the heads of individuals
and in the world ...; the propagation of knowledge
between different individuals and artifacts ...; and
the transformations which external structures
undergo when operated on by individuals and
artifacts.... By studying cognitive phenomena in
this fashion it is hoped that an understanding of
how intelligence is manifested at the systems level,
as opposed to the individual cognitive level, will
be obtained. It should be observed that ubiquitous
devices and applications are primarily used within
real settings/context (the world). Therefore, it is
important that knowledge pertaining to the real
settings be modeled. As has been the case with
the desktop computing applications, knowledge
about the target user too is important in the arena
of ubiquitous computing. On the same note, it is
worth noting that the users of ubiquitous technolo-
gies tend to operate in real settings and, therefore, often have to simultaneously interact with
other people/individuals and artifacts. Knowledge
pertaining to such artifacts and such other individuals is, therefore, important to the design and
development of the ubiquitous applications and
devices being used. In distributed cognition, the
unit of analysis is a cognitive system composed
of individuals and the artifacts they use (Flor et
al., 1991). Distributed cognition moves the unit
of analysis to the system and finds its center of
gravity in the functioning of the system (Nardi,
1996). In a manner similar to traditional cognitive
science (Newell et al., 1972), distributed cognition is concerned with structure (representations
inside and outside the head) and the transformations these structures undergo. However, the
difference is that cooperating people and artifacts are the focus of interest, not just individual
cognition in the head (Nardi, 1996). Another
aspect that distributed cognition emphasizes is
the understanding of the coordination among
individuals and artifacts. The work reported in
(Spinelli et al., 2002) is an investigation of users
involved in carrying out collaborative activities,
locally distributed and mobile. The investigation
utilizes the distributed cognition framework and
contextual design for representing and analyzing
the work observed. By using distributed cognition
to model cognition across users and artifacts, the
study could look at collaboration from an innovative point of view that highlights how context and
external resources impact collaboration. In (Laru & Järvelä, 2003), the authors address an effort that
has used distributed cognition and collaborative
learning in order to develop a pedagogical model
of mobile learning. UbiLearn is a ubiquitous and
mobile learning project (Laroussi, 2004). Its work
is based on two mobile learning viewpoints; the
first is the technical oriented perspective which
focuses on a traditional behaviouristic educational paradigm as given and tries to represent
or to support it with mobile technologies. The
second is the pedagogical socio-cognitive and
Situated Interaction
It may be resourceful to highlight an interaction
paradigm, namely situated interaction that has
been defined based on and motivated by some
of the above models. Situated interaction refers
to the integration of human-computer interaction
and the users situation in a particular working
context in a mobile environment (Hewagamage
& Hirakawa, 2000). This combination perceives
that the interaction is not only a function of device,
but also strongly dependent on the users activities and context in which the device is used. The
concept of situated interaction can be discerned
in, and may be said to have been inspired by, both
the situated action model and the activity theory
Evaluation Considerations
Conventional user-centered methods could be appropriately exploited in the development process
of ubiquitous applications. On the same note, some
of the traditional usability evaluation techniques
might become useful when adapted for ubiquitous
computing. For instance, there are several efforts
toward realizing usability principles and heuristics for the design and evaluation of ubiquitous
environments/systems, such as ambient heuristics
(Mankoff et al., 2003) and groupware heuristics
(Baker et al., 2001). We have also proposed a review of usability principles for mobile computing (Bertini et al., 2005) and developed usability heuristics appropriate for evaluation in mobile computing (Bertini et al., 2006).
Much traditional understanding of work organizations has its roots in Fordist and Taylorist
models of human activity, which assume that
human behavior can be reduced into structured
tasks. HCI has not been spared from this either. In
particular, evaluation methods in HCI have often
relied on measures of task performance and task
efficiency as a means of evaluating the underlying
application. However, it is not clear whether such
measures can be universally applicable when we
consider the current move from rather structured
tasks (such as desktop activities) and relatively
stable settings to the often unpredictable ubiquitous settings. Such primarily task-centric evaluation may, therefore, not be directly applicable
to the ubiquitous computing domain. It would be
interesting to consider investigating methods that
go beyond the traditional task-centric approaches
(Abowd & Mynatt, 2000). It is also worth keeping in mind that tasks on the ubiquitous device
(and elsewhere) tend to be unpredictable and
opportunistic.
In this era of ubiquitous computing, the real
need to take into account the real-world context
has become more crucial than at any other time
in the history of computing. Although the concept of context is not new to the field of usability
(e.g., the ISO 9241 guidelines propose a model that takes context into consideration), evaluation methods have in practice found it challenging to adequately integrate the entire context during the evaluation process. There are various
ways to address this challenge.
One option is the employment of observational
techniques (originally developed by different
disciplines) to gain a richer understanding of
context (Abowd et al., 2002; Dix et al., 2004).
Main candidates are ethnography, cultural probes,
and contextual design. Another option is to use
the Wizard-of-Oz technique, other simulation
techniques, or even techniques that support the
participants' imagination. Prototyping, too, presents an avenue for evaluating ubiquitous computing applications.
Ethnography
Ethnography is an observational technique that
uses a naturalistic perspective; that is, it seeks to
understand settings as they naturally occur, rather
than in artificial or experimental conditions, from
the point of view of the people who inhabit those
settings, and usually involves quite lengthy periods of time at the study site (Hughes et al., 1995).
Ethnography involves immersing an individual
researcher or research team in the everyday activities of an organization or society, usually for
a prolonged period of time. Ethnography is a well
established technique in sociology and anthropology. The principle virtue of ethnography is its
ability to make visible the real world aspects
of a social setting. It is a naturalistic method
relying upon material drawn from the first-hand
experience of a fieldworker in some setting. Since
ubiquitous devices and applications are mainly
used in real-world settings, ethnography has some relevance to ubiquitous computing. The
aim of ethnography is to see activities as social
actions embedded within a socially organized
domain and accomplished in and through the
day-to-day activities of participants (Hughes
et al., 1995). Data collected/gathered from an
ethnographic study allows developers to design
systems that take into account the sociality of
interactions that occur in the real world. The
work by Crabtree et al. (2006)
shows how ethnography is relevant to and can
be applied in the design of ubiquitous computing
applications. The ultimate aim of the effort is to
foster a program of research and development
that incorporates ethnography into ubiquitous
computing by design, exploiting the inherent
features of ubiquitous computing applications to
complement existing techniques of observation,
data production, and analysis. While describing
Cultural Probes
Cultural probes (Gaver et al., 1999a) represent a
design-led approach to understanding users that
stresses empathy and engagement. They were
initially deployed in the Presence Project (Gaver
et al., 1999b), which was dedicated to exploring
the design space for the elderly. Gaver has subsequently argued that in moving out into everyday
life more generally, design needs to move away
from such concepts as production and efficiency
and instead focus on and develop support for ludic
pursuits. This concept is intended to draw attention to the playful character of human life,
which might best be understood in a post-modern
sense. Accordingly, the notion of playfulness is
not restricted to whatever passes as entertainment,
but is far more subtle and comprehensive, directing
attention to the highly personal and diverse ways
in which people explore, wonder, love, worship,
and waste time together and in other ways engage
in activities that are meaningful and valuable
to them (Gaver, 2001). This emphasis on the ludic
derives from the conceptual arts, particularly the
influence of Situationist and Surrealist schools of
Contextual Inquiry
Contextual inquiry (Holtzblatt et al., 1993) is
a method that aims at grounding design in the
context of the work being performed. Contextual
inquiry recommends the observation of work as
it occurs in its authentic setting, and the usage
of a graphical modeling language to describe the
work process and to discover places where technology could overcome an observed difficulty. It
is worth noting that in its application, contextual
inquiry does combine various methods such as
field research and participatory design methods
(Muller et al., 1993) in order to provide designers with grounded and rich/detailed knowledge
of user work. Contextual inquiry is one of the
parts of what is referred to as contextual design.
Contextual design is a design approach that was
developed by Holtzblatt and Beyer (Beyer et al.,
Prototypes
In the formative stages of the design process, low
fidelity prototypes can be used. However, as the
design progresses, user tests need to be introduced.
In the context of ubiquitous computing, user tests
will not only require the inclusion of real users,
real settings, and device interaction tasks, but also
real or primary tasks (or realistic simulations of the
real tasks and of the real settings). As mentioned
previously, realistic simulations of the real tasks
and of the real settings could be adopted as an
alternative. Therefore, there would be the need
to provide a prototype that supports the real tasks
and real settings or their simulations. This does
imply some cost in the design process because the
prototype at this level would need to be robust
and reliable enough in order to support primary
tasks in real settings or the simulations. In fact,
the technology required to develop ubiquitous
computing systems is often on the cutting edge.
Finding people with corresponding skills is difficult. As a result, developing a reliable and robust
ubiquitous computing prototype or application
is not easy (Abowd & Mynatt, 2000; Abowd et
al., 2002).
Choice of Methods
We have described several methods appropriate
for evaluation in ubiquitous computing. One of
the major issues is deciding which of the methods
to choose. Of such evaluation methods, one may
want to know which one(s) will be most suitable
for a certain ubicomp application. Considering
evaluation methods in general (not just evaluation
methods for ubicomp), Dix et al. (2004) indicate that "there are no hard and fast rules in this; each method has its particular strengths and weaknesses, and each is useful if applied appropriately." They point out, however, that there are various factors worth taking into consideration when choosing evaluation method(s), namely:
Choice of Models
Fithian et al. (2003) observe that mobile and ubiquitous computing applications lend themselves well to three models: situated action, activity theory, and distributed cognition. As for which of these models is most suitable for a certain mobile or ubiquitous application, the authors say that the choice depends largely on the kind of application and on which aspects of design are in the limelight. They recommend
that the choice be based on a critical analysis of
the users and their knowledge, the tasks, and the
application domain.
Fithian et al. (2003) also note that basing an entire evaluation on time measurements alone can be very limiting, especially if the
tasks are benchmarked in a situated action setting.
Although time measurements are important, other
performance measures that may be much more
useful for evaluating such ubicomp applications
include interruption resiliency, interaction suspensions, interaction resumptions, and so forth.
Interestingly, these richer metrics require a
far richer model of what is going on than simpler
end-to-end timing. This reinforces the message
on other areas of evaluation that understanding
mechanism is critical for appropriate and reliable
generalization (Ellis & Dix, 2006).
Classification of Tasks
In their study, Carter et al. (2007) report that respondents felt that current
mobile tools are poorly matched to the user tasks
of meeting and keeping up with friends and
acquaintances. The study observed that location-based technology might assist users in such
tasks. Moreover, the study found that users would
prefer to have cumbersome and repetitive tasks
carried out by their mobile technology artifacts
(e.g., the device, the application, etc.). Carter et al.
also found that planning tasks vary in nature and
detail depending on the formal or informal nature
Characterization of Tasks
In a work which primarily describes the challenges
for representing and supporting users' activity in desktop and ubiquitous interactions, Voida et al. (to appear) characterize activities as follows:
of interaction research which results from considering the consequences of scaling ubiquitous
computing with respect to time. They indicate
that designing for everyday computing requires
focus on the following features of informal, daily
activities:
Like Fithian et al.'s (2003) metrics described above, these properties all emphasize the fact that activities in a ubiquitous interaction are more fragmented and require more divided attention than archetypal office applications, although arguably these were never as simple as the more simplistic models suggested. However, the first point also suggests that at a high level there may be more continuity, and this certainly echoes Carter et al.'s (2007) study, with the importance of informal gathering and communication as a life-long goal.
Summary
As a way of emphasizing the relevance of the
theme of this chapter, it is worth observing that
there is a growing interest within the research
community regarding tasks in ubiquitous computing. Therefore, it comes as no surprise that
References
Abowd, G. D., & Mynatt, E. D. (2000). Charting
Past, Present, and Future Research in Ubiquitous
Computing. ACM Transactions on Computer-Human Interaction, 7(1), 29-58.
Abowd, G. D., Mynatt, E. D., & Rodden, T.
(2002). The Human Experience. IEEE Pervasive
Computing, 1(1), 48-57.
Abowd, G. D., Hayes, G. R., Iachello, G., Kientz,
J. A., Patel, S. N., & Stevens, M. M. (2005).
Prototypes and paratypes: Designing mobile
and ubiquitous computing applications. IEEE
Pervasive Computing, 4(4), 67-73.
Baker, K., Greenberg, S., & Gutwin, C. (2001).
Heuristic Evaluation of Groupware Based on
the Mechanics of Collaboration. In Proceedings
of the 8th IFIP International Conference on
Engineering for Human-Computer Interaction
(pp. 123-140).
Bardram, J. E. (2005). Activity-Based Computing: Support for Mobility and Collaboration in
Ubiquitous Computing. Personal and Ubiquitous
Computing, 9(5), 312-322.
Bardram, J. E., & Christensen, H. B. (2004).
Open Issues in Activity-Based and Task-Level
Computing. Paper presented at the Pervasive'04
Workshop on Computer Support for Human Tasks
and Activities (pp. 55-61), Vienna, Austria.
Berry, M., & Hamilton, M. (2006). Mobile Computing, Visual Diaries, Learning and Communication: Changes to the Communicative Ecology of
Design Students Through Mobile Computing. In
Proceedings of the eighth Australasian Computing
Education Conference (ACE2006)Conferences
Crabtree, A., Benford, S., Greenhalgh, C., Tennent, P., Chalmers, M., & Brown, B. (2006).
Supporting Ethnographic Studies of Ubiquitous
Computing in the Wild. In Proceedings of the
sixth ACM conference on Designing Interactive
Systems (pp.60-69). ACM Press.
Dignum, V., Meyer, J-J., Weigand, H., & Dignum,
F. (2002a). An Organizational-oriented Model for
Agent Societies. In Proceedings of the International Workshop on Regulated Agent-Based Social
Systems: Theories and Applications (RASTA'02),
at AAMAS, Bologna, Italy.
Dignum, V., Meyer, J-J., Dignum, F., & Weigand,
H. (2002b). Formal Specification of Interaction in
Agent Societies. Paper presented at the Second
Goddard Workshop on Formal Approaches to
Agent-Based Systems (FAABS), Maryland.
Dignum, V. (2004). A model for organizational
interaction: based on agents, founded in logic.
Unpublished doctoral dissertation, Utrecht University.
Dix, A. (2002). Beyond intention - pushing
boundaries with incidental interaction. In Proceedings of Building Bridges: Interdisciplinary
Context-Sensitive Computing, Glasgow University, Sept 2002. http://www.hcibook.com/alan/topics/incidental/
Dix, A., Finlay, J., Abowd, G., & Beale, R. (2004).
Human-Computer Interaction. Prentice Hall
(Third Edition).
Dix, A., Ramduny-Ellis, D., & Wilkinson, J.
(2004b). Trigger Analysis - understanding broken tasks. Chapter 19 in The Handbook of Task
Analysis for Human-Computer Interaction. D.
Diaper & N. Stanton (Eds.) (pp.381-400). Lawrence Erlbaum Associates.
Ellis, G., & Dix, A. (2006). An explorative analysis
of user evaluation studies in information visualisation. Proceedings of the 2006 Conference
on Beyond Time and Errors: Novel Evaluation
Li, Y., & Landay, J. A. (2006). Exploring ActivityBased Ubiquitous Computing Interaction Styles,
Models and Tool Support. In Proceedings of the
Workshop What is the Next Generation of Human-Computer Interaction? at the ACM SIGCHI
Conference on Human Factors in Computing
Systems.
Liu, K. (2000). Semiotics in Information Systems
Engineering. Cambridge University Press.
Liu, K., Clarke, R., Stamper, R., & Anderson, P.
(2001). Information, Organisation and Technology: Studies in Organisational Semiotics 1.
Kluwer, Boston.
Liu, K., & Harrison, R. (2002). Embedding
Softer Aspects into the Grid. Poster at EUROWEB 2002 - The Web and the GRID: from
e-science to e-business (pp. 179-182). The British
Computer Society.
Liu, K. (2003). Incorporating Human Aspects into
Grid Computing for Collaborative Work. Paper
presented at the ACM International Workshop on
Grid Computing and e-Science. San Francisco,
CA.
Liu, L., & Khooshabeh, P. (2003). Paper or interactive? A study of prototyping techniques for ubiquitous computing environments. In Proceedings of
the ACM SIGCHI Conference on Human Factors
in Computing Systems, extended abstracts, (pp.
130-131). New York: ACM Press.
Mäkelä, K., Salonen, E-P., Turunen, M., Hakulinen, J., & Raisamo, R. (2001). Conducting
a Wizard of Oz Experiment on a Ubiquitous
Computing System Doorman. In Proceedings
of the International Workshop on Information
Presentation and Natural Multimodal Dialogue
(pp. 115-119). Verona.
Mankoff, J., & Schilit, B. (1997). Supporting
knowledge workers beyond the desktop with
PALPlates. In Proceedings of the ACM SIGCHI
Conference on Human Factors in Computing
This work was previously published in Advances in Ubiquitous Computing: Future Paradigms and Directions, edited by S.
Mostefaoui, Z. Maamar, and G. Giaglis, pp. 171-200, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 2.32
Task Ontology-Based
Human-Computer Interaction
Kazuhisa Seta
Osaka Prefecture University, Japan
INTRODUCTION
In ontological engineering research field, the
concept of task ontology is well-known as a
useful technology to systemize and accumulate
the knowledge to perform problem-solving tasks
(e.g., diagnosis, design, scheduling, and so on).
A task ontology refers to a system of a vocabulary/concepts used as building blocks to perform
a problem-solving task in a machine readable
manner, so that the system and humans can collaboratively solve a problem based on it.
The concept of task ontology was proposed
by Mizoguchi (Mizoguchi, Tijerino, & Ikeda,
1992, 1995) and its validity is substantiated by
development of many practical knowledge-based
systems (Hori & Yoshida, 1998; Ikeda, Seta, &
Mizoguchi, 1997; Izumi &Yamaguchi, 2002;
Schreiber et al., 2000; Seta, Ikeda, Kakusho, &
Mizoguchi, 1997). He stated:
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
BACKGROUND
Necessity of Building Task
Ontologies as a Basis of HCI
It is extremely difficult to develop an automatic
problem-solving system that can cope with a
variety of problems. The main reason is that the
knowledge for solving a problem varies considerably depending on the nature of the problems. This
engenders a fact that is sometimes ignored: Users
have more knowledge than computers. From this
point of view, the importance of a user-centric
system (DeBells, 1995) is now widely recognized
by many researchers. Such a framework follows a collaborative, problem-solving-based approach that establishes harmonious interaction between human and computer.
Many researchers implement such a framework
with a human-friendly interface using multimedia
network technologies. Needless to say, it is important not only to apply the design principles of
the human interface but also principle knowledge
for exchanging meaningful information between
humans and computers.
Systems have been developed to employ research results of the cognitive science field in order
to design usable interfaces that are acceptable to
humans. However, from a content-oriented view, the system is required to understand the meaning of humans' cognitive activities in order to capture a human's mind.
We, therefore, need to define a cognitive model,
that is, to define the cognitive activities humans
perform in a problem-solving/decision-making
process and the information they infer, and then
systemize them as task ontologies in a machine
understandable manner in order to develop an
effective human-computer interaction.
learning activities and a cognitive activity in connection with problem-solving activities. Thereby,
a conceptual system is constructed that reflects
the task structure of PSOL. For example, typical
metacognition activities that a learner performs
in PSOL, such as "Monitor knowledge state" and "Monitor learning plan," are systematized as lower concepts of metacognition activities in the "Observe" activity.
Figure 5 shows a conceptual definition of an
act that identifies a possible cause of why a plan
is infeasible. All the concepts in Figure 5 have
a conceptual definition in a machine readable
manner like this, thus, the system can understand
what the learner tries to do and what information
he/she needs.
The definition of the cause identification activity specifies that: the actor of the activity is a learner; the learner's awareness of infeasibility becomes an input (in%symptom in Figure 5); and the lower plan of the target plan that the learner is currently trying to make feasible becomes reference information (in%reference in Figure 5). Moreover, this cognitive activity stipulates that the learner's awareness of the causes of infeasibility is the output (out%cause in Figure 5). The definition also specifies that the causes of the infeasibility include (axioms in Figure 5): that the sufficiency of the target plan is not confirmed (cause1 in Figure 5); that the feasibility of a lower plan, a finer-grained plan that contributes to realizing the target plan, is not confirmed (cause2 in Figure 5); and that the target plan is not specified (cause3 in Figure 5). Based on this machine-understandable definition, the system can suggest the candidate causes of infeasibility of the object plan, and the information the learner should focus on.
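To make the idea of such a machine-readable conceptual definition more concrete, the following is a minimal sketch, in Python, of how an activity like the cause identification above might be encoded as data. The class name, slot names, and axiom strings are illustrative assumptions; the actual PSOL task ontology uses its own formalism and vocabulary.

```python
# Hypothetical sketch of a machine-readable activity definition; the slot and
# axiom wording only loosely mirrors Figure 5 and is not the authors' format.
from dataclasses import dataclass, field
from typing import List

@dataclass
class CognitiveActivity:
    name: str
    actor: str                      # who performs the activity
    inputs: List[str]               # information the activity consumes
    references: List[str]           # information consulted, not consumed
    outputs: List[str]              # information the activity produces
    axioms: List[str] = field(default_factory=list)  # candidate explanations

identify_cause = CognitiveActivity(
    name="Identify possible cause of plan infeasibility",
    actor="learner",
    inputs=["awareness of infeasibility of the target plan"],   # in:symptom
    references=["lower plans of the target plan"],              # in:reference
    outputs=["awareness of causes of infeasibility"],           # out:cause
    axioms=[
        "sufficiency of the target plan is not confirmed",      # cause 1
        "feasibility of a lower plan is not confirmed",         # cause 2
        "the target plan is not specified",                     # cause 3
    ],
)

# Because the definition is explicit, a system can enumerate the axioms as
# candidate causes and point the learner to the information to focus on.
for cause in identify_cause.axioms:
    print("candidate cause:", cause)
```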
Making this PSOL task ontology the basis of a system allows it to offer useful information in situations that call for appropriate decision-making. This is one of the strong advantages of using the PSOL task ontology.
Figure 6. Interactive navigation based on problem solving oriented learning task ontology
Future trends
Ontology-Aware System
Systems that support users in performing intelligent tasks based on an understanding of ontologies are called ontology-aware systems (Hayashi, Tsumoto, Ikeda, & Mizoguchi, 2003). Systemizing ontologies contributes human-oriented theories and models that enhance systems' abilities of explanation and reasoning. Furthermore, from the viewpoint of system development, building systems with explicit ontologies would enhance their maintainability and extensibility. Therefore, future work in
this field should continue developing systems that
integrate ontology and HCI more effectively.
CONCLUSION
This article introduced a task ontology based
human computer interaction framework and
discussed various related issues. However, it is
still difficult and time consuming to build high
quality sharable ontologies that are based on the
analysis of users' task activities. Thus, it is important to continue building new methodologies for analyzing users' tasks. This issue should be
carefully addressed in the future, and we hope
more progress can be achieved through collaboration between researchers in the fields of ontology
engineering and human computer interaction.
References
Brown, A. L., Bransford, J. D., Ferrara, R. A., &
Campione, J. C. (1983). Learning, remembering,
and understanding. In E. M. Markman, & J. H.
Flavell (Eds.), Handbook of child psychology (4th
ed.), Cognitive development, 30 (pp. 515-629).
New York: John Wiley & Sons.
KEY TERMS
Attentional Capacity: Cognitive capacity divided and allocated to perform cognitive task.
This work was previously published in the Encyclopedia of Human Computer Interaction, edited by C. Ghaoui, pp. 588-596,
copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 2.33
Abstract
This article considers the affordances of social
networking theories and tools to build new and
effective e-learning practices. We argue that
connectivism (social networking applied to
learning and knowledge contexts) can lead to a
reconceptualization of learning in which formal,
nonformal, and informal learning can be integrated as to build potentially lifelong learning
activities to be experienced in personal learning
environments. In order to provide a guide in the
design, development, and improvement both of
personal learning environments and in the related
learning activities, we provide a knowledge flow
model highlighting the stages of learning and the
related enabling conditions. The derived model is
applied in a possible scenario of formal learning
Towards an e-lifelong
learning experience
Formal, nonformal, and informal learning have become subjects of study and experimentation for their potential to be carried out through the network. The pervasiveness of telematic technologies in current learning and knowledge processes justifies hopes of success, and emerging approaches are becoming ever more open, destructured, and nonformalised. According to this vision, formal, informal, and nonformal learning can be seen as an integration of actions and situations that can be developed both in the network and in
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
sarily intentional and can sometimes go unrecognized by the subject himself/herself as knowledge and competence acquisition (Cross, 2006).
According to this perspective, aimed at retrieving and valuing the potentialities embedded in
spontaneous contexts, in this case the network,
the emerging domain of informal e-learning is receiving greater attention because of the spread of social networking practices and technologies. The online transposition of the
social network is nowadays referred to as social
networking phenomena, and it is related to a set
of available technologies and services allowing
individuals to take part in network-based virtual
communities. Social networking is emerging as a
highly natural practice because it is deeply rooted
in our daily behaviour; spontaneous relations,
interactions, and conversations support informal
learning practices, contributing to the creation and
transmission of knowledge. In informal learning
practices, the social behaviour and the support of
technologies converge toward the network; a
network made by people and resources, a social
network, unified by personal needs or common
goals, interaction policies, protocol and rules,
and telematic systems all together favouring
the growth of a sense of belonging to the net
community.
At the same time, the culture of lifelong learning is gaining importance as one of the most
effective answers to face the challenges brought
by the information and knowledge society (Siemens, 2006): the rapid obsolescence of professional knowledge and skills requires updating
and continuous training as well as recurring and
personalised learning. Under these premises, the
domain of e-lifelong learning is being configured
as a sociotechnical system in which knowledge
and learning are both the form and the content
as for their social and relational meaning. The
subject undergoing an e-lifelong-learning experience crosses this territory doing practices and
strategies of continuous interconnection and com-
Affordances of learning in a
connectivist Environment
Scenarios that are becoming ever more common highlight that, through informal channels, new
learning and knowledge management spaces are
more easily enabled, thanks to people and their
Table 1. Social network characteristics (Adapted from Pettenati & Ranieri, 2006b)
Goal: relation based on individual interests, debate, and confrontation on specific topics; multiplicity and heterogeneity of joining interests and motivations
Belonging: spontaneous and autonomous motivation
Duration: nondefined
Cohesion and enabling factors: high level of trust (relevance of reputation), sense of responsibility, high technological skills, distributed reflexivity and evaluation (nonautonomous, nor heteronymous, but socially spread)
Type of relation: share/evaluate
This radical change in the user's role is before our eyes; between 2000 and 2004, the relevant literature reported research investigating the possible use of blog tools in education without being able to provide credible answers. At that time, the use of the tool had not yet entered the praxis and communicative habits of people, and the potential of the blog for metareflection and self-evaluation had not yet clearly emerged (Barrett, 2004; Barrett, 2006; Banzato, 2006). More recently,
the exponential growth of blog use, coupled with
the syndication technologies, which added the
relational dimension to these tools, made the
blog one of the most important social networking
assets used at the moment. This example serves to highlight that the determining variable in the shift was not technology alone, but also the spontaneous change in the practice of use, together with the crossing of the critical diffusion threshold that could make the tool an important instrument in (formal, informal, nonformal) learning processes.
Another relevant mark of this change in the user's role is related to recent news; in December 2006, the whole blogosphere buzzed about TIME magazine's cover title: TIME's Person of the Year for 2006 is you. According to Lev Grossman (2006), author of this news, the explosive growth and the enormous influence of user-generated content (such as blogs, video sharing sites, etc.) should be read in a new way: "But look at 2006 through a different lens and you'll see another story, one that isn't about conflict or great men. It's a story about community and collaboration on a scale never seen before. It's about the cosmic compendium of knowledge Wikipedia and the million-channel people's network YouTube and the online metropolis MySpace. It's about the many wresting power from the few and helping one another for nothing and how that will not only change the world, but also change the way the world changes."
to this topic at the eStrategy conference on ePortfolio (Baker, 2006) is but another evidence
of the rising importance of studying this field.
In the context of a knowledge society, where
being information literate is critical, the portfolio
can provide an opportunity to demonstrate one's
ability to collect, organise, interpret and reflect on
documents and sources of information. It is also
a tool for continuing professional development,
encouraging individuals to take responsibility for
and demonstrate the results of their own learning.
Furthermore, a portfolio can serve as a tool for
knowledge management, and is used as such by
some institutions.
Table 2 lists the following tools: blog guides; social tagging (folksonomy); social bookmarking; Web syndication and Web feed management; tag clouds; wikis; collaborative real-time editing; content aggregation and management, mashup (Web application hybrid); instant messaging; podcasting.
Enabling Conditions
The model in Fig. 1 envisages five subsequent
stages (or knowledge processes) that are at the
heart of the schema. The processes are framed by
an external layer, where the enabling conditions
that are relevant for the knowledge processes
development are highlighted:
Figure 1. Knowledge process in a connectivist environment; stages of the learning experience and
enabling conditions
it is still the awareness of the positive interaction with others that sustains mutual
understanding and social grounding; in this
context the (often tacit) agreement of respect,
use of reputation feedback, and respect of a
common socioquette, contribute to build a
positive social climate, making the online
relational environment a trusted environment.
b.
Students are then given the evaluation assignment, to be carried out in a post-class phase within 15 days. The assignment is an individual, journalist-style article for a possible local newspaper, reporting the case of the role play in light of the analysis of juvenile behaviour in the urban suburbs. Students are asked to post the assignment in their personal blog, tagging it with the course name and class topic.
Conclusion
In this article we tried to provide our interpretation of the current sociotechnical educational
system shaped by technologies and practices
of the knowledge society to locate the role of
learning and learners in a lifelong perspective.
We believe that both users attitudes and available
technologies are mature enough to let us envisage
that each network user could easily engage in a
personal lifelong learning experience if properly led by appropriate methodologies, and sustained by suitably designed and developed personal
learning environments.
To this end we provided a model to schematize the knowledge flow occurring during an effective learning experience in a connectivist environment. The purpose of this model is twofold: on one side, it can be used by personal learning-environment designers as a guideline for checking whether all phases and enabling conditions are supported by the integrated tools; on the other side, it can be used by instruc-
Acknowledgment
We want to thank Prof. Dino Giuli for giving us
the possibility to carry on our research in this
domain. We are also very grateful to Prof. Antonio Calvani for keeping on asking us stimulating
scientific research questions, pushing us to work in
order to find possible answers. Moreover, we want
to thank the colleagues of the Educational Technology Laboratory and Telematics Technologies
Laboratory for the fruitful discussions occurring
both in formal and nonformal settings.
References
Baker, A. (2006). E-strategies for empowering
learners. ePortfolio Conference, Oxford, England,
11-13 October 2006. Retrieved January 2007, from http://www.eife-l.org/news/ep2006
Banzato, M. (2006). Blog e didattica. Dal web
publishing alle comunità di blog per la classe in
rete. TD Tecnologie Didattiche, 38(2).
Barrett, H. (2004). My online portfolio adventure. Retrieved January 2007, from http://electronicportfolios.org/myportfolio/versions.html
Barrett, H. (2006). Authentic assessment with electronic portfolios using common software and Web 2.0 tools. Retrieved January 2007, from http://electronicportfolios.org/web20.html
Fallows, J. (2006). Homo conexus. Technology Review, July 2006. Retrieved January 2007, from http://www.technologyreview.com/read_article.aspx?id=17061&ch=infotech
Norris, D., Mason, J., & Lefrere, P. (2003). Transforming e-knowledge - A revolution in the sharing
of knowledge. Society for College and University
Planning Ann Arbor. Michigan.
Fini, A. (2006, October). Nuove prospettive tecnologiche per l'e-learning: dai Learning Object
ai Personal Learning Environment. Master's thesis
in educational sciences. University of Florence.
This work was previously published in the International Journal of Web-based Learning and Teaching Technologies, edited
by L. Esnault, Volume 2, Issue 3, pp. 42-60, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an
imprint of IGI Global).
Chapter 2.34
User-Centered Design
Principles for Online Learning
Communities:
Abstract
This chapter examines current research on online
learning communities (OLCs), with the aim of
identifying user-centered design (UCD) principles
critical to the emergence and sustainability of distributed communities of practice (DCoPs), a kind
of OLC. This research synthesis is motivated by
the authors' involvement in constructing a DCoP
dedicated to improving awareness, research, and
sharing data and knowledge in the field of governance and international development. It argues
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Introduction
Increasingly, distributed communities of practice
(DCoPs) are attracting attention for their potential
to enhance learning, to facilitate information
exchange, and to stimulate knowledge creation
across cultural, geographical, and organizational
boundaries. Research shows that the effect of DCoPs on their members is positive (Daniel, Sarkar, & O'Brien, 2004a; Daniel, Poon, & Sarkar, 2005; Schwier & Daniel, Chapter II, this volume). Their allure aside, experience indicates that they may not emerge or flourish even in the presence of demand from users. In fact, the process of constructing a DCoP is not well understood, and factors influencing sustainability merit further research attention.
This chapter introduces the authors' involvement in the development of a DCoP. The DCoP in
question is the Governance Knowledge Network
(GKN). This project began in 2001 with the aim
of assessing the interest of academics and practitioners in Canada to develop an online learning
community (OLC) for systematizing the exchange
of information at the intersection of governance
and international development
(Daniel et al.,
2004a).
The surveys of key Canadian stakeholders
in the project indicated considerable data existed,
and recommended the proposed GKN to: actively
engage in dissemination and archiving of data
not widely accessible in the public sphere, profile
community members, promote social network
building and collaboration, and inform members
of current events and opportunities.
Following the identification of the demand and
interest, the second stage of our research involved
the development of a GKN prototype. In this
uncharted course, we were guided by enabling
technology and other DCoP models (World Bank,
UNDP).1 We also turned to research to inform our
efforts on how to effectively sustain the project.
Our synthesis of research in the area identified
promising insights from studies we refer to as
the sociotechnical approach. As applied to DCoP,
the sociotechnical approach aims at understanding people's interaction with technology and the
ensuing communication, feedback, and control
mechanisms necessary for people to take ownership of the design and implementation process.
This chapter focuses on this interaction, as it is
germane to the development and sustainability of
the GKN, in particular, and DCoP more generally.
The chapter is divided into the following sections.
The next section outlines relevant research on
DCoPs and the sociotechnical approach. We next
provide an overview of the GKN OLC project
and present key results from the research that
informed the design of the GKN. A discussion
of various human and technology elements we
consider critical to the initiation, development,
growth, and sustainability of the GKN follows,
and in the next section, we revisit the key human
and technology design issues. Finally, we conclude
the chapter and present UCD principles for OLCs
drawn from the sociotechnical approach.
Related Work
Daniel, Schwier, and McCalla (2003b) observe that
online learning communities have attracted diverse disciplinary interest, but that it is possible to
identify two dominant perspectives: technological determinism and social constructivism. The
basic tenet of the technology determinism research
is that technology shapes cultural values, social
structure, and knowledge. In technology-related
fields, such as computer science and information
systems, significant attention has been given to
understanding technological developments and
how these changes influence social structures.
The social constructivism perspective, on the
other hand, posits that knowledge and world views
are created through social interaction. Social
constructivism theories have inspired research
on knowledge construction within communities
of practice. Lave and Wenger (1991) assert that
a society's practical knowledge is situated in
In Table 1, we simplify this diversity by distinguishing between formal and informal online
learning communities.
Formal online learning
communities have explicit learning goals and
evaluation criteria. Examples would include
courses/programs offered by education institutions or companies (McCalla, 2000; Schwier,
2001). In contrast, informal OLCs achieve learning outcomes through social learning. Examples
would include distributed communities of practice
(Daniel, O'Brien, & Sarkar, 2004b). A unique
feature of DCoPs is the absence of a teacher or
instructor; rather, in a DCoP, the learners are also
teachers, as members collectively determine the
content and support each other throughout the
learning process. Further differences are contrasted in Table 1.
A growing body of research identifies the
contribution of DCoPs to facilitating information
exchange and knowledge creation, thereby enriching the work of the collective (Brown & Duguid,
1991; Hildreth, Kimble, & Wright, 1998; Lesser
& Prusak, 2000). These positive outcomes have
caught the interest of scholars and knowledge
managers. And yet, there is little comparative
research on the correlates of DCoP performance
or sustainability. We find this surprising, given the
fact that OLCs emerged and proliferated with the
advent of the Internet and then World Wide Web
over a decade ago. The case-study foundations
for comparative research are certainly present,
however (Kalaitzakis, Dafoulas, & Macaulay,
2003; Hartnell-Young, McGuinness, & Cuttance,
Chapter XII, this volume).
Germane to the topic of DCoP emergence
and sustainability is the question of constructability. Can the DCoP features listed in Table
1 be built, or have DCoPs simply migrated from
the temporal to the online world? If we return to
the literature review briefly touched on earlier,
perhaps not surprisingly we would find a different answer to this question depending on the
literature consulted. For example, the sociology
Table 1. Features of online learning communities and distributed communities of practice (adapted from
Daniel et al., 2003b)
Formal: Online Learning Communities
(OLCs)
Membership is explicit and identities are
generally known
Participation is often required
High degree of individual awareness (who
is who, who is where)
Explicit set of social protocols for
interaction
Formal learning goals
Possibly diverse backgrounds
Low shared understanding of domain
Loose sense of identity
Strict distribution of responsibilities
Easily disbanded once established
Low level of trust
Lifespan determined by extent to which
goals are achieved
Pre-planned enterprise and fixed goals
suited to understanding the development and sustainability of DCoPs. In particular, the relevance
of a sociotechnical approach to the evolution of
the GKN project results from the attention to,
and monitoring of, feedback loops to inform
design and subsequent operation. For example, a
sociotechnical approach cautions against a "build it and wait till they come" approach, and favors
a co-design process that enables potential users
to define their goals and areas of concerns. Joint
construction can be regarded as fostering a shared
identity and building networks necessary for the
development of trust and effective ICT-mediated
interaction.
Design Principles
Discussion
The sociotechnical approach to the development
of a DCoP suggests that human and technical
factors are interlinked and they co-determine the
emergence, evolution, growth, and sustainability
of DCoPs. For practitioners involved in designing
or developing a DCoP, the variables outlined previously will likely provide a useful starting point
for guiding implementation and identifying key
relationships. For researchers, our preliminary
exploration of these relationships creates a number
of hypotheses for future investigation. As these
relationships have a bearing on both practice and
research, we intend to track these relationships
through user evaluations and internal monitoring. We anticipate that these findings will work
toward a framework for comparative research on
Didactic Principles
Sociability Principles
Acknowledgments
The research reported in this chapter has been
supported financially by the Policy Branch of
the Canadian International Development Agency
(CIDA), the Social Sciences and Humanities
Research Council of Canada (SSHRC), and the
International Center for Governance and Development at the University of Saskatchewan.
References
Brook, J., & Boal, I. A. (Eds.). (1995). Resisting
the virtual life: The culture and politics of information. San Francisco: City Lights Books.
Brown, J. S., & Duguid, P. (1991). Organizational
learning and communities of practice: Towards a
Lesser, E. L., & Prusak, L. (2000). Communities of practice, social capital and organizational
knowledge. In E. L. Lesser, M. A. Fontaine, & J.
A. Slusher (Eds.), Knowledge and communities.
Boston: Butterworth Heinemann.
Heylighten, F. (1999). What are socio-technical approaches and systems science? Principia
socio-technical approach Web. Retrieved from
http://pespmc1.vub.ac.be/CYBSWHAT.html
Schuler, D., & Namioka, A. (Eds.). (1993). Participatory design: Principles and practices. Hillsdale,
NJ: Lawrence Erlbaum.
Schwier, R. A. (2001). Catalysts, emphases, and
elements of virtual learning communities. Implication for research. The Quarterly Review of
Distance Education, 2(1), 5-18.
Sclove, R. E. (1995). Democracy and technology.
New York: The Guildford Press.
Shneiderman, B. (1998). Designing the user interface. Strategies for effective human-computer
interaction. Boston: Addison-Wesley.
Smith, M. (1992). Voices from the WELL: The
logic of the virtual commons. Masters thesis,
Department of Sociology, UCLA, USA.
Endnote
1
This work was previously published in User-Centered Design of Online Learning Communities, edited by N. Lambropoulos
and P. Zaphiris, pp. 54-70, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 2.35
INTRODUCTION
The contribution of context information to content
management is of great importance. The increase
of storage capacity in mobile devices gives users
the possibility to maintain large amounts of content
to their phones. As a result, this amount of content
is increasing at a high rate. Users are able to store
a huge variety of content such as contacts, text
messages, ring tones, logos, calendar events, and
textual notes. Furthermore, the development of
novel applications has created new types of content,
which include images, videos, MMS (multi-media
messaging), e-mail, music, play lists, audio clips,
bookmarks, news and weather, chat, niche information services, travel and entertainment information,
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
capture location and information about the hierarchical relationships of different locations).
CONCLUSION
The increasing amount of stored content in
mobile devices and the limitations of physical
mobile phone user interfaces introduce a usability
challenge in content management. The efficient
management of large amounts of data requires
developing new ways of managing content. Stored
data are used by applications which should express
information in a sensible way, and offer users a
simple and intuitive way of organizing, searching, and grouping this information. Inadequate
design of user interface results in poor usability
and makes an otherwise good application useless. Therefore, it is necessary to design and built
context-aware applications.
Issues of usefulness and meaningfulness in
utilizing context metadata need to be further
investigated. Usefulness depends on the type of
metadata. As far as location and proximity are
concerned, it appears that the more time has passed
since the recording of the data, the more accurate
the information needs to be. Furthermore, in the
case of location information, the closer to one's
home or familiar places the data refers to, the
more detailed the information needs to be. A main
usability challenge is the creation of meaningful
context metadata automatically, without users
having to add this information manually. There
exist many ways for automatic recording of information about a users context, but the generated
information is not always meaningful.
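As a rough illustration of the kind of automatic context recording discussed above, the following hypothetical sketch attaches a timestamp and a location label to a newly captured content item, resolving raw coordinates to a familiar place name when possible. The place list, thresholds, and function names are invented for the example and do not describe any particular device platform.

```python
# Hypothetical sketch: attach context metadata to a captured item and turn
# raw location values into more meaningful labels where possible.
import datetime

FAMILIAR_PLACES = {
    # (lat, lon) of frequently visited places -> human-readable label (invented)
    (60.170, 24.940): "home",
    (60.160, 24.900): "office",
}

def nearest_familiar_place(lat, lon, max_delta=0.01):
    """Return a meaningful place label if the fix is close to a known place."""
    for (p_lat, p_lon), label in FAMILIAR_PLACES.items():
        if abs(lat - p_lat) <= max_delta and abs(lon - p_lon) <= max_delta:
            return label
    return None

def annotate(content_id, lat, lon):
    """Attach context metadata (time, location) to a newly captured item."""
    return {
        "content": content_id,
        "time": datetime.datetime.now().isoformat(timespec="seconds"),
        "location": nearest_familiar_place(lat, lon) or f"{lat:.3f},{lon:.3f}",
    }

print(annotate("IMG_0001.jpg", 60.1702, 24.9401))  # location resolves to "home"
```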
Another field that requires further research is
privacy. It seems that users are willing to accept
a loss of privacy, provided that the information
they receive is useful and they have control over
the release of private information. Content management provides users with a safe, easy-to-use,
REFERENCES
Ackerman, M., Darrel, T., & Weitzner, D. J. (2001).
Privacy in context. Human Computer Interaction,
16, 167-176.
Brown, P. J. (1996). The stick-e document: A
framework for creating context-aware applications.
IFIP Proceedings of Electronic Publishing 96,
Laxenburg, Austria, (pp. 259-272).
Brown, P. J., Bovey, J. D., & Chen, X. (1997).
Context-aware applications: From the laboratory
to the marketplace. IEEE Personal Communications, 4(5), 58-64.
Campbell, C., & Tarasewich, P. (2004). What can
you say with only three pixels? Proceedings of
the 6th International Symposium on Mobile Human-Computer Interaction, Glasgow, Scotland,
(pp. 1-12).
Cheverist, K., Smith, G., Mitchell, K., Friday, A.,
& Davies, N. (2001). The role of shared context
in supporting cooperation between city visitors.
Computers & Graphics, 25, 555-562.
Dey, A. K., Abowd, G. D., & Wood, A. (1998).
CyberDesk: A framework for providing self-integrating context-aware services. Knowledge-Based
Systems, 11(1), 3-13.
Dey, A. K. (2001). Understanding and using
context. Personal & Ubiquitous Computing,
5(1), 4-7.
Kaasinen, E. (2003). User needs for location-aware
mobile services. Personal Ubiquitous Computing, 7, 70-79.
Kim, H., Kim, J., Lee, Y., Chae, M., & Choi, Y.
(2002). An empirical study of the use contexts and
usability problems in mobile Internet. Proceed-
KEY TERMS
This work was previously published in the Encyclopedia of Mobile Computing and Commerce, edited by D. Taniar, pp. 116-118,
copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 2.36
Speech-Centric Multimodal
User Interface Design in
Mobile Technology
Dong Yu
Microsoft Research, USA
Li Deng
Microsoft Research, USA
Abstract
Introduction
Copyright 2009, IGI Global, distributing in print or electronic forms without written permission of IGI Global is prohibited.
Figure (architecture of a speech-centric multimodal user interface): speech, pen, keyboard, and other inputs pass through recognizers and semantic parsers (sharing a language model) to produce surface semantics; a discourse manager, supported by a context manager, a semantic model, and a discourse model, produces discourse semantics; and a response manager, guided by a behavior model, generates the responses.
use the system, yet at the same time allow barge-ins and accelerators for expert users to reduce
the overall task completion time.
Third, the system should be designed to allow
easy correction of errors. For example, the system
should provide context sensitive, concise, and effective help. Other approaches include integrating
complementary modalities to improve overall
robustness during multimodal fusion; allowing
users to select a less error-prone modality for a
given lexical content, permitting users to switch
to a different modality when error happens; and
incorporating modalities capable of conveying
rich semantic information.
Fourth, the system's behavior should be consistent internally and with users' previous experiences. For example, a similar dialog flow should be followed and the same terms should be used to fulfill the same task. Users should not have to wonder whether the same words and actions have different meanings in different contexts.
Fifth, the system should not present more information than necessary. For example, dialogues
should not contain irrelevant or rarely needed information, and the prompts should be concise.
While the best practices summarized are common to all speech-centric MUIs, some special
attention needs to be paid to speech modality and
multimodality fusion due to the great variations
of mobile device usage environments. We address
these special considerations next.
Figure 2. Illustration of distributed speech recognition where the actual recognition happens at
the server (e.g., PC)
Figure 3. Distributed speech recognition architecture: speech input is encoded and sent to the server.
Speech feature extraction happens at the server side
Figure 4. Distributed speech recognition architecture alternative: the speech feature extraction happens
on the mobile devices. Only the features are sent to the server
In the past, distributed recognition was unquestionably the dominant approach due to the low CPU speed and the small amount of memory available on mobile devices. Nowadays, although CPU speed and memory size are increasing dramatically, distributed recognition is still the prevailing approach over local recognition, due to the advantages discussed previously.
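A minimal sketch of the two distributed-recognition arrangements shown in Figures 3 and 4 follows. All function names are stand-ins rather than a real speech API; only the division of work between device and server is meant to be informative.

```python
# Hypothetical sketch of two distributed speech recognition designs:
# (a) the device sends codec-encoded audio and the server extracts features;
# (b) the device runs the speech front-end and sends only the feature stream.

def encode_audio(samples):
    # Stand-in for a speech codec running on the device.
    return {"codec": "amr", "frames": len(samples)}

def decode_audio(encoded):
    # Stand-in for the server-side decoder.
    return [0.0] * encoded["frames"]

def extract_features(samples):
    # Stand-in for a speech front-end (e.g., frame-level cepstral features).
    frame = 160
    return [sum(samples[i:i + frame]) / frame for i in range(0, len(samples), frame)]

def recognize(features):
    # Stand-in for the server-side recognition engine.
    return f"recognized text from {len(features)} frames"

# (a) Figure 3: encoded signal sent to the server; features extracted there.
def recognize_variant_a(samples):
    return recognize(extract_features(decode_audio(encode_audio(samples))))

# (b) Figure 4: features extracted on the device; only the (smaller) features sent.
def recognize_variant_b(samples):
    return recognize(extract_features(samples))

audio = [0.0] * 1600
print(recognize_variant_a(audio))
print(recognize_variant_b(audio))
```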
Modality Switching
One of the problems for speech recognition in noisy environments is modality switching. If the speech recognition engine is always on, noise and by-talk may be misrecognized as legitimate user input and hence erroneously trigger commands.
A widely used modality switching approach
is called push to talk, where the user presses
a button to turn on the speech recognizer, and
releases the button to turn off the recognizer.
Another approach is called tap & talk (Deng
et al., 2002; Huang, Acero, A., Chelba, C., Deng,
L., Duchene, D., Goodman, et al., 2000, Huang
et al., 2001), where the user provides inputs by
tapping the tap & talk field and then talking
to it. Alternatively, the user can select the tap
& talk field by using the roller to navigate and
holding it down while speaking. Tap & talk can
be considered as a combination of push-to-talk
control and indication of where the recognized
text should go. Both the push-to-talk and tap &
talk avoid the speech detection problem that is
critical to the noisy environment under which the
mobile devices are typically deployed.
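The following sketch illustrates, under assumed class and field names, how push to talk and tap & talk gate the recognizer; it is not the MiPad implementation, only the control flow the text describes.

```python
# Hypothetical sketch of two modality-switching schemes: push to talk
# (a momentary button gates the recognizer) and tap & talk (tapping a field
# both activates recognition and indicates where the result should go).

class Recognizer:
    def __init__(self):
        self.listening = False
        self.grammar = None
        self.target_field = None

    def start(self, grammar=None, target_field=None):
        # Turn recognition on, optionally constrained to a field-specific grammar.
        self.listening = True
        self.grammar = grammar
        self.target_field = target_field

    def stop(self):
        # Turn recognition off so noise and by-talk cannot trigger commands.
        self.listening = False

recognizer = Recognizer()

# Push to talk: recognition is active only while the button is held down.
def on_button_down():
    recognizer.start()

def on_button_up():
    recognizer.stop()

# Tap & talk: the tapped field selects both the grammar and the destination.
FIELD_GRAMMARS = {"attendees": "contact-name grammar", "date": "date grammar"}  # illustrative

def on_field_tapped(field_name):
    recognizer.start(grammar=FIELD_GRAMMARS.get(field_name), target_field=field_name)

on_field_tapped("attendees")
print(recognizer.listening, recognizer.grammar, recognizer.target_field)
```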
Figure 5 shows an example of the tap & talk
interface used in the MiPad (Deng et al., 2002).
If the user wants to provide the attendee information for a meeting scheduling task, he/she taps
Figure 5. An example of the Tap & Talk interface (Deng et al., 2002, 2002 IEEE)
especially important for improving speech recognition accuracy under noisy environments.
Context information can be utilized in many
different ways in speech modality. One particular approach is to construct the language model
based on the context. For example, the tap &
talk approach (Deng et al., 2002) customizes the
language model depending on the field the user
is pointing to, as mentioned in section 3.3.
Language model can also be customized, based
on the user information and the dialog state. For
example, if the system is expecting the recipient
information, the language model can include only
the names in the global address book. If the user
information is also used, the language model can
also include users contact list and people who
have exchanged e-mails with the user in the past.
An even more effective language model would
weight different names differently, depending on
the frequencies the user exchanged e-mail with
the person, and the recentness of the interaction
(Yu, Wang,
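As an illustration of this kind of context-dependent language model customization, the sketch below weights candidate recipient names by how often and how recently the user exchanged e-mail with them. The data structures and the weighting formula are invented for the example, not taken from the cited work.

```python
# Hypothetical sketch: build recipient-name language model weights from the
# contact list and e-mail history, favoring frequent and recent contacts.
import math
import time

def name_lm_weights(contacts, email_log, now=None, half_life_days=30.0):
    """contacts: list of names; email_log: {name: [timestamps of exchanges]}."""
    now = now or time.time()
    scores = {}
    for name in contacts:
        exchanges = email_log.get(name, [])
        frequency = len(exchanges)
        recency = 0.0
        if exchanges:
            age_days = (now - max(exchanges)) / 86400.0
            recency = math.exp(-age_days / half_life_days)  # decay of most recent exchange
        scores[name] = 1.0 + frequency + 5.0 * recency      # illustrative mix
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}  # normalized LM probabilities

weights = name_lm_weights(
    ["Alice", "Bob"], {"Alice": [time.time() - 86400], "Bob": []})
print(weights)  # Alice gets a higher probability in the recipient-field LM
```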
Language understanding
Good speech recognition accuracy does not always
translate to good understanding of users' intents,
as indicated by Wang, Acero, and Chelba (2003).
A robust language-understanding model is needed
to obtain good user experience for speech-centric
MUI applications, especially since speech recognition errors will affect the understanding.
The first issue to address in language understanding is constructing the semantic grammar.
Since the importance of each word to the understanding is different, the words need to be treated
differently. A typical approach is to introduce a
specific type of nonterminals called semantic
classes to describe the schema of an application
(Wang, 2001; Yu, Ju, Wang, & Acero, 2006). The
semantic classes define the concepts embedded in
the linguistic structures, which are usually modeled with probabilistic context-free grammars.
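A toy example of such a grammar is sketched below; the semantic classes, productions, and probabilities are invented for illustration and do not come from the cited systems.

```python
# Toy probabilistic context-free grammar whose nonterminals include semantic
# classes (<Contact>, <Date>) describing an application schema (illustrative).
PCFG = {
    "ScheduleMeeting": [(("schedule a meeting with", "<Contact>", "<Date>"), 0.6),
                        (("meet", "<Contact>", "<Date>"), 0.4)],
    "<Contact>": [(("alice",), 0.5), (("bob",), 0.5)],
    "<Date>": [(("tomorrow",), 0.7), (("next monday",), 0.3)],
}

def expand(symbol, choose=lambda rules: rules[0]):
    """Expand a symbol into a (phrase, probability) pair, one rule per step."""
    if symbol not in PCFG:          # plain words are terminals
        return symbol, 1.0
    rhs, p = choose(PCFG[symbol])
    words, prob = [], p
    for part in rhs:
        w, q = expand(part, choose)
        words.append(w)
        prob *= q
    return " ".join(words), prob

# Most probable expansion under the default rule choice (first rule per symbol).
print(expand("ScheduleMeeting"))   # ('schedule a meeting with alice tomorrow', 0.21)
```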
Modality fusion
One strong advantage of using MUIs is the improved accuracy and throughput through modality
integration. There are typically two fusion ap-
scores. Furthermore, each team can apply a different weighting scheme, and can examine different
subsets of data. Finally, the committee weights the
results of the various teams, and reports the final
recognition results. The parameters at each level of
the hierarchy are trained from a labeled corpus.
(Oviatt, et al., 2000, online version, p. 24).
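The basic weighted-combination idea behind such late (semantic-level) fusion can be sketched as follows. The hypotheses, weights, and labels are illustrative, and this is a simplification rather than the hierarchical "committee of teams" recognizer described in the quotation.

```python
# Hypothetical sketch of late fusion: each modality yields scored semantic
# hypotheses, and a weighted combination selects the joint interpretation.

def late_fusion(hypotheses_per_modality, weights):
    """hypotheses_per_modality: {modality: {semantic_label: score in [0, 1]}}."""
    combined = {}
    for modality, hypotheses in hypotheses_per_modality.items():
        w = weights.get(modality, 1.0)
        for label, score in hypotheses.items():
            combined[label] = combined.get(label, 0.0) + w * score
    best = max(combined, key=combined.get)
    return best, combined

best, scores = late_fusion(
    {"speech": {"open calendar": 0.55, "open camera": 0.45},
     "pen":    {"open calendar": 0.80}},
    weights={"speech": 0.6, "pen": 0.4},
)
print(best, scores)   # the pen gesture disambiguates the noisy speech input
```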
References
Bolt, R. A. (1980). Put-that-there: Voice and gesture at the graphics interface. Computer Graphics,
14(3), 262-270.
modal speech and gesture applications: State-of-the-art systems and research directions. Human-Computer Interaction, 263-322. (Online version retrieved July 20, 2006, from http://www.cse.ogi.edu/CHCC/Publications/designing_user_interface_multimodal_speech_oviatt.pdf)
Oviatt, S. L., DeAngeli, A., & Kuhn, K. (1997).
Integration and synchronization of input modes
during multimodal human-computer interaction.
Proceedings of Conference on Human Factors in
Computing Systems (CHI97) (pp. 415-422).
Oviatt, S. L. & Olsen, E. (1994).
Integration themes
in multimodal human-computer interaction. In
Shirai, Furui, & Kakehi (Eds.), Proceedings of the
International Conference on Spoken Language
Processing, 2, 551-554.
Oviatt, S. L., & vanGent, R. (1996). Error resolution during multimodal human-computer interaction. Proceedings of the International Conference
on Spoken Language Processing, 2, 204-207.
Pavlovic, V., Berry, G., & Huang, T. S. (1997).
Integration of audio/visual information for use in
human-computer intelligent interaction. Proceedings of IEEE International Conference on Image
Processing (pp. 121-124).
Pavlovic, V., & Huang, T. S., (1998). Multimodal
prediction and classification on audio-visual
features. AAAI98 Workshop on Representations
for Multi-modal Human-Computer Interaction,
55-59.
Ravden, S. J., & Johnson, G. I. (1989). Evaluating
usability of human-computer interfaces: A practical method. Chichester: Ellis Horwood.
Reeves, L. M., Lai, J., Larson, J. A., Oviatt, S.,
Balaji, T. S., Buisine, S., Collings, P., Cohen, P.,
Kraal, B., Martin, J. C., McTear, M., Raman, T.
V., Stanney, K. M., Su, H., & Wang, Q. Y. (2004).
Guidelines for multimodal user interface design.
Communications of the ACM Special Issue on
Multimodal Interfaces, 47(1), 57-59.
Rhyne, J. R., & Wolf, C. G. (1993). Recognition-based user interfaces. In H. R. Hartson &
D. Hix (Eds.), Advances in Human-Computer
Interaction, 4, 191-250.
Vo, M. T., & Wood, C. (1996). Building an application framework for speech and pen input
integration in multimodal learning interfaces.
Proceedings of IEEE International Conference
of Acoustic, Speech and Signal Processing, 6,
3545-3548.
Key Terms
Modality: A communication channel between
human and computer, such as vision, speech,
keyboard, pen, and touch.
Modality Fusion: A process of combining
information from different input modalities in a
principled way. Typical fusion approaches include
early fusion, in which signals are integrated at the
feature level, and late fusion, in which information
is integrated at the semantic level.
Multimodal User Interface: A user interface
with which users can choose to interact with a
system through one of the supported modalities,
or multiple modalities simultaneously, based on
the usage environment or preference. A multimodal user interface can increase usability because
the strength of one modality often compensates
for the weaknesses of another.
Push to Talk: A method of modality switching
where a momentary button is used to activate and
deactivate the speech recognition engine.
This work was previously published in the Handbook of Research on User Interface Design and Evaluation for Mobile Technology, edited by J. Lumsden, pp. 461-477, copyright 2008 by Information Science Reference, formerly known as Idea Group
Reference (an imprint of IGI Global).
Chapter 2.37
Abstract
This chapter presents a conceptual framework for
an emerging type of user interfaces for mobile
ubiquitous computing systems, and focuses in
particular on the interaction through motion of
people and objects in physical space. We introduce
the notion of Kinetic User Interface as a unifying
framework and a middleware for the design of
pervasive interfaces, in which motion is considered as the primary input modality.
Introduction
Internet and mobile computing technology is
changing the way users access information and
interact with computers and media. Personal Computing in its original form is fading and shifting
towards the ubiquitous (or pervasive) computing
paradigm (Want et al., 2002). Ubiquitous Computing systems are made up of several interconnected
heterogeneous computational devices with different degrees of mobility and computing power. All
of these devices and appliances are embedded in
everyday objects, scattered in space, capable of
sensing the environment and of communicating
with each other, and carried or exchanged by
people. Therefore, we are facing a new ecology
of computing systems that poses new issues in
their integration and usability. Human-computer
interfaces that were designed for desktop personal computers must be re-conceived for this
new scenario. Due to the different capabilities
Ubiquitous Computing
Ubiquitous Computing (henceforth Ubicomp)
is an emerging research sub-area of Distributed
Systems whose main focus is studying how heterogeneous, networked computing devices can be
embedded in objects of daily use in order to enable new application scenarios and user experiences.
Mark Weiser (1991; 1993; 1994) introduced the
term Ubiquitous Computing in the 90s as a new
way to understand computer technology and to lay
the foundations of an expected and necessary computing paradigm revolution. Weisers vision has
been adopted and interpreted by a great number
of researchers, among whom we consider relevant
for our goals the works of (Abowd & Mynatt,
2000; Abowd et al., 2002; Banavar & Bernstein,
2004; Bellotti et al., 2002; Greenfield, 2006; Norman, 1999; Want et al., 2002). We summarize the
Ubicomp vision in four fundamental points that
motivate our effort of providing a new conceptual
framework for Ubicomp user interfaces:
1. Today's computer (e.g., the personal computer) will disappear, and the computing power will fade inside the network infrastructure, as is already the case to some extent with existing Web services.
2. Computing will be extremely distributed and heterogeneous. This will result from the interconnection of several computing
3.
4.
Unobtrusive Interfaces
When HCI intersects Ubicomp, many assumptions
that were made when designing interaction for
ordinary computing devices are no longer valid.
In Ubicomp, computers exist in different forms
and only in a minimal portion as ordinary desktop
computers (i.e., where interaction is performed
through screens, keyboards, mice). As pointed
out by Weiser and other promoters of Ubicomp,
interacting with a ubiquitous system should be
realized through an unobtrusive interface, more
precisely, an interface that does not capture the
full attention of the user, who can still use the
system to perform the foreground tasks (Nardi, 1996). In contrast, an obtrusive interface is one
that requires an unjustified cognitive effort to be
operated, thus interfering with the normal usage
of the system. Weiser & Seely Brown (1996) call
this setting "Calm Technology" in order to stress the importance of adapting the computers and their interfaces to human pace, rather than the other way
around. In this vision, computers should follow
users in their daily activity and be ready to provide
information or assistance on demand. Moreover,
they should not require much attention from the
user by asking for information that can be autonomously obtained from the actual usage context.
They must be aware of the context and be able
to adapt their behaviour and interfaces to different usage situations. In other words, ubiquitous
computers must be smart and adaptive.
Kinetic-Awareness
Context-awareness is considered the most important issue in Ubicomp (Baldauf et al., 2006; Dourish, 2004; Hong et al., 2005). Specifically, location-awareness is considered a key component of context in designing user interfaces for mobile systems. Location-awareness has always been treated as a sub-case of context-awareness, and
motion as a form of context change. Location
change is taken as a context change for adapting
Feedback Management
As in GUIs, an important issue in KUIs is feedback management. Due to the different nature
of physical space with respect to GUIs' synthetic
KUI-Enabled Scenarios
More specifically, in the KUI software components, location and motion information are linked
retrieving information from the GeoDB, or explicitly by applications. The latter case is useful
when we logically link agent and artefact Kuidgets
together, allowing one of them to inherit motion
properties from the other one.
The Activity Layer is responsible for aggregating motion information from one or several Kuidgets, as well as dynamic information generated by the Relation Manager. Relations and Kuidgets' status are used as the building blocks for the recognition of the previously described kinetic interaction patterns.
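A minimal sketch of how these layers might fit together is given below. The class names follow the chapter's terminology (Kuidget, Observation Layer, Activity Layer), but the code is an illustrative assumption rather than the actual KUI middleware.

```python
class Kuidget:
    """A kinetic-aware entity (person, object, or place) tracked by the middleware."""
    def __init__(self, name):
        self.name = name
        self.trajectory = []          # sequence of (timestamp, position) samples

class ObservationLayer:
    """Normalizes raw location updates from heterogeneous sensors into Kuidget motion."""
    def __init__(self):
        self.bindings = {}            # sensor id -> Kuidget

    def bind(self, sensor_id, kuidget):
        self.bindings[sensor_id] = kuidget

    def on_location_update(self, sensor_id, timestamp, position):
        kuidget = self.bindings.get(sensor_id)
        if kuidget is not None:
            kuidget.trajectory.append((timestamp, position))

class ActivityLayer:
    """Aggregates Kuidget motion into higher-level kinetic interaction patterns."""
    def detect_entering(self, kuidget, zone):
        # Toy pattern: the Kuidget's latest position falls inside a rectangular zone.
        if not kuidget.trajectory:
            return False
        _, (x, y) = kuidget.trajectory[-1]
        (x0, y0), (x1, y1) = zone
        return x0 <= x <= x1 and y0 <= y <= y1

# Usage: a GPS-tracked visitor entering a museum room
visitor = Kuidget("visitor-42")
obs, act = ObservationLayer(), ActivityLayer()
obs.bind("gps-7", visitor)
obs.on_location_update("gps-7", 0.0, (2.0, 3.0))
print(act.detect_entering(visitor, ((0, 0), (5, 5))))   # True
```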
Enabling Technology
The KUI middleware can be implemented on
top of a context-aware middleware such as the
Context Toolkit (Dey et al., 2001) and it can be
integrated within any enterprise architecture like
J2EE or .NET. Kinetic-aware devices typically
will be connected through wireless Internet, so a client-server software architecture is a natural choice. Applications exploit localization
infrastructures for indoor and outdoor tracking.
Indoor localization technologies include RFID
antennas, ultrasonic, ultrawide-band, and IR
sensors. For outdoor localization and motion
tracking, GPS offers the most available tracking
solution, which, combined with wireless Internet
communication (e.g., GPRS, EDGE or UMTS) is
nowadays available on commercial mobile phones and handheld devices. Additionally, we expect to detect other (more local) motion parameters (such
as acceleration and direction) by using wearable
sensors like accelerometers and digital compasses.
To this end, it is crucial for the Observation Layer to be capable of dealing with several location and motion tracking technologies at the same time and of easily associating them with Kuidgets. The accuracy of different localization devices and motion sensors is not considered an issue in this discussion, which pertains to the conceptual framework for the development of user
Conclusion
In this chapter, we explored the notion of kinetic-awareness in Ubicomp user interfaces by means of the seamless and transparent integration of objects' motion detection in the physical space as a primary
input modality. Kinetic User Interfaces enable the
users of Ubicomp systems to establish an interaction through continuous tracking of kinetic-aware
mobile devices at different spatial scales and by
the acquisition of kinetic input through motion-aware embedded sensors. KUI interfaces allow
the seamless integration of contextual (implicit)
and intentional (explicit) interaction through motion. We presented a conceptual framework for
KUI interfaces and a middleware as the basis for
implementing the KUI component in standard
Ubicomp architectures.
Kinetic-awareness in Ubicomp seems to take over from simple location-awareness. Motion-based interaction is a complementary notion to context-awareness. It is not just a matter of acting while
moving, but acting by moving. Motion is a great
source of information that leverages new dimensions of user experience in Ubicomp systems. As
noted in (Beaudouin-Lafon, 2004), in post-WIMP
user interfaces, it will be necessary to shift towards
a more holistic view of user interaction. Users are
expected to interact through activities rather than
single actions. Moreover, they will try to achieve
higher-level goals through activities, rather than
to accomplish tasks through planned actions. KUI
provides a framework for designing Ubicomp applications with embodied interaction with a special
focus on unobtrusiveness and fluidity.
References
Abowd, G.D., Mynatt, E.D., & Rodden. T. (2002,
January-March). The Human Experience. Pervasive Computing 1(1), 48-57.
Abowd, G.D., & Mynatt, E.D. (2000). Charting
Past, Present, and Future Research in Ubiquitous
Computing. ACM Transactions on Computer-Human Interaction, 7(1), 2958.
Addlesee, M., Curwen, R., Hodges, S., Newman,
J., Steggles, P., Ward, A., & Hopper, A. (2001).
Implementing a Sentient Computing System.
IEEE Computer, 34(8), 50-56.
Ashbrook, D., Lyons, K., & Clawson, J. (2006).
Capturing Experiences Anytime, Anywhere.
IEEE Pervasive Computing 5(2), 8-11.
Baldauf, M., Dustdar, S., & Rosenberg, F. (2007). A
Survey on Context Aware Systems. International
Journal of Ad Hoc and Ubiquitous Computing, Inderscience Publishers. forthcoming. Pre-print from:
http://www.vitalab.tuwien.ac.at/~florian/papers/
ijahuc2007.pdf
Banavar, G., & Bernstein, A. (2004). Challenges
in Design and Software Infrastructure for Ubiquitous Computing Applications. Communications
of the ACM, 45(12), 92-96.
Barfield, W., & Caudell, T. (Eds.). (2001) Fundamentals of Wearable Computers and Augmented
Reality. LEA Books.
Beaudouin-Lafon, M. (2000). Instrumental Interaction: An Interaction Model for Designing
Post-WIMP User Interfaces. In Proceedings of
ACM Human Factors in Computing Systems,
CHI 2000, The Hague (The Netherlands), April 2000, CHI
Letters 2(1):446-453, ACM Press.
Beaudouin-Lafon, M. (2004). Designing Interaction, not Interfaces. In Costabile, M.F. (Ed.),
In Proceedings of the working conference on
Endnotes

2. http://www.ubiq.com/hypertext/weiser/UbiHome.html
3. http://en.wikipedia.org/wiki/Web_2.0
4. http://www.socialight.com
5. http://www.ubisense.com
WIMP stands for Windows, Icons, Menus, Popups.
The motion detection can be obtained either by the mobile device itself (e.g., a GPS-enabled handheld) or by an external device or infrastructure (e.g., a badge tracked by a sensing space).
This interaction pattern is similar to the Teleport application (Addlesee et al., 2001), which allows users wearing ActiveBadges
10.
11.
This work was previously published in Advances in Ubiquitous Computing: Future Paradigms and Directions, edited by S.
Mostefaoui, Z. Maamar, and G. Giaglis, pp. 201-228, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Section III
This section presents extensive coverage of specific tools and technologies that humans interact with
and react to in their daily lives. These chapters provide an in-depth analysis of devices and tools such
as portable music players, mobile phones, and even blogs. Within these rigorously researched chapters,
readers are presented with countless examples of the technologies that support and encourage societal
development and their resulting impact.
Chapter 3.1
INTRODUCTION
Bluetooth (Bluetooth SIG, 2004) and ZigBee
(ZigBee Alliance, 2004) are short-range radio
technologies designed for wireless personal area
networks (WPANs), where the devices must have
low power consumption and require little infrastructure to operate, or none at all. These devices
will enable many applications of mobile and pervasive computing. Bluetooth is the IEEE 802.15.1
(2002) standard and focuses on cable replacement
for consumer devices and voice applications for
medium data rate networks. ZigBee is the IEEE
802.15.4 (2003) standard for low data rate networks
for sensors and control devices. The IEEE defines
only the physical (PHY) and medium access
control (MAC) layers of the standards (Baker,
2005). Both standards have alliances formed by
different companies that develop the specifications for the other layers, such as network, link,
security, and application. Although designed for
BLUETOOTH
Bluetooth originated in 1994 when Ericsson
started to develop a technology for cable replacement between mobile phones and accessories.
Some years later Ericsson and other companies
joined together to form the Bluetooth Special
Interest Group (SIG), and in 1998 the specification 1.0 was released. The IEEE published the
802.15.1 standard in 2002, adopting the lower
[Figure: A Bluetooth piconet, with a master and its slaves, and a scatternet of interconnected piconets.]
ZIGBEE
ZigBee has its origins in 1998, when Motorola
started to develop a wireless technology for
low-power mesh networking (Baker, 2005). The
IEEE 802.15.4 standard was ratified in May 2003
based on Motorola's proposal. Other companies
joined together and formed the ZigBee Alliance
in 2002. The ZigBee specification was ratified in
December 2004, covering the network, security,
and application layers (Baker, 2005).
ZigBee has been designed for low power
consumption, low cost, and low data rates for
monitoring, control, and sensor applications (Akyildiz, Su, Sankarasubramaniam, & Cayirci, 2002).
The lifetime of the networks is expected to be many months to years with non-rechargeable
batteries. The devices operate in unlicensed bands:
2.4 GHz (global), 902-928 MHz (Americas), and
868 MHz (Europe). At 2.4 GHz (16 channels), the
raw data rates can achieve up to 250 Kbps, with
offset-quadrature phase-shift keying (OQPSK)
modulation and direct sequence spread spectrum
(DSSS). The 868 MHz (1 channel) and 915 MHz
(10 channels) bands also use DSSS, but with
[Figure: ZigBee peer-to-peer network topology with a PAN coordinator.]
[Table: Feature comparison of Bluetooth and ZigBee — data rate (20-250 Kbps for ZigBee), battery life (days vs. years), operating frequency, security, network topology, protocol stack size (roughly 100 KB vs. 28 KB), transmission range, and latency figures on the order of 15-30 ms.]
RESEARCH CHALLENGES
In the Bluetooth specification there is no information on how a scatternet topology should be
formed, maintained, or operated (Persson et
al., 2005; Whitaker et al., 2005). Two scatternet
topologies that are created from separate approaches can have different characteristics. The
complexity of these tasks significantly increases
when moving from single piconets to multiple
connected piconets.
Some research challenges in Bluetooth scatternets are formation, device status, routing, and
intra- and inter-piconet scheduling schemes. Each
CONCLUSION
Bluetooth and ZigBee are wireless technologies
that may enable many applications of ubiquitous
and pervasive computing envisioned by Weiser
(1991). Millions of devices are expected to be
equipped with one or both technologies in the
next few years. This work addressed some of
the main features and made some comparisons
between them. Some research challenges were
described. These issues must be properly studied
for the widespread use of ZigBee and Bluetooth
technologies.
REFERENCES
Akyildiz, I. F., Su, W., Sankarasubramaniam,
Y. & Cayirci, E. (2002). A survey on sensor networks. IEEE Communications Magazine, 40(8),
102-114.
Baker, N. (2005). ZigBee and Bluetooth: Strengths
and weakness for industrial applications. IEEE
Computing and Control Engineering Journal,
16(2), 20-25.
Bluetooth SIG. (2004). Specification of the Bluetooth system. Core, version 2.0 + EDR. Retrieved
from http://www.bluetooth.com
Bluetooth SIG. (2003). Specification of the Bluetooth system. Core, version 1.2. Retrieved from
http://www.bluetooth.com
Chen, L., Sun, T., & Gerla, M. (2006). Modeling channel conflict probabilities between IEEE
802.15 based wireless personal area networks.
Proceedings of the IEEE International Conference
on Communications, Istanbul, Turkey.
Geer, D. (2005). Users make a beeline for ZigBee sensor technology. IEEE Computer, 38(12),
16-19.

Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(3), 94-104.
KEY TERMS
Carrier-Sense Medium Access with Collision Avoidance (CSMA-CA): A network contention protocol that listens to a network in order to
avoid collisions.
Direct Sequence Spread Spectrum (DSSS):
A technique that spreads the data into a large
coded stream that takes the full bandwidth of
the channel.
Frequency Hopping Spread Spectrum
(FHSS): A method of transmitting signals by
rapidly switching a carrier among many frequency
channels using a pseudorandom sequence known
to both transmitter and receiver.
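As a small illustration of this definition, the sketch below derives the same pseudorandom hop sequence at a transmitter and a receiver from a shared seed; the channel count and seed are arbitrary values chosen for the example rather than parameters of a particular standard.

```python
import random

def hop_sequence(shared_seed, num_channels, hops):
    """Generate a pseudorandom channel-hopping sequence from a shared seed."""
    rng = random.Random(shared_seed)
    return [rng.randrange(num_channels) for _ in range(hops)]

# Transmitter and receiver agree on the seed beforehand, so they
# switch carriers in lockstep without exchanging the sequence itself.
tx = hop_sequence(shared_seed=0xBEEF, num_channels=79, hops=8)
rx = hop_sequence(shared_seed=0xBEEF, num_channels=79, hops=8)
print(tx == rx)   # True: both sides hop through the same channels
print(tx)
```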
Medium Access Control (MAC): A network
layer that determines who is allowed to access the
physical media at any one time.
Modulation: The process in which information signals are impressed on a radio frequency
carrier wave by varying the amplitude, frequency,
or phase.
This work was previously published in Encyclopedia of Mobile Computing and Commerce, edited by D. Taniar, pp. 272-276,
copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.2
Abstract
Collaboration is considered an essential element for effective learning since it enables learners to better develop their points of view and refine their knowledge. Since our aim is to support members of communities of practice as learners, we argue that collaboration tools should provide personalization features and functionalities in order to fit the specific individual and community learning requirements. More specifically, we propose a framework of personalization-supporting services that, when embedded in collaboration tools, can
Introduction
As organizations start to acknowledge the significance of communities of practice (CoPs) in helping
them meet their business needs and objectives, new
efforts to better facilitate the process of learning
in these communities are constantly emerging
Design Issues
The primary design aims of our approach to modelling users as learners were to achieve extensibility
and adaptability of the user profile as well as the
ability to exchange user information between the
proposed personalized collaboration services and
third-party services. In this context, the proposed
learner profile comprises both computational and
noncomputational information. Computational
information comprises information such as the
name, contact details, education, training, and so
forth, of users, as well as information about the
community they belong to. The noncomputational
information is calculated after the processing
of the user's individual behaviour during their
participation in system activities. This type of
information comprises fields that can be defined
during run-time, whenever a new requirement
for a new kind of user information is raised. As
regards the source of the information stored in
User modelling
[Figure: Structure of the Learner Profile — static information (individual information, community information) and dynamic information (preferences, relations, competences, experience), the latter treated as domain independent or domain specific.]
and learning items (e.g., argument, URL, or document) can reveal the learner's different personality types and learning styles. Competences refer to cognitive characteristics such as creativity, reciprocity, and social skills. Experience reflects the learner's familiarity and know-how regarding a specific domain. It should be noted that all dynamic elements of the proposed Learner Profile can be of assistance towards learning. Nevertheless, the domain of the issue under consideration is a decisive factor. Thus, dynamic aspects of a learner's profile are treated as domain specific in our approach.
etc.). Their subjective nature may influence personalization services in an unpredictable way
(e.g., suggesting to a novice user a document that
requires advanced domain knowledge because
the user misjudged his experience or competence
level). To cope with such issues, we are currently
in the process of designing methods that assess explicitly stated profile data based on the user's behaviour. We refer to these as implicit or behaviour-based data acquisition. In general, the aim of implicit or behaviour-based data acquisition is to assess the experience, domains, and competences of an individual user based on his behaviour. Implicit data acquisition utilizes the user's actions and interactions, and attempts to extract information that permits assessing or augmenting the user's profile data.
A special part of the system's architecture
should be dedicated to support implicit data acquisition and interpretation. It consists of a number
of modules, each of which is responsible for a
particular task (see Figure 2). More specifically, the
User Action and Tracking module is responsible
for observing user actions and recording them in
a special repository of the infrastructure called
the Action and Event Store. The Action and Event
Store only maintains all actions and events that
are useful for implicit user action analysis and
does not interpret them in any way. Analysis
and interpretation of the gathered data as well
as triggering of the appropriate computations
(i.e., system reactions) is the main responsibility
of the Action Interpretation
[Figure 2: Modules supporting implicit data acquisition — User Action and Tracking module, Action and Event Store, Interpretation Engine, Rule Store, and System Data Base.]
The Interpretation Engine analyses the available information in the Action and Event Store and triggers computations that either update the user profile accordingly or execute a particular action. The
interpretation engine can be configured using
rules that are also stored within the infrastructure,
making the interpretation engine rule based. A rule
essentially specifies under which circumstances
(i.e., the events and actions of a particular user in
the store) an action is triggered. The rule-based
nature of the interpretation engine makes the
engine itself extensible so that even more cases
of implicit data acquisition and interpretation are
able to be supported.
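A minimal sketch of such a rule-based interpretation engine is given below. The event fields (actor, timestamp, action type, affected objects) follow the description in this section, while the concrete rule, threshold, and profile update are invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Event:
    actor: str          # who performed the action
    timestamp: float    # when it happened
    action: str         # what type of action was executed
    objects: list       # which objects were affected

@dataclass
class Rule:
    name: str
    condition: callable     # predicate over (user, events)
    reaction: callable      # update applied to the user's profile

def run_rules(user, events, rules, profile):
    """Scan the event store and trigger the reaction of every matching rule."""
    for rule in rules:
        if rule.condition(user, events):
            rule.reaction(profile)
    return profile

# Example rule: a user who uploaded at least three documents is
# marked as experienced in the 'authoring' domain.
uploads = lambda u, evs: sum(1 for e in evs if e.actor == u and e.action == "upload") >= 3
mark_experienced = lambda p: p.setdefault("experience", set()).add("authoring")

store = [Event("mary", t, "upload", ["doc%d" % t]) for t in range(3)]
profile = run_rules("mary", store, [Rule("experienced-author", uploads, mark_experienced)], {})
print(profile)   # {'experience': {'authoring'}}
```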
Based on the explicit or implicit data, explicit or
implicit adaptation mechanisms can be supported
within the collaboration tool. Explicit adaptation
mechanisms refer to approaches where the tool
adapts its services based on the explicitly stated
characteristics or preferences of the user. Users
are usually aware of explicit adaptations since they
themselves triggered the initiation and presence of
the respective services. On the other hand, implicit
adaptation mechanisms refer to approaches that
adapt the system's services to the user, based on
his/her actions within it. Such mechanisms work
in the background; users are usually unaware
of the origin of these services since they did not
explicitly initiate their activation and, thus, do not
perceive their operation. Implicit personalization
mechanisms are automatically triggered by the
system utilizing implicit or behaviour-based data
in the proposed Learner Profile.
In order to enable the foreseen functionalities
(such as dynamic update of user information,
adaptation of the tool according to the user needs,
etc.), the most important actions out of the entire set of users' actions should be tracked. As regards
the User Action Tracking Mechanism, the recorded
data about user actions contain information about
who did the action, when, what type of action was
executed, and what objects were affected by the
action. In this way, it will be possible for the system
to give valuable feedback to other mechanisms so
styles. In the
following we present a set of services employed for
enhancing software tools supporting collaboration
towards learning.
The proposed set of services has resulted from a thorough investigation of
the related literature, existing case studies that
consider diverse aspects of learning within communities, as well as a transversal analysis of a set
of interviews with real CoP members engaged in
various domains of practice.
Awareness
According to the findings of our research, CoPs'
members consider system awareness services
as the most helpful ones for collaboration tools.
Participation awareness provides information
about CoP members, online members as well as
the discourse moves of individual CoP members.
Users will be able to see which user is online, how
the space was changed by a particular member, and so
forth. Social awareness provides information on
how members are related to other members in the
CoP, and includes statistics about how and how
many times members within a CoP communicate
with each other and social networks representing
the community. Based on the data populated in
the Learner Profile, personalized services can
provide the proper set of notification actions for
the provision of helpful personalized information
about system events to CoP members. For instance,
a collaboration tool could alert users about the
entrance of another user to the system, or about
new content insertion into the system.
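As an illustrative sketch, a notification service could filter system events against each member's Learner Profile as follows; the event types, profile fields, and matching rule are assumptions for the example, not part of the proposed framework's specification.

```python
def notifications_for(member_profile, events):
    """Select the system events a CoP member should be alerted about,
    based on the domains of interest stored in the Learner Profile."""
    interests = set(member_profile.get("domains_of_interest", []))
    selected = []
    for event in events:
        if event["type"] == "member_entered":
            selected.append("%s just came online" % event["who"])
        elif event["type"] == "content_added" and event["domain"] in interests:
            selected.append("New %s content: %s" % (event["domain"], event["title"]))
    return selected

events = [
    {"type": "member_entered", "who": "nikos"},
    {"type": "content_added", "domain": "usability", "title": "Heuristic evaluation notes"},
    {"type": "content_added", "domain": "databases", "title": "Indexing tips"},
]
profile = {"domains_of_interest": ["usability"]}
print(notifications_for(profile, events))
```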
In order to enable this personalized awareness, terms such as "related" or "interesting" that define a relation between the user and the content should be determined by the user himself, or automatically by the system through the manipulation of some characteristics from the user profile. Furthermore, system awareness can play an important role in assisting the familiarization of new learners with the system. By both informing
the CoP moderator about the entrance of a new
Allocation of Resources
Allocation of resources is another service that, when personalized in collaboration tools, can facilitate learning activities, especially for autonomous learners. As regards searching, for instance, a Learner's Profile can provide useful information to rank search resources according to a number of factors, such as the learner's preferences, or even his/her competence and experience level. In this way, the system will be able to adapt to an individual user's needs. Moreover, the information about the user's domains of
interest will provide additional information with
which a search can be better contextualized, thus
leading to more relevant results. Furthermore,
reasoning mechanisms could be employed for
providing the necessary filtering features for
Visualization
It has been widely argued that visualization of
collaboration conducted by a group of people
working collaboratively towards solving a common problem can facilitate the overall process
in many ways, such as in explicating and sharing individual representations of the problem,
in maintaining focus on the overall process, as
well as in maintaining consistency and in increasing plausibility and accuracy (Evangelou, Karacapilidis, & Tzagarakis, 2006; Kirschner, Buckingham-Shum, & Carr, 2003). Personalized
representation of the associated processes, such as
the process of discoursing or knowledge sharing,
Building Trust
Privacy policies and access control services are
a critical requirement for the employment of all
these services, as well as for the building of trust
between the CoP members and the software application. These should be provided in order to satisfy
the learner/user's need to know what information
about them is recorded, for what purposes, how
long this information will be kept, and if this
information is revealed to other people. Furthermore, the security assurance, while establishing
connections between users and services, or while
accessing stored information, should be taken into
consideration as well. Towards this end, two major
techniques are broadly used to deny unwanted access to data: anonymity and encryption.
Anonymity cuts the relation between the particular
user and the information about him/her, while
information encryption provides protection of
the exchanged personal data. In our approach, we
Implementation issues
According to current trends in developing Web-based tools, for reasons such as the reusability of
components and agility of services, our approach
builds on top of a service-oriented environment.
In order to exploit advantages enabled by the
Service Oriented Architecture (SOA) design
paradigm, the proposed set of services should be
based on Web service architecture so as to enable
the reusability of the implemented modules, as
well as the integration or the interoperation with
other services (from external systems). An overall
design for the enhancement of tools supporting
collaboration with personalized functionality
towards learning is depicted in Figure 3. In this
approach, we sketch a generic architecture design
in which a Learner Profile Service is the basis for
Conclusion
In this paper, we presented a set of services enhancing CoPs' interactions and collaborative work
based on a generic Learner Profile model. Our
approach concerns an alternative form of online
learning with different forms of interaction, and
a new way of promoting community building.
Its purpose is to aid researchers and developers
in the development of personalized collaboration
[Figure 3: Overall architecture of personalized collaboration services built on a Learner Profile Service — Personalized Search, Personalized Presentation, Content Filtering and Recommendation, Related Learners' Activity, Participation Evaluation, Learner Search by Expertise, Privacy Policies and Access Control, and New Services.]
Acknowledgment
Research carried out in the context of this paper
has been partially funded by the EU PALETTE
(Pedagogically Sustained Adaptive Learning
through the Exploitation of Tacit and Explicit
Knowledge) Integrated Project (IST FP6-2004,
Contract Number 028038). Also, the authors
thank the anonymous referees for their helpful
comments and suggestions on the previous versions of this paper.
References
Chen, W., & Mizoguchi, R. (1999). Communication content ontology for learner model agent in
multi-agent architecture. In Proc. AIED99 Workshop on Ontologies for Intelligent Educational
Systems. Retrieved from http://www.ei.sanken.
osaka-u.ac.jp/aied99/a-papers/W-Chen.pdf
Cranor, L., Langheinrich, M., Marchiori, M.,
Presler-Marshall, M., & Reagle, J. (2002). The
This work was previously published in InternatIonal Journal of Web-based LearnIng and TeachIng TechnologIes, Vol. 2, Issue
3, edited by L. Esnault, pp. 77-89, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint
of IGI Global).
Chapter 3.3
Abstract
The aim of this chapter is to discuss the mutual
influence between culture and technology on a
broad inter- and transcultural level. Especially,
how does information culture shape the meaning
of information, communication, and knowledge,
and consequently, the design, spread, and usage
of ICTs in certain societies? Vice versa, we are
interested in the ways in which the spread and
usage of ICTs affect the predominating culture.
We aim for a model that incorporates cultural as
well as technological factors in order to provide a
basis for future ICT research that goes beyond both
technological determinism and social constructivism. We believe that new technologies indeed can
contribute to more justice in the world in terms of
INFORMATION AND
COMMUNICATION CULTURES
When referring to information and communication cultures, we address the basic significance
of having access to information and knowledge
and the practices of communication and cooperation in a specific society. The most important consideration involves who has access to information, which has a profound effect on the distribution of power and control over flows of information within society.
It is assumed that within societies with a strong
hierarchical structure, the flow and dissemination of public information is restricted to just a
and teacher. The education system in information-restrictive cultures does not encourage curiosity
or question-based learning. The right answer
is the measure of success. What is right and what
is wrong again are defined by authorities in the
education system. People are not trained to address
their environments and to pose questions critically.
These answer-oriented societies are an obstacle
for the optimal utilization of new information and
communication technologies. Digital communication networks such as the Internet work best
with a question-oriented approach that leads to a
variety of plausible answers in different contexts.
Expecting the right and only answer (as people in
information-restrictive societies are trained) leads
to predictable disappointments and, therefore, less
motivation to get involved in new media.
In information-restrictive cultures, the flow of
information between authorities and citizens as
well as between businesses and customers follows
the push principle, whereby authorities and businesses decide which information is being passed
on. In such cultures, the Internet is perceived
merely as a new and additional (mass) medium
to transfer information to a mass audience. Consequently, a huge amount of information and
communication capacities of the Internet simply
are left unused. As there are not any geographical,
national, or cultural borders within digital communication networks, information and applications
from information-friendly cultural environments
compete with those from information-restrictive
cultures on a global stage.
We assume that information-friendly cultures
provide a competitive advantage for their members
in the global information society.
[Figure: The ICT adoption process — access, skills, cognition, and capabilities, spanning the technology (macro-level) and human (micro-level) sides.]
Model of a Human-Centered
and Culturally Sensitive ICT
Adoption Process
The adoption process, which also can be considered the major stage for targeted ePolicy measures,
starts with the problems of technology-determined
access. We need access to technology in order to
make experiences and to trigger the following
steps. Unfortunately, many processes get stuck in the access stage; "If they build it, they will come" could be the motive for access-only strategies. Most countries favor this access-dominated strategy, which is predominantly in the interest of the technology industry and is, therefore, an industrial policy measure.
The critique of the access-only strategy led
to a human-oriented enhancement of the same
strategy. People need to have adequate skills in
order to use the accessed technology. At first
glance, this could solve the problem: not only provide people with technology but also train them to use it. Similar to the access stage, the
skills stage also is geared predominantly to the
interest of the technology industry; in this case,
the big international or global software monopolists. Acquiring skills means dealing with a given
technology. The creative potential of people in the
context of technology is not addressed (National
Research Council, 2004).
DIGITAL CULTURES
Cultural Shifts: Transculturality
In recent decades, the concept of interculturality
has been very popular and influential in regard
to the fairly young discipline of intercultural
communication (Leeds-Hurwitz, 1998). In this
context, communication was understood to be
an action taking place between countries that
were perceived as self-contained units. In this
traditional definition, cultures are seen as types of
autonomous islands that are virtually completely
closed-off, which Beck (1997) called metaphorically the container theory of society (p. 49). But
modern societies are very diverse entities. They
contain and incorporate many elements of different origins, and the boundaries between foreign
and indigenous cultures get blurred and finally
become untraceable. Tsagarousianou (2004) sug-
LINKING CULTURE,
KNOWLEDGE, AND ICTS
At this point, we introduce the extended concept
of culture, which is intertwined with the concept
of knowledge, with the aim of discussing the correlation between culture, knowledge, and the role of
ICTs. This endeavor eventually should lead to an
approach that allows us to connect the complex
concept of cultures with its impact on various
spheres of our respective lives and, therefore, on
our identity. Therefore, the term digital culture
will be used to describe the model of mutual influence between culture and technology, which
we use as a fundamental framework to develop a
new understanding of the use of ICTs. This model
aims at an understanding of cultural differences
in handling information to guarantee a beneficial
development of society.
If the concept of transculturality is introduced
into the notion of knowledge, there is a rapid
increase of global knowledge. ICTs allow direct
communication between vast numbers of people
with different cultural backgrounds but do not
automatically distribute access to knowledge
equally. In fact, many citizens cannot gain access
to global knowledge or even to local knowledge other than their own because of
their low economic status (digital divide) and
their low educational levels (cultural divide).
These divides create groups of haves or have-nots,
communication-rich or communication-poor,
winners or losers in the globalization process.
Concerning identities, these divides determine
Figure 2. [Mutual shaping of culture and technology: the spread and usage of ICTs affect culture, and culture shapes the spread and usage of ICTs; the diagram relates digital culture, push and pull, the cultural divide, and the digital divide.]
points of view. It is equally important to demonstrate that culture and technology influence each
other by using the term digital culture.
Drawing upon these basic insights, we will
discuss the dialectic of shaping, diffusion, and
usage of ICTs in societies and different cultural
knowledge bases along the following dimensions:
content, distribution, and context.
Table 1. [Dimensions of the digital divide, the cultural divide, and digital culture. Content: data and information ("knowing that"); knowledge ("knowing how"); data, information, and knowledge ("knowing why"). Distribution: channels limited to technical possibilities; inadequacy between text and channel. Context: technical connectivity, skills, realization, application, inclusion, awareness, capabilities.]
CONCLUSION
Starting with a critique of both techno-deterministic and social-constructive approaches toward the
relationship between technology and culture, we
argue for a dialectical, mutual-shaping approach.
Especially in the context of information and
communication technologies (ICTs) and society,
this dialectical relationship between culture and
technology is important. To strive for the capable
user, cultural dimensions have to be incorporated
into a model that transfers the spread and usage of
technology on the one hand and the social shaping
of technology on the other. The concept of digital
culture represents a framework that embraces the
techno-cultural dimensions of content, distribution, and context. This framework provides an
applicable instrument that allows addressing the
important questions in the context of technology
and society, such as equal knowledge distribution,
provision of capabilities, and social inclusion.
References
Barber, B. (2001). Jihad vs. McWorld. New York:
Ballantine.
Beck, U. (1997). Was ist Globalisierung? Frankfurt am Main: Suhrkamp.
Castells, M. (2001). The information age: Economy, society and culture: The rise of the network
society (vol. 1). Oxford: Blackwell.
Garnham, N. (1997). Amartya Sen's capabilities approach to the evaluation of welfare: Its
Hofkirchner, W. (1999). Does electronic networking entail a new stage of cultural evolution? In P. Fleissner & J. C. Nyiri (Eds.), Cyberspace: A new battlefield for human interests? Philosophy of culture and the politics of electronic networking (vol. II) (pp. 3-22). Innsbruck, Budapest: Studienverlag.
van Dijk, J. (2005). The deepening divide. Inequality in the information society. Thousand
Oaks, CA: Sage.
Warschauer, M. (2002, July 1). Reconceptualizing
the digital divide. First Monday, Peer Reviewed
Journal on the Internet, 7(7). Retrieved from
http://www.firstmonday.org/issues/issue7_7/
warschauer/index.html
Welsch, W. (1999). Transculturality: The puzzling form of cultures today. In M. Featherstone,
& S. Lash (Eds.), Spaces of culture: City, nation,
world (pp.194-213). London: Sage. Retrieved
February 2, 2006, from http://www2.uni-jena.
de/welsch/Papers/transcultSociety.html
Wieviorka, M. (2003). Kulturelle Differenzen und kollektive Identitäten. Hamburg: Hamburger
Edition.
Willke, H. (2004). Einführung in das systemische Wissensmanagement. Heidelberg: Carl-Auer-Systeme.
This work was previously published in Information Technology Ethics: Cultural Perspectives, edited by S. Hongladarom and
C. Ess, pp. 54-67, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of
IGI Global).
Chapter 3.4
ABSTRACT
This article reports on an empirical investigation of
user perceptions of the importance of several characteristics of interface agents. Interface agents are
software entities that are incorporated into various
computer applications, including electronic mail
systems. As evidenced by the growing body of
empirical studies and the increasing number of
interface agent-based applications on the software
market, there is a strong need for the development
of this technology. According to a meta-review of
agent-related literature by Dehn and van Mulken
(2000), there are several characteristics of interface agents that require special attention from
agent developers. However, prior to this study,
the importance of these characteristics from the
end-user perspective remained unclear. In order to
identify the significance of these characteristics, a
group of actual users of an e-mail interface agent
was surveyed. The results indicate that information accuracy and the degree of the usefulness of
an agent are the most salient factors, followed by
user comfortability with an agent, the extent of
user enjoyment, and visual attractiveness of an
Introduction
To create an artificial being has been a dream of
men since the birth of science. Professor Hobby
(William Hurt) in Artificial Intelligence (Spielberg, 2002)
For thousands of years, people have dreamed of having someone do basic tasks for them. That could be a robot, a cyborg, or a well-trained pet. Not until the beginning of the 21st century did it become
possible. Now, with the recent development of
telecommunications networks and computer technologies, a new type of software application plays
the role of virtual assistants that potentially may
alleviate some of the problems associated with the
employment of software systems. This class of applications often is referred to as intelligent agents,
software agents, avatars, or interface agents. As
demonstrated by the growing body of academic
literature and by the increasing number of agentbased software applications on the market, there is
As such, Dehn and van Mulken (2000) classified the various characteristics of interface
agents (e.g., the user's subjective experience of the system, the user's behavior while interacting with the system, and the outcome of the
interaction). Each category includes several factors. However, it is not viable to investigate the
importance of these characteristics applied to all
types of interface agents in a single project. Since
interface agents may be incorporated in the form
of personal secretaries, Internet guides, electronic
commerce assistants, or educators, a separate
study is required for each kind of interface agent.
It is believed that interface agents embedded in
different types of software environments may
require certain system-specific features and facets.
For example, users who work with an interface
agent that facilitates online shopping may look for
effectiveness and efficiency. In contrast, people
who employ an interface agent as entertainers
may emphasize the aspect of enjoyment over that
of effectiveness or efficiency.
With respect to the present study, interface
agents for electronic mail were chosen for two
reasons. First, e-mail is an important telecom-
Characteristics
With respect to interface agents for e-mail, it is important for users:
[Figure and table: Mean importance ratings of the interface agent characteristics.]

Characteristic                                         Mean   Std. dev.
Information accuracy                                   6.28   1.04
Usefulness                                             6.05   1.13
Comfortability                                         5.90   1.10
Enjoyment                                              5.86   1.13
Attractiveness                                         5.78   1.17
Nondistraction                                         5.47   1.74
Natural interactions                                   5.34   1.36
Appearance corresponds to the level of intelligence    4.22   1.86
[Table: Pairwise comparisons among the eight characteristics — (1) appearance corresponds to the level of intelligence, (2) information accuracy, (3) attractiveness, (4) comfortability, (5) usefulness, (6) enjoyment, (7) natural interactions, (8) little distraction — reporting mean differences (I-J) and significance levels. Characteristic (1) is rated significantly lower than all other characteristics, and information accuracy (2) is rated significantly higher than natural interactions (7) and little distraction (8).]
References
Aczel, A. D. (1996). Complete business statistics.
Chicago: Irwin.
Bergman, R., Griss, M., & Staelin, C. (2002). A personal email assistant (Technical Report HPL-2002-236). Hewlett-Packard Company.
Bickmore, T. W., & Cassell, J. (2005). Social
dialogue with embodied conversational agents. In
J. van Kuppevelt, L. Dybkjaer, & N. O. Bernsen
(Eds.), Advances in natural, multimodal dialogue
systems. New York: Kluwer Academic.
Bickmore, T. W., & Picard, R. W. (2005). Establishing and maintaining long-term human-computer
relationships. ACM Transactions on Computer-
Klein, G., Woods, D. D., Bradshaw, J. M., Hoffman, R. R., & Feltovich, P. J. (2004). Ten challenges for making automation a team player in
joint human-agent activity. IEEE Transactions on
Intelligent Systems, 19(6), 91-95.
Spielberg, S. (Director/Producer), Curtis, B. (Producer), Harlan, J. (Producer), Kennedy, K. (Producer), & Parkes, W. F. (Producer). (2002). Artificial
intelligence [Motion picture]. DreamWorks.
Takacs, B. (2005). Special education and rehabilitation: Teaching and healing with interactive
graphics. IEEE Computer Graphics and Applications, 25(5), 40-48.
Voss, I. (2004). The semantic of episodes in communication with the anthropomorphic interface
agent MAX. Proceedings of the 9th International
Conference on Intelligent User Interface, Funchal,
Madeira, Portugal.
This work was previously published in International Journal of Intelligent Information Technologies, Vol. 2, Issue 2, edited
by V. Sugumaran, pp. 49-60, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI
Global).
Chapter 3.5
INTRODUCTION
Over the last decade, we have witnessed an explosive growth in the information available on the
Web. Today, Web browsers provide easy access
to myriad sources of text and multimedia data.
Search engines index more than a billion pages
and finding the desired information is not an easy
task. This profusion of resources has prompted the
need for developing automatic mining techniques on the Web, thereby giving rise to the term Web
mining (Pal, Talwar, & Mitra, 2002).
Web mining is the application of data mining
techniques on the Web for discovering useful patterns and can be divided into three basic categories:
Web content mining, Web structure mining, and
Web usage mining. Web content mining includes
techniques for assisting users in locating Web
documents (i.e., pages) that meet certain criteria,
while Web structure mining relates to discovering information based on the Web site structure
data (the data depicting the Web site map). Web
usage mining focuses on analyzing Web access
logs and other sources of information regarding
user interactions within the Web site in order to
capture, understand and model their behavioral
patterns and profiles and thereby improve their
experience with the Web site.
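A minimal sketch of the kind of preprocessing Web usage mining typically starts from is shown below: it groups page requests from an access log into per-visitor sessions. The tuple layout and the 30-minute inactivity timeout are common conventions assumed for the example, not requirements of any particular system.

```python
from collections import defaultdict

SESSION_TIMEOUT = 30 * 60   # seconds of inactivity that close a session

def sessionize(requests):
    """requests: iterable of (visitor_id, unix_time, url) tuples, time-ordered.
    Returns a dict mapping visitor_id -> list of sessions (lists of urls)."""
    sessions = defaultdict(list)
    last_seen = {}
    for visitor, ts, url in requests:
        if visitor not in last_seen or ts - last_seen[visitor] > SESSION_TIMEOUT:
            sessions[visitor].append([])          # start a new session
        sessions[visitor][-1].append(url)
        last_seen[visitor] = ts
    return dict(sessions)

log = [
    ("10.0.0.1", 0,    "/home"),
    ("10.0.0.1", 60,   "/permits/apply"),
    ("10.0.0.1", 4000, "/permits/status"),        # > 30 min later: new session
    ("10.0.0.2", 120,  "/home"),
]
print(sessionize(log))
```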
As citizens' requirements and needs change continuously, traditional information searching and fulfillment of various tasks result in the loss of valuable time spent identifying the responsible actor (public authority) and waiting in queues. At the same time, the percentage of users who are acquainted with the Internet has increased remarkably (Internet World Stats, 2005). These two
facts motivate many governmental organizations
to proceed with the provision of e-services via
BACKGROUND
The close relation between Web mining and Web
personalization has become the stimulus for
significant research work in the area (Borges &
Levene, 1999; Cooley, 2000; Kosala & Blockeel,
2000; Madria, Bhowmick, Ng, & Lim, 1999).
Web mining is a complete process and involves
specific primary data mining tasks, namely data
collection, data preprocessing, pattern discovery,
and knowledge post-processing. Therefore, Web
mining can be viewed as consisting of the fol-
[Figure: Web mining and personalization delivering personalized Web experiences for e-government information and services.]
which incorporates the notion of time. For example, a pattern may be a Web page or a set of pages accessed immediately after another set of pages: "55% of new businesses who apply for a certain certificate will use the certificate within 15 days," or "Given the transactions of a citizen who has not applied for any information/services during the last 3 months, find all citizens with a similar behavior."
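A toy sketch of checking a pattern of the first kind is shown below: it computes, over hypothetical per-citizen event histories, what fraction of those who performed one action performed a second action within a time window. The event names and the 15-day window come from the example above; the data structure is an assumption.

```python
DAY = 86400

def fraction_following(histories, first, second, within_days):
    """Fraction of users who, after event `first`, perform `second`
    within `within_days` days. histories: {user: [(unix_time, event), ...]}."""
    qualified, followed = 0, 0
    for events in histories.values():
        starts = [t for t, e in events if e == first]
        if not starts:
            continue
        qualified += 1
        t0 = starts[0]
        if any(e == second and 0 <= t - t0 <= within_days * DAY for t, e in events):
            followed += 1
    return followed / qualified if qualified else 0.0

histories = {
    "biz-1": [(0, "apply_certificate"), (5 * DAY, "use_certificate")],
    "biz-2": [(0, "apply_certificate"), (40 * DAY, "use_certificate")],
    "biz-3": [(0, "browse")],
}
print(fraction_following(histories, "apply_certificate", "use_certificate", 15))  # 0.5
```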
Finally, as search engines often appear as a helpful tool in e-government, personalized Web search systems may be used to enhance their functionality. In order to incorporate user preferences
into search engines, three major approaches are
proposed (Shahabi & Chen, 2003):
FUTURE TRENDS
On the road to enhance an e-government application and treat each user individually, personalization plays a central role. The benefits for both
public authorities and citizens are significant
when it really works. However, several issues still
remain unclear. First of all, determining and delivering personalization is a data intensive task and
requires numerous processing steps. This usually
causes intolerably long response times, which in turn may lead to site abandonment. To avoid this constraint, parts of the process can be executed
CONCLUSION
Governments are intensifying their efforts to offer efficient, advanced, and modern services to their users (citizens and businesses) based on information and
REFERENCES
Borges, J., & Levene, M. (1999, August 15). Data
mining of user navigation patterns. Proceedings
of the WEBKDD99 Workshop on Web Usage
Analysis and User Profiling, San Diego, CA (pp.
31-36).
Cooley, R. (2000). Web usage mining: Discovery
and application of interesting patterns from Web
data. PhD Thesis, Department of Computer Science, University of Minnesota.
Eirinaki, M., & Vazirgiannis, M. (2003). Web
mining for Web personalization. ACM Transactions on Internet Technology, 3(1), 1-27.
Etzioni, O. (1996). The World Wide Web: Quagmire or gold mine? Communications of the ACM, 39(11), 65-68.
Internet World Stats. (2005). Internet Usage Statistics: The Big Picture. World Internet Users and Population Stats. Retrieved June 25, 2005, from
http://www.internetworldstats.com/stats.htm
Kosala, R., & Blockeel, H. (2000). Web mining research: A survey. SIGKDD Explorations:
Newsletter of the Special Interest Group (SIG)
on Knowledge Discovery & Data Mining, ACM,
2(1), 1-15.
Shahabi, C., & Chen, Y. S. (2003, September 22-24). Web information personalization: Challenges and approaches. Proceedings of Databases in Networked Information Systems: The 3rd International Workshop, Japan (pp. 5-15).
Markellos, K., Markellou, P., Rigou, M., & Sirmakessis, S. (2004a). Web mining: Past, present,
and future. In S. Sirmakessis (Ed.), Text mining
and applications (pp. 25-35). Berlin; Heidelberg,
Germany: Springer Verlag.
Markellos, K., Markellou, P., Rigou, M., Sirmakessis, S., & Tsakalidis, A. (2004b, April 14-16).
Web personalization and the privacy concern.
Proceedings of the 7th ETHICOMP International
Key Terms
Clickstream: A record of a user's activity on the Internet, including every Web site and every page of every Web site that the user visits, how long the user was on a page or site, in what order the pages were visited, any newsgroups that the user participates in, and even the e-mail addresses of mail that the user sends and receives. Both ISPs and individual Web sites are capable of tracking a user's clickstream.
Cookie: The data sent by a Web server to a
Web client, stored locally by the client and sent
back to the server on subsequent requests. In other
words, a cookie is simply an HTTP header that
consists of a text-only string, which is inserted into
the memory of a browser. It is used to uniquely
identify a user during Web interactions within
a site and contains data parameters that allow
the remote HTML server to keep a record of the user's identity and what actions she/he takes at the remote Web site.
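A minimal sketch of this exchange, using Python's standard http.cookies module; the cookie name "sessionid" and its value are illustrative examples.

from http.cookies import SimpleCookie

# Server side: build the Set-Cookie header that will identify this browser.
issued = SimpleCookie()
issued["sessionid"] = "user-42-abc123"
print(issued.output())                  # Set-Cookie: sessionid=user-42-abc123

# Client side: the browser stores the string and returns it on later requests,
# letting the server recognise the same user throughout the Web interaction.
returned = SimpleCookie()
returned.load("sessionid=user-42-abc123")
print(returned["sessionid"].value)      # user-42-abc123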
Data Mining: The application of specific
algorithms for extracting patterns (models) from
data.
This work was previously published in Encyclopedia of Digital Government, edited by A. Anttiroiko and M. Malkia, pp. 1629-1634,
copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.6
Context-Aware Service
Discovery in Ubiquitous
Computing
Huaqun Guo
Institute for Infocomm Research and National University of Singapore, Singapore
Daqing Zhang
Institute for Infocomm Research, Singapore
Lek-Heng Ngoh
Institute for Infocomm Research, A*STAR, Singapore
Song Zheng
Institute for Infocomm Research, Singapore
Wai-Choong Wong
National University of Singapore, Singapore
INTRODUCTION
The decreasing cost of networking technology
and network-enabled devices is driving the large
scale deployment of such networks and devices
so as to offer many new and innovative services
to users in ubiquitous computing. For example,
when you carry your mobile laptop or personal
digital assistant (PDA) around, or drive on the
road, various services have been made available,
ranging from finding a local printer to print a
CONTEXT-AWARE SERVICE
DISCOVERY
The basic areas we need to address in order to
create a context-aware service discovery architecture are:
Comparison of the service discovery protocols:
Developer: E-Speak (HP); UDDI (OASIS standards consortium); Jini (Sun); UPnP (Microsoft); Salutation (Salutation consortium); SLP (IETF); Bluetooth SDP (Bluetooth SIG).
Service type: E-Speak (Web services); UDDI (Web services); Jini (hardware devices with JVM (Java Virtual Machine)); UPnP (hardware devices); Salutation (hardware devices); SLP (hardware devices); Bluetooth SDP (only Bluetooth devices).
Central cache repository: E-Speak (yes); UDDI (yes); Jini (yes, lookup server); UPnP (no); Salutation (yes); SLP (yes); Bluetooth SDP (no).
Operation without directory: only partially recoverable (lookup table required; no; yes; yes; yes).
Network transport: E-Speak (TCP/IP); UDDI (TCP/IP); Jini (independent); UPnP (TCP/IP); Salutation (independent); SLP (TCP/IP); Bluetooth SDP (Bluetooth communication).
Programming language: E-Speak (XML); UDDI (independent); Jini (Java); UPnP (independent); Salutation (independent); SLP (independent); Bluetooth SDP (independent).
OS and platform: E-Speak (independent); UDDI (dependent); Jini (JVM); UPnP (dependent); Salutation (dependent); SLP (dependent); Bluetooth SDP (dependent).
1. Platform independent. Based on Java technologies, the OSGi gateway can run on most operating systems.
2. Multiple network technology support. Home network technologies, such as Jini and UPnP, facilitate device and service discovery in the OSGi gateway (Dobrev, 2002).
[Figure: layered architecture with wrappers for the existing service discovery mechanisms on top of OSGi, topped by services representation, context reasoning, and a knowledge base]
Interoperability
The benefits of OSGi make it a feasible interoperation platform for the different service discovery mechanisms. Each of the existing service discovery mechanisms has a wrapper to communicate with the OSGi framework. Wrappers transform the functions of the discovered entities into the form of services and publish those services in the OSGi framework service registry. In the OSGi framework, entities discovered by the existing service discovery protocols are thus represented as services in a standardized way. As the functions of every entity are described using a common convention, entities can understand each other and collaborate to achieve a certain goal.
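A minimal sketch of this wrapper idea, written in Python rather than against the actual Java/OSGi interfaces; the registry class, the two wrappers, and the service description fields are illustrative assumptions.

# Illustrative only: wrappers publish entities found by different discovery
# protocols into one shared registry, using a common description format.
class ServiceRegistry:
    def __init__(self):
        self._services = []

    def register(self, description):
        self._services.append(description)

    def find(self, service_type):
        return [s for s in self._services if s["type"] == service_type]

class UPnPWrapper:
    # Translates a UPnP device announcement into the common description.
    def publish(self, registry, device_name, urn):
        registry.register({"type": "printer", "name": device_name,
                           "protocol": "UPnP", "address": urn})

class SLPWrapper:
    # Translates an SLP service URL into the common description.
    def publish(self, registry, service_url):
        registry.register({"type": "printer", "name": service_url.rsplit("/", 1)[-1],
                           "protocol": "SLP", "address": service_url})

registry = ServiceRegistry()
UPnPWrapper().publish(registry, "OfficePrinter", "urn:upnp:printer:1")
SLPWrapper().publish(registry, "service:printer:lpr://10.0.0.5/laser")
print(registry.find("printer"))   # both entities, in one standardized form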
Context-Awareness
Context information plays an important role in
making the physical spaces smart. Users and
applications often need to be aware of their surrounding context and adapt their behaviors to
context changes. Context-awareness involves
the use of context to provide relevant services to
the user, where relevancy depends on the user's task (Dey, 2000).
An appropriate model should address different
characteristics of context information, such as
dependency and uncertainty. Context acquisition
is closely coupled with sensors to acquire context
data from physical or virtual sensors. In our earlier work, we have proposed an ontology-based
context model to describe context information
in a semantic way, which exhibits features such
as expressiveness, extensibility, ease of sharing
and reuse, and logic reasoning support (Zhang,
2006). Context search provides users and applica-
FUTURE TRENDS
The pervasive availability of embedded devices
in the environment imposes significant technical
and research challenges in service cooperation
and interaction. One aspect that currently attracts particular attention is security, privacy, and trust, which are explained briefly next.
Security includes the three main properties of
confidentiality, integrity, and availability (Stajano,
2002). Confidentiality is concerned with protecting the information/service from unauthorized
access; Integrity is concerned with protecting the
information/service from unauthorized changes;
and Availability is concerned with ensuring that
the information/service remains accessible.
Privacy is the claim of individuals, groups,
or institutions to determine for themselves when,
how, and to what extent information is communicated to others. Privacy is about protecting
CONCLUSION
This paper surveys the existing service discovery
protocols, such as E-speak, UDDI, Jini, UPnP,
Salutation, SLP, and SDP, and discusses the dif-
REFERENCES
Arnold, K. (Ed.). (2001). The Jini specifications
(2nd edition). Addison-Wesley.
Cook, D. J., & Das, S. K. (2005). Smart environments: Technology, protocols and applications. NJ: John Wiley & Sons.
Dey, A. K. (2001). Understanding and using context. Personal and Ubiquitous Computing, special issue on Situated Interaction and Ubiquitous Computing.
Dey, A. K., & Abowd, G. D. (2000). Towards a better understanding of context and context-awareness. CHI 2000 Workshop on the What, Who, Where, When and How of Context-Awareness, April.
Dobrev, P., et al. (2002). Device and service
discovery in home networks with OSGi. IEEE
Communication Magazine, 86-92.
Guttman, E., Perkins, C.,Veizades, J., & Day,
M. (1999). Service location protocol Version 2.
IETF RFC 2608, June. http://www.rfc-editor.
org/rfc/rfc2608.txt
Helal, S. (2002). Standards for service discovery
and delivery. IEEE Pervasive Computing, July/
September.
Key TERMS
Client: An application that is interested in or
requires some other application to perform some
type of work for the client (Intel).
Context: Any information that can be used to
characterize the situation of an entity. An entity
is a person, place, or object that is considered
relevant to the interaction between a user and an
application, including the user and the application
themselves (Dey, 2001).
Context-Awareness: To use context to provide relevant services to the user, where relevancy depends on the user's task (Dey, 2000).
Service: A component or application that performs the work on behalf of a requesting application or client (Intel).
Service Advertisement: Is responsible for advertising a given service description on a directory
service or directly to other hosts in the network.
The effectiveness of an advertisement is measured
as a combination of the extent of its outreach and
the specificity of information it provides up front
about a service (Sen, 2005).
Service Description: Is responsible for
describing a service and the type of context
information in a comprehensive, unambiguous
manner that is machine interpretable to facilitate
automation and human readable to facilitate rapid
formulation by users (Sen, 2005).
This work was previously published in Encyclopedia of Internet Technologies and Applications, edited by M. Freire and M.
Pereira, pp. 119-125, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an imprint
of IGI Global).
Chapter 3.7
Ontological Engineering in
Pervasive Computing
Environments
Athanasios Tsounis
University of Athens, Greece
Christos Anagnostopoulos
University of Athens, Greece
Stathes Hadjiefthymiades
University of Athens, Greece
Izambo Karali
University of Athens, Greece
Abstract
Pervasive computing is a broad and compelling
research topic in computer science that focuses
on the applications of technology to assist users
in everyday life situations. It seeks to provide proactive and self-tuning environments and devices
to seamlessly augment a person's knowledge and
decision making ability, while requiring as little
direct user interaction as possible. Its vision is
the creation of an environment saturated with
seamlessly integrated devices with computing
and communication capabilities. The realisation
Introduction
The vision of pervasive computing presents many
technical issues, such as scaling-up of connectivity requirements, heterogeneity of processors and
access networks and poor application portability over embedded processors. These issues are
currently being addressed by the research community; however the most serious challenges are
not technological but structural, as embedded
processors and sensors in everyday products
imply an explosion in the number and type of
organisations that need to be involved in achieving
seamless interoperability (O'Sullivan, 2003). In a
typical pervasive computing environment (PCE)
there will be numerous devices with computing
capabilities that need to interoperate (Nakajima,
2003). These devices might be of different vendors
and may operate based on different protocols.
Therefore, the key issue in deploying a PCE is
achieving application level interoperability. The
complexity of such a venture is considerable. It is
extremely difficult to reach agreements when the
players involved expand from all the hardware and
software providers (e.g., IBM, HP, Microsoft) to
all the organisations that will equip their products
with computing and communication capabilities
(e.g., coffee machines, refrigerators). Therefore,
we cannot rely on shared a priori knowledge based
on commonly accepted standards to resolve the
issue. Instead, software components must adapt
to their environment at runtime to integrate their
functionality with other software components
seamlessly. An intriguing way of resolving this
issue is the use of semantics, namely the use of
Semantic Web technologies such as ontologies.
In this manner, software entities provide semantically enriched specifications of the services
that they provide and the way they should be
invoked. Moreover, the data that are exchanged
are also semantically enriched, enabling the entities to reason and make effective decisions. This
is particularly important for the description of
contextual information, which is of main interest
in a PCE. As context we identify any information
that is, directly or indirectly, associated with any
entity in the environment.
The novelty of the Semantic Web is that the data are required to be not only machine readable but also machine understandable, as opposed to today's Web, which was mainly designed for human interpretation and use. According to Tim Berners-Lee, the Director of the World Wide Web Consortium, the Semantic Web's goal is to be "a unifying system which will (like the Web for human communication) be as un-restraining as possible so that the complexity of reality can be described" (Berners-Lee, 2001). With the
realisation of a Semantic Web it would be easy
to deploy a wide range of services that would be
almost impossible to manage in the current Web.
Semantics enable developers to create powerful
tools for complex service creation, description,
discovery, and composition. The application
areas of the Semantic Web extend from knowledge repositories to e-commerce and from user
profiling to PCE.
New standards are being developed as a first
step in realising the Semantic Web. The Resource
Description Framework (RDF), which is a Web
mark-up language that provides basic ontological primitives, has been developed by the W3C
(Beckett, 2004). RDF is a language for representing meta-information about resources in the
World Wide Web. However, by generalising the
concept of a Web resource, RDF can also be
used to represent information about things that
can be identified on the Web, by means of URIs.
The DARPA Agent Markup Language + Ontology Inference Layer (DAML+OIL) extends RDF with a much richer set of modelling primitives (Rapoza, 2000). DAML+OIL has been submitted to the W3C as a starting point for the Web Ontology Working Group and led to the creation of the Web Ontology Language (OWL).
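As a minimal sketch of RDF's triple model, the following Python fragment uses the rdflib library; the namespace and resource URIs are made-up examples.

from rdflib import Graph, Literal, Namespace, URIRef

# A small RDF graph describing a Web resource with made-up URIs.
EX = Namespace("http://example.org/terms/")
g = Graph()
doc = URIRef("http://example.org/docs/report-2005")
g.add((doc, EX.title, Literal("Pervasive Computing Report")))
g.add((doc, EX.author, URIRef("http://example.org/people/alice")))
g.add((doc, EX.lastModified, Literal("2005-06-25")))

# Every statement is a (subject, predicate, object) triple: the basic
# ontological primitive that RDF provides for meta-information on the Web.
for subject, predicate, obj in g:
    print(subject, predicate, obj)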
A Service-Oriented Pervasive
Computing Environment
Service-oriented architectures focus on application-level interoperability, in terms of well-established service ontologies (Martin, 2003; Fensel,
2002). This is achieved by means of well-defined
interfaces to various software components. In this
fashion, rapid integration of existing functionality
is achieved in the design and implementation of
new applications. Object Management Group's (OMG) Common Object Request Broker Architecture (CORBA) (Orfali, 1998) is an example of such a widespread service-oriented architecture.
One of the most important issues in a PCE is the
establishment of ad-hoc relationships between
applications or between applications and devices.
As a result, the use of a common mechanism,
which will be able to provide interoperability in
a dynamic fashion, is essential. Furthermore, the separation of the application's functionality from the adopted communication schemes is of considerable importance, as it enables system developers to distinguish between service functionality (e.g., functional and non-functional context-aware service features, as referred to in McIlraith et al., 2001) and system functionality. Hence, the deployment of a PCE reduces to the problem of ad-hoc service discovery, composition (e.g., WSFL, the service composition context-aware paradigm), and execution with minimum a priori knowledge of the services' functionality. While OWL-S is based
on the combination of OWL, WSDL and SOAP,
WSMO uses F-Logic and XML-based features
of Web Services (WS). Based on the previous
languages, significant research has been devoted
to semantic WS (e.g., ODE SWS [Gómez et al., 2004], METEOR [Aggarwal et al., 2004]).
However, interoperability is a difficult task to
accomplish in a highly heterogeneous and volatile
environment. Therefore, for service-oriented architectures to be applicable a well-defined mechanism is required to describe the communication
schemes which are employed, provide data with
Ontological Engineering
The word ontology comes from philosophy, where
it means a systematic explanation of being. In
the last decade, this word has become relevant
to the knowledge engineering community. The
authors in Guarino (1995) propose the words "Ontology" (with a capital "O") and "ontology" to refer to the philosophical and the knowledge engineering concepts, respectively. There are a lot of
relevant definitions, but we keep only two terms
that are relevant, to some extent, to the pervasive
computing paradigm. In Neches et al. (1991) an
ontology is defined as follows:
An ontology defines the basic terms and relations
comprising the vocabulary of a topic area as well
as the rules for combining terms and relations to
define extensions to the vocabulary.
Such a descriptive definition tells us what to do to build an ontology, though only through vague guidelines: it denotes how to identify basic terms and relations between terms, and how to identify rules for combining terms and their relationships. From such a definition one can deduce that an ontology does not include only the terms that are explicitly defined in it, but also the knowledge that can be inferred from it. On this point, the authors in Guarino (1995) consider an
ontology as:
A logical theory, which gives an explicit, partial account of a conceptualisation, where conceptualisation is basically the idea of the world that a person
[Figure: the RDF triple structure, in which rdf:Subject and rdf:Object are linked through rdf:Predicate]
[Figure: an author-centric ontology (Author writes Book, Publisher publishes Book, Book isTitled Title, Author hasName Name) contrasted with a book-centric ontology (Book hasAuthor Author, Book isPublished, Book isTitled Title), with owl:Class, owl:DatatypeProperty, owl:ObjectProperty, and XML Schema datatypes marked in the legend]
Ontology-Based Profiling
and Information Retrieval
In a typical PCE there exists a vast amount of
contextual information available for retrieval. The
traditional solution to the problem of information
retrieval employs keyword-based searching techniques. This solution is inadequate and, therefore,
new searching mechanisms must be developed.
words to meanings and the creation of a meaning-based index structure. In the discussed work,
the authors have solved the problem of an index
structure through the design and implementation
of a concept-based model using domain dependent
ontologies. In other words, the ontology provides
index terms/concepts that can be used to match
with user requests. Furthermore, the generation of
a database query takes place after the keywords
in the user request are matched to concepts in
the ontology. To achieve this, they employ a
proprietary method for query expansion and SQL
query generation.
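A toy sketch of this idea follows; the small concept hierarchy, the keyword-to-concept map, and the table and column names are illustrative assumptions, not the proprietary method of the cited work.

# Map request keywords to ontology concepts, expand with narrower concepts,
# and generate a SQL filter over a (hypothetical) index_terms table.
ONTOLOGY = {  # concept -> narrower concepts
    "DatabaseSystems": ["DeductiveDatabases", "RelationalDatabases"],
    "ArtificialIntelligence": ["NeuralNetworks", "GeneticAlgorithms"],
}
KEYWORD_TO_CONCEPT = {"database": "DatabaseSystems", "ai": "ArtificialIntelligence"}

def expand(keywords):
    concepts = set()
    for word in keywords:
        concept = KEYWORD_TO_CONCEPT.get(word.lower())
        if concept:
            concepts.add(concept)
            concepts.update(ONTOLOGY.get(concept, []))
    return sorted(concepts)

def to_sql(concepts):
    placeholders = ", ".join("?" for _ in concepts)
    return f"SELECT doc_id FROM index_terms WHERE concept IN ({placeholders})", concepts

query, params = to_sql(expand(["database"]))
print(query)
print(params)   # ['DatabaseSystems', 'DeductiveDatabases', 'RelationalDatabases']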
In Wang et al. (2003), a system for classifying text documents based on ontologies for representing the domain-specific concept hierarchy (an example is depicted in Figure 3) is presented. Many researchers have shown that similarity-based classification algorithms, such as k-nearest neighbour and centroid-based classification, are very effective for large document collections. However, these effective classification algorithms still suffer from high dimensionality, which greatly limits their practical performance. Finding the nearest neighbours in
high dimensional space is very difficult because
the majority of points in high dimensional space
are almost equi-distant from all other points. The
k-nearest neighbour classifier is an instance-based
classifier. This means that a training dataset of high
quality is particularly important. An ideal training
document set for each particular category will
cover all the important terms, and their possible
distribution in this category. With such a training set, a classifier can find the true distribution
model of the target domain. Otherwise, a text
that uses only some keywords out of a training
set may be assigned to the wrong category. In
practice, however, establishing such a training
set is usually infeasible. Actually, a perfect training set can never be expected. By searching the
concept hierarchy defined by a domain specific
ontology, a more precise distribution model for a
predefined classification task can be determined.
Figure 3. Example of a domain-specific concept hierarchy: Artificial Intelligence (with Neural Networks and Genetic Algorithms) and Database Systems (with Deductive Databases and Relational Databases).
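A toy sketch of concept-based centroid classification follows; the term-to-concept map, the two categories, and the training documents are illustrative assumptions, not data from the cited study.

from collections import Counter
import math

# Terms are first mapped onto concepts from the hierarchy, which shrinks the
# dimensionality, and documents are then assigned to the nearest class centroid.
TERM_TO_CONCEPT = {"neuron": "NeuralNetworks", "backpropagation": "NeuralNetworks",
                   "crossover": "GeneticAlgorithms", "sql": "RelationalDatabases",
                   "datalog": "DeductiveDatabases"}

def concept_vector(terms):
    return Counter(TERM_TO_CONCEPT[t] for t in terms if t in TERM_TO_CONCEPT)

def cosine(a, b):
    dot = sum(a[k] * b.get(k, 0) for k in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

centroids = {  # one centroid per category, built from toy training documents
    "ArtificialIntelligence": concept_vector(["neuron", "backpropagation", "crossover"]),
    "DatabaseSystems": concept_vector(["sql", "sql", "datalog"]),
}

def classify(terms):
    vec = concept_vector(terms)
    return max(centroids, key=lambda c: cosine(vec, centroids[c]))

print(classify(["backpropagation", "neuron", "sql"]))   # ArtificialIntelligence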
Semantic-Based Service
Interoperability Through
Ontologies in Pervasive
Environments
Although the Web was once just a content repository, it is now evolving into a provider of services. Web-accessible programs, databases, sensors,
and a variety of other physical devices realise
such services. In the next decades, computers
will most likely be ubiquitous and most devices
will have some sort of computing functionality.
Furthermore, the proliferation of intranets, ad-hoc,
and mobile networks sharpen the need for service
interoperation. However, the problem of service
interoperability arises because today's Web is designed primarily for human use. Nevertheless, an
increased automation of services interoperation,
primarily in business-to-business and e-commerce applications, is being noted. Generally,
such interoperation is realised through proprietary
APIs that incorporate hard-coded functionality
in order to retrieve information from Web data
sources. Ontologies can prove very helpful in
the direction of service description for automatic
service discovery, composition, interoperation,
and execution.
In McIlraith et al. (2001), the authors present
an agent technology based on reusable generic
procedures and customising user constraints
that exploits and showcases WS markup. To
realise their vision of Semantic WS, they cre-
use of new services that become available during their lifetime without having been explicitly
told of their existence from the outset. The task
of searching for a system component which can
perform some given service (i.e., service discovery) is the enabling technique that makes loosely-coupled systems possible, and provides a process
by which system components may find out about
new services being offered. Service descriptions
are more complex expressions, which are based
on terms from agreed vocabularies and attempt to
describe the meaning of the service, rather than
simply ascribing a name to it. An essential requirement to service discovery is service description.
A key component in the semantics-rich approach
is the ontology. In the conventional WS approach
exemplified by Web Services Definition Language
(WSDL) or even by DAML Services, the communicative intent of a message (e.g., whether it is
a request or an assertion) is not separated from the
application domain. This is quite different from
the convention from the Multi-Agent Systems
world, where there is a clear separation between
the intent of a message, which is expressed using
an Agent Communication Language (ACL), and
the application domain of the message, which is
expressed in the content of the message by means
of domain-specific ontologies. In this approach,
there is a definite separation of the intent of the
messages and the application domain.
In Medjahed et al. (2003), the authors propose
an ontology-based framework for the automatic
composition of WS. An important issue in the
automatic composition of WS is whether those
services are composable (Berners-Lee, 2001).
Composability refers to the process of checking
if WS to be composed can actually interact with
each other. A composability model is proposed
for comparing syntactic and semantic features of
WS (Figure 4). The second issue is the automatic
generation of composite services. A technique
is proposed to generate composite service descriptions while obeying the aforementioned
composability rules. Such technique uses as
Figure 4. The composability model: syntactic and semantic composability checked at the service and operation levels, covering messages, operation mode, binding, operation semantics, qualitative properties, and composition soundness.
relevance gives an approximation of the composition soundness and the completeness gives
the proportion of composite service operations
that are composable with other services. The
last phase in their approach aims at generating a
detailed description of a composite service. This
description includes the list of outsourced services, mappings between composite services and
component service operations, mappings between
messages and parameters and flow of control and
data between component services.
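A much-simplified sketch of one facet of such a check follows: an operation can feed another if its output message parameters cover the other's expected inputs. The operation names and parameters are illustrative assumptions, not the full composability model of the cited work.

def composable(provider_outputs, consumer_inputs):
    # A provider operation can feed a consumer operation if its outputs
    # cover every input parameter the consumer expects.
    return set(consumer_inputs).issubset(provider_outputs)

operations = {
    "IssueCertificate": {"out": {"certificateId", "issueDate"}},
    "RegisterBusiness": {"in": {"certificateId"}},
    "BookMeetingRoom": {"in": {"roomId", "timeSlot"}},
}

pairs = [("IssueCertificate", "RegisterBusiness"), ("IssueCertificate", "BookMeetingRoom")]
ok = [p for p in pairs if composable(operations[p[0]]["out"], operations[p[1]]["in"])]

# A crude "completeness" figure: the proportion of candidate pairs that compose.
print(ok, len(ok) / len(pairs))   # [('IssueCertificate', 'RegisterBusiness')] 0.5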
Let us imagine a PCE where all devices interoperate through services. In this way, a service-oriented architecture for functionality integration
is being employed. Therefore, the problem of
deploying a PCE reduces to the problem of service description and ad-hoc service discovery,
composition, and execution.
We propose that the services/agents that
represent the various devices share a common
communication scheme, possibly by means of
an ACL or a common ontology that describes
the communication details. However, they do
not share any a priori knowledge about the se-
Examples of Semantic-Based
Service Interoperability
Through Ontologies in PCE
To illustrate the key issues, consider the following
simple scenario (Figure 5). Bob has an important
professional meeting (i.e., human society context) in his office (OfficeA) with Alice. For this
meeting it is important that Bob has a certain
postscript file (FileA) printed (i.e., application
context). Unfortunately, the printer in his office
(PrinterA) is not capable of printing postscript
files. However, there is one such printer (PrinterB)
Figure 5. Context ontology for the example scenario: Entity subclasses (Spatial, Temporal, Activity, Application, Device, Human, Social, Artefact, History) and individuals (Bob, Alice, OfficeA, OfficeB, PrinterA, PrinterB, FileA, Meeting, Agenda, Registry, SrvPrntA, SrvPrntB) linked by properties such as isLocatedAt, isAdjacent, takesPlace, duringInterval, isEngaged, runsOn, boundsService, produces, actsOn, and hasDuty.
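A minimal sketch of how such a context model can be queried follows, using Python's rdflib as a stand-in for the OWL model; the isLocatedAt and isAdjacent properties follow the figure, while the namespace and the supportsFormat property are assumptions added for the example.

from rdflib import Graph, Literal, Namespace

# Toy context graph for the printing scenario.
CTX = Namespace("http://example.org/context/")
g = Graph()
g.add((CTX.Bob, CTX.isLocatedAt, CTX.OfficeA))
g.add((CTX.OfficeA, CTX.isAdjacent, CTX.OfficeB))
g.add((CTX.PrinterA, CTX.isLocatedAt, CTX.OfficeA))
g.add((CTX.PrinterA, CTX.supportsFormat, Literal("pcl")))
g.add((CTX.PrinterB, CTX.isLocatedAt, CTX.OfficeB))
g.add((CTX.PrinterB, CTX.supportsFormat, Literal("postscript")))

# Find a postscript-capable printer in Bob's office or in an adjacent office.
query = """
PREFIX ctx: <http://example.org/context/>
SELECT ?printer WHERE {
  ctx:Bob ctx:isLocatedAt ?office .
  { ?printer ctx:isLocatedAt ?office }
  UNION
  { ?office ctx:isAdjacent ?other . ?printer ctx:isLocatedAt ?other }
  ?printer ctx:supportsFormat "postscript" .
}"""
for row in g.query(query):
    print(row.printer)   # http://example.org/context/PrinterB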
ontology model (expressed in OWL), and as contextual knowledge, the dynamic knowledge that is
inferred from acquired situational information.
In Ranganathan et al. (2003) and RanganathanGAIA (2003), the authors describe GAIA, an
infrastructure for Smart Spaces, which are ubiquitous computing environments that encompass
physical spaces. GAIA converts physical spaces
and the ubiquitous computing devices they contain
into a programmable computing system. It offers
services to manage and program a space and its
associated state. GAIA is similar to traditional
operating systems in that it manages the tasks common to all applications built for physical spaces.
Each space is self-contained, but may interact
with other spaces or modular ontologies (e.g., via
context mappings). GAIA provides core services,
including events, entity presence (e.g., devices,
users and services), discovery, and naming. By
specifying well-defined interfaces to services,
applications may be built in a generic way so that
they are able to run in arbitrary active spaces.
The core services are started through a bootstrap
protocol that starts the GAIA infrastructure.
Finally, GAIA allows application developers to
specify different behaviours of their applications
for different contexts. The use of contextualised ontologies makes it easier for developers to specify context-sensitive behaviour.
One of the main uses of ontologies is that they allow developers to define all the terms that can
be used in the environment. Ontological engineering allows the attachment of precise semantics
to various terms and the clear definition of the
relationships between different terms. Hence, it
prevents semantic ambiguities where different entities in the environment have different ideas of what a particular term means. Different entities in the environment can refer to the ontology
to get a definition of a term, in case they are not
sure (e.g., due to the imprecise conceptual modelling or the nature of the contextual information).
Furthermore, developers employ ontologies for
describing both entities (e.g., concepts) and con-
References
Aggarwal, R., Verma, K., Miller, J., & Milnor,
W. (2004). Constraint driven Web service composition in METEOR-S. University of Georgia,
LSDIS Lab.
Amann, P., & Quirchmayr, G. (2003). Foundation
of a framework to support knowledge management in the field of context-aware and pervasive
computing. Proceedings of the Australasian
Information Security Workshop Conference on
ACSW03 (Vol. 21), Adelaide, Australia.
Anyanwu, K., & Sheth, A. (2003). ρ-Queries:
Enabling querying for semantic associations
on the Semantic Web. Proceedings of the 12th
International Conference on World Wide Web,
Budapest, Hungary.
Artale, A., & Fraconi, E. (1998). A temporal
description logic for reasoning about actions and
plans. Journal of Artificial Intelligence Research,
9, 463-506.
Horrocks, I., Patel-Schneider, P., Boley, H., Tabet, S., Grosof, B., & Dean, M. (2003). SWRL: A
Semantic Web Rule Language Combining OWL
and RuleML. Version 0.5.
Khan, L., McLeod, D., & Hovy, E. (2004). Retrieval effectiveness of an ontology-based model
for information selection. The VLDB Journal
The International Journal on Very Large Data
Bases,13(1), 71-85.
Lei, L., & Horrocks, I. (2003). A software framework for matchmaking based on Semantic Web
technology. Proceedings of the 12th International
Conference on World Wide Web, Budapest, Hungary.
Nakajima, T. (2003). Pervasive servers: A framework for creating a society of appliances. Personal
and Ubiquitous Computing,7(3), 182-188.
Neches, R., Fikes, R., Finin, R., Gruber, T., Senator, T., & Swartout, W. (1991). Enabling technology
for knowledge sharing. AI Magazine, 12, 36-56.
Nejdl, W., Wolpers, M., Siberski, W., Schmitz, C.,
Schlosser, M., Brunkhorst, I., et al. (2003). Superpeer-based routing and clustering strategies for
RDF-based peer-to-peer network. Proceedings
of the 12th International Conference on World
Wide Web, Budapest, Hungary.
Noy, N., & McGuinness, D. (2001). Development
101: A guide to creating your first ontology. Stanford Knowledge Systems Laboratory Technical
Report KSL-01-05 and SMI-2001-0880.
Orfali, R., & Harkey, D. (1998). Client/server
programming with Java and CORBA. Wiley.
O'Sullivan, D., & Lewis, D. (2003). Semantically
driven service interoperability for pervasive com-
This work was previously published in Web Semantics and Ontology, edited by D. Taniar, pp. 334-363, copyright 2006 by IGI
Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 3.8
MASACAD:
ABSTRACT
The evolution of the Internet into the Global Information Infrastructure has led to an explosion
in the amount of available information. Realizing
the vision of distributed knowledge access in
this scenario and its future evolution will need
tools to customize the information space. In this
article we present MASACAD, a multi-agent
system that learns to advise students, and we discuss important problems relating to information customization systems and outline possible solutions. The main idea is to approach
information customization using a multi-agent
paradigm.
Introduction
In our previous work (Hamdi, 2005), we have
presented an e-learning system that provides a
service to a student that checks whether lecturers
Background
The MASACAD system makes use of a wide
range of technologies. These are discussed briefly
in the following paragraphs.
E-Learning: Several years ago, online education was considered as an experimental approach with more disadvantages than advantages.
However, today it should be considered not only
a complementary educational resource but also
a serious alternative that competes with conventional and now classical methods. The adaptation
to the new features and services of the e-learning environment is not immediate and requires
experience, time, investment, pedagogical and
technical resources, and government or campus
administration support. At the UAE University
there exists enormous interest in the area of online education. Rigorous steps are taken towards
the creation of the technological infrastructure
(hardware, software, and communications) and
the academic infrastructure (course materials,
teacher-student communication) for the improvement of teaching and learning. MASACAD, the
academic advising system described in this article
is to be understood as a tool that uses network
technology to support learning and as part of the
e-learning environment at the university. We use
it to demonstrate the capability of exploiting the
digital infrastructure, enabled by the online mode
domains, when evaluated in terms of their generalization ability (Shavlik, Mooney, & Towell,
1991). Although the almost complete ignorance
of problem-specific theory by empirical learning systems may mean that they do not address
important aspects of induction, it is interesting to
see in the following study, how domain-specific
knowledge about academic advising of students
can be employed by a domain-free neural network
learning algorithm.
Web Mining: Current Web mining research
aims at creating systems that (semi) automatically tailor the content delivered to the user from
a Web site. This is usually done by mining the Web, both its contents and the user's interaction with it (Cooley, Mobasher, & Srivastava, 1997). Mining data from the Web is therefore different from mining data from other sources of information. For the problem of academic advising at hand, the Web represents the main source of information. A solution to this problem will therefore mine the Web. The Web mining technique adopted here is based on the multi-agent paradigm combined with machine learning and ideas from user modeling.
Academic Advising
The general goal of academic advising is to assist
students in developing educational plans which
are consistent with academic, career, and life
goals and to provide students with information
and skills needed to pursue those goals. More
specifically, advisors will assist students in the
following ways:
Resources Needed
for Academic Advising
There are a lot of diverse resources that are
required to deal with the problem of academic
advising (see Table 1).
First of all, one needs the student profile that
includes the courses already attended, the corresponding grades, the interests of the student
concerning the courses to be attended, and perhaps
much other information. The part of the profile
consisting of the courses already attended, the
corresponding grades, and so on, is maintained
by the university administration in appropriate
databases to which the access is restricted to some
administrators. The part of the profile consisting
of the interests of the student concerning the
Table 1. Resources needed for academic advising:
Courses Already Attended: all information concerning courses already attended by the student; stored in a database; access is restricted to the administration; frequently changed.
Other Information: other useful information about the student.
Courses Offered: all information available about courses offered in the semester for which advising is needed; available online and accessible for everyone; frequently changed.
Expertise: documented knowledge:
Details of the
Multi-Agent-Based Solution
MASACAD, the multi-agent system described in
the following, offers a service to a student who
needs academic advising, that is, he wants to
know which courses he should enroll himself in
for the coming semester. The service is currently
available only for computer science students. In
the current version, no attempt is made to learn
the student profile, that is, the part of the profile
consisting of the interests of the student regarding the courses to be attended. The interests are
therefore asked for from the student before advising him. The focus in the current version is on
the individual agents of the system, on how they
cooperate to solve the problem, and on how one
of them, namely, the Learning Agent, learns to
perform the advising process by adopting a supervised learning solution using neural networks
(see Advising Procedure).
As a solution to the problems of network
communication, we use Bee-gent (Bonding and
Encapsulation Enhancement Agent) (Kawamura,
Hasegawa, Ohsuga, & Honiden, 2000), a communication framework based on the multi-agent
model. The Bee-gent framework is comprised of
[Figure: the MASACAD architecture, with agent wrappers for the User System (GUI Application), the Grading System (database agent), and the Course Announcement System (WWW agent reached over a URL connection), an IRS Searcher mobile mediation agent that moves between them through numbered move and info-reply steps, and a Learning Agent]
Grading System
The application Grading System is a database application for answering queries about the students and the courses they have already taken. The agent wrapper for the application Grading System is responsible for invoking the database by using JDBC (Java DataBase Connectivity), transforming requests for information about the students and courses into queries to the database system, collecting the results of the queries, and finally returning these results to the mediation agent. Table 2 summarizes the steps performed by the agent wrapper of the application Grading System.
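A minimal Python stand-in for this behaviour follows; the real wrapper invokes the database from Java via JDBC, so the sqlite3 module and the table layout here are illustrative assumptions.

import sqlite3

# Stand-in database with a made-up schema of (student, course, grade) rows.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE grades (student_id TEXT, course_id TEXT, grade TEXT)")
conn.executemany("INSERT INTO grades VALUES (?, ?, ?)",
                 [("s100", "CS201", "A"), ("s100", "CS202", "B+")])

def handle_request(student_id):
    # Turn the mediation agent's request into a query and reply with the rows.
    rows = conn.execute(
        "SELECT course_id, grade FROM grades WHERE student_id = ?",
        (student_id,)).fetchall()
    return rows   # the reply sent back to the mediation agent

print(handle_request("s100"))   # [('CS201', 'A'), ('CS202', 'B+')]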
Mediation Agent
The mediation agent realizes services by interacting with the agent wrappers on the basis
of conversations (sending and receiving XML
[eXtensible Markup Language] messages). When
the mediation agent migrates, it carries its own
program, data and current state. Frequency of
communication is reduced compared to a purely
message-based system and network loads are decreased largely because communication links can
be disconnected after the launch of the mediation
agent. Processing efficiency is improved because
the mediation agent communicates with the applications locally. The behavior of the mediation
agent is described in Table 4.
User System
The agent wrapper for the User System application creates a mediation agent (Searcher), which
migrates to the agent wrappers of the applications
Grading System and Course Announcement
System to retrieve the needed information.
After that, the GUI Application is contacted
to give the opportunity to the student to express
his interests. The Advising Procedure (see next
Prepare data to be sent to the Web server (this data consists of the request for information communicated by the mediation agent, represented in the appropriate way as expected by the Web application).
Send data to the Web server (the CGI (Common Gateway Interface) program on the server site reads this information, performs the appropriate actions, and then sends information back to the agent wrapper via the same URL).
Extract information from this data, that is, extract the course identifiers of the offered courses.
Receive request for information retrieval from the agent wrapper for the User System.
Move to the agent wrapper for the Grading System and judge the migration result:
  If failure: end.
  If success: request retrieval of (course, grade) pairs for the given student ID and receive the reply.
Move to the agent wrapper for the Course Announcement System and judge the migration result:
  If failure: end.
  If success: request retrieval of the IDs of offered courses for the given semester, college, and subject, and receive the reply.
Move to the agent wrapper for the User System and judge the migration result:
  If failure: end.
  If success: report the retrieved results to the agent wrapper for the User System.
End.
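A minimal sketch of the retrieval flow listed above follows; Bee-gent's agent migration is simulated here by plain method calls, and the wrapper stubs are illustrative assumptions.

# The mediation agent visits the Grading System and Course Announcement System
# wrappers in turn, then reports the collected results back to the User System.
class MediationAgent:
    def __init__(self, grading_wrapper, course_wrapper):
        self.grading_wrapper = grading_wrapper
        self.course_wrapper = course_wrapper

    def retrieve(self, student_id, semester):
        grades = self.grading_wrapper.handle_request(student_id)
        if grades is None:            # migration or retrieval failed: end
            return None
        offered = self.course_wrapper.offered_courses(semester)
        if offered is None:
            return None
        return {"grades": grades, "offered": offered}   # reported to the User System

class GradingStub:
    def handle_request(self, student_id):
        return [("CS201", "A"), ("CS202", "B+")]

class CourseStub:
    def offered_courses(self, semester):
        return ["CS301", "CS305", "CS310"]

agent = MediationAgent(GradingStub(), CourseStub())
print(agent.retrieve("s100", "Fall"))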
When a GUI event occurs (the student enters ID, password, and e-mail address the first time, or asks again for advice by clicking the button Get Advice Again), or when periodic monitoring of changes in the student profile and offered courses is due:
  Create the mediation agent.
  Request the mediation agent to perform information retrieval.
  Receive the results from the mediation agent.
  If the information retrieval was initiated by a GUI event then:
    Output the information concerning the offered courses to the GUI (the GUI Application represents this information appropriately, as boxes that can be checked by the student).
    When a GUI event occurs (the student clicks the Get Advice button):
      Prepare the input (an input vector stored in a text file) for the Advising Procedure.
      Invoke the Advising Procedure.
      Process the results (found in a text file) and output them to the GUI.
  Else (periodic monitoring): compare the retrieved information with the old version and alert the user via e-mail in case something has changed.
Learning Phase
The aim of the learning phase was to determine
the most suitable values for the learning rate, the
size of the network, and the number of training
cycles that are needed for the convergence of the
network.
Learning Rate: To determine the most suitable learning rate, experiments with 10 network configurations 85-X-X-85, with X in {3, 5, 7, 9, 20, 30, 50, 85, 100, 120}, were performed (85-X-X-85 represents a network with 85 input neurons, X neurons in each of the two hidden layers, and 85 output neurons). For each of these networks, experiments with the five learning rates 0.01, 0.25, 0.50, 0.75, and 1.0 were conducted. In each of these 50 experiments, the network was allowed to learn for a period of 1,000,000 cycles. After each epoch of 50,000 cycles the average selection error for the 50 pairs from the selection set was calculated. From the 50 experiments it was clear that the learning rate 0.01 is the most suitable one for this application because it produces the smallest selection error in most of the cases and, more importantly, it causes the selection error to decrease continuously, which indicates convergence of the network.
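A miniature version of such an experiment is sketched below; the network is far smaller than the 85-X-X-85 configurations, the data are random stand-ins for the real training and selection pairs, and the sigmoid units and squared-error measure are assumptions made for the example.

import numpy as np

rng = np.random.default_rng(0)
X_train, Y_train = rng.random((60, 8)), rng.random((60, 8))
X_sel, Y_sel = rng.random((10, 8)), rng.random((10, 8))

def train(hidden, lr, epochs=500):
    # Two hidden layers of `hidden` sigmoid units, trained by gradient descent.
    sizes = [8, hidden, hidden, 8]
    W = [rng.normal(0, 0.5, (a, b)) for a, b in zip(sizes[:-1], sizes[1:])]
    for _ in range(epochs):
        acts = [X_train]
        for w in W:                                   # forward pass
            acts.append(1.0 / (1.0 + np.exp(-acts[-1] @ w)))
        delta = (acts[-1] - Y_train) * acts[-1] * (1 - acts[-1])
        for i in range(len(W) - 1, -1, -1):           # backward pass
            grad = acts[i].T @ delta / len(X_train)
            if i > 0:
                delta = (delta @ W[i].T) * acts[i] * (1 - acts[i])
            W[i] -= lr * grad
    out = X_sel
    for w in W:                                       # average selection error
        out = 1.0 / (1.0 + np.exp(-out @ w))
    return float(np.mean((out - Y_sel) ** 2))

for lr in (0.01, 0.25, 0.75):
    print(lr, train(hidden=5, lr=lr))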
Network Size: Figure 2 shows the average
selection error for the 50 pairs from the selection
set plotted as a function of the number of training
cycles for the 10 different network configurations
85-X-X-85 with X in {3, 5, 7, 9, 20, 30, 50, 85,
100, 120}. In all 10 cases the network was allowed
to learn for a period of 1,000,000 cycles and
the learning rate was set to 0.01. The network
configurations 85-X-X-85 with X in {50, 85, 100,
120} seem to work best in combination with the
learning rate 0.01. To determine which one of
them is more suitable, a longer learning period
is needed. The results for a learning period of
10,000,000 cycles are illustrated in Figure 3. The
configurations 85-100-100-85 and 85-120-120-85
cause the selection error to decrease continuously
Figure 2. Average selection error for the 50 selection-set pairs as a function of training cycles (up to 1,000,000) for the 10 network configurations 85-X-X-85, learning rate 0.01.
Figure 3. Average selection error as a function of training cycles over 10,000,000 cycles for the configurations 85-50-50-85, 85-85-85-85, 85-100-100-85, and 85-120-120-85, learning rate 0.01.
Testing Phase
The final model was tested with the test set data.
For 42 of the 50 test cases (84 percent) the network's
actual output was exactly the same as the target
output, that is, the network suggested the same
courses in the same order as specified by the test
examples. In the remaining eight test cases, the
target courses were always present at the beginning of the course list produced by the network.
However, the network proposed some additional
courses. The courses proposed additionally occur
always at the end of the course list. This, in addition to the fact that the system is an advisory one,
makes these errors tolerable. In four of the eight
cases the additional courses were correct choices.
In the other four cases some of the additional
courses were wrong choices because the student
has not yet taken their prerequisite courses.
System Evaluation
To evaluate the advising multi-agent system in a real academic environment, 20 computer science students at different stages of study were involved.
Each of them was asked to use the system to get
academic advice for the coming term. Most of
the problems that occurred at the beginning were of a technical nature, mainly concerning communication over the network. It took a while to deal with these problems and to obtain a relatively stable system.
The advice delivered by the system in each
of the 20 cases was analyzed carefully by the
concerned student together with an academic
adviser with the aim of evaluating its suitability
and detecting possible errors. The results were
very encouraging. In none of the 20 cases was the
advice found to contain errors, that is, to contain
courses that, if chosen, will violate the university regulations such as those concerning course
prerequisites. Also, all of the results (suggested
courses) were judged by the concerned students
and academic advisers to be suitable and in most
of the cases even very appropriate to be taken by
the students. Actually, 15 of the 20 students took
during the term for which advising was needed
exactly the courses suggested by the advising
system. In one of the remaining five cases, the
student, because of her poor health, was later (at
the beginning of the term) forced to drop three
courses and to take only two of the five suggested
ones. In another case, the student didn't take any
course during the term because of leave of absence
for the whole term. In the other three cases, each
of the students exchanged a suggested course (the
same course in all three cases) with another one.
Later, it became clear that the three students were in a clique and that one of them didn't care for the instructor of the course, so they all moved to the other course.
this complicates the creation of the agent wrappers (how to wrap an unknown application?), as
well as the creation of the mediation agent (which
route to take to navigate to the different agent
wrappers, how to communicate with the different
agent wrappers).
Related Research
In recent years, the administration of academic
advising has undergone radical transformation as
technological developments have altered the processes by which information is collected, stored,
and accessed; the systems by which communication is enabled; and the structures by which transactions are conducted. Technological innovations
have created an abundance of opportunities for
new practices and enhanced services frequently
characterized as real-time, student-centered,
and any time, any place (Moneta, 1997, p. 7). The
technological environments on many campuses
are evolving rapidly and comprise numerous elements: information dissemination, transactional
interaction, communications applications, and
educational technologies. Steele and McDonald
(n.d.) contains an annotated bibliography compiled by George Steele and Melinda McDonald for research related to technology and advising.
The problem of course assignment has been
studied in the literature from different angles.
Advising tools are usually intended to complement the student-advisor relationship. The course-planning consultant (CPC), for example, is a
computerized Web-based prerequisite checker
developed for students in the General Engineering
undergraduate curriculum at the University of Illinois at Urbana-Champaign. It was reported that
Conclusion
Academic advising is an important area in any
university, which seldom gets adequate resources.
Hence, any system that can support this operation
will be worthwhile. The idea of realizing such a
system is very interesting and highly related to
the current research and development trends. In
this paper, MASACAD, a well-defined architecture for a multi-agent system addressing the problem of academic advising, was proposed. It
uses intelligent agents and neural networks for
learning and recommendation. The prototype
implementation and preliminary testing show
that the multi-agent paradigm, combined with
ideas from machine learning, user modeling, and
Web mining, would be a good approach for the
References
Carpenter, G. A., Grossberg, S., Markuzon, N.,
Reynolds, J. H. & Rosen, D. B. (1992). Fuzzy
ARTMAP: A neural network architecture for
incremental supervised learning of analog multidimensional maps. IEEE Transactions on Neural
Networks, 3, 698-713.
Cooley, R., Mobasher, B. & Srivastava, J. (1997,
November). Web mining: Information and pattern
discovery on the World Wide Web. In Proceedings of the 9th IEEE International Conference on
Tools with Artificial Intelligence (ICTAI'97) (pp.
558-567), Newport Beach, CA.
Course Planning Consultant (CPC). (n.d.). Background and guest user information. Retrieved May
28, 2005, from http://www.ge.uiuc.edu/ugrad/advising/cpc.html
Feigenbaum, E. A. (1977). The art of artificial intelligence: Themes and case studies of knowledge
engineering. In Proceedings of the 5th International Joint Conference on Artificial Intelligence
(pp. 1014-1029), Cambridge, MA.
Hamdi, M. S. (2005). Extracting and customizing
information using multi-agents. In A. Scime (Ed.),
Web mining: Applications and techniques (pp.
228-252). Hershey, PA: Idea Group Publishing.
Jackson, P. (1999). Introduction to expert systems
(3rd ed.). Harlow, England: Addison-Wesley.
This work was previously published in International Journal of Intelligent Information Technologies, Vol. 2, Issue 1, edited
by V. Sugumaran, pp. 1-20, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI
Global).
Chapter 3.9
Abstract
INTRODUCTION
In this chapter wearable computers are considered from the perspective of human factors. The
basic argument is that wearable computers can
be considered as a form of prosthesis. In broad
terms, a prosthesis could be considered in terms
of replacement (i.e., for damaged limbs or organs),
correction (i.e., correction to normal vision or
hearing with glasses or hearing aids), or enhancement of some capability. Wearable computers offer
the potential to enhance cognitive performance
and as such could act as a cognitive prosthesis,
rather than as a physical prosthesis. However,
wearable computers research is still very much
at the stage of determining how the device is to
be added to the body and what capability we are
enhancing.
Moore's law continues to guarantee that processors will get smaller, and work in the field of
micro-electrical-mechanical systems (MEMS)
shows how it is possible to create application
specific processors that are small enough to be
incorporated into buttons on clothing or into
jewelry (Figure 1a).
Furthermore, it is feasible to assume the
widespread development of general purpose processors, such as the mote (and similar) concept,
that combine low power with sufficient processing capability to deal with a number of different
sensors (Figure 1b).
One direction for wearable computers is that
the miniaturization of technology will mean
it is possible to implant processors under the
(1998) demonstrate that for general purpose activity, the forearm would be an appropriate place to
mount the device. This location is attractive as it
allows the wearer to easily move the display into
the field of vision. However, Knight and Baber
(2007) have questioned this location or at least
raised concerns. Recording shoulder and upper arm muscle activity and measuring perceptions of exertion while participants interacted with arm-mounted computers of different weights, they found that the mere act of holding the arm in an appropriate posture to interact with an arm-mounted computer was sufficient to exceed recommended levels of muscle activity for sustained activity. In addition, it induced symptoms of fatigue after only two minutes, and the addition of weight in the form of the mounted technology compounded this physical effect.
that is, information presented to one eye competes for attention with information presented
to the other, which results in one information
source becoming dominant and for vision to be
directed to that source. A consequence of this
phenomenon is that the wearer of a monocular
display might find it difficult to share attention
between information presented to the display
eye and information seen through the free eye.
This problem is compounded by the field of view
of such displays.
The field of view for the commercial monocular
HMDs ranges from 16° to 60°, which is considerably less than that of normal vision (around 170° for each eye; Marieb, 1992). A narrow field of view
can degrade performance on spatial tasks such
as navigation, object manipulation, spatial awareness, and visual search tasks. Restrictions on field
of view will tend to disrupt eye-head coordination
and to affect perception of size and space (Alfano
& Michel, 1990). One implication of a restricted
field of view is that the wearer of a see-through
HMD will need to engage in a significant amount
of head movement in order to scan the environment (McKnight & McKnight, 1993). Seagull and
Gopher (1997) showed longer time-on-task in a
flight simulator when using a head down visual
display unit than with a head-mounted, monocular
display. Thus, it appears that a monocular display
While it is possible that wearing technology affects user performance, it is also possible that
the physical activity of the person can affect the
computer, for example through the use of sensors
to recognize actions and use this recognition to
respond appropriately.
Given the range of sensors that can be attached
to wearable computers, there has been much interest in using data from these sensors to define and
recognize human activity. This has included work
using accelerometers (Amft et al., 2005; Junker
et al., 2004; Knight et al., 2007; Ling & Intille,
2004, Van Laerhoven & Gellersen, 2004; Westeyn
et al., 2003) or tracking of the hand (Ogris et al.,
2005). The approach is to collect data on defined
movements to train the recognition systems, and
then use these models to interpret user activity.
At a much simpler level, it is possible to define
thresholds to indicate particular postures, such as
sitting or standing, and then to use the postures
to manage information delivery (Bristow et al.,
2004). The use of data from sensors to recognize
human activity represents the merging of the
research domains of wearable computers with
that of pervasive computing, and implies the
recognition not only of the actions a person is
performing but also the objects with which they
are interacting (Philipose, et al., 2004; Schwirtz
& Baber, 2006).
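A minimal sketch of the threshold approach mentioned above follows; the axis convention and the cut-off values are illustrative assumptions, not figures from the cited studies.

import math

def classify_posture(ax, ay, az):
    # ax, ay, az are accelerations in g; az points along the spine when upright.
    magnitude = math.sqrt(ax * ax + ay * ay + az * az)
    if magnitude > 1.4:          # large deviation from 1 g: the wearer is moving
        return "walking"
    tilt = math.degrees(math.acos(max(-1.0, min(1.0, az / magnitude))))
    return "standing" if tilt < 30 else "sitting"   # trunk inclination threshold

print(classify_posture(0.0, 0.05, 0.99))   # standing
print(classify_posture(0.1, 0.65, 0.70))   # sitting
print(classify_posture(0.9, 0.80, 1.10))   # walking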
An area of current interest for the wearable or
ubiquitous computing communities is the interpretation of human movement related to maintenance
and related activity. The artifacts with which the
person is interacting could be instrumented. For
example, a simple approach is to fit switches on
Supporting Memory
A common example used to illustrate the benefits
of a wearable computer is what can be termed
the "context-aware memory." Imagine you are attending a conference (or any large social gathering) and having to remember someone's name.
Systems using some form of badge or face-recognition have been proposed to help with such
situations; the computer would register the person
and provide you with name and some additional
details about the person, for example when you
last met, what research interests are listed on
their web-page, where they work, and so forth.
There has been little research on whether and how
these systems improve memory, and this example
points to a possible confusion between supporting processes involved in recalling information
from memory and the provision of contextually relevant information. An associated question is
whether wearable computers (particularly having
a head-mounted display) positioned on the eye
can have an impact on recall. The analogy is
with the tourist watching a parade through the
lens of a video camera: does the act of recording something weaken the ability to process and
recall information? Baber et al. (2001) use a search
task coupled with surprise recall to show that,
in comparison with not using any technology,
participants using a digital camera and wearable
computer conditions showed lower performance,
and that overall the wearable computer showed
the biggest impairment in recall. There are many
reasons why interruption at initial encoding can
limit the ability to remember something, and the
question is whether the head-mounted display
serves to interrupt encoding; either due to distraction (with a host of information appearing on the
screen), or through limitations of field of view, or
for some other reasons.
the participants in the paper condition would order the tests as they saw fit (Baber et al., 1999b,
Ockerman & Pritchett, 1998).
In terms of interacting with wearable computers and the appropriate devices to use, there has been very little work to date. While one might assume that the optimal interaction techniques would be ones that support hands-free interaction, such as speech recognition, studies suggest that walking has a negative impact on speech recognition performance (Oviatt, 2000; Price et al., 2004). In terms of entering data, Thomas et al. (1997) showed that a forearm-mounted QWERTY keyboard led to superior performance over a five-button chording device or a virtual keyboard controlled using an isometric button. However, one might question the recommendation of a forearm-mounted device on consideration of musculoskeletal strain. In terms of selecting objects on a display, Thomas et al. (1998) found that a touchpad mounted on the forearm was preferred by users, but that one mounted on the thigh led to superior performance when sitting, kneeling, or standing. Thus, the mounting of a pointing device can have a bearing on performance (although one might question whether pointing is an appropriate means of performing selection tasks on a wearable computer). Zucco et al. (2006) considered the performance of drag-and-drop tasks while stationary and whilst walking, using different devices. They found that a gyroscopic mouse led to the best performance while stationary, but that the touchpad or trackball led to better performance when walking.
DISCUSSION
Wearable computers continue to raise many significant challenges for human factors research. These challenges involve not only cognitive aspects of presenting information, but also perceptual aspects of displaying the information against the backdrop of the everyday environment and physical aspects of mounting the devices on the person. This chapter has overviewed some of the developments in the field and offered some consideration of how these human factors can be addressed. While the field is largely motivated by technological advances, there is a need to carefully ground the developments in the physical and cognitive characteristics of the humans who are intended to wear them.
REFERENCES
Alfano, P. L., & Michel, G. F. (1990). Restricting the field of view: Perceptual and performance effects. Perceptual and Motor Skills, 70, 35-45.
Amft, O., Junker, H., & Tröster, G. (2005). Detection of eating and drinking arm gestures using inertial body-worn sensors. Ninth International Symposium on Wearable Computers ISWC 2005 (pp. 160-163). Los Alamitos, CA: IEEE Computer Society.
Antifakos, S., Michahellis, F., & Schiele, B. (2002).
Proactive instructions for furniture assembly.
UbiComp 2002 4th International Conference on
Ubiquitous Computing (pp. 351-360). Berlin:
Springer.
Buechley, L. (2006) A construction kit for electronic textiles. In Digest of Papers of the 10th
International Symposium on Wearable Computers
(pp. 83-90). Los Alamitos, CA: IEEE Computer
Society.
El Kaliouby, R., & Robinson, P. (2003). The emotional hearing aid: An assistive tool for autism. HCI International.
Farringdon, J., Moore, A.J., Tilbury, N., Church,
J., & Biemond, P.D. (1999). Wearable sensor badge
and sensor jacket for contextual awareness. Digest
of Papers of the 3rd International Symposium on
Wearable Computers (pp. 107-113). Los Alamitos,
CA: IEEE Computer Society.
Feiner, S., MacIntyre, B., Hollerer, T., & Webster,
A. (1997). A touring machine: Prototyping 3D
mobile augmented reality systems for exploring the urban environment. Digest of Papers
of the 1st International Symposium on Wearable
Computers, (pp. 74-81). Los Alamitos, CA: IEEE
Computer Society.
Frohlich, D., & Tallyn, E. (1999). Audiophotography: Practice and prospects. CHI '99 Extended Abstracts (pp. 296-297). New York: ACM.
Gemperle, F., Kasabach, C., Stivoric, J., Bauer,
M., & Martin, R. (1998). Design for Wearability.
Digest of Papers of the 2nd International Symposium on Wearable Computers (pp. 116-123). Los
Alamitos, CA: IEEE Computer Society.
Healey, J., & Picard, R.W. (1998). StartleCam: A cybernetic wearable computer. Proceedings of the 2nd International Symposium on Wearable Computers (pp. 42-49). Los Alamitos, CA: IEEE Computer Society.
Hinckley, K., Pierce, J., Horvitz, E., & Sinclair, M.
(2005). Foreground and background interaction
with sensor-enhanced mobile devices. ACM Trans.
Computer-Human Interaction, 12(1), 31-52.
Huang, C-T., Tang, C-F., & Shen, C-L. (2006). A wearable textile for monitoring respiration, using a yarn-based sensor. Digest of Papers of the 10th International Symposium on Wearable Computers (pp. 141-142). Los Alamitos, CA: IEEE Computer Society.
Philipose, M., Fishkin, K. P., Perkowitz, M., Patterson, D. J., Fox, D., Kautz, H., & Hahnel, D. (2004). Inferring activities from interactions with objects. Pervasive Computing 2004, 50-57.
Picard, R. (1997). Affective computing. Cambridge, MA: MIT Press.
Post, E.R. & Orth, M. (1997). Smart fabric, or
wearable clothing. Digest of Papers of the 1st
International Symposium on Wearable Computers
(pp. 167-168). Los Alamitos, CA: IEEE Computer
Society.
Price, K. J., Min, L., Jinjuan, F., Goldman, R., Sears, A., & Jacko, J. A. (2004). Data entry on the move: An examination of nomadic speech-based text entry. User-centered interaction paradigms for universal access in the information society (pp. 460-471). Berlin: Springer.
Rash, C.E., Verona, R.W., & Crowley, J.S. (1990). Human factors and safety considerations of night vision systems flight using thermal imaging systems. Proceedings of SPIE: The International Society for Optical Engineering, 1290, 142-164.
Rhodes, B.J., & Starner, T. (1997). Remembrance agent: A continuously running automated information retrieval system. Proceedings of the First International Conference on the Practical Application of Intelligent Agents and Multi-Agent Technology (PAAM '96) (pp. 487-495).
Rohaly, A. M., & Karsh, R. (1999). Helmet-mounted displays. In J. M. Noyes & M. Cook (Eds.), Interface technology: The leading edge (pp. 267-280). Baldock: Research Studies Press Ltd.
Sheridan, J. G., Lafond-Favieres, V., & Newstetter, W. C. (2000). Spectators at a Geek Show: An
Ethnographic Inquiry into Wearable Computing,
Digest of Papers of the 4th International Symposium on Wearable Computers (pp. 195-196). Los
Alamitos, CA: IEEE Computer Society.
Starner, T. (1999). Wearable computing and contextual awareness. Unpublished Ph.D. thesis, Massachusetts Institute of Technology, USA.
Stiefmeier, T., Ogris, G., Junker, H., Lukowicz, P., & Tröster, G. (2006). Digest of Papers of the 10th International Symposium on Wearable Computers (pp. 97-104). Los Alamitos, CA: IEEE Computer Society.
Stein, R., Ferrero, S., Hetfield, M., Quinn, A.,
& Krichever, M. (1998). Development of a commercially successful wearable data collection
system. Digest of Papers of the 2nd International
Symposium on Wearable Computers (pp. 18-24).
Los Alamitos, CA: IEEE Computer Society.
Strachan, S., Williamson, J., & Murray-Smith, R. (2007). Show me the way to Monte Carlo: Density-based trajectory navigation. In Proceedings of the ACM SIGCHI Conference, San Jose.
(2006). Activity recognition of assembly tasks using body-worn microphones and accelerometers. IEEE Transactions on Pattern Analysis and Machine Intelligence.
Tan, H. Z., & Pentland, A. (1997). Tactual displays for wearable computing. Digest of Papers
of the 1st International Symposium on Wearable
Computers (pp. 84-89). Los Alamitos, CA: IEEE
Computer Society.
Wellman, B. (2001). Physical place and cyberspace: The rise of personalized networks. International Journal of Urban and Regional Research,
25, 227-252.
KEY TERMS
Activity Models: Predictive models of human activity, based on sensor data.
Augmentation Means: Devices that can augment human behaviour (a term coined by Doug Engelbart), covering: Tools & Artifacts: the technologies that we use to work on the world, which supplement, complement, or extend our physical or cognitive abilities; Praxis: the ac-
Endnotes
1. http://www.intel.com/research/exploratory/motes.htm
2. Bainbridge (1987) argued that full automation can lead to the ironic situation that the role of the human operator is to intervene when something goes wrong. However, the automation is such that the human is locked out of the process and has little understanding as to what is happening. Consequently, the human will not be able to intervene in an informed and efficient manner. Ultimately, it means that, by designing the human out of the system, the potential for a flexible and intelligent response to unknown situations is lost.
3. http://www.bodymedia.com/main.jsp
4. http://www.extra.research.philips.com/pressmedia/pictures/wearelec.html
This work was previously published in Handbook of Research on User Interface Design and Evaluation for Mobile Technology, edited by J. Lumsden, pp. 158-175, copyright 2008 by Information Science Reference, formerly known as Idea Group
Reference (an imprint of IGI Global).
Chapter 3.10
ABSTRACT
In this chapter we discuss a number of recent studies that demonstrate the use of rational analysis
(Anderson, 1990) and cognitive modelling methods to understand complex interactive behaviour
involved in three tasks: (1) icon search, (2) graph
reading, and (3) information retrieval on the World
Wide Web (WWW). We describe the underlying
theoretical assumptions of rational analysis and the
adaptive control of thought-rational (ACT-R) cognitive architecture (Anderson & Lebiere, 1998),
a theory of cognition that incorporates rational
analysis in its mechanisms for learning and decision making. In presenting these studies we aim
to show how such methods can be combined with
eye movement data to provide detailed, highly
constrained accounts of user performance that are
grounded in psychological theory. We argue that
INTRODUCTION
With the rapid increase in Internet use over the past
decade there is a growing need for those engaged
in the design of Web technology to understand the
human factors involved in Web-based interaction.
Incorporating insights from cognitive science
about the mechanisms, strengths, and limits of
human perception and cognition can provide a
number of benefits for Web practitioners. Knowledge about the various constraints on cognition (e.g., limitations on working memory), patterns of strategy selection, or the effect of design
RATIONAL ANALYSIS
Box 1. Example production rules (condition-action pairs of the form IF ... AND ... THEN ...), one of which sets the goal to search the WWW for "eudaimonia" (control state)
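Box 1's rules are condition-action productions that match against the current goal (control state) and, when they fire, update it. The sketch below illustrates the same idea in Python rather than ACT-R's own notation; the goal slots and the single rule are assumptions made for illustration, not the chapter's actual model.

```python
# Illustrative sketch of condition-action production rules of the kind shown
# in Box 1. The goal slots ("control_state", "search_term") and the rule body
# are assumptions made for illustration only.

goal = {"control_state": "start", "search_term": "eudaimonia"}

def start_www_search(g):
    # THEN: set the goal to search the WWW for the search term (control state).
    g["control_state"] = "searching-www"
    print(f"Searching the WWW for '{g['search_term']}'")

productions = [
    # IF the control state is "start" AND there is a search term
    # THEN set the goal to search the WWW for it.
    (lambda g: g["control_state"] == "start" and bool(g["search_term"]),
     start_www_search),
]

def recognise_act_cycle(g):
    """Fire the first production whose condition matches the current goal."""
    for condition, action in productions:
        if condition(g):
            action(g)
            return True
    return False

recognise_act_cycle(goal)   # goal's control state is now "searching-www"
```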
MODELLING INTERACTIVE
BEHAVIOUR
In the following section, we will summarise a
number of recent studies which employ rational
analysis, cognitive modelling, eye tracking, or a
combination of all three, to understand human
performance in Web-based or HCI tasks. We
first discuss recent efforts to model information
foraging and interactive search on the WWW.
These studies show how ACT-R and rational
analysis can be successfully applied to explain
different aspects of people's behaviour when conducting interactive search tasks. This can include
both high-level behaviours such as backtracking
through Web-pages and low-level behaviours
such as patterns of visual attention obtained from
eye-tracking studies. We then describe two studies which combine experimental data collection,
eye movement recording, and cognitive modelling
methods using ACT-R to provide detailed accounts
of the cognitive, perceptual, and motor processes
involved in the tasks. These studies were chosen
because both develop a detailed process model
which not only captures the human response
time data from the experiment, but also provides
a close match to the patterns of visual attention
revealed by the eye movement study. This level of
detail in modelling is still relatively uncommon
and the strong constraints added by seeking to
SNIF-ACT
Scent-based Navigation and Information Foraging in the ACT architecture (SNIF-ACT) (Pirolli
& Fu, 2003) is a model of human behaviour in
an interactive search task. The model makes use
of ACT-R's spreading activation mechanism so
that the information scent of the currently viewed
Web page activates chunks in declarative memory
as does the spreading activation from the goal.
Where these two sources of activation coincide
there are higher levels of activation and this indicates a high degree of relevance between the
goal and the page being attended to. This activation is what ultimately drives the behaviour of
the model. The model includes the use of search
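The scent computation itself can be pictured as summing goal-to-link word association strengths. The sketch below is a toy illustration of that idea only: the association values are invented, standing in for the corpus-derived strengths and spreading-activation machinery that SNIF-ACT actually uses.

```python
# Toy sketch of the information-scent idea behind SNIF-ACT: a link's scent is
# taken here as the summed association strength between each goal word and
# each word in the link text. The association table is invented for
# illustration; the real model derives these strengths from corpus statistics.

association = {
    ("antique", "collectible"): 1.2,
    ("antique", "furniture"): 0.9,
    ("appliance", "furniture"): 0.1,
}

def scent(goal_words, link_words):
    return sum(association.get((g, l), 0.0) for g in goal_words for l in link_words)

goal = ["antique"]
links = {"Collectible items": ["collectible"],
         "Home appliances": ["appliance"]}

# The link with the highest scent is the one the model is most likely to select.
best = max(links, key=lambda name: scent(goal, links[name]))
print(best)   # -> "Collectible items"
```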
Eye-Tracking Experiments in
Interactive Search
When presented with a list of search results or
items on a menu within a Web site (i.e., a patch
of information), the user has to choose between
selecting an item which will move him/her to
another patch and doing some assessment on
either the currently attended item or some other
item in the list (i.e., consume the information
presented within the current patch). As has been
mentioned previously, IFT proposes that the user
will make use of the information scent of the items
to guide their behaviour. If the information scent
of a particular item in the list is higher than the
rest (i.e., that item appears to be relevant to the
task and the user believes that clicking it will
lead them to better information) then the item
will be selected.
Eye-tracking experiments have been used to
investigate what people attend to when conducting
interactive search tasks (Brumby & Howes, 2004;
Silva & Cox, 2005). Participants were given an
information goal and a list of items and asked to
select the label that they thought would lead to the
Notes: The graphs on the left (labelled 1) show years 1970 to 1979, while those on the right (labelled 2) show years 1980 to 1989. Dashed lines indicate the optimal scan path required to answer the question, "when the value of oil is 3, what is the value of gas?"
corresponding value of the target variable (Lohse, 1993; Peebles & Cheng, 2001, 2002;
Figure 4. Mean response times for experimental participants and ACT-R models for each question type
(Peebles & Cheng, 2003)
Figure 5. Screen shots showing an experimental participant's eye movement data (left) and the ACT-R model's visual attention scan path (right) for the QVQV question "oil = 6, gas = ?" using the 1980s parametric graph
Note: In the model screen shot, numbered circles on the scan path indicate the location and sequence of fixations.
To provide an explanation of their data, Fleetwood and Byrne (2006) produced an ACT-R
model of the task that was able to interact with
the same experiment software as the participants.
As described previously, each experiment trial comprises two stages: the first, in which the target icon and its file name are encoded, and the second,
icon and its file name are encoded and the second
in which it is sought. The model has a set of seven
productions to carry out the first stage: (1) locate
the target icon and (2) encode an attribute pair (e.g.,
grey rectangle), (3) look below the icon and (4)
encode the associated file name, and finally (5)
locate and (6) click on the "Ready" button. In the
second stage, the model locates and attends to an
icon with the previously encoded target feature
and then shifts visual attention to the file name
below it. If the file name matches the target file
name, visual attention is returned to the icon and
the mouse clicks on it. If the file name is not the
target, however, the model continues the search
by locating another icon at random with the same
target features. This sequence of events requires
four productions and takes 285 ms to complete.
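The second-stage strategy just described can be summarised as a small loop: attend to a randomly chosen icon that shares the encoded target feature, check the file name below it, and stop when the name matches. The sketch below is an illustrative Python rendering of that loop, not Fleetwood and Byrne's ACT-R code; the data structures are simplified and the timing parameters are omitted.

```python
import random

# Illustrative sketch of the random re-sampling icon-search strategy described
# above: repeatedly attend to an icon with the target feature, check its file
# name, and stop when it matches the target name.

def search(icons, target_feature, target_name, rng=None):
    """icons: list of (feature, file_name) tuples. Returns shifts of attention."""
    rng = rng or random.Random(0)
    candidates = [icon for icon in icons if icon[0] == target_feature]
    shifts = 0
    while True:
        feature, name = rng.choice(candidates)   # attend to a matching-feature icon
        shifts += 1                              # shift attention to its file name
        if name == target_name:
            return shifts                        # match: return to icon and click

icons = [("grey rectangle", f"file{i}.txt") for i in range(6)]
print(search(icons, "grey rectangle", "file3.txt"))
```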
Figure 7 reveals a close correspondence
between the mean RTs produced by the model
and those of the experiment participants (R² = .98, RMSE = 126 ms) and shows that an ACT-R
Figure 7. Response time by set size and icon quality for Fleetwood and Byrne's (in press) revised model and the experiment data (response time plotted against set size for data and model under poor, fair, and good icon quality)
Conclusion
In this chapter we have presented a number
of recent examples of research that we believe
clearly demonstrate the value of rational analysis
and cognitive modelling in the study of complex
interactive behaviour. Such tasks typically involve
the complex interaction of three elements: (1) the
perceptual and cognitive abilities of the user; (2)
the visual and statistical properties of the task
environment; and (3) the specific requirements
of the task being carried out. The use of rational
analysis and an embodied cognitive architecture
such as ACT-R allows all three of these elements
to be brought together in an integrated theoretical
account of user behaviour. Rational analysis provides a set of assumptions and methods that allow
researchers to understand user behaviour in terms
of the statistical structure of the task environment
and the user's goal of optimising (i.e., reducing the
cost/benefit ratio of) the interaction. Developing
Figure 8. Mean number of shifts of visual attention per trial made by Fleetwood and Byrne's (in press) revised model relative to the mean number of gazes per trial made by participants (plotted against set size for data and model under poor, fair, and good icon quality)
cognitive models of interactive behaviour in a cognitive architecture such as ACT-R allows researchers to specify precisely the cognitive factors (e.g.,
domain knowledge, problem-solving strategies,
and working memory capacity) involved. In addition, the recent incorporation of perceptual-motor modules into cognitive architectures allows them to make predictions about users' eye movements during the entire performance of the task, which can be compared to observed eye movement data, a highly stringent test of the sufficiency and efficacy of a model. The use of these methods has
increased rapidly over the last 5 years, as has the
range of task interfaces being studied. Although
we are still a long way from achieving the goal of
an artificial user that can be applied off the shelf
to novel tasks and environments, the models of
interactive behaviour described here demonstrate
a level of sophistication and rigour still relatively
rare in HCI research. As these examples illustrate,
developing more detailed accounts of interactive
behaviour can provide genuine insights into the
complex interplay of factors that affect the use
of computer and Web technologies, which may
inform the design of systems more adapted to
their users.
Note
All correspondence to: David Peebles, Department
of Behavioural Sciences, University of Huddersfield, Queensgate, Huddersfield, HD1 3DH, UK;
[email protected].
References
Anderson, J. R. (1990). The adaptive character
of thought. Hillsdale, NJ: Lawrence Erlbaum
Associates.
Anderson, J. R., Bothell, D., Byrne, M. D.,
Douglass, S., Lebiere, C., & Qin, Y. (2004). An
Silva, M., & Cox, A. L. (2005, August 31-September 2) Eye-movement behaviour in interactive
menu search: Evidence for rational analysis.
Paper presented at the BPS Cognitive Section
Conference 2005, University of Leeds.
St. Amant, R., Horton, T. E., & Ritter, F. E. (2004). Model-based evaluation of cell phone menu interaction. In Proceedings of the CHI '04 Conference on Human Factors in Computer Systems (pp. 343-350). New York: ACM.
Young, R. M. (1998). Rational analysis of exploratory choice. In M. Oaksford & N. Chater (Eds.),
Rational models of cognition (pp. 469-500). UK:
Oxford University Press.
Young, R. M., Green, T. R. G., & Simon, T. (1989). Programmable user models for predictive evaluation of interface designs. In Proceedings of CHI '89: Human Factors in Computing Systems (pp. 15-19). ACM Press.
This work was previously published in Human Computer Interaction Research in Web Design and Evaluation, edited by P.
Zaphiris, pp. 290-309, copyright 2007 by Information Science Publishing (an imprint of IGI Global).
Chapter 3.11
Device Localization in
Ubiquitous Computing
Environments
Rui Huang
University of Texas at Arlington, USA
Gergely V. Záruba
University of Texas at Arlington, USA
Sajal Das
University of Texas at Arlington, USA
Abstract
In this chapter, we will study the localization
problem in ubiquitous computing environments.
In general, localization refers to the problem of
obtaining (semi-) accurate physical location of the
devices in a dynamic environment in which only
a small subset of the devices know their exact
location. Using localization techniques, other
devices can indirectly derive their own location
by means of some measurement data such as
distance and angle to their neighbors. Localization is now regarded as an enabling technology
for ubiquitous computing environments because
it can substantially increase the performance of
other fundamental tasks such as routing, energy
Introduction
In a ubiquitous computing environment, devices
are often connected to one another on the fly to
form an infrastructure-less network that is fre-
Background
The localization problem is hard for a number of reasons:
1. ...need to take advantage of multi-hop information, that is, estimating node locations based on other nodes' location estimates.
2. Availability of measurements: For localization algorithms that require distance or angle measurements, certain sensory devices will need to be available to provide such readings. However, it is likely that not all nodes have the same sensory capacity. In other words, there is a need for the localization algorithm to work in a heterogeneous environment, in which devices with different location sensory capacities coexist.
3. Measurement error and error propagation: Even when measurement devices are available, there is a general consensus that those measurements are prone to errors. For instance, a distance measurement can be derived from a received signal strength indication (RSSI) reading, in which a receiving device measures the strength of the signal from a sending device and obtains the distance via an estimated signal propagation model. RSSI readings are prone to multi-path fading and far-field scattering. The error can be especially high when there are a significant number of obstacles in between the sender and the receiver. Since most localization algorithms require measurements from nodes several hops away, the measurement error is likely to aggregate along the path and eventually completely throw off the location estimate.
Applications of Localization
There have been numerous algorithms proposed
for MANETs that rely on localization data; in
this section, we provide a brief survey of these
Unicast Routing
Routing refers to the task of finding the correct
route from a sending device (source) to a receiving device (destination). Routing is an especially
challenging task for MANETs because their
frequent topology change implies the underlying
instability of any established routes. As such,
routes are needed to be frequently rediscovered,
reestablished, and repaired. In general, routing
(i.e., route discovery and repair) involves flooding the routing control packets throughout the
network. Flooding can often be expensive in
terms of delay and bandwidth usage it incurs,
both of which can greatly affect the network
performance. Thus, there is a strong incentive to
design efficient routing algorithms that minimize
the overhead caused by any unnecessary packet
flooding. Unicast routing based on location information, often called geometric routing or location
based routing, has shown to be one of the viable
solutions to this problem.
The location-aided routing (LAR) protocol (Ko, 2000) was the first MANET routing algorithm proposed that uses location data. In LAR, every
node is assumed to know its own location, and each
individual location is then periodically broadcast
throughout the network. Thus, at any time t, every
node knows the locations of any other nodes at
some previous time t' < t. Based on this location
information and an estimated velocity, a node
can derive an estimated location range, called
expected zone, of a target node at the current
time. Instead of flooding the entire network, the
routing request packets can be directed to search
for the target node only at this expected zone.
Global flooding is performed only after the location-based routing request has failed. Limiting
route discovery to a smaller expected zone with
Multicast Routing
Similar to unicast routing, multicast routing can
also benefit from location data. Multicast routing
using geographic information is often referred to
in the literature as geocast routing. The Location-Based Multicast (LBM) algorithm (Ko, 1999) is a
multicast extension to the unicast Location-Aided
Routing (LAR). Like LAR, which forwards the
routing requests according to the location of the
destination node, LBM forwards the requests according to the direction of the geocast region that
contains all the multicast destinations. GeoGRID
(Liao, 2000) is the multicast extension to GRID
(Liao, 2001). Like in GRID, location information
is used by GeoGRID to identify the grid block
where nodes reside. Multicast is done through the
gateway node selected at each grid block. Based
on the location of the source node and the geocast
region, LBM and GeoGRID define a forwarding region that contains the intermediate nodes
responsible for forwarding packets. The size
and shape of the forwarding region have a direct
impact on the overall performance; shapes such
as rectangles and cones have been proposed in
(Ko, 1999).
While the standard shapes such as rectangles
and cones work well in most cases, there are
situations where viable routes exist only outside
the forwarding region. For instance, a network
can be partitioned into two sub-networks connected only through a narrow linkage due to
some obstacles (e.g., two islands connected by a
bridge). When the source and the destination are
Power Management
MANET is often used as the model for sensor
networks, an emerging technology for pervasive
computing. One of the major challenges of sensor
networks is power management. Since sensors are
commonly small in size and are battery powered,
conserving energy would prolong their service
time and, thus, the lifespan of the entire network.
The Geographical Adaptive Fidelity (GAF) algorithm (Xu, 2001) is a network topology management algorithm with reduced energy consumption
as its primary objective. The idea behind GAF is
that there are often a large number of nodes that
are redundant during packet routing in MANETs.
If the redundant nodes can be identified, they
Security
In (Hu, 2003) the authors proposed a technique
called packet leashes to defend against wormhole attacks in MANETs. A wormhole attack
is a type of security breach where an adversary
intercepts incoming packets and tunnels them to
another part of the network via a single long-range
directional wireless link or through a direct wired
link. From there, the adversary can retransmit
the packets to the network. Note that this type of
capture-and-retransmit attack can be immune
to common packet encryption methods, since the
adversary does not need to read the packet content.
Wormhole attacks can severely disrupt ad hoc
routing protocols such as Ad hoc On-Demand Distance Vector Routing (AODV) or Dynamic Source
Routing (DSR), and cause a denial of service to
the network. The core of packet leashes is based
on two assumptions: i) all nodes know their own
locations; and ii) all nodes are synchronized. To
enable packet leashes, the sender node encloses
its location and transmission time-stamp within
the packet. At the receiver node, the packet leash
is validated against the receiver's own location
and clock. In particular, the sender location information gives the distance from the original
sender to the receiver, and the time-stamp gives
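A minimal sketch of the kind of check a geographic leash enables is shown below, assuming a known maximum radio range, a bound on node velocity, and loosely synchronized clocks. The parameter names and values are illustrative rather than the exact formulation of Hu (2003): the receiver computes the distance implied by the sender's enclosed location and rejects the packet if it could not plausibly have been received over a single hop.

```python
import math

# Sketch of a geographic packet-leash check in the spirit described above.
# The receiver uses the sender's enclosed location and timestamp, plus its own
# location and clock, to bound how far the packet could legitimately have come.
# Parameter names and values (radio_range, max_velocity, clock_error) are
# illustrative assumptions.

def leash_ok(sender_loc, send_time, recv_loc, recv_time,
             radio_range=250.0, max_velocity=20.0, clock_error=0.001):
    dx = sender_loc[0] - recv_loc[0]
    dy = sender_loc[1] - recv_loc[1]
    distance = math.hypot(dx, dy)
    # Allow for node movement while the packet was in flight and for clock skew.
    slack = max_velocity * (recv_time - send_time + clock_error)
    return distance <= radio_range + slack   # otherwise: suspected wormhole

print(leash_ok((0.0, 0.0), 10.000, (200.0, 50.0), 10.001))   # True: plausible hop
print(leash_ok((0.0, 0.0), 10.000, (1500.0, 0.0), 10.001))   # False: reject packet
```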
Connectivity Only
Measurement
At a minimum, a node can detect connectivity
to its immediate neighbors, that is, its one-hop
neighborhood. The connectivity-only measurement is a binary reading between two nodes, either "true" or "false," indicating whether they are neighbors. Based on this connectivity information, one can derive the general proximity of the
nodes as a way to localize the network.
path loss exponent. However, in the actual environment where obstacles exist, multipath signals
and shadowing become two major sources of
noise that impact the actual RSSI. In general, these noise sources are commonly modeled as a random process during localization. Let Pi,j be the RSSI
(in dB) obtained at the receiver node j from the
sender node i. Pi, j is commonly modeled as a
Normal distribution (Patwari, 2003)
Pi,j = N(P̄i,j, σdB²)    (1)

Ti,j = N(di,j / c, σT²)    (2)
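In practice, the mean received power P̄i,j is typically related to distance through a log-distance path-loss law with exponent np; the sketch below inverts that relation to turn a measured RSSI into a rough range estimate. The reference power at 1 m and the exponent value are illustrative assumptions, not parameters given in this chapter.

```python
import math

# Sketch of range estimation from RSSI under the log-distance path-loss model
# commonly assumed with the Normal noise model above. The reference power p0
# (dB at 1 m) and path-loss exponent n_p are illustrative values.

def mean_rssi(distance_m, p0=-40.0, n_p=3.0):
    """Expected received power (dB) at a given distance (d0 = 1 m reference)."""
    return p0 - 10.0 * n_p * math.log10(distance_m)

def estimate_distance(rssi_db, p0=-40.0, n_p=3.0):
    """Invert the path-loss law to get a (noisy) distance estimate in metres."""
    return 10.0 ** ((p0 - rssi_db) / (10.0 * n_p))

print(round(mean_rssi(10.0), 1))           # -> -70.0 dB at 10 m
print(round(estimate_distance(-70.0), 1))  # -> 10.0 m
```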
AoA Measurement
A node can be localized if the angles between
it and two beacons are known. Thus, the angle
information (i.e., bearing, or angle of arrival
(AoA)) can be used to localize the network. Currently, there is no off-the-shelf device that offers AoA sensing capability. However, a number of prototype devices are available. For instance, Cricket Compass (Priyantha, 2001) is a small form-factor device that uses ultrasonic measurements and fixed beacons to obtain acoustic signal orientations. In (Niculescu, 2004), a rotating directional antenna is attached to an 802.11b base station; by measuring the maximum received signal strength, a median angular error can be obtained from the sensor. The challenge here is to design AoA sensing devices with small form factor and low energy consumption. In (Chintalapudi, 2004), the authors outline a solution with a ring of charge-coupled devices (CCDs) to measure AoA with relatively low energy consumption.
In general, AoA is also modeled as a Normal distribution. Let the true angle between the sender i and the receiver j be Āi,j; the AoA measurement between i and j is then

Ai,j = N(Āi,j, σA²)
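A minimal sketch of bearing-only localization from two beacons, as stated above: the unknown node lies at the intersection of the two rays defined by the beacons' known positions and the measured angles. The coordinates and angles below are illustrative.

```python
import math

# Sketch of localizing a node from two bearings: each beacon knows its own
# position and the angle (from the positive x-axis) at which it "sees" the
# unknown node, and the node lies where the two rays intersect.

def locate(b1, theta1, b2, theta2):
    """Intersect rays from beacons b1 and b2 with bearings theta1, theta2 (radians)."""
    d1 = (math.cos(theta1), math.sin(theta1))
    d2 = (math.cos(theta2), math.sin(theta2))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        raise ValueError("bearings are parallel; node cannot be localized")
    dx, dy = b2[0] - b1[0], b2[1] - b1[1]
    t1 = (dx * d2[1] - dy * d2[0]) / denom
    return (b1[0] + t1 * d1[0], b1[1] + t1 * d1[1])

# Node actually at (5, 5); beacons at the origin and at (10, 0).
print(locate((0, 0), math.atan2(5, 5), (10, 0), math.atan2(5, -5)))  # ~ (5.0, 5.0)
```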
Interferometric Ranging
Measurement
Interferometric ranging is a widely used technique in both radio and optical astronomy to
determine the precise angular position of celestial
bodies as well as objects on the ground (Kus,
2006). Interferometric ranging exploits the
property that the relative phase offset between
two receivers determines their distances to two
simultaneous senders. Due to the recent advancement in hardware, it is now possible to implement
interferometric ranging sensors in much smaller
form factor to be used for localization (Marti,
2005). By synchronizing the transmission at
the two senders, each of which sends a signal
at a slightly different frequency, the receivers
can derive the relative phase offset of the two
signals by comparing the RSSI readings. The
distance difference (also called the q-range) can
then be calculated from the relative phase offset
with high accuracy. A q-range obtained from
interferometric ranging from two senders A and B, and two receivers C and D, is the distance difference dABCD = dAD - dBD + dBC - dAC + e, where e is the measurement error (Figure 1).
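A short worked sketch of the q-range defined above, computed from illustrative node coordinates with the measurement error e omitted:

```python
import math

# Worked sketch of the q-range defined above: for senders A, B and receivers
# C, D, q_ABCD = d_AD - d_BD + d_BC - d_AC (the error term e is omitted here).
# Node coordinates are illustrative.

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def q_range(A, B, C, D):
    return dist(A, D) - dist(B, D) + dist(B, C) - dist(A, C)

A, B, C, D = (0, 0), (10, 0), (0, 8), (10, 8)
print(round(q_range(A, B, C, D), 3))   # distance difference in metres
```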
A major advantage of interferometric ranging
is that the measurement could be extremely accurate compared to noise-prone RSSI readings.
In a recent experiment (Marti, 2005), in which 16 nodes were deployed in a 4x4 grid over an 18x18 meter flat grassy area with no obstruction, the maximum q-range error was shown to be around 0.1 meters while the median error was less than 0.04 meters. However, interferometric ranging is
more difficult to implement due to the following
reasons.
1. The measurement can be impacted by various sources of noise such as frequency drift, ground multipath error, and time synchronization error (Marti, 2005). Frequencies of the transmissions need to be precisely calibrated, as any carrier frequency drift and phase noise would directly impact the observed phase offset. Precise time synchronization is needed at the senders of a q-range; thus, there will be overhead to maintain clock synchronization.
2. A significantly larger number of measurements is required for localization than with direct ranging techniques. While there is a large number of measurements available (O(n⁴)) even for a small network, only a small subset of them are independent of each other; the rest merely provide redundant information. It has been shown in (Kus, 2006) that the number of independent measurements using interferometric ranging is O(n²), which is significantly higher than with RSSI and AoA ranging (O(n)).
3. Considering the localization problem in relative coordinates, for a network of n nodes there are 2n-3 unknowns in two dimensions and 3n-6 unknowns in three dimensions. This is because the relative coordinates are
Localization Algorithms
The previous section introduced the primary types
of measurement that can be used for localization. However, obtaining measurements such as
distance ranging and angle of arrival is only the
first step of localization. To calculate the actual
node location, we will have to mathematically
incorporate those measurement readings to derive
localization algorithms. While there are various
ways of classifying localization algorithms, we
feel it is more logical to classify them according to
the measurement assumptions as follows: i) connectivity-only; ii) range-based; iii) angle-based;
iv) interferometric ranging based; v) hybrid, and
vi) mobility-based.
Connectivity-Based Algorithms
A number of localization methods rely on connectivity information only. These types of methods
are also referred to as range-free methods in
the literature. For instance, the Centroid method
(Bulusu, 2000) estimates the location of an unknown node as the average of its neighboring
beacon locations. Clearly, in order for the location
estimate to be reasonably accurate, a large number
of beacons need to be heard. Thus, to provide
sufficient localization coverage, the Centroid
method requires more powerful beacons with a
large transmission range.
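A minimal sketch of the Centroid estimate described above; the beacon coordinates used here are illustrative.

```python
# Sketch of the Centroid method: an unknown node estimates its position as the
# average of the positions of all beacons it can hear.

def centroid(beacon_positions):
    if not beacon_positions:
        raise ValueError("no beacons heard; node cannot be localized")
    xs = [p[0] for p in beacon_positions]
    ys = [p[1] for p in beacon_positions]
    return (sum(xs) / len(xs), sum(ys) / len(ys))

# Beacons heard by the unknown node (illustrative coordinates).
print(centroid([(0, 0), (100, 0), (0, 100), (100, 100)]))   # -> (50.0, 50.0)
```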
The APIT (Approximated Point-In-Triangulation) method (He, 2003) estimates the node location by isolating the area using various triangles
formed by beacons. For each triangle formed by
three beacons, the node is either in or out of the
triangle. For instance in Figure 2(a), if it can be
determined the node G is inside ABC and DEF, Gs
location can be isolated to the shaded overlapping
area of the two triangles. To determine whether
a node is inside or outside the triangle, APIT
Table 1. Types of measurement used for localization (accuracy, cost, measured value, and mathematical model)

Proximity (connectivity only): accuracy low; cost low; measured value N/A; math model: 1 if two nodes are connected, 0 otherwise.
RSSI: accuracy low; cost low; measured value: received signal strength Pi,j between node i and j; math model: Pi,j = N(P̄i,j, σdB²).
ToA: accuracy high; cost high; measured value: signal propagation time between node i and j; math model: Ti,j = N(di,j / c, σT²).
AoA: accuracy low/medium; cost N/A (prototype devices only); measured value: angle Ai,j between node i and j; math model: Ai,j = N(Āi,j, σA²).
Interferometric ranging: accuracy very high; cost medium/high; measured value: q-range (distance difference between four nodes); math model: dABCD = dAD - dBD + dBC - dAC + e between nodes A, B, C, and D; noise on each q-range is modeled using a Normal distribution.
Figure 2. APIT: (a) localization using overlapping triangles; (b) node outside a triangle; (c) node inside a triangle
Figure 3. DV-Hop
Figure 6. Collaborative localization using particle filters: (a) particle distribution of node 0 when node 1 is not present; (b) particle distribution of node 1 when node 0 is not present; (c) particle distribution of node 0 when node 1 is present; (d) particle distribution of node 1 when node 0 is present
AoA-Based Algorithms
Even though the future of AoA sensing devices is
still unclear, some works have been published on
Interferometric-Ranging
Based Algorithms
Due to the fact that interferometric sensing devices for localization are relatively new, there
have been only a limited number of localization
Hybrid Algorithms
A combination of the above techniques can be
employed to form hybrid localization methods. For
instance, a hybrid method is proposed in (Ahmed,
2005) that uses both DV-Distance (Niculescu,
2001) and Multi-Dimensional Scaling (MDS)
(Shang, 2003). The algorithm contains three
phases. In the first phase, a small subset of nodes
is selected as reference nodes. In the subsequent
phase, the reference nodes are then localized in
relative coordinates using MDS. The final phase
uses DV-Distance to localize the rest of the nodes
in absolute coordinates. The rational behind
such hybrid algorithms is to exploit the tradeoff
between different localization algorithms. For
example, MDS gives good localization accuracy,
but as the network size is increased, MDS can be
costly. Meanwhile, DV-Distance is less costly, but
it only works well when beacon ratio is high. With
the hybrid algorithm, the cost is minimized by
only running MDS on the reference nodes, and
then the reference nodes are used as beacons for
DV-Distance.
2004). Instead of the actual probability distribution, the possible device locations are represented
with a bounding box. As the beacon passes by, the
area contained by the bounding box is progressively reduced as positive and negative information
is processed. The bounding box method drastically
simplifies the probability computation, making
it possible to implement this method on sensor
devices. However, such a large simplification has its side effects, in that it sacrifices the preciseness of the distribution for simplicity: a single box cannot precisely describe multiple possible locations.
There is also the problem of noise from ranging
devices. This method may work well when ranging
error is minimal; however, when noise is present
(which is inevitable when using RSSI ranging),
there might be situations where no bounding box
exists to satisfy all readings.
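A minimal sketch of the bounding-box update described above, assuming each heard beacon constrains the node to a square of side 2r around the beacon. The range value and coordinates are illustrative, and a None result corresponds to the inconsistent-readings case just mentioned.

```python
# Sketch of the bounding-box idea: each in-range beacon constrains the node to
# a square of side 2r around the beacon, and the node's box is the running
# intersection of these constraints. Values are illustrative.

def intersect(box, beacon, r):
    """box: (xmin, ymin, xmax, ymax); beacon: (x, y); r: assumed radio range."""
    xmin = max(box[0], beacon[0] - r)
    ymin = max(box[1], beacon[1] - r)
    xmax = min(box[2], beacon[0] + r)
    ymax = min(box[3], beacon[1] + r)
    if xmin > xmax or ymin > ymax:
        return None   # readings are inconsistent: no box satisfies them all
    return (xmin, ymin, xmax, ymax)

box = (0.0, 0.0, 100.0, 100.0)          # start with the whole deployment area
for beacon_pos in [(20.0, 30.0), (35.0, 30.0)]:
    box = intersect(box, beacon_pos, r=15.0)
print(box)   # -> (20.0, 15.0, 35.0, 45.0): the node lies somewhere in this box
```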
Table 2 lists all the localization algorithms
described in this section. In summary, different
measurement types and their unique properties
to a large degree dictate the design of localization
algorithms. For instance, connectivity-based measurements can only provide coarse localization without a high beacon ratio or nodal degree. Range- and AoA-based measurements can provide much finer localization results, but they are more prone to measurement error. A quantitative comparison between the better-known algorithms such as DV-Hop, Euclidean, and multilateration can be obtained from (Langendoen, 2003),
in which the comparison is done in the context
of specific constraints of sensor networks, such
as error tolerance and energy efficiency. Their
results indicate that there is no single algorithm
that performs best and that there is room for
further improvement.
Theoretical Results
While there have been many localization algorithms proposed for various scenarios, only
recently have researchers started to address the
Table 2. Localization algorithms by measurement type

Connectivity-based: Centroid (Bulusu, 2000), APIT (He, 2003), DV-Hop (Niculescu, 2001), MCL (Hu, 2004)
Range-based: DV-Distance (Niculescu, 2001), Euclidean (Niculescu, 2001), Collaborative Multilateration (Savvides, 2001), Hop-TERRAIN (Savarese, 2002), n-Hop Multilateration (Savvides, 2003), Probabilistic Localization (Huang, 2005)
Angle-based: Weighted Mean Square Error (Chintalapudi, 2004), AoA Triangulation (Niculescu, 2003)
Interferometric ranging-based
Hybrid
Mobility-based
has been shown as NP-Complete under the measurement of distance (Eren, 2004), angle (Bruck,
2005), connectivity (Breu, 1998; Kuhn, 2004), and
interferometric ranging (Huang, 2007).
The above theoretical results indicate the general intractability of the localization problem even
in the ideal case where measurements (such as
edge distances) are 100% accurate. Unfortunately,
measurements in the real world are a far-cry from
being accurate, and any optimization method
has to deal with not only different measurement
types, but also noise. The localization inaccuracy
attributed to the measurement types and noise
can be statistically quantified using Cramer-Rao
Bounds (CRB) (Patwari, 2005). The CRB is a
lower bound on the covariance of any unbiased
location estimator that uses measurements such
as RSSI, ToA, or AoA. Thus, the CRB indicates a
lower bound of the estimation accuracy of a given
network scenario regardless of the localization
algorithm. In other words, with CRB we have a
way to tell the best any localization algorithm can
do given a particular network, measurement type,
and measurement noise scenario. CRB formulas
of individual measurement types such as RSSI,
ToA, and AoA under most common noise models
(mostly Gaussian) are currently known.
The CRB of the localization error for a sample network is shown in Figure 8 as rings whose radius is the standard deviation of the minimum localization error that can possibly be attained at the node. Here, the nodes represented by squares
are beacons while circles represent nodes to be
localized using RSSI ranging. The edges indicate
the communication links available to measure
RSSI readings. We assume the measurement model to be RSSI with path loss exponent p and noise standard deviation σdB. A ring with a smaller radius (i.e., a smaller CRB) signals that a more accurate localization result can theoretically be obtained. Conversely, a larger ring
indicates a larger localization variance and, thus,
a less accurate result. In the figure, two types of
nodes do not have rings. First, all beacons have
Conclusion
In this chapter, we studied the localization
problem in ubiquitous computing environments.
Localization in general refers to the problem of
identifying the physical location of devices using
a limited amount of available measurement data.
The most common measurement types include
device connectivity (i.e., whether two devices are
neighbors), ranging using RSSI and ToA, angle of
arrival (AoA), and interferometric ranging. Given
a small number of nodes with accurate geometric
location (e.g., using GPS receivers), localization
Figure 8. The CRB of the sample network is depicted as rings of radius σi. There are two exceptions: 1) beacons, depicted as squares, have 0 CRB, and 2) some regular nodes have infinite CRB (such as nodes 38, 48, 49, and 78 at the top left corner), indicating that they cannot be localized
algorithms try to derive the location of those devices that are not GPS-enabled. The motivation
of localization can be justified by the large number
of algorithms proposed for ubiquitous computing
that rely on (semi-)accurate location information
and the fact that current technology prevents GPS
from being installed on all network devices due
to power constraints and form factors. It has been
shown that localization in general, regardless of the
measurement types used, is an NP-Hard problem.
Thus, current effort in solving it relies on some
sort of stochastic optimization. Meanwhile, as
with other network-related problems in ubiquitous
computing environments, the ideal solution calls
Future Directions
Device localization within ubiquitous computing
environment has been an active research field in
the past several years. Much work has been done in
the area of hardware/sensor design (in particular,
reducing the form factor and power consumption of sensory devices), algorithmic design and
theoretical analysis. However, like many areas
of ubiquitous computing, localization is still a
relatively new front with much of the work yet to
be done. In this section, we will briefly discuss
a few directions which we feel could produce
fruitful results in the near future. We hope our
discussion will encourage the readers to actively
participate and contribute their own ideas to this
exciting and important field.
Interferometric Ranging
Since interferometric ranging is a relatively new
type of measurement available to the localization
problem, there are still many open problems in
this area. Of the localization algorithms proposed
for interferometric ranging, all but the iterative algorithm proposed in (Huang, 2007) are centralized. There is a definite need to design distributed
localization algorithms for interferometric ranging so that it can be implemented with reasonable
efficiency and scalability. To reduce the number
of beacons, the distributed algorithms should
make use of multi-hop location information,
which unfortunately is much more difficult for
interferometric ranging because each measure-
Collaborative Localization of
Multiple Measurement Types
Previous localization algorithms often assume that
the entire network has to be localized using the
same type of measurement (such as connectivityonly, RSSI, ToA, AoA, or interferometric ranging).
However, to be true to the spirit of ubiquitous
References
Ahmed, A. A., Shi, H., & Shang, Y. (2005).
SHARP: A new approach to relative localization
in wireless sensor networks. In Proceedings of
the 25th IEEE International Conference on Distributed Computing Systems Workshops (ICDCS
05) (pp. 892-898).
Ash, J. N., & Potter, L. C. (2004). Sensor network
localization via received signal strength measurements with directional antennas. In Proceedings of
the 2004 Allerton Conference on Communication,
Control, and Computing (pp. 1861-1870).
Bruck, J., Gao, J., & Jiang, A. (2005). Localization and routing in sensor networks by local angle information. In Proceedings of the 6th ACM International Symposium on Mobile Ad Hoc Networking and Computing (pp. 181-192).
Peng, R., & Sichitiu, M. L. (2005). Robust, probabilistic, constraint-based localization for wireless
sensor networks. In Proceedings of the 2nd Annual
IEEE Communications Society Conference on
Sensor and Ad Hoc Communications and Networks (SECON 2005) (pp. 541-550).
Niculescu, D., & Nath, B. (2001). Ad hoc positioning system (APS). In Proceedings of the IEEE
(GLOBECOM01) (pp. 2926-2931).
Niculescu, D., & Nath, B. (2003). Ad hoc positioning system (APS) using AoA. In Proc. of IEEE
INFOCOM03 (pp. 1734-1743).
Niculescu, D., & Nath, B. (2004). VOR base stations for indoor 802.11 Positioning. In Proceeding
of the 10th Annual International Conference on
Mobile Computing and Networking (pp. 58-69).
Additional Reading
Bulusu, N., Heidemann, J., & Estrin, D. (2000).
GPS-less low cost outdoor localization for very
small devices. IEEE Personal Communications
Magazine, 7(5), 28-34.
Eren, T., Goldenberg, D., Whiteley, W., Yang, Y. R.,
Morse, A. S., Anderson, B. D. O., & Belhumeur,
P. N. (2004). Rigidity, computation, and randomization of network localization. In Proceedings of
the IEEE (INFOCOM04) (pp. 2673-2684).
Ko, Y., & Vaidya, N. H. (2000). Location-aided
routing (LAR) in mobile ad hoc networks. Wireless Networks, 6(4), 307-321.
Langendoen, K., & Reijers, N. (2003). Distributed localization in wireless sensor networks: a
quantitative comparison. Computer Networks,
43(4), 499-518.
Liao, W.-H., Tseng, Y.-C., & Sheu, J.-P. (2001).
GRID: a fully location-aware routing protocol
for mobile ad hoc networks. Telecommunication
Systems, 18(1), 37-60.
Patwari, N., Hero III, A. O., Perkins, M., Correal, N. S., & O'Dea, R. J. (2003). Relative location estimation in wireless sensor networks. IEEE Transactions on Signal Processing, 51(8), 2137-2148.
Patwari, N., Hero, A., Ash, J., Moses, R., Kyperountas, S., & Correal, N. (2005). Locating the
nodes: cooperative geolocation of wireless sensors. IEEE Signal Processing Magazine, 22(4),
54-69.
Savvides, A., Park, H., & Srivastava, M. B.
(2003). The n-hop multilateration primitive for
node localization problems. Mobile Networks
and Applications, 8, 443-451.
Stojmenovic, I., Ruhil, A. P., & Lobiyal, D. K.
(2006). Voronoi diagram and convex hull based
geocasting and routing in wireless networks.
This work was previously published in Advances in Ubiquitous Computing: Future Paradigms and Directions, edited by S.
Mostefaoui, Z. Maamar, and G. Giaglis, pp. 83-116, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 3.12
Socio-Cultural Interpretations
to the Diffusion and Use of
Broadband Services in a Korean
Digital Society
Dal Yong Jin
Simon Fraser University, Canada
ABSTRACT
INTRODUCTION
Technology as Cultural
Forms
It is generally recognized that technologies are
primarily neutral because they operate essentially under the same norm of efficiency in all
situations. Many users of technology argue that
technology is essentially amoral and an entity
devoid of values (Rescher, 1969; Mesthene, 1970).
This instrumental theory, the dominant view of
modern governments and the policy sciences on
which they depend, argues that if people use
technology for destruction or pollution, as in the
case of nuclear weapons and chemical pollution,
it should not be blamed on technology, but on its
misuse by politicians, the military, big business
and others (Pacey, 1983, p. 2).
For many scholars, however, technology is
not simply a means to an end, but has become an
environment and a way of life; this is its substantive impact (Borgmann, 1984). This substantive
theory of technology holds that technology is
not neutral, but has a substantive value bias.
Substantive theory, best known through the writings of Jacques Ellul, Arnold Pacey, and Martin
Heidegger, claims that technology constitutes
a new type of cultural system that restructures
the entire social world as an object of control.
Substantive theory explicates cultural aspects
of technology, such as values, ideas, and the
creative activity of technology (Feenberg, 1991).
This type of cultural system is characterized by
an expansive dynamic which ultimately mediates
every pre-technological enclave and shapes the
whole of social life.
Cultural Characteristics
Contributing to the Deployment of
Broadband Services
Although the Korean government and telecom
firms have collaboratively initiated and expedited
the deployment of high-speed Internet services,
the swift growth of broadband services would
have been impossible without people readily accepting new technology more than other nations.
The diffusion of services and the widespread
use of high-speed Internet can be attributable
to certain distinct characteristics of the Korean
people, since the diffusion of broadband services
took place with rapid acceptance by most Koreans (Jin, 2007). Regardless of the fact that many
countries around the world have initiated and
supported the growth of broadband services, the
result varies markedly due to the acceptance of
new technology by people, the users, as well as
government initiatives.
Therefore, it is crucial to explore the significance of assessing the culture in which technology
is created and the context in which it is widely
accepted. As ITU acknowledges (2003b), Internet
user demand indeed contributed most decisively to
the rapid explosion of broadband in many places,
particularly in Korea. Again Ellul (1964) emphasized that the development of technology is not
an isolated fact in society but is related to every
factor in the life of modern humanity. In particu-
early adopters in the market (Kwon, 2006). According to a survey by the U.S. market research
firm Parks Associates in November 2005, Korea
ranked second on the list of 13 countries in the
adoption of consumer technologies (Parks Associates, 2005).7 Although balli balli culture has
several negative aspects, such as bribery scandals
and the collapse of buildings and bridges, in the
midst of achieving social and economic successes
in the Korean society, rapid adaptability to change
is the key to the swift deployment of broadband
services.
An excessive enthusiasm for edutainment (the combination of education and entertainment) has also greatly contributed to the unique growth in
broadband services. Above all, Korea is one of
the most developed countries in terms of education. Its overall school enrollment rate (primary,
secondary, and tertiary) of 90% is the highest
throughout the world. Korea's high rates of literacy and school enrollment are essential prerequisites for the widespread adoption of ICTs, and
these factors have helped contribute to the growing
impact of ICT in Korean society (ITU, 2003a).
Over-enthusiasm for education in Korea is not
new, but after the 1997 economic crisis, Internet
skills were considered one of the most important
survival tools for many Koreans. They devote
their attention to the Internet due in part to the
Internet being a necessary technology for their
jobs, education, and entertainment. In particular,
for most parents broadband is a necessary tool for children's education (Jin, 2007). Relatively simple initiatives, such as encouraging school teachers to post homework assignments on their own personal Web sites and requiring students to submit their assignments by e-mail, could create a feeling among parents that the Internet is a necessity for their children's education (Choudrie
& Lee, 2004).
The majority of Koreans have become fast
adopters of high-tech, information-intensive products, and they are highly receptive to marketing
stories that offer education after the 1997 economic
CONCLUSION
Korea presents a unique example with its rapid
deployment of broadband penetration. Several
significant factors have contributed to the rapid
development of broadband Internet connections.
The government and telecommunications companies, as providers, have played important roles
in the rapid development of broadband Internet,
in particular to providing infrastructure for its
development. Favorable government policies
and competition among telecommunications
companies became driving forces for the rapid
deployment of broadband Internet.
The explosion of broadband in Korea, however, was made possible largely against the backdrop of the 1997 economic crisis and due in large part
to various deeply rooted social, historical, and
cultural factors. Although political factors can
be important driving forces behind broadband
penetration, growth also requires an existing
receptiveness to using the services and applications that can be provided through broadband (Jin,
2007). In this regards, it was the citizens who actually made inroads into the worlds most shrewd
market for broadband services in Korea.
Several socio-cultural factors, which are crucial to the diffusion and use of new technologies,
have played significant roles in the swift deployment of broadband services. Cultural characteristics emphasizing quick communication and quick
responses, as well as enthusiasm for edutainment,
have contributed to the exponential growth of
broadband services. The complex character of the younger generation, combining social solidarity with individualism in a way that is not easy to find in any other country, has particularly contributed to the rapid growth in broadband services. If the younger
generation has only one characteristic of these
REFERENCES
Bajai, V. (2002). High-speed connections common in South Korea. The Seattle Times, (October 21).
Lau, T., Kim, S.W., & Atkin, D. (2005). An examination of factors contributing to South Koreas
global leadership in broadband adoption. Telematics and Informatics, 22(4), 349-359.
Lee, J.S. (2000). Balanced cultural policy. Retrieved September 3, 2006, from http://www.kcpi.or.kr/database/e_sosik/20000102/news_1.html
Lee, Y.K., & Lee, D. (2003). Broadband access in
Korea: Experience and future perspective. IEEE
Communications Magazine, 30-36.
Lee, H.J., OKeefe, R.M., & Yun, K. (2003). The
growth of broadband and electronic commerce in
South Korea: Contributing factors. The Information Society, 19(1), 81-93.
Lee, H.J., Oh, S., & Shim, Y.W. (2005). Do we need broadband? Impacts of broadband in Korea.
Info: The Journal of Policy, Regulation, and
Strategy for Telecommunications, Information
and Media, 7(4), 47-56.
Lim, H. (2001). Hype of education information
policy. Kookmin Ilbo, (April 25), 3.
McFadyen, S., Hoskins, C., & Finn, A. (1998). The
effect of cultural differences on the international
co-production of television programs and feature
films. Canadian Journal of Communication,
23(4), 523-538.
Mesthene, E. (1970). Technological change. New
York: Signet.
MIC (Ministry of Information and Communication). (2004). Broadband IT Korea vision 2007.
Seoul.
MIC. (2006). 2006 Korea Internet white paper.
Seoul.
Pacey, A. (1983). The culture of technology.
Cambridge: MIT Press.
KEY TERMS
1997 Economic Crisis: In 1997, the Korean economy went from being one of the most successful development experiences in modern history to economic stagnation and decline. From the middle of 1997, Korea was beset with a series of financial crises. The trend of decades of rising incomes reversed, and unemployment and poverty reached alarming levels. Factors responsible for the decline in the value of exports at the time included a dramatic fall in the prices of some electronic and information equipment, in particular semiconductors, which had dire consequences for a number of countries in the East and South Asia region.
Confucianism: A philosophy of life developed by Confucius. It stressed the proper relationships in society, such as father/son and ruler/subject. The philosophy of Confucius emphasizes love for humanity, a high value placed on learning, and devotion to family, peace, and justice.
Convergence: Also known as digital convergence. The concept that all modern information technologies are becoming digital in nature. The technological trend whereby a variety of different digital devices, such as TVs (in particular high-definition TVs) and mobile telephones, are merging into a multi-use communications appliance employing common software to communicate through the Internet.
Edutainment: The combination of education and entertainment. Many people in the digital age use the Internet to study several subjects, in particular English, while enjoying playing in cyberspace.
Information Technology (IT): Compared to labor-led technologies and industries, information technology usually includes semiconductors, computers, and telecommunications, although some economists prefer the term knowledge-based industry to describe IT.
This work was previously published in Handbook of Research on Global Diffusion of Broadband Data Transmission, edited by
Y. Dwivedi; A. Papazafeiropoulou; J. Choudrie, pp. 78-89, copyright 2008 by Information Science Reference, formerly known
as Idea Group Reference (an imprint of IGI Global).
Chapter 3.13
INTRODUCTION
Traditional user interface design generally deals
with the problem of enhancing the usability of a
particular mode of user interaction, and a large
body of literature exists concerning the design
and implementation of graphical user interfaces.
When considering the additional constraints that
smaller mobile devices introduce, such as mobile
phones and PDAs, an intuitive and heuristic user
interface design is more difficult to achieve.
Multimodal user interfaces employ several
modes of interaction; this may include text, speech,
visual gesture recognition, and haptics. To date,
systems that employ speech and text for application interaction appear to be the mainstream
multimodal solutions. There is some work on
the design of multimodal user interfaces for general mobility accommodating laptops or desktop
computers (Sinha & Landay, 2002). However,
advances in multimodal technology to accommodate the needs of smaller mobile devices, such
as mobile phones and portable digital assistants,
are still emerging.
Mobile phones are now commonly equipped
with the mechanics for visual browsing of Internet
applications, although their small screens and
cumbersome text input methods pose usability
challenges. The use of a voice interface together
with a graphical interface is a natural solution
to several challenges that mobile devices present. Such interfaces enable the user to exploit
the strengths of each mode in order to make it
easier to enter and access data on small devices.
Furthermore, the flexibility offered by multiple
modes for one application allows users to adapt
their interactions based on preference and on
environmental setting. For instance, hands-free speech operation may be conducted while
driving, whereas graphical interactions can be
BACKGROUND
Multimodal interaction is defined as the ability to
interact with an application using multiple sensory
channels (i.e., tactile, auditory, visual, etc.). For
example, a user could provide input by speaking,
typing on a keypad, or handwriting, and receive
the subsequent response in the form of an audio
prompt and/or a visual display. Useful multimodal
applications can cover a broad spectrum including
tightly synchronized, loosely synchronized, and
complementary modes of operation. Synchronization behavior must be defined both for input
(the way in which input from separate modes
is combined) and for output (the way in which
input from one mode is reflected in the output
modes). The W3C distinguishes several types of
multimodal synchronization for input as follows
(W3C, 2003a):
[Figure: multimodal system architecture; recoverable labels: input modalities (pen, handwriting recognition, keyboard, microphone), output (text, graphics, TTS, speaker), transcoding, an interaction manager with an integration manager (fusion) and a generation manager (fission), dialog, session, and compound management, the multimodal application, and the device.]
MULTIMODAL TECHNOLOGY
IN MOBILITY SYSTEMS
Multimodal applications for small mobile devices
must overcome several technical challenges;
Multimodal Architectures
Due to the need to support two or more modes of
input and output, solutions for multimodal systems are more complex than those for unimodal
systems. Additional capabilities are required to
support composite input and output requests,
manage multiple application states, and perform
session management between devices and the
multimodal application services.
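To make the point about composite input handling more concrete, the short Python sketch below shows one simple fusion policy: input events arriving from different modalities within a fixed time window are grouped into a single composite input before being handed to the rest of the application. This is only an illustration under assumed names (ModalityEvent, IntegrationManager, a 0.5-second window); it is not taken from the W3C documents or from any particular multimodal platform.

import time
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class ModalityEvent:
    modality: str      # e.g., "speech", "keypad", "stylus"
    payload: str       # recognized text, key value, gesture name, ...
    timestamp: float = field(default_factory=time.time)

class IntegrationManager:
    """Toy fusion component: merges events from several modalities that
    arrive within a fixed time window into one composite input."""

    def __init__(self, window_seconds: float = 0.5):
        self.window = window_seconds
        self.pending: List[ModalityEvent] = []

    def accept(self, event: ModalityEvent) -> Optional[List[ModalityEvent]]:
        # If the new event falls outside the current window, flush the pending group.
        if self.pending and event.timestamp - self.pending[0].timestamp > self.window:
            composite, self.pending = self.pending, [event]
            return composite
        self.pending.append(event)
        return None

if __name__ == "__main__":
    manager = IntegrationManager()
    manager.accept(ModalityEvent("speech", "show flights to"))
    manager.accept(ModalityEvent("stylus", "Sapporo"))        # same window: fused with the speech input
    time.sleep(0.6)
    composite = manager.accept(ModalityEvent("keypad", "1"))  # new window: previous group is flushed
    print([(e.modality, e.payload) for e in composite])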
There are fundamentally two architectural
approaches to constructing multimodal solutions
for mobile devices such as mobile phones and
[Figures: two multimodal architecture diagrams; recoverable labels: TTS, GUI, integration and generation managers, transcoding, interpretation, channel encoding and decoding, handwriting recognition, ASR, dialog, session, and compound management, multimodal application, device, microphone, speaker, modality and bearer channels, voice circuit, browser, server, and GPRS.]
Multimodal Presentation
Markup Language (MPML)
MPML is designed for presenting
multimodal output on browsers supporting XML.
The limited processing capability of phones is
accommodated, with a mobile edition defined
for J2ME applications (Saeyor, Mukherjee, Uchiyama, & Ishizuka, 2003). As such, the architecture
resembles a distributed client solution.
XHTML+Voice (X+V)
X+V is a proposed standard for multimodal
markup. Designed for clients that support spoken
and visual interaction, X+V technology furnishes
traditional Web pages with further voice tags
for input and output speech tasks. A goal is to support thin clients; however, in its current form this technology is most suited to the distributed
client architecture. Use of VoiceXML has also
been studied (Niklfeld et al., 2001).
Synchronized Multimedia
Integration Language (SMIL)
SMIL is a standard relevant to managing media synchronization, an important multimodal problem. However, the standard is largely aimed at
conventional browser technology that supports
XML, DOM, and XHTML. Hence, it is restricted
to browsers that may not be deployed to mobile
phones.
FUTURE TRENDS
There are several industry trends and future areas
of research including the extension of multimodal
solutions to accommodate haptic response, vi-
CONCLUSION
The literature has shown that multimodal user
interfaces are able to provide a superior user
experience when interacting with multimodal
applications on mobile devices such as cellular
phones. Furthermore, the applicability of multimodality is extending beyond traditional mobile
Web applications to a greater range of domains.
For instance, due to the plurality of user interface
modes, multimodal systems provide a natural
improvement for people with disabilities (Kvale
et al., 2005). Although studies demonstrate that
multimodal user interfaces provide a superior user
experience on a mobile phone, it remains unclear
whether such an improvement remains consistent
for all types of applications.
Distributed client architectures are able to overcome several problems identified in the literature; however, the practicality of deployment on mobile phones requires consideration. Browser-based
multimodal architectures overcome deployment
issues and provide an alternative and easier
mechanism for allowing mobile devices to access
services. In order to support multimodal applications that make use of three or more interaction
modes, the use of thin client architecture is as yet
unproven. In addition, further capabilities that may
not be inherent within the devices are required,
such as local composite management. Hence,
distributed clients would seem a necessary choice
to support these capabilities and appear most appropriate to address the needs of the increasing
complexity due to several modes of interaction,
REFERENCES
Anegg, H., Dangl, T., & Jank, M. (2004). Multimodal interfaces in mobile devices: The MONA
project. Proceedings of the Workshop on Emerging Applications for Mobile and Wireless Access
(www2004 Conference), New York.
Baillie, L., & Schatz, R. (2005). Exploring
multimodality in the laboratory and the field.
Proceedings of the 7th International Conference
on Multimodal Interfaces (ICMI 2005) (pp. 100-107). Trento, Italy.
Baillie, L., Simon, R., Schatz, R., Wegscheider, F.,
& Anegg, H. (2005). Gathering requirements for
multimodal mobile applications. Proceedings of
the 7th International Conference on Information
Technology Interfaces (ITI 2005), Dubrovnik,
Croatia (pp. 240-245).
Calvet, G., Kahn, J., Salembier, P.,
& Zouinar, M. (2003). In the pocket: An empirical
study of multimodal devices for mobile activities.
Proceedings of the HCI International Conference,
Crete, Greece, (pp. 309-313).
Chou, W., Shan, X., & Li, J. (2003). An architecture
of wireless Web and dialogue system convergence
for multimodal service interaction over converged
networks. Proceedings of COMPSAC 2003, Dallas, TX, (p. 513).
Coutaz, J., Nigay, L., & Salber, D. (1993). Taxonomic issues for multimodal and multimedia
interactive systems. Proceedings of the Workshop
on Multimodal Human-Computer Interaction
(ERCIM 93) (pp. 3-12). Nancy, France.
Hastie, H., Johnston, M., & Ehlen, P. (2002). Context-sensitive multimodal help. Proceedings of the
4th IEEE International Conference on Multimodal
Interfaces (ICMI 02), Pittsburgh, PA, (p. 93).
Niklfeld, G., Finan, R., & Pucher, M. (2001). Architecture for adaptive multimodal dialog system
based on VoiceXML. Proceedings of EuroSpeech
2001, Aalborg, Denmark, (pp. 2341-2344).
Oakley, I., & O'Modhrain, S. (2005). Tilt to scroll:
Evaluating a motion based vibrotactile mobile
interface. Proceedings of the 1st Joint Eurohaptics
Conference and Symposium on Haptic Interfaces
for Virtual Environment and Teleoperator Systems
(WHC05) (pp. 40-49). Pisa, Italy.
Oviatt, S. L. (2003). Advances in robust multimodal interface design. IEEE Computer Graphics
and Applications, 23(5), 62-68.
Pavlovski, C. J., Lai, J., & Mitchell, S. (2004a).
Etiology of user experience with Natural Language speech. Proceedings of the 8th International
Conference on Spoken Language Processing
(ICSLP/INTERSPEECH 2004), Jeju Island, Korea,
(pp. 951-954).
Pavlovski, C. J., Wood, D., Mitchell, S., & Jones,
D. (2004b). Reference architecture for 3G thin
client multimodal applications. Proceedings of
the International Symposium on Communications
and Information Technologies (ISCIT 2004), Sapporo, Japan, (pp. 1192-1197).
KEY TERMS
Automated Speech Recognition (ASR): The
use of computer processing to automatically translate
spoken words into a text string.
This work was previously published in Encyclopedia of Mobile Computing and Commerce, edited by D. Taniar , pp. 644-650,
copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.14
Abstract
This chapter aims to explore the future trajectory of enjoying digital music entertainment among consumers by comparing the characteristics of the usage patterns of digital music appliances in the U.S. and in Japan. As the first step of this research, the author conducted two empirical surveys in the U.S. and Japan and found some basic differences in the usage patterns of a variety of digital music appliances. Next, a series of ethnographic studies based on focus-group interviews with young Japanese women was conducted, and some interesting reasons for the differences were discovered. In Japan, sharing the experience of listening to the latest hit songs with friends by playing them on mobile phones with high-quality ring-tone functions can be a new way of enjoying music content, while hard-disk music
Introduction: Central
Questions
The November 2001 debut of the iPod and the subsequent opening of the iTunes Music Store have brought a rapid expansion of the digital music market around the world. Some estimate that the market will be worth $1.7 billion by 2009 (Jupiter Research). Now, the iTunes Music Store service is
available in 30 countries around the world, with
the total number of downloaded songs surpassing
the 500 million mark in July 2005.
The store only opened in Japan in August 2005
and sold over 1 million songs in the first 4 days.
This is an astonishing achievement, consider-
2.
3.
Gender Comparison
Current digital-music-content users mainly consist of men, accounting for 66.7%, as opposed to
women at 33.3%. Digital-music-content potential
users have a more even gender distribution, consisting of men and women at respectively 54.4%
and 45.6%. Nonusers of digital music contents
have a greater proportion of women at 58.4%,
compared to men at 43.6%.
figures indicate, current users have a large proportion of people who spend more on CDs, whereas
nonusers have a large proportion of people who
spend less on them.
Summarizing the results thus far, current
digital-music-content users are mainly young
men and women in their 20s, with substantial CD
ownership and high music-related spending per
month. They can be described as music fans with
substantial music-related consumption. Potential
users of digital music content, who are expected
to enter this market, are distributed across both
genders and broad generations, from youth to
those in middle age. They are characterized as
middle-level users in music consumption. Nonusers of digital music content are mainly women in higher age groups who are relatively inactive in terms of music consumption. The results illustrate
clear differences in demographic characteristics
and music consumption behavior. There are major
differences between consumers who have bolstered the computer-based, digital-music-content
market until now, and those who will support the
market from now on. These facts alone point to
the possibility that the current market is set to
undergo substantial changes in its nature. In order
to examine details of anticipated changes, we have
compared the three groups in their attitude and
mentality in listening to music.
Conclusion
As we have examined, Japan's digital-music-content market, which started off with the distribution of ring tones as a mobile phone service, has embraced the arrival of fully fledged digital music players and online stores, both designed to be used via computers, such as the iPod and the iTunes Music Store. From the viewpoint of hardware competition, the market has now entered a stage of combined development of mobile-phone-based devices and computer-based devices. This has brought about two contrasting consumption styles with distinctive characteristics (computer-based and mobile-phone-based consumption of digital music content) and, at the same time, has diversified people's styles of enjoying music.
People who use a computer-based means to enjoy digital music content have a self-contained style of consuming music in a specific context, loading a hard-disk music player with a greater amount of music from their personal CD collection than previously possible and enjoying songs in
random order. In contrast, people who use mobile-phone-based devices employ a mobile phone as a communal music player for playing 30-second tunes of high popularity and consume music as topics (information) for sharing various occasions with friends or enhancing the atmosphere.
At present, these styles are separate tendencies
and can be observed among users of hard-disk
music players and users of mobile phones as music players as two extremes. However, a steady
proliferation of hard-disk or USB flash-memory
music players may cause these styles to merge
on the side of individual users. Competition
between two types of devices has created two
distinctive styles of listening to music. Now, each
user may start using both of these devices at the
same time, hence adopting both styles alongside
each other. Such a user may eventually begin to
seek both of the styles in one of the two types of
devices, which may amount to hardware integration, brought about by the symbiosis of the two
different music-listening styles. Paying close attention to consumer behavior and practices in the future will then yield rich empirical data that can be used to develop and further elaborate the line of thought outlined in this study.
Further Reading
Institute for Information and Communications
Policy. (Eds.). (2005). Henbou suru contents
business. (Contents business.) Tokyo: Toyo keizai
shinpo sha.
Masuda, S. (2005). Sono ongaku no sakusha toha
dare ka. Tokyo: Misuzu shobo.
Masuda, S., & Taniguchi, F. (2005). Ongaku mirai
kei. Tokyo: Yosen sha.
Ministry of Internal Affairs and Communications.
(Eds.). (2005). Information and communications
in Japan 2005. Tokyo: Gyousei.
This work was previously published in Information Communication Technologies and Emerging Business Strategies, edited
by S. van der Graaf, pp. 59-75, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of
IGI Global)
Chapter 3.15
Abstract
Diffusion of radio frequency identification (RFID) promises to boost the added value of assistive technologies for mobile users. Visually impaired people may benefit from RFID-based applications that support users in maintaining spatial orientation (Mann, 2004) through provision of information on where they are and a description of what lies in their surroundings. To investigate this issue, we have integrated our development tool for mobile devices (namely MADE; Bellotti, Berta, De Gloria, & Margarone, 2003) with complete support for RFID tag detection, and implemented an RFID-enabled, location-aware tour guide. We have evaluated the guide in an
Introduction
Starting from the European Union co-funded
E-Tour project, we designed the tourist digital assistant (TDA) concept and developed multimedia
Margarone, 2002).
The tour guide provides multimedia contents,
added-value information, and location-based
services to the tourists. Added-value services are
implemented by integrating the mobile devices
with additional hardware and software tools such
as GPS, electronic compasses, wireless connectivity, digital cameras, written text input, databases,
and so forth.
See Figure 1 for snapshots of tourist guide
applications.
Relying on the argument that play is a powerful mediator for learning throughout a person's life, we developed the educational territorial-gaming concept in VeGame (Bellotti, Berta, De Gloria, Ferretti, & Margarone, 2003).
Figure 1. Snapshots from the Aquarium and Strada Nuova tour guides on PocketPC device
Harrison, 1999).
Rajani, &
Spasojevic,
2002). In Goker et al. (Goker, Watt,
Myrhaug Whitehead,
MADE Architecture
A typical MADE application consists of a set of pages containing multimedia and service objects. The micromultimedia services language (MSL) script specifies page layout and object appearance, synchronization, and user-interaction modalities. MSL scripts are interpreted at runtime by the M3P player, which manages presentation of contents and user interaction according to the instructions specified in the input MSL script. The M3P player relies on a two-layer architecture (see Figure 4) involving a high-level, platform-independent director and a low-level driver. The director is responsible for creating, initializing, and managing the objects that implement the language functionalities. In order to support incremental development of the player, M3P is composed of a set of modules. In particular, the
composed by a set of modules. In particular, the
MSL
Application
M3P Core
Data Structure
Director
A C++ object
corresponds to each
MSL components
instance
Driver
Operating System
(WinCE)
User
The fields of the RFID component are described as follows:
Period: A time period in milliseconds between two consecutive environmental scans to detect tags.
Repetition: A number of tag detection operations executed consecutively on each scanning action.
Id: A list of RFID tags that are of interest for the component.
Delay: A list of time frames, one for each interesting tag, during which already-identified tags are not identified again.
dBm: A list of signal strength values, one for each interesting tag, that specify thresholds for tag identification.
onFound: A list of events, one for each interesting tag, that the RFID component launches when a tag is identified.
Start: If a component launches this event on an RFID component, it starts the scanning of tags.
Stop: If a component launches this event on an RFID component, it stops the scanning of tags.
The component also takes a list of identifiers of the components to which information about identified tags is sent.
Dubendorfer, 2003): due to the collision problem, some tags can appear and disappear across sequential scans, generating a rapidly changing list of tag identifications. The MADE RFID sensing module allows the programmer to decide how to convert scan results into application events, handling this problem. The programmer can specify, through the delay field, a time period (for each interesting tag) starting from the detection. During this time, subsequent detection events of the same tag are discarded; the exact definition of this delay is application dependent. Applications with events that occur only once, like tourist guides for museums with a linear path, can have delay values set to infinite. Instead, in applications with events generated multiple times close to each other, like territorial games, the delay should be short or zero.
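As a rough sketch of this delay behaviour (not the actual MADE code; the class name and the numbers below are invented for the example), the following Python fragment discards repeated detections of the same tag until its per-tag delay has elapsed, mirroring the Delay field described above.

import time

class TagDebouncer:
    """Suppress repeated detections of the same RFID tag for a per-tag delay period."""

    def __init__(self, delays):
        # delays: mapping tag_id -> suppression period in seconds
        # (float("inf") means the tag is reported only once).
        self.delays = delays
        self.last_reported = {}

    def on_scan(self, tag_id, now=None):
        """Return True if an application event should be raised for this detection."""
        now = time.time() if now is None else now
        last = self.last_reported.get(tag_id)
        if last is not None and now - last < self.delays.get(tag_id, 0.0):
            return False                      # still inside the delay window: discard
        self.last_reported[tag_id] = now
        return True

# Museum-style linear tour: each tag fires once; territorial game: short delay.
debouncer = TagDebouncer({"flowerbed_12": float("inf"), "checkpoint_3": 2.0})
print(debouncer.on_scan("flowerbed_12", now=0.0))   # True  -> start the presentation
print(debouncer.on_scan("flowerbed_12", now=5.0))   # False -> duplicate detection discarded
print(debouncer.on_scan("checkpoint_3", now=0.0))   # True
print(debouncer.on_scan("checkpoint_3", now=3.0))   # True  -> delay has elapsed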
Currently, we have implemented the low-level
driver support for the iCARD Identec reader in
a PCMCIA card format (IDENTEC). This card
can be integrated in handheld, portable, or laptop computers to communicate with the iQ and
iD active RFID tags at a distance of up to 100
meters. The RF signal is in the UHF radio band
(915 MHz or 868 MHz), providing long-range
communication and high-speed transmission
rates for reliable data exchange.
The developed application concerns the research area of assistive technologies for visually impaired people. Such assistive applications have the potential to improve the quality of life of a large portion of the population (by 2020, there will be approximately 54 million blind persons over age 60 worldwide (WHO, 1997)).
Figure 7. a) The packaging of the multimedia guide in a leather case; b) Snapshots from the tests: users
visit EuroFlora 2006 supported by the guide and touch some dedicated plants
Design Methodology
The necessity of combining the flexibility and multimedia potential of a mobile device with the extreme simplicity of interaction required for use by a wide audience (including visually impaired people) involves facing three main HCI issues:
The basic element of the interface is the multimedia card. A multimedia card corresponds to each subject of a presentation (e.g., a flower species). Each multimedia card provides, in audio format, texts specifically written for visually impaired people (i.e., highlighting olfactory and
Field evaluation
Experimental Framework
Real evaluation of advanced mobile device applications and of the impact on their intended
population is difficult and costly. Evaluation
requires analysis of real users, in a real context
of use. In order to adequately evaluate interaction with computational resources, test users should use a fully operational, reliable, and robust tool, not just a demonstration prototype (Abowd &
Mynatt, 2000). Hence, it is important to perform
Preexhibition Tests
In an early test session, performed two days before the official opening of the exhibition, when some stands were already ready (enabling a realistic test), we prepared a prototype software version that was used by five selected visually impaired users visiting 30% of the total exhibition area. We followed and interviewed the users in this phase, in order to understand shortcomings, defects and weaknesses, and strong points of the product. In this phase, we understood and solved some problems with the user interface and contents, such as the most suitable assignment of buttons to presentation control functionalities and the length of the descriptions. Some test users found the long
silence time between a presentation activation
and the next one (i.e., the period of time in which
the user is walking through areas not covered
by RFID tags) frustrating. We partially tackled
this issue by periodically providing a message
saying that the user is currently in an area not
close to a POI.
Ecological Tests
One hundred and twenty blind people used
the guide during the exhibition. Sixty of them
(aged from 12 to 78 years old) participated in
[Table: test results; averages 4.00, 4.25, and 4.20 with standard deviations 0.64, 0.75, and 0.66; durations of 201 minutes and 30 minutes (row and column labels not preserved).]
Conclusion
The ubiquitous presence of smart tags will offer,
in the near future, a critical mass of information,
References
Abowd, G. D., & Mynatt, E. D. (2000). Charting past, present, and future research in ubiquitous computing. ACM Transactions on Computer-Human Interaction, 7(1), 29-58.
Beck, A. (1993). User participation in system design: Results of a field study. In M. J. Smith, Human-computer interaction: Applications and case studies (pp. 534-539). Amsterdam: Elsevier.
Bellotti, F., Berta, R., De Gloria, A., Ferretti, E.,
& Margarone, M. (2003).
VeGame: Field exploration of art and history in Venice. IEEE Computer,
26(9), 48-55.
Bellotti, F., Berta, R., De Gloria, A., Ferretti, E., & Margarone, M. (2004). Science game: Mobile gaming in a scientific exhibition. eChallenges e2004: Fourteenth International Conference on eBusiness, eGovernment, eWork, eEurope 2005 and ICT, Vienna.
Bellotti, F., Berta, R., De Gloria, A., & Margarone, M. (2002). User real-world ubiquitous location systems. Communications of the ACM, Special Issue on The Disappearing Computer, 48(3), 36-41.
Bruno, R., & Delmastro F., (2003). Design and
analysis of a bluetooth-based indoor localization
system. Personal Wireless Communications,
2775, 711-725.
Carroll, J. M. (1997). Human-computer interaction: Psychology as a science of design. Inter-
from http://www.who.int/archives/inf-pr-1997/
en/pr97-15.html
Xu, Z., & Jacobsen, H. A., (2005). A framework
for location information processing. 6th International Conference on Mobile Data Management
(MDM05). Ayia Napa, Cyprus.
Key terms
Chi-Square Test: The Chi-square is a test of
statistical significance for bivariate tabular analysis (crossbreaks). This test provides the degree of
confidence we can have in accepting or rejecting
a hypothesis.
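For illustration only, the following Python fragment runs a chi-square test of independence on a small, invented 2x2 crossbreak using the SciPy library; the counts are hypothetical and do not come from the study reported in this chapter.

from scipy.stats import chi2_contingency

# Invented crossbreak: rows = task completed / not completed, columns = two user groups.
observed = [[18, 15],
            [2, 5]]

chi2, p_value, dof, expected = chi2_contingency(observed)
print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_value:.3f}")
# A small p-value gives confidence in rejecting the hypothesis that the two variables are independent.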
Ecological Context: The ecological context is a set of conditions for a user test experiment that gives it a degree of validity. To possess ecological validity, an experiment with real users must use methods, materials, and settings that approximate the real-life situation under study.
Human-Computer Interaction: Human-computer interaction (HCI), also called man-machine interaction (MMI) or computer-human interaction (CHI), is the research field that focuses on the interaction modalities between users and computers (the interface). It is a multidisciplinary subject, relating to computer science and psychology.
Location-Aware Computing: Location-aware computing is a technology that uses the location of people and objects to derive contextual information with which to enhance application behaviour. There are two ways to acquire infor-
This work was previously published in the Handbook of Research on User Interface Design and Evaluation for Mobile Technology, edited by J. Lumsden, pp. 657-672, copyright 2008 by Information Science Reference, formerly known as Idea Group
Reference (an imprint of IGI Global).
Chapter 3.16
Plagiarism, Instruction,
and Blogs
Michael Hanrahan
Bates College, USA
Abstract
This chapter takes as its point of departure the
Colby, Bates, and Bowdoin Plagiarism Project
(http://ats.bates.edu/cbb), which sought to approach the problem of undergraduate plagiarism
as a pedagogical challenge. By revisiting the decision to publish the project's content by means of a weblog, the article considers the ways in which
weblogs provide a reflective tool and medium
for engaging plagiarism. It considers weblog
practice and use and offers examples that attest
to the instructional value of weblogs, especially
their ability to foster learning communities and
to promote the appropriate use of information and
intellectual property.
Introduction
Alarmist news accounts of student dishonesty and
cheating abound. More often than not, such stories
describe how universities, colleges, and even high
schools have resorted to plagiarism detection
Background
Over the past several years, national, regional,
local, and campus newspapers across the globe
have regularly featured articles on student cheating. While academic dishonesty takes any number
of forms (using a PDA, cell phone, or crib notes
during an exam; submitting unoriginal work
copied from an existing publication, cut and
pasted from an online source, or purchased from
a paper mill; or simply peering over a classmate's
shoulder during a quiz), plagiarism has emerged
as the most visible form of student cheating. In
many ways, the term threatens to subsume all
other categories of academic dishonesty. A passing visit to the statistics page at Turnitin's Web site (plagiarism.org) reinforces this tendency. Turnitin, the world's leading plagiarism detection service, claims that "a study by The Center for Academic Integrity (CAI) found that almost 80 percent of college students admit to cheating at least once." Besides generalizing and rounding up the center's published summary ("On most campuses, over 75 percent of students admit to some cheating"), Turnitin's claim isolates a common tendency to conflate a number of dishonest
behaviors with plagiarism. Donald McCabe
(personal communication, August 4, 2004)
explains that the 75 percent figure published
by the CAI represents about a dozen different
behaviors and was obtained in a written survey.
Solutions and
Recommendations:
Building Learning
Communities via Weblogs
Blogs are powerful and flexible publishing tools:
they publish content rapidly and easily; they provide an archive for content that is readily searchable by date, subject, or keyword; and they can
also publish their content in a number of ways,
including dedicated Web sites as well as RSS
feeds that can populate other Web sites, weblogs,
aggregators, e-mail clients, and Web browsers.
That which has secured their popularity and
wide reception (the rapid creation, publication,
and circulation of information) also represents
Conclusion
An increased use and understanding of media in
the curriculum, moreover, may very well allow
faculty to harness the creative energies of students in a way that deals with plagiarism in both
practical and theoretical terms readily understood
by students. Current wisdom on how to avoid
plagiarism has emphasized the need to rethink
written assignments; for example, essays should
be conceived of as ongoing processes consisting
of specific, discrete stages or components, all of
which are submitted for review, evaluation, and
assessment, rather than a single finished product
submitted in its entirety only once. In rethinking assignments, instructors may also want to
begin to rethink what writing is and to encourage
non-traditional forms of writing. I have in mind
here the creation of fictional and non-fictional
narratives, reports, or accounts by means of multimedia: digital video and audio, or computer animation and graphics, or any combination of these
and other media. Just as the weblog has emerged
as a reflective tool for considering plagiarism, a
media-rich learning environment would allow
students to begin to understand plagiarism in new
and perhaps more compelling ways. In a recent
essay on plagiarism, the novelist Jonathan Lethem
(2007) describes what it is like to be cut adrift in
our contemporary media environment:
The world is a home littered with pop-culture
products and their emblems. I also came of age
swamped by parodies that stood for originals yet
mysterious to me ... I'm not alone in having been
born backward into an incoherent realm of texts,
products, and images, the commercial and cultural
environment with which we've both supplemented
and blotted out our natural world. I can no more
claim it as mine than the sidewalks and forests
of the world, yet I do dwell in it, and for me to
stand a chance as either artist or citizen, I'd probably better be permitted to name it.
References
Adar, E. & Adamic, L. A. (2004). Tracking
information epidemics in blogspace. Retrieved
October 5, 2006 from http://www.hpl.hp.com/research/idl/papers/blogs2/index.html
Adar, E., Zhang, L., Adamic, L. A., & Lukose, R.
M. (2004). Implicit structure and the dynamics
of blogspace. Retrieved October 2, 2006 from
http://www.hpl.hp.com/research/idl/papers/
blogs/index.html
Anderson, B. (1991). Imagined communities: Reflections on the origin and spread of nationalism.
London: Verso.
Asaravala, A. (2004). Warnings: Blogs can be
infectious. Wired News. Retrieved October 2,
2006 from http://www.wired.com/news/culture/0,1284,62537,00.html
Beale, P. (2006). E-mail, October 6.
Blog Epidemic Analyzer. Retrieved October 5,
2006 from http://www.hpl.hp.com/research/idl/
projects/blogs/index.html
Brown, J. S. & Duguid, P. (1996). The social
life of documents. First Monday, 1(1). Retrieved
This work was previously published in Student Plagiarism in an Online World: Problems and Solutions, edited by T. Roberts,
pp. 183-193, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI
Global).
Chapter 3.17
Abstract
Web 2.0 technologies empower individuals to
contribute thoughts and ideas rather than passively survey online content and resources. Such
participatory environments foster opportunities
for community building and knowledge sharing,
while encouraging the creation of artifacts beyond
what any single person could accomplish alone.
In this chapter, we investigate the emergence and
growth of two such environments: the highly
popular Wikipedia site and its sister project,
Wikibooks. Wikipedia has grown out of trends for
free and open access to Web tools and resources.
While Wikipedians edit, contribute, and monitor
distinct pieces of information or pages of documents, Wikibookians must focus on larger chunks
Introduction
Thomas Friedman, in his 2005 book, The World
is Flat, talks about 10 forces that have flattened
the world in terms of economic globalization.
The word "flat" acts as a metaphor to symbolize the leveled playing field on a global scale. In Friedman's (2005) view, when the playing field is leveled, everyone can take part. And he means
Background
Brandon Hall (2006) defines a wiki as "a collection of Web pages that can be easily viewed and modified by anyone, providing a means for sharing and collaboration." These are open-ended, generative, and unstructured environments (Honegger, 2005; Leuf & Cunningham, 2001; Lio, Fraboni, & Leo, 2005). Pioneered by Ward Cunningham in 1995, wikis are online spaces for recording information and sharing knowledge, typically in
collaboration with others. Each modification is
recorded as the history of a document. The history page records the time of change, the person
who made the change, and the changes that were
made. Such a mechanism not only permits page
retraction by anyone, it also behaves as a podium
for reputation management. In addition, the history page permits open examinations of each
revision, allowing each version to be compared
and contrasted by anyone.
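A minimal Python sketch (not any real wiki engine; the class and method names are invented here) can make this mechanism concrete: every edit is stored with its time and author, so any two revisions can later be compared openly.

import difflib
import time

class WikiPage:
    """Toy wiki page that keeps every revision, as a wiki history page does."""

    def __init__(self, title):
        self.title = title
        self.revisions = []            # list of (timestamp, author, text)

    def edit(self, author, new_text):
        self.revisions.append((time.time(), author, new_text))

    def current(self):
        return self.revisions[-1][2] if self.revisions else ""

    def diff(self, older, newer):
        """Unified diff between two revision indexes, open for anyone to examine."""
        old = self.revisions[older][2].splitlines()
        new = self.revisions[newer][2].splitlines()
        return "\n".join(difflib.unified_diff(old, new, lineterm=""))

page = WikiPage("Places to eat")
page.edit("alice", "Cafe A is good.")
page.edit("bob", "Cafe A is good.\nCafe B has late hours.")
print(page.diff(0, 1))   # shows exactly what the second edit changed; author and time sit in page.revisions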
Many universities have picked up the wiki
fever and started using its functions for information sharing. For example, Stanford has an institutionalized wiki wherein students can contribute
information on places to eat, work out, study,
socialize, and so forth. (Campus Technology,
2006b). As this Web site indicates, there is now a
wave of student-contributed wiki resources. Similarly, MIT has created the Center for Collective
Intelligence, where people from around the planet
could come and solve huge scientific, social, and
business problems (Campus Technology, 2006a).
The underlying belief of these wiki projects
indicates that, collectively, the human race can
act more powerfully than it can at an individual
level. As a prime example of this principle, in
early February 2007, Penguin Books announced
Studying Wikibookians
Findings
The survey and interview data helped us understand the challenges, frustrations, and successes
of Wikibookians. From the 8 participants and
12 e-mail interview questions, five key themes
emerged. These themes were as follows:
1.
2.
3.
4.
5.
Control
Looking at the issue of authority at a deeper level, we further discovered concerns regarding
control over contribution. Who has the right to
contribute? Who can decide who has the right to
contribute? Who can decide whose contribution
is superior to others? In other words, is it true that
in the world of Wiki, everyone stands equally?
Certainly some did not believe so:
The biggest problem I see are the control guilds
that have sprung up. They can be the Christian
groups, or the anti-metaphysicists that go around
labeling everything pseudoscience because they
lack the ability to see the difference between the
two. (Participant 2)
Most participants addressed the issue about
control over contribution from the perspective
Collaboration
Coordination
Working in a Wikibook project requires coordination, communication, an understanding of the
collaborative writing process, and a great deal
of time management. In particular, Participant 4
shared many of his perspectives about the inner
workings of a Wikibookian and the process of
Wikibook creation:
Development has been slower than expected. I
expect to be able to finish the book in about two years.
My experience has been that Wikimedia projects
promote much more formal rules of etiquette, but
a much less structured writing process.
Maintaining a wiki is much more a challenge for
social issues than for technological issues. Above
all, a collaborative writing community must have
a common vision, specifically a vision to create
a written work.
Interestingly, he suggested that software be
created that makes it easy to post a positive contribution and more difficult to post something
negative.
Communication
Clearly, issues about communication stood out in
many of the Wikibookian observations and suggestions. Some of the comments included:
Resolving Disagreements
Even though some seemed to be lone writers,
disagreements are bound to happen when working with strangers from around the world. We
asked the Wikibookians about how they resolved
differences. Again, communication provided the
bridge to reconcile different opinions, including
mechanisms such as the talk page, a mediator,
private message, or revert the changes. However,
Wikibookians differed in how they would approach or resolve the differences. As the following
quotes indicate, some would revert the changes,
some would discuss them first, and still others
[Figure: distribution of responses across Strong Disagree, Disagree, Agree, and Strong Agree; recoverable values 60.56%, 19.72%, 15.49%, and 4.23% on a 0-60% scale.]
Academic Acceptance
In their responses to our online survey questions,
Wikibookians seemed to strongly believe in Wikibooks as an online library (64%), a learning tool
(40%), and a supplement to classroom or training
resources (36%). In addition, it was a place for
communities of writers (60%) and learners (34%)
to form. However, making Wikibooks a teaching
and learning tool and enabling the educational
community to embrace it proved a difficult task.
Academics might be the largest potential users of
Wikibooks; they might hold the key to Wikibooks' success. The following quotes indicate the shared concerns and hopes of Participants 6 and 7:
Future trends
Each of the previously mentioned themes provides interesting data for further wiki-related research and development, especially research related to Wikibooks. Table 1 summarizes our major findings
according to the five themes.
Table 1. Comparing Wikipedia and Wikibooks

Launch date: Wikipedia, January 15, 2001; Wikibooks, July 10, 2003.

Community: Wikipedia is a community of practice for millions of people, with myriad subcommunities within it for different languages, topics, resources, and tools. Wikibooks has communities of practice for each book project, as well as an overall community of Wikibookians at the staff lounge within Wikibooks.

Resources created: Wikipedia is an information resource for people to look up; it is comprised of many linked, yet individual pieces. Wikibooks creates usable texts, guidebooks, and reference materials; the final product should be coherent.

Site statistics: Wikipedia: 7,483,939 pages; 6.4 million articles; 1,629,257 articles in English; 250 languages; 110,836,256 edits; 14.81 edits per page; 700,001 media files; 3,511,411 registered users; 1,111 system administrators (Wikipedia, 2007d). Wikibooks: 67,399 pages; 23,790 modules or chapters; over 1,000 books, the largest category in English; 120 languages; 759,033 page edits; 11.26 edits per page; 50,582 registered users; 33 system administrators (Wikibooks, 2007b).

Views of contributors: A Wikibookian is someone who coordinates or contributes to a Wikibook project.

Participation or contribution criteria: (cell text not preserved).

Tools: Wikipedia provides tools for tracking the history of document changes, talk pages, edit pages, hyperlinking, lounges, site statistics, and so forth. Wikibooks has the same tools as Wikipedia but could also use book planning, outlining, and overview tools, enhanced discussion tools, mark-up and commenting tools, enhanced tracking of book contributions and contributors, and ways of fostering collaboration and interaction among those in a Wikibook project.

User community: Wikipedia, massively millions; Wikibooks, extensive yet far more limited membership than Wikipedia.

Experts: Wikibooks, none; open to all to contribute or edit a document.

Ownership: Wikibooks is a collection of books to which one can contribute or read.
Conclusion
As we pointed out in this chapter, wikis offer a
unique window into knowledge negotiation and
References
Alexa. (2007). Results for Wikipedia. Retrieved
February 7, 2007, from http://www.alexa.com/
search?q=wikipedia
Brown, J. S. (2006). Relearning learningApplying the long tail to learning. Presentation at
MIT iCampus. Retrieved February 9, 2007, from
http://www.mitworld.mit.edu/video/419
Bruns, A., & Humphreys, S. (2005). Wikis in
teaching and assessment: The M/Cyclopedia
project. Paper presented at the WikiSym 2005.
Retrieved February 5, 2007, from http://www.
wikisym.org/ws2005/proceedings/paper-03.pdf
Bryant, S. L., Forte, A., & Bruckman, A. (2005).
Becoming Wikipedian: Transformation of participation in a collaborative online encyclopedia. In M. Pendergast, K. Schmidt, G. Mark, &
M. Acherman (Eds.), Proceedings of the 2005
International ACM SIGGROUP Conference on
Supporting Group Work, GROUP 2005, Sanibel
Island, FL, November 6-9 (pp. 1-10). Retrieved
February 7, 2007, from http://www-static.
cc.gatech.edu/~aforte/BryantForteBruckBecomingWikipedian.pdf
Additional reading
Bargh, J. A., & McKenna, K. Y. A. (2004). The
Internet and social life. Annual Review of Psychology, 55, 573-590.
Bezroukov, N. (1999b). A second look at the cathedral and the bazaar [Electronic Version]. First
Monday, 4(12). Retrieved March, 15, 2007, from
http://firstmonday.org/issues/issue4_12/bezroukov/index.html.
Bonk, C. J. (2001). Online teaching in an online
world. Bloomington, IN: CourseShare.com.
Bonk, C. J., & Cunningham, D. J. (1998). Searching for learner-centered, constructivist, and sociocultural components of collaborative educational
learning tools. In C. J. Bonk & K. S. Kim (Eds.),
Electronic collaborators: Learner-centered
technologies for literacy, apprenticeship, and
discourse (pp. 25-50). NJ: Erlbaum.
Bonk, C. J., & Kim, K. A. (1998). Extending
sociocultural theory to adult learning. In M. C.
Smith & T. Pourchot (Ed.), Adult learning and
development: Perspectives from educational
psychology (pp. 67-88). Lawrence Erlbaum Associates.
Bonk, C. J., Wisher, R. A., & Nigrelli, M. L.
(2004). Learning communities, communities of
practice: Principles, technologies, and examples.
In K. Littleton, D. Miell & D. Faulkner (Eds.),
Learning to collaborate, collaborating to learn
(pp. 199-219). Hauppauge, NY: Nova Science
Publishers.
Brown, J. S. (2006, December 1). Relearning
learningApplying the long tail to learning.
Presentation at MIT iCampus. Retrieved February 9, 2007, from http://www.mitworld.mit.
edu/video/419
Cole, M., & Engestrom, Y. (1997). A cultural-historical approach to distributed cognition. In G.
Salomon (Ed.), Distributed cognitions: Psycho-
Wertsch, J. V. (1991). Voices of the mind: A sociocultural approach to mediated action. Cambridge,
MA: Harvard University Press.
This work was previously published in Social Information Technology: Connecting Society and Cultural Issues, edited by T.
Kidd and I.L. Chen, pp. 253-272, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference
(an imprint of IGI Global).
Chapter 3.18
Introduction
Tim Berners-Lee, the inventor of the World Wide
Web, envisioned it as a place where people can communicate by sharing their knowledge in a pool, putting their ideas in as well as taking them out (Berners-Lee, 1999). For much of its first
decade, the Web was, however, primarily a place
where the majority of people took ideas out rather
than putting them in. This has changed. Many
social software services now exist on the Web
to facilitate social interaction, collaboration and
information exchange. This article introduces wikis, jointly edited Web sites and Intranet resources
that are accessed through web browsers. After a
brief overview of wiki history, we explain wiki
technology and philosophy, provide an overview
of how wikis are being used for collaboration,
and consider some of the issues associated with
management of wikis before considering the
future of wikis.
In 1995, an American computer programmer,
Ward Cunningham, developed some software to
help colleagues quickly and easily share computer
programming patterns across the Web. He called
the software WikiWikiWeb, after the Wiki Wiki
Wikipedia (www.wikipedia.org). And wiki hosting services and application service providers
(ASPs) were established to enable individuals and
organizations to develop wikis without the need to
install and maintain wiki software themselves.
By July 2006, nearly 3,000 wikis were indexed at the wiki indexing site www.wikiindex.
org, popular wiki hosting services such as Wikia
(www.wikia.org) and seedwiki (www.seedwiki.
org) hosted thousands of wikis between them, and
Wikipedia had more than four and a half million
pages in over 100 languages. Moreover, wikis
were increasingly being used in less public ways,
to support and enable collaboration in institutions
ranging from businesses to the public service and
not-for-profit organizations.
Content
A template which defines the layout of the
wiki pages
Wiki engine, the software that handles all
the business logic of the wiki
Figure 1. How wikis work (Adapted from Klobas & Marlia, 2006)
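As a very small sketch of the arrangement in Figure 1 (the template syntax and function name here are assumptions for illustration, not any particular wiki engine's API), the engine simply merges stored content into a page template before the page is sent to the browser:

def render(template: str, title: str, content: str) -> str:
    """Toy wiki engine step: fill a page template with stored content."""
    return template.replace("{title}", title).replace("{content}", content)

template = "<html><head><title>{title}</title></head><body>{content}</body></html>"
content_store = {"HomePage": "Welcome to the project wiki."}

print(render(template, "HomePage", content_store["HomePage"]))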
Content
Wikis usually adopt "soft security": social conventions that assume that most people behave in good faith, establish that users (rather than the
software or a system administrator) operate as
peer reviewers of content and behavior, allow that
people might make mistakes but that mistakes
can be corrected, and emphasize the importance
of transparency in their management (Meatball,
2006). Together, these technical features and social
principles provide a supportive environment for
human collaboration.
References
Andrus, D. C. (2005). The wiki and the blog: Toward a complex adaptive intelligence community.
Studies in Intelligence, 49(3). Retrieved October
26, 2006, from http://ssrn.com/abstract=755904
Angeles, M. (2004). Using a wiki for documentation and collaborative authoring. Retrieved
November 1, 2006, from http://www.llrx.com/
features/librarywikis.htm
Godwin-Jones. (2003). Blogs and wikis: Environments for on-line collaboration. Language,
Learning and Technology, 7(2), 12-16.
Berners-Lee, T. (1999). Transcript of Tim BernersLees talk to the LCS 35th anniversary celebrations, Cambridge, Massachusetts, 14 April 1999.
Retrieved October 26, 2006, from http://www.
w3.org/1999/04/13-tbl.html
Tapscott, D., & Williams, A. D. (2006). Wikinomics: How mass collaboration changes everything.
Portfolio Hardcover.
Turnbull, G. (2004). Talking to Ward Cunningham about wikis. Luvly, Retrieved November 1,
2006, from http://gorjuss.com/luvly/20040406wardcunningham.html
Wagner, C. (2004). Wiki: A technology for conversational knowledge management and group
collaboration. Communications of the Association
for Information Systems, 13, 265-289.
Wenger, E., McDermott, R., & Snyder, W. M.
(2002). Cultivating communities of practice. Boston: Harvard Business School University Press.
Key Terms
This work was previously published in the Encyclopedia of E-Collaboration, edited by N. Kock, pp. 712-717, copyright 2008
by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.19
Academic Weblogs as
Tools for E-Collaboration
Among Researchers
María José Luzón
University of Zaragoza, Spain
Introduction
Although scientific research has always been a
social activity, in recent years the adoption of Internet-based communication tools by researchers
(e.g., e-mail, electronic discussion boards, electronic mailing lists, videoconferencing, weblogs)
has led to profound changes in social interaction
and collaboration among them. Research suggests that Internet technologies can improve and
increase communication among noncollocated
researchers, increase the size of work groups,
increase equality of access to information by
helping to integrate disadvantaged and less established researchers, help to coordinate work
more efficiently, help to exchange documents
and information quickly (Carley & Wendt, 1991; Nentwich, 2003). There is abundant research on
new forms of group work originated from the use
of computer technologies. Carley and Wendt (1991)
use the term "extended research group" to refer to very large, cohesive, and highly cooperative
Background
The term Weblog was coined by Jorn Barger in
1997 to refer to personal Web sites that offer frequently updated information, with commentary
and links. Blood (2002) classifies blogs into two
styles: the filter type, which includes links pointing to other sites and comments on the information
on those sites, and the personal-journal type, with
more emphasis on personal self-expressive writing. There are many other types of blogs described
in the literature, defined on the basis of different
criteria; for example, knowledge blogs (k-blog),
community blogs, meta-blogs.
The capabilities of blogs make them helpful tools for communication between members
of a community or organisation. Some types
of weblogs have originated as an answer to the
communicative needs of specific communities; for example, knowledge blogs, weblogs for
personal knowledge publishing. Kelleher and
Miller (2006) describe knowledge blogs as the
online equivalent of professional journals used
by authors to document new knowledge in their
disciplines. A related concept is that of personal
knowledge publishing, defined by Paquet (2002)
as an activity where a knowledge worker or researcher makes his observations, ideas, insights,
interrogations, and reactions to others' writing
publicly in the form of a weblog. Many corporate and academic blogs make use of capabilities
that afford collaboration: they enable scholars to
communicate with a wide community, fostering
peer review and public discussion with researchers from different disciplines. These weblogs
have a precedent in what Harnad (1990) terms "scholarly skywriting": using multiple e-mail
and topic threaded Web archives (e.g., electronic
discussion) to post information that anybody can
see and add their own comments to.
There are many types of academic blogs (blogs
from journal editors, individual scholars' blogs, research groups' blogs, PhD blogs), each of them
used for different purposes. For instance, while
the main purpose of the weblogs implemented
by universities is discussion, weblogs by PhD
students are mainly used to comment on the day's
progress and on the process of PhD writing, and
blogs from journal editors are usually filter blogs,
which provide links to articles or which comment
on news related to the journal topic.
The uses of weblogs in research have been
discussed in several papers and blog posts
(Ameur, Brassard, & Paquet, 2003; Efimova,
2004; Efimova & de Moor, 2005; Farmer, 2003;
Mortensen & Walker, 2002; Paquet, 2002). These
researchers depict blogs as facilitating scientific
enquiry in two ways: (1) they help to access and
manage content, through features such as archives,
RSS (an automated system that enables bloggers
to syndicate their content to other blogs), searchable databases, post categories; and (2) they are
tools for collaboration, through communication
and network features. These features include
hyperlinks, comments, trackbacks (records of
the Web address of the blogs that have linked to
a blog posting), RSS, or blogrolls (a list of blogs
the author usually reads, and that, therefore,
deal with similar topics). But the most important
ingredient for collaboration is the bloggers
perception of blogs as a means to point to, comment on, and circulate information and material
from other blogs.
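As one hedged illustration of how such syndicated content is consumed, the Python fragment below reads a weblog's RSS feed with the third-party feedparser library; the feed URL is a placeholder, not a real research blog.

import feedparser   # third-party library: pip install feedparser

# Placeholder address; any weblog's RSS or Atom feed URL would go here.
feed = feedparser.parse("https://example.org/research-blog/rss.xml")

for entry in feed.entries[:5]:
    # Each syndicated post carries its title, link, and date, which an aggregator,
    # another blog, or a blogroll reader can point to or republish.
    print(entry.get("title"), "-", entry.get("link"))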
As a tool for collaborative research, blogs have
several uses: (1) supporting community forming; (2) helping to find other competent people
with relevant work; (3) facilitating connections
E-Collaborating Through Academic Weblogs
We used a corpus of 100 academic English-language weblogs, collected between January and
March, 2006, in order to analyse how they are
used as tools for e-collaboration and how the elements of weblogs contribute to this purpose. Many
academic weblogs are not intended as collaborative tools and thus do not include social software.
In other cases, although comments are allowed,
interactivity is very limited, and the weblogs are
not really used as conversation tools. Therefore, for
the purpose of this research, we have selected only
active weblogs that include and make use of the
comment feature. Drawing on previous research
on blog analysis (e.g., Herring et al., 2004) and on
our initial inspection of the blogs of the corpus,
we identified the functional properties that make
interaction and collaboration possible (e.g., comments, links). Then, we analysed the function of
the 15 most recent entries in each blog (e.g., ask
for feedback), the function of the comments to
these entries (e.g., disagree with a post), and the
types of links included in the postings and comments (e.g., to other blogs, to news sites, to Web
sites by others, to the blogger's own site).
The analysis showed that academic weblogs
can be used to support the following forms of
collaboration (or collaboration stages):
Weblogs share features of personal and public genres. As personal genres, they are used by
researchers to keep records of their thoughts,
ideas or impressions while carrying out research
activities. They also have an open nature, since
anybody can access and comment on the author's
notes, and they are in fact intended to be read. By
revealing the research a researcher is currently
involved in and, at the same time, providing tools
for immediate feedback on that research, weblogs
facilitate collaboration. In addition, since weblogs
are open to the public, not restricted to peers
working in the same field, they facilitate contact
among scholars from different disciplines and
invite interdisciplinary knowledge construction
(Ameur et al., 2003).
Blogrolling lists enable readers of a weblog to
find peers with relevant work. These lists not only point to related work, but also function as
signs of personal recommendation, which help
others to find relevant contacts faster (Efimova,
2004). Links and comments are also useful to expand a researcher's community. Other researchers
read the postings in the research blog and leave
comments, usually including links to their own
research. Authors of weblogs can also include
links to articles on similar work carried out by
other researchers, thus enabling the readers to get
into contact with them.
Many researchers use the comment feature to
explain that they are working on areas similar to
the ones dealt with in the original posting and to
suggest the possibility of collaboration or contact,
in a more or less direct way (e.g., "I would like to keep in touch about your findings"). Community
blogs are sometimes used by bloggers to summarize briefly their research and invite collaboration
and contact with others.
Future Trends
Weblogs are a genre whose capabilities offer great potential for collaboration among researchers. However, there are still few academics who
engage in blogging, and in many cases academic
blogs are used for purposes other than collaboration. Several reasons have been suggested: the risk
of sharing information and having ideas stolen
or attacked before the research is published, the
fear of damaging credibility, the time that blogging takes away from more traditional research
activities (Lawley, 2002; Mortensen & Walker,
2002).
Since the weblog is quite a recent genre,
there is little research on how weblogs are used
for enquiry and knowledge creation. Their open nature makes weblogs appropriate for non-established researchers, who can thereby collaborate with peers they don't know, and for researchers who seek collaboration with a worldwide, non-discipline-restricted community of researchers. There
are many different types of weblogs written by
Conclusion
References
Blood, R. (2002). The weblog handbook: Practical advice on creating and maintaining your blog.
Cambridge, MA: Perseus Publishing.
Carley, K., & Wendt, K. (1991). Electronic mail and
scientific communication: A study of the SOAR
extended research group. Knowledge: Creation,
Diffusion, Utilisation, 12(4), 406-440.
Efimova, L. (2004). Discovering the iceberg of
knowledge work. In Proceedings of the Fifth
European Conference on Organisational Knowledge,
Learning, and Capabilities (OKLC 2004). Retrieved February 20, 2006, from https://doc.telin.
nl/dscgi/ds.py/Get/File-34786
Efimova, L., & de Moor, A. (2005). Beyond
personal webpublishing: An exploratory study
of conversational blogging practices. Proceedings of the 38th Hawaii International Conference
on System Sciences (HICSS-38) (p. 107a). Los
Alamitos, CA: IEEE Press.
Farmer, J. (2003). Personal and collaborative
publishing (PCP): Facilitating the advancement
of online communication and expression within
Paquet, S. (2002). Personal knowledge publishing and its uses in research. Retrieved March 12,
2006, from http://radio.weblogs.com/0110772/
stories/2002/10/03/personalKnowledgePublishingAndItsUsesInResearch.html
Walsh, J. P., & Maloney, N. G. (2002). Computer network use, collaboration structures and
productivity. In P. Hinds & S. Kiesler (Eds.),
Distributed work (pp. 433-458). Cambridge, MA:
MIT Press.
Key Terms
Blogroll: A list of links to Web pages the
author of a blog finds interesting.
Comment: A link to a window where readers
can leave their comments or read others' comments and responses from the author.
Personal Knowledge Publishing: The use
of Weblogs by knowledge workers or researchers
to make their observations, ideas, insights and
reactions to others' writing public.
Research E-Collaboration: The use of
e-collaborating technologies in order to share
information and discuss issues which contribute
to advancing knowledge in a specific area.
Scholarly Skywriting: Using multiple e-mail
and topic threaded Web archives (e.g., electronic
discussion) to post information that anybody can
see and add their own comments to.
Trackback: A link to notify the blogger
that his/her post has been referred to in another
blog.
Weblog: A frequently updated Web page,
consisting of many relatively short postings,
organized in reverse chronological order, which
tend to include the date and a comment button
so that readers can answer.
This work was previously published in the Encyclopedia of E-Collaboration, edited by N. Kock, pp. 1-6, copyright 2008 by
Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.20
Assessing Weblogs as
Education Portals
Ian Weber
Texas A&M University, USA
Introduction
Education is one of the key sectors that has benefited from the continuous developments and
innovations in information and communication
technology (ICT). Web-based facilities now
provide a medium for learning and a vehicle for
information dissemination and knowledge creation (Khine, 2003). Accordingly, developments
in ICTs provide opportunities for educators to
expand and refine frameworks for delivering
courses in innovative and interactive ways that
assist students in achieving learning outcomes (Kamel
& Wahba, 2003). However, the adoption of ICTs
has also created tensions between traditional
control and directiveness in teaching and student-centred learning, which relies on flexibility,
connectivity, and interactivity of technology-rich
environments.
This chapter examines the introduction of
Web-based technologies within a media studies
course. The objective was to establish a community of learning, which provides students with a
Background
Early approaches to integrating ICTs into education environments emerged from conventional
learning models, originating from the objectivist
approach in which a reality exists and experts
instruct individuals of that reality (Belanger &
Slyke, 2000). However, such teacher-centric,
information-based approaches failed to adequately prepare students to become independent learners. Responding to these limitations,
educators embraced learner-centric approaches
such as constructivism, which lent weight to
the empowerment of individuals to take charge
of their own learning environments. As Wilson
(1996) suggests, the constructivist movement in
instructional design emphasized the importance
of providing meaningful, authentic activities that
can help the learner to construct understandings
and develop skills relevant to solving problems and
not overloading them with too much information.
Solis (1997) supports this position, suggesting that student-centred learning relies on "groups of students being engaged in active exploration, construction, and learning through problem solving, rather than in passive consumption of textbook materials" (p. 393).
In spite of these favorable positions, Khine
(2003) warns that creating such learning environments supported by ICTs can be intrinsically problematic. Accordingly, it is critically
important that careful planning and design are employed at the early stages of instructional design to provide proper support and guidance, as well as rich resources and tools compatible with each context. When adequate consideration
is given to new learning and teaching strategies
that incorporate ICTs, real opportunities exist
for educators to provide students with a dynamic
Enhancing the
Undergraduate Experience
Texas A&M University's (TAMU) focus on "Enhancing the Undergraduate Experience" provides
a series of strategies to increase and expand opportunities for students to be actively involved in
learning communities. Establishing such an experience is guided by the recognition that students
[Table: undergraduate curriculum framework. Columns: courses, content, outcomes. Rows: freshman foundation (introduce competencies); sophomore, junior, or both; senior (career or graduate school). Outcomes: critically analyze, personal integrity, contribute to society, communication.]
• facilitate community building and understandings of social networking and citizenship at local, national and international levels;
• provide a platform for students to evaluate, communicate and critique current issues and problems within informal and engaging online environments;
• improve content, information and knowledge management within the course structure;
Future Trends
Participation within the Global Media course
(online and classroom) was measured in several
ways, including the number of entries (posts and
comments), user log-ins, survey, interviews, and
Figure 3. Taxonomy of COMM 458 course student behavior relating to content and interpersonal interaction
Conclusion
This article presented pilot study findings on
using weblog facilities to increase participation
and improve interaction in Web-based learning
communities. The study revealed a number of
aspects to assist educators in providing more
structured and comprehensive online learning
environments. The most important of these are
people-related functions such as creating strategically-focused collaborative learning environments.
References
Belanger, F., & Slyke, C. V. (2000, Spring). End-user learning through application play. Information Technology, Learning, and Performance
Journal, 18(1), 61-70.
Bento, R., & Schuster, C. (2003). Participation:
The online challenge. In A. Aggarwal (Ed.),
Web-based education: Learning from experience
(pp. 156-164). Hershey, PA: Information Science
Publishing.
Davison, A., Burgess, S., & Tatnall, A. (2003).
Internet technologies and business. Melbourne,
Australia: Data Publishing.
Harasim, L., Calvert, T., & Groeneboer, C. (1997).
Virtual-U: A Web-based system to support collaborative learning. In B. H. Khan (Ed.), Web-based
instruction (pp. 149-158). Englewood Cliffs, NJ:
Educational Technology Publications.
Kamel, S., & Wahba, K. (2003). The use of a
hybrid model in Web-based education: The
global campus project. In A. Aggarwal (Ed.),
Web-based education: Learning from experience
(pp. 331-346). Hershey, PA: Information Science
Publishing.
Katz, R. N. (2002). It's a bird! It's a plane! It's a portal? In R. N. Katz & Associates (Eds.),
Web portals & higher education: Technologies
to make IT personal (pp. 1-14). San Francisco:
Jossey-Bass.
Key Terms
Constructivism: Learning as interpretive,
recursive, building processes by active learners
interacting with physical and social worlds.
Information and Communication Technology (ICT): Refers to an emerging class of technologies (telephony, cable, satellite, and digital technologies such as computers, information networks, and software) that act as the building blocks of the networked world.
Learning Communities: Characterized
as associated groups of learners, sharing common values, and a common understanding of
purpose.
This work was previously published in the Encyclopedia of Portal Technologies and Applications, edited by A. Tatnall, pp. 58-64, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.21
Abstract
INTRODUCTION
For many years, the media employed for education have remained fairly constant and traditional:
tried and true methods such as the blackboard
and chalk, whiteboards, flipcharts, and overhead
projectors. The employment of computing technologies has resulted in the use of PowerPoint,
e-mail, and Web-based course portals/enhancements such as Blackboard and WebCT.
Numerous studies have been done, and papers written, about the use of technology in the classroom, together with work on the related areas of e-learning, Web-based learning, and online learning; there is a sizable body of work on Web and online learning, including the studies by Ahn, Han, and Han (2005), Liu and Chen
CONVERSATIONAL TECHNOLOGIES AND CONSTRUCTIVIST LEARNING TOOLS
The notion of conversational technologies is
not a new one, as it encompasses many types
of systems that have been widely used for some
time, including e-mail, video conferencing, and
discussion forums.
The term conversational technology is derived from the work of Locke et al. (2000) relating to conversational exchanges and their Cluetrain Manifesto. One of the key concepts here is that "markets are conversations" and that knowledge is created and shared using question-and-answer
dialog. Specific theses that relate to this form of
conversational knowledge management suggest
that the aggregation and abstraction of information help to create knowledge. Other characteristics
of conversational knowledge management include
the fact that it is fast, stored in different locations,
and does not require sophisticated technologies in
order to be accomplished (Wagner, 2004).
Conversational technologies encompass a wide
range of systems and software, many of which
are familiar, including e-mail, instant messaging,
Web pages, discussion forums, video and audio
content/streaming, wikis, and Weblogs. While
there are specific aspects that are of interest in
terms of the more mature technologies, the ones
that will be given attention in this article are the
issues, impacts, and applications relating to IM,
blogs, wikis, and podcasts. These are technologies
that are newer, have a growing base of users, and
are starting to become recognized as viable tools
for education.
The term constructivist learning tool has also
become associated with these, particularly blogs
and wikis, in that they have a key characteristic
of allowing users to develop and maintain their
own content. Some of the characteristics of con-
However, the advent of digital and conversational technologies has brought forth the new
concept of secondary orality (Ong, 1982). This
concept emphasizes that teaching and learning
should go beyond printed materials toward a
greater emphasis on group work, fostering student
communities, and encouraging student participation. The concept encourages a greater sense of
interaction with and ownership of knowledge,
emphasizing self-awareness and expression, and
effectively using electronic tools (Gronbeck, Farrell, & Soukup, 1991).
The use of conversational technologies can
have a positive impact, because they attempt to
not only improve upon the print approach, but also
use secondary-oral techniques. In other words,
while a student can still be presented with material (in different formats) using the print model,
the introduction of secondary-oral methods can
be used to improve the overall learning experience. Using the latter, there is the opportunity to
work and learn collaboratively, explore, analyze,
engage in discussion, and otherwise learn in
new and innovative ways (Ferris & Wilder, 2006;
Wallace, 2005).
Table 1. Instant messaging. Description: real-time communications that allow for informal communications to be conducted easily and quickly. Noted characteristics include availability and acceptance by students, social presence (knowing the status of other users online), active learning, and dual (verbal and visual) processing.
setting, the results from educational studies appear to be mixed, with both positive and negative
effects noted. While there seem to be advantages
to real-time communications between students,
between students and instructors, and also between groups working on a project, it appears
that there are problems and limitations if the
technology is used in a classroom setting. The
challenge of focusing on a class lecture while simultaneously maintaining an online conversation has not yet been resolved. In addition, while instructors can often establish closer
relationships with students using IM, there is also
the problem of unreasonable student expectations
of continuous teacher access, which would not arise if IM were not available as an option. In
connection with this, using IM for student help
can result in a greater time commitment, since
sessions can become lengthy with many questions and responses being sent back and forth.
BLOGS (WEBLOGS)
Blogs started as a means for expressive individuals to post online diaries of themselves. Complete with text and photos, these logs were essentially an individual's online narrative or diary, with events, stories, and opinions. While their original use was for personal expression, their effectiveness as a tool for education has recently been discovered, including their use as an extension of learning logs, which are created online (Barger, 1997). One of the earliest blogs, as we know and use them today, was Dave Winer's Scripting News, which was put online in 1997. While the use of Weblogs can be considered generally new, the concept of keeping a log or learning log is not.
The concept of learning logs has been in use
since before the advent of the Weblog. The idea is to enable someone to document his or her
learning, and also to do some critical reflection
(Fulwiler, 1987) and self-analysis. The use of a
learning log or journal is related to action research
Table 2. Weblogs (blogs). Description: a technology that allows a sequence of entries (online diary, journal) to be posted and published online. A noted characteristic is that reflection and critical thinking are encouraged.
WIKIS
Yet another technology, known as the wiki, has
emerged, which allows for improved collaboration
compared with Weblogs. While the major emphasis of Weblogs is the creation of a set of pages
Table 3. Wiki
Description: A technology that allows for material to be easily published online, and also allows open editing and inputs by a group.
Advantages: Contributions and editing by a group; open access to all users; collaborative.
Disadvantages: Lack of organization and structure may result in an unmanageable wiki; tracking of contributions and modifications can be difficult; quality control.
Educational applications: Collaborative writing/authoring; group project management; brainstorming activities; knowledge base creation (knowledge management).
Course/subject suitability: Knowledge management; writing; group work in courses.
Theoretical foundations: Conversational technology; constructivist learning tool.
The difference between cathedral (closed) and bazaar (open) sources of knowledge acquisition could be illustrated by the difference between encyclopedias that are created by a single firm, such as Encarta or the Encyclopaedia Britannica, and those that obtain information from readers and users, such as the well-known Wikipedia.
The emphasis therefore is on teamwork,
continuous review and testing, and the development of conversational sharing (Wagner, 2006).
Inherent in the workings of wikis is support
for an open, collaborative environment, where
many people can contribute to the development
of knowledge instead of being limited to a set of
experts. It appears that conversational knowledge acquisition and management are appropriate
for wikis (Cheung, Lee, Ip, & Wagner, 2005).
As for educational applications and KM, a study
by Raman, Ryan, and Olfman (2005) examined
the use of a wiki to help encourage and support
collaborative activities in a knowledge management course. More specifically, using wikis in the
course helped to encourage openness and better
sharing and updating of knowledge bases. Many-to-many communication is supported, and the
persistence of the created pages formed the basis
of a knowledge repository. In short, the impact
of easy page creation and improved updating and
editing, together with effective maintenance of
knowledge histories, was seen as positive (Raman et al., 2005; Bergin, 2002).
Activities in the KM course included
group article review assignments, answering questions about sharing knowledge and uses of the
wiki technology, and also creating a wiki-based
knowledge management system. Students were
asked to create, update, refine, and then maintain
a class knowledge management system. In terms
of these experiences, while the use of the wiki
technology was generally viewed positively, feedback received indicated that, since the goals of
using the wiki were not made clear, using one was
perceived to be counter-productive. More specific
guidance on goals and objectives, a clearer system
Podcasts
While the terms "pod" and "podcast" at first mention might evoke visions of Invasion of the Body Snatchers, for most tech people in the know, the reference to "pod" is almost certainly a reference to Apple's popular and ubiquitous iPod. However, podcasts are in actuality not what their name might imply them to be. A podcast, a combination of "iPod" and "broadcast," neither refers to a technology specifically requiring an iPod, nor broadcasts information to users. Instead, podcasts are multimedia files (typically audio or video) that are downloaded to users on a subscription basis. Because of the potential confusion due to the use of the word "pod," some have called for the letters to mean "personal option digital" or "personal on demand," rather than iPod.
Podcasts can be played back on any device or
system that can play digital audio (typically MP3)
or video files, and are not broadcast to a large audience, in the way that television, radio, or spam
e-mails are sent. Instead, they are sent to users who
have specifically subscribed to a podcast service,
and as such, files are automatically downloaded
to the user's computer when they are ready and
available. In addition, podcast files are generally
not streamed (as video is streamed), but rather
are downloaded for later playback (Lim, 2005;
Lum, 2006). Podcasts are delivered to subscribers through the use of RSS or RDF XML format
media feeds, rather than more traditional forms
of downloading (Descy, 2005).
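To illustrate what such a feed looks like in practice, the sketch below builds a minimal RSS 2.0 feed in which each episode is announced through an enclosure element, the mechanism podcast clients use to locate the media file to download; all titles, URLs, and file sizes are invented examples rather than details from the sources cited here.

# Minimal sketch of an RSS 2.0 podcast feed with an <enclosure> element.
# Titles, URLs, and file sizes are hypothetical examples.
import xml.etree.ElementTree as ET

rss = ET.Element("rss", version="2.0")
channel = ET.SubElement(rss, "channel")
ET.SubElement(channel, "title").text = "Example Course Lectures"
ET.SubElement(channel, "link").text = "http://example.edu/lectures"
ET.SubElement(channel, "description").text = "Weekly lecture recordings"

item = ET.SubElement(channel, "item")
ET.SubElement(item, "title").text = "Lecture 1: Introduction"
# The enclosure tells the subscribing client which media file to download.
ET.SubElement(item, "enclosure",
              url="http://example.edu/lectures/lecture01.mp3",
              length="10485760", type="audio/mpeg")

print(ET.tostring(rss, encoding="unicode"))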
Podcasts are considered to be a viable educational tool for several reasons. First, because of
the popularity and wide use of devices such as
the iPod and similar units, it would seem like a
good medium from which to distribute educational
materials. Secondly, the ease with which information can be retrieved and accessed makes this
a good choice for students, who are using these
devices on a regular basis for music and should
have few technical difficulties or learning curves
(Lum, 2006).
Table 4. Podcasts. Description: the ability to create audio (and other media) based files to be distributed on a regular/subscription basis to users; these can be easily retrieved and played back on handheld devices, computers, and through other means. Advantages: allows for information to be retrieved and played back on widely available, ubiquitous devices; more suitable to auditory and visual learners.
References
Ahn, J., Han, K., & Han, B. (2005). Web-based
education: Characteristics, problems, and some
solutions. International Journal of Innovation
and Learning 2005, 2(3), 274-282.
Armstrong, L., Berry, M., & Lamshed, R. (2004).
Blogs as electronic learning journals. E-Journal
of Instructional Science and Technology, 7(1).
Retrieved from http://www.usq.edu.au/electpub/ejist/docs/Vol7_No1/CurrentPractice/Blogs.htm
Engstrom, M. E., & Jewitt, D. (2005). Collaborative learning the wiki way. TechTrends, 49(6),
12-15.
Hargis, J., & Wilson, B. (2005). Fishing for learning with a podcast net. Instruction and Research
Technology, University of North Florida.
Ferris, S. P., & Wilder, H. (2006). Uses and potentials of wikis in the classroom. Innovate, 1(5).
Wagner, C. (2004). Wiki: A technology for conversational knowledge management and group
collaboration. Communications of the AIS,
13(2004), 265-289.
Wagner, C. (2006). Breaking the knowledge
acquisition bottleneck through conversational
knowledge management. Information Management Resources Journal, 19(1), 70-83.
Walker, J. (2005). Weblog. Definition from the
Routledge Encyclopedia of Narrative Theory.
Wallace, M. (2005). Notes towards a literacy for
the digital age. Retrieved from http://uclaccc.ucla.
edu/articles/article-digitalage.htm
Wang, J., & Fang, Y. (2005). Benefits of cooperative learning in Weblog networks. In Proceedings
APAMALL2005 and ROCMELIA2005, Kun Shan
University.
Wenden, A. (1991). Learning strategies for learner
autonomy. New York: Prentice-Hall.
Wickramasinghe, N., & Lichtenstein, S. (2006).
Supporting knowledge creation with e-mail. International Journal of Innovation and Learning
2006, 3(4), 416-426.
Wikipedia. (2006). Mobile computing, definition. Retrieved August 25, 2006, from http://
en.wikipedia.org/wiki/Mobile_computing
This work was previously published in the International Journal of Information and Communication Technology Education,
edited by L. A. Tomei, Volume 3, Issue 3, pp. 70-89, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 3.22
Ubiquitous Computing
Technologies in Education
Gwo-Jen Hwang
National University of Tainan, Taiwan
Ting-Ting Wu
National University of Tainan, Taiwan
Yen-Jung Chen
National University of Tainan, Taiwan
Abstract
The prosperous development of wireless communication and sensor technologies has attracted
the attention of researchers from both computer
and education fields. Various investigations have
been made for applying the new technologies to
educational purposes, such that more active and
adaptive learning activities can be conducted in
the real world. Nowadays, ubiquitous learning
(u-learning) has become a popular trend in education all over the world, and hence it is worth
reviewing the potential issues concerning the
use of u-computing technologies in education,
which could be helpful to the researchers who
are interested in the investigation of mobile and
ubiquitous learning.
adaptive services. In an ideal context-aware u-learning environment, the computing, communication and sensor equipment will be embedded and integrated into the articles of daily use. In addition, researchers have also indicated that time and location might be the most important parameters for describing a learner's context.
There are several ways to detect the current location of a learner. GPS (Global Positioning System) is one of the popular technologies for continuously detecting an object's position: a receiver chip embedded in the object computes its location from the signals of several satellites. The object's location is described with longitude, latitude and elevation. Other sensors, such as RFID (Radio Frequency Identification), an automatic identification method relying on storing and remotely retrieving data using devices called RFID tags or transponders, can also be used to detect the location of a learner by reading the messages from the tags and then calculating the learner's position based on the intensity of the signals.
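How signal intensity can be turned into a distance estimate is often sketched with the log-distance path-loss model; the code below is a simplified illustration of that idea, with made-up calibration constants, and is not a description of any particular RFID positioning system mentioned in this chapter.

# Simplified sketch: estimating a tag's distance from a reader using the
# log-distance path-loss model. All constants are illustrative, not calibrated.
import math

def estimate_distance(rssi_dbm, rssi_at_1m=-45.0, path_loss_exponent=2.5):
    """Return an approximate distance in metres for a received signal strength."""
    return 10 ** ((rssi_at_1m - rssi_dbm) / (10.0 * path_loss_exponent))

# Example: a weaker signal implies the learner is farther from the reader.
for rssi in (-45.0, -60.0, -75.0):
    print(rssi, "dBm ->", round(estimate_distance(rssi), 1), "m")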
Advanced Technologies for Detecting Personal Contexts
Learners might feel distressed or confused
while encountering problems in the u-learning
environment. Under such circumstances, a u-learning system could actively provide timely
hints or assistance if the contexts concerning
human emotions or attitudes can be sensed.
Recent studies have depicted the possibilities
for detecting such advanced personal contexts.
Sensing devices with affective-aware ability can not only capture the expressions of human faces, but also distinguish their emotional conditions. For example, the Affective Computing Group at the MIT Media Lab has presented significant progress in this field, which can be used to create friendlier interaction between humans and computers by detecting affective states.
Other studies concerning facial expression detec-
In addition, for some labs with special purposes (e.g., precision instruments, biotechnology and medical science), it is necessary to detect the volume of particles in the air. Taking semiconductor labs as an example, such particles may cause a short circuit in the instruments and even disable the devices.
This work was previously published in the International Journal of Distance Education Technologies, edited by S.-K. Chang
and T. K. Shih, Volume 5, Issue 4, pp. 1-4, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an
imprint of IGI Global).
Chapter 3.23
Abstract
Team-based learning is an active learning instructional strategy used in the traditional face-to-face
classroom. Web-based computer-mediated communication (CMC) tools complement the face-to-face classroom and enable active learning between
face-to-face class times. This article presents the
results from pilot assessments of computer-supported team-based learning. The authors utilized
pedagogical approaches grounded in collaborative
INTRODUCTION
Instructors of both traditional face-to-face and
online classrooms seek active learning techniques
that engage the learners. The increased use of
Web-based computer-mediated communications (CMC) as support tools that supplement
the face-to-face classroom (blended learning)
and enable active learning between face-to-face
class times fits this quest. CMC is regarded as
an efficient computer support tool to facilitate
student participation (Phillips & Santoro, 1989).
Prior research (Wu & Hiltz, 2004) reports that
adding asynchronous online discussions through
CMC platforms enhances students' learning
quality in a face-to-face class setting. Although
various Web-based computer-mediated communications learning strategies have been applied
in the field (e.g., online collaborative learning),
limited research focuses on computer-supported
team-based learning in a face-to-face classroom.
Team-based learning (TBL) is an instructional
strategy that promotes active learning in small
groups that form a team over time (Michaelsen,
Fink, & Knight, 2002).
Our goal is to assess the impact of team-based learning when introduced in a face-to-face
classroom that utilizes Web-based CMC as a
supplemental learning tool between classes, thus
increasing team interaction across the semester.
A Web-based computer-mediated communications tool called WebBoard was utilized in
our computer-supported team-based learning
research to facilitate team learning activities
and communication. This paper describes results
from this experience. The paper begins with
a literature review building on constructivist
THEORETICAL BACKGROUND
Constructivist Learning Theory
Leidner and Jarvenpaa (1995) classify learning
models and discuss their relevance and impact
in information systems educational approaches.
The broadest categories of this classification
are objectivism and constructivism. Objectivism posits that learning occurs in response to
an external stimulus. Learners respond to the
stimulus by modifying their behaviors. This
model assumes that abstract representations of
reality and knowledge exist independently from
the learners. Teaching consists of transferring
knowledge from the expert to the learner. Opposite
to objectivism, constructivism posits that learning is not a process of knowledge assimilation,
but an active process of constructing individual
mental models, in which knowledge is created
Collaborative Learning
Built upon constructivist theory, collaborativism suggests that learning quality is enhanced through the shared understanding of more than one learner. Due to social interactions and intensive
sharing among collaborative learners, collaborative learning results in higher learning compared
WEB-SUPPORTED TEAM-BASED LEARNING EXPERIENCES
Computer-supported team-based learning conducted via a Web-based CMC tool was introduced
in a 15-week semester, face-to-face graduate
course at a US public technological university
during the Fall 2004 and Spring 2005 semesters.
The first class period introduced the computer-supported team-based learning instructional strategy.
The class was divided into teams of five to six
students, including at least one woman and at least
one student with a wireless laptop. The course
materials were divided into six modules, with
no midterm or final. Two out-of-class individual
article reviews were assigned to the students.
The graduate courses in which we conducted this research were re-modeled according to Michaelsen et al. (2002). Several steps of the
TBL instructional strategy were modified to
incorporate the use of a Web-based CMC tool
called WebBoard (http://www.webboard.com/)
intended for active learning activities between
weekly face-to-face classes. A phased approach
to active learning assignments via WebBoard was
implemented across the two semesters of the study
(Gomez & Bieber, 2005). Particular emphasis
was placed on additional individual preparedness pre-module activities and team post-module
activities through the use of WebBoard, a learning management system that provides a flexible
interface for classroom discussion management.
Figure 1 shows a typical discussion interface in
WebBoard. The left frame provides an organized
list of class activities. The right frame shows the
details of each individual posting. For example,
in the first semester, students posted preparation
materials on WebBoard (http://www.webboard.
com/) and wrote their post-team activity results
on a poster in class. In the second semester, teams
posted results of their work on WebBoard instead
of handwriting on posters.
questioning, and discovering the conceptual relationships between and among various concepts
(Passerini & Granger, 2000), and spending more
time on preparation of the readiness assessment
tests (RAT) and team activities, rather than course
lecturing. The instructor plays a pivotal role in
the TBL process, both for the course organization
and ongoing module activities. In lieu of class
lecturing, the instructor actively participates
in the classroom as a task coordinator and time
manager while observing each teams discussions
to readily intervene either as a subject matter
expert or to clarify points of the task. Timely
grading and feedback are essential aspects of the
TBL process and should not be overlooked when
planning. The instructor also provides feedback
when team activities come together for sharing
across the classroom.
Our emphasis with the introduction of Web-based tools to the TBL process is on the individual
[Table: TBL design elements as recommended by Michaelsen (2002): team activities (multiple activities of increasing complexity per module); grading (tests, assignments and certain in-class team activities should be graded); course survey (not mentioned); team size 5-7; team arrangement (same groups); team roles (no assigned roles); team composition (not specified); team diversity (diverse groups formed randomly); team building (achieved through an initial tRAT and a team exercise to determine grade weights and choose a team name).]
ASSESSING COMPUTER-SUPPORTED TEAM-BASED LEARNING
The constructivist approach and the cooperative and collaborative theories and models provide the
[Figure: hypothesized research model relating perceived individual preparedness from TBL, perceived team members' value or contributions from TBL, perceived trust/communication skills from TBL, perceived motivation from TBL, perceived enjoyment from TBL, and perceived learning from TBL (hypotheses H1a-H1c, H2a-H2c, H3a-H3b, H4).]
reiteration are needed as trust measures were collected only in the Spring 2005 reiteration of the
field study), we include trust and communication
as an important variable to consider in a computersupported team-based learning model, which will
be influenced by the individual preparedness and
team contributions. As discussed in the future
research section of this paper, additional observations in the Summer and Fall semester 2005 will
supplement our preliminary observations.
Does students' perception of team contributions impact their learning from the
computer-supported team-based learning
process?
Does individual preparedness affect perceptions of computer-supported team-based
learning experiences?
[Table: principal component analysis of the survey items across 23 components (initial eigenvalues 8.612, 2.482, 1.721, 1.518, 1.122, 1.034, ...; the first component explains 37.4% of the variance, with cumulative variance of 48.2%, 55.7%, 62.3%, 67.2%, and 71.7% over the next five components) and the correlation matrix among the study constructs (coefficients including 0.288, 0.437, 0.518, 0.635, 0.637, and 0.708; * significant at the 0.05 level, ** at the 0.01 level, two-tailed); extraction method: principal component analysis.]
between team contributions and motivation significant at the p=0.05 level (see Figure 4).
From the above data analysis results, it is
suggested that how individuals value their team
[Figure 4: path model results relating perceived individual preparedness, perceived team members' value or contributions, perceived motivation, perceived enjoyment, and perceived learning from TBL: H2a supported (R=0.28*), H2c supported (R=0.43**), and H4 supported (R=0.63**).]
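As an aside for readers who want to reproduce this style of analysis, the sketch below shows how a Pearson correlation and its two-tailed significance can be computed with SciPy; the survey scores are invented illustrative values, not the study's data.

# Illustrative sketch of a correlation analysis like the one reported above,
# using invented 1-5 Likert-scale survey scores, not the study's actual data.
from scipy.stats import pearsonr

team_contributions = [4, 5, 3, 4, 2, 5, 4, 3, 5, 4]
motivation = [4, 5, 3, 3, 2, 4, 4, 3, 5, 5]

r, p_value = pearsonr(team_contributions, motivation)
print(f"r = {r:.2f}, two-tailed p = {p_value:.3f}")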
ACKNOWLEDGMENTS
We gratefully acknowledge partial funding support for this research by the United Parcel Service
Foundation, the New Jersey Center for Pervasive
Information Technology, the New Jersey Commission on Science and Technology, and the
National Science Foundation under grants IIS0135531, DUE-0226075 and DUE-0434581, and
the Institute for Museum and Library Services
under grant LG-02-04-0002-04. An earlier version of this paper was presented at the IRMA
2006 International Conference in Washington
DC, May 2006.
REFERENCES
Bloom, B.S., Englehart, M.D., Furst, E. J., Hill,
W.H., & Krathwohl, D.R. (1956). A taxonomy of
Isabella, L., & Waddock, S. (1994). Top management team certainty: environmental assessments,
teamwork, and performance implications. Journal
of Management, Winter.
Johnson, D., & Johnson, R. (1999). What makes
cooperative learning work. Japan Association for
Language Teaching, pp. 23-36.
Johnson, D.W., Johnson, R.T., & Smith, K.A.
(1991). Cooperative learning: increasing college
faculty instructional productivity. ASHE-ERIC
Higher Education Report No.
Phillips, G. M., & Santoro, G. M. (1989). Teaching group discussions via computer-mediated
communication. Communication Education, 39,
151-161.
This work was previously published in the International Journal of Web-Based Learning and Teaching Technologies, edited
by L. Esnault, Volume 2, Issue 2, pp. 21-37, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an
imprint of IGI Global).
Chapter 3.24
Abstract
Introduction
In China, the First Emperor (259 B.C.) established a process to formalise a set of hierarchies
and exams related to the acquisition of public roles
(becoming a Mandarin or an emperor's officer). In similar fashion, measures, rules, laws, and even the
way of writing were standardised and formalised
so that the same characters are still in use. Since
then, in China, culture has always been highly
valued; there is even a common saying related to
this that translates as "a word is worth a thousand pieces of gold."
The origin of this common Chinese saying can
be found in the famous book Shiji (or Historical
Records) by the historian Sima Qian (around
145-87 B.C.). He describes an episode from the
time when Lü Buwei (the natural father of the first emperor) was prime minister. Lü Buwei
was hosting and protecting over 3,000 scholars,
so he compiled the best of their writings in a
book, claiming that it was the encyclopaedia of
his time covering science, literature, and all the
Figure 5. Relation between learning autonomy and learner satisfaction in relation to media richness
and bandwidth (Adapted from Novak, 1998)
Language learning
Most professional sectors have adopted technologies to support the learning process. Probably the first to do so were the professional training sector (pilots, train conductors) and language
learning. Later came the informatics sector (programmers, EDP operators), then the finance and
banking sector (initially jointly with the informatics sector, then on its own, due to the increase in
complexity of the environment itself), and lastly
the medical training sector (initially in imaging and diagnostics, progressively shifting towards full coverage). This section focuses on the language-learning environment as an example, given that
it represents a case where technology has already
been widely adopted to properly support learning
processes over a long period.
It is possible to find self-learning materials
based on a widespread set of technologies (from
vinyl audio support, to tapes, and now CDs or
DVDs) and furthermore, many different learning
For each of these, there are specific programmes, schedules, and methods aimed at maximising the training outcome.
Except for literature or business-oriented
courses, dialogues and drill downs are considered
as complementary or support activities, just like
exercises. Therefore, dialogues are usually linked
to curiosities and hints on culture and civilization
[Figures: typical structure of language-course content by user proficiency (beginner, intermediate, advanced), combining grammar, vocabulary, examples, dialogues, drill downs, exercises, listening and practice, understanding and comprehension, and culture and society material, organised as stimulus-and-activity sequences of roughly 15-20 minutes each.]
Figure 10. Typical learner access level evaluation scheme of a language course
(The scheme routes learners across Levels 1-3 on the basis of a Level 1 stimulus with five questions and score thresholds such as 4/5 and 0-3/5.)
Figure 11. Typical navigation scheme of an e-learning language course structured in levels
(The navigation scheme comprises target language selection, an entry test, Courses 2-6 along a default path, Units 1-12 per course with Unit 12 as a test, plus on-line help, user results, tools, and services.)
Language Labs
The setup of language labs dates back to at least
the 1970s, when language teachers would use this
infrastructure for practice and for assessing students' progress, especially in two of the four abilities, namely listening and speaking, while reading
and writing ability would be aligned with class
work. The most frequent usage of the language lab
would be either in listening and comprehension
or in listening and repeating. Often (but this
was much dependent on the teacher) in language
labs, students would encounter audio sources that
could span a wide range of topics, from read-aloud
literature to radio programmes and conversation, or
even music (folk, traditional, pop).
In the late 1980s and early 1990s, the traditional
language lab started to be sidelined by multimedia labs (initially devoted to purposes other than
language learning). Only in very recent times
has there been a convergence of language and
multimedia labs into a single entity, a multimedia
language lab, even though it is still quite popular
[Table: segments of language education. Providers range from primary and/or secondary education and specific language schools to language universities and interpretation schools; learning objectives range from acquisition of the basic skills (read, write, listen, speak) through linguistic refinement, literature, language teaching and situational training to interpretation and translation; target audiences include all categories of trainees, university students, businesspeople, professionals (legal, science, technical or medical), interpreters, translators, and dubbers.]
Figure 16. Typical structure of a multimedia language lab class and teacher work place
Content Production
In the publishing environment, the content production chain follows well-codified and standardised
processes that have been developing over several
years. This process, and the actors involved, are
schematically reported in Figure 18.
Cost control and overall IPR and copyright
management are usually undertaken in a parallel
stream, to ensure constant monitoring of high-risk factors. As far as the economic impacts are
concerned, cost control has to monitor process
development and ensure that it is kept in line with
expectations and budgets, retaining its profitability (or even increasing it whenever possible); it is therefore a standard part of the management process.
On the other hand, IPR and copyright management interactions occur whenever an asset cannot
be cleared and therefore, has to be replaced. This
event may occur at any step, and the impact may
be marginal or relevant, depending on a set of
possible combinations of factors:
This refers to traditional, desktop and multimedia publishing as they present the highest
number of contact points and overlaps.
A similar approach applies to the editorial process of any product, yet digital TV, serials, movies
and in general products requiring video production
are somehow different as the design and planning
phase is much longer and has several by-products
A Contextualised Example of
Content Development for E-Learning
To provide a better understanding and to highlight
the benefits of the adoption of certain technologies
as a support tool for the production of educational
content, we examine the process followed in a
successful European RTD project conducted in
IST-FP5 dealing with the application of technologies developed for the support of dyslexia to adult
[Figure 18: the publishing content production process, listing tasks (idea, market survey, title design, go/no-go decision based on market data and production cost analysis to ensure the expected return on investment; research of sources, references, contacts, multimedia, and similar titles; draft acceptance; editing of texts, notes, indexes, multimedia, and captions; IPR clearance; finalisation; production for books and magazines, CD-ROM, DVD, Web, TV, iTV, PDA, mobile and other new media; marketing and promoting; revenue management) against the roles involved (management, authors, chief editor, editorial board, editorial staff, instructional designer, production department, outsourced services, press office, legal department, marketing manager, company accountant), with cost control and IPR/contracts management running in parallel.]
[Figure: the corresponding video production process: idea and title design, storyboard drafting and cost estimation, go/no-go decision, design and preparation (storyboard finalising, script drafting, casting, lighting, shooting plan, effects, costumes, sound track, contracts), development and shooting, post-production (mounting, sound addition, effects addition, packaging), and marketing and revenue management, with cost estimation refined at each stage; roles range from author, writer, script editor, producer, director, casting director and art director to costume designer, makeup designer, composer, audio engineer, graphic designer, special effect designer, fight arranger, lighting cameramen, production assistant, location manager, production lawyer, production accountant, and marketing manager.]
L2 language learning, named FLIC (Foreign Language Acquisition through the Instinct of a Child)
(FLIC, 2004, 2005a, 2005b, 2006; MediTECH, 2005a, 2005b; Lasnier, 2005). The pedagogical and neurological validation of the project results was conducted by Sheffield University, while the original idea and method at the basis of the project's development were based
on the experience gained by MediTECH in respect of the development of supporting tools for
dyslexia treatments.
Given this content production process, it is
apparent that producing content for multimedia
applications, especially in the educational field for
language learning, is a rather complex task per
se; the introduction of new technologies that alter
such a process is often opposed due to the potential cost impact. FLIC proved that the additional
effort required to take into account the specific
solutions that will lead to product improvement
is not excessive if this is done from the product
design phase (FLIC, 2004, 2005a, 2005b; MediTECH, 2005a; Lasnier, 2005). For the sake of
reference, we will briefly report here the process
followed in FLIC.
In FLIC, the audio content was recorded using dummy-head stereophony (see Figure 20).
The dummy head simulates the human head acoustically as far as possible: in place of eardrums, it has suitable microphones whose directional characteristics and other
physical properties correspond to the properties
of human ears (FLIC, 2005a).
The need for this special recording procedure
was strictly connected with the special learning
support provided by the FLIC equipment, namely
the lateralisation (MediTECH, 2005a), the multi-channel voice fusion (HW/SW versions) (Medi-
[Figure 20: the dummy-head stereophony recording setup, with sound sources, headphones, and a dummy head with two microphones.]
The multi-channel voice fusion (MCVF) technology divides the audio information of the left and right channels into eight (8) bands each, without losing any information during the splitting. The bands are differentiated in the following way: Band 1 (low-pass filter up to 200 Hz), Band 2 (250-315 Hz), Band 3 (400-630 Hz), Band 4 (800-1000 Hz), Band 5 (1250-1600 Hz), Band 6 (2000-2500 Hz), Band 7 (3150-4000 Hz), and Band 8 (high-pass filter from 5000 Hz). Consequently, two times eight channels are generated. The mutual mixture of the channels causes an overlay of the separate audio information (left and right).
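A rough software analogue of this band splitting can be sketched with standard signal-processing tools; the code below divides a single channel into the eight bands listed above using Butterworth filters, purely as an illustration and not as a description of the FLIC hardware/software implementation (the sampling rate is an assumed value).

# Simplified sketch of splitting one audio channel into the eight MCVF bands
# described above, using Butterworth filters from SciPy. Illustration only.
import numpy as np
from scipy.signal import butter, sosfilt

FS = 44100  # sampling rate in Hz (assumed)
BANDS = [(None, 200), (250, 315), (400, 630), (800, 1000),
         (1250, 1600), (2000, 2500), (3150, 4000), (5000, None)]

def split_into_bands(signal, fs=FS):
    """Return a list of eight band-limited copies of the input signal."""
    outputs = []
    for low, high in BANDS:
        if low is None:    # Band 1: low-pass filter up to 200 Hz
            sos = butter(4, high, btype="lowpass", fs=fs, output="sos")
        elif high is None: # Band 8: high-pass filter from 5000 Hz
            sos = butter(4, low, btype="highpass", fs=fs, output="sos")
        else:              # Bands 2-7: band-pass filters
            sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
        outputs.append(sosfilt(sos, signal))
    return outputs

# Example: one second of noise split into the eight bands.
bands = split_into_bands(np.random.randn(FS))
print(len(bands), "bands,", bands[0].shape[0], "samples each")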
[Figure: the FLIC Voice Analyzer and Voice Engine, which derive mean rate and mean pitch from the reference audio and from the speech from the student, and apply the corresponding time-scale and pitch-scale factors.]
many ways, m-learning is recapitulating the evolutionary process that e-learning experienced as
it emerged from traditional classroom training. In
the mature e-learning markets in North America
and Europe, mobile e-learning exists side by side
with conventional e-learning as a general customer
option. It is the product of choice over conventional
e-learning for the mobile workforce.
All this leads back to the accessibility issue in the wider sense: technology has reached a point where the objective could largely be met; what is lacking is both an application and regulation framework, along with broad acceptance and recognition of the need for truly broad access to content and education.
Previously, we have already provided a definition for e-learning; when talking about learning and accessibility, it is necessary to point out that e-learning is one of the terms that has emerged from the rapidly developing world of the Internet, and is broadly defined as "Internet-enabled learning." This usually refers to accessibility in the sense of being able to access sources despite distance and time constraints, not in terms of impairment support. This is certainly not a minor issue.
Much expectation surrounds the Internet and
its role in education, as e-learning can contribute to
the improvement of standards and to the effectiveness of teaching and learning. Yet when claiming
this, almost no reference is made to impairment
support. Most Web-based content producers are
often not even aware of W3C guidelines for accessible design of Web-based applications. The
reason for this can probably be found in the fact
that Internet access has generally only been available to society as a whole for the last 10-15 years,
and to schools in the last 2-5 years. Such access
is able to provide:
Summing up, the availability of Internet access provides a challenge to traditional forms of
teaching and learning, and opens up opportunities
previously denied to the majority. Yet all aforementioned factors are usually taken into account
regardless of proper accessibility and usability
related issues (in the sense that most content
available on the Internet may fail either to meet
accessibility requirements under regulations like U.S. Section 508 or similar, or usability criteria
for certain user communities like elderly people
or people with cognitive impairments).
This has happened largely due to the most widely adopted and common definition of e-learning. According to the IDC report E-Learning: The Definition, the Practice, and the Promise, a good working definition for e-learning (and, in a more general sense, of e-anything) is "electronic or Internet-enabled." Internet-enabled learning, or e-learning, strictly means learning activities on the Internet. Those events can be live learning that is led by an instructor, or self-paced learning, where content and pace are determined by the individual learner.
The only two common elements in this process
are a connection to the Internet (either physical
or wireless) and learning. In 1996, a program for
bringing technology into education was launched
in the U.S. Its primary goals were:
[Table: Tomorrow: performance improvement; personalised learning; learner centric; learning on demand; time to perform; learning by doing; project-based learning; know why; inquiry, discovery and knowledge basics; proactive. Inhibitors/Limitations: technology compatibility; economic turbulence; need for continued education and staff training; billing systems; increased Internet/eCommerce usage; security concerns; price.]
challenge for businesses is to realize the full potential of e-learning as a driver of productivity
and performance gains by making it an integral
part of organizational strategy and operations. For
government, the challenge is to create a nurturing policy environment for e-learning, firstly by
removing barriers that restrict access to e-learning
benefits and, secondly, by promoting industry
self-regulation while balancing citizens' interests
and needs.
By adopting e-learning standards like the ones defined by IEEE, AICC, and so forth, it is possible to achieve full interoperability of e-learning platforms and to have real, full portability of produced content. A learning object respecting all features described in those standards will be usable on whichever platform is chosen, and therefore full independence from a specific vendor is achieved. Moreover, it is possible to package objects developed with different tools and for different delivery platforms into a consistent and interoperable object, whose tracking is also preserved across editing, aggregation, and reuse. Other emerging
standards that focus on personalisation are likely
to have an influence in the near future, such as
the internationalization of the IMS AccessForAll
approach in ISO, the Individualized Adaptability
and Accessibility in e-learning, Education and
Training Standard.
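As a rough illustration of the packaging idea behind these standards, the sketch below assembles a minimal, manifest-driven content package in the spirit of IMS Content Packaging/SCORM. It is only an outline: the element names follow the common convention, but the namespaces, schema declarations, and metadata that a conformant package requires are omitted, and the identifiers and file names are hypothetical.

```python
# Rough sketch of a manifest-driven content package in the spirit of
# IMS Content Packaging / SCORM. Real packages need proper namespaces,
# schema declarations, and metadata; identifiers and file names here are
# purely illustrative.
import xml.etree.ElementTree as ET
import zipfile

def build_manifest(title, launch_file):
    manifest = ET.Element("manifest", identifier="MANIFEST-1")
    organizations = ET.SubElement(manifest, "organizations", default="ORG-1")
    organization = ET.SubElement(organizations, "organization", identifier="ORG-1")
    item = ET.SubElement(organization, "item", identifier="ITEM-1",
                         identifierref="RES-1")
    ET.SubElement(item, "title").text = title
    resources = ET.SubElement(manifest, "resources")
    resource = ET.SubElement(resources, "resource", identifier="RES-1",
                             type="webcontent", href=launch_file)
    ET.SubElement(resource, "file", href=launch_file)
    return ET.ElementTree(manifest)

def package(title, launch_file, out_zip="learning_object.zip"):
    """Write the manifest and zip it together with the launch file."""
    build_manifest(title, launch_file).write("imsmanifest.xml",
                                             encoding="utf-8",
                                             xml_declaration=True)
    with zipfile.ZipFile(out_zip, "w") as archive:
        archive.write("imsmanifest.xml")
        archive.write(launch_file)   # the actual content to be delivered
    return out_zip
```

Because the package is just an archive with a manifest describing its organization and resources, any platform that understands the same standard can import the object, which is what makes the vendor independence described above possible.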
[Table: Reasons to leave a forum: lack of a moderator; a number of participants imposing their viewpoints; difficulty in understanding the content and type of mail received from the forum; too much mail received; shift in focus; lack of education in participants' behaviour.]
[Table: Contents tools: Cardiff Teleform, CBIQuick, DazzlerMax, EsaTest, EzSurvey, Halogen Esurveyor, Intelligent Essay Assessor, IST Author, Macromedia's eLearning Studio, Perseus SurveySolutions, PowerSim, ProForma, Quandry, QuestionMark, Quiz Rocket, Rapidbuilder, RapidExam, ReadyGo Inc, Respondus Inc, Riva eTest, SecureExam, SimCadPro, StatPac, SurveyTracker, The Survey System, Trainersoft, Trivantis Lectora, WebQuiz. Delivery solutions: Click2Learn, Learn eXact, Lotus, ExamBuilder, Link Systems, Testcraft, HorizonLive. Services.]
Browser-based (works with Netscape and Internet Explorer, though often PC-centric).
Online testing (designed for use over Internet dial-up connections or corporate intranets).
Live streaming audio (generally one-way, often two-way).
Text chat, and occasionally private text chat, among students or between participants and presenter.
Sequencing is controlled by a presenter/leader; a secondary/co-presenter is available for the higher-end products.
Ability to show PowerPoint presentations.
References
Maslow, A. H. (1943). A theory of human motivation. Psychological Review, 50(4), 370-396. Reissued in 2004 as an appendix to The Third Force: The Psychology of Abraham Maslow in Adobe PDF.
Brandon Hall Research. Retrieved from http://
www.brandon-hall.com/
CETIS, the Centre for Educational Technology
Interoperability Standards. Retrieved from http://
www.cetis.ac.uk/
Chapman, B. (2006). LMS knowledge base 2006.
Sunnyvale, CA: Brandon-Hall.
Chapman, B., & Nantel, R. (2006). Low-cost
learning management systems 2006. Sunnyvale,
CA: Brandon-Hall.
Council Resolution of 25 March 2002 on the
eEurope Action Plan 2002: Accessibility of public
websites and their content. (2002/C 86/02). Official
Journal of the European Communities. Retrieved
from http://eur-lex.europa.eu/LexUriServ/site/en/
oj/2002/c_086/c_08620020410en00020003.pdf
EEIG. European Economic Interest Grouping (EEIG). Activities of the European Union,
Summaries of Legislation, EUROPA.EU.INT.
Retrieved from http://europa.eu.int/scadplus/leg/
en/lvb/l26015.htm
FLIC Project. (2004). D2.1 Design of a pedagogical template.
FLIC Project. (2005a). D2.4 Voice models recording.
FLIC Project. (2005b). D2.3 Development of
materials for the evaluation.
FLIC project. (2006). Special project status report
for the EC.
Hornof, A. J., & Halverson, T. Cognitive modeling, eye tracking and human-computer interaction. Department of Computer and Information
Science, University of Oregon. Retrieved March
23, 2007, from http://www.cs.uoregon.edu/research/cm-hci/
IDC. (2002). Corporate eLearning: European
market forecast and analysis 2001/2002. IDC
Corporate, 5 Speen Street, Framingham, MA,
01701.
Kowler, E. Eye movements and visual attention.
In MITECS: The MIT encyclopedia of the cognitive sciences. Retrieved from http://cognet.mit.
edu/MITECS/Entry/kowler
Lasnier, C. (2005). How to best exploit the VOCODER. AGERCEL, 2005.
LeLoup, J. W., & Ponterio, R. (2001). On the Net: Finding song lyrics online. Language Learning & Technology, 5(3), 4-6. Retrieved from http://llt.msu.edu/vol5num3/onthenet
Mah, A. Y. (2003). A thousand pieces of gold: A memoir of China's past through its proverbs (pp.
ENDNOTES
1. The term clerices survives in some students' organisations, named Goliardia, that can still be found in ancient European universities.
2. St. Edmund Hall (1278), Exeter (1314), Oriel (1326), Queen's (1340), New (1379), Lincoln (1427), All Souls (1438), Magdalen (1458), Brasenose (1509), Corpus Christi (1517), Christ Church (1546), Trinity (1554), St John's (1555), and Jesus (1571).
3. Afrikaans (RSA and South African Radio Broadcasting Corporation), Spanish (Radio Andorra), Chinese (Radio Beijing), Hebrew (Israel Broadcasting Authority), French (Radio France International), Japanese (Radio Japan), Greek (Cyprus Broadcasting Corporation), English (BBC, Voice of America in Special English, Radio Australia, KGEI California, WYFR), Dutch (Radio Nederland), Polish (Radio Poland), Russian (Radio Moscow), Swedish (Radio Sweden), German (Deutsche Welle, Deutschlandfunk, Radio DDR).
4. Originally named ARPANET (from the combination of the names ARPA - Advanced Research Projects Agency - and Net), it was born during the late 1960s as a Defense Advanced Research Projects Agency (DARPA) project to ensure efficient and robust communication among military bases and the capital in case of a nuclear attack.
5. Some of the most well-known systems are the ones used in avionics (e.g., on the Cobra attack helicopter), or in tanks and many other combat systems.
6. This is just a rough description of evolution partition and sequencing in the field.
7. Actually, the first emperor ordered that a 1:1 replica of the Dalai Lama palace be built so that his guest could feel at home.
8. Matteo Ricci is the author of the first world map printed in China, which notably is China-centred, well in accordance with the Chinese definition of their land: All that is under Heaven.
9. Initially based on a combination of books and records, then tapes, and now CD/DVD.
10. NotePad or other text editors.
11. Director, Authorware, ToolBook, and so forth.
12. ReadyGo, Lectora Publisher, Trainersoft, and so forth.
13. http://www.section508.gov/index.cfm
14. In 1999, 95% of schools in the U.S. had Internet access, and 63% of instructional rooms had Internet access (Source: Catrina Williams (2000). Internet Access in U.S. Public Schools and Classrooms, 1994-1999, NCES 2000-086, Washington, D.C.: U.S. Department of Education, National Center for Education Statistics. For more details see http://nces.ed.gov/pubsearch/pubsinfo.asp?pubid=2000086).
This work was previously published in Interactive Multimedia Music Technologies, edited by K. Ng and P. Nesi, pp. 195-230,
copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.25
Abstract
This chapter proposes that, as approaches to human computer interaction (HCI), tangible user
interfaces (TUIs) can scaffold rich classroom
experiences if they are coupled and generated
within multi-pedagogical frameworks that adopt
concepts such as Multimodality, Multisensoriality,
and Multiliteracies. It overviews some necessary
conditions for these tools to be effective, arguing
that tangible user interfaces and multi-pedagogies
are efficient when they are conceptualized as part
of adaptive educational environments: teaching
and learning ecologies where learners and teachers are seen as co-creators of content and of new
ways of interacting with such content.
Introduction
Information and communications technologies
(ICTs) enable types of learning experiences
This suggests that children are capable of demonstrating their knowledge via physical actions
(e.g., gestures) and can solve problems by working
with given concrete materials even if they cannot
[Figure: two diagrams relating learners, TUIs, teachers, multi-pedagogical frameworks, context, and the learning ecology.]
teams together with teachers and pedagogy experts; and the contexts where technology will be
adopted/embedded must be considered at all times.
There are a number of examples of teams currently
developing tangible user interfaces within these
participatory paradigms. An interesting example
within the educational context is offered by the
Children as Design Partners project.
As previously mentioned, Reimann and Goodyear (2004) propose that in educational environments, technology is "not so important in itself: what matters is how the technology is used" (p. 2). Their statement, which implies that
pedagogical choices in making good use of ICT
have crucial roles, should be amplified because
when considering HCI learning ecologies, the
design of ICT has an equally crucial role, as it
strongly impacts what pedagogical choices can
be conceived and enacted. The interrelations and
co-dependencies of technology and pedagogy
should not be underestimated. In light of this,
I suggest that tangible user interfaces can offer
significant opportunities for effective use of HCI
within the classroom if:
take risks and make mistakes in a non-threatening atmosphere (Davies, 1999). These are in
synchrony with some characteristics typical of
tangible user interfaces, such as their physicality, manipulability, concreteness, and familiarity (O'Malley & Stanton Fraser, 2004).
Tangible HCI enables learners to combine
and recombine the known and familiar in new
and unfamiliar ways (Hoyles & Noss, 1999) and to
unfold the world through discovery and participation
(Soloway, Guzdial, & Hay, 1994; Tapscott, 1998).
As Hoyles and Noss (1999) stress, it is this dialectic
between known and unknown, familiar and novel,
that provides a motor for creativity (p. 19). Moreover, combining familiarity with unfamiliarity can
promote reflexiveness (Rogers et al., 2002; Scaife,
2002), stimulating awareness and enhancing learning (Ackerman, 1996; Piaget & Inhelder, 1967).
Green, Facer, Rudd, Dillon, and Humphreys
(2005) have suggested that as mobile, tangible,
and networked technologies are increasingly deployed in facilitating creative work, educational
environments will be characterized by a high
degree of personalization where individuals will
have greater influence over how they choose and
use learning resources. The education system is
constantly challenged by diversity issues, and
researchers/educators are regularly looking for
ways to enable learning for diverse learners (refer,
for instance, to Cope et al., 2000). Through his
work on the Internet generation, Prensky (2001)
argues there is a need to radically rethink teaching practice to mirror current ways of learning.
The diversity issue implies a need to change the
education system so that it can conform to learners. Interestingly, learners are already shaping
their own learning outside the classroom, using
digital resources to create personalized learning
environments for themselves outside of school. As
a matter of fact, it has been suggested that by the
age of 21 the average person will have spent 15,000
hours in formal education, 20,000 hours in front
of the TV, and 50,000 hours in front of a computer
The notion of tangible user interfaces as mediating agents within educational environments
implies a shift in how the learning process is
viewed, a shift in the roles of teachers and learners, and a shift in how learning spaces are shaped
and interacted with.
O'Malley and Stanton Fraser (2004) express
their hope that teachers will take inspiration from
the whole idea of technology-enhanced learning
moving beyond the desktop or classroom computer
by, for example, making links between ICT-based
activities and other more physical activities. I
share this hope, and this chapter aims at nurturing this notion by linking technological aspects
of HCI with pedagogical, ecological, and methodological considerations under the same holistic
framework. The proposed HCI scaffold fosters
a different way of looking at technology-design
and technology-deployment, while promoting the
opportunity to re-conceptualize roles, practices,
and the relationships among key actors.
As discussed in this chapter, tangible user
interfaces offer some significant benefits to
classroom activities that involve HCI, in particular from creativity, diversity, and collaborative
perspectives. However, these benefits imply a
need for educational institutions to re-consider
and re-address how they view the professional
development of teachers, the classroom space,
and how curriculum development and learning
resources should be designed and deployed.
In the next phase of this work and building on
current work around creative educational spaces,
I propose to investigate the opportunities to reconceptualize tangible user interfaces as designed
interventions. Future research in this field might
also include longer term qualitative studies to
(1) test tangible user interfaces in specific HCI
contexts (living laboratories) with particular
emphasis on their capacity to scaffold collaboration, creativity, and diversity; (2) further explore
their agency in learning ecologies and (3) their
Acknowledgments
My special thanks go to Prof. Patrick Dillon
for sharing with me the adaptive educational
environments journey. Thank you also to Mary
Featherston for her contagious spaces-for-learning-enthusiasm. Finally, many thanks to Dr. Peter
Burrows for our discussions and shared visions,
which truly enrich my work.
References
Ackerman, E. (1996). Perspective-taking and
object construction: Two keys to learning. In Y.
Kafai & M. Resnick (Eds.), Constructionism in
practice: Designing, thinking and learning in a
digital world. NJ: Lawrence Erlbaum.
Africano, D., Berg, S., Lindbergh, K., Lundholm,
P., Nilbrink, F., & Persson, A. (2004, April).
Designing tangible interfaces for children's
collaboration. Paper presented at the CHI 04,
Vienna, Austria.
Anderson, L. S. (2005). A digital doorway to the
world. T.H.E. Journal, 33(4), 14-15.
Ba, H., Tally, W., & Tsikalas, K. (2002). Investigating children's emerging digital literacies. The
Journal of Technology, Learning and Assessment,
1(4), 3-50.
Loveless, A. (2003). Literature review in creativity, new technologies and learning. Bristol:
NESTA Futurelab.
Mazalek, A., Wood, A., & Ishii, H. (2001). genieBottles: An interactive narrative in bottles.
Paper presented at the SIGGRAPH 2001Spe-
Project Zero, & Reggio Children. (2001). Making learning visible: Children as individual and group learners. Reggio Emilia, Italy: Reggio Children.
Raffle, H., Joachim, M., & Tichenor, J. (2002).
Super cilia skin: An interactive membrane. Paper
presented at the ACM SIGCHI Conference on
Human Factors in Computing Systems (CHI02),
Fort Lauderdale, FL.
Raffle, H., Parkes, A., & Ishii, H. (2004). Topobo:
A constructive assembly system with kinetic
memory. Paper presented at the ACM SIGCHI
Conference on Human Factors in Computing
Systems (CHI04), Vienna, Austria.
Reimann, P., & Goodyear, P. (2004). ICT and
pedagogy stimulus paper. In Review of national
goals: Australias common and agreed goals
for schooling in the twenty-first century (pp.
1-42). Sydney: Ministerial Council for Education, Employment, Training and Youth Affairs
(MCEETYA) Task Force.
Rogers, Y., Scaife, M., Gabrielli, S., Smith, H.,
& Harris, E. (2002). A conceptual framework
for mixed reality environments: Designing novel
learning activities for young children. Presence:
Teleoperators & Virtual Environments, 11(6),
677-686.
Rogoff, B. (1990). Apprenticeship in thinking:
Cognitive development in social context. New
York: Oxford University Press.
Ryokai, K., & Cassell, J. (1999). StoryMat: A
play space for collaborative storytelling. Paper
presented at the ACM SIGCHI Conference on
Human Factors in Computing Systems (CHI99),
Pittsburgh, PA.
Ryokai, K., Marti, S., & Ishii, H. (2004). I/O brush:
Drawing with everyday objects as ink. Paper
presented at the ACM SIGCHI Conference on
Human Factors in Computing Systems (CHI04),
Vienna, Austria.
This work was previously published in Enhancing Learning Through Human Computer Interaction, edited by E. McKay, pp. 178191, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 3.26
Abstract
This article explores how social software tools can
offer support for innovative learning methods and
instructional design in general, and those related
to self-organized learning in an academic context
in particular. In the first section, the theoretical
basis for the integration of wikis, discussion forums, and Weblogs in the context of learning is discussed. The second part presents the results of
an empirical survey conducted by the authors and
explores the usage of typical social software tools
that support learning from a student's perspective.
The article concludes that social software tools
Introduction
One major task of higher education is to train
students for the requirements of their future work
by applying and adapting their knowledge to specific workplace-related requirements and settings.
Due to the ongoing pressure on enterprises to cut
costs, the periods of vocational adjustment in a
company will become shorter and shorter.
On the one hand, the rising pressure of innovation and fast-paced development in the economy
results in increased demand for continuous
employee training. On the other, growing global
competition forces enterprises to use available
resources very economically so that employee
training is considered to be necessary and desired
even though it is conducted under considerable
time and cost pressure (Köllinger, 2002).
According to these goals, the settings of education must be changed adequately: "While most of higher education still ascribes to traditional models of instruction and learning, the workplace is characterized by rapid changes and emergent demands that require individuals to learn and adapt in situ and on the job without the guidance of educational authorities" (Sharma & Fiedler, 2004, p. 543).
In the field of higher education, it has become
an important goal to develop digital literacy
and educate learners as competent users and participants in a knowledge-based society (Kerres,
2007), but it can be assumed that there is a new
generation of students, the digital natives, who
are accustomed to digital and Internet technology
(Prensky, 2001a, 2001b).
Oblinger and Oblinger (2005) characterize
next-generation students (called n-gen, for Net
generation) as digitally literate, highly Internet
savvy, connected via networked media, used to
immediate responses, preferring experiential
learning, highly social, preferring to work in
teams, craving interactivity in image-rich environments, and having a preference for structure
rather than ambiguity.
According to a study conducted by Lenhart
and Madden (2005), half of all teens in the USA
may be considered content creators by using
applications that provide easy-to-use templates
to create personal Web spaces.
Classical face-to-face learning is seen as
rigid and synchronous, and it promotes one-way
(teacher-to-student) communication. Thus, it
is not surprising that more and more students
Theoretical Framework
This part refers to the necessary theoretical
background required for the following empirical study, especially the areas of social software
and learning.
Social Software
The term social software emerged and came into
use in 2002 and is generally attributed to Clay
Shirky (2003). Shirky, a writer and teacher on the
social implications of Internet technology, defines
social software simply as software that supports
group interaction.
Another definition of social software can
be found in Coates (2005), who refers to social
software as software that supports, extends, or
derives added value from human social behaviour.
Users are no longer mere readers, audiences,
or consumers. They have the ability to become
active producers of content. Users can act in both user and producer positions and can rapidly switch between them.
Nowadays the term social software is closely
related to Web 2.0. The term Web 2.0 was introduced by Tim O'Reilly (2005), who suggested
the following definition:
Web 2.0 is the network as platform, spanning all
connected devices; Web 2.0 applications are those
that make the most of the intrinsic advantages of
that platform: delivering software as a continually
updated service that gets better the more people
use it, consuming and remixing data from multiple sources, including individual users, while
providing their own data and services in a form
that allows remixing by others, creating network
effects through an architecture of participation,
Description and
Classification of Social
Software Tools
In the following section, three social software tools (Weblogs, discussion forums, and wikis) are described in more detail and compared. Students were able to select these tools during the empirical study.
Weblog
A Weblog, a compound of Web and logbook, usually just called blog, is a Web site that contains
new articles or contributions in a primarily chronological order, listing the latest entry on top.
Primarily, a Weblog is a discussion-oriented
instrument especially emphasizing two functions:
RSS feeds and trackback. RSS feeds, also called
RSS files, can be read and processed for further use
by other programs. The most common programs
are RSS readers or RSS aggregators that check
RSS-enabled Web sites on behalf of the user to
read or display any updated contribution that can
be found. The user can subscribe to several RSS
feeds. Thus, the information of different Web sites
can be retrieved and combined. Preferably, news
or other Weblogs are subscribed to.
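A minimal sketch of what such an RSS reader or aggregator does is given below; it assumes the third-party feedparser package and uses illustrative feed URLs rather than real subscriptions.

```python
# Minimal sketch of an RSS aggregator: fetch several subscribed feeds and
# merge their entries into one list. Assumes the third-party 'feedparser'
# package; the URLs are illustrative, not real subscriptions.
import feedparser

SUBSCRIPTIONS = [
    "https://example.org/weblog/rss.xml",
    "https://example.net/news/feed",
]

def aggregate(urls=SUBSCRIPTIONS):
    """Return (feed title, entry title, link) for every entry of every feed."""
    items = []
    for url in urls:
        feed = feedparser.parse(url)              # fetch and parse one feed
        source = feed.feed.get("title", url)      # feed-level metadata
        for entry in feed.entries:                # the updated contributions
            items.append((source, entry.get("title", ""), entry.get("link", "")))
    return items

if __name__ == "__main__":
    for source, title, link in aggregate():
        print(f"[{source}] {title} -> {link}")
```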
Trackback is a service function that notifies
the author of an entry in a Weblog if a reference
to this contribution has been made in another
Weblog. By this mechanism, a blogger (person
who writes contributions in a Weblog) is immediately informed of any reactions to his or
her contribution on other Weblogs (Hammond,
Hannay, & Lund, 2004).
Forum
A discussion forum or Web forum is a service
function providing discussion possibilities on
the Internet. Usually, Web forums are designed
Wiki
A WikiWikiWeb, called wiki for short, is a hypertext system for storing and processing information. Every single page of this collection of linked Web pages can be viewed through a Web browser. Furthermore, every page can also be edited by any person. The separation between authors, who write their own text and change and delete it, and readers is obsolete, as third parties can carry out these functions as well (Augar, Raitman, & Zhou, 2004).
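As a minimal sketch of this idea (an illustration only, not any particular wiki engine), the following code keeps a shared collection of pages that anyone can edit and turns [[PageName]] references into links between pages.

```python
# Minimal wiki sketch: a shared page store in which anyone may edit any page,
# and [[PageName]] references become links between pages. Illustration only.
import re

pages = {}  # page title -> wiki text, shared by all users

def edit(title, text):
    """Create or overwrite a page; there is no author/reader distinction."""
    pages[title] = text

def render(title):
    """Return a page as HTML, turning [[Links]] into hyperlinks."""
    text = pages.get(title, "This page does not exist yet.")
    return re.sub(r"\[\[(\w+)\]\]", r'<a href="/wiki/\1">\1</a>', text)

# Two users editing the same collection of linked pages:
edit("HomePage", "Welcome! See also [[Syllabus]].")
edit("Syllabus", "Week 1: introduction. Back to [[HomePage]].")
print(render("HomePage"))
```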
Empirical Survey
The purpose of this survey was to determine whether the integration of Web-based social software tools (wikis, discussion forums, and Weblogs) is suitable to foster learning from the students' point of view.
The courses were organized as blended-learning courses so they included on-campus lessons
and off-campus work in which the students could
work face-to-face or using the social software
tools.
More than 90% of the students attending the
courses took part in this survey. In order to give
the participants an impression of the functionality and usage of the tools, short presentations of
the tools were made by an instructor before the
students made their choice.
At the end of the testing phase (after 4 weeks of using the tools), selected students reported their
experiences with the tools used. Students who
had decided not to use the tools in the first place
got an impression about the usage, advantages,
and disadvantages of the tools from their fellow
students. Following these short presentations, a
questionnaire was completed that provided the basic findings for further inspection and research.
A total of 268 first-semester students of different Austrian universities in five selected courses
took part in this survey. The majority of the participants were between 18 and 20 years old. The
portion of female students was about 17%.
Part 1: Analysis of the usage of wikis, discussion forums, and Weblogs by the students before the study was started
Part 2: Experiences made with the tools
during the study
Part 3: Potential future usage of the tools
[Tables: questionnaire results. Prior usage of wikis, discussion forums, and Weblogs, and the distribution of responses for each tool to the individual questionnaire statements on a five-point scale from "I totally agree" to "I disagree".]
Discussion
The results clearly show that wikis are currently
the most often used instrument and furthermore
have the greatest potential as a tool for learning
and knowledge management in the field of learning; these findings are in line with other empirical
studies (Bendel, 2007; Chao, 2007).
Other studies (McGill et al., 2005; Nicol &
MacLeod, 2004) report that a shared workspace
helps to support collaborative learning; the possibility of being able to access and contribute to
CONCLUSION
The aim of this contribution was to investigate
the experiences of students using social software
tools in the context of learning. Wikis, Weblogs,
and discussion forums are typical social software
tools and were used for this survey.
The results clearly show that wikis and discussion forums can support learning and collaboration. The usage of Weblogs in this study
was limited and hence no statements about their
applicability can be made. In order to assure a
successful implementation of these tools, social
and psychological issues must be taken into consideration as well.
The results of this study are the basis for the
introduction of social software into education
to help students set up individual learning environments. These learning environments should
support lifelong learning.
There are likely to be other unplanned consequences of the intensive use of the Internet in
References
Aggarwal, A. K., & Legon, R. (2006). Web-based
education diffusion. International Journal of
Web-Based Learning and Teaching Technologies, 1(1), 49-72.
Alexander, B. (2006). Web 2.0: A new wave of
innovation for teaching and learning? Educause
Review, 41(2), 32-44.
Attwell, G. (2006). Personal learning environment. Retrieved May 17, 2007, from http://www.
knownet.com/writing/weblogs/Graham_Attwell/
entries/6521819364
Augar, N., Raitman, R., & Zhou, W. (2004, December 5-8). Teaching and learning online with wikis.
In R. Atkinson, C. McBeath, D. Jonas-Dwyer,
& R. Phillips (Eds.), Beyond the Comfort Zone:
Proceedings of the 21st ASCILITE Conference,
Perth, Western Australia. Retrieved May 17,
2007, from http://www.ascilite.org.au/conferences/perth04/procs/contents.html
Babcock, M. (2007). Learning logs in introductory
literature courses. Teaching in Higher Education,
12(4), 513-523.
Baumgartner, P. (2004). The Zen art of teaching.
Communication and Interactions in eEducation.
Retrieved May 17, 2007, from http://bt-mac2.fernuni-hagen.de/peter/gems/zenartofteaching.pdf
Baumgartner, P. (2005). Eine neue Lernkultur entwickeln: Kompetenzbasierte Ausbildung mit Blogs und E-Portfolios. In V. Hornung-Prähauser
Chao, J. (2007). Student project collaboration using wikis. In Proceedings of the 20th Conference
on Software Engineering Education & Training
(pp. 255-261). Washington, DC.
Evans, C., & Sadler-Smith, E. (2006). Learning styles in education and training: Problems,
politicisation and potential. Education + Training,
48(2/3), 77-83.
http://www.dlib.org/dlib/december04/hammond/
12hammond.html
Hurst, B. (2005). My journey with learning logs.
Journal of Adolescent & Adult Literacy, 49(1),
42-46.
Jonassen, D. H., Mayes, T., & McAleese, R. (1993,
May 14-18). A manifesto for a constructivist approach to uses of technology in higher education.
In T. M. Duffy, J. Lowyck, D. H. Jonassen, & T.
Welsh (Eds.), Xpert.press: Vol. 105. Designing Environments for Constructive Learning: Proceedings of the NATO Advanced Research Workshop
on the Design of Constructivist Learning Environments Implications for Instructional Design
and the Use of Technology, Leuven, Belgium (pp.
231-247). Berlin, Germany: Springer.
Kantel, J. (2003). Vom Weblog lernen: Community, Peer-to-Peer und Eigenständigkeit als ein Modell für zukünftige Wissenssammlungen. Retrieved March 12, 2007, from http://static.userland.com/sh4/gems/schockwellenreiter/blogtalktext.pdf
Kerres, M. (2006). Web 2.0 and its implications
to e-learning. In T. Hug, M. Lindner, & P. A.
Bruck (Eds.), Micromedia & E-Learning 2.0:
Gaining the Big Picture: Proceedings of Microlearning Conference 2006, Innsbruck, Austria.
University Press.
Kerres, M. (2007). Microlearning as a challenge
for instructional design. In T. Hug & M. Lindner
(Eds.), Didactics of microlearning. Münster,
Germany: Waxmann. Retrieved March 10, 2007,
from http://mediendidaktik.uni-duisburg-essen.
de/files/Microlearning-kerres.pdf
Klamma, R., et al. (2006). Social software for professional learning: Examples and research. In B.
Kinshuk, R. Koper, P. Kommers, P. Kirschner, D.
Sampson, & W. Didderen (Eds.), Advanced Learning Technologies: ICALT 2006, Los Alamitos,
CA (pp. 912-917). IEEE Computer Society.
Educational innovation in economics and business: Vol. 6. Teaching today: The knowledge of
This work was previously published in the International Journal of Web-Based Learning and Teaching Technologies, edited
by L. Esnault, Volume 3, Issue 3, pp. 16-33, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an
imprint of IGI Global).
Section IV
This section introduces and discusses the ways in which particular technologies have impacted humanity
and proposes new ways in which IT-related innovations can be implemented within organizations and
in society as a whole. These particular selections highlight, among other topics, ubiquitous computing
applications in an educational setting and computer-mediated communication. Contributions included
in this section provide excellent coverage of today's world and insight into how our interaction with
technology impacts the fabric of our present-day global village.
Chapter 4.1
Abstract
Recent trends and rapid improvements in technology, such as computer-mediated communication (CMC) and increasing Internet bandwidth, are facilitating increased electronic interactions (i.e., e-interactions, commonly referred to as human computer interaction (HCI)). CMC technology systems are a common occurrence in educational institutions as administrators attempt to encourage technology usage, instructors race to learn and implement CMC use in their classrooms, and students demand greater flexibility and control in how they learn. Notwithstanding is the need to decide which forms of HCI technology to use, how to use them, and what benefits can accrue from such usage. The discussion here explores each of these issues, but more specifically focuses on addressing the case for blending e-interactions with the
Introduction
Human computer interaction (HCI) occurs
through a host of information communication
technologies (ICT), specifically computer-mediated systems (e.g., e-mail, computer conferencing, video-conferencing) that facilitate electronic
interaction among people. A multitude of organizations involved in the knowledge management
industry are finding ways to incorporate computermediated communication (CMC) technologies
into their day-to-day operations as coordination
tools and learning curriculum dissemination
tools. HCI has also found explosive growth in
educational settings such as traditional and non
traditional universities to extend current instructional content delivery and to tap into distance
educational settings. The idea of using CMC systems as an instructional delivery tool to provide both uniform and customized training is called electronic learning, otherwise referred to as e-learning. Contemporary research has discussed
at length the potential benefits of CMC systems
in general and within distance education in particular (Barnard, 1997; McIsaac & Gunawardena,
1996; Yakimovicz & Murphy, 1995), but research
on joint use of the CMC system and face-to-face
(FTF) communication in regular classrooms is
scarce (Olaniran, 2004). This chapter will address
how using a combined CMC and FTF interaction could benefit courses in communication and
other social sciences.
Teachers are challenged to incorporate
information communication technology (ICT)
into their curriculum to enhance learning and
to prepare students for future careers (Althaus,
1997; Craig, 2001-2002; Snoeyink & Ertmer, 2001-2002; Witmer, 1998). ICT offers opportunities for
facilitating discussion and presenting forums for
varieties of opinions (McComb, 1994; Olaniran,
2001; Olaniran, Savage, & Sorenson, 1996, Witmer, 1998). HCI technology, specifically, CMC,
offers the opportunity for active and experiential
learning and its benefits in group activities have
been acknowledged (Craig, 2001-2002; Gasker
& Cascio, 2001; Olaniran, 1994; Olaniran, et al.,
1996).
In spite of the identified advantages, incorporating ICT especially CMC into classrooms
remains challenging. Some challenges facing
instructors regarding implementation of CMC
in classrooms and courses include selection and
usage long after adoption decisions are made. In
addition to technology issues, instructors need to
focus on pedagogical issues surrounding course
structure, and course management.
Computer Mediated Communication and Learning Facilitation
The rampant effect of information communication technology (ICT) is constantly being felt
in contemporary organizations, interpersonal
interactions, and academic settings. ICTs are
instrumental in facilitating human computer
interaction, which underlies the computer mediated communication (CMC) in organizations,
classrooms, groups, and interpersonal contexts. As
a matter of fact, the issue facing most institutions
today (academic and non academic organizations)
is not whether to use CMC systems but rather how
to use them effectively.
There is a significant volume of literature
on CMC systems and distance education (e.g.,
Abrahamson, 1998; Twigg, 1997, Wegrif, 1998),
however, not all usages of CMC in learning environment are exclusively distance education. Some
research in CMC technologies concentrates on the
social effects of communication technologies (e.g.,
Hiltz & Turoff, 1978; Sproull & Kiesler, 1986).
No doubt, the demand for distance education
contributes to the proliferation of ICT as a way to
provide education and training. However, there is
a need to look at how CMC and other technologies facilitate learning and to identify other key
variables that must be accounted for in order for
effective learning to occur. This chapter seeks to
provide a practical guide to educators and learners
for enhancing learning through communication
technology.
The globalization trends in contemporary
organizations are putting priority on uniform and
customized training to the extent that institutions
are looking at e-learning to meet their curriculum
needs. As a result, varieties of online universities
like the University of Phoenix and Westwood College Online are developing in order to meet the
needs of non traditional students (e.g., corporate
travelers and expatriates). Furthermore, several
universities are throwing their support behind
Perhaps the most prominent reason administrators and instructors offer for incorporating HCI
and other CMC technologies into course design
is the vision of personalizing learning to meet
the idiosyncrasies of students' learning styles.
Innovative teachers see computers as tools to
help students in their learning needs with the
goal of tailoring course content to suit each student. Similarly, personalized learning is closely
related to active or transformational learning
in the sense that it allows for students to apply
instructional content while retaining information longer than with the lecture-and-print mode that characterizes traditional instruction.
Information Access
CMC offers students the opportunity to scan
information through Website navigation (browsing) and by searching for particular information. For instance, in Web-assisted methods of
CMC, students are able to navigate instructors'
Web pages to access course notes and other
instructional resources provided by instructors.
However, access to information is not restricted
to just instructors' Web pages. Students can use
a browser to seek additional information or to
research similar topics and other topics of interest.
General browsing is categorized as low cognitive
activity in learning (Jacobson, 1995). Browsing
can, however, facilitate a high cognitive thinking
because students learn as they seek varieties of
information and when they develop a method to
categorize information in a structured manner
(i.e., retention and recall) at a later time (Jacobson,
1995; Robson, 2000). At the same time, one must
not forget that students differ even in the way they
search and learn. For instance, Robson (2000)
argues that some students prefer general browsing
to specific keyword searches. Students also differ
in the way they avoid disorientation or get lost
in the multitude of information (Hara, Bonk, &
Angeli, 2000; Wegrif, 1998). For example, some
students are able to navigate through multiple sites
and keep track of information obtained, while
others are stressed and are unable to distinguish
one network from another (Duchier, 1996).
As far as information access and personalized
learning is concerned, Olaniran (2004) argues
that a mixed or combined CMC and face-to-face
communication media (CMC/FTF) offers a good
Skill Development
Skills take two forms. The first deals with the
technical skills that allow both student and instructor to use hardware and software or evaluate
data. The second involves the social skills that are
deployed by technology users during interactions
(Robson, 2000). The technical skills address the
issue of competence with the technology in use,
while social skills deal with issues of etiquette
and discernment of contextually appropriate
behaviors. In CMC research, Olaniran (2004)
addresses the adjustment process that accompanies students' interaction, which ranges from the use of all caps to protocols of posting messages
in a threaded discussion. Personal experience in
threaded discussions also suggests that students
often participate in threaded discussion by breaking the thread; that is, starting a new thread while
still on the same topic.
Nurturing Appropriate
Attitudes
As students use CMC they develop attitudes
toward the technology medium (Olaniran, 2004)
as well as developing professional attitudes and
values consistent with the course content or discipline. Robson (2000) argues that the learning
of concepts, skills, and development of attitudes
equates cognitive development on the part of the
student, which represents an important goal in all
forms of education. However, it is not certain that
attitude development toward technology translates
positively to professional attitude development.
For instance, how users (i.e., students) perceive a
CMC medium based on ease of use affects decisions to adopt and continue using the technology
(e.g., Olaniran, 1993; 2004; Vaughan & MacVicar,
2004). Thus, one could argue that when students
are frustrated within a CMC system, they may
abandon the technology or fail to adopt it.
At the same time, students never get to develop professional attitudes or achieve high cognitive development when they fail to use a particular communication technology; thus, they rob themselves of the opportunity to learn the technology even when they know it might be beneficial to them. For instance, in a course utilizing Web authoring tools, I have found a few students deciding to skip certain assignments (designing a personal Web page) simply because of the perceived difficulty
with the technology software. Students discuss
the lack of competence in the Web authoring tools
offered through the university computing services
center. Therefore, it is imperative for instructors
to realize that simply because the tool is available
and the instructor has offered some training on
the tools does not mean that students grasp the
concept and are ready to perform certain tasks
on their own. This is especially the case with
non-technically oriented courses in liberal arts
and social sciences. Robson (2000) echoes this assessment when arguing that "access to knowledge through technology does not guarantee improved outcomes" (p. 157). Instructors are therefore critical in facilitating students' learning; how they design the learning experience is what will facilitate the desired outcomes. This raises the issue of how
instructors can assist students in learning, which
is discussed in the next section.
The use of CMC in teaching courses can foster
transformational learning. Transformational learning provides an opportunity to contrast experiential knowledge (i.e., previously acquired information) with new information in order to make informed decisions about which to accept (Olaniran, 2004). Transformational learning involves three factors: (1) validation of prior knowledge, (2) modification of prior knowledge, and (3) replacement of prior knowledge (Craig, 2001-2002; Salaberry, 2000). The process is
facilitated through the direct communication
process enhanced in CMC (Alavi, 1994; Mason
& Bascich, 1998). Olaniran (2004) reported evidence of transformational learning when students challenged their existing beliefs about both CMC and FTF media; that is, some changed their prior beliefs after they were confronted with new information and experience. The best evidence of transformational learning reported by the learners, however, is their understanding of the reason for the change.
It is possible that the change in opinion of both
communication media could be attributed to the
course design. The CMC and FTF format helped
fortify the transformational learning experience
by providing students with practical use of communication technology, which provides students
with the opportunity to validate information
(beliefs and facts) about both study and the communication media (CMC vs. FTF). The learning
instructor or wait for an appointment. The significance is that the traditional gate-keeping role of a secretary is reduced. The notion of direct manipulation (Fong & Kwan, 2003) speaks to this idea, where a task as simple as dragging an icon or positioning the cursor on a computer screen provides students with knowledge about the technology and also about the task. Indirectly, Rubens et al. (2005) address control when discussing the flexibility that technology design affords for knowledge building. With knowledge building, instead of participating passively in class discussion, students are individually responsible for identifying ideas and expanding their knowledge base on those ideas.
Furthermore, Olaniran (2004) finds that
increased control in access to instructors with
CMC provides students opportunity to share more
personal information with the instructor, which
further aids learning as students are able to develop
a rapport and comfort with an instructor, course,
and the technology. On the contrary, traditional FTF interaction limits time for sharing such information. At the same time, a balance needs to be struck between offering users or students the control to accomplish a task and control that can be counterproductive. Fong and Kwan (2003) offer the following advice for effective electronic learning: users need technologies that provide them with the knowledge to get their work done without destroying their data. While technology is supposed to provide a means for accomplishing a task in both an effective and efficient manner, technologies are known to be problematic, whether through mistakes or malfunctioning. Thus, it is advised that technology offer a tool box or alert box (i.e., wizards) that can notify students of impending mistakes such as the accidental deletion of a document (Fong & Kwan, 2003).
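Read as code, this advice amounts to guarding destructive operations behind a confirmation and a recoverable copy; the sketch below is a generic illustration of such an alert-box safeguard, not Fong and Kwan's design.

```python
# Generic sketch of the 'alert box' safeguard: a destructive action must be
# confirmed, and the data is moved aside instead of being destroyed outright.
import os
import shutil

def confirm(prompt):
    """Simple stand-in for a graphical alert box or wizard."""
    return input(f"{prompt} [y/N] ").strip().lower() == "y"

def guarded_delete(document, trash=".trash"):
    """Delete a document only after confirmation, keeping a recoverable copy."""
    if not confirm(f"Really delete '{document}'?"):
        print("Deletion cancelled.")
        return False
    os.makedirs(trash, exist_ok=True)
    shutil.move(document, trash)     # moved aside, not destroyed
    print(f"'{document}' was moved to {trash}/ and can still be recovered.")
    return True
```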
Course Management
The goal of any course regardless of the communication medium (technological or traditional) is
Challenges and Recommendations for CMC Structure
A major challenge facing instructors in incorporating CMC into course design involves deciding which technology to use and when to use it. Asynchronous CMC (e.g., threaded discussions, bulletin boards, e-mail) is popular because of its ease of setup. The fact that asynchronous design allows participants (students and instructors) to reflect before posting contributions is also appealing.
information must be easy enough for inexperienced users to understand. This simplification will help to reduce some anxieties with computers. However, the CMC training manual or Web page should not prevent sophisticated users from enjoying the system's benefits. Olaniran (2004) advises that CMC should be used in a way that facilitates creativity. Encouraging creativity can be done when students are allowed to explore the medium for other uses that deviate from the course's primary purpose, such as socialization and recreation activities (Olaniran, 1993). A benefit of creative use of CMC is that it allows meta-interaction, whereby students are able to explore and develop novel uses of the system as well as seek help from one another when they encounter problems. Finally, the opportunity for students to help or coach one another in a CMC forum may alleviate the need for instructors to be the only source of help.
References
Abrahamson, C. E. (1998). Issues in interactive
communication in distance education. College
Student Journal, 32(1), 33-43.
Alavi, M. (1994). Computer-mediated collaborative learning: An empirical evaluation. MIS
Quarterly, 18(2), 159-174.
Althaus, S. L. (1997). Computer-mediated communication in the university classroom: An experiment with on-line discussions. Communication
Education, 46, 158-174.
Bailey, E. K., & Coltar, M. (1994). Teaching via
the Internet. Communication Education, 43(4),
pp. 182-193.
Barnard, J. (1997). The World Wide Web and
higher education: The promise of virtual universities and online libraries. Educational Technology,
37(3), 30-35.
with
Weblogs: An empirical investigation. IEEE Proceedings of Hawaii International Conference on
System Sciences, pp. 1-9.
Duchier, D. (1996). Hypertext. NY: Intelligent Software Group. Retrieved November 10, 2005, from http://www.isg.sfu.ca/~duchier/misc/hypertext_review/chapter4.htm
Fong, J., & Kwan, I. (2003). Effective e-learning
by using HCI and interactivity on data modeling.
International Journal of Computer Processing of
Oriental Languages, 16, 293-310.
Freire, P. (1983). Pedagogy in process: the letters
to Guinea-Bissau. New York: Continuum.
Gasker, J. A., & Cascio, T. (2001). Empowering
women through computer-mediated class participation. Affilia: Journal of Women & Social Work,
16(3), 295-313.
Hara, N., Bonk, C. J., & Angeli, C. (2000). Content
analysis of an on-line discussion in an applied
educational psychology course. Instructional
Science, 28, 115-152
Hiltz, S. R., & Turoff, M. (1978). The network
nation: Human communication via computer.
Reading, MA: Addison-Wesley.
of Web-based
collaborative learning environments. Translating
the pedagogical principles to human computer-interface. Computers & Education, 45, 276-294.
Salaberry, M. R. (2000). Pedagogical design
of computer-mediated Communication Tasks:
Learning objectives and technological capabilities. The Modern Language Journal, 84, 28-37.
Snoeyink, R., & Ertmer, P. (2001-2002). Thrust
into technology. Journal of Education Technology
Systems, 30, 85-110.
Solomon, C. M. (2001). Managing virtual teams.
Workforce, 80(6), 60-65.
Sproull, L., & Kiesler, S. (1986). Reducing social
context cues: Electronic mail in organizational
communication. Management Science, 32, 1492-1512.
Key Terms
Computer-Mediated Communication
(CMC): Computer-mediated communication
involves communication interactions that exist
over computer networks.
Critical Thinking: Involves a mental process of analyzing or evaluating information in an attempt to attain a higher level of understanding.
person or learner is what has been learnt.
E-Learning: Involves the process of knowledge dissemination and acquisition taking place over electronic networks.
Electronic Toolkit: Represents a wizard-like database that can be invoked in electronic communication by both learners and instructors to provide general but customized feedback.
Human Computer Interaction (HCI): Involves the study of interaction between people
and computers where the interface of computer
and people enables goal accomplishment.
Globalization: Involves economic and sociocultural ideas where organizations are able to transcend national, geographic, and cultural boundaries through the convergence of space and time in an attempt to accomplish goals.
Transformational Learning: Involves knowledge acquisition that individuals and students can
adapt to transform old knowledge about a given
idea or topic. In other words it represents a process
of getting beyond factual knowledge alone.
This work was previously published in Handbook of Research on Computer Mediated Communication, edited by S. Kelsey
and K. St.Amant, pp. 49-61, copyright 2008 by Information Science Reference, formerly known as Idea Group Reference (an
imprint of IGI Global).
Chapter 4.2
Abstract
This chapter focuses on HCI aspects to overcome
problems arising from technologies and applications that may hinder the normal teaching process
in ICT-ready classrooms. It investigates different
input devices on their usage and interactivity for
classroom teaching and argues that pen-based
computing is the mode of choice for lecturing in
modern lecture halls. It also discusses the software design of the interface, where digital ink, as a first-class data type, is used to communicate visual contents and interact with the ICT.
Introduction
Utilizing information and communication
technology (ICT) in modern classrooms for the
Hardware-Related Issues:
Input Devices
Background: Ubiquitous Computing
Environments
As mentioned in the introduction, ICT usage in
classrooms offers great opportunities to support
and improve teaching. However, traditional input
and output devices often prove to be a hindrance
for the direct interaction between teacher and
students. They draw too much of the lecturer's attention to the operation of the respective hardware.
Keyboard and mouse cannot be used to provide
content to the students as naturally as chalk is
used to write on blackboards. Monitors, even
the small ones from laptop computers, can stand
between the audience and the teacher building a
electronic whiteboards and LCD panels with integrated tablet became available at a reasonable
price when pen-based computing became a real
alternative to chalk and blackboard usage.
Originally targeted toward special interest
groups and professions (such as graphic designers), the first LCD displays with integrated tablet,
in the following called interactive LCD panels
(cf. Figure 2), became available for the average
consumer by the end of the 1990s. These devices
operate like regular graphics tablets described
before with the main difference being that the
tablet is transparent and mounted on top of an
LCD panel. Hence, compared to common touch
screens, the surface does not react to any kind
of contact, but only to the input of a special pen,
thus enabling presenters to naturally write on
a horizontally mounted screen. Using the pen,
people can directly interact with the applications
(e.g., by making annotations on the presented
slides, navigating through subsequent slides, etc.).
Teachers generally see this as a big improvement
compared to normal touch screens or graphics
tablets. However, early versions often proved to
be too small, had a resolution that was too low,
and a limited processing power that resulted in
a noticeable time delay during freehand writing.
Nowadays, we observe significant advancements
Figure 2. Examples of interactive LCD panels where the user can write directly on the computer screen
using a special digital pen. While original versions were rather small (top right), newer versions (left
and bottom right) are now available at reasonable sizes and prices.
in display size, screen resolution, as well as processing power, paired with lower prices. Hence,
when mounted horizontally such devices provide
the same ease of use as traditional overhead projectors while at the same time enabling users to
access and use the full functionality offered by
the underlying ICT installation.
An electronic whiteboard (cf. Figure 3) is
the equivalent of a chalkboard, but on a large,
wall-mounted, touch-sensitive digital screen that
is connected to a data projector and a computer
(see, for example, Smart Technologies Inc. [Online]). This computer can be controlled directly
from the digital screen with a digital pen or by
simply using a finger to touch it. Other terms for electronic whiteboards include the eBoard, digital whiteboard, smart board, and interactive whiteboard. These terms carry slightly different meanings to different people depending on their environment of application. In general, however, they all describe the
support classroom activities. Different versions
exist that rely on front or rear projection. In the
case of front projection, there are also whiteboard solutions that do not react to any kind of
contact, but (similar to interactive LCD panels)
use a special pen for interaction. While such an
approach is preferable for LCD panels or any
kind of horizontally mounted input surface, it is
less important for vertically wall-mounted boards
since users generally do not rest their hand on the
board during writing.
Similar to early interactive LCD panels, first
versions of electronic whiteboards had limitations
regarding size, resolution, and processing power
and were often too expensive for large-scale usage. Again, the situation has improved significantly today, and we can observe more and more classrooms being equipped with such devices. In fact, there is a growing trend among learning institutions of approving the integration of electronic whiteboards into their classrooms. There are also many reports in the emerging body of literature suggesting that education vendors and teaching instructors alike find electronic whiteboards relatively easy and compelling to use (BECTA, 2003; Glover & Miller, 2001). However,
Figure 3. Examples of electronic whiteboards, which enable direct interaction and freehand writing for the users (top right). Earlier versions were rather small and often used front projection (top left), while newer versions have increased in size and offer rear projection at reasonable prices (top middle and bottom).
Figure 4. Examples of Tablet PCs, which enable pen-based input via a transparent graphics tablet mounted on top of the LCD screen. While some versions rely purely on pen-based input (top), others, called hybrids (bottom), feature a rotating display, thus supporting both traditional and pen-based interaction.
Figure 5. Putting two electronic whiteboards next to each other, in order to form a larger input and output area, resembles traditional blackboards
Figure 6. Lectern with integrated computing facilities and pen-based input device. The interactive LCD
panel is integrated into the surface in order to support freehand writing on the presented slides and
pen-based operation of the used applications.
Software-Related Issues:
Interaction Paradigm
Background: Digital Ink
In the previous section we argued that the pen is the
natural choice for interaction with wall-mounted
boards and interactive tablets. It enables users to
interact directly with the respective tools instead
of remotely as with a mouse pointer. Freehand
writing proves to be more intuitive and flexible for entering content during a lecture than a keyboard.
Hence, handwriting becomes a first-class data
type, on an equal footing with text entered via
keyboards. However, common interface designs
and graphical user interfaces (GUIs) usually
follow the desktop metaphor and are therefore
optimized for input devices such as keyboard and
mouse. For example, navigation through nested
menus can easily be done with a mouse, but is
hard to do with a pen. For pen-based interaction, earlier research on pie menus demonstrated an advantage for users of digital screens, as people tend to remember the cyclic positions better, which expedites the selection process (Callahan, Hopkins, Weiser, & Shneiderman, 1988). Hence,
we see ourselves confronted with two different
demands: on the one hand, the aim of using pen-based input devices in order to present information to the students, and on the other hand, the
need to rely on traditional, mouse-based input in
order to interact with the ICT and to operate the
respective programs. To avoid the unfavorable
switching between different devices, we propose
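The speed advantage of pie menus comes from the fact that selection depends only on the direction of the pen stroke from the menu centre, not on how far the pen travels. The following minimal Python sketch is purely illustrative (the eight commands and the function name are assumptions, not taken from the chapter or from any cited system); it shows how a pen position can be mapped to a pie slice:

```python
import math

# Hypothetical eight-command pie menu for a pen-operated lecture tool.
COMMANDS = ["annotate", "erase", "next slide", "previous slide",
            "new page", "zoom", "color", "undo"]

def pick_slice(center_x, center_y, pen_x, pen_y, commands=COMMANDS):
    """Return the command whose pie slice contains the pen position.

    Only the direction of the stroke from the menu centre matters, which is
    why practised users can select commands with quick directional flicks.
    """
    angle = math.atan2(pen_y - center_y, pen_x - center_x)   # range -pi..pi
    angle = (angle + 2 * math.pi) % (2 * math.pi)            # range 0..2*pi
    slice_width = 2 * math.pi / len(commands)
    return commands[int(angle // slice_width)]

# A short and a long stroke in the same direction select the same command.
print(pick_slice(0, 0, 10, 5))
print(pick_slice(0, 0, 100, 50))
```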
Software                             Writing style   Support of pixel-based operations   Object-oriented manipulation   Symbolic representation   Recording   Replay   Random access
riteMail                             yes             yes                                 yes                            yes                       static      no       no
Corel Grafigo                        yes             yes                                 yes                            yes                       static      no       no
Painter Classic                      yes             no                                  yes                            no                        static      no       no
Tablet PC Software Development Kit   yes             yes                                 yes                            yes                       static      no       no
Lecturnity                           no              yes                                 yes                            no                        dynamic     yes      yes
Figure 7. The latest ink element in (a) is interpreted as a gesture that can have several interpretations. The current context of the board (writing area p9, slide p28, and ink@time(t-m)) is determined and assembled appropriately in (b). The output of the mapping function is then rendered onto the screen as a support interface on demand (c).
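Figure 7 describes a pipeline in which a freshly completed ink element is interpreted as a gesture, combined with the current context of the board, and mapped to a support interface that is rendered on demand. The following Python sketch is a loose illustration of that mapping step; the gesture names, context fields, and actions are assumptions for illustration only and are not the chapter's actual implementation:

```python
from dataclasses import dataclass

@dataclass
class BoardContext:
    """Snapshot of the board state at the moment the gesture is completed."""
    writing_area: str       # e.g., "p9"
    slide: str              # e.g., "p28"
    recent_ink: list        # ink elements from the last few moments

# Illustrative mapping from recognised gestures to support interfaces.
GESTURE_ACTIONS = {
    "circle":      lambda ctx: f"show selection menu for ink on {ctx.writing_area}",
    "arrow":       lambda ctx: f"offer navigation relative to slide {ctx.slide}",
    "scratch-out": lambda ctx: f"offer to delete the last {len(ctx.recent_ink)} strokes",
}

def map_gesture(gesture: str, ctx: BoardContext) -> str:
    """Assemble the gesture with the board context and return what to render."""
    action = GESTURE_ACTIONS.get(gesture)
    if action is None:
        return "treat the ink element as ordinary handwriting"
    return action(ctx)

ctx = BoardContext(writing_area="p9", slide="p28", recent_ink=["s1", "s2"])
print(map_gesture("scratch-out", ctx))
```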
Conclusion and
Future Trends
Freehand writing is a natural way of exchanging viewpoints and communicating visual ideas
between teachers and students in a classroom.
Motivated by this, we identified the value of pen-based computing for the usage of ICT in classroom teaching. By presenting a historical overview of the usage of pen-based input devices in classrooms and lecture halls and two case studies with currently available technology, we demonstrated that today's hardware equipment can be used to greatly
improve the teaching experience. Using them, we
are able to build a ubiquitous computing environ-
Figure 8. An electronic whiteboard mounted horizontally in order to create an interactive table for
group-work
Note
Some of the names of tools and devices described in this article are registered trademarks of the respective companies or organizations. We kindly ask the reader to refer to the given references. All features and characteristics of these systems have been described to the best of our knowledge. However, we would like to mention that specific characteristics and technical specifications
frequently change, and therefore discrepancies
between the descriptions in this article and the
actual systems are possible.
References
BECTA. (2003). What the research says about
interactive whiteboards. Becta ICT Research.
Callahan, J., Hopkins, D., Weiser, M., & Shneiderman, B. (1988). An empirical comparison of pie
vs. linear menus. In Proceedings of the SIGCHI
Conference on Human Factors in Computing
Systems (pp. 95-100).
MacIntyre, B., & Feiner, S. (1996). Future multimedia user interfaces. Multimedia systems, 4(5),
250-268.
Weiser, M. (1991). The computer for the 21st century. Scientific American, 265(10), 71-72.
Zupancic, B., & Horz, H. (2002). Lecture recording and its use in a traditional university course.
This work was previously published in Enhancing Learning Through Human Computer Interaction, edited by E. McKay, pp. 21-42, copyright 2007 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 4.3
ABSTRACT
In their simplest form, Wikis are Web pages that
allow people to collaboratively create and edit
documents online. Key principles of simplicity,
robustness, and accessibility underlie the wiki
publication system. It is the open and free spirit
of Wikis fundamental to open source software
(OSS) that offers new contexts for learning and
knowledge creation with technology. This chapter
will briefly consider the role of technology in
learning before discussing Wikis and their development. The emerging literature on the application of Wikis to education will be reviewed and
discussed. It will be argued that Wikis embody an
exemplary model of open source learning that has
the potential to transform the use of information
communication technologies in education.
INTRODUCTION
Wikis are an instance of what is known as a read/
write technology. They allow groups of users,
many of whom are anonymous, to create, view,
BACKGROUND TO WIKIS
The founding developer of the World Wide Web (WWW), Sir Tim Berners-Lee, first conceived of
Core functionality
Open: Should a page be found to be incomplete or poorly organized, any reader can edit it as they see fit.
Incremental: Pages can cite other pages, including pages that have not been written yet.
Organic: The structure and text content of the site are open to editing and evolution.
Mundane: A small number of (irregular) text conventions will provide access to the most useful page markup.
Universal: The mechanisms of editing and organizing are the same as those of writing, so that any writer is automatically an editor and organizer.
Overt: The formatted (and printed) output will suggest the input required to reproduce it.
Unified: Page names will be drawn from a flat space, so that no additional context is required to interpret them.
Precise: Pages will be titled with sufficient precision to avoid most name clashes, typically by forming noun phrases.
Tolerant
Observable: Activity within the site can be watched and reviewed by any other visitor to the site.
Convergent: Duplication can be discouraged or removed by finding and citing similar or related content.
Description
The freedom to study how the program works, and adapt it to your needs Access to the source code is a
precondition for this
The freedom to improve the program, and release your improvements to the public, so that the whole
community benefits. Access to the source code is a precondition for this
that its content will have some longevity. Computer game players have been particularly active,
creating communities around their games. One
example of how this is used can be seen in the
ways that players of the massively multiplayer
online game Runescape (http://www.runescape.
com/) have built encyclopedic knowledge about
all aspects of the game (http://www.wikia.com/
wiki/Runescape).
The effective use of Wikis appears dependent on a clear goal matched to a group of
committed users (Godwin-Jones, 2003).
Highly structured environments that rely
on top-down approaches (as opposed to
bottom-up) limit the potential of Wikis as a
tool for learning (Engstrom & Jewett, 2005;
Wagner, 2004).
Wikis such as Wikipedia are a rich source
of information that can promote content
creation, sharing, and discussion (LeLoup
& Ponerio, 2006, Lih, 2004).
It is important to augment students' Wiki
work with strategies to promote deep and
critical thinking to ensure high quality work
emerges (Engstrom & Jewett, 2005).
Wikis support a short edit-review cycle that
ensures the rapid development of content
(Lih, 2004).
Employing the user as organizer and editor (many eyeballs) is a highly effective
strategy for ensuring quality (Lih, 2004).
AN EXEMPLARY MODEL OF OPEN SOURCE LEARNING
Wikis offer a different model of creating, editing,
and sharing knowledge that is consistent with the
educational push toward what have become known
as sociocultural or constructivist approaches to
learning. A founding thinker in these areas, Lev
Vygotsky (1978), contended that learners neither
receive knowledge nor simply discover it. They
learn in social contexts in interaction with both
humans and tools. A key concept for Vygotsky was
the zone of proximal development (ZPD) in which
he said all learning takes place. In Vygotsky's basic
model, it is adults who scaffold young learners,
helping to extend their thinking and learning.
However, as the technological tools develop and
evolve, we are beginning to see ways that both
humans and their tools can scaffold learning. The
technological spaces that make up wikis enable
new forms of sociotechnological ZPDs that support both individual and community knowledge
creation.
This focus on community and the power of joint
construction is taken up in The Wisdom of Crowds
(Surowiecki, 2004). Surowiecki argues that the
collective knowledge of large groups is often
unrecognized and almost always undervalued by
society. He explains that many everyday activities, from voting in elections and the operation of
the stock market to the way Google locates Web
pages, depend on the collective input and knowl-
CONCLUSION
In the end, the success of innovations in learning such as Wikis will be seen in the increased
capacity of individuals and their communities to
REFERENCES
Augar, N., Raitman, R., & Zhou, W. (2004). Teaching and learning online with wikis. In R. Atkinson,
C. McBeath, D. Jonas-Dwyer, & R. Phillips (Eds.),
Beyond the comfort zone: Proceedings of the 21st
ASCILITE Conference (pp. 95-104). Retrieved June
17, 2006, from http://www.ascilite.org.au/conferences/perth04/procs/augar.html
Boaler, J. (1997). Experiencing school mathematics:
Teaching styles, sex and setting. Buckingham, UK:
Open University Press.
Bold, M. (2006). Use of Wikis in graduate course
work. Journal of Interactive Learning Research,
17(1), 5-14.
Cuban, L. (2001). Oversold and underused: Computers in the classroom. Cambridge, MA: Harvard
University Press.
KEY TERMS
Constructivist: An approach based on the
work of Lev Vygotsky, who contended that learners neither receive knowledge nor simply discover
This work was previously published in Handbook of Research on Open Source Software: Technological, Economic, and Social
Perspectives, edited by K. St.Amant and B. Still, pp. 681-689, copyright 2007 by Information Science Reference, formerly known
as Idea Group Reference (an imprint of IGI Global).
Chapter 4.4
Re-Examining the
Socioeconomic Factors
Affecting Technology Use in
Mathematics Classroom
Practices
Emiel Owens
Texas Southern University, USA
Holim Song
Texas Southern University, USA
Terry T. Kidd
University of Texas School of Public Health, USA
Abstract
Over the past 15 years a considerable amount of research has been devoted to the study of the socioeconomic aspects that affect the use of technology in the mathematics classroom. With the call for curricular and instructional reform, educational institutions have embarked on the process of reforming their educational practices to aid urban students in their quest to obtain a quality mathematics and science-based education with the integration of technology. The study performed
Introduction
The introduction of microcomputers into classrooms during the 1980s was heralded by many as
the dawn of a new era in American education. Proponents argued that technology had the potential
to fundamentally transform the nature of teaching
and learning (Papert, 1980; U.S. Congress, Office
of Technology Assessment, 1988). However, over
time, it has become apparent that it is far easier
to acquire hardware, software, and Internet access (Becker, 1991; Dividing Lines, 2001) than
it is to capture the potential of technology in
significantly meaningful outcomes (Cuban, 2001).
Likewise, educators concerned about the chronic
underachievement of students often fall prey to
the allure of technology as a tool for reversing the
historical influences of poverty, discrimination,
inequity, chronic underachievement, and lack of
opportunity. However, 25 years after the introduction of the computer into the classroom, many of the expectations associated with technology in education remain unrealized to some; to others, technology has proven to be an effective tool in the effort to provide students with opportunities for quality teaching and active student learning
and engagement.
Educational institutions have called for instructional and curriculum reform that includes
active engagement of students, quality assessments, and the increased and innovative use
of technology applications to promote quality
teaching and active student learning (U.S. Department of Education, 2001). This is true in the
field of mathematics where organizations such as
the National Council of Teachers of Mathematics (1989, 2000), the Mathematical Sciences Education Board (MSEB, 1991), and the Mathematical Association of America (1991) have stressed that technology is
essential in teaching and learning mathematics.
Review of Literature
Over the past 15 years a considerable amount of
research has been devoted to the sociocultural disparity in technology availability and use in the mathematics classroom (Becker, 2000; Garofalo et al., 2000;
Means et al., 2001; Manoucherhri, 1999; National
Center for Educational Statistics, 2004; Owens,
1993; Owens & Waxman, 1994, 1995; U.S. Department of Education, 2001; Huang & Waxman,
Past studies conducted by Becker (2001) and Coley, Cradler, and Engel (1997) found that students from higher-income families use computers in school and in their homes more frequently than students from lower-income families. Students of color from urban schools have
also been found to have less access to computers
compared to Anglo-suburban students (Owens &
Waxman, 1993, 1994). More recently, lower-SES schools were found to be only half as likely to have high-speed Internet access compared to high-SES schools (Advanced
Telecommunications, 1997). Consistent with this
idea of access are the issues within the digital
divide itself. Within the past decade, a growing
body of evidence supports the ever-widening
technological gap among members of society, in
particular children and the elderly (NTIA, 1995,
1997, 1999), with an important emphasis on urban
Methods
The data for the present study was drawn from
the base year of the Educational Longitudinal
Survey of 2002-2004 (ELS: 02). The ELS:02 is a
national survey administered by the National Center for Educational Statistics to provide an overall picture of the present educational experience of high school students in the U.S. The survey has four subcomponents completed by students, teachers, parents, and school administrators. The design used to collect the data was a two-stage, stratified national probability sample. The survey included 1,052 public and private schools representing about 12,000 students across the
country. For the present study, only the student
survey data was used.
Results
Table 1 reports the results of the frequency of
calculator and computer use in mathematics
classrooms. The results indicated that students are
using more calculators in their math classrooms
compared to computers. 58% of the students reported that they had used calculators every day
in their math classroom compared to 8% that
indicated they were using computers on a daily
basis. 30% of the students reported using the graphing calculator on a daily basis, while one-third of the students reported they never use the graphing calculator in their classroom. 61% of
the students indicated they never use computers
in their math classroom. Finally, 7.4% indicated
they were using computers on a daily basis.
Table 2 compares calculator and computer use
across socioeconomic levels. The results indicate
that a significant association (p<.001) exists between calculator use and socioeconomic levels.
In this case, the lowest SES group reported using
calculators on a daily basis the least. On the other hand, students in the highest SES group reported using calculators on a daily basis more often: 48% of the lowest SES group reported using calculators on a daily basis compared to 68% of the highest SES group. There was also a significant association (p
< .001) between daily use of graphing calculators
and SES group membership. 21% of the students
classified in the lowest SES reported using the
graphing calculator on a daily basis compared
to about twice as many students (42%) classified
in the highest SES group. The final comparison
looked at computer usage across SES levels. The
results also indicated a significant association
exists (p < .001) between computer usage and
SES levels. In this case, students classified in
the lowest SES group were more likely to use computers compared to those students
Table 1. How often technology is being used in mathematics classrooms: overall frequency (N = 11,618)

Response category        Calculators   Graphing calculators   Computers
Never                        6.2             33.1               60.7
Rarely                      12.0             19.6                2.2
(not labeled)                7.7              3.3                7.7
(not labeled)               18.0              2.2                7.8
Everyday or almost          58.0              8.8                7.4
Table 2. Socioeconomic status and how often technology is being used in mathematics classrooms: percentage of use (N = 11,618)

                         Lowest       Second       Third        Highest
                         (n=2627)     (n=2820)     (n=2896)     (n=3275)

How often do you use calculators in your math class
Never                     8.83         6.67         5.91         4.12
Rarely                   18.12        12.91        11.08         7.18
(not labeled)             6.78         6.24         5.24         4.82
Once or twice a week     19.57        18.94        17.68        16.24
Everyday or almost       46.71        55.25        60.08        67.63

How often do you use graphing calculators in your math class
Never                    37.65        37.30        32.98        25.83
Rarely                   22.95        20.47        20.30        15.66
(not labeled)            16.16         6.24         6.46         5.56
Once or twice a week     31.31        10.96        11.15        11.42
Everyday or almost       94.94        25.00        29.11        41.53

Chi. Sq = 375.88, p < .001***

How often do you use computers in your math class
Never                    53.73        61.06        62.81        64.09
Rarely                   21.74        19.50        18.85        20.92
(not labeled)            13.13         5.00         6.00         5.80
Once or twice a week     53.53         6.24         5.28         4.09
Everyday or almost        9.86         8.19         7.04         5.10
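The associations reported in Table 2 are chi-square tests of independence on the frequency-by-SES contingency table. The short Python sketch below shows how such a test is computed; the counts are invented for illustration and are not the ELS:02 data:

```python
from scipy.stats import chi2_contingency

# Hypothetical student counts (rows = response category, columns = SES group
# from lowest to highest); illustrative only, not the ELS:02 data.
table = [
    [232, 188, 171, 135],     # Never
    [476, 364, 321, 235],     # Rarely
    [178, 176, 152, 158],     # intermediate category
    [514, 534, 512, 532],     # Once or twice a week
    [1227, 1558, 1740, 2215]  # Everyday or almost
]

chi2, p, dof, expected = chi2_contingency(table)
print(f"Chi-square = {chi2:.2f}, df = {dof}, p = {p:.4g}")
```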
Table 3. How computers are used in mathematics classrooms: overall frequency (N = 11,618)

Item                                                Never   Rarely   (not labeled)   (not labeled)   Everyday or almost
How often does your math teacher use the
computer for one-on-one instruction                  52.4    22.3         8.6             7.6              9.1
(item label not preserved)                           28.6     6.9        14.7            13.5             16.3
(item label not preserved)                           30.6    29.4        18.0            12.2              9.9
(item label not preserved)                           33.6    24.8        15.3            13.3             13.0
(item label not preserved)                           35.1    26.4         3.3            12.1              9.2
(item label not preserved)                           28.0    26.5        18.1            13.6             13.8
(item label not preserved)                           51.6    22.9         8.8             6.6              9.9
How often does your math teacher use the
computer to show new topics                          41.1    24.3         9.9             1.1             10.7
Table 4. Socioeconomic status and how computers are used in mathematics classrooms: percentage of use

                         Lowest       Second       Third        Highest
                         (n=2627)     (n=2820)     (n=2896)     (n=3275)

How often does your math teacher use the computer for one-on-one instruction
Never                    38.76        50.19        56.81        65.61
Rarely                   27.91        23.74        20.42        16.32
(not labeled)            11.16         7.34         8.72         6.84
(not labeled)             9.73         9.27         5.75         5.44
Everyday or almost       12.44         9.46         8.30         5.80

(item label not preserved)
Never                    21.36        26.19        30.89        36.74
Rarely                   29.15        23.81        25.61        28.37
(not labeled)            15.15        14.97        15.07        13.64
(not labeled)            12.99        16.67        13.18        11.47
Everyday or almost       21.36        18.37        15.25         9.77

(item label not preserved)
Never                    30.16        31.90        31.80        28.96
Rarely                   27.74        28.77        28.75        32.19
(not labeled)            16.29        16.43        18.30        21.04
(not labeled)            12.74        13.50        11.93        10.43
Everyday or almost       13.07         9.39         9.15         7.37

(item label not preserved)
Never                    27.41        32.21        34.61        40.73
Rarely                   26.67        21.17        25.05        25.67
(not labeled)            15.56        16.37        16.25        13.15
(not labeled)            13.04        17.26        13.00        10.30
Everyday or almost       17.33        12.99        11.09        10.14

(item label not preserved)
Never                    32.95        33.14        36.46        38.14
Rarely                   24.75        26.98        24.02        29.56
(not labeled)            17.54        15.68        19.65        16.61
(not labeled)            13.77        13.49        10.04        10.58
Everyday or almost       10.98        10.71         9.83         5.11

(item label not preserved)
Never                    26.48        27.85        25.53        31.77
Rarely                   24.41        26.40        28.60        27.10
(not labeled)            17.16        17.00        19.77        18.55
(not labeled)            14.50        13.56        12.86        13.39
Everyday or almost       17.46        15.19        13.24         9.19

(item label not preserved)
Never                    43.32        49.61        51.08        63.07
Rarely                   26.41        24.22        22.41        18.38
(not labeled)             9.66        11.13        11.42         7.60
(not labeled)             9.82         8.40         9.48         6.89
Everyday or almost       10.79         6.64         5.60         4.06

How often does your math teacher use the computer to show new topics
Never                    41.01        41.71        38.16        42.99
Rarely                   24.52        25.50        23.68        23.47
(not labeled)            12.33        13.30        14.85        15.12
(not labeled)             8.77        10.93        11.47         9.61
Everyday or almost       13.37         8.56        11.84         8.82
Discussion
The use of technology in the math and science classroom has been a main focus of efforts to improve student learning outcomes. Technology can not only provide visual learning in the classroom, it also opens the door to improving higher-level thinking skills. The results of the present study indicate that 10th graders use calculators on a daily basis more than computers. Moreover, calculator use far outweighs the use of computers in today's math curriculum. This is also true for the use of the graphing calculator.
The results of the present study suggest that there are important differences in the use of technology in 10th grade mathematics classrooms associated with SES level. Students from
low SES families are less likely to use calculators
on a daily basis compared to students from high
SES families. This also includes the use of the
graphing calculator on a daily basis. Low SES
students also reported that they were more likely
Conclusion
The demand from educators and educational technology professionals for research-based evidence about the effectiveness of specific instructional practices has created renewed interest in educational research (Edyburn, Higgins, & Boone, 2005). As a result, there is an urgent need for research that provides evidence about the effectiveness of various educational technology interventions applied to specific subject domains. This is especially needed in mathematics, where we are still trying to get a handle on the achievement gap between students in urban and suburban communities. Research is a critical piece of the puzzle in fully understanding the impact of technology on student achievement. Consequently, more large-scale research is needed that will focus on and assess some of the following questions not addressed in the present study:
As we look to the future, technology is often viewed as an enticing means of closing the
achievement gap. It is seen as a magic bullet to
solve all instructional and learning related issues
in an educational environment. However, this
is not the reality. Statistics on the digital divide have shown that the use of technology is often measured by simple computer-to-student ratios that have little relevance in describing the quality of the technology experience or the use of the intervention in the classroom. Recent advances
in educational technology have the potential to
significantly enhance the learning and achievement for all students in the urban environment.
However, the promise these contributions hold for diverse urban learners suggests unlimited potential for their application in urban schools. Finally, the current
accountability environment demands significant
attention to questions of efficacy, which must be
addressed in the context of using technology to
enhance student achievement.
The current literature implies that innovative approaches used in teaching with technology leave students with a more effective learning environment that promotes quality teaching and active student learning. Consequently, education planners and policy makers must think beyond providing more hardware and software and connecting schools to the Internet, and instead think about keeping urban schools and teachers well-informed
and trained in the effective use of technology for
educational purposes. One of these investments is
meaningless without the other. High-speed con-
References
Advanced Telecommunications in U.S. Public
Elementary and Secondary Schools, Fall 1996,
U.S. Department of Education, National Center for
Education Statistics, February 1997 (Heaviside,
Riggins and Farris, 1997).
American Council on Education (1999). To touch
the future: Transforming the way teachers are
taught. An action agenda for college and university presidents. Retrieved July 23, 2007, from http://www.acenet.edu/resources/presnetiteachered-rpt.pdf
Becker, H. J. (2000). Whos wired and whos not:
Childrens access to and use of computer technology. The Future of Children and Computer
Technology, 2(10), 44-75.
Cradler, R., & Cradler (1999). Just in time: Technology innovation challenge grant year evaluation report for Blackfoot School District No. 55.
San Mateo, CA: Educational Support Systems.
Cuban, L. (2001). Oversold and underused:
Computers in the classroom. Cambridge, MA:
Harvard University Press.
Dividing Lines (2001). Technology counts 2001:
The new divides: Looking beneath the numbers to
reveal digital inequities. Retrieved July 23, 2007, from http://counts.edweek.org/sreports/tc01/tc01article.cfm?slug=35divideintro.h20
Edyburn, D., Higgins, K., & Boone, R. (2005).
Handbook of special education technology research and practice. Whitefish Bay, WI: Knowledge by Design, Inc.
Fabry, D. L. & Higgs, I. R. (1997). Barriers to the
effective use of technology in education: Current status. Journal of Educational Computing
Research, 17(4), 385-395.
Feldman, S. (2001). The haves and have nots of
the digital divide. The School Administrator,
1(3), 1-4.
Finneran, K. (2000). Let them eat pixels. Issues
in Science and Technology, 1(3), 1-4
Frank, K. A., Zhao, Y., & Borman, K. (2004).
Social capital and the diffusion of innovations
within organizations: The case of computer
technology in schools. Sociology of Education,
77(2), 148-171.
Papert, S. (1980). Mindstorms: Children, computers, and powerful ideas. New York: Basic
Books.
This work was previously published in International Journal of Web-Based Learning and Teaching Technologies, Vol. 2, Issue
4, edited by L. Esnault, pp. 72-87, copyright 2007 by IGI Publishing, formerly known as Idea Group Publishing (an imprint
of IGI Global).
Chapter 4.5
Abstract
This chapter introduces the use of blogs as an
educational technology in the K-12 classroom. It
argues that blogs can be used to promote verbal,
visual, and digital literacy through storytelling and
collaboration, and offers several examples of how
educators are already using blogs in school. This
chapter also reviews issues such as online privacy
and context-setting, and ends with recommendations for educators interested in implementing
blogs with current curricula.
INTRODUCTION
As Internet technologies continue to bloom, understanding the behaviors of their users remains paramount for educational settings. For teachers,
parents, school administrators, and policymakers,
learning what types of activities and applications
students are using on the Internet is only the
surface. Understanding how they are using these
WHAT IS A BLOG?
Blogs are personal journals written as a reverse-chronological chain of text, images, or multimedia, which can be viewed in a Web page and are made publicly accessible on the Web (Huffaker,
2004a; Winer, 2003). As depicted in Figure 1,
blogs typically contain text in the form of a blog
post, offer the ability for readers to comment or
provide feedback, contain archives to past blog
posts, and link to other blogs and bloggers.1
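As a small illustration of the reverse-chronological structure just described, the following Python sketch (with made-up posts, not tied to any particular blog platform) renders a front page newest-first:

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Post:
    title: str
    body: str
    published: datetime
    comments: list = field(default_factory=list)   # reader feedback on the post

posts = [
    Post("First day of class", "We started our wiki project...", datetime(2006, 9, 5)),
    Post("Field trip photos", "Pictures from the museum visit.", datetime(2006, 10, 2)),
    Post("Book reviews", "Students posted their favourite books.", datetime(2006, 9, 20)),
]

# A blog front page lists posts newest-first; older posts move into the archive.
for post in sorted(posts, key=lambda p: p.published, reverse=True):
    print(post.published.date(), "-", post.title)
```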
Blogs are inherently different from personal
home Web pages. First, bloggers post entries
through manual software, such as a Web browser,
or automatic software, which is downloaded off
the Internet and used to instantly publish content
to the Web. Therefore, bloggers do not need to
understand HTML or other Web programming
Blog Features
Use of Blogs
Understanding the features of the blogs helps
distinguish them from other Internet applications,
and grasping the size of the blogosphere signifies
the popularity of blogs in Internet culture. The
next question involves the content of blogs. What
are bloggers writing about? The answer not only
provides a context for online community interaction, but possible application for educational
technology. These can be divided into five areas: (a) personal blogs; (b) community blogs; (c) journalism blogs; (d) education and research blogs; and (e) knowledge blogs.
Personal Blogs
The most popular use of blogs is similar to personal Web sites authored by individuals, which
include chronological posts as well as links to
other Web sites or Weblogs (Lamshed, Berry,
& Armstrong, 2002). Despite the popular notion
that Weblogs lean toward external events, or
remain highly interlinked with the blogosphere,
the majority of Weblogs are still individualistic
self-expressions written by one author (Herring
et al., 2004).
Community Blogs
Virtual communities develop through the use of
a blog (Lamshed et al., 2002). Examples might
include a community support group, a site for
parents to ask questions and exchange answers, a
research community sharing resources and data,
or a mirror of an offline community, like a softball team or neighborhood newsletter. Although
personal blogs may dominate the blogosphere,
the ability for individuals to choose their level of
community participation may be another reason
for blog popularity, as it allows the blog author to
explore individual needs while benefiting from
community interactions (Asyikin, 2003). The
linkages with other Web sites, people, and ideas
Journalism Blogs
The idea of alternative forms of journalism manifesting through Weblogs has received increasing
attention in media and scholarship (Alterman, 2003; Blood, 2003; Lasica, 2003). Where is Raed? (http://dear_raed.blogspot.com), for instance, is a blog by an Iraqi that discusses what has been happening in Iraq since September 2003. He discloses
a candid view of the U.S. occupation, but also
introduces readers to fellow Iraqi bloggers. For most, the global news agency is the only link to international affairs; having personal, subjective commentary from within a foreign world provides a
unique view to outsiders.
A different, but equally unique, blog is J-Log, which provides community critiques and commentary on current journalism and news. The community not only shares interesting news items, but also poses questions such as, "Is this news fair and balanced?" Perhaps J-Log (http://
www.mallasch.com/journalism/) and individual
reports such as Raed demonstrate new forms of online journalism; however, critiques as to the viability of these news sources remain an issue, including their resources and references and even their subjectivity amidst an objective journalistic philosophy.
A link to http://blogdex.net/, an MIT Laboratory experiment that captures the fastest-spreading
ideas in the blog community, typically results in
news headlines as the most contagious information.
Knowledge Blogs
Similar to education and research, blogs provide
opportunities for organizations to manage and
share content, as well as communicate across the
network. Dave Pollard, Chief Knowledge Officer
at Ernst and Young, Inc. and popular writer on the
role of blogs in business, suggests that blogs
can be used to store and codify knowledge into a
virtual file cabinet. But unlike other content management tools, blogs allow authors to publish in a
personal and voluntary way, creating a democratic,
peer-to-peer network (Pollard, 2003b). Pollard also
suggests companies can increase profitability by
designing information architecture to embrace the
use of Weblogs (Pollard, 2003a, 2003c).
EXAMPLES OF BLOGS
IN PRACTICE
Blogs are just beginning to infiltrate classrooms,
as educators and school administrators consider
blogs as a useful tool for communicating between
teachers, schools, students, and parents, and as a
way to showcase the work of students (Richardson,
2004). These practices are celebrated on the Internet through communities of educators interested
in blogging and education. Will Richardsons
Weblogg-ed: Using Weblogs and RSS in Education Web site (http://www.weblogg-ed.com/), for
instance, is a useful source of information. His
site focuses on best practices, offers a quick start
guide for teachers, and links to other bloggers
concerned with blogs in education. Edblogger Praxis (http://educational.blogs.com/edbloggerpraxis/) is another important Web site, which
unites educators who blog about their experiences
or pedagogical philosophies.
This section will look at examples of blogs in
practice, separating them into high school (grades 9-12), elementary and middle schools (grades 4-8), and primary school (grades K-3) in order to
contextualize blog use by age and developmental
stage, and to provide useful models for educators
and school administrators interested in viewing
how blogs work at different levels of the school
system.
Elementary and
Middle School Blogs
Blogs can be utilized in many of the same ways
in other grade levels. The Institut St-Joseph in
Quebec City, Canada, for instance, uses blogs
among fifth and sixth graders in order to practice
reading and writing. Implemented by the school
principal, Mario Asselin, Institut St-Joseph
bloggers use a software program to write about
anything and everything that is school related.
Similar to Richardson's realization, the fact that blog posts can be read by anyone in the world has an acute effect on students. They felt empowered as their blogs received comments from total strangers and even Canadian celebrities (Asselin, 2004). Similar to Will Richardson's work, Figure
Primary School
Mrs. Dudiak, who teaches second grade at Oakdale
Elementary School in Maryland, uses blogs to
create writing assignments for her students. She
might use a picture such as a waterfall and ask
students to write a description, a story, or poetry.
She also asks them to discuss favorite books, what types of books and stories they like, as well as depictions of books as a "movie in our head"; students reply in the comment section of the blog.
This is an excellent example of how blogs can be
placed in specific contexts to meet the goals of
classroom curriculum. Figure 8 exemplifies how
blogs can be simplified to reach even the young-
Figure 9. Example of blog as communication hub for teachers, schools, and parents (http://lewiselementary.org/)
Setting Context
One of the primary challenges for using blogs,
as with many technologies, in the classroom is
the importance of setting a context for learning
to occur. While discovery and creativity do abound when adolescents are allowed freedom to explore these CMC venues, some structure needs
to be in place to facilitate a learning outcome.
For instance, if writing quality is a concern for
a teacher, then contextualizing the language to
focus on clear and succinct writing skills is a
must. Similarly, students have to be encouraged
to use the blog on a steady basis and focus on the
classroom material.
In sum, contextualizing blogs to the learning
experience will serve to produce more exciting
and educational blog experiences in the classroom, experiences that parents, administrators,
and students alike can observe and reflect upon.
The next section will provide some recommendations for educators interested in implementing
blogs in the classroom to promote literacy and
learning.
RECOMMENDATIONS
Use free blog software for easy implementation
in the classroom. There are several popular Web
LiveJournal: http://www.livejournal.com
BlogSpot: http://www.blogspot.com
Xanga: http://www.xanga.com
MoveableType: http://www.moveabletype.org
T-Blog: http://www.tblog.com
CONCLUSION
This chapter has sought to demonstrate how blogs
can be a useful tool for promoting literacy in the
K-12 classroom. Literacy most noticeably takes
ACKNOWLEDGMENTS
The research of this chapter was funded in part by
a grant to the Children's Digital Media Center at
Georgetown University (http://cdmc.georgetown.
edu) by the National Science Foundation (Grant
#0126014). Special thanks to Sandra Calvert for
all her mentorship and support.
REFERENCES
Alterman, E. (2003). Determining the value of
blogs. Nieman Reports, 57(3).
Asselin, M. (2004). Weblogs at the Institut St-Joseph. Proceedings of the International Confer-
ENDNOTES
1
This work was previously published in Handbook of Research on Literacy in Technology at the K-12 Level, edited by L. Tan
and R. Subramaniam, pp. 337-356, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 4.6
Ubiquitous Computing
Applications in Education
Kostas Kolomvatsos
National & Kapodistrian University of Athens, Greece
Abstract
Inside Chapter
Introduction
New technologies have brought many changes in
teaching, and of course in learning. Traditional
classrooms are being transformed in order to
utilize the advantages of the technology.
Ubiquitous computing (also known as Pervasive, Ambient, 1 to 1, or one to one)
is about distributed computing devices in the
environment, with which users are able to gain
access to information resources. These devices can
be wearable computers, or sensors and computers embedded in everyday objects. On the other
hand, ubiquitous computing involves the necessary infrastructures needed to support pervasive
computing applications.
Ubiquitous computing integrates technology
into the environment, giving the opportunity to
users to utilize it anytime and anywhere. It differs from traditional systems where the user is
bound to a computer in a specific place. Now
it is possible for a user to utilize the technology
without the restriction of place or time.
Ubiquitous computing may provide significant
advantages in the application domain of education.
It can offer continuous access to a wide range of
software, or the Internet, to all students, as well
as teachers. As we will see below, the main targets of using pervasive techniques in education
are efficiency in teaching and learning, equality among all students in access to technology regardless of their economic status, increased student engagement with their lessons, and different approaches according to the students' needs
(Bonifaz & Zucker, 2004).
This chapter is organized as follows. The next
section gives information about the examined
1504
Background
Ubiquitous computing environments are different
from what one traditionally finds in most school
settings. They offer all students and teachers continuous access to a wide range of software, electronic documents, the Internet, and other digital resources for teaching and learning. These initiatives' goals include increasing economic competitiveness, reducing inequities in access to computers and information between students from wealthy and poor families, and raising student achievement through specific interventions. Other reasons cited for supporting laptop initiatives include improving classroom culture, increasing students' engagement, making it easier to differentiate instruction according to students' needs, and solidifying home-school connections
(Bonifaz & Zucker, 2004).
The UK government and the Scottish Executive have listed a number of priorities for ubiquitous education for the 14+ age range. This list is discussed in Smith (2002) and Sutherland (2002). According to the authors, the priorities posed are:
Shared vision: This means that the commitment to technology is systemic and continual. Also, there is proactive leadership and administrative support for the entire system.
Access: Teachers must have limitless access to current technologies, software and
telecommunications networks.
Skilled educators: The educators that are
called to instruct students, who use the
technology in their tasks, must be skilled
and familiar with the technology and with
its use in the teaching procedure. Hence,
learning will be more efficient.
Professional development: Educators
must have consistent access to professional
development in support of technology use
in school.
Students that are more engaged in their
learning.
Teachers increasingly utilizing project-based
and hands-on curriculum and teaching
methods.
Increase of math and science scores in the
eighth grade.
Children born since 1980 process information differently than children born before
1980. They learn best with multisensory
input.
Features
The use of ubiquitous computing in education has characteristics that are very important for learning. An
educational policy can be based on these features
in order to achieve a high level of learning. These
features are:
Connection Means
When computers first arrived in educational
environments, they brought a lot of challenges
in the form of space for electrical cords, attachments, peripherals, and other entanglements
(McKenzie, 2001). It is obvious that in recent years there has been an explosion of wiring and cabling in school networks.
The ultimate goal of ubiquitous computing
in education is the use of a computer for all
students and teachers. This means that we need
a more flexible way to connect all the devices.
The development of technology in the domain of wireless radio connections, which provides high-speed communication channels, gives us a means to efficiently interconnect devices over large areas.
We believe that a wireless connection will
be more appropriate and efficient than a wired
network where the users will be bound to specific
spots in the area. Using a wireless connection, students may have access from anywhere: in the cafeteria, in the library, in the class, or outside.
McKenzie (2001) provides a list of advantages of wireless technology:
Ease of movement
Relaxed fit
Strategic deployment
Flexibility
Cleanliness
Low profile
Convenience
Simplicity
Speed (especially nowadays)
Figure 2. Portable devices: (1) Laptop, (2) small $100 laptop (3) PocketPC, (4) BlackBerry RIM, (5)
handheld laptop, (6) palm computer
Pedagogical Goals
Today's generation of students looks at technology as part of their everyday environment. It is important to understand the difference between today's life in school and the past, where students only occasionally used computers. In
the future, pupils will own a handheld device,
which will be their partner to complete tasks.
This means that these devices must be used in
a correct manner in order to help them in the
learning procedure.
To fully meet the students' needs, technology should be pervasive: always available. In one-to-one learning, each student has access to a wireless laptop to use at school or at home, enabling communication and collaboration among peers and teachers, and connecting parents to their children's
learning. Educators are provided digital tools to
create learning plans, manage educational content,
track student progress, and more.
The most important point is that full attention
to learning methodologies, with the help of the
digital world, must be given. Technology is the means to learning. A possible mistake would be to pay attention to technology rather than learning. For this reason, policy makers must grapple
with several issues concerning appropriate and
effective integration of computers into schools
(Eagleton, 1999). In the referenced list, one can
distinguish financial, equality, curricular, and
literacy issues.
The ultimate goal of such efforts is to improve
the assimilation ability of the students. To this
Advantages and Disadvantages
At this point, it is necessary to describe the advantages and disadvantages of the emerging technology. It is critical to identify these issues because they are key on the road to embedding computers in the learning procedure.
Some advantages are:
Teaching efficiency: With the use of ubiquitous computing, teaching style has to change.
Teachers must adapt their methods to the
new environment. This active environment
gives the opportunity for students to build
their knowledge on their own. Therefore, it
is imperative that teachers change their lesson plans in order to reflect the new situation.
Ulysses
This is an effort from the University of Laval, in
Quebec, Canada (Mantha, 2001). All the students
and teachers are provided with a laptop. In the academic year 2001-2002, there were approximately
1800 laptops in the school. Students and teachers
invested money to buy the devices. In Ulysses,
the laptop is to a large extent a communication
tool, and thus there is a need to be able to connect
to the network. The connection is established in
the classrooms, in the cafeteria, in hallways, and
so forth. For financial reasons, the system uses
wired connections to the network. There are 17
wired classrooms, of which 11 have the U shape.
There are 6 classrooms with movable tables with
network connections on the periphery of the room,
allowing the grouping of tables for team work. In
every classroom there is a console in which one
can find a computer and multimedia tools. An
intranet was developed to store all the material
needed for courses, like exercises, examples, and
eFusion
This system consists of an initiative of the University of Illinois that originated in the spring of
2002 (Peiper, Warden, Chan, Capitanu, & Kamin,
2005). eFusion is an interactive classroom learning
environment where all students and teachers have
access to computing devices inside and outside the
class. During the lesson, educators present their
notes and examples with the help of presentation
tools that eFusion provides. Consequently, teachers send their notes to students' devices through a wireless network, and students may make their
remarks in presentations and store them for
future access. The system supports interactive
newsgroups and communication tools that give
the opportunity for students to communicate
with their instructors or their assistants and take
WIL/MA Toolkit
This is a toolkit written in Java and initiated by the
University of Mannheim (http://www.lecturelab.
de/UCE_4.html). It is a client-server application,
where a server provides connection management,
user management, and service management. The
first supports the establishment of a connection
to the users and gives the capability for administrators to monitor the entire communication in
and out from the server. The second is used for
user identification and authentication. Finally, the
service management unit is responsible to inform
users of what services are available and also to
control data flow.
The system is based on wired connections,
but in order to avoid extensive use of cables,
students are able to connect to the network with
wireless LAN. The devices used by the students
are PocketPCs. On the other hand, teachers are
able to publish their material, which can be
presentations, slides, or files, and furthermore,
they can broadcast image and voice to remote
places. During their lectures, teachers can also
use tools like call-in (spontaneous questions),
or quiz and feedback (from the students in real
time). Asynchronous tools are messaging and
chat/forum channels.
University of Texas
In 2002, the University of Texas initiated a requirement for all teacher education students to
obtain a prescribed laptop and software for use
throughout their academic preparation (Resta &
Tothero, 2005). The goal is the preparation of a
new generation of teachers who would be able to
use new tools and practices in their teaching. This
program's vision is to prepare future teachers to enable their future students to learn in technologically rich classrooms. Its official name is
LIFE (Laptop Initiative for Future Educators).
All the students are provided with laptops and
software that meet specific requirements, and
wireless access to the network in their classrooms or
throughout the building. The involvement with the
technology creates two critical needs. The first
is the need to train the students to use the new
hardware and software tools. For this reason, at the start of every semester, workshops are offered
to familiarize new students with the systems and
applications. The second is the need for technical
support. For this, a Students Laptop Helpdesk was
established. It provides equipment, supplies, and
instruction for both students and teachers.
can take the material and, using software like office suites, Web browsing tools, database, video, music, image processing, and communication software, can cope with their commitments. It is worthwhile to mention that students cannot have access to the Internet from their homes except in cases where the family has its own Internet provider. Technical support is provided on a full-time basis.
Missouri
New Hampshire
New Jersey
New Mexico
New York
North Carolina
Ohio
Oregon
Pennsylvania
South Carolina
Texas
Vermont
Virginia
Washington
hamburgmediaschool.com/pages/p118328_v1.html). Since then, the school chose to continue using portable devices in three classes. However, the entire effort is characterized as positive. All the students were provided with wireless laptops and specialized software. Students who continue in this program have become skilled at organizing themselves and at learning to use technology, which was the goal of the project.
Minervaskolan School
Since 1999, pupils in Minervaskolan have used
laptops with wireless access to the network
(http://www.minervaskolan.se/). The program's technical specifications are similar to the others we described. Most important, the project's evaluation shows that students' non-attendance approaches zero and scores on national tests are remarkable. Also, students' grades appear to be continuously increasing, attesting to the program's success. Now, Minervaskolan is one
of the top schools in Sweden.
Conclusion
Ubiquitous computing changes the field of education. It offers anytime access to information resources and allows learning methods that are difficult to apply in traditional classrooms.
Research has shown it has positive impacts on
students' learning. Also, a lot of advantages exist
in teaching, and school communities see these
positive impacts of the new technology.
In order to integrate computers in education
smoothly, careful design is necessary. The focus
must be on the pedagogy and not on technology.
This is because the final goal is to accomplish a
high level of learning. For this reason, we speak
about technology-enhanced learning, and not
about technical training.
Ubiquitous or pervasive computing has significant advantages over traditional teaching
methods, and we must work on them to reach
the desired results.
References
Apple Computer, Inc. (n.d.). Why 1 to 1 learning.
Retrieved October 11, 2006, from http://www.
apple.com/education
Bayon, V., Rodden, T., Greenhalgh, C., & Benford,
S. (2002, August 26-28). Going back to school:
Putting a pervasive environment into the real
world. Pervasive, 69-84. Zurich, Switzerland.
Becker, H., & Riel, M. (2000). Teacher professional engagement and constructive-compatible
computer usage (Report No. 7). Irvine, CA:
Deters, R. (2001, May 19-23). Pervasive computing devices for education. In Proceedings of the
International Conference on AI and Education
(pp. 49-54), San Antonio, Texas.
Diggs, C. (2004, May). The effect of palm pilots
on student learning in the elementary classroom.
Retrieved October 11, 2006, from http://www.
successlink.org/handheld/PalmInHand.pdf
Digital Divide.org (2005). Digital divide: What
it is and why it matters. Retrieved December 18,
2006 from http://www.digitaldivide.org/dd/digitaldivide.html
Eagleton, M. (1999, April). The benefits and
challenges of a student-designed school Web site.
Retrieved October 11, 2006, from http://www.
readingonline.org
Educational Research Service. (2001). Does technology improve student achievement? Retrieved
October 11, 2006, from http://www.ers.org
Fullan, M. (1999). Change forces: The sequel.
London: Farmer Press.
International Society for Technology in Education (ISTE) (2002). Essential conditions to make
it happen. Retrieved December 18, 2006 from
http://cnets.iste.org/students/s_esscond.html
Mantha, R.W. (2001, May 4-6). Ulysses: Creating a
ubiquitous computing learning environment sharing knowledge and experience in implementing
ICTs in universities. EUA / IAU / IAUP Round
Table, Skagen, Denmark.
McKenzie, J. (2001, January). The unwired
classroom: Wireless computers come of age.
1518
Sutherland, A. (2002). Strategic factors affecting the uptake, in further education, of new and
emerging technologies for learning and teaching.
Retrieved October 11, 2006, from http://www.
techlearn.ac.uk/cgi-bin/docspec.pl?l=83
Vahey, P., Enyedy, N., & Gifford, B. (2000). Learning probability through the use of a collaborative,
inquiry-based simulation environment. Journal of
Interactive Learning Research, 11(1), 51-84.
1519
This work was previously published in Ubiquitous and Pervasive Knowledge and Learning Management: Semantics, Social
Networking and New Media to Their Full Potential, edited by M. Lytras and A. Naeve, pp. 94-117, copyright 2007 by IGI
Publishing, formerly known as Idea Group Publishing (an imprint of IGI Global).
Chapter 4.7
ABSTRACT
Popular online social networks such as Friendster
and MySpace do more than simply reveal the superficial structure of social connectedness; the
rich meanings bottled within social network
profiles themselves imply deeper patterns of
culture and taste. If these latent semantic fabrics
of taste could be harvested formally, the resultant
resource would afford completely novel ways
for representing and reasoning about web users
and people in general. This paper narrates the
theory and technique of such a feat: the natural
language text of 100,000 social network profiles
was captured, mapped into a diverse ontology
of music, books, films, foods, etc., and machine
learning was applied to infer a semantic fabric of
taste. Taste fabrics bring us closer to improvisational manipulations of meaning, and afford us
Introduction
Recently, an online social network phenomenon
has swept over the Web (MySpace, Friendster,
Orkut, thefacebook, LinkedIn), and the signs
say that social networks are here to stay; they
Figure 1. Examples of social network profile formats, on Orkut (left) and Friendster (right). Note the
similarity of categories between the two.
climbing will miss the opportunity to be connected with a friend-of-a-friend (foaf) who likes
wakeboarding because keyword-based search
is vulnerable to the semantic gap problem. We
envision that persons who like rock climbing
and wakeboarding should be matched on the
basis of them both enjoying common ethoi (characteristics) such as sense of adventure, outdoor
sports, and thrill seeking. A critic might at
this point suggest that this could all be achieved
through the semantic mediation of an organizing ontology in which both rock climbing and
wakeboarding are subordinate to the common
governor, outdoor sports. While we agree that
a priori ontologies can mediate, and in fact they
play a part in this paper's research, there are subtler
examples where a priori ontologies would always
fail. For example, consider that rock climbing,
yoga, the food sushi, the music of Mozart,
and the books of Ralph Waldo Emerson all have
something in common. But we cannot expect a
priori ontologies to anticipate such ephemeral affinities between these items. The common threads
that weave these items have the qualities of being
liminal (barely perceptible), affective (emotional),
and exhibit shared identity, culture, and taste. In
short, these items are held together by a liminal
semantic force field, and united they constitute
a taste ethos.
What is a taste ethos? A taste ethos is an
ephemeral clustering of interests from the taste
fabric. Later in this paper we will formally explain
and justify inferring a taste fabric from social
network profiles, but for now, it suffices to say
that the taste fabric is an n by n correlation matrix,
for all n interest items mentioned or implied on a
social network (e.g., a book title, a book author,
a musician, a type of food, etc.). Taste fabric
specifies the pairwise affinity between any two
interest items, using a standard machine learning numeric metric known as pointwise mutual
information (PMI) (Church & Hanks, 1990). If a
taste fabric is an oracle which gives us the affinity between interest items as a(xi, xj), and a taste
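To make the notion of pairwise affinity concrete, the following sketch shows one way a PMI-weighted fabric could be computed from co-occurrence counts over profiles. It is an illustrative reconstruction rather than the authors' implementation: the representation of profiles as sets of normalized interest items, the symmetric dictionary holding the matrix, and the base-2 logarithm are all assumptions.

import math
from collections import Counter
from itertools import combinations

def build_taste_fabric(profiles):
    # profiles: list of sets of normalized interest items (assumed format),
    # e.g., {"rock climbing", "sushi", "Mozart"}
    n_profiles = len(profiles)
    item_counts = Counter()
    pair_counts = Counter()
    for profile in profiles:
        items = set(profile)
        item_counts.update(items)
        # count every unordered pair of items that co-occurs in one profile
        pair_counts.update(combinations(sorted(items), 2))
    fabric = {}
    for (a, b), joint in pair_counts.items():
        p_a = item_counts[a] / n_profiles
        p_b = item_counts[b] / n_profiles
        p_ab = joint / n_profiles
        pmi = math.log(p_ab / (p_a * p_b), 2)  # pointwise mutual information
        fabric[(a, b)] = fabric[(b, a)] = pmi   # symmetric affinity a(x_i, x_j)
    return fabric

profiles = [
    {"rock climbing", "wakeboarding", "sushi"},
    {"rock climbing", "yoga", "Mozart"},
    {"Mozart", "Ralph Waldo Emerson", "yoga"},
]
fabric = build_taste_fabric(profiles)
print(fabric[("Mozart", "yoga")])

In practice the co-occurrence counts would come from the full set of normalized profiles, and negative or weak PMI values could be thresholded away before the matrix is used for reasoning.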
Theoretical Background
This section lays a theoretical foundation for
how taste, identity, and social network politics
are approached in this work. For the purposes of
the ensuing theoretical discussion, social network
profiles of concern to this project can be conceptualized as a bag of interest items which a user has
written herself in natural language. In essence, it
is a self-descriptive free-text user representation,
or, harkening to Julie Andrews in The Sound of
Music, "these are a few of my favorite things." A
central theoretical premise of mining taste fabric
from social network profiles by discovering latent
semantic correlations between interest items is
that the collocation of a user's bag of interest
items is meaningful, structured by his identity,
closed within his aesthetics, and informs the total
space of taste. Next, the paper argues that a user's
bag of interests gives a true representation of his
identity, and enjoys unified ethos, or, aesthetic
closure. This is followed by a section which plays
devil's advocate and betrays some limitations of
our theoretical posture. The section theorizes a
segregation of users' profile keywords into two
species: identity-level items vs. interest-level
items. This distinction has implications for the
topological structure of the taste fabric.
is filled with multiplicity, heterogeneity, and diversity. The idea that we now have a much more
fine-grained vocabulary for expressing the self
is what ethnographer Grant McCracken, echoing
Plato, calls plenitude (McCracken, 1997). In a
culture of plenitude, a person's identity can only
be described as the sum total of what she likes
and consumes. Romantic proto-sociologist Georg
Simmel (1908/1971) characterized identity using
the metaphor of our life's materials as a broken
glass: in each shard, which could be our profession, our social status, our church membership, or
the things we like, we see a partial reflection of
our identity. These shards never fully capture our
individuality, but taken together, they do begin to
approach it. Simmel's fundamental explanation
of identity is Romantic in its genre. He believed
that the individual, while born into the world as
an unidentified content, becomes over time reified
into identified forms. Over the long run, if the individual has the opportunity to live a sufficiently
diverse set of experiences (to ensure that he does
not get spuriously trapped within some local
maxima), the set of forms that he occupies (those
shards of glass) will converge upon an authentic
description of his underlying individuality. Simmel believes that the set of shards which we collect over a lifetime sum together to describe our
true self because he believes in authenticity, as
did Plato long before him, and Martin Heidegger
after him, among others.
While Simmel postulated that earnest self-actualization would cause the collection of a person's
shards to converge upon his true individuality,
the post-Freudian psychoanalyst Jacques Lacan
went so far as to deny that there could be any
such true individual; he carried forth the idea
that the ego (self) is always constructed in the
Other (culture and the world's materials). From
Lacan's work, a mediated construction theory of
identity was born: the idea that who we are is
wholly fabricated out of cultural materials such
as language, music, books, film plots, etc. Other
popular renditions of the idea that language (e.g.,
Segmenting Profiles
Once profile texts are acquired, these texts need
to be segmented. First, texts are easily segmented
based on their interest categories. Recall in Figure 1 that texts are distributed across templated
categories, e.g., passions/general interests, books,
music, television shows, movies, sports, foods,
about me. Experience with the target social
network websites tells us that most users type
free-form natural language text into about me,
and natural language fragments for the specific
interest categories. For the passions/general interest category, text is likely to be less structured
than for specific interest categories, but still more
structured than about me. Perhaps this is due to
the following psychology: for specific interests,
it is clear what the instances would be, e.g., film
names, director names, and film genres for the
films category, yet for the general interest category,
the instance types are more ambiguous, so that
field tends to elicit more idiosyncratic speech.
For each profile and category, its particular
style of delimitation is heuristically recognized,
and then applied. Common delimitation strategies
were: comma-separated, semicolon-separated,
stylized character sequence-separated (e.g. item
1 \../ item 2 \../ ), newline-separated, commas
with a trailing "and," and so on. Considering a successful delimitation as a category broken down
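A minimal sketch of this kind of delimiter heuristic appears below; the candidate delimiter set, the majority-count selection rule, and the clean-up of a trailing "and" are illustrative assumptions rather than the authors' actual rules.

import re

CANDIDATE_DELIMITERS = [";", ",", "\n", "/"]

def segment_category_text(text):
    # Pick the candidate delimiter that occurs most often in this category's text,
    # then split on it and tidy the resulting fragments.
    counts = {d: text.count(d) for d in CANDIDATE_DELIMITERS}
    delimiter = max(counts, key=counts.get)
    if counts[delimiter] == 0:
        return [text.strip()]  # no recognizable delimitation; keep the text whole
    fragments = []
    for fragment in text.split(delimiter):
        # drop a leading "and" left over from the "commas with a trailing and" style
        fragment = re.sub(r"^and\s+", "", fragment.strip(), flags=re.IGNORECASE)
        if fragment:
            fragments.append(fragment)
    return fragments

print(segment_category_text("The Beatles, Mozart, and Radiohead"))
# ['The Beatles', 'Mozart', 'Radiohead']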
Figure 4. Two Ptolemaically-centered taste neighborhoods, computer generated with the following parameters: a maximum of 50 nodes in each neighborhood, up to the first three instances of any category type
are shown. Spatial layout is not implied by the neighborhood; nodes are manually arranged here.
Figure 5. Two screenshots of the InterestMap interactive visualization. 5a (top) depicts a user browsing
neighborhoods of taste visually. 5b (bottom) depicts a user visualizing his own taste ethos by dragging
and connecting interesting nodes to the who am i? node.
InterestMap
InterestMap (Liu & Maes, 2005a) visualizes the
topology of the taste fabric, and in particular
it depicts taste cliques, identity hubs, and taste
neighborhoods as a navigatable map. As shown in
Figure 5a, users can browse InterestMap's tapestry
of neighborhoods, cliques and identity hubs, or,
as depicted in Figure 5b, they can interactively
build up their own taste ethoi, by searching for
and attaching descriptors to a stationary who
am i? node. The act of connecting a descriptor
to the self is deeper than making a mere superficial keyword association since each descriptor is
actually something more like a semantic cloud.
Once a user has connected several descriptors to
his self, those semantic clouds begin to intersect,
overlap, and mingle. They begin to imply that
other descriptors, which the user has not selected
himself, should be within the user's taste. Hence,
the notion of a visual recommendation.
Taste-based recommendation. InterestMap
can, given a profile of the user's interests, recommend in a cross-domain way books, songs,
cuisines, films, television shows, and sports to that
user based on taste. The user's interests are normalized according to the aforementioned processes
and mapped into the taste fabric. These nodes
in the fabric constitute a particular activation
configuration that is unique to the user, and the
total situation described by this configuration
is the fuzzy taste model of the user. To make
recommendations, activation is spread outward
from this configuration, into the surrounding
nodes. Some nodes in the surrounding context
will be activated with greater energy because
they are more proximal to the taste signature of
the starting configuration. The nodes activated
with the highest energy constitute the user's
recommendation. Figure 5b shows a visualization of the recommendation process. The user's
self-described interests are the descriptors directly
connected to the who am i? node. Each of these
interests automatically entails other strongly
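A minimal sketch of such a spreading activation pass is given below. The adjacency-dictionary form of the fabric, the depth limit, the decay factor, and the assumption that edge weights have been normalized into the range [0, 1] are illustrative choices rather than details taken from the paper.

def spread_activation(fabric, seed_items, depth=2, decay=0.5):
    # fabric: adjacency dict, item -> {neighbor: normalized edge weight in [0, 1]}
    # seed_items: the user's own normalized interest keywords
    activation = {item: 1.0 for item in seed_items}
    frontier = dict(activation)
    for _ in range(depth):
        next_frontier = {}
        for item, energy in frontier.items():
            for neighbor, weight in fabric.get(item, {}).items():
                gained = energy * weight * decay
                if gained > activation.get(neighbor, 0.0):
                    activation[neighbor] = gained
                    next_frontier[neighbor] = gained
        frontier = next_frontier
    # items activated with the highest energy, excluding the seeds themselves,
    # constitute the cross-domain recommendation
    scores = {item: e for item, e in activation.items() if item not in seed_items}
    return sorted(scores.items(), key=lambda pair: pair[1], reverse=True)

Under this reading, the activation dictionary itself plays the role of the fuzzy taste model, and the taste-similarity of two users could be estimated from the overlap of their activation patterns.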
Ambient Semantics
Ambient Semantics (Maes et al., 2005) is a wearable contextual information system that supports
users in discovering objects and meeting people
through pithy just-in-time feedback given in the
crucial first moments of an encounter. Here is an
example of a use case involving the discovery of a
new book: Wearing the Ambient Semantics RFID
reader wristband, you pick up a copy of Marvin
Minsky's Society of Mind book. Through
your cell phone display, the system tells you that
you would be particularly interested in section 3
because it is relevant to your current research topics. It would tell you that your friends Henry and
Barbara listed this book among their favorites, and
that the author's expressed opinions seem sympathetic to your own, based on semantic analyses of
both your writings. The system can indicate that
you would find the book tasteful because it can
use taste fabric to detect that it is indeed within
close proximity to your taste ethos, translating to
a strong taste-based recommendation.
Exposing shared taste-context between two
strangers. The second use case concerns the system facilitating social introductions by breaking
the ice. This scenario demonstrates using the taste
fabric for the quantification and qualification of
the taste-similarity between two strangers. First,
a scenario. You are at a business networking event
IdentityMirror
What if you could look in the mirror and see
not just what you look like, but also who you
are? Identity mirror (Figure 6) is an augmented
evocative object that reifies its metaphors in the
workings of an ordinary mirror. When the viewer
is distant from the object, a question mark is
the only keyword painted over his face. As he
approaches to a medium distance, larger font
sized identity keywords such as fitness buffs,
fashionistas, and book lovers identify him.
Approaching further, his favorite book, film, and
music genres are seen. Closer yet, his favorite
authors, musicians, and filmmakers are known,
Figure 6. Three screenshots of one of the authors gazing into the IdentityMirror. (left) Far away, only general identities can be seen; (center) at mid-distance, favorite music, book, and film genres emerge; (right)
finally, up-close, all of the details and specific interests in the viewer's taste ethos become visible.
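The distance-dependent reveal can be pictured as a simple threshold mapping from viewer distance to level of detail. The sketch below is purely illustrative; the distance thresholds and the structure assumed for the viewer's taste ethos are not taken from the system description.

def keywords_for_distance(distance_m, taste_ethos):
    # taste_ethos: assumed dict with "identities" (e.g., "book lovers"),
    # "genres" (favorite book, film, and music genres), and "specifics"
    # (favorite authors, musicians, and filmmakers)
    if distance_m > 3.0:
        return ["?"]                     # far away: only a question mark is painted
    if distance_m > 2.0:
        return taste_ethos["identities"]
    if distance_m > 1.0:
        return taste_ethos["identities"] + taste_ethos["genres"]
    return (taste_ethos["identities"]
            + taste_ethos["genres"]
            + taste_ethos["specifics"])  # up close: the full taste ethos is visible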
Advanced Discussion
In this section, we present an evaluation of the
taste fabric, present related work, and discuss
other ways in which this work is of consequence
to the semantic mining and Semantic Web communities.
Evaluation
We evaluate the quality of the taste fabric apropos a telos of recommendation, scrutinizing
the performance of recommending interests via
spreading activation over the taste fabric, as
compared with a classic collaborative filtering
recommender. Much of this discussion is adapted
from (Liu & Maes, 2005a).
In this evaluation, we introduced three controls
to assess two particular features: (1) the impact that
identity hubs and taste cliques have on the quality
of recommendations; and (2) the effect of using
spreading activation rather than a simple tally of
PMI scores. Notably absent is any evaluation for
the quality of the produced taste neighborhoods,
because here we consider only quantitative and not
qualitative recommendation. Qualitative recommendation is not claimed to outperform quantitative recommendation in terms of accuracy; our
suggestion was that linguistically identifying and
visually illustrating two persons' cohabitations
of taste neighborhoods should facilitate trust and
transparency in the recommender's process.
In the first control, identity descriptor nodes
are simply removed from the network, and spreading activation proceeds as usual. In the second
control, identity descriptor nodes are removed,
and n-cliques9 where n>3 are weakened10. The
third control does not do any spreading activation,
but rather, computes a simple tally of the PMI
scores generated by each seed profile descriptor
for each of the 11,000 or so interest descriptors.
We believe that this successfully emulates the
mechanism of a typical non-spreading activation
Related Works
A cultural metadata approach to musical taste.
Whitman and Lawrence (2002) developed a
metadata model for characterizing the taste
coherence of musical genres. Mining adjectival
and noun phrases collocated with musical artist
discussions in newsgroups and chatrooms, they applied machine learning to automatically annotate
music artists with what they termed community
metadata. Then Whitman and Smaragdis (2002)
applied community metadata to build cultural
signatures for music genres that could be used, in
conjunction with the auditory signal, to classify
unknown artists based on style similarity. Their
notion of a metadata signature for musical styles is
sympathetic to our notion of taste ethos and taste
neighborhood, and both systems take a bottom-up
metadata-driven view of meaning definition. A
chief difference between our two works is that
taste knowledge is located in descriptive word-choice in their system (e.g., wicked, loud), and
located in interest-choice in our system, that is,
the choices of what people consume (e.g., Britney
Spears, Oreo cookies).
Social information filtering. In prior work,
one of the authors co-developed a well-known
technique for item recommendation based upon
nearest taste-neighbor, the approach known variously as social filtering, or collaborative filtering.
Shardanand and Maes (1995) represent users
as vectors of (item, rating) pairs, and compute
taste-similarity as statistical correlation between
user vectors, or alternatively as cosine similarity
of vectors in n-dimensional item space. In their
Ringo social music browsing system, users were
recommended a list of potential tastemates on
the basis of taste-similarity. One difference between our two approaches is that social filtering
maintains distinct user profiles, whereas taste
fabrics dissolves user boundaries, and is, in their
content entailments of social network users is McCallum, Corrada-Emmanuel, and Wang's (2005)
modeling of Author-Recipient-Topic correlations
in a social network messaging system. Given the
topic distributions of email conversations, the
ART model could predict the role-relationships
of author and recipient. The work considers group
clusters and dyadic relationship dynamics but does
not consider cultural aggregates as is the concern
of our present work.
Large-scale commonsense knowledge networks. Taste fabrics are a rich tapestry which
define the meaning space of taste and interests.
They are represented as semantic networks and
reasoning is performed via spreading activation
over this network. This approach to knowledge
representation and reasoning builds upon previous
work in large-scale semantic knowledge bases such
as WordNet (Fellbaum, 1998) and ConceptNet (Liu
& Singh, 2004). WordNet is a semantic network
whose nodes are words, and edges are various
nymic lexical relations between the words, e.g. a
dog has the hypernym of canine. ConceptNet
is a semantic network of commonsense knowledge whose 200,000 nodes are verb phrases (eat
burger, take shower), and 1.6 million edges
are one of 20 kinds of world semantic relations
(e.g., EffectOf, PropertyOf, DesireOf), e.g.,
(EffectOf be hungry cook food). ConceptNet
and taste fabrics reason similarly by activating
a seed configuration of nodes, and spreading
activation outward to define a semantic context.
Both resources are densely connected, semantically extensive within their respective domains,
and allow for improvisational manipulations of
meaning to take place atop them.
Reusable Methodologies
Sanitary semantic mining. The sanitariness of a
mined knowledge resource is the degree to which
it is purged of idiosyncrasy, especially idiosyncratic traces of user-specific information, and
also idiosyncrasies which implicate the original
Conclusion
This paper presented a theory and implementation
of taste fabrics, a semantic mining approach
to the modeling and computation of personal
tastes for lifestyle, books, music, film, sports,
foods, and television. Premised on philosophical
and sociological theories of taste and identity,
100,000 social network profiles were mined,
ontologically-sanitized, and a semantic fabric
of taste was woven. The taste fabric affords a
highly flexible representation of a user in tastespace, enabling a keyword-based profile to be
relaxed into a spreading activation pattern on
the taste fabric, which we termed a taste ethos.
Ethotic representation makes possible many
improvisational manipulations of meaning, for
example, the taste-similarity of two people can
be computed as the shared activation between two
ethoi. Taste-based recommendation is already
implied by a taste ethos, as all items within an
ethos are intrinsically relevant to the taste of the
individual. Indeed, an evaluation of recommendation using the taste fabric implementation shows
that it compares favorably to classic collaborative
filtering recommendation methods, and whereas
collaborative filtering is an opaque mechanism,
recommendation using taste fabrics can be effectively visualized, thus enhancing transparency
and cultivating user trust.
Two models of taste-based recommendation, one quantitative based on shared activation
and one qualitative based on k-nearest neighborhoods, were presented. Recommendation, time
and world-sensitive user representation, and interpersonal taste-similarity, were illustrated within
a survey of three applications of taste fabrics.
This paper makes three contributions to the
literature. First, it presents a novel mechanism for
mining and modeling the taste-space of personal
identities and interests. Second, the mining and
weaving of taste fabrics from idiosyncratic social
network profiles raises the issue of sanitation of
knowledge resources, and this paper illustrated
how ontology and non-linear correlation learning
can be used to purge idiosyncrasy and prepare
a general-purpose grade knowledge resource.
Finally and third, in addition to ontology-based
and metadata-based knowledge resources, taste
fabrics introduces a novel third approach to the
literature: instance-based fabrics, where the
notion of knowledge is a purely relational one.
Fabrics, we suggest, excel at semantic mediation,
contextualization, and classification, and may
play a valuable role as a context mediator in a
recently complicated Semantic Web of formal,
semi-formal, and now, informal, entities.
Acknowledgments
This research was supported by a British Telecom Fellowship, an AOL Fellowship, and by the
research consortia sponsors of the MIT Media
Lab.
Endnotes
1. http://www.dmoz.org
2. http://www.imdb.com
3. http://tvtome.com
4. http://tvguide.com
5. http://www.wikipedia.org
6. http://www.allmusic.com
7. http://allrecipes.com
8. http://www.foodsubs.com
9. A qualifying clique edge is defined here as an edge whose strength is in the 80th percentile, or greater, of all edges.
10. By discounting a random 50% subset of the clique's edges by a Gaussian factor (0.5 mu, 0.2 sigma).
This work was previously published in International Journal on Semantic Web & Information Systems, Vol. 2, Issue 1, edited
by A. Sheth & M. Lytras, pp. 42-71 , copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint
of IGI Global).
Chapter 4.8
Computer-Mediated
Communication:
Enhancing Online
Group Interactions
J. Michael Blocher
Northern Arizona University, USA
Abstract
Background
Summary
Although CMC technology does have the
potential to provide a non-linear and level environment, a culturally neutral conduit for human
interaction, it seems that it may not. Indeed, the studies described above
strongly indicate that CMC may not be the level
playing field that it was once thought to be. In
particular, there are communication style differences within the CMC environment that parallel
traditional communication styles, creating
a group communication environment where
communicators interact in ways similar to how they
would in traditional face-to-face environments,
where gender, power, and communication style
may have a strong impact on one's communication
within group interactions.
It is important to remember that these are
general tendencies and that not all CMC group
interactions are destined to exhibit the type of
interactions described above. Rather, these are
tendencies that could impact one's experience of
engaging in CMC, which might be exacerbated
by some of the unique communication issues
that are specific to current text-based CMC. For
example, often a message is sent that is totally
benign. However, the message, even when taken
simply as written text, may be read with a
completely different meaning than originally
intended, because the sender and recipient engaged in the communication at different times,
locations, and states of mind. Being more knowledgeable about the tendencies and limitations of
CMC could help limit more adversarial interactions that were based upon miscommunication
due to CMC issues or communication styles. In
particular, becoming aware of and monitoring
one's personal communication style could help
make the individual more thoughtful and strategic about their communiqués, thus reducing the
tendencies for miscommunication to occur. The
next section will discuss specific issues at work
within the current CMC environments that might
hamper communication and strategies that could
be implemented to enhance more equitable and
clear CMC engagement.
Issues
As described, gender, power, and communication
style can impact one's personal communication
interactions within a CMC environment. Although
these elements parallel traditional face-to-face
communication interactions, there are issues
with CMC that are unique to that environment.
In particular, the discussion will focus on current
CMC issues in terms of how asynchronous and
synchronous electronic communication tools limit
the capacity for communication cues and meta-information, as well as communication styles in
terms of gender, and whether these issues can be
influenced to help ameliorate those limitations.
Media Richness
Much of the CMC literature suggests that communicating within electronic messaging systems,
which are primarily text based, limits communication because of the lack of other communication cues or components. Media richness theory
(Daft & Lengel, 1986) defines communication
media in terms of face-to-face communication
elements. These elements include immediacy of
feedback, non-verbal, and other cues that enrich
communication. In other words, a medium is
considered richer or thicker when it can sup-
CMC Strategies
From the study detailed above it seems that the
participants modified or enhanced their communication within the limited media in some way to
better perform their tasks. While it is clear that
various CMC media have limitations, users can
and do use strategies that can help ameliorate the
limitations of a particular media. There are specific examples of communicators modifying their
communication to better fit a medium. As a case
in point, text shortcuts, such as: btw = by the way,
irl = in real life, cu ltr = see you later, and imho =
in my humble opinion, have been developed and
utilized within CMC environments, providing
evidence of communicators utilizing new methods
to speed up and facilitate their interaction. The
practice of abbreviated-text messaging and instant
messaging has become commonplace. Indeed,
mobile phone text messaging has become a social
phenomenon practiced by young people in many
parts of the world. For example, Grinter and Eldridge (2003) describe how British teenagers are
similar to teenagers in other parts of the world in
Summary
The studies described above detail some of the
major issues of CMC. Specifically, current CMC
systems tend to be lean media in terms of media
richness theory, which can impact certain populations based upon their communication strengths.
However, it also would seem that communicators
believe it is important and take steps to ameliorate
the limitations that the lean media present by
utilizing communication methods and strategies
that are supportive in nature. Interestingly, utilizing higher group development communication
style (HCS) might limit the identification of the
communicator, thus possibly providing a more
level communication venue in terms of gender.
More importantly, however, it seems that the
studies above demonstrate that communication
Recommendations
From the review of the literature there are several
issues that dominate the various common CMC
systems and the participants who engage in group
interactions within them. In particular, there are
issues of media richness and communication
style differences. In addition, there is the issue
of a group's communication norms. These factors
can impact the group's communication norms,
which might be defined by: (A) the makeup of the
group membership, (B) the purpose of the group,
and (C) the media they employ to communicate.
These issues can provide some challenges for
CMC users. Drawing from distance or distributed
learning theory will provide some recommendations to help support electronic communication
by building online communities where group
interaction is enhanced by developing membership within a community of practice. In addition,
the literature suggests CMC strategies that might
help communicators better their electronic communiqus, thus reducing miscommunications
Community Building
As discussed earlier in this chapter, Rheingold
(1993) described virtual communities where
members interact using CMC within a level social
environment. Although this ideal may have been
suggested prior to having a greater understanding
of some of the media limitations and how group
interactions are impacted by them, it does not mean
that the idea should necessarily be abandoned. This
is especially important as group communication
norms can dictate the membership engagement of
communication style. One only needs to look at the
incredible growth in online tools like MySpace
(n.d.), Orkut (n.d.), and the plethora of blogs to
see that people want to communicate and engage
in CMC group interactions. For example, more
advanced robust open-community CMC systems,
such as BuddySpace, are being developed to
provide the users with greater flexibility, in terms
of connectivity with other platforms. In addition,
BuddySpace provides the user's location, which
can help group individuals who might have similar
interests because of their locale. However, as with
any community, online community members
may not be as supportive of one another as they
ultimately could, or perhaps should, be. Therefore,
it would be wise to understand the elements of
and communications. Although the CMC administrators of the online community could censure
inappropriate individuals, and some communities may decide that some form of policing is
appropriate, still, here, the membership should
be responsible for the decision making of that
policy. It is recommended that by building an
online community of practice, inequities and
disagreements would be settled by the membership of that community, based upon that group's
agreed-upon norms. The next section will outline
various communication strategies that will help
reduce miscommunications that can often be the
cause for online disagreements.
He found that participants believed that supportive CMC strategies were important and utilized them within an online learning environment
in 25.9% of the interactions. In particular, he found
that the most commonly used supportive communication strategies utilized by his participants
included: (a) referential statements, (b) signatures,
(c) greetings, and (d) horizontal questions.
Within a community of practice or learning,
it is recommended that members become aware
of and begin to utilize these strategies. While it
might seem logical to use some of these strategies
in group interactions, messages are often received
that do not include any closing and, at times, not
even something as important as the sender's name.
In utilizing these strategies the CMC message will
include more communication cues to help enrich
the medium with more information. As described
earlier, a lack of nonverbal cues can be a possible
issue for female members of a community, and this
simple strategy might better support a member's
understanding of a particular message. Simply
utilizing emoticons to replace the nonverbal cues
(e.g., facial expressions) missing from CMC can
provide more information to the recipient of the
message. For example, utilizing the winking
emoticon can provide information to the recipient that the statement was not meant to be taken
seriously. Other more complex strategies, such as
stating acknowledgement and agreement, provide
the recipient with information that might replace
a head nod or other gestures, not communicated
in text-only CMC. Furthermore, apologies for
miscommunications or other miscues provide "I"
statements and self-disclosures that are associated
with higher group development communication
style (HCS). The use of horizontal questions,
inviting responses, and providing referential
statements of another's message support the
notion of enhancing social interactions, which
encourages additional participant engagement
in group interaction. With the use of these more
supportive CMC strategies, members will engage
in richer CMC to better understand one another,
Skype
Skype (n.d.) is an Internet telephony service that
uses a computer with Internet connectivity to provide, primarily, audio-based computer-to-computer
Internet Telephony. Although it does provide a
video option, that option is currently in beta.
However, once connected users also have a chat
window available for synchronous messaging as
well. Users can set up conference calls with up
to four participants plus the host.
Skype client software includes all of the features one might expect in such a tool, such as an
address book of contacts for organizational and
ease-of-use purposes. Skype accounts also provide
several services for various fees, such as SkypeIn,
which provides the user a phone number much like
a traditional phone, including voicemail, and SkypeOut, which permits users to make phone calls to
both land and cell phone lines, including SMS, a
form of instant messaging. Requirements include
a headset with microphone, which is used instead
of a microphone and speakers to keep feedback
from occurring. The easy-to-use client software
can be downloaded for free. More importantly,
computer-to-computer communication is
free. There is a fee to utilize calls to and from
land or cell phones; users need a pre-paid account. However, the costs are quite inexpensive,
especially for international calls. Because this
form of Internet Telephony primarily focuses on
audio, Skype might be a good tool to consider
when beginning to utilize multimedia CMC, as it
only requires a headset and the easy-to-use free
software to get started.
iVisit
iVisit (n.d.) combines a variety of CMC tools
including, video conferencing, instant messaging, file sharing, and desktop sharing. Although
server licenses are available, most users have to
obtain access to the software application in the
traditional server/client relationship, where iVisit
provides the server access, and users download
client software and sign up for one of two types of
accounts, Light and Plus. The software includes
several windows, including: the video feed with
controls, an address book that lists both people
and places (available conferences), and a chat
window for both the individual and all chatting
within a conference.
An iVisit Light is a free account that permits
users to interact as guests in open or public con-
Microsoft NetMeeting
and Messenger
For quite a few versions, Microsoft Windows-based operating systems have provided multimedia CMC tools that include various features
depending upon the particular version. The
tools range from earlier versions of Microsoft
NetMeeting (n.d.) to, most recently, Microsoft
Live Messenger (n.d.). While Live Messenger is
quite similar to Yahoo Messenger and focuses
on instant messaging (IM), audio-video Internet
telephony (somewhat like Skype), and file sharing, NetMeeting also includes application sharing. NetMeeting, will no longer be part of the
new Windows Vista operating system. Instead,
Microsoft will include a new suite of CMC tools
called, Windows Meeting Space for home and
home office users, Office Live Meeting Windows
Meeting Space for small and medium business
users, and Office Live Meeting Office Com-
Elluminate
Elluminate (n.d.) is a comprehensive online
multimedia-conferencing tool that includes audio,
chat, interactive whiteboard, application sharing,
file transfer, and direct messaging. It truly goes
beyond the tools previously listed, as its primary
focus is to support group interactions. As such,
Elluminate includes additional features designed
to provide greater user interactivity, such as: (a)
participant profiles to provide greater information
(photo & bio) about the participants, (b) polling,
which permits a moderator to get some feedback
regarding an issue, question, or comment, and
(c) breakout rooms, a great feature should users
want to break up a large conference for smaller
group discussions. One important feature is that
it utilizes full duplex audio, meaning that more
than one individual can speak at the same time.
However, the tool also provides for moderator
control, should that be desired. In addition, the
tool is designed for assistive access with closed-caption transcript and keystroke configurability.
Elluminate also provides for input from a Web
cam, which provides video feed of the speaker,
although this feature is not the primary focus as
with iVisit.
Summary
This section provided recommendations that could
enhance CMC group interactions. Specifically, it
is recommended that the users of CMC consider
fostering the building online communities of
practice where members co-construct, hopefully,
equitable communication norms in their online
group interactions. It also is recommended that the
members engaged in group interactions become
aware of and utilize CMC strategies that enhance
their interactions by fostering more clear communications. Finally, it is recommended that multimedia CMC tools be considered to enhance and
enrich CMC when practical and appropriate.
Future Trends
This chapter has discussed the issues and recommendations of current CMC group interactions.
Over the past few years, incredible advances
in communication technologies have become
available and have been embraced by our society. So what does the future hold? In reviewing
the literature of those who've studied aspects,
issues, and elements of CMC group interaction,
one observation is evident: electronic computer-mediated communication has expanded in use,
sophistication, and reach. More people now use
various types of computer-mediated communication than ever before to communicate with others
in all parts of the world. However, the literature
provides evidence that online group interactions
can suffer the same issues as traditional face-to-face group interactions. With that in mind, one
could predict that future trends in CMC group
interaction will include the continued expanding
use, advancement, and sophistication of CMC
tools to an even more global scale than currently
exists. With the expanding global reach, users'
skills, in terms of more sophisticated tools and
more strategic use, will need to increase as well.
With more diverse users making use of the various
CMC tools, communicators will need to become
better accomplished CMC users to compensate
for the limitations of the various CMC systems.
In addition, they will need to be more knowledgeable about possible group interaction issues that
pertain to newer systems. If that prediction holds
true, future global CMC users would benefit from
greater investigation of cultural communication
differences and global CMC interactions as they
relate to the various media that transmit their
messages.
Summary
This section provided some ideas of what the
future might hold for CMC group interactions.
Certainly, we can presume that CMC usage will
increase and become more complex and, perhaps,
robust. CMC users will be required to become
more technically proficient. A more important
concern, however, will be that as the usage of
CMC expands more globally, the communities
that will better prosper will be those that trend toward the development of communities of practice,
where members think about their responsibility
in the development of their shared repertoire of
communal resources in light of possible cultural
differences. It will be vital for members engaged
in CMC group interactions to be aware of, and
provide consideration for, cultural differences.
While the CMC strategies outlined above may
work well for certain cultures, will they work
well within a global community? Or, will they
have unintended consequences?
References
BuddySpace (n.d.). Retrieved March 15, 2007,
from http://www.buddyspace.org
Carli, L. L. (1984). Sex differences in task behaviors, social behaviors and influences as a function
of sex composition of dyads and instructions to
compete or cooperate. Dissertation Abstracts
International, 45(1-B), 401.
Daft, R. L., & Lengel, R. H. (1986). Organizational
information requirements, media richness and
structural design. Management Science, 32(5),
554-571.
Additional Reading
Barnes, S. B. (2001). Online connections: Internet interpersonal relationships. Creskill, NJ:
Hampton Press.
Barrett, M., & Davidson, M. J. (2006). Gender
and communication at work: Gender and organizational theory series. Burlington, VT: Ashgate
Publishing.
Collison, G., Elbaum, B., Haavind, S., & Tinker,
R. (2000). Facilitating online learning: Effective
strategies for moderators. Madison, WI: Atwood
Publishing.
Day, P. (Ed.). (2004). Community practice in the
network society: Local action/global interaction.
New York: Routledge.
Ermann, M. D., & Shauf, M. S. (2003). Computers,
ethics, and society. New York: Oxford Press.
Chapter 4.9
Abstract
Recently, the ubiquitous use of mobile phones by
people from different cultures has grown enormously. For example, mobile phones are used to
perform both private and business conversations.
In many cases, mobile phone conversations take
place in public places. In this article, we attempt
to understand if cultural differences influence
the way people use their mobile phones in public
places. The material considered here draws on the
existing literature of mobile phones, and quantitative and qualitative work carried out in the UK
(as a mature mobile phone market) and the Sudan
(that is part of Africa and the Middle East culture
with its emerging mobile phone market). Results
indicate that people in the Sudan are less likely
to use their mobile phones on public transport or
Introduction
Economic globalization and the widespread use
of mobile phones have changed the way people
live and manage their lives, and cut down the
virtual distance between countries, regions, and
time zones. New ways of using mobile phones are
constantly emerging (e.g., downloading music to
listen to on the train), and the pervasive use of
mobile phones in public places for private talk
What is Culture?
Culture is a complicated paradigm that is difficult
to accurately define. According to some researchers, culture must be interpreted (van Peursson,
in Evers & Day, 1997). Hofstede (1980) conceptualized culture as programming of the mind,
suggesting that certain reactions were more likely
in certain cultures than in others, based on differences between the basic values of the members
of different cultures (Smith, Dunckley, French,
Minocha, & Chang, 2004). Culture can also be
seen as a collection of attributes people acquire
from their childhood training. These attributes
are associated with their environment, surroundings that influence the responses of people in that
culture to new ideas, and practices and use of new
technology (such as mobile phones). Given that
culture may affect the way people behave and
interact in general, Ciborowski (1979) identified a
close link between knowledge and culture. In the
context of mobile phone communication, it may
be argued that culture influences knowledge, or
the individual's general experience, therefore
affecting, in this instance, their attitude towards
mobile phone use in public places.
Another explanation of culture has been offered by Hofstede (1980). He produced a cultural
model that focuses on determining the patterns of
thinking, feeling, and acting that form a cultures
mental programming. This model has been
adopted for the study reported in this article,
as researchers in the area of cross-cultural differences and technology use consider it a valid
and useful measure of systematic categorization
(e.g., De Mooij, 2003; Honald, 1999). In addition, it is also considered to be directly related
Methodology
Participants
88 participants took part in the study: 43 British
(22 male, 21 female) and 45 Sudanese (20 male,
25 female), ranging in age from 15 to 63 years old,
with the average age of 30 years. All participants
were mobile phone users. The range of mobile
phone use for the Sudanese participants was from
2-5 years, whereas the British participants had
used mobile phones for 4-12 years.
Data Collection
Data was collected in this study using a questionnaire and an interview. The development of
the questionnaire went through several stages.
First, the content of the questionnaire was
generated from an exhaustive review of
the literature generally on mobile phones, human-computer interaction (HCI), and cultural
issues in mobile phone use. Second, an in-depth
session was conducted with participants from both
countries (the UK and the Sudan) to develop the
questionnaire. Initially, a total of nine Likert-type
questions were developed. The scale was then
tested for content validity, which can be defined
as the extent to which a test actually measures
what it is supposed to measure (Rust & Golombok,
1989). This was undertaken using what is known
as the judgemental approach, with three mobile
HCI experts.
As a result of this process, the questionnaire
was subsequently revised to consist of six Likert-type questions. The six Likert statements focused
on attitudes towards the use of mobile phones in
public places. An example of the Likert statement
used in this study is as follows:
Mobile phones should not be switched off during
meetings:
Strongly agree
Agree
Neutral
Disagree
Strongly disagree
The attitude scale had a combination of positive and negative statements in order to control for
any possible acquiescence effect from participants
when they were completing the attitude questionnaire. This is a phenomenon whereby participants in
a study may unwittingly try to respond positively to
every question in order to help the investigator with
Procedure
Participants were chosen from an opportunistic
sample in both the UK and Sudan and asked to
complete the questionnaire and return it to the
researcher once they had completed it.
The questionnaires took approximately 15
minutes to complete. At this point, an arrangement
was made to interview a subset of the participants
who had been selected randomly and volunteered
to answer the interview questions. Participants
were informed from the outset that the results of
Results
An independent-samples t test was carried out to
compare attitudes towards using mobile phones in
Table 1. Attitudes towards the use of mobile phones in public places in the UK and the Sudan (independent-samples t tests; British n = 42, Sudanese n = 45)

I would be comfortable using my mobile phone in restaurants
  British: mean 2.83, SD 1.146, SE .177; Sudanese: mean 2.56, SD .755, SE .113
  t = 1.325, df = 70.241, p = .189

I would not be comfortable using my mobile phone on public transport
  British: mean 3.29, SD 1.175, SE .181; Sudanese: mean 2.02, SD .753, SE .112
  t = 5.925, df = 69.046, p = .000 ***

I would be comfortable using my mobile phone whilst walking down the street
  British: mean 3.69, SD 1.070, SE .165; Sudanese: mean 2.84, SD .952, SE .142
  t = 3.884, df = 82.171, p = .000 ***

Mobile phones should be switched off in places of worship
  British: mean 4.45, SD .861, SE .133; Sudanese: mean 4.89, SD .318, SE .047
  t = 3.094, df = 51.314, p = .003 **

Mobile phones should be switched off during meetings
  British: mean 3.88, SD .968, SE .149; Sudanese: mean 4.29, SD .626, SE .093
  t = 2.316, df = 69.411, p = .023

Mobile phones not to be switched on in schools during classes
  British: mean 4.00, SD 1.307, SE .202; Sudanese: mean 4.58, SD .690, SE .103
  t = 2.552, df = 61.278, p = .013
Table 2. Attitude difference between the Sudanese males in using mobile phones in public places and the British males (Sudanese males n = 20, British males n = 23)

Mobile phones should be switched off in places of worship
  Sudanese male: mean 4.90, SD .308, SE .069; British male: mean 4.43, SD .992, SE .207
  t = 2.134, df = 26.761, p = .042

Mobile phones should be switched off during meetings
  Sudanese male: mean 4.20, SD .523, SE .117; British male: mean 3.83, SD 1.154, SE .241
  t = 1.397, df = 31.583, p = .172

Mobile phones not to be switched on in schools during classes
  Sudanese male: mean 4.50, SD .827, SE .185; British male: mean 4.17, SD 1.403, SE .293
  t = .942, df = 36.374, p = .352

I would be happy using mobile phones in restaurants
  Sudanese male: mean 2.50, SD .688, SE .154; British male: mean 2.70, SD 1.105, SE .230
  t = -.706, df = 37.389, p = .485

I would not be comfortable using a mobile phone on public transport
  Sudanese male: mean 2.25, SD .786, SE .176; British male: mean 3.13, SD 1.180, SE .246
  t = -2.912, df = 38.570, p = .006 **

I would be comfortable using a mobile phone whilst walking on the street
  Sudanese male: mean 3.15, SD .813, SE .182; British male: mean 4.04, SD .825, SE .172
  df = 40.330, p = .001 **
Table 3. Attitude difference between the Sudanese females in using mobile phones in public places and the British females (Sudanese females n = 25, British females n = 19)

Mobile phones should be switched off in places of worship
  Sudanese female: mean 4.88, SD .332, SE .066; British female: mean 4.47, SD .697, SE .160
  t = 2.348, df = 24.196, p = .027

Mobile phones should be switched off during meetings
  Sudanese female: mean 4.36, SD .700, SE .140; British female: mean 3.95, SD .705, SE .162
  t = 1.929, df = 38.758, p = .061

Mobile phones not to be switched on in schools during classes
  Sudanese female: mean 4.64, SD .569, SE .114; British female: mean 3.79, SD 1.182, SE .271
  t = 2.892, df = 24.322, p = .008 **

I would be happy using mobile phones in restaurants
  Sudanese female: mean 2.60, SD .816, SE .163; British female: mean 3.00, SD 1.202, SE .276
  t = -1.248, df = 30.068, p = .222

I would not be comfortable using a mobile phone on public transport
  Sudanese female: mean 1.84, SD .688, SE .138; British female: mean 3.47, SD 1.172, SE .269
  t = -5.408, df = 27.256, p = .000 ***

I would be comfortable using a mobile phone whilst walking on the street
  Sudanese female: mean 2.60, SD 1.000, SE .200; British female: mean 3.26, SD 1.195, SE .274
  t = -1.955, df = 34.863, p = .059

*P<0.05; **P<0.01; ***P<0.001
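For readers who wish to reproduce this style of analysis, the fragment below runs one such comparison using the Welch (unequal-variance) form of the independent-samples t test, which is consistent with the fractional degrees of freedom reported in the tables. The response lists are hypothetical placeholders, not the study's data.

from scipy import stats

british = [3, 4, 2, 5, 3, 4, 3, 2, 4, 3]   # hypothetical 1-5 Likert responses
sudanese = [2, 2, 1, 3, 2, 2, 3, 1, 2, 2]  # hypothetical 1-5 Likert responses

t_value, p_value = stats.ttest_ind(british, sudanese, equal_var=False)
print(f"t = {t_value:.3f}, two-tailed p = {p_value:.3f}")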
Interview Results
The interview results corresponded with the
questionnaire data, indicating that there is a difference between the British and the Sudanese attitudes towards the use of mobile phones in public
places. Sudanese were found to be less willing
to use mobile phones in public places than their
British counterparts. In the interview, Sudanese
participants revealed various reasons for their
uncomfortable attitude towards the use of mobile
phones in public places. For example, some of the
participants felt that the use of mobile phones in
public transport is unacceptable because it can
be disturbing to other people in close proximity
to the mobile phone user. As one of the Sudanese
interviewees commented:
Using a mobile phone in public places, especially on public transport where you are closely
surrounded by people, is not something that you
can do comfortably. It is viewed as improper and
unacceptable, as it disturbs others.
The use of mobile phones in public places to discuss private matters can put you in an awkward
Discussion
The results from the study were interpreted in the
light of Hofstede's cultural dimensions to try and
gain some insight into the way culture may influence the use of mobile phones in public places.
It appears from the results that the British
generally are more comfortable using mobile
phones in public places than the Sudanese participants, who are more reluctant to use mobile
phones in contexts such as public transport and
whilst walking along the street.
The collectivistic culture to which the Sudan
belongs (using Hofstede's criteria) indicates an
inclination toward a tightly-knit social framework
(Hofstede, 1980). The priority is the group's
needs, rather than the individual's wishes. Therefore, perhaps the use of mobile phones in public
places for private talks can be seen as a self-centred
act, and quite impertinent to the group's needs. The
group expects the individual to be considerate of
the established social etiquette. The mobile phone
user on public transport is expected to adhere to
the social protocol and to respect other people's
privacy.
Conclusion
The increased use of mobile phones by people
from different cultural backgrounds has become
an integral part of our world, yet to
date the impact of cultural differences on the way
people use their mobile phones, and its implications for mobile phone design, has failed to
be investigated comprehensively. As this article
illustrates, mobile phone users with cultural dif-
References
Bayes, A., Von Braun, J., & Akhter, R. (1999). Village pay phones and poverty reduction: Insights
from a Grameen bank initiative in Bangladesh.
Bonn: Center for Development Research, Universitat Bonn.
BBC. (2003). A report by the Worldwatch Institute
in Washington. Mobile phone use grows in Africa.
Retrieved October 9, 2007, from http://news.bbc.
co.uk/1/hi/world/africa/3343467.stm
Burns, T. (1992). Erving Goffman. London:
Routledge.
Churchill, E. (2001). Getting about a bit: Mobile
technologies & mobile conversations in the UK
(FXPL international Tech. Rep. No. FXPAL.
TR.01-009).
Ciborowski, T.J. (1979). Cross-cultural aspects of
cognitive functioning: Culture and knowledge.
In A.J. Marsella, R.G. Tharp, & T.J. Ciborowski
(Eds), Perspectives on cross-cultural psychology.
New York: Academic Press Inc.
Cooper, G. (2000). The mutable mobile: Social
theory in the wireless world. Paper presented
De Mooij, M., & Hofstede, G. (2002). Convergence and divergence in consumer behavior:
Implications for international retailing. Journal
of Retailing, 78, 61-69.
Donner, J. (2005). The rules of beeping: Exchanging messages using missed calls on mobile phones
in Sub-Saharan Africa. Paper presented at the
55th Annual Conference of the International
Communication Association: Questioning the
Dialogue, New York.
Murtagh, G.M. (2001). Seeing the rules: Preliminary observations of action, interaction and mobile
phone use. In B. Brown, N. Green, & R. Harper
(Eds.), Wireless world. Social and interactional
aspects of the mobile age (pp. 81-91). London:
Springer-Verlag.
Rust, J., & Golombok, S. (1989). Modern psychometrics: The science of psychological assessment.
New York: Routledge.
This work was previously published in the International Journal of Technology and Human Interaction, edited by B. C. Stahl,
Volume 4, Issue 2, pp. 35-51, copyright 2008 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI
Global).
Chapter 4.10
INTRODUCTION
In the late 1970s, women's progress and participation in the more traditional scientific and technical fields, such as physics and engineering, was
slow, prompting many feminist commentators to
conclude that these areas had developed a nearunshakeable masculine bias. Although clearly
rooted in the domains of science and technology,
the advent of the computer was initially seen to
challenge this perspective. It was a novel kind
of artefact, a machine that was the subject of
its own newly created field: computer science
(Poster, 1990, p. 147). The fact that it was not
quite subsumed within either of its parent realms
led commentators to argue that computer science
was also somewhat ambiguously positioned in
relation to their identity as masculine. As such,
it was claimed that its future trajectory as equally
masculine could not be assumed, and the field of
computing might offer fewer obstacles and more
opportunities for women than they had experienced before. Early predictions of how women's role in relation to information technology would
BACKGROUND
In the UK, throughout the 1990s and into the
new millennium, the achievements of secondary school-age girls (11-16 years) progressed
significantly in the more traditional scientific and
technical subjects, and began surpassing those
of boys. Before an age when some curriculum
choice is permitted (14 years old), girls perform
better in science. Furthermore, although fewer
of them take science once they have choice, they
continue to surpass boys' achievements in the area. Higher proportions of girls now gain an A-C grade pass in their GCSE examinations in chemistry, biology, and physics (Department of Trade & Industry (hereafter DTI), 2005; Equal Opportunities Commission (hereafter EOC), 2004). In terms of A levels, the qualifications
usually taken at the end of the first two-year period
of non-compulsory education (16-18 years), girls
also proportionately achieve more A-C passes in
these subjects (EOC, 2004).
Achievements in computing courses have
followed this trend. Over the last decade, girls
have gained around 40% of GCSE qualifications
in computer studies, and they are now far more likely to gain an A-C pass than their male counterparts (EOC, 1994-2004). Nevertheless, at A level, when students traditionally specialise in three or four subjects, the trend has been for the majority of girls to opt out of computing. In 1993, girls accounted for only 13% of students in England deciding to further their computing education to A level standard (EOC,
1994). By 2003, this picture had significantly
improved, with girls comprising 26% of those
studying computing or information technology
Figure 1.
Figure 2.
Figure 3.
and their performance would seem to be improving. In 1994/5, 9.6% of men achieved the top grade of degree, a First Class, as opposed to 7.6% of women. By 2003, however, 11.9% of men were achieving this grade against 11.6% of women. Furthermore, women have also been marginally, but consistently, more likely to obtain a good degree (a First Class or Upper Second) than men since 1994, and by 2003 just over 50% of them achieved this standard as against just under 47% of men.
The earlier communication about the advantages of IT work takes place, the better. It
is claimed that educational choices, based on
projected career costs and benefits, are already
taking place when UK teenagers face their first
opportunity to specialise, around age 14. This is
also an age when young males and females are
most sensitive to gender-appropriate and gender-inappropriate career choices. It is of concern, then, that there seem to be so few reports of positive
representations of professional IT work as a viable
career goal for girls (and their parents) by teachers
or careers advisers within secondary schools, and
that the provision of female-friendly spaces for
computing is still considered cutting edge (Craig
et al., 2002b; Millar & Jagger, 2001; Peters et
al., 2003). Improvements in the knowledge, and
sometimes the attitude, of adults playing a key
role in pre-university course choices, could make
a significant difference to female participation
rates at this level.
As well as the role played by the image of IT
work, the reported experiences of those undertaking a computer science degree must also partially
account for the level of female participation at
university. There is a wealth of evidence to suggest that female students' under-representation
on courses creates its own problems in terms
of their confidence, adjustment, enjoyment, and
achievement levels. They report themselves to
feel pressurised to adapt to a male-dominated
and male-oriented educational regime in order to
survive (see, for example: Bjorkman, Christoff,
Palm, & Vallin, 1998; Blum, 2001; Margolis &
Fisher, 2002; Peters et al., 2003). Evidence from
those institutions that have attempted to make
significant changes within their computer science
programmes so that women will feel less alienated
suggests that they have enjoyed far greater female
participation and approval rates as a result (Connor
et al., 1999; Margolis & Fisher, 2002; Millar &
Jagger, 2001). In doing so, they point to a wish list
of best practices for others interested in doing the
same. These include building a curriculum that
assumes a minimal level of previous IT experience, providing mentoring systems and female-friendly learning spaces, and ensuring that faculty
are sensitive to the needs and abilities of female
students (Blum, 2001; Cahoon, 2001; Hefce 2002;
Margolis & Fisher, 2002). One course in the UK,
where such considerations have been central for
some time, reports not only high rates of female uptake of degree places, but also that 100% of women on
the course maintained a desire to become an IT
professional after the completion of their degree
(Craig et al., 2002).
The finding that women are heavily reliant
on the information provided in prospectuses is
heartening in this regard. It confirms that there
is a clear window of opportunity for the more
progressive universities through which they
can encourage potential applicants and reduce
reliance on general anecdotal or impressionistic
information that may not apply to their courses.
If more institutions embraced the same ethos, a
critical mass of women entering computer science
undergraduate programmes could be achieved
in the next decade. This, in turn, could impact
significantly on the general culture of educational
computing and beyond into the workplace, which
could lead to improvements in the sector's image.
The beneficiaries of this would not just be the
women who would otherwise have turned away
from the subject at degree level, but also the UK economy and society, which are arguably at present drawing their computing scientists and skilled
IT professionals from an artificially restricted
pool.
References
Key Terms
Classification of Degree: UK degrees are
ascribed classifications that usually correspond
to the following percentage results (averaged over
all assessments contributing to the final grade):
First: 70%+
Upper Second (2:1): 60-70%
Lower Second (2:2): 50-60%
Third: 40-50%
Pass: 30-40% (a bachelor degree without honours)
EndNotes
1
This work was previously published in the Encyclopedia of Gender and Information Technology, edited by E. Trauth, pp. 365-371, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 4.11
INTRODUCTION
Information technology (IT) is transforming our
personal, social, cultural, economic and political
lives. But women in developing countries do not have equal access to knowledge, because they do not have access to the new technologies at the same level as Western European women.
They need to understand the significance of new
technologies and use them in relevant fields. IT
can offer immense opportunities for virtually all
girls and women in developing countries, including poor women living in rural areas.
Developing countries like Bangladesh are usually seen as problematic hosts for IT because most developing regions of the country lack the economic resources and indigenous techno-scientific capabilities needed to develop and deploy IT infrastructures.
BACKGROUND
In early 2000, an initiative was taken to develop
a small river island (char) of Bangladesh through
establishment of a resource centre named Indigenous Science and Technology at Ikrail (ISTI), and
using IT as the main vehicle for development. IT provides a livelihood for many women in developed countries but is almost unknown to the women and girls of the river island.
Women in Bangladesh are on the front lines of the fight against hunger, poverty and environmental degradation. Because they lack awareness of the benefits of IT, they cannot bring their problems to the proper authorities for solution. IT can benefit
women in various ways and can facilitate their
participation in different sectors and regions. It
can provide information women need to improve
their own well-being and that of their families.
The introduction of computers in offices can improve the quality of work and scope for women
in data entry, analysis, programming, clerical
and administrative occupations in Bangladesh.
IT could allow them to exchange views, opinions
and information with women of other regions
and countries.
The situation for rural populations in many
regions of the world is characterized by geographical distance from urban centres that offer employment and income, education, cultural events and
public services. IT bears the potential to improve
the situation of rural people in several ways. For
example, it is agreed that early diagnosis of medical issues can prevent many casualties. Once a
patient comes to the health centre, using various
sensors, information can be collected and communicated to an expert at hospitals in the district
headquarters. The primary healthcare centre could
be located in a rural area or in a mobile centre
installed in a van.
IT offers access to information and extension, to education and training, and to communication and networking in online discussions. IT also offers access to employment and income.
Infrastructural Problems
Poor healthcare can result from a lack of good information. Establishing reliable communications
may be one of the most important priorities for
improving healthcare and education. Many rural
areas around the globe have no telephone lines, or only outdated ones, capable of transmitting Internet-based data. The
lack of infrastructure can be overcome by wireless
technology (e.g., radio modems). Mobile telecentres can be a solution for the target group.
HEALTHCARE IT PROJECT
A significant number of women scientists, researchers and technologists work in rural Bangladesh. They are disadvantaged and traditionally
underrepresented in most respects. Their knowledge and skills are unrecognized, underused and undervalued. As such, they are in greater need of upgrading their skills, especially in the fast-advancing world of information and communication technologies (ICTs), which might enable them to connect with their global colleagues, sources
of information and global knowledge wherever
they may be located.
A survey conducted in early 2000 among 515
women professionals of various disciplines spanning life sciences, social sciences, physical sciences, mineral sciences, engineering, technologies
and medical sciences found that they are almost illiterate in IT, although they are well qualified and experienced in their respective fields. In the first step, the survey covered the senior professors of the public and private universities,
scientists of the Bangladesh Council of Scientific
and Industrial Research (BCSIR), researchers of
institutes of forest research, housing research,
fuel research, jute research and water resource
and geological research.
FUTURE TRENDS
Bangladesh is a country of rivers that create islands (chars) of different shapes and sizes. They
are created because of river erosion. The ISTI
resource centre is located in one such island.
The island is circular in shape and its area varies
from 20 to 30 square kilometers, depending on
seasons. Approximately 15,000 people live in the
island community. The people depend on local
cultivation for survival. Their average monthly
earning is $40 (United States dollars). The area
is fertile and rice, wheat, peanut, jute, sugarcane,
and different types of vegetables and fruits are
common. All work is performed manually. The
people are deprived of basic infrastructure. There
is no electricity, phone system or sanitation. The
only source of water is contaminated with arsenic.
There are no health clinics or doctors in the area.
Many adults are illiterate, and curable diseases,
such as cholera, malaria and tuberculosis, are
pervasive.
The ISTI resource centre has been established
with the objective of developing this isolated
region with proper education, empowering local
people with relevant technology, particularly
IT, and providing them with medical facilities
using modern techniques (such as telemedicine
and telehealth care). The ISTI resource centre,
equipped with basic medical test equipment like
stethoscopes, blood pressure meters and blood
testing chemicals, and connected with a hospital
located in the district town, might save lives of
many women and girls. Just a computer with a modem connected over a wireless local loop (WLL) can help women medical doctors empowered with IT solve most of the serious problems of local women. The ISTI resource centre is working as a role model for several million disadvantaged women
living in the river islands of Bangladesh.
CONCLUSION
This training program has been particularly
suitable and effective for relatively marginalized
groups and remote communities in a developing
country because of its low cost and the use of
local organizers and trainers, courses especially designed to meet the participants' needs, and in situ
follow up. The direct involvement in the courses of
senior policy makers, employers and local experts
has proven to be absolutely crucial in gaining
support for the continuation and expansion of the
course and, perhaps more importantly, effecting
attitude change toward women's roles and capabilities
in science and medicine. This was underlined by
the increasing cooperation extended by them to the
project team during preparation and execution of
the project. The outcome of the whole training has
been impressive. Participants' value to the workplace has increased; some attained promotions and some changed careers.
The final evaluation report shows that most
women medical doctors and other health-related
professionals empowered with IT have expressed
the opinion that every medical doctor must attend courses in relevant technology, particularly IT, if they want to enhance their services and maintain their personal secrecies. Some mentioned that IT is a magical and all-pervasive tool.
References
Babul, R. (2005). Empowering women through ICT.
Desai, U. B. (2000). COMMSPHERE, communication sector for health care.
Johnson, C. J., & Rahman, L. (2001). In Proceedings of the Community Technology Conference at Murdoch University, Australia (Vol. 1).
Rahman, L. (1999). Paths to prosperity: Commonwealth science and technology for development.
Rahman, L. (2004). Leading role in ICT-enabled human development in Bangladesh. In Proceedings of the AISECT 2004 International Conference on ICT in Education and Development.
Tomasevic, R. (2000). Putting the Web to work for women: The nature of gender.
KEY TERMS
Information and Communication Technology (ICT): ICT covers computing, electronics
and telecommunications.
Medical Professionals: Medical doctors and
scientists in health-related subjects, such as nutrition, bio-chemistry, psychology and so forth.
Researchers: Graduates and postgraduates engaged in research activities.
Scientists: Science graduates with some research experience in scientific fields.
Technologists: Engineers and scientists with
experience in hardware.
University: A university is characterized by a
wide variety of teaching and research, especially at
a higher level, that maintains, advances, disseminates and assists the application of knowledge,
develops intellectual independence and promotes
community learning.
This work was previously published in the Encyclopedia of Gender and Information Technology, edited by E. Trauth, pp. 423-425, copyright 2006 by Information Science Reference, formerly known as Idea Group Reference (an imprint of IGI Global).
Chapter 4.12
Chinese POS Disambiguation and Unknown Word Guessing with Lexicalized HMMs
ABSTRACT
This article presents a lexicalized HMM-based
approach to Chinese part-of-speech (POS) disambiguation and unknown word guessing (UWG).
In order to explore word-internal morphological
features for Chinese POS tagging, four types of
pattern tags are defined to indicate the way lexicon
words are used in a segmented sentence. Such patterns are combined further with POS tags. Thus,
Chinese POS disambiguation and UWG can be
unified as a single task of assigning each word in the input a proper hybrid tag. Furthermore, a
uniformly lexicalized HMM-based tagger also is
developed to perform this task, which can incorporate both internal word-formation patterns and
surrounding contextual information for Chinese
POS tagging under the framework of HMMs.
Experiments on the Peking University Corpus
INTRODUCTION
While a number of successful part-of-speech
(POS) tagging systems have been reported for
English and many other languages over the past
years, it is still a challenge to develop a practical
Chinese POS tagger due to the language-specific
issues in Chinese POS tagging. First, there is not
a strict one-to-one correspondence for a Chinese
word between its POS and its function in a sentence. Second, an ambiguous Chinese word can
act as different POS categories in different contexts without changing its form. Third, there are
many unknown words in real Chinese text whose
POS categories are not defined in the dictionary
its POS, in addition to the contextual information surrounding it. For

LEXICALIZED HMMs

component lexicon words is another important evidence that
T = \arg\max_{T} \prod_{i=1}^{n} P(w_i \mid t_i) \, P(t_i \mid t_{i-N,i-1})    (3)

T = \arg\max_{T} \prod_{i=1}^{n} P(w_i \mid w_{i-N,i-1}, t_{i-N,i}) \, P(t_i \mid w_{i-N,i-1}, t_{i-N,i-1})    (4)
In comparison with standard HMMs, uniformly lexicalized HMMs can handle both contextual
words and contextual tags for the assignment of
hybrid tags to known words, which will result in
an improvement of tagging precision. In view of
the serious data sparseness in higher-order models,
we employ the first-order lexicalized HMMs in
our system.
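To make the contrast concrete, the following sketch (a hypothetical Python illustration, not the authors' implementation; all function and table names are assumptions) scores a tag sequence first under the standard first-order model of Equation (3) and then under the first-order lexicalized model of Equation (4), in which both the emission and transition factors additionally condition on the previous word.

def hmm_score(words, tags, p_word_given_tag, p_tag_given_prev):
    """Standard first-order HMM, Equation (3): product of P(w_i | t_i) * P(t_i | t_{i-1})."""
    score, prev_tag = 1.0, "<s>"
    for w, t in zip(words, tags):
        score *= p_word_given_tag.get((w, t), 1e-9) * p_tag_given_prev.get((t, prev_tag), 1e-9)
        prev_tag = t
    return score

def lexicalized_hmm_score(words, tags, p_emit, p_trans):
    """First-order lexicalized HMM, Equation (4): both factors also see the previous word."""
    score, prev_w, prev_t = 1.0, "<s>", "<s>"
    for w, t in zip(words, tags):
        score *= (p_emit.get((w, prev_w, prev_t, t), 1e-9)
                  * p_trans.get((t, prev_w, prev_t), 1e-9))
        prev_w, prev_t = w, t
    return score

Because the lexicalized tables are indexed by word-and-tag contexts rather than by tags alone, they are far sparser, which is the data-sparseness concern that motivates restricting the model to first order.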
THE ALGORITHM
This section describes an algorithm for Chinese
POS tagging. The problem of inconsistent tagging
also is discussed in this section.
Viterbi Tagging
Given a sequence of words, there might be more
than one possible sequence of POS tags. The task
of a tagging algorithm is to find the best one that
has the highest score according to the models in
Equation (3) or (4). In our system, we apply the
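As a rough illustration only (the function and variable names below are assumptions, and score_step stands in for whichever of Equations (3) or (4) supplies the per-position probabilities), a Viterbi search over a first-order model keeps, for every tag, the single best-scoring path ending in that tag at each position:

import math

def viterbi(words, tagset, score_step):
    # score_step(prev_tag, tag, word) -> probability of `tag` emitting `word` after `prev_tag`.
    best = {"<s>": (0.0, [])}                      # tag -> (log-probability, best path so far)
    for w in words:
        nxt = {}
        for t in tagset:
            lp, path = max(
                (prev_lp + math.log(score_step(prev, t, w) or 1e-12), prev_path)
                for prev, (prev_lp, prev_path) in best.items()
            )
            nxt[t] = (lp, path + [t])
        best = nxt
    return max(best.values())[1]                   # highest-scoring tag sequence

The cost of the search therefore grows linearly with sentence length rather than with the number of possible tag sequences.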
Inconsistent Tagging
Inconsistent tagging will arise when the components of an unknown word are labelled with
different POS tags. For example, the personal
name in the segmented sentence (Zhang Xiaohua is working at the University of Hong Kong) might be inconsistently tagged as /Vg-MOW /nr-EOW. In this case, the system will be unable to make its decision in determining the POS category of the full word. Consequently, how to avoid inconsistent tagging is a fundamental issue that relates to the
practicability of our system.
To achieve a complete consistent tagging, we
develop a rule-based module in our system, which
can prevent inconsistent tags from entering into
the lattice for storing candidates during the generation of potential POS tags for unknown words.
Given an unknown word and its components, our
system first creates a set of POS candidates for
each of its components in terms of its relevant
pattern and its lexical probability. Then, it will
continue to generate the POS candidate set for
the unknown word using one of the following
four rules: (1) if the intersection of all the POS
candidate sets for its components is not null, then
the intersection is taken as its POS candidate set;
or (2) if the intersection of the two candidate sets
for its beginning and ending components is not
null, then the intersection is its POS candidate set;
or (3) if the union of the two sets of POS candidates for its beginning and ending components
is not null, then the union set is taken as its POS
candidate set; and (4) if the three previous rules
do not work, then the most frequent POS tag (i.e., noun) is taken as its POS candidate. It should
be noted that once the POS candidate set for the
unknown word is determined, all its components
should share this set as their POS candidates.
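The four rules can be read directly as set operations. The helper below is a hedged sketch (only the rules and the noun fallback come from the description above; the function name and calling convention are assumptions), operating on the per-component candidate sets already generated from the relevant patterns and lexical probabilities:

def unknown_word_candidates(component_sets, fallback_tag="n"):
    # component_sets: one set of POS candidates per component of the unknown word.
    intersection_all = set.intersection(*component_sets)
    if intersection_all:                                   # Rule 1: all components agree
        return intersection_all
    intersection_ends = component_sets[0] & component_sets[-1]
    if intersection_ends:                                  # Rule 2: first and last components agree
        return intersection_ends
    union_ends = component_sets[0] | component_sets[-1]
    if union_ends:                                         # Rule 3: union of first and last
        return union_ends
    return {fallback_tag}                                  # Rule 4: most frequent POS tag (noun)

Whichever set is returned is then shared by every component, which is what keeps combinations such as Vg-MOW followed by nr-EOW from entering the candidate lattice for the same word.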
                     Training corpus        Testing corpus        Total
All words            6,164,061              1,120,374             7,284,435
Known words          5,731,303              1,042,285             6,773,588
Amb. known words     3,157,763 (55.10%)     569,859 (54.67%)      3,727,622 (55.03%)
Unknown words        432,758 (7.02%)        78,089 (6.97%)        510,847 (7.01%)
                 Training corpus                     Testing corpus
Rank     POS     Frequency     %             POS     Frequency     %
1        n       91,972        21.15         n       16,071        20.58
2        nr      79,149        18.20         nr      15,991        20.48
3        m       78,395        18.02         m       13,983        17.91
4        t       56,809        13.06         t       10,614        13.59
5        ns      31,907        7.34          ns      5,306         6.79
6        v       19,460        4.48          v       3,365         4.31
7        j       15,115        3.48          j       2,496         3.20
8        nz      14,399        3.31          nz      2,390         3.06
9        l       11,955        2.75          l       2,388         3.06
10       vn      6,770         1.55          r       959           1.23
Total    -       405,931       93.80         -       73,563        94.20
Table 3 shows the results of our first experiment. We can see that the lexicalized HMMs
perform better than the standard HMMs as a
whole. In comparison with the standard HMM-based tagger, the lexicalized HMM-based tagger
can improve the tagging precision respectively by
2.06% for all words, 1.88% for all known words,
3.37% for ambiguous known words, and 4.42%
for unknown words. At the same time, the use
of lexicalized models does not lead to a rapid
increase of tagging time. This indicates that the
lexicalized HMM-based method is able to keep
a relative balance between tagging precision and
tagging efficiency.
Figures 1, 2, and 3 present the curves of the
tagging precision for all words, ambiguous known
words, and unknown words, respectively, vs.
the size of the data for training. We can see that
in comparison with the standard HMM-based
tagger, the tagging precision of the lexicalized
HMM-based tagger is changing in a sharper
ascent curve for all three cases as the size of the
training corpus increases, which indicates that
the uniformly lexicalized HMMs need much
more training data than the corresponding nonlexicalized models in order to achieve a reliable
estimation. In other words, the performance of a
lexicalized HMM-based tagging system is more
sensitive to the size of the training data than that
of a standard HMM-based tagging system. Furthermore, it can be observed from Figure 2 and
Figure 3 that a larger improvement of tagging
precision can be achieved for unknown word
Table 3.
Method               Overall precision (%)    UW precision (%)    Tagging speed (w/s)
Standard HMMs        94.08                    88.24               3,295
Lexicalised HMMs     96.14                    92.66               2,661
Figure 1. Precision for all words vs. the size of training data
Figure 2. Precision for Amb. KWs vs. the size of training data
Figure 3. Precision for unknown words vs. the size of data for training
CONCLUSION
We presented a unified resolution to Chinese POS
disambiguation and unknown word guessing. In
order to explore word-internal cues for Chinese
POS tagging, we introduced four types of wordformation patterns that indicate whether a lexicon
ACKNOWLEDGMENTS
We would like to thank the Institute of Computational Linguistics, Peking University, for their corpus and lexicon. We also would like to thank
the two anonymous reviewers for their helpful
and valuable comments.
REFERENCES
Brants, T. (2000). TnT: A statistical part-of-speech tagger. In Proceedings of the 6th Applied NLP Conference (ANLP-2000) (pp. 224-231).
Brill, E. (1995). Transformation-based error-driven learning and natural language processing: A case study in part-of-speech tagging. Computational Linguistics, 21(4), 543-565.
Fu, G., & Luke, K.-K. (2003). Integrated approaches to prosodic word prediction for Chinese TTS. In Proceedings of the 2003 IEEE International Conference on Natural Language Processing and Knowledge Engineering (NLP-KE'03) (pp. 413-418).
Fu, G., & Luke, K.-K. (2005). Chinese unknown word identification using class-based LM. In Lecture Notes in Computer Science: Vol. 3248 (IJCNLP 2004) (pp. 262-269).
Fu, G., & Wang, X. (2002). A hybrid approach to Chinese POS tagging: Integrating HMM with log-linear model. Journal of Chinese Language and Computing, 12(2), 109-126.
Lee, S.-Z., Tsujii, J., & Rim, H.-C. (2000). Lexicalised hidden Markov models for part-of-speech tagging. In Proceedings of the 18th Conference
Endnotes
Unlike other languages like English, Chinese text has no explicit delimiters to mark
word boundaries except for some punctuation marks. Therefore, word segmentation
is usually the first step in performing POS
tagging on a text in Chinese. An introduction
of Chinese word segmentation can be seen
APPENDIX A.
The PKU Part-of-Speech Tagset
Tag    Definition (English)
a      adjective
ad     adverb-adjective
an     adnoun
b      distinguish word
c      conjunction
d      adverb
e      exclamation
f      position word
g      morpheme
h      heading element
i      idiom
j      abbreviation
k      tail element
l      habitual word
m      numeral
n      noun
nr     person's name
ns     toponym
nt     organization noun
nz     other proper noun
o      onomatopoeia
p      preposition
q      quantifier
r      pronoun
s      location word
t      time
u      auxiliary
v      verb
vd     adverb-verb
vn     gerund
w      punctuation
x      foreign character
y      modal word
z      state word
Ag     adjective morpheme
Bg     distinguish morpheme
Dg     adverb morpheme
Mg     numeral morpheme
Ng     noun morpheme
Qg     quantifier morpheme
Rg     pronoun morpheme
Tg     time morpheme
Ug     auxiliary morpheme
Vg     verb morpheme
Yg     modal morpheme
This work was previously published in the International Journal of Technology and Human Interaction, edited by B. C. Stahl,
Volume 2, Issue 1, pp. 39-50, copyright 2006 by IGI Publishing, formerly known as Idea Group Publishing (an imprint of IGI
Global).
Chapter 4.13
IT Implementation in a
Developing Country
Municipality:
A Sociocognitive Analysis
Clive Sanford
Aalborg University, Denmark
Anol Bhattacherjee
University of South Florida, USA
ABSTRACT
This article presents an interpretive analysis of
the key problems and challenges to technology
implementation in developing countries, based
on a three-year case analysis of an IT project in
a city government in Ukraine. We employ the
concept of technological frames of reference as
an analytical tool for articulating the group-level
structures related to the implementation context
from the perspectives of key stakeholders and
examine the degree of conflict between these
frames using a Fishbone diagram. We report
that conflict between technological frames held
by key stakeholders in large-scale system implementation projects often creates an unexpected,
dysfunctional, and politically charged implementation environment, ultimately leading to project
Introduction
Information technology (IT) has long been viewed
by central planners in the developing world as an
important tool for achieving rapid economic and
wage growth, improving operational efficiency
and effectiveness, and enhancing political participation and transparency. However, achievement of these objectives is often thwarted due
The Diffusion of IT to
Developing Countries
Information technology and associated organizational models, which form the basis of information systems and implementation methodologies,
have originated and continue to be developed
principally in western, developed countries. The
unidirectional transfer of IT knowledge and skills
from developed to developing countries is a process that characterizes most recent efforts at public
sector reform (Minogue, 2001). However, in order
to take advantage of technological knowledge of
developed countries, developing countries must
have acquired sufficient technological capabilities
and institutional capacities to identify suitable
technologies and to adapt, absorb, and improve
the technologies imported from abroad (Ahrens,
2002). Often, these technologies are not appropriate for use in countries that are in transition and have their own realities, which bear on the design of technologies that are used to automate and inform their internal
and citizen-focused processes. Fountain (2001)
addressed this issue in her differentiation between
the deployment of already invented technologies
that are available to practitioners and the need
to recognize the importance of customizing the
design to a specific locale.
A rationale that is often used for the introduction of IT is to automate, increase efficiencies, and
cut costs. The cost of labor in developed countries
is substantially higher than that in developing
countries (Dewan & Kraemer, 2000), while the
reverse is true with the cost of the IT (Heeks,
2002b). Therefore, replacing inexpensive civil
servants with more expensive IT would not be jus-
A Sociocognitive Perspective of
Implementation
The sociocognitive perspective is based on the
idea that reality is socially constructed through
human beings' interpretation of social reality
and negotiation of meaning among social entities
(Berger & Luckmann, 1967). Like the cognitive
perspective, the sociocognitive perspective emphasizes the role of one's personal knowledge
structures in shaping her cognitive interpretation
of the world and consequent actions. However, it
differs from the cognitive perspective in the way
it views the formation of individual knowledge
structures. The cognitive perspective postulates
information processing and learning as the key
processes shaping individual knowledge structures, while sociocognitive research holds that
shared knowledge and beliefs (group-level structures), imbibed through the processes of interaction, socialization, and negotiation within a larger
social group, influence individual sense-making
and interpretation. These shared knowledge
structures, also called frames (Goffman, 1974)
or interpretive schemes (Giddens, 1984), serve
as templates or filters for individual sense-making
and problem solving, filling information gaps with
information that conforms with existing knowledge structures, and systematically rejecting
inconsistent information (Fiske & Taylor, 1984).
Frames also provide a vehicle for organizing and
interpreting complex and sometimes ambiguous
social phenomena, reducing uncertainty under
conditions of complexity and change, and justifying social actions.