Chapter 7 - Software Engineering: A Practitioner's Approach, 7th ed.
Q 7.1) What is the fundamental difference between the structured analysis and
object-oriented strategies for requirements analysis?
A) The fundamental difference between structured analysis and object-oriented
(OO) analysis strategies lies in how each approach models the system and what it
considers the primary focus of the analysis.
1. Focus of the Analysis
Structured Analysis: This approach is process-oriented or function-centered. It
focuses on understanding and defining the processes (functions) that the system
must perform. The central question is, "What does the system do?" Each
function is analyzed independently, and the goal is to describe how data flows
between processes and what transformations it undergoes.
Object-Oriented Analysis: This approach is object-oriented or entity-centered. It
focuses on defining the objects (entities) within the system, each representing a
real-world concept or entity, along with their behaviors and interactions. The
central question is, "What are the entities that interact within the system, and
what roles do they play?" The system is organized around these objects, which
encapsulate both data (attributes) and behavior (methods).
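As a rough sketch of this difference (all names are invented for illustration), the same order-validation requirement might be modeled function-first under structured analysis and object-first under OO analysis:

# Hypothetical illustration: the same requirement modeled two ways.

# Structured (function-centered): a process that transforms input data into output data.
def validate_order(order_data: dict) -> bool:
    """Process 'Validate Order': checks that required fields are present."""
    return bool(order_data.get("items")) and order_data.get("total", 0) > 0

# Object-oriented (entity-centered): an Order object encapsulating data and behavior.
class Order:
    def __init__(self, items: list, total: float):
        self.items = items       # attribute (data)
        self.total = total       # attribute (data)

    def validate(self) -> bool:  # method (behavior) attached to the entity
        return bool(self.items) and self.total > 0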
Q 7.2) What does an arrow represent in a data flow diagram: a flow of data or a flow of control?
Example
For instance, in a DFD for an online ordering system:
An arrow labeled “Order Details” might connect a process like “Validate Order”
to another process, “Process Payment,” showing that order information flows
from validation to payment processing.
A separate arrow labeled “Payment Confirmation” would then indicate data
flowing from the “Process Payment” to “Update Order Status” to confirm the
payment.
Thus, in a DFD, arrows reflect data movement and data dependencies
without specifying control flow or execution order.
SHORT ANSWER:
In a data flow diagram (DFD), an arrow represents a flow of data, not a flow
of control. It indicates the movement of information between processes, data stores,
and external entities within the system, showing how data is input, transformed, and
output in different parts of the system. Control flows, which dictate the order of
operations, are not depicted in a DFD.
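One hedged way to see this in code (the process and flow names are hypothetical) is to record the DFD as a set of labeled data flows; nothing in the structure states which process executes first:

# Hypothetical DFD flows for the online ordering example: each entry is
# (source, data label, destination). Arrows carry data, not execution order.
data_flows = [
    ("Validate Order",  "Order Details",        "Process Payment"),
    ("Process Payment", "Payment Confirmation", "Update Order Status"),
]

for source, data, destination in data_flows:
    print(f"'{data}' flows from '{source}' to '{destination}'")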
Q 7.3) What is “information flow continuity” and how is it applied as a data flow
diagram is refined?
A) Information flow continuity in the context of data flow diagrams (DFDs) refers
to the principle that as a system is decomposed into more detailed levels, the data
entering and leaving each process should remain consistent. This means that inputs
and outputs at a high level (context level or level 0) should map accurately to the
corresponding inputs and outputs in lower-level, more detailed DFDs. This continuity
ensures that the system’s functionality and data exchanges are maintained as it is
refined and broken down into smaller components.
SHORT ANSWER:
Information flow continuity ensures that data flow remains consistent across
different levels of a data flow diagram (DFD) as it is refined. When refining a DFD,
higher-level data flows are broken down into more detailed flows in lower-level
diagrams, but the information content should stay the same. This continuity
maintains traceability, ensuring that each input and output in the top-level diagram
is accurately represented and expanded upon in subsequent detailed diagrams,
preserving the logical consistency of the system.
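As a minimal sketch of how this balancing rule could be checked (the parent process and its child-level flows are assumed for illustration), the data entering and leaving a parent process should match the data entering and leaving its child diagram:

# Minimal sketch of an information-flow-continuity ("balancing") check.
parent = {"inputs": {"Order"}, "outputs": {"Invoice"}}

# Child-level DFD refining the parent process into sub-processes.
child_inputs = {"Order"}            # data entering the child diagram from outside
child_outputs = {"Invoice"}         # data leaving the child diagram
internal_flows = {"Priced Order"}   # flows between sub-processes (not part of the check)

balanced = (parent["inputs"] == child_inputs and
            parent["outputs"] == child_outputs)
print("Information flow continuity holds:", balanced)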
Q 7.4) How is a grammatical parse used in the creation of a data flow diagram?
Example
Consider the following requirement description:
“A customer places an order. The system checks inventory and calculates the total. If
items are available, the system processes the payment and updates the inventory.”
Processes:
“Places an order” suggests a process, likely labeled as “Place Order.”
“Checks inventory” and “calculates the total” indicate other processes:
“Check Inventory” and “Calculate Total.”
“Processes the payment” and “updates the inventory” suggest “Process
Payment” and “Update Inventory.”
Data Flows:
“Order” flows from “Customer” to “Place Order.”
“Inventory status” flows between “Check Inventory” and “Calculate Total.”
“Payment” flows into “Process Payment.”
Data Stores:
“Inventory” serves as a data store, where inventory information is stored
and updated.
External Entities:
“Customer” is an external entity interacting with the system to place orders.
SHORT ANSWER:
A grammatical parse is used in creating a DFD by analyzing the language in
requirements documents to identify nouns and verbs, which help define elements in
the DFD. Nouns typically represent data stores or external entities, while verbs
represent processes or data transformations. By systematically parsing and
categorizing these elements, the analyst can construct a DFD that accurately reflects
the relationships and data flows described in the system's requirements.
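A toy sketch of the idea follows (the noun and verb lists are picked by hand here rather than by a real parser): nouns become candidate external entities or data stores, and verb phrases become candidate processes.

# Toy grammatical parse of the requirement text above. In practice the analyst
# (or an NLP tool) identifies nouns and verb phrases; here they are hard-coded.
requirement = ("A customer places an order. The system checks inventory and "
               "calculates the total. If items are available, the system "
               "processes the payment and updates the inventory.")

nouns = ["customer", "order", "inventory", "total", "payment"]
verb_phrases = ["places an order", "checks inventory", "calculates the total",
                "processes the payment", "updates the inventory"]

candidates = {
    "external entities / data stores": nouns,        # e.g., Customer, Inventory
    "processes": [v.title() for v in verb_phrases],  # e.g., Check Inventory
}
for category, items in candidates.items():
    print(category, "->", items)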
Q 7.5) What is a control specification (CSPEC)?
A) The control specification (CSPEC) serves as the blueprint for control logic in a
system, defining how processes respond to events and signals. It complements the
data-focused aspects of structured analysis by addressing when and how processes
are activated, providing a complete picture of both data and control flow.
SHORT ANSWER:
A control specification (or CSPEC) is a detailed description of the events,
conditions, and triggers that manage the flow of control within a system. It defines
how the system responds to various inputs and conditions, specifying control
behaviors such as decision-making, timing, and sequencing. The CSPEC is often
represented using control flow diagrams or state diagrams and complements data
flow diagrams (DFDs) by adding the control logic needed for dynamic processes
within the system.
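A hedged sketch of the control logic a CSPEC might capture (the states, events, and activated processes are invented for illustration), expressed as a simple state-transition table:

# Hypothetical CSPEC fragment as a state-transition table:
# (current state, event) -> (next state, process to activate)
transitions = {
    ("Idle",       "order_received"): ("Validating", "Validate Order"),
    ("Validating", "order_valid"):    ("Charging",   "Process Payment"),
    ("Validating", "order_invalid"):  ("Idle",       None),
    ("Charging",   "payment_ok"):     ("Idle",       "Update Order Status"),
    ("Charging",   "payment_failed"): ("Idle",       None),
}

def handle(state: str, event: str) -> str:
    """Return the next state, announcing which process the event activates."""
    next_state, process = transitions[(state, event)]
    if process:
        print(f"Event '{event}': activate '{process}', move to '{next_state}'")
    return next_state

state = handle("Idle", "order_received")   # activates "Validate Order"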
Q 7.6) Are a PSPEC and a use case the same thing? If not, explain the differences.
A) No, a Process Specification (PSPEC) and a use case are not the same thing.
Although both are tools used in requirements and analysis to describe system
behavior, they have different focuses, purposes, and levels of detail. Here are the
primary differences between them:
1. Purpose and Focus
PSPEC: The Process Specification (PSPEC) is a detailed description of a single
process in a data flow diagram (DFD). It explains exactly how that process
transforms input data into output data. The PSPEC is typically used in structured
analysis to define the internal workings of a process by providing detailed logic,
often in the form of pseudo-code, decision tables, or structured English.
Use Case: A use case is a high-level functional description of how a user or
external system (an actor) interacts with the system to achieve a specific goal.
Use cases focus on the user’s perspective and describe interactions between the
system and external entities, emphasizing what the system should do rather
than how it does it. They capture system functionality in terms of user actions
and responses.
2. Level of Detail
PSPEC: Provides technical details about the logic, conditions, and steps involved
in a process. It describes how the process works internally and may include
specific calculations, data transformations, and rules that guide the process flow.
The PSPEC can be quite detailed, outlining specific inputs, algorithms, or control
logic.
Use Case: Is generally higher-level and scenario-based, focusing on the steps and
interactions from the user’s perspective without delving into the technical
implementation details. Use cases are typically written in natural language to be
easily understood by stakeholders, providing a narrative of the interaction
between actors and the system.
3. Audience
PSPEC: Primarily intended for developers and technical team members who
need a detailed understanding of a particular process’s functionality. It helps
guide implementation by providing a comprehensive breakdown of what each
DFD process does.
Use Case: Targeted at a broader audience, including both technical and non-
technical stakeholders. Use cases are often used to communicate requirements
with users, business analysts, and clients because they focus on user interactions
and expected outcomes rather than internal process details.
Example Comparison
Consider an online ordering system:
Use Case: A use case for placing an order would describe the interaction
between the user and the system, detailing steps like selecting items, entering
payment information, and confirming the order. It would outline the expected
outcomes if the payment is successful or if the payment fails but wouldn’t
specify the internal processes involved.
PSPEC: The PSPEC for a process like “Process Payment” in a DFD for the online
ordering system would specify the exact logic for handling the payment,
including how credit card information is verified, how the transaction is
processed, and what steps to take if the payment fails. It would detail data
transformations, validation steps, and error-handling logic.
In essence:
PSPEC: Focuses on how a specific process works internally (technical, logic-
driven).
Use Case: Describes how users interact with the system to achieve a goal (user-
focused, scenario-driven).
Each serves a unique purpose in requirements analysis, with the PSPEC
providing detailed technical specifications for development and the use case offering
a broader, user-focused view of system functionality.
SHORT ANSWER:
No, a PSPEC (Process Specification) and a use case are not the same thing.
A PSPEC provides a detailed, technical description of a single process within a
data flow diagram (DFD). It defines the logic, algorithms, or rules that govern a
specific process, often using structured language, pseudo-code, or decision
tables.
A use case, on the other hand, is a high-level description of an interaction
between an actor (user or external system) and the system to achieve a specific
goal. It focuses on user goals, outlining the steps for completing a task rather
than detailing internal process logic.
In short, PSPECs focus on internal process details, while use cases focus on
user interactions and goals.
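To make the contrast concrete, a PSPEC is often little more than structured pseudo-code for a single process. Here is a minimal sketch for the hypothetical "Process Payment" process used above (the validation rules and the charge_card helper are illustrative assumptions, not the book's specification):

# Minimal PSPEC-style sketch for a hypothetical "Process Payment" process.
def charge_card(card_number: str, amount: float) -> bool:
    """Stand-in for a payment gateway call; always succeeds in this sketch."""
    return True

def process_payment(card_number: str, amount: float) -> str:
    # 1. Validate inputs (the data-validation rules the PSPEC would spell out).
    if len(card_number) != 16 or not card_number.isdigit():
        return "REJECTED: invalid card number"
    if amount <= 0:
        return "REJECTED: invalid amount"
    # 2. Attempt the transaction.
    if charge_card(card_number, amount):
        return "APPROVED"
    # 3. Error-handling path.
    return "DECLINED: payment failed"

print(process_payment("4111111111111111", 59.90))  # -> APPROVED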
Q 7.7) There are two different types of “states” that behavioral models can
represent. What are they?
A) In behavioral models, the two main types of states that can be represented are:
i. Passive States (or Data States):
These states represent the condition or state of data or objects in the
system at a given point in time. They do not trigger any specific behavior but
rather reflect a state that the system or an object can be in. Passive states
indicate that an object has reached a certain state based on its data values
or properties without requiring any action.
Example: In a banking system, an account object might have passive states
like “Active,” “Dormant,” or “Closed,” each representing a condition based
on the account's attributes.
ii. Active States (or Behavioral States):
These states indicate the current status of an object as it undergoes a
continuing transformation or processing. Rather than reflecting stored data
values, an active state describes what the object is doing at a given
moment.
Example: The same account object might have active states such as
“processing transaction” or “awaiting verification” while an operation is in
progress.
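A small Python sketch of the distinction (the Account class and its attributes are hypothetical):

# Hypothetical Account class illustrating the two kinds of state.
class Account:
    def __init__(self, balance: float):
        self.balance = balance    # passive state: current attribute values
        self.status = "Active"    # passive state: e.g. Active / Dormant / Closed
        self.activity = "idle"    # active state: what the object is doing right now

    def process_transaction(self, amount: float):
        self.activity = "processing transaction"   # active state while work is ongoing
        self.balance += amount
        self.activity = "idle"

acct = Account(100.0)
acct.process_transaction(-25.0)
print(acct.balance, acct.status)   # passive state after the behavior completes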
Q 7.8) How does a sequence diagram differ from a state diagram? How are they
similar?
A) A sequence diagram shows the interaction between multiple objects over time,
focusing on the order of messages exchanged to complete a process. A state diagram
shows the life-cycle of a single object, focusing on its state changes in response to
events.
Key Differences:
Sequence Diagram: Multiple objects, time-ordered interactions.
State Diagram: Single object, state changes triggered by events.
Key Similarities:
Both are behavioral models representing event-driven behavior.
Both are used to visualize dynamic aspects of a system.
Both help in understanding how a system responds to events over time.
Q 7.9) Suggest three requirements patterns for a modern mobile phone and write a
brief description of each. Could these patterns be used for other devices? Provide
an example.
A) Here are three requirements patterns for a modern mobile phone, along with
their descriptions and potential applicability to other devices:
Each pattern provides reusable solutions that enhance user experience and
device functionality across a range of technologies.
Q 7.10) Select one of the patterns you developed in Problem 7.9 and develop a
reasonably complete pattern description similar in content and style to the one
presented in Section 7.4.2.
A) Pattern Name: User Authentication Pattern
Intent:
Ensure secure and efficient user-specific access to the mobile phone and its
sensitive contents by providing multiple authentication methods.
Motivation:
In a modern mobile environment, sensitive data like personal photos,
messages, and financial information require protection. Authentication mechanisms
like PINs, passwords, or biometrics (fingerprints, facial recognition) are essential to
prevent unauthorized access while allowing convenient user access. This pattern
balances security and user convenience, providing multiple options for verification
and fallback methods in case a primary method fails.
Constraints:
Security vs. Usability: Must balance ease of access with robust security
measures.
Resource Usage: Biometrics require additional resources, which may impact
device performance or battery life.
Privacy and Compliance: Certain biometric data must comply with privacy
regulations (e.g., GDPR).
Applicability:
This pattern is applicable to any device where sensitive data or personal
settings require protection. Typical applications include personal mobile phones,
tablets, laptops, and smart home devices with controlled access to certain features
or data.
Structure:
Authentication Methods: Interfaces for password, PIN, or biometric data entry.
Authentication Management System: Manages user credentials, stores
configurations, and tracks access attempts.
Fallback Mechanisms: Alternative options if primary authentication fails, such as
recovery PIN or password.
Behavior:
User Access: When a user initiates access, the system prompts for an
authentication method.
Credential Verification: The system verifies the input against stored credentials.
Access Grant or Denial: If the input matches stored credentials, access is
granted; if it fails, access is denied, with fallback options presented after a set
number of attempts.
Participants:
User: The individual accessing the device.
Authentication Interface: Receives user input and passes it to the system for
validation.
Authentication Management System: Processes and verifies credentials,
manages settings, and stores user data.
Fallback Mechanism: Activated if primary access methods fail, providing an
alternative means of authentication.
Collaborations:
The User interacts with the Authentication Interface to gain access.
The Authentication Interface passes data to the Authentication Management
System for verification.
If verification fails, the Fallback Mechanism provides secondary access options.
Consequences:
Positive: Provides flexible, secure access with multiple options, improving
security while accommodating user preferences.
Negative: Adds complexity and resource usage, especially with biometrics,
which can impact performance. Potential for user frustration if access methods
fail.
This pattern enables secure, user-friendly access and can be adapted for
devices with sensitive data or controlled functionality.
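A hedged sketch of the Behavior and Collaborations described above (the method names, the attempt limit, and the recovery-code fallback are illustrative assumptions):

# Illustrative sketch of the User Authentication pattern's behavior.
MAX_ATTEMPTS = 3   # assumed limit before the fallback mechanism is offered

class AuthenticationManager:
    def __init__(self, stored_pin: str, recovery_code: str):
        self._stored_pin = stored_pin
        self._recovery_code = recovery_code
        self._failed_attempts = 0

    def authenticate(self, pin: str) -> bool:
        """Primary method: verify the entered PIN against stored credentials."""
        if pin == self._stored_pin:
            self._failed_attempts = 0
            return True                       # access granted
        self._failed_attempts += 1
        return False                          # access denied

    def fallback_available(self) -> bool:
        return self._failed_attempts >= MAX_ATTEMPTS

    def authenticate_fallback(self, code: str) -> bool:
        """Fallback mechanism: recovery code used when the primary method fails."""
        return code == self._recovery_code

auth = AuthenticationManager(stored_pin="1234", recovery_code="reset-42")
for attempt in ["0000", "1111", "2222"]:      # three failed primary attempts
    auth.authenticate(attempt)
if auth.fallback_available():
    print("Fallback unlocked:", auth.authenticate_fallback("reset-42"))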
Q 7.11) How much analysis modeling do you think would be required for
SafeHomeAssured.com? Would each of the model types described in Section 7.5.3
be required?
A) For SafeHomeAssured.com, a comprehensive home management system, a
substantial amount of analysis modeling would be necessary to ensure a clear
understanding of user needs, system functionality, and interactions. Given the
complexity of such a system, all the model types described in Section 7.5.3—
Content, Interaction, Function, Navigation, and Configuration Models—would play a
critical role in the overall analysis process.
Required Analysis Models
i. Content Models:
Requirement: Essential for defining the types of information that
SafeHomeAssured.com will manage, such as user profiles, device data, alerts,
and security logs.
Importance: Helps in structuring the data that needs to be stored, retrieved, and
presented to users, ensuring that all necessary content is accounted for.
v. Configuration Models:
Requirement: Necessary to define system settings and user preferences,
including security configurations, notification preferences, and integration with
other devices or services.
Importance: Helps in managing the customizable aspects of the system,
ensuring users can tailor their experience according to their specific needs and
preferences.
In summary, while delaying the functional model can foster creativity and
adaptability in design, it also poses risks related to requirement clarity, project
scope, and alignment between design and functionality. A balanced approach that
integrates both functional and design considerations early in the development
process is often more effective for creating a successful WebApp.
SHORT ANSWER:
The purpose of a configuration model is to define and manage the various
components, settings, and dependencies within a system to ensure consistency and
reliability across different environments (e.g., development, testing, production). It
helps track software versions, system configurations, hardware requirements, and
third-party integrations, enabling easier setup, maintenance, and troubleshooting.
By documenting these elements, the configuration model supports controlled
deployments, reduces errors, and facilitates scalability and reproducibility across
environments.
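As a rough illustration (all keys and values are invented), the kind of information a configuration model records per environment might be captured like this:

# Hypothetical configuration model entries for a WebApp across environments.
configuration_model = {
    "development": {
        "app_version": "2.3.0-dev",
        "database": {"host": "localhost", "port": 5432},
        "third_party_integrations": ["payments-sandbox"],
        "notifications_enabled": False,
    },
    "production": {
        "app_version": "2.2.4",
        "database": {"host": "db.internal", "port": 5432},
        "third_party_integrations": ["payments-live", "sms-gateway"],
        "notifications_enabled": True,
    },
}

# A consistency check of the kind the configuration model supports:
# every environment should document the same set of configuration items.
key_sets = [set(env.keys()) for env in configuration_model.values()]
print("Environments consistent:", all(k == key_sets[0] for k in key_sets))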
Q 7.15) How does the navigation model differ from the interaction model?
A) The navigation model and the interaction model serve distinct but
complementary purposes in the design and analysis of a WebApp. Here’s how they
differ:
Navigation Model
Purpose:
The navigation model focuses on how users move through the application. It
outlines the pathways and structure of the application’s content, including menus,
links, and the overall site architecture.
Components:
It typically includes elements like navigation menus, breadcrumbs, page
layouts, and the hierarchy of content. It visually represents how different sections of
the app are interconnected.
Focus:
The emphasis is on user pathways and how easily users can find and access
the information or features they need. It helps ensure that navigation is intuitive and
that users can seamlessly transition between different parts of the application.
Static vs. Dynamic:
The navigation model often represents a more static structure that remains
relatively consistent throughout user sessions, providing a stable framework for how
content is organized.
Interaction Model
Purpose:
The interaction model details how users engage with the application and how
the application responds to user actions. It encompasses the dynamic aspects of user
experience and interaction patterns.
Components:
It includes user inputs (e.g., clicks, taps, gestures), feedback from the system
(e.g., alerts, notifications), and the sequence of events triggered by user actions,
such as submitting a form or navigating through different views.
Focus:
The emphasis is on the actual interactions between users and the application,
including how users perform tasks and how the application communicates with them
during those tasks.
Dynamic Nature:
The interaction model is more dynamic, reflecting changes in state and
behavior based on user actions. It captures the various scenarios that can occur
during user interaction, including error handling and system responses.
In summary, the navigation model primarily addresses how users find their
way through the application and how the content is structured, while the interaction
model focuses on the specific interactions users have with the application and the
dynamic responses of the system. Together, these models provide a comprehensive
understanding of both the structure and behavior of a WebApp, leading to a more
cohesive user experience.
SHORT ANSWER:
The navigation model defines the structure and pathways users can take to
move through different parts of a WebApp, such as links, menus, and navigation
flows. It focuses on the organization of content and pages, guiding users from one
section to another.
In contrast, the interaction model focuses on the specifics of how users
interact with individual elements on each page, such as buttons, forms, and other UI
components. It details the behaviors, responses, and input handling for user actions.
In summary, the navigation model addresses where users can go, while the
interaction model addresses how users interact within each page.
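A small sketch of the difference (the pages, links, and handler are invented for illustration): the navigation model is essentially a static map of where users can go, while the interaction model describes how a page reacts to a user action.

# Navigation model (static structure): which pages link to which.
navigation = {
    "Home":     ["Products", "Account"],
    "Products": ["Product Detail", "Cart"],
    "Cart":     ["Checkout"],
    "Account":  ["Order History"],
}

# Interaction model (dynamic behavior): how a page responds to a user action.
def on_add_to_cart(cart: list, item: str) -> str:
    cart.append(item)                                        # state change triggered by the action
    return f"'{item}' added to cart ({len(cart)} items)"     # system feedback to the user

cart = []
print("Reachable from Home:", navigation["Home"])
print(on_add_to_cart(cart, "Smart Lock"))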