1. Introduction to Software Design (1)

Software engineering is the discipline that applies scientific principles to the development of software products, ensuring they are effective and reliable. It is essential for managing large software projects, improving scalability, and maintaining quality while addressing the dynamic nature of user requirements. The Software Development Life Cycle (SDLC) outlines a structured approach to software development, encompassing stages from planning and requirement analysis to deployment and maintenance.


Introduction to Software Design

What is Software Engineering?


The term software engineering is the product of two words: software and engineering.

Software is a collection of integrated programs. It consists of carefully organized instructions and code, written by developers in any of various programming languages, together with related documentation such as requirements, design models, and user manuals.

Engineering is the application of scientific and practical knowledge to invent, design, build,
maintain, and improve frameworks, processes, etc.

Software Engineering is an engineering branch concerned with developing software products using well-defined scientific principles, techniques, and procedures. The result of software engineering is an effective and reliable software product.

Why is Software Engineering required?


Software Engineering is required due to the following reasons:

o To manage large software
o For more scalability
o Cost management
o To manage the dynamic nature of software
o For better quality management

Need of Software Engineering


The necessity of software engineering arises because of the high rate of change in user requirements and in the environment in which the software operates.

o Huge Programming: It is easier to build a wall than a house or a building; similarly, as the size of software becomes large, engineering has to step in to give it a scientific process.
o Adaptability: If the software process were not based on scientific and engineering ideas, it would be easier to re-create new software than to scale an existing one.
o Cost: The hardware industry has demonstrated its skills, and mass manufacturing has driven down the cost of computer and electronic hardware. But the cost of software remains high if the proper process is not adopted.
o Dynamic Nature: The continually growing and adapting nature of software hugely depends upon the environment in which the user works. As that environment changes, new upgrades need to be made to the existing software.
o Quality Management: A better process of software development provides a better, higher-quality software product.

Characteristics of a good software engineer


The features that good software engineers should possess are as follows:

 Exposure to systematic methods, i.e., familiarity with software engineering principles.
 Good technical knowledge of the project domain (domain knowledge).
 Good programming abilities.
 Good communication skills, comprising oral, written, and interpersonal skills.
 High motivation.
 Sound knowledge of the fundamentals of computer science.
 Intelligence.
 Ability to work in a team.
 Discipline.

Importance of Software Engineering

The importance of Software engineering is as follows:

1. Reduces complexity: Big software is always complicated and challenging to develop. Software engineering reduces the complexity of a project by dividing big problems into smaller issues and then solving each small issue one by one, independently of the others.
2. Minimizes software cost: Software requires a lot of hard work, and software engineers are highly paid experts. Developing software with a large code base requires substantial manpower. In software engineering, programmers plan everything and cut out whatever is not needed, so the cost of production is lower than for software built without a software engineering method.
3. Decreases time: Work that does not follow a plan wastes time. Building great software may require writing and running a great deal of code to arrive at the definitive running version, which is a very time-consuming procedure if not well managed. Building software according to software engineering methods therefore saves a great deal of time.
4. Handles big projects: Big projects are not finished in a couple of days; they need a lot of patience, planning, and management. Investing six or seven months of a company's resources requires careful planning, direction, testing, and maintenance; no one can report that after four months the project is still in its first stage, because the company has committed many resources and the plan must be completed. So, to handle a big project without problems, a company has to adopt a software engineering method.
5. Reliable software: Software should be reliable, meaning that once delivered it should work for at least its stated lifetime or subscription period, and the company is responsible for fixing any bugs that appear. Because software engineering includes testing and maintenance, there is no worry about reliability.
6. Effectiveness: Effectiveness comes from building according to standards. Meeting software standards is a major goal for companies, and software engineering helps software become more effective in practice.


Advantages of Software Engineering


There are several advantages to using a systematic and disciplined approach to software
development, such as:
1. Improved Quality: By following established software engineering principles
and techniques, the software can be developed with fewer bugs and higher
reliability.
2. Increased Productivity: Using modern tools and methodologies can streamline
the development process, allowing developers to be more productive and
complete projects faster.
3. Better Maintainability: Software that is designed and developed using sound
software engineering practices is easier to maintain and update over time.
4. Reduced Costs: By identifying and addressing potential problems early in the
development process, software engineering can help to reduce the cost of
fixing bugs and adding new features later on.
5. Increased Customer Satisfaction: By involving customers in the development
process and developing software that meets their needs, software engineering
can help to increase customer satisfaction.
6. Better Team Collaboration: By using Agile methodologies and continuous
integration, software engineering allows for better collaboration among
development teams.
7. Better Scalability: By designing software with scalability in mind, software
engineering can help to ensure that software can handle an increasing number
of users and transactions.
8. Better Security: By following the Software Development Life Cycle
(SDLC) and performing security testing, software engineering can help to
prevent security breaches and protect sensitive data.
In summary, software engineering offers a structured and efficient approach to software
development, which can lead to higher-quality software that is easier to maintain and
adapt to changing requirements. This can help to improve customer satisfaction and
reduce costs, while also promoting better collaboration among development teams.
Disadvantages of Software Engineering
While Software Engineering offers many advantages, there are also some potential
disadvantages to consider:
1. High upfront costs: Implementing a systematic and disciplined approach
to software development can be resource-intensive and require a significant
investment in tools and training.
2. Limited flexibility: Following established software engineering principles and
methodologies can be rigid and may limit the ability to quickly adapt to
changing requirements.
3. Bureaucratic: Software Engineering can create an environment that is
bureaucratic, with a lot of processes and paperwork, which may slow down the
development process.
4. Complexity: With the increase in the number of tools and methodologies,
software engineering can be complex and difficult to navigate.
5. Limited creativity: The focus on structure and process can stifle creativity and
innovation among developers.
6. High learning curve: The development process can be complex, and it requires
a lot of learning and training, which can be challenging for new developers.
7. High dependence on tools: Software engineering heavily depends on the
tools, and if the tools are not properly configured or are not compatible with
the software, it can cause issues.
8. High maintenance: The software engineering process requires regular
maintenance to ensure that the software is running efficiently, which can be
costly and time-consuming.

Stages of SDLC: How to Keep Development Teams Running
The software development life cycle (SDLC) is a structured process used to design, develop, and test good-quality software. It defines the entire procedure of software development step by step. The goal of the SDLC model is to deliver high-quality, maintainable software that meets the user's requirements, at low cost and within a given time frame. SDLC outlines a plan for each stage so that each stage of the software development model can perform its task efficiently. In this article we will look at the Software Development Life Cycle (SDLC) in detail.

What is the Software Development Life Cycle (SDLC)?


SDLC is a process followed for software building within a software
organization. SDLC consists of a precise plan that describes how to develop, maintain,
replace, and enhance specific software. The life cycle defines a method for improving
the quality of software and the all-around development process.

Stages of the Software Development Life Cycle


SDLC specifies the task(s) to be performed at various stages by a software engineer or
developer. It ensures that the end product is able to meet the customer’s expectations
and fits within the overall budget. Hence, it’s vital for a software developer to have prior
knowledge of this software development process. SDLC is a collection of these six
stages, and the stages of SDLC are as follows:


The SDLC Model involves six phases or stages while developing any software.
Stage-1: Planning and Requirement Analysis
Planning is a crucial step in everything, and software development is no exception. In this stage, requirement analysis is also performed by the developers of the organization. The information is obtained from customer inputs and from sales-department and market surveys.
The information from this analysis forms the building blocks of a basic project. The quality of the project is a result of planning. Thus, in this stage, the basic project is designed with all the available information.


Stage-2: Defining Requirements


In this stage, all the requirements for the target software are specified. These
requirements get approval from customers, market analysts, and stakeholders.
This is fulfilled by utilizing SRS (Software Requirement Specification). This is a sort of
document that specifies all those things that need to be defined and created during the
entire project cycle.


Stage-3: Designing Architecture


The SRS is a reference for software designers to come up with the best architecture for the software. Based on the requirements defined in the SRS, multiple designs for the product architecture are proposed and documented in the Design Document Specification (DDS).
This DDS is assessed by market analysts and stakeholders. After evaluating all the possible factors, the most practical and logical design is chosen for development.

Stage-4: Developing Product


At this stage, the fundamental development of the product starts. Developers write code according to the design in the DDS, so it is important for the coders to follow the protocols set by the organization. Conventional programming tools such as compilers, interpreters, and debuggers are put to use at this stage, with popular languages like C/C++, Python, and Java chosen as the software regulations require.


Stage-5: Product Testing and Integration


After the development of the product, testing of the software is necessary to ensure its smooth execution. Although minimal testing is conducted at every stage of the SDLC, at this stage all probable flaws are tracked, fixed, and retested. This ensures that the product conforms to the quality requirements of the SRS.
Documentation, Training, and Support: Software documentation is an essential part of the software development life cycle. A well-written document acts as a repository of the information needed to understand the software's processes, functions, and maintenance, and also describes how to use the product. Training aims to improve current or future employee performance by increasing an employee's ability to work through learning, usually by changing attitudes and developing skills and understanding.


Stage-6: Deployment and Maintenance of Products


After detailed testing, the final product is released in phases as per the organization's strategy and then tested in a real industrial environment to ensure smooth performance. If it performs well, the organization releases the product as a whole. After collecting useful feedback, the company releases it as-is or with auxiliary improvements to make it more helpful for the customers. However, this alone is not enough: along with deployment, the product must be supervised and maintained.

Software Development Life Cycle Models


To this day, we have more than 50 recognized SDLC models in use, but none of them is perfect; each brings its favorable aspects and disadvantages for a specific software development project or team.
Here, we have listed the top six most popular SDLC models:
Waterfall Model: This SDLC model is considered the oldest and most straightforward. With this methodology, we finish one phase and then start the next. Why the name waterfall? Because each of the phases in this model has its own mini-plan and each stage "waterfalls" into the next. A drawback that holds back this model is that even small details left incomplete can hold up the entire process.

Agile Model: Agile is the new normal. It is one of the most utilized models, as it approaches software development in incremental but rapid cycles, commonly referred to as "sprints". With new changes in scope and direction being implemented in each sprint, the project can be completed quickly and with higher flexibility. Agile means spending less time in the planning phases, and a project can diverge from its original specifications.

Iterative Model: This SDLC model stresses repetition. Developers create a version rapidly at relatively low cost, then test and improve it through successive versions. One big disadvantage of this model is that, if left unchecked, it can eat up resources fast.

V-Shaped Model: This model can be considered an extension of the waterfall model, as it includes tests at each stage of development. Just as with waterfall, this process can run into obstructions.
Big Bang Model: This SDLC model is considered best for small projects, as it throws most of its resources at development. It lacks the detailed requirements-definition stage of the other methods.

Spiral Model: One of the most flexible SDLC models is the spiral model. It resembles the iterative model in its emphasis on repetition: the model goes through the planning, design, build, and test phases again and again, with gradual improvements at each pass.

Wrapping-up SDLC

SDLC can be a great tool that helps us achieve the highest level of documentation and management control. But failure to consider the requirements of customers, users, or stakeholders can lead to project failure.

Note:
 Waterfall: Best for clear, stable projects with minimal changes.
 V-Model: Good for projects with clear requirements and a strong focus on
testing.
 Agile/Scrum: Ideal for projects with changing requirements and frequent client
interaction.
 Spiral: Suitable for high-risk projects with evolving requirements.
 RAD(Rapid Application Development) : Useful for projects needing rapid
development.
 DevOps: Best for continuous integration and ongoing support.

Role of Software Design in the Software Development Life Cycle (SDLC)
Software design plays a pivotal role in the Software Development Life Cycle (SDLC) by acting as the
blueprint for building the system. It bridges the gap between requirements analysis and implementation,
ensuring that the software system is robust, scalable, and maintainable. Below are the key roles and
contributions of software design in the SDLC:
1. Transforming Requirements into a Blueprint

 Role: Converts requirements gathered during the analysis phase into a structured design
document.
 Details:
o Defines the architecture, components, modules, and data structures.
o Ensures that the design aligns with the business and technical requirements.

2. Providing a Clear Roadmap for Development

 Role: Serves as a roadmap for developers to follow during the coding phase.
 Details:
o Specifies how the system will be implemented, including coding standards and
technologies.
o Reduces ambiguity and ensures consistency across the development team.

3. Enhancing System Quality

 Role: Ensures the system meets desired quality attributes such as performance, scalability, and
reliability.
 Details:
o Promotes modularity, allowing easier maintenance and upgrades.
o Facilitates error detection and prevention during the design phase.

4. Supporting Reusability

 Role: Encourages reusable components and design patterns.


 Details:
o Helps save time and resources by reusing existing components or frameworks.
o Standardizes practices for future projects.

5. Enabling Risk Mitigation

 Role: Identifies potential risks and challenges early in the development cycle.
 Details:
o Provides a framework to assess feasibility, identify bottlenecks, and mitigate risks.
o Prevents costly changes by addressing issues before implementation.

6. Ensuring Compatibility and Integration

 Role: Ensures seamless integration with other systems and components.


 Details:
o Defines interfaces, APIs, and communication protocols.
o Facilitates integration testing and deployment.
7. Simplifying Testing and Debugging

 Role: Makes testing more systematic by defining clear module boundaries and interactions.
 Details:
o Promotes unit testing through modular design.
o Aids debugging by isolating issues to specific modules or layers.

8. Facilitating Stakeholder Communication

 Role: Acts as a medium to communicate the system structure and behavior to stakeholders.
 Details:
o Uses diagrams and models (e.g., UML diagrams) to present the system visually.
o Helps non-technical stakeholders understand the system’s design.

9. Supporting Future Maintenance and Scalability

 Role: Lays the groundwork for maintaining and scaling the software post-deployment.
 Details:
o Incorporates flexibility to accommodate future changes.
o Promotes documentation for easier understanding by new developers.

10. Aligning with Project Goals and Timelines

 Role: Keeps the project aligned with its goals by providing a clear framework.
 Details:
o Helps estimate development time and resource allocation accurately.
o Reduces delays by minimizing rework during later phases.

Design Principles in System Design


Design Principles in System Design are a set of considerations that form the basis of any good system. Why use design principles in system design? They help teams with decision-making; system design is a multi-disciplinary field that involves trade-off analysis, balancing conflicting needs, and making decisions about design choices that will impact the overall system.
Some of the most common Design Principles in System Design are:
1. Separation of Concerns
2. Encapsulation and Abstraction
3. Loose Coupling and High Cohesion
4. Scalability and Performance
5. Resilience and Fault Tolerance
6. Security and Privacy
Let us explain each design principle to get a better understanding, as follows:
1. Separation of concerns
Fundamental design principles that encourage code organization and maintainability
include modularity and separation of concerns. Developers can concentrate on particular
parts of a system independently by breaking it up into smaller, self-contained modules,
making the system simpler to comprehend, test, and alter. Each module must have a
clearly defined role that encompasses proper functionality and reduces reliance on other
modules. This makes it possible to scale or replace particular components without having
an adverse effect on the system as a whole, and it also makes maintenance easier.
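The idea above can be sketched in a few lines of Python. This is a minimal illustration, not a prescribed design; the class names (OrderStore, PriceCalculator, ReceiptFormatter) are hypothetical, and each class owns exactly one concern so any one of them can be tested or replaced without touching the others.

```python
class OrderStore:
    """Persistence concern: only stores and retrieves orders."""
    def __init__(self):
        self._orders = {}

    def save(self, order_id, items):
        self._orders[order_id] = items

    def load(self, order_id):
        return self._orders[order_id]


class PriceCalculator:
    """Business-rule concern: only computes totals."""
    def total(self, items):
        return sum(price * qty for price, qty in items)


class ReceiptFormatter:
    """Presentation concern: only renders output."""
    def render(self, order_id, total):
        return f"Order {order_id}: ${total:.2f}"


# Wiring the separate concerns together
store = OrderStore()
calc = PriceCalculator()
fmt = ReceiptFormatter()

store.save("A1", [(9.99, 2), (4.50, 1)])
total = calc.total(store.load("A1"))
print(fmt.render("A1", total))  # Order A1: $24.48
```

Because each module exposes only a narrow interface, swapping the in-memory OrderStore for a database-backed one would leave the calculator and formatter untouched.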
2. Encapsulation and Abstraction
Abstraction and encapsulation are design approaches that support information hiding and minimize complexity. Encapsulation combines data and behavior into a single unit (a class, module, etc.), whereas abstraction builds simplified, logical representations of complex things, revealing only the details that callers need.
1. Encapsulation

Definition:
Encapsulation is the process of bundling data (attributes) and methods (functions) that operate on the
data into a single unit, usually a class. It also involves restricting direct access to some components of the
object to enforce controlled interaction.

Key Characteristics:

 Data Hiding: Protects the internal state of an object from unauthorized access.
 Access Control: Uses access specifiers like private, protected, and public to control how data and
methods are accessed.
 Modularity: Encapsulation groups related data and behavior, making the system easier to
understand and modify.
 Encapsulated Classes: Often provide getter and setter methods to access or modify private
attributes indirectly.
Example (Encapsulation in Python):

class BankAccount:
    def __init__(self, balance):
        self.__balance = balance  # Private attribute

    def deposit(self, amount):
        if amount > 0:
            self.__balance += amount

    def withdraw(self, amount):
        if 0 < amount <= self.__balance:
            self.__balance -= amount
        else:
            print("Insufficient funds")

    def get_balance(self):
        return self.__balance  # Controlled access

# Using the class
account = BankAccount(100)
account.deposit(50)
account.withdraw(30)
print(account.get_balance())  # Output: 120
Advantages:

 Protects the internal state of an object.


 Simplifies debugging and maintenance.
 Promotes modular design.

Real-World Analogy:

Think of a capsule—it hides its contents and provides a controlled way to interact with them (e.g.,
ingesting it).

2. Abstraction

Definition:
Abstraction is the process of highlighting essential features and hiding unnecessary details. It focuses on
what an object does rather than how it does it.

Key Characteristics:

 Essential Information Only: Users interact with high-level functionalities without worrying about
the underlying complexity.
 Implementation Hiding: Details about how methods or systems work are abstracted away.
 Interfaces and Abstract Classes: Abstraction is often achieved using these constructs, which
define what functionalities a class must implement.

Example (Abstraction in Python):

from abc import ABC, abstractmethod

class Vehicle(ABC):
    @abstractmethod
    def start(self):
        pass

    @abstractmethod
    def stop(self):
        pass

class Car(Vehicle):
    def start(self):
        print("Car starting with a key")

    def stop(self):
        print("Car stopping with brakes")

class Bike(Vehicle):
    def start(self):
        print("Bike starting with a button")

    def stop(self):
        print("Bike stopping with disc brakes")

# Using the classes
vehicle1 = Car()
vehicle1.start()  # Output: Car starting with a key
vehicle1.stop()   # Output: Car stopping with brakes
Advantages:

 Simplifies complex systems by focusing on functionality.


 Reduces code duplication by emphasizing high-level design.
 Improves flexibility and scalability.

Real-World Analogy:

Consider a car's dashboard—you know how to drive the car using the steering wheel, accelerator, and
brake, but you don’t need to understand the mechanics of the engine.

Key Differences

 Purpose: Encapsulation protects the internal state and enforces controlled access; abstraction focuses on simplifying complexity by hiding unnecessary details.
 Focus: Encapsulation concerns how the functionality is achieved; abstraction concerns what functionality is provided.
 Implementation: Encapsulation is achieved by bundling data and methods in a class and using access specifiers; abstraction is achieved through abstract classes, interfaces, and inheritance.
 Visibility: Encapsulation hides data through controlled access (e.g., private variables); abstraction hides implementation details from the user.
 Example: Getter and setter methods for private variables (encapsulation); abstract classes or interfaces defining high-level behavior (abstraction).

Relationship Between Encapsulation and Abstraction

 Encapsulation is about protecting data and ensuring controlled access, while abstraction is about
hiding implementation details and exposing only essential functionalities.
 Encapsulation is often used to implement abstraction in object-oriented design. For example,
you might encapsulate the details of an abstract method within a derived class.
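This relationship can be shown in one small sketch, combining the two earlier examples. The names (Counter, BoundedCounter) are hypothetical: the abstract base class is the abstraction (what a counter does), and the concrete class uses encapsulation (private state, controlled access) to implement it.

```python
from abc import ABC, abstractmethod

class Counter(ABC):
    """Abstraction: callers see only what a counter does."""
    @abstractmethod
    def increment(self):
        pass

    @abstractmethod
    def value(self):
        pass


class BoundedCounter(Counter):
    """Encapsulation: the count and its upper bound are private;
    callers can change them only through the methods above."""
    def __init__(self, limit):
        self.__count = 0
        self.__limit = limit

    def increment(self):
        if self.__count < self.__limit:
            self.__count += 1

    def value(self):
        return self.__count


c = BoundedCounter(limit=2)
c.increment()
c.increment()
c.increment()     # ignored: the bound is enforced internally
print(c.value())  # 2
```

Callers work against the Counter abstraction, while the bound-checking detail stays encapsulated inside the derived class.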

3. Loose Coupling and High Cohesion


As we know, Coupling refers to the degree of interdependence between software modules. High coupling
means that modules are closely connected and changes in one module may affect other modules. Low
coupling means that modules are independent and changes in one module have little impact on other
modules. It is of two types:

1. Loose Coupling

Definition:
Loose coupling occurs when modules or components have minimal dependencies on one another. They
interact through well-defined interfaces, reducing the impact of changes in one module on others.

Characteristics:

 Independent Modules: Each module can function independently and requires minimal information
about others.
 Clear Interfaces: Communication between modules is done via standardized interfaces or APIs.
 Flexibility: Changes in one module typically do not affect others, making the system more
adaptable to modifications.
 High Maintainability: Easier to update, replace, or scale individual components.
 Reusability: Components can often be reused in different systems or contexts.

Example:

 A REST API interacting with a frontend application. The frontend does not need to know the
backend’s implementation details as long as it adheres to the API contract.

Advantages:

 Improved maintainability and scalability.


 Easier debugging and testing due to isolated modules.
 Facilitates system upgrades and integrations.

Disadvantages:

 Initial design and implementation can be more complex.


 Might involve performance trade-offs due to abstraction layers.
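The REST-API example above can be approximated in plain Python. This is a minimal sketch with a hypothetical ProductAPI interface: the Checkout client depends only on the interface, so either backend can be swapped in without changing the client.

```python
from abc import ABC, abstractmethod

class ProductAPI(ABC):
    """The contract: the only thing the client knows about."""
    @abstractmethod
    def get_price(self, sku):
        pass


class SqlBackend(ProductAPI):
    def get_price(self, sku):
        # Stand-in for a real database lookup
        return {"ABC": 19.99}.get(sku, 0.0)


class InMemoryBackend(ProductAPI):
    def get_price(self, sku):
        return 5.0


class Checkout:
    """Loosely coupled: depends on ProductAPI, not on any backend."""
    def __init__(self, api: ProductAPI):
        self.api = api

    def quote(self, sku):
        return self.api.get_price(sku)


# Either backend satisfies the contract; Checkout never changes.
print(Checkout(SqlBackend()).quote("ABC"))       # 19.99
print(Checkout(InMemoryBackend()).quote("ABC"))  # 5.0
```

A tightly coupled version would have Checkout construct and query SqlBackend directly, so replacing the backend would force edits to Checkout itself.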
2. Tight Coupling

Definition:
Tight coupling occurs when modules or components are highly dependent on each other's implementation
details. A change in one module often necessitates changes in others.

Characteristics:

 Interconnected Modules: Modules have a high level of dependency and knowledge about each
other's internal workings.
 Direct Interaction: Communication may occur directly without abstraction or standardization.
 Limited Flexibility: Modifications in one module often lead to cascading changes.
 Low Maintainability: More challenging to maintain or refactor as systems evolve.
 Reduced Reusability: Components are less portable due to dependencies.

Example:

 A frontend directly calling backend functions or relying on specific database structures instead of
an API layer.

Advantages:

 Simpler to design and implement for small-scale systems.


 Can lead to faster communication between tightly connected components.

Disadvantages:

 Difficult to scale or modify without significant effort.


 Changes in one module can introduce bugs in others.
 Testing and debugging are more complex due to interdependencies.

Key Differences

 Dependency: Loose coupling has minimal dependencies between modules; tight coupling has high dependencies between modules.
 Flexibility: Loose coupling offers high flexibility for updates and changes; tight coupling offers low flexibility, as changes affect multiple modules.
 Reusability: Loosely coupled components are easily reusable; tightly coupled components are hard to reuse due to dependencies.
 Maintainability: Loosely coupled systems are easier to maintain and refactor; tightly coupled systems are challenging to maintain as they evolve.
 Performance: Loose coupling may involve performance overhead; tight coupling is often more efficient due to direct interactions.

When to Use Loose Coupling vs. Tight Coupling

 Loose Coupling: Preferred in large, scalable, and maintainable systems where future changes or
integrations are expected (e.g., microservices architecture).
 Tight Coupling: Suitable for small systems where simplicity and performance are prioritized over
scalability.

Similarly, Cohesion refers to the degree to which elements within a module work together to fulfill a single,
well-defined purpose. High cohesion means that elements are closely related and focused on a single
purpose, while low cohesion means that elements are loosely related and serve multiple purposes.

1. Low Cohesion

Definition:
Low cohesion occurs when a module or component performs multiple, unrelated tasks or has
responsibilities that are not well-aligned with its purpose.

Characteristics:

 Unfocused Functionality: The module tries to do too much, often leading to unrelated
responsibilities being grouped together.
 Difficult to Understand: Understanding the module's purpose becomes challenging due to its
diverse tasks.
 Hard to Maintain: Changes in one functionality can inadvertently affect others.
 Reduced Reusability: The module cannot easily be reused in different contexts due to its lack of
focus.

Example:

A single class in an e-commerce system that handles user authentication, product inventory management,
and order processing all at once.

Advantages:

 Might simplify initial development by grouping everything in one place.

Disadvantages:

 Leads to tightly coupled and less maintainable code.


 Increases the risk of introducing bugs when making changes.

2. High Cohesion

Definition:
High cohesion occurs when a module or component performs a specific, well-defined task and all its
responsibilities are closely related to that task.

Characteristics:

 Focused Functionality: The module has a clear and singular purpose.


 Easier to Understand: The purpose of the module is intuitive and straightforward.
 Improved Maintainability: Changes are isolated to specific modules, reducing the risk of side
effects.
 Increased Reusability: The module can be reused in other projects or systems due to its
specialized purpose.
Example:

A class in the e-commerce system that only handles user authentication, while other classes handle
inventory management and order processing.

Advantages:

 Facilitates testing, debugging, and refactoring.


 Encourages modular design, promoting scalability and flexibility.

Disadvantages:

 May require more initial effort to design a highly cohesive system.
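The e-commerce example above can be sketched in Python as three small, highly cohesive classes instead of one class that does everything. The class and method names are hypothetical; each class owns exactly one responsibility.

```python
class AuthService:
    """Handles user authentication only."""
    def __init__(self):
        self._users = {}
    def register(self, username, password):
        self._users[username] = password
    def authenticate(self, username, password):
        return self._users.get(username) == password

class InventoryService:
    """Handles product inventory only."""
    def __init__(self):
        self._stock = {}
    def add_stock(self, item, qty):
        self._stock[item] = self._stock.get(item, 0) + qty
    def in_stock(self, item):
        return self._stock.get(item, 0) > 0

class OrderService:
    """Handles order processing only; collaborates with InventoryService."""
    def __init__(self, inventory):
        self.inventory = inventory
        self.orders = []
    def place_order(self, item):
        if not self.inventory.in_stock(item):
            return False
        self.orders.append(item)
        return True

auth = AuthService()
auth.register("alice", "pw123")
inventory = InventoryService()
inventory.add_stock("book", 2)
orders = OrderService(inventory)
print(auth.authenticate("alice", "pw123"))  # True
print(orders.place_order("book"))           # True
print(orders.place_order("laptop"))         # False (not in stock)
```

A change to the password scheme now touches only `AuthService`; inventory and ordering are unaffected, which is the maintainability benefit high cohesion promises.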

Key Differences
Aspect            | Low Cohesion                                     | High Cohesion
------------------|--------------------------------------------------|----------------------------------------------
Focus             | Handles unrelated tasks                          | Focuses on a single, specific task
Understandability | Hard to understand and maintain                  | Easy to understand and maintain
Reusability       | Limited due to unrelated responsibilities        | High due to a focused purpose
Flexibility       | Low, as changes in one part may affect others    | High, with minimal impact on unrelated parts
Error Proneness   | More prone to errors and unintended side effects | Less prone to errors due to isolation

When to Aim for High Cohesion

 Object-Oriented Design: Use classes that encapsulate specific responsibilities (e.g., single
responsibility principle).
 Modular Systems: In large systems, modules or services should be cohesive to allow for easier
updates and scaling.
 Microservices Architecture: Each service should perform a single, well-defined role.

When applying these design principles, the goal is loose coupling and high cohesion in order to build flexible systems. With loose coupling, we reduce dependencies between components by minimizing direct communication between them, relying on interfaces instead. With high cohesion, related functionality is confined within a single module, ensuring that its elements work together toward a common goal and improving reusability and understandability.
4. Scalability and Performance
Building systems that are intended to manage increasing workloads or vast amounts of
data requires careful consideration of scalability and performance. System designers
should take into account both horizontal scaling (adding more instances or nodes) and
vertical scaling (raising the resources of a single node) to accomplish scalability. The
workload can also be distributed and system responsiveness increased by using
strategies like load balancing, caching, and asynchronous processing. To achieve
optimum performance, it’s critical to spot possible bottlenecks early in the design phase
and execute the necessary optimizations.

1. Performance

Definition:
Performance refers to how efficiently a system operates under a specific workload. It measures the
system's ability to process requests, execute tasks, and deliver results in a timely manner.

Key Characteristics:

 Speed: How quickly the system processes tasks (e.g., response time, latency).
 Throughput: The number of tasks or requests the system can handle in a given timeframe.
 Resource Utilization: Efficiency in using CPU, memory, disk, and network resources.
 Reliability: Consistency in delivering results within expected performance levels.

Performance Metrics:

 Latency: Time taken to respond to a request.


 Throughput: Number of requests or tasks processed per second.
 Error Rate: Frequency of errors in system operations.
 Resource Utilization: Percentage of system resources consumed.

Examples:

 A web server's response time when handling a single user request.


 The time it takes for a database query to return results.

Goal:

Optimize speed, efficiency, and resource utilization for the current workload.
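The latency and throughput metrics listed above can be measured with a short, generic Python harness. The `measure` helper and the workload are illustrative; in practice these numbers would come from profilers or monitoring tools.

```python
import time

def measure(task, requests=1000):
    """Return (average latency in ms, throughput in requests/sec) for `task`."""
    start = time.perf_counter()
    for _ in range(requests):
        task()
    elapsed = time.perf_counter() - start
    return (elapsed / requests) * 1000, requests / elapsed

# Toy workload standing in for handling one request.
latency_ms, throughput = measure(lambda: sum(range(100)))
print(f"avg latency: {latency_ms:.4f} ms, throughput: {throughput:.0f} req/s")
```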

2. Scalability

Definition:
Scalability refers to a system's ability to handle increasing workloads by adding resources (hardware,
software, or both) without sacrificing performance.

Key Characteristics:

 Horizontal Scalability: Adding more machines or instances to distribute the workload (e.g.,
scaling out).
 Vertical Scalability: Upgrading the capacity of existing machines (e.g., scaling up by adding more
CPU or RAM).
 Elasticity: Ability to dynamically scale resources up or down based on demand.
 Graceful Degradation: Maintaining partial functionality under extreme loads.

Types of Scalability:

 Load Scalability: Ability to handle more users or data without degradation.


 Geographic Scalability: Ability to perform well across distributed locations.
 Administrative Scalability: Ability to manage the system efficiently as it grows.
Examples:

 A cloud-based e-commerce platform scaling its infrastructure during Black Friday sales.
 A database cluster adding more nodes to handle increasing query volume.

Goal:

Ensure the system can grow seamlessly to meet future demand.

Key Differences
Aspect              | Performance                                          | Scalability
--------------------|------------------------------------------------------|--------------------------------------------------
Definition          | Efficiency of the system under a given workload.     | Ability to grow and handle increasing workloads.
Focus               | Current system behavior and optimization.            | Future growth and adaptability.
Measurement         | Metrics like response time, throughput, and latency. | Growth capacity with maintained performance.
Time Frame          | Immediate operation under current conditions.        | Long-term operation under increased demand.
Resource Dependence | Limited by current resources.                        | Relies on resource addition or redistribution.
Objective           | Maximize efficiency.                                 | Ensure system growth without performance loss.

Relationship Between Scalability and Performance

 A system can have high performance but low scalability, meaning it works well under current
conditions but cannot handle increased demand.
 A scalable system is designed to maintain or improve performance as workload increases, but it
might not always be optimized for maximum performance at a smaller scale.

Example Scenario: Web Application

1. Performance:
o The application responds to user requests in under 200 milliseconds with current traffic
levels.
o Optimizations such as caching and database indexing improve speed.
2. Scalability:
o As traffic increases from 1,000 to 100,000 users, the application uses load balancing and
adds more servers to maintain the same response time.
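The scalability step in this scenario, adding servers behind a load balancer, can be sketched with a minimal round-robin balancer in Python. Server names and the `route` method are hypothetical; real load balancers (e.g., nginx, HAProxy) also track health and weights.

```python
class RoundRobinBalancer:
    """Distributes requests evenly across a pool of servers."""
    def __init__(self, servers):
        self.servers = list(servers)
        self._next = 0
    def add_server(self, server):
        # Horizontal scaling: add a node to absorb extra traffic.
        self.servers.append(server)
    def route(self, request):
        server = self.servers[self._next % len(self.servers)]
        self._next += 1
        return f"{server} handled {request}"

lb = RoundRobinBalancer(["server-1", "server-2"])
print(lb.route("req-A"))   # server-1 handled req-A
lb.add_server("server-3")  # scale out as traffic grows
print(lb.route("req-B"))   # server-2 handled req-B
```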
Optimizing Both Performance and Scalability

 Caching: Improves performance by reducing computation or database calls.


 Load Balancing: Distributes traffic across multiple servers for scalability and reliability.
 Asynchronous Processing: Improves performance by handling tasks in the background.
 Database Sharding: Increases scalability by dividing large databases into smaller, more
manageable pieces.
 Cloud Solutions: Use elastic cloud services to dynamically scale resources.
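The first technique in the list, caching, is easy to demonstrate with Python's standard-library `functools.lru_cache`. The database call is simulated with a `time.sleep`; the second lookup is served from the in-memory cache and skips the expensive work entirely.

```python
import time
from functools import lru_cache

@lru_cache(maxsize=128)
def get_product_details(product_id):
    time.sleep(0.05)  # simulate a slow database query
    return {"id": product_id, "name": f"Product {product_id}"}

t0 = time.perf_counter(); get_product_details(42); cold = time.perf_counter() - t0
t1 = time.perf_counter(); get_product_details(42); warm = time.perf_counter() - t1
print(f"cold: {cold*1000:.1f} ms, warm: {warm*1000:.3f} ms")
```

The cold call pays the full query cost; the warm call returns in microseconds, which is why caching improves performance without any change to the caller.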

5. Resilience and Fault Tolerance


To guarantee system availability and reliability, it is crucial to design for fault tolerance and resilience. This involves techniques such as redundancy, replication, and fault-detection algorithms. Designing systems that can survive component failures and handle exceptions gracefully avoids or minimizes downtime and limits the impact of failures. System resilience is further improved by putting backup and recovery procedures in place and by conducting careful testing and monitoring.
1. Fault Tolerance

Definition:
Fault tolerance refers to a system's ability to continue operating correctly even when one or more of its
components fail. It is a proactive approach to handling failures by incorporating redundancy and error-
handling mechanisms.

Key Characteristics:

 Focus on Continuity: Ensures uninterrupted operation despite faults.


 Redundancy: Often uses duplicate components or subsystems (e.g., backups, failover systems).
 Predictability: Designed to handle known types of failures.
 Error Detection and Correction: Includes mechanisms to identify and recover from errors
automatically.

Examples:

 RAID Storage: Uses data redundancy across multiple disks to prevent data loss from a single disk
failure.
 Failover Systems: Automatically switch to a backup server if the primary server goes down.
 Error-Correcting Code (ECC) Memory: Detects and corrects data corruption in real-time.

Limitations:

 Requires additional resources for redundancy.


 Can only handle failures it is explicitly designed to address.
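A failover system, the second example above, can be sketched in a few lines of Python: try the primary server, and if it raises a connection error, fall back to a replica. The `primary`/`replica` callables are stand-ins for real network clients.

```python
def call_with_failover(servers, request):
    """Try each server in order; return the first successful response."""
    last_error = None
    for server in servers:
        try:
            return server(request)
        except ConnectionError as e:
            last_error = e  # record the fault and try the next replica
    raise RuntimeError("all servers failed") from last_error

def primary(req):
    raise ConnectionError("primary down")  # simulated component failure

def replica(req):
    return f"replica handled {req}"

print(call_with_failover([primary, replica], "req-1"))  # replica handled req-1
```

Note the limitation discussed above: this sketch only tolerates the fault it was designed for (a `ConnectionError`); an unanticipated failure mode would still propagate.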

2. Resilience

Definition:
Resilience is a broader concept that encompasses fault tolerance and focuses on the system's ability to
adapt, recover, and continue functioning under a wide range of disruptions, including unforeseen or
unpredictable events.
Key Characteristics:

 Focus on Recovery and Adaptation: Addresses the ability to return to normal operations after
disruptions.
 Flexibility: Adapts to both expected and unexpected failures or changes in conditions.
 Holistic Approach: Considers the entire system, including infrastructure, processes, and people.
 Proactive and Reactive: Incorporates fault tolerance but also emphasizes monitoring, learning,
and improving from disruptions.

Examples:

 Cloud Computing: Automatically scales resources during a sudden spike in demand or after a
hardware failure.
 Distributed Systems: Systems like blockchain networks that continue to function despite the
failure of individual nodes.
 Disaster Recovery Plans: Organizational strategies to restore operations after natural disasters or
cyberattacks.

Advantages:

 Covers a wider range of scenarios than fault tolerance.


 Enables systems to evolve and improve from disruptions.

Key Differences
Aspect     | Fault Tolerance                                       | Resilience
-----------|-------------------------------------------------------|---------------------------------------------------
Scope      | Focuses on known faults or failures.                  | Handles both known and unknown disruptions.
Objective  | Prevents failures from interrupting operations.       | Recovers and adapts to disruptions.
Approach   | Proactive: prevents faults from impacting the system. | Proactive and reactive: adapts and recovers.
Mechanism  | Uses redundancy and error correction.                 | Incorporates redundancy, monitoring, and recovery.
Complexity | Focused on specific fault scenarios.                  | Considers the entire system’s robustness.
Cost       | May involve higher resource costs for redundancy.     | Balances resource costs with recovery strategies.

Complementary Nature

 Fault Tolerance is a subset of Resilience. Fault tolerance provides immediate mechanisms to


handle faults, while resilience addresses long-term adaptability and recovery.
 Resilient systems often incorporate fault tolerance as part of their design but go further by
preparing for unpredictable events and improving over time.
Example Scenario: Cloud Service Provider

 Fault Tolerance: A cloud provider ensures that if one server fails, traffic is automatically routed to
a redundant server without service interruption.
 Resilience: The provider not only handles server failures but also adapts to large-scale disruptions
(e.g., data center outages, cyberattacks) by dynamically scaling resources, notifying affected
users, and improving system design to prevent future issues.

6. Privacy and Security


In today’s interconnected world, security and privacy are crucial design concerns.
Security controls must be incorporated at every stage of the development process,
using methods such as encryption, authentication, and access-control systems to
safeguard sensitive data and prevent unauthorized access.

1. Privacy

Definition:
Privacy refers to the right of individuals or entities to control access to their personal information and how
it is collected, stored, shared, and used.

Key Characteristics:

 Data Control: Users have the right to decide who can access their information.
 Transparency: Organizations must inform users how their data is collected and used.
 Compliance: Governed by laws and regulations, such as GDPR (General Data Protection
Regulation) and CCPA (California Consumer Privacy Act).
 Minimization: Collecting only the necessary data to perform a specific function.

Examples:

 A social media platform allowing users to set their profile visibility to "private."
 Websites requesting user consent before tracking cookies.
 Encryption of personal messages to prevent unauthorized access.

Threats to Privacy:

 Data Breaches: Exposing personal information to unauthorized parties.


 Surveillance: Excessive monitoring by governments or organizations.
 Unauthorized Sharing: Sharing user data without consent.

Real-World Analogy:

Think of privacy as curtains on a window—you control who can see into your house.

2. Security

Definition:
Security refers to the measures taken to protect systems, networks, and data from unauthorized access,
attacks, or damage.
Key Characteristics:

 Access Control: Ensures that only authorized users or systems can access resources.
 Confidentiality: Protects sensitive information from unauthorized disclosure.
 Integrity: Ensures that data is accurate and not tampered with.
 Availability: Keeps systems and data accessible to authorized users when needed.

Examples:

 Using firewalls to prevent unauthorized access to a network.


 Implementing strong password policies and multi-factor authentication.
 Regularly updating software to patch vulnerabilities.
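The "strong password policies" example can be grounded with a sketch of password hashing using Python's standard library (`hashlib.pbkdf2_hmac`). Storing a salted, iterated hash rather than the password itself limits damage if the credential store leaks; the iteration count here is illustrative.

```python
import hashlib
import hmac
import os

def hash_password(password, salt=None, iterations=100_000):
    """Derive a salted PBKDF2-SHA256 hash; store (salt, digest), never the password."""
    salt = salt or os.urandom(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password, salt, expected, iterations=100_000):
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return hmac.compare_digest(digest, expected)  # constant-time comparison

salt, stored = hash_password("s3cret!")
print(verify_password("s3cret!", salt, stored))  # True
print(verify_password("wrong", salt, stored))    # False
```

Using `hmac.compare_digest` instead of `==` avoids timing side channels, one small instance of the access-control and confidentiality goals listed above.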

Threats to Security:

 Hacking: Unauthorized access to systems or data.


 Malware: Software designed to disrupt, damage, or gain unauthorized access.
 Phishing: Fraudulent attempts to steal sensitive information.

Real-World Analogy:

Think of security as locks on a door—they protect your house from intruders.

Key Differences
Aspect             | Privacy                                                 | Security
-------------------|---------------------------------------------------------|--------------------------------------------------------------------------------
Focus              | Protecting personal information and user control.      | Safeguarding systems and data from threats.
Goal               | Ensuring users' right to decide how their data is used. | Ensuring the confidentiality, integrity, and availability of systems and data.
Scope              | Concerned with the who, what, and why of data access.   | Concerned with the how of protecting data and systems.
Example            | Users consenting to data-sharing policies.              | Using encryption to secure data in transit.
Laws and Standards | GDPR, HIPAA, CCPA.                                      | ISO 27001, NIST, PCI DSS.

Relationship Between Privacy and Security

 Privacy depends on security: Without robust security measures, privacy cannot be ensured. For
example, a data breach can compromise both security and user privacy.
 Security is broader: Security protects systems, while privacy focuses specifically on protecting
individual rights and data.
 Complementary Goals: Privacy is about what to protect, and security is about how to protect it.
Practical Example: Online Banking

1. Privacy:
o Ensuring that users' account details and transactions are not shared with third parties
without consent.
2. Security:
o Protecting users' accounts with encryption, firewalls, and multi-factor authentication to
prevent unauthorized access.

Balancing Privacy and Security

 Encryption: Ensures that sensitive data remains confidential (privacy) and inaccessible to
attackers (security).
 Access Controls: Allow users to control data visibility (privacy) and restrict unauthorized access
(security).
 Transparency: Informing users how their data is protected while implementing robust defenses.

Understanding the Design Principles Involved in the System Design of a Transport System
Let us explore the design of a transport system and see how the design principles
discussed above can be applied to enhance the user experience.
 Simplicity:
The transport system should have a clear design that is easy to understand, since a
clear and simple interface leads to a simplified user experience. A well-known example
is Google, which runs no ads on its main search page in order to keep it simple and
minimalistic, maintaining a user-friendly interface.
 Balance:
The system should aim for an equitable distribution of transport options across the city,
taking into account elements such as demand, accessibility, and population density.
This guarantees that various places are suitably supplied and avoids congestion or
underuse of particular routes or modes.
 Contrast:
To distinguish between different forms of transportation (such as buses, trains, and
trams) or routes, contrast can be provided by visual cues and color coding. Users may
rapidly find pertinent information and make decisions regarding their journey thanks to
this.
 Unification:
A transportation system should work to achieve unification by maintaining a unified
visual identity across all touchpoints, such as cars, signage, and ticketing materials.
Users can recognize and trust the system’s services more easily because of a consistent
design that strengthens brand identification.
 Functionality:
To maintain seamless operations and user satisfaction, the system should put a high
priority on functionality. This includes elements such as dependable scheduling, seamless
transitions between modes of transportation, infrastructure that is accessible to people
with disabilities, and thoughtfully constructed platforms and waiting areas.
 Typography:
Signage, route maps, and other informational materials within the transport system can
all benefit from the application of typographic principles to improve readability and
convey information effectively. Legible typefaces, suitable font sizes, and appropriate
spacing facilitate clear communication with passengers.

Reference Links:
“What Is the Software Development Life Cycle (SDLC)? – Definition from Techopedia.” Techopedia.com, www.techopedia.com/definition/22193/software-development-life-cycle-sdlc.

“SDLC (Software Development Life Cycle) Tutorial: What Is, Phases, Model.” Guru99, www.guru99.com/software-development-life-cycle-tutorial.html.

“What Is SDLC? Understand the Software Development Life Cycle.” Stackify, 12 Dec. 2017, stackify.com/what-is-sdlc/.

“Software Development Life Cycle.” ToolsQA, www.toolsqa.com/software-testing/software-development-life-cycle/.

GeeksforGeeks, www.geeksforgeeks.org/.
