
Explain What Is The Need Of System Design?

1. System design is an important stage in software engineering that involves the
process of defining, developing, and documenting the architecture, components,
modules, interfaces, and data for a software system.
2. The main goal of system design is to transform the requirements gathered during
the analysis phase into a detailed and comprehensive blueprint that can be used
by developers to build the software system.
3. To ensure that the software system meets the requirements: This includes
requirements related to usability, scalability, performance, security, and
maintainability.
4. To identify the right architecture and technologies: This includes identifying the
right programming language, database management system, network protocols,
and other tools and technologies that can be used to build the software system.
5. To ensure consistency and modularity: This includes defining the interfaces
between different components and modules, and ensuring that each module or
component performs a specific and well-defined function.
6. To facilitate communication and collaboration: By providing a common
understanding of the software system architecture and design, system design
helps team members to work together more effectively.
Architecture Design
1. Architecture design is a critical aspect of software engineering that involves the
process of defining the overall structure and organization of a software system.
2. It involves the identification of the key components, modules, and subsystems of
the software system, as well as the relationships and interactions between them.
Architecture design involves several key activities, including:
3. Identifying the key components and subsystems of the software system
4. Defining the interfaces and interactions between the components and subsystems
5. Selecting appropriate architectural patterns and styles
6. Defining the deployment and distribution strategy for the software system
7. Ensuring that the architecture is scalable, maintainable, and extensible
8. The architecture design process typically begins with the analysis of the
requirements for the software system.
Cyclomatic Complexity
1. Cyclomatic complexity is a software metric used in software engineering to
measure the complexity of a software system.
2. It is a quantitative measure of the number of independent paths in the code that
can be executed.
3. In other words, it measures the number of possible ways that the code can be
executed.
4. The cyclomatic complexity of a software system can be calculated using a
mathematical formula based on the number of decision points in the code.
5. Decision points include conditional statements, loops, and other control
structures that affect the flow of the program.
The formula for calculating cyclomatic complexity is as follows:
M=E-N+2
1. M = Cyclomatic complexity
2. E = Number of edges in the control flow graph
3. N = Number of nodes in the control flow graph (all statements and branch
points, not decision points alone)
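To make the formula concrete, here is a small sketch (the edge list below is a hypothetical control flow graph for a function containing a single if/else) that computes M directly from the graph:

```python
# Hypothetical control flow graph for a function with one if/else:
# entry(0) -> decision(1) -> then(2) or else(3) -> exit(4)
edges = [(0, 1), (1, 2), (1, 3), (2, 4), (3, 4)]

def cyclomatic_complexity(edges):
    """M = E - N + 2 for a single connected control flow graph."""
    nodes = {n for edge in edges for n in edge}
    return len(edges) - len(nodes) + 2

print(cyclomatic_complexity(edges))  # one if/else -> 2 independent paths
```

Here E = 5 and N = 5, so M = 5 - 5 + 2 = 2, matching the intuition that a single if/else creates two independent paths.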
Write Short Note On Verification.
1. Verification is an important process in software engineering that involves the
evaluation of software products and artifacts to ensure that they meet their
specified requirements and comply with industry standards and best practices.
2. Verification activities are usually performed during the development lifecycle and
include activities such as reviews, inspections, and testing.
3. The main goal of verification is to detect and correct defects and errors in software
products before they are deployed to users.
4. Verification activities can also help to improve the quality of software products by
ensuring that they are reliable, maintainable, and scalable.
5. Code reviews and inspections: Code reviews and inspections can help to improve
the quality of software products by identifying potential problems early in the
development process.
6. Testing: Testing can help to identify defects and errors in software products and
ensure that they meet their specified requirements.
7. Formal verification: Formal verification can be used to prove that a software
product meets its requirements and that it is free from defects and errors.
COCOMO model
1. The Constructive Cost Model (COCOMO) is a software development cost
estimation model that was introduced by Dr. Barry Boehm in the late 1970s.
2. It is a well-known model used in software engineering to estimate the effort, time,
and cost required to develop software.
3. COCOMO is based on the assumption that the effort required to develop software
is directly proportional to the size of the software and the complexity of the
project.
4. It uses three levels of estimation: Basic, Intermediate, and Detailed (also
called Advanced) COCOMO.
5. Basic COCOMO: This level of estimation is used for small software projects with
well-defined requirements. It estimates the effort of a project based on the number
of lines of code in the software. The formula for Basic COCOMO is: Effort =
a(KLOC)^b, where Effort is in person-months, KLOC is thousands of lines of code,
and a and b are constants that depend on the project type (organic, semi-detached,
or embedded).
6. Intermediate COCOMO: It takes into account various factors such as the
complexity of the project, the experience of the development team, and the
quality of the development environment. The formula for Intermediate COCOMO
is: Effort = a(KLOC)^b * EAF, where EAF is the effort adjustment factor.
7. Detailed (Advanced) COCOMO: It takes into account additional factors such as the
development team's capability, the size of the database, and the complexity of the
user interface. The cost drivers (effort multipliers) are applied to each module
and project phase separately, and the Intermediate formula Effort = a(KLOC)^b * EAF
is evaluated per module and summed.
List The Basic Principles Of Project Scheduling.
The following are the basic principles of project scheduling in software engineering:
1. Define project scope: This includes identifying the project goals, objectives,
deliverables, and the stakeholders involved.
2. Break down the project into smaller tasks: This will help to identify the critical
path and schedule the tasks in the correct sequence.
3. Identify dependencies: This will help to ensure that the tasks are scheduled in the
correct order and that there are no conflicts or delays.
4. Estimate task duration: This will help to develop a realistic project schedule and
identify any potential delays.
5. Allocate resources: Identify the resources required for each task, such as
personnel, equipment, and materials. Allocate resources according to their
availability and make sure that there are no resource conflicts.
6. Develop a project timeline: This will help to track progress and ensure that the
project is on schedule.
7. Monitor and control the project: This includes identifying any delays or problems
and taking corrective action to keep the project on track.
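Principles 2 to 4 (task breakdown, dependencies, and duration estimates) can be sketched as a longest-path computation over a task table; the tasks and durations below are invented for illustration:

```python
# Hypothetical task breakdown: task -> (duration in days, prerequisite tasks)
tasks = {
    "design":  (5,  []),
    "code":    (10, ["design"]),
    "test":    (4,  ["code"]),
    "docs":    (3,  ["design"]),
    "release": (1,  ["test", "docs"]),
}

def earliest_finish(task, memo={}):
    """Earliest finish time of `task`: its duration plus the latest
    finish time among its prerequisites (the critical-path rule)."""
    if task not in memo:
        duration, deps = tasks[task]
        memo[task] = duration + max((earliest_finish(d) for d in deps), default=0)
    return memo[task]

print(earliest_finish("release"))  # project duration along the critical path
```

Here the critical path is design -> code -> test -> release (5 + 10 + 4 + 1 = 20 days); the "docs" task finishes earlier and does not drive the schedule.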
Measure, Metrics, And Indicators
1. Measure, metrics, and indicators are all terms that are commonly used in software
engineering to describe different aspects of measurement and evaluation.
2. Measure : A measure is a quantifiable value that describes some aspect of the
software system or development process. For example, lines of code (LOC) is a
measure of the size of the software codebase.
3. Metrics : Metrics are a set of measures used to evaluate and quantify some aspect
of the software system or development process. Metrics are often used to track
and measure the progress of a project, identify areas for improvement, and
provide objective data for decision-making.
4. Indicators : Indicators are derived from metrics and provide an insight into the
performance or quality of the software system or development process. Indicators
are often used to monitor the state of the software project and to make informed
decisions about project planning, management, and improvement.
5. The goal of software metrics is to provide objective and measurable information
about the software system or development process.
6. The need for software metrics arises from the fact that software development is a
complex process involving many variables that can affect the quality and success
of the final product.
Types Of Testing Metrics
Testing metrics are used in software engineering to measure the quality and
effectiveness of the software testing process. The metrics are used to evaluate the
performance of the software testing process and identify areas for improvement.
1. Test Coverage Metrics: Test coverage metrics are used to measure the degree to
which the software testing process covers the requirements and functionalities of
the software.
2. Defect Metrics: Defect metrics are used to measure the number and severity of
defects found during the testing process.
3. Test Efficiency Metrics: Test efficiency metrics are used to measure the
efficiency and effectiveness of the software testing process.
4. Test Management Metrics: Test management metrics are used to measure the
overall effectiveness of the testing process.
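A minimal sketch of how two common testing metrics are computed, using invented project figures (the counts below are hypothetical):

```python
# Hypothetical project figures for illustrating two common testing metrics.
requirements_total = 120   # requirements in the specification
requirements_tested = 102  # requirements exercised by at least one test
defects_found = 45         # defects logged during testing
size_kloc = 15.0           # codebase size in thousands of lines

# Requirements coverage: fraction of requirements exercised by the test suite.
test_coverage = requirements_tested / requirements_total * 100  # percent

# Defect density: defects found per thousand lines of code.
defect_density = defects_found / size_kloc  # defects per KLOC

print(f"coverage: {test_coverage:.1f}%, density: {defect_density:.1f} defects/KLOC")
```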
Software Risk
1. In software engineering, software risk refers to the possibility of events or
circumstances that could negatively impact the software development process,
product quality, or project success.
2. Risk management is a critical component of software engineering, and software
risk must be identified, assessed, and managed throughout the software
development lifecycle.
Explain Different Categories Of Software Risk.
1. Technical Risks: These are risks related to the technical aspects of the software
system, including software design, architecture, coding, testing, and maintenance.
2. Operational Risks: These are risks related to the operation of the software system,
including its deployment, maintenance, and use.
3. Project Risks: These are risks related to the management and execution of the
software development project, including planning, scheduling, budgeting, and
resource allocation.
4. External Risks: These are risks that are beyond the control of the software
development team, including legal, regulatory, and market risks.
5. Business Risks: These are risks related to the business goals and objectives of the
software system, including its marketability, profitability, and sustainability.
6. Personnel Risks: These are risks related to the people involved in the software
development project, including their skills, experience, and availability.
What Is Risk Identification
Risk identification is a critical process in software engineering that involves
identifying potential risks or issues that could arise during the software development
life cycle. By identifying these risks, software engineers can take proactive steps to
mitigate or avoid them and ensure the successful delivery of the software product.
Methods Involved In Risk Identification
1. Brainstorming: This method encourages free-thinking and creativity, allowing
team members to share their insights and ideas.
2. SWOT analysis: This method helps to identify potential risks and issues that could
arise during the software development process.
3. Checklist-based approach: This approach ensures that all potential risks are
considered and evaluated.
4. Failure Mode and Effects Analysis (FMEA): FMEA is a structured approach to risk
identification that involves systematically analysing each component of the
software system to identify potential failure modes and their effects.
5. Expert opinion: This method is particularly useful when dealing with complex
systems or technologies.
6. Historical data analysis: This approach helps to identify common risks and issues
that could be present in the current project.
Seven Principles Of Risk Management
1. Risk identification: This involves understanding the project's objectives,
stakeholders, resources, and constraints.
2. Risk assessment: This involves analysing the likelihood of the risk occurring and
its potential impact on the project.
3. Risk prioritization: After assessing the risks, they should be prioritized based on
their potential impact on the project. High-priority risks should be addressed first.
4. Risk mitigation: This may involve implementing safeguards or contingency plans.
5. Risk monitoring: This may involve tracking metrics, analyzing data, or conducting
regular risk assessments.
6. Risk communication: Risk information should be communicated effectively to all
stakeholders to ensure that everyone is aware of the potential risks and
mitigation strategies.
7. Risk documentation: All risk information, including risk identification,
assessment, prioritization, mitigation strategies, and monitoring, should be
documented in a formal risk management plan. This plan should be updated
regularly throughout the project.
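Risk assessment and prioritization (principles 2 and 3) are commonly quantified as risk exposure = probability x impact; the risk register below is entirely hypothetical:

```python
# Hypothetical risk register: (risk, probability 0-1, impact in cost units)
risks = [
    ("key developer leaves",          0.3, 80),
    ("requirements change late",      0.6, 50),
    ("third-party API is deprecated", 0.1, 40),
]

def prioritize(risks):
    """Sort risks by exposure = probability * impact, highest first."""
    return sorted(risks, key=lambda r: r[1] * r[2], reverse=True)

for name, prob, impact in prioritize(risks):
    print(f"{name}: exposure = {prob * impact:.0f}")
```

In this example "requirements change late" ranks first (exposure 30) even though its single-event impact is smaller than losing a key developer, because it is twice as likely.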
Risk Monitoring
1. Risk monitoring is the ongoing process of tracking and reviewing the identified
risks and their mitigation strategies to ensure that they are still relevant and effective.
2. This involves regularly assessing the risk landscape, analyzing data, and evaluating
the effectiveness of the mitigation measures.
3. Risk monitoring is crucial in software engineering because risks can change over
time as the project progresses.
4. New risks may emerge, existing risks may become more or less likely, and
mitigation strategies may need to be adjusted.
5. By monitoring risks, project managers can take proactive measures to reduce the
impact of potential risks and avoid project failure.
6. Risk monitoring should be conducted regularly and systematically, using
established metrics and thresholds to measure progress and identify any new risks
or changes in existing risks.
7. Communication and collaboration among project team members are also
important to ensure that risk information is shared and acted upon in a timely
manner.
PDCA cycle
The PDCA (Plan-Do-Check-Act) cycle is a widely used model for continuous improvement
in software engineering and other fields. It is a four-step iterative process that helps
teams to plan, implement, and continuously improve their processes and products.
1. Plan: This step involves identifying the problem or opportunity for improvement,
analyzing the current situation, setting goals and objectives, and developing a plan to
achieve them.
2. Do: In this step, the plan is put into action. The team implements the plan, collects
data and information, and documents the process.
3. Check: Once the plan has been implemented, the team evaluates the results to
determine if the goals and objectives have been achieved.
4. Act: Based on the results of the check step, the team takes action to adjust the plan
and improve the process. This may involve making changes to the process,
implementing new procedures or tools, or developing additional training or resources.
Difference Between Software Quality Control And Software Quality Assurance
Software Quality Control (SQC):
1. SQC is focused on identifying and fixing defects in software after they have been
developed.
2. SQC involves a range of testing activities to evaluate software quality, including
unit testing, integration testing, system testing, and acceptance testing.
3. SQC involves a range of techniques for identifying defects, including manual
testing, automated testing, and code reviews.
4. SQC is often performed by a dedicated quality control team or by individual
testers.
Software Quality Assurance (SQA):
1. SQA is focused on preventing defects in software by implementing quality
processes throughout the software development lifecycle.
2. SQA involves a range of activities to ensure that software development processes
are followed correctly, including process audits, documentation reviews, and
training.
3. SQA involves a range of techniques for preventing defects, including formal
methods, modelling, and code inspections.
4. SQA is often performed by a dedicated quality assurance team or by individuals
who are responsible for ensuring that quality processes are followed.
State And Explain IEEE Standard SQA Plan
The IEEE standard SQA (Software Quality Assurance) plan is a document that outlines
the procedures and activities to be carried out during the software development
process to ensure that the final product meets certain quality standards. Here's a
brief overview of the key elements of the SQA plan:
1. Purpose: This section explains the overall objective of the SQA plan and how it fits
into the software development process.
2. Scope: This section defines the boundaries of the SQA plan, such as the software
components, phases of development, and organizational units involved.
3. References: This section lists any external standards, regulations, or guidelines
that the SQA plan must comply with.
4. Management: This section describes the roles and responsibilities of the SQA
team, as well as the procedures for managing changes to the SQA plan.
5. Documentation: This section outlines the documentation requirements for the
SQA plan and the software development process.
6. Reviews and Audits: This section describes the procedures for conducting reviews
and audits of the software development process and the software product.
CMM model
1. The Capability Maturity Model (CMM) is a framework developed at the Software
Engineering Institute (SEI) for assessing and improving the maturity of an
organization's software development process.
2. It defines five maturity levels through which an organization's process improves:
3. Level 1 - Initial: The process is ad hoc and occasionally chaotic; success
depends on individual effort.
4. Level 2 - Repeatable: Basic project management processes are established to
track cost, schedule, and functionality, so earlier successes can be repeated.
5. Level 3 - Defined: The software process is documented, standardized, and
integrated into a standard software process for the organization.
6. Level 4 - Managed: Detailed measures of the software process and product
quality are collected and used to quantitatively control the process.
7. Level 5 - Optimizing: Continuous process improvement is enabled by quantitative
feedback from the process and from piloting innovative ideas and technologies.
Difference between Static Testing and Dynamic Testing
Static Testing (verification)
1. It is performed in the early stage of the software development.
2. In static testing, the code is not executed.
3. Static testing prevents defects.
4. Static testing is performed before code deployment.
5. Static testing is less costly.
6. Static testing involves checklists for the testing process.
7. It includes walkthroughs, code reviews, and inspections.
8. It generally takes a shorter time.
9. It can discover a variety of bugs.
10. Static testing may achieve 100% statement coverage in comparably less time.
Dynamic Testing (Validation)
1. It is performed at the later stage of the software development.
2. In dynamic testing, the code is executed.
3. Dynamic testing finds and fixes defects.
4. Dynamic testing is performed after code deployment.
5. Dynamic testing is more costly.
6. Dynamic testing involves test cases for the testing process.
7. It involves functional and non-functional tests.
8. It usually takes a longer time, as it involves running several tests.
9. It exposes only those bugs that are observable through execution, and hence
discovers a limited class of bugs.
10. Dynamic testing typically achieves less than 50% statement coverage.
Bug Life Cycle
The bug life cycle, also known as the defect life cycle, describes the stages that a
software bug goes through from the time it is discovered to the time it is resolved.
1. New: The bug is entered into a bug tracking system, which assigns it a unique
identifier and other important information such as the severity, priority, and
description of the bug.
2. Assigned: The developer may request additional information or clarification from
the tester or user who reported the bug.
3. Open: Once the developer has reproduced the bug and confirmed its existence,
the bug is marked as "open" and assigned to the appropriate team for resolution.
4. In Progress: In this stage, the developer begins working on a fix for the bug. The
developer may write code, run tests, and make changes to the software to
resolve the issue.
5. Fixed: When the developer has successfully fixed the bug, the bug is marked as
"fixed" and the developer creates a patch or update to the software.
6. Verified: The tester or user confirms that the bug has been fixed and performs
regression testing to ensure that the fix has not introduced any new bugs.
7. Closed: Finally, once the bug has been verified and confirmed as fixed, it is
marked as "closed" and removed from the bug tracking system.
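The stages above can be sketched as a small state machine; the state names and transitions below simply encode the life cycle described in this section (the back-edge from "fixed" to "in_progress" is an assumed path for a fix that fails verification):

```python
# Bug life cycle as a state machine: state -> set of allowed next states.
TRANSITIONS = {
    "new":         {"assigned"},
    "assigned":    {"open"},
    "open":        {"in_progress"},
    "in_progress": {"fixed"},
    "fixed":       {"verified", "in_progress"},  # failed fix goes back to work
    "verified":    {"closed"},
    "closed":      set(),
}

class Bug:
    def __init__(self, bug_id):
        self.bug_id = bug_id
        self.state = "new"   # every bug starts in the "new" stage

    def move_to(self, new_state):
        # Reject transitions the life cycle does not allow.
        if new_state not in TRANSITIONS[self.state]:
            raise ValueError(f"cannot go from {self.state} to {new_state}")
        self.state = new_state

bug = Bug(101)
for stage in ["assigned", "open", "in_progress", "fixed", "verified", "closed"]:
    bug.move_to(stage)
print(bug.state)  # closed
```

Encoding the cycle this way makes illegal jumps (e.g. "new" straight to "closed") fail loudly instead of silently corrupting the tracker's state.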
Testing Principles
Testing principles are the fundamental concepts that guide the process of software
testing. Following these principles can help ensure that testing is effective, efficient,
and produces high-quality software.
1. Exhaustive testing is impossible: It is not possible to test every possible input and
scenario for a software application. Therefore, testing efforts should be focused
on areas that are likely to be most problematic or that pose the greatest risk.
2. Early testing: Testing should be incorporated early in the software development
life cycle. This can help identify and fix issues early, reducing the cost of fixing
defects later in the process.
3. Defect clustering: Defects tend to cluster in certain areas of a software
application. Testing efforts should be focused on those areas to find and fix
defects.
4. Pesticide paradox: Repetitive testing can lead to a decrease in the number of
defects found. To avoid this, testing should be varied and tailored to new or
different scenarios.
5. Testing is context dependent: Different software applications have different
requirements and environments. Testing should be tailored to the specific
context of the software application being tested.
6. Absence of errors fallacy: The absence of errors found in testing does not
necessarily indicate that the software is defect-free. Testing can only provide an
indication of the quality of the software, and there may still be undiscovered
defects.
Difference Between Black Box Testing And White Box Testing
Black Box Testing
1. It is a testing approach in which the software is tested without knowledge of
the internal structure of the program or application.
2. It is also known as data-driven testing, closed-box testing, or functional
testing.
3. The main objective of this testing is to check the functionality of the system
under test.
4. This type of testing is ideal for higher levels of testing like System Testing
and Acceptance Testing.
5. Programming knowledge is not needed to perform Black Box testing.
6. Implementation knowledge is not required to do Black Box testing.
White Box testing
1. It is a testing approach in which internal structure is known to the tester.
2. It is also called structural testing, clear box testing, code-based testing, or glass box
testing.
3. Internal working is known, and the tester can test accordingly.
4. It is best suited for lower levels of testing like Unit Testing and Integration
Testing.
5. Programming knowledge is required to perform White Box testing.
6. A complete understanding of the implementation is needed to perform White Box
testing.
